  • University of Thessaly Institutional Repository
  • Scientific Publications of University of Thessaly Members (ΕΔΠΘ)
  • Publications in journals, conferences, book chapters, etc.
The role of compute nodes in privacy-aware decentralized AI

Author
Tirana J., Pappas C., Chatzopoulos D., Lalis S., Vavalis M.
Date
2022
Language
en
DOI
10.1145/3539491.3539594
Keyword
Learning systems
Privacy-preserving techniques
Decentralized AI
Machine learning models
Model training
Privacy-aware AI
Split learning
Sensitive data
Voluminous data
Publisher
Association for Computing Machinery, Inc.
Abstract
Mobile devices generate and store voluminous data valuable for training machine learning (ML) models. Decentralized ML model training approaches eliminate the need to share such privacy-sensitive data with centralized entities: each data owner participating in the training process computes updates locally and shares them with other entities. However, the size of state-of-the-art ML models and the computational cost of producing local updates prohibit mobile devices from participating in the decentralized training of such models. Split learning techniques can be combined with decentralized model training protocols to involve mobile devices in model training while preserving the privacy of their data. Mobile devices can produce local updates by splitting the model they are training into multiple parts and delegating the processing of the computationally demanding parts to compute nodes. This work examines the impact of the number of available compute nodes and their interaction. We split the ResNet101 ML model into 3, 4, and 5 parts, keep the first and the last part on the data owner, and assign the processing of the middle parts to compute nodes. Additionally, we analyze the training time when the compute nodes assist multiple data owners in parallel or are responsible for different model parts, forming a pipeline. © 2022 Owner/Author.
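The partitioning scheme described in the abstract (cut a layered model into contiguous parts, keep the first and last part on the data owner, delegate the middle parts to compute nodes) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function names, the even-split policy, and the layer stand-ins are all assumptions.

```python
def split_model(layers, num_parts):
    """Partition a list of layers into `num_parts` contiguous segments,
    balancing segment sizes (illustrative policy, not from the paper)."""
    if not 3 <= num_parts <= len(layers):
        raise ValueError("need at least 3 parts and one layer per part")
    base, extra = divmod(len(layers), num_parts)
    parts, start = [], 0
    for i in range(num_parts):
        end = start + base + (1 if i < extra else 0)
        parts.append(layers[start:end])
        start = end
    return parts


def assign_parts(parts):
    """First and last segments stay on the data owner; each middle
    segment is delegated to its own compute node."""
    placement = {"data_owner": [parts[0], parts[-1]]}
    for i, part in enumerate(parts[1:-1], start=1):
        placement[f"compute_node_{i}"] = part
    return placement


# Stand-in for the blocks of a deep model such as ResNet101.
layers = [f"block{i}" for i in range(10)]
placement = assign_parts(split_model(layers, 4))
```

With 4 parts, two compute nodes each hold one middle segment, so a forward pass flows data owner → node 1 → node 2 → data owner, which is the pipeline arrangement the abstract evaluates.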
URI
http://hdl.handle.net/11615/79711
Collections
  • Publications in journals, conferences, book chapters, etc.