Show simple item record

dc.creator: Tirana J., Pappas C., Chatzopoulos D., Lalis S., Vavalis M.
dc.date.accessioned: 2023-01-31T10:08:35Z
dc.date.available: 2023-01-31T10:08:35Z
dc.date.issued: 2022
dc.identifier: 10.1145/3539491.3539594
dc.identifier.isbn: 9781450394048
dc.identifier.uri: http://hdl.handle.net/11615/79711
dc.description.abstract: Mobile devices generate and store voluminous data that is valuable for training machine learning (ML) models. Decentralized ML model training approaches eliminate the need to share such privacy-sensitive data with centralized entities by having each data owner that participates in an ML model training process compute updates locally and share them with other entities. However, the size of state-of-the-art ML models and the computational demands of producing local updates prohibit mobile devices from participating in the decentralized training of such models. Split learning techniques can be combined with decentralized model training protocols to enable the involvement of mobile devices in model training while preserving the privacy of their data. Mobile devices can produce local updates by splitting the model they are training into multiple parts and delegating the processing of the computationally demanding parts to compute nodes. This work examines the impact of the number of available compute nodes and of how they interact. We split the ResNet101 ML model into 3, 4, and 5 parts, keep the first and last parts on the data owner, and assign the processing of the middle parts to compute nodes. Additionally, we analyze the training time when the compute nodes assist multiple data owners in parallel or are responsible for different model parts, forming a pipeline. © 2022 Owner/Author.
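The splitting scheme described in the abstract can be sketched as follows. This is a minimal, illustrative Python example, not the paper's implementation; the function names, the even-partition policy, and the round-robin assignment of middle parts to compute nodes are assumptions made for the sketch. It partitions a model's layers into k contiguous parts, keeps the first and last parts on the data owner, and hands the middle parts to compute nodes.

```python
def partition_layers(num_layers, num_parts):
    """Split layer indices [0, num_layers) into num_parts contiguous parts
    of near-equal size (earlier parts absorb the remainder)."""
    base, extra = divmod(num_layers, num_parts)
    parts, start = [], 0
    for i in range(num_parts):
        size = base + (1 if i < extra else 0)
        parts.append(list(range(start, start + size)))
        start += size
    return parts

def assign_parts(parts, compute_nodes):
    """First and last parts stay on the data owner; middle parts are
    distributed over the compute nodes round-robin."""
    assignment = {0: "data_owner", len(parts) - 1: "data_owner"}
    for j, part_idx in enumerate(range(1, len(parts) - 1)):
        assignment[part_idx] = compute_nodes[j % len(compute_nodes)]
    return assignment

# Example: a 101-layer model (e.g., ResNet101) split into 4 parts,
# with two compute nodes handling the two middle parts.
parts = partition_layers(101, 4)
roles = assign_parts(parts, ["node_A", "node_B"])
```

With 4 parts this yields the arrangement the abstract describes: the data owner runs the input and output parts, so raw data and labels never leave the device, while the two computationally demanding middle parts run on the compute nodes.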
dc.language.iso: en
dc.source: EMDL 2022 - Proceedings of the 6th International Workshop on Embedded and Mobile Deep Learning, Part of MobiSys 2022
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85135388352&doi=10.1145%2f3539491.3539594&partnerID=40&md5=34f9cc2fd5204594896863c2ceb56d65
dc.subject: Learning systems
dc.subject: Privacy-preserving techniques
dc.subject: Decentralised
dc.subject: Decentralized AI
dc.subject: Machine learning models
dc.subject: Model training
dc.subject: Privacy aware
dc.subject: Privacy-aware AI
dc.subject: Split learning
dc.subject: Training machines
dc.subject: Voluminous data
dc.subject: Sensitive data
dc.subject: Association for Computing Machinery, Inc
dc.title: The role of compute nodes in privacy-aware decentralized AI
dc.type: conferenceItem


Files in this item


There are no files associated with this item.

