Show simple item record

dc.creator    Harth N., Anagnostopoulos C., Voegel H.-J., Kolomvatsos K.    en
dc.date.accessioned    2023-01-31T08:27:55Z
dc.date.available    2023-01-31T08:27:55Z
dc.date.issued    2022
dc.identifier    10.1016/j.future.2022.03.030
dc.identifier.issn    0167-739X
dc.identifier.uri    http://hdl.handle.net/11615/73912
dc.description.abstract    Performing computation on devices in Internet of Things (IoT) and Edge Computing (EC) environments is subject to bandwidth, storage, and energy constraints, as most of these devices have limited resources. Using the computational capacity of such devices, referred to as Edge Devices (EDs), to perform Machine Learning (ML) and analytics tasks locally enables accurate, real-time predictions at the network edge. The data generated locally on EDs are contextual and, for resource-efficiency reasons, should not be distributed over the network. In this context, locally trained models must adapt to concept drift and potential changes in the data distribution to guarantee high prediction accuracy. We address the importance of personalization and generalization in EDs for adapting to evolving data distributions. In this work, we propose a methodology that relies on Federated Learning (FL) principles to ensure the generalization capability of locally trained ML models. Moreover, we extend FL with Optimal Stopping Theory (OST) and adaptive weighting over personalized and generalized models to support optimal model-selection decision making. We contribute a personalized, efficient learning methodology for EC environments that swiftly selects and switches models inside EDs to provide accurate predictions in changing environments. A theoretical analysis of the optimality and uniqueness of the proposed solution is provided. Additionally, a comprehensive comparative performance evaluation over real contextual data streams tests our methodology against current FL and centralized-learning approaches in the literature, using information-loss and prediction-accuracy metrics. Our methodology improves prediction quality over FL-based approaches by at least 50%. © 2022 Elsevier B.V.    en
dc.language.iso    en    en
dc.source    Future Generation Computer Systems    en
dc.source.uri    https://www.scopus.com/inward/record.uri?eid=2-s2.0-85129465235&doi=10.1016%2fj.future.2022.03.030&partnerID=40&md5=7d131976b291917f2ba5e996dc3e712f
dc.subject    Computation theory    en
dc.subject    Decision making    en
dc.subject    Decision theory    en
dc.subject    Digital storage    en
dc.subject    Forecasting    en
dc.subject    Internet of things    en
dc.subject    Learning systems    en
dc.subject    Computing environments    en
dc.subject    Data distribution    en
dc.subject    Edge computing    en
dc.subject    Federated learning    en
dc.subject    Local learning    en
dc.subject    Network edges    en
dc.subject    Optimal stopping theories    en
dc.subject    Personalizations    en
dc.subject    Prediction accuracy    en
dc.subject    Quality-aware analytic    en
dc.subject    Predictive analytics    en
dc.subject    Elsevier B.V.    en
dc.title    Local & Federated Learning at the network edge for efficient predictive analytics    en
dc.type    journalArticle    en
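The abstract describes adaptive weighting over a personalized (locally trained) and a generalized (federated) model, combined with an OST-based decision to select or switch models on an ED. A minimal, hypothetical Python sketch of such a scheme, using an exponentiated-gradient weight update and an illustrative commitment threshold (the class, method names, `eta`, and `threshold` are assumptions for illustration, not the paper's actual algorithm):

```python
import math


class AdaptiveModelSelector:
    """Blend a personalized and a generalized model's predictions,
    adapting the blend weights from recent squared errors.

    Illustrative sketch only; the paper's OST-based rule and update
    scheme may differ substantially.
    """

    def __init__(self, eta: float = 0.5):
        self.eta = eta          # learning rate for the weight update
        self.w_personal = 0.5   # weight of the personalized model
        self.w_general = 0.5    # weight of the generalized (FL) model

    def predict(self, y_personal: float, y_general: float) -> float:
        # Weighted ensemble prediction from the two models' outputs.
        return self.w_personal * y_personal + self.w_general * y_general

    def update(self, y_personal: float, y_general: float, y_true: float) -> None:
        # Exponentiated-gradient style update: the model with lower
        # squared error receives exponentially more weight; weights
        # are renormalized to sum to 1.
        loss_p = (y_personal - y_true) ** 2
        loss_g = (y_general - y_true) ** 2
        wp = self.w_personal * math.exp(-self.eta * loss_p)
        wg = self.w_general * math.exp(-self.eta * loss_g)
        total = wp + wg
        self.w_personal, self.w_general = wp / total, wg / total

    def select(self, threshold: float = 0.8) -> str:
        # OST-flavoured switching decision (illustrative): commit to a
        # single model once its weight crosses the threshold, otherwise
        # keep serving the ensemble.
        if self.w_personal >= threshold:
            return "personal"
        if self.w_general >= threshold:
            return "general"
        return "ensemble"
```

On a stream where the personalized model consistently outperforms the generalized one, repeated `update` calls shift almost all weight to it, at which point `select` commits to serving the personalized model alone.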


Files in this item

Files    Size    Format    View

There are no files associated with this item.

This item appears in the following Collection(s)
