
dc.creator: Antaris S., Rafailidis D., Girdzijauskas S.
dc.date.accessioned: 2023-01-31T07:31:59Z
dc.date.available: 2023-01-31T07:31:59Z
dc.date.issued: 2021
dc.identifier: 10.1007/s13278-021-00816-1
dc.identifier.issn: 1869-5450
dc.identifier.uri: http://hdl.handle.net/11615/70642
dc.description.abstract: Graph representation learning on dynamic graphs has become an important task in several real-world applications, such as recommender systems and email spam detection. To efficiently capture the evolution of a graph, representation learning approaches employ deep neural networks with a large number of parameters to train. Due to the large model size, such approaches have high online inference latency and are therefore challenging to deploy in industrial settings with a vast number of users/nodes. In this study, we propose DynGKD, a distillation strategy to transfer knowledge from a large teacher model to a small student model with low inference latency, while achieving high prediction accuracy. We first study different distillation loss functions to separately train the student model with various types of information from the teacher model. In addition, we propose a hybrid distillation strategy for evolving graph representation learning that combines the teacher's different types of information. Our experiments with five publicly available datasets demonstrate the superiority of our proposed model against several baselines, with an average relative drop of 40.60% in terms of RMSE in the link prediction task. Moreover, our DynGKD model achieves a compression ratio of 21:100 and accelerates inference by a factor of ×30 when compared with the teacher model. For reproduction purposes, we make our datasets and implementation publicly available at https://github.com/stefanosantaris/DynGKD. © 2021, The Author(s).
dc.language.iso: en
dc.source: Social Network Analysis and Mining
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85117587192&doi=10.1007%2fs13278-021-00816-1&partnerID=40&md5=ebfad3185282068cc17e61587700c6f3
dc.subject: Deep neural networks
dc.subject: Distillation
dc.subject: Dynamic graph
dc.subject: Evolving graphs
dc.subject: Graph representation
dc.subject: Graph representation learning
dc.subject: Knowledge distillation
dc.subject: Neural-networks
dc.subject: ON dynamics
dc.subject: Real-world
dc.subject: Student Modeling
dc.subject: Teacher models
dc.subject: Cell proliferation
dc.subject: Springer
dc.title: Knowledge distillation on neural networks for evolving graphs
dc.type: journalArticle
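
Note: The abstract above describes combining several kinds of teacher information into a hybrid distillation objective for a small student model. As a rough illustration only, the sketch below shows a generic hybrid distillation loss in PyTorch that mixes a supervised link-prediction term with response-based (teacher predictions) and feature-based (teacher embeddings) distillation terms. The function name, the alpha/beta weights, and the use of MSE throughout are assumptions for illustration; the actual DynGKD losses and hyper-parameters are defined in the linked repository.

    import torch
    import torch.nn.functional as F

    def hybrid_distillation_loss(student_pred, teacher_pred,
                                 student_emb, teacher_emb,
                                 target, alpha=0.5, beta=0.5):
        """Illustrative hybrid loss (not the paper's exact formulation):
        supervised link-prediction error plus two distillation terms."""
        # Supervised regression loss against the ground-truth link weights.
        task_loss = F.mse_loss(student_pred, target)
        # Response-based distillation: match the teacher's predictions.
        pred_distill = F.mse_loss(student_pred, teacher_pred.detach())
        # Feature-based distillation: match the teacher's node embeddings.
        emb_distill = F.mse_loss(student_emb, teacher_emb.detach())
        return task_loss + alpha * pred_distill + beta * emb_distill

In a setup like this, the loss would be evaluated per graph snapshot while the teacher's outputs are held fixed (detached), so only the small student model is updated during training.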

