Show simple item record

dc.creator: Fragkou E., Koultouki M., Katsaros D.
dc.date.accessioned: 2023-01-31T07:38:54Z
dc.date.available: 2023-01-31T07:38:54Z
dc.date.issued: 2022
dc.identifier: 10.1007/s10489-022-04195-8
dc.identifier.issn: 0924-669X
dc.identifier.uri: http://hdl.handle.net/11615/71791
dc.description.abstract: Multilayer neural architectures with a complete bipartite topology have very high training time and memory requirements. Solid evidence suggests that not every connection contributes to performance; thus, network sparsification has emerged. We draw inspiration from the topology of real biological neural networks, which are scale-free. We depart from the usual complete bipartite topology between layers: instead, we start from structured sparse topologies known in network science, e.g., scale-free, and end up again in a structured sparse topology, e.g., scale-free. Moreover, we apply smart link-rewiring methods to construct these sparse topologies. The number of trainable parameters is thus reduced, which directly lowers training time and memory requirements. We design several variants of our concept (SF2SFrand, SF2SFba, SF2SF5, SF2SW, and SW2SW), treating the neural network topology as scale-free or small-world in each case. We conduct experiments by cutting 30% of the network's links in every epoch and varying the method that replaces them. Our winning method, the one starting from a scale-free topology and producing a scale-free-like topology (SF2SFrand), reduces training time without sacrificing neural network accuracy while also cutting the memory needed to store the network. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
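To make the scheme in the abstract concrete: the paper trains sparse layers and, between epochs, cuts a fraction of the links and grows replacements. The following is a minimal PyTorch sketch only, not the authors' implementation; the layer name SparseLinear, the magnitude-based pruning criterion, and the uniformly random regrowth rule are all my assumptions (random regrowth loosely mirrors the SF2SFrand variant, while the paper's other variants use structure-preserving replacement rules, e.g., Barabási–Albert-style growth for SF2SFba).

import torch
import torch.nn as nn
import torch.nn.functional as F

REWIRE_FRACTION = 0.30  # the abstract reports rewiring 30% of links per epoch


class SparseLinear(nn.Module):
    """Linear layer whose connectivity is restricted by a binary mask,
    emulating a sparse bipartite topology between two layers."""

    def __init__(self, in_features: int, out_features: int, density: float = 0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Uniformly random initial mask; the paper instead starts from a
        # structured (scale-free or small-world) topology.
        mask = (torch.rand(out_features, in_features) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Inactive links are zeroed out, so they cost no representational power.
        return F.linear(x, self.weight * self.mask, self.bias)

    @torch.no_grad()
    def rewire(self, fraction: float = REWIRE_FRACTION) -> None:
        """Cut `fraction` of the active links and grow the same number of
        new ones, keeping the overall density constant across epochs."""
        flat_mask = self.mask.view(-1)
        active = flat_mask.bool()
        n_rewire = int(fraction * int(active.sum()))
        if n_rewire == 0:
            return
        # Prune the weakest active links (magnitude criterion is an assumption).
        scores = self.weight.abs().view(-1).clone()
        scores[~active] = float("inf")  # never "prune" an already-inactive slot
        drop = torch.topk(scores, n_rewire, largest=False).indices
        flat_mask[drop] = 0.0
        # Regrow at random inactive positions (just-pruned slots may be
        # re-selected by chance; a sketch-level simplification).
        inactive = (flat_mask == 0).nonzero(as_tuple=True)[0]
        grow = inactive[torch.randperm(inactive.numel())[:n_rewire]]
        flat_mask[grow] = 1.0
        self.weight.view(-1)[grow] = 0.0  # new links start from zero

In use, one would call rewire() on each sparse layer between epochs, e.g. after the optimizer step loop for the epoch finishes; because the mask stays equally sparse throughout, both the per-step compute and the storage needed for the trained network shrink in proportion to the density, which is the effect the abstract reports.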
dc.language.iso: en
dc.source: Applied Intelligence
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85140338900&doi=10.1007%2fs10489-022-04195-8&partnerID=40&md5=e748b48ee30230cc85a8a1a3bb823456
dc.subject: Deep learning
dc.subject: Multilayer neural networks
dc.subject: Feed forward neural networks
dc.subject: Memory requirements
dc.subject: Model reduction
dc.subject: Network science
dc.subject: Neural architectures
dc.subject: Neural-networks
dc.subject: Resource-constrained devices
dc.subject: Scale-free
dc.subject: Training time
dc.subject: Topology
dc.subject: Springer
dc.title: Model reduction of feed forward neural networks for resource-constrained devices
dc.type: journalArticle


Files in this item


There are no files associated with this item.

This item appears in the following collection(s)
