Show simple item record

dc.creator: Chouliaras A., Fragkou E., Katsaros D.
dc.date.accessioned: 2023-01-31T07:45:41Z
dc.date.available: 2023-01-31T07:45:41Z
dc.date.issued: 2021
dc.identifier: 10.1145/3503823.3503826
dc.identifier.isbn: 9781450395557
dc.identifier.uri: http://hdl.handle.net/11615/72812
dc.description.abstract: A recent hot research topic in deep learning concerns reducing the model size of a neural network by pruning, in order to minimize its training and inference cost so that it can run on devices with memory constraints. In this paper, we employ a pruning technique to sparsify a Multi-Layer Perceptron (MLP) during training, in which the number of topology connections pruned and restored is not fixed but follows one of three rules: the Linear Decreasing Variation (LDV) rule, the Oscillating Variation (OSV) rule, or the Exponential Decay (EXD) rule. We conducted experiments on three MLP network topologies, implemented with Keras, using the Fashion-MNIST dataset, and the results showed that the EXD method is the clear winner: with it, our proposed sparse network converges faster than its dense counterpart while achieving approximately the same high accuracy (around 90%). Furthermore, it is shown that the memory footprint of the aforementioned sparse networks is at least 95% smaller than that of the dense version, owing to the removed weights. Finally, we present an improved SET implementation in Keras that uses the Callbacks API, making it more efficient. © 2021 ACM.
dc.language.iso: en
dc.source: ACM International Conference Proceeding Series
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85125615733&doi=10.1145%2f3503823.3503826&partnerID=40&md5=8003fabf720ae03ea7e354d63959da4f
dc.subject: Deep learning
dc.subject: Network layers
dc.subject: Topology
dc.subject: Dynamic pruning
dc.subject: Exponential decays
dc.subject: Feed forward neural networks
dc.subject: Hot research topics
dc.subject: Keras
dc.subject: Multilayer perceptrons
dc.subject: Neural network sparsification
dc.subject: Neural-networks
dc.subject: Sparsification
dc.subject: Multilayer neural networks
dc.subject: Association for Computing Machinery
dc.title: Feed Forward Neural Network Sparsification with Dynamic Pruning
dc.type: conferenceItem
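The abstract describes a SET-style prune-and-regrow scheme in which the number of connections changed per epoch follows a schedule such as the Exponential Decay (EXD) rule. The paper only names these rules, so the sketch below is a minimal, hypothetical illustration (the function names, the schedule form, and the regrowth range are assumptions, not the authors' implementation): an EXD schedule plus one magnitude-based prune step followed by random regrowth on a flat list of weights.

```python
import math
import random

def exd_fraction(initial_frac, decay_rate, epoch):
    """Exponential Decay (EXD) schedule for the prune/regrow fraction.
    The exponential form is an assumption; the paper only names the rule."""
    return initial_frac * math.exp(-decay_rate * epoch)

def prune_and_regrow(weights, frac, rng):
    """One SET-style step on a flat weight list: zero out the smallest-
    magnitude active weights, then re-activate the same number of zeroed
    positions with fresh small random values, keeping sparsity constant."""
    active = [i for i, w in enumerate(weights) if w != 0.0]
    n = int(len(active) * frac)
    if n == 0:
        return weights
    # Prune: drop the n active weights with the smallest magnitude.
    to_prune = sorted(active, key=lambda i: abs(weights[i]))[:n]
    for i in to_prune:
        weights[i] = 0.0
    # Regrow: pick n random inactive positions and give them small weights.
    inactive = [i for i, w in enumerate(weights) if w == 0.0]
    for i in rng.sample(inactive, n):
        weights[i] = rng.uniform(-0.1, 0.1)
    return weights
```

In a Keras setting this step would typically run once per epoch inside a custom `Callback` (as the abstract suggests), with `frac` supplied by `exd_fraction` so that fewer connections are rewired as training converges.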


Files in this item

Files | Size | Format | View

There are no files associated with this item.

This item appears in the following collection(s)
