• Efficient sparsification of dense circuit matrices in model order reduction 

      Antoniadis C., Evmorfopoulos N., Stamoulis G. (2019)
      The integration of more components into ICs due to the ever increasing technology scaling has led to very large parasitic networks consisting of millions of nodes, which have to be simulated at many time points or frequencies to ...
    • Feed Forward Neural Network Sparsification with Dynamic Pruning 

      Chouliaras A., Fragkou E., Katsaros D. (2021)
      A recent hot research topic in deep learning concerns the reduction of the model size of a neural network by pruning, in order to minimize its training and inference cost and thus be capable of running on devices with ...
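As context for the pruning approach surveyed above, a minimal sketch of magnitude-based weight pruning in NumPy is shown below; this is a generic illustration of the idea, not the dynamic pruning scheme proposed in the paper:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    `sparsity` is the fraction of entries to remove. This simple
    threshold scheme is illustrative only, not the paper's method.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Prune 80% of a random dense layer's weights
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_sparse = magnitude_prune(W, 0.8)
print(1.0 - np.count_nonzero(W_sparse) / W_sparse.size)  # roughly 0.8
```

Pruned weights can then be stored in a sparse format, reducing both memory footprint and multiply-accumulate count at inference time.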
    • An Overview of Enabling Federated Learning over Wireless Networks 

      Foukalas F., Tziouvaras A., Tsiftsis T.A. (2021)
      In this paper, we provide an overview of enabling federated learning (FL) techniques over wireless networks. More specifically, we present key techniques such as model compression, quantization and sparsification that ...
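One widely used sparsification technique in this setting is top-k gradient sparsification, where each client uploads only its largest-magnitude gradient entries to cut uplink traffic. The sketch below illustrates the general idea in NumPy; it is not tied to any specific method from the survey:

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    A common communication-reduction scheme in federated learning;
    illustrative sketch, not a specific method from the survey.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
print(top_k_sparsify(g, 2))  # keeps only -2.0 and 3.0
```

In practice the surviving values are often transmitted together with their indices, and the dropped residual is accumulated locally for the next round.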
    • Selective inversion of inductance matrix for large-scale sparse RLC simulation 

      Apostolopoulou I., Daloukas K., Evmorfopoulos N., Stamoulis G. (2014)
      The inverse of the inductance matrix (reluctance matrix) is amenable to sparsification to a much greater extent than the inductance matrix itself. However, the inversion and subsequent truncation of a large dense inductance ...
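The core observation above — that the inverse of the inductance matrix decays quickly off the diagonal and so tolerates aggressive truncation — can be illustrated with a small dense example. The sketch below uses plain inversion plus thresholding on a synthetic SPD matrix; it is only a toy illustration, not the paper's selective-inversion algorithm:

```python
import numpy as np

# Build a small dense, diagonally dominant SPD matrix standing in for
# an inductance matrix L (synthetic data, for illustration only).
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
L = A @ A.T + n * np.eye(n)

# The "reluctance" matrix K = L^{-1}; truncate entries below a
# relative threshold to obtain a sparse approximation.
K = np.linalg.inv(L)
tol = 0.05 * np.max(np.abs(K))
K_sparse = np.where(np.abs(K) >= tol, K, 0.0)

print(np.count_nonzero(K_sparse), "of", K.size, "entries kept")
```

For the large matrices targeted by the paper, explicitly forming `np.linalg.inv(L)` is exactly the cost the selective-inversion approach avoids; this sketch only demonstrates the truncation step itself.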