Show simple item record

dc.creator: Georgakopoulos S.V., Kottari K., Delibasis K., Plagianakos V.P., Maglogiannis I.
dc.date.accessioned: 2023-01-31T07:40:18Z
dc.date.available: 2023-01-31T07:40:18Z
dc.date.issued: 2018
dc.identifier: 10.1016/j.neucom.2017.08.071
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/11615/72060
dc.description.abstract: Convolutional neural networks (CNNs) are used frequently in several computer vision applications. In this work, we present a methodology for pose classification of binary human silhouettes using CNNs, enhanced with image features based on Zernike moments, which are modified for fisheye images. The training set consists of synthetic images that are generated from three-dimensional (3D) human models, using the calibration model of an omni-directional (fisheye) camera. Testing is performed using real images, also acquired by omni-directional cameras. Here, we employ our previously proposed geodesically corrected Zernike moments (GZMI) and confirm their merit as stand-alone descriptors of calibrated fisheye images. Subsequently, we explore the efficiency of transfer learning from the previously trained model with synthetically generated silhouettes to the problem of real pose classification, by continuing the training of the already trained network using a few frames of annotated real silhouettes. Furthermore, we propose an enhanced architecture that combines the calculated GZMI features of each image with the features generated at the CNN's last convolutional layer, both feeding the first hidden layer of the traditional neural network that exists at the end of the CNN. Testing is performed using synthetically generated silhouettes as well as real ones. Results show that the proposed enhancement of the CNN architecture, combined with transfer learning, improves pose classification accuracy for both the synthetic and the real silhouette images. © 2017
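The abstract describes a fusion architecture in which hand-crafted GZMI descriptors are concatenated with the features from the CNN's last convolutional layer before both feed the first hidden layer of the fully connected classifier. The following is a minimal NumPy sketch of that fusion step only; the feature dimensions (128 conv features, 25 GZMI descriptors, 64 hidden units) and random weights are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for one silhouette image:
# 128 features from the CNN's last convolutional layer,
# 25 geodesically corrected Zernike moment (GZMI) descriptors.
conv_feat = rng.standard_normal(128)
gzmi_feat = rng.standard_normal(25)

# Fusion: concatenate both feature vectors and feed the first
# hidden layer of the dense classifier at the end of the CNN.
fused = np.concatenate([conv_feat, gzmi_feat])    # shape (153,)
W = rng.standard_normal((64, fused.size)) * 0.01  # hidden-layer weights
b = np.zeros(64)                                  # hidden-layer biases
hidden = np.maximum(0.0, W @ fused + b)           # ReLU activation
```

In a trained model the weights `W`, `b` would be learned jointly with the convolutional layers, so the network can weigh the learned conv features against the analytic GZMI descriptors.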
dc.language.iso: en
dc.source: Neurocomputing
dc.source.uri: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85034859046&doi=10.1016%2fj.neucom.2017.08.071&partnerID=40&md5=d81b9b04bdb1d176a7f594d84f57374d
dc.subject: Cameras
dc.subject: Computer vision
dc.subject: Convolution
dc.subject: Convolutional neural networks
dc.subject: Feature extraction
dc.subject: Gesture recognition
dc.subject: Multilayer neural networks
dc.subject: Network architecture
dc.subject: Transfer learning
dc.subject: Well testing
dc.subject: Calibration model
dc.subject: Computer vision applications
dc.subject: Fish-eye cameras
dc.subject: Omni-directional
dc.subject: Omnidirectional cameras
dc.subject: Pose classifications
dc.subject: Three-dimensional (3-d)
dc.subject: Zernike moments
dc.subject: Image enhancement
dc.subject: Article
dc.subject: artificial neural network
dc.subject: body image
dc.subject: body position
dc.subject: calibration
dc.subject: convolutional neural network
dc.subject: geodesically corrected Zernike moment
dc.subject: human
dc.subject: image analysis
dc.subject: machine learning
dc.subject: measurement accuracy
dc.subject: priority journal
dc.subject: recognition
dc.subject: three dimensional imaging
dc.subject: transfer of learning
dc.subject: Elsevier B.V.
dc.title: Pose recognition using convolutional neural networks on omni-directional images
dc.type: journalArticle


Files in this item

There are no files associated with this item.

