Geodesically-corrected Zernike descriptors for pose recognition in omni-directional images

Author
Delibasis K.K., Georgakopoulos S.V., Kottari K., Plagianakos V.P., Maglogiannis I.
Date
2016
Language
en
DOI
10.3233/ICA-160511
Subject
Artificial intelligence
Cameras
Computer vision
Face recognition
Image processing
Pattern recognition
Pixels
Time domain analysis
Fish-eye cameras
Image descriptors
Omni-directional
Pose recognition
Zernike moment invariants
Feature extraction
Publisher
IOS Press
Abstract
A significant number of Computer Vision and Artificial Intelligence applications are based on descriptors extracted from segmented objects. One widely used class of such descriptors is the invariant moments, with Zernike moments reported as some of the most efficient. The calculation of image moments requires the definition of the distance and angle of any pixel from the centroid pixel of a specific object. While this is straightforward in images acquired by projective cameras, the classic definition of distance and angle may not be applicable to omni-directional images obtained by fish-eye cameras. In this work, we provide an efficient definition of distance and angle between pixels in omni-directional images, based on the calibration model of the acquisition camera. Thus, a more appropriate calculation of moment invariants from omni-directional videos is achieved in the time domain. A large dataset of synthetically generated binary silhouettes, as well as segmented human silhouettes from real indoor videos, is used to assess experimentally the effectiveness of the proposed Zernike descriptors in recognising different poses in omni-directional video. Comparative numerical results between the traditional Zernike moments and the moments based on the proposed corrections of the Zernike polynomials are presented, along with results from other state-of-the-art image descriptors. The results show that the proposed correction in the calculation of Zernike moments improves pose classification accuracy significantly. The computational complexity of the proposed implementation is also discussed. © 2016 - IOS Press and the author(s). All rights reserved.
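The abstract does not detail the calibration-based geodesic correction itself, but the baseline it corrects is the classic Zernike moment computed with Euclidean pixel distance and angle from the image centre. A minimal sketch of that traditional computation (names and the square-image normalisation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) of the Zernike basis."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a (binary) silhouette image.

    Uses the classic projective-camera definition: rho and theta are the
    Euclidean distance and angle of each pixel from the image centre,
    mapped to the unit disc. In the paper's setting, rho and theta would
    instead be derived from the fish-eye calibration model.
    """
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    xn = (2 * x - (w - 1)) / (w - 1)   # normalise to [-1, 1]
    yn = (2 * y - (h - 1)) / (h - 1)
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0                   # Zernike basis lives on the unit disc
    basis = zernike_radial(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask])
```

The magnitudes |Z_nm| are rotation-invariant, which is why they serve as pose descriptors; the paper's contribution is to keep this property meaningful when the image geometry is distorted by a fish-eye lens.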
URI
http://hdl.handle.net/11615/73184
Collections
  • Publications in journals, conferences, book chapters, etc. [19735]