Audio-visual speech recognition using depth information from the Kinect in noisy video conditions

Author
Galatas, G.; Potamianos, G.; Makedon, F.
Date
2012
DOI
10.1145/2413097.2413100
Keywords
Audio-visual speech recognition
Depth information
Microsoft Kinect
Video noise
Audio signal
Audio-visual database
Automatic speech recognizers
Data stream
Noise levels
Speech information
System operation
System robustness
Visual modalities
Acoustic noise
Audio acoustics
Speech recognition
Abstract
In this paper we build on our recent work, in which we successfully incorporated facial depth data of a speaker, captured by the Microsoft Kinect device, as a third data stream in an audio-visual automatic speech recognizer. In particular, we focus on whether the depth stream provides sufficient speech information to improve system robustness under noisy audio-visual conditions, thus studying system operation beyond the traditional scenarios where noise is applied to the audio signal alone. For this purpose, we consider four realistic visual-modality degradations at various noise levels, and we conduct small-vocabulary recognition experiments on an appropriate, previously collected, audio-visual database. Our results demonstrate improved system performance due to the depth modality, as well as a considerable accuracy increase when using both the visual and depth modalities over audio-only speech recognition.
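Multi-stream recognizers of the kind the abstract describes commonly combine the audio, visual, and depth modalities by weighting per-stream class scores, with smaller weights on degraded streams. The sketch below illustrates that general idea only; it is not the authors' implementation, and the scores, weights, and function name are hypothetical.

```python
import numpy as np

def fuse_stream_scores(log_likelihoods, weights):
    """Combine per-stream class log-likelihoods with stream weights
    and return the index of the winning class (hypothesized word)."""
    fused = sum(weights[s] * np.asarray(log_likelihoods[s])
                for s in log_likelihoods)
    return int(np.argmax(fused))

# Hypothetical per-word scores for a 3-word vocabulary.
scores = {
    "audio": np.log([0.2, 0.5, 0.3]),  # noisy audio favors word 1
    "video": np.log([0.6, 0.2, 0.2]),  # lip stream favors word 0
    "depth": np.log([0.5, 0.3, 0.2]),  # depth stream agrees with video
}
# Down-weight the unreliable audio stream under acoustic noise.
weights = {"audio": 0.2, "video": 0.4, "depth": 0.4}
print(fuse_stream_scores(scores, weights))  # → 0
```

With the audio stream down-weighted, the visual and depth streams outvote the noisy audio, which matches the abstract's observation that the extra modalities improve robustness.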
URI
http://hdl.handle.net/11615/27629
Collections
  • Publications in journals, conferences, book chapters, etc. [19735]

Related Documents

Showing items related by title, author, creator and subject.

  • Resource-efficient TDNN Architectures for Audio-visual Speech Recognition

    Koumparoulis A., Potamianos G., Thomas S., da Silva Morais E. (2021)
    In this paper, we consider the problem of resource-efficient architectures for audio-visual automatic speech recognition (AVSR). Specifically, we complement our earlier work that introduced efficient convolutional neural ...
  • Detecting audio-visual synchrony using deep neural networks

    Marcheret E., Potamianos G., Vopicka J., Goel V. (2015)
    In this paper, we address the problem of automatically detecting whether the audio and visual speech modalities in frontal pose videos are synchronous or not. This is of interest in a wide range of applications, for example ...
  • Audio-visual speech activity detection in a two-speaker scenario incorporating depth information from a profile or frontal view

    Thermos S., Potamianos G. (2017)
    Motivated by increasing popularity of depth visual sensors, such as the Kinect device, we investigate the utility of depth information in audio-visual speech activity detection. A two-subject scenario is assumed, allowing ...