Emergency incidents detection in assisted living environments utilizing sound and visual perceptual components
Date
2009
Keyword
Abstract
The paper presents the concept and an initial implementation of a patient status awareness system that may be used for patient activity interpretation and emergency recognition in cases such as falls among the elderly. The system utilizes audio and video data captured from the patient's environment. Visual information is acquired using overhead cameras, and audio data is collected from microphone arrays. Proper audio data processing allows the detection of sounds related to body falls or distress speech expressions. Appropriate tracking techniques are applied to the visual perceptual component, enabling trajectory tracking of the subjects. Sound directionality, in conjunction with trajectory information and the subject's visual location, can verify a fall and indicate an emergency event. The subject's post-fall visual behavior indicates the severity of the fall (e.g., whether the patient remains unconscious or recovers). A number of advanced classification techniques have been evaluated using the latter perceptual components. The performance of the classifiers has been assessed in terms of accuracy and efficiency, and results are presented. Copyright 2009 ACM.
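The fusion rule described in the abstract can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: all class names, thresholds, and the agreement criteria (temporal proximity and directional match between the detected sound and the tracked subject, followed by a post-fall immobility check) are assumptions for illustration only.

```python
# Hypothetical sketch: verify a fall when a fall-like sound agrees in time
# and direction with the tracked subject, then judge severity from
# post-fall motion. Thresholds are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class AudioEvent:
    time_s: float         # when the fall-like sound was detected
    direction_deg: float  # sound-source bearing from the microphone array


@dataclass
class TrackState:
    time_s: float
    direction_deg: float  # subject's bearing, derived from overhead-camera tracking
    speed_m_s: float      # subject's speed from trajectory tracking


def verify_fall(audio: AudioEvent, track: TrackState,
                max_dt_s: float = 1.0, max_angle_deg: float = 15.0) -> bool:
    """A fall is verified if sound and subject agree in time and direction."""
    dt = abs(audio.time_s - track.time_s)
    # Wrapped angular difference in [0, 180] degrees.
    dangle = abs((audio.direction_deg - track.direction_deg + 180.0) % 360.0 - 180.0)
    return dt <= max_dt_s and dangle <= max_angle_deg


def assess_severity(post_fall_speeds: list[float],
                    recovery_speed_m_s: float = 0.2) -> str:
    """Prolonged post-fall immobility suggests an emergency."""
    if all(s < recovery_speed_m_s for s in post_fall_speeds):
        return "emergency"   # subject remains motionless after the fall
    return "recovered"       # subject moves again after the fall
```

In this sketch a fall sound at a bearing matching the tracked subject confirms the event, and a window of near-zero post-fall speeds escalates it to an emergency; in practice these decisions would be made by the trained classifiers evaluated in the paper.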
Related items
Showing items related by title, author, creator and subject.
- Audio-visual speech recognition using depth information from the Kinect in noisy video conditions
  Galatas, G.; Potamianos, G.; Makedon, F. (2012) In this paper we build on our recent work, where we successfully incorporated facial depth data of a speaker captured by the Microsoft Kinect device, as a third data stream in an audio-visual automatic speech recognizer. ...
- Resource-efficient TDNN Architectures for Audio-visual Speech Recognition
  Koumparoulis, A.; Potamianos, G.; Thomas, S.; da Silva Morais, E. (2021) In this paper, we consider the problem of resource-efficient architectures for audio-visual automatic speech recognition (AVSR). Specifically, we complement our earlier work that introduced efficient convolutional neural ...
- Detecting audio-visual synchrony using deep neural networks
  Marcheret, E.; Potamianos, G.; Vopicka, J.; Goel, V. (2015) In this paper, we address the problem of automatically detecting whether the audio and visual speech modalities in frontal pose videos are synchronous or not. This is of interest in a wide range of applications, for example ...