Publication details for Professor Toby Breckon

Kurcius, J.J. & Breckon, T.P. (2014), Using Compressed Audio-visual Words for Multi-modal Scene Classification, Proc. International Workshop on Computational Intelligence for Multimedia Understanding. IEEE.
- Publication type: Conference Paper
- ISSN/ISBN: 9781479979714
- DOI: 10.1109/IWCIM.2014.7008808
- Keywords: Multi-resolution, Bag of words, MFCC, Compressed sensing, Audio-visual, Multi-modal.
We present a novel approach to scene classification using combined audio signal and video image features and compare this methodology to scene classification results using each modality in isolation. Each modality is represented using summary features, namely Mel-frequency Cepstral Coefficients (audio) and Scale Invariant Feature Transform (SIFT) (video) within a multi-resolution bag-of-features model. Uniquely, we extend the classical bag-of-words approach over both audio and video feature spaces, whereby we introduce the concept of compressive sensing as a novel methodology for multi-modal fusion via audio-visual feature dimensionality reduction. We perform evaluation over a range of environments showing performance that is both comparable to the state of the art (86%, over ten scene classes) and invariant to a ten-fold dimensionality reduction within the audio-visual feature space using our compressive representation approach.
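The compressive fusion step described above can be sketched as a random projection of the concatenated audio-visual bag-of-words histograms. The following is our own illustration of the general compressive-sensing idea, not the paper's code; the codebook sizes, the Gaussian measurement matrix, and the ten-fold reduction factor are assumptions chosen to mirror the dimensionality reduction reported in the abstract.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation):
# compressive-sensing style dimensionality reduction of a fused
# audio-visual bag-of-words feature vector.

rng = np.random.default_rng(0)

audio_vocab, visual_vocab = 500, 500           # assumed codebook sizes
audio_hist = rng.random(audio_vocab)           # stand-in MFCC BoW histogram
visual_hist = rng.random(visual_vocab)         # stand-in SIFT BoW histogram

# Early fusion: concatenate the per-modality histograms into one vector.
fused = np.concatenate([audio_hist, visual_hist])     # length 1000

# Compressive sensing step: project onto a random Gaussian measurement
# matrix, giving a ten-fold reduction in feature dimensionality.
m = fused.size // 10                                  # 100 measurements
phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, fused.size))
compressed = phi @ fused

print(compressed.shape)
```

The compressed vector would then replace the full concatenated histogram as input to the scene classifier, which is what makes the classification invariant to the reduced feature dimensionality.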