Authors: Oana Bălan, Alin Moldoveanu, Florica Moldoveanu
in: Archives of Acoustics (Accepted for publication)
Abstract: The use of individualized Head-Related Transfer Functions (HRTFs) is a fundamental prerequisite for accurate rendering of 3D spatialized sounds in virtual auditory environments. HRTFs are transfer functions that define the acoustical basis of auditory perception of a sound source in space and are frequently used in virtual auditory displays to simulate free-field listening conditions. However, they depend on the anatomical characteristics of the human body and vary significantly among individuals, so the same HRTF dataset will not offer the same level of auditory performance to every user of the designed system. This paper presents an alternative approach to the use of non-individualized HRTFs that is based on procedural learning, training, and adaptation to altered auditory cues. We tested the sound localization performance of nine sighted and visually impaired people before and after a series of perceptual (auditory, visual, and haptic) feedback-based training sessions. The results demonstrate that our subjects significantly improved their spatial hearing under altered listening conditions (such as the presentation of 3D binaural sounds synthesized from non-individualized HRTFs), the improvement being reflected in higher localization accuracy and a lower rate of front-back confusion errors.
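The binaural synthesis the abstract refers to, placing a sound at a virtual direction by filtering it with the HRTFs measured for that direction, reduces in the time domain to convolving the source signal with the left- and right-ear head-related impulse responses (HRIRs). The sketch below illustrates the idea in Python; it is not the paper's code, and the HRIR data here are random stand-ins (in practice they would come from a measured dataset, individualized or generic).

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    # Filter the mono source with the left- and right-ear head-related
    # impulse responses (the time-domain counterparts of the HRTFs).
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    # Stack into a (samples, 2) array for headphone playback.
    return np.stack([left, right], axis=-1)

# Example with placeholder data: a 100 ms noise burst and dummy HRIRs.
fs = 44100
mono = np.random.randn(int(0.1 * fs))
hrir_left = np.random.randn(256) * np.hanning(256)   # stand-in HRIR
hrir_right = np.random.randn(256) * np.hanning(256)  # stand-in HRIR
stereo = render_binaural(mono, hrir_left, hrir_right)
```

When the HRIRs are taken from a generic (non-individualized) set rather than measured on the listener, the rendered cues no longer match the listener's own anatomy, which is the mismatch the paper's perceptual training aims to compensate for.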
Download link: Multimodal-Perceptual-Training-Improves-Spatial-Auditory-Performance-in-Blind-and-Sighted-Listeners.pdf