The Sound Prototyping Tool is an internal tool created for the Sound of Vision project. It is used to create and test different sound rendering models on custom-defined 3D virtual scenes that mimic real-world situations visually impaired people might face. The tool is used to perform and save various sound model tests. It also provides a solid base that can be extended to other sound rendering tasks, such as headphone quality testing, HRTF testing, etc.
The sound prototyping tool is composed of three layers: a custom editor for virtual environments, an interactive sound abstraction layer based on the Csound library, and the UI layer.
Virtual scenes and user interaction
The prototyping tool was built with the idea of simplifying the creation, testing and comparison of the various sound model approaches we had to consider for the project. The tool offers a simple yet powerful design approach for creating new sound models based on the Csound sound rendering library. Thus, different approaches can be tested and compared in a predictable and flexible virtual environment.
The game engine layer provides a virtual 3D scene editor that can be used to define anything from simple to more complex test scenes. The editor is backed by common UI components for precise control and for displaying various information about objects, such as size, position in 3D space, orientation and shape type, as well as other sound-related aspects used for rendering, like relative azimuth, elevation, distance to the observer (the eyes and ears of the tester), attenuation, volume, etc.
Scenes can be easily edited, saved and loaded later. For each object or 3D surface, a sound model can be described using a dedicated Csound file or by assembling the block components provided by the visual programming model implemented in the application. The keyboard and mouse can be used to interact with the scene, move the observer or change the camera orientation. All computations related to sound rendering are updated in real time.
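To make the observer-relative quantities above concrete, here is a minimal sketch of how azimuth, elevation and distance can be derived from an observer's position and heading. This is illustrative Python, not the tool's actual code, and the coordinate convention (y up, z forward) is an assumption:

```python
import math

def observer_relative(obs_pos, obs_yaw, obj_pos):
    """Azimuth (deg), elevation (deg) and distance of an object as seen
    from an observer at obs_pos, facing obs_yaw radians around the
    vertical axis. Illustrative only; axis conventions are assumed."""
    dx = obj_pos[0] - obs_pos[0]
    dy = obj_pos[1] - obs_pos[1]          # vertical offset (y is "up")
    dz = obj_pos[2] - obs_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Rotate the offset into the observer's local frame (yaw only).
    cos_y, sin_y = math.cos(-obs_yaw), math.sin(-obs_yaw)
    lx = cos_y * dx + sin_y * dz
    lz = -sin_y * dx + cos_y * dz
    azimuth = math.degrees(math.atan2(lx, lz))    # 0 deg = straight ahead
    elevation = math.degrees(math.asin(dy / distance)) if distance else 0.0
    return azimuth, elevation, distance
```

Running this every frame for each scene object is enough to drive distance-based attenuation and direction-dependent rendering in real time.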
Csound audio layer and the visual programming model
We chose the Csound audio library to provide the audio rendering backbone because it is a very powerful and flexible library: it can be used to generate virtually any kind of audio signal, provided you know some of the theory behind sound design. At the same time, the library offers a great variety of options and controls that we thought would fit the scope of the application very well.
From the start, the idea behind this tool was to offer a simple way to test and design new sound models even for users who do not know programming or sound design techniques. With that in mind, we implemented a simple visual programming interface that can be used to define new audio script files (we call them sound models) through a familiar drag-and-drop experience. It is built around block components, which are simply abstractions of Csound expressions or blocks of code, wrapped into UI elements with inputs, outputs and various configuration or control options for providing custom values or real-time control during the testing procedure. Such components can be defined at any time and imported from an XML file without rebuilding the tool. Thus, sound effects can be encoded into simple UI blocks that can be arranged into custom sound rendering approaches whose parameters can be controlled with ease during real-time testing.
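A block-component definition of this kind might look roughly as follows. The element and attribute names below are purely illustrative (the tool's actual XML schema is not shown in this article), and the parsing sketch uses plain Python to show how such a file can be turned into data without rebuilding anything:

```python
import xml.etree.ElementTree as ET

# Hypothetical block definition: names, attributes and the "butterlp"
# opcode mapping are assumptions for illustration, not the real schema.
BLOCK_XML = """
<block name="LowPass" opcode="butterlp">
  <input name="asig" type="audio"/>
  <input name="kcutoff" type="control" default="1000"/>
  <output name="aout" type="audio"/>
</block>
"""

def load_block(xml_text):
    """Parse one UI block definition into a plain dictionary."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "opcode": root.get("opcode"),
        "inputs": [(e.get("name"), e.get("type"), e.get("default"))
                   for e in root.findall("input")],
        "outputs": [e.get("name") for e in root.findall("output")],
    }
```

Because the definitions live in data files rather than in compiled code, new effects can be added by dropping in a new XML entry and restarting the tool.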
Prototyping and testing
As we noted at the beginning of the article, the tool also provides a base for various other tasks, such as processing 3D environment data using the parallel processing power of video cards. For example, an echolocation wave-based approach that would be very expensive to compute on the CPU alone was implemented on the GPU.
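To illustrate why this kind of workload parallelizes so well, consider a much-simplified model in which each reflecting surface point contributes an echo with a round-trip delay and a distance-based falloff. The sketch below (illustrative Python, not the tool's GPU code) computes these per-point values; since every point is independent, the same loop maps naturally onto one GPU thread per point:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_profile(listener, points):
    """For each reflecting surface point, compute the round-trip echo
    delay (s) and a clamped inverse-square intensity falloff.
    Simplified model for illustration only."""
    delays, gains = [], []
    for p in points:
        d = math.dist(listener, p)                 # listener -> point
        delays.append(2.0 * d / SPEED_OF_SOUND)    # out-and-back travel time
        gains.append(1.0 / max(d, 1.0) ** 2)       # avoid blow-up near d == 0
    return delays, gains
```

With tens of thousands of surface points per frame, the CPU loop becomes the bottleneck, while a GPU kernel evaluating the same formula per thread stays comfortably in real time.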
Another use of the tool was to offer a test environment for comparing various quality aspects of commercially available headphones. To this end, we added an HRTF test generator with many customization options and the possibility to export result statistics to a custom file format.
The audio rendering layer is based on the Csound library, the UI layer is created with the Qt framework, while the virtual environment layer is built on top of the OpenGL 3.3+ API and integrates many common open-source game engine libraries. In the future we plan to release the tool as open source, because it offers a flexible and innovative approach to designing and testing sound in a 3D virtual space, and we hope others might benefit from using it.