As technology advances, the production of copious amounts of multidimensional data has exposed limitations in visual forms of data representation. These limitations have prompted novel forms of representation, such as sonification, which conveys information through non-speech audio. Before sonification can be widely implemented, however, its methods must be further investigated to ensure that they rest upon a solid scientific foundation.
The purpose of this project is to identify the parameter within parameter-mapping sonification that most improves understanding of sonified data, as well as the effect of culture on one's ability to interpret multiple streams of data. Along with these two variables, the experiment sought to determine the highest number of data channels a person can accurately track using the specific sonification methods and sound waves of the experiment.
Of the different parameter-mapping sonification methods, rhythm and frequency (pitch), as well as a combination of the two, were compared.
The effect of the familiarity of sounds was considered by comparing instrument sounds with synthetic sounds. Finally, subjects were tested using 2, 4, 6, and 8 data streams. An online survey was launched to gather data.
The survey asked subjects to listen to sound clips and answer multiple-choice questions about them.
Data analysis showed that the "frequency" and "combination" methods of sonification facilitated the differentiation and comprehension of multiple data streams more than the "tipping bucket" method (rhythm), that culture did not significantly impact one's ability to understand sonified data, and that the trials with 6 streams produced the most accurate responses.
The "tipping bucket" method of sonification most likely decreased performance relative to "frequency" and "combination" because of auditory masking, and the "synthetic" and "instrument" sounds probably performed equally well because of the prevalence of synthetic sounds in contemporary music.
The project focuses on the psychoacoustics branch of sonification, specifically, the differentiation of multiple data streams.