Sound is a ubiquitous element of our physical environment: every physical interaction produces an acoustic reaction, and we also use sound for communication (speech, music). In contrast, digital worlds are inherently silent, and our listening capabilities are neglected when it comes to investigating data.
In this lecture we first investigate what sound is, how it behaves in the real world (acoustics), how we perceive and process audio signals (psychoacoustics), and then, in more detail, how we can compute sound signals (sound synthesis). Standard approaches such as additive/subtractive synthesis, granular synthesis, FM synthesis, and nonlinear synthesis will be covered.
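To give a first flavor of two of these approaches, the following minimal NumPy sketch generates a tone by additive synthesis (summing harmonic sine partials) and another by FM synthesis (modulating the phase of a carrier). The function names and parameter choices are illustrative, not taken from the lecture materials.

```python
import numpy as np

def additive_tone(f0, partial_amps, dur=1.0, sr=44100):
    """Additive synthesis: sum harmonically related sine partials.

    partial_amps[k] is the amplitude of the (k+1)-th harmonic of f0.
    """
    t = np.arange(int(dur * sr)) / sr
    sig = np.zeros_like(t)
    for k, a in enumerate(partial_amps, start=1):
        sig += a * np.sin(2 * np.pi * k * f0 * t)
    return sig / np.max(np.abs(sig))  # normalize to avoid clipping

def fm_tone(fc, fm, index, dur=1.0, sr=44100):
    """Simple FM synthesis: carrier fc phase-modulated by modulator fm."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# A 220 Hz tone with 1/k harmonic amplitudes, and a bright FM tone
tone = additive_tone(220.0, [1 / k for k in range(1, 6)])
bell = fm_tone(440.0, 110.0, index=2.0)
```

The resulting arrays can be written to a WAV file or played back with any audio library to hear how the spectra differ.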
These techniques can in turn be used to represent data sets by sound, a method called 'sonification', which serves as the generative module in auditory displays. In this lecture we introduce the interdisciplinary research field of sonification and present and discuss various practical application examples that illustrate how sound can be a useful component, e.g. for exploratory data analysis, for visually impaired users, and for ambient information systems. Participants gain hands-on experience in sound synthesis and experiment with synthesis, timbre morphing, and various sonification techniques. We will use different languages and systems, including Python (NumPy/SciPy), SuperCollider3, the Python digital signal processing library pyo, and data science tools to plot and manipulate representations of sound.
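As a hint of what a sonification can look like, here is a minimal sketch of parameter-mapping sonification, one of the techniques covered: each data value controls the pitch of a short sine tone. The function name, pitch range, and mapping are our own illustrative choices, not a prescribed method from the lecture.

```python
import numpy as np

def parameter_mapping_sonification(data, fmin=220.0, fmax=880.0,
                                   note_dur=0.2, sr=44100):
    """Map each data value to the pitch of a short enveloped sine tone."""
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    norm = (data - lo) / (hi - lo) if hi > lo else np.zeros(len(data))
    # exponential mapping: equal data steps become equal pitch intervals
    freqs = fmin * (fmax / fmin) ** norm
    t = np.arange(int(note_dur * sr)) / sr
    env = np.hanning(len(t))  # fade in/out to avoid clicks between notes
    return np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])

# Sonify a small data series: rising values become rising pitches
sig = parameter_mapping_sonification([3, 1, 4, 1, 5, 9, 2, 6])
```

Playing the resulting signal lets one hear trends and outliers in the data directly, the core idea behind auditory data exploration.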
Lecture notes will be provided.
For sound synthesis, the textbook 'Elements of Computer Music' by F. R. Moore is a good reading recommendation. For sonification, parts of the (open-access, online-available) Sonification Handbook (see http://sonification.de/handbook) will be used.