Sound is a ubiquitous element in our physical environments: each physical interaction creates an acoustic reaction, and we also use sound for communication (speech, music). In contrast, digital worlds are inherently silent, and our listening capabilities are neglected when it comes to investigating data.
In this lecture we investigate in greater depth what sound is, how it works in the real world (acoustics), how we process audio signals (psychoacoustics), and then, in more detail, how we can compute sound signals (sound synthesis).
The larger part of the lecture will introduce the most relevant sound synthesis techniques:
- additive synthesis
- subtractive synthesis
- granular synthesis
- wavetable synthesis and sampling
- nonlinear synthesis
- physical synthesis models: the Karplus-Strong algorithm, modal synthesis, and synthesis by integration of differential equations
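To give a flavor of the first technique: in additive synthesis, a tone is built by summing sinusoidal partials with individual frequencies and amplitudes. The following is a minimal sketch in Python/numpy (one of the tools used in the lecture); the function name, the 1/k amplitude decay, and all parameter values are illustrative choices, not prescribed by the lecture.

```python
import numpy as np

def additive_tone(f0, partials, sr=44100, dur=1.0):
    """Sum `partials` harmonics of fundamental f0 (Hz) with 1/k amplitude decay.

    A 1/k decay yields a sawtooth-like spectrum; other amplitude
    envelopes produce different timbres.
    """
    t = np.arange(int(sr * dur)) / sr  # sample times in seconds
    tone = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t)
               for k in range(1, partials + 1))
    return tone / np.max(np.abs(tone))  # normalize to [-1, 1]

# One second of a 220 Hz tone with 8 harmonics
tone = additive_tone(220.0, partials=8)
```

Varying the per-partial amplitudes over time (e.g., with individual envelopes) is one simple route to the timbre morphing experiments mentioned below.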
Participants can gain hands-on experience in sound synthesis and experiment with timbre morphing and the various synthesis techniques. We will use different programming languages and sound production systems, including Python (numpy/scipy), SuperCollider3, PureData, the Python digital signal processing system pyo, and visualization tools to plot and manipulate representations of sound.
The lecture will also briefly introduce modern DAWs (digital audio workstations) for composing complex soundscapes and music tracks in a recording studio.
Sound synthesis techniques as introduced here can in turn be used to represent data, an approach called 'sonification'. However, this lecture focuses on the synthesis part and leaves a thorough introduction to sonification to the companion lecture 'Auditory Data Science' (see ekVV).