I've turned my dynamic compilation code towards audio. I wrote a program recently that is a "lab" for creating audio effects: you type in a few lines of code and press Go, and the code is compiled and run. These processing snippets can be saved, along with parameter settings. I jokingly call it Windows Notespad.
With this tool I can quickly prototype audio effects. As a source I will use Cathy Berberian's reading from Joyce's Ulysses. (This is an excerpt from Luciano Berio's piece Thema (Omaggio a Joyce).)
Some interesting changes can be made in the frequency domain. I divide the audio into many overlapping pieces. If I take the DFT of each piece, change the phases every 256 samples, and inverse-FFT back, this imparts a high tone to the sound (frequency = 44100 Hz / 256 ≈ 172 Hz), turning the speaking into singing. If I make each piece 0.1 seconds long, ramp the pieces for a smooth transition, and set the phases to random values, I can 'smear' the sound, keeping the frequency content within each piece but losing the envelopes. This resembles adding a lot of reverb.
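In rough NumPy terms, the smearing effect might be sketched as follows. (This is a minimal sketch, not my lab code: the 50% overlap, the Hann window used as the "ramp", and the piece handling are illustrative choices.)

```python
import numpy as np

def smear(signal, rate=44100, piece=0.1):
    """Randomize spectral phases within overlapping pieces.

    Each piece keeps its magnitude spectrum but gets random phases,
    so frequency content survives while envelopes are lost.
    """
    n = int(piece * rate)          # samples per piece (0.1 s)
    hop = n // 2                   # 50% overlap (assumed)
    window = np.hanning(n)         # ramp for smooth cross-fades
    out = np.zeros(len(signal) + n)
    for start in range(0, len(signal) - n, hop):
        chunk = signal[start:start + n] * window
        spectrum = np.fft.rfft(chunk)
        phases = np.random.uniform(0, 2 * np.pi, len(spectrum))
        spectrum = np.abs(spectrum) * np.exp(1j * phases)
        out[start:start + n] += np.fft.irfft(spectrum, n) * window
    return out[:len(signal)]
```

Windowing on both analysis and synthesis keeps the piece boundaries from clicking when the randomized pieces are added back together.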
The original, for comparison:
I wrote a script to plot amplitude (average signal power) over time. Plotted on a log scale, it looks like this:
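The script amounts to averaging the squared signal over short windows and taking a log. A minimal sketch (the 20 ms window is an assumption):

```python
import numpy as np

def power_envelope(signal, rate=44100, window=0.02):
    """Average signal power per window, in dB (log scale)."""
    n = int(window * rate)                     # samples per window
    frames = len(signal) // n
    chunks = signal[:frames * n].reshape(frames, n)
    power = np.mean(chunks ** 2, axis=1)       # mean power per window
    return 10 * np.log10(power + 1e-12)        # epsilon avoids log(0)
```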
One thought was to turn this curve into pitch. What if one played a sine wave corresponding to these pitches? I experimented with code until I found an effect like a bird's song.
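One way to do this, sketched below: normalize the power curve and map it onto a frequency range, then integrate the per-sample frequency to get the sine's phase. The frequency range and the linear mapping are hypothetical; other mappings give other "birds."

```python
import numpy as np

def envelope_to_sine(signal, rate=44100, window=0.02,
                     low=800.0, high=3000.0):
    """Map the power envelope onto pitch and synthesize a sine.

    Quiet moments map to `low` Hz, loud moments to `high` Hz
    (an assumed mapping, not the one I actually used).
    """
    n = int(window * rate)
    frames = len(signal) // n
    power = np.mean(signal[:frames * n].reshape(frames, n) ** 2, axis=1)
    db = 10 * np.log10(power + 1e-12)
    norm = (db - db.min()) / (db.max() - db.min() + 1e-12)  # 0..1
    freqs = np.repeat(low + norm * (high - low), n)  # per-sample frequency
    phase = 2 * np.pi * np.cumsum(freqs) / rate      # integrate for phase
    return np.sin(phase)
```

Integrating frequency into phase (rather than computing sin(2*pi*f*t) directly) keeps the waveform continuous when the pitch jumps between windows.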
Another non-linear transformation is to change the pitch based on power. Imagine a poor vocalist who goes sharp on loud notes and sings flat on soft notes. What if the whole band played this way? The effect is too comical to include.
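A crude way to sketch the "poor vocalist" is to pitch-shift each overlapping piece by an amount tied to its power, shifting up when loud and down when quiet. The spectrum stretching below is a rough illustration (a phase vocoder would do this more cleanly), and the depth and reference level are invented parameters:

```python
import numpy as np

def shift_spectrum(spec, factor):
    """Stretch a complex spectrum by `factor` (crude pitch shift)."""
    k = np.arange(len(spec))
    src = k / factor                           # source bin for each output bin
    re = np.interp(src, k, spec.real, right=0.0)
    im = np.interp(src, k, spec.imag, right=0.0)
    return re + 1j * im

def sharp_when_loud(signal, rate=44100, piece=2048, depth=0.5):
    """Pitch each overlapping piece up when loud, down when quiet."""
    hop = piece // 2
    window = np.hanning(piece)
    starts = range(0, len(signal) - piece, hop)
    db = np.array([10 * np.log10(np.mean(signal[s:s + piece] ** 2) + 1e-12)
                   for s in starts])
    ref = np.median(db)                        # the 'in-tune' loudness
    out = np.zeros(len(signal))
    for s, d in zip(starts, db):
        factor = 2 ** (depth * np.tanh((d - ref) / 10))  # bounded detune
        chunk = signal[s:s + piece] * window
        spec = shift_spectrum(np.fft.rfft(chunk), factor)
        out[s:s + piece] += np.fft.irfft(spec, piece) * window
    return out
```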
We can find the loudness at a certain point in the audio clip. The idea struck me: what does it sound like if all of the quiet parts are made loud, and the loud parts made quiet? I find the effect to be rather terrifying:
There is much left to explore.