Thursday, November 26, 2009

An idea for music transcription.

Many musicians spend time transcribing music, using computer software such as Sibelius or Finale to enter notes into sheet music. This process, though, could use improvement, particularly when a midi keyboard is not available. (In some programs the computer keyboard mapping is not intuitive; in Finale 2006, one enters notes by typing their letter names. The "g" key produces a g note, and so on - but on a qwerty layout this doesn't work well, because of the non-consecutive placement of a, b, and c. Also, I have yet to see a good way to select note duration.) Both Finale and Sibelius have recently released new transcription features, such as Sibelius' keyboard window introduced in May 2009. Here I will introduce one of my ideas for faster music transcription.

Instead of following an arbitrary metronome when recording, my program lets the performer tap the pulse themselves. This lets the performer play more naturally, and also slow down for the parts of the music that are more complex. I recently finished a proof-of-concept of this system that needs only a standard computer keyboard. The keys are played like a piano, and the Tab key is tapped for every quarter note. It works well; I've already used it to transcribe some of a Bach cantata.
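The core of the tap-pulse idea can be sketched in a few lines (an illustrative Python sketch, not the actual Trilling code; the `quantize` helper and its linear interpolation are my own simplification). Given the times of the Tab taps, one per quarter note, and the onset times of the played notes, each note's position in quarter notes comes from interpolating between the surrounding taps:

```python
def quantize(note_times, tap_times):
    """Map note onset times (seconds) to positions in quarter notes,
    by linearly interpolating between the performer's taps."""
    positions = []
    for t in note_times:
        # find the tap interval containing this onset
        for i in range(len(tap_times) - 1):
            t0, t1 = tap_times[i], tap_times[i + 1]
            if t0 <= t <= t1:
                positions.append(i + (t - t0) / (t1 - t0))
                break
        else:
            positions.append(None)  # onset outside the tapped range
    return positions
```

Rounding these positions to the nearest sixteenth (or whatever the smallest expected duration is) would then give notation-ready values, regardless of how much the performer slowed down while playing.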

Some screenshots of the prototype, called "Trilling":

A short fragment, saved to MusicXML and imported by Finale:

The same fragment, saved to LilyPond:

Even if you don't have Finale, Sibelius, or LilyPond installed, you can see a rough preview of the music, drawn by my little "scoreview" program:

I plan to take this project forward, adding more features, and creating a truly useful tool for music transcription.

In order to export to MusicXML and LilyPond, I use code from an open source project called Mingus. I'm grateful for that project, which turned out to be very similar to what I needed. I added support for tied notes and changed some of the ways it writes MusicXML files.

Download source, requires Python 2.5, released under GPLv3.
At the moment, it only runs on Windows because I am using the winsound module to play .wav files in real-time. (I'm not using midi output anymore because, if one doesn't have hardware midi, there is a noticeable delay during playback.)

Sunday, November 22, 2009

Doppler Effect Simulation

Here's a hacked-together-in-one-night project just like the ones from the old days.


This is a Doppler effect simulation that creates the sound of particles whizzing past you, in stereo. You, the "observer", are represented by the L and R rectangles. The green numbers represent points along the path that the particle travels. One specifies the path by moving the positions of the numbers with the mouse. If the points are spread apart, the particle travels faster, and if the points are close together, it moves slower.

It's fun to play with; I recommend downloading and trying it. Clicking the checkbox adds a second particle.




The physics are only approximate, because this is an audio project and not a physics project. Each pixel corresponds to 2 meters. The distance between L and R "observers" is deliberately exaggerated to get a better stereo effect. The volume is scaled by 1/r instead of 1/(r^2), in order to hear the particle more easily when it is far away. This is something to explore further.

I was able to write the code quickly. The expression (V/(V+vS)) gives a frequency shift, where V is the speed of sound and vS is the radial velocity of the source (positive when receding); if this value is 1.5, the perceived pitch is 1.5 times higher. It turns out that this frequency shift maps very naturally onto code. First, I generate many seconds of source audio, say a sine wave at a fixed frequency. Then, I walk through this source audio at the "frequency shift" rate, interpolating between values when needed. So if the "frequency shift" came to 1.3, I would take the 0th, 1.3th, 2.6th, and 3.9th samples from the source audio as the output audio, which results in output audio with a 1.3-times-higher pitch. The whole project simplified to this:

double V = 340.0; // speed of sound, in m/s
// for each timestep:
//   move x and y
//   find distance between particle and observer
distance = Math.Sqrt((x - xMe) * (x - xMe) + (y - yMe) * (y - yMe));
vS = (distance - prevdistance) / dt; // radial velocity of the source
freqShift = V / (V + vS); // > 1 when approaching, < 1 when receding

intensity = scale * (1 / distance); // the 1/r falloff mentioned above
for (int i = 0; i < dt * sampleRate; i++)
{
    outputAudio[index + i] = intensity * interpolatedValue(sourceAudio, fPositionInSourceAudio);
    fPositionInSourceAudio += freqShift;
}
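The interpolated walk translates directly into any language; here is an illustrative Python sketch of the same idea (function names are mine, not the project's):

```python
def interpolated_value(samples, pos):
    """Linearly interpolate between adjacent samples at fractional index pos."""
    i = int(pos)
    frac = pos - i
    return samples[i] * (1.0 - frac) + samples[i + 1] * frac

def pitch_shift(source, ratio, n_out):
    """Walk through the source audio at the given rate; a ratio of 1.3
    yields output with a 1.3-times-higher pitch."""
    out = []
    pos = 0.0
    for _ in range(n_out):
        out.append(interpolated_value(source, pos))
        pos += ratio
    return out
```

With a ratio of 2.0, the walk takes every other sample, which halves the period of the source waveform and so doubles the pitch.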


Download (Windows), unzip and run Doppler.exe. No setup needed.

Source, released under GPLv3.

Monday, November 16, 2009

Computer keyboard as piano keyboard

A continuation of a program I wrote a few years back, this little app lets you use your computer keyboard as a piano keyboard:

It can be useful for recording fragments of melodies or songs. The results are saved as a standard .mid file. Also, because it supports easy multi-tracking, it can be fun to make a little song by combining a bass part, harmony, and melody with different instruments for each voice.

(Midisketch sends out real-time midi events. So, if a midi loopback driver like Maple or LoopBe1 is used, it can act as a midi input device as well. In fact, it might be interesting to have Midisketch install keyboard hooks like Ctrl-alt-a, Ctrl-alt-b, and so on, that would play the corresponding notes even when the program is minimized. Then it could be used just like a midi keyboard, and recognized as such by other software.)

Anyways, here is a short video showing the keyboard:

Midisketch demo from bngjbng on Vimeo.

Polyphony of three or more notes is supported, constrained only by the key-rollover limitations of most computer keyboards. The "C# MIDI Toolkit" by Leslie Sanford is used to send midi data to the Windows api.

Download, win32.
(Unzip the file and run Midisketch.exe. See also readme.txt. Requires .NET 2).
Released under GPLv3.

Wednesday, November 11, 2009

Audio: noise

I have been continuing some digital audio experiments.

I find red noise to sound calmer and more natural than white noise. (Red noise can be produced by integrating white noise.) The way I produce red noise is to start with a single value, add a small random (positive or negative) sample to it, and then continue. In white noise, the samples are unconnected leaps, but in red noise, the samples do not change as drastically. Let's say one takes a small chunk of red noise, 220 samples long. If one plays this chunk repeatedly (at a 44.1khz sampling rate), a tone of about 200hz is heard. The tone is colored by the frequencies present in the initial chunk of noise.
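This random-walk construction is only a few lines; here is an illustrative Python sketch (the step size is arbitrary, and the real experiments use my C library rather than this code):

```python
import random

def red_noise(n, step=0.01, seed=0):
    """Red (Brownian) noise: start at zero and integrate small random steps,
    so consecutive samples never differ by more than `step`."""
    rng = random.Random(seed)
    value = 0.0
    out = []
    for _ in range(n):
        value += rng.uniform(-step, step)
        out.append(value)
    return out

chunk = red_noise(220)  # looping this chunk gives a tone near 200hz at 44.1khz
```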

Now let's create 15 different chunks of red noise, each of length 220. If we play one of the chunks ten times, then the next ten times, then the next ten times, and so on, eventually repeating, the result is a nice "warble" at 200hz. The upper frequencies present change abruptly about 20 times a second, causing a robotic sound. This is a fun sound that can be used for various purposes.


However, what if our goal is to create a more natural sound? As mentioned before, a problem is that the frequency content changes abruptly. We want to create more subtle changes, while still having the presence of buzzing red noise.

The solution I came up with was inspired by how the red noise was created in the first place. I still begin with a chunk of red noise. After playing it ten times, instead of switching to an entirely different chunk, I instead slightly alter the chunk: by adding small random (positive or negative) values to each of the 220 samples. In this way, the frequency content does not change suddenly, and yet it changes in an unpredictable and subtle way. I find the resulting tone to still be harsh and noise-like, but surprisingly natural and musical. In the past it has only been in the frequency domain that I've been able to produce natural sounds.
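The softening step can be sketched the same way (illustrative Python, not the bcaudio code; the repeat count and step size here are arbitrary):

```python
import random

def softened_noise(chunk, repeats=10, generations=15, step=0.005, seed=0):
    """Loop a chunk of noise; after every `repeats` plays, nudge each
    sample by a small random amount, so the timbre drifts gradually
    instead of jumping to an unrelated chunk."""
    rng = random.Random(seed)
    chunk = list(chunk)
    out = []
    for _ in range(generations):
        for _ in range(repeats):
            out.extend(chunk)
        # mutate the chunk slightly for the next generation
        chunk = [s + rng.uniform(-step, step) for s in chunk]
    return out
```

Because each mutation is bounded by the step size, the spectrum of one generation stays close to the previous one, which is exactly the subtle drift described above.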

Two examples of this "softened noise":



The second is seeded initially with a sine wave, and higher frequencies emerge over time.

I've been using my C audio library, "bcaudio", for these experiments. Writing pseudo-object-oriented C isn't that bad. It's fun to write code that will run very quickly.