An understanding of digital audio signals begins with a quick overview of the different signal types you'll be working with.
Just as some artists choose clay, others watercolor, and still others oil paint...
this will be the medium of your sonic masterpiece!
Prepare to dip your brushes into the four signal types we'll discuss: acoustic sound waves, analog audio signals, digital audio signals, and MIDI data.
These are the most familiar because we use our ears to hear them every day.
This example is brought to you by the tone "A" performed by an upright piano.
The "A" key is struck by our musician, which swings a felt-padded hammer against the appropriate string inside the instrument.
Upon being struck, the string vibrates at a constant frequency based on factors such as length, tension, thickness, etc.
This string vibrates the air around it at the same frequency, setting off a chain reaction of air molecules bumping into one another in what is called a pressure wave, known more commonly as a sound wave.
Since this pressure wave vibrates at 440 Hz, we call this particular sound wave "A".
This sound wave is received by an ear and then turned into an electrical signal which is interpreted by the brain.
This fundamental frequency is what we call Pitch.
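To make the idea of a 440 Hz vibration concrete, here's a minimal Python sketch (my own illustration, not from the text) that computes the samples of a pure "A" tone; the sample rate and duration are arbitrary choices for the example.

```python
import math

SAMPLE_RATE = 44100  # samples per second (CD quality; an assumed value)
FREQUENCY = 440.0    # the "A" described above, in Hz

def sine_wave(freq, duration_s, sample_rate=SAMPLE_RATE):
    """Return a list of amplitude samples for a pure tone at `freq` Hz."""
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]

a440 = sine_wave(FREQUENCY, 0.01)  # 10 ms of the tone "A"
```

A real piano note is far richer than this pure sine wave, which is exactly where timbre comes in below.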
But how boring would it be if all we could tell was the frequency of the vibrations?
With so many different tones and subtleties - there has to be more to it than that...
So clearly we can do more than just detect the fundamental frequency.
Other factors like the sound board itself, the density of wood in the piano, and the room that it's in all provide additional complexity known as harmonics and overtones.
And these complexities contribute to what's called Timbre.
To be clear, this is not what you yell out to your lumberjack friends when a tree is about to fall.
This word when spoken actually sounds more like Tam-ber.
So what does this mean to us?
Timbre refers to the overall essence of a certain sound.
In other words, it defines the characteristics that make a sound unique.
For example, a guitar's "guitar-ness" versus a piano's "piano-ness".
If you hear a guitar play an "A" then a piano play an "A" there is no doubt you could tell them apart.
Both are vibrating the air at the same fundamental frequency (440 Hz), but the strings and wooden body of the guitar produce a much different sound than that of a piano...even though they both produce the tone in a very similar way.
This is due to the harmonics or overtones generated by each unique instrument.
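As a rough illustration of timbre, the sketch below mixes a 440 Hz fundamental with a few overtones at integer multiples of that frequency. The "recipes" here are hypothetical, not real instrument measurements; the point is that changing the overtone amplitudes changes the character of the tone while the pitch stays the same.

```python
import math

SAMPLE_RATE = 44100  # an assumed sample rate

def tone_with_harmonics(fundamental, harmonic_amps, duration_s=0.01):
    """Sum a fundamental and its integer-multiple overtones.
    harmonic_amps[k] is the relative loudness of harmonic k+1."""
    n = int(duration_s * SAMPLE_RATE)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        samples.append(sum(amp * math.sin(2 * math.pi * fundamental * (k + 1) * t)
                           for k, amp in enumerate(harmonic_amps)))
    return samples

# Made-up harmonic recipes -- real instruments are far more complex.
piano_like  = tone_with_harmonics(440, [1.0, 0.5, 0.3, 0.1])
guitar_like = tone_with_harmonics(440, [1.0, 0.2, 0.6, 0.4])
```

Both lists describe an "A" at 440 Hz, but the different overtone mixes are what your ear would hear as two different instruments.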
Ok, enough of that. Let's get down to brass tacks here.
You are mostly concerned with how we can use sound and music in the recording realm.
It's time to talk about digital and analog audio signals.
An electrical audio signal is represented by a continuous stream of information, otherwise known as an analog signal.
These are not as intuitive but still not far from our daily lives.
The speakers in your car take an electrical audio signal as the input, create a vibration, and produce the sounds you hear.
So speakers require an electrical input in the form of voltage in order to produce sound.
Microphones work in a similar way, just in reverse. They produce an electrical signal that is proportional to the sound pressure they detect.
When you stop singing, the information exists only as electrons running down a cable.
These electrons will need to be converted into digital data before they can be understood by our Digital Audio Workstation.
A digital signal is the approximation of the electrical audio signal mentioned earlier.
Rather than being represented as a pure voltage, digital signals are made up of "1's" and "0's".
This is good news for computers because they still don't understand English. They're hung up on this whole binary thing.
So a smooth, continuous analog signal can be accurately approximated by a discrete digital one.
Each 1 or 0 is defined by voltage thresholds: a signal above the high threshold is read as a 1, and a signal below the low threshold is read as a 0.
For example, if the levels are set to 0-2 volts for low and 4-6 volts for high, then any signal below 2 volts is read as a 0 and any signal between 4 and 6 volts is read as a 1 (voltages in the gap between are undefined).
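That threshold rule fits in a few lines of Python. The 0-2 V and 4-6 V levels are the example values from above; the function name `classify_bit` is my own.

```python
def classify_bit(voltage, low_max=2.0, high_min=4.0):
    """Map a voltage to a digital 0 or 1 using the example thresholds.
    Voltages in the gap between thresholds are undefined (None)."""
    if voltage <= low_max:
        return 0
    if voltage >= high_min:
        return 1
    return None  # the 2-4 V region: neither a valid 0 nor a valid 1

print(classify_bit(0.7))  # -> 0
print(classify_bit(5.1))  # -> 1
print(classify_bit(3.0))  # -> None
```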
Analog and digital signals can be thought of as different languages.
You can use different languages to deliver the same message... but only those who speak your language will be able to understand you.
So to get your acoustic guitar (who speaks analog) to communicate with your computer (who speaks digital) you need to have a translator... or in our case, an Analog-to-Digital Converter.
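Here's a sketch of what that translator does in spirit: sample the continuous signal at regular moments in time, then round each sample to the nearest integer code. Real converters do this in dedicated hardware and are far more sophisticated; the `adc_convert` helper and the 4-bit resolution are illustrative assumptions.

```python
import math

def adc_convert(analog_samples, bits=4):
    """Quantize analog samples in [-1.0, 1.0] to signed integer codes,
    roughly the way an Analog-to-Digital Converter approximates a voltage."""
    levels = 2 ** (bits - 1) - 1  # e.g. 4 bits -> codes from -7 to 7
    return [round(s * levels) for s in analog_samples]

# One cycle of a 440 Hz wave, sampled only 8 times (a deliberately coarse sketch)
analog = [math.sin(2 * math.pi * 440 * i / (440 * 8)) for i in range(8)]
digital = adc_convert(analog)
print(digital)  # -> [0, 5, 7, 5, 0, -5, -7, -5]
```

Eight samples per cycle is far coarser than real audio, but it shows the idea: the smooth curve becomes a short list of whole numbers the computer can store.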
I bet you're so glad to know the elegant science behind this artistic pastime.
Or is it just something you endure because you want to play the riff from "Sweet Child O' Mine"?
Either way, I'm glad you're still with me. Let's finish up with our last data type.
MIDI data is unique because rather than being a representation of sound, it is a set of commands that can be used to build sounds.
This is more like cooking up some championship burgers on your grill. You get your meat, your peppers, your blue cheese and combine them into a tasty treat.
Well, I actually think blue cheese is pretty gross... but let me get to the point.
Each ingredient defines the burger, but each ingredient on its own is not the final burger.
In the same way, each packet of MIDI data helps define the sound produced, even though no single piece of MIDI data is itself music.
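To show what one of these "commands" actually looks like, here's a small Python sketch of a real MIDI message: note on. Per the MIDI 1.0 spec it is just three bytes, a status byte, a note number, and a velocity (how hard the key was hit); the helper name `note_on` is my own.

```python
# A MIDI "note on" message is three bytes: status, note number, velocity.
NOTE_ON = 0x90  # status byte for "note on" on channel 1

def note_on(note_number, velocity):
    """Build the 3-byte command that tells a synth to start playing a note."""
    return bytes([NOTE_ON, note_number, velocity])

# In the MIDI spec, note number 69 is the "A" at 440 Hz.
msg = note_on(69, 100)
print(list(msg))  # -> [144, 69, 100]
```

Notice there's no audio in there at all, just an instruction, which is exactly why MIDI is its own signal type rather than a variation of analog or digital audio.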
For a more in depth look (with much better analogies), check out MIDI Basics.
So just think of MIDI data as its own thing rather than a variation of either electrical or digital audio signals.
It is in a class all of its own.
MIDI Data Signal Type - You Are A True Delight!