Controlling volume is one of the most important elements in audio production. Understanding amplitude, volume, normalization, and automation is part of music production basics and will help you in the recording and mixing phases of your project.
This post was adapted and excerpted from Getting Started with Music Production by Robert Willey (Hal Leonard). Published with permission.
Acoustics: amplitude and volume
Acoustics is the study of sound and how it moves through space. A little background in acoustics will help you understand what an audio recording program displays on the screen, and it is where your study of music production basics should begin.
Sound begins with vibrations. A vibrating body pushes and pulls on the air molecules around it, creating an alternation of pressure above and below the average pressure caused by the weight of the atmosphere above us. This alternation of higher and lower pressure creates a pressure wave that spreads out through the air. The amplitude is the amount of positive or negative change in air pressure measured at a point in space as the sound wave passes it. Increasing the energy of the vibrating body increases the amplitude of the pressure wave that it projects. You may be familiar with the word amplifier, a device that increases the amplitude of a signal to the point that it has enough energy to push and pull the cone in a loudspeaker back and forth.
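To make the idea concrete, here is a minimal sketch in Python (using NumPy, with made-up numbers) of a sinusoidal pressure wave. The amplitude is the peak swing above and below the resting pressure, and multiplying the signal by a gain factor, as an amplifier does, raises that amplitude:

```python
import numpy as np

sample_rate = 44100                          # samples per second
t = np.arange(0, 0.01, 1 / sample_rate)      # 10 milliseconds of time points

frequency = 440.0   # vibrations per second
amplitude = 0.2     # peak change above/below the average (atmospheric) pressure

# A simple model of a pressure wave: the pressure swings above and
# below its resting value by up to `amplitude`.
pressure = amplitude * np.sin(2 * np.pi * frequency * t)

# An amplifier multiplies the signal, increasing its amplitude.
gain = 3.0
amplified = gain * pressure

print(pressure.max())    # about 0.2
print(amplified.max())   # about 0.6
```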
When a pressure wave arrives at someone’s head, it enters their ear canal and causes the eardrum at the other end to vibrate. The vibrations are then passed to the inner ear, whose hair-like cells vibrate in response and emit electrical signals that are in turn sent to the brain. The brain interprets these signals as sound. The greater the amplitude of the pressure wave, the harder the cells in the inner ear will vibrate and the louder the sound will seem to the listener.
Volume is the term musicians use to describe how loud something is. In music notation, dynamic markings are written using abbreviations for Italian words, such as f (for forte, or loud), mf (mezzo forte, or medium loud), or pp (pianissimo, or very soft). These markings tell performers how loudly they should play. This works in a group when everyone is listening to each other and adjusting their performances accordingly. When there are a lot of musicians playing together, a conductor may be needed to let them know with hand signals if they are playing too softly or too loudly.
Audio engineers use a measurement system developed for electronic equipment that is more precise than the language of dynamics used by musicians. In this system, the intensity of a sound is measured in decibels (dB) on a logarithmic scale. At one end of the scale, 0 dB is a reference level equal to the softest sound that humans can normally hear. At the other end, 120 dB is the “threshold of pain,” an intensity that begins to cause listeners physical pain. A volume knob on a piece of audio equipment adjusts the amplitude of the signal that comes out. Turning up the volume increases the amplitude.
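For sound pressure level, the standard formula is dB SPL = 20 · log10(p / p0), where p0 is a reference pressure of about 20 micropascals, roughly the softest sound a person can hear. The short Python sketch below (the example pressure values are assumptions for illustration) shows how the scale works and why doubling an amplitude adds about 6 dB:

```python
import math

P0 = 20e-6  # reference pressure in pascals (0 dB SPL, threshold of hearing)

def db_spl(pressure_pa):
    """Convert a pressure amplitude in pascals to decibels of sound pressure level."""
    return 20 * math.log10(pressure_pa / P0)

print(db_spl(20e-6))                 # 0 dB   -- threshold of hearing
print(db_spl(0.02))                  # 60 dB  -- roughly conversational speech
print(db_spl(20.0))                  # 120 dB -- around the threshold of pain
print(db_spl(0.04) - db_spl(0.02))   # ~6 dB  -- doubling the amplitude
```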
Microphone as transducer
A transducer is a device that converts energy from one form to another. The human ear is a transducer that converts the energy in a pressure wave into electrical signals for the brain. A loudspeaker is a transducer that converts electrical energy into mechanical motion, using a magnet and a coil of wire to push and pull a speaker cone that drives a pressure wave into the air. A microphone is a transducer that receives a pressure wave and outputs an electrical signal whose voltage varies in a way that is analogous (similar) over time to the way the input pressure varied. That is why the signal that comes out of the microphone is called an analog signal.
Waveform
Normalization
There are design limits to how high the peaks and how low the troughs of a waveform passing through a piece of equipment can be. If the preamplifier on the audio interface is set too high while recording, the top and bottom of the waveform will be clipped. The flattening of the waveform at the points that went out of bounds is a sign of distortion.
The audio information in those areas that has been clipped off is lost, and the waveform cannot be fixed. Turning down the level of the clipped signal afterward makes it softer, but the flattened places in the waveform remain (Figure 3).
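To see what clipping does to the numbers themselves, here is a rough Python sketch (not the behavior of any particular interface): samples pushed past the maximum value the equipment can represent are simply cut off, and turning the result down afterward only scales the flattened shape.

```python
import numpy as np

t = np.linspace(0, 0.01, 441)                 # 10 ms of time points
signal = 1.5 * np.sin(2 * np.pi * 440 * t)    # preamp set too hot: peaks reach 1.5

# The equipment can only pass values between -1.0 and +1.0, so anything
# beyond that range is flattened (clipped) and the original shape is lost.
clipped = np.clip(signal, -1.0, 1.0)

# Turning the clipped signal down later makes it softer, but the
# flat tops remain; the missing audio cannot be recovered.
turned_down = 0.5 * clipped

print(signal.max(), clipped.max(), turned_down.max())   # ~1.5  1.0  0.5
```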
It is safest to set the preamplifier level a little lower than you think necessary, in case the performer you are recording suddenly puts out more energy and surprises you with a louder-than-normal sound. If you end up with a track whose amplitude is too low and you can’t rerecord it, you can boost the level afterwards. The resulting sound quality won’t be quite as good as if you had recorded the track with a higher preamplifier level, but it will be better than having part of the recording clipped.
One way to boost the level of a track is a process called normalization. When you select a section of audio and tell the software to normalize it, the computer first reads through the audio from beginning to end to find the spot where the amplitude was the greatest, and then calculates what number that peak would have to be multiplied by to equal the maximum allowed amplitude. After this multiplying factor has been determined, the computer goes back and multiplies every point in the waveform from the beginning to the end of the section by that number.
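In code, the two passes the paragraph describes might look like this minimal NumPy sketch, assuming audio samples stored as numbers between -1.0 and +1.0 (a common convention, not something specified here):

```python
import numpy as np

def normalize(samples, ceiling=1.0):
    """Scale a block of audio so that its loudest peak just reaches `ceiling`."""
    peak = np.max(np.abs(samples))    # pass 1: find the greatest amplitude
    if peak == 0:
        return samples                # silence: nothing to scale
    factor = ceiling / peak           # what the peak must be multiplied by
    return samples * factor           # pass 2: multiply every point by that factor

# A quiet take whose peaks only reach 0.25 gets boosted so they reach 1.0.
quiet_take = 0.25 * np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))
boosted = normalize(quiet_take)
print(np.max(np.abs(quiet_take)), np.max(np.abs(boosted)))   # ~0.25  ~1.0
```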
Automation
In England, some audio professionals have the job title of “balance engineer.” In the U.S., this person is more likely to be called a recording engineer (the person who records), as opposed to a mix engineer (the person who mixes). For example, Geoff Emerick and Norman Smith started out as balance engineers when they worked with the Beatles. The title is also apt when discussing the responsibilities of a mix engineer, since a mix engineer often concentrates more on balancing the volume between tracks than on mixing the tracks together.
One advantage of the recording studio, and the reason larger facilities have multiple isolation booths, is that each instrument can be recorded on a separate track. If necessary, the volume of each instrument can be changed independently from the rest of the group much more easily than if all the musicians had been together in one room.
Mixing in the early days with analog tape recorders sometimes looked like a performance. Each channel on an old-school mixer has a slider called a fader, named for the faders on lighting boards that turn an individual light up or down over a particular area of the stage. Engineers moved the faders on the mixer up and down to control each track’s volume while the song played back from the multitrack tape recorder; the resulting stereo mix was recorded on a second tape recorder. Extra engineers were sometimes called in to lend additional pairs of hands when the mix became complicated, and the process had to be repeated as many times as necessary, or as patience and budget allowed, to make all the right moves from start to finish over the course of the song. It was hard to fix small details noticed in the days that followed a session, since all the controls would have to be set back up by hand to the way they had been, and all the fader moves that had originally been made correctly would have to be repeated.
Most instruments get louder and brighter the higher they go, which is one reason songwriters usually have the melody of the chorus sung on higher notes to cut through the band and stick in the listener’s memory. The opposite effect happens for low notes, which usually come out softer and duller.
Sometimes it is left up to engineers to fix these sorts of problems. One of the tools a computer program provides is automation: the recording of the moves made on the mixer’s controls, which are then stored as part of the song session file. Numerous layers of automation can be recorded one by one, allowing the engineer to focus on a different aspect each time the song plays, instead of having to get everything right at once. For example, the first time through the song, the engineer could record changes to the volume faders on the singer’s and guitarist’s tracks. The next time the song is played, those fader movements will be recalled and repeated automatically while the volume of the drums and bass guitar is adjusted. Automation data can be edited just like any other parameter, so you can fix one part without having to change everything else. If you decide a week later that everything is perfect except for a couple of final details, you can open up the song session file and fix just the necessary parts while leaving everything else as it sounded before.
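Under the hood, an automation lane is essentially a list of time-stamped control values that the program replays. The sketch below (with made-up breakpoint times and levels, and not how any particular DAW stores its data internally) shows how stored volume automation could be turned into a per-sample gain curve and applied to a track:

```python
import numpy as np

sample_rate = 44100
n = 4 * sample_rate                                                  # four seconds of audio
track = 0.5 * np.sin(2 * np.pi * 220 * np.arange(n) / sample_rate)   # stand-in for a recorded track

# An automation lane: (time in seconds, fader level) breakpoints,
# e.g. riding the level up at 2.0 seconds for a louder section.
breakpoint_times  = [0.0, 1.9, 2.0, 3.5, 4.0]
breakpoint_levels = [0.8, 0.8, 1.0, 1.0, 0.7]

# Replay the lane: interpolate a gain value for every sample and apply it.
times = np.arange(n) / sample_rate
gain_curve = np.interp(times, breakpoint_times, breakpoint_levels)
automated = track * gain_curve
```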
Automation is written while a track plays back, not while it is being recorded. The track itself stays in play mode rather than record mode, so that its audio is not erased. Studio One from PreSonus (the program used for this publication) has five automation modes: off, read, touch, latch, and write. A simplified sketch of how these modes behave follows the list.
- Off. Any previously recorded automation is ignored.
- Read. Plays back any previously recorded automation. It is recommended to set the Automation Mode to “off” or “read” when you don’t intend to record any automation, because otherwise any adjustments you make to experiment with the settings will be recorded and subsequently repeated each time the song is played back.
- Touch. Any changes you make are recorded as long as you are touching a physical control or holding down the mouse button. As soon as you let go, the setting reverts to its previously recorded position. This mode is useful if you want to make a quick adjustment, like boosting the volume of part of a solo in the middle of a section, and then let the control glide back to where it was.
- Latch. Any changes are recorded until you press the stop button. You don’t have to stay in contact with the controls while playback is engaged.
- Write. The current positions of the controls write over whatever automation was recorded before. This mode is often used the first time through.
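To compare the five modes side by side, here is a hypothetical, much-simplified Python model of what one fader does on a single playback frame. It only illustrates the descriptions above; it is not PreSonus Studio One code, and the function and its parameters are invented for this sketch.

```python
def fader_frame(mode, stored, fader, touching, latched):
    """
    Decide which value the channel uses for one playback frame.

    stored   -- the automation value previously recorded at this point
    fader    -- where the on-screen or physical fader sits right now
    touching -- True while the engineer is holding the control
    latched  -- True once the control has been touched (latch mode only)

    Returns (value_used, value_to_store, latched).
    """
    if mode == "off":
        return fader, stored, latched        # ignore stored automation entirely
    if mode == "read":
        return stored, stored, latched       # play automation back, record nothing
    if mode == "touch":
        if touching:
            return fader, fader, latched     # record while the control is held...
        return stored, stored, latched       # ...then revert to the stored data
    if mode == "latch":
        if touching:
            latched = True
        if latched:
            return fader, fader, latched     # keep recording until playback stops
        return stored, stored, latched
    if mode == "write":
        return fader, fader, latched         # overwrite whatever was there before
    raise ValueError(f"unknown automation mode: {mode}")
```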
Fade-ins and fade-outs
Gradually turning up the volume of a track is called fading in. Fading out is the opposite process: gradually reducing the amplitude of a signal. Both terms are inherited from the film industry. In the early days, filmmakers were afraid to cut from one scene directly to another because they thought it might confuse the audience, so they would fade to black before switching scenes. Fade-outs are sometimes used in songs to give the impression that the musicians are continuing to play off into the distance even after the song ends.
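As a small illustration (a Python sketch with arbitrary lengths, not tied to any particular program), a fade-in or fade-out is just a gain ramp applied to the first or last stretch of a track:

```python
import numpy as np

sample_rate = 44100
song = 0.8 * np.sin(2 * np.pi * 440 * np.arange(10 * sample_rate) / sample_rate)  # 10-second stand-in

fade_seconds = 3.0
fade_len = int(fade_seconds * sample_rate)

# Gain ramps: 0 -> 1 for a fade-in, 1 -> 0 for a fade-out.
fade_in = np.linspace(0.0, 1.0, fade_len)
fade_out = np.linspace(1.0, 0.0, fade_len)

song[:fade_len] *= fade_in      # gradually bring the volume up at the start
song[-fade_len:] *= fade_out    # let the band "play off into the distance" at the end
```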
Getting Started with Music Production is for anyone interested in developing a more efficient and creative approach to music production, and it’s structured so thoughtfully that it can be used as a textbook for a modular, activity-oriented course presented in any learning environment. The fundamental concepts and techniques delivered in this book apply seamlessly to any modern DAW. The book includes 73 video tutorials, formatted for portable devices, that help further explain and expand on the instruction in the text. All supporting media is provided exclusively online, so whether you’re using a desktop computer or a mobile device, you’ll have easy access to all of the supporting content. Buy it at HalLeonardBooks.com.
Robert Willey grew up on the San Francisco peninsula, studied classical piano and performed with the Palo Alto Chamber Orchestra, attended Stanford University, and earned a master’s in computer music and a Ph.D. in theoretical studies from the University of California San Diego. He taught at the State University of New York Oneonta for three years and at the University of Louisiana at Lafayette for 11, and has been at Ball State University in Muncie, Indiana, since 2013. Other books by Willey include Brazilian Piano – Choro, Samba, and Bossa Nova.