By Frank Demilt

THE POST PRODUCTION PROCESS

So, you have completed the first step of your song creation process and are ready for step two. Your vocals are all recorded. Your lead sounds clear, your stacks and backgrounds are in time with the lead, and your adlibs are cohesive with the rest of the vocals (or however you choose to do them, because let's face it, most artists' adlibs are all over the place, have nothing to do with what they are saying, and consist of a bunch of yelling and noises). Now it is time for step two, my favorite part of the process and the step that makes your track ready to be heard by the public: the mixing and mastering of your song. Before going further, I will say this: mastering is important, as it makes your song comparable to all other released songs and easier to play across all streaming platforms. But as a new artist who may not have the budget for mastering, or an understanding of what it is, you can get away with skipping it, for now anyway.

There are two options to get your song mixed. Just like recording, you can do it yourself (which I don't recommend, as this process is more complicated than recording and requires a deep understanding of sound and of how each plugin works), or you can send your session to a mixing engineer. If you choose to send it out, it is important to find the right engineer. Each engineer has a genre they understand and mix better than the rest. For me it was R&B and hip-hop: I listened to those genres more than any other and worked with more of those artists when recording, so they were my wheelhouse, because I knew what those tracks were supposed to sound like when finished. If an artist sent me a country track, honestly, I would do my best, but it might turn out sounding more like a hip-hop track in the end, which wasn't the sound the artist was going for. The job also differs depending on whether you send the engineer your vocals over a two-track (an MP3 version of the beat you used, normally meaning you got it from YouTube or another beat site, either for free or for a low price) or the full tracked-out stems of the beat (each instrument separated onto its own track). If you just have the two-track, that is fine; most new artists don't have access to the full stems, and honestly this makes the engineer's job a little easier, as they don't have to worry about mixing each individual instrument and can focus solely on the sound of your vocals and balancing them against the two-track. I personally liked getting the stems, because you have more control over how the entire song sounds and can manipulate everything to fit together cohesively, but it is more work for the engineer and will cost you more money. A mix will run you anywhere from $20 all the way up to thousands of dollars for the top mixing engineers.
Be careful, though: a $20 mix may not sound the best. But if you are on a budget and that is all you can afford, by all means spend the $20 to have an engineer do the mix for you, especially if you are just starting out and don't understand the mixing process.

What if you choose to say, "Fuck it," and do the mix yourself? You feel confident enough that you can get the job done and don't want to worry about sending your music off to an unknown person who doesn't share your sonic vision. Maybe you feel that if you mix your songs yourself, not only will you save money, but you will have full creative control over what the song and your vocals sound like. I commend you for your confidence. If you go this route, you can spend as much time as needed to get your song sounding the way you want. A mixing engineer, by contrast, will spend whatever time they have on your song and send the MP3 back to you; the lower the price, the less time they will likely spend, and some will simply drop your song into a premade mixing template, tweak a few settings, and send it back (a top engineer may spend even less time, because they have no vested interest in you as an artist). If you like the mix, great. If you don't, you are either stuck with it or, if you are lucky, you will get a few revisions, though rarely more than three before you have to pay extra to fix what you don't like. This can be frustrating: if you don't get the sound you wanted and you are out of revisions and/or money, you are stuck with the mix as-is and faced with a choice of sending it to another engineer and hoping they get it right, releasing the song with a mix you don't like, or scrapping the song completely because it doesn't sound how you wanted. This is a tough place to be. Countless artists have come to me saying an engineer screwed up a song they wanted to put out, and they were never able to release it because that engineer couldn't get the mix right. All the time and effort you put into recording that song is now nothing more than practice, and a lesson learned not to use that engineer again.

You have learned your lesson and are now going to mix the song yourself. Where do you start? For this example, let's assume you recorded to a two-track, so you are only mixing your vocals and not each individual instrument. Everyone starts at a different place, with a different technique; with just vocals, I like to start with the hook, because it is the most important part of the song, the part you want people to sing and remember. Before you start mixing, go on YouTube and watch some of the literally thousands of videos on mixing techniques, including mixing vocals for different genres. There is a lot that goes into this, and not knowing how the different plugins work, how each element should sound, and where each part of the song should sit is a monumental task for a beginner. Trust me, it took me years before I understood even how to operate the software plugins, and with more being released each year, I still don't know them all. First, I would start with an EQ. This is used to cut or boost different frequencies in your vocals. Each part of your voice lives in a different frequency range and affects how your voice sounds. To keep this article basic, I won't go too deep into manipulating your vocals, so I suggest starting with a preset from the plugin's manufacturer. Every EQ ships with vocal presets (many types, for the different places a vocal can sit in a mix, but again, I will keep this surface level). These presets are a good starting point, as they boost and cut frequencies that are troublesome in almost every vocal. Cutting the lows is where I begin: take out everything below about 70 Hz. This cuts the boominess from the voice, and in most cases those frequencies can't be heard by most listeners anyway (especially on headphones, laptop speakers, and phone speakers, where most people now listen to music).
Somewhere around the 200-500 Hz range is where I cut next, as this is where a boxy, unpleasant sound comes from. Boosting some of the high range, usually around 10-12 kHz, adds airiness and presence to the vocals, but be careful, because this can cause sibilance and a harsh "ess" sound. Every vocal is different, so you have to play around with the frequencies to find what to boost and cut for your voice. Finally, be careful throughout: cutting too much can make the vocals sound thin, and boosting too much can make them sound harsh or bassy, depending on where you are boosting.
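To make the low cut concrete, here is a minimal sketch in Python of a first-order high-pass filter rolling off everything below roughly 70 Hz. Real EQ plugins use much steeper, more sophisticated filters; the function names here are illustrative, not any particular plugin's controls.

```python
import math

def highpass(samples, cutoff_hz=70.0, sample_rate=44100):
    """First-order high-pass: rolls off energy below cutoff_hz.

    Derived from a simple RC filter: a = RC / (RC + dt).
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = []
    prev_in = prev_out = 0.0
    for x in samples:
        y = a * (prev_out + x - prev_in)  # standard one-pole HP recurrence
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

Feeding in a constant (0 Hz) signal, the output decays toward zero, which is exactly the "remove the boom below the cutoff" behavior described above.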

Second comes compression. There are many types of compression and many things compression can be used for. Engineers order these steps differently, but I like to de-ess first, using a plugin called a de-esser to tame some of the harsh "ess" sounds that come out when recording. After that I use different compressors depending on the sound I am looking for; each compressor imparts its own unique character to the vocal even before you change the settings. Something to understand about compressors: too much compression will squish the vocal, making it hard to hear and giving it a literally squashed sound. Each compressor (at least the basic ones) has settings that include threshold, attack, release, input, and output. The threshold is the level at which the compressor begins to work: until the vocal hits this dB level, the compressor won't activate. The input is the level of the sound going into the compressor, and the output is the level coming out. With vocals these two are usually correlated: the hotter the input, the more you turn down the output, and vice versa, because if the level is louder coming in, it needs to come down on the way out to keep the overall level balanced. The attack and release controls determine how quickly the compressor's gain reduction reacts to changes in the input level: the attack sets how fast the compressor reduces gain once the signal crosses the threshold, while the release sets how fast the gain reduction lets go afterward.
If the attack and release times are too short, lengthening them makes the compressor react more slowly, so it tracks longer-term level variations (such as those between your verse and chorus) rather than short-term ones (such as the level jumps between individual words and syllables), which usually handles that kind of balance problem more musically.
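To show how threshold, ratio, attack, and release interact, here is a minimal feed-forward compressor sketch in pure Python. The specific numbers (a -18 dB threshold, 4:1 ratio, 10 ms attack, 100 ms release) are illustrative assumptions, not any plugin's defaults.

```python
import math

def db_to_lin(db):
    return 10 ** (db / 20)

def lin_to_db(x):
    return 20 * math.log10(max(abs(x), 1e-9))

def compress(samples, threshold_db=-18.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, sample_rate=44100):
    """Above the threshold, reduce gain toward ratio:1; the attack and
    release coefficients smooth how fast that reduction engages and recovers."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = 0.0  # smoothed gain reduction in dB (negative = reducing)
    out = []
    for x in samples:
        over = lin_to_db(x) - threshold_db
        target = -over * (1 - 1 / ratio) if over > 0 else 0.0
        coeff = atk if target < env_db else rel  # falling gain uses attack
        env_db = coeff * env_db + (1 - coeff) * target
        out.append(x * db_to_lin(env_db))
    return out
```

A signal sitting 18 dB over the threshold at 4:1 settles at 13.5 dB of gain reduction, but only after the attack time has elapsed, which is the "reacts how fast" behavior described above.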

Now that you have your EQ and compressor settings where you like them and the vocals are sounding good, it's time to add the efx. The sound you are going for determines which efx you use and how much. Take The Weeknd, for example: most of his vocal tracks have a lot of efx, because that ambiance is the sound he is going for. Listen to most rap tracks, though, and the vocals have very few effects, because the focus is on the lyrics and not the efx around them. That said, nearly all vocals have reverb and delay on them. Reverb gives the perception that the vocals are sitting in a particular type of room or space. As a beginner, I recommend using one of the plugin's presets and simply finding which one sounds best on the vocal you are treating. The lead vocal's reverb will be at a different setting and/or level than the backgrounds and adlibs, to put them in different spaces and separate them from each other. Delays also give the vocal a sense of space, but can additionally be used on certain words to make them repeat after being sung. There is a variety of delays for different purposes, but to keep it simple, a quarter-note delay is pretty standard and should give you the desired effect. The shorter the delay, the faster the words repeat; the longer the delay, the longer it takes for the word to repeat after being sung. These efx should be set differently for the verse and the chorus, as you want one to stand out from the other, but for the most part they will be pretty similar.
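A quarter-note delay is just tempo math (a quarter note lasts 60/BPM seconds), and each repeat getting quieter is a feedback loop. Here is a toy Python sketch; the `feedback` and `mix` amounts are illustrative assumptions, not preset values.

```python
def quarter_note_delay_ms(bpm):
    """A quarter note lasts 60/BPM seconds, i.e. 60000/BPM milliseconds."""
    return 60000.0 / bpm

def feedback_delay(samples, delay_samples, feedback=0.4, mix=0.3):
    """Echo the signal every delay_samples; each repeat is `feedback`
    times quieter, and `mix` sets how loud the echoes are vs. the dry vocal."""
    buf = [0.0] * delay_samples  # circular buffer holding the delayed signal
    out = []
    for i, x in enumerate(samples):
        delayed = buf[i % delay_samples]
        buf[i % delay_samples] = x + delayed * feedback  # feed echoes back in
        out.append(x + delayed * mix)
    return out
```

At 120 BPM a quarter-note delay works out to 500 ms, which matches the rule of thumb that faster tempos need shorter delay times.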

Mixing the backgrounds and adlibs is different from mixing the lead vocals. You want the lead to stand out from the rest, as it is the main vocal of the song. Your backgrounds (or stacks) are there to support your lead: they enhance certain words and give emphasis to what you are saying in certain parts of the song. These vocals don't necessarily need to be fully heard, but they should be audible. The stacks will have tighter compression and a different EQ setting so they don't interfere with the lead. They should sit behind the lead and, in some cases, as stated before, not be distinctly heard, but used as a power efx behind the lead to give emphasis. The adlibs are vocal efx that should be separated from all the other vocals in the song. They need to be heard, but should never be louder than the lead and should never overpower any other vocal. I generally like to put a telephone efx on the adlibs to separate them from the lead and get them out of the way, so they can be heard without interfering with or taking away from the lead. Every engineer and artist has a different perspective on how adlibs should sound, and this is usually an artist preference the engineer should adhere to. In the case of Migos, their adlibs sound like their leads but act as accents: they don't interfere with the lyric, they simply emphasize the last line or word and fill the space between lead phrases. Eminem uses adlibs as vocal effects that don't always comment on the lyrics directly, but instead complete the story he is telling in the lead. Some artists don't use adlibs at all in certain songs, because they are unneeded. Be careful with adlibs: too many can take away from the lead and make the song cluttered and busy, whereas too few can leave too much empty space if your lead vocals have a lot of breaks in them.
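The telephone efx mentioned above is essentially a band-pass: cut the lows and the highs so only the midrange survives (classically around 300-3400 Hz, the old landline bandwidth). Here is a toy sketch chaining two simple one-pole filters; real telephone-style plugins add distortion and much steeper slopes, and these function names are mine, not a plugin's.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Gently rolls off everything above cutoff_hz."""
    dt = 1.0 / sample_rate
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    a = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    """High-pass = original signal minus its low-passed copy."""
    low = one_pole_lowpass(samples, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(samples, low)]

def telephone_fx(samples, sample_rate=44100):
    """Band-limit to roughly the 300-3400 Hz telephone band."""
    return one_pole_lowpass(one_pole_highpass(samples, 300, sample_rate),
                            3400, sample_rate)
```

Because the low end is gone, the adlib stops competing with the lead's body and fullness, which is exactly why it sits "out of the way" in the mix.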

The last step in post production is mastering. This means taking the final mix and making it translate the same way on all platforms. There are programs and plugins you can use to master yourself, but nothing beats a mastering engineer. If you are just starting out, this step can again be skipped, but to be comparable with the bigger artists you need to master your songs. This process keeps the level of your song consistent on each platform you release on, so your song isn't louder or softer on one platform compared to the others. There are other aspects to mastering, including further compression and EQ, but for the sake of a beginner mixing article I won't get into those. Each platform plays songs back at a different loudness level and uses its own normalization algorithm to get there. Make sure you are accounting for these standards, or your song will not only fail to translate properly from platform to platform, but may sound crushed or distorted on some of them.
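For intuition, matching a platform's playback level is just a gain offset in dB. Here is a sketch assuming you already know your master's integrated loudness; streaming services commonly normalize to somewhere around -14 LUFS, though targets vary by platform, and actually measuring LUFS requires a proper loudness meter that this toy code does not implement.

```python
def gain_to_target_db(measured_lufs, target_lufs=-14.0):
    """dB of gain needed to move a master from its measured integrated
    loudness to a platform's playback target (assumed -14 LUFS here)."""
    return target_lufs - measured_lufs

def apply_gain_db(samples, gain_db):
    """Apply a dB gain change to raw sample values."""
    g = 10 ** (gain_db / 20.0)
    return [x * g for x in samples]
```

The takeaway: a master crushed to -9 LUFS gets turned down about 5 dB on such a platform anyway, so squashing the mix for loudness buys nothing and can leave it sounding distorted, as described above.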

Once you have the mix finalized to your liking, and the mastering makes your song consistent across all platforms, you are ready for release.
