In light of my more popular blog posts I decided to really give the game away here and post up some of my favoured mix/production techniques.
A review of my commercially available music (http://www.rpgnow.com/product_info.php?products_id=57994&src=co...) made it clear that the freely available music on my website is not of the quality of my current output. To this end I've decided to write a whole new set of material for the site. One thing this brought to light was how much more effort I now put into production, so I thought it was worth sharing more about it on the blog.
A big collection of hats
Working from a home studio on media music requires you to take on a lot of different roles. Composition and arrangement or orchestration is just the beginning. It is highly unlikely that you will be using an engineer or producer. You'll probably even master the music yourself. Like most composers, I started this with absolutely no interest in mixing/production, but increasingly I've become aware of just how important it is to the finished product. If you are creating music for your own pleasure then you might not need to worry about production, but if you are working on commercial music then I think it is vital for your music to sound professional, and it's worth learning how to mix it yourself.
What do you have in mind?
I think one problem with many people's mixes is that they don't really have a frame of reference to begin with. When creating a track I think it's very important to be aware of what you expect the end result to sound like. I'm not particularly talking about composition or arrangement here but more about the overall sonic quality of the track. Do you want it to sound "honest", "live", "realistic", or do you want it to sound overcompressed, punchy or larger-than-life? One common mistake is to start off worrying whether digital orchestration sounds absolutely realistic. Never forget that the closest reference most people have is recordings of real orchestras, and these are already processed and changed by their very nature. From using spot microphones to lift instruments out of a mix, to compressing bass instruments, to adding reverb at the mixing desk, most orchestral recordings have already been altered to some extent. At the very least, the sound has been coloured by the microphones and (often) by recording to tape.
My best recommendation when starting out is to try and use existing recordings as your benchmarks. For me, I've been working with the great soundtrack recordings and these are fantastic to learn from.
Getting the layout sorted
Now before we run straight into the mix I want to briefly mention the sonic positioning of instruments. One of my previous blog articles deals with how I do this but effectively you'll need to be able to pan instruments left to right and use reverb (and volume level) to create depth. In all but the most abstract or electronic scores you'll probably want to keep some element of consistent positioning throughout the mix.
You'll generally be working with sound sources that are: -
1. Mono
2. Stereo with existing positioning (and usually with ambience in the sound)
3. Stereo with full panning (i.e. across the entire stereo range)
For number 1 you can use the regular panning controls. For number 2 you don't need to do anything if they're already in the right place (otherwise treat them as number 3). For number 3 you will probably need a power panner that lets you adjust both the stereo position and the stereo width. Some DAWs include one, Vienna Ensemble has one built in, and I also use Waves S1 Stereo Imager for this purpose.
It is worth listening to how any stereo narrowing affects your instruments. Excessive reduction of a stereo image can induce really bad phasing; if this happens it's probably worth just taking one channel of that instrument and treating it as mono. I've also found that reducing the width on my True Strike percussion samples makes them sound like they're in a tiny room, so I no longer do this. The point here (and throughout the mixing process) is to listen to the instruments before and after the effect is applied and listen out for these kinds of problems.
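If you're curious what a width control is actually doing, here's a rough Python sketch of it in mid/side terms. This is not what S1 or Vienna Ensemble literally do internally (they add their own refinements); it's just the underlying arithmetic, with made-up example signals:

```python
import numpy as np

def adjust_width(stereo, width):
    """Scale the stereo width of an (N, 2) signal via mid/side.

    width = 0.0 collapses to mono, 1.0 leaves it unchanged,
    values > 1.0 widen the image.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2.0    # shared (centre) content
    side = (left - right) / 2.0   # difference (width) content
    side = side * width           # narrow or widen
    return np.stack([mid + side, mid - side], axis=1)

# A hard-panned source: signal only in the left channel.
x = np.zeros((4, 2))
x[:, 0] = 1.0

mono = adjust_width(x, 0.0)  # width 0 -> both channels become identical
```

You can also see from the maths why extreme narrowing invites phase trouble: the more the two channels disagree (large side content), the more cancellation you get as the side signal is scaled away.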
For depth placement you are likely to need a good reverb. I use Altiverb for this purpose but there are algorithmic reverbs that work well enough (Sonnox Reverb has an excellent reputation for depth placement). With convolution reverbs it is worth experimenting with various mic distances for the different depths (e.g. I use close-mic impulse responses for the nearer instruments). It is also important to use pre-delay well. I usually use somewhere around 40-50ms for the early reflections and 50-70ms for the tail. If the values are too low then the reverb clutters up the original sound; too high and it sounds detached from the source (more like a delay effect). Using Altiverb is an article in itself so I won't expand on it much here.
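Pre-delay itself is simple to picture: the reverb is shifted later in time relative to the dry sound. A toy Python sketch of the idea (purely conceptual; in Altiverb you'd just turn the pre-delay knob, and the impulse response here is a made-up three-sample echo):

```python
import numpy as np

SAMPLE_RATE = 44100

def apply_predelay(impulse_response, predelay_ms, sr=SAMPLE_RATE):
    """Delay a reverb impulse response by prepending silence, so the
    reverb starts predelay_ms after the dry sound when convolved."""
    gap = np.zeros(int(sr * predelay_ms / 1000.0))
    return np.concatenate([gap, impulse_response])

# Toy IR: a single decaying echo. 50 ms of pre-delay at 44.1 kHz
# pushes it 2205 samples later.
ir = np.array([1.0, 0.5, 0.25])
delayed = apply_predelay(ir, 50.0)
```

The trade-off described above falls out of this picture: a tiny gap means the reverb piles onto the attack of the note, while a very large gap separates the reverb so much that the ear hears it as a distinct delay.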
Ok, so here are some of my basic tricks that really help to bring a mix to life. These are very much from the modern school of mixing and you may wish to temper them a little for a more traditional sound.
Keep this frequency clear
Low frequencies really, really clutter up a mix and add "mud". They are often enhanced by reverb and can really be a problem when used with broadband compressors. It is usually a good idea to try and remove a lot of this.
Close microphone recordings have a "proximity effect" which can build up their bass frequencies ... http://en.wikipedia.org/wiki/Proximity_effect_(audio)
Therefore it can be a good idea to run most of your sounds through a high-pass filter and drop everything below 50Hz at the very least. Vocals and guitars should probably lose everything below around 80Hz. Pick which instruments will define your low end and leave these unfiltered. They are likely to be big percussion and/or bass instruments. For everything else it's worth filtering out the unnecessary low frequencies. This step alone can really help to clean up your mix and also give you a lot more headroom to raise the overall levels.
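To see what this filtering does in practice, here's an illustrative Python sketch using a Butterworth high-pass from SciPy: a 50 Hz cutoff strips sub-bass rumble while leaving a 1 kHz tone essentially untouched. The cutoff, filter order and test signals are example values of my choosing, not settings from any particular plugin:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100

def highpass(signal, cutoff_hz, sr=SAMPLE_RATE, order=4):
    """4th-order Butterworth high-pass: rolls off content below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

# One second of 30 Hz rumble plus a 1 kHz tone. A 50 Hz high-pass
# keeps the tone and removes most of the rumble.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
rumble = np.sin(2 * np.pi * 30 * t)
tone = np.sin(2 * np.pi * 1000 * t)
cleaned = highpass(rumble + tone, 50.0)
```

The headroom point follows directly: the 30 Hz component carries a lot of energy the listener barely perceives, and once it's gone the whole mix can be raised before anything clips.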
Another thing worth considering is to shelve off some of the bass in the reverb. Similarly, increase the damping on low frequencies to prevent a build-up of echoes.
After this it is possible to cut or boost specific frequencies for instruments. This is usually done to allow them to cut through the mix. I try not to do this too much as it can really make things sound unnatural, but often I'll boost a little high frequency or upper mids to help something cut through. Just be sure that you're not boosting multiple instruments around the same frequency. Give each instrument its own specific frequency range. Also, instead of boosting just the one frequency heavily, consider boosting multiples of the frequency (so 500Hz, 1kHz, 2kHz) each by a smaller amount.
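The "several small boosts" idea can be sketched with standard peaking-EQ maths (the Bristow-Johnson "Audio EQ Cookbook" formulas, which a lot of EQ plugins are based on). This is illustrative Python, not how Waves Q or GlissEQ are actually implemented, and the gain and Q values are arbitrary examples:

```python
import numpy as np
from scipy.signal import lfilter, freqz

SAMPLE_RATE = 44100

def peaking_eq(f0, gain_db, q=1.5, sr=SAMPLE_RATE):
    """RBJ 'Audio EQ Cookbook' peaking filter; returns (b, a) coefficients."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def gentle_boost(signal, freqs=(500, 1000, 2000), gain_db=1.5):
    """Several small boosts at octave-spaced frequencies
    instead of one big boost at a single frequency."""
    for f0 in freqs:
        b, a = peaking_eq(f0, gain_db)
        signal = lfilter(b, a, signal)
    return signal

# Measured gain of a single +1.5 dB peak at its centre frequency.
b, a = peaking_eq(1000, 1.5)
w, h = freqz(b, a, worN=[1000], fs=SAMPLE_RATE)
centre_gain = abs(h[0])
```

Chaining three +1.5 dB peaks lifts a broad presence region gently, which tends to sound less obviously "EQ'd" than a single +4 or +5 dB spike.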
For my EQing I tend to just use the built-in stuff in Cubase but I also use Waves Q. If I want to add shine to strings then I either use Voxengo GlissEQ or PSP Neon.
Squash and squeeze
Adding compression can really be a useful way of controlling volumes for brass and percussion and it's certainly a trick I use. Hopefully you will have already attenuated the bass frequencies so the compressor can function more effectively. Generally you'll want a moderate attack time so the initial transient gets through untouched. Just a slight amount of compression will often help control these sections effectively. Again, you need to play with the sound yourself to hear the effect.
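For the curious, here's a bare-bones Python sketch of what a feed-forward compressor is doing conceptually: a smoothed level detector with attack and release times, and gain reduction above a threshold. Real compressors like C1 add knee shaping, look-ahead and so on, and the threshold, ratio and time constants here are arbitrary examples rather than recommended settings:

```python
import numpy as np

def compress(signal, threshold_db=-18.0, ratio=3.0,
             attack=0.01, release=0.1, sr=44100):
    """Feed-forward compressor sketch: above the threshold, each extra
    dB of input level yields only 1/ratio dB of output level."""
    atk = np.exp(-1.0 / (attack * sr))   # envelope smoothing coefficients
    rel = np.exp(-1.0 / (release * sr))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        coeff = atk if level > env else rel
        env = coeff * env + (1 - coeff) * level  # smoothed level detector
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1 - 1 / ratio)        # gain reduction in dB
        out[i] = x * 10 ** (gain_db / 20)
    return out
```

The attack coefficient is why the transient survives: the detector takes a few milliseconds to register the new level, so the very front of a drum hit or brass stab passes before the gain comes down.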
My main compressor is Waves C1. In addition I'll sometimes use Sonnox Inflator on these sections to add a little grit.
Reaching the limit
Ok ... I'm about to give you an example track here and one thing you should hopefully notice is how loud it is. I listened to some similar demos on various websites and realised just how loud people can make tracks these days. There are a lot of ways to get this effect. Remember that dropping any unneeded low frequencies will free up headroom for that extra volume. After that you are going to need to look at various tools.
The first trick is to check whether you can ride the faders for your track. This is sometimes an effective way to keep the track loud without losing too much of the dynamics.
After that you should probably look at any really high peaks. Can they be compressed or limited without losing the effect? Often an entire track can be made a lot louder when a couple of peaks are limited.
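The arithmetic behind this is worth seeing. In this Python sketch a track mostly sits at a moderate level but has one stray peak near full scale; clamping that peak (a crude stand-in for a proper look-ahead limiter like L1, with made-up numbers) buys roughly 3 dB of extra gain before clipping:

```python
import numpy as np

def limit(signal, ceiling=0.7):
    """Hard-clip sketch of a limiter. Real limiters look ahead and
    smooth the gain change, but the headroom arithmetic is the same."""
    return np.clip(signal, -ceiling, ceiling)

# A track that mostly sits around 0.5, with one stray peak at 0.99.
track = np.full(1000, 0.5)
track[500] = 0.99

# How much the whole track can be turned up before hitting full scale:
headroom_before = 1.0 / np.max(np.abs(track))        # barely any gain
headroom_after = 1.0 / np.max(np.abs(limit(track)))  # noticeably more
```

That single sample at 0.99 was holding the entire track's level hostage; once it's tamed, everything else can come up with it.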
Then we need to look into loudness tools. I tend to use various combinations but I recommend Izotope Ozone 3, Sonnox Inflator and Nomad Factory D82 Sonic Maximizer. They all have great qualities.
So ... hopefully this gives some insight into my technique. Here is a brief example of a finished track. http://www.jamessemple.com/music/epicjourney.mp3
The main theme is played using VSL Epic Horns and bass trumpets. I've added the Fanfare Trumpets for the higher notes. Drums are a big combo of TAIKO, CineToms, True Strike and Storm Drum. Strings are VSL Appassionata plus EW 70-piece ensemble patch. Low brass is a mix of Project SAM and EW.
The drums and brass hits are a little compressed and have some Sonnox Inflator on there. All reverb is done with Altiverb. The final bus has D82 followed by L1 limiter. For most other recent tracks I've used Ozone instead. Just depends what works at the time.
If people have questions then I'm happy to expand on subjects I've covered here.