
Mixing an orchestra - Reverbs, EQs and other tricks

Discussion in 'Tips, Tricks & Talk' started by Patryk Scelina, Jul 4, 2017.

  1. Hey guys. First of all, thanks to Mike for starting this forum. I think it's gonna be fun to talk here with every one of you.
    I'm currently deep in a painful process of experimenting with mixing samples, trying to get the best-sounding orchestra possible.

    I've already done dozens of different template setups with various combinations of reverbs, EQs, groups and sends to find what works best for me. I used to separate writing from mixing, so it was a little easier to make everything sound cohesive in a separate session.

    Now I'm mostly struggling with my setup's performance. I'm trying to find the lightest way possible to make it sound good. I started building templates with zillions of groups and sends for every single instrument family to generate early reflections, tails, etc.

    So far I've ended up using an algorithmic reverb for ERs and the free Samplicity IRs for the hall reverb, with a different send level for every instrument depending on its position in the room. Plus, I find it quite useful to add the free plugin Proximity to push some instruments a little further back in space.

    So I'm starting this thread for two reasons:
    1. I think it would be cool for everybody to share some ideas for getting good-sounding templates and mixes.

    2. I'd like to ask if you have any advice on how to seamlessly match two different libraries; some wet samples with Samplemodeling, for instance.

    And a second question: do you guys use any tricks to make the sound of the orchestra richer? I find my template isn't bad, but the overall sound of individual instruments isn't as wide and full as it should be. I've been comparing it to some Star Wars and Indiana Jones scores, and on those recordings even a solo horn can fill the entire room, while a sampled horn, even with strong reverb, sounds weak and thin, if you know what I mean.

    I hope there will be a few others who want to discuss this subject.
    Best wishes to all :)
     
  2. Great, and big, questions. I can tell you that wherever possible I try to use dry sends/mics and start by trying to unify the EQ character of the direct, dry sends. I also sequence my stuff dry so I don't hide bad performances in the reverb wash. Garbage in/garbage out. My second step is the positioning, L/R and front to back, which I mostly do with Haas delay tricks, and then a little EQ work for things farther away. In general, I try to get that right and then put a general reverb on the whole thing, just like a real orchestra gets. But in my class on it (Virtuosity, I believe), I talk about comparing my stems with the ones Shawn Murphy did of my last recording, and reproducing the section-mic bleed that happens, where, say, the violins will pick up some of the trumpets, even in the direct mics. This has the effect of smoothing out the room somewhat, I find. It's a bit complicated to set up at first, but you get used to it.
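    If it helps to see the Haas part concretely, here's a minimal sketch of the idea (Python with numpy; the 12 ms delay and the sine stand-in are purely illustrative, not recommendations):

        import numpy as np

        # Haas/delay panning: duplicate a mono source to L/R and delay one
        # side slightly; the image pulls toward the earlier channel without
        # reading as a distinct echo.
        sr = 48000
        t = np.arange(sr) / sr
        mono = np.sin(2 * np.pi * 440.0 * t)       # stand-in for a dry instrument

        delay_ms = 12.0                            # illustrative value only
        d = int(sr * delay_ms / 1000.0)

        left = mono                                # earlier channel: image pulls left
        right = np.concatenate([np.zeros(d), mono])[:mono.size]
        stereo = np.stack([left, right], axis=1)   # frames x channels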
     
  3. Thanks for the reply, Mike.
    I've studied your "Virtuosity" class a couple of times, and I'm still not quite sure how to set up that bleeding. I understand your concept perfectly, but I'm just not sure how it changes the feel of an instrument if you send strings to the brass group, for example, while they're using the same reverb. Isn't that just summing the same signal over and over again? The only thing that could make a difference would be to send strings to the brass ERs, because those reverbs are really different for every family.

    By the way, I use those Shawn Murphy stems you added all the time. They're the best material for analysis I've got so far.
     
  4. Yes, it's the ERs I send around the room, not the dry signals. So the Strings' directs get a little of the Brass's ER, for example. That's the sound in the stems.
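    A toy model of that routing, in case it helps (a sketch only; Python, with made-up section names, random noise standing in for audio, and an invented 0.15 bleed gain):

        import numpy as np

        sections = ["strings", "brass", "winds"]
        bleed_gain = 0.15                          # how much foreign ER leaks in

        # stand-ins: one second of audio per section at 48 kHz
        dry = {s: 0.10 * np.random.randn(48000) for s in sections}
        er = {s: 0.05 * np.random.randn(48000) for s in sections}  # pretend ER returns

        bus = {}
        for s in sections:
            mix = dry[s] + er[s]                   # each section's own dry + own ER
            for other in sections:
                if other != s:
                    mix = mix + bleed_gain * er[other]  # a touch of everyone else's ER
            bus[s] = mix                           # note: the dry signals never cross-feed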
     
  5. It's not a huge offering, but I have to say that I think many people at the intermediate level (me included) give way too much thought to their reverb setups vs. everything else musical in their arrangement. Not poo-poo-ing your concerns, but getting your arrangement, orchestration (I suck at this myself) and instrument balance right will get you much further and solve many problems, at which point the reverb becomes a far smaller issue.

    That said, I do like Mike's suggestion of keeping things as dry as possible while working. I generally do the same, and with libraries like Spitfire or Hollywood, I use close samples most of the time.

    One other thing: added reverb is smoke and mirrors. It's necessary, yes, but at the same time, if you want to write epic percussion with massive strings and brass, all doing action-level acrobatics, a standard hall reverb with 2.8 to 3.5 seconds of decay just becomes a mess. It's one of the reasons I've been straying from Spaces a bit lately. Right now I'm really liking my algorithmic reverbs, because I can easily adjust the decay. Just a thought. Hope this helps. I've been concentrating a LOT on mixing lately, and while I'm not the best, I've come a LONG way in the past two years. I'm starting to make more moves based on instinct rather than calculating room characteristics. So far, no complaints from the clients, only praise.

    OT: I hope this forum works, Mike. Feels like a fresh start. I like it.
     
  6. One thing that really helped me in my template was having a high-pass filter on basically every instrument group. I use Spitfire stuff primarily, and there can be a ton of low-end buildup when everything is playing at once. That low end makes sense for low strings, low brass, percussion, etc., but it's just junk for high woodwinds or violins. The built-in EQ in Cubase shows the frequency content before and after EQ, so I try to make sure I'm not cutting any frequencies essential to an instrument; I just want to get rid of any buildup below 100 Hz or so.

    For low-end instruments specifically, I might still have a high-pass filter with the cutoff at 30 Hz or thereabouts. For what I'm doing (game music), no one is really going to hear anything that low anyway. If I'm adding tons of big, booming percussion, I might move the cutoff for low strings and brass up a bit more, to say 50-60 Hz, to give the drums more space.
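    A minimal sketch of that kind of filtering, if it helps (Python with scipy assumed; the cutoffs mirror the values above, everything else is a placeholder):

        import numpy as np
        from scipy.signal import butter, sosfilt

        def highpass(audio, cutoff_hz, sr=48000, order=4):
            # Butterworth high-pass; higher order = steeper rolloff
            sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
            return sosfilt(sos, audio)

        violins = 0.1 * np.random.randn(48000)     # stand-in for a violin group bus
        violins_hp = highpass(violins, 100)        # junk below ~100 Hz removed
        basses = 0.1 * np.random.randn(48000)      # stand-in for a low strings bus
        basses_hp = highpass(basses, 30)           # keeps the real low end intact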

    This simple change vastly improved my mockup mixes and cut down on needing to add close mics to every section (a RAM killer) or push volume levels to unnatural places to get everything balanced. It also saves you from having to do dramatic EQing on the final master bus that might alter the overall sound of the mix in an unnatural way.
     
  7. I believe this is as good a place as any to start posting.

    What I hear a lot of mixes missing is the stereo image of an orchestral recording. It's that "placed, but nice and wide" sound you hear on recordings, where instruments sometimes seem, on certain notes, passages or dynamics, to be coming from the opposite side. Or the difference in the stereo image between a library's close mics and its tree mics. Besides the room, that comes from the recording technique - the Decca tree. It yields similar results to the stereo AB technique. The mics used are usually Neumann M50 omnidirectionals. There's quite a lot of material if you want to read up on the technique, but the basics are the same as stereo AB: sound coming from one side will hit each mic at a slightly different time.

    So that explains why the "Delay Trick" works, or rather, what exactly it's emulating.
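    If you're curious, the math behind it is simple enough to sketch (Python; the mic and source positions are invented, in meters):

        import math

        SPEED_OF_SOUND = 343.0                     # m/s at roughly 20 C

        def arrival_ms(source, mic):
            return math.dist(source, mic) / SPEED_OF_SOUND * 1000.0

        # a rough spaced-omni / Decca-tree-ish layout: left, center, right
        mics = {"L": (-1.0, 0.0), "C": (0.0, 0.5), "R": (1.0, 0.0)}
        horn = (4.0, 6.0)                          # a source off to the right, further back

        times = {name: arrival_ms(horn, pos) for name, pos in mics.items()}
        deltas = {name: t - times["C"] for name, t in times.items()}
        print(deltas)                              # the inter-mic delays the trick emulates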

    Now, obviously, each instrument will have different delay times to each mic due to its position in the recording space, and if you want to do the math, knock yourself out. However, there's a neat piece of software that'll do the work for you: Virtual Sound Stage 2. It also makes a decent attempt at creating early reflections, earning extra points because it morphs the ER with position. The way it works is that you drop it on every close-mic'd track you want to mix in, then place the instrument where you want on the GUI, or with the knobs. The instances talk to each other across the session, so it doesn't matter which one you open; you can modify any of them. It also has a decent simulation of air absorption. You can drop the ER simulation entirely by choosing the "Open field" space. To get the Decca tree back, VSS2 also has a number of mic-setup simulation options, as well as XYZ positioning, which is reflected in changes to stereo width, air absorption and ER amount/tone.

    And man, do I love their Decca Tree simulation. It has made my life so much easier.

    Apparently (at least I think so, because of how the positions of the instruments change based on mic position) it adjusts the delay times to the mics based on the position of each instrument, per source (since it has an instance on every instrument/source). The stereo image is wide, rich, and homogeneous, and you're still able to discern where the sounds are coming from.

    Here's a quick Sample Modeling Brass demo with 1 Horn, 1 Trombone, 1 Trumpet and a Tuba in a hall. The 1st playthrough is positioned in Spat, the 2nd playthrough is positioned with VSS2 plus its Decca tree simulation.

    I'll also note that I always use it AFTER the reverb on the instrument. The Audio Police may stop and ask why exactly the same signal is hitting all three mics when they're spaced apart in the room, and therefore the room response should be different at each position; sadly, there isn't much that can be done about that before it becomes too much of a hassle for too little improvement in the final result - it'd be easier to simply go and record it live. And if I put VSS before the reverb, the short delay it introduces on both channels gives me a small metallic ring in the higher frequencies, probably because of delay multiplication (when ERs get simulated in the reverb itself).
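    For what it's worth, my guess is that ring is plain comb filtering: sum a signal with a very short-delayed copy of itself and you get notches at odd multiples of 1/(2 x delay). Back-of-the-envelope (Python; the 1 ms delay is just an example):

        delay_ms = 1.0                             # a short VSS-style delay, for example
        first_notch_hz = 1000.0 / (2.0 * delay_ms) # 500 Hz for a 1 ms delay
        notches = [first_notch_hz * (2 * k + 1) for k in range(5)]
        print(notches)                             # [500.0, 1500.0, ..., 4500.0]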

    But sure, using a simple delay to try and achieve similar results will get you pretty far, although from personal experience, the results aren't nearly as consistent as with VSS2.
     
    Thanks Aaron. I've been using a combination of the delay mixer in Cubase to pan the signal, plus EAReverb for ERs. If you don't know it, it's an algorithmic reverb which also has a positioning feature. Works great for me, since it doesn't just add more wet signal and pan; it also colors the sound as the distance increases. I tried a demo of VSS2, but I wasn't sure my system would handle so many instances along with all the instrument tracks.
    So for now I use send channels with EAReverb for every instrument group, each with different distance settings.
    The exception is the Samplemodeling stuff, which I treat with EAReverb as an insert; that allows me to get rid of the dry signal almost entirely, and I try to match the sound to the other instruments. But I've found it very difficult to imitate the real sound of a solo instrument in a room. When I listen to a recording and then play the same line with my VI, it sounds like a fake instrument with reverb, you know :D I need to find out how to get that color. I guess it's mostly EQ work along with the reverb, but I'm still not so happy with my results.
     
    Paul T McGraw and Aaron Venture like this.
  9. I forgot to stress that this is the part that's most jarring in the bad mixes I hear. Tree-recorded samples are used, and then a close-mic'd sample/recording/instrument gets dropped in, and it's that difference in stereo image that sounds incredibly off and jarring. If I'm writing a pop tune, EDM, whatever, the piano can go in as it is; I drop some reverb on it and it's fine. But if I'm going to mix it into an orchestral setting where most of the samples were recorded with tree mics, I have to put VSS on it (or use the delay trick, if you will).

    I believe VSS2 is rather light on resources, especially if you disable the ER simulation. I have about 30 instances in my mix and it all runs in realtime without trouble on an i5-4690. The trouble starts when I enable reverb on every channel, so I keep it off per-channel while working and simply enable it when exporting.

    As for the Sample Modeling stuff, you gotta kill the dry signal. I use Altiverb only for the "Direct" coloring (does wonders, especially for trombones), and slap reverb on top of that.
     
    Runar Lundvall likes this.
  10. Hi Patryk

    I am by no means an expert here. As a matter of fact, mixing is one of my weaker points, but I'll share some tips that have helped me make progress. The first is to keep my template as unprocessed as possible. Spitfire samples are the core of my template, so I leave all of those with default mic positions and no processing while I'm writing (they are plenty "wet" as is). Then I feed all of my non-Spitfire stuff through a reverb bus, just so the spatial difference isn't too jarring while I'm writing.

    After that, your piece is going to dictate a lot of the mixing choices. If the piece is primarily con sord string pads with a horn solo, that horn will need a lot of low-mid energy to "fill up the room," as you say, and a lot of verb tail. If your piece is primarily tutti writing, you're going to have to EQ-cut that same horn in a few places to avoid mud. I know it sounds a bit simplified, but letting the composition tell me what needs to be done mixing-wise was really a big step for me.
    From there, it's preference and workflow that dictate.
     
    Noam Levy and Patryk Scelina like this.
  11. Hey Mike!

    I just completed your Virtuosity class, and _loved_ it! :) Even though your DAW and mine (Cubase) have somewhat different layouts, could you elaborate on the chain for bleeding sections, like you talk about in the masterclass? I've already set up a completely new template, balanced out with dynamics, volume, and panning using the stereo-delay option you show in your class.

    Say I have 3 trumpets, routed to a Trumpet group, coloured with stereo delay for panning and some Altiverb IR. How would the chain go to make the trumpets bleed into other sections of my choice?

    Thanks for all your comments, videos and tutorials, both on your site and on VI-forum! Next up for me is Orchestration 1 :)
     
  12. @Runar Lundvall
    I use Cubase 9 Pro, so this should work for you. There are literally a dozen different ways to do this, but I'll explain one here. The first thing you'll have to do is set up your trumpet verb on a send channel. Cubase 9 Pro makes this really easy; I think Cubase calls them FX channels. There are plenty of YouTube vids that demonstrate it. Use the "Sends" section of the trumpet track to adjust how much direct trumpet signal you send to your FX channel. Do the same for another section (Woods, for example). Now, to "bleed" the trumpet verb into the woods section, go to the "Sends" section of the TRUMPET FX channel (the one with the trumpet reverb inserted) and route some of the trumpet's reverb signal to the woods group channel. BOOM. Done.
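    In picture form, the flow is roughly this (using the names from the example above):

        Trumpets track --send--> Trumpet FX channel (trumpet verb insert)
                                     |
                                     +--send--> Woods group channel   <-- the "bleed"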

    g
     
  13. Just curious: does that procedure only make sense when working with reverb on FX send channels? I don't even use added reverb with my Spitfire templates because I don't really need more reverb. So I guess bleeding makes no sense in my case?
     
  14. Ah, simply wonderful! Thank you so much for your answer. I use Cubase Pro 8.5, and it worked out _exactly_ as you said it would! :) Do you use this method with Sample Modeling libraries? Just curious, as I've struggled to get the sound to sit "in the room" without the obvious close-mic sound sitting at the front of what you're hearing. Is using Altiverb as an insert on the direct signal and adjusting the wetness the same, sound-wise, as blending the dry mics with Altiverb on a send? There may be a totally obvious answer to this, but I'm just a noob trying to get a grip on this whole mockup obsession that many of us seem to share ;)
     
  15. Hey @Alexander Schiborr
    True, Spitfire stuff is plenty wet enough. I use a mix of Spitfire (strings and brass) with Berlin Woodwinds and some 8Dio woods as well. The 8Dio and BWW stuff need some verb just to sit with the Spitfire, so I use some sends there. I also dial in some close mics with the trees in SF for extra bite. Then I run the entire orchestral mix through a very light convolution verb with a Bricasti IR, just for some glue. From time to time I may bleed the BWW verb over to the brass group, but not often.
     
  16. Yeah, I've worked with the Sample Modeling stuff and the VSL stuff. You have to really know what you're doing to get a hall sound out of them. For me, what really worked was using two separate convolution reverbs with separate IRs: one for early reflections and one for the tail. Then you really have to work to dial those in.
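    Offline, the idea looks something like this (a sketch only; Python with scipy, synthetic decaying noise standing in for real ER/tail IR files, and placeholder levels):

        import numpy as np
        from scipy.signal import fftconvolve

        sr = 48000
        dry = 0.1 * np.random.randn(sr)            # stand-in for a dry (e.g. SM) track

        def fake_ir(seconds, decay):
            # decaying noise as a placeholder for a real impulse response
            n = int(seconds * sr)
            return np.random.randn(n) * np.exp(-np.linspace(0.0, decay, n))

        ir_er, ir_tail = fake_ir(0.08, 8.0), fake_ir(2.5, 6.0)   # short ERs, long tail

        er_level, tail_level = 0.5, 0.35           # the "dialing in" happens here
        wet = (er_level * fftconvolve(dry, ir_er)[:dry.size]
               + tail_level * fftconvolve(dry, ir_tail)[:dry.size])
        out = 0.2 * dry + wet                      # keep only a little of the dry signal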

    I used to do the same thing with LA Scoring Strings. Take a look here:

     
    Runar Lundvall likes this.
  17. Thank you so much for your video and comments! I've tried to get the same sound Mike Verta got in his SM demo, using his tricks from the Virtuosity masterclass. I also took your advice about using VSS and an IR (Altiverb). This is my result so far:


    Next up is trying the bleeding option and seeing where it gets me! :)
     
  18. Nice! You're pretty much there. I know plenty of people (some on this forum) who like their brass close and bright. You've got that now. Everything from here on is preference and taste. Now you can play with the balance of wet and dry signal, or slap a vintage EQ over the whole thing for some subtle warmth.
     
    Runar Lundvall likes this.
  19. Thanks again, for tips and comments! Really appreciate it :)
     
  20. I found that doing the section-mic-bleed thing led to some phase issues. Do you do something, maybe routing-wise or EQ-wise, to avoid that?
     
