
Mixing orchestra - Reverbs, EQs and other tricks

Discussion in 'Tips, Tricks & Talk' started by Patryk Scelina, Jul 4, 2017.


  1. I don't use VSS actually (but I did test their demo a while ago). I didn't know that harp libraries are mostly recorded in the center. The libraries I have harps from are recorded in position. However, to your questions: I would highly recommend not using it with samples recorded in position, so for instance Spitfire, Orchestral Tools etc. By using those with VSS you mess up the natural ER / room, which will end up sounding strange.
    For dry instruments, there are afaik 3 important steps (rough sketch below):
    1. Creating ER
    2. Reverb tail
    3. EQ
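    Very roughly, that chain looks like this (just a sketch; the file names are made up and the ~80 ms ER split is only an example):

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    # hypothetical files: a bone-dry instrument and a mono hall IR
    x, fs = sf.read("dry_violin.wav")
    ir, _ = sf.read("hall_ir.wav")

    split = int(0.08 * fs)                 # treat the first ~80 ms as the ER
    er_part, tail_part = ir[:split], ir[split:]

    placed = fftconvolve(x, er_part)       # 1. ER positions the instrument in the room
    tail   = fftconvolve(x, tail_part)     # 2. reverb tail supplies the decay
    n = min(len(placed), len(tail))
    y = placed[:n] + 0.5 * tail[:n]        # tail level to taste

    # 3. EQ the result (roll off rumble, tame harshness), then bounce it
    sf.write("violin_in_room.wav", y / np.max(np.abs(y)), fs)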
     
    Patryk Scelina likes this.
  2. Thanks Alex. That's what I thought. Just needed a confirmation.
     
  3. I'm personally not a fan of its reverb so I only use it for positioning (I select empty field), and mainly for centered or dry libraries. Never for ones recorded in place. And I'll actually use it after the reverb, since the room sound would hit the mics and then the instrument would get naturally positioned. I use it for piano, chimes, solo strings... Everything that's dry-ish, middle and in-your-face. It works wonderfully.

    There are no rules, though. Do whatever sounds the best to you.
     
    Steve Schow and Patryk Scelina like this.
  4. Hi there again. I don't want to start a new thread, so I'm writing here and hope some of you will notice it :)
    Is there anyone using the Hollywood Sound IRs from Numerical Sound? I noticed some time ago that there's a demo by Mike on their website. But I'm just wondering, is anyone actually using it on a daily basis? I saw there are multiple IRs for specific instruments, so it looks like you'd need a supercomputer to run 50 instances of a convolution reverb plugin to make an individual bus for every single instrument or ensemble. Any thoughts?
     
  5. Don't know about that library, but it does sound interesting. Does it also offer multiple mic distances for each position?
    I run 42 instances of Altiverb for Sample Modeling Brass and 9 for Clarinets (3 mic positions for each instrument). One more for strings, one for Adventure Brass, one for the rest of the woodwinds. ¯\_(ツ)_/¯
     
  6. I am surprised at the reverb counts. Don't think I have more than 5 in my current template (one per section, one "glue" reverb).
     
  7. I'm not quite sure if they made different IRs for specific mic positions. I'm convinced they just made different IRs for specific instrument placements, in combination with a tilt filter to simulate the timbre change with distance. But still, using that many reverb instances sounds a little bit crazy to me. I don't use more than 10 reverbs at once, and even with that my system overloads from time to time.
     
  8. #48 Aaron Venture, Jun 19, 2018
    Last edited: Apr 29, 2020
    2020 edit: Disregard all, just use Precedence + Breeze.
    _______________________

    It's because I change the "Size" slightly for every instrument in the family, so that the early and late reflections get ever so slightly more spread out - their timing changes. You can also do it by having all the early+late reverbs spread out on receives, and send the instruments to them. But I would still need to use 3 Altiverbs on Sample Modeling, because of the "Direct" option in Altiverb. I believe this just takes the first sample of the IR recording and coats the incoming sound in that. It simulates the instrument going through a set of microphones, and that's the most important thing with Sample Modeling. And then I mix the three together, much like you would set up a mic mix in any library.

    The result (Berlin Trombone 1 till 0:07 vs. SM Trombone after 0:07, shit playing but that's beside the point :D): https://www.dropbox.com/s/lmbnez83ej5vzo1/trombones exp legato no honky.mp3?dl=0

    If I use only one instance of Altiverb, or any other IR reverb, and just fiddle with the mix, I can't get even close. Well, one Altiverb gets part of the way there, but not quite.

     
  9. @Aaron Venture that is a remarkable demo. I am completely astonished. Do you think this would work for VSL instruments if starting with just the dry samples? And if so, how many instances of Altiverb would be needed for 3 trumpets, 4 horns, 3 trombones, and tuba?
     
    Rohann van Rensburg likes this.
  10. I use 3 instances of Altiverb per instrument, so 33 :D I actually did this for the tuba in Tennessee River, so yes, it would definitely work. A word of caution: Altiverb uses ~100 MB of RAM per instance.

    A little bit more on the subject.

    The direct mic is crucial for dry instruments because it will conform them to the sound of the room and the microphone response. Signal X enters an algorithmic reverb, and the reverb reacts to it, and sends out signal X+R. Right?

    Okay, now, an impulse response was recorded in a certain room by setting up two calibrated speakers and playing a sine sweep, speaker by speaker, which is then captured by two microphones. Mic L will capture the right speaker just like Mic R does, but Mic L will be a bit quieter, a tiny bit less shiny (air absorption) and slightly late (longer distance than to Mic R). You then do the same for the left speaker. Then you mix the two signals down and deconvolve the final stereo file. This is called True Stereo. Standard stereo would be to have one speaker play a sweep that is captured by two mics, and the recordings are simply panned left and right, then deconvolved. Altiverb has these IRs in the browser under "Mono source" for each room.
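    Roughly, the sweep/deconvolution part looks like this in numpy (a sketch of the standard exponential-sweep method, not Audio Ease's actual process; the mic recordings are placeholders):

    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48000
    T = 10.0                                 # sweep length in seconds
    f1, f2 = 20.0, 20000.0                   # sweep range
    t = np.arange(int(T * fs)) / fs
    R = np.log(f2 / f1)

    # exponential sine sweep played from one speaker
    sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

    # inverse filter: time-reversed sweep with the level correction
    inv = sweep[::-1] * np.exp(-t * R / T)

    # rec_L / rec_R = what the two room mics captured while the sweep played
    rec_L = np.zeros_like(sweep)             # placeholder
    rec_R = np.zeros_like(sweep)             # placeholder

    # deconvolution collapses the sweep back into an impulse, leaving the room
    ir_left_speaker = np.stack([fftconvolve(rec_L, inv), fftconvolve(rec_R, inv)], axis=1)

    # repeat with the other speaker and you have the four channels of a
    # "True Stereo" IR (each speaker into both mics)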

    So your dry sound enters the convolution reverb, and the room reacts to its properties. That is then captured and colored by the two microphones (the sweep was captured by the microphones). If you now dial in the MIX, you will hear your original source plus the colored and mic'd room sound. So if your original sound is X, and the room is R, the room sound goes through the two microphones, which are YY. So NO-DIRECT Altiverb outputs RYY, and you dial down the mix knob and now you have X mixed with RYY. This is cool for adding some tail to a recording, a library, whatever. It doesn't work for creating a believable space for bone-dry instruments. You need to hear the colored source along with the colored room. All of it has to go through the mics. It needs to be XYY + RYY. You can then do with the final result as you wish, use Haas panning and whatnot. But the original source needs to go away completely.
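    In convolution terms it's something like this (stand-in signals only, and the "first ~5 ms" for the direct part is my guess, not how Altiverb actually does it):

    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48000
    x = np.random.randn(fs)                               # stand-in for a bone-dry source
    n = int(2.0 * fs)
    ir = np.random.randn(n, 2) * np.exp(-np.arange(n) / (0.6 * fs))[:, None]  # stand-in stereo IR (already mic-coloured)

    # the room through the mics: RYY
    wet = np.stack([fftconvolve(x, ir[:, ch]) for ch in range(2)], axis=1)

    # "no direct": dry source mixed straight in -> X + RYY
    mix = 0.5
    no_direct = mix * wet
    no_direct[:len(x)] += (1 - mix) * x[:, None]

    # "direct": the source itself is coated by the very start of the IR,
    # so it goes "through the mics" too -> XYY + RYY
    head = ir[:int(0.005 * fs)]
    coated = np.stack([fftconvolve(x, head[:, ch]) for ch in range(2)], axis=1)
    with_direct = wet.copy()
    with_direct[:len(coated)] += coated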

    Also, when you load all these Altiverb instances on your instruments, you need to adjust the Size knob per instrument. So your Trombone 1 has all of its 3 Altiverb instances at 100% size. Trombone 2 has them at 105%. This will "enlarge" the room by stretching the sample and correcting the pitch-shifting that occurs because of it. The percentage is low enough that you won't actually notice the stretch, but now your reflections are slightly different. That's enough to fool you into believing that you're hearing two trombones sitting next to each other (once you do proper imaging) instead of just two trombones heading into a reverb. Also, use the pitch modulator, which will "liven up" the room. I don't have depth at 100%, usually between 20 and 60%, and I set a different rate for each instance of Altiverb (even on the same instrument). One increment is enough for them to not pitch-modulate at the same time.
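    If it helps to picture what a few percent of Size does, think of it as a tiny time-stretch of the IR with the pitch-shift corrected. A rough sketch (librosa's phase-vocoder stretch as a stand-in for whatever Altiverb really does internally; the IR file name is made up):

    import librosa

    fs = 48000
    ir, _ = librosa.load("hall_ir.wav", sr=fs, mono=True)

    def resize_ir(ir, size_percent):
        # stretch the IR in time without changing pitch, like nudging "Size":
        # 105% makes every reflection land a touch later
        rate = 100.0 / size_percent          # rate < 1 lengthens the IR
        return librosa.effects.time_stretch(ir, rate=rate)

    ir_trb1 = resize_ir(ir, 100.0)   # Trombone 1: room as captured
    ir_trb2 = resize_ir(ir, 105.0)   # Trombone 2: slightly "larger" room
    ir_trb3 = resize_ir(ir, 110.0)   # Trombone 3: larger still

    # convolving each trombone with its own variant decorrelates the early
    # reflections just enough to read as separate players in the same hall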

    If you don't want to do the Size thing, you might as well just drop 3 Altiverbs on your trombone bus and adjust the imaging on your individual trombones before they head into Altiverb; since it's the exact same convolution, you might as well save some RAM and CPU cycles. Make sure to at least have different pitch modulation for each instance.

    I'd urge anyone (who wants to and has Altiverb) to try this out and then experiment on their own and find which solution they like best.
     
  11. @Aaron Venture I am struggling to understand. It is not your fault. I have always had problems understanding mixing concepts. Please bear with me as I try to clarify what you have told me.

    But first, in your example using Berlin Brass, did you just use the Tree Mic? If not, what mics did you use?

    OK, starting with Sample Modeling or VSL, both completely dry, you first put the signal into an instance of Altiverb. Does it matter what venue? You then send the 100% wet output from Altiverb to a second instance of Altiverb. This would be your hall sound, like Mechanics Hall. That output, which could be about 50% wet, you then send to your stereo output. Is that right? Where does the third instance of Altiverb come into play?

    This is the big reason I like MIR Pro. It is easy. But there is no arguing with your skill. I love how you made the tuba sound in my piece.
     
  12. It's alright!

    Tree at 0 dB, close at -5ish, ambient at -3ish.

    Oh, that's embarrassing. I didn't explain the main thing. I'm sorry.

    You pick whatever room you want. Make sure the mix is 100% wet. Go to the Positioner and turn it on. For the 5m mic (or whichever closest one you choose as your close mic), move the positioner all the way forward and shorten the decay time. That's your close mic. You can see how I set up my three mics in the positioner in the picture above.

    Now, Reaper has very flexible routing. Like cables in a mixer. Only you have infinite cables. It has this
    upload_2018-6-20_16-57-0.png

    for every single insert or synth. The left one is input, the right one is output. That's for the current track. So I take the Close Altiverb and change its output to 3 and 4. After that comes the Ambient Altiverb, which takes its input from 1 and 2. Because the Close Altiverb before it is outputting to 3 and 4, the signal on 1 and 2 is the same one that enters the Close Altiverb. I now route the Ambient Altiverb to 5 and 6. Lastly, I put the Main Altiverb, and it receives on 1 and 2 and outputs to 1 and 2.

    I then drop Reaper's Channel Mixer and set the inputs accordingly. You can also see it in the picture: Main is 1/2, Close 3/4, Ambient 5/6, and it outputs to 1/2. After it, 1 and 2 are overwritten by that output. This is my one-track mic setup. It's all in one track, it's clean, and the Channel Mixer is really easy to use as a mic mixer.

    I don't know about Cubase's routing, but you can send the output of your instrument to 3 new tracks. Mute the original (or remove the master send or whatever it's called—you don't want to hear that track, just the 3 you're sending to) and then drop 1 Altiverb on each of them. One is Close, one is Main, one is Ambient. You then adjust the volumes of these tracks as if they were individual mic positions (they technically are now).
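    Outside any particular DAW, the whole routing boils down to three parallel convolutions of the same dry signal, summed with mic-mix gains. A rough sketch (file names are made up; the 0 / -5 / -3 dB are the levels I mentioned above):

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    def gain(db):
        return 10 ** (db / 20.0)

    x, fs = sf.read("dry_trombone.wav")      # hypothetical dry mono instrument
    mics = [("main_ir.wav",    gain(0.0)),   # hypothetical 100%-wet stereo IRs,
            ("close_ir.wav",   gain(-5.0)),  # one per positioned "mic"
            ("ambient_ir.wav", gain(-3.0))]

    out = None
    for path, g in mics:
        ir, _ = sf.read(path)
        # each "mic" = the same dry signal through its own positioned IR
        wet = np.stack([fftconvolve(x, ir[:, ch]) for ch in range(2)], axis=1) * g
        if out is None:
            out = wet
        else:
            n = min(len(out), len(wet))
            out = out[:n] + wet[:n]

    sf.write("trombone_in_room.wav", out / np.max(np.abs(out)), fs)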

    For starters, you can just copy my Altiverb settings from the picture to get a feel for it. Each one also has a scoop at 3700 Hz, about -3.5 dB.
    upload_2018-6-20_17-6-17.png
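    If your reverb doesn't have a built-in EQ, that scoop is just a peaking band at 3700 Hz pulled down about 3.5 dB. A sketch using the standard RBJ cookbook biquad (the Q of 1.0 is my assumption, tweak to taste):

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(f0, gain_db, q, fs):
        # RBJ Audio EQ Cookbook peaking-band coefficients
        A = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 48000
    b, a = peaking_eq(3700.0, -3.5, 1.0, fs)
    # scooped = lfilter(b, a, altiverb_output)   # apply per channel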
     
  13. Aaron, I am sending you a PM.
     
    Bradley Boone likes this.
  14. Hi Aaron, can you go into a little more detail regarding Precedence and Breeze? Would you still use Altiverb as a send?
     
