
EastWest PLAY -- Articulations, Purging Samples, etc

Discussion in 'Template Balancing' started by Rohann van Rensburg, May 16, 2019.

  1. Hey all,

    Is there any way to optimize PLAY beyond the EW recommended setting adjustments? Wondering if there's a way to purge samples like in Kontakt so I can load up the template but only use what I need, rather than having all EW articulations loaded. The lack of keyswitching is a real pain.

    Also wondering if there's a "recommended articulations" list anywhere. Strings is more obvious to me, brass and winds less so.
     
  2. PLAY isn't as flexible as Kontakt in the memory department; you can't play from a fully purged instrument the way you can in Kontakt. There are a handful of ways to minimize RAM usage:
    - Stream more from disk by changing the sample cache (SSD required).
    - Reduce the total number of loaded microphones.
    - Reduce the loaded articulations (e.g. unused articulations in a keyswitch patch), and "purge" unused samples after they're flagged.
    - Disable PLAY as an instrument in your DAW (or in VEPro).

    As far as I know, PLAY doesn't support playing purged samples on the fly, Kontakt-style. Finally, it's cumbersome, but if you can "freeze/render/bounce in place" a track in your DAW, you can save the audio and unload the instrument, which frees up system resources. Editing becomes a pain, though: if you want to change the underlying MIDI, you have to reload the instrument, which severely slows down the creative process.

    As for keyswitching, EW has some keyswitch patches, but you can also use expression map capabilities if your DAW supports them. It is because of this lack of memory flexibility that I have been moving more to Kontakt-based libraries (even though I own the full Hollywood Diamond suite). If you have a specific situation, then I can try to provide you more tailored advice.
     
  3. Thank you!
    Re: RAM: I'm using SSDs, so I changed the cache, but it seems to affect CPU, which is somewhat concerning. Do you find HWB and HWW to be as heavy as HWS? Good idea unloading unused keyswitch articulations. It's a shame they have no purge ability the way Kontakt does; it's extremely convenient. Freezing is certainly a last-resort option, but as you say it can be frustrating creatively. That said, I'm mostly writing on piano and orchestrating in notation, so I should have a solid idea of what I want before I get to the MIDI performance.

    Haven't looked into expression mapping, will have to do so. What libraries do you find yourself using now?

    Something else that drives me nuts is how many patches are velocity-triggered in other libraries. It makes libraries like SO, Ra, Silk, Ghostwriter, etc. considerably less useful.
     
  4. No DAW has as full-featured an articulation mapping system as Cubase, but Logic and Reaper (with Reaticulate) do quite well. It's indispensable for people like me who don't like excessive track counts.
    I still use HWSO a fair amount (in a disabled template), but the major Kontakt libraries work better for me with a purged setup. By major libraries, I mean the flagship developers: Orchestral Tools, Cinesamples, and Spitfire Audio. I have a mishmash of libraries from each of them.

    Yes, the inconsistencies of articulation controls in EW are a pain; I like Spitfire's flexible approach best (UACC, keyswitches, CC1 or velocity control, etc.). I suppose you get to know a library better the more you work with it, just like an instrument. You should also check out some of the playable libraries if you're a keyboardist; some have been developed by members of this board.
     
  5. Studio One is what I'm using, and it's unfortunately still quite young, so there's no expression mapping yet. It's a highly requested feature, though, and they've been pretty good with updates. If they actually add it, I'll happily upgrade to the newest version.

    That makes sense. There are a lot of things that irritate me about Kontakt but it's a more full-fledged interface than PLAY.

    All my EW stuff is "rented" currently, albeit at student pricing: $240 CAD a year, which isn't bad, but I'd eventually like to own things, as EW doesn't meaningfully update their libraries. Conveniently, Mike uses HWS in his Template Balancing video, so I may as well get used to it for the time being. I'm really interested in Aaron's brass library, though, and in a good woodwinds library; those seem more difficult to come by.
     
  6. #6 Rohann van Rensburg, Jun 4, 2019
    Last edited: Jun 4, 2019
    @Bradley Boone: I just noticed there are "Master Keyswitch" patches in Hollywood Strings. Did these not exist before? I imagine one would just balance them within the mixer window, but I wonder if this would be more efficient. It's curious that Mike doesn't use them; I'm not sure if he explained why in the class.
     
  7. I don't know what you mean by "master keyswitch" patches. There are finger position keyswitch patches that have KSFP in the name, and the sustain keyswitch patches which just have KS. The brass and winds have sustain/short keyswitch patches as well.

    You balance them in the articulation section of the Player window (not the mixer window). The level control isn't very precise: you click on a knob and drag up or down to set the output volume for the articulation samples (or release tails, which have RT in the name). I think the volumes go from -60 dB to +12 dB, but it's a challenge to hit an arbitrary number like -3.31 dB precisely. All of the sound gets routed to the mixer, where you can mix outputs and mic positions, but articulation balancing is only available in the Player window.
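    As a side note, those dB values map to linear gain by a fixed formula, which is handy for judging how big a change a knob move really is. A quick sketch in plain Python (this is just the standard decibel math, not tied to PLAY in any way):

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a decibel value to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier back to decibels."""
    return 20 * math.log10(gain)

# The articulation knobs span roughly -60 dB to +12 dB:
print(db_to_gain(-60))  # 0.001 (almost silent)
print(db_to_gain(0))    # 1.0 (unity)
print(db_to_gain(12))   # ~3.98 (about 4x amplitude)
```

    So a -6 dB knob setting is roughly half amplitude, and the whole -60 to +12 range covers about a 4000:1 ratio.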

    If you're looking to save RAM with the keyswitch patches, you can unload unused articulations by clicking the 2nd check box next to the level knob. For instance, if the piece doesn't call for trills or tremolo, feel free to dump them. Hope this helps, and if anyone else has a different approach, I'd love to hear it.
     
  8. #8 Rohann van Rensburg, Jun 6, 2019
    Last edited: Jun 6, 2019
    My mistake. They're just "Sus" keyswitch patches. Doesn't seem like most people use them?

    Thanks though, that does help. Will see what this template looks like when finished and will go from there.

    If you have time, would love to know how you balance Spitfire samples within Kontakt. Haven't really ventured there yet.
     
  9. Any tips on mic positions? I'm trying to follow along in the Template Balancing class and balance at the same time as Mike (it's a long class and I don't want to lose my spot), and I'm wondering why Mike has certain mic positions selected when he mentions wanting things as dry as possible. It rather drastically changes the panning and overall timbre, so I'm curious how to apply that.
     
  10. That's a great class! I need to re-watch it (Template Balancing) as well.

    So, there is a LOT of info to distill here to answer your question. First, the samples in this library were recorded "in place" in the standard studio seating. The close mics were recorded close up, in the center of each section, so there's little room sound in the sample; you hear the direct sound before, or shortly after, the early reflections that define the room size. The close mics are usually panned by default, while the other mic positions (Mid, Main, & Surround) are centered. There are two other mic positions in these libraries: Divisi, placed on the outside of each section and only available when you load a divisi patch, and "Vintage", a ribbon mic that replaces the Surround.

    Using close mics is common when you want less room sound, so you can spatialize the audio yourself. It's also helpful when blending with other libraries recorded in different venues. Close mics also have more pronounced bowing, articulation, and beater noise (for percussion), because those sounds fall off quickly with distance from the listener.
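    That falloff can be ballparked with the inverse-distance law for a point source in a free field. This is a rough model of direct sound only, and the mic distances below are made up for illustration:

```python
import math

def level_drop_db(d_near: float, d_far: float) -> float:
    """Drop in direct-sound level (dB) moving from d_near to d_far metres,
    assuming a point source in a free field (inverse-distance law)."""
    return 20 * math.log10(d_far / d_near)

# A close mic at 1 m vs a main array at (hypothetically) 8 m:
print(level_drop_db(1, 8))  # ~18 dB less direct signal at the main mics
```

    Every doubling of distance costs about 6 dB of direct sound, which is why bow and beater noise all but disappear in the farther positions.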

    Each mic position you add also SUBSTANTIALLY increases the amount of memory required. Essentially, using only close mics, you can add your own reverb with a send FX on a group of tracks, save that memory, and emulate the farther positions (plus EQ to push things back in the mix). This trades the CPU needed to calculate the reverb for the RAM of samples with baked-in room sound. Most major libraries ensure that their multi-mic recordings are in phase, so that when you output multiple positions you don't get destructive phase issues.
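    The phase point is easy to demonstrate numerically: summing two copies of a signal in phase doubles the level, while summing a copy against its polarity-inverted twin cancels completely. A toy sketch in plain Python (illustrative only, nothing to do with any particular library):

```python
import math

SR = 48000  # sample rate in Hz
tone = [math.sin(2 * math.pi * 440 * n / SR) for n in range(1000)]

def peak(sig):
    """Peak absolute sample value of a signal."""
    return max(abs(s) for s in sig)

# Two in-phase copies reinforce: the level doubles (+6 dB).
in_phase = [a + b for a, b in zip(tone, tone)]

# A copy summed with its polarity-inverted twin cancels to silence.
cancelled = [a - b for a, b in zip(tone, tone)]

print(peak(in_phase))   # ~2.0
print(peak(cancelled))  # 0.0
```

    Real multi-mic captures sit between these extremes, which is why out-of-phase positions sound hollow and comb-filtered rather than simply quieter.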
    ----- ----- ----- ----- -----
    There are two main ways of mixing multi-mic patches: 1) in the sample player (PLAY), out through one stereo channel, or 2) with separate stereo outs, mixed independently in the DAW.

    Option 1 is the least flexible for audio control, but it's the easiest to set up, and you manage fewer audio returns in your DAW. You just enable the mics and set the instrument to send all audio on one stereo pair (1-2, 3-4, etc.). Unlike a lot of Kontakt libraries, PLAY won't receive microphone volume changes via CC, so once you set the levels in the player, they're set. For instance, Close -6 dB, Mid 0 dB, Main -6 dB would keep those relative proportions throughout the recording process.

    Option 2 is more flexible, but more cumbersome to set up, and it requires more channels going into your DAW. You load the 3 mic positions, click the [...] icon below each mic (next to the [M] for mute), and set each mic to its own output. You'll also have to enable these channels in your DAW (or Vienna Ensemble Pro), or it won't listen for them and will only receive channels 1-2. Now you're receiving 3 stereo pairs and can automate them in your DAW. This is useful if you're changing the character of the sound (intimate, distant, soloist/clarity, section width, etc.). It's also handy to see the meters in your DAW's channel strips, so you can check relative levels without having PLAY open.

    If you're SUPER crazy and have more computer than you know what to do with, you can use 6 stereo outs (Close Short 1-2, Mid Short 3-4, Main Short 5-6, Close Long 7-8, Mid Long 9-10, Main Long 11-12) and route all of your instruments like that, controlling shorts and longs with separate reverb sends, but that isn't really worth it to me.
    ----- ----- ----- ----- -----
    I don't really have a preset or formula that I follow for mic mixing, but if I want more than one mic, I personally send them on separate channels. That way I can capture each mic's sound independently and use it, manipulate it, or mute it as needed, because I've already captured it. In Cubase, I just freeze and disable the instrument to free up resources. That locks out MIDI editing, but I can still manipulate the audio, or reload the instrument to correct bad MIDI programming.

    Hope this makes sense and helps.
     
  11. Thanks for this! I'm keeping it on hand as a reference, as I unfortunately don't have the greatest rig (a single i7 with only 32GB of RAM) and will have to keep an eye on where to save resources. I wish I could freeze and disable in Studio One (there may be a way, but I'm not aware of it). I'm inclined to mostly stick with one mic position, as it's a lot less resource-intensive; the problem with multiple mics is the reverb buildup that inevitably happens (though I'm not sure how problematic that actually is with larger templates). I only have Main and Close mics available to me (I don't have Diamond).

    So in the Template Balancing class, Mike's template is mostly close mics (for control, and for mixing different libraries), which he explains, but then a handful of articulations have multiple mics on. I imagine this is partly for the more specialized articulations, since he runs Vi1 and Vi2 together and not all specialized articulations are available in both sections, but I'm not sure why else. I'm not sure if it's a consistency problem within the library.

    Do you have a preference, generally, for recorded vs. applied reverb? Spitfire's Chamber Strings sound lovely in Air Studios, but it seems to be hard to blend convincingly.
     
  12. #12 Bradley Boone, Jun 29, 2019
    Last edited: Jun 29, 2019
    Be sure to put a high-pass EQ on your reverb channel to cut out all of the low end that isn't needed in the reverb. None of that low-end information needs to boom around the room, and cutting it significantly reduces reverb buildup.
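    To illustrate why that high-pass helps, here's a crude first-order high-pass in plain Python (about 6 dB/oct, nothing like a production EQ; the 300 Hz cutoff is just an example value): a 40 Hz rumble is knocked way down before it ever reaches the reverb, while a 2 kHz component passes nearly untouched.

```python
import math

SR = 48000  # sample rate in Hz

def one_pole_highpass(samples, cutoff_hz, sample_rate=SR):
    """Crude first-order high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[n] - samples[n - 1]))
    return out

def peak(sig):
    return max(abs(s) for s in sig)

rumble = [math.sin(2 * math.pi * 40 * n / SR) for n in range(SR)]   # low-end boom
mids = [math.sin(2 * math.pi * 2000 * n / SR) for n in range(SR)]   # musical content

# With a 300 Hz cutoff on the hypothetical "reverb send":
print(peak(one_pole_highpass(rumble, 300)))  # heavily attenuated
print(peak(one_pole_highpass(mids, 300)))    # passes almost untouched
```

    A real EQ plugin would use a steeper slope, but the principle is the same: the rumble never makes it into the tail, so it can't build up.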
    I haven't re-watched his tutorial in a while, but it's easier to match libraries with close recordings and place them in the same space (I'm sure that's what he's showing). It's also possible to match with A/B and Tree mics and a glue reverb over everything; just make sure you have the right send levels and pre-delays if needed. If you can turn off the early reflections on your tail reverb, I would do so (don't confuse the room with two sets of early reflections: the ones baked into the samples and the ones added by the reverb).
    I don't have SCS, but I do have some Spitfire stuff. If I'm matching another library, I loop some short staccatos from the library I'm matching against the same sample played in the Air studio, to align the tail lengths. I'm not the best mixer, but there's a lot of info around about using various reverb setups.

    I don't have a preference, but reverb without the proper delays kills the clarity of your mix. It can also be a crutch to hide bad playing (lack of expression) because you're covering it up with a wash of sound. Scoring stages seem to have shorter reverb tails than concert venues, so know what kind of sound you're trying to emulate. I think high sounds can handle more reverb than low, and long/slow notes can handle more reverb than short/fast, but you can really chase your tail trying to measure out every little detail and have complex channel routing schemes to get it all right.
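    On "the proper delays": one common way to ballpark a reverb pre-delay is from the path-length difference between the direct sound and the first reflection. The function and distances below are purely illustrative assumptions, not anything from this thread:

```python
def predelay_ms(reflection_path_m: float, direct_path_m: float, c: float = 343.0) -> float:
    """Gap in milliseconds between the direct sound and the first reflection,
    given both path lengths in metres and the speed of sound c in m/s."""
    return (reflection_path_m - direct_path_m) / c * 1000.0

# Hypothetical room: direct path of 10 m, shortest wall bounce travelling 17 m.
print(round(predelay_ms(17, 10), 1))  # 20.4 ms of pre-delay
```

    Bigger rooms mean longer reflection paths, hence the longer pre-delays you'd dial in to emulate a concert hall versus a scoring stage.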

    If you have Spaces II or Altiverb (or any good convolution reverb; something already in your DAW will probably be fine) and a good algorithmic reverb that can modulate the tail and run without early reflections, then you should be good. Experiment with send levels, panning, and EQ, and find what you like. Unfortunately there isn't a one-size-fits-all solution, because the libraries are different, the tempos are different, and the intended effect is different for every piece. Oh, and save presets in your plugins so you can recall them in later projects; that way you don't have to reinvent the wheel every time you find something that works.
    I'm pretty sure you can do this in Studio One. Freeze the rendered audio and disable the VST instrument. It is more cumbersome to work this way, but it saves a ton of resources. I think Studio One calls this function "Transform to Rendered Audio."
     
  13. Have you gotten good results with SCS while working through the "Template Balancing" masterclass?
     
