
Trio for Flute, Viola and Cello, part II: Chasing Dorian Grey

Hi all,

After a computer failure, I was finally able to complete the second part of my trio for flute, viola and cello. Like the first part, it has a color in the title, but this one also refers to a traditional yet unusual scale; in fact, the title gives the scale away.

I hope you enjoy the listen,

Jos

Chasing Dorian Grey


Replies to This Discussion

Jos,

I voted for part one in the Colors contest, and I like part two even better! The instruments seem a little more co-equal in this one, and, like the first, weave together wonderfully. I like how the rhythm of the second section complements the rhythm used in the first and final section. The sound quality is excellent, and the instruments sound almost real. There is much I can learn from this piece.

Well done!

Thanks, James. I really appreciate your comment!

The toughest part of getting a realistic sound is always the longer notes: the slow passages, the legatos and slurs. Short notes with distinct attacks are in fact pretty simple. That's why you hear so much percussive use of strings in film scoring these days (almost exclusively): it is relatively easy to realize and it speeds up the workflow (not unimportant for film composers!). Long notes are far more complicated to bring to life. You have to imagine the bow movements and their dynamics, the attacks and releases, the pressure, the bow speed... So good samples are the start; the rest is a critical ear and trial and error. Like most users of samples, I had a long learning curve and I'm still not 100% satisfied.

The fact that VSL now invests almost exclusively in the Synchron instruments doesn't make it any easier for me and all the other users of the "quiet studio recordings". Synchron has the advantage of being ready to use (positioning, IR, reverb), but it is therefore limited in use and variation, unless you switch off the room ambience; that would mean you've wasted a lot of money only to end up back at the previous dry instruments. It is probably obvious that I don't use any Synchron instruments. I bought some, but they couldn't convince me at all.

Jos

This is a beautiful, traditional-sounding short piece, Jos, and very enjoyable to listen to. Your trio is well arranged, and your handling of the instruments is realistic to the point that I wouldn't question it if you said this was a live recording.

I've recently started working with VSL and I admire what you and others have done with these sounds. I have some questions, if you have time. Are you using the free VSL player, or have you purchased the 'Vienna Ensemble Pro' player? Dane Aubrun has spoken well of it, and I believe he uses the 'dry' samples you mention, as do I.

You mention the issue of treating longer notes: are you using the velocity crossfade controller in the player? Is that something you do a lot, or are there other methods to make long notes more effective?

To my ear, one thing I notice with VSL users that can sound unrealistic is the consistent perfection of balance and ambience that these samples make possible. A lot of live recordings have blurred passages and odd resonances that are difficult to avoid in a live situation and difficult to replicate in a mock-up; then again, why would anyone want to include mistakes?

As far as ambience goes, VSL is now promoting the MIR system, which is interesting. Do you have any thoughts on that?

Thank you for posting your work and discussing it!

Hi Ingo,

Thanks for listening and commenting.

I use the VI Pro player and, for larger scores, VE Pro (which is not a player but a well-equipped mixing host) to save CPU. VE Pro can run as a separate server, keeping that CPU load away from the DAW's normal processing.

And I prefer the dry samples (recorded in silent studios), because they offer full freedom to place them wherever you want in the room, and they are not colored by some fixed room sound or reverb (as the Synchron instruments are).

For the longer notes, I absolutely use the Velocity XF fader (automated in an automation lane), because that way the sound crossfades fluently between velocity layers depending on the playing strength, and each layer has its own timbre. A slow, soft bow stroke is not the same as a fast bow under pressure; they sound totally different, and that contributes to a more natural feel of playing.
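For readers who prepare this kind of automation outside the player, here is a minimal sketch of what a velocity-crossfade curve looks like as raw MIDI data. It assumes the crossfade is mapped to CC 2 (a common VSL default, but it is configurable); the note number, tempo and curve shape are arbitrary illustration values, not Jos's actual settings.

```python
# Minimal sketch: one sustained note with a velocity-crossfade (CC) ramp under it.
# Assumes the player's Velocity XF is mapped to CC 2 (a common default, but configurable).
import mido

TICKS_PER_BEAT = 480
mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = mido.MidiTrack()
mid.tracks.append(track)

# Start a long note (e.g. a sustained viola tone); pitch and velocity are placeholders.
track.append(mido.Message('note_on', note=60, velocity=80, time=0))

# Ramp CC 2 up and back down over four beats in small steps, so the player
# crossfades smoothly between its dynamic layers during the note.
steps = 32
total_ticks = 4 * TICKS_PER_BEAT
for i in range(steps):
    value = int(20 + 100 * (1 - abs(2 * i / (steps - 1) - 1)))  # 20 .. 120 .. 20
    track.append(mido.Message('control_change', control=2, value=value,
                              time=total_ticks // steps))

track.append(mido.Message('note_off', note=60, velocity=0, time=0))
mid.save('velxf_sketch.mid')
```

The shape of that ramp is the whole point: a smooth rise and fall moves the sample player through its recorded dynamic layers rather than just changing volume.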

Further, I use the humanize feature of the VI player and I try to vary the velocities, and sometimes the bow directions, as much as possible. Normally that happens automatically, but you can influence it to some extent (through the choice of patches). Every sample has a starting attack, and the longer patches have a release tail built in. You can manipulate these as well, or even leave them out when necessary. The very short patches have hardly any release; they're almost all attack.

Of course the samples are next to perfect, but they don't need to sound perfect. You can manipulate the attacks, offsets, dynamic curves, releases... and humanize them; VI Pro offers many possibilities to choose from. The so-called odd resonances can't possibly be a goal. They sometimes occur in live recordings (I have experienced this more than once), but recording engineers will try to wipe them out because they're ugly. So why would we want to incorporate them into the Vienna samples...?

And finally MIR. I own MIR Pro, but I seldom use it for stage placement and ambience. It is easy to use and doesn't demand much acoustic knowledge, but it colors the sound quite a bit. For that reason I prefer VSS (Virtual Sound Stage): it has no room color (in fact no room sound at all) and offers a neutral positioning of the instruments/sections (and it's a lot cheaper). Because it only provides IRs (impulse responses, i.e. the early reflections), you have to add a separate tail-reverb VST on a dedicated reverb bus just before the master bus. There you use only the tail reverb, no more IRs, dialed in to taste.
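As a toy illustration of that routing (invented signal names and numbers, nothing from an actual session), the sketch below keeps the early-reflection "placement" on the track itself and treats the tail as a 100%-wet bus, so that the send level is the only control over how much hall ends up in the mix.

```python
# Toy routing sketch: per-track early-reflection "placement", plus a send to a
# 100%-wet tail-reverb bus that is summed into the master. All signals, IRs and
# levels are invented placeholders, not real plugins or anyone's actual session.
import numpy as np

SR = 8000                                         # low toy sample rate, keeps it fast
rng = np.random.default_rng(0)

dry = rng.standard_normal(SR)                     # stand-in for one instrument track

# "VSS-style" early reflections only: a few sparse echoes inside the first ~80 ms.
er_ir = np.zeros(int(0.08 * SR))
er_ir[[0, 120, 230, 390, 520]] = [1.0, 0.4, 0.3, 0.25, 0.2]
track_out = np.convolve(dry, er_ir)[:SR]          # placed, but still fairly dry

# Tail bus: 100% wet hall decay (a crude stand-in for the separate tail VST).
tail_ir = rng.standard_normal(2 * SR) * np.exp(-np.arange(2 * SR) / (0.5 * SR))
tail_bus = np.convolve(track_out, tail_ir)[:SR]
tail_bus /= np.max(np.abs(tail_bus))              # the bus itself carries no dry signal

send_level = 0.25                                  # the one knob that sets how much hall you hear
master = track_out + send_level * tail_bus
```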

Vienna has superb samples, but I find it sad that they now invest only in the newer Synchron instruments and leave the older libraries totally aside. That is not very considerate toward all the composers who have put tons of money into the previous libraries. They seem to expect that we will purchase everything again in the Synchron version...

So, I hope this gives you some insight and an answer to your questions.

Best,

Jos

Thank you, Jos, for your discussion here; it helps me understand the use of samples. I will have to spend more time understanding what is available from VSL and how samples are used in general.

To this point I have used keyswitches to access different articulations and made varying adjustments to velocity to get different timbres. For ambience I'm using Reaper for panning and stereo width control, plus the East West Spaces reverb plugin. The VSL free player has controls for these functions that I have not yet used, and I haven't yet purchased the VI Pro player that you mention and that Dane Aubrun has also recommended. I am not getting the sound quality I hear from you, Dane and others here on CF, so I will continue to work on this.

I hope to post a piece for a small chamber group if I can get some better results.

Hi Ingo,

You're welcome.

The first thing you should do is listen to good music (the kind you would want to produce) with a critical and analytical mind. This should provide some answers: how do the instruments sound, how do they play, how is the orchestral balance, how does the ambience sound, how much room is present...?

Then there is a golden rule: less is more! Inferior mock-ups are in most cases caused by exaggerated reverb, and that often originates in stacking two (or even more) VSTs to create the ambience. Therefore you should understand how a true, believable reverb is produced in a live recording.

  1. The instruments play, and the surrounding walls reflect most of the sound very soon afterwards (within milliseconds). These are the so-called 'early reflections', the part captured by impulse responses (IRs). They give our ears the feel of the room (its size and shape).
  2. After that, the later, denser reflections decay in the space. That is the hall effect, or tail reverb, and it can last for quite some time (from about 1 second to sometimes 10 or more) before fading away.

The two kinds of reverb are totally different. I saw that you use two different VSTs to create your room sound. In that case you should use ONLY the early reflections in Reaper and ONLY the tail reverb in East West Spaces (see the small sketch below). Have a closer look at both of them. If too much reverb is present (both VSTs producing hall and early reflections), the final result becomes muddy and unclear (not transparent). It takes some time and practice to master that balance.
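Purely as an illustration (synthetic numbers, not measured from any real hall), this sketch builds a toy impulse response out of the two components described above and splits it at roughly 80 ms. Because convolution is linear, the early part and the tail part recombine exactly into the full room, which is also why stacking two complete reverbs (each already containing both parts) doubles up the room instead of completing it.

```python
# Toy demonstration: a room's impulse response splits cleanly into early
# reflections (first ~80 ms) and a decaying tail, and by linearity convolving
# with the two halves separately and summing equals convolving with the full IR.
import numpy as np

SR = 8000                                      # low toy sample rate to keep it fast
rng = np.random.default_rng(1)

# Fake "measured" IR: direct sound + sparse early spikes + exponential noise tail.
ir = np.zeros(SR)                              # one second long
ir[0] = 1.0
ir[[130, 250, 410, 560]] = [0.5, 0.35, 0.3, 0.2]                     # early reflections
ir[640:] += 0.2 * rng.standard_normal(SR - 640) * np.exp(-np.arange(SR - 640) / (0.25 * SR))

split = int(0.08 * SR)                         # ~80 ms boundary
early, tail = ir.copy(), ir.copy()
early[split:] = 0.0                            # keep only the early reflections
tail[:split] = 0.0                             # keep only the tail

dry = rng.standard_normal(2 * SR)              # stand-in for a dry instrument signal
full_mix = np.convolve(dry, ir)
split_mix = np.convolve(dry, early) + np.convolve(dry, tail)

print(np.allclose(full_mix, split_mix))        # True: the two halves recombine exactly
```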

Jos

I misspoke about my use of reverb, Jos; currently I'm using only the EW Spaces IR plugin on the master bus in Reaper. I don't care much for reverb and I try to use it sparingly, but many listeners seem to expect a large amount of it. I suppose it's possible to combine two reverb sources effectively, but I haven't tried that. To my ear, any significant amount of reverb blurs intricate passages.

I try to listen daily to a variety of music, so what my ear identifies as good music is a moving target, if you will. But in spite of my changing tastes I usually have a fairly clear idea of what I want to compose, so now the problem is how to get VSL to give me a good representation of that. Thanks for your help.

Hi Ingo,

I forgot a couple of things which might be important:

  • the choice of the best patch for an articulation (which is not always the one the patch's name indicates)
  • a good balance and panning (think of a live concert: even when you sit at the extreme right of the orchestra, mostly in front of the basses, you can still hear a fair amount of the first violins from the left side). A good panner spreads the sound across the stage in balanced proportions, never exclusively left or right (see the small sketch after this list).
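To make the "never exclusively left or right" point concrete, here is a tiny equal-power panning sketch. It uses a generic textbook pan law, not the algorithm of any particular panner (VSS, Reaper or otherwise), and the seating positions are invented for the example.

```python
# Minimal sketch of an equal-power pan law: even an instrument seated well to one
# side still sends a meaningful amount of signal to the opposite channel.
import math

def equal_power_pan(position: float) -> tuple[float, float]:
    """position in [-1.0, 1.0], -1 = hard left, +1 = hard right."""
    angle = (position + 1.0) * math.pi / 4.0   # map position to 0..pi/2
    return math.cos(angle), math.sin(angle)    # (left gain, right gain)

seating = {"first violins": -0.7, "violas": -0.15, "celli": 0.3, "basses": 0.8}
for name, pos in seating.items():
    left, right = equal_power_pan(pos)
    print(f"{name:14s} L {left:.2f}  R {right:.2f}")
# The basses at 0.8 still reach the left channel at roughly 0.16, so nothing is
# ever panned to a single speaker only.
```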

So I advise you to experiment with the samples and their patches as much as possible, in many different situations and articulations. You will end up creating your own VI Player setup that best suits your needs and wishes.

Lots of success!

Jos

So, Jos, if I understand correctly, you have a separate track in your DAW (Reaper in my case) for each instrument or ensemble patch, and each track has an instance of the VSL player with settings for that patch. Do you then also have an instance of Virtual Sound Stage on each track? I see that VSS has presets for certain instruments in certain rooms. And do you put any plugins on the two-channel master output track?

Excuse the interjection here, please, gentlemen. I have a few moments before going out into the arctic wastes of southern England for shopping, and then returning to the arctic wastes of notation software... just loving it like I love a decent toothache... not.

Just to say, Jos, this is a superb, lively piece that in real life would keep the string players on their toes all right: true ensemble players. It could indeed be live, recalling the few occasions I've been close to live ensemble players. The timbres and nuances you elicit are 'as good as'... Most enjoyable.

As I also use VI Pro (but not VE Pro, as I work on just the one computer) and have been moderately happy with the player's internal reverb, there's no need for me to comment further, except to agree with you on quality. I try to achieve a little more 'naturalness' (is there such a word?) in the DAW by not snapping to the grid... which leads to the chore of making a second copy to tidy up for the notation side.

Otherwise, back over to your discussion.

All the best,

Dane.

Dane, now I've got to turn up the heater and just strum my guitar non-digitally :) When you have time to update us on your notation adventures, we're interested. And VSL says that Dorico can set articulations in Synchron automatically? Oh my.

This is a great discussion for me, though, as I'm seeing two different approaches get great results using more or less the same tools. I had thought that putting an IR reverb on the DAW output channel would be pretty realistic, but Jos has pointed out that rooms are three-dimensional and we should be controlling the placement of the instruments in all three dimensions, which makes sense, especially for a larger ensemble, I think.

@Ingo: Actually, I don't work the way you suggest. Like an old-school composer, I always start with a detailed written score, in my notation program Notion (or, in the future, Dorico). From there I transfer everything to my DAW (Studio One Pro); both programs are from PreSonus, and because they're related you can export directly from one to the other in both directions, which is quite easy. But I choose to export only the note data and some velocities. All the rest I tweak by hand, adding the necessary automation (expression, velocity XF, release, attack...). Before that tweaking, though, I need to enter a whole bunch of key switches related to VI Pro and its setup (my setup). So your supposition that I have a separate track for every patch is not correct. I know some people work that way, but I don't: I like to have an overview of every musical phrase and movement per instrument, not an endless split-up of a line into patches and tracks.

Of course every instrument has an instance of VSS, but it is in fact the same instance repeated for each of them. I use one particular room for all the instruments, but for larger scores I divide them into groups (strings, woodwinds...) so that I can correct wherever necessary without having to tweak the entire room setup. After the mixer, I send everything to a tail-reverb (sum) bus using sends. There the tail is set to maximum (100% wet), and with the bus fader I dial in the preferred amount of wet hall reverb (no more IRs here!). From there it goes to the master bus. It may occasionally be necessary to insert some compression here and there to bring extra cohesion or presence to the ensemble, but usually I try to avoid this. On the master bus I insert a master EQ and master compression, as well as dither (used when rendering the final master down to CD bit depth).

All that means that I almost always work with notes in my DAW and hardly ever with audio tracks. Some specialists find this way of working cumbersome and time-consuming, because you are busy with notation, key switches, performance and mastering all at once. And that's true, alas. But I don't compose at the piano; I compose from my head, and the instrument I use to check things is my sound machine (the computer)...
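For anyone curious what "entering a whole bunch of key switches" amounts to in MIDI terms, here is a small illustrative sketch (not Jos's actual setup): it drops a short, low keyswitch note just before each phrase so the player changes articulation in time. The keyswitch pitches and the phrase data are invented placeholders; every library and every personal VI Pro setup maps its own.

```python
# Illustrative sketch only: insert a short keyswitch note just before each phrase
# so the sample player switches articulation in time. The keyswitch pitches and
# the phrases below are invented placeholders, not a real library mapping.
import mido

TPB = 480
KS = {'legato': 24, 'staccato': 26, 'tremolo': 28}    # hypothetical keyswitch notes

# (articulation, start beat, length in beats, pitch, velocity) - placeholder phrases
phrases = [('legato', 0, 4, 62, 70), ('staccato', 4, 1, 65, 95), ('legato', 5, 4, 67, 75)]

mid = mido.MidiFile(ticks_per_beat=TPB)
track = mido.MidiTrack()
mid.tracks.append(track)

events = []                                           # (absolute tick, message)
for art, start, length, pitch, vel in phrases:
    ks_tick = max(0, start * TPB - 30)                # keyswitch slightly ahead of the note
    events.append((ks_tick, mido.Message('note_on', note=KS[art], velocity=1)))
    events.append((ks_tick + 10, mido.Message('note_off', note=KS[art])))
    events.append((start * TPB, mido.Message('note_on', note=pitch, velocity=vel)))
    events.append(((start + length) * TPB, mido.Message('note_off', note=pitch)))

events.sort(key=lambda e: e[0])
now = 0
for tick, msg in events:
    track.append(msg.copy(time=tick - now))           # convert absolute ticks to deltas
    now = tick
mid.save('keyswitch_sketch.mid')
```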

@Dane: Thanks for your comment, Dane. VE Pro can be useful on a single computer as well, to keep the sample load away from the musical processing. I never use the internal reverb, because it doesn't provide any real room information. As to humanization, I work with a few different things: the VI Player's internal humanization, with the amount regulated by its sliders (humanize/tuning), and the humanization features in Studio One (velocities scaled to stay between certain values, and the same for the offset and release of the patches; this way I can unsnap notes from the grid in an irregular way, within certain definable borders).
I don't have to produce the notation afterwards, since it was already there before I started the DAW work.
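As a generic illustration of that kind of bounded randomisation (the idea, not the actual Studio One or VI Pro feature), the sketch below nudges each note's velocity, start offset and length by a random amount kept inside user-defined borders.

```python
# Generic sketch of bounded humanisation: nudge velocity, start time and length of
# each note by a random amount, but never outside user-defined borders. This mimics
# "unsnapping from the grid in an irregular way within definable limits"; it is not
# the actual Studio One or VI Pro implementation.
import random
from dataclasses import dataclass

@dataclass
class Note:
    start_ticks: int
    length_ticks: int
    velocity: int

def humanize(notes, max_offset=15, vel_range=(60, 110), length_jitter=0.05, seed=None):
    rng = random.Random(seed)
    out = []
    for n in notes:
        start = max(0, n.start_ticks + rng.randint(-max_offset, max_offset))
        vel = min(vel_range[1], max(vel_range[0],
                  n.velocity + rng.randint(-8, 8)))          # keep inside the borders
        length = int(n.length_ticks * (1 + rng.uniform(-length_jitter, length_jitter)))
        out.append(Note(start, length, vel))
    return out

quantized = [Note(i * 480, 480, 80) for i in range(8)]        # a rigid, on-grid bar
print(humanize(quantized, seed=42)[:2])                       # slightly "unsnapped" copies
```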

Jos
