Hi! I am glad to return after a long period of inactivity on the forum.

During that time I was building a web service that lets anyone upload music in MIDI format and have it performed with professional virtual instruments, without installing any software: https://artinfuser.com

Most orchestral instruments are implemented, and piano, pipe organ, and a jazz drum set are also available. You can upload your music and try it out. I hope this service can be useful for students and people who do not have virtual instruments installed, and also for composers who do not want to spend time tweaking virtual instruments.

You can tweak hundreds of parameters for each instrument: for example, increase or decrease the length of automatic crescendos, reduce the use of glissando articulations, shorten legato transitions, increase or decrease vibrato intensity, and more.

Here is how it is done:

1. You upload a MIDI file, choose settings, and start processing.

2. The system uses multiple algorithms to select appropriate articulations, randomize parameters, and draw curves for dynamics, vibrato intensity, and vibrato speed - thus creating a "prepared" MIDI file.

3. The prepared MIDI file is imported into a DAW with the virtual instruments loaded. The music is then rendered to MP3.

4. You can then download the MP3 or the prepared MIDI file, as you wish. You can even download a separate MP3 file for each instrument.
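As a rough illustration of the curve-drawing in step 2, here is a minimal sketch (invented for this post, not the actual MGen code) that draws a dynamics ramp between two border values with a little randomization, the kind of thing you might write into a MIDI CC11 (expression) lane:

```python
import random

def dynamics_curve(start_val, end_val, n_points, jitter=3, seed=None):
    """Draw a dynamics ramp (e.g. for CC11) between two border dynamics,
    with small random jitter so the crescendo does not sound
    machine-perfect. Returns a list of CC values clamped to 0-127."""
    rng = random.Random(seed)
    curve = []
    for i in range(n_points):
        t = i / (n_points - 1) if n_points > 1 else 0.0
        value = start_val + (end_val - start_val) * t  # linear ramp
        value += rng.uniform(-jitter, jitter)          # humanizing jitter
        curve.append(max(0, min(127, round(value))))   # clamp to MIDI range
    return curve

# Example: a crescendo from mp (~64) to f (~96) over 16 control points
print(dynamics_curve(64, 96, 16, seed=1))
```

A real implementation would of course shape the ramp (exponential, S-curve) and time-stamp each point, but the clamp-and-jitter pattern is the core idea.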

You can listen to an example of automatically created audio on the main page: https://artinfuser.com/studio



  • This is very interesting, but I think automated mockups are many lightyears away from realism.  It still takes the diligence of a human poring over every articulation (specific to the libraries that are used).  If you are offering this as a professional service, you might want to consider adding in a human element (for an additional fee, I suppose).

  • Hi. Thanks for your reply. 

    It seems to me that this method already works well for student counterpoint exercises. See an example of a fully automatic mockup by MGen: https://www.youtube.com/watch?v=mLNadlwiDas

    Also, here is my piece The Cloud:

    1. This mockup was created by me manually: https://www.youtube.com/watch?v=wz8pAQnEF5E

    2. This mockup was created automatically using different instruments (see MP3 link at the bottom of the page): http://artportal.su/ctracker/file.php?f_id=30

    To me the manual mockup sounds better, but I cannot say the automatic one is awful. To me it definitely sounds better than Sibelius or Finale playback. And this is just the beginning.

    I am planning to use every approach I can think of to emulate a manual mockup, and your criticism and advice will help a lot.

    John Driscoll said:

    This is very interesting, but I think automated mockups are many lightyears away from realism.  It still takes the diligence of a human poring over every articulation (specific to the libraries that are used).  If you are offering this as a professional service, you might want to consider adding in a human element (for an additional fee, I suppose).

  • Hi Alexey, nice to hear from you after all this time!

    Congratulations and good luck with your project. I heard your trumpet demo in an internet café with lots of street noise, but I still liked it.

    I don’t understand how it works because I'm not technically minded, but would it interpret the composer's markings as entered in Sibelius, or what?

    You see, I'm still stuck with Sibelius 8.6 (I don’t like its sound library) and with NotePerformer as a playback engine, which improves sound rendering quite a bit, but I still think your trumpet example sounds better.

    Would you also be programming a classical guitar sound, or more guitar types?

    How would a composer like me, quite unknowledgeable about technology, be able to help you?

    At the moment I'm still in Greece gigging, without an internet connection at home, but I will return to England for a few months in the near future.



    Hi, Socrates. I am sorry for the long pause; I did not see the notification about your message for some reason. Currently the system interprets only a small part of composers' markings. Later it will recognize more. Currently recognized:

    - Dynamics (except dynamic hairpins inside long notes, where MGen builds its own curve obeying the border dynamics)

    - Staccato

    - Nonlegato

    - Accent (converted to dynamics)
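As a toy illustration of how such recognized markings could translate into MIDI adjustments (the boost and shortening factors below are illustrative guesses, not MGen's real values):

```python
def apply_markings(note, markings):
    """Apply recognized score markings to a note dict with
    'velocity' (0-127) and 'length' (in MIDI ticks).
    The specific factors here are invented for illustration."""
    velocity = note["velocity"]
    length = note["length"]
    if "accent" in markings:      # accent -> louder attack (converted to dynamics)
        velocity = min(127, velocity + 20)
    if "staccato" in markings:    # staccato -> much shorter note
        length = length // 2
    if "nonlegato" in markings:   # nonlegato -> small gap before the next note
        length = int(length * 0.85)
    return {"velocity": velocity, "length": length}

print(apply_markings({"velocity": 80, "length": 480}, {"accent", "staccato"}))
# -> {'velocity': 100, 'length': 240}
```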

    So far I am planning only orchestral instruments. After finishing the orchestra, we'll see.

    The most important help for the project would be reports of problems with the resulting sound. I will try to improve and fix them.

    These videos show how default Sibelius playback differs from default MGen playback (which is used to adapt the MIDI):

  • Excellent results, Alexey!

    Your sound is much better than Sibelius's, though I think that if you take the trouble to tweak Sibelius a little you can improve its sound. Did you export a straight MIDI file from Sibelius and upload it into MGen?

    I do not see any composer's markings in the Sibelius score, so I cannot judge that, but I like the MGen versions very much - a far more realistic rendering in all three examples.

    (Was that a cantus firmus by Fux or somebody, by the way? I know what to do with my contrapuntal exercises from my student years now! I may have a look at them again and upload them to MGen. Thanks!)

    When I have experimented with your system a little I'll start giving feedback. Thanks for all your work, mate. Keep it up, and Happy New Year!

  • The comparisons sound good, but doing better than Sibelius is not much of a challenge. Much more interesting would be a comparison with NotePerformer.

    I wrote a tool that does some of what you are doing. My base sounds were EastWest Symphonic Orchestra (Silver), which is a bottom-of-the-line library. My tool worked with MusicXML input, which may be the best way to do this kind of thing, although NotePerformer and Synful Orchestra both appear to manage by reading a MIDI stream and guessing the intent. For NotePerformer, the input stream may include some articulation information through program changes--I don't really know exactly what Sibelius provides to a plugin Sound Set.

    I gave up on my tool because it's a ton of work to maintain and NotePerformer was good enough. And NotePerformer renders immediately rather than after a bunch of post-processing.

    The dream of automatically creating an expressive and realistic rendition of a work straight from the score may be harder to bring to reality than you think. Just stepping up from MIDI to MusicXML is a ton of work.

    A web service that could create a great rendition of my music would be wonderful. I'd actually prefer something integrated into Sibelius, like NotePerformer. Of course, if you are relying on someone's sample library, a solution like NotePerformer would require some sort of sub-license. For a web service, you might want to check with the sample library owners before making any big investments. I have no idea what uses they permit, but they might not care to have their sound samples used to generate music for people who haven't actually paid for the samples.

    I wish you luck with this. You may be underestimating the work required; I'd be more hopeful if you were actually a team of 4-6 programmer/musicians, but I hope you'll prove me wrong.

  • Thank you for your replies.

    Yes, I exported the MIDI file from Sibelius and imported it straight into the Composer Tools site. Socrates, I would be very glad if you imported your music and counterpoint exercises into the site.

    I added comparisons with Sibelius Sounds and NotePerformer for each instrument type:


    Same music played by Sibelius MIDI: https://youtu.be/I2505Mzwr5k

    Same music played by Sibelius Sounds: https://youtu.be/HStTpyPacLU

    Same music played by NotePerformer: https://youtu.be/srIs3kPFG-w

    Same music played by MGen: https://youtu.be/I2505Mzwr5k?t=21s


    Same music played by Sibelius MIDI: https://www.youtube.com/watch?v=CelSFg-a4nU
    Same music played by Sibelius Sounds: https://youtu.be/H63J762KWTU
    Same music played by NotePerformer: https://youtu.be/g5Y2mjjovQY
    Same music played by MGen: https://youtu.be/CelSFg-a4nU?t=21s


    Same music played by Sibelius MIDI: https://youtu.be/iNWm-dXzebQ
    Same music played by Sibelius Sounds: https://youtu.be/quKSxtQA8QY
    Same music played by NotePerformer: https://youtu.be/xoaYT_iq4zE
    Same music played by MGen: https://youtu.be/iNWm-dXzebQ?t=21s

    Antonio, it is great that you were working on a similar project. You are right that it is not easy to make a project successful alone. I am working with Alexey Shegolev, a composer and lecturer from Canada. We would be glad if you wanted to join.

    I am not currently thinking about integration into Sibelius, because the best virtual instruments would require licenses, HDD, RAM, and CPU power for each user. Still, this idea is important, and I plan to think about it after some time.

    The question of EULAs is also important. I am creating mixes for site visitors, which is allowed by the EULA of each virtual instrument I use, but if the project becomes commercial at some point, I plan to contact the library owners anyway.

    Do you think the current implementation is not expressive and realistic enough? Could I ask you to upload some of your music to check?

    Also, I investigated Sibelius MIDI export and many things are already exported: https://docs.google.com/document/d/e/2PACX-1vR1e8vQWkvXNWT5buGwsRhO...

    By the way, today I finished implementing staccato support, and it is now available on the site.

    Here is how site works:

    Imported MIDI file -> Adapted MIDI file -> MP3 file for each instrument -> MP3 with mix

    You can download the adapted MIDI file for each piece and import it into your DAW to tweak it a bit, if you have the same virtual instruments installed.

    You can also download the MP3 file for each instrument and remix the whole piece.
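The flow above can be modeled as a chain of stages (a purely illustrative sketch; the function names and file naming are invented, not the site's real API):

```python
def adapt(midi_file):
    """Stage 1: articulation selection, curve drawing, and randomization
    would happen here; this sketch only models the file naming."""
    return midi_file.replace(".mid", ".adapted.mid")

def render_stems(adapted_midi, instruments):
    """Stage 2: render one MP3 per instrument from the adapted MIDI."""
    return {inst: f"{inst}.mp3" for inst in instruments}

def mix(stems):
    """Stage 3: mix the per-instrument stems into the final MP3."""
    return "mix.mp3"

adapted = adapt("piece.mid")
stems = render_stems(adapted, ["flute", "clarinet", "piano"])
final = mix(stems)
print(adapted, sorted(stems), final)
```

The point of the shape is that every intermediate artifact (adapted MIDI, per-instrument stems) is kept and downloadable, not just the final mix.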

  • Sorry, I missed seeing your reply till now.

    I uploaded two test cases. One is an idea for a piano piece. The version you produced sounded nice, but I'm guessing that was mostly because the piano sound came from a great library. It did not sound more or less expressive than the Sibelius/NotePerformer combination. Maybe it wasn't quite as heavy-handed as NotePerformer tends to sound. Right now, it looks like a great way to get a nice piano sound without buying a sound library.

    The second piece is a portion of a piano trio I wrote. You can hear the entire thing at composer.freixas.org/work/44. The instruments are flute, clarinet, and piano. I'm sorry to say that the version you produced was painful to hear. Whatever tweaks you are applying, I suspect, make the piece sound worse than it would with no tweaks at all. The music uses no articulations other than legato.

    Tools like NotePerformer and Synful excel with string and wind instruments because they use a synthesis technique that makes note transitions, particularly legato transitions, sound realistic. Most sound packages (the ones I know about, anyway) use some sort of interpolation technique that may be less successful. My knowledge comes from the low end - I realize people do produce some nice music with sample libraries. I'd like to hear how a top-end library handles a solo, unaccompanied instrument such as a violin or clarinet (I haven't checked, since I know I can't afford them).

    It may be difficult to beat a handcrafted file. I am far from being an expert in that field. I know there are a bunch of tricks you can use, and some of them could be codified into something an expert system could apply. But then people might perform each instrument part and tweak the result, something that would be hard to match starting from a score or a notation-software-produced MIDI file.

  • Antonio, thank you for using Composers Tools. I found a bug that erroneously forced all woodwind articulations to glissando, which I believe is why your trio sounded strange. I fixed it and reran the rendering - please check; I hope you like it better.

    These are some of the algorithms that were automatically applied to this trio:

    - Dynamics curves were generated from the MIDI note velocities: https://i.imgur.com/z3e5EV1.png

    - Vibrato intensity and frequency were generated for woodwind instruments: https://i.imgur.com/z3e5EV1.png

    - Legato transition speed was selected for each transition based on note length with randomization applied

    - Controlled randomization was applied to tempo, velocity, dynamics, vibrato intensity, vibrato frequency, and note starts and ends
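The legato-transition selection from the list above could look roughly like this (a hedged sketch; the bounds, scale factor, and jitter are invented, not MGen's real settings):

```python
import random

def legato_transition_time(note_len_ms, rng=None,
                           min_ms=30, max_ms=200, jitter=0.1):
    """Pick a legato transition time from the length of the note being
    left: longer notes get slower, more audible transitions. All the
    constants here are illustrative guesses."""
    rng = rng or random.Random()
    # Scale transition time with note length, clamped to a usable range
    base = max(min_ms, min(max_ms, note_len_ms * 0.15))
    # Randomize +/- jitter so repeated transitions are never identical
    return base * rng.uniform(1 - jitter, 1 + jitter)

rng = random.Random(42)
for length in (100, 500, 2000):
    print(f"{length} ms note -> {legato_transition_time(length, rng):.1f} ms transition")
```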

    The parameters of all algorithms can be controlled in the task config on the site. The results of these algorithms are easier to hear if you listen to each instrument's solo MP3 track (click Perform, then Stems).

    As for the piano - several algorithms were also applied, but the piano algorithms do not yield that much difference so far. I will work on them later and describe them.

    I believe the most effective algorithms I have developed so far are those applied to strings and brass.

    Can you make this trio public (click Visibility) so that forum users can also listen to the rendering results?

  • Thanks for the fix. It does sound a lot better, but nowhere near good enough. The flute sounds artificial, particularly the trill around 0:18 and the sixteenth-note pattern I use around 0:27. Note endings seem to cut off mechanically for both flute and clarinet.

    I've made the file public and I've attached here the NotePerformer versions of the flute and clarinet parts. The entire piece, with all the parts playing simultaneously, can be heard at composer.freixas.org/work/44.

    Antonio Freixas

    Piano Trio-extract-flute.mp3

    Piano Trio-extract-clarinet.mp3
