Music Composers Unite!
That's the idea. What I am looking for are some NUTS & BOLTS ideas here.
My first precept is to analyze the oscillation patterns of a recorded human voice, then transpose the analysis into notation.
i.e. 10,000 violins, each given a note (or notes) to play, in a row, but all 10,000 violins would only take as long as it does to say, "How the hey there do I do this?"
I have been mulling over this idea for 20 years.
Anybody out there got some good solid ideas on how to achieve it? I am not looking for ethereal or philosophical exposition here. What I am seeking are some down-to-the-ground implementation tools.
Got a HO?
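Since the thread asks for nuts-and-bolts tools, here is one common first step (a sketch of my own, not something any poster proposed): estimate the fundamental frequency of short frames of the recording by autocorrelation. A minimal Python version, demonstrated on a synthetic 220 Hz tone rather than real speech:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) of one frame by
    picking the autocorrelation peak in the plausible lag range."""
    n = len(samples)
    lag_min = int(sample_rate / fmax)   # shortest period we accept
    lag_max = int(sample_rate / fmin)   # longest period we accept
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic test signal: a pure 220 Hz sine at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 220.0 * i / sr) for i in range(1024)]
f0 = estimate_pitch(tone, sr)  # close to 220 Hz (quantized to integer lags)
```

Running this on successive 20-50 ms frames of a voice recording gives a pitch contour you could then map onto notes. Real speech would additionally need windowing, voicing detection, and guards against octave errors, all of which this sketch omits.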
Whoah, that's some necromancy.
Anyway, this may be relevant to the topic at hand: http://repmus.ircam.fr/orchidee
Steve Vai's the closest I've heard someone come to mimicking human speech/noises on an instrument, in his case a guitar via a wah pedal.
I don't really know how to reproduce an actual human voice with instruments, but I have thought about imitating at least the inflections of human voice using glissandos. If you take a recording of speech, for example, and strip away (or mentally tune out) the consonants and vowels, retaining only the curvature of the pitch, you'll find that it is actually very intricate, and virtually never a stationary, flat pitch. Arguably, this could serve as a new model of music, in which we actually dispense with the idea of the tone (hence "atonal" in the truest sense of the word, not as in 12-tone music, but discarding the idea of a stationary pitched tone), replacing it with glissandi that imitate the inflections of human speech.
So a phrase like "Are you sure?" could be rendered approximately, as C - G>A - C#>D (where - indicates a leap in pitch, and > indicates a glissando between two notes). These pitches are not exact, nor are the intervals, and actually differ from speaker to speaker. But it's the relative shape of the pitch contour that matters. And it's also dialect-dependent (an Englishman may perhaps render it as approximately C - Eb>E - F#>F#, (where ',' denotes a drop to an octave below)).
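If you measure pitches in Hz and want to write them in the letter-name shorthand above, quantizing each frequency to the nearest equal-tempered semitone is straightforward. A small helper (assuming A4 = 440 Hz; the function name is mine):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_note(freq, a4=440.0):
    """Quantize a frequency in Hz to the nearest equal-tempered
    note name with octave, e.g. 440.0 -> 'A4'."""
    semitones = round(12 * math.log2(freq / a4))  # semitones above/below A4
    index = (semitones + 9) % 12                  # A sits 9 semitones above C
    octave = 4 + (semitones + 9) // 12
    return f"{NOTE_NAMES[index]}{octave}"
```

For a glissando you would call this on the endpoints of the measured contour segment; everything between them stays a continuous slide rather than discrete notes, as described above.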
As for imitating the sound quality of speech, I think a cello in mid to low register comes close to imitating a low, male voice.
If you want to reproduce actual consonants and vowels, you might want to look at speech spectrograms and compare them with various instruments, to see which one(s) could be combined to approximate the same kind of spectrum of overtones. Vowels may be somewhat imitated by bowing a stringed instrument in different positions, say up the fingerboard or close to the bridge, which changes the sound quality. Perhaps you could look at the spectrograms of these various tones and compare them with spectrograms of human vowels, to see if there is some resemblance you can take advantage of.
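One way to make that spectrogram comparison concrete is to reduce each frame to a magnitude spectrum and score a candidate instrument tone against the vowel with cosine similarity. A toy sketch using a naive DFT, with synthetic harmonic tones standing in for real recordings (the harmonic weights are made up for illustration):

```python
import cmath
import math

def magnitude_spectrum(samples):
    """Naive DFT magnitudes for positive-frequency bins (O(n^2), demo-sized only)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(1, n // 2)]

def cosine_similarity(a, b):
    """Score two spectra: 1.0 means identical overtone balance."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def harmonic_tone(weights, f0_bin, n):
    """Sum of harmonics of a fundamental placed exactly on DFT bin f0_bin."""
    return [sum(w * math.sin(2 * math.pi * f0_bin * (h + 1) * t / n)
                for h, w in enumerate(weights)) for t in range(n)]

n = 256
# "Vowel-like": energy in low harmonics; "bright": energy pushed upward.
vowel = magnitude_spectrum(harmonic_tone([1.0, 0.8, 0.3], 8, n))
bright = magnitude_spectrum(harmonic_tone([0.2, 0.4, 1.0], 8, n))
# cosine_similarity(vowel, vowel) is 1.0; vowel vs. bright scores lower,
# reflecting the different overtone balance.
```

In practice you would swap the synthetic tones for recorded vowel and instrument frames, and an FFT for the naive DFT; the comparison logic stays the same.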
Consonants may be a bit harder, because they are very short in duration compared to vowels, and are sometimes recognized more by their side-effects on the adjacent vowels than by anything at the point of articulation. For example, the spectrograms of T and D are essentially identical, except for a subtle change of vowel quality before and after the consonant. Duplicating this using instruments might be quite difficult. The difference between T and K is also quite subtle, and will probably require a very precise manner of articulation if it is to be reproduced using instruments.
That's basically what Orchidee does for you automatically. Saves you 20 years of research :P
H. S. Teoh said:
If you want to reproduce actual consonants and vowels, you might want to look at speech spectrograms and compare them with various instruments, to see which one(s) could be combined to approximate the same kind of spectrum of overtones.