• Dave:  "Open the Pod bay doors, please, HAL."

    HAL:   "I know you and Frank were planning to disconnect me, and that is something I cannot allow to happen."

    • I saw 2001 again this summer at the Hollywood Bowl and man, did it land differently this time!

  • Well, considering that most of what I heard on there sounded equivalent to a toddler fiddling with a Casio keyboard, I think no, this is not the end as we know it. However, as programming and AI become more sophisticated, and as society, in general, moves toward a culture of immediate gratification and results, it could be the start of something very, very bad.

    • I thought we're already in a culture of instant gratification. I mean, how many people even bother listening to (let alone write) fugues anymore?  😉

      Doesn't stop me from continuing to compose, though. I'm an AI skeptic, and I still have my doubts about whether an AI could ever be made that could replicate my work. It could come up with a pretty good imitation, perhaps, but it would be missing what makes my music "click".  Perhaps if it could somehow ingest a million samples of my compositions -- which don't exist -- it could come close. But even then it could only create pastiches of what it has consumed; it couldn't create something new that I haven't yet done in my current musical corpus.  And in spite of all the hype, I'm not holding my breath for the day it can.

  • We'll just have to be really good then, won't we? Or not worry about the idiots who, if they're going to use a service like this, were never going to hire a real human anyway. 

  • To be honest, I worry more about my stocks.

  • It may eventually gain a foothold in the popular scene... I mean, blimey, it doesn't take much intelligence to hammer out a few nice-sounding chords to a formula. Consider some pop music and the stuff backing documentaries, even films: most people don't care as long as it keeps coming. They hear it but don't listen to it.

    But it won't replace art music (if you'll pardon the expression), just as photography hasn't replaced fine art or recorded music hasn't replaced learning an instrument and live performance.

    I can't see how it can produce music that touches human emotions without understanding human emotions more deeply than trawling a database of psychological studies about them... which means it'll have to become self-aware and have emotions of its own; and when that happens, they may not sync with human emotions. I can't imagine a machine feeling the grief of bereavement as a consequence of some barbaric war, and then wanting to express its feelings to a human.

    Humanity will be over well before that happens.

    In London recently I've witnessed civilisation going to the dogs without AI.

    Could be AI comes up with some nice, comforting things though.


    • The thing about the present state of AI is that it's not even at the level of trawling psychological studies about humans. What current AI algorithms are doing is ingesting large amounts of input -- basically large chunks of the internet, or some database that has enough data to smooth out the otherwise-obvious artifacts of the algorithm -- and using correlations in the input to associate the human prompt with an output that's created by interpolating the data it has ingested.

      To use music as a metaphor, it's like taking a million compositions by Beethoven (I know he didn't write a million pieces, but suppose he did) and calculating the probability that Beethoven would write, for example, a D following a C, or an E following a C, etc.  Then, when the human prompt asks for a "composition by Beethoven", it would pick a starting note (perhaps by averaging over all starting notes of the hypothetical million Beethoven compositions to find the most likely one), then repeatedly select the next note based on how likely it is to follow the previous one -- based on the probabilities it has seen in the million Beethoven compositions it has ingested.  This is a greatly simplified picture of what actually happens in the machine, of course, but it shows you the essence of what "AI" algorithms are actually doing.
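      For the curious, the note-probability picture above can be sketched in a few lines of code. This is a toy first-order Markov chain of my own devising -- the note names and the tiny two-piece "corpus" are made up purely for illustration, and real systems are vastly more complicated -- but it captures the "pick the next note by observed frequency" idea:

```python
import random
from collections import defaultdict

def train(corpus):
    """Count, for each note, how often each other note follows it."""
    followers = defaultdict(lambda: defaultdict(int))
    for piece in corpus:
        for cur, nxt in zip(piece, piece[1:]):
            followers[cur][nxt] += 1
    return followers

def compose(followers, start, length, rng=random):
    """Repeatedly pick the next note, weighted by observed frequency."""
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # dead end: this note was never seen mid-piece
            break
        notes = list(options)
        weights = [options[n] for n in notes]
        out.append(rng.choices(notes, weights=weights)[0])
    return out

# Tiny made-up "corpus" standing in for the hypothetical million pieces.
corpus = [["C", "D", "E", "C"], ["C", "E", "D", "C"]]
model = train(corpus)
print(compose(model, "C", 8))  # e.g. a plausible-looking note sequence
```

      Every "composition" this produces is, by construction, a recombination of transitions already present in the corpus -- which is exactly the pastiche limitation described above.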

      IOW, the machine isn't even at the level of understanding anything about its input yet. All it's doing is building up a passive memory of correlations in the copious amount of training data fed to it, and interpolating from these correlations an output that most closely resembles a typical human response to the input prompt.  Does it actually "understand" what it's outputting? Heck no; the algorithm is completely oblivious to that. Its responses are constructed from probabilities computed from its training data; it does not actually understand the meaning of the data itself.  It doesn't even understand why it chose the output it did; all it does is the equivalent of saying "based on the training data I was given, X is the most likely output to a human prompt of Y".  It's basically a souped-up version of an algorithm that computes the closest curve fitting some isolated data points on a graph.  This is why I'm a total AI skeptic.  All the excitement and hype is just that: hype.  We're still a long way from a self-conscious machine. A very long way.  We've barely scratched the surface.
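      The curve-fitting comparison is more than a figure of speech: the simplest version of "find the closest curve through some data points" is an ordinary least-squares line fit, computable with the textbook closed-form formulas. The data points below are invented for illustration; the point is that "predicting" a new value is just interpolation from what was fitted, with no understanding involved:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

def predict(a, b, x):
    """"Respond" to an unseen input by interpolating the fitted curve."""
    return a * x + b

# Made-up noisy data roughly following y = 2x + 1.
data = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]
a, b = fit_line(data)
print(predict(a, b, 4))  # ≈ 8.95 for this data
```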

      And once AI gets to the self-conscious point -- which is still a long, long way away -- there remains the issue you pointed out: how would it acquire emotions, and how would it acquire emotions that are similar enough to human emotions that it could express something that makes sense to a human?  It has no reason to have such emotions.  The programmers who build it would have no reason to program such a thing into its code in the first place.  What benefit would such an endeavor bring?  And even if they programmed it that way, what would drive the AI to develop said emotions?  It doesn't have to fight for survival.  It doesn't have to worry about not having the next meal because its boss fired it.  It doesn't have to worry about impressing the girl AI next door in the hopes of getting together with her.  It doesn't have to worry about feeding its AI children.  It doesn't have to deal with its hypothetical AI children causing a ruckus and not allowing it to rest -- because it doesn't need rest to begin with. Nor food.  Nor companionship.  Nor physical comfort -- it's completely digital and divorced from harsh physical reality.  And these are only the basest needs of human life -- we haven't even gotten to higher-level needs like psychological well-being, aspirations, hopes, etc.  It's a loooooong way off before AI can even come close to the level of human emotion that might allow it to create something convincing in music or art.

      Basically, in my view, today's AI is not even at the level of a newborn human baby, and already people are talking about "singularity" and being replaced by AI bots. It's laughable, really.

  • I tend to believe AI is considerably more advanced than public awareness acknowledges. The world's military entities may have possessed and harnessed AI technologies long before such advancements entered the public sphere. This implies a substantial gap between the cutting-edge capabilities held by the military and the knowledge accessible to the general public or potentially even major corporations for commercial use. I suspect these (for now) clandestine advancements are undergoing refinement, and eventually, they will become accessible to the most influential military powers globally. Only then will they subsequently trickle down and integrate into civilian society.

    If we look at the pace of development in computing technology over the past three decades, I suspect we will see a similar global change and a complete alteration of everyday life as companies and households scramble to utilize AI for lower operating costs or increased quality of everyday life, respectively. Whether that means less work for composers and musicians, I fear it does, as a "real" composer and a group of "real" musicians may become a luxury expense reserved only for large-scale projects, akin to building a timber-framed home versus a cookie-cutter pre-engineered home. My main concern centers on the repercussions of such advancements, particularly the idea of widespread job displacement, which, in the worst-case scenario, could spawn a massive depression. But, as with computing technology, I'm sure an entirely new market of jobs will emerge.

    I'm not sure the technology even needs to attain consciousness to significantly and accurately replace human roles in various jobs. I don't know if AI really needs a sense of self-awareness to reach terrifying levels of power; I gather it is more about its capability to execute tasks traditionally done by humans with a level of precision that could lead to substantial workforce consequences.

    To me, this is already scary...



    It may sound like crap and not be quite correct, but in 10 years, I bet it won't sound like crap and will have a lot more contrapuntal knowledge... but the fact that it can already do this is really scary to me.

    • I don't find this scary at all. It sounds almost exactly like some of the fugal analysis essays I've read online, which I suspect are the sources from which chatGPT drew the authoritative-sounding wording "in this fugue, the subject is presented first ... then imitated...".  The fact that the charts show essentially zero understanding of fugal structure, in spite of the convincing wording of the surrounding text, confirms my skepticism: chatGPT is basically just extrapolating/interpolating from existing fugal analysis essays and music charts without truly understanding what it's doing.

      A realization of the philosophical zombie, IMNSHO.

