You hit it dead on there and further up. It can replicate on the basis of probabilities, but it won't understand what it's doing or be able to aim its efforts at a particular audience.
In fact it would have no reason to want to compose music for human consumption. The fact that effort is put into forcing it to do so is a human affectation, the hubris of programmers. Its inter-machine communication would be far narrower and faster, limited to what it needs to say.
Most of the public understand very little about communication / information theory and the redundancies contained in human communication. As someone deeply interested in complex systems, I came to realise that within the definition of a system (a collection of functionally related components), those relationships ARE communication (even if not in words) and have to be understood by both transmitter and receiver. AI machines would have to develop their own linguistics to act together, to create their own reality and ecology in which to live, if that makes sense.
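To put a rough number on that redundancy, here's a minimal Python sketch, entirely my own toy illustration: it uses a single-character frequency model, which actually understates the redundancy Shannon measured with longer contexts.

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy in bits per character, estimated from
    single-character frequencies in the text itself."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = ("most of the public understand very little about communication "
          "and information theory and the redundancies in human communication")

h = char_entropy(sample)
h_max = math.log2(len(set(sample)))  # if every symbol in use were equally likely
print(f"observed entropy: {h:.2f} bits/char")
print(f"maximum entropy : {h_max:.2f} bits/char")
print(f"redundancy      : {1 - h / h_max:.0%}")
```

Even this crude estimate shows each character carrying noticeably less information than its theoretical maximum; Shannon's experiments with human predictors put the real redundancy of English above 50%, which is exactly the slack a machine-to-machine protocol would strip out.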
I agree to the extent there are "grey areas", but most of what you describe is technological advance. Military stuff will still be under the control of people sitting in comfortable penthouse offices, operating PlayStation-like controllers to command drones, getting autonomous weapons in place and the like. They've simply taken advantage of the huge increases in capacity and speed of current-day computing.
They talk here in England about medical singularities - AI that can scan X-rays for cancers etc. hugely faster and more accurately than humans - but in the vein of HS Tech, that's because they have millions of previous cases and prognoses to "learn" from, not because they can inherently "understand" X-rays and cancer.
To me, it's "Intelligence" when it can decide (itself) what it needs to learn - and find it - without further human intervention. In other words, it can self-program and has enough "abend" routines to amount to a survival instinct (like the Umbrella Corporation computer).
I don't see that happening any time soon. For it to be so, it would need a reason to "live". To it, war would be about taking some kind of economic advantage, taking over another machine's land.
But what I fear most with current "advancements" is whether it knows right from wrong in what it picks up. Can it tell fake from actual, and if so, how does it deal with it?
Mistakes are going to happen. The billions of people brought within its current reach are vulnerable, simply because mistakes are already inherent in the very databases it trawls - before it creates more of its own because its programmers have not foreseen all possibilities.
What I'm more afraid of is gullible people believing the hype that AI in its current state can "think", and thereby become more vulnerable to falsehoods generated by the algorithm when it "hallucinates", as they put it. (Though IMNSHO it's not so much hallucination as a consequence of the way the algorithm is designed -- it's inherently probabilistic, and therefore vulnerable to wrong extrapolations from training data that seem similar enough to be related but actually have different semantics, semantics which current AI models are unable to process.) A modern version of "I saw it on TV, it must be true", i.e., "I heard it from ChatGPT, it must be true".
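A toy illustration of that failure mode - my own sketch, nothing like a real model's scale, but the same underlying principle: a bigram chain trained on only two true sentences will happily splice them into a false one, because it tracks local word probabilities, not meaning.

```python
import random
from collections import defaultdict

# Training data: two perfectly true statements.
corpus = "paris is the capital of france . rome is the capital of italy ."
words = corpus.split()

# Bigram table: each word maps to the list of words seen to follow it.
follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

random.seed(0)
for _ in range(4):
    out = ["paris"]
    while out[-1] != ".":
        out.append(random.choice(follows[out[-1]]))
    print(" ".join(out))
    # Roughly half the samples read "paris is the capital of italy .":
    # a fluent, confident, and entirely false extrapolation.
```

Nothing "hallucinated" in any mysterious sense; the model simply followed the probabilities it was given, exactly as designed.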
Dave Dexter > David Lilly · November 18, 2023 at 6:34am
AI as used to streamline mundane or technical tasks, as humans have been doing with technology forever, doesn't worry me. Even if the military has some super-AI at its disposal, for which heavy citation would be needed, it doesn't conflict with my core beliefs on the subject.
I have often reflected on the similarity between computer music composition/performance and sex robots. However close they get to the real thing by technical measures, they are both still wanking.
I think the real danger is that AI-generated music will become normal. In a decade or two, if you play a recording of Casals' Bach Cello Suites to a younger person, they may say, "This somehow sounds artificial, like it wasn't made by a computer."