Yes, me too. On an intellectual level, it is completely fascinating to see how a tipping point has been reached and things are now escalating fast. But the disruption to our existence is potentially huge, and the scope for undemocratic, unaccountable control by a few (... and then probably by AGI with very different aims/values to us) is scarily real.
"the disruption to our existence is potentially huge"
I take the line these days that the threat to human existence is huge, growing by the day, and has been for many decades. Humans seem to be dead set on destruction from every angle - of themselves and everything else. WarGames came out in 1983. Governments were the worry then (pretty much pre-internet). Corporations are far more unaccountable, rogue and profit-driven.
What gets my goat (instead) is that people rush to "innovate" hell for leather to beat the competition, ignoring the obvious impossibility of regulating or controlling their creations down the line. They cannot honestly view their work as having integrity, even if they can list all the upsides.
Mike Wooldridge, the professor who gave the RI Christmas Lectures, also appeared on Radio 4's The Life Scientific. In it, he said he knew of no plausible scenario in which AI poses an existential threat to the species, although AI systems can, of course, do things like fly planes into mountains. Other existential threats are available.
"he said that he knew of no plausible scenario in which AI poses an existential threat to the species"
The point is that it paves the way for exponential learning curves in private/corporate settings. Where it will go is unknowable. It's like trying to map the internet or the sixth dimension.
AGI could be a threat if its aims conflict with our own, even if it does not set out to cause harm, much as we have been a threat to species across the planet, mostly without any specific intention of creating a problem, e.g. overhunting to extinction.
@Fire, and even though companies may start off with good intentions, these can fall by the wayside. Google's "Don't be evil" was laudable, but doesn't really stand up to close scrutiny today when, for example, much of their income comes from military contracts and they develop apps that track users for purposes of control.