I think the exponential pace of it is terrifying for some people. As with the advent of social media, the creators seem to give very little thought to where it will end up. It's a race to "be the first" to make the breakthrough / secure the patent. It's often driven by profit at the end of the day.
Some people like to concentrate on the potential harms of AI, but there are so many positives. We have been using the technology for some time, to help us interpret MRI scans, and the results are very encouraging.
No doubt about it. AI is highly useful already and will become more so as advances are made. But there are potential downsides and a lot of unknowns about where its independence may head and the implications for society. Weaponised autonomous drones spring to mind as an obvious example. These exist already, but if humans lose control of their programming and they start making their own "decisions", that could create a nightmare scenario.
@Fire, yes, I'm not sure where the boundary between algorithms and AI technically lies.
The whole subject gets complicated quite quickly and though there are developers considering this, others are ploughing ahead with things with very limited oversight or legislative restriction.
that's it. Unknown unknowns with huge implications.
The third episode is interesting. They say "AI can be used in ways that are not ethical or moral" and "We must always speak up if we feel an AI system is being unfair or biased". They seem to miss the point that the world cannot regulate the use of tech once it's out there, any more than we can regulate the internet or human cloning. It seems naive in the extreme to think that companies will always use human oversight to manage AI decision-making. AI learns and operates autonomously - that's the cheapest, easiest, fastest and most scalable point of the exercise.
I am a bit behind with my viewing, but have just watched the first episode. Very impressed with the use of AI in bionic arms and to possibly decode the "language" of other species. The experiment with the water and funnels was great for showing how a threshold is needed for neurons to pass on signals.
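For anyone curious, that "threshold" idea maps straight onto how artificial neurons are usually modelled. Here's a rough sketch of my own (not anything from the programme): the funnel only overflows once enough water has been poured in, and the neuron only "fires" once its combined inputs reach a threshold.

```python
# A minimal threshold-neuron sketch, illustrating the water-and-funnels idea:
# the unit only "fires" (outputs 1) once its weighted inputs reach a threshold.

def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two weak signals alone don't tip it over, but together they do.
print(threshold_neuron([1, 0], [0.6, 0.6], 1.0))  # 0 - below threshold, no signal passed on
print(threshold_neuron([1, 1], [0.6, 0.6], 1.0))  # 1 - threshold reached, signal passed on
```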
There is so much positive stuff that AI can contribute, especially in the field of medicine. The difficulty is it could also be used to inflict great harms, eg facial recognition used to intimidate and harass, autonomous weaponised drones, etc. Plus, the scope for it to develop beyond human control is just so huge, and, as you say, @Fire, once the genie is out of the bottle ...
Recently, somebody said it was scary how fast AI is learning. I wish that were true for what we use. Our "Alexa" still has a long way to go.
Apart from that, people get the wrong impression of what AI is capable of doing for them and believe it could be part of their computer system just like an ordinary program. No one seriously discusses the technical/system requirements it needs.
Have just watched episode 2. Thought it was a good balance between the light-hearted (singing cup cakes, etc 😁) and more serious applications. Will be fantastic if it can be widely used to help with diagnosing and tracking the progression of Parkinson's.
Was reading recently about DeepMind and the game Go. It really does show how rapidly machine learning will move beyond what the human brain can do. In some ways, that's good, eg the billions of hours saved deciphering protein folding, but in other ways that may cause problems for us as humans.
I do have concerns not just for the safety of humans in an AI world (autonomous weapons or computer progs that can't be overridden, say), but also a fear that the activities that make life satisfying and create opportunities to feel a sense of achievement may get downgraded or disappear entirely. If AI can produce instant "anything" (from literature to music to world record performance in any field), why would humans work at developing skills?