The AI Apocalypse?
Is AI—artificial intelligence—hype or substance? Can machines think? Can they ever
achieve consciousness? Will the AI "Singularity" eventually destroy us all?
Image of Ultron from https://upload.wikimedia.org/wikipedia/en/4/4f/Ultron2013.jpg
I think it's hype. Movies like "Avengers: Age of Ultron," combined with the bold predictions of "futurists," keep the idea in the public consciousness...but those same futurists have, for several decades, repeatedly predicted that AI was just 10-20 years away.
With a track record like that, why should we believe bold futuristic predictions now? Also, Herbert Simon, who made one such prediction in 1965, was brilliant (as evidenced by his 1978 Nobel Prize in Economics), and some of his ideas inform my own research! Nonetheless, his prediction was, in hindsight, quite wrong!
Such predictions are bound to fail, because you can never know whether the pace of innovation will stay the same, speed up, or slow down. Something like AI depends on many things: advances in processor capabilities, advances in programming, and a critical mass of brilliant minds collaborating and competing to solve the problem (which is largely a function of demand for a solution). All of this requires funding, which must be sustained for an indefinite period of time! Predicting all of these factors is a fool's errand!
You never know what the future holds, so it's both simple-minded and arrogant to try to predict what will happen 10 or 20 years from now!
Why doesn't that stop people from making such predictions publicly? Because a bold prediction that something unexpected will happen within a given time frame gets people's attention! And attention often precedes professional accolades, money, and other desirable consequences.
I've long been of the opinion that computers don't think in any real sense. They're input-output machines, nothing more. Computers 'think' in the same sense that a light switch 'thinks'...they do exactly what they're told to do, when they're told to do it.
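The "light switch" view above can be put in code. This is a toy sketch of my own (the function and table names are purely illustrative, not from any real system): a fixed, deterministic mapping from inputs to outputs, with nothing resembling understanding anywhere in it.

```python
# A toy illustration of the "input-output machine" view of computing:
# a fixed lookup table from inputs to outputs. The same input always
# produces the same output, and no step involves anything like thought.

def light_switch(position: str) -> str:
    """Return the lamp state for a given switch position."""
    table = {"up": "on", "down": "off"}
    return table[position]

print(light_switch("up"))    # always "on"
print(light_switch("down"))  # always "off"
```

However sophisticated the table (or the program computing it) gets, the machine is still only transforming inputs into outputs on command.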
I've argued this point with a number of academically inclined people: this isn't at all how thinking works in people, or in animals more complex than a worm.
The fact is that we don't understand how consciousness, or really even complex thought, occurs. So how can we reproduce it in a completely linear input-output machine like a computer?
People who actually work in machine learning say that the popular understanding of AI—fueled by the predictions of media-savvy 'futurists,' fawning articles by tech journalists, and Hollywood (which is notorious for taking an interesting but faux-intellectual idea without even a grain of truth to it and milking it endlessly)—is so far removed from reality that most people wouldn't recognize actual AI if the ghost of Alan Turing himself knocked them over with it!
I feel vindicated.