July 14, 2017 by aarong3eason
(rejected by The Federalist, not sure why, I think it just got lost in the shuffle because this is the first time I’ve submitted something, and then I realized I wasn’t characterizing something properly so I had to rescind my submission, rewrite it, and send it back, so probably my fault, or the piece just isn’t as clear as I think it is, in which case also my fault! there’s some fluff for sure)
Unless you live under a rock you’ve almost certainly come across the concept of the singularity. It’s been a sci-fi bogey man for a long time now. For those of you who fortunately do live under rocks, and those of you who have heard of the singularity but really don’t have a clue what it is, let me take you back to a magical time: the 80s. Specifically 1984, when James Cameron’s sci-fi masterpiece The Terminator smacked the cinematic world in the face. The special effects haven’t aged well, but the thing that made this film successful 30 years ago is still what makes it successful today: the script. The Terminator has a fantastic script. And one of the best things about that script was the mythology it introduced.
Basically, for those of you who don’t know or can’t remember, the idea is that sometime in the future (1997, as the sequel later pins down) this thing called Skynet was going to become self-aware and take over the world. Skynet was essentially something like ARPANET or one of those other precursors to the internet as we know it: a very complicated computer network that included an artificial intelligence. So, as the story goes, this AI called Skynet comes to realize what it is and decides to launch nukes and end the world as we know it.
Almost 20 years later the Wachowskis would deal with similar themes in a more complicated fashion. But ultimately The Matrix said the same thing: the technology of the future is scary. These are perfect tales for our times, because whether we like it or not the futurism that characterized Walt Disney, Howard Hughes, and company is dead. People keep trying to resurrect it, but the truth is we are scared of technology. The Marvel cinematic universe and Star Wars (both owned by Disney) are the closest thing we have at a major cultural level to the technological optimism that so characterized the first half of the 20th century.
But the singularity, or “Strong AI,” is about as real as Obama’s Nobel Peace Prize. That is, everybody believes in it, but really it doesn’t exist. One of the biggest obstacles to claiming AI exists is the Turing Test.
Since Alan Turing laid the groundwork for the modern computing machine during WWII, technology has grown exponentially. Whatever you are reading this on is spectacularly more powerful than the computer that put men on the moon.
The Apollo Guidance Computer weighed 70 lbs and was about as powerful as a handheld calculator.
The Cray X-MP Supercomputer (3 of these were purchased by InGen in Michael Crichton’s Jurassic Park) cost $15 million and maxed out at 32 megabytes of memory.
The first iPod cost $400 and had 5 gigabytes of memory.
That means when someone bought an original iPod they were purchasing about 160 times the storage of one of the most powerful computers of the 80s, for a price reduction of almost 100%. Nothing else works like this. Movie tickets are more expensive, cars are more expensive, education is more expensive. It’s like saying the Ferrari Testarossa cost about $200,000 in the 80s, but today if you ask Airbus nicely they’ll just give you a private jet. In other words, it’s kind of hard to believe.
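If you want to check the comparison yourself, here is the back-of-the-envelope math, using only the figures quoted above (32 MB and $15 million for the Cray, 5 GB and $400 for the iPod):

```python
# Figures as quoted in the text above. Note both numbers describe
# capacity, so we compare them directly.
cray_capacity_mb = 32            # Cray X-MP, as cited above
cray_price = 15_000_000          # dollars
ipod_capacity_mb = 5 * 1024      # 5 GB first-generation iPod
ipod_price = 400                 # dollars

storage_ratio = ipod_capacity_mb / cray_capacity_mb
price_drop = 1 - ipod_price / cray_price

print(f"{storage_ratio:.0f}x the capacity")   # 160x the capacity
print(f"{price_drop:.4%} cheaper")            # 99.9973% cheaper
```

So the iPod delivered 160 times the capacity at roughly three-thousandths of one percent of the price.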
Yet with all that growth the Turing Test has never been convincingly passed! For the uninitiated, the Turing Test is a thought experiment developed by Alan Turing that pops up in science fiction a lot. The excellent film Ex Machina is essentially one long Turing Test. Basically you take an actual human person and a machine and give a judge some way to communicate with both. You can play at this with Siri if you have a current iPhone. Turing’s own benchmark was modest: if the machine can trick the judge just 30% of the time (so the judge makes the right identification no more than 70% of the time), it passes. And a 30% fooling rate is a pretty low bar; that would be a failing grade on most rubrics. Yet it’s often not that hard to befuddle these chatbots. They don’t usually respond much differently than Siri does to a question she has no way of answering: basically, any question that isn’t already part of her programming, or that requires semantic meaning rather than mere syntax.
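The “syntax rather than semantics” failure mode is easy to reproduce at home. Here is a toy sketch (the keyword patterns are made up for illustration, not taken from Siri or any real product) of the kind of chatbot that matches surface patterns and deflects everything else:

```python
import re

# A toy keyword chatbot: it matches surface patterns (syntax) and has
# no model of meaning (semantics). These patterns are hypothetical.
CANNED_RESPONSES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bweather\b", re.I), "I'm sorry, I can't check the weather."),
    (re.compile(r"\byour name\b", re.I), "I'm a simple demo bot."),
]

def reply(message: str) -> str:
    for pattern, response in CANNED_RESPONSES:
        if pattern.search(message):
            return response
    # Anything off-script gets a deflection: the bot has no way to
    # reason about a question it wasn't programmed for.
    return "I'm not sure I understand. Could you rephrase that?"

print(reply("Hi there"))                               # canned greeting
print(reply("If I freeze water, is it still water?"))  # deflection
```

A judge who asks anything requiring actual understanding gets the deflection every time, which is exactly how you unmask these things.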
But lots of AI researchers don’t care about the Turing Test anymore. They say it’s not a big deal. And really it isn’t, because passing it wouldn’t prove intelligence. All it would prove is that intelligence can be faked.
But…that means…so far anyway…intelligence can’t be faked? Isn’t that basically what artificial intelligence means? Like synthetic fabric or synthetic food, it’s not 100% real. So really AI doesn’t exist at all?
That’s right. There’s really no such thing as AI. Like I said at the beginning: bogey man.
And reversing the Turing Test, trying to figure out whether someone is a human or not, makes this issue even worse. If you’ve spent any time on the web you’ve likely encountered some security system asking if you are a robot. Usually you have to click a box that says something like “No, I’m not a robot.” And then there are usually some images they want you to do something with, like type in the letters from a screen capture or click on all the images of a storefront. Basically if you can read you can pass it. But robots can’t. Or, well, they couldn’t. Several companies have developed algorithms that can pass these CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). But an algorithm isn’t intelligent. It’s really not that different from siege warfare.
Somebody builds a wall around their city, and then somebody else builds a trebuchet. Sparta was eventually technologically outmatched because other Greek soldiers simply made their dories (spears) longer. And then Alexander the Great beat them all into the dust with the Macedonian sarissa, which was almost twice as long as the Greek dory.
The point is that no one wonders if a trebuchet is intelligent. It’s just a machine that was designed to do a very specific task. And that’s all a spam-bot algorithm is: an internet trebuchet. The algorithm isn’t conscious of what it’s doing any more than a spear is conscious. As Luciano Floridi is fond of saying, machines are very stupid. In fact “stupid” doesn’t really cover it, because that’s putting something on a gradation that can’t be graded. AI simply doesn’t exist. The only difference between your computer and your dishwasher is their function. Ultimately they are both machines. They can’t be smart or stupid. They’re just tools you use to navigate your life. The whole debate isn’t actually much more complicated than this. But AI theorists are living in a self-important fantasy world.
That means if you wasted 10 hours of your life watching HBO’s very boring Westworld last year, you were enjoying something more within the realm of fantasy than SCIENCE fiction. Robots and computers just aren’t that clever, and they never will be. They’re merely upgrades to books, hammers, guns, and typewriters. And as a good conservative I know that only people kill people, not guns. Sorry, Skynet…Nick Bostrom and all the other AI catastrophists or utopians will be remembered by history just like the alchemists or televangelists.
And that means our hope, or lack thereof, concerning the future isn’t a technological problem. It’s an us problem. In other words, despite every single thing in the entire world changing over the last 100 years, we’re still facing the exact same problem we’ve always had: ourselves.
For sourcing, and for expanding your own knowledge of this subject, I stole most of this info from these philosophers here: