When the rapture comes, Google's cars will be unmanned... / by Rob Smith

A friend sent me this article from the NYT the other day, 'cause he knows I work in AI. I read it and Public Enemy started playing in my head.

I've just got to say it: those academics and journalists who say AI is oh-so-close to becoming ubiquitous in day-to-day life, replacing humans in many tasks, are just echoing hype that's been coming around about every 5 years since the 1950s (actually, since Babbage, and even Leibniz). But this time around, the hype is backed by some of the most successful companies in the world, who provide lots of services that we depend on and who we trust. And that makes this hype lots more dangerous than in the past. So a brother gotta represent.

The NYT article is a few years old, but it's a great example of the AI hype that the media is dishing out thick and fast nearly every day lately:

"The scientists and engineers at the Computer Vision and Pattern Recognition conference are creating a world in which cars drive themselves, machines recognize people and “understand” their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues."


A reporter being impressed by yet another in the endless series of academic conferences with gee-whiz results proves nothing new, except that this is a very marketable story these days.

The prime example of how marketable comes in the form of the driverless car hype, which is just taken as true these days, in report after report. A few days ago an article came out with a headline saying that driverless cars are now going on the roads. Except that if you check under the hood of this widely reported story, you find out that these "cars" are really more like golf carts, have a top speed of 25 miles per hour, and are only capable of driving on certain routes in the small town of Mountain View, CA, which Google has hi-res 3-D scanned and data processed, at great expense, utilising lots of human effort as well as big data crunching. Those roads are largely where the "driverless cars" have logged the "millions of miles" everyone is talking about.

The reality is that in these new "road-ready" vehicles a driver will have to be present at all times while they are moving in normal, non-regulated conditions. So this is just complicated cruise control, à la Google.

Will AI assist people in some driving tasks? Sure. It already does so in parking, in controlling speed, in avoiding collisions in near-miss emergencies, etc. Those assistive ideas will continue to advance.

But think about it: we've had very sophisticated autopilots in planes for years, and planes have special rules for staying very, very far away from each other, in a space whose only unforeseen obstacles are wind and the very, very occasional bird. Even in that clear-sailing, low-density world, we have human air traffic controllers constantly watching like hawks. As a matter of law and practicality we absolutely do not let planes fly themselves, at all. Even drones are really flown by people, no matter how much technology aids them.

As a side issue, this is precisely why the reports of impending drone deliveries from Amazon are hype. This will not happen, except perhaps over the Antarctic or Outback: airspace laws won't allow it, and even if they did, the expert manpower load (think of the guys who fly military drones, but in a huge workforce that makes sure every geek on Earth quickly gets the latest X-Men release on DVD) makes this nonsense a practical impossibility. Autonomous drones won't be allowed, and military-style remote-controlled drones are commercially farcical.

Back to the road: In our future, will people be sitting in a car, watching it drive itself? Nope. It'll never be approved legally or for liability.

The truth is we can't get people to drive without looking at their phones when they actually are required to control the damned cars, much less when they are the emergency backup system in a driverless vehicle. Who will insure these things? Who will change the law to allow them to drive on our roads? Answer: no one. Or at least no one who is not tricked by this ridiculous load of hype. If we do get talked into this idea, it won't be long before it is crushed, for some very good reasons. But I think people are sensible, and just plain scared enough, that when the rubber hits the road on this development, it will run straight into a wall.

What about with some tech-enabled limits and controls: cars driving themselves without a driver, to come pick you up and take you to your destination, or deliver heavy goods, say? This will only happen if we put those vehicles in specially reserved lanes on certain pre-planned routes, with lots of tight controls on what the cars can do.

So hey: I've got an idea: let's just put down rails, and add the necessary human operator (local or remote) to cover the unforeseen. Then you've got yourself what you call a train.

Which is a far better technology for these purposes anyway. The world needs fewer cars, not more: it seems that everyone is forgetting that. Cars have been a social disaster, and have degraded the quality of life and transport dramatically around the world. Think how much worse driverless cars will make this problem. Are A-holes with SUVs not enough of an irritation to you? Just think of those same cars, but with no human to blow your horn at, just some rich guy reading in the backseat while Google helps him cut you up.

London has had the good sense to restrict cars, and even London still has more problems to solve in this area (like HGVs with drivers that can't coexist non-lethally with eco-friendly bikes). I don't see London, or any of the other polluted, near-gridlocked cities of the world, moving towards driverless cars, unless they are really the trains I've made note of.

But back to AI. AI is helping advance lots of important areas (like medicine and healthcare), but the reality is it's doing it in ways that have almost nothing to do with the way human intelligence works. The truth is that AI is really successful in supplementing human intelligence in some well-posed settings, but not very good at replacing human intelligence in any non-trivial human decision-making settings. Most people really don't understand these facts, and that's what I'm trying to write about these days.

Why do companies like Google and Amazon want us to believe this hype? Perhaps it's just because they believe it themselves. Perhaps it's an irrational arrogance of the newly rich and powerful. Or perhaps it's just a brand prestige manoeuvre.

But in any case, it's hype.

Terminator X, why don't you tell what time it is, boyeeeee!