Technologies are finally starting to deliver, says chief innovation officer Bernard Meyerson.
As IBM’s chief innovation officer and chairman of the World Economic Forum’s Meta-Council on Emerging Technologies, Dr. Bernard Meyerson is at the forefront of key discussions on how the latest technological advances are likely to be applied in the near-term future.
I had a chance to sit down with Meyerson during his recent visit to Toronto and we chatted about some of his favourite topics: artificial intelligence and self-driving cars. The two are top of mind at the World Economic Forum and personally exciting, he says, because of their potential to change the world.
Yet, while both are at the height of hype in the tech world, much of the general public still has trouble believing either has progressed beyond the realm of science fiction.
Meyerson has some thoughts on that issue. Here’s an abridged transcript of our conversation.
Artificial intelligence is the big buzzed-about topic in the technology world, but the average person probably still doesn’t believe in it, or even fears it. Why is that?
It’s ridiculous. Everyone thinks of these things as discontinuous jumps, as leaps – we’re living in caves and then we’re taking starships. That’s not how the world works; there are several millennia separating the events.
When people say AI, you know what I think of? I think of this really pleasant voice that said, “Good morning, Dave,” as it was murdering all of the people in 2001: A Space Odyssey, or The Terminator. That kind of baggage is just endless. It’s really easy to make really bad movies about really good things because you can just pretend to understand what they do.
The reality is that AI is just a continuum where at some point, things start to help people. The sad part is that AI got a bad rap because people couldn’t get their head around the fact that yes, there are other things AI does besides kill people.
Basically, computers and AI scale things better than a human does. If you want to keep track of 3,000 texts on oncology, well, I can’t read that many. I don’t care how old I get, I sure can’t remember them all. Then there’s the 400,000 other things you need to know, so how on Earth is a physician who reads an average of five papers a month supposed to do this?
Computers scale and they remember because they weren’t drunk on Sunday, or whatever the reason. They do this stuff really, really well. Humans, meanwhile, have things computers don’t – like emotions and common sense. We can extrapolate things instantly. AI is more like augmented intelligence or accessible intelligence, call it what you want.
What’s never been around before is a seamless integration of all of this: natural language processing at the front end so that it knows what you asked, natural language processing at the back end so you know what it just said, and in between a judgment engine that is learning so that over time, as you keep updating the knowledge paths, it keeps re-evaluating what it heard.
Over time, the stuff that’s real junk is pretty much ignored, and the system comes back and gives you an answer that is probably correct.
So why is AI getting so much hype now?
You’re starting to see the facts come out: people are putting it out into the general population for general use, and that’s a good thing. Not everybody is fudging and catching up – yeah, it’s real now. A lot of people are competent and doing good work with it.
What about the worries about AI’s exponential growth, with Moore’s Law and all that?
Well, Moore’s Law has been dead for almost 10 years. That’s the other bit of garbage that’s out there. Gordon Moore was a genius and he had it right, but Moore’s Law was tied to the Law of Scaling, which is how you make a transistor smaller. For 34 years, it worked.
If you just followed Moore’s Law … after 40 years you would have a million times as many things on a chip that’s the same size. That means if it’s a 10-watt chip and you did nothing else with it, it’d be a 10-million-watt chip, which in your laptop would produce a brief but terribly exciting experience.
The Law of Scaling deals with how you make that next chip, which has twice as much on it for exactly half as much power per transistor. So if it’s 10 watts, it’s still 10 watts, but it does twice as much.
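The arithmetic behind those two claims – a million-fold density increase after 40 years, and a chip that would draw 10 million watts without the power-scaling half of the bargain – can be sketched in a few lines. This is an illustrative back-of-the-envelope model, not measured industry data; the two-year doubling period and 10-watt baseline are the figures implied in the conversation.

```python
def project(years, doubling_period=2, base_watts=10.0):
    """Illustrative projection of Moore's Law with and without
    the power half of the scaling bargain (Dennard scaling)."""
    doublings = years // doubling_period
    transistor_factor = 2 ** doublings            # Moore's Law: density doubles each period
    power_without_scaling = base_watts * transistor_factor  # no per-transistor power reduction
    power_with_scaling = base_watts               # Dennard scaling: chip power stays constant
    return transistor_factor, power_without_scaling, power_with_scaling

factor, naive_watts, scaled_watts = project(40)
print(factor)        # 1048576 -> "a million times as many things"
print(naive_watts)   # 10485760.0 W -> the "10-million-watt chip"
print(scaled_watts)  # 10.0 W -> same power, a million times the transistors
```

Twenty doublings over 40 years gives 2^20 ≈ 1.05 million, which is where the “million-fold” figure comes from, and multiplying a 10-watt chip by that factor is the absurdity Meyerson is pointing at.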
That died in 2003. Since then, if you look at the speed – what they call single-thread performance – in any microprocessor, it hasn’t gotten any better. You can still make the chip smaller by spending obscene amounts of money on all kinds of materials that you never had to in the past.
What’s happened is that the whole field has gone from many players to maybe three or four, which is why that kind of progress – the million-fold that we got [in the past] – ain’t happening [anymore].
But it’s continuing on a spiritual level, isn’t it?
Well, yeah, exactly. You’re going to get the performance from other things. Because the chip isn’t going to give you much, they’re working on programming that accelerates AI and other things that are similar at a higher rate of speed without the transistor being any better.
The world is changing because of this, but yes, we do worry about what happens when it becomes self-aware, or whatever you want to call it.
It’s just that we can’t even get systems that can perfectly answer, “Where’s the nearest bathroom?” They get close, but it’s such a long way away.
It’s not that you ignore it. You don’t want to ignore anything that could be that detrimental to society. But the flip side of the coin is that you don’t want to give up something where I could save this person if I could quickly find the reference I need and sort through the garbage references I should avoid – which is exactly what the systems we have can do.
You don’t want to just stop doing it. That’s the Luddites and the Luddites never win.
The average person also probably doesn’t believe that self-driving cars are really coming. How do you convince them?
Virtually every Subaru now comes with auto-braking and all the initial self-driving features that keep you from killing yourself through poor judgement at best, foolishness at worst. They already have it.
The funny thing is that the slow rate of introduction has kind of been beneficial because it’s snuck up on people. The fact is, your car already takes away your authority to kill yourself.
In 2002, I bought a Corvette, and I was making a left turn. Somebody was travelling at like three times the speed limit and coming over a rise, literally airborne. As I was seeing this car coming broadside at me, I stepped on the gas while making a turn to get out of the way.
What normally would have happened [before that] is you would have raced forward, slammed into the guardrail and spun around. But actually, the car just systematically hung the left, and as the tires approached the limit of traction, the system automatically throttled the engine back to the maximum speed possible without loss of control. The electric nanny.
Every vehicle by law now has a stability control system. It’s already here, but it’s here in a very mild form. The only thing it doesn’t really do is steer – and even that’s not true, because it steers with the brakes. That’s how stability control happens.
Yes, it will be here; the question is when. The legalities are catching up – six states have finally cleared autonomous vehicles. Uber is now running a fleet of autonomous vehicles in Pittsburgh. They still have drivers in them because they want backup, but they’re already here and they already work.
Will traditional car makers get to wide-scale rollout before the likes of Tesla or Google?
They’re playing hardball. It’s going to be a foot race. The advantage the traditional companies have is scale. Tesla is the anomaly because they have shocking scale at the high end of the market.
It’s not clear and I’m hesitant to bet. But certainly Elon Musk has done an extraordinary job of going from zero to where he is. The major companies are not going to sit by and let this happen.
Are self-driving cars an easier sell to the public than AI because of their life-saving potential?
It actually is. Every time someone bets against one of these things, inevitably they’re proven wrong, because the unanticipated benefits grossly exceed any of the downsides. It’ll sneak up on folks simply by becoming a given.