If you were asked to name the company that’s contributing most to the future of technology, who would you name? Google? Maybe Apple? Hands up if you think it’s Microsoft?
I’d argue that it’s none of those illustrious names. In fact, I’d say that it’s Nvidia – you know, the “graphics cards people”.
Nvidia has found itself at the forefront of the wave of intelligent machines which are about to change technology forever. Deep learning, which allows machines to effectively programme themselves, is one of those huge leaps in technology which occasionally happen, and thanks to the fact that graphics processors turn out to be ideal for machine learning, Nvidia is in the process of transforming itself from “the graphics card company” into “the AI computing company”.
If I had any doubts about the importance of machine learning, then Nvidia CEO Jen-Hsun Huang’s keynote speech dispelled them. In the next ten years there is very little about our lives which machine learning won’t touch. Mostly, it will be invisible and gradual: recommendation engines which get better over time; better machine translation of language, and translation of speech to text; and improvements in areas like insurance, where decision making is already largely in the hands of machines.
However, the biggest and most obvious place where machine learning will have an impact is cars. As Jen-Hsun explained, “a car which drives itself” is a complicated system which goes beyond simply recognizing objects and steering around them. The car needs to understand that just because there’s a space doesn’t mean you can drive through it. Human beings don’t drive by scanning constantly for every object in their path and computing the optimum way around them. They learn the ability to drive through spaces, to avoid objects, and the wisdom to drive effectively too. Driving is a complex, many-layered skill which can’t be reduced to a single algorithm.
And that’s why Nvidia’s autonomous car software platform, called DriveWorks, splits the process of driving into three units. DriveNet looks for things, computing a 3D model of where they are and how big they are rather than trying to do flat object recognition. OpenRoadNet looks for absence, for spaces which the car can safely move in – for example, understanding that it’s OK to go between the lines on the road, but not always OK to drive over them. And PilotNet is a behavioural network which “just drives”.
I’m probably over-simplifying how the three work together, but essentially it works like this: PilotNet decides where it would like to drive. DriveNet and OpenRoadNet act as the metaphorical brakes, ensuring that the driving approach desired by PilotNet is appropriate and safe.
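To make that propose-and-veto arrangement concrete, here’s a minimal sketch in Python. To be clear, this is my own illustration of the idea, not Nvidia’s actual DriveWorks API – every name and threshold below is invented, and the three “networks” are stubs standing in for trained neural networks.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    """A driving action the behavioural network would like to take."""
    steering: float  # hypothetical steering angle, in degrees
    throttle: float  # hypothetical throttle, 0.0 to 1.0


def pilot_net(sensor_frame):
    """Stand-in for PilotNet: proposes how it would like to drive."""
    # A real PilotNet is a trained network; this stub just returns
    # a gentle right turn at moderate speed.
    return Proposal(steering=2.0, throttle=0.4)


def drive_net_approves(sensor_frame, proposal):
    """Stand-in for DriveNet: checks the proposal against obstacles.

    The real network builds a 3D model of objects; this stub simply
    rejects extreme manoeuvres as a placeholder safety rule.
    """
    return abs(proposal.steering) < 30.0


def open_road_net_approves(sensor_frame, proposal):
    """Stand-in for OpenRoadNet: checks the proposal stays in free space.

    The real network looks for drivable space; this stub rejects
    high throttle as a placeholder rule.
    """
    return proposal.throttle < 0.8


def drive_step(sensor_frame):
    """PilotNet proposes; the other two networks act as the brakes."""
    proposal = pilot_net(sensor_frame)
    if drive_net_approves(sensor_frame, proposal) and \
            open_road_net_approves(sensor_frame, proposal):
        return proposal
    # If either safety network objects, fall back to a safe default.
    return Proposal(steering=0.0, throttle=0.0)
```

The point of the structure, as I understand it, is the separation of concerns: the behavioural network never has the final word, because two independent perception networks can always overrule it.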
Within a few years, a new car will be able to drive itself in almost every circumstance, more safely and economically than any human being. That’s what machine learning will enable, and Nvidia aims to be at the forefront of that revolution. It won’t be the only revolution machine learning starts, but it will be the most obvious big change to the way we live. And I can’t wait for it to happen.