Friday, April 28, 2017

AI and deep learning...

My longtime readers will know that I am very skeptical of artificial intelligence (AI) in general.  By that I mean two separate things these days, as the field of AI has split into two rather different pursuits over the past few years.

The first (and oldest) meaning was the general idea of making computers “intelligent” in generally the same way that humans are intelligent.  Isaac Asimov's I, Robot stories and their derivatives perfectly illustrated this idea.  The progress researchers have made on this sort of AI is roughly the same as the progress they've made on faster-than-light space travel: nil.  I am skeptical that they ever will make any, as everything I know about computers and humans (more about the former than the latter!) tells me that they don't work the same way at all.

The second, more recent kind of AI is generally known by the moniker “deep learning”.  I think that term is a bit misleading, but never mind that.  For me the most interesting thing about deep learning is that nobody knows how any system using deep learning actually works.  Yes, really!  In this sense, deep learning systems are a bit like people.  An example: suppose you spent a few days learning how to do something new – say, candling eggs.  You know what the process of learning is like, and you know that at the end you will be competent to candle eggs.  But you have utterly no idea how your brain is giving you this skill.  Despite this, AI researchers have made enormous progress with deep learning.  Why?  Relative to other kinds of AI, it's easy to build.  It's enabled by powerful processors, and we're getting really good at building those.  And, probably most importantly, there are a large number of relatively simple tasks that are amenable to deep learning solutions.

We know how to train deep learning systems (which are programs, sometimes running on special computers), but we don't know how the trained result works, just like with people.  You put one through a process to train it on some specific task, and then (if you've done it right) it knows how to do that task.  What's fascinating to me as a programmer is that no programming was involved in teaching the system how to do its task – just training of a general-purpose deep learning system.  And there's a consequence to that: no programmer (or anyone else) knows how that deep learning system actually does its work.  There's not even any way to figure that out.
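Here's a little sketch of what I mean, in plain Python with NumPy (a made-up toy, not any particular real system): a tiny network learns XOR purely from examples.  Nobody programs the XOR logic; training just produces two arrays of numbers that happen to get the right answers, and staring at those numbers tells you essentially nothing about how.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The entire "program" that training produces: two blobs of numbers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)      # backward pass: nudge the weights
    d_h = (d_out @ W2.T) * h * (1 - h)       # toward whatever reduces the error
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))  # roughly [0, 1, 1, 0]
print(W1)   # the "how" is just these numbers -- not anything like readable logic
```

Nowhere in there is a line that says "if exactly one input is on, answer 1" – the training loop grinds away until the weights happen to produce that behavior.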

There are a couple of things about that approach that worry me.

First, there's the problem of how that deep learning system will react to new inputs.  There's no way to predict that.  My car, a Tesla Model X, provides a great example of such a deep learning system.  It uses machine vision (a video camera coupled with a deep learning system) to analyze the road ahead and decide how to steer the car.  In my own experience, it works very well when the road is well-defined by painted lines, pavement color changes, etc.  It works much less well otherwise.  For instance, not long ago I had it in “auto-steer” on a twisty mountain road whose edges petered off into gravel.  To my human perception, the road was still perfectly clear – but to the Tesla it was not.  Auto-steer tried at one point to send me straight into a boulder!  :)  I'd be willing to bet you that at no time in the training of its deep learning system was it ever presented with a road like the one I was on that day, and so it really didn't know what it was seeing (or, therefore, how to steer).  The deep learning method is very powerful, but it's still missing something that human brains are adding to that equation.  I suspect it's related to the fact that the deep learning system doesn't have a good geometrical model of the world (as we humans most certainly do), which is the subject of the next paragraph.
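Here's a toy illustration of that point (entirely hypothetical – a polynomial fit standing in for a trained model, nothing to do with Tesla's actual software): the model behaves nicely on inputs like its training data, and produces confident-looking nonsense on an input unlike anything it has seen, with no warning at all.

```python
import numpy as np

rng = np.random.default_rng(1)

# The "roads it has seen": training inputs drawn only from the range 0..3.
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)

# Stand-in for a trained model: a 5th-degree polynomial fit to that data.
model = np.polynomial.Polynomial.fit(x_train, y_train, deg=5)

for x in (1.5, 2.5, 8.0):      # 8.0 is nothing like anything it was trained on
    print(f"x = {x}: model says {model(x):+.2f}, truth is {np.sin(x):+.2f}")

# Inside the training range the predictions track the truth closely; at x = 8.0
# the model still hands back a confident-looking number, with no hint that the
# input is unlike anything it has ever seen.
```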

Second, there's the problem of insufficiency, alluded to above.  Deep learning by itself isn't enough to emulate human intelligence.  It is likely one piece of the overall human-intelligence puzzle, but it's far from the whole thing.  This morning I ran across a blog post talking about the same issue, in this case with respect to machine vision and deep learning.  It's written by a programmer who works on these systems, so he's better equipped to make the argument than I am.

I think AI has a very long way to go before Isaac Asimov would recognize it, and I don't see any indications that the breakthroughs needed are imminent...

1 comment:

  1. Seems like there needs to be a meta-lesson taught to the machines - when in unfamiliar territory, slow down.

    It's hard because you don't want them to learn to slow down; you want them to slow down while they're learning.
