DARPA is launching a significant new AI initiative; it could be a bad mistake.
DARPA (the Defense Advanced Research Projects Agency) has an awesome record of success in promoting the development of computer technology; without its interventions we probably wouldn’t be talking seriously about self-driving cars, and we might not have any internet. So any big DARPA project is going to be at least interesting and quite probably groundbreaking. This one seeks to bring in a Third Wave of AI. The first wave, on this showing, was a matter of humans knowing what needed to be done and simply encoding that knowledge as rules (this actually smooshes together a messy history of some very different approaches). The second wave involves statistical techniques and machines learning for themselves; recently we’ve seen big advances from this kind of approach. While there’s still more to be got out of these earlier waves, DARPA foresees a third one in which context-based programs are able to explain and justify their own reasoning. The overall idea is well explained by John Launchbury in this video.
In many ways this is timely, as one of the big fears attached to recent machine learning projects has arisen from the fact that there is often no way for human beings to understand, in any meaningful sense, how they work. If you don’t know how a ‘second wave’ system is getting its results, you cannot be sure it won’t suddenly go wrong in bizarre ways (and in fact such systems sometimes do). There have even been moves to make it a legal requirement that such systems be explicable.
I think there are two big problems, though. The first is that the demand for an explanation implicitly requires one that human beings can understand. This might easily hobble computer systems unnecessarily, denying us immensely useful new technologies that just happen to be slightly beyond our grasp. One of the limitations of human cognition, for example, is that we can only hold so many things in mind at once. Typically we get round this by structuring and dividing problems so we can deal with simple pieces one at a time; but it’s likely there are cognitive strategies that this rules out. Already, I believe, there are strategies in chess, devised by computers, that clearly work but whose conditional structure is so complex that no human can understand them intuitively. So it could be that the third wave actually restores some of the limitations of the first, by tying progress to things humans already get.
The second problem is that we still have no real idea how much of human cognition actually works. Recent advances in visual recognition have brought AI to levels that seem to match or exceed human proficiency, but the way these systems break down suddenly in weird cases is so unlike human thought that it shows how different the underlying mechanisms must still be. If we don’t know how humans do explainable recognition, where is our third wave going to come from?
Of course, the whole framework of the three waves is a bit of a rhetorical trick. It rewrites and recategorises the vastly complex, contentious history of AI into something much simpler; it discreetly overlooks all the dead ends and winters of disillusion that actually featured quite prominently in that story. The result makes the ‘third wave’ seem a natural inevitability, so that we ask only when and by whom, not whether and how.
Still, even projects whose success is not inevitable sometimes come through…