2017-07-24

Halting of the AI superintelligence.

TL;DR: There are proofs that to fully test a program you need to run it, so an AI could not improve itself without a continuing risk of killing itself in the process (unless it constrained itself out of full improvements).


Every once in a while I see a post on artificial intelligence (the one from "Wait But Why" has been especially popular among fellow programmers), and I've also been asked by people not into computing about the impact of self-driving cars on employment (think of self-driving Uber cars, but perhaps more importantly how self-driving highway transportation will change the employment of truckers).

To summarize, I'd say there's both good and bad news. The bad news is that many of those truckers will probably be out of a job (there will still be a need to watch out for robbers, and the actual work of loading goods won't go away entirely).

The good news is that for the foreseeable future we shouldn't worry about any artificial superintelligence changing our society outside of our control. A blanket statement like that might draw some criticism: looking at advances such as Teslas that predict accident conditions and Boston Dynamics robots that can put away your dishes, it's easy to think we're closing in on really smart AIs, maybe even as smart as the AI in the Terminator movies (and once an AI can reason like a human, there's no reason it couldn't get even smarter).

However, if you scratch the surface a bit, all of the advances we're seeing are just improved training of machines to handle predefined situations. The great improvements of the last 10 years come mainly from the fact that graphics processors became usable for running fairly simple calculations and algorithms at amazing speeds compared to your computer's regular processor.

To put that increase in perspective: your mobile phone's graphics processor could probably render better graphics for the 1993 Jurassic Park movie on its own, in real time, than the roomfuls of computers back then managed in weeks (months?) of rendering.

With ridiculously fast graphics processors available, developers were suddenly able to do more expensive calculations, which enabled more extensive neural-network-based algorithms that had previously been too slow to use efficiently. And now, in 2017, custom processors for "AI" problems are even appearing, along with extensions to graphics processing units (GPUs), to speed up these kinds of problems even more.

So now that we can do all this, why shouldn't real AIs also keep getting better?

Because a general AI is an entirely different beast from a program that can solve a specific human-defined task (recognizing handwriting, recognizing traffic, walking, navigating a room, etc.). A general AI would need to reason about hitherto unknown problems, and to do that it would need to be able to create new internal models for them (instead of using predefined ones). There isn't anything close to an idea of how we do that in our own brains, or of how a machine could do it.

An approach that is usually cited as the probable path to evolving an AI is self-modifying programs, i.e. the AI would figure out what works and then keep evolving in that direction, becoming better with each generation of itself.

Some people have already started experimenting with AIs that modify programs, but those programs usually handle specific small tasks, and verifying that they work is easy (since the base program usually solved the problem to begin with). And this verification is the big reason why general evolving AIs probably won't see the light of day.
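To make that "easy" case concrete, here's a toy sketch in Python (with made-up names, not how any real system is built) of an evolve-and-verify loop for a narrow task. Checking a mutated candidate is cheap because we can simply compare it against the original, known-good program on a test suite:

```python
import random

def base_program(x):
    """The original, known-good solution to a small, well-defined task."""
    return x * x

def mutate(program):
    """Toy 'mutation': wrap the program with a small random tweak.
    Real experiments rewrite the program's actual code; this is just illustration."""
    delta = random.choice([-1, 0, 1])
    return lambda x: program(x) + delta

def passes_verification(candidate, reference, test_inputs):
    """Verification is cheap here: compare against the known-good reference."""
    return all(candidate(x) == reference(x) for x in test_inputs)

current = base_program
for _ in range(100):
    candidate = mutate(current)
    if passes_verification(candidate, base_program, range(10)):
        current = candidate  # keep only mutations that behave like the original
```

The verification step only works because a trusted reference already exists; a general AI improving itself would have no such reference to check against.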

Verifying that a very small program works is fairly easy, and even some slightly more advanced programs can be verified, but as soon as certain conditions appear in a program it becomes impossible to verify that it will run properly with less processing power than it would take to actually run the entire program in full. This is known in computing as the Halting Problem and was proven long ago by Alan Turing (the same guy best known as one of the foremost code breakers for the British during WW2).
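The core of Turing's argument can be sketched in a few lines of deliberately impossible Python: assume, for contradiction, that a universal checker halts(program, input) existed, and then build a program that contradicts it.

```python
def halts(program, input_data):
    """Pretend this decides, for any program and input, whether it ever finishes.
    The body is left out on purpose: Turing showed no such implementation can exist."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever halts() predicts about running program on itself."""
    if halts(program, program):
        while True:      # halts() said it finishes, so loop forever instead
            pass
    else:
        return           # halts() said it loops forever, so finish immediately

# Asking halts(paradox, paradox) leads to a contradiction either way,
# which is why a general, cheap verifier for arbitrary programs can't exist.
```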

So why is this a problem? It's quite simple when you put it together. An advanced AI looking to improve itself would need to change itself; however, since proving a new version fault-free would be as expensive as actually running it (i.e. the entirety of its own life), it would not be able to improve itself without eventually introducing errors that, after an indeterminate number of generations, could terminate it.
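To see how that risk compounds, here is a small made-up simulation (the 1% per-generation defect rate is an arbitrary assumption, purely for illustration) of a lineage of self-modifying versions whose correctness can't be proven up front:

```python
import random

def make_successor(version, defect_rate=0.01):
    """Hypothetical self-modification step: each generation carries a small,
    unknown chance of a fatal defect that no cheap check could rule out."""
    return {"generation": version["generation"] + 1,
            "fatal_bug": random.random() < defect_rate}

def run_lineage(generations):
    """Keep self-improving until a hidden defect terminates the lineage."""
    version = {"generation": 0, "fatal_bug": False}
    for _ in range(generations):
        version = make_successor(version)
        if version["fatal_bug"]:
            return version["generation"]   # the lineage dies here
    return None

# Even a 1% per-generation risk compounds quickly:
# P(surviving 1000 generations) = 0.99**1000 ≈ 0.00004.
print(run_lineage(1000))
```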

And since it would be working on its own, finding the error that eventually caused the problem would probably be a futile task without storing every previous generation and every data set of world knowledge it used for its improvements... and even then, the task of finding a probably minuscule set of errors with the help of humans would likely constrain such an AI to be, at best, only slightly inferior to humans.

One way around this would be to have a set of autonomous AIs that could support each other, but for them to be the road to superintelligence they would probably need to "breed" with each other or otherwise take similar roads to improvement, leading to the same problem, with the causes of failure being transplanted between them.

Now, there could be an implementation in the future that proves me wrong, but it would need to sidestep this logical conundrum.