Monday, December 26, 2016

Is AI Magic?

AI is the idea everyone knows: that computers will someday be able to do everything a human mind can do. For instance, play board games, drive cars, recognize people, interpret human speech and respond to it, design circuit boards and skyscrapers, watch for intrusions, keep track of bills or accounts, look up interesting facts, and a thousand other tasks. It also means that computers will learn by themselves. Humans teach themselves many, many things, and to equal human intelligence, AI would have to do that as well.

One could quibble and say AI begins not at the peak of human intelligence, but when a computer is merely as capable as a rather dumb person. That is not as mundane as it sounds: there is a lot a smart human being can think of that a dumb one cannot.

If we wish to go on and predict how AI will transform alien civilizations, making them more efficient, or more powerful, or changed in any way at all, it is necessary to ensure that AI is not magic. In other words, that AI is not one of those things that are easy to think of and dream about but, for some basic reason, cannot happen. A prime example of magic is faster-than-light space travel. If physics were not already advanced beyond many other sciences, with a firm grip on the nature of force and energy in the universe, FTL might not be so easily recognized as magic. Were that the case, alienology might expect space travel to happen a thousand times faster than is actually possible, and to be a thousand times more prevalent, which would result in a huge difference in what was predicted.

Perhaps AI is another form of magic that simply hasn't been recognized as such yet. While the effect would not be as strong as with FTL, it would still be substantial. So it behooves us to analyze AI more carefully, which means human intelligence, the target capability, needs to be sorted out in more detail.

Human beings process data with neural nets, and that may give the illusion that AI must do the same. The recent work on what is called 'deep learning', essentially the neural networks that have been studied for fifty years, may reinforce that illusion. Neural networks can be used in recognition situations, where a machine observes repetitions of an activity or an image and draws conclusions from them. Humans recognize faces with a neural network because they do everything with one, but it is not necessary to do it that way. Algorithms, meaning mathematical formulas embodied in an efficient computer program, can do such recognition much faster than a neural network, measured in processing operations. Algorithms can replace neural networks in a great many situations. Sometimes it is not at all clear how to write such an algorithm or what values to use in it, and running a competent neural network might help in figuring one out.
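As a toy sketch of the point above, here is a recognition task handled by a direct algorithm (nearest-centroid matching) rather than a trained neural network. The "faces", feature values, and names are all invented for illustration; a real recognizer would use far richer features, but the arithmetic shape of the idea is the same.

```python
# Hypothetical illustration: recognition by a plain algorithm instead of
# a neural network. Each known person is summarized by the average (the
# centroid) of a few measured feature vectors; a new sample is assigned
# to whichever centroid it lies closest to.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Two "known faces", each described by a few invented feature readings.
training = {
    "alice": [[0.9, 0.1, 0.4], [1.0, 0.2, 0.5]],
    "bob":   [[0.2, 0.8, 0.9], [0.1, 0.9, 1.0]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

print(classify([0.95, 0.15, 0.45], centroids))  # a sample near "alice"
```

The whole computation is a handful of additions and multiplications per label, which is the sense in which an algorithm can be far cheaper than simulating a large network of neurons.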

Motion is another example where algorithms can be superior to neural networks. Having a robot move requires complicated algorithms, and the big breakthrough was learning how to write them in layers. Since the brain works in layers, this was not too surprising, but programming in this way was finally seen to make sense.
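The layered idea can be sketched in a few lines. Everything here is invented for illustration: a top layer decides the overall goal, a middle layer breaks it into waypoints, and a bottom layer issues step commands. Real robot controllers are vastly more elaborate, but they compose in this same stacked fashion.

```python
# A toy sketch of layered motion control, in one dimension.
# Top layer: decide the overall displacement needed.
def plan_goal(position, target):
    return target - position

# Middle layer: split the displacement into fixed-size steps (waypoints).
def make_waypoints(displacement, step=1.0):
    steps = []
    remaining = displacement
    while abs(remaining) > 1e-9:
        move = max(-step, min(step, remaining))  # clamp to one step
        steps.append(move)
        remaining -= move
    return steps

# Bottom layer: apply each step command in turn.
def drive(position, steps):
    for move in steps:
        position += move  # a real robot would hand this to a motor loop
    return position

goal = plan_goal(0.0, 3.5)
print(drive(0.0, make_waypoints(goal)))
```

Each layer only needs to understand the layer directly beneath it, which is what makes the layered decomposition tractable to write and to debug.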

Algorithms are very useful in trying to get a computer to achieve AI, because a computer simulates a neural net very poorly, and has no chance of matching the brain's number of processors, since each neuron is itself a simple data processor. Nor could anyone expect a computer to have the same variety of processors as the brain: the number of discrete types of neurons is large, though controversial, as some differences are hard to detect.

It can be surmised that, if there are tasks which can only be done by a humongous neural network, and not by a collection of algorithms, AI will not be achievable, even in computers as large as we care to conceive of. So, are there any such tasks?

The brain’s neural net is extremely good at linkages. It can remember a long series of events, and many details about each. But each detail has linkages to other things, such as other events or series of events, and each of these has details of its own. Every specific detail is linked within the brain in a way that a computer database cannot imitate. A computer can have an immense database, capable of recording much more data than the brain can, but the brain remembers unstructured data, or rather data in which each detail has its own unique connections, features, linkages, and events.

Memory additions in a computer database must follow some structure that the interpreting program understands. The brain has no such structures. It operates through something that would look almost random, if there were any way to record it and expose it to analysis. For example, an unusual word might be connected with the book where it was first noticed, and much could be remembered about that book. Or it might instead be associated with some individual who used it more often than most, and that individual might carry a huge assortment of details, able to be structured in many different ways. A particular color might be associated with a thousand different things, and the particular thousand that one individual remembers could be quite distinct from the thousand that another remembers. These things form the context of the thoughts that emerge from an individual’s brain, and lead to the creativity that humans possess. Can an AI program be made to operate with such complete randomness of structure? Is there any AI without such creativity?
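The contrast with a fixed database schema can be made concrete. In this invented sketch, any item can link to any other item with no predeclared structure, and "recall" just wanders outward along whatever associations happen to exist, a little like the free association described above. All entries are made up for illustration.

```python
# Sketch of schema-free associative linkage: item -> set of linked items.
links = {}

def associate(a, b):
    """Record a two-way association between two arbitrary items."""
    links.setdefault(a, set()).add(b)
    links.setdefault(b, set()).add(a)

# A color linked to unrelated things, each of which links onward.
associate("crimson", "sunset")
associate("crimson", "a novel's cover")
associate("sunset", "a beach holiday")

def recall(start, depth=2):
    """Follow associations outward from one item, breadth-first."""
    seen, frontier = {start}, {start}
    for _ in range(depth):
        frontier = {n for item in frontier for n in links.get(item, ())} - seen
        seen |= frontier
    return seen

print(sorted(recall("crimson")))
```

Note what is missing: there are no columns, tables, or record types. Every item's neighborhood is unique to it, which is the property the paragraph above attributes to human memory, though whether piling up such links yields anything like creativity is exactly the open question.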

Perhaps it can be said that simpler tasks, such as face recognition or object recognition or any of a huge number of individual tasks, can be translated into algorithms, and a computer possessing these abilities might be considered intelligent to some degree. But mimicking the thinking ability of humans requires destructured data, and algorithms do not work with that. Furthermore, it is not likely that microprocessors can ever come close to simulating the huge number of neurons functioning inside a human brain, so even if there were some way to build random linkages, there would not be enough of them to equal a competent person's thinking.

So, AI is not magic, as many intelligence tasks can be done by algorithms or neural networks or combinations of them, but building an AI with the equivalent of talented human intelligence is. Describing an alien civilization after it reaches the final step of technology will be a bit harder, as the line between these two levels of AI would have to be drawn, and then the implications of that line inferred.
