Sunday, April 10, 2016

Two Kinds of Thinking

By now, everybody knows there are two kinds of thinking, or two kinds of computing. Here on Earth, we are seeing some successes in AI on certain well-defined tasks that were too formidable before. While I don't read all the news, the articles reporting that a computer had beaten the international Go champion were unavoidable. Why didn't this happen twenty years ago? Because nobody believed in the second kind of thinking. Chess was won by a machine some time ago, but that was done by sheer computational power. Go required another kind of thinking. The articles referred to it as deep learning or some such nickname, but what they were talking about was multi-layered associative neural networks. That's what you have in your skull.

The first kind of thinking was invented before computers were. It is simple: programming. A program, at least as it used to be, was a set of instructions to be executed in order. There could be branches and tests, but by and large it was doing one thing after another. It's how you communicate many things, like the steps of a trade. You know, to fix a car that doesn't start, do step one, then step two, then step three, and so on. Here and there are some measurements or tests, and those affect the particular sequence of actions. Sequential games are just perfect for this type of code. You just figure out in advance what the moves might be, and pick the best one. Checkers or chess, and many other games, are subject to this. You figure out what happens in ten or fifteen moves, and if your opponent can only figure out six or eight moves, he or she loses. There has to be a criterion for measuring the value of the different states you compute out, but in many games that is not too hard to find.
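That lookahead idea can be sketched in a few lines of code. This is a minimal illustration, not the method any real chess or Go engine uses; the game here is a hypothetical toy (take one or two stones, whoever takes the last stone wins), chosen only because it makes the "search ahead, then apply a criterion" structure visible.

```python
# A minimal sketch of fixed-depth lookahead for a sequential game.
# Toy game (hypothetical): a pile of stones, each move removes 1 or 2,
# and the player who takes the last stone wins.

def moves(stones):
    """Legal moves: take 1 or 2 stones (never more than remain)."""
    return [n for n in (1, 2) if n <= stones]

def value(stones, my_turn, depth):
    """Score a position by searching `depth` moves ahead.
    +1: the original side to move is winning, -1: losing,
    0: the search ran out of depth without deciding (the 'criterion')."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if my_turn else 1
    if depth == 0:
        return 0
    scores = [value(stones - m, not my_turn, depth - 1) for m in moves(stones)]
    # I pick my best option; my opponent picks my worst.
    return max(scores) if my_turn else min(scores)

def best_move(stones, depth):
    """Pick the move whose resulting position scores best for us."""
    return max(moves(stones), key=lambda m: value(stones - m, False, depth - 1))
```

The point of the sketch is the shape of the first kind of thinking: enumerate moves, recurse a fixed number of steps, and fall back on an evaluation criterion when the search runs out. Everything is an explicit sequential instruction.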

This type of thinking is universal in the world of computing today. Anywhere you look, there is a set of sequential commands to follow. Software engineering has become a discipline that figures out how to organize these sets of commands, how to verify and validate them, how to proof them against unexpected results, and more. The trillions of lines of code that exist in the world are all this type of computing. You might say this happened because silicon likes to be either on or off. It likes discrete things. Or you could say it happened because binary is simpler than rational numbers. Or you could say that determinism is simpler than fuzzy logic. Or you could just say that mankind is still pretty primitive.

Man's brain isn't primitive, though. We finally got around to believing it was a useful paradigm to follow. Now that computational power has grown much grander in scale, maybe to a few hundredths or thousandths of a percent of the human brain, neural networks begin to make sense. Neural networks work by not following any prescribed set of commands. Instead, they work on weights. One node in one layer of a neural net evaluates a set of inputs, most likely the outputs of a lower layer, by matching them against a template of weights, and its output is a measure of the degree of match. The next layer has a broader scope, and evaluates its own match against its inputs, which are the outputs of a dozen or a hundred nodes from the next lower layer. It may sound like a pyramid, but it isn't, because each layer looks at the outputs of the previous layer, and there are scads of alternatives as to what comes out, meaning lots of nodes on every layer.
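The "template of weights" idea is small enough to show directly. This is a bare-bones sketch with made-up numbers, assuming the common choice of a sigmoid to turn the raw match into a degree between 0 and 1; real networks are vastly larger and use library code, but the per-node arithmetic is this.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each node matches the input vector against its own
    weight template; the output is the degree of match, squashed to (0, 1)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        match = sum(x * w for x, w in zip(inputs, w_row)) + b
        outputs.append(1.0 / (1.0 + math.exp(-match)))  # sigmoid squash
    return outputs

# Two tiny layers: the second layer's inputs are the first layer's outputs,
# which is all "multi-layered" means. The weight values here are arbitrary.
x = [0.5, -1.0, 0.25]
h = layer(x, weights=[[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]], biases=[0.0, 0.1])
y = layer(h, weights=[[1.0, -1.0]], biases=[0.0])
```

Notice there is no prescribed sequence of decisions anywhere: the behavior lives entirely in the numbers in `weights`, which is exactly why the training problem discussed next exists.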

Once someone has figured this out, and also figured out how to make a computer core, which can only do sequential stuff, efficiently simulate an associative network, then it can be applied to all kinds of problems: anything the human brain could do, if the human brain were in a box. There is one problem, however: you have to train an associative network. You don't have to train a program; you just write it. But an associative network has many more free parameters than it has nodes, and they all have to get set. This was a big problem until some smart dude had a baby. If you watch a baby learn, you immediately see how neural nets get trained. A newborn baby can't do anything except pulse some random signals to its muscles and suckle. That last bit is hard-coded. When you watch it, you see it initially notices a correlation between the random arm motions it makes and its visual field. This quickly boils down to the baby learning how to move its arms. The same thing happens a million times over, and the baby's brain begins to function and becomes the brain of a toddler, who, as we all know, is quite capable.
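Training by feedback, baby-style, can also be sketched in miniature. This is the classic perceptron update rule applied to a single node, not whatever the Go program actually used: start with random weights, try, compare the outcome against what should have happened, and nudge the weights toward reducing the error. The task (learning logical AND from examples) is just a stand-in.

```python
import random

random.seed(0)  # make the random starting weights reproducible

def train(samples, lr=0.1, epochs=200):
    """Perceptron-style training of one node: random initial weights,
    then repeatedly nudge them in whatever direction reduces the error."""
    w = [random.uniform(-0.5, 0.5) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1.0 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0.0
            err = target - out  # the feedback signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learning logical AND from examples alone; no rule is ever programmed in.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

Nobody writes down "output 1 only when both inputs are 1"; the weights drift there on their own, which is the sense in which the network trains itself once a reward structure is in place.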

In those articles about the Go program, nobody talks about how the networks they used were trained, but it's more or less obvious. You go layer by layer, and just put in some reward structure. They train themselves, just like a baby does. You can affect the environment and speed up the training, or you can just go out to lunch.

Now that we have passed that hurdle, we on Earth will be able to see some decent AI results. We can expect that all alien civilizations that pass into asymptotic technology will have passed that hurdle as well. But it is an early hurdle. Since programmed computing and associative computing both do different things well, it is obvious to everybody that combining them produces the most capability. Alien civilizations will have done that also, early in their careers. What's the asymptote on this?

Can silicon be built to be more efficient than a wetware brain? Say, on a kilogram basis or on a joule basis? It depends on the test. Some things yes, some things no. So, on an alien planet, you can expect to see a diverse combination, as one of the precepts of an alien civilization is that it would be efficient. You would see pure neural nets, hybrids, and pure logical sequences.

This brings up an interesting point. Humans can think, well, some humans can think, in logical sequences; in other words, they can think like a program. How did they manage to get an associative neural network to do that? Well, the answer is clear, but we don't need to go into it here. What we do need to recognize is that biological entities and mechanical entities each have striking advantages and disadvantages, as well as a certain degree of overlap, and any advanced society would have both. Perhaps biological-mechanical hybrids as well, but those have certain disadvantages of their own.

Comparisons between mechanical thinking and biological thinking are unfair until there has been enough genetic exploration and experimentation to grant every member of a particular generation a very large intelligence. Machines are designed to be the best they can be, so why shouldn't aliens be as well? There is a strong correlation between intelligence, measured in the right way, and the ability to do logical thinking, so the dividing line between tasks left to the aliens themselves and those turned over to AI will certainly move following the genetic grand transition. Biological stuff, like us, might turn out to be pretty good after all.
