Computer Learning: Why It's More Important Than Ever to Understand

The program above is playing both sides of tic-tac-toe, and learning as it goes. Here's how it works.
The program plays randomly. It doesn't use any traditional strategy; after each move it simply checks whether X or O has won, and if so, it starts over. The first moves for X and O are saved at the beginning of each round, and the grid on the right represents games won or lost from each starting position. For example, if X plays in the top left for its first move and wins, the value in the top-left corner goes up.

After several rounds a pattern emerges: the center square has the most wins, followed by the corners. This is no surprise to anyone who has played before, but it's news to the computer program. After a thousand rounds the computer uses the information it has gathered to tell player X to open on the highest-rated square. You should see the win statistics for X go up at this point.
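For the curious, here is a minimal sketch of that loop in Python. It is not the actual code running above, and the names in it are mine, but it follows the same recipe: random self-play, a grid tallying wins and losses per opening square, then telling X to open on the best one.

```python
import random

# The eight winning lines on a 3x3 board, cells indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if either has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_round(x_opening=None):
    """Play one fully random game; return (X's first move, winner or None)."""
    board = [None] * 9
    first = x_opening if x_opening is not None else random.randrange(9)
    board[first] = 'X'
    player = 'O'
    while None in board and winner(board) is None:
        empty = [i for i in range(9) if board[i] is None]
        board[random.choice(empty)] = player
        player = 'X' if player == 'O' else 'O'
    return first, winner(board)

# Phase 1: a thousand random rounds, tallying wins and losses
# keyed by X's opening square (the "grid on the right").
scores = [0] * 9
for _ in range(1000):
    first, result = play_round()
    if result == 'X':
        scores[first] += 1
    elif result == 'O':
        scores[first] -= 1

# Phase 2: X now opens on the highest-rated square; watch its win rate rise.
best = scores.index(max(scores))
wins = sum(play_round(best)[1] == 'X' for _ in range(1000))
print(f"best opening square: {best}; X wins {wins}/1000 from there")
```

Run it a few times and the best square usually comes out as 4, the center, for exactly the reason described above.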

Though the program was given no tic-tac-toe strategy, it has already learned the most important one: play in the center as soon as you can, and if the center is taken, take a corner. It learned by "brute force" - playing game after game until a pattern emerged - but the results, over time, are valid.

While I have created a program that has "learned", this tic-tac-toe game is no closer to sentience than a toaster. "Learning" in this context means collecting data, then making decisions based on that data. We are asking the word "learn" to do a lot if it covers both what just happened in the program and what happens in the human brain, but such is the nature of language. We could call it (and I'm borrowing from Wikipedia here) "making data-driven predictions or decisions, through building a model from sample inputs", but that doesn't quite roll off the tongue. More importantly, most people would not know what it means. When humans and programs learn, they acquire information that they didn't have previously, and in that sense the word is accurate.

It matters for three reasons:

1. We hear headlines and read articles about computers learning, and it's easy to forget the extraordinary double duty that the word "learning" is doing. Forgetting gets easier as programs become significantly more complicated than tic-tac-toe simulations, but the rule still applies. Yes, IBM's Watson is very impressive, and it is advertised and promoted as if it were almost sentient, but in reality it is as "alive" as the tic-tac-toe simulation above. IBM promotes the lie that Watson is alive to sell its brand, and we buy it both consciously and subconsciously: subconsciously because our language relies on metaphor, and consciously because we are obsessed with the idea of sentient machines. A series of ads with IBM Watson talking to celebrities reinforces this idea. Google, keeping up with IBM, calls its advanced computing division by the not-so-subtle name "DeepMind".


There's a reason IBM wants us to think of their algorithm as a person.

2. The metaphor extends both ways. While we call what the computer does "learning", we also use language about how the computer works to describe the mind. The easiest example is how we think about memory. In reality there is no "place" in the brain that "holds" our memories the way there is in a computer, yet that is how most of us describe how our memories work. Storing information in a computer and storing it in the human mind are categorically different processes, yet like "learning", "memory" is asked to cover them both. Our ideas about how human memory works, and the terms we use to describe it, have evolved alongside our adoption of technology in general, and this is especially true of the computer. My grandfather had a riddle: if you call a dog's tail a leg, how many legs does it have? Four. Just because you call a tail a leg doesn't make it one.

That said, discoveries are often made by taking the metaphor that describes one discipline and applying it to another, and the metaphor that the mind is like a computer has already led to interesting avenues of exploration in both medicine and computer science. But this only works as long as we remember that the relationship is an analogy, not a fact.

Both Robert Epstein in "The empty brain" and George Zarkadakis in In Our Own Image explore this idea much further.

3. What actually IS happening is going largely unnoticed. Imagine that instead of tic-tac-toe the program was doing something far more complicated at much higher speeds. Say, for example, the game was the stock market and the goal was to make money. The program could take all the input from the current market and weigh it against past performance. It wouldn't need to understand the full complexity of the rules any more than the tic-tac-toe game does. It might only find a tiny advantage, but because it's a computer it could execute that advantage several thousand times very quickly. The program would "learn" and "evolve", but it would never need to "understand" - why waste cycles on understanding when time and money are at stake? Unlike Google and IBM, investment companies promote their software as tools, not as sentient individuals, even though their programs are very sophisticated "learning" algorithms. (A sketch of what such a loop might look like follows below.)


There's a reason E-Trade wants us to think of their algorithm as a machine.
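To see how little would have to change, here is a hypothetical sketch in the same spirit as the tic-tac-toe program. The "market" below is just random numbers with a tiny built-in bias, and every name in it is made up, but the recipe is identical: observe, tally, then execute the best-scoring move at volume.

```python
import random

def next_tick(prev_up):
    """A toy market with made-up odds: slightly likelier to rise after a rise."""
    return random.random() < (0.52 if prev_up else 0.50)

# Phase 1: watch and tally, exactly like tallying tic-tac-toe openings.
# tally[condition] counts [ups, downs] that followed that condition.
tally = {True: [0, 0], False: [0, 0]}
prev_up = True
for _ in range(100_000):
    up = next_tick(prev_up)
    tally[prev_up][0 if up else 1] += 1
    prev_up = up

# Phase 2: "trade" the tiny tallied edge, thousands of times over.
profit = 0
prev_up = True
for _ in range(100_000):
    ups, downs = tally[prev_up]
    bet_up = ups > downs               # back whichever outcome scored better
    up = next_tick(prev_up)
    profit += 1 if bet_up == up else -1
    prev_up = up

print(f"profit after 100,000 trades on a tiny edge: {profit}")
```

The loop never needs to know why the pattern exists; the tally says to act, so it acts, thousands of times over.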

The playground in which these financial programs operate is full of other algorithms, all trying to find a similar tiny advantage to leverage. Sometimes these programs bounce off each other in unexpected ways, as dramatically evidenced by the Flash Crash of 2010. So, increasingly, the programs "learn" to leverage the market in ways that we humans will never fully understand. But just because we don't understand what stock market programs are doing doesn't mean that they are "alive".

Intuitively, this makes sense, because the financial software is not marketed as being "alive and conscious" - but the same holds true for software like Watson and DeepMind. Just because they are "learning" beyond what humans can understand does not mean that they are any closer to being sentient. That is a myth, one that originated in language's dependence on analogy and is exploited to sell articles or computer services, to win grants, to entertain, or to fear-monger.

Be leery and observant when people describe how technology works. Here's an example from the Wikipedia page on neural networks. Watch as the description shifts from metaphor to implied fact, without evidence or citation.

"The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are more abstract. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which is still several orders of magnitude less complex than the human brain and closer to the computing power of a worm."

Yes, computer scientists are working with neuroscientists to build computers that are analogous to theories of how the brain works. The paragraph ends, however, with the implication that the only thing standing between a neural network and consciousness is the number of "neural units". It starts by acknowledging the metaphor ("in the same way") and ends by quietly conflating it with fact. It loses the understanding that although a neural network is "like" a mind, it is not actually a mind. It's like hearing that a thousand monkeys with typewriters will eventually write Hamlet, and getting excited because you've already got 20 monkeys. In truth we know remarkably little about how the brain works, let alone where "consciousness" lies. Or as A. K. Dewdney puts it much further down the same Wikipedia page,

"Other than the simplest case of just relaying information from a sensor neuron to a motor neuron almost nothing of the underlying general principles of how information is handled by real neural networks is known."

4. I lied, there are four reasons. Think of this one as the epilogue.
Google's DeepMind and IBM's Watson are much, much closer to the tic-tac-toe program I wrote than they are to consciousness. In a Facebook comment recently, someone called my position on A.I. "optimistic" because I would not subscribe to the idea that Skynet (ask a nerd if you don't know the reference) was about to come online. I like being called an optimist, but in fact the opposite is probably true. Capitalism uses both the promise and the threat of artificial intelligence as a smokescreen, because actual computing is complicated and we're not always 100% sure where it's headed or what it's doing. Investors don't like uncertainty, nor do consumers, academics, or grant givers - not appearing sure doesn't pay in any context. But prop up the illusion that we are on the threshold of a technological singularity, and much will be forgiven. A lot of amazing work is being done under the premise of searching for A.I. Think of it like the race to the moon and all the technological advances that came from that research, except that in this case we're never actually going anywhere.

Google DeepMind winning at Go is an amazing accomplishment for the team, but also a scary milestone for a lot of people. Go is a fantastically complicated game, and even the fastest computers cannot use brute force to examine every possible position on the board (although the program did employ the strategy of playing itself millions of times, as I did with tic-tac-toe). For many the loss was a sad milestone, and Google is in the delicate position of being the company that finally "defeated humanity". To combat this, Google relies heavily on human-brain metaphors like "deep mind" and "neural networks" (or, to use Google's vernacular, the even more impressive "deep neural networks") to frame its technology. It's comforting to think that Google is working on "intelligence", because it implies that although they are making algorithms designed to work in ways we cannot understand, soon we will be able to "talk" to them. Of course Google never says this literally; it lets the analogy do that work implicitly.

So whup-de-do. Who cares that the search for AI is modern-day alchemy? Everybody is getting what they want, right? For many that's true, but I would argue that computer literacy is vital, especially now.
In the campaign leading up to the election, Clinton described wanting to create a task force to break modern encryption. Encryption is what holds the internet together. It's what makes doing business online possible - remember the algorithms that run the stock exchange? Without encryption, and the public's faith that it cannot be broken, the entire world economy would collapse. The government has the tough task of providing security while maintaining privacy and trust, but breaking encryption would do more damage to Western civilization than any terrorist attack ever could. She's trying to halfway pop a balloon: encryption either works or it doesn't. Clinton is smart and pro-business, but she fundamentally misunderstands just how central computers, the internet, and encryption have become to our civilization.
But even if it were a good idea, asking for a task force to break encryption is like thinking that with enough money and the right people, anti-gravity boots are inevitable. Some problems cannot simply be "fixed" with money and brains, but that is the impression Silicon Valley wants to convey - part scientist, part business genius, part magician.
Here's the scary part: Clinton was the smart one.

Of course I don't blame AI for Clinton's misunderstanding of how technology works, but I do blame the general atmosphere of organized misinformation in Silicon Valley. We need to start by examining language that describes technology in ways that both impress and obfuscate. Let's ask hard questions of the Googles and IBMs (I pick on them, but of course they are not alone), and let's give them room to provide us with complicated answers.
White male CEOs are using our own fears of and hopes for A.I. as a shell game to further their agenda. Now they have the ear of an insane president who is running the government like a failing business. We cannot make the mistake of thinking that Zuckerberg and friends are going to become the moral center of the country - we need to start by holding them responsible and accountable for how they describe their products.
