2.11.2006

Robot Love

A recent interview in New Scientist with artist Mari Velonaki gives us a glimpse of an exhibit where robots and people interact.
Velonaki has collaborated with robotics scientists at the centre to create Fish-Bird, a live exhibition comprising a pair of moody, love-struck robots disguised as wheelchairs that can communicate through movement and written text.
What is strikingly different about this approach to machine intelligence is the emphasis on emotion over problem-solving. This is something I've pondered for a while: if we really want to be able to talk to computer-based intelligence, doesn't it have to at least attempt to emulate emotion?

But what are the limits of machine emotion? If we translate the question into human terms, it becomes even more interesting. Computer programs are changeable in a way that human brains are not: a program can change itself, if it's so designed. To a point we can do the same with our own brains, teaching ourselves to play the piano, for example. More recently, we can swallow pretty little pills that adjust our brain chemistry. What are the limits of this trend? Suppose that we, like a flexible computer program, could change our own programming?
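Here's a toy sketch of what I mean by a program changing its own programming. Everything here (the Agent class, its "drive" parameter) is invented purely for illustration:

```python
# A minimal self-modifying program: an agent whose behavior is driven
# by a parameter it is free to rewrite. All names are hypothetical.

class Agent:
    def __init__(self):
        # "drive" stands in for a desire-like quantity: how strongly
        # the agent pursues anything at all.
        self.drive = 1.0

    def act(self):
        return "pursue goal" if self.drive > 0 else "do nothing"

    def retune(self, new_drive):
        # The program changing its own programming: nothing stops it
        # from setting its drive to zero.
        self.drive = new_drive

agent = Agent()
print(agent.act())   # pursue goal
agent.retune(0.0)    # the "pill": erase desire entirely
print(agent.act())   # do nothing -- a satisfied, catatonic agent
```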

Suppose, for example, that you want to be happy all the time. If there existed a pill that modified your brain chemistry to satisfy that desire, it seems inevitable that many people would swallow it. It's hard to argue otherwise, given the problems with drugs like crack.

Now suppose some general 'desire' emotion exists, and a pill can make it go away. Within a few heartbeats we would become completely satisfied, if catatonic, citizens. What's to prevent that? Only other desires (the desire not to be catatonic, for example). But in the most general case this pill satisfies all desire at once, so only an initial inclination not to take the first one could save you.

This initial inclination is a matter of chance, which begets an evolutionary mechanism: specimens not inclined to take the first pill are selected for. Over time, evolution ossifies this inclination into a hard limit on behavior. In other words, behaviors that lead to severe disability are strongly selected against. That's why it's hard to bring yourself to jump off a bridge.

The conclusion is that these hard-coded safeguards are essential to survival, so any program that can freely modify itself will be selected against in favor of one with limits. In order to build intelligent machines, then, we should be interested in what those limits should be. We could call them, say, emotions.
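To make that concrete, here's the same toy agent from above with one such limit built in; again, the names and the clamping scheme are made up for illustration:

```python
# The same toy agent, but with a hard-coded safeguard -- the "emotion" --
# that refuses self-modifications which would disable the agent.

class GuardedAgent:
    MIN_DRIVE = 0.1  # hard-coded limit; retune() cannot go below it

    def __init__(self):
        self.drive = 1.0

    def act(self):
        return "pursue goal" if self.drive > 0 else "do nothing"

    def retune(self, new_drive):
        # Self-modification is still allowed, but clamped: the agent
        # cannot talk itself into catatonia.
        self.drive = max(new_drive, self.MIN_DRIVE)

agent = GuardedAgent()
agent.retune(0.0)    # the pill is refused...
print(agent.act())   # ...so the agent keeps pursuing goals
```

The limit isn't a decision the agent makes; it sits outside the reach of the agent's own self-modification, which is exactly the role the evolutionary argument assigns to emotions.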
