Back in 1950 Alan Turing wrote a paper entitled "Computing Machinery and Intelligence" in which he proposed a test (since named the "Turing Test") to determine whether an artificial intelligence had been successfully created. The basic idea is this: person A has a conversation (via typed text) with persons B and C, one of whom is another human sitting at a keyboard and one of whom is a computer program. If A is not able to tell which of the two entities he's talking to is a human and which is a computer, then this would be evidence that the computer is an artificial intelligence, at least to the extent that it can functionally behave like a human.
Of course, it's a bit flattering to think that the way to determine if a machine is a "thinking machine" is by seeing if it can interact with us humans in a social setting. On the flip side, some levels of human interaction are so low-level that one can imagine a fairly simple computer program succeeding at them pretty well. For instance, I would imagine you could probably be moderately successful in creating a computer program that could present a reasonable facsimile of human communication via Facebook statuses, links and comments. But that's in part because communication on Facebook is pretty simplistic, and you can always not respond or throw out non sequiturs without seeming like anything other than a fairly normal Facebook user.
Building a computer program capable of producing a passable simulation of a more wide-ranging conversation is, however, a lot trickier, and such a feat is clearly some ways off in the future.
Thinking about this in relation to the question of whole brain emulation, which I wrote about last week, it occurs to me that there are two questions here:
1) Is it possible to create a computer program which could fool a human into thinking that he is having a conversation with another human?
2) Is it possible to create a computer program which is actually capable of having a conversation for its own sake?
The first of these is achievable if one can build a good enough set of algorithms around how humans respond to questions, common knowledge for conversational grist, etc. The second, however, is much trickier. When we converse with someone we generally do so because we want to communicate something to that person and/or we want to know something about that person. In other words, conversation is essentially relational. You want to know about the person you are talking with, and you want them to know about you. You want to establish areas of common interest, experience and belief. You want to bring that person to share certain ideas about you or about the world.
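To make the first question concrete, here's a toy sketch of the sort of thing I have in mind (all the patterns and canned replies below are invented for illustration; a serious attempt would need thousands of rules plus some model of conversational state):

    import random
    import re

    # Toy pattern/response rules: each entry pairs a regular expression
    # with a list of canned replies. Everything here is invented for
    # illustration only.
    RULES = [
        (re.compile(r"\bhow are you\b", re.I),
         ["Pretty well, thanks. How about you?", "Can't complain. You?"]),
        (re.compile(r"\bi (?:like|love) (.+)", re.I),
         ["What do you like most about {1}?",
          "I've heard good things about {1}."]),
        (re.compile(r"\?\s*$"),
         ["Good question. What do you think?", "Hard to say, honestly."]),
    ]

    # Non sequitur fallbacks: on Facebook, these pass for normal behavior.
    FALLBACKS = ["Huh, interesting.", "Tell me more.", "Ha! Fair enough."]

    def respond(utterance: str) -> str:
        for pattern, replies in RULES:
            match = pattern.search(utterance)
            if match:
                reply = random.choice(replies)
                # Splice any captured text back into the canned reply.
                for i, group in enumerate(match.groups(), start=1):
                    reply = reply.replace("{%d}" % i, group.strip())
                return reply
        return random.choice(FALLBACKS)

    print(respond("I love hiking in the mountains"))
    print(respond("So what do you think happens next?"))

Something in this vein (the old ELIZA program worked on similar pattern-matching lines) might pass as a Facebook commenter; it clearly wouldn't survive a wide-ranging conversation.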
I'm not sure that it would ever be possible to build a computer program which had these feelings and desires. Oh, sure, you could give it a basic like/not-like function where it tries to achieve commonalities or confidences and, if it is rebuffed, puts its interlocutor in the "not like" category and relates to it differently. But this is very different from actually wanting to know about someone and wanting to like and be liked by them. How our own human emotions work in this regard is far from clear to us, so I can't imagine that we're in any position to understand them so well that we can create copies in others.
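That bookkeeping is easy enough to sketch, which is rather the point; everything below (the class, the rapport tally, the threshold) is invented purely for illustration:

    # A crude sketch of the "basic like/not-like function" described above.
    class Interlocutor:
        def __init__(self, name: str):
            self.name = name
            self.rapport = 0  # running tally of warm vs. cold exchanges

        def record_exchange(self, was_warm: bool) -> None:
            self.rapport += 1 if was_warm else -1

        @property
        def category(self) -> str:
            # The program "relates differently" based on a single number.
            return "like" if self.rapport >= 0 else "not-like"

    alice = Interlocutor("Alice")
    alice.record_exchange(was_warm=True)
    alice.record_exchange(was_warm=False)
    alice.record_exchange(was_warm=False)
    print(alice.name, alice.category)  # -> Alice not-like

The program sorts its interlocutors by a single running number. Nothing in it wants anything.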
Of course, in real conversation you often can't tell whether the person you're talking with, even face to face, is actually interested in you or actually likes you. This question is a source of considerable concern in human interactions. So perhaps the question of whether a computer can care about who you are or what you talk about is irrelevant to the question of whether a computer could be designed that could pass the Turing Test. But I do think that the question is probably quite relevant to whether a computer could ever be a person.
Parresian eis ten Eisodon ton Hagion
9 comments:
Further complicating this is that there are entire classes of people who can't meet the criteria necessary for question #2 above, people who might be incapable of empathy but are no less human (or intelligent) for it.
That's a good point, and especially important to bring up in a modern context.
With a more medieval or classical view of human nature, you can say something like, "Man is a rational animal" and not mean by that, "Therefore anyone who doesn't have sufficient reasoning capacity is not a human." But the mainstream of modern thinking seems to reject the idea that there is any sort of essential character which all human beings share, and so people take the Peter Singer approach and rule that anyone who lacks a sufficient amount of specific attributes can be treated differently than other humans.
I think the distinction that I'd make in this case is that it is in the nature of being human to be able to care about someone that one is relating to in conversation -- though for specific human individuals this may not be possible because of development or defect. If no AI entities are able to do this thing (care about and be interested in others) then that would seem to set their nature apart as something wholly other.
Very good distinction.
So... to really pass the Turing Test, it would be pretty much the same as the old "moral being" thing? That is, if they act like a moral being (empathy enough to WANT to converse would be an example) then you should assume that they are?
I think Jimmy Akin went over the moral being thing about Vampires a while back; I could be phrasing it wrong.
(If I remember the theories right, most hold that humans who really do lack empathy have been damaged, rather than naturally not having any empathy.)
Dear Darwins:
Nice piece. Have you read much about Godel's Incompleteness Theorem? This is one of the more misunderstood theorems in mathematics. In it, Godel proved that no formal system capable of expressing arithmetic can be both complete and consistent, which means that if such a system is consistent, there will always be simple theorems about numbers that cannot be decided from within the system itself. What people often miss about this, because knowing it would require reading the proof itself and not just the theorem, is that (1) any formal system can be reduced to a Number Theoretic system, and (2) the "undecidable" questions are only undecidable from within the system - in fact, their truth values can easily be decided with a simple exercise in logic.
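For reference, the usual textbook statement (I'm paraphrasing from memory; Godel's original version needed omega-consistency, which Rosser later weakened to plain consistency) runs roughly:

    \textbf{First Incompleteness Theorem.}
    If $F$ is a consistent, effectively axiomatized formal system strong
    enough to express elementary arithmetic, then there is an arithmetical
    sentence $G_F$ such that
    \[
      F \nvdash G_F
      \qquad \text{and} \qquad
      F \nvdash \lnot G_F ,
    \]
    even though $G_F$ is true in the standard model of the natural numbers.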
What does this have to do with your post? Well, the human brain (distinguished here from the mind) is a formal system, albeit a complicated one. Thus, there are simple questions about numbers that will not be decidable from within this system ... yet these questions are in fact decidable by the human mind (here distinguished from the brain). Thus, without knowing it, I think that Godel furnishes us with a mathematical proof that the mind is more than the brain, or more crudely ... mind over matter. If this is true, then artificial intelligence (properly understood) is not possible.
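Spelled out schematically, the argument I'm gesturing at runs like this (my own paraphrase, not a rigorous proof; premises 1 and 3 are where skeptics push back):

    \begin{enumerate}
      \item The brain is equivalent to some consistent formal system $F$.
      \item By incompleteness, there is a sentence $G_F$ that $F$ can
            neither prove nor refute.
      \item The human mind can nevertheless see that $G_F$ is true.
      \item Therefore the mind can do something $F$ cannot, and so the
            mind is not identical to the brain.
    \end{enumerate}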
What I find interesting about this argument is that it doesn't need to get into the more obvious problems with the Turing Test (emotions, etc.), but can maintain the existence of simple-to-answer questions about numbers, which would seem to be the computer's specialty, that are (1) not decidable by the computer, and (2) more importantly, decidable by the human mind. Finding the questions may be difficult ... coding them perhaps even harder ... but they exist nonetheless, and therefore, even on the level of computer-friendly questions, the Turing Test is bound to fail.
Jake said,
Thus, without knowing it, I think that Godel furnishes us with a mathematical proof that the mind is more than the brain, or more crudely ... mind over matter.
Actually, Godel drew exactly this conclusion from his results, although not very many people followed him in accepting the inference.
Brandon,
Very interesting. I didn't know that. Thank you for the information. You wouldn't happen to have a reference for that, would you? I would love to incorporate it into the talk I give on Godel's proof.
Pax,
Jake
Jake & Brandon,
No, I hadn't known that. Fascinating.
I think I've got a book on Godel in my Amazon cart somewhere, but honestly all I really know about his theorem is the name. I wish I knew a lot more math than I do, but my efforts at adult self-education in this regard have been slow, to put it charitably.
Hi, Jake,
Godel discusses it, if I recall correctly, in the Gibbs lecture; there might also be something in some of his correspondence with Hao Wang. I don't recall him tackling AI directly, but I think (I don't have the Gibbs lecture in front of me, and it's been a while) he does criticize Turing indirectly, using the incompleteness theorem to argue that the mind is not describable by any single formal system due to the inexhaustibility of the mathematics available to it.
Darwin,
J.R. Lucas is probably the most famous exponent of the general kind of argument Jake was suggesting. His paper Minds, Machines and Godel is online and pretty readable as an introduction. The basic argument, I think, is actually easy to follow; at least I can follow it. It's the objections and replies that get dizzyingly technical, and some of them I can follow and some of them are way beyond me.
I hear you on the difficulties of mathematical self-education. I've been trying to get a handle on category theory for some time now, and it's like trying to navigate a large maze in an unreliable go-cart that continually slams into the wall.