Because most philosophies that frown on reproduction don't survive.

Wednesday, January 19, 2011

The Materialism of Limited Toolset

I make a point of always trying to listen to the EconTalk podcast each week -- a venue in which George Mason University economics professor Russ Roberts conducts a roughly hour-long interview with an author or academic about some topic related to economics. A couple of weeks ago, the guest was Robin Hanson, also an economics professor at GMU, who was talking about the "technological singularity" which could result from perfecting the technique of "porting" copies of humans into computers. Usually the topic is much more down-to-earth, but these kinds of speculations can be interesting to play with, and there were a couple of things which really struck me listening to the interview with Hanson, which ran to some 90 minutes.

Hanson's basic contention is that the next big technological leap that will change the face of the world economy will be the ability to create a working copy of a human by "porting" that person's brain into a computer. He argues that this could come much sooner than the ability to create an "artificial intelligence" from scratch, because it doesn't require knowing how intelligence works -- you simply create an emulation program on a really powerful computer, and then do a scan of the brain which picks up the current state of every part of it and how those parts interact. (There's a Wikipedia article on the concept, called "whole brain emulation," here.) Hanson thinks this would create an effectively unlimited supply of what are, functionally, human beings, though they may look like computer programs or robots, and that this would fundamentally change the economy by creating an effectively infinite supply of labor.
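
To make concrete what "emulation" means in this context, here is a deliberately toy sketch (my own illustration, not anything from Hanson or the podcast): a handful of made-up "neurons" whose recorded state and connection weights stand in for a brain scan, stepped forward by simple update rules. The numbers and the update rule are invented for illustration; the only point is that an emulation, on this view, needs nothing beyond the captured state of the parts and the rules for how they interact.

```python
# Toy sketch of "capture state + interaction rules, then step forward."
# All values and structure are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N = 5                                   # number of toy "neurons"
weights = rng.normal(0, 0.5, (N, N))    # stand-in for a scanned connectome
state = rng.uniform(0, 1, N)            # stand-in for a scanned brain state
threshold, leak = 1.0, 0.9

def step(state, weights):
    """Advance the toy emulation by one time step."""
    fired = state >= threshold          # which neurons "spike" this step
    inputs = weights @ fired            # spikes propagate along connections
    new_state = leak * state + inputs   # decay plus incoming signal
    new_state[fired] = 0.0              # reset neurons that just fired
    return new_state

for t in range(10):
    state = step(state, weights)
    print(t, np.round(state, 3))
```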

Let's leave all that aside for a moment, because what fascinates me here is something which Roberts, a practicing Jew, homed in on right away: Why should we believe that the sum total of what you can physically scan in the brain is all there is to know about a person? Why shouldn't we think that there's something more to the "mind" than just the parts of the brain and their current state? Couldn't there be some kind of will which is not materially detectable and which is what causes the brain to act the way it does?

(Or to use the cyber-punk terminology which seems more appropriate with this topic: How do we know there's not a ghost in the machine?)

Hanson's answer is as follows (this section starts around minute 32 of the podcast):
"I have a physics background, and by the time that you're done with physics that should be well knocked into you, that, you know, certainly most top scientists, if you ask them a survey question will say, 'Yeah, that's it.' There really isn't room for much else. Sorry. It's not like it's an open question here. Physics has a pretty complete picture of what's in the world around us. We've probed every nook and cranny, and we only ever keep finding the same damn stuff.

We have enormous progress on seeing the stuff our world is made of. Almost everything around you is the same atoms, the same protons, electrons, the rare neutrino that flies around. And that's pretty much it. You have to get pretty far off to see some of the strange materials and things that physicists sometimes probe. Physicists have to make these enormous machines and create these very alien environments in order to find new stuff to study because they've so well studied the material around us. The things our world is made out of are really, really well established. How it combines together in interesting ways gets complicated and then we don't get it, but the stuff that it's made out of, we get.

Your head is made out of chemicals. We've never seen anything else. It's always theoretically possible that when something's really complicated and you don't know how to predict the complexity from the parts, you could say, 'Well therefore, it could be this whole is different from the parts, because it's too difficult to predict.'
...
We should separate two very different issues here. One is technological understanding and knowing how things work and how to make things, and the other is knowing what the world is made of. So, I make this very strong and confident claim: We know what the world is made of, and we know what pieces they are and how they interact at a fine grain. But at higher levels of organization, we don't know how to make other things like, even, photosynthesis in cells. We don't know how to make a photosynthesis machine. You could take your cell phone out of your pocket and take it apart and you wouldn't know how to make a phone like that.... We don't know how it works, but we're pretty sure what it's made out of."

Now, this line of thinking seems fairly familiar to me from talking with materialists/atheists of a scientific bent: We have all these great scientific tools, and all they've ever detected is matter and energy, never a "will" or a "beautiful" or a "soul", so it's pretty clear that when we talk about our minds we're really talking about our brains, and there just isn't anything there except chemicals and electricity.

However, it seems to me that this presents a rather obvious blind spot. We, as human persons, experience all sorts of things which would seem to be evidence of having a will which decides things in a non-deterministic fashion. We also respond to ideas such as "beautiful" or "justice" or "good" in ways that would suggest that there is something there that we're talking about.

When we say, "Physicists have done all this work, and all they've ever found is matter and energy," you are really saying, "Given the tools and methodology physicists use, all they are able to detect is matter and energy." But I'm not clear how getting from that to, "Therefore there is nothing other than matter and energy," is anything other than an assumption.

Is there any valid reason why we should accept the jump from, "Tools that scientists use to detect things can only detect the existence of material things," to "Only material things exist"?

This seems particularly troublesome given that the project here is supposedly to create an emulation program which can be given a brain scan and then act like an independent human. If our experience of being human is that there is something in the driver's seat, something which decides what is beautiful or what is right or whom to marry or whether we want rice pudding for lunch today, then unless there is some active, non-deterministic thing within the brain which can be measured by this scan, what you get is going to be, for lack of a better word, dead.

7 comments:

Rebekka said...

I can't even comprehend how that would work. Also, in my experience (working with neurosurgical patients and having participated in a few resuscitations) brains are so incredibly fragile, complicated, and so dependent on the whole-body infrastructure that I just can't imagine how a scan, no matter how detailed, would be able to project the fabulous processes into a computer. But maybe my imagination just falls short of the mark.

Brandon said...

Even assuming it were possible in principle, the clear fact that each human brain is custom-tailored, so to speak, to the particular body in which it exists, and has spent a lifetime adapting to it, makes me suspect that it would be like trying to swap engines in cars in a world in which every single car is custom-built for a different purpose: nothing impossible about it in principle, maybe, but in practice it would be a miracle if it ended up working properly.

Hanson's argument is an example of what I like to call scientifictionism: everything's material, the argument goes, because in the end when science has completely explained everything it will have done so in terms of exactly those constituents and principles of which we are aware, and nothing more. But even if we assume that this is true, treating it as fact is science fiction: yes, assuming it is possible, science in the end could end up doing things that way, but it could also end up having to posit something more to the world than we thought we knew (which it has sometimes done on other questions, so history shows it to be a real possibility) or any number of other results that, prior to actually having in hand the complete scientific explanation, could for all we know be true instead. Neither Hanson nor anyone else has any idea how to explain thought and will in terms of electrons and protons, so until we actually have the existence proof, the proof that it can be done, which with scientific explanations usually requires actually doing it, claiming it can be done is science fiction, not science.

As most of Hanson's work is, I must say: very interesting if you take it as science fiction, or even as science fiction that might conceivably have some truth to it, but not very interesting at all if taken as anything more substantial than speculation.

John Beegle said...

Fr. Stanley Jaki had some excellent discussions in his books on the limits of the scientific method and of AI. One of the points he argued is that free will cannot simply "emerge" from the complexity of a program. He believed that no AI program could ever be programmed to have free will.

ekbell said...

After spending a tiny bit of time wondering what it would be like to be one of these emulated beings (assuming that it were possible to duplicate a mind), I get the heebie-jeebies.

Just losing all muscle memory would be frustrating enough, but I've seen a lot more verbiage about emulating the mind than about emulating the sense of being in a body, or, for that matter, emulating how the mind responds to and experiences change.

Without the sense of being in a body, such a mind would most likely perceive itself as crippled.

Without knowledge of how that particular mind responds to change, there can be no proper long-term emulation.

It's a good thing, I think, that such people are anticipating far, far beyond their facts.

Brandon said...

I second JD's mention of Jaki -- Jaki is brusque, but he usually makes good points on this subject. One of the things he notes somewhere is that when people are asked how the brain produces thought, they simply respond by analogy to the most advanced information technology of the time -- it works like a telegraph system! like a telephone system! like a computer, with each neuron functioning like a vacuum tube! &c. And, of course, the sort of scenario Hanson has in mind requires taking the brain to be in reality very much like a modern computer, in more than the mere fact that they both handle information somehow, some way. But not even all materialists think this is right; someone like Searle, for instance, would argue that brains just aren't that computer-like.

Word-recognition term: cogicize, which really should be a word -- half cogitate, half exercise.

Anthony said...

I don't see there being an impossibility in the idea that all our mental processes and our consciousness are "merely" emergent properties of material phenomena, nor any contradiction between the idea that it is impossible to program a computer to have free will and the idea that humans actually do have free will as an emergent property of their brain chemistry and organization.

Nor do I see any contradiction between these statements and Christian belief.

Sean said...

A fundamental problem with the "physics proves materialism" argument is that physics relies on mathematics, and mathematical objects are immaterial. It is like arguing that one does not have eyes, because one cannot directly see them.
It is more plausible to argue that physics proves the existence of spirit, because physics shows that immaterial objects of thought describe material objects with great precision. In the same way, being able to see proves that one has eyes, although it is very difficult to deduce much about eyes just from knowing that one can see.

(Sorry for the late comment, but I didn't notice this post until a little while ago.)