
Wednesday, September 26, 2012

Fish Heads, for fun and mental profit

I know that there are philosophical songs and then there are philosophical songs, but around here few pieces of music have occasioned as much discussion as Fish Heads, by Barnes and Barnes.


The list of conditions for what is or is not a fish head has led to great speculation as to which of our acquaintances may actually be fish heads in disguise.

For example, here are some of the conditions of fish headery:
a) they don't wear sweaters;
b) they don't play baseball;
c) they're not good dancers;
d) they don't play drums.

Some of the youngsters were a bit worried that Daddy might be a fish head, then, until it was pointed out that he does, occasionally, wear sweaters. In fact, most of us make the non-fish head category by virtue of our positive association with sweaters, although Julia is a good dancer, Jack has played the drums, and I have played baseball. Any one of these is sufficient to establish non-fish head status, but we like to be doubly protected.

What are the positive conditions of being a fish head?
a) roly-poly;
b) in the morning, happy and laughing;
c) in the evening, floating in the stew;
d) get into movies free;
e) can't talk.

Fortunately, c) bars Baby from being a fish head, although her dinnertime habits make one wonder.

But now, examine this statement: "Roly-poly fish heads are never seen drinking cappuccino in Italian restaurants with Oriental women (yeah)." There's a lot to unpack here. Do fish heads never drink cappuccino, or is the cappuccino ban only in effect when they are at Italian restaurants? What if they're at an Italian restaurant with Polish women? Or Oriental men? We're not given enough information to make broader statements, but with the help of Graph Jam, we put the statement into the form of a Venn diagram.

Let me anticipate correction by pointing out for myself that yes, I misspelled "cappuccino".
So we can say with certainty that if one meets all three of these conditions, one is definitely not a fish head. But wait! What happens if one meets all these conditions invisibly? After all, non-fish headery is contingent on being seen doing all these things.
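For the formally inclined, the rule can even be written down as a little predicate. Here is a minimal sketch in Python (my own playful encoding, not anything from the song), capturing the point that only the visible intersection of all the conditions settles the matter:

    # Fish heads are *never seen* drinking cappuccino in Italian
    # restaurants with Oriental women, so anyone observed doing all of
    # that at once is definitely not a fish head.
    def definitely_not_a_fish_head(seen, drinking_cappuccino,
                                   in_italian_restaurant, with_oriental_women):
        # Only the seen intersection of all three conditions rules
        # fish-headery out; doing it invisibly proves nothing.
        return (seen and drinking_cappuccino
                and in_italian_restaurant and with_oriental_women)

    # Observed having cappuccino at an Italian restaurant in the right company:
    print(definitely_not_a_fish_head(True, True, True, True))   # True: not a fish head
    # The same scene, unobserved, leaves the question open:
    print(definitely_not_a_fish_head(False, True, True, True))  # False: still unsettled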

Obviously, there are layers of richness here that we have yet to unpack.

Tuesday, December 06, 2011

Assuming Meaning

A while back, Jennifer Fulwiler wrote a piece about her conversion from atheism to Catholicism, in which she talked about how Christianity answered the question of why we act as if life has meaning when in a strictly materialist world it would appear not to. This post drew a fair amount of criticism from atheists (and some believers) who insisted that even if we assume that humans are strictly material, deterministic organisms (with free will, goodness, etc. being mental constructs/illusions), that doesn't mean that life isn't beautiful and full of meaning. After all, we naturally act like life has meaning, so why not just assume that life has meaning to the extent that we act like it does? (via Leah) Ross Douthat tries to put together a thought experiment to address this line of argument:
Suppose, by way of analogy, that a group of people find themselves conscripted into a World-War-I-type conflict — they’re thrown together in a platoon and stationed out in no man’s land, where over time a kind of miniature society gets created, with its own loves and hates, hope and joys, and of course its own grinding, life-threatening routines. Eventually, some people in the platoon begin to wonder about the point of it all: Why are they fighting, who are they fighting, what do they hope to gain, what awaits them at war’s end, will there ever be a war’s end, and for that matter are they even sure that they’re the good guys?

…At this point, one of the platoon’s more intellectually sophisticated members speaks up. He thinks his angst-ridden comrades are missing the point: Regardless of the larger context of the conflict, they know the war has meaning because they can’t stop acting like it has meaning. Even in their slough of despond, most of them don’t throw themselves on barbed wire or rush headlong into a wave of poison gas. (And the ones who do usually have something clinically wrong with them.)… Instead, given how much meaningfulness is immediately and obviously available — right here and right now, amid the rocket’s red glare and the bombs bursting in air — the desire to understand the war’s larger context is just a personal choice, with no necessary connection to the question of whether today’s battle is worth the fighting.
One of the things that strikes me about this exchange is the extent to which it underlines different modes of thinking. From Douthat and Fulwiler, we have an essentially teleological mode of thinking, one in which the questions "why is that?" and "what does that mean?" in some final sense are the most important human questions. The opposing view in this case is a functional view which seems to draw a lot from engineering and scientific methods of the more procedural sort: "Okay, look, we're not really sure why we should think any of this has meaning, but clearly we do, so that's functionally good enough to go with for now. Let's get on with other stuff."

I'm somewhat flummoxed as to how one would find it remotely satisfying to address a question such as "meaning" from a strictly functional perspective. Though perhaps that just shows how much in the former camp I am. However prone to skepticism I am (and it's a strain that runs strongly in me), one of the things that makes Catholicism so much more intellectually satisfying to me than the alternative of agnosticism is that I don't see how one can answer questions like "why do we exist" and "what is our purpose" with a shrug of "Well, we seem to be here, so who cares."

Monday, October 03, 2011

Excessive Parsimony as Intellectual Poverty

Just Thomism had a post last week that struck me because it encapsulated my reaction to the materialism-as-parsimonious-explanation point of view:
John Wilkins explains the reason he is a physicalist:
When I lost my belief in religion I had to decide what I needed to accept as a bare minimum. I decided that I needed to believe in the physical world. I never found the slightest reason to accept the existence of anything else. To this day I am a physicalist only because I never found the need to be anything else.
The principle of parsimony suggests that one should not believe in more than one needs to. Even if it does make you feel comfortable.

There are many reasons why someone would be a physicalist (John himself gives others), but this one is complete and absolutely fundamental. There simply is no reason behind this one, or at least there need not be. After this, physicalism can fall back into the defensive activity of answering various objections -- it need not seek to do any more to establish itself in a positive way. If we tried to push the analysis any further back, we would slip into the non-rational sphere of personal and somatic characteristics, the infinite ocean of the subconscious, and the dark causality of whatever else there is.

My fundamental reason is the contrary of Wilkins. His challenge was to believe as little as possible, mine was to believe in the greatest thing possible. His fundamental outlook is critical and minimalist, my fundamental outlook is to find the greatest or loftiest thing that I can. He appeals to parsimony, and there is also a clear implied appeal to certitude; my appeal is to the natural desire to seek what is highest and most perfect. He takes it as obvious that one should never posit more than he needs to; I take it as equally obvious that no one would ever settle for the merely necessary and minimal. He might well see my choice as wishful thinking or a naive uncritical approach that could leave me duped in a thousand ways; but I see his choice as mean, scrupulous, and closed-minded. His appeal is to Ockham's razor, mine is to Aristotle's dual axioms that what is most perfect in itself is least knowable to us and that we cannot but seek the beatitude that comes from knowing what is most perfect in itself.

To put it in a word, John sees everything beyond the minimum given in initial experience as a threat to philosophy, and even as unphilosophical; I see the whole point of philosophy as finding some object beyond this minimum given in initial experience.

Thursday, September 01, 2011

Evidence, Belief and Will

[This post originally ran back in December of 2006. I've made several changes from the original due to typos and accuracy.]

I had the chance to catch up on John Farrell's blog yesterday, and from there came across an interesting post by Ed of Dispatches From The Culture Wars which dealt with whether a theist could be a positive influence on science:
I reject the notion that belief in God, in and of itself, takes anything away from science education. Ken Miller is a theistic evolutionist. His scientific work is impeccable, as are his efforts in science education. Can Moran point to anything at all in Miller's scientific work that is "sloppy"? I doubt it. Can he point to anything at all in his work on science education, the multiple textbooks that he has authored on evolutionary biology, that is affected in any way whatsoever by his Christian faith? Again, I doubt it.

So what he's really arguing here is that despite Miller's successful work in the laboratory explaining molecular evolution and his astonishingly tireless work on behalf of sound science education all over the country, the mere fact that he believes in God somehow undermines the principles of science. Further, that I should be ashamed for not declaring Miller my enemy as he has. And if your bullshit detector isn't in overdrive right now, it must be broken.

All of this just reinforces my suspicions that we simply are not on the same team and are not working the same goal. My goal is to protect science education. Moran's goal is to protect his atheism against any and all religious impulse, even if held by people who are excellent scientists and defenders of science education. And as his team pursues their goal they seek nothing less than a purge of the most valuable members of my team as we work to achieve ours.

This in and of itself is an important point to be made, but the comments quickly veered off into a more basic argument over whether religious belief is so irrational that all other views held by a believer are thus suspect. From one commenter:
The belief in a god doesn't necessarily mean that one can't do good science, but it does make all that person's ideas less credible. To believe in something for which not only is there no evidence (like leprechauns and gods) but for which every attempt to find evidence has turned up nothing is to raise doubts about how rational one can be about anything.
Now, anyone who reads much stuff written by skeptics will already be tired of this line of thinking, but this particular statement struck me as so bald in its assumptions that it's actually useful in unpacking some of what's going on in the materialist vs. religious debate.

One basic assumption that those on the "religion is totally irrational" side make is that there is no other form of evidence than physical evidence and that there is no other form of inquiry than scientific inquiry. Thus, when one commenter said it was not irrational to accept the existence of non-physical reality, one of the materialist partisans snapped back, "non-physical reality, is that where all the married bachelors live?"

What this person is clearly doing is unconsciously making an assumption about what 'reality' consists of. Many things that we think of as very real in our human experience do not exist in a pure physical form. Some of these are mathematical concepts. For instance, there is no such thing in physical reality as a perfect circle. Does this mean that circles do not exist? We can define a circle mathematically, but all of the circular things that we in fact find in the world are (however minutely) imperfectly circular.
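To make the contrast concrete: the circle of radius r centered at a point (a, b) is defined as the set of all points (x, y) satisfying

    (x - a)^2 + (y - b)^2 = r^2

Every drawn or manufactured "circle" satisfies this equation only approximately; the mathematical object satisfies it exactly.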

Another set of non-physical things which we often believe that we experience (though perhaps imperfectly) consists of qualities such as "goodness", "justice", "love", etc. We experience things that seem to contain these qualities to a greater or lesser degree, but we cannot actually find physical evidence of the qualities themselves. In a given circumstance, a husband giving his wife a dozen roses might be evidence of love. In another circumstance, he might do it so that she won't suspect that he's sleeping with his secretary. Even assuming an infinitely wide frame of reference such that all external circumstances (such as the secretary) were known, no degree of strictly physical evidence can prove the existence of the non-physical quality: love. One could, of course, dispense with the idea of love entirely, and insist that it is simply biologically advantageous in the long term for each mate to believe that the other one has "love" for the other, since this creates greater family stability and thus more successful rearing of offspring. This explanation can be seen as responsible for all our experiences of "love", but it is not necessarily satisfying from a human point of view.

This brings us to the other thing that I think often goes unacknowledged in these kinds of conversations: In any given situation, there is often more than one conclusion which explains all of one's experiences with logical consistency, and at such a point, one must make a decision about what to believe. This decision is not merely arbitrary. Usually you will make it because you are convinced by one of the experiences or observations which make up the "evidence" that you are weighing.

In a classic example, it is logically consistent with one's observations of the world to conclude either that there is an outside world populated by other thinking, acting entities or to conclude that one's entire experience of the world is the result of a demented imagination, and there is in fact no reality but one's self. Both explain all of one's experiences and are logically consistent. However, since solipsism is profoundly un-useful, few people choose to believe it.

Similarly, well before monotheism became dominant in the West, some pagan philosophers had worked around to the idea that since no thing exists without a cause, and since an infinite regress of causes doesn't make any sense, there must be a single, eternal, uncreated thing which existed by its nature and was in turn the cause of all other things. The "unmoved mover" proof of God's existence thus goes back further than Christianity does. However, modern non-believers generally laugh it off with a "If you can believe God exists without a creator, why not believe the universe exists without a creator?"

The answer is, of course, that one can. The force in the "unmoved mover" argument is that our experience generally tells us that normal physical things always have causes, and thus the universe as a whole must have a cause which is wholly different from all those things which we normally experience. However, if one is ready to instead believe that just this one time the physical universe behaved in a way wholly different from how we've ever experienced it to behave, that belief is also fully self-consistent. One must, in the end, decide which metaphysics to believe. The evidence cannot make that decision for you. There is no one conclusion which is so overwhelmingly clear as to be unavoidable. Rather, if one is willing to accept the implications of either, one may then adopt that belief with full logical rigor.

At the end of the day, belief in God, or belief in a spouse's love, or belief that all men are created equal, or any such belief, may be supported by an incredible amount of evidence, but the belief itself is a choice. The evidence will only take you so far. Belief does not have to be some sort of "blind leap". But it is a crossroads, and one must decide which way to go.

Tuesday, August 30, 2011

Information and Metaphysical Conclusions

I was struck by Kyle's post on Friday, "Abortion, Rational Decision-Making, and Informed Consent", but it took me a while of thinking it over to pin down exactly what I find wrong with it. Kyle is addressing the issue of "informed consent" laws which require a woman seeking an abortion to view an ultrasound of her baby or read an explanation of fetal development at her child's stage of pregnancy. He is concerned, however, that such laws miss the real moral point:
Catarina Dutilh Novaes explains her worry about some new laws requiring physicians to show a woman an ultrasound of the fetus and describe its status, organs and present activity before performing an abortion. She writes: “It does not take a lot of brain power to realize that what is construed here as ‘informed decision’ is in fact yet another maneuver to prevent abortions from taking place by ‘anthropomorphizing’ the fetus” and “it is of striking cruelty to submit a woman to this additional layer of emotional charge at such a difficult moment.” She’s right, I suspect, about the underlying motivation behind the laws and the suffering their practice would impose. If the legislators and activists pushing these laws recognize the suffering they may inflict, they clearly see it as justified, weighing, as they do, the vital status of the nascent life as greater than the emotional status of the expectant mother.
...
There’s something to this. The information the physician is legally required to communicate by these new laws informs in a very limited way: it doesn’t provide evidence of personhood or a right to life or any such metaphysical or moral reality. The sight and description of the fetus may give the appearance of a human life worthy of respect, but, as pro-lifers note, appearance is not indicative of moral worth. An embryo doesn’t look like a human being, but that appearance doesn’t signify anything moral or metaphysical about it.

The woman, for having this information, is not in any better position to make a rational, ethical decision. It may cause her to “see” the nascent life as human, but it doesn’t offer her a rational basis for such a perception. Her consent is no more informed after seeing and hearing the physical status of the life within her, and so these new “informed consent” laws don’t achieve what they are supposedly designed to do.

There are places conducive to informing people about the nascent life’s stages of development and about what exactly, scientifically speaking, abortion does to that life. A high school health class, for example. There, the scientific information about the unborn life and abortion can be more thoroughly considered, and once fully understood, serve in other settings as a reference point for metaphysical and moral considerations. Consent to abortion should be informed, but the information these new laws require to be communicated does not on its own result in informed consent or provide an additional basis for a rational, ethical decision. Why? Because, by itself, appearance is not ethically relevant and can also be misleading.
Now on the basic point, I agree with Kyle: appearance is not moral worth. A person is not worthy of human dignity simply because someone looks at him or her and sees similarity. To say that would be to suggest the converse: that when someone looks at another and sees simply "other" he is justified in not treating that person with human dignity. For instance, one could imagine (though I think it is the far less likely option) a situation in which a woman is leaning against abortion because she thinks that the child inside her will look "just like a baby", she sees a fuzzy ultrasound of something that still looks like a tadpole on an umbilical cord, and she thinks, "Oh, that's all? It must not be a baby yet. I'll abort."  Clearly, in this case, the information would have led to the wrong conclusion.  An appearance of similarity or dissimilarity does not a person make.

At the same time, the suggestion that informed consent laws are a bad idea just rubs me the wrong way, not just from a pragmatic point of view but from a moral one, and when I have this kind of conflict between instinct and reason, I tend to poke at the issue until I come up with a reason why it is that the apparently reasonable explanation seems wrong to me.

Having gone through this poking exercise, I realized that Kyle's argument seems to imply that there are two sets of information -- information which relates to personhood, and information which relates to other qualities (appearance, sound, texture, etc.) -- and that informed consent laws are problematic because they require that people be provided with the latter type of information (information about appearance) when the relevant question is one of personhood. On this view, only information relating to whether the being in question is a person would be applicable to the decision being made.

This seems reasonable for a moment until you try to think what information is actually in the first set, the set of information which relates to personhood. And here lies the paradox: there is none.

As beings who are both physical and rational, we understand the metaphysical concept of "person", but the inputs which we can receive from the outside world (things which we might be informed of as "facts" via "informed consent") are all sensory inputs. We reach the metaphysical conclusion, "This other being is a person, just as I am a person," based on sensory information, not metaphysical information.

Famously, in the movie Juno the main character is persuaded not to have an abortion when her pro-life classmate tells her that her baby has fingernails. This detail is what humanizes the baby in Juno's mind and causes her to decide not to abort the baby. Responding to this example, Kyle says:
The scene in Juno shows the effectiveness of giving a description of the fetus in order to humanize it, and it’s good that she chose to keep the baby, but she didn’t exactly make an informed ethical decision. Whether or not her baby had fingernails is irrelevant to the morality of abortion. It doesn’t follow that because the baby had fingernails that it was a human being with a right to life that the law should protect, but acting as though this information about fingernails led to “informed consent” implies that it does.
At the literal level, of course, the attribute "having fingernails" is not something that makes a being a person. We would not say, "Man is an animal with fingernails." Nor, if a human being through some genetic deformity were born without fingernails, would we conclude that that member of our species was not a "person" because he lacked fingernails.

And yet, it is invariably through these surface level details that information comes into our minds and allows us, eventually, to form enough of an understanding of something that we are able to form metaphysical conclusions about it.

Picture, if you will, that at this moment I were to head down to the local coffee shop, and there I found Kyle sitting at a table with a banana.

"Darwin," Kyle informs me. "This banana is actually a person. It's an intelligent space alien."

My first reaction, after ordering a triple espresso, would doubtless be to respond, "It doesn't look like an alien. It looks like a banana."

My statement would have been about appearance, and yet, it would be completely normal for me to form the metaphysical conclusion that the banana was not a person based on this appearance combined with my experience of other similar-looking fruits. If a moment later, the thing-that-looked-like-a-banana were to rise in the air and trace in glowing letters a refutation of Derrida's claim that apartheid in South Africa was a consequence of phonetic writing which, "by isolating and hypostasizing being, ... corrupts it into a quasi-ontological segregation" -- I would rapidly revise my conclusions, since this would be behavior far more in keeping with my experience of persons than with my experience of bananas.

The fact is that we will invariably reach the metaphysical conclusion "this is a person" based on a grouping of non-metaphysical sensory inputs. A materialist approach would be to say that this means that metaphysical conclusions never follow from "the data" and thus should be abandoned. Since there is no specific, observable characteristic of which I can say "this is what makes something a person", this approach would reject personhood as a useful concept.

I would argue, instead, that it is precisely because we are beings able to perceive metaphysical realities through our sense of reason that we are able to take in a number of pieces of sensory "information" about something outside of ourselves and use those pieces of information to reach a metaphysical conclusion. In the case of deciding whether the unborn child is a "person" in the moral sense, pieces of information which might be key would be: member of our species (human), has unique DNA distinct from both mother and father, heart is beating, eyes have formed, moves spontaneously, etc. None of these pieces of information is metaphysical in import, and yet, from the combination of them all, many people would form the conclusion that the creature in question is "a human being".

Further, there is simply a visceral reaction to seeing someone. Recall the New York Times piece on "twin reduction" that was going around a few weeks ago:
One of Stone’s patients, a New York woman, was certain that she wanted to reduce from twins to a singleton. Her husband yielded because she would be the one carrying the pregnancy and would stay at home to raise them. They came up with a compromise. “I asked not to see any of the ultrasounds,” he said. “I didn’t want to have that image, the image of two. I didn’t want to torture myself. And I didn’t go in for the procedure either, because less is more for me.” His wife was relieved that her husband remained in the waiting room; she, too, didn’t want to deal with his feelings.
Kyle is right in saying that appearance itself is not evidence of personhood, but he is wrong in saying that this means that an ultrasound would not form a piece of "information" which would lead to a more "informed consent" with regard to abortion. In the end, no piece of information is in and of itself evidence of personhood. And yet, it is through these incomplete clues, these pieces of information which do not themselves indicate personhood, that we know that anyone at all is a person -- indeed, that anyone at all exists.

Friday, August 05, 2011

Knowledge, Faith and Will

Kyle writes about the concern which his stance -- that certain knowledge in religious matters is not possible -- has caused in some quarters:
If I’m less than certain in my religious faith, is my faith then weak or in question? In forsaking any certainty, do I risk forsaking my faith?

At the risk of sounding coy, I must confess the answer to these questions is possibly. Anyhow, I have two reasons for why I have no religious certainty and why I don’t think such certainty is really possible.

First, the basis of my religious knowledge—my knowledge of revealed truths—is the say-so of self-defined religious authorities—authorities who claim, without proof or conclusive evidence, that they speak for God. I believe them to be divinely inspired, at times, but neither they nor I can prove this for certain.

Second, what I call my religious faith may be something other than religious faith, either in part or in total. ... I cannot dismiss the possibility that my faith isn’t something otherwise than a response to a revealing God. It’s possible that what I call my faith experiences are the result of digestion, bodily chemistry, neurosis, the fear of death, or the desire for meaning. Because I do not know myself with certainty, I cannot know my faith with certainty. I cannot say for sure what it is.
This strikes me as conflating faith and knowledge, when it seems to me that in fact they are rather different things.

Knowledge is subject to all the limitations of evidence which Kyle points out. After all, I am not entirely sure in my knowledge that Kyle exists. Sure, I remember a long-haired guy who walked around Steubenville at the same time I was there and wrote his thesis on Tolkien as literature -- but it could be that all this in my mind is merely the result of the pokings of a Cartesian demon who is intent on spicing up my otherwise drab existence by inserting the illusion of a person like Kyle.

But at a certain point -- even knowing that I have less than absolute certainty about my evidence -- I make a choice to believe that Kyle exists. I don't have to do this. I could, I suppose, refuse to make a decision as to whether or not Kyle exists -- kind of like how I might refuse to make a decision as to whether there was a real being of some sort whom the ancient Greeks worshiped under the name of Apollo. Or I might hold that Kyle exists, but hold it rather hesitantly and refuse to take any actions or risks that would depend on Kyle definitely existing. (Like, say, lending him money.)

However, while the firmness with which I placed faith in Kyle's existence might depend on the extent to which I felt I had firm proof of his existence, the two aren't necessarily connected. I could refuse to believe that Kyle existed even in the face of overwhelming evidence (say, his whacking me about the head with a toy light saber) or I could insist on believing that he existed even if he refused to give me any evidence of his existence (say, if he never responded to my Facebook friend request).

Bringing the discussion back from Kyle to God, it seems to me that Kyle's friends need not necessarily fear that Kyle will "lose his faith" (i.e., decide not to believe in God) because Kyle finds that he does not have firm knowledge of God's existence -- because Kyle can choose to believe (firmly or not so much so) in God's existence irrespective of any doubts he may have of the firmness of his evidence for God.

Similarly, when someone asks Kyle if he is certain in his religious faith, it seems to me that the question is not, "Do you have complete certainty of God's existence?" (something which, it seems to me, is not possible in this life) but rather, "Are you likely to choose to stop believing in God?" This is a question, thus, about Kyle's actions, not about his knowledge.

Friday, July 01, 2011

Trolley Madness

At last, I have come across the Trolley Problem which truly gets at the difficulties of modern life.
On Twin Earth, a brain in a vat is at the wheel of a runaway trolley. There are only two options that the brain can take: the right side of the fork in the track or the left side of the fork. There is no way in sight of derailing or stopping the trolley and the brain is aware of this, for the brain knows trolleys. The brain is causally hooked up to the trolley such that the brain can determine the course which the trolley will take.

On the right side of the track there is a single railroad worker, Jones, who will definitely be killed if the brain steers the trolley to the right. If the railman on the right lives, he will go on to kill five men for the sake of killing them, but in doing so will inadvertently save the lives of thirty orphans (one of the five men he will kill is planning to destroy a bridge that the orphans’ bus will be crossing later that night). One of the orphans that will be killed would have grown up to become a tyrant who would make good utilitarian men do bad things. Another of the orphans would grow up to become G.E.M. Anscombe, while a third would invent the pop-top can.

If the brain in the vat chooses the left side of the track, the trolley will definitely hit and kill a railman on the left side of the track, ‘Leftie,’ and will hit and destroy ten beating hearts on the track that could (and would) have been transplanted into ten patients in the local hospital that will die without donor hearts. These are the only hearts available, and the brain is aware of this, for the brain knows hearts. If the railman on the left side of the track lives, he too will kill five men, in fact the same five that the railman on the right would kill. However, ‘Leftie’ will kill the five as an unintended consequence of saving ten men: he will inadvertently kill the five men rushing the ten hearts to the local hospital for transplantation. A further result of ‘Leftie’s’ act would be that the busload of orphans will be spared. Among the five men killed by ‘Leftie’ are both the man responsible for putting the brain at the controls of the trolley, and the author of this example. If the ten hearts and ‘Leftie’ are killed by the trolley, the ten prospective heart-transplant patients will die and their kidneys will be used to save the lives of twenty kidney-transplant patients, one of whom will grow up to cure cancer, and one of whom will grow up to be Hitler. There are other kidneys and dialysis machines available; however, the brain does not know kidneys, and this is not a factor.

Assume that the brain’s choice, whatever it turns out to be, will serve as an example to other brains-in-vats and so the effects of his decision will be amplified. Also assume that if the brain chooses the right side of the fork, an unjust war free of war crimes will ensue, while if the brain chooses the left fork, a just war fraught with war crimes will result. Furthermore, there is an intermittently active Cartesian demon deceiving the brain in such a manner that the brain is never sure if it is being deceived.

What should the brain do?
Excerpted from Michael F. Patton Jr., "Tissues in the Profession: Can Bad Men Make Good Brains Do Bad Things?", Proceedings and Addresses of the American Philosophical Association, January 1988.

Wednesday, June 22, 2011

Moral Sense and Unequal Exchange

Every week I make a point of finding the time to listen to the EconTalk podcast -- an hour-long interview on some economics-related topic conducted by Prof. Russ Roberts of George Mason University. Roberts himself has economic and political views I'm often (though not always) in sympathy with, but he's a very fair and thoughtful interviewer and has a wide range of guests. This week's interview was with a semi-regular on the show, Prof. Mike Munger of Duke University, and the topic was the concept of euvoluntary exchange which Munger has been developing.

Munger's project aims to identify why it is that some seemingly voluntary transactions are seen as morally repugnant by most people, and are either socially disapproved of or outright outlawed. So for example, say that Frank is very poor and desperately wants to provide for his family. Tom is very rich and is losing the sight in both his eyes. His doctor believes they can pull off a revolutionary new surgery and transplant a healthy eye into him, but they need the eye of a live, healthy person who matches Tom's blood type and DNA well. Frank is a match and is willing to give up an eye in return for a million dollars.

Now, there are a few people who lean heavily in the rationalistic direction who would say this sounds like a great idea because it makes both parties better off, but most people would react to this with revulsion, and it is in fact illegal to do this kind of thing in the US.

The interesting thing is that voluntarily donating an organ (so long as giving it up isn't considered too big a detriment to you) is considered morally admirable, and is legal. So, for instance, there was a case a year or two ago in which one young woman in our parish donated a kidney to another parishioner who needed a transplant.

Munger's argument is that in the Frank and Tom example, the transaction may seem voluntary but it's not really voluntary because of the disparity in means between Tom and Frank. Transactions that are really, truly voluntary (euvoluntary) are, he argues, always just -- at least in and of themselves as transactions. This leaves aside the question of whether the thing one seeks to procure is something which one should have at all -- say, weapons-grade plutonium.

The reason why we're comfortable with someone donating a kidney but not with someone selling a kidney is that in the case of selling the kidney we assume that someone may be doing the act out of desperate need for money rather than a true desire to help. Thus, we approve of the donation of the kidney because it is clear that it's motivated out of a sincere desire to help the other, but we deprecate the selling of a kidney because we fear that someone is being taken advantage of.

To distinguish between euvoluntary and non-euvoluntary transactions, Munger suggests that we look at the BATNA, or Best Alternative to a Negotiated Agreement.

The example that he gives is as follows (rewritten because the transcription is a little hard to follow, sounding too much like conversation): Say you walk into the grocery store and you see that they're selling bottles of water for $1000. This is totally outrageous, but there's another store right across the street selling water bottles for $0.99, so rather than getting upset you walk across the street and buy water for a dollar. The $1000 price may be stupid, but it isn't seen as really wrong, because your alternative to paying that $1000 price is just a five minute walk and a good laugh.

Now, say you've been lost in the desert for two days and you're on the point of dying of thirst. Someone drives up in a Jeep and you ask him if he has a bottle of water. "Sure," he says, "But it'll cost you $1000." When you hesitate, he starts to drive away, leaving you to die. So you agree to give him $1000 for his water.

I think most people would agree that this was an incredibly unjust thing to do, and the reason, Munger argues, is because there's a huge disparity in BATNAs between the two parties in the exchange. If the guy in the Jeep doesn't make the sale, he just drives on and is none the worse. If I don't buy, I die. Thus, the transaction is seen as unjust because it's not really free -- no more so than if he put a gun to my head and demanded a thousand dollars -- even though both parties are, on the face of it, better off because of the transaction. (I don't die of thirst, and the guy in the Jeep has his $1000.)
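As a rough illustration of Munger's test (my own toy encoding, with made-up utility numbers -- nothing like this appears in the podcast), one could compare each party's walk-away position directly:

    # Toy encoding of the euvoluntary test: compare each party's BATNA,
    # i.e., how each fares if no deal is struck. Utilities are on an
    # arbitrary illustrative scale.
    def batna_disparity(seller_no_deal, buyer_no_deal):
        return abs(seller_no_deal - buyer_no_deal)

    def looks_euvoluntary(seller_no_deal, buyer_no_deal, threshold=10):
        # Hypothetical rule of thumb: a vast gap in walk-away positions
        # marks the exchange as coerced-in-effect, however "voluntary".
        return batna_disparity(seller_no_deal, buyer_no_deal) < threshold

    # Grocery store: if I skip the $1000 bottle, I just cross the street.
    print(looks_euvoluntary(seller_no_deal=0, buyer_no_deal=-1))    # True
    # Desert: if I don't buy, I die; the Jeep driver loses nothing.
    print(looks_euvoluntary(seller_no_deal=0, buyer_no_deal=-100))  # False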

Now, I'm not entirely sure whether this is a moral intuition that we have (we certainly treat it as one) or if it's just a matter of social conditioning: we as Americans don't like things which we see as unequal or unfree. But it becomes an important question because through this moral sense we sometimes avoid transactions which could arguably benefit both parties a lot -- usually the "unfree" one far more than the better off one. Russ Roberts provides some interesting anecdotes later in the interview:
[These are from the show transcript, so forgive the conversational tone.]
[O]ne of the students in the class told me a story I found fascinating and that relates to what we've been talking about. She had been visiting in Nepal, and she had clothes that needed cleaning; and she found out she could hire a washerwoman to do her clothes for her. No washing machine or laundromats where she was. So, she went to hire someone and the wage was so appallingly low--let's say it was ten cents an hour--she was so horrified that she decided not to hire this woman. It would be exploitative.

I said: You've exploited her by not exploiting her. Maybe she'll only find something much worse. Maybe she'll take it voluntarily, not euvoluntarily. She was looking for work. Maybe she had a hungry child or needed money for medical care; she was desperate enough to work for ten cents an hour; and the student refused to engage in this transaction allegedly because she cared about the woman. As an economist this is very difficult to understand. The more I thought about it, the more I understood it.

When I tell this story to my students, one of the reactions is always: She should have paid her more. Then you ask the question: How much more? American minimum wage? American living wage? And if you offered her $10 an hour instead of ten cents an hour, what would her reaction be? Would she be thrilled? Offended? Where would the line form when the word got out that you were paying 100 times the going rate? There'd be a giant rent-seeking contest. How would you deal with that? I put myself in my student's shoes and tried to think about why you could come to that conclusion and feel good about it. It's the disparity in BATNAs. This is a student who was going to come back to American life, earn an extraordinarily large income by Third World standards and perhaps even a decent income by Western standards, and certainly compared to this woman. The gap between their wellbeing was so large over a lifetime that this was simply an unimaginable transaction. It's as if that transaction is inherently exploitative, not because of the features of the transaction, but because of the disparity in BATNAs. Both people clearly could be made better off -- why wouldn't you joyfully engage in this transaction? She couldn't do it. The punchline to the story is that she did her own laundry, threw out her shoulder, and ended up hiring the woman anyway eventually.
...
The personal experience that I had like this was I was once house sitting for someone while I was in Santiago, Chile, working at a think tank there the summer between years in graduate school. And it turned out that with the house there was a cook. So, I came home the first day I was house sitting and I put my feet up on the coffee table to read the newspaper, and a woman comes out of the kitchen and asks me in Spanish what I'd like for dinner. And I said: What do you mean? And she informed me she was the cook and was going to make whatever I wanted. And I was extremely uncomfortable with this; I said: Well, make whatever you feel like. And she was extremely uncomfortable.

So we eventually came to some conclusion about what she was going to cook, and she's in the kitchen cooking and I'm in the living room reading and I realize this is making me very uncomfortable. This woman is cooking for me who I'm only implicitly paying. She was being paid, but only a small amount. I went into the kitchen to chat with her, which totally violated the social norms; she was very uncomfortable. We proceeded to have an awkward conversation in very bad Spanish. I asked what music she liked; she liked Frank Sinatra and Julio Iglesias. I came to like Frank Sinatra.

[Munger] It's painful to hear this.

[Roberts] The next part was the more interesting part. I thought, sports is something people have in common; I asked her what her favorite football (soccer) team was. She rooted for Colo Colo, which is what the poor people in Santiago rooted for.

[Munger] You could see that coming.

[Roberts] And of course, all my friends rooted for Universidad de Chile. And I have to mention--what I loved about soccer in Chile, when my friends told me that was their favorite team I asked how it was tied to the University of Chile. They said: Well, there isn't one. I said: What do you mean? They said: They just use the name of the school. I thought how nice, because in America we pretend that the people with the name of that team are associated with the school--in fact, they are kind of like employees, unpaid employees in college. But here they actually totally sever the connection. The Duke University basketball team could be, like, the Celtics. But anyway, I realized again the disparity in our lifetime situations was just inherently uncomfortable. I did not like this woman cooking for me.

[Munger] You felt you were exploiting her somehow.

[Roberts] Right, and I was trying to soften that by chatting with her. As if that was going to help. Oh good, he came in to chat with me--

[Munger]--is he going to grab me?

[Roberts] Not just that. I don't think that was the worry. First, I violated the social norm that I'm trying to make conversation with her, and two, my conversation is not very good; all it does is enhance the feeling that I'm the Universidad de Chile fan and she's the Colo Colo fan. It was a total failure. I would have much preferred that she would not cook for me. I didn't want her there. I didn't want her to do that.

[Munger] Maybe even to the extent of if they had offered, you wouldn't have fired her, but if you said--maybe they'll give her a sabbatical, we'll give her the month off. And they wouldn't have paid her.

[Roberts] I would have said: Great!

[Munger] My question is: how much would you have paid to avoid having to deal with that, with that sense of exploitation?

[Roberts] The answer is some. Maybe even up to the point of saying: Lay her off for a month.

[Munger] Even though you wish her no ill.

[Roberts] Correct. It was an uncomfortable experience. It gave me an insight into my student, when I thought back on it. I think it's really no more complicated than that--just having such a big disparity in life situation. And in particular, if some of my life situation is contingent on the consummation of this contract. That explains why all of these different transactions are illegal, why all of these different situations we feel bad about if not formally illegal. Maybe it's not a transaction, but a sort of social relationship.
This is pretty much exactly the kind of thought process which often causes people to impose regulations (say, relating to Third World manufacturing, or to child labor) which end up hurting the people they seek to help. No question, it's appalling that people should be working away for fourteen hours a day sewing undershirts for only $2 a day, or that twelve-year-old kids are working in factories, but often these "sweatshop" and "child labor" jobs have waiting lists, because the alternatives for those seeking work are picking over garbage heaps or being sucked into prostitution at age 12 rather than factory work.

To the extent that we often leave people in even worse conditions for fear that engaging in business with them would be exploitative, it seems like the effort to understand why we have these moral intuitions (and how valid they are) is important. I'm not sure that Munger has really gotten much further than describing the phenomenon, but it seems an interesting and important first step.

Tuesday, January 25, 2011

A Conversation with Nobody

Back in 1950 Alan Turing wrote a paper entitled "Computing Machinery and Intelligence" in which he proposed a test (since named the "Turing Test") to determine whether an artificial intelligence had been successfully created. The basic idea: person A holds a conversation (via typed text) with persons B and C, one of whom is another human sitting at a keyboard and one of whom is a computer program. If A is not able to tell which of the two entities he's talking to is the human and which is the computer, then this is evidence that the computer is an artificial intelligence, at least to the extent that it can functionally behave like a human.
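To make the setup concrete, here is a minimal sketch in Python of the protocol itself (the two respond functions are hypothetical stand-ins, not a real chatbot):

    import random

    def human_respond(prompt):
        # Stand-in for the person at the keyboard.
        return "It depends on what you mean by that."

    def machine_respond(prompt):
        # Stand-in for the program under test.
        return "It depends on what you mean by that."

    def imitation_game(questions, judge):
        # Randomly hide which channel is the human and which the machine.
        channels = {"B": human_respond, "C": machine_respond}
        if random.random() < 0.5:
            channels = {"B": machine_respond, "C": human_respond}
        transcript = [(q, channels["B"](q), channels["C"](q)) for q in questions]
        guess = judge(transcript)  # the judge names the channel it thinks is human
        return guess == ("B" if channels["B"] is human_respond else "C")

    # A program "passes" when, over many trials, judges do no better than
    # the 50% this purely random judge achieves.
    print(imitation_game(["Do you enjoy poetry?"],
                         judge=lambda t: random.choice(["B", "C"])))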

Of course, it's a bit flattering to think that the way to determine if a machine is a "thinking machine" is by seeing if it can interact with us humans in a social setting. On the flip side, some levels of human interaction are so low level that one can imagine a fairly simple computer program succeeding at them pretty well. For instance, I would imagine you could probably be moderately successful in creating a computer program that could present a reasonable facsimile of human communication via Facebook statuses, links and comments. But that's in part because communication on Facebook is pretty simplistic and you can always not respond or throw out non sequiturs without seeming like anything other than a fairly normal Facebook user.

Building a computer program capable of producing a passable simulation of a more wide-ranging conversation is, however, a lot trickier, and such a feat is clearly some ways off in the future.

Thinking about this, however, in relation to the question of whole brain emulation which I wrote about last week, it occurs to me that there are two questions here:

1) Is it possible to create a computer program which could fool a human into thinking that he is having a conversation with another human?

2) Is it possible to create a computer program which is actually capable of having a conversation for its own sake?

The first of these is achievable if one can build a good enough set of algorithms around how humans respond to questions, common knowledge for conversational grist, etc. The second, however, is much trickier. When we converse with someone we generally do so because we want to communicate something to that person and/or we want to know something about that person. In other words, conversation is essentially relational. You want to know about the person you are talking with, and you want them to know about you. You want to establish areas of common interest, experience and belief. You want to bring that person to share certain ideas about you or about the world.

I'm not sure that it would ever be possible to build a computer program which had these feelings and desires. Oh, sure, you could give it a basic like/not-like function where it tries to achieve commonalities or confidences and, if it is rebuffed, puts its interlocutor in the "not like" category and relates to it differently. But this is very different from actually wanting to know about someone and wanting to like and be liked by them. How our own human emotions work in this regard is far from clear to us, so I can't imagine that we're in any position to understand them so well that we can create copies in others.

Of course, in real conversation you often can't tell if the other person you're talking with, even face to face, is actually interested in you or actually likes you. This question is a source of considerable concern in human interactions. So perhaps the question of whether a computer can care about who you are or what you talk about is irrelevant to the question of whether a computer could be designed that could pass the Turing Test. But I do think that the question is probably quite relevant to whether a computer could ever be a person.

Wednesday, January 19, 2011

The Materialism of Limited Toolset

I make a point of always trying to listen to the EconTalk podcast each week -- a venue in which George Mason University economics professor Russ Roberts conducts a roughly hour-long interview with an author or academic about some topic related to economics. A couple weeks ago, the guest was Robin Hanson, also an economics professor at GMU, who was talking about the "technological singularity" which could result from perfecting the technique of "porting" copies of humans into computers. Usually the topic is much more down-to-earth, but these kinds of speculations can be interesting to play with, and there were a couple of things which really struck me listening to the interview with Hanson, which ran to some 90 minutes.

Hanson's basic contention is that the next big technological leap that will change the face of the world economy will be the ability to create a working copy of a human by "porting" that person's brain into a computer. He argues that this could come much sooner than the ability to create an "artificial intelligence" from scratch, because it doesn't require knowing how intelligence works -- you simply create an emulation program on a really powerful computer, and then do a scan of the brain which picks up the current state of every part of it and how those parts interact. (There's a Wikipedia article on the concept, called "whole brain emulation", here.) Hanson thinks this would create an effectively unlimited supply of what are, functionally, human beings, though they may look like computer programs or robots, and that this would fundamentally change the economy by creating an effectively infinite supply of labor.

Let's leave all that aside for a moment, because what fascinates me here is something which Roberts, a practicing Jew, homed in on right away: Why should we believe that the sum and total of what you can physically scan in the brain is all there is to know about a person? Why shouldn't we think that there's something else to the "mind" than just the parts of the brain and their current state? Couldn't there be some kind of will which is not materially detectable and is what is causing the brain to act the way it is?

(Or to use the cyber-punk terminology which seems more appropriate with this topic: How do we know there's not a ghost in the machine?)

Hanson's answer is as follows (this section starts around minute 32 of the podcast):
"I have a physics background, and by the time that you're done with physics that should be well knocked into you, that, you know, certainly most top scientists, if you ask them a survey question will say, 'Yeah, that's it.' There really isn't room for much else. Sorry. It's not like it's an open question here. Physics has a pretty complete picture of what's in the world around us. We've probed every nook and cranny, and we only ever keep finding the same damn stuff.

We have enormous progress on seeing the stuff our world is made of. Almost everything around you is the same atoms, the same protons, electrons, the rare neutrino that flies around. And that's pretty much it. You have to get pretty far off to see some of the strange materials and things that physicists sometimes probe. Physicists have to make these enormous machines and create these very alien environments in order to find new stuff to study because they've so well studied the material around us. The things our world is made out of are really, really well established. How it combines together in interesting ways gets complicated and then we don't get it, but the stuff that it's made out of, we get.

Your head is made out of chemicals. We've never seen anything else. It's always theoretically possible that when something's really complicated and you don't know how to predict the complexity from the parts, you could say, 'Well therefore, it could be this whole is different from the parts, because it's too difficult to predict.'
...
We should separate two very different issues here. One is technological understanding and knowing how things work and how to make things, and the other is knowing what the world is made of. So, I make this very strong and confident claim: We know what the world is made of, and we know what pieces they are and how they interact at a fine grain. But at higher levels of organization, we don't know how to make other things like, even, photosynthesis in cells. We don't know how to make a photosynthesis machine. You could take your cell phone out of your pocket and take it apart and you wouldn't know how to make a phone like that.... We don't know how it works, but we're pretty sure what it's made out of."

Now, this line of thinking seems fairly familiar to me from talking with materialist/atheists of a scientific bent: We have all these great scientific tools, and all they've ever detected is matter and energy, never a "will" or a "beautiful" or a "soul", and so therefore it's pretty clear that when we talk about our minds we're really talking about our brains and there just isn't anything there except chemicals and electricity.

However, it seems to me that this presents a rather obvious blind spot. We, as human persons, experience all sorts of things which would seem to be evidence of having a will which decides things in a non-deterministic fashion. We also respond to ideas such as "beautiful" or "justice" or "good" in ways that would suggest that there is something there that we're talking about.

When we say, "Physicists have done all this work, and all they've ever found is matter and energy," we are really saying, "Given the tools and methodology physicists use, all they are able to detect is matter and energy." But I'm not clear how getting from that to, "Therefore there is nothing other than matter and energy," is anything other than an assumption.

Is there any valid reason why we should accept the jump from, "Tools that scientists use to detect things can only detect the existence of material things," to "Only material things exist"?

This seems particularly troublesome given that the project here is supposedly to create an emulation program which can be given a brain scan and then act like an independent human. If our experience of being human is that there is something in the driver's seat, something which decides what is beautiful or what is right or who to marry or whether we want rice pudding for lunch today, then unless there is some active, non-deterministic thing within the brain which can be measured by this scan, what you get is going to be, for lack of a better word, dead.

Monday, April 06, 2009

Philosophy and Health

Philosophy is often seen as one of those highly impractical, strictly academic fields, and yet, it has a way of being at the root of everything.

I was struck, recently, by a contrast in two statements about medicine. In an article about the importance of finding medical ways to enhance female sex drive, I ran across a claim along the lines of, "Many experts believe that more than 50% of women over 30 suffer abnormally low interest in sex and would benefit from sexual-drive-enhancing medication if it became available." The immediate connection my mind made was: no more than 5% of the population is primarily attracted to members of the same sex, and yet this is not considered a medical abnormality.

These two together show that the medical community (and our society in general) clearly has some sort of philosophy of the human person and philosophy of sexuality, which is doubtless assumed and unstated. Women, it is believed, ought to have a sexual drive equal to that of men, regardless of whether that is what we find in nature or not. (Even though there are some obvious evolutionary reasons why males would be physically more interested in frequency of copulation than females.) And yet if one primarily experiences sexual attraction to one's own sex, even though that both "doesn't fit the plumbing" and is evolutionarily useless, that is perfectly fine and healthy, even if this is a condition found in only a small percentage of the population.

Medicine is, in its modern form, generally an empirical field. Yet the questions "What is normal?" and "What is abnormal?" are questions that we always answer philosophically rather than empirically.

Necessarily so. Often our sense of what "ought" to happen is directly contrary to the usual observed outcome. "Health" is not simply what we observe to be usual; otherwise we would have to consider death the "healthy" result of a diagnosis of lymphoma.

We chase the telos just as much as in Aristotle's time, and yet we do not acknowledge that what we are doing is anything other than an "empirical science".

Friday, January 23, 2009

The Age of Data

As someone who earns his daily bread and crumpets by doing data analysis, yet has a deep affection for the worldviews of times past, I have been struck more and more of late by the fact that "data" has become a term granted incredible reverence in our modern world. Constantly I hear people who are educated and move within educated circles, but who have no particular understanding of data itself, insist that they must be "shown the data" before believing something.

Some little while ago I asserted in conversation about youthful sexual morality that it was entirely possible for a young person to, should he or she so choose, remain chaste until marriage. "The data doesn't support that," I was told. When I responded that data on the topic merely showed that many people do not so choose, but that it was nonetheless entirely possible to pursue this course (and mentioned my own experiences and those of a few friends by way of example) I was informed tendentiously that, "The plural of anecdote is not data."

Others address problems which might rightly be addressed via data, but have no understanding of what data means. "It's shocking that in this day and age half our students are still below average in reading ability," I was once told. I'm afraid I laughed.
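For the record, the laughter was definitional: about half of any large group will always fall below the group's median, and for anything like a symmetric distribution of scores, below its mean too. A throwaway sketch in Python with invented scores makes the point:

```python
import random

random.seed(0)
# Invented reading scores, roughly bell-shaped around 100.
scores = [random.gauss(100, 15) for _ in range(100_000)]
mean = sum(scores) / len(scores)
share_below = sum(s < mean for s in scores) / len(scores)
print(f"{share_below:.1%} of students score below average")  # ~50%, by construction
```

No amount of educational progress will change that number much; only a badly skewed distribution could.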

Or, slightly less obviously foolish: "The bottom 20% of earners don't make any more, after you adjust for inflation, than they did twenty years ago." This is true in a certain statistical sense, but it fails to account for the fact that individual people move through these classifications quite fluidly. The nineteen-year-old who was in the bottom 20% ten years ago is by no means necessarily still there now.
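That mobility point is easy to demonstrate with a toy simulation. Here is a minimal sketch in Python under invented assumptions (a made-up age-based earnings curve, not real income data); it shows how the bottom quintile's average can sit nearly still while the individual people in it churn:

```python
import random

random.seed(0)

# Toy model (all numbers invented): income rises with age plus noise,
# so young workers tend to start near the bottom quintile and move up.
def income(age):
    return max(5_000, 15_000 + 1_000 * (age - 19) + random.gauss(0, 5_000))

ages = [random.randint(19, 65) for _ in range(10_000)]
incomes = [income(a) for a in ages]

def bottom_quintile(incomes):
    cutoff = sorted(incomes)[len(incomes) // 5]
    return {i for i, x in enumerate(incomes) if x < cutoff}

was_bottom = bottom_quintile(incomes)

# Ten years later: the same people, ten years older (retirees replaced
# by new 19-year-olds entering near the bottom).
ages_later = [a + 10 if a + 10 <= 65 else 19 for a in ages]
incomes_later = [income(a) for a in ages_later]
now_bottom = bottom_quintile(incomes_later)

avg_then = sum(incomes[i] for i in was_bottom) / len(was_bottom)
avg_now = sum(incomes_later[i] for i in now_bottom) / len(now_bottom)
stayed = len(was_bottom & now_bottom) / len(was_bottom)

print(f"Bottom-quintile average then: ${avg_then:,.0f}, now: ${avg_now:,.0f}")
print(f"Share of the original bottom quintile still there: {stayed:.0%}")
```

The quintile's average barely moves, while only a small fraction of the original group is still in it ten years on, which is exactly the distinction the statistic glosses over.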

From my vantage point as a producer of data, it strikes me that the illusion under which far too many people labor is that data itself tells you something, and that it tells you this clearly and with authority.

Data is a highly modern phenomenon. When William the Conqueror commissioned the Domesday Book in 1085, it took a year to complete, and the feat of cataloging with relative accuracy all the people and their property in England was so remarkable that it is remembered to this day. The project required sending people out on horseback throughout the kingdom to interview people and write down the results, and no one attempted anything of similar scale again in England for several hundred years.

Today computers and telephones not only make it easy to collect census and survey data, but all manner of transactions (performed on computers) pour out vast quantities of data as a sort of golden waste product. In a modern corporation such as the one I work for, it is impossible to sell people things and ship them out without in the process producing so much data about who bought what, when, and for how much, that we "data monkeys" are challenged to drink in even half the available insights from the firehose stream of information turned upon us.

Data sits out there in tables: thousands or millions of records of individual facts or events which those of us who access them can sum and average and graph and flip this way and that in pivot tables. Sitting at my computer I can take millions of rows of data which are the by-product of a week's worth of consumer electronics orders and, in an hour or two's worth of work in Access and Excel, tell you how often certain products were bought together, what people think is a good price for a 42" TV, or whether a glossy insert in the weekend newspaper sells more digital cameras than our website. The sorts of functions I can run against a data set in moments represent levels of analysis that would almost certainly have been impossible more than fifty years ago. In the days when people really did wear green eyeshades and sharpen pencils, it would simply not have been possible to gather and experiment with hundreds of thousands of rows of data in the way that I so casually do every day.
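To make the kind of casual slicing I mean concrete, here is a minimal sketch in Python rather than the Access and Excel I actually use; the tiny orders table and every column name and number in it are invented for illustration:

```python
import pandas as pd
from itertools import combinations
from collections import Counter

# Hypothetical order data: one row per line item.
orders = pd.DataFrame({
    "order_id": [1, 1, 2, 2, 2, 3, 4, 4],
    "product":  ["42in TV", "HDMI cable", "camera", "SD card", "camera case",
                 "42in TV", "42in TV", "HDMI cable"],
    "price":    [699.99, 19.99, 249.99, 24.99, 34.99, 649.99, 679.99, 19.99],
})

# How often were products bought together in the same order?
pair_counts = Counter(
    pair
    for _, items in orders.groupby("order_id")["product"]
    for pair in combinations(sorted(items), 2)
)
print(pair_counts.most_common(3))

# What do people seem to think is a good price for a 42" TV?
print(orders.loc[orders["product"] == "42in TV", "price"].describe())
```

Against a real week of orders the table would run to hundreds of thousands of rows, but the operations are the same handful of groupings and aggregations.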

The fact that there is simply so much data around in the modern world allows us to investigate all sorts of interesting questions. But what must be realized is that "data" is simply the collection of lots of individual records about individual events. It may not be the plural of anecdote, but it is the plural of event. And data does not itself have obvious meaning. One must seek out some sort of pattern in it, and that pattern may not be right, in the sense that it may well not accurately describe the experiences or motivations of most or any of the people who were involved in the individual events whose descriptions are now "data".

And this is what people need to understand about data. Data is not the deeper essence of the universe, the real world of which mere events are but imperfect instantiations. Rather, data represents the partial leavings of reality. Traces of past events. Footprints and shadows. Clues left behind by the real events, from which those events can only at times be accurately deduced.

There are amazing insights that can be gained from data analysis, not only about present-day events, but about the past. (Some fascinating historical research I've read lately has been based on the centuries of data recorded in parish registries throughout Europe, now entered into databases by historians.) But these insights are only as good as the analyst's ability to correctly identify patterns which reflect reality.

I'm glad that we live in the age of data. All other things aside, I find it fascinating to play with, and it gives me a good living. But given the modern obsession with data as a defining source of truth, people need to see through the hype and recognize what data actually is and isn't. Data is simply the collection of a set of records which tell you "someone did this" or "this person had this characteristic". By looking at the frequency with which people did some given thing or possessed some given trait, we can learn some very interesting things. But data can never answer qualitative questions for us, though it may provide us with the inputs to make qualitative judgments.

Data cannot tell you what people are capable of or what they should do, but it can tell you what people often do do. Data cannot tell you what the best health care system is, but it can tell you the life expectancies of people with various illnesses in different countries, the average cost of treatments, or the average wait times for procedures. Data cannot tell you what marriage is or which culture has a desirable family structure, but it can tell you who gets divorced and how frequently. And data cannot tell you how well data answers all the world's problems -- though there is data on how much more data we produce every year. Indeed, if the data I've seen on that point is accurate, the age of data is just beginning.

Thursday, December 18, 2008

Freedom as a Political Good

Historically the Catholic Church has had, or has been perceived to have, a rocky relationship with "freedom" in the sense that the term has come to be used in a political and cultural sense since the Enlightenment.

Freedom in the modern sense is often taken to mean, "I'm free to do whatever I want without anyone telling me what to do." The Church, on the other hand, generally takes freedom to mean, "Freedom to do that which is good." The Church sees sin as enslaving and as reducing one's capacity to choose freely in the future, and as such even where acting contrary to the good is in no way forbidden, doing wrong is not seen by the Church as exercising "freedom".

So, in the moral sense, the Church does not hold "freedom" in the sense of simply doing whatever you want to be a good. Rather, the Church holds doing the good to be the good, and freedom to be the means of achieving it.

I speak above in the moral sense. However, let us look now at the political question of freedom. There are several senses of "freedom" that one can speak about politically. Sometimes we talk about a country being "free" in that it is not a dictatorship or more specifically in that it has a free press and a moderately democratic form of government. However the sense that I'd like to examine is "freedom" in the sense of "not legislating morality" or more generally erring on the side of not restricting personal action even in cases where one is sure that the action in question is wrong.

The argument for using the state to restrict people from doing things that are wrong is pretty straightforward: generally, when we refer to something as being "wrong" we are talking about something which causes injury to others, or at least to the actor himself. For example, it is wrong to steal, and it is illegal to steal, because stealing causes injury to the person who is stolen from and also (though this is of more interest to the moralist than the lawmaker) because stealing damages the person who steals.

Generally, when people argue against making something destructive illegal, they do so on one of two bases:

1) Outlawing the activity would cause social destruction greater than the activity itself.

2) Outlawing the activity would set a dangerous precedent because of disagreements in society as to the definition of what is destructive.

A good example of the first of these is the debate over whether the ban on drugs causes more damage than simply having drugs legally available would. This is an interesting prudential debate, but what I'm interested in looking into more deeply is the second reason.

One of the classic examples that occurs to me in this regard is the sort of case that comes up every so often in which parents with religious beliefs against some particular medical procedure are in conflict with doctors who want to save the life of their child by means of the forbidden procedure. Now, as someone confident that the parents' beliefs about blood transfusions or chemotherapy or heart surgery or what have you are erroneous, my initial thought would be that the parents should be prevented from inflicting the damages of their incorrect beliefs upon their child.

However, given that our society has an ever decreasing degree of consensus as to which beliefs are erroneous and which are correct, this strikes me as a dangerous precedent to set. If today I support the strong arm of the government being used to overrule the beliefs of another set of parents because I am certain their beliefs are erroneous, it's not inconceivable that at some point in the future the majority of the population will decide that my beliefs are erroneous and take away my ability to make decisions about my children.

The Church learned this the hard way in regard to religious freedom. For much of the Church's history in Europe, it had strongly supported governments providing their backing to enforce tithing, stamp out heresy, etc. This seemed a right and obvious thing to do when the societal consensus was that the Church represented theological truth -- and thus only erring sects suffered pressure from the state. However, when the Reformation and Enlightenment brought other religious groups and anti-religious groups into power, the Church found the tradition of using the state to stamp out error turned against it.

Given that modern society has seen the increased breakdown of social consensus on a wide variety of moral and religious topics (or a vast increase in diversity, depending on how you want to spin it) and at the same time an ever increasing ability of the central state to regulate everyday life, it seems necessary to ask: Should we consider political freedom (defined as the refusal to use the power of the state to regulate behavior) to be a positive good, or should we simply consider it a temporary compromise to be used in those areas where we are concerned that the societal tide is shifting away from us?

To take the latter position is to open ourselves up to accusations of hypocrisy, though the difference between pragmatism and hypocrisy is sometimes narrow. To take the former is to admit to ourselves, more bluntly than we are often willing to, that we as a state and a society are unable to "stamp out evil" in our midst -- because we are unable to agree on what evil is.

For now, I think my own conclusion is perhaps closer to the latter, though with deference to the former: if the state is at all to be seen as a protector of the common good, it must at times restrict the freedom of people to do what they want, based on a social consensus that what they want is bad for them and for others. However, we must be hesitant to use that power too much, and hesitant on principle, because it is a weapon that can at any time be turned back upon our dearest beliefs. And so we must always seek the correct balance between combating the most socially destructive wrongs and being hesitant enough about restricting others' freedom that we avoid being oppressed overmuch when we find ourselves in the minority.

Friday, January 18, 2008

Pinker & Morality

Steven Pinker graced last Sunday's New York Times Magazine with a lengthy article titled "The Moral Instinct". In it, he seeks to explain (and applaud) recent research by psychologists, "evolutionary psychologists" (a term I use with roughly the same appreciation as Stephen Jay Gould did) and neuroscientists into the origins of morality.

(Many thanks to the reader who sent the article along and went a couple rounds of discussion on it with me via email.)

Working from the basic assumption that morality consists of a set of emotional/psychological urgings and repugnances which find their origin in humanity's evolutionary past, those investigating the moral instinct have tried to classify sets of moral reactions and speculate on how these might have come to be. Though lengthy, Pinker keeps things spiced up with illustrations and dilemmas. However, many of these seem to assume a very unreflective view of morality -- one where moral "thought" is basically a matter of gut urgings which one is at a loss to explain. For instance, when talking about taboos Pinker provides the following examples:
Julie is traveling in France on summer vacation from college with her brother Mark. One night they decide that it would be interesting and fun if they tried making love. Julie was already taking birth-control pills, but Mark uses a condom, too, just to be safe. They both enjoy the sex but decide not to do it again. They keep the night as a special secret, which makes them feel closer to each other. What do you think about that — was it O.K. for them to make love?

A woman is cleaning out her closet and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

A family’s dog is killed by a car in front of their house. They heard that dog meat was delicious, so they cut up the dog’s body and cook it and eat it for dinner.

Most people immediately declare that these acts are wrong and then grope to justify why they are wrong. It’s not so easy. In the case of Julie and Mark, people raise the possibility of children with birth defects, but they are reminded that the couple were diligent about contraception. They suggest that the siblings will be emotionally hurt, but the story makes it clear that they weren’t. They submit that the act would offend the community, but then recall that it was kept a secret. Eventually many people admit, “I don’t know, I can’t explain it, I just know it’s wrong.” People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.

Two things strike me in this set of examples:

First, Pinker assumes that any rationale behind moral prohibitions must be pragmatic. All possible reasons provided for disapproving of incest are pragmatic, and the example is formulated in order to foil these sorts of objections. From his overall tone, I think this reflects an assumption (indeed, probably a deeply held belief) on Pinker's part that moral objections to something must, at root, be pragmatic and physical in their repercussions. If he'd posed the incest question to me, my response would have been something along the lines of, "It was wrong because their action violated the inherent meanings both of the relationship between siblings and the meaning of sex/relationship between lovers." I have a feeling that Pinker would see that as just being a fancy way of saying, "I don't like it", but that simply serves to underscore the fact that we'd be talking about different things in regards to morality.

Second, he doesn't seem to take into account any difference between inherent meaning and cultural meaning. Using the flag as a dustcloth and eating the family pet both violate senses of respect and meaning which are cultural in nature. The flag does not have an inherent meaning; using it as a dustrag is offensive because of certain cultural understandings both of what the flag means and of what using a piece of cloth as a dustrag means. Similarly, the relationship of family to pet and the prohibition of eating pets are cultural. Incest and sex outside of marriage, however, violate inherent relationship types which cross cultural bounds. (This is not to say that all cultures necessarily share a prohibition against incest, though certainly most do, but rather that the relationship of "siblings" is something inherent to the human person, and that relationship inherently does not include "someone you have sex with".)

Pinker realizes he's playing with fire here, and concedes that many may see trying to develop an evolutionary understanding of morality as explaining it away:
And “morally corrosive” is exactly the term that some critics would apply to the new science of the moral sense. The attempt to dissect our moral intuitions can look like an attempt to debunk them. Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes.

However, he goes on to try to argue that discerning the evolutionary origins of morality will in fact reveal certain very real norms:
In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.

He goes on to argue that both the necessity of cooperation suggested by the iterated variant of the prisoner's dilemma and the golden rule as a means of persuading others to treat you nicely are moral norms that have been hardwired into humanity by evolution.
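For readers who haven't run into it, the dynamic Trivers and the prisoner's-dilemma literature describe is easy to sketch in code. Here is a toy iterated prisoner's dilemma in Python; the payoff numbers are the conventional ones, and the strategies are illustrations of mine, not anything from Pinker's article:

```python
# Conventional prisoner's dilemma payoffs: (my points, opponent's points).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, opponent defects
    ("D", "C"): (5, 0),  # I defect, opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        gain_a, gain_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # reciprocity: both sides prosper
print(play(tit_for_tat, always_defect))  # defector wins round one, then both limp along
```

Over repeated play, mutual cooperators rack up far more than mutual defectors, which is the selective pressure toward reciprocity that Trivers describes.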

Many may find that they want something a bit more, when it comes to morality. Sure, in a society with certain assumptions (notably an idea that people are inherently or functionally equal) it may be the case that most people will benefit most of the time by treating others as they want to be treated and cooperating rather than betraying, but "most people most of the time" is not exactly what the majority of people seek when they look to "morality".
Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?

Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?

This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.

Moral realism, as this idea is called, is too rich for many philosophers’ blood. Yet a diluted version of the idea — if not a list of cosmically inscribed Thou-Shalts, then at least a few If-Thens — is not crazy. Two features of reality point any rational, self-preserving social agent in a moral direction. And they could provide a benchmark for determining when the judgments of our moral sense are aligned with morality itself.

With all due respect, Pinker was sleeping through his Plato class. Plato didn't argue that morality couldn't come from God; rather, he argued that "the Good" must always be singular. It can't simply be "what pleases the gods", especially when you have a bunch of bickering gods who often do things even their devotees regard as immoral. This is one of the reasons that Christians so readily embraced Plato: they saw his singular "the Good", which remained untouched and eternal above the strife of the pagan deities, as a close approximation to the one, good and eternal God of Jewish/Christian revelation.

But sticking to the realm of human reason -- does he present a good reason for rejecting a Platonic approach to morality? Well, it's "too rich for many philosophers’ blood". Are we to take that as much of anything more than, "They don't like it"? This certainly seems to underline the idea that faith is an act of the will as much as the intellect.

Plato held that we often know truths without recognizing it, until those truths are drawn out of us. Pinker seems to be suffering from something of a lack of drawing out in his reactions to morality.

On the one hand, he wants to see morality as a biological/psychological phenomenon: a set of basic rules for how primates best get along together, programmed into us through countless generations of human social interaction. He boils these down to rules basic enough to be acceptable to modern culture: "be fair to other people", "treat others as you want them to treat you", etc. But then in his closing he attempts to use this to make all sorts of absolute assertions: being against human cloning is irrational, homosexual relationships are okay, racism is bad.

And yet, none of these can be conclusively derived from the rules which he has decided to keep. And indeed, nothing can be conclusively derived from them, since the very nature which he assigns to morality is one of "society functions best if most people do X" rather than "everyone must do X".

The fact is, Pinker himself is not comfortable with certain things he despises (racism, genocide, sexism, homophobia) being only wrong some of the time, or only wrong for some people, and yet in the end he cannot come up with an explanation of a strictly psychological/biological morality which shows that it is always and everywhere wrong to violate his preferred norms of behavior. The understanding of morality he puts forth allows him to discard the norms that he doesn't like, but it doesn't allow him to retain those that he does.

Thursday, January 03, 2008

Do We Need Morality?

My team at work tends to generate a lot of interesting lunchtime conversation (we go out for team lunch once a week) in part due to our Indian/US cultural split. We have two Hindus and one Jain on the Indian side and a Baptist, a Methodist, a "sort of spiritual" ex-Catholic and me on the American side, which certainly provides a variety of opinion. (The day our boss, the Baptist, was out, it turned out that all our Indians doubted reincarnation, while our Methodist and ex-Catholic both believed in it.)

One day our Jain team-member (he keeps all of Jainism's dietary requirements, but he says his beliefs are untraditional enough in regard to the gods that he upsets his mother) threw out a question that generated its due share of controversy: Why do we have morality?

He contended that morality essentially sets up a second and parallel set of laws, enacted by those without the political authority to control the legal system. Why have both a religiously determined morality and legality? Why not just have a single authority with a single set of rules?

This struck me as an interesting question, because for the life of me I cannot imagine people not having ideas of morality that deviate (or may deviate) from whatever legal/societal restrictions they find themselves under.

Imagine for a moment a situation in which a priest/aristocrat class sets all laws and there is no religious or moral structure separate from that single set of leaders. It is announced, one day, that having a beard is a moral abomination and all men must shave daily or have their heads cut off. Everyone follows this edict, but one man, as he lifts his razor in the morning, thinks to himself: "This is not right. I should be able to grow a beard if I want to. Cutting a man's head off because of his hair is wrong."

That man has just invented a personal moral system. So long as we are capable of receiving instruction from some other source and thinking, "No, that's not how it is. Things are actually this other way," we will have systems of morality which are separate from the law.

Now I should say, this line of argument did not win over my colleague. He argued that when someone looks at a precept that is given to him and thinks, "That is not right," he is simply wishing that he were in charge instead.

That, really, is what leaves me most confused about his line of argument. I am frankly rather flummoxed as to how one could fail to see the holding of such a conviction as morality.

Thursday, December 20, 2007

Faith in an Orderly Universe

It's not only Catholic cardinals who manage to create a firestorm when they write NY Times op-eds about science and philosophy. Physicist Paul Davies attracted quite a bit of attention last month with an editorial titled Taking Science on Faith. His point is one that has always struck me as deeply compelling: that science as a discipline relies on an implicit faith that the universe acts according to knowable laws.
Clearly, then, both religion and science are founded on faith — namely, on belief in the existence of something outside the universe, like an unexplained God or an unexplained set of physical laws, maybe even a huge ensemble of unseen universes, too. For that reason, both monotheistic religion and orthodox science fail to provide a complete account of physical existence.

This shared failing is no surprise, because the very notion of physical law is a theological one in the first place, a fact that makes many scientists squirm. Isaac Newton first got the idea of absolute, universal, perfect, immutable laws from the Christian doctrine that God created the world and ordered it in a rational way. Christians envisage God as upholding the natural order from beyond the universe, while physicists think of their laws as inhabiting an abstract transcendent realm of perfect mathematical relationships.
This view ruffled quite a few feathers. A follow-up article (the one that actually caught my eye the other day) by Dennis Overbye describes some of the flak that Davies has caught from other scientists, science enthusiasts, and anyone else who felt like writing to the Times letters column:

His argument provoked an avalanche of blog commentary, articles on Edge.org and letters to The Times, pointing out that the order we perceive in nature has been explored and tested for more than 2,000 years by observation and experimentation. That order is precisely the hypothesis that the scientific enterprise is engaged in testing.

David J. Gross, director of the Kavli Institute for Theoretical Physics in Santa Barbara, Calif., and co-winner of the Nobel Prize in physics, told me in an e-mail message, “I have more confidence in the methods of science, based on the amazing record of science and its ability over the centuries to answer unanswerable questions, than I do in the methods of faith (what are they?).”
However, the attempts that Overbye quotes to explain science's reliance on an orderly universe without recourse to a leap of faith sound suspiciously like an attempt to do the same thing in different words:
Pressed, these scientists will describe the laws more pragmatically as a kind of shorthand for nature’s regularity. Sean Carroll, a cosmologist at the California Institute of Technology, put it this way: “A law of physics is a pattern that nature obeys without exception.”
That sounds very observational and pragmatic... except for the "without exception" part at the end there. True, we don't have to tune in every morning to the daily gravity report to see how fast things are falling that day, but saying that the laws of physics describe how the world behaves "without exception" based on a few hundred years of modern science (during most of which we interpreted our observations as pointing to laws other than our current understanding of physics) strikes me as taking something very like a leap of faith.

The issue, I think, is that some people who spend a lot of time and attention on science (I think this is actually more of an issue with science enthusiasts and low-level science teachers than with serious high-level research scientists -- though one finds it at times there as well) have rather too much invested in the idea that scientific methodologies are The One Reliable Way of Finding Out How the Universe Really Works.

And yet, taken on their own, scientific methodologies are generally formulated to determine how things appear to work in a given set of situations and times. It's our faith that the universe works in a knowable, orderly, fairly universal fashion that allows us to turn five hundred years of modern science (or 2500 if you want to date science from the Greeks) into knowledge of how things work "without exception."

What's ironic, in a sense, is that Davies is not trying to advocate more respect for faith via his editorial. Rather, his last two paragraphs issue a call to seek a new, less universal way of understanding the "laws" of science:
It seems to me there is no hope of ever explaining why the physical universe is as it is so long as we are fixated on immutable laws or meta-laws that exist reasonlessly or are imposed by divine providence. The alternative is to regard the laws of physics and the universe they govern as part and parcel of a unitary system, and to be incorporated together within a common explanatory scheme.

In other words, the laws should have an explanation from within the universe and not involve appealing to an external agency. The specifics of that explanation are a matter for future research. But until science comes up with a testable theory of the laws of the universe, its claim to be free of faith is manifestly bogus.
If science were to be a totally self-contained discipline, I can see the importance of what he's advocating. Though at the same time, I'm not entirely clear what these explanations internal to the universe would look like. The strong nuclear force works because... why?

That's the funny thing about "laws" in physics. Contrary to how my third grade science book tried to explain it, a law is not simply a hypothesis that has been tested many times. A law is something which seems to be universally the case, and yet has to be taken just "as is". There's not necessarily a "why" involved.

This is just fine if you simply consider science a methodology for explaining how material systems behave. It's rather more problematic if you have hopes of science being the one true method of knowing things for sure. And that is what leaves those interested in science who are comfortable with having a metaphysics in a better spot than those who imagine that one is better off without one.

Tuesday, December 11, 2007

Science & Faith: Different Ways of Knowing

An interesting discussion broke out over the weekend on a somewhat older post linking to a John Farrell post regarding his book on physicist Fr. Lemaitre. It touches on an interesting and important principle, so I wanted to bring it up and add my $0.02.

Commenter Jnewl says:
...[O]ne thing that does stick out like a sore thumb to me (presuming I'm not misunderstanding it) is when Farrell quotes St. Thomas in support, evidently, of the notion that scientists can take no position on questions that find an answer in theology. This is preposterous.

While St. Thomas certainly denies the possibility of demonstrating that the world had a beginning, it does not follow from this that he thought it could not be known. It can be known--indeed, more certainly known--according to the light of that higher science known as theology, which derives its principles from Scripture, which is inerrant. From the little bit he says here, it seems as if Farrell considers Faith to be something more akin to a tentative hypothesis than a firm and unwavering belief in things not seen. If so, this is very far from what St. Thomas himself believes.

The quote from Lemaitre that Farrell provides immediately following this seems to validate my interpretation, as Lemaitre there seems to be saying that it is illegitimate for a scientist to hold an opinion about a matter from Faith that he also investigates as a scientist. But this is, again, preposterous. If he has faith, then he doesn't just opine that the world had a beginning. He KNOWS it.
John clarifies his and Lemaitre's position a bit in the subsequent comments, which I won't quote here. What I wanted to address was the question of what we know about the world via science versus what we know about the world via faith.

The example of St. Thomas Aquinas and his understanding of the universe's creation is an interesting place to start. The best science of Aquinas' time (Aristotelian physics and natural philosophy) suggested that the world had always existed in the same form that it did then. Obviously, this presented a problem for Christian theologians and philosophers who believed that In principio creavit Deus caelum et terram. St. Thomas of course held that the results of faith and reason could not differ, and so (if my memory of medieval philosophy reading from ten years ago is fuzzy here, please correct me) he concluded that even if in a temporal sense the physical universe had always existed, God (who is himself eternal) was still its final cause, willing it constantly into being, and so in an ontological sense the world was created by God "in the beginning" even if one could not find a temporal beginning to the universe. The universe was no less created by God for his having always created it, rather than having created it at some fixed point in time as we understand it.

What is, I think, important in all this is to understand what it is that we learn from our faith, versus what we learn from science. Christianity tells us, through the Bible and through the teachings of the Church, certain things about the universe, ourselves, and the history of salvation in the world:
  • The world was created by God.
  • We are made in the likeness of God and thus have rational minds, free will, and immortal souls which are capable of happiness forever in union with God or of rejecting God and receiving final damnation.
  • Christ came into the world to suffer and die for the remission of sins.
And many more...

Science can tell us what sorts of results normally take place in certain kinds of repeatable situations involving material objects and/or forces:
  • Light behaves both like a particle and like a wave and travels at 299,792,458 m/s.
  • Gravity acts on two objects with a force proportional to the product of their masses and inversely proportional to the square of the distance between them (written out below).
  • Once the human brain "dies", it generally doesn't "revive", and the rest of the person appears mentally inert.

And so on...
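(Written out, the gravity item above is Newton's law of universal gravitation:

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $F$ is the force between the two bodies, $m_1$ and $m_2$ are their masses, $r$ is the distance between them, and $G$ is the gravitational constant.)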

Now clearly, sometimes the claims of science and our faith may come into conflict. They are not describing two hermetically sealed areas of knowledge. I am not an expert, but it seems to me that those strands of neuroscience which deny the existence of free will are explicitly in conflict with the tenets of our faith. But then, I'm fairly confident that those scientists who claim to be proving complete determinism in regards to our minds are not going to be successful in the end.

But there are a number of areas in which the two do not necessarily touch as much as some might imagine.

Our faith includes the knowledge (I think Jnewl is quite correct in saying that faith does involve knowledge) that God created the universe. However, it does not tell us when (other than "in the beginning") or how, or what the universe looked like then or now. Science has provided us with a number of answers over the millennia as to when the universe began and what it looks like. In each age, there have certainly been those who have built the current scientific understanding of when the world was created too deeply into their religious beliefs, and also those who have attempted simply to use the Bible as a science textbook, but on due consideration I think the two fields provide us with fairly separate pieces of information.

Certainly, since the Big Bang is our current understanding of the physical origins of the universe, and since it has a certain "in principio" dramatic flair to it, we Christians tend to strongly identify the moment of creation with the Big Bang. However, if in another few decades some compelling piece of evidence were to come along for an oscillating universe or for some completely different cosmological model, I don't think one would be right in any way to say that the Christian understanding had been "disproved". While faith and science both provide us with knowledge about the origins of the universe, they provide us with very different kinds of knowledge.

This, I think, is where it's important to keep our scientific and faith knowledge separate. Not because there are two realities, one of faith and one of science, but rather because faith and science are generally telling us rather different things about our one reality. At the cosmic level (as opposed to questions of morals and salvation history) our faith tells us about what things are, about their natures. Science, on the other hand, tells us about how things work and about their history in a strictly physical sense.

To the human experience, I think that in most ways what faith tells us is actually rather more important in this respect than what science tells us. And in that sense, it's important not to overly shackle our faith to our current understanding of science.