Because most philosophies that frown on reproduction don't survive.

Friday, March 22, 2013

Future Cloudy

Perhaps artificially so...

The political dilemma over geoengineering – deliberate, large-scale intervention in the climate system designed to counter global warming or offset some of its effects – will perhaps be most acute in China.

In December, the country listed geoengineering among its Earth science research priorities, a marked shift in the international climate change landscape noted by China specialists Kingsley Edney and Jonathan Symons.

On the one hand, China's rapid economic growth has seen a huge escalation in its greenhouse gas emissions, which on an annual basis overtook those of the United States five years ago. Sustained GDP growth provides China's Communist party with its only claim to legitimacy, its "mandate of heaven". China's efforts to constrain the growth of its emissions have been substantial, and certainly put to shame those of many developed nations.

Yet neither China's efforts nor those of other countries over the next two or three decades are likely to do much to slow the warming of the globe, nor halt the climate disruption that will follow. Global emissions are not declining or even slowing; in fact, they are accelerating. Even the World Bank, which for years has been criticised for promoting carbon-intensive development, now warns that we are on track for 4C of warming, which would change everything.

China is highly vulnerable to water shortages in the north, with declining crop yields and food price rises expected, and storms and flooding in the east and south. Climate-related disasters in China are already a major source of social unrest so there is a well-founded fear in Beijing that the impacts of climate change in the provinces could topple the government in the capital. Natural disasters jeopardise its mandate.

So what can the Chinese government do? Continued growth in greenhouse gas emissions is a condition for its hold on power, but climate disruption in response to emissions growth threatens to destabilise it.

Geoengineering has immediate appeal as a way out of this catch-22. While a variety of technologies to take carbon out of the air or to regulate sunlight are being researched, at present by far the most likely intervention would involve blanketing the Earth with a layer of sulphate particles to block some incoming solar radiation.

Spraying sulphate aerosols could mask warming and cool the planet within weeks, although it would not solve the core problem of too much carbon dioxide in the atmosphere and oceans.

Should we start to see real negative effects due to some kind of global warming, China unilaterally choosing to do something about it is one of the more likely eventualities. The thing about being a one-party dictatorship of sorts is that there isn't a whole lot of worrying about governmental checks and balances, and other countries with more red tape might be happy to sit back and let China take the blame.

Of course, just as it's now virtually impossible to determine whether any short-term change in the weather has anything to do with global climate change (the chicken little faction is eager to blame any hurricane, tornado, drought or early frost on global warming, while critics rightly point out that all of these things have a tendency to happen at intervals anyway), once anyone starts trying any geoengineering you can bet that critics will blame any adverse weather occurrences on the geoengineering. It would make a perfect scapegoat, since it would be very hard to prove that it wasn't at fault for any given thing.

Tuesday, November 20, 2012

Why the Age of the Earth Matters

Senator Marco Rubio got some publicity of a kind he probably didn't want this week with the publication of an interview in GQ in which he was asked about the age of the Earth:
GQ: How old do you think the Earth is?
Marco Rubio: I'm not a scientist, man. I can tell you what recorded history says, I can tell you what the Bible says, but I think that's a dispute amongst theologians and I think it has nothing to do with the gross domestic product or economic growth of the United States. I think the age of the universe has zero to do with how our economy is going to grow. I'm not a scientist. I don't think I'm qualified to answer a question like that. At the end of the day, I think there are multiple theories out there on how the universe was created and I think this is a country where people should have the opportunity to teach them all. I think parents should be able to teach their kids what their faith says, what science says. Whether the Earth was created in 7 days, or 7 actual eras, I'm not sure we'll ever be able to answer that. It's one of the great mysteries.
Since it is one of the most essential functions of the news media to catch Republican politicians saying dumb things and then discuss the sheer dumbness of what was said for as long as possible, we'll be hearing about this for a while. I'm not clear from Rubio's answer whether he is something of a Young Earth creationist and is trying to sound less scary about it, or whether he just wants to avoid offending the sensibilities of those who are Young Earth creationists by not flatly disagreeing with them. Either way it's a bit dispiriting.

Rubio makes the argument that the age of the universe doesn't actually have anything to do with the sort of everyday concerns that a Senator deals with. Over at Forbes, Alex Knapp points out that the age of the universe actually does have huge implications for the kind of science we deal with in our everyday lives.
The emphasis in Rubio’s statement is mine. I say that because the age of the universe has a lot to do with how our economy is going to grow. That’s because large parts of the economy absolutely depend on scientists being right about either the age of the Universe or the laws of the Universe that allow scientists to determine its age. For example, astronomers recently discovered a galaxy that is over 13 billion light years away from Earth. That is, at its distance, it took the light from the Galaxy over 13 billion years to reach us.

Now, Marco Rubio’s Republican colleague Representative Paul Broun, who sits on the House Committee on Science and Technology, recently stated that it was his belief that the Universe is only 9,000 years old. Well, if Broun is right and physicists are wrong, then we have a real problem. Virtually all modern technology relies on optics in some way, shape or form. And in the science of optics, the fact that the speed of light is constant in a vacuum is taken for granted. But the speed of light must not be constant if the universe is only 9,000 years old. It must be capable of being much, much faster. That means that the fundamental physics underlying the Internet, DVDs, laser surgery, and many many more critical parts of the economy are based on bad science. The consequences of that could be drastic, given our dependence on optics for our economic growth.

Here’s an even more disturbing thought – scientists currently believe that the Earth is about 4.54 billion years old because radioactive substances decay at generally stable rates. Accordingly, by observing how much of a radioactive substance has decayed, scientists are able to determine how old that substance is. However, if the Earth is only 9,000 years old, then radioactive decay rates are unstable and subject to rapid acceleration under completely unknown circumstances. This poses an enormous danger to the country’s nuclear power plants, which could undergo an unanticipated meltdown at any time due to currently unpredictable circumstances. Likewise, accelerated decay could lead to the detonation of our nuclear weapons, and cause injuries and death to people undergoing radioactive treatments in hospitals. Any of these circumstances would obviously have a large economic impact.
Knapp does a good job of pointing out that issues like the age of the universe are not simple trivia from a scientific point of view. If someone were really serious about believing that the universe was only 9,000 years old, it would imply that a lot of the physical laws we take for granted at the moment (a lot of our paradigms) are wrong. I think it's important that people have an understanding of how seemingly separate areas of scientific knowledge are in fact intimately tied together, so this is a very useful reminder.
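Since the radiometric point is worth making concrete, here is a minimal sketch of the arithmetic Knapp is describing, in Python with illustrative numbers (the actual uranium-lead systematics geologists use are considerably more involved):

import math

half_life = 4.468e9        # uranium-238 half-life, in years
fraction_remaining = 0.5   # fraction of the original parent isotope still present in the sample

# Exponential decay: N(t) = N0 * (1/2)**(t / half_life), so t = half_life * log2(N0 / N)
age = half_life * math.log2(1.0 / fraction_remaining)
print(f"estimated age: {age:.2e} years")   # roughly 4.5 billion years

The whole calculation hangs on the decay rate being constant over time, which is precisely the assumption a 9,000-year-old Earth would force us to throw out.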

That said, I think this misses something about the way in which most people who say that they think the Earth is only a few thousand years old actually use that belief. I've read explanations by Creationists that attempt to put together some story as to how we see light from objects more than 10,000 light years away, how radioactive decay could have been faster in the past, etc., in order to explain how the world looks the way it is while being less than 10,000 years old. However, these explanations invariably focus on how things used to be different for a while in the past -- they never attempt to make any predictions about the world behaving in strange and unexpected ways in the future. This is, of course, one of the several reasons that "creation science" can't really be considered a science: it's not predictive. Creation science is the attempt to use scientific language to explain how two seemingly incompatible things could be true: the world could look and act the way it does now (far away objects, radio isotope dating, fossils, etc.) and yet be very young. However, now and in the future, "creation science" is comfortable assuming that the world will continue to work exactly the way it does now -- not in the crazy ways it allegedly did for a couple days 9,000 years ago.

One can simply see that as being very bad science, and I think that's certainly appropriate. But as I think about Rubio's comments in particular, it strikes me that part of what's going on in many cases when people express doubts as to the age of the universe is that they're effectively walling off the question of the age of the universe and choosing to think of that in a context other than a scientific one. Sen. Rubio's expressed doubts as to the Earth's age and Rep. Broun's expressed belief that it's only 9,000 years old don't actually have any implications for science and technology applications in the present because they don't think about the age of the universe as a scientific question. I doubt very much that they expect the laws of physics to suddenly start acting differently any more than any other person does. Like anyone else making the leap from inductive knowledge to general laws, they are quite happy to act as if the speed of light and the decay rate of radioactive elements are constant. They just don't want to apply those practical beliefs to the question of the age of the universe.

In one sense, this isn't that odd. There are lots of areas of life where we don't attempt to apply science as a way of answering questions because science is incompetent to answer them. Examples of such questions would include: What is the meaning of life? Does my wife love me? Is Brahms better than Shostakovich? Should I become an academic or go into business?

What is odd is that those who believe in or hold open the possibility of a young earth are taking a topic to which science would appear to be well suited -- "How old is the Earth?" -- and holding that out as an area where they do not apply science.

UPDATE: Of course, it's not just the Right that has its science problems. As a friend quipped on Facebook: "Rubio says he doesn't know the earth's age. Obama says he doesn't know when life begins." One of those is more likely than the other to result in making bad decisions.

Wednesday, November 14, 2012

Paradigms and Inductive Knowledge

Since I've been on business trips last week and again this week, I've had a bit more time than usual for reading, so I've been working through Jim Manzi's Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society. The book is about the use of controlled tests in business and politics, but the first hundred pages or so are a very interesting discussion of the development of the scientific method and of the Randomized Field Trial.

One of the really interesting things he talks about in this section is the way in which science gets around the Problem of Induction: the fact that one cannot get from an observation that things have happened a certain way in a number of specific instances to an absolute rule.

So, for instance, the law of gravity, for all that we call it a law, has not been absolutely "proved" by extensive experimentation. It could be that if you drop a quarter right now, it will float in the air, or fly upwards, rather than falling to the ground. However, even though the extensive observation of falling objects doesn't prove the law of gravity, we act as if the law of gravity is proved because doing so allows us to make all sorts of useful predictions. Indeed, if a scientist observes something happening which appears to be contrary to gravity, his first reaction would probably be to assume that there is some other factor at play and that gravity is, in fact, still applicable. Assuming Newton's laws becomes a "paradigm".
[Thomas] Kuhn argued that to make practical progress, a group of scientists accepts an underlying set of assumptions about the physical world, along with accepted experimental procedures, supporting hypotheses, and so on. This paradigm helps to create a coherent discipline. The day-to-day work of scientists is to solve intellectual puzzles that fall within the relevant paradigm. Kuhn calls this normal science, or "worker-bee" science. Anomalies -- factual observations that contradict the tenets of the paradigm -- are rejected and either they are held aside as problems to be solved later or the paradigm is modified slightly to accommodate them. [Uncontrolled, page 24]
Of course, the example I used above related to Newton's laws is a perfect example of this as it is a paradigm that held sway for many years but was modified/replaced in the 20th century by Einstein's theory of relativity. Einstein's theory could explain phenomena which Newton's couldn't (no fault of Newton's, they were unobservable with the technology of his time), and so relativity became the new dominant paradigm.

Of course, there's not just one operative paradigm. Paradigms can be nested and there are different sets of paradigms which apply to different fields.

What's fascinating to me about this is how science uses a sort of factional competition between paradigms both to get around the problem of induction (that we can't derive absolute laws from observations) and to avoid the danger of getting locked into bad explanations that would seem to come with saying "let's just assume it's true".

I'd picked up the book because of my interest in test and control experiments in a business or civic setting which Manzi talks about later in the book. (He was one of the founders of Applied Predictive Technologies, a company which produces the Test & Learn software that I use at work.) However, thus far, I've actually found the first third of the book in which Manzi discusses the epistemology of the experimental method to be the most fascinating part of it.

Sunday, December 12, 2010

The Transmission of Human Life

The transmission of human life is a most serious role in which married people collaborate freely and responsibly with God the Creator. It has always been a source of great joy to them, even though it sometimes entails many difficulties and hardships.

The fulfillment of this duty has always posed problems to the conscience of married people, but the recent course of human society and the concomitant changes have provoked new questions. The Church cannot ignore these questions, for they concern matters intimately connected with the life and happiness of human beings.
--Humanae Vitae 1

She urges man not to betray his personal responsibilities by putting all his faith in technical expedients.
-- HV 18
In a hospital room on the Greek island of Crete with views of a sapphire sea lapping at ancient fortress walls, a Bulgarian woman plans to deliver a baby whose biological mother is an anonymous European egg donor, whose father is Italian, and whose birth is being orchestrated from Los Angeles.

She won't be keeping the child. The parents-to-be—an infertile Italian woman and her husband (who provided the sperm)—will take custody of the baby this summer, on the day of birth.
The Wall Street Journal's article "Assembling the Global Baby" is about the new business of surrogacy. I use the term "business" advisedly: there is a product that can be customized to the demands of the consumer, which is being outsourced because foreign workers will do the job for less than their first-world counterparts. And the excessive inventory is liquidated if the buyer doesn't want to purchase it.
Some of his own clients have faced the abortion decision, Mr. Rupak says. "Sometimes they find the money" to pay for more children than they expected, he says. After all, they went to such lengths. And if they decide otherwise, Mr. Rupak says, "We don't judge."
From this it follows that they are not free to act as they choose in the service of transmitting life, as if it were wholly up to them to decide what is the right course to follow. -- HV 10
PlanetHospital's most affordable package, the "India bundle," buys an egg donor, four embryo transfers into four separate surrogate mothers, room and board for the surrogate, and a car and driver for the parents-to-be when they travel to India to pick up the baby.

...Mr. Rupak says he is vigilant about the risks inherent in a lightly regulated business. He says he stopped using egg donors from Georgia in Eastern Europe, for instance, because a black market for eggs has sprung up in the region. This fall, Greek authorities busted a group of Romanian and Bulgarian men for allegedly forcing poor immigrant women to undergo egg extractions.
No statement of the problem and no solution to it is acceptable which does violence to man's essential dignity; those who propose such solutions base them on an utterly materialistic conception of man himself and his life. The only possible solution to this question is one which envisages the social and economic progress both of individuals and of the whole of human society, and which respects and promotes true human values. -- HV 23, quoting Mater et Magistra
...The couple planned on having two children. But their two surrogate mothers in India each became pregnant with twins.

At 12 weeks into the pregnancies, Mr. Aki and his husband decided to abort two of the fetuses, one from each woman. It was a very painful call to make, Mr. Aki says. "You start thinking to yourself, 'Oh, my god, am I killing this child?'"
Consequently, unless we are willing that the responsibility of procreating life should be left to the arbitrary decision of men, we must accept that there are certain limits, beyond which it is wrong to go, to the power of man over his own body and its natural functions—limits, let it be said, which no one, whether as a private individual or as a public authority, can lawfully exceed. These limits are expressly imposed because of the reverence due to the whole human organism and its natural functions... -- HV 17

Monday, June 28, 2010

African Rift Likely to Form New Ocean

Scientists have been paying special attention to the East African Rift since 2005, when a series of rifts began to open at the boundary of the African and Somalian plates. As the series of fissures has continued to develop and more seismic measurements have been taken over the last five years, Dr. Tim Wright has come to the conclusion that what we are seeing is the formation of a new ocean -- or of a new island, depending on how you want to look at it.


According to the presentation Wright is giving at the Royal Society's Summer Exhibition, over the next ten million years we will see part of Somalia and Ethiopia separate from the rest of Africa to form a large island in the Indian Ocean. The process of tectonic divergence being observed on land here is similar to what goes on below the ocean at the Mid-Atlantic Ridge and other places of tectonic divergence around the globe. And it makes for some pretty impressive pictures.

Wednesday, November 25, 2009

Programmer Smack Talk and Global Warming

I've been amused to watch some of the arguments going on out in the blogosphere as discussion of the hacking of the climate research servers shifts to the quality of the code being used by climate researchers to model global warming.

Example:
Commenter One: Much of the code in the academic world tends to be written by grad students that have taken a class in programming and get told to write it.

Commenter Two: This is totally untrue. I never took a class in programming before writing my crappy undocumented code.

There's a certain wry self-recognition for me here as well: I've never taken a class in programming, and I build mostly undocumented models to predict revenue and profits at specific price points based on past data. My results are directionally correct when you look at whole categories of products, but can be wildly off when projecting specific instances. (I try to make this clear to those who use my data, but people are always looking for certainty in life, even if they have to imagine it.)

The difference is, of course, that I'm seeking to mitigate the risks people take in making decisions that they're going to make anyway. "Gee, I really feel like we need to turn this product 50% off for the holidays." "Well, past experience shows that we wouldn't sell many more units, but would lose a whole lot of money. Let's try something else."
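To make that conversation concrete, here's a toy sketch of the general sort of model I mean -- made-up numbers and a deliberately naive constant-elasticity fit, nothing like the actual models I use at work:

import numpy as np

# Made-up history: (price, units sold) observations for one product
prices = np.array([19.99, 17.99, 15.99, 14.99, 12.99])
units = np.array([120.0, 150.0, 210.0, 240.0, 330.0])

# Fit a constant-elasticity curve: log(units) = a + b * log(price)
b, a = np.polyfit(np.log(prices), np.log(units), 1)

unit_cost = 9.50
for candidate in (16.99, 14.99, 9.99):   # 9.99 is roughly the "50% off" scenario
    est_units = np.exp(a + b * np.log(candidate))
    revenue = candidate * est_units
    profit = (candidate - unit_cost) * est_units
    print(f"${candidate:.2f}: ~{est_units:.0f} units, revenue ${revenue:,.0f}, profit ${profit:,.0f}")

The value of the exercise is in the comparison rather than the point estimates: the deep-discount scenario moves a lot more units but gives away most of the margin, which is exactly the kind of directionally-correct-but-imprecise answer I mean.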

You would think that if you were going to, say, recommend that the entire world ratchet levels of CO2 emissions back to the levels of the 1800s (with all the impacts on living standards and, let's be honest here, human life, which that entails), you would aspire to higher levels of accuracy and transparency.

In a sense, I would imagine that these climate researchers have much the same justification for their actions that I do: They're just giving people common sense advice. I advise people not to waste too much profit margin. They advise people not to emit too much CO2.

Waste enough profit margin and your company goes out of business. Get enough CO2 in your atmosphere, and you get to enjoy the kind of climate that Venus has. From the point of view of serious environmentalists, who often seem to assume that any change made by humans to the planet is pretty clearly a bad thing, it may not seem like one needs to bring a lot of rigor to advising people to not burn fossil fuels. From that point of view, of course doing all these "unnatural things" will have bad consequences.

However, for the rest of us, the fact that modern industrial technology allows six billion people to live on this planet -- and for many of them to do so in greater material comfort than at any previous time in history -- is pretty clearly a good thing. And standing in front of that yelling "Stop" requires some pretty rigorous evidence. This isn't something that can be left to buggy code whose results are massaged into shape manually when they're going to come out into the light.

Thursday, October 08, 2009

Ardi: Looking at the Latest Missing Link

Virtually everyone with any access to news last week probably heard about Ardi, a 4.4 million year old skeleton of a human ancestor found in Ethiopia. However, given the tendency of the mainstream media to cover every ancient primate discovery as "Scientists discover 'missing link' which 'changes everything'" those who don't track these things can easily become confused, or even rather suspicious of the whole thing.

So, what is Ardi, and why is this discovery a big deal?

Ardi is a 45% complete skeleton of a female individual from the hominin species Ardipithecus ramidus. This is not a new species: we've known about Ardipithecus ramidus since a small number of bones from a member of the species were found in 1992, and the species was formally described and named in 1994. Living about 4.4 million years ago, Ardipithecus ramidus is also not the oldest human ancestor known or a common ancestor between humans and what appear to be our closest living genetic relatives, the chimps. However, the excitement about Ardi (found along with less complete remains of a number of other Ardipithecus ramidus individuals and also fossil evidence about the plants and animals present in their environment) is not just hype. It is a very important find. Here's why:

Very Complete, Very Old
Invariably, Ardi has been compared to the other famous hominid find, Lucy, who made headlines back in the 70s. However, Ardi is both more complete than Lucy and also over a million years older. Lucy was a 40% complete skeleton, about 3.2 million years old, belonging to the species Australopithecus afarensis.

We have a few fossil finds from hominid species which are older than Ardi, but we don't know nearly as much about these species because the finds are much more fragmentary. Sahelanthropus tchadensis lived 6-7 million years ago, but the only fossil found so far is a partial skull. Orrorin tugenensis lived 6 million years ago, but all we have is a leg bone and a few fragments. So while basically all we know about these earlier species is that we have a few scraps of bone from a creature that looks to be a hominid and doesn't belong to any other known species, we now have a very clear idea of what Ardipithecus ramidus looked like, and thus what hominids living 4.4 million years ago were like.

A Missing Link
Is Ardi a "missing link"? Well, she (and the other remains found in the same place -- much more partial remains of 35 other individuals) is certainly a missing link in the sense that these fossils provide us with a lot of fascinating information about a certain stage in hominid evolution. But there is no single "missing link" in the hominid ancestry chain. Fossils of primates in general are so rare that piecing together the more distant periods of human ancestry is very, very hard. While the charts we see in books and articles suggest seamless lines of descent, the actual evidence we have is often quite fragmentary, and even the links of the chain that we do have are often only partial. One stage or even a whole species may be represented by only a partial skull or most of a leg -- enough to tell it's different from known species, but not enough to have a very complete picture of the species. The below chart (excuse my poor freehand drawing skills) shows the problem, and why there's often dispute among biologists as to where the actual branches are, and whether we're descendants or cousins of some hominid species.

What is often referred to as "the missing link" is the hope of finding a species which appears to be a direct ancestor of both modern chimps and modern humans. Ardipithecus ramidus is not such a link, and indeed, some researchers are suggesting that Ardi points to that common ancestor being more ancient than previously believed.

What Ardi Tells Us
One of the most interesting things about Ardi is what she seems to indicate about human/chimp divergence. It had been widely assumed at one point that the common ancestor of humans and chimps probably looked a lot like a chimp. Our DNA shows that we're closely related to chimps, and because we often have difficulty not thinking about evolution in terms of "progress" (especially when we're talking about ourselves) it's natural to think of chimps as the "ancient" form and to talk about "humans evolving from chimps".

Lucy knocked a bit of a hole in this thinking back in the 70s by showing that upright posture went back to Australopithecus afarensis 3+ million years ago, putting to rest the already crumbling idea that hominids prior to Homo erectus had been "knuckle draggers".

Now we have Ardi, who despite having a big toe that would have allowed her to grip things with her feet, has a pelvis and legs which are clearly adapted to walking upright 4.4 million years ago. Even the leg bones we have from Orrorin tugenensis 6 million years ago appear to suggest a bipedal posture (though it's harder to know from such incomplete remains). So with Ardi's well-preserved skeleton for confirmation, it's starting to look very much like human ancestors have been bipedal for a very long time. Large brains and other adaptations came later, but it would appear that it may have been the chimps and gorillas who developed adaptations for arboreal life, and in the process shifted to walking on all fours and putting weight on the knuckles of their hands -- rather than these being features that our ancestors shed.

Ardi did have proportionally much longer arms than more modern human ancestors, and her fingers were long for gripping branches. Her feet could still grip better than ours can (though not as well as modern great apes). Her brain was about the same size as that of a chimp, and she stood about four feet tall (the height of my seven-year-old). But while she probably did not possess any of the traits that we see as uniquely human (language, higher consciousness, reason, complex tool-making, etc.), she looked less "like an ape" than past expectations would have suggested.

For more detailed information, the following are interesting links:

At long last, meet Ardipithecus ramidus

Ardipithecus: We Meet At Last

And if you really want the mother lode, the journal Science (which put out a special issue with all the original research papers on Ardi) has taken the unprecedented step of making all of the papers available on their site if you fill out a free registration. The Science Magazine Ardipithecus site is here.

Tuesday, August 18, 2009

Blame the Neolithic

A brief article in The Economist relays some evidence a palaeoclimatologist has recently put forward that anthropogenic global warming began 5000-7000 years ago, as a result of slash-and-burn agriculture spreading throughout Europe and Asia. Or to be less exciting, ice core samples show that CO2 and methane levels started rising 5000-7000 years ago, and since it's known that agriculture was spreading widely at that time, Dr. William Ruddiman of the University of Virginia in Charlottesville (among others) argues that early agriculture may be to blame. Although the world population 5000 years ago was obviously much smaller, the efficiency of agriculture was also much lower (Ruddiman estimates per capita land use was 10x higher than in recent recorded history), so the total amount of land being cleared and burned could still have been substantial.

The idea of early societies exacting a heavy environmental toll is not new. There's fairly wide support for the idea that deforestation and over-mining contributed to the collapse of the Bronze Age cultures in the Mediterranean. However, the idea that "global warming" started with the late neolithic is kind of charming. Please consider adopting a more hunter-gatherer lifestyle!

More practically, this strikes me as underlining that there is not some single, sacred stability point which industrial civilization has destroyed. We humans and our planet have always had an effect on each other, and it's virtually impossible for us to avoid that. The course of wisdom lies in trying to avoid making more impact than necessary (while not setting unrealistic goals or stifling development) and being prepared to deal with unwanted effects that may occur.

Friday, May 22, 2009

Ida and the Missing Link



If you follow science headlines at all, you have doubtless heard about Ida, the diminutive 47-million-year-old primate who was unveiled to the world this week with nearly unprecedented publicity. Google even got into the excitement with an Ida-themed Google header.

So, what's so special about this find? Is it the "missing link" in human evolutionary history as many mainstream news headlines have suggested?

Well, to the extent that "the missing link" is colloquially used to refer to the most recent common ancestor of humans and chimps, not even close. Ida is more than four times older. And most paleontologists still aren't sure that the species she belongs to is in our line of ancestry at all. This graphic from the New Scientist shows visually what's being argued about here. What is very much news here is that Ida may help answer some questions about the very, very early history of primates, when the ancestors of modern apes, monkeys and humans were diverging from those of modern lorises and lemurs. However, that's not as exciting a story, so the media seems to be blundering about in the fashion they often do in reporting any specialized field.

The other thing that's incredibly exciting about Ida is that she is an unusually well-preserved fossil, which among early primates is especially rare. (Given that good fossilization requires being quickly covered in fine sediment somewhere like a gentle river, you can see why tree-dwelling creatures wouldn't get the treatment very often.) Ida was covered, immediately after her untimely demise, by soft volcanic ash which left a fossil that is 95% complete, fully articulated, and includes prints of her fur and organs.
The precise composition of the volcanic deposits in which Ida was found even allowed preservation of her soft tissue. “You can see the fur, the ears, all of the gut contents [leaves and a fruit], all the fingertips and toes,” Smith says.

Smith and her colleagues were able to guess Ida’s age based on the fossil’s teeth. “She was just turning over and replacing her baby teeth in the front of her face, and the molars were coming in the back,” Smith says. Because Ida had many teeth forming at the same time, Smith thinks the primate must have grown up fast, developing much quicker than a human would. Ida died before she was 1 year old, Smith and her colleagues suggest. Comparisons with a similar animal, the squirrel monkey, led the researchers to guess that Ida might have lived for 15 or 20 years had she not met an early demise.
This has allowed us to get an unusually accurate view of what this early primate looked like. She's rather fetching, really.

The full paper on her (from which that sketch comes) can be found here.

Wednesday, May 20, 2009

Children of the Corn



Here to the north of Austin, we live in an odd patchwork of new neighborhoods, business parks and shopping malls interspersed with open fields. Cattle graze in the field next to our supermarket, and corn grows across the street from our bank.

Seeing those orderly fields of corn, I'd never have guessed that corn represents an intriguing mystery in regard to plant evolution and the history of humanity's interaction with the plants we live off of. First domesticated in Central America around 7000 years ago, corn as we find it today is a domestic-only plant in that it is virtually incapable of reproducing in the wild.

One of the characteristics of corn that makes it such a useful crop is the incredibly high return of kernels harvested to kernels planted. Biologically, one of the reasons for this abundance is that unlike other grasses which have been domesticated as agricultural grains, the corn cob forms halfway down the plant, closer to sources of water and nutrients, and thus the plant is able to put more energy into seed growth. In other cereals, the seeds are at the very top of the stalk, at the plant's farthest extremity.

Another great feature of corn is that the cob is covered by a husk, which largely protects the grain from pests. It pretty much requires a creature with opposable thumbs to get the husk off, which means you lose less of the grain prior to harvest. Plus, the kernels are well-rooted into the cob, as compared to grains like wheat where the ripe seeds can easily fall from the ear of grain.

However, all of this -- particularly the firmness of the kernels in the cob and the husk covering it -- means that if there are no humans to harvest the corn, very, very little of it will succeed in naturally reseeding. If a cornfield were abandoned before harvest and you returned in five years to see if any wild corn was left growing, you would probably find few to no corn plants.

This means that corn as we find it today must be biologically fairly different from the corn ancestor which Central Americans first found in the wild and domesticated. The predominant theory out there is that corn is descended from the grass called teosinte which is found in Mexico even today, but the differences between the two plants are extensive, though there is enough genetic similarity to make it pretty clear they are related. Teosinte grains are far out on the extremities of a branching stalk, the grains are covered by a hard outer covering (like the chaff of wheat), the grains are not strongly rooted in a cob-like structure, and they are not covered by a husk that remains closed.

The National Science Foundation has a nice comparison here:


The prevailing theory at the moment is apparently that teosinte underwent a series of major mutations during a very short period of time which resulted in the corn we see today. I find that a bit unsatisfying, since a series of major, convenient, stable mutations is hard to come by. Thus I was interested to find this article about Prof. Mary Eubanks of Duke University, who has been working on the theory that corn as we know it today is the result of multiple hybridizations between teosinte and another wild grass called tripsacum. She's developed a hybrid of tripsacum and modern corn which exhibits many of the properties of the ancient ears of corn dating back 5000+ years that have been found in caves in Mexico. Apparently she has pretty decent genetic evidence for this as well by now.

While I'm not remotely an expert, I must admit to finding the hybridization explanation somewhat more convincing on the face of it than the sudden large mutation explanation. And I had never realized that corn was so interesting.

Monday, April 06, 2009

Philosophy and Health

Philosophy is often seen as one of those highly impractical, strictly academic fields, and yet, it has a way of being at the root of everything.

I was struck, recently, by a contrast in two statements about medicine. In an article about the importance of finding medical ways to enhance female sex drive, I ran across a claim along the lines of, "Many experts believe that more than 50% of women over 30 suffer abnormally low interest in sex and would benefit from sexual drive enhancing medication if it became available." The immediate connection my mind made was: No more than 5% of people are attracted primarily to their own sex, and yet this is not considered a medical abnormality.

These two together show that the medical community (and our society in general) clearly has some sort of philosophy of the human person and philosophy of sexuality, which is doubtless assumed and unstated. Women, it is believed, ought to have a sexual drive equal to that of men, regardless of whether that is what we find in nature or not. (Even though there are some obvious evolutionary reasons why males would be physically more interested in frequency of copulation than females.) And yet if one primarily experiences sexual attraction to one's own sex, even though that both "doesn't fit the plumbing" and is evolutionarily useless, that is perfectly fine and healthy, even if this is a condition found in only a small percentage of the population.

Medicine, in its modern form, is generally an empirical field. Yet the questions "What is normal?" and "What is abnormal?" are ones we always answer philosophically rather than empirically.

Necessarily so. Often our sense of what "ought" to happen is directly contrary to the observed usual occurrence. "Health" is not simply what we observe to be the usual, otherwise we would consider the "healthy" result of a diagnosis of lymphoma to be death.

We chase the telos just as much as in Aristotle's time, and yet we do not acknowledge that what we are doing is anything other than an "empirical science".

Friday, January 30, 2009

Empirical Methodology vs. Human Nature

Sorry to be naught but a linker this week, but things have been busy...

This New York Times Magazine article, interviewing three female researchers studying female desire, struck me as an interesting example of how scientific methodologies are of limited use in describing the human person. Many of their conclusions and ideas are interesting, and you can see how they reflect lived experience to a limited extent, but the fruits of all this research are also startlingly inadequate in describing something as universally experienced as human love and sexuality. And necessarily so, I would tend to think.

Tuesday, January 06, 2009

Contraceptive or Abortive?

Clearly as Catholics, we're not supposed to be doing either one, nor do I have any desire to, but in regard to questions of conscience and regulation the facts of the matter are clearly important. It's been discussed often in pro-life circles over the years that oral contraception is designed such that it sometimes allows fertilization but prevents implantation -- thus in effect causing a spontaneous abortion several days after conception. This is generally applied even more so to "morning after" pills, which is a reason why some pharmacists have conscience objections to dispensing such medications. Given that the Protestant half of the pro-life movement is often fairly comfortable with birth control in concept, this "the pill causes abortion a certain percentage of the time" argument has often been used to help unify the pro-life movement against birth control.

Regardless of whether it's true, I think that clearly the contraceptive mentality and the approach to sex it entails is certainly a major cause of support for abortion.

All that said, I'd be very curious as to the reaction of Catholics more educated in the precise medical details of human reproduction than I am to this Slate post, which argues that the evidence for abortifacient properties of The Pill and Plan B is slim to none.

I've no particular interest in endorsing Saletan's opinions generally; he's the one who made the, to my mind, rather nutty argument a while back that Planned Parenthood was overall an anti-abortion organization because contraception prevents unwanted pregnancies. But if it's correct as a matter of medical science that it's very, very rare for oral contraceptives and even "morning after" pills to cause spontaneous abortions, we'll do nothing but make ourselves look unconcerned with the truth (and the scientific truth of embryology is very much on our side when it comes to what the pro-life movement says about abortion as opposed to what the "just a clump of cells" people say) if we keep pushing it.

Feedback from those with medical or scientific knowledge of the issue would be appreciated.

Wednesday, November 26, 2008

Raising The Dead

If, as I did, you spent much of your youth reading about creatures of the past, and especially if you had a chance to frequent museums, as I did the Page Museum at Los Angeles' La Brea Tar Pits, the idea of seeing some of the large land mammals which went extinct within the last 50,000 years brought back to life can't help seeming a bit attractive.

Who could resist the chance to see an American Mastodon (above) or a Smilodon, or "saber-toothed cat" (below) alive again?


Scientists are increasingly thinking that it would in fact be possible to produce live specimens of recently extinct mammals, such as the Woolly Mammoth, which died out in Europe and America (like the above creatures) only around 8,000 B.C.


The cause of these extinctions was probably dual: changing climate after the end of the last glacial period, and excessive hunting by humans. That virtually all large mammal species (among them the American Horse and American Camel) other than the American Bison vanished from North America shortly after the flourishing of human populations here is probably not entirely a coincidence. (A similar pattern occurred in other parts of the world, notably Australia where a number of large marsupial species died out right after the arrival of humans.)

Given how recent these extinctions were, there's fairly "fresh" genetic material still around from them, and fairly close living relatives around to serve as surrogate parents, so of course as genetic technology advances scientists have become increasingly interested in trying to bring back one or more of these recently extinct species using preserved genetic material and cloning-type procedures. The New York Times reports:
Scientists are talking for the first time about the old idea of resurrecting extinct species as if this staple of science fiction is a realistic possibility, saying that a living mammoth could perhaps be regenerated for as little as $10 million....

A scientific team headed by Stephan C. Schuster and Webb Miller at Pennsylvania State University reports in Thursday’s issue of Nature that it has recovered a large fraction of the mammoth genome from clumps of mammoth hair. Mammoths, ice-age relatives of the elephant, were hunted by the modern humans who first learned to inhabit Siberia some 22,000 years ago. The mammoths fell extinct in both their Siberian and North American homelands toward the end of the last ice age, some 10,000 years ago.

Dr. Schuster and Dr. Miller said there was no technical obstacle to decoding the full mammoth genome, which they believe could be achieved for a further $2 million. They have already been able to calculate that the mammoth’s genes differ at some 400,000 sites on its genome from that of the African elephant.

There is no present way to synthesize a genome-size chunk of mammoth DNA, let alone to develop it into a whole animal. But Dr. Schuster said a shortcut would be to modify the genome of an elephant’s cell at the 400,000 or more sites necessary to make it resemble a mammoth’s genome. The cell could be converted into an embryo and brought to term by an elephant, a project he estimated would cost some $10 million. “This is something that could work, though it will be tedious and expensive,” he said.
I must admit, I find this a rather exciting idea. However, it seems that no sooner do people come up with the ability to do something like this than they get other ideas:
The same would be technically possible with Neanderthals, whose full genome is expected to be recovered shortly, but there would be several ethical issues in modifying modern human DNA to that of another human species.
Love the charming understatement of "there would be several ethical issues in modifying modern human DNA to that of another human species", eh?

Oh, but fear not. We've got some deep thinkers here who are working hard to make sure that they don't engage in any research that treads on ethically thin ice:
But the process of genetically engineering a human genome into the Neanderthal version would probably raise many objections, as would several other aspects of such a project. “Catholic teaching opposes all human cloning, and all production of human beings in the laboratory, so I do not see how any of this could be ethically acceptable in humans,” said Richard Doerflinger, an official with the United States Conference of Catholic Bishops.

Dr. Church said there might be an alternative approach that would “alarm a minimal number of people.” The workaround would be to modify not a human genome but that of the chimpanzee, which is some 98 percent similar to that of people. The chimp’s genome would be progressively modified until close enough to that of Neanderthals, and the embryo brought to term in a chimpanzee.

“The big issue would be whether enough people felt that a chimp-Neanderthal hybrid would be acceptable, and that would be broadly discussed before anyone started to work on it,” Dr. Church said.
I'm sorry, perhaps I shouldn't be flip and derisive, but in the words of my generation: Is this guy for real?

Does he really imagine that the big moral objection that people have to the idea of cloning Neanderthals is just a matter of whether a human egg or chimp egg is used?

The Neanderthals were around until about 25,000 years ago. They made complex tools, wore simple jewelry, made clothes, buried their dead, used fire, and there's fairly strong evidence that they also created art and had speech abilities comparable to those of modern humans. One of the main controversies about Neanderthals is whether they are a separate species within the genus Homo or a separate subspecies within Homo sapiens (Homo sapiens sapiens vs. Homo sapiens neanderthalensis).

Even leaving aside thorny religious questions (of which I think there are definitely many -- given that I don't think creating genetic knock-offs of modern humans is remotely acceptable, and also given that the evidence is decent that the Neanderthals were themselves religiously aware in some sense, and thus not to be taken as a non-entity in the question) this seems like an appalling idea from a strictly humanitarian perspective. Create one or more "resurrected" members of an extinct group of humans as a scientific and technical stunt? A group of humans which clearly had mental and physical (and I would assume thus emotional) characteristics not so different from our own -- and yet almost certainly different as well.

Would the researchers "own" these Neanderthals? Would they be citizens with their own rights? What kind of life are you setting someone up for by artificially bringing him into the world 25,000 years after those like him died out, and making him a curiosity among a sea of those similar yet not the same?

Don't get me wrong, I can certainly find the idea of meeting other types of humans interesting, but artificially creating them strikes me as not only irresponsible, but inhumane.

Well, I'm apparently not the only one who had some ethical questions about the article, because the New York Times chose to run an editorial about the article. Their ethical concern?
The first mammoth would be a lonely zoo freak, vulnerable to diseases unknown to its ancestors. To live a full and rewarding life, it would need other mammoths to hang out with, a mate to produce a family and a suitable place to live. The sort of environment it is used to — the frigid wastes of Siberia and North America — are disappearing all too fast...

If scientists do bring back a few mammoths, we suspect our warming world won’t look any more hospitable than the one that did them in.
Yes, they ponder whether it would be moral to bring a woolly mammoth into a world with global warming.

Do you ever get the impression there are people with very different philosophical and moral compasses than your own in our country?

Friday, August 15, 2008

The Pill and Mate Selection

This has been bouncing around the Catholic blogosphere due to being picked up by the blog at First Things, but I post it here in hopes of perhaps drawing a comment out of Razib or someone else with some more serious biological knowledge. It would seem that evolutionary psychologist Stewart Craig Roberts has a paper coming out in the current issue of the Proceedings of the Royal Society: Biological Sciences in which he presents data that women show different odor-based preferences in regard to men when they are pregnant and when they are on The Pill (which uses hormones to reproduce some effects of pregnancy, thus suppressing ovulation).
While several factors can send a woman swooning, including big brains and brawn, body odor can be critical in the final decision, the researchers say. That's because beneath a woman's flowery fragrance or a guy's musk the body sends out aromatic molecules that indicate genetic compatibility.

Major histocompatibility complex (MHC) genes are involved in immune response and other functions, and the best mates are those that have different MHC smells than you. The new study reveals, however, that when women are on the pill they prefer guys with matching MHC odors.

MHC genes churn out substances that tell the body whether a cell is a native or an invader. When individuals with different MHC genes mate, their offspring's immune systems can recognize a broader range of foreign cells, making them more fit.

Past studies have suggested couples with dissimilar MHC genes are more satisfied and more likely to be faithful to a mate. And the opposite is also true with matching-MHC couples showing less satisfaction and more wandering eyes.

"Not only could MHC-similarity in couples lead to fertility problems," said lead researcher Stewart Craig Roberts, an evolutionary psychologist at the University of Newcastle in England, "but it could ultimately lead to the breakdown of relationships when women stop using the contraceptive pill, as odor perception plays a significant role in maintaining attraction to partners."

The study involved about 100 women, aged 18 to 35, who chose which of six male body-odor samples they preferred. They were tested at the start of the study when none of the participants were taking contraceptive pills and three months later after 40 of the women had started taking the pill more than two months prior.

For the non-pill users, results didn't show a significant preference for similar or dissimilar MHC odors. When women started taking birth control, their odor preferences changed. These women were much more likely than non-pill users to prefer MHC-similar odors.

"The results showed that the preferences of women who began using the contraceptive pill shifted towards men with genetically similar odors," Roberts said....

"When women are pregnant there's no selection pressure, evolutionarily speaking, for having a preference for genetically dissimilar odors," Roberts said. "And if there is any pressure at all it would be towards relatives, who would be more genetically similar, because the relatives would help those individuals rear the baby."

So the pill puts a woman's body into a post-mating state, even though she might be still in the game.

”The pill is in effect mirroring a natural shift but at an inappropriate time,” Roberts told LiveScience.
Obviously this is just one factor in relationship dynamics, but it does strike me as interesting in that it seems to me that birth control is a fairly culturally disruptive technology which, generally speaking, was taken up without a whole lot of thought about anything other than the obvious benefits.

It's also an example of the ways in which things we don't think of affect our feelings and actions. No one, I'm sure, would think, "Boy, my boyfriend just doesn't smell alluring anymore." (Unless, perhaps, she was about to tell him to go take a shower rather than plopping down on the couch next to her after returning from the gym.) But a thought of, "He just doesn't seem exciting anymore" or "We just don't seem to have a spark these days" might well include a response to senses that we do not actively think about.

UPDATE: Razib puts up a good post on the question here. And provides a link to the original study here.

Worth noting is the confluence of interests that gives this story so much play. In the mainstream press, it's a quirky result about something which nearly everyone takes -- probably good mostly for a laugh. "Hey, did you hear the one about how your girlfriend is more likely to dump you for her brother when she's on the pill?"

Meanwhile, in the small subculture of those who have rejected birth control, it serves as a bit of an "I told you so".

In the end, it strikes me as a bit interesting -- more as an example of how a physical reaction can unconsciously affect our personal choices than as a proof that women on the pill will form bad relationships. (After all, there's nothing that would necessarily make a relationship with someone who happened to have a more similar immunity profile a "bad relationship".) Much more concerning, if one is listing off reasons to be cautious of the wide use of birth control, is that having fertility be strictly optional removes the biological incentive from a lot of ancient social structures that we pretty much take for granted, and don't want to see go away.

Tuesday, June 17, 2008

Engineering Our Way Out of Crisis

The Times of London has an interesting article about efforts to produce crude oil from plant waste such as straw and woodchips using genetically modified micro-organisms which excrete oil as a waste product. Thus far, their efforts have been very small scale, but it's a fascinating effort.

“Ten years ago I could never have imagined I’d be doing this,” says Greg Pal, 33, a former software executive, as he squints into the late afternoon Californian sun. “I mean, this is essentially agriculture, right? But the people I talk to – especially the ones coming out of business school – this is the one hot area everyone wants to get into.”

He means bugs. To be more precise: the genetic alteration of bugs – very, very small ones – so that when they feed on agricultural waste such as woodchips or wheat straw, they do something extraordinary. They excrete crude oil.

Unbelievably, this is not science fiction. Mr Pal holds up a small beaker of bug excretion that could, theoretically, be poured into the tank of the giant Lexus SUV next to us. Not that Mr Pal is willing to risk it just yet. He gives it a month before the first vehicle is filled up on what he calls “renewable petroleum”. After that, he grins, “it’s a brave new world”.

Mr Pal is a senior director of LS9, one of several companies in or near Silicon Valley that have spurned traditional high-tech activities such as software and networking and embarked instead on an extraordinary race to make $140-a-barrel oil (£70) from Saudi Arabia obsolete. “All of us here – everyone in this company and in this industry, are aware of the urgency,” Mr Pal says.

What is most remarkable about what they are doing is that instead of trying to reengineer the global economy – as is required, for example, for the use of hydrogen fuel – they are trying to make a product that is interchangeable with oil. The company claims that this “Oil 2.0” will not only be renewable but also carbon negative – meaning that the carbon it emits will be less than that sucked from the atmosphere by the raw materials from which it is made.

Fascinating stuff. I would imagine there are also some folks out there working hard at super-efficient CO2 converting micro-organisms, to be used as "scrubbers" in power plants and perhaps even internal combustion engines.

I know that some fellow science enthusiasts find me overly blasé about the prospect of global warming, peak oil, etc., but given humanity's track record over the last few hundred years, it strikes me as fairly likely we'll manage to engineer our way out of any scenarios in which earth becomes uninhabitable for us. This struck me particularly when I found myself flipping through the environmentally alarmist tome Six Degrees the other day. It discusses what would happen if the earth's average temperature increased six degrees Centigrade (about 11 degrees F), as the most extreme global warming models currently suggest could be possible in the next hundred years or more. Needless to say, it's pretty dire. Author Mark Lynas predicts that a full 6C rise could result in the collapse of civilization, the extinction of most plant and animal species, and a return to the stone age, if humans didn't die out completely.

Now here's the thing I find unconvincing about these kinds of scenarios: they're invariably based on global warming skyrocketing while humanity sits down and suffers the consequences, with billions dying, civilization vanishing, etc. Maybe I've got too much of a Heinlein mentality, but I don't see humanity going quietly into the night (or desert, as the case may be). If we started to see really massive, destructive effects that were clearly the result of global warming, expect someone (if not in the West, then in developing nations like China and India) to take matters into their own hands and do something drastic. For instance, if you created a massive underground explosion along the lines of Krakatoa (think underground nuclear test that makes the USSR's "Tsar Bomba" look small) you could win yourself a couple of years of unusually cool world temperatures as a result of dust suspended in the atmosphere. Heck, perhaps there's even an easier way to get that many particulates into the upper atmosphere. Similarly, genetically modified plants and micro-organisms might be used to try to draw down CO2 quickly.
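To put a very rough number on that intuition: the 1991 Mount Pinatubo eruption is commonly credited with lofting something like 15-20 million tonnes of sulphur dioxide into the stratosphere and shaving roughly half a degree Celsius off global temperatures for a year or two. Treating that as a yardstick and scaling it linearly -- which is purely my own back-of-envelope simplification, not a real climate model -- gives a crude sense of the quantities involved:

```python
# Crude back-of-envelope: tonnes of stratospheric SO2 needed to mask a
# given amount of warming, scaling linearly from rough Mount Pinatubo
# figures (~17 Mt SO2 -> ~0.5 C of temporary cooling). Both the figures
# and the assumption of linearity are illustrative only.

PINATUBO_SO2_MEGATONNES = 17.0   # rough, commonly cited figure
PINATUBO_COOLING_C = 0.5         # rough peak global cooling, 1992-93

def so2_to_mask(warming_c: float) -> float:
    """Very rough megatonnes of SO2 needed to offset `warming_c` degrees."""
    return PINATUBO_SO2_MEGATONNES * (warming_c / PINATUBO_COOLING_C)

if __name__ == "__main__":
    for degrees in (1.0, 2.0, 6.0):
        print(f"To mask {degrees:.0f} C of warming: roughly "
              f"{so2_to_mask(degrees):.0f} Mt of SO2")
```

And since the aerosols wash out of the stratosphere within a couple of years, that injection would have to be repeated more or less continuously -- which is exactly the sort of large, ongoing, clumsy intervention I have in mind below.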

Now obviously, we don't think about trying these things right now because they're easy things to get wrong, with the possibility of a runaway GMO wiping out hundreds of existing plant species, or your attempt to get particulates into the atmosphere causing world-wide nuclear fallout. But if countries like China are in danger of collapse, or tens of millions of people world-wide are starving, expect caution to be thrown to the winds.

This isn't to say that the earth might not end up trashed, but my guess is that it would be trashed by massive (and sometimes poorly thought out) attempts to stave off disaster, rather than by the warming itself. I'm pretty sure we could keep the climate from cooking us out of existence. The question is, can we do that without trashing the environment, or will we do it by large and clumsy means? Either way, I don't see world temperatures going up 6C. There may be environmental disasters in our future, but I don't see that being one of them.

(Note: It's also entirely possible that the natural systems of our planet have the ability to adjust to process much more CO2 than we imagine, without going into serious global warming. I consider that fairly possible, but I'm ignoring it for the purposes of this discussion.)

Thursday, May 15, 2008

Random Biological Thought of the Day

One of the things that sets us apart from our close genetic relatives among the large primates is that human females do not have obvious visual cues as to when they are fertile. Sure, as every NFP-using couple knows, with enough study we can figure these things out with a fair degree of accuracy. But it's certainly not something where you can simply look at your wife from across the room and see, "Ah, fertile at the moment, are we."

By comparison, when researchers are watching bands of chimps, the physical signs of a female chimp becoming fertile are so obvious that the researchers can spot them from a distance. And the male chimps certainly are not in any doubt. (Of course, it helps in this regard that chimps don't wear clothes, but you get the idea.)

It strikes me that as humans became, well... human, and our social structures began to develop, the fact that it's not readily obvious when women are and are not able to conceive probably helped to reinforce the need for marriage (or stable mating arrangements, if you want to sound all analytical about it) and for family social structures. This lack of certainty created a need for social structures that emphasized long term fidelity.

Think about it this way: our lack of certainty as to whether any given act of intercourse will lead to children is one of the aspects of the human creature which has had a fundamental influence on how societies developed. Or, to put it in moral terms: this physical reality reflects the intention that families be based on permanent fidelity.

All of which suggests that the advent of generally effective birth control would be very socially disruptive. (Which I think one could certainly argue it has been.) And that if 100% effective artificial birth control were developed (which could be turned off and on at will yet allowed no user error or failure) it would be even more socially destructive.

In a strictly cultural sense: the social structures we're used to surrounding marriage and the family are based on the assumption of not knowing when sex will result in offspring. In a moral sense: that physical reality reflects that we are creatures who are made to work a certain way -- and our morals surrounding marriage are the "operating manual" for how to live successfully within that reality.

When we change these things, we in some real sense change who we are.

Friday, January 18, 2008

Pinker & Morality

Steven Pinker graced last Sunday's New York Times Magazine with a lengthy article titled "The Moral Instinct". In it, he seeks to explain (and applaud) recent research by psychologists, "evolutionary psychologists" (a term I use with roughly the same appreciation as Stephen Jay Gould did) and neuroscientists into the origins of morality.

(Many thanks to the reader who sent the article along and went a couple rounds of discussion on it with me via email.)

Working from the basic assumption that morality consists of a set of emotional/psychological urgings and repugnances which find their origin in humanity's evolutionary past, those investigating the moral instinct have tried to classify sets of moral reactions and speculate on how these might have come to be. Though lengthy, Pinker keeps things spiced up with illustrations and dilemmas. However, many of these seem to assume a very unreflective view of morality -- one where moral "thought" is basically a matter of gut urgings which one is at a loss to explain. For instance, when talking about taboos Pinker provides the following examples:
Julie is traveling in France on summer vacation from college with her brother Mark. One night they decide that it would be interesting and fun if they tried making love. Julie was already taking birth-control pills, but Mark uses a condom, too, just to be safe. They both enjoy the sex but decide not to do it again. They keep the night as a special secret, which makes them feel closer to each other. What do you think about that — was it O.K. for them to make love?

A woman is cleaning out her closet and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

A family’s dog is killed by a car in front of their house. They heard that dog meat was delicious, so they cut up the dog’s body and cook it and eat it for dinner.

Most people immediately declare that these acts are wrong and then grope to justify why they are wrong. It’s not so easy. In the case of Julie and Mark, people raise the possibility of children with birth defects, but they are reminded that the couple were diligent about contraception. They suggest that the siblings will be emotionally hurt, but the story makes it clear that they weren’t. They submit that the act would offend the community, but then recall that it was kept a secret. Eventually many people admit, “I don’t know, I can’t explain it, I just know it’s wrong.” People don’t generally engage in moral reasoning, Haidt argues, but moral rationalization: they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification.

Two things strike me in this set of examples:

First, Pinker assumes that any rationale behind moral prohibitions must be pragmatic. All possible reasons provided for disapproving of incest are pragmatic, and the example is formulated in order to foil these sorts of objections. From his overall tone, I think this reflects an assumption (indeed, probably a deeply held belief) on Pinker's part that moral objections to something must, at root, be pragmatic and physical in their repercussions. If he'd posed the incest question to me, my response would have been something along the lines of, "It was wrong because their action violated both the inherent meaning of the sibling relationship and the inherent meaning of the sexual relationship between lovers." I have a feeling that Pinker would see that as just a fancy way of saying, "I don't like it," but that simply serves to underscore the fact that we'd be talking about different things when we talk about morality.

Second, he doesn't seem to take into account any difference between inherent meaning and cultural meaning. Using the flag as a dustcloth and eating the family pet both violate senses of respect and meaning which are cultural in nature. The flag does not have an inherent meaning. However, using it as a dustrag is offensive because of certain cultural understandings both of what the flag means and of what using a piece of cloth as a dustrag means. Similarly, the relationship of family to pet and the prohibition of eating pets are cultural. Incest and sex outside of marriage, however, violate inherent relationship types which cross cultural bounds. (This is not to say that all cultures necessarily share a prohibition against incest, though certainly most do, but rather that the relationship of "siblings" is something inherent to the human person, and that relationship inherently does not include "someone you have sex with".)

Pinker realizes he's playing with fire here, and concedes that many may see trying to develop an evolutionary understanding of morality as explaining it away:
And “morally corrosive” is exactly the term that some critics would apply to the new science of the moral sense. The attempt to dissect our moral intuitions can look like an attempt to debunk them. Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes.

However, he goes on to try to argue that discerning the evolutionary origins of morality will in fact reveal certain very real norms:
In his classic 1971 article, Trivers, the biologist, showed how natural selection could push in the direction of true selflessness. The emergence of tit-for-tat reciprocity, which lets organisms trade favors without being cheated, is just a first step. A favor-giver not only has to avoid blatant cheaters (those who would accept a favor but not return it) but also prefer generous reciprocators (those who return the biggest favor they can afford) over stingy ones (those who return the smallest favor they can get away with). Since it’s good to be chosen as a recipient of favors, a competition arises to be the most generous partner around. More accurately, a competition arises to appear to be the most generous partner around, since the favor-giver can’t literally read minds or see into the future. A reputation for fairness and generosity becomes an asset.

He goes on to argue that both the necessity of cooperation suggested by the iterated version of the prisoner's dilemma and the golden rule, as a means of persuading others to treat you nicely, are moral norms that have been hardwired into humanity by evolution.
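For readers who haven't run into it, the iterated prisoner's dilemma behind the Trivers argument is easy enough to simulate. What follows is just my own toy sketch in Python -- the payoff numbers are the standard textbook ones, not anything from Pinker's article -- but it shows the key point: a cheater gains only a small one-off edge against tit-for-tat, while a pair of reciprocators ends up far ahead of a pair of cheaters.

```python
# Toy iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoff values are the standard textbook ones (my assumption, not from
# Pinker's article): mutual cooperation 3 each, mutual defection 1 each,
# lone defector 5, exploited cooperator 0.

PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_moves, their_moves):
    # Cooperate first, then simply mirror the partner's previous move.
    return "C" if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    # The unconditional cheater.
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("Reciprocator vs. reciprocator:", play(tit_for_tat, tit_for_tat))
    print("Reciprocator vs. cheater:     ", play(tit_for_tat, always_defect))
    print("Cheater vs. cheater:          ", play(always_defect, always_defect))
```

The numbers themselves are toys, but they illustrate why, in the passage quoted above, "appearing to be the most generous partner around" becomes an asset: being known as a reciprocator is what gets you into the high-scoring pairings.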

Many may find that they want something a bit more, when it comes to morality. Sure, in a society with certain assumptions (notably an idea that people are inherently or functionally equal) it may be the case that most people will benefit most of the time by treating others as they want to be treated and cooperating rather than betraying, but "most people most of the time" is not exactly what the majority of people seek when they look to "morality". Pinker himself recognizes the difficulty:
Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?

Putting God in charge of morality is one way to solve the problem, of course, but Plato made short work of it 2,400 years ago. Does God have a good reason for designating certain acts as moral and others as immoral? If not — if his dictates are divine whims — why should we take them seriously? Suppose that God commanded us to torture a child. Would that make it all right, or would some other standard give us reasons to resist? And if, on the other hand, God was forced by moral reasons to issue some dictates and not others — if a command to torture a child was never an option — then why not appeal to those reasons directly?

This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.

Moral realism, as this idea is called, is too rich for many philosophers’ blood. Yet a diluted version of the idea — if not a list of cosmically inscribed Thou-Shalts, then at least a few If-Thens — is not crazy. Two features of reality point any rational, self-preserving social agent in a moral direction. And they could provide a benchmark for determining when the judgments of our moral sense are aligned with morality itself.

With all due respect, Pinker was sleeping through his Plato class. Plato didn't argue that morality couldn't come from God; rather, he argued that "the Good" must always be singular. It can't simply be "what pleases the gods," especially when you have a bunch of bickering gods who often do things even their devotees regard as immoral. This is one of the reasons Christians so readily embraced Plato: they saw his singular "the Good," which remained untouched and eternal above the strife of the pagan deities, as a close approximation to the one, good and eternal God of Jewish/Christian revelation.

But sticking to the realm of human reason -- does he present a good reason for rejecting a Platonic approach to morality? Well, it's "too rich for many philosophers' blood". Are we to take that as anything more than "They don't like it"? This certainly seems to underline the idea that faith is an act of the will as much as of the intellect.

Plato held that we often know truths without recognizing them, until those truths are drawn out of us. Pinker seems to be suffering from something of a lack of drawing out in his reactions to morality.

On the one hand, he wants to see morality as a biological/psychological phenomenon: a set of basic rules for how primates best get along together which has been programmed into us through countless generations of human social interaction. He boils these down to rules basic enough to be acceptable to modern culture: "be fair to other people", "treat others as you want them to treat you", etc. But then in his closing he attempts to use this to make all sorts of absolute assertions: Being against human cloning is irrational. Homosexual relationships are okay. Racism is bad.

And yet, none of these can be conclusively derived from the rules which he has decided to keep. And indeed, nothing can be conclusively derived from them, since the very nature which he assigns to morality is one of "society functions best if most people do X" rather than "everyone must do X".

The fact is, Pinker himself is not comfortable with certain things he despises (racism, genocide, sexism, homophobia) being only wrong some of the time, or only wrong for some people, and yet in the end he cannot come up with an explanation of a strictly psychological/biological morality which shows that it is always and everywhere wrong to violate his preferred norms of behavior. The understanding of morality he puts forth allows him to discard those norms that he doesn't like, but it doesn't allow him to retain those that he does.

Thursday, December 20, 2007

Faith in an Orderly Universe

It's not only Catholic cardinals who manage to create a firestorm when they write NY Times op-eds about science and philosophy. Physicist Paul Davies drew quite a bit of attention last month with an editorial titled "Taking Science on Faith". His point is one that has always struck me as deeply compelling: that science as a discipline relies on an implicit faith that the universe acts according to knowable laws.
Clearly, then, both religion and science are founded on faith — namely, on belief in the existence of something outside the universe, like an unexplained God or an unexplained set of physical laws, maybe even a huge ensemble of unseen universes, too. For that reason, both monotheistic religion and orthodox science fail to provide a complete account of physical existence.

This shared failing is no surprise, because the very notion of physical law is a theological one in the first place, a fact that makes many scientists squirm. Isaac Newton first got the idea of absolute, universal, perfect, immutable laws from the Christian doctrine that God created the world and ordered it in a rational way. Christians envisage God as upholding the natural order from beyond the universe, while physicists think of their laws as inhabiting an abstract transcendent realm of perfect mathematical relationships.
This view ruffled quite a few feathers. A follow-up article (the one that actually caught my eye the other day) by Dennis Overbye describes some of the flak that Davies has caught from other scientists, science enthusiasts, and anyone else who felt like writing to the Times letters column:

His argument provoked an avalanche of blog commentary, articles on Edge.org and letters to The Times, pointing out that the order we perceive in nature has been explored and tested for more than 2,000 years by observation and experimentation. That order is precisely the hypothesis that the scientific enterprise is engaged in testing.

David J. Gross, director of the Kavli Institute for Theoretical Physics in Santa Barbara, Calif., and co-winner of the Nobel Prize in physics, told me in an e-mail message, “I have more confidence in the methods of science, based on the amazing record of science and its ability over the centuries to answer unanswerable questions, than I do in the methods of faith (what are they?).”
However, the attempts Overbye quotes to explain science's reliance on an orderly universe without recourse to a leap of faith sound suspiciously like the same leap restated in different words:
Pressed, these scientists will describe the laws more pragmatically as a kind of shorthand for nature’s regularity. Sean Carroll, a cosmologist at the California Institute of Technology, put it this way: “A law of physics is a pattern that nature obeys without exception.”
That sounds very observational and pragmatic... except for the "without exception" part at the end there. True, we don't have to tune in every morning to the daily gravity report to see how fast things are falling that day, but saying that the laws of physics describe how the world behaves "without exception" based on a few hundred years of modern science (during most of which we interpreted our observations as pointing to laws other than our current understanding of physics) strikes me as taking something very like a leap of faith.

The issue, I think, is that some people who spend a lot of time and attention on science (this is actually more of an issue with science enthusiasts and low-level science teachers than with serious high-level research scientists -- though one finds it at times there as well) have rather too much invested in the idea that scientific methodologies are The One Reliable Way of Finding Out How the Universe Really Works.

And yet, taken on their own, scientific methodologies are generally formulated to determine how things appear to work in a given set of situations and times. It's our faith that the universe works in a knowable, orderly, fairly universal fashion that allows us to turn five hundred years of modern science (or 2500 if you want to date science from the Greeks) into knowledge of how things work "without exception."

What's ironic, in a sense, is that Davies is not trying to advocate more respect for faith via his editorial. Rather, his last two paragraphs issue a call to seek a new, less universal way of understanding the "laws" of science:
It seems to me there is no hope of ever explaining why the physical universe is as it is so long as we are fixated on immutable laws or meta-laws that exist reasonlessly or are imposed by divine providence. The alternative is to regard the laws of physics and the universe they govern as part and parcel of a unitary system, and to be incorporated together within a common explanatory scheme.

In other words, the laws should have an explanation from within the universe and not involve appealing to an external agency. The specifics of that explanation are a matter for future research. But until science comes up with a testable theory of the laws of the universe, its claim to be free of faith is manifestly bogus.
If science were to be a totally self-contained discipline, I can see the importance of what he's advocating. Though at the same time, I'm not entirely clear what these explanations internal to the universe would look like. The strong nuclear force works because... why?

That's the funny thing about "laws" in physics. Contrary to how my third grade science book tried to explain it, a law is not simply a hypothesis that has been tested many times. A law is something which seems to be universally the case, and yet has to be taken just "as is". There's not necessarily a "why" involved.

This is just fine if you simply consider science a methodology for explaining how material systems behave. It's rather more problematic if you have hopes of science being the one true method of knowing things for sure. Which is what leaves those interested in science who are comfortable with having a metaphysics in a better spot than those who imagine that one is better off without one.