Monday, January 31, 2005

Hobbits, Aliens & Morality

Actually, the impulse behind this post started with my reflections about Neanderthals. Over time (and most recently on a NOVA special) I have been exposed to the finding that modern humans and Neanderthals coexisted in Europe for thousands of years prior to the extinction of Neanderthals (only 35,000 years ago). The poignant and tragic dimensions of the story emerge as we consider how similar Neanderthals were to us, and how nearly certain it is that modern humans were the proximate cause of their dying out (through competition for resources and very likely direct warfare).

Recently, an archaeological discovery was made in Indonesia of a tiny hominid relative, Homo floresiensis (immediately nicknamed “hobbits” -- see the review of the initial findings by Carl Zimmer and more recent follow-up). It looks like these small folks lived as recently as 18,000 years ago! Again, we find a species of hominids contemporaneous with us. Again, modern humans likely played a role in their demise.

If through some circumstance one of these species had lived long enough to have been part of our more recent recorded history, there is unfortunately little doubt of what the outcome would have been. Given our grim history of racism, war and genocide with regard to our own species, there seems no question that our treatment of near-humans would have been even worse, had they continued to exist up to the present day.

But putting aside what we would have done and instead considering what one ought to do, I ask: should killing a Neanderthal or a hobbit be considered the equivalent of murdering a human in a contemporary moral system?

For humans, the natural origins of our moral impulses and our conceptions of how we ought to treat one another can be in harmony. Genetic science has shown that we modern Homo sapiens are a strikingly homogeneous bunch. Experts estimate we can all trace our origins to common ancestors (mitochondrial Eve, for instance) who lived a very brief 60,000 to 150,000 years ago.

This seems like an unqualified good and fortunate state of affairs. All of us are close relatives to each other. Therefore, as I argued in a recent post, our naturally grounded impulses to treat our own kin with kindness and generosity can readily come to the fore on behalf of all humans, as our global culture evolves.

Would we extend this circle to include near-humans, or is there a justification for “species-ism” in favoring human life?

Now, the ethical treatment of animals is already a great moral challenge of our time. Here, almost all agree that a moral system must include an imperative to treat animals well, but at the same time it is our undeniable inclination to value their lives less than those of fellow humans. Looking at the reasons for this, we begin with our ancestral need for animal products as tools for survival. Furthermore, it makes evolutionary sense that we lack natural impulses to treat genetically distant species with the care we show our close kin. But what ought we to do? Can an argument be made that the lives of animals are as valuable as our own? Can we come up with an objective argument that they are of inferior value? I have taken a stab at this and thought about a theory which places a value on the robustness and complexity of subjective experience (see this old post). But my confidence in such a model is tentative at this point.

I come away from these musings thinking we should feel grateful that we don’t currently face the moral dilemmas which would accompany sharing our planet with near-humans. I’m pretty sure we’re still not up to it.

One parting thought: What if we were faced with aliens with comparable or even greater intelligence than us? Could we recognize their worth, or would we fall back on our natural species-ism? Of course, this could be a moot consideration if they don’t appreciate our value and have bigger guns.


TheJew said...

I've thought about this question before.

The answer is that our moral treatment of others is conditional on their ability to understand and reflect that moral treatment back on ourselves.

To put it another way: Don't kid yourself, Steve. If a cow ever got the chance he'd eat you and everyone you care about!

If you could reason with the things, you could say "Stay within these bounds and everything is ok. If you come on our side, we will punish you. We will stay on our side. If we come on your side you may punish us." If this statement (or some analogous statement) could be understood, we can modify their behavior without resorting to treating them like objects.

Steve said...

Thanks. I think your idea essentially puts the value on their intelligence. I'll think about that. My first impulse, though, is to consider that they may have feelings (esp. of pain) which exceed their cognitive skill, and we would want to consider that, too.

Tom C said...

I'd say you have to distinguish between being a moral agent (i.e., someone who can be held accountable for their actions) and a moral object (someone whose feelings and interests must be taken into account). It seems reasonable to suppose that moral agency depends on how far a Neanderthal (or whatever) can make real choices, whereas it seems a reasonable idea that being a moral object just requires the having of real experiences.
We don't know, but it must be a reasonable guess that Neanderthals were pretty close to us on both counts (in fact, didn't they have slightly larger brains?).
I doubt whether aliens with superbrains could be moral beings in any stronger sense than we already are - but to argue against myself, I do see animals as probably being at a somewhat lower point than us on some kind of moral continuum.

Steve said...

I guess you must be right about the need to treat moral agents differently. I still have residual discomfort with putting too much emphasis on cognitive skill vs. quality of experience. Another thought experiment might be to compare our moral stance toward a choice-making robot who we are assured cannot feel pain, and an animal without equivalent moral reasoning who (we are assured) does feel it.