
Summary: Desire theories of welfare hold that our welfare consists in the degrees to which our desires or preferences are satisfied or frustrated. Developing a satisfactory desire theory that counts the suffering of non-reflective animals but uses reflective preferences in humans, while making continuous tradeoffs between reflective preferences in reflective humans and revealed preferences in non-reflective animals, raises some issues and subtleties. In this post, I briefly describe possible solutions and discuss how to make such tradeoffs in practice.

(Disclaimer: This is a short post I wrote in about 1-2 hours based on accumulated background knowledge, but no targeted literature review on the specific topic, so plausibly contains errors or oversights.)

 

Weighing reflective and revealed preferences in theory

Reflective preferences/desires are preferences that individuals endorse after thought. When you ask someone their preferences about an issue, you generally expect to get their reflective preferences. Only beings capable of a certain degree of thought have reflective preferences, and this probably excludes many nonhuman animals (at least without training with language).

Revealed preferences are the preferences of individuals that we infer from their actual choices in real-world situations. Basically all living animals have revealed preferences, and even non-conscious beings may have them, although I assume only the preferences of conscious beings actually matter.
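
To make the inference concrete, here's a minimal sketch (my illustration, not something the post commits to) of one standard way to estimate revealed-preference weights: under a logit choice model, the log-odds of choosing one option over another estimates the difference in the chooser's underlying weights on them. The scenario and numbers are hypothetical:

```python
# A minimal sketch, assuming a standard logit choice model: the log-odds
# of picking option A over option B estimates weight(A) - weight(B).
import math

def revealed_weight_gap(times_chose_a, times_chose_b):
    """Estimate weight(A) - weight(B) from repeated binary choices."""
    p_a = times_chose_a / (times_chose_a + times_chose_b)
    return math.log(p_a / (1 - p_a))

# e.g. an animal picks an enriched pen over a barren one 80 times out of 100:
print(revealed_weight_gap(80, 20))  # ~1.39: a sizeable revealed preference
```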

If only reflective preferences count ethically, then the interests of many nonhuman animals (and plausibly some humans with limited cognitive capacities) capable of suffering wouldn't matter in themselves at all. If reflective preferences lexically dominate revealed preferences, i.e. reflective preferences are always prioritized over revealed preferences whenever they disagree, then the result is practically the same, and we may as well ignore non-reflective beings.

However, if we instead allow continuous tradeoffs between reflective preferences and revealed preferences, optionally ignoring revealed preferences in an individual when their reflective preferences are available, then we can get continuous tradeoffs between human and nonhuman animal preferences. I'd guess this could be justified by an account on which both revealed and reflective preferences are measurements of the same true underlying weights of desires, with reflective preferences just happening to be more accurate in individuals capable of reflection. Alternatively, a moral anti-realist or moral pluralist might simply be inclined to make such continuous tradeoffs, without any need for an underlying moral construct that explains both in a unified manner.
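
To illustrate the "measures of true underlying weights" account, here's a minimal sketch, entirely my own formalization with made-up variances rather than anything the post specifies: treat revealed and reflective preferences as noisy measurements of one latent desire weight, combine them by inverse-variance weighting when both exist, and fall back on revealed preferences alone otherwise. Reflective reports get more influence because they're assumed more accurate, but they never lexically dominate:

```python
# A minimal sketch, assuming revealed and reflective preferences are both
# noisy measurements of one latent desire weight. The variances below are
# illustrative assumptions, not estimates.

def estimate_true_weight(revealed, reflective=None,
                         revealed_var=1.0, reflective_var=0.25):
    """Combine measurements by inverse-variance (precision) weighting."""
    if reflective is None:
        return revealed  # non-reflective beings: revealed data is all we have
    w_rev, w_ref = 1.0 / revealed_var, 1.0 / reflective_var
    return (w_rev * revealed + w_ref * reflective) / (w_rev + w_ref)

# A human whose behaviour suggests weight 0.3 but who endorses 0.8 on reflection:
print(estimate_true_weight(0.3, 0.8))  # 0.7, pulled toward the reflective report
# An animal with only revealed preferences:
print(estimate_true_weight(0.5))       # 0.5, taken at face value, not discounted
```

On this picture, a non-reflective being's weights aren't worth less; they're just measured with one instrument instead of two.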

(Edited to add) Reflective and revealed preferences might come from different kinds of conscious evaluations or judgements of circumstances: life satisfaction is a reflective evaluation, while revealed preferences may (often but not always) be motivated by pleasure and suffering or result from reinforcement with pleasure and suffering, and pleasure and suffering are themselves or involve "felt evaluations" or "felt judgements". Suffering involves judging circumstances negatively, while pleasure involves judging them positively, where these judgements are felt, not reasoned. The distinction between reflective judgements and felt ones can be blurry, too, because reflective judgements will, I think, necessarily be based at least in part on evaluative impressions or intuitions, themselves felt judgements, although not necessarily hedonistic in nature. I see reflective preferences as coming from pulling out felt judgements, reasoning about them, and weighing them and inferences based on them against one another. Without felt judgements to start, there's nothing to reason about and weigh together to come to any overall reflective judgement other than indifference. In some cases, when there's no reflection to do, a reflective judgement just is a felt judgement. So, it would be odd to discount nonhuman animals' felt judgements, including their pleasure and suffering.

(Another approach could be to consider what animals' reflective preferences would be were they capable of reflection, and deal only with idealized reflective preferences (credits to M.B.), but I'll set that possibility aside here.)

The next section assumes we can make such continuous tradeoffs between reflective and revealed preferences, and discusses how to actually do so between humans and nonhuman animals, by first isolating the reflective desire-based weights of physical pain in humans.

 

Weighing preferences in practice

Physical pain typically also results in, or is otherwise associated with, functional impairments that limit activities. People tend to avoid activities that cause them physical pain, even if those activities are important for satisfying desires that, on reflection, they actually think are more important than avoiding the pain. Or the cause of their physical pain, like an injury, may limit their capacities and activities directly, not through the pain itself.

So, in considering an intervention that reduces physical pain and measuring its effects on life satisfaction, QALYs, DALYs or some other reflective desire-based measure, we'd be capturing not only the reduction in the desire-based harm of the suffering from the pain itself, but also the effects of allowing people to pursue activities they otherwise wouldn't, and the effects of both the pain reduction and the functional improvements on overall long-term mood, which further affects desire satisfaction.

However, there's lots of data on the EQ-5D dimensions of health-related welfare, and we could estimate the effects of the pain/discomfort dimension on life satisfaction, QALYs or DALYs, holding constant the other EQ-5D dimensions of mobility, self-care and usual activities (either including or excluding anxiety/depression) and baseline demographic info. There may be practical issues in actually doing so with EQ-5D data, say because the data is not sufficiently precise about the intensities, frequencies and durations of suffering, but this illustrates what we could do: just control for other factors. This should give us a desire-based weight for just the suffering pain causes in humans, not (or not primarily) through its effects in limiting activities or frustrating other desires.
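
Here's a minimal sketch of the "control for other factors" approach, using synthetic data since I haven't run this on real EQ-5D responses; with real data, the coefficient on the pain/discomfort level would approximate pain's effect on the reflective measure net of functional limitations:

```python
# A minimal sketch, assuming EQ-5D-5L-style responses (levels 1 = no problem
# to 5 = extreme) and a 0-10 life satisfaction outcome. The data is synthetic
# and the coefficients are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "pain": rng.integers(1, 6, n),
    "mobility": rng.integers(1, 6, n),
    "self_care": rng.integers(1, 6, n),
    "age": rng.integers(18, 90, n),
})
# Activity limitation worsens with pain (the pathway we want to hold fixed):
df["usual_activities"] = np.clip(df["pain"] + rng.integers(-1, 2, n), 1, 5)
# Pretend pain lowers life satisfaction both directly and via activity limits:
df["life_satisfaction"] = (
    8.0 - 0.5 * df["pain"] - 0.4 * df["usual_activities"]
    - 0.2 * df["mobility"] + rng.normal(0, 1, n)
)

naive = smf.ols("life_satisfaction ~ pain + age", data=df).fit()
controlled = smf.ols(
    "life_satisfaction ~ pain + mobility + self_care + usual_activities + age",
    data=df,
).fit()
print(naive.params["pain"])       # more negative: pain plus activity limitation
print(controlled.params["pain"])  # ~ -0.5: closer to the effect of pain itself
```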

Then, fixing another animal species, with

  1. a multiplier between humans' average absolute/cardinal reflective desire-based weight given to some of their own individual suffering and the other species' average absolute/cardinal revealed preference-based weight given to some of their own individual suffering,
  2. humans' reflective desire-based weights between various desires, and intensities, frequencies and durations of suffering in our own individual tradeoffs, and
  3. the other species' revealed preference-based weights between various desires, and intensities, frequencies and durations of suffering in their own individual tradeoffs,

we can make tradeoffs between revealed preferences in that other species and reflective desires in humans.

In other words, we have two separate welfare scales: (2) humans' reflective preferences to measure our own welfare, and (3) a nonhuman animal species' revealed preferences to measure their own welfare, and we use (1) to put them on a common scale and make tradeoffs between them.
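
As a toy numerical sketch of how (1), (2) and (3) fit together (all the weights below are made-up placeholders, and I'm assuming the shared anchor for the multiplier is a matched episode of suffering, which is one natural reading of (1)):

```python
# A minimal sketch with hypothetical numbers. Each species' weights are in
# its own units; the multiplier converts between units via a shared anchor.

# (2) humans' reflective desire-based weights, in "human units":
human_weights = {"severe_pain": -10.0, "mild_pain": -1.0, "good_meal": 2.0}

# (3) a species' revealed-preference weights, in its own behavioural units:
hen_weights = {"severe_pain": -6.0, "dustbathing": 1.5, "nest_access": 3.0}

# (1) the multiplier: human units per hen unit, pinned down by comparing
# both species' weights on the shared anchor (their own severe pain):
multiplier = human_weights["severe_pain"] / hen_weights["severe_pain"]

def hen_to_human_units(weight):
    return multiplier * weight

# Cross-species tradeoffs are then just arithmetic on one scale:
print(hen_to_human_units(hen_weights["nest_access"]))  # 5.0 "human units"
```

Everything downstream inherits the choice of anchor and multiplier, which is why so much rides on (1).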

(1) may unfortunately turn out to be fairly arbitrary, and there are issues to consider with the two envelopes problem for moral uncertainty.
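
To spell out the two envelopes worry with a standard toy example (mine, not the post's): suppose we have equal credence that a hen's suffering-weight is half or double a human's. Taking expectations in human units and in hen units each makes the other species look more important, so the verdict depends on the arbitrary choice of which scale to fix:

```python
# A standard toy illustration of the two envelopes problem for moral
# weights; the credences and multipliers are made up.
credences = [0.5, 0.5]
hen_per_human = [0.5, 2.0]  # hen weight measured in human units

e_hen_in_human_units = sum(p * m for p, m in zip(credences, hen_per_human))
e_human_in_hen_units = sum(p / m for p, m in zip(credences, hen_per_human))

print(e_hen_in_human_units)  # 1.25: in expectation, hens outweigh humans
print(e_human_in_hen_units)  # 1.25: in expectation, humans outweigh hens
```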

Comments

Can you clarify the difference between these two paragraphs?  They read the same to me, but I'm guessing I'm missing something here.

(1) i.e. reflective preferences are always prioritized over revealed preferences whenever they disagree, then the result is practically the same, and we may as well ignore non-reflective beings.

(2) However, if we instead allow continuous tradeoffs between reflective preferences and revealed preferences, optionally ignoring revealed preferences in an individual when their reflective preferences are available, then we can get continuous tradeoffs between human and nonhuman animal preferences.

The first is meant to apply generally, even across beings, so that humans' reflective preferences are always prioritized over nonhumans' revealed preferences. We can break ties using nonhumans' revealed preferences, but such ties will be so rare that it practically won't matter.

The second means that sometimes we will prioritize revealed preferences over reflective preferences, and so sometimes the revealed preferences of nonhuman animals over the reflective preferences of humans.

The "optionally" part just means that if a particular being has both revealed and reflective preferences about something, we could use those particular reflective preferences and ignore those particular revealed preferences, although others' revealed preferences may take priority. You could imagine that you have "true preferences", and both revealed and reflective preferences are ways to try to measure them, but reflective preferences are always more accurate than revealed preferences, not that they're more important. So, it's like saying we have two measures of some individuals' welfare (both revealed and reflective preferences) and we just prefer to use the strictly more accurate one (revealed preferences) when both are available, but it doesn't mean the welfare of those for whom only the measure that's less accurate in humans (revealed preferences) is available matters less.

Thank you so much. This is a concise synopsis of how neither net suffering/revealed preference nor reflective preference captures what seem to me to be optimal outcomes.

It seems self-evident to me, but actually articulating that suffering is not the whole point, but neither is sentience, is proving tricky and making me question if I'm actually just Wrong About It. (Also, I'm not very familiar with the philosophy in this area.)

I'm not sure I understand number 2 - are humans imposing their own reflective desires as a surrogate for the non-humans'? Or are humans attempting to interpret what the non-humans' reflective values would be, and imposing those? Saying reflective desires of humans made me initially interpret it as simply balancing human desires against non-human desires for cooperative living, but I no longer think that is the meaning you were intending to convey.

Thank you! :)

In 2, we'd ask a human about their own preferences concerning their own suffering and their other desires, and average over multiple humans.

The weight we give to nonhuman animals' desires relative to humans' depends on 3, their revealed preferences, or the weights nonhuman animals give to their own desires through their actual choices/behaviours, which we can observe, and 1, which tells you how to make tradeoffs between humans' reflective preferences and animals' revealed preferences by identifying something to normalize both scales by.

Basically, we have two separate welfare scales: humans' reflective preferences as a measure of their own welfare (2), and a nonhuman animal species' revealed preferences as a measure of their own welfare (3), and we use 1 to put them on a common scale.

I've made some edits to the post to try to make this a bit clearer.

Thanks, I get it now
