---
dg-publish: true
created: 2024-07-01
modified: 2024-07-01
title: "#182 – Bob Fischer on Comparing the Welfare of Humans, Chickens, Pigs, Octopuses, Bees, and More"
source: snipd
---

2024-04-24 80,000 Hours Podcast - #182 – Bob Fischer on Comparing the Welfare of Humans, Chickens, Pigs, Octopuses, Bees, and More

@tags:: #lit✍/🎧podcast/highlights
@links::
@ref:: #182 – Bob Fischer on Comparing the Welfare of Humans, Chickens, Pigs, Octopuses, Bees, and More
@author:: 80,000 Hours Podcast

=this.file.name

Episode artwork of "#182 – Bob Fischer on Comparing the Welfare of Humans, Chickens, Pigs, Octopuses, Bees, and More"

Reference

Notes

Quote

(highlight:: What is a "Moral Weight"?
Transcript:
Speaker 1
So a moral weight is a way of converting, in theory anyway, some unit of interest into another unit of interest. So we think we've got ways of measuring human health and well-being. We think we've got ways of measuring animal well-being. The question is, how do we get those things on the same scale? A moral weight is, in theory, a function that takes the one and converts it into the other. So what the Moral Weight Project tries to do is say: given that we're trying to do the most good, we have to make comparisons between very different kinds of causes. Some of those causes we're used to comparing, when we're trying to figure out how to compare, you know, giving anti-malarial bed nets to kids versus providing clean drinking water for somebody else. We have ways of measuring those different projects and thinking about how much good they're doing. But then we want to say: here's how much good we do when we're distributing those anti-malarial bed nets, versus here's how much good we do when we get laying hens out of cages on a factory farm. Our minds might just experience some vertigo, and we might not have any idea how to proceed. So what the Moral Weight Project tries to do is provide tools for comparing these very different kinds of causes.)
- Time 0:04:15
-
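
A minimal sketch of the idea in code (all weights and unit counts here are hypothetical placeholders, not the Moral Weight Project's actual outputs): a moral weight acts as a conversion factor that puts species-specific welfare units onto one common, human-equivalent scale.

```python
# Hypothetical moral weights: conversion factors from species-specific
# welfare units onto a common human-equivalent scale.
MORAL_WEIGHTS = {
    "human": 1.0,
    "chicken": 0.33,  # placeholder, echoing the estimate discussed later
    "shrimp": 0.03,   # placeholder
}

def human_equivalent(species: str, welfare_units: float) -> float:
    """Convert a species' welfare units into human-equivalent units."""
    return MORAL_WEIGHTS[species] * welfare_units

# Two very different causes now sit on one scale:
bed_nets  = human_equivalent("human", 1_000)     # e.g. DALY-like units averted
cage_free = human_equivalent("chicken", 50_000)  # chicken-welfare units gained
print(bed_nets, cage_free)  # 1000.0 vs 16500.0
```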

Quote

(highlight:: Water + Bucket Analogy for Thinking About Animals' Capacity for Welfare
Transcript:
Speaker 1
And we can think about individuals as like buckets, and welfare as like water. And so we can say, look, maybe some kinds of organisms are very large buckets, right? They can contain lots of welfare. And maybe some individuals are very tiny buckets, right? Like the smallest of all possible buckets, and can only contain the smallest fraction of welfare. And then what we can do is say, well, your capacity for welfare is essentially the size of that bucket. And so if we are willing to think about welfare that way, which is controversial, not everybody wants to do that, but if you're willing to think about welfare that way, which many people are, then you can say, all right, let's compare the sizes of these buckets. And that's going to give us a basic way of thinking about how much welfare is at stake in different individuals.)
- Time 0:06:10
-

Quote

(highlight:: What is "Welfare Range" in Species Comparison?
Transcript:
Speaker 2
And then in figuring out how big these buckets are for different species, you've got this concept of a welfare range. Can you talk me through that?
Speaker 1
Sure. So think of this as the difference between looking at the whole life of the organism versus how things are going for an organism at a moment. In other words, what are the possibilities over the course of the lifespan versus what are the possibilities at a time slice? And what's happening here is, when we are trying to make comparisons across species, what we're often doing is saying not, you know, chicken versus human. We're comparing X number of life years of benefit to a human versus X number of life years of benefit to chickens. And to do that, we want to isolate just the momentary thing and then spread that across the life year, right? So rather than factoring in lifespan. So a welfare range is just how well things can go for you at a time versus how badly things can go for you at a time. So the peak and the valley.)
- Time 0:10:19
-
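
The comparison being described reduces to one line of arithmetic: hold the number of life years fixed across species and multiply by each species' per-moment welfare range. A hedged sketch, with placeholder range values:

```python
def welfare_at_stake(welfare_range: float, life_years: float) -> float:
    """Per-moment welfare range (the peak-to-valley span at a time slice)
    spread over a fixed number of life years; lifespan itself is factored out
    by comparing equal numbers of life years across species."""
    return welfare_range * life_years

# X = 10 life years of benefit, compared across species (placeholder ranges):
human_benefit   = welfare_at_stake(welfare_range=1.0,  life_years=10)  # 10.0
chicken_benefit = welfare_at_stake(welfare_range=0.33, life_years=10)  # 3.3
```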

Quote

(highlight:: Why Neuron Count May Not Be a Good Proxy for Suffering Capacity
Transcript:
Speaker 1
Like, Richard Dawkins actually has this line at some point where he says: oh, well, maybe simpler organisms actually need stronger pain signals, because they don't learn as much as we do and they don't remember all these facts, and so they need big alarm bells to keep them away from fitness-reducing threats. And so it's always possible that you have a complete inversion of the relationship that people imagine, and you want to make sure that your model captures that.)
- Time 0:23:20
-
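
One way to "make sure your model captures that" is to treat the direction of the neuron-count relationship as uncertain rather than hard-coding it. A speculative sketch (the mixture weights and both functional forms are invented for illustration, not the project's model):

```python
def pain_intensity(neuron_ratio: float, p_scales_with_neurons: float = 0.6) -> float:
    """Mixture over two hypotheses about relative pain-signal intensity.

    neuron_ratio: organism's neuron count relative to a human's.
    p_scales_with_neurons: credence that intensity scales with complexity;
    the remainder goes to the Dawkins-style inversion (simpler organisms
    need louder alarm bells).
    """
    scales_up = neuron_ratio        # more neurons -> stronger signal
    inverted  = 1.0 / neuron_ratio  # fewer neurons -> louder alarm bells
    p = p_scales_with_neurons
    return p * scales_up + (1 - p) * inverted

print(pain_intensity(0.01))  # the inversion hypothesis dominates here: ~40.0
```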

Quote

(highlight:: Welfare Range of Chickens vs. Humans
Transcript:
Speaker 1
So what we conclude is roughly that the welfare range of chickens is about a third of that of humans. So let me quickly jump in and say what that means, because it sounds crazy. But what's going on there is we're asking the question: how intense is the pleasure that a chicken's getting from ordinary sorts of experiences, on average, versus how intense are the pains? And we're guessing, hey, look, maybe it's about a third as intense, a third as painful. So we're not saying one human equals three chickens. We're not saying that we could do some sort of straightforward calculus like that. Instead, what we're saying is: when you think about the relative intensities of pleasures and pains, maybe that's roughly the difference that we're dealing with.)
- Time 0:33:06
-
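
To make the "not one human equals three chickens" point concrete, here's a hypothetical worked example: the one-third scales the intensity of momentary experience, and only then do counts and durations enter. All numbers besides the one-third are made up.

```python
CHICKEN_WELFARE_RANGE = 1 / 3  # relative to humans, per the estimate above

def suffering(n_individuals: int, hours: float, intensity: float,
              welfare_range: float) -> float:
    """Human-equivalent suffering: count x duration x intensity x range."""
    return n_individuals * hours * intensity * welfare_range

# An hour of comparably intense pain counts about a third as much for one
# chicken as for one human...
print(suffering(1, 1.0, 0.8, CHICKEN_WELFARE_RANGE))  # ~0.27
print(suffering(1, 1.0, 0.8, 1.0))                    # 0.80
# ...but scaled by billions of affected chickens, the totals can still dominate.
```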

Quote

(highlight:: Separating Subjective Experience and Memory Encoding
Summary:
The rate of subjective experience may differ from the rate of memory encoding where remembered experiences could feel longer despite being encoded faster.
This concept presents a controversial and uncertain hypothesis, leading to discussions on empirical and philosophical assumptions.
Transcript:
Speaker 1
Well, so first of all, great. I mean, well, not great. I wish you hadn't had the car accident, but secondly, you know, great in the sense that... yeah. So we do want to separate the question of the rate of subjective experience from the rate of memory encoding, and then the nature of memory later on. Right. So it could turn out that the remembered experience is encoded in a way where it still feels like it took three seconds retrospectively, but in the moment you are having more units of subjective experience per unit of objective time. Like, those things are totally compatible. That being said, it's a highly controversial hypothesis, really unclear. You know, there's just lots to say here about all of the empirical assumptions that are being smuggled in and how plausible those are, and the philosophical assumptions that are being smuggled in. Andreas Mogensen at GPI has a new paper about this.)
- Time 1:13:33
-

Quote

(highlight:: Assessing Rate of Subjective Experience Based On Visual Sample Rate (Fan Speed Example)
Transcript:
Speaker 1
Let's think about looking at the blades of a fan. There's a rate at which they spin where you still see individual fan blades. Then there's a rate at which they spin where you stop seeing individual fan blades. It's just a blur, right? Yeah, I like that. Nice. And the thought here is that different organisms might be better able to get that information, where the blades still look like individual fan blades at higher speeds, right? And that looks like a difference in the sample rate, right, their visual information sample rate. And now, do we know that their visual information sample rate somehow demonstrates something about the clock speed of consciousness? No, we do not. But it's a reasonable hypothesis. It's a possibility. And so whatever credence you assigned to that possibility, you can then say: well, given these differences across organisms, you can apply some discount to that, and then use it as a way of saying, well, maybe you get this kind of variation in the subjective experience of time, or rather the rate of subjective experience.)
- Time 1:17:16
-
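
A hedged sketch of the credence-weighted discount being described: blend the rate-adjusted and unadjusted welfare estimates by how much credence you give the hypothesis that visual sample rate tracks subjective time at all. The parameter values are hypothetical.

```python
def rate_adjusted_welfare(base_welfare: float,
                          relative_sample_rate: float,
                          credence_rate_tracks_experience: float) -> float:
    """Blend rate-adjusted and unadjusted estimates by credence.

    relative_sample_rate: animal's visual sample rate / human's.
    credence_rate_tracks_experience: probability that sample rate says
    anything about the clock speed of consciousness.
    """
    p = credence_rate_tracks_experience
    return base_welfare * (p * relative_sample_rate + (1 - p) * 1.0)

# A bird that still resolves fan blades at twice the human rate, with a
# 20% credence that this tracks the rate of subjective experience:
print(rate_adjusted_welfare(base_welfare=1.0,
                            relative_sample_rate=2.0,
                            credence_rate_tracks_experience=0.2))  # 1.2
```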

Quote

(highlight:: Types of Risk Aversion in Altruistic Decision-Making
Transcript:
Speaker 1
Risk aversion comes in different types. One very common form says: what I'm really worried about is the worst case scenario. And so, you know, there are people who get to the airport four hours before their flight because they absolutely do not want to miss their flight. And then there are people like me who skate in 45 minutes beforehand, right? OK, so I am less risk averse than you might be, Luisa, because I am willing to sort of say, thank God for PreCheck, we're going to make it through, right? But what's going on there is a difference not in terms of the probabilities, necessarily, that we're assigning to being able to make it to the flight, but rather what our attitudes are toward those probabilities, and perhaps our attitudes toward the value of the different outcomes: not just how bad you think it would be, but how much you care about that particular bad outcome. So if you are risk averse in the sense that you want to avoid worst case outcomes, you're going to rank your options differently than you would if you were a straight expected value maximizer. And part of the case for focusing on the long-term future and saying we want to avoid existential risks is precisely that you're worried about worst case outcomes: it seems really bad if we all die, right, and we don't have more of humanity. But likewise, it'd be really bad if shrimp were sentient and we were putting them in these, you know, low water quality environments where they cannibalize each other, and where we're ripping off their eyestalks so that they mature faster, and where, you know, they are desperately trying to escape when harvested and they're suffocating, et cetera. That would be really bad, if we were doing all those things to shrimp and they were sentient. So if you are an avoid-the-worst-case person, then when you're comparing something like helping pigs and helping shrimp, actually you're going to be much more inclined to think you should help the shrimp, even if your probability of sentience for them is much lower, even if the welfare range estimate that you have for them is much smaller.
Speaker 2
Awesome, that was really clear to me. And I like the term "worst case person." So that's one type of risk aversion. And I think you've come up with three types. So maybe let's talk through all of them. What's another one?
Speaker 1
So another one would be if you are worried about futility, about not doing anything at all, right? You care about difference making. And if you think about the motivation for doing the most good, or at least one of the arguments that got me interested in trying to do the most good, you think about Peter Singer's classic thought experiment, and he's saying: hey, look, if you can prevent something really bad from happening and it's not really going to cost you anything, then you really should. And so when you're thinking about this child drowning in the pond, like, of course you've got to go save the child. And likewise, because you can help these faraway starving strangers, you should help them. Well, what's the moral intuition that's driving that? Fundamentally, it's this thought that you can do this. Like, you actually have this power. It's not like, hey, you're gambling, you've got a point zero zero one percent chance of making the world better. It's like, no, no, no: you can pluck that kid out of the pond, and you can, via your donation, prevent this kid from starving over there in some faraway place. And so concern for difference making is going to change what we focus on, because it's essentially going to penalize options where we think the odds of doing good are worse, and it's also going to penalize options where we might end up doing a bad thing, right, where there's a downside risk. So shrimp look good on worst case scenario risk aversion. They look worse than they would under expected value maximization if you're a difference-making-focused person, because you could be throwing your money away, right? Because maybe they're just not sentient at all.
Speaker 2
Yeah, it's funny, because both of those resonate with me very strongly, which is annoying. It makes things a lot less clear cut. But I'm like, yeah, I do want to prevent worst case outcomes. I don't want to tolerate giving really any chance at all to shrimp being sentient and me not having done anything about the horrible ways in which they're raised. And then another part of me is like: there are people dying of malaria, and we know how to make sure they aren't, and we know how bad it is to have malaria and to lose a loved one to malaria. And I'm not going to spend my money making sure that shrimp, who are maybe plausibly sentient, but are weird and small and nearly impossible to study, are having slightly better welfare because their tank is a bit bigger or a bit less muddy or something. Anyways, I imagine lots of people will resonate with both of these, so we'll have to come back to what we do when we have that kind of conflict. But first, what's the third kind of risk aversion that is worth talking about here?
Speaker 1
Sure, the third kind is called ambiguity aversion. And that's essentially where you've got hard-to-specify probability ranges, where you're really uncertain about what probability to assign to something. And so this differentiates things affecting humans and things affecting animals, and, of course, among the animals. So, you know, I don't know about you, but when it comes to the probability I assign to pigs being moral subjects, to pigs having moral standing: pretty high. Like, I'm more confident that pigs matter than I am of lots of other things. Maybe I'm at like point eight, point nine. I really think that they are going to be morally important entities. What do I think about, you know, shrimp, just to go back to the same example? Well, I mean, I'm more confident than I was, but still not super confident. And it's not just that I'm less confident, but I'm also less confident about what probability to assign. So the range of possible values around pigs for me is pretty narrow: maybe the low end is point seven and the upper end is point nine nine or something. But for shrimp, it's like, well, maybe they matter as much as pigs, or maybe they don't matter at all. And it's all over the map, right? Big, wide range; very uncertain about what to do.)
- Time 1:41:23
-
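
A toy model of how the three attitudes re-rank the same two options. The weighting functions, credence ranges, and stakes are all invented for illustration; this is not Fischer's formal model, just the direction each attitude pushes.

```python
options = {
    # (low, high) credence of mattering, and welfare at stake if it matters
    "pigs":   {"p": (0.70, 0.99), "stake": 100},
    "shrimp": {"p": (0.05, 0.70), "stake": 10_000},
}

def midpoint(p):                         # point estimate from the range
    return sum(p) / 2

def expected_value(o):
    return midpoint(o["p"]) * o["stake"]

def worst_case_averse(o, caution=3.0):
    # Overweight the scenario where the harm is real and we ignored it:
    # low probabilities get boosted the most, favoring shrimp.
    return midpoint(o["p"]) ** (1 / caution) * o["stake"]

def difference_making(o, futility_penalty=2.0):
    # Penalize options with worse odds of doing any good at all.
    return midpoint(o["p"]) ** futility_penalty * o["stake"]

def ambiguity_averse(o):
    # Lean toward the pessimistic end of a wide credence range.
    low, high = o["p"]
    return (0.7 * low + 0.3 * high) * o["stake"]

for name, o in options.items():
    print(name, expected_value(o), worst_case_averse(o),
          difference_making(o), ambiguity_averse(o))
```

With harsher penalty exponents or smaller stakes, the difference-making and ambiguity adjustments can push the shrimp option below the others entirely, which is the re-ranking described in the next highlight.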

Quote

(highlight:: How Risk Aversion Affects Decisions Around Whether to Focus on Shrimp, Chickens, or Humans
Transcript:
Speaker 1
If you're just comparing humans, chickens and shrimp and you're a straight expected value maximizer, well, of course shrimp win, because there are trillions of them. And so even given very small moral weights for shrimp, they just dominate. Now, suppose that you are worst case scenario risk averse. Well, now the case for shrimp looks even better than it did when you were a straight expected value maximizer. But then suppose you go in for one of those other two forms of risk aversion: you're worried about difference making, or you don't really like ambiguity. Well, that's going to penalize shrimp, and maybe quite a lot, to the point where the human causes look a lot better than the shrimp causes. The really interesting thing is that chickens actually look really good across the various types of risk aversion. So if you're a straight expected value maximizer, the shrimp beat the chickens. But once you've got one of those other kinds of risk aversion in play, where you're worried about difference making or ambiguity, chickens actually look really good and they even beat the human causes. And the reason for that is really simple. It's just that there are so many chickens and we think they probably matter a fair amount.)
- Time 1:54:16
-
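
A back-of-envelope version of the "trillions of shrimp" point under straight expected value. The population counts, probabilities, and welfare ranges are rough placeholders, not the project's outputs.

```python
causes = {
    #            population, p(matters), welfare range (vs. human = 1.0)
    "humans":   (8e9,        1.0,        1.0),
    "chickens": (7e10,       0.9,        0.33),
    "shrimp":   (4e14,       0.3,        0.03),  # farmed + wild-caught, very rough
}

for name, (n, p, wr) in causes.items():
    print(f"{name}: {n * p * wr:.2e} human-equivalent units at stake")
# humans:   8.00e+09
# chickens: 2.08e+10
# shrimp:   3.60e+12  <- sheer scale dominates straight expected value
```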

