We live in the midst of many moral catastrophes, all at once. Technological and material abundance has given us the opportunity to create a truly flourishing world, and we have utterly squandered that opportunity. Will the same be true of the future?
Sadly, I think that's likely. A thread.
Today: Just consider our impact on farmed animals. Every year, tens of billions of factory-farmed land animals live lives of misery.
I think human wellbeing has improved dramatically over the last few centuries. But the suffering directly caused by animal farming - just one moral error! - is enough to outweigh much or all of those gains.
Maybe you don’t care about animal welfare. But then just think about extreme poverty, inequality, war, consumerism, or global restrictions on freedom. These all mean that the world falls far short of how good it could be.
In fact, I think most moral perspectives should regard the world today as involving a catastrophic loss of value compared to what we could have achieved.
None of this is to say that the world today is *bad*, in the sense of worse than nothing. It's just to say that the world, today, is only a fraction as good as it could be.
That’s today. But maybe the future will be very different? After superintelligence, we’ll have material and cognitive abundance, so everyone will be extremely rich and well-informed, and that’ll enable everyone to get most of what they want - and isn’t that enough?
I think not. I think it's likely that people in the future will squander most of the potential that could have been achieved - in part because there are many subtle ways in which things could go wrong.
Consider population ethics. Does an ideal future society have a small population with very high per-person wellbeing, or a very large population with lower per-person wellbeing? Do lives that have positive but low wellbeing make the world better, overall?
Population ethics is notoriously hard - in fact, there are “impossibility theorems” showing that no theory of population ethics can satisfy all of a number of obvious-seeming ethical axioms.
And different views will result in radically different visions of a near-best future. On the total view, the ideal future might involve vast numbers of beings, each with comparatively low welfare; a small population of high-welfare lives would miss out on almost all value. But on critical-level or variable-value views, the opposite could be true.
If future society gets its population ethics wrong (either because of misguided values, or because it just doesn’t care about population ethics either way), then it’s easy for it to lose out on almost all potential value.
And that’s just one way in which society could mess things up. Consider, also, mistakes future people could make around:
- Attitudes to digital beings
- Attitudes to the nature of wellbeing
- Allocation of space resources
- Happiness/suffering tradeoffs
- Banned goods
- Similarity/diversity tradeoffs
- Equality or inequality
- Discount rates
- Decision theory
- Views on the simulation
- Views on infinite value
- Reflective processes
For most of these issues, there is no “safe” option, where a great outcome is guaranteed on most reasonable moral perspectives. Future decision-makers will need to get a lot of things right.
And it seems like getting just one of these wrong could be sufficient for losing much or most achievable value. A single flaw could be enough to make future society far worse than it could have been.
That is, we can represent the value of the future as the product of many factors. If so, then a truly great future needs to do very well on essentially every one of those factors; doing badly on any one of them is sufficient to lose out on most value.
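As a toy illustration (the number of factors and the scores here are made up, just to show the multiplicative logic): if V = f1 × f2 × … × f12, then scoring a solid 0.9 on every factor still gives V ≈ 0.28, and scoring a perfect 1.0 on eleven factors but 0.05 on just one gives V = 0.05. A single flaw wipes out most of the achievable value.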
Needless to say, this is a big challenge. For much more discussion of these issues, see "No Easy Eutopia":
@William_Kiely "Conceivably *moral uncertainty alone* would be enough to prevent us from picking a future that is at least 0.1% optimal, even if we had full power to pick any concrete future we wanted." - not if we were fully able to reflect.