It’s obviously possible to derive ought from is, even if we don’t yet know how it’s done. Oughts are inferences about world states, and world states are inferences from observations. Babies are born as barbarians, and learn first objects, then subjects, then oughts.
Even if you think morality is somehow latently encoded into every baby genetically, it was encoded there by the learning processes of biological or cultural evolution. Whether a genetic belief or a memetic one, it was ultimately inferred from observation.
This means morality is not arbitrary, any more than medicine is. Certain moral beliefs are more true than others, in the sense that they improve your long-term predictive accuracy more.
It also means morality is relative. The correct moral rules to follow depend on the being in question and the context in which they find themselves. But since that dependence is not arbitrary, there is always some way to translate between moral frames.
So this is a theory of moral relative realism, more or less. Just like mass or the passage of time, morality is very real and very much relative to an (epistemic) inertial reference frame.
An inertial reference frame is simply any frame where the rate of change of the parts does not change unless acted upon. An epistemic inertial reference frame is one where change in your beliefs is motion caused by time, and change-in-change of beliefs is caused by inference.
(Example, in case it’s helpful: I can believe the hand of the clock has continued moving with my eyes closed; that belief can continue “rotating” on its own without new inference from data or other beliefs.)
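The clock-hand example can be sketched as a toy model, treating a belief’s value like position, time like velocity, and inference like acceleration. This is only an illustration of the analogy; the class and numbers are invented for the sketch:

```python
from dataclasses import dataclass


@dataclass
class Belief:
    """Toy belief in an 'epistemic inertial frame': its value keeps
    drifting at a constant rate as time passes, and only inference
    (a new observation) changes that rate."""
    value: float     # current estimate, e.g. clock-hand angle in degrees
    velocity: float  # believed rate of change per time step

    def tick(self) -> None:
        # Time passing: the belief keeps "rotating" on its own,
        # with no new observation required.
        self.value += self.velocity

    def infer(self, observed_rate: float) -> None:
        # Inference: evidence updates the *rate* of change,
        # playing the role of acceleration.
        self.velocity = observed_rate


# Eyes closed: I still believe the second hand advances 6 degrees/sec.
clock_hand = Belief(value=0.0, velocity=6.0)
for _ in range(10):
    clock_hand.tick()
print(clock_hand.value)  # 60.0 -- ten seconds of believed rotation

# Opening my eyes and seeing the clock stopped is an inference:
# it changes the belief's velocity, not just its value.
clock_hand.infer(0.0)
clock_hand.tick()
print(clock_hand.value)  # still 60.0
```

The point of the sketch is the separation of roles: `tick` changes beliefs without evidence (inertia), while `infer` is the only thing that changes *how* they change.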
Ok so if morality is a belief (indirectly) from observation, and beliefs are relative to an epistemic reference frame, what kinds of evidence are they inferred *from*, and what are they beliefs *about*? Not sure, but a few things I notice that seem important:
Moral debates happen at many scales at once. The interpersonal, community, societal, and global are all common, but there is also a morality for e.g. employees. Morality is related to membership in groups.
Moral debates tend to happen either within the context of a group, or about which group frames are valid or more important. Within a frame they seem to be about the flourishing of the group. Between frames they seem to be about identity.
Reflective rules like the categorical imperative or the golden rule have a capacity to be universal in some sense, by telling you how to infer rules from a choice of group. They’re not much good at helping you pick group(s) tho.
Most moral dilemma thought experiments work by putting two important frames in tension with each other, in order to highlight the question of priority between them. The answer is always context dependent tho, relative priority between frames depends on 1000 details.
You wouldn’t expect a physics thought experiment to tell you much if you removed all info about the relative positions of the particles. You shouldn’t expect an ethics thought experiment to tell you much if you remove the groups or group-relative identities of the actors.
There is such a thing as moral progress, just as there is such a thing as physics progress. We can have relatively-better theories which predict relatively-higher/lower scale phenomena. And also we never get to “the Truth”, just better models.
@mathbot10 So you can’t have an anti-inductive system bc it will (deliberately) fail homeostasis, and thus shift to an energy level incompatible with its structural binding energies, and die.
@mathbot10 You also can’t have a system with (overly) false beliefs of morality or inference for the same reason, it will die bc it isn’t modeling the world well.