What is ethics? Is this ethical, why?

Oliver Hu
13 min read · Dec 25, 2020


A common challenge in both work and daily life is judging whether an action or a policy is ethical. Few things are deemed wholeheartedly ethical or unethical; most fall into a grey area. For example, theft is normally considered unethical, but what about a kid who steals necessary medicine from a pharmacy to save a sick member of his or her poor family? Is that ethical or not? Another example comes from the book *Being Mortal* by Atul Gawande: what is the most ethical way to relieve the pain of a family member with a serious illness? In a corporate environment, the question becomes "what is best for our customers?"

Unfortunately, you still have to make a decision at times in such cases. Different agents have different perspectives and contexts: as law enforcement, as a parent, as a decision maker in a corporation. The challenge becomes: how do you form a ranked set of principles to decide whether something is ethical, in a high-quality but quick manner? And equally important, how would you articulate your decision within the framework of your company's or your own values and culture?

I attended a tech talk on ethical engineering and design and found some leads there. I was overwhelmed by all the concepts and tools mentioned, but one takeaway was that this is an important question for me to contemplate, and that I should acquire a thinking toolbox for working through controversial issues. So I emailed the speaker afterwards to do a case study on how he would apply an ethical framework to analyze a controversial case; it was an informative and educational discussion.

The question I asked him was a controversial one whose ethics I had been trying to sort out:

How would you analyze our diversity hiring policy, which requires that a portion of final-round interview candidates come from underrepresented groups?

The speaker approached the problem fairly systematically. He shared some of his background and personal take, then shared his toolkit:

From a **deontological perspective**, hiring practices must be put in place in such a way that they create the type of environment we would like all companies to create. In the scenario you outlined, this means that by implementing a practice like this, we are saying this practice (changing hiring practices to ensure diverse candidates are given fair consideration) is what everyone should be doing. This is in line with the company's DIBs policies and practices, in my opinion.

From a **utilitarian perspective**, we look at which decisions benefit the most people. Considering the general underrepresentation of marginalized groups in our field, the systemic injustices imposed on these groups, issues like stereotype threat and impostor syndrome, and what we know about everything I said above, one can argue that a mandate on hiring from underrepresented groups into management positions will have significantly better utility outcomes than hiring within dominant groups, because the new hires from underrepresented groups will open the door for more people from those groups in the future, and give the company access to a new pool of possible talent which has previously been inaccessible.

Finally, from a **capability approach perspective** where we look at the capabilities granted to the persons acted upon, correcting for biased hiring practices by reserving seats at the candidate table for underrepresented groups grants them the necessary capabilities to be and do what they have reason to value *in spite of* systemic barriers.

He later sent me the framework behind this, which is based on classical moral philosophy: utilitarianism, deontology, and virtue ethics.

Deontological Ethics (Stanford Encyclopedia of Philosophy)

Consequentialism (Stanford Encyclopedia of Philosophy)

The Capability Approach (Stanford Encyclopedia of Philosophy)

Virtue Ethics (Stanford Encyclopedia of Philosophy)

The following notes are taken from the course [learning course around technology and design ethics], which I found much easier to understand than the detailed explanations in the Stanford Encyclopedia. At the bottom of this blog are the excerpts I extracted from the Stanford site; most of the content is interconnected and backed by a ton of examples, so I ended up copying more than half of it into my notes app…

What is Ethics

Ethics is **NOT** a list of dos and don'ts. In reality, ethics is a framework we use as a society to judge what is right and wrong.

Morals vs Ethics

Morals are personal judgements about what is right and wrong; they differ from person to person.

4 Approaches to ethics

There have been thousands of years of discussion over which is the "best" or "only" theory for judging whether something is ethical, but apparently there is no conclusion, which is why people feel ethics is an unapproachable topic. The approaches below may feel mutually exclusive, but they can be used together.

Capability Approach: modify capability

Allow the user to do and be what they value. The core purpose of every design is to modify the person who interacts with that design in some way. A social media app, for example, gives people the capability to publish content in a public space and share experiences online.
Capability modification

  • Granting.
  • Enabling.
  • Limiting.
  • Removing.

The ethics of the capabilities are measured by whether or not they allow the person to be and do what they have reason to value.
Focus on equitable and common goods.

Consequentialism/Utilitarianism: consequences

Whether the consequences of the capabilities would create better outcomes for everyone. "Everyone" includes the company, the primary user, and anyone else affected, both immediately and in the long term.
*Problems of consequentialism*
1. It narrowly defines who is relevant.
2. It measures only aggregate utility, ignoring how that utility is distributed.

Duty Ethics: Duty of care to users

Is this the decision you’d want every other person in the same situation to make? Does everyone making this same decision lead us into a world where we can all live and thrive?
Duty ethics looks at the overall reasoning behind why we do what we do, and at what precedents we are setting for our community.
**Worksheet**
1. Best practices. What already exists? Why not use best practices?
2. Expectations. What are the reasonable expectations?
3. Duties and responsibilities. The requirement to adhere to a standard of reasonable care while performing any acts that could foreseeably harm others.

Virtue Ethics: The effect our design has on us as creators.

The goodness and rightness of a decision is measured by what that decision does to the person making it. We judge our actions based on whether we are making decisions our ideal self would make.

**Vallor’s Technomoral Virtues**
1. honesty
2. self control
3. humility
4. justice
5. courage
6. empathy
7. care
8. civility
9. flexibility
10. perspective
11. magnanimity
12. wisdom

Headline test: are you comfortable with your decision being the headline news nationwide?

Notes on Deontological Ethics
[Deontological Ethics (Stanford Encyclopedia of Philosophy)](https://plato.stanford.edu/entries/ethics-deontological/)
For the essence of consequentialism is still present in such positions: an action would be right only insofar as it maximizes these Good-making states of affairs being caused to exist.
The two criticisms pertinent here are that consequentialism is, on the one hand, overly demanding, and, on the other hand, that it is not demanding enough:
* For consequentialists, there is no realm of moral permissions, no realm of going beyond one’s moral duty (supererogation), no realm of moral indifference. All acts are seemingly either required or forbidden. And there also seems to be no space for the consequentialist in which to show partiality to one’s own projects or to one’s family, friends, and countrymen, leading some critics of consequentialism to deem it a profoundly alienating and perhaps self-effacing moral theory.
* On the other hand, consequentialism is also criticized for what it seemingly permits. It seemingly demands (and thus, of course, permits) that in certain circumstances innocents be killed, beaten, lied to, or deprived of material goods to produce greater benefits for others. Consequences — and only consequences — can conceivably justify *any* kind of act, for it does not matter how harmful it is to some so long as it is more beneficial to others.

**A well-worn example** of this **over-permissiveness of consequentialism** is a case standardly called Transplant. A surgeon has five patients dying of organ failure and one healthy patient whose organs can save the five. In the right circumstances, the surgeon will be permitted (and indeed required) by consequentialism to kill the healthy patient to obtain his organs, assuming there are no relevant consequences other than the saving of the five and the death of the one. Likewise, consequentialism will permit (in a case that we shall call Fat Man) that a fat man be pushed in front of a runaway trolley if his being crushed by the trolley will halt its advance towards five workers trapped on the track. We shall return to these examples later on.

The most familiar forms of deontology, and also the forms presenting the greatest contrast to consequentialism, hold that some choices cannot be justified by their effects: no matter how morally good their consequences, some choices are morally forbidden.
In this sense, for such deontologists, **the Right is said to have priority over the Good**. If an act is not in accord with the Right, it may not be undertaken, no matter the Good that it might produce (including even a Good consisting of acts in accordance with the Right).

### Agent-centered Deontological Theories
An agent-relative reason is an objective reason, just as are agent-neutral reasons; **an agent-relative reason is so-called because it is a reason relative to the agent whose reason it is; it need not (although it may) constitute a reason for anyone else.** Thus, an agent-relative *obligation* is an obligation for a particular agent to take or refrain from taking some action; and because it is agent-relative, the obligation does not necessarily give anyone else a reason to support that action. Each parent, for example, is commonly thought to have such special obligations to his/her child, obligations not shared by anyone else. Likewise, an agent-relative *permission* is a permission for some agent to do some act even though others may not be permitted to aid that agent in the doing of his permitted action. Each parent, to revert to the same example, is commonly thought to be permitted (at the least) to save his own child even at the cost of not saving two other children to whom he has no special relation.

At the heart of agent-centered theories (with their agent-relative reasons) is the idea of agency. The moral plausibility of agent-centered theories is rooted here. **The idea is that morality is intensely personal, in the sense that we are each enjoined to keep our own moral house in order**. Our categorical obligations are not to focus on how our actions cause or enable other agents to do evil; the focus of our categorical obligations is to **keep our own agency free of moral taint**.

Agent-centered theories famously divide between those that emphasize the **role of intention** or other **mental states** in constituting the morally important **kind of agency**, and those that emphasize the **actions of agents** as playing such a role.

On the first of these three agent-relative views, it is most commonly asserted that it is our intended ends and intended means that most crucially define our agency. If we intend something bad as an end, or even as a means to some more beneficent end, we are said to have “set ourselves at evil,” something we are categorically forbidden to do.

Three items usefully contrasted with such intentions are *belief*, *risk*, and *cause*.
If we predict that an act of ours will result in evil, such prediction is a **cognitive state** (of belief); it is **not a conative state of intention** to bring about such a result, either as an end in itself or as a means to some other end. In this case, our agency is involved only to the extent that we have shown ourselves as being willing to tolerate evil results flowing from our acts; but we have not set out to achieve such evil by our acts.
Likewise, a risking and/or causing of some evil result is distinct from any intention to achieve it. We can intend such a result, and we can even execute such an intention so that it becomes a trying, without in fact either causing or even risking it. Also, we can cause or risk such results without intending them.

For example, we can intend to kill and even try to kill someone without killing him; and we can kill him without intending or trying to kill him, as when we kill accidentally. Intending thus does not collapse into risking, causing, or predicting; and on the version of agent-centered deontology here considered, it is intending (or perhaps trying) alone that marks the involvement of our agency in a way so as to bring agent-centered obligations and permissions into play

The second kind of agent-centered deontology is one **focused on actions, not mental states**. Such a view can concede that all human actions must originate with some kind of mental state, often styled a volition or a willing; such a view can even concede that volitions or willings are an intention of a certain kind.
First, *causings* of evils like deaths of innocents are commonly distinguished from *omissions* to prevent such deaths.

Second, *causings* are distinguished from *allowings*. In a narrow sense of the word we will here stipulate, one *allows* a death to occur when: (1) one’s action merely removes a defense the victim otherwise would have had against death; and (2) such removal returns the victim to some morally appropriate baseline (Kamm 1994, 1996; McMahan 2003). **Thus, mercy-killings, or euthanasia, are outside of our deontological obligations (and thus eligible for justification by good consequences)** so long as one’s act: (1) only removes a defense against death that the agent herself had earlier provided, such as disconnecting medical equipment that is keeping the patient alive when that disconnecting is done by the medical personnel that attached the patient to the equipment originally; and (2) the equipment could justifiably have been hooked up to another patient, where it could do some good, had the doctors known at the time of connection what they know at the time of disconnection.

Third, one is said not to *cause* an evil such as a death when one’s acts merely *enable* (or aid) some other agent to cause such evil (Hart and Honore 1985). Thus, one is not categorically forbidden to drive the terrorists to where they can kill the policeman (if the alternative is death of one’s family), even though one would be categorically forbidden to kill the policeman oneself (even where the alternative is death of one’s family) (Moore 2008). Nor is one categorically forbidden to select which of a group of villagers shall be unjustly executed by another who is pursuing his own purposes (Williams 1973).

Fourth, one is said not to *cause* an evil such as a death when one merely redirects a presently existing threat to many so that it now threatens only one (or a few) (Thomson 1985). **In the time-honored example of the run-away trolley (Trolley), one may turn a trolley so that it runs over one trapped workman so as to save five workmen trapped on the other track, even though it is not permissible for an agent to have initiated the movement of the trolley towards the one to save five** (Foot 1967; Thomson 1985).

Fifth, our agency is said not to be involved in mere *accelerations* of evils about to happen anyway, as opposed to *causing* such evils by doing acts necessary for such evils to occur (G. Williams 1961; Brody 1996). Thus, when a victim is about to fall to his death anyway, dragging a rescuer with him too, the rescuer may cut the rope connecting them. Rescuer is accelerating, but not causing, the death that was about to occur anyway.

All of these last five distinctions have been suggested to be part and parcel of another centuries-old Catholic doctrine, that of the doctrine of doing and allowing (see the entry on [doing vs. allowing harm](https://plato.stanford.edu/entries/doing-allowing/) ) (Moore 2008; Kamm 1994; Foot 1967; Quinn 1989). According to this doctrine, one may not *cause* death, for that would be a killing, a “doing;” but one may fail to prevent death, allow (in the narrow sense) death to occur, enable another to cause death, redirect a life-threatening item from many to one, or accelerate a death about to happen anyway, if good enough consequences are in the offing. As with the Doctrine of Double Effect, how plausible one finds these applications of the doctrine of doing and allowing will determine how plausible one finds this cause-based view of human agency.

**A third kind of agent-centered deontology can be obtained by simply conjoining the other two agent-centered views (Hurd 1994). This view would be that agency in the relevant sense requires both intending and causing (i.e., acting) (Moore 2008)**. On this view, our agent-relative obligations do not focus on causings or intentions separately; rather, the content of such obligations is focused on *intended causings*. For example, our deontological obligation with respect to human life is neither an obligation not to kill nor an obligation not to intend to kill; rather, **it is an obligation not to *murder*, that is, to kill in execution of an intention to kill**.

### Patient-Centered Deontological Theories
A second group of deontological moral theories can be classified, as *patient*-centered, as distinguished from the *agent*-centered version of deontology just considered. These theories are **rights-based rather than duty-based**.
All patient-centered deontological theories are properly characterized as theories premised on people’s rights. An illustrative version posits, as its core right, the right against being used only as means for producing good consequences without one’s consent. More specifically, this version of patient-centered deontological theories proscribes the *using* of another’s body, labor, and talent without the latter’s consent.

The injunction against using arguably accounts for these contrasting reactions. After all, in each example, one life is sacrificed to save five. Yet there appears to be a difference in the means through which the net four lives are saved. In Transplant (and Fat Man), the doomed person is used to benefit the others. They could not be saved in the absence of his body. In Trolley, on the other hand, the doomed victim **is not used**. The workers would be saved whether or not he is present on the second track.

Patient-centered deontologists handle differently other stock examples of the agent-centered deontologist. Take the acceleration cases as an example. When all will die in a lifeboat unless one is killed and eaten; when Siamese twins are conjoined such that both will die unless the organs of one are given to the other via an operation that kills the first; when all of a group of soldiers will die unless the body of one is used to hold down the enemy barbed wire, allowing the rest to save themselves; when a group of villagers will all be shot by a blood-thirsty tyrant unless they select one of their number to slake the tyrant’s lust for death — in all such cases, the causing/accelerating-distinguishing agent-centered deontologists would permit the killing but the usings-focused patient-centered deontologist would not. (For the latter, all killings are merely accelerations of death.)
