Showing posts with label ethics. Show all posts

Saturday, May 2, 2020

Zoom week 3 youtube link and Zoom details for Monday 5/4

Here's the video for last week. Our guest had technical difficulties and didn't end up making it in time, so it turned into just a little Q&A about filling out forms, ethics, moral inconsistency, and religion.


M.E. Thomas is inviting you to a scheduled Zoom meeting.

Topic: Sociopathworld Zoom Meeting 20 0504
Time: May 4, 2020 12:00 PM Pacific Time (US and Canada)

Join Zoom Meeting
https://us04web.zoom.us/j/71647075746

Meeting ID: 716 4707 5746

Monday, August 7, 2017

Trust as Explained by Game Theory

This was an interesting page/exercise sent to me via Twitter applying the concepts of game theory to the generation and maintenance of trust.

People no longer trust each other. Why? And how can we fix it? An interactive guide to the game theory of trust: http://ncase.me/trust/

It takes about 20-30 minutes to complete. At first I was turned off a little by the arbitrary constraints of the game, but they end up dealing with that issue later on -- so patience pays off! I've seen these models before, but it was interesting to apply them more directly to trust. Also, I hadn't seen the addition of mistakes/misunderstandings into the model before. That has already changed the way I view others and the world. For instance (this might not make sense until you do the exercise), a friend of mine recently had an Amazon package fail to be delivered. She assumed that some shady neighbors were stealing packages and was going to stop having any packages delivered, even though she has had some 20 successful deliveries so far. I encouraged her to keep trying until another package goes missing, just in case this was a mistake or other one-off occurrence that shouldn't necessarily change her game-playing strategy. It's a risky strategy maybe, but in her case she has no other convenient alternative for package delivery.

Without really remembering the game, I had essentially applied the "Diamond Rule" to it. I think this worked OK (and probably works better with actual people than with bots?), but it's true that when a mistake happens, this strategy can compound it into a global loss.

There's that phrase "fool me once, shame on you; fool me twice, shame on me." But this game suggests a more optimal rule once mistakes are factored in: "Fool me once, OK, I take it on the chin. Fool me twice, shame on you -- with punishment."
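For the curious, that echo effect -- and why forgiveness beats strict tit-for-tat once mistakes exist -- can be sketched in a few lines of Python. This is a rough sketch, not the actual ncase.me implementation: the payoffs (cooperating costs you 1 coin, the other player receives 3) and the 5% mistake rate are my assumptions in the spirit of the exercise, though "copycat" and "copykitten" are the guide's own names for the strategies.

```python
import random

# Payoff per round (assumed values, in the spirit of ncase.me/trust):
# putting a coin in the machine costs you 1, the other player receives 3.
def payoff(me, other):
    score = 0
    if me:          # I cooperated (put in a coin)
        score -= 1
    if other:       # the other player cooperated
        score += 3
    return score

def play(strat_a, strat_b, rounds=200, mistake_rate=0.05, seed=0):
    """Iterated trust game where an intended move occasionally flips by mistake."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []   # actual moves each player made (True = cooperate)
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)   # each strategy sees only the opponent's moves
        b = strat_b(hist_a)
        if rng.random() < mistake_rate:  # the "slipped hand": intent flips
            a = not a
        if rng.random() < mistake_rate:
            b = not b
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def copycat(opp):
    # Tit-for-tat: cooperate first, then mirror the opponent's last move.
    return opp[-1] if opp else True

def copykitten(opp):
    # Forgiving tit-for-tat: only retaliate after two defections in a row,
    # so a single mistake is absorbed instead of echoing back and forth.
    return True if len(opp) < 2 else not (not opp[-1] and not opp[-2])

# Two copycats let a single mistake echo indefinitely; two copykittens
# shrug it off and return to mutual cooperation.
print(play(copycat, copycat))
print(play(copykitten, copykitten))
```

With mistake_rate=0 both pairings cooperate every round and each player nets 2 coins per round; turn mistakes on and the forgiving pair tends to come out ahead -- which is the "fool me twice" rule in code.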

Sunday, January 12, 2014

Ethical sociopaths?

Can sociopaths be ethical? A certain type of ethical, certainly. But before we talk about potential sociopathic limitations, consider this NY Times article ("In Life and Business, Learning to Be Ethical") about the issues with ethics that almost all of humanity shares:

The problem, research shows, is that how we think we’re going to act when faced with a moral decision and how we really do act are often vastly different.

Here’s just one of many examples from an experiment at Northeastern University: Subjects were told they should flip a coin to see who should do certain tasks. One task is long and laborious; the other is short and fun.

The participant flips the coin in private (though secretly watched by video cameras), said David DeSteno, a professor of psychology at Northeastern who conducted the experiment. Only 10 percent of them did it honestly. The others didn’t flip at all, or kept flipping until the coin came up the way they wanted.
***
[W]e need to be more aware of the ways we fool ourselves. We have to learn how to avoid subconsciously turning our backs when faced with a moral dilemma. And then we must be taught how to challenge people appropriately in those situations.

“When people predict how they’re going to act in a given situation, the ‘should’ self dominates — we should be fair, we should be generous, we should assert our values,” said Ann E. Tenbrunsel, a professor of business ethics at the University of Notre Dame who is involved in the EthicalSystems website. “But when the time for action comes, the ‘want’ self dominates” — I don’t want to look like a fool, I don’t want to be punished.

“Our survival instinct is to want to be liked and to be included,” said Brooke Deterline, chief executive of Courageous Leadership, a consulting firm that offers workshops and programs on dealing with ethical situations. “We don’t willfully do bad things, but when we’re under threat our initial instinct is to downplay or ignore problematic situations.”

Sociopaths may not have the same set of ethics (or issues implementing ethics -- our survival instinct is not so much to be liked and included), but it's also possible for sociopaths to have a personal preference about how they wish to act, even if it is just a personal aesthetic as opposed to a moral code. Webster defines ethics as many things, including "a guiding philosophy". Maybe for sociopaths that would look something more like utilitarianism or "the diamond rule", as opposed to a saint's altruism and golden rule, but even criminals have codes.

At the heart of any choice to subscribe to a set of ethics, whether empath or sociopath, is a belief that your choices matter -- that you and others around you are affected by everything you choose to do. You don't have to believe in right and wrong to understand that you are what you eat. And I wouldn't want life any other way. What would be the point of making choices if they didn't matter? And if you believe your choices matter, it's only natural to subscribe to some sort of "guiding philosophy" about how to make those choices. So yes, sociopaths can be and are ethical. Could sociopaths ever be considered more ethical than empaths?

Thursday, December 19, 2013

Ethics

I was talking to a sociology professor acquaintance of mine, who has also been diagnosed with Asperger's (interesting combination). She was discussing the process of getting an experiment approved by her institution. I am always interested to hear different iterations of ethical codes, so I started asking her about the sociological approach to ethics, which is apparently very different from the psychological approach and is abhorrent to anthropologists. She told me that sociologists have a bad reputation from studies like the Tuskegee syphilis experiment (arguably not even a sociological experiment) and the Milgram experiment.

To me the Milgram experiment is just good science. Get some ordinary person via a classified ad, put them in a room, instruct them to torture a third person, and see how far they are willing to go, based solely on the "authority" of the person conducting the experiment.

The sociologist acquaintance of mine thought that the Milgram experiment is harmful to test subjects because people want to believe that they are good people, not people capable of doing horrific things, and the test deprives them of that belief. I told her that the experiment did society a favor by forcing at least some of its members to face hard facts, i.e. almost anyone is capable of the world's worst horrors, if only put in the right situation. My argument was that if we fail to understand our capabilities for evil as well as for good, then we are doomed to repeat the atrocities of yesteryear. We agreed to disagree about this point.

Later in the conversation, however, she began talking about how she uses her charisma and the structure of the class to get her students to realize that they are racist, that they have knee-jerk reactions unsupported by any evidence, and that the logical conclusions of their positions would be tenets that they would be unwilling to acknowledge as their own, despite being the root of their misinformed views. Of course I support her manipulating her students to the point of shaking the very foundations of their beliefs, but I did mention to her that I thought it was a little hypocritical that on the one hand she thought it was "unethical" to expose experiment subjects to the realization that they too could be torturers given the right circumstances, but on the other she was willing to basically tell her students that their belief systems were completely flawed. People in her classes cry when they realize how small-minded they have been. How is this any different from the Milgram experiment, I asked? Because if it is different, it seems to be only a matter of degree of harm, not type.

When I finally got her to realize my point, she gave me a look as if she were going to cry too and started asking me if I believe in the "soul" and why I would be asking all of these questions. I felt bad for having let the mask slip (apparently, although I thought we were just having a reasonable discussion). I tried unsuccessfully to backtrack, saying things like your students arguably impliedly consent to this treatment by signing up for your class (no they don't, the class is required, she is the only one who teaches it), or by going to university in the first place (can you really be said to consent to being the mental plaything of your professors by going to university?). I woke up the next day to a very long email (Asperger's) going into aspie detail with sentences like this: "When we assess the consequences of policies or laws or teaching philosophies that are driven by normative and evaluative ideological considerations, the assessment can be shifted from 'right' or 'wrong' to 'functional' or 'dysfunctional'" and "And of course, one could argue that by making assessments on the basis of what is functional/dysfunctional for society (vs. individuals), we are also saying, as a normative/evaluative issue, that the well-being of society is more important than giving effect to the norms and values of sub-groups in society. This is especially (ethically) problematic in that what is functional for society may actually serve to further marginalize vulnerable minority groups (antithetical to certain democratic values), but if the society is not healthy, then the rest becomes moot (maybe)." And then she basically went on to say that society values critical thinking skills, so jacking with her students' minds is fine, ethically speaking.

I think this is illustrative of the true point of systems of ethics, which is -- let's agree on some random value system that we'll call "common" or "normal" and either enforce it past the point of bearing any resemblance to what it was meant to accomplish in the first place or ignore it whenever it is convenient. If the end is always going to justify the means, what is the point of even discussing the ethics of the process?

Sunday, June 23, 2013

Animal morality?

ABC News reports on a recent book, "The Bonobo and the Atheist," by Frans de Waal, who argues that other primates have at least the building blocks of morality. De Waal suggests that this proves we had inborn morality first, then came up with the idea of religion and god to make sense of those moral inclinations, rather than vice versa.

Those and other human-like characteristics, that have been clearly documented by other researchers as well, at least show they have some grasp of morality. It doesn't mean they are moral -- especially chimps, which can be very violent -- but they have the "basic building blocks" for morality, de Waal argues.

Chimps, he says, "are ready to kill their rivals. They sometimes kill humans, or bite off their face." So he says he is "reluctant to call a chimpanzee a 'moral being.'"

"There is little evidence that other animals judge the appropriateness of actions that do not directly affect themselves," he writes. Yet, "In their behavior, we recognize the same values we pursue ourselves.

"I take these hints of community concern as a sign that the building blocks of morality are older than humanity, and we don't need God to explain how we got to where we are today," he writes.

Is this right? That hints of community concern are the basis for our sense of morality? He says that there are instances of primates feeling guilt or shame:

For example, Lody, a bonobo in the Milwaukee County Zoo, bit the hand -- apparently accidentally -- of a veterinarian who was feeding him vitamin pills.

"Hearing a crunching sound, Lody looked up, seemingly surprised, and released the hand minus a digit," de Waals writes.

Days later the vet revisited the zoo and held up her bandaged left hand. Lody looked at the hand and retreated to a distant corner of the enclosure where he held his head down and wrapped his arms around himself, signs of both grief and guilt.

And here's the amazing part. About 15 years later the vet returned to the zoo and was standing among a crowd of visitors when Lody recognized her and rushed over. He tried to see her left hand, which was hidden behind the railing. The vet lifted up her incomplete hand and Lody looked at it, then at the vet's face, then back at the hand again.

Was he showing shame and grief? Or was it fear of a possible reprisal? The ape at least realized he had done something wrong, de Waal argues, showing the seeds of moral behavior.

The ape "realized he had done something wrong," but was it a moral judgment of "wrongness"? Or do chimps keep score in their society, such that if a chimp does something another chimp doesn't like, there will be retribution? In other words, does it mean that chimps are moral, or that they hold grudges? I actually think this is a more interesting explanation -- that the urge to punish others for perceived infractions (a sense of justice) is very primitive, such that we share this trait with primates. It's interesting how humans have not really modified that impulse much, despite evidence that restorative justice is actually more effective than retributive justice, both in terms of victim satisfaction and offender accountability.

The article mentions other primate behaviors, including displays of deep grief and compassion for each other, but as the article states, "[w]hen an ape expresses grief or guilt or compassion he is living out the blueprint for survival in a culture that is becoming more complex, and possibly more dangerous." That is, they are not making judgments of moral right or wrong; they are just acknowledging that they exist as one individual in a larger society (like the sociopath's willingness to be a team player). From the examples given, primates do seem to have a system of "values," e.g. they cleverly use orgies to stop wars, but that seems to be a utilitarian assessment (orgies = good and war = bad), not a moral one.

Thursday, June 6, 2013

Morality only as it applies to in-group actions

I discuss in the book the legal distinction between acts that are malum in se (something is wrong for its own sake) and only malum prohibitum (something is wrong because there is a law prohibiting it). An interesting question for malum in se is what makes something wrong for its own sake? Interesting research with small children sheds light on the mental origins of the distinction. From the Wall Street Journal's "Zazes, Flurps and the Moral World of Kids":

Back in the 1980s, Judith Smetana and colleagues discovered that very young kids could discriminate between genuinely moral principles and mere social conventions. First, the researchers asked about everyday rules—a rule that you can't be mean to other children, for instance, or that you have to hang up your clothes. The children said that, of course, breaking the rules was wrong. But then the researchers asked another question: What would you think if teachers and parents changed the rules to say that being mean and dropping clothes were OK?

Children as young as 2 said that, in that case, it would be OK to drop your clothes, but not to be mean. No matter what the authorities decreed, hurting others, even just hurting their feelings, was always wrong. It's a strikingly robust result—true for children from Brazil to Korea. Poignantly, even abused children thought that hurting other people was intrinsically wrong.

This might leave you feeling more cheerful about human nature. But in the new study, Dr. Rhodes asked similar moral questions about the Zazes and Flurps. The 4-year-olds said it would always be wrong for Zazes to hurt the feelings of others in their group. But if teachers decided that Zazes could hurt Flurps' feelings, then it would be OK to do so. Intrinsic moral obligations only extended to members of their own group.

The 4-year-olds demonstrate the deep roots of an ethical tension that has divided philosophers for centuries. We feel that our moral principles should be universal, but we simultaneously feel that there is something special about our obligations to our own group, whether it's a family, clan or country.

So even though there are moral origins to the distinction between malum in se and malum prohibitum acts, those moral principles underlying the distinction are not universal -- and not really that "moral" either, to the extent that they justify otherwise wrongful actions against people who happen to be different enough to somehow justify mistreatment.

Comments are unmoderated. Blog owner is not responsible for third party content. By leaving comments on the blog, commenters give license to the blog owner to reprint attributed comments in any form.