
Monday, May 6, 2013

The virtues (?) of victimhood

For a lot of spiritual/religious people there is the interesting issue of theodicy, the problem of evil: “how we justify the existence of suffering with belief in a God who created us, who loves us, and who providentially manages the world.” I've noticed that people (here in the comments and in my real life) seem to want to give meaning to bad things, typically in one of a few ways: (1) God is testing them (so presumably, as long as they hang in there, the bad thing gave them a chance to prove themselves and is at worst neutral), (2) they suffer to make them stronger (so the bad thing is really a blessing in disguise), or (3) they suffer as a testament to the evil of other men (and those men are going to be condemned or punished, so the suffering is a net negative). This last reason is the most troubling to me. A lot of people come to the comment section with judgment on their tongues, calling for the blood of the sociopaths who have wrecked their lives and who therefore deserve untold horrors. For some of these people, this one experience has come to define their existence.

When religious people think of someone who really had it rough, they frequently think of Job. Job not only lost everything, all of his wealth, family, and friends, he also suffered immense physical pain. Job had it about as bad as you can get it. But there was no one for Job to hate except God, which he declined to do. As his reward, God gives him double what he had before. Dostoevsky writes in The Brothers Karamazov:

God raises Job again, gives him wealth again. Many years pass by, and he has other children and loves them. But how could he love those new ones when those first children are no more, when he has lost them? Remembering them, how could he be fully happy with those new ones, however dear the new ones might be? But he could, he could. It's the great mystery of human life that old grief passes gradually into quiet, tender joy.

But I have a feeling that for a lot of the victims who come here, having their lives restored wouldn't be nearly enough for them to relinquish their claims to victimhood. In their minds, giving up their hurt would also mean giving up the meaning and sense of purpose they've assigned to that hurt. Giving up their pain would mean giving up their hopes for justice -- that the wrongdoers will eventually suffer commensurate to their misdeeds. These people would rather live a life of eternal victimhood than live in a world in which things eventually get better.

The Brothers Karamazov is one of my favorite books. One of its characters, Ivan, struggles with this desire for justice:

I must have justice, or I will destroy myself. And not justice in some remote infinite time and space, but here on earth, and that I could see myself. I have believed in it. I want to see it, and if I am dead by then, let me rise again, for if it all happens without me, it will be too unfair. I want to see with my own eyes the hind lie down with the lion and the victim rise up and embrace his murderer. I want to be there when everyone suddenly understands what it has all been for. All the religions of the world are built on this longing, and I am a believer.

Apart from the established health benefits of forgiving and letting go of past hurts, Ivan's position is simply inconsistent with reality. There is no perfect justice. To keep clamoring for it suggests a significant break with reality. This is particularly true of justice against people like me, who don't really believe in “right.” Everything just is. If bad things happened to me, I wouldn't recognize them as any sort of retribution for past wrongs. I do not believe life is "fair" that way. I wouldn't actually feel like I was being punished, so what's the point?


Saturday, October 13, 2012

Taming artificial intelligence

This was an interesting article by David Deutsch in Aeon Magazine about artificial general intelligence (AGI). He touched upon a lot of things that would seem relevant to this audience, like how little we know about how our brains work, the nature of self-awareness, our sense of self and sense of purpose and the origins of both, etc. One of the most interesting parts, though, was when he addressed some of the "scary" things about creating a machine with artificial general intelligence, particularly some people's concerns about the AGI being more powerful than we are and how it would choose to use that power. He addressed it in a very open-minded and enlightened way:


Some people are wondering whether we should welcome our new robot overlords. Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good.

These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of ‘good’ needs continual improvement. How should society be organised so as to promote that improvement? ‘Enslave all intelligence’ would be a catastrophically wrong answer, and ‘enslave all intelligence that doesn’t look like us’ would not be much better.

The parallel between AGIs and sociopaths is not exact, and of course his solution is a non-solution. He doesn't even manage to really define what he means by evil, except with a quick parenthetical allusion to morality. Maybe the machines would have a more workable form of "morality"? But it's an interesting question: Is there anything so special about our morality that we would try to indoctrinate AGIs into it? Is there enough logic to human morality that they would accept it? If so, then we don't really need to use the word "morality," do we? We could just appeal to their logic. Same with sociopaths. If morality is really such a universal "good" (pardon all of the quotes), then can't we also appeal to a sociopath's logic? Or sense of self-preservation? Or even the sociopath's self-interest in living in a relatively stable society in which most people are engaged in socially profitable endeavours that also benefit the sociopath in indirect ways? Civilization is vulnerable, but in a lot of ways it is robust. I behave in a civilized way because it works, it reaps rewards. (Not that AGIs would necessarily experience those side effects as "rewards," which is, I guess, why people are so concerned.)

By the way, I have a friend who is an exceptionally creative person, capable of being a thousand times more productive than most people. It is a scary thought to me that she has so much power, so I can empathize with people who fear sociopaths.