This was an interesting page/exercise, sent to me via Twitter, that applies the concepts of game theory to the generation and maintenance of trust.
People no longer trust each other. Why? And how can we fix it? An interactive guide to the game theory of trust: http://ncase.me/trust/
It takes about 20-30 minutes to complete. At first I was a little turned off by the arbitrary constraints of the game, but it deals with that issue later on -- so patience pays off! I've seen these models before, but it was interesting to apply them more directly to trust. I also hadn't seen mistakes/misunderstandings added to the model before. That has already changed the way I view others and the world. For instance (this might not make sense until you do the exercise), a friend of mine recently had an Amazon package fail to be delivered. She assumed that some shady neighbors were stealing packages and was going to stop having anything delivered, even though she has had around 20 successful deliveries so far. I encouraged her to keep trying until another package goes missing, in case this was a mistake or other one-off occurrence that shouldn't necessarily change her game-playing strategy. It's maybe a risky strategy, but in her case she has no other convenient alternative for package delivery.
Without really remembering it, I had essentially applied the "Diamond Rule" to this game. I think this worked OK (and probably works better with actual people than with bots?), but it's true that when a mistake does occur, this rule can compound it into a global loss for both players.
There's that phrase: "fool me once, shame on you; fool me twice, shame on me". But this game suggests a more optimal rule once mistakes are factored in: "Fool me once, OK, I take it on the chin. Fool me twice, shame on you, with punishment."
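That forgiving rule can be sketched as a small simulation. This is a minimal sketch, not the page's actual code: the strategy names Copycat (tit for tat) and Copykitten (only retaliate after two cheats in a row) come from the game, the payoffs assume its coin rules (cooperating costs you 1 coin and gives the other player 3), and the simulation details are my own.

```python
import random

def payoff(me, other):
    # Assumed coin payoffs: cooperating costs 1, and you gain 3
    # whenever the other player cooperates. True = cooperate.
    return (3 if other else 0) - (1 if me else 0)

def copycat(my_hist, their_hist):
    # Tit for Tat: cooperate first, then mirror the opponent's last move.
    return True if not their_hist else their_hist[-1]

def copykitten(my_hist, their_hist):
    # "Fool me once, I take it on the chin": only retaliate after
    # the opponent cheats twice in a row.
    return not (len(their_hist) >= 2
                and not their_hist[-1] and not their_hist[-2])

def play(strat_a, strat_b, rounds=200, mistake=0.05, seed=0):
    # Iterate the game, occasionally flipping a move to model
    # mistakes/misunderstandings.
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        if rng.random() < mistake:
            a = not a
        if rng.random() < mistake:
            b = not b
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("copycat vs copycat:      ", play(copycat, copycat))
print("copykitten vs copykitten:", play(copykitten, copykitten))
```

With mistakes in the mix, a single accidental "cheat" sends two Copycats into a retaliation spiral, while two Copykittens shrug off isolated errors and keep cooperating -- which is the intuition behind forgiving the first offense.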