How do you hold an artificial intelligence (AI) accountable for its actions? "Punishment!" we said; but...
How does one punish an AI?
The same way one would punish a person: Take away something that it cares about.
What does an AI care about, such that taking that thing away would change its behavior?
Why would taking something away cause a change?
What would even motivate an AI in the first place?
"hmmm...." We said...
What if an AI's motivation worked in a completely different way from a human's motivation?
What if the AI's value system were built like an insect hive's, where no member could even conceive of performing a "bad" (i.e. independent, self-serving, coming at the cost of another) action?
Does an ant colony ever have a rogue ant problem?
(I think it safe to say that humans have rogue human problems, even without AI.)
Perhaps the rogue AI problem comes from the hubristic assumption that a "good" (i.e. functional, effective, general) AI needs to be modeled on human intelligence?
Perhaps, just as a fish doesn't know water, we are blind to our primate sense of fairness and justice, evolved to manage exactly the kind of intelligence we happen to have. Because of this, we can't see an alternative to the idea that a human-based intelligence must come with a human-based motivational system, including individuality and rule-questioning behaviors.
Are we, in fact, creating the control problem by assuming that the intelligence we create should function like our own?
(Kevin Kelly has something to say about this from a slightly different angle: "AI or Alien Intelligence")
Humanity's problem lies in its belief in hypostases such as "good" and "evil." And intelligence is still an undefined concept. It is hardly surprising, then, that ambiguities like the ones discussed above arise...