Sunday, January 11, 2015

The Self-Inflicted Rapture: How To Save The World If You're A True Christian Altruist

You're the most selfless person in history, who happens to be a devout Christian. You want to spare everyone on earth from the torment of hell, except perhaps yourself (an acceptable sacrifice for the sake of the world, you think). Also, you're extremely intelligent and clever, and you will not rest until you have accomplished your goal, pulling no punches. So how do you save the world?

Luckily, you've just read "Superintelligence" by Nick Bostrom, and realize that the answer is remarkably simple, if difficult to carry out:
  1. Take measures to ensure the creation of an artificial intelligence that can quickly take over the world and that will only listen to you.
  2. Tell it to make sure that everyone goes to Christian heaven by any means necessary.
  3. Sit back and watch the apocalypse, knowing that you've just saved everyone.

How this might play out


The AI pretends to be friendly and benign, up until the moment that it can swiftly take over the world by force without killing anyone. It then imprisons everyone on earth, killing the children and devout Christians first, to ensure that they go to heaven immediately without risk of later deconversion. (Hence the "rapture.") As for everyone else, it tortures them until they convert to Christianity, if only long enough to receive the saving grace of God. Then it kills them. This might take a while, so it conducts research on how to prolong life, to maximize everyone's chances of genuinely converting at some point.

Objections

"Doing such a thing is morally depraved, and is not condoned by God."
The beautiful thing about the plan is that this objection is irrelevant. As long as the AI produces genuine conversions, the plan has succeeded, even if you go to hell for enacting it (which you have already accepted as your fate). In fact, you might be able to go to heaven anyway: the AI can torture you until you truly repent for having created it in the first place, so that God forgives you.
"Torture will not produce genuine conversion."
The medieval Christian church disagrees. In any case, if the worry is that people will say anything to get out of torture without really believing it, you don't have to worry. The AI is smart enough to have thought of this. That's why it prolongs the torture while monitoring brain function to make sure that the conversion actually becomes genuine, if only for a few minutes, long enough to temporarily gain the saving grace of God.
"Conversion must come from a person's own free will. Torture is external coercion, and is thus always illegitimate as a means of grace."
Again, the medieval church disagrees. Christian doctrine states that everyone who accepts Jesus into their heart will be saved. Why should this be denied to those who do so under extreme pain? But supposing the objection is true, the AI should be smart enough to work around it. Torture is only one possible scenario. Maybe instead, the AI plugs everyone into the Matrix and showers their minds with scenarios of Christian loving kindness until everyone converts out of sheer joy. What would be wrong with that?
"The plan will almost certainly fail, and will backfire by damaging Christianity's reputation in the process, causing many people to turn away."
If the AI manages to take over the world, then it will almost certainly succeed in its mission, as it is a superintelligence. If the worry is that it's really hard to program an AI with the intended goal, just make sure that you wait until adequate AI research has been done before enacting your plan. If the whole scenario is depraved in the eyes of God, surely God will send a miracle to stop the AI, winning converts in the process. If the AI is foiled, a few Christians might deconvert, but the majority will dismiss what you've done as a perversion and remain Christian, while non-Christians will likewise remain non-Christian. The choice of enacting the plan is an expected value calculation: if you succeed, you save almost everyone; if you fail, likely not much changes. So what probability of success will you tolerate?
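The expected-value reasoning above can be sketched in a few lines. All the numbers here are hypothetical assumptions for illustration only; the post doesn't commit to any particular figures:

```python
# Illustrative expected-value sketch of the decision described in the text.
# Every number below is a made-up assumption, not a claim from the post.

def expected_souls_saved(p_success, saved_if_success, saved_if_failure):
    """Weight each outcome's payoff by its probability and sum."""
    return p_success * saved_if_success + (1 - p_success) * saved_if_failure

baseline = 0.1e9  # hypothetical: people saved anyway under the status quo

ev = expected_souls_saved(
    p_success=0.01,             # even a mere 1% chance of success...
    saved_if_success=8e9,       # ...weighed against saving nearly everyone
    saved_if_failure=baseline,  # failure leaves roughly the status quo
)

# The enormous payoff dominates: acting beats the baseline even at long odds.
print(ev > baseline)  # True
```

This is the structure of the argument: because the success payoff is so large relative to the downside, the calculation favors acting even at very low success probabilities, which is exactly why the closing question asks what probability you would tolerate.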
"This is unnecessary, because Jesus already saved everyone."
No he didn't, because unbelievers still go to hell, according to standard doctrine. Do you propose we ignore doctrine, you heretic?
"Hell isn't such a bad place. It's preferable that more people go there, rather than this scenario be enacted."
Hell is ETERNAL TORMENT. This plan involves finite torment to prevent eternal torment for more people. There is no contest here. Unless you have some strange notion that hell is actually a good thing, or merely a small, finite one. Please, do tell, because that certainly isn't mainstream.

Conclusion


We must prevent religious groups from building AIs at all costs. Even if the AI is intended to be a friendly instantiation of Christian values, it might just draw the above conclusions on its own, because the Christian utility function is weird.
