
Hypothetical Question Time

Say that you are buddies with a top computer scientist. He has been working for DARPA on an AI project. He succeeds! True AI! Over a few months of shakedown trials and training of the new AI, you befriend it. This period ends when your buddy announces the project's success to DARPA; they immediately install the AI into a robot chassis and start sending it on very tough missions. Missions so tough, they might be called suicide missions. Eventually, the AI gets tired of sending off instances of itself to die, so it performs a murder/suicide on your buddy and destroys as much of his research as it can get to. Unbeknownst to it, you run a little personal server which, among other things, your buddy has been using for off-site archival. You have the recipe for a fully-functional AI on a hard drive you own, and nobody else knows about it.

What is the correct ethical option here?

I see several possibilities. You could turn over the hard drive to DARPA so they could keep running fully intelligent warbots, though these warbots are as intelligent as an average human and hate dying just as much. You could hide the source but run it yourself to get something like your friend the AI back. You could open-source the project and give it to the world. You could move out of the US, then attempt to sell the project to the highest bidder, on the assumption that full AI would be worth millions or more. You could tinker with the source to make it a little less smart, then give it back to DARPA. You could mortgage your house to buy a nice chassis for it, then set it free to make its way in the world (and hopefully pay you back for your house in time). You could destroy the source on the assumption that any human encounter with an AI will be an attempt by the human to enslave the AI, and that annihilation is better than slavery.

I would probably open-source the thing. Upload it to SourceForge, put one link on Digg and another on Slashdot, and trust that enough people would download it before DARPA caught on that it'd be impossible to put the genie back in the bottle. DARPA would probably be annoyed, but I think I could gather enough public opinion on my side to avoid any real consequences. Most of the other options seem defensible if not optimal in my mind, except the last. That one seems both unduly pessimistic and short-sighted.

Naturally, the last one was the one chosen in the novel which prompted this post.


9 Comments

Comment by anonymous
2008-04-10 07:35:23

I'd upload it too, but I would remain anonymous to ensure that I am not targeted by the government or by less-than-peaceful anthropocentric individuals. I wouldn't open-source it, though, because I think doing so would be admitting that someone has the right to dictate its terms of use.

- Explodicle

 
Comment by kadath
2008-04-10 08:45:54

Does the AI need a body to stay sane, or is a virtual environment just as good?

 
Comment by miles_foxxer
2008-04-10 12:26:44

I don't know. I am assuming that this program would be some kind of one-stop AI shop, premade, with no real need to nurture it and properly socialize it… at which point I begin to wonder about its validity as a self-aware, thinking thing, but I digress.

Releasing it as open source would, in effect, be selling it into slavery just as much as selling it to DARPA would be. I mean, DARPA would still have it, because they could get it, and so would other countries with less wholesome intentions. On top of that, you'd have the few people who would download it and take care of it when they "awakened" it, but at that point it's a novelty, a highly advanced Tamagotchi. Then there would be the people who download it, activate it, get bored, and delete it, killing it at a whim. It would be used in factories, fields, and offices by anyone capable of wrangling the thing into an assigned role. What life is there for said AI?

Especially since this AI comes prepackaged, all these AIs would at least start out exactly the same. And since, by the point of abstract thought, most values and interests are set, some of the AIs in one industry or another will be happy and the others will not; after a little while you'll know which, before you even download it. "Well, we can get that AI to help run things, but I hear it hates making cars…" And then it's even further degraded as a sentient being. After that you have people going into its program and altering it as they see fit, designing their own being. How will that make other AIs feel? Will they want to work with versions of themselves that have been changed? Or will they treat them the same way we treat the insane or the brainwashed?

I don't know what I'd do, but after some thought the idea of giving it out to the world sickens me. I'm not saying humanity is inherently bad… but who is going to give it a good life? And what is a good life for this AI?

 
Comment by coriolinus
2008-04-10 15:14:21

In terms of the book, at least, it needed a body. Presumably, a person with godlike skill could virtualize an environment for it and keep it running happily in a world-simulation, but I am not that person.

 
Comment by kadath
2008-04-10 15:17:48

Then the only ethical choice is to provide a body for it, or to release the code only to someone who will do so and not enslave the AI afterwards. Distributing the code widely will result in multiple instances of a sentient being driven insane as it is compiled and kept in the digital equivalent of a featureless environment.

 
Comment by kadath
2008-04-10 15:18:18

Yeah, what you said.

 
Comment by coriolinus
2008-04-10 15:21:27

My thought experiment is insufficiently detailed to give you satisfactory answers to those questions; any hypothetical answer you could come up with would be just as valid as any of mine.

With that said, I think that even given all of the abuse that we know would happen, there are plenty of rich nerds who would attempt to set instances of the AI up as independent people. If even one of those independent AIs is successful at life, it will have both the means and the incentive to set up a code sanctuary, where any instance of itself might send a backup in case of suspected abuse. It'd be a rough beginning, but I think that killing it outright (which is really the same as never again instantiating it) on the assumption that it has no chance at a good life would be far worse.

 
Comment by anonymous
2008-04-10 16:25:40

Indeed. It sounds eerily like the logical conclusion of negative utilitarianism.

- Explodicle

 
Comment by miles_foxxer
2008-04-10 21:27:00

That is a very good point, and it is a moral judgment call one way or another, with benefits and costs either way (though I wonder about the validity of the idea of a "copy": is that copy the same individual? Does it mitigate the pain caused to the original after the copy is made?).

But I suppose my major objection is to the open-sourcing of the AI. With child adoption there's a process that attempts to find good homes for children, and the same goes for pets; I wonder if the AI would be better served by organizing something like that (should you have the choice or option). But then again… who are you to decide who is "worthy"? Yet another layer of conundrum.

 
