Comments on: Hypothetical Question Time http://www.coriolinus.net/2008/04/10/hypothetical-question-time/ read, and be entertained Sun, 12 Sep 2010 18:15:26 +0000 hourly 1 http://wordpress.org/?v= By: miles_foxxer http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1464 miles_foxxer Fri, 11 Apr 2008 02:27:00 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1464 That is a very good point, and it is a moral judgment call one way or another, benefits and costs either way (though I wonder at the validity at the idea of a "copy" is that copy the same individual? Does it mitigate the pain caused to the original after the copy is made?). But I suppose my major argument is the open sourceing of the AI. With child adoption there's a process that attempts to find good homes for children and pets I wonder if it would be better serves to organize something like that (should you have the choice or option) but then again.. who are you to deside who is "worthy". Yet another layer of conundrum. That is a very good point, and it is a moral judgment call one way or another, benefits and costs either way (though I wonder at the validity at the idea of a “copy” is that copy the same individual? Does it mitigate the pain caused to the original after the copy is made?).

But I suppose my major argument is the open sourceing of the AI. With child adoption there’s a process that attempts to find good homes for children and pets I wonder if it would be better serves to organize something like that (should you have the choice or option) but then again.. who are you to deside who is “worthy”. Yet another layer of conundrum.

]]>
By: anonymous http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1463 anonymous Thu, 10 Apr 2008 21:25:40 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1463 Indeed. It sounds eerily like the logical conclusion of <a href="http://en.wikipedia.org/wiki/Utilitarianism#Negative" rel="nofollow">negative utilitarianism</a>. - Explodicle Indeed. It sounds eerily like the logical conclusion of negative utilitarianism.

- Explodicle

]]>
By: coriolinus http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1462 coriolinus Thu, 10 Apr 2008 20:21:27 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1462 My thought experiment is insufficiently detailed to give you satisfactory answers to those questions; any hypothetical answer you could come up with would be equally valid with any of mine. With that said, I think that even given all of the abuse that we know would happen, there are plenty of rich nerds who would attempt to set instances of the AI up as independent people. If even one of those independent AIs is successful at life, they will have both means and incentive to set up a code sanctuary, where any instance of themself might send a backup in case of suspected abuse. It'd be a rough beginning, but I think that killing it outright (which is really the same as never again instantiating it) in the assumption that it has no chance at a good life would be far worse. My thought experiment is insufficiently detailed to give you satisfactory answers to those questions; any hypothetical answer you could come up with would be equally valid with any of mine.

With that said, I think that even given all of the abuse that we know would happen, there are plenty of rich nerds who would attempt to set instances of the AI up as independent people. If even one of those independent AIs is successful at life, they will have both means and incentive to set up a code sanctuary, where any instance of themself might send a backup in case of suspected abuse. It’d be a rough beginning, but I think that killing it outright (which is really the same as never again instantiating it) in the assumption that it has no chance at a good life would be far worse.

]]>
By: kadath http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1461 kadath Thu, 10 Apr 2008 20:18:18 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1461 Yeah, what you said. Yeah, what you said.

]]>
By: kadath http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1460 kadath Thu, 10 Apr 2008 20:17:48 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1460 Then the only ethical choice is to provide a body for it, or not to release the code save to someone who will do so and not enslave the AI afterwards. Distributing the code widely will result in multiple instances of a sentient being driven insane as it is compiled and kept in the digital equivalent of a featureless environment. Then the only ethical choice is to provide a body for it, or not to release the code save to someone who will do so and not enslave the AI afterwards. Distributing the code widely will result in multiple instances of a sentient being driven insane as it is compiled and kept in the digital equivalent of a featureless environment.

]]>
By: coriolinus http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1459 coriolinus Thu, 10 Apr 2008 20:14:21 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1459 In terms of the book, at least, it needed a body. Presumably, a person with godlike skill could virtualize an environment for it and keep it running happily in a world-simulation--but I am not that person. In terms of the book, at least, it needed a body. Presumably, a person with godlike skill could virtualize an environment for it and keep it running happily in a world-simulation–but I am not that person.

]]>
By: miles_foxxer http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1458 miles_foxxer Thu, 10 Apr 2008 17:26:44 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1458 I don't know. I am assuming that this program would be some kind of one stop AI shop with no real need to nurture it and properly socialize it, premade.. at which point I begin to wonder about it's validity as a self-aware thinking thing, but I digress. Releasing it open source would be, in effect, selling it into slavery just as much as selling it to DARPA would be. I mean, DARPA would still have it because they could get it, and so would other countries with less wholesome intentions. On top of that you'd have the few people who would download it and take care of it when they "awakened" it, but at that point it's a novelty, a highly advanced Tomogochi, and then there would be the people that download it, activate it, get board and then delete it, basically killing it at a whim, it would be used in factories, fields, and offices by anyone capable of wrangling the thing into an assigned role. What life is there for said AI? Especially since this AI being prepackaged, all these AIs would at least start out exactly the same, and since at the point of abstract thought most values and interests are set some of the AIs in one industry or another will be happy and the others will not, you'll know that after a little while before you download it. "Well we can get that AI to help run things, but I hear it hates making cars..." And then it's even further degraded as a sentient being. After that you have people going into it's program and altering it to how they see fit, designing their own being, how will that make other AIs feel? Will they want to work with other versions of themselves that have been changed? Or will they treat them the same way we treat the insane or the brainwashed? I don't know what I'd do, but the idea of giving it out to the world sickens me after some thought. I'm not saying humanity is inherently bad.. but who is going to give it a good life? And what is a good life for this AI? I don’t know. I am assuming that this program would be some kind of one stop AI shop with no real need to nurture it and properly socialize it, premade.. at which point I begin to wonder about it’s validity as a self-aware thinking thing, but I digress.

Releasing it open source would be, in effect, selling it into slavery just as much as selling it to DARPA would be. I mean, DARPA would still have it because they could get it, and so would other countries with less wholesome intentions. On top of that you’d have the few people who would download it and take care of it when they “awakened” it, but at that point it’s a novelty, a highly advanced Tomogochi, and then there would be the people that download it, activate it, get board and then delete it, basically killing it at a whim, it would be used in factories, fields, and offices by anyone capable of wrangling the thing into an assigned role. What life is there for said AI? Especially since this AI being prepackaged, all these AIs would at least start out exactly the same, and since at the point of abstract thought most values and interests are set some of the AIs in one industry or another will be happy and the others will not, you’ll know that after a little while before you download it. “Well we can get that AI to help run things, but I hear it hates making cars…” And then it’s even further degraded as a sentient being. After that you have people going into it’s program and altering it to how they see fit, designing their own being, how will that make other AIs feel? Will they want to work with other versions of themselves that have been changed? Or will they treat them the same way we treat the insane or the brainwashed?

I don’t know what I’d do, but the idea of giving it out to the world sickens me after some thought. I’m not saying humanity is inherently bad.. but who is going to give it a good life? And what is a good life for this AI?

]]>
By: kadath http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1457 kadath Thu, 10 Apr 2008 13:45:54 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1457 Does the AI need a body to stay sane, or is a virtual environment just as good? Does the AI need a body to stay sane, or is a virtual environment just as good?

]]>
By: anonymous http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1456 anonymous Thu, 10 Apr 2008 12:35:23 +0000 http://www.coriolinus.net/2008/04/10/hypothetical-question-time/#comment-1456 I'd upload it too, but I would remain anonymous to ensure that I am not targeted by the government or less than peaceful anthropocentric individuals. I wouldn't open-source it either, because I think doing so would be admitting that someone has the right to dictate its terms of use. - Explodicle I’d upload it too, but I would remain anonymous to ensure that I am not targeted by the government or less than peaceful anthropocentric individuals. I wouldn’t open-source it either, because I think doing so would be admitting that someone has the right to dictate its terms of use.

- Explodicle

]]>