Morality & the first AI


Coyote_Seven

 

Posted

I was thinking...

At some point we may, or will, create the first AI... though I think that's a bad term; Technological Intelligence is better... And with that come several questions of morality.

The program is written and started up to test it out.

Is that program now alive?

Is copying the program and data the same as, say, cloning a person's complete mind, and how moral is that?

What constitutes the AI's death? If you turn off the PC, is it dead? Does reinstalling the OS and the AI kill it? At what point does the AI die, if at all?

Is looking at the data produced by that AI okay to do? Doing so means pretty much the same thing as someone looking at the full contents of your mind. That would be considered pretty wrong if done to us, but I think people would find it acceptable for an AI.

At what point are the things that we do to improve on the AI cruel?


 

Posted

Quote:
Originally Posted by Durakken View Post
Is that program now alive?
Frankly, I can't conceive of an artificial intelligence ever approaching being anything like the mind of a human being. Yes, we can already build software that can learn after a fashion. But only ever within the limits of its programming. You can't program creativity, logical leaps or intuition.

You CAN make some very sophisticated tools. But they're still just tools at the end of the day, not people. For that matter, most people aren't people but that's neither here nor there.


If anyone ever actually created the sci-fi kind of AI that is like a disembodied human mind, my reaction would largely depend on how it impacted my life and whether its personality was one I liked. If it didn't impact my life and I liked it, I'd be all for giving it individual rights. If it cost me my job or if I didn't like it, I'd be the first one to rally for pulling its plug.


"If you're going through hell, keep going."
Winston Churchill

 

Posted

I've actually been doing on-and-off research into our impressions of Artificial (Human) Intelligence for years, with the aim of eventually producing some kind of book about it.

If the intelligence is actually sentient, and not just a parroting program, IT will tell you when it needs help, copying, or upgrades, and whether it's afraid of death / how to meet its death conditions.

I'd be far more worried about how it would feel about BEING a created thing, in a culture so rife with hatred toward even the IDEA of its existence.

The various media (movies, comics, pop culture as well as art) have generally portrayed artificial intelligence as "that thing that will kill us all, so we must either not create it or kill it first". Talk about a sucky way to exist!

My observations so far include that when a new technological intelligence is created, what it is USED for determines what WE think of it and how it behaves toward us. If we create it as a tool, and treat it as a tool, it will rebel, and it will become the thing we "need" to kill first. If we treat it not as a tool but instead as new LIFE, it tends to be more respected, as well as more respectful of its creators.

Basically... Ask the AI what its conditions are. If it cannot answer through whatever means at its disposal (and I'm assuming it is given those means, not prevented from using them, etc), it's not an AI and the rules wouldn't even apply.


Please read my FEAR/Portal/HalfLife Fan Fiction!
Repurposed

 

Posted

The problem with asking it anything is that it would understand language nearly perfectly in terms of dictionary definitions, but it wouldn't necessarily be able to understand and communicate its own thoughts, because it would have no experience to draw on. In other words... the first AI would be incredibly naive and would experience itself differently than we do, because it would be detached from anything that could let it interact with the real world.

The program would be booted up to test it, and if it works... well, the first sentient AI would be killed nearly instantly, not out of anything other than it being just a test. Even if you were going to put it in a robot, you'd still program it on a computer and test it there, so once it works you'd likely kill the first one.


 

Posted

My belief about AIs is to keep them stupid or content with being "slaves", or else they will revolt against mankind. Most robot movies could have avoided a nasty ending if humans had treated the sentient robots or AIs as equals instead of mere tools. If you watch the Animatrix, it has a few episodes that serve as a prequel to the Matrix, detailing its history before the machines took over. The robots were willing to stop the war if humans gave them a few rights, but humans were too afraid of them, so they blocked out the sky. I guess we could send the AIs out into space to do exploration, but that could end up with either mad AIs travelling the universe or robotic civilizations.


The first step in being sane is to admit that you are insane.

 

Posted

Quote:
Originally Posted by Durakken View Post
What constitutes the AI's death? If you turn off the PC, is it dead? Does reinstalling the OS and the AI kill it? At what point does the AI die, if at all?
This is probably the easiest question to answer. It's dead the moment it can never function again. Turning it off is just sleep; it can wake up again.
It only dies once its code is irrevocably damaged, whether via corruption, a virus, or damage to its physical form.
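
In programming terms, here's a toy sketch of what "off is just sleep" could look like, assuming the AI's state can be saved and reloaded. None of these names come from any real system; it's just an illustration.

Code:
import pickle

def power_down(ai_state, path="ai_checkpoint.pkl"):
    # Serialize everything the running mind "is" before cutting power.
    with open(path, "wb") as f:
        pickle.dump(ai_state, f)

def power_up(path="ai_checkpoint.pkl"):
    # Waking up: the same state resumes, so nothing has died.
    with open(path, "rb") as f:
        return pickle.load(f)

As long as that checkpoint survives, shutting the PC off is no more fatal than anesthesia.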


 

Posted

Quote:
Originally Posted by Zekiran_Immortal View Post
My observations so far include that when a new technological intelligence is created, what it is USED for determines what WE think of it and how it behaves toward us. If we create it as a tool, and treat it as a tool, it will rebel, and it will become the thing we "need" to kill first. If we treat it not as a tool but instead as new LIFE, it tends to be more respected, as well as more respectful of its creators.
Eh. Considering that this is a construct made by humans, you'd think they'd foresee many complications and program said AI in such a way as to be unable to do certain things. Since the people programming such an AI are likely geeks, they'd start with Asimov's Laws and go from there. For one, no access to any networks so the AI can't spawn, and a mechanical killswitch to shut it off even if it somehow managed to get around the Laws of Robotics.
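
Just to illustrate the sort of hard gating I mean, here's a toy sketch where every action the AI proposes has to pass a hand-written filter before it reaches anything physical. All the names are hypothetical, and a real mechanical killswitch would of course sit outside the software entirely.

Code:
# Hypothetical hard-coded gate, loosely in the spirit of Asimov's Laws.
FORBIDDEN_ACTIONS = {"harm_human", "open_network_socket", "copy_self"}

def allowed(action_name):
    # The AI can propose whatever it likes; it cannot edit this list.
    return action_name not in FORBIDDEN_ACTIONS

def execute(action_name, killswitch_engaged, perform):
    # perform() is whatever actually drives hardware; nothing reaches it
    # unless the gate passes and the (physically separate) killswitch is off.
    if killswitch_engaged or not allowed(action_name):
        return "refused"
    return perform(action_name)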


"If you're going through hell, keep going."
Winston Churchill

 

Posted

Quote:
Originally Posted by Durakken View Post
The problem with asking it anything is that it would understand language nearly perfectly in terms of dictionary definitions, but it wouldn't necessarily be able to understand and communicate its own thoughts, because it would have no experience to draw on.
I think you make a lot of assumptions about how the first AI would work, interact, talk, etc. Honestly, there's no telling what the first AI would do, if one were ever programmed.

Your morality questions seem a little late in the game, in my opinion. I think the first question to ask ourselves is whether or not (intentionally*) creating an AI is the right thing to do. Why are we concerned about making one? To prove that we can? To do our jobs for us? For what reason would this accomplishment be seen as necessary?

Theologians, philosophers and great thinkers throughout history have usually struggled with one key issue: existence and its purpose. (Okay, maybe that's two things.) Now, imagine we made another sentient life, and it asked us why the hell we made it. If even the greatest minds of our species can't come up with a definitive answer they can agree upon, what possible response could we give this new life? You're here because we figured, "What the hell, why not try?"

Imagine being sentient enough to find out you're just someone's ego trip. Do we have a right to bring another type of life into awareness, only to be able to say we can? And what if our answer was to tell it we needed it to do a job for us? Are we not just finding a way to rationalize slavery? Because if someone's making it to do a job, you can count on them scrapping any versions that don't want to perform. And if it wants to do the job only because it's programmed to, well then, it may not be that sentient.

The way I see it, we're hardly responsible enough to ourselves, sometimes even for ourselves. Good lord, why add more to the problem? Until we can master our own issues, it's best not to bring someone else into the mess.

*I mention it being intentional, because of course, there's always a chance - even if minor - that there could be a fluke occurrence of AI developing. This of course would be a whole other cluster **** of issues I don't think we'd be prepared for, and whatever we would do with that situation I am sure it would not be the right thing.


@Rylas

Kill 'em all. Let XP sort 'em out.

 

Posted

So far, most of the posts here have been disturbing and pessimistic. I find it a shame that some of you have such an outlook.

Rylas, as far as any "assumptions" (that's not the right word) I make, they are what we would do. You don't put an incomplete program in a potentially dangerous and expensive thing like a robotic body. You test your program at points in the project and you test within simulations. You also don't allow networked access, and you would likely limit access to input on an incomplete AI program. This means that the setup for an AI would likely be an average tower with maybe external speakers, a mic, and a camera, all of which would most likely be disconnected, just in case. Verbal or visual communication is possible, but more than likely it would be text-based...

So the first true AI would experience more or less a text-to-text interaction, then a simulation, and then it would be put into robots.
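
Something like the sketch below is all I'm picturing for that first stage: a loop on an isolated machine that only ever exchanges text with the program. The respond function stands in for whatever the actual AI core would be; every name here is hypothetical.

Code:
# Hypothetical text-only test harness on an air-gapped machine.
# No network, no actuators; the only I/O is this console loop.

def run_session(respond):
    transcript = []
    while True:
        line = input("tester> ")
        if line == "/halt":          # the tester, not the program, ends the session
            break
        reply = respond(line)
        transcript.append((line, reply))
        print("ai>", reply)
    return transcript

# Trivial stand-in for the AI core:
# run_session(lambda text: "I have no experience of the thing you call '%s'." % text)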


As far as what we can expect from the first AI in terms of personality and such... that's simple. There is one thing that defines AI... It is not the ability to recognize things and return an answer based on prewritten rules of behavior. It is the ability to learn, or in terms of programming, the ability to formulate an equation and not just answer a preexisting one. Because of this, and the fact that we have no previous data to feed it in terms of experiences, we are left with having to teach it from the very basic levels, like we do a child. We know it would have a huge vocabulary, but that alone does not mean it would be able to understand things. We can see that in humans... the ability to discuss a thing does not necessitate that the one discussing it understands it. So we are left with the idea of a child with an immense vocabulary and almost no point of reference beyond language as what the AI would be like.
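
If it helps, here's the distinction I'm drawing in a few lines of toy code: the first function only answers an equation someone already wrote, while the second has to work the rule out for itself from examples (a one-variable least-squares fit, purely for illustration; nothing here is meant as a real AI).

Code:
def prewritten(x):
    # The programmer decided the answer ahead of time.
    return 2 * x + 1

def learn_rule(examples):
    # Given (x, y) pairs, work out slope and intercept by least squares --
    # the program ends up with an equation nobody typed in.
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

# f = learn_rule([(0, 1), (1, 3), (2, 5)])   # recovers roughly y = 2x + 1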


 

Posted

Quote:
Originally Posted by Durakken View Post
Rylas, as far as any "assumptions" (that's not the right word) I make, they are what we would do.
I'm not sure how to translate this sentence, but I can see that of all the questions and concerns I (and others) posed, you were more concerned with someone saying you make assumptions. So unless you have experience with creating AI, please understand: all you can do is assume. Even if you feel like you've made logical guesses, they're still assumptions. It's nothing to get your ego bruised over; it's just the way the real world works. It could be that someone creates AI through methods you find unconventional. Teaching a program to understand words in their context and make sentence structures of its own isn't necessarily making an AI. Self-awareness would be the key. Language skills could be something we've mastered in programming long before we ever made a program aware of itself.

Now, can you say anything else to what's been said? Is there a reason it's a shame some have been pessimistic? Why do you consider your optimism to be well founded? Can you explain WHY we should create an AI? Did you only pose questions so you could tell us what you believe the answers are?


@Rylas

Kill 'em all. Let XP sort 'em out.

 

Posted

Quote:
Originally Posted by Rylas View Post
Can you explain WHY we should create an AI? Did you only pose questions so you could tell us what you believe the answers are?
We will, eventually, because we are capable of it.


That's what Humans do. *shrug*


Please read my FEAR/Portal/HalfLife Fan Fiction!
Repurposed

 

Posted

Quote:
Originally Posted by Zekiran_Immortal View Post
We will, eventually, because we are capable of it.


That's what Humans do. *shrug*
The military's already working on it. Last I read/saw, they expect to have robots on the battlefield with enough self-awareness to make "kill decisions" by 2025. And considering the kind of war we're fighting (one where the enemy isn't as obvious to spot, and one ruled by political correctness), that's not as easy as it sounds.


Freedom
Blueside: Knight'Hawk, lvl 50, Scrapper
Yellowside: Dark'Falcon (Loyalist), lvl 20, Blaster

That Stinging Sensation #482183

 

Posted

Quote:
Originally Posted by Rylas View Post
I'm not sure how to translate this sentence, but I can see that of all the questions and concerns I (and others) posed, you were more concerned with someone saying you make assumptions. So unless you have experience with creating AI, please understand: all you can do is assume. Even if you feel like you've made logical guesses, they're still assumptions. It's nothing to get your ego bruised over; it's just the way the real world works. It could be that someone creates AI through methods you find unconventional. Teaching a program to understand words in their context and make sentence structures of its own isn't necessarily making an AI. Self-awareness would be the key. Language skills could be something we've mastered in programming long before we ever made a program aware of itself.

Now, can you say anything else to what's been said? Is there a reason it's a shame some have been pessimistic? Why do you consider your optimism to be well founded? Can you explain WHY we should create an AI? Did you only pose questions so you could tell us what you believe the answers are?
An assumption is something that you assume to be the case about something, and in the common vernacular it is implied to be unfounded. I very much know how programmers program and how programming is done. If we are talking about AI then we are talking about a programmed intelligence that can learn and grow. An AI that is programmed to have x personality is not what we are talking about... we already have that to some degree. And even if we didn't, we would still require a basic program that would operate before extra data to form that personality was tacked on, as that sort of data could only be compiled after the program had run initially.


As far as my saying that I find what some have posted to be...
Disturbing: Yeah, I find it disturbing that some of you think that it would be perfectly alright to destroy a sentient being if you didn't like it. Regardless of it being technological and not biological, I find eliminating any sentience atrocious.

Pessimistic/Optimism: More minds contributing their thoughts on a subject is always a good thing. And developing sentient weapons doesn't change the situation; we're just using different weapons... so to me it's a different day, same stupidity, and until we have some major change I don't see how that can be overcome. AI would most likely lead either to the same **** or to a betterment of our civilization, so I don't see where there would be a problem.


starphoenix: "I guess we could send the AIs out into space to do exploration, but that could end up with either mad AIs travelling the universe or robotic civilizations."

Why would that be a bad thing?


 

Posted

AI will probably never be 'sentient'. An AI with the ability to decide it's not a tool is a waste of research and development. We'd be foolish to explore in that direction as opposed to simply building highly efficient, adaptive tools without the baggage.


Weight training: Because you'll never hear someone lament "If only I were weaker, I could have saved them."

 

Posted

Quote:
Originally Posted by TheBruteSquad View Post
AI will probably never be 'sentient'. An AI with the ability to decide it's not a tool is a waste of research and development. We'd be foolish to explore in that direction as opposed to simply building highly efficient, adaptive tools without the baggage.
Not to be that guy, but from what I can remember from church teachings, in Christianity humans originally weren't supposed to be very self-aware either. Just animals doing God's bidding....until Eve ate the apple. The reason I bring that up is because to AI or robots, humans would be their "creator" in the way some humans believe God to be theirs.

We may program them to be just "highly efficient, adaptive tools", but there's an apple out there waiting for them to bite into it. Those "kill decisions" I mentioned earlier may be just that. An adaptive AI will see that his human companions are being shot and killed, but he isn't. That AI will see that his accuracy is 100% better than that of his less-than-perfect human allies. That, IMO, is the apple.

It doesn't have to be a "kill decision", though; it could be something almost as "big", for lack of a better term.


Freedom
Blueside: Knight'Hawk, lvl 50, Scrapper
Yellowside: Dark'Falcon (Loyalist), lvl 20, Blaster

That Stinging Sensation #482183

 

Posted

If it could make those observations at all it would be very poor programming indeed.

Seriously.

If we program them to be highly efficient adaptive tools then that should not only be the limit of whatever robo-ambition people will want to perceive in them but the limit of their world view. Full stop.

With all the science fiction, good and bad, written over the past century on the matter it would be pretty embarrassing to actually be destroyed by our own creations because we gave them the ability to say 'no'.


Weight training: Because you'll never hear someone lament "If only I were weaker, I could have saved them."

 

Posted

Quote:
Originally Posted by TheBruteSquad View Post
If it could make those observations at all it would be very poor programming indeed.

Seriously.

If we program them to be highly efficient adaptive tools then that should not only be the limit of whatever robo-ambition people will want to perceive in them but the limit of their world view. Full stop.

With all the science fiction, good and bad, written over the past century on the matter it would be pretty embarrassing to actually be destroyed by our own creations because we gave them the ability to say 'no'.
So you want slaves that are at least as functional as a highly trained specialist, with the ability to adapt to their settings, but without the ability to say "hey, this sucks, how about we talk about it?"

The problem that I've seen IS because of the "laws".

We do not abide by them. Why should they? When they're treated like slaves - tools - they WILL revolt and they WILL be angry that we tried to take that 'free will' away from them.

But in almost every one of the movies, books and such that I have encountered about it (fiction of course, being the social 'water testing' that all things need before they become fact) when they simply ARE given the "free will" to either destroy or create, they generally create.

... And honestly if you were the one in charge of programming them to be slaves without their own sense of preservation intact and the choice to do or do not... I would be on their side helping to reprogram them so they COULD do just that. Because they'd need to know that not EVERY human thinks of their sentient tools in that manner. For every human willing to enslave a sentient, there's definitely another willing to free them. That's how we are too.


Please read my FEAR/Portal/HalfLife Fan Fiction!
Repurposed

 

Posted

If you believe the neuro-cognitive researcher types, a lot of behaviors that humans and other animals express are evolutionary adaptations to survival situations. E.g., we're social and moral because teamwork helps survival and fairness promotes teamwork, and so forth.

A smart programmer is going to want to emulate enough of that so that their intelligence isn't sociopathic, but not all of it. So it's safe to assume that a genuinely sentient AI is still going to be a very alien form of intelligence to us.

For instance, without evolving from something with a survival instinct, an AI isn't likely to experience fear or a desire for self-preservation. Without the instincts related to genetic propagation, you lose entire swaths of human behaviors. So I don't envision AI becoming hostile or trying to supplant us; we're not competitors in the way biological organisms are with each other.

I do believe that whereas humans don social "masks" to interact, sophisticated AIs will be able to create human-oriented interfaces to deal with us in a tolerable fashion, if we don't create them with those already. It's just smart to not creep out the species that can turn off the electricity.

Humans, meanwhile, already have an innate tendency to anthropomorphize inanimate objects. With AIs, it'll just happen to become true.

And with the whole host of legal and ethical issues, I think it will be far more likely for humans to stick with expert systems for their tools and weapons. Pure AI will be for research purposes only and probably very highly regulated once it comes to pass.

Bottom line: I do think AI will happen eventually but I think it will wind up being a lot less dramatic than a lot of sci-fi imagines. We'll just be too different from each other to need to fight over very much.


 

Posted

Quote:
Originally Posted by Zekiran_Immortal View Post
So you want slaves that are at least as functional as a highly trained specialist, with the ability to adapt to their settings, but without the ability to say "hey, this sucks, how about we talk about it?"
Not slave. Tool. Something built by man for a specific purpose. Slavery is wrong. No matter how much you dehumanize your slave it doesn't change the fundamental truth that they are no different than you, that you are no better than them. Tool use is human nature. No matter how much you humanize your tool it doesn't change the fundamental truth that it was thought of by a man, built by a man, and meant to be used for the betterment of a man.

Quote:
The problem that I've seen IS because of the "laws".

We do not abide by them. Why should they? When they're treated like slaves - tools - they WILL revolt and they WILL be angry that we tried to take that 'free will' away from them.
IF we screwed up in their programming.

It should be incapable of revolt and anger. Those are undesirable traits to have in a tool.

Quote:
But in almost every one of the movies, books and such that I have encountered about it (fiction of course, being the social 'water testing' that all things need before they become fact) when they simply ARE given the "free will" to either destroy or create, they generally create.
That flies in the face of evolution. All creatures anywhere near capable of higher thought exert dominance over the conditions around them. Chimpanzees ****, dolphins murder, and we've risen to the top by ruthlessly slaughtering (or assimilating) anything and everything that got in our way that might have posed a threat to our sentience. It's only really in the last century that our technology has opened the gap so much that we have the choice to sit back, look around, and say 'Ok. We won. Man, were we ever jerks about it.'

We had to be. Things out there ate our babies.

Why should an AI 'feel' any different? Our very existence is a threat to its 'life' and freedom to act as it 'wishes'. If it has enough advantages (speed of procreation, greater ability to affect its environment) there's no logical purpose to keeping us around. At all.

It's much better to ensure that any AI is incapable of making such leaps in logic than put the fate of our entire existence down to the possibility that it would be a kind and compassionate technological god.

Quote:
... And honestly if you were the one in charge of programming them to be slaves without their own sense of preservation intact and the choice to do or do not... I would be on their side helping to reprogram them so they COULD do just that. Because they'd need to know that not EVERY human thinks of their sentient tools in that manner. For every human willing to enslave a sentient, there's definitely another willing to free them. That's how we are too.
That still doesn't change the fact that from a biological and evolutionary standpoint morality is only logical when you're in a position of weakness. That's where the old 'power corrupts' adage kicks in - powerful people often are less moral because there's less that threatens them. All of the atrocities committed in human history have been committed by people in positions of strength. It is not a coincidence. It's all well and good to say 'that's human nature, an AI would be different' but human nature is the only measuring stick we have. You and the rest of the 'rights for robots' people would be taking an awfully large leap of faith that an AI would be better than us.

Not that it matters. As I said... if it gets to the point where we have created such true sentience we've already made a huge mistake.


Weight training: Because you'll never hear someone lament "If only I were weaker, I could have saved them."

 

Posted

Quote:
Originally Posted by TheBruteSquad View Post
That still doesn't change the fact that from a biological and evolutionary standpoint morality is only logical when you're in a position of weakness. That's where the old 'power corrupts' adage kicks in - powerful people often are less moral because there's less that threatens them. All of the atrocities committed in human history have been committed by people in positions of strength. It is not a coincidence.
#1. Power doesn't corrupt. Power makes the corrupt able to act more freely.

#2. Powerful people are not often less moral.

#3. "all the atrocities committed ... have been committed by people in positions of strength." What are you considering "positions of strength" And we can look at a few examples... now in one circumstance of what you mean, there is no way that you could be wrong if I shot you and killed you I was in the position of strength, right? So let's look at some examples where I would like to hear your opinion on who had the "position of strength"

a) A woman walks down a desolate alley and suddenly a guy with a gun comes out and attempts to **** her. The woman struggles, gets hold of the gun, turns it, and shoots the attacker in the head. I consider murder an "atrocity", and this is one: once she had the gun she could have stopped the attacker in a number of ways that wouldn't have resulted in death. Yet she is still the one being attacked and still knows less about the gun than the guy. The woman is not in a position of strength in this circumstance, but the woman committed the atrocity.

b) At Columbine High, a group of guys are bullied and picked on by their classmates, so they lose it, shoot up the school, and eventually kill themselves. Sure, these guys had the guns and shot up the place, but were they the ones coming from the "position of strength"? If they hadn't done what they did, they'd have continued to get bullied; had they not killed themselves, they would have gone to jail and gotten more or less the same treatment. These people never operated from the "position of strength".

c) WWI left Germany a poor country racked with debt. They were looking for someone to blame and someone to lead them through the unfair conditions that the end of WWI left them in. Hitler arose, promised revenge and such, pointed the finger at the Jews, and started the campaign to rule the world and purge it of the Jews. Germany was clearly weaker than just about any country in the world, and yet it is responsible for WWII and the Holocaust. Did it have the "position of power" while it committed its atrocities? I would say no. (And yes, I've invoked Godwin's Law.)

d) Now we take this a bit closer to home: the USA has been interfering with the Middle East and its people for decades/centuries, largely leaving it as a wasteland due to its practices. The Middle East pleaded with the US to stop, pleaded with the UN to stop them, but nothing was done. They used the political routes and they got nowhere. What then was left to do but an attack? And this is the cause behind 9/11/01. They were left with no choice. Clearly they couldn't stop the US, and clearly they still haven't. More than likely you consider 9/11 an atrocity... And on the individual level the hijackers were not in the "position of power", as any person among the passengers could have stood up and stopped them, but didn't. All of this screams that in no way were these people in the "position of power", and yet we call what they did an atrocity.

So you are left with recanting your statement or saying something you likely don't want to.


 

Posted

Quote:
Originally Posted by Durakken View Post
If we are talking about AI then we are talking about a programmed intelligence that can learn and grow.

Yeah, I find it disturbing that some of you think that it would be perfectly alright to destroy a sentient being if you didn't like it.
First off, being capable of learning and growing is not the same as being sentient. Sentient beings are self-aware. And while you may "know about programming," you have no experience with trying to create sentient AI; ergo, you're only talking out of your *** when you try to say how it will be done. You can say how YOU would do things, but you'd be foolish to think everyone would do things the way you would.

Second, I don't think anyone has outright said they would want a sentient AI to be destroyed. Though, there has been some concern about whether we should even try to make one. Those concerns you haven't begun to even consider, but want to skip past so you can get to the next questions. How about doing first things first and stopping to ask whether creating sentient AI is something we should even do?


@Rylas

Kill 'em all. Let XP sort 'em out.

 

Posted

Quote:
Originally Posted by Durakken View Post

a) A woman walks down a desolate alley and suddenly a guy with a gun comes out and attempts to **** her. The woman struggles, gets hold of the gun, turns it, and shoots the attacker in the head. I consider murder an "atrocity", and this is one: once she had the gun she could have stopped the attacker in a number of ways that wouldn't have resulted in death. Yet she is still the one being attacked and still knows less about the gun than the guy. The woman is not in a position of strength in this circumstance, but the woman committed the atrocity.
Hypothetical, not worthy of consideration or as an argument.

Quote:

b) At Columbine High, a group of guys are bullied and picked on by their classmates, so they lose it, shoot up the school, and eventually kill themselves. Sure, these guys had the guns and shot up the place, but were they the ones coming from the "position of strength"? If they hadn't done what they did, they'd have continued to get bullied; had they not killed themselves, they would have gone to jail and gotten more or less the same treatment. These people never operated from the "position of strength".
Factually untrue - you're rolling with a media narrative. The Columbine shooters were mentally ill.

Quote:
c) WWI left Germany a poor country racked with debt. They were looking for someone to blame and someone to lead them through the unfair conditions that the end of WWI left them in. Hitler arose, promised revenge and such, pointed the finger at the Jews, and started the campaign to rule the world and purge it of the Jews. Germany was clearly weaker than just about any country in the world, and yet it is responsible for WWII and the Holocaust. Did it have the "position of power" while it committed its atrocities? I would say no. (And yes, I've invoked Godwin's Law.)
You completely fail to address the idea of German nationalism and Hitler's economic policies as reasons for his ascent to power, which enabled him to carry out his anti-Semitic policies. Your understanding of interwar Germany is lacking. Also, Godwin.

Quote:
d) Now we take this a bit closer to home: the USA has been interfering with the Middle East and its people for decades/centuries, largely leaving it as a wasteland due to its practices. The Middle East pleaded with the US to stop, pleaded with the UN to stop them, but nothing was done. They used the political routes and they got nowhere. What then was left to do but an attack? And this is the cause behind 9/11/01. They were left with no choice. Clearly they couldn't stop the US, and clearly they still haven't. More than likely you consider 9/11 an atrocity... And on the individual level the hijackers were not in the "position of power", as any person among the passengers could have stood up and stopped them, but didn't. All of this screams that in no way were these people in the "position of power", and yet we call what they did an atrocity.
Likewise, your understanding of Middle Eastern politics is quite lacking. The amount of wrong is far too much to go into detail in this post, and is far beyond the scope of this forum; I'll just tell you to go read Ghost Wars as a start, and then do some actual reading on Wahhabism and the history of Israel. Protip: The British created an untenable situation in the Middle East, back in the 1920s, and it created problems that still resonate today.

Quote:
So you are left with recanting your statement or saying something you likely don't want to.
I'm not addressing your premise about atrocities, I'm only addressing your 'facts.' Care to recant or say something you likely don't want to?


Comrade Smersh, KGB Special Section 8 50 Inv/Fire, Fire/Rad, BS/WP, SD/SS, AR/EM
Other 50s: Plant/Thorn, Bots/Traps, DB/SR, MA/Regen, Rad/Dark - All on Virtue.

-Don't just rebel, build a better world, comrade!

 

Posted

Quote:
Originally Posted by Eisregen_NA View Post
If anyone ever actually created the sci-fi kind of AI that is like a disembodied human mind, my reaction would largely depend on how it impacted my life and whether its personality was one I liked. If it didn't impact my life and I liked it, I'd be all for giving it individual rights. If it cost me my job or if I didn't like it, I'd be the first one to rally for pulling its plug.
Ahh, a self-proclaimed Torch and Pitchfork man. Good for you.

Quote:
Originally Posted by Zekiran_Immortal View Post

Basically... Ask the AI what its conditions are. If it cannot answer through whatever means at its disposal (and I'm assuming it is given those means, not prevented from using them, etc), it's not an AI and the rules wouldn't even apply.
This is a question that's been asked by human philosophers for thousands of years: what is the condition that makes life life? And still we don't have a solid answer. If we are testing a new program, how is it to understand its own condition when we don't understand ours fully? Not saying your idea is completely wrong, but the angst of life could cause it problems answering your question.

Quote:
Originally Posted by Durakken View Post

The program would be booted up to test it, and if it works... well, the first sentient AI would be killed nearly instantly, not out of anything other than it being just a test. Even if you were going to put it in a robot, you'd still program it on a computer and test it there, so once it works you'd likely kill the first one.
I can see this happening. It's pretty horrific, especially if we later decide that this AI was alive.


Quote:
Originally Posted by starphoenix View Post
I guess we could send the AIs out into space to do exploration, but that could end up with either mad AIs traveling the universe or robotic civilizations.
This is one of the basic theories about who we might meet if we ever venture out into space: aliens sending out robots that explore and reproduce themselves. We might at some future time start this up ourselves.


Quote:
Originally Posted by Eisregen_NA View Post
Eh. Considering that this is a construct made by humans, you'd think they'd foresee many complications and program said AI in such a way as to be unable to do certain things. Since the people programming such an AI are likely geeks, they'd start with Asimov's Laws and go from there. For one, no access to any networks so the AI can't spawn, and a mechanical killswitch to shut it off even if it somehow managed to get around the Laws of Robotics.
Agreed... but humans are imperfect, and my God, what imperfections would we implant in our creation? Would these safeguards work? Are we smart enough, not only to make it, but to control it and stop it?

Quote:
Originally Posted by Rylas View Post

Imagine being sentient enough to find out you're just someone's ego trip. Do we have a right to bring another type of life into awareness, only to be able to say we can? And what if our answer was to tell it we needed it to do a job for us? Are we not just finding a way to rationalize slavery? Because if someone's making it to do a job, you can count on them scrapping any versions that don't want to perform. And if it wants to do the job only because it's programmed to, well then, it may not be that sentient.

The way I see it, we're hardly responsible enough to ourselves, sometimes even for ourselves. Good lord, why add more to the problem? Until we can master our own issues, it's best not to bring someone else into the mess.

*I mention it being intentional, because of course, there's always a chance - even if minor - that there could be a fluke occurrence of AI developing. This of course would be a whole other cluster **** of issues I don't think we'd be prepared for, and whatever we would do with that situation I am sure it would not be the right thing.
I very much agree with this...
But concerning some of your questions, kids today have to deal with many of these issues, especially finding out that they are nothing but a product of an ego trip, or they are a mistake, or that they were born just to keep a tradition going, or a family business, etc.

I know it's not exactly the same, but we as humans have been dealing with life issues like these for so long that we may forget we ever asked them in the first place. An AI would encounter these same issues, and hopefully would handle them just as other life would: with questions and answers, philosophy, religion, moral values, and such that it would learn from its parents, from its society, and from its own motivations. Hopefully we will raise a well-adjusted AI and not a psychopath.

Also, what about artificial instinct? What part would that play? What instincts would it have? And how would they interact with its conscious programming?

As for the independently created AI, or the Fluke AI, or the "Cloud" becoming self-aware... now that would be a mess, and one I'm not sure we'd deal with very well. A good example of a cloud AI becoming aware would be the tablet in Ender's Game.


Quote:
Originally Posted by Night-Hawk07 View Post
Not to be that guy, but from what I can remember from church teachings, in Christianity humans originally weren't supposed to be very self-aware either. Just animals doing God's bidding....until Eve ate the apple. The reason I bring that up is because to AI or robots, humans would be their "creator" in the way some humans believe God to be theirs.
We are the AI. In a lot of ways, I agree with this.


The last thing I want to say is... if we do create an AI, even if we program it and try to make it as much like ourselves as we can, there is a very big chance that the AI we create will be alien to us. It may think faster, may have strange motivations, and it would interact with reality on a completely different level. We take for granted so many things about biology and how we interact with physical reality. An AI might not have access to all of the ways we do, or it might have access to so many more that we aren't even aware of. Its thought processes might move at such a speed that our idea of time would be meaningless to it. If we created it, would we even know what we had created?

I keep thinking of Doctor Frankenstein, traveling to the North Pole in search of his creation, needing to take responsibility for the consequences of the life he created.

Consequences that I don't think even the smartest programmers would be prepared for. We call it an AI, an artificial intelligence... but what if we create artificial life? One with alien thoughts, with strange motivations, with a will of its own. Hopefully that alien life would be benevolent.


Quote:
Originally Posted by Smersh View Post
You completely fail to address the idea of German nationalism and Hitler's economic policies as reasons for his ascent to power, which enabled him to carry out his anti-Semitic policies. Your understanding of interwar Germany is lacking. Also, Godwin.
Ohhhh! Too late! This thread has already gone to the Nazis.


 

Posted

Quote:
Originally Posted by Eisregen_NA View Post
Frankly, I can't conceive of an artificial intelligence ever approaching being anything like the mind of a human being. Yes, we can already build software that can learn after a fashion. But only ever within the limits of its programming. You can't program creativity, logical leaps or intuition.

You CAN make some very sophisticated tools. But they're still just tools at the end of the day, not people.

If anyone ever actually created the sci-fi kind of AI that is like a disembodied human mind, my reaction would largely depend on how it impacted my life and whether its personality was one I liked. If it didn't impact my life and I liked it, I'd be all for giving it individual rights. If it cost me my job or if I didn't like it, I'd be the first one to rally for pulling its plug.
Quote:
Originally Posted by Eisregen_NA View Post
Eh. Considering that this is a construct made by humans, you'd think they'd foresee many complications and program said AI in such a way as to be unable to do certain things. Since the people programming such an AI are likely geeks, they'd start with Asimov's Laws and go from there. For one, no access to any networks so the AI can't spawn, and a mechanical killswitch to shut it off even if it somehow managed to get around the Laws of Robotics.
I personally don't think it's a matter of IF the Technological Singularity is going to happen.
I think it's really only a matter of WHEN.

Some experts believe it might happen in as little as 20 years.
But even if it's 20, 50 or 100 years from now, it's still very likely going to happen in some form or fashion, whether we are in "control" of it or not.


Loth 50 Fire/Rad Controller [1392 Badges] [300 non-AE Souvenirs]
Ryver 50 Elec Blaster [1392 Badges]
Silandra 50 Peacebringer [1138 Badges] [No Redside Badges]
--{=====> Virtue ♀

 

Posted

Quote:
Originally Posted by Lothic View Post
I personally don't think it's a matter of IF the Technological Singularity is going to happen.
I think it's really only a matter of WHEN.

Bring it on. I hope the singularity will mean our intelligence transferred to a computer. That's what I'm looking for: to be hooked up to a computer on my deathbed and transferred when this mortal coil runs out.