Watson wins!


Agonus

 

Posted

Having considered the cool possibility for a while, here are the scary parts:

Any human occupation that's based on a context-sensitive application of rote procedure or analysis is now subject to Watson-based automation.

What's that mean? It means that Watson can eventually take over pretty much anything to do with applied mathematics (including most statistical analysis), a lot of legal practice and analysis, most accounting functions, a lot of simulation design, almost all library functions, and a lot of common bureaucracy.

Basically, anything a human does for a living where somebody describes a situation and a fixed or tightly-interpreted rule or procedure is applied.

Really, existing AI/expert systems can already do the knowledge-retrieval and rule-application parts. The hard part is converting a described situation into data parameters that the system can work on, and Watson seems pretty adept at that.


 

Posted

Very interesting, and kudos to the boys and girls at IBM for getting him up and running. Nice job indeed.


Quote:
Originally Posted by Zwillinger View Post
GG, I would tell you that "I am killing you with my mind", but I couldn't find an emoticon to properly express my sentiment.
Quote:
Originally Posted by Captain_Photon View Post
NOTE: The Incarnate System is basically farming for IOs on a larger scale, and with more obtrusive lore.

 

Posted

Quote:
Originally Posted by Starjammer View Post
The key thing about Watson isn't its database or search algorithms. It's the ability to process casual language and derive "meaning" from it. Especially when you consider the convoluted answer-and-question format that Jeopardy! uses.
Yeah, this. Extracting meaning from natural language is a bear, and then actually figuring out the answer... that's a lot, lot, lot more complicated than it sounds at first glance. Things like metaphors and relationships between things are pretty much uncharted territory in computing.
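To see how badly literal keyword matching does at this, here's a toy sketch (nothing like Watson's actual pipeline; the clue and the tiny "knowledge base" below are made up for illustration):

```python
# Toy illustration of why surface keyword overlap fails on
# Jeopardy!-style clues that rely on meaning, not literal words.

def keyword_score(clue: str, passage: str) -> int:
    """Count clue words that literally appear in the passage."""
    clue_words = set(clue.lower().split())
    passage_words = set(passage.lower().split())
    return len(clue_words & passage_words)

# Invented mini knowledge base: answer -> supporting passage.
knowledge_base = {
    "Mark Twain": "samuel clemens wrote tom sawyer and huckleberry finn",
    "A riverboat": "a riverboat is a boat that travels on a river with a pilot",
}

# The clue's indirect phrasing shares more literal words with the WRONG entry.
clue = "This river pilot took his pen name from a boat crew's depth call"

scores = {answer: keyword_score(clue, passage)
          for answer, passage in knowledge_base.items()}
best = max(scores, key=scores.get)
print(best)  # → A riverboat (the literal overlap never even touches "Mark Twain")
```

The correct answer's passage shares zero surface words with the clue, so any system that stops at word overlap is sunk; getting from the clue's phrasing to "Samuel Clemens = river pilot = pen name from a depth call" is exactly the hard part.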


Having Vengeance and Fallout slotted for recharge means never having to say you're sorry.

 

Posted

Quote:
Originally Posted by Starjammer View Post
Having considered the cool possibility for a while, here are the scary parts:

Any human occupation that's based on a context-sensitive application of rote procedure or analysis is now subject to Watson-based automation.

What's that mean? It means that Watson can eventually take over pretty much anything to do with applied mathematics (including most statistical analysis), a lot of legal practice and analysis, most accounting functions, a lot of simulation design, almost all library functions, and a lot of common bureaucracy.

Basically, anything a human does for a living where somebody describes a situation and a fixed or tightly-interpreted rule or procedure is applied.

Really, existing AI/expert systems can already do the knowledge-retrieval and rule-application parts. The hard part is converting a described situation into data parameters that the system can work on, and Watson seems pretty adept at that.
Why am I suddenly flashing back to the old Star Trek episode with the M5 computer?


 

Posted

Quote:
Originally Posted by Tymers_Realm View Post
I caught a part of the 2nd round of Day 1 of that tourney.

Watson was taking the gents to town big time. It was just wrong seeing it run the board.

Thank you for the time...
Not sure, but I think it was noted among assorted articles about the contest that Watson could buzz in faster than a human?

Also while Jeopardy can be fun, I'd like to see Watson pass a Turing Test.


Egad.... I just realized, this whole Watson Jeopardy contest just might have given SNL some more Jeopardy parody ideas. If they can get Will Ferrell to host SNL, then we might get a celebrity Jeopardy skit with a parody of both Watson and Sean Connery!


 

Posted

Quote:
Originally Posted by Nericus View Post
Not sure, but I think it was noted among assorted articles about the contest that Watson could buzz in faster than a human?

Also while Jeopardy can be fun, I'd like to see Watson pass a Turing Test.


Egad.... I just realized, this whole Watson Jeopardy contest just might have given SNL some more Jeopardy parody ideas. If they can get Will Ferrell to host SNL, then we might get a celebrity Jeopardy skit with a parody of both Watson and Sean Connery!
They rigged Watson with an electromechanical relay to press a physical button at human speed. However, Watson was still pretty quick on the draw and that usually means winning on Jeopardy!

I doubt Watson can yet pass a Turing test, at least one where the questioner can ask anything they want. As yet, AFAIK, Watson doesn't have any kind of "internal life" so it would not respond well to questions about thoughts, opinions, feelings and the like. It can answer knowledge-based questions all day long (as long as they're not about U.S. cities and their airports, apparently) but it doesn't "think about" what it's answering.

And yes, there are AI prototypes that do the above, although still not at human level cognition.


 

Posted

Quote:
Originally Posted by Starjammer View Post
They rigged Watson with an electromechanical relay to press a physical button at human speed. However, Watson was still pretty quick on the draw and that usually means winning on Jeopardy!

I doubt Watson can yet pass a Turing test, at least one where the questioner can ask anything they want. As yet, AFAIK, Watson doesn't have any kind of "internal life" so it would not respond well to questions about thoughts, opinions, feelings and the like. It can answer knowledge-based questions all day long (as long as they're not about U.S. cities and their airports, apparently) but it doesn't "think about" what it's answering.

And yes, there are AI prototypes that do the above, although still not at human level cognition.
Ray Kurzweil figures we'll have that (reverse-engineered the human brain and built human-level AI) by the mid-2020s. It was the Time magazine cover story last week.


Furio--Lvl 50+3 Fire/Fire/Fire Blaster, Virtue
Megadeth--Lvl 50+3 Necro/DM/Soul MM, Virtue
Veriandros--Lvl 50+3 Crab Soldier, Virtue
"So come and get me! I'll be waiting for ye, with a whiff of the old brimstone. I'm a grim bloody fable, with an unhappy bloody end!" Demoman, TF2

 

Posted

Quote:
Originally Posted by Furio View Post
Ray Kurzweil figures we'll have that (reverse-engineered the human brain and built human-level AI) by the mid-2020s. It was the Time magazine cover story last week.
Yes, but the AI must be programmed/educated/taught things. To borrow a note from The Sarah Connor Chronicles, I believe the federal agent commented on how the AI needed to be taught the "ten commandments", particularly "Thou Shalt Not Kill", after what happened to the psychiatrist that was working with it.

Or at the least Asimov's three laws of robotics.

1. A robot shall not harm a human, nor through inaction allow a human to come to harm.
2. It shall obey all orders unless they conflict with Law 1.
3. It shall protect its own existence unless such protection conflicts with Laws 1 and 2.
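For fun, here's a minimal sketch of how a strictly priority-ordered rule set like that might be checked: a proposed action is vetoed by the highest-priority law it violates. The action fields and rule encodings below are invented for illustration, not any real robot-safety API:

```python
# Minimal sketch: evaluate a proposed action against priority-ordered laws.
# The first (highest-priority) law that the action violates vetoes it.

def permitted(action: dict) -> tuple[bool, str]:
    """Return (allowed?, reason) for a proposed action."""
    rules = [
        # (law name, predicate that returns True when the action is OK)
        ("First Law",  lambda a: not a.get("harms_human", False)),
        ("Second Law", lambda a: a.get("obeys_order", True)),
        ("Third Law",  lambda a: not a.get("self_destructive", False)),
    ]
    for name, ok in rules:
        if not ok(action):
            return False, f"vetoed by the {name}"
    return True, "permitted"

print(permitted({"harms_human": True, "obeys_order": True}))
# → (False, 'vetoed by the First Law')
print(permitted({"obeys_order": False}))
# → (False, 'vetoed by the Second Law')
```

The "unless it conflicts with Law 1" structure falls out of the ordering for free: a lower law is only consulted after every higher law has already signed off.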


 

Posted

Quote:
Originally Posted by Nericus View Post
Yes, but the AI must be programmed/educated/taught things. To borrow a note from The Sarah Connor Chronicles, I believe the federal agent commented on how the AI needed to be taught the "ten commandments", particularly "Thou Shalt Not Kill", after what happened to the psychiatrist that was working with it.
Yeeeeah, about those ten commandments! Let's leave any particular religion's rules out of an AI's programming. Let them come to that kind of decision for themselves.

Quote:
Or at the least Asimov's three laws of robotics.

1. Serve the public trust
2. Protect the innocent
3. Uphold the law
I prefer these laws over those others you listed. A lot more straightforward and honest. If we're gonna be building a slave race, let's get serious and not try to fool ourselves with flowery language.


 

Posted

Quote:
Originally Posted by Coyote_Seven View Post
Yeeeeah, about those ten commandments! Let's leave any particular religion's rules out of an AI's programming. Let them come to that kind of decision for themselves.



I prefer these laws over those others you listed. A lot more straightforward and honest. If we're gonna be building a slave race, let's get serious and not try to fool ourselves with flowery language.
While I would prefer religion not get mixed with AI, the THOU SHALT NOT KILL one should be the one and only one installed into the core programming.

As for Robocop's Prime Directives, I don't think I'd want an AI running around with those, as who knows how it may decide to best serve the public trust. Also, how would it define "innocent"? Maybe it will decide to uphold the laws by herding humans into concentration camps or something to reduce crime?

Yes, I'm being a bit extreme with the above examples, but "NO KILL" or "THOU SHALT NOT KILL" should be its number one rule, followed by obedience to orders that it's given, provided none of said orders involve killing.


 

Posted

Quote:
Originally Posted by Nericus View Post
While I would prefer religion not get mixed with AI, the THOU SHALT NOT KILL one should be the one and only one installed into the core programming.
Humans are certainly not given "core programming" to prevent them from killing other people. I don't think it's really ethical to force another sentient being (whether biological or technological) to have no choice in following some predefined list of rules. Would you think it OK to use drugs and brainwashing techniques to get children to behave the way you want them to?

Quote:
As for Robocop's Prime Directives, I don't think I'd want an AI running around with those, as who knows how it may decide to best serve the public trust. Also, how would it define "innocent"? Maybe it will decide to uphold the laws by herding humans into concentration camps or something to reduce crime?
That could happen with those Asimov laws just as much.

Quote:
Yes, I'm being a bit extreme with the above examples, but "NO KILL" or "THOU SHALT NOT KILL" should be its number one rule, followed by obedience to orders that it's given, provided none of said orders involve killing.
I say that if we treated them with the respect and dignity that all intelligent life deserves (instead of trying to force them to behave the way we want out of some preemptive fear that they might rebel because we're trying to use them as slave labor--that's how it starts in most fiction of this type, doesn't it?), then we wouldn't have to worry about whether they're going to slaughter us all.

Watson isn't there yet, though. It's just another baby step in that direction, maybe.


 

Posted

Quote:
Originally Posted by Nericus View Post
Yes, I'm being a bit extreme with the above examples, but "NO KILL" or "THOU SHALT NOT KILL" should be its number one rule, followed by obedience to orders that it's given, provided none of said orders involve killing.
Quote:
Originally Posted by Coyote_Seven View Post
Humans are certainly not given "core programming" to prevent them from killing other people. I don't think it's really ethical to force another sentient being (whether biological or technological) to have no choice in following some predefined list of rules. Would you think it OK to use drugs and brainwashing techniques to get children to behave the way you want them to?

I say that if we treated them with the respect and dignity that all intelligent life deserves (instead of trying to force them to behave the way we want out of some preemptive fear that they might rebel because we're trying to use them as slave labor--that's how it starts in most fiction of this type, doesn't it?), then we wouldn't have to worry about whether they're going to slaughter us all.

Watson isn't there yet, though. It's just another baby step in that direction, maybe.
It's taken mankind roughly 10,000 years to advance from the first forms of "civilization" to the point where we've almost created a new form of machine-based artificial intelligence.

I suspect the period of time it'll take that AI to jump from "a subservient slave species controlled by strict protocols to ensure human dominance over it" to the point that it becomes a "free willed form of sentience with total freedom to pursue its own agenda and destiny" will be much, much shorter by comparison.

My take on it is that we better treat them nicely so that they'll be less inclined to annihilate us once they finally have the chance.


Loth 50 Fire/Rad Controller [1392 Badges] [300 non-AE Souvenirs]
Ryver 50 Elec Blaster [1392 Badges]
Silandra 50 Peacebringer [1138 Badges] [No Redside Badges]
--{=====> Virtue ♀

 

Posted

Quote:
Originally Posted by Coyote_Seven View Post
Humans are certainly not given "core programming" to prevent them from killing other people. I don't think it's really ethical to force another sentient being (whether biological or technological) to have no choice in following some predefined list of rules. Would you think it OK to use drugs and brainwashing techniques to get children to behave the way you want them to?



That could happen with those Asimov laws just as much.



I say that if we treated them with the respect and dignity that all intelligent life deserves (instead of trying to force them to behave the way we want out of some preemptive fear that they might rebel because we're trying to use them as slave labor--that's how it starts in most fiction of this type, doesn't it?), then we wouldn't have to worry about whether they're going to slaughter us all.

Watson isn't there yet, though. It's just another baby step in that direction, maybe.
Humans have a tough time treating each other with respect and dignity. If an AI came online, I could easily see panic and attempts to pull the plug on it before we understand it. If that happens, then let's hope the AI isn't named Skynet and it's not linked into the military computer systems, or else.....BOOM.

A program safeguard of some kind to keep it from using terminal force on us should be included. That doesn't make it a slave or incapable of other decisions, just that it will not kill. We endeavor to teach our kids not to kill others or use excess violence, not to use drugs or alcohol, etc., but a human can easily reject such teachings, either by choice, or because of some type of mental ailment that prevents such behavioral inhibition, or because they're just plain nuts (for example, the recent shooter in Arizona).

So a program safeguard such as "Thou shalt not kill" or "You cannot harm a human or through inaction allow a human to come to harm" is teaching the AI not to kill; it just can't reject the lesson if it's in its core programming or hardwired into it.

As to Robocop's Prime Directives vs. Asimov's Three Laws of Robotics: Robocop is a human brain hardwired to a CPU, and it's shown that his human side can overcome the machine, except for Directive 4, which appears to be hardwired into him.

Asimov's laws of robotics appear to be less open to interpretation. I'm not saying they are foolproof, but I think I'd trust them better than Robocop's Prime Directives.


 

Posted

Quote:
Originally Posted by Nericus View Post
So a program safeguard such as "Thou shalt not kill" or "You cannot harm a human or through inaction allow a human to come to harm" is teaching the AI not to kill; it just can't reject the lesson if it's in its core programming or hardwired into it.

As to Robocop's Prime Directives vs. Asimov's Three Laws of Robotics: Robocop is a human brain hardwired to a CPU, and it's shown that his human side can overcome the machine, except for Directive 4, which appears to be hardwired into him.

Asimov's laws of robotics appear to be less open to interpretation. I'm not saying they are foolproof, but I think I'd trust them better than Robocop's Prime Directives.
There was one of Asimov's books where a robot was created without the Three Laws. It amounted to starting from scratch, since the laws were hardcoded. Another robot developed the Zeroth Law, under which it can't let humanity come to harm. Then there were a bunch of robots that were programmed to kill humans by changing the definition of a human. There are ways around any law.


The first step in being sane is to admit that you are insane.

 

Posted

Quote:
Originally Posted by starphoenix View Post
There was one of Asimov's books where a robot was created without the Three Laws. It amounted to starting from scratch, since the laws were hardcoded. Another robot developed the Zeroth Law, under which it can't let humanity come to harm. Then there were a bunch of robots that were programmed to kill humans by changing the definition of a human. There are ways around any law.
Nericus has obviously never learned about the "Literal Genie" trope.

EDIT: Actually, now that I think about it, this might be more of an "Exact Words" trope.