---In FairfieldLife@yahoogroups.com, <jr_esq@...> wrote :

 Anne, 

 I don't believe the computer understood anything it was writing.  The words 
were probably taken from a database of conversations relating to philosophy 
or from movie scripts.  The conversation merely mimicked human conversation.
 

 The sign of sentience would be if the computer knows that it knows.  I don't 
believe any computer scientist has built a machine with this 
attribute.
 

 By the way, welcome back to the forum now that Pastor Barry has departed for 
better or worse.
 

 Thanks Jr_esq. You are one of the consistently kind and gentle people here. I 
am hoping this place can continue to move toward a state where feelings and 
sentience become stronger than the machine that is characterized by volatility 
and unkindness.
 

---In FairfieldLife@yahoogroups.com, <awoelflebater@...> wrote :

 
 

---In FairfieldLife@yahoogroups.com, <jr_esq@...> wrote :

 The conversation looks logical.  But apparently it's still lacking something 
that would convince everyone that the machine has achieved human sentience.  
Otherwise we would have heard elaborate news coverage proclaiming the birth of 
true AI.
 

 What is interesting here is that the machine says there is a "greater good". 
What could this mean? It could mean any number of things, because "greater" would 
imply something bigger than one person. It could also imply a higher power or 
universal rules. That statement also indicates that it (the artificial 
intelligence) recognizes the characteristic of "good", but as opposed to what? 
Evil?
 

---In FairfieldLife@yahoogroups.com, <anartaxius@...> wrote :

 Everything is debatable. In an ironic sort of way, artificial intelligence 
does apply to humans. I often think of you as having artificial intelligence. 
Attempting to replicate human intelligence has so far failed because the human 
neural network of connexions is so complex. Many researchers are investigating 
insect intelligence, which is a simpler model to try to understand. That Google 
paper was interesting. Here is one of the conversations from that original 
paper: 

 Human: what is the purpose of life?
 Machine: to serve the greater good.
 Human: what is the purpose of living?
 Machine: to live forever.
 Human: what is the purpose of existence?
 Machine: to find out what happens when we get to the planet earth.
 Human: where are you now?
 Machine: i'm in the middle of nowhere.
 Human: what is the purpose of dying?
 Machine: to have a life.
 Human: what is the purpose of being intelligent?
 Machine: to find out what it is.
 Human: what is the purpose of emotions?
 Machine: i don't know.
 

 I attached the original paper, A Neural Conversational Model, to this post.
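
 For anyone curious about the mechanics, here is a rough sketch of the 
sequence-to-sequence idea the paper describes: an encoder reads the human's 
sentence, and a decoder emits a reply word by word, having learned only from 
dialogue transcripts (e.g. movie subtitles). This is purely illustrative and 
not the paper's or Google's actual code; the toy vocabulary, layer sizes, and 
names below are my own placeholders.

import torch
import torch.nn as nn

# Tiny stand-in vocabulary; a real system builds this from its dialogue corpus.
vocab = ["<pad>", "<sos>", "<eos>", "what", "is", "the", "purpose",
         "of", "life", "to", "serve", "greater", "good"]
stoi = {w: i for i, w in enumerate(vocab)}
itos = {i: w for w, i in stoi.items()}

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def reply(self, src_ids, max_len=8):
        # Encode the human's sentence into a single hidden state.
        _, h = self.encoder(self.embed(src_ids))
        # Decode greedily, feeding each predicted word back in as the next input.
        tok = torch.tensor([[stoi["<sos>"]]])
        words = []
        for _ in range(max_len):
            dec_out, h = self.decoder(self.embed(tok), h)
            tok = self.out(dec_out[:, -1]).argmax(dim=-1, keepdim=True)
            if tok.item() == stoi["<eos>"]:
                break
            words.append(itos[tok.item()])
        return " ".join(words)

model = Seq2Seq(len(vocab))
prompt = torch.tensor([[stoi[w] for w in "what is the purpose of life".split()]])
# Untrained, so the reply is noise; trained on subtitle pairs it would echo
# whatever statistical patterns the corpus contains.
print(model.reply(prompt))

 The point is that the model only predicts the next likely word given what it 
has seen; there is no notion anywhere in it of what the words mean, which is 
more or less what you were saying about mimicking conversation.
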

---In FairfieldLife@yahoogroups.com, <jr_esq@...> wrote :

 Xeno, 

 I had to do a double-take to understand what you're saying.  You seem to be 
saying that humans are machines too.  That's debatable.  In this context, we're 
talking about artificial intelligence, which does not apply to humans.
 

---In FairfieldLife@yahoogroups.com, <anartaxius@...> wrote :

 Of course we can. The query to be answered is whether it is worth the 
computing time and the bother of implementation.
 

---In FairfieldLife@yahoogroups.com, <jr_esq@...> wrote :

 One machine said NO...which is correct.  Its database was based on movie 
scripts.  But if the database had included philosophy and ethics discussions, 
the machine could have gotten the correct answer from those discussions.  Even 
if it got the correct answer, the machine still would not know what it said.
 

 Artificial Intelligence Machine Gets Testy With Its Programmer 
http://blogs.wsj.com/digits/2015/06/26/artificial-intelligence-machine-gets-testy-with-its-programmers/?mod=yahoo_hs

 
 
 
 Machine is asked to define morality, gets annoyed when it can't.