RE: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread John G. Rose
Well, these artificial identities need to complete a loop. Say the
artificial identity acquires an email address, a phone number, a physical
address, and a bank account, and logs onto Amazon to purchase stuff
automatically; it then needs to be able to put money into its bank account.
So let's say it has a low-profit scheme to scalp day-trading profits with
its stock trading account. That's the loop: it has to be able to make money
to make purchases, and then automatically file its taxes with the IRS. Then
it's really starting to look like a full, legally functioning identity. It
could persist in this fashion for years.
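
To make the loop concrete, here is a minimal Python sketch. Every service it
touches (broker, bank, storefront, tax filing) is a stand-in stub invented
for illustration, not a real API:

    import random

    class ArtificialIdentity:
        def __init__(self, email, phone, address):
            self.email = email
            self.phone = phone
            self.address = address
            self.bank_balance = 100.0  # seed funds

        def scalp_trades(self):
            # Low-profit day-trading scheme: stub returning a small random P&L.
            return random.uniform(-1.0, 3.0)

        def buy_supplies(self, budget):
            # Automated purchasing (stub): spend part of the balance.
            spend = min(budget, self.bank_balance * 0.1)
            self.bank_balance -= spend
            return spend

        def file_taxes(self, yearly_profit):
            # Set aside a flat fraction of trading profit for the IRS (stub).
            tax = max(0.0, yearly_profit) * 0.25
            self.bank_balance -= tax
            return tax

        def run_year(self, trading_days=250):
            # The loop: earn, spend, then file taxes once per year.
            profit = 0.0
            for _ in range(trading_days):
                pnl = self.scalp_trades()
                self.bank_balance += pnl
                profit += pnl
                self.buy_supplies(budget=5.0)
            self.file_taxes(profit)
            return self.bank_balance

    if __name__ == "__main__":
        bot = ArtificialIdentity("bot@example.com", "555-0100", "PO Box 1")
        for year in range(3):
            print("year %d: balance %.2f" % (year, bot.run_year()))

As long as run_year ends with a higher balance than it started, the identity
is self-sustaining and can persist indefinitely, which is the point of the
loop.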

 

I would bet that these identities already exist. What happens when there are
many, many of them? Would we even know? 

 

John

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 
Sent: Saturday, August 07, 2010 8:17 PM
To: agi
Subject: Re: [agi] Epiphany - Statements of Stupidity

 

Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=

On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

I wanted to see what other people's views were. My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical.

 

If on the other hand the TM was not isomorphic and made no attempt to be,
the dangers would be a lot less. Most Turing/Löbner entries are chatterboxes
that work on databases, the database being filled as you chat. Clearly the
system cannot go outside its database and is safe.

 

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

 

Suppose it logged onto a jihadist website: as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the true
Turing/Löbner test is not working in a laboratory environment; rather,
systems should log onto jihadist sites and see how well they can pass
themselves off. If a system could do that it really would have arrived.
Eventually it could pass itself off as a pentito, to use the Mafia term,
and produce arguments from the Qur'an against the militant position.

 

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.

 

 

  - Ian Parker 

On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

 Philosophical question 2 - Would passing the TT assume human stupidity and
 if so would a Turing machine be dangerous? Not necessarily, the Turing
 machine could talk about things like jihad without ultimately identifying
 with it.


Humans without augmentation are only so intelligent. A Turing machine would
be potentially dangerous, a really well built one. At some point we'd need
to see some DNA as ID of another extended TT.


 Philosophical question 3 :- Would a TM be a psychologist? I think it would
 have to be. Could a TM become part of a population simulation that would
 give us political insights?


You can have a relatively stupid TM or a sophisticated one just like humans.
It might be easier to pass the TT by not exposing too much intelligence.

John


 These 3 questions seem to me to be the really interesting ones.


   - Ian Parker






Re: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread Ian Parker
If you have a *physical* address, an avatar needs to *physically* be there. -
Roxxy lives here with her friend Miss Al-Fasaq the belly dancer.

Chat lines as Steve describes are not too difficult. In fact the girls
(real) on a chat site have a sheet in front of them that gives the
appropriate response to a variety of questions. The WI (Women's Institute)
did an investigation of the sex industry, and one volunteer actually became
a *chatterbox*.

Do such entities exist? Probably not in the sex industry, at least not yet.
Why do I believe this? Basically because if the sex industry were moving in
this direction it would without a doubt be looking at some metric of brain
activity to give the customer the best erotic experience. You don't ask "Are
you gay?" You have men making love to men, women-men and women-women. Find
out what gives the customer the biggest kick. You set the story of the porn
video.

In terms of security I am impressed by the fact that large numbers of bombs
have been constructed that don't work and could not work. Hydrogen peroxide
(http://en.wikipedia.org/wiki/Hydrogen_peroxide) can only be prepared in the
pure state by chemical reactions. It is unlikely (see notes on vapour
pressure at 50°C) that anything viable could be produced by distillation on
a kitchen stove.

Is this due to deliberately misleading information? Have I given the game
away? Certainly misleading information is being sent out. However it
is probably not being sent out by robotic entities. After all nothing has
yet achieved Turing status.

In the case of sex it may not be necessary for the client to believe that he
is confronted by a *real woman*. A top-of-the-range masturbator/sex aid may
not have to pretend to be anything else.


  - Ian Parker

On 8 August 2010 07:30, John G. Rose johnr...@polyplexic.com wrote:

 Well, these artificial identities need to complete a loop. Say the
 artificial identity acquires an email address, a phone number, a physical
 address, and a bank account, and logs onto Amazon to purchase stuff
 automatically; it then needs to be able to put money into its bank account.
 So let's say it has a low-profit scheme to scalp day-trading profits with
 its stock trading account. That's the loop: it has to be able to make money
 to make purchases, and then automatically file its taxes with the IRS. Then
 it's really starting to look like a full, legally functioning identity. It
 could persist in this fashion for years.



 I would bet that these identities already exist. What happens when there
 are many, many of them? Would we even know?



 John



 *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
 *Sent:* Saturday, August 07, 2010 8:17 PM
 *To:* agi
 *Subject:* Re: [agi] Epiphany - Statements of Stupidity



 Ian,

 I recall several years ago that a group in Britain was operating just such
 a chatterbox as you explained, but did so on numerous sex-related sites, all
 running simultaneously. The chatterbox emulated young girls looking for sex.
 The program just sat there doing its thing on numerous sites, and whenever a
 meeting was set up, it would issue a message to its human owners to alert
 the police to go and arrest the pedophiles at the arranged time and place.
 No human interaction was needed between arrests.

 I can imagine an adaptation, wherein a program claims to be manufacturing
 explosives, and is looking for other people to deliver those explosives.
 With such a story line, there should be no problem arranging deliveries, at
 which time you would arrest the would-be bombers.

 I wish I could tell you more about the British project, but they were VERY
 secretive. I suspect that some serious Googling would yield much more.

 Hopefully you will find this helpful.

 Steve
 =

 On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

 I wanted to see what other people's views were. My own view of the risks is
 as follows. If the Turing Machine is built to be as isomorphic with humans
 as possible, it would be incredibly dangerous. Indeed I feel that the
 biological model is far more dangerous than the mathematical.



 If on the other hand the TM was *not* isomorphic and made no attempt to
 be, the dangers would be a lot less. Most Turing/Löbner entries are
 chatterboxes that work on databases, the database being filled as you chat.
 Clearly the system cannot go outside its database and is safe.



 There is in fact some use for such a chatterbox. Clearly a Turing machine
 would be able to infiltrate militant groups however it was constructed. As
 for it pretending to be stupid, it would have to know in what direction it
 had to be stupid. Hence it would have to be a good psychologist.



 Suppose it logged onto a jihadist website: as well as being able to pass
 itself off as a true adherent, it could also look at the other members and
 assess their level of commitment and knowledge. I think that the
 true Turing/Löbner test is not working in a laboratory environment
Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
John,

You brought up some interesting points...

On Fri, Aug 6, 2010 at 10:54 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
  On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com
  wrote:
  statements of stupidity - some of these are examples of cramming
  sophisticated thoughts into simplistic compressed text.
 
  Definitely, as even the thoughts of stupid people transcend our (present)
  ability to state what is happening behind their eyeballs. Most stupidity
  is probably beyond simple recognition. For the initial moment, I was just
  looking at the linguistic low hanging fruit.

 You are talking about those phrases, some are clichés,


There seems to be no clear boundary between clichés and other stupid
statements, except maybe that clichés are quoted exactly, like "that's just
your opinion", while other statements are grammatically adapted to fit the
sentences and paragraphs that they inhabit.

Dr. Eliza already translates idioms before processing. I could add clichés
without changing a line of code, e.g. "that's just your opinion" might
translate into something like "I am too stupid to understand your
explanation."

Dr. Eliza has an extensive wildcard handler, so it should be able to handle
the majority of grammatically adapted statements in the same way, by simply
including appropriate wildcards in the pattern.
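
To illustrate the idea (a minimal sketch only; Dr. Eliza's real tables and
wildcard handler are not public, so every pattern and function name below is
an invented stand-in), a wildcard-based cliché translator in Python might
look like this:

    import re

    # Map cliché patterns (with * wildcards) to plainer paraphrases.
    CLICHE_TABLE = {
        "that's just your opinion":
            "I am too stupid to understand your explanation.",
        "i had no choice but to *":
            r"I had no apparent rational choice but to \1.",
    }

    def compile_pattern(pattern):
        # Turn a wildcard pattern into an anchored, case-insensitive regex.
        return re.compile(
            "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$",
            re.IGNORECASE)

    COMPILED = [(compile_pattern(p), r) for p, r in CLICHE_TABLE.items()]

    def translate_cliche(sentence):
        # Return the paraphrase for a recognized cliché, else the input.
        s = sentence.strip().rstrip(".!")
        for regex, repl in COMPILED:
            if regex.match(s):
                return regex.sub(repl, s)
        return sentence

    print(translate_cliche("I had no choice but to sell the house!"))
    # -> I had no apparent rational choice but to sell the house.

Grammatically adapted variants are then just more rows in the table, exactly
as described above: no code changes, only new patterns.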

are like local K-complexity minima, in a knowledge graph of partial
 linguistic structure, where neural computational energy is preserved, and
 the statements are patterns with isomorphisms to other experiential
 knowledge intra and inter agent.


That is, other illogical misunderstandings of the real world, which are
probably NOT shared with more intelligent agents. This presents a serious
problem for understanding by more intelligent agents.

More intelligent agents have ways of working more optimally with the
 neural computational energy, perhaps by using other more efficient patterns
 thus avoiding those particular detrimental pattern/statements.


... and this presents a communications problem with agents of radically
different intelligences, both greater and lesser.


 But the statements are catchy because they are common and allow some
 minimization of computational energy, as well as being like objects in a
 higher-level communication protocol. To store them is fewer bits, and to
 transfer is fewer bits per second.


However, they have negative information content, if that is possible,
because they require a false model of the world to process, and produce
completely erroneous results. Of course, despite these problems, they DO
somewhat accurately communicate the erroneous nature of the thinking, so
there IS some value there.


 Their impact is maximal since they are isomorphic across
 knowledge and experience.


... the ultimate being: "Do, or do not. There is no try."


 At some point they may just become symbols due to
 their pre-calculated commonness.


Egad, symbols to display stupidity. Could linguistics have anything that is
WORSE?!


  Language is both intelligence enhancing and limiting. Human language is a
  protocol between agents. So there is minimalist data transfer, I had no
  choice but to ... is a compressed summary of potentially vastly complex
  issues.
 
  My point is that they could have left the country, killed their
  adversaries, taken on a new ID, or done any number of radical things that
  they probably never considered, other than taking whatever action they
  chose to take. A more accurate statement might be "I had no apparent
  rational choice but to ..."

 The other low probability choices are lossily compressed out of the
 expressed statement pattern. It's assumed that there were other choices,
 usually factored in during the communicational complexity related
 decompression, being situational. The onus at times is on the person
 listening to the stupid statement.


I see. This example was in reality a gap, or ellipsis, where reasonably
presumed words were omitted. These are always a challenge, except in common
places like clichés where the missing words can be automatically inserted.

Thanks again for your thoughts.

Steve
=


  The mind gets hung-up sometimes on this language of ours. Better off at
  times to think less using English language and express oneself with a
  wider spectrum communiqué. Doing a dance and throwing paint in the air for
  example, as some *primitive* cultures actually do, conveys information
  also and is a medium of expression rather than using a restrictive human
  chat protocol.

  You are saying that the problem is that our present communication permits
  statements of stupidity, so we shouldn't have our present system of
  communication? Scrap English?!!! I consider statements of stupidity as a
  sort of communications checksum, to see if real interchange of ideas is
  even possible. Often, it is 

Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Ian Parker
I wanted to see what other people's views were. My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical.

If on the other hand the TM was *not* isomorphic and made no attempt to be,
the dangers would be a lot less. Most Turing/Löbner entries are chatterboxes
that work on databases, the database being filled as you chat. Clearly the
system cannot go outside its database and is safe.

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

Suppose it logged onto a jihadist website: as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the true
Turing/Löbner test is not working in a laboratory environment; rather,
systems should log onto jihadist sites and see how well they can pass
themselves off. If a system could do that it really would have arrived.
Eventually it could pass itself off as a *pentito*, to use the Mafia term,
and produce arguments from the Qur'an against the militant position.

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.


  - Ian Parker

On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

  Philosophical question 2 - Would passing the TT assume human stupidity and
  if so would a Turing machine be dangerous? Not necessarily, the Turing
  machine could talk about things like jihad without ultimately identifying
  with it.
 

 Humans without augmentation are only so intelligent. A Turing machine would
 be potentially dangerous, a really well built one. At some point we'd need
 to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it would
  have to be. Could a TM become part of a population simulation that would
  give us political insights?
 

 You can have a relatively stupid TM or a sophisticated one just like
 humans.
 It might be easier to pass the TT by not exposing too much intelligence.

 John

  These 3 questions seem to me to be the really interesting ones.
 
 
- Ian Parker






Re: [agi] Epiphany - Statements of Stupidity

2010-08-07 Thread Steve Richfield
Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=
On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

 I wanted to see what other people's views were. My own view of the risks is
 as follows. If the Turing Machine is built to be as isomorphic with humans
 as possible, it would be incredibly dangerous. Indeed I feel that the
 biological model is far more dangerous than the mathematical.

 If on the other hand the TM was *not* isomorphic and made no attempt to
 be, the dangers would be a lot less. Most Turing/Löbner entries are
 chatterboxes that work on databases, the database being filled as you chat.
 Clearly the system cannot go outside its database and is safe.

 There is in fact some use for such a chatterbox. Clearly a Turing machine
 would be able to infiltrate militant groups however it was constructed. As
 for it pretending to be stupid, it would have to know in what direction it
 had to be stupid. Hence it would have to be a good psychologist.

 Suppose it logged onto a jihadist website: as well as being able to pass
 itself off as a true adherent, it could also look at the other members and
 assess their level of commitment and knowledge. I think that the true
 Turing/Löbner test is not working in a laboratory environment; rather,
 systems should log onto jihadist sites and see how well they can pass
 themselves off. If a system could do that it really would have arrived.
 Eventually it could pass itself off as a *pentito*, to use the Mafia term,
 and produce arguments from the Qur'an against the militant position.

 There would be quite a lot of contracts to be had if there were a realistic
 prospect of doing this.


   - Ian Parker

 On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

  Philosophical question 2 - Would passing the TT assume human stupidity and
  if so would a Turing machine be dangerous? Not necessarily, the Turing
  machine could talk about things like jihad without ultimately identifying
  with it.
 

 Humans without augmentation are only so intelligent. A Turing machine
 would
 be potentially dangerous, a really well built one. At some point we'd need
 to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it would
  have to be. Could a TM become part of a population simulation that would
  give us political insights?
 

 You can have a relatively stupid TM or a sophisticated one just like
 humans.
 It might be easier to pass the TT by not exposing too much intelligence.

 John

  These 3 questions seem to me to be the really interesting ones.
 
 
- Ian Parker






Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
STEVE: I have posted plenty about statements of ignorance, our probable 
inability to comprehend what an advanced intelligence might be thinking, 

What will be the SIMPLEST thing that will mark the first sign of AGI? - Given 
that there are zero but zero examples of AGI.

Don't you think it would be a good idea to begin at the beginning? With 
initial AGI? Rather than advanced AGI? 




Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons, as it is directly applicable to Dr. Eliza, and because it casts a
shadow on future dreams of AGI. I was hoping that those people who have
thought things through regarding AGIs might have some thoughts here. Maybe
these people don't (yet) exist?!
2.  You seem to think that a "walk before you run" approach, basically a
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me.
Besides, if my "statements of stupidity" theory is true, then why even
bother building AGIs, because we won't even be able to meaningfully discuss
things with them.

Steve
==
On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  STEVE: I have posted plenty about statements of ignorance, our probable
 inability to comprehend what an advanced intelligence might be thinking,

  What will be the SIMPLEST thing that will mark the first sign of AGI? -
  Given that there are zero but zero examples of AGI.

 Don't you think it would be a good idea to begin at the beginning? With
 initial AGI? Rather than advanced AGI?


Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Matt Mahoney
Mike Tintner wrote:
 What will be the SIMPLEST thing that will mark the first sign of AGI ? - 
 Given 
that there are zero but zero examples of AGI.
 
Machines have already surpassed human intelligence. If you don't think so, try 
this IQ test: http://mattmahoney.net/iq/

Or do you prefer to define intelligence as more like a human? In that case I 
agree that AGI will never happen. No machine will ever be more like a human 
than 
a human.

I really don't care how you define it. Either way, computers are profoundly 
affecting the way people interact with each other and with the world. Where is 
the threshold when machines do most of our thinking for us? Who cares, as long 
as the machines still give us the feeling that we are in charge.

-- Matt Mahoney, matmaho...@yahoo.com





From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, August 6, 2010 5:57:33 AM
Subject: Re: [agi] Epiphany - Statements of Stupidity


STEVE: I have posted plenty about statements of ignorance, our probable 
inability to comprehend what an advanced intelligence might be thinking, 

What will be the SIMPLEST thing that will mark the first sign of AGI? - Given 
that there are zero but zero examples of AGI.

Don't you think it would be a good idea to begin at the beginning? With 
initial AGI? Rather than advanced AGI? 



RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
"Statements of stupidity" - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text. Language is both
intelligence enhancing and limiting. Human language is a protocol between
agents. So there is minimalist data transfer; "I had no choice but to ..."
is a compressed summary of potentially vastly complex issues. The mind gets
hung-up sometimes on this language of ours. Better off at times to think
less using English language and express oneself with a wider spectrum
communiqué. Doing a dance and throwing paint in the air for example, as some
*primitive* cultures actually do, conveys information also and is a medium
of expression rather than using a restrictive human chat protocol.

 

BTW the rules of etiquette of the human language protocol are even more
potentially restricting, though necessary for efficient and standardized
data transfer to occur. Like TCP/IP, for example. The etiquette in TCP/IP is
like an OSI layer, akin to human language etiquette.

 

John

 

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 



To All,

I have posted plenty about statements of ignorance, our probable inability
to comprehend what an advanced intelligence might be thinking, heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.

People often say things that indicate their limited mental capacity, or at
least their inability to comprehend specific situations.

1)  One of my favorites is people who say "I had no choice but to ...",
which of course indicates that they are clearly intellectually challenged
because there are ALWAYS other choices, though it may be difficult to find
one that is in all respects superior. While theoretically this statement
could possibly be correct, in practice I have never found this to be the
case.

2)  Another one recently from this very forum was "If it sounds too good to
be true, it probably is." This may be theoretically true, but in fact was,
as usual, made as a statement as to why the author was summarily dismissing
an apparent opportunity of GREAT value. This dismissal of something BECAUSE
of its great value would seem to severely limit the author's prospects for
success in life, which probably explains why he spends so much time here
challenging others who ARE doing something with their lives.

3)  I used to evaluate inventions for some venture capitalists. Sometimes I
would find that some basic law of physics, e.g. conservation of energy,
would have to be violated for the thing to work. When I explained this to
the inventors, their inevitable reply was "Yea, and they also said that the
Wright Brothers' plane would never fly." To this, I explained that the
Wright Brothers had invested ~200 hours of effort working with their crude
homemade wind tunnel, and asked what the inventors had done to prove that
their own invention would work.

4)  One old stupid standby, spoken when you have made a clear point that
shows that their argument is full of holes: "That is just your opinion." No,
it is a proven fact for you to accept or refute.

5)  Perhaps you have your own pet statements of stupidity? I suspect that
there may be enough of these to dismiss some significant fraction of
prospective users of beyond-human-capability (I just hate the word
"intelligence") programs.

In short, semantic analysis of these statements typically would NOT find
them to be conspicuously false, and hence even an AGI would be tempted to
accept them. However, their use almost universally indicates some
short-circuit in thinking. The present Dr. Eliza program could easily
recognize such statements.
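
The recognition step could be as simple as matching an input against a table
of known markers. A minimal Python sketch (the marker list and response
policy below are invented for illustration, not taken from Dr. Eliza):

    STUPIDITY_MARKERS = [
        "i had no choice but to",
        "that is just your opinion",
        "that's just your opinion",
        "if it sounds too good to be true",
        "they also said that the wright brothers",
    ]

    def is_statement_of_stupidity(utterance):
        # True if the utterance contains a known "statement of stupidity".
        text = utterance.lower()
        return any(marker in text for marker in STUPIDITY_MARKERS)

    def choose_output_level(utterance):
        # Use the flag as a communications checksum: decide how much output,
        # if any, the user can be expected to construe correctly.
        if is_statement_of_stupidity(utterance):
            return "preschool"  # or remain completely silent
        return "full"

The "preschool" policy mirrors the dumbed-down output proposed a few
paragraphs below.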

OK, so what? What should an AI program do when it encounters a stupid user?
Should some attempt be made to explain stupidity to someone who is almost
certainly incapable of comprehending their own stupidity? "Stupidity is
forever" is probably true, especially when expressed by an adult.

Note my own dismissal of some past posters for insufficient mental ability
to understand certain subjects, whereupon they invariably come back
repeating the SAME flawed logic, after I carefully explained the breaks in
their logic. Clearly, I was just wasting my effort by continuing to interact
with these people.

Note that providing a stupid user with ANY output is probably a mistake,
because they will almost certainly misconstrue it in some way. Perhaps it
might be possible to "dumb down" the output to preschool level, at least
that (small) part of the output that can be accurately stated in preschool
terms.

Eventually as computers continue to self-evolve, we will ALL be categorized
as some sort of stupid, and receive stupid-adapted output.

I wonder whether, ultimately, computers will have ANYTHING to say to us,
like any more than we now say to our dogs.

Perhaps the final winner of the Reverse Turing Test will remain completely
silent?!

"You don't explain to your dog why you can't pay the rent", from The Fall
of Colossus.

Any thoughts?

Steve





Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Mike Tintner
Maybe you could give me one example from the history of technology where 
machines ran before they could walk? Where they started complex rather than 
simple?  Or indeed from evolution of any kind? Or indeed from human 
development? Where children started doing complex mental operations like logic, 
say, or maths or the equivalent before they could speak?  Or started running 
before they could control their arms, roll over, crawl, sit up, haul themselves 
up, stand up, totter -  just went straight to running?**

A bottom-up approach, I would have to agree, clearly isn't obvious to AGI-ers. 
But then there are v. few AGI-ers who have much sense of history or evolution. 
It's so much easier to engage in sci-fi fantasies about future, top-down AGI's.

It's HARDER to think about where AGI starts - requires serious application to 
the problem.

And frankly, until you or anyone else has a halfway viable idea of where AGI 
will or can start, and what uses it will serve, speculation about whether it's 
worth building complex, sci-fi AGI's is a waste of your valuable time.

**PS Note BTW - a distinction that eludes most AGI-ers - a present computer 
program doing logic or maths or chess is a fundamentally and massively 
different thing from a human or AGI doing the same, just as a current program 
doing NLP is totally different from a human using language. In all these 
cases, humans (and real AGIs to come) don't merely manipulate meaningless 
patterns of numbers; they relate the symbols first to concepts and then to 
real-world referents - massively complex operations totally beyond current 
computers.

The whole history of AI/would-be AGI shows the terrible price of starting 
complex - with logic/maths/chess programs for example - and not having a clue 
about how intelligence has to be developed from v. simple origins, step by 
step, in order to actually understand these activities.



From: Steve Richfield 
Sent: Friday, August 06, 2010 4:52 PM
To: agi 
Subject: Re: [agi] Epiphany - Statements of Stupidity


Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie 
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several 
reasons, as it is directly applicable to Dr. Eliza, and because it casts a 
shadow on future dreams of AGI. I was hoping that those people who have thought 
things through regarding AGIs might have some thoughts here. Maybe these people 
don't (yet) exist?!
2.  You seem to think that a "walk before you run" approach, basically a 
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me. 
Besides, if my "statements of stupidity" theory is true, then why even bother 
building AGIs, because we won't even be able to meaningfully discuss things 
with them.

Steve
==

On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  STEVE: I have posted plenty about statements of ignorance, our probable 
inability to comprehend what an advanced intelligence might be thinking, 

  What will be the SIMPLEST thing that will mark the first sign of AGI? - 
Given that there are zero but zero examples of AGI.

  Don't you think it would be a good idea to begin at the beginning? With 
initial AGI? Rather than advanced AGI? 


Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Ian Parker
I think that some quite important philosophical questions are raised by
Steve's posting. I don't know BTW how you got it. I monitor all
correspondence to the group, and I did not see it.

The Turing test is not in fact a test of intelligence; it is a test of
similarity with the human. Hence for a machine to be truly Turing it would
have to make mistakes. Now any *useful* system will be made as intelligent
as we can make it. The TT will be seen to be an irrelevancy.

Philosophical question no 1 :- How useful is the TT?

As I said in my correspondence with Jan Klouk, the human being is stupid,
often dangerously stupid.

Philosophical question 2 - Would passing the TT assume human stupidity and
if so would a Turing machine be dangerous? Not necessarily, the Turing
machine could talk about things like jihad without
ultimately identifying with it.

Philosophical question 3 :- Would a TM be a psychologist? I think it would
have to be. Could a TM become part of a population simulation that would
give us political insights?

These 3 questions seem to me to be the really interesting ones.


  - Ian Parker

On 6 August 2010 18:09, John G. Rose johnr...@polyplexic.com wrote:

 "Statements of stupidity" - some of these are examples of cramming
 sophisticated thoughts into simplistic compressed text. Language is both
 intelligence enhancing and limiting. Human language is a protocol between
 agents. So there is minimalist data transfer; "I had no choice but to ..."
 is a compressed summary of potentially vastly complex issues. The mind gets
 hung-up sometimes on this language of ours. Better off at times to think
 less using English language and express oneself with a wider spectrum
 communiqué. Doing a dance and throwing paint in the air for example, as
 some *primitive* cultures actually do, conveys information also and is a
 medium of expression rather than using a restrictive human chat protocol.



 BTW the rules of etiquette of the human language protocol are even more
 potentially restricting, though necessary for efficient and standardized
 data transfer to occur. Like TCP/IP, for example. The etiquette in TCP/IP
 is like an OSI layer, akin to human language etiquette.



 John





 *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]

 To All,

 I have posted plenty about statements of ignorance, our probable
 inability to comprehend what an advanced intelligence might be thinking,
 heidenbugs, etc. I am now wrestling with a new (to me) concept that
 hopefully others here can shed some light on.

 People often say things that indicate their limited mental capacity, or at
 least their inability to comprehend specific situations.

 1)  One of my favorites is people who say "I had no choice but to ...",
 which of course indicates that they are clearly intellectually challenged
 because there are ALWAYS other choices, though it may be difficult to find
 one that is in all respects superior. While theoretically this statement
 could possibly be correct, in practice I have never found this to be the
 case.

 2)  Another one recently from this very forum was "If it sounds too good
 to be true, it probably is." This may be theoretically true, but in fact
 was, as usual, made as a statement as to why the author was summarily
 dismissing an apparent opportunity of GREAT value. This dismissal of
 something BECAUSE of its great value would seem to severely limit the
 author's prospects for success in life, which probably explains why he
 spends so much time here challenging others who ARE doing something with
 their lives.

 3)  I used to evaluate inventions for some venture capitalists. Sometimes
 I would find that some basic law of physics, e.g. conservation of energy,
 would have to be violated for the thing to work. When I explained this to
 the inventors, their inevitable reply was "Yea, and they also said that the
 Wright Brothers' plane would never fly." To this, I explained that the
 Wright Brothers had invested ~200 hours of effort working with their crude
 homemade wind tunnel, and asked what the inventors had done to prove that
 their own invention would work.

 4)  One old stupid standby, spoken when you have made a clear point that
 shows that their argument is full of holes: "That is just your opinion."
 No, it is a proven fact for you to accept or refute.

 5)  Perhaps you have your own pet statements of stupidity? I suspect that
 there may be enough of these to dismiss some significant fraction of
 prospective users of beyond-human-capability (I just hate the word
 "intelligence") programs.

 In short, semantic analysis of these statements typically would NOT find
 them to be conspicuously false, and hence even an AGI would be tempted to
 accept them. However, their use almost universally indicates some
 short-circuit in thinking. The present Dr. Eliza program could easily
 recognize such statements.

 OK, so what? What should an AI program do when it encounters a stupid user?
 Should some attempt be made to explain 

Re: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread Steve Richfield
John,

Congratulations, as your response was the only one that was on topic!!!

On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com wrote:

 statements of stupidity - some of these are examples of cramming
 sophisticated thoughts into simplistic compressed text.


Definitely, as even the thoughts of stupid people transcend our (present)
ability to state what is happening behind their eyeballs. Most stupidity is
probably beyond simple recognition. For the initial moment, I was just
looking at the linguistic low hanging fruit.

Language is both intelligence enhancing and limiting. Human language is a
 protocol between agents. So there is minimalist data transfer; "I had no
 choice but to ..." is a compressed summary of potentially vastly complex
 issues.


My point is that they could have left the country, killed their adversaries,
taken on a new ID, or done any number of radical things that they probably
never considered, other than taking whatever action they chose to take. A
more accurate statement might be "I had no apparent rational choice but to
..."

The mind gets hung-up sometimes on this language of ours. Better off at
 times to think less using English language and express oneself with a wider
 spectrum communiqué. Doing a dance and throwing paint in the air for
 example, as some *primitive* cultures actually do, conveys information
 also and is a medium of expression rather than using a restrictive human
 chat protocol.


You are saying that the problem is that our present communication permits
statements of stupidity, so we shouldn't have our present system of
communication? Scrap English?!!! I consider statements of stupidity as a
sort of communications checksum, to see if real interchange of ideas is even
possible. Often, it is quite impossible to communicate new ideas to
inflexible-minded people.



 BTW the rules of etiquette of the human language protocol are even more
 potentially restricting, though necessary for efficient and standardized
 data transfer to occur. Like TCP/IP, for example. The etiquette in TCP/IP
 is like an OSI layer, akin to human language etiquette.


I'm not sure how this relates, other than possibly identifying people who
don't honor linguistic etiquette as being (potentially) stupid. Was that
your point?

Steve
==


 *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]

 To All,

 I have posted plenty about statements of ignorance, our probable
 inability to comprehend what an advanced intelligence might be thinking,
 heidenbugs, etc. I am now wrestling with a new (to me) concept that
 hopefully others here can shed some light on.

 People often say things that indicate their limited mental capacity, or at
 least their inability to comprehend specific situations.

 1)  One of my favorites is people who say "I had no choice but to ...",
 which of course indicates that they are clearly intellectually challenged
 because there are ALWAYS other choices, though it may be difficult to find
 one that is in all respects superior. While theoretically this statement
 could possibly be correct, in practice I have never found this to be the
 case.

 2)  Another one recently from this very forum was "If it sounds too good
 to be true, it probably is." This may be theoretically true, but in fact
 was, as usual, made as a statement as to why the author was summarily
 dismissing an apparent opportunity of GREAT value. This dismissal of
 something BECAUSE of its great value would seem to severely limit the
 author's prospects for success in life, which probably explains why he
 spends so much time here challenging others who ARE doing something with
 their lives.

 3)  I used to evaluate inventions for some venture capitalists. Sometimes
 I would find that some basic law of physics, e.g. conservation of energy,
 would have to be violated for the thing to work. When I explained this to
 the inventors, their inevitable reply was "Yea, and they also said that the
 Wright Brothers' plane would never fly." To this, I explained that the
 Wright Brothers had invested ~200 hours of effort working with their crude
 homemade wind tunnel, and asked what the inventors had done to prove that
 their own invention would work.

 4)  One old stupid standby, spoken when you have made a clear point that
 shows that their argument is full of holes: "That is just your opinion."
 No, it is a proven fact for you to accept or refute.

 5)  Perhaps you have your own pet statements of stupidity? I suspect that
 there may be enough of these to dismiss some significant fraction of
 prospective users of beyond-human-capability (I just hate the word
 "intelligence") programs.

 In short, semantic analysis of these statements typically would NOT find
 them to be conspicuously false, and hence even an AGI would be tempted to
 accept them. However, their use almost universally indicates some
 short-circuit in thinking. The present Dr. Eliza program could easily
 recognize such statements.

 OK, 

RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
  The Turing test is not in fact a test of intelligence; it is a test of
  similarity with the human. Hence for a machine to be truly Turing it would
  have to make mistakes. Now any useful system will be made as intelligent
  as we can make it. The TT will be seen to be an irrelevancy.
 
  Philosophical question no 1 :- How useful is the TT?
 

TT in its basic form is rather simplistic. It's usually thought of in its
ideal form, the determination of an AI or a human. I look at it more as
analogue versus discrete boolean. Much of what is out there is human with
computer augmentation and echoes of human interaction. It's blurry in
reality, and the TT has been passed in some ways but not in its most ideal
way.

 As I said in my correspondence with Jan Klouk, the human being is stupid,
 often dangerously stupid.
 
  Philosophical question 2 - Would passing the TT assume human stupidity and
  if so would a Turing machine be dangerous? Not necessarily, the Turing
  machine could talk about things like jihad without ultimately identifying
  with it.
 

Humans without augmentation are only so intelligent. A Turing machine would
be potentially dangerous, a really well built one. At some point we'd need
to see some DNA as ID of another extended TT.

  Philosophical question 3 :- Would a TM be a psychologist? I think it would
  have to be. Could a TM become part of a population simulation that would
  give us political insights?
 

You can have a relatively stupid TM or a sophisticated one just like humans.
It might be easier to pass the TT by not exposing too much intelligence.

John

 These 3 questions seem to me to be the really interesting ones.
 
 
   - Ian Parker 



