I think Steve's posting raises some quite important philosophical questions.
BTW, I don't know how you got it; I monitor all correspondence to the group,
and I did not see it.

The Turing test is not in fact a test of intelligence; it is a test of
similarity to a human. Hence, for a machine to be truly Turing, it would
have to make mistakes. But any *useful* system will be made as intelligent
as we can make it, so the TT will come to be seen as an irrelevancy.

Philosophical question 1 :- How useful is the TT?

As I said in my correspondence with Jan Klouk, the human being is stupid,
often dangerously stupid.

Philosophical question 2 :- Would passing the TT presuppose human stupidity,
and if so, would a Turing machine be dangerous? Not necessarily: the Turing
machine could talk about things like jihad without ultimately identifying
with them.

Philosophical question 3 :- Would a TM be a psychologist? I think it would
have to be. Could a TM become part of a population simulation that would
give us political insights?

These 3 questions seem to me to be the really interesting ones.


  - Ian Parker

On 6 August 2010 18:09, John G. Rose <johnr...@polyplexic.com> wrote:

> "statements of stupidity" - some of these are examples of cramming
> sophisticated thoughts into simplistic compressed text. Language is both
> intelligence enhancing and limiting. Human language is a protocol between
> agents. So there is minimalist data transfer, "I had no choice but to ..."
> is a compressed summary of potentially vastly complex issues. The mind gets
> hung-up sometimes on this language of ours. Better off at times to think
> less using English language and express oneself with a wider spectrum
> communiqué. Doing a dance and throwing paint in the air for example, as some
> **primitive** cultures actually do, conveys information also and is medium
> of expression rather than using a restrictive human chat protocol.
>
> BTW, the etiquette rules of the human language "protocol" are potentially
> even more restricting, though they are necessary for efficient and
> standardized data transfer to occur - take TCP/IP, for example. The
> "etiquette" in TCP/IP is like an OSI layer, akin to human language
> etiquette.
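>
> A toy sketch of that layering idea (hypothetical Python, purely to make
> the analogy concrete): an "etiquette" layer wrapped around a raw payload,
> the way an upper protocol layer wraps what the transport below it carries.
>
>     # Hypothetical illustration, not any real protocol stack:
>     # "etiquette" as a framing layer around the raw data.
>     def wrap_with_etiquette(payload: str) -> str:
>         """Add the greeting/sign-off framing polite exchange expects."""
>         return "Dear colleague,\n" + payload + "\nBest regards."
>
>     def unwrap_etiquette(message: str) -> str:
>         """Strip the framing to recover the underlying data."""
>         lines = message.splitlines()
>         return "\n".join(lines[1:-1])  # drop greeting and sign-off
>
>     raw = "I had no choice but to decline."
>     framed = wrap_with_etiquette(raw)   # what actually goes "over the wire"
>     assert unwrap_etiquette(framed) == raw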
>
> John
>
> *From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
>
> To All,
>
> I have posted plenty about "statements of ignorance", our probable
> inability to comprehend what an advanced intelligence might be "thinking",
> heisenbugs, etc. I am now wrestling with a new (to me) concept that
> hopefully others here can shed some light on.
>
> People often say things that indicate their limited mental capacity, or at
> least their inability to comprehend specific situations.
>
> 1)  One of my favorites is people who say "I had no choice but to ...",
> which of course indicates that they are clearly intellectually challenged,
> because there are ALWAYS other choices, though it may be difficult to find
> one that is in all respects superior. While in theory this statement could
> be correct, in practice I have never found it to be so.
>
> 2)  Another one, recently from this very forum, was "If it sounds too good
> to be true, it probably is". This may be theoretically true, but it was, as
> usual, offered as the reason the author was summarily dismissing an
> apparent opportunity of GREAT value. Dismissing something BECAUSE of its
> great value would seem to severely limit the author's prospects for success
> in life, which probably explains why he spends so much time here
> challenging others who ARE doing something with their lives.
>
> 3)  I used to evaluate inventions for some venture capitalists. Sometimes I
> would find that some basic law of physics, e.g. conservation of energy,
> would have to be violated for the thing to work. When I explained this to
> the inventors, their inevitable reply was "Yeah, and they also said that
> the Wright Brothers' plane would never fly". To this, I explained that the
> Wright Brothers had invested ~200 hours of effort working with their crude
> homemade wind tunnel, and asked what the inventors had done to prove that
> their own invention would work.
>
> 4)  One old stupid standby, spoken when you have made a clear point that
> shows their argument is full of holes: "That is just your opinion". No, it
> is a proven fact for you to accept or refute.
>
> 5)  Perhaps you have your own pet "statements of stupidity"? I suspect that
> there may be enough of these to dismiss some significant fraction of
> prospective users of beyond-human-capability (I just hate the word
> "intelligence") programs.
>
> In short, semantic analysis of these statements typically would NOT find
> them to be conspicuously false, and hence even an AGI would be tempted to
> accept them. However, their use almost universally indicates some
> short-circuit in thinking. The present Dr. Eliza program could easily
> recognize such statements.
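>
> A minimal sketch of the sort of surface-pattern spotting I mean
> (hypothetical Python; NOT the actual Dr. Eliza code):
>
>     import re
>
>     # Hypothetical phrase list; a real system would need a far larger one.
>     STUPIDITY_PATTERNS = [
>         r"\bI had no choice but to\b",
>         r"\bsounds too good to be true\b",
>         r"\bthat is just your opinion\b",
>         r"\bwould never fly\b",
>     ]
>
>     def flags_stupidity(utterance: str) -> bool:
>         """True if the utterance matches a known 'statement of stupidity'."""
>         return any(re.search(p, utterance, re.IGNORECASE)
>                    for p in STUPIDITY_PATTERNS)
>
>     print(flags_stupidity("I had no choice but to sell."))       # True
>     print(flags_stupidity("The wind tunnel data looked good."))  # False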
>
> OK, so what? What should an AI program do when it encounters a stupid user?
> Should some attempt be made to explain stupidity to someone who is almost
> certainly incapable of comprehending their own stupidity? "Stupidity is
> forever" is probably true, especially when expressed by an adult.
>
> Note my own dismissal of some past posters for insufficient mental ability
> to understand certain subjects, whereupon they invariably came back
> repeating the SAME flawed logic after I had carefully explained the breaks
> in it. Clearly, I was just wasting my effort by continuing to interact with
> these people.
>
> Note that providing a stupid user with ANY output is probably a mistake,
> because they will almost certainly misconstrue it in some way. Perhaps it
> might be possible to "dumb down" the output to preschool-level, at least
> that (small) part of the output that can be accurately stated in preschool
> terms.
>
> Eventually, as computers continue to self-evolve, we will ALL be
> categorized as some sort of stupid and receive stupid-adapted output.
>
> I wonder whether, ultimately, computers will have ANYTHING to say to us,
> any more than we now say to our dogs.
>
> Perhaps the final winner of the Reverse Turing Test will remain completely
> silent?!
>
> "You don't explain to your dog why you can't pay the rent" from *The Fall
> of Colossus*.
>
> Any thoughts?
>
> Steve


