Re: [agi] The Next Wave

2003-01-11 Thread Ed Heflin
Kevin,

A belated congratulations on your phenomenal mimetic achievement...winning
the 2002 Loebner Prize Contest for Most Human Computer with Ella.

Your winning indicates a certain level of understanding of the
pursuit of AGI, not to mention your seriousness and commitment.

But I guess the seriousness of your pursuit might have to be second-guessed,
given your admission that your 'The Next Wave' post was intended to be
humorous.

Not to worry...you may have contributed more with your 'funny' forward
thinking than just a 'feebly frivolous failure.'

For starters, you bring up the important issue of human psychology in the
creation process: What's the motivation for building an AGI?  I mean, you
are contemplating expending an enormous amount of thought, effort, and
energy to create this so-called AGI, and if, at the end of the day, all you
get is an artificial entity that is indifferent or even unfriendly toward
humans, why do it?

And as a Sophist, I know well that a glib mountaineering response like
'...because it was there' works well to explain motivations for trying to
understand how and why humans work cognitively and why they are the way they
are; after all, the human is the mountain in the analogical reference to
'there'.  But there must be a reasonable motivation for engaging in a
creative process that contemplates building something complex beyond
ourselves, a so-called AGI.

It seems to me that one reasonable motivation might be that you want to
build an AGI that can 'outperform' humans in one or more significant ways in
solving complex problems in a complex environment.  What better arena to test
the 'mettle' of an AGI than the physical world?

From my training as an experimental physicist, I would suggest
that your 'wish list' of programmed directives for testing the 'mettle' of
the AGIs
 TIME TRAVEL 
 PARALLEL UNIVERSES 
 GENETIC ENGINEERING 
 ULTIMATE KNOWLEDGE 
is as unlikely as it is interesting.

Unlikely for 'human scientists' given present theoretical structures and
experimental approaches, but interesting for 'AGI scientists' given 'a new
kind of science'.

And the reference to 'a new kind of science' is, in fact, to Stephen
Wolfram's most recent magnum opus of over 1000 pages by the same name, A
New Kind of Science.

For those unfamiliar with Wolfram or his work, Steve created Mathematica,
the world's leading software system for technical computing and symbolic
programming, and, among other things, studied complexity theory in
everything from biology to physics, a la cellular automata, over the last 10
years resulting in the book A New Kind of Science replete with his
published thoughts and findings.

The thoughts and findings from the book seem rather startling for an 'AGI
scientist' given 'a new kind of science'.  These results are captured in
Wolfram's 'Principle of Computational Equivalence', paraphrased as:
1. All the systems in nature follow computable rules. (strong AI)
2. All systems that reach the fundamental upper bound to their complexity,
namely Turing's halting problem, are equivalent.
3. Almost all systems that are not obviously weak reach the bound and are
thus equivalent to the halting problem.

Wolfram's 'Principle of Computational Equivalence' suggests that theoretical
approaches, and perhaps even experimental approaches, to science (that is,
attempts to formulate science in terms of traditional mathematics) fall
short of capturing all the richness of the complex world.  What is needed is
'a new kind of science'.  And that 'a new kind of science' can be achieved
through the use of algorithmic models and experimentation of the kind he
studies.
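
To make 'algorithmic models and experimentation' concrete, here is a minimal
Python sketch of an elementary cellular automaton, the kind of simple program
the book is built around (Rule 110 being the rule shown, via Matthew Cook's
proof, to be computationally universal).  This is only an illustration, not
code from the book:

# Minimal elementary cellular automaton (e.g. Wolfram's Rule 110).
# Illustrative sketch only: very simple local rules, surprisingly complex behavior.

def step(cells, rule=110):
    """One update of an elementary CA with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)               # look up bit in the rule number
    return out

def run(width=64, steps=32, rule=110):
    cells = [0] * width
    cells[width // 2] = 1                                    # single seed cell
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, rule)

if __name__ == '__main__':
    run()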

If you take Steve's A New Kind of Science at face value...and I believe
Steve is well worth considering since he is a very serious, intelligent
scientist..., you are left with some rather startling implications for an
'AGI scientist' that, at the most fundamental level, is built in silico and
cogitates digitally through algorithms.

...AGI design...hmm, I wonder what Steve is up to these days?

Ed
- Original Message -
From: Kevin Copple [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, January 10, 2003 8:42 AM
Subject: [agi] The Next Wave


 It seems clear that AGI will be obtained in the foreseeable future.  It
 also seems that it will be done with adequate safeguards against a runaway
 entity that will exterminate us humans.  Likely it will remain under our
 control also.

 HOWEVER, this brings up another wave of issues we must debate.  An AGI
 will naturally begin building and programming itself, and quickly develop
 abilities that our human minds cannot hope to achieve.  We need a
 consensus on limits for humans using the AGI abilities, perhaps leading to
 some programmed directives for the AGI's.  Here is my effort to start a
 list:

  TIME TRAVEL 

 Likely the AGI will quickly learn how to travel through time.  Should we
 develop rules of conduct in advance?  Sure, it's tempting to think of

[agi] A New Kind of Science

2003-01-11 Thread Ben Goertzel

Ed,

Your comments on A New Kind of Science are interesting...

 And the reference to 'a new kind of science' is, in fact, to Stephen
 Wolfram's most recent magnum opus of over 1000 pages by the same name, A
 New Kind of Science.

Some of you may have seen my review of this book, which appeared in the
June issue of the Extropy magazine:

http://www.extropy.org/ideas/journal/current/2002-06-01.html
 
A terrifying number of reviews of the book are collected here:

www.math.usf.edu/~eclark/ANKOS_reviews.html

 The thoughts and findings from the book seem rather startling for an 'AGI
 scientist' given 'a new kind of science'.  These results are captured in
 Wolfram's 'Principle of Computational Equivalence', paraphrased as:
 1. All the systems in nature follow computable rules. (strong AI)
 2. All systems that reach the fundamental upper bound to their complexity,
 namely Turing's halting problem, are equivalent.
 3. Almost all systems that are not obviously weak reach the bound and are
 thus equivalent to the halting problem.

Right.  So the main claim is that nearly all complex systems are
implicitly universal computers...
 
And my answer is: Probably ... but so what?  Different universal
computers behave totally differently in terms of what they can compute
within fixed space and time resource bounds.  And real-world
intelligence is all about what can be computed within fixed space and
time resource bounds.

Given unbounded space and time resource bounds, AI is a trivial
problem.  Many have stated this informally (as I did in '93 in my book
The Structure of Intelligence); Solomonoff proved it one way in his
classic work on algorithmic information theory, and Marcus Hutter proved
it even more directly...

Since his Principle of Computational Equivalence does not speak about
average-case space and time complexity of various computations using
various complex systems, it is essentially vacuous from the point of
view of AGI.

 Wolfram's 'Principle of Computational Equivalence' suggests that theoretical
 approaches, and perhaps even experimental approaches, to science (that is,
 attempts to formulate science in terms of traditional mathematics) fall
 short of capturing all the richness of the complex world.  What is needed is
 'a new kind of science'.  And that 'a new kind of science' can be achieved
 through the use of algorithmic models and experimentation of the kind he
 studies.

What we need for AGI is a pragmatic understanding of the dynamical
behavior of certain types of systems (AGI systems) in certain types of
environments.  This type of understanding is not ruled out by Wolfram's
Principle, fortunately... we are not seeking a completely general
understanding of all complex systems, which IS ruled out by algorithmic
information theory (which shows that, given the finite size of our
brains, we can't understand systems of greater algorithmic information
than our brains).

Whether we can achieve the needed understanding via mathematical
theorem-proving is not yet clear.  It hasn't been achieved yet via ANY
mechanism -- experimental, mathematical, or divine inspiration ;-)
 
I share some of Wolfram's skepticism regarding theoretical math's
ability to deal with very complex systems like AGI's.  And yet on long
airplane flights I find myself doodling equations in a notebook, trying
to come up with the novel math theory that will allow us to prove such
theorems after all

 If you take Steve's A New Kind of Science at face value...and I believe
 Steve is well worth considering since he is a very serious, intelligent
 scientist..., you are left with some rather startling implications for an
 'AGI scientist' that, at the most fundamental level, is built in silico and
 cogitates digitally through algorithms.
 
 ...AGI design...hmm, I wonder what Steve is up to these days?

The sections on AI and cognition in Wolfram's book are among the
weakest, sketchiest, least plausible ones.  He clearly spent 50 times as
much effort on the portions dealing with his speculative physics
theories.  The odds that he's seriously working on anything related to
AGI are very small, I feel.

I agree that building an AGI and learning about its dynamics through
experimentation is a valid course.  It's what I'm doing!  But I'm not
ready to dismiss the possibility of fundamental math progress as readily
as Wolfram is.

A working AGI would be a huge advance over current AI systems.  A
useful math theory of complex systems would be a huge advance over
current math.  I am more confident in the former breakthrough than the
latter, but consider both to be real possibilities...

My general idea of what a math theory of complex systems should be is the
idea of a theory of patterns, as I've sketched very loosely in some prior
publications.  But I have not proved any deep theorems about the theory
of patterns ... it's hard.  A breakthrough is needed... maybe Wolfram is
right and it will never come... I dunno...

On the other hand, Wolfram 

[agi] [Fwd: Robots and human emotions]

2003-01-11 Thread Ben Goertzel


Sensitive robots taught to gauge human emotion
http://www.eet.com/story/OEG20030107S0033

NASHVILLE, Tenn. -- Robotics designers are working with
psychologists here at Vanderbilt University to improve human-machine
interfaces by teaching robots to sense human emotions. Such
sensitive robots would change the way they interact with humans
based on an evaluation of a person's mood.

"We believe that many of our human-to-human communications are
implicit -- that is, the more familiar we are with a person, the
better we are at understanding them. We want to determine whether a
robot can sense a person's mood and change the way it interacts [with
the human] for more natural communications," said Vanderbilt
assistant professor Nilanjan Sarkar.

"We don't want to give a robot emotions; we just want them to be
sensitive to our emotions," added Craig Smith, Vanderbilt associate
professor of psychology and human development.

Sarkar, an engineer, initiated the research project with Smith, a
psychologist, with the insight that there is no universal method of
detecting emotions in humans. This impressed Smith, who had
independently noticed that years of research in psychology had failed
to uncover the Rosetta stone of human emotions. The bottom line for
both researchers was that people express the same emotions in
different ways; thus, any universal method for detecting emotions
with robots would be doomed.

"Psychologists have been trying to identify universal patterns of
physiological response since the early 1900s, but without success. We
believe that the lesson to be learned there is that there are no such
universal patterns," said Smith.

Consequently, the team's research project has two parts: sensing the
unique patterns of behavior that mark an individual person's
emotions, and converting that information in real-time into
actuator-style commands to the robot to facilitate communications
between humans and machines.

"We have established the feasibility of the individual-specific
approach that we are taking, and there is a good chance that we can
succeed," said Smith.

Emotional data

The approach taken by the researchers was adopted from voice- and
handwriting-recognition technologies: Information on baseline
features is compiled for each person, and then the features that
indicate each mental state are identified for that person. Armed with
their personalized emotion-recognition system, the researchers hope
to use diverse data streams from users to create a more intuitive
interface.
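
As a rough illustration of this personalized-baseline idea, here is a small
Python sketch (illustrative only; the feature names, numbers, and threshold
are assumptions for the example, not details of the Vanderbilt system):

# Toy sketch of a personalized emotion detector: learn each user's baseline
# feature statistics, then flag readings that deviate strongly from that
# user's own baseline.  Feature names and threshold are illustrative only.
from statistics import mean, stdev

class PersonalBaseline:
    def __init__(self, threshold=2.0):
        self.threshold = threshold   # average z-score above which we call the user "anxious"
        self.stats = {}              # feature -> (mean, std) for this particular user

    def fit(self, baseline_samples):
        """baseline_samples: list of dicts such as {'heart_rate': 68, 'skin_conductance': 4.1}"""
        for f in baseline_samples[0]:
            values = [s[f] for s in baseline_samples]
            self.stats[f] = (mean(values), stdev(values))

    def anxiety_score(self, sample):
        """Average z-score of a new reading relative to this user's baseline."""
        zs = [(sample[f] - m) / s for f, (m, s) in self.stats.items() if s > 0]
        return sum(zs) / max(len(zs), 1)

    def is_anxious(self, sample):
        return self.anxiety_score(sample) > self.threshold

# usage: calibrate on relaxed readings, then test a new reading
model = PersonalBaseline()
model.fit([{'heart_rate': 66, 'skin_conductance': 4.0},
           {'heart_rate': 70, 'skin_conductance': 4.3},
           {'heart_rate': 68, 'skin_conductance': 4.1}])
print(model.is_anxious({'heart_rate': 95, 'skin_conductance': 7.5}))  # True for this user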

In their prototype studies, sensors are worn by the person being
monitored by the robot. For example, heart rate monitors would gauge
the user's anxiety level, and the robotic responses would be adjusted
accordingly. With the sensors in place on the subject, the
researchers observe data streams for the subject in various
situations, such as while the subject is playing a videogame. 

By subjecting each person to the same anxiety-producing situations in
the game, the researchers obtained electrocardiogram profiles for
specific mental states. 

One such experiment gathered information from the same user's sensors
over a six-month period in order to validate the feasibility of the
personalized approach.

So far, Sarkar's team has performed preliminary analysis of the
profiles using conventional signal-processing algorithms and
experimental methods like fuzzy logic and wavelet analysis. They have
found patterns in the variations in the interval between heartbeats
that could be personalized.

Specifically, two frequency bands vary predictably with changes in
stress. Sarkar's team is now conducting similar analyses using other
available biosensors, including skin conductance (which changes when
people sweat under stress) and facial muscles (such as furrowing the
brow or clenching the jaw). 
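
A back-of-the-envelope Python sketch of that heartbeat-interval analysis
(again only an illustration; the article does not give the band limits, so
the conventional low-frequency and high-frequency HRV bands are assumed
here):

# Sketch: estimate power in two heart-rate-variability frequency bands from a
# series of RR intervals (seconds between successive heartbeats).  The band
# limits are the conventional LF/HF bands, assumed since the article omits them.
import numpy as np

def band_powers(rr_intervals, fs=4.0, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    t = np.cumsum(rr_intervals)                    # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # uniform time grid
    rr_uniform = np.interp(grid, t, rr_intervals)  # resample the RR series
    rr_uniform -= rr_uniform.mean()                # remove the DC component
    spectrum = np.abs(np.fft.rfft(rr_uniform)) ** 2
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)

    def band_power(band):
        lo, hi = band
        return spectrum[(freqs >= lo) & (freqs < hi)].sum()

    return band_power(lf), band_power(hf)

# usage with synthetic data: 120 beats near 0.8 s, with a slow stress-like oscillation
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(120)) + 0.01 * rng.standard_normal(120)
print(band_powers(rr))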

The team is also expanding the programming of its small robot to
allow the robot to make better use of this information when
communicating with people.

'I sense you are anxious'

In a current experiment the small robot explores its environment with
a St. Bernard rescue hound-style human-machine interface. When the
robot finds a person, it examines the subject's data streams to
determine that person's mental state, then responds accordingly. For
instance, when finding an anxious person, the robot says: "I sense
that you are anxious. Is there anything I can do to help?"

In the future, the research team wants to be able to discriminate
between bad anxiety and good excitement, since both produce
similar physiological profiles. They also plan to map out other
psychological states, such as boredom and frustration.

For the latter, Smith has already devised an anagram-based system
that can frustrate test subjects by systematically increasing in
difficulty. The team is also analyzing different data streams, such
as electroencephalogram brain wave monitors and more subtle measures
of cardiovascular activity.



Re: [agi] A New Kind of Science

2003-01-11 Thread Ben Goertzel
At 

www.santafe.edu/~shalizi/notebooks/cellular-automata.html

Wolfram's book is reviewed as "a rare blend of monster raving egomania
and utter batshit insanity" ... (a phrase I would like to have
emblazoned on my gravestone, except that I don't plan on dying, and if I
do die I plan on being frozen rather than buried) 

The context is:


* Dis-recommended:
 Stephen Wolfram, A New Kind of Science [This is almost, but not quite,
a case for the immortal ``What is true is not new, and what is new is
not true''. The one new, true thing is a proof that the elementary CA
rule 110 can support universal, Turing-complete computation. (One of
Wolfram's earlier books states that such a thing is obviously
impossible.) This however was shown not by Wolfram but by Matthew Cook
(this is the ``technical content and proofs'' for which Wolfram
acknowledges Cook, in six point type, in his frontmatter). In any case
it cannot bear the weight Wolfram places on it. Watch This Space for a
detailed critique of this book, a rare blend of monster raving egomania
and utter batshit insanity.] 


-- Ben





Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Shane Legg
Pei Wang wrote:

In my opinion, one of the most common mistakes made by people is to think AI
in terms of computability and computational complexity, using concepts like
Turing machine, algorithm, and so on.  For a long argument, see
http://www.cis.temple.edu/~pwang/551-PT/Lecture/Computation.pdf. Comments
are welcome.


It's difficult for me to attack a specific point after reading
through your paper because I find myself at odds with your views
in many places.  My views seem to be a lot more orthodox I suppose.

Perhaps where our difference is best highlighted is in the
following quote that you use:

   “something can be computational at one level,
but not at another level” [Hofstadter, 1985]

To this I would say: Something can LOOK like computation
at one level, but not LOOK like computation at another level.
Nevertheless it still is computation, and any limits due to
the fundamental properties of computation theory still apply.

Or to use an example from another field: A great painting
involves a lot more than just knowledge of the physical
properties of paint.  Nevertheless, a good painter will know
the physical properties of his paints well because he knows
that the product of his work is ultimately constrained by these.

That's one half of the story anyway; the other part is that I
believe that intelligence is definable at a pretty fundamental
level (i.e. not much higher than the concept of universal Turing
computation) but I'll leave that part for now and focus on this
first issue.

Shane



Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel

Shane Legg wrote, responding to Pei Wang:
 Perhaps where our difference is best highlighted is in the
 following quote that you use:
 
 “something can be computational at one level,
  but not at another level” [Hofstadter, 1985]
 
 To this I would say: Something can LOOK like computation
 at one level, but not LOOK like computation at another level.
 Nevertheless it still is computation, and any limits due to
 the fundamental properties of computation theory still apply.

Shane, I think you and Pei are using different language to say very
similar things...

It seems to me that NARS, Novamente, and any other programs that run on
Turing machine hardware (like contemporary computers) CAN be analyzed in
terms of computation theory.  The question is to what extent this is a
USEFUL point of view.  There may, for some programs, be noncomputational
perspectives that are more useful.

For example, suppose we have a program that simulates a stochastic or
quantum process.  It may be more convenient to view this program in
terms of randomness or quantum dynamics than in terms of strict Turing
computation.  This view may explain more about the high level abstract
behavior of the program.  But still at the low level there is an
explanation for the program in terms of computing theory.

This is a special case of the general observation that, often, in a
complex system, the patterns observable at a coarse level of observation
are not useful patterns at a fine level of observation...

It may be more convenient to think about and study an AGI program in a
noncomputational way ... if one is looking at the overall behaviors and
structures of the program ... but if one wants to look at the EXACT
actions taken by the system and understand them, one has got to take the
computational point of view and look at the code and its effect on
memory and processor...

 That's one half of the story anyway; the other part is that I
 believe that intelligence is definable at a pretty fundamental
 level (i.e. not much higher than the concept of universal Turing
 computation) but I'll leave that part for now and focus on this
 first issue.

Intelligence may be *definable* at that level -- and I'd argue that
Pei's definition of intelligence (roughly: doing complex
goal-achievement with limited knowledge and resources) could even be
formulated at that level.  

But the structures and dynamics needed to make intelligence happen under
reasonable space and time resource constraints -- THESE, I believe,
necessarily involve primary theoretical constructs VERY DIFFERENT FROM
computation theory, which is a theory of generic computational processes,
not a theory that is very useful for the specific study of computational
processes that give rise to intelligence on an emergent level...

ben




Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Pei Wang
Shane,

One issue that makes that version of the paper controversial is the term
'computation', which actually has two senses: (1) whatever a computer
does, and (2) what is defined as 'computation' in computability theory.  In
the paper I'm using the second sense of the term.  (I'm revising the paper
to make this more clear.)

My argument, briefly speaking, is that it is quite possible, on a current
computer, to solve problems in a way that is non-deterministic (i.e.,
context-sensitive) and open-ended (as in anytime algorithms).  Such a
process doesn't satisfy the definition of computation, doesn't follow a
predetermined algorithm, and has no fixed complexity.
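
A tiny generic illustration of the 'anytime' flavor (a Python sketch, not
NARS): the process refines its answer until its time budget runs out, so the
resources spent on a question, and the answer returned, are not fixed in
advance.

# Sketch of an anytime process: it improves its answer until told to stop,
# so the quality of the result depends on how much time it was given.
import time

def anytime_estimate_pi(budget_seconds):
    """Leibniz series for pi; stops when the time budget runs out."""
    deadline = time.time() + budget_seconds
    total, k = 0.0, 0
    while time.time() < deadline:
        total += (-1) ** k / (2 * k + 1)
        k += 1
    return 4 * total, k   # best answer so far, plus how much work was done

# The same question with different budgets yields different answers.
print(anytime_estimate_pi(0.01))
print(anytime_estimate_pi(0.1))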

To implement such a process requires no magic --- actually many existing
systems already go beyond computability theory, though few people have
realized it.  A concrete example is my NARS --- there is a demo at
http://www.cogsci.indiana.edu/farg/peiwang/NARS/ReadMe.html (you know that,
but some others don't). The system's capacity at the surface level cannot be
specified by computability theory, and the resources it spends on a question
are not fixed.

As for the level issue, one way to see it is through the concept of a
virtual machine.  We all know that at a low level a computer only has a
procedural language and binary data, but at a high level it has
non-procedural languages (such as functional or logical languages) and
decimal data.  Therefore, if virtual machine M1 is implemented by virtual
machine M2, the two may still have quite different properties.  What I'm
trying to do is to implement a non-computing system on a computing one.

If you are still unconvinced, think about this problem: say the problem you
are trying to solve is to reply to my current email. Is this problem
computable?  Do you follow an algorithm in solving it?  What is the
computational complexity of this process?

Pei

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 5:12 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)


 Pei Wang wrote:
  In my opinion, one of the most common mistakes made by people is to
  think AI in terms of computability and computational complexity, using
  concepts like Turing machine, algorithm, and so on.  For a long argument,
  see http://www.cis.temple.edu/~pwang/551-PT/Lecture/Computation.pdf.
  Comments are welcome.

 It's difficult for me to attack a specific point after reading
 through your paper because I find myself at odds with your views
 in many places.  My views seem to be a lot more orthodox I suppose.

 Perhaps where our difference is best highlighted is in the
 following quote that you use:

 “something can be computational at one level,
  but not at another level” [Hofstadter, 1985]

  To this I would say: Something can LOOK like computation
  at one level, but not LOOK like computation at another level.
  Nevertheless it still is computation, and any limits due to
  the fundamental properties of computation theory still apply.

 Or to use an example from another field: A great painting
 involves a lot more than just knowledge of the physical
 properties of paint.  Nevertheless, a good painter will know
 the physical properties of his paints well because he knows
 that the product of his work is ultimately constrained by these.

 That's one half of the story anyway; the other part is that I
 believe that intelligence is definable at a pretty fundamental
 level (i.e. not much higher than the concept of universal Turing
 computation) but I'll leave that part for now and focus on this
 first issue.

 Shane







RE: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel
Pei:

 As for the level issue, one way to see it is through the concept of a
 virtual machine.  We all know that at a low level a computer only has a
 procedural language and binary data, but at a high level it has
 non-procedural languages (such as functional or logical languages) and
 decimal data.  Therefore, if virtual machine M1 is implemented by virtual
 machine M2, the two may still have quite different properties.  What I'm
 trying to do is to implement a non-computing system on a computing one.

Interestingly though, even if M1 and M2 are very different, bisimulation may
hold.

For example, NARS can simulate any Turing machine -- it has universal
computation power -- but this will often be a very inefficient simulation
(you need to use HOI with maximal confidence and Boolean strength)...

The problem is that bisimulation, without taking efficiency into account, is
a pretty weak idea.  This is a key part of my critique of Wolfram's
thinking...

ben




Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Pei Wang

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 9:42 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)


 Hi Pei,

  One issue that makes that version of the paper controversial is the term
  'computation', which actually has two senses: (1) whatever a computer
  does, and (2) what is defined as 'computation' in computability theory.
  In the paper I'm using the second sense of the term.  (I'm revising the
  paper to make this more clear.)

 Ok, so just to be perfectly clear about this.  You maintain that a
 real computer (say my laptop here that I'm using) is able to do
 things that are beyond what is possible with a theoretical computer
 (say a Turing machine).  Is that correct?

Yes. See below.

 If so, then this would seem to be the key difference of opinion
 between us.

Right. Again let's use NARS as a concrete example.  It can answer questions,
but if you ask the same question twice to the system at different times, you
may get different answers. In this sense, there is no algorithm that takes
the question as input and produces a unique answer as output. You may say
that there is still an algorithm (or many algorithms) in the system, which
take many other factors into account in producing answers, with which I
agree (simply because that is how NARS is coded), but still, there is no
single algorithm that is solely responsible for the question-answering
process, and that is the point. The cooperation of many algorithms, under
the influence of many factors besides the current input, is not necessarily
equivalent to an algorithm, or a Turing machine, as defined in the Theory of
Computation.  The main idea in Turing Computation is that the machine serves
as a function that maps each input uniquely into an output. Intelligence,
with its adaptivity and flexibility, should not be seen as such a fixed
mapping.
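
To make the point concrete, here is a toy Python sketch (not NARS, just an
illustration): the reply to a fixed question depends on everything the system
has absorbed so far, so asking the same question at different times can give
different answers, even though each component step is ordinary code.

# Toy stateful answerer: the reply depends on accumulated experience,
# not on the question alone.
class Answerer:
    def __init__(self):
        self.evidence = {}                     # question -> list of observed answers

    def absorb(self, question, observation):
        self.evidence.setdefault(question, []).append(observation)

    def ask(self, question):
        seen = self.evidence.get(question)
        if not seen:
            return "I don't know yet"
        return max(set(seen), key=seen.count)  # most frequently observed answer so far

a = Answerer()
print(a.ask("is the swan white?"))    # "I don't know yet"
a.absorb("is the swan white?", "yes")
print(a.ask("is the swan white?"))    # "yes"
a.absorb("is the swan white?", "no")
a.absorb("is the swan white?", "no")
print(a.ask("is the swan white?"))    # "no" -- same question, different answer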

  If you are still unconvinced, think about this problem: say the problem
  you are trying to solve is to reply to my current email. Is this problem
  computable?  Do you follow an algorithm in solving it?  What is the
  computational complexity of this process?

 I have no reason that I can think of to believe that a response to
 your email could not be generated by an algorithm.  Perhaps a big
 fancy one with a high computational complexity, but I don't see any
 reason why not.

I'm not asking about whether it could --- of course I can imagine an
algorithm that does nothing but take my email as input and produce the
above reply of yours as output.  I just don't believe that is how your mind
works.  For one thing, to use an algorithm to solve a problem means, by
definition, that if I repeat the question, you'll repeat the answer. Since I
know you in person, I'm sure you are more adaptive than that.  ;-)

Cheers,

Pei

 Cheers
 Shane








RE: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel


Pei wrote:
 Right. Again let's use NARS as a concrete example.  It can answer
 questions, but if you ask the same question twice to the system at
 different times, you may get different answers. In this sense, there is no
 algorithm that takes the question as input and produces a unique answer as
 output. You may say that there is still an algorithm (or many algorithms)
 in the system, which take many other factors into account in producing
 answers, with which I agree (simply because that is how NARS is coded),
 but still, there is no single algorithm that is solely responsible for the
 question-answering process, and that is the point.

Pei!!  I get the feeling you are using a very nonstandard definition of
'algorithm'!!

 The cooperation of many algorithms, under the
 influence of many factors besides the current input, is not necessarily
 equivalent to an algorithm, or a Turing machine, as defined in the Theory
 of Computation.  The main idea in Turing Computation is that the machine
 serves as a function that maps each input uniquely into an output.
 Intelligence, with its adaptivity and flexibility, should not be seen as
 such a fixed mapping.

No!!!

Consider a Turing machine with three tapes:

* input
* output
* internal state

Then the correct statement is that the Turing machine maps each (input,
internal state) pair into a unique output.

This is just like NARS.  If you know its input and its internal state, you
can predict its output.  (Remember, even if there is a quasi-random number
generator in there, this generator is actually a deterministic algorithm
whose output can be predicted based on its current state).
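
To illustrate (a minimal Python sketch, not anyone's actual system): even an
answerer that chooses its reply 'randomly' is a fixed mapping from (input,
internal state) to output, because the pseudo-random generator's state is
part of the internal state; restore that state and the run replays exactly.

# Sketch: the output looks unpredictable, but given the same input AND the
# same internal state (here, the PRNG state), the mapping is deterministic.
import random

def answer(question, internal_state):
    rng = random.Random()
    rng.setstate(internal_state)                 # restore the internal state
    reply = rng.choice(["yes", "no", "maybe"])   # "unpredictable" choice
    return reply, rng.getstate()                 # output plus the new internal state

state0 = random.Random(42).getstate()
r1, state1 = answer("will it rain?", state0)
r2, _ = answer("will it rain?", state0)          # same (input, state) -> same output
r3, _ = answer("will it rain?", state1)          # different state -> possibly different output
print(r1, r2, r3)
assert r1 == r2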

The mapping from NARS into a 3-tape Turing machine is more natural than the
mapping from NARS into a standard 1-tape Turing machine.

BUT, it is well-known that there is bisimulation between 3-tape and 1-tape
Turing machines.  This bisimulation result shows that NARS can be mapped
into a 1-tape Turing machine...

The bisimulation between 3-tape and 1-tape Turing machines is expensive, but
that only shows that the interpretation of NARS as a 1-tape Turing machine
is *awkward and unnatural*, not that it is impossible.


  I'm not asking about whether it could --- of course I can imagine an
  algorithm that does nothing but take my email as input and produce the
  above reply of yours as output.  I just don't believe that is how your
  mind works.  For one thing, to use an algorithm to solve a problem means,
  by definition, that if I repeat the question, you'll repeat the answer.
  Since I know you in person, I'm sure you are more adaptive than that.  ;-)

You should broaden your definition of algorithms to include algorithms with
memory.

This is standard in CS too -- e.g. the theory of stack automata...

You should say

"To use a deterministic algorithm to solve a problem means, by
definition, that if I repeat the question and you have the same internal
state as the first time I asked the question, you'll repeat the answer."

Shane has a memory.  So does a simple stack automaton.

But they're both basically deterministic robots ;-)

[though Shane has more capability to amplify chance (i.e. too complex for
the observer to understand) fluctuations into big patterns than a simple
stack automaton, which is related to his greater degree of
consciousness...]


-- ben
