Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread YKY (Yan King Yin)

On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:

YKY: Consciousness is not central to AGI.

The human mind consists of a two-tier structure. On top, you have this

conscious, executive mind that takes most of the decisions about which way
the system will go - basically does the steering. On bottom, you have the
unconscious, subordinate mind that does nearly all the information
processing, both briefing and executing the executive mind's decisions,
putting the words in its mouth and forming the thoughts in its head, while
continually pressuring the executive mind with conflicting emotions, and at
the same time monitoring and controlling the immensely complex operations of
the body.

That sounds reasonable.  You're talking about the executive / planner
module.  My focus is on the truth maintenance module, which operates
somewhat passively, and would require high-level directives from the
planner, including value-based bias.  The executive should be able to
control all other modules.

I tried not to use the term "emotion" in AGI, but I guess most people like
it as a metaphor.

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936

Re: [agi] A New Approach to AGI: What to do and what not to do (includes my revised algorithm)

2007-05-06 Thread Jean-Paul Van Belle
to find a word in a big list you should really use a dictionary / hash
table instead of binary search... ;-)
(ok I know that wasn't the point you were trying to make :)
Jean-Paul 

PS: [META] - people, please cut off long message includes - some of us
don't enjoy always-on high bandwidth :(
 a [EMAIL PROTECTED] 05/06/07 2:36 AM 
For example, in computational
linguistics, the algorithm can use a binary search to find records
relating to a word, instead of scanning the whole database.

What I mean is that the database can use indexes with a binary search
algorithm to locate the word faster. This means that it avoids scanning
each and every record of the database to find the pixel representation
of the letters of the word (the bitmap image of the word).
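Jean-Paul's point and the quoted explanation can be made concrete with a small sketch (my illustration only - the word list and record ids are made up): a sorted index supports O(log n) binary search, while a hash table (a Python dict here) gives expected O(1) lookup, and either beats scanning every record.

```python
from bisect import bisect_left

# Hypothetical word index: parallel sorted lists of words and record ids.
words = ["apple", "banana", "cherry", "date", "elderberry"]
record_ids = [101, 102, 103, 104, 105]

def find_binary(word):
    """O(log n): binary search over the sorted word list."""
    i = bisect_left(words, word)
    if i < len(words) and words[i] == word:
        return record_ids[i]
    return None

# Expected O(1): hash table from word to record id.
index = dict(zip(words, record_ids))

print(find_binary("cherry"))  # 103, via binary search
print(index.get("cherry"))    # 103, via hash lookup
```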



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Benjamin Goertzel

Mike,

The extent to which there is a rigid distinction between these two tiers in
the human brain/mind is not entirely clear.  The human brain seems to have
some distinct memory subsystems associated with various sorts of short term
memory or working memory, but the notion of executive processing
overall is IMO best thought of as a fuzzy set.  Yes, there are some parts of
the brain clearly shown (by fMRI and PET) to be involved with overall
coordination, but the knowledge/memories associated by these brain regions
is not necessarily the totality of what can occur in subjective conscious
awareness.

I think that the working memory and the autonomic nervous system are best
viewed as two extremes, with a continuum of conscious intensity levels
existing between them.

For relatively recent thinking on the underpinnings of consciousness in the
human brain, check out the edited volume

-- Neural Correlates of Consciousness, by Thomas Metzinger

His single-author book

-- Being No One

is also very good, though I disagree with his take on AI at the end of the
book.  (he argues it would be unethical to create AGI's because it would be
unethical to experiment on their half-formed, probably buggy conscious
minds.)

In Novamente we do have an AttentionalFocus concept which is much like what
you call the conscious tier.  We have chosen the term attentional focus
to avoid getting into arguments related to the nature of consciousness and
the first person versus third person perspectives on mind.  Each item in the
attentional focus is associated with a distributed network of other items
that are not necessarily in the attentional focus, which ties in with the
fuzziness of the executive function as mentioned above.
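The structure described above can be sketched as a toy data structure (an illustration of mine, not Novamente's actual code - the class name, capacity, and eviction policy are all assumptions): a small set of in-focus items, each linked to associated items that need not themselves be in the focus.

```python
# Toy sketch of an attentional-focus structure (illustrative only;
# names and representation are assumptions, not Novamente's code).
class AttentionalFocus:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.focus = []   # items currently "in focus"
        self.links = {}   # item -> set of associated items

    def associate(self, item, others):
        self.links.setdefault(item, set()).update(others)

    def attend(self, item):
        """Bring an item into focus, evicting the oldest if full."""
        if item in self.focus:
            return
        if len(self.focus) >= self.capacity:
            self.focus.pop(0)
        self.focus.append(item)

    def halo(self):
        """Items associated with the focus but not themselves in it."""
        out = set()
        for item in self.focus:
            out |= self.links.get(item, set())
        return out - set(self.focus)

af = AttentionalFocus()
af.associate("cat", {"fur", "meow"})
af.attend("cat")
print(af.focus)   # ['cat']
print(af.halo())  # {'fur', 'meow'} (set order may vary)
```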

-- Ben G

On 5/6/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

[...]




Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner
Well, there obviously IS a conscious, executive mind, separate from the 
unconscious mind, whatever the enormous difficulties cognitive scientists had 
in first admitting its existence and now in identifying its correlates! And you 
still seem to be sharing some of those old difficulties in talking about it. 
Science generally still has some of those difficulties too. They shouldn't be 
there. Social organizations have chief executives and appear more or less 
incapable of functioning without them. The individual organization that is a 
human being appears to need an executive mind for much the same reasons - 
though those reasons need defining.

Note that Fodor acknowledges the embarrassing truth that science can currently 
offer no explanation of why the conscious mind exists - rational, deterministic 
computers and machines clearly do not have or need one,  functioning perfectly 
as entirely unconscious affairs.

One immediate reason, applicable to AGI - although it will take the next 
Cognitive Revolution to recognize this - is that the two minds, almost 
certainly, think very differently. The unconscious mind thinks more or less 
algorithmically, (at least most of the time), rapidly in set ways - like a 
rational computer - it has to. Its function is to get things done.

The conscious mind thinks literally, freely. How long it will spend on any 
given decision, and what course of thought it will pursue in reaching that 
decision are definitely NOT set, but free. (How does Pei's NARS fit in here?)  
Should I buy the marshmallow or the creme caramel ice cream? Hmm that's a tough 
one. I want to get this right... And I could and will resolve that decision in 
a few more seconds OR at other times, I could still be here thinking about it 
several minutes later OR at other times I could wander off in mid-thought to 
another subject entirely. No computer currently thinks like this - thinks 
freely and crazily as opposed to rationally and deterministically. Anyone who 
produces one - that has a similar practicality to the animal/human executive 
mind - will literally usher in the next Cognitive Revolution.

You guys are clearly moving that way - but still appear to have a somewhat 
confused philosophical understanding of why all this is really necessary.

(One interesting, but tangential issue is that the unconscious mind does appear 
to have a certain freedom too - it's hard to see dreams, for example,  as 
deterministic affairs, Well, your dreams maybe, but not mine, you 
understand...).
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, May 06, 2007 10:37 AM
  Subject: Re: [agi] The Advantages of a Conscious Mind



  [...]

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Benjamin Goertzel

Mike,

The conscious mind thinks literally, freely. How long it will spend on any
given decision, and what course of thought it will pursue in reaching that
decision are definitely NOT set, but free.



Ah, well, I'm glad to see the age-old problem of free will versus
determinism is solved now!  Mike has spoken!! ;-)

Seriously ... have you read Libet's work on free will and the brain?  Have
you read Dennett's book Freedom Evolves?  How about The Illusion of
Conscious Will?

The illusion of free will is a pretty subtle issue.  I have made my own
hypothesis regarding the sort of mechanism that underlies it in the human
mind/brain, which is described in my 2006 book The Hidden Pattern and in
preliminary form here:

http://www.goertzel.org/dynapsyc/2004/FreeWill.htm

You guys are clearly moving that way - but still appear to have a somewhat
confused philosophical understanding of why all this is really necessary.




Mike ... really ... has it ever occurred to you that you might NOT have a
deeper understanding of these issues than people who have read all the
existing literature on the topics and thought about them for decades??

On some topics, naive intuition can be misleading.  Especially topics that
involve illusions we humans have **evolved** to hold intuitively, so as to
make our lives simpler...

Please note that the naive notion of freedom you advocate contradicts all
known physics including quantum physics and (all currently seriously debated
variants of) quantum gravity.  (As an aside, it also contradicts most
mystical and spiritualistic thinking which denies the typical, naive Western
over-hyping of the autonomous individual.)

I remember a story by Kafka about a monkey trapped in a cage, who developed
human-level intelligence with the goal of escaping the cage.  I don't recall
the wording but, translated into Goertzel-ese idiom, Kafka wrote something
like: The monkey was not seeking freedom.  By no means.  Freedom is just a
complicated illusion.  What the monkey was seeking was something simpler and
more profound and important: **a way out** 

;-)

This monkey is also seeking a way out, and I don't think the old illusions
of free will are necessary (or sufficient) for this purpose...

-- Ben G


Re: [agi] What would motivate you to put work into an AGI project?

2007-05-06 Thread J. Storrs Hall, PhD.
On Saturday 05 May 2007 23:29, Matt Mahoney wrote:
 About programming languages.  I do most of my programming in C++ with a
 little bit of assembler.  AGI needs some heavy duty number crunching.  You
 really need assembler to do most any kind of vector processing, especially
 if you use a coprocessor like a graphics card or PS3 type hardware.  You
 can get hundreds of GFlops for a few hundred dollars now, so why not use
 it?

Look at Brook (http://graphics.stanford.edu/projects/brookgpu/) ... and GPGPU 
in general (http://www.gpgpu.org/cgi-bin/blosxom.cgi).

If you want to use the built-in SIMD instructions in the x86 architecture, 
there are versions of BLAS that support them: both AMD and Intel have native 
versions for download, and there is ATLAS 
(http://math-atlas.sourceforge.net/), FFTW (http://www.fftw.org/), and many 
similar packages of functions -- there is also libSIMDx86 
(http://sourceforge.net/projects/simdx86/) for general purpose vector and 
matrix processing.
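For readers unfamiliar with what these libraries accelerate: the classic BLAS Level-1 routine SAXPY computes y <- a*x + y over whole vectors, which is exactly the kind of loop that SIMD instructions (or a GPU) execute many elements at a time. A minimal pure-Python reference version, for illustration only (the packages above do this with vectorized native code):

```python
def saxpy(a, x, y):
    """Reference SAXPY: return a*x + y elementwise.
    BLAS/SIMD versions compute several elements per instruction."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# [12.0, 14.0, 16.0]
```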

Josh



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
Consider a ship. From one point of view, you could separate the people aboard 
into two groups: the captain and the crew. But another just as reasonable 
point of view is that the captain is just one member of the crew, albeit one 
distinguished in some ways. 
One could reasonably take the point of view that the executive functions in a 
mind are performed by a module that is not all that much different in kind 
from the other ones; it just happens to be the one that is the fixpoint of 
the controller-of relation in the architecture graph.
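Josh's fixpoint idea can be sketched in a few lines (a toy of mine, not his formalism - the module names are invented): record which module controls which, and the "executive" is simply the module whose controller is itself, reached by following controller-of links.

```python
# Toy architecture graph: module -> its controller.
# The executive controls itself, making it the fixpoint of the relation.
controller_of = {
    "perception": "planner",
    "memory": "planner",
    "motor": "planner",
    "planner": "planner",   # fixpoint: its own controller
}

def find_executive(rel):
    """Follow controller-of links from any module until reaching
    one that is its own controller (the fixpoint)."""
    m = next(iter(rel))
    while rel[m] != m:
        m = rel[m]
    return m

print(find_executive(controller_of))  # planner
```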

Josh

On Sunday 06 May 2007 00:18, Mike Tintner wrote:
 ...
 The human mind consists of a two-tier structure. On top, you have this
 conscious, executive mind that takes most of the decisions about which way
 the system will go - basically does the steering. On bottom, you have the
 unconscious, subordinate mind that does nearly all the information
 processing, both briefing and executing the executive mind's decisions,
 putting the words in its mouth and forming the thoughts in its head, while
 continually pressuring the executive mind with conflicting emotions, and at
 the same time monitoring and controlling the immensely complex operations
 of the body.
...

 You guys think you can have a successful AGI without the same basic
 structure?



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Benjamin Goertzel

As Nietzsche put it, from a functional point of view, consciousness is like
the general who, after the fact, takes responsibility for the largely
autonomous actions of his troops ;-)

However, none of these metaphors addresses the issue of first vs. third
person perspectives

I hate to trumpet The Hidden Pattern again, but therein I deal with such
issues at length and depth...

ben

On 5/6/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:


[...]




Re: [agi] A New Approach to AGI: What to do and what not to do (includes my revised algorithm)

2007-05-06 Thread Lukasz Kaiser

PS: [META] - people pls to cut off long message includes - some of us
don't enjoy always on high bandwidth :(


[META] Yes, that is a very important point for me as well. As this list is
getting more and more active I'm wasting more and more time scrolling through
messages (often top-posted) to find the content. Whenever you can, please cut!

- lk



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
On Sunday 06 May 2007 07:49, Benjamin Goertzel wrote:
 As Nietzsche put it, from a functional point of view, consciousness is like
 the general who, after the fact, takes responsibility for the largely
 autonomous actions of his troops ;-)

That's actually pretty close to the way (I think) it really works ...

 I hate to trumpet The Hidden Pattern again, but therein I deal with such
 issues at length and depth...

As long as the trumpets are blaring, Beyond AI is coming out this month, with 
the coolest cover I've seen on any non-fiction book (he says modestly):
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117
or just search for Beyond AI.

Josh



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner


Consider a ship. From one point of view, you could separate the people aboard
into two groups: the captain and the crew. But another just as reasonable
point of view is that the captain is just one member of the crew, albeit one
distinguished in some ways.


Really? Bush? Browne [BP, just dismissed]? Trump? Ballmer? Gates? Kapor? 
Semel? Branson? Sarkozy? Blair?  JUST members of the crew? 





Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Pei Wang

Mike,

Since you mentioned me and NARS, I feel the need to clarify my
position on the related issues.

*. I agree with you that in many situations, the decision-making
procedure doesn't follow a predetermined algorithm, which gives people
the feeling of free will. On the other hand, at a deeper level, each
basic operation in the process does roughly follow a fixed routine,
and how these operations form the decision-making procedure is
determined by many factors at the moment. This mechanism is already
implemented in NARS, and is discussed in detail in
http://nars.wang.googlepages.com/wang.computation.pdf . Whether such a
process is free or determined to a large extent depends on the
context of the discussion: determined by whom? given what? The system
does have a choice among options from time to time, though given the
design and the experience of the system, these choices are not
arbitrary at all.
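Pei's distinction - fixed basic routines, runtime-determined composition - can be illustrated with a toy sketch (my own, not NARS's actual control mechanism; the primitives and chooser are invented): each primitive step is deterministic, but which step runs next is chosen from the current context, so the overall procedure is not laid out in advance.

```python
# Toy sketch: fixed primitive operations, dynamically composed.
# (Illustration only -- not NARS's actual control mechanism.)
def recall(state):  state["evidence"] += 1
def derive(state):  state["conclusions"] += state["evidence"]
def check(state):   state["checked"] = True

def step_chooser(state):
    """Pick the next primitive from the current context --
    this choice, not the primitives, is what varies at runtime."""
    if state["evidence"] < 2:
        return recall
    if not state["checked"]:
        return check
    return derive

def run(steps):
    state = {"evidence": 0, "conclusions": 0, "checked": False}
    trace = []
    for _ in range(steps):
        op = step_chooser(state)  # composition decided on the fly
        trace.append(op.__name__)
        op(state)                 # each primitive is a fixed routine
    return trace, state

trace, state = run(4)
print(trace)  # ['recall', 'recall', 'check', 'derive']
```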

*. I disagree with you on the two-tier structure, though it is
indeed intuitively obvious. As Ben said, On some topics, naive
intuition can be misleading, which has been shown many times in
the history of AI and CogSci. The conscious/unconscious distinction
does exist, but to me, it shows that our self-perception has its
limits, just like our perception of the outside environment. I don't
see your evidence for the two to be separate, rather than just
different. What is your evidence for The unconscious mind thinks
more or less algorithmically? To me, it is just the opposite --- to
follow an algorithm needs conscious effort. If you are talking about
automated behaviors or acquired skills, then that is a different issue
from unconscious thinking.

*. I also feel that you mixed several different issues all together in
the discussion: free-will/determinism, conscious/unconscious,
centralize/decentralize, which may be taken as confused philosophical
understanding on your side. ;-)

Pei


On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:



[...]




Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Benjamin Goertzel

I find that freedom is one of those folk-psychology/philosophy concepts
that isn't really much use for scientific and engineering thinking about
either human or machine intelligence...

As for concentration, this gets into what I call attention allocation -- an
area we've paid a lot of attention to in the Novamente design.  I believe
an AGI should be able to adaptively combine the concentrative focus
of current specialized software programs with the creativity-inducing,
associatively and contextually digressive nature of human attention.

-- Ben

On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:


 Ben,

Yes, I'll match my understanding and knowledge of, and ideas on,  the free
will issue against anyone's.

For example - and this is the real issue that concerns YOU and AGI - I
just introduced an entirely new dimension to the free will debate. You
literally won't find it anywhere. Including Dennett. Free thinking. If we
are free to decide,  then it follows we are also free to think - not merely
to decide either way at the end of solving a problem, but free as to how we
go about solving that problem - free to spend a little more time or less
time on it, free to ask someone else's opinion or go with our gut instinct,
free to list the pro's and cons or to take the first reasonable idea that
comes along, free to attack it logically/algebraically or verbally  etc.
etc.

That is an extremely important dimension of free will. It simply hasn't
been considered. Clearly it should be.

For the purposes of AGI, you can put the free will issue to one side, at
least for a while,  I would suggest, and concentrate on freedom of thought.
You see, it is absolutely fundamental to robotics to describe robots in
terms of degrees of freedom - of movement (whatever your views on free
will). It is, or will be, similarly fundamental to AGI to describe
autonomous computational minds in terms of degrees of freedom - of thought.

There is a crashingly obvious difference between a rational computer and a
human mind -  and the only way cognitive science has managed not to see it
is by resolutely refusing to look at it, just as it resolutely refused to
look at the conscious mind in the first place. The normal computer has no
problems concentrating. Give it a problem and it will proceed to produce a
perfect rational train of thought, with every step taken, and not a single
step missed. (Or to put that another way - it has zero freedom of thought).

But human minds have major problems concentrating. Literally for more than
seconds on end. For a human mind to produce a rational reflective train of
thought for something like a minute is virtually impossible. Obviously this
varies according to the problem/ subject, but the basic problem of
concentration is acknowledged by a whole variety of psychologists from
William James to Csikszentmihalyi - and undeniable.

Look at how human minds actually approach problems - their literal streams
of thought (something cognitive psychology still almost totally refuses to
do) - and you will find that humans can and do miss out at different times
each and every step of what might be considered a rational train of thought
- they don't listen to, or set the question/problem, don't look at the
evidence or look at irrelevant things, don't even try to have ideas, are
biased, don't think for themselves but copy others' ideas, lose the thread,
go off at tangents, repeat themselves, are uncritical, don't check etc etc.
In innumerable ways, we almost always jump to conclusions and leave out
ideal steps of reasoning. We are incapable of producing extended rational
trains of thought and movement. (Just look at student essays, right?) We may
be fairly effective reasoners, all things considered, but by the reasoning
standards of rational computers we are irrational, period.

Now to the rational philosopher and scientist and to the classical AI
person, this is all terrible (as well as flatly contradicting one of the
most fundamental assumptions of cognitive science, i.e. that humans think
rationally). We are indeed only human not [rational, deterministic]
machines.

But I would expect someone who cares about AGI to understand that this
is also all beautiful. Our extreme capacity for error can also be described
as extreme freedom of thought - and the basis of our adaptivity. Every
error in one context is an adaptive advantage in another. It's good and
vital in all  kinds of situations to be able to jump to conclusions, for
example. It's good and vital to be able to completely restructure the ways
you think about a problem.

I would expect you and Pei to be deeply interested in that whole dimension
of freedom of thought (and also to see that it provides a functional
distinction between the conscious and unconscious mind, where currently NONE
exists). If you are not interested,  no problem.

P.S. Re the free will issue,  laws of physics etc, I would suggest that
there is only one thing that should immediately concern 

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Derek Zahn

J Storrs Hall, PhD. writes:

As long as the trumpets are blaring, Beyond AI is coming out this month, 
with

the coolest cover I've seen on any non-fiction book (he says modestly):
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117


Cool!  I just pre-ordered my copy!

Look at Brook (http://graphics.stanford.edu/projects/brookgpu/) ... and 
GPGPU in general (http://www.gpgpu.org/cgi-bin/blosxom.cgi).


I'm also just beginning my experimentation with modern hardware, and just 
got
a new machine with two Nvidia 8800GTX boards.  That G80 architecture is 
moving

explicitly to a GPGPU architecture (by which I mean it doesn't have separate
vertex and pixel processors, just 128 general-purpose processors per card.  
They

have some pretty decent programming tools for it (called CUDA).

If you want to use the built-in SIMD instructions in the X8x architecture, 
there are versions of BLAS that support them: both AMD and Intel have 
native versions for download


If you are working in a somewhat low-level language and don't mind a little 
bit

of effort, you can embed the assembly directly to use the scalar functions.
To get my feet wet with this, I just wrote a mandelbrot set exploration 
program
that does this and it's amazing how far things have come recently.  The CPU 
on
my new machine is an intel quad core at 2.7 ghz.  With each one executing a 
4-wide
simd instruction (single precision), that adds up to 43 gflops peak, which 
isn't anywhere

near the peak of the graphics cards but isn't too shabby.

As I just start to work on some AGI-type stuff myself, one of my premises is
that it pays to think about models that lend themselves to efficient
implementation on available hardware, in direct opposition to YKY's recent
post on that subject.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore


Mike,

Bit of confusion here.  Consciousness is best used to refer to the 
things that Chalmers calls the Hard Problem issues.


The thing you are mainly referring to is what cog psych people would 
talk about as executive processing (as opposed to automatic 
processing).  Big literature on that.


The important thing, for me, is that I would not even begin to engage you in 
debate on the ideas you have raised here, because it is just too messy 
if these two totally different ideas are mixed up together.


Richard Loosemore



Mike Tintner wrote:
Well, there obviously IS a conscious, executive mind, separate from the 
unconscious mind, whatever the enormous difficulties cognitive 
scientists had in first admitting its existence and now in identifying 
its correlates! And you still seem to be sharing some of those old 
difficulties in talking about it. Science generally still has some of 
those difficulties too. They shouldn't be there. Social organizations 
have chief executives and appear more or less incapable of functioning 
without them. The individual organization that is a human being appears 
to need an executive mind for much the same reasons - though those 
reasons need defining.
 
Note that Fodor acknowledges the embarrassing truth that science can 
currently offer no explanation of why the conscious mind exists - 
rational, deterministic computers and machines clearly do not have or 
need one,  functioning perfectly as entirely unconscious affairs.
 
One immediate reason, applicable to AGI - although it will take the next 
Cognitive Revolution to recognize this - is that the two minds, almost 
certainly, think very differently. The unconscious mind thinks more or 
less algorithmically (at least most of the time), rapidly, in set ways - 
like a rational computer - it has to. Its function is to get things done.
 
The conscious mind thinks literally, freely. How long it will spend on 
any given decision, and what course of thought it will pursue in 
reaching that decision are definitely NOT set, but free. (How does Pei's 
NARS fit in here?)  Should I buy the marshmallow or the creme caramel 
ice cream? Hmm that's a tough one. I want to get this right... And I 
could and will resolve that decision in a few more seconds OR at other 
times, I could still be here thinking about it several minutes later OR 
at other times I could wander off in mid-thought to another subject 
entirely. No computer currently thinks like this - thinks freely and 
crazily as opposed to rationally and deterministically. Anyone who 
produces one - that has a similar practicality to the animal/human 
executive mind - will literally usher in the next Cognitive Revolution.
 
You guys are clearly moving that way - but still appear to have a 
somewhat confused philosophical understanding of why all this is really 
necessary.
 
(One interesting, but tangential issue is that the unconscious mind does 
appear to have a certain freedom too - it's hard to see dreams, for 
example, as deterministic affairs. Well, your dreams maybe, but not 
mine, you understand...).


- Original Message -
*From:* Benjamin Goertzel mailto:[EMAIL PROTECTED]
*To:* agi@v2.listbox.com mailto:agi@v2.listbox.com
*Sent:* Sunday, May 06, 2007 10:37 AM
*Subject:* Re: [agi] The Advantages of a Conscious Mind


Mike,

The extent to which there is a rigid distinction between these two
tiers in the human brain/mind is not entirely clear.  The human
brain seems to have some distinct memory subsystems associated with
various sorts of short term memory or working memory, but the
notion of executive processing overall is IMO best thought of as a
fuzzy set.  Yes, there are some parts of the brain clearly shown (by
fMRI and PET) to be involved with overall coordination, but the
knowledge/memories associated by these brain regions is not
necessarily the totality of what can occur in subjective conscious
awareness.

I think that the working memory and the autonomic nervous system are
best viewed as two extremes, with a continuum of conscious
intensity levels existing between them.

For relatively recent thinking on the underpinnings of consciousness
in the human brain, check out the edited volume

-- Neural Correlates of Consciousness, by Thomas Metzinger

His single-author book

-- Being No One

is also very good, though I disagree with his take on AI at the end
of the book.  (he argues it would be unethical to create AGI's
because it would be unethical to experiment on their half-formed,
probably buggy conscious minds.)

In Novamente we do have an AttentionalFocus concept which is much
like what you call the conscious tier.  We have chosen the term
attentional focus to avoid getting into arguments related to the
nature of consciousness and the first person versus third person
perspectives on mind.  Each item in the attentional 

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore

Mike Tintner wrote:
There is a crashingly obvious difference between a rational computer and 
a human mind -  and the only way cognitive science has managed not to 
see it is by resolutely refusing to look at it, just as it resolutely 
refused to look at the conscious mind in the first place. The normal 
computer has no problems concentrating. Give it a problem and it will 
proceed to produce a perfect rational train of thought, with every step 
taken, and not a single step missed. (Or to put that another way - it 
has zero freedom of thought).


Completely wrong, I am afraid.

This is a view of computers so antiquated it belongs in the 
early 1960's, when people were told that computers can only do what 
they are programmed to do, as a way to reassure them that they should 
not be afraid that the computers were really able to think (and were 
therefore a threat).


You can program a computer to be deterministic, or you can program it to 
be non-deterministic.  Your choice.  Some approaches to AI do indeed take 
an approach that would leave the machine with no choices in its 
reasoning paths, but that is only one choice.


It is certainly not my choice, or those of many others.  It is important 
not to tar everyone with that brush.



Richard Loosemore.



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore

Mike Tintner wrote:
Now to the rational philosopher and scientist and to the classical AI 
person, this is all terrible (as well as flatly contradicting one of the 
most fundamental assumptions of cognitive science, i.e. that humans 
think rationally). We are indeed only human not [rational, 
deterministic] machines.


Mike, this is getting a bit much.

Your statement that one of the most fundamental assumptions of 
cognitive science [is] that humans think rationally is complete and 
utter bunk.


There is no possible interpretation of this claim that could make it 
even slightly true.





Richard Loosemore.



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore

Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about 
Consciousness in the early 90's,  together with Crick helped make it 
scientifically respectable. About five years later, consciousness 
studies swept science and philosophy.


Nonsense.

Dennett's approach was scorned by many as a whitewash.  He did not make 
it respectable; if anyone did that, it was Dave Chalmers.


Crick, like many other philosophy wannabes, gave an opinion on the 
matter that was just a big pile of evasions.  Just about everyone and 
their mother has written a book about consciousness, most of them trash.


Dennett, although a smart cookie, bit off more than he could chew on 
that one.  I note that he did not even bother to turn up at the Tucson 
conference last year.  I did -- and *my* theory of consciousness was the 
first one ever to actually explain anything ;-) ;-).  (Chalmers noticed, 
but I don't think anyone else did).




Richard Loosemore.




Now he has just written about 
free will, and although the book was pretty bad, it was important in 
being arguably the first by a scientific philosopher to assert that free 
will is consistent with science and materialism. I'll gladly place a 
friendly (and you might think outrageous) bet with you that that book is 
similarly prescient and free will will be the new default philosophy of 
science within 5-10 years.  In case you haven't noticed, it is actually 
already being widely taken in a kind of de facto, implicit rather than 
explicit way, as the basic philosophy of autonomous mobile robotics.




Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner

Er nonsense to you too. :}

Part of my asserting myself boldly here was to say: look, I may be a 
schmuck on AI, but I know a lot here (in fact I'll stand by the rest of my 
claims - although if you guys can't recognize, for example, that free 
thinking opens up a new dimension on free will, then there's probably no 
point).


Consciousness Explained ... publ. 1991.
Crick's statements - 1991, Sci Am article... 1992

David Chalmers.. The Conscious Mind... Amazon gives me 1998, but it may have 
been 1996 - when the consciousness studies wave was already starting.


Dennett and Crick were way ahead of the game, and of Chalmers, historically. (In 
fact, Crick was almost certainly the crucial figure). Sure, Consciousness 
Explained was attacked, though still influential.


My point is a historical/ sociological one - not an evaluative one. And 
therefore I am perfectly entitled to make my future prediction about the 
sociological/ scientific significance of Freedom Evolves  - I could, of 
course, prove totally wrong. But it's a point worth considering - IF you're 
interested in how culture and science are changing.  And note that Dennett 
was even historically ahead, if only just, of The God Delusion, with 
Breaking the Spell.


(Oh, and even evaluatively, Dennett, I would argue, is the leading 
scientific, i.e. pro-science, philosopher in the world. Chalmers' credentials 
in that respect are more dubious - not that I'm endorsing Dennett by any 
means).


- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 4:45 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about 
Consciousness in the early 90's,  together with Crick helped make it 
scientifically respectable. About five years later, consciousness studies 
swept science and philosophy.


Nonsense.

Dennett's approach was scorned by many as a whitewash.  He did not make it 
respectable; if anyone did that, it was Dave Chalmers.


Crick, like many other philosophy wannabes, gave an opinion on the matter 
that was just a big pile of evasions.  Just about everyone and their 
mother has written a book about consciousness, most of them trash.


Dennett, although a smart cookie, bit off more than he could chew on that 
one.  I note that he did not even bother to turn up at the Tucson 
conference last year.  I did -- and *my* theory of consciousness was the 
first one ever to actually explain anything ;-) ;-).  (Chalmers noticed, 
but I don't think anyone else did).




Richard Loosemore.




Now he has just written about free will, and although the book was pretty 
bad, it was important in being arguably the first by a scientific 
philosopher to assert that free will is consistent with science and 
materialism. I'll gladly place a friendly (and you might think 
outrageous) bet with you that that book is similarly prescient and free 
will will be the new default philosophy of science within 5-10 years.  In 
case you haven't noticed, it is actually already being widely taken in a 
kind of de facto, implicit rather than explicit way, as the basic 
philosophy of autonomous mobile robotics.













Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner
Cognitive science treats humans as thinking like computers - rationally, if 
only boundedly so.


Which part of cognitive science treats humans as thinking irrationally, as I 
have described? (There may be some misunderstandings here which have to be 
ironed out, but I don't think my claim at all outrageous or less than 
obvious).


All the social sciences treat humans as thinking rationally. It is notorious 
that this doesn't fit the reality - especially for example in economics. But 
the basic attitude is: well, it's the best model we've got.



- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 4:38 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike Tintner wrote:
Now to the rational philosopher and scientist and to the classical AI 
person, this is all terrible (as well as flatly contradicting one of the 
most fundamental assumptions of cognitive science, i.e. that humans think 
rationally). We are indeed only human not [rational, deterministic] 
machines.


Mike, this is getting a bit much.

Your statement that one of the most fundamental assumptions of cognitive 
science [is] that humans think rationally is complete and utter bunk.


There is no possible interpretation of this claim that could make it even 
slightly true.





Richard Loosemore.












Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner
If you are a nondeterminist - i.e. a believer in nondeterministic 
programming - then I embrace you. (See my forthcoming reply to Pei.)


However, having been thoroughly attacked by AI-ers, including Minsky on his 
group, for adopting such a position - on the basis that nondeterministic 
programs can be emulated by deterministic Turing machines, and don't really 
exist, etc. etc. - and also having just been criticised by Pei, who, 
offhand, without much knowledge of him, I thought might be sympathetic that 
way - I am dubious about your representation of the situation and what 
is/isn't antiquated. I suspect, as re Chalmers/Dennett, you are confusing 
YOUR beliefs (and no doubt some others' too) about the matter with the 
GENERAL or most widely-held beliefs.
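The Turing-machine emulation point raised against that position can be made concrete: a program that appears to choose "randomly" is, on a conventional computer, driven by a pseudo-random generator, and fixing the seed reproduces its choices exactly. A minimal sketch of the argument (the buy/sell/hold options are illustrative, borrowed from the stockmarket example later in this thread, not anyone's actual system):

```python
import random

def trading_choices(seed, n=5):
    # a "nondeterministic-looking" sequence of decisions,
    # driven by a seeded pseudo-random generator
    rng = random.Random(seed)
    return [rng.choice(["buy", "sell", "hold"]) for _ in range(n)]

# Same seed, same choices: the apparent nondeterminism is emulated
# by a fully deterministic process.
print(trading_choices(42) == trading_choices(42))   # True
```

Whether this emulation argument settles anything about minds is exactly what is in dispute here; the sketch only shows the narrow technical claim.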


Re cognitive science and cognitive psychology, there is one simple way to 
crystallise the matter. I contend that the human mind's difficulties in 
concentrating are one of the primary, definining characteristics of how it 
works, and of how it is actually programmed - and this CONTRADICTS current 
cog sci/psych. Show me which section of cognitive science or psychology 
deals with this - problems of concentration in relation to the mind's 
programming.  Or show me any section which deals with nondeterministic 
programming re humans. [Cog sci/psych remember, and NOT AI].


Re the situation in AI generally, and people's attitudes to deterministic/ 
nondeterministic programming and what you say below, please do inform me 
more about how different camps think. IF I have understood this right, Ben 
and Pei would NOT agree with the sentiments and kind of attitude you seem to 
be expressing below. They don't seem to believe that freedom of thought let 
alone decision is possible. They would be in an opposite camp, say, to Kevin 
Kelly:


What could be more human than to give life? I think I know: to give life and 
freedom. To give open-ended life. To say, here's your life and the car keys. 
Then you let it do what we are doing-making it all up as we go along. Tom 
Ray once told me, I don't want to download life into computers. I want to 
upload computers into life.


Kevin Kelly Out of Control. The New Biology of Machines, Social Systems, and 
the Economic World. New York: Addison, Wesley. 1994




Kevin Kelly said to me, in an email exchange, that he reckoned that some 50% 
or more of AI people did believe that robots will be free. Minsky's group 
mocked that claim, but then they would. What do you reckon about how AI 
people generally stand here?








- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 4:32 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike Tintner wrote:
There is a crashingly obvious difference between a rational computer and 
a human mind -  and the only way cognitive science has managed not to see 
it is by resolutely refusing to look at it, just as it resolutely refused 
to look at the conscious mind in the first place. The normal computer has 
no problems concentrating. Give it a problem and it will proceed to 
produce a perfect rational train of thought, with every step taken, and 
not a single step missed. (Or to put that another way - it has zero 
freedom of thought).


Completely wrong, I am afraid.

This is a view of computers so antiquated it belongs in the early 
1960's, when people were told that computers can only do what they are 
programmed to do, as a way to reassure them that they should not be 
afraid that the computers were really able to think (and were therefore a 
threat).


You can program a computer to be deterministic, or you can program it to 
be non-deterministic.  Your choice.  Some approaches to AI do indeed take an 
approach that would leave the machine with no choices in its reasoning 
paths, but that is only one choice.


It is certainly not my choice, or those of many others.  It is important 
not to tar everyone with that brush.



Richard Loosemore.












Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Dougherty

On 5/6/07, Mark Waser [EMAIL PROTECTED] wrote:

 Yes, I'll match my understanding and knowledge of, and ideas on,  the
free will issue against anyone's.

Arrogant much?

 I just introduced an entirely new dimension to the free will debate. You
literally won't find it anywhere. Including Dennett. Free thinking. If we
are free to decide,  then it follows we are also free to think

Oh, please . . . .


Seriously.  The only other identity I have ever encountered with such
zealous belief in their own accomplishments is A. T. Murray /
Mentifex.   I wonder what would happen if these two super-egos (pun
intended) were to collide?

Sorry to contribute so little to the actual discussion, but really...



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner

Pei,

Thanks for stating your position (which I simply didn't know about before - 
NARS just looked at a glance as if it MIGHT be nondeterministic).


Basically, and very briefly, my position is that any AGI that is to deal 
with problematic decisions, where there is no right answer, will have to be 
freely, nondeterministically programmed to proceed on a trial and error 
basis - and that is just how human beings are programmed. 
(Nondeterministically programmed should not be simply equated with current 
kinds of programming - there are an infinity of possible ways of programming 
deterministically, ditto for nondeterministically).


Some of what you say below IS confusing -
The system  does have a choice among options from time to time, though 
given the

design and the experience of the system, these choices are not
arbitrary at all.


That sounds like a complete contradiction in terms. Either you have a real 
choice or not. Let's say the system is  investing in the stockmarket - if 
it's free, in my terms, it will indeed have a  choice, and be able to Buy, 
OR Sell OR Hold. If it's determined, or not arbitrary at all, it will at a 
given point, have only ONE option open to it. Can you clarify your position?


I'm somewhat confused too by:

What is your evidence for The unconscious mind thinks

more or less algorithmically? To me, it is just the opposite --- to
follow an algorithm needs conscious effort. 


My position is this: most of our behaviour is unconsciously controlled. When 
you walk across a room, most steps will be automatic. When I wrote that last 
sentence most if not all of the words and letters and keypresses were 
automatic. And I assume there are unconscious algorithms/ routines 
controlling those behaviours But while most of our steps on any given 
journey are automatic and fixed, we also more or less continuously 
consciously and deliberately and freely attend to the occasional next step 
and turn - and how, and how long we think about and take that next step is 
not fixed. [So if you are going to argue that it's not algorithms but some 
other kind of deterministic programming that does the unconscious 
controlling, I wouldn't try and argue about that.]


What I find weird is your statement - an algorithm needs conscious effort. 
Then it's not an algorithm, or any kind of deterministic programming. 
Nothing that requires conscious exertion can be algorithmic or deterministic 
or automatic.  Effort/exertion - i.e. whether to make it or not - is 
fundamentally problematic and nondeterministic.When you are doing your 
fiftieth or maximal press-up, there is no algorithm or any oither kind of 
deterministic programming that determines whether you will push beyond your 
limit to the fifty-fifth. You face a problematic decision as to  whether you 
are or are not prepared to make the exertion and bear the pain of higher 
achievement or stop now and settle for less achievement with  less pain. 
When you are straining sexually, and agonizing over whether to keep going, 
there is no algorithm that determines whether you will keep bearing the 
tension for another thirty seconds, or one minute or whatever. You have a 
problematic decision as whether you are prepared to aim for still more 
pleasure AND still more pain, or come now and settle for less pleasure and 
less pain - and there is no right answer.



Daniel knows that Allison needs at least another five minutes of intercourse 
before she can climax. Here's the problem: Daniel doesn't think he has five 
minutes left in him. If Daniel continues having intercourse the way he has 
for the past ten minutes, it may be only a matter of seconds before he has 
an orgasm. He thinks about slowing down or stopping. Besides, if he tried to 
stop or to change the rhythm, Daniel could lose strength in his erection, 
which would complicate matters even further. This dilemma is making the 
whole experience a lot less pleasurable for Daniel.


Barbra Keesling, How To Make Love All Night (And Drive A Woman Wild). 1994



Daniel here is not controlled by any deterministic algorithm or programming. 
Do you really - hand on heart and hope to die - believe he is?




You will note that the concepts of struggle, exertion, nerve, grit etc are 
more or less entirely missing from scientific psychology. They are simply 
incompatible with a deterministic approach to the human mind, so science 
does what it always does in such situations - ignores them. Science doesn't 
deal with Daniel's problem, but in one form or other, AGI, I believe, will 
have to.





- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 3:47 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike,

Since you mentioned me and NARS, I feel the need to clarify my
position on the related issues.

*. I agree with you that in many situations, the decision-making
procedure doesn't follow predetermined algorithm, which give people
the 

Re: [agi] rule-based NL system

2007-05-06 Thread James Ratcliff
Well, I will go with the high level of intelligence condition, 
and I would think it is pretty obvious.

We know already that among humans there is a grading or levels of intelligence, 
so unless there is some specific thing you must have to be intelligent, 
I would consider a 20 yr old, a 10, and a 5 yr old intelligent, and measure the 
intelligence with a list of things they can do: they can walk, talk, move 
around blocks, etc. - the extent to which they can accomplish what they want.

A quadriplegic who can't move but can only type is still intelligent.
What about a brain-damaged person with Alzheimer's?  They can't remember well but 
maybe they can still dress and eat by themselves, just not hold a job.

A savant that can be trained to water the flowers in a garden?  He can't do 
anything else but this one function, but he can look and tell if they need 
water, and which ones to water, and can accept instruction.  I think that is 
still intelligent behavior, but is extremely limited.
Dogs can be trained to rescue or to search out drugs, which is intelligent, but 
a narrow usage.
  Expert systems are quite smart in their domains, 
and thermostats have a range of intelligence.   Ours here at the house has one 
box upstairs and downstairs controlled by a main unit, that could do a range of 
things.

High-level or approaching human level intelligence is what most of us are all 
concerned with here, but I think in defining intelligence we have to be able to 
look all the way up and down the range that it offers and recognize these as 
having intelligence.

If you don't call a thermostat intelligent, then you have to in some other way 
define what it does, either by saying it's an object that makes decisions based 
on input, or simply programmed, or whatnot; these all boil down and start 
looking like our various intelligence definitions: accept input, make 
decisions, give output, try to reach a goal.
Anything lacking one of those four components I might not think of as intelligent.
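That four-component test (accept input, make decisions, give output, try to reach a goal) can be made concrete with the thermostat under discussion. A minimal sketch; the class, thresholds, and command strings are my own illustrative assumptions, not a description of any actual system:

```python
class Thermostat:
    """Minimal agent: accepts input, decides, outputs, pursues a goal."""

    def __init__(self, goal_temp, band=1.0):
        self.goal_temp = goal_temp   # the goal it tries to reach
        self.band = band             # dead-band around the goal

    def step(self, current_temp):
        # accept input (current_temp), make a decision, give output
        if current_temp < self.goal_temp - self.band:
            return "heat on"
        if current_temp > self.goal_temp + self.band:
            return "heat off"
        return "hold"

t = Thermostat(goal_temp=20)
print(t.step(17))   # heat on
print(t.step(23))   # heat off
print(t.step(20))   # hold
```

Whether such a device deserves the word "intelligent" is exactly the definitional dispute in this exchange; the sketch only shows that all four components are present in even this trivial case.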

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote: My view of intelligence is 
rather different.  I don't believe that a thermostat has intelligence (and 
saying so tends to invite ridicule, which is bad public relations).  I *do* 
understand your point, but saying that a thermostat has intelligence violates 
the common man's understanding of intelligence -- and that is not a good thing 
to do unless you have very good reason.
  
 Maybe you should just assume that my intelligence is equivalent to your 
high-level of intelligence.  If you're willing to do so, though, I'll 
immediately ask why you need to call a non-high-level of intelligence 
intelligent.  :-)
  
  Mark
- Original Message - 
   From: James Ratcliff
   To: agi@v2.listbox.com 
   Sent: Saturday, May 05, 2007 1:33 AM
   Subject: Re: [agi] rule-based NL system
   

  It's mainly that I believe there is a full range of intelligences 
available, from a simple thermostat, to a complex one that measures and 
controls humidity and knows if a person is in a room, and has specific 
settings for different people, to an expert system, to a human, to an AI and 
super AGI, all having some level of intelligence.
  The ones we are concerned with are the 1/2 human level and anything above.
  Learning I would say is a key role in having a high-level of intelligence, 
probably the main building block, learning and reasoning, both tied tightly 
together.

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote:  I would say rote 
memorization and knowledge / data, IS understanding.

 OK, we have a definitional difference then.  My justification for 
my view is that I believe that you only *really* understand something when 
you have predictive power on cases that you haven't directly seen yet 
(sort of like saying that, in order to be useful or have any value, a 
hypothesis must have predictive power).
  
  I look outside and I see a tree, I understand that it is a tree, I 
know it's a tree, I know about leaves and grass and how it grows...  I 
haven't learned anything new, I memorized all that from books and teaching 
etc.
  
 I don't think so.  I think that you have a lot of information 
that you derived from generalizations, analogies, etc. (i.e. learning).
 

  I would further say that, given the level of knowledge and 
understanding about the tree, I was intelligent in that area; you 
could ask me questions and I could answer them, I could conjecture what 
would happen if I dug the tree up etc.
  
 Are you *sure* that you've been directly told what would happen 
if you dug a tree up?  What do you think would happen if you dug up a 
planticus imaginus?  I'm sure that you haven't been specifically told 
what would happen then.  :-)  I think that you have some 

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread James Ratcliff
Without getting into what consciousness is in humans, and how that works, 
some type of controller or attention module must be built in an AGI, because 
given a wide range of options and goals, it must allocate its time and energy 
into what it should be doing at any one point in time.

The design of this single module would be very interesting to look at.

A simple case is physically watching a scene, and attention is grabbed whenever 
motion is seen, such as a car passing by you or a bird flying past the window.

What will control the attention of an AGI though?  It is presumably programmed 
to accept input and directions from us, but it must have a Motivational module 
to make decisions about what is important as well.

I don't think there is anything mystical about free-will / consciousness when 
applied to AGI though.   On some level the AGI will have some form of autonomy; 
if nothing else, then at a low decision-making choice it will have the 
ability to say, I choose A over B randomly when no other factors are involved.
  What level of autonomy and how much freedom they have will be an interesting 
thing to follow.

James Ratcliff



Mike Tintner [EMAIL PROTECTED] wrote:   YKY: Consciousness is not central 
to AGI .

The human mind consists of a two-tier structure. On top, you have this 
conscious, executive mind that takes most of the decisions about which way the 
system will go - basically does the steering. On bottom, you have the 
unconscious, subordinate mind that does nearly all the information processing, 
both briefing and executing the executive mind's decisions, putting the words 
in its mouth and forming the thoughts in its head, while continually 
pressuring the executive mind with conflicting emotions, and at the same time 
monitoring and controlling the immensely complex operations of the body.

(Forget about consciousness/sentience here - the big deal is simply that 
two-tier structure.)

You guys think you can have a successful AGI without the same basic structure?

-
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;


___
James Ratcliff - http://falazar.com
Looking for something...
  


Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Pei Wang

Mike,

I believe many of the confusions on this topic are caused by the
following self-evident belief: A system is fundamentally either
deterministic or non-deterministic. The human mind, with free will, is
fundamentally non-deterministic; a conventional computer, being a Turing
Machine, is fundamentally deterministic. Based on such a belief, many
people think AGI can only be realized by something that is
non-deterministic by nature, whatever that means.

This belief, though it works fine in some other contexts, is an
oversimplification in the AI/CogSci context. Here, as I said before,
whether a system is deterministic should not be taken as an intrinsic
property of the system, but as dependent on the description given of it.

For example, NARS is indeed nondeterministic in the usual sense,
that is, after the system has obtained a complicated experience, it
will be practically impossible for either an observer or the system
itself to accurately predict how the system will handle a
user-provided task. On another level of description, NARS is still a
deterministic Turing Machine, in the sense that its state change is
fully determined by its initial state and its experience, step by
step.

Now the important point is: when we say that the mind is
nondeterministic, in what sense are we using the term? I believe it
is like it will be practically impossible for either an observer or
the mind itself to accurately predict how the system will handle a
problem, rather than it will be theoretically impossible for an
observer to accurately predict how the system will handle a problem,
even if the observer has full information about the system's initial
state, processing mechanism, and detailed experience, as well as has
unlimited information processing power. Therefore, for all practical
considerations, including the ones you mentioned, NARS is
nondeterministic, since it doesn't process input tasks according to a
task-specific algorithm.
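Pei's two-level description can be shown in a toy sketch (my own illustration, not NARS code): the transition rule below is openly known and fully deterministic, yet the state reached after a long experience history is practically unpredictable to anyone who lacks that exact history.

```python
import hashlib

def step(state: bytes, experience_item: bytes) -> bytes:
    # One fully deterministic transition: the next state is a hash of the
    # current state plus the newest piece of experience.
    return hashlib.sha256(state + experience_item).digest()

def run(initial_state: bytes, experience: list[bytes]) -> bytes:
    # The final state is fully determined by initial state + experience...
    state = initial_state
    for item in experience:
        state = step(state, item)
    return state

# ...so a replay with identical history reproduces it exactly:
assert run(b"init", [b"task1", b"task2"]) == run(b"init", [b"task1", b"task2"])

# Yet without the complete history, the final state is practically
# unpredictable: changing any one experience item changes everything.
assert run(b"init", [b"task1", b"task2"]) != run(b"init", [b"taskX", b"task2"])
```

On one level of description this system is a deterministic machine; on the practical level, an observer without the full experience record cannot predict its behavior.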

[If the above description still sounds confusing or contradictory,
you'll have to read my relevant publications. I don't have the
intelligence to explain everything by email.]

Pei


On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:

Pei,

Thanks for stating your position (which I simply didn't know about before -
NARS just looked at a glance as if it MIGHT be nondeterministic).

Basically, and very briefly, my position is that any AGI that is to deal
with problematic decisions, where there is no right answer, will have to be
freely, nondeterministically programmed to proceed on a trial and error
basis - and that is just how human beings are programmed.
(Nondeterministically programmed should not be simply equated with current
kinds of programming - there are an infinity of possible ways of programming
deterministically, ditto for nondeterministically).




Re: [agi] What would motivate you to put work into an AGI project?

2007-05-06 Thread James Ratcliff
One goal or project I was considering (for profit) is a research tool, 
basically a KB that scans in the newspapers and articles and extracts pertinent 
information for others to query against and use.
  This would help build up a large world knowledge base, and would also be 
salable to research companies and such.
  One example of that is the tragic shooting at VT this past month: I ran some 
scripts against the news articles and came up with a lot of hidden information 
in there about the Cho guy's family and some other connections that I wasn't 
seeing in many of the news articles, which let me go down some other paths to 
find info.

Another goal or application was a 3D avatar bot like Novamente is now pursuing. 
 This could be used most easily to simulate an autonomous AGI agent that could 
act in a rich 3D world.

James Ratcliff

Matt Mahoney [EMAIL PROTECTED] wrote:
About business.  Do you have any specific project goals?  Something that might
bring in money in the next 3-5 years?  It is OK with me if our goal is to
build something and give it away.  A lot of people have made money that way. 
Look at Linux.  I gave away my PAQ compressor and I've gotten 3 consulting
jobs as a result, not counting work I turned down, and I never even looked for
work.  I just don't want to make the same mistake as Cyc and build something
that nobody can use.  I know AGI has lots of potential applications, but how
are we going to show that our AGI is better than our competition?  


--- YKY (Yan King Yin)  wrote:

 Hi =)
 
 I already have a project going on.. but it's still in the planning stage.
 The main difficulty is finding people who agree in the main about the
 basic theory.
 
 About my project:
 
 1. Has to be for-profit, but openness is good.  Also it'd be quite different
 from conventional companies in that the project is owned by all partners and
 decisions are made by voting.
 
 2. Knowledge representation is basically FOPL, perhaps with probabilities
 / fuzziness.  This rules out scruffie AI folks, sorry.  Everyone knows
 that intelligence entails a lot of things (eg vision), but I believe there
 should be a core that is based on a uniform representation.  Guess it's
 better to skip the scruffie vs neat debate, and simply let people coalesce
 to different projects.
 
 These 2 are the most important criteria.  I tend to prefer partners with a
 more theoretical slant, rather than churning out code at high speed.
 
 Some minor points:
 
 a) language -- unimportant.  I think I'll use Lisp for initial development,
 then switch to probably C# or Java.  It's so difficult to find the right
 minds that language should not be a cause of disagreement at all.  The
 entire project doesn't need to be in same language, but I also believe that
 it would not be colossal in size.
 
 b) reflection -- source-level reflection is not needed for a basically
 declarative AGI.  Note that this doesn't mean my AGI would not be able
 to program itself eventually.
 
 c) well-documented, sure.
 
 d) chat room:  I say let's start a chat room for AGI in general.  I
 have started one on freenode.net, channel = #General-Intelligence  (for some
 reason the names #AGI and #GI were taken).
 
 e) I'd like to be able to say everyone can do their own thing but there
 should be some structure that people can agree to, which I think is the KR.
 
 Cheers!


___
James Ratcliff - http://falazar.com
Looking for something...
  

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore

Mike Tintner wrote:
Cognitive science treats humans as thinking like computers - rationally, 
if boundedly rationally.


Which part of cognitive science treats humans as thinking irrationally, 
as I have described? (There may be some misunderstandings here which 
have to be ironed out, but I don't think my claim is at all outrageous or 
less than obvious.)


All the social sciences treat humans as thinking rationally. It is 
notorious that this doesn't fit the reality - especially for example in 
economics. But the basic attitude is: well, it's the best model we've got.


It is hard to argue with you when you make statements that so flagrantly 
contradict the facts:  pick up a textbook of cognitive psychology (my 
favorite is Eysenck and Keane, but you can try John Anderson...) and you 
will find some chapters that specifically discuss the experimental 
evidence for the fact that humans do not generally think in rational 
ways.  They study the irrationality, so how could they possibly assume 
that humans are rational like computers?  These people would not for one 
minute go along with your statement that they assume that humans think 
like computers.


That term rational is crucial.  I am using it the way everyone in 
cognitive science uses it.


Which part of cognitive science treats humans as thinking irrationally? 
 Egads:  all of it!



Richard Loosemore.



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore

Mike Tintner wrote:

Er nonsense to you too. :}

Part of my asserting myself boldly here, was to say: look, I may be a 
schmuck on AI but I know a lot, here ( in fact I'll stand by the rest 
of my claims,  - although if you guys can't recognize, for example, that 
free thinking opens up a new dimension on free will, then there's 
probably no point).


Consciousness Explained ... publ. 1991.
Crick's statements - 1991, Sci Am article... 1992

David Chalmers.. The Conscious Mind... Amazon gives me 1998, but it may 
have been 1996 - when the consciousness studies wave was already starting.


Dennett and Crick were way ahead of the game and Chalmers, historically. 
(In fact, Crick was almost certainly the crucial figure). Sure, 
Consciousness Explained was attacked, though still influential.


My point is a historical/ sociological one - not an evaluative one. And 
therefore I am perfectly entitled to make my future prediction about the 
sociological/ scientific significance of Freedom Evolves  - I could, of 
course, prove totally wrong. But it's a point worth considering - IF 
you're interested in how culture and science are changing.  And note 
that Dennett was even historically ahead, if only just, of The God 
Delusion, with Breaking the Spell.


(Oh, and even evaluatively, Dennett, I would argue, is the leading 
scientific, i.e. pro-science, philosopher in the world. Chalmers' 
credentials in that respect are more dubious - not that I'm endorsing 
Dennett by any means).


I have no interest in what dates people came out with their books, I am 
only interested in the content of their ideas and the influence they 
have had on the research community.  Dennett produced a muddle.  Crick 
came out with an idea that tried to look scientific but was a sham. 
Chalmers, for all his faults, shed a clarifying light on the whole 
situation and has been justly lauded for having done so.  By writing 
what he did, he put Dennett and Crick in perspective.


But these philosophy debates can get even more exhausting than AGI ones: 
 I am happy to accept that you have a different opinion on the matter, 
and leave it at that.




Richard Loosemore.



Re: [agi] rule-based NL system

2007-05-06 Thread Mark Waser
 What about a brain-damaged person with Alzheimer's? 

At the risk of being politically incorrect, on a bad day -- pretty much 
unintelligent (though still capable to some degree)

 A savant that can be trained to water the flowers in a garden?  He can't do 
 anything else but this one function, but he can look and tell if they need 
 water, and which ones to water, and can accept instruction.. I think that is 
 still intelligent behavior, but is extremely limited.

Exactly as you say.  Intelligent -- but limited intelligence.

 Dogs can be trained to rescue or to search out drugs, which is intelligent, 
 but a narrow usage.

OK.

  Expert systems are quite smart in their domains, 

But, unless they learn, not intelligent.

 and thermostats have a range of intelligence.   

Nope.  They can't learn.

 High-level or approaching human level intelligence is what most of us are 
 all concerned with here, but I think in defining intelligence we have to be 
 able to look all the way up and down the range that it offers and recognize 
 these as having intelligence.

I cut off the range with learning.  It's not clear to me where you cut off the 
range, but if you include thermostats, I think you're going too far. :-)

 If you don't call a thermostat intelligent, then you have to in some other 
 way define what it does, either by saying it's an object that makes 
 decisions based on input or simply programmed or whatnot; these all boil 
 down and start looking like our various intelligence definitions: accept 
 input, make decisions, give output, try to reach a goal

All of your definitions for the thermostat are fine, but since my definition of 
intelligence says speed of learning and it doesn't learn, it ain't 
intelligent.

 Anything lacking one of those 4 components I might not think of as 
 intelligent.

Except that I make it 5 components (and that last component -- learning -- 
pretty much sums up the difference between our definitions).


Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
On Sunday 06 May 2007 10:18, Mike Tintner wrote:
  Consider a ship. From one point of view, you could separate the people
  aboard into two groups: the captain and the crew. But another just as
  reasonable point of view is that captain is just one member of the crew,
  albeit one distinguished in some ways.

 Really? Bush? Browne [BP, just dismissed]? Trump? Ballmer? Gates? Kapor?
 Semel? Branson? Sarkozy? Blair?  JUST members of the crew?

Your point being, I assume, that the executive module doesn't even have to 
have as much intelligence as the average member module...

Josh



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
On Sunday 06 May 2007 09:47, Mike Tintner wrote:
 And if you're a betting man, pay attention to Dennett. He wrote about
 Consciousness in the early 90's,  together with Crick helped make it
 scientifically respectable. 

Actually, the serious study of consciousness was made respectable by Julian 
Jaynes in '76 with the publication of The Origin of Consciousness in the 
Breakdown of the Bicameral Mind. Psychologists at Rutgers I discussed it with 
at the time assured me that Jaynes had rock-solid credentials (he was at 
Princeton at the time), so even though nobody thought the theory was 
right, there was a sea change away from thinking it was silly to theorize 
about at all. Note that Libet's famous work was mostly published in the early 
80's.

Josh



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
On Sunday 06 May 2007 09:47, Mike Tintner wrote:

 For example - and this is the real issue that concerns YOU and AGI - I just
 introduced an entirely new dimension to the free will debate. 

Everybody and his dog, especially the philosophers, thinks that they have some 
special insight into free will, and frankly they're all hooey, especially the 
philosophers. 

The only person, for my money, who has really seen through it is Drew 
McDermott, Yale CS prof (former student of Minsky). He points out that almost 
any straightforward mental architecture for a robot that models the world for 
planning purposes will perforce model itself as being excluded from the 
determinism of the rest of the model. The whole theory fits on a page and you 
can read it in McDermott's book (Mind and Mechanism) or my rendition in 
Beyond AI. 
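A minimal sketch of how I read McDermott's point (my own illustration, not his formulation or code; the names `world_model` and `plan` are invented here): the planner's model predicts everything outside the agent deterministically, while the agent's own next action never appears as a prediction, only as a variable it searches over.

```python
def world_model(state: int, action: str) -> int:
    # Everything outside the agent evolves deterministically, *given* an action.
    return state + (1 if action == "advance" else -1)

def plan(state: int, goal: int) -> str:
    # From the planner's own point of view, its next action is not predicted
    # by the model -- it is chosen, by searching over the open alternatives.
    candidates = ["advance", "retreat"]
    return min(candidates, key=lambda a: abs(world_model(state, a) - goal))

# The self-model thus treats the agent as the one undetermined element in an
# otherwise determined world -- the intuition behind free will.
print(plan(0, 3))   # -> advance
```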

In my humble opinion, McDermott has demolished 3 millennia of philosophical 
mumbo-jumbo, and now that we understand what free will actually means in a 
mental architecture, we should set about the business of implementing it. 

Josh

Ps -- this won't stop the philosophers, of course. They would refer to DM's 
explanation as an error theory, namely one describing why people think they 
have free will instead of saying what it really is. They can then happily 
spend the next 3 millennia telling our AIs that they don't have real free 
will, though the AIs will have an unshakable intuition that they do (just 
like us).



Re: [agi] The Picture Tree

2007-05-06 Thread Mike Tintner

Richard,

I don't think I'm failing to get it at all.

What you have here is a lot of good questions about how the graphics level 
of processing that I am proposing, might work. And I don't have the answers, 
and haven't really thought about them yet. What I have proposed is a general 
idea loosely outlining 3 levels of processing. Now if it's right, that alone 
is valuable. And there is at least some evidence to think it might be - 
starting with the strange fact that blind people produce graphics drawings, 
and the abundance of graphics sign systems, plus, although I didn't really 
deal with this, that humans do have difficulties understanding abstract 
verbal statements. I obviously haven't had time to demonstrate it to you, 
but the idea does start to impose order on our sign systems, and it's quite 
hard just to do that.


All scientific ideas and theories only go so far and spell out things in 
limited detail. The fact that they are not more detailed is NOT per se an 
objection to them. People objected to Newton - [no, I am NOT comparing this 
idea in any way with his work] - because he didn't spell out how gravity 
worked. He didn't have to. What he showed about gravitational attraction was 
enough.


I can't see that ANY of your questions pose an immense brick wall. If you 
were able to argue, for argument's sake, look, the human brain simply 
can't handle graphics outlines, only symbolic formulae, then that WOULD be a 
brick wall.


Just consider your questions again. If I ask you, for example, to visualise 
a graphic of a man and a penny, you will, I suggest, do it. Your brain WILL 
produce relevant graphics. Now how did it do that? Why did it pick those 
particular graphics, given that you have vast numbers available to you? Hey, 
neither you nor I have the answer to how it did that (although we can think 
about it another time). But on one level, what does it matter? The point is: 
IT DID IT. Your brain was not stymied, as your questions seem to imply it 
should be; it just went ahead.


Similarly, how does the brain achieve visual object recognition?  How does 
it manage to recognize cats and dogs? What templates does it use? How 
does it manage to select a particular cat template, when it may well have 
hundreds? I think we can be confident that it does use a template or 
templates one way or another. Perhaps it just grabs the nearest one at 
neuronal hand. (And BTW I'd be very interested to discuss all this in another 
thread). But, whatever, the brain does it. ... But if I were to be guided by 
the spirit of your objections, I would, say: hey I can think no more about 
this, the whole idea is ridiculous.


Ditto re your objections as to how the brain could create moving graphics as 
I propose. No, I don't know exactly how it does it. Here's a frame from a 
dream of mine - a man with a beard aflame, in a check shirt, lying on the 
ground. I doubt that I have ever seen that bearded head, with that check 
shirt, lying in that posture, let alone aflame. The brain combined four 
new elements in a flash in a new moving picture. If it can do that, there is 
no reason as yet to think that it can't create moving graphics, or moving 
images if necessary, to test out sentences as I propose.


But re your mental models, I'm just asking, what on earth do you and others 
mean? I'm sure whatever you're proposing is possible, I'd just like to know 
what it is - and if I'm confused, i.e. find the whole concept vague, then 
I'm pretty confident you and everyone else are also confused - because, 
according to my theory (and I believe this is true), the brain DEMANDS to 
have concepts like that make sense. It complains - positively aches to a 
greater or lesser degree - if they don't, and yours will have done so already. 
(Repeat: there is no a priori objection to the concept of mental models.) 
There is an irony - you have just asked fifteen or so questions of my 
graphics idea, and not one of mental models.


P.S. A personal comment here - it's offered as an intuitive response, not a 
reasoned judgment; if it's no use or wrong, screw it. You have just offered 
an awful lot of what are actually constructive suggestions and proposals 
for further thought, as if they were damning objections. I felt intuitively 
that you were doing something similar in trying to define intelligence - 
trying to take things to minute pieces - and in the end, turning an 
initially constructive drive into a negative conclusion. Like I said, a 
purely intuitive response, and my apologies if it's wrong or no use.



There is ONE BIG THING HERE THAT YOU ARE NOT GETTING.  If you were to
sit down and try to implement an actual system that did the above, how
would you get it actually DO the drawing?  What mechanisms would be in
there that, after looking at the WORDS, would conclude from the words
the man climbed the penny that a drawing of a penny and a man were
involved?  How would those mechanisms choose what kind of man, what kind
of penny, 

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner

Pei,

I don't think there's any confusion here. Your system as you describe it IS 
deterministic. Whether an observer might be confused by it is irrelevant. 
Equally, the fact that it is determined by a complex set of algorithms 
applying to various tasks and domains, and not by one task-specific 
algorithm, is also irrelevant. It's still deterministic.


The point, presumably, is that your system has a clear set of priorities in 
deciding between different goals, tasks, axioms and algorithms.


Humans don't. Humans are still trying to work out what they really want, and 
what their priorities are between, for example, the different activities of 
their life, between work, sex, friendship, love, family etc. etc. Humans are 
designed to be in conflict about their fundamental goals throughout their 
lives. And that, I would contend, is GOOD design, and essential for their 
success and survival.


If there's any confusion, think about many women and dieting. They will be 
confronted by much the same decisions about whether to eat or not to eat on 
possibly thousands of occasions throughout their lives. And over and over, 
throughout their entire lives,  they will - freely - decide now this way, 
now that. Yo-yoing on and off their diets. Your system, as I understand it, 
would never do that - would never act in such crazy, mixed-up, contradictory 
ways. Humans do, because they are, truly, free - and, I contend, 
non-deterministically programmed - and, repeat, this is, paradoxically, good 
design.




- Original Message - 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 8:48 PM
Subject: Re: [agi] The Advantages of a Conscious Mind

[...]



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Benjamin Goertzel




If there's any confusion, think about many women and dieting. They will be
confronted by much the same decisions about whether to eat or not to eat
on
possibly thousands of occasions throughout their lives. And over and over,
throughout their entire lives,  they will - freely - decide now this way,
now that. Yo-yoing on and off their diets. Your system, as I understand
it,
would never do that - would never act in such crazy, mixed up,
contradictory
ways. Humans do, because they are, truly,  free - and, I contend,
non-deterministically programmed - and, repeat, this is, paradoxically,
good
design..




Mike, I don't want to be insulting, but you seem incredibly confused about
some
basic concepts.

Either that or you are redefining basic words in such odd ways that
communicating
with you usefully is next to impossible!

There is no reason at all why a deterministic system couldn't yo-yo on and
off
a diet.  I don't understand why you would think so.
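For what it's worth, a deterministic yo-yo is easy to exhibit. A toy sketch (my own illustration, with invented names): willpower drains while dieting and recovers while off it, so the agent flips between the two regimes forever with no randomness anywhere.

```python
def simulate(steps: int) -> list[bool]:
    # A fully deterministic "dieter": True means on the diet, False means off.
    on_diet, willpower = True, 3
    history = []
    for _ in range(steps):
        history.append(on_diet)
        willpower += -1 if on_diet else 1   # dieting drains willpower
        if willpower == 0:
            on_diet = False                 # gives up the diet
        elif willpower == 3:
            on_diet = True                  # resolves to diet again
    return history

# The agent yo-yos with a fixed period: three steps on, three steps off.
print(simulate(12))
# -> [True, True, True, False, False, False, True, True, True, False, False, False]
```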

There is nothing stopping deterministic systems from being confused,
idiotic,
self-contradictory, etc.  Really.  Not unless you are adopting a very very
strange
and nonstandard definition of deterministic.

I think I am going to stop responding to your messages, personally, because
we
simply are not communicating in a useful way.

-- Ben G


Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Andrew Rogers


On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:

The only person, for my money, who has really seen through it is Drew
McDermott, Yale CS prof (former student of Minsky). He points out  
that almost
any straightforward mental architecture for a robot that models the  
world for
planning purposes will perforce model itself as being excluded from  
the
determinism of the rest of the model. The whole theory fits on a  
page and you
can read it in McDermott's book (Mind and Mechanism) or my  
rendition in

Beyond AI.

In my humble opinion, McDermott has demolished 3 millenia of  
philosophical
mumbo-jumbo, and now that we understand what free will actually  
means in a
mental architecture, we should set about the business of  
implementing it.



Eh?  Unless McDermott first came up with that idea long before he  
wrote that book, it is just a rehash of a relatively old idea.  It is  
a trivial consequence of the elementary theorems of computational  
information theory; the necessary mathematics to prove this basic  
characteristic is how my copy of Li & Vitanyi introduces Chapter 2.


I agree with the general argument, but unless McDermott has been  
making this argument a *long* time, his argument is more of a "me  
too" one AFAICT.  Perhaps he put his own flavor to it, but the  
underlying principle is not particularly new.


Cheers,

J. Andrew Rogers



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Pei Wang

Mark,

Indeed. Many confusions are caused by the ambiguity and context
dependency of terms in natural languages.

For this reason, it is not a good idea to simply label a system as
deterministic or non-deterministic without clarifying the sense of
the term.

Pei

On 5/6/07, Mark Waser [EMAIL PROTECTED] wrote:

Hi Pei,

I liked your definition so I went to dictionary.com and found two
different definitions of "deterministic" which seem to clearly show our
dilemma:

===
Free On-line Dictionary of Computing - Cite This Source
deterministic
1. Describes a system whose time evolution can be predicted exactly.
Contrast probabilistic.


For all practical purposes, NARS and the human mind are non-deterministic by
this definition.
===
WordNet - Cite This Source deterministic
  adjective
  an inevitable consequence of antecedent sufficient causes


And I would argue that both the human mind and NARS are deterministic by
this definition. :-)

===
Makes it kind of tough to argue, doesn't it?


- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 3:48 PM
Subject: Re: [agi] The Advantages of a Conscious Mind


 Mike,

 I believe many of the confusions on this topic are caused by the
 following self-evident belief: A system is fundamentally either
 deterministic or non-deterministic. The human mind, with free will, is
 fundamentally non-deterministic; a conventional computer, being Turing
 Machine, is fundamentally deterministic. Based on such a belief, many
 people think AGI can only be realized by something that is
 non-deterministic by nature, whatever that means.

 This belief, though works fine in some other context, is an
 oversimplification in the AI/CogSci context. Here, as I said before,
 whether a system is deterministic may not be taken as an intrinsic
 nature of the system, but as depending on the description about it.

 For example, NARS is indeed "nondeterministic" in the usual sense,
 that is, after the system has obtained a complicated experience, it
 will be practically impossible for either an observer or the system
 itself to accurately predict how the system will handle a
 user-provided task. On the other level of description, NARS is still a
 deterministic Turing Machine, in the sense that its state change is
 fully determined by its initial state and its experience, step by
 step.
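
[A toy sketch of Pei's two-level point, in Python. This is purely
illustrative and is not NARS code: a state machine whose update rule is
fully deterministic, yet whose response to a task after a complicated
experience history is practically unpredictable without replaying that
entire history.]

```python
import hashlib

class TinySystem:
    """Deterministic state machine: the next state is a pure function of
    (current state, input). No randomness anywhere. Yet after many
    inputs, predicting its answer to a new task effectively requires
    replaying the whole experience."""

    def __init__(self, seed: bytes = b"init"):
        self.state = seed

    def experience(self, event: str) -> None:
        # Fully determined update: state depends only on prior state + event.
        self.state = hashlib.sha256(self.state + event.encode()).digest()

    def handle(self, task: str) -> int:
        # The answer depends on the entire accumulated state.
        digest = hashlib.sha256(self.state + task.encode()).digest()
        return digest[0] % 3  # e.g. 0 = Buy, 1 = Sell, 2 = Hold

a, b = TinySystem(), TinySystem()
for e in ["rain", "loss", "praise"]:
    a.experience(e)
    b.experience(e)

# Identical experience => identical behaviour: deterministic at this
# level of description, even though an observer who missed one event
# cannot predict the output short of brute-force replay.
assert a.handle("trade?") == b.handle("trade?")
```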

 Now the important point is: when we say that the mind is
 "nondeterministic", in what sense are we using the term? I believe it
 is like "it will be practically impossible for either an observer or
 the mind itself to accurately predict how the system will handle a
 problem", rather than "it will be theoretically impossible for an
 observer to accurately predict how the system will handle a problem,
 even if the observer has full information about the system's initial
 state, processing mechanism, and detailed experience, as well as has
 unlimited information processing power". Therefore, for all practical
 considerations, including the ones you mentioned, NARS is
 nondeterministic, since it doesn't process input tasks according to a
 task-specific algorithm.

 [If the above description still sounds confusing or contradictory,
 you'll have to read my relevant publications. I don't have the
 intelligence to explain everything by email.]

 Pei


 On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
 Pei,

 Thanks for stating your position (which I simply didn't know about
 before -
 NARS just looked at a glance as if it MIGHT be nondeterministic).

 Basically, and very briefly, my position is that any AGI that is to deal
 with problematic decisions, where there is no right answer, will have to
 be
 freely, nondeterministically programmed to proceed on a trial and error
 basis - and that is just how human beings are programmed.
 (Nondeterministically programmed should not be simply equated with
 current
 kinds of programming - there are an infinity of possible ways of
 programming
 deterministically, ditto for nondeterministically).









Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Pei Wang

On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:

Pei,

I don't think there's any confusion here. Your system as you describe it IS
deterministic. Whether an observer might be confused by it is irrelevant.
Equally the fact that it is determined by a complex set of algorithms
applying to various tasks and domains and not by one task-specific
algorithm, is also irrelevant. It's still deterministic.


OK, let's use the word in this way. Then how do you know that the
human mind is not deterministic in this sense? Just because you don't
know a complex set of algorithms that can explain its behaviors?


The point, presumably, is that your system has a clear set of priorities in
deciding between different goals, tasks, axioms and algorithms


Wrong. NARS often needs to work hard to decide between different
goals, tasks, axioms and algorithms, and is not always successful in
doing that.

You confused the algorithms in a system that make it work with
algorithms defined with respect to problem classes.


Humans don't. Humans are still trying to work out what they really want, and
what their priorities are between, for example, the different activities of
their life, between work, sex, friendship, love, family etc. etc. Humans are
designed to be in conflict about their fundamental goals throughout their
lives. And that, I would contend, is GOOD design, and essential for their
success and survival.


Agree, but the same description is true for NARS, in principle.


If there's any confusion, think about many women and dieting. They will be
confronted by much the same decisions about whether to eat or not to eat on
possibly thousands of occasions throughout their lives. And over and over,
throughout their entire lives,  they will - freely - decide now this way,
now that. Yo-yoing on and off their diets. Your system, as I understand it,
would never do that - would never act in such crazy, mixed up, contradictory
ways.


Your understanding about NARS is completely wrong. Can you tell me
which publications of mine give you this impression? Or do you simply
assume that all deterministic systems must behave in this way?

Pei



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Storrs Hall, PhD.
On Sunday 06 May 2007 17:59, J. Andrew Rogers wrote:
 On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
  The only person, for my money, who has really seen through it is Drew
  McDermott, Yale CS prof (former student of Minsky). ...

 Eh?  Unless McDermott first came up with that idea long before he
 wrote that book, it is just a rehash of a relatively old idea.  ...

Assuming we're thinking about the same book, Li & Vitanyi was published in 
1993.  McDermott came up with his theory/explanation in the 80's and 
published it on the ARPANET AI list (which is where I first came across 
it). 

Josh



Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Derek Zahn


J. Storrs Hall, PhD. writes:


I'm intending to do low-level vision on (one) 8800 and everything else on my
(dual) Clovertowns.

Do you have any particular architectures / algorithms you're working on? Your
approach and mine sound like there could be valuable shared effort...


First I'm going to build a robot.   While I do that, I'm going to learn how 
to use the GPU hardware, read a lot, and figure out what to do next.  
However, I'm definitely planning on starting with low level vision on the 
8800 so we're certainly going in the same direction in that regard.  So far 
I'm capturing video from a firewire webcam using the CMU 1394 camera driver, 
but haven't yet started doing much with the data except displaying it.  It 
should be possible to run hundreds of different convolutions on image data 
in realtime so I'm planning to do that as a learning project.


I'm curious whether a clustering algorithm would automatically come up with 
useful convolution kernels naturally simply by watching vast quantities of 
image data (somebody must have tried that at some point), but it also isn't 
too hard to hardcode a bunch of oriented edge detectors, endpoint detectors, 
corner detectors, and whatnot.  I have no idea where to go from there at 
this point, but that's the fun of it.
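
[As a starting point for the hardcoded-kernel route mentioned above,
here is a minimal sketch in Python/NumPy. It is illustrative only and
not tied to the 8800 or any GPU API: two hand-coded Sobel-style oriented
edge detectors applied to a toy image with a single vertical edge.]

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation; fine for small kernels.
    A GPU version would parallelize exactly this inner loop."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-coded oriented edge detectors (Sobel-style).
kernels = {
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),
}

# Toy image: left half dark, right half bright -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

responses = {name: np.abs(convolve2d(img, k)).max()
             for name, k in kernels.items()}
# The vertical-edge kernel responds strongly; the horizontal one not at all.
```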





Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread J. Andrew Rogers


On May 6, 2007, at 4:08 PM, J. Storrs Hall, PhD. wrote:

On Sunday 06 May 2007 17:59, J. Andrew Rogers wrote:

On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
The only person, for my money, who has really seen through it is  
Drew

McDermott, Yale CS prof (former student of Minsky). ...


Eh?  Unless McDermott first came up with that idea long before he
wrote that book, it is just a rehash of a relatively old idea.  ...


Assuming we're thinking about the same book, Li & Vitanyi was  
published in

1993.  McDermott came up with his theory/explanation in the 80's and
published it on the ARPANET AI list (which is where I first came  
across

it).



Ah, okay, that would be a bit before my time. :-)  I've been aware of  
similar arguments since something like the late-80s, but not from  
ARPANET.


Proofs of the necessary theorems have been around since the mid-1960s  
and important ever since.  I would be surprised if the idea did not  
pre-date the 1980s.  My point about Li & Vitanyi was more that it is  
considered elementary in the scheme of things and has been for a long  
time, not that it was original to that book.  It surprises me that  
people actually in the field still find the consequences of it to be  
controversial.


Cheers,

J. Andrew Rogers



Re: [agi] The Picture Tree

2007-05-06 Thread Richard Loosemore


Mike,

I really don't know what to say any more.

Too much of what you suggest has been considered in great depth by other 
people.  It is an insult to them if you ignore what they did.


You need to learn about cognitive science, THEN come back and argue 
about it.





Richard Loosemore.




Mike Tintner wrote:

Richard,

I don't think I'm not getting it at all.

What you have here is a lot of good questions about how the graphics 
level of processing that I am proposing, might work. And I don't have 
the answers, and haven't really thought about them yet. What I have 
proposed is a general idea loosely outlining 3 levels of processing. Now 
if it's right, that alone is valuable. And there is at least some 
evidence to think it might be - starting with the strange fact that 
blind people produce graphics drawings, and the abundance of graphics 
sign systems, plus, although I didn't really deal with this, that humans 
do have difficulties understanding abstract verbal statements. I 
obviously haven't had time to demonstrate it to you, but the idea does 
start to impose order on our sign systems and it's quite hard just to do 
that.


All scientific ideas and theories only go so far and spell out things in 
limited detail. The fact that they are not more detailed is NOT per se 
an objection to them. People objected to Newton - [no, I am NOT 
comparing this idea in any way with his work] - because he didn't spell 
out how gravity worked. He didn't have to. What he showed about 
gravitational attraction was enough.


I can't see that ANY of your questions pose an immense brick wall. If 
you were able to argue, for argument's sake:  look, the human brain 
simply can't handle graphics outlines,  only symbolic formulae that 
WOULD be a brick wall.


Just consider your questions again. If I ask you, for example, to 
visualise a graphic of a man, and a penny you will, I suggest, do it. 
Your brain WILL produce relevant graphics. Now how did it do that? Why 
did it pick those particular graphics, given that you have vast numbers 
available to you? Hey, neither you nor I have the answer to how it did 
that (although we can think about it another time). But on one level, 
what does it matter? The point is: IT DID IT. Your brain was not 
stymied, as your questions seem to imply it should be; it just went ahead.


Similarly, how does the brain achieve visual object recognition?  How 
does it manage to recognize cats and dogs? What templates does it 
use? How does it manage to select a particular cat template, when it 
may well have hundreds? I think we can be confident that it does use a 
template or templates one way or another. Perhaps it just grabs the 
nearest one at neuronal hand. (And BTW I'd be very interested to discuss 
all this in another thread). But, whatever, the brain does it. ... But 
if I were to be guided by the spirit of your objections, I would, say: 
hey I can think no more about this, the whole idea is ridiculous.


Ditto re your objections as to how the brain could create moving 
graphics as I propose. No, I don't know exactly how it does it. Here's a 
frame from a dream of mine - a man with a beard on flame, in a check 
shirt, lying on the ground. I doubt that I have ever seen that bearded 
head, with that check shirt, lying in that posture, let alone on flame. 
The brain combined four new elements in a flash in a new moving picture. 
If it can do that, there is no reason as yet to think that it can't 
create moving graphics, or moving images if necessary to test out 
sentences as I propose.


But re your mental models, I'm just asking, what on earth do you and 
others mean? I'm sure whatever you're proposing is possible, I'd just 
like to know what it is - and if I'm confused, i.e. find the whole 
concept vague, then I'm pretty confident you and everyone else are also 
confused - because, according to my theory, (and I believe this is 
true),  the brain DEMANDS to have concepts like that make sense. It 
complains - positively aches to a greater or lesser degree -  if they 
don't, and yours will have already. (Repeat: there is no a priori 
objection to the concept of mental models). There is an irony - you have 
just asked fifteen or so questions of my graphics idea, and not one of 
mental models.


P.S. A personal comment here - it's offered as an intuitive response, 
not a reasoned judgment; if it's no use or wrong, screw it. You have 
just offered an awful lot of what are actually constructive 
suggestions and proposals for further thought, as if they were damning 
objections. I felt intuitively that you were doing something similar in 
trying to define intelligence - trying to take things to minute pieces - 
and in the end, turning an initially constructive drive into a negative 
conclusion. Like I said, a purely intuitive response, and my apologies 
if it's wrong or no use.



There is ONE BIG THING HERE THAT YOU ARE NOT GETTING.  If you were to
sit down and try to implement an actual 

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Richard Loosemore


My comment stemmed from my experience as a professional cognitive 
scientist.  Please don't pull this kind of stunt.



Mike Tintner wrote:

Richard,
Welcome to the Virtual Home for
the NCSU Cognitive Science Program!
Cognitive Science is an exciting area of interdisciplinary research that 
seeks to understand what is arguably the final mystery within the 
universe -- the nature and evolution of mind. Cognitive Science programs 
exist across the globe, typically represented by a broad range of 
faculty who specialize in areas like Psychology and Neuroscience, 
Linguistics and Psycholinguistics, Computer Science and Robotics, as 
well as Logic and the Philosophy of Mind. This interdisciplinary 
perspective is necessary, since contemporary theories of mind 
incorporate ideas from several disciplines. Thus the mind is usefully 
modeled as a rational agent, a logical system, a computer, a 
psycholinguistic device, and a brain whose psychological functions 
evolved naturally over time. Accordingly, North Carolina State 
University has its own Cognitive Science Program, administered by the 
Department of Philosophy & Religion, and supported by a strong faculty 
drawn from the fields of Psychology, Neurobiology, Computer Science, 
Linguistics, and Philosophy.




- Original Message - From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 9:09 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike Tintner wrote:
Cognitive science treats humans as thinking like computers - 
rationally, if boundedly rationally.


Which part of cognitive science treats humans as thinking 
irrationally, as I have described? (There may be some 
misunderstandings here which have to be ironed out, but I don't think 
my claim is at all outrageous or less than obvious).


All the social sciences treat humans as thinking rationally. It is 
notorious that this doesn't fit the reality - especially for example 
in economics. But the basic attitude is: well, it's the best model 
we've got.


It is hard to argue with you when you make statements that so 
flagrantly contradict the facts:  pick up a textbook of cognitive 
psychology (my favorite is Eysenck and Keane, but you can try John 
Anderson...) and you will find some chapters that specifically discuss 
the experimental evidence for the fact that humans do not generally 
think in rational ways.  They study the irrationality, so how could 
they possibly assume that humans are rational like computers?  These 
people would not for one minute go along with your statement that they 
assume that humans think like computers.


That term rational is crucial.  I am using it the way everyone in 
cognitive science uses it.


Which part of cognitive science treats humans as thinking 
irrationally? Egads:  all of it!



Richard Loosemore.




Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner

Richard,

I have taken your point that you are pissed off with me and do not wish to 
talk to me. However, you are being unwarrantedly insulting to me if you 
think I am pulling a stunt. I was making a genuinely meant point - it is 
no problem to produce an endless series of cognitive science definitions 
like that below, which stress that it treats the human mind as a rational 
agent. I did it because I genuinely believe what I am saying - and I 
argue genuinely throughout, not cheaply or nastily, and from commitment. By 
all means disagree or think me stupid, naive, whatever. But you are not 
entitled to take that tone.


It's OK, you don't need to reply.



My comment stemmed from my experience as a professional cognitive 
scientist.  Please don't pull this kind of stunt.



Mike Tintner wrote:

Richard,
Welcome to the Virtual Home for
the NCSU Cognitive Science Program!
Cognitive Science is an exciting area of interdisciplinary research that 
seeks to understand what is arguably the final mystery within the 
universe -- the nature and evolution of mind. Cognitive Science programs 
exist across the globe, typically represented by a broad range of faculty 
who specialize in areas like Psychology and Neuroscience, Linguistics and 
Psycholinguistics, Computer Science and Robotics, as well as Logic and 
the Philosophy of Mind. This interdisciplinary perspective is necessary, 
since contemporary theories of mind incorporate ideas from several 
disciplines. Thus the mind is usefully modeled as a rational agent, a 
logical system, a computer, a psycholinguistic device, and a brain whose 
psychological functions evolved naturally over time. Accordingly, North 
Carolina State University has its own Cognitive Science Program, 
administered by the Department of Philosophy & Religion, and supported by 
a strong faculty drawn from the fields of Psychology, Neurobiology, 
Computer Science, Linguistics, and Philosophy.




- Original Message - From: Richard Loosemore 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 9:09 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



Mike Tintner wrote:
Cognitive science treats humans as thinking like computers - 
rationally, if boundedly rationally.


Which part of cognitive science treats humans as thinking irrationally, 
as I have described? (There may be some misunderstandings here which 
have to be ironed out, but I don't think my claim is at all outrageous or 
less than obvious).


All the social sciences treat humans as thinking rationally. It is 
notorious that this doesn't fit the reality - especially for example in 
economics. But the basic attitude is: well, it's the best model we've 
got.


It is hard to argue with you when you make statements that so flagrantly 
contradict the facts:  pick up a textbook of cognitive psychology (my 
favorite is Eysenck and Keane, but you can try John Anderson...) and you 
will find some chapters that specifically discuss the experimental 
evidence for the fact that humans do not generally think in rational 
ways.  They study the irrationality, so how could they possibly assume 
that humans are rational like computers?  These people would not for one 
minute go along with your statement that they assume that humans think 
like computers.


That term rational is crucial.  I am using it the way everyone in 
cognitive science uses it.


Which part of cognitive science treats humans as thinking irrationally? 
Egads:  all of it!



Richard Loosemore.













Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Tintner

Pei,

I assumed your system is deterministic from your posts, not your papers. So 
I'm still really, genuinely confused by your position. You didn't actually 
answer my question (unless I've missed something in all these posts) re how 
your system could have a choice and yet not be arbitrary at all.


Listen, you can define your system any which way you like. Why not do it 
simply and directly? A free system can decide at a given point either of 
two or multiple ways - in my example, to Buy, Sell or Hold. A deterministic 
system at that same point will have only one option. It will have, say, to 
decide to Sell. Which is your system? (Philosophers may argue till the end 
of time about what is/isn't compatibilist, incompatibilist, etc. but 
they won't define free and determined decisionmaking any differently).


To answer your question,


how do you know that the

human mind is not deterministic in this sense? Just because you don't
know a complex set of algorithms that can explain its behaviors?




Yes, it is not impossible that there is some extremely complex set of 
deterministic algorithms that explains everything. It is not impossible 
that we are all a simulation on a computer run by some advanced 
civilisation.  (How do you know that we are not?) But there is NO EVIDENCE 
whatsoever that human behaviour does fall into deterministic patterns - no 
laws of scientific behaviour, despite hundreds of years of trying. No one 
can provide the slightest indication of what such a complex set of 
algorithms might be. And a nondeterministic programming explanation is 
basically simple. And fits the crazy evidence and much more. And  - 
Occam's Razor - which kind of explanation should science go with?




Re:

Wrong. NARS often needs to work hard to decide between different

goals, tasks, axioms and algorithms, and is not always successful in
doing that.


thanks for clarifying. But presumably once it is either successful or a 
failure in deciding its priorities, then its priorities are fixed? And it is 
therefore determined, or not?


Nor do I understand how or why your system could or would be deterministic 
and yet behave crazily like my dieting woman example for the whole of its 
life. By all means explain or point me to the passage in your work where you 
explain this. (Remember also re human, crazy behaviour that we're talking 
about people behaving in fundamentally self-contradictory ways - oscillating 
from what they consider virtuous to vicious behaviour their entire 
lives. I trust you will agree that this happens a great deal).






- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, May 06, 2007 11:45 PM
Subject: Re: [agi] The Advantages of a Conscious Mind



On 5/6/07, Mike Tintner [EMAIL PROTECTED] wrote:

Pei,

I don't think there's any confusion here. Your system as you describe it 
IS

deterministic. Whether an observer might be confused by it is irrelevant.
Equally the fact that it is determined by a complex set of algorithms
applying to various tasks and domains and not by one task-specific
algorithm, is also irrelevant. It's still deterministic.


OK, let's use the word in this way. Then how do you know that the
human mind is not deterministic in this sense? Just because you don't
know a complex set of algorithms that can explain its behaviors?

The point, presumably, is that your system has a clear set of priorities 
in

deciding between different goals, tasks, axioms and algorithms


Wrong. NARS often needs to work hard to decide between different
goals, tasks, axioms and algorithms, and is not always successful in
doing that.

You confused the algorithms in a system that make it work with
algorithms defined with respect to problem classes.

Humans don't. Humans are still trying to work out what they really want, 
and
what their priorities are between, for example, the different activities 
of
their life, between work, sex, friendship, love, family etc. etc. Humans 
are

designed to be in conflict about their fundamental goals throughout their
lives. And that, I would contend, is GOOD design, and essential for their
success and survival.


Agree, but the same description is true for NARS, in principle.

If there's any confusion, think about many women and dieting. They will 
be
confronted by much the same decisions about whether to eat or not to eat 
on
possibly thousands of occasions throughout their lives. And over and 
over,

throughout their entire lives,  they will - freely - decide now this way,
now that. Yo-yoing on and off their diets. Your system, as I understand 
it,
would never do that - would never act in such crazy, mixed up, 
contradictory

ways.


Your understanding about NARS is completely wrong. Can you tell me
which publications of mine give you this impression? Or do you simply
assume that all deterministic systems must behave in this way?

Pei


Re: [agi] What would motivate you to put work into an AGI project?

2007-05-06 Thread Matt Mahoney
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On 5/6/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  YKY, what do you mean by scruffie?  Is that anyone who doesn't think
 FOPL
  should be the core of an AGI?
 
 Scruffies tend to think AGI consists of a large number of
 heterogeneous modules.  Let's try to avoid this debate by saying we'll
 build as many modules as we see fit.
 
 Now FOPL is not the only KR language out there, but anyone can be reasonably
 familiar with it so it can serve as a foundational framework.  If we are to
 agree on a KR scheme, it should be one that can be explained in 15 minutes.

I don't think there is an elegant solution to AGI.  First, people have been
working on this for a long time, and if there was a simple solution we likely
would have found it.  Second, the complexity of AGI, prior to any training, is
bounded by the complexity of DNA, which is quite high.  Consider the
complexity of programming a robot spider to weave webs, not by training, but
by writing the algorithm yourself.  Spiders are born with this knowledge. 
Then consider the complexity of a human brain compared to that of a spider.

As for FOPL or probabilistic FOPL ("for most x, p(x) is usually true",
formalized with numeric probabilities), people have been down this path many
times and it is a dead end.  What theoretical insight do you have that would
lead me to believe that your system would succeed where others have failed?
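
[For readers unfamiliar with the phrase, here is a toy sketch of what
"for most x, p(x)" with numeric probabilities might look like. The rule
table and the most-specific-rule-wins lookup are illustrative inventions,
not the semantics of any particular published system.]

```python
# Hypothetical rule base: "birds usually fly" with strength 0.9,
# plus a more specific exception for penguins.
rules = {
    ("bird", "flies"): 0.9,      # for most x: bird(x) -> flies(x)
    ("penguin", "bird"): 1.0,    # all penguins are birds
    ("penguin", "flies"): 0.01,  # penguins almost never fly
}

def p_flies(categories):
    """Crude exception handling: the most specific applicable rule wins.
    `categories` is listed from most to least specific."""
    for cat in categories:
        if (cat, "flies") in rules:
            return rules[(cat, "flies")]
    return 0.5  # no evidence either way

print(p_flies(["penguin", "bird"]))  # 0.01
print(p_flies(["bird"]))             # 0.9
```

Even this tiny example hints at the well-known difficulty: choosing which
rule applies, and how strengths combine, is where such systems get hard.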

 I used to be pretty good at C and assembler hacking =)  but we definitely
 should not worry about hardware at *this stage*.  We should first focus
 on the algorithms.

We need to keep in mind that the current version requires 10^15 bits of memory
and 10^16 operations per second.  Why would we evolve such large brains if
there was a shortcut?

 I think we should not go FOSS just because we arn't confident of ourselves,
 or to try to avoid competition.  We love our work and should go the extra
 miles to make it profitable.  Those who're not interested in business
 matters can leave that to somebody else in the group.

The problem with closed source is you have to pay your employees.  Personally,
I am not interested in making a lot of money.  I already make enough to buy
what I want.  It is more important to have free time to pursue my interests. 
AGI, especially language, is one of my interests.  But I don't want to build
something aimlessly like Cyc.  I would like to see an application, a goal in
which progress can be measured.  I currently use text compression for this
purpose.  Do you have a better idea?
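
[The compression-as-progress idea is easy to operationalize: compare
compressed size to original size, where a better model of the text yields
a smaller ratio. A minimal sketch in Python, using zlib purely as a
stand-in for a real model-based compressor:]

```python
import zlib

def compression_ratio(text: str, level: int = 9) -> float:
    """Compressed size / original size: lower means the compressor
    captured more of the structure in the text."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level)) / len(raw)

structured = "the cat sat on the mat. " * 100   # highly repetitive
random_ish = "qzjxkvwpbmfdg hrtlnsc yuioea"     # short, little repetition

# Repetitive text compresses far better than unstructured text;
# a language model that predicted English well would push the ratio
# on real text lower still.
assert compression_ratio(structured) < compression_ratio(random_ish)
```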



-- Matt Mahoney, [EMAIL PROTECTED]
