Re: [agi] reasoning & knowledge

2008-03-13 Thread Bob Mottram
On 13/03/2008, Linas Vepstas <[EMAIL PROTECTED]> wrote:
>  >  object itself. How, say, do you get from a human face to the distorted
>  >  portraits of Modigliani, Picasso, Francis Bacon, Scarfe, or any 
> cartoonist?
>  >  By logical or mathematical formulae?
>
>
> Actually, yes. Computer vision processing has always been based
>  on mathematical formulae.


Computer vision is really all about logic and mathematical formulae.
Many of the things which you think you can see actually don't exist in
the raw data, but are suggested by the relative geometry of features.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-03-13 Thread Linas Vepstas
On 14/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Pei: > Though many people assume "reasoning" can only been applied to
>
> > "symbolic" or "linguistic" materials, I'm not convinced yet, nor that
>  > there is really a separate "imaginative reasoning" --- at least I
>  > haven't seen a concrete proposal on what it means and why it is
>  > different.
>  >
>
>  I suspect - and correct me - that you haven't thought much at all about this
>  whole area of imaginative and visual reasoning - i.e. how one image is drawn
>  from another, or  how someone delineates a drawing of an object from the
>  object itself. How, say, do you get from a human face to the distorted
>  portraits of Modigliani, Picasso, Francis Bacon, Scarfe, or any cartoonist?
>  By logical or mathematical formulae?

Actually, yes. Computer vision processing has always been based
on mathematical formulae.

> Which parts of the face do logic or
>  semantic networks tell you to highlight or leave out or transpose or smudge
>  or overlay, or what to blur, and what to sharpen?

There are many entertainting computer programs that "draw" in
an "artistic way", given a snapshot of a face.  Its all just math
under th covers.

> Which of the continuously
>  changing expressions on a person's face does logic tell you are most
>  representative of their personality?

This is learned over many years.

In defense of Pei Wang's NARS, the foundations that he lays down there
are broad enough to encompass the mathematics of image processing.

When he writes (A->B, B->C) |- A->C  in NARSese, don't make the mistake
that A,B,C are single-bit true-false values. These can be
multi-gigabit structures
with all sorts of complexities embedded inside of them, including ideas that
"A looks like a smiling face" at the visual level (or symbolic level, or both,
including info about what the face might look like in darkness, light,
happy or sad, etc..)

Also, don't mistake logic and computation. While computers are anchored
in boolean logic, they can do gazillions of ops per second; Computer
programmers don't push around single bits at a time.  And so also in
NARSese -- something like (A->B, B->C) |- A->C is one op; yet one might
be performing gazillions of these just to recognize one face.

There is nothing in the foundations of NARS that prevents "imaginative"
reasoning, certainly not based on my understanding.

--linas

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Matt Mahoney

--- Mike Tintner <[EMAIL PROTECTED]> wrote:

> 
> >Vlad:> Don't you know about change blindness and the like? You don't 
> >actually
> > see all these details, it's delusional. You only get the gist of the
> > scene, according to current context that forms the focus of your
> > attention. Amount of information you extract from watching a movie is
> > not dramatically bigger than what you extract from reading a book.
> 
> Vlad,
> Are you seriously trying to tell me that science knows how we see? You've 
> actually just stated a set of huge assumptions, right? For argument's sake, 
> you might be right. But that last sentence is plucked out of thin air, 
> right? You can barely begin to account for what those 30 odd areas of visual
> cortex are doing, or how much detail and what detail they are extracting 
> from scenes. And Ai systems are not much better than blind.

Vlad is right.  In studies of human long term memory, the learning rate is
about 1 bit per second whether it is words or pictures.


-- Matt Mahoney, [EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Richard Loosemore

Bob Mottram wrote:

On 29/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:

 consciousness is a continuously moving picture with the other senses 
continuous too


There doesn't seem to be much evidence for this.  People with damage
to MT, or certain types of visual migrane, see the world as a slow
jerky series of snapshots (like looking at a webcam with a low frame
rate).  The temporal resolution of consciousness may actually be quite
slow - on the 0.5 second time frame as originally described by Grey
Walter.


Oh, I don't know about that:  a good musician can hear the difference 
between a string of 64th notes (hemidemisemiquavers) and the same string 
with a 64th rest in the middle...


At d=120 that would be a fair bit more than 0.5 sec resolution ;-)


Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Richard Loosemore

Mike Tintner wrote:


Sorry, yes the "run" is ambiguous.

I mean that what the human mind does is *watch* continuous movies -  but 
it then runs/creates its own extensive movies based on its experience in 
dreams - and, with some effort, replay movies in conscious imagination.


The point is: my impression is that in discussing this whole area,   
both in AI, philosophy & cog sci/psych, people tend to forget that 
consciousness is a continuously moving picture with the other senses 
continuous too, and tend to think, even if only implicitly, in terms of 
stills.


I am not at all sure where you get the impression that everyone thinks 
in terms of "stills" rather than movies.


I am acutely aware of the need to handle time-varying phenomena.  Such 
things are definitely built into my model (there are elements that 
capture sequences of other elements, and there are mechanisms for 
allowing these to generalize their scope to arbitrary timescales...).


I guess you might not see much about that aspect of intelligence in the 
conventional AI literature, but even they (with whom I have many issues) 
would say that nothing excludes that type of representation from being 
deployed in their formalisms.


You seem to find issues that "nobody is doing anything about in AI" 
quite frequently, but I am not convinced that the gaps you are finding 
are ever real.




Richard Loosemore















Mike Tintner wrote:
Er, just to clarify. You guys have, or know of, AI systems which run 
continuous movies of the world, analysing and responding to those 
movies with all the relevant senses, as discussed below, and then to 
the world beyond those movies, in real time (or any time, for that 
matter)?


I have no idea what you man by "run movies of the world" in this context.

You mean they have an internal world model?


Richard Loosemore







Mike Tintner wrote:
 > You're crossing a road - you track both the oncoming car and
your body
 > with all your senses at once - see a continuous moving image
of the
 > car, hear the noise of the engine and tires, possibly smell
it if
 > there's a smell of gasoline, have a kinaesthetic sense of
your body in
 > relation to the car, including a sense of up/down, 
left/right etc


*/Richard Loosemore <[EMAIL PROTECTED]>/* wrote:

I have been working on getting exactly that sort of cognitive
system
since the mid 1980s.

I don't know: perhaps you think it is especially difficult
because you
have not done much work on it.

Conventional approaches to AI may well have trouble in this
area, but
since my approach has been directed at these kinds of issues
since the
very beginning, to me it looks relatively straightforward in
principle.

The real issues are elsewhere.

Richard Loosemore

I agree!  The significant obstacles are elsewhere.  The integration
of ideas and the ability to index large volumes of information so
that recognition systems can find them quickly are two problems that
I see as particularly difficult.  Even if we were able to show how
to get the job done for a special case it would not necessarily
translate into a feasible and extensible general program.
Jim Bromer




Be a better friend, newshound, and know-it-all with Yahoo! Mobile.
Try it now.





*agi* | Archives 
 | Modify
 Your Subscription [Powered by
Listbox] 





No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.5.516 / Virus Database: 269.21.1/1302 - Release Date:
2/27/2008 4:34 PM


*agi* | Archives  
 | Modify 
 Your Subscription [Powered by 
Listbox] 




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



--
No virus found in this incoming message.
Checked by AVG Free Edition. Version: 7.5.516 / Virus Database: 
269.21.1/1302 - Release Date: 2/27/2008 4:34 PM







Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Mike Tintner


Vlad:> Don't you know about change blindness and the like? You don't 
actually

see all these details, it's delusional. You only get the gist of the
scene, according to current context that forms the focus of your
attention. Amount of information you extract from watching a movie is
not dramatically bigger than what you extract from reading a book.


Vlad,
Are you seriously trying to tell me that science knows how we see? You've 
actually just stated a set of huge assumptions, right? For argument's sake, 
you might be right. But that last sentence is plucked out of thin air, 
right? You can barely begin to account for what those 30 odd areas of visual 
cortex are doing, or how much detail and what detail they are extracting 
from scenes. And Ai systems are not much better than blind.


I suggest you're uncharacteristically leaping to an absurdly audacious 
position here - and the only reason can be prejudice. The truth is that as a 
culture we are just at the beginning of visual/cinematic literacy, and 
knowing how to think about movies - something that is about to change v. 
radically in the next 10 years, as for the 1st time it becomes as easy for 
everyone to manipulate a movie, as it became to handle the printed book. 
That last sentence of yours is an old-style literate mind talking - you 
extract a huge amount of info. from a movie - you just aren't aware of it, 
because you don't begin to know how to analyse it - there are no words for 
that info.


How do you think a person can fall in love with another person in just a few 
minutes of talking to them (or not even talking at all)? How does their 
brain get them to do that - without the person having any conscious 
understanding of why they're falling? By analysis of a few words that the 
other person says (& what if they don't say anything at all)?  Well, if you 
don't know how that process works, then maybe there's a lot else here you 
don't know - and it might be better to keep an open mind. 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-29 Thread Mike Tintner
Robert:
I think it would be more accurate to say that technological meme evolution was 
caused by the biological evolution, rather than being the extension of it, 
since they are in fact two quite different evolutionary systems, with different 
kinds of populations/survival conditions. 

I would say that in some sense, there is already a machine species, even if not 
independent. This machine species just have not yet found a way of staying 
alive and breed outside human minds. 

Is this a helpful perspective? :-)... 


Robert,

Not quite. Yes technological evolution is much faster and is a phenomenon unto 
itself. But *we're* the ones who are going/ thinking faster (although they 
alter us in turn). But they - machines - don't truly evolve yet at all.  They 
have no intentions, aren't changing themselves - they can't stand up on 
savannahs, (or whatever we did), crawl out of the sea etc. as living species 
did, and aren't fiddling with their genomes.

I think it's important to make the distinction for everyone's sake (esp. AGI - 
underlined several times) because "living machines" - truly "autonomous mobile 
robots" involve a whole different paradigm and way of thinking, than existing 
machines, and we're just beginning to get our heads round that paradigm.

For example, living machines will like living creatures have to be 
"psychoeconomies", having to conduct a whole set of activities with limited 
resources in real time without any ability to switch off ever again. I doubt 
that anyone's truly trying to construct either AGI's or robots along those 
lines - and before you leap to contradict, you need to think about that - i.e. 
what's entailed in being alive - v. carefully

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Vladimir Nesov
Mike,

Don't you know about change blindness and the like? You don't actually
see all these details, it's delusional. You only get the gist of the
scene, according to current context that forms the focus of your
attention. Amount of information you extract from watching a movie is
not dramatically bigger than what you extract from reading a book.

Bob was talking about the very real phenomenon - if we disable area in
your brain that captures movement, you will observe a slide show and
it might help you to get away from this movies-in-the-head fix that
you've got.

On Fri, Feb 29, 2008 at 2:36 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Eh? Move your hand across the desk. You see that as a series of snapshots?
>  Move a noisy object across. You don't see a continuous picture with a
>  continuous soundtrack?
>
>  Let me give you an example of how impressive I think the brain's powers here
>  are. I've been thinking about metaphor and the superimposition/
>  transformation of two images involved. "The clouds cried" - that sort of
>  thing. Then another one came up: "bicycle kick." Now technically, I think
>  that's awesome - because to arrive at it, the brain has to superimpose two
>  *movie* clips.
>
>  Look at the football kick:
>
>  http://www.youtube.com/watch?v=3NCWQr47bK0
>
>  and then look at the action of cycling. (In fact that superimposition of
>  clouds and eyes crying is also of movie clips - and so are a vast amount of
>  metaphors - but I hadn't really noticed it).
>
>  Try and tell me how current visual systems might make that connection.
>
>  And I would assert - and am increasingly confident - that the grammar of
>  language - how we put words together in whatever form - is based on cutting
>  together internal *movies* in our head - not still images,but movies.
>
>  They don't teach moviemaking in AI courses do they?
>
>
>
>  "Bob Mottram": Mike Tintner <[EMAIL PROTECTED]> wrote:
>  >>  consciousness is a continuously moving picture with the other senses
>  >> continuous too
>  >
>  > There doesn't seem to be much evidence for this.  People with damage
>  > to MT, or certain types of visual migrane, see the world as a slow
>  > jerky series of snapshots (like looking at a webcam with a low frame
>  > rate).  The temporal resolution of consciousness may actually be quite
>  > slow - on the 0.5 second time frame as originally described by Grey
>  > Walter.
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Vladimir Nesov
[EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-29 Thread Robert Wensman
>
>
> d) you keep repeating the illusion that evolution did NOT achieve the
> airplane and other machines - oh yes, it did - your central illusion here
> is
> that machines are independent species. They're not. They are
> EXTENSIONS  of
> human beings, and don't work without human beings attached. Manifestly
> evolution has taken several stages to perfect tool/machine-using species -
> of whom we are only the latest version - I refer you to my good colleague,
> the tool-using-and-creating Caledonian crow.
>
> Yes, somehow, we are going to create the first independent machine species
> -
> but there's a big unanswered set of questions as to how .


It can be said that the emergence of human intelligence and human cultures
set of another kind of technological evolution on top of the biological one.
That these two forms of evolution can be seen as separate, can be explained
as follows:

Biological evolution works through DNA sequences, genes. The survivability
of genes, depend on whether they are a part of successful biological
lifeforms.

Technological evolution works through sets of ideas, or memes that grow in
our culture and in the minds of human beings. The survivability of memes
depend on whether they are "appealing" to human minds. Whether a meme is
appealing or not, could depend on a number of factors, such as whether the
meme could help humans to achieve some of their goals, whether they are
self-contradicting, or whether we can understand them etc. Memes can even
survive outside the brain of humans, stored in books etc.

The reason why technological innovations works with such great strides, is
first because memes are produced at an incredible rate compared to genes;
they are software based instead of hardware based. But more importantly,
because memes can be selected based on logical deduction and the
consideration of a predicted future. Thus, the survivability of memes depend
on how well we believe them to help us in the future.

I think it would be more accurate to say that technological meme evolution *was
caused by *the biological evolution, rather than being *the extension of it*,
since they are in fact two quite different evolutionary systems, with
different kinds of populations/survival conditions.

I would say that in some sense, there is already a machine species, even if
not independent. This machine species just have not yet found a way of
staying alive and breed outside human minds.

Is this a helpful perspective? :-)...

One key issue here, is whether we want to consider hardware and software
evolutionary systems, or just hardware based evolutionary systems. Also, I
admit that maybe I am not using the concept of "species" in any stringent
way.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Mike Tintner
Eh? Move your hand across the desk. You see that as a series of snapshots? 
Move a noisy object across. You don't see a continuous picture with a 
continuous soundtrack?


Let me give you an example of how impressive I think the brain's powers here 
are. I've been thinking about metaphor and the superimposition/ 
transformation of two images involved. "The clouds cried" - that sort of 
thing. Then another one came up: "bicycle kick." Now technically, I think 
that's awesome - because to arrive at it, the brain has to superimpose two 
*movie* clips.


Look at the football kick:

http://www.youtube.com/watch?v=3NCWQr47bK0

and then look at the action of cycling. (In fact that superimposition of 
clouds and eyes crying is also of movie clips - and so are a vast amount of 
metaphors - but I hadn't really noticed it).


Try and tell me how current visual systems might make that connection.

And I would assert - and am increasingly confident - that the grammar of 
language - how we put words together in whatever form - is based on cutting 
together internal *movies* in our head - not still images,but movies.


They don't teach moviemaking in AI courses do they?

"Bob Mottram": Mike Tintner <[EMAIL PROTECTED]> wrote:
 consciousness is a continuously moving picture with the other senses 
continuous too


There doesn't seem to be much evidence for this.  People with damage
to MT, or certain types of visual migrane, see the world as a slow
jerky series of snapshots (like looking at a webcam with a low frame
rate).  The temporal resolution of consciousness may actually be quite
slow - on the 0.5 second time frame as originally described by Grey
Walter.



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Bob Mottram
On 29/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
>  consciousness is a continuously moving picture with the other senses 
> continuous too

There doesn't seem to be much evidence for this.  People with damage
to MT, or certain types of visual migrane, see the world as a slow
jerky series of snapshots (like looking at a webcam with a low frame
rate).  The temporal resolution of consciousness may actually be quite
slow - on the 0.5 second time frame as originally described by Grey
Walter.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Mike Tintner


Sorry, yes the "run" is ambiguous.

I mean that what the human mind does is *watch* continuous movies -  but it 
then runs/creates its own extensive movies based on its experience in 
dreams - and, with some effort, replay movies in conscious imagination.


The point is: my impression is that in discussing this whole area,   both in 
AI, philosophy & cog sci/psych, people tend to forget that consciousness is 
a continuously moving picture with the other senses continuous too, and tend 
to think, even if only implicitly, in terms of stills.




Mike Tintner wrote:
Er, just to clarify. You guys have, or know of, AI systems which run 
continuous movies of the world, analysing and responding to those movies 
with all the relevant senses, as discussed below, and then to the world 
beyond those movies, in real time (or any time, for that matter)?


I have no idea what you man by "run movies of the world" in this context.

You mean they have an internal world model?


Richard Loosemore







Mike Tintner wrote:
 > You're crossing a road - you track both the oncoming car and
your body
 > with all your senses at once - see a continuous moving image
of the
 > car, hear the noise of the engine and tires, possibly smell
it if
 > there's a smell of gasoline, have a kinaesthetic sense of
your body in
 > relation to the car, including a sense of up/down, left/right 
etc


*/Richard Loosemore <[EMAIL PROTECTED]>/* wrote:

I have been working on getting exactly that sort of cognitive
system
since the mid 1980s.

I don't know: perhaps you think it is especially difficult
because you
have not done much work on it.

Conventional approaches to AI may well have trouble in this
area, but
since my approach has been directed at these kinds of issues
since the
very beginning, to me it looks relatively straightforward in
principle.

The real issues are elsewhere.

Richard Loosemore

I agree!  The significant obstacles are elsewhere.  The integration
of ideas and the ability to index large volumes of information so
that recognition systems can find them quickly are two problems that
I see as particularly difficult.  Even if we were able to show how
to get the job done for a special case it would not necessarily
translate into a feasible and extensible general program.
Jim Bromer


Be a better friend, newshound, and know-it-all with Yahoo! Mobile.
Try it now.



*agi* | Archives 
 | Modify
 Your Subscription [Powered by
Listbox] 



No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.5.516 / Virus Database: 269.21.1/1302 - Release Date:
2/27/2008 4:34 PM


*agi* | Archives  
 | Modify 
 Your Subscription [Powered by Listbox] 





---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com



--
No virus found in this incoming message.
Checked by AVG Free Edition. Version: 7.5.516 / Virus Database: 
269.21.1/1302 - Release Date: 2/27/2008 4:34 PM






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Richard Loosemore

Mike Tintner wrote:
Er, just to clarify. You guys have, or know of, AI systems which run 
continuous movies of the world, analysing and responding to those movies 
with all the relevant senses, as discussed below, and then to the world 
beyond those movies, in real time (or any time, for that matter)?


I have no idea what you man by "run movies of the world" in this context.

You mean they have an internal world model?


Richard Loosemore







Mike Tintner wrote:
 > You're crossing a road - you track both the oncoming car and
your body
 > with all your senses at once - see a continuous moving image
of the
 > car, hear the noise of the engine and tires, possibly smell
it if
 > there's a smell of gasoline, have a kinaesthetic sense of
your body in
 > relation to the car, including a sense of up/down, left/right etc

*/Richard Loosemore <[EMAIL PROTECTED]>/* wrote:

I have been working on getting exactly that sort of cognitive
system
since the mid 1980s.

I don't know: perhaps you think it is especially difficult
because you
have not done much work on it.

Conventional approaches to AI may well have trouble in this
area, but
since my approach has been directed at these kinds of issues
since the
very beginning, to me it looks relatively straightforward in
principle.

The real issues are elsewhere.

Richard Loosemore

I agree!  The significant obstacles are elsewhere.  The integration
of ideas and the ability to index large volumes of information so
that recognition systems can find them quickly are two problems that
I see as particularly difficult.  Even if we were able to show how
to get the job done for a special case it would not necessarily
translate into a feasible and extensible general program.
Jim Bromer


Be a better friend, newshound, and know-it-all with Yahoo! Mobile.
Try it now.



*agi* | Archives 
 | Modify
 Your Subscription  [Powered by
Listbox] 



No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.5.516 / Virus Database: 269.21.1/1302 - Release Date:
2/27/2008 4:34 PM


*agi* | Archives  
 | Modify 
 
Your Subscription	[Powered by Listbox] 




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Mike Tintner
Er, just to clarify. You guys have, or know of, AI systems which run continuous 
movies of the world, analysing and responding to those movies with all the 
relevant senses, as discussed below, and then to the world beyond those movies, 
in real time (or any time, for that matter)?



Mike Tintner wrote:
> You're crossing a road - you track both the oncoming car and your body 
> with all your senses at once - see a continuous moving image of the 
> car, hear the noise of the engine and tires, possibly smell it if 
> there's a smell of gasoline, have a kinaesthetic sense of your body in 
> relation to the car, including a sense of up/down, left/right etc

Richard Loosemore <[EMAIL PROTECTED]> wrote:

I have been working on getting exactly that sort of cognitive system 
since the mid 1980s.

I don't know: perhaps you think it is especially difficult because you 
have not done much work on it.

Conventional approaches to AI may well have trouble in this area, but 
since my approach has been directed at these kinds of issues since the 
very beginning, to me it looks relatively straightforward in principle.

The real issues are elsewhere.

Richard Loosemore


  I agree!  The significant obstacles are elsewhere.  The integration of ideas 
and the ability to index large volumes of information so that recognition 
systems can find them quickly are two problems that I see as particularly 
difficult.  Even if we were able to show how to get the job done for a special 
case it would not necessarily translate into a feasible and extensible general 
program.
  Jim Bromer



--
  Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it now.

--
agi | Archives  | Modify Your Subscription  



--


  No virus found in this incoming message.
  Checked by AVG Free Edition. 
  Version: 7.5.516 / Virus Database: 269.21.1/1302 - Release Date: 2/27/2008 
4:34 PM

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-28 Thread Jim Bromer


Ben Goertzel <[EMAIL PROTECTED]> wrote:
That is purely rhetorical gamesmanship

ben

I studied some rhetoric and while a learned how to avoid some of the worst 
pitfalls of gamemanship and how to avoid wasting *all* of my time, I found that 
the study, which was one of Aristotle's subjects by the way, was very helpful 
in giving me some understanding of how complex ideas work.  Or at least I think 
it was.
Jim Bromer

   
-
Looking for last minute shopping deals?  Find them fast with Yahoo! Search.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Jim Bromer

 Mike Tintner wrote:
> You're crossing a road - you track both the oncoming car and your body 
> with all your senses at once -  see a continuous moving image of the 
> car,  hear the noise of the engine and tires,  possibly smell it if 
> there's a smell of gasoline,  have a kinaesthetic sense of your body in 
> relation to the car, including a sense of up/down, left/right etc

Richard Loosemore <[EMAIL PROTECTED]> wrote:

I have been working on getting exactly that sort of cognitive system 
since the mid 1980s.

I don't know:  perhaps you think it is especially difficult because you 
have not done much work on it.

Conventional approaches to AI may well have trouble in this area, but 
since my approach has been directed at these kinds of issues since the 
very beginning, to me it looks relatively straightforward in principle.

The real issues are elsewhere.

Richard Loosemore

I agree!  The significant obstacles are elsewhere.  The integration of ideas 
and the ability to index large volumes of information so that recognition 
systems can find them quickly are two problems that I see as particularly 
difficult.  Even if we were able to show how to get the job done for a special 
case it would not necessarily translate into a feasible and extensible general 
program.
Jim Bromer

   
-
Be a better friend, newshound, and know-it-all with Yahoo! Mobile.  Try it now.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Bob Mottram
On 27/02/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I don't buy that my body plays a significant role in thinking about,
>  for instance,
>  mathematics.  I bet that my brain in a vat could think about math just
>  as well or
>  better than my embodied brain.
>
>  Of course my brain is what it is because of evolving to be embodied, but 
> that's
>  a different statement.


Your body will have played a significant role in the development of
you brain, including the types of concepts and representations formed,
but once you're past your mid teens and having acquired a lot of
embodied experience you could probably enjoy a comfortable retirement
as the brain of Morbius wallowing in a vat, thinking abstract thoughts
and contemplating domination of the universe.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
I do not doubt that body-thinking exists and is important, my doubt is that it
is in any AGI-useful sense "the largest part" of thinking...

On Wed, Feb 27, 2008 at 1:07 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben:What evidence do you have that this [body thinking] is the "largest
>
> part" ... it does
>  not feel at all
>  that way to me, as a subjectively-experiencing human; and I know of no
>  evidence
>  in this regard
>
>  Like I said, I'm at the start here - and this is going against thousands of
>  years of literate culture. And there's a lot of work that needs to be done,
>  but I'm increasingly confident about it.
>
>  For a quick, impressionistic response to your question, think of what kind
>  of spectator events are almost guaranteed to produce the greatest
>  physical-and-emotional, "whole-body" responses in you. Spectator sports -
>  when you watch, say, someone miss a goal, and literally scream with your
>  whole body. Or farce - when some comic actor makes some crazy physical
>  errors - which you find literally gut-wrenchingly funny.  Why do you respond
>  so intensely? Because you are "body thinking", mirroring their actions with
>  your whole body - and that's a whole lot of stuff to think with, compared
>  say to the relatively few brain-and-body areas involved in symbolic thinking
>  like "22 + 22 = 44". (I notice in education they are now talking about how
>  infants and young children have to acquire all those symbols by "hands-on
>  thinking", i.e. "body thinking." You (and I) have just forgotten all that
>  stuff.).
>
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-27 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote:

Mike Tintner wrote:
As Ben said, it's something like "multisensory integrative 
consciousness" - i.e. you track a subject/scene with all senses 
simultaneously and integratedly.
Conventional approaches to AI may well have trouble in this area, but 
since my approach has been directed at these kinds of issues since the 
very beginning, to me it looks relatively straightforward in principle.


The real issues are elsewhere.


True. I'd go farther and point out just where they are: You need to have a 
system with recognition / action generation integrated between the sensory 
modalities to be a trainable animal. To be intelligent, the system has to be 
able to *invent new modalities / representations / concepts itself* and 
integrate them into the existing mechanism.


True.

Because of the particular methodology that I use, however, I can say 
that the architecture required to do that is no longer an issue.  My 
focus is (mostly) on getting the low level mechanisms to be stable.




Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Mike Tintner
Ben:What evidence do you have that this [body thinking] is the "largest 
part" ... it does

not feel at all
that way to me, as a subjectively-experiencing human; and I know of no 
evidence

in this regard

Like I said, I'm at the start here - and this is going against thousands of 
years of literate culture. And there's a lot of work that needs to be done, 
but I'm increasingly confident about it.


For a quick, impressionistic response to your question, think of what kind 
of spectator events are almost guaranteed to produce the greatest 
physical-and-emotional, "whole-body" responses in you. Spectator sports - 
when you watch, say, someone miss a goal, and literally scream with your 
whole body. Or farce - when some comic actor makes some crazy physical 
errors - which you find literally gut-wrenchingly funny.  Why do you respond 
so intensely? Because you are "body thinking", mirroring their actions with 
your whole body - and that's a whole lot of stuff to think with, compared 
say to the relatively few brain-and-body areas involved in symbolic thinking 
like "22 + 22 = 44". (I notice in education they are now talking about how 
infants and young children have to acquire all those symbols by "hands-on 
thinking", i.e. "body thinking." You (and I) have just forgotten all that 
stuff.). 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-27 Thread J Storrs Hall, PhD
On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote:
> Mike Tintner wrote:
> > As Ben said, it's something like "multisensory integrative 
> > consciousness" - i.e. you track a subject/scene with all senses 
> > simultaneously and integratedly.
> 
> Conventional approaches to AI may well have trouble in this area, but 
> since my approach has been directed at these kinds of issues since the 
> very beginning, to me it looks relatively straightforward in principle.
> 
> The real issues are elsewhere.

True. I'd go farther and point out just where they are: You need to have a 
system with recognition / action generation integrated between the sensory 
modalities to be a trainable animal. To be intelligent, the system has to be 
able to *invent new modalities / representations / concepts itself* and 
integrate them into the existing mechanism.

Josh

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
>  Well,  what I and embodied cognitive science are trying to formulate
>  properly, both philosophically and scientifically, is why:
>
>  a) common sense consciousness is the brain-AND-body thinking on several
>  levels simultaneously about any given subject...

I don't buy that my body plays a significant role in thinking about,
for instance,
mathematics.  I bet that my brain in a vat could think about math just
as well or
better than my embodied brain.

Of course my brain is what it is because of evolving to be embodied, but that's
a different statement.

>  b) with the *largest* part of that thinking being "body thinking" - i.e.
>  your body working out *in-the-body* how the actions under consideration can
>  be enacted  (although this is inseparable from, and dependent on, the
>  brain's levels of thinking)

What evidence do you have that this is the "largest part" ... it does
not feel at all
that way to me, as a subjectively-experiencing human; and I know of no evidence
in this regard.

The largest bulk of brain matter does not equate to the largest part
of thinking,
in any useful sense...

I suspect that, in myself at any rate, the vast majority of my brain
dynamics are driven
by the small percentage of my brain that deal with abstract cognition.
 An attractor
spanning the whole brain can nonetheless be triggered/controlled by dynamics
in a small region.

>  c) if an agent doesn't have a body that can think about how it can move (and
>  have emotions), then it almost certainly can't understand how other bodies
>  move (and have emotions) - and therefore can't acquire a
>  "more-than-it's-all-Greek/Chinese/probabilistic-logic-to-me" understanding
>  of physics, biology, psychology, sociology etc. etc. - of both the
>  formal/cultural and informal/personal kinds.

I agree about psychology and sociology, but not about physics and biology.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
>  d) you keep repeating the illusion that evolution did NOT achieve the
>  airplane and other machines - oh yes, it did - your central illusion here is
>  that machines are independent species. They're not. They are EXTENSIONS  of
>  human beings, and don't work without human beings attached. Manifestly
>  evolution has taken several stages to perfect tool/machine-using species -
>  of whom we are only the latest version - I refer you to my good colleague,
>  the tool-using-and-creating Caledonian crow.

That is purely rhetorical gamesmanship...

By that interpretation of "achieved by evolution" then any AGI that we create
will also be achieved by evolution, due to being created by humans that
were achieved by evolution, right?

So, by this definition, the concept of "achieved by evolution" makes no
useful distinctions among AGI designs...

And: a wheel does work without a human attached, btw ..

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-27 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Mike Tintner wrote:

 No one in AGI is aiming for common sense consciousness, are they?


Inasmuch as I understand what you mean by that, yes of course.

Both common sense and consciousness.



As Ben said, it's something like "multisensory integrative 
consciousness" - i.e. you track a subject/scene with all senses 
simultaneously and integratedly.


You're crossing a road - you track both the oncoming car and your body 
with all your senses at once -  see a continuous moving image of the 
car,  hear the noise of the engine and tires,  possibly smell it if 
there's a smell of gasoline,  have a kinaesthetic sense of your body in 
relation to the car, including a sense of up/down, left/right etc and 
are doing "body thinking/mapping" about whether your body and the car 
will or won't collide if you move at such-and-such speeds, and whether 
your nerves and muscles can supply the necessary hormones and power in 
time. and no doubt there's a few other senses involved that I've left 
out! And all this sensory processing is integrated and cross-referenced 
and cross-checked - you'd be really disturbed if the car looks as if 
it's going slowly, but sounds as if it's going fast, just as people are 
disturbed when they see a card of black hearts.


You guys seem to think this - true common sense consciousness - can all 
be cracked in a year or two. I think there's probably a lot of good 
reasons - and therefore major creative problems - why it took a billion 
years of evolution to achieve.


I have been working on getting exactly that sort of cognitive system 
since the mid 1980s.


I don't know:  perhaps you think it is especially difficult because you 
have not done much work on it.


Conventional approaches to AI may well have trouble in this area, but 
since my approach has been directed at these kinds of issues since the 
very beginning, to me it looks relatively straightforward in principle.


The real issues are elsewhere.



Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Mike Tintner



Ben: MT:>>  You guys seem to think this - true common sense consciousness - 
can all be
 cracked in a year or two. I think there's probably a lot of good 
reasons -

 and therefore major creative problems - why it took a billion years of
 evolution to achieve.


Ben: I'm not trying to emulate the brain. Evolution took billions of years 
to NOT achieve the airplane, helicopter

or wheel ...



Well,  what I and embodied cognitive science are trying to formulate 
properly, both philosophically and scientifically, is why:


a) common sense consciousness is the brain-AND-body thinking on several 
levels simultaneously about any given subject...


b) with the *largest* part of that thinking being "body thinking" - i.e. 
your body working out *in-the-body* how the actions under consideration can 
be enacted  (although this is inseparable from, and dependent on, the 
brain's levels of thinking)


and:

c) if an agent doesn't have a body that can think about how it can move (and 
have emotions), then it almost certainly can't understand how other bodies 
move (and have emotions) - and therefore can't acquire a 
"more-than-it's-all-Greek/Chinese/probabilistic-logic-to-me" understanding 
of physics, biology, psychology, sociology etc. etc. - of both the 
formal/cultural and informal/personal kinds.


[my robotics friend's comments on b]:
"I do agree with you about the body mapping/consciousness being greater in 
magnitude than symbolic consciousness. In fact, it appears the vast majority 
of brain is devoted to the former, but only a small amount [uncertainty 
reigns] may be devoted to the latter. Point in fact are the 30+ visual 
cortical areas. It's not sure how they each contribute to visual 
consciousness, per se, as compared to simply performing computations on the 
visual input. " ]


and

d) you keep repeating the illusion that evolution did NOT achieve the 
airplane and other machines - oh yes, it did - your central illusion here is 
that machines are independent species. They're not. They are EXTENSIONS  of 
human beings, and don't work without human beings attached. Manifestly 
evolution has taken several stages to perfect tool/machine-using species - 
of whom we are only the latest version - I refer you to my good colleague, 
the tool-using-and-creating Caledonian crow.


Yes, somehow, we are going to create the first independent machine species - 
but there's a big unanswered set of questions as to how .




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-27 Thread Bob Mottram
What I tried to do with robocore is have a number of subsystems
dedicated to particular modalities such as vision, touch, hearing,
smell and so on.  Each of these modalities operates in a semi
independent and self organised way, and their function is to create
stable abstractions from the raw data of sensory experience.  So what
you end up with are learned representations (you could call them
symbols or neural groups) which can act as building blocks for
experience.

Each of the main modality specific subsystems is linked to a central
hub or switching mechanism (think of it as a big telephone exchange).
This hub can connect the partial representations into multi-modal
constellations which might be called percepts.  These percepts can be
formed by association in a sort of bottom up way, but they can also be
recalled by higher level systems so that you have bi-directionality.
A memory in this architecture is a kind of reconstruction of the
original experience from the representational jigsaw.  This is a
rather Freudian way of thinking about memory, since it's an active
reconstruction which could be subject to alteration over time in the
light of new experiences.  From an information storage point of view
it's also fairly efficient.  To recall a memory the hub switches are
set appropriately which lights up those areas within particular
modalities close to the level of direct experience.



On 27/02/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
>  > No one in AGI is aiming for common sense consciousness, are they?
>  >
>
>
> The OpenCog and NM architectures are in principle supportive of this kind
>  of multisensory integrative consciousness, but not a lot of thought has gone
>  into exactly how to support it ...
>
>  In one approach, one would want to have
>
>  -- a large DB of embodied experiences (complete with the sensorial and
>  action data from the experiences)
>
>  -- a number of dimensional spaces, into which experiences are embedded
>  (a spatiotemporal region corresponds to a point in a dimensional space).
>  Each dimensional space would be organized according to a different principle,
>  e.g. melody, rhythm, overall visual similarity, similarity of shape, 
> similarity
>  of color, etc.
>
>  -- an internal simulation world in which concrete remembered experiences,
>  blended experiences, or abstracted experiences could be enacted and
>  "internally simulated"
>
>  -- conceptual blending operations implemented on the dimensional spaces
>  and directly in the internal sim world
>
>  -- methods for measuring similarity, inheritance and other logical 
> relationships
>  in the dimensional spaces and the internal sim world
>
>  -- methods for enacting learned procedures in the internal sim world,
>  and learning
>  new procedures based on simulating what they would do in the internal sim 
> world
>
>
>  This is all do-able according to mechanisms that exist in the OpenCog and NM
>  designs, but it's an aspect we haven't focused on so far in NM... though 
> we're
>  moving in that direction due to our work w/ embodiment in simulation
>  worlds...
>
>  We have built a sketchy internal sim world for NM but haven't experimented 
> with
>  it much yet due to other priorities...
>
>
>  -- Ben
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  You guys seem to think this - true common sense consciousness - can all be
>  cracked in a year or two. I think there's probably a lot of good reasons -
>  and therefore major creative problems - why it took a billion years of
>  evolution to achieve.

I'm not trying to emulate the brain.

Evolution took billions of years to NOT achieve the airplane, helicopter
or wheel ...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner

Richard: Mike Tintner wrote:

 No one in AGI is aiming for common sense consciousness, are they?


Inasmuch as I understand what you mean by that, yes of course.

Both common sense and consciousness.



As Ben said, it's something like "multisensory integrative consciousness" - 
i.e. you track a subject/scene with all senses simultaneously and 
integratedly.


You're crossing a road - you track both the oncoming car and your body with 
all your senses at once -  see a continuous moving image of the car,  hear 
the noise of the engine and tires,  possibly smell it if there's a smell of 
gasoline,  have a kinaesthetic sense of your body in relation to the car, 
including a sense of up/down, left/right etc and are doing "body 
thinking/mapping" about whether your body and the car will or won't collide 
if you move at such-and-such speeds, and whether your nerves and muscles can 
supply the necessary hormones and power in time. and no doubt there's a few 
other senses involved that I've left out! And all this sensory processing is 
integrated and cross-referenced and cross-checked - you'd be really 
disturbed if the car looks as if it's going slowly, but sounds as if it's 
going fast, just as people are disturbed when they see a card of black 
hearts.


You guys seem to think this - true common sense consciousness - can all be 
cracked in a year or two. I think there's probably a lot of good reasons - 
and therefore major creative problems - why it took a billion years of 
evolution to achieve. 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Richard Loosemore

Mike Tintner wrote:
 
No one in AGI is aiming for common sense consciousness, are they?


Inasmuch as I understand what you mean by that, yes of course.

Both common sense and consciousness.



Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>
> No one in AGI is aiming for common sense consciousness, are they?
>

The OpenCog and NM architectures are in principle supportive of this kind
of multisensory integrative consciousness, but not a lot of thought has gone
into exactly how to support it ...

In one approach, one would want to have

-- a large DB of embodied experiences (complete with the sensorial and
action data from the experiences)

-- a number of dimensional spaces, into which experiences are embedded
(a spatiotemporal region corresponds to a point in a dimensional space).
Each dimensional space would be organized according to a different principle,
e.g. melody, rhythm, overall visual similarity, similarity of shape, similarity
of color, etc.

-- an internal simulation world in which concrete remembered experiences,
blended experiences, or abstracted experiences could be enacted and
"internally simulated"

-- conceptual blending operations implemented on the dimensional spaces
and directly in the internal sim world

-- methods for measuring similarity, inheritance and other logical relationships
in the dimensional spaces and the internal sim world

-- methods for enacting learned procedures in the internal sim world,
and learning
new procedures based on simulating what they would do in the internal sim world


This is all do-able according to mechanisms that exist in the OpenCog and NM
designs, but it's an aspect we haven't focused on so far in NM... though we're
moving in that direction due to our work w/ embodiment in simulation
worlds...

We have built a sketchy internal sim world for NM but haven't experimented with
it much yet due to other priorities...

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner

Bob: It's this linguistic tagging of
constellations of modality specific representations which is the real
power of the human brain, since it can permit arbitrary configurations
to be conjured up (literally re-membered) in a manner which is
reasonably efficient from an information compression perspective.

Bob,

That's what I'm arguing against. And I'm still groping here, because I'm 
trying to contradict the "wisdom" of several thousand years of alphabetic 
language or literate civilisation, which says that imaginative and body 
knowledge is very secondary, if not peripheral or even entirely decorative 
to symbolic forms of knowledge.


Actually, I'm suggesting, when you learn to play a sport, or physical skill, 
and to perform all the various movements - all the various swings, kicks, 
finger-presses etc - very, very little thinking goes on linguistically. Most 
of the thinking is in body form - feeling with your body, including muscles 
and senses, how comfortable and fluid and powerful and effective different 
movements are - and how any objects you're using, like ball and racket or 
piano keys, are reacting to your movements. *Some* linguistic thinking will 
certainly be involved in this, but v. v. little.


Mainly what the linguistic thinking will do, I think, is to say to yourself 
:"that's not working, try something else." But what that something else is, 
you will have to find out most of the time with your body - "hands on".


Occasionally, you will formally, consciously analyse a movement, in part 
using language - perhaps with the aid of a mirror or somesuch - but actually 
this will only be occasional.


It's v. simple to begin to test all this - get up and kick a ball, or 
practice a swing - and observe how many words you use to control and alter 
your motions. V.v. little, right?  Then go through your entire repertoire 
and check out how much you can even begin to describe verbally.


Be interested in further comments, here - I've only just begun to truly 
realise all this in the last month or so.


P.S. It isn't just our knowledge of select movements and select objects for 
select skills that is primarily imaginative and body-form, but our knowledge 
of the entire world of objects and creatures and their movements and 
behaviours. We are continually thinking in principally body form not just 
about how to move our own body, but how other bodies do and will move - 
mirroring with those mirror neurons.



Mike Tintner <[EMAIL PROTECTED]> wrote:
 The idea that an AGI can symbolically encode all the knowledge, and 
perform
 all the thinking, necessary to produce, say, a golf swing, let alone 
play a
 symphony,  is a pure fantasy. Our system keeps that knowledge and 
thinking
 largely in the motor areas of the brain and body, because that's where 
it

 HAS to be.



Well in the case of a golf swing you might have some high level
reasoning, such as "I wanna move the ball, so I'm gonna try to hit it
with this big hunka metal".  That might then get translated into a
kind of plan by the premotor cortex, such as grasping the handle in a
certain kind of way, shuffling your feet, looking down at the ball.
This plan might be represented in motor space as a set of
heirachically organised vectors which act as attractors for lower
level systems controlling tensioning of individual muscles.  In the
final act the outcome would be a combination of the physical
properties of the system combined with feedback control of muscles.

The symbols in this kind of system are really flags of convenience.
Much knowledge is probably represented in a largely modality specific
way only a few abstractions away from the raw data of experience.
This can then be cross indexed via the thalamus and if necessary
associated with linguistic events.  It's this linguistic tagging of
constellations of modality specific representations which is the real
power of the human brain, since it can permit arbitrary configurations
to be conjured up (literally re-membered) in a manner which is
reasonably efficient from an information compression perspective.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com



--
No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.5.516 / Virus Database: 269.21.1/1299 - Release Date: 2/26/2008 
9:08 AM






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
Ben: Anyway, I agree with you that formal logical rules and inference are not 
the
end-all of AGI and are not the right tool for handling visual imagination or
motor learning. But I do think they have an important role to play even so.

Just one thought here that is worth trying to express, although I'm still 
groping with it. When we talk about "imagination" we are usually referring to 
secondary and/or reflective acts of imagination.  So when you make 
piano-playing movements, they are based on existing imaginative and body 
knowledge of how to make those movements.

But our immediate, primary consciousness, from which that secondary knowledge 
is derived, and which is the foundation of an intelligent mind, is not actually 
a dissectible affair. Your (Ben) consciousness as you read this, or as you sit 
at the piano about to play, is the "imovie-in-and-around-the-mind" - a 
*common-sense* affair, involving 
vision-hearing-touch-smell-kinaesthetic-&-every-other sense, interacting 
integratedly and inseparably. Later, when you remember your actions, you can 
focus reflectively on one sense at a time - just remember, for example, what 
you saw, and ignore the other senses. But in reality it isn't possible to 
separate the senses (as Michael Tye has pointed out). The brain - and any 
intelligent mind that deals with the real world - needs the whole movie and not 
just, say, vision, as well as the "i" that is continually viewing it. 

No one in AGI is aiming for common sense consciousness, are they?


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Bob Mottram
On 26/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
>  The idea that an AGI can symbolically encode all the knowledge, and perform
>  all the thinking, necessary to produce, say, a golf swing, let alone play a
>  symphony,  is a pure fantasy. Our system keeps that knowledge and thinking
>  largely in the motor areas of the brain and body, because that's where it
>  HAS to be.


Well in the case of a golf swing you might have some high level
reasoning, such as "I wanna move the ball, so I'm gonna try to hit it
with this big hunka metal".  That might then get translated into a
kind of plan by the premotor cortex, such as grasping the handle in a
certain kind of way, shuffling your feet, looking down at the ball.
This plan might be represented in motor space as a set of
heirachically organised vectors which act as attractors for lower
level systems controlling tensioning of individual muscles.  In the
final act the outcome would be a combination of the physical
properties of the system combined with feedback control of muscles.

The symbols in this kind of system are really flags of convenience.
Much knowledge is probably represented in a largely modality specific
way only a few abstractions away from the raw data of experience.
This can then be cross indexed via the thalamus and if necessary
associated with linguistic events.  It's this linguistic tagging of
constellations of modality specific representations which is the real
power of the human brain, since it can permit arbitrary configurations
to be conjured up (literally re-membered) in a manner which is
reasonably efficient from an information compression perspective.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Jim Bromer


Ben Goertzel <[EMAIL PROTECTED]> wrote:

Anyway, I agree with you that formal logical rules and inference are not the
end-all of AGI and are not the right tool for handling visual imagination or
motor learning.  But I do think they have an important role to play even so.

-- Ben G

Well, pure closed logic alone is not the right tool for visual imagination, but 
the use of category and substitution into various contexts is.  This means that 
these symbolic cut and paste methods along with blends, morphs, mapping and the 
like can be used as references from symbols which can be used o  simplify the 
representation and integration of complex scenes (in a more general sense of 
images) and so on.  Because categorical substitution is so computery it means 
that the generation of the imagination is probably one of the simplest parts of 
the problem.  These symbolic references could be used logically (inductive 
logic) by associating certain combinations and certain kinds of combinations 
with, say, effective results.  Of course the more serious problems, how can the 
program define what constitutes an effective result in a reasonable way, how to 
integrate separate ideas in complex ways appropriately, how to incorporate 
reason effectively and how these imaginative
 processes can be integrated with empirical methods and cross analysis are 
still major complications that no one has seemed to master.
Jim Bromer

   
-
Be a better friend, newshound, and know-it-all with Yahoo! Mobile.  Try it now.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread BillK
On Tue, Feb 26, 2008 at 8:29 PM, Ben Goertzel wrote:

>
>  I don't think that formal logic is a suitably convenient language for 
> describing
>  motor movements or dealing with motor learning.
>
>  But still, I strongly suspect one can produce software programs that do 
> handle
>  motor movement and learning effectively.  They are symbolic at the level of
>  the programming language, but not symbolic at the level of the deliberative,
>  reflective component of the artificial mind doing the learning.
>
>  A symbol is a symbol **to some system**.  Just because a hunk of program
>  code contains symbols to the programmer, doesn't mean it contains symbols
>  to the mind it helps implement.  Any more than a neuron being a symbol to a
>  neuroscientist, implies that neuron is a symbol to the mind it helps 
> implement.
>
>  Anyway, I agree with you that formal logical rules and inference are not the
>  end-all of AGI and are not the right tool for handling visual imagination or
>  motor learning.  But I do think they have an important role to play even so.
>

Asimo has a motor movement program.
Obviously he didn't 'learn' it himself. But once written, it seems
likely that similar sub-routines can be taken advantage of by later
robots.


BillK

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  Your piano example is a good one.
>
>  What it illustrates, I suggest, is:
>
>  your knowledge of, and thinking about, how to play the piano, and perform
>  the many movements involved, is overwhelmingly imaginative and body
>  knowledge/thinking (contained in images and the motor parts of the brain and
>  body as distinct from any kind of symbols)
>
>  The percentage of that knowledge that can be expressed in symbolic form -
>  logical, mathematical, verbal  etc- i.e. the details of those movements that
>  can be named or measured - is only A TINY FRACTION of the total.

Wrong...

This knowledge CAN be expressed in logical, symbolic form... just as can the
positions of all the particles in my brain ... but for these cases, the logical,
symbolic representation is highly awkward and inefficient...


>Our
>  cultural let alone your personal vocabulary (both linguistic and of  any
>  other symbolic form) for all the different finger movements you will
>  perform, can only name a tiny percentage of the details involved.

That is true, but in principle one could give a formal logical description of
them, boiling things all the way down to logical atoms corresponding to the
signals sent along the nerves to and from my fingers...

>  Such imaginative and body knowledge (which takes both declarative,
>  procedural and episodic forms) isn't, I suggest, - when considered as
>  corpuses or corpora of knowledge - MEANT to be put into explicit, symbolic,
>  verbal, logico-mathematical form.

Correct

> It would be utterly impossible to name all
>  the details of that knowledge.

Infeasible, not impossible

> One imaginative picture : an infinity of
>  words and other symbols. Any attempt to symbolise our imaginative/body
>  knowledge as a whole, would simply overwhelm our brain, or indeed any brain.

The concept of infinity is better handled in formal logic than anywhere else!!!

>  The idea that an AGI can symbolically encode all the knowledge, and perform
>  all the thinking, necessary to produce, say, a golf swing, let alone play a
>  symphony,  is a pure fantasy. Our system keeps that knowledge and thinking
>  largely in the motor areas of the brain and body, because that's where it
>  HAS to be.

Again you seem to be playing with different meanings of the word "symbolic."

I don't think that formal logic is a suitably convenient language for describing
motor movements or dealing with motor learning.

But still, I strongly suspect one can produce software programs that do handle
motor movement and learning effectively.  They are symbolic at the level of
the programming language, but not symbolic at the level of the deliberative,
reflective component of the artificial mind doing the learning.

A symbol is a symbol **to some system**.  Just because a hunk of program
code contains symbols to the programmer, doesn't mean it contains symbols
to the mind it helps implement.  Any more than a neuron being a symbol to a
neuroscientist, implies that neuron is a symbol to the mind it helps implement.

Anyway, I agree with you that formal logical rules and inference are not the
end-all of AGI and are not the right tool for handling visual imagination or
motor learning.  But I do think they have an important role to play even so.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
Ben: One advantage AGIs will have over humans is better methods for 
translating
procedural to declarative knowledge, and vice versa.  For us to translate 
"knowing how to do X" into

"knowing how we do X" can be really difficult (I play piano
improvisationally and by
ear, and I have a hard time figuring out what the hell my fingers are
doing, even though
they do the same complex things repeatedly each time I play the same
song..).  This is
not a trivial problem for AGIs either but it won't be as hard as for 
humans...


Your piano example is a good one.

What it illustrates, I suggest, is:

your knowledge of, and thinking about, how to play the piano, and perform 
the many movements involved, is overwhelmingly imaginative and body 
knowledge/thinking (contained in images and the motor parts of the brain and 
body as distinct from any kind of symbols)


The percentage of that knowledge that can be expressed in symbolic form - 
logical, mathematical, verbal  etc- i.e. the details of those movements that 
can be named or measured - is only A TINY FRACTION of the total. Our 
cultural let alone your personal vocabulary (both linguistic and of  any 
other symbolic form) for all the different finger movements you will 
perform, can only name a tiny percentage of the details involved.


We see/sense, think, and move vastly more than we can put into words or 
symbols.


Such imaginative and body knowledge (which takes both declarative, 
procedural and episodic forms) isn't, I suggest, - when considered as 
corpuses or corpora of knowledge - MEANT to be put into explicit, symbolic, 
verbal, logico-mathematical form. It would be utterly impossible to name all 
the details of that knowledge. One imaginative picture : an infinity of 
words and other symbols. Any attempt to symbolise our imaginative/body 
knowledge as a whole, would simply overwhelm our brain, or indeed any brain.


The idea that an AGI can symbolically encode all the knowledge, and perform 
all the thinking, necessary to produce, say, a golf swing, let alone play a 
symphony,  is a pure fantasy. Our system keeps that knowledge and thinking 
largely in the motor areas of the brain and body, because that's where it 
HAS to be. 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread J Storrs Hall, PhD
On Tuesday 26 February 2008 12:33:32 pm, Jim Bromer wrote:
> There is a lot of evidence that children do not learn through imitation, at 
least not in its truest sense. 

Haven't heard of any children born into, say, a purely French-speaking 
household suddenly acquiring a full-blown competence in Japanese...

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Jim Bromer


Vladimir Nesov <[EMAIL PROTECTED]> wrote: Plus, I like to
think about learning as a kind of imitation, and procedural imitation
seems more direct. It's "substrate starting to imitate (adapt to)
process with which it interacted" as opposed to "a system that
observes a process, and then controls inference to reason about doing
that kind of thing".

-- 
Vladimir Nesov
[EMAIL PROTECTED]

There is a lot of evidence that children do not learn through imitation, at 
least not in its truest sense.  Of course we have all seen young children 
imitating adults and older children, but there is a complex difference between 
imprinting and childish imitation.  And I think this difference may be 
attributable or at least found in the conceptual complexity that would be 
necessary to explain human actions in full detail. How does the child know that 
certain mannerisms actually represents (the experience of) imitation?  I think 
that childish imitation, in all of its variations, can only be explained by 
theories of complex conceptual integration.
Jim Bromer

   
-
Looking for last minute shopping deals?  Find them fast with Yahoo! Search.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Vladimir Nesov
On Tue, Feb 26, 2008 at 6:10 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >  Knowing how to carry out inference can itself be procedural knowledge,
>  >  in which case no explicit distinction between the two is required.
>  >
>  >  --
>  >  Vladimir Nesov
>
>  Representationally, the same formalisms can of course be used for both
>  procedural and declarative knowledge.
>
>  The slightly subtler point, however, is that it seems that **given finite 
> space
>  and time resources**, it's far better to use specialized
>  reasoning/learning methods
>  for handling knowledge that pertains to carrying out coordinated sets of 
> action
>  in space and time.
>
>  Thus, "procedure learning" as a separate module from general inference.

Ben,

You are talking about optimization, and in this sense different
mutually-convertible representations are equivalent to one common
representation and only add in performance and tweakability. My take
on Pei's classification was that 'declarative' and 'procedural'
knowledge refer to different styles of processing and different
capabilities.

At this point I see 'procedural' knowledge as a useful metaphor to
generalize all types of processing in AGI system. It provides an
intuitive take on limited resources issue: only limited amount of
processing is allowed at each time, so system learns to deal with
constraints, learning reasoning procedures that work under these
constraints (and together), and spending no runtime on those that
don't. It enables the same flexibility with streamed I/O (natural
language?) as with internal reasoning processes. 'Hard' (semantic)
knowledge corresponds to stable procedures, and weak episodic
knowledge, that requires many relevant cues to line up to access,
gradually transforms into semantic form if used sufficiently, without
need to trace this process explicitly (which Novamente needs to do, so
far as I understand based on few available documents). Plus, I like to
think about learning as a kind of imitation, and procedural imitation
seems more direct. It's "substrate starting to imitate (adapt to)
process with which it interacted" as opposed to "a system that
observes a process, and then controls inference to reason about doing
that kind of thing".

-- 
Vladimir Nesov
[EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  Knowing how to carry out inference can itself be procedural knowledge,
>  in which case no explicit distinction between the two is required.
>
>  --
>  Vladimir Nesov

Representationally, the same formalisms can of course be used for both
procedural and declarative knowledge.

The slightly subtler point, however, is that it seems that **given finite space
and time resources**, it's far better to use specialized
reasoning/learning methods
for handling knowledge that pertains to carrying out coordinated sets of action
in space and time.

Thus, "procedure learning" as a separate module from general inference.

The brain works this way and, on this very general level, I think
we'll do best to
emulate the brain in our AGI designs (not necessarily in the specific
representations/
algorithms the brain uses, but rather in the simple fact of the
pragmatic declarative/
procedural distinction..)

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
YKY,

I'm with Pei on this one...

Decades of trying to do procedure learning using logic have led only
to some very
brittle planners that are useful under very special and restrictive
assumptions...

Some of that work is useful but it doesn't seem to me to be pointing in an AGI
direction.

OTOH for instance evolutionary learning and NN's have been more successful
at learning simple procedures for embodied action.

Within NM we have done (and published) experiments using probabilistic logic
for procedure learning, so I'm well aware it can be done.  But I don't
think it's a
scalable approach.

There appears to be a solid information-theoretic reason that the human brain
represents and manipulates declarative, procedural and episodic knowledge
separately.

It's more complex, but I believe it's a better idea to have separate methods for
representing and learning/adapting procedural vs declarative knowledge
--- and then
have routines for converting btw the two forms of knowledge.

One advantage AGIs will have over humans is better methods for translating
procedural to declarative knowledge, and vice versa.

For us to translate "knowing how to do X" into
"knowing how we do X" can be really difficult (I play piano
improvisationally and by
ear, and I have a hard time figuring out what the hell my fingers are
doing, even though
they do the same complex things repeatedly each time I play the same
song..).  This is
not a trivial problem for AGIs either but it won't be as hard as for humans...

-- Ben G

On Tue, Feb 26, 2008 at 8:00 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Tue, Feb 26, 2008 at 7:03 AM, YKY (Yan King Yin)
>  <[EMAIL PROTECTED]> wrote:
>  >
>  > On 2/15/08, Pei Wang <[EMAIL PROTECTED]> wrote:
>  > >
>  > > To me, the following two questions are independent of each other:
>  > >
>  >  > *. What type of reasoning is needed for AI? The major answers are:
>  > > (A): deduction only, (B) multiple types, including deduction,
>  > > induction, abduction, analogy, etc.
>  > >
>  > > *. What type of knowledge should be reasoned upon? The major answers
>  >  > are: (1) declarative only, (2) declarative and procedural.
>  > >
>  > > All four combination of the two answers are possible. Cyc is mainly
>  > > A1; you seem to suggest A2; in NARS it is B2.
>  >
>  >
>  > My current approach is "B1".  I'm wondering what is your argument for
>  > including procedural knowledge, in addition to declarative?
>
>  You have mentioned the reason in the following: some important
>  knowledge is procedural by nature.
>
>
>  > There is the idea of "deductive planning" which allows us to plan actions
>  > using a solely declarative KB.  So procedural knowledge is not needed for
>  > acting.
>
>  I haven't seen any no trivial result supporting this claim.
>
>
>  > Also, if you include procedural knowledge, things may be learned doubly in
>  > your KB.  For example, you may learn some declarative knowledge about the
>  > concept of "reverse" and also procedural knowledge of how to reverse
>  > sequences.
>
>  The knowledge about "how to do ..." can either be in procedural form,
>  as "programs", or in declarative, as descriptions of the programs.
>  There is overlapping/redundancy information in the two, but very often
>  both are needed, and the redundancy is tolerated.
>
>
>  > Even worse, in some cases you may only have procedural knowledge, without
>  > anything declarative.  That'd be like the intelligence of a calculator,
>  > without true understanding of maths.
>
>  Yes, but that is exactly the reason to directly reasoning on
>  procedural knowledge, right?
>
>  Pei
>
>
>  > YKY
>  >
>  >
>  >  
>  >
>  >  agi | Archives | Modify Your Subscription
>
>
>
> ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Vladimir Nesov
On Tue, Feb 26, 2008 at 3:03 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
> Also, if you include procedural knowledge, things may be learned doubly in
> your KB.  For example, you may learn some declarative knowledge about the
> concept of "reverse" and also procedural knowledge of how to reverse
> sequences.
>
> Even worse, in some cases you may only have procedural knowledge, without
> anything declarative.  That'd be like the intelligence of a calculator,
> without true understanding of maths.
>

Knowing how to carry out inference can itself be procedural knowledge,
in which case no explicit distinction between the two is required.

-- 
Vladimir Nesov
[EMAIL PROTECTED]

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread Pei Wang
On Tue, Feb 26, 2008 at 7:03 AM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
> On 2/15/08, Pei Wang <[EMAIL PROTECTED]> wrote:
> >
> > To me, the following two questions are independent of each other:
> >
>  > *. What type of reasoning is needed for AI? The major answers are:
> > (A): deduction only, (B) multiple types, including deduction,
> > induction, abduction, analogy, etc.
> >
> > *. What type of knowledge should be reasoned upon? The major answers
>  > are: (1) declarative only, (2) declarative and procedural.
> >
> > All four combination of the two answers are possible. Cyc is mainly
> > A1; you seem to suggest A2; in NARS it is B2.
>
>
> My current approach is "B1".  I'm wondering what is your argument for
> including procedural knowledge, in addition to declarative?

You have mentioned the reason in the following: some important
knowledge is procedural by nature.

> There is the idea of "deductive planning" which allows us to plan actions
> using a solely declarative KB.  So procedural knowledge is not needed for
> acting.

I haven't seen any no trivial result supporting this claim.

> Also, if you include procedural knowledge, things may be learned doubly in
> your KB.  For example, you may learn some declarative knowledge about the
> concept of "reverse" and also procedural knowledge of how to reverse
> sequences.

The knowledge about "how to do ..." can either be in procedural form,
as "programs", or in declarative, as descriptions of the programs.
There is overlapping/redundancy information in the two, but very often
both are needed, and the redundancy is tolerated.

> Even worse, in some cases you may only have procedural knowledge, without
> anything declarative.  That'd be like the intelligence of a calculator,
> without true understanding of maths.

Yes, but that is exactly the reason to directly reasoning on
procedural knowledge, right?

Pei

> YKY
>
>
>  
>
>  agi | Archives | Modify Your Subscription

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-26 Thread YKY (Yan King Yin)
On 2/15/08, Pei Wang <[EMAIL PROTECTED]> wrote:
>
> To me, the following two questions are independent of each other:
>
> *. What type of reasoning is needed for AI? The major answers are:
> (A): deduction only, (B) multiple types, including deduction,
> induction, abduction, analogy, etc.
>
> *. What type of knowledge should be reasoned upon? The major answers
> are: (1) declarative only, (2) declarative and procedural.
>
> All four combination of the two answers are possible. Cyc is mainly
> A1; you seem to suggest A2; in NARS it is B2.

My current approach is "B1".  I'm wondering what is your argument for
including procedural knowledge, in addition to declarative?

There is the idea of "deductive planning" which allows us to plan actions
using a solely declarative KB.  So procedural knowledge is not needed for
acting.

Also, if you include procedural knowledge, things may be learned doubly in
your KB.  For example, you may learn some declarative knowledge about the
concept of "reverse" and also procedural knowledge of how to reverse
sequences.

Even worse, in some cases you may only have procedural knowledge, without
anything declarative.  That'd be like the intelligence of a calculator,
without true understanding of maths.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-15 Thread Stephen Reed
David said:

Most  of  the  people  on  this  list  have  quite  different  ideas  about  
how  an  AGI
should  be  made  BUT  I  think  there  are  a  few  things  that  most,  if  
not  all
agree  on.

1.  Intelligence  can  be  created  by  using  computers  that  exist  today  
using
software.
2.  Physical  embodiment  of  the  software  is  not  essential  (might  be  
desirable)
for  intelligence  to  be  created.
3.  Intelligence  hasn't  yet  been  reached  in  anyone's  AGI  project.

 
I agree entirely.  My comments on this list are generally grounded in my own 
work, or my experience at Cycorp and my intuition is that indeed today's 
multicore computers are sufficient to achieve intelligence.  Some evidence:
driverless cars, e.g. the DARPA Urban Challenge, are quite competent using 
modest clusters of multicore computersI estimate that a near-future 8-core cpu 
can achieve real-time automatic speech recognition using the Sphinx-4 
softwareMy own very preliminary results on an English dialog system gives me 
hope that a multicore cpu can be used to robustly convert text into logic 
faster than a human can perform the same task.  For example, my Incremental 
Fluid Construction Grammar parser can convert "the book is on the table" into 
logical statements at the rate of 400 times per second per thread.  That gives 
me a lot of headroom when expanding the grammar rule set, adding commonsense 
entailed facts, and pruning alternative interpretations.
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





  

Never miss a thing.  Make Yahoo your home page. 
http://www.yahoo.com/r/hs

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


RE: [agi] reasoning & knowledge.. p.s.

2008-02-15 Thread David Clark
I agree with Pei Wang 100% on this point.

Even though I find many of the comments from Mike to be interesting, I think
it would be much more productive to add to the solutions and problems of
creating a computer based AGI rather than trying to convince the converted
that AGI on today's computers is impossible.  Some problems might be solved
by visual techniques but if this is important then that aspect of the
problem will probably have to wait until the more general problems of object
extraction from video images is further along (from Bob Mottram's comments).

Most of the people on this list have quite different ideas about how an AGI
should be made BUT I think there are a few things that most, if not all
agree on.

1. Intelligence can be created by using computers that exist today using
software.
2. Physical embodiment of the software is not essential (might be desirable)
for intelligence to be created.
3. Intelligence hasn't yet been reached in anyone's AGI project.

It is not possible to *prove* any AGI project to be correct until it is
actually an AGI and this list won't matter much when that happens.  The only
way to find out if a particular AGI approach is actually a good one is to
try and create it.  It will be difficult to identify even the right projects
when they appear because the AGI will inevitably have some capabilities far
beyond a humans' and other abilities that are far less.  Even if a project
gets some level of intelligence with a particular approach, that doesn't
mean that that approach will continue to produce even higher levels of
intelligence or get to the AGI level (whatever that is).

Therefore, all current AGI projects have to be fundamentally based on
intuition or faith or both.  No argument there, but it would seem that there
is no other way to get to creating an AGI when none currently exists.

It is just a waste of time to demand that someone or some group produce
proof that their ideas are correct when that proof is impossible to produce
until an AGI is achieved.  That doesn't mean we can't debate the merits of
different approaches, or demonstrate why previous attempts weren't
successful.  The last point being very difficult because many things could
result in the failure of a project including scale, resources, etc.  Just
because some previous approach didn't work, doesn't necessarily mean that
that approach couldn't work if some other variable was changed.

I believe that a paraplegic person can still be intelligent and useful if
they could just type on a keyboard and use their brain.  This doesn't
*prove* that human intelligence can be created without a body in the first
place but I think it shows that roaming around in the world and getting
firsthand knowledge from a person's senses isn't a 100% prerequisite for
intelligence.

I would appreciate more comments on how to achieve an AGI and less on
whether a AGI on computers using software is possible or not.

David Clark

> -Original Message-
> From: Pei Wang [mailto:[EMAIL PROTECTED]
> Sent: February-14-08 5:11 PM
> To: agi@v2.listbox.com
> Subject: Re: [agi] reasoning & knowledge.. p.s.
> 
> You are correct that MOST PEOPLE in AI treat observation/perception as
> pure passive. As on many topics, most people in AI are probably wrong.
> However, you keep making claim on "everyone", "nobody", ..., which is
> almost never true. If this is your way to get people to reply your
> email, it won't work on me anymore.
> 
> There are many open problems in AI, so it is not hard to find one that
> haven't been solved. If you have an idea about how to solve it, then
> work on it and show us how far you can go. Just saying "Nobody has
> idea about how to ..." contribute little to the field, since that
> problem typically has been raised decades ago.
> 
> Pei
> 

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Pei Wang
On Thu, Feb 14, 2008 at 6:41 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Pei,
>
>  A misunderstanding.  My point was not about the psychology of
>  observation/vision. I understand well that psychology and philosophy are
>  increasingly treating it as more active/reasoned and implicitly referenced
>  Noe. My point is that *AI* and *AGI* treat observation as if it is passive
>  rather than reasoned - that I can't remember any discussion of this in any
>  context here or related places (& would welcome references otherwise) - and
>  the opinions you have expressed seem consistent with this.

Even that is not true --- see
http://users.rsise.anu.edu.au/~rsl/rsl_active.html for example. I'm
sure there are more out there.

You are correct that MOST PEOPLE in AI treat observation/perception as
pure passive. As on many topics, most people in AI are probably wrong.
However, you keep making claim on "everyone", "nobody", ..., which is
almost never true. If this is your way to get people to reply your
email, it won't work on me anymore.

There are many open problems in AI, so it is not hard to find one that
haven't been solved. If you have an idea about how to solve it, then
work on it and show us how far you can go. Just saying "Nobody has
idea about how to ..." contribute little to the field, since that
problem typically has been raised decades ago.

Pei

>  Let me put this in context for you. Imaginative and visual reasoning are
>  massively underestimated throughout our culture. So, for example, while we
>  have had cognitive linguistics, & Lakoff & co talking about the fundamental
>  embodiment of language (and maths and symbol systems) for 2 to 3 decades
>  now, it was only last year that the first journal of Cognitive *Semiotics*
>  was brought out. Visual reasoning and tacit knowledge are obviously not new,
>  but they are v. understudied. Within 5 years, though, (& Ben can come back &
>  laugh at me if I'm wrong), they will have exploded as areas of study.
>
>
>
>
>   Mike Tintner <[EMAIL PROTECTED]> wrote:
>  >>
>  >>  Everyone is talking about observation as if it is PASSIVE - as if you
>  >> just
>  >>  record the world and THEN you start reasoning.
>  >
>  > Mike: I really hope you can stop making this kind of claim, for your own
>  > sake.
>  >
>  > For what people have been talking about on this topic, see
>  >
>  > http://www.amazon.com/Why-We-See-What-Empirical/dp/0878937528/ref=sr_1_6
>  > http://socrates.berkeley.edu/~noe/action.html
>  > http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel1/5/291/5968.pdf
>  >
>
>
>
>
> ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Mike Tintner

Pei,

A misunderstanding.  My point was not about the psychology of 
observation/vision. I understand well that psychology and philosophy are 
increasingly treating it as more active/reasoned and implicitly referenced 
Noe. My point is that *AI* and *AGI* treat observation as if it is passive 
rather than reasoned - that I can't remember any discussion of this in any 
context here or related places (& would welcome references otherwise) - and 
the opinions you have expressed seem consistent with this.


Let me put this in context for you. Imaginative and visual reasoning are 
massively underestimated throughout our culture. So, for example, while we 
have had cognitive linguistics, & Lakoff & co talking about the fundamental 
embodiment of language (and maths and symbol systems) for 2 to 3 decades 
now, it was only last year that the first journal of Cognitive *Semiotics* 
was brought out. Visual reasoning and tacit knowledge are obviously not new, 
but they are v. understudied. Within 5 years, though, (& Ben can come back & 
laugh at me if I'm wrong), they will have exploded as areas of study.



Mike Tintner <[EMAIL PROTECTED]> wrote:


 Everyone is talking about observation as if it is PASSIVE - as if you 
just

 record the world and THEN you start reasoning.


Mike: I really hope you can stop making this kind of claim, for your own 
sake.


For what people have been talking about on this topic, see

http://www.amazon.com/Why-We-See-What-Empirical/dp/0878937528/ref=sr_1_6
http://socrates.berkeley.edu/~noe/action.html
http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel1/5/291/5968.pdf




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Bob Mottram
On 14/02/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  Who knows what we might have achieved had that level of dedication actually
>  continued for 4-7 more years?

This kind of frustration is familiar to most inventors, and probably
most people on this list.  Likewise I'm pretty sure that if I had
access to more resources and maybe had a couple of assistants I could
make much faster progress on the problems I'd like to see resolved.


>  Our codebase had some problems, and some of our ideas at that point were
>  inadequately specified.  But we were moving in the right direction,
>  and my progress
>  since that point has been significantly slower due to having less than 1/10 
> the
>  team-size devoted to AGI.

Perhaps this is where the open source (opencog) approach can help,
although whether open or closed it's always hard to find people to
work on a project.


>  The real stupidity underlying that prediction I made, in early 2001,
>  was my naivete
>  in not realizing how suddenly the dot-com bubble was going to burst.


Even if you know better it's sometimes tricky to break from your
natural linear intuitions.  For most of human history guessing that
the next few years will be much like the last was a good heuristic.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-14 Thread Pei Wang
You don't need to keep my busy --- I'm already too busy to continue
this discussion.

I don't have all the answers to your questions. For the ones I do have
answers, I'm afraid I don't have the time to explain them to your
satisfaction.

Pei

On Thu, Feb 14, 2008 at 5:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  I should be supplying a detailed argument that in effect deals with this
>  soon. However just to keep you busy :) -
>  here is a v. cool website:
>
>  http://www.citrinitas.com/history_of_viscom/index.html
>
>  tracing the history of communication. Perhaps you'd like to set out  how
>  logic can explain the invention of ONE SINGLE form of symbolic
>  communication -  from pictograms to cuneiform, ideograms, Greek alphabet, .
>  etc.  right down to the a's and b's of logic, x's and y's of algebra, and
>  1's and 0's of  computers. How, pace Saussure, do you come to associate
>  "TREE" or "ARBRE" or any of the thousands of equivalent words in different
>  languages, with the actual object, with branches and leaves, that they refer
>  to?  There is no connection, nothing for logic to work on at all. There is
>  no physical or other relation between the signifier and the signified. Every
>  symbolic system is entirely *arbitrary.* How were they arrived at? All those
>  "A"'s and "B"'s and "C"'s (without which you couldn't function
>  intellectually).  By acts of *imagination*  / pure imaginative association.
>
>  I suspect - and correct me - that you haven't thought much at all about this
>  whole area of imaginative and visual reasoning - i.e. how one image is drawn
>  from another, or  how someone delineates a drawing of an object from the
>  object itself. How, say, do you get from a human face to the distorted
>  portraits of Modigliani, Picasso, Francis Bacon, Scarfe, or any cartoonist?
>  By logical or mathematical formulae?  Which parts of the face do logic or
>  semantic networks tell you to highlight or leave out or transpose or smudge
>  or overlay, or what to blur, and what to sharpen? Which of the continuously
>  changing expressions on a person's face does logic tell you are most
>  representative of their personality?
>
>  And just as you are blind to the imaginative basis of all symbolic forms, so
>  you are blind to the imaginative basis of the whole of science and
>  technology  - how,. other than by an act of supreme imagination, do you
>  think Descartes invented coordinate geometry? (no coordinates or axes in
>  nature) or Archimedes thought of measuring irregular solids (no baths or
>  water containers classified under "measuring instruments"?) And blind too to
>  the imaginative basis of all reasoning, period, including logic.  But your
>  reply has been v. helpful.
>
>  P.S. I'm starting to get it here - you can't ahem imagine that one image
>  (and therefore conclusion) can be drawn/reasoned from another. You in effect
>  think that if you see an erection bulging through a trouser, the reason you
>  know that person is sexually excited is because there is a semantic,
>  symbolic network in your head - "ERECTION" - "SEXUAL" -  "SIGN OF
>  EXCITEMENT" that you use to reason here. No, it's because you've seen direct
>  sensory images of (concealed) erections connected (via observation) with
>  other sensory images of excited faces.  And when you have sex, you will
>  engage in thousands of other comparable acts of imaginative reasoning. At
>  this stage, you will be probably thinking, "well, "ERECTION" "EXCITEMENT"
>  etc - why couldn't there be such a semantic network in my head?" Well,
>  actually there could be for that particular one (in addition to the
>  imaginative connection). But there couldn't be for most of the others. How,
>  for example, does a partner's panting tell you when they're excited (and not
>  just heavily breathing)? That and most of your sexual reasoning (and indeed
>  reasoning for day-to-day activities) will come under Polanyi's "tacit
>  knowledge." Not under "Pei's Logical Rules of Sex." Entirely imaginative
>  observations of which object movements/behaviour follow upon which other
>  ones. Entirely "drawn" conclusions.  (We obviously need something like an
>  Encyclopaedia/ Movie Library of Tacit/Imaginative Knowledge - it's vast. How
>  do you know Madonna is a toughie just from her face? - you do but it's
>  imaginative knowledge and you probably don't have the words to hand to
>  explain. And ditto for most of your knowledge about human beings and
>  animals).
>
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http:/

Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Ben Goertzel
Hi Mike,


>  P.S. I also came across this lesson that AGI forecasting must stop (I used
>  to make similar mistakes elsewhere).
>
>  "We've been at it since mid-1998, and we estimate that within 1-3 years from
>  the time I'm writing this (March 2001), we will complete the creation of a
>  program that can hold highly intelligent (though not necessarily fully
>  human-like) English conversations, talking to us about its own creative
>  discoveries and ideas regarding the digital data that is its worldOf
>  course, "1-4 years from real AI" and "1-3 years more to fully self-modifying
>  AI" are very gutsy claims, similar to other claims that have been made (and
>  not fulfilled) throughout the history of AI. But we believe that, due to the
>  combination of advances in computer hardware and software with advances in
>  various aspects of cognitive science, real AI really now is possible - and
>  that we know how to achieve it, and are substantially advanced along the
>  path to this goal."
>  http://www.goertzel.org/books/DIExcerpts.htm


I'd like to note that at that time I was working with a team of about
**40** full-time
R&D staff focused on nothing but AGI.

On April 1, 2001, the company hosting that team (Webmind Inc.) shut its doors.

Who knows what we might have achieved had that level of dedication actually
continued for 4-7 more years?

Our codebase had some problems, and some of our ideas at that point were
inadequately specified.  But we were moving in the right direction,
and my progress
since that point has been significantly slower due to having less than 1/10 the
team-size devoted to AGI.

The real stupidity underlying that prediction I made, in early 2001,
was my naivete
in not realizing how suddenly the dot-com bubble was going to burst.
The prediction
was conditional on the Webmind AI team continuing in the form it existed at that
time; but as it happened, the creation and maintenance of that sort of
AGI R&D team
was an epiphenomenon of the temporary dot-com bubble.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-14 Thread Mike Tintner

Pei: > Though many people assume "reasoning" can only been applied to

"symbolic" or "linguistic" materials, I'm not convinced yet, nor that
there is really a separate "imaginative reasoning" --- at least I
haven't seen a concrete proposal on what it means and why it is
different.



I should be supplying a detailed argument that in effect deals with this 
soon. However just to keep you busy :) -

here is a v. cool website:

http://www.citrinitas.com/history_of_viscom/index.html

tracing the history of communication. Perhaps you'd like to set out  how 
logic can explain the invention of ONE SINGLE form of symbolic 
communication -  from pictograms to cuneiform, ideograms, Greek alphabet, . 
etc.  right down to the a's and b's of logic, x's and y's of algebra, and 
1's and 0's of  computers. How, pace Saussure, do you come to associate 
"TREE" or "ARBRE" or any of the thousands of equivalent words in different 
languages, with the actual object, with branches and leaves, that they refer 
to?  There is no connection, nothing for logic to work on at all. There is 
no physical or other relation between the signifier and the signified. Every 
symbolic system is entirely *arbitrary.* How were they arrived at? All those 
"A"'s and "B"'s and "C"'s (without which you couldn't function 
intellectually).  By acts of *imagination*  / pure imaginative association.


I suspect - and correct me - that you haven't thought much at all about this 
whole area of imaginative and visual reasoning - i.e. how one image is drawn 
from another, or  how someone delineates a drawing of an object from the 
object itself. How, say, do you get from a human face to the distorted 
portraits of Modigliani, Picasso, Francis Bacon, Scarfe, or any cartoonist? 
By logical or mathematical formulae?  Which parts of the face do logic or 
semantic networks tell you to highlight or leave out or transpose or smudge 
or overlay, or what to blur, and what to sharpen? Which of the continuously 
changing expressions on a person's face does logic tell you are most 
representative of their personality?


And just as you are blind to the imaginative basis of all symbolic forms, so 
you are blind to the imaginative basis of the whole of science and 
technology  - how,. other than by an act of supreme imagination, do you 
think Descartes invented coordinate geometry? (no coordinates or axes in 
nature) or Archimedes thought of measuring irregular solids (no baths or 
water containers classified under "measuring instruments"?) And blind too to 
the imaginative basis of all reasoning, period, including logic.  But your 
reply has been v. helpful.


P.S. I'm starting to get it here - you can't ahem imagine that one image 
(and therefore conclusion) can be drawn/reasoned from another. You in effect 
think that if you see an erection bulging through a trouser, the reason you 
know that person is sexually excited is because there is a semantic, 
symbolic network in your head - "ERECTION" - "SEXUAL" -  "SIGN OF 
EXCITEMENT" that you use to reason here. No, it's because you've seen direct 
sensory images of (concealed) erections connected (via observation) with 
other sensory images of excited faces.  And when you have sex, you will 
engage in thousands of other comparable acts of imaginative reasoning. At 
this stage, you will be probably thinking, "well, "ERECTION" "EXCITEMENT" 
etc - why couldn't there be such a semantic network in my head?" Well, 
actually there could be for that particular one (in addition to the 
imaginative connection). But there couldn't be for most of the others. How, 
for example, does a partner's panting tell you when they're excited (and not 
just heavily breathing)? That and most of your sexual reasoning (and indeed 
reasoning for day-to-day activities) will come under Polanyi's "tacit 
knowledge." Not under "Pei's Logical Rules of Sex." Entirely imaginative 
observations of which object movements/behaviour follow upon which other 
ones. Entirely "drawn" conclusions.  (We obviously need something like an 
Encyclopaedia/ Movie Library of Tacit/Imaginative Knowledge - it's vast. How 
do you know Madonna is a toughie just from her face? - you do but it's 
imaginative knowledge and you probably don't have the words to hand to 
explain. And ditto for most of your knowledge about human beings and 
animals). 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Pei Wang
On Thu, Feb 14, 2008 at 3:39 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Everyone is talking about observation as if it is PASSIVE - as if you just
>  record the world and THEN you start reasoning.

Mike: I really hope you can stop making this kind of claim, for your own sake.

For what people have been talking about on this topic, see

http://www.amazon.com/Why-We-See-What-Empirical/dp/0878937528/ref=sr_1_6
http://socrates.berkeley.edu/~noe/action.html
http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel1/5/291/5968.pdf

Pei

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge.. p.s.

2008-02-14 Thread Mike Tintner

Pei:  What type of reasoning is needed for AI? The major answers are:

(A): deduction only, (B) multiple types, including deduction,
induction, abduction, analogy, etc.


And the other thing that AI presumably lacks currently - this sounds so 
obvious as to be almost silly to say, but I can't remember it being 
discussed  - is imaginative reasoning by way of *observation.* If you can't 
perceive the world, other than to a v. limited extent, (as we all agree, 
pace Wozniak's robot), you can hardly observe it - and look for 
clues/evidence and so learn about not just murders, (the obvious 
association), but all forms of behaviour of all things and organisms. I must 
admit I've never thought about this, and I'm groping for an analogy of how 
huge a void this entails  - it's as if you could only travel the world 
locked in a rail carriage on a few well-established rail lines, and never 
walk or otherwise move around anywhere - and had to rely entirely on 
second-hand books etc for your knowledge  -  in which case, you might well 
think that logic was a significant form of reasoning.  If you can't observe 
the world, you are so-o-o buggered...


Now a funny thing. I Google and there are loads of "AI" - "observe the 
world" associations. Indeed who is numero uno (well duo) in many thousands:


*Why has no one yet managed to build a thinking machine? Mainly it's because 
no one has really tried to build a whole mind: a computer system that could 
observe the world around it, act in it.."
Ben - The Path to Posthumanity.  (I'm beginning to think the Net is just 
footnotes to Ben's voluminous publications).


So I check out some more - and now I think I understand what's going on 
here. {Please correct me].


Everyone is talking about observation as if it is PASSIVE - as if you just 
record the world and THEN you start reasoning.


But it's not -  just as perception is heavily enactive, (with you choosing 
what to look at), so we engage in OBSERVATION-AS-REASONING - looking for 
clues. And that is arguably the most fundamental arena of human intelligence 
(not just immediately, but later A La Recherche du Temps Perdu).


P.S. I also came across this lesson that AGI forecasting must stop (I used 
to make similar mistakes elsewhere).


"We've been at it since mid-1998, and we estimate that within 1-3 years from 
the time I'm writing this (March 2001), we will complete the creation of a 
program that can hold highly intelligent (though not necessarily fully 
human-like) English conversations, talking to us about its own creative 
discoveries and ideas regarding the digital data that is its worldOf 
course, "1-4 years from real AI" and "1-3 years more to fully self-modifying 
AI" are very gutsy claims, similar to other claims that have been made (and 
not fulfilled) throughout the history of AI. But we believe that, due to the 
combination of advances in computer hardware and software with advances in 
various aspects of cognitive science, real AI really now is possible - and 
that we know how to achieve it, and are substantially advanced along the 
path to this goal."

http://www.goertzel.org/books/DIExcerpts.htm



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-14 Thread Pei Wang
Though many people assume "reasoning" can only been applied to
"symbolic" or "linguistic" materials, I'm not convinced yet, nor that
there is really a separate "imaginative reasoning" --- at least I
haven't seen a concrete proposal on what it means and why it is
different.

For a simple deduction rule {S --> M, M --> P} |- S --> P, it is
possible that the "symbols" S, M, and P correspond to different
"mental images" in a system, and the rule just shows their relations:
if S is a special type of M, and M is a special type of P, then S is a
special type of P. Whether they are words, images, actions, etc., does
not matter.

Of course there will be modular-specific operations: operations on
visual signal and operations on audio signals are different, but they
can all be handled consistently in procedural inference. We don't have
separate "visual logic" and "audio logic".

If by "imaginative reasoning" you mean "reasoning on imaginative
information", then it is correct to say that few AI system can do it
at the moment, but it is wrong to assume that it cannot be done, or
demand a completely different type of logic.

Pei

Pei

On Thu, Feb 14, 2008 at 1:51 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Pei:  What type of reasoning is needed for AI? The major answers are:
>
> > (A): deduction only, (B) multiple types, including deduction,
>  > induction, abduction, analogy, etc.
>
>  Is it fair to say that current AI involves an absence of imaginative
>  reasoning?   -  reasoning that is conducted more or less exclusively with
>  and in images. For example,  an animal/ human predicting which way a
>  predator/prey/sports opponent will move; a designer redesigning a font (the
>  letter T, say), or a chair, or an ipod; a Riemann or a Mandelbrot perceiving
>  that geometry can be applied to new kinds or dimensions of form.
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-14 Thread Mike Tintner


Pei:  What type of reasoning is needed for AI? The major answers are:

(A): deduction only, (B) multiple types, including deduction,
induction, abduction, analogy, etc.


Is it fair to say that current AI involves an absence of imaginative 
reasoning?   -  reasoning that is conducted more or less exclusively with 
and in images. For example,  an animal/ human predicting which way a 
predator/prey/sports opponent will move; a designer redesigning a font (the 
letter T, say), or a chair, or an ipod; a Riemann or a Mandelbrot perceiving 
that geometry can be applied to new kinds or dimensions of form. 



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning & knowledge

2008-02-14 Thread Stephen Reed
Pei,
Given your description, I agree B2 is the way to go.  At Cycorp, the inductive 
(e.g. rule induction), abductive (e.g. hypothesis generation), and analogical 
reasoning engines I observed were all supported by deductive inference.  I also 
a member of a Cycorp team that collaborated with Pedro Domingos' group at the 
university of Washington regarding probabilistic reasoning.

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Pei Wang <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, February 14, 2008 10:28:50 AM
Subject: [agi] reasoning & knowledge

 Steve,

To  me,  the  following  two  questions  are  independent  of  each  other:

*.  What  type  of  reasoning  is  needed  for  AI?  The  major  answers  are:
(A):  deduction  only,  (B)  multiple  types,  including  deduction,
induction,  abduction,  analogy,  etc.

*.  What  type  of  knowledge  should  be  reasoned  upon?  The  major  answers
are:  (1)  declarative  only,  (2)  declarative  and  procedural.

All  four  combination  of  the  two  answers  are  possible.  Cyc  is  mainly
A1;  you  seem  to  suggest  A2;  in  NARS  it  is  B2.

McDermott's  paper  has  many  good  points,  but  his  notion  of  "logic"  and
"semantics"  are  too  limited  ---  to  what  the  "Logicist  AI"  has  been
trying.

As  for  Semantic  Web  and  Ontology,  I  have  no  doubt  that  there  are
useful  for  some  special  applications.  However,  from  an  AGI  point  of
view,  their  assumption  about  human  knowledge  is  way  to  oversimplified.
I  don't  think  the  so-called  "common  knowledge"  can  be  forced  into  the
rigid  framework  of  "ontology",  not  to  mention  personal  knowledge,
which  is  even  more  fluid.

Even  so,  we  can  use  SW  as  a  possible  knowledge  source,  and  use
inference  engine  to  reveal  hidden  conclusions  in  it,  so  SW  is  still
relevant  to  AGI.

Pei









  

Be a better friend, newshound, and 
know-it-all with Yahoo! Mobile.  Try it now.  
http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ 

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


[agi] reasoning & knowledge

2008-02-14 Thread Pei Wang
Steve,

To me, the following two questions are independent of each other:

*. What type of reasoning is needed for AI? The major answers are:
(A): deduction only, (B) multiple types, including deduction,
induction, abduction, analogy, etc.

*. What type of knowledge should be reasoned upon? The major answers
are: (1) declarative only, (2) declarative and procedural.

All four combination of the two answers are possible. Cyc is mainly
A1; you seem to suggest A2; in NARS it is B2.

McDermott's paper has many good points, but his notion of "logic" and
"semantics" are too limited --- to what the "Logicist AI" has been
trying.

As for Semantic Web and Ontology, I have no doubt that there are
useful for some special applications. However, from an AGI point of
view, their assumption about human knowledge is way to oversimplified.
I don't think the so-called "common knowledge" can be forced into the
rigid framework of "ontology", not to mention personal knowledge,
which is even more fluid.

Even so, we can use SW as a possible knowledge source, and use
inference engine to reveal hidden conclusions in it, so SW is still
relevant to AGI.

Pei


On Thu, Feb 14, 2008 at 10:38 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Mike,
>
> Cyc uses, and my own Texai project will also eventually employ, deductive
> reasoning (i.e. modus ponens) as its main inference mechanism.  In Cyc, most
> of the fallacies that Shirkey points out are avoided by two means -
> nonmonotonic (e.g. default) reasoning, and context.
>
> Although I strongly favor interoperability with the Semantic Web via RDF
> (Resource Description Framework), my main issue with the SW is its allowance
> of a multitude of ontologies.   This problem is being addressed by the
> Linked Data movement, which is linking a wide variety of structured
> information sources using ontology mapping.
>
> Here is a link to a University of Texas presentation about McDermott's
> Critique of Pure Reason essay.  I agree with Drew McDermott that an AI will
> require a lot of procedural knowledge, not just a deductive inference
> engine.  In contrast to the Cyc project, my approach to commonsense
> knowledge acquistion will stress the acquisition of skills - at first
> linguistic skills.   Initially, these skills will be persisted as deductive
> rule sets, but eventually most will be  persisted as scripts that can be
> subsequently composed  into executable programs.
>
> -Steve
>
> Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
> - Original Message 
> From: Mike Tintner <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Thursday, February 14, 2008 8:32:16 AM
> Subject: [agi] Applicable to Cyc, NARS, ATM & others?
>
> The Semantic Web, Syllogism, and Worldview
> First published November 7, 2003 on the "Networks, Economics, and Culture"
> mailing list.
> Clay Shirky
>
>
>
>
>  
> Be a better friend, newshound, and know-it-all with Yahoo! Mobile. Try it
> now.
>  
>
>  agi | Archives | Modify Your Subscription

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com