[agi] Re: Accidental Genius

2008-05-09 Thread Brad Paulsen

Bryan,

Thanks for the clarifications and the links!

Cheers,

Brad

- Original Message - 
From: Bryan Bishop [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; [EMAIL PROTECTED]

Sent: Wednesday, May 07, 2008 9:46 PM
Subject: Re: Accidental Genius




On Wed, May 7, 2008 at 9:21 PM, Brad Paulsen [EMAIL PROTECTED] 
wrote:

I happened to catch a program on National Geographic Channel today
entitled Accidental Genius.  It was quite interesting from an AGI standpoint.


One of the researchers profiled has invented a device that, by sending
electromagnetic pulses through a person's skull to the appropriate spot in
the left hemisphere of that person's brain, can achieve behavior similar to
that of an idiot savant in a non-brain-damaged person (in the session shown,
this was a volunteer college student).


That's Snyder's work.*
http://www.wireheading.com/brainstim/savant.html

http://heybryan.org/mediawiki/index.php/rTMS

Re: savantism,
http://heybryan.org/intense_world_syndrome.html

DIY rTMS:
http://transcenmentalism.org/OpenStim/tiki
(there's a mailing list)
http://heybryan.org/mailing_lists.html

Before being zapped by the device, the student is taken through a series
of exercises.  One is to draw a horse from memory.  The other is to read
aloud a very familiar saying with a slight grammatical mistake in it (the
word "the" is duplicated, i.e., "the the", in the saying -- sorry I can't
recall the saying used). Then the student is shown a computer screen full of
dots for about 1 second and asked to record his best guess at how many
dots there were.  This exercise is repeated several times (with different
numbers of dots each time).


It's not just being zapped, it's being specifically stimulated in a
certain region of the brain; think of it as actually targeting the
visual cortex, or actually targeting the anterior cingulate, the left
ventrolateral amygdala, etc. And that's why this is interesting. I
wrote somewhat about this on my site once:

http://heybryan.org/recursion.html

Specifically, if this can be used to modify attention, then can we use
it to modify attention re: paying attention to attention? Sounds like
a direct path to the singularity to me.


The student is then zapped by the electromagnetic pulse device for 15
minutes.  It's kind of scary to watch the guy's face flinch uncontrollably
as each pulse is delivered. But, while he reported feeling something, he
claimed there was no pain or disorientation. His language faculties were
unimpaired (they zap a very particular spot in the left hemisphere based on
brain scans taken of idiot savants).


Right. The DIY setups that I have heard of haven't been able to be all
that high-powered due to safety concerns -- not safety re: the brain,
but safety when considering working with superhigh voltages so close
to one's head. ;-)


You can watch the episode on-line here:
http://channel.nationalgeographic.com/tv-schedule.  It's not scheduled for
repeat showing anytime soon.


Awesome. Thanks for the link.


That's not a direct link (I couldn't find one).  When you get to that Web
page, navigate to Wed, May 7 at 3PM and click the More button under the
picture.  Unfortunately, the full-motion video is the size of a large
postage stamp.  The full-screen view uses stop motion (at least it did on
my laptop using a DSL-based WiFi hotspot). The audio is the same in both
versions.


- Bryan

* Damien Broderick had to correct me on this, once. :-)



Re: [agi] Re: pattern definition

2008-05-09 Thread Mike Tintner
Boris: I define intelligence as an ability to predict/plan by discovering & 
projecting patterns within an input flow.


IOW a capacity to generalize. A general intelligence is something that 
generalizes from incoming info about the world.


Well, no it can't be just that. Look at what you write at the end of your 
blog:


Hope this makes sense.

And it doesn't literally make much sense because your blog has a lot of 
generalizations with no examples - no individualizations/particularisations 
of, for example, what individual/particular problems your algorithms might 
apply to. The "making sense" level of your brain - an AGI that works - is 
the level that seeks individual examples (and exceptions) for every 
generalization.


A general intelligence doesn't just generalize, it individualizes. It can 
talk not just about the field of AGI but about Boris K, Ben G., Stephen 
Reed, etc etc.  And it has to, otherwise those generalizations don't make 
sense.


I'm stressing this because so many people's ideas about AGI, like yours, 
involve only, or almost only, a generalizing intelligence with no 
individualizing, sensemaking level.


Boris: Entities must not be multiplied unnecessarily. William of Ockham.


A pattern is a set of matching inputs.
A match is a partial identity of the comparands.
The comparands for general intelligence must incrementally & indefinitely 
scale in complexity.
The scaling must start from the bottom: uncompressed single-integer 
comparands, & the match here is the sum of bitwise AND.
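
One concrete reading of that last line in code (an interpretation offered
here for illustration, not Boris's own implementation): take "the sum of
bitwise AND" to be the population count of a & b.

def match(a: int, b: int) -> int:
    # 'Sum of bitwise AND' read as: how many bit positions are set in both
    # uncompressed single-integer comparands.
    return bin(a & b).count("1")

assert match(0b1101, 0b1011) == 2  # two bits are shared by the two inputs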


For more see my blog: http://scalable-intelligence.blogspot.com/
Boris.

- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, May 08, 2008 1:17 PM
Subject: [agi] Re: pattern definition



[EMAIL PROTECTED] wrote:

Hello

I am writing a literature review on AGI and I am mentioning the 
definition of pattern as explained by Ben in his work.


A pattern is a representation of an object on a simpler scale. For 
example, a pattern in a drawing of a mathematical curve could be a 
program that can compute the curve from a formula (Looks et al. 2004). 
My supervisor told me that she doesn't see how this can be simpler than 
the actual drawing.


Any other definition I could use in the same context to explain to a 
non-technical audience?


thanks

xav


Xav,

[I am copying this to the AGI mailing list because it is more appropriate 
there than on Singularity]


A more general definition of pattern would include the idea that there is 
a collection of mechanisms that take in a source of information (e.g. an 
image consisting of a grid of pixels) and respond in such a way that each 
mechanism 'triggers' in some way when a particular arrangement of signal 
values appears in the information source.


Note that the triggering of each mechanism is the 'recognition' of a 
pattern, and the mechanism in question is a 'recognizer' of a pattern. 
(In this way of looking at things, there are many mechanisms, one for 
each pattern).  The 'particular arrangement of signal values' is the 
pattern itself.


Most importantly note that a mechanism does not have to trigger for some 
exact, deterministic set of signal values.  For example, a mechanism 
could respond in a stochastic, noisy way to a whole bunch of different 
arrangements of signal values.  It is allowed to be slightly 
inconsistent, and not always respond in the same way to the same input 
(although it would be a particularly bad pattern recognizer if it did not 
behave in a reasonably consistent way!).  The amount of the 'triggering' 
reaction does not have to be all-or-nothing, either:  the mechanism can 
give a graded response.


What the above paragraph means is that the thing that we call a 'pattern' 
is actually 'whatever makes a mechanism trigger', and we have to be 
extremely tolerant of the fact that a wide range of different signal 
arrangements will give rise to triggering ... so a pattern is something 
much more amorphous and hard to define than simply *one* arrangement of 
signals.


Finally, there is one more twist to this definition, which is very 
important.  Everything said above was about arrangements of signals in 
the primary information source ... but we also allow that some mechanisms 
are designed to trigger on an arrangement of other *mechanisms*, not just 
primary input signals.  In other words, this pattern finding system is 
hierarchical, and there can be abstract patterns.


This definition of pattern is the most general one that I know of.  I use 
it in my own work, but I do not know if it has been explicitly published 
and named by anyone else.


In this conception, patterns are defined by the mechanisms that trigger, 
and further deponent sayeth not what they are, in any more fundamental 
way.


And one last thing:  as far as I can see this does not easily map onto 
the concept of Kolmogorov complexity.  At least, the mapping is very 
awkward and uninformative, if it exists.  If a mechanism triggers on a 
possibly stochastic, nondeterministic set of features, it can hardly be 
realised by a feasible algorithm, so talking about a pattern as an 
algorithm that can generate the source seems, to me at least, to be 
unworkable.

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-09 Thread Jim Bromer



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 9:05:22 PM
Subject: Re: Symbol Grounding  [WAS Re: [agi] AGI-08 videos]

 
I just want to make the point that I think categorical grounding is necessary 
for AGI, but I believe that it could be done through symbol processing.  The 
reason is that old AI had some kind of learning facility.  If symbols were 
associated with generalized forms that represented possible arrangements of 
data, they could then be used to designate categorical analysis that was 
implicitly derived from the IO data environment as well.  So while I agree that 
categorical grounding is necessary, it is just that I also believe that could 
have been accomplished with AI programs that used symbol processing before 
Harnad's publication.  I just believe that they lacked another facility which I 
call conceptual integration.  My idea of conceptual integration includes 
blending but it is not limited to it.  (And the computers in that era were too 
wimpy.)
Jim Bromer

Hi Jim,

It's simply, I think - and I stand to be corrected - that he has never pushed 
those levels v. hard at all. They are definitely there in his writing, but not 
elaborated.
 
So the only enduring impression he has left, IMO, is the idea of symbol 
grounding - which people have interpreted in various ways.
 
As you can imagine, I personally would have liked to see a lot more re those 
intermediate levels. And if he had pushed them, someone would presumably have 
brought him up in connection with Jeff Hawkins' work.




Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 I claim there is no P such that P(P,y) = P(y) for all y.

(I assume you mean something like P((P,y))=P(y)).

If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Accidental Genius

2008-05-09 Thread Mike Tintner
Right on. Everything I've read, esp. Grandin, strongly suggests autism is 
crucially a matter of hypersensitivity rather than an emotional disorder.  If, 
every time the normal person touched someone, they got the equivalent of an 
electric shock, they'd stay away from people too. [Thanks for your previous 
links too].


Bryan:

 We discuss how excessive
neuronal processing may render the world painfully intense when the
neocortex is affected and even aversive when the amygdala is affected,
leading to social and environmental withdrawal. 





Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer



- Original Message 
From: Matt Mahoney [EMAIL PROTECTED]
--- Jim Bromer [EMAIL PROTECTED] wrote:

 I don't want to get into a quibble fest, but understanding is not
 necessarily constrained to prediction.

What would be a good test for understanding an algorithm?

-- Matt Mahoney, [EMAIL PROTECTED]
--
I don't have a ready answer for this.  First of all, (maybe I do have a ready 
answer to this), understanding has to be understood in the context of partial 
understanding.  I can understand something about a subject without being an 
expert in the subject, and I am Skeptical of anyone who claims that total 
understanding is feasible, (except for a bounded discussion of a bounded 
concept in which case I would only be skeptical with a small s.)

So to start with, I could say that understanding an algorithm could be defined 
by various kinds of partial knowledge of it.  What kinds of input does it react 
to, and what kinds of internal actions does it take?  What kind of output does 
it produce?  Can generalizations of the input it takes, its internal actions and 
its output be made?  What was it designed to do?  Can relations between 
specific examples or derived generalizations of its input, its internal actions 
and its output be made?

While some of this kind of knowledge would require some kind of intelligence, 
others could be expressed in simpler data-concepts.  Harnad's categorical 
grounding comes to mind.  An experimental AI program would be capable of 
deriving data from the operation of an algorithm if its program was created 
around this paradigm of examining an algorithm.   It could then create its own 
kind of analyses of the algorithm, and even though it might not be the same as 
an analysis that we might create, it still might be usable to produce something 
that would border on understanding.

The capacity of prediction is significant in the kind of derived 
generalizations and categorical exemplars that I am thinking of, but the 
concept of understanding must go beyond simple prediction.

Jim Bromer



  



RE: [agi] Re: pattern definition

2008-05-09 Thread John G. Rose
So many overloads - pattern, complexity, atoms - can't we come up with new
terms like schfinkledorfs? - but a very interesting question is - given an
image of W x H pixels of 1 bit depth (on or off), one frame, how many
patterns exist within this grid?  When you think about it, it becomes an
extremely difficult question to answer because within a static image you can
have dupes, different sizes, dimensions, distortions, compressions,
expansions, combos... it's crazy. BUT, there is a pattern to the patterns -
there's a mathematical description of them.
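
Just to make the explosion concrete under one deliberately narrow reading --
counting only contiguous axis-aligned windows, and ignoring the dupes,
distortions and combinations mentioned above -- the number of candidate
pattern sites alone is already quartic in the image size:

def subrectangle_count(w: int, h: int) -> int:
    # Axis-aligned sub-rectangles in a w x h grid: pick a start/end column
    # (w*(w+1)/2 ways) and a start/end row (h*(h+1)/2 ways).
    return (w * (w + 1) // 2) * (h * (h + 1) // 2)

assert subrectangle_count(2, 2) == 9   # 4 cells + 4 dominoes + 1 full square
print(subrectangle_count(640, 480))    # roughly 2.4e10 windows in a small image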

John


 -Original Message-
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 08, 2008 11:18 AM
 To: agi@v2.listbox.com
 Subject: [agi] Re: pattern definition
 
 [EMAIL PROTECTED] wrote:
  Hello
 
  I am writing a literature review on AGI and I am mentioning the
  definition of pattern as explained by Ben in his work.
 
  A pattern is a representation of an object on a simpler scale. For
  example, a pattern in a drawing of a mathematical curve could be a
  program that can compute the curve from a formula (Looks et al. 2004).
  My supervisor told me that she doesn't see how this can be simpler
 than
  the actual drawing.
 
  Any other definition I could use in the same context to explain to a
  non-technical audience?
 
  thanks
 
  xav
 
 Xav,
 
 [I am copying this to the AGI mailing list because it is more
 appropriate there than on Singularity]
 
 A more general definition of pattern would include the idea that there
 is a collection of mechanisms that take in a source of information (e.g.
 an image consisting of a grid of pixels) and respond in such a way that
 each mechanism 'triggers' in some way when a particular arrangement of
 signal values appears in the information source.
 
 Note that the triggering of each mechanism is the 'recognition' of a
 pattern, and the mechanism in question is a 'recognizer' of a pattern.
 (In this way of looking at things, there are many mechanisms, one for
 each pattern).  The 'particular arrangement of signal values' is the
 pattern itself.
 
 Most importantly note that a mechanism does not have to trigger for some
 exact, deterministic set of signal values.  For example, a mechanism
 could respond in a stochastic, noisy way to a whole bunch of different
 arrangements of signal values.  It is allowed to be slightly
 inconsistent, and not always respond in the same way to the same input
 (although it would be a particularly bad pattern recognizer if it did
 not behave in a reasonably consistent way!).  The amount of the
 'triggering' reaction does not have to be all-or-nothing, either:  the
 mechanism can give a graded response.
 
 What the above paragraph means is that the thing that we call a
 'pattern' is actually 'whatever makes a mechanism trigger', and we have
 to be extremely tolerant of the fact that a wide range of different
 signal arrangements will give rise to triggering ... so a pattern is
 something much more amorphous and hard to define than simply *one*
 arrangement of signals.
 
 Finally, there is one more twist to this definition, which is very
 important.  Everything said above was about arrangements of signals in
 the primary information source ... but we also allow that some
 mechanisms are designed to trigger on an arrangement of other
 *mechanisms*, not just primary input signals.  In other words, this
 pattern finding system is hierarchical, and there can be abstract
 patterns.
 
 This definition of pattern is the most general one that I know of.  I
 use it in my own work, but I do not know if it has been explicitly
 published and named by anyone else.
 
 In this conception, patterns are defined by the mechanisms that trigger,
 and further deponent sayeth not what they are, in any more fundamental
 way.
 
  And one last thing:  as far as I can see this does not easily map onto
 the concept of Kolmogorov complexity.  At least, the mapping is very
 awkward and uninformative, if it exists.  If a mechanism triggers on a
  possibly stochastic, nondeterministic set of features, it can hardly be
 realised by a feasible algorithm, so talking about a pattern as an
 algorithm that can generate the source seems, to me at least, to be
 unworkable.
 
 Hope that is useful.
 
 
 
 
 Richard Loosemore
 
 
 P.S.  Nice to see some Welsh in the boilerplate stuff at the bottom of
 your message. I used to work at Bangor in the early 90s, so it brought
 back fond memories to see Prifysgol Bangor!  Are you in the Psychology
 department?
 

Re: [agi] Re: pattern definition

2008-05-09 Thread Boris Kazachenko
And it doesn't literally make much sense because your blog has a lot of 
generalizations with no examples - no 
individualizations/particularisations of, for example, what 
individual/particular problems your algorithms might apply to. The "making 
sense" level of your brain - an AGI that works - is the level that seeks 
individual examples (and exceptions) for every generalization.


If you need examples you're in the wrong field.

A general intelligence doesn't just generalize, it individualizes. It can 
talk not just about the field of AGI but about Boris K, Ben G., Stephen 
Reed, etc etc.  And it has to, otherwise those generalizations don't make 
sense.


I'm stressing this because so many people's ideas about AGI like yours 
involve only, or almost only a generalizing intelligence with no 
individualizing, sensemaking level.


Boris: Entities must not be multiplied unnecessarily. William of Ockham.


A pattern is a set of matching inputs.
A match is a partial identity of the comparands.
The comparands for general intelligence must incrementally & indefinitely 
scale in complexity.
The scaling must start from the bottom: uncompressed single-integer 
comparands, & the match here is the sum of bitwise AND.


For more see my blog: http://scalable-intelligence.blogspot.com/
Boris.

- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, May 08, 2008 1:17 PM
Subject: [agi] Re: pattern definition



[EMAIL PROTECTED] wrote:

Hello

I am writing a literature review on AGI and I am mentioning the 
definition of pattern as explained by Ben in his work.


A pattern is a representation of an object on a simpler scale. For 
example, a pattern in a drawing of a mathematical curve could be a 
program that can compute the curve from a formula (Looks et al. 2004). 
My supervisor told me that she doesn't see how this can be simpler 
than the actual drawing.


Any other definition I could use in the same context to explain to a 
non-technical audience?


thanks

xav


Xav,

[I am copying this to the AGI mailing list because it is more 
appropriate there than on Singularity]


A more general definition of pattern would include the idea that there 
is a collection of mechanisms that take in a source of information (e.g. 
an image consisting of a grid of pixels) and respond in such a way that 
each mechanism 'triggers' in some way when a particular arrangement of 
signal values appears in the information source.


Note that the triggering of each mechanism is the 'recognition' of a 
pattern, and the mechanism in question is a 'recognizer' of a pattern. 
(In this way of looking at things, there are many mechanisms, one for 
each pattern).  The 'particular arrangement of signal values' is the 
pattern itself.


Most importantly note that a mechanism does not have to trigger for some 
exact, deterministic set of signal values.  For example, a mechanism 
could respond in a stochastic, noisy way to a whole bunch of different 
arrangements of signal values.  It is allowed to be slightly 
inconsistent, and not always respond in the same way to the same input 
(although it would be a particularly bad pattern recognizer if it did 
not behave in a reasonably consistent way!).  The amount of the 
'triggering' reaction does not have to be all-or-nothing, either:  the 
mechanism can give a graded response.


What the above paragraph means is that the thing that we call a 
'pattern' is actually 'whatever makes a mechanism trigger', and we have 
to be extremely tolerant of the fact that a wide range of different 
signal arrangements will give rise to triggering ... so a pattern is 
something much more amorphous and hard to define than simply *one* 
arrangement of signals.


Finally, there is one more twist to this definition, which is very 
important.  Everything said above was about arrangements of signals in 
the primary information source ... but we also allow that some 
mechanisms are designed to trigger on an arrangement of other 
*mechanisms*, not just primary input signals.  In other words, this 
pattern finding system is hierarchical, and there can be abstract 
patterns.


This definition of pattern is the most general one that I know of.  I 
use it in my own work, but I do not know if it has been explicitly 
published and named by anyone else.


In this conception, patterns are defined by the mechanisms that trigger, 
and further deponent sayeth not what they are, in any more fundamental 
way.


And one last thing:  as far as I can see this does not easily map onto 
the concept of Kolmogorov complexity.  At least, the mapping is very 
awkward and uninformative, if it exists.  If a mechanism triggers on a 
possibly stochastic, nondeterministic set of features, it can hardly be 
realised by a feasible algorithm, so talking about a pattern as an 
algorithm that can generate the source seems, to me at least, to be 
unworkable.


Hope that is useful.




Richard Loosemore


P.S.  

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stephen Reed
Hi Matt,

You asked:

What would be a good test for understanding an algorithm?

 As I mentioned before, I want to create a system capable of being taught - 
specifically capable of being taught skills.  And I strongly share your 
interest in answers to this question.   A student should be able to demonstrate 
to its mentor that it has learned, or is making progress learning, the given 
subject matter.

A small use case might be the following:

Skill:  Trimming the whitespace off both ends of a character string.

Domain knowledge:

* a character string is a sequence of characters
* characters can be partitioned into whitespace and non-whitespace 
characters

Assumed existing algorithm skills:

* a sequence of objects can be inspected by index position
* a sequence of objects can be mutated by the remove operation
Tests to be passed by Texai following the skill acquisition:

* what are the preconditions for applying this skill?
* what are the postconditions for applying this skill?
* can this skill be applied to the sequence [1, 2, 3, 4, 5] ?
* can this skill be applied to the string abc?

* trim the whitespace off an empty string
* trim the whitespace off the string    abc   
* trim the whitespace off the string  ab  c

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 11:02:33 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI 
Dangers)


--- Jim Bromer [EMAIL PROTECTED] wrote:

 I don't want to get into a quibble fest, but understanding is not
 necessarily constrained to prediction.

What would be a good test for understanding an algorithm?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Steve Richfield
Matt,

On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote:

  Skill:  Trimming the whitespace off both ends of a character string.


One of the many annoyances of writing real-world AI programs is having to
write this function to replace the broken system functions that are
supposed to do this, but which don't work properly with control characters,
Unicode characters that should be considered whitespace (e.g. long
spaces), etc. Clearly, some intelligent humans haven't yet mastered this
algorithm, and ordinary software testing methods have failed to disclose the
remaining bugs.
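
For what it's worth, a minimal sketch of such a trim in Python; the exact set
of characters treated as trimmable here (Unicode space separators plus control
characters) is an assumption for illustration, not a spec anyone above endorsed:

import unicodedata

def trim(s: str) -> str:
    # Strip Unicode whitespace and control characters from both ends only.
    def trimmable(ch: str) -> bool:
        # Zs/Zl/Zp cover Unicode space separators (e.g. U+2003 EM SPACE);
        # Cc covers control characters, which many stock trims leave in place.
        return ch.isspace() or unicodedata.category(ch) in ("Zs", "Zl", "Zp", "Cc")
    start, end = 0, len(s)
    while start < end and trimmable(s[start]):
        start += 1
    while end > start and trimmable(s[end - 1]):
        end -= 1
    return s[start:end]

assert trim("\u2003\tab  c\x00 ") == "ab  c"  # internal whitespace is preserved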

Steve Richfield



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
Skill:  Trimming the whitespace off both ends of a character string.
 
One of the many annoyances of writing real-world AI programs is having to write 
this function; to replace the broken system functions that are supposed to do 
this, but which don't work properly with control characters, UNICODE characters 
that should be considered to be whitespace (e.g. long spaces), etc. Clearly, 
some intelligent humans haven't yet mastered this algorithm, and ordinary 
software testing methods have failed to disclose the remaining bugs.
Steve Richfield
---
I agree.  And you have to find the instructions before you can read them.  
(Seriously.)
Jim Bromer


  



Re: [agi] Re: pattern definition

2008-05-09 Thread Mike Tintner
Jim,

I doubt that your specification equals my individualization. 
If I want to be able to recognize the individuals Curtis/Brian/Carl and 
Billi Bromer, only images will do it:

http://www.dunningmotorsales.com/IMAGES/people/Curtis%20Bromer.jpg
http://www.newyorksocialdiary.com/socialdiary/2006/02_27_06/images/BRomer-Sir-PThomas.jpg
http://www.stellarsales.com/images/carl3.jpg
http://www.dec-sped.org/images/executiveboard/Billi_Bromer.jpg

If you'd like to try a logical, verbal, mathematical description of any one of 
those individuals, so that someone will be sure to recognize them, be my guest 
:).

That's why they put photos on your passport and not program printouts or verbal 
descriptions.

"All the words in the world won't tell you what a bucket looks like."
-- McLuhan


  Jim/MT:

  The making sense level of your brain - an AGI that works - is 
  the level that seeks individual examples (and exceptions) for every 
  generalization.

  A general intelligence doesn't just generalize, it individualizes. It can 
  talk not just about the field of AGI but about Boris K, Ben G., Stephen 
  Reed, etc etc.  And it has to, otherwise those generalizations don't make 
  sense.

  I'm stressing this because so many people's ideas about AGI ... 
  involve only, or almost only a generalizing intelligence with no 
  individualizing, sensemaking level.

  --
  I agree with what Mike was saying in the part of his message I quoted here, 
except that the ability to understand involves the ability to make 
generalizations.  But, a generalization can be seen as a specific relative to 
another level of generalization.  I also think most people who have been 
seriously involved in AI and who think of AI in terms of generalization realize 
that specification must play an important role in understanding.
  Jim Bromer





Re: [agi] Re: pattern definition

2008-05-09 Thread Joseph Henry
Mike, what is your stance on vector images?



[agi] organising parallel processes, try2

2008-05-09 Thread rooftop8000

I'll try to explain it more..
Suppose you have a lot of processes, all containing some production rule(s). 
They communicate with messages. They all should get CPU time somehow. Some 
processes just do low-level responses, some monitor other processes, etc. Some 
are involved in looking at the world, some involved in planning, etc. 

I'm thinking of a system like SOAR, but in parallel. Are there any systems that 
work like this, and have some way to organise the processes (assign CPU time, 
guide the communication, group according to some criteria...)? 
I'd like to look at a bunch of those and compare the pros & cons.

thanks





  

  





  



Re: [agi] organising parallel processes, try2

2008-05-09 Thread Stephen Reed
Hi,

The Texai system, as I envision its deployment, will have the following 
characteristics:

* a lot of processes
* a lot of hosts
* message passing between processes, which are arranged in a 
hierarchical control system

* higher level processes will be deliberative, executing compiled 
production rules (e.g. acquired skills)
* lower level processes will be reactive, even so far as not to contain 
any state whatsoever, if the sensed world itself will suffice
* some higher level processes on each host will be agents of the Host 
Resource Allocation Agency and will have the learned skills sufficient to 
optimally allocate host resources (e.g. CPU cores, RAM, KB cache) on behalf of 
other processes (i.e. agents)
* I have not yet thought much about how these resources should be 
allocated, except to initially adopt the scheduling algorithms used by the Linux 
OS for its processes (e.g. each process has a priority, schedule the processes 
to achieve maximum use of the resources, allow real-time response for processes 
that must have it, do not allow low priority processes to starve, etc.; a toy 
sketch of that last priority-with-aging idea follows below)
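
A deliberately tiny illustration of that last point only (priority scheduling
with aging so nothing starves) -- not a description of how Texai allocates
resources; all names and numbers below are made up:

import heapq
import itertools

def run(processes, steps=100, aging=0.1):
    # processes: list of (name, base_priority, step_fn) triples.
    # Each round the entry with the highest effective priority gets one call
    # to its step_fn; every entry still waiting gains `aging` priority, so
    # low-priority processes are delayed but never starved.
    tick = itertools.count()  # tie-breaker so the heap never compares functions
    # heap entries: (negated effective priority, tie, name, base priority, fn)
    heap = [(-p, next(tick), name, p, fn) for name, p, fn in processes]
    heapq.heapify(heap)
    for _ in range(steps):
        _, _, name, base, fn = heapq.heappop(heap)
        fn()  # one unit of "CPU time" for the chosen process
        # age the waiting entries, then requeue the one that just ran
        heap = [(neg - aging, t, n, b, f) for neg, t, n, b, f in heap]
        heapq.heapify(heap)
        heapq.heappush(heap, (-base, next(tick), name, base, fn))

run([("planner", 5.0, lambda: None), ("reflex", 1.0, lambda: None)], steps=10)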
Cheers.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: rooftop8000 [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, May 9, 2008 3:24:14 PM
Subject: [agi] organising parallel processes, try2


I'll try to explain it more..
Suppose you have a lot of processes, all containing some production rule(s). 
They communicate with messages. They all should get cpu time somehow. Some 
processes just do low-level responses, some monitor other processes, etc. Some 
are involved in looking at the world, some involved in planning, etc. 

I'm thinking of a system like SOAR, but in parallel. Are there any systems that 
work like this, and have some way to organise the processes (assign cpu time, 
guide the communication, group according to some criteria..) 
I'd like to look at a bunch of those and compare the pros  cons

thanks





  

  





  



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney

--- Steve Richfield [EMAIL PROTECTED] wrote:

 Matt,
 
 On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
 
  --- Steve Richfield [EMAIL PROTECTED] wrote:
 
   On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
   
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
 
   After many postings on this subject, I still assert that
   ANY rational AGI would be religious.
 
   Not necessarily.  You execute a program P that inputs the conditions of
   the game and outputs 1 box or 2 boxes.  Omega executes a program W
  as follows:
 
  if P outputs 1 box
 then put $1 million in box B
  else
 leave box B empty.
 
  No matter what P is, it cannot call W because it would be infinite
  recursion.
 
 
 QED this is NOT the program that Omega executes.

No, it is given that Omega never makes a mistake.  Please try again.


  A rational agent only has to know that there are some things it cannot
  compute.  In particular, it cannot understand its own algorithm.
 
 
  There is a LOT wrapped up in your "only". It is one thing to know that
  you can't presently compute certain things that you have identified, and quite
  another to believe that an unseen power changes things that you have NOT
  identified as being beyond your present (flawed) computational abilities. No
  matter how extensive your observations, you can NEVER be absolutely sure
  that you understand anything, and you will in fact fail to understand
  key details of some things without realizing it. With a good workable
  explanation of the variances between predicted and actual events (God),
  of course you will continue to look for less divine explanations, but at
  exactly what point do you broadly dismiss ALL divine explanations, in
  the absence of alternative explanations?

Intelligent agents cannot recognize higher levels of intelligence in other
agents.  We invoke divine explanation (godlike AI) because people have
trouble accepting mathematical proofs of this statement.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
 
  I claim there is no P such that P(P,y) = P(y) for all y.
 
 (I assume you mean something like P((P,y))=P(y)).
 
 If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for
 all y.

You're right.  But we wouldn't say that the trivial language P = {0,1}*
understands anything.  That is a problem with my formal definition of
understanding.

I teach a C++ class.  To test my students' understanding of the language,
I give them exam questions with code and ask them what it outputs.


-- Matt Mahoney, [EMAIL PROTECTED]



[agi] Self-maintaining Architecture first for AI

2008-05-09 Thread William Pearson
After getting completely on the wrong foot last time I posted
something, and not having had time to read the papers I should have. I
have decided to try and start afresh and outline where I am coming
from. I'll get around to do a proper paper later.

There are two possible modes for designing a computer system. I shall
characterise them as the active and the passive. The active approach
attempts to solve the problem directly, whereas the passive approach
gives a framework under which the problem and other related ones can
be more easily solved. The passive approach is generally less
efficient but much more reconfigurable.

The passive approach is used when there is a large number of related
possible problems, with a large variety of solutions. Examples of the
passive approach are mainly architectures, programming languages and
operating systems, with a variety of different goals. They are not
always completely passive, for example automatic garbage collection
impacts the system somewhat. One illuminating example is the variety
of security systems that have been built along these lines.
Security in this sense means that the computer system is composed of
domains, where not all of them are equally trusted or allowed
resources. Now it is possible to set up a passive system designed with
security in mind insecurely, by allowing all domains to access every
file on the hard disk. Passive systems do not guarantee the solution
they are aiming to aid, the most they can do is allow as many possible
things to be represented and permit the prevention of certain things.
A passive security system allows the prevention of a domain lowering
the security of a part of another domain.

The set of problems that I intend to help solve is the set of
self-maintaining computer systems. Self-maintenance is basically
reconfiguring the computer to be suited to the environment it finds
itself in.  The reasons why I think it needs to be solved before
AI is attempted are 1) humans self-maintain, and 2) otherwise the very
complex computer systems we build for AI will have to be maintained by
ourselves, which may become increasingly difficult as they approach
human level.

It is worth noting that I am using AI in the pure sense of being able
to solve problems. It is entirely possible to get very high complexity
problem solvers (including some potentially passing the Turing test) that
cannot self-maintain.

There is a large variety of possible AIs (different
bodies/environments/computational resources/goals), as can be seen from
the variety of humans and (proto?) intelligences of animals, so a
passive approach is not unreasonable.

In the case of a self-maintaining system, what is it that we wish the
architecture to prevent? About the only thing we can prevent is a
useless program being able to degrade the system from the current
level of operation by taking control of resources. However we also
want to enable useful programs to be able to control more resources.
To do this we must protect the resources and make sure the correct
programs can somehow get the correct resources, the internal programs
should do the rest. So it is a resource management problem. Any active
force for better levels of operation has to come from the internal
programs of the architecture, and once the higher level of operation
has been reached the architecture should act as a ratchet to prevent
it from slipping down again.

Protecting resources amounts to the security problem, on which we have a
fair amount of literature, and the only passive form of resource
allocation we know of is an economic system.

... to be continued

I might go into further detail about what I mean by resource but that
will have to wait for a further post.

  Will Pearson



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- Vladimir Nesov [EMAIL PROTECTED] wrote:

 (I assume you mean something like P((P,y))=P(y)).

 If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for
 all y.

 You're right.  But we wouldn't say that the trivial language P = {0,1}*
 understands anything.  That is a problem with my formal definition of
 understanding.


Then make a definition that fits your claim. As currently stated, it
looks erroneous to me, and I can't see how it's possible to fix that
without explicating your assertion mathematically.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stan Nilsen

Matt,

You asked What would be a good test for understanding an algorithm?

Thanks for posing this question.  It has been a good exercise.  Assuming 
that the key word here is "understanding" rather than "algorithm", I submit -


A test of understanding is whether one can give a correct *explanation* for 
any and all of the possible outputs that it (the thing to be understood) 
produces.


I see this as having merit in explaining understanding because it 
shows that one grasps the transformations that occur in the system.  If 
one is given a set of inputs and states, then one could state what the 
output would be by stepping through the transformations that take place.


This differs slightly from prediction because prediction demands that 
you be able to instrument every state and input for a given moment. This 
creates a distinction between it being hard or extremely hard or in 
practice impossible to predict vs. not so hard to understand what is 
going on.


Stan


Jim Bromer wrote:




- Original Message 
From: Matt Mahoney [EMAIL PROTECTED]
--- Jim Bromer [EMAIL PROTECTED] wrote:

  I don't want to get into a quibble fest, but understanding is not
  necessarily constrained to prediction.

What would be a good test for understanding an algorithm?

-- Matt Mahoney, [EMAIL PROTECTED]
--
I don't have a ready answer for this.  First of all, (maybe I do have a 
ready answer to this), understanding has to be understood in the context 
of partial understanding.  I can understand something about a subject 
without being an expert in the subject, and I am Skeptical of anyone who 
claims that total understanding is feasible, (except for a bounded 
discussion of a bounded concept in which case I would only be skeptical 
with a small s.)


So to start with, I could say that understanding an algorithm could be 
defined by various kinds of partial knowledge of it.  What kinds of 
input does it react to, and what kinds of internal actions does it 
take?  What kind of output does it produce?  Can generalizations of the 
input it takes, its internal actions and its output be made?  What was 
it designed to do?  Can relations between specific examples or derived 
generalizations of its input, its internal actions and its output be made?


While some of this kind of knowledge would require some kind of 
intelligence, others could be expressed in simpler data-concepts.  
Harnad's categorical grounding comes to mind.  An experimental AI 
program would be capable of deriving data from the operation of an 
algorithm if its program was created around this paradigm of examining 
an algorithm.   It could then create its own kind of analyses of the 
algorithm, and even though it might not be the same as an analysis that 
we might create, it still might be usable to produce something that 
would border on understanding.


The capacity of prediction is significant in the kind of derived 
generalizations and categorical exemplars that I am thinking of, but the 
concept of understanding must go beyond simple prediction.


Jim Bromer



Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
 
  --- Vladimir Nesov [EMAIL PROTECTED] wrote:
 
  (I assume you mean something like P((P,y))=P(y)).
 
  If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0
  for all y.
 
  You're right.  But we wouldn't say that the trivial language P =
 {0,1}*
  understands anything.  That is a problem with my formal definition
  of understanding.
 
 
 Then make a definition that fits your claim. As currently stated, it
 looks erroneous to me, and I can't see how it's possible to fix that
 without explicating your assertion mathematically.

OK, let me make more clear the distinction between running a program and
simulating it.  Say that a program P simulates a program Q if for all y,
P((Q,y)) = "the output is " + Q(y), where + means string concatenation.  In
other words, given Q and y, P prints not Q(y) but a statement describing
what the output Q(y) would be.  Then I claim there is no finite state
machine P that can simulate itself (including the trivial case).  P needs
as many states as Q to simulate it, plus additional states to print
"the output is ".
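
One way to write that counting step out (a sketch of the argument, not Matt's
own notation): if simulating any Q requires

  \[ |S_P| \;\ge\; |S_Q| + k \quad \text{for some fixed } k \ge 1, \]

where S_P and S_Q are the state sets of P and Q, then putting Q = P gives
|S_P| \ge |S_P| + k, which no finite state machine can satisfy.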

I also claim that "P understands Q" can be reasonably interpreted as "P
can simulate Q", but I can't prove it :-)




-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Self-maintaining Architecture first for AI

2008-05-09 Thread Richard Loosemore

William Pearson wrote:

After getting completely on the wrong foot last time I posted
something, and not having had time to read the papers I should have. I
have decided to try and start afresh and outline where I am coming
from. I'll get around to do a proper paper later.

There are two possible modes for designing a computer system. I shall
characterise them as the active and the passive. The active approach
attempts to solve the problem directly, whereas the passive approach
gives a framework under which the problem and other related ones can
be more easily solved. The passive approach is generally less
efficient but much more reconfigurable.

The passive approach is used when there is a large number of related
possible problems, with a large variety of solutions. Examples of the
passive approach are mainly architectures, programming languages and
operating systems, with a variety of different goals. They are not
always completely passive, for example automatic garbage collection
impacts the system somewhat. One illuminating example is the variety
of security systems that have been built along these lines.
Security in this sense means that the computer system is composed of
domains, where not all of them are equally trusted or allowed
resources. Now it is possible to set up a passive system designed with
security in mind insecurely, by allowing all domains to access every
file on the hard disk. Passive systems do not guarantee the solution
they are aiming to aid, the most they can do is allow as many possible
things to be represented and permit the prevention of certain things.
A passive security system allows the prevention of a domain lowering
the security of a part of another domain.

The set of problems that I intend to help solve is the set of
self-maintaining computer systems. Self-maintenance is basically
reconfiguring the computer to be suited to the environment it finds
itself in.  The reasons why I think it needs to be solved before
AI is attempted are 1) humans self-maintain, and 2) otherwise the very
complex computer systems we build for AI will have to be maintained by
ourselves, which may become increasingly difficult as they approach
human level.

It is worth noting that I am using AI in the pure sense of being able
to solve problems. It is entirely possible to get very high complexity
problem solvers (including some potentially passing the Turing test) that
cannot self-maintain.

There is a large variety of possible AIs (different
bodies/environments/computational resources/goals), as can be seen from
the variety of humans and (proto?) intelligences of animals, so a
passive approach is not unreasonable.

In the case of a self-maintaining system, what is it that we wish the
architecture to prevent? About the only thing we can prevent is a
useless program being able to degrade the system from the current
level of operation by taking control of resources. However we also
want to enable useful programs to be able to control more resources.
To do this we must protect the resources and make sure the correct
programs can somehow get the correct resources, the internal programs
should do the rest. So it is a resource management problem. Any active
force for better levels of operation has to come from the internal
programs of the architecture, and once the higher level of operation
has been reached the architecture should act as a ratchet to prevent
it from slipping down again.

Protecting resources amounts to the security problem, on which we have a
fair amount of literature, and the only passive form of resource
allocation we know of is an economic system.

... to be continued

I might go into further detail about what I mean by resource but that
will have to wait for a further post.


This is still quite ambiguous on a number of levels, so would it be 
possible for you to give us a road map of where the argument is going? 
At the moment I am not sure what the theme is.


For example, your distinction between active and passive could mean that 
you think we should be building a general learning mechanism, or it 
could mean that you think we should be taking a Generative Programming 
approach to the construction of an AI, or ... probably several other 
meanings.



Richard Loosemore
