Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Bob Mottram
On 03/03/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Don't you see the way to go on neural nets is hybrid with genetic algorithms
 in mass amounts?


I experimented with this combination in the early 1990s, and the
results were not very impressive.  Such systems still suffered from
extremely slow learning and poor scalability.



Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
On Mon, Mar 3, 2008 at 6:33 AM, [EMAIL PROTECTED] wrote:

 Thanks for that.

 Don't you see the way to go on neural nets is hybrid with genetic
 algorithms in mass amounts?


No, I don't agree with your buzzword-laden statement :) I experimented with
EA + NN combinations, and they're still intractable when scaled up to
nontrivial samples.

Luckily, there exist more efficient learning methods for NNs than search.
For multilayer perceptrons there's standard backpropagation (gradient
descent), conjugate gradient descent, Newton's method, etc. For RBMs,
there's contrastive divergence (CD) or wake-sleep using Gibbs sampling, etc.

The great thing about RBMs is that while they are still slow at learning (a
complex model can take a few days to converge), the architecture is very
simple (just a few matrices), the learning method is very simple (just a few
matrix multiplications, sketched below), and the result SEEMS to be
exceptionally good at building a *generative* model from (labeled or
unlabeled) complex data.
With RBMs you can do all kinds of interesting stuff like:
 - confabulate novel samples from the model;
 - compression (although inherently lossy);
 - visualisation in 2D (compress to 2 neurons).
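
To make "just a few matrix multiplications" concrete, here is a minimal
sketch of one CD-1 weight update for a binary RBM in Python/NumPy. Biases,
momentum and weight decay are omitted, and the function names are mine
rather than from any particular library:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, v0, lr=0.1, rng=np.random.default_rng()):
        # W: (n_visible, n_hidden) weights; v0: (batch, n_visible) binary data.
        # Positive phase: hidden activations driven by the data.
        h0_prob = sigmoid(v0 @ W)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step down and back up (the reconstruction).
        v1_prob = sigmoid(h0 @ W.T)
        h1_prob = sigmoid(v1_prob @ W)
        # CD-1 gradient: <v h>_data - <v h>_reconstruction.
        W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
        return W

The whole learning step really is a handful of matrix products, which is
what makes the architecture so appealingly simple.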

There's a nice Flash demonstration of digit generation/classification:
http://www.cs.toronto.edu/~hinton/adi/index.htm

Has anyone on this list done experiments with these kinds of generative models?
I can't find much research on this subject outside of the University of
Toronto's CS group, so the information reaching me might be positively
biased. (I don't have any affiliation with this group, in case anyone asks.)

Durk



Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Vladimir Nesov
On Mon, Mar 3, 2008 at 4:29 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:

 There's a nice Flash demonstration of digit generation/classification:
  http://www.cs.toronto.edu/~hinton/adi/index.htm

 Has anyone on this list done experiments with these kinds of generative models?
 I can't find much research on this subject outside of the University of
 Toronto's CS group, so the information reaching me might be positively
 biased. (I don't have any affiliation with this group, in case anyone asks.)


At this point I see this kind of system as a sufficient substrate for
AGI (modulo layers being abandoned, and a 'Petri dish' of feature
detectors listening to recent past states of their neighbors used
instead). I only converged on this view recently, so I'm just starting
to experiment with it.
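
To picture the idea, here is one schematic, cellular-automaton-style reading
of the 'Petri dish' in Python: a grid of feature detectors, each recomputing
its state from its neighbors' states at the previous tick. This is only an
illustration of the idea as stated, not Nesov's actual design, and every
name in it is hypothetical:

    import numpy as np

    rng = np.random.default_rng(1)
    grid = rng.random((16, 16))              # activation of each detector
    W = rng.normal(scale=0.5, size=(3, 3))   # a detector's local weights

    def step(grid, W):
        # Each detector listens to its 3x3 neighborhood's previous state
        # (toroidal wrap); there is no fixed feed-forward layering.
        h, w = grid.shape
        new = np.zeros_like(grid)
        for i in range(h):
            for j in range(w):
                nb = grid[np.ix_([(i - 1) % h, i, (i + 1) % h],
                                 [(j - 1) % w, j, (j + 1) % w])]
                new[i, j] = np.tanh((nb * W).sum())
        return new

    grid = step(grid, W)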

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-03 Thread Richard Loosemore

Kaj Sotala wrote:

On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Kaj Sotala wrote:
  Well, the basic gist was this: you say that AGIs can't be constructed
  with built-in goals, because a newborn AGI hasn't yet built up
  the concepts needed to represent the goal. Yet humans seem to tend to
  have built-in goals (using the term a bit loosely, as not all goals
  manifest in everyone), despite the fact that newborn humans
  haven't yet built up the concepts needed to represent those goals.
 
Oh, complete agreement here.  I am only saying that the idea of a
 built-in goal cannot be made to work in an AGI *if* one decides to
 build that AGI using a goal-stack motivation system, because the
 latter requires that any goals be expressed in terms of the system's
 knowledge.  If we step away from that simplistic type of GS system, and
 instead use some other type of motivation system, then I believe it is
 possible for the system to be motivated in a coherent way, even before
 it has the explicit concepts to talk about its motivations (it can
 pursue the goal "seek Momma's attention" long before it can explicitly
 represent the concept of [attention], for example).


Alright. But previously, you said that Omohundro's paper, which to me
seemed to be a general analysis of the behavior of *any* minds with
(more or less) explicit goals, looked like it was based on a
'goal-stack' motivation system. (I believe this has also been the
basis of your critique for e.g. some SIAI articles about
friendliness.) If built-in goals *can* be built into
motivational-system AGIs, then why do you seem to assume that AGIs
with built-in goals are goal-stack ones?


I seem to have caused lots of confusion earlier on in the discussion, so 
let me backtrack and try to summarize the structure of my argument.


1)  Conventional AI does not have a concept of a Motivational-Emotional 
System (MES), the way that I use that term, so when I criticised 
Omohundro's paper for referring only to a Goal Stack control system, I 
was really saying no more than that he was assuming that the AI was 
driven by the system that all conventional AIs are supposed to have. 
These two ways of controlling an AI are two radically different designs.


2)  Not only are MES and GS different classes of drive mechanism, they 
also make very different assumptions about the general architecture of 
the AI.  When I try to explain how an MES works, I often get tangled up 
in the problem of explaining the general architecture that lies behind 
it (which does, I admit, cause much confusion).  I sometimes use the 
terms "molecular" or "sub-symbolic" to describe that architecture.


2(a)  I should say something about the architecture difference.  In a 
sub-symbolic architecture you would find that the significant thought 
events are the result of clouds of sub-symbolic elements interacting 
with one another across a broad front.  This is to be contrasted with 
the way that symbols interact in a regular symbolic AI, where symbols 
are single entities that get plugged into well-defined mechanisms like 
deduction operators.  In a sub-symbolic system, operations are usually 
the result of several objects *constraining* one another in a relatively 
weak manner, not the result of a very small number of objects slotting 
into a precisely defined, rigid mechanism.  There is a flexibility 
inherent in the sub-symbolic architecture that is completely lacking in 
the conventional symbolic system.
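
One toy way to picture "several objects constraining one another in a
relatively weak manner" is a generic relaxation loop, sketched below in
Python. It is only a schematic illustration of soft mutual constraint, not
Loosemore's actual architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    # Soft, symmetric pairwise constraints among sub-symbolic elements.
    compat = rng.normal(scale=0.1, size=(n, n))
    compat = (compat + compat.T) / 2
    state = rng.random(n)                    # activation of each element

    for _ in range(100):
        # Each element drifts toward what its many neighbors weakly prefer,
        # rather than being slotted into one rigid deduction mechanism.
        state = 0.9 * state + 0.1 * np.tanh(compat @ state)

The result emerges from the whole cloud of weak interactions at once, which
is exactly the contrast with a precisely defined, rigid mechanism.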


3)  It is important to understand that in an AI that uses the MES drive 
system, there is *also* a goal stack, quite similar to what is found in 
a GS-driven AI, but this goal stack is entirely subservient to the MES, 
and it plays a role only in the day-to-day (and moment-to-moment) 
thinking of the system.


4)  I plead guilty to saying things like "... Goal-Stack motivation 
system ..." when what I should do is use the word "motivation" only in 
the context of an MES system.  A better wording would have been "... 
Goal-Stack *drive* system ...", or perhaps "... Goal-Stack *control* 
system ...".


5)  The main thrust of my attack on GS-driven AIs is that goal stacks 
were invented in the context of planning problems, and were never 
intended to be used as the global control system for an AI that is 
capable of long-range development.  So, you will find me saying things 
like "A GS drive system is appropriate for handling goals like 'Put the 
red pyramid on top of the green block', but it makes no sense in the 
context of goals like 'Be friendly to humans'."  Most AI people assume 
that a GS control system *must* be the way to go, but I would argue that 
they are in denial about the uselessness of a GS.  Also, most 
conventional AI people assume that a GS is valid simply because they see 
no alternative ... and this is because the architecture used by most 
conventional AI does not easily admit of any other type of drive system. 
 In a sense, they have to support the GS idea because 

Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread eldras
Care to state the exact problem you were having?

My thought is that scalability is entirely to do with speed availability.
 - Original Message -
 From: Bob Mottram [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] interesting Google Tech Talk about Neural Nets
 Date: Mon, 3 Mar 2008 09:48:08 +
 
 
 On 03/03/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   Don't you see the way to go on neural nets is hybrid with genetic 
  algorithms in mass amounts?
 
 
 I experimented with this combination in the early 1990s, and the
 results were not very impressive.  Such systems still suffered from
 extremely slow learning and poor scalability.
 







Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread eldras
That's a great idea, Vlad; there are other forms of statistical sampling
available.

The closer we get to running accelerated evolution toward human intelligence,
the better, I believe.

 - Original Message -
 From: Vladimir Nesov [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] interesting Google Tech Talk about Neural Nets
 Date: Mon, 3 Mar 2008 17:13:04 +0300
 
 
 On Mon, Mar 3, 2008 at 4:29 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
 
  There's a nice flash demonstration about digit generation/classification
   http://www.cs.toronto.edu/~hinton/adi/index.htm
 
  Has anyone on this list done experiments with these kinds of generative models?
   I can't find much research on this subject outside of the University of
   Toronto's CS group, so the information reaching me might be positively
   biased. I don't have any affiliation with this group, in case anyone asks.
 
 
 At this point I see this kind of system as a sufficient substrate for
  AGI (modulo layers being abandoned, and a 'Petri dish' of feature
 detectors listening to recent past states of their neighbors used
 instead). I only converged on this view recently, so I'm just starting
 to experiment with it.
 
 --
 Vladimir Nesov
 [EMAIL PROTECTED]
 







Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Richard Loosemore

[EMAIL PROTECTED] wrote:

Care to state the exact problem you were having?

My thought is that scalability is entirely to do with speed availability.


The problems with bolting together NN and GA are so numerous it is hard 
to know where to begin.  For one thing, you cannot represent structured 
information with NNs unless you go to some trouble to add extra 
architecture.  Most NNs can only cope with single concepts learned in 
isolation, so if you show a visual field containing 5,000 copies of the 
letter 'A', all that happens is that the 'A' neuron fires.


If you do find some way to get around this problem, your solution will 
end up being the tail that wags the dog:  the NN itself will fade into 
relative insignificance compared to your solution.



Richard Loosemore



- Original Message -
From: Bob Mottram [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: Re: [agi] interesting Google Tech Talk about Neural Nets
Date: Mon, 3 Mar 2008 09:48:08 +


On 03/03/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Don't you see the way to go on neural nets is hybrid with genetic 
algorithms in mass amounts?


I experimented with this combination in the early 1990s, and the
results were not very impressive.  Such systems still suffered from
extremely slow learning and poor scalability.









Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:

 The problems with bolting together NN and GA are so numerous it is hard
 to know where to begin.  For one thing, you cannot represent structured
 information with NNs unless you go to some trouble to add extra
 architecture.  Most NNs can only cope with single concepts learned in
 isolation, so if you show a visual field containing 5,000 copies of the
 letter 'A', all that happens is that the 'A' neuron fires.

 If you do find some way to get around this problem, your solution will
 end up being the tail that wags the dog:  the NN itself will fade into
 relative insignificance compared to your solution.


Well, you could achieve that (5,000 registrations of the letter 'A' with their
corresponding positions in the image) by using a sliding window over multiple
rescaled (and maybe otherwise transformed) versions of the input
image. This way, you get image patches for each window and scale (and maybe
other transformations), and each patch can be given a corresponding
position in a multidimensional space (e.g., an image patch with X and Y
position and scale S is a point in 3-dimensional space). For each of the
produced points (patches) in the space, run the neural net to produce a
lower-dimensional code and a corresponding energy (= reconstruction quality).
Now filter this space by letting the points have local battles for salience
using some heuristic (e.g. lower energy means higher salience) and filtering
out the low-salience points. This produces a filtered space with fewer points
than the previous one, each point containing a lower-dimensional code.

In the example of the letter 'A', the above method would recognize all 5,000
versions while remembering their individual input positions. This presumes
the neural net is properly trained on the letter 'A' and can properly
reconstruct them (using Hinton's method). This should produce 5,000
registrations of the letter 'A', while filtering out unimportant
information.
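
A rough Python sketch of the sliding-window extraction and the local battles
for salience. The trained network is abstracted into a hypothetical
encode(patch) function returning (energy, code); this is an illustration of
the description above, not a tested implementation:

    import numpy as np

    def extract_patches(image, window=20, scales=(1.0, 0.5, 0.25), stride=10):
        # Slide a fixed window over several rescaled copies of the image;
        # each yielded item is a point in the (x, y, scale) space.
        for s in scales:
            h, w = int(image.shape[0] * s), int(image.shape[1] * s)
            ys = np.minimum((np.arange(h) / s).astype(int), image.shape[0] - 1)
            xs = np.minimum((np.arange(w) / s).astype(int), image.shape[1] - 1)
            scaled = image[ys][:, xs]        # crude nearest-neighbor rescale
            for y in range(0, h - window + 1, stride):
                for x in range(0, w - window + 1, stride):
                    yield x, y, s, scaled[y:y + window, x:x + window]

    def filter_salient(points, radius=15.0):
        # Local battles for salience: a point survives unless a nearby point
        # at the same scale has lower energy. points: (x, y, s, energy, code).
        survivors = []
        for i, (x, y, s, e, c) in enumerate(points):
            beaten = any(e2 < e and s2 == s and
                         abs(x2 - x) <= radius and abs(y2 - y) <= radius
                         for j, (x2, y2, s2, e2, c2) in enumerate(points)
                         if j != i)
            if not beaten:
                survivors.append((x, y, s, e, c))
        return survivors

    # points = [(x, y, s) + encode(p)   # encode: the trained RBM (hypothetical)
    #           for x, y, s, p in extract_patches(img)]
    # kept = filter_salient(points)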

But you could take it a step further. For each image input, the above method
creates a filtered, 3-dimensional space of points containing low-dimensional
codes. This space can then again be harvested by taking patches, each patch
containing n points and each point containing an m-dimensional code, so each
patch is (m*n)-dimensional. A neural net can be trained to lower the
dimension of these patches from (m*n) to something lower-dimensional. This
process is quite similar to the one in the previous paragraph.

What could *possibly* go wrong? :)

Regards,
Durk Kingma





 Richard Loosemore


  - Original Message -
  From: Bob Mottram [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Subject: Re: [agi] interesting Google Tech Talk about Neural Nets
  Date: Mon, 3 Mar 2008 09:48:08 +
 
 
  On 03/03/2008, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
    Don't you see the way to go on neural nets is hybrid with genetic
   algorithms in mass amounts?
 
  I experimented with this combination in the early 1990s, and the
  results were not very impressive.  Such systems still suffered from
  extremely slow learning and poor scalability.
 
 
 
 





Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread YKY (Yan King Yin)
I'm increasingly convinced that the human brain is not a statistical
learner, but a logical learner.  There are many examples of humans
learning concepts/rules from one or two examples, rather than thousands of
examples.  So I think that at a high level, AGI should be logic-based.

But it would be interesting to integrate NN-based techniques into logic-based
AI, especially in vision.  (NN is also very weak at language processing.)

YKY



Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Richard Loosemore

Kingma, D.P. wrote:
On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


The problems with bolting together NN and GA are so numerous it is hard
to know where to begin.  For one thing, you cannot represent structured
information with NNs unless you go to some trouble to add extra
architecture.  Most NNs can only cope with single concepts learned in
isolation, so if you show a visual field containing 5,000 copies of the
letter 'A', all that happens is that the 'A' neuron fires.

If you do find some way to get around this problem, your solution will
end up being the tail that wags the dog:  the NN itself will fade into
relative insignificance compared to your solution.


Well, you could achieve that (5,000 registrations of the letter 'A' with 
their corresponding positions in the image) by using a sliding window 
over multiple rescaled (and maybe otherwise transformed) versions 
of the input image. This way, you get image patches for each window and 
scale (and maybe other transformations), and each patch can be given a 
corresponding position in a multidimensional space (e.g., an image patch 
with X and Y position and scale S is a point in 3-dimensional 
space). For each of the produced points (patches) in the space, run the 
neural net to produce a lower-dimensional code and corresponding energy 
(= reconstruction quality). Now filter this space by letting the points have 
local battles for salience using some heuristic (e.g. lower energy means 
higher salience) and filtering out the low-salience points. This produces a 
filtered space with fewer points than the previous one, each point 
containing a lower-dimensional code.


In the example of the letter 'A', the above method would recognize all 
5,000 versions while remembering their individual input positions. This 
presumes the neural net is properly trained on the letter 'A' and can 
properly reconstruct them (using Hinton's method). This should produce 
5,000 registrations of the letter 'A', while filtering out unimportant 
information.


But you could take it a step further. For each image input, the above 
method creates a filtered, 3-dimensional space of points containing 
low-dimensional codes. This space can then again be harvested by taking 
patches, each patch containing n points and each point containing an 
m-dimensional code, so each patch is (m*n)-dimensional. A neural net can 
be trained to lower the dimension of these patches from (m*n) to 
something lower-dimensional. This process is quite similar to the one in 
the previous paragraph.


What could *possibly* go wrong? :)

Regards,
Durk Kingma


Excellent!  Sounds like a perfect solution ;-).

Oh, wait!

What about... if the scene is structured in such a way that the 
5,000 copies of the letter 'A' were actually scattered around in such a 
way that most (but not all) of them were arranged to form a huge letter 'A'?


Would it then count 5,001 copies?

Oh, and one more thing I forgot to mention that is in the same scene 
(how could I forget this one?):  there are also a couple of women 
standing side by side, leaning against each other with their shoulders 
touching and keeping their bodies stiff and straight, forming the two 
sides of a letter 'A', and holding a model of a horizontally reclining 
woman between them at waist height, to form the crossbar of a letter 'A'.


Could we get the NN to recognize, in the context of the overall scene, 
that here were actually 5,002 copies of the letter 'A'..?


And if the scene had one single, rather small letter B over in the 
corner, would the NN find this funny?


You have 30 minutes to devise an algorithm, Durk... :-).



Richard Loosemore




Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Youri Lima
I stumbled upon this project recently. It addresses the connectivity in a
neural network. Pretty interesting stuff. Could be it's a known thing, but I
just wanted to share it.

http://oege.ie.hva.nl/~bergd/

I'm sorta new to this AGI development, but as far as I understand, couldn't
this speed up the performance/effectiveness of AGI software? (keeping in mind
the current topic)


-- 
youri lima,

Developer



RE: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread David Clark
How intelligent would any human be if it couldn't be taught by other humans?

Could a human ever learn to speak by itself?  The few times this has
happened in real life, the person was permanently disabled and not capable
of becoming a normal human being.

If humans can't become human without the help of other humans, why should
this be a criterion for AGI?

David Clark

PS I am not suggesting that explicitly programming 100% of an AGI is either
doable or desirable, but some degree of detailed teaching must be a
requirement for all on this list who dream of creating an AGI, no?

 -Original Message-
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 Sent: March-02-08 5:36 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Thought experiment on informationally limited
 systems
 
 Jeez, Will, the point of Artificial General Intelligence is that it can
 start adapting to an unfamiliar situation and domain BY ITSELF.  And
 your
 FIRST and only response to the problem you set was to say: "I'll get
 someone
 to tell it what to do."
 
 IOW you simply avoided the problem and thought only of cheating. What a
 solution, or merest idea for a solution, must do is tell me how that
 intelligence will start adapting by itself  - will generalize from its
 existing skills to cross over domains.
 
 Then, as my answer indicated, it may well have to seek some
 instructions and
 advice - especially and almost certainly  if it wants to acquire a
 whole new
 major skill, as we do, by taking courses etc.
 
 But a general intelligence should be able to adapt to some unfamiliar
 situations entirely by itself - like perhaps your submersible
 situation. No
 guarantee that it will succeed in any given situation, (as there isn't
 with
 us), but you should be able to demonstrate its power to adapt
 sometimes.
 
 In a sense, you should be appalled with yourself that you didn't try to
 tackle the problem - to produce a cross-over idea. But since
 literally no
 one else in the field of AGI has the slightest cross-over idea - i.e.
 is
 actually tackling the problem of AGI, - and the whole culture is one of
 avoiding the problem, it's to be expected. (You disagree - show me one,
 just
 one, cross-over idea anywhere. Everyone will give you a v.
 detailed, impressive timetable for how long it'll take them to produce
 such
 an idea, they just will never produce one. Frankly, they're too
 scared).
 
 
 Mike Tintner [EMAIL PROTECTED] wrote:
 
   You must first define its existing skills, then define the new
 challenge
   with some degree of precision - then explain the principles by
 which it
  will
   extend its skills. It's those principles of
 extension/generalization
  that
   are the be-all and end-all, (and NOT btw, as you suggest, any
 helpful
  info
  that the robot will receive - that, sir, is cheating - it has to
 work
  these
   things out for itself - although perhaps it could *ask* for info).
 
 
  Why is that cheating? Would you never give instructions to a child
 about what to do? Taking instructions is something that all
  intelligences need to be able to do, but it should be attempted to be
  minimised. I'm not saying it should take instructions unquestioningly
  either, ideally it should figure out whether the instructions you
 give
  are any use for it.
 
   Will Pearson
 
 
 
 



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote:

  I think Ben's text mining approach has one big flaw:  it can
only reason about existing knowledge, but cannot generate new ideas using
words / concepts

 There is a substantial amount of literature that claims that *humans*
can't generate new ideas de novo either -- and that they can only build up
new ideas from existing pieces.

That's fine, but the way our language builds up new ideas seems to be very
complex, and it makes natural language a bad knowledge representation for
AGI.

For example:
An apple pie is a pie with apple filling.
A door knob is a knob attached to a door.
A street prostitute is a prostitute working in the streets.

So the meaning of "A B" depends on the *interactions* of A and B, and it
violates the principle of compositionality -- where the meaning of "A B" would
be somehow combined from A and B in a *fixed* way.
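
To illustrate with a toy sketch in Python (the glosses are hypothetical
stand-ins, not entries from any lexical database):

    # The same surface operation "A B" hides different underlying relations:
    compounds = {
        ("apple", "pie"): "pie that has apple as filling",
        ("door", "knob"): "knob that is attached to a door",
        ("street", "prostitute"): "prostitute who works in the street",
    }
    # A fixed rule meaning(A, B) = f(meaning(A), meaning(B)) cannot select
    # among these relations without world knowledge about A and B.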

An even more complex example:
spread the jam with a knife
draw a circle with a knife
cut the cake with a knife
rape the girl with a knife
stop the train with a knife (with unclear meaning)

So the simple concept "do X with a knife" can be interpreted in myriad ways
-- it generates new ideas in complex ways.

YKY



Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Kingma, D.P.
Too easy ;)

One of the points in patch-space corresponds to X=center, Y=center,
Scale=huge, so this patch is a rescaled version (say 20x20) of the whole
image (say 1000x1000). In this 20x20 patch, the letter 'A' emerges naturally
and can be reconstructed by the NN, and therefore be recognized. It will
probably be salient, since it's far away in patch-space from the small A's
in the Scale dimension. Far-away points in patch-space don't battle for
salience.
Your second example is solved analogously.

Okay, time for dinner now. Vision solved :)

Regards,
Durk

On Mon, Mar 3, 2008 at 7:59 PM, Richard Loosemore [EMAIL PROTECTED]
wrote:

 Kingma, D.P. wrote:
  On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 
  The problems with bolting together NN and GA are so numerous it is
 hard
  to know where to begin.  For one thing, you cannot represent
 structured
  information with NNs unless you go to some trouble to add extra
  architecture.  Most NNs can only cope with single concepts learned
 in
  isolation, so if you show a visual field containing 5,000 copies of
 the
  letter 'A', all that happens is that the 'A' neuron fires.
 
  If you do find some way to get around this problem, your solution
 will
  end up being the tail that wags the dog:  the NN itself will fade
 into
  relative insignificance compared to your solution.
 
 
  Well, you could achieve that (5,000 registrations of the letter 'A' with
  their corresponding positions in the image) by using a sliding window
  over multiple rescaled (and maybe otherwise transformed) versions
  of the input image. This way, you get image patches for each window and
  scale (and maybe other transformations), and each patch can be given a
  corresponding position in a multidimensional space (e.g., an image patch
  with X and Y position and scale S is a point in 3-dimensional
  space). For each of the produced points (patches) in the space, run the
  neural net to produce a lower-dimensional code and corresponding energy
  (= reconstruction quality). Now filter this space by letting the points have
  local battles for salience using some heuristic (e.g. lower energy means
  higher salience) and filtering out the low-salience points. This produces a
  filtered space with fewer points than the previous one, each point
  containing a lower-dimensional code.
 
  In the example of the letter 'A', the above method would recognize all
  5,000 versions while remembering their individual input positions. This
  presumes the neural net is properly trained on the letter 'A' and can
  properly reconstruct them (using Hinton's method). This should produce
  5,000 registrations of the letter 'A', while filtering out unimportant
  information.
 
  But you could take it a step further. For each image input, the above
  method creates a filtered, 3-dimensional space of points containing
  low-dimensional codes. This space can then again be harvested by taking
  patches, each patch containing n points and each point containing an
  m-dimensional code, so each patch is (m*n)-dimensional. A neural net can
  be trained to lower the dimension of these patches from (m*n) to
  something lower-dimensional. This process is quite similar to the one in
  the previous paragraph.

  What could *possibly* go wrong? :)
 
  Regards,
  Durk Kingma

 Excellent!  Sounds like a perfect solution ;-).

 Oh, wait!

 What about... if the scene is structured in such a way that the
 5,000 copies of the letter 'A' were actually scattered around in such a
 way that most (but not all) of them were arranged to form a huge letter
 'A'?

 Would it then count 5,001 copies?

 Oh, and one more thing I forgot to mention that is in the same scene
 (how could I forget this one?):  there are also a couple of women
 standing side by side, leaning against each other with their shoulders
 touching and keeping their bodies stiff and straight, forming the two
 sides of a letter 'A', and holding a model of a horizontally reclining
 woman between them at waist height, to form the crossbar of a letter 'A'.

 Could we get the NN to recognize, in the context of the overall scene,
 that here were actually 5,002 copies of the letter 'A'..?

 And if the scene had one single, rather small letter B over in the
 corner, would the NN find this funny?

 You have 30 minutes to devise an algorithm, Durk... :-).



 Richard Loosemore





Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Vladimir Nesov
On Mon, Mar 3, 2008 at 9:50 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 I'm increasingly convinced that the human brain is not a statistical
 learner, but a logical learner.  There are many examples of humans learning
 concepts/rules from one or two examples, rather than thousands of examples.
 So I think that at a high level, AGI should be logic-based.

  But it would be interesting to integrate NN-based techniques into logic-based
 AI, especially in vision.  (NN is also very weak at language processing.)


One doesn't preclude the other: if AGI can learn finite state machines
statistically, it can then use them to carry out more 'logical' kinds
of reasoning. Fast learning is also possible: it just takes a more
similar state to evoke episodic memories than to evoke strong semantic
memories.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner

Yes, an AGI will have to be able to do narrow AI.

What you are doing here - and everyone is doing over and over and over - is 
saying: "Yes, I know there's a hard part to AGI, but can I please 
concentrate on the easy parts - the narrow AI parts - first?"


If I give you a problem, I don't want to know whether you can take dictation 
and spell, I just want to know whether you can solve the problem - and not 
make excuses, or create distractions.


It's simple - do you have any ideas about the problem of AGI - ideas for 
generalizing skills (see below) -  cross-over ideas - or not?


David:


How intelligent would any human be if it couldn't be taught by other 
humans?


Could a human ever learn to speak by itself?  The few times this has
happened in real life, the person was permanently disabled and not capable
of becoming a normal human being.

If humans can't become human without the help of other humans, why should
this is a criteria for AGI?

David Clark

PS I am not suggesting that explicitly programming 100% of an AGI is 
either

doable or desirable but some degree of detailed teaching must be a
requirement for all on this list who dream of creating an AGI, no?


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: March-02-08 5:36 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Thought experiment on informationally limited
systems

Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF.  And
your
FIRST and only response to the problem you set was to say: "I'll get
someone
to tell it what to do."

IOW you simply avoided the problem and thought only of cheating. What a
solution, or merest idea for a solution, must do is tell me how that
intelligence will start adapting by itself  - will generalize from its
existing skills to cross over domains.

Then, as my answer indicated, it may well have to seek some
instructions and
advice - especially and almost certainly  if it wants to acquire a
whole new
major skill, as we do, by taking courses etc.

But a general intelligence should be able to adapt to some unfamiliar
situations entirely by itself - like perhaps your submersible
situation. No
guarantee that it will succeed in any given situation, (as there isn't
with
us), but you should be able to demonstrate its power to adapt
sometimes.

In a sense, you should be appalled with yourself that you didn't try to
tackle the problem - to produce a cross-over idea. But since
literally no
one else in the field of AGI has the slightest cross-over idea - i.e.
is
actually tackling the problem of AGI, - and the whole culture is one of
avoiding the problem, it's to be expected. (You disagree - show me one,
just
one, cross-over idea anywhere. Everyone will give you a v.
detailed, impressive timetable for how long it'll take them to produce
such
an idea, they just will never produce one. Frankly, they're too
scared).


Mike Tintner [EMAIL PROTECTED] wrote:

  You must first define its existing skills, then define the new
challenge
  with some degree of precision - then explain the principles by
which it
 will
  extend its skills. It's those principles of
extension/generalization
 that
  are the be-all and end-all, (and NOT btw, as you suggest, any
helpful
 info
  that the robot will receive - that, sir, is cheating - it has to
work
 these
  things out for itself - although perhaps it could *ask* for info).


 Why is that cheating? Would you never give instructions to a child
 about what to do? Taking instructions is something that all
 intelligences need to be able to do, but it should be attempted to be
 minimised. I'm not saying it should take instructions unquestioningly
 either, ideally it should figure out whether the instructions you
give
 are any use for it.

  Will Pearson

















Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Mike Tintner
YKY: the way our language builds up new ideas seems to be very complex, and it 
makes natural language a bad knowledge representation for AGI.
An even more complex example:
spread the jam with a knife
draw a circle with a knife
cut the cake with a knife
rape the girl with a knife
stop the train with a knife (with unclear meaning)
So the simple concept do X with a knife can be interpreted in myriad ways -- 
it generates new ideas in complex ways.

YKY,

Good example, but how about: language is open-ended, period, and capable of 
infinite rather than myriad interpretations - and that open-endedness is the 
whole point of it?

Simple example much like yours: "handle". You can attach words for objects ad 
infinitum to form different sentences - 

handle an egg / spear / pen / snake / stream of water, etc. - 

and the hand shape referred to will keep changing, basically because your hand 
is capable of an infinity of shapes and ways of handling an infinity of 
different objects.

And the next sentence after that first one may require that the reader know 
exactly which shape the hand took.

But if you avoid natural language and its open-endedness, then you are surely 
avoiding AGI.  It's that capacity for open-ended concepts that is central to a 
true AGI (like a human or animal). It enables us to keep coming up with new 
ways to deal with new kinds of problems and situations - new ways to handle 
any problem. (And it also enables us to keep recognizing new kinds of objects 
that might classify as a "knife" - as well as new ways of handling them - which 
could be useful, for example, when in danger.)

  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Monday, March 03, 2008 7:14 PM
  Subject: Re: [agi] would anyone want to use a commonsense KB?




  On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote: 
   
I think Ben's text mining approach has one big flaw:  it can only reason 
about existing knowledge, but cannot generate new ideas using words / concepts

   There is a substantial amount of literature that claims that *humans* can't 
generate new ideas de novo either -- and that they can only build up new 
ideas from existing pieces.


  That's fine, but the way our language builds up new ideas seems to be very 
complex, and it makes natural language a bad knowledge representation for AGI.

  For example:
  An apple pie is a pie with apple filling.
  A door knob is a knob attached to a door.
  A street prostitute is a prostitute working in the streets.

  So the meaning of "A B" depends on the *interactions* of A and B, and it 
violates the principle of compositionality -- where the meaning of "A B" would be 
somehow combined from A and B in a *fixed* way.

  An even more complex example:
  spread the jam with a knife
  draw a circle with a knife
  cut the cake with a knife
  rape the girl with a knife
  stop the train with a knife (with unclear meaning)

  So the simple concept "do X with a knife" can be interpreted in myriad ways 
-- it generates new ideas in complex ways.

  YKY







Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Vladimir Nesov
On Mon, Mar 3, 2008 at 11:30 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 Can you explain a bit more? Your terms are too vague.  I think statistical
 learning and logical learning are fundamentally quite different.  I'd be
 interested in some hybrid approach, if it exists.


Bayesian logic becomes something like Aristotelian logic when
probability tends to 1. If statistical learning observes a perfect
regularity, it forms a strong link, and classification becomes logical
inference. Classification is performed in time, so the act of
classification is an event that takes place after the events that were
classified, and logical inference becomes a deterministic algorithm.
These algorithms build up and help in learning other regularities and
other algorithms.
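
As a worked illustration of the first sentence: for a single learned link of
strength p = P(B|A), observing A gives

    P(B) = P(B|A) P(A) + P(B|not-A) P(not-A) = p    [when P(A) = 1]

so classification stays soft for p < 1, but as the observed regularity
becomes perfect and p tends to 1, the update collapses into modus ponens:
from A and "A implies B", conclude B. That is the Aristotelian limit.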

Maybe you mean something specific by "logical learning" that can't be
supported by this kind of algorithm imitation?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Good example, but how about: language is open-ended, period, and capable of
infinite rather than myriad interpretations - and that open-endedness is
the whole point of it?

 Simple example much like yours: "handle". You can attach words for
objects ad infinitum to form different sentences -

 handle an egg/ spear/ pen/ snake, stream of water etc.  -

 the hand shape referred to will keep changing - basically because your
hand is capable of an infinity of shapes and ways of handling an infinity of
different objects. .

 And the next sentence after that first one, may require that the
reader know exactly which shape the hand took.

 But if you avoid natural language and its open-endedness, then you are
surely avoiding AGI.  It's that capacity for open-ended concepts that is
central to a true AGI (like a human or animal). It enables us to keep coming
up with new ways to deal with new kinds of problems and situations - new
ways to handle any problem. (And it also enables us to keep recognizing
new kinds of objects that might classify as a "knife" - as well as new ways
of handling them - which could be useful, for example, when in danger.)

Sure, AGI needs to handle NL in an open-ended way.  But the question is
whether the internal knowledge representation of the AGI needs to allow
ambiguities, or whether we should use an ambiguity-free representation.  It
seems that the latter choice is better.  Otherwise, the knowledge stored in
episodic memory would be open to interpretation and may lead to errors in
recall, and similar problems.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Ben Goertzel
 Sure, AGI needs to handle NL in an open-ended way.  But the question is
 whether the internal knowledge representation of the AGI needs to allow
 ambiguities, or should we use an ambiguity-free representation.  It seems
 that the latter choice is better.  Otherwise, the knowledge stored in
 episodic memory would be open to interpretation and may lead to errors in
 recall, and similar problems.

Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.

Ambiguity allows compactness, and can be very valuable in this regard.

Guidance on this issue is provided by the Lojban language.  Lojban
allows extremely precise expression, but also allows ambiguity as
desired.  What one finds when speaking Lojban is that sometimes one
chooses ambiguity because it lets one make one's utterances shorter.  I
think the same thing holds in terms of an AGI's memory.  An AGI with
finite memory resources must sometimes choose to represent relatively
unimportant information ambiguously rather than precisely so as to
conserve memory.

For instance, storing the information

"A is associated with B"

is highly ambiguous, but takes little memory.  Storing logical
information regarding the precise relationship between A and B may
take one or more orders of magnitude more information.
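
A schematic contrast in Python; the structures are made up for illustration
and are not any particular AGI's actual storage format:

    # Compact but ambiguous: the whole relationship is one weighted pair.
    assoc = {("A", "B"): 0.8}

    # Precise but memory-hungry: relation type, argument roles, quantifier,
    # temporal scope and context all spelled out explicitly.
    precise = {
        "predicate": "causes",
        "args": {"agent": "A", "patient": "B"},
        "quantifier": "most_instances",
        "temporal": "within_minutes",
        "context": "observed_lab_conditions",
        "strength": 0.8,
    }

The first fits in a few bytes; the second costs an order of magnitude more,
which is the trade-off described above.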

-- Ben



Re: [agi] interesting Google Tech Talk about Neural Nets

2008-03-03 Thread Richard Loosemore

Kingma, D.P. wrote:

Too easy ;)

One of the points in patch-space corresponds to X=center, Y=center, 
Scale=huge, so this patch is a rescaled version (say 20x20) of the whole 
image (say 1000x1000). In this 20x20 patch, the letter 'A' emerges 
naturally and can be reconstructed by the NN, and therefore be 
recognized. It will probably be salient, since it's far away in 
patch-space from the small A's in the Scale dimension. Far-away points 
in patch-space don't battle for salience.

Your second example is solved analogously.

Okay, time for dinner now. Vision solved :)

Regards,
Durk


Yeah, modulo a few implementation details, that sounds about right.

We can probably do language the same way tomorrow morning.

;-)


Richard Loosemore



On Mon, Mar 3, 2008 at 7:59 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


Kingma, D.P. wrote:
  On Mon, Mar 3, 2008 at 6:39 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 
  The problems with bolting together NN and GA are so numerous
it is hard
  to know where to begin.  For one thing, you cannot represent
structured
  information with NNs unless you go to some trouble to add extra
  architecture.  Most NNs can only cope with single concepts
learned in
  isolation, so if you show a visual field containing 5,000
copies of the
  letter 'A', all that happens is that the 'A' neuron fires.
 
  If you do find some way to get around this problem, your
solution will
  end up being the tail that wags the dog:  the NN itself will
fade into
  relative insignificance compared to your solution.
 
 
  Well, you could achieve that (5,000 registrations of the letter 'A' with
  their corresponding positions in the image) by using a sliding window
  over multiple rescaled (and maybe otherwise transformed) versions
  of the input image. This way, you get image patches for each window and
  scale (and maybe other transformations), and each patch can be given a
  corresponding position in a multidimensional space (e.g., an image patch
  with X and Y position and scale S is a point in 3-dimensional
  space). For each of the produced points (patches) in the space, run the
  neural net to produce a lower-dimensional code and corresponding energy
  (= reconstruction quality). Now filter this space by letting the points have
  local battles for salience using some heuristic (e.g. lower energy means
  higher salience) and filtering out the low-salience points. This produces a
  filtered space with fewer points than the previous one, each point
  containing a lower-dimensional code.
 
  In the example of the letter 'A', the above method would recognize all
  5,000 versions while remembering their individual input positions. This
  presumes the neural net is properly trained on the letter 'A' and can
  properly reconstruct them (using Hinton's method). This should produce
  5,000 registrations of the letter 'A', while filtering out unimportant
  information.
 
  But you could take it a step further. For each image input, the above
  method creates a filtered, 3-dimensional space of points containing
  low-dimensional codes. This space can then again be harvested by taking
  patches, each patch containing n points and each point containing an
  m-dimensional code, so each patch is (m*n)-dimensional. A neural net can
  be trained to lower the dimension of these patches from (m*n) to
  something lower-dimensional. This process is quite similar to the one in
  the previous paragraph.

  What could *possibly* go wrong? :)
 
  Regards,
  Durk Kingma

Excellent!  Sounds like a perfect solution ;-).

Oh, wait!

What about... if the scene is structured in such a way that the
5,000 copies of the letter 'A' were actually scattered around in such a
way that most (but not all) of them were arranged to form a huge
letter 'A'?

Would it then count 5,001 copies?

Oh, and one more thing I forgot to mention that is in the same scene
(how could I forget this one?):  there are also a couple of women
standing side by side, leaning against each other with their shoulders
touching and keeping their bodies stiff and straight, forming the two
sides of a letter 'A', and holding a model of a horizontally reclining
woman between them at waist height, to form the crossbar of a letter
'A'.

Could we get the NN to recognize, in the context of the overall scene,
that here were actually 5,002 copies of the letter 'A'..?

And if the scene had one single, rather small letter B over in the
corner, would 

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner

Will: "Is generalising a skill logically the first thing that you need to
make an AGI? Nope, the means and sufficient architecture to acquire
skills and competencies are more useful early on in AGI
development."

Ah, you see, that's where I absolutely disagree, and a good part of why I'm 
hammering on the way I am. I don't think many (anyone?) will agree with 
David, but many if not everyone will agree with you.


Yes, the problem of generalising is the very first thing you tackle, and 
should shape everything you do - at least once you have moved beyond idle 
thought to serious engagement.


If you're trying to develop a new electric battery, you look for that new 
chemical first (assuming that's what you reckon you'll need) - you don't 
start looking at the casing or other aspects of the battery. Anything 
peripheral you do first may be rendered totally irrelevant - and a total 
waste of time - later on, when you do discover that chemical.


And such, I'm sure, is the case with AGI. That central problem of 
generalising demands a total new mentality - a sea-change of approach.


(You saw an example in my exchange with YKY. I think - in fact, I'm just 
about totally certain - that generalising demands a system of open-ended 
concepts like ours. Because he isn't directly concerned with the 
generalising problem, he wants a closed-ended, unambiguous language - which 
is in fact only suitable for narrow AI and, I would argue, a waste of time).


P.S. It's a bit sad - you started this thread with a generalising problem, 
now you're backtracking on it. 


