Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson

On 13/06/06, sanjay padmane [EMAIL PROTECTED] wrote:


> On the suggestion of creating a wiki, we already have it here
> http://en.wikipedia.org/wiki/Artificial_general_intelligence


I wouldn't want to pollute the wiki proper with our unverified claims.


> , as you know, and its exposure is much wider. I feel, wiki cannot be a good
> format for discussions. No one would like their views edited out by a random
> user.


It is not meant so much to replace our discussions on list but to
display the various questions people have asked and the various
answers to them in a persistent and easy-to-use fashion.

Will



Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 05:05:56AM +0530, sanjay padmane wrote:
> Even though only a few have reacted to my (somewhat threatening ;-) )
> proposal to discontinue this list, it seems that people are comfortable with

I would call it a troll.

> it, anyhow...

You seem to be new to the Internet. I suggest you take it slow, and do
your research instead of posting reflexively on the merits of technology
you're not familiar with.

> Someone can experiment with automated posting of all forum messages to the

Hey, it was your suggestion, you do it. Just download the list manager,
and hack it. It's easy, right? And don't forget automatic categorization,
plaintext and multipart support, and a search engine, and anti-spam measures,
and authentication, and to make my browser spawn my favourite editor,
instead of pasting into a form, and server-side filtering, and distributed
archives, and push, while you're at it. And don't forget to build a community
around your project, in order to support it, and to issue security fixes for
the hundreds of bugs you'll find in a new project of such complexity.
Gosh, email is sure retarded, having all these features a forum doesn't
have, which you'll find are absolutely trivial to implement. Get back to us
when you're done, will you?

> list, as and when they are created.
>
> Speaking of high quality, you are the best person to do that :-). As I'm
> only starting in AGI etc., I've only questions and speculations to post. I've
> not done that because I'm afraid of sinking agi-forums to the level of
> agi-n00b-forums. But I'll take that risk someday; I can delete the post
> (unlike in a list), if it sounds too low quality.

That's not a bug, that's a feature. And you can't edit my local inbox, and
it won't go away when the machine with the list archives dies (trust me,
eventually they all do).
 
> On the suggestion of creating a wiki, we already have it here
> http://en.wikipedia.org/wiki/Artificial_general_intelligence , as you know,
> and its exposure is much wider. I feel, wiki cannot be a good format for
> discussions. No one would like their views edited out by a random user. It
> serves the purpose best, when the knowledge is already established.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread William Pearson

On 13/06/06, Yan King Yin [EMAIL PROTECTED] wrote:


> Will,
>
> I've been thinking of hosting a wiki for some time, but not sure if we have
> reached critical mass here.


Possibly not. I may just collate my own list of questions and answers
until the time does come.


> When we get down to the details, people's views may diverge even further.  I
> can think of some potential points of disagreement:
> 0. what's the overall AGI architecture?
> 1. neurally based or logic based?


I think this question would be better as "analog or digital?"  While the
system I am interested in uses logical operations (AND, OR, etc.) for
running a program, I do not expect it to be constrained to be logical
in the everyday sense.


> 2. what's the view on Friendliness?
> 3. initially, self-improving or static?


I think the distinction would need to be between static, weakly
self-improving (like the human brain) and strongly self-improving.


> 4. open source or not?
> 5. commercial or not?


I would also add

6. Does specialist hardware need to be built to make AGI practical?
7. Do you deal with combinatorial explosions in your AGI, and if not, why not?
8. Similarly for the No Free Lunch theorems.


> Maybe we can set up a simple poll to see who agrees with whom?


It might be hard to keep the poll simple.

Will



Re: [agi] Mentifex AI Breakthrough on Wed.7.JUN.2006

2006-06-13 Thread Ricardo Barreira

Hi,

Well, let me waste a little bit of time:


> > Distinguish homonyms from context?
>
> I believe so, because the current AI uses ASCII characters,
> not phonemes.


Hilarious.



> > Represent the concept of a homonym?
>
> At this stage, I am not sure.


Which shows how sure you are about the fact that it's really intelligent.


> > Can it handle deixis?
>
> Since I have a degree in ancient Greek and briefly
> attended U Cal Berkeley graduate school in classics,
> I know that "deixis" from "deiknumi" means
> "pointing" or "showing," and so I must admit
> that the AI is not far enough along to show things.
> It is an implementation of the simplest thinking that
> I can muster -- a proof of concept program.



Since I have hands and know how to use a search engine, I can point
you to these pages:

http://dictionary.reference.com/browse/deixis
http://en.wikipedia.org/wiki/Deixis


> It would most likely be extremely difficult if not impossible
> to port Mind.Forth into circa 1982 Sinclair Spectrum BASIC.



Why, because of memory issues?

Sarcastic regards,
Ricardo Barreira



Re: Worthwhile time sinks was Re: [agi] list vs. forum

2006-06-13 Thread sanjay padmane
On 6/13/06, William Pearson [EMAIL PROTECTED] wrote:
> It is not meant so much to replace our discussions on list but to
> display the various questions people have asked and the various
> answers to them in a persistent and easy-to-use fashion.
>
> Will
Well, I'm not sure if this requires a group effort. If someone is really
interested, one can start by compiling the material scattered in the list.

Sanjay



[agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Ben Goertzel

Hi all,

Well, this list seems more active than it has been for a while, but
unfortunately this increased activity does not seem to be correlated
with more profound intellectual content ;-)

So, I'm going to make a brazen attempt to change the subject, and
start a conversation about an issue that perplexes me.

The issue is: how might NNs effectively represent abstract knowledge?

[Note that I do not pick a discussion topic related to my own
Novamente AGI project.  That is because at the present time there is
nothing about Novamente that really perplexes me... we are in a phase
of (too slowly, due to profound understaffing) implementing our design
and experimenting with it, and at the present time don't see any need
to re-explore the basic concepts underlying the system.  Perhaps
experimentation will reveal such a need in time...]

There are a number of reasons that I chose to use a largely
logic-based knowledge representation in the Novamente system:

1)
This will make it easier to import knowledge from DBs or from the
output of rule-based NLP systems, when the system is at an
intermediate stage of development.  [When the Novamente system is
mature, it will be able to read DB's and NLP itself without help; and
now when it is very young and incomplete it would be counterproductive
to feed it DB or NLP knowledge, as it lacks the experientially learned
conceptual habit-patterns required to interpret this knowledge for
itself.]

2)
While I considered using a more centrally neural net based knowledge
representation, I got stuck on the problem of how to represent
abstract knowledge using neural nets.

So, Novamente's KR has the following aspects:
-- explicit logic-type representation of knowledge
-- knowledge-items and their components are tagged with numbers
indicating importance levels, which act slightly like time-averages
of Hopfield net activations
-- implicit knowledge can be represented as patterns of
activity/importance across the network
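
In caricature, and with invented names rather than anything from our actual
codebase, an item in such a KR might look like this toy sketch:

  # Toy sketch only: invented names, not Novamente's real data structures.
  # An item carries explicit logic-type content plus an importance number
  # that drifts toward recent usage, a bit like a time-averaged Hopfield
  # activation.
  class Atom:
      def __init__(self, content):
          self.content = content        # e.g. ("implies", "A", "B")
          self.importance = 0.0         # slow-moving activation average

      def stimulate(self, amount, rate=0.1):
          # exponential moving average of recent activity
          self.importance += rate * (amount - self.importance)

  kb = [Atom(("implies", "A", "B")), Atom(("implies", "B", "C"))]
  kb[0].stimulate(1.0)                  # using an item raises its importance

Implicit knowledge then lives in which atoms happen to be important
together at a given time.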

This is all very well for Novamente -- which is not intended to be
brainlike -- BUT, I am still perplexed by the question of how the
brain (or a brain-emulating formal neural net architecture) represents
abstract knowledge.

So far as I know none of the brain-emulating would-be-AGI
architectures I have seen address this issue very well.  Hawkins'
architecture, for instance, doesn't really tell you how to represent
and manipulate an abstract function with variables...

Say, a category like "people who have the property that there is
exactly one big scary thing they are afraid of."  How does the brain
represent this?  How would a useful formal neural net model represent
this?
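
For concreteness, one standard first-order rendering of that category (my
gloss, with invented predicate names) is:

  C(x)  :=  exists y [ Big(y) & Scary(y) & AfraidOf(x, y)
            & forall z ( (Big(z) & Scary(z) & AfraidOf(x, z)) -> z = y ) ]

A logic-based system can store and manipulate that formula directly; the
puzzle is what a neural net does instead.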

I am aware that this is in principle representable using neural net
mathematics, as McCulloch and Pitts showed long ago that simple
binary NNs are Turing-complete.  But this is not the issue.  The issue
is how such knowledge can/should be represented in NNs in a way that
supports flexible learning and reasoning...
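
(The in-principle part is easy enough to make concrete: a single threshold
unit already computes the basic Boolean gates, from which arbitrary finite
circuits follow. A toy sketch:

  # Toy McCulloch-Pitts threshold unit: fire iff the weighted sum of
  # binary inputs reaches the threshold.
  def mp_unit(inputs, weights, threshold):
      return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

  AND = lambda a, b: mp_unit([a, b], [1, 1], 2)
  OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)
  NOT = lambda a:    mp_unit([a],    [-1],   0)

  assert AND(1, 1) == 1 and AND(1, 0) == 0
  assert OR(0, 1) == 1 and NOT(1) == 0

But gates are not a theory of flexible representation.)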

It seems to me that simple probabilistic logical inference can be seen
as parallel to Hebbian learning, for instance

A implies B
B implies C
|-
A implies C

is a lot like Hebbian learning which given the connections

A -- B -- C

may cause the reinforcement of

A -- C
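
As a toy illustration of the parallel (my own sketch, nothing
Novamente-specific): treat links as weighted, and let chaining A -> B
and B -> C strengthen a direct A -> C link, the way co-activation would
under a Hebbian rule.

  # Toy sketch: transitive "deduction" over weighted links. Chaining
  # A->B and B->C creates/strengthens a direct A->C link.
  weights = {("A", "B"): 0.8, ("B", "C"): 0.9}

  def reinforce(w, a, c, rate=0.5):
      # crude stand-in for probabilistic deduction: evidence for a->c
      # is the product of weights along a two-step chain
      via = [w[(a, m)] * w[(m, c)] for (x, m) in list(w) if x == a and (m, c) in w]
      if via:
          old = w.get((a, c), 0.0)
          w[(a, c)] = old + rate * (max(via) - old)

  reinforce(weights, "A", "C")
  print(weights[("A", "C")])            # 0.36: a direct A->C link appears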

But I know of no similarly natural mapping from the logic of more
complex (e.g. quantified) predicates into neural-net structures and
operations.

Anyone have any special knowledge, or any interesting ideas, on this topic?

-- Ben G



[agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eric Baum

Ben The issue is: how might NNs effectively represent abstract
Ben knowledge?

...

Ben So far as I know none of the brain-emulating would-be-AGI
Ben architectures I have seen address this issue very well.  Hawkins'
Ben architecture, for instance, doesn't really tell you how to
Ben represent and manipulate an abstract function with variables...

Ben Say, a category like people who have the property that there is
Ben exactly one big scary thing they are afraid of.  How does the
Ben brain represent this?  How would a useful formal neural net model
Ben represent this?

Ben,

I'd point you to Les Valiant's book "Circuits of the Mind"
for a serious attempt to answer precisely such questions.

It's been years since I read it, and my recollection is
hazy, so I won't attempt much of a summary. The book
posits units of several (hundred?) neurons called "neuroids"
that act as finite state machines with various properties,
and then gives algorithms by which the whole system could be programmed
to learn, for example, concepts of exactly the type you ask.
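
From hazy memory, the flavor is something like this toy sketch (my
reconstruction, not Valiant's actual algorithms):

  # Rough reconstruction of the flavor of a Valiant-style "neuroid":
  # a threshold element whose state and threshold are reprogrammed by
  # local update rules. Details are mine, not faithful to the book.
  class Neuroid:
      def __init__(self, threshold):
          self.state = "available"      # available / allocated to a concept
          self.threshold = threshold

      def step(self, active_inputs):
          fired = active_inputs >= self.threshold
          if fired and self.state == "available":
              # local rule: commit this unit to the pattern that fired it
              self.state = "allocated"
              self.threshold = active_inputs  # tune to that pattern's size
          return fired

  n = Neuroid(threshold=3)
  n.step(4)                             # a novel pattern recruits the unit

The point, as I recall it, is that concept allocation and learning are done
by such local state changes rather than by global gradient descent.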





Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eric Baum

Hi,
I just found that Les has several new papers on the subject and related
subjects at http://people.deas.harvard.edu/~valiant/



[my earlier message on Valiant's "Circuits of the Mind" snipped]

Ben Thanks, I will check it out...

Ben Ben




Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread rpwl
Ben Goertzel wrote:
> The issue is: how might NNs effectively represent abstract knowledge?
>
> [...]
>
> Say, a category like "people who have the property that there is
> exactly one big scary thing they are afraid of."  How does the brain
> represent this?  How would a useful formal neural net model represent
> this?
>
> [...]
>
> Anyone have any special knowledge, or any interesting ideas, on this topic?

Yes!  Very much so.  (And thanks for asking a question that raises the
level of discussion so dramatically).

Back in the early connectionist days (1986/7) - when McClelland and
Rumelhart had just released their two PDP books - there was a lot of
variation in how the neural net idea was interpreted.  In particular, you
will find in the PDP books some discussion of the concept of a "neurally
inspired" architecture, which was very much on everyone's mind:  in other
words, we don't have to take the simulated neurons too literally, because
what was really important was having systems that worked in the general way
that NNs seemed to work:  lots of parallelism, redundancy, distributed
information in the connections, active computing units, relaxation, etc.

More importantly, there were some (Don Norman's name comes to mind: I think
he wrote the concluding chapter of the PDP books) who specifically cautioned
against ignoring some important issues that seemed problematic for simple
neural nets.  For example, how does a neural net represent multiple copies
of things, and how does it distinguish instances of things from generic
concepts?  These are particularly thorny for distributed representations, of
course:  you can't get two concepts into one of those at the same time, so
how can we represent anything structured?
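
(A toy illustration of the superposition problem, with made-up four-bit
distributed patterns:

  # Two distributed patterns OR'd into one vector can no longer be told
  # apart from a different pair that yields the same blend.
  cup    = [1, 1, 0, 0]
  saucer = [0, 0, 1, 1]
  spoon  = [1, 0, 1, 0]
  fork   = [0, 1, 0, 1]

  blend = lambda p, q: [max(a, b) for a, b in zip(p, q)]
  print(blend(cup, saucer) == blend(spoon, fork))   # True: both are [1,1,1,1]

The blended vector cannot say which two of the four concepts are actually
present, which is exactly the structure problem.)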

What happened after that, in my opinion, was that *because* some types of
neural net (backprop, 

Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, Ben Goertzel [EMAIL PROTECTED] wrote:
> The issue is: how might NNs effectively represent abstract knowledge?

With difficulty!

Okay, to put it in a less facetious-sounding way: It is worth bearing
in mind that biological neural nets are _very bad_ at syntactic symbol
manipulation; consider the mindboggling sophistication and computing
power in a dolphin's brain, for example, and note that it is completely
incapable of doing any such thing. Even humans aren't particularly good
at it: our present slow, simple, crude computers can do things like
symbolic differentiation millions of times faster and more accurately
than we can.

The point being, we tend to try to answer "how" questions by looking
for simple, efficient methods - but biology suggests (albeit doesn't
prove) that the reason we can't see a simple, efficient way for NNs to
handle syntactic knowledge is that there isn't one; that researchers
trying to use NNs or the like for AGI may have to bite the bullet and
look for complex, expensive solutions to this problem.

(My own reaction to this is the same as yours, incidentally: to go
straight for symbolic mechanisms as fundamental components in the
belief that this plays better to the strengths of digital hardware.
That doesn't mean NNs can't succeed, but it does suggest that they'll
have to hit this problem head-on and resign themselves to throwing a
lot of resources at it, in somewhat the same way that we on the
symbolic side of the fence will have to resign ourselves to throwing a
lot of resources at problems like visual perception.)



Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 04:15:35PM +0100, Russell Wallace wrote:

> Okay, to put it in a less facetious-sounding way: It is worth bearing in
> mind that biological neural nets are _very bad_ at syntactic symbol
> manipulation; consider the mindboggling sophistication and computing power
> in a dolphin's brain, for example, and note that it is completely incapable

Representing and manipulating formal systems is a very recent component
in the fitness function, and hence not well-optimized.

> of doing any such thing. Even humans aren't particularly good at it: our
> present slow, simple, crude computers can do things like symbolic
> differentiation millions of times faster and more accurately than we can.

And how little it helps them to navigate reality.
 
> The point being, we tend to try to answer "how" questions by looking for
> simple, efficient methods - but biology suggests (albeit doesn't prove) that
> the reason we can't see a simple, efficient way for NNs to handle syntactic
> knowledge is that there isn't one; that researchers trying to use NNs or the
> like for AGI may have to bite the bullet and look for complex, expensive
> solutions to this problem.

The world is complicated. There are no simple solutions that work over
all domains in the real world.
 
> (My own reaction to this is the same as yours, incidentally: to go straight
> for symbolic mechanisms as fundamental components in the belief that this
> plays better to the strengths of digital hardware. That doesn't mean NNs

What are the strengths of digital hardware, in your opinion?

> can't succeed, but it does suggest that they'll have to hit this problem
> head-on and resign themselves to throwing a lot of resources at it, in
> somewhat the same way that we on the symbolic side of the fence will have to
> resign ourselves to throwing a lot of resources at problems like visual
> perception.)

Human resources, or computational resources? If computational resources,
which architecture?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, Eugen Leitl [EMAIL PROTECTED] wrote:
> Representing and manipulating formal systems is a very recent component
> in the fitness function, and hence not well-optimized.
True; but I will claim that no matter how much you optimize a
biological neural net, it will always have characteristics such as
being slow at serial computation, and relatively imprecise.
> The world is complicated. There are no simple solutions that work over
> all domains in the real world.

Yep.
> What are the strengths of digital hardware, in your opinion?

Fast serial calculation. Very high precision. Extreme flexibility in
choice of operations and instant rewiring of data structures.
> Human resources, or computational resources?
Both. 
> If computational resources, which architecture?
I don't understand the question, please clarify? 




Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Eugen Leitl
On Tue, Jun 13, 2006 at 04:38:49PM +0100, Russell Wallace wrote:

> > Representing and manipulating formal systems is a very recent component
> > in the fitness function, and hence not well-optimized.
>
> True; but I will claim that no matter how much you optimize a biological
> neural net, it will always have characteristics such as being slow at serial
> computation, and relatively imprecise.

No disagreement. But you don't have to use live cells to build
a computational network. As to "imprecise": with scaling down geometry
and ramping up switching speed, "digital" is no longer well-defined.
With many small switches you're also getting reliability problems,
so noise begins to creep in at the hardware layer.
 
> Fast serial calculation.

In comparison to biological neurons, yes.

> Very high precision. Extreme flexibility in choice

I don't see why an automaton network can't use many bits
to represent things. There's also some question of what
you need very high precision for. Cryptography is a candidate;
another one is physical modelling, doing it like a
mathematician. I think there is a very distinct
bias, almost an agnosia, if you want to do it not like
a mathematician.

> of operations and instant rewiring of data structures.

You can't actually rewire the circuit, so you have
to switch state which represents the circuit. It's easier
if you embrace the model of dynamically traced-out
circuitry in a computational substrate. Very few
things are instant in current memory-bottlenecked
digital computers. If you want to widen that bottleneck,
you first get a massively parallel box, and eventually
a cellular/mosaic architecture of simple computational
elements.
 
> > If computational resources,
> > which architecture?
>
> I don't understand the question, please clarify?

If you want to build a robot capable of playing tennis
in a heavy hail, how would you do it?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





[agi] Architecture

2006-06-13 Thread Russell Wallace
On 6/13/06, Eugen Leitl [EMAIL PROTECTED] wrote:
> You can't actually rewire the circuit, so you have to switch state
> which represents the circuit. [...] If you want to widen that
> bottleneck, you first get a massively parallel box, and eventually
> a cellular/mosaic architecture of simple computational elements.
Sure, ultimate computronium might plausibly look like that. Right now
human brains and Opteron chips are the best hardware we have, and it
seems to me that smart software optimized for the latter shouldn't be a
very close copy of smart software optimized for the former.
> If you want to build a robot capable of playing tennis
> in a heavy hail, how would you do it?

Well if that was _all_ I wanted it to do I'd just write the code in C++
:) But for the general problem of AI, I think the best approach is
roughly along the same lines as Novamente.




Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread rpwl

This thread has completely missed Ben's original point, surely.

It has nothing to do with whether neurons are faster/better/whatever than
digital circuits; it has to do with the way that artificial neurons are
used in current NN systems to represent information.

The fact that neurons are slower than digital systems is a trivial
difference between them.  That doesn't make them inherently less capable of
doing syntax, for example.

Richard Loosemore.


Eugen Leitl wrote:
> [full quote of the preceding exchange with Russell Wallace on hardware
> strengths and architectures snipped]







Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, [EMAIL PROTECTED] wrote:
> This thread has completely missed Ben's original point, surely.
It diverged, certainly, which is why I changed the subject heading for my
latest reply.
> It has nothing to do with whether neurons are faster/better/whatever than
> digital circuits; it has to do with the way that artificial neurons are
> used in current NN systems to represent information.
>
> The fact that neurons are slower than digital systems is a trivial
> difference between them.  That doesn't make them inherently less capable
> of doing syntax, for example.


Sure; the bit about wetware being slower than silicon was in reply to a
different question. My reason for thinking artificial neural nets will
have an uphill job doing syntax is a different one; and I think it is
indeed about how they represent information.



Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Mike Ross

Eric,

Thanks for this.  I just read his paper, "A quantitative theory of
neural computation," and it's quite good.  It is particularly nice that
he uses human cognition as a basis for defining (and defending) his
models, but also generalizes the models so that they could apply to
non-human-like neural concept systems...

Mike

On 6/13/06, Eric Baum [EMAIL PROTECTED] wrote:


> Hi,
> I just found that Les has several new papers on the subject and related
> subjects at http://people.deas.harvard.edu/~valiant/
>
> [rest of quoted message snipped]





Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread rpwl
Russell Wallace wrote:
> On 6/13/06, [EMAIL PROTECTED] wrote:
 
 
> > This thread has completely missed Ben's original point, surely.
>
> It diverged, certainly, which is why I changed the subject heading for my
> latest reply.
>
> > It has nothing to do with whether neurons are faster/better/whatever than
> > digital circuits; it has to do with the way that artificial neurons are
> > used in current NN systems to represent information.
> >
> > The fact that neurons are slower than digital systems is a trivial
> > difference between them.  That doesn't make them inherently less capable
> > of doing syntax, for example.

What I said in my previous reply was that something very like neural nets
(with all the beneficial features for which people got interested in NNs in
the first place) *can* do syntax, and all forms of abstract representation.

I do not think it is fair to say that they can't, only that the
particularly restrictive interpretation of NN that prevails in the
literature can't.

Richard Loosemore





Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/13/06, [EMAIL PROTECTED] wrote:
> What I said in my previous reply was that something very like neural nets
> (with all the beneficial features for which people got interested in NNs in
> the first place) *can* do syntax, and all forms of abstract representation.

Clearly they can - we're an existence proof of that - my claim being only that it won't be easy.

Has anyone yet made an artificial NN or anything like one handle syntax?




Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Yan King Yin

> What I said in my previous reply was that something very like neural nets
> (with all the beneficial features for which people got interested in NNs in
> the first place) *can* do syntax, and all forms of abstract representation.
>
> I do not think it is fair to say that they can't, only that the
> particularly restrictive interpretation of NN that prevails in the
> literature can't.
Hi Richard

I have to agree that NNs can represent all forms of knowledge, since our
brains are NNs.  But figuring out how to do that in artificial systems must
be pretty difficult.  I should also mention Ron Sun's work; he has long
tried to reconcile neural and symbolic processing.  I studied NNs/ANNs for
some time, but I recently switched camp to the more symbolic side.


One question is whether there is some definite advantage to using NNs
instead of, say, predicate logic.  Can you give an example of a thought, or
a line of inference, etc., that the NN-type representation is particularly
suited for, and that has an advantage over the predicate logic
representation?  John McCarthy proposed that predicate logic can represent
'almost' everything.


If NN-type representation is not necessarily required, then we should
naturally use symbolic/logic representations, since they are so much more
convenient to program and to run on von Neumann hardware.


YKY



[agi] Advantages & Disadvantages... How the Brain Represents Abstract Knowledge

2006-06-13 Thread DGoe

What are the advantages and disadvantages of predicate logic and NNs?

Dan Goe


From : Yan King Yin [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] How the Brain Represents Abstract Knowledge
Date : Wed, 14 Jun 2006 04:28:36 +0800
> [Yan King Yin's message quoted in full; snipped]



Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread [EMAIL PROTECTED]
Russell Wallace wrote:
> On 6/13/06, [EMAIL PROTECTED] wrote:
 
> > What I said in my previous reply was that something very like neural
> > nets (with all the beneficial features for which people got interested
> > in NNs in the first place) *can* do syntax, and all forms of abstract
> > representation.
>
> Clearly they can - we're an existence proof of that - my claim being
> only that it won't be easy.
>
> Has anyone yet made an artificial NN or anything like one handle syntax?

Uhhh:  did you read my first post on this thread?



Richard Loosemore







Re: [agi] How the Brain Represents Abstract Knowledge

2006-06-13 Thread Russell Wallace
On 6/14/06, [EMAIL PROTECTED] wrote:
> Russell Wallace wrote:
> > Has anyone yet made an artificial NN or anything like one handle syntax?
>
> Uhhh:  did you read my first post on this thread?

Yes; you appear to be saying that as far as you know nobody has yet
made NNs or similar do syntax, but that's because they went off into
the dead end of back propagation, and you believe it should be possible
to create something like NNs that do syntax and other such things, but
you haven't yet implemented any such. Do I understand you correctly?
