AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Sorry, but this is not a proof that a natural language understanding system is
necessarily able to solve the equation x*3 = y for arbitrary y.

1) You have not shown that a language understanding system must
necessarily(!) have had statistical experience with the equation x*3 = y.

2) You give only a few examples. To prove the claim, you have to prove
it for every(!) y.

3) You apply rules such as 5 * 7 = 35 -> 35 / 7 = 5, but you have not shown
3a) that a language understanding system necessarily(!) has these rules
3b) that a language understanding system necessarily(!) can apply such rules

In my opinion a natural language understanding system must have a lot of
linguistic knowledge.
Furthermore, a system which can learn natural languages must be able to gain
linguistic knowledge.

But neither system necessarily(!) has the ability to *work* with this
knowledge in the way that is essential for AGI.

And for this reason, natural language understanding is not AGI-complete at
all.

-Matthias



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 21 October 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)


--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string count
------ -----
1x3=12 760
2x3=12 2030
3x3=12 9190
4x3=12 16200
5x3=12 1540
6x3=12 1010
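
A minimal Python sketch of this lookup-by-statistics idea, with the hit
counts above hard-coded as a stand-in for real search queries (no actual
Google API is called, and the decision rule is simply "pick the most
frequent candidate"):

# Toy illustration of "solving" x*3 = 12 by counting how often each candidate
# equation shows up in a corpus. The counts are the Google hits quoted above.
hits = {1: 760, 2: 2030, 3: 9190, 4: 16200, 5: 1540, 6: 1010}

def solve_by_statistics(counts):
    """Return the candidate x whose string 'x*3=12' was seen most often."""
    return max(counts, key=counts.get)

print(solve_by_statistics(hits))  # -> 4, the statistically favored answer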

More generally, people learn algebra and higher mathematics by induction, by
generalizing from lots of examples.

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c / b = a

It is the same way we learn grammatical rules, for example converting active
to passive voice and applying it to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.
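
A minimal Python sketch of this kind of template rule being applied to novel
sentences; the rule, the tiny participle table, and the whitespace "parsing"
are hand-written stand-ins for what a learner would have to induce from
examples:

# Toy application of the generalized rewrite rule
#   SUBJ VERB OBJ -> OBJ was VERB by SUBJ
PARTICIPLE = {"kissed": "kissed", "ate": "eaten", "saw": "seen"}  # toy table

def to_passive(sentence):
    subj, verb, obj = sentence.rstrip(".").split()   # naive SUBJ VERB OBJ parse
    return f"{obj.capitalize()} was {PARTICIPLE[verb]} by {subj}."

print(to_passive("Bob kissed Alice"))  # Alice was kissed by Bob.
print(to_passive("I ate dinner"))      # Dinner was eaten by I. (agreement not modeled)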

In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...
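
And a comparably minimal sketch of the syllogism pattern itself, with a
hand-coded (not learned) rule base, purely to make the inference step
concrete:

# Toy forward chaining over facts of the form "all X <predicate>" and
# "<individual> is an X", mirroring the Kermit and shark examples above.
rules = {"frog": "is green", "fish": "lives in water"}      # all X ...
instances = {"Kermit": "frog", "shark": "fish"}             # individual is an X

for individual, category in instances.items():
    print(f"{individual} is a {category}, therefore {individual} {rules[category]}.")
# Kermit is a frog, therefore Kermit is green.
# shark is a fish, therefore shark lives in water.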

I understand the objection to learning math and logic in a language model
instead of coding the rules directly. It is horribly inefficient. I estimate
that a neural language model with 10^9 connections would need up to 10^18
operations to learn simple arithmetic like 2+2=4 well enough to get it right
90% of the time. But I don't know of a better way to learn how to convert
natural language word problems to a formal language suitable for entering
into a calculator at the level of an average human adult.
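
A back-of-envelope reading of that estimate, under the simplifying assumption
(mine, not stated above) that one training presentation costs roughly one
operation per connection:

# 10^18 operations spread over a 10^9-connection model implies on the order
# of 10^9 example presentations, given the assumption noted above.
connections = 10**9
total_ops = 10**18
print(f"{total_ops / connections:.0e} presentations")   # ~1e+09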

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Bob Mottram
2008/10/21 Matt Mahoney [EMAIL PROTECTED]:
 More generally, people learn algebra and higher mathematics by induction, by 
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c / b = a



Not only this though.  If I remember correctly from school, the way
this was taught was at least partly geometric, so that you could see
the numbers moving to different places in a particular pattern, rather
like making a chess move or some other stereotypical physical action.




AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
There is another point which indicates that the ability to understand
language or to learn language does not imply *general* intelligence.

You can often observe in school that linguistically talented students are
poor at mathematics, and vice versa.

- Matthias







Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Bob Mottram
2008/10/21 Dr. Matthias Heger [EMAIL PROTECTED]:
 There is another point which indicates that the ability to understand
 language or to learn language does not imply *general* intelligence.

 You can often observe in school that linguistic talents are poor in
 mathematics and vice versa.


The usual explanation given for this is that understanding mathematics
may require thinking about spatial organisation/imagery/mental
rotation, which according to popular mythology are located in the
opposite hemisphere from speech understanding and production.  How
true or not this is I don't know.  Have any fMRI studies been done
specifically on learning of maths concepts?




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
If MW would be scientific then he would not have asked Ben to prove that 
MWs hypothesis is wrong.


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking 
for a counter-example unscientific?



The person who has to prove something is the person who creates the 
hypothesis.


Ah.  Like the theory of evolution is conclusively proved?  The scientific 
method is about predictive power not proof.  Try reading the reference that 
I gave Ben.  (And if you've got something to prove, maybe the scientific 
method isn't so good for you.  :-)



And MW has given not a tiny argument for his hypothesis that a natural 
language understanding system can easily be a scientist.


First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is easy.


Second, my hypothesis is more correctly stated that the pre-requisites for a 
natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


Third, while I haven't given a tiny argument, I have given a reasonably 
short logical chain which I'll attempt to rephrase yet again.


Science is all about modeling the world and predicting future data.
The scientific method simply boils down to making a theory (of how to change 
or enhance your world model) and seeing if it is supported (not proved!) or 
disproved by future data.
Ben's and my disagreement initially came down to whether a scientist was an 
Einstein (his view) or merely capable of competently reviewing data to see 
if it supports, disproves, or isn't relevant to the predictive power of a 
theory (my view).
Later, he argued that most humans aren't even competent to review data and 
can't be made competent.
I agreed with his assessment that many scientists don't competently review 
data (inappropriate over-reliance on the heuristic p < 0.05 without 
understanding what it truly means) but disagreed as to whether the average 
human could be *taught*.
Ben's argument was that the scientific method couldn't be codified well 
enough to be taught.  My argument was that the method was codified 
sufficiently but that the application of the method was clearly context 
dependent and could be unboundedly complex.


But this is actually a distraction from some more important arguments . . . 
.
The $1,000,000 question is "If a human can't be taught something, is that 
human a general intelligence?"
The $5,000,000 question is "If a human can't competently follow a recipe in 
a cookbook, do they have natural language understanding?"


Fundamentally, this either comes down to a disagreement about what a general 
intelligence is and/or what understanding and meaning are.
Currently, I'm using the definition that a general intelligence is one that 
can achieve competence in any domain in a reasonable length of time.

To achieve competence in a domain, you have to understand that domain.
My definition of understanding is that you have a mental model of that 
domain that has predictive power in that domain and which you can update as 
you learn about that domain.

(You could argue with this definition if you like)
Or, in other words, you have to be a competent scientist in that domain -- 
or else, you don't truly understand that domain.


So, for simplicity, why don't we just say
   scientist = understanding

Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which 
equals scientist ;-).





- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Ben Goertzel
I wouldn't argue that any software system capable of learning human
language, would necessarily be able to learn mathematics

However, I strongly suspect that any software system **with a vaguely
human-mind-like architecture** that is capable of learning human language,
would also be able to learn basic mathematics

ben

On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 Sorry, but this was no proof that a natural language understanding system
 is
 necessarily able to solve the equation x*3 = y for arbitrary y.

 1) You have not shown that a language understanding system must
 necessarily(!) have made statistical experiences on the equation x*3 =y.

 2) you give only a few examples. For a proof of the claim, you have to
 prove
 it for every(!) y.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
 that
 3a) that a language understanding system necessarily(!) has this rules
 3b) that a language understanding system necessarily(!) can apply such
 rules

 In my opinion a natural language understanding system must have a lot of
 linguistic knowledge.
 Furthermore a system which can learn natural languages must be able to gain
 linguistic knowledge.

 But both systems do not have necessarily(!) the ability to *work* with this
 knowledge as it is essential for AGI.

 And for this reason natural language understanding is not AGI complete at
 all.

 -Matthias



 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 21 October 2008 05:05
 To: agi@v2.listbox.com
 Subject: [agi] Language learning (was Re: Defining AGI)


 --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  For instance, I doubt that anyone can prove that
  any system which understands natural language is
  necessarily able to solve
  the simple equation x *3 = y for a given y.

 It can be solved with statistics. Take y = 12 and count Google hits:

 string count
 -- -
 1x3=12 760
 2x3=12 2030
 3x3=12 9190
 4x3=12 16200
 5x3=12 1540
 6x3=12 1010

 More generally, people learn algebra and higher mathematics by induction,
 by
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c / b = a

 It is the same way we learn grammatical rules, for example converting
 active
 to passive voice and applying it to novel sentences:

 Bob kissed Alice -> Alice was kissed by Bob.
 I ate dinner -> Dinner was eaten by me.
 etc...
 SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

 In a similar manner, we can learn to solve problems using logical
 deduction:

 All frogs are green. Kermit is a frog. Therefore Kermit is green.
 All fish live in water. A shark is a fish. Therefore sharks live in water.
 etc...

 I understand the objection to learning math and logic in a language model
 instead of coding the rules directly. It is horribly inefficient. I
 estimate
 that a neural language model with 10^9 connections would need up to 10^18
 operations to learn simple arithmetic like 2+2=4 well enough to get it
 right
 90% of the time. But I don't know of a better way to learn how to convert
 natural language word problems to a formal language suitable for entering
 into a calculator at the level of an average human adult.

 -- Matt Mahoney, [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
 My apologies if I've misconstrued you. Regardless of any fault, the basic
 point was/is important. Even if a considerable percentage of science's
 conclusions are v. hard, there is no definitive scientific method for
 reaching them .

I think I understand.




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Oh, and I *have* to laugh . . . .


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


In the cited wikipedia entry, the phrase "Scientific method is not a recipe: 
it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method:


A linearized, pragmatic scheme of the four points above is sometimes offered 
as a guideline for proceeding:[25]

 1. Define the question
 2. Gather information and resources (observe)
 3. Form hypothesis
 4. Perform experiment and collect data
 5. Analyze data
 6. Interpret data and draw conclusions that serve as a starting point for 
new hypothesis
 7. Publish results
 8. Retest (frequently done by other scientists)
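
Purely as an illustrative toy (and not as a claim that real science reduces
to this), the linearized scheme above can be rendered as a loop; the
coin-bias "question", the sample size, and the crude 0.05 decision rule are
all invented for this sketch:

import random

def biased_coin():                       # the "world" under study (hidden bias 0.7)
    return random.random() < 0.7

hypothesis = 0.5                         # step 3: initial hypothesis (a fair coin)
for round_no in range(5):                # steps 4-8: experiment, analyze, retest
    data = [biased_coin() for _ in range(1000)]    # step 4: collect data
    observed = sum(data) / len(data)               # step 5: analyze
    if abs(observed - hypothesis) > 0.05:          # step 6: interpret
        hypothesis = observed                      # revise toward the data
    print(f"round {round_no}: observed={observed:.3f}, hypothesis={hypothesis:.3f}")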



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!








Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser [EMAIL PROTECTED] wrote:

 Oh, and I *have* to laugh . . . .

  Hence the wiki entry on scientific method:
 Scientific method is not a recipe: it requires intelligence,
 imagination,

 and creativity

 http://en.wikipedia.org/wiki/Scientific_method
 This is basic stuff.


 In the cited wikipedia entry, the phrase Scientific method is not a
 recipe: it requires intelligence, imagination, and creativity is
 immediately followed by just such a recipe for the scientific method

 A linearized, pragmatic scheme of the four points above is sometimes
 offered as a guideline for proceeding:[25]


Yes, but each of those steps is very vague, and cannot be boiled down to a
series of precise instructions sufficient for a stupid person to
consistently carry them out effectively...

Also, those steps are heuristic and do not cover all cases.  For instance
step 4 requires experimentation, yet there are sciences such as cosmology
and paleontology that are not focused on experimentation.

As you asked for references I will give you two:

Paul Feyerabend, Against Method (a polemic I don't fully agree with, but his
points need to be understood by those who will talk about scientific method)

Imre Lakatos, The Methodology of Scientific Research Programmes (which I do
largely agree with ... he's a very subtle thinker...)



ben g


  1.. Define the question
  2.. Gather information and resources (observe)
  3.. Form hypothesis
  4.. Perform experiment and collect data
  5.. Analyze data
  6.. Interpret data and draw conclusions that serve as a starting point for
 new hypothesis
  7.. Publish results
  8.. Retest (frequently done by other scientists)



 - Original Message - From: Dr. Matthias Heger [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, October 20, 2008 5:00 PM
 Subject: AW: AW: AW: [agi] Re: Defining AGI


 If MW would be scientific then he would not have asked Ben to prove that
 MWs
 hypothesis is wrong.
 The person who has to prove something is the person who creates the
 hypothesis.
 And MW has given not a tiny argument for his hypothesis that a natural
 language understanding system can easily be a scientist.

 -Matthias

 -Original Message-
 From: Eric Burton [mailto:[EMAIL PROTECTED]
 Sent: Monday, 20 October 2008 22:48
 To: agi@v2.listbox.com
 Subject: Re: AW: AW: [agi] Re: Defining AGI


  You and MW are clearly as philosophically ignorant, as I am in AI.


 But MW and I have not agreed on anything.

  Hence the wiki entry on scientific method:
 Scientific method is not a recipe: it requires intelligence,
 imagination,

 and creativity

 http://en.wikipedia.org/wiki/Scientific_method
 This is basic stuff.


 And this is fundamentally what I was trying to say.

 I don't think of myself as philosophically ignorant. I believe
 you've reversed the intention of my post. It's probably my fault for
 choosing my words poorly. I could have conveyed the nuances of the
 argument better as I understood them. Next time!






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





RE: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ed Porter
Ben,

 

In my email starting this thread on 10/15/08 7:41pm I pointed out that a
more sophisticated version of the algorithm would have to take connection
weights into account in determining cross talk, as you have suggested below.
But I asked for the answer to a more simple version of the problem, since
that might prove difficult enough, and since I was just trying to get some
rough feeling for whether or not node assemblies might offer substantial
gains in possible representational capability, before delving more deeply
into the subject.

 

Ed Porter

 

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 20, 2008 10:52 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


But, suppose you have two assemblies A and B, which have nA and nB neurons
respectively, and which overlap in O neurons...

It seems that the system's capability to distinguish A from B is going to
depend on the specific **weight matrix** of the synapses inside the
assemblies A and B, not just on the numbers nA, nB and O.

And this weight matrix depends on the statistical properties of the memories
being remembered.

So, these counting arguments you're trying to do are only going to give you
a very crude indication, anyway, right? 

ben




On Mon, Oct 20, 2008 at 5:09 PM, Ed Porter [EMAIL PROTECTED] wrote:

Ben, 

 

I am interested in exactly the case where individual nodes partake in
multiple attractors.

I use the notation A(N,O,S), which is similar to the A(n,d,w) formula of
constant weight codes, except, as Vlad says, you would plug my variables into
the constant weight formula by using A(N, 2*(S-O+1), S).

I have asked my question assuming each node assembly has the same size S
to make the math easier.  Each such assembly is an autoassociative
attractor.  I want to keep the overlap O low to reduce the cross talk
between attractors.  So the question is how many node assemblies A you can
make having size S and overlap no more than O, given N nodes.
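
A brute-force Python sketch of the quantity being asked about, for toy values
of N, S, and O (a greedy construction like this only gives a lower bound; the
exact maximum is the combinatorially hard part):

from itertools import combinations

# Greedy lower bound on how many node assemblies of size S can be drawn from
# N nodes with pairwise overlap at most O. Two size-S sets with overlap O
# differ in 2*(S-O) positions, which is what links this to the
# constant-weight-code quantity A(n, d, w) discussed here.
def greedy_assembly_count(N=12, S=4, O=1):
    chosen = []
    for candidate in combinations(range(N), S):
        cand = set(candidate)
        if all(len(cand & assembly) <= O for assembly in chosen):
            chosen.append(cand)
    return len(chosen)

print(greedy_assembly_count())   # a lower bound only, for these toy parameters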

 

Actually the cross talk between auto-associative patterns becomes an even
bigger problem if there are many attractors being activated at once (such as
hundreds of them), but if the signaling driving the populations of
different attractors could have different timing or timing patterns, and if
the auto-associativity were sensitive to such timing, this problem could be
greatly reduced.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 

Sent: Monday, October 20, 2008 4:16 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


Wait, now I'm confused.

I think I misunderstood your question.

Bounded-weight codes correspond to the case where the assemblies themselves
can have n or fewer neurons, rather than exactly n.

Constant-weight codes correspond to assemblies with exactly n neurons.

A complication btw is that an assembly can hold multiple memories in
multiple attractors.  For instance using Storkey's palimpsest model a
completely connected assembly with n neurons can hold about .25n attractors,
where each attractor has around .5n neurons switched on.

In a constant-weight code, I believe the numbers estimated tell you the
number of sets where the Hamming distance is greater than or equal to d.
The idea in coding is that the code strings denoting distinct messages
should not be closer to each other than d.

But I'm not sure I'm following your notation exactly.

ben g

On Mon, Oct 20, 2008 at 3:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 

I also don't understand whether A(n,d,w) is the number of sets where the
hamming distance is exactly d (as it would seem from the text of
http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
number of set where the hamming distance is d or less.  If the former case
is true then the lower bounds given in the tables would actually be lower
than the actual lower bounds for the question I asked, which would
correspond to all cases where the hamming distance is d or less.



The case where the Hamming distance is d or less corresponds to a
bounded-weight code rather than a constant-weight code.

I already forwarded you a link to a paper on bounded-weight codes, which are
also combinatorially intractable and have been studied only via
computational analysis.

-- Ben G

 




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson


Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ben Goertzel
makes sense, yep...

i guess my intuition is that there are obviously a huge number of
assemblies, so that the number of assemblies is not the hard part, the hard
part lies in the weights...

On Tue, Oct 21, 2008 at 11:18 AM, Ed Porter [EMAIL PROTECTED] wrote:

  Ben,



 In my email starting this thread on 10/15/08 7:41pm I pointed out that a
 more sophisticated version of the algorithm would have to take connection
 weights into account in determining cross talk, as you have suggested
 below.  But I asked for the answer to a more simple version of the problem,
 since that might prove difficult enough, and since I was just trying to get
 some rough feeling for whether or not node assemblies might offer
 substantial gains in possible representational capability, before delving
 more deeply into the subject.



 Ed Porter





 -Original Message-
 *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]
 *Sent:* Monday, October 20, 2008 10:52 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] Who is smart enough to answer this question?




 But, suppose you have two assemblies A and B, which have nA and nB neurons
 respectively, and which overlap in O neurons...

 It seems that the system's capability to distinguish A from B is going to
 depend on the specific **weight matrix** of the synapses inside the
 assemblies A and B, not just on the numbers nA, nB and O.

 And this weight matrix depends on the statistical properties of the
 memories being remembered.

 So, these counting arguments you're trying to do are only going to give you
 a very crude indication, anyway, right?

 ben


  On Mon, Oct 20, 2008 at 5:09 PM, Ed Porter [EMAIL PROTECTED] wrote:

 Ben,



 I am interested in exactly the case where individual nodes partake in
 multiple attractors,



 I use the notation A(N,O,S), which is similar to the A(n,d,w) formula of
 constant weight codes, except, as Vlad says, you would plug my variables into
 the constant weight formula by using A(N, 2*(S-O+1), S).



 I have asked my question assuming each node assembly has the same size S
 for to make the math easier.  Each such assembly is an autoassociative
 attractor.  I want to keep the overlap O low to reduce the cross talk
 between attractors.  So the question is how many node assemblies A, can you
 make having a size S, and no more than an overlap O, given N nodes.



 Actually the cross talk between auto associative patterns becomes an even
 bigger problem if there are many attractors being activated at once (such as
 hundreds of them), but if the signaling driving different the population of
 different attractors could have different timing or timing patterns, and if
 the auto associatively was sensitive to such timing, this problem could be
 greatly reduced.



 Ed Porter



 -Original Message-
 *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]

 *Sent:* Monday, October 20, 2008 4:16 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] Who is smart enough to answer this question?




 Wait, now I'm confused.

 I think I misunderstood your question.

 Bounded-weight codes correspond to the case where the assemblies themselves
 can have n or fewer neurons, rather than exactly n.

 Constant-weight codes correspond to assemblies with exactly n neurons.

 A complication btw is that an assembly can hold multiple memories in
 multiple attractors.  For instance using Storkey's palimpsest model a
 completely connected assembly with n neurons can hold about .25n attractors,
 where each attractor has around .5n neurons switched on.

 In a constant-weight code, I believe the numbers estimated tell you the
 number of sets where the Hamming distance is greater than or equal to d.
 The idea in coding is that the code strings denoting distinct messages
 should not be closer to each other than d.

 But I'm not sure I'm following your notation exactly.

 ben g

 On Mon, Oct 20, 2008 at 3:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



  I also don't understand whether A(n,d,w) is the number of sets where the
 hamming distance is exactly d (as it would seem from the text of
 http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
 number of set where the hamming distance is d or less.  If the former case
 is true then the lower bounds given in the tables would actually be lower
 than the actual lower bounds for the question I asked, which would
 correspond to all cases where the hamming distance is d or less.



 The case where the Hamming distance is d or less corresponds to a
 bounded-weight code rather than a constant-weight code.

 I already forwarded you a link to a paper on bounded-weight codes, which
 are also combinatorially intractable and have been studied only via
 computational analysis.

 -- Ben G






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson
 

Re: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread wannabe
This really seems more like arguing that there is no such thing as
AI-complete at all.  That is certainly a possibility.  It could be that
there are only different competences.  This would also seem to mean that
there isn't really anything that is truly general about intelligence,
which is again possible.

I guess one thing we're seeing here is a basic example of mathematics
having separate underlying mechanisms from other features of language.
Lakoff and Nunez talk about subitizing (judging small numbers of
things at a glance) as one core competency, and counting as another.
These are things you can see in animals that do not use language.  So,
sure, mathematics could be a separate realm of intelligence.

Of course, my response to that is that this kind of basic mathematical
ability is needed to understand language.  Of course, people who favor
language use may not exercise their mathematical ability and it can become
weak, but I think it generally has to be there for full competence.  And
there are some more abstract concepts that could be hard for people to
get, and maybe some people don't have what it takes to get some concepts,
so they don't have infinite potential.
andi


Matthias wrote:
 Sorry, but this was no proof that a natural language understanding system
 is
 necessarily able to solve the equation x*3 = y for arbitrary y.

 1) You have not shown that a language understanding system must
 necessarily(!) have made statistical experiences on the equation x*3 =y.

 2) you give only a few examples. For a proof of the claim, you have to
 prove
 it for every(!) y.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
 that
 3a) that a language understanding system necessarily(!) has this rules
 3b) that a language understanding system necessarily(!) can apply such
 rules

 In my opinion a natural language understanding system must have a lot of
 linguistic knowledge.
 Furthermore a system which can learn natural languages must be able to
 gain
 linguistic knowledge.

 But both systems do not have necessarily(!) the ability to *work* with
 this
 knowledge as it is essential for AGI.

 And for this reason natural language understanding is not AGI complete at
 all.

 -Matthias



 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 21 October 2008 05:05
 To: agi@v2.listbox.com
 Subject: [agi] Language learning (was Re: Defining AGI)


 --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

 It can be solved with statistics. Take y = 12 and count Google hits:

 string count
 -- -
 1x3=12 760
 2x3=12 2030
 3x3=12 9190
 4x3=12 16200
 5x3=12 1540
 6x3=12 1010

 More generally, people learn algebra and higher mathematics by induction,
 by
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c / b = a

 It is the same way we learn grammatical rules, for example converting
 active
 to passive voice and applying it to novel sentences:

 Bob kissed Alice -> Alice was kissed by Bob.
 I ate dinner -> Dinner was eaten by me.
 etc...
 SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

 In a similar manner, we can learn to solve problems using logical
 deduction:

 All frogs are green. Kermit is a frog. Therefore Kermit is green.
 All fish live in water. A shark is a fish. Therefore sharks live in water.
 etc...

 I understand the objection to learning math and logic in a language model
 instead of coding the rules directly. It is horribly inefficient. I
 estimate
 that a neural language model with 10^9 connections would need up to 10^18
 operations to learn simple arithmetic like 2+2=4 well enough to get it
 right
 90% of the time. But I don't know of a better way to learn how to convert
 natural language word problems to a formal language suitable for entering
 into a calculator at the level of an average human adult.

 -- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben,

Unfortunately, this response is going to be (somewhat) long, because I
have several points that I want to make.

If I understand what you are saying, you're claiming that if I pointed
to the black box and said "That's a halting oracle", I'm not
describing the box directly, but instead describing it in terms of a
(semi)formal system in my head that defines "halting oracle". This
system is computable.

This seems to fit back with the comment I made about William Pearson's
system: we don't assume that the universe is computable, instead we
just assume that our mental substrate is.

But, we need to be careful about what "computable" means here. Things
like a mandelbrot set rendering are "computably enumerable", which is
formally separated from "computable", but still easily implemented on
a computer. The same is true of first-order theories that describe
"halting oracle" and related notions. Technically these are not
computable, because there is no halting criterion (or, in the case of
the mandelbrot renderer, no halting criterion *yet*, although many
mathematicians expect that one can be formulated.) We can list
positive cases (provably halting/nonhalting programs, provably
escaping points) but we have no way of deciding when to give up on the
stubborn points.
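
A minimal Python sketch of that Mandelbrot case: confirming that a point
escapes is a finite computation, but there is no general rule for when to
give up on a point that has not escaped yet, so the iteration cap below is a
practical cutoff rather than a genuine halting criterion:

def escapes(c, max_iter=10_000):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit provably diverges
            return True       # positive case confirmed in finite time
    return None               # undecided: not "inside", just "no answer yet"

print(escapes(1 + 0j))    # True  (escapes quickly)
print(escapes(-1 + 0j))   # None  (periodic orbit; no finite certificate here)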

A third type of computability is computably co-enumerable, which is
what the halting problem is. I imagine you know the definition of this
term already.

So, halting-related things such as halting oracles have no
computable description, but they do have a
description-implementable-on-a-computer. Unfortunately, AIXI does not
use models of this variety, since it only considers models that are
computable in the strict technical sense.

But, worse, there are mathematically well-defined entities that are
not even enumerable or co-enumerable, and in no sense seem computable.
Of course, any axiomatic theory of these objects *is* enumerable and
therefore intuitively computable (but technically only computably
enumerable). Schmidhuber's super-omegas are one example.

Concerning your statement,

It is not clear what you really mean by the description length
of something uncomputable, since the essence of uncomputability
is the property of **not being finitely describable**.

That statement basically agrees with the following definition of meaning:

A statement is meaningful if we have a (finite) rule that tells us
whether it is true or false.

The idea of finite rule here is a program that takes finite input
(the facts we currently know) and halts in finite time with an output.
This agrees with the formal definition of computable, so that
meaningful facts and computable facts are one and the same. Here is a
slightly broader definition:

A statement is meaningful if we have a (finite) rule that tells us
whether it is true.

This agrees instead with the definition of enumerable. Or, the
scientific testability version:

A statement is meaningful if we have a (finite) rule that tells us
whether it is false.

This of course agrees with the definition of co-enumerable. Now here
is a rather broad one:

A statement is meaningful if we have a (finite) rule that tells us
how we can reason if it is true.

So, each statement corresponds to a program that operates on known
statements to produce more statements; applying the rule corresponds
to using the fact in our reasoning. So the direct consequences of a
statement given some other statements are computable, but the truth or
falsehood is not necessarily. As it happens, this definition of
meaning admits horribly-terribly-uncomputable-things to be described!
(Far worse than the above-mentioned super-omegas.) So, the truth or
falsehood is very much not computable.
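
A tiny concretization of this "statement = rule for reasoning" reading, with
the statements and facts invented purely for illustration; truth is never
computed, only consequences are:

# Each "statement" is a function from a set of known facts to newly derivable
# facts; applying the rule corresponds to using the statement in reasoning.
def all_frogs_are_green(facts):
    subjects = (f.split()[0] for f in facts if f.endswith("is a frog"))
    return {f"{x} is green" for x in subjects}

known = {"Kermit is a frog", "sharks live in water"}
known |= all_frogs_are_green(known)   # use the statement; derive "Kermit is green"
print(known)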

I'm hesitant to provide the mathematical proof in this email, since it
is already long enough... let's just say it is available upon
request. Anyway, you'll probably have some more basic objection.

--Abram

On Mon, Oct 20, 2008 at 10:38 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


 On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 [my statement] seems to incorporate the assumption of a finite
 period of time because a finite set of sentences or observations must
 occur during a finite period of time.

 A finite set of observations, sure, but a finite set of statements can
 include universal statements.

 Ok ... let me clarify what I meant re sentences

 I'll define what I mean by a **descriptive sentence**

 What I mean
 by a sentence is a finite string of symbols drawn from a finite alphabet.

 What I mean by a *descriptive sentence* is a sentence that is agreed by
 a certain community to denote some subset of the total set of observations
 (where all observations have finite precision and are drawn from a certain
 finite set).

 So, whether or not a descriptive sentence contains universal quantifiers or
 quantum-gravity
 quantifiers or psychospirituometaphysical quantifiers, or whatever, in the
 end
 there are some observation-sets it 

AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
 

I agree. But the vaguely-human-mind-like-architecture is a huge additional
assumption.

 

If you have a system that can solve problem x and has in addition a
human-mind-like-architecture then obviously you obtain AGI for *any* x. 

 

The whole AGI-completeness would essentially depend on this additional
assumption. 

A human-mind-like architecture would even imply the ability to learn
natural language understanding.

 

- Matthias

Ben wrote


I wouldn't argue that any software system capable of learning human
language, would necessarily be able to learn mathematics

However, I strongly suspect that any software system **with a vaguely
human-mind-like architecture** that is capable of learning human language,
would also be able to learn basic mathematics

ben

On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

Sorry, but this was no proof that a natural language understanding system is
necessarily able to solve the equation x*3 = y for arbitrary y.

1) You have not shown that a language understanding system must
necessarily(!) have made statistical experiences on the equation x*3 =y.

2) you give only a few examples. For a proof of the claim, you have to prove
it for every(!) y.

3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
that
3a) that a language understanding system necessarily(!) has this rules
3b) that a language understanding system necessarily(!) can apply such rules

In my opinion a natural language understanding system must have a lot of
linguistic knowledge.
Furthermore a system which can learn natural languages must be able to gain
linguistic knowledge.

But both systems do not have necessarily(!) the ability to *work* with this
knowledge as it is essential for AGI.

And for this reason natural language understanding is not AGI complete at
all.

-Matthias



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 21 October 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)



--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string count
-- -
1x3=12 760
2x3=12 2030
3x3=12 9190
4x3=12 16200
5x3=12 1540
6x3=12 1010

More generally, people learn algebra and higher mathematics by induction, by
generalizing from lots of examples.

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c / b = a

It is the same way we learn grammatical rules, for example converting active
to passive voice and applying it to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...

I understand the objection to learning math and logic in a language model
instead of coding the rules directly. It is horribly inefficient. I estimate
that a neural language model with 10^9 connections would need up to 10^18
operations to learn simple arithmetic like 2+2=4 well enough to get it right
90% of the time. But I don't know of a better way to learn how to convert
natural language word problems to a formal language suitable for entering
into a calculator at the level of an average human adult.

-- Matt Mahoney, [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson








Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel


 But, worse, there are mathematically well-defined entities that are
 not even enumerable or co-enumerable, and in no sense seem computable.
 Of course, any axiomatic theory of these objects *is* enumerable and
 therefore intuitively computable (but technically only computably
 enumerable). Schmidhuber's super-omegas are one example.


My contention is that the first use of the word "are" in the first sentence
of the above is deceptive.

The whole problem with the question of whether there are uncomputable
entities is the ambiguity of the natural language term "is" / "are", IMO ...

If by

"A exists"

you mean communicable-existence, i.e.

"It is possible to communicate A using a language composed of discrete
symbols, in a finite time"

then uncomputable numbers do not exist

If by

"A exists"

you mean

"I can take some other formal property F(X) that applies to
communicatively-existent things X, and apply it to A"

then this will often be true ... depending on the property F ...

My question to you is: how do you interpret "are" in your statement that
uncomputable entities "are"?

ben





Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Ben Goertzel
OK, I guess the term "architecture" is poorly defined.

Let me say it this way, then: I suspect that if you have a system that

a) has a generally dog/chimp/pig like cognitive architecture (which we
humans certainly do)

b) can learn to understand and generate human language

then this system will also be able to learn basic human math...

-- Ben G

On Tue, Oct 21, 2008 at 11:53 AM, Dr. Matthias Heger [EMAIL PROTECTED]wrote:



 I agree. But the vaguely-human-mind-like-architecture is a huge additional
 assumption.



 If you have a system that can solve problem x and has in addition a
 human-mind-like-architecture then obviously you obtain AGI for **any** x.



 The whole AGI-completeness would essentially depend on this additional
 assumption.

 A human-mind-like-architecture  even would imply the ability to learn
 natural language understanding



 - Matthias

 Ben wrote


 I wouldn't argue that any software system capable of learning human
 language, would necessarily be able to learn mathematics

 However, I strongly suspect that any software system **with a vaguely
 human-mind-like architecture** that is capable of learning human language,
 would also be able to learn basic mathematics

 ben

 On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED]
 wrote:

 Sorry, but this was no proof that a natural language understanding system
 is
 necessarily able to solve the equation x*3 = y for arbitrary y.

 1) You have not shown that a language understanding system must
 necessarily(!) have made statistical experiences on the equation x*3 =y.

 2) you give only a few examples. For a proof of the claim, you have to
 prove
 it for every(!) y.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
 that
 3a) that a language understanding system necessarily(!) has this rules
 3b) that a language understanding system necessarily(!) can apply such
 rules

 In my opinion a natural language understanding system must have a lot of
 linguistic knowledge.
 Furthermore a system which can learn natural languages must be able to gain
 linguistic knowledge.

 But both systems do not have necessarily(!) the ability to *work* with this
 knowledge as it is essential for AGI.

 And for this reason natural language understanding is not AGI complete at
 all.

 -Matthias



 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 21 October 2008 05:05
 To: agi@v2.listbox.com
 Subject: [agi] Language learning (was Re: Defining AGI)



 --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  For instance, I doubt that anyone can prove that
  any system which understands natural language is
  necessarily able to solve
  the simple equation x *3 = y for a given y.

 It can be solved with statistics. Take y = 12 and count Google hits:

 string count
 -- -
 1x3=12 760
 2x3=12 2030
 3x3=12 9190
 4x3=12 16200
 5x3=12 1540
 6x3=12 1010

 More generally, people learn algebra and higher mathematics by induction,
 by
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c / b = a

 It is the same way we learn grammatical rules, for example converting
 active
 to passive voice and applying it to novel sentences:

 Bob kissed Alice -> Alice was kissed by Bob.
 I ate dinner -> Dinner was eaten by me.
 etc...
 SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

 In a similar manner, we can learn to solve problems using logical
 deduction:

 All frogs are green. Kermit is a frog. Therefore Kermit is green.
 All fish live in water. A shark is a fish. Therefore sharks live in water.
 etc...

 I understand the objection to learning math and logic in a language model
 instead of coding the rules directly. It is horribly inefficient. I
 estimate
 that a neural language model with 10^9 connections would need up to 10^18
 operations to learn simple arithmetic like 2+2=4 well enough to get it
 right
 90% of the time. But I don't know of a better way to learn how to convert
 natural language word problems to a formal language suitable for entering
 into a calculator at the level of an average human adult.

 -- Matt Mahoney, [EMAIL PROTECTED]







 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson

   

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben,

My discussion of meaning was supposed to clarify that. The final
definition is the broadest I currently endorse, and it admits
meaningful uncomputable facts about numbers. It does not appear to get
into the realm of set theory, though.

--Abram

On Tue, Oct 21, 2008 at 12:07 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



 But, worse, there are mathematically well-defined entities that are
 not even enumerable or co-enumerable, and in no sense seem computable.
 Of course, any axiomatic theory of these objects *is* enumerable and
 therefore intuitively computable (but technically only computably
 enumerable). Schmidhuber's super-omegas are one example.

 My contention is that the first use of the word are in the first sentence
 of
 the above is deceptive.

 The whole problem with the question of whether there are uncomputable
 entities is the ambiguity of the natural language term is / are, IMO ...

 If by

 A exists

 you  mean communicable-existence, i.e.

 It is possible to communicate A using a language composed of discrete
 symbols, in a finite time

 then uncomputable numbers do not exist

 If by

 A exists

 you mean

 I can take some other formal property F(X) that applies to
 communicatively-existent things X, and apply it to A

 then this will often be true ... depending on the property F ...

 My question to you is: how do you interpret are in your statement that
 uncomputable entities are?

 ben

 


AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser wrote:


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking
for a counter-example unscientific?


Before you ask for counter-examples you should *first* give some arguments
which support your hypothesis. This was my point. If everyone made
wild hypotheses and asked other scientists to find counter-examples, then we
would end up in an explosion of the number of hypotheses. But if you first
show some arguments which support your hypothesis, then you give the
scientific community reasons why it is worth spending some time thinking
about the hypothesis. Regarding your example with Darwin: Darwin had gathered
evidence which supported his hypothesis *first*.



First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is easy.


Is this a scientific discussion from you? No. You use rhetoric and nothing
else.
I don't say that anything is easy. 


Second, my hypothesis is more correctly stated that the pre-requisites for a

natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


This is the first time you speak about pre-requisites. Your whole hypothesis
changes with these pre-requisites. If you were being scientific, you would
specify what your pre-requisites are.


So, for simplicity, why don't we just say
scientist = understanding


Understanding does not imply the ability to create something new or to apply
knowledge.
Furthermore, natural language understanding does not imply understanding of
*general* domains. There is much evidence that the ability to understand
natural language does not imply an understanding of mathematics, not to
speak of the ability to create mathematics.


Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which

equals scientist ;-).


Understanding does not equal scientist. 
The claim that natural language understanding needs understanding is
trivial. This wasn't your initial hypothesis.






- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW were being scientific, he would not have asked Ben to prove that MW's
hypothesis is wrong.
The person who has to prove something is the person who makes the
hypothesis.
And MW has not given even a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI

 You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,
and creativity
http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
 Yes, but each of those steps is very vague, and cannot be boiled down to a 
 series of precise instructions sufficient for a stupid person to 
 consistently carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they only 
general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and emphasize) 
between a discoverer and a learner.  The cognitive skills/intelligence 
necessary to design questions, hypotheses, experiments, etc. are far in excess 
of the cognitive skills/intelligence necessary to evaluate/validate those things.  
My argument was meant to be that a general intelligence needs to be a 
learner-type rather than a discoverer-type although the discoverer type is 
clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence?  
How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

 Also, those steps are heuristic and do not cover all cases.  For instance 
 step 4 requires experimentation, yet there are sciences such as cosmology 
 and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than physical 
experiments but it's still all about predictive power.  What is that next 
star/dinosaur going to look like?  What is it *never* going to look like (or 
else we need to expand or correct our theory)?  Is there anything that we can 
guess that we haven't tested/seen yet that we can verify?  What else is science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
F.  Return to step A with additional leverage

If you were forced to codify the hard core of the scientific method, how 
would you do it?

 As you asked for references I will give you two:

Thank you for setting a good example by including references but the contrast 
between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 
(you didn't really forget that my undergraduate degree was a dual major of 
Biochemistry and Philosophy of Science, did you? :-).

My view is basically that of Lakatos to the extent that I would challenge you 
to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism snort) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).




  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 10:41 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





  On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser [EMAIL PROTECTED] wrote:

Oh, and I *have* to laugh . . . .



  Hence the wiki entry on scientific method:
  Scientific method is not a recipe: it requires intelligence, 
imagination,

and creativity

  http://en.wikipedia.org/wiki/Scientific_method
  This is basic stuff.



In the cited wikipedia entry, the phrase "Scientific method is not a 
recipe: it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method

A linearized, pragmatic scheme of the four points above is sometimes 
offered as a guideline for proceeding:[25]

  Yes, but each of those steps is very vague, and cannot be boiled down to a 
series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

  Also, those steps are heuristic and do not cover all cases.  For instance 
step 4 requires experimentation, yet there are sciences such as cosmology and 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Marc Walser wrote


Try to get the name right.  It's just common competence and courtesy.

Before you ask for counter examples you should *first* give some 
arguments which supports your hypothesis. This was my point.


And I believe that I did.  And I note that you didn't even address the fact 
that I did so again in the e-mail you are quoting.  You seem to want to 
address trivia rather than the meat of the argument.  Why don't you address 
the core instead of throwing up a smokescreen?



Regarding your example with Darwin:


What example with Darwin?

First, I'd appreciate it if you'd drop the strawman.  You are the only 
one who keeps insisting that anything is easy.
 Is this a scientific discussion from you? No. You use rhetoric and 
nothing else.


And baseless statements like "You use rhetoric and nothing else" are a 
scientific discussion.  Again with the smokescreen.



I don't say that anything is easy.


Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.


--





Clearly you DO say that language understanding is easy.








This is the first time you speak about pre-requisites.


Direct quote cut and paste from *my* e-mail . . . . .

- Original Message - 
From: Mark Waser [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 4:01 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I don't think that learning of language is the entire point. If I have
only
learned language I still cannot create anything. A human who can
understand
language is by far still no good scientist. Intelligence means the 
ability

to solve problems. Which problems can a system solve if it can nothing
else
than language understanding?


Many or most people on this list believe that learning language is an
AGI-complete task.  What this means is that the skills necessary for
learning a language are necessary and sufficient for learning any other
task.  It is not that language understanding gives general intelligence
capabilities, but that the pre-requisites for language understanding are
general intelligence (or, that language understanding is isomorphic to
general intelligence in the same fashion that all NP-complete problems are
isomorphic).  Thus, the argument actually is that a system that can do
nothing else than language understanding is an oxymoron.


-




Clearly I DO talk about the pre-requisites for language understanding.






Dude.  Seriously.

First you deny your own statements and then claim that I didn't previously 
mention something that it is easily provable that I did (at the top of an 
e-mail).  Check the archives.  It's all there in bits and bytes.


Then you end with a funky pseudo-definition that "Understanding does not 
imply the ability to create something new or to apply knowledge."  What 
*does* understanding mean if you can't apply it?  What value does it have?







AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Andi wrote

This really seems more like arguing that there is no such thing as
AI-complete at all.  That is certainly a possibility.  It could be that
there are only different competences.  This would also seem to mean that
there isn't really anything that is truly general about intelligence,
which is again possible.

No. This argument shows that there are very basic features which do not
necessarily follow from natural language understanding:

Usage of knowledge.
The example of solving an equation is just one of many examples. 
If you can talk about things, this does not imply that you can do things.



I guess one thing we're seeing here is a basic example of mathematics as
having underlying separate mechanisms from other features of language. 
Lakoff and Nunez talk about subitizing (judging small numbers of
things at a glance) as one core competency, and counting as another. 
These are things you can see in animals that do not use language.  So,
sure, mathematics could be a separate realm of intelligence.


It is not just mathematics. A natural language understanding system can talk
about shopping. But from this ability you can't prove that it can do
shopping.
There are essential features of intelligence missing in natural language
understanding. And that's the reason why natural language understanding is
not AGI-complete.



Of course, my response to that is that this kind of basic mathematical
ability is needed to understand language.  



This argumentation is nothing other than making a non-AGI-complete system
AGI-complete by adding more and more features.

If you suppose, for an arbitrary still-unsolved problem P, that everything
which is needed to solve AGI is also necessary to solve P, then it becomes
trivial that P is AGI-complete. 

But this argumentation is similar to that of the doubters of AGI, who
essentially suppose, for an arbitrary given still-unsolved problem P, that P
is not computable at all.






-Matthias


Matthias wrote:
 Sorry, but this was no proof that a natural language understanding system
 is
 necessarily able to solve the equation x*3 = y for arbitrary y.

 1) You have not shown that a language understanding system must
 necessarily(!) have made statistical experiences on the equation x*3 =y.

 2) you give only a few examples. For a proof of the claim, you have to
 prove
 it for every(!) y.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
 that
 3a) that a language understanding system necessarily(!) has this rules
 3b) that a language understanding system necessarily(!) can apply such
 rules

 In my opinion a natural language understanding system must have a lot of
 linguistic knowledge.
 Furthermore a system which can learn natural languages must be able to
 gain
 linguistic knowledge.

 But both systems do not have necessarily(!) the ability to *work* with
 this
 knowledge as it is essential for AGI.

 And for this reason natural language understanding is not AGI complete at
 all.

 -Matthias



 -Ursprüngliche Nachricht-
 Von: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Gesendet: Dienstag, 21. Oktober 2008 05:05
 An: agi@v2.listbox.com
 Betreff: [agi] Language learning (was Re: Defining AGI)


 --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

 It can be solved with statistics. Take y = 12 and count Google hits:

 string count
 -- -
 1x3=12 760
 2x3=12 2030
 3x3=12 9190
 4x3=12 16200
 5x3=12 1540
 6x3=12 1010

 More generally, people learn algebra and higher mathematics by induction,
 by
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c / b = a

 It is the same way we learn grammatical rules, for example converting
 active
 to passive voice and applying it to novel sentences:

 Bob kissed Alice -> Alice was kissed by Bob.
 I ate dinner -> Dinner was eaten by me.
 etc...
 SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

 In a similar manner, we can learn to solve problems using logical
 deduction:

 All frogs are green. Kermit is a frog. Therefore Kermit is green.
 All fish live in water. A shark is a fish. Therefore sharks live in water.
 etc...

 I understand the objection to learning math and logic in a language model
 instead of coding the rules directly. It is horribly inefficient. I
 estimate
 that a neural language model with 10^9 connections would need up to 10^18
 operations to learn simple arithmetic like 2+2=4 well enough to get it
 right
 90% of the time. But I don't know of a better way to learn how to convert
 natural language word problems to a formal language suitable for entering
 into a calculator at the level of an average human adult.

 -- Matt Mahoney, [EMAIL PROTECTED]




RE: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ed Porter
Vlad,

 

Thanks.  In response to your email I tried plugging different values into the
Excel spreadsheet I sent by a prior email under this subject line, and, lo
and behold, got some interesting answers for the number A of assemblies
(or sets) of nodes of uniform size S you can create from N nodes where no
two assemblies have more than O overlapping nodes:

 

 

N      S    O    A
======================================
100K   20   1    2.5x10^2
100K   20   2    1.3x10^5    - 10% overlap
100K   20   3    1.2x10^8
======================================
100K   50   2    3.3x10^3
100K   50   3    4.4x10^5
100K   50   4    7.9x10^7
100K   50   5    1.8x10^10   - 10% overlap
======================================
100K   88   2    3.5x10^2
100K   88   3    1.4x10^4
100K   88   4    8.1x10^5
100K   88   5    5.7x10^7
100K   88   6    5.6x10^9
100K   88   7    5.2x10^11
100K   88   8    6.3x10^13   - 9.09% overlap
======================================
50K    88   6    8.2x10^7
50K    88   8    1.6x10^11   - 9.09% overlap
======================================
25K    88   6    1.4x10^6
25K    88   8    1.1x10^9    - 9.09% overlap

 

From this data, it is clear that for a given N, increasing S (at least when
S is small relative to N) increases the number of assemblies that have a given
percent of maximum overlap, such as 5% or 10% max overlap.  Doubling N from
25K, to 50K, to 100K provides a little less than two orders of magnitude of
increase in the number of cell assemblies of size 88 having overlaps of 6
or 8 at each doubling. 

 

88 was the largest number S that Excel could produce a C(100K,S) for,
enabling the calculations to be made.  BUT EVEN AT THIS LIMIT WITH 100K
NODES, YOU COULD CREATE 57 MILLION NODE ASSEMBLIES WITH AN OVERLAP OF LESS THAN
5.7%.  This is a 570x increase in the number of states that could be
represented, relative to representing states with individual nodes.

 

Since it is clear that the number of possible assemblies at a given percent
overlap grows with N and with S, it is likely that one could produce even
larger multiplicative increases in the number of assemblies relative to the
number of nodes, at even lower percentage overlap, with larger Ns and/or
Ss.

 

The one thing I don't understand is how you derived the formula I used in
this spreadsheet, the one you described in your Thu 10/16/2008 7:50 PM email
(with the position of the variables in the combinations function switched).
Switching the variables in the combinatorial formula to the convention more
commonly used in America, the formula is as follows: 

 

A =C(N,S)/T(N,S,O)

 

Where 

T(N,S,O)=

C(S,S)

+C(S, S-1)*C(N-S, 1)

+C(S, S-2)*C(N-S, 2)

+...

+C(S, O)*C(N-S, S-O)

 

(note the first term in T(N,S,O), i.e., C(S,S), is the equivalent of
C(S,S-0)*C(N-S,0) since C(X,0)=1, oddly enough, which makes all the terms in
T(N,S,O) have the same form, differing only by the value of the iterator,
which runs from 0 to S-O)
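
For anyone who wants to sanity-check these numbers without Excel, here is a
minimal Python sketch of the same formula (the function names are mine, and
math.comb assumes Python 3.8+); it should land close to the table rows above,
e.g. on the order of 5.7x10^7 for N=100K, S=88, O=5:

from math import comb

def T(N, S, O):
    # Number of size-S assemblies sharing at least O nodes with a fixed
    # size-S assembly (that assembly itself included):
    #   sum over overlap X = O..S of C(S, X) * C(N - S, S - X)
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

def A_lower_bound(N, S, O):
    # The spreadsheet's lower bound on the number of assemblies: C(N,S) / T(N,S,O)
    return comb(N, S) // T(N, S, O)

for N, S, O in [(100_000, 20, 2), (100_000, 50, 5), (100_000, 88, 5), (100_000, 88, 8)]:
    print(f"N={N} S={S} O={O}  A >= {A_lower_bound(N, S, O):.2e}")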

 

THE ONE PROBLEM I HAVE IS THAT I DON'T UNDERSTAND THE DERIVATION OF THIS
FORMULA, SO I CAN'T KNOW HOW MUCH FAITH OR ACCURACY I SHOULD ATTRIBUTE TO
ITS ESTIMATION OF A LOWER BOUND. 

 

If it is possible to give an explanation of why this formula is a proper
lower bound, in a little more detail than in the email in which you first
presented it, I would appreciate it very much; it would let me know how much
faith I should put in the above numerical results.

 

Ed Porter

 

 

 

-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2008 1:14 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 

On Tue, Oct 21, 2008 at 12:07 AM, Ed Porter [EMAIL PROTECTED] wrote:



 I built an excel spread sheet to calculate this for various values of N,S,

 and O.  But when O = zero, the value of C(N,S)/T(N,S,O) doesn't make sense

 for most values of N and S.  For example if N = 100 and S = 10, and O =

 zero, then A should equal 10, not one as it does on the spread sheet.



 

It's a lower bound.

 

 

 I have attached the excel spreadsheet I made to play around with your

 formulas, and a PDF of one page of it, in case you don't have access to

 Excel.



 

Your spreadsheet doesn't catch it for S=100 and O=1, it explodes when

you try to increase N.

But at S=10, O=2, you can see how lower bound increases as you

increase N. At N=5000, lower bound is 6000, at N=10^6, it's 2.5*10^8,

and at N=10^9 it's 2.5*10^14.

 

-- 

Vladimir Nesov

[EMAIL PROTECTED]

http://causalityrelay.wordpress.com/

 

 


[agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Terren Suydam

Matthias wrote:
 Your claim is that natural language understanding is
 sufficient for AGI. Then you must be able to prove that
 everything what AGI can is also possible by a system which
 is able to understand natural language. AGI can learn to
 solve x*3 = y for arbitrary y. And AGI can do this with
 Mathematica or without Mathematica. Simply prove that a
 natural language understanding system must necessarily be
 able to do the same.

Here's my simple proof: algebra, or any other formal language for that matter, 
is expressible in natural language, if inefficiently. 

Words like quantity, sum, multiple, equals, and so on, are capable of conveying 
the same meaning that the sentence x*3 = y conveys. The rules for 
manipulating equations are likewise expressible in natural language. 

Thus it is possible in principle to do algebra without learning the 
mathematical symbols. Much more difficult for human minds perhaps, but possible 
in principle. Thus, learning mathematical formalism via translation from 
natural language concepts is possible (which is how we do it, after all). 
Therefore, an intelligence that can learn natural language can learn to do math.
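
As a toy illustration of that translation step (a sketch only; the vocabulary
and the fixed sentence pattern below are invented for the example), a program
can solve the verbal form of x*3 = y mechanically:

import re

# Map a fixed verbal pattern onto the equation k * x = y and return x = y / k.
# The word list and template are deliberately tiny; the point is only that the
# formal content survives the verbal encoding.
WORDS = {"three": 3, "five": 5, "seven": 7, "twelve": 12,
         "twenty": 20, "twenty-one": 21}

def solve_verbal(sentence):
    m = re.match(r"(\S+) times some quantity equals (\S+)", sentence.lower())
    k, y = WORDS[m.group(1)], WORDS[m.group(2)]
    return y / k

print(solve_verbal("Three times some quantity equals twelve"))      # 4.0
print(solve_verbal("Seven times some quantity equals twenty-one"))  # 3.0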

 I have given the model why we have the illusion that we
 believe our thoughts are built from language. 
 
. snipped description of model
 
 My model explains several phenomena:
 
 1. We hear our thoughts
 2. We think with the same speed as we speak (this is not
 trivial!)
 3. We hear our thoughts with our own voice (strong evidence
 for my model!)
 4. We have problems to think in a very noisy and loud
 environment (because we have to listen to our thoughts)
 
I believe there are linguistic forms of thought (exactly as you describe) and 
non-linguistic forms of thought (as described by Einstein - thinking in 
'pictures'). I agree with your premise that thought is not necessarily 
linguistic (as I have in previous emails!). 

Your model (which is quite good at explaining internal monologue) - and list of 
phenomena above - does not apply to the non-linguistic form of thought (as I 
experience it) except perhaps for (4), but that could simply be due to 
sensorial competition for one's attention, not a need to hear thought. This 
non-linguistic kind of thought is much faster and obviously non-verbal - it is 
not 'heard'. It can be quite a struggle to express the products of such 
thinking in natural language. 

This faculty for non-linguistic mental manipulation is most likely exclusively 
how chimps, ravens, and other highly intelligent animals solve problems. But 
relying on this form of thought alone is not sufficient for the development of 
the symbolic conceptual framework necessary to perform human-level analytical 
thought.

Terren


  




Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
C(N,S) is the total number of assemblies of size S that fit in the N
nodes, if you forget about overlaps.

Each assembly overlaps in X places with other C(S,X)*C(N-S,S-X)
assemblies: if another assembly overlaps with our assembly in X
places, then X nodes are inside S nodes of our assembly, which gives
C(S,X) possible combinations, and the remaining S-X of the nodes are
outside the assembly, in remaining N-S nodes, which gives C(N-S,S-X)
combinations, totaling to C(S,X)*C(N-S,S-X). Thus, the total number of
assemblies that overlap with our assembly in O to S places (including
our assembly itself) is
T(N,S,O)=
C(S,S)*C(N-S,S-S)+
C(S,S-1)*C(N-S,S-(S-1))+
...+
C(S,O)*C(N-S,S-O)

Let's apply a trivial algorithm to our problem, adding an arbitrary
assembly to the working set merely if it doesn't conflict with any of
the assemblies already in the working set. Adding a new assembly will
ban other T(N,S,O) assemblies from the total pool of C(N,S)
assemblies, thus each new assembly in the working set lowers the
number of remaining assemblies that we'll be able to add later. Some
assemblies from this pool will be banned multiple times, but at least
C(N,S)/T(N,S,O) assemblies can be added without conflicts, since
T(N,S,O) is the maximum number of assemblies that each one in the pool
is able to subtract from the total pool of assemblies.
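
A toy Python check of this argument on very small N (the function names and
test values here are just illustrative): it runs the trivial
keep-it-if-it-conflicts-with-nothing algorithm over all size-S subsets and
compares the count of kept assemblies with the C(N,S)/T(N,S,O) bound; the
kept count should never fall below the bound.

from itertools import combinations
from math import comb

def T(N, S, O):
    # Assemblies sharing at least O nodes with a fixed assembly (itself included).
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

def greedy_keep(N, S, O):
    # The trivial algorithm: scan all size-S subsets of N nodes and keep a
    # subset whenever it overlaps every previously kept subset in fewer than
    # O nodes (i.e. it is not "banned" by any kept assembly).
    kept = []
    for cand in combinations(range(N), S):
        c = set(cand)
        if all(len(c & k) < O for k in kept):
            kept.append(c)
    return len(kept)

for N, S, O in [(10, 3, 1), (12, 4, 2), (14, 4, 2)]:
    bound = comb(N, S) // T(N, S, O)
    print(N, S, O, "kept:", greedy_keep(N, S, O), "bound:", bound)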

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED] wrote:
 As it happens, this definition of
 meaning admits horribly-terribly-uncomputable-things to be described!
 (Far worse than the above-mentioned super-omegas.) So, the truth or
 falsehood is very much not computable.

 I'm hesitant to provide the mathematical proof in this email, since it
 is already long enough... let's just say it is available upon
 request.

Now I'm curious -- can these horribly-terribly-uncomputable-things be
described to a non-mathematician? If so, consider this a request.




AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser answered to 

 I don't say that anything is easy.

:

Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.

--





Clearly you DO say that language understanding is easy.


Your claim was that I have said that *anything* is easy.
This is a faulty generalization, a well-known rhetorical move.


I think you are often less scientific than the people whom you accuse of not
being scientific.
I will give up discussing with you.











Re: [agi] META: A possible re-focusing of this list

2008-10-21 Thread Steve Richfield
Matt,

On 10/20/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 The singularity list is probably more appropriate for philosophical
 discussions about AGI.


Only those discussions that relate AGI to singularity.

Another one for Ben's list:

*Basic Economic Feasibility: It has been proposed that intelligent but not
super-intelligent machines may have great economic value. Others have said
that we already have way too many such biological machines, making more such
intelligence worthless. This has been countered by arguments that there are
hazardous and/or biologically impossible environments where only an
intelligent machine could work. This seems to fall into the realm of basic
business plan projections, where the cost of engineering and manufacture is
returned by sales through market penetration. An abbreviated business plan
showing quantitatively how a profit might be made would go a LONG way to
settling this argument.*

Steve Richfield





Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
Try Rudy Rucker's book Infinity and the Mind for a good nontechnical
treatment of related ideas

http://www.amazon.com/Infinity-Mind-Rudy-Rucker/dp/0691001723

The related wikipedia pages are a bit technical ;-p , e.g.

http://en.wikipedia.org/wiki/Inaccessible_cardinal

On Tue, Oct 21, 2008 at 2:27 PM, Russell Wallace
[EMAIL PROTECTED]wrote:

 On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED]
 wrote:
  As it happens, this definition of
  meaning admits horribly-terribly-uncomputable-things to be described!
  (Far worse than the above-mentioned super-omegas.) So, the truth or
  falsehood is very much not computable.
 
  I'm hesitant to provide the mathematical proof in this email, since it
  is already long enough... let's just say it is available upon
  request.

 Now I'm curious -- can these horribly-terribly-uncomputable-things be
 described to a non-mathematician? If so, consider this a request.






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Ben Goertzel


 Here's my simple proof: algebra, or any other formal language for that
 matter, is expressible in natural language, if inefficiently.

 Words like quantity, sum, multiple, equals, and so on, are capable of
 conveying the same meaning that the sentence x*3 = y conveys. The rules
 for manipulating equations are likewise expressible in natural language.

 Thus it is possible in principle to do algebra without learning the
 mathematical symbols. Much more difficult for human minds perhaps, but
 possible in principle. Thus, learning mathematical formalism via translation
 from natural language concepts is possible (which is how we do it, after
 all). Therefore, an intelligence that can learn natural language can learn
 to do math.



OK, but I didn't think we were talking about what is possible in principle
but may be unrealizable in practice...

It's possible in principle to create a supercomputer via training pigeons to
peck in appropriate patterns, in response to the patterns that they notice
other pigeons peck.  My friends in Perth and I designed such a machine once
and called it the PC or Pigeon Computer.  I wish I'd retained the drawings
and schematics!  We considered launching a company to sell them, IBM or
International Bird Machines ... but failed to convince any VC's (even in the
Internet bubble!!) and gave up...

ben g





Re: [agi] Re: Value of philosophy

2008-10-21 Thread Steve Richfield
Ben,

Hey, maybe I FINALLY got your frame of mind here. Just to test this,
consider:

Suppose we change the format NOT to exclude anything at all, but rather
I/you/we set up a Wiki that includes EVERYTHING. Right next to a technical
details may be a link to a philosophical point, and right next to a
philosophical point may be a link to a technical detail. Then, on this
forum, people would only post pointers to new edits and information that
they EXPECT would disappear into the bit bucket by tomorrow.

We would include identified buzz phrases to be able to pull important but
disjoint things together, as I have been using the buzz phrase Ben's list
with my various distilled philosophical (read that feasibility) points.

This way, everything ever related to a given subject would be pulled
together and organized. I would be happier because the feasibility issues
would all be together for anyone entering AGI to consider, and you would be
happier because your technical section would be undisturbed by
philosophical discussion, except for a few hyperlinks sprinkled therein.

Does this work for everyone?

Steve Richfield
=
On 10/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 Just to clarify one point: I am not opposed to philosophy, nor do I
 consider it irrelevant to AGI.  I wrote a book on my own philosophy of mind
 in 2006.

 I just feel like the philosophical discussions tend to overwhelm the
 pragmatic discussions on this list, and that a greater number of pragmatic
 discussions **might** emerge if the pragmatic and philosophical discussions
 were carried out in separate venues.

 Some of us feel we already have adequate philosophical understanding to
 design and engineer AGI systems.  We may be wrong, but that doesn't mean we
 should spend our time debating our philosophical understandings, to the
 exclusion of discussing the details of our concrete AGI work.

 For me, after enough discussion of the same philosophical issue, I stop
 learning anything.  Most of the philosophical discussions on this list are
 nearly identical in content to discussions I had with others 20 years ago.
 I learned a lot from the discussions then, and learn a lot less from the
 repeats...

 -- Ben


  On Mon, Oct 20, 2008 at 9:06 AM, Mike Tintner [EMAIL PROTECTED]wrote:

 Vlad:Good philosophy is necessary for AI...We need to work more on the
 foundations, to understand whether we are
 going in the right direction


 More or less perfectly said. While I can see that a majority of people
 here don't want it,  actually philosophy, (which should be scientifically
 based), is essential for AGI, precisely as Vlad says - to decide what are
 the proper directions and targets for AGI. What is creativity? Intelligence?
 What are the kinds of problems an AGI should be dealing with? What kind(s)
 of knowledge representation are necessary? Is language necessary? What forms
 should concepts take? What kinds of information structures, eg networks,
 should underlie them? What kind(s) of search are necessary? How do analogy
 and metaphor work? Is embodiment necessary? etc etc.   These are all matters
 for what is actually philosophical as well as scientific as well as
 technological/engineering discussion.  They tend to be often  more
 philosophical in practice because these areas are so vast that they can't be
 neatly covered  - or not at present - by any scientific,
 experimentally-backed theory.

 If your philosophy is all wrong, then the chances are v. high that your
 engineering work will be a complete waste of time. So it's worth considering
 whether your personal AGI philosophy and direction are viable.

 And that is essentially what the philosophical discussions here have all
 been about - the proper *direction* for AGI efforts to take. Ben has
 mischaracterised these discussions. No one - certainly not me - is objecting
 to the *feasibility* of AGI. Everyone agrees that AGI in one form or other
 is indeed feasible,  though some (and increasingly though by no means fully,
 Ben himself) incline to robotic AGI. The arguments are mainly about
 direction, not feasibility.

 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence? This is what Steve
 R has pointed to recently, albeit IMO in a rather confusing way. )

 So while I recognize that a lot of people have an antipathy to my personal
 philosoophising, one way or another, you can't really avoid philosophising,
 unless you are, say, totally committed to just one approach, like Opencog.
 And even then...

 P.S. Philosophy is always a matter of (conflicting) opinion. (Especially,
 given last night's 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Hmm...

I think that non-retarded humans are fully general intelligences in the
following weak sense: for any fixed t and l, for any human there are some
numbers M and T so that if the human is given amount M of external memory
(e.g. notebooks to write on), that human could be taught to emulate AIXItl

[see
http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8s=booksqid=1224614995sr=1-1,
or the relevant papers on Marcus Hutter's website]

where each single step of AIXItl might take up to T seconds.

This is a kind of generality that I think no animals but humans have.  So,
in that sense, we seem to be the first evolved general intelligences.

But, that said, there are limits to what any one of us can learn in a fixed
finite amount of time.   If you fix T realistically then our intelligence
decreases dramatically.

And for the time-scales relevant in human life, it may not be possible to
teach some people to do science adequately.

I am thinking for instance of a 40 yr old student I taught at the University
of Nevada way back when (normally I taught advanced math, but in summers I
sometimes taught remedial stuff for extra $$).  She had taken elementary
algebra 7 times before ... and had had extensive tutoring outside of class
... but I still was unable to convince her of the incorrectness of the
following reasoning: The variable a always stands for 1.  The variable b
always stands for 2. ... The variable z always stands for 26.   She was not
retarded.  She seemed to have a mental block against algebra.  She could
discuss politics and other topics with seeming intelligence.  Eventually I'm
sure she could have been taught to overcome this block.  But, by the time
she overcame every other issue in the way of really understanding science,
her natural lifespan would have long been overspent...

-- Ben G


On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

   Yes, but each of those steps is very vague, and cannot be boiled down
 to a series of precise instructions sufficient for a stupid person to
 consistently carry them out effectively...
 So -- are those stupid people still general intelligences?  Or are they
 only general intelligences to the degree to which they *can* carry them
 out?  (because I assume that you'd agree that general intelligence is a
 spectrum like any other type).

 There also remains the distinction (that I'd like to highlight and
 emphasize) between a discoverer and a learner.  The cognitive
 skills/intelligence necessary to design questions, hypotheses, experiments,
 etc. are far in excess the cognitive skills/intelligence necessary to
 evaluate/validate those things.  My argument was meant to be that a general
 intelligence needs to be a learner-type rather than a discoverer-type
 although the discoverer type is clearly more effective.

 So -- If you can't correctly evaluate data, are you a general
 intelligence?  How do you get an accurate and effective domain model to
 achieve competence in a domain if you don't know who or what to believe?  If
 you don't believe in evolution, does that mean that you aren't a general
 intelligence in that particular realm/domain (biology)?

  Also, those steps are heuristic and do not cover all cases.  For
 instance step 4 requires experimentation, yet there are sciences such as
 cosmology and paleontology that are not focused on experimentation.
 I disagree.  They may be based upon thought experiments rather than
 physical experiments but it's still all about predictive power.  What is
 that next star/dinosaur going to look like?  What is it *never* going to
 look like (or else we need to expand or correct our theory)?  Is there
 anything that we can guess that we haven't tested/seen yet that we can
 verify?  What else is science?

 My *opinion* is that the following steps are pretty inviolable.
 A.  Observe
 B.  Form Hypotheses
 C.  Observe More (most efficiently performed by designing competent
 experiments including actively looking for disproofs)
 D.  Evaluate Hypotheses
 E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
 F.  Return to step A with additional leverage

 If you were forced to codify the hard core of the scientific method, how
 would you do it?

  As you asked for references I will give you two:
 Thank you for setting a good example by including references but the
 contrast between the two is far better drawn in *For and Against Method*
 (ISBN 0-226-46774-0).
 Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for
 those who wish to educate themselves in the fundamentals of Philosophy of
 Science
 (you didn't really forget that my undergraduate degree was a dual major of
 Biochemistry and Philosophy of Science, did you? :-).

 My view is 

Re: [agi] Re: Value of philosophy

2008-10-21 Thread Ben Goertzel
This is basically the suggestion to move to a forum-type format instead of a
mailing list  It has its plusses and minuses... you've cited one of the
plusses.

ben

On Tue, Oct 21, 2008 at 2:46 PM, Steve Richfield
[EMAIL PROTECTED]wrote:

 Ben,

 Hey, maybe I FINALLY got your frame of mind here. Just to test this,
 consider:

 Suppose we change the format NOT to exclude anything at all, but rather
 I/you/we set up a Wiki that includes EVERYTHING. Right next to a technical
 details may be a link to a philosophical point, and right next to a
 philosophical point may be a link to a technical detail. Then, on this
 forum, people would only post pointers to new edits and information that
 they EXPECT would disappear into the bit bucket by tomorrow.

 We would include identified buzz phrases to be able to pull important but
 disjoint things together, as I have been using the buzz phrase Ben's list
 with my various distilled philosophical (read that feasibility) points.

 This way, everything ever related to a given subject would be pulled
 together and organized. I would be happier because the feasibility issues
 would all be together for anyone entering AGI to consider, and you would be
 happier because your technical section would be undisturbed by
 philosophical discussion, except for a few hyperlinks sprinkled therein.

 Does this work for everyone?

 Steve Richfield
 =
 On 10/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:


 Just to clarify one point: I am not opposed to philosophy, nor do I
 consider it irrelevant to AGI.  I wrote a book on my own philosophy of mind
 in 2006.

 I just feel like the philosophical discussions tend to overwhelm the
 pragmatic discussions on this list, and that a greater number of pragmatic
 discussions **might** emerge if the pragmatic and philosophical discussions
 were carried out in separate venues.

 Some of us feel we already have adequate philosophical understanding to
 design and engineer AGI systems.  We may be wrong, but that doesn't mean we
 should spend our time debating our philosophical understandings, to the
 exclusion of discussing the details of our concrete AGI work.

 For me, after enough discussion of the same philosophical issue, I stop
 learning anything.  Most of the philosophical discussions on this list are
 nearly identical in content to discussions I had with others 20 years ago.
 I learned a lot from the discussions then, and learn a lot less from the
 repeats...

 -- Ben


  On Mon, Oct 20, 2008 at 9:06 AM, Mike Tintner [EMAIL PROTECTED]wrote:

 Vlad:Good philosophy is necessary for AI...We need to work more on the
 foundations, to understand whether we are
 going in the right direction


 More or less perfectly said. While I can see that a majority of people
 here don't want it,  actually philosophy, (which should be scientifically
 based), is essential for AGI, precisely as Vlad says - to decide what are
 the proper directions and targets for AGI. What is creativity? Intelligence?
 What are the kinds of problems an AGI should be dealing with? What kind(s)
 of knowledge representation are necessary? Is language necessary? What forms
 should concepts take? What kinds of information structures, eg networks,
 should underlie them? What kind(s) of search are necessary? How do analogy
 and metaphor work? Is embodiment necessary? etc etc.   These are all matters
 for what is actually philosophical as well as scientific as well as
 technological/engineering discussion.  They tend to be often  more
 philosophical in practice because these areas are so vast that they can't be
 neatly covered  - or not at present - by any scientific,
 experimentally-backed theory.

 If your philosophy is all wrong, then the chances are v. high that your
 engineering work will be a complete waste of time. So it's worth considering
 whether your personal AGI philosophy and direction are viable.

 And that is essentially what the philosophical discussions here have all
 been about - the proper *direction* for AGI efforts to take. Ben has
 mischaracterised these discussions. No one - certainly not me - is objecting
 to the *feasibility* of AGI. Everyone agrees that AGI in one form or other
 is indeed feasible,  though some (and increasingly though by no means fully,
 Ben himself) incline to robotic AGI. The arguments are mainly about
 direction, not feasibility.

 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence? This is what Steve
 R has pointed to recently, albeit IMO in a rather confusing way. )

 So while I recognize that a lot of people have an antipathy to my
 personal 

RE: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ed Porter
Ben, 

 

You're right.  Although one might seem to be getting a free lunch in terms
of being able to create more assemblies than the number of nodes from which
they are created, it would appear that the extra number of links required
(not only for auto-associative activation within an assembly, but also to
activate an assembly from the outside with a signal that is distinguishable
over the cross talk) may prevent such a use of node assemblies from
resulting in any actual saving.

 

If Vlad's formula for a lower bound is correct, the one that I used in the
Excel spreadsheet I sent out earlier under this thread, then it is clear one
can create substantially more assemblies than nodes, with maximum overlaps
below 5%, but it is not clear the increased cost in extra connections would
be worth it, since it is not clear that the cost of a node need be that
much higher than the cost of a link.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2008 11:28 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


makes sense, yep...

i guess my intuition is that there are obviously a huge number of
assemblies, so that the number of assemblies is not the hard part, the hard
part lies in the weights...

On Tue, Oct 21, 2008 at 11:18 AM, Ed Porter [EMAIL PROTECTED] wrote:

Ben,

 

In my email starting this thread on 10/15/08 7:41pm I pointed out that a
more sophisticated version of the algorithm would have to take connection
weights into account in determining cross talk, as you have suggested below.
But I asked for the answer to a more simple version of the problem, since
that might prove difficult enough, and since I was just trying to get some
rough feeling for whether or not node assemblies might offer substantial
gains in possible representational capability, before delving more deeply
into the subject.

 

Ed Porter

 

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 

Sent: Monday, October 20, 2008 10:52 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


But, suppose you have two assemblies A and B, which have nA and nB neurons
respectively, and which overlap in O neurons...

It seems that the system's capability to distinguish A from B is going to
depend on the specific **weight matrix** of the synapses inside the
assemblies A and B, not just on the numbers nA, nB and O.

And this weight matrix depends on the statistical properties of the memories
being remembered.

So, these counting arguments you're trying to do are only going to give you
a very crude indication, anyway, right? 

ben



On Mon, Oct 20, 2008 at 5:09 PM, Ed Porter [EMAIL PROTECTED] wrote:

Ben, 

 

I am interested in exactly the case where individual nodes partake in
multiple attractors,  

 

I use the notation A(N,O,S) which is similar to the A(n,d,w) notation of
constant-weight codes, except, as Vlad says, you would plug my variables into
the constant-weight formula by using A(N, 2*(S-O+1), S).
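
(For reference: viewing two size-S assemblies as 0/1 indicator vectors over
the N nodes, if they share exactly k nodes they differ in 2*(S-k) positions;
e.g. two size-88 assemblies sharing 5 nodes sit at Hamming distance
2*(88-5) = 166.  So bounding the overlap from above is the same as bounding
the code distance from below; whether the right plug-in is 2*(S-O) or
2*(S-O+1) depends on whether "an overlap of O" means at most O or fewer than
O shared nodes.)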

 

I have asked my question assuming each node assembly has the same size S just
to make the math easier.  Each such assembly is an autoassociative
attractor.  I want to keep the overlap O low to reduce the cross talk
between attractors.  So the question is how many node assemblies A you can
make having a size S, and no more than an overlap O, given N nodes.

 

Actually the cross talk between auto-associative patterns becomes an even
bigger problem if there are many attractors being activated at once (such as
hundreds of them), but if the signaling driving the populations of
different attractors could have different timing or timing patterns, and if
the auto-associativity were sensitive to such timing, this problem could be
greatly reduced.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 

Sent: Monday, October 20, 2008 4:16 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


Wait, now I'm confused.

I think I misunderstood your question.

Bounded-weight codes correspond to the case where the assemblies themselves
can have n or fewer neurons, rather than exactly n.

Constant-weight codes correspond to assemblies with exactly n neurons.

A complication btw is that an assembly can hold multiple memories in
multiple attractors.  For instance using Storkey's palimpsest model a
completely connected assembly with n neurons can hold about .25n attractors,
where each attractor has around .5n neurons switched on.

In a constant-weight code, I believe the numbers estimated tell you the
number of sets where the Hamming distance is greater than or equal to d.
The idea in coding is that the code strings denoting distinct messages
should not be closer to each other than d.

But I'm not sure I'm following your notation exactly.

ben g

On Mon, Oct 20, 2008 at 3:19 PM, Ben 

Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Russell,

The wikipedia article Ben cites is definitely meant for
mathematicians, so I will try to give an example.

The halting problem asks us about halting facts for a single program.
To make it worse, I could ask about an infinite class of programs:
"All programs satisfying Q eventually halt."  If Q is some computable
function that accepts some programs and rejects others, it is only a
little worse than the halting problem; call this halting2. If Q is
more difficult to evaluate than that, say if Q is as hard as solving
the halting problem, it's more difficult; call problems like this
halting3. If Q is as hard as halting2, then call that halting4. If Q
is as hard as halting3, then call the resulting class halting4. And so
on.

This is a somewhat odd way of constructing it, but I hope it is understandable.
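
For readers who want the textbook version of this ladder (standard notation,
not Abram's wording; his halting2, halting3, ... climb roughly the same steps
as the arithmetical hierarchy), the construction can be phrased via the
Turing jump:

\[
H_1 = \{\, p : p \text{ halts} \,\}, \qquad
H_{n+1} = \{\, p : p \text{ halts when run with an oracle for } H_n \,\}
\]

Each $H_{n+1}$ is strictly harder than $H_n$: it is undecidable even for a
machine equipped with an oracle for $H_n$.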

Other references:
http://en.wikipedia.org/wiki/Hypercomputation
http://en.wikipedia.org/wiki/Arithmetical_hierarchy

--Abram

On Tue, Oct 21, 2008 at 2:27 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED] wrote:
 As it happens, this definition of
 meaning admits horribly-terribly-uncomputable-things to be described!
 (Far worse than the above-mentioned super-omegas.) So, the truth or
 falsehood is very much not computable.

 I'm hesitant to provide the mathematical proof in this email, since it
 is already long enough... let's just say it is available upon
 request.

 Now I'm curious -- can these horribly-terribly-uncomputable-things be
 described to a non-mathematician? If so, consider this a request.







Re: [agi] constructivist issues

2008-10-21 Thread Charles Hixson

Abram Demski wrote:

Ben,
...
One reasonable way of avoiding the humans are magic explanation of
this (or humans use quantum gravity computing, etc) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.

I'm not sure the guru explanation is enough... who was the Guru for Humankind?

Thanks,

--Abram

  
You may not like "Therefore, we cannot understand the math needed to 
define our own intelligence," but I'm rather convinced that it's 
correct.  OTOH, I don't think that it follows from this that humans 
can't build a better-than-human-level AGI.  (I didn't say "engineer," 
because I'm not certain what connotations you put on that term.)  This 
does, however, imply that people won't be able to understand the 
better-than-human-level AGI.  They may well, however, understand parts of it, 
probably large parts.  And they may well be able to predict with fair 
certitude how it would react in numerous situations.  Just not in 
numerous other situations.


Care, then, must be used in the design so that we can predict 
favorable motivations behind the actions in important-to-us areas.  
Even this is probably impossible in detail, but then it doesn't *need* 
to be understood in detail.  If you can predict that it will make better 
choices than we can, and that its motives are benevolent, and that it 
has a good understanding of our desires...that should suffice.  And I 
think we'll be able to do considerably better than that.






AW: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
 

 



Here's my simple proof: algebra, or any other formal language for that
matter, is expressible in natural language, if inefficiently.

Words like "quantity", "sum", "multiple", "equals", and so on, are capable of
conveying the same meaning that the sentence "x*3 = y" conveys. The rules
for manipulating equations are likewise expressible in natural language.

Thus it is possible in principle to do algebra without learning the
mathematical symbols. Much more difficult for human minds perhaps, but
possible in principle. Thus, learning mathematical formalism via translation
from natural language concepts is possible (which is how we do it, after
all). Therefore, an intelligence that can learn natural language can learn
to do math.
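
A toy sketch of the quoted claim (the verbal pattern and the rule below are
illustrative inventions, not anything proposed in the thread): both the
equation and the rule for undoing it can be carried purely as words, and the
manipulation still goes through.

# Python toy: "x times three equals y", solved by the verbal rule
# "to undo 'times three', divide by three".
def solve_for_x(verbal_equation, y_value):
    words = verbal_equation.split()            # expects "x times <number-word> equals y"
    number_words = {"two": 2, "three": 3, "four": 4}
    multiplier = number_words[words[2]]
    return y_value / multiplier                # "divide y by the multiplier"

print(solve_for_x("x times three equals y", 12))   # -> 4.0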



The problem is not to learn the equations or the symbols.

The point is that a system which is able to understand and learn linguistic
knowledge is not necessarily able to use and apply its knowledge to solve
problems.

 

- Matthias

 






Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark,

 As you asked for references I will give you two:
 Thank you for setting a good example by including references but the
 contrast between the two is far better drawn in *For and Against Method*
 (ISBN 0-226-46774-0).


I read that book but didn't like it as much ... but you're right, it may be
an easier place for folks to start...


 Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for
 those who wish to educate themselves in the fundamentals of Philosophy of
 Science


All good stuff indeed.


 My view is basically that of Lakatos to the extent that I would challenge
 you to find anything in Lakatos that promotes your view over the one that
 I've espoused here.  Feyerabend's rants alternate between criticisms
 ultimately based upon the fact that what society frequently calls science
 is far more politics (see sociology of scientific knowledge); a 
 Tintnerian/Anarchist
 rant against structure and formalism; and incorrect portrayals/extensions of
 Lakatos (just like this list ;-).  Where he is correct is in the first
 case where society is not doing science correctly (i.e. where he provided
 examples regarded as indisputable instances of progress and showed how the
 political structures of the time fought against or suppressed them).  But
 his rants against structure and formalism (or, purportedly, for freedom and
 humanitarianism snort) are simply garbage in my opinion (though I'd guess
 that they appeal to you ;-).


Feyerabend appeals to my sense of humor ... I liked the guy.  I had some
correspondence with him when I was 18.  I wrote him a letter outlining some
of my ideas on philosophy of mind and asking his advice on where I should go
to grad school to study philosophy.  He replied telling me that if I wanted
to be a real philosopher I should **not** study philosophy academically nor
become a philosophy professor, but should study science or arts and then
pursue philosophy independently.  We chatted back and forth a little after
that.

I think Feyerabend did a good job of poking holes in some simplistic
accounts of scientific process, but ultimately I found Lakatos's arguments
mostly more compelling...

Lakatos did not argue for any one scientific method, as I recall.  Rather he
argued that different research programmes come with different methods, and
that the evaluation of a given piece of data is meaningful only within a
research programme, not generically.  He argued that comparative evaluation
of scientific theories is well-defined only for theories within the same
programme, and otherwise one has to talk about comparative evaluation of
whole scientific research programmes.

I am not entirely happy with Lakatos's approach either.  I find it
descriptively accurate yet normatively inadequate.

My own take is that science normatively **should** be based on a Bayesian
approach to evaluating theories based on data, and that different research
programmes then may be viewed as corresponding to different **priors** to be
used in doing Bayesian statistical evaluations.  I think this captures a lot
of Lakatos's insights but within a sound statistical framework.  This is my
social computational probabilistic philosophy of science.  The social
part is that each social group, corresponding to a different research
programme, has its own prior distribution.

I have also, more recently, posited a sort of universal prior, defined as
**simplicity of communication in natural language within a certain
community**.  This, I suggest, provides a baseline prior apart from any
particular research programme.
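
A minimal numerical sketch of that picture, with made-up numbers (nothing
below is a claim about any real research programme): two programmes assign
different priors to the same pair of theories, observe the same data, and end
up with different posteriors.

# Python toy: programme-specific priors, shared likelihoods.
def posterior(priors, likelihoods):
    joint = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

likelihood_of_data = {"theory_A": 0.30, "theory_B": 0.10}   # same data for everyone
programme_1_prior = {"theory_A": 0.5, "theory_B": 0.5}      # agnostic programme
programme_2_prior = {"theory_A": 0.1, "theory_B": 0.9}      # programme invested in B

print(posterior(programme_1_prior, likelihood_of_data))  # theory_A comes out ahead
print(posterior(programme_2_prior, likelihood_of_data))  # theory_B stays ahead despite the data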

However, I still don't think that a below-average-IQ human can pragmatically
(i.e., within the scope of the normal human lifetime) be taught to
effectively carry out statistical evaluation of theories based on data,
given the realities of how theories are formulated and how data is obtained
and presented, at the present time...

-- Ben





Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.
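
A skeletal sketch of the loop described above, with a stub scoring function
standing in for the trained evaluator (every name and number here is a
placeholder, not a proposal from the thread):

# Python toy: a GA proposes candidate "blueprints", an evaluator filters them.
import random

def score(blueprint):
    return -abs(sum(blueprint) - 42)           # stand-in for the learned evaluator

def mutate(blueprint):
    child = list(blueprint)
    child[random.randrange(len(child))] += random.choice([-1, 1])
    return child

population = [[random.randint(0, 10) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=score, reverse=True)   # the evaluator acts as the filter
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=score)
print(best, score(best))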




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
 But, by the time she overcame every other issue in the way of really 
 understanding science, her natural lifespan would have long been overspent...

You know, this is a *really* interesting point.  Effectively what you're saying 
(I believe) is that the difficulty isn't in learning but in UNLEARNING 
incorrect things that actively prevent you (via conflict) from learning correct 
things.  Is this a fair interpretation?

It's also particularly interesting when you compare it to information theory 
where the sole cost is in erasing a bit, not in setting it.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 2:56 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Hmm...

  I think that non-retarded humans are fully general intelligences in the 
following weak sense: for any fixed t and l, for any human there are some 
numbers M and T so that if the human is given amount M of external memory (e.g. 
notebooks to write on), that human could be taught to emulate AIXItl

  [see 
http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8&s=books&qid=1224614995&sr=1-1
 , or the relevant papers on Marcus Hutter's website]

  where each single step of AIXItl might take up to T seconds.

  This is a kind of generality that I think no animals but humans have.  So, in 
that sense, we seem to be the first evolved general intelligences.

  But, that said, there are limits to what any one of us can learn in a fixed 
finite amount of time.   If you fix T realistically then our intelligence 
decreases dramatically.

  And for the time-scales relevant in human life, it may not be possible to 
teach some people to do science adequately.

  I am thinking for instance of a 40 yr old student I taught at the University 
of Nevada way back when (normally I taught advanced math, but in summers I 
sometimes taught remedial stuff for extra $$).  She had taken elementary 
algebra 7 times before ... and had had extensive tutoring outside of class ... 
but I still was unable to convince her of the incorrectness of the following 
reasoning: The variable a always stands for 1.  The variable b always stands 
for 2. ... The variable z always stands for 26.   She was not retarded.  She 
seemed to have a mental block against algebra.  She could discuss politics and 
other topics with seeming intelligence.  Eventually I'm sure she could have 
been taught to overcome this block.  But, by the time she overcame every other 
issue in the way of really understanding science, her natural lifespan would 
have long been overspent...

  -- Ben G



  On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Yes, but each of those steps is very vague, and cannot be boiled down to 
a series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they 
only general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and 
emphasize) between a discoverer and a learner.  The cognitive 
skills/intelligence necessary to design questions, hypotheses, experiments, 
etc. are far in excess the cognitive skills/intelligence necessary to 
evaluate/validate those things.  My argument was meant to be that a general 
intelligence needs to be a learner-type rather than a discoverer-type although 
the discoverer type is clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence? 
 How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

 Also, those steps are heuristic and do not cover all cases.  For 
instance step 4 requires experimentation, yet there are sciences such as 
cosmology and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than 
physical experiments but it's still all about predictive power.  What is that 
next star/dinosaur going to look like?  What is it *never* going to look like 
(or else we need to expand or correct our theory)?  Is there anything that we 
can guess that we haven't tested/seen yet that we can verify?  What else is 
science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
 

Re: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Terren Suydam

As unpopular as philosophical discussions are lately, that is what this was: a 
debate about whether language is separable from general intelligence, in 
principle. So in-principle arguments about language and intelligence are 
relevant in that context, even if not embraced with open arms by the whole list.

Terren

--- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
OK, but I didn't think we were talking about what is possible in principle 
but may be unrealizable in practice...

It's possible in principle to create a supercomputer via training pigeons to 
peck in appropriate patterns, in response to the patterns that they notice 
other pigeons peck.  My friends in Perth and I designed such a machine once and 
called it the PC or Pigeon Computer.  I wish I'd retained the drawings and 
schematics!  We considered launching a company to sell them, IBM or 
International Bird Machines ... but failed to convince any VC's (even in the 
Internet bubble!!) and gave up...


ben g
 






  

  


Re: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Ben Goertzel
Yes, I find this thread relevant to practical AGI work ... although, I
wouldn't precisely say it's addressing the separability of language from
general intelligence: that would be an even broader question!!

ben

On Tue, Oct 21, 2008 at 5:23 PM, Terren Suydam [EMAIL PROTECTED] wrote:


  As unpopular as philosophical discussions are lately, that is what this was:
  a debate about whether language is separable from general intelligence, in
 principle. So in-principle arguments about language and intelligence are
 relevant in that context, even if not embraced with open arms by the whole
 list.

 Terren

 --- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 OK, but I didn't think we were talking about what is possible in
 principle but may be unrealizable in practice...

 It's possible in principle to create a supercomputer via training pigeons
 to peck in appropriate patterns, in response to the patterns that they
 notice other pigeons peck.  My friends in Perth and I designed such a
 machine once and called it the PC or Pigeon Computer.  I wish I'd retained
 the drawings and schematics!  We considered launching a company to sell
 them, IBM or International Bird Machines ... but failed to convince any VC's
 (even in the Internet bubble!!) and gave up...


 ben g














-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 5:20 PM, Mark Waser [EMAIL PROTECTED] wrote:

   But, by the time she overcame every other issue in the way of really
 understanding science, her natural lifespan would have long been
 overspent...
 You know, this is a *really* interesting point.  Effectively what you're
 saying (I believe) is that the difficulty isn't in learning but in
 UNLEARNING incorrect things that actively prevent you (via conflict) from
 learning correct things.  Is this a fair interpretation?


I think that's a large part of it

Incorrect things are wrapped up with correct things in people's minds

However, pure slowness at learning is another part of the problem ...




 It's also particularly interesting when you compare it to information
 theory where the sole cost is in erasing a bit, not in setting it.


 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, October 21, 2008 2:56 PM
 *Subject:* Re: AW: AW: [agi] Re: Defining AGI


 Hmm...

 I think that non-retarded humans are fully general intelligences in the
 following weak sense: for any fixed t and l, for any human there are some
 numbers M and T so that if the human is given amount M of external memory
 (e.g. notebooks to write on), that human could be taught to emulate AIXItl

 [see
 http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8&s=books&qid=1224614995&sr=1-1,
  or the relevant papers on Marcus Hutter's website]

 where each single step of AIXItl might take up to T seconds.

 This is a kind of generality that I think no animals but humans have.  So,
 in that sense, we seem to be the first evolved general intelligences.

 But, that said, there are limits to what any one of us can learn in a fixed
 finite amount of time.   If you fix T realistically then our intelligence
 decreases dramatically.

 And for the time-scales relevant in human life, it may not be possible to
 teach some people to do science adequately.

 I am thinking for instance of a 40 yr old student I taught at the
 University of Nevada way back when (normally I taught advanced math, but in
 summers I sometimes taught remedial stuff for extra $$).  She had taken
 elementary algebra 7 times before ... and had had extensive tutoring outside
 of class ... but I still was unable to convince her of the incorrectness of
 the following reasoning: The variable a always stands for 1.  The variable
 b always stands for 2. ... The variable z always stands for 26.   She was
 not retarded.  She seemed to have a mental block against algebra.  She could
 discuss politics and other topics with seeming intelligence.  Eventually I'm
 sure she could have been taught to overcome this block.  But, by the time
 she overcame every other issue in the way of really understanding science,
 her natural lifespan would have long been overspent...

 -- Ben G


 On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

   Yes, but each of those steps is very vague, and cannot be boiled down
 to a series of precise instructions sufficient for a stupid person to
 consistently carry them out effectively...
 So -- are those stupid people still general intelligences?  Or are they
 only general intelligences to the degree to which they *can* carry them
 out?  (because I assume that you'd agree that general intelligence is a
 spectrum like any other type).

 There also remains the distinction (that I'd like to highlight and
 emphasize) between a discoverer and a learner.  The cognitive
 skills/intelligence necessary to design questions, hypotheses, experiments,
 etc. are far in excess the cognitive skills/intelligence necessary to
 evaluate/validate those things.  My argument was meant to be that a general
 intelligence needs to be a learner-type rather than a discoverer-type
 although the discoverer type is clearly more effective.

 So -- If you can't correctly evaluate data, are you a general
 intelligence?  How do you get an accurate and effective domain model to
 achieve competence in a domain if you don't know who or what to believe?  If
 you don't believe in evolution, does that mean that you aren't a general
 intelligence in that particular realm/domain (biology)?

  Also, those steps are heuristic and do not cover all cases.  For
 instance step 4 requires experimentation, yet there are sciences such as
 cosmology and paleontology that are not focused on experimentation.
 I disagree.  They may be based upon thought experiments rather than
 physical experiments but it's still all about predictive power.  What is
 that next star/dinosaur going to look like?  What is it *never* going to
 look like (or else we need to expand or correct our theory)?  Is there
 anything that we can guess that we haven't tested/seen yet that we can
 verify?  What else is science?

 My *opinion* is that the following steps are pretty inviolable.
 A.  Observe
 B.  Form Hypotheses
 C.  Observe 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread BillK
On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:

 Incorrect things are wrapped up with correct things in peoples' minds

 However, pure slowness at learning is another part of the problem ...



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such checklist.
The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Wow!  Way too much good stuff to respond to in one e-mail.  I'll try to respond 
to more in a later e-mail but . . . . (and I also want to get your reaction to 
a few things first :-)

 However, I still don't think that a below-average-IQ human can pragmatically 
 (i.e., within the scope of the normal human lifetime) be taught to 
 effectively carry out statistical evaluation of theories based on data, 
 given the realities of how theories are formulated and how data is obtained 
 and presented, at the present time...

Hmmm.  After some thought, I have to start by saying that it looks like you're 
equating science with statistics and I've got all sorts of negative reactions 
to that.

First -- Sure.  I certainly have to agree for a below-average-IQ human and 
could even be easily convinced for an average IQ human if they had to do it all 
themselves.  And then, statistical packages quickly turn into a two-edged sword 
where people blindly use heuristics without understanding them (p  .05 
anyone?).

A more important point, though, is that humans natively do *NOT* use statistics 
but innately use very biased, non-statistical methods that *arguably* function 
better than statistics in real world data environments.   That alone would 
convince me that I certainly don't want to say that science = statistics.

 I am not entirely happy with Lakatos's approach either.  I find it 
 descriptively accurate yet normatively inadequate.

Hmmm.  (again)  To me that seems to be an interesting way of rephrasing our 
previous disagreement except that you're now agreeing with me.  (Gotta love it 
:-)

You find Lakatos's approach descriptively accurate?  Fine, that's the 
scientific method.  

You find it normatively inadequate?  Well, duh (but meaning no offense :-) . . 
. . you can't codify the application of the scientific method to all cases.  I 
easily agreed to that before.

What were we disagreeing on again?


 My own take is that science normatively **should** be based on a Bayesian 
 approach to evaluating theories based on data

That always leads me personally to the question "Why do humans operate on the 
biases that they do rather than Bayesian statistics?"  MY *guess* is that 
evolution COULD have implemented Bayesian methods but that the current methods 
are more efficient/effective under real world conditions (i.e. because of the 
real-world realities of feature extraction under dirty and incomplete or 
contradictory data and the fact that the Bayesian approach really does need to 
operate in an incredibly data-rich world where the features have already been 
extracted and ambiguities, other than occurrence percentages, are basically 
resolved).

**And adding different research programmes and/or priors always seems like such 
a kludge . . . . . 






  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 4:15 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,


 As you asked for references I will give you two:

Thank you for setting a good example by including references but the 
contrast between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).

  I read that book but didn't like it as much ... but you're right, it may be 
an easier place for folks to start...
   
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 

  All good stuff indeed.
   
My view is basically that of Lakatos to the extent that I would challenge 
you to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism snort) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).

  Feyerabend appeals to my sense of humor ... I liked the guy.  I had some 
correspondence with him when I was 18.  I wrote him a letter outlining some of 
my ideas on philosophy of mind and asking his advice on where I should go to 
grad school to study philosophy.  He replied telling me that if I wanted to be 
a real philosopher I should **not** study philosophy academically nor become a 
philosophy professor, but should study science or arts and then pursue 
philosophy independently.  We chatted back and forth a little after 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

AI!   :-)

This is what I was trying to avoid.   :-)

My objection starts with "How is a Bayes net going to do feature 
extraction?"


A Bayes net may be part of a final solution but as you even indicate, it's 
only going to be part . . . .


- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 21, 2008 4:51 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.










Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Incorrect things are wrapped up with correct things in peoples' minds



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem.


Um.  No.

I'm thinking that in order to integrate a new idea into your world model, 
you first have to resolve all the conflicts that it has with the existing 
model.  That could be incredibly expensive.


(And intelligence is emphatically not linear)

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.


I'm sure that Ben was saying that for doing discovery . . . . and I agree.

For evaluation, I'm not sure that we've come to closure on what either of us 
think . . . .   :-)




- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 21, 2008 5:50 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:


Incorrect things are wrapped up with correct things in peoples' minds

However, pure slowness at learning is another part of the problem ...




Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK










Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
Post #101 :V

Somehow this hit the wrong thread :|




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
Post #101 :V




RE: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ed Porter
Vlad,

 

Thanks for your below reply of Tue 10/21/2008 2:17 PM. 

 

I have spent hours trying to understand your explanation, and I now think I
understand much of it, but not all of it. I have copied much of it word for
word below and have inserted my questions about its various portions.  

 

If you could answer the following questions I might understand all of it.
If your formula works, it is important, and if possible I would like to
understand it.

 

Thank you!

 

=quote from Vlad's reply with my comments

=Vlad

C(N,S) is the total number of assemblies of size S that fit in the N nodes,
if you forget about overlaps.

 

Each assembly overlaps in X places with other C(S,X)*C(N-S,S-X) assemblies: 

 

if another assembly overlaps with our assembly in X places, then X nodes are
inside S nodes of our assembly, which gives

C(S,X) possible combinations, 

 

[=EWP:

--I understand this --- for any first given set of S nodes, there would be
C(S,X) different sub-combinations of X node from that given set of S nodes]

 

=Vlad

and the remaining S-X of the nodes are outside the assembly, in remaining
N-S nodes, 

which gives C(N-S,S-X) combinations, 

 

[EWP:

--I interpret this as saying: for a given fixed sub-combination of X nodes
from the many possible such subcombinations C(S,X), let us determine the
number of combinations C(N-S, S-X) of length S-X that can be created from
the N nodes minus the S nodes in the first given set, and that can be added
to said given combination of X nodes to create different sets of length S
--- is this correct?]

 

=Vlad

totaling to C(S,X)*C(N-S,S-X). 

[=EWP:

--I interpret this as saying: 

--sum the following over each of the sub-combinations C(S,X) of length X
that can be selected from the first given set of S nodes, 

-the number of each possible combination C(N-S,S-X) of length S-X,
described above, that when added to the X nodes from a given one of such
sub-combinations would form another set of length S

 

--this was counter intuitive to me at first, because I subconsciously
rejected the notion that each sub-combination of X nodes from the first
given set of length S could occur in all C(N-S,S-X) possible complimentary
combinations of length S-X, so I kept thinking there must be something wrong
with it, but now I realize there is no reason they shouldn't be allowed to 

 

--For each given sub-combination of X node in the first given set of length
S, all other sets of length S are complementary and include the same
sub-combination of X nodes.  

 

--Am I correct in assuming that over all possible sub-combinations of X
nodes in the first given set of length S, all possible sets of length S that
have an exact overlap of length X with the first given set will be counted?]


 

=Vlad

Thus, the total number of assemblies that overlap with our assembly in O to
S places (including our assembly itself) is 

T(N,S,O)= C(S,S)*C(N-S,S-S)+ C(S,S-1)*C(N-S,S-(S-1))+ ..+C(S,O)*C(N-S,S-O)

 

[=EWP:

--this is equivalent to the formula for T(N,S,O) that I used in my spread
sheet that I derived from your email of Thu 10/16/2008 7:50 pm, after having
switched the positions of the combination function variables

 

--I have rewriten this formula below to make its structure easier to see at
a glance

 

T(N,S,O)=

+C(S, S-0)*C(N-S, 0) (= 1)

+C(S, S-1)*C(N-S, 1)

+C(S, S-2)*C(N-S, 2)

+...

+C(S, O)*C(N-S, S-O)

 

OR

 

T(N,S,O) = SUM FROM X = 0 TO S-O OF C(S, S-X)*C(N-S, X)

 

Comparing this to C(S,X)*C(N-S,S-X) --- it appears that T(N,S,O) is equal to
the number of all combinations calculated by C(S,X)*C(N-S,S-X) where X is
greater than O, Thus it is an attempt to enumerate all such combinations in
which the overlap is more than O and thus which should be excluded from A.

 

--Thus it would seem A should equal C(N,S) - T(N,S,O), not C(N,S) /
T(N,S,O).  Why isn't this correct?]

 

 

=Vlad

Let's apply a trivial algorithm to our problem, adding an arbitrary assembly
to the working set merely if it doesn't conflict with any of the assemblies
already in the working set. 

 

Adding a new assembly will ban other T(N,S,O) assemblies from the total pool
of C(N,S) assemblies, thus each new assembly in the working set lowers the
number of remaining assemblies that we'll be able to add later. 

 

Some assemblies from this pool will be banned multiple times, but at least
C(N,S)/T(N,S,O) assemblies can be added without conflicts, 

Since T(N,S,O) is the maximum number of assemblies that each one in the pool
is able to subtract from the total pool of assemblies.

 

[=EWP:

--I don't understand this.  

 

--First, yes, each new assembly of length S added to the working set lowers
the number of remaining assemblies that we'll be able to add later, but
adding a 

FW: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Ed Porter
Ben,

 

Upon thinking  more about my comments below, in an architecture such as the
brain where connections are much cheaper (at least more common) than nodes,
cell assemblies might make sense.

 

This is particularly true since one could develop tricks to reduce the
number of links that would be required between assemblies representing
different concepts for the implication between such concepts to a number not
more than two to four times the value of O, the maximum overlap allowed
between assembly populations.  This could be done by having assemblies that
are activated enough to be transmitting synchronize their firing, so that
their inputs to another concept node could be filtered out from background
noise, and if such synchronized input were above a given threshold, the
assembly receiving them could be activated.  Perhaps it would be valuable
to have the communication between two concept nodes first blast a very
strong synchronized signal to allow inputs above cross talk to be
identified, and then have a more variable signal reflecting the degree of
the input, which could be lower, since the nodes receiving them would know
from which links to expect such a signal.

 

Since in a brain-like computer you want a fair amount of redundancy in your
connections, for reliability of connections in a noisy setting, and to
enable variable values to be transmitted with a reasonable degree of
statistical smoothing, the cost of the extra connection required to exceed
the maximal cross talk between concept assemblies might not be above that
which would be desired for such other purposes.

 

Ed Porter

 

 

-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2008 3:09 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Who is smart enough to answer this question?

 

Ben, 

 

You're right.  Although one might seem to be getting a free lunch in terms
of being able to create more assemblies than the number of nodes from which
they are created, it would appear that the extra number of links required
not only for auto-associative activation within an assembly, but that would
be required to activate an assembly from the outside with a signal that
would be distinguishable over the cross talk, may prevent such a use of node
assemblies from resulting in any actual saving.

 

If Vlad's formula for a lower bound is correct, the one that I used in the
Excel spreadsheet I sent out earlier under this thread, then it is clear one
can create substantially more assemblies than nodes, with maximum overlaps
below 5%, but it is not clear the increased costs in extra connections would
be worth it, since it is not clear that the cost of a node need be that
much higher than the cost of a link.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 21, 2008 11:28 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


makes sense, yep...

i guess my intuition is that there are obviously a huge number of
assemblies, so that the number of assemblies is not the hard part, the hard
part lies in the weights...

On Tue, Oct 21, 2008 at 11:18 AM, Ed Porter [EMAIL PROTECTED] wrote:

Ben,

 

In my email starting this thread on 10/15/08 7:41pm I pointed out that a
more sophisticated version of the algorithm would have to take connection
weights into account in determining cross talk, as you have suggested below.
But I asked for the answer to a simpler version of the problem, since
that might prove difficult enough, and since I was just trying to get some
rough feeling for whether or not node assemblies might offer substantial
gains in possible representational capability, before delving more deeply
into the subject.

 

Ed Porter

 

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 

Sent: Monday, October 20, 2008 10:52 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


But, suppose you have two assemblies A and B, which have nA and nB neurons
respectively, and which overlap in O neurons...

It seems that the system's capability to distinguish A from B is going to
depend on the specific **weight matrix** of the synapses inside the
assemblies A and B, not just on the numbers nA, nB and O.

And this weight matrix depends on the statistical properties of the memories
being remembered.

So, these counting arguments you're trying to do are only going to give you
a very crude indication, anyway, right? 

ben

On Mon, Oct 20, 2008 at 5:09 PM, Ed Porter [EMAIL PROTECTED] wrote:

Ben, 

 

I am interested in exactly the case where individual nodes partake in
multiple attractors,  

 

I use the notation A(N,O,S) which is similar to the A(n,d,w) formula of
constant-weight codes, except, as Vlad says, you would plug my variables into
the constant-weight formula by using A(N, 2*(S-O+1), S).

 

I have asked my question assuming each node 

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
(I agree with the points I don't quote here)

General reiteration on notation: O-1 is the maximum allowed overlap,
overlap of O is already not allowed (it was this way in your first
message).

On Wed, Oct 22, 2008 at 3:08 AM, Ed Porter [EMAIL PROTECTED] wrote:

 T(N,S,O) = SUM FROM X = 0 TO S-O OF C(S, S-X)*C(N-S, X)


To match with the explanation of the size of the overlap, I intended
T(N,S,O)= C(S,S)*C(N-S,S-S)+ C(S,S-1)*C(N-S,S-(S-1))+ ...+C(S,O)*C(N-S,S-O)
to be parsed as
T(N,S,O) = SUM FROM X =O TO S OF C(S,X)*C(N-S,S-X)



 Comparing this to C(S,X)*C(N-S,S-X) --- it appears that T(N,S,O) is equal to
 the number of all combinations calculated by C(S,X)*C(N-S,S-X) where X is
 greater than O, Thus it is an attempt to enumerate all such combinations in
 which the overlap is more than O and thus which should be excluded from A.


I don't exclude them from A, as I don't know which of them will go to
A and which will get banned multiple times. I exclude them from
overall pool of C(N,S).



 --First, yes, each new assembly of length S added to the working set lowers
 the number of remaining assemblies that we'll be able to add later, but
 adding a given new assembly will ban not T(N,S,O) assemblies, but rather
 only all those assemblies that overlap with it by more than O nodes.


But T(N,S,O) IS the number of all those assemblies that overlap with a
given assembly by O or more nodes (having from X=O to X=S nodes of
overlap).



 --Second, what are the cases where assemblies will be banned multiple times
 that you mentioned in the above text?


It's one of the reasons it's a lower bound: in reality, some of the
assemblies are banned multiple times, which leaves more free
assemblies that could be added to the working set later.



 --Third --- as mentioned in my last group of comments --- why doesn't A =
 C(N,S) - T(N,S,O), since C(N,S) is the total number of combinations of
 length S that can be formed from N nodes, and T(N,S,O) appears to enumerate
 all the combinations that occur with each possible overlap value greater O.


It's only overlap with one given assembly, blind to any other
interactions, it says nothing about ideal combination of assemblies
that manages to keep the overlap between each pair in check.



 --Fifth, is possible that even though T(N,S,O) appears to enumerate all
 possible combinations in which all sets overlap by more than O, that it
 fails to take into account possible combinations of sets of size S in which
 some sets overlap by more than O and others do not?

 --in which case T(N,S,O) would be smaller than the number of all prohibited
 combinations of sets of length S.  Or would all the possible sets of length
 S which overlap be have been properly taken into account in the above
 formula for T?


T doesn't reason about combinations of sets, it's a filter on the
individual sets from the total of C(N,S).



 --Sixth, if C(S,X)*C(N-S,S-X) enumerates all possible combinations having an
 overlap of X, why can't one calculate A as follows?

 A = SUM FROM X = 0 TO O OF C(S,X)*C(N-S,S-X)


Because some of these sets intersect with each other, you can't
include them all.
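
A small numerical sketch of the bound under discussion, using only the
formulas already stated in this thread and toy parameters (not the values
from Ed's spreadsheet): C(N,S) assemblies in total, T(N,S,O) banned per
chosen assembly, and C(N,S)/T(N,S,O) as the greedy lower bound on the number
of assemblies with pairwise overlap at most O-1.

# Python sketch of the counting argument.
from math import comb

def T(N, S, O):
    # assemblies overlapping a fixed size-S assembly in O or more nodes
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

def lower_bound(N, S, O):
    # greedy argument: each chosen assembly bans at most T(N,S,O) others
    return comb(N, S) / T(N, S, O)

N, S, O = 100, 10, 3   # toy parameters
print(comb(N, S), T(N, S, O), lower_bound(N, S, O))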


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark W wrote:

What were we disagreeing on again?



This conversation has drifted into interesting issues in the philosophy of
science, most of which you and I seem to substantially agree on.

However, the point I took issue with was your claim that a stupid person
could be taught to effectively do science ... or (your later modification)
evaluation of scientific results.

At the time I originally took exception to your claim, I had not read the
earlier portion of the thread, and I still haven't; so I still do not know
why you made the claim in the first place.

ben





Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Charles,

You are right to call me out on this, as I really don't have much
justification for rejecting that view beyond "I don't like it, it's
not elegant."

But, I don't like it! It's not elegant!

About the connotations of engineer... more specifically, I should
say that this prevents us from making one universal normative
mathematical model of intelligence, since our logic cannot describe
itself. Instead, we would be doomed to make a series of more and more
general models (AIXI being the first and most narrow), all of which
fall short of human logic.

Worse, the implication is that this is not because human logic sits at
some sort of maximum; human intelligence would be just another rung in
the ladder from the perspective of some mathematically more powerful
alien species, or human mutant.

--Abram

On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
 Abram Demski wrote:

 Ben,
 ...
 One reasonable way of avoiding the humans are magic explanation of
 this (or humans use quantum gravity computing, etc) is to say that,
 OK, humans really are an approximation of an ideal intelligence
 obeying those assumptions. Therefore, we cannot understand the math
 needed to define our own intelligence. Therefore, we can't engineer
 human-level AGI. I don't like this conclusion! I want a different way
 out.

 I'm not sure the guru explanation is enough... who was the Guru for
 Humankind?

 Thanks,

 --Abram



 You may not like Therefore, we cannot understand the math needed to define
 our own intelligence., but I'm rather convinced that it's correct.  OTOH, I
 don't think that it follows from this that humans can't build a better than
 human-level AGI.  (I didn't say engineer, because I'm not certain what
 connotations you put on that term.)  This does, however, imply that people
 won't be able to understand the better than human-level AGI.  They may
 well, however, understand parts of it, probably large parts.  And they may
 well be able to predict with fair certitude how it would react in numerous
 situations.  Just not in numerous other situations.

 The care, then, must be used in designing so that we can predict favorable
 motivations behind the actions in important-to-us  areas.  Even this is
 probably impossible in detail, but then it doesn't *need* to be understood
 in detail.  If you can predict that it will make better choices than we can,
 and that it's motives are benevolent, and that it has a good understanding
 of our desires...that should suffice.  And I think we'll be able to do
 considerably better than that.








Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
I am completely unable to understand what this paragraph is supposed to
mean:

***
One reasonable way of avoiding the humans are magic explanation of
this (or humans use quantum gravity computing, etc) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.
***

Explanation of WHAT?  Of your intuitive feeling that you are uncomputable,
that you have no limits in what you can do?

Why is this intuitive feeling any more worthwhile than some people's
intuitive
feeling that they have some kind of absolute free will not allowed by
classical
physics or quantum theory??

Personally my view is as follows.  Science does not need to intuitively
explain all
aspects of our experience: what it has to do is make predictions about
finite sets of finite-precision observations, based on previously-collected
finite sets of finite-precision observations.

It is not impossible that we are unable to engineer intelligence, even
though
we are intelligent.  However, your intuitive feeling of awesome
supercomputable
powers seems an extremely weak argument in favor of this inability.

You have not convinced me that you can do anything a computer can't do.

And, using language or math, you never will -- because any finite set of
symbols
you can utter, could also be uttered by some computational system.

-- Ben G




On Tue, Oct 21, 2008 at 9:13 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Charles,

 You are right to call me out on this, as I really don't have much
 justification for rejecting that view beyond I don't like it, it's
 not elegant.

 But, I don't like it! It's not elegant!

 About the connotations of engineer... more specifically, I should
 say that this prevents us from making one universal normative
 mathematical model of intelligence, since our logic cannot describe
 itself. Instead, we would be doomed to make a series of more and more
 general models (AIXI being the first and most narrow), all of which
 fall short of human logic.

 Worse, the implication is that this is not because human logic sits at
 some sort of maximum; human intelligence would be just another rung in
 the ladder from the perspective of some mathematically more powerful
 alien species, or human mutant.

 --Abram

 On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
 [EMAIL PROTECTED] wrote:
  Abram Demski wrote:
 
  Ben,
  ...
  One reasonable way of avoiding the humans are magic explanation of
  this (or humans use quantum gravity computing, etc) is to say that,
  OK, humans really are an approximation of an ideal intelligence
  obeying those assumptions. Therefore, we cannot understand the math
  needed to define our own intelligence. Therefore, we can't engineer
  human-level AGI. I don't like this conclusion! I want a different way
  out.
 
  I'm not sure the guru explanation is enough... who was the Guru for
  Humankind?
 
  Thanks,
 
  --Abram
 
 
 
  You may not like Therefore, we cannot understand the math needed to
 define
  our own intelligence., but I'm rather convinced that it's correct.
  OTOH, I
  don't think that it follows from this that humans can't build a better
 than
  human-level AGI.  (I didn't say engineer, because I'm not certain what
  connotations you put on that term.)  This does, however, imply that
 people
  won't be able to understand the better than human-level AGI.  They may
  well, however, understand parts of it, probably large parts.  And they
 may
  well be able to predict with fair certitude how it would react in
 numerous
  situations.  Just not in numerous other situations.
 
  The care, then, must be used in designing so that we can predict
 favorable
  motivations behind the actions in important-to-us  areas.  Even this is
  probably impossible in detail, but then it doesn't *need* to be
 understood
  in detail.  If you can predict that it will make better choices than we
 can,
  and that it's motives are benevolent, and that it has a good
 understanding
  of our desires...that should suffice.  And I think we'll be able to do
  considerably better than that.
 
 
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must 

Re: [agi] constructivist issues

2008-10-21 Thread Trent Waddington
On Wed, Oct 22, 2008 at 11:21 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Personally my view is as follows.  Science does not need to intuitively
 explain all
 aspects of our experience: what it has to do is make predictions about
 finite sets of finite-precision observations, based on previously-collected
 finite sets of finite-precision observations.


I can do one better than that.  If you don't believe that AGI is
possible then bugger right off.  We don't want your mysticism around
here.

Is it too much to ask that the people on this list be interested in
the topic?  If you don't think it's possible, go back to your cave and
pray for our souls or something.

Trent




Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben,

This is not what I meant at all! I am not trying to make an argument
from any sort of intuitive feeling of absolute free will in that
paragraph (or, well, ever).

That paragraph was referring to Tarski's undefinability theorem.

Quoting the context directly before the paragraph in question:

I am suggesting a broader problem that will apply to a wide
class of formulations of idealized intelligence such as AIXI: if their
internal logic obeys a particular set of assumptions, it will become
prone to Tarski's Undefinability Theorem. Therefore, we humans will be
able to point out a particular class of concepts that it cannot reason
about; specifically, the very concepts used in describing the ideal
intelligence in the first place.

To re-explain: We might construct generalizations of AIXI that use a
broader range of models. Specifically, it seems reasonable to try
models that are extensions of first-order arithmetic, such as
second-order arithmetic (analysis), ZF-set theory... (Models in
first-order logic of course could be considered equivalent to
Turing-machine models, the current AIXI.) Description length then
becomes description-length-in-language-X. But, any such extension is
doomed to a simple objection:

(1) We humans understand the semantics of formal system X.
(2) The undefinability theorem shows that formal system X cannot
understand its own semantics.

That is what needs an explanation. The one we can all agree is wrong:
"Humans are magic; we're better than any formal system." The one
Charles Hixson is OK with, but I dislike: "Humans approximate a
generalization of AIXI that satisfies the above assumptions;
therefore, the logic we use is some extension of arithmetic, but we
are incapable of defining that logic thanks to the undefinability
theorem."
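
For reference, the theorem being invoked can be stated roughly as follows (a
standard paraphrase, not a new claim):

  For any consistent formal system X strong enough to represent its own
  syntax (e.g. any extension of first-order arithmetic), there is no formula
  True_X in the language of X such that, for every sentence S of X,

    X \vdash \mathrm{True}_X(\ulcorner S \urcorner) \leftrightarrow S

  That is, the truth predicate for X is not definable within X itself.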

--Abram

On Tue, Oct 21, 2008 at 9:21 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I am completely unable to understand what this paragraph is supposed to
 mean:

 ***
 One reasonable way of avoiding the humans are magic explanation of
 this (or humans use quantum gravity computing, etc) is to say that,
 OK, humans really are an approximation of an ideal intelligence
 obeying those assumptions. Therefore, we cannot understand the math
 needed to define our own intelligence. Therefore, we can't engineer
 human-level AGI. I don't like this conclusion! I want a different way
 out.
 ***

 Explanation of WHAT?  Of your intuitive feeling that you are uncomputable,
 that you have no limits in what you can do?

 Why is this intuitive feeling any more worthwhile than some people's
 intuitive
 feeling that they have some kind of absolute free will not allowed by
 classical
 physics or quantum theory??

 Personally my view is as follows.  Science does not need to intuitively
 explain all
 aspects of our experience: what it has to do is make predictions about
 finite sets of finite-precision observations, based on previously-collected
 finite sets of finite-precision observations.

 It is not impossible that we are unable to engineer intelligence, even
 though
 we are intelligent.  However, your intuitive feeling of awesome
 supercomputable
 powers seems an extremely weak argument in favor of this inability.

 You have not convinced me that you can do anything a computer can't do.

 And, using language or math, you never will -- because any finite set of
 symbols
 you can utter, could also be uttered by some computational system.

 -- Ben G




 On Tue, Oct 21, 2008 at 9:13 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Charles,

 You are right to call me out on this, as I really don't have much
 justification for rejecting that view beyond "I don't like it, it's
 not elegant."

 But, I don't like it! It's not elegant!

 About the connotations of engineer... more specifically, I should
 say that this prevents us from making one universal normative
 mathematical model of intelligence, since our logic cannot describe
 itself. Instead, we would be doomed to make a series of more and more
 general models (AIXI being the first and most narrow), all of which
 fall short of human logic.

 Worse, the implication is that this is not because human logic sits at
 some sort of maximum; human intelligence would be just another rung in
 the ladder from the perspective of some mathematically more powerful
 alien species, or human mutant.

 --Abram

 On Tue, Oct 21, 2008 at 3:29 PM, Charles Hixson
 [EMAIL PROTECTED] wrote:
  Abram Demski wrote:
 
  Ben,
  ...
  One reasonable way of avoiding the humans are magic explanation of
  this (or humans use quantum gravity computing, etc) is to say that,
  OK, humans really are an approximation of an ideal intelligence
  obeying those assumptions. Therefore, we cannot understand the math
  needed to define our own intelligence. Therefore, we can't engineer
  human-level AGI. I don't like this conclusion! I want a different way
  out.
 
  I'm not sure the guru explanation is enough... who was the Guru for
  Humankind?
 
  Thanks,
 
  --Abram
 
 
 

Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
Abram,


 To re-explain: We might construct generalizations of AIXI that use a
 broader range of models. Specifically, it seems reasonable to try
 models that are extensions of first-order arithmetic, such as
 second-order arithmetic (analysis), ZF-set theory... (Models in
 first-order logic of course could be considered equivalent to
 Turing-machine models, the current AIXI.) Description length then
 becomes description-length-in-language-X. But, any such extension is
 doomed to a simple objection:

 (1) We humans understand the semantics of formal system X.
 (2) The undefinability theorem shows that formal system X cannot
 understand its own semantics.

 That is what needs an explanation.



It doesn't, because **I see no evidence that humans can
understand the semantics of formal system X in any sense that
a digital computer program cannot**

Whatever this mysterious understanding is that you believe you
possess, **it cannot be communicated to me in language or
mathematics**.  Because any series of symbols you give me, could
equally well be produced by some being without this mysterious
understanding.

Can you describe any possible finite set of finite-precision observations
that could provide evidence in favor of the hypothesis that you possess
this posited understanding, and against the hypothesis that you are
something equivalent to a digital computer?

I think you cannot.

So, your belief in this posited understanding has nothing to do with
science, it's
basically a kind of religious faith, it seems to me... '-)

-- Ben G





Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
 It doesn't, because **I see no evidence that humans can
 understand the semantics of formal system X in any sense that
 a digital computer program cannot**

I agree with you there. Our disagreement is about what formal systems
a computer can understand. (The rest of your post seems to depend on
this, so I will leave it there for the moment.)


 Whatever this mysterious understanding is that you believe you
 possess, **it cannot be communicated to me in language or
 mathematics**.  Because any series of symbols you give me, could
 equally well be produced by some being without this mysterious
 understanding.

 Can you describe any possible finite set of finite-precision observations
 that could provide evidence in favor of the hypothesis that you possess
 this posited understanding, and against the hypothesis that you are
 something equivalent to a digital computer?

 I think you cannot.

 So, your belief in this posited understanding has nothing to do with
 science, it's
 basically a kind of religious faith, it seems to me... '-)

 -- Ben G


 


Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Ben,

How accurate would it be to describe you as a finitist or
ultrafinitist? I ask because your view about restricting quantifiers
seems to reject even the infinities normally allowed by
constructivists.

--Abram




Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 10:11 PM, Abram Demski [EMAIL PROTECTED] wrote:

  It doesn't, because **I see no evidence that humans can
  understand the semantics of formal system X in any sense that
  a digital computer program cannot**

 I agree with you there. Our disagreement is about what formal systems
 a computer can understand. (The rest of your post seems to depend on
 this, so I will leave it there for the moment.)



Actually, our disagreement seems to be about the meanings of words like
"exist" or "understand".

To make things clearer for you, I'll introduce new words:

existP = exist pragmatically

understandP = understand pragmatically

I will say that A existPs iff there is some finite Boolean combination C of
finite-precision observations so that

C implies (A existPs)
~C implies ~(A existPs)

Similarly, I will say that A understandPs B iff there is some finite
Boolean combination C of finite-precision observations so that

C implies (A understandPs B)
~C implies ~(A understandPs B)

That is what I mean by "exist" and "understand" in a science and engineering
context.
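
(One small compression of the definition above, just rewriting the two
clauses: for a fixed C, "C implies P" together with "~C implies ~P" is the
same as the single biconditional P <-> C.  So, in symbols, with C ranging over
finite Boolean combinations of finite-precision observations:

  \mathrm{existP}(A) \iff \exists C\,\big[\, C \leftrightarrow (A\ \mathrm{existPs}) \,\big]

  \mathrm{understandP}(A,B) \iff \exists C\,\big[\, C \leftrightarrow (A\ \mathrm{understandPs}\ B) \,\big]

This is only a restatement of the clauses as given, not an additional
assumption.)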

In essence, this is Peircean pragmatism (more general than strict
verificationism), as summarized e.g. at

http://www.iep.utm.edu/p/PeircePr.htm#H4

What do you mean by the terms?

-- Ben G





Re: [agi] constructivist issues

2008-10-21 Thread Ben Goertzel
I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can
certainly be quite useful.  I'd rather use differential calculus to do
calculations, than do everything using finite differences.

It's just that, from a science perspective, these mathematical infinities
have to be considered finite formal constructs ... they don't existP except
in this way ...

I'm not going to claim the pragmatist perspective is the only subjectively
meaningful one.  But so far as I can tell it's the only useful one for
science and engineering...

To take a totally different angle, consider the thought X = "This is a
thought that is way too complex for me to ever have".

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it symbolically
and formally.  I can reason about it and empathize with it by analogy to "A
thought that is way too complex for my three-year-old past-self to have ever
had", and so forth.

But it seems I can't ever really think X, except by being logically
inconsistent within that same thought ... this is the Godel limitation
applied to my own mind...
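
(A formal analogue, for concreteness -- the standard diagonal-lemma
construction, offered as an analogy rather than a claim about minds: for a
consistent, sufficiently strong theory T there is a sentence G with

  T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)

so T can state G and manipulate it formally, yet cannot consistently prove
it.)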

I don't want to diss the personal value of logically inconsistent thoughts.
But I doubt their scientific and engineering value.

-- Ben G



On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 How accurate would it be to describe you as a finitist or
 ultrafinitist? I ask because your view about restricting quantifiers
 seems to reject even the infinities normally allowed by
 constructivists.

 --Abram






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] constructivist issues

2008-10-21 Thread Russell Wallace
On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski [EMAIL PROTECTED] wrote:
 I agree with you there. Our disagreement is about what formal systems
 a computer can understand.

I'm also not quite sure what the problem is, but suppose we put it this way:

I think the most useful way to understand the family of algorithms of
which AIXI is the best-known member is that they effectively amount
to: create (by perfect simulation) all possible universes and select
the one that exhibits the desired behavior.

Suppose we took a bunch of data from our universe as input. If the
amount of data were large enough to be specific enough, our universe
(or at least one with the same physical laws) would be created and
selected as producing results that match the data.

So the universe thus created would contain humans, and therefore
contain all the understanding of mathematics that actual humans have.

Of course, this understanding would not be contained in the original kernel.

But this should not be surprising. Consider a realistic AI which can't
create whole universes, but can learn about mathematics. Suppose the
kernel of the AI is written in Lisp: does the Lisp compiler understand
incomputable numbers? No, but that's no reason the AI as a whole
can't, at least to the extent that we humans do.

Does this help at all?
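
(For concreteness, here is a toy sketch of the "enumerate candidate models,
keep only those that reproduce the observed data, prefer the shortest" idea.
It is a crude Solomonoff-flavored stand-in rather than AIXI itself, and the
model class -- affine recurrences mod 256 -- is invented purely for
illustration.)

# Toy illustration of "select the models that match the data".
# The model class (x_{n+1} = (a*x_n + b) mod 256) is a made-up stand-in for
# "all computable universes"; the real construction enumerates programs and
# weights them by description length.

def run_model(a, b, x0, steps):
    """Simulate one toy 'universe' for a given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append((a * xs[-1] + b) % 256)
    return xs

def select_models(observed):
    """Return all (a, b) models whose simulation reproduces the observations."""
    matches = []
    for a in range(256):
        for b in range(256):
            if run_model(a, b, observed[0], len(observed) - 1) == observed:
                matches.append((a, b))
    # Crude "description length": models with smaller constants come first,
    # echoing the Occam-style preference for simpler models.
    matches.sort(key=lambda m: (m[0] + m[1]).bit_length())
    return matches

if __name__ == "__main__":
    data = [3, 10, 31, 94, 27]        # pretend these are our observations
    print(select_models(data))        # the surviving toy "universes"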




Re: [agi] constructivist issues

2008-10-21 Thread Abram Demski
Russell,

I could be wrong here. Jurgen's Super Omega is based on what I called
halting2, and while it would be simple to define super-super-omega
from halting3, and so on, I have not seen it done. The reason I called
these higher levels "horribly-terribly-uncomputable" is that
Jurgen's super-omega is at least defined in terms of things
computable in the limit. As he says, he is pushing the boundaries of
the constructivist program. Higher omegas than the one he defines
would obviously not be constructivist.
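
(To make the relativization behind this hierarchy concrete, here is the usual
diagonalization written as a Python sketch. The 'halts' argument is a
hypothetical oracle, not real code -- the whole point is that no such function
can exist, and that the same construction defeats any oracle you postulate,
one level up.)

# Sketch of the diagonalization behind the halting hierarchy.
# 'halts' is a HYPOTHETICAL oracle taking (program_source, program_input) and
# returning True/False; no real Python function can implement it.

def make_diagonal(halts):
    """Given any claimed halting oracle, build a program that defeats it."""
    def diagonal(program_source):
        if halts(program_source, program_source):
            while True:          # oracle says "halts" -> loop forever
                pass
        return "halted"          # oracle says "loops" -> halt immediately
    return diagonal

def fake_oracle(program_source, program_input):
    """A stand-in so the sketch runs at all; a correct oracle is impossible."""
    return True

if __name__ == "__main__":
    d = make_diagonal(fake_oracle)
    print("diagonal program constructed:", d)
    # If 'halts' were a correct oracle, running d on its own source would
    # contradict the oracle's answer, so no program computes 'halts'.
    # Relativizing the argument: a machine equipped with a halting oracle
    # faces the same construction for its own halting problem, giving the
    # next level of the hierarchy, and so on.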

Well, I'm calling it a night. I will reply to Ben's further comments
at a later date.

--Abram

On Tue, Oct 21, 2008 at 10:55 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Tue, Oct 21, 2008 at 8:13 PM, Abram Demski [EMAIL PROTECTED] wrote:
 The wikipedia article Ben cites is definitely meant for
 mathematicians, so I will try to give an example.

 Yes indeed -- thanks!

 The halting problem asks us about halting facts for a single program.
 To make it worse, I could ask about an infinite class of programs:
 "All programs satisfying Q eventually halt." If Q is some computable
 function that accepts some programs and rejects others, it is only a
 little worse than the halting problem; call this halting2. If Q is
 more difficult to evaluate than that, say if Q is as hard as solving
 the halting problem, it's more difficult; call problems like this
 halting3. If Q is as hard as halting2, then call that halting4. If Q
 is as hard as halting3, then call the resulting class halting5. And so
 on.

 Right -- if I understand correctly, this is equivalent to the line of
 reasoning I came up with when I first saw the proof of the incomputability
 of the halting problem: suppose you have an oracle that can solve the
 halting problem; can that answer all questions? No, because by the
 same logic you can show that to predict the behavior of an oracle
 requires a super oracle.

 But I was under the impression this is just the same as the super
 Omegas? And that you were talking about something beyond that?




Re: [agi] constructivist issues

2008-10-21 Thread charles griffiths
I disagree, and believe that I can think X: "This is a thought (T) that is way
too complex for me to ever have."

Obviously, I can't think T and then think X, but I might represent T as a 
combination of myself plus a notebook or some other external media. Even if I 
only observe part of T at once, I might appreciate that it is one thought and 
believe (perhaps in error) that I could never think it.

I might even observe T in action, if T is the result of billions of 
measurements, comparisons and calculations in a computer program.

Isn't it just like thinking "This is an image that is way too detailed for me
to ever see"?

Charles Griffiths

--- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM


I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can certainly 
be quite useful.  I'd rather use differential calculus to do calculations, than 
do everything using finite differences.


It's just that, from a science perspective, these mathematical infinities have 
to be considered finite formal constructs ... they don't existP except in this 
way ...

I'm not going to claim the pragmatist perspective is the only subjectively 
meaningful one.  But so far as I can tell it's the only useful one for science 
and engineering...


To take a totally different angle, consider the thought X = "This is a thought
that is way too complex for me to ever have".

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it symbolically and 
formally.  I can reason about it and empathize with it by analogy to "A thought
that is way too complex for my three-year-old past-self to have ever had", and
so forth.


But it seems I can't ever really think X, except by being logically 
inconsistent within that same thought ... this is the Godel limitation applied 
to my own mind...

I don't want to diss the personal value of logically inconsistent thoughts.  
But I doubt their scientific and engineering value.


-- Ben G



On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED] wrote:

Ben,

How accurate would it be to describe you as a finitist or
ultrafinitist? I ask because your view about restricting quantifiers
seems to reject even the infinities normally allowed by
constructivists.

--Abram






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson


[agi] If your AGI can't learn to play chess it is no AGI

2008-10-21 Thread Dr. Matthias Heger
It seems to me that many people think that embodiment is very important for
AGI.

For instance, some people seem to believe that you can't be a good
mathematician if you haven't had some embodied experience.

 

But this would have a rather strange consequence:

If you give your AGI a difficult mathematical problem to solve, then it
would answer:

 

"Sorry, I still cannot solve your problem, but let me walk with my body
through the virtual world.

Hopefully, I will then understand your mathematical question and, even more
hopefully, I will be able to solve it after some further embodied
experience."

 

AGI is the ability to solve different problems in different domains. But
such an AGI would need to gain experience in domain d1 in order to solve
problems of domain d2. Does this really make sense if all the information
necessary to solve problems of d2 is in d2? I think an AGI which has to gain
experience in d1 in order to solve a problem of domain d2, when d2 contains
everything needed to solve this problem, is no AGI. How should such an AGI
know what experiences in d1 are necessary to solve the problem of d2?

 

In my opinion, a real AGI must be able to solve a problem of a domain d
without leaving this domain, if everything needed to solve the problem is
available within this domain.

 

From this we can define a simple benchmark which is not sufficient for AGI
but which is *necessary* for a system to be an AGI system:

 

The domain of chess contains everything there is to know about chess. So, to
become a good chess player, learning from playing chess alone must be
sufficient. Thus, an AGI which is not able to improve its chess abilities from
playing chess alone is no AGI.

 

Therefore, my first steps in the roadmap towards AGI would be the following:

1.   Make a concept for your architecture of your AGI

2.   Implement the software for your AGI

3.   Try if your AGI is able to become a good chess player from learning
in the domain of chess alone.

4.   If your AGI can't even learn to play good chess then it is no AGI
and it would be a waste of time to make experiences with your system in more
complex domains.
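
(As a concrete starting point for step 3, here is a minimal self-play harness
sketch. It assumes the third-party python-chess package is available; the
random policy is only a placeholder for the system under test, and the
update() hook is hypothetical, not part of any real API.)

# Minimal self-play harness for the chess benchmark (step 3 above).
# Assumes the python-chess package is installed (pip install python-chess).
# RandomPolicy is a placeholder for the AGI under test: swap choose() and
# update() for calls into your own system.

import random
import chess

class RandomPolicy:
    """Placeholder 'learner' that picks uniformly random legal moves."""
    def choose(self, board):
        return random.choice(list(board.legal_moves))

    def update(self, moves, result):
        # Hypothetical learning hook: a real system would learn from the
        # finished game here.
        pass

def play_game(white, black, max_moves=200):
    """Play one game between two policies; return the moves and the result."""
    board = chess.Board()
    moves = []
    while not board.is_game_over() and len(moves) < max_moves:
        policy = white if board.turn == chess.WHITE else black
        move = policy.choose(board)
        board.push(move)
        moves.append(move)
    return moves, board.result(claim_draw=True)

def benchmark(agent, n_games=100):
    """Self-play loop: the only input the agent ever gets is chess itself."""
    tally = {}
    for _ in range(n_games):
        moves, result = play_game(agent, agent)
        agent.update(moves, result)
        tally[result] = tally.get(result, 0) + 1
    return tally

if __name__ == "__main__":
    print(benchmark(RandomPolicy(), n_games=5))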

 

-Matthias

 

 

 

 



