Re: [agi] Re: Meaning, communication and understanding

2008-10-20 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 11:50 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 But in any case there is a complete distinction between D and L. The brain
 never sends entities of D to its output region but it sends entities of L.
 Therefore there must be a strict separation between language model and D.


"In any case" isn't good enough. Why does it even make sense to say
that the brain "sends entities"? From L? So far, all of this is
completely unjustified, and probably not even wrong.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
It would also be nice if this mailing list could operate on a bit more of
 a scientific basis.  I get really tired of pointing to specific references
 and then being told that I have no facts or that it was solely my opinion.


This really has to do with the culture of the community on the list, rather
than the operation of the list per se, I'd say.

I have also often been frustrated by the lack of inclination of some list
members to read the relevant literature.  Admittedly, there is a lot of it
to read.  But on the other hand, it's not reasonable to expect folks who
*have* read a certain subset of the literature, to summarize that subset in
emails for individuals who haven't taken the time.  Creating such summaries
carefully takes a lot of effort.

I agree that if more careful attention were paid to the known science
related to AGI ... and to the long history of prior discussions on the
issues discussed here ... this list would be a lot more useful.

But, this is not a structured discussion setting -- it's an Internet
discussion group, and even if I had the inclination to moderate more
carefully so as to try to encourage a more carefully scientific mode of
discussion, I wouldn't have the time...

ben g





[agi] Re: Value of philosophy

2008-10-20 Thread Vladimir Nesov
On Mon, Oct 20, 2008 at 2:33 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
 Hmm.  After the recent discussion it seems this list has turned into the
 philosophical musings related to AGI list.   Where is the AGI engineering
 list?


The problem isn't philosophy, but bad philosophy (the prevalent
variety). Good philosophy is necessary for AI, and philosophy has in
some sense always focused on the questions of AI. Even if most of the
existing philosophy is bunk, we need to build our own philosophy.
Frankly, I don't remember any engineering discussions on this list
that didn't fall on the deaf ears of most people, who didn't believe
the direction was worthwhile, and for good reasons (barring occasional
discussions of this or that logic, which might be interesting, but
again).

We need to work more on the foundations, to understand whether we are
going in the right direction, at least well enough to persuade other
people (which is NOT good enough in itself, but barring that, who are
we kidding?).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: Value of philosophy

2008-10-20 Thread Mike Tintner
Vlad: Good philosophy is necessary for AI... We need to work more on the
foundations, to understand whether we are going in the right direction

More or less perfectly said. While I can see that a majority of people here 
don't want it, philosophy (which should be scientifically based) is 
essential for AGI, precisely as Vlad says - to decide what the proper 
directions and targets for AGI are. What is creativity? Intelligence? What are 
the kinds of problems an AGI should be dealing with? What kind(s) of 
knowledge representation are necessary? Is language necessary? What forms 
should concepts take? What kinds of information structures, e.g. networks, 
should underlie them? What kind(s) of search are necessary? How do analogy 
and metaphor work? Is embodiment necessary? Etc., etc.   These are all matters 
for philosophical as well as scientific as well as 
technological/engineering discussion.  They tend in practice to be more 
philosophical because these areas are so vast that they can't be 
neatly covered - or not at present - by any scientific, 
experimentally-backed theory.


If your philosophy is all wrong, then the chances are v. high that your 
engineering work will be a complete waste of time. So it's worth considering 
whether your personal AGI philosophy and direction are viable.


And that is essentially what the philosophical discussions here have all 
been about - the proper *direction* for AGI efforts to take. Ben has 
mischaracterised these discussions. No one - certainly not me - is objecting 
to the *feasibility* of AGI. Everyone agrees that AGI in one form or another 
is indeed feasible, though some (and increasingly, though by no means fully, 
Ben himself) incline to robotic AGI. The arguments are mainly about 
direction, not feasibility.


(There is a separate, philosophical discussion, about feasibility in a 
different sense - the lack of a culture of feasibility, which is perhaps, 
subconsciously, what Ben was also referring to - no one, but no one, in 
AGI, including Ben, seems willing to expose their AGI ideas and proposals 
to any kind of feasibility discussion at all - i.e. how can this or that 
method solve any of the problems of general intelligence? This is what Steve 
R has pointed to recently, albeit IMO in a rather confusing way.)


So while I recognize that a lot of people have an antipathy to my personal 
philosophising, one way or another, you can't really avoid philosophising, 
unless you are, say, totally committed to just one approach, like OpenCog. 
And even then...


P.S. Philosophy is always a matter of (conflicting) opinion. (Especially, 
given last night's exchange, philosophy of science itself).








Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
I do not understand what kind of understanding of noncomputable numbers you
think a human has, that AIXI could not have.  Could you give a specific
example of this kind of understanding?  What is some fact about
noncomputable numbers that a human can understand but AIXI cannot?  And how
are you defining "understand" in this context?

I think uncomputable numbers can be indirectly useful in modeling the world
even if the world is fundamentally computable.  This is proved by
differential and integral calculus, which are based on the continuum (most
of the numbers on which are uncomputable), and which are extremely handy for
analyzing real, finite-precision data ... more so, it seems, than
computable analysis variants.

But, I think AIXI or other AI systems can understand how to apply
differential calculus in the same sense that humans can...

And, neither AIXI nor a human can display a specific example of an
uncomputable number.  But, both can understand the diagonalization
constructs that lead us to believe uncomputable numbers exist in some
sense of the word "exist"...

-- Ben G

On Sun, Oct 19, 2008 at 9:33 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 How so? Also, do you think it is nonsensical to put some probability
 on noncomputable models of the world?

 --Abram

 On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  But: it seems to me that, in the same sense that AIXI is incapable of
  understanding proofs about uncomputable numbers, **so are we humans**
 ...
 
  On Sun, Oct 19, 2008 at 6:30 PM, Abram Demski [EMAIL PROTECTED]
 wrote:
 
  Matt,
 
  Yes, that is completely true. I should have worded myself more clearly.
 
  Ben,
 
  Matt has sorted out the mistake you are referring to. What I meant was
  that AIXI is incapable of understanding the proof, not that it is
  incapable of producing it. Another way of describing it: AIXI could
  learn to accurately mimic the way humans talk about uncomputable
  entities, but it would never invent these things on its own.
 
  --Abram
 
  On Sun, Oct 19, 2008 at 4:32 PM, Matt Mahoney [EMAIL PROTECTED]
  wrote:
   --- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
  
   No, I do not claim that computer theorem-provers cannot
   prove Goedel's Theorem. It has been done. The objection applies
   specifically to AIXI-- AIXI cannot prove goedel's theorem.
  
   Yes it can. It just can't understand its own proof in the sense of
   Tarski's undefinability theorem.
  
   Construct a predictive AIXI environment as follows: the environment
   output symbol does not depend on anything the agent does. However, the
 agent
   receives a reward when its output symbol matches the next symbol input
 from
   the environment. Thus, the environment can be modeled as a string that
 the
   agent has the goal of compressing.
  
   Now encode in the environment a series of theorems followed by their
   proofs. Since proofs can be mechanically checked, and therefore found
 given
   enough time (if the proof exists), then the optimal strategy for the
 agent,
   according to AIXI is to guess that the environment receives as input a
   series of theorems and that the environment then proves them and
 outputs the
   proof. AIXI then replicates its guess, thus correctly predicting the
 proofs
   and maximizing its reward. To prove Goedel's theorem, we simply encode
 it
   into the environment after a series of other theorems and their
 proofs.
  
   -- Matt Mahoney, [EMAIL PROTECTED]
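A minimal sketch of the predictive environment described above, in Python (a
toy stand-in, not AIXI: the "theorems" and "proofs" are placeholder strings,
the agent is a trivial bigram predictor, and the reward is simply 1 per
correctly predicted symbol):

# Toy version of the prediction-reward protocol: the environment emits a
# fixed symbol stream regardless of what the agent does, and the agent is
# rewarded whenever its output matches the next symbol.  A real AIXI agent
# would search over programs modeling the stream; a bigram counter is used
# here only to show the protocol.
from collections import defaultdict

def run(environment_string, predict):
    history, reward = "", 0
    for next_symbol in environment_string:
        guess = predict(history)      # agent acts before seeing the symbol
        if guess == next_symbol:
            reward += 1               # reward for a correct prediction
        history += next_symbol        # the environment ignores the agent
    return reward

def bigram_predictor():
    counts = defaultdict(lambda: defaultdict(int))
    def predict(history):
        if len(history) >= 2:         # record the transition just completed
            counts[history[-2]][history[-1]] += 1
        if history and counts[history[-1]]:
            return max(counts[history[-1]], key=counts[history[-1]].get)
        return "T"                    # arbitrary default guess
    return predict

# Placeholder "theorem; proof" pairs standing in for the encoded stream.
stream = "T1;P1.T2;P2." * 20
print(run(stream, bigram_predictor()))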
  
  
  
  
 
 
 
 
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]
 
  Nothing will ever be attempted if all possible objections must be first
  overcome   - Dr Samuel Johnson
 
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson




Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mark Waser
There is a wide area between moderation and complete laissez-faire.

Also, as list owner, you'll find that people tend to pay attention to what you 
say/request and also to what you do.

If you regularly point to references and ask others to do the same, they are 
likely to follow.  If you were to gently chastise people for saying that there 
are no facts when references were provided, people might get the hint.  
Instead, you generally feed the trolls and humorously insult the people who 
are trying to keep it on a scientific basis.  That's a pretty clear message all 
by itself.

You don't need to spend more time but, as a serious role model for many of the 
people on the list, you do need to pay attention to the effects of what you say 
and do.  I can't help but go back to my perceived summary of the most recent 
issue -- Ben Goertzel says that there is no true defined method to the 
scientific method (and Mark Waser is clueless for thinking that there is).


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 20, 2008 6:53 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





It would also be nice if this mailing list could operate on a bit more 
of a scientific basis.  I get really tired of pointing to specific references 
and then being told that I have no facts or that it was solely my opinion.



  This really has to do with the culture of the community on the list, rather 
than the operation of the list per se, I'd say.

  I have also often been frustrated by the lack of inclination of some list 
members to read the relevant literature.  Admittedly, there is a lot of it to 
read.  But on the other hand, it's not reasonable to expect folks who *have* 
read a certain subset of the literature, to summarize that subset in emails for 
individuals who haven't taken the time.  Creating such summaries carefully 
takes a lot of effort.

  I agree that if more careful attention were paid to the known science related 
to AGI ... and to the long history of prior discussions on the issues discussed 
here ... this list would be a lot more useful.

  But, this is not a structured discussion setting -- it's an Internet 
discussion group, and even if I had the inclination to moderate more carefully 
so as to try to encourage a more carefully scientific mode of discussion, I 
wouldn't have the time...

  ben g









AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Any argument of the kind "you should first read xxx + yyy + ..." is
very weak. It is a pseudo killer argument against everything, with no content
at all.

If xxx, yyy, ... contain really relevant information for the discussion,
then it should be possible to quote the essential part in a few lines of
text.

If someone is not able to do this, he should himself read xxx, yyy, ...
once again.

 

-Matthias

 

 

Ben wrote

 

It would also be nice if this mailing list could operate on a bit more of
a scientific basis.  I get really tired of pointing to specific references
and then being told that I have no facts or that it was solely my opinion.

 


This really has to do with the culture of the community on the list, rather
than the operation of the list per se, I'd say.

I have also often been frustrated by the lack of inclination of some list
members to read the relevant literature.  Admittedly, there is a lot of it
to read.  But on the other hand, it's not reasonable to expect folks who
*have* read a certain subset of the literature, to summarize that subset in
emails for individuals who haven't taken the time.  Creating such summaries
carefully takes a lot of effort.

I agree that if more careful attention were paid to the known science
related to AGI ... and to the long history of prior discussions on the
issues discussed here ... this list would be a lot more useful.

But, this is not a structured discussion setting -- it's an Internet
discussion group, and even if I had the inclination to moderate more
carefully so as to try to encourage a more carefully scientific mode of
discussion, I wouldn't have the time...

ben g




 






RE: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ed Porter
Thanks to Ben and Vlad for their help answering my question about how to
estimate the number of node assemblies A(N,O,S) one can get from a total set
of N nodes, where each assembly has a size of S, and a maximum overlap with
any other set of O.  I am sorry I did not respond sooner, but I spent a fair
amount of time reviewing the tables cited in the Wikipedia article below
and in thinking about how one might obtain more relevant information, and I
went away for the weekend.

 

It appears Ben and Vlad are right that the constant-weight code formula
A(n,d,w) described at http://en.wikipedia.org/wiki/Constant-weight_code
is highly relevant to my question, if you take into account Vlad's
suggestion that you fill in the variable slots in the constant-weight code
formula A(n,d,w) with the parameters N, O, and S in my formula as follows:

 

n = N

d = 2(S-O+1)

w = S

 

I understand why you multiply (S-O+1) by 2 to get the Hamming distance,
i.e., because whenever comparing two sets, whatever non-overlap you had in
one set, you would have an equal non-overlap from the other compared set to
add to the Hamming distance between the two sets being compared.
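
A quick sanity check on that factor of 2: writing each assembly as a 0/1
indicator vector over the N nodes, two size-S assemblies that share exactly k
nodes differ in exactly 2*(S - k) positions.  A few lines of Python (the
values of N, S, and k below are arbitrary):

# Two size-S subsets of N nodes with overlap k have Hamming distance
# 2*(S - k) between their indicator vectors.
N, S, k = 100, 10, 4                      # arbitrary example values
a = set(range(S))                         # nodes 0 .. S-1
b = set(range(S - k, 2 * S - k))          # shares exactly k nodes with a
va = [1 if i in a else 0 for i in range(N)]
vb = [1 if i in b else 0 for i in range(N)]
hamming = sum(x != y for x, y in zip(va, vb))
assert hamming == 2 * (S - k)             # 12 for S = 10, k = 4
print(hamming)

(So requiring a minimum distance of d = 2(S-O+1), as in the mapping above,
amounts to requiring that any two assemblies share fewer than O nodes.)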

 

However, I don't understand whether A(n,d,w) is the number of sets where the
Hamming distance is exactly d (as it would seem from the text of
http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
number of sets where the Hamming distance is d or less.  If the former case
is true, then the lower bounds given in the tables would actually be lower
than the actual lower bounds for the question I asked, which would
correspond to all cases where the Hamming distance is d or less.

 

IT WAS INTERESTING TO NOTE THAT THE WIKI ARTICLE SAID "APART FROM SOME
TRIVIAL OBSERVATIONS, IT IS GENERALLY IMPOSSIBLE TO COMPUTE THESE NUMBERS IN
A STRAIGHTFORWARD WAY."

 

The tables at
http://www.research.att.com/~njas/codes/Andw/index.html#dist16 indicate
the number of cell assemblies would, in fact, be much larger than the number
of nodes, WHERE THE OVERLAP WAS RELATIVELY LARGE, which would be equivalent
to node assemblies with undesirably high cross talk.  It doesn't provide any
information for cases where overlap is small, i.e., d is actually larger
than w.  It is clear A drops sharply as a percent of all the possible
combinations from set N of size S as O decreases, but where N and S are large
the number of combinations C(N,S) would be very large, so even a very small
percent of it might be much larger than N.  But I can't be sure.

 

Some of the closest examples to what I was looking for in these tables were
the following:

 

In the case where n=32, d=16, w=16 and A = 62, solving for O  

16 = 2*(16-O+1) = 32-2O+2 = 34-2O

2O = 34-16 = 18

O = 9, which is over half of w (or S)

 

Near the end of the page, under the label "Further lower bounds":

In the case where A(80,20,20) = 53404, solving for O

20 = 2*(20-O+1) = 40-2O+2 = 42-2O

2O = 42-20 = 22

O = 11, which is over half of w (or S)

 

 

In the case where A(128,32,32) = 512064, solving for O

32 = 2*(32-O+1) = 64-2O+2 = 66-2O

2O = 66-32 = 34

O = 17, which is over half of w (or S)

 

But you can see that in all of them the overlap O was over half the value of
S, which means there would be very high cross talk.

 

In my next email I will suggest a simple search algorithm for exploring the
lower bounds on A(N,O,S).  Unfortunately, I haven't coded for so long that
even writing code as simple as this algorithm would be hard for me, because
I have forgotten the peculiarities of different languages and programming
environments.  But to any of you who are still in the coding groove, you
should be able to write this program in less than half an hour, and it would
be interesting to see what results it would give.
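
As a starting point, here is a minimal sketch of such a search in Python
(greedy/random construction; the parameter values are arbitrary, and whatever
count a greedy packing finds is only a lower bound on A(N,O,S)):

# Greedy/random lower-bound search for A(N, O, S): how many size-S
# assemblies can be drawn from N nodes so that any two share at most
# O nodes?  Accept a random candidate only if it overlaps every
# previously accepted assembly in at most O nodes.
import random

def greedy_assemblies(N, S, O, tries=20000, seed=0):
    rng = random.Random(seed)
    assemblies = []
    for _ in range(tries):
        candidate = frozenset(rng.sample(range(N), S))
        if all(len(candidate & a) <= O for a in assemblies):
            assemblies.append(candidate)
    return assemblies

if __name__ == "__main__":
    # Arbitrary example parameters; vary them to explore the bound.
    found = greedy_assemblies(N=200, S=10, O=2)
    print(len(found), "assemblies found (a lower bound on A(N, O, S))")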

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, October 16, 2008 10:38 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


Well, coding theory does let you derive upper bounds on the memory capacity
of Hopfield-net type memory  models...

But, the real issue for Hopfield nets is not theoretical memory capacity,
it's tractable incremental learning algorithms

Along those lines, this work is really nice...

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.817

I wonder how closely that method lets you achieve the theoretical upper
bound.  Unfortunately, current math seems inadequate to discover this, but
empirics could tell us.  If anyone wants to explore it, we have a Java
implementation of Storkey's palimpsest learning scheme for Hopfield nets,
specialized for simple experiments with character arrays.

-- Ben G
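
For anyone who wants a baseline to experiment against first, here is a tiny
Hopfield-net sketch in Python.  It uses the classic one-shot Hebbian
(outer-product) rule rather than Storkey's palimpsest rule referred to above,
so its capacity for random patterns is only the familiar roughly 0.14*N, but
the harness is the kind needed for the empirical comparison:

# Minimal Hopfield net with the classic Hebbian rule (NOT Storkey's
# palimpsest rule) for capacity experiments on +/-1 patterns.
import numpy as np

def train_hebbian(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                      # one-shot outer-product learning
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / n

def recall(W, state, steps=20):
    s = state.copy()
    for _ in range(steps):                  # synchronous updates, for brevity
        s = np.where(W @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(0)
pats = rng.choice([-1, 1], size=(10, 100))  # 10 random patterns, 100 units
W = train_hebbian(pats)
noisy = pats[0].copy()
noisy[:10] *= -1                            # corrupt 10 of 100 bits
print(np.mean(recall(W, noisy) == pats[0])) # fraction of bits recovered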



On Thu, Oct 16, 2008 at 10:30 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:

On Fri, Oct 17, 2008 at 6:26 AM, Ben Goertzel 

Re: [agi] Re: Value of philosophy

2008-10-20 Thread William Pearson
2008/10/20 Mike Tintner [EMAIL PROTECTED]:
 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence?

This is because you define GI to be totally about creativity, analogy,
etc. Now that is part of GI, but by no means all.  I'm a firm believer in
splitting tasks down and people specialising in those tasks, so I am
not worrying about creativity at the moment, apart from making sure
that any architecture I build doesn't constrain the people working on it
in the types of creativity they can produce.

Many useful advances in computer technology (operating systems,
networks including the internet) have come about by not assuming too
much about what will be done with them. I think the first layer of a
GI system can be done the same way.

My self-selected speciality is resource allocation (RA). There are
some times when certain forms of creativity are not a good option,
e.g. flying a passenger jet. When shouldn't humans be creative? How
should creativity and X other systems be managed?

Looking at OpenCog, the RA is not baked into the arch, so I have doubts
about how well it would survive in its current state under recursive
self-change. It will probably be reasonable for what the OpenCog team
is doing at the moment, but getting the low-level arch wrong, or not fit
for the next stage, is a good way to waste work.

 Will Pearson




Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
On Mon, Oct 20, 2008 at 6:37 PM, Ed Porter [EMAIL PROTECTED] wrote:

 The tables at http://www.research.att.com/~njas/codes/Andw/index.html#dist16
  indicates the number of cell assemblies would, in fact be much larger than
 the number of nodes, WHERE THE OVERLAP WAS RELATIVELY LARGE, which would be
 equivalent to node assemblies with undesirably high cross talk.

Ed, find my reply where I derive a lower bound. Even if the overlap must
be no more than 1 node, you can still have a number of assemblies exceeding
N by as large a factor as you like, if N is big enough, given fixed S.
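
One way to see this concretely (not necessarily the same construction as in
that earlier reply): pick a prime m > S, use N = m*S nodes arranged as S
columns of m nodes each, and build one assembly per (slope, intercept) pair
over Z_m.  Any two such assemblies share at most one node, and there are
m^2 = (N/S)^2 of them, which dwarfs N once N >> S^2.  A quick check in
Python:

# With overlap capped at 1 node, the number of size-S assemblies can far
# exceed N: one assembly per (a, b) in Z_m x Z_m, taking node
# (a*y + b) mod m from column y, for y = 0 .. S-1.
from itertools import combinations

def modular_assemblies(m, S):
    # node (y, x) is encoded as the integer y*m + x
    return [frozenset(y * m + (a * y + b) % m for y in range(S))
            for a in range(m) for b in range(m)]

m, S = 11, 5                     # m must be prime and larger than S
assemblies = modular_assemblies(m, S)
assert all(len(p & q) <= 1 for p, q in combinations(assemblies, 2))
print(len(assemblies), "assemblies of size", S, "from", m * S, "nodes")
# -> 121 assemblies of size 5 from 55 nodes, any two sharing at most 1 node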

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Terren Suydam

Matthias, still awaiting a response to this post, quoted below.

Thanks,
Terren


Matthias wrote:
 I don't think that learning of language is the entire
 point. If I have only
 learned language I still cannot create anything. A human
 who can understand
 language is by far still no good scientist. Intelligence
 means the ability
 to solve problems. Which problems can a system solve if it
 can do nothing other than understand language?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever meaning means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 

How do you explain that?

 Einstein had to express his (non-linguistic) internal
 insights in natural
 language and in mathematical language.  In both
 modalities he had to use
 his intelligence to make the translation from his
 mental models.

 The point is that someone else could understand Einstein
 even if he hadn't
 had the same intelligence. This is a proof that
 understanding AI1 does not
 necessarily imply having the intelligence of AI1.

I'm saying that if an AI understands & speaks natural language, you've solved 
AGI - your Nobel will be arriving soon.  The difference between AI1 that 
understands Einstein, and any AI currently in existence, is much greater than 
the difference between AI1 and Einstein.

 Deaf people speak in sign language, which is only
 different from spoken
 language in superficial ways. This does not tell us
 much about language
 that we didn't already know.

 But it is a proof that *natural* language understanding is
 not necessary for
 human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

 I have already outlined the process of self-reflectivity:
 Internal patterns
 are translated into language.

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

 This is routed to the
 brain's own input
 regions. You *hear* your own thoughts and have the illusion
 that you think
 linguistically.
 If you can speak two languages then you can make an easy
 test: Try to think
 in the foreign language. It works. If language would be
 inherently involved
 in the process of thoughts then thinking alternatively in
 two languages
 would cost many resources of the brain. In fact you need
 just use the other
 module for language translation. This is a big hint that
 language and
 thoughts do not have much in common.

 -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc).

Terren





RE: [agi] META: A possible re-focusing of this list

2008-10-20 Thread John G. Rose
Just an idea - not sure if it would work or not - 3 lists: [AGI-1], [AGI-2],
[AGI-3]. Sub-content is determined by the posters themselves. Same amount of
emails initially but partitioned up.

Wonder what would happen?

John





Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben,

The most extreme case is if we happen to live in a universe with
uncomputable physics, which of course would violate the AIXI
assumption. This could be the case merely because we have physical
constants that have no algorithmic description (but perhaps still have
mathematical descriptions). As a concrete example, let's say some
physical constant turns out to be a (whole-number) multiple of
Chaitin's Omega. Omega cannot be computed, but it can be approximated
(slowly), so we could after a long time suspect that we had determined
the first 20 digits (although we would never know for sure!). If a
physical constant turned out to match (some multiple of) these, we
would strongly suspect that the rest of the digits matched as well.
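
For concreteness: Omega is the halting probability of a prefix-free universal
machine U, and "approximated (slowly)" means it is the limit of a computable,
nondecreasing sequence of rationals, with no computable bound on how close
the t-th term is:

    \Omega = \sum_{p : U(p)\ \text{halts}} 2^{-|p|},
    \qquad
    \Omega_t = \sum_{p : U(p)\ \text{halts within } t \text{ steps}} 2^{-|p|},
    \qquad
    \Omega_t \nearrow \Omega.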

(Of course, the actual value of Omega depends on the model of
computation employed, so it would be very surprising indeed if the
physical constant matched Omega for one of our standard computational
models...)

AIXI would never accept this inductive evidence.

This is similar to Wei Dai's argument about aliens offering humans a
box that seems to be a halting oracle.

I think there is a less extreme case to be considered (meaning, I
think there is a broader way in which we might say AIXI cannot
understand uncomputable entities the way we can), but the argument
is probably clearer for the extreme case, so I will leave it at that
for now.

Clearly, this argument is very "type 2" at the moment. What I *really*
would like to discuss is, as you put it, the set of sufficient
mathematical axioms for (partially-)logic-based AGI such as
OpenCogPrime.

--Abram

On Mon, Oct 20, 2008 at 9:45 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I do not understand what kind of understanding of noncomputable numbers you
 think a human has, that AIXI could not have.  Could you give a specific
 example of this kind of understanding?  What is some fact about
 noncomputable numbers that a human can understand but AIXI cannot?  And how
 are you defining understand in this context?

 I think uncomputable numbers can be indirectly useful in modeling the world
 even if the world is fundamentally computable.  This is proved by
 differential and integral calculus, which are based on the continuum (most
 of the numbers on which are uncomputable), and which are extremely handy for
 analyzing real, finite-precision data ... more so, it seems, than
 computable analysis variants.

 But, I think AIXI or other AI systems can understand how to apply
 differential calculus in the same sense that humans can...

 And, neither AIXI nor a human can display a specific example of an
 uncomputable number.  But, both can understand the diagonalization
 constructs that lead us to believe uncomputable numbers exist in some
 sense of the word exist

 -- Ben G

 On Sun, Oct 19, 2008 at 9:33 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 How so? Also, do you think it is nonsensical to put some probability
 on noncomputable models of the world?

 --Abram

 On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  But: it seems to me that, in the same sense that AIXI is incapable of
  understanding proofs about uncomputable numbers, **so are we humans**
  ...
 
  On Sun, Oct 19, 2008 at 6:30 PM, Abram Demski [EMAIL PROTECTED]
  wrote:
 
  Matt,
 
  Yes, that is completely true. I should have worded myself more clearly.
 
  Ben,
 
  Matt has sorted out the mistake you are referring to. What I meant was
  that AIXI is incapable of understanding the proof, not that it is
  incapable of producing it. Another way of describing it: AIXI could
  learn to accurately mimic the way humans talk about uncomputable
  entities, but it would never invent these things on its own.
 
  --Abram
 
  On Sun, Oct 19, 2008 at 4:32 PM, Matt Mahoney [EMAIL PROTECTED]
  wrote:
   --- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
  
   No, I do not claim that computer theorem-provers cannot
   prove Goedel's Theorem. It has been done. The objection applies
   specifically to AIXI-- AIXI cannot prove goedel's theorem.
  
   Yes it can. It just can't understand its own proof in the sense of
   Tarski's undefinability theorem.
  
   Construct a predictive AIXI environment as follows: the environment
   output symbol does not depend on anything the agent does. However,
   the agent
   receives a reward when its output symbol matches the next symbol
   input from
   the environment. Thus, the environment can be modeled as a string
   that the
   agent has the goal of compressing.
  
   Now encode in the environment a series of theorems followed by their
   proofs. Since proofs can be mechanically checked, and therefore found
   given
   enough time (if the proof exists), then the optimal strategy for the
   agent,
   according to AIXI is to guess that the environment receives as input
   a
   series of theorems and that the environment then proves them and
   outputs the
   proof. AIXI then replicates its guess, thus correctly predicting 

AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Terren wrote


Language understanding requires a sophisticated conceptual framework
complete with causal models, because, whatever meaning means, it must be
captured somehow in an AI's internal models of the world.


"Conceptual framework" is not well defined. Therefore I can't agree or
disagree.
What do you mean by "causal model"?



The Piraha tribe in the Amazon basin has a very primitive language compared
to all modern languages - it has no past or future tenses, for example - and
as a people they exhibit barely any of the hallmarks of abstract reasoning
that are so common to the rest of humanity, such as story-telling, artwork,
religion... see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 


How do you explain that?


In this example we observe two phenomena:
1. a primitive language compared to all modern languages
2. a people who exhibit barely any of the hallmarks of abstract reasoning

From this we can neither conclude that 1 causes 2 nor that 2 causes 1.



I'm saying that if an AI understands & speaks natural language, you've
solved AGI - your Nobel will be arriving soon.  


This is just your opinion. I disagree that natural language understanding
necessarily implies AGI. For instance, I doubt that anyone can prove that
any system which understands natural language is necessarily able to solve
the simple equation x * 3 = y for a given y.
And if this is not proven, then we shouldn't assume that natural language
understanding, without further hidden assumptions, implies AGI.



The difference between AI1 that understands Einstein, and any AI currently
in existence, is much greater then the difference between AI1 and Einstein.


This might be true but what does this  show?




Sorry, I don't see that, can you explain the proof?  Are you saying that
sign language isn't natural language?  That would be patently false. (see
http://crl.ucsd.edu/signlanguage/)


Yes. In my opinion, sign language is not a natural language as the term is
usually understood.




So you're agreeing that language is necessary for self-reflectivity. In your
models, then, self-reflectivity is not important to AGI, since you say AGI
can be realized without language, correct?


No. Self-reflectivity needs just a feedback loop over the system's own
processes. I do not say that AGI can be realized without language. AGI must
produce outputs and AGI must obtain inputs. For inputs and outputs there must
be protocols. These protocols are not fixed but depend on the input devices
and output devices. For instance, the AGI could use the Hubble telescope or a
microscope or both.
For the domain of mathematics, a formal language which is specified by humans
would be the best for input and output.


I'm not saying that language is inherently involved in thinking, but it is
crucial for the development of *sophisticated* causal models of the world -
the kind of models that can support self-reflectivity. Word-concepts form
the basis of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that
emerges is not necessarily tied to linguistics, especially as humans get
feedback from the world in ways that are not linguistic (scientific
experimentation/tinkering, studying math, art, music, etc).


That is just your opinion again. I tolerate your opinion. But I have a
different opinion. The future will show which approach is successful.

- Matthias





Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
Yes, if we live in a universe that has Turing-uncomputable physics, then
obviously AIXI is not necessarily going to be capable of adequately dealing
with that universe ... and nor is AGI based on digital computer programs
necessarily going to be able to equal human intelligence.

In that case, we might need to articulate new computational models
reflecting the actual properties of the universe (i.e. new models that
relate to the newly-understood universe, the same way that AIXI relates to
an assumed-computable universe).  And we might need to build new kinds of
computer hardware that make appropriate use of this Turing-uncomputable
physics.

I agree this is possible.  I also see no evidence for it.  This is
essentially the same hypothesis that Penrose has put forth in his books The
Emperor's New Mind, and Shadows of the Mind; and I found his arguments there
completely unconvincing.  Ultimately his argument comes down to:

A)  mathematical thinking doesn't feel computable to me, therefore it
probably isn't

B) we don't have a unified theory of physics, so when we do find one it
might imply the universe is Turing-uncomputable

Neither of those points constitutes remotely convincing evidence to me, nor
is either one easily refutable.

I do have a limited argument against these ideas, which has to do with
language.   My point is that, if you take any uncomputable universe U, there
necessarily exists some computable universe C so that

1) there is no way to distinguish U from C based on any finite set of
finite-precision observations

2) there is no finite set of sentences in any natural or formal language
(where by language, I mean a series of symbols chosen from some discrete
alphabet) that applies to U but does not also apply to C

To me, this takes a bit of the bite out of the idea of an uncomputable
universe.

Another way to frame this is: I think the notion of a computable universe is
effectively equivalent to the notion of a universe that is describable in
language or comprehensible via finite-precision observations.

And the deeper these discussions get, the more I think they belong on an
agi-phil list rather than an AGI list ;-) ... I like these sorts of ideas,
but they really have little to do with creating AGI ...

-- Ben G

On Mon, Oct 20, 2008 at 11:23 AM, Abram Demski [EMAIL PROTECTED]wrote:

 Ben,

 The most extreme case is if we happen to live in a universe with
 uncomputable physics, which of course would violate the AIXI
 assumption. This could be the case merely because we have physical
 constants that have no algorithmic description (but perhaps still have
 mathematical descriptions). As a concrete example, let's say some
 physical constant turns out to be a (whole-number) multiple of
 Chaitin's Omega. Omega cannot be computed, but it can be approximated
 (slowly), so we could after a long time suspect that we had determined
 the first 20 digits (although we would never know for sure!). If a
 physical constant turned out to match (some multiple of) these, we
 would strongly suspect that the rest of the digits matched as well.

 (Of course, the actual value of Omega depends on the model of
 computation employed, so it would be very surprising indeed if the
 physical constant matched Omega for one of our standard computational
 models...)

 AIXI would never accept this inductive evidence.

 This is similar to Wei Dai's argument about aliens offering humans a
 box that seems to be a halting oracle.

 I think there is a less extreme case to be considered (meaning, I
 think there is a broader way in which we might say AIXI cannot
 understand uncomputable entities the way we can), but the argument
 is probably clearer for the extreme case, so I will leave it at that
 for now.

 Clearly, this argument is very type 2 at the moment. What I *really*
 would like to discuss is, as you put it, the set of sufficient
 mathematical axioms for (partially-)logic-based AGI such as
 OpenCogPrime.

 --Abram

 On Mon, Oct 20, 2008 at 9:45 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  I do not understand what kind of understanding of noncomputable numbers
 you
  think a human has, that AIXI could not have.  Could you give a specific
  example of this kind of understanding?  What is some fact about
  noncomputable numbers that a human can understand but AIXI cannot?  And
 how
  are you defining understand in this context?
 
  I think uncomputable numbers can be indirectly useful in modeling the
 world
  even if the world is fundamentally computable.  This is proved by
  differential and integral calculus, which are based on the continuum
 (most
  of the numbers on which are uncomputable), and which are extremely handy
 for
  analyzing real, finite-precision data ... more so, it seems, than
  computable analysis variants.
 
  But, I think AIXI or other AI systems can understand how to apply
  differential calculus in the same sense that humans can...
 
  And, neither AIXI nor a human can display a specific 

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 12:07 PM, Ed Porter [EMAIL PROTECTED] wrote:

  As I said in my last email, since the Wikipedia article on constant
 weight codes said APART FROM SOME TRIVIAL OBSERVATIONS, IT IS GENERALLY
 IMPOSSIBLE TO COMPUTE THESE NUMBERS IN A STRAIGHTFORWARD WAY. And since all
 of the examples they gave had very large overlaps, meaning high cross talk,
 I think it would be valuable to find some rough measure of whether it is
 possible to create sets of cell assemblies with low cross talk that had
 numbers exceeding, or far exceeding, the number of nodes they are created
 from.



Intuitively, it seems obvious the answer is YES ...

ben g





Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
I also don't understand whether A(n,d,w) is the number of sets where the
 hamming distance is exactly d (as it would seem from the text of
 http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
 number of set where the hamming distance is d or less.  If the former case
 is true then the lower bounds given in the tables would actually be lower
 than the actual lower bounds for the question I asked, which would
 correspond to all cases where the hamming distance is d or less.



The case where the Hamming distance is d or less corresponds to a
bounded-weight code rather than a constant-weight code.

I already forwarded you a link to a paper on bounded-weight codes, which are
also combinatorially intractable and have been studied only via
computational analysis.

-- Ben G





Re: [agi] Re: Value of philosophy

2008-10-20 Thread Steve Richfield
Mike, Vladimir, Ben, et al,

The mere presence of philosophy is proof positive that there are some
domains in which GI doesn't work well at all. Are those domains truly
difficult, or just ill adapted to GI? The mere existence of Dr. Eliza would
seem to be proof positive that those domains are NOT difficult - but rather
we are just missing neuron type 201 or something.

No, an AGI can NOT figure these things out on its own! First, much of the
data underlying philosophy has been long lost. Sun Tzu is still taught in
military colleges, even though the battles upon which that philosophy is
based were forgotten thousands of years ago. Further, those
battles were fought with primitive hand weapons and bamboo armour, yet these
non-obvious principles still apply to modern heavy weapons. Note that these
principles predict a quick demise for the U.S.

Hence, I am sort of on Ben's side in this particular discussion (Ben, please
correct me if I am wrong in this), that an AGI need NOT engage in philosophy
to be interesting and even useful, though such an AGI will never rise to
become a singularity, but will remain more of a pet. Maybe from such an
AGI we can learn enough to build a truly powerful AGI.

Closing with yet another entry for Ben's list:

*Limits to AGI: GI has fundamental (and somewhat simplistic) limits, which
philosophy, decision theory, and some AI efforts seek to surpass. There is
absolutely no evidence that an AGI that is better/stronger than our own GI
will be any better at competing in the real world, just as many/most of the
smartest people in our population (e.g. AGI researchers) are some of
society's least successful people, and are often unable to even hold a job.
Hence, if the effort is to produce cheap droids, then we already have more
than enough biological droids. However, if the effort is to produce
super-smart machines able to lead our society, then there are some really
fundamental philosophical things that have yet to be understood enough to
start engineering such machines.*

Steve Richfield

On 10/20/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Vlad:Good philosophy is necessary for AI...We need to work more on the
 foundations, to understand whether we are
 going in the right direction

 More or less perfectly said. While I can see that a majority of people here
 don't want it,  actually philosophy, (which should be scientifically based),
 is essential for AGI, precisely as Vlad says - to decide what are the proper
 directions and targets for AGI. What is creativity? Intelligence? What are
 the kinds of problems an AGI should be dealing with? What kind(s) of
 knowledge representation are necessary? Is language necessary? What forms
 should concepts take? What kinds of information structures, eg networks,
 should underlie them? What kind(s) of search are necessary? How do analogy
 and metaphor work? Is embodiment necessary? etc etc.   These are all matters
 for what is actually philosophical as well as scientific as well as
 technological/engineering discussion.  They tend to be often  more
 philosophical in practice because these areas are so vast that they can't be
 neatly covered - or not at present - by any scientific,
 experimentally-backed theory.

 If your philosophy is all wrong, then the chances are v. high that your
 engineering work will be a complete waste of time. So it's worth considering
 whether your personal AGI philosophy and direction are viable.

 And that is essentially what the philosophical discussions here have all
 been about - the proper *direction* for AGI efforts to take. Ben has
 mischaracterised these discussions. No one - certainly not me - is objecting
 to the *feasibility* of AGI. Everyone agrees that AGI in one form or other
 is indeed feasible,  though some (and increasingly though by no means fully,
 Ben himself) incline to robotic AGI. The arguments are mainly about
 direction, not feasibility.

 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence? This is what Steve
 R has pointed to recently, albeit IMO in a rather confusing way. )

 So while I recognize that a lot of people have an antipathy to my personal
 philosophising, one way or another, you can't really avoid philosophising,
 unless you are, say, totally committed to just one approach, like Opencog.
 And even then...

 P.S. Philosophy is always a matter of (conflicting) opinion. (Especially,
 given last night's exchange, philosophy of science itself).






Re: [agi] Re: Value of philosophy

2008-10-20 Thread Ben Goertzel
Just to clarify one point: I am not opposed to philosophy, nor do I consider
it irrelevant to AGI.  I wrote a book on my own philosophy of mind in 2006.

I just feel like the philosophical discussions tend to overwhelm the
pragmatic discussions on this list, and that a greater number of pragmatic
discussions **might** emerge if the pragmatic and philosophical discussions
were carried out in separate venues.

Some of us feel we already have adequate philosophical understanding to
design and engineer AGI systems.  We may be wrong, but that doesn't mean we
should spend our time debating our philosophical understandings, to the
exclusion of discussing the details of our concrete AGI work.

For me, after enough discussion of the same philosophical issue, I stop
learning anything.  Most of the philosophical discussions on this list are
nearly identical in content to discussions I had with others 20 years ago.
I learned a lot from the discussions then, and learn a lot less from the
repeats...

-- Ben


On Mon, Oct 20, 2008 at 9:06 AM, Mike Tintner [EMAIL PROTECTED]wrote:

 Vlad:Good philosophy is necessary for AI...We need to work more on the
 foundations, to understand whether we are
 going in the right direction

 More or less perfectly said. While I can see that a majority of people here
 don't want it,  actually philosophy, (which should be scientifically based),
 is essential for AGI, precisely as Vlad says - to decide what are the proper
 directions and targets for AGI. What is creativity? Intelligence? What are
 the kinds of problems an AGI should be dealing with? What kind(s) of
 knowledge representation are necessary? Is language necessary? What forms
 should concepts take? What kinds of information structures, eg networks,
 should underlie them? What kind(s) of search are necessary? How do analogy
 and metaphor work? Is embodiment necessary? etc etc.   These are all matters
 for what is actually philosophical as well as scientific as well as
 technological/engineering discussion.  They tend to be often  more
 philosophical in practice because these areas are so vast that they can't be
 neatly covered - or not at present - by any scientific,
 experimentally-backed theory.

 If your philosophy is all wrong, then the chances are v. high that your
 engineering work will be a complete waste of time. So it's worth considering
 whether your personal AGI philosophy and direction are viable.

 And that is essentially what the philosophical discussions here have all
 been about - the proper *direction* for AGI efforts to take. Ben has
 mischaracterised these discussions. No one - certainly not me - is objecting
 to the *feasibility* of AGI. Everyone agrees that AGI in one form or other
 is indeed feasible,  though some (and increasingly though by no means fully,
 Ben himself) incline to robotic AGI. The arguments are mainly about
 direction, not feasibility.

 (There is a separate, philosophical discussion,  about feasibility in a
 different sense -  the lack of  a culture of feasibility, which is perhaps,
 subconsciously what Ben was also referring to  -  no one, but no one, in
 AGI, including Ben,  seems willing to expose their AGI ideas and proposals
 to any kind of feasibility discussion at all  -  i.e. how can this or that
 method solve any of the problem of general intelligence? This is what Steve
 R has pointed to recently, albeit IMO in a rather confusing way. )

 So while I recognize that a lot of people have an antipathy to my personal
 philosophising, one way or another, you can't really avoid philosophising,
 unless you are, say, totally committed to just one approach, like Opencog.
 And even then...

 P.S. Philosophy is always a matter of (conflicting) opinion. (Especially,
 given last night's exchange, philosophy of science itself).










-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] META: A possible re-focusing of this list

2008-10-20 Thread Steve Richfield
Samantha,

On 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:

 This sounds good to me.  I am much more drawn to topic #1.  Topic #2 I have
 seen discussed recursively and in dozens of variants multiple places.  The
 only thing I will add to Topic #2 is that I very seriously doubt current
 human intelligence individually or collectively is sufficient to address or
 meaningfully resolve or even crisply articulate such questions.


We are in absolute agreement that revolution rather than evolution is
necessary to advance. Aside from the specific technique, tools like Reverse
Reductio ad Absurdum show, for example, that intractable disputes
absolutely MUST include a commonly held false assumption. This means, for
example, that if you take EITHER side in the abortion debate, then you
absolutely MUST hold a false assumption. The only hope is broad societal
education that flies in the face of nearly every religion, which will never
happen.

Without that impossible education, a truly successful AGI would have ~half
of the world's population bent on its immediate destruction, and not more
than 100 people would even understand what it said. Note that if you take
either side in the abortion debate, you will NOT be one of those 100
people. Who could you find to even maintain such a machine, and who would
ever follow such a machine?

Much more is accomplished by actually looking into the horse's mouth than
 philosophizing endlessly.


Here, you think that AGI efforts will point the way to freeing man from his
collective madness. Given the constraints explained above, I just don't see
how this is possible.

Another entry for Ben's List:

Impossible Expectations: Man has many issues and problems for which he has
no good answers. Given man's inductive abilities, this comes NOT because of
any inability to imagine the correct answers, but comes instead because
either no such answers exist, or because man rejects the correct answers
when they are placed before him. Obviously, AGI cannot help either of these
situations.


Steve Richfield
===

Ben Goertzel wrote:


 Hi all,

 I have been thinking a bit about the nature of conversations on this list.

 It seems to me there are two types of conversations here:

 1)
 Discussions of how to design or engineer AGI systems, using current
 computers, according to designs that can feasibly be implemented by
 moderately-sized groups of people

 2)
 Discussions about whether the above is even possible -- or whether it is
 impossible because of weird physics, or poorly-defined special
 characteristics of human creativity, or the so-called complex systems
 problem, or because AGI intrinsically requires billions of people and
 quadrillions of dollars, or whatever

 Personally I am pretty bored with all the conversations of type 2.

 It's not that I consider them useless discussions in a grand sense ...
 certainly, they are valid topics for intellectual inquiry.
 But, to do anything real, you have to make **some** decisions about what
 approach to take, and I've decided long ago to take an approach of trying to
 engineer an AGI system.

 Now, if someone had a solid argument as to why engineering an AGI system
 is impossible, that would be important.  But that never seems to be the
 case.  Rather, what we hear are long discussions of peoples' intuitions and
 opinions in this regard.  People are welcome to their own intuitions and
 opinions, but I get really bored scanning through all these intuitions about
 why AGI is impossible.

 One possibility would be to more narrowly focus this list, specifically on
 **how to make AGI work**.

 If this re-focusing were done, then philosophical arguments about the
 impossibility of engineering AGI in the near term would be judged **off
 topic** by definition of the list purpose.

 Potentially, there could be another list, something like agi-philosophy,
 devoted to philosophical and weird-physics and other discussions about
 whether AGI is possible or not.  I am not sure whether I feel like running
 that other list ... and even if I ran it, I might not bother to read it very
 often.  I'm interested in new, substantial ideas related to the in-principle
 possibility of AGI, but not interested at all in endless philosophical
 arguments over various peoples' intuitions in this regard.

 One fear I have is that people who are actually interested in building
 AGI, could be scared away from this list because of the large volume of
 anti-AGI philosophical discussion.   Which, I add, almost never has any new
 content, and mainly just repeats well-known anti-AGI arguments (Penrose-like
 physics arguments ... mind is too complex to engineer, it has to be
 evolved ... no one has built an AGI yet therefore it will never be done
 ... etc.)

 What are your thoughts on this?

 -- Ben




  On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer [EMAIL PROTECTED] wrote:

On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
 Ben Goertzel says that there is no true defined method
 to the scientific method (and Mark Waser is clueless for thinking that there
 is).

This is pretty profound. I never saw Ben Goertzel abolish the
scientific method. I think he explained that its implementation is
intractable, with reference to expert systems whose domain knowledge
necessarily extrapolates massively to cover fringe cases. A strong AI
would produce its own expert system and could follow the same general
scientific method as a human. Can you quote the claim that there is no
such thing


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 4:04 PM, Eric Burton [EMAIL PROTECTED] wrote:

  Ben Goertzel says that there is no true defined method
  to the scientific method (and Mark Waser is clueless for thinking that
 there
  is).



That is not what I said.

My views on the philosophy of science are given here:

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

with an addition here

http://multiverseaccordingtoben.blogspot.com/2008/10/reflections-on-religulous-and.html

The argument with Mark was about his claim that a below-average-intelligence human
could be trained to be a good scientist ... then modified to the claim that
a below-average-intelligence human could be trained to be good at evaluating (rather
than discovering) scientific results.  I said I doubted this was true.

I still doubt it's true.  Given the current state of scientific
experimental and statistical tools, and scientific theory, I don't think a
below-average-intelligence person can be trained to be good (as opposed to,
say, barely passable) at discovering or evaluating scientific results.  This
is because I don't think the scientific method as currently practiced has
been formalized fully enough that it can be practiced by a person without a
fair amount of intelligence and common sense.

My feeling is that, if someone needs to use a cash register with little
pictures of burgers and fries on it rather than numbers, it's probably not
going to work out to teach them to effectively discover or evaluate
scientific theories.

Again, I don't understand what this argument has to do with AGI in the
first place.  I'm just continuing this dialogue to avoid having my
statements publicly misrepresented (I'm sure this misrepresentation is
inadvertent, but still).



 This is pretty profound. I never saw Ben Goertzel abolish the
 scientific method. I think he explained that its implementation is
 intractable, with reference to expert systems whose domain knowledge
 necessarily extrapolates massively to cover fringe cases. A strong AI
 would produce its own expert system and could follow the same general
 scientific method as a human. Can you quote the claim that there is no
 such thing






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
Wait, now I'm confused.

I think I misunderstood your question.

Bounded-weight codes correspond to the case where the assemblies themselves
can have n or fewer neurons, rather than exactly n.

Constant-weight codes correspond to assemblies with exactly n neurons.

A complication btw is that an assembly can hold multiple memories in
multiple attractors.  For instance using Storkey's palimpsest model a
completely connected assembly with n neurons can hold about .25n attractors,
where each attractor has around .5n neurons switched on.

In a constant-weight code, I believe the numbers estimated tell you the
number of sets where the Hamming distance is greater than or equal to d.
The idea in coding is that the code strings denoting distinct messages
should not be closer to each other than d.
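
To make the mapping explicit (this is just standard set algebra, nothing new to
the thread): if two assemblies A and B each contain S neurons and are viewed as
binary indicator vectors of length N, then

  d_H(A,B) = |A \setminus B| + |B \setminus A| = 2(S - |A \cap B|)

so demanding overlap |A \cap B| <= O is the same as demanding Hamming distance
d_H >= 2(S - O).  That is how the bounded-overlap assembly question turns into an
A(n, d, w) constant-weight-code question, up to the exact convention used earlier
in the thread for what counts as "no more than O" overlap.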

But I'm not sure I'm following your notation exactly.

ben g

On Mon, Oct 20, 2008 at 3:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



 I also don't understand whether A(n,d,w) is the number of sets where the
 hamming distance is exactly d (as it would seem from the text of
 http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
 number of set where the hamming distance is d or less.  If the former case
 is true then the lower bounds given in the tables would actually be lower
 than the actual lower bounds for the question I asked, which would
 correspond to all cases where the hamming distance is d or less.



 The case where the Hamming distance is d or less corresponds to a
 bounded-weight code rather than a constant-weight code.

 I already forwarded you a link to a paper on bounded-weight codes, which
 are also combinatorially intractable and have been studied only via
 computational analysis.

 -- Ben G




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben,

I agree that these issues don't need to have much to do with
implementation... William Pearson convinced me of that, since his
framework is about as general as general can get. His idea is to
search the space of *internal* programs rather than *external* ones,
so that we aren't assuming that the universe is computable, we are
just assuming that *we* are. This is like the Goedel Machine, except
Will's doesn't need to prove the correctness of its next version, so
it wouldn't run into the incompleteness of its logic. So, one can say,
If there is an AGI program that can be implemented on this hardware,
then we can find it if we set up a good enough search.

Of course, good enough search is highly nontrivial. The point is, it
circumvents all the foundational logical issues by saying that if
logic X really does work better than logic Y, the machine should
eventually notice and switch, assuming it has time/resources to try
both. (Again, if I could formalize this for the limit of infinite
computational resources, I'd be happy...)

But, on to those philosophical issues. Generally, all I'm arguing is
that an AGI should be able to admit the possibility of an uncomputable
reality, like you just did.

I am not sure about your statements 1 and 2. Generally responding,
I'll point out that uncomputable models may compress the data better
than computable ones. (A practical example would be fractal
compression of images. Decompression is not exactly a computation
because it never halts, we just cut it off at a point at which the
approximation to the fractal is good.) But more specifically, I am not
sure your statements are true... can you explain how they would apply
to Wei Dai's example of a black box that outputs solutions to the
halting problem? Are you assuming a universe that ends in finite time,
so that the box always has only a finite number of queries? Otherwise,
it is consistent to assume that for any program P, the box is
eventually queried about its halting. Then, the universal statement
The box is always right couldn't hold in any computable version of
U.

--Abram

On Mon, Oct 20, 2008 at 3:01 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Yes, if we live in a universe that has Turing-uncomputable physics, then
 obviously AIXI is not necessarily going to be capable of adequately dealing
 with that universe ... and nor is AGI based on digital computer programs
 necessarily going to be able to equal human intelligence.

 In that case, we might need to articulate new computational models
 reflecting the actual properties of the universe (i.e. new models that
 relate to the newly-understood universe, the same way that AIXI relates to
 an assumed-computable universe).  And we might need to build new kinds of
 computer hardware that make appropriate use of this Turing-uncomputable
 physics.

 I agree this is possible.  I also see no evidence for it.  This is
 essentially the same hypothesis that Penrose has put forth in his books The
 Emperor's New Mind, and Shadows of the Mind; and I found his arguments there
 completely unconvincing.  Ultimately his argument comes down to:

 A)  mathematical thinking doesn't feel computable to me, therefore it
 probably isn't

 B) we don't have a unified theory of physics, so when we do find one it
 might imply the universe is Turing-uncomputable

 Neither of those points constitutes remotely convincing evidence to me, nor
 is either one easily refutable.

 I do have a limited argument against these ideas, which has to do with
 language.   My point is that, if you take any uncomputable universe U, there
 necessarily exists some computable universe C so that

 1) there is no way to distinguish U from C based on any finite set of
 finite-precision observations

 2) there is no finite set of sentences in any natural or formal language
 (where by language, I mean a series of symbols chosen from some discrete
  alphabet) that applies to U but does not also apply to C

 To me, this takes a bit of the bite out of the idea of an uncomputable
 universe.

 Another way to frame this is: I think the notion of a computable universe is
 effectively equivalent to the notion of a universe that is describable in
 language or comprehensible via finite-precision observations.

 And the deeper these discussions get, the more I think they belong on an
 agi-phil list rather than an AGI list ;-) ... I like these sorts of ideas,
 but they really have little to do with creating AGI ...

 -- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: [agi] Re: Value of philosophy

2008-10-20 Thread Dr. Matthias Heger
I think in the past there were always difficult technological problems
leading to a conceptual controversy about how to solve these problems. Time has
always shown which approaches were successful and which were not successful.

The fact that we have so many philosophical discussions shows that we are still
at the beginning. There is still no real evidence that any particular AGI
approach will be successful. Sorry, this is just my opinion. 

And this is the only(!!) reason why AGI doubters can still survive.

I am no AGI doubter at all. In my opinion a lot of people want to make
things more complicated than they are.

AGI is possible! Proof: We exist.

AGI is easy! Proof: Our genome is less than 1GB, i.e. it fits on your
USB stick. How much of it is needed for our brain? Probably Windows Vista needs more
memory than AGI.

We always have to keep in mind the huge computational and memory resources of
the brain, with its massively concurrent computing.
We can therefore assume that a lot of mythical things like creativity are
nothing more than brute-force, giant-database phenomena of the brain.
In particular, as long as there is no evidence that things must be complicated,
we should assume that they are easy.

The AGI community suffers from its own main assumption that AGI is
difficult. 
For instance, things like Gödel's theorem etc. are of no relevance at all.
All we want to build is a finite system with a maximum number of
applications. Gödel says absolutely nothing against this goal.

Further problem: AGI approaches are often overly anthropomorphized
(embodiment, natural language, ... sorry).

-  Matthias



We need to work more on the foundations, to understand whether we are
going in the right direction on at least good enough level to persuade
other people (which is NOT good enough in itself, but barring that,
who are we kidding).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel

 I am not sure about your statements 1 and 2. Generally responding,
 I'll point out that uncomputable models may compress the data better
 than computable ones. (A practical example would be fractal
 compression of images. Decompression is not exactly a computation
 because it never halts, we just cut it off at a point at which the
 approximation to the fractal is good.)


Fractal image compression is computable.


 But more specifically, I am not
 sure your statements are true... can you explain how they would apply
 to Wei Dai's example of a black box that outputs solutions to the
 halting problem? Are you assuming a universe that ends in finite time,
 so that the box always has only a finite number of queries? Otherwise,
 it is consistent to assume that for any program P, the box is
 eventually queried about its halting. Then, the universal statement
 The box is always right couldn't hold in any computable version of
 U.


Based on a finite set of finite-precision observations, there is no way to
distinguish Wei Dai's black box from a black box with a Turing machine
inside.

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mike Tintner


Eric:

Ben Goertzel says that there is no true defined method
to the scientific method (and Mark Waser is clueless for thinking that there
is).


This is pretty profound. I never saw Ben Goertzel abolish the
scientific method. I think he explained that its implementation is
intractable, with reference to expert systems whose domain knowledge
necessarily extrapolates massively to cover fringe cases. A strong AI
would produce its own expert system and could follow the same general
scientific method as a human. Can you quote the claim that there is no
such thing


Eric,

You and MW are clearly as philosophically ignorant as I am in AI. The 
reason there is an extensive discipline called philosophy of science (as 
with every other branch of knowledge) is that there are conflicting 
opinions and arguments about virtually every aspect of science.


Yes, there is a very broad consensus that science - the scientific method - 
generally involves a reliance on evidence, experiment and measurement. But 
exactly what constitutes evidence, and how much is required, and what 
constitutes experiment, either generally or in any particular field, and 
what form theories should take, are open to, and receiving, endless 
discussion. Plus new kinds of all of these are continually being invented.


Hence the wiki entry on scientific method:

Scientific method is not a recipe: it requires intelligence, imagination, 
and creativity


http://en.wikipedia.org/wiki/Scientific_method

This is basic stuff.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
 You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination, 
and creativity
http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
My statement was

***
if you take any uncomputable universe U, there necessarily exists some
computable universe C so that

1) there is no way to distinguish U from C based on any finite set of
finite-precision observations

2) there is no finite set of sentences in any natural or formal language
(where by language, I mean a series of symbols chosen from some discrete
alphabet) that applies to U but does not also apply to C
***

This seems to incorporate the assumption of a finite period of time
because a finite set of sentences or observations must occur during a finite
period of time.

-- Ben G

On Mon, Oct 20, 2008 at 4:19 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 I agree that these issues don't need to have much to do with
 implementation... William Pearson convinced me of that, since his
 framework is about as general as general can get. His idea is to
 search the space of *internal* programs rather than *external* ones,
 so that we aren't assuming that the universe is computable, we are
 just assuming that *we* are. This is like the Goedel Machine, except
 Will's doesn't need to prove the correctness of its next version, so
 it wouldn't run into the incompleteness of its logic. So, one can say,
 If there is an AGI program that can be implemented on this hardware,
 then we can find it if we set up a good enough search.

 Of course, good enough search is highly nontrivial. The point is, it
 circumvents all the foundational logical issues by saying that if
 logic X really does work better than logic Y, the machine should
 eventually notice and switch, assuming it has time/resources to try
 both. (Again, if I could formalize this for the limit of infinite
 computational resources, I'd be happy...)

 But, on to those philosophical issues. Generally, all I'm arguing is
 that an AGI should be able to admit the possibility of an uncomputable
 reality, like you just did.

 I am not sure about your statements 1 and 2. Generally responding,
 I'll point out that uncomputable models may compress the data better
 than computable ones. (A practical example would be fractal
 compression of images. Decompression is not exactly a computation
 because it never halts, we just cut it off at a point at which the
 approximation to the fractal is good.) But more specifically, I am not
 sure your statements are true... can you explain how they would apply
 to Wei Dai's example of a black box that outputs solutions to the
 halting problem? Are you assuming a universe that ends in finite time,
 so that the box always has only a finite number of queries? Otherwise,
 it is consistent to assume that for any program P, the box is
 eventually queried about its halting. Then, the universal statement
 The box is always right couldn't hold in any computable version of
 U.

 --Abram

 On Mon, Oct 20, 2008 at 3:01 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Yes, if we live in a universe that has Turing-uncomputable physics, then
  obviously AIXI is not necessarily going to be capable of adequately
 dealing
  with that universe ... and nor is AGI based on digital computer programs
  necessarily going to be able to equal human intelligence.
 
  In that case, we might need to articulate new computational models
  reflecting the actual properties of the universe (i.e. new models that
  relate to the newly-understood universe, the same way that AIXI relates
 to
  an assumed-computable universe).  And we might need to build new kinds of
  computer hardware that make appropriate use of this Turing-uncomputable
  physics.
 
  I agree this is possible.  I also see no evidence for it.  This is
  essentially the same hypothesis that Penrose has put forth in his books
 The
  Emperor's New Mind, and Shadows of the Mind; and I found his arguments
 there
  completely unconvincing.  Ultimately his argument comes down to:
 
  A)  mathematical thinking doesn't feel computable to me, therefore it
  probably isn't
 
  B) we don't have a unified theory of physics, so when we do find one it
  might imply the universe is Turing-uncomputable
 
  Neither of those points constitutes remotely convincing evidence to me,
 nor
  is either one easily refutable.
 
  I do have a limited argument against these ideas, which has to do with
  language.   My point is that, if you take any uncomputable universe U,
 there
  necessarily exists some computable universe C so that
 
  1) there is no way to distinguish U from C based on any finite set of
  finite-precision observations
 
  2) there is no finite set of sentences in any natural or formal language
  (where by language, I mean a series of symbols chosen from some discrete
   alphabet) that applies to U but does not also apply to C
 
  To me, this takes a bit of the bite out of the idea of an uncomputable
  universe.
 
  Another way to frame this is: I think the notion of a computable universe
 is
  effectively equivalent to the notion of a universe that is describable in
  language or comprehensible 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
 I could have conveyed the nuances of the
 argument better as I understood them.

s/as I/inasmuch as I/

,_,


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
If MW were scientific, then he would not have asked Ben to prove that MW's
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has not given even a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-----Original Message-----
From: Eric Burton [mailto:[EMAIL PROTECTED] 
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI

 You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,
and creativity
http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread David Hart
On Tue, Oct 21, 2008 at 12:56 AM, Dr. Matthias Heger [EMAIL PROTECTED]wrote:

  Any argument of the kind you should better first read xxx + yyy +…  is
 very weak. It is a pseudo killer argument against everything with no content
 at all.

 If  xxx , yyy … contains  really relevant information for the discussion
 then it should be possible to quote the essential part with few lines of
 text.

 If someone is not able to do this he should himself better read xxx, yyy, …
 once again.


I disagree. Books and papers are places to make complex multi-part
arguments. Dragging out those arguments through a series of email-based
soundbites in many cases will not help someone to grok the higher levels of
those arguments, and will constantly miss out on smaller points that fuel
countless unnecessary misunderstandings. We witness these problems and others
(practically daily) on the AGI list.

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ed Porter
Ben, 

 

I am interested in exactly the case where individual nodes partake in
multiple attractors,  

 

I use the notation A(N,O,S), which is similar to the A(n,d,w) formula of
constant-weight codes, except, as Vlad says, you would plug my variables into
the constant-weight formula by using A(N, 2*(S-O+1), S).

 

I have asked my question assuming each node assembly has the same size S,
to make the math easier.  Each such assembly is an autoassociative
attractor.  I want to keep the overlap O low to reduce the cross talk
between attractors.  So the question is how many node assemblies A you can
make having size S and no more than overlap O, given N nodes.
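
To make the question concrete for toy cases, here is a minimal greedy sketch in
Python (illustrative only: the greedy pass constructs a valid family and hence a
lower bound on the maximum A, not the exact answer, and it obviously does not
scale to realistic N):

from itertools import combinations

def greedy_assemblies(N, S, O):
    # Greedily pick S-node subsets of an N-node set so that any two picked
    # subsets share at most O nodes.  This yields a lower bound on A(N,S,O);
    # e.g. with O = 0 it recovers the obvious floor(N/S) disjoint assemblies.
    chosen = []
    for cand in combinations(range(N), S):
        cand = set(cand)
        if all(len(cand & c) <= O for c in chosen):
            chosen.append(cand)
    return chosen

for (N, S, O) in [(10, 3, 0), (10, 3, 1), (20, 4, 1)]:
    print(N, S, O, "->", len(greedy_assemblies(N, S, O)), "assemblies (lower bound)")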

 

Actually the cross talk between auto-associative patterns becomes an even
bigger problem if there are many attractors being activated at once (such as
hundreds of them), but if the signaling driving the populations of
different attractors could have different timing or timing patterns, and if
the auto-associativity was sensitive to such timing, this problem could be
greatly reduced.

 

Ed Porter

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 20, 2008 4:16 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Who is smart enough to answer this question?

 


Wait, now I'm confused.

I think I misunderstood your question.

Bounded-weight codes correspond to the case where the assemblies themselves
can have n or fewer neurons, rather than exactly n.

Constant-weight codes correspond to assemblies with exactly n neurons.

A complication btw is that an assembly can hold multiple memories in
multiple attractors.  For instance using Storkey's palimpsest model a
completely connected assembly with n neurons can hold about .25n attractors,
where each attractor has around .5n neurons switched on.

In a constant-weight code, I believe the numbers estimated tell you the
number of sets where the Hamming distance is greater than or equal to d.
The idea in coding is that the code strings denoting distinct messages
should not be closer to each other than d.

But I'm not sure I'm following your notation exactly.

ben g

On Mon, Oct 20, 2008 at 3:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 

I also don't understand whether A(n,d,w) is the number of sets where the
hamming distance is exactly d (as it would seem from the text of
http://en.wikipedia.org/wiki/Constant-weight_code
http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
number of set where the hamming distance is d or less.  If the former case
is true then the lower bounds given in the tables would actually be lower
than the actual lower bounds for the question I asked, which would
correspond to all cases where the hamming distance is d or less.



The case where the Hamming distance is d or less corresponds to a
bounded-weight code rather than a constant-weight code.

I already forwarded you a link to a paper on bounded-weight codes, which are
also combinatorially intractable and have been studied only via
computational analysis.

-- Ben G

 




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson




 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Abram Demski
Ben,

[my statement] seems to incorporate the assumption of a finite
period of time because a finite set of sentences or observations must
occur during a finite period of time.

A finite set of observations, sure, but a finite set of statements can
include universal statements.

Fractal image compression is computable.

OK, yea, scratch the example. The point would possibly be valid if
fractal compression relied on a superset of the Mandelbrot set's math,
since the computability of that is still open as far as I know.

Based on a finite set of finite-precision observations, there is no
way to distinguish Wei Dai's black box from a black box with a Turing
machine inside.

Sure, but the more observations, the longer the description length of
that Turing machine, so that at some point it will exceed the
description length of the uncomputable alternative.

--Abram


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger




A conceptual framework starts with knowledge representation. Thus a symbol S 
refers to a persistent pattern P which is, in some way or another, a reflection 
of the agent's environment and/or a composition of other symbols. Symbols are 
related to each other in various ways. These relations (such as, is a property 
of, contains, is associated with) are either given or emerge in some kind 
of self-organizing dynamic.

A causal model M is a set of symbols such that the activations of symbols 
S1...Sn are used to infer the future activation of symbol S'. The rules of 
inference are either given or emerge in some kind of self-organizing dynamic.

A conceptual framework refers to the whole set of symbols and their relations, 
which includes all causal models and rules of inference.

Such a framework is necessary for language comprehension because meaning is 
grounded in that framework. For example, the word 'flies' has at least two 
totally distinct meanings, and each is unambiguously evoked only when given the 
appropriate conceptual context, as in the classic example 'time flies like an 
arrow; fruit flies like a banana'.  'Time' and 'fruit' have very different sets 
of relations to other patterns, and these relations can in principle be 
employed to disambiguate the intended meaning of 'flies' and 'like'.

If you think language comprehension is possible with just statistical methods, 
perhaps you can show how they would work to disambiguate the above example.
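
(A minimal illustrative sketch, in Python, of the kind of relational lookup
described above; the symbols, categories and relation names are invented for the
example and are not a claim about anyone's actual system:)

# Toy conceptual framework: symbols connected by typed relations.
RELATIONS = {
    ("time", "is_a"): {"abstract_quantity"},
    ("fruit", "is_a"): {"food"},
    ("fly_verb", "typical_subject"): {"abstract_quantity", "projectile"},
    ("fly_insect", "associated_with"): {"food", "decay"},
}

def score(reading, context_word):
    # Count how many of the context word's categories support this reading.
    context_cats = RELATIONS.get((context_word, "is_a"), set())
    if reading == "fly_verb":
        supported = RELATIONS[("fly_verb", "typical_subject")]
    else:
        supported = RELATIONS[("fly_insect", "associated_with")]
    return len(context_cats & supported)

def disambiguate(context_word):
    return max(["fly_verb", "fly_insect"], key=lambda r: score(r, context_word))

print("time flies ...  ->", disambiguate("time"))    # fly_verb
print("fruit flies ... ->", disambiguate("fruit"))   # fly_insect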




I agree with your framework, but in my approach it is part of nonlinguistic D, 
which is separated from L. D and L interact only during the process of 
translation, but even in this process D and L are separated.





OK, let's look at all 3 cases:

1. Primitive language *causes* reduced abstraction faculties
2. Reduced abstraction faculties *causes* primitive language
3. Primitive language and reduced abstraction faculties are merely correlated; 
neither strictly causes the other

I've been arguing for (1), saying that language and intelligence are 
inseparable (for social intelligences). The sophistication of one's language 
bounds the sophistication of one's conceptual framework. 

In (2), one must be saying of the Pirahã that they are cognitively deficient 
for another reason, and their language is primitive as a result of that 
deficiency. Professor Daniel Everett, the anthropological linguist who first 
described the Pirahã grammar, dismissed this possibility in his paper "Cultural 
Constraints on Grammar and Cognition in Pirahã" (see 
http://www.eva.mpg.de/psycho/pdf/Publications_2005_PDF/Commentary_on_D.Everett_05.pdf):

... [the idea that] the Pirahã are substandard mentally—is easily disposed of. 
The source of this collective conceptual deficit could only be genetics, health, 
or culture. Genetics can be ruled out because the Pirahã people (according to my 
own observations and Nimuendajú's) have long intermarried with outsiders. In fact, 
they have intermarried to the extent that no well-defined phenotype other than 
stature can be identified. Pirahãs also enjoy a good and varied diet of fish, 
game, nuts, legumes, and fruits, so there seems to be no dietary basis for any 
inferiority. We are left, then, with culture, and here my argument is exactly 
that their grammatical differences derive from cultural values. I am not, 
however, making a claim about Pirahã conceptual abilities but about their 
expression of certain concepts linguistically, and this is a crucial difference.

This quote thus also addresses (3), that the language and the conceptual 
deficiency are merely correlated. Everett seems to be arguing for this point, 
that their language and conceptual abilities are both held back by their 
culture. There are questions about the dynamic between culture and language, 
but that's all speculative.

I realize this leaves the issue unresolved. I include it because I raised the 
Piraha example and it would be disingenuous of me to not mention Everett's 
interpretation.



Everett's interpretation is that culture is responsible for reduced abstraction 
faculties. I agree with this. But this does not imply your claim (1) that 
language causes the reduced faculties. The reduced number of cultural 
experiences in which abstraction is important is responsible for the reduced 
abstraction faculties.

 


Of course, but our opinions have consequences, and in debating the consequences 
we may arrive at a situation in which one of our positions appears absurd, 
contradictory, or totally improbable. That is why we debate about what is 
ultimately speculative, because sometimes we can show the falsehood of a 
position without empirical facts.

On to your example. The ability to do algebra is hardly a test of general 
intelligence, as software like Mathematica can do it. One could say that the 
ability to be *taught* how to do algebra reflects general intelligence, but 
again, that involves learning the *language* of mathematical formalism.



Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mike Tintner


Eric: I could have conveyed the nuances of the
argument better as I understood them.



Eric,

My apologies if I've misconstrued you. Regardless of any fault, the basic 
point was/is important. Even if a considerable percentage of science's 
conclusions are v. hard, there is no definitive scientific method for 
reaching them.





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-20 Thread Matt Mahoney
The singularity list is probably more appropriate for philosophical discussions 
about AGI. But good luck on moving such discussions to that list or a new list. 
Philosophical arguments usually result from different interpretations of what 
words mean. But usually the people doing the arguing don't know this.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Matt Mahoney
--- On Mon, 10/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I do have a limited argument against these ideas, which has to do with
 language.   My point is that, if you take any uncomputable universe
 U, there necessarily exists some computable universe C so that

 1) there is no way to distinguish U from C based on any finite set
 of finite-precision observations

 2) there is no finite set of sentences in any natural or formal
 language (where by language, I mean a series of symbols chosen
 from some discrete alphabet) that applies to U but does not
 also apply to C

That is only true in C. In U you might be able to make an infinite number of 
observations with infinite precision.

On Mon, Oct 20, 2008 at 11:23 AM, Abram Demski [EMAIL PROTECTED] wrote:

 As a concrete example, let's say some
 physical constant turns out to be a (whole-number) multiple of
 Chaitin's Omega. Omega cannot be computed, but it can be approximated
 (slowly), so we could after a long time suspect that we had determined
 the first 20 digits (although we would never know for sure!). If a
 physical constant turned out to match (some multiple of) these, we
 would strongly suspect that the rest of the digits matched as well.

You are reasoning by Occam's Razor, but that only holds in a universe where 
AIXI holds. In an uncomputable universe there is no reason to prefer the 
simplest explanation for an observation.

(You might also be able to compute Omega exactly).

Note: I am not suggesting that our universe is not Turing computable. All of 
the evidence suggests that it is.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 5:29 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Ben,

 [my statement] seems to incorporate the assumption of a finite
 period of time because a finite set of sentences or observations must
 occur during a finite period of time.

 A finite set of observations, sure, but a finite set of statements can
 include universal statements.


Ok ... let me clarify what I meant re sentences

I'll define what I mean by a **descriptive sentence**

What I mean
by a sentence is a finite string of symbols drawn from a finite alphabet.

What I mean by a *descriptive sentence* is a sentence that is agreed by
a certain community to denote some subset of the total set of observations
(where all observations have finite precision and are drawn from a certain
finite set).

So, whether or not a descriptive sentence contains universal quantifiers or
quantum-gravity
quantifiers or psychospirituometaphysical quantifiers, or whatever, in the
end
there are some observation-sets it applies to, and some it does not.

Then, what I claim is that any finite set of descriptive sentences
corresponds
to some computable model of reality.  One never needs an uncomputable
model of reality to justify a set of descriptive sentences.



 Fractal image compression is computable.

 OK, yea, scratch the example. The point would possibly be valid if
 fractal compression relied on a superset of the Mandelbrot set's math,
 since the computability of that is still open as far as I know.


No ... because any algorithm that can be implemented on a digital computer,
can obviously be described in purely computable terms, using assembly
language.  Regardless of what uncomputable semantics you may wish to
assign to the expression of the algorithm in some higher-level language.



 Based on a finite set of finite-precision observations, there is no
 way to distinguish Wei Dai's black box from a black box with a Turing
 machine inside.

 Sure, but the more observations, the longer the description length of
 that turing machine, so that at some point it will exceed the
 description length of the uncomputable alternative.



We have to be careful with use of language here.

It is not clear what you really mean by the description length
of something uncomputable, since the essence of uncomputability
is the property of **not being finitely describable**.

One can create a Turing machine that proves theorems about
uncomputable sets ... i.e., that carries out computations that
we can choose to interpret as manipulating uncomputable sets.

Just as, one can create a Turing machine that carries out computations
that we interpret as differential calculus operations, acting on
infinitesimals.

However, even though we call them uncomputable, in reality these
computations are computational, and their so-called uncomputability
is actually just a mapping between one computable formal structure
and another (the first formal structure being the algorithms/structures
carrying out
the computations ... the second formal structure being the formal theory
of computability, which is itself a finite set of axioms that can be
manipulated by a Turing machine ;-) ...

There is no such thing as an uncomputable procedure with a short
description length (where descriptions are finite concatenations of
symbols from a finite vocabulary).  There are however procedures with
short description lengths that have interpretations in terms of
uncomputability --
where these interpretations, if they're to be finitely describable, must
also be computable ;-)

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Ben Goertzel
But, suppose you have two assemblies A and B, which have nA and nB neurons
respectively, and which overlap in O neurons...

It seems that the system's capability to distinguish A from B is going to
depend on the specific **weight matrix** of the synapses inside the
assemblies A and B, not just on the numbers nA, nB and O.

And this weight matrix depends on the statistical properties of the memories
being remembered.

So, these counting arguments you're trying to do are only going to give you
a very crude indication, anyway, right?

ben



On Mon, Oct 20, 2008 at 5:09 PM, Ed Porter [EMAIL PROTECTED] wrote:

  Ben,



 I am interested in exactly the case where individual nodes partake in
 multiple attractors,



 I use the notation A(N,O,S), which is similar to the A(n,d,w) formula of
 constant-weight codes, except, as Vlad says, you would plug my variables into
 the constant-weight formula by using A(N, 2*(S-O+1), S).



 I have asked my question assuming each node assembly has the same size S,
 to make the math easier.  Each such assembly is an autoassociative
 attractor.  I want to keep the overlap O low to reduce the cross talk
 between attractors.  So the question is how many node assemblies A you can
 make having size S and no more than overlap O, given N nodes.



 Actually the cross talk between auto-associative patterns becomes an even
 bigger problem if there are many attractors being activated at once (such as
 hundreds of them), but if the signaling driving the populations of
 different attractors could have different timing or timing patterns, and if
 the auto-associativity was sensitive to such timing, this problem could be
 greatly reduced.



 Ed Porter



 -Original Message-
 *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]
 *Sent:* Monday, October 20, 2008 4:16 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] Who is smart enough to answer this question?




 Wait, now I'm confused.

 I think I misunderstood your question.

 Bounded-weight codes correspond to the case where the assemblies themselves
 can have n or fewer neurons, rather than exactly n.

 Constant-weight codes correspond to assemblies with exactly n neurons.

 A complication btw is that an assembly can hold multiple memories in
 multiple attractors.  For instance using Storkey's palimpsest model a
 completely connected assembly with n neurons can hold about .25n attractors,
 where each attractor has around .5n neurons switched on.

 In a constant-weight code, I believe the numbers estimated tell you the
 number of sets where the Hamming distance is greater than or equal to d.
 The idea in coding is that the code strings denoting distinct messages
 should not be closer to each other than d.

 But I'm not sure I'm following your notation exactly.

 ben g

 On Mon, Oct 20, 2008 at 3:19 PM, Ben Goertzel [EMAIL PROTECTED] wrote:



  I also don't understand whether A(n,d,w) is the number of sets where the
 hamming distance is exactly d (as it would seem from the text of
 http://en.wikipedia.org/wiki/Constant-weight_code ), or whether it is the
 number of set where the hamming distance is d or less.  If the former case
 is true then the lower bounds given in the tables would actually be lower
 than the actual lower bounds for the question I asked, which would
 correspond to all cases where the hamming distance is d or less.



 The case where the Hamming distance is d or less corresponds to a
 bounded-weight code rather than a constant-weight code.

 I already forwarded you a link to a paper on bounded-weight codes, which
 are also combinatorially intractable and have been studied only via
 computational analysis.

 -- Ben G






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


[agi] Language learning (was Re: Defining AGI)

2008-10-20 Thread Matt Mahoney

--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string      count
------      -----
1x3=12        760
2x3=12       2030
3x3=12       9190
4x3=12      16200
5x3=12       1540
6x3=12       1010
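
A minimal sketch of that counting trick in Python (the hit counts are hard-coded
stand-ins for the figures above rather than live search-engine queries):

# Pick the x whose phrasing "x*3=12" occurs most often in the corpus.
hits = {1: 760, 2: 2030, 3: 9190, 4: 16200, 5: 1540, 6: 1010}

def solve_by_counts(hit_counts):
    # Return the candidate with the largest count.
    return max(hit_counts, key=hit_counts.get)

print("x * 3 = 12  ->  x =", solve_by_counts(hits))   # prints 4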

More generally, people learn algebra and higher mathematics by induction, by 
generalizing from lots of examples.

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c / b = a

It is the same way we learn grammatical rules, for example converting active to 
passive voice and applying it to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.
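
A toy sketch of applying such a learned template to a novel sentence (Python;
the bare three-word sentences and the reuse of the same verb form are
simplifications for illustration, not a real grammar model):

def active_to_passive(sentence):
    # Apply the learned template 'SUBJ VERB OBJ -> OBJ was VERB by SUBJ'.
    subj, verb, obj = sentence.rstrip(".").split()
    return f"{obj} was {verb} by {subj}."

print(active_to_passive("Bob kissed Alice."))   # Alice was kissed by Bob.
print(active_to_passive("Carol hugged Dave."))  # Dave was hugged by Carol.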

In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...

I understand the objection to learning math and logic in a language model 
instead of coding the rules directly. It is horribly inefficient. I estimate 
that a neural language model with 10^9 connections would need up to 10^18 
operations to learn simple arithmetic like 2+2=4 well enough to get it right 
90% of the time. But I don't know of a better way to learn how to convert 
natural language word problems to a formal language suitable for entering into 
a calculator at the level of an average human adult.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
On Tue, Oct 21, 2008 at 12:07 AM, Ed Porter [EMAIL PROTECTED] wrote:

 I built an excel spread sheet to calculate this for various values of N,S,
 and O.  But when O = zero, the value of C(N,S)/T(N,S,O) doesn't make sense
 for most values of N and S.  For example if N = 100 and S = 10, and O =
 zero, then A should equal 10, not one as it does on the spread sheet.


It's a lower bound.


 I have attached the excel spreadsheet I made to play around with your
 formulas, and a PDF of one page of it, in case you don't have access to
 Excel.


Your spreadsheet doesn't catch it for S=100 and O=1; it explodes when
you try to increase N.
But at S=10, O=2, you can see how the lower bound increases as you
increase N. At N=5000 the lower bound is 6000, at N=10^6 it's 2.5*10^8,
and at N=10^9 it's 2.5*10^14.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com