AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not require much intelligence.

Every email program can receive meaning, store meaning, and express it
outwardly in order to send it to another computer. It can even do so without
any loss of information. In this respect it already outperforms humans, who
have no conscious access to the full meaning (information) in their brains.

The only part that requires much intelligence, from today's point of view,
is learning the process of outwardly expressing meaning, i.e. learning the
language. The understanding of language itself is simple.

To show that intelligence is separable from language understanding, I have
already given the example that a person could have spoken with Einstein
without having had the same intelligence. Another example is people who
cannot hear or speak but are nevertheless intelligent. Their only problem is
obtaining knowledge from other humans, since language is the common social
communication protocol for transferring knowledge from brain to brain.

In my opinion language is overestimated in AI for the following reason:
when we think, we believe that we think in our language. From this we
conclude that our thoughts are inherently structured by linguistic elements.
And if our thoughts are so deeply connected with language, then it is a small
step to conclude that our whole intelligence depends inherently on language.

But this is a misconception.
We do not have conscious control over all of our thoughts; we cannot be
aware of most of the activity within our brains while we think.
Nevertheless, it is very useful, and even essential for human intelligence,
to be able to observe at least a subset of one's own thoughts. It is this
subset which we usually identify with the whole set of thoughts, but in fact
it is just a tiny subset of everything that happens in the 10^11 neurons.
For the top-level observation of its own thoughts the brain uses the learned
language.
But this does not contradict the point that language is just a
communication protocol and nothing else. The brain translates its patterns
into language and routes this information to its own input regions.

The reason why the brain uses language to observe its own thoughts is
probably the following:
If a person A wants to communicate some of its patterns to a person B, then
it has to solve two problems:
1. How to compress the patterns?
2. How to send the patterns to person B?
The solution to both problems is language.

If a brain wants to observe its own thoughts, it has to solve the same
problems.
The thoughts have to be compressed; otherwise you would observe every element
of your thoughts and end up in an explosion of complexity. So why not use the
same compression algorithm that is used for communication with other people?
That is why the brain uses language when it observes its own thoughts.
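To make the compression point concrete, here is a minimal Python sketch (my
illustration, not part of the original argument; the vocabulary vectors are
invented placeholders): internal "patterns" are small vectors, and expressing
a thought means quantizing each pattern to its nearest word in a shared
vocabulary. The same lossy code path could be reused for self-observation by
routing the resulting words back in as input.

import math

VOCABULARY = {                       # hypothetical prototype patterns for a few words
    "dog":   [0.9, 0.1, 0.0],
    "angry": [0.1, 0.9, 0.2],
    "tree":  [0.0, 0.2, 0.9],
}

def nearest_word(pattern):
    """Quantize one internal pattern to the closest word in the shared vocabulary."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(VOCABULARY, key=lambda w: dist(VOCABULARY[w], pattern))

def express(thought):
    """Compress a sequence of internal patterns into a word sequence (lossy)."""
    return [nearest_word(p) for p in thought]

thought = [[0.85, 0.2, 0.05], [0.15, 0.8, 0.1]]   # high-bandwidth internal state
print(express(thought))                            # ['dog', 'angry'] -- compressed and shareable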

This phenomenon leads to the misconception that language is inherently
connected with thoughts and intelligence. In fact it is just a top-level
communication protocol between two brains and within a single brain.

A future AGI will have much broader bandwidth, and even with the current
possibilities of technology, human language would be a weak communication
protocol for its internal observation of its own thoughts.
 
- Matthias


Terren Suydam wrote:


Nice post.

I'm not sure language is separable from any kind of intelligence we can
meaningfully interact with.

It's important to note (at least) two ways of talking about language:

1. specific aspects of language - what someone building an NLP module is
focused on (e.g. the rules of English grammar and such).

2. the process of language - the expression of internal state in some
outward form, in a way that conveys shared meaning.

If we conceptualize language as in #2, we can be talking about a great many
human activities besides conversing: playing chess, playing music,
programming computers, dancing, and so on. And in each example listed there
is a learning curve that goes from pure novice to halting sufficiency to
masterful fluency, just like learning a language. 

So *specific* forms of language (including the non-linguistic) are not in
themselves important to intelligence (perhaps this is Matthias' point?), but
the process of outwardly expressing meaning is fundamental to any social
intelligence.

Terren






Re: [agi] META: A possible re-focusing of this list

2008-10-19 Thread Samantha Atkins
This sounds good to me.  I am much more drawn to topic #1.  Topic #2 I
have seen discussed, recursively and in dozens of variants, in multiple
places.  The only thing I will add about topic #2 is that I very seriously
doubt that current human intelligence, individually or collectively, is
sufficient to address, meaningfully resolve, or even crisply articulate
such questions.  Much more is accomplished by actually looking into
the horse's mouth than by philosophizing endlessly.


- samantha


Ben Goertzel wrote:


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current 
computers, according to designs that can feasibly be implemented by 
moderately-sized groups of people


2)
Discussions about whether the above is even possible -- or whether it 
is impossible because of weird physics, or poorly-defined special 
characteristics of human creativity, or the so-called complex systems 
problem, or because AGI intrinsically requires billions of people and 
quadrillions of dollars, or whatever


Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ... 
certainly, they are valid topics for intellectual inquiry.  

But, to do anything real, you have to make **some** decisions about 
what approach to take, and I decided long ago to take the approach 
of trying to engineer an AGI system.


Now, if someone had a solid argument as to why engineering an AGI 
system is impossible, that would be important.  But that never seems 
to be the case.  Rather, what we hear are long discussions of people's 
intuitions and opinions in this regard.  People are welcome to their 
own intuitions and opinions, but I get really bored scanning through 
all these intuitions about why AGI is impossible.


One possibility would be to more narrowly focus this list, 
specifically on **how to make AGI work**.


If this re-focusing were done, then philosophical arguments about the 
impossibility of engineering AGI in the near term would be judged 
**off topic** by definition of the list purpose.


Potentially, there could be another list, something like 
agi-philosophy, devoted to philosophical and weird-physics and other 
discussions about whether AGI is possible or not.  I am not sure 
whether I feel like running that other list ... and even if I ran it, 
I might not bother to read it very often.  I'm interested in new, 
substantial ideas related to the in-principle possibility of AGI, but 
not interested at all in endless philosophical arguments over various 
people's intuitions in this regard.


One fear I have is that people who are actually interested in building 
AGI could be scared away from this list because of the large volume 
of anti-AGI philosophical discussion, which, I might add, almost never has 
any new content, and mainly just repeats well-known anti-AGI arguments 
(Penrose-like physics arguments ... mind is too complex to engineer, 
it has to be evolved ... no one has built an AGI yet therefore it 
will never be done ... etc.)


What are your thoughts on this?

-- Ben




On Wed, Oct 15, 2008 at 10:49 AM, Jim Bromer [EMAIL PROTECTED] wrote:


On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Actually, I think COMP=false is a perfectly valid subject for
 discussion on this list.

 However, I don't think discussions of the form "I have all the
 answers, but they're top-secret and I'm not telling you, hahaha" are
 particularly useful.

 So, speaking as a list participant, it seems to me this thread has
 probably met its natural end, with this reference to proprietary
 weird-physics IP.

 However, speaking as list moderator, I don't find this thread so
 off-topic or unpleasant as to formally kill the thread.

 -- Ben

If someone doesn't want to get into a conversation with Colin about
whatever it is that he is saying, then they should just exercise some
self-control and refrain from doing so.

I think Colin's ideas are pretty far out there. But that does not mean
that he has never said anything that might be useful.

My offbeat topic (that I believe the Lord may have given me some
direction about a novel approach to logical satisfiability that I am
working on, but that I don't want to discuss the details of the
algorithms until I have gotten a chance to see whether they work)
was never intended to be a discussion about the theory itself.  I
wanted to have a discussion about whether or not a good SAT solution
would have a significant influence on AGI, and whether or not the
unlikely discovery of an unexpected breakthrough on SAT would serve as
rational evidence in support of the theory 

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Samantha Atkins

Matt Mahoney wrote:

--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:

  

It seems clear that without external inputs the amount of improvement
possible is stringently limited.  That is evident from inspection.  But
why the "without input"?  The only evident reason is to ensure the truth
of the proposition, as it doesn't match any intended real-world scenario
that I can imagine.  (I've never considered the Oracle AI scenario [an
AI kept within a black box that will answer all your questions without
inputs] to be plausible.)



If input is allowed, then we can't clearly distinguish between self improvement 
and learning. Clearly, learning is a legitimate form of improvement, but it is 
not *self* improvement.

What I am trying to debunk is the perceived risk of a fast takeoff singularity 
launched by the first AI to achieve superhuman intelligence. In this scenario, 
a scientist with an IQ of 180 produces an artificial scientist with an IQ of 
200, which produces an artificial scientist with an IQ of 250, and so on. I 
argue it can't happen because human level intelligence is the wrong threshold. 
There is currently a global brain (the world economy) with an IQ of around 
10^10, and approaching 10^12.


Oh man.  It is so tempting in today's economic morass to point out the 
obvious stupidity of this purported super-super-genius.  Why would you 
assign such an astronomical intelligence to the economy?  Even from the 
POV of the best of Austrian micro-economic optimism it is not at all 
clear that billions of minds of human-level IQ interacting with one 
another can be said to produce such an enormous multiple of the 
average human IQ.  How much of the advancement of humanity is the 
result of a relatively few exceptionally bright minds rather than the 
billions of lesser intelligences?  Are you thinking more of the entire 
cultural environment rather than specifically the economy?



- samantha





Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
Abram,

I find it more useful to think in terms of Chaitin's reformulation of
Godel's Theorem:

http://www.cs.auckland.ac.nz/~chaitin/sciamer.html

Given any computer program with algorithmic information capacity less than
K, it cannot prove theorems whose algorithmic information content is greater
than K.

Put simply, there are some things our brains are not big enough to prove
true or false.
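For reference, one common way Chaitin's incompleteness result is stated (a
paraphrase in LaTeX, not Ben's exact wording; K(s) denotes the Kolmogorov,
i.e. algorithmic, complexity of the string s):

% Chaitin's incompleteness theorem, one common formulation (paraphrase):
For any consistent, recursively axiomatizable formal system $F$ there is a
constant $c_F$ (roughly the length of the shortest program that enumerates
the theorems of $F$, plus a constant) such that $F$ proves no statement of
the form $K(s) > c_F$, even though $K(s) > c_F$ is true for all but
finitely many strings $s$.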

This is true for quantum computers just as it's true for classical
computers.  Penrose hypothesized it would NOT hold for quantum gravity
computers, but IMO this is a fairly impotent hypothesis because quantum
gravity computers don't exist (even theoretically, I mean: since there is no
unified quantum gravity theory yet).

Penrose assumes that humans don't have this sort of limitation, but I'm not
sure why.

On the other hand, this limitation can be overcome somewhat if you allow the
program P to interact with the external world in a way that lets it be
modified into P1 such that P1 is not computable by P.  In this case P needs
to have a guru (or should I say an oracle ;-) that it trusts to modify
itself in ways it can't understand, or else to be a gambler-type...

You seem almost confused when you say that an AI can't reason about
uncomputable entities.  Of course it can.  An AI can manipulate math symbols
in a certain formal system, and then associate these symbols with the words
"uncomputable entities", and with its own self ... or with us.  This is what
we do.

An AI program can't actually manipulate the uncomputable entities directly,
but what makes you think *we* can, either?


-- Ben G




On Sat, Oct 18, 2008 at 9:54 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Matt,

 I suppose you don't care about Steve's "do not comment" request? Oh
 well, I want to discuss this anyway. 'Tis why I posted in the first
 place.

 No, I do not claim that computer theorem-provers cannot prove Goedel's
 Theorem. It has been done. The objection applies specifically to
 AIXI: AIXI cannot prove Goedel's theorem. More generally, all of AIXI's
 world-models are computable.

 What do I mean when I say "reason about non-computable entities"?
 Well, Goedel's Incompleteness Theorem is a fine example. Another
 example is the way humans can talk about whether a particular program
 will halt. This sort of thing can be done in logical systems by adding
 basic non-computable primitives. A common choice is to add numbers.
 (Numbers may seem like the prototypical computable thing, but any
 logic of numbers is incomplete, as Goedel showed, of course.)

 The broader issue is that *in general*, given any ideal model of
 intelligence similar to AIXI, with a logically defined class of
 world-models, it will be possible to point out something that the
 intelligence cannot possibly reason about -- namely, its own semantics.
 This follows from Tarski's undefinability theorem, and hinges on a few
 assumptions about the meaning of "logically defined".

 I am not altogether sure that the Novamente/OCP design is really an
 approximation of AIXI anyway, *but* I think it is a serious concern.
 If the Novamente/OCP design really solves the (broader) problem, then
 it also solves some key problems in epistemology (specifically, in
 formal theories of truth), so it would be very interesting to see it
 worked out in these terms.

 --Abram

 On Sat, Oct 18, 2008 at 9:12 PM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
  --- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
 
  Non-Constructive Logic: Any AI method that approximates AIXI
  will lack the human capability to reason about non-computable
  entities.
 
  Then how is it that humans can do it? According to the AIXI theorem, if
 we can do this, it makes us less able to achieve our goals because AIXI is
 provably optimal.
 
  Exactly what do you mean by reason about non-computable entities?
 
  Do you claim that a computer could not discover a proof of Goedel's
 incompleteness theorem by brute force search?
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson




AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
What a computer does with the data it receives depends on the information
in the transferred data, on its internal algorithms and on its internal data.
It is the same with humans and natural language.


Language understanding would be useful for teaching the AGI the existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguity. These ambiguities can
be resolved by having knowledge similar to what humans have. But then you
have a recursive problem, because the problem of obtaining this knowledge
has to be solved first.

Nature solves this problem with embodiment. Different people have similar
experiences, since the laws of nature do not depend on space and time.
Therefore we can all imagine a dog which is angry. Since we have experienced
angry dogs but have not experienced angry trees, we can resolve the
linguistic ambiguity of my former example and answer the question: who was
angry?
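A toy sketch of that disambiguation step (the sentence and the numbers are
invented stand-ins for experience-derived knowledge, not anything from the
original example):

# Resolving "Who was angry?" for a sentence like "The dog barked at the tree
# because it was angry": score each candidate referent by how plausibly it
# can be angry, the kind of knowledge embodiment is supposed to supply.
PLAUSIBILITY = {                      # hypothetical experience-derived scores
    ("dog", "angry"):  0.95,
    ("tree", "angry"): 0.01,
}

def resolve_referent(candidates, attribute):
    """Pick the candidate that most plausibly bears the attribute."""
    return max(candidates, key=lambda c: PLAUSIBILITY.get((c, attribute), 0.0))

print(resolve_referent(["dog", "tree"], "angry"))    # -> 'dog'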

The way to obtain knowledge through embodiment is hard and long, even in
virtual worlds.
If the AGI is to understand natural language, it would have to have
experiences similar to those humans have in the real world. But this would
require a very, very sophisticated and rich virtual world. At the very
least, there would have to be angry dogs in the virtual world ;-)

As I have already said, I do not think the ratio of the utility of this
approach to its costs would be positive for a first AGI.

- Matthias











Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread David Hart
An excellent post, thanks!

IMO, it raises the bar for discussion of language and AGI, and should be
carefully considered by the authors of future posts on the topic. If the
AGI list were a forum, Matthias's post should be pinned!

-dave









Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread William Pearson
2008/10/19 Dr. Matthias Heger [EMAIL PROTECTED]:
 The process of outwardly expressing meaning may be fundamental to any social
 intelligence but the process itself needs not much intelligence.

 Every email program can receive meaning, store meaning and it can express it
 outwardly in order to send it to another computer. It even can do it without
 loss of any information. Regarding this point, it even outperforms humans
 already who have no conscious access to the full meaning (information) in
 their brains.

 The only thing which needs much intelligence from the nowadays point of view
 is the learning of the process of outwardly expressing meaning, i.e. the
 learning of language. The understanding of language itself is simple.

I'd disagree: there is another part of dealing with language that we
don't yet have a good idea how to do, namely deciding whether to
assimilate what is said and, if so, how.

If I specify in a language to a computer that it should do something,
it will do it no matter what (as long as I have sufficient authority).
Tell a human to do something, e.g. "wave your hands in the air and
shout", and the human will decide whether to do it based on how much
they trust you and whether they think it is a good idea. That is
generally a good idea in a situation where you are trying to attract
the attention of rescuers, and otherwise likely to make you look silly.

I'm generally in favour of getting some NLU into AIs, mainly because a
lot of the information we have about the world is still in that form,
so an AI without access to that information would have to reinvent it,
which I think would take a long time. Even mathematical proofs are
still written somewhat in natural language. Beyond that, you could work
on machine language understanding, where information is taken in
selectively and judged on its merits, not its security credentials.

  Will Pearson




Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Ben Goertzel
regarding denotational semantics:
I prefer to think of the meaning of X as the fuzzy set of patterns
associated with X.  (In fact, I recall giving a talk on this topic at a
meeting of the American Math Society in 1990 ;-)
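As a rough data-structure sketch of that idea (the pattern names and
membership degrees below are invented placeholders, not anything from the
1990 talk): the meaning of a concept is a mapping from patterns to
membership degrees in [0, 1].

meaning_of_dog = {          # fuzzy set of patterns associated with "dog"
    "four_legged": 0.9,
    "barks":       0.85,
    "furry":       0.8,
    "dangerous":   0.3,
}

def fuzzy_overlap(a, b):
    """Shared patterns of two meanings, with min-combined membership (a common choice)."""
    return {p: min(a[p], b[p]) for p in sorted(set(a) & set(b))}

meaning_of_wolf = {"four_legged": 0.9, "furry": 0.9, "dangerous": 0.8}
print(fuzzy_overlap(meaning_of_dog, meaning_of_wolf))
# {'dangerous': 0.3, 'four_legged': 0.9, 'furry': 0.8}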









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





[agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 The process of outwardly expressing meaning may be fundamental to any social
 intelligence but the process itself needs not much intelligence.

 Every email program can receive meaning, store meaning and it can express it
 outwardly in order to send it to another computer. It even can do it without
 loss of any information. Regarding this point, it even outperforms humans
 already who have no conscious access to the full meaning (information) in
 their brains.

 The only thing which needs much intelligence from the nowadays point of view
 is the learning of the process of outwardly expressing meaning, i.e. the
 learning of language. The understanding of language itself is simple.


Meaning is tricky business. As far as I can tell, the meaning Y of a
system X is given by an external model that relates system X to its meaning Y
(where the meaning may be a physical object, or a class of objects, where
each individual object figures into the model). Formal semantics works
this way (see http://en.wikipedia.org/wiki/Denotational_semantics ).
When you are thinking about an object, the train of thought depends on
your experience of that object, and will influence your behavior in
situations depending on information about that object. Meaning
propagates through the system according to the rules of the model; it
propagates inferentially in the model and not in the system, and so
can reach places and states of the system not at all obviously
concerned with what this semantic model relates them to. And
conversely, meaning doesn't magically appear where the model doesn't say
it does: if the system is broken, meaning is lost, at least until you come
up with another model and relate it to the previous one.
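A toy example in the spirit of denotational semantics (my sketch, not from
the linked article): the "system" is a tiny expression syntax, and its
meaning is assigned entirely by an external model, here the meaning()
function. Change the model and the same syntax denotes something else.

def meaning(expr, env):
    """Denotation of an expression tree under an environment mapping names to numbers."""
    kind = expr[0]
    if kind == "num":                     # ("num", 3)
        return expr[1]
    if kind == "var":                     # ("var", "x")
        return env[expr[1]]
    if kind == "add":                     # ("add", e1, e2)
        return meaning(expr[1], env) + meaning(expr[2], env)
    if kind == "mul":                     # ("mul", e1, e2)
        return meaning(expr[1], env) * meaning(expr[2], env)
    raise ValueError("unknown expression kind: %r" % kind)

expr = ("add", ("num", 2), ("mul", ("var", "x"), ("num", 3)))
print(meaning(expr, {"x": 4}))    # 14 -- the meaning lives in the model, not in the tuples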

When you say that e-mail contains meaning and the network transfers
meaning, it is an assertion about the model of the content of e-mail, a
model that relates meaning in the mind of the writer to bits in the memory
of machines. From this point of view, we can legitimately say that
meaning is transferred and is expressed. But the same meaning doesn't
exist in the e-mails if you cut them off from the mind that expressed the
meaning in the form of e-mails, or from the experience of that transferred
meaning in a mind.

Understanding is the process of integrating different models,
different meanings, different pieces of information, as seen by your
model. It is the ability to translate pieces of information that have
nontrivial structure into your own basis. The normal use of "understanding"
applies only to humans; everything else generalizes this concept in
sometimes very strange ways. When we say that a person understood
something, in this language it's equivalent to the person having
successfully integrated that piece into his mind, with our model of that
person starting to attribute properties of that piece of information
to his thought and behavior.

So, you are cutting this knot at a trivial point. The difficulty is in
the translation, but you point to one side of the translation process
and say that this side is simple, then point to the other and say that
this side is hard. The problem is that it's hard to put a finger on
the point just after translation, but it's easy to see how our
technology, as a physical medium, transfers information ready for
translation. This outward appearance has little bearing on the semantic
models.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 3:09 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 regarding denotational semantics:
 I prefer to think of the meaning of X as the fuzzy set of patterns
 associated with X.  (In fact, I recall giving a talk on this topic at a
 meeting of the American Math Society in 1990 ;-)


I like denotational semantics as an example (even though it doesn't
capture uncertainty), because it's a well-understood semantic model
with meaning assigned to deep intermediate steps, in nontrivial ways.
By analogy to this, it's easier to see how an abstract thought that
relates to a misremembered experience of 20 years ago, and that never
gets outwardly expressed, still has meaning, and which meaning to
assign it.

What form meaning takes depends on the model that assigns meaning to
the system, which, when we cross the line into the realm of human-level
understanding, becomes a mind; and so meaning, in a technical sense,
becomes a functional aspect of AGI. If the AGI works with something called
a "fuzzy set of patterns", then that fuzzy set is the meaning of what it
models. There is of course a second step, when you yourself, as an
engineer, assign meaning to aspects of the operation of the AGI, and to
relations between the AGI and what it models, in your own head, but this
perspective loses technical precision, although to some extent it's
necessary.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mike Tintner

Matthias,

You seem - correct me - to be going a long way round saying that words are 
different from concepts - they're just sound-and-letter labels for concepts, 
which have a very different form. And the processing of words/language is 
distinct from and relatively simple compared to the processing of the 
underlying concepts.


So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words "c-a-t" or "m-i-n-d" or "U-S" or "f-i-n-a-n-c-i-a-l c-r-i-s-i-s"
are distinct from the underlying concepts. The question is: what form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?


You talk of patterns. What patterns, do you think, form the concept of
"mind" that are engaged in thinking about sentence 2? Do you think that
concepts like "mind" or "the US" might involve something much more complex
still? Models? Or is that still way too simple? Spaces?


Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question then
is what form does such a composition take? Do sentences form, say, a
"pattern of patterns", or something like a picture? Or a "blending of
spaces"?


Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million dollars -
something that we know can be cashed in, in an infinite variety of ways, but
that we may not have to start cashing in (when processing) unless it is
really called for - or that we only cash in so far?


P.S. BTW this is the sort of psycho-philosophical discussion that I would 
see as central to AGI, but that most of you don't want to talk about?














AW: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
I agree that understanding is the process of integrating different models,
different meanings, different pieces of information, as seen by your
model. But this integration is just matching, not extending your own model
with new entities. You only match the linguistic entities of received,
linguistically represented information against existing entities of your
model (i.e. some of your existing patterns). If you manage the matching
process successfully, then you have understood the linguistic message.

Natural communication and language understanding are entirely comparable
to common processes in computer science. There is an internal data
representation. A subset of this data is translated into a linguistic string
and transferred to another agent, which retranslates the message before it
possibly, but not necessarily, changes its database.
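A minimal sketch of that analogy (the names are hypothetical; a dict stands
in for the internal representation and JSON for the "language"): the sender
serializes a subset of its state, and the receiver retranslates the message
and matches it against its own store before deciding whether to change
anything.

import json

def express(internal_state, keys):
    """Translate a subset of the internal representation into a language string."""
    return json.dumps({k: internal_state[k] for k in keys})

def understand(message, own_state):
    """Retranslate the message and match it against the receiver's own model."""
    received = json.loads(message)
    matched = {k: v for k, v in received.items() if own_state.get(k) == v}
    novel = {k: v for k, v in received.items() if k not in own_state}
    return matched, novel   # actually updating own_state with `novel` is a separate decision

agent_a = {"dog_is_angry": True, "sky_is_blue": True, "private_detail": 42}
agent_b = {"sky_is_blue": True}

message = express(agent_a, ["dog_is_angry", "sky_is_blue"])
print(understand(message, agent_b))   # ({'sky_is_blue': True}, {'dog_is_angry': True})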

The only reason why natural language understanding is so difficult is
that it needs a lot of knowledge to resolve ambiguities, knowledge which
humans usually gain through their own experience.

But from being able to resolve the ambiguities and carry out the matching
process successfully, you will learn nothing about the creation of patterns
or about how to work intelligently with these patterns. Therefore
communication is separated from these main problems of AGI in the same way
that communication is completely separated from the structure and
algorithms of a computer's database.

Only the process of *learning* such communication would be AI (I am not
sure whether it is AGI). But you cannot learn to communicate if there is
nothing to communicate. So every approach towards AGI via *learning*
language understanding will need at least one further domain for the
content of the communication. Probably you need even more domains, because
the linguistic ambiguities can be resolved only with broad knowledge.

And this is why I say that language understanding would incur costs which
are not necessary. We can build AGI by concentrating all efforts on a
*single* domain with very useful properties (i.e. the domain of
mathematics).
This would avoid the immense costs of simulating real worlds and of
concentrating on *at least two* domains at the same time.

-Matthias



Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov


I think I see what you are trying to communicate. Correct me if I got
something wrong here.
You assume a certain architectural decision for the AIs in question when
you give this interpretation of the process of communication.
Basically, AI1 communicates with AI2, and they both work with two
domains: D and L, D being the internal domain and L being the communication
domain, the stuff that gets sent via e-mail. AI1 translates meaning D1
into message L1, which is transferred as L2 to AI2, which then
translates it into D2. You call the step L2-D2 "understanding" or
"matching", also assuming that this process doesn't need to change
AI2, to make it change its model, to learn. You then suggest that L
doesn't need to be natural language, since the D for natural language is
the real world, the most difficult domain, and that instead we should pick
an easier L and D and work on their interplay.

If AI1 can already translate between D and L, AI2 might need to learn
to translate between L and D on its own, knowing only D at the start,
and it is this ability that you suggest is the central challenge of
intelligence.

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within the AI's cognition (L and D), and
externalizing the communication domain from the core of the AI. Both the
world model and the language model support interaction with the
environment; there is no clear cognitive distinction between them. As a
given, interaction happens at the narrow I/O interface, and anything else
is a design decision for a specific AI (even the invariability of the I/O
interface is one, a simplifying assumption that complicates the semantics
of time and of more radical self-improvement). A sufficiently flexible
cognitive algorithm should be able to integrate facts about any domain,
becoming able to generate appropriate behavior in the corresponding
contexts.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
For the discussion of this subject the details of the pattern representation
are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content against its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns when it has heard just the first syllable of a
word:

http://www.rochester.edu/news/show.php?id=3244
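A trivial sketch of such prefix-based prediction (the words and counts are
invented, just to make the idea concrete):

WORD_FREQUENCY = {"candy": 120, "candle": 40, "canal": 15, "dog": 300}   # hypothetical counts

def predict_completions(prefix):
    """Return candidate completions of the prefix, most frequent first."""
    matches = {w: f for w, f in WORD_FREQUENCY.items() if w.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)

print(predict_completions("can"))   # ['candy', 'candle', 'canal']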

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain every detail
of these patterns by understanding the process of language understanding.
There will probably be many details within these patterns which are only
necessary for internal calculations.
These details will not be visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias










Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben,

I don't know what sounded "almost confused", but anyway it is apparent
that I didn't make my position clear. I am not saying we can
manipulate these things directly via exotic (non)computing.

First, I am very specifically saying that AIXI-style AI (meaning, any
AI that approaches AIXI as resources increase) cannot reason about
uncomputable entities. This is because AIXI entertains only computable
models.
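A minimal sketch of that restriction (my illustration, not AIXI itself): a
Bayesian mixture over an explicitly enumerated class of computable
world-models. Whatever bit sequence is observed, the posterior lives
entirely inside this class; an uncomputable environment simply has no
hypothesis available to receive weight.

HYPOTHESES = {                       # hypothetical toy class of computable models
    "all_zeros":   lambda t: 0,
    "all_ones":    lambda t: 1,
    "alternating": lambda t: t % 2,
}

def posterior(history):
    """Weight each model by whether it reproduces the observed bit history."""
    weights = {}
    for name, model in HYPOTHESES.items():
        consistent = all(model(t) == bit for t, bit in enumerate(history))
        weights[name] = 1.0 / len(HYPOTHESES) if consistent else 0.0
    total = sum(weights.values())
    return {n: (w / total if total else 0.0) for n, w in weights.items()}

def predict_next(history):
    """Mixture probability that the next bit is 1."""
    t = len(history)
    return sum(p * HYPOTHESES[name](t) for name, p in posterior(history).items())

print(posterior([0, 1, 0]))      # all posterior mass on 'alternating'
print(predict_next([0, 1, 0]))   # 1.0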

Second, I am suggesting a broader problem that will apply to a wide
class of formulations of idealized intelligence such as AIXI: if their
internal logic obeys a particular set of assumptions, it will become
prone to Tarski's Undefinability Theorem. Therefore, we humans will be
able to point out a particular class of concepts that it cannot reason
about; specifically, the very concepts used in describing the ideal
intelligence in the first place.
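For reference, the theorem in one common arithmetical form (a paraphrase in
LaTeX, not Abram's wording):

% Tarski's undefinability theorem, one common formulation (paraphrase):
There is no formula $\mathrm{True}(x)$ in the language of first-order
arithmetic such that for every sentence $\varphi$ of that language
\[
  \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner)
    \;\leftrightarrow\; \varphi ,
\]
where $\ulcorner \varphi \urcorner$ is the G\"odel number of $\varphi$.
Arithmetical truth is not arithmetically definable: a system to which the
theorem applies cannot define its own semantics.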

One reasonable way of avoiding the "humans are magic" explanation of
this (or "humans use quantum gravity computing", etc.) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.

I'm not sure the guru explanation is enough... who was the Guru for Humankind?

Thanks,

--Abram


On Sun, Oct 19, 2008 at 5:39 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Abram,

 I find it more useful to think in terms of Chaitin's reformulation of
 Godel's Theorem:

 http://www.cs.auckland.ac.nz/~chaitin/sciamer.html

 Given any computer program with algorithmic information capacity less than
 K, it cannot prove theorems whose algorithmic information content is greater
 than K.

 Put simply, there are some things our  brains are not big enough to prove
 true or false

 This is true for quantum computers just as it's true for classical
 computers.  Penrose hypothesized it would NOT hold for quantum gravity
 computers, but IMO this is a fairly impotent hypothesis because quantum
 gravity computers don't exist (even theoretically, I mean: since there is no
 unified quantum gravity theory yet).

 Penrose assumes that humans don't have this sort of limitation, but I'm not
 sure why.

 On the other hand, this limitation can be overcome somewhat if you allow the
 program P to interact with the external world in a way that lets it be
 modified into P1 such that P1 is not computable by P.  In this case P needs
 to have a guru (or should I say an oracle ;-) that it trusts to modify
 itself in ways it can't understand, or else to be a gambler-type...

 You seem almost confused when you say that an AI can't reason about
 "uncomputable entities".  Of course it can.  An AI can manipulate math
 symbols in a certain formal system, and then associate these symbols with
 the words "uncomputable entities", and with its own self ... or us.  This is
 what we do.

 An AI program can't actually manipulate the uncomputable entities directly ,
 but what makes you think *we* can, either?


 -- Ben G
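
(For reference, the Chaitin result paraphrased above can be stated as follows,
where K(x) is the prefix Kolmogorov complexity -- the algorithmic information
content -- of x, F is any consistent, recursively axiomatized theory able to
express statements about K, and c_F is a constant depending on F:)

    \exists\, c_F \ \text{such that} \ F \nvdash K(x) > c_F \ \text{for any specific string } x,
    \quad \text{although } K(x) > c_F \ \text{holds for all but finitely many } x.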




Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mike Tintner

Matthias,

I take the point that there is vastly more to language understanding than 
the surface processing of words as opposed to concepts.


I agree that it is typically v. fast.

I don't think, though, that you can call any concept a pattern. On the 
contrary, a defining property of concepts, IMO, is that they resist 
reduction to any pattern or structure - which is rather important, since my 
impression is that most AGI-ers live by patterns/structures. Even a concept like 
"triangle" cannot actually be reduced to a pattern. Try it, if you wish.


And the issue of conceptualisation - of what a concept consists of - is 
manifestly an unsolved problem for both cog sci and AI, and of utmost, 
central importance for AGI. We have to understand how the brain performs its 
feats here, because that, at a rough general level, is almost certainly how 
it will *have* to be done. (I can't resist being snide here and saying that 
since this is an unsolved problem, one can virtually guarantee that AGI-ers 
will therefore refuse to discuss it.)


Trying to work out what information the brain handles, for example, when it 
talks about

THE US IS THE HOME OF THE FINANCIAL CRISIS

- what passes - and has to pass - through a mind thinking specifically of 
the financial crisis? - is in some ways as great a challenge as working out 
what the brain's engrams consist of. Clearly it won't be the kind of mere, 
symbolic, dictionary processing that some AGI-ers envisage.


It will be perhaps as complex as the conceptualisation of "party" in:

HOW WAS THE PARTY LAST NIGHT?

where a single word may be used to touch upon, say, over two hours of 
sensory, movie-like experience in the brain.


I partly disagree with you about how we should study all this - it is vital 
to look at how we understand, or rather fail to understand and get confused 
by, concepts and language - which happens all the time. This can tell us a 
great deal about what is going on underneath.



Matthias:
For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represent a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for 
concepts,


which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words c-a-t or m-i-n-d or U-S  or f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of patterns. What patterns, do you think, form the concept of
mind that are engaged in thinking about sentence 2? Do you think that
concepts like mind or the US might involve something much more complex
still? Models? Or is that still way too simple? Spaces?

Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question 
then

is what form does such a composition take? Do sentences form, say, a
pattern of patterns,  or something like a picture? Or a blending of
spaces ?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million 
dollars -
something that we know can be cashed in, in an infinite variety of ways, 
but


that we may not have to start cashing in,  (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

--- On Sun, 10/19/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 Every email program can receive meaning, store meaning and
 it can express it
 outwardly in order to send it to another computer. It even
 can do it without
 loss of any information. Regarding this point, it even
 outperforms humans
 already who have no conscious access to the full meaning
 (information) in
 their brains.

Email programs do not store meaning; they store data. The email program has no 
understanding of the stuff it stores, so this is a poor analogy. 
 
 The only thing which needs much intelligence from the
 nowadays point of view
 is the learning of the process of outwardly expressing
 meaning, i.e. the
 learning of language. The understanding of language itself
 is simple.

Isn't the *learning* of language the entire point? If you don't have an answer 
for how an AI learns language, you haven't solved anything.  The understanding 
of language only seems simple from the point of view of a fluent speaker. 
Fluency, however, should not be confused with a lack of intellectual effort - 
rather, it's a state in which the effort involved is automatic and beyond 
awareness.

 To show that intelligence is separated from language
 understanding I have
 already given the example that a person could have spoken
 with Einstein but
 needed not to have the same intelligence. Another example
 are humans who
 cannot hear and speak but are intelligent. They only have
 the problem to get
 the knowledge from other humans since language is the
 common social
 communication protocol to transfer knowledge from brain to
 brain.

Einstein had to express his (non-linguistic) internal insights in natural 
language and in mathematical language.  In both modalities he had to use his 
intelligence to make the translation from his mental models. 

Deaf people speak in sign language, which is only different from spoken 
language in superficial ways. This does not tell us much about language that we 
didn't already know. 

 In my opinion language is overestimated in AI for the
 following reason:
 When we think we believe that we think in our language.
 From this we
 conclude that our thoughts are inherently structured by
 linguistic elements.
 And if our thoughts are so deeply connected with language
 then it is a small
 step to conclude that our whole intelligence depends
 inherently on language.

It is surely true that much/most of our cognitive processing is not at all 
linguistic, and that there is much that happens beyond our awareness. However, 
language is a necessary tool, for humans at least, to obtain a competent 
conceptual framework, even if that framework ultimately transcends the 
linguistic dynamics that helped develop it. Without language it is hard to see 
how humans could develop self-reflectivity. 

Terren



Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Domain effectiveness (a.k.a. intelligence) is predicated upon having an 
effective internal model of that domain.

Language production is the extraction and packaging of applicable parts of the 
internal model for transmission to others.
Conversely, language understanding is for the reception (and integration) of 
model portions developed by others (i.e. learning from a teacher).

The better your internal models, the more effective/intelligent you are.

BUT!  This also holds true for language!  Concrete, unadorned statements convey 
a lot less information than statements loaded with adjectives, adverbs, or, even 
more markedly, analogies (or innuendos or . . . ).
A child cannot pick up from a sentence that they think they understand (and do 
understand to some degree) the same amount of information that an adult can.
Language is a knowledge domain like any other, and high intelligences can use it 
far more effectively than lower intelligences.

** Or, in other words, I am disagreeing with the statement that "the process 
itself needs not much intelligence".

Saying that "the understanding of language itself is simple" is like saying that 
chess is simple because you understand the rules of the game.
Gödel's Incompleteness Theorem can be used to argue that there is no upper bound 
on the complexity of language, nor on the intelligence necessary to pack and 
extract meaning/knowledge into/from language.

Language is *NOT* just a top-level communications protocol, because it is not 
fully specified and because it is tremendously context-dependent (not to 
mention thoroughly Gödelian).  These two reasons are why it *IS* inextricably 
tied into intelligence.

I *might* agree that the concrete language of lower primates and young children 
is separate from intelligence, but there is far more going on in adult language 
than a simple communications protocol.

E-mail programs are simply point-to-point repeaters of language (NOT meaning!).  
Intelligences generally don't exactly repeat language but *try* to repeat 
meaning.  The game of "telephone" is a tremendous example of why language *IS* 
tied to intelligence (or look at the results of translating simple phrases into 
another language and back -- "The drink is strong but the meat is rotten").  
Translating language to and from meaning (i.e. your domain model) is the 
essence of intelligence.
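
(To make the contrast concrete, a minimal sketch -- the function names and the
association table are invented for illustration only:)

    # Toy contrast: an email client repeats the string exactly; a
    # telephone-game speaker re-encodes its *interpretation* through a
    # private model, so content drifts.

    def email_repeater(message):
        return message                      # exact copy, every hop

    associations = {"drink": "spirit", "strong": "willing",
                    "meat": "flesh", "rotten": "weak"}

    def telephone_hop(message):
        # Each speaker re-tells the message in its own words.
        return " ".join(associations.get(w, w) for w in message.split())

    msg = "the drink is strong but the meat is rotten"
    print(email_repeater(msg))   # the drink is strong but the meat is rotten
    print(telephone_hop(msg))    # the spirit is willing but the flesh is weak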

How simple is the understanding of the above?  How much are you having to fight 
to relate it to your internal model (assuming that it's even compatible :-)?

I don't believe that intelligence is inherently dependent upon language EXCEPT 
that language is necessary to convey knowledge/meaning (in order to build 
intelligence in a reasonable timeframe), and that language is influenced by and 
influences intelligence, since it is basically the core of the critical 
meta-domains of teaching, learning, discovery, and alteration of your internal 
model (the effectiveness of which *IS* intelligence).  Future AGIs and humans 
will undoubtedly not only have a much richer language but also a much richer 
repertoire of second-order (and higher) features expressed via language.

** Or, in other words, I am strongly disagreeing that intelligence is 
separated from language understanding.  I believe that language understanding 
is the necessary tool that intelligence is built with, since it is what puts the 
*contents* of intelligence (i.e. the domain model) into intelligence.  Trying 
to build an intelligence without language understanding is like trying to build 
it in raw machine language, or from bare observable data points, rather than 
building up more complex entities - third-, fourth-, and fifth-generation 
programming languages instead of machine language, and knowledge instead of 
mere data points.

BTW -- Please note, however, that the above does not imply that I believe that 
NLU is the place to start in developing AGI.  Quite the contrary -- NLU rests 
upon such a large domain model that I believe it is counter-productive to 
start there.  I believe that we need to start with limited domains and learn 
about language, internal models, and grounding without brittleness in tractable 
domains before attempting to extend that knowledge to larger domains.

  - Original Message - 
  From: David Hart 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:30 AM
  Subject: Re: AW: [agi] Re: Defining AGI



  An excellent post, thanks!

  IMO, it raises the bar for discussion of language and AGI, and should be 
carefully considered by the authors of future posts on the topic of language 
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

  -dave


  On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

The process of outwardly expressing meaning may be fundamental to any social
intelligence but the process itself needs not much intelligence.

Every email program can receive meaning, store meaning and 

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.


This is what I disagree entirely with.  If nothing else, humans are 
constantly building and updating their mental model of what other people 
believe and how they communicate it.  Only in routine, pre-negotiated 
conversations can language be entirely devoid of learning.  Unless a 
conversation is entirely concrete and based upon something like shared 
physical experiences, it can't be any other way.  You're only paying 
attention to the absolutely simplest things that language does (i.e. the tip 
of the iceberg).



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 10:31 AM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]


For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represent a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for 
concepts,


which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words c-a-t or m-i-n-d or U-S  or f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of patterns. What patterns, do you think, form the concept of
mind that are engaged in thinking about sentence 2? Do you think that
concepts like mind or the US might involve something much more complex
still? Models? Or is that still way too simple? Spaces?

Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question 
then

is what form does such a composition take? Do sentences form, say, a
pattern of patterns,  or something like a picture? Or a blending of
spaces ?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million 
dollars -
something that we know can be cashed in, in an infinite variety of ways, 
but


that we may not have to start cashing in,  (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes with the data it receives depends on the
information

of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities
can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to
obtain
this knowledge.

Nature solves this problem with embodiment. Different people make similar
experiences since the laws of nature do not depend on space and time.
Therefore we all can imagine a dog which is angry. Since we have
experienced
angry dogs but we haven't experienced angry trees we can resolve the
linguistic ambiguity of my former example and answer the question: Who 
was

angry?

The way to obtain knowledge with embodiment is hard and long even in
virtual
worlds.
If the 

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


Read Pinker's "The Stuff of Thought".  Actually, a lot of these details *are* 
visible from a linguistic point of view.



AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
The process of changing the internal model does not belong to language
understanding.
Language understanding ends when the matching process is finished. Language
understanding can be strictly separated, conceptually, from the creation and
manipulation of patterns, just as you can separate the process of communication
from the process of manipulating the database in a computer.
You can see it differently, but then everything is only a discussion about
definitions.
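
(A minimal sketch of the claimed separation, with all names invented for
illustration: the communication layer only translates between wire format and
an internal record, and manipulating the stored model is a distinct step that
the communication layer knows nothing about:)

    import json

    def receive(wire_text):            # communication: wire format -> internal record
        return json.loads(wire_text)

    def send(record):                  # communication: internal record -> wire format
        return json.dumps(record)

    def update_model(db, record):      # separate process: change the stored model
        db[record["id"]] = record
        return db

    db = {}
    incoming = receive('{"id": "fact-1", "content": "dogs can be angry"}')
    db = update_model(db, incoming)    # "understanding" ended at receive()
    print(send(db["fact-1"]))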

- Matthias


Mark Waser [mailto:[EMAIL PROTECTED] wrote

Sent: Sunday, 19 October 2008 19:00
To: agi@v2.listbox.com
Subject: Re: [agi] Words vs Concepts [ex Defining AGI]

 There is no creation of new patterns and there is no intelligent algorithm
 which manipulates patterns. It is just translating, sending, receiving and
 retranslating.

This is what I disagree entirely with.  If nothing else, humans are 
constantly building and updating their mental model of what other people 
believe and how they communicate it.  Only in routine, pre-negotiated 
conversations can language be entirely devoid of learning.  Unless a 
conversation is entirely concrete and based upon something like shared 
physical experiences, it can't be any other way.  You're only paying 
attention to the absolutely simplest things that language does (i.e. the tip

of the iceberg).








AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
If some details of the internal structure of patterns are visible, then that
is no proof at all that there are not also details of the structure which are
completely hidden from the linguistic point of view.

Since in many communicating technical systems there are so many details
which are not transferred, I would bet that this is also the case in humans.

As long as we have no proof, this remains an open question. An AGI whose
patterns may have internal features would have fewer restrictions and would
thus be far easier to build.

- Matthias.


Mark Waser [mailto:[EMAIL PROTECTED] wrote

Read Pinker's The Stuff of Thought.  Actually, a lot of these details *are* 
visible from a linguistic point of view.







AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of translating patterns into language should be easier than the
process of creating patterns or manipulating patterns. Therefore I say that
language understanding is easy. 

 

When you say that language is not fully specified, then you probably imagine
an AGI which learns language.

This is a completely different thing. Learning language is difficult, as I
have already mentioned.

 

Language cannot be translated into meaning. Meaning is a mapping from a
linguistic string to patterns.

 

Email programs are not just point-to-point repeaters.

They receive data in a certain communication protocol. They translate these
data into an internal representation and store the data. And they can
translate their internal data into a linguistic representation to send the
data to another email client. This process of communication is conceptually
the same as what we observe in humans.

The word "meaning" was badly chosen on my part. But brains do not transfer
meaning either. They also just transfer data. Meaning is a mapping.
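
(A toy sketch of "meaning is a mapping" -- the pattern identifiers are
invented for illustration. Only the string crosses the channel; each agent's
meaning is its own mapping from strings to internal patterns:)

    agent_a = {"angry": "pattern-0815", "dog": "pattern-4711"}
    agent_b = {"angry": "pattern-12",   "dog": "pattern-93"}

    def interpret(mapping, sentence):
        # The same transferred data activates whatever the receiver maps it to.
        return [mapping.get(word, "unknown-pattern") for word in sentence.split()]

    sentence = "angry dog"                 # the only thing transferred
    print(interpret(agent_a, sentence))    # ['pattern-0815', 'pattern-4711']
    print(interpret(agent_b, sentence))    # ['pattern-12', 'pattern-93']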

 

You *believe* that language cannot be separated from intelligence. I don't,
and I have described a model with a strict separation. Neither of us has a
proof.

 

- Matthias

 

 

Mark Waser [mailto:[EMAIL PROTECTED]  wrote



 

 

BUT!  This also holds true for language!  Concrete unadorned statements
convey a lot less information than statements loaded with adjectives,
adverbs, or even more markedly analogies (or innuendos or . . . ).

A child cannot pick up the same amount of information from a sentence that
they think that they understand (and do understand to some degree) that an
adult can.

Language is a knowledge domain like any other and high intelligences can use
it far more effectively than lower intelligences.

 

** Or, in other words, I am disagreeing with the statement that the process
itself needs not much intelligence.

 

Saying that the understanding of language itself is simple is like saying
that chess is simple because you understand the rules of the game.

Godel's Incompleteness Theorem can be used to show that there is no upper
bound on the complexity of language and the intelligence necessary to pack
and extract meaning/knowledge into/from language.

 

Language is *NOT* just a top-level communications protocol because it is not
fully-specified and because it is tremendously context-dependent (not to
mention entirely Godellian).  These two reasons are why it *IS* inextricably
tied into intelligence.

 

I *might* agree that the concrete language of lower primates and young
children is separate from intelligence, but there is far more going on in
adult language than a simple communications protocol.

 

E-mail programs are simply point-to-point repeaters of language (NOT
meaning!)  Intelligences generally don't exactly repeat language but *try*
to repeat meaning.  The game of telephone is a tremendous example of why
language *IS* tied to intelligence (or look at the results of translating
simple phrases into another language and back -- The drink is strong but
the meat is rotten).  Translating language to and from meaning (i.e. your
domain model) is the essence of intelligence.

 

How simple is the understanding of the above?  How much are you having to
fight to relate it to your internal model (assuming that it's even
compatible :-)?

 

I don't believe that intelligence is inherent upon language EXCEPT that
language is necessary to convey knowledge/meaning (in order to build
intelligence in a reasonable timeframe) and that language is influenced by
and influences intelligence since it is basically the core of the critical
meta-domains of teaching, learning, discovery, and alteration of your
internal model (the effectiveness of which *IS* intelligence).  Future AGI
and humans will undoubtedly not only have a much richer language but also a
much richer repertoire of second-order (and higher) features expressed via
language.

 

** Or, in other words, I am strongly disagreeing that intelligence is
separated from language understanding.  I believe that language
understanding is the necessary tool that intelligence is built with since it
is what puts the *contents* of intelligence (i.e. the domain model) into
intelligence .  Trying to build an intelligence without language
understanding is like trying to build it with just machine language or by
using only observable data points rather than trying to build those things
into more complex entities like third-, fourth-, and fifth-generation
programming languages instead of machine language and/or knowledge instead
of just data points.

 

BTW -- Please note, however, that the above does not imply that I believe
that NLU is the place to start in developing AGI.  Quite the contrary -- NLU
rests upon such a large domain model that I believe that it is
counter-productive to start there.  I believe that we need to star with
limited domains and learn about language, internal models, and grounding
without brittleness in 

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-19 Thread Matt Mahoney
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  There is
  currently a global brain (the world economy) with an IQ of
  around 10^10, and approaching 10^12.
 
 Oh man.  It is so tempting in today's economic morass
 to point out the 
 obvious stupidity of this purported super-super-genius.  
 Why would you 
 assign such an astronomical intelligence to the economy?  

Without the economy, or the language and culture needed to support it, you 
would be foraging for food and sleeping in the woods. You would not know that 
you could grow crops by planting seeds, or that you could make a spear out of 
sticks and rocks and use it for hunting. There is a 99.9% chance that you would 
starve, because the primitive earth could only support a few million humans, not 
a few billion.

I realize it makes no sense to talk of an IQ of 10^10 when current tests only 
go to about 200. But by any measure of goal achievement, such as dollars earned 
or the number of humans that can be supported, the global brain has enormous 
intelligence. It is a known fact that groups of humans collectively make more 
accurate predictions than their individual members, e.g. prediction markets. 
http://en.wikipedia.org/wiki/Prediction_market
Such markets would not work if the members did not individually think that they 
were smarter than the group (i.e. disagree). You may think you could run the 
government better than the current leadership, but it is a fact that people are 
better off (as measured by GDP and migration) in democracies than in 
dictatorships. Group decision making is also widely used in machine learning, 
e.g. in the PAQ compression programs.
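
(A toy sketch of the group-accuracy effect behind prediction markets and model
averaging -- all numbers are invented, and this is not a model of the economy
itself: averaging many noisy estimates lands much closer to the truth than a
typical individual estimate does:)

    import random
    random.seed(0)

    truth = 100.0
    estimates = [truth + random.gauss(0, 20) for _ in range(1000)]  # noisy individuals

    group_error = abs(sum(estimates) / len(estimates) - truth)
    typical_error = sum(abs(e - truth) for e in estimates) / len(estimates)

    print(round(group_error, 2), round(typical_error, 2))
    # the group error is a small fraction of the typical individual error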

 How much of the advancement of humanity is the 
 result of a relatively few exceptionally bright minds
 rather than the  billions of lesser intelligences? 

Very little, because agents at any intelligence level cannot detect higher 
intelligence. Socrates was executed. Galileo was arrested. Even today, there is 
a span of decades between pioneering scientific work and its recognition with a 
Nobel prize. So I don't expect anyone to recognize the intelligence of the 
economy. But your ability to read this email depends more on circuit board 
assemblers in Malaysia than you are willing to give the world credit for.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished.


What if the matching process is not finished?

This is overly simplistic for several reasons since you're apparently 
assuming that the matching process is crisp, unambiguous, and irreversible 
(and ask Stephen Reed how well that works for TexAI).


It *must* be remembered that the internal model for natural language 
includes such critically entwined and constantly changing information as 
what this particular conversation is about, what the speaker knows, and what 
the speaker's motivations are.  The meaning of sentences can change 
tremendously based upon the currently held beliefs about these questions. 
Suddenly realizing that the speaker is being sarcastic generally reverses 
the meaning of statements.  Suddenly realizing that the speaker is using an 
analogy can open up tremendous vistas for interpretation and analysis.  Look 
at all the problems that people have parsing sentences.



Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.


The reason why you can separate the process of communication from the 
process of manipulating data in a computer is that *data* is crisp and 
unambiguous.  It is concrete and completely specified, as I suggested in my 
initial e-mail.  The model is entirely known and the communication process 
is entirely specified.  None of these things are true of unstructured 
knowledge.


Language understanding emphatically does not meet these requirements so your 
analogy doesn't hold.



You can see it differently but then everything is only a discussion about
definitions.


No, and claiming that everything is just a discussion about definitions is a 
strawman.  Your analogies are not accurate and your model is incomplete. 
You are focusing only on the tip of the iceberg (concrete language as spoken 
by a two-year-old) and missing the essence of NLP.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 1:42 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished. Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.
You can see it differently but then everything is only a discussion about
definitions.

- Matthias




Mark Waser [mailto:[EMAIL PROTECTED] wrote

Sent: Sunday, 19 October 2008 19:00
To: agi@v2.listbox.com
Subject: Re: [agi] Words vs Concepts [ex Defining AGI]

There is no creation of new patterns and there is no intelligent 
algorithm
which manipulates patterns. It is just translating, sending, receiving 
and

retranslating.


This is what I disagree entirely with.  If nothing else, humans are
constantly building and updating their mental model of what other people
believe and how they communicate it.  Only in routine, pre-negotiated
conversations can language be entirely devoid of learning.  Unless a
conversation is entirely concrete and based upon something like shared
physical experiences, it can't be any other way.  You're only paying
attention to the absolutely simplest things that language does (i.e. the 
tip


of the iceberg).














AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger

Terren wrote:


Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything.  The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency however should not be confused with a lack of
intellectual effort - rather, it's a state in which the effort involved is
automatic and beyond awareness.

I don't think that learning language is the entire point. If I have only
learned language, I still cannot create anything. A human who can understand
language is still far from being a good scientist. Intelligence means the
ability to solve problems. Which problems can a system solve if it can do
nothing other than understand language?

Einstein had to express his (non-linguistic) internal insights in natural
language and in mathematical language.  In both modalities he had to use
his intelligence to make the translation from his mental models. 

The point is that someone else could understand Einstein without having the
same intelligence. This is a proof that understanding AI1 does not
necessarily imply having the intelligence of AI1.

Deaf people speak in sign language, which is only different from spoken
language in superficial ways. This does not tell us much about language
that we didn't already know.

But it is a proof that *natural* language understanding is not necessary for
human-level intelligence.
 
It is surely true that much/most of our cognitive processing is not at all
linguistic, and that there is much that happens beyond our awareness.
However, language is a necessary tool, for humans at least, to obtain a
competent conceptual framework, even if that framework ultimately
transcends the linguistic dynamics that helped develop it. Without language
it is hard to see how humans could develop self-reflectivity. 

I have already outlined the process of self-reflectivity: internal patterns
are translated into language, and this is routed to the brain's own input
regions. You *hear* your own thoughts and have the illusion that you think
linguistically.
If you can speak two languages, you can run an easy test: try to think in the
foreign language. It works. If language were inherently involved in the
process of thought, then thinking alternately in two languages would cost the
brain many resources. In fact you just need to use the other
language-translation module. This is a big hint that language and thought do
not have much in common.
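
(A toy sketch of this bilingual argument -- the vocabulary tables are invented
for illustration: one set of internal patterns, plus a separate, swappable
codec per language, so "thinking in" the other language only swaps the codec:)

    patterns = ["HUNGER", "FOOD"]

    codecs = {
        "english": {"HUNGER": "I am hungry", "FOOD": "food"},
        "german":  {"HUNGER": "Ich habe Hunger", "FOOD": "Essen"},
    }

    def verbalize(pattern_list, language):
        # Route the same internal patterns through the chosen language module.
        return "; ".join(codecs[language][p] for p in pattern_list)

    print(verbalize(patterns, "english"))  # I am hungry; food
    print(verbalize(patterns, "german"))   # Ich habe Hunger; Essen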

-Matthias







Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser
If there are some details of the internal structure of patterns visible 
then

this is no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.


True, but visible patterns offer clues for interpretation and analysis.  The 
more that is visible and clear, the less that is ambiguous and needs to be 
guessed at.  This is where your analogy splitting computer communications 
and data updates is accurate because the internal structures have been 
communicated and are shared to the nth degree.



Since in many communicating technical systems there are so much details
which are not transferred I would bet that this is also the case in 
humans.


Details that don't need to be transferred are those which are either known 
by or unnecessary to the recipient.  The former is a guess (unless the 
details were transmitted previously) and the latter is an assumption based 
upon partial knowledge of the recipient.  In a perfect, infinite world, 
details could and should always be transferred.  In the real world, time and 
computational constraints mean that trade-offs need to occur.  This is 
where the essence of intelligence comes into play -- determining which of 
the trade-offs to take to get optimal performance (a.k.a. domain competence).



As long as we have no proof this remains an open question.


What remains an open question?  Obviously there are details which can be 
teased out by behavior and details that can't be easily teased out because 
we have insufficient data to do so.  This is like any other scientific 
examination of any other complex phenomenon.



An AGI which may
have internal features for its patterns would have less restrictions and 
is

thus far easier to build.


Sorry, but I can't interpret this.  An AGI without internal features and 
regularities is an oxymoron and completely nonsensical.  What are you trying 
to convey here?








AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote

What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently 
assuming that the matching process is crisp, unambiguous, and irreversible 
(and ask Stephen Reed how well that works for TexAI).

I do not assume this. Why should I?

It *must* be remembered that the internal model for natural language 
includes such critically entwined and constantly changing information as 
what this particular conversation is about, what the speaker knows, and
what 
the speakers motivations are.  The meaning of sentences can change 
tremendously based upon the currently held beliefs about these questions. 
Suddenly realizing that the speaker is being sarcastic generally reverses 
the meaning of statements.  Suddenly realizing that the speaker is using an

analogy can open up tremendous vistas for interpretation and analysis.
Look 
at all the problems that people have parsing sentences.

If I suddenly realize that the speaker is sarcastic, then I change my
mappings from linguistic entities to pattern entities. Where is the problem?


The reason why you can separate the process of communication with the 
process of manipulating data in a computer is because *data* is crisp and 
unambiguous.  It is concrete and completely specified as I suggested in my 
initial e-mail.  The model is entirely known and the communication process 
is entirely specified.  None of these things are true of unstructured 
knowledge.

You have given no reason why the process of communication can only be
separated from the process of manipulating data if the knowledge is
structured.
In fact there is no reason.



Language understanding emphatically does not meet these requirements so
your 
analogy doesn't hold.

There are no special requirements.

- Matthias





AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
We can assume that the speaking human itself is not aware of every detail of
its patterns. At the very least, these details would probably be hidden from
communication.

-Matthias

Mark Waser wrote

Details that don't need to be transferred are those which are either known 
by or unnecessary to the recipient.  The former is a guess (unless the 
details were transmitted previously) and the latter is an assumption based 
upon partial knowledge of the recipient.  In a perfect, infinite world, 
details could and should always be transferred.  In the real world, time
and 
computational constraints means that trade-offs need to occur.  This is 
where the essence of intelligence comes into play -- determining which of 
the trade-offs to take to get optimal perfomance (a.k.a. domain competence)






Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
 The process of translating patterns into language should be easier than the 
 process of creating patterns or manipulating patterns. 



How is translating patterns into language different from manipulating patterns? 
 It seems to me that they are *exactly* the same thing.  How do you believe 
that they differ?



 Therefore I say that language understanding is easy. 



Do you really believe that if A is easier than B then that makes A easy?  How 
about if A is leaping a tall building in a single bound and B is jumping to the 
moon?



 When you say that language is not fully specified then you probably imagine 
 an AGI which learns language.



Do you believe that language is fully specified?  That we can program English 
into an AGI by hand?



Yes, I imagine that an AGI must have some process for learning language because 
language is necessary for learning knowledge and knowledge is necessary for 
intelligence.  What part of that do you disagree with?  Please be specific.



 This is a complete different thing. Learning language is difficult as I 
 already have mentioned.



And this is where we are not communicating.  Since language is not fully 
specified, the participants in many conversations are *constantly* 
creating and learning language as a part of the process of communication.  This 
is where Gödel's incompleteness comes in.  To be a General Intelligence, you 
must be able to extend beyond what is currently known and specified into new 
domains.  Any time that we are teaching or learning (i.e. modifying our model 
of the world), we are also necessarily extending our models of each other and 
language.  The computer database analogy you are basing your entire argument 
upon does not have the necessary features/complexity to be an accurate or 
useful analogy.



 Email programs are not just point to point repeaters.

 They receive data in a certain communication protocol. They translate these 
 data into an internal representation and store the data. And they can 
 translate their internal data into a linguistic representation to send the 
 data to another email client. This process  of communication is conceptually 
 the same as we can observe it with humans.



Again, I disagree.  You added internal details but the end result after the 
details are hidden is that e-mail programs are just point-to-point repeaters.  
That is why I used the examples (the telephone game and round-trip 
(mis)translations) that I did which you did not address.



 The word meaning was bad chosen from me. But brains do not transfer 
 meaning as well. They also just transfer  data. Meaning is a mapping. 



As I said, brains *try* to transfer meaning (though they must do it via the 
transfer of data).  If you don't believe that brains try (and most frequently 
succeed) to transfer meaning, then we should just agree to disagree.



 You *believe* that language cannot be separated from intelligence. I don't 
 and I have described a model which has a strict separation. We both have no 
 proof.



Three points. 

1.  My statement was that intelligence can't be built without 
language/communication.  That is entirely different from the claim that they 
can't be separated.  I also gave reasoning for why this is the case, which you 
haven't addressed.

2.  Your model has serious flaws that you have not answered.  You are relying 
upon an analogy that has points that you have not shown that you are able to 
defend.  Until you do so, this invalidates your model.

3.  You have not provided a disproof or counter-example to what I am saying.  I 
have clearly specified where your analogy comes up short, and other inaccuracies 
in your statements, while you have not done so for any of mine (other than of 
the "tis too, tis not" variety).



I have had the courtesy to directly address your points with clear 
counter-examples.  Please return the favor and do not simply drop my examples 
without replying to them and revert back to global statements.  Global 
statements are great for an initial exposition but eventually you have to get 
down to the details and work out the nitty-gritty.  Thanks.



Mark




- Original Message - 

  From: Dr. Matthias Heger 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 2:19 PM
  Subject: AW: AW: [agi] Re: Defining AGI


  The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy. 

   

  When you say that language is not fully specified then you probably imagine 
an AGI which learns language.

  This is a complete different thing. Learning language is difficult as I 
already have mentioned.

   

  Language cannot be translated into meaning. Meaning is a mapping from a 
linguistic string to patterns.

   

  Email programs are not just point to point repeaters.

  They receive data in a certain communication protocol. They translate these 
data 

AW: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages cost much less.

If the language must be learned, then things are completely different, and you
are right that interaction with the environment is necessary to learn L.

But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output regions; it sends entities of L.
Therefore there must be a strict separation between the language model and D.

- Matthias


Vladimir Nesov wrote

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within AI's cognition (L and D), and
externalizing communication domain from the core of AI. Both world
model and language model support interaction with environment, there
is no clear cognitive distinction between them. As a given,
interaction happens at the narrow I/O interface, and anything else is
a design decision for a specific AI (even invariability of I/O is, a
simplifying assumption that complicates semantics of time and more
radical self-improvement). Sufficiently flexible cognitive algorithm
should be able to integrate facts about any domain, becoming able to
generate appropriate behavior in corresponding contexts.






Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
I don't think that learning of language is the entire point. If I have 
only
learned language I still cannot create anything. A human who can 
understand

language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can nothing 
else

than language understanding?


Many or most people on this list believe that learning language is an 
AGI-complete task.  What this means is that the skills necessary for 
learning a language are necessary and sufficient for learning any other 
task.  It is not that language understanding gives general intelligence 
capabilities, but that the pre-requisites for language understanding are 
general intelligence (or, that language understanding is isomorphic to 
general intelligence in the same fashion that all NP-complete problems are 
isomorphic).  Thus, the argument actually is that a system that can do 
nothing else than language understanding is an oxymoron.


*Any* human who can understand language beyond a certain point (say, that of 
a slightly sub-average human IQ) can easily be taught to be a good scientist 
if they are willing to play along.  Science is a rote process that can be 
learned and executed by anyone -- as long as their beliefs and biases don't 
get in the way.



Deaf people speak in sign language, which is only different from spoken
language in superficial ways. This does not tell us much about language
that we didn't already know.
But it is a proof that *natural* language understanding is not necessary 
for

human-level intelligence.


This is a bit of a disingenuous side-track that I feel I must address. 
When people say "natural language", the important features are extensibility 
and ambiguity.  If you can handle one extensible and ambiguous language, you 
should have the capabilities to handle all of them.  It's yet another 
definition of GI-complete.  Just look at it as yet another example of 
dealing competently with ambiguous and incomplete data (which is, at root, 
all that intelligence is).


If you can speak two languages then you can make an easy test: Try to think
in the foreign language. It works. If language would be inherently involved
in the process of thoughts then thinking alternatively in two languages
would cost many resources of the brain. In fact you need just use the other
module for language translation. This is a big hint that language and
thoughts do not have much in common.


One thought module, two translation modules -- except that all the 
translation modules really are is label appliers and grammar re-arrangers. 
The heavy lifting is all in the thought module.  The problem is that you are 
claiming that language lies entirely in the translation modules while I'm 
arguing that a large percentage of it is in the thought module.  The fact 
that the translation module has to go to the thought module for 
disambiguation and interpretation (and numerous other things) should make it 
quite clear that language is *not* simply translation.


Further, if you read Pinker's book, you will find that languages have a lot 
more in common than you would expect if language truly were independent of 
and separate from thought (as you are claiming).  Language is built on top 
of the thinking/cognitive architecture (not beside it and not independent of 
it) and could not exist without it.  That is why language is AGI-complete. 
Language also gives an excellent window into many of the features of that 
cognitive architecture and determining what is necessary for language also 
determines what is in that cognitive architecture.  Another excellent window 
is how humans perform moral judgments (try reading Marc Hauser -- either his 
numerous scientific papers or the excellent Moral Minds).  Or, yet another, 
is examining the structure of human biases.




- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:52 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI




Terren wrote:



Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything.  The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency however should not be confused with a lack of
intellectual effort - rather, it's a state in which the effort involved is
automatic and beyond awareness.

I don't think that learning of language is the entire point. If I have only
learned language I still cannot create anything. A human who can understand
language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can do nothing else
than language understanding?


Einstein had to express his (non-linguistic) internal insights in natural
language and in mathematical language.  In both modalities he had to use
his intelligence to make the translation from his 

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser
You have given no reason why the process of communication can only be
separated from the process of manipulating data if the knowledge is
structured. In fact there is no reason.


How do you communicate something for which you have no established 
communications protocol?  If you can answer that, you have solved the 
natural language problem.


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 3:10 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



Mark Waser wrote


What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently
assuming that the matching process is crisp, unambiguous, and irreversible
(and ask Stephen Reed how well that works for TexAI).


I do not assume this. Why should I?


It *must* be remembered that the internal model for natural language
includes such critically entwined and constantly changing information as
what this particular conversation is about, what the speaker knows, and
what the speaker's motivations are.  The meaning of sentences can change
tremendously based upon the currently held beliefs about these questions.
Suddenly realizing that the speaker is being sarcastic generally reverses
the meaning of statements.  Suddenly realizing that the speaker is using an
analogy can open up tremendous vistas for interpretation and analysis.  Look
at all the problems that people have parsing sentences.


If I suddenly realize that the speaker is sarcastic then I change my
mappings from linguistic entities to pattern entities. Where is the problem?




The reason why you can separate the process of communication with the
process of manipulating data in a computer is because *data* is crisp and
unambiguous.  It is concrete and completely specified as I suggested in my
initial e-mail.  The model is entirely known and the communication process
is entirely specified.  None of these things are true of unstructured
knowledge.


You have given no reason why the process of communication can only be
separated from the process of manipulating data if the knowledge is
structured. In fact there is no reason.




Language understanding emphatically does not meet these requirements so your
analogy doesn't hold.


There are no special requirements.

- Matthias





Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

We can assume that the speaking human itself is not aware of every detail
of its patterns. At least these details would probably be hidden from
communication.


Absolutely.  We are not aware of most of our assumptions that are based in 
our common heritage, culture, and embodiment.  But an external observer 
could easily notice them and tease out an awful lot of information about us 
by doing so.


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 3:18 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



We can assume that the speaking human itself is not aware of every detail
of its patterns. At least these details would probably be hidden from
communication.

-Matthias

Mark Waser wrote


Details that don't need to be transferred are those which are either known
by or unnecessary to the recipient.  The former is a guess (unless the
details were transmitted previously) and the latter is an assumption based
upon partial knowledge of the recipient.  In a perfect, infinite world,
details could and should always be transferred.  In the real world, time and
computational constraints mean that trade-offs need to occur.  This is
where the essence of intelligence comes into play -- determining which of
the trade-offs to take to get optimal performance (a.k.a. domain competence).







Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Mark Waser

The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages need much less cost.


Yes!  But the formal languages need to be efficiently extensible as well 
(and ambiguity plays a large part in extensibility which then leads to . . . 
.  :-)


If the language must be learned then things are completely different and you
are right that the interaction with the environment is necessary to learn L.


How do you go from a formal language to a competent description of a messy, 
ambiguous, data-deficient world?  *That* is the natural language question.


What happens if I say that language extensibility is exactly analogous to 
learning which is exactly analogous to internal model improvement?



But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output region but it sends entities of L.
Therefore there must be a strict separation between language model and D.


I disagree with a complete distinction between D and L.  L is a very small 
fraction of D translated for transmission.  However, instead of arguing that 
there must be a strict separation between language model and D, I would 
argue that the more similar the two could be (i.e. the less translation from 
D to L) the better.  Analyzing L in that case could tell you more about D 
than you might think (which is what Pinker and Hauser argue).  It's like 
looking at data to determine an underlying cause for a phenomenon.  Even 
noticing what does and does not vary (and what covaries) tells you a lot 
about the underlying cause (D).



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 3:50 PM
Subject: AW: [agi] Re: Meaning, communication and understanding



The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages need much less cost.

If the language must be learned then things are completely different and you
are right that the interaction with the environment is necessary to learn L.


But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output region but it sends entities of L.
Therefore there must be a strict separation between language model and D.

- Matthias




Vladimir Nesov wrote

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within the AI's cognition (L and D), and
externalizing the communication domain from the core of the AI. Both the
world model and the language model support interaction with the environment;
there is no clear cognitive distinction between them. As a given,
interaction happens at the narrow I/O interface, and anything else is
a design decision for a specific AI (even invariability of I/O is a
simplifying assumption that complicates the semantics of time and of more
radical self-improvement). A sufficiently flexible cognitive algorithm
should be able to integrate facts about any domain, becoming able to
generate appropriate behavior in the corresponding contexts.






Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

Matthias wrote:
 I don't think that learning of language is the entire point. If I have only
 learned language I still cannot create anything. A human who can understand
 language is by far still no good scientist. Intelligence means the ability
 to solve problems. Which problems can a system solve if it can do nothing
 else than language understanding?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever meaning means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people.  

How do you explain that?

 Einstein had to express his (non-linguistic) internal insights in natural
 language and in mathematical language.  In both modalities he had to use
 his intelligence to make the translation from his mental models.

 The point is that someone else could understand Einstein even if he didn't
 have the same intelligence. This is a proof that understanding AI1 does not
 necessarily imply having the intelligence of AI1.

I'm saying that if an AI understands and speaks natural language, you've solved
AGI - your Nobel will be arriving soon.  The difference between AI1 that
understands Einstein, and any AI currently in existence, is much greater than
the difference between AI1 and Einstein.

 Deaf people speak in sign language, which is only different from spoken
 language in superficial ways. This does not tell us much about language
 that we didn't already know.

 But it is a proof that *natural* language understanding is not necessary for
 human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

 I have already outlined the process of self-reflectivity: Internal patterns
 are translated into language.

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

 This is routed to the brain's own input regions. You *hear* your own
 thoughts and have the illusion that you think linguistically.
 If you can speak two languages then you can make an easy test: Try to think
 in the foreign language. It works. If language would be inherently involved
 in the process of thoughts then thinking alternatively in two languages
 would cost many resources of the brain. In fact you need just use the other
 module for language translation. This is a big hint that language and
 thoughts do not have much in common.

 -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation. 

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc). 

Terren



Re: [agi] constructivist issues

2008-10-19 Thread Matt Mahoney
--- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:

 No, I do not claim that computer theorem-provers cannot
 prove Goedel's Theorem. It has been done. The objection applies
 specifically to AIXI-- AIXI cannot prove goedel's theorem.

Yes it can. It just can't understand its own proof in the sense of Tarski's 
undefinability theorem.

Construct a predictive AIXI environment as follows: the environment output 
symbol does not depend on anything the agent does. However, the agent receives 
a reward when its output symbol matches the next symbol input from the 
environment. Thus, the environment can be modeled as a string that the agent 
has the goal of compressing.

Now encode in the environment a series of theorems followed by their proofs. 
Since proofs can be mechanically checked, and therefore found given enough time 
(if the proof exists), the optimal strategy for the agent, according to AIXI, 
is to guess that the environment receives as input a series of theorems and 
that the environment then proves them and outputs the proof. AIXI then 
replicates its guess, thus correctly predicting the proofs and maximizing its 
reward. To prove Goedel's theorem, we simply encode it into the environment 
after a series of other theorems and their proofs.
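As a purely illustrative sketch of this construction in Python: the stream
encoding, the class name, and the trivial repeat-last predictor below are my
own stand-ins, not anything from AIXI itself; the point is only that the
environment ignores the agent and pays 1 per correctly predicted symbol, so
doing well is equivalent to compressing the stream.

# Toy predictive environment: output ignores the agent; reward = 1 per match.
STREAM = list("THM:A;PRF:A_PROOF;THM:B;PRF:B_PROOF;")  # stand-in for theorem/proof pairs

class PredictiveEnvironment:
    def __init__(self, stream):
        self.stream, self.t = stream, 0

    def step(self, predicted_symbol):
        actual = self.stream[self.t]
        self.t += 1
        reward = 1 if predicted_symbol == actual else 0
        return actual, reward            # agent observes the true symbol and its reward

def agent_predict(history):
    # Placeholder predictor: repeat the most recent symbol (no intelligence here).
    return history[-1] if history else "?"

env, history, total = PredictiveEnvironment(STREAM), [], 0
for _ in range(len(STREAM)):
    symbol, reward = env.step(agent_predict(history))
    history.append(symbol)
    total += reward
print("reward:", total, "of", len(STREAM))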

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] constructivist issues

2008-10-19 Thread Ben Goertzel
But, either you're just wrong or I don't understand your wording ... of
course, AIXI **can** reason about uncomputable entities.  If you showed AIXI
the axioms of, say, ZF set theory (including the Axiom of Choice), and
reinforced it for correctly proving theorems about uncomputable entities as
defined in ZF, then after enough reinforcement signals it could learn to
prove these theorems.

ben g

On Sun, Oct 19, 2008 at 10:42 AM, Abram Demski [EMAIL PROTECTED]wrote:

 Ben,

 I don't know what sounded almost confused, but anyway it is apparent
 that I didn't make my position clear. I am not saying we can
 manipulate these things directly via exotic (non)computing.

 First, I am very specifically saying that AIXI-style AI (meaning, any
 AI that approaches AIXI as resources increase) cannot reason about
 uncomputable entities. This is because AIXI entertains only computable
 models.

 Second, I am suggesting a broader problem that will apply to a wide
 class of formulations of idealized intelligence such as AIXI: if their
 internal logic obeys a particular set of assumptions, it will become
 prone to Tarski's Undefinability Theorem. Therefore, we humans will be
 able to point out a particular class of concepts that it cannot reason
 about; specifically, the very concepts used in describing the ideal
 intelligence in the first place.

 One reasonable way of avoiding the humans are magic explanation of
 this (or humans use quantum gravity computing, etc) is to say that,
 OK, humans really are an approximation of an ideal intelligence
 obeying those assumptions. Therefore, we cannot understand the math
 needed to define our own intelligence. Therefore, we can't engineer
 human-level AGI. I don't like this conclusion! I want a different way
 out.

 I'm not sure the guru explanation is enough... who was the Guru for
 Humankind?

 Thanks,

 --Abram


 On Sun, Oct 19, 2008 at 5:39 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Abram,
 
  I find it more useful to think in terms of Chaitin's reformulation of
  Godel's Theorem:
 
  http://www.cs.auckland.ac.nz/~chaitin/sciamer.html
 
  Given any computer program with algorithmic information capacity less than
  K, it cannot prove theorems whose algorithmic information content is greater
  than K.

  Put simply, there are some things our brains are not big enough to prove
  true or false
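One compact way to state the reformulation quoted above (my paraphrase; K(.)
denotes algorithmic information / Kolmogorov complexity and F is the formal
system that the program implements):

\[
\exists\, c_F \ \text{ such that for every string } x:\quad
F \nvdash \text{``}K(x) > c_F\text{''}, \qquad c_F \approx K(F) + O(1).
\]

Roughly: a system whose own description has complexity about K(F) cannot prove
that any particular string has complexity much beyond that bound.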
 
  This is true for quantum computers just as it's true for classical
  computers.  Penrose hypothesized it would NOT hold for quantum gravity
  computers, but IMO this is a fairly impotent hypothesis because quantum
  gravity computers don't exist (even theoretically, I mean: since there is
 no
  unified quantum gravity theory yet).
 
  Penrose assumes that humans don't have this sort of limitation, but I'm
 not
  sure why.
 
  On the other hand, this limitation can be overcome somewhat if you allow
 the
  program P to interact with the external world in a way that lets it be
  modified into P1 such that P1 is not computable by P.  In this case P
 needs
  to have a guru (or should I say an oracle ;-) that it trusts to modify
  itself in ways it can't understand, or else to be a gambler-type...
 
  You seem almost confused when you say that an AI can't reason about
  uncomputable entities.  Of course it can.  An AI can manipulate math
 symbols
  in a certain formal system, and then associate these symbols with the
 words
  uncomputable entities, and with its own self ... or us.  This is what
 we
  do.
 
  An AI program can't actually manipulate the uncomputable entities
 directly ,
  but what makes you think *we* can, either?
 
 
  -- Ben G






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote:


*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.



This is just an opinion and I  strongly disagree with your opinion.
Obviously you overestimate language understanding a lot.



This is a bit of disingenuous side-track that I feel that I must address.
When people say natural language, the important features are extensibility
and ambiguity.  If you can handle one extensible and ambiguous language, you
should have the capabilities to handle all of them.  It's yet another
definition of GI-complete.  Just look at it as yet another example of
dealing competently with ambiguous and incomplete data (which is, at root,
all that intelligence is).


You use your personal definition of natural language. I don't think that
humans are intelligent because they use an ambiguous language. They also
would be intelligent if their language would not suffer from ambiguities.


One thought module, two translation modules -- except that all the
translation modules really are is label appliers and grammar re-arrangers.
The heavy lifting is all in the thought module.  The problem is that you are
claiming that language lies entirely in the translation modules while I'm
arguing that a large percentage of it is in the thought module.  The fact
that the translation module has to go to the thought module for
disambiguation and interpretation (and numerous other things) should make it
quite clear that language is *not* simply translation.


It is still just translation.



Further, if you read Pinker's book, you will find that languages have a lot
more in common than you would expect if language truly were independent of
and separate from thought (as you are claiming).  Language is built on top
of the thinking/cognitive architecture (not beside it and not independent of
it) and could not exist without it.  That is why language is AGI-complete.
Language also gives an excellent window into many of the features of that
cognitive architecture and determining what is necessary for language also
determines what is in that cognitive architecture.  Another excellent window
is how humans perform moral judgments (try reading Marc Hauser -- either his
numerous scientific papers or the excellent Moral Minds).  Or, yet another,
is examining the structure of human biases.


There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: There
is an internal representation of patterns which is completely hidden from
your consciousness. The brain compresses and translates your visual thoughts
and routes the results to its own visual input regions.

As long as there is no real evidence against the model that thoughts are
separated from the way I perceive thoughts (e.g. by language), I do not see
any reason to change my opinion.

- Matthias





AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than is the case for the
pattern for tree.

You don't have to manipulate any patterns and can do the translation.

- Matthias
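A minimal sketch in Python of the disambiguation-by-relation-strength idea
described above; the table of strengths, the names, and the example sentence
are invented purely for illustration:

# Toy association strengths between internal patterns (made-up values).
RELATION = {("dog", "angry"): 0.9, ("tree", "angry"): 0.1}

def resolve(candidates, context):
    # Pick the candidate pattern most strongly related to the context pattern.
    # This is a read-only lookup; no stored pattern is modified.
    return max(candidates, key=lambda c: RELATION.get((c, context), 0.0))

# "The dog next to the tree ... it was angry."  What does "it" refer to?
print(resolve(["dog", "tree"], "angry"))   # -> dog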

Mark Waser wrote:

How do you communicate something for which you have no established 
communications protocol?  If you can answer that, you have solved the 
natural language problem.






Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
 Manipulating of patterns needs reading and writing operations. Data 
 structures will be changed. Translation needs just reading operations to the 
 patterns of the internal model.



So translation is a pattern manipulation where the result isn't stored?
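A tiny sketch of the distinction being argued here, with an invented pattern
store: translation only reads the model and emits words, while manipulation
rewrites it. Whether real language use can stay on the read-only side is
exactly the point in dispute.

model = {"p42": {"label": "dog", "angry": True}}   # hypothetical internal patterns

def translate(pid):
    # Read-only: render a pattern as words without changing anything.
    p = model[pid]
    return "the %s is %s" % (p["label"], "angry" if p["angry"] else "calm")

def manipulate(pid, **changes):
    # Read-write: the stored pattern itself is altered.
    model[pid].update(changes)

print(translate("p42"))            # the dog is angry
manipulate("p42", angry=False)     # thinking/updating mutates the model
print(translate("p42"))            # the dog is calm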



 I disagree that AGI must have some process for learning language. If we 
 concentrate just on the domain of mathematics we could give AGI all the rules 
 for a sufficient language to express its results and to understand our 
 questions.



The domain of mathematics is complete and unambiguous.  A mathematics AI is not 
a GI in my book.  It won't generalize to the real world until it handles 
incompleteness and ambiguity (which is my objection to your main analogy).



(Note:  I'm not saying that it might not be a good first step . . . . but I 
don't believe that it is on the shortest path to GI).



 New definitions makes communication more comfortable but they are not 
 necessary.

 

Wrong.  Methane is not a new definition, it is a new label.  New definitions 
that combine lots of raw data into much more manipulable knowledge are 
necessary exactly as much as a third-, fourth-, or fifth- generation language 
is necessary instead of machine language.



 I don't know the telephone game. The details are essential. It is not 
 essential where the data comes from and where it ends. Just the process of 
 translating internal data into a certain language and vice versa is important.



Start with a circle of people.  Tell the first person a reasonable length 
phrase, have them tell the next, and so on.  The end result is fascinating and 
very similar to what happens when an incompetent pair of translators attempt to 
translate from one language to another and back again.
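A toy simulation of the telephone game just described, assuming a crude
corruption model in which each relay sometimes mis-hears one word; everything
here (error rate, vocabulary, phrase) is invented for illustration:

import random

def relay(phrase, p_error=0.5, vocab=("cat", "hat", "bat", "mat")):
    # One person repeats the phrase, occasionally swapping a word.
    words = phrase.split()
    if words and random.random() < p_error:
        words[random.randrange(len(words))] = random.choice(vocab)
    return " ".join(words)

def telephone(phrase, people=10, seed=1):
    random.seed(seed)
    for _ in range(people):
        phrase = relay(phrase)
    return phrase

print(telephone("the quick brown fox jumps over the lazy dog"))
# After ten lossy relays the phrase has usually drifted well away from the original.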



 It is clear that an AGI needs an interface for human beings. But the question 
 in this discussion is whether the language interface is a key point in AGI or 
 not. In my opinion it is no key point. It is just a communication protocol. 
 The real intelligence has nothing to do with language understanding. 
 Therefore we should use a simple formal hard coded language for first AGI.



The communication protocol needs to be extensible to handle output after 
learning or transition into a new domain.  How do you ground new concepts?  
More importantly, it needs to be extensible to support teaching the AGI.  As I 
keep saying, how are you going to make your communication protocol extensible?  
Real GENERAL intelligence has EVERYTHING to do with extensibility.



 I don't see any problems with my model and I do not see any flaws which I 
 don't have answered.

 I haven't seen any point where my analogy comes short.



I keep pointing out that your model separating communication and database 
updating depends upon a fully specified model and does not tolerate ambiguity 
(i.e. it lacks extensibility and doesn't handle ambiguity).  You continue not 
to answer these points.



Unless you can handle valid objections by showing why they aren't valid, your 
model is disproven by counter-example.



  - Original Message - 
  From: Dr. Matthias Heger 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 4:53 PM
  Subject: AW: AW: [agi] Re: Defining AGI


  Mark Waser wrote:

   

  How is translating patterns into language different from manipulating 
patterns? 

   It seems to me that they are *exactly* the same thing.  How do you believe 
that they differ?

   

  Manipulating of patterns needs reading and writing operations. Data 
structures will be changed. Translation needs just reading operations to the 
patterns of the internal model.

   

   

  Do you really believe that if A is easier than B then that makes A easy? 

   How about if A is leaping a tall building in a single bound and B is 
jumping to the moon?

   

  The word *easy*  is not exactly definable.

   

   

   Do you believe that language is fully specified?  That we can program 
English into an AGI by hand?

   

  No. That's the reason why I would not use human language for the first AGI.

   

  Yes, I imagine that an AGI must have some process for learning language 
because language is necessary for 

  learning knowledge and knowledge is necessary for intelligence.  

  What part of that do you disagree with?  Please be specific.

   

  I disagree that AGI must have some process for learning language. If we 
concentrate just on the domain of mathematics we could give AGI all the rules 
for a sufficient language to express its results and to understand our 
questions.

   

   

   

   

  And this is where we are not communicating.  Since language is not fully 
specified, then the participants in 

  many conversations are *constantly* creating and learning language as a part 
of the process of 

  communication.  This is where Gödel's incompleteness comes in.  To be a 
General Intelligence, you must be able to extend beyond what is currently known 
and specified into new domains.  Any time that we 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone willing 
to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, and 
examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something by the numbers until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed to 
taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI







*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.


  This is obviously spoken by someone who has never been a professional teacher 
;-p

  ben g




Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than is the case for the
pattern for tree.


So, are the relationships between the various patterns in your translation 
module or in your cognitive module?


I would argue that they are in your cognitive module.  If you disagree, then
I'll just agree to disagree, because in order for them to be in your
translation module you'll have to be constantly updating your translation
module, which then contradicts what you said previously about the translation
module being static.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 5:38 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than is the case for the
pattern for tree.

You don't have to manipulate any patterns and can do the translation.

- Matthias

Mark Waser wrote:

How do you communicate something for which you have no established
communications protocol?  If you can answer that, you have solved the
natural language problem.






Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Mark,

It is not the case that I have merely lectured rather than taught.  I've
lectured (math, CS, psychology and futurology) at university, it's true ...
but I've also done extensive one-on-one math tutoring with students at
various levels ... and I've also taught small groups of kids aged 7-12,
hands-on (math  programming), and I've taught retirees various skills
(mostly computer related).

Why can't a stupid person do good science?  Doing science in reality seems
to require a whole bunch of implicit, hard-to-verbalize knowledge that
stupid people just don't seem to be capable of learning.  A stupid person
can possibly be trained to be a good lab assistant, in some areas of science
but not others (it depends on how flaky and how complex the lab technology
involved is in that area).  But, being a scientist involves a lot of
judgment, a lot of heuristic, uncertain reasoning drawing on a wide variety
of knowledge.

Could a stupid person learn to be a good scientist given, say, a thousand
years of training?  Maybe.  But I doubt it, because by the time they had
moved on to learning the second half of what they need to know, they would
have already forgotten the first half ;-p

You work in software engineering -- do you think a stupid person could be
trained to be a really good programmer?  Again, I very much doubt it ...
though they could be (and increasingly are ;-p) trained to do routine
programming tasks.

Inevitably, in either of these cases, the person will encounter some
situation not directly covered in their training, and will need to
intelligently analogize to their experience, and will fail at this because
they are not very intelligent...

-- Ben G

On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Funny, Ben.

 So . . . . could you clearly state why science can't be done by anyone
 willing to simply follow the recipe?

 Is it really anything other than the fact that they are stopped by their
 unconscious beliefs and biases?  If so, what?

 Instead of a snide comment, defend your opinion with facts, explanations,
 and examples of what it really is.

 I can give you all sorts of examples where someone is capable of doing
 something by the numbers until they are told that they can't.

 What do you believe is so difficult about science other than overcoming the
 sub/unconscious?

 Your statement is obviously spoken by someone who has lectured as opposed
 to taught.

 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Sunday, October 19, 2008 5:26 PM
 *Subject:* Re: AW: AW: [agi] Re: Defining AGI




 
 *Any* human who can understand language beyond a certain point (say, that of
 a slightly sub-average human IQ) can easily be taught to be a good scientist
 if they are willing to play along.  Science is a rote process that can be
 learned and executed by anyone -- as long as their beliefs and biases don't
 get in the way.
 


 This is obviously spoken by someone who has never been a professional
 teacher ;-p

 ben g




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser

Interesting how you always only address half my points . . .

I keep hammering extensibility and you focus on ambiguity which is merely 
the result of extensibility.  You refuse to address extensibility.  Maybe 
because it really is the secret sauce of intelligence and the one thing that 
you can't handle?


And after a long explanation, I get comments like "It is still just
translation" with no further explanation, and "visual thought" nonsense
worthy of Mike Tintner.


So, I give up.  I can't/won't debate someone who won't follow scientific 
methods of inquiry.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 5:21 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI



Mark Waser wrote:



*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.



This is just an opinion and I  strongly disagree with your opinion.
Obviously you overestimate language understanding a lot.





This is a bit of disingenuous side-track that I feel that I must address.
When people say natural language, the important features are extensibility
and ambiguity.  If you can handle one extensible and ambiguous language, you
should have the capabilities to handle all of them.  It's yet another
definition of GI-complete.  Just look at it as yet another example of
dealing competently with ambiguous and incomplete data (which is, at root,
all that intelligence is).


You use your personal definition of natural language. I don't think that
humans are intelligent because they use an ambiguous language. They also
would be intelligent if their language would not suffer from ambiguities.




One thought module, two translation modules -- except that all the
translation modules really are is label appliers and grammar re-arrangers.
The heavy lifting is all in the thought module.  The problem is that you are
claiming that language lies entirely in the translation modules while I'm
arguing that a large percentage of it is in the thought module.  The fact
that the translation module has to go to the thought module for
disambiguation and interpretation (and numerous other things) should make it
quite clear that language is *not* simply translation.


It is still just translation.




Further, if you read Pinker's book, you will find that languages have a lot
more in common than you would expect if language truly were independent of
and separate from thought (as you are claiming).  Language is built on top
of the thinking/cognitive architecture (not beside it and not independent of
it) and could not exist without it.  That is why language is AGI-complete.
Language also gives an excellent window into many of the features of that
cognitive architecture and determining what is necessary for language also
determines what is in that cognitive architecture.  Another excellent window
is how humans perform moral judgments (try reading Marc Hauser -- either his
numerous scientific papers or the excellent Moral Minds).  Or, yet another,
is examining the structure of human biases.


There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: There
is an internal representation of patterns which is completely hidden from
your consciousness. The brain compresses and translates your visual thoughts
and routes the results to its own visual input regions.

As long as there is no real evidence against the model that thoughts are
separated from the way I perceive thoughts (e.g. by language), I do not see
any reason to change my opinion.

- Matthias





Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Actually, I should have drawn a distinction . . . . there is a major difference 
between performing discovery as a scientist and evaluating data as a scientist. 
 I was referring to the latter (which is similar to understanding Einstein) as 
opposed to the former (which is being Einstein).  You clearly are referring to 
the creative act of discovery (Programming is also a discovery operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
 programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality seems to 
require a whole bunch of implicit, hard-to-verbalize knowledge that stupid 
people just don't seem to be capable of learning.  A stupid person can possibly 
be trained to be a good lab assistant, in some areas of science but not others 
(it depends on how flaky and how complex the lab technology involved is in that 
area).  But, being a scientist involves a lot of judgment, a lot of heuristic, 
uncertain reasoning drawing on a wide variety of knowledge.

  Could a stupid person learn to be a good scientist given, say, a thousand 
years of training?  Maybe.  But I doubt it, because by the time they had moved 
on to learning the second half of what they need to know, they would have 
already forgotten the first half ;-p

  You work in software engineering -- do you think a stupid person could be 
trained to be a really good programmer?  Again, I very much doubt it ... though 
they could be (and increasingly are ;-p) trained to do routine programming 
tasks.  

  Inevitably, in either of these cases, the person will encounter some 
situation not directly covered in their training, and will need to 
intelligently analogize to their experience, and will fail at this because they 
are not very intelligent...

  -- Ben G


  On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser [EMAIL PROTECTED] wrote:

Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone 
willing to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, 
and examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something by the numbers until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed 
to taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI







*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.


  This is obviously spoken by someone who has never been a professional 
teacher ;-p

  ben g






  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr Samuel Johnson






Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Whether a stupid person can do good scientific evaluation if taught the
rules is a badly-formed question, because no one knows what the rules
are.   They are learned via experience just as much as by explicit teaching

Furthermore, as anyone who has submitted a lot of science papers to journals
knows, even smart scientists can be horrendously bad at scientific
evaluation.  I've had some really good bioscience papers rejected from
journals, by presumably intelligent referees, for extremely bad reasons (and
these papers were eventually published in good journals).

Evaluating research is not much easier than doing it.  When is someone's
supposed test of statistical validity really the right test?  Too many
biology referees just look for the magic number of p < .05 rather than
understanding what test actually underlies that number, because they don't
know the math or don't know how to connect the math to the experiment in a
contextually appropriate way.

As another example: When should a data point be considered an outlier
(meaning: probably due to equipment error or some other quirk) rather than a
genuine part of the data?  Tricky.  I recall Feynman noting that he was held
back in making a breakthrough discovery for some time, because of an outlier
on someone else's published data table, which turned out to be spurious but
had been accepted as valid by the community.  In this case, Feynman's
exceptional intelligence allowed him to carry out scientific evaluation more
effectively than other, intelligent but less-so-than-him, had done...
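A small made-up illustration of that judgment call: whether one suspicious
point is treated as an outlier or as data can flip the apparent conclusion,
and no rote rule decides which choice is right. The numbers below are
invented, and only a Welch t statistic is computed (standard library only, no
p-value lookup); the point is just how much the statistic moves.

import statistics as st
from math import sqrt

def welch_t(a, b):
    # Welch's t statistic for two independent samples; only the size of |t|
    # matters for this illustration, not an exact p-value.
    return (st.mean(a) - st.mean(b)) / sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))

control   = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
treatment = [10.9, 11.2, 10.8, 11.0, 11.1, 4.2]   # is 4.2 an equipment glitch or real?

print(round(welch_t(control, treatment), 2))        # outlier kept: |t| is small, "no effect"
print(round(welch_t(control, treatment[:-1]), 2))   # outlier dropped: |t| is large, "clear effect"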

-- Ben G

On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Actually, I should have drawn a distinction . . . . there is a major
 difference between performing discovery as a scientist and evaluating data
 as a scientist.  I was referring to the latter (which is similar to
 understanding Einstein) as opposed to the former (which is being Einstein).
 You clearly are referring to the creative act of discovery (Programming is
 also a discovery operation).

 So let me rephrase my statement -- Can a stupid person do good scientific
 evaluation if taught the rules and willing to abide by them?  Why or why
 not?

 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Sunday, October 19, 2008 5:52 PM
 *Subject:* Re: AW: AW: [agi] Re: Defining AGI


 Mark,

 It is not the case that I have merely lectured rather than taught.  I've
 lectured (math, CS, psychology and futurology) at university, it's true ...
 but I've also done extensive one-on-one math tutoring with students at
 various levels ... and I've also taught small groups of kids aged 7-12,
 hands-on (math  programming), and I've taught retirees various skills
 (mostly computer related).

 Why can't a stupid person do good science?  Doing science in reality seems
 to require a whole bunch of implicit, hard-to-verbalize knowledge that
 stupid people just don't seem to be capable of learning.  A stupid person
 can possibly be trained to be a good lab assistant, in some areas of science
 but not others (it depends on how flaky and how complex the lab technology
 involved is in that area).  But, being a scientist involves a lot of
 judgment, a lot of heuristic, uncertain reasoning drawing on a wide variety
 of knowledge.

 Could a stupid person learn to be a good scientist given, say, a thousand
 years of training?  Maybe.  But I doubt it, because by the time they had
 moved on to learning the second half of what they need to know, they would
 have already forgotten the first half ;-p

 You work in software engineering -- do you think a stupid person could be
 trained to be a really good programmer?  Again, I very much doubt it ...
 though they could be (and increasingly are ;-p) trained to do routine
 programming tasks.

 Inevitably, in either of these cases, the person will encounter some
 situation not directly covered in their training, and will need to
 intelligently analogize to their experience, and will fail at this because
 they are not very intelligent...

 -- Ben G

 On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Funny, Ben.

 So . . . . could you clearly state why science can't be done by anyone
 willing to simply follow the recipe?

 Is it really anything other than the fact that they are stopped by their
 unconscious beliefs and biases?  If so, what?

 Instead of a snide comment, defend your opinion with facts, explanations,
 and examples of what it really is.

 I can give you all sorts of examples where someone is capable of doing
 something by the numbers until they are told that they can't.

 What do you believe is so difficult about science other than overcoming
 the sub/unconscious?

 Your statement is obviously spoken by someone who has lectured as opposed
 to taught.

  - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
   *Sent:* Sunday, October 19, 2008 5:26 PM
 *Subject:* 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
 Whether a stupid person can do good scientific evaluation if taught the 
 rules is a badly-formed question, because no one knows what the rules are.  
  They are learned via experience just as much as by explicit teaching

Wow!  I'm sorry but that is a very scary, incorrect opinion.  There's a really 
good book called The Game of Science by McCain and Segal that clearly 
explains all of the rules.  I'll get you a copy.

I understand that most scientists aren't trained properly -- but that is no 
reason to continue the problem by claiming that they can't be trained properly.

You make my point with your explanation of your example of biology referees.  
And the Feynman example, if it is the story that I've heard before, was 
actually an example of good science in action because the outlier was 
eventually overruled AFTER ENOUGH GOOD DATA WAS COLLECTED to prove that the 
outlier was truly an outlier and not just a mere inconvenience to someone's 
theory.  Feynman's exceptional intelligence allowed him to discover a 
possibility that might have been correct if the point was an outlier, but good 
scientific evaluation relies on data, data, and more data.  Using that story as 
an example shows that you don't understand how to properly run a scientific 
evaluative process.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 6:07 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Whether a stupid person can do good scientific evaluation if taught the 
rules is a badly-formed question, because no one knows what the rules are.   
They are learned via experience just as much as by explicit teaching

  Furthermore, as anyone who has submitted a lot of science papers to journals 
knows, even smart scientists can be horrendously bad at scientific evaluation.  
I've had some really good bioscience papers rejected from journals, by 
presumably intelligent referees, for extremely bad reasons (and these papers 
were eventually published in good journals).

  Evaluating research is not much easier than doing it.  When is someone's 
supposed test of statistical validity really the right test?  Too many biology 
referees just look for the magic number of p < .05 rather than understanding what 
test actually underlies that number, because they don't know the math or don't 
know how to connect the math to the experiment in a contextually appropriate 
way.

  As another example: When should a data point be considered an outlier 
(meaning: probably due to equipment error or some other quirk) rather than a 
genuine part of the data?  Tricky.  I recall Feynman noting that he was held 
back in making a breakthrough discovery for some time, because of an outlier on 
someone else's published data table, which turned out to be spurious but had 
been accepted as valid by the community.  In this case, Feynman's exceptional 
intelligence allowed him to carry out scientific evaluation more effectively 
than other, intelligent but less-so-than-him, had done...

  -- Ben G


  On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

Actually, I should have drawn a distinction . . . . there is a major 
difference between performing discovery as a scientist and evaluating data as a 
scientist.  I was referring to the latter (which is similar to understanding 
Einstein) as opposed to the former (which is being Einstein).  You clearly are 
referring to the creative act of discovery (Programming is also a discovery 
operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
& programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality 
seems to require a whole bunch of implicit, hard-to-verbalize knowledge that 
stupid people just don't seem to be capable of learning.  A stupid person can 
possibly be trained to be a good lab assistant, in some areas of science but 
not others (it depends on how flaky and how complex the lab technology involved 
is in that area).  But, being a scientist involves a lot of judgment, a lot of 
heuristic, uncertain reasoning drawing on a wide variety of knowledge.

  Could a stupid person learn to be a good scientist given, say, a thousand 
years of training?  Maybe.  But I doubt it, because by the time they had moved 

Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Matt,

Yes, that is completely true. I should have worded myself more clearly.

Ben,

Matt has sorted out the mistake you are referring to. What I meant was
that AIXI is incapable of understanding the proof, not that it is
incapable of producing it. Another way of describing it: AIXI could
learn to accurately mimic the way humans talk about uncomputable
entities, but it would never invent these things on its own.

--Abram

On Sun, Oct 19, 2008 at 4:32 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:

 No, I do not claim that computer theorem-provers cannot
 prove Goedel's Theorem. It has been done. The objection applies
 specifically to AIXI-- AIXI cannot prove goedel's theorem.

 Yes it can. It just can't understand its own proof in the sense of Tarski's 
 undefinability theorem.

 Construct a predictive AIXI environment as follows: the environment output 
 symbol does not depend on anything the agent does. However, the agent 
 receives a reward when its output symbol matches the next symbol input from 
 the environment. Thus, the environment can be modeled as a string that the 
 agent has the goal of compressing.

 Now encode in the environment a series of theorems followed by their proofs. 
 Since proofs can be mechanically checked, and therefore found given enough 
 time (if the proof exists), then the optimal strategy for the agent, 
 according to AIXI is to guess that the environment receives as input a series 
 of theorems and that the environment then proves them and outputs the proof. 
 AIXI then replicates its guess, thus correctly predicting the proofs and 
 maximizing its reward. To prove Goedel's theorem, we simply encode it into 
 the environment after a series of other theorems and their proofs.
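
(A toy Python sketch of the construction described above, purely as an
illustration -- this is not AIXI and not anyone's actual code; it only shows
the reward structure of the predictive environment:)

def run(environment_string, agent):
    """environment_string: the fixed sequence of symbols the environment
    emits, independent of the agent's actions.
    agent: a function from the history seen so far to a predicted symbol."""
    history, reward = "", 0
    for next_symbol in environment_string:
        prediction = agent(history)
        if prediction == next_symbol:   # reward for matching the next input
            reward += 1
        history += next_symbol          # the environment ignores the agent
    return reward

# A dummy predictor (nothing like AIXI): guess that the next symbol
# repeats the previous one.
repeat_last = lambda history: history[-1] if history else "?"

# The environment as a string of theorem/proof pairs, schematically:
print(run("T1;P1.T2;P2.T3;P3.", repeat_last))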

 -- Matt Mahoney, [EMAIL PROTECTED]





Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Samantha Atkins
Hmm.  After the recent discussion it seems this list has turned into the 
"philosophical musings related to AGI" list.  Where is the AGI 
engineering list?


- samantha





Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Sorry Mark, but I'm not going to accept your opinion on this just because
you express it with vehemence and confidence.

I didn't argue much previously when you told me I didn't understand
engineering ... because, although I've worked with a lot of engineers, I
haven't been one.

But, I grew up around scientists, I've trained scientists, and I am
currently (among other things) working as a scientist.

It is really not true that there is a set of simple rules adequate to tell
people how to evaluate scientific results effectively.  As often occurs,
there may be rules that tell you how to handle 80% of cases (or whatever),
but then the remainder of the cases are harder and require actual judgment.

This is, by the way, the case with essentially every complex human process
that people have sought to cover via expert rules.  The rules cover many
cases ... but as one seeks to extend them to cover all relevant cases, one
winds up adding more and more and more specialized rules...

It is possible I inaccurately remembered an anecdote from Feynman's book,
but that's irrelevant to my point.

***
Using that story as an example shows that you don't understand how to
properly run a scientific evaluative process.
***

Wow, that is quite an insult.  So you're calling me an incompetent in my
profession now.

  I don't have particularly thin skin, but I have to say that I'm getting
really tired of being attacked and insulted on this email list.

-- Ben G


On Sun, Oct 19, 2008 at 6:18 PM, Mark Waser [EMAIL PROTECTED] wrote:

   Whether a stupid person can do good scientific evaluation if taught
 the rules is a badly-formed question, because no one knows what the rules
 are.   They are learned via experience just as much as by explicit teaching
 Wow!  I'm sorry but that is a very scary, incorrect opinion.  There's a
 really good book called The Game of Science by McCain and Segal that
 clearly explains all of the rules.  I'll get you a copy.

 I understand that most scientists aren't trained properly -- but that is
 no reason to continue the problem by claiming that they can't be trained
 properly.

 You make my point with your explanation of your example of biology
 referees.  And the Feynman example, if it is the story that I've heard
 before, was actually an example of good science in action because the
 outlier was eventually overruled AFTER ENOUGH GOOD DATA WAS COLLECTED to
 prove that the outlier was truly an outlier and not just a mere
 inconvenience to someone's theory.  Feynman's exceptional intelligence
 allowed him to discover a possibility that might have been correct if the
 point was an outlier, but good scientific evaluation relies on data, data,
 and more data.  Using that story as an example shows that you don't
 understand how to properly run a scientific evaluative process.


 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Sunday, October 19, 2008 6:07 PM
 *Subject:* Re: AW: AW: [agi] Re: Defining AGI


 Whether a stupid person can do good scientific evaluation if taught the
 rules is a badly-formed question, because no one knows what the rules
 are.   They are learned via experience just as much as by explicit teaching

 Furthermore, as anyone who has submitted a lot of science papers to
 journals knows, even smart scientists can be horrendously bad at scientific
 evaluation.  I've had some really good bioscience papers rejected from
 journals, by presumably intelligent referees, for extremely bad reasons (and
 these papers were eventually published in good journals).

 Evaluating research is not much easier than doing it.  When is someone's
 supposed test of statistical validity really the right test?  Too many
  biology referees just look for the magic number of p < .05 rather than
 understanding what test actually underlies that number, because they don't
 know the math or don't know how to connect the math to the experiment in a
 contextually appropriate way.

 As another example: When should a data point be considered an outlier
 (meaning: probably due to equipment error or some other quirk) rather than a
 genuine part of the data?  Tricky.  I recall Feynman noting that he was held
 back in making a breakthrough discovery for some time, because of an outlier
 on someone else's published data table, which turned out to be spurious but
  had been accepted as valid by the community.  In this case, Feynman's
  exceptional intelligence allowed him to carry out scientific evaluation more
  effectively than others who were intelligent, but less so than him, had done...

 -- Ben G

 On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Actually, I should have drawn a distinction . . . . there is a major
 difference between performing discovery as a scientist and evaluating data
 as a scientist.  I was referring to the latter (which is similar to
 understanding Einstein) as opposed to the former (which is being Einstein).
 You clearly are referring to 

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
I've been on some message boards where people only ever came back with
a formula or a correction. I didn't contribute a great deal but it is
a sight for sore eyes. We could have an agi-tech and an agi-philo list
and maybe they'd merit further recombination (more lists) after that.




AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger

Absolutely.  We are not aware of most of our assumptions that are based in 
our common heritage, culture, and embodiment.  But an external observer 
could easily notice them and tease out an awful lot of information about us 
by doing so.


You do not understand what I mean.
There will be a lot of implementation details (e.g. temporary variables)
within the patterns which will never be sent in linguistic messages.


I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between language model and D, I would
argue that the more similar the two could be (i.e. the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).


This is just an assumption of yours, not a fact. My opinion remains: D and L
are separated.



How do you go from a formal language to a competent description of a messy,
ambiguous, data-deficient world?  *That* is the natural language question.


Any algorithm in your computer is written in a formal well defined language.
If you agree that AGI is possible with current programming languages then
you have to agree that the ambiguous, data-deficient world can be managed by
formal languages.



What happens if I say that language extensibility is exactly analogous to
learning which is exactly analogous to internal model improvement?


What happens? I disagree.


So translation is a pattern manipulation where the result isn't stored?


The result isn't stored in D


The domain of mathematics is complete and unambiguous.  A mathematics AI is
not a GI in my book.  It won't generalize to the real world until it handles
incompleteness and ambiguity (which is my objection to your main analogy).


If you say mathematics is not GI then the following must be true for you:
The universe cannot be modeled by mathematics.
I disagree.


The communication protocol needs to be extensible to handle output after
learning or transition into a new domain.  How do you ground new concepts?
More importantly, it needs to be extensible to support teaching the AGI.  As
I keep saying, how are you going to make your communication protocol
extensible?  Real GENERAL intelligence has EVERYTHING to do with
extensibility.


For mathematics you just need a few axioms. There are an infinite number of
expressions which can be written with a finite set of symbols and a finite
formal language.

But extensibility is not a crucial point in this discussion at all. You can
have extensibility with a strict separation of D and L. For a first AGI with
mathematics I would hardcode an algorithm which manages an open list of
axioms and definitions as the language interface.
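
A rough sketch of what is meant here (illustration only, not a design; all
names are invented):

# Illustration only: the language interface is just an open, growable list
# of axioms and definitions, managed by a hardcoded algorithm.  The domain
# model D would be a separate component and is not touched here.
class MathLanguageInterface:
    def __init__(self):
        self.axioms = []        # formal strings, e.g. "forall x: x + 0 = x"
        self.definitions = {}   # new symbol -> defining expression

    def add_axiom(self, axiom_string):
        self.axioms.append(axiom_string)   # the list stays open: it can always grow

    def add_definition(self, symbol, expression):
        self.definitions[symbol] = expression

interface = MathLanguageInterface()
interface.add_axiom("forall x: x + 0 = x")
interface.add_definition("succ", "lambda x: x + 1")
print(len(interface.axioms), "axioms,", len(interface.definitions), "definitions")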


I keep pointing out that your model separating communication and database
updating depends upon a fully specified model and does not tolerate
ambiguity (i.e. it lacks extensibility and doesn't handle ambiguity).  You
continue not to answer these points.


Once again:
The separation of communication and database updating does not contradict
extensibility and ambiguity. Language data and domain data can be strictly
separated.
I can update the language database without communicating (e.g. just by
changing the hard disk) or while communicating. The main point is that the
model *D* need not be changed while communicating. Furthermore,
language extension would be a nice feature but it is not necessary.

The model D need not be fully specified at all.
If the model L is formal and without ambiguities, this does not imply at all
that problems with ambiguities cannot be handled.
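
To make the separation concrete, a toy Python sketch (again only an
illustration under the assumptions above, not an implementation):

# Toy sketch: D (domain patterns) and L (language data) are kept strictly
# apart.  Expressing a pattern reads D but never changes it, and the
# language data can be updated without any communication at all.
class Agent:
    def __init__(self):
        self.D = {"pattern_42": ("internal", 3.1, 0.7)}           # domain model
        self.L = {"pattern_42": "the red block is on the table"}  # language data

    def express(self, key):
        # communication: translate (compress) a pattern into language
        return self.L.get(key, "<no words for this pattern>")

    def update_language(self, key, phrase):
        # the language database changes; D is untouched
        self.L[key] = phrase

a = Agent()
snapshot = dict(a.D)
message = a.express("pattern_42")
assert a.D == snapshot       # D was not changed by communicating
print(message)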



Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Mark, I did not say that theory should trump data.  When theory should
trump data is a very complex question.

I don't mind reading the book you suggested eventually but I have a long
list of other stuff to read that seems to have higher priority.

I don't believe there exists a complete, well-defined set of rules for
evaluating scientific evidence in real-world cases, sorry.

If you want to say there is a complete set of rules, but there is no
complete set of rules for how to apply these rules -- well, I still doubt
it, but it seems like a less absurd contention.  But in that case, so what?
In that case the rules don't actually tell you how to evaluate scientific
evidence.

In bioinformatics, it seems to me that evaluating complex datasets gets into
tricky questions of applied statistics, on which expert biostatisticians
don't always agree (and they write papers about their arguments).  Clearly
this is not a pursuit for dumbheads ;-p ... Perhaps you would classify this
as a dispute about how to apply the rules and not about the rules
themselves?  I don't really understand the distinction you're drawing
there...

ben g

On Sun, Oct 19, 2008 at 7:15 PM, Mark Waser [EMAIL PROTECTED] wrote:

   It is really not true that there is a set of simple rules adequate to
 tell people how to evaluate scientific results effectively.

 Get the book and then speak from a position of knowledge by telling
 me something that you believe it is missing.  When I cite a specific example
 that you can go and verify or disprove, it is not an opinion but a valid
 data point (and your perception of my vehemence and/or confidence and your
 personal reaction to it are totally irrelevant).  The fact that you can
 make a statement like this from a position of total ignorance when I cite a
 specific example is a clear example of not following basic scientific
 principles.  You can be insulted all you like but that is not what a good
 scientist would do on a good day -- it is simply lazy and bad science.

  As often occurs, there may be rules that tell you how to handle 80% of
 cases (or whatever), but then the remainder of the cases are harder and
 require actual judgment.
  Is it that the rules don't have 100% coverage, or is it that it isn't always
  clear how to appropriately apply the rules and that is where the questions
  come in?  There is a huge difference between the two cases -- and your
  statement that "no one knows what the rules are" argues for the former, not the
  latter.  I'd be more than willing to accept the latter -- but the former is
  an embarrassment.  Do you really mean to contend the former?

  It is possible I inaccurately remembered an anecdote from Feynman's
 book, but that's irrelevant to my point.
 No, you accurately remembered the anecdote.  As I recall, Feynman was
 expressing frustration at the slowness of the process -- particularly
  because no one would take his hypothesis seriously enough to perform the
 experiments necessary to determine whether the point was an outlier or not.
 Not performing the experiment was an unfortunate choice of trade-offs (since
 I'm sure that they were doing something else that they deemed more likely to
 produce worthwhile results) but accepting his theory without first proving
 that the outlier was indeed an outlier (regardless of his
 intelligence) would have been far worse and directly contrary to the
 scientific method.

  Using that story as an example shows that you don't understand how to
 properly run a scientific evaluative process.
  Wow, that is quite an insult.  So you're calling me an incompetent in my
 profession now.

 It depends.  Are you going to continue promoting something as inexcusable
 as saying that theory should trump data (because of the source of the
 theory)?  I was quite clear that I was criticizing a very specific action.
 Are you going to continue to defend that improper action?

 And why don't we keep this on the level of scientific debate rather than
 arguing insults and vehemence and confidence?  That's not particularly good
 science either.


 - Original Message -
 *From:* Ben Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Sunday, October 19, 2008 6:31 PM
 *Subject:* Re: AW: AW: [agi] Re: Defining AGI


 Sorry Mark, but I'm not going to accept your opinion on this just because
 you express it with vehemence and confidence.

 I didn't argue much previously when you told me I didn't understand
 engineering ... because, although I've worked with a lot of engineers, I
 haven't been one.

 But, I grew up around scientists, I've trained scientists, and I am
 currently (among other things) working as a scientist.

 It is really not true that there is a set of simple rules adequate to tell
 people how to evaluate scientific results effectively.  As often occurs,
 there may be rules that tell you how to handle 80% of cases (or whatever),
 but then the remainder of the cases are harder and require actual judgment.

 This is, by the way, the case 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
 And why don't we keep this on the level of scientific debate rather than
 arguing insults and vehemence and confidence?  That's not particularly good
 science either.



Right ... being unnecessarily nasty is neither good nor bad science; it's
just irritating for others to deal with.

ben g





Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between language model and D, I would
argue that the more similar the two could be (i.e. the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).


This is just an assumption of yours, not a fact. My opinion remains: D and L
are separated.



Geez.  What is it with this list?  Read Pinker.  Tons of facts.  Take them 
into account and then form an opinion.


Any algorithm in your computer is written in a formal well defined language.
If you agree that AGI is possible with current programming languages then
you have to agree that the ambiguous, data-deficient world can be managed by
formal languages.


Once we figure out how to program the process of automatically extending 
formal languages -- yes, absolutely.  That's the path to AGI.



If you say mathematics is not GI then the following must be true for you:
The universe cannot be modeled by mathematics.
I disagree.


with Gödel?  That's impressive.


Furthermore,
language extension would be a nice feature but it is not necessary.


Cool.  And this is where we agree to disagree (and it does seem to be at the 
root of all the other arguments).  If I believed this, I would agree with 
most of your other stuff.  I just don't see how you're going to stretch any 
non-extensible language to *effectively* cover an infinite universe.  Gödel 
argues against it.


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 6:53 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]





Absolutely.  We are not aware of most of our assumptions that are based in
our common heritage, culture, and embodiment.  But an external observer
could easily notice them and tease out an awful lot of information about us
by doing so.


You do not understand what I mean.
There will be a lot of implementation details (e.g. temporary variables)
within the patterns which will never be sent in linguistic messages.




I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between language model and D, I would
argue that the more similar the two could be (i.e. the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).


This is just an assumption of yours, not a fact. My opinion remains: D and L
are separated.




How do you go from a formal language to a competent description of a messy,
ambiguous, data-deficient world?  *That* is the natural language question.


Any algorithm in your computer is written in a formal well defined language.
If you agree that AGI is possible with current programming languages then
you have to agree that the ambiguous, data-deficient world can be managed by
formal languages.





What happens if I say that language extensibility is exactly analogous to
learning which is exactly analogous to internal model improvement?


What happens? I disagree.




So translation is a pattern manipulation where the result isn't stored?


The result isn't stored in D



The domain of mathematics is complete and unambiguous.  A mathematics AI is
not a GI in my book.  It won't generalize to the real world until it handles
incompleteness and ambiguity (which is my objection to your main analogy).


If you say mathematics is not GI then the following must be true for you:
The universe cannot be modeled by mathematics.
I disagree.




The communication protocol needs to be extensible to handle output after
learning or transition into a new domain.  How do you ground new concepts?
More importantly, it needs to be extensible to support teaching the AGI.  As
I keep saying, how are you going to make your communication protocol
extensible?  Real GENERAL intelligence has EVERYTHING to do with
extensibility.


For mathematics you just need a few axioms. There are an infinite number of
expressions which can be written with a finite set of symbols and a finite
formal language.

But extensibility is not a crucial point in this discussion at all. You can
have extensibility with a strict separation of D and L. For a first AGI with
mathematics I would hardcode an algorithm which manages an open list of
axioms and 

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
I've been thinking. agi-phil might suffice. Although it isn't as explicit.

On Sun, Oct 19, 2008 at 6:52 PM, Eric Burton [EMAIL PROTECTED] wrote:
 I've been on some message boards where people only ever came back with
 a formula or a correction. I didn't contribute a great deal but it is
 a sight for sore eyes. We could have an agi-tech and an agi-philo list
 and maybe they'd merit further recombination (more lists) after that.





Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
No, surely this is mostly outside the purview of the AGI list. I'm
reading some of this material and not getting a lot out of it. There
are channels on freenode for this stuff. But we have got to agree on
something if we are going to do anything. Can animals do science? They
can not.




Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
OK, well, I'm not going to formally kill this irrelevant-to-AGI thread as
moderator, but I'm going to abandon it as participant...

Time to get some work done tonight, enough time spent on email ;-p

ben g

On Sun, Oct 19, 2008 at 7:52 PM, Eric Burton [EMAIL PROTECTED] wrote:

 No, surely this is mostly outside the purview of the AGI list. I'm
 reading some of this material and not getting a lot out of it. There
 are channels on freenode for this stuff. But we have got to agree on
 something if we are going to do anything. Can animals do science? They
 can not.






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben,

How so? Also, do you think it is nonsensical to put some probability
on noncomputable models of the world?

--Abram

On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 But: it seems to me that, in the same sense that AIXI is incapable of
 understanding proofs about uncomputable numbers, **so are we humans** ...

 On Sun, Oct 19, 2008 at 6:30 PM, Abram Demski [EMAIL PROTECTED] wrote:

 Matt,

 Yes, that is completely true. I should have worded myself more clearly.

 Ben,

 Matt has sorted out the mistake you are referring to. What I meant was
 that AIXI is incapable of understanding the proof, not that it is
 incapable of producing it. Another way of describing it: AIXI could
 learn to accurately mimic the way humans talk about uncomputable
 entities, but it would never invent these things on its own.

 --Abram

 On Sun, Oct 19, 2008 at 4:32 PM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
  --- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
 
  No, I do not claim that computer theorem-provers cannot
  prove Goedel's Theorem. It has been done. The objection applies
  specifically to AIXI-- AIXI cannot prove goedel's theorem.
 
  Yes it can. It just can't understand its own proof in the sense of
  Tarski's undefinability theorem.
 
  Construct a predictive AIXI environment as follows: the environment
   output symbol does not depend on anything the agent does. However, the agent
   receives a reward when its output symbol matches the next symbol input from
  the environment. Thus, the environment can be modeled as a string that the
  agent has the goal of compressing.
 
  Now encode in the environment a series of theorems followed by their
  proofs. Since proofs can be mechanically checked, and therefore found given
  enough time (if the proof exists), then the optimal strategy for the agent,
  according to AIXI is to guess that the environment receives as input a
   series of theorems and that the environment then proves them and outputs the
   proof. AIXI then replicates its guess, thus correctly predicting the proofs
  and maximizing its reward. To prove Goedel's theorem, we simply encode it
  into the environment after a series of other theorems and their proofs.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson


 




Re: [agi] constructivist issues

2008-10-19 Thread Abram Demski
Ben,

Just to clarify my opinion: I think an actual implementation of the
novamente/OCP design is likely to overcome this difficulty. However,
to the extent that it approximates AIXI, I think there will be
problems of these sorts.

The main reason I think OCP/novamente would *not* approximate AIXI is
that these systems are capable of a greater degree of self-reference,
as well as a very different sort of adaptation. Self-reference gives
the system a very direct reason to think *about* processes (and hence
about halting, convergence, and other uncomputable properties).
Self-adaptation could allow the system to adopt new sorts of reasoning
(such as uncomputable models) simply because they seem to work.
(This is different from AIXI being trained to prove theorems about
uncomputable things, because the system starts actually making use of
the theorems internally.)

If I could formalize that intuition, I would be happy.

--Abram

On Sun, Oct 19, 2008 at 9:33 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Ben,

 How so? Also, do you think it is nonsensical to put some probability
 on noncomputable models of the world?

 --Abram

