The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not require much intelligence.
Every email program can receive meaning, store it, and express it
outwardly in order to send it to another computer. It can even do so without
This sounds good to me. I am much more drawn to topic #1. Topic #2 I
have seen discussed recursively and in dozens of variants multiple
places. The only thing I will add to Topic #2 is that I very seriously
doubt current human intelligence individually or collectively is
sufficient to
Matt Mahoney wrote:
--- On Tue, 10/14/08, Charles Hixson [EMAIL PROTECTED] wrote:
It seems clear that without external inputs the amount of improvement
possible is stringently limited. That is evident from inspection. But
why the "without input" condition? The only evident reason is to ensure the
Abram,
I find it more useful to think in terms of Chaitin's reformulation of
Godel's Theorem:
http://www.cs.auckland.ac.nz/~chaitin/sciamer.html
Given any computer program with algorithmic information capacity less than
K, it cannot prove theorems whose algorithmic information content is
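The truncated statement above is Chaitin's information-theoretic form of incompleteness. A standard way to write it, with K(x) denoting the Kolmogorov complexity of a string x (the constant is usually stated per formal system rather than per program, as here):

```latex
% Chaitin's incompleteness theorem (informal statement):
% for any consistent, computably axiomatized theory F there is
% a constant $c_F$ such that
\[
  \forall x:\quad F \nvdash \text{``} K(x) > c_F \text{''}
\]
% even though $K(x) > c_F$ is in fact true for all but finitely
% many strings $x$.
```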
What the computer makes with the data it receives depends on the information
of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.
Language understanding would be useful to teach the AGI with existing
knowledge already
An excellent post, thanks!
IMO, it raises the bar for discussion of language and AGI, and should be
carefully considered by the authors of future posts on the topic of language
and AGI. If the AGI list were a forum, Matthias's post should be pinned!
-dave
On Sun, Oct 19, 2008 at 6:58 PM, Dr.
2008/10/19 Dr. Matthias Heger [EMAIL PROTECTED]:
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not require much intelligence.
Every email program can receive meaning, store it, and express it
outwardly in order to
regarding denotational semantics:
I prefer to think of the meaning of X as the fuzzy set of patterns
associated with X. (In fact, I recall giving a talk on this topic at a
meeting of the American Math Society in 1990 ;-)
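Ben's "meaning of X as the fuzzy set of patterns associated with X" can be given a minimal concrete sketch: a meaning is a map from patterns to membership degrees in [0, 1], with the usual max/min fuzzy-set operations. All names and degree values below are illustrative, not anything from OpenCog or the talk mentioned.

```python
# A minimal sketch of "meaning as a fuzzy set of patterns".
# A fuzzy set is represented as {pattern: membership degree}.

def fuzzy_union(a, b):
    """Union of two fuzzy sets: max of membership degrees."""
    return {p: max(a.get(p, 0.0), b.get(p, 0.0)) for p in set(a) | set(b)}

def fuzzy_intersection(a, b):
    """Intersection of two fuzzy sets: min of membership degrees."""
    return {p: min(a[p], b[p]) for p in set(a) & set(b)}

def overlap(a, b):
    """A crude similarity measure: total shared membership."""
    return sum(fuzzy_intersection(a, b).values())

# Two word meanings as fuzzy pattern sets (made-up degrees).
meaning_dog = {"animal": 0.9, "barks": 0.8, "pet": 0.7}
meaning_wolf = {"animal": 0.9, "barks": 0.3, "wild": 0.8}

# The meanings overlap on the "animal" and "barks" patterns.
print(overlap(meaning_dog, meaning_wolf))
```

On this picture, comparing two meanings is just comparing two fuzzy sets, which is one way to cash out "matching patterns" in later posts of this thread.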
On Sun, Oct 19, 2008 at 6:59 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not require much intelligence.
Every email program can receive meaning, store it, and express
On Sun, Oct 19, 2008 at 3:09 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
regarding denotational semantics:
I prefer to think of the meaning of X as the fuzzy set of patterns
associated with X. (In fact, I recall giving a talk on this topic at a
meeting of the American Math Society in 1990 ;-)
Matthias,
You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for concepts,
which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the
I agree that understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. But this integrating is just matching, not extending one's own model
with new entities. You only match linguistic entities of received
linguistically
On Sun, Oct 19, 2008 at 5:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I agree that understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. But this integrating is just matching, not extending one's own model
For the discussion of the subject, the details of the pattern representation
are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
Ben,
I don't know what sounded almost confused, but anyway it is apparent
that I didn't make my position clear. I am not saying we can
manipulate these things directly via exotic (non)computing.
First, I am very specifically saying that AIXI-style AI (meaning, any
AI that approaches AIXI as
Matthias,
I take the point that there is vastly more to language understanding than
the surface processing of words as opposed to concepts.
I agree that it is typically v. fast.
I don't think though that you can call any concept a pattern. On the
contrary, a defining property of concepts,
--- On Sun, 10/19/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Every email program can receive meaning, store it, and express it
outwardly in order to send it to another computer. It can even do so
without loss of any information. Regarding this point, it even
outperforms
Domain effectiveness (a.k.a. intelligence) is predicated upon having an
effective internal model of that domain.
Language production is the extraction and packaging of applicable parts of the
internal model for transmission to others.
Conversely, language understanding is for the reception (and
There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.
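The claim that communication is "just translating, sending, receiving and retranslating" is exactly the email-program analogy used throughout this thread, and it can be sketched as a lossless serialization round-trip. The JSON encoding here is an illustrative stand-in for whatever wire format a real system would use.

```python
# A sketch of the email-program analogy: the "language" layer only
# serializes and deserializes an internal model; it creates no new
# patterns of its own.
import json

def express(internal_model):
    """Translate internal patterns into a transmissible message."""
    return json.dumps(internal_model, sort_keys=True)

def understand(message):
    """Retranslate a received message back into patterns."""
    return json.loads(message)

sender_model = {"dog": ["animal", "barks"], "tree": ["plant"]}
wire_message = express(sender_model)       # translate and send...
receiver_model = understand(wire_message)  # ...receive and retranslate

# The round-trip is lossless: nothing was added or invented.
assert receiver_model == sender_model
```

The disagreement in the thread is precisely over whether human language use is like this read-only round-trip, or whether understanding also rewrites the receiver's model.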
This is what I disagree entirely with. If nothing else, humans are
constantly building and updating their mental model of what
These details will not be visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.
Read Pinker's The Stuff of Thought. Actually, a lot of these details *are*
visible from a linguistic point of view.
- Original Message -
The process of changing the internal model does not belong to language
understanding.
Language understanding ends when the matching process is finished. Language
understanding can be strictly separated conceptually from the creation and
manipulation of patterns, just as you can separate the process of
If some details of the internal structure of patterns are visible, this is
no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.
Since in many communicating technical systems there are so many details
which are
The process of translating patterns into language should be easier than the
process of creating or manipulating patterns. Therefore I say that
language understanding is easy.
When you say that language is not fully specified then you probably imagine
an AGI which learns language.
--- On Sun, 10/19/08, Samantha Atkins [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
There is
currently a global brain (the world economy) with an IQ of
around 10^10, and approaching 10^12.
Oh man. It is so tempting in today's economic morass
to point out the
obvious stupidity of this
The process of changing the internal model does not belong to language
understanding.
Language understanding ends when the matching process is finished.
What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently
assuming that the matching
Terren wrote:
Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything. The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency however should not be confused with a
If some details of the internal structure of patterns are visible,
this is no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.
True, but visible patterns offer clues for interpretation and analysis. The
Mark Waser wrote
What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently
assuming that the matching process is crisp, unambiguous, and irreversible
(and ask Stephen Reed how well that works for TexAI).
I do not assume this. Why should
We can assume that the speaking human is not aware of every detail of its
own patterns. At least these details would probably be hidden
from communication.
-Matthias
Mark Waser wrote
Details that don't need to be transferred are those which are either known
by or unnecessary to the
The process of translating patterns into language should be easier than the
process of creating or manipulating patterns.
How is translating patterns into language different from manipulating patterns?
It seems to me that they are *exactly* the same thing. How do you believe
The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages cost much less.
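The sense in which a formal language's model can be "complete" is that the whole language is fixed by a finite specification, with no environment needed. A toy illustration: the formal language of balanced parentheses is specified in full by one short recognizer (the example language is mine, chosen only to make the point).

```python
# A complete "language model" for a formal language: the toy
# language of balanced parenthesis strings over the alphabet {(, )}.
# Nothing outside this function is needed to decide membership.

def balanced(s):
    """Return True iff s is a balanced string of '(' and ')'."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # closed more than were opened
                return False
        else:
            return False       # symbol outside the formal alphabet
    return depth == 0

print(balanced("(()())"))  # a sentence of the language -> True
print(balanced("(()"))     # not a sentence -> False
```

Natural language has no such finite, fixed specification, which is the asymmetry the post is pointing at.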
If the language must be learned then things
I don't think that learning of language is the entire point. If I have only
learned language, I still cannot create anything. A human who can understand
language is still far from being a good scientist. Intelligence means the
ability to solve problems. Which problems can a system solve if it can
You have given no reason why the process of communication and the
process of manipulating data can only be separated if the knowledge is
structured.
In fact, there is no reason.
How do you communicate something for which you have no established
communications protocol? If
We can assume that the speaking human is not aware of every detail of its
own patterns. At least these details would probably be hidden
from communication.
Absolutely. We are not aware of most of our assumptions that are based in
our common heritage, culture, and embodiment. But an
The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
languages cost much less.
Yes! But the formal languages need to be
Matthias wrote:
I don't think that learning of language is the entire point. If I have only
learned language, I still cannot create anything. A human who can understand
language is still far from being a good scientist. Intelligence means the
ability to solve problems. Which problems can a system
--- On Sat, 10/18/08, Abram Demski [EMAIL PROTECTED] wrote:
No, I do not claim that computer theorem-provers cannot
prove Goedel's Theorem. It has been done. The objection applies
specifically to AIXI -- AIXI cannot prove Goedel's theorem.
Yes it can. It just can't understand its own proof in
But, either you're just wrong or I don't understand your wording ... of
course, AIXI **can** reason about uncomputable entities. If you showed AIXI
the axioms of, say, ZF set theory (including the Axiom of Choice), and
reinforced it for correctly proving theorems about uncomputable entities as
Mark Waser wrote:
*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along. Science is a rote process that can be
learned and executed by anyone -- as long as
I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than the pattern for tree does.
You don't have to manipulate any patterns and can do the translation.
-
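Matthias's dog/tree example amounts to resolving an ambiguous referent by comparing relation strengths between stored patterns, with no patterns being created or modified. A minimal sketch, where the association table and its scores are invented purely for illustration:

```python
# Resolving an ambiguous referent ("the dog next to the tree ...
# it is angry") by relation strength between patterns. Only read
# operations on the table are needed; nothing is created or changed.

relation_strength = {
    ("dog", "angry"): 0.8,   # dogs are often described as angry
    ("tree", "angry"): 0.05, # trees almost never are
}

def resolve(candidates, cue):
    """Pick the candidate pattern most strongly related to the cue."""
    return max(candidates, key=lambda c: relation_strength.get((c, cue), 0.0))

print(resolve(["dog", "tree"], "angry"))  # -> dog
```

This also illustrates the later exchange about read-only translation: `resolve` never writes to `relation_strength`, which is the sense in which Matthias claims translation needs no pattern manipulation.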
Manipulating patterns requires reading and writing operations: data
structures are changed. Translation requires only reading operations on the
patterns of the internal model.
So translation is a pattern manipulation where the result isn't stored?
I disagree that AGI must have some
Funny, Ben.
So . . . . could you clearly state why science can't be done by anyone willing
to simply follow the recipe?
Is it really anything other than the fact that they are stopped by their
unconscious beliefs and biases? If so, what?
Instead of a snide comment, defend your opinion with
I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than the pattern for tree does.
So, are the relationships between the various patterns in your translation
Mark,
It is not the case that I have merely lectured rather than taught. I've
lectured (math, CS, psychology and futurology) at university, it's true ...
but I've also done extensive one-on-one math tutoring with students at
various levels ... and I've also taught small groups of kids aged 7-12,
Interesting how you always only address half my points . . .
I keep hammering extensibility and you focus on ambiguity which is merely
the result of extensibility. You refuse to address extensibility. Maybe
because it really is the secret sauce of intelligence and the one thing that
you
Actually, I should have drawn a distinction . . . . there is a major difference
between performing discovery as a scientist and evaluating data as a scientist.
I was referring to the latter (which is similar to understanding Einstein) as
opposed to the former (which is being Einstein). You
Whether a stupid person can do good scientific evaluation if taught the
rules is a badly-formed question, because no one knows what the rules
are. They are learned via experience just as much as by explicit teaching
Furthermore, as anyone who has submitted a lot of science papers to journals
Whether a stupid person can do good scientific evaluation if taught the
rules is a badly-formed question, because no one knows what the rules are.
They are learned via experience just as much as by explicit teaching
Wow! I'm sorry but that is a very scary, incorrect opinion. There's a
Matt,
Yes, that is completely true. I should have worded myself more clearly.
Ben,
Matt has sorted out the mistake you are referring to. What I meant was
that AIXI is incapable of understanding the proof, not that it is
incapable of producing it. Another way of describing it: AIXI could
learn
Hmm. After the recent discussion it seems this list has turned into the
philosophical musings related to AGI list. Where is the AGI
engineering list?
- samantha
Sorry Mark, but I'm not going to accept your opinion on this just because
you express it with vehemence and confidence.
I didn't argue much previously when you told me I didn't understand
engineering ... because, although I've worked with a lot of engineers, I
haven't been one.
But, I grew up
I've been on some message boards where people only ever came back with
a formula or a correction. I didn't contribute a great deal but it is
a sight for sore eyes. We could have an agi-tech and an agi-philo list
and maybe they'd merit further recombination (more lists) after that.
Absolutely. We are not aware of most of our assumptions that are based in
our common heritage, culture, and embodiment. But an external observer
could easily notice them and tease out an awful lot of information about us
by doing so.
You do not understand what I mean.
There will be a lot of
Mark, I did not say that theory should trump data. When theory should
trump data is a very complex question.
I don't mind reading the book you suggested eventually but I have a long
list of other stuff to read that seems to have higher priority.
I don't believe there exists a complete,
And why don't we keep this on the level of scientific debate rather than
arguing insults and vehemence and confidence? That's not particularly good
science either.
Right ... being unnecessarily nasty is not either good or bad science, it's
just irritating for others to deal with
ben g
I disagree with a complete distinction between D and L. L is a very small
fraction of D translated for transmission. However, instead of arguing that
there must be a strict separation between language model and D, I would
argue that the more similar the two could be (i.e. the less translation
I've been thinking. agi-phil might suffice. Although it isn't as explicit.
On Sun, Oct 19, 2008 at 6:52 PM, Eric Burton [EMAIL PROTECTED] wrote:
I've been on some message boards where people only ever came back with
a formula or a correction. I didn't contribute a great deal but it is
a sight
No, surely this is mostly outside the purview of the AGI list. I'm
reading some of this material and not getting a lot out of it. There
are channels on freenode for this stuff. But we have got to agree on
something if we are going to do anything. Can animals do science? They
can not.
OK, well, I'm not going to formally kill this irrelevant-to-AGI thread as
moderator, but I'm going to abandon it as participant...
Time to get some work done tonight, enough time spent on email ;-p
ben g
On Sun, Oct 19, 2008 at 7:52 PM, Eric Burton [EMAIL PROTECTED] wrote:
No, surely this is
Ben,
How so? Also, do you think it is nonsensical to put some probability
on noncomputable models of the world?
--Abram
On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
But: it seems to me that, in the same sense that AIXI is incapable of
understanding proofs about
Ben,
Just to clarify my opinion: I think an actual implementation of the
novamente/OCP design is likely to overcome this difficulty. However,
to the extent that it approximates AIXI, I think there will be
problems of these sorts.
The main reason I think OCP/novamente would *not* approximate AIXI