Hello Boris, and welcome to the list.
I didn't understand your algorithm; you use many terms that you didn't
define. It would probably be clearer if you used some kind of
pseudocode and systematically described every procedure that occurs. But I
think there are more fundamental questions that need clarifying.
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets, and the promising low-entropy coding variants.
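For readers who haven't met RBMs: here is a minimal sketch of how one is trained with a single step of contrastive divergence (CD-1). The layer sizes, learning rate, and training pattern are all illustrative assumptions, not anything from this thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary RBM: 6 visible units, 3 hidden units (illustrative sizes).
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 parameter update for a single binary visible vector v0."""
    global W, b_v, b_h
    # Up: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # Down: reconstruct the visible layer from the hidden sample.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    # Up again: hidden probabilities for the reconstruction.
    p_h1 = sigmoid(v1 @ W + b_h)
    # CD-1 gradient approximation: data term minus reconstruction term.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)
    return np.mean((v0 - p_v1) ** 2)  # reconstruction error

# Train on one repeating pattern; reconstruction error should drop.
pattern = np.array([1, 1, 1, 0, 0, 0], dtype=float)
errors = [cd1_update(pattern) for _ in range(500)]
print(errors[0], errors[-1])
```

This memorizes a single pattern, which is the degenerate case; the same update rule applied to batches of data is what learns the feature detectors the thread is discussing.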
On Sun, Mar 30, 2008 at 7:23 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional
[EMAIL PROTECTED] writes:
But it should be quite clear that such methods could eventually be very handy
for AGI.
I agree with your post 100%; this type of approach is the most interesting
AGI-related stuff to me.
An audiovisual perception layer generates semantic interpretation on the
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Although I sympathize with some of Hawkins's general ideas about unsupervised
learning, his current HTM framework is unimpressive in comparison with
state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the
Derek: How could a symbolic engine ever reason about the real world *with*
access to such information?
I hope my work eventually demonstrates a solution to your satisfaction. In the
meantime there is evidence from robotics, specifically driverless cars, that
real world sensor input can be
Durk,
Absolutely right about the need for what is essentially an imaginative level of
mind. But wrong in thinking:
Vision may be classified under Narrow AI
You seem to be treating this extra audiovisual perception layer as a purely
passive layer. The latest psychology and philosophy recognize
Stephen Reed writes:
How could a symbolic engine ever reason about the real world *with* access
to such information?
I hope my work eventually demonstrates a solution to your satisfaction.
Me too!
In the meantime there is evidence from robotics, specifically driverless
cars,
Mike, you seem to have misinterpreted my statement. Perception is certainly
not 'passive', as it can be described as active inference using a (mostly
actively) learned world model. Inference is done on many levels, and could
integrate information from various abstraction levels, so I don't see it
On Sun, Mar 30, 2008 at 6:48 PM, William Pearson [EMAIL PROTECTED] wrote:
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
An audiovisual perception layer generates semantic interpretation on the
(sub)symbolic level. How could a symbolic engine ever reason about the real
world without
On Sun, Mar 30, 2008 at 10:16 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one
Related obliquely to the discussion about pattern discovery algorithms: what
is a symbol?
I am not sure that I am using the words in this post in exactly the same way
they are normally used by cognitive scientists; to the extent that causes
confusion, I'm sorry. I'd rather use words in
Vladimir, I agree with you on many issues, but...
On Sun, Mar 30, 2008 at 9:03 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
This way, for example, it should be possible to teach a 'modality' for
understanding simple graphs encoded as text, so that on one hand
text-based input is sufficient,
On Sun, Mar 30, 2008 at 11:33 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing the
physical world to vector graphics
Alright, agreed with all you say. If I understood correctly, your
system (at the moment) assumes scene descriptions at a level higher
than pixels, but certainly lower than objects. An application of such a
system seems to be a simulated, virtual world where such descriptions are
at hand... Is this
On Mon, Mar 31, 2008 at 12:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Alright, agreed with all you say. If I understood correctly, your
system (at the moment) assumes scene descriptions at a level higher
than pixels, but certainly lower than objects. An application of such a
system seems to be
I agree with Richard and hereby formally request that Ben chime in.
It is my contention that SAT is a relatively narrow form of Narrow AI and not
general enough to be on an AGI list.
This is not meant, in any way, shape, or form, to denigrate the work that you are
doing. It is very important
From: Derek Zahn
Is anybody else interested in this kind of question, or am I simply inventing
issues that are not meaningful and useful?
The issues you bring up are key/core to a major part of AGI. Unfortunately,
they are also issues hashed over way too many times in a mailing list
True enough, that is one answer: by hand-crafting the symbols and the
mechanics for instantiating them from subsymbolic structures. We of course
hope for better than this but perhaps generalizing these working systems is a
practical approach.
Um. That is what is known as the grounding
From: Kingma, D.P. [EMAIL PROTECTED]
Sure, you could argue that an intelligence purely based on text,
disconnected from the physical world, could be intelligent, but it
would have a very hard time reasoning about interaction of entities in
the physical world. It would be unable to understand
My judgment as list moderator:
1) Discussions of particular, speculative algorithms for solving SAT
are not really germane to this list
2) Announcements of really groundbreaking new SAT algorithms would
certainly be germane to the list
3) Discussions of issues specifically regarding the
From: Kingma, D.P. [EMAIL PROTECTED]
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing the
physical world to vector graphics is quite lossy (and computationally
4) If you think some supernatural being placed an insight in your mind,
you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less
seriously
by a vast majority of scientific-minded people...
Awesome
Mark Waser writes:
True enough, that is one answer: by hand-crafting the symbols and the
mechanics for instantiating them from subsymbolic structures. We of
course hope for better than this but perhaps generalizing these working
systems is a practical approach. Um. That is what is
In these surrounding discussions, everyone seems deeply confused - it's
nothing personal, so is our entire culture - about the difference between
SYMBOLS
1. Derek Zahn: curly hair, big jaw, intelligent eyes, etc., etc.
and
IMAGES
2.
On Sun, Mar 30, 2008 at 5:09 PM, Mark Waser [EMAIL PROTECTED] wrote:
4) If you think some supernatural being placed an insight in your mind,
you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way
On Mon, Mar 31, 2008 at 12:02 AM, Mike Tintner [EMAIL PROTECTED] wrote:
We are all next to illiterate - and I mean, mind-blowingly ignorant - about
how images function. What, for example, does an image of D.Z. or any person,
do, that no amount of symbols - whether words, numbers, algebraic
Why are images almost always more powerful than the corresponding symbols?
Why do they communicate so much faster?
Um . . . . dude . . . . it's just a bandwidth thing.
Think about images vs. visual symbols vs. word descriptions vs. names.
It's a spectrum from high-bandwidth information
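The bandwidth point above can be made concrete with a back-of-envelope calculation. All numbers here are my own illustrative assumptions (image size, vocabulary size, description length), not figures from the thread:

```python
# Raw image vs. a short verbal description of it, in bits.
image_bits = 100 * 100 * 8          # 100x100 grayscale image, 8 bits/pixel
words = 10                          # e.g. "curly hair, big jaw, ..." etc.
bits_per_word = 12                  # ~log2 of a few-thousand-word vocabulary
description_bits = words * bits_per_word

# The image carries hundreds of times more raw information.
print(image_bits, description_bits, image_bits // description_bits)
```

Even allowing for heavy redundancy in natural images, the gap is orders of magnitude, which is the "spectrum from high-bandwidth to low-bandwidth" being described.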
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person can still
learn a lot about the world with taste, smell, and touch, but the
senses one has access to define
Jim Bromer wrote:
On the contrary, Vladimir is completely correct in requesting that the
discussion go elsewhere: this has no relevance to the AGI list, and
there are other places where it would be pertinent.
Richard Loosemore
If Ben doesn't want me to continue, I will
Okay, with text, I mean natural language, in its usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
low-bandwidth to provide sufficient data to
On Sun, Mar 30, 2008 at 11:00 PM, Mark Waser [EMAIL PROTECTED] wrote:
From: Kingma, D.P. [EMAIL PROTECTED]
Vector graphics can indeed be communicated to an AGI by relatively
low-bandwidth textual input. But, unfortunately,
the physical world is not made of vector graphics, so reducing
(Sorry for triple posting...)
On Sun, Mar 30, 2008 at 11:34 PM, William Pearson [EMAIL PROTECTED] wrote:
On 30/03/2008, Kingma, D.P. [EMAIL PROTECTED] wrote:
Intelligence is not *only* about the modalities of the data you get,
but modalities are certainly important. A deafblind person
MW:
MT: Why are images almost always more powerful than the corresponding
symbols? Why do they communicate so much faster?
Um . . . . dude . . . . it's just a bandwidth thing.
Vlad: Because of higher bandwidth?
Well, guys, if the only difference between an image and, say, a
From: Kingma, D.P. [EMAIL PROTECTED]
Okay, with text, I mean natural language, in its usual
low-bandwidth form. That should clarify my statement. Any data can be
represented with text of course, but that's not the point... The point
that I was trying to make is that natural language is too
From: Kingma, D.P. [EMAIL PROTECTED]
Agreed with that; exact compression is not the way to go if you ask
me. But that doesn't mean any lossy method is OK. Converting a scene
to vector graphics will lead you to throw away much visual
information early in the process: visual information (e.g.
From: Mike Tintner
Well, guys, if the only difference between an image and, say, a symbolic -
verbal or mathematical or programming - description is bandwidth, perhaps
you'll be able to explain how you see the Cafe Wall illusion from a symbolic
description:
Sure! The Cafe Wall illusion
I'm going to attack you with questions again :-)
You're more than welcome to, sorry for being brisk. I did reply about RSS on
the blog, but for some reason the post never made it through.
I don't know how RSS works, but you can subscribe via bloglines.com.
What are 'range' and 'complexity'? Is
It seems like a reasonable and not uncommon idea that an AI could be built as
a mostly-hierarchical autoassociative memory. As you point out, it's not so
different from Hawkins's ideas. Neighboring pixels will correlate in space
and time; features such as edges should become principal
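The "neighboring pixels correlate" claim can be sketched with a toy experiment. Using 1-D random-walk "patches" as a stand-in for image rows (my assumption, not data from the post), the leading principal components come out smooth and low-frequency, and they dominate the variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D "patches" whose neighboring samples correlate: random walks.
n_patches, patch_len = 2000, 8
steps = rng.normal(size=(n_patches, patch_len))
patches = np.cumsum(steps, axis=1)            # neighbors differ by one step
patches -= patches.mean(axis=1, keepdims=True)  # center each patch

# Principal components via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(patches, full_matrices=False)
explained = S**2 / np.sum(S**2)               # variance fraction per component
print(explained[:3])                          # leading components dominate
```

Real 2-D image patches behave analogously: spatial correlation concentrates variance in a few smooth components, which is why edge- and gradient-like features fall out of this kind of statistics.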