Availability of the Internet actually makes school grades worse. Of course,
grades do not equal education, but I don't see anything world-changing
about education because of this.
- Panu Horsmalahti
-Original Message-
You have all missed one vital point. Music is repetitive and it has a
symmetry.
In dancing (song and dance), moves are repeated in a symmetrical pattern.
Question: why are we programmed to find symmetry? This question may be
more core to AGI than it appears at first
On Sat, Jul 24, 2010 at 5:36 AM, Panu Horsmalahti nawi...@gmail.com wrote:
Availability of the Internet actually makes school grades worse. Of course,
grades do not equal education, but I don't see anything world-changing
about education because of this.
- Panu Horsmalahti
Hmmm I
lol. thanks Jim :)
On Thu, Jul 22, 2010 at 10:08 PM, Jim Bromer jimbro...@gmail.com wrote:
I have to say that I am proud of David Jones's efforts. He has really
matured during these last few months. I'm kidding, but I really do respect
the fact that he is actively experimenting. I want to
Abram,
I haven't found a method that I think works consistently yet. Basically I
was trying methods like the one you suggested, which measures the number of
correct predictions or expectations. But, then I ran into the problem of,
what if the predictions you are counting are more of the same? Do
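The scoring idea described above, counting correct predictions per hypothesis, can be sketched in a few lines. One way to address the "more of the same" worry is to count each distinct prediction only once. The names here (score_hypothesis, the example observations) are purely illustrative, not taken from any actual code discussed on the list:

```python
def score_hypothesis(predictions, observations):
    """Count correct predictions, but count each distinct
    prediction only once, so repeating the same correct
    prediction does not inflate a hypothesis's score."""
    correct = {p for p in predictions if p in observations}
    return len(correct)

obs = {"A moves right", "B stays still"}
h1 = ["A moves right", "A moves right", "A moves right"]  # repetitive
h2 = ["A moves right", "B stays still"]                   # diverse
print(score_hypothesis(h1, obs), score_hypothesis(h2, obs))  # 1 2
```

Under this scoring, the repetitive hypothesis no longer beats the diverse one just by restating a single correct expectation.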
The Web site of David Jones at
http://practicalai.org
is quite impressive to me
as a kindred spirit building AGI.
(Just today I have been coding MindForth AGI :-)
For his Practical AI Challenge or similar
ventures, I would hope that David Jones is
open to the idea of aggregating or archiving
http://www.youtube.com/watch?v=vJG698U2Mvo
Can anyone suggest why our brains exhibit this phenomenon?
cheers,
Deepak
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Abram,
I should also mention that I ran into problems mainly because I was having a
hard time deciding how to identify objects and determine what is really
going on in a scene. This adds a whole other layer of complexity to
hypotheses. It's not just about what is more predictive of the
Solomonoff Induction may require a trans-infinite level of complexity just
to run each program. Suppose each program is iterated through the
enumeration of its instructions. Then, not only do the infinity of possible
programs need to be run, many combinations of the infinite programs from
each
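As a side note, the standard formulation of Solomonoff induction weights each program p by 2^(-|p|). The induction itself is incomputable, but a length-ordered enumeration can be sketched to show where the blow-up comes from; this is a minimal illustration, not a universal machine, and with plain (non-prefix-free) bit strings the total weight diverges, which is exactly the point:

```python
from itertools import product

def programs_up_to(max_len):
    """Enumerate all binary programs in length order. The number
    of programs doubles with each added bit, which is the source
    of the combinatorial blow-up Jim describes."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def solomonoff_weight(program):
    # Each program p contributes 2**(-len(p)) to the prior.
    return 2.0 ** (-len(program))

# Each length class contributes exactly 2**L * 2**(-L) = 1.0,
# so the total over lengths 1..10 is 10.0 and grows without bound:
# no finite cutoff captures the full prior.
total = sum(solomonoff_weight(p) for p in programs_up_to(10))
print(total)  # -> 10.0
```

(A proper Solomonoff prior restricts to prefix-free programs so the weights sum to at most 1; the unrestricted version above just makes the enumeration cost visible.)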
On Sat, Jul 24, 2010 at 3:59 PM, Jim Bromer jimbro...@gmail.com wrote:
Solomonoff Induction may require a trans-infinite level of complexity just
to run each program. Suppose each program is iterated through the
enumeration of its instructions. Then, not only do the infinity of
possible
On Sat, Jul 24, 2010 at 7:07 PM, deepakjnath deepakjn...@gmail.com wrote:
http://www.youtube.com/watch?v=vJG698U2Mvo
Can anyone suggest why our brains exhibit this phenomenon?
May I flag this as AGI-irrelevant? The brain at a non-AGI task is not
that interesting for AGI, methinks. Plus, we
David Jones wrote:
I should also mention that I ran into problems mainly because I was having a
hard time deciding how to identify objects and determine what is really going
on
in a scene.
I think that your approach makes the problem harder than it needs to be (not
that it is easy). Natural
Jim Bromer wrote:
Solomonoff Induction may require a trans-infinite level of complexity just to
run each program.
Trans-infinite is not a mathematically defined term as far as I can tell.
Maybe you mean larger than infinity, as in the infinite set of real numbers is
larger than the infinite
Huh, Matt? What examples of this holistic scene analysis are there (or are
you thinking about)?
From: Matt Mahoney
Sent: Saturday, July 24, 2010 10:25 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
David Jones wrote:
I should also mention that I ran into problems mainly
Abram,
I use constructivist and intuitionist (and, for that matter, finitist)
methods when they seem useful to me. I often make mistakes when I am not
wary of constructivist issues. Constructivist criticisms are interesting
because they can be turned against any presumptive method even though
Mike Tintner wrote:
Huh, Matt? What examples of this holistic scene analysis are there (or are
you thinking about)?
I mean a neural model with increasingly complex features, as opposed to an
algorithmic 3-D model (like video game graphics in reverse).
Of course David rejects such ideas (
Matt,
Any method must deal with similar, if not the same, ambiguities. You need to
show how neural nets solve this problem or how they solve agi goals while
completely skipping the problem. Until then, it is not a successful method.
Dave
On Jul 24, 2010 7:18 PM, Matt Mahoney
Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading papers
mentioned here, etc. Without the Net, how would these students learn about
AGI, in practice? Such education would be far harder to come
Matt:
I mean a neural model with increasingly complex features, as opposed to an
algorithmic 3-D model (like video game graphics in reverse). Of course David
rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even though
the one proven working vision model uses it.
Which is?
Mike Tintner wrote:
Which is?
The one right behind your eyes.
-- Matt Mahoney, matmaho...@yahoo.com
From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Sat, July 24, 2010 9:00:42 PM
Subject: Re: [agi] Re: Huge Progress on the Core
Check this out!
The title "Space and time, not surface features, guide object persistence"
says it all.
http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf
Over just the last couple days I have begun to realize that they are so
right. My idea before of using high frame rates is also
This is absolutely incredible. The answer was right there in the last
paragraph:
The present experiments suggest that the computation
of object persistence appears to rely so heavily upon spatiotemporal
information that it will not (or at least is unlikely
to) use otherwise available surface
Thanks Dave, it's very interesting. This gives us more clues into how the
brain compresses and uses the relevant information while neglecting the
irrelevant information. But as Anast has demonstrated, the brain does need
priming in order to decide what is relevant and irrelevant. :)
Cheers,
Deepak
http://www.youtube.com/watch?v=QbKw0_v2clo&feature=player_embedded
What we see is not really what we see. It's what we see plus what we know
we are seeing. The brain superimposes the predicted images onto the viewed
image to actually form a perception of the image.
cheers,
Deepak
Yes. I think I may have discovered the keys to crack this puzzle wide open.
The brain seems to use simplistic heuristics for depth perception and
surface bounding. Once it has that, it can apply the spatiotemporal
heuristic I mentioned in other emails to identify and track an object, which
allows
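The spatiotemporal matching idea above can be sketched as nearest-neighbour association of detections across frames, using position alone and deliberately ignoring surface features, in line with the paper's finding. The function name, the distance threshold, and the greedy (non-exclusive) matching are all simplifying assumptions for illustration:

```python
import math

def track(prev_objects, detections, max_dist=20.0):
    """Associate each detection with the nearest previously seen
    object by position (the spatiotemporal cue); detections too
    far from any known object are registered as new objects.
    Greedy matching: ties and collisions are not resolved."""
    assignments = {}
    next_id = max(prev_objects, default=-1) + 1
    for (x, y) in detections:
        best_id, best_d = None, max_dist
        for obj_id, (px, py) in prev_objects.items():
            d = math.hypot(x - px, y - py)
            if d < best_d:
                best_id, best_d = obj_id, d
        if best_id is None:       # no object close enough
            best_id = next_id
            next_id += 1
        assignments[best_id] = (x, y)
    return assignments

objs = {0: (10.0, 10.0), 1: (50.0, 50.0)}
result = track(objs, [(12.0, 11.0), (49.0, 52.0), (200.0, 200.0)])
print(result)
```

Here the first two detections are matched to the existing objects 0 and 1, while the far-away third detection spawns a new object id 2: identity is carried entirely by spatial continuity, never by appearance.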