... http://www.mailcom.com/challenge/ and in my
own text compression benchmark.
-- Matt Mahoney, matmaho...@yahoo.com

From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Thu, July 22, 2010 3:11:46 PM
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

Because simpler is not better if it is less predictive.
Huh, Matt? What examples of this holistic scene analysis are there (or are
you thinking about)?

From: Matt Mahoney
Sent: Saturday, July 24, 2010 10:25 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

David Jones wrote:
I should also mention that I ran into problems mainly ...
Matt:
I mean a neural model with increasingly complex features, as opposed to an
algorithmic 3-D model (like video game graphics in reverse). Of course David
rejects such ideas ( http://practicalai.org/Prize/Default.aspx ) even though
the one proven working vision model uses it.

Which is?

Mike Tintner wrote:
Which is?

The one right behind your eyes.

-- Matt Mahoney, matmaho...@yahoo.com

From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Sat, July 24, 2010 9:00:42 PM
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
Jim,
Why more predictive *and then* simpler?
--Abram
On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com wrote:
An Update
I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
Predicting the old and predictable [incl in shape and form] is narrow AI.
Squaresville.
Adapting to the new and unpredictable [incl in shape and form] is AGI. Rock on.
From: David Jones
Sent: Thursday, July 22, 2010 4:49 PM
To: agi
Subject: [agi] Re: Huge Progress on the Core of AGI
Because simpler is not better if it is less predictive.
On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:
Jim,
Why more predictive *and then* simpler?
--Abram
ps-- Sorry for accidentally calling you Jim!
On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:
Jim,
Why more predictive *and then* simpler?
--Abram
PS-- I am not denying that statistics is applied probability theory. :) When
I say they are different, what I mean is that saying I'm going to use
probability theory and I'm going to use statistics tend to indicate very
different approaches. Probability is a set of axioms, whereas statistics is a ...
On Tue, Jul 13, 2010 at 2:29 AM, Abram Demski abramdem...@gmail.com wrote:
[The]complaint that probability theory doesn't try to figure out why it was
wrong in the 30% (or whatever) it misses is a common objection. Probability
theory glosses over important detail, it encourages lazy thinking, etc.
Abram,
Thanks for the clarification. I don't have a single way to deal with
uncertainty. I try not to decide on a method ahead of time, because what I
really want to do is analyze the problems and find a solution. But at the
same time, I have looked at the probabilistic approaches and they ...

... writing or shopping because these can only be
expressed as flexible outlines/schemas as per ideograms.

What do you mean?
From: Jim Bromer
Sent: Tuesday, July 13, 2010 2:50 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
On Tue, Jul 13, 2010 at 10:07 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
And programs as we know them, don't and can't handle *concepts* - despite
the misnomers of conceptual graphs/spaces etc which are not concepts at all.
They can't for example handle writing or shopping because these ...
I meant,
I think that we both agree that creativity and imagination are absolutely
necessary aspects of intelligence.

Of course!
Mike,
see below.

On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
The first thing is to acknowledge that programs *don't* handle concepts -
if you think they do, you must give examples.
The reasons they can't, as presently conceived, is
a) concepts encase a more or less ...
Thanks Abram, I'll read up on it when I get a chance.
On Tue, Jul 13, 2010 at 12:03 PM, Abram Demski abramdem...@gmail.com wrote:
David,
Yes, this makes sense to me.
To go back to your original query, I still think you will find a rich
community relevant to your work if you look into the
Correction:
Mike, you are so full of it. There is a big difference between *can* and
*don't*. You have no proof that programs can't handle anything you say
[they] can't.

On Tue, Jul 13, 2010 at 2:59 PM, David Jones davidher...@gmail.com wrote:
... or a Jackson Pollock, or a photo of any scene, this
program will give me 3-d versions?
Here's a bet - you're giving me yet more hype.
From: David Jones
Sent: Wednesday, July 14, 2010 1:32 AM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
I'm not even going to read your
David,
I tend to think of probability theory and statistics as different things.
I'd agree that statistics is not enough for AGI, but in contrast I think
probability theory is a pretty good foundation. Bayesianism to me provides a
sound way of integrating the elegance/utility tradeoff of
Thanks Abram,
I know that probability is one approach. But there are many problems with
using it in actual implementations. I know a lot of people will be angered
by that statement and retort with all the successes that they have had using
probability. But, the truth is that you can solve the
I accidentally pressed something and it sent it early... this is a finished
version:

Mike,
Using the image itself as a template to match is possible. In fact it has
been done before. But it suffers from several problems that would also need
solving.
1) Images are 2D. I assume you are also referring to 2D outlines. Real
objects are 3D. So, you're going to have to infer the shape ...

... objects. Some part of you knows the v.obvious truth ).

From: David Jones
Sent: Saturday, July 10, 2010 3:51 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
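For concreteness, the 2D template matching David describes (before the problems he lists kick in) can be sketched as brute-force normalized cross-correlation. Everything here is invented for illustration: the `match_template` helper, the blank image, and the 3x3 template.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image`; return the (row, col) with the best
    normalized cross-correlation score, and the score itself."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -2.0, (0, 0)      # NCC lies in [-1, 1]
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom == 0:                   # flat patch: correlation undefined
                continue
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy usage: plant the template in a blank image and recover its position.
img = np.zeros((10, 10))
tpl = np.arange(9, dtype=float).reshape(3, 3)
img[4:7, 5:8] = tpl
pos, score = match_template(img, tpl)
```

This recovers the planted position exactly, which is precisely why the objection stands: the sketch only works because the 2D appearance never changes, and it says nothing about 3D shape, rotation, or lighting.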
On Sat, Jul 10, 2010 at 5:02 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
Dave: You can't solve the problems with your approach either
This is based on knowledge of what examples? Zero?
It is based on the fact that you have refused to show how you deal with
uncertainty. You haven't even
David,
Sorry for the slow response.
I agree completely about expectations vs predictions, though I wouldn't use
that terminology to make the distinction (since the two terms are
near-synonyms in English, and I'm not aware of any technical definitions
that are common in the literature). This is
Mike,

On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
Isn't the first problem simply to differentiate the objects in a scene?

Well, that is part of the movement problem. If you say something moved, you
are also saying that the objects in the two or more video ...

... the world will be like. They aren't able to learn about any world. They are
optimized to configure their brains for this world.

From: David Jones davidher...@gmail.com
Sent: Friday, July 09, 2010 1:56 PM
To: agi agi@v2.listbox.com
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

... that evolution should have started with
touch as a more primary sense, well before it got to vision.
*Or perhaps it may prove better to start with robot snakes/bodies or somesuch.

From: David Jones
Sent: Friday, July 09, 2010 3:22 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI
... certainly didn't have one.

From: David Jones
Sent: Friday, July 09, 2010 4:20 PM
To: agi
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

Mike,
Please outline your algorithm for fluid schemas though. It will be clear when
you do that you are faced with the exact same uncertainty ...
David,
That's why, imho, the rules need to be *learned* (and, when need be,
unlearned). IE, what we need to work on is general learning algorithms, not
general visual processing algorithms.
As you say, there's not even such a thing as a general visual processing
algorithm. Learning algorithms
It may not be possible to create a learning algorithm that can learn how to
generally process images and other general AGI problems. This is for the
same reason that completely general vision algorithms are likely impossible.
I think that figuring out how to process sensory information
David,
How I'd present the problem would be predict the next frame, or more
generally predict a specified portion of video given a different portion. Do
you object to this approach?
--Abram
On Thu, Jul 8, 2010 at 5:30 PM, David Jones davidher...@gmail.com wrote:
It may not be possible to
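Abram's evaluation criterion, scoring a model by how well it predicts the next frame, could be made concrete along these lines. This is a toy sketch: the drifting-pixel "video", the `evaluate_predictor` scorer, and both predictors are invented for illustration.

```python
import numpy as np

def evaluate_predictor(frames, predict):
    """Mean squared error between each predicted frame and the actual
    next frame -- lower is better."""
    errs = [np.mean((predict(frames[t]) - frames[t + 1]) ** 2)
            for t in range(len(frames) - 1)]
    return float(np.mean(errs))

# Toy video: one bright pixel drifting one column right per frame.
frames = [np.zeros((4, 4)) for _ in range(4)]
for t, f in enumerate(frames):
    f[1, t] = 1.0

def persistence(f):
    return f                     # baseline hypothesis: nothing moves

def drift_right(f):
    return np.roll(f, 1, axis=1) # hypothesis: everything shifts one column right

# The predictor that captures the motion scores strictly better.
```

The appeal of this framing is that it needs no labels: the video itself supplies the ground truth, which is what makes it a candidate objective for the unsupervised setting being debated here.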
Abram,
Yeah, I would have to object for a couple reasons.
First, prediction requires previous knowledge. So, even if you make that
your primary goal, you're still going to have my research goals as the
prerequisite: which are to process visual information in a more general way
and learn about
Isn't the first problem simply to differentiate the objects in a scene? (Maybe
the most important movement to begin with is not the movement of the object,
but of the viewer changing their POV if only slightly - which won't be a factor
if you're looking at a screen)
And that I presume comes
... uncountably
infinite sets, I don't understand your objection.
-- Matt Mahoney, matmaho...@yahoo.com

From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sat, July 3, 2010 9:43:15 AM
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer jimbro...@gmail.com wrote:
There cannot be a one to one correspondence to the representation of
the shortest program to produce a string and the strings that they produce.
This means ...
This group, as in most AGI discussions, will use logic and statistical
theory loosely. We have to. One reason is that we - thinking entities - do not
know everything, and so our reasoning is based on fragmentary knowledge. In
this situation the boundaries of logical reasoning in thought, both natural ...
On Fri, Jul 2, 2010 at 6:08 PM, Matt Mahoney matmaho...@yahoo.com wrote:
Jim, to address all of your points,
Solomonoff induction claims that the probability of a string is proportional
to the number ...
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:
Jim, what evidence do you have that Occam's Razor or algorithmic
information theory is wrong, besides your own opinions? It is well
established that elegant (short) theories are preferred in all branches of
science because they have greater ...
Also, what does this have to do with Cantor's diagonalization argument? AIT
considers only the countably infinite set of ...

On Fri, Jul 2, 2010 at 2:25 PM, Jim Bromer jimbro...@gmail.com wrote:
There cannot be a one to one correspondence to the representation of
the shortest program to produce a string and the strings that they produce.
This means that if the consideration of the hypotheses were to be put into ...
Cantor's diagonal argument is (in all likelihood) mathematically correct.
However the attempt to use Cantor's methodology to derive an irrational
number that is the next greater irrational number from a given irrational
number (to a degree of precision sufficient to distinguish the two numbers)
is ...
Jim,
Well, like I said, it'll only probably lead you to accept AIT. :) In my
case, it led me to accept AIT but not AIXI, with reasons somewhat similar to
the ones Steve recently mentioned.
I agree that there is not a perfect equivalence; the math here is subtle.
Just saying it's equivalent
On Tue, Jun 29, 2010 at 11:46 PM, Abram Demski abramdem...@gmail.com wrote:
In brief, the answer to your question is: we formalize the description length
heuristic by assigning lower probabilities to longer hypotheses, and we apply
Bayes law to update
http://www.scholarpedia.org/article/Algorithmic_probability
-- Matt Mahoney, matmaho...@yahoo.com

From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Tue, June 29, 2010 9:05:45 AM
Subject: [agi] Re: Huge Progress on the Core of AGI

If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords ...
From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Tue, June 29, 2010 10:44:41 AM
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

Thanks Matt,
Right. But Occam's Razor is not complete. It says simpler is better, but 1 ...

... definition), then 1 will usually be shorter than 2.
-- Matt Mahoney, matmaho...@yahoo.com