Word of advice. You're creating your own artificial world here with its own
artificial rules.
AGI is about real vision of real objects in the real world. The two do not
relate - or compute.
It's a pity - it's good that you keep testing yourself, it's bad that they
aren't realistic tests. Sub
Hi,
I certainly agree with this method, but of course it's not original at all;
it's pretty much the basis of algorithmic learning theory, right?
Hutter's AIXI, for instance, works [very roughly speaking] by choosing the
most compact program that, based on historical data, would have yielded
maximu
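A minimal Python sketch of this "shortest consistent program" idea, on a toy
scale: the candidate hypotheses and their assumed description lengths are
illustrative inventions, not Hutter's actual formalism, which quantifies over
all programs for a universal Turing machine.

    # Pick the shortest hypothesis that reproduces the observed history.
    def shortest_consistent(candidates, history):
        """candidates: (description_length_bits, program) pairs, where
        program maps a time step t to a predicted observation.
        history: observations indexed by time step."""
        consistent = [
            (length, prog) for length, prog in candidates
            if all(prog(t) == obs for t, obs in enumerate(history))
        ]
        return min(consistent, key=lambda pair: pair[0]) if consistent else None

    # Toy usage: two hypotheses that both fit an alternating bit sequence.
    candidates = [
        (12, lambda t: t % 2),                # "alternating bits", short
        (40, lambda t: [0, 1, 0, 1][t % 4]),  # same predictions, longer
    ]
    best = shortest_consistent(candidates, [0, 1, 0, 1])
    print(best[0])  # 12 -- the more compact hypothesis is preferred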
Thanks Ben,
Right, explanatory reasoning is not new at all (also called abduction and
inference to the best explanation). But what seems to be elusive is a
precise and algorithmic method for implementing explanatory reasoning and
solving real problems, such as sensory perception. This is what I'm hopi
For visual perception, there are many reasons to think that a hierarchical
architecture can be effective... this is one of the things you may find in
dealing with real visual data but not with these toy examples...
E.g. in a spatiotemporal predictive hierarchy, the idea would be to create a
predic
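One minimal sketch of such a spatiotemporal predictive hierarchy, assuming a
toy exponential-average predictor at each level (real proposals, e.g. deep
predictive coding or hierarchical temporal memory, are far more elaborate):

    import numpy as np

    class PredictiveLevel:
        def __init__(self, size, rate=0.1):
            self.estimate = np.zeros(size)  # this level's prediction of its input
            self.rate = rate                # learning rate for the estimate

        def step(self, x):
            error = x - self.estimate       # prediction error: the surprise
            self.estimate += self.rate * error
            return error                    # only the surprise goes upward

    levels = [PredictiveLevel(4), PredictiveLevel(4), PredictiveLevel(4)]
    frame = np.array([1.0, 0.0, 0.0, 1.0])
    for _ in range(50):
        signal = frame
        for level in levels:
            signal = level.step(signal)     # each level explains what it can
    print(levels[0].estimate)  # close to `frame`: the input is now expected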
On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
> A method for comparing hypotheses in explanatory-based reasoning: Here is
> a simplified version of how we solve case study 1:
> The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close positio
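A hedged sketch of the kind of hypothesis comparison described above: each
hypothesis pairs a square in frame 1 with a square in frame 2, and a pairing
that implies less unexplained motion counts as the better explanation. The
distance-based scoring rule is an illustrative assumption, not necessarily the
method's actual metric.

    # Lower score = smaller unexplained motion = better explanation.
    def score(hypothesis):
        (x1, y1), (x2, y2) = hypothesis  # positions in frame 1 and frame 2
        return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

    hypotheses = [
        ((10, 10), (11, 10)),  # moved one pixel right: very close position
        ((10, 10), (50, 40)),  # jumped across the frame: implausible
    ]
    print(min(hypotheses, key=score))  # the small-motion hypothesis wins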
Jim: This illustrates one of the things wrong with the dreary instantiations of
the prevailing mindset of a group. It is only a matter of time until you
discover (through experiment) how absurd it is to celebrate the triumph of an
overly simplistic solution to a problem that is, by its very po
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI. Looking for the one right prediction/explanation is narrow
> AI. Being able to generate more and more possible explanations, which could
> all be valid, is AGI. The former is rational, uniform thinking.
lol.
Mike,
What I was trying to express by the word *expect* is NOT to predict [some
exact outcome]. *Expect* means that the algorithm has a way of comparing
observations to what the algorithm considers to be consistent with an
"explanation". This is something I struggled to solve for a long time
rega
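A tiny sketch of *expect* as consistency rather than point prediction: here an
explanation defines a region of observations it accepts. The per-feature
interval representation is purely an illustrative assumption.

    def consistent(explanation, observation):
        """explanation: dict of per-feature (low, high) ranges."""
        return all(lo <= observation[k] <= hi
                   for k, (lo, hi) in explanation.items())

    moving_right = {"dx": (0.5, 3.0), "dy": (-0.5, 0.5)}  # expected motion band
    print(consistent(moving_right, {"dx": 1.2, "dy": 0.1}))   # True: as expected
    print(consistent(moving_right, {"dx": -2.0, "dy": 0.0}))  # False: surprising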
The fact that you are using experiment and the fact that you recognized that
AGI needs to provide both explanation and expectations (differentiated from
the false precision of 'prediction') shows that you have a grasp of some of
the philosophical problems, but the fact that you would rely on a prim
Jim,
I am using oversimplification to identify the core problems involved. As
you can see, the oversimplification is revealing how to resolve certain
types of dilemmas and uncertainty. That is exactly why I did this. If you
can't solve a simple environment, you certainly can't solve the full
env
On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner wrote:
> Jim: This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mindset of a group. It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an ov
Jim,
Two things.
1) If the method I have suggested works for the simplest case, it is
quite straightforward to add complexity and then ask: how do I solve it
now? If you can't solve that case, there is no way in hell you will solve
the full AGI problem. This is how I intend to figure out how
This is wishful thinking. Wishful thinking is dangerous. How about instead of
hoping that AGI won't destroy the world, you study the problem and come up with
a safe design.
-- Matt Mahoney, matmaho...@yahoo.com
From: rob levy
To: agi
Sent: Sat, June 26, 20
Try ping-pong - as per the computer game. Just a line (/bat) and a
square (/ball) representing your opponent - and you have a line (/bat) to play
against them.
Now you've got a relatively simple true AGI visual problem - because if the
opponent returns the ball somewhat as a real human AGI does,
Ben, et al,
I think I may finally grok the fundamental misdirection that current AGI
thinking has taken!
This is a bit subtle, and hence subject to misunderstanding. Therefore I
will first attempt to explain what I see, WITHOUT so much trying to convince
you (or anyone) that it is necessarily c
That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
AI system could be constructed to beat humans at Pong ;p ... without
teaching us much of anything about intelligence...
Very likely a narrow-AI machine learning system could *learn* by experience
to beat humans at Pong ...
I definitely agree, however we lack a convincing model or plan of any sort
for the construction of systems demonstrating subjectivity, and it seems
plausible that subjectivity is functionally necessary for general
intelligence. Therefore it is reasonable to consider symbiosis as both a
safe design
This is wishful thinking. Wishful thinking is dangerous. How about instead
of hoping that AGI won't destroy the world, you study the problem and come
up with a safe design.
Agreed on this dangerous thought!
On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney wrote:
> This is wishful thinking. Wishful
I am working on logical satisfiability again. If what I am working on right
now works, it will become a pivotal moment in AGI, and what's more, the
method that I am developing will (probably) become a core method for AGI.
However, if the idea I am working on does not -itself- lead to a major
break
Well, Ben, I'm glad you're "quite sure" because you haven't given a single
reason why. Clearly you should be Number One advisor on every Olympic team,
because you've cracked the AGI problem of how to deal with opponents that can
move (whether themselves or balls) in multiple, unpredictable direc
Ben: I'm quite sure a simple narrow AI system could be constructed to beat
humans at Pong ;p
Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
single reason why.
Although Ben would have to give us an actual example (of a Pong program that
could beat humans at Pong) just to
This has probably been discussed at length, so I will appreciate a reference
on this:
Why does Legg's definition of intelligence (following on Hutter's AIXI and
related work) involve a reward function rather than a utility function? For
this purpose, reward is a function of the world state/history
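For reference, Legg and Hutter's universal intelligence measure is usually
written as

\[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}, \]

where \(E\) is the set of computable environments, \(K(\mu)\) is the
Kolmogorov complexity of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the
expected total reward that agent \(\pi\) accumulates in \(\mu\). The reward
signal is emitted by the environment itself, which is exactly the design
choice the question concerns.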
It's just that something like world hunger is so complex AGI would have to
master simpler problems. Also, there are many people and institutions that
have solutions to world hunger already and they get ignored. So an AGI would
have to get established over a period of time for anyone to really care
On 27 June 2010 21:25, John G. Rose wrote:
> It's just that something like world hunger is so complex AGI would have to
> master simpler problems.
>
I am not sure that that follows necessarily. Computing is full of situations
where a seemingly simple problem is not solved and a more complex one
The definition of universal intelligence being over all utility functions
implies that the utility function is unknown. Otherwise there is a fixed
solution.
-- Matt Mahoney, matmaho...@yahoo.com
From: Joshua Fox
To: agi
Sent: Sun, June 27, 2010 4:22:19 PM
rob levy wrote:
>> This is wishful thinking.
> I definitely agree, however we lack a convincing model or plan of any sort
> for the construction of systems demonstrating subjectivity,
Define subjectivity. An objective decision might appear subjective to you only
because you aren't intelligent e
I don't like the idea of enhancing human intelligence before the
singularity. I think crime has to be made impossible even for enhanced
humans first. I think life is too adept at abusing opportunities if
possible. I would like to see the singularity enabling AI to be as least
like a reproduction
I sketched a graph the other day which represented my thoughts on the
usefulness of hardcoding knowledge into an AI. (Graph attached)
Basically,
the more hardcoded knowledge you include in an AI, or AGI, the lower
the overall intelligence it will have, but the faster you will reach
tha
Travis Lenting wrote:
> I don't like the idea of enhancing human intelligence before the singularity.
The singularity is a point of infinite collective knowledge, and therefore
infinite unpredictability. Everything has to happen before the singularity
because there is no after.
> I think crime
Correct. Intelligence = log(knowledge) + log(computing power). At the extreme
left of your graph is AIXI, which has no knowledge but infinite computing
power. At the extreme right you have a giant lookup table.
-- Matt Mahoney, matmaho...@yahoo.com
From: M E
Hi Steve,
A few comments...
1)
Nobody is trying to implement Hutter's AIXI design; it's a mathematical
design intended as a "proof of principle".
2)
Within Hutter's framework, one calculates the shortest program that explains
the data, where "shortest" is measured on Turing machine M. Given a
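The quantity being minimized is the Kolmogorov complexity of the data relative
to the reference machine M:

\[ K_M(x) \;=\; \min\{\, \ell(p) \;:\; M(p) = x \,\}, \]

where \(\ell(p)\) is the length of program \(p\) in bits and \(M(p) = x\)
means M outputs the data \(x\) when run on \(p\).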
I think you're thinking of a plodding, limited-movement classic Pong line.
I'm thinking of a line that can, like a human player, move with varying speed
and pauses to more or less any part of its court to hit the ball, and then hit
it with varying speed to more or less any part of the opposite cour
You can always build the utility function into the assumed universal Turing
machine underlying the definition of algorithmic information...
I guess this will improve learning rate by some additive constant, in the
long run ;)
ben
On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox wrote:
> This has pr
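The "additive constant" quip refers to the invariance theorem of algorithmic
information theory: for any two universal machines M and M',

\[ K_M(x) \;\le\; K_{M'}(x) + c_{M,M'} \quad \text{for all } x, \]

where the constant \(c_{M,M'}\) depends only on the two machines (roughly, the
length of an interpreter for one on the other), not on \(x\). So building the
utility function into the reference machine shifts all complexities, and hence
the induced prior, by at most a constant.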
Ben,
What I saw as my central thesis is that propagating carefully conceived
dimensionality information along with classical "information" could greatly
improve the cognitive process, by FORCING reasonable physics WITHOUT having
to "understand" (by present concepts of what "understanding" means) p
Steve,
I know what dimensional analysis is, but it would be great if you could give
an example of how it's useful for everyday commonsense reasoning such as,
say, a service robot might need to do to figure out how to clean a house...
thx
ben
On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
wrote
> Everything has to happen before the singularity because there is no after.
I meant when machines take over technological evolution.
> That is easy. Eliminate all laws.
I would prefer a surveillance state. I should say: impossible to get away
with if conducted in public.
> Is there a difference betwe
Even with the variations you mention, I remain highly confident this is not
a difficult problem for narrow-AI machine learning methods
-- Ben G
On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner wrote:
> I think you're thinking of a plodding limited-movement classic Pong line.
>
> I'm thinking of a
Ben,
On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel wrote:
> know what dimensional analysis is, but it would be great if you could give
> an example of how it's useful for everyday commonsense reasoning such as,
> say, a service robot might need to do to figure out how to clean a house...
>
How
> -Original Message-
> From: Ian Parker [mailto:ianpark...@gmail.com]
>
> So an AGI would have to get established over a period of time for anyone to
> really care what it has to say about these types of issues. It could simulate
> things and come up with solutions but they would not get i
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
wrote:
> Ben,
>
> On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel wrote:
>
>> know what dimensional analysis is, but it would be great if you could
>> give an example of how it's useful for everyday commonsense reasoning such
>> as, say, a service r
Travis Lenting wrote:
>> Is there a difference between enhancing our intelligence by uploading and
>> creating killer robots? Think about it.
> Well yes, we're not all bad but I think you read me wrong because that's
> basically my worry.
What I mean is that one way to look at uploading is to cr
Mike,
you are mixing multiple issues. Just like my analogy of the Rubik's cube, full
AGI problems involve many problems at the same time. The problem I wrote
this email about was not about how to solve them all at the same time. It
was about how to solve one of those problems. After solving the prob
On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
>
> Humans may use sophisticated tactics to play Pong, but that doesn't
> mean it's the only way to win
>
> Humans use subtle and sophisticated methods to play chess also, right?
> But Deep Blue still kicks their ass...
If the rules of ches