Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Michael Swan

On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
> 
> Humans may use sophisticated tactics to play Pong, but that doesn't
> mean it's the only way to win
> 
> Humans use subtle and sophisticated methods to play chess also, right?
> But Deep Blue still kicks their ass...

If the rules of chess changed slightly, Deep Blue would play badly
without being reprogrammed.
There is also "anti-Deep-Blue" chess: play to avoid losing or taking
pieces for as long as possible, so as to keep the number of possible
outcomes high, and avoid moving pieces into well-known arrangements.

Playing like this against another human player, you would more than
likely lose.

> 
> The stock market is another situation where narrow-AI algorithms may
> already outperform humans ... certainly they outperform all except the
> very best humans...
> 
> ... ben g
> 
> On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
>  wrote:
> Oh well that settles it...
>  
> How do you know then when the opponent has changed his
> tactics?
>  
> How do you know when he's switched from a predominantly
> baseline game say to a net-rushing game?
>  
> And how do you know when the market has changed from bull to
> bear or vice versa, and I can start going short or long? Why
> is there any difference between the tennis & market
> situations?


I'm solving this by using an algorithm plus exception routines.

e.g. Input 100 numbers - write an algorithm that generalises/compresses
the input.

The answer may be:
(input_is_always > 0)  // highly general rule

(if that fails, try the exceptions)
// exceptions
// highly accurate exceptions
(input35 == -4)
(input75 == -50)
...
then more generalised exceptions, etc.

I believe such a system is similar to the way we remember things: we
tend to have highly detailed memory for exceptions - we remember things
about "white whales" more than about "ordinary whales". In fact, there
was a news story the other night about a returning white whale in
Brisbane, and there are additional laws requiring people to stay away
from this whale in particular, rather than from whales in general.
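
To make the pseudocode above concrete, here is a minimal sketch of the
general-rule-plus-exceptions idea. Python is used purely for illustration,
and the rule, indices and values are the made-up ones from the example above:

# Toy "general rule + exception table" predictor, following the example above.
def build_model(data):
    """Keep one very general rule and memorise only the inputs that break it."""
    def rule(x):                                  # highly general hypothesis
        return x > 0
    exceptions = {i: v for i, v in enumerate(data) if not rule(v)}
    return rule, exceptions

def predict(rule, exceptions, index, fallback=2):
    """Check the highly accurate exceptions first, then fall back to the rule."""
    if index in exceptions:
        return exceptions[index]                  # the "white whale" cases
    assert rule(fallback)                         # any value consistent with the rule
    return fallback

data = [2] * 100
data[35], data[75] = -4, -50                      # the two exceptions from the example
rule, exceptions = build_model(data)
print(exceptions)                                 # {35: -4, 75: -50}
print(predict(rule, exceptions, 35), predict(rule, exceptions, 10))   # -4 2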

>  
>  
>  
>  
>  
>  
>  
> 
> 
> From: Ben Goertzel 
> Sent: Monday, June 28, 2010 12:03 AM
> 
> To: agi 
> Subject: Re: [agi] Huge Progress on the Core of AGI
> 
> 
> 
> Even with the variations you mention, I remain highly
> confident this is not a difficult problem for narrow-AI
> machine learning methods
> 
> -- Ben G
> 
> On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner
>  wrote:
> I think you're thinking of a plodding limited-movement
> classic Pong line.
>  
> I'm thinking of a line that can like a human
> player move with varying speed and pauses to more or
> less any part of its court to hit the ball, and then
> hit it with varying speed to more or less any part of
> the opposite court. I think you'll find that bumps up
> the variables if not unknowns massively.
>  
> Plus just about every shot exchange presents you with
> dilemmas of how to place your shot and then move in
> anticipation of your opponent's return .
>  
> Remember the object here is to present a would-be AGI
> with a simple but *unpredictable* object to deal with,
> reflecting the realities of there being a great many
> such objects in the real world - as distinct from
> Dave's all too predictable objects.
>  
> The possible weakness of this pong example is that
> there might at some point cease to be unknowns, as
> there always are in real world situations, incl
> tennis. One could always introduce them if necessary -
> allowing say creative spins on the ball.
>  
> But I doubt that it will be necessary here for the
> purposes of anyone like Dave -  and v. offhand and
> with no doubt extreme license this strikes me as not a
> million miles from a hyper version of the TSP problem,
> where the towns can move around, and you can't be sure
> whether they'll be there when you arrive.  Or is there
> an "obviously true" solution for that problem too?
> [Very convenient these obviously true solutions].
>  
> 
> 
> From: Jim Bromer 
> Sent: Sunday, June 27, 2010 8:53 PM
> 
> To: agi 
>

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Mike,

you are mixing multiple issues. Just like my analogy of the Rubik's Cube, full
AGI problems involve many problems at the same time. The problem I wrote
this email about was not about how to solve them all at the same time; it
was about how to solve one of those problems. After solving that problem
satisfactorily for all test cases at a given complexity level, I intend to
incrementally add complexity and then continue to solve the problems I run
into. Your proposed "AGI problem" is a combination of sensory
interpretation, planning, plan/action execution, behavior learning, etc.
You would do well to learn from my approach and break the problem down
into its separate pieces. You would be a fool to implement such a system
without a good understanding of the sub-problems. If you break it down and
figure out how to solve each piece individually, while anticipating the end
goal, you would have a much better understanding and fewer problems as you
develop the system, because of your experience.

You are philosophizing about the right way, but your approach is completely
theoretical and devoid of any practical considerations. Why don't you try
your method, I'll try mine, and in a few years let's see how far we get. I
suspect you'll have a very nice Pong-playing program that can't do anything
else. I, on the other hand, would have a full-fledged theoretical foundation
and an implementation on increasingly complex environments. At that point,
the proof of concept would be sufficient to gain significant support, while
your approach would likely be another narrow approach that can play a single
game. Why? Because you're trying to juggle too many separate problems that
each require individual study. By lumping them all together and not
carefully considering each, you will not solve them well. You will be spread
too thin.

Dave

On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner wrote:

>  Oh well that settles it...
>
> How do you know then when the opponent has changed his tactics?
>
> How do you know when he's switched from a predominantly baseline game say
> to a net-rushing game?
>
> And how do you know when the market has changed from bull to bear or vice
> versa, and I can start going short or long? Why is there any difference
> between the tennis & market situations?
>
>





Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote:
>> Is there a difference between enhancing our intelligence by uploading and 
>> creating killer robots? Think about it.

> Well yes, we're not all bad, but I think you read me wrong because that's 
> basically my worry.

What I mean is that one way to look at uploading is this: you create a robot 
that behaves like you, and then you die. The question is whether you "become" 
the robot. But it is a nonsense question; nothing changes whichever way you 
answer it.

>> Assume we succeed. People want to be happy. Depending on how our minds are 
>> implemented, it's either a matter of rewiring our neurons or rewriting our 
>> software. Is that better than a gray goo accident?

> Are you asking if changing your hardware or software ends your true existence 
> like a grey goo accident would?

A state of maximum happiness or maximum utility is a degenerate mental state 
where any thought or perception would be unpleasant, because it would result in 
a different mental state. In a competition with machines that can't have 
everything they want (for example, they fear death and later die), the other 
machines would win, because they would retain an interest in self-preservation 
and you would not.

> Assuming the goo is unconscious, 

What do you mean by "unconscious"?

> it would be worse because there is the potential for a peaceful experience 
> free from the power struggle for limited resources, whether or not humans 
> truly exist.

That result could be reached by a dead planet, which BTW, is the only stable 
attractor in the chaotic process of evolution.

> Does anyone else worry about how we're going to keep this machine's 
> unprecedented resourcefulness from being abused by an elite few to further 
> protect and advance their social superiority?

If the elite few kill off all their competition, then theirs is the only 
ethical model that matters. From their point of view, it would be a good thing. 
How do you feel about humans currently being at the top of the food chain?

> To me it seems like if we can't create a democratic society where people have 
> real choices concerning the issues that affect them most and it  just ends up 
> being a continuation of the class war we have today, then maybe grey goo 
> would be the better option before we start "promoting democracy" throughout 
> the universe.

Freedom and fairness are important to us because they were programmed into our 
ethical models, not because they are actually important. As a counterexample, 
they are irrelevant to evolution. Gray goo might be collectively vastly more 
intelligent than humanity, if that makes you feel any better.
 -- Matt Mahoney, matmaho...@yahoo.com





From: Travis Lenting 
To: agi 
Sent: Sun, June 27, 2010 6:53:14 PM
Subject: Re: [agi] Questions for an AGI

Everything has to happen before the singularity because there is no after.

I meant when machines take over technological evolution. 

That is easy. Eliminate all laws.

I would prefer a surveillance state. I should say impossible to get away with 
if conducted in public. 

Is there a difference between enhancing our intelligence by uploading and 
creating killer robots? Think about it.

Well yes, we're not all bad, but I think you read me wrong because that's 
basically my worry.

Assume we succeed. People want to be happy. Depending on how our minds are 
implemented, it's either a matter of rewiring our neurons or rewriting our 
software. Is that better than a gray goo accident?

Are you asking if changing your hardware or software ends your true existence 
like a grey goo accident would? Assuming the goo is unconscious, it would be 
worse, because there is the potential for a peaceful experience free from the 
power struggle for limited resources, whether or not humans truly exist. 
Does anyone else worry about how we're going to keep this machine's 
unprecedented resourcefulness from being abused by an elite few to further 
protect and advance their social superiority? To me it seems like if we can't 
create a democratic society where people have real choices concerning the 
issues that affect them most and it  just ends up being a continuation of the 
class war we have today, then maybe grey goo would be the better option before 
we start "promoting democracy" throughout the universe.


On Sun, Jun 27, 2010 at 2:43 PM, Matt Mahoney  wrote:

Travis Lenting wrote:
>> I don't like the idea of enhancing human intelligence before the singularity.
>
>
>The singularity is a point of infinite collective knowledge, and therefore 
>infinite unpredictability. Everything has to happen before the singularity 
>because there is no after.
>
>
>> I think crime has to be made impossible even for an enhanced humans first. 
>
>
>That is easy. Eliminate all laws.
>
>
>> I would like to see the singularity enabling AI to be as least like a 
>> reproduction machine as possible.
>
>
>Is there a difference between enhancing our intelligence by upl

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
wrote:

> Ben,
>
> On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel  wrote:
>
>> I know what dimensional analysis is, but it would be great if you could
>> give an example of how it's useful for everyday commonsense reasoning such
>> as, say, a service robot might need to do to figure out how to clean a
>> house...
>>
>
> How much detergent will it need to clean the floors? Hmmm, we need to know
> ounces. We have the length and width of the floor, and the bottle says to
> use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
> oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
> multiply all three numbers together to get ounces. This WITHOUT
> "understanding" things like surface area, utilization, etc.
>


I think that the El Salvadorean maids who come to clean my house
occasionally solve this problem without any dimensional analysis or any
quantitative reasoning at all...

Probably they solve it based on nearest-neighbor matching against past
experiences cleaning other dirty floors with water in similarly sized and
shaped buckets...

-- ben g
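
A toy sketch of that nearest-neighbour recall, with invented past episodes
and amounts (Python purely for illustration):

# Toy nearest-neighbour recall: reuse the detergent amount from the most
# similar remembered floor-cleaning episode; no units or physics involved.
past_episodes = [                       # (length_m, width_m, detergent_oz) - made-up data
    (4.0, 3.0, 12.0),
    (6.0, 5.0, 30.0),
    (10.0, 8.0, 80.0),
]

def detergent_for(length, width):
    def distance(episode):
        return (episode[0] - length) ** 2 + (episode[1] - width) ** 2
    nearest = min(past_episodes, key=distance)
    return nearest[2]                   # just copy what worked last time

print(detergent_for(5.5, 4.5))          # 30.0, borrowed from the closest episode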





RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
> -Original Message-
> From: Ian Parker [mailto:ianpark...@gmail.com]
> 
> So an AGI would have to get established over a period of time for anyone
> to really care what it has to say about these types of issues. It could
> simulate things and come up with solutions but they would not get
> implemented unless it had power to influence. So in addition AGI would
> need to know how to make people listen... and maybe obey.
> 
> This is CRESS. CRESS would be an accessible option.
> 

Yes, I agree, it looks like that.

> IMO I think AGI will take the embedded route - like other types of
> computer systems - IRS, weather, military, Google, etc. - and we become
> dependent on it intergenerationally, so that it is impossible to survive
> without. At that point AGIs will have power to influence.
> 
> Look! The point is this:-
> 
> 1) An embedded system is AI not AGI.
> 
> 2) AGI will arise simply because all embedded systems are themselves
> searchable.
> 

A narrow "embedded" system, like say a DMV computer network, is not an AGI.
But that doesn't mean an AGI could not perform that function. In fact, AGI
might arise out of these systems needing to become more intelligent. And the
same AGI software might be used for a DMV, a space navigation system, the
IRS, NASDAQ, etc.; it could adapt, and do so efficiently. There are some
systems that tout multi-use now, but these are basically very narrow AI. AGI
will be able to apply its intelligence across domains and should be able to
put its feelers into all the particular subsystems. I do foresee some types
of standard interfaces into these narrow-AI computer networks, though - some
sort of intelligence standards, maybe - or the AGI just hooks into the
human interfaces...

An AGI could become a god, but it could also do some useful stuff, like running
everyday information systems, just as people with brains have to perform
menial labor.

John







Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel  wrote:

> I know what dimensional analysis is, but it would be great if you could give
> an example of how it's useful for everyday commonsense reasoning such as,
> say, a service robot might need to do to figure out how to clean a house...
>

How much detergent will it need to clean the floors? Hmmm, we need to know
ounces. We have the length and width of the floor, and the bottle says to
use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
multiply all three numbers together to get ounces. This WITHOUT
"understanding" things like surface area, utilization, etc.

Of course, throw in a few other available measures and it becomes REALLY easy
to come up with several wrong answers. This method does NOT avoid wrong
answers; it only provides a mechanism for having the right answer among them.

While this may be a challenge for dispensing detergent (especially if you
include the distance from the earth to the sun as one of your available
measures), it is little problem for learning.

I was more concerned with learning than with solving. I believe that
dimensional analysis could help learning a LOT, by maximally constraining
what is used as a basis for learning, without "throwing the baby out with
the bathwater", i.e. applying so much constraint that a good solution can't
"climb out" of the process.

Steve
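
A toy sketch of the candidate-generating process described above: each
available measure is tagged with a (metre, ounce) exponent pair, products
are formed, and only those whose dimensions come out as plain ounces
survive. The numbers, including the deliberately irrelevant sun distance,
are invented, and only multiply-or-skip combinations are tried:

from itertools import product

# Each available measure carries a value and a dimension vector (m_exp, oz_exp).
quantities = {
    "length":       (4.0,    (1, 0)),    # m
    "width":        (3.0,    (1, 0)),    # m
    "dosage":       (1.0,    (-2, 1)),   # oz per m^2, from the bottle
    "sun_distance": (1.5e11, (1, 0)),    # available but irrelevant measure
}
target = (0, 1)                          # we want plain ounces

candidates = []
for mask in product([0, 1], repeat=len(quantities)):      # use or skip each measure
    value, dim = 1.0, (0, 0)
    for use, (val, d) in zip(mask, quantities.values()):
        if use:
            value *= val
            dim = (dim[0] + d[0], dim[1] + d[1])
    if any(mask) and dim == target:
        candidates.append(([name for u, name in zip(mask, quantities) if u], value))

for names, value in candidates:
    print(names, value)
# The dimensional screen leaves a short list: the sensible answer
# (length * width * dosage = 12 oz) plus wildly wrong ones that smuggled in
# sun_distance - exactly the "right answer among them" behaviour described above.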


On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
wrote:
>
>> Ben,
>>
>> What I saw as my central thesis is that propagating carefully conceived
>> dimensionality information along with classical "information" could greatly
>> improve the cognitive process, by FORCING reasonable physics WITHOUT having
>> to "understand" (by present concepts of what "understanding" means) physics.
>> Hutter was just a foil to explain my thought. Note again my comments
>> regarding how physicists and astronomers "understand" some processes through
>> "dimensional analysis" that involves NONE of the sorts of "understanding"
>> that you might think necessary, yet can predictably come up with the right
>> answers.
>>
>> Are you up on the basics of dimensional analysis? The reality is that it
>> is quite imperfect, but is often able to yield a short list of "answers",
>> with the correct one being somewhere in the list. Usually, the wrong answers
>> are wildly wrong (they are probably computing something, but NOT what you
>> might be interested in), and are hence easily eliminated. I suspect that
>> neurons might be doing much the same, as could formulaic implementations
>> like (most) present AGI efforts. This might explain "natural architecture"
>> and guide human architectural efforts.
>>
>> In short, instead of a "pot of neurons", we might instead have a pot of
>> dozens of types of neurons that each have their own complex rules regarding
>> what other types of neurons they can connect to, and how they process
>> information. "Architecture" might involve deciding how many of each type to
>> provide, and what types to put adjacent to what other types, rather than the
>> more detailed concept now usually thought to exist.
>>
>> Thanks for helping me wring my thought out here.
>>
>> Steve
>> =
>> On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel  wrote:
>>
>>>
>>> Hi Steve,
>>>
>>> A few comments...
>>>
>>> 1)
>>> Nobody is trying to implement Hutter's AIXI design, it's a mathematical
>>> design intended as a "proof of principle"
>>>
>>> 2)
>>> Within Hutter's framework, one calculates the shortest program that
>>> explains the data, where "shortest" is measured on Turing  machine M.
>>> Given a sufficient number of observations, the choice of M doesn't matter
>>> and AIXI will eventually learn any computable reward pattern.  However,
>>> choosing the right M can greatly accelerate learning.  In the case of a
>>> physical AGI system, choosing M to incorporate the correct laws of physics
>>> would obviously accelerate learning considerably.
>>>
>>> 3)
>>> Many AGI designs try to incorporate prior understanding of the structure
>>> & properties of the physical world, in various ways.  I have a whole chapter
>>> on this in my forthcoming book on OpenCog  E.g. OpenCog's design
>>> includes a physics-engine, which is used directly and to aid with
>>> inferential extrapolations...
>>>
>>> So I agree with most of your points, but I don't find them original
>>> except in phrasing ;)
>>>
>>> ... ben
>>>
>>>
>>> On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield <
>>> steve.richfi...@gmail.com> wrote:
>>>
 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current
 AGI thinking has taken!

 *This is a bit subtle, and hence subject to misunderstanding. Therefore
 I will first attempt to explain what I see, WITHOUT so much trying to
 convince you (or anyone) that it is necessarily correct. Once I convey my
 vision, then let t

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Even with the variations you mention, I remain highly confident this is not
a difficult problem for narrow-AI machine learning methods

-- Ben G

On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner wrote:

>  I think you're thinking of a plodding limited-movement classic Pong line.
>
> I'm thinking of a line that can like a human player move with varying
> speed and pauses to more or less any part of its court to hit the ball, and
> then hit it with varying speed to more or less any part of the opposite
> court. I think you'll find that bumps up the variables if not
> unknowns massively.
>
>  Plus just about every shot exchange presents you with dilemmas of how to
> place your shot and then move in anticipation of your opponent's return .
>
> Remember the object here is to present a would-be AGI with a simple but
> *unpredictable* object to deal with, reflecting the realities of there being
> a great many such objects in the real world - as distinct from Dave's all
> too predictable objects.
>
> The possible weakness of this pong example is that there might at some
> point cease to be unknowns, as there always are in real world situations,
> incl tennis. One could always introduce them if necessary - allowing say
> creative spins on the ball.
>
> But I doubt that it will be necessary here for the purposes of anyone like
> Dave -  and v. offhand and with no doubt extreme license this strikes me as
> not a million miles from a hyper version of the TSP problem, where the towns
> can move around, and you can't be sure whether they'll be there when you
> arrive.  Or is there an "obviously true" solution for that problem too?
> [Very convenient these obviously true solutions].
>
>
>  *From:* Jim Bromer 
> *Sent:* Sunday, June 27, 2010 8:53 PM
> *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
> Ben:  I'm quite sure a simple narrow AI system could be constructed to beat
> humans at Pong ;p
> Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
> single reason why.
>
> Although Ben would have to give us an actual example (of a pong program
> that could beat humans at Pong) just to make sure that it is
> not that difficult a task, it seems like such an obviously true statement
> that there is almost no incentive for anyone to try it.  However, there are
> chess programs that can beat the majority of people who play chess without
> outside assistance.
> Jim Bromer
>
> On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner wrote:
>
>>  Well, Ben, I'm glad you're "quite sure" because you haven't given a
>> single reason why. Clearly you should be Number One advisor on every
>> Olympic team, because you've cracked the AGI problem of how to deal with
>> opponents that can move (whether themselves or balls) in multiple,
>> unpredictable directions, that is at the centre of just about every field
>> and court sport.
>>
>> I think if you actually analyse it, you'll find that you can't predict and
>> prepare for  the presumably at least 50 to 100 spots on a table tennis
>> board/ tennis court that your opponent can hit the ball to, let
>> alone for how he will play subsequent 10 to 20 shot rallies   - and you
>> can't devise a deterministic program to play here. These are true,
>> multiple-/poly-solution problems rather than the single solution ones you
>> are familiar with.
>>
>> That's why all of these sports have normally hundreds of different
>> competing philosophies and strategies, - and people continually can and do
>> come up with new approaches and styles of play to the sports overall - there
>> are endless possibilities.
>>
>> I suspect you may not play these sports, because one factor you've
>> obviously ignored (although I stressed it) is not just the complexity
>> but that in sports players can and do change their strategies - and that
>> would have to be a given in our computer game. In real world activities,
>> you're normally *supposed* to act unpredictably at least some of the time.
>> It's a fundamental subgoal.
>>
>> In sport, as in investment, "past performance is not a [sure] guide to
>> future performance" - companies and markets may not continue to behave as
>> they did in the past -  so that alone buggers any narrow AI predictive
>> approach.
>>
>> P.S. But the most basic reality of these sports is that you can't cover
>> every shot or move your opponent may make, and that gives rise to a
>> continuing stream of genuine dilemmas . For example, you have just returned
>> a ball from the extreme, far left of your court - do you now start moving
>> rapidly towards the centre of the court so that you will be prepared to
>> cover a ball to the extreme, near right side - or do you move more slowly?
>> If you don't move rapidly, you won't be able to cover that ball if it comes.
>> But if you do move rapidly, your opponent can play the ball back to the
>> extreme left and catch you out.
>>
>> It's a genuine dilemma and gamble - just like deciding whether to invest
>> in shares. And c

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
Everything has to happen before the singularity because there is no after.

I meant when machines take over technological evolution.

That is easy. Eliminate all laws.

I would prefer a surveillance state. I should say impossible to get away
with if conducted in public.

Is there a difference between enhancing our intelligence by uploading and
creating killer robots? Think about it.

Well yes, we're not all bad, but I think you read me wrong because
that's basically my worry.

Assume we succeed. People want to be happy. Depending on how our minds are
implemented, it's either a matter of rewiring our neurons or rewriting our
software. Is that better than a gray goo accident?

Are you asking if changing your hardware or software ends your
true existence like a grey goo accident would? Assuming the goo
is unconscious, it would be worse, because there is the potential for a
peaceful experience free from the power struggle for limited resources,
whether or not humans truly exist. Does anyone else worry about how we're
going to keep this machine's unprecedented resourcefulness from being abused
by an elite few to further protect and advance their social superiority? To
me it seems like if we can't create a democratic society where people have
real choices concerning the issues that affect them most, and it just ends
up being a continuation of the class war we have today, then maybe grey goo
would be the better option before we start "promoting democracy" throughout
the universe.

On Sun, Jun 27, 2010 at 2:43 PM, Matt Mahoney  wrote:

> Travis Lenting wrote:
> > I don't like the idea of enhancing human intelligence before the
> singularity.
>
> The singularity is a point of infinite collective knowledge, and therefore
> infinite unpredictability. Everything has to happen before the singularity
> because there is no after.
>
> > I think crime has to be made impossible even for an enhanced humans
> first.
>
> That is easy. Eliminate all laws.
>
> > I would like to see the singularity enabling AI to be as least like a
> reproduction machine as possible.
>
> Is there a difference between enhancing our intelligence by uploading and
> creating killer robots? Think about it.
>
> > Does it really need to be a general AI to cause a singularity? Can it not
> just stick to scientific data and quantify human uncertainty?  It seems like
> it would be less likely to ever care about killing all humans so it can rule
> the galaxy or that its an omnipotent servant.
>
> Assume we succeed. People want to be happy. Depending on how our minds are
> implemented, it's either a matter of rewiring our neurons or rewriting our
> software. Is that better than a gray goo accident?
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* Travis Lenting 
>
> *To:* agi 
> *Sent:* Sun, June 27, 2010 5:21:24 PM
>
> *Subject:* Re: [agi] Questions for an AGI
>
> I don't like the idea of enhancing human intelligence before the
> singularity. I think crime has to be made impossible even for an enhanced
> humans first. I think life is too adapt to abusing opportunities if
> possible. I would like to see the singularity enabling AI to be as least
> like a reproduction machine as possible. Does it really need to be a general
> AI to cause a singularity? Can it not just stick to scientific data and
> quantify human uncertainty?  It seems like it would be less likely to ever
> care about killing all humans so it can rule the galaxy or that its
> an omnipotent servant.
>
> On Sun, Jun 27, 2010 at 11:39 AM, The Wizard wrote:
>
>> This is wishful thinking. Wishful thinking is dangerous. How about instead
>> of hoping that AGI won't destroy the world, you study the problem and come
>> up with a safe design.
>>
>>
>> Agreed on this dangerous thought!
>>
>> On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney wrote:
>>
>>> This is wishful thinking. Wishful thinking is dangerous. How about
>>> instead of hoping that AGI won't destroy the world, you study the problem
>>> and come up with a safe design.
>>>
>>>
>>> -- Matt Mahoney, matmaho...@yahoo.com
>>>
>>>
>>> --
>>> *From:* rob levy 
>>> *To:* agi 
>>> *Sent:* Sat, June 26, 2010 1:14:22 PM
>>> *Subject:* Re: [agi] Questions for an AGI
>>>
>>>  why should AGIs give a damn about us?
>>>
>>>
 I like to think that they will give a damn because humans have a unique
>>> way of experiencing reality and there is no reason to not take advantage of
>>> that precious opportunity to create astonishment or bliss. If anything is
>>> important in the universe, its insuring positive experiences for all areas
>>> in which it is conscious, I think it will realize that. And with the
>>> resources available in the solar system alone, I don't think we will be much
>>> of a burden.
>>>
>>>
>>> I like that idea.  Another reason might be that we won't crack the
>>> problem of autonomous general intelligence, but the singularity will proceed
>>> regardless as a symbiotic r

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Steve,

I know what dimensional analysis is, but it would be great if you could give
an example of how it's useful for everyday commonsense reasoning such as,
say, a service robot might need to do to figure out how to clean a house...

thx
ben

On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
wrote:

> Ben,
>
> What I saw as my central thesis is that propagating carefully conceived
> dimensionality information along with classical "information" could greatly
> improve the cognitive process, by FORCING reasonable physics WITHOUT having
> to "understand" (by present concepts of what "understanding" means) physics.
> Hutter was just a foil to explain my thought. Note again my comments
> regarding how physicists and astronomers "understand" some processes through
> "dimensional analysis" that involves NONE of the sorts of "understanding"
> that you might think necessary, yet can predictably come up with the right
> answers.
>
> Are you up on the basics of dimensional analysis? The reality is that it is
> quite imperfect, but is often able to yield a short list of "answers", with
> the correct one being somewhere in the list. Usually, the wrong answers are
> wildly wrong (they are probably computing something, but NOT what you might
> be interested in), and are hence easily eliminated. I suspect that neurons
> might be doing much the same, as could formulaic implementations like (most)
> present AGI efforts. This might explain "natural architecture" and guide
> human architectural efforts.
>
> In short, instead of a "pot of neurons", we might instead have a pot of
> dozens of types of neurons that each have their own complex rules regarding
> what other types of neurons they can connect to, and how they process
> information. "Architecture" might involve deciding how many of each type to
> provide, and what types to put adjacent to what other types, rather than the
> more detailed concept now usually thought to exist.
>
> Thanks for helping me wring my thought out here.
>
> Steve
> =
> On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel  wrote:
>
>>
>> Hi Steve,
>>
>> A few comments...
>>
>> 1)
>> Nobody is trying to implement Hutter's AIXI design, it's a mathematical
>> design intended as a "proof of principle"
>>
>> 2)
>> Within Hutter's framework, one calculates the shortest program that
>> explains the data, where "shortest" is measured on Turing  machine M.
>> Given a sufficient number of observations, the choice of M doesn't matter
>> and AIXI will eventually learn any computable reward pattern.  However,
>> choosing the right M can greatly accelerate learning.  In the case of a
>> physical AGI system, choosing M to incorporate the correct laws of physics
>> would obviously accelerate learning considerably.
>>
>> 3)
>> Many AGI designs try to incorporate prior understanding of the structure &
>> properties of the physical world, in various ways.  I have a whole chapter
>> on this in my forthcoming book on OpenCog  E.g. OpenCog's design
>> includes a physics-engine, which is used directly and to aid with
>> inferential extrapolations...
>>
>> So I agree with most of your points, but I don't find them original except
>> in phrasing ;)
>>
>> ... ben
>>
>>
>> On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield <
>> steve.richfi...@gmail.com> wrote:
>>
>>> Ben, et al,
>>>
>>> *I think I may finally grok the fundamental misdirection that current
>>> AGI thinking has taken!
>>>
>>> *This is a bit subtle, and hence subject to misunderstanding. Therefore
>>> I will first attempt to explain what I see, WITHOUT so much trying to
>>> convince you (or anyone) that it is necessarily correct. Once I convey my
>>> vision, then let the chips fall where they may.
>>>
>>> On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel  wrote:
>>>
 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward

>>>
>>> ... and there it is! What did I see?
>>>
>>> Example applicable to the lengthy following discussion:
>>> 1 - 2
>>> 2 - 2
>>> 3 - 2
>>> 4 - 2
>>> 5 - ?
>>> What is "?".
>>>
>>> Now, I'll tell you that the left column represents the distance along a
>>> 4.5 unit long table, and the right column represents the distance above the
>>> floor that you will be as you walk the length of the table. Knowing this,
>>> without ANY supporting physical experience, I would guess "?" to be zero, or
>>> maybe a little more if I were to step off of the table and land onto
>>> something lower, like the shoes that I left there.
>>>
>>> In an imaginary world where a GI boots up with a complete understanding
>>> of physics, etc., we wouldn't prefer the simplest "program" at all, but
>>> rather the simplest representation of the real world that is not
>>> physics/math *in*consistent with our observations. All observations
>>> would be presumed to be consistent with the response curves of our sensors,
>>> showing a world in which Ne

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben,

What I saw as my central thesis is that propagating carefully conceived
dimensionality information along with classical "information" could greatly
improve the cognitive process, by FORCING reasonable physics WITHOUT having
to "understand" (by present concepts of what "understanding" means) physics.
Hutter was just a foil to explain my thought. Note again my comments
regarding how physicists and astronomers "understand" some processes through
"dimensional analysis" that involves NONE of the sorts of "understanding"
that you might think necessary, yet can predictably come up with the right
answers.

Are you up on the basics of dimensional analysis? The reality is that it is
quite imperfect, but is often able to yield a short list of "answers", with
the correct one being somewhere in the list. Usually, the wrong answers are
wildly wrong (they are probably computing something, but NOT what you might
be interested in), and are hence easily eliminated. I suspect that neurons
might be doing much the same, as could formulaic implementations like (most)
present AGI efforts. This might explain "natural architecture" and guide
human architectural efforts.

In short, instead of a "pot of neurons", we might instead have a pot of
dozens of types of neurons that each have their own complex rules regarding
what other types of neurons they can connect to, and how they process
information. "Architecture" might involve deciding how many of each type to
provide, and what types to put adjacent to what other types, rather than the
more detailed concept now usually thought to exist.

Thanks for helping me wring my thought out here.

Steve
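
A toy sketch of that tagging idea: terms advertise the dimension tag they
produce and the tags they accept, and connections form only where the tags
are compatible. The type names and tags are invented, and Python is used
purely for illustration:

from dataclasses import dataclass

@dataclass
class Term:
    name: str
    produces: str                                 # dimension tag of this term's output
    accepts: frozenset = frozenset()              # dimension tags it will take as input

def wire(pot):
    """Connect every producer to every consumer whose accepted tags match."""
    return [(src.name, dst.name)
            for src in pot for dst in pot
            if src is not dst and src.produces in dst.accepts]

pot = [
    Term("edge_detector", produces="length"),
    Term("area_estimator", produces="area", accepts=frozenset({"length"})),
    Term("dose_planner", produces="volume", accepts=frozenset({"area", "dose_per_area"})),
]
print(wire(pot))
# [('edge_detector', 'area_estimator'), ('area_estimator', 'dose_planner')]
# "Architecture" then amounts to choosing how many of each type go into the pot.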
=
On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel  wrote:

>
> Hi Steve,
>
> A few comments...
>
> 1)
> Nobody is trying to implement Hutter's AIXI design, it's a mathematical
> design intended as a "proof of principle"
>
> 2)
> Within Hutter's framework, one calculates the shortest program that
> explains the data, where "shortest" is measured on Turing  machine M.
> Given a sufficient number of observations, the choice of M doesn't matter
> and AIXI will eventually learn any computable reward pattern.  However,
> choosing the right M can greatly accelerate learning.  In the case of a
> physical AGI system, choosing M to incorporate the correct laws of physics
> would obviously accelerate learning considerably.
>
> 3)
> Many AGI designs try to incorporate prior understanding of the structure &
> properties of the physical world, in various ways.  I have a whole chapter
> on this in my forthcoming book on OpenCog  E.g. OpenCog's design
> includes a physics-engine, which is used directly and to aid with
> inferential extrapolations...
>
> So I agree with most of your points, but I don't find them original except
> in phrasing ;)
>
> ... ben
>
>
> On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield <
> steve.richfi...@gmail.com> wrote:
>
>> Ben, et al,
>>
>> *I think I may finally grok the fundamental misdirection that current AGI
>> thinking has taken!
>>
>> *This is a bit subtle, and hence subject to misunderstanding. Therefore I
>> will first attempt to explain what I see, WITHOUT so much trying to convince
>> you (or anyone) that it is necessarily correct. Once I convey my vision,
>> then let the chips fall where they may.
>>
>> On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel  wrote:
>>
>>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>>> most compact program that, based on historical data, would have yielded
>>> maximum reward
>>>
>>
>> ... and there it is! What did I see?
>>
>> Example applicable to the lengthy following discussion:
>> 1 - 2
>> 2 - 2
>> 3 - 2
>> 4 - 2
>> 5 - ?
>> What is "?".
>>
>> Now, I'll tell you that the left column represents the distance along a
>> 4.5 unit long table, and the right column represents the distance above the
>> floor that you will be as you walk the length of the table. Knowing this,
>> without ANY supporting physical experience, I would guess "?" to be zero, or
>> maybe a little more if I were to step off of the table and land onto
>> something lower, like the shoes that I left there.
>>
>> In an imaginary world where a GI boots up with a complete understanding of
>> physics, etc., we wouldn't prefer the simplest "program" at all, but rather
>> the simplest representation of the real world that is not physics/math *
>> in*consistent with our observations. All observations would be presumed
>> to be consistent with the response curves of our sensors, showing a world in
>> which Newton's laws prevail, etc. Armed with these presumptions, our
>> physics-complete AGI would look for the simplest set of *UN*observed
>> phenomena that explained the observed phenomena. This theory of a
>> physics-complete AGI seems undeniable, but of course, we are NOT born
>> physics-complete - or are we?!
>>
>> This all comes down to the limits of representational math. At great risk
>> of hand-waving on a keyboard, I'll try to exp

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
You can always build the utility function into the assumed universal Turing
machine underlying the definition of algorithmic information...

I guess this will improve learning rate by some additive constant, in the
long run ;)

ben

On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox  wrote:

> This has probably been discussed at length, so I will appreciate a
> reference on this:
>
> Why does Legg's definition of intelligence (following on Hutter's AIXI and
> related work) involve a reward function rather than a utility function? For
> this purpose, reward is a function of the world state/history which is
> unknown to the agent, while a utility function is known to the agent.
>
> Even if we replace the former with the latter, we can still have a
> definition of intelligence that integrates optimization capacity over
> all possible utility functions.
>
> What is the real  significance of the difference between the two types of
> functions here?
>
> Joshua
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"
“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
I think you're thinking of a plodding limited-movement classic Pong line.

I'm thinking of a line that can, like a human player, move with varying speed and 
pauses to more or less any part of its court to hit the ball, and then hit it 
with varying speed to more or less any part of the opposite court. I think 
you'll find that bumps up the variables, if not the unknowns, massively.

Plus just about every shot exchange presents you with dilemmas of how to place 
your shot and then move in anticipation of your opponent's return.

Remember the object here is to present a would-be AGI with a simple but 
*unpredictable* object to deal with, reflecting the realities of there being a 
great many such objects in the real world - as distinct from Dave's all too 
predictable objects.

The possible weakness of this pong example is that there might at some point 
cease to be unknowns, as there always are in real world situations, incl 
tennis. One could always introduce them if necessary - allowing say creative 
spins on the ball.

But I doubt that it will be necessary here for the purposes of anyone like Dave 
-  and v. offhand and with no doubt extreme license this strikes me as not a 
million miles from a hyper version of the TSP problem, where the towns can move 
around, and you can't be sure whether they'll be there when you arrive.  Or is 
there an "obviously true" solution for that problem too? [Very convenient these 
obviously true solutions].



From: Jim Bromer 
Sent: Sunday, June 27, 2010 8:53 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


Ben:  I'm quite sure a simple narrow AI system could be constructed to beat 
humans at Pong ;p
Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a 
single reason why.

Although Ben would have to give us an actual example (of a pong program that 
could beat humans at Pong) just to make sure that it is not that difficult a 
task, it seems like such an obviously true statement that there is almost no 
incentive for anyone to try it.  However, there are chess programs that can 
beat the majority of people who play chess without outside assistance.
Jim Bromer


On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner  wrote:

  Well, Ben, I'm glad you're "quite sure" because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

  I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

  That's why all of these sports have normally hundreds of different competing 
philosophies and strategies, - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

  I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

  In sport, as in investment, "past performance is not a [sure] guide to future 
performance" - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

  P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

  It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

  Welcome to the real world of AGI problems. You should get to know it.

  And as this example (and my rock wall problem) indicate, these problems can 
be as simple and accessible as fairly easy narrow AI problems. 

  From: Ben Goertzel 
  Sent: Sunday, June 27, 2010 7:33 PM
  To: agi 
  Subject: Re: [agi]

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Hi Steve,

A few comments...

1)
Nobody is trying to implement Hutter's AIXI design, it's a mathematical
design intended as a "proof of principle"

2)
Within Hutter's framework, one calculates the shortest program that explains
the data, where "shortest" is measured on Turing  machine M.   Given a
sufficient number of observations, the choice of M doesn't matter and AIXI
will eventually learn any computable reward pattern.  However, choosing the
right M can greatly accelerate learning.  In the case of a physical AGI
system, choosing M to incorporate the correct laws of physics would
obviously accelerate learning considerably.

3)
Many AGI designs try to incorporate prior understanding of the structure &
properties of the physical world, in various ways.  I have a whole chapter
on this in my forthcoming book on OpenCog  E.g. OpenCog's design
includes a physics-engine, which is used directly and to aid with
inferential extrapolations...

So I agree with most of your points, but I don't find them original except
in phrasing ;)

... ben
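
A toy, heavily simplified flavour of point 2 above - this is not AIXI, just
a minimum-description-length selection over two hand-written hypotheses,
with invented code lengths standing in for "program length as measured on M",
applied to the table example quoted below:

# Two candidate "programs" for the 1->2, 2->2, 3->2, 4->2 sequence quoted below.
def constant_2(t):                 # "the height stays at 2"
    return 2

def drops_at_end(t):               # "tables are 4.5 units long; after that you fall"
    return 2 if t <= 4.5 else 0

# Invented code lengths under two reference machines: the "physics-aware" one
# builds in gravity and table edges, so the falling hypothesis codes cheaply.
generic_machine       = [(5, constant_2), (40, drops_at_end)]
physics_aware_machine = [(6, constant_2), (4, drops_at_end)]

def best_hypothesis(machine, observations):
    consistent = [(length, h) for length, h in machine
                  if all(h(t) == y for t, y in observations)]
    return min(consistent, key=lambda lh: lh[0])[1]   # shortest consistent program

observations = [(1, 2), (2, 2), (3, 2), (4, 2)]
for label, machine in [("generic", generic_machine),
                       ("physics-aware", physics_aware_machine)]:
    h = best_hypothesis(machine, observations)
    print(label, "predicts", h(5), "at step 5")
# generic predicts 2; physics-aware predicts 0 - the same answer the
# table-walking intuition below gives, reached with fewer observations.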


On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield
wrote:

> Ben, et al,
>
> *I think I may finally grok the fundamental misdirection that current AGI
> thinking has taken!
>
> *This is a bit subtle, and hence subject to misunderstanding. Therefore I
> will first attempt to explain what I see, WITHOUT so much trying to convince
> you (or anyone) that it is necessarily correct. Once I convey my vision,
> then let the chips fall where they may.
>
> On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel  wrote:
>
>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>> most compact program that, based on historical data, would have yielded
>> maximum reward
>>
>
> ... and there it is! What did I see?
>
> Example applicable to the lengthy following discussion:
> 1 - 2
> 2 - 2
> 3 - 2
> 4 - 2
> 5 - ?
> What is "?".
>
> Now, I'll tell you that the left column represents the distance along a 4.5
> unit long table, and the right column represents the distance above the
> floor that you will be as you walk the length of the table. Knowing this,
> without ANY supporting physical experience, I would guess "?" to be zero, or
> maybe a little more if I were to step off of the table and land onto
> something lower, like the shoes that I left there.
>
> In an imaginary world where a GI boots up with a complete understanding of
> physics, etc., we wouldn't prefer the simplest "program" at all, but rather
> the simplest representation of the real world that is not physics/math *in
> *consistent with our observations. All observations would be presumed to
> be consistent with the response curves of our sensors, showing a world in
> which Newton's laws prevail, etc. Armed with these presumptions, our
> physics-complete AGI would look for the simplest set of *UN*observed
> phenomena that explained the observed phenomena. This theory of a
> physics-complete AGI seems undeniable, but of course, we are NOT born
> physics-complete - or are we?!
>
> This all comes down to the limits of representational math. At great risk
> of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
> concepts into NN/AGI terms.
>
> We all know about layering and columns in neural systems, and understand
> Bayesian math. However, let's dig a little deeper into exactly what is being
> represented by the "outputs" (or "terms" for died-in-the-wool AGIers). All
> physical quantities are well known to have value, significance, and
> dimensionality. Neurons/Terms (N/T) could easily be protein-tagged as to the
> dimensionality that their functionality is capable of producing, so that
> only compatible N/Ts could connect to them. However, let's dig a little
> deeper into "dimensionality"
>
> Physicists think we live in an MKS (Meters, Kilograms, Seconds) world, and
> that all dimensionality can be reduced to MKS. For physics purposes they may
> be right (see challenge below), but maybe for information processing
> purposes, they are missing some important things.
>
> *Challenge to MKS:* Note that some physicists and most astronomers utilize
> "*dimensional analysis*" where they experimentally play with the
> dimensions of observations to inductively find manipulations that would
> yield the dimensions of unobservable quantities, e.g. the mass of a star,
> and then run the numbers through the same manipulation to see if the results
> at least have the right exponent. However, many/most such manipulations
> produce nonsense, so they simply use this technique to jump from
> observations to a list of prospective results with wildly different
> exponents, and discard the results with the ridiculous exponents to find the
> correct result. The frequent failures of this process indirectly
> demonstrate that there is more to dimensionality (and hence physics) than
> just MKS. Let's accept that, and presume that neurons must have already
> dealt with whatever is missing from current thought.
>
> Consid

Re: [agi] Theory of Hardcoded Intelligence

2010-06-27 Thread Matt Mahoney
Correct. Intelligence = log(knowledge) + log(computing power). At the extreme 
left of your graph is AIXI, which has no knowledge but infinite computing 
power. At the extreme right you have a giant lookup table.

 -- Matt Mahoney, matmaho...@yahoo.com





From: M E 
To: agi 
Sent: Sun, June 27, 2010 5:36:38 PM
Subject: [agi] Theory of Hardcoded Intelligence

  I sketched a graph the other day which represented my thoughts on the 
usefulness of hardcoding knowledge into an AI.  (Graph attached)

Basically, the more hardcoded knowledge you include in an AI, or AGI, the lower 
the overall intelligence it will have, but the faster you will reach 
that value.  I would place any real AGI toward the left of the 
graph, with systems like CYC toward the right.

Matt






Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote:
> I don't like the idea of enhancing human intelligence before the singularity.

The singularity is a point of infinite collective knowledge, and therefore 
infinite unpredictability. Everything has to happen before the singularity 
because there is no after.

> I think crime has to be made impossible even for an enhanced humans first. 

That is easy. Eliminate all laws.

> I would like to see the singularity enabling AI to be as least like a 
> reproduction machine as possible.

Is there a difference between enhancing our intelligence by uploading and 
creating killer robots? Think about it.

> Does it really need to be a general AI to cause a singularity? Can it not 
> just stick to scientific data and quantify human uncertainty?  It seems like 
> it would be less likely to ever care about killing all humans so it can rule 
> the galaxy or that its an omnipotent servant.   

Assume we succeed. People want to be happy. Depending on how our minds are 
implemented, it's either a matter of rewiring our neurons or rewriting our 
software. Is that better than a gray goo accident?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Travis Lenting 
To: agi 
Sent: Sun, June 27, 2010 5:21:24 PM
Subject: Re: [agi] Questions for an AGI

I don't like the idea of enhancing human intelligence before the singularity. I 
think crime has to be made impossible even for an enhanced humans first. I 
think life is too adapt to abusing opportunities if possible. I would like to 
see the singularity enabling AI to be as least like a reproduction machine as 
possible. Does it really need to be a general AI to cause a singularity? Can it 
not just stick to scientific data and quantify human uncertainty?  It seems 
like it would be less likely to ever care about killing all humans so it can 
rule the galaxy or that its an omnipotent servant.


On Sun, Jun 27, 2010 at 11:39 AM, The Wizard  wrote:

This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.
>
>
>
>Agreed on this dangerous thought! 
>
>
>On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney  wrote:
>
>>>
>>This is wishful thinking. Wishful thinking is dangerous. How about instead of 
>>hoping that AGI won't destroy the world, you study the problem and come up 
>>with a safe design.
>>
>> -- Matt Mahoney, matmaho...@yahoo.com

>>
>>
>>
>>

 >>From: rob levy 
>>To: agi 
>>Sent: Sat, June 26, 2010 1:14:22 PM
>>Subject: Re: [agi]
>> Questions for an AGI
>>
>>
>why should AGIs give a damn about us?
>
>>>
>>>
>>I like to think that they will give a damn because humans have a unique way 
>>of experiencing reality and there is no reason to not take advantage of that 
>>precious opportunity to create astonishment or bliss. If anything is 
>>important in the universe, it's ensuring positive experiences for all areas in 
>>which it is conscious, I think it will realize that. And with the resources 
>>available in the solar system alone, I don't think we will be much of a 
>>burden. 
>>
>>
>>I like that idea.  Another reason might be that we won't crack the problem of 
>>autonomous general intelligence, but the singularity will proceed regardless 
>>as a symbiotic relationship between life and AI.  That would be beneficial to 
>>us as a form of intelligence expansion, and beneficial to the artificial 
>>entity a way of being alive and having an experience of the world.  

>
>
>
>-- 
>Carlos A Mejia
>
>Taking life one singularity at a time.
>www.Transalchemy.com  
>




[agi] Theory of Hardcoded Intelligence

2010-06-27 Thread M E






I sketched a graph the other day which represented my thoughts on the
 usefulness of hardcoding knowledge into an AI.  (Graph attached)

Basically, the more hardcoded knowledge you include in an AI or AGI, the lower 
the overall intelligence it will have, but the faster you will reach that 
value.  I would expect any real AGI to be toward the left of the graph, with 
systems like CYC toward the right.
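
Since the attached graph may not come through in the archive, here is a rough
matplotlib sketch of the intended trade-off. The curve shapes below are
illustrative assumptions only, not data:

import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(0, 1, 100)             # fraction of knowledge that is hardcoded
peak_intelligence = 1.0 - 0.8 * h      # assumed: more hardcoding -> lower ceiling
time_to_peak = 0.2 + 0.8 * (1.0 - h)   # assumed: more hardcoding -> ceiling reached sooner

plt.plot(h, peak_intelligence, label="overall intelligence reached")
plt.plot(h, time_to_peak, label="time to reach it")
plt.xlabel("hardcoded knowledge (AGI toward left, CYC-like toward right)")
plt.legend()
plt.show()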

Matt
  

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
I don't like the idea of enhancing human intelligence before the
singularity. I think crime has to be made impossible even for enhanced
humans first. I think life is too apt to abuse opportunities when
possible. I would like to see the singularity-enabling AI to be as little
like a reproduction machine as possible. Does it really need to be a general
AI to cause a singularity? Can it not just stick to scientific data and
quantify human uncertainty?  It seems like it would be less likely to ever
care about killing all humans so it can rule the galaxy or that it's
an omnipotent servant.

On Sun, Jun 27, 2010 at 11:39 AM, The Wizard  wrote:

> This is wishful thinking. Wishful thinking is dangerous. How about instead
> of hoping that AGI won't destroy the world, you study the problem and come
> up with a safe design.
>
>
> Agreed on this dangerous thought!
>
> On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney wrote:
>
>> This is wishful thinking. Wishful thinking is dangerous. How about instead
>> of hoping that AGI won't destroy the world, you study the problem and come
>> up with a safe design.
>>
>>
>> -- Matt Mahoney, matmaho...@yahoo.com
>>
>>
>> --
>> *From:* rob levy 
>> *To:* agi 
>> *Sent:* Sat, June 26, 2010 1:14:22 PM
>> *Subject:* Re: [agi] Questions for an AGI
>>
>>  why should AGIs give a damn about us?
>>
>>
>>> I like to think that they will give a damn because humans have a unique
>> way of experiencing reality and there is no reason to not take advantage of
>> that precious opportunity to create astonishment or bliss. If anything is
>> important in the universe, it's ensuring positive experiences for all areas
>> in which it is conscious, I think it will realize that. And with the
>> resources available in the solar system alone, I don't think we will be much
>> of a burden.
>>
>>
>> I like that idea.  Another reason might be that we won't crack the problem
>> of autonomous general intelligence, but the singularity will proceed
>> regardless as a symbiotic relationship between life and AI.  That would be
>> beneficial to us as a form of intelligence expansion, and beneficial to the
>> artificial entity a way of being alive and having an experience of the
>> world.
>
>
>
> --
> Carlos A Mejia
>
> Taking life one singularity at a time.
> www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
rob levy wrote:
>> This is wishful thinking.
> I definitely agree, however we lack a convincing model or plan of any sort 
> for the construction of systems demonstrating subjectivity, 

Define subjectivity. An objective decision might appear subjective to you only 
because you aren't intelligent enough to understand the decision process.

> Therefore it is reasonable to consider symbiosis

How does that follow?

> as both a safe design 

How do you know that a self-replicating organism that we create won't evolve to 
kill us instead? Do we control evolution?

> and potentially the only possible design 

It is not the only possible design. It is possible to create systems that are 
more intelligent than a single human but less intelligent than all of humanity, 
without the capability to modify themselves or reproduce without the collective 
permission of the billions of humans that own and maintain control over them. An 
example would be the internet.

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy 
To: agi 
Sent: Sun, June 27, 2010 2:37:15 PM
Subject: Re: [agi] Questions for an AGI

I definitely agree, however we lack a convincing model or plan of any sort for 
the construction of systems demonstrating subjectivity, and it seems plausible 
that subjectivity is functionally necessary for general intelligence. Therefore 
it is reasonable to consider symbiosis as both a safe design and potentially 
the only possible design (at least at first), depending on how creative and 
resourceful we get in cog sci/ AGI in coming years.


On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney  wrote:

This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
>
>
>

From: rob levy 
>To: agi 
>Sent: Sat, June 26, 2010 1:14:22 PM
>Subject: Re: [agi]
> Questions for an AGI
>
>
>>>why should AGIs give a damn about us?
>>>
>>
>I like to think that they will give a damn because humans have a unique way of 
>experiencing reality and there is no reason to not take advantage of that 
>precious opportunity to create astonishment or bliss. If anything is important 
>in the universe, it's ensuring positive experiences for all areas in which it 
>is conscious, I think it will realize that. And with the resources available 
>in the solar system alone, I don't think we will be much of a burden. 
>
>
>I like that idea.  Another reason might be that we won't crack the problem of 
>autonomous general intelligence, but the singularity will proceed regardless 
>as a symbiotic relationship between life and AI.  That would be beneficial to 
>us as a form of intelligence expansion, and beneficial to the artificial 
>entity a way of being alive and having an experience of the world.  


Re: [agi] Reward function vs utility

2010-06-27 Thread Matt Mahoney
The definition of universal intelligence being over all utility functions 
implies that the utility function is unknown. Otherwise there is a fixed 
solution.
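
For reference, a sketch of the Legg-Hutter universal intelligence measure as I
recall it (see their "Universal Intelligence" paper for the exact statement):

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov
complexity of \mu, and V_\mu^\pi is the expected total reward that policy \pi
earns in \mu. The reward comes from the unknown environment; if the agent
instead knew a fixed utility function, the sum over environments would collapse
to a single fixed optimization problem.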

 -- Matt Mahoney, matmaho...@yahoo.com





From: Joshua Fox 
To: agi 
Sent: Sun, June 27, 2010 4:22:19 PM
Subject: [agi] Reward function vs utility


This has probably been discussed at length, so I will appreciate a reference on 
this:

Why does Legg's definition of intelligence (following on Hutter's AIXI and 
related work) involve a reward function rather than a utility function? For 
this purpose, reward is a function of the world state/history which is unknown 
to the agent, while a utility function is known to the agent. 

Even if we replace the former with the latter, we can still have a definition 
of intelligence that integrates optimization capacity over all possible utility 
functions. 

What is the real  significance of the difference between the two types of 
functions here?

Joshua


Re: [agi] The problem with AGI per Sloman

2010-06-27 Thread Ian Parker
On 27 June 2010 21:25, John G. Rose  wrote:

> It's just that something like world hunger is so complex AGI would have to
> master simpler problems.
>

I am not sure that that follows necessarily. Computing is full of situations
where a seemingly simple problem is not solved and a more complex one is. I
remember posting some time ago on Cassini.

> Also, there are many people and institutions that have solutions to world
> hunger already and they get ignored.
>
Indeed. AGI in the shape of a search engine would find these solutions.
World Hunger might well be soluble *simply because so much work has already
been done.* AGI might well start off as search and develop into feasibility
and solutions.

> So an AGI would have to get established over a period of time for anyone to
> really care what it has to say about these types of issues. It could
> simulate things and come up with solutions but they would not get
> implemented unless it had power to influence. So in addition AGI would need
> to know how to make people listen... and maybe obey.
>

This is CRESS. CRESS would be an accessible option.

>
>
> IMO I think AGI will take the embedded route - like other types of computer
> systems - IRS, weather, military, Google, etc. and we become dependent
> intergenerationally so that it is impossible to survive without. At that
> point AGI's will have power to influence.
>
>
>
Look! The point is this:-

1) An embedded system is AI not AGI.

2) AGI will arise simply because all embedded systems are themselves
searchable.


  - Ian Parker

>
>
> *From:* Ian Parker [mailto:ianpark...@gmail.com]
> *Sent:* Saturday, June 26, 2010 2:19 PM
> *To:* agi
> *Subject:* Re: [agi] The problem with AGI per Sloman
>
>
>
> Actually if you are serious about solving a political or social question
> then what you really need is CRESS.
> The solution of World Hunger is BTW a political question not a technical
> one. Hunger is largely due to bad governance in the Third World. How do you
> get good governance? One way to look at the problem is via CRESS and
> run simulations in Second Life.
>
>
>
> One thing which has in fact struck me in my linguistic researches is this.
> Google Translate is based on having Gigabytes of bilingual text. The fact
> that GT is so bad at technical Arabic indicates the absence of such
> bilingual text. Indeed Israel publishes more papers than the whole of the
> Islamic world. This is of profound importance for understanding the Middle
> East. I am sure CRESS would confirm this.
>
>
>
> AGI would without a doubt approach political questions by examining all the
> data about the various countries before making a conclusion. AGI would
> probably be what you would consult for long term solutions. It might not be
> so good at dealing with something (say) like the Gaza flotilla. In coming to
> this conclusion I have the University of Surrey and CRESS in mind.
>
>
>
>
>
>   - Ian Parker
>
> On 26 June 2010 14:36, John G. Rose  wrote:
>
> > -Original Message-
> > From: Ian Parker [mailto:ianpark...@gmail.com]
> >
> >
> > How do you solve World Hunger? Does AGI have to. I think if it is truly
> "G" it
> > has to. One way would be to find out what other people had written on the
> > subject and analyse the feasibility of their solutions.
> >
> >
>
> Yes, that would show the generality of their AGI theory. Maybe a particular
> AGI might be able to work with some problems but plateau out on its
> intelligence for whatever reason and not be able to work on more
> sophisticated issues. An AGI could be "hardcoded" perhaps and not improve
> much, whereas another AGI might improve to where it could tackle vast
> unknowns at increasing efficiency. There are common components in tackling
> unknowns, complexity classes for example, but some AGI systems may operate
> significantly more efficiently and improve. Human brains at some point may
> plateau without further augmentation though I'm not sure we have come close
> to what the brain is capable of.
>
> John
>

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
It's just that something like world hunger is so complex AGI would have to
master simpler problems. Also, there are many people and institutions that
have solutions to world hunger already and they get ignored. So an AGI would
have to get established over a period of time for anyone to really care what
it has to say about these types of issues. It could simulate things and come
up with solutions but they would not get implemented unless it had power to
influence. So in addition AGI would need to know how to make people
listen... and maybe obey.

 

IMO I think AGI will take the embedded route - like other types of computer
systems - IRS, weather, military, Google, etc. and we become dependent
intergenerationally so that it is impossible to survive without. At that
point AGI's will have power to influence.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 
Sent: Saturday, June 26, 2010 2:19 PM
To: agi
Subject: Re: [agi] The problem with AGI per Sloman

 

Actually if you are serious about solving a political or social question
then what you really need is CRESS. The solution of World Hunger is BTW a
political question, not a technical one. Hunger is largely due to bad
governance in the Third World. How do you get good governance? One way to look
at the problem is via CRESS and run simulations in Second Life.

 

One thing which has in fact struck me in my linguistic researches is this.
Google Translate is based on having Gigabytes of bilingual text. The fact
that GT is so bad at technical Arabic indicates the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.

 

AGI would without a doubt approach political questions by examining all the
data about the various countries before making a conclusion. AGI would
probably be what you would consult for long term solutions. It might not be
so good at dealing with something (say) like the Gaza flotilla. In coming to
this conclusion I have the University of Surrey and CRESS in mind.

 

 

  - Ian Parker

On 26 June 2010 14:36, John G. Rose  wrote:

> -Original Message-
> From: Ian Parker [mailto:ianpark...@gmail.com]
>
>
> How do you solve World Hunger? Does AGI have to. I think if it is truly
"G" it
> has to. One way would be to find out what other people had written on the
> subject and analyse the feasibility of their solutions.
>
>

Yes, that would show the generality of their AGI theory. Maybe a particular
AGI might be able to work with some problems but plateau out on its
intelligence for whatever reason and not be able to work on more
sophisticated issues. An AGI could be "hardcoded" perhaps and not improve
much, whereas another AGI might improve to where it could tackle vast
unknowns at increasing efficiency. There are common components in tackling
unknowns, complexity classes for example, but some AGI systems may operate
significantly more efficiently and improve. Human brains at some point may
plateau without further augmentation though I'm not sure we have come close
to what the brain is capable of.

John





[agi] Reward function vs utility

2010-06-27 Thread Joshua Fox
This has probably been discussed at length, so I will appreciate a reference
on this:

Why does Legg's definition of intelligence (following on Hutter's AIXI and
related work) involve a reward function rather than a utility function? For
this purpose, reward is a function of the world state/history which is
unknown to the agent, while a utility function is known to the agent.

Even if we replace the former with the latter, we can still have a
definition of intelligence that integrates optimization capacity over
all possible utility functions.

What is the real  significance of the difference between the two types of
functions here?

Joshua





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
Ben:  I'm quite sure a simple narrow AI system could be constructed to beat
humans at Pong ;p
Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
single reason why.

Although Ben would have to give us an actual example (of a pong program that
could beat humans at Pong) just to make sure that it is not that difficult a
task, it seems like such an obviously true statement that there is almost no
incentive for anyone to try it.  However, there are chess programs that can
beat the majority of people who play chess without outside assistance.
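
For what it's worth, a minimal sketch of the sort of narrow-AI Pong player that
claim presumably has in mind, assuming the program can read the ball's position
and velocity from the game state (everything here is illustrative, not an
actual implementation):

def predict_intercept_y(ball_x, ball_y, vx, vy, paddle_x, court_height):
    # Project the ball to the paddle's column, reflecting off the top and
    # bottom walls. Assumes the ball is moving toward our side (vx has the
    # right sign); a real player would also handle the outgoing case.
    if vx == 0:
        return ball_y
    t = (paddle_x - ball_x) / vx              # time until the ball reaches our column
    y = (ball_y + vy * t) % (2 * court_height)
    return 2 * court_height - y if y > court_height else y

def paddle_move(paddle_y, target_y, dead_zone=2.0):
    # Return -1, 0, or +1: step the paddle toward the predicted intercept.
    if target_y > paddle_y + dead_zone:
        return 1
    if target_y < paddle_y - dead_zone:
        return -1
    return 0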
Jim Bromer

On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner wrote:

>  Well, Ben, I'm glad you're "quite sure" because you haven't given a
> single reason why. Clearly you should be Number One advisor on every
> Olympic team, because you've cracked the AGI problem of how to deal with
> opponents that can move (whether themselves or balls) in multiple,
> unpredictable directions, that is at the centre of just about every field
> and court sport.
>
> I think if you actually analyse it, you'll find that you can't predict and
> prepare for  the presumably at least 50 to 100 spots on a table tennis
> board/ tennis court that your opponent can hit the ball to, let
> alone for how he will play subsequent 10 to 20 shot rallies   - and you
> can't devise a deterministic program to play here. These are true,
> multiple-/poly-solution problems rather than the single solution ones you
> are familiar with.
>
> That's why all of these sports have normally hundreds of different
> competing philosophies and strategies, - and people continually can and do
> come up with new approaches and styles of play to the sports overall - there
> are endless possibilities.
>
> I suspect you may not play these sports, because one factor you've
> obviously ignored (although I stressed it) is not just the complexity
> but that in sports players can and do change their strategies - and that
> would have to be a given in our computer game. In real world activities,
> you're normally *supposed* to act unpredictably at least some of the time.
> It's a fundamental subgoal.
>
> In sport, as in investment, "past performance is not a [sure] guide to
> future performance" - companies and markets may not continue to behave as
> they did in the past -  so that alone buggers any narrow AI predictive
> approach.
>
> P.S. But the most basic reality of these sports is that you can't cover
> every shot or move your opponent may make, and that gives rise to a
> continuing stream of genuine dilemmas . For example, you have just returned
> a ball from the extreme, far left of your court - do you now start moving
> rapidly towards the centre of the court so that you will be prepared to
> cover a ball to the extreme, near right side - or do you move more slowly?
> If you don't move rapidly, you won't be able to cover that ball if it comes.
> But if you do move rapidly, your opponent can play the ball back to the
> extreme left and catch you out.
>
> It's a genuine dilemma and gamble - just like deciding whether to invest in
> shares. And competitive sports are built on such dilemmas.
>
> Welcome to the real world of AGI problems. You should get to know it.
>
> And as this example (and my rock wall problem) indicate, these problems can
> be as simple and accessible as fairly easy narrow AI problems.
>  *From:* Ben Goertzel 
> *Sent:* Sunday, June 27, 2010 7:33 PM
>   *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
>
> That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
> AI system could be constructed to beat humans at Pong ;p ... without
> teaching us much of anything about intelligence...
>
> Very likely a narrow-AI machine learning system could *learn* by experience
> to beat humans at Pong ... also without teaching us much
> of anything about intelligence...
>
> Pong is almost surely a "toy domain" ...
>
> ben g
>
> On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner wrote:
>
>>  Try ping-pong -  as per the computer game. Just a line (/bat) and a
>> square(/ball) representing your opponent - and you have a line(/bat) to play
>> against them
>>
>> Now you've got a relatively simple true AGI visual problem - because if
>> the opponent returns the ball somewhat as a real human AGI does,  (without
>> the complexities of spin etc just presumably repeatedly changing the
>> direction (and perhaps the speed)  of the returned ball) - then you have a
>> fundamentally *unpredictable* object.
>>
>> How will your program learn to play that opponent - bearing in mind that
>> the opponent is likely to keep changing and even evolving strategy? Your
>> approach will have to be fundamentally different from how a program learns
>> to play a board game, where all the possibilities are predictable. In the
>> real world, "past performance is not a [sure] guide to future performance".
>> Bayes doesn't apply.
>>
>> That's the real issue here -  it's not one of simplicity/complexity - it's

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Well, Ben, I'm glad you're "quite sure" because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

That's why all of these sports have normally hundreds of different competing 
philosophies and strategies, - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

In sport, as in investment, "past performance is not a [sure] guide to future 
performance" - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

Welcome to the real world of AGI problems. You should get to know it.

And as this example (and my rock wall problem) indicate, these problems can be 
as simple and accessible as fairly easy narrow AI problems. 

From: Ben Goertzel 
Sent: Sunday, June 27, 2010 7:33 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI



That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow AI 
system could be constructed to beat humans at Pong ;p ... without teaching us 
much of anything about intelligence...

Very likely a narrow-AI machine learning system could *learn* by experience to 
beat humans at Pong ... also without teaching us much 
of anything about intelligence...

Pong is almost surely a "toy domain" ...

ben g


On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner  wrote:

  Try ping-pong -  as per the computer game. Just a line (/bat) and a 
square(/ball) representing your opponent - and you have a line(/bat) to play 
against them

  Now you've got a relatively simple true AGI visual problem - because if the 
opponent returns the ball somewhat as a real human AGI does,  (without the 
complexities of spin etc just presumably repeatedly changing the direction (and 
perhaps the speed)  of the returned ball) - then you have a fundamentally 
*unpredictable* object.

  How will your program learn to play that opponent - bearing in mind that the 
opponent is likely to keep changing and even evolving strategy? Your approach 
will have to be fundamentally different from how a program learns to play a 
board game, where all the possibilities are predictable. In the real world, 
"past performance is not a [sure] guide to future performance". Bayes doesn't 
apply.

  That's the real issue here -  it's not one of simplicity/complexity - it's 
that  your chosen worlds all consist of objects that are predictable, because 
they behave consistently, are shaped consistently, and come in consistent, 
closed sets - and  can only basically behave in one way at any given point. AGI 
is about dealing with the real world of objects that are unpredictable because 
they behave inconsistently, even contradictorily, are shaped inconsistently and 
come in inconsistent, open sets - and can behave in multi-/poly-ways at any 
given point. These differences apply at all levels from the most complex to the 
simplest.

  Dealing with consistent (and regular) objects is no preparation for dealing 
with inconsistent, irregular objects. It's a fundamental error.

  Real AGI animals and humans were clearly designe

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
I am working on logical satisfiability again.  If what I am working on right
now works, it will become a pivotal moment in AGI, and what's more, the
method that I am developing will (probably) become a core method for AGI.
However, if the idea I am working on does not -itself- lead to a major
breakthrough (which is the likelihood) then the idea will (probably) not
become a core issue regardless of its significance to me right at this
moment.

This is a personal statement but it is not just a question that can be
resolved through personal perspective.  So I have to rely on a more
reasonable and balanced perspective that does not just assume that I will be
successful without some hard evidence.  Without the benefit of knowing what
will happen with the theory at this time, I have to assume that there is no
evidence that this is going to be a central approach which will in some
manifestation be central to AGI in the future.  I can see that as one
possibility but this one view has to be integrated with other possibilities
as well.

I appreciate people's reports of what they are doing, and I would happily
tell you what I am working on if I was more sure that it won't work or had
it all figured out and I thought anyone would be interested (even if it
didn't work.)

Dave asked and answered, " How do we add and combine this complex behavior
learning, explanation, recognition and understanding into our system?
Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely."

That's not the answer.  That is a statement of a subgoal some of which is
programmable, but there is nothing in the statement that describes how it
can be actually achieved and there is nothing in the statement which
suggests that you have a mature insight into the nature of the problem.
There is nothing in the statement that seems new to me, I presume that many
of the programmers in the group have considered something similar at some
time in the past.

I am trying to avoid criticisms that get unnecessarily personal, but there
are some criticisms of ideas that should be made from time to time, and some
times a personal perspective is so tightly interwoven into the ideas that a
statement of a subgoal can look like it is a solution to a difficult
problem.

But Mike was absolutely right about one thing.  Constantly testing your
ideas with experiments is important, and if I ever gain any traction in
-anything- that I am doing, I will begin doing some AGI experiments again.

Jim Bromer





Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
This is wishful thinking. Wishful thinking is dangerous. How about instead
of hoping that AGI won't destroy the world, you study the problem and come
up with a safe design.


Agreed on this dangerous thought!
On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney  wrote:

> This is wishful thinking. Wishful thinking is dangerous. How about instead
> of hoping that AGI won't destroy the world, you study the problem and come
> up with a safe design.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* rob levy 
> *To:* agi 
> *Sent:* Sat, June 26, 2010 1:14:22 PM
> *Subject:* Re: [agi] Questions for an AGI
>
>  why should AGIs give a damn about us?
>
>
>> I like to think that they will give a damn because humans have a unique
> way of experiencing reality and there is no reason to not take advantage of
> that precious opportunity to create astonishment or bliss. If anything is
> important in the universe, it's ensuring positive experiences for all areas
> in which it is conscious, I think it will realize that. And with the
> resources available in the solar system alone, I don't think we will be much
> of a burden.
>
>
> I like that idea.  Another reason might be that we won't crack the problem
> of autonomous general intelligence, but the singularity will proceed
> regardless as a symbiotic relationship between life and AI.  That would be
> beneficial to us as a form of intelligence expansion, and beneficial to the
> artificial entity a way of being alive and having an experience of the
> world.



-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
I definitely agree, however we lack a convincing model or plan of any sort
for the construction of systems demonstrating subjectivity, and it seems
plausible that subjectivity is functionally necessary for general
intelligence. Therefore it is reasonable to consider symbiosis as both a
safe design and potentially the only possible design (at least at first),
depending on how creative and resourceful we get in cog sci/ AGI in coming
years.

On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney  wrote:

> This is wishful thinking. Wishful thinking is dangerous. How about instead
> of hoping that AGI won't destroy the world, you study the problem and come
> up with a safe design.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* rob levy 
> *To:* agi 
> *Sent:* Sat, June 26, 2010 1:14:22 PM
> *Subject:* Re: [agi] Questions for an AGI
>
>  why should AGIs give a damn about us?
>
>
>> I like to think that they will give a damn because humans have a unique
> way of experiencing reality and there is no reason to not take advantage of
> that precious opportunity to create astonishment or bliss. If anything is
> important in the universe, it's ensuring positive experiences for all areas
> in which it is conscious, I think it will realize that. And with the
> resources available in the solar system alone, I don't think we will be much
> of a burden.
>
>
> I like that idea.  Another reason might be that we won't crack the problem
> of autonomous general intelligence, but the singularity will proceed
> regardless as a symbiotic relationship between life and AI.  That would be
> beneficial to us as a form of intelligence expansion, and beneficial to the
> artificial entity a way of being alive and having an experience of the
> world.





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
AI system could be constructed to beat humans at Pong ;p ... without
teaching us much of anything about intelligence...

Very likely a narrow-AI machine learning system could *learn* by experience
to beat humans at Pong ... also without teaching us much
of anything about intelligence...

Pong is almost surely a "toy domain" ...

ben g

On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner wrote:

>  Try ping-pong -  as per the computer game. Just a line (/bat) and a
> square(/ball) representing your opponent - and you have a line(/bat) to play
> against them
>
> Now you've got a relatively simple true AGI visual problem - because if the
> opponent returns the ball somewhat as a real human AGI does,  (without the
> complexities of spin etc just presumably repeatedly changing the direction
> (and perhaps the speed)  of the returned ball) - then you have a
> fundamentally *unpredictable* object.
>
> How will your program learn to play that opponent - bearing in mind that
> the opponent is likely to keep changing and even evolving strategy? Your
> approach will have to be fundamentally different from how a program learns
> to play a board game, where all the possibilities are predictable. In the
> real world, "past performance is not a [sure] guide to future performance".
> Bayes doesn't apply.
>
> That's the real issue here -  it's not one of simplicity/complexity - it's
> that  your chosen worlds all consist of objects that are predictable,
> because they behave consistently, are shaped consistently, and come in
> consistent, closed sets - and  can only basically behave in one way at any
> given point. AGI is about dealing with the real world of objects that are
> unpredictable because they behave inconsistently, even contradictorily, are
> shaped inconsistently and come in inconsistent, open sets - and can behave
> in multi-/poly-ways at any given point. These differences apply at all
> levels from the most complex to the simplest.
>
> Dealing with consistent (and regular) objects is no preparation for dealing
> with inconsistent, irregular objects. It's a fundamental error.
>
> Real AGI animals and humans were clearly designed to deal with a world of
> objects that have some consistencies but overall are inconsistent, irregular
> and come in open sets. The perfect regularities and consistencies of
> geometrical figures and mechanical motion (and boxes moving across a screen)
> were only invented very recently.
>
>
>
>  *From:* David Jones 
> *Sent:* Sunday, June 27, 2010 5:57 PM
> *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
> Jim,
>
> Two things.
>
> 1) If the method I have suggested works for the most simple case, it is
> quite straight forward to add complexity and then ask, how do I solve it
> now. If you can't solve that case, there is no way in hell you will solve
> the full AGI problem. This is how I intend to figure out how to solve such a
> massive problem. You cannot tackle the whole thing all at once. I've tried
> it and it doesn't work because you can't focus on anything. It is like a
> Rubik's cube. You turn one piece to get the color orange in place, but at
> the same time you are screwing up the other colors. Now imagine that times
> 1000. You simply can't do it. So, you start with a simple demonstration of
> the difficulties and show how to solve a small puzzle, such as a Rubik's
> cube with 4 little cubes to a side instead of 6. Then you can show how to
> solve 2 sides of a rubiks cube, etc. Eventually, it will be clear how to
> solve the whole problem because by the time you're done, you have a complete
> understanding of what is going on and how to go about solving it.
>
> 2) I haven't mentioned a method for matching expected behavior to
> observations and bypassing the default algorithms, but I have figured out
> quite a lot about how to do it. I'll give you an example from my own notes
> below. What I've realized is that the AI creates *expectations* (again).
> When those expectations are matched, the AI does not do its default
> processing and analysis. It doesn't do the default matching that it normally
> does when it has no other knowledge. It starts with an existing hypothesis.
> When unexpected observations or inconsistencies occur, then the AI will have
> a *reason* or *cue* (these words again... very important concepts) to look
> for a better hypothesis. Only then, should it look for another hypothesis.
>
> My notes:
> How does the AI learn and figure out how to explain complex unforeseen
> behaviors that are not preprogrammable. For example the situation above
> regarding two windows. How does it learn the following knowledge: the
> notepad icon opens a new notepad window and that two windows can exist...
> not just one window that changes. the bar with the notepad icon represents
> an instance. the bar at the bottom with numbers on it represents multiple
> instances of the same window and 

[agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Steve Richfield
Ben, et al,

*I think I may finally grok the fundamental misdirection that current AGI
thinking has taken!*

This is a bit subtle, and hence subject to misunderstanding. Therefore I
will first attempt to explain what I see, WITHOUT so much trying to convince
you (or anyone) that it is necessarily correct. Once I convey my vision,
then let the chips fall where they may.

On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel  wrote:

> Hutter's AIXI for instance works [very roughly speaking] by choosing the
> most compact program that, based on historical data, would have yielded
> maximum reward
>

... and there it is! What did I see?

Example applicable to the lengthy following discussion:
1 - 2
2 - 2
3 - 2
4 - 2
5 - ?
What is "?".

Now, I'll tell you that the left column represents the distance along a 4.5
unit long table, and the right column represents the distance above the
floor that you will be at as you walk the length of the table. Knowing this,
without ANY supporting physical experience, I would guess "?" to be zero, or
maybe a little more if I were to step off of the table and land onto
something lower, like the shoes that I left there.

In an imaginary world where a GI boots up with a complete understanding of
physics, etc., we wouldn't prefer the simplest "program" at all, but rather
the simplest representation of the real world that is not physics/math
*in*consistent
with our observations. All observations would be presumed to be consistent
with the response curves of our sensors, showing a world in which Newton's
laws prevail, etc. Armed with these presumptions, our physics-complete AGI
would look for the simplest set of *UN*observed phenomena that explained the
observed phenomena. This theory of a physics-complete AGI seems undeniable,
but of course, we are NOT born physics-complete - or are we?!

This all comes down to the limits of representational math. At great risk of
hand-waving on a keyboard, I'll try to explain by pseudo-translating the
concepts into NN/AGI terms.

We all know about layering and columns in neural systems, and understand
Bayesian math. However, let's dig a little deeper into exactly what is being
represented by the "outputs" (or "terms" for dyed-in-the-wool AGIers). All
physical quantities are well known to have value, significance, and
dimensionality. Neurons/Terms (N/T) could easily be protein-tagged as to the
dimensionality that their functionality is capable of producing, so that
only compatible N/Ts could connect to them. However, let's dig a little
deeper into "dimensionality"

Physicists think we live in an MKS (Meters, Kilograms, Seconds) world, and
that all dimensionality can be reduced to MKS. For physics purposes they may
be right (see challenge below), but maybe for information processing
purposes, they are missing some important things.

*Challenge to MKS:* Note that some physicists and most astronomers utilize "
*dimensional analysis*" where they experimentally play with the dimensions
of observations to inductively find manipulations that would yield the
dimensions of unobservable quantities, e.g. the mass of a star, and then run
the numbers through the same manipulation to see if the results at least
have the right exponent. However, many/most such manipulations produce
nonsense, so they simply use this technique to jump from observations to a
list of prospective results with wildly different exponents, and discard the
results with the ridiculous exponents to find the correct result. The
frequent failures of this process indirectly demonstrate that there is more
to dimensionality (and hence physics) than just MKS. Let's accept that, and
presume that neurons must have already dealt with whatever is missing from
current thought.
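
As a concrete, standard instance of the technique (my own example, not from the
post above): guess that a pendulum's period depends on length, mass, and
gravity as $P = k\,\ell^{a} m^{b} g^{c}$ and match dimensions:

\mathrm{s} \;=\; \mathrm{m}^{a}\,\mathrm{kg}^{b}\,(\mathrm{m\,s^{-2}})^{c}
\;\Rightarrow\; b = 0,\quad a + c = 0,\quad -2c = 1
\;\Rightarrow\; P = k\,\sqrt{\ell/g}.

The exponents are forced, but the dimensionless constant k (which is $2\pi$ for
small swings) is invisible to the method, which is exactly where such
manipulations can go wrong.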

Consider, there is some (hopefully finite) set of reasonable manipulations
that could be done to Bayesian measures, with the various competing theories
of recognition representing part of that set. The reasonable mathematics to
perform on spacial features is probably different than the reasonable
mathematics to perform on recognized objects, or the recognition of
impossible observations, the manipulation of ideas, etc. Hence, N/Ts could
also be tagged for this deeper level of dimensionality, so that ideas don't
get mixed up with spatial features, etc.
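
A crude way to picture that tagging constraint in code (the tag names here are
invented purely for illustration):

from dataclasses import dataclass

@dataclass
class Term:
    name: str
    dimension: str                      # e.g. "spatial_feature", "object", "idea"

def can_connect(source, accepted_dimensions):
    # A downstream term only accepts inputs whose dimension tag it declares.
    return source.dimension in accepted_dimensions

edge = Term("edge_detector", "spatial_feature")
print(can_connect(edge, {"spatial_feature"}))   # True: a feature layer accepts it
print(can_connect(edge, {"idea"}))              # False: an idea layer rejects it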

Note that we may not have perfected this process, and further, that this
process need not be perfected. Somewhere around the age of 12, many of our
neurons DIE. Perhaps these were just the victims of insufficiently precise
dimensional tagging?

Once things can ONLY connect up in mathematically reasonable ways, what
remains between a newborn and a physics-complete AGI? Obviously, the
physics, which can be quite different on land than in the water. Hence, the
physics must also be learned.

My point here is that if we impose a fragile requirement for mathematical
correctness against a developing system of physics and REJECT simplistic
explanations (not observations) that would violate either 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Try ping-pong -  as per the computer game. Just a line (/bat) and a 
square(/ball) representing your opponent - and you have a line(/bat) to play 
against them

Now you've got a relatively simple true AGI visual problem - because if the 
opponent returns the ball somewhat as a real human AGI does,  (without the 
complexities of spin etc just presumably repeatedly changing the direction (and 
perhaps the speed)  of the returned ball) - then you have a fundamentally 
*unpredictable* object.

How will your program learn to play that opponent - bearing in mind that the 
opponent is likely to keep changing and even evolving strategy? Your approach 
will have to be fundamentally different from how a program learns to play a 
board game, where all the possibilities are predictable. In the real world, 
"past performance is not a [sure] guide to future performance". Bayes doesn't 
apply.

That's the real issue here -  it's not one of simplicity/complexity - it's that 
 your chosen worlds all consist of objects that are predictable, because they 
behave consistently, are shaped consistently, and come in consistent, closed 
sets - and  can only basically behave in one way at any given point. AGI is 
about dealing with the real world of objects that are unpredictable because 
they behave inconsistently, even contradictorily, are shaped inconsistently and 
come in inconsistent, open sets - and can behave in multi-/poly-ways at any 
given point. These differences apply at all levels from the most complex to the 
simplest.

Dealing with consistent (and regular) objects is no preparation for dealing 
with inconsistent, irregular objects. It's a fundamental error.

Real AGI animals and humans were clearly designed to deal with a world of 
objects that have some consistencies but overall are inconsistent, irregular 
and come in open sets. The perfect regularities and consistencies of 
geometrical figures and mechanical motion (and boxes moving across a screen) 
were only invented very recently.




From: David Jones 
Sent: Sunday, June 27, 2010 5:57 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


Jim,

Two things.

1) If the method I have suggested works for the most simple case, it is quite 
straight forward to add complexity and then ask, how do I solve it now. If you 
can't solve that case, there is no way in hell you will solve the full AGI 
problem. This is how I intend to figure out how to solve such a massive 
problem. You cannot tackle the whole thing all at once. I've tried it and it 
doesn't work because you can't focus on anything. It is like a Rubik's cube. 
You turn one piece to get the color orange in place, but at the same time you 
are screwing up the other colors. Now imagine that times 1000. You simply can't 
do it. So, you start with a simple demonstration of the difficulties and show 
how to solve a small puzzle, such as a Rubik's cube with 4 little cubes to a 
side instead of 6. Then you can show how to solve 2 sides of a rubiks cube, 
etc. Eventually, it will be clear how to solve the whole problem because by the 
time you're done, you have a complete understanding of what is going on and how 
to go about solving it.

2) I haven't mentioned a method for matching expected behavior to observations 
and bypassing the default algorithms, but I have figured out quite a lot about 
how to do it. I'll give you an example from my own notes below. What I've 
realized is that the AI creates *expectations* (again).  When those 
expectations are matched, the AI does not do its default processing and 
analysis. It doesn't do the default matching that it normally does when it has 
no other knowledge. It starts with an existing hypothesis. When unexpected 
observations or inconsistencies occur, then the AI will have a *reason* or 
*cue* (these words again... very important concepts) to look for a better 
hypothesis. Only then, should it look for another hypothesis. 

My notes: 
How does the AI learn and figure out how to explain complex unforeseen behaviors 
that are not preprogrammable. For example the situation above regarding two 
windows. How does it learn the following knowledge: the notepad icon opens a 
new notepad window and that two windows can exist... not just one window that 
changes. the bar with the notepad icon represents an instance. the bar at the 
bottom with numbers on it represents multiple instances of the same window and 
if you click on it it shows you representative bars for each window. 

 How do we add and combine this complex behavior learning, explanation, 
recognition and understanding into our system? 

 Answer: The way that such things are learned is by making observations, 
learning patterns and then connecting the patterns in a way that is consistent, 
explanatory and likely. 

Example: Clicking the notepad icon causes a notepad window to appear with no 
content. If we previously had a notepad window open, it may seem like clicking 
the icon just clears the content by t

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy 
To: agi 
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi] Questions for an AGI

>why should AGIs give a damn about us?

>
I like to think that they will give a damn because humans have a unique way of 
experiencing reality and there is no reason to not take advantage of that 
precious opportunity to create astonishment or bliss. If anything is important 
in the universe, it's ensuring positive experiences for all areas in which it is 
conscious, I think it will realize that. And with the resources available in 
the solar system alone, I don't think we will be much of a burden. 

I like that idea.  Another reason might be that we won't crack the problem of 
autonomous general intelligence, but the singularity will proceed regardless as 
a symbiotic relationship between life and AI.  That would be beneficial to us 
as a form of intelligence expansion, and beneficial to the artificial entity a 
way of being alive and having an experience of the world.  


Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Jim,

Two things.

1) If the method I have suggested works for the most simple case, it is
quite straight forward to add complexity and then ask, how do I solve it
now. If you can't solve that case, there is no way in hell you will solve
the full AGI problem. This is how I intend to figure out how to solve such a
massive problem. You cannot tackle the whole thing all at once. I've tried
it and it doesn't work because you can't focus on anything. It is like a
Rubik's cube. You turn one piece to get the color orange in place, but at
the same time you are screwing up the other colors. Now imagine that times
1000. You simply can't do it. So, you start with a simple demonstration of
the difficulties and show how to solve a small puzzle, such as a Rubik's
cube with 4 little cubes to a side instead of 6. Then you can show how to
solve 2 sides of a rubiks cube, etc. Eventually, it will be clear how to
solve the whole problem because by the time you're done, you have a complete
understanding of what is going on and how to go about solving it.

2) I haven't mentioned a method for matching expected behavior to
observations and bypassing the default algorithms, but I have figured out
quite a lot about how to do it. I'll give you an example from my own notes
below. What I've realized is that the AI creates *expectations* (again).
When those expectations are matched, the AI does not do its default
processing and analysis. It doesn't do the default matching that it normally
does when it has no other knowledge. It starts with an existing hypothesis.
When unexpected observations or inconsistencies occur, then the AI will have
a *reason* or *cue* (these words again... very important concepts) to look
for a better hypothesis. Only then, should it look for another hypothesis.

My notes:
How does the AI learn and figure out how to explain complex unforeseen
behaviors that are not preprogrammable. For example the situation above
regarding two windows. How does it learn the following knowledge: the
notepad icon opens a new notepad window and that two windows can exist...
not just one window that changes. the bar with the notepad icon represents
an instance. the bar at the bottom with numbers on it represents multiple
instances of the same window and if you click on it it shows you
representative bars for each window.

 How do we add and combine this complex behavior learning, explanation,
recognition and understanding into our system?

 Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely.

Example: Clicking the notepad icon causes a notepad window to appear with no
content. If we previously had a notepad window open, it may seem like
clicking the icon just clears the content by the instance is the same. But,
this cannot be the case because if we click the icon when no notepad window
previously existed, it will be blank. based on these two experiences we can
construct an explanatory hypothesis such that: clicking the icon simply
opens a blank window. We also get evidence for this conclusion when we see
the two windows side by side. If we see the old window with the content
still intact we will realize that clicking the icon did not seem to have
cleared it.
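
A minimal sketch of how that expectation-gating loop might look in code. This is
my own reading of the notes above, not an actual implementation; the names and
interfaces (hypothesis.predict, default_analysis, revise) are made up:

def process_frame(observations, hypothesis, default_analysis, revise):
    # While the current hypothesis predicts what we see, skip the expensive
    # default matching entirely and keep the hypothesis.
    expected = set(hypothesis.predict())
    unexpected = [o for o in observations if o not in expected]
    if not unexpected:
        return hypothesis
    # An unexpected observation is the *cue* to search for a better explanation.
    candidates = default_analysis(observations)
    return revise(hypothesis, candidates, unexpected)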

Dave


On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer  wrote:

> On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner 
> wrote:
>
>>  Jim :This illustrates one of the things wrong with the
>> dreary instantiations of the prevailing mind set of a group.  It is only a
>> matter of time until you discover (through experiment) how absurd it is to
>> celebrate the triumph of an overly simplistic solution to a problem that is,
>> by its very potential, full of possibilities]
>>
>> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
>> - narrow AI.  Looking for the one right prediction/ explanation is narrow
>> AI. Being able to generate more and more possible explanations, wh. could
>> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
>> is creative, polyform thinking. Or, if you prefer, it's convergent vs
>> divergent thinking, the difference between wh. still seems to escape Dave &
>> Ben & most AGI-ers.
>>
>
> Well, I agree with what (I think) Mike was trying to get at, except that I
> understood that Ben, Hutter and especially David were not only talking about
> prediction as a specification of a single prediction when many possible
> predictions (ie expectations) were appropriate for consideration.
>
> For some reason none of you seem to ever talk about methods that could be
> used to react to a situation with the flexibility to integrate the
> recognition of different combinations of familiar events and to classify
> unusual events so they could be interpreted as more familiar *kinds* of
> events or as novel forms of events which might then be integrated.  For
> me, that seems to be one of the unsolved problems

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner wrote:

>  Jim :This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mind set of a group.  It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an overly simplistic solution to a problem that is,
> by its very potential, full of possibilities]
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>

Well, I agree with what (I think) Mike was trying to get at, except that I
understood that Ben, Hutter and especially David were not only talking about
prediction as a specification of a single prediction when many possible
predictions (ie expectations) were appropriate for consideration.

For some reason none of you seem to ever talk about methods that could be
used to react to a situation with the flexibility to integrate the
recognition of different combinations of familiar events and to classify
unusual events so they could be interpreted as more familiar *kinds* of
events or as novel forms of events which might then be integrated.  For
me, that seems to be one of the unsolved problems.  Being able to say that
"the squares move to the right in unison" is a better description than saying
"the squares are dancing the Irish jig" is not really cutting edge.

As far as David's comment that he was only dealing with the "core issues," I
am sorry but you were not dealing with the core issues of contemporary AGI
programming.  You were dealing with a primitive problem that has been
considered for many years, but it is not a core research issue.  Yes we have
to work with simple examples to explain what we are talking about, but there
is a difference between an abstract problem that may be central to
your recent work and a core research issue that hasn't really been solved.

The entire problem of dealing with complicated situations is that these
narrow AI methods haven't really worked.  That is the core issue.

Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Jim,

I am using oversimplification to identify the core problems involved. As
you can see, the oversimplification is revealing how to resolve certain
types of dilemmas and uncertainty. That is exactly why I did this. If you
can't solve a simple environment, you certainly can't solve the full
environment, which contains at least several of the same problems all
intricately tied together. So, if I can show how to solve environments which
isolate these dilemmas, I can incrementally add complexity and have a very
strong and full understanding of how the added complexity changes the
problem. Your criticisms and Mike's criticisms are unjustified. This is a
means to an end, not an end in and of itself. :)

Dave

On Sun, Jun 27, 2010 at 12:12 PM, Jim Bromer  wrote:

> The fact that you are using experiment and the fact that you recognized
> that AGI needs to provide both explanation and expectations (differentiated
> from the false precision of 'prediction') shows that you have a grasp of
> some of the philosophical problems, but the fact that you would rely on a
> primary principle of over simplification (as differentiated from a method
> that does not start with a rule that eliminates the very potential of
> possibilities as a *general* rule of intelligence) shows that you
> don't fully understand the problem.
> Jim Bromer
>
>
>
> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>
>> A method for comparing hypotheses in explanatory-based reasoning: *
>>
>> We prefer the hypothesis or explanation that ***expects* more
>> observations. If both explanations expect the same observations, then the
>> simpler of the two is preferred (because the unnecessary terms of the more
>> complicated explanation do not add to the predictive power).*
>>
>> *Why are expected events so important?* They are a measure of 1)
>> explanatory power and 2) predictive power. The more predictive and the more
>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>> to a competing hypothesis.
>>
>> Here are two case studies I've been analyzing from sensory perception of
>> simplified visual input:
>> The goal of the case studies is to answer the following: How do you
>> generate the most likely motion hypothesis in a way that is general and
>> applicable to AGI?
>> *Case Study 1)* Here is a link to an example: animated gif of two black
>> squares move from left to 
>> right.
>> *Description: *Two black squares are moving in unison from left to right
>> across a white screen. In each frame the black squares shift to the right so
>> that square 1 steals square 2's original position and square two moves an
>> equal distance to the right.
>> *Case Study 2) *Here is a link to an example: the interrupted 
>> square.
>> *Description:* A single square is moving from left to right. Suddenly in
>> the third frame, a single black square is added in the middle of the
>> expected path of the original black square. This second square just stays
>> there. So, what happened? Did the square moving from left to right keep
>> moving? Or did it stop and then another square suddenly appeared and moved
>> from left to right?
>>
>> *Here is a simplified version of how we solve case study 1:
>> *The important hypotheses to consider are:
>> 1) the square from frame 1 of the video that has a very close position to
>> the square from frame 2 should be matched (we hypothesize that they are the
>> same square and that any difference in position is motion).  So, what
>> happens is that in each two frames of the video, we only match one square.
>> The other square goes unmatched.
>> 2) We do the same thing as in hypothesis #1, but this time we also match
>> the remaining squares and hypothesize motion as follows: the first square
>> jumps over the second square from left to right. We hypothesize that this
>> happens over and over in each frame of the video. Square 2 stops and square
>> 1 jumps over it over and over again.
>> 3) We hypothesize that both squares move to the right in unison. This is
>> the correct hypothesis.
>>
>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>
>> Well, first of all, #3 is correct because it has the most explanatory
>> power of the three and is the simplest of the three. Simpler is better
>> because, with the given evidence and information, there is no reason to
>> desire a more complicated hypothesis such as #2.
>>
>> So, the answer to the question is because explanation #3 expects the most
>> observations, such as:
>> 1) the consistent relative positions of the squares in each frame are
>> expected.
>> 2) It also expects their new positions in each frame based on velocity
>> calculations.
>> 3) It expects both squares to occur in each frame.
>>
>> Explanation 1 ignores 1 square from each frame of the video, because it
>> can't match it. Hypothesis #1 doesn't have a reason for why

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
The fact that you are using experiment and the fact that you recognized that
AGI needs to provide both explanation and expectations (differentiated from
the false precision of 'prediction') shows that you have a grasp of some of
the philosophical problems, but the fact that you would rely on a primary
principle of over simplification (as differentiated from a method that does
not start with a rule that eliminates the very potential of possibilities as
a *general* rule of intelligence) shows that you don't fully understand the
problem.
Jim Bromer



On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning: *
>
> We prefer the hypothesis or explanation that ***expects* more
> observations. If both explanations expect the same observations, then the
> simpler of the two is preferred (because the unnecessary terms of the more
> complicated explanation do not add to the predictive power).*
>
> *Why are expected events so important?* They are a measure of 1)
> explanatory power and 2) predictive power. The more predictive and the more
> explanatory a hypothesis is, the more likely the hypothesis is when compared
> to a competing hypothesis.
>
> Here are two case studies I've been analyzing from sensory perception of
> simplified visual input:
> The goal of the case studies is to answer the following: How do you
> generate the most likely motion hypothesis in a way that is general and
> applicable to AGI?
> *Case Study 1)* Here is a link to an example: animated gif of two black
> squares move from left to right.
> *Description: *Two black squares are moving in unison from left to right
> across a white screen. In each frame the black squares shift to the right so
> that square 1 steals square 2's original position and square two moves an
> equal distance to the right.
> *Case Study 2) *Here is a link to an example: the interrupted 
> square.
> *Description:* A single square is moving from left to right. Suddenly in
> the third frame, a single black square is added in the middle of the
> expected path of the original black square. This second square just stays
> there. So, what happened? Did the square moving from left to right keep
> moving? Or did it stop and then another square suddenly appeared and moved
> from left to right?
>
> *Here is a simplified version of how we solve case study 1:
> *The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
>
> *Here is a simplified version of how we solve case study 2:*
> We expect the original square to move at a similar velocity from left to
> right because we hypothesized that it did move from left to right and we
> calculated its velocity. If this expectation is confirmed, then it is more
> likely than saying that the square suddenly stopped and another started
> moving. Such a change would be unexpected and such a conclusion would be
> unjustifiable.
>
> I also be

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
lol.

Mike,

What I was trying to express by the word *expect* is NOT to predict [some exact
outcome]. Expect means that the algorithm has a way of comparing
observations to what the algorithm considers to be consistent with an
"explanation". This is something I struggled to solve for a long time
regarding explanatory reasoning.
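
In code, the difference is roughly a point prediction versus a consistency
test against whatever the explanation allows (a toy illustration only):

# Point prediction: one exact outcome, brittle.
def predict_exact(prev_x, velocity):
    return prev_x + velocity

# Expectation: is the observation consistent with the explanation,
# within whatever tolerance the explanation allows?
def is_expected(observed_x, prev_x, velocity, tolerance=2.0):
    return abs(observed_x - (prev_x + velocity)) <= tolerance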

Dave

On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner wrote:

>  Jim :This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mind set of a group.  It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an overly simplistic solution to a problem that is,
> by its very potential, full of possibilities]
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>
> Consider a real world application - a footballer, Maradona, is dribbling
> with the ball - you don't/can't predict where he's going next, you have to
> be ready for various directions, including the possibility that he is going
> to do something surprising and new   - even if you have to commit yourself
> to anticipating a particular direction. Ditto if you're trying to predict
> the path of an animal prey.
>
> Dealing only with the "predictable" as most do, is perhaps what Jim is
> getting at - predictable. And wrong for AGI. It's your capacity to deal
> with the open, unpredictable, real world that signifies you are an AGI -
> not the same old, closed predictable, artificial world. When will you have
> the courage to face this?
>
>





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>

You are misrepresenting my approach, which is not based on looking for "the
one right prediction/explanation"

OpenCog relies heavily on evolutionary learning and probabilistic inference,
both of which naturally generate a massive number of alternative possible
explanations in nearly every instance...

-- Ben G





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Jim :This illustrates one of the things wrong with the dreary instantiations of 
the prevailing mind set of a group.  It is only a matter of time until you 
discover (through experiment) how absurd it is to celebrate the triumph of an 
overly simplistic solution to a problem that is, by its very potential, full of 
possibilities]

To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject - 
narrow AI.  Looking for the one right prediction/ explanation is narrow AI. 
Being able to generate more and more possible explanations, wh. could all be 
valid,  is AGI.  The former is rational, uniform thinking. The latter is 
creative, polyform thinking. Or, if you prefer, it's convergent vs divergent 
thinking, the difference between wh. still seems to escape Dave & Ben & most 
AGI-ers.

Consider a real world application - a footballer, Maradona, is dribbling with 
the ball - you don't/can't predict where he's going next, you have to be ready 
for various directions, including the possibility that he is going to do 
something surprising and new   - even if you have to commit yourself to 
anticipating a particular direction. Ditto if you're trying to predict the path 
of an animal prey.

Dealing only with the "predictable" as most do, is perhaps what Jim is getting 
at - predictable. And wrong for AGI. It's your capacity to deal with the open, 
unpredictable, real world that signifies you are an AGI - not the same old, 
closed predictable, artificial world. When will you have the courage to face 
this?

Sent: Sunday, June 27, 2010 4:21 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

  A method for comparing hypotheses in explanatory-based reasoning:Here is a 
simplified version of how we solve case study 1:
  The important hypotheses to consider are: 
  1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
  2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
  3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

  So, why should we prefer the correct hypothesis, #3 over the other two?

  Well, first of all, #3 is correct because it has the most explanatory power 
of the three and is the simplest of the three. Simpler is better because, with 
the given evidence and information, there is no reason to desire a more 
complicated hypothesis such as #2. 

  So, the answer to the question is because explanation #3 expects the most 
observations, such as: 
  1) the consistent relative positions of the squares in each frame are 
expected. 
  2) It also expects their new positions in each frame based on velocity 
calculations. 
  3) It expects both squares to occur in each frame. 

  Explanation 1 ignores 1 square from each frame of the video, because it can't 
match it. Hypothesis #1 doesn't have a reason for why a new square appears 
in each frame and why one disappears. It doesn't expect these observations. In 
fact, explanation 1 doesn't expect anything that happens because something new 
happens in each frame, which doesn't give it a chance to confirm its hypotheses 
in subsequent frames.

  The power of this method is immediately clear. It is general and it solves 
the problem very cleanly.
  Dave 


Nonsense.  This illustrates one of the things wrong with the dreary 
instantiations of the prevailing mind set of a group.  It is only a matter of 
time until you discover (through experiment) how absurd it is to celebrate the 
triumph of an overly simplistic solution to a problem that is, by its very 
potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in 
'unison' I doubt if the program calculated or represented those objects in 
unison.  I also doubt that their positioning was literally based on moving 
'right' since their movement were presumably calculated with incremental 
mathematics that were associated with screen positions.  And, looking for a 
technicality that represents the failure of the over reliance of the efficacy 
of a simplistic over generalization, I only have to point out that they did not 
only move to the right, so your description was either wrong or only partially 
representative of the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less useful 
hypotheses, 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning:*Here is
> a simplified version of how we solve case study 1:
> *The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
> Dave
>

Nonsense.  This illustrates one of the things wrong with the
dreary instantiations of the prevailing mind set of a group.  It is only a
matter of time until you discover (through experiment) how absurd it is to
celebrate the triumph of an overly simplistic solution to a problem that is,
by its very potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in
'unison' I doubt if the program calculated or represented those objects in
unison.  I also doubt that their positioning was literally based on moving
'right' since their movement were presumably calculated with incremental
mathematics that were associated with screen positions.  And, looking for a
technicality that represents the failure of the over reliance of the
efficacy of a simplistic over generalization, I only have to point out that
they did not only move to the right, so your description was either wrong or
only partially representative of the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less
useful hypotheses, and the underlying causes for apparent relations are kept
irrelevant, over simplification is a reasonable (and valuable) method. But
if you are seriously interested in scalability, then this kind of conclusion
is just dull.

I have often made the criticism that the theories put forward in these
groups are overly simplistic.  Although I understand that this was just a
simple example, here is the key to determining whether a method is overly
simplistic (or as in AIXI) based on an overly simplistic definition
of insight.  Would this method work in discovering the possibilities of a
potentially more complex IO data environment like those we would expect to
find using AGI?
Jim Bromer.





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
For visual perception, there are many reasons to think that a hierarchical
architecture can be effective... this is one of the things you may find in
dealing with real visual data but not with these toy examples...

E.g. in a spatiotemporal predictive hierarchy, the idea would be to create a
predictive module (using an Occam heuristic, as you suggest) corresponding
to each of a host of observed spatiotemporal regions, with modules
corresponding to larger regions occurring higher up in the hierarchy...
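
A very rough sketch of that kind of hierarchy (purely illustrative; not
OpenCog code or any real vision system):

class RegionPredictor:
    """Toy predictive module for one spatiotemporal region."""
    def __init__(self):
        self.last = None
    def update(self, value):
        error = 0.0 if self.last is None else abs(value - self.last)
        self.last = value
        return error                      # prediction error, passed upward

def step(frame_regions, lower, upper):
    # Lower level: one module per small region of the frame.
    errors = [m.update(v) for m, v in zip(lower, frame_regions)]
    # Higher level: one module over the pooled activity of the level below.
    return upper.update(sum(errors))

lower = [RegionPredictor() for _ in range(4)]
upper = RegionPredictor()
# step([0.1, 0.0, 0.2, 0.1], lower, upper)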

ben

On Sun, Jun 27, 2010 at 10:09 AM, David Jones  wrote:

> Thanks Ben,
>
> Right, explanatory reasoning is not new at all (also called abduction and
> inference to the best explanation). But what seems to be elusive is a
> precise and algorithmic method for implementing explanatory reasoning and
> solving real problems, such as sensory perception. This is what I'm hoping
> to solve. The theory has been there a while... How to effectively implement
> it in a general way though, as far as I can tell, has never been solved.
>
> Dave
>
> On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel  wrote:
>
>>
>> Hi,
>>
>> I certainly agree with this method, but of course it's not original at
>> all, it's pretty much the basis of algorithmic learning theory, right?
>>
>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>> most compact program that, based on historical data, would have yielded
>> maximum reward
>>
>> So yeah, this is the right idea... and your simple examples of it are
>> nice...
>>
>> Eric Baum's whole book "What Is Thought?" is sort of an explanation of this
>> idea in a human biology and psychology and AI context ;)
>>
>> ben
>>
>> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>>
>>> A method for comparing hypotheses in explanatory-based reasoning: *
>>>
>>> We prefer the hypothesis or explanation that ***expects* more
>>> observations. If both explanations expect the same observations, then the
>>> simpler of the two is preferred (because the unnecessary terms of the more
>>> complicated explanation do not add to the predictive power).*
>>>
>>> *Why are expected events so important?* They are a measure of 1)
>>> explanatory power and 2) predictive power. The more predictive and the more
>>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>>> to a competing hypothesis.
>>>
>>> Here are two case studies I've been analyzing from sensory perception of
>>> simplified visual input:
>>> The goal of the case studies is to answer the following: How do you
>>> generate the most likely motion hypothesis in a way that is general and
>>> applicable to AGI?
>>> *Case Study 1)* Here is a link to an example: animated gif of two black
>>> squares move from left to 
>>> right.
>>> *Description: *Two black squares are moving in unison from left to right
>>> across a white screen. In each frame the black squares shift to the right so
>>> that square 1 steals square 2's original position and square two moves an
>>> equal distance to the right.
>>> *Case Study 2) *Here is a link to an example: the interrupted 
>>> square.
>>> *Description:* A single square is moving from left to right. Suddenly in
>>> the third frame, a single black square is added in the middle of the
>>> expected path of the original black square. This second square just stays
>>> there. So, what happened? Did the square moving from left to right keep
>>> moving? Or did it stop and then another square suddenly appeared and moved
>>> from left to right?
>>>
>>> *Here is a simplified version of how we solve case study 1:
>>> *The important hypotheses to consider are:
>>> 1) the square from frame 1 of the video that has a very close position to
>>> the square from frame 2 should be matched (we hypothesize that they are the
>>> same square and that any difference in position is motion).  So, what
>>> happens is that in each two frames of the video, we only match one square.
>>> The other square goes unmatched.
>>> 2) We do the same thing as in hypothesis #1, but this time we also match
>>> the remaining squares and hypothesize motion as follows: the first square
>>> jumps over the second square from left to right. We hypothesize that this
>>> happens over and over in each frame of the video. Square 2 stops and square
>>> 1 jumps over it over and over again.
>>> 3) We hypothesize that both squares move to the right in unison. This is
>>> the correct hypothesis.
>>>
>>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>>
>>> Well, first of all, #3 is correct because it has the most explanatory
>>> power of the three and is the simplest of the three. Simpler is better
>>> because, with the given evidence and information, there is no reason to
>>> desire a more complicated hypothesis such as #2.
>>>
>>> So, the answer to the question is because explanation #3 expects the most
>>> observations, 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Thanks Ben,

Right, explanatory reasoning is not new at all (also called abduction and
inference to the best explanation). But what seems to be elusive is a
precise and algorithmic method for implementing explanatory reasoning and
solving real problems, such as sensory perception. This is what I'm hoping
to solve. The theory has been there a while... How to effectively implement
it in a general way though, as far as I can tell, has never been solved.

Dave

On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel  wrote:

>
> Hi,
>
> I certainly agree with this method, but of course it's not original at all,
> it's pretty much the basis of algorithmic learning theory, right?
>
> Hutter's AIXI for instance works [very roughly speaking] by choosing the
> most compact program that, based on historical data, would have yielded
> maximum reward
>
> So yeah, this is the right idea... and your simple examples of it are
> nice...
>
> Eric Baum's whole book "What Is Thought?" is sort of an explanation of this
> idea in a human biology and psychology and AI context ;)
>
> ben
>
> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>
>> A method for comparing hypotheses in explanatory-based reasoning: *
>>
>> We prefer the hypothesis or explanation that ***expects* more
>> observations. If both explanations expect the same observations, then the
>> simpler of the two is preferred (because the unnecessary terms of the more
>> complicated explanation do not add to the predictive power).*
>>
>> *Why are expected events so important?* They are a measure of 1)
>> explanatory power and 2) predictive power. The more predictive and the more
>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>> to a competing hypothesis.
>>
>> Here are two case studies I've been analyzing from sensory perception of
>> simplified visual input:
>> The goal of the case studies is to answer the following: How do you
>> generate the most likely motion hypothesis in a way that is general and
>> applicable to AGI?
>> *Case Study 1)* Here is a link to an example: animated gif of two black
>> squares move from left to 
>> right.
>> *Description: *Two black squares are moving in unison from left to right
>> across a white screen. In each frame the black squares shift to the right so
>> that square 1 steals square 2's original position and square two moves an
>> equal distance to the right.
>> *Case Study 2) *Here is a link to an example: the interrupted 
>> square.
>> *Description:* A single square is moving from left to right. Suddenly in
>> the third frame, a single black square is added in the middle of the
>> expected path of the original black square. This second square just stays
>> there. So, what happened? Did the square moving from left to right keep
>> moving? Or did it stop and then another square suddenly appeared and moved
>> from left to right?
>>
>> *Here is a simplified version of how we solve case study 1:
>> *The important hypotheses to consider are:
>> 1) the square from frame 1 of the video that has a very close position to
>> the square from frame 2 should be matched (we hypothesize that they are the
>> same square and that any difference in position is motion).  So, what
>> happens is that in each two frames of the video, we only match one square.
>> The other square goes unmatched.
>> 2) We do the same thing as in hypothesis #1, but this time we also match
>> the remaining squares and hypothesize motion as follows: the first square
>> jumps over the second square from left to right. We hypothesize that this
>> happens over and over in each frame of the video. Square 2 stops and square
>> 1 jumps over it over and over again.
>> 3) We hypothesize that both squares move to the right in unison. This is
>> the correct hypothesis.
>>
>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>
>> Well, first of all, #3 is correct because it has the most explanatory
>> power of the three and is the simplest of the three. Simpler is better
>> because, with the given evidence and information, there is no reason to
>> desire a more complicated hypothesis such as #2.
>>
>> So, the answer to the question is because explanation #3 expects the most
>> observations, such as:
>> 1) the consistent relative positions of the squares in each frame are
>> expected.
>> 2) It also expects their new positions in each frame based on velocity
>> calculations.
>> 3) It expects both squares to occur in each frame.
>>
>> Explanation 1 ignores 1 square from each frame of the video, because it
>> can't match it. Hypothesis #1 doesn't have a reason for why a new square
>> appears in each frame and why one disappears. It doesn't expect these
>> observations. In fact, explanation 1 doesn't expect anything that happens
>> because something new happens in each frame, which doesn't give it a chance
>> to confirm its hypotheses in subsequen

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Hi,

I certainly agree with this method, but of course it's not original at all,
it's pretty much the basis of algorithmic learning theory, right?

Hutter's AIXI for instance works [very roughly speaking] by choosing the
most compact program that, based on historical data, would have yielded
maximum reward
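
As a toy caricature of that idea (nothing like real AIXI; predictive fit on a
made-up history stands in for reward, and "program length" is just a number):

history = [1, 2, 3, 4, 5]

candidates = {
    "add_one":  (lambda xs: xs[-1] + 1, 1.0),   # (program, rough description length)
    "double":   (lambda xs: xs[-1] * 2, 1.0),
    "constant": (lambda xs: 1,          0.5),
}

def fit(program):
    # How many past steps would this program have gotten right?
    return sum(program(history[:i]) == history[i] for i in range(1, len(history)))

best = max(candidates.items(),
           key=lambda kv: (fit(kv[1][0]), -kv[1][1]))   # best fit, then shortest
print(best[0])   # -> "add_one"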

So yeah, this is the right idea... and your simple examples of it are
nice...

Eric Baum's whole book "What Is Thought?" is sort of an explanation of this
idea in a human biology and psychology and AI context ;)

ben

On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning: *
>
> We prefer the hypothesis or explanation that ***expects* more
> observations. If both explanations expect the same observations, then the
> simpler of the two is preferred (because the unnecessary terms of the more
> complicated explanation do not add to the predictive power).*
>
> *Why are expected events so important?* They are a measure of 1)
> explanatory power and 2) predictive power. The more predictive and the more
> explanatory a hypothesis is, the more likely the hypothesis is when compared
> to a competing hypothesis.
>
> Here are two case studies I've been analyzing from sensory perception of
> simplified visual input:
> The goal of the case studies is to answer the following: How do you
> generate the most likely motion hypothesis in a way that is general and
> applicable to AGI?
> *Case Study 1)* Here is a link to an example: animated gif of two black
> squares move from left to right.
> *Description: *Two black squares are moving in unison from left to right
> across a white screen. In each frame the black squares shift to the right so
> that square 1 steals square 2's original position and square two moves an
> equal distance to the right.
> *Case Study 2) *Here is a link to an example: the interrupted 
> square.
> *Description:* A single square is moving from left to right. Suddenly in
> the third frame, a single black square is added in the middle of the
> expected path of the original black square. This second square just stays
> there. So, what happened? Did the square moving from left to right keep
> moving? Or did it stop and then another square suddenly appeared and moved
> from left to right?
>
> *Here is a simplified version of how we solve case study 1:
> *The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
>
> *Here is a simplified version of how we solve case study 2:*
> We expect the original square to move at a similar velocity from left to
> right because we hypothesized that it did move from left to right and we
> calculated its velocity. If this expectation is confirmed, then it is more
> likely than saying that the square suddenly stopped and another started
> moving. Such a change would be unexpected and such a conclusion would be
> unjustifiable.
>
> I also believe that explanations which

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Word of advice. You're creating your own artificial world here with its own 
artificial rules.

AGI is about real vision of real objects in the real world. The two do not 
relate - or compute. 

It's a pity - it's good that you keep testing yourself; it's bad that they 
aren't realistic tests. Subject yourself to reality - it'll feel better every 
which way.


From: David Jones 
Sent: Sunday, June 27, 2010 6:31 AM
To: agi 
Subject: [agi] Huge Progress on the Core of AGI


A method for comparing hypotheses in explanatory-based reasoning: 

We prefer the hypothesis or explanation that *expects* more observations. If 
both explanations expect the same observations, then the simpler of the two is 
preferred (because the unnecessary terms of the more complicated explanation do 
not add to the predictive power). 

Why are expected events so important? They are a measure of 1) explanatory 
power and 2) predictive power. The more predictive and the more explanatory a 
hypothesis is, the more likely the hypothesis is when compared to a competing 
hypothesis.

Here are two case studies I've been analyzing from sensory perception of 
simplified visual input:
The goal of the case studies is to answer the following: How do you generate 
the most likely motion hypothesis in a way that is general and applicable to 
AGI?
Case Study 1) Here is a link to an example: animated gif of two black squares 
move from left to right. Description: Two black squares are moving in unison 
from left to right across a white screen. In each frame the black squares shift 
to the right so that square 1 steals square 2's original position and square 
two moves an equal distance to the right.
Case Study 2) Here is a link to an example: the interrupted square. 
Description: A single square is moving from left to right. Suddenly in the 
third frame, a single black square is added in the middle of the expected path 
of the original black square. This second square just stays there. So, what 
happened? Did the square moving from left to right keep moving? Or did it stop 
and then another square suddenly appeared and moved from left to right?

Here is a simplified version of how we solve case study 1:
The important hypotheses to consider are: 
1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

So, why should we prefer the correct hypothesis, #3 over the other two?

Well, first of all, #3 is correct because it has the most explanatory power of 
the three and is the simplest of the three. Simpler is better because, with the 
given evidence and information, there is no reason to desire a more complicated 
hypothesis such as #2. 

So, the answer to the question is because explanation #3 expects the most 
observations, such as: 
1) the consistent relative positions of the squares in each frame are expected. 
2) It also expects their new positions in each frame based on velocity 
calculations. 
3) It expects both squares to occur in each frame. 

Explanation 1 ignores 1 square from each frame of the video, because it can't 
match it. Hypothesis #1 doesn't have a reason for why a new square appears 
in each frame and why one disappears. It doesn't expect these observations. In 
fact, explanation 1 doesn't expect anything that happens because something new 
happens in each frame, which doesn't give it a chance to confirm its hypotheses 
in subsequent frames.

The power of this method is immediately clear. It is general and it solves the 
problem very cleanly.
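
One way to write the comparison rule down concretely, with toy counts standing
in for real observation matching (none of this is from an actual implementation):

# Score = number of current observations the hypothesis expects;
# ties are broken by preferring the simpler hypothesis.
hypotheses = [
    {"name": "match one square, ignore the other",     "expected_obs": 1, "complexity": 1},
    {"name": "square 1 repeatedly jumps over square 2", "expected_obs": 2, "complexity": 3},
    {"name": "both squares move right in unison",       "expected_obs": 3, "complexity": 2},
]

best = max(hypotheses, key=lambda h: (h["expected_obs"], -h["complexity"]))
print(best["name"])   # -> "both squares move right in unison"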

Here is a simplified version of how we solve case study 2:
We expect the original square to move at a similar velocity from left to right 
because we hypothesized that it did move from left to right and we calculated 
its velocity. If this expectation is confirmed, then it is more likely than 
saying that the square suddenly stopped and another started moving. Such a 
change would be unexpected and such a conclusion would be unjustifiable. 
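
A similarly hedged sketch of that check, with invented coordinates:

def expected_position(prev_x, velocity):
    return prev_x + velocity

prev_x, velocity = 20, 10                    # estimated from frames 1 and 2
observed_xs_frame3 = [30, 25]                # x-positions seen in frame 3 (made up)

keeps_moving = any(abs(x - expected_position(prev_x, velocity)) <= 1
                   for x in observed_xs_frame3)
# A square does appear where the velocity predicts (x ~= 30), so the
# "same square kept moving" hypothesis has its expectation confirmed and
# beats "it stopped and a brand new square appeared".
print(keeps_moving)   # -> True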

I also believe that explanations which generate fewer incorrect expectations 
should be preferred over those that generate more incorrect expectations.

The idea I came up with earlier this month regarding high frame rates to reduce 
uncertainty is still applicable. It is important that all generated hypotheses 
have as low uncertainty as possible given o