Re: [agi] Huge Progress on the Core of AGI

2010-07-28 Thread David Jones
LOL. I didn't even realize that this was not his main website until today. I
must say that it seems very well put. Sorry Arthur :S

On Sun, Jul 25, 2010 at 12:44 PM, Chris Petersen  wrote:

> Don't fret; your main site's got good uptime.
>
> http://www.nothingisreal.com/mentifex_faq.html
>
> -Chris



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread Chris Petersen
On Mon, Jul 26, 2010 at 11:23 AM, Jim Bromer  wrote:

> Oh yeah.  I forgot about some of Arthur's claims about Mentifex which
> seemed a bit exaggerated.  Oh well.
> Jim Bromer
>


World War II was a bit of a tussle, too.

-Chris





Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread Jim Bromer
Oh yeah.  I forgot about some of Arthur's claims about Mentifex which
seemed a bit exaggerated.  Oh well.
Jim Bromer

On Mon, Jul 26, 2010 at 11:13 AM, Jim Bromer  wrote:

> Arthur,
> The section of the "Arthur T. Murray/Mentifex FAQ", "2.3 What do
> researchers in academia think of Murray’s work?", really puts you into a
> whole other category in my view.  The rest of us can only dream of such
> dismissals from "experts" who haven't achieved anything more than the rest
> of us.
> Congratulations on being honest; you have already achieved more than the
> experts who aren't.
> Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread David Jones
Sure. Thanks Arthur.






Re: [agi] Huge Progress on the Core of AGI

2010-07-26 Thread Jim Bromer
Arthur,
The section of the "Arthur T. Murray/Mentifex FAQ", "2.3 What do
researchers in academia think of Murray’s work?", really puts you into a
whole other category in my view.  The rest of us can only dream of such
dismissals from "experts" who haven't achieved anything more than the rest
of us.
Congratulations on being honest; you have already achieved more than the
experts who aren't.
Jim Bromer







Re: [agi] Huge Progress on the Core of AGI

2010-07-25 Thread Chris Petersen
Don't fret; your main site's got good uptime.

http://www.nothingisreal.com/mentifex_faq.html

-Chris








Re: [agi] Huge Progress on the Core of AGI

2010-07-25 Thread A. T. Murray
David Jones wrote:
>
>Arthur,
>
>Thanks. I appreciate that. I would be happy to aggregate some of those
>things. I am sometimes not good at maintaining the website because I get
>bored of maintaining or updating it very quickly :)
>
>Dave
>
>On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray  wrote:
>
>> The Web site of David Jones at
>>
>> http://practicalai.org
>>
>> is quite impressive to me
>> as a kindred spirit building AGI.
>> (Just today I have been coding MindForth AGI :-)
>>
>> For his "Practical AI Challenge" or similar
>> ventures, I would hope that David Jones is
>> open to the idea of aggregating or archiving
>> "representative AI samples" from such sources as
>> - TexAI;
>> - OpenCog;
>> - Mentifex AI;
>> - etc.;
>> so that visitors to PracticalAI may gain an
>> overview of what is happening in our field.
>>
>> Arthur
>> --
>> http://www.scn.org/~mentifex/AiMind.html
>> http://www.scn.org/~mentifex/mindforth.txt

Just today, a few minutes ago, I updated the
mindforth.txt AI source code listed above.

In the PracticalAI aggregates, you might consider
listing Mentifex AI with copies of the above two
AI source code pages, and with links to the
original scn.org URLs, where visitors to
PracticalAI could look for any more recent
updates that you had not gotten around to
transferring from scn.org to PracticalAI.
In that way, these releases of Mentifex
free AI source code would have a more robust
Web presence (SCN often goes down), and I
could link to PracticalAI for the aggregates
and other features of PracticalAI.

Thanks.

Arthur T. Murray





Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
Arthur,

Thanks. I appreciate that. I would be happy to aggregate some of those
things. I am sometimes not good at maintaining the website because I get
bored of maintaining or updating it very quickly :)

Dave

On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray  wrote:

> The Web site of David Jones at
>
> http://practicalai.org
>
> is quite impressive to me
> as a kindred spirit building AGI.
> (Just today I have been coding MindForth AGI :-)
>
> For his "Practical AI Challenge" or similar
> ventures, I would hope that David Jones is
> open to the idea of aggregating or archiving
> "representative AI samples" from such sources as
> - TexAI;
> - OpenCog;
> - Mentifex AI;
> - etc.;
> so that visitors to PracticalAI may gain an
> overview of what is happening in our field.
>
> Arthur
> --
> http://www.scn.org/~mentifex/AiMind.html
> http://www.scn.org/~mentifex/mindforth.txt





Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread A. T. Murray
The Web site of David Jones at

http://practicalai.org

is quite impressive to me 
as a kindred spirit building AGI.
(Just today I have been coding MindForth AGI :-)

For his "Practical AI Challenge" or similar 
ventures, I would hope that David Jones is
open to the idea of aggregating or archiving
"representative AI samples" from such sources as
- TexAI;
- OpenCog;
- Mentifex AI;
- etc.;
so that visitors to PracticalAI may gain an
overview of what is happening in our field.

Arthur
-- 
http://www.scn.org/~mentifex/AiMind.html
http://www.scn.org/~mentifex/mindforth.txt

>
>lol. thanks Jim :)
>




Re: [agi] Huge Progress on the Core of AGI

2010-07-24 Thread David Jones
lol. thanks Jim :)


On Thu, Jul 22, 2010 at 10:08 PM, Jim Bromer  wrote:

> I have to say that I am proud of David Jones's efforts.  He has really
> matured during these last few months.  I'm kidding, but I really do respect
> the fact that he is actively experimenting.  I want to get back to work on
> my artificial imagination and image analysis programs - if I can ever figure
> out how to get the time.
>
> As I have read David's comments, I realize that we need to really leverage
> all sorts of cruddy data in order to make good AGI.  But since that kind of
> thing doesn't work with sparse knowledge, it seems that the only way it
> could work is with extensive knowledge about a wide range of situations,
> like the knowledge gained from a vast variety of experiences.  This
> conjecture makes some sense because if wide ranging knowledge could be kept
> in superficial stores where it could be accessed quickly and economically,
> it could be used efficiently in (conceptual) model fitting.  However, as
> knowledge becomes too extensive it might become too unwieldy to find what is
> needed for a particular situation.  At this point indexing becomes necessary
> with cross-indexing references to different knowledge based on similarities
> and commonalities of employment.
>
> Here I am saying that relevant knowledge based on previous learning might
> not have to be totally relevant to a situation as long as it could be used
> to run during an ongoing situation.  From this perspective
> then, knowledge from a wide variety of experiences should actually be
> composed of reactions on different conceptual levels.  Then as a piece of
> knowledge is brought into play for an ongoing situation, those levels that
> seem best suited to deal with the situation could be promoted quickly as the
> situation unfolds, acting like an automated indexing system into other
> knowledge relevant to the situation.  So the ongoing process of trying to
> determine what is going on and what actions should be made would
> simultaneously act like an automated index to find better knowledge more
> suited for the situation.
> Jim Bromer





[agi] Huge Progress on the Core of AGI

2010-07-22 Thread Jim Bromer
I have to say that I am proud of David Jones's efforts.  He has really
matured during these last few months.  I'm kidding, but I really do respect
the fact that he is actively experimenting.  I want to get back to work on
my artificial imagination and image analysis programs - if I can ever figure
out how to get the time.

As I have read David's comments, I realize that we need to really leverage
all sorts of cruddy data in order to make good AGI.  But since that kind of
thing doesn't work with sparse knowledge, it seems that the only way it
could work is with extensive knowledge about a wide range of situations,
like the knowledge gained from a vast variety of experiences.  This
conjecture makes some sense because if wide ranging knowledge could be kept
in superficial stores where it could be accessed quickly and economically,
it could be used efficiently in (conceptual) model fitting.  However, as
knowledge becomes too extensive it might become too unwieldy to find what is
needed for a particular situation.  At this point indexing becomes necessary
with cross-indexing references to different knowledge based on similarities
and commonalities of employment.

Here I am saying that relevant knowledge based on previous learning might
not have to be totally relevant to a situation as long as it could be used
to run during an ongoing situation.  From this perspective
then, knowledge from a wide variety of experiences should actually be
composed of reactions on different conceptual levels.  Then as a piece of
knowledge is brought into play for an ongoing situation, those levels that
seem best suited to deal with the situation could be promoted quickly as the
situation unfolds, acting like an automated indexing system into other
knowledge relevant to the situation.  So the ongoing process of trying to
determine what is going on and what actions should be made would
simultaneously act like an automated index to find better knowledge more
suited for the situation.
Jim Bromer
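
A minimal sketch of the cross-indexing idea above, under assumed details (knowledge filed under feature keys, retrieval ranked by cue overlap); this is illustrative only, not code from any system discussed in the thread:

```python
from collections import defaultdict

class KnowledgeStore:
    """Toy cross-indexed store: each piece of knowledge is filed under
    several feature keys, and retrieval ranks items by how many cues of
    the ongoing situation they share, so partially relevant knowledge
    can still be brought into play."""

    def __init__(self):
        self.index = defaultdict(set)  # feature -> ids of items filed under it

    def add(self, item_id, features):
        for feature in features:
            self.index[feature].add(item_id)

    def retrieve(self, cues, top_n=3):
        # Score every indexed item by cue overlap; the best-matching
        # items are "promoted", acting as an automated index into
        # further knowledge relevant to the situation.
        scores = defaultdict(int)
        for cue in cues:
            for item_id in self.index[cue]:
                scores[item_id] += 1
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

store = KnowledgeStore()
store.add("tennis-tactics", {"sport", "opponent", "tactic-change"})
store.add("market-timing", {"finance", "trend", "tactic-change"})
print(store.retrieve({"opponent", "tactic-change"}))  # ['tennis-tactics', 'market-timing']
```

Retrieval here degrades gracefully, in the spirit of the paragraph above: an item sharing only one cue still surfaces, it just ranks below a closer match.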





Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Michael Swan

On Mon, 2010-06-28 at 13:21 +0100, Mike Tintner wrote:
> MS: I'm solving this by using an algorithm + exceptions routines.
> 
> You're saying there are predictable patterns to human and animal behaviour 
> in their activities (like sports and investing) - and in this instance how
> humans change tactics?
> 
> What empirical evidence do you have for this, apart from zero, and over 300 
> years of scientific failure to produce any such laws or patterns of 
> behaviour?
> 
> What evidence in the slightest do you have for your algorithm working?
Still in the testing phase. It's more complicated than just (algorithm +
exceptions); there are multiple levels of accuracy in the data, and the
question of how you combine those levels.
 

> 
> The evidence to the contrary, that human and animal behaviour are not
> predictable, is pretty overwhelming.
> 
> Taking into account the above, how would you mathematically assess the cases 
> for proceeding on the basis that a) living organisms  ARE predictable vs b) 
> living organisms are NOT predictable?  Roughly about the same as a) you WILL 
> win the lottery vs b) you WON'T win? Actually that is almost certainly being 
> extremely kind - you do have a chance of winning the lottery.
> 
> --
> From: "Michael Swan" 
> Sent: Monday, June 28, 2010 4:17 AM
> To: "agi" 
> Subject: Re: [agi] Huge Progress on the Core of AGI
> 
> >
> > On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
> >>
> >> Humans may use sophisticated tactics to play Pong, but that doesn't
> >> mean it's the only way to win
> >>
> >> Humans use subtle and sophisticated methods to play chess also, right?
> >> But Deep Blue still kicks their ass...
> >
> > If the rules of chess changed slightly, without being reprogrammed deep
> > blue sux.
> > And also there is anti deep blue chess. Play chess where you avoid
> > losing and taking pieces for as long as possible to maintain high
> > combination of possible outcomes, and avoid moving pieces in known
> > arrangements.
> >
> > Playing against another human player like this you would more than
> > likely lose.
> >
> >>
> >> The stock market is another situation where narrow-AI algorithms may
> >> already outperform humans ... certainly they outperform all except the
> >> very best humans...
> >>
> >> ... ben g
> >>
> >> On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
> >>  wrote:
> >> Oh well that settles it...
> >>
> >> How do you know then when the opponent has changed his
> >> tactics?
> >>
> >> How do you know when he's switched from a predominantly
> >> baseline game say to a net-rushing game?
> >>
> >> And how do you know when the market has changed from bull to
> >> bear or vice versa, and I can start going short or long? Why
> >> is there any difference between the tennis & market
> >> situations?
> >
> >
> > I'm solving this by using an algorithm + exceptions routines.
> >
> > eg Input 100 numbers - write an algorithm that generalises/compresses
> > the input.
> >
> > ans may be
> > (input_is_always > 0)  // highly general
> >
> > (if fail try exceptions)
> > // exceptions
> > // highly accurate exceptions
> > (input35 == -4)
> > (input75 == -50)
> > ..
> > more generalised exceptions, etc
> >
> > I believe such a system is similar to the way we remember things. eg -
> > We tend to have highly detailed memory for exceptions - we tend to
> > remember things about "white whales" more than "ordinary whales". In
> > fact, there was a news story the other night on a returning white whale
> > in Brisbane, and there are additional laws to stay way from this whale
> > in particular, rather than all whales in general.
> >
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> From: Ben Goertzel
> >> Sent: Monday, June 28, 2010 12:03 AM
> >>
> >> To: agi
> >> Subject: Re: [agi] Huge Progress on the Core of AGI
> >>
> >>
> >>
> >> Even with the variations you mention, I remain highly
> >> confident this is not a difficult problem for narrow-AI
> >> machine learning 
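
Michael Swan's (algorithm + exceptions) example earlier in this message can be written out as a runnable sketch. Details are assumed for illustration: the general rule is supplied rather than learned, the default return value stands in for "any value satisfying the general rule", and the two outliers match the quoted example:

```python
def compress(data, general=lambda x: x > 0):
    """Summarise a sequence as a highly general rule plus an exception
    table that records only the inputs where the rule fails."""
    return {i: v for i, v in enumerate(data) if not general(v)}

def recall(i, exceptions, default=1):
    # Exceptions are consulted first, mirroring the point about
    # "white whales": unusual cases are stored in full detail while
    # ordinary cases fall through to the general rule.
    return exceptions.get(i, default)

# 100 inputs, all positive except two outliers, as in the quoted example.
data = [1] * 100
data[35] = -4
data[75] = -50

table = compress(data)
print(table)              # {35: -4, 75: -50}
print(recall(35, table))  # -4
print(recall(7, table))   # 1
```

The compression pays off exactly when exceptions are rare: two stored entries plus one predicate stand in for a hundred inputs.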

Re: [agi] Huge Progress on the Core of AGI

2010-06-28 Thread Mike Tintner

MS: I'm solving this by using an algorithm + exceptions routines.

You're saying there are predictable patterns to human and animal behaviour 
in their activities (like sports and investing) - and in this instance how
humans change tactics?


What empirical evidence do you have for this - apart from none, plus over 300 
years of scientific failure to produce any such laws or patterns of 
behaviour?


What evidence in the slightest do you have for your algorithm working?

The evidence to the contrary - that human and animal behaviour are not 
predictable - is pretty overwhelming.


Taking into account the above, how would you mathematically assess the cases 
for proceeding on the basis that a) living organisms  ARE predictable vs b) 
living organisms are NOT predictable?  Roughly about the same as a) you WILL 
win the lottery vs b) you WON'T win? Actually that is almost certainly being 
extremely kind - you do have a chance of winning the lottery.



Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Michael Swan

On Sun, 2010-06-27 at 19:38 -0400, Ben Goertzel wrote:
> 
> Humans may use sophisticated tactics to play Pong, but that doesn't
> mean it's the only way to win
> 
> Humans use subtle and sophisticated methods to play chess also, right?
> But Deep Blue still kicks their ass...

If the rules of chess changed even slightly, Deep Blue would suck without
being reprogrammed.
There is also anti-Deep-Blue chess: play chess where you avoid losing and
taking pieces for as long as possible, to maintain a high number of
possible combinations of outcomes, and avoid moving pieces into known
arrangements.

Playing like this against another human player, you would more than
likely lose.

> 
> The stock market is another situation where narrow-AI algorithms may
> already outperform humans ... certainly they outperform all except the
> very best humans...
> 
> ... ben g
> 
> On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner
>  wrote:
> Oh well that settles it...
>  
> How do you know then when the opponent has changed his
> tactics?
>  
> How do you know when he's switched from a predominantly
> baseline game say to a net-rushing game?
>  
> And how do you know when the market has changed from bull to
> bear or vice versa, and I can start going short or long? Why
> is there any difference between the tennis & market
> situations?


I'm solving this by using an algorithm plus exception routines.

e.g. Input 100 numbers - write an algorithm that generalises/compresses
the input.

An answer may be
(input_is_always > 0)  // highly general

(if fail try exceptions)
// exceptions   
// highly accurate exceptions
(input35 == -4) 
(input75 == -50)
..
more generalised exceptions, etc

I believe such a system is similar to the way we remember things. e.g. we
tend to have highly detailed memories for exceptions - we tend to
remember things about "white whales" more than "ordinary whales". In
fact, there was a news story the other night about a returning white whale
in Brisbane, and there are additional laws to stay away from this whale
in particular, rather than all whales in general.
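
The generalisation-plus-exceptions scheme above can be sketched in a few lines of Python (a hypothetical illustration, not the actual code under discussion): one general rule covers most inputs, and a small exception table overrides it for the memorable outliers.

```python
# Sketch of an "algorithm + exceptions" predictor (hypothetical
# illustration of the scheme described above, not anyone's actual code).

def build_model(inputs):
    """Compress 100 numbers into a general rule plus explicit exceptions."""
    general_rule = lambda x: x > 0          # highly general: "input is always > 0"
    # Store only the indices where the general rule fails -- the "white whales".
    exceptions = {i: x for i, x in enumerate(inputs) if not general_rule(x)}
    return general_rule, exceptions

def check(index, value, model):
    """Check a value: the exception table overrides the general rule."""
    general_rule, exceptions = model
    if index in exceptions:                 # highly accurate exception
        return value == exceptions[index]
    return general_rule(value)              # fall back to the general case

data = [1.0] * 100
data[35] = -4.0    # input35 == -4
data[75] = -50.0   # input75 == -50

model = build_model(data)
print(sorted(model[1].items()))   # -> [(35, -4.0), (75, -50.0)]
```

The model stores two exceptions instead of 100 values - the compression being described - and, like memory for white whales, the exceptions are kept in full detail while everything else is covered by a single rule.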

>         From: Ben Goertzel 
> Sent: Monday, June 28, 2010 12:03 AM
> 
> To: agi 
> Subject: Re: [agi] Huge Progress on the Core of AGI
> 
> 
> 
> Even with the variations you mention, I remain highly
> confident this is not a difficult problem for narrow-AI
> machine learning methods
> 
> -- Ben G
> 
> On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner
>  wrote:
> I think you're thinking of a plodding limited-movement
> classic Pong line.
>  
> I'm thinking of a line that can like a human
> player move with varying speed and pauses to more or
> less any part of its court to hit the ball, and then
> hit it with varying speed to more or less any part of
> the opposite court. I think you'll find that bumps up
> the variables if not unknowns massively.
>  
> Plus just about every shot exchange presents you with
> dilemmas of how to place your shot and then move in
> anticipation of your opponent's return .
>  
> Remember the object here is to present a would-be AGI
> with a simple but *unpredictable* object to deal with,
> reflecting the realities of there being a great many
> such objects in the real world - as distinct from
> Dave's all too predictable objects.
>  
> The possible weakness of this pong example is that
> there might at some point cease to be unknowns, as
> there always are in real world situations, incl
> tennis. One could always introduce them if necessary -
> allowing say creative spins on the ball.
>  
> But I doubt that it will be necessary here for the
> purposes of anyone like Dave -  and v. offhand and
> with no doubt extreme license this strikes me as not a
> million miles from a hyper version of the TSP problem,
> where the towns can move around, and you can't be sure
> whether they'll be there when you arrive.  Or is t

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Mike,

you are mixing multiple issues. Just like my analogy of the Rubik's Cube, full
AGI problems involve many problems at the same time. The problem I wrote
this email about was not about how to solve them all at the same time; it
was about how to solve one of those problems. After solving the problem
satisfactorily for all test cases at a given complexity level, I intend to
incrementally add complexity and then continue to solve the problems I run
into.

Your proposed "AGI problem" is a combination of sensory interpretation,
planning, plan/action execution, behavior learning, etc., etc. You would do
well to learn from my approach and break the problem down into its separate
pieces. You would be a fool to implement such a system without a good
understanding of the sub-problems. If you break it down and figure out how
to solve each piece individually, while anticipating the end goal, you will
have a much better understanding and fewer problems as you develop the
system, because of your experience.

You are philosophizing about the right way, but your approach is completely
theoretical and void of any practical considerations. Why don't you try your
method, I'll try mine, and in a few years let's see how far we get. I
suspect you'll have a very nice Pong-playing program that can't do anything
else. I, on the other hand, will have a full-fledged theoretical foundation
and an implementation on increasingly complex environments. At that point,
the proof of concept would be sufficient to gain significant support, while
your approach would likely be another narrow approach that can play a single
game. Why? Because you're trying to juggle too many separate problems that
each deserve individual study. By lumping them all together and not carefully
considering each, you will not solve them well. You will be spread too thin.

Dave

On Sun, Jun 27, 2010 at 7:33 PM, Mike Tintner wrote:

>  Oh well that settles it...
>
> How do you know then when the opponent has changed his tactics?
>
> How do you know when he's switched from a predominantly baseline game say
> to a net-rushing game?
>
> And how do you know when the market has changed from bull to bear or vice
> versa, and I can start going short or long? Why is there any difference
> between the tennis & market situations?
>
>



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Even with the variations you mention, I remain highly confident this is not
a difficult problem for narrow-AI machine learning methods

-- Ben G

On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner wrote:

>  I think you're thinking of a plodding limited-movement classic Pong line.
>
> I'm thinking of a line that can like a human player move with varying
> speed and pauses to more or less any part of its court to hit the ball, and
> then hit it with varying speed to more or less any part of the opposite
> court. I think you'll find that bumps up the variables if not
> unknowns massively.
>
>  Plus just about every shot exchange presents you with dilemmas of how to
> place your shot and then move in anticipation of your opponent's return .
>
> Remember the object here is to present a would-be AGI with a simple but
> *unpredictable* object to deal with, reflecting the realities of there being
> a great many such objects in the real world - as distinct from Dave's all
> too predictable objects.
>
> The possible weakness of this pong example is that there might at some
> point cease to be unknowns, as there always are in real world situations,
> incl tennis. One could always introduce them if necessary - allowing say
> creative spins on the ball.
>
> But I doubt that it will be necessary here for the purposes of anyone like
> Dave -  and v. offhand and with no doubt extreme license this strikes me as
> not a million miles from a hyper version of the TSP problem, where the towns
> can move around, and you can't be sure whether they'll be there when you
> arrive.  Or is there an "obviously true" solution for that problem too?
> [Very convenient these obviously true solutions].
>
>
>  *From:* Jim Bromer 
> *Sent:* Sunday, June 27, 2010 8:53 PM
> *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
> Ben:  I'm quite sure a simple narrow AI system could be constructed to beat
> humans at Pong ;p
> Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
> single reason why.
>
> Although Ben would have to give us an actual example (of a pong program
> that could beat humans at Pong) just to make sure that it is
> not that difficult a task, it seems like such an obviously true statement
> that there is almost no incentive for anyone to try it.  However, there are
> chess programs that can beat the majority of people who play chess without
> outside assistance.
> Jim Bromer
>
> On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner wrote:
>
>>  Well, Ben, I'm glad you're "quite sure" because you haven't given a
>> single reason why. Clearly you should be Number One advisor on every
>> Olympic team, because you've cracked the AGI problem of how to deal with
>> opponents that can move (whether themselves or balls) in multiple,
>> unpredictable directions, that is at the centre of just about every field
>> and court sport.
>>
>> I think if you actually analyse it, you'll find that you can't predict and
>> prepare for  the presumably at least 50 to 100 spots on a table tennis
>> board/ tennis court that your opponent can hit the ball to, let
>> alone for how he will play subsequent 10 to 20 shot rallies   - and you
>> can't devise a deterministic program to play here. These are true,
>> multiple-/poly-solution problems rather than the single solution ones you
>> are familiar with.
>>
>> That's why all of these sports have normally hundreds of different
>> competing philosophies and strategies, - and people continually can and do
>> come up with new approaches and styles of play to the sports overall - there
>> are endless possibilities.
>>
>> I suspect you may not play these sports, because one factor you've
>> obviously ignored (although I stressed it) is not just the complexity
>> but that in sports players can and do change their strategies - and that
>> would have to be a given in our computer game. In real world activities,
>> you're normally *supposed* to act unpredictably at least some of the time.
>> It's a fundamental subgoal.
>>
>> In sport, as in investment, "past performance is not a [sure] guide to
>> future performance" - companies and markets may not continue to behave as
>> they did in the past -  so that alone buggers any narrow AI predictive
>> approach.
>>
>> P.S. But the most basic reality of these sports is that you can't cover
>> every shot or move your opponent may make, and that gives rise to a
>> continuing stream of genuine dilemmas . For example, you have just returned
>

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
I think you're thinking of a plodding limited-movement classic Pong line.

I'm thinking of a line that can, like a human player, move with varying speed and 
pauses to more or less any part of its court to hit the ball, and then hit it 
with varying speed to more or less any part of the opposite court. I think 
you'll find that bumps up the variables, if not the unknowns, massively.

Plus just about every shot exchange presents you with dilemmas of how to place 
your shot and then move in anticipation of your opponent's return.

Remember the object here is to present a would-be AGI with a simple but 
*unpredictable* object to deal with, reflecting the realities of there being a 
great many such objects in the real world - as distinct from Dave's all too 
predictable objects.

The possible weakness of this pong example is that there might at some point 
cease to be unknowns, as there always are in real world situations, incl 
tennis. One could always introduce them if necessary - allowing say creative 
spins on the ball.

But I doubt that it will be necessary here for the purposes of anyone like Dave 
-  and v. offhand and with no doubt extreme license this strikes me as not a 
million miles from a hyper version of the TSP problem, where the towns can move 
around, and you can't be sure whether they'll be there when you arrive.  Or is 
there an "obviously true" solution for that problem too? [Very convenient these 
obviously true solutions].



From: Jim Bromer 
Sent: Sunday, June 27, 2010 8:53 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


Ben:  I'm quite sure a simple narrow AI system could be constructed to beat 
humans at Pong ;p
Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a 
single reason why.

Although Ben would have to give us an actual example (of a pong program that 
could beat humans at Pong) just to make sure that it is not that difficult a 
task, it seems like such an obviously true statement that there is almost no 
incentive for anyone to try it.  However, there are chess programs that can 
beat the majority of people who play chess without outside assistance.
Jim Bromer


On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner  wrote:

  Well, Ben, I'm glad you're "quite sure" because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

  I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

  That's why all of these sports have normally hundreds of different competing 
philosophies and strategies, - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

  I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

  In sport, as in investment, "past performance is not a [sure] guide to future 
performance" - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

  P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

  It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

  Welcome to the real world of AGI problems. You should get to know it.

  And as this example (and my rock wall problem) indicate, these

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
Ben:  I'm quite sure a simple narrow AI system could be constructed to beat
humans at Pong ;p
Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
single reason why.

Although Ben would have to give us an actual example (of a pong program that
could beat humans at Pong) just to make sure that it is not that difficult a
task, it seems like such an obviously true statement that there is almost no
incentive for anyone to try it.  However, there are chess programs that can
beat the majority of people who play chess without outside assistance.
Jim Bromer

On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner wrote:

>  Well, Ben, I'm glad you're "quite sure" because you haven't given a
> single reason why. Clearly you should be Number One advisor on every
> Olympic team, because you've cracked the AGI problem of how to deal with
> opponents that can move (whether themselves or balls) in multiple,
> unpredictable directions, that is at the centre of just about every field
> and court sport.
>
> I think if you actually analyse it, you'll find that you can't predict and
> prepare for  the presumably at least 50 to 100 spots on a table tennis
> board/ tennis court that your opponent can hit the ball to, let
> alone for how he will play subsequent 10 to 20 shot rallies   - and you
> can't devise a deterministic program to play here. These are true,
> multiple-/poly-solution problems rather than the single solution ones you
> are familiar with.
>
> That's why all of these sports have normally hundreds of different
> competing philosophies and strategies, - and people continually can and do
> come up with new approaches and styles of play to the sports overall - there
> are endless possibilities.
>
> I suspect you may not play these sports, because one factor you've
> obviously ignored (although I stressed it) is not just the complexity
> but that in sports players can and do change their strategies - and that
> would have to be a given in our computer game. In real world activities,
> you're normally *supposed* to act unpredictably at least some of the time.
> It's a fundamental subgoal.
>
> In sport, as in investment, "past performance is not a [sure] guide to
> future performance" - companies and markets may not continue to behave as
> they did in the past -  so that alone buggers any narrow AI predictive
> approach.
>
> P.S. But the most basic reality of these sports is that you can't cover
> every shot or move your opponent may make, and that gives rise to a
> continuing stream of genuine dilemmas . For example, you have just returned
> a ball from the extreme, far left of your court - do you now start moving
> rapidly towards the centre of the court so that you will be prepared to
> cover a ball to the extreme, near right side - or do you move more slowly?
> If you don't move rapidly, you won't be able to cover that ball if it comes.
> But if you do move rapidly, your opponent can play the ball back to the
> extreme left and catch you out.
>
> It's a genuine dilemma and gamble - just like deciding whether to invest in
> shares. And competitive sports are built on such dilemmas.
>
> Welcome to the real world of AGI problems. You should get to know it.
>
> And as this example (and my rock wall problem) indicate, these problems can
> be as simple and accessible as fairly easy narrow AI problems.
>  *From:* Ben Goertzel 
> *Sent:* Sunday, June 27, 2010 7:33 PM
>   *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
>
> That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
> AI system could be constructed to beat humans at Pong ;p ... without
> teaching us much of anything about intelligence...
>
> Very likely a narrow-AI machine learning system could *learn* by experience
> to beat humans at Pong ... also without teaching us much
> of anything about intelligence...
>
> Pong is almost surely a "toy domain" ...
>
> ben g
>
> On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner wrote:
>
>>  Try ping-pong -  as per the computer game. Just a line (/bat) and a
>> square(/ball) representing your opponent - and you have a line(/bat) to play
>> against them
>>
>> Now you've got a relatively simple true AGI visual problem - because if
>> the opponent returns the ball somewhat as a real human AGI does,  (without
>> the complexities of spin etc just presumably repeatedly changing the
>> direction (and perhaps the speed)  of the returned ball) - then you have a
>> fundamentally *unpredictable* object.
>>
>> How will your program learn to play that opponent - beari

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Well, Ben, I'm glad you're "quite sure" because you haven't given a single 
reason why. Clearly you should be Number One advisor on every Olympic team, 
because you've cracked the AGI problem of how to deal with opponents that can 
move (whether themselves or balls) in multiple, unpredictable directions, that 
is at the centre of just about every field and court sport.

I think if you actually analyse it, you'll find that you can't predict and 
prepare for  the presumably at least 50 to 100 spots on a table tennis board/ 
tennis court that your opponent can hit the ball to, let alone for how he will 
play subsequent 10 to 20 shot rallies   - and you can't devise a deterministic 
program to play here. These are true, multiple-/poly-solution problems rather 
than the single solution ones you are familiar with.

That's why all of these sports normally have hundreds of different competing 
philosophies and strategies - and people continually can and do come up with 
new approaches and styles of play to the sports overall - there are endless 
possibilities.

I suspect you may not play these sports, because one factor you've obviously 
ignored (although I stressed it) is not just the complexity but that in sports 
players can and do change their strategies - and that would have to be a given 
in our computer game. In real world activities, you're normally *supposed* to 
act unpredictably at least some of the time. It's a fundamental subgoal. 

In sport, as in investment, "past performance is not a [sure] guide to future 
performance" - companies and markets may not continue to behave as they did in 
the past -  so that alone buggers any narrow AI predictive approach.

P.S. But the most basic reality of these sports is that you can't cover every 
shot or move your opponent may make, and that gives rise to a continuing stream 
of genuine dilemmas . For example, you have just returned a ball from the 
extreme, far left of your court - do you now start moving rapidly towards the 
centre of the court so that you will be prepared to cover a ball to the 
extreme, near right side - or do you move more slowly?  If you don't move 
rapidly, you won't be able to cover that ball if it comes. But if you do move 
rapidly, your opponent can play the ball back to the extreme left and catch you 
out. 

It's a genuine dilemma and gamble - just like deciding whether to invest in 
shares. And competitive sports are built on such dilemmas. 

Welcome to the real world of AGI problems. You should get to know it.

And as this example (and my rock wall problem) indicate, these problems can be 
as simple and accessible as fairly easy narrow AI problems. 

From: Ben Goertzel 
Sent: Sunday, June 27, 2010 7:33 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI



That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow AI 
system could be constructed to beat humans at Pong ;p ... without teaching us 
much of anything about intelligence...

Very likely a narrow-AI machine learning system could *learn* by experience to 
beat humans at Pong ... also without teaching us much 
of anything about intelligence...

Pong is almost surely a "toy domain" ...

ben g


On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner  wrote:

  Try ping-pong -  as per the computer game. Just a line (/bat) and a 
square(/ball) representing your opponent - and you have a line(/bat) to play 
against them

  Now you've got a relatively simple true AGI visual problem - because if the 
opponent returns the ball somewhat as a real human AGI does,  (without the 
complexities of spin etc just presumably repeatedly changing the direction (and 
perhaps the speed)  of the returned ball) - then you have a fundamentally 
*unpredictable* object.

  How will your program learn to play that opponent - bearing in mind that the 
opponent is likely to keep changing and even evolving strategy? Your approach 
will have to be fundamentally different from how a program learns to play a 
board game, where all the possibilities are predictable. In the real world, 
"past performance is not a [sure] guide to future performance". Bayes doesn't 
apply.

  That's the real issue here -  it's not one of simplicity/complexity - it's 
that  your chosen worlds all consist of objects that are predictable, because 
they behave consistently, are shaped consistently, and come in consistent, 
closed sets - and  can only basically behave in one way at any given point. AGI 
is about dealing with the real world of objects that are unpredictable because 
they behave inconsistently,even contradictorily, are shaped inconsistently and 
come in inconsistent, open sets - and can behave in multi-/poly-ways at any 
given point. These differences apply at all levels from the most complex to the 
simplest.

  Dealing with consistent (and reg

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
I am working on logical satisfiability again.  If what I am working on right
now works, it will become a pivotal moment in AGI, and what's more, the
method that I am developing will (probably) become a core method for AGI.
However, if the idea I am working on does not -itself- lead to a major
breakthrough (which is the likelihood) then the idea will (probably) not
become a core issue regardless of its significance to me right at this
moment.

This is a personal statement but it is not just a question that can be
resolved through personal perspective.  So I have to rely on a more
reasonable and balanced perspective that does not just assume that I will be
successful without some hard evidence.  Without the benefit of knowing what
will happen with the theory at this time, I have to assume that there is no
evidence that this is going to be a central approach which will in some
manifestation be central to AGI in the future.  I can see that as one
possibility but this one view has to be integrated with other possibilities
as well.

I appreciate people's reports of what they are doing, and I would happily
tell you what I am working on if I was more sure that it won't work or had
it all figured out and I thought anyone would be interested (even if it
didn't work.)

Dave asked and answered, " How do we add and combine this complex behavior
learning, explanation, recognition and understanding into our system?
Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely."

That's not the answer.  That is a statement of a subgoal some of which is
programmable, but there is nothing in the statement that describes how it
can be actually achieved and there is nothing in the statement which
suggests that you have a mature insight into the nature of the problem.
There is nothing in the statement that seems new to me, I presume that many
of the programmers in the group have considered something similar at some
time in the past.

I am trying to avoid criticisms that get unnecessarily personal, but there
are some criticisms of ideas that should be made from time to time, and
sometimes a personal perspective is so tightly interwoven into the ideas
that a statement of a subgoal can look like it is a solution to a difficult
problem.

But Mike was absolutely right about one thing.  Constantly testing your
ideas with experiments is important, and if I ever gain any traction in
-anything- that I am doing, I will begin doing some AGI experiments again.

Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
That's a rather bizarre suggestion, Mike ... I'm quite sure a simple narrow
AI system could be constructed to beat humans at Pong ;p ... without
teaching us much of anything about intelligence...

Very likely a narrow-AI machine learning system could *learn* by experience
to beat humans at Pong ... also without teaching us much
of anything about intelligence...

Pong is almost surely a "toy domain" ...
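Ben's claim here (that a hand-coded narrow policy suffices for Pong) can be illustrated with a toy sketch; the coordinate convention and function name below are my own assumptions, not any real Pong API:

```python
# Illustrative sketch of a trivial narrow-AI Pong policy: the paddle just
# tracks the ball's vertical position. Assumes y increases downward.
# All names here are made-up stand-ins, not a real game interface.

def paddle_action(paddle_y, ball_y, dead_zone=2.0):
    """Return the move that closes the vertical gap to the ball."""
    if ball_y > paddle_y + dead_zone:
        return "down"
    if ball_y < paddle_y - dead_zone:
        return "up"
    return "stay"

# No learning, prediction, or "understanding" is involved; the policy is
# a fixed reactive rule, which is the sense in which Pong is a toy domain.
print(paddle_action(0.0, 10.0))  # → down
```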

ben g

On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner wrote:

>  Try ping-pong -  as per the computer game. Just a line (/bat) and a
> square(/ball) representing your opponent - and you have a line(/bat) to play
> against them
>
> Now you've got a relatively simple true AGI visual problem - because if the
> opponent returns the ball somewhat as a real human AGI does,  (without the
> complexities of spin etc just presumably repeatedly changing the direction
> (and perhaps the speed)  of the returned ball) - then you have a
> fundamentally *unpredictable* object.
>
> How will your program learn to play that opponent - bearing in mind that
> the opponent is likely to keep changing and even evolving strategy? Your
> approach will have to be fundamentally different from how a program learns
> to play a board game, where all the possibilities are predictable. In the
> real world, "past performance is not a [sure] guide to future performance".
> Bayes doesn't apply.
>
> That's the real issue here -  it's not one of simplicity/complexity - it's
> that  your chosen worlds all consist of objects that are predictable,
> because they behave consistently, are shaped consistently, and come in
> consistent, closed sets - and  can only basically behave in one way at any
> given point. AGI is about dealing with the real world of objects that are
> unpredictable because they behave inconsistently, even contradictorily, are
> shaped inconsistently and come in inconsistent, open sets - and can behave
> in multi-/poly-ways at any given point. These differences apply at all
> levels from the most complex to the simplest.
>
> Dealing with consistent (and regular) objects is no preparation for dealing
> with inconsistent, irregular objects. It's a fundamental error.
>
> Real AGI animals and humans were clearly designed to deal with a world of
> objects that have some consistencies but overall are inconsistent, irregular
> and come in open sets. The perfect regularities and consistencies of
> geometrical figures and mechanical motion (and boxes moving across a screen)
> were only invented very recently.
>
>
>
>  *From:* David Jones 
> *Sent:* Sunday, June 27, 2010 5:57 PM
> *To:* agi 
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
> Jim,
>
> Two things.
>
> 1) If the method I have suggested works for the most simple case, it is
> quite straightforward to add complexity and then ask, how do I solve it
> now. If you can't solve that case, there is no way in hell you will solve
> the full AGI problem. This is how I intend to figure out how to solve such a
> massive problem. You cannot tackle the whole thing all at once. I've tried
> it and it doesn't work because you can't focus on anything. It is like a
> Rubik's cube. You turn one piece to get the color orange in place, but at
> the same time you are screwing up the other colors. Now imagine that times
> 1000. You simply can't do it. So, you start with a simple demonstration of
> the difficulties and show how to solve a small puzzle, such as a Rubik's
> cube with 4 little cubes to a side instead of 6. Then you can show how to
> solve 2 sides of a Rubik's cube, etc. Eventually, it will be clear how to
> solve the whole problem because by the time you're done, you have a complete
> understanding of what is going on and how to go about solving it.
>
> 2) I haven't mentioned a method for matching expected behavior to
> observations and bypassing the default algorithms, but I have figured out
> quite a lot about how to do it. I'll give you an example from my own notes
> below. What I've realized is that the AI creates *expectations* (again).
> When those expectations are matched, the AI does not do its default
> processing and analysis. It doesn't do the default matching that it normally
> does when it has no other knowledge. It starts with an existing hypothesis.
> When unexpected observations or inconsistencies occur, then the AI will have
> a *reason* or *cue* (these words again... very important concepts) to look
> for a better hypothesis. Only then should it look for another hypothesis.
>
> My notes:
> How does the AI learn and figure out how to explain complex unforeseen
> behaviors that are not preprogrammable. For example the situa

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Try ping-pong -  as per the computer game. Just a line (/bat) and a 
square(/ball) representing your opponent - and you have a line(/bat) to play 
against them

Now you've got a relatively simple true AGI visual problem - because if the 
opponent returns the ball somewhat as a real human AGI does,  (without the 
complexities of spin etc just presumably repeatedly changing the direction (and 
perhaps the speed)  of the returned ball) - then you have a fundamentally 
*unpredictable* object.

How will your program learn to play that opponent - bearing in mind that the 
opponent is likely to keep changing and even evolving strategy? Your approach 
will have to be fundamentally different from how a program learns to play a 
board game, where all the possibilities are predictable. In the real world, 
"past performance is not a [sure] guide to future performance". Bayes doesn't 
apply.

That's the real issue here -  it's not one of simplicity/complexity - it's that 
 your chosen worlds all consist of objects that are predictable, because they 
behave consistently, are shaped consistently, and come in consistent, closed 
sets - and  can only basically behave in one way at any given point. AGI is 
about dealing with the real world of objects that are unpredictable because 
they behave inconsistently, even contradictorily, are shaped inconsistently and 
come in inconsistent, open sets - and can behave in multi-/poly-ways at any 
given point. These differences apply at all levels from the most complex to the 
simplest.

Dealing with consistent (and regular) objects is no preparation for dealing 
with inconsistent, irregular objects. It's a fundamental error.

Real AGI animals and humans were clearly designed to deal with a world of 
objects that have some consistencies but overall are inconsistent, irregular 
and come in open sets. The perfect regularities and consistencies of 
geometrical figures and mechanical motion (and boxes moving across a screen) 
were only invented very recently.




From: David Jones 
Sent: Sunday, June 27, 2010 5:57 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


Jim,

Two things.

1) If the method I have suggested works for the most simple case, it is quite 
straightforward to add complexity and then ask, how do I solve it now. If you 
can't solve that case, there is no way in hell you will solve the full AGI 
problem. This is how I intend to figure out how to solve such a massive 
problem. You cannot tackle the whole thing all at once. I've tried it and it 
doesn't work because you can't focus on anything. It is like a Rubik's cube. 
You turn one piece to get the color orange in place, but at the same time you 
are screwing up the other colors. Now imagine that times 1000. You simply can't 
do it. So, you start with a simple demonstration of the difficulties and show 
how to solve a small puzzle, such as a Rubik's cube with 4 little cubes to a 
side instead of 6. Then you can show how to solve 2 sides of a Rubik's cube, 
etc. Eventually, it will be clear how to solve the whole problem because by the 
time you're done, you have a complete understanding of what is going on and how 
to go about solving it.

2) I haven't mentioned a method for matching expected behavior to observations 
and bypassing the default algorithms, but I have figured out quite a lot about 
how to do it. I'll give you an example from my own notes below. What I've 
realized is that the AI creates *expectations* (again).  When those 
expectations are matched, the AI does not do its default processing and 
analysis. It doesn't do the default matching that it normally does when it has 
no other knowledge. It starts with an existing hypothesis. When unexpected 
observations or inconsistencies occur, then the AI will have a *reason* or 
*cue* (these words again... very important concepts) to look for a better 
hypothesis. Only then should it look for another hypothesis. 

My notes: 
How does the AI learn and figure out how to explain complex unforeseen behaviors 
that are not preprogrammable? For example, the situation above regarding two 
windows. How does it learn the following knowledge: the notepad icon opens a 
new notepad window, and two windows can exist... not just one window that 
changes. The bar with the notepad icon represents an instance. The bar at the 
bottom with numbers on it represents multiple instances of the same window, and 
if you click on it, it shows you representative bars for each window. 

 How do we add and combine this complex behavior learning, explanation, 
recognition and understanding into our system? 

 Answer: The way that such things are learned is by making observations, 
learning patterns and then connecting the patterns in a way that is consistent, 
explanatory and likely. 

Example: Clicking the notepad icon causes a notepad window to appear with no 
content. If we 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Jim,

Two things.

1) If the method I have suggested works for the most simple case, it is
quite straightforward to add complexity and then ask, how do I solve it
now. If you can't solve that case, there is no way in hell you will solve
the full AGI problem. This is how I intend to figure out how to solve such a
massive problem. You cannot tackle the whole thing all at once. I've tried
it and it doesn't work because you can't focus on anything. It is like a
Rubik's cube. You turn one piece to get the color orange in place, but at
the same time you are screwing up the other colors. Now imagine that times
1000. You simply can't do it. So, you start with a simple demonstration of
the difficulties and show how to solve a small puzzle, such as a Rubik's
cube with 4 little cubes to a side instead of 6. Then you can show how to
solve 2 sides of a Rubik's cube, etc. Eventually, it will be clear how to
solve the whole problem because by the time you're done, you have a complete
understanding of what is going on and how to go about solving it.

2) I haven't mentioned a method for matching expected behavior to
observations and bypassing the default algorithms, but I have figured out
quite a lot about how to do it. I'll give you an example from my own notes
below. What I've realized is that the AI creates *expectations* (again).
When those expectations are matched, the AI does not do its default
processing and analysis. It doesn't do the default matching that it normally
does when it has no other knowledge. It starts with an existing hypothesis.
When unexpected observations or inconsistencies occur, then the AI will have
a *reason* or *cue* (these words again... very important concepts) to look
for a better hypothesis. Only then should it look for another hypothesis.
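The control flow just described can be sketched as follows (a hypothetical illustration; all names and data structures below are my own, not code from the post):

```python
# Expectation-gated processing: when observations match the current
# hypothesis's expectations, skip the default matching/analysis; an
# unexpected observation is the *cue* to look for a better hypothesis.
# All names and representations here are illustrative assumptions.

def revise(hypothesis, observations):
    # Trivial stand-in for hypothesis search: widen the hypothesis
    # to cover the new observations.
    return {"expects": hypothesis["expects"] | set(observations)}

def process_frame(observations, hypothesis):
    """Return (hypothesis, did_default_processing)."""
    if hypothesis is not None:
        unexpected = [o for o in observations if o not in hypothesis["expects"]]
        if not unexpected:
            # Expectations matched: bypass the default processing entirely.
            return hypothesis, False
        # Unexpected observations trigger a search for a better hypothesis.
        return revise(hypothesis, observations), False
    # No existing knowledge: do the default matching/analysis.
    return {"expects": set(observations)}, True

h, used_default = process_frame({"square@1", "square@2"}, None)
h, used_default2 = process_frame({"square@1", "square@2"}, h)
# The second frame matches expectations, so default processing is skipped.
```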

My notes:
How does the AI learn and figure out how to explain complex unforeseen
behaviors that are not preprogrammable? For example, the situation above
regarding two windows. How does it learn the following knowledge: the
notepad icon opens a new notepad window, and two windows can exist...
not just one window that changes. The bar with the notepad icon represents
an instance. The bar at the bottom with numbers on it represents multiple
instances of the same window, and if you click on it, it shows you
representative bars for each window.

 How do we add and combine this complex behavior learning, explanation,
recognition and understanding into our system?

 Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely.

Example: Clicking the notepad icon causes a notepad window to appear with no
content. If we previously had a notepad window open, it may seem like
clicking the icon just clears the content but the instance is the same. But
this cannot be the case, because if we click the icon when no notepad window
previously existed, it will be blank. Based on these two experiences we can
construct an explanatory hypothesis such that: clicking the icon simply
opens a blank window. We also get evidence for this conclusion when we see
the two windows side by side. If we see the old window with the content
still intact, we will realize that clicking the icon did not seem to have
cleared it.
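A minimal sketch of this kind of hypothesis elimination, keeping only the hypotheses consistent with every experience (the experience encoding and hypothesis functions are my own illustrative assumptions, not code from the post):

```python
# Two experiences with the notepad icon, encoded as observations.
experiences = [
    {"window_already_open": True,  "blank_window_appears": True},
    {"window_already_open": False, "blank_window_appears": True},
]

def h_clears_content(exp):
    # "Clicking just clears the existing window": predicts a blank window
    # only when a window already existed.
    return exp["window_already_open"]

def h_opens_new_window(exp):
    # "Clicking opens a new blank window": predicts a blank window either way.
    return True

hypotheses = {
    "clears content": h_clears_content,
    "opens new window": h_opens_new_window,
}

# Keep only hypotheses whose predictions match every experience.
consistent = [name for name, h in hypotheses.items()
              if all(h(e) == e["blank_window_appears"] for e in experiences)]
# → only "opens new window" survives both experiences.
```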

Dave


On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer  wrote:

> On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner 
> wrote:
>
>>  Jim: This illustrates one of the things wrong with the
>> dreary instantiations of the prevailing mind set of a group.  It is only a
>> matter of time until you discover (through experiment) how absurd it is to
>> celebrate the triumph of an overly simplistic solution to a problem that is,
>> by its very potential, full of possibilities]
>>
>> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
>> - narrow AI.  Looking for the one right prediction/ explanation is narrow
>> AI. Being able to generate more and more possible explanations, wh. could
>> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
>> is creative, polyform thinking. Or, if you prefer, it's convergent vs
>> divergent thinking, the difference between wh. still seems to escape Dave &
>> Ben & most AGI-ers.
>>
>
> Well, I agree with what (I think) Mike was trying to get at, except that I
> understood that Ben, Hutter and especially David were not only talking about
> prediction as a specification of a single prediction when many possible
> predictions (ie expectations) were appropriate for consideration.
>
> For some reason none of you seem to ever talk about methods that could be
> used to react to a situation with the flexibility to integrate the
> recognition of different combinations of familiar events and to classify
> unusual events so they could be interpreted as more familiar *kinds* of
> events or as novel forms of events which might then be integrated.  For
> me, that seems to be one of the unsolved problems

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner wrote:

>  Jim: This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mind set of a group.  It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an overly simplistic solution to a problem that is,
> by its very potential, full of possibilities]
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>

Well, I agree with what (I think) Mike was trying to get at, except that I
understood that Ben, Hutter and especially David were not only talking about
prediction as a specification of a single prediction when many possible
predictions (i.e., expectations) were appropriate for consideration.

For some reason none of you seem to ever talk about methods that could be
used to react to a situation with the flexibility to integrate the
recognition of different combinations of familiar events and to classify
unusual events so they could be interpreted as more familiar *kinds* of
events or as novel forms of events which might then be integrated.  For
me, that seems to be one of the unsolved problems.  Being able to say that
"the squares move to the right in unison" is a better description than saying
"the squares are dancing an Irish jig" is not really cutting edge.

As far as David's comment that he was only dealing with the "core issues," I
am sorry but you were not dealing with the core issues of contemporary AGI
programming.  You were dealing with a primitive problem that has been
considered for many years, but it is not a core research issue.  Yes we have
to work with simple examples to explain what we are talking about, but there
is a difference between an abstract problem that may be central to
your recent work and a core research issue that hasn't really been solved.

The entire problem of dealing with complicated situations is that these
narrow AI methods haven't really worked.  That is the core issue.

Jim Bromer





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Jim,

I am using oversimplification to identify the core problems involved. As
you can see, the over simplification is revealing how to resolve certain
types of dilemmas and uncertainty. That is exactly why I did this. If you
can't solve a simple environment, you certainly can't solve the full
environment, which contains at least several of the same problems all
intricately tied together. So, if I can show how to solve environments which
isolate these dilemmas, I can incrementally add complexity and have a very
strong and full understanding of how the added complexity changes the
problem. Your criticisms and Mike's criticisms are unjustified. This is a
means to an end, not an end in and of itself. :)

Dave

On Sun, Jun 27, 2010 at 12:12 PM, Jim Bromer  wrote:

> The fact that you are using experiment and the fact that you recognized
> that AGI needs to provide both explanation and expectations (differentiated
> from the false precision of 'prediction') shows that you have a grasp of
> some of the philosophical problems, but the fact that you would rely on a
> primary principle of oversimplification (as differentiated from a method
> that does not start with a rule that eliminates the very potential of
> possibilities as a *general* rule of intelligence) shows that you
> don't fully understand the problem.
> Jim Bromer
>
>
>
> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>
>> A method for comparing hypotheses in explanatory-based reasoning: *
>>
>> We prefer the hypothesis or explanation that ***expects* more
>> observations. If both explanations expect the same observations, then the
>> simpler of the two is preferred (because the unnecessary terms of the more
>> complicated explanation do not add to the predictive power).*
>>
>> *Why are expected events so important?* They are a measure of 1)
>> explanatory power and 2) predictive power. The more predictive and the more
>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>> to a competing hypothesis.
>>
>> Here are two case studies I've been analyzing from sensory perception of
>> simplified visual input:
>> The goal of the case studies is to answer the following: How do you
>> generate the most likely motion hypothesis in a way that is general and
>> applicable to AGI?
>> *Case Study 1)* Here is a link to an example: an animated gif of two black
>> squares moving from left to right.
>> *Description: *Two black squares are moving in unison from left to right
>> across a white screen. In each frame the black squares shift to the right so
>> that square 1 steals square 2's original position and square two moves an
>> equal distance to the right.
>> *Case Study 2) *Here is a link to an example: the interrupted 
>> square.
>> *Description:* A single square is moving from left to right. Suddenly in
>> the third frame, a single black square is added in the middle of the
>> expected path of the original black square. This second square just stays
>> there. So, what happened? Did the square moving from left to right keep
>> moving? Or did it stop and then another square suddenly appeared and moved
>> from left to right?
>>
>> *Here is a simplified version of how we solve case study 1:
>> *The important hypotheses to consider are:
>> 1) the square from frame 1 of the video that has a very close position to
>> the square from frame 2 should be matched (we hypothesize that they are the
>> same square and that any difference in position is motion).  So, what
>> happens is that in each two frames of the video, we only match one square.
>> The other square goes unmatched.
>> 2) We do the same thing as in hypothesis #1, but this time we also match
>> the remaining squares and hypothesize motion as follows: the first square
>> jumps over the second square from left to right. We hypothesize that this
>> happens over and over in each frame of the video. Square 2 stops and square
>> 1 jumps over it over and over again.
>> 3) We hypothesize that both squares move to the right in unison. This is
>> the correct hypothesis.
>>
>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>
>> Well, first of all, #3 is correct because it has the most explanatory
>> power of the three and is the simplest of the three. Simpler is better
>> because, with the given evidence and information, there is no reason to
>> desire a more complicated hypothesis such as #2.
>>
>> So, the answer to the question is because explanation #3 expects the most
>> observations, such as:
>> 1) the consistent relative positions of the squares in each frame are
>> expected.
>> 2) It also expects their new positions in each frame based on velocity
>> calculations.
>> 3) It expects both squares to occur in each frame.
>>
>> Explanation 1 ignores 1 square from each frame of the video, because it
>> can't match it. Hypothesis #1 doesn't have a reason for why

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
The fact that you are using experiment and the fact that you recognized that
AGI needs to provide both explanation and expectations (differentiated from
the false precision of 'prediction') shows that you have a grasp of some of
the philosophical problems, but the fact that you would rely on a primary
principle of oversimplification (as differentiated from a method that does
not start with a rule that eliminates the very potential of possibilities as
a *general* rule of intelligence) shows that you don't fully understand the
problem.
Jim Bromer



On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning: *
>
> We prefer the hypothesis or explanation that ***expects* more
> observations. If both explanations expect the same observations, then the
> simpler of the two is preferred (because the unnecessary terms of the more
> complicated explanation do not add to the predictive power).*
>
> *Why are expected events so important?* They are a measure of 1)
> explanatory power and 2) predictive power. The more predictive and the more
> explanatory a hypothesis is, the more likely the hypothesis is when compared
> to a competing hypothesis.
>
> Here are two case studies I've been analyzing from sensory perception of
> simplified visual input:
> The goal of the case studies is to answer the following: How do you
> generate the most likely motion hypothesis in a way that is general and
> applicable to AGI?
> *Case Study 1)* Here is a link to an example: an animated gif of two black
> squares moving from left to right.
> *Description: *Two black squares are moving in unison from left to right
> across a white screen. In each frame the black squares shift to the right so
> that square 1 steals square 2's original position and square two moves an
> equal distance to the right.
> *Case Study 2) *Here is a link to an example: the interrupted 
> square.
> *Description:* A single square is moving from left to right. Suddenly in
> the third frame, a single black square is added in the middle of the
> expected path of the original black square. This second square just stays
> there. So, what happened? Did the square moving from left to right keep
> moving? Or did it stop and then another square suddenly appeared and moved
> from left to right?
>
> *Here is a simplified version of how we solve case study 1:
> *The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
>
> *Here is a simplified version of how we solve case study 2:*
> We expect the original square to move at a similar velocity from left to
> right because we hypothesized that it did move from left to right and we
> calculated its velocity. If this expectation is confirmed, then it is more
> likely than saying that the square suddenly stopped and another started
> moving. Such a change would be unexpected and such a conclusion would be
> unjustifiable.
>
> I also be
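The comparison rule quoted above (prefer the hypothesis that expects more observations; on a tie, prefer the simpler one) can be sketched for the two-squares case study. The numeric counts and complexity scores below are my own illustrative stand-ins, not values from the post:

```python
# Each hypothesis is (name, number of observations it expects per frame,
# a rough complexity score where lower means simpler).
hypotheses = [
    ("match one square, ignore the other", 1, 1),
    ("square 1 repeatedly jumps over square 2", 2, 3),
    ("both squares move right in unison", 2, 2),
]

def preference(h):
    name, expected, complexity = h
    # More expected observations first; simpler hypothesis breaks ties.
    return (-expected, complexity)

best = min(hypotheses, key=preference)
# → ("both squares move right in unison", 2, 2)
```

The jumping hypothesis expects as many observations as the unison hypothesis, so the tie-break on simplicity selects unison motion, matching the argument in the quoted post.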

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
lol.

Mike,

What I was trying to express by the word *expect* is NOT predict [some exact
outcome]. Expect means that the algorithm has a way of comparing
observations to what the algorithm considers to be consistent with an
"explanation". This is something I struggled to solve for a long time
regarding explanatory reasoning.

Dave

On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner wrote:

>  Jim: This illustrates one of the things wrong with the
> dreary instantiations of the prevailing mind set of a group.  It is only a
> matter of time until you discover (through experiment) how absurd it is to
> celebrate the triumph of an overly simplistic solution to a problem that is,
> by its very potential, full of possibilities]
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>
> Consider a real world application - a footballer, Maradona, is dribbling
> with the ball - you don't/can't predict where he's going next, you have to
> be ready for various directions, including the possibility that he is going
> to do something surprising and new   - even if you have to commit yourself
> to anticipating a particular direction. Ditto if you're trying to predict
> the path of an animal prey.
>
> Dealing only with the "predictable" as most do, is perhaps what Jim is
> getting at - predictable. And wrong for AGI. It's your capacity to deal
> with the open, unpredictable, real world that signifies you are an AGI -
> not the same old, closed predictable, artificial world. When will you have
> the courage to face this?
>
>





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI.  Looking for the one right prediction/ explanation is narrow
> AI. Being able to generate more and more possible explanations, wh. could
> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between wh. still seems to escape Dave &
> Ben & most AGI-ers.
>

You are misrepresenting my approach, which is not based on looking for "the
one right prediction/explanation"

OpenCog relies heavily on evolutionary learning and probabilistic inference,
both of which naturally generate a massive number of alternative possible
explanations in nearly every instance...

-- Ben G





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Jim: This illustrates one of the things wrong with the dreary instantiations of 
the prevailing mind set of a group.  It is only a matter of time until you 
discover (through experiment) how absurd it is to celebrate the triumph of an 
overly simplistic solution to a problem that is, by its very potential, full of 
possibilities]

To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject - 
narrow AI.  Looking for the one right prediction/ explanation is narrow AI. 
Being able to generate more and more possible explanations, wh. could all be 
valid,  is AGI.  The former is rational, uniform thinking. The latter is 
creative, polyform thinking. Or, if you prefer, it's convergent vs divergent 
thinking, the difference between wh. still seems to escape Dave & Ben & most 
AGI-ers.

Consider a real world application - a footballer, Maradona, is dribbling with 
the ball - you don't/can't predict where he's going next, you have to be ready 
for various directions, including the possibility that he is going to do 
something surprising and new   - even if you have to commit yourself to 
anticipating a particular direction. Ditto if you're trying to predict the path 
of an animal prey.

Dealing only with the "predictable", as most do, is perhaps what Jim is getting 
at - and it is wrong for AGI. It's your capacity to deal with the open, 
unpredictable, real world that signifies you are an AGI - not the same old, 
closed, predictable, artificial world. When will you have the courage to face 
this?

Sent: Sunday, June 27, 2010 4:21 PM
To: agi 
Subject: Re: [agi] Huge Progress on the Core of AGI


On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

  A method for comparing hypotheses in explanatory-based reasoning: Here is a 
simplified version of how we solve case study 1:
  The important hypotheses to consider are: 
  1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
  2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
  3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

  So, why should we prefer the correct hypothesis, #3 over the other two?

  Well, first of all, #3 is correct because it has the most explanatory power 
of the three and is the simplest of the three. Simpler is better because, with 
the given evidence and information, there is no reason to desire a more 
complicated hypothesis such as #2. 

  So, the answer to the question is because explanation #3 expects the most 
observations, such as: 
  1) the consistent relative positions of the squares in each frame are 
expected. 
  2) It also expects their new positions in each frame based on velocity 
calculations. 
  3) It expects both squares to occur in each frame. 

  Explanation 1 ignores 1 square from each frame of the video, because it can't 
match it. Hypothesis #1 doesn't have a reason for why a new square appears 
in each frame and why one disappears. It doesn't expect these observations. In 
fact, explanation 1 doesn't expect anything that happens because something new 
happens in each frame, which doesn't give it a chance to confirm its hypotheses 
in subsequent frames.

  The power of this method is immediately clear. It is general and it solves 
the problem very cleanly.
  Dave 


Nonsense.  This illustrates one of the things wrong with the dreary 
instantiations of the prevailing mind set of a group.  It is only a matter of 
time until you discover (through experiment) how absurd it is to celebrate the 
triumph of an overly simplistic solution to a problem that is, by its very 
potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in 
'unison' I doubt if the program calculated or represented those objects in 
unison.  I also doubt that their positioning was literally based on moving 
'right' since their movement was presumably calculated with incremental 
mathematics that were associated with screen positions.  And, looking for a 
technicality that represents the failure of the over-reliance on the efficacy 
of a simplistic overgeneralization, I only have to point out that they did not 
only move to the right, so your description was either wrong or only partially 
representative of th

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Jim Bromer
On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning: *Here is
> a simplified version of how we solve case study 1:*
> The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
> Dave

Nonsense.  This illustrates one of the things wrong with the
dreary instantiations of the prevailing mind set of a group.  It is only a
matter of time until you discover (through experiment) how absurd it is to
celebrate the triumph of an overly simplistic solution to a problem that is,
by its very potential, full of possibilities.

For one example, even if your program portrayed the 'objects' as moving in
'unison' I doubt if the program calculated or represented those objects in
unison.  I also doubt that their positioning was literally based on moving
'right' since their movement was presumably calculated with incremental
mathematics that were associated with screen positions.  And, looking for a
technicality that represents the failure of the over-reliance on the
efficacy of a simplistic overgeneralization, I only have to point out that
they did not only move to the right, so your description was either wrong or
only partially representative of the apparent movement.

As long as the hypotheses are kept simple enough to eliminate the less
useful hypotheses, and the underlying causes for apparent relations are kept
irrelevant, oversimplification is a reasonable (and valuable) method. But
if you are seriously interested in scalability, then this kind of conclusion
is just dull.

I have often made the criticism that the theories put forward in these
groups are overly simplistic.  Although I understand that this was just a
simple example, here is the key to determining whether a method is overly
simplistic (or as in AIXI) based on an overly simplistic definition
of insight.  Would this method work in discovering the possibilities of a
potentially more complex IO data environment like those we would expect to
find using AGI?
Jim Bromer.





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
For visual perception, there are many reasons to think that a hierarchical
architecture can be effective... this is one of the things you may find in
dealing with real visual data but not with these toy examples...

E.g. in a spatiotemporal predictive hierarchy, the idea would be to create a
predictive module (using an Occam heuristic, as you suggest) corresponding
to each of a host of observed spatiotemporal regions, with modules
corresponding to larger regions occurring higher up in the hierarchy...
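The hierarchy described here might be sketched roughly as follows. This is a toy illustration only, not OpenCog code: the `RegionPredictor`/`Hierarchy` names are invented, and the "predict persistence" rule is a stand-in for a real learned predictive module (e.g. one chosen via an Occam heuristic).

```python
# Toy sketch of a spatiotemporal predictive hierarchy: one predictive module
# per observed region, plus a higher-level module that predicts over the
# lower modules' outcomes. "Predict that the last observation repeats" is a
# deliberately trivial placeholder for a learned predictor.

class RegionPredictor:
    """Predicts that a region's next observation repeats its last one."""
    def __init__(self):
        self.last = None

    def update(self, observation):
        confirmed = (self.last == observation)  # was the prediction right?
        self.last = observation
        return confirmed

class Hierarchy:
    def __init__(self, n_regions):
        self.lower = [RegionPredictor() for _ in range(n_regions)]
        # The upper module predicts the joint pattern of lower-level outcomes,
        # i.e. it operates over a larger spatiotemporal region.
        self.upper = RegionPredictor()

    def update(self, region_observations):
        hits = [m.update(o) for m, o in zip(self.lower, region_observations)]
        self.upper.update(tuple(hits))
        return sum(hits)  # number of lower-level predictions confirmed

h = Hierarchy(2)
print(h.update(("black", "white")))  # 0: nothing predicted yet
print(h.update(("black", "white")))  # 2: both regions behaved as predicted
```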

ben

On Sun, Jun 27, 2010 at 10:09 AM, David Jones  wrote:

> Thanks Ben,
>
> Right, explanatory reasoning is not new at all (also called abduction and
> inference to the best explanation). But what seems to be elusive is a
> precise and algorithmic method for implementing explanatory reasoning and
> solving real problems, such as sensory perception. This is what I'm hoping
> to solve. The theory has been there a while... How to effectively implement
> it in a general way though, as far as I can tell, has never been solved.
>
> Dave
>
> On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel  wrote:
>
>>
>> Hi,
>>
>> I certainly agree with this method, but of course it's not original at
>> all, it's pretty much the basis of algorithmic learning theory, right?
>>
>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>> most compact program that, based on historical data, would have yielded
>> maximum reward
>>
>> So yeah, this is the right idea... and your simple examples of it are
>> nice...
>>
>> Eric Baum's whole book "What Is Thought" is sort of an explanation of this
>> idea in a human biology and psychology and AI context ;)
>>
>> ben
>>
>> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>>
>>> A method for comparing hypotheses in explanatory-based reasoning:
>>>
>>> We prefer the hypothesis or explanation that *expects* more
>>> observations. If both explanations expect the same observations, then the
>>> simpler of the two is preferred (because the unnecessary terms of the more
>>> complicated explanation do not add to the predictive power).
>>>
>>> *Why are expected events so important?* They are a measure of 1)
>>> explanatory power and 2) predictive power. The more predictive and the more
>>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>>> to a competing hypothesis.
>>>
>>> Here are two case studies I've been analyzing from sensory perception of
>>> simplified visual input:
>>> The goal of the case studies is to answer the following: How do you
>>> generate the most likely motion hypothesis in a way that is general and
>>> applicable to AGI?
>>> *Case Study 1)* Here is a link to an example: animated gif of two black
>>> squares move from left to 
>>> right.
>>> *Description: *Two black squares are moving in unison from left to right
>>> across a white screen. In each frame the black squares shift to the right so
>>> that square 1 steals square 2's original position and square two moves an
>>> equal distance to the right.
>>> *Case Study 2) *Here is a link to an example: the interrupted 
>>> square.
>>> *Description:* A single square is moving from left to right. Suddenly in
>>> the third frame, a single black square is added in the middle of the
>>> expected path of the original black square. This second square just stays
>>> there. So, what happened? Did the square moving from left to right keep
>>> moving? Or did it stop and then another square suddenly appeared and moved
>>> from left to right?
>>>
>>> *Here is a simplified version of how we solve case study 1:*
>>> The important hypotheses to consider are:
>>> 1) the square from frame 1 of the video that has a very close position to
>>> the square from frame 2 should be matched (we hypothesize that they are the
>>> same square and that any difference in position is motion).  So, what
>>> happens is that in each two frames of the video, we only match one square.
>>> The other square goes unmatched.
>>> 2) We do the same thing as in hypothesis #1, but this time we also match
>>> the remaining squares and hypothesize motion as follows: the first square
>>> jumps over the second square from left to right. We hypothesize that this
>>> happens over and over in each frame of the video. Square 2 stops and square
>>> 1 jumps over it over and over again.
>>> 3) We hypothesize that both squares move to the right in unison. This is
>>> the correct hypothesis.
>>>
>>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>>
>>> Well, first of all, #3 is correct because it has the most explanatory
>>> power of the three and is the simplest of the three. Simpler is better
>>> because, with the given evidence and information, there is no reason to
>>> desire a more complicated hypothesis such as #2.
>>>
>>> So, the answer to the question is because explanation #3 expects the most
>>> observations, 

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread David Jones
Thanks Ben,

Right, explanatory reasoning is not new at all (also called abduction and
inference to the best explanation). But what seems to be elusive is a
precise and algorithmic method for implementing explanatory reasoning and
solving real problems, such as sensory perception. This is what I'm hoping
to solve. The theory has been there a while... How to effectively implement
it in a general way though, as far as I can tell, has never been solved.

Dave

On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel  wrote:

>
> Hi,
>
> I certainly agree with this method, but of course it's not original at all,
> it's pretty much the basis of algorithmic learning theory, right?
>
> Hutter's AIXI for instance works [very roughly speaking] by choosing the
> most compact program that, based on historical data, would have yielded
> maximum reward
>
> So yeah, this is the right idea... and your simple examples of it are
> nice...
>
> Eric Baum's whole book "What Is Thought" is sort of an explanation of this
> idea in a human biology and psychology and AI context ;)
>
> ben
>
> On Sun, Jun 27, 2010 at 1:31 AM, David Jones wrote:
>
>> A method for comparing hypotheses in explanatory-based reasoning:
>>
>> We prefer the hypothesis or explanation that *expects* more
>> observations. If both explanations expect the same observations, then the
>> simpler of the two is preferred (because the unnecessary terms of the more
>> complicated explanation do not add to the predictive power).
>>
>> *Why are expected events so important?* They are a measure of 1)
>> explanatory power and 2) predictive power. The more predictive and the more
>> explanatory a hypothesis is, the more likely the hypothesis is when compared
>> to a competing hypothesis.
>>
>> Here are two case studies I've been analyzing from sensory perception of
>> simplified visual input:
>> The goal of the case studies is to answer the following: How do you
>> generate the most likely motion hypothesis in a way that is general and
>> applicable to AGI?
>> *Case Study 1)* Here is a link to an example: animated gif of two black
>> squares move from left to 
>> right.
>> *Description: *Two black squares are moving in unison from left to right
>> across a white screen. In each frame the black squares shift to the right so
>> that square 1 steals square 2's original position and square two moves an
>> equal distance to the right.
>> *Case Study 2) *Here is a link to an example: the interrupted 
>> square.
>> *Description:* A single square is moving from left to right. Suddenly in
>> the third frame, a single black square is added in the middle of the
>> expected path of the original black square. This second square just stays
>> there. So, what happened? Did the square moving from left to right keep
>> moving? Or did it stop and then another square suddenly appeared and moved
>> from left to right?
>>
>> *Here is a simplified version of how we solve case study 1:*
>> The important hypotheses to consider are:
>> 1) the square from frame 1 of the video that has a very close position to
>> the square from frame 2 should be matched (we hypothesize that they are the
>> same square and that any difference in position is motion).  So, what
>> happens is that in each two frames of the video, we only match one square.
>> The other square goes unmatched.
>> 2) We do the same thing as in hypothesis #1, but this time we also match
>> the remaining squares and hypothesize motion as follows: the first square
>> jumps over the second square from left to right. We hypothesize that this
>> happens over and over in each frame of the video. Square 2 stops and square
>> 1 jumps over it over and over again.
>> 3) We hypothesize that both squares move to the right in unison. This is
>> the correct hypothesis.
>>
>> So, why should we prefer the correct hypothesis, #3 over the other two?
>>
>> Well, first of all, #3 is correct because it has the most explanatory
>> power of the three and is the simplest of the three. Simpler is better
>> because, with the given evidence and information, there is no reason to
>> desire a more complicated hypothesis such as #2.
>>
>> So, the answer to the question is because explanation #3 expects the most
>> observations, such as:
>> 1) the consistent relative positions of the squares in each frame are
>> expected.
>> 2) It also expects their new positions in each frame based on velocity
>> calculations.
>> 3) It expects both squares to occur in each frame.
>>
>> Explanation 1 ignores 1 square from each frame of the video, because it
>> can't match it. Hypothesis #1 doesn't have a reason for why a new square
>> appears in each frame and why one disappears. It doesn't expect these
>> observations. In fact, explanation 1 doesn't expect anything that happens
>> because something new happens in each frame, which doesn't give it a chance
>> to confirm its hypotheses in subsequen

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
Hi,

I certainly agree with this method, but of course it's not original at all,
it's pretty much the basis of algorithmic learning theory, right?

Hutter's AIXI for instance works [very roughly speaking] by choosing the
most compact program that, based on historical data, would have yielded
maximum reward
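A minimal toy rendering of that selection rule, for illustration only: real AIXI uses program length under a universal prior, which this crudely approximates with hand-assigned complexity scores and plain Python callables standing in for programs.

```python
# Toy Occam selection: among candidate "programs" that reproduce the observed
# history, pick the most compact one. Complexities here are hand-assigned
# stand-ins for program length; in AIXI they would come from a universal prior.

def fits_history(program, history):
    """True if the program reproduces every historical observation."""
    return all(program(t) == obs for t, obs in enumerate(history))

def most_compact_fit(candidates, history):
    """candidates: list of (complexity, program). Return the simplest fitting program."""
    fitting = [(c, p) for c, p in candidates if fits_history(p, history)]
    return min(fitting, key=lambda cp: cp[0])[1] if fitting else None

history = [0, 2, 4, 6]  # observations at t = 0..3
candidates = [
    (3, lambda t: 0),                 # simplest, but does not fit the history
    (5, lambda t: 2 * t),             # compact and fits
    (9, lambda t: 2 * t + 0 * t**3),  # fits too, but needlessly complex
]
best = most_compact_fit(candidates, history)
print(best(4))  # extrapolates the next observation: 8
```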

So yeah, this is the right idea... and your simple examples of it are
nice...

Eric Baum's whole book "What Is Thought" is sort of an explanation of this
idea in a human biology and psychology and AI context ;)

ben

On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanatory-based reasoning:
>
> We prefer the hypothesis or explanation that *expects* more
> observations. If both explanations expect the same observations, then the
> simpler of the two is preferred (because the unnecessary terms of the more
> complicated explanation do not add to the predictive power).
>
> *Why are expected events so important?* They are a measure of 1)
> explanatory power and 2) predictive power. The more predictive and the more
> explanatory a hypothesis is, the more likely the hypothesis is when compared
> to a competing hypothesis.
>
> Here are two case studies I've been analyzing from sensory perception of
> simplified visual input:
> The goal of the case studies is to answer the following: How do you
> generate the most likely motion hypothesis in a way that is general and
> applicable to AGI?
> *Case Study 1)* Here is a link to an example: animated gif of two black
> squares move from left to right.
> *Description: *Two black squares are moving in unison from left to right
> across a white screen. In each frame the black squares shift to the right so
> that square 1 steals square 2's original position and square two moves an
> equal distance to the right.
> *Case Study 2) *Here is a link to an example: the interrupted 
> square.
> *Description:* A single square is moving from left to right. Suddenly in
> the third frame, a single black square is added in the middle of the
> expected path of the original black square. This second square just stays
> there. So, what happened? Did the square moving from left to right keep
> moving? Or did it stop and then another square suddenly appeared and moved
> from left to right?
>
> *Here is a simplified version of how we solve case study 1:*
> The important hypotheses to consider are:
> 1) the square from frame 1 of the video that has a very close position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion).  So, what
> happens is that in each two frames of the video, we only match one square.
> The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3 over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is because explanation #3 expects the most
> observations, such as:
> 1) the consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame based on velocity
> calculations.
> 3) It expects both squares to occur in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doesn't have a reason for why a new square
> appears in each frame and why one disappears. It doesn't expect these
> observations. In fact, explanation 1 doesn't expect anything that happens
> because something new happens in each frame, which doesn't give it a chance
> to confirm its hypotheses in subsequent frames.
>
> The power of this method is immediately clear. It is general and it solves
> the problem very cleanly.
>
> *Here is a simplified version of how we solve case study 2:*
> We expect the original square to move at a similar velocity from left to
> right because we hypothesized that it did move from left to right and we
> calculated its velocity. If this expectation is confirmed, then it is more
> likely than saying that the square suddenly stopped and another started
> moving. Such a change would be unexpected and such a conclusion would be
> unjustifiable.
>
> I also believe that explanations which

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Mike Tintner
Word of advice. You're creating your own artificial world here with its own 
artificial rules.

AGI is about real vision of real objects in the real world. The two do not 
relate - or compute. 

It's a pity - it's good that you keep testing yourself, but bad that they 
aren't realistic tests. Subject yourself to reality - it'll feel better every 
which way.


From: David Jones 
Sent: Sunday, June 27, 2010 6:31 AM
To: agi 
Subject: [agi] Huge Progress on the Core of AGI


A method for comparing hypotheses in explanatory-based reasoning: 

We prefer the hypothesis or explanation that *expects* more observations. If 
both explanations expect the same observations, then the simpler of the two is 
preferred (because the unnecessary terms of the more complicated explanation do 
not add to the predictive power). 

Why are expected events so important? They are a measure of 1) explanatory 
power and 2) predictive power. The more predictive and the more explanatory a 
hypothesis is, the more likely the hypothesis is when compared to a competing 
hypothesis.

Here are two case studies I've been analyzing from sensory perception of 
simplified visual input:
The goal of the case studies is to answer the following: How do you generate 
the most likely motion hypothesis in a way that is general and applicable to 
AGI?
Case Study 1) Here is a link to an example: animated gif of two black squares 
move from left to right. Description: Two black squares are moving in unison 
from left to right across a white screen. In each frame the black squares shift 
to the right so that square 1 steals square 2's original position and square 
two moves an equal distance to the right.
Case Study 2) Here is a link to an example: the interrupted square. 
Description: A single square is moving from left to right. Suddenly in the 
third frame, a single black square is added in the middle of the expected path 
of the original black square. This second square just stays there. So, what 
happened? Did the square moving from left to right keep moving? Or did it stop 
and then another square suddenly appeared and moved from left to right?

Here is a simplified version of how we solve case study 1:
The important hypotheses to consider are: 
1) the square from frame 1 of the video that has a very close position to the 
square from frame 2 should be matched (we hypothesize that they are the same 
square and that any difference in position is motion).  So, what happens is 
that in each two frames of the video, we only match one square. The other 
square goes unmatched.   
2) We do the same thing as in hypothesis #1, but this time we also match the 
remaining squares and hypothesize motion as follows: the first square jumps 
over the second square from left to right. We hypothesize that this happens 
over and over in each frame of the video. Square 2 stops and square 1 jumps 
over it over and over again. 
3) We hypothesize that both squares move to the right in unison. This is the 
correct hypothesis.

So, why should we prefer the correct hypothesis, #3 over the other two?

Well, first of all, #3 is correct because it has the most explanatory power of 
the three and is the simplest of the three. Simpler is better because, with the 
given evidence and information, there is no reason to desire a more complicated 
hypothesis such as #2. 

So, the answer to the question is because explanation #3 expects the most 
observations, such as: 
1) the consistent relative positions of the squares in each frame are expected. 
2) It also expects their new positions in each frame based on velocity 
calculations. 
3) It expects both squares to occur in each frame. 

Explanation 1 ignores 1 square from each frame of the video, because it can't 
match it. Hypothesis #1 doesn't have a reason for why a new square appears 
in each frame and why one disappears. It doesn't expect these observations. In 
fact, explanation 1 doesn't expect anything that happens because something new 
happens in each frame, which doesn't give it a chance to confirm its hypotheses 
in subsequent frames.

The power of this method is immediately clear. It is general and it solves the 
problem very cleanly.

Here is a simplified version of how we solve case study 2:
We expect the original square to move at a similar velocity from left to right 
because we hypothesized that it did move from left to right and we calculated 
its velocity. If this expectation is confirmed, then it is more likely than 
saying that the square suddenly stopped and another started moving. Such a 
change would be unexpected and such a conclusion would be unjustifiable. 

I also believe that explanations which generate fewer incorrect expectations 
should be preferred over those that generate more incorrect expectations.

The idea I came up with earlier this month regarding high frame rates to reduce 
uncertainty is still applicable. It is important that al

[agi] Huge Progress on the Core of AGI

2010-06-26 Thread David Jones
A method for comparing hypotheses in explanatory-based reasoning:

We prefer the hypothesis or explanation that *expects* more observations.
If both explanations expect the same observations, then the simpler of the
two is preferred (because the unnecessary terms of the more complicated
explanation do not add to the predictive power).

*Why are expected events so important?* They are a measure of 1) explanatory
power and 2) predictive power. The more predictive and the more explanatory
a hypothesis is, the more likely the hypothesis is when compared to a
competing hypothesis.
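As a hedged sketch, this preference rule could be written as a comparator over hypotheses scored by how many observations they expect and how many terms they need. The `Hypothesis` fields and the numeric scores below are illustrative assumptions, not part of the original method.

```python
# Illustrative comparator for the stated rule: more expected observations wins;
# on a tie, the simpler (fewer-term) explanation wins. Scores are made up.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    expected: int    # observations the hypothesis expects (explanatory/predictive power)
    complexity: int  # number of terms/assumptions in the explanation

def prefer(a: Hypothesis, b: Hypothesis) -> Hypothesis:
    """More expected observations wins; ties go to the simpler hypothesis."""
    if a.expected != b.expected:
        return a if a.expected > b.expected else b
    return a if a.complexity <= b.complexity else b

# Rough scores for the three case-study-1 hypotheses discussed below.
h1 = Hypothesis("match one square only", expected=0, complexity=1)
h2 = Hypothesis("square jumps over the other", expected=2, complexity=3)
h3 = Hypothesis("both move in unison", expected=3, complexity=2)

best = prefer(prefer(h1, h2), h3)
print(best.name)  # both move in unison
```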

Here are two case studies I've been analyzing from sensory perception of
simplified visual input:
The goal of the case studies is to answer the following: How do you generate
the most likely motion hypothesis in a way that is general and applicable to
AGI?
*Case Study 1)* Here is a link to an example: animated gif of two black
squares move from left to right.
*Description:* Two black squares are moving in unison from left to right
across a white screen. In each frame the black squares shift to the right so
that square 1 steals square 2's original position and square two moves an
equal distance to the right.
*Case Study 2)* Here is a link to an example: the interrupted
square.
*Description:* A single square is moving from left to right. Suddenly in the
third frame, a single black square is added in the middle of the expected
path of the original black square. This second square just stays there. So,
what happened? Did the square moving from left to right keep moving? Or did
it stop and then another square suddenly appeared and moved from left to
right?

*Here is a simplified version of how we solve case study 1:*
The important hypotheses to consider are:
1) the square from frame 1 of the video that has a very close position to
the square from frame 2 should be matched (we hypothesize that they are the
same square and that any difference in position is motion).  So, what
happens is that in each two frames of the video, we only match one square.
The other square goes unmatched.
2) We do the same thing as in hypothesis #1, but this time we also match the
remaining squares and hypothesize motion as follows: the first square jumps
over the second square from left to right. We hypothesize that this happens
over and over in each frame of the video. Square 2 stops and square 1 jumps
over it over and over again.
3) We hypothesize that both squares move to the right in unison. This is the
correct hypothesis.

So, why should we prefer the correct hypothesis, #3 over the other two?

Well, first of all, #3 is correct because it has the most explanatory power
of the three and is the simplest of the three. Simpler is better because,
with the given evidence and information, there is no reason to desire a more
complicated hypothesis such as #2.

So, the answer to the question is because explanation #3 expects the most
observations, such as:
1) the consistent relative positions of the squares in each frame are
expected.
2) It also expects their new positions in each frame based on velocity
calculations.
3) It expects both squares to occur in each frame.

Explanation 1 ignores 1 square from each frame of the video, because it
can't match it. Hypothesis #1 doesn't have a reason for why a new square
appears in each frame and why one disappears. It doesn't expect these
observations. In fact, explanation 1 doesn't expect anything that happens
because something new happens in each frame, which doesn't give it a chance
to confirm its hypotheses in subsequent frames.

The power of this method is immediately clear. It is general and it solves
the problem very cleanly.
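Hypothesis #3 can be checked mechanically: if a single common displacement maps every square in one frame onto a square in the next, the "unison" explanation accounts for all observations, while an unmatched square signals hypothesis #1's situation. A minimal sketch, with positions and the tolerance invented for illustration:

```python
# Sketch: does one shared displacement explain every square between two frames?
# Squares are (x, y) positions; a pure translation preserves sorted order,
# which is what the sort-and-zip matching below relies on.

def common_displacement(frame1, frame2, tol=0.5):
    """Return the shared (dx, dy) if one displacement explains every square, else None."""
    if len(frame1) != len(frame2):
        return None  # unmatched squares: hypothesis #1's situation
    f1, f2 = sorted(frame1), sorted(frame2)
    dx, dy = f2[0][0] - f1[0][0], f2[0][1] - f1[0][1]
    for (x1, y1), (x2, y2) in zip(f1, f2):
        if abs((x2 - x1) - dx) > tol or abs((y2 - y1) - dy) > tol:
            return None  # no single motion explains all squares
    return (dx, dy)

# Case study 1: square 1 takes square 2's old position; square 2 moves equally right.
print(common_displacement([(0, 0), (10, 0)], [(10, 0), (20, 0)]))  # (10, 0)
```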

*Here is a simplified version of how we solve case study 2:*
We expect the original square to move at a similar velocity from left to
right because we hypothesized that it did move from left to right and we
calculated its velocity. If this expectation is confirmed, then it is more
likely than saying that the square suddenly stopped and another started
moving. Such a change would be unexpected and such a conclusion would be
unjustifiable.
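The case-study-2 reasoning amounts to extrapolating the tracked square's position from its estimated velocity and checking whether some square in the next frame confirms that expectation. A sketch; the coordinates and tolerance are invented for illustration:

```python
# Sketch of the case-study-2 expectation check: extrapolate the mover's next
# position from its velocity; a square found near that position confirms the
# "kept moving" hypothesis over "stopped, and a new square appeared and moved".

def predict_next(pos, velocity):
    """Extrapolate the next position from the last position and estimated velocity."""
    return (pos[0] + velocity[0], pos[1] + velocity[1])

def confirmed(expected, observed_squares, tol=1.0):
    """True if some observed square lies within tol of the expected position."""
    return any(abs(x - expected[0]) <= tol and abs(y - expected[1]) <= tol
               for (x, y) in observed_squares)

# The square moved from x=0 to x=10, so velocity is (10, 0); expect it at x=20.
expected = predict_next((10, 0), (10, 0))
# Frame 3 shows a square at x=20 (the mover) and a new one at x=15 (stationary).
print(confirmed(expected, [(20, 0), (15, 0)]))  # True: "kept moving" is confirmed
```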

I also believe that explanations which generate fewer incorrect expectations
should be preferred over those that generate more incorrect expectations.

The idea I came up with earlier this month regarding high frame rates to
reduce uncertainty is still applicable. It is important that all generated
hypotheses have as low uncertainty as possible given our constraints and
resources available.

I thought I'd share my progress with you all. I'll be testing the ideas on
test cases such as the ones I mentioned in the coming days and weeks.

Dave


