Define the type of problems it addresses, which might be [for all I know]:

* understanding and précis-ing a set of newspaper stories about politics or sewage
* solving a murder - starting with limited evidence
* designing new types of buildings - starting from limited knowledge of other types of buildings
* navigating an agent through a domestic/office environment with furniture strewn everywhere, animals and people milling around - & limited rules for navigation...
* learning to identify every plant and tree in a complex jungle scene - starting with limited knowledge and access to public databases

Ideally I'd like to see in there how, having learned to solve one class of problems, it is going to solve new related classes of problems - to get an EDUCATION in various spheres (as distinct, say, from just learning in one sphere) -

so the navigational agent would have to be able, having mastered a domestic environment, to learn to navigate a jungle or forest environment.

I want to see some PROBLEM (& education) EXAMPLES. That's all, really. (You say that your system is attending to all these different problems more or less simultaneously, or interweavingly, but you don't say what the problems are).

The general public & I (& AGI people too, I suspect) have very little idea of what AGI can actually do, or is even trying to do, right now.

P.S. There are two sides to talking about knowledge/intelligence/problems etc. There is the side of the subject - the thinker, the brain manipulating ideas, the user of different techniques (logical, mathematical etc.), bits, bytes etc. And there is the side of the object(s) of knowledge - the crimes being solved, the buildings being constructed, the genes and society being learned about - what all that knowledge and those problems are about.

There is the subjective side of the mirror reflecting and there is the objective side of the scene being reflected.

You guys tend to lean massively toward describing everything in terms of the subjective side. But it's only when you describe intelligence & problem-solving in terms of what you're trying to know/solve problems about that things start to make sense.

----- Original Message ----- From: "Pei Wang" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, May 01, 2007 2:30 PM
Subject: Re: [agi] The role of incertainty


On 5/1/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
Pei,

Glad to see your input. I noticed NARS quite by accident many years ago &
remembered it as possibly very important.

You certainly are implementing the principles we have just been discussing -
which is exciting.

However, reading your papers & Ben's, it's becoming clear that there may
well be an industry-wide bad practice going on here. You guys all focus on
how your systems WORK. The first thing anyone trying to understand your or
any other system must know is: what does it DO? What are the problems it
addresses, and the kinds of solutions it provides?

Well, that is exactly the problem addressed in the paper I mentioned:
my working definition of "intelligence", and why I think it is a
better understanding than the others.

It should be commonly accepted that it is EXTREMELY BAD PRACTICE not to
first define what problems your system is set up to solve.

Agree.

Imagine if I spent 100 pages writing up the intricate mechanisms of this new
machine, with all these wonderful new wireless and heat and electroservo
this-and-that principles involved... and then only at the very end do I tell
you that it's an apple-peeler. You'd find it a bit of a strain to read all
that.

Agree.

The only difference between the above write-up and yours and Ben's is that
we the readers never even get to find out that what you've got is an
apple-peeler! I still don't know what your systems do.

I wonder if you really read the paper I mentioned --- you can
criticize it for all kinds of reasons, but you cannot say I didn't
define the problem I'm working on, because that is what that paper is
all about! If it is still not clear from that paper, you may also want
to read http://nars.wang.googlepages.com/wang.AI_Definitions.pdf and
http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf

It may be good for grants to cover up what you do, but it's actually not
good for you or your thinking or the progress of AI.

I'd very much like to know what your NARS system DOES - is that possible?

I guess I don't understand what you mean by "DOES". If you mean the
goal of the project, then the above papers should be sufficient; if
you mean how the system works, you need to try my demo at
http://nars.wang.googlepages.com/nars%3Ademonstration ; if you mean
what domain problems it can solve by design, then the answer is
"none", since it is not an expert system. Can you be more specific?

Pei

P.S. Minsky is much the same.

----- Original Message -----
From: "Pei Wang" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, May 01, 2007 12:50 PM
Subject: Re: [agi] The role of incertainty


> You can take NARS (http://nars.wang.googlepages.com/) as an example,
> starting at http://nars.wang.googlepages.com/wang.logic_intelligence.pdf
>
> Pei
>
> On 5/1/07, rooftop8000 <[EMAIL PROTECTED]> wrote:
>> It seems a lot of posts on this list are about the properties an AGI
>> should have: PLURALISTIC, OPEN-ENDED AGI, adaptive, sometimes irrational
>> ..
>> it can be useful to talk about them, but i'd rather hear about how
>> this translates into real projects.
>>
>> How to make a program that can deal with uncertainty
>> and is adaptive and can think irrationally at times.. Seems like
>> an awful lot of things.. how should we organize all this? How do we
>> take existing solutions for some of these problems and make sure new ones
>> can get added ..
>>
>>
>> --- Mike Tintner <[EMAIL PROTECTED]> wrote:
>>
>> > Yes, you are very right. And my point is that there are absolutely major
>> > philosophical issues here - both the general philosophy of mind and
>> > epistemology, and the more specific philosophy of AI. In fact, I think my
>> > characterisation of the issue as one of monism [general - behavioural as
>> > well as of substance] vs pluralism [again general - not just cultural] is
>> > probably the best one.
>> >
>> > So do post further thoughts, esp. re AI./AGI - this is well worth
>> > pursuing
>> > and elaborating.
>> >
>> > ----- Original Message -----
>> > From: "Richard Loosemore" <[EMAIL PROTECTED]>
>> > To: <[email protected]>
>> > Sent: Monday, April 30, 2007 3:31 PM
>> > Subject: [agi] The role of incertainty
>> >
>> >
>> > > The discussion of uncertainty reminds me of a story about Piaget that
>> > > struck a chord with me.
>> > >
>> > > Apparently, when Piaget was but a pup, he had the job of scoring tests
>> > > given to kids.  His job was to count the correct answers, but he started
>> > > getting interested in the wrong answers.  When he mentioned to his bosses
>> > > that the wrong answers looked really interesting in their wrongness, they
>> > > got mad at him and pointed out that wrong was just wrong, and all they
>> > > were interested in was how to make the kids get more right answers.
>> > >
>> > > At that point, P had a revelation: looking at right answers told him
>> > > nothing about the children, whereas all the information about what they
>> > > were really thinking was buried in the wrong answers. So he dumped his
>> > > dead-end job and became Jean Piaget, Famous Psychologist, instead.
>> > >
>> > > When I read the story I had a similar feeling of Aha! Thinking isn't
>> > > about a lot of Right Thinking sprinkled with the occasional annoying
>> > > Mistake. Thinking is actually a seething cauldron of Mistakes, some of
>> > > which get less egregious over time and become Not-Quite-So-Bad Mistakes,
>> > > which we call rational thinking.
>> > >
>> > > I think this attitude to how the mind works, though it is painted in
>> > > bright colors, is more healthy than the attitude that thinking is about
>> > > reasoning modulated by uncertainty.
>> > >
>> > > (Perhaps this is what irritates me so much about the people who call
>> > > themselves Bayesians: people so desperate to believe that they are
>> > > perfect that they have made a religion out of telling each other that
>> > > they think perfectly, when in fact they are just as irrational as any
>> > > other religious fanatic). ;-)
>> > >
>> > >
>> > >
>> > > Richard Loosemore.
>> > >
>> > >
>> > > -----
>> > > This list is sponsored by AGIRI: http://www.agiri.org/email
>> > > To unsubscribe or change your options, please go to:
>> > > http://v2.listbox.com/member/?&;
>> > >
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> >
>>
>>
>>
>>
>
>
>
>
>
>











