Hmm... Shall we coin this the Tintner Contrarian Pattern?

 

Or anti-pattern :)

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 
I agree, John, that this is a useful exercise. This would be a good discussion
if Mike would ever admit that I might be right and he might be wrong. I'm
not sure that will ever happen though. :) First he says I can't define a
pattern that works. Then, when I do, he says the pattern is no good because
it isn't physical. Lol. If he would ever admit that I might have gotten it
right, the discussion would be a good one. Instead, he clings to his
preconceived notions no matter how good my arguments are and finds yet another
reason, any reason will do, to say I'm still wrong.

On Aug 9, 2010 2:18 AM, "John G. Rose" <johnr...@polyplexic.com> wrote:

Actually this is quite critical.

 

Defining a chair - which would agree with each instance of a chair in the
supplied image - is the way a chair should be defined and is the way the
mind processes it.

 

It can be defined mathematically in many ways. There is a particular one I
would go for though...
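
Purely as an illustration (a toy sketch with invented part names and
thresholds, not necessarily the definition John has in mind), one such
mathematical definition might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class Part:
        kind: str      # e.g. "horizontal_surface" or "vertical_support"
        height: float  # metres above the ground plane
        area: float    # square metres

    def looks_like_chair(parts: list[Part]) -> bool:
        # One possible relational definition: a sittable surface at roughly
        # knee-to-thigh height, supported from below. Backrests, arm count,
        # number of legs, and exact shape are left free, so stools, office
        # chairs, and armchairs can all satisfy the same pattern.
        seats = [p for p in parts
                 if p.kind == "horizontal_surface"
                 and 0.3 <= p.height <= 0.7
                 and 0.1 <= p.area <= 0.5]
        supports = [p for p in parts
                    if p.kind == "vertical_support" and p.height < 0.7]
        return bool(seats) and bool(supports)

The point of the sketch is that the definition constrains relations between
generic components rather than any fixed geometry, which is what would let it
agree with each chair instance in the supplied image.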

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Sunday, August 08, 2010 7:28 AM


To: agi
Subject: Re: [agi] How To Create General AI Draft2

 

You're waffling.

 

You say there's a pattern for chair - DRAW IT. The attachment should help you.

 

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

 

You haven't identified any basic visual units - you don't have any. Do you?
Yes/no.

 

No. That's not "funny", that's a waste. And woolly and imprecise through
and through.

 

 

 

From: David Jones <mailto:davidher...@gmail.com>  

Sent: Sunday, August 08, 2010 1:59 PM

To: agi <mailto:agi@v2.listbox.com>  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to offer is your example
problems, problems that *you* don't understand how to solve. Just because *you*
cannot solve them doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to a "chair" means that it is a
recognizable pattern. LOL. The fact that you don't realize this is quite
funny.

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner <tint...@blueyonder.co.uk>
wrote:

Dave: No... it is equivalent to saying that the whole world can be modeled as
if everything was made up of matter

 

And "matter" is... ?  Huh?

 

You clearly don't realise that your thinking is seriously woolly - and you
will pay a heavy price in lost time.

 

What are your "basic world/visual-world analytic units" which you are
claiming to exist?

 

You thought - perhaps think still - that *concepts*, which are pretty
fundamental intellectual units of analysis at a certain level, could be
expressed as, or indeed were, patterns. In other words, there's a fundamental
pattern for "chair" or "table." Absolute nonsense. And a radical failure to
understand the basic nature of concepts, which is that they are *freeform*
schemas, incapable of being expressed either as patterns or as programs.

 

You had merely assumed that concepts could be expressed as patterns, but had
never seriously, visually analysed it. Similarly, you are merely assuming
that the world can be analysed into some kind of visual units - but you
haven't actually done the analysis, have you? You don't have any of these
basic units to hand, do you? If you do, I suggest you reply instantly, naming
a few. You won't be able to do it. They don't exist.

 

Your whole approach to AGI is based on variations of what we can call
"fundamental analysis" - and it's wrong. God/Evolution hasn't built the
world with any kind of geometric, or other consistent, bricks. He/It is a
freeform designer. You have to start thinking outside the
box/brick/"fundamental unit".

 

From: David Jones <mailto:davidher...@gmail.com>  

Sent: Sunday, August 08, 2010 5:12 AM

To: agi <mailto:agi@v2.listbox.com>  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed. 

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner <tint...@blueyonder.co.uk>
wrote:

1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101.
I think it's clear that my design defines "general" as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.
 

 

2) "Learning about the world" won't cut it -  vast nos. of progs. claim they
can learn about the world - what's the difference between narrow AI and AGI
learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI can receive input about anything it needs to
know in the same formats it already knows how to understand and analyze, then
it can reason about anything it needs to.
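
To make this concrete (a toy sketch of my own; the Observation structure and
field names are assumptions for illustration, not taken from the draft), one
uniform input format might be:

    from typing import Any, NamedTuple

    class Observation(NamedTuple):
        entity: str     # whatever the input refers to
        attribute: str  # a generic property name
        value: Any      # the observed value
        time: float     # when it was observed

    # Visual, textual, and numeric inputs all arrive in one shape, so the
    # same reasoning machinery can consume new kinds of input without a
    # design change:
    observations = [
        Observation("region_17", "color", "red", 0.0),
        Observation("region_17", "position", (120, 88), 0.0),
        Observation("sensor_3", "temperature", 21.4, 2.0),
    ]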
 

 

3) "Breaking things down into generic components allows us to learn about
and handle the vast majority of things we want to learn about. This is what
makes it general!"

 

Wild assumption, unproven or at all demonstrated and untrue.


You are right only that I haven't demonstrated it. I will address this in
the next paper and continue adding details over the next few drafts.

As a simple argument against your counterargument...

If it were true that we could not understand the world using a limited set
of rules or concepts, how is it that a human baby, with a design that is
predetermined by its DNA to interact with the world in a certain way, is able
to deal with unforeseen things that were not preprogrammed? That's right:
the baby was born with a set of rules that robustly allows it to deal with
the unforeseen. It has a limited set of rules it uses to learn. That is
equivalent to a limited set of "concepts" (i.e. rules) that would allow a
computer to deal with the unforeseen.
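
To make the baby analogy concrete, here is a toy sketch (my own, with
invented names; not a claim about how the brain or my design actually works)
of how a small, fixed set of generic learning rules can still cope with
inputs it was never explicitly programmed for:

    from collections import defaultdict

    class SimpleLearner:
        """The only innate machinery: count which outcome follows which
        context and predict the most frequent one. Nothing here names chairs,
        faces, or any other specific object."""

        def __init__(self) -> None:
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, context: str, outcome: str) -> None:
            self.counts[context][outcome] += 1

        def predict(self, context: str):
            outcomes = self.counts.get(context)
            if not outcomes:
                return None  # genuinely novel context: no prediction yet
            return max(outcomes, key=outcomes.get)

    learner = SimpleLearner()
    learner.observe("object_released", "object_falls")
    learner.observe("object_released", "object_falls")
    print(learner.predict("object_released"))  # learned, not preprogrammed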
 

Interesting philosophically, because it implicitly underlies AGI-ers'
fantasies of "take-off". You can compare it to the idea that all science can
be reduced to physics. If it could, then an AGI could indeed take off. But
it's demonstrably not so.


No... it is equivalent to saying that the whole world can be modeled as if
everything was made up of matter. Oh, I forgot, that is the case :) It is a
limited set of "concepts", yet it can create everything we know.
 

 

You don't seem to understand that the problem of AGI is to deal with the NEW
- the unfamiliar, that which cannot be broken down into familiar categories -
and then to find ways of dealing with it ad hoc.


You don't seem to understand that even the things you think cannot be broken
down, can be.


Dave


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com