Schema is a fluid outline – as distinct from a geometrically defined 
outline/pattern, which is rigid. (You can geometrically define a moving wavy 
line – but it’s “rigidly/fixedly wavy”.) The outline of a real waterdrop is a 
fluid outline. The outline of your hand grasping or your body moving is a fluid 
outline. They’re actually moving/changing – so you know that any shape they may 
have at a given moment is fluid and about to change.
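A minimal sketch of the distinction (the function name and sampling choices here are mine, purely illustrative): a formula-defined wave moves, but every shape it will ever take is fixed in advance – “rigidly/fixedly wavy” in exactly this sense.

```python
import math

def wavy_line(t, n_points=8):
    """A geometrically defined moving wavy line: a travelling sine wave.

    The outline changes with t, yet every shape it will ever take is
    fixed in advance by the formula -- "rigidly/fixedly wavy".
    """
    return [math.sin(0.5 * i - t) for i in range(n_points)]

# The outlines at t=0 and t=1 differ point by point...
before, after = wavy_line(0.0), wavy_line(1.0)
# ...but re-evaluating at the same t reproduces the identical outline:
# the motion is completely determined, nothing about it is fluid.
assert wavy_line(0.0) == before
assert before != after
```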

Another way to think of it is to look at any cartoon:
https://www.google.com/search?num=10&hl=en&safe=off&site=imghp&tbm=isch&source=hp&biw=1362&bih=692&q=obama+cartoon&oq=obama+cartoon&gs_l=img.3..0l10.1269.3112.0.3741.13.9.0.2.2.0.131.589.8j1.9.0...0.0...1ac.1.YShcABKFARI

We understand, when we look at a cartoon outline of, say, Obama, that it is an 
outline to be interpreted *fluidly* and not literally. We understand that the 
outline is saying “the lines of the real object are SOMETHING LIKE these (but 
not exactly, and not in any way that can be precisely defined)”. Those 
outlines, you could say, stand in relation to the real thing rather as the 
outline of a waterdrop or hand a few seconds ago stands in relation to its 
outline now.

The brain *demonstrably* works with fluid outlines. Every icon you see:

http://www.clipartlab.com/clipart_preview/clipart/icons3-2.gif

is evidently not a literal rendering of the outlines of the real objects, but 
is to be interpreted fluidly.

So if the conscious brain evidently works with fluid outlines, then the 
unconscious brain must be able to as well.

But this requires a whole different mentality from the geometric/logical 
mentality – there, things have to be precise. You can’t understand a point as 
being loosely round about a given location. You can’t understand a given 
logical symbol as meaning “loosely something like this object”. If you do, all 
your equations and deductions will be buggered.

And if you just listen to people here, they continually (naturally, given their 
tools) crave precision: single, unambiguous meanings, correct answers.

The fluid mentality is: “hang loose, dude; don’t be so uptight; go with the 
flow” – it’s fluid and adaptable, continuously changing, with unlimited 
potential to change further and produce multiple-to-infinite versions (within 
certain constraints).

Algorithms are utterly rigid and have never produced – and never will produce – 
a single new element or new fluid conformation.



From: Mike Archbold 
Sent: Tuesday, October 30, 2012 4:13 PM
To: AGI 
Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]




On Tue, Oct 30, 2012 at 6:16 AM, Mike Tintner <[email protected]> wrote:



  Mike A:

  All of Mike T's arguments seem to me to stem from a standpoint of extreme 
empiricism.  He doesn't seem to acknowledge anything other than precisely what 
is under consideration.  Even though a chair top can look different in all 
cases, in all cases there IS a constant, and that is that the essence of a 
chair persists.  Philosophers have long fought with these issues, and as most 
know it was Kant who came closest (arguably) to reconciling the empiricists and 
the rationalists.


  No I’m not a pure empiricist. (The philosophical/psychological background is 
loosely important –  recent comments seem unaware that this is one of the most 
controversial areas).

  The difference is indeed about rationality – about what *kind* of 
schema/classificatory devices the mind (human or any real world mind) must 
impose on its images of objects. Rationality – and everyone here, except for 
me, is in effect a rationalist – presupposes a CONSTANT schema – just as you 
have said, and just as Plato implied 2,500 years ago. That’s because you are 
still intellectually living in the age of text, where everything you see is 
constant and unchanging.


You wouldn't even be able to communicate at all if there were no constants.  
I'm not sure what you mean by schema in this context, but I think you mean some 
kind of form or set-of-properties relevant to some object or thing.  

Nobody says you have to have 100% constants.  Indeed, that is ridiculous.  But 
you are arguing from a false dichotomy, it seems to me: either CONSTANTS or 
FLUID, or roughly rationalist vs. empiricist.  The reality, however, is that 
both are needed to process reality – the constant and the changing/unique – and 
it doesn't matter whether we are talking about language, thought, or physical 
objects.



  Move into the new millennium of movies, which are now a sine qua non, and you 
realise that everything is FLUID/MOVING – and different individual versions of 
things are different from (and in effect fluid versions of) others. 

  There is no constant, essential waterdrop or human being, or chair or apple – 
especially in a world in which all things may be, and usually are, transformed 
by external means in all kinds of ways – stepped on, smashed, burned or 
fragmented. If you just look, that lack of a constant is self-evident. But you 
don’t look – you a priori seek to impose the constant frameworks of language, 
maths and logic on a fluid world – determined to defend them to the death – 
despite the fact that they are obviously a complete, never-failing-to-fail bust 
for conceptualisation/recognition and anything AGI.

  For a fluid, transformational world and objects, you need fluid, 
transformational schemas – but there is nothing in the “languages” you know 
about them, and you’re not open to new ideas.


I get the continual feeling that you think that, just because we express 
something as an algorithm or in conversational language, nothing further can 
emerge from it... is that right???
 


  Fluid schemas are doubly essential because – the other thing that all here 
forget – an AGI of any kind must get to know and classify objects 
*piecemeal/gradually*, developmentally. The first chair or dog you see may not 
be at all a typical or common one. All the current approaches to AGI assume a 
*full knowledge/fully developed mind* – with well-structured concept graphs 
and a fully developed grammar – which has in effect already learned more or 
less all it really needs to know – quite, quite absurd. Every approach in the 
field is only appropriate to a fully knowledgeable narrow-AI routine/subsystem, 
not to a real-world AGI, a complete system gradually, fluidly getting to know 
the world.

        AGI | Archives  | Modify Your Subscription  





-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
