Aaron,

Thanks for the reply.

I’ll be brief, because I intend to come back to this later at length – it’s 
*hugely*, hugely important.

Your approach to language is, broadly, standard AGI, standard philosophy, 
standard linguistics, standard last 2500 years of rational culture.  
Essentially this says that language is primarily pictures of the world, whose 
meaning and truth we, as observers, must ascertain. Any application of language 
to action is secondary, to be tackled secondarily.

This is totally upside down. Actually language is primarily enactive – 
consisting of guides to action rather than pictures – and there to direct 
action first, and fill books only secondarily. The latter is totally dependent 
on the former. Language (a conceptual system) begins and ends in action. 
Understanding this requires a revolution in our approach to language – and is 
an absolute sine qua non (indeed the very foundation) of real AGI.

And that’s why there are such huge problems of communication between me and 
y’all in this area. You can’t understand concepts until you look at their 
primary function, which is enactive. I always think about concepts in terms of 
their enaction, you all are thinking about them almost exclusively in terms of 
their meaning/truth decipherment – and so never really understand my points – 
or the central nature of an AGI. If you look at concepts in terms of 
meaning/truth, then it’s not wildly unreasonable to think that semantic nets 
might help. If you look at them in terms of action, they become absurd.

All this is by way of a statement of intent, rather than a detailed argument.

Re Woz: you’re new, I assume, and don’t know, as most here do by now, that it 
refers to the Wozniak test, which does indeed require a robot to be able to 
respond to the command: GO TO THE KITCHEN (AND MAKE COFFEE). That is a true 
test of AGI (if a little complicated) and also, more specifically, of an AGI 
that can truly use concepts.

Really think about it – about all the kitchens, all the coffeepots, and all 
the intermediate *chairs* and furniture such concepts embrace – and that a 
true Woz AGI must therefore be able to enactively navigate – and you’ll realise 
why semantic nets haven’t got a hope of working.



From: [email protected] 
Sent: Wednesday, October 31, 2012 9:20 PM
To: AGI 
Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]

The short term goal is to be able to represent & understand the meaning of 
natural language in both textual and conversational settings. This does entail 
being able to represent relationships between concepts across multiple contexts.

The long term goal is to implement analytical thought, reasoning, and decision 
making on top of this representational scheme. Ultimately, this will require 
embodiment.

I don't think the plasticity of thought which you emphasize is possible without 
first having an effective representation of meaning. This is why I'm starting 
by designing a robust representation, and only then moving on to reasoning.
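
(For concreteness, a relations-across-contexts scheme of the kind described 
above might be sketched as follows. This is purely an illustrative guess – 
every name and structure here is invented, not the actual design being built.)

```python
# Hypothetical sketch: concept relations tagged per-context, each carrying a
# soft confidence value. Invented names; an illustration, not a real system.
from collections import defaultdict


class SemanticNet:
    def __init__(self):
        # (source, relation, target) -> {context: confidence}
        self.links = defaultdict(dict)

    def assert_link(self, source, relation, target, context, confidence):
        """Record a relation with a soft (0..1) confidence in a given context."""
        self.links[(source, relation, target)][context] = confidence

    def confidence(self, source, relation, target, context=None):
        """Look up confidence in a context; fall back to the best across contexts."""
        entry = self.links.get((source, relation, target), {})
        if not entry:
            return 0.0
        if context in entry:
            return entry[context]
        return max(entry.values())


net = SemanticNet()
net.assert_link("coffee", "located_in", "kitchen", context="household", confidence=0.9)
net.assert_link("coffee", "located_in", "cafe", context="city", confidence=0.7)
print(net.confidence("coffee", "located_in", "kitchen", "household"))  # 0.9
```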

Of course I recognize that the system won't have real-world experience and 
therefore won't have informed common sense as we do. This means that initially 
it will make ridiculous mistakes, the sort where "everybody knows" the right 
answer, and it will have to be instructed on these mistakes via natural 
language.

Later, when the system is embodied, experience will serve as the source of 
common sense information, and the level of natural language instruction will 
taper off until the system is capable of understanding without assistance. 
Children do not learn from words alone, and neither do I expect my system to, 
but it will have to make do in the interim.

Only once reasoning and embodiment-derived common sense are in place will the 
system be able to pass tests like you describe below, as opposed to merely 
representing their meaning.

By the way, I don't think the Woz test means what you think it means: 
http://en.m.wikipedia.org/wiki/Wizard_of_Oz_experiment  It seems the opposite 
of what you intend. I assume what you really mean by the Woz test is a 
real-world, task-oriented version of the Turing test.



-- Sent from my Palm Pre


--------------------------------------------------------------------------------
On Oct 31, 2012 5:49 AM, Mike Tintner <[email protected]> wrote: 


Quick question, Aaron.

When you talk about “expressing”/representing these statements, is the goal of 
your net to *process text* – other instances of these concepts used in other 
texts? (Hence, I presume, your “truth values”.)

Do you have any additional goal of your machine *enacting* these statements?

For example, 

GO TO THE KITCHEN  (close to MAIN STREET below)

is a form of the Woz test.

Is your net intended to represent that, so a robot could  *enact* it and pass 
the Woz test?



From: Aaron Hosford 
Sent: Wednesday, October 31, 2012 12:32 AM
To: AGI 
Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]

I see no contrast. These vague statements/strategies can be expressed quite 
nicely using soft/fuzzy/uncertain truth values applied to links in a semantic 
net. As for implementing the behavior they describe, that will come later, but 
as with any goal-oriented behavior, there will be (1) a goal description -- 
whether vague or precise, doesn't really matter -- and (2) a (most likely 
heuristic, a.k.a. imprecise) goal metric. The magic you perceive here is 
already accounted for. 

Defining imprecision or uncertainty lets you represent it effectively. If you 
don't define it, you can't represent it. If you can't represent it, you can't 
compute it. To build things you have to make things out of something. You are 
advocating we just conjure up uncertainty out of thin air. 
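
(To illustrate the point that defined uncertainty can be represented and then 
computed with – a minimal sketch. The strength/confidence pairing is an 
assumption borrowed from common fuzzy/probabilistic designs, not the actual 
representation under discussion.)

```python
# Minimal sketch: once "uncertainty" is defined as an explicit value, it can
# be stored and combined. The (strength, confidence) pair is an assumed design.
from dataclasses import dataclass


@dataclass
class TruthValue:
    strength: float    # how true the link is taken to be, in [0, 1]
    confidence: float  # how much evidence backs it, in [0, 1]

    def conjoin(self, other):
        """Combine two uncertain truths; confidence can only shrink."""
        return TruthValue(self.strength * other.strength,
                          min(self.confidence, other.confidence))


vague = TruthValue(strength=0.6, confidence=0.3)   # "something like this"
firm = TruthValue(strength=0.95, confidence=0.9)   # well-evidenced fact
combined = vague.conjoin(firm)
print(round(combined.strength, 2), combined.confidence)  # 0.57 0.3
```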






On Tue, Oct 30, 2012 at 2:16 PM, Mike Tintner <[email protected]> wrote:

  I would be surprised here, Aaron, if you are not a victim of the abuse of key 
terms throughout AI.

  Of course, there is a great deal of “fuzzy” logic – and I assume your use of 
“soft” is not a million miles from that.

  When you look into these logics, you find that actually they are being used 
with precision and precise values.

  Contrast them with:

  1. WE NEED TO TAKE SOME POSITIVE ACTION HERE TO DEAL WITH THEIR THREATS. WE 
CAN’T LET THEM THINK WE’RE GOING TO FOOL AROUND.

  2. FIRST WE NEED TO DEFINE THE PROBLEM – WE CAN’T START WORKING ON SOLUTIONS 
LIKE AGIERS BEFORE WE’VE EVEN DEFINED THE PROBLEM. THEN WE NEED TO DIG UP 
WHATEVER EVIDENCE WE CAN FIND, AND GRADUALLY GENERATE SOME IDEAS. OR WE COULD 
START WITH IDEAS, AND THEN CHECK OUT THE EVIDENCE.

  3. LET’S GO TO THE MAIN SHOPPING STREET, AND NOSE AROUND TO SEE WHAT WE CAN 
FIND.

  These are examples of the kind of truly fluid (or soft, or vague) thinking 
that characterises human/real AGI thinking – and that are way beyond the 
compass of any logic or algo. They are also, as I think we’ve discussed, truly 
general – levels higher than logic and maths, with their vague generalities as 
distinct from the latter’s specific generalities.

  From: [email protected] 
  Sent: Tuesday, October 30, 2012 6:07 PM
  To: AGI 
  Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was 
Superficiality]

  You apparently missed a big chunk of the recent conversation between Jim & me.

  I am using "soft" truth values to represent so-called facts because I assumed 
from the start that single, unambiguous meanings are merely artifacts of our 
perception. My system is built from the ground up with tools necessary for 
dealing with imprecision, uncertainty, and ambiguity. Jim was trying to 
convince me of the need for additional measures, and (within the limits of our 
mutual understanding) we agreed on that point.

  This is in direct contradiction to your statements below about our use of 
rigid logic. I'm sure there are other groups out there doing the same for 
shapes and images. Where did you pick up this idea that we are stuck on simple 
geometric shapes and logic that only permits the simple yes/no dichotomy? And 
why do you think algorithms are restricted to them, too?






------------------------------------------------------------------------------
  On Oct 30, 2012 12:23 PM, Mike Tintner <[email protected]> wrote: 


  Schema is a fluid outline – as distinct from a geometrically defined 
outline/pattern, which is rigid. (You can geometrically define a moving wavy 
line – but it’s “rigidly/fixedly wavy”.) The outline of a real waterdrop is a 
fluid outline. The outline of your hand grasping or your body moving is a fluid 
outline. They’re actually moving/changing – so you know that any shape they may 
have at a given moment is fluid and about to change.

  Another way to think of it is to look at any cartoon:
  
https://www.google.com/search?num=10&hl=en&safe=off&site=imghp&tbm=isch&source=hp&biw=1362&bih=692&q=obama+cartoon&oq=obama+cartoon&gs_l=img.3..0l10.1269.3112.0.3741.13.9.0.2.2.0.131.589.8j1.9.0...0.0...1ac.1.YShcABKFARI

  We understand when we look at a cartoon outline of, say, Obama that that is 
an outline to be interpreted *fluidly* and not literally. We understand that 
that outline is to be understood as saying “the lines of the real object are 
SOMETHING LIKE these (but not exactly and not in any way that can be precisely 
defined)”. Those outlines, you could say, stand in relation to the real thing 
rather as the outline of a waterdrop or hand a few seconds ago stands in 
relation to their outlines now.

  The brain *demonstrably* works with fluid outlines. Every icon you see:

  http://www.clipartlab.com/clipart_preview/clipart/icons3-2.gif

  is evidently not a literal rendering of the outlines of the real objects, but 
is to be interpreted fluidly.

  So if the conscious brain evidently works with fluid outlines, then the 
unconscious brain must be able to as well.

  But this requires a whole different mentality from the geometric/logical 
mentality – there, things have to be precise. You can’t understand a point as 
being loosely round about a given location. You can’t understand a given 
logical symbol as meaning “loosely something like this object”. If you do, all 
your equations and deductions will be buggered.

  And if you just listen to people here, they continually (naturally, given 
their tools) crave precision, single unambiguous meanings, correct answers.

  The fluid mentality is: “hang loose, dude; don’t be so uptight; go with the 
flow” – it’s fluid and adaptable, and continuously changing, with unlimited 
potential to change further and produce multiple-to-infinite versions (within 
certain constraints).

  Algorithms are utterly rigid and haven’t produced, and never will produce, a 
single new element – or new fluid conformation.



  From: Mike Archbold 
  Sent: Tuesday, October 30, 2012 4:13 PM
  To: AGI 
  Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was 
Superficiality]




  On Tue, Oct 30, 2012 at 6:16 AM, Mike Tintner <[email protected]> 
wrote:



    Mike A:

    All of Mike T's arguments seem to me to stem from a standpoint of extreme 
empiricism. He doesn't seem to acknowledge anything other than precisely what 
is under consideration. Even though a chair top can look different in all 
cases, in all cases there IS a constant, and that is that the essence of a 
chair persists. Philosophers have long fought with these issues, and as most 
know it was Kant who came closest (arguably) to reconciling the empiricists and 
the rationalists.


    No, I’m not a pure empiricist. (The philosophical/psychological background 
is loosely important – recent comments seem unaware that this is one of the 
most controversial areas.)

    The difference is indeed about rationality – about what *kind* of 
schema/classificatory devices the mind (human or any real world mind) must 
impose on its images of objects. Rationality – and everyone here, except for 
me, is in effect a rationalist – presupposes a CONSTANT schema – just as you 
have said, and just as Plato implied 2,500 years ago. That’s because you are 
still intellectually living in the age of text, where everything you see is 
constant and unchanging.


  You wouldn't even be able to communicate at all if there were no constants. 
I'm not sure what you mean by schema in this context, but I think you mean some 
kind of form or set of properties relevant to some object or thing.

  Nobody says you have to have 100% constants. Indeed, that is ridiculous. But 
you are arguing using a false dichotomy, it seems to me: either CONSTANTS or 
FLUID, or roughly rationalist vs. empiricist. The reality, however, is that 
both are needed to process reality – the constant and the changing/unique – and 
it doesn't matter if we are talking about language, thought, or physical 
objects.



    Move into the new millennium of movies, which are now a sine qua non, and 
you realise that everything is FLUID/MOVING – and different individual versions 
of things are different from (and in effect fluid versions of) others. 

    There is no constant, essential waterdrop or human being, or chair or apple 
– especially in a world in which all things may be, and usually are, 
transformed by external means in all kinds of ways – like being stepped on, 
smashed, burned or fragmented. If you just look, that lack of a constant is 
self-evident. But you don’t look – you a priori seek to impose the constant 
frameworks of language, maths and logic on a fluid world – determined to defend 
them to the death – despite the fact that they obviously are a complete, 
never-failing-to-fail bust for conceptualisation/recognition and anything AGI.

    For a fluid, transformational world and objects, you need fluid, 
transformational schemas – but there is nothing in the “languages” you know 
about them, and you’re not open to new ideas.


  I get the continual feeling that you think that just because we express 
something as an algorithm or in conversational language, nothing further can 
emerge from it... is that right?


    Fluid schemas are doubly essential because – the other thing that all here 
forget – an AGI of any kind must get to know and classify objects 
*piecemeal/gradually*, developmentally. The first chair or dog you see may not 
be at all a typical or common one. All the current approaches to AGI assume a 
*full knowledge/fully developed mind* – with well-structured concept graphs 
and a fully developed grammar – which has in effect already learned more or 
less all it really needs to know – quite, quite absurd. Every approach in the 
field is only appropriate to a fully knowledgeable narrow-AI routine/subsystem, 
not to a real-world AGI, a complete system gradually, fluidly getting to know 
the world.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
