PM: The internal representation is an implementation detail, if you think of the larger functional processes as black boxes with specific inputs and outputs and well-defined behavior. ...Call me naive.

“Naive.” You’re assuming language/thought consists of *propositions* that can be manipulated in the form of logical tags. Actually, language consists of actions that have to be physically simulated both to understand them and to enact them. Logic can’t simulate or enact. What you think is a logical “black box” is actually the greatest **simulator** yet invented. [Check out dreams as one illustration.]

There’s a beautiful visual demonstration in the movie “Premium Rush”, which also constitutes a movie first. In a chase, a cyclist approaches an extremely crowded traffic intersection. The movie then imaginatively shows his thoughts: his mind literally maps out three different possible paths through the intersection – all ending in terrible accidents – before he settles on a fourth and cycles on. That is fundamental to thought: physically simulating the possible courses of action, rather than formulating them logically.
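To make that concrete, here is a minimal sketch of action selection by forward simulation rather than logical deduction (the grid world, obstacle set, and candidate paths below are all invented for illustration): roll each imagined route forward, reject any that ends in a crash, and take the first survivor.

```python
# Minimal sketch of "thought as simulation": evaluate candidate
# action sequences by rolling them forward in a model of the world,
# instead of deducing a plan logically. Grid, obstacles, and paths
# are hypothetical.

OBSTACLES = {(1, 2), (2, 2), (3, 1)}  # imagined crossing traffic

def simulate(path, start=(0, 0)):
    """Roll a candidate path forward; return True if it survives."""
    x, y = start
    for dx, dy in path:
        x, y = x + dx, y + dy
        if (x, y) in OBSTACLES:      # imagined crash: reject this future
            return False
    return True

# Four imagined routes through the intersection, like the cyclist's.
candidates = [
    [(1, 0), (0, 1), (1, 1)],        # ends in an accident
    [(0, 1), (1, 1), (1, 0)],        # ends in an accident
    [(1, 1), (1, 0), (1, 0)],        # ends in an accident
    [(1, 0), (1, 0), (0, 1)],        # survives
]

chosen = next(p for p in candidates if simulate(p))
print("cycle on via:", chosen)
```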

Even to understand the apparently rarefied intellectual processing you are outlining above for a computer, the reader/writer must simulate the actions of thinking involved (as Searle does metaphorically with the Chinese Room translator). I’m simulating them – and I know it will all lead logically to yet another terrible fatality.
From: Piaget Modeler 
Sent: Tuesday, December 04, 2012 5:21 PM
To: AGI 
Subject: [agi] Internal Representation


Jim: "If you are curious about my opinions on this I would try to explain it,"


Sure Jim, I'd like to know your thoughts on the subject. Perhaps I'm missing something.

My point is that we don't really need to know what's under the hood from an architectural perspective. The internal representation is an implementation detail, if you think of the larger functional processes as black boxes with specific inputs and outputs and well-defined behavior.
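To illustrate the black-box view in code (my sketch, not PM's; the class and method names are hypothetical), the architecture only fixes the I/O contract, and any internal representation that honors the contract can sit behind it:

```python
from abc import ABC, abstractmethod

class FunctionalProcess(ABC):
    """A black box: specific inputs, specific outputs, well-defined
    behavior. How it represents things internally is an
    implementation detail."""

    @abstractmethod
    def process(self, percept: dict) -> dict:
        """Map an input event to an output response."""

class StrawManProcess(FunctionalProcess):
    """One interchangeable implementation; its internals can be
    swapped freely as long as process() keeps the contract."""

    def __init__(self):
        self._memory = {}                 # private internal representation

    def process(self, percept: dict) -> dict:
        self._memory.update(percept)      # hide how input is stored...
        return {"known": sorted(self._memory)}   # ...expose only output

print(StrawManProcess().process({"water": "liquid"}))
```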

I have a straw man representation which I am experimenting with. If it's adequate, then that's all that is required. Basic experimentation will prove it out. If it fails, then we ascribe causes to the failure, modify the representation to avoid the failure, and try again. Simple iterative process. Call me naive.

The internal representation has to support certain requirements, assumptions, dependencies, and constraints. For me, the main criteria are as follows:


1. The representation needs to support activation.
2. The representation needs to support relationships (patterns among elements).
3. The representation needs to support reification. 


As long as the representation does that, I'm satisfied. 
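As one hypothetical straw man (my own sketch in Python, with invented names, not PM's actual design), a representation meeting all three criteria can be tiny: nodes carry an activation level, relations capture patterns among elements, and making a relation itself a node gives reification:

```python
class Node:
    """An element that supports activation (criterion 1)."""
    def __init__(self, label):
        self.label = label
        self.activation = 0.0

    def activate(self, amount=1.0, decay=0.5):
        self.activation = self.activation * decay + amount

class Relation(Node):
    """A pattern among elements (criterion 2). Because a Relation
    is itself a Node, it can be reified (criterion 3): treated as
    an element inside some higher-order relation."""
    def __init__(self, label, *args):
        super().__init__(label)
        self.args = args

# "water quenches thirst" as a relation...
quench = Relation("quenches", Node("water"), Node("thirst"))
# ...reified: the quenching fact itself participates in a relation.
belief = Relation("believes", Node("infant"), quench)
belief.activate()
```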

~PM



--------------------------------------------------------------------------------
Date: Tue, 4 Dec 2012 09:51:11 -0500
Subject: Re: [agi] Deb Roy: The Birth of a Word
From: [email protected]
To: [email protected]


PM: "For me knowing the brain's internal representation would be helpful, but is not necessary, as long as a program can mimic the output using its own internal representation. I can use my own straw man representation and see if that works. Any representation would do for me actually, as long as it gets results."
-----------------------------------------------------------

I have no idea why you would make a remark like this, but as I was trying to explain why it was wrong, I realized that the argument was a side issue, at least partly based on semantics, which is not very important. If you are curious about my opinions on this I would try to explain it, but since you probably aren't, I am just going to get back on track as quickly as I can.

We certainly could write programs that could learn individual words using an 
observe-interact-and-compare strategy.  The problem is that as knowledge grows, 
the possibilities of finding meaning and relevant actions for a particular IO 
event increase to the point that it becomes impossible to search through them 
all.

In other words, all evidence (or my intuition about the evidence that I have seen) points to the necessity of using an extensive (not exhaustive, but extensive) comparative method to look at possibilities for meaning and to find good reactions to an IO event. An AGI program cannot note every detail of an ongoing event and use that information to perfectly denote the meaning of the event, so it must rely on a search through possibilities. When you have extensive knowledge about uncountable combinations of possibilities that might be relevant to a situation, the program just cannot search through them all in a reasonable amount of time. And remember, the program has to be using some creativity as it searches through the possibilities, so some of the possibilities that it has to consider would be functionally imaginative.
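A toy calculation (my numbers, not Jim's) shows how fast that space outruns exhaustive search: with v candidate interpretations per observed feature and k salient features per IO event, the joint interpretations grow as v^k:

```python
# Toy illustration (hypothetical numbers): why the candidate space
# for interpreting an IO event outgrows any exhaustive search.
v = 50        # assumed interpretations per observed feature
for k in (2, 4, 8, 16):          # features noted in the event
    print(f"{k:>2} features -> {v**k:.2e} joint interpretations")

# Even at a billion evaluations per second, v**16 ≈ 1.5e27
# candidates would take roughly 5e10 years, so the search must be
# extensive (heuristic, pruned) rather than exhaustive.
```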

Your (would-be) AGI program can learn first words much faster than a baby. The problem is that we don't have any good strategies for producing more complex levels of recognition and reaction that can be used effectively. Perhaps I am wrong about this, and perhaps I do have a good strategy in mind that might actually work to some degree. It is just that I don't feel that is too likely. But maybe I should try some of my ideas out just to see what happens.

Jim





On Tue, Dec 4, 2012 at 2:50 AM, Piaget Modeler <[email protected]> wrote:

  The way I view it these days is that a particular set of schemes (or solutions as I call them) are activated and differentiated over this time period: the period it takes for "gaa" to transform into "water" during sessions of primary circular reactions (the infant hearing his own voice and deciding to have it match his caregiver's pronunciation) or secondary circular reactions (the infant getting the caregiver to say "water").
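A minimal sketch of that primary circular-reaction loop (toy strings and a hypothetical mutate-compare-keep update rule, purely illustrative, not PM's model): babble, compare the result to the caregiver's pronunciation, and keep whatever variation sounds closer.

```python
import random

def distance(a: str, b: str) -> int:
    """Crude mismatch count standing in for auditory similarity."""
    return sum(x != y for x, y in zip(a.ljust(len(b)), b.ljust(len(a))))

def circular_reaction(babble: str, target: str,
                      letters="abcdefghijklmnopqrstuvwxyz "):
    """Repeat, vary, and compare until the babble matches the target."""
    current = babble.ljust(len(target))
    while current != target:
        i = random.randrange(len(target))
        variant = current[:i] + random.choice(letters) + current[i+1:]
        if distance(variant, target) < distance(current, target):
            current = variant            # the closer attempt is kept
    return current

print(circular_reaction("gaa", "water"))  # 'gaa' -> ... -> 'water'
```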


  For me knowing the brain's internal representation would be helpful, but is not necessary, as long as a program can mimic the output using its own internal representation. I can use my own straw man representation and see if that works. Any representation would do for me actually, as long as it gets results.


  ~PM


