There isn't an algorithm. It's basically a matter of overlaying shapes to see 
if they fit - much as you put one hand against another to see if they fit, or 
overlay a hand to see if it fits and is capable of grasping an object - except 
considerably more fluid/rougher. There has to be some instruction generating 
the process, but it's not an algorithm. How can you have an algorithm for 
recognizing amoebas - or rocks, or a drop of water? They are not patterned 
entities - nor, by extension, reducible to algorithms. You don't need to think 
too much about internal visual processes - you can just look at the external 
objects-to-be-classified, the objects that make up this world, and see this. 
Just as you can look at a set of diverse "patterns" and see that they too are 
not reducible to any single formula/pattern/algorithm. We're talking about the 
fundamental structure of the universe and its contents. If this is right and 
"God is an artist" before he is a mathematician, then it won't do any good 
screaming about it; you're going to have to invent a way to do art, so to 
speak, on computers. Or you can pretend that dealing with mathematical squares 
will somehow help here - but it hasn't and won't.

Do you think that a creative process like creating 

http://www.apocalyptic-theories.com/gallery/lastjudge/bosch.jpg

started with an algorithm?  There are other ways of solving problems than 
algorithms - the person who created each algorithm in the first place certainly 
didn't have one. 

From: David Jones 
Sent: Friday, July 09, 2010 4:20 PM
To: agi 
Subject: Re: [agi] Re: Huge Progress on the Core of AGI


Mike, 

Please outline your algorithm for fluid schemas, though. It will be clear when 
you do that you are faced with exactly the same uncertainty problems I am 
dealing with and trying to solve. The problems are completely equivalent. 
Yours is just a specific approach that is not sufficiently defined.

You have to define how you deal with uncertainty when using fluid schemas, or 
at least how to approach the task of figuring it out. Until then, it's not a 
solution to anything. 

Dave


On Fri, Jul 9, 2010 at 10:59 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

  If fluid schemas - speaking broadly - are what is needed (and I'm pretty 
sure they are), it's no good trying for something else. You can't substitute a 
"square" approach for a "fluid amoeba outline" approach. (And you will 
certainly need exactly such an approach to recognize amoebas.)

  If it requires a new kind of machine, or a radically new kind of instruction 
set for computers, then that's what it requires. Stan Franklin, BTW, is one 
person who does recognize this problem and is trying to deal with it - might 
be worth checking up on him.

  This is partly, BTW, why my instinct is that it may be better to start with 
tasks for robot hands*, because it should be possible to get them to apply a 
relatively flexible and fluid grip/handshape and grope for and experiment with 
differently shaped objects. And if you accept the broad philosophy I've been 
outlining, then it does make sense that evolution should have started with 
touch as a more primary sense, well before it got to vision. 

  *Or perhaps it may prove better to start with robot snakes/bodies or somesuch.


  From: David Jones 
  Sent: Friday, July 09, 2010 3:22 PM
  To: agi 
  Subject: Re: [agi] Re: Huge Progress on the Core of AGI





  On Fri, Jul 9, 2010 at 10:04 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

    Couple of quick comments (I'm still thinking about all this - but I'm 
confident everything AGI links up here).

    A fluid schema is arguably by its v. nature a method - a trial and error, 
arguably universal method. It links vision to the hand or any effector. 
Handling objects also is based on fluid schemas - you put out a fluid 
adjustably-shaped hand to grasp things. And even if you don't have hands, like 
a worm, and must grasp things with your body, and must "grasp" the ground under 
which you move, then too you must use fluid body schemas/maps.

    All concepts - the basis of language and before language, all intelligence 
- are also almost certainly fluid schemas (and not as you suggested, patterns).

  Fluid schemas are not an actual algorithm. It is not clear how to go about 
implementing such a design. Even so, when you get into the details of actually 
implementing it, you will find yourself faced with exactly the same problems 
I'm trying to solve. So, let's say you take the first frame and generate an 
initial "fluid schema". What if an object disappears? What if the object 
changes? What if the object moves a little, or a lot? What if a large number 
of changes occur at once, like one new thing suddenly blocking a bunch of 
similar stuff behind it? How far does your "fluid schema" have to be distorted 
before the algorithm realizes that it needs a new schema and can't use the 
same old one? You can't just say that all objects are always present and just 
distort the schema. What if two similar objects appear, or both move and one 
disappears? How does your schema handle this? Regardless of whether you talk 
about hypotheses or schemas, it is the SAME problem. You can't avoid the fact 
that the whole thing is underdetermined, and you need a way to score and 
compare hypotheses. 
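To make concrete what "score and compare hypotheses" could mean here, the 
following is a toy sketch, not anyone's actual method from this thread; the 
object representation, the penalty weights, and the disappearance penalty are 
all invented purely for illustration:

```python
# Toy hypothesis scoring for frame-to-frame object matching.
# Objects are (x, y, size) tuples. A "hypothesis" pairs an old object with
# a candidate in the new frame, or with None, meaning it disappeared.
# All weights below are arbitrary, chosen only to illustrate the idea.

def score(old, new):
    """Higher is better: penalize movement and change in size."""
    if new is None:
        return -5.0  # fixed penalty for the 'object disappeared' hypothesis
    dist = ((old[0] - new[0]) ** 2 + (old[1] - new[1]) ** 2) ** 0.5
    size_change = abs(old[2] - new[2])
    return -dist - 2.0 * size_change

def best_match(old, candidates):
    """Compare all competing hypotheses and keep the highest-scoring one."""
    hypotheses = list(candidates) + [None]
    return max(hypotheses, key=lambda c: score(old, c))

old_obj = (10.0, 10.0, 4.0)
frame2 = [(11.0, 10.5, 4.0), (40.0, 40.0, 4.0)]
print(best_match(old_obj, frame2))  # the nearby candidate wins
```

Note that "disappeared" is just another hypothesis with its own score, so a 
candidate that is far enough away loses to it - which is one answer to the 
"what if an object disappears?" question above.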

  If you disagree, please define your schema algorithm a bit more specifically. 
Then we would be able to analyze its pros and cons better.
   

    All creative problemsolving begins from concepts of what you want to do  
(and not formulae or algorithms as in rational problemsolving). Any suggestion 
to the contrary will not, I suggest, bear the slightest serious examination.

  Sure.  I would point out though that children do stuff just to learn in the 
beginning. A good example is our desire to play. Playing is a strategy by which 
children learn new things even though they don't have a need for those things 
yet. It motivates us to learn for the future and not for any pressing present 
needs. 

  No matter how you look at it, you will need "algorithms" for general 
intelligence. To say otherwise makes zero sense. No algorithms, no design. No 
matter what design you come up with, I call that an algorithm. Algorithms 
don't have to be "formulaic" or narrow. Keep an open mind about the word 
"algorithm", unless you can suggest a better term to describe general AI 
algorithms.



    **Fluid schemas/concepts/fluid outlines are attempts-to-grasp-things - 
"gropings".**         

    Point 2 : I'd relook at your assumptions in all your musings  - my 
impression is they all assume, unwittingly, an *adult* POV - the view of s.o. 
who already knows how to see - as distinct from an infant who is just learning 
to see and "get to grips with" an extremely blurred world, (even more blurred 
and confusing, I wouldn't be surprised, than that Prakash video). You're 
unwittingly employing top down, fully-formed-intelligence assumptions even 
while overtly trying to produce a learning system - you're looking for what an 
adult wants to know, rather than what an infant 
starting-from-almost-no-knowledge-of-the-world wants to know.

    If you accept the point in any way, major philosophical rethinking is 
required.

  This point doesn't really define at all how the approach should be changed 
or what approach to take, so it doesn't change the way I approach the problem. 
You would really have to be more specific. For example, you could say that the 
infant doesn't even know how to group pixels, so it has to learn that 
automatically. I would have to disagree with this approach because I can't 
think of any reasonable algorithms that could explore the possibilities. It 
doesn't seem better to me to describe the problem even more generally, to the 
point where you are learning how to learn. This is what Abram was suggesting. 
But, as I said to him, you need a way to suggest and search for possible 
learning methods and then compare them. There doesn't seem to be a way to do 
this effectively, and so you shouldn't over-generalize in this way. As I said 
in the initial email (this week), there is no such thing as perfectly general, 
and no silver bullet for solving any problem. So, I believe that even infants 
are born expecting what the world will be like. They aren't able to learn 
about just any world. They are optimized to configure their brains for this 
world. 





    From: David Jones 
    Sent: Friday, July 09, 2010 1:56 PM
    To: agi 
    Subject: Re: [agi] Re: Huge Progress on the Core of AGI


    Mike,


    On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

      Isn't the first problem simply to differentiate the objects in a scene? 

    Well, that is part of the movement problem. If you say something moved, you 
are also saying that the objects in the two or more video frames are the same 
instance.
     
      (Maybe the most important movement to begin with is not  the movement of 
the object, but of the viewer changing their POV if only slightly  - wh. won't 
be a factor if you're "looking" at a screen)

    Maybe, but this problem becomes kind of trivial in a 2D environment, 
assuming you don't allow rotation of the POV. Moving the POV would simply 
translate all the objects linearly. If you make it a 3D environment, it becomes 
significantly more complicated. I could work on 3D, which I will, but I'm not 
sure I should start there. I probably should consider it though and see what 
complications it adds to the problem and how they might be solved.
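One way to see why a translating POV is nearly trivial in 2D: every object 
shifts by the same offset, so a single global displacement explains all the 
apparent motion. A minimal sketch, assuming points are already matched across 
frames (the function name and the use of a median are my own invention for 
illustration):

```python
# Sketch: a 2D POV translation shifts every matched point by the same
# (dx, dy). Estimate that offset and subtract it out. Rotation or a 3D
# scene would break this simple model.
from statistics import median

def estimate_pov_shift(frame1_pts, frame2_pts):
    """Median per-point displacement; the median makes the estimate robust
    to a few objects that moved on their own."""
    dxs = [b[0] - a[0] for a, b in zip(frame1_pts, frame2_pts)]
    dys = [b[1] - a[1] for a, b in zip(frame1_pts, frame2_pts)]
    return median(dxs), median(dys)

f1 = [(0, 0), (5, 5), (10, 0)]
f2 = [(2, 3), (7, 8), (12, 3)]   # whole scene shifted by (2, 3)
print(estimate_pov_shift(f1, f2))  # (2, 3)
```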
     
      And that I presume comes down to being able to put a crude, highly 
tentative, and fluid outline round them (something that won't be necessary if 
you're dealing with squares?) - while knowing v. little if anything about what 
kind of objects they are. As an infant most likely does. (See infants' 
drawings and how they evolve v. gradually from a v. crude outline blob that at 
first can represent anything - that, I'm suggesting, is a "replay" of how 
visual perception developed.)

      The fluid outline or image schema is arguably the basis of all 
intelligence - just about everything AGI is based on it.  You need an outline 
for instance not just of objects, but of where you're going, and what you're 
going to try and do - if you want to survive in the real world.  Schemas 
connect everything AGI.

      And it's not a matter of choice - first you have to have an outline/sense 
of the whole - whatever it is -  before you can start filling in the parts.


    Well, this is the question. The solution is underdetermined, which means 
that the right solution cannot be known with complete certainty. So, you may 
take the approach of using contours to match objects, but that is certainly 
not the only way to approach the problem. Yes, you have to use local features 
in the image to group pixels together in some way. I agree with you there.  

    Is using contours the right way? Maybe, but not by itself. You have to 
define the problem a little better than just saying that we need to construct 
an outline. The real problem/question is this: "How do you determine the 
uncertainty of a hypothesis, lower it and also determine how good a hypothesis 
is, especially in comparison to other hypotheses?" 

    So, in this case, we are trying to use an outline comparison to determine 
the best-match hypotheses between objects. But that doesn't define how you 
score alternative hypotheses. And it is certainly not the only way to do it. 
You could use the details within the outline too. In fact, in some situations, 
this would be required to disambiguate between the possible hypotheses.  
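As a toy illustration of why interior detail can be needed: two candidates 
with identical outlines are tied until an interior feature breaks the tie. 
The features (a perimeter stand-in and a mean-intensity stand-in) and the 
weights are invented for this sketch only:

```python
# Sketch: combine outline similarity with interior similarity into one
# hypothesis score. With identical outlines, only the interior term can
# disambiguate. All features and weights are illustrative inventions.

def combined_score(target, candidate, w_outline=1.0, w_interior=1.0):
    """Each object is a dict with 'outline' (perimeter-length stand-in)
    and 'interior' (mean-intensity stand-in). Higher score is better."""
    outline_diff = abs(target["outline"] - candidate["outline"])
    interior_diff = abs(target["interior"] - candidate["interior"])
    return -(w_outline * outline_diff + w_interior * interior_diff)

target = {"outline": 12.0, "interior": 0.8}
a = {"outline": 12.0, "interior": 0.2}   # same outline, wrong interior
b = {"outline": 12.0, "interior": 0.75}  # same outline, right interior
print(max([a, b], key=lambda c: combined_score(target, c)))  # picks b
```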



      P.S. It would be mindblowingly foolish BTW to think you can do better 
than the way an infant learns to see - that's an awfully big visual section of 
the brain there, and it works.

    I'm not trying to "do better" than the human brain. I am trying to solve 
the same problems that the brain solves, in a different way - sometimes better 
than the brain, sometimes worse, sometimes equivalently. What would be foolish 
is to assume the only way to duplicate general intelligence is to copy the 
human brain. By taking that approach, you are forced to reverse engineer and 
understand something that is extremely difficult to reverse engineer. In 
addition, a solution that uses the brain's design may not be economically 
feasible. So, approaching the problem by copying the human brain has 
additional risks. You may end up figuring out how the brain works and not be 
able to use it. You also might not end up with a good understanding of what 
other solutions might be possible.
     
    Dave



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
