Hey Phil,

If I understand you correctly, I think you're very right.  The
information we have about the world is behavior and appearances, and for
most interesting things the mechanism is completely hidden from us.  We
can observe inputs and outputs, but not the source code.  We can see
fuel go in and motion come out, but can't see the engine, let alone
anything else.

Perhaps the core of intelligence is coming up with models of the world
and exploiting them.  That's a view that's right up my alley.

But say that to most AI researchers, and they'll stare at you
uncomprehendingly.  They want a well-defined problem, such as using all
users' purchases at Amazon to suggest other purchases to a single user.
And they'll come up with an algorithm that makes good suggestions
most of the time.  The idea that the computer should be trying to make
sense of the world -- eh?  What are you talking about?  Or maybe "oh,
that's that flaky research from the 60s and 70s.  We've moved beyond
that."  I have a friend who does research in believable virtual
characters, and he gets that.
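To make the contrast concrete, that kind of well-defined problem fits in a
few lines of code.  Here's a toy sketch of co-purchase recommendation --
all the data and names are made up, and this has nothing to do with how
Amazon actually does it:

```python
from collections import defaultdict

# Toy purchase histories: user -> set of items bought (invented data).
purchases = {
    "alice": {"book", "lamp", "kettle"},
    "bob":   {"book", "kettle", "mug"},
    "carol": {"lamp", "mug"},
}

def co_purchase_counts(purchases):
    """Count how often each ordered pair of items shares a basket."""
    counts = defaultdict(int)
    for items in purchases.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def suggest(user, purchases, top_n=3):
    """Rank items the user hasn't bought by co-purchase frequency."""
    counts = co_purchase_counts(purchases)
    owned = purchases[user]
    scores = defaultdict(int)
    for item in owned:
        for (a, b), c in counts.items():
            if a == item and b not in owned:
                scores[b] += c
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(suggest("carol", purchases))
```

No model of the world anywhere in there -- just counting which items tend
to appear in the same basket.  That's the whole point: it works well most
of the time without "making sense" of anything.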

Best,
Martin

Phil Henshaw wrote:
> Got it!   But it's like making your way through a maze by running into
> walls.  There's no point in being disappointed and just sitting down
> when confronted by them.   I think locating the walls helps, i.e.
> finding the barriers and disconnects in our thinking.   I've been
> focused on one in particular, the lack of any working theoretical model
> of things organized from the inside.  I think that's where the start may
> be.  We all suffer from a core intellectual deficit on that account, to
> quote another post:
> 
> "I think it comes from the biological human view of the world.  The
> basic structure of thinking comes from our being 'observers', locked up
> inside a brain, each of us reconstructing an imaginary model of the
> world around us from our own observations and experiences.  That's a
> problematic viewpoint for relating to any other thing built the same
> way, i.e. organized from the inside.  What's going on inside other
> things is invisible from the outside, and our [brain] builds its whole
> world view from an outside perspective!!   Given that handicap, it's
> quite natural for there to be more than one might guess missing from our
> awareness."
> 
> "...The theoretical sciences don't even have an image of anything
> organized from the inside!  That part of the world is invisible to us
> and so we're structurally unaware of the internally organized systems
> we're part of and that surround us.  It's ridiculous to work with a world
> composed of several billion original, different and faulty universes,
> but I think we're stuck with it and should try poking around to see what
> other surprises there may be!  :)"
> 
> make any sense?
>  
>> Phil Henshaw wrote:
>>> I was curious about the film you were talking about, "Mind in the
>>> Machine", and Googled it, coming across several things including its
>>> origin and a simple statement by an Australian journalist (quoted
>>> below) of Turing's idea of the test one would apply to measure
>>> success in reproducing intelligence.
>>>
>>> I read the statement as saying if you're able to imitate something by
>>> some other means (say behaviors of people by computers), in a way that
>>> an observer doesn't notice the discrepancy, you've made the real
>>> thing.  I expect that's not quite accurate, and the current thinking
>>> has evolved.  Can anyone say where the concept is headed?
>> The field of Artificial Intelligence no longer talks at all about 
>> general intelligence, the human mind, or anything like that.  
>> The lone exception might be the natural language community, who of
>> course are trying to replicate something human-specific.  But they
>> still don't talk about "human equivalence" or anything like that.
>>
>> After the hype for AI in the 60s and 70s, there was a backlash in the 
>> 80s.  Kind of what happened to ideas like "virtual reality" or "dot 
>> com."  In search of respectability, AI has become largely applied 
>> statistics and focused on near term results.
>>
>> For someone like me who wants to explore principles and methods that
>> point the way to full intelligence, this is all very depressing.  Like
>> wanting to study cognitive psychology during behaviorism.
>>
>> Best,
>> Martin
>>
>>
> 
> 


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
