Martin,

> Hey Phil,
>
> If I understand you correctly, I think you're very right. The
> information we have about the world is behavior and appearances, and
> for most interesting things the mechanism is completely hidden from
> us. We can observe inputs and outputs, but not the source code. We
> can see fuel go in and motion come out, but can't see the engine, let
> alone anything else.
The trickiest piece is proving, in a comprehensible and comprehensive way, that anything has any actual inside structure, largely invisible from the outside. Science is built on the idea of control, after all, and discovering such structure almost asks science to violate its own principles. It's a genuine conundrum, and my original path to figuring it out TMS (to my satisfaction) was by trying to make sense of all the stuff approximation leaves out, among other things. Now I take it simply that loops of intermittent relationships are largely invisible except to dogged cross-checking, and are to be found ALL over the place. All kinds of real things seem to grow from them, and you can circumscribe their growth with a boundary you can be confident their event horizon lies within: QED, 'built and run from inside'.

Now there are perhaps lots of things a computer environment might allow to be built from the inside... even if there are deep differences in the interactions of numbers and things. I think it's early yet to say whether autonomous things growing in computers will be of any use, but that's how nature seems to do it. That's part of what I was suggesting a while ago when I asked whether anyone had tried 'composting' as part of artificial environments.

> Perhaps the core of intelligence is coming up with models of the
> world and exploiting them. That's a view that's right up my alley.
>
> But say that to most AI researchers, and they'll stare at you
> uncomprehendingly. They want a well-defined problem, such as using
> all users' purchases at Amazon to suggest other purchases for a
> single user. And they'll come up with an algorithm that makes good
> suggestions most of the time. The idea that the computer should be
> trying to make sense of the world -- eh? What are you talking about?
> Or maybe, "Oh, that's that flaky research from the 60s and 70s. We've
> moved beyond that." I have a friend who does research in believable
> virtual characters, and he gets that.
>
> Best,
> Martin
>
> Phil Henshaw wrote:
> > Got it! But it's like making your way through a maze by running
> > into walls. There's no point in being disappointed and just sitting
> > down when confronted by them. I think locating the walls helps,
> > i.e. finding the barriers and disconnects in our thinking. I've
> > been focused on one in particular: the lack of any working
> > theoretical model of things organized from the inside. I think
> > that's where the start may be. We all suffer from a core
> > intellectual deficit on that account, to quote another post:
> >
> > "I think it comes from the biological human view of the world. The
> > basic structure of thinking comes from our being 'observers',
> > locked up inside a brain, each of us reconstructing an imaginary
> > model of the world around us from our own observations and
> > experiences. That's a problematic viewpoint for relating to any
> > other thing built the same way, i.e. organized from the inside.
> > What's going on inside other things is invisible from the outside,
> > and our [brain] builds its whole world view from an outside
> > perspective!! Given that handicap, it's quite natural for there to
> > be more than one might guess missing from our awareness."
> >
> > "...The theoretical sciences don't even have an image of anything
> > organized from the inside! That part of the world is invisible to
> > us, and so we're structurally unaware of the internally organized
> > systems we're part of and that surround us. It's ridiculous to work
> > with a world composed of several billion original, different and
> > faulty universes, but I think we're stuck with it and should try
> > poking around to see what other surprises there may be! :)"
> >
> > Make any sense?
> >
> >> Phil Henshaw wrote:
> >>> I was curious about the film you were talking about, "Mind in the
> >>> Machine", and Googled it, coming across several things including
> >>> its origin and a simple statement by an Australian journalist
> >>> (quoted below) of Turing's idea of the test one would apply to
> >>> measure success in reproducing intelligence.
> >>>
> >>> I read the statement as saying that if you're able to imitate
> >>> something by some other means (say, behaviors of people by
> >>> computers) in a way that an observer doesn't notice the
> >>> discrepancy, you've made the real thing. I expect that's not
> >>> quite accurate and that the current thinking has evolved. Can
> >>> anyone say where the concept is headed?
> >>
> >> The field of Artificial Intelligence no longer talks at all about
> >> general intelligence, the human mind, or anything like that. The
> >> lone exception might be the natural language community, who of
> >> course are trying to replicate something human-specific. But they
> >> still don't talk about "human equivalence" or anything like that.
> >>
> >> After the hype for AI in the 60s and 70s, there was a backlash in
> >> the 80s. Kind of what happened to ideas like "virtual reality" or
> >> "dot com." In search of respectability, AI has become largely
> >> applied statistics, focused on near-term results.
> >>
> >> For someone like me who wants to explore principles and methods
> >> that point the way to full intelligence, this is all very
> >> depressing. Like wanting to study cognitive psychology during
> >> behaviorism.
> >>
> >> Best,
> >> Martin

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
