You're right, of course. You advised me not to get sucked in, and then I passed on the wisdom to someone else, but here I am again, trying to communicate with the unwilling. I'm not gathering requirements, just debating a poorly argued point. For some reason, correcting ignorance is irresistible to many of us, like the call of a Siren. Maybe it's because so many of us on this list have a problem-solving mindset, and ignorance is a problem that's normally easy to solve -- just add knowledge. But when knowledge isn't received despite the clear transmission, here we go again...
On Tue, Oct 30, 2012 at 2:04 PM, Piaget Modeler <[email protected]> wrote:
>
> Aaron,
>
> The problem you're encountering is that Mike Tintner can only specify
> requirements, and cannot evaluate design.
>
> The moment you say your design meets his requirements, he has no basis
> for evaluating whether or not that's true, and will only be satisfied
> with a working system as proof. Gather more requirements from him if
> you must, but I'd even say that his requirements may be unnecessary.
>
> ~PM.
>
> ------------------------------
> Date: Tue, 30 Oct 2012 13:15:08 -0500
> From: [email protected]
> To: [email protected]
> Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]
>
> So in other words, the system needs to identify the problem and its
> solution space, not just the solution, and you think we're taking the
> problem & solution space for granted. While your point is correct, I
> don't think it's a valid assumption that you're the only one here who
> sees it that way.
>
> -- Sent from my Palm Pre
>
> ------------------------------
> On Oct 30, 2012 10:35 AM, Mike Tintner <[email protected]> wrote:
>
> Aaron,
>
> You're being too literal -- AFAIK all AGI projects involve systems that
> learn -- that's a given. What I'm saying is that if you examine the
> systems, you will find that the foundations and framework of learning
> are set by the designer -- so the system knows what it needs to
> know/learn. In that sense it has full knowledge.
>
> Real AGIs have to learn what to learn as they go along -- and gradually
> build and change paradigms of their activities -- and have to select
> from conflicting, competing paradigms. That's what you're doing now as
> you decide how to prosecute building an AGI system; that's what you did
> as you decided how to have sex, or how to play football, or have a
> conversation.
>
> I am tempted here -- off the top of my head -- to draw a crude analogy
> with vision.
> All AI vision systems AFAIK assume a passive retina that is imprinted
> with information and that passively and automatically processes it. But
> the reality of real AGI vision is that you also have a fovea, which has
> continuously to decide what parts of an image to look at -- and you
> keep learning how to look at things throughout your life -- what
> "points of view" to assume. This whole top layer is missing from
> AI-ers' thinking AFAIK.
>
> From: Aaron Hosford <[email protected]>
> Sent: Tuesday, October 30, 2012 3:15 PM
> To: AGI <[email protected]>
> Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]
>
> You consistently overgeneralize, Mike. *All* AGIers do this. *All*
> AGIers fail to do that. If we all did things the same way, there would
> be no point to this list. Your opinion as to what needs to be done (and
> the reasons for it), when stated clearly, is welcome, but your
> overgeneralizations and assumptions about our failure to do things you
> deem necessary, and your demands that we change just because you think
> we should, tend to receive negative responses. I'm not sure you've
> noticed or care, but I'm throwing it out there in the hope that it's
> the former.
>
> Please stop assuming that you personally have full insight into what we
> are doing or how we are doing it, what our plans entail, or how limited
> we are in our ability to flex as the need arises. Engineering and
> design move forward in fits and starts. Progress halts when a new,
> unsolved aspect of the problem crops up, and then leaps forward when a
> solution is found. This is happening for each of us who is actually
> doing anything with our ideas. If and when any of us runs into a
> problem that can only be solved a certain way, we will stop and
> reconsider until we have found that way. Saying that we are all
> incapable of seeing the light (unlike yourself) is an insult to our
> intelligence and intellectual integrity. We are not blind.
> You simply haven't convinced us, and likely won't until convincing
> proof arises, which is only a matter of time if you're right. At that
> point, should it occur, most of us will modify our designs accordingly
> and continue about our business as before, just as we've done with
> countless other issues that have cropped up. Possibly a temporary
> challenge, but in the end, no big deal.
>
> In my own system -- and I'm sure I'm not alone -- fluidity is a core
> design principle, whether or not you see that. I specifically do *not*
> assume a "full knowledge/fully developed mind". I consider learning
> from experience to be fundamental to all intelligence. So when you make
> these broad, sweeping statements about all AGIers and all AGI projects,
> you are making patently false statements about things you personally
> know very little about. You have not seen my code. You haven't paid
> attention to my design. And you have no idea what's in my head or how I
> approach things, because you don't understand what I'm saying when I
> explain things to you. What you personally and specifically are doing
> is assuming your *own* "full knowledge", as if you have perfect insight
> and complete information about each and every one of us, which quite
> honestly is ridiculous. *This is why you get negative responses when
> you aren't outright ignored.*
>
> On Tue, Oct 30, 2012 at 5:16 AM, Mike Tintner <[email protected]> wrote:
>
> Mike A:
>
> All of Mike T's arguments seem to me to stem from a standpoint of
> extreme empiricism. He doesn't seem to acknowledge anything other than
> precisely what is under consideration. Even though a chair top can look
> different in all cases, in all cases there IS a constant, and that is
> that the essence of a chair persists. Philosophers have long fought
> with these issues, and as most know it was Kant who came closest
> (arguably) to reconciling the empiricists and the rationalists.
>
> No, I'm not a pure empiricist.
> (The philosophical/psychological background is loosely relevant --
> recent comments seem unaware that this is one of the most controversial
> areas.)
>
> The difference is indeed about rationality -- about what *kind* of
> schema/classificatory devices the mind (human or any real-world mind)
> must impose on its images of objects. Rationality -- and everyone here,
> except for me, is in effect a rationalist -- presupposes a CONSTANT
> schema -- just as you have said, and just as Plato implied 2,500 years
> ago. That's because you are still intellectually living in the age of
> text, where everything you see is constant and unchanging.
>
> Move into the new millennium of movies, which are now a sine qua non,
> and you realise that everything is FLUID/MOVING -- and different
> individual versions of things are different from (and in effect fluid
> versions of) others.
>
> There is no constant, essential waterdrop or human being, or chair or
> apple -- especially in a world in which all things may be, and usually
> are, transformed by external means in all kinds of ways -- like being
> stepped on, smashed, burned or fragmented. If you just look, that lack
> of a constant is self-evident. But you don't look -- you a priori seek
> to impose the constant frameworks of language, maths and logic on a
> fluid world -- determined to defend them to the death -- despite the
> fact that they are obviously a complete, never-failing-to-fail bust for
> conceptualisation/recognition and anything AGI.
>
> For a fluid, transformational world and objects, you need fluid,
> transformational schemas -- but there is nothing about them in the
> "languages" you know, and you're not open to new ideas.
>
> Fluid schemas are doubly essential because -- the other thing that
> everyone here forgets -- an AGI of any kind must get to know and
> classify objects *piecemeal/gradually*, developmentally. The first
> chair or dog you see may not be at all a typical or common one.
> All the current approaches to AGI assume a *full knowledge/fully
> developed mind* -- with well-structured concept graphs and a fully
> developed grammar -- which has in effect already learned more or less
> all it really needs to know. Quite, quite absurd. Every approach in the
> field is only appropriate to a fully knowledgeable narrow-AI
> routine/subsystem, not to a real-world AGI: a complete system
> gradually, fluidly getting to know the world.

-------------------------------------------
AGI | Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
