On 1/8/15, Jim Bromer <[email protected]> wrote:
> I think there are a lot of ways to make good models (or good working
> models) of the (IO) world. (By the way, were you, or was your
> professor making a joke?)
>
Yes, a joke meant to illustrate what happens if you take an extreme empiricist attitude. You have to assume the physical laws valid yesterday hold today, but there is no such guarantee. So an extreme empiricist would not hard-code such laws into his program! No proof the laws hold up... My point is only that you have to commit at some point to some stability, some rational order.

> There are ways we can make projections about things that we have no
> experience with (and even for events that no one has experience with).
> There are a few ways that we can be confident that AGI is possible. We
> can look at the advancements of computers and programming during the
> past 65 years and conclude that there is overwhelming evidence that
> computers are figuring a lot of things out. They are being
> successfully used in a vast number of fields to do work that only
> human beings were capable of doing. In fact, they are doing jobs that
> human beings are not capable of. And we can look at other kinds of
> technology and say that the amazing advancements made in those fields
> clearly indicate that AGI is feasible and within reach. It is only
> necessary to show that there is no reason to believe that AGI is
> something that is totally beyond the scope of contemporary technology.
> (Travelling to different galaxies through wormholes does not meet that
> criterion because there is no history of advancement in the field and
> it has the earmarks of science fantasy.)
> Jim Bromer
>
>
> On Thu, Jan 8, 2015 at 6:13 PM, Mike Archbold via AGI <[email protected]> wrote:
>> A lot of this argument sounds like the legendary
>> rationalist/empiricist debate. Extreme empiricists acknowledge only
>> the mere simultaneity of events, for example, and it doesn't go any
>> deeper than that. There is no such thing as cause; that is a mental
>> creation only. I took only one upper-level philosophy course in
>> college. It was basically centered on this debate.
>> I remember the professor saying that just because, historically, when
>> you walk off the top of a building you fall to your death, that does
>> not mean it will happen the next time. No proof! The laws could
>> suddenly change!
>>
>> Basically, I think to advance anywhere in AGI you have to regard some
>> things under the rationalist school; I mean, make certain assumptions!
>>
>> Mike A
>>
>> On 1/8/15, Jim Bromer via AGI <[email protected]> wrote:
>>> In olden days there was a distinction between analogical reasoning
>>> and reasoning by similarity. To people, objects usually seem to have
>>> strong similarities and dissimilarities. But you could look at two
>>> dissimilar 'systems' and find that there were systematic
>>> similarities. That would constitute reasoning by analogy. However,
>>> the problem that seems to be encountered in AI, especially something
>>> like text-based AI, is that you start questioning which objects (of
>>> reference) really can stand as basics (or elements) and which stand
>>> as relational or derived properties. This leads me to believe that
>>> the only way AGI is going to work is by basing it on relativist
>>> theories. The conclusion, then, is that the fundamental referent
>>> objects, and the relations between them, can change relative to a
>>> point of view. Then we need to rely on the strength of the structures
>>> between objects of thought in order to see how reasonable our points
>>> of view are. This doesn't mean that there is no such thing as a real
>>> world, but rather that our way of thinking about it has to hold up
>>> from different (reasonable) vantages, even if that means the derived
>>> systems of thought might stand (at least in some cases) as the
>>> fundamentals in our insight.
>>> Another way of looking at this is that the elements of a thought can
>>> be seen as complicated systems which may be composed of relations
>>> that go beyond the similarities between the elements. So in order to
>>> learn more about rocks you have to start realizing that they are
>>> systems of minerals and atoms, and that the interactions of the atoms
>>> (for example) may have a lot in common with objects that are not
>>> rocks. This seems so mundane that a lot of people don't understand
>>> what I am talking about. If you are not interested in categorization
>>> by similarity then you are probably not getting this.
>>>
>>> Jim Bromer
>>>
>>> On Thu, Jan 8, 2015 at 10:59 AM, Jim Bromer <[email protected]> wrote:
>>>
>>>> Piaget Modeler via AGI <[email protected]> wrote:
>>>>
>>>>> Bipin Indurkhya would take it one step further and say, we don't
>>>>> just find relationships, we create them. He explains this in his
>>>>> book *Metaphor and Cognition*.
>>>>
>>>> Well, I just took it a few steps further and said that we have to
>>>> find -reasons- for a relation. This can be done with conjecture, for
>>>> example (or from 'education'), but the reason has to fit in with the
>>>> parts. Correlation or association might be a starting point, but
>>>> then there has to be some kind of 'story' which makes sense. Of
>>>> course this process often does underlie metaphor, and metaphor can
>>>> be introduced as a method of explanation, but it also can be based
>>>> on substantive similarities. And I also believe that the imagination
>>>> is an important part of understanding and that without it insight
>>>> would be impossible. I feel that the emphasis on metaphor as if it
>>>> were the only method to produce insight-like correlation is
>>>> old-school.
>>>> Jim Bromer
>>>>
>>>> On Thu, Jan 8, 2015 at 10:28 AM, Piaget Modeler via AGI <[email protected]> wrote:
>>>>
>>>>> Bipin Indurkhya would take it one step further and say, we don't
>>>>> just find relationships, we create them. He explains this in his
>>>>> book *Metaphor and Cognition*.
>>>>>
>>>>> See:
>>>>> http://www.amazon.com/Metaphor-Cognition-Interactionist-Approach-Cognitive/dp/0792316878
>>>>>
>>>>> Piaget would also agree that the relationships are constructed
>>>>> rather than detected.
>>>>>
>>>>> ~PM
>>>>>
>>>>> > Date: Thu, 8 Jan 2015 09:42:17 -0500
>>>>> > Subject: [agi] Coherent Knowledge and Reason Based Reasoning
>>>>> > From: [email protected]
>>>>> > To: [email protected]
>>>>> >
>>>>> > I believe that a (moderate) coherentist approach makes sense. You
>>>>> > can use logic, correlation, abstraction, synthesis,
>>>>> > generalization, specification, probability, and conjecture across
>>>>> > the conceptual objects of the system. But when some objects of
>>>>> > interest are found to be related, I think there should be an
>>>>> > attempt to find out why or how they are related. I feel that mere
>>>>> > association or correlation is not enough to act as a basis for
>>>>> > AGI. The program has to search for reason-based reasoning as
>>>>> > well. If a reason can't be found, or the observations do not
>>>>> > stand out, then association or correlation is adequate, but the
>>>>> > idea that association or correlation is substantial as a basis
>>>>> > for knowledge just does not seem right to me.
>>>>> > Jim Bromer
>>>>> >
>>>>> > -------------------------------------------
>>>>> > AGI
>>>>> > Archives: https://www.listbox.com/member/archive/303/=now
>>>>> > RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
>>>>> > Modify Your Subscription: https://www.listbox.com/member/?&
>>>>> > Powered by Listbox: http://www.listbox.com
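[Editor's note: the distinction Jim draws in the quoted message, between a merely observed correlation and a relation backed by a "story" of reasons, can be sketched in code. The following is a minimal, hypothetical illustration, not anything proposed in the thread: the `REASONS` table, the concept names, and the functions are all invented for this sketch. It treats a relation as reason-based only when a chain of stated mechanisms can be found linking the two concepts; otherwise the association is kept but marked provisional.]

```python
from collections import deque

# Hypothetical "reason" links: each edge carries a stated mechanism,
# not just an observed association.
REASONS = {
    ("rain", "wet_ground"): "water falls onto the ground",
    ("wet_ground", "slippery_road"): "water reduces friction",
}

def explain(start, goal):
    """Breadth-first search for a chain of reason-links from start to goal.

    Returns the list of mechanisms along the chain if one exists, else None.
    """
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        node, chain = frontier.popleft()
        if node == goal:
            return chain
        for (a, b), why in REASONS.items():
            if a == node and b not in seen:
                seen.add(b)
                frontier.append((b, chain + [why]))
    return None

def classify(a, b, correlated):
    """Label an observed association as reason-based or merely correlational."""
    if not correlated:
        return "unrelated"
    chain = explain(a, b)
    if chain is not None:
        return "reason-based: " + "; ".join(chain)
    # Association without an explanatory story: usable, but provisional.
    return "correlation only (provisional)"
```

Under this sketch, `classify("rain", "slippery_road", True)` finds the two-step explanatory chain, while a correlation with no reason-path (say, `classify("rain", "stock_prices", True)`) is retained only as a provisional association, matching the idea that correlation is a starting point rather than a basis for knowledge.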
