Relative to a particular focus of a thought process, the relations between
the objects of the thought do not have to be causal.  They can be
correlations, for example.  Of course, you can find ways to tie those
objects together and some of those relations will be causal.  But my point
is that some of those relations will be non-causal as well.
Jim Bromer
On Fri, Jun 22, 2012 at 5:23 PM, Jim Bromer <[email protected]> wrote:

> Sergio,
> Your statements on the causality of everything reminded me of a
> disagreement I got into with someone else a few years ago.  I tried to
> remember what it was that he was saying, and all of a sudden I remembered
> that he was making the same kind of argument that you made (although yours
> was made in a completely different way.)
>
> I had said that, "Much of our knowledge is based on non-causative
> relations. "
>
> You said:
> It would help me if you gave an example or two. I have an example: a
> system of simultaneous equations. They must all be satisfied at once, so
> there is no "first" or "last" or any sort of causal relationship. However,
> the causation is not in the fact that they are simultaneous. The
> cause-effect relationship is "simultaneous equations ==> solution."
> How do I know "simultaneous equations ==>solution?" Because I have studied
> all the methods for finding that solution. And all the methods, no
> exception, are causative.
>
> You started out talking about a set whose members were not intrinsically
> related by any particular causality, but then said that the definition of
> the set that you suggested was a causal principle, a method for finding a
> solution.  There were other causal relations involved in the making of the
> members of the set, since they were "equations" which could be solved
> simultaneously.  Suppose that in finding the data for your set you
> accidentally made a collection of broccoli, neural networks, dude ranches
> and effervescence.  Would your causal relation of a method of solving them
> through the use of the method of finding simultaneous solutions really work
> as a causal relation to bind them?  I don't think so.  The members of a set
> do not need to be causally related, and this is a characteristic of the
> general formation of sets.  So if the members *of a set* do not need to be
> causally related then, given your particular emphasis on the use of sets in
> AGI, why would you presume that knowledge has to be determined by causal
> relations?
>
> Remember, while we can use our imagination to scrounge around and find
> different kinds of relations between objects of thought (they all have to
> be thinkable), that does not mean that the relations that are central to
> some particular kind of thought have to be causative.  I am interested in
> focus.  I don't think that ideas can exist as extremely simple objects of
> thought, but in spite of that I can see that from the focus of some
> consideration our thoughts do not have to be related by causality.
>
> I don't see most of the laws of mechanics as being causal.  I see them as
> relational.
>
> Jim Bromer
>
>
>
>
>
> On Fri, Jun 22, 2012 at 9:29 AM, Sergio Pissanetzky <
> [email protected]> wrote:
>
>> Jim,
>>
>> Your letter proved to be very thought-provoking for me. I read it more
>> than once and will peruse it even more. For now, the following statement
>> you made worries me very much because it seems to contradict several things
>> I know:
>>
>> > Much of our knowledge is based on non-causative relations.
>>
>> But algorithms are causal. Computers are causal, our brains are causal, a
>> neuron fires only if some "preceding" neurons fire in turn. They use neural
>> networks to simulate brain function, and they are causal. If our knowledge
>> is non-causative, what are we doing representing it with causative means?
>>
>> It would help me if you gave an example or two. I have an example: a
>> system of simultaneous equations. They must all be satisfied at once, so
>> there is no "first" or "last" or any sort of causal relationship. However,
>> the causation is not in the fact that they are simultaneous. The
>> cause-effect relationship is "simultaneous equations ==> solution." I know
>> that, if I have simultaneous equations, then I have a solution (or, in some
>> cases, no solution, or many solutions) and this is what I use for my
>> thinking.
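Sergio's three possible outcomes (a solution, no solution, or many solutions) can be made concrete. The sketch below is only an illustration, not something from the thread: it uses NumPy and the standard rank test (Rouché-Capelli) to classify a system A x = b.

```python
import numpy as np

def classify_system(A, b):
    """Classify the simultaneous equations A @ x = b as having a unique
    solution, no solution, or many solutions, via the rank test."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_Ab:
        return "no solution"      # equations are inconsistent
    if rank_A < A.shape[1]:
        return "many solutions"   # system is underdetermined
    return "unique solution"

print(classify_system([[1, 1], [1, -1]], [3, 1]))  # x+y=3, x-y=1  -> unique solution
print(classify_system([[1, 1], [1, 1]], [3, 4]))   # x+y=3, x+y=4  -> no solution
print(classify_system([[1, 1], [2, 2]], [3, 6]))   # x+y=3, 2x+2y=6 -> many solutions
```

Note that the classification itself says nothing about *how* a solution would be found; it only tests whether the simultaneous constraints can all hold at once.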
>>
>> How do I know "simultaneous equations ==> solution"? Because I have
>> studied all the methods for finding that solution. And all the methods, no
>> exception, are causative. You select any arbitrary equation to be the
>> "first" and process it in some way. Then you select the "next." Now "next"
>> implies that there is another that precedes it, so you are forcing a
>> cause-effect relationship. And so on.
>>
>> What is happening is that I think of "simultaneous equations" as an
>> object, and then I use causation at a higher level: if I have the
>> equations, then I have a solution, omitting the intermediate steps.
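The "first"/"next" point above can be sketched in code. Gaussian elimination is my choice of concrete method (the thread names no specific one): the equations hold simultaneously, yet the solver necessarily processes them in a sequence, each elimination step depending on the one before it.

```python
def solve_by_elimination(A, b):
    """Solve the simultaneous system A x = b by Gaussian elimination.
    The equations hold 'at once', but the method is strictly sequential:
    pick a 'first' pivot row, eliminate below it, move to the 'next',
    then back-substitute in reverse order."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):                    # choose the "first", then the "next"...
        # partial pivoting: swap in the row with the largest pivot
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):         # eliminate x_k from the later rows
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for k in reversed(range(n)):          # back-substitution, again ordered
        s = sum(M[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (M[k][n] - s) / M[k][k]
    return x

# x + y = 3, x - y = 1  ->  x = 2, y = 1
print(solve_by_elimination([[1, 1], [1, -1]], [3, 1]))  # [2.0, 1.0]
```

Which equation is "first" is arbitrary (here the pivoting rule decides), which is exactly Sergio's point: the ordering belongs to the method, not to the equations themselves.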
>>
>> Sergio
>>
>>
>> *From:* Jim Bromer [mailto:[email protected]]
>> *Sent:* Thursday, June 21, 2012 2:15 PM
>> *To:* AGI
>> *Subject:* Re: [agi] Prediction Did Not Work (except in narrow ai.)
>>
>> On Thu, Jun 21, 2012 at 11:04 AM, Sergio Pissanetzky <
>> [email protected]> wrote:
>>
>> Jim,
>>
>> thanks. I was thinking about how we use prediction for survival. Without
>> prediction I would put my hand in the fire and leave it there, because I
>> would not be able to predict that fire causes pain. Or that food is good
>> for hunger. Just like a tree. Locomotion goes with prediction; without it I
>> would not be able to avoid pain, or seek food. Just like a tree. That's
>> why we have a brain, to predict and to move.
>>
>> Sergio
>>
>> Yes, prediction is an important method of human thought.  Perhaps I
>> should have focused on saying that "prediction" as it has stood so far has
>> not been reliable in producing higher intelligence.  That seems like a
>> strange idea since it is so useful in native intelligence.
>>
>> Much of our knowledge is based on non-causative relations.  It is useful
>> because we do not usually see the full scope of the causal relations.  (The
>> use of terms like "full scope" becomes philosophically defeasible when we
>> are talking about knowing, because it is only by limiting the scope of what
>> we are thinking about that we could say we understand the full scope of
>> that idea.)  Similarly, much of our knowing is not based on hard-edged
>> prediction.  But for the most part, if you can't get the airplane off the
>> ground you cannot reliably discover advanced methods to improve the flight
>> characteristics of the aircraft.
>>
>> What has happened is that we have discovered that our thinking is both
>> more complicated than we imagined and more mysterious than we thought it
>> should be at the beginning of the information age.
>>
>> On the other hand, we can create extreme situations where the human mind
>> fails just as our AGI programs have failed or would fail (in less extreme
>> situations).  For example, even if you could reliably pick out a number of
>> objects in a scene, by reducing the light on the scene sufficiently, your
>> analysis would fail just as miserably as most AGI programs would fail.
>> This is an important thought experiment because it does reveal that the
>> human mind is capable of effectively using a wider variety of methods in
>> analyzing scenes than a computer program is.  (This is a conclusion, but it
>> is a reasonable conclusion.)  This then shows that the theory behind AGI is
>> not totally wrong.  We can buttress this conclusion by pointing out that if
>> the lighting of a scene (imagine an industrial setting) could be guaranteed
>> to be ideal, many visual AI methods would succeed.  If a researcher could
>> establish what kinds of AI methods would work in ideal situations, he could
>> then systematically move to deal with individual variations that tend to
>> produce worse results.  And so on.
>>
>> Jim Bromer
>>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
