Wow, Frank!  What you said is much clearer than what I probably said.  

(}:-().  

 

There’s a reason I do not speak this clearly.  (};-)).

 

Seriously, the pragmat[ic]ist warrant for induction is that if there is anything 
constant in an essentially random world, organisms (and knowledge systems in 
general) should be designed to track it. So, for instance, as we keep flipping 
heads, the probability that the flips are coming from a biased coin steadily 
increases.  Of course, if the coin we are flipping is not, in any sense, the 
SAME coin, then all bets are off.  Literally.  Peirce's notion of reality is 
thus statistical.  And it is based on the assumption that only generals (e.g., 
the coin) can be real; specifics (e.g., the coin today, the coin tomorrow) cannot 
be real because there is no way to sample them.  
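
The biased-coin point can be made concrete with a small Bayesian sketch.  This 
is my own illustration, not anything Peirce wrote, and the two candidate coins 
(fair at p = 0.5, biased at p = 0.9) and the 50/50 prior are invented for the 
example:

```python
# Hypothetical illustration: two candidate coins, one fair (p = 0.5) and one
# biased toward heads (p = 0.9). As we observe a run of heads, the posterior
# probability that we are holding the biased coin steadily climbs.

def posterior_biased(n_heads, p_fair=0.5, p_biased=0.9, prior_biased=0.5):
    """P(biased | n_heads consecutive heads), by Bayes' rule."""
    like_fair = p_fair ** n_heads        # likelihood under the fair coin
    like_biased = p_biased ** n_heads    # likelihood under the biased coin
    numerator = like_biased * prior_biased
    denominator = numerator + like_fair * (1 - prior_biased)
    return numerator / denominator

for n in (0, 1, 5, 10):
    print(n, round(posterior_biased(n), 4))
```

Note that the whole calculation presupposes it is the SAME coin on every flip; 
if the coin can change between flips, the likelihoods no longer refer to 
anything, which is exactly Nick's worry.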

 

In short, it is no longer clear to me that Peirce's account of induction 
answers the grue/green quandary.  Green is the property of being the color of 
grass.  Grue is the property of being the color of grass until one samples it 
N times and the color of the sky thereafter.  We never know for sure which kind 
of entity we are dealing with, a green-like entity or a grue-like entity.  We 
can imagine a situation in which we sample a chemical to see whether it is the 
d-form or the l-form.  Let's imagine also that each time we sample it, the 
"spoon" we use introduces a contaminant that, when it reaches a critical 
concentration, flips the solution from one isomer to the other.  Peirce would 
say, well, I never said the world was uniform; I only said that, if there are 
uniformities in the world, statistical inferential systems would be the only 
way to discover them.  But I still don't think this really solves the problem 
of induction.  Alas.  
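
The chemical example can be put as a toy simulation.  All the numbers here 
(0.1 units of contaminant per sample, a threshold of 1.0) are invented for 
illustration; the point is only that the relative frequency over the first N 
samples tells you nothing about sample N+1:

```python
# Toy model of the "grue-like" chemical (all parameters invented): each act of
# sampling deposits contaminant, and once the concentration reaches a critical
# threshold the solution flips from the d-form to the l-form.

def sample_isomer(n_samples, contaminant_per_sample=0.1, threshold=1.0):
    """Return the isomer observed on the n-th sample (1-indexed)."""
    concentration = contaminant_per_sample * n_samples
    return "d" if concentration < threshold else "l"

observations = [sample_isomer(n) for n in range(1, 16)]
# The first nine observations are uniformly "d"; induction from that uniform
# run fails at sample ten, when the contaminant crosses the threshold.
```

The sampling procedure itself changes the entity being sampled, so no record 
of past frequencies, however long, licenses the next prediction.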

 

Thus, if you tell me that the probability of the coin turning up heads on the 
next flip is 50 percent, you mean that the relative frequency of heads has been 
50 percent up till now AND that you have no reason to believe the coin has 
changed in the meantime, because that, in fact, is the basis for your 
expectations about the coin.  (Well, I suppose you could, being a 
mathematician, simply say you have lots of reasons to believe that the coin is 
the sort of thing that fits the binomial model, and let it go at that.)  
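
The mathematician's move can be sketched too.  Under the binomial model, the 
observed relative frequency comes with a confidence interval (the 
normal-approximation interval below is standard textbook machinery, not 
anything from this thread):

```python
import math

def binomial_ci(heads, flips, z=1.96):
    """Approximate 95% confidence interval for the heads probability,
    using the normal approximation to the binomial."""
    p_hat = heads / flips                          # observed relative frequency
    se = math.sqrt(p_hat * (1 - p_hat) / flips)    # standard error of p_hat
    return p_hat - z * se, p_hat + z * se

low, high = binomial_ci(50, 100)   # 50 heads in 100 flips
```

Note what the interval does and does not say: it quantifies sampling error 
under the assumption that one fixed coin generated all the flips.  It is 
silent on whether the coin has changed in the meantime, which is Nick's point.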

I am still studying glen’s interesting comments on the relation between 
confidence and belief.  As you can imagine, given that he has given me one more 
chance, I shall be cautious in my response.  

 

All the best, 

 

Nick 

 

 

 

 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:[email protected]] On Behalf Of Frank Wimberly
Sent: Monday, July 09, 2018 2:54 PM
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] What's so bad about Scientism?

 

p.s.  I also said that the probability of heads for a fair coin is 0.5.  Of 
course, that's a definition but since he was denying the reality of probability 
I think that cut some ice.

----
Frank Wimberly

www.amazon.com/author/frankwimberly

https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Mon, Jul 9, 2018, 12:50 PM Frank Wimberly <[email protected]> wrote:

Actually Nick is competitive with you for skepticism.  We were discussing 
probabilities and he said you can't know the probability of an event based on 
past observations.  He basically said just because the probability of an event 
has always been P, how do you know it still is?  Is that a fair 
characterization of what you said, Nick?

----
Frank Wimberly

www.amazon.com/author/frankwimberly

https://www.researchgate.net/profile/Frank_Wimberly2

Phone (505) 670-9918

 

On Mon, Jul 9, 2018, 12:05 PM uǝlƃ ☣ <[email protected]> wrote:

Sorry for the extra post.  But it occurred to me you might be asking whether 
*my* autonomous nervous system believes in the utility of these measurements.  
If so, I can give a full-throated "No."  My doubt comes from listening to my 
S.O. (Renee') talk about things like blood pressure and how they're used in 
clinical settings as well as my own experience as a patient.  "Assessing the 
patient" by an intuitive, signal fusing, machine (nurse, doctor, anesthetist) 
seems to have much more utility than any given particular (linearized) 
measurement of a subsystem.  The utility of, say, the heart rate, is waaaaayyy 
below my threshold for belief.

On 07/09/2018 10:53 AM, uǝlƃ ☣ wrote:
> Interesting insertion of "utility", a kind of meta-variable to be considered. 
>  To be clear, I'd say the organism believes in heartbeats, lung pumping, etc. 
>  But to ask whether the organism believes in the usability/utility of 
> (subjective) measurements of such things smacks of a hidden assumption.
> 
> But to answer as authentically as I can in spite of that hidden assumption, 
> I'd answer that *after* the yogi did such a full cycle manipulation 
> successfully at least *once*, then that yogi might believe that 
> meta-variable. (By "full cycle manipulation", I mean taking conscious control 
> and reinstalling the new behavior into the autonomous part.)  After such 
> success, the yogi organism has some experience with whether, how, and what 
> impact any particular part may have had.  For example, perhaps heartbeat 
> plays no role in her ability to take conscious control and reinstall the new 
> program.  Hence, she might doubt the utility of heartbeats but believe the 
> utility of lung pumping regulation.
> 
> Again, though, whether the yogi organism believes in this meta-layer "utility 
> of X" would depend on where they draw the threshold.  I can imagine very 
> process-based yogis who, like me, put little stock in belief and more in the 
> process of doing, staying "hands on".  And I can imagine yogis who idealize 
> the process (perhaps similar to chi?) and may even write books about it.  I 
> have no experience with how yogis actually are, of course.
> 
> 
> On 07/09/2018 10:21 AM, Prof David West wrote:
>> I think the answer may be in what you just wrote, but a bit of assistance 
>> please. If we were to anthropomorphize your autonomous nervous system would 
>> you say it 'believed' or 'doubted' the utility of heartbeats, lungs pumping, 
>> etc.?
>>
>> My interest arises from studies of Yoga adepts who "take conscious control 
>> of breathing" and upon achieving total conscious control, delegate the 
>> control back to the autonomous system which maintains the regularized, 
>> 'managed' breathing instead of the 'normal', somewhat chaotic/strange 
>> attractor-ish breathing regimen prior to the application of Yoga technique.
> 

-- 
☣ uǝlƃ

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

