On 11/07/2016 08:01 AM, Marcus Daniels wrote:
> Reductionism is a null hypothesis:   First make sure something doesn't yield 
> to decomposition.  

Glen writes:

"But there's rarely time or political capital to do that _first_.  Obamacare is 
a great example.  Had we focused on an essentialist solution, nothing would 
have been done ... perhaps nothing would ever be done.  So, I would say commit 
to iterative solutions, which implies the _first_ one won't be fully 
decomposed."

I agree with that too.  The example that comes to mind is the use of 
statistical inference where direct measurement is possible but more 
expensive.  If strong inferences can't be made and cross-validated, just 
take the best ones and go back to experiment ASAP.  In the very complex 
social and biological sciences, just do _something_ and learn from the 
consequences (e.g., the Affordable Care Act, immunotherapy, genetic 
engineering), because you'll never _really_ know what you are talking 
about unless you perturb the system in a lot of ways.
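
To make that concrete, here is a toy sketch in Python (everything in it, 
the linear surrogate and the 0.8 R^2 cutoff included, is an arbitrary 
assumption of mine, not a recipe): cross-validate a cheap surrogate 
inference, and if it comes out weak, go back to the expensive direct 
measurement.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Cheap-to-collect proxy features x; y stands in for what the
# expensive direct measurement would give us.
n = 200
x = rng.normal(size=(n, 3))
y = x @ np.array([1.5, -0.7, 0.0]) + rng.normal(scale=0.5, size=n)

# Cross-validate the cheap inference before trusting it.
scores = cross_val_score(LinearRegression(), x, y, cv=5, scoring="r2")

# Strong, cross-validated inference: use it.  Weak: back to experiment.
if scores.mean() > 0.8:
    print(f"trust the surrogate (CV R^2 = {scores.mean():.2f})")
else:
    print(f"weak inference (CV R^2 = {scores.mean():.2f}); measure directly")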

But after a lot of experience, a decomposition may become evident, as 
opposed to the situation of N variables all coupled to each other in 
some non-linear, non-convex way.
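
As a toy illustration of a decomposition "becoming evident" (again my 
own sketch, with an arbitrary 0.3 cutoff, and using correlation, so it 
only catches linear coupling): estimate the pairwise coupling from data 
and see whether the coupling graph falls apart into independent blocks.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)

# Six variables: {0,1,2} share one driver, {3,4,5} another, so the
# system secretly decomposes into two independent subsystems.
n = 1000
a = rng.normal(size=(n, 1))
b = rng.normal(size=(n, 1))
data = np.hstack([a + 0.1 * rng.normal(size=(n, 3)),
                  b + 0.1 * rng.normal(size=(n, 3))])

# Threshold the coupling matrix and look for disconnected blocks.
coupling = np.abs(np.corrcoef(data, rowvar=False)) > 0.3
n_blocks, labels = connected_components(csr_matrix(coupling),
                                        directed=False)
print(n_blocks, "blocks:", labels)   # expect 2: [0 0 0 1 1 1]

If the graph stays one big blob, you're in the N-coupled-variables 
regime and no cheap decomposition exists (at least not a linear one).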

Marcus