Here's a good example.  I'm at the federal conference on sustainable
design metrics and strategies, hearing about high-quality metrics for
the energy content of building decisions and the complex redesigns for
reducing our impacts by 50% by 2030.  The only problem is, the best
metrics are showing the problem is more complicated: the trends in
performance gains are actually slowing down, not speeding up, and the
measures of the total energy content of our decisions that people have
spent the most time on miss literally 90% of the total.  Apparently the
distribution of energy content our decisions are responsible for has a
'fat tail', which leaves 90% of it unaccounted for.  That seems to mean
that all the strategies are missing the main target.  Given problematic
indicators like that, it seems to me we should 'look around' for the
hidden lists of things not represented....
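A toy calculation can show how a fat-tailed distribution of
contributions leaves most of the total in untracked items.  The
Zipf-like weights and the item counts below are hypothetical
illustrations, not data from the conference:

```python
# Sketch: under a power-law (Zipf-like) distribution of contributions,
# tracking only the top-ranked categories captures a small share of
# the total.  All numbers here are hypothetical, for illustration.

def zipf_weights(n, s=1.0):
    """Unnormalized contribution of the item at rank r: 1 / r**s."""
    return [1.0 / (r ** s) for r in range(1, n + 1)]

n_items = 1_000_000   # total distinct contributions (hypothetical)
n_tracked = 10        # categories a metric actually tracks (hypothetical)

w = zipf_weights(n_items)
total = sum(w)
tracked = sum(w[:n_tracked])

print(f"tracked share: {tracked / total:.1%}")
print(f"missed share:  {1 - tracked / total:.1%}")
```

With these hypothetical numbers, the ten tracked categories capture
only about a fifth of the total; the rest hides in the long tail of
small, unmeasured contributions.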


Phil Henshaw                       ¸¸¸¸.·´ ¯ `·.¸¸¸¸
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
680 Ft. Washington Ave 
NY NY 10040                       
tel: 212-795-4844                 
e-mail: [EMAIL PROTECTED]          
explorations: www.synapse9.com  


> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Glen E. P. Ropella
> Sent: Wednesday, December 12, 2007 9:51 AM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] complexity and emergence
> 
> 
> Marcus G. Daniels on 12/11/2007 06:53 PM:
> > Sure, if you manage to invent two entirely new ways of looking at a
> > problem [the data collection plan/model and a design for a synthetic
> > model].  Theoretical frameworks rarely come out of thin air -- new
> > models come from extensions and tweaks to a reference model, and the
> > finite gestalt of a scientific community.  That's inevitable, I
> > think, unless you happen to have a topic for study that has a very
> > rich set of data available (that wasn't collected motivated by some
> > hypothesis).
> 
> But you don't need a new theoretical framework for a model to 
> be fundamentally different from another model.  As we've seen 
> on this list alone, there are already a wide variety of 
> theoretical frameworks that are _never_ directly compared.  
> For the most part, I think this is because people prematurely 
> decide that two frameworks are incommensurate and that it 
> doesn't make sense to target the same referent with models in 
> the two frameworks.
> 
> A great example is hybrid (discrete + continuous) systems.  
> For some reason, we feel the need to call such systems 
> "hybrid" even though they're not really that difficult to 
> combine.  The trick is that the _theoretical_ tools used to 
> reason about them are different.  But, we can pull together 
> lots of different things and run them in co-simulation 
> without requiring theoretical commensurability.
> 
> Likewise, analogs come from the weirdest places.  For 
> example, we can compare the models for the "meter"; the metal 
> rod is fundamentally different from the distance light 
> travels in a vacuum.  These are fundamentally different 
> models of the meter.  Another example is an RC plane versus a 
> balsa wood plane as models of a life size plane.  The models 
> are fundamentally different.  All that's required is a common 
> aspect ("lift").
> 
> Granted, when multi-modeling becomes standard practice, we will
> (probably) eventually consolidate our model construction 
> methods, which will constrain such model construction (all 
> rooted in physics no doubt).  And _then_ it will be 
> reasonable to say that the various models are NOT 
> fundamentally different.  But right now, in the immature 
> modeling and simulation discipline we have, any two models 
> are very likely to be very different.  In fact, part of our 
> purpose in publishing our functional unit representation 
> method is to help push for the development of multi-modeling 
> methodology so that we can make models with incommensurate 
> structure phenomenally more commensurate through aspects and 
> co-simulation.  The idea being to construct/select 
> populations of structures to find those that best generate 
> the targeted behavior.
> 
> - --
> glen e. p. ropella, 971-219-3846, http://tempusdictum.com
> The assertion that our ego consists of protein molecules 
> seems to me one of the most ridiculous ever made. -- Kurt Gödel.
> 
> 
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> lectures, archives, unsubscribe, maps at http://www.friam.org
> 
> 
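Glen's point above about hybrid (discrete + continuous) systems can be
sketched with a classic toy example: a bouncing ball, where continuous
free-fall dynamics run between discrete bounce events.  The model and
all parameter values here are hypothetical illustrations:

```python
# Sketch of a hybrid (discrete + continuous) system: a bouncing ball.
# The continuous part (free fall) is integrated numerically; the
# discrete part (the bounce) resets the state when an impact occurs.
# Parameter values are hypothetical.

g = 9.81           # gravity, m/s^2
restitution = 0.8  # fraction of speed retained per bounce
dt = 1e-4          # integration step, s

h, v = 10.0, 0.0   # height (m), velocity (m/s)
t = 0.0
bounces = 0

while bounces < 3:
    # continuous part: Euler step of dh/dt = v, dv/dt = -g
    h += v * dt
    v -= g * dt
    t += dt
    # discrete part: impact event flips and damps the velocity
    if h <= 0.0 and v < 0.0:
        h = 0.0
        v = -restitution * v
        bounces += 1

print(f"after 3 bounces: t = {t:.2f} s, rebound speed = {v:.2f} m/s")
```

No common theoretical framework is needed to run the two parts
together; the continuous integrator and the discrete event handler
just exchange state each step, which is the essence of co-simulation.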


