Right.  But what you've done here is remove any lossy compression, like what 
happens when humans [mis]identify with some demographic.  Your robots share 
their information in some perfect sense.  And by doing that, you've _baked_ in 
the flattening.  Your compression is lossless.
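To make the distinction concrete, here's a toy sketch (my own illustration, not anyone's model): bucketing an exact attribute into a coarse demographic label is a many-to-one, lossy map, whereas a lossless map keeps every individual recoverable.

```python
# Toy illustration: "identifying with a demographic" as lossy compression.
# The bucket boundaries below are arbitrary, chosen only for the example.

def compress_lossy(age: int) -> str:
    """Map an exact age to a coarse label (many-to-one, hence lossy)."""
    if age < 30:
        return "young"
    elif age < 65:
        return "middle"
    return "senior"

def compress_lossless(age: int) -> int:
    """Identity map: every input is exactly recoverable (lossless)."""
    return age

ages = [23, 29, 47, 71]
lossy = [compress_lossy(a) for a in ages]
lossless = [compress_lossless(a) for a in ages]

# 23 and 29 become indistinguishable after the lossy map...
assert lossy[0] == lossy[1] == "young"
# ...but stay distinct under the lossless one.
assert lossless[0] != lossless[1]
```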

And while I admit it may eventually be feasible to do such a thing (with 
robots), things don't generally happen that way.  Reality, as far as we know 
it, is satisficing, not optimizing.  So, we'll start with lossy compression as 
well as really faulty devices.  To push my rhetoric a bit further, it's 
plausible that lossy, faulty integration is necessary for robustness.  
(Although I can't rely on it, I at least have Hewitt to cite: 
http://www.powells.com/book/inconsistency-robustness-9781848901599/61-1)

On 02/07/2017 01:42 PM, Marcus Daniels wrote:
> Ok, one could imagine thousands of very lightweight processors that 
> independently process very high resolution sensor data, and share it 
> asynchronously.  Also one could show that the sensors were as good or better 
> than human sensitivity.  All of the events could be tagged with very high 
> precision atomic clocks and logged.  Then the events could be sorted by that 
> tag.   Somehow `flattening' is important to you here, but I haven't figured 
> out why.   Anyway, once flattening was accomplished to understand what was 
> going on it would just have to be unflattened again, like using some 
> communicating sequential processes formalism.
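For reference, the flattening step Marcus describes — independently logged, timestamp-tagged events merged into one global order — can be sketched like this (names and structure are my own illustration, assuming each per-sensor log is already time-ordered):

```python
# Sketch of the scheme in the quoted paragraph: each sensor emits events
# tagged with a (nominally atomic-clock) timestamp; flattening is just a
# merge-sort of the per-sensor logs on that tag.
import heapq

def flatten(logs):
    """Merge per-sensor logs (each already ordered by timestamp) into one
    globally ordered event stream, keyed on the timestamp tag."""
    return list(heapq.merge(*logs, key=lambda event: event[0]))

# Events are (timestamp, sensor_id, payload) tuples -- an assumed format.
sensor_a = [(1.000001, "a", "edge"), (1.000005, "a", "blob")]
sensor_b = [(1.000002, "b", "edge"), (1.000004, "b", "motion")]

merged = flatten([sensor_a, sensor_b])
assert [e[0] for e in merged] == [1.000001, 1.000002, 1.000004, 1.000005]
```

Unflattening would then be the inverse regrouping, e.g. by `sensor_id` — which is roughly where a CSP-style formalism would come in.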


-- 
☣ glen

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove