I like the ANN analogy more than the JIT one... though perhaps my understanding 
of either is flawed.  The JIT analogy is stronger, I think, because ANNs aren't 
(?) typically capable of multi-domain classification (right?).  The extent to 
which they can operate over a space they weren't trained on is very limited.  
But the JIT, because its ultimate input (Turing-complete languages) and output 
(general-purpose computers) are both universally expressive, can apply across a 
huge number of domains.  It seems like the fan-in and fan-out for compilers are 
huge compared to those for ANNs.
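To make the limited-extrapolation point concrete, here's a toy sketch (mine, not anything from the thread; all names and numbers are made up): a tiny tanh network fit to y = x^2 on [-1, 1] does fine in-sample, but its bounded activations saturate outside the training range, so it has no idea what to do at x = 3.  A compiler has no analogous "training range" to fall off of.

```python
# Hypothetical illustration: a small tanh net trained on [-1, 1] cannot
# extrapolate to x = 3, because tanh saturates outside the trained region.
import math, random

random.seed(0)
H = 8                                       # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

xs = [i / 10.0 for i in range(-10, 11)]     # training inputs in [-1, 1]
ys = [x * x for x in xs]                    # target: x^2

def predict(x):
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H)) + b2

lr, n = 0.2, len(xs)
for _ in range(5000):                       # plain full-batch gradient descent
    dw1 = [0.0] * H; db1 = [0.0] * H; dw2 = [0.0] * H; db2 = 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        err = sum(w2[j] * h[j] for j in range(H)) + b2 - y
        for j in range(H):
            dw2[j] += err * h[j]
            g = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            dw1[j] += g * x
            db1[j] += g
        db2 += err
    for j in range(H):
        w1[j] -= lr * dw1[j] / n; b1[j] -= lr * db1[j] / n
        w2[j] -= lr * dw2[j] / n
    b2 -= lr * db2 / n

mse_in = sum((predict(x) - x * x) ** 2 for x in xs) / n   # in-domain fit
err_out = (predict(3.0) - 9.0) ** 2         # far outside the training range
```

In-sample the fit is decent; at x = 3 the prediction is nowhere near 9, because every hidden unit has saturated.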

On 11/08/2016 07:45 AM, Marcus Daniels wrote:
> A neural net trained to discriminate between nuances in one environment (H) 
> would need to be re-trained (or I'd say untrained) for the D environment.  
> The signals in H-type environments are higher-dimensional, coupled, and 
> non-linear compared to the D environment, which is made up of many more 
> independent and simpler hazards.  With finite resources, I expect the 
> H-specialized M agent apparatus needs to be torn down to make room for the 
> constant bombardment of D-world wild dogs.  Not really interpreted vs. 
> compiled; more like a Java HotSpot JIT that is constantly refining itself to 
> the environment.

-- 
␦glen?

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove