Two things I think are interesting about these trends in
high-performance commodity hardware:
 
1) The "flops/bit" ratio (processing power vs. memory) is skyrocketing.  The
move to parallel architectures makes the number of high-level "operations" per
transistor go up, but bits of memory per transistor in large memory circuits
don't go up.  The old "bit per op/s" or "byte per op/s" rules of thumb get
badly broken on things like Tesla (about 0.03 bits per flop/s).  Of course we
don't know the ratio needed for de novo AGI or brain modeling, but the
assumptions about processing vs. memory certainly seem to be changing.
 
2) Much more than previously, effective utilization of processor operations
requires incredibly high locality (processing cores only have immediate access
to very small memories).  This requirement is also referred to as "arithmetic
intensity".  It arises because parallelism makes "operations per second" grow
much faster than memory bandwidth to large banks can be increased.  Perhaps
future 3D layering techniques will help with this problem, but for now AGI
paradigms hoping to cache in (yuk yuk) on these hyperincreases in FLOPS need
to be geared to high arithmetic intensity.
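
A minimal sketch of what that balance point looks like, again with assumed
C1060-class numbers (~933 Gflop/s of compute vs ~102 GB/s of memory
bandwidth):

  # machine balance: flops needed per byte fetched to keep the ALUs busy
  peak_flops = 933e9   # flop/s (assumed)
  mem_bw = 102e9       # bytes/s to device memory (assumed)
  print(peak_flops / mem_bw)  # ~9 flops/byte; kernels doing fewer flops
                              # per byte are bandwidth-bound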
 
Interestingly (to me), these two trends together imply that we can increase
the complexity of neuron and synapse models beyond the "muladd/synapse +
simple activation function" model with essentially no degradation in
performance, since the bandwidth of propagating values between neurons is much
more of a bottleneck than local processing inside the neuron model.
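
As a rough illustration (all constants assumed): if each synaptic event moves
on the order of 8 bytes across the memory bus, the ~9 flop/byte balance point
sketched above buys roughly 70 flops of synapse/neuron modeling per event
before compute, rather than bandwidth, becomes the limit, which is far more
than a single muladd:

  # "free" flops per synaptic event while still bandwidth-bound
  balance = 9.0           # flops/byte machine balance (from sketch above)
  bytes_per_event = 8     # e.g. 4-byte weight + 4-byte activation (assumed)
  print(balance * bytes_per_event)  # ~72 flops/event vs 2 for a bare muladd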
 

