On Tue, Oct 25, 2011 at 10:17:24AM -0700, BGB wrote:
> I was not arguing about the limits of computing, rather, IBM's specific  
> design.
> it doesn't really realistically emulate real neurons, rather it is a  

Real neurons have many features, many of them unknown, and do not
map well into solid state as is. However, you can probably find a
simplified model which is powerful and generic enough and maps
well to solid state by co-evolving substrate and representation.

> from what I can gather a simplistic "accumulate and fire" model, with  
> the neurons hard-wired into a grid.
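
For reference, the "accumulate and fire" model amounts to something like the
sketch below (a pure-Python toy; threshold, leak and parameters are
illustrative, not IBM's actual design):

```python
# Minimal "accumulate and fire" unit: sum weighted inputs into a
# potential, emit a spike and reset when a threshold is crossed.
# All names and parameters here are illustrative, not IBM's design.

class AccumulateAndFire:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak          # passive decay per step
        self.potential = 0.0

    def step(self, weighted_inputs):
        """Accumulate one timestep of input; return True on a spike."""
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True
        return False

# A constant drive eventually crosses the threshold:
n = AccumulateAndFire()
spikes = [n.step([0.3]) for _ in range(10)]
```

Hard-wiring units like this into a grid then just fixes which outputs feed
which inputs.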

In principle you can use a hybrid model by using a crossbar for
local connectivity which is analog, and a packet-switched signalling
mesh for long-range interactions, similarly to how real neurons do
it. The mesh can emulate total connectivity fine, and you can probably
even finagle something which scales better than a crossbar locally.
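
A toy version of that hybrid, with a dense per-tile weight matrix standing in
for the analog crossbar and address-event packets standing in for the
signalling mesh (everything below is a hypothetical sketch, not any shipping
design):

```python
# Toy hybrid connectivity: dense local weights per tile (the "crossbar")
# plus address-event packets routed between tiles (the "mesh").

from collections import defaultdict

class Tile:
    def __init__(self, n, weights):
        self.n = n
        self.weights = weights            # n x n local "crossbar"
        self.potential = [0.0] * n

    def step(self, spikes_in, long_range):
        """spikes_in: local spike flags; long_range: (neuron, weight) events
        delivered by the mesh. Returns the indices that fired."""
        fired = []
        for i in range(self.n):
            local = sum(self.weights[j][i]
                        for j, s in enumerate(spikes_in) if s)
            remote = sum(w for tgt, w in long_range if tgt == i)
            self.potential[i] += local + remote
            if self.potential[i] >= 1.0:
                self.potential[i] = 0.0
                fired.append(i)
        return fired

def route(events):
    """Deliver (dst_tile, dst_neuron, weight) packets to per-tile mailboxes."""
    mailboxes = defaultdict(list)
    for dst_tile, dst_neuron, w in events:
        mailboxes[dst_tile].append((dst_neuron, w))
    return mailboxes
```

The point of the split is that local fan-out stays cheap and analog-friendly,
while only the (sparser) long-range spikes pay the packet-switching cost.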

> I suspect something more "generic" would be needed.

I don't see how a generic model will do long-term, other than for
bootstrap (via the co-evolution above) reasons.

> another question is what can be done in the near term and on present  
> hardware (future hardware may or may not exist, but any new hardware may  
> take years to make it into typical end-user systems).

Boxes with large numbers of ARM SoCs with embedded memory and a signalling
mesh have been sighted; arguably this is the way to go for large scale.
GPGPU approaches are also quite good, if you map your neurons to a 3d
array and stream through memory sequentially. Exchanging interface state
with adjacent nodes (even over GBit Ethernet) is cheap enough.
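
A rough sketch of the streaming idea, in plain Python rather than an actual
GPU kernel (layout and the nearest-neighbour update rule are illustrative):

```python
# Neurons live in a 3D array; each pass streams through memory slice by
# slice. Only the boundary ("halo") slices must be exchanged with
# adjacent nodes -- O(n^2) traffic against O(n^3) local work, which is
# why a modest interconnect like GBit Ethernet suffices.

def step_volume(volume):
    """One nearest-neighbour averaging pass over a 3D nested-list volume."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    out = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
    for x in range(nx):               # stream slice by slice
        for y in range(ny):
            for z in range(nz):
                acc, cnt = 0.0, 0
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    px, py, pz = x + dx, y + dy, z + dz
                    if 0 <= px < nx and 0 <= py < ny and 0 <= pz < nz:
                        acc += volume[px][py][pz]
                        cnt += 1
                out[x][y][z] = acc / cnt
    return out

def halo(volume):
    """Boundary slices to exchange with the two adjacent nodes."""
    return volume[0], volume[-1]
```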

>
> the second part of the question is:
> assuming one can transition to a purely biology-like model, is this a  
> good tradeoff?...
> if one gets rid of a few of the limitations of computers but gains some  
> of the limitations of biology, this may not be an ideal solution.

Biology never had to deal with high-performance numerics; I'm sure
if it had, it wouldn't do too shabbily. You can always go hybrid,
e.g. if you want to do proofs or cryptography.

> better would be to try for a strategy where the merits of both can be  
> gained, and as many limitations as possible can be avoided.
>
> most likely, this would be via a hybrid model.

Absolutely. Hybrid at many scales, down to analog computation for neurons.

>
> or such...
>


-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc