Brad,

Hmmm... yeah, the problem you describe is actually an implementation issue,
which is independent of whether one does synchronous or asynchronous
updating.

It's easy to use a software design where, whenever a neuron sends activation
to another neuron, the target neuron is checked against its threshold.  If
it's over threshold, it's put on the "ready to fire" queue.  Rather than
iterating through all neurons in each cycle, one simply iterates through the
neurons on the ready-to-fire queue.
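For concreteness, here's a minimal sketch of that pattern in Python -- just an
illustration of the idea, not Webmind's actual code; the names (Neuron,
fire_queue, send_activation) are made up for the example:

from collections import deque

class Neuron:
    def __init__(self, threshold):
        self.threshold = threshold
        self.activation = 0.0
        self.targets = []     # list of (target_neuron, weight) pairs
        self.queued = False   # already on the ready-to-fire queue?

fire_queue = deque()

def send_activation(source):
    # Push activation to each target; any target that crosses its
    # threshold goes on the ready-to-fire queue (at most once).
    for target, weight in source.targets:
        target.activation += weight * source.activation
        if target.activation >= target.threshold and not target.queued:
            target.queued = True
            fire_queue.append(target)

def run_cycle():
    # One update cycle: only neurons on the queue are processed,
    # rather than iterating over every neuron in the net.
    for _ in range(len(fire_queue)):
        neuron = fire_queue.popleft()
        neuron.queued = False
        send_activation(neuron)
        neuron.activation = 0.0   # reset after firing

The same queue serves either updating scheme: drain one cycle's worth at a
time for synchronous updating (as above), or pop and fire neurons one at a
time in any order for asynchronous updating.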

Of course, one can use this approach with either synchronous or asynchronous
updating.

We used this design pattern in Webmind, which had a neural net aspect to its
design; Novamente is a bit different, so such a strategy isn't relevant.

-- Ben G


> While I haven't read any of the documents in question, I'd like
> to expound
> a bit here.
>
> While you are certainly correct, I think Pei was referring to the wasted
> computational power of updating synapses that are inactive and have no
> chance of being activated in the near future.  In our current von Neumann
> architectures, memory is much cheaper than CPU cycles, which is
> not the case in the brain.
>
> So while the brain opts for minimal neurons, and keeps most of them active
> in any given situation, a silicon NN might have factors of 10 more
> neurons, but use very sparse encoding and a well-optimized update
> algorithm.  This setup would emphasize spending CPU time only on updating
> neurons that have a chance of being active.
>
>
> -Brad
>

