Ben, thanks for the reference; it contains quite a large tree of material. I am a fan of local learning rules for ANNs. One of the references from your post:

An Approximation of the Error Backpropagation
Algorithm in a Predictive Coding Network
with Local Hebbian Synaptic Plasticity

James C. R. Whittington
[email protected]
MRC Brain Network Dynamics Unit, University of Oxford, Oxford, OX1 3TH, U.K.,
and FMRIB Centre, Nuffield Department of Clinical Neurosciences, University
of Oxford, John Radcliffe Hospital, Oxford, OX3 9DU, U.K.

Rafal Bogacz
[email protected]
MRC Brain Network Dynamics Unit, University of Oxford, Oxford OX1 3TH, U.K.,
and Nuffield Department of Clinical Neurosciences, University of Oxford,
John Radcliffe Hospital, Oxford OX3 9DU, U.K.

I saw a presentation by a professor from Harvard on using predictive coding for vision. An interesting point was that only as many layers as are needed to code the complexity of the input data get used: the later layers do not train because they are not needed, and richer inputs drive the use of more layers. I suggested this was objective evidence for the efficacy of the federal Head Start program. He was not interested in that idea.
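For anyone who wants to see the flavor of the Whittington and Bogacz idea, here is a minimal sketch (my own toy code, not the paper's implementation) of a one-latent-layer linear predictive coding model: inference relaxes the latent activity against the prediction error, and the weight update is purely local and Hebbian, the error times the presynaptic activity. The layer sizes, learning rates, and step counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, chosen arbitrarily: 8-dim input, 4-dim latent layer.
n_in, n_latent = 8, 4
W = rng.normal(0, 0.1, (n_in, n_latent))  # generative weights: latent -> input

def infer(x, W, steps=50, lr_x=0.1):
    """Relax latent activity z to reduce the prediction error for input x."""
    z = np.zeros(n_latent)
    for _ in range(steps):
        e = x - W @ z          # local prediction error at the input layer
        z += lr_x * (W.T @ e)  # error feedback drives the latent units
    return z

def learn(x, W, lr_w=0.05):
    """One local Hebbian weight update: error times presynaptic activity."""
    z = infer(x, W)
    e = x - W @ z
    W += lr_w * np.outer(e, z)  # no backpropagated global gradient needed
    return W

# Fit a single random input pattern; the prediction error should shrink.
x = rng.normal(size=n_in)
err_before = np.linalg.norm(x - W @ infer(x, W))
for _ in range(200):
    W = learn(x, W)
err = np.linalg.norm(x - W @ infer(x, W))
```

In a deep version of this, each layer only trains to the extent that prediction error reaches it, which is the behavior the Harvard talk described: layers beyond what the input complexity demands see little error and stay largely untrained.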

Cheers,

Ed



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424