--- On Thu, 12/11/08, Eric Burton <brila...@gmail.com> wrote:

> It's all a big vindication for genetic memory, that's for certain. I
> was comfortable with the notion of certain templates, archetypes,
> being handed down as aspects of brain design via natural selection,
> but this really clears the way for organisms' life experiences to
> simply be copied in some form to their offspring. DNA form!

No it's not. 

1. There is no experimental evidence that learned memories are passed to 
offspring in humans or any other species.

2. If memory is encoded by DNA methylation, as proposed in 
http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
then how is the memory encoded in 10^11 separate neurons (not to mention 
the connectivity information) transferred to a single egg or sperm cell with 
fewer than 10^5 genes? The proposed mechanism is to activate one gene and 
turn off another -- 1 or 2 bits per neuron.
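
To make the mismatch concrete, a few lines of Python using only the numbers 
above (taking 2 bits per neuron as the upper end of the proposed mechanism, 
and generously giving every gene in the germ cell 2 bits of on/off state):

neurons = 10**11                 # neurons in a human brain
bits_per_neuron = 2              # upper bound for the methylation mechanism
needed = neurons * bits_per_neuron           # ~2*10^11 bits to transfer

genes = 10**5                    # upper bound on genes in an egg or sperm
available = genes * 2            # at most ~2 bits of on/off state per gene

print("needed:    %.0e bits" % needed)                # 2e+11
print("available: %.0e bits" % available)             # 2e+05
print("gap: factor of %.0e" % (needed / available))   # 1e+06

That is a shortfall of about six orders of magnitude before we even count 
connectivity.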

3. The article at http://www.technologyreview.com/biomedicine/21801/ says 
nothing about where memory is encoded, only that memory might be enhanced by 
manipulating neuron chemistry. There is nothing controversial here. It is well 
known that certain drugs affect learning.

4. The memory mechanism proposed in 
http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
is distinct from (2). It proposes protein regulation at the mRNA level near 
the synapses (consistent with the Hebbian model) rather than changes to DNA in 
the nucleus. Such changes could not make their way back to the nucleus unless 
there were a mechanism to chemically distinguish each of a neuron's tens of 
thousands of synapses and encode this information, along with the connectivity 
information (about 10^6 bits per neuron), back into the nuclear DNA.
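
The 10^6 bits per neuron figure is just the cost of naming each synapse's 
target. With ~10^11 neurons, specifying one connection takes log2(10^11) ~ 37 
bits, and a neuron has tens of thousands of connections. In Python, taking 
3*10^4 synapses per neuron as a representative count:

import math
neurons = 10**11
synapses_per_neuron = 3 * 10**4              # "tens of thousands"
bits_per_synapse = math.log2(neurons)        # bits to name one target
print("%.1f bits per synapse" % bits_per_synapse)                         # 36.5
print("%.1e bits per neuron" % (synapses_per_neuron * bits_per_synapse))  # 1.1e+06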

Last week I showed how learning could occur in neurons rather than synapses, in 
randomly and sparsely connected neural networks where all of the outputs of a 
neuron are constrained to have identical weights. The network is trained by 
tuning neurons toward excitation or inhibition to reduce the output error. In 
general, an arbitrary X-bit to Y-bit binary function with N = Y*2^X bits of 
complexity can be learned using about 1.5N to 2N neurons with ~N^(1/2) synapses 
each and ~N log N training cycles. As an example, I posted a program that learns 
a 3-by-3-bit multiplier in about 20 minutes on a PC using 640 neurons with 36 
connections each.
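
For anyone who missed that post, here is a simplified sketch in Python of the 
scheme -- not the posted program. The wiring is collapsed to a single layer of 
fixed random threshold features, and the names (fanout, thr, shift) are mine. 
What it preserves is the key constraint: one tunable weight per neuron, shared 
by all of that neuron's outgoing connections, tuned toward excitation or 
inhibition by keeping only changes that do not increase the error. As a sanity 
check on the numbers above: with X = Y = 6, N = Y*2^X = 384, so 1.5N to 2N is 
576 to 768 neurons and N^(1/2) is about 20 connections, the same order as the 
640 neurons and 36 connections I used.

import random

B = 3                                    # 3-by-3-bit multiplier
IN, OUT = 2 * B, 2 * B                   # 6 input bits, 6 output bits
H = 640                                  # neurons, matching the figure above
rng = random.Random(1)

# Fixed random features: each neuron thresholds a signed sum of inputs.
sign = [[rng.choice((-1, 1)) for _ in range(IN)] for _ in range(H)]
thr = [rng.randint(0, 2) for _ in range(H)]
# Each neuron drives 3 random output units with ONE shared weight w[i].
fanout = [rng.sample(range(OUT), 3) for _ in range(H)]
w = [0] * H                              # integer weights, start neutral

def bits(v, n):                          # little-endian bit vector
    return [(v >> j) & 1 for j in range(n)]

# Precompute activations and targets over the full 64-case truth table.
cases = []
for a in range(2 ** B):
    for b in range(2 ** B):
        x = bits(a, B) + bits(b, B)
        act = [sum(s * xi for s, xi in zip(sign[i], x)) >= thr[i]
               for i in range(H)]
        cases.append((act, bits(a * b, OUT)))

S = [[0] * OUT for _ in cases]           # summed input to each output unit

def shift(i, d):
    """Apply w[i] += d, returning the change in total bit errors."""
    de = 0
    for c, (act, y) in enumerate(cases):
        if act[i]:
            for k in fanout[i]:
                de -= (S[c][k] > 0) != y[k]
                S[c][k] += d
                de += (S[c][k] > 0) != y[k]
    w[i] += d
    return de

err = sum(sum(y) for _, y in cases)      # every output unit starts at 0
for t in range(200000):
    i = rng.randrange(H)                 # pick a neuron at random
    d = rng.choice((-1, 1))              # toward inhibition or excitation
    de = shift(i, d)
    if de <= 0:
        err += de                        # keep moves that do not hurt
    else:
        shift(i, -d)                     # otherwise revert exactly
    if err == 0:
        print("all 64 cases learned after", t + 1, "trials")
        break
else:
    print("stopped with", err, "bit errors")  # hill climbing may stall

Changing one neuron's weight only touches the sums that neuron actually 
feeds, so shift() updates the error incrementally instead of re-evaluating 
the whole network. Whether this particular toy reaches zero errors depends on 
the random wiring; the accept-if-not-worse rule is what lets the search drift 
across plateaus.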

This is slower than Hebbian learning by a factor of O(N^(1/2)) on sequential 
computers, and it is also inefficient to simulate because sparse networks map 
poorly onto typical vector-processing parallel hardware and onto memory 
optimized for sequential access. However, this architecture is what we actually 
observe in neural tissue, which in any case does everything in parallel. The 
presence of neuron-centered learning does not preclude Hebbian learning from 
occurring at the same time (perhaps at a different rate). Note, though, that 
the number of neurons (10^11) is much closer to Landauer's estimate of human 
long term memory capacity (10^9 bits) than the number of synapses (10^15) is.

To be clear, I don't mean to suggest that memory in either form can be 
inherited. There is no biological evidence for such a thing.

-- Matt Mahoney, matmaho...@yahoo.com


