On Tue, Jun 3, 2008 at 5:47 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> The example was a strawman?
>
> It was a precise analogue of the situation we are talking about, so calling
> it a strawman, or calling it irrelevant, is just a way of avoiding what I
> said.
>
> I was referring to a specific attitude among some people who claim that the
> function of a neuron can be completely modelled by treating it as a black
> box and using a bayesian model to capture its I/O function.
>
> There are two points at the heart of the debate.
>
> First, even those people who claim that the neuron is really "simple" (e.g.
> J Andrew Rogers) have not, in fact, modelled real neurons in enough detail
> to be able to say that they understand them completely. There are many
> different types of neurons, and the exact layout of the connections to
> individual neurons has not been mapped except in simple cases such as the
> sea-slug (Aplysia) and the squid.  So the claim is at best a "we could do it
> in principle" claim, not a "we have actually modelled all known types of
> neurons in enough detail to exactly reproduce their firing patterns" claim.

Nobody is claiming that real biological neurons are simple, or that,
say, they can be modeled by a simple internal algorithm able to
emulate the existing biological interface. So black boxes are not
relevant; the interfaces will have to change too. The similarity is
on a higher level, and the claim is merely that something that looks
roughly like a brain (a neural network) can be implemented on a
computational substrate out of much, much simpler elements (and
interfaces). Biology had to optimize for many things simultaneously,
and simplicity wasn't one of them. If we optimize for simplicity a
little, the result may be much more comprehensible than the evolved
jumble that neuroscientists study.
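
To make "much simpler elements" concrete, here is a minimal sketch of
the kind of unit I mean: a leaky integrate-and-fire point neuron. All
constants are illustrative, not fitted to any biology:

    def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
        """One Euler step of a leaky integrate-and-fire unit.
        Returns (new_voltage, spiked). All constants are illustrative."""
        v = v + (dt / tau) * (v_rest - v + i_in)  # leak toward rest, add input
        if v >= v_thresh:
            return v_rest, True                   # fire and reset
        return v, False

    # Drive one unit with a constant input and collect its spike times.
    v, spikes = 0.0, []
    for t in range(200):
        v, fired = lif_step(v, i_in=1.2)
        if fired:
            spikes.append(t)
    print(spikes)  # regular firing from a few lines of code

A real neuron does far more, which is exactly the point: the claim is
about what suffices for the higher-level behavior, not about matching
biology part for part.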


>  One of the reasons this matters is that, as I said before, there have been
> arguments that the exact layout of synapses in the dendritic tree may be
> important in the processing carried out by the neuron ... and most current
> models neglect this completely, pretending that the tree is just an
> amorphous arrival point for all signals.  If the tree matters, then all of
> those older models need to be trashed.

Still, it's something like a graph, even if it's inappropriate for
neurons to play the role of nodes and, say, spans of dendrite are a
better candidate for that role, with edges corresponding to the
spatial locations where synapses can form, rather than only to actual
synapses.
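
A rough sketch of that representation, with hypothetical names: nodes
are dendritic spans, and edges are spatial sites where a synapse
could form, flagged as occupied only when one actually has:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Span:
        """A stretch of dendrite: a node of the graph."""
        neuron_id: int
        span_id: int

    @dataclass
    class Site:
        """A location where a synapse can form: an edge of the graph."""
        pre: Span
        post: Span
        occupied: bool = False  # True only if a synapse actually exists here

    a = Span(neuron_id=1, span_id=0)
    b = Span(neuron_id=2, span_id=3)
    edges = [Site(pre=a, post=b)]  # a potential contact, not yet a synapse
    edges[0].occupied = True       # a synapse forms at this site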


> Second, even if we did have a complete catalog of neuron behaviors (with
> exact information about tree layout, timing, etc etc), and even if someone
> were to produce a complete bayesian model of all of the neurons in that
> catalog, the result would only be an "epicycle-like" model of the neuron.
>  You will notice that in that recent article that was mentioned on the
> parallel thread to this one ("Re: [agi] news bit: Is this a unified theory
> of the brain? ......") the people who had reservations about the bayesian
> claim of a unified theory made exactly the point that I am making:  they
> criticised that kind of model for being non-falsifiable.
>
> This non-falsifiability is exactly what I was referring to when I used the
> epicycle example:  you cannot criticise the epicycle model of planetary
> motion because the model is infinitely flexible!  It can explain anything,
> because it is built in such a way that parameters can be added to it to fit
> any data.  Scientifically, this is a barren exercise, as we have come to
> understand.  So the example I chose was not a strawman at all, but an exact
> summary of what other people (not just me) are saying about this kind of
> 'modelling' of neuron function.

It generalizes horribly, much worse than the Newtonian model, and you
need to feed it much more data to train it. Still, it's a useful
model.
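
To illustrate that flexibility: mathematically, nested epicycles are
a truncated Fourier series, and with as many terms as data points
they fit anything exactly, even noise. A minimal sketch (the sampled
"orbit" here is random on purpose):

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 9, endpoint=False)
    orbit = np.random.rand(9)                # any data at all

    # Design matrix of "epicycles": constant + 4 sine/cosine pairs = 9 params.
    k = np.arange(1, 5)
    X = np.column_stack([np.ones_like(t)] +
                        [f(kk * t) for kk in k for f in (np.cos, np.sin)])

    coef = np.linalg.solve(X, orbit)         # 9 equations, 9 unknowns
    print(np.allclose(X @ coef, orbit))      # True: a perfect "fit" to noise

A perfect fit carries no information when the model could have fit
anything; that is the sense in which it generalizes horribly.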


> In your comments above you say things that make no sense, it seems to me:
>  what do you refer to when you say that "you have to have some prior"?  Is
> this a bayesian prior that you have in mind?  I don't understand how that
> can relate to the issue of *how* you go about choosing between scientific
> models.

The correctness of a scientific hypothesis can be estimated using
probability, just like any other belief. The process starts from
prior probabilities, assigned more or less arbitrarily if you don't
know better. By observing evidence, you adjust these probabilities
and come to strongly prefer one hypothesis over another.

See this:
http://yudkowsky.net/bayes/technical.html
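
A tiny worked example of that process, with two made-up hypotheses
about a coin and an arbitrary 50/50 prior:

    # Two hypotheses about a coin: fair, or biased 80% toward heads.
    p_heads = {"fair": 0.5, "biased": 0.8}

    def update(belief, outcome):
        """One Bayesian update: prior times likelihood, renormalized."""
        post = {h: belief[h] * (p_heads[h] if outcome == "H" else 1.0 - p_heads[h])
                for h in belief}
        z = sum(post.values())
        return {h: p / z for h, p in post.items()}

    belief = {"fair": 0.5, "biased": 0.5}  # arbitrary starting prior
    for outcome in "HHHHTHHH":             # the observed evidence
        belief = update(belief, outcome)
    print(belief)  # roughly {'fair': 0.09, 'biased': 0.91}: evidence dominates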


> The Newtonian model of planetary motion is not 'simpler' than the
> epicycle model, if you take a very crude measure of what 'simple' means in
> this context.  And yet, in another way there are many physicists who would
> say that in a deep way, the Newtonian model *is* simpler, because it is less
> arbitrary, more testable, and has fewer free parameters.

Occam's razor is a heuristic for choosing a prior when you don't know
better. Notations tend to form in such a way that shorter notations
correspond to more probable referents. It's not always so, of course,
which is why you still need more direct evidence. To a human, an
angry Thor blasting lightning looks much simpler than quantum
mechanics, but that doesn't make Thor the better theory. It probably
does justify preferring Thor a priori, though.
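
One common way to make that heuristic precise is to weight the prior
by description length, so every extra bit needed to state a
hypothesis halves its starting probability. A sketch, with bit counts
invented purely for illustration:

    def occam_weight(description_bits):
        """Unnormalized prior weight: each extra bit halves the weight."""
        return 2.0 ** -description_bits

    # Invented description lengths; in a formal language Thor's would be huge.
    hypotheses = {"Thor": 20, "quantum mechanics": 35}
    weights = {h: occam_weight(b) for h, b in hypotheses.items()}
    z = sum(weights.values())
    print({h: w / z for h, w in weights.items()})  # Thor wins a priori...
    # ...until the evidence, via its far worse likelihood, buries him.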


-- 
Vladimir Nesov
[EMAIL PROTECTED]

