Richard,

On 11/5/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> When the system is built, there will inevitably be bugs:  chunks of data
> that are corrupted along the way.  But if those bugs cause the final system
> to misbehave, there will be no way to track them down, because there will
> effectively be no way to test functional subsystems.  The debugging will be
> almost "blind".


Note that people can suffer an amazing amount of brain damage, so a few
errors shouldn't be completely disastrous. I am more worried about being
able to read out component values to sufficient precision.

Also, my scanning UV fluorescence microscope plan would lose NOTHING, even if
there were minor malfunctions during processing. The trick is to
look into the surface of the brain, then cut off some of what you have
already diagrammed and do some more. If you cut off a little too little or a
little too much, you are still OK provided that you don't cut off more than
~6 microns at a time.

> It would be comparable to you trying to implement the software required to
> run the entire air traffic control system of the United States by copying
> down the code that is read out to you over a noisy telephone line by someone
> who does not understand the code they are reading to you.


Not really, because there are LOTS of opportunities for error correction,
e.g. if a neuron is performing some sort of Bayesian computation, then its
synaptic efficacies should add up to 1.0, etc. However, this sort of
correction requires better NN/computational theory than we now have. I also
claim (but you will disagree, so spare yourself the wear and tear on your
keyboard) that exactly these same lapses in theory will eventually doom
present AGI efforts, even though there are no neuron-equivalents in the
code.
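To make the error-correction idea concrete: if theory tells us that a neuron's synaptic efficacies encode a probability distribution and must sum to 1.0, then a noisy read-out that violates the constraint both flags the error and can be partially repaired by projecting back onto the constraint. A minimal sketch (entirely my illustration; the 8 synapses and noise level are arbitrary assumptions, not anything from the scanning plan):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: efficacies that encode a probability distribution (sum to 1.0).
true_weights = rng.dirichlet(np.ones(8))

# Simulated scan read-out with additive measurement noise.
noisy_readout = true_weights + rng.normal(0.0, 0.02, size=8)

# The constraint violation itself detects that something went wrong...
violation = abs(noisy_readout.sum() - 1.0)

# ...and re-imposing the constraint (clip to nonnegative, renormalize)
# gives a corrected estimate without any further information.
corrected = np.clip(noisy_readout, 0.0, None)
corrected = corrected / corrected.sum()

print(violation > 0.0)                      # noise broke the constraint
print(abs(corrected.sum() - 1.0) < 1e-12)   # correction restored it
```

This only works to the extent that we know which constraints real neurons obey, which is exactly the gap in NN/computational theory the paragraph above complains about.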

> At the end of the day, if you end up with some problems in the code because
> you transcribed it wrong, how would you even begin to debug it?


If you got the basic neurons right, the network should correct such errors all
by itself.
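The self-correction claim has at least a toy analogue in attractor networks: a stored pattern is a fixed point of the dynamics, so a copy with a few flipped bits ("transcription errors") relaxes back to the original. A minimal Hopfield-style sketch (my illustration under that assumption, not a model of anyone's actual proposal):

```python
import numpy as np

# One stored +/-1 pattern, Hebbian weights, zero self-connections.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# "Transcription errors": flip three of the ten bits.
corrupted = pattern.copy()
corrupted[[0, 3, 7]] *= -1

# Let the dynamics run; the stored pattern is an attractor.
state = corrupted.astype(float)
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))   # True: the errors self-corrected
```

Of course, this only repairs errors that leave the state inside the right basin of attraction; it says nothing about errors in the weights themselves, which is where Loosemore's worry actually bites.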

> And if you heard that someone was thinking of doing such a project, would
> you not expect them to have a comprehensive plan for dealing with this
> problem, before they rush in and ask for billions of dollars to start
> collecting data?


Hopefully, some of that money will go toward refining the theory.

> This report - which is supposed to be a comprehensive look at the
> feasibility of WBE - makes almost no mention of this difficulty except
> toward the end, where it includes a passing reference to the fact that new
> types of debugging techniques will be required.


Obviously they have a screw loose. I believe that this problem IS doable, but I
also agree with you that it is a BIG problem, because it absolutely requires
new mathematics to ever get there.

> Given that this is one of the most serious objections to the WBE idea, I
> would have expected at least half of the document to deal with the issue.
>
> The fact that they have not done this confirms my suspicion that work on
> WBE is, at this point in time, a wild goose chase.  Good for keeping
> neuroscientists employed, but of little value otherwise.


Neuroscientists are probably the worst-suited group you could find for this.
They are NOT oriented toward making working hardware, there isn't a
mathematician among them, etc.

Steve Richfield



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/