Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)

> The argument I presented was not a "conjectural assertion", it made the
> following coherent case:
>
>    1) There is a high prima facie *risk* that intelligence involves a
> significant amount of irreducibility (some of the most crucial
> characteristics of a complete intelligence would, in any other system,
> cause the behavior to show a global-local disconnect), and

The above statement contains two fuzzy terms -- "high" and "significant".

You have provided no evidence for any particular quantification of these
terms; so far as I can tell, your evidence is qualitative and intuitive.
Your quantification of them therefore seems to me a conjectural assertion
unsupported by evidence.

>    2) Because of the unique and unusual nature of complexity there is
> only a vanishingly small chance that we will be able to find a way to
> assess the exact degree of risk involved, and
>
>    3) (A corollary of (2)) If the problem were real, but we were to
> ignore this risk and simply continue with an "engineering" approach
> (pretending that complexity is insignificant),

The engineering approach does not pretend that complexity is
insignificant.  It simply denies that the complexity of intelligent
systems leads to the sort of irreducibility you suggest.

Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need in
order to create a brain emulation (not that this is what I'm trying to
do) -- assuming the goal was not to emulate some specific human brain
exactly, based on observing the behaviors it generates, but merely to
reproduce the brainlike character of the system.

> then the *only* evidence
> we would ever get that irreducibility was preventing us from building a
> complete intelligence would be the fact that we would simply run around
> in circles all the time, wondering why, when we put large systems
> together, they didn't quite make it, and

No.  Experimenting with AI systems could yield evidence supporting the
irreducibility hypothesis more directly than that.  I doubt such
experiments will, but it's possible.  For instance, we might discover
that creating more and more intelligent systems inevitably presents
more and more complex parameter-tuning problems, so that
parameter-tuning becomes the bottleneck.  That would suggest that some
highly expensive evolutionary or ensemble approach, of the kind you're
suggesting, might be necessary.
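
To make that scenario concrete, here is a minimal toy sketch in Python
of the sort of expensive evolutionary tuning loop such a bottleneck
would force on us.  Everything in it is hypothetical: the function
measured_intelligence is a stand-in for whatever costly build-and-test
procedure would actually score a candidate system.

    import random

    # Hypothetical stand-in for the "measured intelligence" of a system
    # built with the given parameter vector.  In the bottleneck scenario
    # each evaluation is extremely expensive (build the system, test it).
    def measured_intelligence(params):
        # A toy landscape: good settings cluster near 0.3, plus noise.
        return -sum((p - 0.3) ** 2 for p in params) + random.gauss(0, 0.01)

    def evolve(n_params=10, pop_size=20, generations=50, sigma=0.05):
        # Start from random parameter settings.
        pop = [[random.random() for _ in range(n_params)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=measured_intelligence, reverse=True)
            parents = scored[:pop_size // 4]   # keep the best quarter
            # Refill the population with mutated copies of the survivors.
            pop = parents + [
                [p + random.gauss(0, sigma) for p in random.choice(parents)]
                for _ in range(pop_size - len(parents))
            ]
        return max(pop, key=measured_intelligence)

    best = evolve()
    print("best parameters found:", [round(p, 2) for p in best])

The relevant feature is the cost, not the algorithm: pop_size *
generations evaluations, each of which would here mean building and
testing an entire system.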

>    4) Therefore we need to adopt a "Precautionary Principle" and treat
> the problem as if irreducibility really is significant.
>
>
> Whether you like it or not - whether you've got too much invested in the
> contrary point of view to admit it, or not - this is a perfectly valid
> and coherent argument, and your attempt to try to push it into some
> lesser realm of a "conjectural assertion" is profoundly insulting.

The form of the argument is coherent and valid; but the premises
involve fuzzy quantifiers whose values you are apparently setting by
intuition, and whose specific values sensitively impact the truth value
of the conclusion.
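
To illustrate that sensitivity with numbers (mine, purely illustrative,
not anything you proposed): read the risk as a probability that
irreducibility is real, attach toy costs to each strategy, and the
precautionary conclusion holds or fails depending on where in the fuzzy
range the value falls.

    # Toy expected-cost comparison.  All numbers are my own assumptions,
    # chosen only to show the conclusion tracks the fuzzy quantifiers.
    C_ENGINEERING_FAILS = 100.0  # cost of "running in circles" if irreducibility is real
    C_ENGINEERING_OK    = 10.0   # cost of the engineering approach if it isn't
    C_PRECAUTIONARY     = 40.0   # cost of the precautionary program either way

    def precaution_warranted(p_irreducible):
        expected_engineering = (p_irreducible * C_ENGINEERING_FAILS
                                + (1 - p_irreducible) * C_ENGINEERING_OK)
        return C_PRECAUTIONARY < expected_engineering

    for p in (0.1, 0.3, 0.5, 0.7):
        print(f"risk={p:.1f}: precautionary principle warranted? "
              f"{precaution_warranted(p)}")

With these particular costs the conclusion flips at a risk of about
1/3; the argument's validity alone doesn't settle it until the fuzzy
quantifiers are pinned down.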

-- Ben
