>> Models that are simple enough to debug are too simple to scale.
>> The contents of a knowledge base for AGI will be beyond our ability to comprehend.
Given sufficient time, anything should be understandable and debuggable. Size alone does not make something incomprehensible, and I defy you to point at *anything* that is truly incomprehensible to a smart human (for any reason other than that we currently lack knowledge of it). I've seen all the analogies about pets not understanding their owners, and the claims that AIs will have minds "immeasurably greater than our own," and I submit that this is all just speculation on your part. My contention is that there is a threshold of comprehension, that we are above it, and that beyond it the differences are just a matter of speed and how much you can hold in working memory at a time. I certainly don't buy the "mystical" view that sufficiently large neural nets will produce discoveries so complex that we can't understand them. I contend that if you can't explain it to a very smart human (given sufficient time), then you don't understand it yourself.

Give me *one* counter-example to the above . . . .
