Colin, you haven't answered my question. I don't understand how electrical
noise from neurons magically makes intelligence possible.

On Wed, Dec 23, 2020, 7:55 PM Colin Hales <[email protected]> wrote:

>
> On Thu, Dec 24, 2020 at 4:44 AM WriterOfMinds <[email protected]>
> wrote:
>
>> Colin reminds me of Searle. I think the claim that underlies all his
>> arguments is "cognition cannot be achieved by algorithms."
>>
>
> Thanks for opening this door.
>
> The *paper* (not me) claims, with empirical evidence, that a science built
> on the assumed claim "cognition can be achieved by algorithms in
> GP-computers" (an equivalence of nature and abstract model achieved nowhere
> else in the history of the science of natural phenomena) can only be fully,
> formally and conclusively tested if it includes null-hypothesis testing
> that does not presuppose that claim to be true. Assuming it to be true has
> failed, ambiguously and non-stop, for 65 years (for the failure details,
> see Supplementary 1-3), while the actual empirical tests that would
> properly prove it are simply missing. Restoring the necessary
> empirical-science option reveals that AI, as currently conducted, is a
> unique form of theoretical science: the physical activity of an entire
> community is indistinguishable in practice from what is called theoretical
> science everywhere else. Only AI does this. Neuroscience does not; it
> simply doesn't do AI directly at all, but could if it knew what could be
> done (see Supplementary 2-4).
>
> Section 5 details the proposed change to the testing (through the
> introduction of the neuromorphic chip and its empirical science), and at
> the end of Section 5, in black and white:
>
> *"Note that none of the above discussion is intended to imply that
> GP-computers cannot reach equivalence with natural brain function under
> circumstances not yet understood. That potentiality is not the issue here
> and is not contested. The issue here is how neuroscience and the science of
> AI must be configured to empirically determine any potential equivalence
> and the context in which it may happen."*
>
> If you see holes in the paper's argument, then supply evidence and show
> how it impacts the paper's specific claims. I can react helpfully to
> counter-evidence, not to opinions.
>
> The paper can possibly be read as completing Searle's argument from a
> science perspective. Whether it does or doesn't is moot and is for somebody
> else to evaluate; it changes nothing in the paper, and his work did not
> inspire it. The paper was founded on evidence: the measurement/detection of
> broken science operating at the heart of two scientific disciplines
> (neuroscience and AI), blinded to it by nothing more than discipline
> separation, habit, and 65 years of mimicry of mentors.
>
> Who's frustrated? Get in the queue! :-)
>
> Colin
>
>> Therefore, he regards any algorithmic approach (including algorithms that
>> model neuronal EM fields) as a non-starter. In his mind, experiments that
>> measure the achievements of any algorithmic approach or brain simulation
>> are still not "empirical," because any algorithm (including algorithms that
>> simulate the brain) is a theoretical model of cognition rather than a
>> potential achievement of cognition. An analogy that I remember from either
>> him or Searle or both is, "a simulation of a rainstorm will not get your
>> computer wet."
>>
>> I don't agree with him, but watching all of you talk past each other is
>> frustrating me.
>>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-M8b0624c265d8d6ca723968c6
Delivery options: https://agi.topicbox.com/groups/agi/subscription