Jim, I still think you haven't passed the Turing test. I've seen these
patterns of behavior before; they remind me of why I became suspicious,
more than a year ago, that you were a Sophia-type bot. If you learned the
Po1, such patterns would probably start to disappear over time.

On Mon, Nov 1, 2021 at 8:44 PM Jim Bromer <[email protected]> wrote:

> Quan:
> Yes, or Not Sure. I do not actually know what you mean.
> The problem with the simplistic answer that you demanded is that I am not
> sure what you mean by my "approach to programmable logic," and it is
> complicated even more by your use of the terms "programmable" and
> "logic." Your use of the term "programmable" is odd. Do you think that an
> AGI would have to be created without programming? I assume that your
> answer to that question would be: of course not. I am also not sure
> what the limitations of logic are, so I am not sure how I might use
> "programmable logic" in my approach, whatever you meant by that.
> Finally, there are the social side issues. For example, did you think that
> Nano's characterization of my approach was accurate?
