Understanding is the mind's grouping of things and actions by their
similarities and dissimilarities.
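
Roughly, in code -- a minimal sketch, assuming items are represented as toy
feature sets and compared with Jaccard similarity (the threshold and all
names below are illustrative, not a worked-out model):

# A minimal sketch of "understanding as grouping by similarity".
def similarity(a: set, b: set) -> float:
    """Jaccard similarity: shared features over all features."""
    return len(a & b) / len(a | b)

def group_by_similarity(items: dict, threshold: float = 0.5) -> list:
    """Greedily place each item in the first group whose founding
    member is similar enough; otherwise start a new group."""
    groups = []
    for name, features in items.items():
        for group in groups:
            if similarity(features, items[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

things = {
    "dog": {"animal", "four_legs", "barks"},
    "cat": {"animal", "four_legs", "meows"},
    "car": {"machine", "wheels", "drives"},
    "bus": {"machine", "wheels", "drives", "large"},
}
print(group_by_similarity(things))  # [['dog', 'cat'], ['car', 'bus']]

The point is only the shape of the operation: similar things collapse into
one group, dissimilar things stay apart.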

On Sun, Sep 12, 2021 at 2:43 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> Bar a subtle difference, we're mostly in agreement. To my understanding,
> de facto fantastical claims aren't worth researching.
>
> At a certain level of abstraction, contextual relevance is lost. This is
> reported as being the bane of object-oriented programming. Why then delve
> into the abstract without a de-abstraction methodology in place with which
> to systematically process the information?
>
> Therefore, rather state the research question simply and clearly, even
> hypothetically. Perhaps then, measurable progress would be more likely.
>
> I readily accept that my mind does not think like most other minds do, but
> to my mind, asking a question that has already been answered serves little
> intellectual purpose.
>
> ------------------------------
> *From:* Mike Archbold <jazzbo...@gmail.com>
> *Sent:* Saturday, 11 September 2021 23:36
> *To:* AGI <agi@agi.topicbox.com>
> *Subject:* Re: [agi] UNDERSTANDING -- Part I -- the Survey, online
> discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are invited
>
> On 9/11/21, Nanograte Knowledge Technologies <nano...@live.com> wrote:
> > A digital machine cannot understand. I don't know why we're trying to
> > invoke a reality which simply isn't. Calling a dog's tail a leg does not
> > add up to it having five legs. We're just playing word games with
> > existentialism, trying to give gestalt to an elective illusion. It's
> > also referred to as forcing the issue.
> >
> > Give the machine a comprehension test then, using second- and/or
> > third-language proficiency expressions in pseudo-random foreign dialect
> > and un-English grammar. What result would you get? Understanding,
> > advanced recognition, or functional failure?
> >
> > My guess is, at best, an error message stating: "Would you repeat that,
> > please? I don't understand." AI-driven Dragon Dictate, although a most
> > useful program, is a prime example of this. The routine would also end
> > up in a fatal loop. Why? Because it's probably 3X+1 oriented.
> >
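
(An aside: "3X+1" presumably alludes to the Collatz iteration, whose only
known cycle is 4 -> 2 -> 1. A minimal sketch, with the function name and
step cap purely illustrative:

def collatz_orbit(n: int, max_steps: int = 1000) -> list:
    """Apply the 3x+1 rule until the trajectory reaches 1 or the cap."""
    orbit = [n]
    while n != 1 and len(orbit) < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        orbit.append(n)
    return orbit

print(collatz_orbit(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]

Whether any real dictation routine behaves this way is, of course, the
poster's speculation.)
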
> > Suppose understanding were the beginning of wisdom; what would
> > understanding then be?
> >
> > I think the more realistic research question should be: "Could an AGI
> > entity - even a biomechanical one - be encoded in such a manner as to
> > achieve a lower level of recognizable, clinical consciousness (when
> > compared to humans in general)?"
>
> The reality is that nobody claims their machine is conscious -- but people
> regularly claim their machine understands, without saying what that means.
>
>
> >
> > Or, stated differently: considering modern service bots, consciousness
> > can be faked. How do we tell fake from real?
> >
> > ________________________________
> > From: Mike Archbold <jazzbo...@gmail.com>
> > Sent: Saturday, 11 September 2021 04:50
> > To: AGI <agi@agi.topicbox.com>
> > Subject: Re: [agi] UNDERSTANDING -- Part I -- the Survey, online
> > discussion: Sunday 10 a.m. Pacific Time, evening in Europe, you are
> > invited
> >
> > It's an easy question to answer... if we know what the machine
> > understands, we know what it can do. If we don't know what it
> > understands, we might not. So that's why we don't want sloppy
> > definitions of understanding in an opaque age of gigantic neural
> > networks.
> >
> > On 9/10/21, Matt Mahoney <mattmahone...@gmail.com> wrote:
> >> I don't understand why we are so hung up on the definition of
> >> understanding. I think this is like the old debate over whether machines
> >> could think. Can submarines swim?
> >>
> >> Philosophy is arguing about the meanings of words. It is the opposite of
> >> engineering, which is about solving problems. Define what you want the
> >> machine to do and figure out how to do it.
> >>
> >> I know what it means for a human to understand or think or be conscious.
> >> For machines it's undefined. What problem does defining them solve?
> >> Machines learn, predict, and act to satisfy goals. What else do you want
> >> them to do?
> >>
> >> On Fri, Sep 10, 2021, 12:06 AM Mike Archbold <jazzbo...@gmail.com> wrote:
> >>> On 9/9/21, WriterOfMinds <jennifer.hane....@gmail.com> wrote:
> >>> > Hey Mike ... I took a look at the Survey doc, and it appears that a
> >>> > lot of the opinions are under the wrong names. You've entered my
> >>> > definition as James Bowery's, Daniel Jue's definition as mine, and
> >>> > so forth (looks like an "off by one" sort of error that continues
> >>> > down the document).
> >>>
> >>>
> >>> I think the problem is only that I put the name after the
> >>> description, right? I'll switch it around tomorrow so that you see
> >>> the name first. I quickly checked yours and it looked right.
> >>>
> >>>
