Boris,

The biological one should be emulated:

https://www.biorxiv.org/content/biorxiv/early/2018/11/27/478644.full.pdf


On 18.03.2019 02:56, Boris Kazachenko wrote:
We must define a process in which language can emerge from incrementally complex encoding of analog sensory input. Anything short of that is a cargo-cult AI.
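
A minimal sketch of what the first step of such incremental encoding might look like, in Python: cross-comparison of adjacent analog inputs into difference/match derivatives, then grouping by match sign. This is loosely in the spirit of CogAlg's first-level comparison, but every name and the `ave` threshold here are illustrative assumptions, not the project's actual code.

def cross_compare(signal):
    """Compare each adjacent pair of analog inputs (e.g. pixel
    intensities), returning (difference, match) tuples: the raw
    derivatives from which higher-level patterns would be composed."""
    derts = []
    for prev, curr in zip(signal, signal[1:]):
        diff = curr - prev       # directional difference
        match = min(prev, curr)  # overlap: a crude measure of similarity
        derts.append((diff, match))
    return derts

def segment_patterns(derts, ave=10):
    """Group consecutive comparisons by the sign of (match - ave):
    spans of above-average match form one pattern, spans of
    below-average match another. A next level would re-encode
    these patterns the same way, incrementally."""
    patterns, span, sign = [], [], None
    for diff, match in derts:
        s = match > ave
        if sign is not None and s != sign:
            patterns.append((sign, span))
            span = []
        span.append((diff, match))
        sign = s
    if span:
        patterns.append((sign, span))
    return patterns

row = [10, 12, 11, 40, 42, 41, 9, 8]  # toy 1D "retina"
print(segment_patterns(cross_compare(row)))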


On Sun, Mar 17, 2019 at 2:53 PM Jim Bromer <jimbro...@gmail.com> wrote:

    I guess I should add that I think strong AI may start out with
    simple 'principles' or methods, but it might start out with very
    complicated principles and methods as well. I do not think an
    attitude that animal physiology -must- be simple is very
    realistic. However, I do not see any good evidence to assume that
    simple methods cannot suffice as a starting point for stronger AI.
    On the other hand, I think there is lots of evidence that
    complicatedness is a major problem for stronger AI.

    So when I argue that the study of natural language processing is a
    major move toward strong AI, I am talking about AI that can adapt
    to special languages that are used frequently amongst a group,
    just as we have our own language to talk about what we are talking
    about. The average person would have no idea what I am talking
    about, but most of you can make some sense out of what I am saying
    (whether you agree with it or not). If a natural language
    processing program can adapt to novel usages of terms and
    sentences, then it can learn, and I would say that it would also
    need to have overcome the present-day hurdles of complicatedness
    in some way.

    I think there are undiscovered mathematical methods that will one
    day take a giant step over the present-day hurdle of complicatedness.
    Jim Bromer


    On Sun, Mar 17, 2019 at 10:01 AM Jim Bromer <jimbro...@gmail.com> wrote:

        This argument from Robert Levy is not quite right, in my
        opinion. While most animals do not have a sophisticated
        language, it can be seen that animals are capable of learning
        about routine events and of attaching meaning to linguistic
        cues (or other kinds of sensory events, like bells) tied to
        those routine generalizations. That would constitute a
        language, and it exemplifies the contention that collecting
        insight about the generalizations of a kind of event
        constitutes a symbolization of a precursor of that event. The
        knowledge that a precursor might represent an event thereby
        demonstrates that the animal has a basic 'linguistic' ability.
        And the idea that an animal can associate a learned signal
        with a possible event (like dinner) shows that the animal has
        the power of a 'linguistic' imagination.
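
        As a toy illustration of the signal-to-event association
        described above, here is a minimal sketch in Python; the
        co-occurrence model and every name in it are hypothetical,
        not any established library:

        from collections import defaultdict

        class Conditioner:
            """Toy 'animal' that counts how often an event follows a
            cue and predicts the event once the evidence passes a
            threshold."""

            def __init__(self, threshold=3):
                self.counts = defaultdict(lambda: defaultdict(int))
                self.threshold = threshold

            def observe(self, cue, event):
                # Record that `event` followed `cue` (e.g. bell -> dinner).
                self.counts[cue][event] += 1

            def predict(self, cue):
                # Return the most strongly associated event, if any:
                # the learned 'precursor' standing in for the event.
                if not self.counts[cue]:
                    return None
                event, n = max(self.counts[cue].items(), key=lambda kv: kv[1])
                return event if n >= self.threshold else None

        animal = Conditioner()
        for _ in range(4):
            animal.observe("bell", "dinner")
        print(animal.predict("bell"))     # -> dinner
        print(animal.predict("whistle"))  # -> None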
        Could designing a robot that has to learn to walk be the
        breakthrough in strong AI, according to Robert's thesis?
        There are some animals that can learn to walk within a few
        hours of being born; a foal is an example. Foals have spindly
        legs that splay a little with the first steps, but they are
        not mechanically designed for stability like a stationary
        landing pod on a spacecraft. The idea that artificially
        reproducing a process that is simple for some animals might
        represent a breakthrough in AI does not make sense, for one
        reason: it does not take complexity into account. (I am
        speaking of complicatedness, of course.) It is very easy to
        design AI programs that can operate within extremely simple
        domain data-spaces. The problem is dealing with extremely
        complicated domain environments where complexity is a major
        hurdle.
        It is a mistake to think that language research in AI is not a
        pathway towards AGI. However, it is a mistake to think that
        linguistic abilities are themselves strong AI, just as it is a
        mistake to think that designing a robot that can learn to walk
        is strong AI. Both of these challenges can be met by
        simplifying the environmental domain sufficiently. The
        challenge is finding a way that true learning can take place
        when confronted with thousands of complications.
        Jim Bromer


        On Thu, Mar 7, 2019 at 7:24 PM Robert Levy <r.p.l...@gmail.com> wrote:

            It's very easy to show that "AGI should not be designed
            for NL".  Just ask yourself the following questions:

            1. How many species demonstrate impressive leverage of
            intentional behaviors?  (My answer would be: all of them,
            though some more than others)
            2. How many species have language?  (My answer: only one)
            3. How biologically different do you think humans are from
            apes?  (My answer: not much different; the whole human
            niche is probably a consequence of one adaptive difference:
            cooperative communication by scaffolding of joint attention)

            I'm with Rodney Brooks on this: the hard part of AGI has
            nothing to do with language; it has to do with agents
            being highly optimized to control an environment in terms
            of ecological information supporting perception/action.
            Just as uplifting apes will likely require only minor
            changes, uplifting animaloid AGI will likely require only
            minor changes.  Even then we still haven't explicitly
            cared about language; we've cared about cooperation by
            means of joint attention, which can be made use of to
            culturally develop language.

            On Thu, Mar 7, 2019 at 12:05 PM Boris Kazachenko <cogno...@gmail.com> wrote:

                I would be more than happy to pay:
                https://github.com/boris-kz/CogAlg/blob/master/CONTRIBUTING.md,
                but I don't think you are working on AGI.
                No one here does, this is a NLP chatbot crowd. Anyone
                who thinks that AGI should be designed for NL data as
                a primary input is profoundly confused.


                On Thu, Mar 7, 2019 at 7:04 AM Stefan Reich via AGI <agi@agi.topicbox.com> wrote:

                    Not from you guys necessarily... :o) But I thought
                    I'd let you know.

                    Pitch:
                    https://www.meetup.com/Artificial-Intelligence-Meetup/messages/boards/thread/52050719

                    Let's see if it can be done... funny how some
                    hurdles always seem to appear when you're about to
                    finish something good. Something about the duality
                    of the universe I guess.

                    --
                    Stefan Reich
                    BotCompany.de // Java-based operating systems
