On 8/5/19, Matt Mahoney <mattmahone...@gmail.com> wrote:
> Narrow AI doesn't grow into AGI. AGI is lots of narrow AI specialists put
> together. Nobody in an organization can do what the organization does.
> Every member either knows one specific task well, or can refer you to
> someone who does. Kind of like the organization of the structures of your
> brain.

Traditionally that is the way I thought about it, and it has made
sense. But as I continue developing my ~AGI base program, it's
increasingly apparent that the more narrow AIs the base could handle
(in the future, not that far yet!), the smarter the whole thing would
potentially get, and the easier it becomes to add new narrow
functions that are progressively less narrow. I'm trying to advance my
theory along with the code.
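
To make that concrete, here is a minimal sketch (Python, purely
illustrative; the class names and the keyword-based routing are
assumptions for the example, not my actual base program) of a base
that registers narrow specialists and refers each task to whichever
one claims it:

class NarrowModule:
    """A narrow specialist: declares what it handles and does one thing."""
    def __init__(self, name, topics, handler):
        self.name = name
        self.topics = set(topics)
        self.handler = handler

    def can_handle(self, task):
        # Crude keyword routing, just to illustrate the referral idea.
        return any(topic in task.lower() for topic in self.topics)

class Base:
    """The common base: solves no task itself, only refers it onward."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def handle(self, task):
        for module in self.modules:
            if module.can_handle(task):
                return module.handler(task)
        return "no specialist registered for: " + task

base = Base()
base.register(NarrowModule("glossary", ["mean", "define"],
                           lambda t: "AGI means artificial general intelligence"))
base.register(NarrowModule("weather", ["forecast"],
                           lambda t: "no rain expected"))
print(base.handle("what does AGI mean?"))  # referred to the glossary specialist

Adding a new narrow function is then just another register() call; the
base itself only does the referring, much like the organization analogy.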


>
> On Sun, Aug 4, 2019, 10:39 PM Manuel Korfmann <m...@korfmann.info> wrote:
>
>> Shout out to Stefan for being so real here
>>
>>
>> signed
>>
>> The realist
>>
>> On 4. Aug 2019, at 23:54, Stefan Reich via AGI <agi@agi.topicbox.com>
>> wrote:
>>
>> > Maybe the
>> best interface in the early going from one "narrow AGI" to another
>> would be somewhere between an oversimplified English with some
>> structure and maybe JSON
>>
>> Yes. It's called agi.blue. Here's your JSON:
>>
>> [
>>   {"a":"AGI","b":"means","c":"artificial general intelligence","slice":""}
>> ]
>>
>>
>> http://agi.blue/bot/centralIndexGrab?q=agi
>>
>> Cheers // https://www.youtube.com/watch?v=bwUE_XJMbok
>>
>>
>>
>> On Sun, 4 Aug 2019 at 20:38, Mike Archbold <jazzbo...@gmail.com> wrote:
>>
>>> On 8/2/19, Secretary of Trades <costi.dumitre...@gmx.com> wrote:
>>> > 1), 2) and 5) nouns extracted for phraseology.
>>> >
>>> > Unidentified "F" Objects in 3), 4).
>>> >
>>> >
>>> > Judgment is a highly intelligent procedure; it shouldn't be treated
>>> > the same as discrimination or recognition.
>>> > Instead of 4): it should be able to use the existing communication
>>> > space in the usual manners.
>>>
>>> Ideally one "narrow AGI" should be able to communicate, as you put
>>> it, in the "existing communication space in the usual manners." I
>>> think Goertzel's structure calls for the use of a common cognitive
>>> structure, i.e., a common data structure in practical terms. Maybe
>>> the best interface in the early going from one "narrow AGI" to
>>> another would be somewhere between an oversimplified English with
>>> some structure and maybe JSON? I don't know. It is an issue that is
>>> on my mind, since many people are attempting their own clean-slate
>>> AGI nucleus.
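>>>
>>> As a purely illustrative sketch (the field names below are invented,
>>> not any agreed-upon format), one such interchange record might pair
>>> a subject/relation/object triple with a little provenance,
>>> serialized as JSON:
>>>
>>> import json
>>>
>>> # Hypothetical interchange record from one narrow AGI to another:
>>> # a triple plus the sending module and a confidence score.
>>> message = {
>>>     "subject": "AGI",
>>>     "relation": "means",
>>>     "object": "artificial general intelligence",
>>>     "sender": "glossary-module",
>>>     "confidence": 0.9,
>>> }
>>> print(json.dumps(message))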
>>>
>>> Mike A
>>>
>>> > And most of all, AI should be active (or usable) in offline
>>> > computing systems.
>>> >
>>> >
>>> > On 01.08.2019 22:40, Mike Archbold wrote:
>>> >> I like this editorial but I'm not sure "Narrow AGI" is the best
>>> >> label.
>>> >> At the moment I don't have a better name for it though. I mean, I
>>> >> agree in principle but it's like somebody saying "X is a liberal
>>> >> conservative." X might really be so, but it might be that... oh hell,
>>> >> why don't we just call it "AI"?
>>> >>
>>> >> Really, all technology performs some function. A function is kind of
>>> >> intrinsically narrow. Real estate sales, radio advertising, wire
>>> >> transfer, musical composition... In that light, all technology is
>>> >> narrow for its function.
>>> >>
>>> >> The difficulty with AGI is that it doesn't understand, reason, and
>>> >> judge as a human can, at a human level. But I think that a narrow
>>> >> AGI app is still a narrow function! Thus narrow AGI is what is
>>> >> going on: a narrow function, because all technology is basically
>>> >> narrow; we need it to do something specific. What narrow AI really
>>> >> is, is just much better good-old-fashioned programs that do some
>>> >> specific thing at a human level.
>>> >>
>>> >> My opinion is a "narrow AGI" would need:
>>> >>
>>> >> 1) increased common sense, the ability to form rudimentary
>>> >> understanding, reasoning, and judgment, pushing the boundary toward
>>> >> human level
>>> >> 2) can perform some function, some narrow function (all functions are
>>> >> narrow, it seems) very well, continually approaching human-level
>>> >> 3) Can handle wide variations in cases (DL level fuzzy pattern
>>> >> matching, patternism)
>>> >> 4) USES A COMMON BASE WITH OTHER NARROW AGIs which gets more
>>> >> competent
>>> >> 5) Becomes increasingly easier to specialize
>>> >>
>>> >>
>>> >> Mike A
>>> >>
>>> >> On 8/1/19, Costi Dumitrescu <costi.dumitre...@gmx.com> wrote:
>>> >>> So Mars gets conquered by AI robots. What Tensor Flaw is so
>>> >>> intelligent about surgery or proving math theorems?
>>> >>>
>>> >>> Bias?
>>> >>>
>>> >>>
>>> >>> On 01.08.2019 13:16, Ben Goertzel wrote:
>>> >>>> https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
>>
>>
>> --
>> Stefan Reich
>> BotCompany.de // Java-based operating systems
>>
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Md1e6155ebda471da77c90f36
Delivery options: https://agi.topicbox.com/groups/agi/subscription
