Re: [agi] OpenAI is not so open.

2020-02-22 Thread immortal . discoveries
Note, Ben, that's probably my messiest post on here; check back later and I'll have my AGI more put together. I've sort of tank-rampaged a lot and sound annoying at times, but there is good behind it.

Re: [agi] OpenAI is not so open.

2020-02-22 Thread Amara Angelica via AGI
LOL, Animal Farm. > On Feb 22, 2020, at 9:22 PM, Ben Goertzel wrote: > Hah, that's a beautiful formulation ;) > "All AGI should be open -- but some AGI should be more open than others" ... > What's invariant between the Orwell case and the OpenAI case is the capitalist pigs, right?

Re: [agi] OpenAI is not so open.

2020-02-22 Thread Ben Goertzel
Hah, that's a beautiful formulation ;) "All AGI should be open -- but some AGI should be more open than others" ... What's invariant between the Orwell case and the OpenAI case is the capitalist pigs, right? On Sun, Feb 23, 2020 at 8:22 AM Mike Archbold wrote: > This reminds me of "some people

Re: [agi] Re: On AI funding

2020-02-22 Thread Daniel Jue
I am self-funding through a regular salaried position at a cybersecurity company and through ML/NLP consulting gigs at my company, Cognami LLC. Waiting for handouts or angel investors prior to an MVP is like waiting on lottery tickets. In the process of my self-funding I get the benefit of

Re: [agi] OpenAI is not so open.

2020-02-22 Thread Mike Archbold
This reminds me of "some people are more equal than other people," as "some open AI is more open than other open AI" On Saturday, February 22, 2020, Matt Mahoney wrote: > Disturbing trend toward secrecy in OpenAI's efforts to develop friendly AGI.

[agi] OpenAI is not so open.

2020-02-22 Thread Matt Mahoney
Disturbing trend toward secrecy in OpenAI's efforts to develop friendly AGI. https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/

Re: [agi] AGI questions

2020-02-22 Thread immortal . discoveries
Well, the problem I describe above is that your prediction weights do work in generating related new good data, but if you have prior bad weights, you must first go back and answer those questions before answering deeper questions. For example, you ask "I will get this wall down using _ (god)", which is

Re: [agi] AGI questions

2020-02-22 Thread immortal . discoveries
Intelligence does have a single goal: survival. Broken down, there are subgoals, and if it isn't General Intelligence then this is where the top goal is not fulfilled as much. And yes, goal updating is needed for stepping among those subgoals. There is an ASI. Bigger brains (aka big diverse

Re: [agi] AGI questions

2020-02-22 Thread James Bowery
Although the LessWrong guys are right to say that the "Cartesian Divide" is a problem with AIXI, they are wrong about the nature of that problem. This is very much related to the problem of correcting "bias" that has everyone going into
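
[Background added for context; the formula below is standard AIXI, not part of the original message. Hutter's AIXI picks its next action by expectimax planning over every environment program consistent with the interaction history so far, weighting each program q by its length:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          \big(r_k + \cdots + r_m\big)
          \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, the a/o/r are actions, observations, and rewards, \ell(q) is the length of program q in bits, and m is the planning horizon. The "Cartesian divide" complaint is that the programs q model everything except the agent: AIXI's own computation sits outside the world it models, like a Cartesian mind outside physics.]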

Re: [agi] AGI questions

2020-02-22 Thread keghnfeem
On Friday, February 21, 2020, at 8:34 PM, Matt Mahoney wrote: > You can't even define consciousness or free will. How would you know if a machine had it? What is your test? I can within my framework. Since you work in a different framework from mine, I think you would not understand and you

Re: [agi] AGI questions

2020-02-22 Thread James Bowery
PS: I use "singularity" in its vernacular rather than formal meaning since, as Matt has quite adequately pointed out, that word doesn't really belong in physics. On Sat, Feb 22, 2020 at 1:21 PM James Bowery wrote: > On Sat, Feb 22, 2020 at 12:32 PM Stanley Nilsen wrote: >> On 2/22/20 1:22

Re: [agi] AGI questions

2020-02-22 Thread James Bowery
On Sat, Feb 22, 2020 at 12:32 PM Stanley Nilsen wrote: > On 2/22/20 1:22 AM, WriterOfMinds wrote: > ... > I recommend looking up the "orthogonality thesis" and doing some reading thereon. Morality, altruism, "human values," etc. are distinct from intellectual capacity, and must be

Re: [agi] AGI questions

2020-02-22 Thread WriterOfMinds
On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote: > Simply to say that a "goal" is the way you determine what is best (e.g. does it "lead to" the goal) is to miss the point that goals need to constantly change when circumstances change. Instrumental goals or subgoals

Re: [agi] AGI questions

2020-02-22 Thread Stanley Nilsen
On 2/22/20 1:22 AM, WriterOfMinds wrote: ... I recommend looking up the "orthogonality thesis" and doing some reading thereon. Morality, altruism, "human values," etc. are distinct from intellectual capacity, and must be intentionally incorporated into AGI if you

Re: [agi] On AI funding

2020-02-22 Thread James Bowery
Oh, great! Let a thousand analogues to the SLS bloom! If government had its head screwed on straight with regard to AI, the NSF would have, long ago, taken Matt's advice, and we would have essentially solved the natural language understanding problem by
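
[Background added for readers outside the list: "Matt's advice" refers to Matt Mahoney's long-standing argument that text compression is the right benchmark and funding target for language understanding, since a better statistical model of text encodes it in fewer bits. A minimal sketch of that evaluation idea in Python, with bz2 standing in for whatever model is being scored:

import bz2

# Compression ratio as a crude proxy for language understanding:
# the better a program models the statistics of the text, the
# fewer bits it needs to encode it (smaller ratio = better model).
def compression_ratio(path: str) -> float:
    raw = open(path, "rb").read()
    return len(bz2.compress(raw, 9)) / len(raw)

# enwik8 is the Hutter Prize corpus: the first 10^8 bytes of English Wikipedia.
print(compression_ratio("enwik8"))

Real entries in Mahoney's Large Text Compression Benchmark replace bz2 with far stronger context-mixing models; the scoring idea is the same.]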

[agi] Re: On AI funding

2020-02-22 Thread Basile Starynkevitch
On 2/22/20 4:30 PM, Basile Starynkevitch wrote: On 2/22/20 4:22 PM, Alan Grimes via AGI wrote: I don't have much to say at this very minute, but I've been holding on to this link for a few days and really should share it:

[agi] fyi,

2020-02-22 Thread Alan Grimes via AGI
I don't have much to say at this very minute, but I've been holding on to this link for a few days and really should share it: https://www.technologyreview.com/f/615174/the-white-house-will-spend-hundreds-of-millions-more-on-ai-research/

Re: [agi] AGI questions

2020-02-22 Thread immortal . discoveries
On Saturday, February 22, 2020, at 3:22 AM, WriterOfMinds wrote: > But it's entirely possible to *know* the right thing to do and still choose not to do it. No, prediction in our brains is guided by not just frequency but also reward on the nodes. Intelligence/morality is intertwined in
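
[A minimal sketch of the claim above, added for illustration; all names are hypothetical, not from the original post. Candidate next nodes are scored by observed frequency times a reward weight on the node, so prediction and valuation share one mechanism:

from collections import defaultdict

class RewardWeightedPredictor:
    """Toy next-node predictor: score = frequency * reward weight."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> node -> count
        self.reward = defaultdict(lambda: 1.0)               # node -> reward weight

    def observe(self, context, node):
        self.counts[context][node] += 1

    def reinforce(self, node, r, lr=0.1):
        # Reward nudges the weight on a node up or down.
        self.reward[node] += lr * r

    def predict(self, context):
        cands = self.counts[context]
        if not cands:
            return None
        # Frequency alone would pick the most common continuation;
        # the reward factor biases prediction toward valued nodes.
        return max(cands, key=lambda n: cands[n] * self.reward[n])

p = RewardWeightedPredictor()
p.observe("see food", "eat")
p.observe("see food", "eat")
p.observe("see food", "share")
p.reinforce("share", 20.0)    # strong positive reward on "share"
print(p.predict("see food"))  # "share" wins: 1 * 3.0 > 2 * 1.0

On this toy picture, changing the reward weights changes what gets predicted, which is one way to read the post's claim that intelligence and morality are not separable modules.]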

Re: [agi] AGI questions

2020-02-22 Thread WriterOfMinds
> If you take the morality out of intelligence then you should use the term "power." But that's exactly what intelligence is: a form of power, specifically concerned with the skills of thinking, planning, strategizing, etc. Just look at a standard IQ (Intelligence Quotient) test: nothing