With the major advancement in traditional logic (that I am not going to
name), traditional non-AGI programming can be used to monitor what the AGIs
(including specialized AGIs) are doing.
Jim Bromer


On Mon, Dec 14, 2015 at 11:20 PM, Dr Miles Dyson <
[email protected]> wrote:

> Yes, open source means more AIs at around the same level of capability.
>
> However, as you pointed out, their reasoning is flawed.  Bad humans may be
> rare enough relative to good humans, but that is because humans evolved
> social/ethical genes/memes, which most humans share.  There is no reason
> to assume the same will be true for AIs, since they will not share
> 99% of their genes (source code) the way humans do.  The only reason AIs
> would be good is if humans create social/ethical software modules that we
> somehow force into the majority of AIs, or if the AIs are forced to evolve
> in ways that make them interact with each other and/or humans.  If you
> force them to evolve in a social world, they will learn that it pays to get
> along with others, since you can get more done that way.  Or perhaps they
> will discover what the great philosopher Al Capone discovered: "You can get
> much farther with a kind word and a gun than you can with a kind word
> alone."
>
>
> On Mon, Dec 14, 2015 at 10:47 PM, Mike Archbold <[email protected]>
> wrote:
>
>> On 12/14/15, Ben Kapp <[email protected]> wrote:
>> > No, their argument that open AI leads to positive AI is as follows, in
>> > their own words:
>> >
>> >
>> > "If I’m Dr. Evil and I use it, won’t you be empowering me?
>> > Musk: I think that’s an excellent question and it’s something that we
>> > debated quite a bit.
>> > Altman: There are a few different thoughts about this. Just like humans
>> > protect against Dr. Evil by the fact that most humans are good, and the
>> > collective force of humanity can contain the bad elements, we think it’s
>> > far more likely that *many, many AIs, will work to stop the occasional
>> > bad actors* than the idea that there is a single AI a billion times more
>> > powerful than anything else. If that one thing goes off the rails or if
>> > Dr. Evil gets that one thing and there is nothing to counteract it, then
>> > we’re really in a bad place."
>> >
>> > https://medium.com/backchannel/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over-17e0e27dd02a#.aojim3ery
>> >
>> >
>> > Their rationale for making this openly available is the belief that
>> > doing so would result in many AGIs competing with each other, and in
>> > this way provide protection against any single superintelligence taking
>> > over.
>>
>>
>> Well, if the argument rests on having more "good" AIs than "bad" AIs
>> (plus the occasional Dr. Evil), then whether those many AIs run closed
>> source or open source, I don't see the difference.  Are you saying that
>> just making it open source will result in more AIs out there?
>>
>>
>> >
>> > On Mon, Dec 14, 2015 at 10:34 PM, Mike Archbold <[email protected]>
>> > wrote:
>> >
>> >> Regarding AI Safety:
>> >>
>> >> The argument appears to be that open source means the public can
>> >> examine the AI's source code in advance for potentially dangerous
>> >> behavior, I think.  I haven't read the expanded argument Ben links to.
>> >>
>> >> It seems intuitive to me that if the AI is truly autonomous,
>> >> learning and thinking on its own, creating new thought structures
>> >> dynamically, etc., it would go far beyond whatever happened to be in
>> >> its source code to start with, so I'm not sure how much of a safety
>> >> advantage is gained just by virtue of the code being public.  There
>> >> would be an advantage, no doubt, but remember we are talking about AI,
>> >> which is essentially less defined and less coded in advance than our
>> >> more typical open-sourced applications.
>> >>
>> >>
>> >> Mike A
>> >>
>> >> On 12/13/15, colin hales <[email protected]> wrote:
>> >> > Prediction: Without a significant overhaul of their strategy, that
>> >> > $Billion will create another stratum in the 65+ years of layers of
>> >> > narrow AI.  Deeper into the niches.  Useful... but...
>> >> >
>> >> > Deep automation (their 'digital intelligence') is not AGI.
>> >> >
>> >> > How many more years before the 65-year-old, many-$Billion experiment
>> >> > (that AGI involves computers) is found suspect enough to spend, say,
>> >> > $11.37 on the obvious alternative?
>> >> >
>> >> > End of year gripe. Am about to turn 60. My curmudgeon index is
>> >> > redlining.
>> >> >
>> >> > :)
>> >> > Colin
>> >> >
>> >> > -----Original Message-----
>> >> > From: "Ben Goertzel" <[email protected]>
>> >> > Sent: ‎14/‎12/‎2015 2:21 PM
>> >> > To: "AGI" <[email protected]>
>> >> > Subject: Re: [agi] Elon Musk Helps Launch OpenAI, Non-Profit
>> Dedicated
>> >> > toArtificial Intelligence - Funding: 1 billion USD
>> >> >
>> >> > My first thoughts on OpenAI are here:
>> >> >
>> >> > http://wp.goertzel.org/openai-quick-thoughts/
>> >> >
>> >> > On Sun, Dec 13, 2015 at 9:49 PM, <[email protected]> wrote:
>> >> >
>> >> > http://futurism.com/links/19499/
>> >> >
>> >> > AGI | Archives  | Modify Your Subscription
>> >> >
>> >> >
>> >> > --
>> >> >
>> >> > Ben Goertzel, PhD
>> >> > http://goertzel.org
>> >> >
>> >> > "The reasonable man adapts himself to the world: the unreasonable one
>> >> > persists in trying to adapt the world to himself. Therefore all
>> >> > progress
>> >> > depends on the unreasonable man." -- George Bernard Shaw
>> >> >
>> >> >
>> >> >
>> >> > -------------------------------------------
>> >> > AGI
>> >> > Archives: https://www.listbox.com/member/archive/303/=now
>> >> > RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
>> >> > Modify Your Subscription: https://www.listbox.com/member/?&;
>> >> > Powered by Listbox: http://www.listbox.com
>> >> >
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>>
>>
>>
>
>



