If released, Trump might declare an emergency and encircle the OpenAI
workspace with a wall, which China will pay for via a shell game of tariffs?

On Saturday, February 16, 2019, Robert Levy <[email protected]> wrote:

> The politics of it, reading between the lines of Musk's stated objectives
> in founding the organization, seem to be that of cartel-style technological
> gate-keeping, not shying away from the toolkit of crony capitalism and
> regulatory capture to steer R&D in directions favorable to the involved
> corporate entities.
>
> On Sat, Feb 16, 2019 at 4:36 PM Rob Freeman <[email protected]>
> wrote:
>
>> I don't know Ben. It feels more sinister to me. It feels like virtue
>> signalling.
>>
>> Very bad to see this entering hard science.
>>
>> I see the idea behind it, probably unconscious and so more dangerous,
>> that engineers and engineering are bad, and the world must be protected
>> from them.
>>
>> If every time you state an idea you must spend half an hour evaluating
>> the ethical consequences of it, in no time science will be completely
>> captive to politics.
>>
>>
>> On Sun, Feb 17, 2019 at 1:05 PM Ben Goertzel <[email protected]> wrote:
>>
>>> Hmmm...
>>> 
>>> About this "OpenAI keeping their language model secret" thing...
>>> 
>>> I mean -- clearly, keeping their language model secret is a pure PR
>>> stunt... Their
>>> algorithm is described in an online paper... and their model was
>>> trained on Reddit text ... so anyone else with a bunch of $$ (for
>>> machine-time and data-preprocessing hacking) can download Reddit
>>> (complete Reddit archives are available as a torrent) and train a
>>> language model similar to or better than OpenAI's ...
>>> 
>>> That said, their language model is a moderate improvement on the BERT
>>> model released by Google last year.   This is good AI work.  There is
>>> no understanding of semantics and no grounding of symbols in
>>> experience/world here, but still, it's pretty f**king cool to see what
>>> an awesome job of text generation can be done by these pure
>>> surface-level-pattern-recognition methods....
>>> 
>>> Honestly a lot of folks in the deep-NN/NLP space (including our own
>>> SingularityNET St. Petersburg team) have been talking about applying
>>> BERT-ish attention networks (with more comprehensive network
>>> architectures) in similar ways... but there are always so many
>>> different things to work on, and OpenAI should be congratulated for
>>> making these particular architecture tweaks and demonstrating them
>>> first... but not for the PR stunt of keeping their model secret...
>>> 
>>> Although perhaps they should be congratulated for revealing so clearly
>>> the limitations of the "openness" in their name "OpenAI."   I mean,
>>> we all know there are some cases where keeping something secret may be
>>> the most ethical choice ... but the fact that they're willing to take
>>> this step simply for a short-term, one-news-cycle PR boost indicates
>>> that openness may not be such an important value to them after all...
>>> 
>>> --
>>> Ben Goertzel, PhD
>>> http://goertzel.org
>>> 
>>> "Listen: This world is the lunatic's sphere,  /  Don't always agree
>>> it's real.  /  Even with my feet upon it / And the postman knowing my
>>> door / My address is somewhere else." -- Hafiz

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-M1523f60cc255e8b9ae3b0dcb
Delivery options: https://agi.topicbox.com/groups/agi/subscription
