Last year it was reported Bostrom said "nigger" in 1998 or thereabouts.
https://youtu.be/Lu_i042oaNg

On Tue, Apr 23, 2024 at 9:00 AM James Bowery <[email protected]> wrote:

> Oh, and let's not forget the FHI itself!  When I approached one of its
> geniuses during the COVID pandemic about setting up something like a Hutter
> Prize, but using epidemiological data, he insisted on empirical testing
> of the efficacy of the Algorithmic Information Criterion.  That sounds
> great if you are utterly incapable of rational thought.
>
> On Tue, Apr 23, 2024 at 8:54 AM James Bowery <[email protected]> wrote:
>
>> A book title I've considered:
>>
>> "The Unfriendly AGI:  How and Why The Global Economy Castrates Our Sons"
>>
>> Yudkowsky is basically a tool of The Unfriendly AGI.  LessWrong
>> spearheaded the sophistic attacks on The Hutter Prize.  Why?  So that there
>> is no recognition of the Algorithmic Information Criterion in the social
>> sciences.  If anything remotely like a Hutter Prize were to take root in
>> the social sciences, the TFR (total fertility rate) disaster being visited on the planet would be
>> over in very short order.
>>
>> On Mon, Apr 22, 2024 at 10:13 PM Matt Mahoney <[email protected]>
>> wrote:
>>
>>> Here is an early (2002) experiment, described on SL4 (a precursor to
>>> Overcoming Bias and LessWrong), on whether an unfriendly, self-improving AI
>>> could convince humans to let it escape from a box onto the internet.
>>> http://sl4.org/archive/0207/4935.html
>>>
>>> This is how actual science is done on AI safety. The results showed that
>>> attempts to contain such an AI would be hopeless: almost everyone let the
>>> (role-played) AI escape.
>>>
>>> Of course, the idea that a goal-directed, self-improving AI could even be
>>> developed in isolation from the internet seems hopelessly naïve in
>>> hindsight. Eliezer Yudkowsky, who I still regard as brilliant, was young
>>> and firmly believed that the unfriendly-AI problem (now called alignment)
>>> could and must be solved before it kills everyone, as if it were a really
>>> hard math problem. Now, after decades of effort, it seems he has given up
>>> hope. He organized communities of rationalists (the Singularity Institute,
>>> later MIRI), attempted to formally define human goals (coherent
>>> extrapolated volition), and developed timeless decision theory and the
>>> notion of information hazards (Roko's Basilisk), but to no avail.
>>>
>>> Vernor Vinge described the Singularity as an event horizon on the
>>> future. It cannot be predicted. The best we can do is extrapolate long-term
>>> trends like Moore's law, increasing quality of life, life expectancy, and
>>> economic growth. But who forecast the Internet, social media, social
>>> isolation, and population collapse? What are we missing now?

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M17d07796414b89d092d93d4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
