FHI shared an office with the Institute for Effective Altruism, which has
not been a terribly popular movement since SBF.

The EA association, combined with off-and-on attention to some old
racist-appearing comments Nick made online years ago, could perhaps
have contributed to FHI's alienation from the Oxford bureaucracy.

CSER at Cambridge has essentially the same mission/theme, but it is run
more by the UK academic establishment and so runs little risk of
cancellation...

-- Ben



On Tue, Apr 23, 2024 at 12:35 PM James Bowery <jabow...@gmail.com> wrote:
>
> Last year it was reported Bostrom said "nigger" in 1998 or thereabouts.
> https://youtu.be/Lu_i042oaNg
>
> On Tue, Apr 23, 2024 at 9:00 AM James Bowery <jabow...@gmail.com> wrote:
>>
>> Oh, and let's not forget the FHI itself!  When I approached one of its 
>> geniuses during the COVID pandemic about setting up something like a Hutter 
>> Prize, except using epidemiological data, he insisted on empirical testing of 
>> the efficacy of the Algorithmic Information Criterion.  That sounds great if 
>> you are utterly incapable of rational thought.
>>
>> On Tue, Apr 23, 2024 at 8:54 AM James Bowery <jabow...@gmail.com> wrote:
>>>
>>> A book title I've considered:
>>>
>>> "The Unfriendly AGI:  How and Why The Global Economy Castrates Our Sons"
>>>
>>> Yudkowsky is basically a tool of The Unfriendly AGI.  LessWrong spearheaded 
>>> the sophistic attacks on The Hutter Prize.  Why?  So that there is no 
>>> recognition of the Algorithmic Information Criterion in the social 
>>> sciences.  If anything remotely like a Hutter Prize were to take root in 
>>> the social sciences, the TFR (total fertility rate) disaster being visited 
>>> on the planet would be over in very short order.
>>>
>>> On Mon, Apr 22, 2024 at 10:13 PM Matt Mahoney <mattmahone...@gmail.com> 
>>> wrote:
>>>>
>>>> Here is an early (2002) experiment, described on SL4 (precursor to 
>>>> Overcoming Bias and LessWrong), on whether an unfriendly self-improving AI 
>>>> could convince humans to let it escape from a box onto the internet.
>>>> http://sl4.org/archive/0207/4935.html
>>>>
>>>> This is how actual science is done on AI safety. The results showed that 
>>>> attempts to contain it would be hopeless. Almost everyone let the 
>>>> (role-played) AI escape.
>>>>
>>>> Of course, the idea that a goal-directed, self-improving AI could even be 
>>>> developed in isolation from the internet seems hopelessly naïve in 
>>>> hindsight. Eliezer Yudkowsky, whom I still regard as brilliant, was young 
>>>> and firmly believed that the unfriendly AI problem (now called alignment) 
>>>> could be and must be solved before it kills everyone, as if it were a 
>>>> really hard math problem. Now, after decades of effort, it seems he has 
>>>> given up hope. He organized communities of rationalists (the Singularity 
>>>> Institute, later MIRI), attempted to formally define human goals (coherent 
>>>> extrapolated volition), and developed timeless decision theory and the 
>>>> notion of information hazards (Roko's Basilisk), but to no avail.
>>>>
>>>> Vernor Vinge described the Singularity as an event horizon on the future: 
>>>> it cannot be predicted. The best we can do is extrapolate long-term trends 
>>>> like Moore's law, increasing quality of life, life expectancy, and 
>>>> economic growth. But who forecast the Internet, social media, social 
>>>> isolation, and population collapse? What are we missing now?
>



-- 
Ben Goertzel, PhD
b...@goertzel.org

"One must have chaos in one's heart to give birth to a dancing star"
-- Friedrich Nietzsche

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M65cfae2d16e21ae8f460a758
Delivery options: https://agi.topicbox.com/groups/agi/subscription
