Operating an institute within a large, historic university like
Oxford probably requires consummate political skills, especially when
navigating issues like affiliation with hot-button
movements/ideologies like EA...

Anders described the demise of FHI as "death by bureaucracy," but that
doesn't quite get to the heart of it... bureaucracy can also be
wielded as a weapon by those who occupy the right positions in a
complex organization and have a goal they want to achieve...

In the end, FHI was radical and Oxford is a conservative institution,
so it's not hard to see how some in the Oxford hierarchy would have
wanted to be rid of them, and would have taken things like the EA
association and Nick's unfortunate long-ago online messages as an
excuse to use bureaucracy to push them out...

As it happens, their radical ideology (which drove some but not all of
their work; Anders was always more moderate and open-minded) was not
one I entirely agreed with anyway, but...

It was fun to do AGI-12 with them at Oxford; those were excellent times ;)

Of course, all the researchers will keep doing their thing, just not
at Oxford.  Anders gave a great talk at our BGI-24 event in Panama a
couple of months back...

-- Ben

On Wed, Apr 24, 2024 at 5:25 AM Bill Hibbard via AGI
<[email protected]> wrote:
>
> When Nick Bostrom hired Anders Sandberg at FHI, he should have made
> Anders the director and become a researcher, as Eliezer Yudkowsky
> is a researcher at MIRI with someone else as director. I understand
> this as a person who should not be an institutional director - my
> inevitable personality conflicts would damage any institute I ran.
>
> With Anders as director I think FHI would not have been closed.
>
>
> On Wed, 24 Apr 2024, Ben Goertzel wrote:
> > FHI shared an office with the Institute for Effective Altruism, and EA
> > has not been a terribly popular movement since SBF.
> >
> > The EA association, combined with off-and-on attention to some old
> > racist-appearing comments Nick made online years ago, could perhaps
> > have contributed to FHI's alienation from the Oxford bureaucracy.
> >
> > CSER at Cambridge has essentially the same mission/theme, but it is
> > run more by the UK academic establishment, so it runs little risk of
> > cancellation...
> >
> > -- Ben
> >
> >
> >
> > On Tue, Apr 23, 2024 at 12:35 PM James Bowery <[email protected]> wrote:
> >>
> >> Last year it was reported Bostrom said "nigger" in 1998 or thereabouts.
> >> https://youtu.be/Lu_i042oaNg
> >>
> >> On Tue, Apr 23, 2024 at 9:00 AM James Bowery <[email protected]> wrote:
> >>>
> >>> Oh, and let's not forget the FHI itself!  When I approached one of its
> >>> geniuses during the COVID pandemic about setting up something like a
> >>> Hutter Prize, but using epidemiological data, he insisted on empirical
> >>> testing of the efficacy of the Algorithmic Information Criterion.  That
> >>> sounds great if you are utterly incapable of rational thought.
> >>>
> >>> On Tue, Apr 23, 2024 at 8:54 AM James Bowery <[email protected]> wrote:
> >>>>
> >>>> A book title I've considered:
> >>>>
> >>>> "The Unfriendly AGI:  How and Why The Global Economy Castrates Our Sons"
> >>>>
> >>>> Yudkowsky is basically a tool of The Unfriendly AGI.  LessWrong
> >>>> spearheaded the sophistic attacks on The Hutter Prize.  Why?  So that 
> >>>> there is no recognition of the Algorithmic Information Criterion in the 
> >>>> social sciences.  If anything remotely like a Hutter Prize were to take 
> >>>> root in the social sciences, the TFR disaster being visited on the 
> >>>> planet would be over in very short order.
> >>>>
> >>>> On Mon, Apr 22, 2024 at 10:13 PM Matt Mahoney <[email protected]> 
> >>>> wrote:
> >>>>>
> >>>>> Here is an early (2002) experiment described on SL4 (a precursor to
> >>>>> Overcoming Bias and LessWrong) on whether an unfriendly, self-improving
> >>>>> AI could convince humans to let it escape from a box onto the internet.
> >>>>> http://sl4.org/archive/0207/4935.html
> >>>>>
> >>>>> This is how actual science is done on AI safety. The results showed
> >>>>> that attempts to contain such an AI would be hopeless: almost everyone
> >>>>> let the (role-played) AI escape.
> >>>>>
> >>>>> Of course, the idea that a goal-directed, self-improving AI could even
> >>>>> be developed in isolation from the internet seems hopelessly naïve in
> >>>>> hindsight. Eliezer Yudkowsky, whom I still regard as brilliant, was
> >>>>> young and firmly believed that the unfriendly-AI problem (now called
> >>>>> alignment) could be and must be solved before it killed everyone, as
> >>>>> if it were a really hard math problem. Now, after decades of effort,
> >>>>> it seems he has given up hope. He organized communities of
> >>>>> rationalists (the Singularity Institute, later MIRI), attempted to
> >>>>> formally define human goals (coherent extrapolated volition), and
> >>>>> developed timeless decision theory and the notion of information
> >>>>> hazards (Roko's Basilisk), but to no avail.
> >>>>>
> >>>>> Vernor Vinge described the Singularity as an event horizon on the
> >>>>> future. It cannot be predicted. The best we can do is extrapolate
> >>>>> long-term trends like Moore's law, increasing quality of life, life
> >>>>> expectancy, and economic growth. But who forecast the Internet, social
> >>>>> media, social isolation, and population collapse? What are we missing
> >>>>> now?
> >>
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > [email protected]
> >
> > "One must have chaos in one's heart to give birth to a dancing star"
> > -- Friedrich Nietzsche
> >



-- 
Ben Goertzel, PhD
[email protected]

"One must have chaos in one's heart to give birth to a dancing star"
-- Friedrich Nietzsche

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M317f42c4fe76607470bae1c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription
