Re: [agi] FHI is shutting down

2024-04-24 Thread Bill Hibbard via AGI

On Wed, 24 Apr 2024, Ben Goertzel wrote:

It was fun to do AGI-12 with them at Oxford, those were excellent times ;)


Yeah, AGI-12 was great!

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M8f5cdd72fe14cb45a195706e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-24 Thread Ben Goertzel
Operating an institute within a large historical university like
Oxford probably requires consummate political skills, yeah... especially
when dodging issues like affiliation with hot-button
movements/ideologies like EA...

Anders described the demise of FHI as "death by bureaucracy," but
that doesn't necessarily get to the point... bureaucracy can sometimes
be wielded as a weapon by people who hold a certain position in a
complex organization and have a goal they want to achieve.

In the end FHI was radical and Oxford is a conservative organization,
so it's not hard to see how some in the Oxford hierarchy would want to
get rid of them, and would take stuff like EA and Nick's unfortunate
long-ago online messages as an excuse to use bureaucracy to push them
out...

It happens that their radical ideology (which drove some but not all
of their work; Anders was always more moderate/open) was not one I
entirely agreed with anyway, but...

It was fun to do AGI-12 with them at Oxford, those were excellent times ;)

Of course all the researchers will keep doing their things, just not
at Oxford.  Anders gave a great talk at our BGI-24 event in Panama a
couple of months back...

-- Ben

Re: [agi] FHI is shutting down

2024-04-24 Thread Bill Hibbard via AGI

When Nick Bostrom hired Anders Sandberg at FHI he should have made
Anders the director and become a researcher, as Eliezer Yudkowsky
is a researcher at MIRI with someone else as director. I understand
this as a person who should not be an institutional director - my
inevitable personality conflicts would damage any institute I ran.

With Anders as director I think FHI would not have been closed.




--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M9ac3f48dade0fd1a8f0fa3b5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-24 Thread Ben Goertzel
FHI shared an office with the Institute for Effective Altruism, and
EA has not been a terribly popular movement since SBF.

The EA association, combined with off-and-on attention to some old
racist-appearing comments Nick made online years ago, could perhaps
have contributed to FHI's alienation from the Oxford bureaucracy.

CSER at Cambridge has essentially the same mission/theme but is run
more so by the UK academic establishment, so it runs little risk of
cancellation...

-- Ben





-- 
Ben Goertzel, PhD
b...@goertzel.org

"One must have chaos in one's heart to give birth to a dancing star"
-- Friedrich Nietzsche

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M65cfae2d16e21ae8f460a758
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-23 Thread James Bowery
Last year it was reported that Bostrom said "nigger" in 1998 or thereabouts.
https://youtu.be/Lu_i042oaNg

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M17d07796414b89d092d93d4e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-23 Thread James Bowery
Oh, and let's not forget the FHI itself!  When I approached one of its
geniuses during the COVID-19 pandemic about setting up something like a Hutter
Prize, except using epidemiological data, he insisted on empirical testing
of the efficacy of the Algorithmic Information Criterion.  That sounds
great if you are utterly incapable of rational thought.
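
For concreteness, here is a minimal sketch of what a Hutter-Prize-style
Algorithmic Information Criterion over epidemiological data could look like:
competing models are scored by the length of a two-part code, the model's
description plus its compressed residuals, and the shorter code wins. The toy
case counts, the placeholder linear and exponential models, and the use of
zlib as a stand-in codec are illustrative assumptions, not anything from the
thread.

import json
import zlib

def description_length_bits(params: dict, residuals: list) -> int:
    """Approximate two-part code length in bits:
    compressed model parameters + compressed residual stream."""
    model_bytes = json.dumps(params, sort_keys=True).encode()
    # Quantize residuals so the byte stream is comparable across models.
    residual_bytes = ",".join(f"{r:.3f}" for r in residuals).encode()
    return 8 * (len(zlib.compress(model_bytes)) + len(zlib.compress(residual_bytes)))

def score_model(predict, params: dict, cases: list) -> int:
    """Score = code length of (model, data given model); smaller is better."""
    residuals = [y - predict(t, params) for t, y in enumerate(cases)]
    return description_length_bits(params, residuals)

if __name__ == "__main__":
    cases = [3, 5, 9, 16, 28, 50, 88, 154]        # toy daily case counts
    linear = {"a": 20.0, "b": -30.0}              # y ~ a*t + b
    exponential = {"c": 3.0, "r": 1.75}           # y ~ c * r**t
    s_lin = score_model(lambda t, p: p["a"] * t + p["b"], linear, cases)
    s_exp = score_model(lambda t, p: p["c"] * p["r"] ** t, exponential, cases)
    print(f"linear: {s_lin} bits, exponential: {s_exp} bits")
    # Under this criterion the model yielding the shorter total code is preferred.

In a real contest the codec, the residual quantization, and the prize rules
would all matter; this only illustrates the scoring idea.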


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M38aebe72088cb23a813b1e6e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-23 Thread James Bowery
A book title I've considered:

"The Unfriendly AGI:  How and Why The Global Economy Castrates Our Sons"

Yudkowsky is basically a tool of The Unfriendly AGI.  LessWrong spearheaded
the sophistic attacks on The Hutter Prize.  Why?  So that there is no
recognition of the Algorithmic Information Criterion in the social
sciences.  If anything remotely like a Hutter Prize were to take root in
the social sciences, the TFR (total fertility rate) disaster being visited
on the planet would be over in very short order.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M05a4f762eefaa3aeec64b9da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-22 Thread Matt Mahoney
Here is an early (2002) experiment described on SL4 (precursor to
Overcoming Bias and LessWrong) on whether an unfriendly self-improving AI
could convince humans to let it escape from a box onto the internet.
http://sl4.org/archive/0207/4935.html

This is how actual science is done on AI safety. The results showed that
attempts to contain it would be hopeless. Almost everyone let the
(role-played) AI escape.

Of course the idea that a goal-directed, self-improving AI could even be
developed in isolation from the internet seems hopelessly naïve in
hindsight. Eliezer Yudkowsky, whom I still regard as brilliant, was young
and firmly believed that the unfriendly AI (now called alignment) problem
could be and must be solved before it kills everyone, as if it were a really
hard math problem. Now, after decades of effort, it seems he has given up
hope. He organized communities of rationalists (the Singularity Institute,
later MIRI), attempted to formally define human goals (coherent
extrapolated volition), and explored timeless decision theory and
information hazards (Roko's Basilisk), but to no avail.

Vernor Vinge described the Singularity as an event horizon on the future.
It cannot be predicted. The best we can do is extrapolate long-term trends
like Moore's law, increasing quality of life, life expectancy, and economic
growth. But who forecast the Internet, social media, social isolation, and
population collapse? What are we missing now?
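
As a small illustration of what such trend extrapolation amounts to, here is
a sketch that fits a log-linear (exponential) model to a made-up
Moore's-law-style series by ordinary least squares and projects it forward;
the data points and the 2030 target year are placeholders, not real
measurements.

import math

# Made-up exponential-growth series (arbitrary units), not real data.
years = [1990, 1995, 2000, 2005, 2010, 2015, 2020]
values = [1.0, 9.0, 85.0, 800.0, 7500.0, 70000.0, 650000.0]

# Least-squares fit of log(value) = intercept + slope * year.
n = len(years)
logs = [math.log(v) for v in values]
mean_x = sum(years) / n
mean_y = sum(logs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

doubling_time = math.log(2) / slope                  # years per doubling
forecast_2030 = math.exp(intercept + slope * 2030)   # extrapolated value
print(f"doubling time ~{doubling_time:.2f} years, 2030 forecast ~{forecast_2030:.3g}")
# A fit like this captures the smooth trend; it says nothing about the
# qualitative surprises the paragraph above is pointing at.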

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M74abe1f60f6dc75c28386a99
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-22 Thread James Bowery
See Gian-Carlo Rota's "Indiscrete Thoughts".


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-Me375e2f1381a1bc923ad0cb2
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-21 Thread Alan Grimes via AGI

Matt Mahoney wrote:
Maybe because philosophy isn't real science, and Oxford decided FHI's 
funding would be better spent elsewhere. You could argue that 
existential risk of human extinction is important, but browsing their 
list of papers doesn't give me a good feeling that they have produced 
anything important besides talk. What hypotheses have they tested?


Science is a branch of philosophy, classically referred to as "natural 
philosophy". A local science club was founded in 1871...


https://pswscience.org/about-psw/


--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M1c757ea607e123f2709de401
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-20 Thread Matt Mahoney
Maybe because philosophy isn't real science, and Oxford decided FHI's
funding would be better spent elsewhere. You could argue that
existential risk of human extinction is important, but browsing their list
of papers doesn't give me a good feeling that they have produced anything
important besides talk. What hypotheses have they tested?

Is MIRI next? It seems like they are just getting in the way of progress
and hurting the profits of their high-tech billionaire backers.

Where are the predictions of population collapse because people are
spending more time on their phones instead of making babies?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-Mf6feb4f8bea607b7aed11189
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-20 Thread James Bowery
Is there a quasi-journalistic synopsis of what happened to cause it to
receive "headwinds"?  Is "Facebook" involved or just "some people on"
Facebook?  And what was their motivation -- sans identity?


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-M0b09cbb73e0bffe5e677f043
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] FHI is shutting down

2024-04-19 Thread Mike Archbold
Some people on Facebook are spiking the ball... I guess I won't say who ;)

On Fri, Apr 19, 2024 at 4:03 PM Matt Mahoney wrote:

> https://www.futureofhumanityinstitute.org/

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0da187fd19737a7-Maf7b3f5af29eab4e17bcc6ac
Delivery options: https://agi.topicbox.com/groups/agi/subscription