Russell,

The link Matt provided was not for the AGI-12 conference on AGI
research, but rather for another smaller conference operated by
Oxford's Future of Humanity Institute, AGI-Impacts, which is
co-located with AGI-12 and deals with the potential future
implications of AGI...

If you look at the page for the actual AGI conference,

http://agi-conference.org/2012/schedule/

you will find a rather different set of papers ;p

ben

On Mon, Dec 10, 2012 at 11:44 AM, Russell Wallace
<[email protected]> wrote:
> Matt, thanks for the report. I had not been particularly optimistic
> about the rate of progress in AGI, but even my pessimistic self had not
> expected the field to turn into toxic garbage at the rate it has.
>
> On Mon, Dec 10, 2012 at 4:59 AM, Matt Mahoney <[email protected]> wrote:
>> On Sun, Dec 9, 2012 at 10:56 PM, Jim Bromer <[email protected]> wrote:
>>>
>>> You can find the papers here.
>>> http://www.mindmakers.org/projects/agiconf-2012/wiki/Schedule
>>
>> I read the extended abstracts here:
>> http://www.winterintelligence.org/agi-impacts-extended-abstracts/
>>
>> Summary: out of 12 papers, 0 report advances toward AGI.
>>
>> The one paper that reports any experimental results whatsoever is the
>> one by Stuart Armstrong: Predicting AI… or failing to.
>> The main result (from reading the whole paper) is that the
>> distribution of predictions of time to AI has not changed since the
>> 1950s, even though those predictions are known to be wrong.
>>
>> Nine of the papers are on the general theme of safety, ethics, or
>> friendliness of self-improving AI. There seems to be a general belief
>> that if we can produce a smarter-than-human AI, then so could it, only
>> faster. So we had better get its goal system right. But there is only
>> one problem. It is all of humanity, not a single human, that produced
>> this super-human AI. So the threshold for self improvement hasn't been
>> crossed yet. If you want an example of recursive self-improvement,
>> then look at civilization making a better version of civilization, as
>> measured by economic growth and increased life expectancy. That is
>> happening in spite of the lack of any obvious goal system that needs
>> to be programmed.
>>
>> To see why creating a single super-human AI is not recursive self
>> improvement, ask yourself if you could have done it 100 years ago,
>> knowing what you know now. Could you have done it if you were the only
>> living person on the planet? If not, then you had help.
>>
>>
>> -- Matt Mahoney, [email protected]
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/1658954-f53d1a3f
>> Modify Your Subscription: https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche


