> 9. a particular AGI theory. That is, one that convinces me it's on the right
> track.
Now that you have run this poll, what did you learn from the responses and how
are you using this information in your effort?
I did a deeper scan of my mind, and found that the only memory I actually
have is that someone at the conference said that they saw I wasn't in the
room that morning, and then looked around to see if there was a bomb.
My memory probably was incorrect in terms of substituting "fire" for "bomb"
# 7 8 9
Money is good, but the overall AGI theory and program plan is the most
important aspect.
James Ratcliff
"YKY (Yan King Yin)" <[EMAIL PROTECTED]> wrote: Can people rate the following
things?
1. quick $$, ie salary
2. long-term $$, ie shares in a successful corp
3. freedom to do w
This absolutely never happened. I absolutely do not say such things, even
as a joke
Your recollection is *very* different from mine. My recollection is
that you certainly did say it as a joke but that I was *rather* surprised
that you would say such a thing even as a joke. If anyone else
Hm. Memory may be tricking me.
I did a deeper scan of my mind, and found that the only memory I
actually have is that someone at the conference said that they saw I
wasn't in the room that morning, and then looked around to see if
there was a bomb.
I have no memory of the "fire" thing one w
Hey but it makes for an excellent quote. Facts don't have to be true if they're
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of
apocryphal facts will surface... most not even vaguely true... You should be
proud and happy! To quote Mr Bean 'Well, I e
Mark Waser wrote:
P.S. You missed the time where Eliezer said at Ben's AGI conference
that he would sneak out the door before warning others that the room was
on fire :-)
This absolutely never happened. I absolutely do not say such things,
even as a joke, because I understand the logic
On 6/5/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
I think this is the view put forward by Hugo De Garis. I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build
On 04/06/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
I wonder if a time will come when the personal security of AGI researchers or
conferences will be a real concern. Stopping AGI could be a high priority
for existential-risk wingnuts.
I think this is the view put forward by Hugo De Garis. I us
Mark Waser writes:
> P.S. You missed the time where Eliezer said at Ben's
> AGI conference that he would sneak out the door before
> warning others that the room was on fire :-)
You people making public progress toward AGI are very brave indeed! I wonder
if a time will come when the per
>> but I'm not very convinced that the singularity *will* automatically happen.
>> {IMHO I think the nature of intelligence implies it is not amenable to
>> simple linear scaling - likely not even log-linear
I share that guess/semi-informed opinion; however, while that means that I
am less
Re the military - nice thinking - like your Trojan horse idea. But more
seriously: no I don't think I would take money from DARPA (however, if they
rock up at my doorstep with a no-strings attached cheque... who knows... my
principles are most definitely not for sale... unless the price is *real
provided that I
thought they weren't just going to take my code and apply some licence
which meant I could no longer use it in the future..
I suspect that I wasn't clear about this . . . . You can always take what is
truly "your" code and do anything you want with it . . . . The problems
start
*completely* freely as I would deem fit
But what if you had unFriendly intentions?
- Original Message -
From: "Jean-Paul Van Belle" <[EMAIL PROTECTED]>
To:
Sent: Monday, June 04, 2007 3:28 AM
Subject: Re: [agi] poll: what do you look for when joining an AGI group?
Syn
I'd go along with this, which is basically the open source model. If
I felt that there was significant overlap between what I was doing and
another project I would have no problem contributing, provided that I
thought they weren't just going to take my code and apply some licence
which meant I co
Synergy or win-win between my work and the project i.e. if the project
dovetails with what I am doing (or has a better approach). This would require
some overlap between the project's architecture and mine. This would also
require a clear vision and explicit 'clues' about deliverables/modules (i
--- "YKY (Yan King Yin)" <[EMAIL PROTECTED]> wrote:
> Can people rate the following things?
>
> 1. quick $$, ie salary
> 2. long-term $$, ie shares in a successful corp
> 3. freedom to do what they want
> 4. fairness
> 5. friendly friends
> 6. the project looks like a winner overall
> 7. knowing
My reasons for joining a2i2 could only be expressed as subportions of
6 and 8 (possibly 9 and 4).
I joined largely on the strength of my impression of Peter. My
interest in employment was to work as closely as possible on general
artificial intelligence, and he wanted me to work for him on precis
important --> 6 which would necessarily include 8 and 9
potentially important --> 10 (average level is a poor gauge; if there are
sufficient highly-expert/superstar people, you can afford an equal number of
relatively non-expert people; if you don't have any real superstars, you're
dead in the
The most important thing by far is having an AGI design that seems
feasible.
Only after that (very difficult) requirement is met, do any of the others
matter.
-- Ben G
On 6/3/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
Can people rate the following things?
1. quick $$, ie salary
2. lon
Clues. Plural.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence