On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote:
The best way to get people to learn is to make them figure things out for
themselves.
Yeah, right. That's why all Americans understand the theory of evolution so
well, and why Britons have such an informed acceptance of
[...]
Reigning orthodoxy of thought is *very hard* to dislodge,
even in the face of plentiful evidence to the contrary.
Amen, brother! Rem acu tetigisti ('you've touched the matter with a needle')! That's why
http://mentifex.virtualentity.com/theory5.html
is like the small mammals scurrying beneath dinosaurs.
ATM
--
: Re: [agi] Poll
Edward,
Does your estimate consider only the amount of information required for
*representation*, or does it also include the additional processing elements
required in a neural setting to implement learning? I'm not sure 10^9 is far
off, because much more can be required for domain-independent
Josh: People learn best when they receive simple, progressive, unambiguous
instructions or examples. This is why young humans imprint on parent-figures,
have heroes, and so forth -- heuristics to cut the clutter and reduce
conflict of examples. An AGI that was trying to learn from the Internet
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote:
Josh: An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing.
Sounds neat and tidy. But that's not the way the human mind does it.
A vacuous statement, since I stated
Josh: An AGI needs to be able to watch someone doing something and produce a
program
such that it can now do the same thing.
Sounds neat and tidy. But that's not the way the human mind does it. We
start from ignorance and confusion about how to perform any given
skill/activity - and while we
: [agi] Poll
In case anyone else is interested, here are my own responses to these
questions. Thanks to all who answered ...
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware, etc)
2. Do you have an idea as to what should be
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware, etc)
Procedural knowledge. Data in relational databases
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware, etc)
The biggest gap is the design of a system that can absorb information
generated by other
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware,
1. What is the single biggest technical gap between current AI and AGI?
I think hardware is a limitation because it biases our thinking to focus on
simplistic models of intelligence. However, even if we had more computational
power at our disposal, we do not yet know what to do with it, and
Please find below the commentaries of a naive 'neat', which do not quite
agree with the approaches of the seasoned users on this list. Comments
and pointers are most welcome.
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware,
J Storrs Hall, PhD wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need hardware, etc)
The gap is a matter of
On 10/18/07, Derek Zahn [EMAIL PROTECTED] wrote:
Because neither of these things can be done at present, we can barely even
talk to each other about things like goals, semantics, grounding,
intelligence, and so forth... the process of taking these unknown and
perhaps inherently complex things
That's where I think
narrow Assistive Intelligence could add the sender's assumed context
to a neutral exchange format that the receiver's agent could properly
display in an unencumbered way. The only way I see for that to happen
is if the agents are trained on/around the unique core
Sent: Thursday, October 18, 2007 9:15 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI?
In hindsight we can say
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI?
In hindsight we can say that we did not have enough hardware. However there
has been no point in time since the
9. a particular AGI theory
That is, one that convinces me it's on the right track.
Now that you have run this poll, what did you learn from the responses and how
are you using this information in your effort?
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe
Hey but it makes for an excellent quote. Facts don't have to be true if they're
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of
apocryphal facts will surface... most not even vaguely true... You should be
proud and happy! To quote Mr Bean: 'Well, I
Hm. Memory may be tricking me.
I did a deeper scan of my mind, and found that the only memory I
actually have is that someone at the conference said that they saw I
wasn't in the room that morning, and then looked around to see if
there was a bomb.
I have no memory of the fire thing one
This absolutely never happened. I absolutely do not say such things, even
as a joke
Your recollection is *very* different from mine. My recollection is
that you certainly did say it as a joke but that I was *rather* surprised
that you would say such a thing even as a joke. If anyone
# 7, 8, 9
Money is good, but the overall AGI theory and program plan is the most
important aspect.
James Ratcliff
YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Can people rate the following things?
1. quick $$, ie salary
2. long-term $$, ie shares in a successful corp
3. freedom to do what
I did a deeper scan of my mind, and found that the only memory I actually
have is that someone at the conference said that they saw I wasn't in the
room that morning, and then looked around to see if there was a bomb.
My memory probably was incorrect in terms of substituting fire for bomb
Synergy or win-win between my work and the project, i.e. if the project
dovetails with what I am doing (or has a better approach). This would require
some overlap between the project's architecture and mine. It would also
require a clear vision and explicit 'clues' about deliverables/modules
provided that I
thought they weren't just going to take my code and apply some licence
which meant I could no longer use it in the future.
I suspect that I wasn't clear about this . . . . You can always take what is
truly your code and do anything you want with it . . . . The problems
start
but I'm not very convinced that the singularity *will* automatically happen.
{IMHO I think the nature of intelligence implies it is not amenable to
simple linear scaling - likely not even log-linear
I share that guess/semi-informed opinion; however, while that means that I
am less
Mark Waser writes:
P.S. You missed the time where Eliezer said at Ben's
AGI conference that he would sneak out the door before
warning others that the room was on fire :-)
You people making public progress toward AGI are very brave indeed! I wonder
if a time will come when the
On 04/06/07, Derek Zahn [EMAIL PROTECTED] wrote:
I wonder if a time will come when the personal security of AGI researchers or
conferences will be a real concern. Stopping AGI could be a high priority
for existential-risk wingnuts.
I think this is the view put forward by Hugo De Garis. I
On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
I think this is the view put forward by Hugo De Garis. I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build
Mark Waser wrote:
P.S. You missed the time where Eliezer said at Ben's AGI conference
that he would sneak out the door before warning others that the room was
on fire :-)
This absolutely never happened. I absolutely do not say such things,
even as a joke, because I understand the
Can people rate the following things?
1. quick $$, ie salary
2. long-term $$, ie shares in a successful corp
3. freedom to do what they want
4. fairness
5. friendly friends
6. the project looks like a winner overall
7. knowing that the project is charitable
8. special AGI features they look for
Clues. Plural.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
The most important thing by far is having an AGI design that seems
feasible.
Only after that (very difficult) requirement is met, do any of the others
matter.
-- Ben G
On 6/3/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Can people rate the following things?
1. quick $$, ie salary
2.
in the water)
unimportant -- 1, 2, 3, 4, 5, 7, 11
- Original Message -
From: YKY (Yan King Yin)
To: agi@v2.listbox.com
Sent: Sunday, June 03, 2007 6:04 PM
Subject: [agi] poll: what do you look for when joining an AGI group?
Can people rate the following things?
1. quick $$, ie
My reasons for joining a2i2 could only be expressed as subportions of
6 and 8 (possibly 9 and 4).
I joined largely on the strength of my impression of Peter. My
interest in employment was to work as closely as possible on general
artificial intelligence, and he wanted me to work for him on
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Can people rate the following things?
1. quick $$, ie salary
2. long-term $$, ie shares in a successful corp
3. freedom to do what they want
4. fairness
5. friendly friends
6. the project looks like a winner overall
7. knowing that the