Re: [agi] Poll

2007-10-20 Thread J Storrs Hall, PhD
On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote:
 The best way to get people to learn is to make them figure things out for 
 themselves.

Yeah, right. That's why all Americans understand the theory of evolution so 
well, and why Britons have such an informed acceptance of 
genetically-modified foods. It's why Galileo had such an easy time convincing 
the Church that the earth goes around the sun. It's why the Romans widely 
adopted the steam engine following its invention by Heron of Alexandria. It's 
why the Inquisition quickly realized that witchcraft is a superstition, 
rather than burning innocent women at the stake.

The truth is exactly the opposite: Humans are built to propagate culture 
memetically, by copying each other; the amount we know individually by this 
process is orders of magnitude greater than what we could have figured out 
for ourselves. Reigning orthodoxy of thought is *very hard* to dislodge, even 
in the face of plentiful evidence to the contrary. 

Isaac Asimov famously said that the most exciting moment in science is when 
someone says, "That's funny..." But the reason to point it out is that it 
*doesn't* happen all the time, even in science (it's not "normal science" in 
Kuhn's phrase), and even less so outside of it. 

In the real world, when people get confused and work out a way around it, what 
they're learning is not an inventive synthesis of the substance at issue, but 
an attention filter. And that, for the average person, is usually just 
picking an authority figure.

Theirs not to reason why; theirs but to do and die.

Humans are *stupid*, Mike. You're still committing the superhuman human 
fallacy.

Josh



Re: [agi] Poll

2007-10-20 Thread A. T. Murray
 [...]
 Reigning orthodoxy of thought is *very hard* to dislodge, 
 even in the face of plentiful evidence to the contrary. 

Amen, brother! Rem acu tetigisti! (You've hit the nail on the head.) That's why

http://mentifex.virtualentity.com/theory5.html 

is like the small mammals scurrying beneath dinosaurs.

ATM
--
http://mind.sourceforge.net/aisteps.html 



RE: [agi] Poll

2007-10-19 Thread Edward W. Porter
 be there 'on watch'.



EWP Again I don’t know what you are referring to here.  I understand
that timing is important to neuronal patterns, but it seems that such
added temporal complexity would only increase the number of bits required
for a computer to model the information the brain holds.



EWP 1.
http://www.eurekalert.org/pub_releases/2006-02/nsae-tsf021706.php


Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Friday, October 19, 2007 5:28 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll


Edward,

Does your estimate consider only the amount of information required for
*representation*, or does it also include the additional processing elements
required in a neural setting to implement learning? I'm not sure 10^9 is far
off, because much more can be required for domain-independent
association/correlation catching between (subsymbolic) concepts
implemented by groups of synapses(*). A gap of 10^6 is probably about right
for this purpose; I can't see how it would be possible with, say, a gap of
only 10^2.

New concepts/correlations/associations can be established between events
(spikes) that are not initially aligned in any way, including different
delays in time (through axonal delays and spiking sequences), so to catch
regularities when and where they happen to appear, a large enough number of
synapse groups should be there 'on watch'.

-
(*) By groups of synapses I mean sets of synapses that can excite a common
neuron, but a single neuron can host multiple groups of synapses responsible
for multiple subsymbolic concepts. It's not neurologically grounded, just
a wild theoretic estimate.



On 10/19/07, Edward W. Porter [EMAIL PROTECTED] wrote:

Matt Mahoney's Thu 10/18/2007 9:15 PM post states

MAHONEY There is possibly a 6 order of magnitude gap between the size of
a cognitive model of human memory (10^9 bits) and the number of synapses
in the brain (10^15), and precious little research to resolve this
discrepancy.  In fact, these numbers are so poorly known that we aren't
even sure there is a gap.

EWP This gap, which Matt was so correct to highlight, is an important
one, and points out one of the many crippling legacies of the small hardware
mindset.

EWP I have always been a big believer in memory based reasoning, and for
the last 37 years I have always assumed a human level representation of
world knowledge would require something like 10^12 to 10^14 bytes, which
is 10^13 to 10^15 bits. ( i.e., within several orders of magnitude of the
human brain, a phrase  I have used so many times before on this list.) My
recollection is that after reading Minsky's reading list in 1970 and my
taking of K-line theory to heart, the number I guessed at that time for
world knowledge was either 10^15 bits or bytes, I forget which.  But, of
course, my notions then were so primitive compared to what they are today.


EWP Should we allow ourselves to think in terms of such big numbers?
Yes.  Let's take 10^13 bytes, for example.

EWP 10^13 bytes with 2/3s of it in non-volatile memory and 10 million
simple RAM opp processors, capable of performing about 20 trillion random
RAM accesses/sec, and a network with a cross-sectional bandwidth of
roughly 45 TBytes/sec (if you ran it hot), should be manufacturable at a
marginal cost in 7 years of about $40,000, and could be profitably sold
with amortization of development costs for several hundred thousand
dollars if there were a market for several thousand of them -- which there
almost certainly would be because of their extreme power.

EWP Why so much more than the 10^9 bits mentioned above?

EWP Because 10^9 bits only stores roughly 1 million atoms (nodes or
links) with proper indexing and various state values.  Anybody who thinks
that is enough to represent human-level world knowledge in all its visual,
audio, linguistic, tactile, kinesthetic, emotional, behavioral, and social
complexity hasn't thought about it in sufficient depth.

EWP For example, my foggy recollection is that Serre's representation of
the hierarchical memory associated with the portion of the visual cortex from
V1 up to the lower level of the pre-frontal cortex (from the paper I have
cited so many times on this list) has several million pattern nodes (and,
as Josh has pointed out, this is just for the mainly feedforward aspect of
visual modeling).  This includes nothing for the vast majority of V1 and
above, and nothing for audio, language, visual motion, association cortex,
prefrontal cortex, etc.

EWP Matt, I am not in any way criticizing you for mentioning 10^9 bits,
because I have read similar numbers myself, and your post pointed with
very appropriate questioning to the gap between that and what the brain
would appear to have the capability to represent.  This very low number
is just another manifestation

Re: [agi] Poll

2007-10-19 Thread Mike Tintner

Josh: People learn best when they receive simple, progressive, unambiguous
instructions or examples. This is why young humans imprint on parent-figures,
have heroes, and so forth -- heuristics to cut the clutter and reduce
conflict of examples. An AGI that was trying to learn from the Internet from
scratch would be very confused -- but that's not a good way to teach it.
I'll be happy if I can get my system to learn from me alone. Then I can *teach
it* to be able to handle contradictory inputs -- at least to the extent that
I can do so myself.

Nope. You're taking the obvious line, the simple, top-down, totalitarian 
line - as indeed most people in AI/AGI do.


Nature knows better. The best way to get people to learn complex, 
problematic activities is not to give them simple instructions -  education 
threw out rote learning and variations thereon, long ago, although it hasn't 
totally embraced nature's way yet.  (And simple unambiguous instructions for 
any problematic activity are a philosophical, cognitive and practical 
impossibility. What are Josh's simple unambiguous instructions for how to do 
sex/ conversation/ tennis/ investing?)


The best way to get people to learn is to make them figure things out for 
themselves.


At least you should start to be able to see now that there's a massive - and 
v. significant - divide between us.


P.S. Confusion is a basic psychobiological response of the brain - 
classically exemplified in the furrowed brow. Like I said, there's no 
equivalent in computers. Nature doesn't produce something so fundamental 
without extremely good, functional reason.






Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote:
 Josh: An AGI needs to be able to watch someone doing something and produce a 
 program such that it can now do the same thing.
 
 Sounds neat and tidy. But that's not the way the human mind does it. 

A vacuous statement, since I stated what needs to be done, not how to do it.

 We  start from ignorance and confusion about how to perform any given skill/ 
 activity

Particularly how to build an AGI :-)

 - and while we then acquire an enormous amount of relevant  
 routines - we never build a whole module or program for any activity. 

If what you're trying to say is nobody's perfect, well, duh.

If you're trying to say humans don't actually acquire skills, speak for 
yourself.

 We  never stop learning, whether we're committed to that attitude 
 philosophically or not. 

Some of us never *start* learning...

 And we never stop being confused. 

FDSN.

 Are you certain about how best to write programs? Or have sex? 
 Or a conversation? Or play chess? Or tennis? All our activities, like those,
 demand and repay a lifetime's study. An AGI will have to have a similar
 approach to enjoy any success.

How stupid of me not to realize that my vague ideas on how to build a program 
that can learn by watching, would not instantly achieve superhuman, Godlike, 
mathematically optimal performance on every possible task at first sight. 
I am awed by the brilliance of this insight.

Josh



Re: [agi] Poll

2007-10-19 Thread Mike Tintner
Josh: An AGI needs to be able to watch someone doing something and produce a 
program such that it can now do the same thing.

Sounds neat and tidy. But that's not the way the human mind does it. We 
start from ignorance and confusion about how to perform any given skill/ 
activity - and while we then acquire an enormous amount of relevant 
routines - we never build a whole module or program for any activity. We 
never stop learning, whether we're committed to that attitude 
philosophically or not. And we never stop being confused. Are you certain 
about how best to write programs? Or have sex? Or a conversation? Or play 
chess? Or tennis? All our activities, like those, demand and repay a 
lifetime's study. An AGI will have to have a similar approach to enjoy any 
success.







RE: [agi] Poll

2007-10-19 Thread Edward W. Porter
Josh,

Great post.  Warrants being read multiple times.

You said:

JOSH I'm working on a formalism that unifies a very high-level
programming language (whose own code is a basic datatype, as in lisp),
spreading-activation semantic-net-like representational structures, and
subsumption-style real-time control architectures.

Sounds interesting.  I -- and I am sure many others on the list -- look
forward to hearing more about it.

Ed Porter

-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Friday, October 19, 2007 12:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll


In case anyone else is interested, here are my own responses to these
questions. Thanks to all who answered ...

 1. What is the single biggest technical gap between current AI and
 AGI?
(e.g.
 we need a way to do X or we just need more development of Y or we have
 the
 ideas, just need hardware, etc)

I think the biggest gap can be seen in the actual programs themselves.
Look at
a typical narrow-AI program that *actually does something* (robot car
driver,
for example) and a typical (!) candidate AGI system. They are entirely
different kinds of code, datastructures, etc.

An AGI system should have modules that look like the narrow-AI systems for
all
of the skills it actually has, and when it learns new skills, should then
have more such modules. So it would help to have an easier way to
write skills programs and have them interact gracefully.

An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing. The system needs to be
self-conscious exactly when it is learning, because that's when it's fitting
new subprograms into its own architecture. It has to be able to experiment
and practice. It has to be able to adapt old skills into new ones by analogy,
*and represent the differences and mappings*, and represent *that* skill in
symbolic form so that it can learn meta-skills.
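
(A toy way to picture that requirement -- not anyone's actual system -- is
program induction: search compositions of already-known primitives until one
reproduces the observed input/output behavior. A minimal Python sketch
follows, with invented primitive names and demonstrations.)

from itertools import product

# Primitive skills the system is assumed to already have (names invented).
PRIMITIVES = {
    "reverse": lambda s: s[::-1],
    "upper":   lambda s: s.upper(),
    "strip":   lambda s: s.strip(),
    "first3":  lambda s: s[:3],
}

def run(program, x):
    """Apply a sequence of primitive names to the input, left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def induce(examples, max_len=3):
    """Return the shortest composition consistent with all observed examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# "Demonstrations" the learner watched: someone trimming and upper-casing text.
demos = [("  hello ", "HELLO"), (" agi", "AGI")]
print(induce(demos))   # -> ('upper', 'strip'), i.e. apply upper, then strip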

 2. Do you have an idea as to what should be done about (1) that
 would
 significantly accelerate progress if it were generally adopted?

I'm working on a formalism that unifies a very high-level programming
language (whose own code is a basic datatype, as in lisp), spreading-activation
semantic-net-like representational structures, and subsumption-style
real-time control architectures.
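
(For readers who haven't met the middle ingredient, a minimal, purely
illustrative sketch of spreading activation over a toy semantic net follows;
the nodes, links, and weights are invented, and nothing here is taken from
the formalism itself.)

from collections import defaultdict

# Toy weighted links between concept nodes (all values invented).
links = {
    "dog":    [("animal", 0.9), ("bark", 0.7)],
    "animal": [("alive", 0.8)],
    "bark":   [("sound", 0.6)],
}

def spread(seed, steps=2, decay=0.5):
    """Propagate activation outward from a seed node for a few steps."""
    activation = defaultdict(float)
    activation[seed] = 1.0
    frontier = {seed}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbor, weight in links.get(node, []):
                delta = activation[node] * weight * decay
                if delta > 1e-3:
                    activation[neighbor] += delta
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return dict(activation)

print(spread("dog"))
# {'dog': 1.0, 'animal': 0.45, 'bark': 0.35, 'alive': 0.18, 'sound': 0.105}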

 3. If (2), how long would it take the field to attain (a) a baby mind,
 (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI
seriously
 pursued?

Let me try to be a little clearer about the concepts, as there was some
dissension w/r/t their coherence: By a "baby mind", I mean a system that can
be taught rather than programmed to attain new skills. It certainly won't be
like a human baby in any other respect.

Furthermore, it's not clear that we'll know when we have one except in
retrospect. It's virtually certain that in the coming years there will be
many "baby mind" systems that seem to start learning only to run into
unforeseen limits. Understanding these limits will be a very natural
scientific process. I imagine that in 10 years we'll have the beginnings of a
field of unbounded-learning theory and ways of predicting when a given
learning system will run out of steam, but we don't now. So we'll probably
only know we have a system that learns at an arguably human level after
teaching it long enough to know it didn't crap out where its predecessors
did.

It could happen as quickly as 5 years, but I wouldn't put a lot of money on
it. 10 is probably more like it. By that time the hardware to do what I think
is necessary will not just exist (it does now) but be affordable. That'll let
a lot more people try a lot more approaches, hopefully building on each
other's successes.

Time to adult human-level AI given the baby mind: zero. This is mostly
because
the bulk of a minimal human experience/common-sense mental inventory will
have been built by hand, or learned by earlier, limited learning
algorithms,
before the real unbounded learner gets here.

 4. How long to (a) and (b) if AI research continues more or less as it
 is
 doing now?

Here's how I see the field developing: Current approaches are either deep but
handbuilt (robot drivers) or general but very shallow (Google). In 10 years,
the general ones will be deeper (say, Google can answer 85% of
natural-language questions with apparent comprehension; Novamente produces
very serviceable NPCs in online games) and the narrow ones are broader but,
more importantly, much more numerous. So throughout the 20-teens, AI will
seem to take off as people hitch the general systems to collections of narrow
ones. The result will be like someone with an IQ of 90 who has a number of
idiot-savant skills. They'll pass the Turing test. But they still won't build
their own skills.

So I'll guess (b) in 10 years, (a) in 15, because the Moore's Law thing
still
works, lots of people are trying lots of ideas, but it's a harder problem.
But I could

Re: [agi] Poll

2007-10-19 Thread Vladimir Nesov
Edward,

Does your estimate consider only the amount of information required for
*representation*, or does it also include the additional processing elements
required in a neural setting to implement learning? I'm not sure 10^9 is far
off, because much more can be required for domain-independent
association/correlation catching between (subsymbolic) concepts implemented
by groups of synapses(*). A gap of 10^6 is probably about right for this
purpose; I can't see how it would be possible with, say, a gap of only 10^2.

New concepts/correlations/associations can be established between events
(spikes) that are not initially aligned in any way, including different
delays in time (through axonal delays and spiking sequences), so to catch
regularities when and where they happen to appear, a large enough number of
synapse groups should be there 'on watch'.

-
(*) By groups of synapses I mean sets of synapses that can excite a common
neuron, but a single neuron can host multiple groups of synapses responsible
for multiple subsymbolic concepts. It's not neurologically grounded, just a
wild theoretic estimate.
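
To lay the thread's round numbers out explicitly, a back-of-envelope sketch
in Python (these are just the figures quoted above, not measurements):

import math

model_bits = 1e9    # estimated content of a cognitive model of human memory
synapses   = 1e15   # rough synapse count in the brain

gap = synapses / model_bits
print(f"gap factor: 10^{math.log10(gap):.0f}")      # gap factor: 10^6

# Read the other way: if the extra hardware keeps synapse groups 'on watch'
# for regularities, each eventually-learned bit is backed by roughly a
# million synapses of machinery.
print(f"synapses per stored bit: {synapses / model_bits:,.0f}")   # 1,000,000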


On 10/19/07, Edward W. Porter [EMAIL PROTECTED] wrote:

  Matt Mahoney's Thu 10/18/2007 9:15 PM post states

 MAHONEY There is possibly a 6 order of magnitude gap between the size of
 a cognitive model of human memory (10^9 bits) and the number of synapses in
 the brain (10^15), and precious little research to resolve this
 discrepancy.  In fact, these numbers are so poorly known that we aren't even
 sure there is a gap.

 EWP This gap, which Matt was so correct to highlight, is an important
 one, and points out one of the many crippling legacies of the small hardware
 mindset.

 EWP I have always been a big believer in memory based reasoning, and for
 the last 37 years I have always assumed a human level representation of
 world knowledge would require something like 10^12 to 10^14 bytes, which is
 10^13 to 10^15 bits. (i.e., within several orders of magnitude of the
 human brain, a phrase  I have used so many times before on this list.) My
 recollection is that after reading Minsky's reading list in 1970 and my
 taking of K-line theory to heart, the number I guessed at that time for
 world knowledge was either 10^15 bits or bytes, I forget which.  But, of
 course, my notions then were so primitive compared to what they are today.

 EWP Should we allow ourselves to think in terms of such big numbers?
 Yes.  Let's take 10^13 bytes, for example.

 EWP 10^13 bytes with 2/3s of it in non-volatile memory and 10 million
 simple RAM opp processors, capable of performing about 20 trillion random
 RAM accesses/sec, and a network with a cross-sectional bandwidth of roughly
 45 TBytes/sec (if you ran it hot), should be manufacturable at a marginal
 cost in 7 years of about $40,000, and could be profitably sold with
 amortization of development costs for several hundred thousand dollars if
 there were a market for several thousand of them -- which there almost
 certainly would be because of their extreme power.

 EWP Why so much more than the 10^9 bits mentioned above?

 EWP Because 10^9 bits only stores roughly 1 million atoms (nodes or
 links) with proper indexing and various state values.  Anybody who thinks
 that is enough to represent human-level world knowledge in all its visual,
 audio, linguistic, tactile, kinesthetic, emotional, behavioral, and social
 complexity hasn't thought about it in sufficient depth.

 EWP For example, my foggy recollection is that Serre's representation of
 the hierarchical memory associated with the portion of the visual cortex from V1
 up to the lower level of the pre-frontal cortex (from the paper I have cited
 so many times on this list) has several million pattern nodes (and, as Josh
 has pointed out, this is just for the mainly feedforward aspect of visual
 modeling).  This includes nothing for the vast majority of V1 and above, and
 nothing for audio, language, visual motion, association cortex, prefrontal
 cortex, etc.

 EWP Matt, I am not in any way criticizing you for mentioning 10^9 bits,
 because I have read similar numbers myself, and your post pointed with very
 appropriate questioning to the gap between that and what the brain would
 appear to have the capability to represent.  This very low number is just
 another manifestation of the small hardware mindset that has dominated the
 conventional wisdom in AI since its beginning.  If the only models one
 could make had to fit in the very small memories of most past machines, it
 is only natural that one's mind would be biased toward grossly simplified
 representation.

 EWP So forget the notion that 10^9 bits can represent human-level world
 knowledge. Correct me if I am wrong, but I think the memory required to
 store the representation in most current best selling video games is 10 to
 40 times larger.

 Ed Porter

 P.S., Please give me feedback on whether this technique of distinguishing
 original from responsive text is better than my use 

Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 I'd be interested in everyone's take on the following:

 1. What is the single biggest technical gap between current AI and AGI? (
 e.g.
 we need a way to do X or we just need more development of Y or we have the
 ideas, just need hardware, etc)



I don't think the gap is a technical gap really, it's a conceptual gap.  An
AGI is
a fundamentally different sort of beast than a narrow AI.

What we need is to have a number of years of concentrated detailed-design
and engineering
effort by a dedicated, appropriately skilled software/computer-science team,
focused
on implementing, tuning and teaching an AGI based on a workable high-level
design.

The Novamente design is a workable high-level design for an AGI.  There may
be others.

2. Do you have an idea as to what should be done about (1) that would
 significantly accelerate progress if it were generally adopted?

 3. If (2), how long would it take the field to attain (a) a baby mind, (b)
 a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously
 pursued?


When we put all the pending Novamente tasks into Microsoft Project about a
year ago,
it came out to something like 6.5 years of work for a strong team of 10-15
totally focused, appropriately skilled people.  By now we have probably
shaved a few months off that due to our ongoing work.  This is for getting
to a young-child-mind, not a baby mind.  A baby mind is too hard to
validate, IMO.  The goal is to get to the level of English communication at
the level of a 4 or 5 year old child.

[for examples of conversation at this level, see the end of my post at
http://www.singinst.org/blog/2007/10/13/a-toddler-turing-test/#comment-8509]

I think it will not be more than 3-5 years between a and b.  Potentially a
bunch less than that, depending on how much resources the baby mind
attracts.


4. How long to (a) and (b) if AI research continues more or less as it is
 doing now?


3 decades, perhaps?

-- Ben


Re: [agi] Poll

2007-10-18 Thread Russell Wallace
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 1. What is the single biggest technical gap between current AI and AGI? (e.g.
 we need a way to do X or we just need more development of Y or we have the
 ideas, just need hardware, etc)

Procedural knowledge. Data in relational databases can be sliced this
way and that, text documents can at least be searched in sophisticated
ways, proofs in formal logic can be checked and reasoned from etc; but
code can only be run as-is, and then only in precisely the environment
for which it was written. Procedural knowledge needs to become first
class.
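
A toy Python illustration of what "first class" could look like mechanically --
a procedure treated as data that can be parsed and rewritten, not just
executed. This is only an illustration of the idea, not a proposal for how to
do it:

import ast

source = 'def greet(name):\n    return "hello, " + name\n'

# Run the procedure as-is (all that ordinary code supports today):
ns = {}
exec(compile(source, "<demo>", "exec"), ns)
print(ns["greet"]("world"))              # hello, world

# Treat the same procedure as data: parse it, then rewrite it.
tree = ast.parse(source)

class Shout(ast.NodeTransformer):
    """Rewrite every string literal inside the procedure to upper case."""
    def visit_Constant(self, node):
        if isinstance(node.value, str):
            return ast.Constant(value=node.value.upper())
        return node

new_tree = ast.fix_missing_locations(Shout().visit(tree))
ns2 = {}
exec(compile(new_tree, "<rewritten>", "exec"), ns2)
print(ns2["greet"]("world"))             # HELLO, world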

 2. Do you have an idea as to what should be done about (1) that would
 significantly accelerate progress if it were generally adopted?

Yes, build and use a system that makes procedural knowledge first class.

 3. If (2), how long would it take the field to attain (a) a baby mind, (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously
 pursued?

 4. How long to (a) and (b) if AI research continues more or less as it is
 doing now?

Human-equivalent AI isn't feasible. Give me a Space Shuttle's worth of
funding (a few billion a year sustained for a few decades) and I'll
take a shot at it, but in reality the world doesn't want it enough to
commit anywhere near those resources to it.

Smart CAD, however, might be doable in the next decade or two if things go well.



Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel


 
  1. What is the single biggest technical gap between current AI and AGI?
 (e.g.
  we need a way to do X or we just need more development of Y or we have
 the
  ideas, just need hardware, etc)

 The biggest gap is the design of a system that can absorb information
 generated by other intelligent systems the *same sort of way* humans
 can. This can include anything from copying body movements,
 understanding body language (pointing, smiling) and higher maths on a
 blackboard.



I agree this is important, which is one of the benefits I see in virtually
embodied
AI ...

see
-- QuickTime movie on http://novamente.net homepage

-- new article on kurzweilai.net,
http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=3%23710

I am currently writing a paper for AGI-08 on the combination of imitative
and reinforcement learning...

-- Ben G


Re: [agi] Poll

2007-10-18 Thread William Pearson
On 18/10/2007, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I'd be interested in everyone's take on the following:

 1. What is the single biggest technical gap between current AI and AGI? (e.g.
 we need a way to do X or we just need more development of Y or we have the
 ideas, just need hardware, etc)

The biggest gap is the design of a system that can absorb information
generated by other intelligent systems the *same sort of way* humans
can. This can include anything from copying body movements,
understanding body language (pointing, smiling) and higher maths on a
blackboard. It will also require some form of ability to control how
information is absorbed to prevent malicious changes having too much
power.

 2. Do you have an idea as to what should be done about (1) that would
 significantly accelerate progress if it were generally adopted?


There are some problems that have to be solved first, however. If you
assume that cultural information and trial and error can change most
parts of the system during human-like absorption, that presents some
problems. You will need to find a system/architecture that is
goal-oriented and somewhat stable in its goal orientation under the
introduction of arbitrary programs.

So if this was created and significant numbers of people were trying
to create social robots, then things would speed up.


 3. If (2), how long would it take the field to attain (a) a baby mind, (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously
 pursued?

It depends on whether we get a good theory of how cultural information
is transmitted, processed and incorporated into a system. Without a
good theory there will have to be lots of trial and error, and as some
trials will have to be done in a social setting, they will take a long
time.

I'm also not sure human equivalent is desired (assuming you mean a
system with a goal system devoted to its own well-being).

 4. How long to (a) and (b) if AI research continues more or less as it is
 doing now?


Well, if it continues as it is, you will continue to get some very
powerful narrow AI systems (potentially passing the Turing test on
cursory examination), but not the flexibility of AGI.

Will Pearson



RE: [agi] Poll

2007-10-18 Thread Derek Zahn
 1. What is the single biggest technical gap between current AI and AGI? 
 
I think hardware is a limitation because it biases our thinking to focus on 
simplistic models of intelligence.   However, even if we had more computational 
power at our disposal we do not yet know what to do with it, and so the biggest 
gap is conceptual rather than technical.
 
In particular, I become more and more skeptical that the effort to produce 
concise theories of things like knowledge representation is likely to succeed. 
Frames, is-a relations, logical inference on atomic tokens, and so on, are 
efforts to make intelligent behavior comprehensible in concisely describable 
ways, but they seem to be only crude approximations to the reality of 
intelligent behavior, which seem less and less likely to have formulations that 
are comfortably within our human ability to reason about effectively.  As one 
example, consider the study in cognitive science of the theory of categories -- 
from the classical "necessary and sufficient conditions" view to the more 
modern competing views of prototypes vs. exemplars.  All of these are nice 
simple descriptions, but as so often happens, it seems that the effort to boil 
down the phenomena to nice simple ideas we can work with in our tiny brains 
actually boils off most of the important stuff.
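
To make the contrast concrete, a toy Python sketch of the prototype and 
exemplar views side by side (the feature vectors are invented, and neither 
view is being endorsed):

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Stored experiences of two categories as invented feature vectors,
# e.g. (flies, has_feathers).
birds   = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)]
mammals = [(0.1, 0.0), (0.2, 0.1), (0.0, 0.2)]

def prototype_label(x):
    """Prototype view: compare x to each category's average member."""
    protos = {"bird":   [sum(c) / len(birds) for c in zip(*birds)],
              "mammal": [sum(c) / len(mammals) for c in zip(*mammals)]}
    return min(protos, key=lambda k: dist(x, protos[k]))

def exemplar_label(x):
    """Exemplar view: compare x to every stored instance (nearest neighbour)."""
    labelled = [("bird", b) for b in birds] + [("mammal", m) for m in mammals]
    return min(labelled, key=lambda pair: dist(x, pair[1]))[0]

penguin = (0.1, 0.9)   # doesn't fly, has feathers: awkward for both views
print(prototype_label(penguin), exemplar_label(penguin))   # bird bird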
 
The challenge is for us to come up with ways to think about or at least work 
with (and somehow reproduce or invent!) mechanisms that appear not to be 
reducible to convenient theories.  I expect that our ways of thinking about 
these things will evolve as the systems we build operate on more and more data. 
 As Novamente's atom table grows from thousands to millions and eventually 
billions of rows; as cortex simulations become more and more detailed and 
studyable; as we start to grapple with semantic nets containing many millions 
of nodes -- our understanding of the dynamics of such systems will increase.  
Eventually we will become comfortable with and become more able to build 
systems whose desired behaviors cannot even be specified in a simple or 
rigorous way.
 
Or, perhaps, theoretical breakthroughs will occur making it possible to 
describe intelligence and its associated phenomena in simple scientific 
language.
 
Because neither of these things can be done at present, we can barely even talk 
to each other about things like goals, semantics, grounding, intelligence, and 
so forth... the process of taking these unknown and perhaps inherently complex 
things and compressing them into simple language symbols throws out too much 
information to even effectively communicate what little we do understand.
 
Either way, it will take decades if we're lucky.  Moving from mouse-level 
hardware to monkey-level hardware in the next couple decades will be helpful, 
just like our views on machine intelligence have expanded beyond those of our 
forebears looking at the first digital computers and wondering about how they 
might be made to think.
 
 


Re: [agi] Poll

2007-10-18 Thread Cenny Wenner
Please find below the commentaries of a naive neat, which do not quite
agree with the approaches of the seasoned users on this list. Comments
and pointers are most welcome.

On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I'd be interested in everyone's take on the following:

 1. What is the single biggest technical gap between current AI and AGI? (e.g.
 we need a way to do X or we just need more development of Y or we have the
 ideas, just need hardware, etc)

I believe there are primarily two fundamental problems for the optimal
decision-making approach to AGI, and largely for AI in general. Albeit
purely a guess, contemporary machinery should suffice for proper
solutions.
(1a) Metareasoning problems and the exploration-exploitation dilemma,
which seem to be specializations and/or formulations of the same
problem (a minimal illustration follows below).
(1b) A formal approach to ill-defined problems -- most notably the
assumptions of inductive bias and, subsequently, empirical
generalization.
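
As the minimal illustration promised under (1a): in its simplest dress the
dilemma is the multi-armed bandit. An epsilon-greedy sketch in Python, with
payoff probabilities invented purely for illustration:

import random

true_payoffs = [0.2, 0.5, 0.8]          # unknown to the agent
counts  = [0, 0, 0]
values  = [0.0, 0.0, 0.0]               # running estimate of each arm's payoff
epsilon = 0.1                           # fraction of trials spent exploring

random.seed(0)
for t in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_payoffs))        # explore
    else:
        arm = values.index(max(values))                  # exploit current best
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print([round(v, 2) for v in values])    # rough estimates of [0.2, 0.5, 0.8]
print(counts)                           # most pulls end up on the best arm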


 2. Do you have an idea as to what should be done about (1) that would
 significantly accelerate progress if it were generally adopted?
Old-fashioned foundational research deals with (1a). (1a) consists of
modern problems which have neither received nor are receiving a great deal of
attention. My hypothesis is that this is primarily due to their
difficulty, which research in adjacent fields might cover in an ever
so inert but imminent manner.


 3. If (2), how long would it take the field to attain (a) a baby mind, (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously
 pursued?
The ultimate goal is not human-level intelligence but optimal decision
making. Obviously a human or superhuman intelligence, as with a
near-optimal decision maker, could render our work useless as it
approaches the questions itself.
Wildly guessing, I imagine the level of intelligence or rational
decision-making of a toddler would take three to twenty years of
active research, a mature human four to a hundred, and an optimal six
to infinity. During these years, invaluable and innumerable
contributions should have been made to computer science in general and
associated fields.


 4. How long to (a) and (b) if AI research continues more or less as it is
 doing now?
I would triple the numbers.


 Thanks,

 Josh




-- 
Cenny Wenner



RE: [agi] Poll

2007-10-18 Thread John G. Rose
 From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
 I'd be interested in everyone's take on the following:
 
 1. What is the single biggest technical gap between current AI and AGI?
 (e.g.
 we need a way to do X or we just need more development of Y or we have
 the
 ideas, just need hardware, etc)


I just see the difference as a LOT more lines of code. AGI for software 
implementation is a lot of code. AI is smaller chunks here and there.


 2. Do you have an idea as to what should be done about (1) that
 would
 significantly accelerate progress if it were generally adopted?


More development time and resources. Some AGI designs may work and others, say 
2nd generation AGI designs (can I call them that?) like Novamente may be real 
close to working with their actual mirrored implementation in software. Other 
AGI designs may need significant testing and adjusting when development is 
performed.

 3. If (2), how long would it take the field to attain (a) a baby mind,
 (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI
 seriously
 pursued?


Human AGI is difficult to estimate. Many highly profitable pre-AGI or non-human 
AGI systems may be more of a justifiable business expense. 

 
 4. How long to (a) and (b) if AI research continues more or less as it
 is
 doing now?


It is difficult to see more than a few years down the road as there are too 
many variables. The economy and marketplace usually works things out in its own 
way. I think universities should throw more resources at the problem perhaps in 
coordination with corporations. 

John


Re: [agi] Poll

2007-10-18 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

I'd be interested in everyone's take on the following:

1. What is the single biggest technical gap between current AI and AGI? (e.g. 
we need a way to do X or we just need more development of Y or we have the 
ideas, just need hardware, etc) 


The gap is a matter of (a) methodology and (b) tools. To close the gap 
we need to understand that AGI systems can only be built if we use a 
methodology that takes account of the complexity likely to exist in 
intelligent systems, and that this implies a need to stay as close as 
possible to emulating the high-level (not neural) design of the human 
mind.  The tools needed are specifically those that would support the 
methodology.


2. Do you have an idea as to what should be done about (1) that would 
significantly accelerate progress if it were generally adopted?


A large scale project to implement the prescription given in (1).


3. If (2), how long would it take the field to attain (a) a baby mind, (b) a 
mature human-equivalent AI, if your idea(s) were adopted and AGI seriously 
pursued?


Impossible to put numbers on this without further work.  At the pure 
guess level, however, I would say it *could* happen in as little as

(a) 5 years and (b) 8 years.


4. How long to (a) and (b) if AI research continues more or less as it is 
doing now?


It would probably not happen at all.



Richard Loosemore



Re: [agi] Poll

2007-10-18 Thread Mike Dougherty
On 10/18/07, Derek Zahn [EMAIL PROTECTED] wrote:
  Because neither of these things can be done at present, we can barely even
 talk to each other about things like goals, semantics, grounding,
 intelligence, and so forth... the process of taking these unknown and
 perhaps inherently complex things and compressing them into simple language
 symbols throws out too much information to even effectively communicate what
 little we do understand.

Are you suggesting that a narrow AI designed to improve communication
between researchers would be a worthwhile investment?  Imagine it as
the scaffolding required to support the building efforts.  Natural
language is enough of a problem in its own right that we have
difficulty talking to each other, to say nothing of building
algorithms that can do it even as poorly as we do.  At least if there
were a way to exchange the context along with an idea, there might be
less confusion between sender and receiver.  The danger of
contextually rich posts (the kind Richard Loosemore often authors) is
that there is too much information to consume.  That's where I think
narrow Assistive Intelligence could add the sender's assumed context
to a neutral exchange format that the receiver's agent could properly
display in an unencumbered way.  The only way I see for that to happen
is that the agents are trained on/around the unique core conceptual
mode of each researcher.

(I know... that's brainstorming with no idea how to begin any implementation)



Re: [agi] Poll

2007-10-18 Thread Benjamin Goertzel

  That's where I think
 narrow Assistive Intelligence could add the sender's assumed context
 to a neutral exchange format that the receiver's agent could properly
 display in an unencumbered way.  The only way I see for that to happen
 is that the agents are trained on/around the unique core conceptual
 mode of each researcher.



Seems like an AGI-hard problem to me...

ben g


RE: [agi] Poll

2007-10-18 Thread Edward W. Porter
Matt Mahoney’s Thu 10/18/2007 9:15 PM post states

MAHONEY There is possibly a 6 order of magnitude gap between the size of
a cognitive model of human memory (10^9 bits) and the number of synapses
in the brain (10^15), and precious little research to resolve this
discrepancy.  In fact, these numbers are so poorly known that we aren't
even sure there is a gap.

EWP This gap, which Matt was so correct to highlight, is an important
one, and points out one of the many crippling legacies of the small hardware
mindset.

EWP I have always been a big believer in memory based reasoning, and for
the last 37 years I have always assumed a human level representation of
world knowledge would require something like 10^12 to 10^14 bytes, which
is 10^13 to 10^15 bits. (i.e., “within several orders of magnitude of the
human brain”, a phrase  I have used so many times before on this list.) My
recollection is that after reading Minsky’s reading list in 1970 and my
taking of K-line theory to heart, the number I guessed at that time for
world knowledge was either 10^15 bits or bytes, I forget which.  But, of
course, my notions then were so primitive compared to what they are today.


EWP Should we allow ourselves to think in terms of such big numbers?
Yes.  Let’s take 10^13 bytes, for example.

EWP 10^13 bytes with 2/3s of it in non-volatile memory and 10 million
simple RAM opp processors, capable of performing about 20 trillion random
RAM accesses/sec, and a network with a cross-sectional bandwidth of
roughly 45 TBytes/sec (if you ran it hot), should be manufacturable at a
marginal cost in 7 years of about $40,000, and could be profitably sold
with amortization of development costs for several hundred thousand
dollars if there were a market for several thousand of them -- which there
almost certainly would be because of their extreme power.

EWP Why so much more than the 10^9 bits mentioned above?

EWP Because 10^9 bits only stores roughly 1 million atoms (nodes or
links) with proper indexing and various state values.  Anybody who thinks
that is enough to represent human-level world knowledge in all its visual,
audio, linguistic, tactile, kinesthetic, emotional, behavioral, and social
complexity hasn’t thought about it in sufficient depth.
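
EWP To make the arithmetic explicit, here is a back-of-envelope sketch in
Python (the roughly 1,000-bits-per-atom budget is an assumption implied by
these round numbers, not a measured quantity):

budget_bits   = 1e9        # the "cognitive model" figure being discussed
bits_per_atom = 1_000      # assumed budget for indexing, links, and state
print(f"atoms in 10^9 bits: {budget_bits / bits_per_atom:.0e}")    # 1e+06

# Scale the same per-atom budget up to the 10^13-byte machine described above:
machine_bytes = 1e13
atoms = machine_bytes * 8 / bits_per_atom
print(f"atoms in 10^13 bytes: {atoms:.0e}")                        # 8e+10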

EWP For example, my foggy recollection is that Serre’s representation of
the hierarchical memory associated with the portion of the visual cortex from
V1 up to the lower level of the pre-frontal cortex (from the paper I have
cited so many times on this list) has several million pattern nodes (and,
as Josh has pointed out, this is just for the mainly feedforward aspect of
visual modeling).  This includes nothing for the vast majority of V1 and
above, and nothing for audio, language, visual motion, association cortex,
prefrontal cortex, etc.

EWP Matt, I am not in any way criticizing you for mentioning 10^9 bits,
because I have read similar numbers myself, and your post pointed with
very appropriate questioning to the gap between that and what the brain
would appear to have the capability to represent.  This very low number
is just another manifestation of the small hardware mindset that has
dominated the conventional wisdom in AI since its beginning.  If the
only models one could make had to fit in the very small memories of most
past machines, it is only natural that one’s mind would be biased toward
grossly simplified representation.

EWP So forget the notion that 10^9 bits can represent human-level world
knowledge. Correct me if I am wrong, but I think the memory required to
store the representation in most current best selling video games is 10 to
40 times larger.

Ed Porter

P.S., Please give me feedback on whether this technique of distinguishing
original from responsive text is better than my use of all-caps, which
received criticism.


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 9:15 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll



--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 I'd be interested in everyone's take on the following:

 1. What is the single biggest technical gap between current AI and
 AGI?

In hindsight we can say that we did not have enough hardware.  However
there has been no point in time since the 1950's when we knew that at the
time.  We are in that position today.

There is possibly a 6 order of magnitude gap between the size of a
cognitive model of human memory (10^9 bits) and the number of synapses in
the brain (10^15), and precious little research to resolve this
discrepancy.  In fact, these numbers are so poorly known that we aren't
even sure there is a gap.

 2. Do you have an idea as to what should be done about (1) that
 would significantly accelerate progress if it were generally adopted?

Resolving the cost estimate would only let us avoid expensive mistakes
like Blocks World or Cyc or 5th Generation or the 1959 Russian-English
translation project, all of which began with great

Re: [agi] Poll

2007-10-18 Thread Matt Mahoney

--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 I'd be interested in everyone's take on the following:
 
 1. What is the single biggest technical gap between current AI and AGI?

In hindsight we can say that we did not have enough hardware.  However there
has been no point in time since the 1950's when we knew that at the time.  We
are in that position today.

There is possibly a 6 order of magnitude gap between the size of a cognitive
model of human memory (10^9 bits) and the number of synapses in the brain
(10^15), and precious little research to resolve this discrepancy.  In fact,
these numbers are so poorly known that we aren't even sure there is a gap.

 2. Do you have an idea as to what should be done about (1) that would
 significantly accelerate progress if it were generally adopted?

Resolving the cost estimate would only let us avoid expensive mistakes like
Blocks World or Cyc or 5th Generation or the 1959 Russian-English translation
project, all of which began with great enthusiasm and no idea of the
difficulty involved.  What mistakes are we making now?

 3. If (2), how long would it take the field to attain (a) a baby mind, (b) a
 mature human-equivalent AI, if your idea(s) were adopted and AGI seriously 
 pursued?

The question is meaningless.  IQ is not a point on a line.  On some scales,
computers surpassed humans in the 1940's.  The goal of AGI is not to build
human minds, but to do our work.

 4. How long to (a) and (b) if AI research continues more or less as it is 
 doing now?

It would make not a bit of difference.  There is already a US $66
trillion/year incentive to develop AGI (the value of all human labor).  Nobody
on this list has the One Big Breakthrough.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] poll: what do you look for when joining an AGI group?

2007-06-13 Thread Derek Zahn
 9. a particular AGI theory
 That is, one that convinces me it's on the right track.
 
Now that you have run this poll, what did you learn from the responses and how 
are you using this information in your effort?
 


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Jean-Paul Van Belle
Hey but it makes for an excellent quote. Facts don't have to be true if they're 
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of 
apocryphal facts will surface... most not even vaguely true... You should be 
proud and happy! To quote Mr Bean 'Well, I enjoyed it anyway.'



 Eliezer S. Yudkowsky [EMAIL PROTECTED] 06/05/07 4:38 AM 
Mark Waser wrote:
  
 P.S.  You missed the time where Eliezer said at Ben's AGI conference 
 that he would sneak out the door before warning others that the room was 
 on fire:-)

This absolutely never happened.  I absolutely do not say such things, 
even as a joke, because I understand the logic of the multiplayer 
iterated prisoner's dilemma - as soon as anyone defects, everyone gets 
hurt.

Some people who did not understand the IPD, and hence could not 
conceive of my understanding the IPD, made jokes about that because 
they could not conceive of behaving otherwise in my place.  But I 
never, ever said that, even as a joke, and was saddened but not 
surprised to hear it.

-- 
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Eliezer S. Yudkowsky

Hm.  Memory may be tricking me.

I did a deeper scan of my mind, and found that the only memory I 
actually have is that someone at the conference said that they saw I 
wasn't in the room that morning, and then looked around to see if 
there was a bomb.


I have no memory of the fire thing one way or the other, but it 
sounds like a plausible distortion of the first event after a few 
repetitions.   Or maybe the intended meaning is that, if I saw a fire 
in a room, I would leave the room first to make sure of my own safety, 
and then shout "Fire!" to warn everyone else?  If so, I still don't 
remember saying that, but it doesn't have the same quality of being 
the first to defect in an iterated prisoner's dilemma - which is the 
main thing I feel I need to emphasize heavily that I will not do; no, 
not even as a joke, because talking about defection encourages people 
to defect, and I won't be the first to talk about it, either.


So I guess the moral is that I shouldn't toss around the word 
"absolutely" - even when the point needs some heavy moral emphasis - 
about events so far in the past.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Mark Waser
This absolutely never happened.  I absolutely do not say such things, even 
as a joke


   Your recollection is *very* different from mine.  My recollection is 
that you certainly did say it as a joke but that I was *rather* surprised 
that you would say such a thing even as a joke.  If anyone else would like 
to chime in (since several members of this list were in attendance) it 
might be interesting . . . . (or we could go back to the video since it was 
part of a panel that was videotaped -- if it isn't in the video, I am 
certainly willing to apologize but I'd be *very* puzzled since I've never 
had such a vivid recollection be shown to be incorrect before).





Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread James Ratcliff
# 7 8 9 

Money is good, but the overall AGI theory and program plan is the most 
important aspect.

James Ratcliff

YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Can people rate the following 
things?
  
 1. quick $$, ie salary
 2. long-term $$, ie shares in a successful corp
 3. freedom to do what they want
 4. fairness
 5. friendly friends
 6. the project looks like a winner overall
 7. knowing that the project is charitable
 8. special AGI features they look for (eg a special type of friendliness, pls 
specify)
 9. a particular AGI theory
 10. average level of expertise in the group
 11. others?
  
 Thanks in advance, it'd be hugely helpful... =)
 YKY
 


___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Mark Waser
I did a deeper scan of my mind, and found that the only memory I actually 
have is that someone at the conference said that they saw I wasn't in the 
room that morning, and then looked around to see if there was a bomb.


My memory probably was incorrect in terms of substituting fire for bomb 
(since the effect is much the same).


Or maybe the intended meaning is that, if I saw a fire in a room, I would 
leave the room first to make sure of my own safety, and then shout "Fire!" 
to warn everyone else?


I believe that that was indeed the context (with the probability that it was 
bomb instead of fire).



about events so far in the past.


It wasn't that long ago!:-)




Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Jean-Paul Van Belle
Synergy or win-win between my work and the project i.e. if the project 
dovetails with what I am doing (or has a better approach). This would require 
some overlap between the project's architecture and mine. This would also 
require a clear vision and explicit 'clues' about deliverables/modules (i.e. 
both code and ideas). I would have to be able to use these (code, idea) 
*completely* freely as I would deem fit, and would, in return, happily exchange 
the portions of my work that are relevant to the project.
Basically I agree with what the others wrote below - especially Ben. Except I 
would not work for a company that would aim to retain (exclusive or long-term) 
commercial rights to AGI design (and thus become rulers of the world :) nor 
would I accept funding from any source that aims to adopt AGI research outcomes 
for military purposes. 
Oh and yes, I'd like to be wealthy (definitely *not* rich and most definitely 
not famous - see the recent singularity discussion for a rationale on that one) 
but I already have the things I really need (not having to work for a regular 
income *would* be nice, tho)
= Jean-Paul

Justin Corwin wrote:
If I had to find a new position tomorrow, I would try to find (or
found) a group which I liked what they were 'doing', rather than their
opinions, organization, or plans.
Mark Waser wrote:
important -- 6 which would necessarily include 8 and 9
Matt wrote:
12. A well defined project goal, including test criteria.
Ben wrote:
The most important thing by far is having an AGI design that seems
feasible. For me, wanting to make a thinking machine is a far stronger motivator
than wanting to get rich. The main use of being rich is if it helps to more 
effectively launch a positive Singularity, from my view...
Eliezer wrote:
Clues.  Plural.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Mark Waser

provided that I
thought they weren't just going to take my code and apply some licence
which meant I could no longer use it in the future..


I suspect that I wasn't clear about this . . . . You can always take what is 
truly your code and do anything you want with it . . . . The problems 
start when you take the modifications to your code that were made by 
others, or when you take what you call your code but which is actually a very 
minimal change to someone else's massive effort.


No one is happy when someone else takes their work, makes a minor tweak, and 
then outcompetes them. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Mark Waser
 but I'm not very convinced that the singularity *will* automatically happen. 
 {IMHO I think the nature of intelligence implies it is not amenable to 
 simple linear scaling - likely not even log-linear}

I share that guess/semi-informed opinion; however, while that means that I 
am less afraid of hard-takeoff horribleness, it inflates my fear of someone 
taking a Friendly AI and successfully dismantling and misusing the pieces (if 
not reconstructing a non-Friendly AGI in their own image) -- and then maybe 
winning in a hardware and numbers race.

Mark

P.S.  You missed the time when Eliezer said at Ben's AGI conference that he 
would sneak out the door before warning others that the room was on fire :-)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

RE: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Derek Zahn
Mark Waser writes:
 
 P.S.  You missed the time when Eliezer said at Ben's 
 AGI conference that he would sneak out the door before 
 warning others that the room was on fire :-)
 
You people making public progress toward AGI are very brave indeed!  I wonder 
if a time will come when the personal security of AGI researchers or 
conferences will be a real concern.  Stopping AGI could be a high priority for 
existential-risk wingnuts.
 
On a slightly related note, I notice that many (most?) AGI approaches do not 
include facilities for recursive self-improvement in the sense of giving the 
AGI access to its base source code and algorithms.  I wonder if that approach 
is inherently safer, as the path to explosive self-improvement becomes much 
more difficult and unlikely to happen without being noticed.
 
Personally I think that there is little danger that a properly-programmed 
GameBoy is going to suddenly recursively self-improve itself into a 
singularity-causing AGI, and the odds of any computer in the next 10 years at 
least being able to do so are only slightly higher.
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Bob Mottram

On 04/06/07, Derek Zahn [EMAIL PROTECTED] wrote:

I wonder if a time will come when the personal security of AGI researchers or
conferences will be a real concern.  Stopping AGI could be a high priority
for existential-risk wingnuts.


I think this is the view put forward by Hugo De Garis.  I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build human-rivaling intelligences
and those who don't, probably at first amongst academics then later in
the rest of society.  I think it's quite possible that today's
existential riskers may turn into tomorrow's neo-luddite movement.  I
also think that some of those promoting AI today may switch sides as
they see the prospect of a singularity becoming more imminent.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread David Hart

On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:


I think this is the view put forward by Hugo De Garis.  I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build human-rivaling intelligences
and those who don't, probably at first amongst academics then later in
the rest of society.  I think it's quite possible that today's
existential riskers may turn into tomorrow's neo-luddite movement.  I
also think that some of those promoting AI today may switch sides as
they see the prospect of a singularity becoming more imminent.




On the subject of neo-luddite terrorists, the Unabomber's Manifesto makes
for fascinating but chilling reading:

http://www.thecourier.com/manifest.htm

David

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Eliezer S. Yudkowsky

Mark Waser wrote:
 
P.S.  You missed the time where Eliezer said at Ben's AGI conference 
that he would sneak out the door before warning others that the room was 
on fire:-)


This absolutely never happened.  I absolutely do not say such things, 
even as a joke, because I understand the logic of the multiplayer 
iterated prisoner's dilemma - as soon as anyone defects, everyone gets 
hurt.
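
The game-theoretic claim here - that in a multiplayer iterated prisoner's 
dilemma a single defection drags every player's payoff down - is easy to 
check with a toy simulation. The Python sketch below is purely illustrative: 
the three-player setup, the 'retaliator' strategy, and the standard 
T > R > P > S payoff values are assumptions of the example, not anything 
specified in this email.

# Toy multiplayer iterated prisoner's dilemma (illustrative assumptions only).

def play_round(moves, R=3, T=5, S=0, P=1):
    """Score one round: every pair of players is scored with standard
    prisoner's dilemma payoffs (T > R > P > S)."""
    n = len(moves)
    scores = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            a, b = moves[i], moves[j]
            if a == "C" and b == "C":
                scores[i] += R; scores[j] += R
            elif a == "D" and b == "D":
                scores[i] += P; scores[j] += P
            elif a == "D":                      # i defects on a cooperator
                scores[i] += T; scores[j] += S
            else:                               # j defects on a cooperator
                scores[i] += S; scores[j] += T
    return scores

def simulate(strategies, rounds=10):
    """Each strategy maps last round's moves to this round's move."""
    totals = [0] * len(strategies)
    prev = ["C"] * len(strategies)              # everyone starts cooperating
    for _ in range(rounds):
        moves = [s(prev) for s in strategies]
        for i, sc in enumerate(play_round(moves)):
            totals[i] += sc
        prev = moves
    return totals

def retaliator(prev):
    """Cooperate unless anyone defected last round."""
    return "D" if "D" in prev else "C"

def always_cooperate(prev):
    return "C"

def always_defect(prev):
    return "D"

print(simulate([always_cooperate, retaliator, retaliator]))  # [60, 60, 60]
print(simulate([always_defect, retaliator, retaliator]))     # [28, 21, 21]

With everyone cooperating, each player ends the run with 60 points; replace 
one cooperator with an unconditional defector and every total drops - the 
defector's included - which is the sense in which one defection hurts everyone.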


Some people who did not understand the IPD, and hence could not 
conceive of my understanding the IPD, made jokes about that because 
they could not conceive of behaving otherwise in my place.  But I 
never, ever said that, even as a joke, and was saddened but not 
surprised to hear it.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread Eliezer S. Yudkowsky

Clues.  Plural.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread Benjamin Goertzel

The most important thing by far is having an AGI design that seems
feasible.

Only after that (very difficult) requirement is met, do any of the others
matter.

-- Ben G

On 6/3/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


Can people rate the following things?

1. quick $$, ie salary
2. long-term $$, ie shares in a successful corp
3. freedom to do what they want
4. fairness
5. friendly friends
6. the project looks like a winner overall
7. knowing that the project is charitable
8. special AGI features they look for (eg a special type of friendliness,
pls specify)
9. a particular AGI theory
10. average level of expertise in the group
11. others?

Thanks in advance, it'd be hugely helpful... =)
YKY
--
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread Mark Waser
important -- 6 which would necessarily include 8 and 9

potentially important -- 10  (average level is a poor gauge; if there are 
enough highly-expert/superstar people, you can afford an equal number of 
relatively non-expert people; if you don't have any real superstars, you're 
dead in the water)

unimportant -- 1, 2, 3, 4, 5, 7, 11
  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Sunday, June 03, 2007 6:04 PM
  Subject: [agi] poll: what do you look for when joining an AGI group?


  Can people rate the following things?

  1. quick $$, ie salary
  2. long-term $$, ie shares in a successful corp
  3. freedom to do what they want
  4. fairness
  5. friendly friends
  6. the project looks like a winner overall
  7. knowing that the project is charitable
  8. special AGI features they look for (eg a special type of friendliness, pls 
specify)
  9. a particular AGI theory
  10. average level of expertise in the group
  11. others?

  Thanks in advance, it'd be hugely helpful... =)
  YKY

--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread justin corwin

My reasons for joining a2i2 could only be expressed as subportions of
6 and 8 (possibly 9 and 4).

I joined largely on the strength of my impression of Peter. My
interest in employment was to work as closely as possible on general
artificial intelligence, and he wanted me to work for him on precisely
that. His opinions on the subject were extremely pragmatic, and
focused on what worked. I appreciated that, thinking that so long as I
could support my opinions, they would be respected.

In retrospect, I doubt I would have joined if I had tried to evaluate
a2i2 theoretically from my own design/organizational perspective.
Peter and I still do not have identical ideas about AGI(or the
business of developing AGI), but I agree about all the specific issues
we've dealt with thus far, and I have come to think that the process
and resources an organization can bring to bear on its problems are
much more important than the precise design, opinions, or data it
has at any given time.

If I had to find a new position tomorrow, I would try to find (or
found) a group which I liked what they were 'doing', rather than their
opinions, organization, or plans.

That said, I wouldn't have joined if I hadn't been offered stock or
equivalent ownership of the work. Not because of the implied later
capital gains, but because I wouldn't want my work effectively
contributing to an organization in which I had no formal say or
control. I expect Peter will remain the overwhelming majority owner of
a2i2 for the foreseeable future, but the responsibility is important
to me.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-03 Thread Matt Mahoney
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Can people rate the following things?
 
 1. quick $$, ie salary
 2. long-term $$, ie shares in a successful corp
 3. freedom to do what they want
 4. fairness
 5. friendly friends
 6. the project looks like a winner overall
 7. knowing that the project is charitable
 8. special AGI features they look for (eg a special type of friendliness,
 pls specify)
 9. a particular AGI theory
 10. average level of expertise in the group
 11. others?

12. A well defined project goal, including test criteria.

About 1.5 years ago we discussed optical character recognition, which is not
AGI but has some potential for short-term income, requires solving some
prerequisite problems in vision and language modeling, and is feasible on a
small budget.  However, it went nowhere.

YKY, do you want to build AGI or make money?  If you try to do both, you will
get neither.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e