Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Jean-Paul Van Belle
Synergy or a win-win between my work and the project, i.e. the project 
dovetails with what I am doing (or has a better approach). This would require 
some overlap between the project's architecture and mine, as well as a clear 
vision and explicit 'clues' about deliverables/modules (i.e. both code and 
ideas). I would have to be able to use these (code, ideas) *completely* freely 
as I deem fit, and would, in return, happily exchange the portions of my work 
that are relevant to the project.
Basically I agree with what the others wrote below - especially Ben. Except I 
would not work for a company that aims to retain (exclusive or long-term) 
commercial rights to the AGI design (and thus become rulers of the world :), 
nor would I accept funding from any source that aims to adopt AGI research 
outcomes for military purposes. 
Oh, and yes, I'd like to be wealthy (definitely *not* rich and most definitely 
not famous - see the recent singularity discussion for a rationale on that one), 
but I already have the things I really need (not having to work for a regular 
income *would* be nice, though).
= Jean-Paul

Justin Corwin wrote:
If I had to find a new position tomorrow, I would try to find (or
found) a group whose actual 'doing' I liked, rather than their
opinions, organization, or plans.
Mark Waser wrote:
important -- 6 which would necessarily include 8 and 9
Matt wrote:
12. A well defined project goal, including test criteria.
Ben wrote:
The most important thing by far is having an AGI design that seems
feasible. For me, wanting to make a thinking machine is a far stronger motivator
than wanting to get rich. The main use of being rich is if it helps to more 
effectively launch a positive Singularity, from my view...
Eliezer wrote:
Clues.  Plural.





Re: [agi] Open AGI Consortium

2007-06-04 Thread Mark Waser
> You may be assuming more flexibility in the securities and tax regulations
> than actually exists now.  They've tightened things up quite a bit over
> the last ten years.


I don't think so.  I'm pretty aware of the current conditions.

> Equity and pseudo-equity (like incentive stock options -- ISOs) should be
> contracted at the earliest possible time, and before either financial or
> delivery milestones if at all possible, if you care about the value you
> will actually be delivering to your contributors.


I'm not sure what you mean by "if you care about the value you will actually 
be delivering to your contributors" but, in any case, ISOs are exactly as 
problematical as regular shares/equity -- ongoing post-AGI profits are what 
need to be distributed; equity and control really only matter to ensure that 
the profits are distributed as promised.


> And then there is the what-if of dissolution, acquisition, etc. in which a
> pre-AGI determination of equity ownership needs to be figured out -- the
> way you've set it up, the contributors would be entitled to squat.


True, but there would be little to distribute pre-AGI anyway, and the 
trusted owners would be morally (though not legally) obligated to the 
fairest distribution of source code, etc. possible (probably making it open 
source).  Actually, that's not true: the contribution agreement could easily 
be written so that the code etc. goes open source upon dissolution.


> These kinds of things are pretty strictly regulated now, and waiting until
> the end to contract a stake to your contributors would be a disaster for
> them in terms of both their return and/or tax liability,


If you're waiting until the end to distribute shares/equity, the immediate 
tax liability is nasty because it is counted as a sudden transfer of value. 
The return, however, if the shares/equity were sold immediately, is exactly 
the same as if they had owned them all along.  If, however, ongoing profits 
are simply distributed (instead of equity), there is no problematical sudden 
transfer of value.  And realistically, there aren't going to be profits 
pre-AGI.
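A minimal sketch of the timing difference being described, using made-up
figures and an assumed single flat tax rate (illustrative only, not tax
advice):

```python
# Illustrative sketch of the timing difference discussed above.
# All numbers and the flat tax rate are hypothetical assumptions.

TAX_RATE = 0.35                      # assumed flat rate on value/income received
EQUITY_VALUE_AT_GRANT = 1_000_000    # hypothetical value of shares granted post-AGI
ANNUAL_PROFIT_SHARE = 200_000        # hypothetical yearly profit distribution
YEARS = 5

# Case 1: shares are granted all at once after the value already exists --
# the whole grant is taxed immediately as a sudden transfer of value.
tax_on_grant = EQUITY_VALUE_AT_GRANT * TAX_RATE

# Case 2: no equity changes hands; profits are simply distributed as they
# occur, so tax is owed only on each year's distribution as it is received.
tax_per_year = ANNUAL_PROFIT_SHARE * TAX_RATE

print(f"Up-front tax on a late equity grant: {tax_on_grant:,.0f}")
print(f"Tax per year on profit distributions: {tax_per_year:,.0f} "
      f"({tax_per_year * YEARS:,.0f} over {YEARS} years)")
```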



> never mind the unpleasant scenarios that can occur.


Which is the one true bugaboo, for which my only solution is 
trustworthy owners.


> I cannot imagine that a savvy person would accept deferred contracting of
> options and equity.  It would be one of the worst possible equity stake
> schemes I have seen.


It's not an equity stake scheme.  It's a profit-sharing scheme.  Equity 
implies control, and control is problematical.


Can you propose something better that doesn't require guessing what a 
person's contribution will be IN ADVANCE?


> The closest *decent* way to do what you want to do is to contract options
> upfront with modifying conditions and qualifications based on future
> performance.


Do you believe that you could successfully do that?  Would you be willing to 
write up an initial shot at it?


   Mark




Re: [agi] Open AGI Consortium

2007-06-04 Thread Benjamin Goertzel

I think he's just saying to

-- make a pool of N shares allocated to technical founders.  Call this the
Technical Founders Pool

-- allocate M options on these shares to each technical founder, but with a
vesting condition stipulating that only N of the options will ever vest in
total

This doesn't solve the tax problem though.
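A minimal sketch of the pool-plus-cap idea as described (the founder names,
the numbers, and the pro-rata rule applied when the cap binds are all
assumptions for illustration):

```python
# Sketch of the pool idea: each technical founder holds M options, but
# vesting across everyone is capped so that at most N shares (the
# Technical Founders Pool) are ever issued. The pro-rata scaling used
# when requests exceed the cap is an assumption.

def vest(requests: dict[str, int], pool_size: int) -> dict[str, int]:
    """Return vested share counts, scaling down pro rata if the total
    requested exceeds the pool cap N."""
    total = sum(requests.values())
    if total <= pool_size:
        return dict(requests)
    scale = pool_size / total
    return {name: int(amount * scale) for name, amount in requests.items()}

# Hypothetical example: N = 1000 pool shares, three founders with
# M = 500 options each who have all met their vesting conditions.
pool = vest({"alice": 500, "bob": 500, "carol": 500}, pool_size=1000)
print(pool)   # each founder vests ~333 shares; no more than N ever vest
```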


>> The closest *decent* way to do what you want to do is to contract options
>> upfront with modifying conditions and qualifications based on future
>> performance.
>
> Do you believe that you could successfully do that?  Would you be willing to
> write up an initial shot at it?
>
> Mark






Re: [agi] Open AGI Consortium

2007-06-04 Thread Bob Mottram

One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular tasks.  The licence might permit
use of the code for non-commercial uses, such as research, but for
commercial use the developer would have to donate some cash into a
pool which could then be divided up or reinvested in core development
projects.

If you're going down the route of distributing shares in future
profits then those involved should be clear about the chances of
getting some return and likely time scales.  The Mindpixel project
from some years ago was really a textbook example of how not to do it
(i.e. by raising big expectations of near term profit, and then
failing to deliver).



On 04/06/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
> I think he's just saying to
>
> -- make a pool of N shares allocated to technical founders.  Call this the
> Technical Founders Pool
>
> -- allocate M options on these shares to each technical founder, but with a
> vesting condition stipulating that only N of the options will ever vest in
> total
>
> This doesn't solve the tax problem though.
>
>> The closest *decent* way to do what you want to do is to contract options
>> upfront with modifying conditions and qualifications based on future
>> performance.
>>
>> Do you believe that you could successfully do that?  Would you be willing
>> to write up an initial shot at it?
>>
>> Mark






RE: [agi] Open AGI Consortium

2007-06-04 Thread Keith Elis
Mark, have you looked at phantom stock plans? These offer some of the
same incentives as equity ownership without giving an actual equity
stake or options, allowing grantees the chance to benefit from
appreciation in the organization's value without the owners actually
relinquishing ownership. Drawbacks abound, of course. But it might be
worth looking into.
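A minimal sketch of how such a phantom plan might pay out (the unit counts
and values are hypothetical; real plans differ in many details):

```python
# Sketch of a phantom stock payout: units track appreciation in the
# organization's value without transferring any actual equity or control.
# All figures below are hypothetical.

def phantom_payout(units: int, value_per_unit_at_grant: float,
                   value_per_unit_at_payout: float) -> float:
    """Cash owed to the grantee: appreciation per unit times units held."""
    appreciation = max(0.0, value_per_unit_at_payout - value_per_unit_at_grant)
    return units * appreciation

# A contributor granted 10,000 phantom units at a $1.00 unit value, paid out
# later at a $5.00 unit value, receives the appreciation in cash.
print(phantom_payout(10_000, 1.00, 5.00))   # 40000.0
```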

Keith




Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Mark Waser

> provided that I
> thought they weren't just going to take my code and apply some licence
> which meant I could no longer use it in the future..


I suspect that I wasn't clear about this . . . . You can always take what is 
truly your code and do anything you want with it . . . . The problems 
start when you take the modifications to your code that were made by 
others, or when you take what you call "your code" which is actually a very 
minimal change to someone else's massive effort.


No one is happy when someone else takes their work, makes a minor tweak, and 
then outcompetes them. 





Re: [agi] Open AGI Consortium

2007-06-04 Thread Mark Waser
But how do you add more contributors without a lot of very contentious 
work?  Think of all the hassles that you've had with just the close-knit 
Novamente folk (and I don't mean to disparage them or you at all), and then 
increase it by some number (further complicated by distance, differences of 
viewpoint, and differences in possible contributions, much less being able to 
accurately assess them, etc.)
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, June 04, 2007 7:54 AM
  Subject: Re: [agi] Open AGI Consortium



   I think he's just saying to

  -- make a pool of N shares allocated to technical founders.  Call this the 
Technical Founders Pool

  -- allocate M options on these shares to each technical founder, but with a 
vesting condition stipulating that only N of the options will ever vest in 
total 
   
  This doesn't solve the tax problem though.  



Re: [agi] Open AGI Consortium

2007-06-04 Thread Mark Waser

> Mark, have you looked at phantom stock plans?


Keith,

   I have not, since I was unaware of them.  Thank you very much for the 
pointer.  I will investigate.  Now this is why I spend so much time 
on-line -- if only there were some almost-all-knowing being that could take 
what you're trying to accomplish and almost immediately offer 
suggestions/help on what information might be relevant :-).


   Mark 





Re: [agi] Open AGI Consortium

2007-06-04 Thread Bob Mottram

On 04/06/07, Mark Waser [EMAIL PROTECTED] wrote:
>> One possible method of becoming an AGI tycoon might be to have the
>> main core of code as conventional open source under some suitable
>> licence, but then charge customers for the service of having that core
>> system customised to solve particular tasks.
>
> Uh, I don't think you're getting this.  Any true AGI is going to be able to
> customize itself . . . . (that's kind of the point)



Well, humans have a kind of general intelligence and can also, to some
extent, modify their own knowledge/situation.  Having a general ability
to adapt doesn't necessarily mean that you are automatically an
expert in all domains.  Humans usually need some training or
familiarisation before they can perform well at any given job, and I
expect the same will be true for AGIs, at least initially.  Of course
AGIs will have many advantages which we do not, such as being able to
transfer acquired knowledge more easily.



Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Mark Waser
> but I'm not very convinced that the singularity *will* automatically happen. 
> {IMHO I think the nature of intelligence implies it is not amenable to 
> simple linear scaling - likely not even log-linear

I share that guess/semi-informed opinion; however, while that means that I 
am less afraid of hard-takeoff horribleness, it inflates my fear of someone 
taking a Friendly AI and successfully dismantling and misusing the pieces (if 
not reconstructing a non-Friendly AGI in their own image) -- and then maybe 
winning in a hardware and numbers race.

Mark

P.S.  You missed the time where Eliezer said at Ben's AGI conference that he 
would sneak out the door before warning others that the room was on fire:-)


Re: [agi] Open AGI Consortium

2007-06-04 Thread Mark Waser

> One possible method of becoming an AGI tycoon might be to have the
> main core of code as conventional open source under some suitable
> licence, but then charge customers for the service of having that core
> system customised to solve particular tasks.


Uh, I don't think you're getting this.  Any true AGI is going to be able to 
customize itself . . . . (that's kind of the point)



> If you're going down the route of distributing shares in future
> profits then those involved should be clear about the chances of
> getting some return and likely time scales.


I don't believe that it is possible for anyone to be clear about "the 
chances of getting some return and likely time scales" because I don't 
believe that anyone has that information.  Each individual is going to have 
to make their own guesstimate of these things from the architecture, the 
project plan, their assessment of their co-contributors, their assessment of 
the difficulty of the project plan tasks, and whether or not they believe 
that the project plan tasks will actually lead to an AGI.



> The Mindpixel project
> from some years ago was really a textbook example of how not to do it
> (i.e. by raising big expectations of near term profit, and then
> failing to deliver).


Yet that is what they (incorrectly) believed, and it is what you're trying to 
force me to do.





RE: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Derek Zahn
Mark waser writes:
 
 P.S.  You missed the time where Eliezer said at Ben's 
 AGI conference that he would sneak out the door before 
 warning others that the room was on fire:-)
 
You people making public progress toward AGI are very brave indeed!  I wonder 
if a time will come when the personal security of AGI researchers or 
conferences will be a real concern.  Stopping AGI could be a high priority for 
existential-risk wingnuts.
 
On a slightly related note, I notice that many (most?) AGI approaches do not 
include facilities for recursive self-improvement in the sense of giving the 
AGI access to its base source code and algorithms.  I wonder if that approach 
is inherently safer, as the path to explosive self-improvement becomes much 
more difficult and unlikely to happen without being noticed.
 
Personally I think that there is little danger that a properly-programmed 
GameBoy is going to suddenly recursively self-improve itself into a 
singularity-causing AGI, and the odds of any computer in the next 10 years at 
least being able to do so are only slightly higher.
 


Re: [agi] Open AGI Consortium

2007-06-04 Thread J. Andrew Rogers


On Jun 4, 2007, at 4:35 AM, Mark Waser wrote:
>> These kinds of things are pretty strictly regulated now, and waiting
>> until the end to contract a stake to your contributors would be a
>> disaster for them in terms of both their return and/or tax liability,
>
> If you're waiting until the end to distribute shares/equity, the
> immediate tax liability is nasty because it is counted as a sudden
> transfer of value.  The return, however, if the shares/equity were
> sold immediately, is exactly the same as if they had owned them all
> along.  If, however, ongoing profits are simply distributed
> (instead of equity), there is no problematical sudden transfer of
> value.  And realistically, there aren't going to be profits pre-AGI.



Depending on how the nominal value is disbursed, the true financial 
value can vary significantly.  Other than outright equity, 
instruments like profit distribution are about the worst in this 
regard, instruments like warrants are among the best (but you can't 
give those to just anyone), and most other instruments fall somewhere 
in the middle.  The difference is significant: the real return 
between the best and worst can easily be 2x.  (Depending on your 
specific type of interest in a company, an argument can be made that 
warrants can be more valuable than equity.)



>> The closest *decent* way to do what you want to do is to contract
>> options upfront with modifying conditions and qualifications based
>> on future performance.
>
> Do you believe that you could successfully do that?  Would you be
> willing to write up an initial shot at it?



Since many startups in Silicon Valley do exactly this, I would say  
that it is quite doable.  It is less flexible and accurate than  
waiting until the end to make determinations of value, but it is a  
fair proxy and both parties have to agree to it anyway.  If  
structured well, bits can frequently be negotiated off-contract later  
if conditions change.  It is how startups deal with things like high  
rates of churn.  Personally, I do find the current state of  
regulation to be irritatingly inflexible.


Cheers,

J. Andrew Rogers



Re: [agi] Open AGI Consortium

2007-06-04 Thread Mark Waser
> The difference is significant: the real return between the best and worst
> can easily be 2x.


Given that this is effectively a venture capital moon-shot as opposed to a 
normal savings plan type investment, a variance of 2x is not as much as it 
initially seems (and we would, of course, do whatever we could to avoid the 
worst of the worst cases).


> (Depending on your specific type of interest in a company, an argument
> can be made that warrants can be more valuable than equity.)


Warrants have the same control problems as options do -- magnified by the 
fact that they are transferable.  They are definitely not what I would call 
acceptable for this purpose.


> Since many startups in Silicon Valley do exactly this, I would say that
> it is quite doable.  It is less flexible and accurate than waiting until
> the end to make determinations of value, but it is a fair proxy and both
> parties have to agree to it anyway.  If structured well, bits can
> frequently be negotiated off-contract later if conditions change.  It is
> how startups deal with things like high rates of churn.


I would argue that while it is doable when there is a relatively small 
number of people, when the people know each other, and when they have a 
reasonable amount of togetherness time, I don't see it working in the 
proposed circumstances.


= = = = =

You *are* giving good solid financial advice and I *do* appreciate it.  I'm 
just not seeing a good, clean way to do everything that I want to do (which 
is really a sad commentary on the current state of regulation).


   Mark 





Re: [agi] Open AGI Consortium

2007-06-04 Thread J. Andrew Rogers


On Jun 4, 2007, at 8:07 AM, Mark Waser wrote:
>> (Depending on your specific type of interest in a company, an
>> argument can be made that warrants can be more valuable than
>> equity.)
>
> Warrants have the same control problems as options do -- magnified
> by the fact that they are transferable.  They are definitely not
> what I would call acceptable for this purpose.



Eh?  What is the problem with them being transferable?  Of what value 
are these instruments to anyone if they are not ultimately 
transferable?  This is the kind of "control freak" tendency that 
makes many startup ventures untenable; if you cannot give up some 
control (and I will grant such tendencies are not natural), you might 
not be the best person to be running such a startup venture.  If I 
were a VC looking at your company -- not a foreign role for me -- the 
fixation on that aspect would raise red flags.


Blue-sky ventures and maintaining control are pretty much in 
opposition to each other if you do not want to marginalize your 
funding opportunities.  The lack of intrinsic capital is going to 
make things tough, because the only real currency you have *is* control.


Cheers,

J. Andrew Rogers





[agi] credit attribution method

2007-06-04 Thread YKY (Yan King Yin)

On 6/4/07, Bob Mottram [EMAIL PROTECTED] wrote:
> [...]  Judging by the volume of text generated so far
> on this subject I expect that anyone joining this sort of venture will
> waste a lot of their mental energy determining precisely who owns what
> and arguing over the details of the mechanism for allocation of
> profits based upon contributions (does number of lines of code equal
> contribution, and if so is this likely to lead to a good AGI design?).



Yes, measuring contributions is the crux of the problem.

To measure *each* item of contribution,
self-rating +
(optional) peer-rating +
(optional) peer complaint +
(optional) managerial board arbitration
is the only feasible solution I can think of.  Peer-rating should be made
optional as we cannot expect all members to rate every contribution -- too
time-wasting.
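A minimal sketch of that per-item pipeline (the averaging rule, the
arbitration override, and all names and numbers are assumptions for
illustration):

```python
# Sketch of the per-item rating pipeline described above: self-rating,
# optional peer ratings, optional complaints, and managerial board
# arbitration as the final override. The averaging rule is an assumption.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Contribution:
    author: str
    self_rating: float                                        # author's own score, e.g. 0-10
    peer_ratings: list[float] = field(default_factory=list)   # optional
    complaints: list[str] = field(default_factory=list)       # optional
    arbitrated_rating: Optional[float] = None                 # board override

def credit(c: Contribution) -> float:
    """Final credit for one item of contribution."""
    if c.arbitrated_rating is not None:        # board arbitration wins
        return c.arbitrated_rating
    if c.peer_ratings:                         # otherwise average self + peers
        return (c.self_rating + sum(c.peer_ratings)) / (1 + len(c.peer_ratings))
    return c.self_rating                       # honest self-rating by default

item = Contribution("yky", self_rating=8.0, peer_ratings=[6.0, 7.0])
print(credit(item))   # 7.0
```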

Trying to measure a member's *overall* contribution over a period of time
is problematic because peers can't have photographic memory of the myriad
contributions made by an individual in a period.

The problem is compounded by the problems of assessing:
1.  short-term income-generating ability
2.  risk of the contribution being ripped off by thieves

Interestingly, self-rating +... can solve all these problems, provided that
we have a significant number of well-behaved and honest members.

Using a non-existent AGI to rate contributions... is not a realistic idea.

So if someone can suggest a better, realistic solution, I'm willing to adopt
it.  If not, I still believe that self-rating +... is worth giving a try.

YKY


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Bob Mottram

On 04/06/07, Derek Zahn [EMAIL PROTECTED] wrote:
> I wonder if a time will come when the personal security of AGI researchers
> or conferences will be a real concern.  Stopping AGI could be a high
> priority for existential-risk wingnuts.


I think this is the view put forward by Hugo De Garis.  I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I am slowly coming around to the view that there could emerge
a rift between those who want to build human-rivaling intelligences
and those who don't, probably at first amongst academics and then later in
the rest of society.  I think it's quite possible that today's
existential riskers may turn into tomorrow's neo-luddite movement.  I
also think that some of those promoting AI today may switch sides as
they see the prospect of a singularity becoming more imminent.



Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-04 Thread James Ratcliff

> But you haven't answered my question.  How do you test if a machine is
> conscious, and is therefore (1) dangerous, and (2) deserving of human rights?

Easily: once it acts autonomously, not based on your directly given goals and 
orders, and when it begins acting and generating its own new goals.  
After that all bets are off, and it's a 'being' in its own right.

James Ratcliff


Matt Mahoney [EMAIL PROTECTED] wrote:

--- Mark Waser wrote:
>> Belief in consciousness and belief in free will are parts of the human
>> brain's programming.  If we want an AGI to obey us, then we should not
>> program these beliefs into it.
>
> Are we positive that we can avoid doing so?  Can we prevent others from
> doing so?

We have discussed friendly AI at length.  As far as I know, we cannot
guarantee an AI will be friendly, any more than we can guarantee that software
will be free of bugs.  The obstacle is not our lack of effort, but a
fundamental limitation of algorithmic complexity theory.

But programming a belief in consciousness or free will seems to be a hard
problem that has no practical benefit anyway.  It seems to be easier to build
machines without them.  We do it all the time.

>> including systems that exceed human intelligence, such as Google.
>
> Google doesn't exceed human intelligence.  You aren't worth debating with if
> you truly believe such things.

Google (the search engine) has a larger memory and greater processing speed
than the human brain with respect to language.  These are two possible ways in
which intelligence can be measured.  But since we lack a definition of
intelligence that we can all agree on, you are free to change the rules every
time a machine threatens human supremacy, and you will win every time.

But you haven't answered my question.  How do you test if a machine is
conscious, and is therefore (1) dangerous, and (2) deserving of human rights?




-- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: [agi] credit attribution method

2007-06-04 Thread Panu Horsmalahti

Now, all we need to do is find 2 AGI designers who agree on something.


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread David Hart

On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
> I think this is the view put forward by Hugo De Garis.  I used to
> regard his views as little more than an amusing sci-fi plot, but more
> recently I am slowly coming around to the view that there could emerge
> a rift between those who want to build human-rivaling intelligences
> and those who don't, probably at first amongst academics and then later in
> the rest of society.  I think it's quite possible that today's
> existential riskers may turn into tomorrow's neo-luddite movement.  I
> also think that some of those promoting AI today may switch sides as
> they see the prospect of a singularity becoming more imminent.




On the subject of neo-luddite terrorists, the Unabomber's Manifesto makes
for fascinating but chilling reading:

http://www.thecourier.com/manifest.htm

David


[agi] Re: PolyContextural Logics

2007-06-04 Thread Lukasz Stafiniak

One more bite:
Locus Solum: From the rules of logic to the logic of rules by
Jean-Yves Girard, 2000.
http://lambda-the-ultimate.org/node/1994

On 6/5/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

Speaking of logical approaches to AGI... :-)

http://www.thinkartlab.com/pkl/





[agi] New AI related charity?

2007-06-04 Thread William Pearson

Is there space within the charity world for another one related to
intelligence but with a different focus to SIAI?

Rather than specifically funding an AGI effort, or creating one in
order to bring about a specific goal state of humanity, it would be
dedicated to funding a search for the answers to a series of
questions that will help answer two larger questions: "What is
intelligence?" and "What are the possible futures that follow on from
humans discovering what intelligence is?"  The second question can only
be answered once we have better knowledge of the answer to the first.

I feel a charity is needed to focus some of the efforts of all of
us, as the time is not right for applications and we are all pulling
in diverse directions.

The sorts of questions I would like the charity to fund answers to are
the following (in a full and useful fashion; my own very partial
answers follow).

1) What sorts of limits are there to learning systems?
2) Which systems can approach those limits?  And are they suitable for
creating intelligences, given the assumptions they make about the world
around them?

I am following this track because a system that can make better use of its
information streams to alter how it behaves, compared with other systems, is
more likely to be what we think of as intelligent.  The only caveat is
that this holds as long as it has to deal with the same classes or
quality of information streams as humans do.

Pointers towards answers

1) No system can make justified choices about how it should behave at
a greater rate than the bit rate of its input streams.

2) A von Neumann architecture computer, loading a program from external
information sources, approaches that limit (as it makes a choice to
alter its behaviour by one bit for every bit it receives, assuming a
non-redundant encoding of the program).  It is not suitable for
intelligent systems, though, as it assumes that the information it gets
from the environment is correct and non-hostile.
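One possible formalisation of pointer 1, under the assumption that every
justified choice must be grounded in at least one fresh input bit (the
notation is an assumption, not from the original post):

```latex
% Let I(t) be the cumulative number of input bits received by time t, and
% C(t) the cumulative number of justified binary choices the system has made
% about its own behaviour by time t. If every justified choice must be
% grounded in at least one fresh input bit, then over any interval
\[
  C(t_2) - C(t_1) \;\le\; I(t_2) - I(t_1), \qquad t_1 \le t_2 ,
\]
% i.e. the rate of justified behavioural change never exceeds the input bit
% rate. A von Neumann machine loading a non-redundantly encoded program meets
% the bound with equality: each received bit changes exactly one bit of its
% future behaviour, so C(t) = I(t).
```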

How to make a system with the same ability to approach the limits to
learning and deal with potentially harmful information is what I would
like to focus on after these answers are formalised.

I would be interested in other people's opinions on these questions and
answers, and also in the questions they would want an intelligence
research charity to fund.

 Will Pearson



[agi] What was that again?

2007-06-04 Thread J Storrs Hall, PhD
The benefits of forgetfulness: smaller search spaces mean easier recall
http://arstechnica.com/news.ars/post/20070604-the-benefits-of-forgetfulness-smaller-search-spaces-mean-faster-recall.html

j



[agi] credit attribution method

2007-06-04 Thread Mark Waser
> Using a non-existent AGI to rate contributions... is not a realistic idea.

Ok, I'll bite.  Why not?


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Eliezer S. Yudkowsky

Mark Waser wrote:
> P.S.  You missed the time where Eliezer said at Ben's AGI conference
> that he would sneak out the door before warning others that the room was
> on fire:-)


This absolutely never happened.  I absolutely do not say such things, 
even as a joke, because I understand the logic of the multiplayer 
iterated prisoner's dilemma - as soon as anyone defects, everyone gets 
hurt.


Some people who did not understand the IPD, and hence could not 
conceive of my understanding the IPD, made jokes about that because 
they could not conceive of behaving otherwise in my place.  But I 
never, ever said that, even as a joke, and was saddened but not 
surprised to hear it.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] credit attribution method

2007-06-04 Thread YKY (Yan King Yin)

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
>> Using a non-existent AGI to rate contributions... is not a realistic idea.
>
> Ok, I'll bite.  Why not?


It seems that you're just using the promise that there'll be a future AGI
(and so presumably credits can be assessed more objectively, which I agree
with), but in reality you're using traditional, opaque managerial practices to
run the company.  This is not the way to attract a large number of
participants.  The point of my proposal is to recruit *many more* participants
than traditional companies do.  I believe AGI will need the collaboration of
many individuals.

In a traditional startup the founders can exercise much more control, but
they also need to work much harder to be competent leaders.  If their
startup is successful they usually get disproportionately rich because of the
non-linear effects that Bill Gates so famously talked about, while the rest
of the regular employees receive a linear wage.  Under my
proposal, basically all participants can enjoy the nonlinear leverage of the
project's success, depending on the amount of their contributions (and the
attribution mechanism).  Anyway, I see this as an improvement over
traditional startups / partnerships.
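A minimal sketch of the payout rule this implies (the pro-rata formula and
the figures are assumptions about what "depending on the amount of their
contributions" might mean in practice):

```python
# Sketch of the leverage described above: every participant's payout scales
# with total project profit rather than with a fixed wage.
# The pro-rata formula and all numbers are assumptions for illustration.

def payouts(contributions: dict[str, float], profit: float) -> dict[str, float]:
    """Split profit in proportion to each participant's credited contribution."""
    total = sum(contributions.values())
    return {name: profit * c / total for name, c in contributions.items()}

# If the project's profit grows 100x, every participant's payout grows 100x
# too (nonlinear upside), unlike a fixed linear wage.
credits = {"yky": 40.0, "alice": 35.0, "bob": 25.0}
print(payouts(credits, profit=1_000_000))
```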

Let's do a joint project -- I need people to help me ;)

YKY


Re: [agi] Pure reason is a disease.

2007-06-04 Thread Jiri Jelinek

Hi Mark,


> Your brain can be simulated on a large/fast enough von Neumann architecture.



From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to the human brain.  In our
brains, information not only "is" and "moves" but also "feels".  From
my perspective, the idea of uploading a human mind into (or fully
simulating it in) a VN architecture system is like trying to create (not
just draw) a 3D object in a 2D space.  You can find a way to
represent it even in 1D, but you miss the real view - which, in this
analogy, would be the beauty (or awfulness) needed to justify actions.
It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking.  But thinking is not our goal.  Feeling is.  The
goal is to not have goal(s) and safely feel the best forever.


> prove that you aren't just living in a simulation.

Impossible.


> If you can't, then you must either concede that feeling pain is possible
> for a simulated entity..

It is possible.  There are just good reasons to believe that it takes
more than a bunch of semiconductor-based slots storing 1s and 0s.

Regards,
Jiri
