[agi] Re: [agi] P≠NP

2010-08-12 Thread Kaj Sotala
2010/8/12 John G. Rose johnr...@polyplexic.com

 BTW here is the latest one:

 http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf

See also:

http://www.ugcs.caltech.edu/~stansife/pnp.html - brief summary of the proof

Discussion about whether it's correct:

http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/
http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p≠np-proof/
http://news.ycombinator.com/item?id=1585850

Wiki page summarizing a lot of the discussion, as well as collecting
many of the links above:

http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper#Does_the_argument_prove_too_much.3F




Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Kaj Sotala
On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak lukst...@gmail.com wrote:
 http://www.sciencedaily.com/releases/2008/12/081224215542.htm

 Nothing surprising ;-)

So they have a result saying that we're good at subconsciously
estimating the direction in which dots on a screen are moving.
Apparently this can be safely generalized into "Our Unconscious Brain
Makes The Best Decisions Possible" (implied: always).

You're right, nothing surprising. Just the kind of unfounded,
simplistic hyperbole I'd expect from your average science reporter.
;-)




[agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Kaj Sotala
On Fri, Dec 19, 2008 at 1:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
 Ben,

 I radically disagree. Human intelligence involves both creativity and
 rationality, certainly.  But  rationality - and the rational systems  of
 logic/maths and formal languages, [on which current AGI depends]  -  are
 fundamentally *opposed* to creativity and the generation of new ideas.  What
 I intend to demonstrate in a while is that just about everything that is bad
 thinking from a rational POV is *good [or potentially good] thinking* from a
 creative POV (and vice versa). To take a small example, logical fallacies
 are indeed illogical and irrational - an example of rationally bad thinking.
 But they are potentially good thinking from a creative POV -   useful
 skills, for example, in a political spinmeister's art. (And you and Pei use
 them a lot in arguing for your AGI's  :)).

I think this example is more about needing to apply different kinds of
reasoning rules in different domains, rather than the underlying
reasoning process itself being different.

In the domain of classical logic, if you encounter a contradiction,
you'll want to apply a reasoning rule saying that your premises are
inconsistent, and at least one of them needs to be eliminated or at
least modified.

In the domain of politics, if you encounter a contradiction, you'll
want to apply a reasoning rule saying that this may come in useful as a
rhetorical argument. Note that even then, you need to apply
rationality in order to figure out what kinds of contradictions are
effective on your intended audience, and what kinds of contradictions
you'll want to avoid. You can't just go around proclaiming "it is my
birthday and it is not my birthday" and expect people to take you
seriously.
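
In classical-logic terms, the birthday example is just an unsatisfiable
premise set. Purely as an illustration - a minimal Python sketch using
sympy, added for this write-up and not part of the original exchange -
the consistency check looks like this:

from sympy import symbols
from sympy.logic.boolalg import And, Not
from sympy.logic.inference import satisfiable

P = symbols("it_is_my_birthday")

# An inconsistent premise set: "it is my birthday and it is not my birthday".
# satisfiable() returns a satisfying assignment, or False if none exists.
print(satisfiable(And(P, Not(P))))   # False -> at least one premise has to go
print(satisfiable(P))                # {it_is_my_birthday: True} -> consistent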

It seems to me that Mike is committing the fallacy of interpreting
rationality too narrowly, thinking it to be something like a
slightly expanded version of classical formal logic. That's a common
mistake (oh, what damage Gene Roddenberry did to humanity when he
created the character of Spock), but a mistake nonetheless.

Furthermore, this currently seems to be mostly a debate over
semantics, and the appropriate meaning of labels... if both Ben and
Mike took the approach advocated in
http://www.overcomingbias.com/2008/02/taboo-words.html and taboo'd
both "rationality" and "creativity", so that e.g.

rationalityBen = [a process by which ideas are verified for internal
consistency]
creativityBen = [a process, currently not entirely understood, by
which new ideas are generated]
rationalityMike = [a set of techniques such as math and logic]
creativityMike = well, not sure of what Mike's exact definition for
creativity *would* be

then, instead of sentences like "the wider culture has always known
that rationality and creativity are opposed" (to quote Mike's earlier
mail), we'd get sentences like "the wider culture has always known
that the set of techniques of math and logic are opposed to
creativity", which would be much easier to debate. No need to keep
guessing what, exactly, the other person *means* by "rationality"
and "logic"...




Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Kaj Sotala
On Fri, Sep 5, 2008 at 11:21 AM, Brad Paulsen [EMAIL PROTECTED] wrote:
 http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3&partner=rssnyt&emc=rss&oref=slogin&oref=slogin&oref=slogin

http://www.sciencemag.org/cgi/content/short/1164685 for the original study.




[agi] Coin-flipping duplicates (was: Breaking Solomonoff induction (really))

2008-06-23 Thread Kaj Sotala
On 6/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Sun, 6/22/08, Kaj Sotala [EMAIL PROTECTED] wrote:
   On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:
   
Eliezer asked a similar question on SL4. If an agent
   flips a fair quantum coin and is copied 10 times if it
   comes up heads, what should be the agent's subjective
   probability that the coin will come up heads? By the
   anthropic principle, it should be 0.9. That is because if
   you repeat the experiment many times and you randomly
   sample one of the resulting agents, it is highly likely
   that will have seen heads about 90% of the time.
  
   That's the wrong answer, though (as I believe I pointed out when the
   question was asked over on SL4). The copying is just a red
   herring, it doesn't affect the probability at all.
  
   Since this question seems to confuse many people, I wrote a
   short Python program simulating it:
   http://www.saunalahti.fi/~tspro1/Random/copies.py

 The question was about subjective anticipation, not the actual outcome. It 
 depends on how the agent is programmed. If you extend your experiment so that 
 agents perform repeated, independent trials and remember the results, you 
 will find that on average agents will remember the coin coming up heads 99% 
 of the time. The agents have to reconcile this evidence with their knowledge 
 that the coin is fair.


If the agent is rational, then its subjective anticipation should
match the most likely outcome, no?

Define "perform repeated, independent trials". That's a vague wording
- I can come up with at least two different interpretations:

a) Perform the experiment several times. If, on any of the trials,
copies are created, then have all of them partake in the next trial as
well, flipping a new coin and possibly being duplicated again (and
quickly leading to an exponentially increasing number of copies).
Carry out enough trials to eliminate the effect of random chance.
Since every agent is flipping a fair coin each time, by the time you
finish running the trials, all of them will remember seeing a roughly
equal amount of heads and tails. Knowing this, a rational agent should
anticipate this result, and not a 99% ratio.

b) Perform the experiment several times. If, on any of the trials,
copies are created, leave most of them be and have only one of them
partake in the repeat trials. This will eventually result in a large
number of copies who've most recently seen heads, and at most one copy
at a time who's most recently seen tails. But this doesn't tell us
anything about the original question! The original question was "if
you flip a coin and get copied on seeing heads, what result should you
anticipate seeing?", not "if you flip a coin several times, and each
time that heads turns up, copies of you get made and most are set aside
while one keeps flipping the coin, should you anticipate eventually
ending up in a group that has most recently seen heads?". Yes, there is
a high chance of ending up in such a group, but we again have a
situation where the copying doesn't really affect things - this kind
of wording is effectively the same as asking "if you flip a coin and
stop flipping once you see heads, should you, over enough trials,
anticipate that the outcome you most recently saw was heads?" - the
copying only gives you a small chance to keep flipping anyway. The
agent should still anticipate seeing an equal ratio of tails and heads
beforehand, since that's what it will see, up to the point where it
ends up in a position where it stops flipping the coin.

  It is a tricker question without multiple trials. The agent then needs to 
 model its own thought process (which is impossible for any Turing computable 
 agent to do with 100% accuracy). If the agent knows that it is programmed so 
 that if it observes an outcome R times out of N that it would expect the 
 probability to be R/N, then it would conclude I know that I would observe 
 heads 99% of the time and therefore I would expect heads with probability 
 0.99. But this programming would not make sense in a scenario with 
 conditional copying.

That's right, it doesn't.

  Here is an equivalent question. If you flip a fair quantum coin, and you are 
 killed with 99% probability conditional on the coin coming up tails, then, 
 when you look at the coin, what is your subjective anticipation of seeing 
 heads?

What sense of "equivalent" do you mean? It isn't directly equivalent,
since it will produce a somewhat different outcome in the single-trial
(or repeated single-trial) case. Previously all the possible outcomes
would have been in either the "seen heads" or the "seen tails"
category; this question adds the "hasn't seen anything, is dead"
category.

In the original experiment my expectation would have been 50:50 - here
I have a 50% subjective anticipation of seeing heads, a 0.5%
anticipation of seeing tails, and a 49.5% anticipation of not seeing
anything at all.
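
Those numbers are easy to check numerically. As a purely illustrative
sketch (not part of the original exchange), a simulation of the variant -
a fair flip, then death with probability 0.99 conditional on tails -
could look like this:

import random

def trial():
    """One run of the variant: fair flip, then death with p = 0.99 on tails."""
    if random.random() < 0.5:
        return "saw heads"
    if random.random() < 0.99:
        return "dead, saw nothing"
    return "saw tails"

n = 1_000_000
counts = {"saw heads": 0, "saw tails": 0, "dead, saw nothing": 0}
for _ in range(n):
    counts[trial()] += 1

for outcome, count in counts.items():
    print(f"{outcome}: {count / n:.3%}")
# Prints roughly: saw heads ~50%, saw tails ~0.5%, dead ~49.5%.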




-- 
http://www.saunalahti.fi/~tspro1

Re: [agi] Breaking Solomonoff induction (really)

2008-06-22 Thread Kaj Sotala
On 6/21/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Eliezer asked a similar question on SL4. If an agent flips a fair quantum 
 coin and is copied 10 times if it comes up heads, what should be the agent's 
 subjective probability that the coin will come up heads? By the anthropic 
 principle, it should be 0.9. That is because if you repeat the experiment 
 many times and you randomly sample one of the resulting agents, it is highly 
 likely that will have seen heads about 90% of the time.

That's the wrong answer, though (as I believe I pointed out when the
question was asked over on SL4). The copying is just a red herring, it
doesn't affect the probability at all.

Since this question seems to confuse many people, I wrote a short
Python program simulating it:
http://www.saunalahti.fi/~tspro1/Random/copies.py

Set the number of trials to whatever you like (if it's high, you might
want to comment out the "A randomly chosen agent has seen..." lines to
make it run faster) - the ratio will converge to 1:1 for any larger
number of trials.
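
The script linked above lived on a personal home page and may no longer
be reachable. Purely as an illustration of the setup - a minimal sketch,
not a reconstruction of the original copies.py - a simulation along these
lines shows both of the quantities the thread argues about: how often the
flip itself comes up heads, and what a randomly chosen resulting agent
remembers.

import random

COPIES_ON_HEADS = 10
TRIALS = 100_000

flips_heads = 0         # how often the flip itself comes up heads
agents_seen_heads = 0   # how many resulting agents remember heads
agents_seen_tails = 0

for _ in range(TRIALS):
    if random.random() < 0.5:
        flips_heads += 1
        agents_seen_heads += COPIES_ON_HEADS   # the agent is copied 10 times
    else:
        agents_seen_tails += 1                 # no copying on tails

print(f"Fraction of flips that came up heads: {flips_heads / TRIALS:.3f}")  # ~0.5
total_agents = agents_seen_heads + agents_seen_tails
print(f"Fraction of agents who have seen heads: "
      f"{agents_seen_heads / total_agents:.3f}")                            # ~0.91

The disagreement in the thread is essentially over which of those two
numbers deserves the name "subjective probability".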




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://www.mfoundation.org/




Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Steve Richfield [EMAIL PROTECTED] wrote:
 Story: I recently attended an SGI Buddhist meeting with a friend who was a 
 member there. After listening to their discussions, I asked if there was 
 anyone there (from ~30 people) who had ever found themselves in a position of 
 having to kill or injure another person, as I have. There were none, as such 
 experiences tend to change people's outlook on pacifism. Then I mentioned how 
 Herman Kahn's MAD solution to avoiding an almost certain WW3 involved an 
 extremely non-Buddhist approach, gave a thumbnail account of the historical 
 situation, and asked if anyone there had a Buddhist-acceptable solution. Not 
 only was there no other solutions advanced, but they didn't even want to 
 THINK about such things! These people would now be DEAD if not for Herman 
 Kahn, yet they weren't even willing to examine the situation that he found 
 himself in!
 The ultimate power on earth: An angry 3-year-old with a loaded gun.

 Hence, I come to quite the opposite solution - that AGIs will want to appear 
 to be IRrational, like the 3-year-old, taking bold steps that force 
 capitulation.


Certainly a rational AGI may find it useful to appear irrational, but
that doesn't change the conclusion that it'll want to think rationally
at the bottom, does it?




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Kaj Sotala [EMAIL PROTECTED] wrote:
 Certainly a rational AGI may find it useful to appear irrational, but
  that doesn't change the conclusion that it'll want to think rationally
  at the bottom, does it?

Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html ,
especially parts 5 - 6.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Evaluating Conference Quality [WAS Re: Symbol Grounding ...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Stefan Pernar [EMAIL PROTECTED] wrote:
   What follows are wild speculations and grand pie-in-the-sky plans without 
 substance with a letter to investors attached. Oh, come on!

Um, people, is this list really the place for fielding personal insults?

For what it's worth, my two cents: I don't always see, off the bat,
why Richard says something or holds a particular opinion, and as I
don't see the inferential steps that he's taken to reach his
conclusion, his sayings might occasionally seem like wild
speculation. However, each time that I've asked him for extra
details, he has without exception delivered a prompt and often rather
long explanation of what his premises are and how he's arrived at a
particular conclusion. If that hasn't been enough to clarify things,
I've pressed for more details, and I've always received a clear and
logical response until I've finally figured out where he's coming
from.

I do admit that my qualifications to discuss any AGI-related subject
are insignificant compared to most of this list's active posters (heck,
I don't even have my undergraduate degree completed yet), and as such
I might have unwittingly ignored some crucial details of what's been
going on. From what I've been able to judge, though, I've seen
absolutely no reasons to dismiss Richard as dogmatic, irrational or a
wild speculator. (At least not any more than anyone else on this
list...)


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-05 Thread Kaj Sotala
Richard,

again, I must sincerely apologize for responding to this so
horrendously late. It's a dreadfully bad habit of mine: I get an e-mail
(or blog comment, or forum message, or whatever) that requires some
thought before I respond, so I don't answer it right away... and then
something related to my studies or hobbies shows up and doesn't leave
me with enough energy to compose responses to anybody at all, after
which enough time has passed that the message has vanished from my
active memory, and when I remember it so much time has passed already
that a day or two more before I answer won't make any difference...
and then *so* much time has passed that replying to the message so
late feels more embarrassing than just quietly forgetting about it.

I'll try to mend my ways in the future. By the same token, I must
say I can only admire your ability to compose long, well-written
replies to messages in what seems to me like the blink of an eye. :-)

On 3/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Kaj Sotala wrote:

  On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 
   Kaj Sotala wrote:
 Alright. But previously, you said that Omohundro's paper, which to me
 seemed to be a general analysis of the behavior of *any* minds with
 (more or less) explict goals, looked like it was based on a
 'goal-stack' motivation system. (I believe this has also been the
 basis of your critique for e.g. some SIAI articles about
 friendliness.) If built-in goals *can* be constructed into
 motivational system AGIs, then why do you seem to assume that AGIs
 with built-in goals are goal-stack ones?
  
  
   I seem to have caused lots of confusion earlier on in the discussion, so
let me backtrack and try to summarize the structure of my argument.
  
1)  Conventional AI does not have a concept of a
 Motivational-Emotional
System (MES), the way that I use that term, so when I criticised
Omuhundro's paper for referring only to a Goal Stack control system,
 I
was really saying no more than that he was assuming that the AI was
driven by the system that all conventional AIs are supposed to have.
These two ways of controlling an AI are two radically different
 designs.
  
  [...]
 
So now:  does that clarify the specific question you asked above?
  
 
  Yes and no. :-) My main question is with part 1 of your argument - you
  are saying that Omohundro's paper assumed the AI to have a certain
  sort of control system. This is the part which confuses me, since I
  didn't see the paper to make *any* mentions of how the AI should be
  built. It only assumes that the AI has some sort of goals, and nothing
  more.
[...]
  Drive 1: AIs will want to self-improve
  This one seems fairly straightforward: indeed, for humans
  self-improvement seems to be an essential part in achieving pretty
  much *any* goal you are not immeaditly capable of achieving. If you
  don't know how to do something needed to achieve your goal, you
  practice, and when you practice, you're improving yourself. Likewise,
  improving yourself will quickly become a subgoal for *any* major
  goals.
 

  But now I ask:  what exactly does this mean?

  In the context of a Goal Stack system, this would be represented by a top
 level goal that was stated in the knowledge representation language of the
 AGI, so it would say Improve Thyself.
[...]
  The reason that I say Omuhundro is assuming a Goal Stack system is that I
 believe he would argue that that is what he meant, and that he assumed that
 a GS architecture would allow the AI to exhibit behavior that corresponds to
 what we, as humans, recognize as wanting to self-improve.  I think it is a
 hidden assumption in what he wrote.

At least I didn't read the paper in such a way - after all, the
abstract says that it's supposed to apply equally to all AGI systems,
regardless of the exact design:

"We identify a number of drives that will appear in sufficiently
advanced AI systems of any design. We call them drives because they
are tendencies which will be present unless explicitly counteracted."

(You could, of course, suppose that the author was assuming that an
AGI could *only* be built around a Goal Stack system, and therefore
"any design" would mean "any GS design"... but that seems a bit
far-fetched.)

  Drive 2: AIs will want to be rational
  This is basically just a special case of drive #1: rational agents
  accomplish their goals better than irrational ones, and attempts at
  self-improvement can be outright harmful if you're irrational in the
  way that you try to improve yourself. If you're trying to modify
  yourself to better achieve your goals, then you need to make clear to
  yourself what your goals are. The most effective method for this is to
  model your goals as a utility function and then modify yourself to
  better carry out the goals thus specified.

  Well, again, what exactly do you mean by rational?  There are many
 meanings of this term

Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Kaj Sotala
On 3/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  A lot of students email me asking me what to read to get up to speed on AGI.

Ben,

while we're on the topic, could you elaborate a bit on what kind of
prerequisite knowledge the books you've written/edited require? For
instance, I've been putting off reading "Artificial General
Intelligence" on the assumption that, for the full benefit, it requires
a good understanding of narrow-AI/basic compsci concepts that I
haven't necessarily yet acquired (currently working my way through
Russel  Norvig in order to fix that). The Hidden Pattern sounds like
it would be heavier on the general cogsci/philosophy of mind
requirements, and the Probabilistic Logic Networks one probably needs
a heavy dose of maths (what kind of maths)? What about the
OpenCog/Novamente documentation you've mentioned maybe releasing this
year?

(Agiri.org seems to be down, by the way, so I can't access the textbook page.)


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Kaj Sotala
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Kaj Sotala wrote:
   Alright. But previously, you said that Omohundro's paper, which to me
   seemed to be a general analysis of the behavior of *any* minds with
   (more or less) explict goals, looked like it was based on a
   'goal-stack' motivation system. (I believe this has also been the
   basis of your critique for e.g. some SIAI articles about
   friendliness.) If built-in goals *can* be constructed into
   motivational system AGIs, then why do you seem to assume that AGIs
   with built-in goals are goal-stack ones?


 I seem to have caused lots of confusion earlier on in the discussion, so
  let me backtrack and try to summarize the structure of my argument.

  1)  Conventional AI does not have a concept of a Motivational-Emotional
  System (MES), the way that I use that term, so when I criticised
  Omuhundro's paper for referring only to a Goal Stack control system, I
  was really saying no more than that he was assuming that the AI was
  driven by the system that all conventional AIs are supposed to have.
  These two ways of controlling an AI are two radically different designs.
[...]
  So now:  does that clarify the specific question you asked above?

Yes and no. :-) My main question is with part 1 of your argument - you
are saying that Omohundro's paper assumed the AI to have a certain
sort of control system. This is the part which confuses me, since I
didn't see the paper to make *any* mentions of how the AI should be
built. It only assumes that the AI has some sort of goals, and nothing
more.

I'll list all of the drives Omohundro mentions, and my interpretation
of them and why they only require existing goals. Please correct me
where our interpretations differ. (It is true that it will be possible
to reduce the impact of many of these drives by constructing an
architecture which restricts them, and as such they are not
/unavoidable/ ones - however, it seems reasonable to assume that they
will by default emerge in any AI with goals, unless specifically
counteracted. Also, the more that they are restricted, the less
effective the AI will be.)

Drive 1: AIs will want to self-improve
This one seems fairly straightforward: indeed, for humans
self-improvement seems to be an essential part of achieving pretty
much *any* goal you are not immediately capable of achieving. If you
don't know how to do something needed to achieve your goal, you
practice, and when you practice, you're improving yourself. Likewise,
improving yourself will quickly become a subgoal for *any* major
goals.

Drive 2: AIs will want to be rational
This is basically just a special case of drive #1: rational agents
accomplish their goals better than irrational ones, and attempts at
self-improvement can be outright harmful if you're irrational in the
way that you try to improve yourself. If you're trying to modify
yourself to better achieve your goals, then you need to make clear to
yourself what your goals are. The most effective method for this is to
model your goals as a utility function and then modify yourself to
better carry out the goals thus specified.
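
To make "model your goals as a utility function" a little more concrete,
here is a small, purely illustrative sketch (a toy example, not anything
from Omohundro's paper or this thread): an agent that scores candidate
actions by expected utility and picks the best one.

# Toy illustration: goals encoded as a utility function over outcomes,
# action selection done by maximizing expected utility.
UTILITY = {"goal_achieved": 1.0, "partial_progress": 0.4, "nothing": 0.0}

# Hypothetical actions with assumed outcome probabilities.
ACTIONS = {
    "careful_plan":  {"goal_achieved": 0.6, "partial_progress": 0.3, "nothing": 0.1},
    "random_tinker": {"goal_achieved": 0.1, "partial_progress": 0.2, "nothing": 0.7},
}

def expected_utility(outcome_probs):
    return sum(p * UTILITY[outcome] for outcome, p in outcome_probs.items())

best_action = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(best_action)  # "careful_plan" - the action with the higher expected utility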

Drive 3: AIs will want to preserve their utility functions
Since the utility function constructed was a model of the AI's goals,
this drive is equivalent to saying "AIs will want to preserve their
goals" (or at least the goals that are judged as the most important
ones). The reasoning for this should be obvious - if a goal is removed
from the AI's motivational system, the AI won't work to achieve the
goal anymore, which is bad from the point of view of an AI that
currently does want the goal to be achieved.

Drive 4: AIs try to prevent counterfeit utility
This is an extension of drive #2: if there are things in the
environment that hijack existing motivation systems to make the AI do
things not relevant for its goals, then it will attempt to modify its
motivation systems to avoid those vulnerabilities.

Drive 5: AIs will be self-protective
This is a special case of #3.

Drive 6: AIs will want to acquire resources and use them efficiently
More resources will help in achieving most goals; also, even if you
had already achieved all your goals, more resources would help you in
making sure that your success wouldn't be thwarted as easily.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-02 Thread Kaj Sotala
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Kaj Sotala wrote:
   Well, the basic gist was this: you say that AGIs can't be constructed
   with built-in goals, because a newborn AGI doesn't yet have built up
   the concepts needed to represent the goal. Yet humans seem tend to
   have built-in (using the term a bit loosely, as all goals do not
   manifest in everyone) goals, despite the fact that newborn humans
   don't yet have built up the concepts needed to represent those goals.
  
 Oh, complete agreement here.  I am only saying that the idea of a
  built-in goal cannot be made to work in an AGI *if* one decides to
  build that AGI using a goal-stack motivation system, because the
  latter requires that any goals be expressed in terms of the system's
  knowledge.  If we step away from that simplistic type of GS system, and
  instead use some other type of motivation system, then I believe it is
  possible for the system to be motivated in a coherent way, even before
  it has the explicit concepts to talk about its motivations (it can
  pursue the goal seek Momma's attention long before it can explicitly
  represent the concept of [attention], for example).

Alright. But previously, you said that Omohundro's paper, which to me
seemed to be a general analysis of the behavior of *any* minds with
(more or less) explicit goals, looked like it was based on a
'goal-stack' motivation system. (I believe this has also been the
basis of your critique for e.g. some SIAI articles about
friendliness.) If built-in goals *can* be constructed into
motivational system AGIs, then why do you seem to assume that AGIs
with built-in goals are goal-stack ones?

  The way to get around that problem is to notice two things.  One is that
  the sex drives can indeed be there from the very beginning, but in very
  mild form, just waiting to be kicked into high gear later on.  I think
  this accounts for a large chunk of the explanation (there is evidence
  for this:  some children are explictly thinking engaged in sex-related
  activities at the age of three, at least).  The second part of the
  explanation is that, indeed, the human mind *does* have trouble making a
  an easy connection to those later concepts: sexual ideas do tend to get
  attached to the most peculiar behaviors.  Perhaps this is a sigh that
  the hook-up process is not straightforward.

This sounds like the beginnings of the explanation, yes.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-15 Thread Kaj Sotala
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...

On Feb 4, 2008 8:49 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 They would not operate at the proposition level, so whatever
 difficulties they have, they would at least be different.

 Consider [curiosity].  What this actually means is a tendency for the
 system to seek pleasure in new ideas.  Seeking pleasure is only a
 colloquial term for what (in the system) would be a dimension of
 constraint satisfaction (parallel, dynamic, weak-constraint
 satisfaction).  Imagine a system in which there are various
 micro-operators hanging around, which seek to perform certain operations
 on the structures that are currently active (for example, there will be
 several micro-operators whose function is to take a representation such
 as [the cat is sitting on the mat] and try to investigate various WHY
 questions about the representation (Why is this cat sitting on this mat?
   Why do cats in general like to sit on mats?  Why does this cat Fluffy
 always like to sit on mats?  Does Fluffy like to sit on other things?
 Where does the phrase 'the cat sat on the mat' come from?  And so on).
[cut the rest]

Interesting. This sounds like it might be workable, though of course,
the exact associations and such that the AGI develops sound hard to
control. But then, that'd be the case for any real AGI system...

  Humans have lots of desires - call them goals or motivations - that
  manifest in differing degrees in different individuals, like wanting
  to be respected or wanting to have offspring. Still, excluding the
  most basic ones, they're all ones that a newborn child won't
  understand or feel before (s)he gets older. You could argue that they
  can't be inborn goals since the newborn mind doesn't have the concepts
  to represent them and because they manifest variably with different
  people (not everyone wants to have children, and there are probably
  even people who don't care about the respect of others), but still,
  wouldn't this imply that AGIs *can* be created with in-built goals? Or
  if such behavior can only be implemented with a motivational-system
  AI, how does that avoid the problem of some of the wanted final
  motivations being impossible to define in the initial state?

 I must think about this more carefully, because I am not quite sure of
 the question.

 However, note that we (humans) probably do not get many drives that are
 introduced long after childhood, and that the exceptions (sex,
 motherhood desires, teenage rebellion) could well be sudden increases in
 the power of drives that were there from the beginning.

 Ths may not have been your question, so I will put this one on hold.

Well, the basic gist was this: you say that AGIs can't be constructed
with built-in goals, because a newborn AGI hasn't yet built up
the concepts needed to represent the goal. Yet humans do seem to
have built-in (using the term a bit loosely, as not all goals
manifest in everyone) goals, despite the fact that newborn humans
haven't yet built up the concepts needed to represent those goals.

It is true that many of those drives seem to begin in early childhood,
but it seems to me that there are still many goals that aren't
activated until after infancy, such as the drive to have children.


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-03 Thread Kaj Sotala
On 1/30/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Kaj,

 [This is just a preliminary answer:  I am composing a full essay now,
 which will appear in my blog.  This is such a complex debate that it
 needs to be unpacked in a lot more detail than is possible here.  Richard].

Richard,

[Where's your blog? Oh, and this is a very useful discussion, as it's
given me material for a possible essay of my own as well. :-)]

Thanks for the answer. Here's my commentary - I quote and respond to
parts of your message somewhat out of order, since there were some
issues about ethics scattered throughout your mail that I felt were
best answered with a single response.

 The most important reason that I think this type will win out over a
 goal-stack system is that I really think the latter cannot be made to
 work in a form that allows substantial learning.  A goal-stack control
 system relies on a two-step process:  build your stack using goals that
 are represented in some kind of propositonal form, and then (when you
 are ready to pursue a goal) *interpret* the meaning of the proposition
 on the top of the stack so you can start breaking it up into subgoals.

 The problem with this two-step process is that the interpretation of
 each goal is only easy when you are down at the lower levels of the
 stack - Pick up the red block is easy to interpret, but Make humans
 happy is a profoundly abstract statement that has a million different
 interpretations.

 This is one reason why nobody has build an AGI.  To make a completely
 autonomous system that can do such things as learn by engaging in
 exploratory behavior, you have to be able insert goals like Do some
 playing, and there is no clear way to break that statement down into
 unambiguous subgoals.  The result is that if you really did try to build
 an AGI with a goal like that, the actual behavior of the system would be
 wildly unpredictable, and probably not good for the system itself.

 Further:  if the system is to acquire its own knowledge independently
 from a child-like state (something that, for separate reasons, I think
 is going to be another prerequisite for true AGI), then the child system
 cannot possibly have goals built into it that contain statements like
 Engage in an empathic relationship with your parents because it does
 not have the knowledge base built up yet, and cannot understand such a
 propositions!

I agree that it could very well be impossible to define explicit goals
for a child AGI, as it doesn't have enough built-up knowledge to
understand the propositions involved. I'm not entirely sure how the
motivation approach avoids this problem, though - you speak of
setting up an AGI with motivations resembling the ones we'd call
curiosity or empathy. How are these, then, defined? Wouldn't they run
into the same difficulties?

Humans have lots of desires - call them goals or motivations - that
manifest in differing degrees in different individuals, like wanting
to be respected or wanting to have offspring. Still, excluding the
most basic ones, they're all ones that a newborn child won't
understand or feel before (s)he gets older. You could argue that they
can't be inborn goals since the newborn mind doesn't have the concepts
to represent them and because they manifest variably with different
people (not everyone wants to have children, and there are probably
even people who don't care about the respect of others), but still,
wouldn't this imply that AGIs *can* be created with in-built goals? Or
if such behavior can only be implemented with a motivational-system
AI, how does that avoid the problem of some of the wanted final
motivations being impossible to define in the initial state?

 But beyond this technical reason, I also believe that when people start
 to make a serious efort to build AGI systems - i.e. when it is talked
 about in government budget speeches across the world - there will be
 questions about safety, and the safety features of the two types of AGI
 will be examined.  I believe that at that point there will be enormous
 pressure to go with the system that is safer.

This makes the assumption that the public will become aware of AGI
being near well ahead of time, and will take the possibility
seriously. If that assumption holds, then I agree with you. Still, the
general public seems to think that AGI will never be created, or at
least not in hundreds of years - and many of them remember the
overoptimistic promises of AI researchers in the past. If a sufficient
number of scientists thought that AGI was doable, the public might be
convinced - but most scientists want to avoid making radical-sounding
statements, so they won't appear as crackpots to the people reviewing
their research grant applications. Combine this with the fact that the
keys for developing AGI might be scattered across so many disciplines
that very few people have studied them all, or that sudden
breakthroughs may accelerate the research, and I don't think it's a 

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Kaj Sotala
On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Okay, sorry to hit you with incomprehensible technical detail, but maybe
 there is a chance that my garbled version of the real picture will
 strike a chord.

 The message to take home from all of this is that:

 1) There are *huge* differences between the way that a system would
 behave if it had a single GS, or even a group of conflicting GS modules
 (which is the way you interpreted my proposal, above) and the kind of
 MES system I just described:  the difference would come from the type of
 influence exerted, because the vector field is operating on a completely
 different level than the symbl processing.

 2) The effect of the MES is to bias the system, but this bias amounts
 to the following system imperative:  [Make your goals consistent with
 this *massive* set of constraints]  where the massive set of
 constraints is a set of ideas built up throughout the entire
 development of the system.  Rephrasing that in terms of an example:  if
 the system gets an idea that it should take a certain course of action
 because it seems to satisfy an immediate goal, the implications of that
 action will be quickly checked against a vast range o constraints, and
 if there is any hint of an inconsistency with teh value system, this
 will pull the thoughts of the AGI toward that issue, whereupon it will
 start to elaborate the issue in more detail and try to impose an even
 wider net of constraits, finally making a decision based on the broadest
 possible set of considerations.  This takes care of all the dumb
 examples where people suggest that an AGI could start with the goal
 Increase global happiness and then finally decide that this would be
 accomplished by tiling the universe with smiley faces.  Another way to
 say this:  there is no such thing as a single utility function in this
 type of system, nor is there a small set of utility functions  there
 is a massive-dimensional set of utility functions (as many as there are
 concepts or connections in the system), and this diffuse utility
 function is what gives the system its stability.

I got the general gist of that, I think.

You've previously expressed that you don't think a seriously
unfriendly AGI will be likely, apparently because you assume the
motivational-system AGI will be the kind that'll be constructed and
not, for instance, a goal stack-driven one. Now, what makes you so
certain that people will build this kind of AGI? Even if we assume
that this sort of architecture would be the most viable one, a lot
seems to depend on how tight the constraints on its behavior are, and
what kind they are - you say that they are "a set of ideas built up
throughout the entire development of the system". The ethics and
values of humans are the result of a long, long period of evolution,
and our ethical system is pretty much of a mess. What makes it likely
that it really will build up a set of ideas constraints that we humans
would *want* it to build? Could it not just as well pick up ones that
are seriously unfriendly, especially if its designers or the ones
raising it are in the least bit careless?

Even among humans, there exist radical philosophers whose ideas of a
perfect society are repulsive to the vast majority of the populace,
and a countless number of disagreements about ethics. If we humans
have such disagreements - we who all share the same evolutionary
origin biasing us to develop our moral systems in a certain direction
- what makes it plausible to assume that the first AGIs put together
(probably while our understanding of our own workings is still
incomplete) will develop a morality we'll like?



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-29 Thread Kaj Sotala
On 1/29/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Summary of the difference:

 1) I am not even convinced that an AI driven by a GS will ever actually
 become generally intelligent, because of the self-contrdictions built
 into the idea of a goal stack.  I am fairly sure that whenever anyone
 tries to scale one of those things up to a real AGI (something that has
 never been done, not by a long way) the AGI will become so unstable that
 it will be an idiot.

 2) A motivation-system AGI would have a completely different set of
 properties, and among those properties would be extreme stability.  It
 would be possible to ensure that the thing stayed locked on to a goal
 set that was human-empathic, and which would stay that way.

 Omohundros's analysis is all predicated on the Goal Stack approach, so
 my response is that nothing he says has any relevance to the type of AGI
 that I talk about (which, as I say, is probably going to be the only
 type ever created).

Hmm. I'm not sure of the exact definition you're using of the term
"motivational AGI", so let me hazard a guess based on what I remember
reading from you before - do you mean something along the lines of a
system built out of several subsystems, each with partially
conflicting desires, that are constantly competing for control and
exerting various kinds of pull on the behavior of the system as a
whole? And you contrast this with a goal stack AGI, which would only
have one or a couple of such systems?

While this is certainly a major difference on the architectural level,
I'm not entirely convinced how large a difference it makes in
behavioral terms, at least in this context. In order to accomplish
anything, the motivational AGI would still have to formulate goals and
long-term plans. Once it managed to hammer out acceptable goals that
the majority of its subsystems agreed on, it would set about
developing ways to fulfill those goals as effectively as possible,
making it subject to the pressures outlined in Omohundro's paper.

The utility function that it would model for itself would be
considerably more complex than for an AGI with fewer subsystems, as it
would have to be a compromise between the desires of each subsystem
in power, and if the balance of power were upset too radically,
the modeled utility function might even change entirely (like the
way different moods in humans give control to different networks,
altering the current desires and effective utility functions).
However, AGI designers likely wouldn't make the balance of power
between the different subsystems /too/ unstable, as an agent that
constantly changed its mind about what it wanted would just go around
in circles. So it sounds plausible that the utility function it
generated would remain relatively stable, and the motivational AGI's
behavior would be optimized just as Omohundro's analysis suggests.

-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Theoretically yes, but behind my comment was a deeper analysis (which I
 have posted before, I think) according to which it will actually be very
 difficult for a negative-outcome singularity to occur.

 I was really trying to make the point that a statement like The
 singularity WILL end the human race is completely ridiculous.  There is
 no WILL about it.

Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at 
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
, but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Ben's Definition of Intelligence

2008-01-12 Thread Kaj Sotala
On 1/12/08, Mike Tintner [EMAIL PROTECTED] wrote:
 The primary motivation behind the Novamente AI Engine
 is to build a system that can achieve complex goals in
 complex environments, a synopsis of the definition of intelligence
 given in (Goertzel 1993). The emphasis is on the

This is not just Ben's definition - it's one used more generally in
cognitive science, and it was taught, for example, on the Cog101 course I
took.



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Kaj Sotala
On 11/10/07, Bryan Bishop [EMAIL PROTECTED] wrote:
 On Saturday 10 November 2007 09:29, Derek Zahn wrote:
  On such a chart I think we're supposed to be at something like mouse
  level right now -- and in fact we have seen supercomputers beginning
  to take a shot at simulating mouse-brain-like structures.
 Ref?

http://news.bbc.co.uk/2/hi/technology/6600965.stm

Somebody else can probably provide more technical details, as well as
information about where this research is now, half a year later.




-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Kaj Sotala
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote:
 skeptical.   Specifically, after ten years as an AI researcher, my
 inclination has been to see progress as very slow toward an explicitly-coded
 AI, and so to guess that the whole brain emulation approach would succeed
 first if, as it seems, that approach becomes feasible within the next
 century.

  But I want to try to make sure I've heard the best arguments on the other
 side, and my impression was that many people here expect more rapid AI
 progress.   So I am here to ask: where are the best analyses arguing the
 case for rapid (non-emulation) AI progress?   I am less interested in the

You specify non-emulation AI progress. Can you be a bit more specific?
Obviously arguments for why full-brain emulation will happen aren't
the ones you're after, but what about arguments about, say, brain
reverse-engineering techniques becoming better and thereby also
leading to breakthroughs in pure AI if the algorithms employed by
the brain become understood?



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
 On 9/29/07, Kaj Sotala [EMAIL PROTECTED] wrote:
  I'd be curious to see these, and I suspect many others would, too.
  (Even though they're probably from lists I am on, I haven't followed
  them nearly as actively as I could've.)

 http://lists.extropy.org/pipermail/extropy-chat/2006-May/026943.html
 http://www.sl4.org/archive/0608/15606.html
 http://lists.extropy.org/pipermail/extropy-chat/2007-June/036406.html
 http://karolisr.canonizer.com/topic.asp?topic_num=16statement_num=4

Replied to off-list.


-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Religion-free technical content

2007-09-30 Thread Kaj Sotala
On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
 So, let's look at this from a technical point of view. AGI has the potential
 of becoming a very powerful technology and misused or out of control could
 possibly be dangerous. However, at this point we have little idea of how
 these kinds of potential dangers may become manifest. AGI may or may not
 want to take over the world or harm humanity. We may or may not find some
 effective way of limiting its power to do harm. AGI may or may not even
 work. At this point there is no AGI. Give me one concrete technical example
 where AGI is currently a threat to humanity or anything else.

 I do not see how at this time promoting investment in AGI research is
 dangerously irresponsible or fosters an atmosphere that could lead to
 humanity's demise. It us up to the researchers to devise a safe way of
 implementing this technology not the public or the investors. The public and
 the investors DO want to know that researchers are aware of these potential
 dangers and are working on ways to mitigate them, but it serves nobodies
 interest to dwell on dangers we as yet know little about and therefore can't
 control. Besides, it's a stupid way to promote the AGI industry or get
 investment to further responsible research.

It's not dangerously irresponsible to promote investment in AGI
research, in itself. What is irresponsible is to purposefully only
talk about the promising business opportunities, while leaving out
discussion about the potential risks. It's a human tendency to engage
in wishful thinking and ignore the bad sides (just as much as it,
admittedly, is a human tendency to concentrate on the bad sides and
ignore the good). The more that we talk about only the promising
sides, the more likely people are to ignore the bad sides entirely,
since the good sides seem so promising.

The "it is too early to worry about the dangers of AGI" argument has
some merit, but as Yudkowsky notes, there was very little discussion
about the dangers of AGI even back when researchers thought it was
just around the corner. What is needed when AGI finally does start to
emerge is a /mindset/ of caution - a way of thinking that makes safety
issues the first priority, and which is shared by all researchers
working on AGI. A mindset like that does not spontaneously appear - it
takes either decades of careful cultivation, or sudden catastrophes
that shock people into realizing the dangers. Environmental activists
have been talking about the dangers of climate change for decades now,
but they are only now starting to get taken seriously. Soviet
engineers obviously did not have a mindset of caution when they
designed the Chernobyl power plant, nor did its operators when they
started the fateful experiment. Most current AI/AGI researchers do not
have a mindset of caution that makes them consider thrice every detail
of their system architectures - or that would even make them realize
there /are/ dangers. If active discussion is postponed to the moment
when AGI is starting to become a real threat - if advertisement
campaigns for AGI are started without mentioning all of the potential
risks - then it will be too late to foster that mindset.

There is also the issue of our current awareness of risks influencing
the methods we use in order to create AGI. Investors who have only
been told of the good sides are likely to pressure the researchers to
pursue progress at any means available - or if the original
researchers are aware of the risks and refuse to do so, the investors
will hire other researchers who are less aware of them. To quote
Yudkowsky:

The field of AI has techniques, such as neural networks and
evolutionary programming, which have grown in power with the slow
tweaking of decades. But neural networks are opaque - the user has no
idea how the neural net is making its decisions - and cannot easily be
rendered unopaque; the people who invented and polished neural
networks were not thinking about the long-term problems of Friendly
AI. Evolutionary programming (EP) is stochastic, and does not
precisely preserve the optimization target in the generated code; EP
gives you code that does what you ask, most of the time, under the
tested circumstances, but the code may also do something else on the
side. EP is a powerful, still maturing technique that is intrinsically
unsuited to the demands of Friendly AI. Friendly AI, as I have
proposed it, requires repeated cycles of recursive self-improvement
that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and
then polished and improved over time, have basic incompatibilities
with the requirements of Friendly AI as I currently see them. The Y2K
problem - which proved very expensive to fix, though not
global-catastrophic - analogously arose from failing to foresee
tomorrow's design requirements. The nightmare scenario is that we find
ourselves stuck with a catalog of mature, 

Re: [agi] Religion-free technical content

2007-09-29 Thread Kaj Sotala
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
 I've been through the specific arguments at length on lists where
 they're on topic, let me know if you want me to dig up references.

I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from lists I am on, I haven't followed
them nearly as actively as I could've.)

 I will be more than happy to refrain on this list from further mention
 of my views on the matter - as I have done heretofore. I ask only that
 the other side extend similar courtesy.

I haven't brought up the topics here, myself, but I feel the need to
note that there has been talk about massive advertising campaigns
for developing AGI, campaigns which, I quote,

On 9/27/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
However, this
 organization should take a very conservative approach and avoid over
 speculation. The objective is to portray AGI as a difficult but imminently
 doable technology. AGI is a real technology and a real business opportunity.
 All talk of Singularity, life extension, the end of humanity as we know it
 and run amok sci-fi terminators should be portrayed as the pure speculation
 and fantasy that it is. Think what you want to yourself, what investors and
 the public want is a useful and marketable technology. AGI should be
 portrayed as the new internet, circa 1995. Our objective is to create some
 interest and excitement in the general public, and most importantly,
 investors.

From the point of view of those who believe that AGI is a real danger,
any campaigns to promote the development of AGI while specifically
ignoring discussion about the potential implications are dangerously
irresponsible (and, in fact, exactly the thing we're working to stop).
Personally, I am ready to stay entirely quiet about the Singularity on
this list, since it is, indeed, off-topic - but that is only for as
long as I don't run across messages which I feel are helping foster an
atmosphere that could lead to humanity's demise.

(As a side note - if you really are convinced that any talk about the
Singularity is religious nonsense, I don't know if I'd consider it a
courtesy for you not to bring up your views. I'd feel that it would be
more appropriate to debate the matter out, until either you or the
Singularity activists were persuaded of the other side's point of
view. After all, this is something that people are spending large
amounts of money on (my personal donations to SIAI sum to over 1000
USD, and are expected to only go up once I get more money) - if
they're wasting their time and money, they'd deserve to know as soon
as possible so they can be more productive with their attention.)

-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
