Mark Waser wrote:
When I first read Omohundro's paper, my reaction was... Wow!
That's awesome.
Then, when I tried to build on it, I found myself picking it apart
instead. My previous e-mails from today should explain why. He is
trying to extrapolate and predict, from first principles and toy
experiments, to a very large and complex system, when there are just
too many additional variables and too much emergent behavior to do so
successfully. It was a great attempt, and the paper is worth spending
a lot of time with. My biggest criticism is that he should have
recognized that his statements about "all" goal-driven systems don't
apply to the prototypical example (humans), and he should have offered
at least some explanation of why he believed they didn't.
In a way, Omohundro's paper is the archetypal example of Richard's
arguments about many AGIers trying to design complex systems through
decomposition and toy examples, and expecting the results to
self-assemble and scale up to full intelligence.
I disagree entirely with Richard's argument that Omohundro's errors
have *anything* to do with architecture. I am even tempted to argue
that Richard is so enamored with (or ensnared in) his MES vision that
he may well be running afoul of his own concerns about building
complex systems.
Mark,
Well, I don't agree at all... however, I have to add an important
postscript to this discussion.
This thread started when Kaj Sotala asked me a question about
Omohundro's "AI Drives" paper, in the following words:
Richard,
I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
, but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)
THROUGHOUT THIS DISCUSSION I HAVE BEEN CRITIQUING THAT ORIGINAL PAPER!
It now seems that Josh, for one, was looking at a completely different
paper (the one that Kaj says is a longer and more technical version, but
which is in many respects quite different, and which I have only just
now obtained).
When we have these discussions, it is important to be clear that we
are at least all referring to the same document. I don't know whether
you, Mark, have been looking at the first paper or the second, but it
is worth noting that I make no warranties about what he said in the
later one.
Perhaps you could let me know which paper you were referring to, just
so I know where to go from here.
Richard Loosemore