Re: [agi] Language Comprehension: Archival Memory or ...

2008-05-24 Thread Mike Tintner
Mark Waser: Several comments . . . . First, this work is hideously outdated. The author cites his own reading for some chapters he produced in 1992. His claim that the dominant paradigms for studying language comprehension imply that it is an archival process is *at best* hideously outdated --

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Mark Waser
So if Omohundro's claim rests on the fact that being self-improving is part of the AGI's makeup, and that this will cause the AGI to do certain things, develop certain subgoals, etc., I say that he has quietly inserted a *motivation* (or rather assumed it: does he ever say how this is supposed

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread J Storrs Hall, PhD
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote: ...Omohundro's claim... YES! But his argument is that to fulfill *any* motivation, there are generic submotivations (protect myself, accumulate power, don't let my motivation get perverted) that will further the search to fulfill your
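
(Illustration, not part of the original exchange: a minimal Python sketch of the claim above, namely that a goal-driven planner would attach the same generic instrumental subgoals to *any* terminal goal it is given. Everything in it is hypothetical and only meant to make the shape of the argument concrete, not to reproduce Omohundro's own formulation.)

    # Toy sketch: the instrumental layer is identical no matter which
    # terminal goal the system is asked to pursue.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Goal:
        description: str
        subgoals: List["Goal"] = field(default_factory=list)

    # The "generic submotivations" named in the post, independent of the goal.
    GENERIC_DRIVES = [
        "preserve my own operation (protect myself)",
        "acquire resources and capability (accumulate power)",
        "keep my goal representation from being altered (goal integrity)",
    ]

    def plan(terminal_goal: str) -> Goal:
        """Return a goal tree whose instrumental subgoals are the same for any input."""
        root = Goal(terminal_goal)
        root.subgoals = [Goal(d) for d in GENERIC_DRIVES]
        root.subgoals.append(Goal(f"take object-level steps toward: {terminal_goal}"))
        return root

    if __name__ == "__main__":
        for g in (plan("prove the Riemann hypothesis"), plan("make paperclips")):
            print(g.description)
            for s in g.subgoals:
                print("  -", s.description)
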

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
Mark Waser wrote: So if Omohundro's claim rests on the fact that being self-improving is part of the AGI's makeup, and that this will cause the AGI to do certain things, develop certain subgoals, etc., I say that he has quietly inserted a *motivation* (or rather assumed it: does he ever say

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread wannabe
I was sitting in the room when they were talking about it, and I didn't feel like speaking up at the time (why break my streak?), but I felt he was just wrong. It seemed like you could boil the claim down to this: If you are sufficiently advanced, and you have a goal and some ability to

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
[EMAIL PROTECTED] wrote: I was sitting in the room when they were talking about it, and I didn't feel like speaking up at the time (why break my streak?), but I felt he was just wrong. It seemed like you could boil the claim down to this: If you are sufficiently advanced, and you have a goal

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
J Storrs Hall, PhD wrote: On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote: ...Omohundro's claim... YES! But his argument is that to fulfill *any* motivation, there are generic submotivations (protect myself, accumulate power, don't let my motivation get perverted) that will further