Re: [agi] More Info Please

2008-05-25 Thread Panu Horsmalahti
What is your approach on ensuring AGI safety/Friendliness on this project?

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
- Original Message From: Richard Loosemore [EMAIL PROTECTED] Richard Loosemore said: If you look at his paper carefully, you will see that at every step of the way he introduces assumptions as if they were obvious facts ... and in all the cases I have bothered to think through, these

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game

Re: [agi] More Info Please

2008-05-25 Thread Nathan Cravens
Hi Peter, Ben, and Panu. What is your approach on ensuring AGI safety/Friendliness on this project? I would immediately gather reason to assert that if there's money in AGI, and money is made from such a project, it is bound to be one of a friendly nature. That assertion of course makes for a

Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
My own view is that our state of knowledge about AGI is far too weak for us to make detailed plans about how to **ensure** AGI safety, at this point What we can do is conduct experiments designed to gather data about AGI goal systems and AGI dynamics, which can lead us to more robust AGI

Re: [agi] More Info Please

2008-05-25 Thread Bob Mottram
2008/5/25 Nathan Cravens [EMAIL PROTECTED]: yet AGI has potentially dramatic concrete consequences in one direction or another. Money will only be made from this in the short run, and if not, for those with a capacity to muster life, misery will prevail, unless you are the last one or ones

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
Your argument about the difference between a GS and an MES system is a strawman argument. Omohundro never made the argument, nor did he touch on it as far as I can tell. I did not find his paper very interesting either, but you are the one who seems to be pulling conclusions out of thin

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game theory. It's all very basic stuff. The problem with accepted economics and game theory is that in a proper

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
Um. I *really* need to point out that statements like "transhumanists. They have this sort of gut emotional belief that self improvement is all good" are really nasty, unwarranted bigotry. - Original Message - From: [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, May 24,

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
Rationality and irrationality are interesting subjects . . . . Many people who endlessly tout rationality use it as an exact synonym for logical correctness and then argue not only that irrational then means logically incorrect and therefore wrong but that anything that can't be proved is

Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
On Sun, May 25, 2008 at 10:42 AM, Mark Waser [EMAIL PROTECTED] wrote: My own view is that our state of knowledge about AGI is far too weak for us to make detailed plans about how to **ensure** AGI safety, at this point I disagree strenuously. If our arguments will apply to *all*

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
- Original Message From: J Storrs Hall, PhD [EMAIL PROTECTED] The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the

Re: [agi] More Info Please

2008-05-25 Thread Mark Waser
I disagree strenuously. If our arguments will apply to *all* intelligences (/intelligent architectures) -- like Omohundro attempts to do -- instead of just certain AGI subsets, then I believe that our lack of knowledge about particular subsets is irrelevant. yes, but I don't think these

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
When I first read Omohundro's paper, my first reaction was . . . Wow! That's awesome. Then, when I tried to build on it, I found myself picking it apart instead. My previous e-mails from today should explain why. He's trying to extrapolate and predict from first principles and toy

Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
Please, if you're going to argue something -- please take the time to argue it and don't pretend that you can't magically solve it all with your guesses (I mean, intuition). Time for mailing list posts is scarce for me these days, so sometimes I post a conclusion w/out the supporting arguments

Re: [agi] More Info Please

2008-05-25 Thread Lukasz Stafiniak
On Sun, May 25, 2008 at 10:26 PM, Ben Goertzel [EMAIL PROTECTED] wrote: Certainly there are plenty of folks with equal software engineering experience to you, advocating the Linux/C++ route (taken in the current OpenCog version) rather than the .Net/C# route that I believe you advocate... No,

Re: [agi] More Info Please

2008-05-25 Thread Mark Waser
Certainly there are plenty of folks with equal software engineering experience to you, advocating the Linux/C++ route (taken in the current OpenCog version) rather than the .Net/C# route that I believe you advocate... Cool. An *argument from authority* without even having an authority. Show

Re: [agi] More Info Please

2008-05-25 Thread Nathan Cravens
Intuition is not science. Intuition is just hardened opinion. Mark, without intuition the development of science would grind to a halt. Logic doesn't come from god who made science in your image; those things come from humans with faulty, sometimes elegant, perceptions. Ben and Peter: Do you

Re: [agi] More Info Please

2008-05-25 Thread Mark Waser
without intuition the development of science would grind to a halt. Nathan, please elaborate more. Your second sentence about logic is obviously true but I can't see where it has anything to do with your halting statement unless you are totally misinterpreting me. - Original Message

Re: [agi] More Info Please

2008-05-25 Thread Mark Waser
About C++ versus C# ... This blog post and some of the more intelligent comments (e.g. David Brownell's) summarize many of the standard arguments back and forth. http://blogs.msdn.com/ericgu/archive/2005/01/26/360879.aspx It is a good blog post. Now, how many of Dave Brownell's features are

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
Jim Bromer wrote: - Original Message From: Richard Loosemore [EMAIL PROTECTED] Richard Loosemore said: If you look at his paper carefully, you will see that at every step of the way he introduces assumptions as if they were obvious facts ... and in all the cases I have bothered to

Re: [agi] More Info Please

2008-05-25 Thread Mark Waser
Continuing on from a mistaken send . . . I'm aware .Net has evolved a lot in recent years, but so has the C++ world, especially the Boost libraries which are extremely powerful. Boost is not particularly powerful. Using Boost involves a *lot* of work because the interfaces are not
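For readers outside the C++ world, here is a minimal sketch of the kind of Boost interfaces being debated above (reference-counted pointers and string conversion, as available in the Boost releases current in 2008). The Atom struct and the specific calls are illustrative assumptions, not code from OpenCog or from Mark's message:

// Minimal sketch of common Boost idioms (shared_ptr, lexical_cast).
// The Atom struct is a made-up placeholder, not an OpenCog type.
#include <boost/shared_ptr.hpp>
#include <boost/lexical_cast.hpp>
#include <iostream>
#include <string>

struct Atom {
    std::string name;
    explicit Atom(const std::string& n) : name(n) {}
};

int main() {
    // Reference-counted ownership without manual delete.
    boost::shared_ptr<Atom> a(new Atom("ConceptNode"));
    boost::shared_ptr<Atom> b = a;              // both share ownership

    // String-to-number conversion; throws boost::bad_lexical_cast on failure.
    int truth = boost::lexical_cast<int>("42");

    std::cout << b->name << " " << truth << " (use_count="
              << a.use_count() << ")" << std::endl;
    return 0;
}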

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
J Storrs Hall, PhD wrote: The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote: Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game theory. It's all very basic stuff. The problem with

Wrong Bloody Document, Folks! [WAS Re: [agi] Goal Driven Systems and AI Dangers]

2008-05-25 Thread Richard Loosemore
Mark Waser wrote: When I first read Omohundro's paper, my first reaction was . . . Wow! That's awesome. Then, when I tried to build on it, I found myself picking it apart instead. My previous e-mails from today should explain why. He's trying to extrapolate and predict from first

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote: This is NOT the paper that is under discussion. WRONG. This is the paper I'm discussing, and is therefore the paper under discussion.

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
In the context of Steve's paper, however, rational simply means an agent who does not have a preference circularity. On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote: Rationality and irrationality are interesting subjects . . . . Many people who endlessly tout rationality use it as an
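To make that definition concrete: an agent whose pairwise preferences form a cycle (A over B, B over C, C over A) has a preference circularity and can be exploited as a classic money pump, which is the standard reason such an agent is called irrational. Below is a minimal sketch, in C++ for consistency with the rest of the thread, of checking a pairwise preference relation for such a cycle; the data layout and the example preferences are illustrative assumptions, not anything taken from Omohundro's paper:

// Sketch: detect a preference circularity (a cycle in the "prefers" relation).
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

typedef std::map<std::string, std::vector<std::string> > PrefGraph;

// Depth-first search; a node found on the current recursion stack means a cycle.
bool hasCycleFrom(const PrefGraph& prefers, const std::string& node,
                  std::set<std::string>& visiting, std::set<std::string>& done) {
    if (done.count(node)) return false;
    if (visiting.count(node)) return true;   // back edge: circularity found
    visiting.insert(node);
    PrefGraph::const_iterator it = prefers.find(node);
    if (it != prefers.end()) {
        for (size_t i = 0; i < it->second.size(); ++i)
            if (hasCycleFrom(prefers, it->second[i], visiting, done))
                return true;
    }
    visiting.erase(node);
    done.insert(node);
    return false;
}

bool hasPreferenceCircularity(const PrefGraph& prefers) {
    std::set<std::string> visiting, done;
    for (PrefGraph::const_iterator it = prefers.begin(); it != prefers.end(); ++it)
        if (hasCycleFrom(prefers, it->first, visiting, done))
            return true;
    return false;
}

int main() {
    PrefGraph prefers;
    prefers["A"].push_back("B");   // A preferred to B
    prefers["B"].push_back("C");   // B preferred to C
    prefers["C"].push_back("A");   // C preferred to A -- circular
    std::cout << (hasPreferenceCircularity(prefers) ? "irrational (circular)"
                                                    : "rational (acyclic)")
              << std::endl;
    return 0;
}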

Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
Mark, For OpenCog we had to make a definite choice and we made one. Sorry you don't agree w/ it. I agree that you had to make a choice and made the one that seemed right for various reasons. The above comment is rude and snarky however -- particularly since it seems to come *because* you

Re: [agi] More Info Please

2008-05-25 Thread Nathan Cravens
Mark, Intuition is a form of vague perception, a kind of logic in the making. Like a grain of sand with pearl potential. AGI has a lot of power to cure society of its scarcity situation. So it's up to us to roll out the beneficial apps before others roll out the nasty ones. This is not a

Re: [agi] More Info Please

2008-05-25 Thread J. Andrew Rogers
Some not-quite-random observations that hopefully inject some moderation: - There are a number of good arguments for using C over C++, not the least of which is that it is dead simple to implement very efficient C bindings into much friendlier languages that hide the fact that it is
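To illustrate the C-bindings point: a flat extern "C" surface over a C++ engine is what most higher-level languages' foreign-function interfaces can consume directly. A minimal sketch follows; every name in it (Reasoner, reasoner_create, and so on) is hypothetical, invented for illustration rather than taken from any real codebase:

// Sketch of exposing a C++ engine through a plain C interface so that
// scripting languages can bind to it via their FFI. All names are hypothetical.
#include <string>

namespace engine {
    // Hypothetical C++ implementation detail hidden behind the C surface.
    class Reasoner {
    public:
        double evaluate(const std::string& expr) { return (double) expr.size(); }
    };
}

extern "C" {
    // Opaque handle plus free functions: the whole surface a binding needs.
    typedef void* reasoner_t;

    reasoner_t reasoner_create()              { return new engine::Reasoner(); }
    void       reasoner_destroy(reasoner_t r) { delete static_cast<engine::Reasoner*>(r); }
    double     reasoner_evaluate(reasoner_t r, const char* expr) {
        return static_cast<engine::Reasoner*>(r)->evaluate(expr);
    }
}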

[agi] Phoenix has Landed

2008-05-25 Thread Brad Paulsen
Hi Gang! The first Phoenix Lander pix from Mars: http://fawkes4.lpl.arizona.edu/images.php?gID=315cID=7 Cheers, Brad

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
J Storrs Hall, PhD wrote: On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote: This is NOT the paper that is under discussion. WRONG. This is the paper I'm discussing, and is therefore the paper under discussion. Josh, are you sure you're old enough to be using a computer without

[agi] More Phoenix Info...

2008-05-25 Thread Brad Paulsen
Hi again... As I write this I'm watching the post-landing NASA press conference live on NASA TV (http://www.nasa.gov/multimedia/nasatv/index.html). One of the NASA people was talking about what a difficult navigation problem they'd successfully overcome. His analogy was, It was like golfing