What is your approach on ensuring AGI safety/Friendliness on this project?
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
Richard Loosemore said:
If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to think through, these
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the literature, some parts of completely standard and
accepted economics and game
Hi Peter, Ben, and Panu
What is your approach on ensuring AGI safety/Friendliness on this project?
I would be quick to assert that if there's money in AGI,
and money is made from such a project, it is bound to be of a friendly
nature. That assertion of course makes for a
My own view is that our state of knowledge about AGI is far too weak
for us to make detailed
plans about how to **ensure** AGI safety, at this point
What we can do is conduct experiments designed to gather data about
AGI goal systems and
AGI dynamics, which can lead us to more robust AGI
2008/5/25 Nathan Cravens [EMAIL PROTECTED]:
yet AGI has
potentially dramatic concrete consequences in one direction or another.
Money will only be made from this in the short run, and if not, for those
with a capacity to muster life, misery will prevail, unless you are the last
one or ones
Your argument about the difference between a GS and an MES system is a
strawman argument. Omohundro never made the argument, nor did he touch on
it as far as I can tell. I did not find his paper very interesting either,
but you are the one who seems to be pulling conclusions out of thin air.
Read the appendix, p37ff. He's not making arguments -- he's explaining,
with a
few pointers into the literature, some parts of completely standard and
accepted economics and game theory. It's all very basic stuff.
The problem with accepted economics and game theory is that in a proper
Um. I *really* need to point out that statements like "transhumanists ...
have this sort of gut emotional belief that self improvement is all
good" are really nasty, unwarranted bigotry.
- Original Message -
From: [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, May 24,
Rationality and irrationality are interesting subjects . . . .
Many people who endlessly tout rationality use it as an exact synonym for
logical correctness, and then argue not only that irrational therefore means
logically incorrect (and therefore wrong), but that anything that can't be
proved is
On Sun, May 25, 2008 at 10:42 AM, Mark Waser [EMAIL PROTECTED] wrote:
My own view is that our state of knowledge about AGI is far too weak
for us to make detailed
plans about how to **ensure** AGI safety, at this point
I disagree strenuously. If our arguments will apply to *all*
- Original Message
From: J Storrs Hall, PhD [EMAIL PROTECTED]
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the
I disagree strenuously. If our arguments will apply to *all* intelligences
(/intelligent architectures) -- like Omohundro attempts to do -- instead of
just certain AGI subsets, then I believe that our lack of knowledge about
particular subsets is irrelevant.
yes, but I don't think these
When I first read Omohundro's paper, my first reaction was . . . Wow! That's
awesome.
Then, when I tried to build on it, I found myself picking it apart instead. My
previous e-mails from today should explain why. He's trying to extrapolate and
predict from first principles and toy
Please, if you're going to argue something --
please take the time to argue it and don't pretend that you can't magically
solve it all with your guesses (I mean, intuition).
time for mailing list posts is scarce for me these days, so sometimes I post
a conclusion w/out the supporting arguments
On Sun, May 25, 2008 at 10:26 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Certainly there are plenty of folks with equal software engineering experience
to you, advocating the Linux/C++ route (taken in the current OpenCog version)
rather than the .Net/C# route that I believe you advocate...
No,
Certainly there are plenty of folks with equal software engineering experience
to you, advocating the Linux/C++ route (taken in the current OpenCog version)
rather than the .Net/C# route that I believe you advocate...
Cool. An *argument from authority* without even having an authority. Show
Intuition is not science. Intuition is just hardened opinion.
Mark, without intuition the development of science would grind to a halt.
Logic doesn't come from a god who made science in your image; those things
come from humans with faulty, sometimes elegant, perceptions.
Ben and Peter. Do you
without intuition the development of science would grind to a halt.
Nathan, please elaborate more. Your second sentence about logic is obviously
true but I can't see where it has anything to do with your halting statement
unless you are totally misinterpreting me.
- Original Message
About C++ versus C# ...
This blog post and some of the more intelligent comments (e.g. David
Brownell's) summarize many of the standard arguments back and forth.
http://blogs.msdn.com/ericgu/archive/2005/01/26/360879.aspx
It is a good blog post. Now, how many of Dave Brownell's features are
Jim Bromer wrote:
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
Richard Loosemore said:
If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to
Continuing on from a mistaken send . . .
I'm aware .Net has evolved a lot in recent years, but so has the C++
world, especially the Boost libraries which are extremely powerful.
Boost is not particularly powerful. Using Boost involves a *lot* of work
because the interfaces are not
J Storrs Hall, PhD wrote:
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the literature, some parts of completely standard and
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
Read the appendix, p37ff. He's not making arguments -- he's explaining,
with a
few pointers into the literature, some parts of completely standard and
accepted economics and game theory. It's all very basic stuff.
The problem with
Mark Waser wrote:
When I first read Omohundro's paper, my first reaction was . . . Wow!
That's awesome.
Then, when I tried to build on it, I found myself picking it apart
instead. My previous e-mails from today should explain why. He's
trying to extrapolate and predict from first
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
In the context of Steve's paper, however, rational simply means an agent who
does not have a preference circularity.
On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote:
Rationality and irrationality are interesting subjects . . . .
Many people who endlessly tout rationality use it as an
Mark,
For OpenCog we had to make a definite choice and we made one. Sorry
you don't agree w/ it.
I agree that you had to make a choice and made the one that seemed right for
various reasons. The above comment is rude and snarky, however --
particularly since it seems to come *because* you
Mark. Intuition is a form of vague perception, a kind of logic in the
making. Like a grain of sand with pearl potential.
AGI has a lot of power to cure society of the scarcity situation.
So it's up to us to roll out the beneficial apps before others roll
out the nasty ones.
This is not a
Some not-quite-random observations that hopefully inject some
moderation:
- There are a number of good arguments for using C over C++, not the
least of which is that it is dead simple to implement very efficient C
bindings into much friendlier languages that hide the fact that it is
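A minimal sketch of that point (file and function names hypothetical): keep the exported surface plain C, and most languages can call it directly through their standard foreign-function interface.

```c
/* plain_c_api.c -- a flat C ABI is what makes bindings "dead simple":
 * no name mangling, no exceptions, no templates crossing the boundary. */
#include <stddef.h>

/* A plain function over plain types; once compiled into a shared
 * library this is callable from Python via ctypes, Lisp via CFFI,
 * C# via P/Invoke, etc., with no wrapper generator needed. */
double dot(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}
```

From Python, for example (library name hypothetical), `ctypes.CDLL("./libagi.so").dot` with `restype = c_double` is all the glue required; the caller never sees that the implementation is C.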
Hi Gang!
The first Phoenix Lander pix from Mars:
http://fawkes4.lpl.arizona.edu/images.php?gID=315&cID=7
Cheers,
Brad
J Storrs Hall, PhD wrote:
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
Josh, are you sure you're old enough to be using a computer without
Hi again...
As I write this I'm watching the post-landing NASA press conference live on
NASA TV (http://www.nasa.gov/multimedia/nasatv/index.html). One of the NASA
people was talking about what a difficult navigation problem they'd
successfully overcome. His analogy was, "It was like golfing