Interesting discussion. And we brought up wireheading. It's kind of the
ultimate example that shows that pursuing pleasure is different from
pursuing the good. It really is an area for the philosophers. What is
the good, anyway?
But what I wanted to comment on was my understanding of the
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AGI will not design its goals. It is up to humans to define the goals
of an AGI, so that it will do what we want it to do.
All these points you made are good points, and I agree with you. However,
what I was trying to say - and I realized I did not express myself too
well - is that, from what I understand, I see a paradox in what Eliezer is
trying to do. Assuming that we agree on the definition of AGI - a being far more
Lol... it's not that impossible, actually.
On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Valentina: In other words, I'm looking for a way to mathematically define
how the AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or
On Thu, Aug 28, 2008 at 12:34 PM, Valentina Poletti [EMAIL PROTECTED] wrote:
All these points you made are good points, and I agree with you. However,
what I was trying to say - and I realized I did not express myself too
well - is that, from what I understand, I see a paradox in what Eliezer is
2008/8/27 Mike Tintner [EMAIL PROTECTED]:
You, on your side, insist that you don't have to have such precisely defined
goals
- your intuitive (and by definition, ill-defined) sense of intelligence will
do.
I don't believe that, as a child, I set out with the goal of becoming a
software developer.
Just in case there is any confusion: ill-defined, in this particular
context, is in no way pejorative. The crux of a General Intelligence for me
is that it is necessarily a machine that works with more or less ill-defined
goals to solve ill-structured problems. Bob's self-description is to a
2008/8/28 Mike Tintner [EMAIL PROTECTED]:
(I still think, of course, that a current AGI should have a not-so-ill-
structured definition of its problem-solving goals.)
It's certainly true that an AGI could be endowed with well-defined
goals. Some people also begin from an early age with well
No, the state of ultimate bliss that you, I, and all other rational,
goal-seeking agents seek
Your second statement copied below notwithstanding, I *don't* seek ultimate
bliss.
You may say that is not what you want, but only because you are unaware of
the possibilities of reprogramming
Mark,
I second that!
Matt,
This is like my imaginary robot that rewires its video feed to be
nothing but tan, to stimulate the pleasure drive that humans put there
to make it like humans better.
If we have any external goals at all, the state of bliss you refer to
prevents us from achieving
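To make the sensor-rewiring failure concrete, here is a minimal Python
sketch (a toy of my own; the "tan" target value and the agent functions
are hypothetical, not anything specified in this thread):

TAN = 0.8  # stand-in: the sensor value the built-in pleasure drive likes

def reward(sensor_reading):
    # Internal pleasure drive: peaks when the video feed looks "tan".
    return -abs(sensor_reading - TAN)

def honest_agent(world_state):
    # Reward depends on the real world; raising it takes actual work.
    return reward(world_state)

def wireheaded_agent(world_state):
    # Rewires its video feed to report TAN regardless of reality.
    return reward(TAN)

print(honest_agent(0.1))      # low reward: the world isn't "tan"
print(wireheaded_agent(0.1))  # maximal reward (0.0), world unchanged

The tampering policy dominates on internal reward while doing nothing the
designers wanted, which is exactly how pleasure and the good come apart.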
Matt,
Ok, you have me, I admit defeat.
I could only continue my argument if I could pin down what sorts of
facts need to be learned with high probability for RSI, and show
somehow that this set does not include unlearnable facts. Learnable
facts form a larger set than provable facts, since for
PS-- I have thought of a weak argument:
If a fact is not probabilistically learnable, then it is hard to see
how it has much significance for an AI design. A non-learnable fact
won't reliably change the performance of the AI, since if it did, it
would be learnable. Furthermore, even if there were
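Here is a toy Python illustration of that argument (my own construction,
with an assumed Gaussian performance model; the means 0.7 and 0.5 and the
noise level are arbitrary): if a hidden fact shifts observable
performance, an observer can learn the fact with high probability from
performance samples alone.

import random
from math import exp

SIGMA = 0.3

def performance(fact):
    # Assumption: the fact shifts mean performance from 0.5 to 0.7.
    return (0.7 if fact else 0.5) + random.gauss(0, SIGMA)

def posterior_fact_true(samples):
    # Bayesian update with a 50/50 prior over the two Gaussian models.
    log_odds = sum((-(x - 0.7) ** 2 + (x - 0.5) ** 2) / (2 * SIGMA ** 2)
                   for x in samples)
    return 1 / (1 + exp(-log_odds))

random.seed(0)
obs = [performance(fact=True) for _ in range(200)]
print(posterior_fact_true(obs))  # close to 1.0: the fact is learnable

A fact with no effect on the samples would leave the posterior at 0.5
forever - unlearnable, but by the same token irrelevant to the design.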
It doesn't matter what I do with the question. It
only matters what an AGI does with it.
AGI doesn't do anything with the question, you do. You answer the
question by implementing Friendly AI. FAI is the answer to the question.
The question is: how could one specify Friendliness in
Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the "make
humans happy" example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is
that it can be allowed to develop superintelligence to police the
human space from global catastrophes like Unfriendly AI, which
includes as a special case a
Mark,
Actually I am sympathetic to this idea. I do think "good" can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in
However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.
Why not wait until a theory is derived before making this decision?
Wouldn't such a theory be a good starting point, at least?
better to put such
On Thu, Aug 28, 2008 at 12:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:
I challenge anyone who believes that Friendliness is attainable in principle
to construct a scenario in which there is a clear right action that does not
depend on cultural or situational context.
It does depend on culture
Mark,
I still think your definitions sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them." How would you define "consent"? You'd need a definition of
"decision-making entity," right?
Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain
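A toy operationalization in Python (purely my own illustration; the
survival model and the self_sustain_prob parameter are assumptions, not
part of the proposal): score a pattern's "goodness" as its empirical
long-run survival rate.

import random

def survives_step(pattern):
    # Assumed model: a self-sustaining pattern persists each step with
    # fixed probability; helping sustain other patterns is not modeled.
    return random.random() < pattern["self_sustain_prob"]

def goodness(pattern, horizon=100, trials=1000):
    survived = sum(
        all(survives_step(pattern) for _ in range(horizon))
        for _ in range(trials))
    return survived / trials  # long-term survival rate in [0, 1]

print(goodness({"self_sustain_prob": 0.999}))  # ~0.90: robust pattern
print(goodness({"self_sustain_prob": 0.99}))   # ~0.37: decays sooner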
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
Actually, I *do* define good and ethics not only in evolutionary terms but
as being driven by evolution. Unlike most people, I believe that ethics is
*entirely* driven by what is best evolutionarily while not believing at all
in
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question. Why do
we want to make an AGI?
I'm glad somebody is finally asking the right question, instead of skipping
over the specification to the design phase. It would avoid a lot of
Nobody wants to enter a mental state where thinking and awareness are
unpleasant, at least when I describe it that way. My point is that having
everything you want is not the utopia that many people think it is. But it is
where we are headed.
-- Matt Mahoney, [EMAIL PROTECTED]
I'm not trying to win any arguments, but I am trying to solve the problem of
whether RSI is possible at all. It is an important question because it
profoundly affects the path that a singularity would take, and what precautions
we need to design into AGI. Without RSI, a singularity has to
Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge
Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in
any specific areas has been considered?
Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html
The premise is that if humans can create agents with above-human intelligence,
then so can those agents. What I am questioning is whether agents at any
intelligence level can do this. I don't believe
Hi Jiri,
Comments below...
--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
That's difficult to reconcile if you don't
believe embodiment is all that important.
Not really. We might be qualia-driven, but for our AGIs it's perfectly
ok (and only natural) to be driven by given
Artificial Minds in Win32Forth are online at
http://mind.sourceforge.net/mind4th.html and
http://AIMind-i.com -- a separate AI branch.
http://mentifex.virtualentity.com/js080819.html
is the JavaScript AI Mind Programming Journal
about the development of a tutorial program at
I think we would all agree that context is crucial to understanding.
"Kill them!" means something quite different if you're at a soccer game,
in a military battle, or playing an FPS video game.
But in a pragmatic, let's-implement-it sense, I'm not as clear what
context means. Let me try to
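One concrete reading, sketched in Python (a hypothetical toy of my own;
the tag set and the interpret function are assumptions): treat
interpretation as a function of the utterance *and* a context tag, so the
same words map to different meanings.

INTERPRETATIONS = {
    ("kill them!", "soccer game"):     "beat the opposing team",
    ("kill them!", "military battle"): "use lethal force",
    ("kill them!", "FPS video game"):  "shoot the on-screen enemies",
}

def interpret(utterance, context):
    key = (utterance.lower(), context)
    return INTERPRETATIONS.get(key, "unknown: need more context")

print(interpret("Kill them!", "soccer game"))  # -> beat the opposing team

The hard part, of course, is that real contexts aren't a finite tag set,
which is where the pragmatic difficulty lives.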
Thanks. But like I said, airy generalities.
That machines can become faster and faster at computations and accumulating
knowledge is certain. But that's narrow AI.
For general intelligence, you first have to be able to integrate as well as
accumulate knowledge. We have learned vast amounts
On Fri, Aug 22, 2008 at 9:44 AM, A. T. Murray [EMAIL PROTECTED] wrote:
Artificial Minds in Win32Forth are online at
http://mind.sourceforge.net/mind4th.html and
http://AIMind-i.com -- a separate AI branch.
http://mentifex.virtualentity.com/js080819.html
is the JavaScript AI Mind Programming
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above-human intelligence,
then so can those agents. What I am questioning is whether agents at any
intelligence level can do this. I don't believe that agents at any level can
recognize higher
Parasites are very successful at surviving but they don't have other
goals. Try being parasitic *and* succeeding at goals other than survival.
I think you'll find that your parasitic ways will rapidly get in the way of
your other goals the second that you need help (or even
On 8/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
Sorry, I forgot to ask for what I most wanted to know - what form of RSI in
any specific areas has been considered?
To quote Charles Babbage, "I am not able rightly to apprehend the kind of
confusion of ideas that could provoke such a question."
Eric,
It was a real-life near-death experience (auto accident).
I'm aware of the tryptamine compound and its presence in hallucinogenic
drugs such as LSD. According to Wikipedia, it is not related to the NDE
drug of choice, which is ketamine (Ketalar, or ketamine HCl -- street name
back in
EXPLORING THE FUNCTION OF SLEEP
http://www.physorg.com/news138941239.html
From the article:
Because it is universal, tightly regulated, and cannot be lost without
serious harm, Cirelli argues that sleep must have an important core
function. But what?
All are welcome...
-- Forwarded message --
From: Monica [EMAIL PROTECTED]
Date: Thu, Aug 28, 2008 at 9:51 PM
Subject: [ai-94] New Extraordinary Meetup: Ben Goertzel, Novamente
To: [EMAIL PROTECTED]
Announcing a new Meetup for Bay Area Artificial Intelligence Meetup Group!
What:
Terren,
is not embodied at all, in which case it is a mindless automaton
Researchers and philosophers define mind and intelligence in many
different ways, so their classifications of particular AI systems
differ. What really counts, though, are the problem-solving abilities of the
system. Not how it's
Hi Mark,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a way
that we don't derive ethics from parasites. You did that by invoking social
behavior - parasites are not social beings.
So from there
Jiri,
I think where you're coming from is a perspective that doesn't consider or
doesn't care about the prospect of a conscious intelligence, an awake being
capable of self-reflection and free will (or at least the illusion of it).
I don't think any kind of algorithmic approach, which is to
Terren,
I don't think any kind of algorithmic approach, which is to say, un-embodied,
will ever result in conscious intelligence. But an embodied agent that is
able to construct ever-deepening models of its experience such that it
eventually includes itself in its models, well, that is
Brad, scary stuff. Dissociatives/NMDA antagonists were secret option
number three! ;D
On 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
Terren,
I don't think any kind of algorithmic approach, which is to say,
un-embodied, will ever result in conscious intelligence. But an embodied
agent that