Matt,
Below is a sampling of my peer-reviewed conference presentations on my
background ethical theory ...
This should elevate me above the common crackpot.
Talks
- Presentation of a paper at ISSS 2000 (International Society for Systems
2008/8/24 Mike Tintner [EMAIL PROTECTED]:
Just a very rough first thought. An essential requirement of an AGI is
surely that it must be able to play - so how would you design a play machine
- a machine that can play around as a child does?
Play may be about characterising the state space.
On Tue, Aug 26, 2008 at 8:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation? I suppose that you
won't. You'll
Bob M: Play may be about characterising the state space. As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not. Armed with this knowledge, when
planning an action in the future you can make a reasonable estimate of the
possible range of
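A minimal sketch of this predictability-mapping idea in Python (the toy
world, the decaying error table, and every name below are my own
illustrative assumptions, not anyone's actual design):

import random
from collections import defaultdict

# Toy world: states 0..9. Moves are deterministic except in states 7..9,
# which jump randomly - the "unpredictable" region of the state space.
def world_step(state, action):
    if state >= 7:
        return random.randint(0, 9)
    return max(0, min(9, state + action))

model = {}                         # learned forward model: (state, action) -> next state
error = defaultdict(lambda: 1.0)   # running unpredictability estimate per state

def play_step(state):
    # "Play": prefer the action whose predicted destination is least characterised.
    action = max((-1, 1), key=lambda a: error[model.get((state, a), -1)])
    nxt = world_step(state, action)
    miss = 0.0 if model.get((state, action)) == nxt else 1.0
    error[state] = 0.9 * error[state] + 0.1 * miss   # update the predictability map
    model[(state, action)] = nxt
    return nxt

state = 5
for _ in range(2000):
    state = play_step(state)
# error[s] ends up low for s in 0..6 and stays high for 7..9: by playing,
# the agent has learned which regions of its state space it can plan in.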
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of
Terren: I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation?
Again, I think you're missing out the most important aspect of having a body,
(is there a good
Note that in this view play has nothing to do with having a body. An AGI
concerned solely with mathematical theorem proving would also be able to
play...
On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
About play... I would argue that it emerges in any sufficiently
generally-intelligent system
that is faced with goals that are difficult for it ... as a consequence of
other general cognitive
processes...
If an intelligent system has a goal G which is time-consuming or difficult
to achieve ...
Examples of the kind of similarity I'm thinking of:
-- The analogy between chess or go and military strategy
-- The analogy between roughhousing and actual fighting
In logical terms, these are intensional rather than extensional similarities.
ben
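To make that last distinction concrete, here is a toy contrast in Python
(the property and instance sets are illustrative assumptions, not Ben's):
extensional similarity compares the instances of two concepts, while
intensional similarity compares their defining properties. Chess and
military strategy share essentially no instances but many properties:

# Extensional similarity compares instances; intensional compares properties.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

chess_properties = {"adversarial", "planning", "territory", "sacrifice", "perfect information"}
war_properties   = {"adversarial", "planning", "territory", "sacrifice", "logistics"}
chess_instances  = {"Kasparov vs. Deep Blue, 1997"}
war_instances    = {"Battle of Cannae, 216 BC"}

print(jaccard(chess_properties, war_properties))  # ~0.67: strong intensional similarity
print(jaccard(chess_instances, war_instances))    # 0.0: no extensional overlap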
On Tue, Aug 26, 2008 at 9:38 AM, Mike Tintner [EMAIL
On Tue, Aug 26, 2008 at 7:53 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Or take any number of ethical dilemmas, in which it's OK to steal food if it's
to feed your kids, or killing ten people to save twenty, etc. How do you define
Friendliness in these circumstances? It depends on the context.
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
The be-all and end-all here, though, I presume, is similarity. Is it a
logical concept? Finding similarities - rough likenesses as opposed to
rational, precise, logico-mathematical commonalities - is actually, I would
argue,
Thanks very much for the info. I found those articles very interesting.
Actually, though, this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a theory behind the actual R itself. These approaches
(correct me
That's a fair criticism. I did explain what I mean by embodiment in a previous
post, and what I mean by autonomy in the article of mine I referenced. But I do
recognize that in both cases there is still some ambiguity, so I will withdraw
the question until I can formulate it in more concise
Are you saying Friendliness is not context-dependent? I guess I'm struggling
to understand what a conceptual dynamics would mean that isn't dependent on
context. The AGI has to act, and at the end of the day, its actions are our
only true measure of its Friendliness. So I'm not sure what it
On Mon, Aug 25, 2008 at 11:09 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
What is the point of building general intelligence if all it does is
take the future from us and waste it on
Valentina: In other words I'm looking for a way to mathematically define how the
AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or mathematics ever been
logically or mathematically (axiomatically) derived from any old one? e.g.
topology,
I don't think it's necessary to be self-aware to do self-modifications.
Self-awareness implies that the entity has a model of the world that separates
self from other, but this kind of distinction is not necessary to do
self-modifications. It could act on itself without the awareness that it
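A minimal sketch of that claim (a toy hill-climber of my own, assuming
nothing about any real AGI design): the program below overwrites its own
parameters from a performance signal alone, with no self/other model
anywhere in it:

import random

params = [random.uniform(-1, 1) for _ in range(4)]   # the system's own modifiable state

def performance(p):
    # Any external feedback signal will do; toy objective: drive params to zero.
    return -sum(x * x for x in p)

for _ in range(10_000):
    candidate = [x + random.gauss(0, 0.1) for x in params]
    if performance(candidate) > performance(params):
        params = candidate    # self-modification, driven by feedback alone
# Nothing above represents a "self" as distinct from a "world", yet the
# system has systematically modified itself.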
On Tue, Aug 26, 2008 at 8:05 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Are you saying Friendliness is not context-dependent? I guess I'm
struggling to understand what a conceptual dynamics would mean
that isn't dependent on context. The AGI has to act, and at the end of the
day, its actions
If Friendliness is an algorithm, it ought to be a simple matter to express what
the goal of the algorithm is. How would you define Friendliness, Vlad?
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
It is expressed in individual decisions, but it isn't
these decisions
On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
If Friendliness is an algorithm, it ought to be a simple matter to express
what the goal of the algorithm is. How would you define Friendliness, Vlad?
An algorithm doesn't need to be simple. The actual Friendly AI that
I didn't say the algorithm needs to be simple, I said the goal of the algorithm
ought to be simple. What are you trying to compute?
Your answer is: what is the right thing to do?
The obvious next question is: what does "the right thing" mean? The only way
that the answer to that is not
On Tue, Aug 26, 2008 at 9:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
I didn't say the algorithm needs to be simple, I said the goal of
the algorithm ought to be simple. What are you trying to compute?
Your answer is: what is the right thing to do?
The obvious next question is: what does
Mike,
The answer here is yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.
--Abram Demski
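For instance (standard textbook constructions, not from the post above),
the naturals and ordered pairs - and with them arithmetic, relations, and
topology - can be read directly as set theory:

\[ 0 := \varnothing, \qquad n+1 := n \cup \{n\} \quad \text{(von Neumann naturals)} \]
\[ (a,b) := \{\{a\},\{a,b\}\} \quad \text{(Kuratowski pairs)} \]
\[ \text{a topology on } X \text{ is a family } \mathcal{T} \subseteq \mathcal{P}(X) \text{ closed under arbitrary unions and finite intersections.} \]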
Abram,
Thanks for the reply. This is presumably after the fact - can set theory
predict new branches? Which branch of maths was set theory derived from? I
suspect that's rather like trying to derive any numeral system from a
previous one. Or like trying to derive any programming language from
Abram,
I suspect what it comes down to - I'm tossing this out off-the-cuff - is
that each new branch of maths involves new rules, new operations on numbers
and figures, and new ways of relating the numbers and figures to real
objects and sometimes new signs, period. And they aren't
Vlad, Terren and all,
by reading your interesting discussion, this saying popped into my mind...
admittedly it has little to do with AGI but you might get the point anyhow:
An old lady used to walk down a street everyday, and on a tree by that
street a bird sang beautifully, the sound made her
Mike,
That may be the case, but I do not think it is relevant to Valentina's
point. How can we mathematically define how an AGI might
mathematically define its own goals? Well, that question assumes 3
things:
- An AGI defines its own goals
- In doing so, it phrases them in mathematical language
On Tue, Aug 26, 2008 at 3:10 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Abram,
I suspect what it comes down to - I'm tossing this out off-the-cuff - is
that each new branch of maths involves new rules, new operations on numbers
and figures, and new ways of relating the numbers and figures to
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 6:49 AM
Subject: Re: [agi] How Would You Design a Play Machine?
Examples of the kind of similarity I'm thinking of:
-- The analogy between chess or go and military strategy
-- The
On Tue, Aug 26, 2008 at 11:13 PM, Valentina Poletti [EMAIL PROTECTED] wrote:
Vlad, Terren and all,
by reading your interesting discussion, this saying popped into my mind...
admittedly it has little to do with AGI but you might get the point anyhow:
An old lady used to walk down a street
It doesn't matter what I do with the question. It only matters what an AGI does
with it.
I'm challenging you to demonstrate how Friendliness could possibly be specified
in the formal manner that is required to *guarantee* that an AI whose goals
derive from that specification would actually
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
But what is safe, and how do we improve safety? This is a complex goal
for a complex environment, and naturally any solution to this goal is
going to be very intelligent. Arbitrary intelligence is not safe
(fatal, really), but what is
Mike,
So you feel that my disagreement with your proposal is sad? That's quite
an ego you have there, my friend. You asked for input and you got it. The
fact that you didn't like my input doesn't make me or the effort I spent
composing it sad. I haven't read all of the replies to your
Charles,
By now you've probably read my reply to Tintner's reply. I think that
probably says it all (and then some!).
What you say holds IFF you are planning on building an airplane that flies
just like a bird. In other words, if you are planning on building a
human-like AGI (that could,