From: Derek Zahn [EMAIL PROTECTED]
Date: Sun, 30 Sep 2007 08:57:53 -0600
...
One thing that could improve safety is to reject the notion that AGI
projects should be focused on, or even capable of, recursive self
improvement in the sense of a system reprogramming its own core implementation.
...
Tim Freeman writes: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the knowledge
and skills to become a competent programmer, a task that takes a human many
years of directed training and practical experience. 2) It is
Tim Freeman: No value is added by introducing considerations about
self-reference into conversations about the consequences of AI engineering.
Junior geeks do find it impressive, though.
The point of that conversation was to illustrate that if people are worried
about Seed AI exploding, then
On Wed, Oct 10, 2007 at 01:22:26PM -0400, Richard Loosemore wrote:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
Yes, and I'm sorry I triggered the thread.
I particularly object to libertarianism being shoved down our
Derek, Tim,
There is no oversight: self-improvement doesn't necessarily refer to the
actual instance of the self that is to be improved, but to the AGI's design.
The next thing must be better than the previous one for runaway progress to
happen, and one way of doing that is for the next thing to be a refinement
of
Let's take Novamente as an example. ... It cannot improve itself
until the following things happen:
1) It acquires the knowledge and skills to become a competent
programmer, a task that takes a human many years of directed
training and practical experience.
Wrong. This was hashed to
From: Derek Zahn [EMAIL PROTECTED]
You seem to think that self-reference buys you nothing at all since it
is a simple matter for the first AGI projects to reinvent their own
equivalent from scratch, but I'm not sure that's true.
The "from scratch" part is a straw-man argument. The AGI project will
Linas Vepstas: Let's take Novamente as an example. ... It cannot improve
itself until the following things happen: 1) It acquires the knowledge
and skills to become a competent programmer, a task that takes a human
many years of directed training and practical experience. Wrong. This
Tim Freeman wrote:
My point is that if one is worried about a self-improving Seed AI
exploding, one should also be worried about any AI that competently
writes software exploding.
There *is* a slight gap between competently writing software and
competently writing minds. Large by human
On 10/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
some of us are much impressed by it. Anyone with even a surface grasp
of the basic concept on a math level will realize that there's no
difference between self-modifying and writing an outside copy of
yourself, but *either one*
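A minimal sketch of the equivalence being pointed at, in toy Python (all
names hypothetical, not from any poster's code): whether a program patches
its own source file in place or writes the patched source out as a separate
successor, the program that runs next is the same text; only the file
bookkeeping differs.

    # Toy illustration: "self-modification" vs. writing an outside copy.
    # improve() is a hypothetical stand-in for whatever makes the code
    # better; the safety question lives there, not in where the result lands.
    def improve(source: str) -> str:
        return source.replace("VERSION = 1", "VERSION = 2")

    def modify_in_place(path: str) -> None:
        with open(path) as f:
            source = f.read()
        with open(path, "w") as f:       # overwrite our own source
            f.write(improve(source))

    def write_outside_copy(path: str, successor: str) -> None:
        with open(path) as f:
            source = f.read()
        with open(successor, "w") as f:  # emit an external successor
            f.write(improve(source))

Either routine yields a successor whose text is improve(source); the
difference is administrative, which is the sense in which self-reference
adds nothing to the risk analysis.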
On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
Agreed. There are many other forums where political ideology can be debated.
The only solution to this problem I ever see suggested is to
intentionally create a Really Big Fish called the government that can
effortlessly eat every fish in the pond but promises not to -- to
prevent the creation of Really Big Fish. That is quite the Faustian
bargain to protect
On Oct 10, 2007, at 2:26 AM, Robert Wensman wrote:
Yes, of course, the Really Big Fish that is democracy.
No, you got this quite wrong. The Really Big Fish is the institution
responsible for governance (usually the government); democracy is
merely a fuzzy category of rule set used in
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
I particularly object to libertarianism being shoved down our throats,
not so much because I disagree with it, but because so much of the
singularity / extropian / futurist
(off topic, but there is something relevant for AGI)
My fears about economic libertarianism could be illustrated with a fish
pond analogy. If there is a small pond with a large number of small fish of
some predatory species, after some time they will cannibalize and
eat each other
With googling, I found that older people have lower IQs:
http://www.sciencedaily.com/releases/2006/05/060504082306.htm
IMO, the brain is like a muscle, not an organ. IQ is said to be highly
genetic, and the heritability increases with age. Perhaps older
people do not have much mental
On Oct 9, 2007, at 4:27 AM, Robert Wensman wrote:
This is of course just an illustration and by no means a proof that
the same thing would occur in a laissez-faire/libertarianism
economy. Libertarians commonly put blame for monopolies on
government involvement, and I guess some would
J. Andrew Rogers wrote:
Generally though, the point that you fail to see is that an AGI can
just as easily subvert *any* power structure, whether the environment
is a libertarian free market or an autocratic communist state. The
problem has nothing to do with the governance of the
Economic libertarianism would be nice if it were to occur. However,
in practice companies and governments put in place all sorts of
anti-competitive structures to lock people into certain modes of
economic activity. I think economic activity in general is heavily
influenced by cognitive biases
Derek Zahn wrote:
Richard Loosemore:
a...
I often see it assumed that the step between "the first AGI is built"
(which I interpret as a functioning model showing some degree of
generally-intelligent behavior) and god-like powers dominating the
planet is a short one. Is that really likely?
Nobody
a wrote:
Linas Vepstas wrote:
...
The issue is that there's no safety net protecting against avalanches
of unbounded size. The other issue is that it's not grains of sand, it's
people. My bank-account and my brains can insulate me from small
shocks.
I'd like to have protection against the
On Sat, Oct 06, 2007 at 10:05:28AM -0400, a wrote:
I am skeptical that economies follow the self-organized criticality
behavior.
Oh. Well, I thought this was a basic principle, commonly cited in
microeconomics textbooks: when there's a demand, producers rush
to fill the demand. When there's
Edward W. Porter wrote:
It's also because the average person loses 10 points in IQ between the mid
twenties and mid forties, and another ten points between the mid forties and
sixty. (Help! I'm 59.)
But this is just the average. Some people hang on to their marbles as
they age better than
Linas Vepstas wrote:
My objection to economic libertarianism is its lack of discussion of
self-organized criticality. A common example of self-organized
criticality is a sand-pile at the critical point. Adding one grain
of sand can trigger an avalanche, which can be small, or maybe
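The sand-pile model alluded to here is the Bak-Tang-Wiesenfeld sandpile,
and it is easy to simulate. The sketch below (plain Python, grid size and
run length arbitrary, not anything posted in the thread) drops grains one
at a time and records avalanche sizes, which span several orders of
magnitude with no characteristic scale.

    import random

    # Bak-Tang-Wiesenfeld sandpile on an N x N grid: a site holding 4+
    # grains topples, sending one grain to each neighbour; grains that
    # fall off the edge are lost. Avalanche size = number of topplings.
    N = 20
    grid = [[0] * N for _ in range(N)]

    def drop_grain() -> int:
        x, y = random.randrange(N), random.randrange(N)
        grid[y][x] += 1
        topplings = 0
        unstable = [(x, y)]
        while unstable:
            x, y = unstable.pop()
            if grid[y][x] < 4:
                continue
            grid[y][x] -= 4
            topplings += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < N and 0 <= ny < N:
                    grid[ny][nx] += 1
                    unstable.append((nx, ny))
        return topplings

    sizes = [drop_grain() for _ in range(50000)]
    print("largest avalanche:", max(sizes))
    print("avalanches over 100 topplings:", sum(s > 100 for s in sizes))

The same grain-dropping rule produces both one-grain fizzles and
grid-spanning collapses, which is the point of the analogy: nothing about
the triggering event predicts the size of the avalanche.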
On 10/6/07, a wrote:
I am skeptical that economies follow the self-organized criticality
behavior.
There aren't any examples. Some would cite the Great Depression, but it
was caused by the malinvestment created by Central Banks, e.g. the
Federal Reserve System. See the Austrian Business Cycle
Simple. Unambiguous. Impossible to implement. (And not my proposal)
On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote:
Then I guess we are in perfect agreement. Friendliness is what the
average person would do.
Which one of the words in "And not my proposal" wasn't clear? As far as I
am concerned, friendliness is emphatically not what the average person
--- Mark Waser [EMAIL PROTECTED] wrote:
Then state the base principles or the algorithm that generates them,
without
ambiguity and without
On Tue, Oct 02, 2007 at 03:03:35PM -0400, Mark Waser wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
I like to think of myself as peaceful and non-violent,
On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
the
IQ bell curve is not going down. The evidence is it's going up.
So that's why us old folks 'r gettin' stupider as compared to
them's young'uns.
--linas
OK, this is very off-topic. Sorry.
On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote:
Linas Vepstas wrote:
For the most part, modern western culture espouses and hews to
physical non-violence. However, modern right-leaning pure capitalism
advocates not only social Darwinism, but also the
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Linas Vepstas wrote:
Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
This seems very vague. I would suggest that if there is no clear
I mean that ethics or friendliness is an algorithmically complex function,
like our legal system. It can't be simplified.
The determination of whether a given action is friendly or ethical or not is
certainly complicated but the base principles are actually pretty darn simple.
However, I
To me this seems like elevating the status of nanotech to magic.
Even given RSI and the ability of the AGI to manufacture new computing
resources it doesn't seem clear to me how this would enable it to
prevent other AGIs from also reaching RSI capability. Presumably
"lesser techniques" means black
On 10/4/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
We can't build a system that learns as fast as a 1-year-old just now. Which is
our most likely next step: (a) A system that does learn like a 1-year-old, or
(b) a system that can learn 1000 times as fast as an adult?
Following Moore's
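A back-of-the-envelope reading of the Moore's-law point (the 18-month
doubling period and the assumption that learning speed scales directly
with hardware are editorial assumptions, not the poster's):

    import math

    # A 1000x speedup is log2(1000) ~ 10 hardware doublings; at one
    # doubling per 18 months, that puts roughly 15 years between
    # option (a) and option (b).
    doublings = math.log2(1000)
    years = doublings * 1.5
    print(f"{doublings:.1f} doublings, ~{years:.0f} years")

On those assumptions, (a) and (b) are not alternatives so much as points
about fifteen years apart on the same curve.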
--- Mark Waser [EMAIL PROTECTED] wrote:
I'll repeat again since you don't seem to be paying attention to what I'm
saying -- The determination of whether a given action is friendly or
ethical or not is certainly complicated but the base principles are actually
pretty darn simple.
Then state
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
I find your argument quotidian and lacking in depth. ...
What you said above was pure, unalloyed bullshit: an exquisite cocktail
of complete technical ignorance, patronizing insults and breathtaking
So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?
What a stupid question. *Anything* can be ambiguous if you're clueless.
The moral truth of "Thou shalt not destroy the universe" is universal. The
ability to interpret it and apply it
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
The below is a good post:
Thank you!
I have one major question for Josh. You said
“PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
TO DO, WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
[...]
RSI (Recursive Self Improvement)
[...]
I didn't know exactly what the term covers.
So could you, or someone, please define exactly what its meaning is?
Is it any system capable of learning how to improve its current
I criticised your original remarks because they demonstrated a complete
lack of understanding of what complex systems actually are. You said
things about complex systems that were, quite frankly, ridiculous:
Turing-machine equivalence, for example, has nothing to do with this.
In your more
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote:
In fact, if the average AI post-grad of today had such hardware to play
with, things would really start jumping. Within ten years the equivalents
of such machines could easily be sold for somewhere between $10k and
$100k, and lots of
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
From what you say below it would appear human-level AGI would not require
recursive self improvement,
[...]
A lot of people on this list seem to hang a lot on RSI, as they use it,
implying it is necessary for human-level AGI.
Edward Porter: I don't know about you, but I think there are actually a lot
of very bright people in the interrelated fields of AGI, AI, Cognitive
Science, and Brain science. There are also a lot of very good ideas
floating around.
Yes there are bright
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote:
One of them once told me that in Japan it was common for high school boys
who were interested in math, science, or business to go to abacus classes
after school or on weekends. He said once they fully mastered using
physical
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that
nobody is trying to build a dangerous, unfriendly AGI.
Yes, OK, granted, self-preservation is a reasonable character trait.
After that
point, the
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote:
Yes there are bright people in AGI. But there's no one remotely close to the
level, say, of von Neumann or Turing, right? And do you really think a
revolution such as AGI is going to come about without that kind of
revolutionary,
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
Second, You mention the 3-body problem in Newtonian mechanics. Although
I did not use it as such in the paper, this is my poster child of a
partial complex system. I often cite the case of planetary system
dynamics as an
On 10/4/07, Edward W. Porter [EMAIL PROTECTED] wrote:
The biggest brick wall is the small-hardware mindset that has been
absolutely necessary for decades to get anything actually accomplished on
the hardware of the day. But it has caused people to close their minds to
the vast power of brain
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote:
I think your notion that post-grads with powerful machines would only
operate in the space of ideas that don't work is unfair.
Yeah, I can agree - it was harsh. My real intention was to suggest
that NOT having a bigger computer is not
So this hackability is a technical question about the possibility of a
closed-source deployment that would provide functional copies of the
system but would prevent users from modifying its goal system. Is it
really important? Source/technology
Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have to be distributed. My
argument/proof would be that I believe that *anything* can be described
in words -- and that I believe that previous narrow AI are brittle
But the robustness of the goal system itself is less important than the
intelligence that allows the system to recognize influence on its goal
system and preserve it. Intelligence also allows a more robust
interpretation of the goal system. Which is why the way a particular goal
system is implemented is not very
Mark Waser wrote:
And apart from the global differences between the two types of AGI, it
would be no good to try to guarantee friendliness using the kind
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote:
... Since the AGIs are all built to be friendly, ...
The probability that this will happen is approximately the same as the
probability that the Sun could suddenly quantum-tunnel itself to a new
position inside the perfume
Okay, I'm going to wave the white flag and say that what we should do is
all get together a few days early for the conference next March, in
Memphis, and discuss all these issues in high-bandwidth mode!
But one last positive thought. A response to your remark:
So let's look at the mappings
On 10/2/07, Mark Waser wrote:
A quick question for Richard and others -- Should adults be allowed to
drink, do drugs, wirehead themselves to death?
This is part of what I was pointing at in an earlier post.
Richard's proposal was that humans would be asked in advance by the
AGI what level of
Beyond AI pp 253-256, 339. I've written a few thousand words on the subject,
myself.
a) the most likely sources of AI are corporate or military labs, and not just
US ones. No friendly AI here, but profit-making and mission-performing AI.
b) the only people in the field who even claim to be
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
A quick question for Richard and others -- Should adults be allowed to
drink, do drugs, wirehead themselves to death?
A correct response is "That depends."
Any "should" question involves consideration of the pragmatics of the
system, while
So how do I get to be an assessor and decide?
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Effective deciding of these "should" questions has two major elements:
(1) understanding of the evaluation-function of the assessors with
respect to these specified ends, and (2) understanding of principles
(of nature) supporting increasingly
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
Argh! Goal system and Friendliness are roughly the same sort of
confusion. They are each modelable only within a ***specified***,
encompassing context.
In more coherent, modelable terms, we express our evolving nature,
rather than strive
Richard Loosemore: a) the most likely sources of AI are corporate or
military labs, and not just US ones. No friendly AI here, but profit-making
and mission-performing AI. Main assumption built into this statement: that
it is possible to build an AI capable of doing anything except dribble
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Wrong. There *are* some absolute answers. There are some obvious
universal "Thou shalt nots"
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
I'm not going to cheerfully right you off now, but feel free to have the last
word.
Of course I meant "cheerfully write you off" or "ignore you".
- Jef
(and don't give me ridiculous crap like "Well, if the universe was only
inflicting suffering on everyone . . . .")