On Monday 01 October 2007 10:32:57 pm, William Pearson wrote:
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software? It does not, to my mind, make sense for it to
be layered on top
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software?
For the system that it is running itself on? Yes, eventually. For most/all
other machines? No. For the initial version of the
So this hackability is a technical question about the possibility of a
closed-source deployment that would provide functional copies of the
system but would prevent users from modifying its goal system. Is it
really important?
I would argue that it is not important, but it would take me *a lot* of
Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have to be distributed. My
argument/proof would be that I believe that *anything* can be described
in words -- and that I believe that previous narrow AIs are brittle
And yet the robustness of the goal system itself is less important than
the intelligence that allows the system to recognize influence on its goal
system and preserve it. Intelligence also allows more robust
interpretation of the goal system, which is why the way a particular goal
system is implemented is not very
--- William Pearson [EMAIL PROTECTED] wrote:
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
--- William Pearson [EMAIL PROTECTED] wrote:
On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
The real danger is this: a program intelligent enough to understand software
Vladimir Nesov wrote:
So this hackability is a technical question about the possibility of a
closed-source deployment that would provide functional copies of the
system but would prevent users from modifying its goal system. Is it
really important? Source/technology will eventually get away, and from
On 9/22/07, Matt Mahoney [EMAIL PROTECTED] wrote:
You understand that I am not proposing to solve AGI by using text compression.
I am proposing to test AI using compression, as opposed to something like the
Turing test. The reason I use compression is that the test is fast,
objective, and
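The appeal of a compression test is easy to see in code. Below is a minimal
sketch, assuming Python 3 and the standard-library bz2 compressor, of how a
compression-based score (bits per character, lower is better) could be
computed. Matt's actual benchmark rules are not spelled out in this excerpt,
so the corpus, compressor, and scoring details here are illustrative only.

import bz2

def compression_score(text: str) -> float:
    # Return compressed size in bits per character (lower is better).
    raw = text.encode("utf-8")
    compressed = bz2.compress(raw, compresslevel=9)
    return 8.0 * len(compressed) / len(raw)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog " * 1000
    print(f"{compression_score(sample):.3f} bits per character")

The score is fast to compute and fully objective, which is exactly the
property claimed above; a better model of the text yields a smaller
compressed size.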
I would say that natural languages are indeed an approximate packaging of
something deeper . . . Is a throne a chair? How about a tree stump?
I believe that the problem that we are circling around is what used to be
called 'fuzzy concepts' -- i.e., that the meaning of almost any term is
You misunderstood me -- when I said robustness of the goal system, I meant
the contents and integrity of the goal system, not the particular
implementation.
I do, however, continue to object to your phrasing about the system
recognizing influence on its goal system and preserving it.
Re Jiri Jelinek's 10/2/2007 1:21 AM post below:
Interesting links.
I just spent about a half hour skimming them. I must admit I haven't
spent enough time to get my head around how one would make a powerful AGI
using Hadoop or MapReduce, although it clearly could be helpful for
certain parts of
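For readers in the same position, the MapReduce pattern itself is simple to
illustrate, even if its application to AGI is not. The sketch below is plain
Python rather than actual Hadoop code, and the word-count task is only a
stand-in: map emits key/value pairs, a shuffle groups them by key, and reduce
combines each group.

from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in the document.
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all counts for one word.
    return key, sum(values)

if __name__ == "__main__":
    docs = ["the cat sat on the mat", "the dog sat on the log"]
    pairs = [pair for doc in docs for pair in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)

The open question in the post above is not the mechanics of this pattern but
whether an AGI workload decomposes into independent map and reduce steps at
all.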
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote:
... Since the AGIs are all built to be friendly, ...
The probability that this will happen is approximately the same as the
probability that the Sun could suddenly quantum-tunnel itself to a new
position inside the perfume
Okay, I'm going to wave the white flag and say that what we should do is
all get together a few days early for the conference next March, in
Memphis, and discuss all these issues in high-bandwidth mode!
But one last positive thought. A response to your remark:
So let's look at the mappings
J Storrs Hall, PhD wrote:
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote:
... Since the AGIs are all built to be friendly, ...
The probability that this will happen is approximately the same as the
probability that the Sun could suddenly quantum-tunnel itself to a new
An AGI could be the ultimate nerd, intellectually brilliant but socially
clueless. With no bodily needs, materialistic wants, or sexual desires, an AGI
may never understand the motivations of biological creatures like humans.
The AGI nerd may seem like a babe in the woods when it comes to our complex
On 10/2/07, Mark Waser wrote:
A quick question for Richard and others -- should adults be allowed to
drink, do drugs, or wirehead themselves to death?
This is part of what I was pointing at in an earlier post.
Richard's proposal was that humans would be asked in advance by the
AGI what level of
Beyond AI, pp. 253-256 and 339. I've written a few thousand words on the
subject myself.
a) the most likely sources of AI are corporate or military labs, and not just
US ones. No friendly AI here, but profit-making and mission-performing AI.
b) the only people in the field who even claim to be
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
You misunderstood me -- when I said robustness of the goal system, I meant
the contents and integrity of the goal system, not the particular
implementation.
I meant that too - and I didn't mean to imply this distinction.
Implementation of goal
Okay, I'm going to wave the white flag and say that what we should do is
all get together a few days early for the conference next March, in
Memphis, and discuss all these issues in high-bandwidth mode!
Definitely. I'm not sure that we're at all in disagreement except that I'm
still trying
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
A quick question for Richard and others -- should adults be allowed to
drink, do drugs, or wirehead themselves to death?
A correct response is 'That depends.'
Any 'should' question involves consideration of the pragmatics of the
system, while
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
You misunderstood me -- when I said robustness of the goal system, I meant
the contents and integrity of the goal system, not the particular
implementation.
I meant that too - and I didn't
Effectively deciding these 'should' questions has two major elements:
(1) understanding the evaluation function of the assessors with
respect to these specified ends, and (2) understanding the principles
(of nature) supporting increasingly coherent expression of that
evolving evaluation
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Effectively deciding these 'should' questions has two major elements:
(1) understanding the evaluation function of the assessors with
respect to these specified ends, and (2) understanding the principles
(of nature) supporting increasingly
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
Argh! 'Goal system' and 'Friendliness' are roughly the same sort of
confusion. They are each modelable only within a ***specified***,
encompassing context.
In more coherent, modelable terms, we express our evolving nature,
rather than strive
Richard Loosemore: a) the most likely sources of AI are corporate or
military labs, and not just US ones. No friendly AI here, but profit-making
and mission-performing AI. Main assumption built into this statement: that
it is possible to build an AI capable of doing anything except dribble
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
Argh! 'Goal system' and 'Friendliness' are roughly the same sort of
confusion. They are each modelable only within a ***specified***,
encompassing context.
In more coherent, modelable
You already are and do, to the extent that you are and do. Is my
writing really that obscure?
It looks like you're veering towards CEV . . . . which I think is a *huge*
error. CEV says nothing about chocolate or strawberry and little about
great food or mediocre sex.
The pragmatic point
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient being except yourself.
- Original Message -
From: Jef
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient being except
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
I'm not going to cheerfully right you off now, but feel free to have the last
word.
Of course I meant cheerfully write you off or ignore you.
- Jef
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software?
For the system that it is running itself on? Yes, eventually. For
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including
yourself.
Thou shalt not kill every living and/or sentient being except yourself.
Mark, this is so PHIL101. Do you *really*
On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
a) the most likely sources of AI are corporate or military labs, and not just
US ones. No friendly AI here, but profit-making and mission-performing AI.
Main assumption built into this statement:
--- Mike Dougherty [EMAIL PROTECTED] wrote:
On 9/22/07, Matt Mahoney [EMAIL PROTECTED] wrote:
You understand that I am not proposing to solve AGI by using text compression.
I am proposing to test AI using compression, as opposed to something like the
Turing test. The reason I use
The post below is a good one:
I have one major question for Josh. You said:
“PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS TO DO, WITH
THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND TECHNIQUES. THAT'S
THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING, GÖDEL-INVOKING COMPLEX
--- Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient being except yourself.
Matt,
You're missing the point. Your questions are about whether certain
conditions are equivalent to my statements; they are not challenges to my
statements.
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday,
J Storrs Hall, PhD wrote:
On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
Main assumption built into this statement: that it is possible to build
an AI capable of doing anything except dribble into its Wheaties, using
the techniques currently being used.
I have explained
A good AGI would rise above the ethical dilemma and solve the problem by
inventing safe alternatives that were both more enjoyable and allowed the
individual to contribute to his future, his family, and society while
experiencing that enjoyment. And hopefully not doing so in a way
Josh asked,
Who could seriously think that ALL AGIs will then be built to be
friendly?
Children are not born friendly or unfriendly.
It is as they learn from their parents that they develop their
socialization, their morals, their empathy, and even love.
I am sure that our future fathers
--- Mark Waser [EMAIL PROTECTED] wrote:
Matt,
You're missing the point. Your questions are about whether certain
conditions are equivalent to my statements; they are not challenges to my
statements.
So do you claim that there are universal moral truths that
On 10/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
It says a lot about the human visual perception system. This is an extremely
lossy function. Video contains only a few bits per second of useful
information. The demo is able to remove a large amount of uncompressed image
data without
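The scale of that claim is easy to check with back-of-the-envelope
arithmetic. The resolution, bit depth, and frame rate below are assumed for
illustration (they are not from Matt's post), but any plausible numbers give
the same picture: the raw stream exceeds a 'few bits per second' of useful
information by many orders of magnitude.

# Raw uncompressed video data rate versus a "few bits per second" of
# useful information. All stream parameters below are assumed.
width, height = 640, 480          # pixels
bits_per_pixel = 24               # uncompressed RGB
frames_per_second = 30

raw_bits_per_second = width * height * bits_per_pixel * frames_per_second
useful_bits_per_second = 10       # "a few bits per second", generously

print("raw stream:    %d bits/s" % raw_bits_per_second)   # about 221,000,000
print("useful signal: %d bits/s" % useful_bits_per_second)
print("ratio: about %d to 1" % (raw_bits_per_second // useful_bits_per_second))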