On Monday 01 October 2007 10:32:57 pm, William Pearson wrote:
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software? It does not, to my mind, make sense for it to be
layered on top ... that it is useless.
- Original Message -
From: William Pearson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 01, 2007 10:32 PM
Subject: Re: AI and botnets Re: [agi] What is the complexity of RSI?
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
--- William Pearson [EMAIL PROTECTED] wrote:
On 01/10/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
--- William Pearson [EMAIL PROTECTED] wrote:
On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
The real danger is this: a program intelligent enough to understand
software would be intelligent enough to modify itself.
On 02/10/2007, Mark Waser [EMAIL PROTECTED] wrote:
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software?
For the system that it is itself running on? Yes, eventually. For
On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote:
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
And detrimental mutations greatly outnumber beneficial ones.
It depends. Eukaryotes mutate more intelligently than prokaryotes. Their
mutations (by mixing large snips of DNA from
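The contrast is easy to make concrete. A toy simulation (invented here for
illustration; none of its names or parameters come from the thread) evolves a
bit-string genome toward an all-beneficial target, with and without crossover
of large contiguous snips:

    import random

    # Toy model: a genome is a list of 0/1 loci, 1 = beneficial allele.
    GENOME_LEN = 40
    POP = 200

    def fitness(g):
        return sum(g)

    def point_mutate(g, rate=0.01):
        # Flip each locus independently with small probability.
        return [b ^ (random.random() < rate) for b in g]

    def recombine(a, b):
        # Swap a large contiguous snip between two parents.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def evolve(use_recombination, max_gens=2000):
        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP)]
        for gen in range(max_gens):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == GENOME_LEN:
                return gen
            parents = pop[:POP // 2]        # truncation selection
            pop = []
            while len(pop) < POP:
                a, b = random.sample(parents, 2)
                child = recombine(a, b) if use_recombination else a[:]
                pop.append(point_mutate(child))
        return max_gens

    random.seed(0)
    print("point mutation only:", evolve(False), "generations")
    print("with recombination: ", evolve(True), "generations")

Recombination lets beneficial alleles discovered in separate lineages be
combined in one step instead of waiting for each lineage to rediscover them,
which is the sense in which eukaryotic-style mutation is "more intelligent".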
--- William Pearson [EMAIL PROTECTED] wrote:
On 30/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
The real danger is this: a program intelligent enough to understand
software
would be intelligent enough to modify itself.
Well it would always have the potential. But you are assuming it
On Monday 01 October 2007 11:41:35 am, Matt Mahoney wrote:
So you are arguing that RSI is a hard problem? That is my question.
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence. But could it
come sooner?
On Mon, Oct 01, 2007 at 12:48:00PM -0700, Matt Mahoney wrote:
The problem is that an intelligent RSI worm might be millions of
times faster than a human once it starts replicating.
Yes, but the proposed means of finding it, i.e. via evolution
and random mutation, is hopelessly time consuming.
So the real question is what is the minimal amount of
intelligence needed for a system to self-engineer
improvements to itself?
Some folks might argue that humans are just below that
threshold.
Humans are only below the threshold because our internal systems are so
convoluted and difficult to
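The "hopelessly time consuming" point can be put in rough numbers. A
back-of-envelope sketch (assumptions invented for illustration: a 100-byte
target program, byte-level random generation, 10^12 trials per second):

    # How long would blind random generation take to hit even a tiny
    # working program? Figures are illustrative, not from the thread.
    search_space = 256 ** 100            # candidate 100-byte programs
    trials_per_second = 1e12             # a very generous botnet
    seconds_per_year = 3.15e7

    years = search_space / (trials_per_second * seconds_per_year)
    print(f"{years:.1e} years of blind search")   # ~2.1e221 years

Mutating and selecting already-working code prunes this space enormously,
which is why the question reduces to how much structure -- how much
intelligence -- is needed to make the search tractable.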
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Clarification, please -- suppose you had a 3-year-old equivalent mind, e.g. a
working Joshua Blue. Would this qualify, for your question? You have a mind
with the potential to grow into an adult-human equivalent, but it still needs
years of
On Monday 01 October 2007 05:47:25 pm, Matt Mahoney wrote:
Understanding software is equivalent to compressing it. Programs that are
useful, bug free, and well documented have higher probability. An intelligent
model capable of RSI would compress these programs smaller. We do not seem to
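An off-the-shelf compressor can stand in, crudely, for the model in this
framing (a sketch invented for illustration): compressed size approximates
description length, and regular, well-structured code compresses far better
than noise. A model that had actually learned the regularities of useful
programs would compress them smaller still.

    import os
    import zlib

    def description_bits(source: bytes) -> int:
        # Compressed size in bits: a rough upper bound on description length.
        return 8 * len(zlib.compress(source, 9))

    structured = b"def add(a, b):\n    return a + b\n" * 50   # very regular "code"
    random_junk = os.urandom(len(structured))                 # incompressible bytes

    print(len(structured) * 8, "raw bits")
    print(description_bits(structured), "bits for the regular code")
    print(description_bits(random_junk), "bits for random bytes")

zlib only captures shallow statistical regularity; the claim above is that a
model capable of RSI would exploit semantic regularity and compress further.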
What would be the simplest system capable of recursive self-improvement, not
necessarily with human level intelligence? What are the time and memory
costs? What would be its algorithmic complexity?
One could imagine environments that simplify the problem, e.g. Core Wars as
a competitive
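At the minimal end of the spectrum, a complete self-modification loop can be
written down in a few lines. The sketch below is invented for illustration
(nothing in it comes from the thread): the system repeatedly rewrites its own
small instruction list and keeps any rewrite that scores better on a fixed
benchmark.

    import random

    # The "program" being improved: a list of (op, constant) steps.
    OPS = {"add": lambda x, c: x + c,
           "mul": lambda x, c: x * c}

    def run(program, x):
        for op, c in program:
            x = OPS[op](x, c)
        return x

    def score(program):
        # Fixed benchmark: approximate f(x) = 3x + 7 on ten test points.
        return -sum(abs(run(program, x) - (3 * x + 7)) for x in range(10))

    def mutate(program):
        # Rewrite one step: occasionally swap its op, always nudge its constant.
        prog = [list(step) for step in program]
        step = random.choice(prog)
        if random.random() < 0.2:
            step[0] = random.choice(list(OPS))
        step[1] += random.uniform(-1.0, 1.0)
        return [tuple(s) for s in prog]

    random.seed(1)
    current = [("mul", 1.0), ("add", 0.0)]
    for _ in range(20000):
        candidate = mutate(current)
        if score(candidate) > score(current):   # keep only improvements
            current = candidate

    print(current, round(score(current), 3))    # converges toward mul 3, add 7

This is self-modification with selection but nothing resembling
understanding; the question is how much more than this is needed before
improvements compound instead of plateauing.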
The simple intuition from evolution in the wild doesn't apply here, though. If
I'm a creature in most of life's history with a superior mutation, the fact
that there are lots of others of my kind with inferior ones doesn't hurt
me -- in fact it helps, since they make worse competitors. But on
On 9/30/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What would be the simplest system capable of recursive self-improvement, not
necessarily with human level intelligence? What are the time and memory
costs? What would be its algorithmic complexity?
Depends on what metric you use to judge