expectancy (66 years).
- Number of bits of recorded information.
- Combined processing power of brains and computers in OPS.
-- Matt Mahoney, matmaho...@yahoo.com
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.list
. Modern Nobel prize winners are awarded for work
done decades ago. How do you distinguish one genius from millions of cranks?
You wait until the rest of society catches up in intelligence.
-- Matt Mahoney, matmaho...@yahoo.com
ays to find the best options to compress a file that
normally takes 45 seconds.
-- Matt Mahoney, matmaho...@yahoo.com
From: Steve Richfield
To: agi
Sent: Sun, June 20, 2010 2:06:55 AM
Subject: [agi] An alternative plan to discover self-organization theory
No,
e ideas on how to solve it? Preferably something
that takes less than 3 billion years on a planet sized molecular computer.
-- Matt Mahoney, matmaho...@yahoo.com
From: Mike Tintner
To: agi
Sent: Mon, June 21, 2010 7:59:29 AM
Subject: Re: [agi] An alternative pla
very complicated of course. You are more
likely to detect motion in objects that you recognize and expect to move, like
people, animals, cars, etc.
-- Matt Mahoney, matmaho...@yahoo.com
From: David Jones
To: agi
Sent: Mon, June 21, 2010 9:39:30 AM
Subject: [ag
killed,
> assuming these entities could ultimately prevail over the previous forms of
> life on our planet.
What do you mean by "conscious"? If your brain were removed and replaced by a
functionally equivalent computer that simulated your behavior (presumably a
zombie), how w
u need to simulate the 3
billion years of evolution that created human intelligence?
-- Matt Mahoney, matmaho...@yahoo.com
From: rob levy
To: agi
Sent: Mon, June 21, 2010 11:56:53 AM
Subject: Re: [agi] An alternative plan to discover self-organization theory
(I
een that other AGI, Mentifex. I never did trust it ;-)
-- Matt Mahoney, matmaho...@yahoo.com
.
But logically you know that your brain is just a machine, or else AGI would
not be possible.
>
>
> On Nov 4, 2007 1:15 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >
> > > Matt,
> > >
&g
r being
> able to paint with one's toes.
I guess the question is what purpose does challenging oneself play? How does
climbing mountains or going to the moon help humans survive? Experimentation
is an essential component of intelligence, so I believe it will survive in
AGI.
-- Matt M
rading our brains
or uploading. But if consciousness does not exist, as logic tells us, then
this outcome is no different than the other.
-- Matt Mahoney, [EMAIL PROTECTED]
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=62256955-5c83cf
ection
> > there is.
> >
> >
>
> I think all of these boil down to a simple equation with just a few
> variables. Anyone have it? It'd be nice if it included some sort of
> computational complexity energy expression in it.
Yes. Intelligence is
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > >
> > > I think all of these boil down to a simple equation with just a few
> > > variables. Anyone have it? It'd be nice if it inclu
puting power we need.
- Big companies like Google and IBM (Blue Brain) with massive data sets and
computing power are still doing basic research.
- Really smart people like Minsky, Kurzweil, and Yudkowsky are not trying to
actually build AGI.
1. A. Newell, H. A. Simon, "GPS: A Program that
rst iteration.
> > But if consciousness does not exist...
>
> obviously, it does exist.
Belief in consciousness exists. There is no test for the truth of this
belief.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > We just need to control the AGI's goal system.
> >
> > You can only control the goal system of the first iteration.
>
>
> ..and you can
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >
> >> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >>>> We just need to control AGIs goal
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> >> Matt Mahoney wrote:
> >>> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >>>
> >>>
chnology, for example, uploading our brains
> > into computers and reprogramming them. When a rat can stimulate its
> nucleus
> > accumbens by pressing a lever, it will forgo food, water, and sleep until
> it
> > dies. We worry about AGI destroying the world by launching
on how much pleasure or pain you can experience in a lifetime. In
particular, if you consider t1 = birth, t2 = death, then K(dS) = 0.
-- Matt Mahoney, [EMAIL PROTECTED]
t is that AGI will quickly evolve to invisibility
from a human-level intelligence. I say "human-level" because life will be so
fundamentally different that we could no longer be called human, although any
intelligence at our level won't be aware of the change.
A dog is much close
lia is an illusion. I wrote autobliss to expose
this illusion.
> Good luck with this,
I don't expect that any amount of logic will cause anyone to refute beliefs
programmed into their DNA, myself included.
-- Matt Mahoney, [EMAIL PROTECTED]
rm utility. If
an agent is rewarded for output y given input x, it must still experiment with
output -y to see if it results in greater reward. Evolution rewards smart
optimization processes. It explains why people climb mountains, create
paintings, and build rockets.
-- Matt Mahoney, [EMAIL PRO
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> Could you describe a piece of technology that simultaneously:
> - Is required for AGI.
> - Cannot be required part of any useful narrow AI.
A one million CPU cluster.
-- Matt Mahoney, [EMAIL PROTECTED]
be available for humans as well.
> So the gap won't be really that big.
>
> To visualize potential differences try to compare income of humans
> with IQ 100 and humans with IQ 150.
> The difference is not really that big.
Try to visualize an Earth turned into computronium with
ecessary
computing power. http://en.wikipedia.org/wiki/Storm_botnet
-- Matt Mahoney, [EMAIL PROTECTED]
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> >
> > What are the best current examples of (to any extent) self-building
> > software
> > > ?
> >
> > So far, most of the effort has b
a 1 year old child) with only a crude model of semantics and no
syntax. Memory is so tightly constrained (at 2 GB) that modeling at a higher
level is mostly pointless. The slope of compression surface in speed/memory
space is steep along the memory axis.
-- Matt Mahoney, [EMAIL PROTECTED]
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > It amazes me that a crime of this scale can go on for a year and we are
> > powerless to stop it either through law enforcement or technology. The
> >
vent start working.
Such a creature would be invisible, just as you are invisible to the bacteria
in your gut. Such a creature might be simulating the universe you now
observe. You would never know it exists if it has programmed your brain to
refuse to accept its existence.
-- Matt Mahoney,
--- "J. Andrew Rogers" <[EMAIL PROTECTED]> wrote:
>
> On Nov 27, 2007, at 7:21 PM, Matt Mahoney wrote:
> > As a counterexample, evolution is already smarter than
> > the human brain. It just takes more computing power. Evolution has
> > figured
--- BillK <[EMAIL PROTECTED]> wrote:
> On Nov 30, 2007 2:37 PM, James Ratcliff wrote:
> > More Women:
> >
> > Kokoro (image attached)
> >
>
>
> So that's what a woman is! I wondered...
Wrong. http://www.youtube.com/watch?v=N7mZStNNN7g
he 1980's) confirms the basic architecture, in
particular Hebb's rule, postulated in 1949 but not fully confirmed in animals
even today.
-- Matt Mahoney, [EMAIL PROTECTED]
ed C/C++. There is an SSE2 version too.
> Actual difference in size would be 10 times, since your matrix is only
> 10% filled.
For a 64K by 64K matrix, each pointer is 16 bits, or 1.6 bits per element. I
think for neural networks of that size you could use 1 bit weights.
-- Matt Mahoney,
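The storage arithmetic in this fragment can be reproduced directly (a minimal sketch; the 10% fill figure, 16-bit indices, and 1-bit weights are taken from the message, the variable names are mine):

```python
# Reproducing the sparse-matrix storage arithmetic from the message above.
N = 64 * 1024               # 64K rows and 64K columns
fill = 0.10                 # the matrix is only 10% filled
index_bits = 16             # a column index in [0, 64K) fits in 16 bits

# Amortized index cost per matrix element, filled or not:
bits_per_element = fill * index_bits        # 0.10 * 16 = 1.6 bits
print(bits_per_element)                     # 1.6

# Adding 1-bit weights, the total for the whole sparse matrix:
total_megabytes = N * N * fill * (index_bits + 1) / 8 / 1e6
print(round(total_megabytes))               # 913
```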
semantics, then grammar, and then the problem
solving. The whole point of using massive parallel computation is to do the
hard part of the problem.
-- Matt Mahoney, [EMAIL PROTECTED]
nd a
time stamp.
I wrote my thesis on the question of whether such a system would scale to a
large, unreliable network. (Short answer: yes).
http://cs.fit.edu/~mmahoney/thesis.html
Implementation detail: how to make a P2P client useful enough that people will
want to install it?
-- Ma
ghly parallel computer.
1. Gorrell, Genevieve (2006), “Generalized Hebbian Algorithm for Incremental
Singular Value Decomposition in Natural Language Processing”, Proceedings of
EACL 2006, Trento, Italy.
http://www.aclweb.org/anthology-new/E/E06/E06-1013.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
lso relay O(log n) messages.
If the communication protocol is natural language text, then I am pretty sure
our existing networks can handle it.
-- Matt Mahoney, [EMAIL PROTECTED]
section of the SAT exams. See:
Turney, P., Human Level Performance on Word Analogy Questions by Latent
Relational Analysis (2004), National Research Council of Canada,
http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf
-- Matt Mahoney, [EMAIL PROTECTED]
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> For example, I disagree with Matt's claim that AGI research needs
> special hardware with massive computational capabilities.
I don't claim you need special hardware.
-- Matt Mahoney, [EMAIL PROTECTED]
uot;important" messages to propagate to a large number of nodes. All critically
balanced complex systems are subject to rare but significant events, for
example software (state changes and failures), evolution (population
explosions, plagues, and mass extinctions), and gene regulatory netw
--- Ed Porter <[EMAIL PROTECTED]> wrote:
> >MATT MAHONEY=> My design would use most of the Internet (10^9 P2P
> nodes).
> ED PORTER=> That's ambitious. Easier said than done unless you have a
> Google, Microsoft, or mass popular movement backing you.
ent) if we could solve
the distributed search problem.
-- Matt Mahoney, [EMAIL PROTECTED]
--- "John G. Rose" <[EMAIL PROTECTED]> wrote:
> > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> > My design would use most of the Internet (10^9 P2P nodes). Messages
> > would be
> > natural language text strings, making no distinction between docu
h it probably
could be). The protocol requires that the message's originator and
intermediate routers all be identified by a reply address and time stamp. It
won't work otherwise.
-- Matt Mahoney, [EMAIL PROTECTED]
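The header requirement described here, where the originator and every intermediate router are identified by a reply address and a time stamp, can be sketched as a data structure (illustrative only; the class and field names are my invention, not the proposal's):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Hop:
    reply_address: str    # where replies to this peer should be sent
    timestamp: float      # when this peer handled the message

@dataclass
class Message:
    body: str                                   # natural language text
    originator: Hop                             # who created the message
    route: list = field(default_factory=list)   # intermediate routers, in order

    def relay(self, router_address: str) -> None:
        """Record an intermediate router before forwarding."""
        self.route.append(Hop(router_address, time.time()))

msg = Message("who is this?", Hop("peer0.example", time.time()))
msg.relay("peer1.example")
msg.relay("peer2.example")
print([h.reply_address for h in msg.route])    # ['peer1.example', 'peer2.example']
```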
en if the peers behave properly.
Malicious peers could forge headers, for example, to hide the true source of
messages or to force replies to be directed to unintended targets. Some
attacks could be very complex depending on the idiosyncratic behavior of
particular peers.
-- Matt Mahoney, [EM
rts. The P2P protocol is natural
language text. I will write up the proposal so it will make more sense than
the current collection of posts.
-- Matt Mahoney, [EMAIL PROTECTED]
#x27;s vulnerability, but it
doesn't stop people from using them.
>
> -Original Message-
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 06, 2007 4:06 PM
> To: agi@v2.listbox.com
> Subject: RE: Distributed search (was RE: Hacker intel
should be a
useful service at least in the short term before it destroys us.
>
> -Original Message-
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 06, 2007 6:17 PM
> To: agi@v2.listbox.com
> Subject: RE: Distributed search (was RE: Hacker intellig
needs
> > >>> special hardware with massive computational capabilities.
> > >
> >
> > Could you give an example or two of the kind of problems that your AGI
> > system(s) will need such massive capabilities to solve? It's so good - in
> > fact, I would
> But you claim that you need massive computational capabilities
> [considerably above capabilities of regular modern PC], right?
> That means "special".
No, my proposal requires lots of regular PCs with regular network connections.
It is a purely software approach. But more ha
er, rather than on what
services the AGI should provide for us.
>
> =Jean-Paul
> >>> On 2007/12/07 at 06:41, in message
> <[EMAIL PROTECTED]>, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > I wrote up a quick description of my AGI proposal at
>
h massive capabilities to solve? It's so good - in
> fact, I would argue, essential - to ground these discussions.
For example, I ask the computer "who is this?" and attach a video clip from my
security camera.
-- Matt Mahoney, [EMAIL PROTECTED]
html
Too bad we don't know how much computing power is needed for AI. Without this
knowledge, it will take us by surprise.
-- Matt Mahoney, [EMAIL PROTECTED]
number of peers that accept it according to the
peers' policies, which are set individually by their owners. The network
provides an incentive for peers to produce useful information so that other
peers will accept it. Thus, useful and truthful information is more likely to
be propagated.
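The propagation rule described here can be illustrated with a toy model: each peer applies its own acceptance policy, and a message spreads only through peers that accept it. The peer names and policies below are invented for illustration:

```python
# Toy model: messages propagate only through peers whose policies accept them.
class Peer:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy          # callable: message text -> bool
        self.inbox = []
        self.neighbors = []

    def receive(self, message, seen):
        if self.name in seen or not self.policy(message):
            return                    # already visited, or policy rejects
        seen.add(self.name)
        self.inbox.append(message)
        for peer in self.neighbors:   # forward to every known peer
            peer.receive(message, seen)

accepts_topic = lambda text: "compression" in text
rejects_all = lambda text: False
a = Peer("A", accepts_topic)
b = Peer("B", accepts_topic)
c = Peer("C", rejects_all)
a.neighbors = [b, c]
b.neighbors = [c]

a.receive("new compression benchmark results", set())
print([p.name for p in (a, b, c) if p.inbox])   # ['A', 'B']
```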
use Google's 10^6 CPU cluster and its database with 10^9
human contributors.
-- Matt Mahoney, [EMAIL PROTECTED]
a human could
experience 10^9 bits according to cognitive models of long term memory.
-- Matt Mahoney, [EMAIL PROTECTED]
What do you call the computer that simulates what you perceive to be the
universe?
-- Matt Mahoney, [EMAIL PROTECTED]
to what "pain" (or any other kind of subjective experience)
> actually is.
I would like to hear your definition of pain and/or negative reinforcement.
Can you answer the question of whether a machine (say, an AGI or an uploaded
human brain) can feel pain?
-- Matt Mahoney, [EMAIL
--- Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Monday 10 December 2007, Matt Mahoney wrote:
> > The worst case scenario is that AI wipes out all life on earth, and
> > then itself, although I believe at least the AI is likely to survive.
>
> http://lifeboat.com/ex/
wer.
The message posting service I have proposed does not address friendliness at
all. It should be benign as long as it can't reprogram the peers. I can't
guarantee that won't happen because peers can be arbitrarily configured by
their owners.
-- Matt Mahoney, [EMAIL PROTECTED
--- Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Tuesday 11 December 2007, Matt Mahoney wrote:
> > --- Bryan Bishop <[EMAIL PROTECTED]> wrote:
> > > Re: how much computing power is needed for ai. My worst-case
> > > scenario accounts for nea
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> I have to say that this is only one interpretation of what it would mean
> >> for an AGI to experience something, and I for one
ing Test involved fooling/convincing judges, not
> > clueless men hoping to get some action?
>
> In my taste, testing with clueless judges is more appropriate
> approach. It makes test less biased.
To be a valid Turing test, the judges must know that with 50% a priori
probability the
wish. I make no claims about the morality of
inflicting pain on animals or programs. Morality is an evolved cultural
belief. We believe in compassion to other humans because tribes that
practiced this belief (toward their own members) were more successful than
those that didn't. Likewise,
in pencil and paper,
transistors, or neurons.
-- Matt Mahoney, [EMAIL PROTECTED]
ter to have superhuman intelligence.
If you define intelligence as passing the Turing test, then I agree that you
could not have a computer much smarter than human. But I don't define
intelligence that way. A superhuman intelligence will be invisible, because
it will have complete control over
services, charisma, deceit, or
extortion, and at other methods we haven't even thought of yet.
> Beliefs also operate in the models. I can imagine an intelligent
> machine choosing not to trust humans. Is this intelligent?
Yes. Intelligence has nothing to do with subservience to humans
. You don't have to ask for this.
The AI has modeled your brain and knows what you want. Whatever it does, you
will not object because it knows what you will not object to.
My views on this topic. http://www.mattmahoney.net/singularity.html
-- Matt Mahoney, [EMAIL PROTECTED]
-
T
t;
> YKY
What is the goal of your system? What application?
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Stan Nilsen <[EMAIL PROTECTED]> wrote:
> >
> >> Matt,
> >>
> >> Thanks for the links sent earlier. I especially like the paper by Legg
> >> and H
t" and
> "intelligence". As such, his conclusions were bankrupt.
>
> Having pointed this out for the benefit of others who may have been
> overly impressed by the Hutter paper, just because it looked like
> impressive maths, I have no interest in discussing this
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Dec 21, 2007 6:56 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > > Still more nonsense: as I have pointed out before, Hutter's implied
> >
e set of complex tasks in complex environments faster and better
> than humans, such as ...
So if we can't agree on what intelligence is (in a non human context), then
how can we argue if it is possible?
My calculator can add numbers faster than I can. Is it intelligent? Is
Google inte
are exponential, or 2^(10^122) steps.
So we approximate.
-- Matt Mahoney, [EMAIL PROTECTED]
--- "YKY (Yan King Yin)" <[EMAIL PROTECTED]> wrote:
> On Dec 21, 2007 11:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > What is the goal of your system. What application?
> Sorry about the delay, and Merry Xmas =)
>
> The goal is to provide an easy
oided that
problem if you learned the meanings first, before learning the grammar.
-- Matt Mahoney, [EMAIL PROTECTED]
igotry.
>
>
>
>
> If
> I am wrong then I just made a
> mistake. So what? What's the big
> deal? I am not trying to mislead
> or hurt anyone and making errors seems
> to be a necessary part of human
> existence.
>
>
>
> Right
> now I am co
believed that
P!=NP because a lot of people have tried and failed to do this. However, it
is not proven that P!=NP. The Clay Institute has offered a $1 million prize
for a proof either way. A partial list of problems can be found here:
http://en.wikipedia.org/wiki/List_of_NP-complete_problems
Good
if the answer is yes. Verifiability
on a deterministic Turing machine is equivalent to solving the decision
problem on a nondeterministic machine, but I think a little easier to
understand.
-- Matt Mahoney, [EMAIL PROTECTED]
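The point about verifiability can be made concrete with SAT, the canonical NP problem: checking a proposed satisfying assignment is a fast deterministic computation, even though finding one appears hard. A minimal sketch:

```python
# Checking a SAT certificate in polynomial time: easy to verify a proposed
# assignment, (apparently) hard to find one.
def verify_sat(clauses, assignment):
    """clauses: list of clauses, each a list of ints (positive = variable,
    negative = negated variable); assignment: dict mapping var -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))   # True
print(verify_sat(clauses, {1: False, 2: True, 3: False}))  # False
```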
tml I discuss how a singularity
will end the human race, but without judgment whether this is good or bad.
Any such judgment is based on emotion. Posthuman emotions will be
programmable.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Mike Dougherty <[EMAIL PROTECTED]> wrote:
> On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> >
>
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?current
ut reference to hardcoded goals, such as fear of
death.
-- Matt Mahoney, [EMAIL PROTECTED]
ty distribution over string s can be
expressed as a product of conditional predictions of consecutive symbols in s.
If you know that I am for or against X then you have one bit of knowledge. A
data compressor knowing this can compress a message from me about X one bit
smaller than a compressor wi
= PROD_i
> > P(s_i|s_1..i-1), that any probability distribution over string s can be
> > expressed as a product of conditional predictions of consecutive symbols
> in s.
> > If you know that I am for or against X then you have one bit of
> knowledge. A
> > data compressor k
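The claim being quoted can be put in numbers: an ideal code assigns -log2 P bits to an event of probability P, so a compressor that already knows one binary fact (here, my stance on X, prior 1/2) spends 0 bits on it instead of 1. A sketch:

```python
import math

def code_length_bits(p):
    """Bits an ideal code assigns to an event of probability p."""
    return -math.log2(p)

p_stance = 0.5                                   # receiver's prior that I am "for X"
without_knowledge = code_length_bits(p_stance)   # 1.0 bit to transmit my stance
with_knowledge = code_length_bits(1.0)           # 0.0 bits: already known
print(without_knowledge - with_knowledge)        # 1.0
```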
es. It could be a Dyson sphere with atomic
level computing elements. It may or may not have a copy of your memories. It
won't always be happy, because happiness is not fitness.
-- Matt Mahoney, [EMAIL PROTECTED]
o
explain it, like Penrose's quantum gravity. A better explanation would be
that evolution selects for animals whose behavior is consistent with the
belief in qualia.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Jan 23, 2008 11:55 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > This is another example of starting with the false assumption that
> > consciousness (or qualia) exists, and then deriving bizarre theories
ffer evolution. There is good evidence that every
living thing evolved from a single organism: all DNA is twisted in the same
direction.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> The problem with the scenarios that people imagine (many of which are
> >> Nightmare Scenarios) is that the vast majority of
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Jan 24, 2008 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > Just about all humans claim to have an awareness of sensations, thoughts,
> and
> > feelings, and control over decisions they make, what
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Because recursive self improvement is a competitive evolutionary process
> even
> > if all agents have a common ancestor.
>
> As explained in parallel post: this is a non sequitur.
OK, cons
e from the
internet and analyzes it for vulnerabilities, finding several. As instructed,
it writes a virus, a modified copy of itself running on the infected system.
Due to a bug, it continues spreading. Oops... Hard takeoff.
-- Matt Mahoney, [EMAIL PROTECTED]
I believe something like it WILL be built, probably ad-hoc
and very complex, because it has economic value.
-- Matt Mahoney, [EMAIL PROTECTED]
does. http://cs.fit.edu/~mmahoney/thesis.html
-- Matt Mahoney, [EMAIL PROTECTED]
xceed
individual brains in intelligence. They can't yet, but they will. Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity. When your computer can write and debug
software faster and more accurately than you can, then you
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Maybe you can
> > program it with a moral code, so it won't write malicious code. But the
> two
> > sides of the security problem require almost identical skills. Suppose
> you
>
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Jan 27, 2008 5:32 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > Software correctness is undecidable -- the halting problem reduces to it.
> > Computer security isn't going to be magically solved by AGI
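The reduction behind the undecidability claim can be sketched: a decider for the spec "Q always returns 0" would give a decider for halting. Given program P and a call to it, build Q that runs the call and then returns 0; Q meets the spec iff P halts on that input. The construction below is illustrative (the names and source-string approach are mine):

```python
# Reduction sketch: halting on input x  <=>  Q is correct w.r.t. "returns 0".
def reduce_halting_to_correctness(program_src, call_expr):
    """Build the source of Q from a program P and a call expression."""
    return (
        f"{program_src}\n"
        f"def Q():\n"
        f"    {call_expr}   # runs forever iff P does on this input\n"
        f"    return 0\n"
    )

q_src = reduce_halting_to_correctness(
    "def P(n):\n    while n != 1:\n        n = n // 2",
    "P(20)",
)
namespace = {}
exec(q_src, namespace)      # this P halts on 20, so Q returns 0
print(namespace["Q"]())     # 0
```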
they are motivated by
greed, so attacks remain hidden while stealing personal information and
computing resources. Acquiring resources is the fitness function for
competing, recursively self improving AGI, so it is sure to play a role.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Jan 28, 2008 4:53 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > > Consider the following subset of possible requirements: the program is
> > > correct
> > > > if and only if it halts.
> &