'...he who lights his taper at mine, receives light without darkening me.'
Thomas Jefferson, letter to Isaac McPherson, 13 August 1813
-- Matt Mahoney, [EMAIL PROTECTED]
was a year ago and nothing has been released yet.
-- Matt Mahoney, [EMAIL PROTECTED]
own development project.
So if the value of AGI is all the human labor it replaces (about US $1
quadrillion), how much will it cost to build? Keep in mind there is a tradeoff between waiting for the cost of technology to drop and having it now.
How much should we expect to spend?
-- Matt
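A toy model can make that tradeoff concrete; every number below is an assumption for illustration, not an estimate from this thread. In Python:

# Build-now vs. wait tradeoff (all figures are illustrative assumptions).
V = 1e15            # value of AGI: ~US $1 quadrillion of replaced labor
C0 = 1e12           # hypothetical cost to build with today's technology
halving = 2.0       # years for hardware cost to halve (Moore's-law-like)
r = 0.05            # annual discount rate applied to the deferred value

def net(t):
    # Net payoff if we build in year t: discounted value minus falling cost.
    return V * (1 + r) ** -t - C0 * 2 ** (-t / halving)

best = max(range(61), key=net)
print(best, f"{net(best):.3e}")  # with these numbers, build now (t = 0)

With the assumed cost three orders of magnitude below the value, waiting only forfeits discounted value; the tradeoff only bites when the build cost is within a few doublings of the value.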
1. Shannon, C. E., Prediction and Entropy of Printed English, Bell Sys. Tech. J. (30)1 (Jan.) pp. 50-64, 1951.
2. Cover, T. M., and R. C. King, A Convergent Gambling Estimate of the
Entropy of English, IEEE Transactions on Information Theory (24)4 (July) pp.
413-421, 1978.
-- Matt Mahoney, [EMAIL PROTECTED]
reasoning. Including these
capabilities would not improve compression.
Tests on small data sets could be used to gauge early progress. But
ultimately, I think you are going to need hardware that supports AGI to test
it.
-- Matt Mahoney, [EMAIL PROTECTED]
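One crude way to gauge such small-data progress, as a sketch (the file name is an assumption; any small corpus works): measure bits per character under a generic compressor and treat that as the baseline a better model must beat.

# Crude language-modeling gauge: bits per character after compression.
import zlib

text = open("sample.txt", "rb").read()  # any small test corpus (assumed file)
bpc = 8 * len(zlib.compress(text, 9)) / len(text)
print(f"{bpc:.2f} bits/char")  # generic compressors: ~2-3; Shannon's estimate for English: ~1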
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
We
already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
etc. A worm that can write and debug code and discover new vulnerabilities
will be unstoppable. Do you really think your AI will win the race
biggest applications. Unfortunately, the knowledge needed to secure
computers is almost exactly the same kind of knowledge needed to attack them.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, Apr 11, 2008 at 10:50 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
If the problem is so simple, why don't you just solve it?
http://www.securitystats.com/
http://en.wikipedia.org/wiki/Storm_botnet
There is a trend toward using
) was
released in 1990, you probably imagined that all search engines would require
you to know the name of the file you were looking for.
If you have a better plan for AGI, please let me know.
-- Matt Mahoney, [EMAIL PROTECTED]
--- John G. Rose [EMAIL PROTECTED] wrote:
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
The simulations can't loop because the simulator needs at least as much memory as the machine being simulated.
You're making assumptions when you say that. Outside of a particular
simulation we
spam and malicious messages at risk of having their own reputations
lowered if they fail.
-- Matt Mahoney, [EMAIL PROTECTED]
impact on its outcome.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at
http://www.mattmahoney.net/agi.html
or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with
Artificial Intelligence. It is a labor-intensive
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Just what do you want out of AGI? Something that thinks like a person or
something that does what you ask it to?
Either will do: your suggestion achieves neither.
If I ask your non-AGI the following question: How
a human mind. I don't believe that one person or a small group can solve the AGI problem faster than the billions of people on the Internet already are.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Derek Zahn [EMAIL PROTECTED] wrote:
Matt Mahoney writes: As for AGI research, I believe the most viable
path is a distributed architecture that uses the billions of human
brains and computers already on the Internet. What is needed is an
infrastructure that routes information
as the machine being simulated.
-- Matt Mahoney, [EMAIL PROTECTED]
and are not in deep conflict?
I don't expect the experts to agree. It is better that they don't. There are
hard problems remaining to be solved in language modeling, vision, and
robotics. We need to try many approaches with powerful hardware. The network
will decide who the winners are.
-- Matt Mahoney
emerging from the Internet bears little resemblance
to Novamente. It is simply too big to invest in directly, but it will present
many opportunities.
-- Matt Mahoney, [EMAIL PROTECTED]
--- David Hart [EMAIL PROTECTED] wrote:
Hi All,
I'm quite worried about Google's new *Machine Automated Temporal
Extrapolation* technology going FOOM!
http://www.google.com.au/intl/en/gday/
More on the technology
http://en.wikipedia.org/wiki/Google's_hoaxes
:-)
-- Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- John G. Rose [EMAIL PROTECTED] wrote:
Is there really a bit per synapse? Is representing a synapse with a bit an accurate enough simulation? One synapse is a very complicated system.
A typical neural network
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
I was referring to Landauer's estimate of a long-term memory learning rate of about 2 bits per second. http://www.merkle.com/humanMemory.html
This does not include procedural memory, things like visual perception
--- Eric B. Ramsay [EMAIL PROTECTED] wrote:
Matt Mahoney [EMAIL PROTECTED] wrote:
[For those not familiar with Richard's style: once he disagrees with something he will dispute it to the bitter end in long, drawn-out arguments, because nothing is more important than being right
of an associative memory stores 0.15 bits per synapse. But cognitive models suggest the human brain stores only about 10^-6 bits per synapse (there are 10^15 synapses, but human long-term memory capacity is about 10^9 bits).
-- Matt Mahoney, [EMAIL PROTECTED]
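For reference, the arithmetic behind those two densities (a quick check; 0.15 bits/synapse is the associative-memory figure cited above, the rest follows from the stated counts):

# Storage density per synapse implied by the numbers above.
hopfield = 0.15     # bits/synapse in an associative (Hopfield-type) memory
synapses = 1e15     # approximate synapse count in the human brain
ltm_bits = 1e9      # estimated human long-term memory capacity, in bits
print(ltm_bits / synapses)  # 1e-06 bits/synapse, far below the 0.15 figure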
that is a product of evolution, and therefore biased
toward beliefs that favor survival of the species.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
By equivalent computation I mean one whose behavior is indistinguishable
from the brain, not an approximation. I don't believe that an exact
simulation requires copying
--- John G. Rose [EMAIL PROTECTED] wrote:
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
By equivalent computation I mean one whose behavior is indistinguishable from the brain, not an approximation. I don't believe that an exact simulation requires copying the implementation down
. And
removing a 0.1 micron chunk out of a CPU chip can cause it to fail, yet I can
run the same programs on a chip with half as many transistors.
Nobody knows how to make an artificial brain, but I am pretty confident that
it is not necessary to preserve its structure to preserve its function.
-- Matt
--- Charles D Hixson [EMAIL PROTECTED] wrote:
John K Clark wrote:
Matt Mahoney [EMAIL PROTECTED]
It seems to me the problem is
defining consciousness, not testing for it.
And it seems to me that beliefs of this sort are exactly the reason
philosophy is in such a muddle
to use Turing machines in proofs, even
though we can't actually build one. Hutter is not proposing a universal
solution to AI. He is proving that it is not computable. Lanier is not
suggesting implementing consciousness as a rainstorm. He is refuting its
existence.
-- Matt Mahoney, [EMAIL PROTECTED]
--- John Ku [EMAIL PROTECTED] wrote:
On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:
I would prefer to leave behind these counterfactuals altogether and
try to use information theory and control theory to achieve a precise
understanding of what it is for something
--- John Ku [EMAIL PROTECTED] wrote:
On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Nevertheless we can make similar reductions to absurdity with respect to qualia, that which distinguishes you from a philosophical zombie. There is no experiment to distinguish whether you actually
is that it doesn't matter. The pleasure of a thousand permanent orgasms is
just a matter of changing a few lines of code, and you go into a degenerate
state where learning ceases.
-- Matt Mahoney, [EMAIL PROTECTED]
(knowing what we cannot know) to
conclude that human brains are just computers and our existence doesn't
matter. It is ironic that our programmed beliefs lead us to advance
technology to the point where the question can no longer be ignored.
-- Matt Mahoney, [EMAIL PROTECTED]
is that the long-distance van der Waals bonding strengths between A-T pairs or C-G pairs in double-stranded DNA are slightly greater than the bonding strengths between A-T and C-G (although much weaker than the hydrogen bonds between A and T or C and G).
-- Matt Mahoney, [EMAIL PROTECTED]
replicating nanobots.
-- Matt Mahoney, [EMAIL PROTECTED]
on your choice of mathematical model.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Gifting [EMAIL PROTECTED] wrote:
There is plenty of physical evidence that the universe is simulated by a finite state machine or a Turing machine.
1. The universe has finite size, mass, age, and resolution
etc.
-- Matt Mahoney, [EMAIL PROTECTED]
I assume
agents in an environment that correctly believe that the world is a simulation would be less likely to pass on their genes than agents that falsely believe the world is real.
Perhaps you suspect that the food you eat is not real, but you continue to eat
anyway.
-- Matt Mahoney, [EMAIL PROTECTED]
your DNA.
-- Matt Mahoney, [EMAIL PROTECTED]
is not healthy. It is what motivates kamikaze
pilots and suicide bombers. Religion has thrived because it teaches rules
that maximize reproduction, such as prohibiting sexual activity for any other
purpose.
-- Matt Mahoney, [EMAIL PROTECTED]
that the universe is a simulation, nor are any of my other
points. I don't believe that a proof is possible.
Eric B. Ramsay
Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eric B. Ramsay wrote:
Apart from all this philosophy (never-ending as it seems), Table 1 of the paper referred
-- Matt Mahoney, [EMAIL PROTECTED]
--- Bryan Bishop [EMAIL PROTECTED] wrote:
On Friday 30 November 2007, Matt Mahoney wrote:
How can we design AI so that it won't wipe out all DNA-based life,
possibly this century?
That is the wrong question.
How can we preserve DNA-based life? Perhaps by throwing it out
, now dropping the assumption of CEV. The question remains whether this AGI
would preserve the lives of the original humans or their memories. Not what
it should do, but what it would do. We have a few decades left to think about
this.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Suppose that the collective memories of all the humans make up only one billionth of your total memory, like one second of memory out of your human lifetime. Would it make much difference if it was erased to make room
to the relative
probability of different outcomes, but just sneers at the whole idea with a "Yeah, but what if everything goes wrong, huh? What if Frankenstein turns up? Huh? Huh?" comment.
Happens every time.
Richard Loosemore
Matt Mahoney wrote:
--- Richard Loosemore
the issue of consciousness from the possibility of AI.
-- Matt Mahoney, [EMAIL PROTECTED]
more important?
I am not saying that the extinction of humans and their replacement with a godlike intelligence is necessarily a bad thing, but it is something to be aware of.
-- Matt Mahoney, [EMAIL PROTECTED]
, and goal-directed agents seem to be necessary for RSI. It raises hard questions about what role humans will play
in this, if any.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 11/09/2007, Matt Mahoney [EMAIL PROTECTED] wrote:
No, you are thinking in the present, where there can be only one copy of a brain. When technology for uploading exists, you have a 100% chance of becoming the original
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 10/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
No, it is not necessary to destroy the original. If you do destroy the
original you have a 100% chance of ending up as the copy, while if you
don't you have a 50% chance of ending up
--- Panu Horsmalahti [EMAIL PROTECTED] wrote:
2007/9/10, Matt Mahoney [EMAIL PROTECTED]:
- Human belief in consciousness and subjective experience is universal and
accepted without question.
It isn't.
I am glad you spotted the flaw in these statements.
Any belief programmed
knowledge to create a reasonable facsimile. For example, given just my home
address, you could guess I speak English, make reasonable guesses about what
places I might have visited, and make up some plausible memories. Even if
they are wrong, my copy wouldn't know the difference.
-- Matt Mahoney
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 09/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Your dilemma: after you upload, does the original human then become a p-zombie, or are there two copies of your consciousness? Is it necessary to kill the human body for your
? If it does exist, then is it a property of the computation, or does
it depend on the physical implementation of the computer? How do you test for
it?
Do you claim that the human brain cannot be emulated by a Turing machine?
-- Matt Mahoney, [EMAIL PROTECTED]
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 08/09/07, Matt Mahoney [EMAIL PROTECTED] wrote:
I agree this is a great risk. The motivation to upload is driven by fear of death and our incorrect but biologically programmed belief in consciousness.
The result
There has been a minor setback in the plan to implant RFID tags in all humans.
http://news.yahoo.com/s/ap/20070908/ap_on_re_us/chipping_america_ii;_ylt=AiZyFu9ywOpQA0T6nXkEAcFH2ocA
Perhaps it would be safer to have our social security numbers tattooed on our
foreheads?
-- Matt Mahoney, [EMAIL PROTECTED]
in extinction with no replacement.
-- Matt Mahoney, [EMAIL PROTECTED]
editorializing, but rather for a clear
short popular mass-media explanation of the Singularity.
I think the classic paper by Vernor Vinge expresses it pretty well.
http://mindstalk.net/vinge/vinge-sing.html
-- Matt Mahoney, [EMAIL PROTECTED]
--- Samantha Atkins [EMAIL PROTECTED] wrote:
On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote:
3. Studying the singularity raises issues (e.g. does consciousness
exist?)
that conflict with hardcoded beliefs that are essential for survival.
Huh? Are you conscious?
I believe that I am
--- Randall Randall [EMAIL PROTECTED] wrote:
On Jun 28, 2007, at 7:51 PM, Matt Mahoney wrote:
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
How does this answer questions like, if I am destructively teleported to two different locations, what can I expect to experience? That's what
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 28/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
So how do we approach the question of uploading without leading to a contradiction? I suggest we approach it in the context of outside observers simulating competing agents. How
?
And then the original friend walks in...
-- Matt Mahoney, [EMAIL PROTECTED]
new ways to
not invent AI. =(((
--
Opera: Sing it loud! :o( )-
-- Matt Mahoney, [EMAIL PROTECTED]
--- Jey Kottalam [EMAIL PROTECTED] wrote:
On 6/25/07, Matt Mahoney [EMAIL PROTECTED] wrote:
You can only transfer consciousness if you kill the original.
What is the justification for this claim?
There is none, which is what I was trying to argue. Consciousness does not
actually
as building a 747, and then figuring out what to program with regard to volition, death, human suffering, etc. as learning how to fly the 747 and finding a good destination.
- Tom
--- Matt Mahoney [EMAIL PROTECTED] wrote:
I think I am missing something on this discussion
? Suppose they build a single AGI,
all the agents upload, and the AGI reprograms its goals and goes into a
degenerate state or turns itself off. Would you care?
-- Matt Mahoney, [EMAIL PROTECTED]
in your simulated universe, which bear no resemblance to the universe
in which the simulation is being run. This will all be clear after you die
and wake up.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Or if your intellect advanced to the point where you could, you would not be able to describe what you observed to other humans. To use an analogy, a Singularity-level intelligence would be as advanced
a world where resources are plentiful.
For all you know, the latter has already happened.
-- Matt Mahoney, [EMAIL PROTECTED]
directions, to decide which email we want to read,
to do ever more of our work.
When machines can do all of our thinking for us, what will happen to us?
-- Matt Mahoney, [EMAIL PROTECTED]
bad, or equivalently, create good music? How
could human artists compete with machines that can customize their work for
each individual in real time?
I guess that leaves sex, but I would not be surprised to see some technical
innovation here as well.
-- Matt Mahoney, [EMAIL PROTECTED]
...
I plan to upload myself and become transhuman anyway, but maybe the Ben-version who stays a mostly-unimproved human will become a full-time musician ;-) ... Hell, with a few thousand years of practice, he may even become a good one!!!
-- Ben G
-- Matt Mahoney, [EMAIL PROTECTED]
--- Nathan Cook [EMAIL PROTECTED] wrote:
On 5/21/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Now there really is no difference between being able to judge the quality of a movie (relative to a particular viewer or audience) and being able to generate high-quality movies.
So
machines
that obey our commands. But this is controversial. Should a machine obey a
command to destroy itself or harm others? Do you want a gun that fires when
you squeeze the trigger, or a gun that makes moral judgments and refuses to
fire when aimed at another person?
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
What did your simulation actually accomplish? What were the results? What do you think you could achieve on a modern computer?
Oh, I hope there's no misunderstanding: I did not build networks to do
any kind
. If we can estimate the complexity of language modeling in a similar way, I see no reason not to.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
One problem with some
connectionist models is trying to assign a 1-1 mapping between words and
neurons. The brain might have 10^8 neurons devoted to language, enough to
represent many copies of the different senses
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
I doubt you could model sentence structure usefully with a neural network
capable of only a 200-word vocabulary. By the time children learn to use
complete sentences they already know thousands of words after exposure
of Shane's work. After all, he is the one who
proved the correctness of your assertion.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Personally, I would experiment with neural language models that I can't currently implement because I lack the computing power
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Sun, May 13, 2007 at 05:23:53PM -0700, Matt Mahoney wrote:
It is not that hard, really. Each of the 10^5 PCs simulates about 10 mm^3 of brain tissue. Axon diameter varies
You know, repeating assertions doesn't make them any more true.
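As a quick check on that partitioning (taking the human brain as roughly 1.2 x 10^6 mm^3, an assumed round figure):

# Does 10^5 PCs at ~10 mm^3 of tissue each cover a whole brain?
brain_mm3 = 1.2e6   # approximate human brain volume in mm^3
pcs = 1e5
print(brain_mm3 / pcs, "mm^3 per PC")  # ~12 mm^3, consistent with the claim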
doesn't know how many fingers I
am holding up.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
--- Tom McCabe [EMAIL PROTECTED] wrote:
You cannot get large amounts of computing power simply by hooking up a hundred thousand PCs for problems that are not easily parallelized, because
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
Language and vision are prerequisites to AGI.
No, they aren't, unless you care to suggest that someone with a defect who can't see and can't form sentences (e.g., Helen Keller) is unintelligent.
Helen
--- Tom McCabe [EMAIL PROTECTED] wrote:
--- Matt Mahoney [EMAIL PROTECTED] wrote:
I posted some comments on DIGG and looked at the
videos by Thiel and
Yudkowsky. I'm not sure I understand the push to
build AGI with private
donations when companies like Google are already
to dumb down a machine
just to duplicate human limitations.
If AGI is not the Turing test, then what is it? What test do you propose?
Without a definition, we should stop calling it AGI and focus on the problems
for which machines are still inferior to humans, such as language or vision.
-- Matt
--- Eugen Leitl [EMAIL PROTECTED] wrote:
On Tue, Apr 24, 2007 at 01:35:31PM -0700, Matt Mahoney wrote:
None, because we have not defined what AGI is.
AGI is like porn. I'll know it when I see it.
Not really. You recognize porn because you have seen examples of porn and
not-porn
from just following the way set by programmed
rules.
There is an algorithm. We just don't know what it is.
-- Matt Mahoney, [EMAIL PROTECTED]
approaching a black hole in free fall observes
nearby objects accelerating away in all directions.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Charles D Hixson [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Eugen Leitl [EMAIL PROTECTED] wrote:
...
A proton is a damn complex system. Don't see how you could equal it with one mere bit.
I don't. I am equating one bit with a volume of space about a
wholesale rearrangement of a large majority of the
matter in the solar system.
A technology this advanced could also reprogram your neurons to make you
believe whatever it wanted. There is no way you could detect this.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
*The entropy of the universe is of the order T^2 c^5 / (h G) ~ 10^122 bits, where T is the age of the universe, c is the speed of light, h is Planck's constant, and G is the gravitational constant. By coincidence
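Plugging rough values into that formula confirms the order of magnitude (a sketch with approximate SI constants, using the reduced Planck constant):

# Order-of-magnitude check of T^2 c^5 / (hbar G) ~ 10^122 bits.
import math

T = 13.8e9 * 3.156e7  # age of the universe in seconds (~13.8 Gyr)
c = 2.998e8           # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J*s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2

print(f"~10^{math.log10(T**2 * c**5 / (hbar * G)):.0f} bits")  # ~10^122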
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so
had a Turing machine, you still could not compute a solution
to AIXI. It is not computable, like the halting problem.
-- Matt Mahoney, [EMAIL PROTECTED]
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your mathematical
does not necessarily imply learning. There are
other approaches.
-- Matt Mahoney, [EMAIL PROTECTED]
would one test for this
belief?
-- Matt Mahoney, [EMAIL PROTECTED]
to at least be reassuring.
-- Matt Mahoney
-- Matt Mahoney, [EMAIL PROTECTED]