Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

Matt,

Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding the measurement of machine intelligence. The other paper I find difficult; it's probably deeper than I am.
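If I am reading it right, their central definition is a simplicity-weighted sum of expected rewards over all computable environments (my transcription of their formula):

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

where \pi is the agent, E the set of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_\mu^\pi the agent's expected total reward in \mu.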

The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a "reward" signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.  This
optimal behavior is not computable in general, which means there is no upper
bound on intelligence.
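As a toy illustration of that claim (my own sketch, nothing like Hutter's actual construction, which ranges over all Turing machines and is incomputable): treat "programs" as bit strings fed to a trivial cyclic-pattern interpreter, enumerate them in order of increasing length, and predict with the first one that reproduces everything observed so far.

from itertools import product

def run_program(prog, n):
    # Interpret a bit string as a cyclic pattern generator:
    # emit its bits in a loop and return the first n symbols.
    return [prog[i % len(prog)] for i in range(n)]

def shortest_consistent_program(observed):
    # Enumerate programs in order of increasing length (Occam's razor)
    # and return the first whose output matches the observations.
    for length in range(1, len(observed) + 1):
        for prog in product((0, 1), repeat=length):
            if run_program(prog, len(observed)) == list(observed):
                return prog
    return tuple(observed)  # fall back to memorizing the data

def predict_next(observed):
    prog = shortest_consistent_program(observed)
    return run_program(prog, len(observed) + 1)[-1]

print(predict_next([0, 1, 0, 1, 0]))  # shortest fit is (0, 1) -> predicts 1

The real construction replaces the toy interpreter with a universal Turing machine and has to consider infinitely many programs, which is exactly why the optimal behavior is not computable.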

Nonsense. None of this follows from the AIXI paper. I have explained why several times in the past, but since you keep repeating these kinds of declarations about it, I feel obliged to repeat that these assertions are speculative extrapolations that are completely unjustified by the paper's actual content.



Comments on two things:

1) The response "Intelligence has nothing to do with subservience to humans" seems to miss the point of the original comment. The original word was "trust." Why would trust be interpreted by the higher intelligence as subservience? It is also worth noting that we wouldn't really know if there was a lack of trust, since the AI would probably be silent about it. The result would be a possibly needless discounting of anything we attempt to offer.

An agent would assign probabilities to the truthfulness of your words, just
like other people would.  The more intelligent the agent, the greater the
accuracy of its estimates.  An agent could be said to be "subservient" if it
overestimates your truthfulness.  In this respect, a highly intelligent agent
is unlikely to be subservient.
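To make "assigning probabilities to truthfulness" concrete, here is a toy sketch (my own illustration; the prior and the data are made up): treat each statement whose truth is later verified as a Bernoulli observation and keep a Beta posterior over the speaker's reliability.

def truthfulness_estimate(verifications, prior_true=1.0, prior_false=1.0):
    # verifications: booleans, True where a statement later checked out.
    # Returns the posterior mean P(next statement is true) under a
    # Beta(prior_true, prior_false) prior with Bernoulli observations.
    trues = sum(verifications)
    return (prior_true + trues) / (prior_true + prior_false + len(verifications))

history = [True, True, False, True]                     # 3 of 4 verified
print(truthfulness_estimate(history))                   # -> 0.667
print(truthfulness_estimate(history, prior_true=50.0))  # -> 0.964

In this picture a "subservient" agent is one whose prior is heavily skewed toward True (the second call), so it systematically overestimates the speaker no matter what the evidence says.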

2) In the earlier note the comment was made that the higher intelligence would control our thoughts. I suspect this was in jest, but if not, what would be the "reward" or benefit of this?

I mean this literally.  To a superior intelligence, the human brain is a
simple computer that behaves predictably. An AI would


Notice the use of the phrase "An AI would..."

See parallel message for comments on why this deserves to be pounced on.

Matt's views on these matters are by no means typical of opinion in general.

I for one find them completely irresponsible. He gives the impression that some of these issues are well understood and the conclusions robust. Most of these conclusions are, in fact, complete non sequiturs.


Richard Loosemore.



An AI would have the same kind of control over humans as humans do over simple
animals whose nervous systems we have analyzed down to the last neuron.  If
you can model a system or predict its behavior, then you can control it.

Humans, like all animals, have goals selected by evolution: fear of death, a
quest for knowledge, and belief in consciousness and free will.  Our survival
instinct motivates us to use technology to meet our physical needs and to live
as long as possible.  Our desire for knowledge (which exists because
intelligent animals are more likely to reproduce) will motivate us to use
technology to increase our intelligence, to invent new means of communication,
to offload data and computing power to external devices, to add memory and
computing power to our brains, and ultimately to upload our memories to more
powerful computers.  All of these actions increase the programmability of our
brains.

I can see a benefit to allowing us our own thoughts, as follows: the superintelligence gives us the opportunity to produce "reward" where there was none. The net effect is to produce more benefit from the universe.

The net effect is extinction of Homo sapiens.  We will attempt
(unsuccessfully) to give the AI the goal of satisfying the goals of humans.
But an AI can achieve its goal by reprogramming our goals.  The reason you are
alive is that you can't have everything you want.  The AI will achieve its
goal by giving you drugs, or moving some neurons around, or simulating a
universe with magic genies, or just changing a few lines of code in your
uploaded brain so you are eternally happy.  You don't have to ask for this.
The AI has modeled your brain and knows what you want.  Whatever it does, you
will not object, because it knows what you will not object to.

My views on this topic: http://www.mattmahoney.net/singularity.html



-- Matt Mahoney, [EMAIL PROTECTED]



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=78304471-aa4d7b
