RE: [agi] just a thought

2009-01-14 Thread Matt Mahoney
expectancy (66 years). - Number of bits of recorded information. - Combined processing power of brains and computers in OPS. -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] Encouraging?

2009-01-14 Thread Matt Mahoney
. Modern Nobel prize winners are awarded for work done decades ago. How do you distinguish one genius from millions of cranks? You wait until the rest of society catches up in intelligence. -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Matt Mahoney
ays to find the best options to compress a file that normally takes 45 seconds. -- Matt Mahoney, matmaho...@yahoo.com From: Steve Richfield To: agi Sent: Sun, June 20, 2010 2:06:55 AM Subject: [agi] An alternative plan to discover self-organization theory No,

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
e ideas on how to solve it? Preferably something that takes less than 3 billion years on a planet sized molecular computer. -- Matt Mahoney, matmaho...@yahoo.com From: Mike Tintner To: agi Sent: Mon, June 21, 2010 7:59:29 AM Subject: Re: [agi] An alternative pla

Re: [agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread Matt Mahoney
very complicated of course. You are more likely to detect motion in objects that you recognize and expect to move, like people, animals, cars, etc. -- Matt Mahoney, matmaho...@yahoo.com From: David Jones To: agi Sent: Mon, June 21, 2010 9:39:30 AM Subject: [ag

Re: [agi] Fwd: AGI question

2010-06-21 Thread Matt Mahoney
killed, > assuming these entities could ultimately prevail over the previous forms of > life on our planet. What do you mean by "conscious"? If your brain were removed and replaced by a functionally equivalent computer that simulated your behavior (presumably a zombie), how w

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
u need to simulate the 3 billion years of evolution that created human intelligence? -- Matt Mahoney, matmaho...@yahoo.com From: rob levy To: agi Sent: Mon, June 21, 2010 11:56:53 AM Subject: Re: [agi] An alternative plan to discover self-organization theory (I

Re: [agi] Questions for an AGI

2010-06-24 Thread Matt Mahoney
een that other AGI, Mentifex. I never did trust it ;-) -- Matt Mahoney, matmaho...@yahoo.com

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Matt Mahoney
. But logically you know that your brain is just a machine, or else AGI would not be possible. > > > On Nov 4, 2007 1:15 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > > > > > Matt, > > > &g

Re: [agi] Questions

2007-11-05 Thread Matt Mahoney
r being > able to paint with one's toes. I guess the question is what purpose does challenging oneself play? How does climbing mountains or going to the moon help humans survive? Experimentation is an essential component of intelligence, so I believe it will survive in AGI. -- Matt M

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Matt Mahoney
rading our brains or uploading. But if consciousness does not exist, as logic tells us, then this outcome is no different than the other. -- Matt Mahoney, [EMAIL PROTECTED]

RE: [agi] definition source?

2007-11-07 Thread Matt Mahoney
ection > > there is. > > > > > > I think all of these boil down to a simple equation with just a few > variables. Anyone have it? It'd be nice if it included some sort of > computational complexity energy expression in it. Yes. Intelligence is

RE: [agi] definition source?

2007-11-07 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > > > > > > I think all of these boil down to a simple equation with just a few > > > variables. Anyone have it? It'd be nice if it inclu

Re: [agi] Connecting Compatible Mindsets

2007-11-07 Thread Matt Mahoney
puting power we need. - Big companies like Google and IBM (Blue Brain) with massive data sets and computing power are still doing basic research. - Really smart people like Minsky, Kurzweil, and Yudkowsky are not trying to actually build AGI. 1. A. Newell, H. A. Simon, "GPS: A Program that

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-11 Thread Matt Mahoney
rst iteration. > > But if consciousness does not exist... > > obviously, it does exist. Belief in consciousness exists. There is no test for the truth of this belief. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > We just need to control AGIs goal system. > > > > You can only control the goal system of the first iteration. > > > ..and you can

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > > > >> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > >>>> We just need to control AGIs goal

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > > > >> Matt Mahoney wrote: > >>> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote: > >>> > >>>

Re: [agi] Danger of getting what we want [from AGI]

2007-11-17 Thread Matt Mahoney
chnology, for example, uploading our brains > > into computers and reprogramming them. When a rat can stimulate its > nucleus > > accumbens by pressing a lever, it will forgo food, water, and sleep until > it > > dies. We worry about AGI destroying the world by launching

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
on how much pleasure or pain you can experience in a lifetime. In particular, if you consider t1 = birth, t2 = death, then K(dS) = 0. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Re[2]: [agi] Danger of getting what we want [from AGI]

2007-11-18 Thread Matt Mahoney
t is that AGI will quickly evolve to invisibility from a human-level intelligence. I say "human-level" because life will be so fundamentally different that we could no longer be called human, although any intelligence at our level won't be aware of the change. A dog is much close

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
lia is an illusion. I wrote autobliss to expose this illusion. > Good luck with this, I don't expect that any amount of logic will cause anyone to refute beliefs programmed into their DNA, myself included. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
rm utility. If an agent is rewarded for output y given input x, it must still experiment with output -y to see if it results in greater reward. Evolution rewards smart optimization processes. It explains why people climb mountains, create paintings, and build rockets. -- Matt Mahoney, [EMAIL PRO

Re: Re[8]: [agi] Funding AGI research

2007-11-21 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote: > Could you describe a piece of technology that simultaneously: > - Is required for AGI. > - Cannot be required part of any useful narrow AI. A one million CPU cluster. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Re[4]: [agi] Danger of getting what we want [from AGI]

2007-11-21 Thread Matt Mahoney
be available for humans as well. > So the gap won't be really that big. > > To visualize potential differences try to compare income of humans > with IQ 100 and humans with IQ 150. > The difference is not really that big. Try to visualize an Earth turned into computronium with

Re: Re[4]: [agi] Funding AGI research

2007-11-26 Thread Matt Mahoney
ecessary computing power. http://en.wikipedia.org/wiki/Storm_botnet -- Matt Mahoney, [EMAIL PROTECTED]

RE: Re[4]: [agi] Funding AGI research

2007-11-27 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > > > > What are the best current examples of (to any extent) self-building > > software > > > ? > > > > So far, most of the effort has b

Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Matt Mahoney
a 1 year old child) with only a crude model of semantics and no syntax. Memory is so tightly constrained (at 2 GB) that modeling at a higher level is mostly pointless. The slope of compression surface in speed/memory space is steep along the memory axis. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Re[4]: [agi] Funding AGI research

2007-11-27 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > > It amazes me that a crime of this scale can go on for a year and we are > > powerless to stop it either through law enforcement or technology. The > >

Re: Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-27 Thread Matt Mahoney
vent start working. Such a creature would be invisible, just as you are invisible to the bacteria in your gut. Such a creature might be simulating the universe you now observe. You would never know it exists if it has programmed your brain to refuse to accept its existence. -- Matt Mahoney,

Re: Re[6]: [agi] Danger of getting what we want [from AGI]

2007-11-28 Thread Matt Mahoney
--- "J. Andrew Rogers" <[EMAIL PROTECTED]> wrote: > > On Nov 27, 2007, at 7:21 PM, Matt Mahoney wrote: > > As a counterexample, evolution is already smarter than > > the human brain. It just takes more computing power. Evolution has > > figured

Re: [agi] Where are the women?

2007-11-30 Thread Matt Mahoney
--- BillK <[EMAIL PROTECTED]> wrote: > On Nov 30, 2007 2:37 PM, James Ratcliff wrote: > > More Women: > > > > Kokoro (image attached) > > > > > So that's what a women is! I wondered.. Wrong. http://www.youtube.com/watch?v=N7mZStNNN7g

Re: [agi] Lets count neurons

2007-11-30 Thread Matt Mahoney
he 1980's) confirms the basic architecture, in particular Hebb's rule, postulated in 1949 but not fully confirmed in animals even today. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Re[2]: [agi] Lets count neurons

2007-12-01 Thread Matt Mahoney
ed C/C++. There is an SSE2 version too. > Actual difference in size would be 10 times, since your matrix is only > 10% filled. For a 64K by 64K matrix, each pointer is 16 bits, or 1.6 bits per element. I think for neural networks of that size you could use 1 bit weights. -- Matt Mahoney,
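
As a rough check of the 1.6-bits-per-element figure above, here is a short sketch; the 10% fill and 16-bit column indexes come from the thread, everything else is illustrative.

```python
# Rough check: a 64K x 64K connection matrix, 10% filled, storing each
# nonzero entry as a 16-bit column index within its 64K-wide row.
fill = 0.10           # fraction of entries that are nonzero
bits_per_index = 16   # a column index in a 64K-wide row fits in 16 bits

print(f"{fill * bits_per_index:.1f} bits per potential element")          # 1.6
print(f"{fill * (bits_per_index + 1):.1f} with a 1-bit weight per link")  # 1.7
```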

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
semantics, then grammar, and then the problem solving. The whole point of using massive parallel computation is to do the hard part of the problem. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Matt Mahoney
nd a time stamp. I wrote my thesis on the question of whether such a system would scale to a large, unreliable network. (Short answer: yes). http://cs.fit.edu/~mmahoney/thesis.html Implementation detail: how to make a P2P client useful enough that people will want to install it? -- Ma

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
ghly parallel computer. 1. Gorrell, Genevieve (2006), “Generalized Hebbian Algorithm for Incremental Singular Value Decomposition in Natural Language Processing”, Proceedings of EACL 2006, Trento, Italy. http://www.aclweb.org/anthology-new/E/E06/E06-1013.pdf -- Matt Mahoney, [EMAIL PROTECTED] --
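
For readers unfamiliar with the cited approach: Gorrell's method extends the generalized Hebbian algorithm (Sanger's rule). Below is a minimal sketch of that underlying update; the array shapes, learning rate, function name, and example data are assumptions for illustration, not anything specified in the post.

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One generalized Hebbian (Sanger's rule) update step.

    W  : (k, d) array; row i is the current estimate of component i.
    x  : (d,) input vector.
    lr : learning rate (assumed value).

    Row i is pulled toward x after subtracting what rows 0..i already
    explain, so successive rows drift toward successive principal
    directions of the input stream.
    """
    y = W @ x  # (k,) projections of x onto the current components
    for i in range(W.shape[0]):
        explained = y[: i + 1] @ W[: i + 1]   # sum_{j<=i} y_j * W[j]
        W[i] += lr * y[i] * (x - explained)
    return W

# Usage sketch: stream anisotropic random vectors through the update.
rng = np.random.default_rng(0)
scale = np.array([3.0, 1.0, 0.5, 0.2, 0.1])
W = rng.normal(size=(2, 5)) * 0.1
for _ in range(1000):
    W = gha_step(W, rng.normal(size=5) * scale)
```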

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
lso relay O(log n) messages. If the communication protocol is natural language text, then I am pretty sure our existing networks can handle it. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
section of the SAT exams. See: Turney, P., Human Level Performance on Word Analogy Questions by Latent Relational Analysis (2004), National Research Council of Canada, http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote: > For example, I disagree with Matt's claim that AGI research needs > special hardware with massive computational capabilities. I don't claim you need special hardware. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney
"important" messages to propagate to a large number of nodes. All critically balanced complex systems are subject to rare but significant events, for example software (state changes and failures), evolution (population explosions, plagues, and mass extinctions), and gene regulatory netw

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney
--- Ed Porter <[EMAIL PROTECTED]> wrote: > >MATT MAHONEY=> My design would use most of the Internet (10^9 P2P > nodes). > ED PORTER=> That's ambitious. Easier said than done unless you have a > Google, Microsoft, or mass popular movement backing you.

Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-05 Thread Matt Mahoney
ent) if we could solve the distributed search problem. -- Matt Mahoney, [EMAIL PROTECTED]

Distrubuted message pool (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-05 Thread Matt Mahoney
--- "John G. Rose" <[EMAIL PROTECTED]> wrote: > > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > > My design would use most of the Internet (10^9 P2P nodes). Messages > > would be > > natural language text strings, making no distinction between docu

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
h it probably could be). The protocol requires that the message's originator and intermediate routers all be identified by a reply address and time stamp. It won't work otherwise. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
en if the peers behave properly. Malicious peers could forge headers, for example, to hide the true source of messages or to force replies to be directed to unintended targets. Some attacks could be very complex depending on the idiosyncratic behavior of particular peers. -- Matt Mahoney, [EM

Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
rts. The P2P protocol is natural language text. I will write up the proposal so it will make more sense than the current collection of posts. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
's vulnerability, but it doesn't stop people from using them. > > -Original Message- > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > Sent: Thursday, December 06, 2007 4:06 PM > To: agi@v2.listbox.com > Subject: RE: Distributed search (was RE: Hacker intel

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
should be a useful service at least in the short term before it destroys us. > > -Original Message- > From: Matt Mahoney [mailto:[EMAIL PROTECTED] > Sent: Thursday, December 06, 2007 6:17 PM > To: agi@v2.listbox.com > Subject: RE: Distributed search (was RE: Hacker intellig

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney
needs > > >>> special hardware with massive computational capabilities. > > > > > > > Could you give an example or two of the kind of problems that your AGI > > system(s) will need such massive capabilities to solve? It's so good - in > > fact, I would

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney
> But you claim that you need massive computational capabilities > [considerably above capabilities of regular modern PC], right? > That means "special". No, my proposal requires lots of regular PCs with regular network connections. It is a purely software approach. But more ha

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-07 Thread Matt Mahoney
er, rather than on what services the AGI should provide for us. > > =Jean-Paul > >>> On 2007/12/07 at 06:41, in message > <[EMAIL PROTECTED]>, Matt Mahoney > <[EMAIL PROTECTED]> wrote: > > I wrote up a quick description of my AGI proposal at >

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney
h massive capabilities to solve? It's so good - in > fact, I would argue, essential - to ground these discussions. For example, I ask the computer "who is this?" and attach a video clip from my security camera. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Worst case scenario

2007-12-10 Thread Matt Mahoney
html Too bad we don't know how much computing power is needed for AI. Without this knowledge, it will take us by surprise. -- Matt Mahoney, [EMAIL PROTECTED]

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-11 Thread Matt Mahoney
number of peers that accept it according to the peers' policies, which are set individually by their owners. The network provides an incentive for peers to produce useful information so that other peers will accept it. Thus, useful and truthful information is more likely to be propagated. -

Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Matt Mahoney
use Google's 10^6 CPU cluster and its database with 10^9 human contributors. -- Matt Mahoney, [EMAIL PROTECTED]

An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
a human could experience 10^9 bits according to cognitive models of long term memory. -- Matt Mahoney, [EMAIL PROTECTED]

RE: [agi] AGI and Deity

2007-12-11 Thread Matt Mahoney
What do you call the computer that simulates what you perceive to be the universe? -- Matt Mahoney, [EMAIL PROTECTED]

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
to what "pain" (or any other kind of subjective experience) > actually is. I would like to hear your definition of pain and/or negative reinforcement. Can you answer the question of whether a machine (say, an AGI or an uploaded human brain) can feel pain? -- Matt Mahoney, [EMAIL

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Monday 10 December 2007, Matt Mahoney wrote: > > The worst case scenario is that AI wipes out all life on earth, and > > then itself, although I believe at least the AI is likely to survive. > > http://lifeboat.com/ex/

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
wer. The message posting service I have proposed does not address friendliness at all. It should be benign as long as it can't reprogram the peers. I can't guarantee that won't happen because peers can be arbitrarily configured by their owners. -- Matt Mahoney, [EMAIL PROTECTED

Re: [agi] Worst case scenario

2007-12-11 Thread Matt Mahoney
--- Bryan Bishop <[EMAIL PROTECTED]> wrote: > On Tuesday 11 December 2007, Matt Mahoney wrote: > > --- Bryan Bishop <[EMAIL PROTECTED]> wrote: > > > Re: how much computing power is needed for ai. My worst-case > > > scenario accounts for nea

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > >> I have to say that this is only one interpretation of what it would mean > >> for an AGI to experience something, and I for one

Re: Re[2]: [agi] CyberLover passing Turing Test

2007-12-12 Thread Matt Mahoney
ing Test involved fooling/convincing judges, not > > clueless men hoping to get some action? > > In my taste, testing with clueless judges is more appropriate > approach. It makes test less biased. To be a valid Turing test, the judges must know that with 50% a-priori probability the

Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Matt Mahoney
wish. I make no claims about the morality of inflicting pain on animals or programs. Morality is an evolved cultural belief. We believe in compassion to other humans because tribes that practiced this belief (toward their own members) were more successful than those that didn't. Likewise,

Re: [agi] The Function of Emotions is Torture

2007-12-13 Thread Matt Mahoney
in pencil and paper, transistors, or neurons. -- Matt Mahoney, [EMAIL PROTECTED]

Is superhuman intelligence possible? (was Re: [agi] AGI and Deity)

2007-12-19 Thread Matt Mahoney
ter to have superhuman intelligence. If you define intelligence as passing the Turing test, then I agree that you could not have a computer much smarter than human. But I don't define intelligence that way. A superhuman intelligence will be invisible, because it will have complete control over

Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
services, charisma, deceit, or extortion, and at other methods we haven't even thought of yet. > Beliefs also operate in the models. I can imagine an intelligent > machine choosing not to trust humans. Is this intelligent? Yes. Intelligence has nothing to do with subservience to humans

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
. You don't have to ask for this. The AI has modeled your brain and knows what you want. Whatever it does, you will not object because it knows what you will not object to. My views on this topic. http://www.mattmahoney.net/singularity.html -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] NL interface

2007-12-21 Thread Matt Mahoney
> YKY What is the goal of your system? What application? -- Matt Mahoney, [EMAIL PROTECTED]

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Stan Nilsen <[EMAIL PROTECTED]> wrote: > > > >> Matt, > >> > >> Thanks for the links sent earlier. I especially like the paper by Legg > >> and H

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
t" and > "intelligence". As such, his conclusions were bankrupt. > > Having pointed this out for the benefit of others who may have been > overly impressed by the Hutter paper, just because it looked like > impressive maths, I have no interest in discussing this

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Dec 21, 2007 6:56 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > > > Still more nonsense: as I have pointed out before, Hutter's implied > >

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
e set of complex tasks in complex environments faster and better > than humans, such as ... So if we can't agree on what intelligence is (in a non human context), then how can we argue if it is possible? My calculator can add numbers faster than I can. Is it intelligent? Is Google inte

Re: [agi] What's the diff. between a simulation and a copy?

2007-12-30 Thread Matt Mahoney
are exponential, or 2^(10^122) steps. So we approximate. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] NL interface

2007-12-30 Thread Matt Mahoney
--- "YKY (Yan King Yin)" <[EMAIL PROTECTED]> wrote: > On Dec 21, 2007 11:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > What is the goal of your system. What application? > Sorry about the delay, and Merry Xmas =) > > The goal is to provide an easy

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Matt Mahoney
oided that problem if you learned the meanings first, before learning the grammar. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] Logical Satisfiability

2008-01-13 Thread Matt Mahoney
igotry. > > > > > If > I am wrong then I just made a > mistake. So what? What's the big > deal? I am not trying to mislead > or hurt anyone and making errors seems > to be a necessary part of human > existence. > > > > Right > now I am co

Re: [agi] Logical Satisfiability

2008-01-18 Thread Matt Mahoney
believed that P!=NP because a lot of people have tried and failed to do this. However, it is not proven that P!=NP. The Clay Institute has offered a $1 million prize for a proof either way. A partial list of problems can be found here: http://en.wikipedia.org/wiki/List_of_NP-complete_problems Good

Re: [agi] Logical Satisfiability

2008-01-18 Thread Matt Mahoney
if the answer is yes. Verifiability on a deterministic Turing machine is equivalent to solving the decision problem on a nondeterministic machine, but I think a little easier to understand. -- Matt Mahoney, [EMAIL PROTECTED]
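
To make the verification point concrete, here is a minimal polynomial-time verifier for SAT; the CNF encoding (clauses as lists of signed integers) and the example formula are assumptions chosen only for illustration.

```python
def verify_sat(clauses, assignment):
    """Check a claimed satisfying assignment in polynomial time.

    clauses    : list of clauses, each a list of nonzero ints;
                 literal k means variable k, -k means its negation.
    assignment : dict mapping variable number -> bool.
    Verification is linear in the formula size, even though finding
    a satisfying assignment is NP-complete.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) is satisfied by x1=True, x2=True, x3=False.
print(verify_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True
```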

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Matt Mahoney
tml I discuss how a singularity will end the human race, but without judgment whether this is good or bad. Any such judgment is based on emotion. Posthuman emotions will be programmable. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Matt Mahoney
--- Mike Dougherty <[EMAIL PROTECTED]> wrote: > On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote: > > > http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?current

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney
ut reference to hardcoded goals, such as fear of death. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] SAT, SMT and AGI

2008-01-21 Thread Matt Mahoney
ty distribution over string s can be expressed as a product of conditional predictions of consecutive symbols in s. If you know that I am for or against X then you have one bit of knowledge. A data compressor knowing this can compress a message from me about X one bit smaller than a compressor wi
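
A small worked illustration of the chain-rule decomposition and the one-bit saving claimed above; the specific probabilities are assumptions chosen only to make the arithmetic visible.

```python
import math

# Ideal code length of a message m is -log2 P(m), where by the chain rule
# P(m) = prod_i P(m_i | m_1..i-1).
# If the compressor also knows a fact that was a 50/50 surprise beforehand
# (e.g. whether the author is for or against X), the probability it can
# assign to a message consistent with that fact doubles, saving one bit.
p_uninformed = 0.0005          # assumed P(m) without knowing the stance
p_informed = 2 * p_uninformed  # knowing the 1-bit fact doubles P(m)

saving = -math.log2(p_uninformed) - (-math.log2(p_informed))
print(saving)   # 1.0 bit saved
```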

Re: [agi] SAT, SMT and AGI

2008-01-21 Thread Matt Mahoney
= PROD_i > > P(s_i|s_1..i-1), that any probability distribution over string s can be > > expressed as a product of conditional predictions of consecutive symbols > in s. > > If you know that I am for or against X then you have one bit of > knowledge. A > > data compressor k

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney
es. It could be a Dyson sphere with atomic level computing elements. It may or may not have a copy of your memories. It won't always be happy, because happiness is not fitness. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] CEMI Field

2008-01-23 Thread Matt Mahoney
o explain it, like Penrose's quantum gravity. A better explanation would be that evolution selects for animals whose behavior is consistent with the belief in qualia. -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] CEMI Field

2008-01-23 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Jan 23, 2008 11:55 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > This is another example of starting with the false assumption that > > consciousness (or qualia) exists, and then deriving bizarre theories

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Matt Mahoney
ffer evolution. There is good evidence that every living thing evolved from a single organism: all DNA is twisted in the same direction. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > --- Richard Loosemore <[EMAIL PROTECTED]> wrote: > >> The problem with the scenarios that people imagine (many of which are > >> Nightmare Scenarios) is that the vast majority of

Re: [agi] CEMI Field

2008-01-24 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Jan 24, 2008 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > Just about all humans claim to have an awareness of sensations, thoughts, > and > > feelings, and control over decisions they make, what

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Because recursive self improvement is a competitive evolutionary process > even > > if all agents have a common ancestor. > > As explained in parallel post: this is a non-sequiteur. OK, cons

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
e from the internet and analyzes it for vulnerabilities, finding several. As instructed, it writes a virus, a modified copy of itself running on the infected system. Due to a bug, it continues spreading. Oops... Hard takeoff. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
I believe something like it WILL be built, probably ad-hoc and very complex, because it has economic value. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
does. http://cs.fit.edu/~mmahoney/thesis.html -- Matt Mahoney, [EMAIL PROTECTED]

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
xceed individual brains in intelligence. They can't yet, but they will. Google already knows more than any human, and can retrieve the information faster, but it can't launch a singularity. When your computer can write and debug software faster and more accurately than you can, then you

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > Matt Mahoney wrote: > > Maybe you can > > program it with a moral code, so it won't write malicious code. But the > two > > sides of the security problem require almost identical skills. Suppose > you >

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Jan 27, 2008 5:32 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > Software correctness is undecidable -- the halting problem reduces to it. > > Computer security isn't going to be magically solved by AGI

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Matt Mahoney
they are motivated by greed, so attacks remain hidden while stealing personal information and computing resources. Acquiring resources is the fitness function for competing, recursively self improving AGI, so it is sure to play a role. -- Matt Mahoney, [EMAIL PROTECTED]

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Jan 28, 2008 4:53 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > Consider the following subset of possible requirements: the program is > > > correct > > > > if and only if it halts. > &
