Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
John, You're making a massively important point, which I have been thinking about recently. I think it's more useful to say that AGI-ers are thinking in terms of building a *complete AGI system* (rather than person) which could range from a simple animal robot to fantasies of an all intelligent

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
sometimes make mistakes. Jim Bromer On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote: [BTW Sloman's quote is a month old] I think he means what I do - the end-problems that an AGI must face. Please name me one true AGI end-problem being dealt with by any AGI-er - apart

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
Dave, Re my first point there is no choice whatsoever - you (any serious creative) *have* to start by addressing the creative problem - in this case true AGI end-problems. You have to start, e.g., addressing the problem part of your would-be plane, the part that's going to give you take-off

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
or at work, are closely related to the mechanisms that also produce artistic forms of creativity. From: Jim Bromer Sent: Thursday, June 24, 2010 6:57 PM To: agi Subject: Re: [agi] The problem with AGI per Sloman On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
Mike, "start by addressing the creative problem" - this phrase doesn't mean anything to me. You haven't properly defined what you mean by creative to me. What do you think the true AGI end-problems are? Try not to use the word creative so much. There are possible algorithms that produce high level

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ian Parker
I think there is a great deal of confusion between these two objectives. When I wrote that if you had a car accident due to a fault in AI/AGI and Matt wrote back talking about downloads this was a case in point. I was assuming that you had a system which was intelligent but was *not* a download

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
can be creative. They just need to learn to think more creatively, and that is another one of your mistakes. --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/ Modify Your Subscription

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Fatmah
I suggest we form a team for this purpose... and I am willing to join. From: Mike Tintner tint...@blueyonder.co.uk To: agi agi@v2.listbox.com Sent: Thu, June 24, 2010 2:33:01 PM Subject: [agi] The problem with AGI per Sloman One of the problems of AI

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Mike Tintner
Matt: It is like the way evolution works, except that there is a human in the loop to make the process a little more intelligent. IOW this is like AGI, except that it's narrow AI. That's the whole point - you have to remove the human from the loop. In fact, it also sounds like a misconceived

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
Mike Tintner wrote: Matt: It is like the way evolution works, except that there is a human in the loop to make the process a little more intelligent. IOW this is like AGI, except that it's narrow AI. That's the whole point - you have to remove the human from the loop. In fact, it also

[agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
little good if you don't understand why it works that way. You would have to create a synthetic brain to take advantage of the knowledge, which is not an approach to AGI for many reasons. There are a million other ways, even better ways, to do it than the way the brain does it. Just because the brain

Re: [agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread Matt Mahoney
complicated of course. You are more likely to detect motion in objects that you recognize and expect to move, like people, animals, cars, etc. -- Matt Mahoney, matmaho...@yahoo.com From: David Jones davidher...@gmail.com To: agi agi@v2.listbox.com Sent: Mon

Re: [agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
...@yahoo.com -- *From:* David Jones davidher...@gmail.com *To:* agi agi@v2.listbox.com *Sent:* Mon, June 21, 2010 9:39:30 AM *Subject:* [agi] Re: High Frame Rates Reduce Uncertainty Ignoring Steve because we are simply going to have to agree to disagree... And I don't see

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread deepakjnath
that you recognize and expect to move, like people, animals, cars, etc. -- Matt Mahoney, matmaho...@yahoo.com From: David Jones davidher...@gmail.com To: agi agi@v2.listbox.com Sent: Mon, June 21, 2010 9:39:30 AM Subject: [agi] Re: High Frame Rates Reduce

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
are more likely to detect motion in objects that you recognize and expect to move, like people, animals, cars, etc. -- Matt Mahoney, matmaho...@yahoo.com From: David Jones davidher...@gmail.com To: agi agi@v2.listbox.com Sent: Mon, June 21

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Abram Demski
is indeed the case, then AGI and related efforts don't stand a snowball's chance in hell of ever outperforming humans, UNTIL the underlying network stability theory is well enough understood to perform perfectly to digital precision. This wouldn't necessarily have to address all aspects of intelligence

[agi] Fwd: AGI question

2010-06-21 Thread rob levy
-- and attempting to comprehend how these constellations of significance fit in with a larger picture of what we can reliably know about the natural world. I am secondarily motivated by the fact that (considerations of morality or amorality aside) AGI is inevitable, though it is far from being a foregone

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
(I'm a little late in this conversation. I tried to send this message the other day but I had my list membership configured wrong. -Rob) -- Forwarded message -- From: rob levy r.p.l...@gmail.com Date: Sun, Jun 20, 2010 at 5:48 PM Subject: Re: [agi] An alternative plan to discover

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
-Original Message- From: Steve Richfield [mailto:steve.richfi...@gmail.com] My underlying thought here is that we may all be working on the wrong problems. Instead of working on the particular analysis methods (AGI) or self-organization theory (NN), perhaps if someone found

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
having yet (or ever) reached perfection. Hence, evolution may have struck a balance, where less intelligence directly impairs survivability, and greater intelligence impairs network stability, and hence indirectly impairs survivability. If the above is indeed the case, then AGI and related

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Jim Bromer
have struck a balance, where less intelligence directly impairs survivability, and greater intelligence impairs network stability, and hence indirectly impairs survivability. If the above is indeed the case, then AGI and related efforts don't stand a snowball's chance in hell of ever

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
be working on the wrong problems. Instead of working on the particular analysis methods (AGI) or self-organization theory (NN), perhaps if someone found a solution to large-network stability, then THAT would show everyone the ways to their respective goals. For a distributed AGI

Re: [agi] Fwd: AGI question

2010-06-21 Thread Matt Mahoney
rob levy wrote: I am secondarily motivated by the fact that (considerations of morality or amorality aside) AGI is inevitable, though it is far from being a foregone conclusion that powerful general thinking machines will have a first-hand subjective relationship to a world, as living

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
) reaching perfection. Hence, evolution may have struck a balance, where less intelligence directly impairs survivability, and greater intelligence impairs network stability, and hence indirectly impairs survivability. If the above is indeed the case, then AGI and related efforts don't stand

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
years of evolution that created human intelligence? -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Mon, June 21, 2010 11:56:53 AM Subject: Re: [agi] An alternative plan to discover self-organization theory

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
(AGI) or self-organization theory (NN), perhaps if someone found a solution to large-network stability, then THAT would show everyone the ways to their respective goals. For a distributed AGI this is a fundamental problem. Difference is that a power grid is such a fixed network

[agi] Formulaic vs. Equation AGI

2010-06-21 Thread Steve Richfield
and our world is implementation detail. We do our part, and it does its part. I'm sure that there are Zen Buddhists out there who would just LOVE this yin-yang view of things. Any thoughts? Steve

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
of the oscillations that long ping times can introduce in people's (and intelligent bots') behavior. Again, this is basically the same 12 dB/octave phenomenon. Steve

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Mike Tintner
Steve: For example, based on ability to follow instruction, cats must be REALLY stupid. Either that or really smart. Who wants to obey some dumb human's instructions?

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Ian Parker
network stability, and hence indirectly impairs survivability. If the above is indeed the case, then AGI and related efforts don't stand a snowball's chance in hell of ever outperforming humans, UNTIL the underlying network stability theory is well enough understood to perform perfectly

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Ian Parker
:30, deepakjnath deepakjn...@gmail.com wrote: The brain does not get the high frame rate signals as the eye itself only gives the brain images at 24 frames per second. Else you wouldn't be able to watch a movie. Any comments?

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
but it seems from observation that intelligence/consciousness exhibits some sort of harmonic property, or levels. John

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
-- *From:* rob levy r.p.l...@gmail.com *To:* agi agi@v2.listbox.com *Sent:* Mon, June 21, 2010 11:56:53 AM *Subject:* Re: [agi] An alternative plan to discover self-organization theory (I'm a little late in this conversation. I tried to send this message the other day but I had my list

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread David Jones
problems? Lack of computing power. How much computation would you need to simulate the 3 billion years of evolution that created human intelligence? -- Matt Mahoney, matmaho...@yahoo.com -- *From:* rob levy r.p.l...@gmail.com *To:* agi agi@v2.listbox.com

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
of this? Yes, our repeated successes in simultaneously improving both the size and stability of very large scale networks (trade, postage, telegraph, electricity, road, telephone, Internet) serve as very nice existence proofs.

[agi] Read Fast, Trade Fast

2010-06-21 Thread Mike Tintner
http://www.zerohedge.com/article/fast-reading-computers-are-about-drink-your-trading-milkshake

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
of interconnections. serve as very nice existence proofs. I'm still looking. Steve

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
instability to their benefit?! Is this related to your harmonic thoughts? Thanks. Steve

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Mark Nuzzolilo
of the time axis in the formula results in a reduction of short-term memory loss and thus more resources for the brain to work with.

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Mark Nuzzolilo

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Michael Swan
Hi, * AGI should be scalable - More data just means the potential for more accurate results. * More data can chew up more computation time without a benefit. I.e., if all you want to do is identify a bird, it's still a bird at 1 fps and 1000 fps. * Don't aim for precision, aim for generality. E.g. AGI

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
familiar component which we could get at by study of the structure of natural information and knowledge. John

[agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
be simple enough to work, and simple enough that it just HAS to be tried. All thoughts, stones, and rotten fruit will be gratefully appreciated. Thanks in advance. Steve

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Jim Bromer
into how a programmer can design a test for self-organization. It is a subtle question. Jim Bromer

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Mike Tintner
Classic example of the crazy way AGI-ers think about AGI - divorced from any reality. Starting-point - NOT what's the problem? - what is this brain/thinking machine supposed to do? - what problems should it be dealing with?.. and how do we design a machine to deal with those problems

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Jim Bromer
On Sun, Jun 20, 2010 at 8:52 AM, Mike Tintner tint...@blueyonder.co.uk wrote: Classic example of the crazy way AGI-ers think about AGI - divorced from any reality. You must have missed the part where Steve said, No, I haven't been smokin' any wacky tobacy. Instead, I was having a long talk

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
Mike, There is a very fundamental flaw in your response, which I will explain. I suggest/request that you re-post while addressing the flawed issue: You presume that I (and/or Eddie) have ANY interest in creating an AGI. I don't, and I don't think that Eddie does. What Eddie and I are trying

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Steve Richfield
. It is a subtle question. I agree. Do you have any thoughts about how to go about this? Steve

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Jim Bromer
of possibilities that they could work with would be so great. Programmed computers are capable of appearing as if they were behaving in non-programmed ways because they are capable of learning through input-output in ways the AGI programmer could not anticipate and the possible combinations

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Mike Tintner
Steve, I'm not really interested in shooting at particular people, only in the grand principles here. And they still seem to apply despite your qualifications. What AGI problems - actual problems that actual animals or humans/ agents living in the real world have to deal with - is self

Re: [agi] An alternative plan to discover self-organization theory

2010-06-20 Thread Matt Mahoney
seconds. -- Matt Mahoney, matmaho...@yahoo.com From: Steve Richfield steve.richfi...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 20, 2010 2:06:55 AM Subject: [agi] An alternative plan to discover self-organization theory No, I haven't been smokin' any wacky

Re: [agi] Bayesian surprise attracts human attention

2009-01-15 Thread Bob Mottram

Re: [agi] Doubts raised over brain scan findings

2009-01-15 Thread Richard Loosemore

Re: [agi] Bayesian surprise attracts human attention

2009-01-15 Thread Richard Loosemore
that covaries with novelty is like shooting fish in a barrel. Of course, it's not like these are the only people making this kind of non-progress ;-) Richard Loosemore

Re: [agi] just a thought

2009-01-15 Thread David Clark
be as few as 10's of millions or so. I don't think this is a problem for AGI because, if you could create an AGI with about the level of intelligence of a single human, you could duplicate it quickly and exactly to as many individual computer systems as you desired. Humans have many ways

[agi] Paper: Voodoo Correlations in Social Neuroscience

2009-01-15 Thread Mark Waser
http://machineslikeus.com/news/paper-voodoo-correlations-social-neuroscience http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

Re: [agi] just a thought

2009-01-14 Thread Christopher Carr
, we have to replicate the capabilities of not one human mind, but a system of 10^10 minds. That is why my AGI proposal is so hideously expensive. http://www.mattmahoney.net/agi2.html Let's fire Matt and hire 10 chimps instead. Problems with IQ notwithstanding, I'm confident that, were

Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
to solve the engram problem, Richard is not a lone hero, but a part of the vast collective enterprise of science/scientists trying to understand the brain as a whole, and his eventual discovery will have to dovetail with others' efforts. So not just one AGI, Ben, a whole society of them. He's

Re: [agi] just a thought

2009-01-14 Thread Ronald C. Blue
the capabilities of not one human mind, but a system of 10^10 minds. That is why my AGI proposal is so hideously expensive. http://www.mattmahoney.net/agi2.html Now really expensive if quantum entanglement is in fact present in a hybrid of quantum circuits stored in carbon tetrachloride functioning

Re: [agi] just a thought

2009-01-14 Thread Bob Mottram
many human-made artifacts are like this.

Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
. That is why my AGI proposal is so hideously expensive. http://www.mattmahoney.net/agi2.html Now really expensive if quantum entanglement is in fact present in a hybrid of quantum circuits stored in carbon tetrachloride functioning as a capacitor. In principle 420 billion human minds or about

Re: [agi] just a thought

2009-01-14 Thread Matt Mahoney
? Please give me an IQ test that measures something that can't be done by n log n people (allowing for some organizational overhead). -- Matt Mahoney, matmaho...@yahoo.com

RE: [agi] just a thought

2009-01-14 Thread John G. Rose

[agi] Encouraging?

2009-01-14 Thread Mike Tintner
five or six years after they were [up and running].

Re: [agi] Encouraging?

2009-01-14 Thread Steve Richfield

[agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore
of the field. We've attacked from a different direction, but we had a wide range of targets to choose, believe me. The short version of the overall story is that neuroscience is out of control as far as overinflated claims go. Richard Loosemore

Re: [agi] just a thought

2009-01-14 Thread Valentina Poletti
Cool, this idea has already been applied successfully to some areas of AI, such as ant-colony algorithms and swarm intelligence algorithms. But I was thinking that it would be interesting to apply it at a high level. For example, consider that you create the best AGI agent you can come up

Re: [agi] just a thought

2009-01-14 Thread Mike Tintner
Chris: Problems with IQ notwithstanding, I'm confident that, were my silly IQ of 145 merely doubled... Chris/Matt: Hasn't anyone ever told you - it's not the size of it, it's what you do with it that counts?

Re: [agi] just a thought

2009-01-14 Thread Pei Wang
I guess something like this is in the plan of many, if not all, AGI projects. For NARS, see http://nars.wang.googlepages.com/wang.roadmap.pdf , under (4) Socialization in page 11. It is just that to attempt any non-trivial multi-agent experiment, the work in single agent needs to be mature enough

Re: [agi] just a thought

2009-01-14 Thread Joshua Cowan
and/or have other ideas for encouraging empathy (assuming you see empathy as a good goal)? From: Pei Wang mail.peiw...@gmail.com Reply-To: agi@v2.listbox.com To: agi@v2.listbox.com Subject: Re: [agi] just a thought Date: Wed, 14 Jan 2009 16:21:23 -0500 I guess something like this is in the plan

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
little to do with neuroscience. The field as a whole is hardly mortally afflicted with that problem (whether it's even real or not). If you look at any field large enough, there will be bad science. How is it relevant to study of AGI? -- Vladimir Nesov robot...@gmail.com http

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Ronald C. Blue
to study of AGI? Your child comes home and says they made a zero on the big test. A child says they made 80 on the test and failed, the reason: they missed 80 questions out of 100. A child says they had a grade of 98 right and the teacher gave them a B. The reason there were 110 questions

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Richard Loosemore
is it relevant to study of AGI? People here are sometimes interested in cognitive science matters, and some are interested in the concept of building an AGI by brain emulation. Neuroscience is relevant to that. Beyond that, this is just an FYI. I really do not care to put much effort

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
substantiation, mere 50 papers that got confused with statistics don't do it justice. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Mike Tintner
of) doesn't strike me as a v. big deal, since emotions are so vague anyway. If you have criticisms of the lack of correlation with more precise cognitive observations, like words or sights, that would be v. interesting.

[agi] Bayesian surprise attracts human attention

2009-01-14 Thread Ronald C. Blue
Bayesian surprise attracts human attention http://tinyurl.com/77p9xo

Re: [agi] Encouraging?

2009-01-14 Thread Kyle Kidd

Re: [agi] fuzzy-probabilistic logic again

2009-01-13 Thread Vladimir Nesov
to reason about. Next thing I'll work on is the planning module. That's where the AGI interacts with the environment. ... about why and how a given approach to reasoning is expected to be powerful. I think if PZ logic can express a great variety of uncertain phenomena, that's good enough

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/9 Ben Goertzel b...@goertzel.org: This is an attempt to articulate a virtual world infrastructure that will be adequate for the development of human-level AGI http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf goertzel.org seems to be down. So I can't refresh my memory of the paper

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Yes, I'm expecting the AI to make tools from blocks and beads No, i'm not attempting to make a detailed simulation of the human brain/body, just trying to use vaguely humanlike embodiment and high-level mind-architecture together with computer science algorithms, to achieve AGI On Tue, Jan 13

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
algorithms, to achieve AGI I wasn't suggesting you were/should. The comment about ones own changing body was simply one of the many examples of things that happen in the world that we have to try and cope with and adjust to, making our brains flexible and leading to development rather than

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
of the BlocksNBeadsWorld, and I think it's an acceptable one... ben

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Melting and boiling at least should be doable: assign every bead a temperature, and let solid interbead bonds turn liquid above a certain temperature and disappear completely above some higher temperature.
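
A minimal sketch of the phase rule described above, in Python; the Bond class, function name, and threshold values are illustrative assumptions, not part of Russell's message or the BlocksNBeadsWorld proposal:

    from dataclasses import dataclass

    MELT_TEMP = 50.0   # assumed melting threshold (arbitrary units)
    BOIL_TEMP = 100.0  # assumed boiling threshold

    @dataclass
    class Bond:
        state: str = "solid"  # "solid", "liquid", or "gone"

    def update_bond(bond: Bond, bead_temp: float) -> None:
        # Solid bonds turn liquid above the melting temperature and
        # disappear completely above the boiling temperature.
        if bead_temp > BOIL_TEMP:
            bond.state = "gone"
        elif bead_temp > MELT_TEMP:
            bond.state = "liquid"
        else:
            bond.state = "solid"

Heat flow between bonded beads would still need its own update rule each tick; the message does not specify one.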

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
temperature and disappear completely above some higher temperature.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
russell.wall...@gmail.com wrote: Melting and boiling at least should be doable: assign every bead a temperature, and let solid interbead bonds turn liquid above a certain temperature and disappear completely above some higher temperature.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/12 Ben Goertzel b...@goertzel.org: The problem with simulations that run slower than real time is that they aren't much good for running AIs interactively with humans... and for AGI we want the combination of social and physical interaction There's plenty you can do with real-time

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
input the AGI would have. E.g. you might specify that its vision system would consist of 2 pixelmaps (binocular vision) each 1000x1000 pixels, in three colours and 16 bits of intensity, updated 20 times per second. Of course, you may want to specify the visual system differently, but it's useful
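
The fixed sensory interface quoted here is concrete enough to sketch as a data structure. A minimal Python/NumPy illustration; the names are assumptions, and only the dimensions, colour channels, bit depth, and update rate come from the message:

    import numpy as np

    FRAME_RATE = 20        # updates per second
    WIDTH = HEIGHT = 1000  # each pixelmap is 1000x1000 pixels
    CHANNELS = 3           # three colours
    DTYPE = np.uint16      # 16 bits of intensity per channel

    def new_binocular_frame():
        # One observation: a pair of pixelmaps, left and right eye.
        left = np.zeros((HEIGHT, WIDTH, CHANNELS), dtype=DTYPE)
        right = np.zeros((HEIGHT, WIDTH, CHANNELS), dtype=DTYPE)
        return left, right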

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Actually, I view that as a matter for the AGI system, not the world. Different AGI systems hooked up to the same world may choose to receive different inputs from it Binocular vision, for instance, is not necessary in a virtual world, and some AGIs might want to use it whereas others don't

[agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Mike Tintner
. Nothing is easy about what you did - for either AI or AGI. And no one in AGI has ever attempted creative problems. Perhaps you can show me wrong. 1. THE CENTRAL ISSUE - I suggest, to put it v. v. broadly at first, is this: *are there general logical procedures that can tackle creative problems, esp

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney
My response to Ben's paper is to be cautious about drawing conclusions from simulated environments. Human level AGI has an algorithmic complexity of 10^9 bits (as estimated by Landauer). It is not possible to learn this much information from an environment that is less complex. If a baby AI did
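
As a back-of-the-envelope check on the figure cited here (an editorial aside, not part of the message), Landauer's 10^9 bits works out to roughly 125 megabytes:

    bits = 10**9                  # Landauer's estimate, as cited above
    megabytes = bits / 8 / 10**6
    print(megabytes)              # 125.0 -- about 125 MB of learned information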

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
paper is to be cautious about drawing conclusions from simulated environments. Human level AGI has an algorithmic complexity of 10^9 bits (as estimated by Landauer). It is not possible to learn this much information from an environment that is less complex. If a baby AI did perform well

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney

Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Jim Bromer
of houses and to pictures of flying, would have the ability to eventually draw a picture of a flying house (along with a lot of other creative efforts that you have not even thought of). But the thing is, that I can do this without using advanced AGI techniques! So, I must retain the recognition that I

Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Matt Mahoney
I think what Mike is saying is that I could draw what I think a flying house would look like, and you could look at my picture and say it was a flying house, even though neither of us has ever seen one. Therefore, AGI should be able to solve the same kind of problems, and why aren't we

Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Mike Tintner
- which is having an incomplete domain set, and an incomplete set of rules, proceed to construct something in an altogether new domain, and make up the rules as you go. That's the problem for - and whole challenge of - AGI. - You're kind of illustrating my central thesis of creative
