[agi] IBM building a brain.

2005-06-28 Thread deering
Henry Markram: "I believe the intelligence that is going to emerge if we succeed in doing that is going to be far more than we can even imagine." http://tinyurl.com/bawt2 http://bluebrainproject.epfl.ch/ http://tinyurl.com/8e8e8

Re: [agi] Hawkins founds AI company named Numenta

2005-03-24 Thread deering
http://www.forbes.com/technology/personaltech/2005/03/24/cz_qh_0324numenta.html Ben, this is good news, that someone with such mainstream computer business credentials is getting into the AI business. This can't but add legitimacy to the field, and if he makes any money at it many will

[agi] Unlimited intelligence.

2004-10-21 Thread deering
Computer chess programs are merely one example of many kinds of software that display human-level intelligence in a very narrow domain. The chess program on my desktop computer can beat me (but just barely); nevertheless, I consider myself more intelligent than it because I can do a lot of

Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread deering
Yes, we have instincts, drives built into our systems at a hardware level, beyond the ability to reprogram through merely a software upgrade. These drives (sex, pain/pleasure, food, air, security, social status, self-actualization) are not supergoals; they are reinforcers. Reinforcers give

Re: [agi] Singularity Institute's The SIAI Voice - August 2004

2004-08-26 Thread deering
r have too many links to your website. Mike Deering, General Editor, http://nano-catalog.com/ Director, http://www.singularityawareness.com/ Email: deering9 at mchsi dot com

Re: [agi] Kinds of minds: minimal-, modest-, huge-resource

2004-08-26 Thread deering
e. You can never have too many links to your website. Mike Deering, General Editor, http://nano-catalog.com/ Director, http://www.singularityawareness.com/ Email: deering9 at mchsi dot com

Re: [agi] Teaching AI's to self-modify

2004-07-05 Thread deering
at its code. The Novamente makes changes to its code, and reboots itself. The humans wonder what the hell is going on. Mike Deering.

Re: [agi] AGI's and emotions

2004-02-24 Thread deering
An unexpected mental event or an unplanned mental excursion does not in itself constitute an emotion. An epileptic seizure is not an emotion. Most emotions, perhaps all, are very predictable from causes. You win the lottery or the girl next door says "yes," and you are happy. Someone runs

Re: [agi] AGI's and emotions

2004-02-24 Thread deering
moderately emotional AI. (like us, undependable) 3. slightly emotional AI. (your supposition, possibly good) 4. non-emotional AI. (my choice, including simulated emotions for human interaction) Mike Deering.

Re: [agi] AGI's and emotions

2004-02-23 Thread deering
to bed. I'll sleep on it. Mike Deering.

Re: [agi] probability theory and the philosophy of science

2004-01-31 Thread deering
Ben, I get the impression from reading this article that it is very closely related to your work on Novamente. In trying to design a mind that is intelligent and useful, you have decided that the scientist comes closest as an example. So you are trying to figure out how the best scientists

Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread deering
e and wait and see. See you after the Singularity. Mike Deering.

Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering
'. Obviously our current social structure will need to make significant adjustments in the transition to the 'abundance economy'. The last stage is the integration of super-human AGI into government and decision making positions at the top of the societal control structures. Mike Deering

Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering
occur before the replacement of all workers, which involve mid and low-level decision making. Mike Deering.

Re: [agi] Dr. Turing, I presume?

2004-01-10 Thread deering
Ben, you are absolutely correct. It was my intention to exaggerate the situation a bit without actually crossing the line. But I don't think it is much of an exaggeration to say that a 'baby' Novamente, even with limited hardware and speed, is a tremendous event in the history of life on

Re: [agi] Dr. Turing, I presume?

2004-01-09 Thread deering
on this email list: What do you think some of the real world effects on society will be after the development of AGI? Mike Deering.

Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread deering
Ben, you haven't given us an update on how things are going with the Novamente A.I. engine lately. Is this because progress has been slow and there is nothing much to report, or you don't want to get people's hopes up while you are still so far from being done, or that you want to surprise

Re: [agi] request for feedback

2003-08-14 Thread deering
, fermions, atoms, galaxies, stars, planets, DNA, cells, organisms, societies, information, computers, AGI's, the Singularity, it's all inevitable because of conceptual necessity. Mike Deering, Director, http://www.SingularityActionGroup.com

Re: [agi] request for feedback

2003-08-14 Thread deering
e are here is not intelligent design or chance, but rather conceptual necessity. Well, if there is anything the SAG can do to help your AGI project, email me off-list at [EMAIL PROTECTED] Mike Deering, Director, http://www.SingularityActionGroup.com

Re: [agi] Request for invention of new word

2003-07-04 Thread Deering
AND ? AND a collective-level conscious theater? How the heck does that work? Does the collective mind have control over the individual minds? If not, in what way is it different from just another of the individual minds? Are the individual minds (Novamentes) components of an

[agi] Doubling-time watcher - March 2003.

2003-03-24 Thread Deering
application. Mike Deering, Director www.SingularityActionGroup.com

Re: [agi] swarm intellience

2003-02-28 Thread Mike Deering
Ants by Daniel Hoffman: Theirs is a perfection of pure form. Nobody but has his proper place and knows it. Everything they do is functional. Each foray in a zigzag line / Each prodigious lifting / Of thirty-two times their own weight / Each excavation into the earth's core / Each erection / Of a crumbly

[agi] doubling time watcher.

2003-02-18 Thread Mike Deering
Graphics OS: Windows XP (HOME) Speakers: Y Sound card: Y Ethernet: Y Modem: Y Software: WordPerfect, Quicken. Price: $399 I might get one of these for my wife so she will stay off mine. We are a poor one-computer family. Mike Deering. www.SingularityActionGroup.com --- new website.

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Mike Deering
for some other purpose that they have no idea will someday be used in AGI. There is nothing truly unique about the functional building blocks of AGI, just the overall architecture. Having gone way out on a limb here, all you AGI experts can now start sawing. Mike Deering

Re: [agi] doubling time revisted.

2003-02-17 Thread Mike Deering
it will be too late. Mike Deering. www.SingularityActionGroup.com --- new website.

Re: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)

2002-12-12 Thread Mike Deering
AI will look up at the simulated sky and scream, "I want to talk to whoever is in charge! And I want to know what the heck is going on!" Mike Deering.

Re: Games for AIs (Was: [agi] TLoZ: Link's Awakening.)

2002-12-12 Thread Mike Deering
strategy will mold the AI into a form that will be easier to relate to than a less biocentric approach. And your suggestion to transition the environment and the AI into the real world is a natural advantage of this approach. Mike Deering.