Kevin,

A belated congratulations on your phenomenal mimetic achievement: winning "Most Human Computer" in the 2002 Loebner Prize Contest with Ella.
Your winning indicates a certain level of understanding of the pursuit of AGI, not to mention your seriousness and commitment. But I guess that seriousness might have to be second-guessed, given your admission that your "The Next Wave" post was intended to be humorous. Not to worry... you may have contributed more with your 'funny' forward thinking than just a 'feebly frivolous failure.'

For starters, you bring up the important issue of human psychology in the creation process: what's the motivation for building an AGI? You are contemplating expending an enormous amount of thought, effort, and energy to create this so-called AGI, and if, at the end of the day, all you get is an artificial entity that is indifferent or even unfriendly toward humans, why do it? As a Sophist, I know well that a glib mountaineering response like "...because it was there" works well to explain the motivation for trying to understand how and why humans work cognitively and why they are the way they are; after all, the human is the mountain in that analogy. But there must be a reasonable motivation for engaging in a creative process that contemplates building something complex beyond ourselves, a so-called AGI. It seems to me that one reasonable motivation might be the desire to build an AGI that can outperform humans in one or more significant ways, solving complex problems in a complex environment. And what better arena to test the mettle of an AGI than the physical world?

From my training as an experimental physicist, I would suggest that your 'wish list' of programmed directives for testing the AGI's mettle (>> TIME TRAVEL <<, >> PARALLEL UNIVERSES <<, >> GENETIC ENGINEERING <<, >> ULTIMATE KNOWLEDGE <<) is as unlikely as it is interesting.
Unlikely for 'human scientists' given present theoretical structures and experimental approaches, but interesting for 'AGI scientists' given 'a new kind of science.' And the reference to 'a new kind of science' is, in fact, to Stephen Wolfram's most recent magnum opus of over 1,000 pages by the same name, "A New Kind of Science". For those unfamiliar with Wolfram or his work, Steve created Mathematica, the world's leading software system for technical computing and symbolic programming, and, among other things, spent the last 10 years studying complexity in everything from biology to physics via cellular automata, resulting in the book and its published thoughts and findings. Those findings seem rather startling for an 'AGI scientist.' They are captured in Wolfram's Principle of Computational Equivalence, paraphrased as:

1. All systems in nature follow computable rules. (strong AI)
2. All systems that reach the fundamental upper bound on their complexity, namely Turing's halting problem, are equivalent.
3. Almost all systems that are not obviously weak reach that bound and are thus equivalent to the halting problem.

Wolfram's Principle of Computational Equivalence suggests that theoretical, and perhaps even experimental, approaches to science, that is, attempts to formulate science in terms of traditional mathematics, fall short of capturing all the richness of the complex world. What is needed is 'a new kind of science,' and that new kind of science can be achieved through the use of algorithmic models and experimentation of the kind he studies.
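Wolfram's point, that very simple algorithmic rules can generate behavior rich enough to hit the computational upper bound, is easy to see firsthand. As a minimal sketch (my own illustration, not code from the book), here is elementary cellular automaton Rule 110, the rule Wolfram singles out as computationally universal: each cell's next state is a fixed function of its 3-cell neighborhood, and the rule number's binary digits are that function's lookup table.

```python
# Elementary cellular automaton, per Wolfram's scheme: a cell's next state
# depends only on (left neighbor, itself, right neighbor), a 3-bit value 0..7.
# The rule number's bits are the output table: bit k of the rule gives the
# next state for neighborhood k. Rule 110 is the famous universal one.

def step(cells, rule=110):
    """Apply one update of an elementary CA with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value in 0..7
        out.append((rule >> neighborhood) & 1)  # read that bit of the rule
    return out

def run(width=64, steps=30, rule=110):
    """Evolve from a single live cell; return the full history of rows."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running it prints the characteristic irregular triangle patterns; despite the 8-entry rule table, no traditional closed-form mathematics predicts the long-run pattern, which is exactly the gap the book argues a 'new kind of science' must fill.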
If you take Steve's "A New Kind of Science" at face value... and I believe Steve is well worth considering, since he is a very serious, intelligent scientist... you are left with some rather startling implications for an 'AGI scientist' that, at the most fundamental level, is built in silico and cognates digitally through algorithms. ...AGI design... hmm, I wonder what Steve is up to these days?

Ed

----- Original Message -----
From: "Kevin Copple" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 10, 2003 8:42 AM
Subject: [agi] The Next Wave

> It seems clear that AGI will be obtained in the foreseeable future. It also
> seems that it will be done with adequate safeguards against a runaway entity
> that will exterminate us humans. Likely it will remain under our control
> also.
>
> HOWEVER, this brings up another wave of issues we must debate. An AGI will
> naturally begin building and programming itself, and quickly develop
> abilities that our human minds cannot hope to achieve. We need a consensus
> on limits for humans using the AGI abilities, perhaps leading to some
> programmed directives for the AGI's. Here is my effort to start a list:
>
> >> TIME TRAVEL <<
>
> Likely the AGI will quickly learn how to travel through time. Should we
> develop rules of conduct in advance? Sure, it's tempting to think of giving
> folks like Usama bin Laden and Kim Jong Il visits in their youth from an
> agitated Baby Face Nelson, but where do the "adjustments" stop?
>
> >> PARALLEL UNIVERSES <<
>
> The AGI may allow passage to an infinite number of parallel universes, each
> slightly different than the next. Do we really want to go mucking about,
> changing things willy-nilly just for entertainment?
>
> >> GENETIC ENGINEERING <<
>
> The AGI will make genetic engineering and body adjustments a snap. But when
> we are all beautiful, strong, talented, and smart, are any of us? Can there
> be Yin without Yang?
> >> ULTIMATE KNOWLEDGE <<
>
> Our AGI will come to know everything. Every single flap of every butterfly
> wing in all of history. If it has emotions like ours, it may become rather
> depressed and realize that it is all pointless. Maybe we will understand
> and agree with the AGI's explanation. What happens then?
>
> While I shudder at the enormity of the responsibility, I am in the process
> of forming committees to address the challenges of each category. For those
> of you that feel the burden of the future upon your shoulders, please let me
> know which committees you feel compelled to serve on.
>
> Kevin Copple
>
> P.S. I also need a name for the website, the foundation, and a good slogan.
> Any suggestions?

-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
-------
