[agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-14 Thread Jef Allbright
This may be of interest to the group. http://video.google.com/videoplay?docid=-112735133685472483 This presentation is about a potential shortcut to artificial intelligence by trading mind-design for world-design using artificial evolution. Evolutionary algorithms are a pump for turning CPU
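As a rough illustration of the "pump" being described, here is a minimal evolutionary loop in Python; the bit-string genome, fitness function, and rates are illustrative assumptions, not Polyworld's actual design.

    import random

    def fitness(genome):
        # Illustrative stand-in for a world-derived score: reward genomes with many 1s.
        return sum(genome)

    def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]                     # selection
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(genome_len)              # one-point crossover
                child = [1 - g if random.random() < mutation_rate else g
                         for g in a[:cut] + b[cut:]]            # mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print(fitness(evolve()))

The point of the talk is that the designer's effort goes into the world (here, the fitness function), and CPU time does the rest.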

Re: [agi] Holonomics

2007-11-12 Thread Jef Allbright
On 11/12/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: I read it more as if it were a very highbrow sort of poetry ;-) Same here. At first I was disappointed and irritated by the lack of meaningful content (or was it all content, but lacking form...?) and subsequent waste of time. Then I

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Jef Allbright
On 11/12/07, Linas Vepstas [EMAIL PROTECTED] wrote: I see a human, better give him wide berth. Certainly, the ability to detect and deal with pedestrians will be required before these things become street-legal. Well, I think we'll see robotic vehicles first play a significant role in war

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Jef Allbright
On 11/12/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote: On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote: I can easily imagine that next year's grand challenge, or the one thereafter, will explicitly require the ability to deal with cyclists, motorcyclists, pedestrians, children

Re: [agi] Holonomics

2007-11-12 Thread Jef Allbright
On 11/12/07, John G. Rose [EMAIL PROTECTED] wrote: From: Jef Allbright [mailto:[EMAIL PROTECTED] On a more practical note, intelligence is not so much about making connections, but about the selective pruning (or equivalently, weighting) of connections. I found this sizable document

Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Jef Allbright
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote: Ben said -- the possibility of dramatic, rapid, shocking success in robotics is LOWER than in cognition That's why I tell people the value of manual labor will not be impacted as soon by the AGI revolution as the value of mind labor.

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote: My impression is that the cognitive performance of mice is vastly superior to that of current robot cars. I don't see how they could be considered even remotely comparable. But perhaps I have misjudged. Has anyone attempted to itemize

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Edward W. Porter [EMAIL PROTECTED] wrote: There is a small, but increasing number of people who pretty much understand how to build artificial brains as powerful as that of humans, not 100% but probably at least 90% at an architectural level. Being 90% certain of where to get on

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Edward W. Porter [EMAIL PROTECTED] wrote: Ben Goertzel and his Novamente are the best architect/architecture I know of. I had independently come up with a similar approach myself (I could have written 80-85% of that summary of Novamente's architecture in my recent post before I read about

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Jef Allbright
On 11/10/07, Bob Mottram [EMAIL PROTECTED] wrote: On 10/11/2007, Jef Allbright [EMAIL PROTECTED] wrote: At the DARPA Urban Challenge last weekend, the optimism and flush of rapid growth was palpable... I was saying to someone recently that it's hard to watch something like the recent

Re: [agi] Using Google Sets for common sense in computer vision

2007-11-10 Thread Jef Allbright
On 11/10/07, Neil H. [EMAIL PROTECTED] wrote: The research is still quite early, but could Google Sets also be useful for more general AI tasks? Only to the extent that simple first-order association by textual proximity is useful, which is to say, only slightly. Others have performed deeper
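A toy version of what "first-order association by textual proximity" amounts to (the corpus and sentence window below are made-up examples): count which terms co-occur and rank the counts.

    from collections import Counter
    from itertools import combinations

    corpus = [
        "the cat sat on the mat",
        "a dog chased the cat",
        "the dog sat by the door",
    ]

    cooc = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        for a, b in combinations(sorted(words), 2):
            cooc[(a, b)] += 1              # symmetric co-occurrence count

    def associates(term, k=3):
        # Rank terms by how often they share a sentence with `term`.
        scores = Counter()
        for (a, b), n in cooc.items():
            if term in (a, b):
                scores[b if a == term else a] += n
        return scores.most_common(k)

    print(associates("cat"))

Such counts capture surface proximity only, which is why they would help a vision system only slightly.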

Re: [agi] How valuable is Solomonoff Induction for real world AGI?

2007-11-08 Thread Jef Allbright
I recently found this paper to contain some thinking worthwhile to the considerations in this thread. http://lcsd05.cs.tamu.edu/papers/veldhuizen.pdf - Jef - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to:

Re: [agi] How valuable is Solomonoff Induction for real world AGI?

2007-11-08 Thread Jef Allbright
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote: Jeff, In your below flame you spent much more energy conveying contempt than knowledge. I'll readily apologize again for the ineffectiveness of my presentation, but I meant no contempt. Since I don't have time to respond to all of your

Re: [agi] How valuable is Solomonoff Induction for real world AGI?

2007-11-08 Thread Jef Allbright
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote: In my attempt to respond quickly I did not intend to attack him or his paper Edward - I never thought you were attacking me. I certainly did attack some of your statements, but I never attacked you. It's not my paper, just one that I

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: A quick question for Richard and others -- Should adults be allowed to drink, do drugs, wirehead themselves to death? A correct response is That depends. Any should question involves consideration of the pragmatics of the system, while

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote: On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: You misunderstood me -- when I said robustness of the goal system, I meant the contents and integrity of the goal system, not the particular implementation. I meant that too - and I didn't

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Effective deciding of these should questions has two major elements: (1) understanding of the evaluation-function of the assessors with respect to these specified ends, and (2) understanding of principles (of nature) supporting increasingly

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote: On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: Argh! Goal system and Friendliness are roughly the same sort of confusion. They are each modelable only within a ***specified***, encompassing context. In more coherent, modelable

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote: Do you really think you can show an example of a true moral universal? Thou shalt not destroy the universe. Thou shalt not kill every living and/or sentient being including yourself. Thou shalt not kill every living and/or sentient being except

Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote: I'm not going to cheerfully right you off now, but feel free to have the last word. Of course I meant cheerfully write you off or ignore you. - Jef - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them becoming unfriendly would be similar to the likelihood of the molecules of an

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 10/1/07, Richard Loosemore [EMAIL PROTECTED] wrote: Jef Allbright wrote: On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 10/1/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote: Right, now consider the nature of the design I propose: the motivational system never has an opportunity for a point failure: everything that happens is

Re: [agi] Religion-free technical content

2007-09-30 Thread Jef Allbright
On 9/30/07, Kaj Sotala [EMAIL PROTECTED] wrote: Quoting Eliezer: ... Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote: I think a system can get arbitrarily complex without being conscious -- consciousness is a specific kind of model-based, summarizing, self-monitoring architecture. Yes. That is a good clarification of what I meant rather than what I said.

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote: Isn't it indisputable that agency is necessarily on behalf of some perceived entity (a self) and that assessment of the morality of any decision is always only relative to a subjective model of rightness? I'm not sure that I should dive into

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote: I do think its a misuse of agency to ascribe moral agency to what is effectively only a tool. Even a human, operating under duress, i.e. as a tool for another, should be considered as having diminished or no moral agency, in my opinion. So,

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote: I would not claim that agency requires consciousness; it is necessary only that an agent acts on its environment so as to minimize the difference between the external environment and its internal model of the preferred environment OK. Moral

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote: Decisions are seen as increasingly moral to the extent that they enact principles assessed as promoting an increasing context of increasingly coherent values over increasing scope of consequences. Or another question . . . . if I'm analyzing

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-20 Thread Jef Allbright
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Personally, I find many of his posts highly entertaining... If your sense of humor differs, you can always use the DEL key ;-) -- Ben G I initially found it sad and disturbing, no, disturbed. Thanks to Mark I was able to see the humor

Re: [agi] Low I.Q. AGI

2007-04-15 Thread Jef Allbright
On 4/15/07, Pei Wang [EMAIL PROTECTED] wrote: I actually agree with most of what Richard and Ben said, that is, we can create AI that is more intelligent, in some sense, than human beings --- that is also what I've been working on. However, to me Singularity is a stronger claim than superhuman

Re: [agi] Glocal knowledge representation?

2007-03-26 Thread Jef Allbright
On 3/25/07, Ben Goertzel [EMAIL PROTECTED] wrote: Hi, Does anyone know if the term glocal (meaning global/local) has previously been used in the context of AI knowledge representation? While not recognized as a formal term of knowledge representation, glocal has strong connotations of think

Re: [agi] Glocal knowledge representation?

2007-03-26 Thread Jef Allbright
On 3/26/07, Ben Goertzel [EMAIL PROTECTED] wrote: Yes, Google reveals that the term glocal has been used a few times in the context of social activism. While popularized by social activists, particularly with regard to ecological concerns, the fairly deep principle I had in mind goes somewhat

Re: [agi] AGI interests

2007-03-26 Thread Jef Allbright
On 3/26/07, DEREK ZAHN [EMAIL PROTECTED] wrote: David Clark writes: Everyone on this list is quite different. It would be interesting to see what basic interests and views the members of this list hold. For a few people, published works answer this pretty clearly but that's not true for most

Re: [agi] Project Halo [Was: DARPA Ends Brain Reverse Engineering Project]

2007-03-20 Thread Jef Allbright
On 3/20/07, Pei Wang [EMAIL PROTECTED] wrote: I wonder if Jef, or anyone else here, knows what has happened to Project Halo, the Digital Aristotle. The project website (http://www.projecthalo.com/) hasn't been updated for three years. I think Danny Hillis became consumed with FreeBasing. ;-)

Re: [agi] Project Halo [Was: DARPA Ends Brain Reverse Engineering Project]

2007-03-20 Thread Jef Allbright
On 3/20/07, Pei Wang [EMAIL PROTECTED] wrote: Was Hillis involved with Halo? I only saw him listed as one of the inspirations. My mistake, I was working from memory and made a false association... Here's all I can find as to the latest status: Three teams, Team SRI International, Team

[agi] DARPA Ends Brain Reverse Engineering Project

2007-03-16 Thread Jef Allbright
FYI, - Jef An article in the New Jersey Star-Ledger (http://www.nj.com/news/ledger/index.ssf?/base/news-11/1173937313282210.xmlcoll=1) says DARPA has quietly killed their project to reverse engineer the human brain. The project, known as Biologically Inspired Cognitive Architectures

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote: On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: If I understand Minsky's Society of Mind, the basic idea is to have the tools be such that you can build your deck by first pointing at the saw and saying you do your thing and then

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
, Free-Will, Morality, Rationality, Justice and on to effective social decision-making. - Jef --- On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote: On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote: On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote: On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote: Thanks for the clarification. You can surely call it high-level functional description, but what I mean is that it is not an ordinary high-level functional description, but a concrete expectation

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote: On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote: We seem to have skipped over my point about intelligence being about the encoding of regularities of effective interaction of an agent with its environment, but perhaps that is now moot. Now I see

RE: [agi] Priors and indefinite probabilities

2007-02-14 Thread Jef Allbright
Chuckling that this is still going on, and top posting based on Ben's prior example... Cox's proof is all well and good, but I think gts still misses the point: The principle of indifference is still the *best* one can do under conditions of total ignorance. Any other distribution would imply
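A quick way to see why indifference is the best one can do under total ignorance (my own illustration, not from the thread): over n mutually exclusive outcomes with no other constraints, the uniform assignment p_i = 1/n maximizes Shannon entropy, so any other distribution implies information the reasoner does not have.

    import math

    def entropy(p):
        # Shannon entropy in bits of a discrete distribution.
        return -sum(x * math.log2(x) for x in p if x > 0)

    n = 4
    uniform = [1 / n] * n
    skewed = [0.7, 0.1, 0.1, 0.1]    # any non-uniform choice encodes extra information

    print(entropy(uniform))   # 2.0 bits, the maximum for 4 outcomes
    print(entropy(skewed))    # about 1.36 bits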

RE: [agi] Betting and multiple-component truth values

2007-02-10 Thread Jef Allbright
gts wrote: I'm not expecting essentially perfect coherency in AGI. I understand perfection is out of reach. My question to you was whether, as a professed C++ developer, you are familiar with the well-known impracticality of certifying a non-trivial software product to be essentially free of

RE: [agi] Betting and multiple-component truth values

2007-02-10 Thread Jef Allbright
gts wrote: This same concept of coherence is the basis of the axioms of probability... Yes. ... and the principle of indifference. No. Understand this underlying concept and you may understand the others. I understand it, Jef. But do you? The principle of indifference is

RE: [agi] Correction: Betting and multiple-component truth values

2007-02-10 Thread Jef Allbright
Correction: Needed to add [the idea that] below. - Jef gts wrote: This same concept of coherence is the basis of the axioms of probability... Yes. ... and the principle of indifference. No. Understand this underlying concept and you may understand the others. I

RE: [agi] Betting and multiple-component truth values

2007-02-10 Thread Jef Allbright
gts wrote: [Jef wrote:] That's like saying you have no use for [the idea that] a balance scale reads zero when both pans are empty. Your beef is not just with me; it is with Bruno De Finetti and Frank P. Ramsey and their modern followers in the subjectivist school of probability theory,

RE: [agi] Betting and multiple-component truth values

2007-02-09 Thread Jef Allbright
gts wrote: Well, although I am not an AI developer, I am a C++ application developer and I know I or any reasonably skilled developer could write task-specific applications that would be extremely coherent in the De Finetti sense (applicable to making probabilistic judgements in

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
Pei Wang wrote: ... in this example, there are arguments supporting the rationality of humans, that is, even if two betting cases correspond to the same expected utility, there are reasons for them to be treated differently in decision making, because the probability in one betting is

RE: [agi] Consistency: Values versus goals

2007-02-06 Thread Jef Allbright
Ben wrote: Well, in fact, Novamente is **not** constrained from having Dutch books made against it, because it is not a perfectly consistent probabilistic reasoner. It seeks to maintain probabilistic consistency, but balances this with other virtues... This is really a necessary
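For readers unfamiliar with the term, a Dutch book can be shown with a few lines of arithmetic (the numbers are mine, not Novamente's): if an agent's probabilities over exclusive, exhaustive outcomes sum to more than 1, it will pay more for a set of bets than any outcome can return.

    # Incoherent probabilities for three exclusive, exhaustive outcomes.
    p = {"A": 0.5, "B": 0.4, "C": 0.3}           # sums to 1.2

    # The agent values a bet paying 1 on outcome X at p[X], so it buys all three.
    total_paid = sum(p.values())                  # 1.2
    for outcome in p:
        winnings = 1.0                            # exactly one bet pays off
        print(f"if {outcome} occurs: net = {winnings - total_paid:+.2f}")
    # Net is -0.20 whichever outcome occurs: a guaranteed loss.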

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
gts wrote: I understand the resources problem, but to be coherent a probabilistic reasoner need only be constrained in very simple ways, for example from assigning a higher probability to statement 2 than to statement 1 when statement 2 is contingent on statement 1. Is such basic
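A minimal sketch of the kind of constraint gts describes (statement names and numbers are hypothetical): when statement 2 can only be true if statement 1 is, coherence requires P(2) <= P(1), which is trivial to check.

    def coherent(p1, p2):
        # Statement 2 is contingent on statement 1, so P(2) must not exceed P(1).
        return p2 <= p1

    p_rain = 0.30              # statement 1: it rains tomorrow
    p_rain_and_wind = 0.45     # statement 2: it rains AND is windy tomorrow

    if not coherent(p_rain, p_rain_and_wind):
        print("incoherent: P(rain and wind) > P(rain)")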

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
gts wrote: On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED] wrote: - You would have to assume that statement 2 is *entirely* contingent on statement 1. - I

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
Ah, the importance of semantic precision (still context-dependent, of course). ;-) - Jef -Original Message- From: gts [mailto:[EMAIL PROTECTED] Sent: Tuesday, February 06, 2007 2:41 PM To: agi@v2.listbox.com Subject: Re: [agi] Betting and multiple-component truth values My last

[agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Jef Allbright
Ben Goertzel wrote: The relationship between rationality and goals is fairly subtle, and something I have been thinking about recently Ben, as you know, I admire and appreciate your thinking but have always perceived an inside-outness with your approach (which we have discussed before)

RE: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Jef Allbright
Eric Baum wrote: As I and Jef and you appear to agree, extant Intelligence works because it exploits structure *of our world*; there is and can be (unless P=NP or some such radical and unlikely possibility) no such thing as as General Intelligence that works in all worlds. I'm going to

RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Eric Baum wrote: James Jef Allbright [EMAIL PROTECTED] wrote: Russell Wallace James wrote: Syntactic ambiguity isn't the problem. The reason computers don't understand English is nothing to do with syntax, it's because they don't understand the world. snip But the computer still

RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Jef wrote: Each of these examples is of a physical system responding with some degree of effectiveness based on an internal model that represents with some degree of fidelity its local environment. It's an unnecessary complication, and leads to endless discussions of qualia,

RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Eric - Thanks for the pointer to your paper. Upon reading I quickly saw what I think provoked your reaction to my observation about understanding. We were actually saying much the same thing there. My point was that no human understands the world, because our understanding, as with all examples

RE: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Jef Allbright
Russell Wallace wrote: Syntactic ambiguity isn't the problem. The reason computers don't understand English is nothing to do with syntax, it's because they don't understand the world. It's easy to parse The cat sat on the mat into <sentence> <verb>sit</verb> <subject>cat

Re: [agi] Four axioms

2006-06-08 Thread Jef Allbright
On 6/8/06, Mark Waser [EMAIL PROTECTED] wrote: The first thing that is necessary is to define your goals. It is my contention that there is no good and no bad (or evil) except in the context of a goal It seems to me it would be better to say that there is no absolute or objective good-bad

Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-07 Thread Jef Allbright
On 6/6/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: I espouse the Proactionary Principle for everything *except* existential risks. The Proactionary Principle is a putative optimum strategy for progress within an inherently risky and uncertain environment. How do you reconcile your

Re: [agi] Future AGI's based on theorem-proving

2005-02-23 Thread Jef Allbright
Ben Goertzel wrote: The purpose of ITSSIM is to prevent such decisions. The purpose of the fancy emergency modifications to ITSSIM is to allow it to make such a decision in cases of severe emergency. A different way to put your point, however, would be to speak not just about averages but also

RE: [agi] Future AGI's based on theorem-proving

2005-02-23 Thread Jef Allbright
Ben Goertzel [EMAIL PROTECTED] wrote: When a proposed system design turns out to require fancy emergency patches and somewhat arbitrary set points to achieve part of its function, then perhaps that's a hint that it's time to widen-back and re-evaluate the concept at a higher

Re: [agi] What are qualia...

2005-01-26 Thread Jef Allbright
Ben Goertzel wrote: Brad, Actually this depends on your philosophy of consciousness. Panpsychists believe everything experiences qualia -- just some things experience more than others ;) ben The puzzle of qualia vanished for me when I realized that the only way we know the experience of

Re: [agi] What are qualia...

2005-01-26 Thread Jef Allbright
Philip Sutton wrote: Brad/Eugen/Ben, Early living things/current simple-minded living things, we can conjecture didn't/don't have perceptions that can be described as qualia. Then somewhere along the line humans start describing perceptions that some of them describe as qualia. It seems that

Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Jef Allbright
Dennis Gorelik wrote: Deering, I strongly disagree. Humans have preprogrammed super-goals. Humans don't have the ability to update their super-goals. And humans are intelligent creatures, aren't they? In what sense do humans have pre-programmed super-goals? It seems to me that our evolved

Re: [agi] ESR on philosophy of mind, free will, computability and complexity theory

2004-10-21 Thread Jef Allbright
Ben - I know you, via the web, as one who has both a strong mathematical (objective) background and also one who tends very strongly to value the experiential (subjective) stance. How is it that your mathematical side can allow you to downplay a solution to a difficult problem, saying the

Re: [agi] AGI's and emotions

2004-02-25 Thread Jef Allbright
Philip Sutton wrote: I guess we call emotions 'feelings' because we *feel* them - i.e. we can feel the effect they trigger in our whole body, detected via our internal monitoring of physical body condition. Given this, unless AGIs are also programmed for thoughts or goal satisfactions to

Re: [agi] Web Consciousness and self consciousness

2003-09-07 Thread Jef Allbright
Jef wrote: On Saturday 06 September 2003 20:45, Jef Allbright wrote: Of course the strong sense of immediacy and directness trumps logical and philosophical arguments. In a very circular (and conventionally correct way) we certainly are our feelings. And in the bigger picture, that self

Re: [agi] Web Consciousness and self consciousness

2003-09-06 Thread Jef Allbright
James Rogers wrote: I would say that consciousness is at its essence a purely inferred self-model, which naturally requires a fairly large machine to support the model. Ben Goertzel wrote: Of course, this captures part of the consciousness phenomenon, but it's very much a