This may be of interest to the group.
http://video.google.com/videoplay?docid=-112735133685472483
This presentation is about a potential shortcut to artificial
intelligence by trading mind-design for world-design using artificial
evolution. Evolutionary algorithms are a pump for turning CPU
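A minimal sketch of such a cycle-burning pump in Python (the bit-string
genome and one-max fitness function below are illustrative stand-ins,
not anything from the presentation):

import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 100

def fitness(genome):
    # Placeholder objective: count the 1-bits ("one-max").
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]   # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(pop, key=fitness)

print(fitness(evolve()), "of", GENOME_LEN)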
On 11/12/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I read it more as if it were a very highbrow sort of poetry ;-)
Same here. At first I was disappointed and irritated by the lack of
meaningful content (or was it all content, but lacking form...?) and
subsequent waste of time. Then I
On 11/12/07, Linas Vepstas [EMAIL PROTECTED] wrote:
"I see a human, better give him wide berth." Certainly, the ability to
detect and deal with pedestrians will be required before these things
become street-legal.
Well, I think we'll see robotic vehicles first play a significant role
in war
On 11/12/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
I can easily imagine that next year's grand challenge, or the one
thereafter, will explicitly require ability to deal with cyclists,
motorcyclists, pedestrians, children
On 11/12/07, John G. Rose [EMAIL PROTECTED] wrote:
From: Jef Allbright [mailto:[EMAIL PROTECTED]
On a more practical note, intelligence is not so much about making
connections as about the selective pruning (or equivalently,
weighting) of connections. I found this sizable document
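To make the pruning point concrete, a minimal sketch (the connection
graph and weights below are invented for illustration):

# Toy weighted association graph; the weights are made-up examples.
connections = {
    ("dog", "bark"): 0.9,
    ("dog", "mat"): 0.1,
    ("cat", "mat"): 0.7,
    ("cat", "bark"): 0.05,
}

def prune(conns, threshold=0.5):
    # Selective pruning: discard weak associations instead of adding
    # new ones; zeroing a weight and deleting the edge are equivalent here.
    return {pair: w for pair, w in conns.items() if w >= threshold}

print(prune(connections))  # {('dog', 'bark'): 0.9, ('cat', 'mat'): 0.7}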
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Ben said -- the possibility of dramatic, rapid, shocking success in
robotics is LOWER than in cognition
That's why I tell people the value of manual labor will not be impacted
by the AGI revolution as soon as the value of mind labor.
On 11/10/07, Robin Hanson [EMAIL PROTECTED] wrote:
My impression is that the cognitive performance of mice is vastly superior
to that of current robot cars. I don't see how they could be considered
even remotely comparable. But perhaps I have misjudged. Has anyone
attempted to itemize
On 11/10/07, Edward W. Porter [EMAIL PROTECTED] wrote:
There
is a small but increasing number of people who pretty much understand how
to build artificial brains as powerful as that of humans, not 100% but
probably at least 90% at an architectural level.
Being 90% certain of where to get on
On 11/10/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Ben Goertzel and his Novamente are the best architect/architecture I know of.
I had independently come up with a similar approach myself (I could have
written 80-85% of that summary of Novamente's architecture in my recent
post before I read about
On 11/10/07, Bob Mottram [EMAIL PROTECTED] wrote:
On 10/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
At the DARPA Urban Challenge last weekend, the optimism and flush of
rapid growth was palpable...
I was saying to someone recently that it's hard to watch something
like the recent
On 11/10/07, Neil H. [EMAIL PROTECTED] wrote:
The research is still quite early, but could Google Sets also be
useful for more general AI tasks?
Only to the extent that simple first-order association by textual
proximity is useful, which is to say, only slightly.
Others have performed deeper
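For what it's worth, first-order association by textual proximity
reduces to little more than co-occurrence counting, along these lines
(toy two-sentence corpus; the sentence-sized window is an arbitrary
choice):

from collections import Counter
from itertools import combinations

corpus = ["the cat sat on the mat",
          "the dog sat on the rug"]

def cooccurrence(sentences):
    counts = Counter()
    for s in sentences:
        # Count every unordered word pair sharing a sentence.
        for a, b in combinations(set(s.split()), 2):
            counts[frozenset((a, b))] += 1
    return counts

pairs = cooccurrence(corpus)
print(pairs[frozenset(("cat", "mat"))])  # 1: associated by proximity alone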
I recently found this paper to contain some worthwhile thinking relevant
to the considerations in this thread.
http://lcsd05.cs.tamu.edu/papers/veldhuizen.pdf
- Jef
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Jeff,
In your below flame you spent much more energy conveying contempt than
knowledge.
I'll readily apologize again for the ineffectiveness of my
presentation, but I meant no contempt.
Since I don't have time to respond to all of your
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
In my attempt to respond quickly I did not intend to attack him or
his paper
Edward -
I never thought you were attacking me.
I certainly did attack some of your statements, but I never attacked you.
It's not my paper, just one that I
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
A quick question for Richard and others -- Should adults be allowed to
drink, do drugs, wirehead themselves to death?
A correct response is "That depends."
Any "should" question involves consideration of the pragmatics of the
system, while
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
You misunderstood me -- when I said robustness of the goal system, I meant
the contents and integrity of the goal system, not the particular
implementation.
I meant that too - and I didn't
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Effective deciding of these "should" questions has two major elements:
(1) understanding of the evaluation-function of the assessors with
respect to these specified ends, and (2) understanding of principles
(of nature) supporting increasingly
On 10/2/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
Argh! Goal system and Friendliness are roughly the same sort of
confusion. They are each modelable only within a ***specified***,
encompassing context.
In more coherent, modelable
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient being except
On 10/2/07, Jef Allbright [EMAIL PROTECTED] wrote:
I'm not going to cheerfully right you off now, but feel free to have the last
word.
Of course I meant cheerfully write you off or ignore you.
- Jef
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would be similar to the
likelihood of the molecules of an
On 10/1/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Jef Allbright wrote:
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that
the likelihood of them
On 10/1/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
Right, now consider the nature of the design I propose: the
motivational system never has an opportunity for a point failure:
everything that happens is
On 9/30/07, Kaj Sotala [EMAIL PROTECTED] wrote:
Quoting Eliezer:
... Evolutionary programming (EP) is stochastic, and does not
precisely preserve the optimization target in the generated code; EP
gives you code that does what you ask, most of the time, under the
tested circumstances, but the
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
I think a system can get arbitrarily complex without being conscious --
consciousness is a specific kind of model-based, summarizing,
self-monitoring
architecture.
Yes. That is a good clarification of what I meant rather than what I said.
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?
I'm not sure that I should dive into
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
I do think it's a misuse of agency to ascribe moral agency to what is
effectively only a tool. Even a human, operating under duress, i.e.
as a tool for another, should be considered as having diminished or no
moral agency, in my opinion.
So,
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
I would not claim that agency requires consciousness; it is necessary
only that an agent acts on its environment so as to minimize the
difference between the external environment and its internal model of
the preferred environment
OK.
Moral
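The difference-minimizing definition of agency quoted above reduces to
a bare control loop. A minimal sketch (one-dimensional toy environment;
the gain constant is arbitrary):

preferred = 20.0   # internal model of the preferred environment
state = 5.0        # the external environment, one number here
gain = 0.3         # how aggressively the agent acts

for _ in range(20):
    error = preferred - state   # difference between model and world
    state += gain * error       # act on the environment to shrink it
print(round(state, 2))          # approaches 20.0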
On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:
Decisions are seen as increasingly moral to the extent that they enact
principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences.
Or another question . . . . if I'm analyzing
On 5/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Personally, I find many of his posts highly entertaining...
If your sense of humor differs, you can always use the DEL key ;-)
-- Ben G
I initially found it sad and disturbing, no, disturbed.
Thanks to Mark I was able to see the humor
On 4/15/07, Pei Wang [EMAIL PROTECTED] wrote:
I actually agree with most of what Richard and Ben said, that is, we
can create AI that is more intelligent, in some sense, than human
beings --- that is also what I've been working on.
However, to me Singularity is a stronger claim than superhuman
On 3/25/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
Does anyone know if the term glocal (meaning global/local) has
previously been used in the context of
AI knowledge representation?
While not recognized as a formal term of knowledge representation,
glocal has strong connotations of think
On 3/26/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes, Google reveals that the term glocal has been used a few times
in the context of social activism.
While popularized by social activists, particularly with regard to
ecological concerns, the fairly deep principle I had in mind goes
somewhat
On 3/26/07, DEREK ZAHN [EMAIL PROTECTED] wrote:
David Clark writes:
Everyone on this list is quite different.
It would be interesting to see what basic interests and views the members of
this list hold. For a few people, published works answer this pretty
clearly but that's not true for most
On 3/20/07, Pei Wang [EMAIL PROTECTED] wrote:
I wonder if Jef, or anyone else here, knows what has happened to
Project Halo, the Digital Aristotle. The project website
(http://www.projecthalo.com/) hasn't been updated for three years.
I think Danny Hillis became consumed with FreeBasing. ;-)
On 3/20/07, Pei Wang [EMAIL PROTECTED] wrote:
Was Hillis involved with Halo? I only saw him listed as one of the
inspirations.
My mistake, I was working from memory and made a false association...
Here's all I can find as to the latest status:
Three teams, Team SRI International, Team
FYI,
- Jef
An article in the New Jersey Star-Ledger
(http://www.nj.com/news/ledger/index.ssf?/base/news-11/1173937313282210.xml&coll=1)
says DARPA has quietly killed their project to reverse engineer the
human brain. The project, known as Biologically Inspired Cognitive
Architectures
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote:
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
If I understand Minsky's Society of Mind, the basic idea is to have the tools
be such that you can build your deck by first pointing at the saw and saying
"you do your thing" and then
, Free-Will, Morality, Rationality, Justice and on to
effective social decision-making.
- Jef
---
On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote:
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote:
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote:
On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote:
Thanks for the clarification. You can surely call it a high-level
functional description, but what I mean is that it is not an ordinary
high-level functional description, but a concrete expectation
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote:
On 3/9/07, Jef Allbright [EMAIL PROTECTED] wrote:
We seem to have skipped over my point about intelligence being about
the encoding of regularities of effective interaction of an agent with
its environment, but perhaps that is now moot.
Now I see
Chuckling that this is still going on, and top posting based on Ben's
prior example...
Cox's proof is all well and good, but I think gts still misses the
point:
The principle of indifference is still the *best* one can do under
conditions of total ignorance.
Any other distribution would imply
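To spell out one standard sense of "best" (my gloss, not anything gts
or Cox wrote): under total ignorance the only constraint is
normalization, and maximizing entropy subject to that alone forces the
uniform distribution:

  maximize  H(p) = -\sum_i p_i \log p_i   subject to  \sum_i p_i = 1

Stationarity of the Lagrangian -\sum_i p_i \log p_i - \lambda (\sum_i p_i - 1)
gives -\log p_i - 1 - \lambda = 0, identical for every i, so all the
p_i are equal and p_i = 1/n.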
gts wrote:
I'm not expecting essentially perfect coherency in AGI.
I understand perfection is out of reach.
My question to you was whether, as a professed C++ developer, you are
familiar with the well-known impracticality of certifying a non-trivial
software product to be essentially free of
gts wrote:
This same concept of coherence is the basis of the axioms of
probability...
Yes.
... and the principle of indifference.
No.
Understand this underlying concept and you may understand the
others.
I understand it, Jef. But do you? The principle of indifference
is
Correction: Needed to add [the idea that] below.
- Jef
gts wrote:
This same concept of coherence is the basis of the axioms of
probability...
Yes.
... and the principle of indifference.
No.
Understand this underlying concept and you may understand the
others.
I
gts wrote:
[Jef wrote:]
That's like saying you have no use for [the idea that] a
balance scale reads zero when both pans are empty.
Your beef is not just with me; it is with Bruno De Finetti and
Frank P. Ramsey and their modern followers in the subjectivist
school of probability theory,
gts wrote:
Well, although I am not an AI developer, I am a C++
application developer and I know I or any reasonably
skilled developer could write task-specific
applications that would be extremely coherent in the
De Finetti sense (applicable to making probabilistic
judgements in
Pei Wang wrote:
... in this example, there are arguments supporting the
rationality of human, that is, even if two betting cases
corresponding to the same expected utility, there are
reasons for them to be treated differently in decision
making, because the probability in one betting is
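A worked version of Pei's point, with invented numbers: two estimates
can share a mean (hence yield the same expected utility) while resting
on very different amounts of evidence, which shows up in the variance
of a Beta posterior:

def beta_mean_var(a, b):
    # Mean and variance of a Beta(a, b) estimate of a probability p.
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

print(beta_mean_var(1, 1))      # (0.5, 0.0833...): near-total ignorance
print(beta_mean_var(100, 100))  # (0.5, 0.0012...): well-evidenced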
Ben wrote:
Well, in fact, Novamente is **not** constrained from having Dutch
books made against it, because it is not a perfectly consistent
probabilistic reasoner.
It seeks to maintain probabilistic consistency, but balances this
with other virtues...
This is really a necessary
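For readers new to the term: a Dutch book is a set of bets, each
acceptable at the reasoner's own stated odds, that jointly guarantee it
a loss. A minimal sketch with made-up prices:

# Incoherent book: the reasoner prices P(A) = 0.6 and P(not-A) = 0.6.
price_a, price_not_a = 0.6, 0.6
cost = price_a + price_not_a   # it pays 1.2 for both unit bets
payout = 1.0                   # exactly one bet pays off, either way
print(cost - payout)           # 0.2: a guaranteed loss, the Dutch book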
gts wrote:
I understand the resources problem, but to be coherent a
probabilistic reasoner need only be constrained in very
simple ways, for example from assigning a higher
probability to statement 2 than to statement 1 when
statement 2 is contingent on statement 1.
Is such basic
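The constraint gts describes is the conjunction rule: when statement 2
is contingent on statement 1 (say 2 = "A and B", 1 = "A"), coherence
demands P(2) <= P(1). A checker is a few lines (toy belief assignments
assumed):

beliefs = {"A": 0.7, "A and B": 0.4}   # invented assignments

def coherent(p, narrower, broader):
    # Conjunction rule: P(A and B) may never exceed P(A).
    return p[narrower] <= p[broader]

print(coherent(beliefs, "A and B", "A"))  # True; 0.8 here would violate it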
gts wrote:
On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright
[EMAIL PROTECTED]
wrote:
-
You would have to assume that statement 2 is *entirely*
contingent on statement 1.
-
I
Ah, the importance of semantic precision (still context-dependent, of
course). ;-)
- Jef
-----Original Message-----
From: gts [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 06, 2007 2:41 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Betting and multiple-component truth values
My last
Ben Goertzel wrote:
The relationship between rationality and goals is fairly
subtle, and something I have been thinking about recently
Ben, as you know, I admire and appreciate your thinking but have always
perceived an inside-outness with your approach (which we have
discussed before)
Eric Baum wrote:
As I and Jef and you appear to agree, extant Intelligence
works because it exploits structure *of our world*; there is
and can be (unless P=NP or some such radical and unlikely
possibility) no such thing as General Intelligence that
works in all worlds.
I'm going to
Eric Baum wrote:
Jef Allbright [EMAIL PROTECTED] wrote:
Russell Wallace wrote:
Syntactic ambiguity isn't the problem. The reason computers don't
understand English is nothing to do with syntax, it's because they
don't understand the world.
snip
But the computer still
Jef wrote:
Each of these examples is of a physical system responding
with some degree of effectiveness based on an internal model
that represents with some degree of fidelity its local
environment. It's an unnecessary complication, and leads to
endless discussions of qualia,
Eric -
Thanks for the pointer to your paper. Upon reading I quickly saw what I
think provoked your reaction to my observation about understanding. We
were actually saying much the same thing there. My point was that no
human understands the world, because our understanding, as with all
examples
Russell Wallace wrote:
Syntactic ambiguity isn't the problem. The reason computers don't
understand English is nothing to do with syntax, it's because they
don't understand the world.
It's easy to parse "The cat sat on the mat" into
<sentence>
<verb>sit</verb>
<subject>cat</subject>
On 6/8/06, Mark Waser [EMAIL PROTECTED] wrote:
The first thing that is necessary is to define your goals. It is my
contention that there is no good and no bad (or evil) except in the context
of a goal
It seems to me it would be better to say that there is no absolute or
objective good-bad
On 6/6/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I espouse the
Proactionary Principle for everything *except* existential risks.
The Proactionary Principle is a putative optimum strategy for progress
within an inherently risky and uncertain environment. How do you
reconcile your
Ben Goertzel wrote:
The purpose of ITSSIM is to prevent such decisions. The purpose of the
fancy emergency modifications to ITSSIM is to allow it to make such a
decision in cases of severe emergency.
A different way to put your point, however, would be to speak not just about
averages but also
Ben Goertzel [EMAIL PROTECTED] wrote:
When a proposed system design turns out to require fancy emergency
patches and somewhat arbitrary set points to achieve part of its
function, then perhaps that's a hint that it's time to widen-back and
re-evaluate the concept at a higher
Ben Goertzel wrote:
Brad,
Actually this depends on your philosophy of consciousness. Panpsychists
believe everything experiences qualia -- just some things experience more
than others ;)
ben
The puzzle of qualia vanished for me when I realized that the only way
we know the experience of
Philip Sutton wrote:
Brad/Eugen/Ben,
Early living things/current simple-minded living things, we can conjecture,
didn't/don't have perceptions that can be described as qualia. Then
somewhere along the line humans start describing perceptions that some of
them describe as qualia. It seems that
Dennis Gorelik wrote:
Deering,
I strongly disagree.
Humans have preprogrammed super-goals.
Humans don't have the ability to update their super-goals.
And humans are intelligent creatures, aren't they?
In what sense do humans have pre-programmed super-goals? It seems to me
that our evolved
Ben -
I know you, via the web, as one who has both a strong mathematical
(objective) background and a strong tendency to value the
experiential (subjective) stance.
How is it that your mathematical side can allow you to downplay a
solution to a difficult problem, saying the
Philip Sutton wrote:
I guess we call emotions 'feelings' because we *feel* them - i.e. we can
feel the effect they trigger in our whole body, detected via our
internal monitoring of physical body condition.
Given this, unless AGIs are also programmed for thoughts or goal
satisfactions to
Jef wrote:
On Saturday 06 September 2003 20:45, Jef Allbright wrote:
Of course the strong sense of immediacy and directness trumps
logical and philosophical arguments. In a very circular (and
conventionally correct) way we certainly are our feelings. And in
the bigger picture, that self
James Rogers wrote:
I would say that consciousness is at its essence a purely inferred
self-model, which naturally requires a fairly large machine to
support the model.
Ben Goertzel wrote:
Of course, this captures part of the consciousness phenomenon, but
it's very much a