Hi Mike,
Yeah, this is a well-known requirement ;-)
In studying the MMOG application area for early-stage AGIs, the observation
has been made that, in current MMOGs, one of the big weaknesses of NPCs
(non-player characters) is their inability to work together (with each other
or with humans) as a
Yes, all intelligent systems (i.e. living creatures) have a psychoeconomy
of goals - an important point.
But in solving any particular problem, they may be dealing with only one or
two goals at a time.
Have the measures of intelligence mentioned included:
1) the depth of the problem - the number
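(To make the many-goals, few-active idea above concrete, here is a toy
sketch in Python; the goal names and urgency numbers are purely
hypothetical, not anyone's actual architecture.)

import heapq

# Toy "psychoeconomy": many standing goals, but the agent attends to only
# the one or two most urgent at any moment. Urgencies are negated because
# heapq is a min-heap. All goals and values here are made up.
goals = [(-0.9, "avoid predator"), (-0.6, "find food"),
         (-0.3, "groom"), (-0.1, "explore")]
heapq.heapify(goals)

active = [heapq.heappop(goals)[1] for _ in range(2)]
print(active)  # -> ['avoid predator', 'find food']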
On 4/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Can you point to an objective definition that is clear about which
things are more intelligent than others, and which does not accidentally
include things that manifestly conflict with the commonsense definition
(by false negatives or false
Now this, I would imagine, is where evolutionary thinking has to be very
helpful, if not essential... if one traces the evolution of cooperative
activity in animals, it should be a useful guide to the stages needed to
build up cooperative activity among any agents.
Can anyone think of any work
On the face of it, this isn't anything more than Parry did. If you have a
substrate that can interpret actions and situations into emotional inputs
("I just got insulted"), the overall emotional control mechanism can be
modelled by an embarrassingly simple system of differential equations.
Josh
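(For illustration only: a minimal Python sketch of the sort of
"embarrassingly simple" differential-equation model Josh describes. The
emotions, decay rates, and appraisal values are all assumptions, not
anyone's actual system.)

# Each emotion e follows de/dt = -k * e + input(t): exponential decay
# toward a zero baseline, driven by appraised events.
DECAY = {"anger": 0.5, "embarrassment": 0.8}  # hypothetical rates

def step(state, inputs, dt=0.1):
    # One forward-Euler step of the (here uncoupled) ODEs.
    return {e: v + dt * (-DECAY[e] * v + inputs.get(e, 0.0))
            for e, v in state.items()}

state = {"anger": 0.0, "embarrassment": 0.0}
state = step(state, {"anger": 2.0})  # appraisal: "I just got insulted"
for _ in range(50):                  # no further events: decay to rest
    state = step(state, {})

Coupling terms (e.g. anger suppressing embarrassment) would just add
off-diagonal entries to the same linear system.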
On 4/27/07, Mark Waser [EMAIL PROTECTED] wrote:
On 4/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Can you point to an objective definition that is clear about which
things are more intelligent than others, and which does not accidentally
include things that manifestly conflict with the
Kaj Sotala wrote:
On 4/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Can you point to an objective definition that is clear about which
things are more intelligent than others, and which does not accidentally
include things that manifestly conflict with the commonsense definition
(by false
Yet, intelligence and sexiness are not the same thing...
Though fortunately for us geeks, some people do find them
to be correlated ;-D
Mark Waser wrote:
On 4/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Can you point to an objective definition that is clear about which
things are more intelligent than others, and which does not accidentally
include things that manifestly conflict with the commonsense definition
(by
Benjamin Goertzel wrote:
If all definitions of intelligence end up with a judgment call, we might
as well throw away all of them and just say "Intelligence is what a
human would consider intelligent."
But, also, then
"Sexy is what a human would consider sexy"
Yet, intelligence and
I don't like this so much, because two sets of world-states with equal
measure (size) may have very different complexity...
I don't believe so, because complex world-states are by definition larger,
since they have more variables to vary (and thus more possible states).
It is true
Kaj,
(Disclaimer: I do not claim to know the sort of maths that Ben and
Hutter and others have used in defining intelligence. I'm fully aware
that I'm dabbling in areas that I have little education in, and might
be making a complete fool of myself. Nonetheless...)
I'm currently writing my
I guess I don't really understand your formalism, i.e.
-- how you define a world-state ... do you mean a set of elementary
world-states described by some logical predicate?
-- how do you define the size of a world-state ... would it be the
complexity of that predicate?
thx
ben
On 4/27/07,
Maybe you want to build an intelligent system to get a better understanding of
how people think. Or maybe you want to make money. Or maybe it would be cool
to launch a singularity. None of these projects is going to result in
something indistinguishable from a human brain. They will be
Has anyone noticed that people who have studied AGI all their lives, like
Kurzweil and Minsky, aren't trying to build one?
-- Matt Mahoney, [EMAIL PROTECTED]
I don't get it ... some of us who have studied AGI all our lives ARE trying
to build one...
"Entity in question is any particle in the universe."
OK. Said particle either has a fixed velocity and direction or is
stationary.
"States that it is trying to avoid: collision with its antiparticle."
No, the particle is not trying to do anything.
"Size of space containing all world-states"
The world can always be described by an arbitrarily large logical
predicate. The world as it exists now is one world-state. If one clause in
the predicate were changed, that would be another world-state.
More complex world-states need more clauses in the predicate to describe
them.
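(To make the disagreement concrete, a small Python sketch under assumed
definitions: take a world-state to be a truth assignment to n boolean
variables, and the "size" of a predicate to be how many assignments
satisfy it. Two predicates can then pick out equally many world-states
while needing different numbers of clauses, which is exactly Ben's
objection.)

from itertools import product

def satisfying_states(n_vars, predicate):
    # Enumerate the truth assignments (world-states) the predicate picks out.
    return [s for s in product([False, True], repeat=n_vars) if predicate(s)]

simple = lambda s: s[0]                                     # one clause
tangled = lambda s: (s[0] and s[1]) or (not s[1] and s[2])  # two clauses

# Equal measure (each picks out 4 of the 8 states), unequal description:
assert len(satisfying_states(3, simple)) == 4
assert len(satisfying_states(3, tangled)) == 4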
Interesting, but if I've understood the Universal Intelligence paper, there
are three major flaws.
1) It seems to assume that intelligence is based on a rational, deterministic
program - is that right? Adaptive intelligence, I would argue, definitely
isn't. There isn't a rational, right way to
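(For reference, the central definition in Legg and Hutter's paper, as I
read it, in LaTeX:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is the set of computable environments, K(\mu) is the Kolmogorov
complexity of environment \mu, and V^{\pi}_{\mu} is the expected cumulative
reward the policy \pi earns in \mu. The formalism scores whatever policy
you hand it; if I recall correctly, \pi may be stochastic, so the
definition does not by itself require a deterministic program.)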
--- Benjamin Goertzel [EMAIL PROTECTED] wrote:
Has anyone noticed that people who have studied AGI all their lives, like
Kurzweil and Minsky, aren't trying to build one?
-- Matt Mahoney, [EMAIL PROTECTED]
I don't get it ... some of us who have studied AGI all our lives ARE
It's a shame that Sony discontinued their robotics division. I suspect this
is just anthropomorphisation of some quirk of the robot's control system.
What's being described here is the psychological principle of reciprocation:
I give something of value to you, you give something of
Pray tell, which errors? [The Emotion Machine, BTW, contains a very rough
proposal for a human-like AGI system, which was formally put together in a
separate paper]
Benjamin Goertzel wrote:
Has anyone noticed that people who have studied AGI all their lives, like
Kurzweil and Minsky, aren't trying to build one?
-- Matt Mahoney, [EMAIL PROTECTED]
I don't get it ... some of us who have studied AGI all our lives
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
Has anyone noticed that people who have studied AGI all their lives, like
Kurzweil and Minsky, aren't trying to build one?
-- Matt Mahoney, [EMAIL PROTECTED]
All this talk about stupid thermometers... Some thermometers are smarter
than others. Why? When it comes to telling temperature, they know what they
are talking about. Some have higher margins of error; they may be off by
1/2 degree F. Others are right on. But then put one in an environment
On Friday 27 April 2007 17:44, John G. Rose wrote:
So then we decide, OK, let's try to measure the temperature of a black hole,
so we jettison the X1117 into the black hole, and right before it passes the
event horizon it converts itself into radially emitted neutrinos by
sacrificing itself and