- Forwarded message from Chris Williams [EMAIL PROTECTED] -
From: Chris Williams [EMAIL PROTECTED]
Date: Mon, 30 Apr 2007 18:10:41 +0100 (BST)
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], John Winn [EMAIL PROTECTED],
[EMAIL PROTECTED], Mark Everingham [EMAIL PROTECTED]
Subject:
--- a [EMAIL PROTECTED] wrote:
Help me with the algorithm. Thank you
Dear a for anonymous (are you related to Ben?),
Before you worry about whether an AGI should be friendly or selfish or
religious, first you have to solve some lower level problems in language,
vision, hearing, navigation,
This is similar to the thread I was working on recently about Goals that
didn't get quite as far as I would have liked either.
1. For use as testing metrics, or for our personal goals of what an AGI should
achieve or what is important, these goals or classes of problems should be
defined as
Richard,
What's the point here? You seem to be just being cussed. You're not really
interested in the structure of the sciences, are you?
Psychosemiotics, first off, does NOT EXIST - so how cognitive science
could already cover it is interesting. It has been mooted vaguely - in a
book
2. More specific for your AGI,
What do you see the virtual pets doing? Specifically as end user
functions for the consumer, the selling points you would give them, and how
the AGI would help these functions.
Is it going to be a rich enough situation in general to display more
than just a
Ben Goertzel writes:
Well, it's a commercial project so I can't really talk about what the
capabilities of the version 1.0 virtual pets will be.
I did spend a few evenings looking around Second Life. From
that experience, I think that virtual prostitutes would be
a more profitable product :)
Mike,
Richard is not being difficult. He is trying to ascertain the basis for
your beliefs (and get pointers to it). Only from this e-mail did *I* ascertain
that you believe that you had made up psychosemiotics. Previously, it looked
to me as if you thought you were pointing at
Second Life also has a teen grid, by the way, which is not very
active right now, but which virtual pets could enhance significantly.
Virtual prostitutes are not in the plans anytime soon ;-)
On 5/4/07, Derek Zahn [EMAIL PROTECTED] wrote:
Ben Goertzel writes:
Well, it's a commercial project
Is there any already existing competition in this area - virtual adaptive pets
- that we can look at?
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Friday, May 04, 2007 4:46 PM
Subject: Re: [agi] The role of uncertainty
2. More specific
On a less joking note, I think your ideas about applying your
cognitive engine to NPCs in RPG type games (online or otherwise)
could work out really well. The AI behind the game entities
that are supposedly people is depressingly stupid, and games
are a bazillion-dollar business.
I hope your
Hi James,
I'm going to handle your questions in reverse order . . . .
Do you think learning is a requirement for understanding, or intelligence?
Yes, I believe that learning is a requirement for intelligence. Intelligence
is basically how fast you learn. Zero learning equals zero
However, we can do some borg mind stuff to work around this problem -- so
that each pet retains its own personality, yet benefits from collective
learning based on the totality of all the pets' memories...
Nice!
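The "borg mind" idea above could be sketched roughly as follows. This is a minimal illustration, not anything from the actual project: the class names, the shared dictionary, and the "playfulness" parameter are all invented for the example. The point is only the split between a pooled experience store and per-pet disposition.

```python
# Hypothetical sketch: each pet keeps private personality parameters,
# while learned stimulus->outcome associations are pooled across all pets.

class PetMind:
    """One virtual pet: private personality, shared knowledge."""

    def __init__(self, name, playfulness, shared_memory):
        self.name = name
        self.playfulness = playfulness      # private: never pooled
        self.shared_memory = shared_memory  # reference to the collective store

    def observe(self, stimulus, outcome):
        # Every pet's experience goes into the common pool.
        self.shared_memory.setdefault(stimulus, []).append(outcome)

    def expect(self, stimulus):
        # Predictions draw on ALL pets' experience, filtered through
        # this pet's own disposition.
        outcomes = self.shared_memory.get(stimulus, [])
        if not outcomes:
            return None
        # Majority outcome across the collective...
        best = max(set(outcomes), key=outcomes.count)
        # ...but how eagerly the pet acts on it stays individual.
        return (best, self.playfulness)

pool = {}
rex = PetMind("Rex", playfulness=0.9, shared_memory=pool)
mog = PetMind("Mog", playfulness=0.2, shared_memory=pool)

rex.observe("ball thrown", "fun")
rex.observe("ball thrown", "fun")

# Mog has never seen a ball, yet inherits Rex's experience:
print(mog.expect("ball thrown"))  # → ('fun', 0.2)
```

Mog benefits from the collective memory but still responds with its own (low) playfulness, which is the "retains its own personality" part of the proposal.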
- Original Message -
From: Benjamin Goertzel
To:
Mike Tintner wrote:
Richard,
What's the point here? You seem to be just being cussed. You're not
really interested in the structure of the sciences, are you?
Is this ad hominem remark really necessary?
Psychosemiotics, first off, does NOT EXIST - so how cognitive science
could already
What would motivate you to put work into an AGI project?
1) A reasonable point of entry into the project
2) The project would need to be FOSS, or at least communally owned.
(FOSS for preference.) I've had a few bad experiences where the project
leader ended up taking everything, and don't
J. Storrs Hall, PhD. wrote:
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
Mark Waser wrote:
... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
...
But note that in this case
I think at some point in time an AGI will need to be embodied. I know many
intend to use robots in the future, and to copy the software into them; as a
step, embodiment in a virtual environment could prove useful.
One thing I intend to do with mine is give it autonomy as soon as possible,
Yeah, I am trying to get this to run, but no luck yet; wish it wasn't only
Linux-based,
But even a general 3-D graphical app is not that hard to write; I have done a
few. I am also looking at something like using a Second Life interface, as much
of the graphics and interface design has
I would say rote memorization and knowledge / data, IS understanding.
OK, we have a definitional difference then. My justification for my view
is that I believe that you only *really* understand something when you have
predictive power on cases that you haven't directly seen yet (sort of
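The distinction being argued here — rote memorization versus predictive power on unseen cases — can be made concrete with a toy example. The data and the linear rule below are invented purely for illustration; nothing in the thread specifies them.

```python
# Toy contrast: a rote lookup table versus a learned rule, tested on
# a case neither has seen during "training".

training = {1: 3, 2: 5, 3: 7}  # underlying rule: y = 2x + 1

def rote(x):
    # Pure memorization: perfect on seen cases, silent otherwise.
    return training.get(x)

def fitted(x):
    # A generalization of the same data (here derived by hand:
    # slope 2, intercept 1).
    return 2 * x + 1

print(rote(2), fitted(2))    # both handle a seen case: 5 5
print(rote(10), fitted(10))  # only the rule predicts the unseen: None 21
```

On the view quoted above, only `fitted` "really" understands the data, because it keeps working on inputs it has not directly seen.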
I do not believe that the algorithm must be more complex. The more complex the
algorithm, the more ad hoc it is. Complex algorithms are not able to perform
generalized tasks. I believe the reason that n-digit was a failure was because
there is no vision system, NOT because the algorithm is too
It's mainly that I believe there is a full range of intelligences available,
from a simple thermostat, to a complex one that measures and controls humidity,
knows if a person is in a room, and has specific settings for different people,
to an expert system, to a human, to an AI and super
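The lower end of that spectrum can be sketched in code: the same on/off interface, with increasingly rich internal models behind it. The class names, thresholds, and preference table below are all invented for the example.

```python
# Sketch of the "range of intelligences" point: identical interface,
# richer model at each rung.

class SimpleThermostat:
    """Lowest rung: one number, one rule."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def heat_on(self, temp):
        return temp < self.setpoint

class SmartThermostat(SimpleThermostat):
    """A rung up: also weighs humidity, occupancy, and per-person
    preferences when choosing the effective setpoint."""
    def __init__(self, setpoint, preferences):
        super().__init__(setpoint)
        self.preferences = preferences  # e.g. {"alice": 23}

    def heat_on(self, temp, humidity=0.5, occupant=None):
        target = self.preferences.get(occupant, self.setpoint)
        if humidity > 0.7:   # humid air feels warmer, so aim lower
            target -= 1
        return temp < target

dumb = SimpleThermostat(20)
smart = SmartThermostat(20, {"alice": 23})
print(dumb.heat_on(21))                     # False: above setpoint
print(smart.heat_on(21, occupant="alice"))  # True: alice prefers it warmer
```

Each rung adds state and sensing, not a different kind of interface, which is one way to read the claim that intelligence comes in a continuous range rather than an all-or-nothing property.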