Hi Eliezer,
On 5/26/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Not the baby-halving threat, actually.
http://www.geocities.com/eganamit/NoCDT.pdf
Here Solomon's Problem is referred to as The Smoking Lesion, but the
formulation is equivalent.
Seems that the geocities account has run
Hi all,
Since there was recently discussion about machine languages, including
Lojban, here is a comic that pokes a bit of fun at Lojban (Plus if you
click on the image you get the comic in Lojban).
http://xkcd.com/c191.html
Enjoy :)
--
-Joel
Unless you try to do something beyond what you
On 12/14/06, Charles D Hixson [EMAIL PROTECTED] wrote:
To speak of evolution as being forward or backward is to impose upon
it our own preconceptions of the direction in which it *should* be
changing. This seems...misguided.
IMHO Evolution tends to increase extropy and self-organisation. Thus
On 12/21/06, Philip Goetz [EMAIL PROTECTED] wrote:
That in itself is quite bad. But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O.
On 1/14/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
I'm considering this idea: build a repository of facts/rules in FOL (or
Prolog) format, similar to Cyc's. For example water is wet, oil is
slippery, etc. The repository is structureless, in the sense that it is
just a collection of
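The idea of a structureless repository can be sketched minimally: a flat collection of predicate/argument tuples with pattern-matching queries. This is my own illustration of the proposal, not Cyc's actual format or any real system:

```python
# A flat, "structureless" fact repository: just a set of
# (predicate, argument) tuples, with no ontology imposed on top.
facts = {
    ("is_wet", "water"),
    ("is_slippery", "oil"),
    ("is_hot", "fire"),
}

def query(predicate, arg=None):
    """Return matching facts; arg=None acts as a wildcard."""
    return [f for f in facts
            if f[0] == predicate and (arg is None or f[1] == arg)]

print(query("is_wet"))                    # facts about wetness
print(("is_slippery", "oil") in facts)    # membership test
```

Because the repository is just a collection, adding knowledge is set insertion and nothing more; any structure (taxonomies, rule chaining) would live in whatever reasoner consumes the facts.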
On 1/20/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Regarding Mindpixel 2, FWIW, one kind of knowledge base that would be
most interesting to me as an AGI developer would be a set of pairs of
the form
(Simple English sentence, formal representation)
For instance, a [nonrepresentatively
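A few hypothetical pairs of the kind described above might look like the following. The sentences and predicate names are my own illustrations, not the contents of any actual knowledge base:

```python
# Illustrative (Simple English sentence, formal representation) pairs.
# The logic syntax here is an ad-hoc FOL-style notation for the sketch.
corpus = [
    ("Water is wet.",    "is_wet(water)"),
    ("Oil is slippery.", "is_slippery(oil)"),
    ("Cats chase mice.", "forall x,y: cat(x) & mouse(y) -> chases(x, y)"),
]

for sentence, logic in corpus:
    print(f"{sentence!r:20} -> {logic}")
```

Pairs like these would let a learner train the mapping from surface English to formal content directly, which is presumably why such a knowledge base would interest an AGI developer.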
That quote made my evening!
Thanks :)
On 5/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
The best definition of intelligence comes from (of all people) Hugh Loebner:
It's like pornography -- I can't define it exactly, but I like it when I see
it.
-
This list is sponsored by AGIRI:
On 5/25/07, Mark Waser [EMAIL PROTECTED] wrote:
Sophisticated logical
structures (at least in our bodies) are not enough for actual
feelings. For example, to feel pleasure, you also need things like
serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and
endorphins. Worlds of
On 6/3/07, Jiri Jelinek [EMAIL PROTECTED] wrote:
Further, prove that pain (or more preferably sensation in general) isn't an
emergent property of sufficient complexity.
Talking about von Neumann's architecture - I don't see how increases
in the complexity of rules used for switching Boolean
YKY,
Which is a bigger motivator -- charity/altruism, or $$? For me it's $$,
and charity is of lower priority. And let's not forget that self-interested
individuals in a free market can bring about progress, at least according to
Adam Smith.
A suggestion, if you really are motivated by $$
On 6/8/07, Mike Tintner [EMAIL PROTECTED] wrote:
The issue is this: how can you prove a given form - whether a physical form
or form of behaviour - is disordered? How can you prove that it cannot be
considered as having been programmed, and there is no underlying formula for
it? (And another
On 6/9/07, Mark Waser [EMAIL PROTECTED] wrote:
...Same goes for most software developed by this method -- almost
all the great open source apps are me-too knockoffs of innovative
proprietary programs, and those that are original were almost always created
under the watchful eye of a passionate,
On 6/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Consider a terminal cancer patient.
It's not the actual weighing that causes consciousness of pain, it's the
implementation which normally allows such weighing. This, in my
opinion, *is* a design flaw. Your original statement is a more
Learning baby speech:
http://www.stuff.co.nz/4140624a28.html?source=RSStech_20070726
"In the past, people have tried to argue it wasn't possible for any
machine to learn these things, and so it had to be hard-wired (in
humans)," he said. "Those arguments, in my view, were not particularly
well
Particularly pertinent xkcd comic. ;)
http://xkcd.com/329/
-J
Somewhat apropos to what Novababy was planning to do in AGISIM:
http://www.pinktentacle.com/2007/10/android-acquires-nonverbal-communication-skills/
-J
Kind of curious thing I ran into last night. Youtube user called
Eidolon TLP that claims to be an AI, posting on various topics and
interacting with users. Videos go back for about a week. I've only
just started watching them, and don't put much stock in it being real,
but it's still interesting
On Fri, May 9, 2008 at 3:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
I have a vague memory of coming across this research to duplicate savant
behavior, and I seem to remember thinking that the conclusion seems to be
that there is a part of the brain that is responsible for 'damping down'
On Sat, Aug 2, 2008 at 9:56 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
There is nothing quite so pathetic as someone who starts their comment with
a word like "Bull", and then proceeds to spout falsehoods.
Thus: in my paper there is a quote from a book in which Conway's efforts
were
On Wed, Aug 13, 2008 at 6:31 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
To use Thorton's example, he demonstrated that a checkerboard pattern can
be learned easily using logic, but it will drive an NN learner crazy.
Note that neural networks are a broad subject and don't only include
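The checkerboard point above can be shown with a minimal sketch: the target concept is a one-line parity rule over grid coordinates, trivial to state in logic, while a learner that only exploits smooth local structure sees every neighbouring cell flip label. The function below is my own illustration, not the example from the paper:

```python
def checkerboard(x: int, y: int) -> bool:
    """A cell is 'black' iff its coordinate parities differ."""
    return (x + y) % 2 == 1

# The logical rule labels every cell of an 8x8 board exactly,
# with no training data needed at all.
board = [[checkerboard(x, y) for x in range(8)] for y in range(8)]
print(sum(row.count(True) for row in board))  # 32 black cells
```

For a distance-based or naive gradient learner, each cell's nearest neighbours all carry the opposite label, so generalisation from samples is systematically misleading; the parity rule captures the same concept in one expression.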
Hi all,
My commitment is with OpenCog at the moment - but this looks like a
really cool project/job that may suit some of you on this list :)
J
-- Forwarded message --
From: Jennifer Devine [EMAIL PROTECTED]
Date: Fri, Oct 31, 2008 at 10:40 PM
Subject: Job offering