I've just read the first chapter of The Metamorphosis of Prime Intellect.
http://www.kuro5hin.org/prime-intellect
It makes you realise that Ben's notion that ethical structures should be
based on a hierarchy going from general to specific is very valid - if
Prime Intellect had been programmed
I'm working on a paper comparing predicate logic and term logic. One
argument I want to make is that it is hard to infer over uncountable nouns
in predicate logic, such as to derive ``Rain-drop is a kind of liquid'' from
``Water is a kind of liquid'' and ``Rain-drop is a kind of water'' (which
can
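As a minimal sketch (my own illustration, not from the paper), the syllogism above is just the transitive closure of a "kind of" relation, which term logic handles directly:

```python
# Hypothetical model: "kind of" as a set of (subtype, supertype) facts.
# The term-logic inference "Rain-drop is a kind of water" +
# "Water is a kind of liquid" => "Rain-drop is a kind of liquid"
# is transitivity over this relation.

kind_of = {
    ("raindrop", "water"),
    ("water", "liquid"),
}

def is_kind_of(x, y, facts):
    """True if x is a kind of y under the transitive closure of facts."""
    if (x, y) in facts:
        return True
    return any(a == x and is_kind_of(b, y, facts) for (a, b) in facts)

print(is_kind_of("raindrop", "liquid", kind_of))  # True
print(is_kind_of("liquid", "raindrop", kind_of))  # False
```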
Eliezer S. Yudkowsky wrote:
There may be additional rationalization mechanisms I haven't identified
yet which are needed to explain anosognosia and similar disorders.
Mechanism (4) is the only one deep enough to explain why, for example,
the left hemisphere automatically and unconsciously
Ben Goertzel wrote:
This is exactly why I keep trying to emphasize that we all should forsake
those endlessly fascinating, instinctively attractive political arguments
over our favorite moralities, and instead focus on the much
harder problem
of defining an AI architecture which can understand
In Novamente, this skeptical attitude has two aspects:
1) very high-level schemata that must be taught, not programmed
2) some basic parameter settings that will statistically tend to incline
the system toward skepticism of its own conclusions [but you can't turn
the dial too far in
Eliezer S. Yudkowsky wrote:
I don't think we are the beneficiaries of massive evolutionary debugging.
I think we are the victims of massive evolutionary warpage to win
arguments in adaptive political contexts. I've identified at least four
separate mechanisms of rationalization in human
Hey, look what my alma mater is up to. The Humanities and Social Sciences
department, no less. It was common for undergrads to be in economics
experiments, and this 'test' looks pretty similar. No hard language stuff.
http://turing.ssel.caltech.edu/
-xx- Damien X-)
Thanks, Ben, that answer will be useful for different things.
http://sl4.org/bin/wiki.pl?SingularityQuestions (edited answer below
question 5)
Best,
Anand
Ben Goertzel wrote:
The Church-Turing (CT) thesis would seem to imply the possibility of strong AI.
That is, it implies that: On any general-purpose