On Nov 2, 2007 3:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can
Jiri Jelinek wrote:
People will want to enjoy life: yes. And they should, of course.
But so, of course, will the AGIs.
Giving AGI the ability to enjoy = potentially asking for serious
trouble. Why shouldn't AGI just work for us like other tools we
currently have (no joy involved)?
Isn't
Richard, in your November 02, 2007 11:15 AM post you stated:
If AI systems are built with motivation systems that are stable, then we
could predict that they will remain synchronized with the goals of the
human race until the end of history.
and
I can think of many, many types of non-goal-stack
On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
You are describing a very convoluted process of drug addiction.
The difference is that I have safety controls built into that scenario.
If I can get you hooked on heroin or crack cocaine, I'm pretty confident
that you will
I have skimmed many of the postings in this thread, and (although I have
not seen anyone say so) Jiri's position seems somewhat similar to that of
certain Eastern meditative traditions, or perhaps of certain Christian or
other mystical blind-faith traditions.
I am not a particularly
--- Edward W. Porter [EMAIL PROTECTED] wrote:
If bliss without intelligence is the goal of the machines you imagine
running the world, then for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss, why wouldn't they kill