Re: [agi] The Function of Emotions is Torture

2007-12-13 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:
> To try and reduce all this to numbers is - if unintentionally - also very
> offensive, far far more so than any remark of mine, and does indeed involve
> very false, limited ideas of emotions (and limited in terms of AGI).

Apology accepted.  I did not find your comments offensive.  But perhaps you
find the idea of reducing the human mind to computation offensive?  This is
what AGI is about.  I don't find it offensive so much as disturbing: instincts
such as the belief in consciousness, the belief in free will, and the fear of
death are just that - traits that have been programmed into our brains through
evolution.  The computational model of thought logically denies that
consciousness exists, but your own brain does not allow you to believe that.
When you ponder whether a simulation of an AGI being tortured is the same as
torture, you are experiencing a conflict between your instincts and logic that
cannot be resolved, because your brain is programmed in a way that makes
resolving it impossible.  Logically, a simulation of torture is just a
computation, and it makes no difference whether the computation is implemented
in pencil and paper, transistors, or neurons.
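A toy illustration of that last claim (mine, not Matt's): the same transition
function, executed by two deliberately different "substrates", produces
bit-identical histories, so nothing in the outcome distinguishes what it ran on.

# Toy illustration: one transition function, two "substrates",
# bit-identical results.

def step(state):
    """One update of a toy computation (a linear congruential step)."""
    return (state * 1103515245 + 12345) % (2 ** 31)

def run_direct(state, n):
    """Substrate A: plain machine iteration."""
    trace = []
    for _ in range(n):
        state = step(state)
        trace.append(state)
    return trace

def run_pencil_and_paper(state, n):
    """Substrate B: laboriously decompose the number into decimal digits
    and recompose it each step, a stand-in for doing the arithmetic by hand."""
    trace = []
    for _ in range(n):
        digits = [int(d) for d in str(state)]
        state = step(int("".join(str(d) for d in digits)))
        trace.append(state)
    return trace

# Same trace regardless of how the computation was carried out.
assert run_direct(42, 100) == run_pencil_and_paper(42, 100)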


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The Function of Emotions is Torture

2007-12-13 Thread Mike Tintner

Matt: But perhaps you
find the idea of reducing the human mind to computation offensive?

Yes, I do. There are the human implications I started to point out. Far more 
important here are the mechanistic ones - it is indeed, where emotions are 
concerned, a massive reduction of a complex systemic operation. These AI 
simulations of emotion leave out so much (such as self, body and actual 
emotions - but hey, who cares about *those*?), and the bits they do copy - 
the valuation of behaviour conducted by the brain - they get fundamentally 
wrong. You have emotions about things that you CAN'T value other than 
extremely crudely and analogically, things that are strictly non-comparable. 
That's what the whole system is designed for. What are the mathematical 
values expressed when you have conflicting emotions about whether to 
masturbate or do your work? Masturbate = how many units of utility? Work = 
how many units?
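To make the target of the criticism concrete, here is a minimal sketch (all
numbers invented for illustration) of the single-scale utility scheme in
question:

# The kind of scalar utility comparison under attack here
# (all numbers invented for illustration).
options = {"work": 3.2, "masturbate": 2.9}    # "units of utility"
choice = max(options, key=options.get)        # forces a total order on options
print(choice)                                 # -> work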


How much do you like marshmallow ice cream, and how much creme caramel? Well, 
about this much and about that much (can you see how wide my arms are 
stretched each time)? Well, maybe it's about this much and that much 
(changing my arm width again).


As always, AI gets the development and evolution of mind completely the 
wrong way round. The emotional system we and other animals have is primary. 
It's a crude, non-numerical value system, among other things, which works 
very well, considering. Putting numbers to it is an evolutionarily extremely 
belated afterthought - an occasional help, but also a hindrance.


Anyway, you don't seem to be contesting the point - what you and other AI-ers 
are doing is adding a value system to your computers, not an emotional system.


P.S. Emotions are designed to evaluate a complex psychoeconomy of 
activities - the many, very different and strictly non-comparable activities 
that every animal and human being engages in: work, hunting, foraging, 
eating, sex, grooming, cuddling, sightseeing, etc. To some extent you *can* 
compare emotions to a currency; to some extent you *can't* - because what 
are often at stake are two fundamentally different kinds, or vast categories, 
of emotion about two fundamentally different kinds of activity. The positive 
emotions you get from activities like reading mags, watching TV and eating 
ice cream are of a fundamentally different kind from those you get from work, 
exercise, thinking about AI and other active pursuits - because the 
activities are fundamentally, physically different: passive consumption vs. 
active production. And just to complicate matters, the emotions you get from 
sex are a mixture of both kinds. (A lot of this is to do with our divided/
conflicted autonomic nervous system - the basis of emotions.) Got all that 
in your AGI?
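A sketch of that non-comparability in code (my construction, not anything
proposed on the list): two affect categories whose values order within a
category but refuse comparison across categories.

# Two affect categories that order internally but refuse cross-category
# comparison - a sketch of "strictly non-comparable" values.

from functools import total_ordering

@total_ordering
class Affect:
    CATEGORY = None

    def __init__(self, intensity):
        self.intensity = intensity

    def _check(self, other):
        if self.CATEGORY != other.CATEGORY:
            raise TypeError("affects from different categories are non-comparable")

    def __eq__(self, other):
        self._check(other)
        return self.intensity == other.intensity

    def __lt__(self, other):
        self._check(other)
        return self.intensity < other.intensity

class Consumptive(Affect):    # TV, magazines, ice cream
    CATEGORY = "consumptive"

class Productive(Affect):     # work, exercise, thinking about AI
    CATEGORY = "productive"

assert Consumptive(2) < Consumptive(5)   # within a category: fine
# Consumptive(2) < Productive(5)         # raises TypeError: no common scale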


P.P.S. Ben: As far as I can see, these comments apply to your AGI approach 
to emotions, re your post - you too seem in practice to be talking about a 
pure value system rather than a true emotional system - but I may have 
misread.





Re: [agi] AGI and Deity

2007-12-13 Thread Bob Mottram
On 11/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
> I think one of the most immediate connections between religion and AGI is
> that once the religious right (and many others for that matter) begins to
> realize that all our crazy talk about the human mind, and possibly human
> control of the world, being eclipsed by machines is not just fantasy, they
> may well demand much more limitation of AGI than they have of abortion,
> stem cell research and human cloning.


Yes, I think this is quite plausible, although I don't think it's a
near-term prospect (unless someone makes a big unexpected
breakthrough, which is always a possibility).  I think that maybe by
the 2020s or 2030s AI-related issues will be in the mainstream of
politics, since they will increasingly come to affect most people's
lives.  The heated political issues of today will look like a walk in
the park compared to some of the issues that advancing technologies
will raise, which will be of a really fundamental nature.

For example, if I could upload myself, I could then make a thousand
or a million copies and start a software engineering company made
entirely of uploads, competing directly against even the biggest
IT megacorporations.



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-13 Thread James Ratcliff
It shouldn't matter how a general ontology is used; to be generally useful, 
it should be available to multiple different AI and AGI processes.

And the key thing about this usage is that it doesn't get its information 
from any single text, but extracts patterns from mass usage - reading a 
single passage is much more difficult.

I have also used this in conjunction with the Google News feed, where many 
articles on a single topic can be gathered in a short period and used to 
reinforce the extracted information.
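A hypothetical sketch of that aggregation step (the extractor below is a
crude stand-in for real learned patterns): the same relation seen across
many articles gains confidence, while a one-off claim stays near the noise
floor.

# Sketch: reinforce a fact by counting how many independent articles
# assert it; the extractor is a toy stand-in for real pattern matching.

from collections import Counter

def extract_triples(text):
    """Toy extractor: 'X in Y' -> (X, located_in, Y)."""
    triples = []
    for line in text.lower().splitlines():
        words = line.split()
        if "in" in words[1:-1]:
            i = words.index("in", 1)
            triples.append((words[i - 1], "located_in", words[i + 1]))
    return triples

def aggregate(articles):
    """Confidence = fraction of articles asserting the triple."""
    counts = Counter()
    for text in articles:
        counts.update(extract_triples(text))
    return {t: n / len(articles) for t, n in counts.items()}

articles = ["socks in closets"] * 40 + ["socks in freezers"]
facts = aggregate(articles)
# ('socks', 'located_in', 'closets') -> ~0.98; the freezer claim -> ~0.02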

James Ratcliff


Vladimir Nesov [EMAIL PROTECTED] wrote:

On Dec 13, 2007 12:09 AM, James Ratcliff wrote:
> Mainly as a primer ontology / knowledge-representation data set for an
> AGI to work with. Having a number of facts known without having to be
> typed in, about many frames and the connections between frames, gives
> an AGI a good booster to start with.
>
> Taking a simple set of common household words (chair, table, sock,
> closet, etc.), a house AGI bot could get a feel for the objects it
> would expect to find in a house, what locations to look in for, say, a
> sock, and the properties of a sock, without having that information
> typed in by a human user. Then that information would be updated
> through experience, and with a human trainer working with an embodied
> (probably virtual) AGI.


Yes, that's how the story usually goes. But if you don't specify how the
ontology will be used, why do you believe it will be more useful than the
original texts? Probably by the point you are able to make use of an
ontology, you'd also be able to analyze the texts directly (that is, if
you aim that high; otherwise it's a different issue entirely).
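To make the quoted proposal concrete, here is a minimal sketch (my guess at
the data layout, not James's actual system) of a primer ontology of frames
with expected locations, plus the experience-driven update he mentions:

# A primer ontology of frames: typed objects, properties, and expected
# locations, available before the agent has any experience of its own.

ontology = {
    "sock":   {"isa": "clothing",  "properties": ["soft", "wearable"],
               "expected_locations": ["closet", "drawer", "floor"]},
    "chair":  {"isa": "furniture", "properties": ["sittable"],
               "expected_locations": ["kitchen", "living_room"]},
    "closet": {"isa": "location",  "properties": ["container"],
               "expected_locations": ["bedroom", "hallway"]},
}

def where_to_look(obj):
    """Prior search order for an object, straight from the primer."""
    return ontology.get(obj, {}).get("expected_locations", [])

def reinforce(obj, location):
    """Update the prior from embodied experience: a location where the
    object was actually found moves to the front of the search order."""
    locs = ontology.setdefault(obj, {}).setdefault("expected_locations", [])
    if location in locs:
        locs.remove(location)
    locs.insert(0, location)

print(where_to_look("sock"))    # ['closet', 'drawer', 'floor']
reinforce("sock", "floor")      # the bot found a sock on the floor
print(where_to_look("sock"))    # ['floor', 'closet', 'drawer']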


-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]





Re: [agi] AGI and Deity

2007-12-13 Thread Lukasz Stafiniak
Under this thread, I'd like to bring your attention to Serial
Experiments: Lain, an interesting pre-Matrix (1998) anime.



Re: [agi] AGI and Deity

2007-12-13 Thread Jiri Jelinek
Stan,

> there are believers on this list. I am one of them.
> I have notes about a write-up on "Will a Strong AI pray?"

An AGI may experiment with prayer if fed data suggesting that it
actually helps, but IMO it would quickly conclude that it's a waste of
its resources.

Studies (when done properly) show that it doesn't work for humans either:
http://www.hno.harvard.edu/gazette/2006/04.06/05-prayer.html
Where it does help humans in some ways, the same results can be
achieved using other techniques that have nothing to do with prayer or
deities. IMO it's obvious by now that man will not gain much unless he
gets off his knees and actually does something about himself.
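A toy version of that conclusion (all numbers invented): an agent that keeps
experimenting with an action only while its measured benefit exceeds its
resource cost.

# Toy decision rule: keep an action only while its measured benefit
# exceeds its resource cost (all numbers invented).

def worth_keeping(observed_effects, cost_per_trial):
    if not observed_effects:
        return True    # untested: worth at least one trial
    mean_effect = sum(observed_effects) / len(observed_effects)
    return mean_effect > cost_per_trial

# Properly controlled trials show no real effect (mean ~ 0), so the
# action is abandoned as a waste of resources.
trials = [0.1, -0.2, 0.0, 0.05, -0.1]
print(worth_keeping(trials, cost_per_trial=0.05))   # False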

Regards,
Jiri Jelinek
