Terren,
OK, you hooked me. A virgin is something I haven't been called (or even
been associated with) in about forty-five years. So, I feel compelled to
defend my non-virginity at all costs. I'm 58 now. You do the math (don't
forget to subtract for the 30 years I was married). ;-) My widowed
girlfriend of the last eight years is a mother of two
30-something-year-olds (a boy and a girl) and four grandchildren, ages 11
(going on 16) down to 2. All girls! The woman is post-menopausal and
insatiable! A little Astroglide (thank you, NASA!) and we're "ready to
rumble." No birth control required! Sex under 50? OK. Sex after 50? To
the moon!! The Bradster is one lucky puppy. So there!
I thought orgasms were cool, too. Until I died. Now THAT was cool. So,
for orgasms, it's sort of a quantity vs. quality thing for me these days.
I'll eventually get to do that dying thing again (probably just once,
though). But between now and then, I hope to have lots and lots of
orgasms! Not as cool as dying, but a bit easier to come by. (I won't say
it if you don't think it!) ;-)
Err ... I don't have to mention that I didn't stay dead, do I? Good.
I don't recall whether or not I said one could describe an orgasm to a
virgin in lieu of experiencing the "real thing." But, the AGI I have in
mind is of the non-Turing/Loebner, non-orgasmic type, so the description
will just have to do. In my design, this is required only so the AGI can
empathize with human experience. It may need to know what a "happy ending"
is, but it doesn't have to have one. Who knows, though? Maybe we've
finally discovered that it's not Microsoft's fault we have to re-boot
Windows at least twice a day. Maybe a re-boot is sort of like an orgasm
for Windows? Explains that little "happy chiming sound" it makes during
boot-up, right? Maybe, just maybe, Windows was, to quote Steely Dan,
"programmed by fellows with compassion and vision."
Anyhow, that example fits with views I've expressed when explaining how my
AGI design requires the AGI to "empathize" with human experiences without
actually having to have them. So, maybe I did say that. Since I have no
intention of developing a
Turing/Loebner AGI, the ability to empathize is all my design really needs.
And, it may not even need that. "Benign indifference" may be enough. My
design is still evolving even as I work on the implementation (it's a big
job and I'm only one man).
If I do my job right, my AGI will have no "sense of self." I achieve that,
mostly, by building a non-embodied AGI. Embodiment leads directly to a
sense of self which leads inexorably to an "I am me and you're not" world
view. I don't know about you, but an AGI with a sense of self gives me the
willies. Turns out, by NOT bestowing a sense of self on a
non-Turing/Loebner AGI, one does away with a great many rather sticky
problems in the area of morality and ethics. How do I know what it's like
to not have a sense of self? Ahhhh... That's where the dying but not
really dying part fits the puzzle. Talk about experiences that are hard to
explain! But, that's another topic for another thread.
Now, to the meaty stuff... You wrote: "... the really interesting question
I would pose to you non-embodied advocates is: how in the world will you
motivate your creation?"
Some animals and all humans are motivated to maximize pleasure and minimize
pain. This requires the existence of a brain and a nervous system,
preferably both peripheral and central. In animals other than humans, the
higher-order primates, and some other mammals, motivation is more typically
called instinct. The difference? Motivations are usually conscious and
somewhat malleable. Instincts are usually not. To be sure, there is some
gray area here, but not enough, I think, to derail my argument on its own.
While
human motivations may appear more complex, this is almost always because
they are more abstract. They can usually be boiled down to fit the
pleasure/pain model (i.e., reward/punishment). There has been some
interesting recent work on altruism reported in the cog sci literature.
When I can lay hands on some URIs, I'll post them here.
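To make that "boiling down" concrete, here's a toy sketch in Python. It's
purely illustrative (the names are mine, and it is emphatically NOT a piece
of my AGI design); it just shows that the pleasure/pain model reduces to a
signed reward signal:

    def choose_action(actions, expected_reward):
        # The pleasure/pain model in one line: pick whichever action is
        # expected to maximize pleasure (reward > 0) and minimize pain
        # (reward < 0).
        return max(actions, key=expected_reward)

    # A creature deciding what to do next:
    print(choose_action(["eat", "touch hot stove", "sleep"],
                        {"eat": 5, "touch hot stove": -10, "sleep": 2}.get))

All the abstraction in human motivation lives in how expected_reward gets
computed, not in the decision rule itself.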
With that conceptual background established, my reply is that your question
contains the implicit assumption that we "non-embodied advocates" are planning
to build Turing/Loebner AGIs. Some of us may be. I am not. Since my AGI
model is not of the T/L variety, motivation does NOT apply. But, I'm
prepared to meet you halfway and cop to instinct. My AGI WILL have at
least one overriding instinct. I've discussed it here recently (but it
seemed most people who commented on my post didn't fully "get it"). Here
it is:
My AGI will be equipped with an instinctual drive to resolve cognitive
dissonance (simulated, of course) engendered by its own inability to
understand or answer queries posed by humans (or other AGIs). I hasten to
point out that having or acting on instinct does not require having a sense
of self. Human-type motivation probably does.
Remember my "list of things we know we don't know?" My AGI's instinct is
to keep that list as short as possible. As implemented, it will simply be
a daemon process that hits the TIKIDK (pronounced "tee-kee-dick" -- cool
mnemonic, eh?) "list" whenever the AGI has "nothing better to do" (e.g.,
while it's "sleeping"). Kind of like the subconscious processes that
cognitive science and psychology speculate are continually moving the
furniture around in our brains. My AGI will instinctively seek its
missing knowledge from humans (by asking them questions when possible) and
from the Googleverse (via "subconscious" queries). If it could "think"
about it, it would consider both humans and the Googleverse to be a part of
the universe that just is. Sounds a bit weird, doesn't it? Sorry, but
that will have to do for now. Explaining concepts some entity with no
sense of self might have is almost impossible using a language developed by
(and, therefore, tailored to explaining things to) entities with a strong
sense of self. Frustrating. Fortunately, my AGI won't have to think about
stuff like that since it will have no "self" to "answer to." ;-)
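In case the daemon idea is still too abstract, here's a minimal sketch in
Python of how I picture the TIKIDK loop. Fair warning: every name in it
(TikidkDaemon, ask_human, search_googleverse) is made up for illustration,
not lifted from my actual implementation:

    import time
    import queue

    class TikidkDaemon:
        # Toy sketch of the TIKIDK ("tee-kee-dick") daemon.

        def __init__(self):
            # Open questions, oldest first.
            self.tikidk = queue.PriorityQueue()

        def note_dissonance(self, query):
            # Called whenever the AGI fails to understand or answer a
            # query: the failure itself is what puts the item on the list.
            self.tikidk.put((time.time(), query))

        def idle_cycle(self, ask_human, search_googleverse):
            # Runs only when the AGI has "nothing better to do" (e.g.,
            # while "sleeping"), chipping away at the list the way our
            # subconscious is said to rearrange the furniture.
            while not self.tikidk.empty():
                _, query = self.tikidk.get()
                answer = ask_human(query) or search_googleverse(query)
                if answer is None:
                    # Still unknown: put it back and wait for the next
                    # idle period rather than spin on it.
                    self.note_dissonance(query)
                    break

The one "instinct" lives entirely in note_dissonance() plus idle_cycle():
keep the list short, ask humans first, fall back on the Googleverse, and
never once consult a "self."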
Cheers,
Brad
Terren Suydam wrote:
I mean trick in the same sense that an orgasm is a trick to get us to have
sex more often. We're designed to chase experiences of pleasure, so evolution
makes things pleasurable that lead to us optimizing our genetic fitness.
They're tricks because from a designer's point of view there's nothing about
reproduction that requires pleasure. Pleasure and pain are peculiar aspects
of embodied experience - strictly speaking they are motivators and
de-motivators, but what actually motivates us humans is the subjective feel
of orgasm, pain, fun, and so on. That's difficult to reconcile if you don't
believe embodiment is all that important.
You cannot encode an orgasm, or fun, or any other qualia simply by describing
it. You cannot describe the experience of sight to the congenitally blind. I
believe Brad said you can explain sex to a virgin, and this utterance made me
wonder if Brad is a virgin. No offense, Brad, but until I had my first
orgasm, I wondered what all the fuss was about. After I had one, I was
downright methodical about reproducing the experience. :-]
I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is: how in
the world will you motivate your creation? I suppose that you won't. You'll
just tell it what to do (specify its goals) and it will do it, because it has
no autonomy at all. Am I guilty of anthropomorphizing if I say autonomy is
important to intelligence?
Terren
--- On Mon, 8/25/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
What do you mean by "trick"? Fun of playing is evolutionarily encoded,
no tricks. You can try to encode it into a seed AI by adding a reference
to an actual kitten in the right way, saying "fun is that thing over
there!" without saying what it is explicitly, and providing this AI with
a kitten. How to do it technically is of course a Friendly AI-complete
problem, but its solution doesn't need to include all the fine points of
the fun concept itself. On this subject, see:
http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
-- in what sense AI can be a mirror for a complex concept instead of a
pencil sketch explicitly hacked together by programmers;
http://www.overcomingbias.com/2008/08/unnatural-categ.html
-- why the morality concept needs to be transferred in all details, and
can't be learned from a few examples;
http://www.overcomingbias.com/2008/08/computations.html
-- what a real-life concept may look like.
--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/