Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Jean-Paul Van Belle
Hey, but it makes for an excellent quote. Facts don't have to be true if they're 
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of 
apocryphal facts will surface... most not even vaguely true... You should be 
proud and happy! To quote Mr Bean: 'Well, I enjoyed it anyway.'



 Eliezer S. Yudkowsky [EMAIL PROTECTED] 06/05/07 4:38 AM 
Mark Waser wrote:
  
 P.S.  You missed the time when Eliezer said at Ben's AGI conference 
 that he would sneak out the door before warning others that the room was 
 on fire :-)

This absolutely never happened.  I absolutely do not say such things, 
even as a joke, because I understand the logic of the multiplayer 
iterated prisoner's dilemma - as soon as anyone defects, everyone gets 
hurt.
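
A toy simulation of the dynamic described here (a multiplayer IPD with
grim-trigger players; the payoff values and parameters are invented for
illustration, not taken from the post):

# Multiplayer iterated prisoner's dilemma, grim-trigger style: everyone
# cooperates until any defection is observed, then all defect forever,
# so a single early defection drags every player's long-run payoff down.

def payoff(my_move, others):
    # Public-goods-style payoff (illustrative numbers): each cooperator
    # adds 2 to everyone's score; cooperating costs 3.
    cooperators = others.count("C") + (1 if my_move == "C" else 0)
    return 2 * cooperators - (3 if my_move == "C" else 0)

def play(n_players=5, rounds=100, with_defector=False):
    triggered = False
    totals = [0] * n_players
    for r in range(rounds):
        moves = ["D" if (i == 0 and with_defector and r == 0) or triggered
                 else "C" for i in range(n_players)]
        triggered = triggered or "D" in moves
        for i in range(n_players):
            totals[i] += payoff(moves[i], moves[:i] + moves[i + 1:])
    return totals

print("all cooperate:", play())                    # [700, 700, 700, 700, 700]
print("one defection:", play(with_defector=True))  # [8, 5, 5, 5, 5]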

Some people who did not understand the IPD, and hence could not 
conceive of my understanding the IPD, made jokes about that because 
they could not conceive of behaving otherwise in my place.  But I 
never, ever said that, even as a joke, and was saddened but not 
surprised to hear it.

-- 
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


Re: [agi] credit attribution method

2007-06-05 Thread Jean-Paul Van Belle
Ok, Panu, I agree with *your statement* below.

[Meta: Now how much credit do I get for operationalizing your idea?]


 Panu Horsmalahti [EMAIL PROTECTED] 06/04/07 10:42 PM 
Now, all we need to do is find 2 AGI designers who agree on something.



Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Eliezer S. Yudkowsky

Hm.  Memory may be tricking me.

I did a deeper scan of my mind, and found that the only memory I 
actually have is that someone at the conference said that they saw I 
wasn't in the room that morning, and then looked around to see if 
there was a bomb.


I have no memory of the fire thing one way or the other, but it 
sounds like a plausible distortion of the first event after a few 
repetitions.   Or maybe the intended meaning is that, if I saw a fire 
in a room, I would leave the room first to make sure of my own safety, 
and then shout "Fire!" to warn everyone else?  If so, I still don't 
remember saying that, but it doesn't have the same quality of being 
the first to defect in an iterated prisoner's dilemma - which is the 
main thing I feel I need to emphasize heavily that I will not do; no, 
not even as a joke, because talking about defection encourages people 
to defect, and I won't be the first to talk about it, either.


So I guess the moral is that I shouldn't toss around the word 
"absolutely" - even when the point needs some heavy moral emphasis - 
about events so far in the past.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] Minimally ambiguous languages

2007-06-05 Thread Bob Mottram

I remember last year there was some talk about possibly using Lojban
as a language with which to teach an AGI in a minimally ambiguous
way.  Does anyone know if the same level of ambiguity found in
ordinary English language also applies to sign language?  I know very
little about sign language, but it seems possible that the constraints
applied by the relatively long time periods needed to produce gestures
with arms/hands compared to the time required to produce vocalizations
may mean that sign language communication is more compact and maybe
less ambiguous.

Also, comparing the way that the same concepts are represented using
spoken and sign language might reveal something about how we normally
parse sentences.



Re: [agi] credit attribution method

2007-06-05 Thread YKY (Yan King Yin)

On 6/5/07, Panu Horsmalahti [EMAIL PROTECTED] wrote:

Now, all we need to do is find 2 AGI designers who agree on something.


My guess is that *after* people see and discuss each other's ideas, they'll
be more likely to change their views and be able to synthesize them.  At
first we may see a few main projects each with their own followers.

YKY


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread BillK

On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:

I remember last year there was some talk about possibly using Lojban
as a language with which to teach an AGI in a minimally ambiguous
way.  Does anyone know if the same level of ambiguity found in
ordinary English language also applies to sign language?  I know very
little about sign language, but it seems possible that the constraints
applied by the relatively long time periods needed to produce gestures
with arms/hands compared to the time required to produce vocalizations
may mean that sign language communication is more compact and maybe
less ambiguous.

Also, comparing the way that the same concepts are represented using
spoken and sign language might reveal something about how we normally
parse sentences.



http://en.wikipedia.org/wiki/Basic_English

Ogden's rules of grammar for Basic English allow people to use the
850 words to talk about things and events in the normal English way.
Ogden did not put any words into Basic English that could be
paraphrased with other words, and he strove to make the words work for
speakers of any other language. He put his set of words through a
large number of tests and adjustments. He also simplified the grammar
but tried to keep it normal for English users.

More recently, it has influenced the creation of Simplified English, a
standardized version of English intended for the writing of technical
manuals.


BillK



Re: [agi] credit attribution method

2007-06-05 Thread YKY (Yan King Yin)

On 6/5/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:

[Meta: Now how much credit do I get for operationalizing your idea?]

We can have some default fixed values for relatively small contributions,
such as the ones we're having now in this brain-storming session.

I think we'll maintain a tree and linked-list hybrid data structure.
AGI would be at the root.  Then we allow users to add nodes like
Novamente's breakdown of AGI modules into A, B, C,... and YKY's breakdown
of AGI modules... etc.  Also some nodes may be temporally linked, ie task
A can be achieved by building X followed by Y.  And we need a user
interface to navigate such a tree structure.
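
A minimal sketch of such a structure (all names, contributors, and credit
values below are illustrative, not from the proposal):

# Hybrid tree / linked-list: contribution nodes form a tree with AGI at
# the root, and temporal links chain tasks that must be built in sequence.

class Node:
    def __init__(self, title, contributor=None, credit=0.0):
        self.title = title
        self.contributor = contributor
        self.credit = credit        # e.g. a default fixed value for small ideas
        self.children = []          # tree edges: breakdown into sub-modules
        self.next_task = None       # temporal link: "X followed by Y"

    def add_child(self, child):
        self.children.append(child)
        return child

root = Node("AGI")
breakdown = root.add_child(Node("Novamente-style breakdown"))
x = breakdown.add_child(Node("module X", contributor="Ben", credit=1.0))
y = breakdown.add_child(Node("module Y", contributor="YKY", credit=1.0))
x.next_task = y                     # task achieved by building X, then Y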

I hope we can keep it as simple as possible.  After all, it's the AGI 
project(s) we should be interested in!

YKY


Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson

On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:

Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.



I suspect an AGI that executes one fixed unchangeable program is not
physically possible.

 Will Pearson



Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread Ricardo Barreira

On 6/5/07, William Pearson [EMAIL PROTECTED] wrote:

On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 Suppose you build a human level AGI, and argue
 that it is not autonomous no matter what it does, because it is
 deterministically executing a program.


I suspect an AGI that executes one fixed unchangeable program is not
physically possible.


What do you mean by "one fixed unchangeable program"? That seems
like nonsense to me... There's no necessary distinction between a program
and its data, so that concept is useless.

Ricardo



Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread William Pearson

On 05/06/07, Ricardo Barreira [EMAIL PROTECTED] wrote:

On 6/5/07, William Pearson [EMAIL PROTECTED] wrote:
 On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  Suppose you build a human level AGI, and argue
  that it is not autonomous no matter what it does, because it is
  deterministically executing a program.
 

 I suspect an AGI that executes one fixed unchangeable program is not
 physically possible.

What do you mean by "one fixed unchangeable program"? That seems
like nonsense to me... There's no necessary distinction between a program
and its data, so that concept is useless.


A function in the mathematical sense is a fixed unchangeable program,
though I'd agree that there is no distinction between program and
data. I may have interpreted the sentence incorrectly, but the
implication I got was that because a human supplied the program that
the computer ran to be intelligent, the computer was not autonomous.
Now, as you have pointed out, data can be seen as a program, and since an
intelligent system is sure to have acquired its own data, what
determines its behaviour and learning is not fully specified by
humans; therefore it can be considered autonomous to some degree.

If, however, he was referring to questions of autonomy based upon
the idea that autonomous systems cannot be made out of pieces that unthinkingly
follow rules, then humans, to the best of our understanding, would
not be autonomous by this standard. So this meaning of autonomous is
useless, which is why I assumed he meant the initial meaning.

I would also go further than that and say that a system that can't
treat what determines its external behaviour and how it learns as
data does not seem to be a good candidate for an intelligent system,
because surely one of the pillars of intelligence is self-control. We
have examples of systems that are pretty good at self-control in
modern PCs; however, they are not suited to self-experimentation in the
methods of control.


 Will Pearson



Re: [agi] Open AGI Consortium

2007-06-05 Thread Mark Waser
This is the kind of control freak tendency that  makes many startup 
ventures untenable; if you cannot give up some  control (and I will grant 
such tendencies are not natural), you might  not be the best person to be 
running such a startup venture.


Yup, my suggestion of giving control to five or six trustworthy owners is 
definitely the epitome of a control freak. :-)


Why all the emotion?

Blue sky ventures and maintaining control are pretty much in  opposition 
to each other if you do not want to marginalize your  funding 
opportunities.  The lack of intrinsic capital is going to  make things 
tough, because the only real currency you have *is* control.


No, the real currency that I want to have is an awesome talent pool and some 
good demonstrable progress before we look for additional funding.


I don't have a need for control.  I insist upon the boundary that the AGI 
must be protected and not able to be used to take over the world.


Yes, that is going to reduce my funding opportunities -- but it's a 
requirement that I'm not willing to concede and I will black-ball any 
trustworthy owner candidates who show *any* signs of being willing to 
concede it.





Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Mark Waser
This absolutely never happened.  I absolutely do not say such things, even 
as a joke


Your recollection is *very* different from mine.  My recollection is 
that you certainly did say it as a joke, but that I was *rather* surprised 
that you would say such a thing even as a joke.  If anyone else would like 
to chime in (since several members of this list were in attendance) it 
might be interesting . . . . (or we could go back to the video, since it was 
part of a panel that was videotaped -- if it isn't in the video, I am 
certainly willing to apologize, but I'd be *very* puzzled since I've never 
had such a vivid recollection be shown to be incorrect before).





Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
= Jean-Paul
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 BillK [EMAIL PROTECTED] 2007/06/05 11:18:49 
On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
 I remember last year there was some talk about possibly using Lojban
 as a language with which to teach an AGI in a minimally ambiguous
 way.  Does anyone know if the same level of ambiguity found in
 ordinary English language also applies to sign language?  I know very
 little about sign language, but it seems possible that the constraints
 applied by the relatively long time periods needed to produce gestures
 with arms/hands compared to the time required to produce vocalizations
 may mean that sign language communication is more compact and maybe
 less ambiguous.

 Also, comparing the way that the same concepts are represented using
 spoken and sign language might reveal something about how we normally
 parse sentences.


http://en.wikipedia.org/wiki/Basic_English

Ogden's rules of grammar for Basic English allow people to use the
850 words to talk about things and events in the normal English way.
Ogden did not put any words into Basic English that could be
paraphrased with other words, and he strove to make the words work for
speakers of any other language. He put his set of words through a
large number of tests and adjustments. He also simplified the grammar
but tried to keep it normal for English users.

More recently, it has influenced the creation of Simplified English, a
standardized version of English intended for the writing of technical
manuals.


BillK


Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread James Ratcliff
#7, #8, #9

Money is good, but the overall AGI theory and program plan are the most 
important aspects.

James Ratcliff

YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Can people rate the following 
things?
  
 1. quick $$, ie salary
 2. long-term $$, ie shares in a successful corp
 3. freedom to do what they want
 4. fairness
 5. friendly friends
 6. the project looks like a winner overall
 7. knowing that the project is charitable
 8. special AGI features they look for (eg a special type of friendliness, pls 
specify)
 9. a particular AGI theory
 10. average level of expertise in the group
 11. others?
  
 Thanks in advance, it'd be hugely helpful... =)
 YKY
 


___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
Your brain can be simulated on a large/fast enough von Neumann 
architecture.

From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to the human brain. In our
brains, information not only is and moves but also feels.


It's my belief/contention that a sufficiently complex mind will be conscious 
and feel -- regardless of substrate.



It's meaningless to take action without feelings - you are practically
dead - there is just some mechanical device trying to make moves in
your way of thinking. But thinking is not our goal. Feeling is. The
goal is to not have goal(s) and safely feel the best forever.


"Feel the best forever" is a hard-wired goal.  What makes you feel good are 
hard-wired goals in some cases and trained goals in other cases.  As I've 
said before, I believe that human beings only have four primary goals (being 
safe, feeling good, looking good, and being right).  The latter two, to me, 
are clearly sub-goals but it's equally clear that some people have 
mistakenly raised them to the level of primary goals.


If you can't, then you must either concede that feeling pain is possible 
for a simulated entity...

It is possible. There are just good reasons to believe that it takes
more than a bunch of semiconductor based slots storing 1s and 0s.


Could you specify some of those good reasons (i.e. why a sufficiently 
large/fast enough von Neumann architecture isn't sufficient substrate for a 
sufficiently complex mind to be conscious and feel -- or, at least, to 
believe itself to be conscious and believe itself to feel -- and isn't that a 
nasty thought twist? :-))?





Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Mike Tintner
Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
= Jean-Paul

How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.


Re: [agi] Open AGI Consortium

2007-06-05 Thread James Ratcliff
It will be very hard at that point to hold up in court, given that the AGI 
must choose who gets what, 'cause there sure ain't no precedent for a 
non-legal-entity like an AI making legal decisions.
  Will have to have it declared a person first.

James Ratcliff

Benjamin Goertzel [EMAIL PROTECTED] wrote: 
So you are going to make a special set of corporate bylaws that disentangle 
shares from control?

Hmmm...

Something like: the initial trustworthy owners are given temporary 
trusteeship over the shares, but are then bound to distribute them according to 
the wishes of the AGI once the AGI passes some threshold level of 
intelligence??   

I suppose that could work...

I know the Frankfurter Allgemeine Zeitung (famous German newspaper) is operated 
by each of the 5 publishers being given trusteeship over 1/5 of the shares ... 
but then they pass this trusteeship along to their successors when they 
retire... 

-- Ben G

On 6/3/07, Mark Waser [EMAIL PROTECTED] wrote: :-) The ones 
controlling the company are that set of trustworthy owners that I mentioned 
before.  One of the reasons why I'm not giving out intermediate options is to 
prevent questions/problems like this.

I *do* understand pretty well how VCs think/operate and the biggest drawback 
is going to be that, in order to protect the AGI, we're not going to be 
willing to give up a majority share.
- Original Message - 
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Sunday, June 03, 2007 9:08 PM
Subject: Re: [agi] Open AGI Consortium


Because, unless they take a majority share, they want to know who it is 
they're dealing with... i.e. who is controlling the company.

One of the most important things an investor looks at is THE PEOPLE who are 
controlling the company, and in your scheme, it is not clear who that is... 

Yes, you can say "I control the company" even though I don't have a 
controlling set of shares, but investors are not likely to trust this, 
because they view financial ownership as the essence of motivation [since 
that is what motivates them, by and large]



___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Mark Waser
I did a deeper scan of my mind, and found that the only memory I actually 
have is that someone at the conference said that they saw I wasn't in the 
room that morning, and then looked around to see if there was a bomb.


My memory probably was incorrect in terms of substituting "fire" for "bomb" 
(since the effect is much the same).


Or maybe the intended meaning is that, if I saw a fire in a room, I would 
leave the room first to make sure of my own safety, and then shout "Fire!" 
to warn everyone else?


I believe that that was indeed the context (with the probability that it was 
"bomb" instead of "fire").



about events so far in the past.


It wasn't that long ago! :-)




Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Mark Waser
Actually, information theory would argue that if the greater compactness were 
driven by carrying less information due to a low transmission speed/bandwidth, 
then you would likely have more ambiguity (i.e. less information on the 
receiving side), not less.
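
A back-of-envelope version of the argument (all rates and inventory sizes
below are invented for illustration): a channel that emits symbols more
slowly carries fewer bits, so fewer meanings can be distinguished per
utterance, and the rest must be resolved by context -- i.e. ambiguity.

import math

def distinguishable_meanings(symbols_per_sec, alphabet_size, seconds):
    # Channel-capacity bound: at most 2^(rate * log2(alphabet) * time)
    # messages can be told apart by the receiver.
    bits = symbols_per_sec * math.log2(alphabet_size) * seconds
    return 2 ** bits

# Hypothetical rates: ~12 phonemes/s over a 40-phoneme inventory versus
# ~6 gestures/s over a 40-gesture inventory.
print(distinguishable_meanings(12, 40, 1.0))   # ~1.7e19 in one second
print(distinguishable_meanings(6, 40, 1.0))    # ~4.1e9 -- far fewer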


Also, there have been numerous studies comparing spoken and sign languages 
in terms of sentence structure.  The most interesting ones (for both spoken 
and sign) are the ones dealing with languages that are invented by small 
groups who haven't been previously exposed to other languages. 
Unfortunately, I don't have access to specific references currently.





Re: [agi] credit attribution method

2007-06-05 Thread Mark Waser
 My guess is that *after* people see and discuss each other's ideas, they'll 
 be more likely to change their views

Like Ben and Pei and Peter and Eliezer and Sam and Richard and . . . . ?  What 
are you basing your guess on?


Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread James Ratcliff
Sorry, I noticed that after I posted: acting autonomously, given that it is 
acting intelligently as well.
  I was assuming the existence of an AGI / intelligent machine, and being asked 
about the consciousness of that.

  An AGI that plans, reasons, and acts autonomously would be conscious.

Where the actual acting, of course, must be measured to some degree.

Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.

It appears from the discussions here that it is virtually impossible to 
determine if an intelligence (AGI/human) acts deterministically, so any actions 
that reach a certain level of complexity must be gauged to be 
non-deterministic; or conversely, if the AGI acts complexly enough, at the level 
of human action, then it would be considered autonomous.

James Ratcliff

Matt Mahoney [EMAIL PROTECTED] wrote: --- James Ratcliff  wrote:

 
 But you haven't answered my question.  How do you test if a machine is
 conscious, and is therefore (1) dangerous, and (2) deserving of human
 rights?
 
 Easily, once it acts autonomously, not based on your direct given goals and
 orders, when it begins acting and generating its own new goals.  
 After that all bets are off, and it's a 'being' in its own right.

Not so easy.  A random number generator acts autonomously, or at least appears
to.  A malfunctioning flight control system that refuses to obey commands from
the pilot also appears to be acting autonomously, according to goals not
specified by its builders.  Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.


-- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] credit attribution method

2007-06-05 Thread Mark Waser
 I think we'll maintain a tree and linked-list hybrid data structure.  
 AGI would be at the root.  Then we allow users to add nodes like 
 Novamente's breakdown of AGI modules into A, B, C,... and YKY's breakdown 
 of AGI modules... etc.  Also some nodes may be temporally linked, ie task 
 A can be achieved by building X followed by Y.  And we need a user 
 interface to navigate such a tree structure.

 I hope we can keep it as simple as possible.  After all, it's the AGI 
 project(s) we should be interested in!

But instead, someday real soon now, you're going to realize that such a credit 
attribution structure *is* fundamentally isomorphic to AGI.


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread J Storrs Hall, PhD
As I understand it, true sign language (e.g. ASL) has its own syntax and to 
some extent its own vocabulary. The slowness of sign language lies almost 
entirely in those artificial variants where there has been an attempt to 
transliterate the spoken language into a set of gestures. Natively signed 
language is at least as fast and expressive as spoken, possibly more so. I'm 
fairly sure the bottleneck in both cases is the mental production of the 
string of symbols, not their physical enactment.

My operating theory, not original, is that language arose from the ability to 
watch another's hands and understand what they were doing. (The fact that 
signed language operates at "native" as opposed to "emulated" speed and 
breadth tends to support this.) This is the angle I'm attacking language from 
in Tommy -- have him interpret sequences of actions, and see what kind of 
mechanism that forces me to build.

Josh


On Tuesday 05 June 2007 05:00:49 am Bob Mottram wrote:
 I remember last year there was some talk about possibly using Lojban
 as a language with which to teach an AGI in a minimally ambiguous
 way.  Does anyone know if the same level of ambiguity found in
 ordinary English language also applies to sign language?  I know very
 little about sign language, but it seems possible that the constraints
 applied by the relatively long time periods needed to produce gestures
 with arms/hands compared to the time required to produce vocalizations
 may mean that sign language communication is more compact and maybe
 less ambiguous.
 
 Also, comparing the way that the same concepts are represented using
 spoken and sign language might reveal something about how we normally
 parse sentences.
 
 
 




Re: [agi] Pure reason is a disease.

2007-06-05 Thread James Ratcliff
To get any further with feelings you again have to have a better definition 
and examples of what you are dealing with.

In humans, most feelings and emotions are brought about by chemical changes 
in the body, yes?  Then from there they become knowledge in the brain, which we 
use to make decisions and act upon.

Is there more to it than that?  (simplified overview)

Simply replacing the chemical parts with machine code easily allows an AGI to 
feel most of these feelings.  Mechanical sensors would allow a robot to 
feel/sense being touched or hit, and a brain could react upon this.  Even a 
simulated AGI virtual agent could and does indicate a preference for not being 
shot, or being in pain, and running away, and could easily show a preference 
like/feeling for certain faces or persons it finds 'appealing'.
   This can all be done using algorithms and learned/preferred behavior of 
the bot, with no mysterious 'extra' bits needed.

Many people have posted and argued the ambiguous statement:
  "But an AGI can't feel feelings."
I'm not really sure what this kind of sentence means, because we can't even say 
whether or how humans feel feelings.
  If we can define these in some way that is devoid of all logic, and has 
something that an AGI CAN'T do, I would be interested.

An AGI should be able to have feelings and will benefit from them; it will act, 
reason, and believe that it has these feelings, and they will give it a greater 
range of abilities later in its life cycle.

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote: Your brain can be simulated on a 
large/fast enough von Neumann 
architecture.
 From the behavioral perspective (which is good enough for AGI) - yes,
 but that's not the whole story when it comes to the human brain. In our
 brains, information not only is and moves but also feels.

It's my belief/contention that a sufficiently complex mind will be conscious 
and feel -- regardless of substrate.

 It's meaningless to take action without feelings - you are practically
 dead - there is just some mechanical device trying to make moves in
 your way of thinking. But thinking is not our goal. Feeling is. The
 goal is to not have goal(s) and safely feel the best forever.

"Feel the best forever" is a hard-wired goal.  What makes you feel good are 
hard-wired goals in some cases and trained goals in other cases.  As I've 
said before, I believe that human beings only have four primary goals (being 
safe, feeling good, looking good, and being right).  The latter two, to me, 
are clearly sub-goals but it's equally clear that some people have 
mistakenly raised them to the level of primary goals.

 If you can't, then you must either concede that feeling pain is possible 
 for a simulated entity...
 It is possible. There are just good reasons to believe that it takes
 more than a bunch of semiconductor based slots storing 1s and 0s.

Could you specify some of those good reasons (i.e. why a sufficiently 
large/fast enough von Neumann architecture isn't sufficient substrate for a 
sufficiently complex mind to be conscious and feel -- or, at least, to 
believe itself to be conscious and believe itself to feel -- and isn't that a 
nasty thought twist? :-))?





___
James Ratcliff - http://falazar.com
Looking for something...
 

Re: [agi] Pure reason is a disease.

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 10:51:54 am Mark Waser wrote:
 It's my belief/contention that a sufficiently complex mind will be conscious 
 and feel -- regardless of substrate.

Sounds like Mike, the computer in "The Moon Is a Harsh Mistress" (Heinlein). Note, 
btw, that Mike could be programmed in Loglan (predecessor of Lojban).

I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, self-monitoring 
architecture. There has to be a certain system complexity for it to make any 
sense, but something of the complexity of, say, Linux could be made conscious (and 
would work better if it were). That said, I think consciousness is necessary 
but not sufficient for moral agency.
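
A minimal skeleton of the kind of model-based, summarizing, self-monitoring
loop described here (purely illustrative -- not a claim about Josh's actual
architecture; all state names are invented):

class SelfMonitor:
    # Keeps a compressed summary of its own recent state and consults
    # that self-model when deciding how to act.
    def __init__(self):
        self.self_model = {}

    def summarize(self, raw_state):
        # Store coarse aggregates only, not the full raw state.
        self.self_model = {k: round(v, 1) for k, v in raw_state.items()}

    def act(self, percept):
        self.summarize({"load": percept * 0.5, "confidence": 0.9})
        # Decisions can now reference the model of the system itself.
        return "defer" if self.self_model["load"] > 5 else "proceed"

m = SelfMonitor()
print(m.act(4), m.act(20))   # proceed defer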

Josh



Re: [agi] Minimally ambiguous languages

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 11:49:11 am Mark Waser wrote:

 Also, there have been numerous studies comparing spoken and sign languages 
 in terms of sentence structure.  The most interesting ones (for both spoken 
 and sign) are the ones dealing with languages that are invented by small 
 groups who haven't been previously exposed to other languages. 

The technical term for such a language is a creole.

Interestingly, people do the same thing with moral ontologies and rules. 
There's quite a strong parallel between certain linguistic and ethical 
phenomena.

Josh



Re: [agi] credit attribution method

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 12:04:21 pm Mark Waser wrote:

 But instead, someday real soon now, you're going to realize that such a 
credit attribution structure *is* fundamentally isomorphic to AGI.

... which is why it makes sense to look at architectures with a market as one 
of their key mechanisms -- see my book and Eric Baum's.
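
A toy sketch of one such market mechanism for credit attribution (loosely in
the spirit of Baum's economy-of-agents work, not an implementation of it;
names, payoffs, and the bidding rule are all invented for illustration):

import random

class Agent:
    def __init__(self, name):
        self.name, self.wealth = name, 10.0

    def bid(self):
        # Bid a random fraction of current wealth (illustrative rule).
        return random.uniform(0, self.wealth * 0.1)

def run_round(agents, steps=5, final_reward=5.0):
    # Each step the highest bidder acts, paying its bid to the previous
    # actor; the environment rewards the last actor, so credit flows
    # backward along the chain of useful contributions.
    prev = None
    for _ in range(steps):
        bids = {a: a.bid() for a in agents}
        winner = max(bids, key=bids.get)
        winner.wealth -= bids[winner]
        if prev is not None:
            prev.wealth += bids[winner]
        prev = winner
    prev.wealth += final_reward

agents = [Agent(f"a{i}") for i in range(4)]
for _ in range(100):
    run_round(agents)
print({a.name: round(a.wealth, 1) for a in agents})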

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e


Re: [agi] PolyContextural Logics vs. General Logic

2007-06-05 Thread Fundamental Research Lab


On Jun 5, 2007, at 00:18, Lukasz Stafiniak wrote:


Speaking of logical approaches to AGI... :-)

http://www.thinkartlab.com/pkl/

Luk … I didn't find anything interesting in PCL.
It's well known that logicians have researched the common features of a wide 
variety of logics for many years: from classical Lindenbaum's extension 
lemma and Tarski's approaches (logic as a consequence operator, or model 
theory, which was developed via a kind of universal algebra) to 
Suszko's abstract logic and now Beziau's logica universalis:
http://springerlink.com/content/t22665107512/? 
p=220ac5182a5840c696be8bc68369d81dpi=0
We focus on general logic in the sense of the study of common 
structures of logics.
You can find very interesting techniques in this field: translations, 
embeddings, fibring, combining logics.


Robert B. Lisek



Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Hi Mike
 
Just Google 'Ogden' and/or Basic English - there's lots of info.
And if you doubt that only a few verbs are sufficient, then obviously you need 
to do some reading: anyone interested in building AGI should be familiar with 
Schank's (1975) conceptual dependency theory, which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
A number of other researchers have also worked on the concept of a few semantic 
primitives (one called them semantic primes) but I'd be a bad teacher if I did 
*your* homework for you... ;-)
 
Jean-Paul
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 
Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
= Jean-Paul
 
How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.

Re: [agi] Minimally ambiguous languages

2007-06-05 Thread James Ratcliff
And Simple/Basic English provides for breaking up many complex 
compound sentences into shorter structures, which even without the vocabulary 
reduction greatly increases the ability to parse sentences.

There is even a Simple English Wikipedia, though it seems to lack many articles 
and information.

James Ratcliff

Jean-Paul Van Belle [EMAIL PROTECTED] wrote: Hi Mike
  
 Just Google 'Ogden' and/or Basic English - there's lots of info.
 And if you doubt that only a few verbs are sufficient, then obviously you need 
to do some reading: anyone interested in building AGI should be familiar with 
Schank's (1975) conceptual dependency theory, which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
 A number of other researchers have also worked on the concept of a few 
semantic primitives (one called them semantic primes) but I'd be a bad teacher 
if I did *your* homework for you... ;-)
  
 Jean-Paul
  

 Department of Information Systems

 Email: [EMAIL PROTECTED]
 Phone: (+27)-(0)21-6504256
 Fax: (+27)-(0)21-6502280
 Office: Leslie Commerce 4.21


 Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 

  Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
 = Jean-Paul
  
 How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.

 


___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] Open AGI Consortium

2007-06-05 Thread Mark Waser
 It will be very hard at that point to hold up in court, given that the AGI 
 must choose who gets what, 'cause there sure ain't no precedent for a 
 non-legal-entity like an AI making legal decisions.
  Will have to have it declared a person first.

There is nothing that needs to hold up in court.  The trustees/trustworthy 
owners are taking the action.  The fact that their decision was based upon the 
ramblings of an AGI is entirely irrelevant as far as the legal system is 
concerned.  There is, of course, the danger of trustee defection but I don't 
believe that you can legally stop that short of declaring the AGI a person and 
making the trustees unnecessary (and I'm not holding my breath).  The entire 
point of the trustees is to provide the correct legal cover for the AGI.


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
I think a system can get arbitrarily complex without being conscious -- 
consciousness is a specific kind of model-based, summarizing, self-monitoring 
architecture.


Yes.  That is a good clarification of what I meant rather than what I said.


That said, I think consciousness is necessary
but not sufficient for moral agency.


On the other hand, I don't believe that consciousness is necessary for moral 
agency.





Re: [agi] credit attribution method

2007-06-05 Thread Mark Waser

But instead, someday real soon now, you're going to realize that such a
credit attribution structure *is* fundamentally isomorphic to AGI.


... which is why it makes sense to look at architectures with a market as one
of their key mechanisms -- see my book and Eric Baum's.


Huh.  I was doing that without realizing that I was fundamentally 
re-creating a simplified isomorphism of markets.  What a wonderful 
analogical thread to pursue . . . . thank you!





Re: [agi] Open AGI Consortium

2007-06-05 Thread J. Andrew Rogers


On Jun 5, 2007, at 10:01 AM, Mark Waser wrote:
There is nothing necessary to hold up in court.  The  
trustees/trustworthy owners are taking the action.  The fact that  
their decision was based upon the ramblings of an AGI is entirely  
irrelevant as far as the legal system is concerned.  There is, of  
course, the danger of trustee defection but I don't believe that  
you can legally stop that short of declaring the AGI a person and  
making the trustees unnecessary (and I'm not holding my breath).   
The entire point of the trustees is to provide the correct legal  
cover for the AGI.



That sounds like a contributor lawsuit waiting to happen outside of  
the contributors contractually agreeing to have zero rights, and who  
would want to sign such a contract?


Cheers,

J. Andrew Rogers



Re: [agi] Open AGI Consortium

2007-06-05 Thread Mark Waser
 What distinguishes this venture from the hundreds of other ones that  
are frankly indistinguishable from yours?  What is that killer thing that you  
can convincingly demonstrate you have that no one else can?  Without  
that, your chances are poor on many different levels.

 I'm trying to find your unique angle here, but have come up empty so  
far.

:-) You have no chance of finding such in what has been recently written.

As I said a few e-mails previously -- there will be a massive write-up of 
the project in July and I'll be inviting all interested parties then.  YKY's 
post just offered an immediate opportunity to float some of my 
organizational ideas to see if they'd float or sink like a rock (since I 
really don't want the non-project stuff to prevent people from working on 
the project).

 I'm not trying to stop you, I'm merely pointing out that it will very  
significantly reduce your opportunities and probably far more than  
you are anticipating.  Either way, it won't be *my* problem. :-)  I'm  
just trying to give you some practical perspective on the venture  
thing, both generally and as it pertains to AI.

Understood.  Let me reverse the question -- Given an absolute requirement of 
not letting the AGI be misused, what would you do?


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Mike Tintner
Thanks. But Schank has fallen into disuse, no? The ideas re script algorithms 
just don't work, do they?  And what I was highlighting was one possible reason 
- those primitives are infinitely open-ended and can be, and are, repeatedly 
being used in new ways. That supposedly minimally ambiguous language looks, 
ironically, like it's maximally ambiguous. 

I agree that the primitives you list are extremely important - arguably central 
- in the development of human language. But to my mind, and I'll have to argue 
this at length, and elsewhere, they show something that you might not like - 
the impossibility of programming (in any conventional sense) a mind to handle 
them. 
  - Original Message - 
  From: Jean-Paul Van Belle 
  To: agi@v2.listbox.com 
  Sent: Tuesday, June 05, 2007 5:44 PM
  Subject: Re: [agi] Minimally ambiguous languages


  Hi Mike

  Just Google 'Ogden' and/or Basic English - there's lots of info.
  And if you doubt that only a few verbs are sufficient, then obviously you 
need to do some reading: anyone interested in building AGI should be familiar 
with Schank's (1975) conceptual dependency theory, which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
  A number of other researchers have also worked on the concept of a few 
semantic primitives (one called them semantic primes) but I'd be a bad teacher 
if I did *your* homework for you... ;-)

  Jean-Paul


  Department of Information Systems
  Email: [EMAIL PROTECTED]
  Phone: (+27)-(0)21-6504256
  Fax: (+27)-(0)21-6502280
  Office: Leslie Commerce 4.21


   Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 

  Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
  = Jean-Paul

  How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:


 I think a system can get arbitrarily complex without being conscious --
 consciousness is a specific kind of model-based, summarizing,
 self-monitoring
 architecture.

Yes.  That is a good clarification of what I meant rather than what I said.

 That said, I think consciousness is necessary
 but not sufficient for moral agency.

On the other hand, I don't believe that consciousness is necessary for moral
agency.


What a provocative statement!

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?
In other words, doesn't the difference between "it works" and "it's
moral" hinge on the role of a subjective self as actor?

- Jef



Re: [agi] Open AGI Consortium

2007-06-05 Thread Mark Waser
That sounds like a contributor lawsuit waiting to happen outside of  the 
contributors contractually agreeing to have zero rights, and who  would 
want to sign such a contract?


And there's the rub.  We've gotten into a situation where it's almost 
literally impossible to honestly set up a venture that can't be ruined by 
one litigious individual.


Personally, I *would* sign such a contract if I trusted that the 
trustworthy owners were on the up and up because I don't see how it would 
be used to take advantage of me other than someone, somehow getting to the 
AGI (and I very much value the creation of an AGI).


Obviously, other people's mileage will vary tremendously. 





Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
I think you are mis-interpreting me. I do *not* subscribe to the semantic 
primitives (I probably didn't put it clearly though). Just trying to answer 
your question re the sufficiency of 10 or so verbs. However, if you are 
considering any reduced vocabulary then you should be familiar with the 
literature/theories and *also* know why it failed. I think other people also 
mentioned that list readers should check old discredited approaches first and 
then see how your current approach is different/better.
Jean-Paul


 Mike Tintner [EMAIL PROTECTED] 06/05/07 7:14 PM 
Thanks. But Schank has fallen into disuse, no? The ideas re script algorithms 
just don't work, do they?  And what I was highlighting was one possible reason 
- those primitives are infinitely open-ended and can be, and are, repeatedly 
being used in new ways. That supposedly minimally ambiguous language looks, 
ironically, like it's maximally ambiguous. 

I agree that the primitives you list are extremely important - arguably central 
- in the development of human language. But to my mind, and I'll have to argue 
this at length, and elsewhere, they show something that you might not like - 
the impossibility of programming (in any conventional sense) a mind to handle 
them. 
  - Original Message - 
  From: Jean-Paul Van Belle 
  To: agi@v2.listbox.com 
  Sent: Tuesday, June 05, 2007 5:44 PM
  Subject: Re: [agi] Minimally ambiguous languages


  Hi Mike

  Just Google 'Ogden' and/or Basic English - there's lots of info.
  And if you doubt that only a few verbs are sufficient, then obviously you 
need to do some reading: anyone interested in building AGI should be familiar 
with Schank's (1975) conceptual dependency theory, which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
  A number of other researchers have also worked on the concept of a few 
semantic primitives (one called them semantic primes) but I'd be a bad teacher 
if I did *your* homework for you... ;-)

  Jean-Paul


  Department of Information Systems
  Email: [EMAIL PROTECTED]
  Phone: (+27)-(0)21-6504256
  Fax: (+27)-(0)21-6502280
  Office: Leslie Commerce 4.21


   Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 

  Except that Ogden only included a very few verbs [be, have, come-go, put-take, 
give-get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
  = Jean-Paul

  How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Mark Waser

list readers should check old discredited approaches first


Would you really call Schank discredited or is it just that his line of 
research petered out?





Re: [agi] Open AGI Consortium

2007-06-05 Thread James Ratcliff
Have we not decided that's impossible yet?

You can delay it, but not prevent it, once it hits the mainstream.

The best way to delay it is to have the smallest group with the tightest 
restrictions in place, which goes against the grain of the large, mostly open 
groups that have been put forward.

You can put all the standard mechanisms in to try to have it be friendly, but 
in the end, taking out those restrictions is an order of magnitude easier than 
putting them in place.

James Ratcliff


Mark Waser [EMAIL PROTECTED] wrote:
 "What distinguishes this venture from the hundreds of other ones that are 
 frankly indistinguishable from yours? What is that killer thing that you can 
 convincingly demonstrate you have that no one else can? Without that, your 
 chances are poor on many different levels.

 I'm trying to find your unique angle here, but have come up empty so far."

 :-) You have no chance of finding such in what has been recently written.

 As I said a few e-mails previously -- there will be a massive write-up of 
 the project in July and I'll be inviting all interested parties then. YKY's 
 post just offered an immediate opportunity to float some of my 
 organizational ideas to see if they'd float or sink like a rock (since I 
 really don't want the non-project stuff to prevent people from working on 
 the project).

 "I'm not trying to stop you, I'm merely pointing out that it will very 
 significantly reduce your opportunities and probably far more than you are 
 anticipating. Either way, it won't be *my* problem. :-) I'm just trying to 
 give you some practical perspective on the venture thing, both generally 
 and as it pertains to AI."

 Understood. Let me reverse the question -- given an absolute requirement of 
 not letting the AGI be misused, what would you do?





___
James Ratcliff - http://falazar.com
Looking for something...
 


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser

Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?


I'm not sure that I should dive into this but I'm not the brightest 
sometimes . . . . :-)


If someone else were to program a decision-making (but not conscious or 
self-conscious) machine to always recommend for what you personally (Jef) 
would find a moral act and always recommend against what you personally 
would find an immoral act, would that machine be acting morally?


hopefully, we're not just debating the term agency




[agi] Programmed dissatisfaction

2007-06-05 Thread Mark Waser
http://www.the-scientist.com/article/home/53231/


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread James Ratcliff
I wouldn't say discredited, though he has gone off to study education rather 
than AI now.
Good article on Conceptual Reasoning

http://library.thinkquest.org/18242/concept.shtml

His SAM project, built on scripts back in '75, was very interesting, but 
covered a very limited domain.

My project allows a KR to contain multiple scripts describing a similar event, 
to allow reasoning about and generalization of simple tasks.
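
A toy sketch (my own, with an invented helper -- SAM itself was far more 
elaborate) of the inference that scripts buy you, using the classic restaurant 
script from Schank & Abelson:

# A stereotyped event sequence, flattened to a toy ordered list.
RESTAURANT_SCRIPT = ["enter", "sit", "order", "eat", "pay", "leave"]

def infer_unstated(mentioned):
    # Given the events a story mentions, fill in the script steps that
    # must have happened in between but were never stated.
    idx = [RESTAURANT_SCRIPT.index(e) for e in mentioned]
    lo, hi = min(idx), max(idx)
    return [e for e in RESTAURANT_SCRIPT[lo:hi + 1] if e not in mentioned]

# "John went to a restaurant and ordered lobster. Later he left."
print(infer_unstated(["enter", "order", "leave"]))  # ['sit', 'eat', 'pay']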

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote:  list readers should check old 
discredited approaches first

Would you really call Schank discredited or is it just that his line of 
research petered out?





___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:

 Isn't it indisputable that agency is necessarily on behalf of some
 perceived entity (a self) and that assessment of the morality of any
 decision is always only relative to a subjective model of rightness?

I'm not sure that I should dive into this but I'm not the brightest
sometimes . . . . :-)

If someone else were to program a decision-making (but not conscious or
self-conscious) machine to always recommend for what you personally (Jef)
would find a moral act and always recommend against what you personally
would find an immoral act, would that machine be acting morally?

hopefully, we're not just debating the term agency


I do think it's a misuse of agency to ascribe moral agency to what is
effectively only a tool.  Even a human, operating under duress, i.e.
as a tool for another, should be considered as having diminished or no
moral agency, in my opinion.

Oh well.  Thanks Mark for your response.

- Jef



Re: [agi] Open AGI Consortium

2007-06-05 Thread Mark Waser
 Have we not decided that's impossible yet?
 You can delay it, but not prevent it, once it hits the mainstream.

No, because my question deals with *before* it hits the mainstream.

 The best way to delay it is to have the smallest group with the tightest 
 restrictions in place, which goes against the grain of the large, mostly 
 open groups that have been put forward.

Right.  But the question is: Is there some way to do it with a large, mostly 
open pool of contributors? (which is why I'm restricting access to the code to 
need-to-know).

I'm really not being stupid or wrestling with an easy issue here :-).

 You can put all the standard mechanisms in to try to have it be friendly, 
 but in the end, taking out those restrictions is an order of magnitude 
 easier than putting them in place.

Agreed.  Once it's released, it's got to be able to fend for itself -- but I'm 
currently only concerning myself with the time before that point.





Re: Slavery (was Re: [agi] Opensource Business Model)

2007-06-05 Thread Matt Mahoney
There is a tendency among people to grant human rights to entities that are
more human-like, more like yourself.  For example, if you give an animal a
name, it is likely to get better treatment.  (We name dogs and cats, but not
cows or pigs).  Among humans, those who speak the same language and have the
same color skin as you are going to get better treatment.  (Spare me the
objections.  100% of adults have some prejudice.  The best you can do is to
admit it).

I believe we will apply similar logic to AGI.  If a robot has a human face and
speaks with the appearance of intelligence, you are more likely to treat it as
a person than as a machine, regardless of its other attributes.

Autonomy is one aspect of humanity.  But, like intelligence, it is difficult
to define.  I pointed out how the simple definition (making its own decisions)
is flawed when applied to machines.  The random number generator is one
example.  The other example -- autonomy in a deterministic program -- deserves
more explanation, because it questions whether humans are autonomous if you
accept that the human brain can be simulated on a computer.  The objection
stems from the hardcoded belief in consciousness and free will, which the
computational model denies.

But to be clear, when I say that an AGI's behavior depends only on its program
and its inputs, I include the possibility that the program only tells it how
to learn.
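
To make that concrete, a minimal sketch (names and setup invented for
illustration): a program that learns is still a deterministic function of its
program text plus its complete input history, and even its "random" choices
replay exactly under a fixed seed.

import random

class LearningAgent:
    def __init__(self, seed=42):
        self.counts = {}                 # learned state: word frequencies
        self.rng = random.Random(seed)   # all "randomness" fixed by the seed

    def observe(self, word):
        self.counts[word] = self.counts.get(word, 0) + 1

    def act(self):
        # Prefer the most-seen word, breaking ties with the seeded RNG.
        best = max(self.counts.values())
        ties = sorted(w for w, c in self.counts.items() if c == best)
        return self.rng.choice(ties)

def run(history, seed=42):
    agent = LearningAgent(seed)
    for word in history:
        agent.observe(word)
    return agent.act()

# Same program + same input history + same seed => identical behavior every
# time, even though the agent "learned" along the way.
assert run(["fire", "warn", "fire"]) == run(["fire", "warn", "fire"])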


--- James Ratcliff [EMAIL PROTECTED] wrote:

 to Will: Sure you can make that program.
 Any program that has no random number generator will always run the same way 
 on the same input -- that's a core concept of computer science (given the 
 same input and the same database state).
 
 to Matt: For that I would need your definition of autonomy.
 From Wiki:
 Autonomy (one who gives oneself his own law) means freedom from external
 authority. Within these contexts it refers to the capacity of a rational
 individual to make an informed, uncoerced decision. 
 
 In robotics, autonomy means independence of control. This characterization
 implies that autonomy is a property of the relation between two agents: in
 the case of robotics, of the relations between the designer and the
 autonomous robot. Self-sufficiency, situatedness, learning or development,
 and evolution increase an agent's degree of autonomy, according to Rolf
 Pfeifer.
 
 So this appears to be an agent that is controlling itself.
 Now I would argue that an AGI that is given a control unit that does not
 directly answer to a human (e.g. a Google-style query answerer, or a
 directly commanded bot) would pretty quickly have autonomy.
 
 Now there is a restriction in autonomy, given by the environment, and to a
 degree, always from other entities.
 Humans are believed to be autonomous, but we must act within the laws of
 physics and our environment.  I choose what I am going to do next.
 These choices, however, are limited by my internal programming, i.e. the
 limits and bounds of what I know how to do, and what I am able to do.
 
 An AGI that has the ability to choose its next action would be autonomous as
 well, though it still has to act within the bounds of its environment.
 These choices, however, are limited by its internal programming, i.e. the
 limits and bounds of what it knows how to do, and what it is able to do.
 
 I would go much FURTHER and say that a complex agent such as a race-car
 opponent simulator is an autonomous agent, in that within its world realm,
 it has free choice of what it is able to do, even though it must still act
 within the bounds of the environment it is in.
 
 So I would say that an AGI is as autonomous as a person is under those
 definitions.
 
 
 For the consciousness argument I would take the same route: point to
 something that a conscious human can do that an AGI could not.
 
 James Ratcliff
 
 William Pearson [EMAIL PROTECTED] wrote:
 On 04/06/07, Matt Mahoney wrote:
  Suppose you build a human level AGI, and argue
  that it is not autonomous no matter what it does, because it is
  deterministically executing a program.
 
 
 I suspect an AGI that executes one fixed unchangeable program is not
 physically possible.
 
   Will Pearson
 
 
 
 
 ___
 James Ratcliff - http://falazar.com
 Looking for something...



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 I do think it's a misuse of agency to ascribe moral agency to what is
 effectively only a tool.  Even a human, operating under duress, i.e.
 as a tool for another, should be considered as having diminished or no
 moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and willful 
control (and this debate actually has nothing to do with morality at all).

I can agree with that.


Re: [agi] Minimally ambiguous languages

2007-06-05 Thread J Storrs Hall, PhD
On Tuesday 05 June 2007 02:47:27 pm Mark Waser wrote:
  list readers should check old discredited approaches first
 
 Would you really call Schank discredited or is it just that his line of 
 research petered out?

I think Schank's stuff was quite sound at its level but was abstract enough 
(at the level that it was right) to have a gap between it and the ability of 
the GOFAI infrastructure to implement it. Note that this is a generally valid 
concern with quite a few of the things we need at the higher levels of 
organization of an AGI.

Josh



Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Sorry, yes, you're right: I should and would not call Schank's approach 
discredited (though he does have his critics). FWIW I think he got much closer 
than most of the GOFAIers, i.e. he's one of my old-school AI heroes :) I 
thought for a long time his approach was one of the quickest ways to AGI, and 
I still think anyone studying AGI should definitely study his approach 
closely. In the end any would-be AGIst (?:) will have to decide whether she 
adopts conceptual primitives or not -- probably, apart from ideological 
arguments, mainly on the basis of how she decides to (have her AGI) ground 
its/his/her concepts (or not, as the case may be).
Personally I'd say that a lot of mental acts do not reduce to his primitives 
easily (without losing a lot in the translation, to paraphrase a good movie :), 
and mental acts are quite important in my AGI architecture.
Just personal opinion of course. =Jean-Paul



Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Mark Waser
:-) A lot of the reason why I was asking is because I'm effectively somewhat 
(how's that for a pair of conditionals? :-) relying on Schank's approach not 
having any showstoppers that I'm not aware of -- so if anyone else is aware of 
any surprise showstoppers in his work, I'd love to have some pointers.  Thanks.




Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:



 I do think it's a misuse of agency to ascribe moral agency to what is
 effectively only a tool.  Even a human, operating under duress, i.e.
 as a tool for another, should be considered as having diminished or no
 moral agency, in my opinion.

So, effectively, it sounds like agency requires both consciousness and
willful control (and this debate actually has nothing to do with morality at
all).

I can agree with that.


Funny, I thought there was nothing of significance between our
positions; now it seems clear that there is.

I would not claim that agency requires consciousness; it is necessary
only that an agent acts on its environment so as to minimize the
difference between the external environment and its internal model of
the preferred environment. The perception of agency inheres in an
observer, which might or might not include the agent itself.  An ant
(while presumably lacking self-awareness) can be seen as its own agent
(promoting its own internal values) as well as being an agent of the
colony.  A person is almost always their own agent to some extent, and
commonly seen as acting as an agent of others.  A newborn baby is seen
as an agent of itself, reaching for the nipple, even while it yet
lacks the self-awareness to recognize its own agency.  A simple robot,
autonomous but lacking self-awareness is an agent promoting the values
expressed by its design, and possibly also an agent of its designer to
the extent that the designer's preferences are reflected in the
robot's preferences.
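
In code, the bar for that kind of agency is strikingly low. A toy sketch
(mine, with all names illustrative) of an agent that acts so as to shrink the
gap between the environment and its internal model of the preferred
environment, with nothing resembling consciousness inside:

def choose_action(env, preferred, actions):
    # Pick the action whose predicted outcome most reduces the gap
    # between the environment and the preferred model of it.
    def gap(state):
        return sum(abs(state[k] - preferred[k]) for k in preferred)
    return min(actions, key=lambda act: gap(act(env)))

env = {"temperature": 12.0}
preferred = {"temperature": 20.0}
heat = lambda e: {"temperature": e["temperature"] + 1.0}
cool = lambda e: {"temperature": e["temperature"] - 1.0}
idle = lambda e: dict(e)

for _ in range(10):
    env = choose_action(env, preferred, [heat, cool, idle])(env)

print(env)  # {'temperature': 20.0} -- reached, then held by 'idle'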

Moral agency, however, requires both agency and self-awareness.  Moral
agency is not about the acting but the deciding, and is necessarily
over a context that includes the values of at least one other agent.
This requirement of expanded decision-making context is what makes the
difference between what is seen as merely good (to an individual)
and what is seen as right or moral (to a group). Morality is a
function of a group, not of an individual. The difference entails
**agreement**, thus decision-making context greater than a single
agent, thus recognition of self in order to recognize the existence of
the greater context including both self and other agency.

Now we are back to the starting point, where I saw your statement
about the possibility of moral agency sans consciousness as a
provocative one.  Can you see why?

- Jef



Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:

 I would not claim that agency requires consciousness; it is necessary
 only that an agent acts on its environment so as to minimize the
 difference between the external environment and its internal model of
 the preferred environment

OK.

 Moral agency, however, requires both agency and self-awareness.  Moral
 agency is not about the acting but the deciding

So you're saying that deciding requires self-awareness?


No, I'm saying that **moral** decision-making requires self-awareness.



 This requirement of expanded decision-making context is what makes the
 difference between what is seen as merely good (to an individual)
 and what is seen as right or moral (to a group). Morality is a
 function of a group, not of an individual. The difference entails
 **agreement**, thus decision-making context greater than a single
 agent, thus recognition of self in order to recognize the existence of
 the greater context including both self and other agency.

So you're saying that if you act morally without recognizing the greater
context then you are not acting morally (i.e. you are acting amorally --
without morals -- as opposed to immorally -- against morals).


Yes, a machine that has been programmed to carry out acts which others
have decided are moral, or a human who follows religious (or military)
imperatives, is not displaying moral agency.



I would then argue that we humans *rarely* recognize this greater context --
and then most frequently act upon this realization for the wrong reasons
(i.e. fear of ostracism, punishment, etc.) instead of moral reasons
because realistically most of us are hard-wired by evolution to feel in
accordance with most of what is regarded as moral (with the exceptions often
being psychopaths).


Yes!  Our present-day moral agency is limited due to what we might
lump under the term lack of awareness. Most of what is presently
considered morality is actually only distilled patterns of
cooperative behavior that worked in the environment of evolutionary
adaptation, now encoded into our innate biological preferences as well
as cultural artifacts such as the Ten Commandments.

A more accurate understanding of morality or decision-making seen as
right, and extensible beyond the EEA to our increasingly complex
world might be something like the following:

Decisions are seen as increasingly moral to the extent that they enact
principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences.

For the sake of brevity here I'll resist the temptation to forestall
some anticipated objections.

- Jef



Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 A more accurate understanding of morality or decision-making seen as
 right, and extensible beyond the EEA to our increasingly complex
 world might be something like the following:
 
 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

OK.  I would contend that a machine can be programmed to make decisions to 
enact principles assessed as promoting an increasing context of increasingly 
coherent values over increasing scope of consequences and that it can be 
programmed in this fashion without it attaining consciousness.

You did say "a machine that has been programmed to carry out acts which others 
have decided are moral . . . is not displaying moral agency", but I interpreted 
this as the machine merely following rules encoding what the human has already 
decided counts as "enacting principles assessed . . ." (i.e. the machine is not 
doing the actual morality checking itself).

So . . . my next two questions are:
  a. Do you believe that a machine programmed to make decisions to "enact 
principles assessed as promoting an increasing context of increasingly coherent 
values over increasing scope of consequences" (I assume that it has/needs an 
awesome knowledge base and very sophisticated rules and evaluation criteria) is 
still not acting morally? (And, if so, why?)
  b. Or, do you believe that it is not possible to program a machine in this 
fashion without giving it consciousness?
Also, BTW, with this definition of morality, I would argue that it is a very 
rare human that makes moral decisions any appreciable percent of the time (and 
those that do have ingrained it as reflex -- so do those reflexes count as 
moral decisions?  Or are they not moral since they're not conscious decisions 
at the time of choice? :-).

Mark


RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
 
Mark Waser writes:
 
 BTW, with this definition of morality, I would argue that it is a very rare 
 human that makes moral decisions any appreciable percent of the time 
 
Just a gentle suggestion:  If you're planning to unveil a major AGI initiative 
next month, focus on that at the moment.  This stuff you have been arguing 
lately is quite peripheral to what you have in mind, except perhaps for the 
business model, but in that area I see little compromise on more than subtle 
technical points.
 
As I have begun to re-attach myself to the issues of AGI, I have become 
suspicious of the ability or wisdom of attaching important semantics to atomic 
tokens (as I suspect you are going to attempt to do, along with most 
approaches), but I'd dearly like to contribute to something I thought had a 
chance.
This stuff, though, belongs on comp.ai.philosophy (which is to say, it belongs 
unread).


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 Just a gentle suggestion:  If you're planning to unveil a major AGI 
 initiative next month, focus on that at the moment.

I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
initiative; however, it's actually even more apropos for the approach that I'm 
taking.

 As I have begun to re-attach myself to the issues of AGI I have become 
 suspicious of the ability or wisdom of attaching important semantics to 
 atomic tokens (as I suspect you are going to attempt to do, along with most 
 approaches), but I'd dearly like to contribute to something I thought had a 
 chance.

Atomic tokens are quick and easy labels for what can be very convoluted and 
difficult concepts, which normally end up varying in their details from person 
to person.  We cannot communicate efficiently and effectively without such 
labels, but unless all parties have the exact same concept (down to the 
smallest details) attached to the same label, we are miscommunicating to 
exactly the degree that our concepts, in all their glory, aren't congruent.  A 
very important part of what I'm proposing is an attempt to deal with the fact 
that no two humans agree *exactly* on the meaning of any but the simplest 
labels.  Does that allay your fears somewhat?
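
One toy way to make that degree of miscommunication concrete (my own
illustration, not the project's actual mechanism): model each party's concept
for a label as a feature set and measure how far the sets diverge.

def incongruence(concept_a, concept_b):
    # 1 - Jaccard overlap: 0.0 means identical concepts, 1.0 means disjoint.
    a, b = set(concept_a), set(concept_b)
    return 1 - len(a & b) / len(a | b)

alice_dog = {"animal", "four-legged", "pet", "barks"}
bob_dog = {"animal", "four-legged", "pet", "working-animal"}
print(incongruence(alice_dog, bob_dog))  # 0.4 -- they part-miscommunicate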


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Mark Waser
 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the criteria 
specified above, but am actually taking the action that the criteria say is 
moral because I feel that it is in my best self-interest to always act morally 
-- am I still a moral agent?

Mark


Re: [agi] Pure reason is a disease.

2007-06-05 Thread Jef Allbright

On 6/5/07, Mark Waser [EMAIL PROTECTED] wrote:



 Decisions are seen as increasingly moral to the extent that they enact
 principles assessed as promoting an increasing context of increasingly
 coherent values over increasing scope of consequences.

Or another question . . . . if I'm analyzing an action based upon the
criteria specified above, but am actually taking the action that the criteria
say is moral because I feel that it is in my best self-interest to always
act morally -- am I still a moral agent?


Shirley you jest.

Out of respect for the gentle but slightly passive-aggressive Derek,
and others who see this as excluding lots of nuts and bolts AGI stuff,
I'll leave it here.

If you're serious, contact me offlist and I'll be happy to expand on
what it really means.

- Jef



RE: [agi] Pure reason is a disease.

2007-06-05 Thread Derek Zahn
Mark Waser writes:

 I think that morality (aka Friendliness) is directly on-topic for *any* AGI 
 initiative; however, it's actually even more apropos for the approach that 
 I'm taking.
 
 A very important part of what I'm proposing is attempting to deal with the 
 fact that no two humans agree *exactly* on the meaning of any but the 
 simplest labels.  Does that allay your fears somewhat?
 
I agree that refraining from devastating humanity is a good idea :-); luckily, 
I think we have some time before it's an imminent risk.
 
As to my fears about your project, we can wait until July to see the details. 
 You've done a good job of piquing interest :)
 


Re: [agi] analogy, blending, and creativity

2007-06-05 Thread Lukasz Stafiniak

On 6/2/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

And many scientists refer to potential energy surfaces and the like. There's a
core of enormous representational capability with quite a few well-developed
intellectual tools.


Another Grand Unification theory: Estimation of Distribution
Algorithms behind Bayesian Nets, Genetic Programming and unsupervised
Neural Networks.
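
For anyone who wants the unifying idea in miniature, a sketch (mine, not from
the references) of one concrete EDA, Population-Based Incremental Learning:
instead of recombining individuals as a GA does, fit a probability model to
the best samples and re-sample from it. OneMax (count the 1-bits) is the
stock toy objective.

import random

def pbil_onemax(n_bits=20, pop=50, lr=0.1, generations=100, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits                     # independent Bernoulli model
    for _ in range(generations):
        samples = [[int(rng.random() < pi) for pi in p] for _ in range(pop)]
        best = max(samples, key=sum)       # OneMax fitness: number of 1s
        # Shift the distribution toward the best sample seen this generation.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p

print([round(pi, 2) for pi in pbil_onemax()])  # probabilities drift toward 1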
