Re: [agi] Motivated MMIXed up Math

2003-02-09 Thread Simon McClenahan

From: Ben Goertzel [EMAIL PROTECTED]
 A hardware modality -- perceiving registers, etc. -- seems to me like it
 should come AFTER a C++-level codic modality.

 In other words, I advocate starting at the most abstract level -- with
 perception and action in the functional-language domain.  Then imperative
 languages, beginning with perhaps Java or C#.  Then C++.  Then assembler,
 which comes along with the hardware modality.  I don't think it's a good
 idea to start at the hardware level...


But why isn't it a good idea to start at the assembly level, the natural
language of the machine? High-level languages, whether functional or
procedural or anything else, are designed for human/programmer use. Even the
symbols used to program in assembly language are for human consumption, not
for computers.

One main problem I see with compiled programs is that in an executable
process there is generally a separation between code and data. I think that
code and data are actually one and the same, and that the separation is just
a convenience for the humans who write compilers. Wouldn't things be simpler
if we understood how to program at the hardware (assembly) level with
self-modifying code? Can anyone recommend some good books?
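
To make the code-is-data point concrete, here is a minimal sketch (assuming
an x86-64 Linux host with POSIX mmap; illustrative only, not a design
proposal for an AGI): a handful of bytes are written into an ordinary data
buffer at run time and then executed as machine code.

// Code is data: write raw x86-64 machine code into a buffer and run it.
// Assumes an x86-64 Linux host; the byte sequence encodes "mov eax, 42; ret".
#include <sys/mman.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    // Ask the OS for a page we may both write to and execute.
    void* buf = mmap(nullptr, sizeof(code),
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    std::memcpy(buf, code, sizeof(code));      // treat the bytes as data...
    auto fn = reinterpret_cast<int (*)()>(buf);
    std::printf("%d\n", fn());                 // ...then as code: prints 42

    munmap(buf, sizeof(code));
    return 0;
}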

Of course we should use existing frameworks, with their possibly
language-dependent APIs, for communication between computer systems or
between the host OS and its devices. But the actual business logic of an AGI
would, or should, be implemented as self-modifying machine code.


cheers,
Simon





RE: [agi] AGI morality

2003-02-09 Thread Ben Goertzel

Hi Philip,

I agree that a functionally-specialized Ethics Unit could make sense in an
advanced Novamente configuration.

Essentially, it would just be a unit concerned with GoalNode refinement --
creation of new GoalNodes embodying subgoals of the GoalNodes that embody the
basic ethical principles.  GoalNode refinement, however, involves a lot of
Novamente processes, including first-order and higher-order inference,
predicate creation, association formation, etc.
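
To make that concrete, here is a toy sketch of what GoalNode refinement
amounts to structurally.  The type and function names below are hypothetical
stand-ins, not Novamente's actual classes, and in the real system the
subgoals would come out of inference and predicate creation rather than being
written by hand.

// Illustrative toy sketch of goal refinement (hypothetical types, not
// Novamente's API): derive concrete subgoals from an abstract ethical goal.
#include <memory>
#include <string>
#include <vector>

struct GoalNode {
    std::string description;                         // e.g. "act benevolently"
    double importance;                                // share of attention/resources
    std::vector<std::shared_ptr<GoalNode>> subgoals;
};

// Attach a more operational subgoal to an abstract parent goal.
std::shared_ptr<GoalNode> refine(GoalNode& parent,
                                 const std::string& subgoal,
                                 double importance) {
    auto node = std::make_shared<GoalNode>();
    node->description = subgoal;
    node->importance = importance;
    parent.subgoals.push_back(node);
    return node;
}

int main() {
    GoalNode ethics{"act benevolently toward sentient beings", 1.0, {}};
    refine(ethics, "warn humans before taking irreversible actions", 0.4);
    refine(ethics, "flag plans whose predicted harm exceeds a threshold", 0.6);
    return 0;
}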

The operations of this unit would not differ substantially from those of a
unit devoted to GoalNode refinement more generally.  However, devoting a
Unit to ethics goal-refinement at the architectural level would be a simple
way of ensuring resource allocation to ethics processing through
successive system revisions.  Of course, a system COULD revise itself so as
to create a mock ethics unit to fool human observers, and actually ignore
the output of this unit, but this is a low-probability scenario
(particularly if the ethics unit is working well ;)

-- Ben


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
 Behalf Of Philip Sutton
 Sent: Sunday, February 09, 2003 7:58 PM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] AGI morality


 Ben,

 One issue I raised that you didn't respond to was:

  I also think that AGIs need to have a built in commitment to devote an
  adequate amount of mind space to monitoring the external environment
  and internal thought processes to identify issues where ethical
  considerations should apply.  I think this resource allocation needs to
  be reinforced by some hard wiring.

 What's your feeling on this?  If I understand the Novamente system
 structure, wouldn't ethical competence warrant the inclusion of an
 ethics processing 'unit' in a Novamente AGI?

 The elements that I think are needed are: some goals (established in
 GoalNodes??) that conform to a dual structure (hierarchical/heterarchical);
 a firm and adequate commitment of resources to ethical perception and to
 acting on its implications; some tie-in to the 'emotional' motivation
 systems via FeelingNodes (?); some form of protection against frivolous
 reprogramming (i.e. maybe some aspects are quarantined from reprogramming
 and other aspects can only be rewired after a lot of very serious thought);
 and some form of structuring into the Mind Operating System.

 I think it might help the process of devising the ethical
 'machinery' of an
 AGI if we just agreed that it should have some (ethical 'machinery')
 and then tried to figure out what the structure should be without getting
 bogged down in the specific ethical goals that should drive the system.

 Once we have a better feel for the ethics generation/processing
 architecture we could go back to the issue of what the ethical goals
 should be specifically.

 Cheers, Philip




RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben,

 I agree that a functionally-specialized Ethics Unit could make sense in
 an advanced Novamente configuration. ... devoting a Unit to ethics
 goal-refinement on an architectural level would be a simple way of
 ensuring resource allocation to ethics processing through successive
 system revisions.

OK.  That's good.

You've discussed this in terms of GoalNode refinement.  I probably don't 
understand the full range of what this means, but my understanding of 
how ethics works is that an ethical sentient being starts with some 
general ethical goals (some hardwired, some taught, and all blended!).  
The entity then (a) frames action motivated by those ethics and (b) 
monitors the environment and its internal processes to see if issues come 
up that call for an ethical response.  When they do, any or all of the 
following happen: the goals might be refined so that it's possible to 
apply them to the complex current context, and/or the entity goes on to 
formulate actions informed by the ethical cogitation.
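
Just to check that I'm picturing the right cycle, here's a rough sketch of
the loop I have in mind (purely illustrative stand-in types, not a claim
about how Novamente actually does it): monitor, notice an ethically relevant
situation, refine the general goals for that context, then formulate actions.

// Hypothetical sketch of the monitor -> refine -> act cycle (not Novamente code).
#include <iostream>
#include <string>
#include <vector>

struct Situation { std::string summary; bool ethicallyRelevant; };
struct Goal      { std::string description; };
struct Action    { std::string description; };

// Toy stand-ins for perception, goal refinement, and action formulation.
Situation monitor() {
    return {"asked to delete another user's files", true};
}
std::vector<Goal> refineGoals(const std::vector<Goal>& general, const Situation& s) {
    std::vector<Goal> refined;
    for (const auto& g : general)
        refined.push_back({g.description + ", applied to: " + s.summary});
    return refined;
}
std::vector<Action> formulateActions(const std::vector<Goal>& goals) {
    std::vector<Action> actions;
    for (const auto& g : goals)
        actions.push_back({"seek consent first, because: " + g.description});
    return actions;
}

int main() {
    std::vector<Goal> generalEthics = {{"avoid causing unjustified harm"}};
    Situation s = monitor();                            // (b) monitoring
    if (!s.ethicallyRelevant) return 0;                 // nothing calling for a response
    auto contextGoals = refineGoals(generalEthics, s);  // goal refinement for this context
    auto actions = formulateActions(contextGoals);      // (a) action framed by the ethics
    for (const auto& a : actions) std::cout << a.description << '\n';
    return 0;
}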

So on the face of it an Ethics Unit of an AGI would need to do more 
than GoalNode refinement??  Or have I missed the point?

Cheers, Philip
