Re: [agi] Cell

2005-02-14 Thread Philip Sutton
On 10 Feb 05 Steve Reed said:  

 In 2014, according to trend, the semiconductor manufacturers may reach
 the 16 nanometer lithography node, with 32 CPU cores per chip, perhaps
 150+ times more capable than today's x86 chip. 

I raised this issue with a colleague who said that he wondered whether this 
extrapolation would work, because of the dynamics of economic cost.  He 
argued that CPUs have been getting more expensive in absolute terms (not 
relative to performance) as their capacity has increased, and he thought that 
this trend of CPU price increases would continue.  He also thought that the 
trend of computers getting cheaper as whole systems has nearly run its 
course, leaving the rising price of the CPUs as the dominant trend.  He 
therefore thought that Moore's Law might run out of puff - not because of 
technology limits but because of cost escalations.

Since I had no idea whether he was right (my subjective impression had been 
that the long-run trajectory of computer prices was a long-run decline), I 
thought I should ask whether anyone has a view on my colleague's argument.
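
As a rough sanity check on the '150+ times' figure in the quote, here is a 
back-of-the-envelope sketch (Python; the doubling periods are illustrative 
assumptions, not figures from Steve's post):

    # Rough check of the "150+ times more capable by 2014" extrapolation.
    # Assumption: aggregate chip performance doubles every D months (one
    # popular reading of Moore's Law); the D values below are illustrative.

    years = 2014 - 2005   # horizon of the quoted extrapolation

    for doubling_months in (15, 18, 24):
        doublings = years * 12 / doubling_months
        multiplier = 2 ** doublings
        print(f"doubling every {doubling_months} months -> "
              f"about {multiplier:.0f}x over {years} years")

    # Output: roughly 147x (15-month doubling), 64x (18-month) and 23x
    # (24-month) - so the quoted "150+ times" sits at the aggressive end
    # of the usual range, quite apart from the cost question above.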

Cheers, Philip



Re: [agi] What are qualia...

2005-01-26 Thread Philip Sutton



Hi Brad 

 This is not at all true. I could design a neural network, or perhaps even
 a symbolic computer program, that can evaluate the attractiveness of a peacock
 tail and tune it to behave in a similar fashion to that tiny portion of a
 real peacock's brain. Does this crude simulation contain qualia?

I think you reversed my logic.


I'm sure that a relatively simple AI system could be devised to emulate a 
peacock's identification of fancy tails. But my guess is that no sense of qualia 
would be involved for the simple AI system. But I wouldn't mind betting that 
real peacocks perceive something like what we call qualia - and my 
expectation is that this sensation plays a part in peacock breeding behaviour.
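
To make the 'relatively simple AI system' concrete, here is a minimal sketch 
(Python; the features, weights and threshold are invented purely for 
illustration) of the kind of tail-attractiveness evaluator Brad describes - 
plainly just arithmetic, with no obvious place for qualia:

    # A toy "peacock tail evaluator": a weighted sum over a few hypothetical
    # tail features, thresholded into a crude mate-choice response.
    # All features, weights and the threshold are invented for illustration.

    FEATURE_WEIGHTS = {
        "eyespot_count": 0.5,   # more eyespots score higher
        "tail_span_m":   0.3,   # wider display scores higher
        "iridescence":   0.2,   # colour intensity on a 0..1 scale
    }

    def attractiveness(tail):
        """Weighted-sum score of a tail described by the features above."""
        return sum(w * tail.get(name, 0.0) for name, w in FEATURE_WEIGHTS.items())

    def respond(tail, threshold=80.0):
        """Crude behavioural output: approach if the score clears the threshold."""
        return "approach" if attractiveness(tail) >= threshold else "ignore"

    flashy = {"eyespot_count": 160, "tail_span_m": 1.8, "iridescence": 0.9}
    drab   = {"eyespot_count": 90,  "tail_span_m": 1.2, "iridescence": 0.4}
    print(respond(flashy), respond(drab))   # -> approach ignore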


My real interest is in why brains have evolved to produce sensations that can 
be described as qualia - when at first analysis this sensation doesn't appear to 
be necessary for intelligent behaviour to occur.


The options seem to me to be that qualia:

o are not necessary and come free as an accidental byproduct; or 

o are not necessary but come as a desired byproduct that has become
 implicated in gene replication and hence has been propagated and
 enhanced; or 

o are the logical result of advanced subjective information processing
 in a setting of limited computational power. 

Cheers, Philip









Re: [agi] What are qualia...

2005-01-26 Thread Philip Sutton
Brad/Eugen/Ben,

We can conjecture that early living things didn't - and current simple-minded 
living things don't - have perceptions that can be described as qualia.  Then 
somewhere along the line humans started having perceptions that some of 
them describe as qualia.  It seems that something has happened in between.

My guess is that the precursor elements of brain processing that are now 
described by some people as qualia probably emerged from two sources:

-   accidental artifacts (eg. ??reverberation from processing subjective
experience?? - the general case being artifacts that arise through
*imperfect* processing systems) 

-   positively selected 'design' solutions for advanced subjective
information processing in a setting of limited computational power

Advanced animals - especially social ones - play around with all the attributes 
of their bodies - whether or not these attributes are under first-order 
evolutionary selection.  If animal groups develop a cultural trend around an 
accidental attribute it might then start being selected for. 

Take peacocks again.  The colourful tails are presumably a 'hey look at me' 
flag. Sexual success will flow from better flags (more flamboyant tails) but 
also, presumably, from a higher state of excitement when seeing really flashy 
tails.  So the evolutionary arms race that is the peacock tail presumably will 
drive changes in the flag (in the males) and changes in the perception system 
(in the females) - this could be expected to lead to enhanced qualia 
experience in the females.  And since males and females share nearly the 
same genome and largely similar development processes, it's likely that the 
male peacocks will get enhanced capacity for rich subjective experience (ie 
qualia-like perception) as a byproduct.

Anyway this is all rampant speculation on my part.  :)

Cheers, Philip



Re: [agi] Setting up the systems....

2005-01-23 Thread Philip Sutton
Ben said:

 My experience is that in building any narrow-AI app based on Novamente
 components, around 80% of the work goes into the application and the
 domain-engineering and 20% into stuff of general value for Novamente 

Andrew said:

 I would say that general high-quality financial market prediction
 implementation is more of a generalist domain than many other possible
 narrow-AI domains, such that really good implementations would only be
 narrow in an application sense.  The abstract classes of data you
 have to integrate will map directly into most of the sensory fields
 that are often considered important for general purpose AI. 
 ..(snip)
 On the other hand, they are probably more explicitly aware of AI than
 just about any other business community. 

If financial work or some other domain actually has a high demand for general 
intelligence, then if Novamente or any other AGI project teamed up with a narrow 
AI group, maybe the AGI team could devote most of its time/money to the 
general AI aspects and the narrow AI team could worry about the 80% of the 
total task that is narrow.

I know that the general and narrow systems have to integrate, so each team 
will have to think about the work the other is doing, but presumably the AGI 
team, under the two-team scenario, could spend more than 20% of its time on 
general AI work.

Cheers, Philip



RE: [agi] What are qualia...

2005-01-22 Thread Philip Sutton
Hi Ben,

 how the subjective experience of qualia is connected to the neural
 correlates of qualia. but the tricky question is how a physical
 system (the brain) can generate subjective, phenomenal experiences. 

Oh dear... having jumped in I feel like I'm in over my head already!  :)  

What follows is just intuition, with no research or deep reading foundation at 
all...  

Let's say I look at something and I see/feel red colour.  First my brain lumps 
lots of different frequencies under a limited palette of colours that have a 
network in the brain.  So pure frequencies and mixtures of light frequencies 
are all routed to the same colour network.  Also my brain corrects for light 
intensity and context etc. So many different external light stimuli trigger a 
certain 'redness' network in the brain.  This colour network has evolved since 
colour vision existed and also has a particular evolutionary history leading to 
humans - so chances are most humans know they are seeing the 'same' red 
because the recognition system has, through evolution, created much the 
same response structure in most human brains (exceptions for colour 
blindness phenomena; also cultural and training experience will modify the 
response).  My guess is that the palette of colours (smells, tastes, tactile, all 
other sense feelings) we see is a bit like a hard-wired language - especially 
important in social beings that need to intuitively understand each other (ie. 
the system evolved a long time before word-based language) and relates to 
the value of social animals being able to 'mind read' ie. it is valuable for 
coordination to have a set of similar qualia experiences going on in many 
brains so that the animals are working to the same 'story'.  Also my guess is 
that qualia are linked fairly closely to the neural 'attention system' - are qualia 
apparent to anyone if they are not paying attention to a phenomenon? My 
intuition is to say they are not.  

My guess is that when we pay attention to sensory or other data that our 
brain connects with a quasi-sensory response, the data is tagged with labels 
that are used to trigger a suite of qualia responses - deep hard-wired patterns 
and associations built up through life - linking to memories, emotions etc. My 
guess is that it is the richness of associations that makes the qualia feel rich.  
But this would be very demanding of brain processing capacity, so I imagine 
that is why 'qualia triggering' would only be done in relation to things we are 
paying attention to.

Am I right in feeling that many people associate the experience of qualia with 
the intuitive/folk notion of 'consciousness'?  If so, the connection might be the 
'attention system' linkage?

I don't know whether any of what I've said deals with the 'hard problem' that 
you felt I had not addressed in my last message.  Let me know!  :)

Cheers, Philip



RE: [agi] What are qualia...

2005-01-22 Thread Philip Sutton



Hi Ben,


I just read Chalmers' article and yours.


You concluded your article with:

 In artificial intelligence terms, the present theory suggests that if
 an AI program is constructed so that its dynamics give rise to a
 constant stream of patterns that are novel and significant (measured
 relative to the system itself), then this program will report
 experiences of awareness and consciousness somewhat similar to those
 that humans report. 

This is a useful statement because it is testable at some stage when AI exists 
that can hold complex conversations.


By the way, would it be true that a novel and significant pattern is one that 
by definition triggers the AI's attention system? If so then that is a common 
point in both our speculations.


I think I've nearly exhausted the value of my speculations for the moment. My 
intuition is that qualia are going to be different in intelligences that do *not* 
have long evolutionary histories of being social, compared to those that do 
have such histories (a species could be currently non-social, but if it has 
evolved from antecedents that have gone through a social phase then my 
guess is that it would experience qualia more like social species ie. the 
capacity for experiencing qualia is likely to be retained to some degree). 


My guess is that there will be structured processes discovered in brains that 
account for the subjective experience of qualia - and that qualia will not be 
experienced without some appropriate system for qualia generation ie. pattern 
recognition by an AI will not be enough by itself to give rise to the experience 
of qualia. But this intuition is so speculative and so poorly based on my part 
that it probably doesn't warrant comment from others! :) So I might leave it 
there and just wait to see what people come up with in the future.


Cheers, Philip









[agi] What are qualia...

2005-01-21 Thread Philip Sutton



Hi,


I've just been thinking about qualia a bit more recently. 


(I have to make a disclaimer. I know next to nothing about them, but other 
people's ideas from this list have been fermenting in my mind for a while.)


Anyway, here goes..


How about the idea that qualia are properties generated in the brain relating to 
the *experience* of the real world. They are artifacts that are generated as ways 
of embedding labels or evaluations of data from the real world into the data 
streams that, in the brain, are tagged as being 'real'. eg. 'red' is an 
identification label, a 'stink' is a safety evaluation. 


Using qualia is probably the quickest way to compile data about the real world 
into a digested form: data that is relatively close to what the real world is like, 
bundled together with other subjective responses from within the brain, in a 
package that can conveniently be transferred through the brain tightly bound 
to the data that might be considered more objective.


Given that biological systems have had hundreds of millions of years to evolve 
this 'objective'/'subjective' data bundling, it is no wonder that it seems 
marvellously rich and seamless and carries an overwhelming sense of being 
reality.


Once complex-brained / complexly motivated creatures start using qualia, they 
could play into life patterns so profoundly that even obscure trends in the use 
of qualia for aesthetic purposes could actually affect reproductive prospects. 
For example, male peacocks have large tails that look nice - clearly qualia are 
playing a role in the differentiation process that decides which peacocks will 
be more or less successful in breeding.


Cheers, Philip









RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Philip Sutton
Hi Ben,

 If you model a system in approximate detail then potentially you can
 avoid big surprises and have only small surprises.  

In chaotic systems, my guess is that compact models would capture many 
possibilities that would otherwise be surprises - especially in the near term. 

But I think it's unlikely that these models would capture all the big potential 
surprises, leaving only small surprises to happen.  I would imagine that 
compact models would fail to capture at least some lower-probability very big 
surprises.
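
A toy illustration of the point (Python; the logistic map here just stands in 
for any chaotic system and is not meant to model anything specific): a 
'compact model' that knows the state only approximately tracks the true 
system well in the near term and then diverges completely.

    # Logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4.0 (chaotic).
    # The "true" system and a compact approximate model differ only by a
    # tiny error in the initial condition, standing in for modelling error.

    def trajectory(x0, steps, r=4.0):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    true_run  = trajectory(0.400000000, 60)
    model_run = trajectory(0.400000001, 60)   # tiny modelling error

    for n in (5, 20, 40, 60):
        print(f"step {n:2d}: prediction error = "
              f"{abs(true_run[n] - model_run[n]):.6f}")

    # Near-term errors stay tiny ("small surprises"); by around step 40-60
    # the model's prediction is no better than a guess ("big surprises"
    # that the compact model never saw coming).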

 If a super-AI were reshaping the universe, it could reshape the
 universe in such a way that from that point on, the dynamics of the
 universe would be reasonably well predictable via compact approximative
 models.  In fact this would probably be a clever thing to do, assuming
 it could be done without sacrificing too much of the creative potential
 of the universe... 

My guess is that, to make the universe a moderately predictable place, 
creativity would have to be kept at a very low level - with only creativity space 
for one super-AGI.  Trying to knock the unpredictability out of the universe 
could be engaging for a super-AGI (that was so inclined) for a while (given the 
resistance it would encounter). But I reckon the super-AGI might find a 
moderately predictable universe fairly unstimulating in the long run.  

Cheers, Philip



RE: [agi] A theorem of change and persistence????

2004-12-30 Thread Philip Sutton
Hi Ben,

On 23 Dec you said:
 I would say that if the universe remains configured roughly as it is now,
 then your statement (that long-term persistence requires goal-directed
 effort) is true.
 
 However, the universe could in the future find itself in a configuration
 in which your statement was FALSE, either
 
 -- via self-organization, or
 
 -- via the goal-directed activity of an intelligent system, which then
 stopped being goal-directed after it had set the universe in a
 configuration where its persistence could continue without goal-directed
 effort
 

Taking the last first... wouldn't option 2 require the intelligent system to end 
the evolution of the universe to achieve this result... ie. bring on the heat 
death of the universe!

I can't see why 'self-organisation' would lead to a universe where persistence 
through deep time of aspects of the universe that an intelligence favours did not 
require goal-directed effort/expenditure of energy. How could you see this 
happening?

Even if the intelligence actually absorbed the whole of the universe into itself, I 
think my theorem would still hold - because a whole-universe intelligence 
would find its internal sub-systems still evolving in surprising ways.  

It seems to me that the only way to 'model' the universe is to use the real 
whole-universe - so a whole-universe intelligence would not have enough 
computing power to model itself in complete detail; therefore the future would 
still hold surprises that the whole-universe intelligence would need to expend 
energy on to manage - while its internal low entropy lasted. 

Cheers, Philip




RE: [agi] Re: AI boxing

2004-09-19 Thread Philip Sutton
Hi Ben,  

 One thing I agree with Eliezer Yudkowsky on is: Worrying about how to
 increase the odds of AGI, nanotech and biotech saving rather than
 annihilating the human race, is much more worthwhile than worrying
 about who is President of the US. 

It's the nature of evolution that getting to a preferred future depends on getting 
through every particular today between here and there.  So the two issues 
above may not be as disconnected as you suggest.  :)

Cheers, Philip



[agi] Learning friendliness/morality in the sand box

2004-06-18 Thread Philip Sutton
Maybe a good way for AGIs to learn friendliness and morality, while still 
in the sand box, is to:

-   be able to form friendships - affiliations with 'others' that go
beyond self-interest - virtual 'others' in the sand box

-   to have responsibility for caring for virtual 'pets'

I guess this is part of a broader program to build AGIs' social skills. 

One value of targeting these two forms of relationship is that it 
raises issues of how to construct the sandbox and 'who' should be in it.  
It also raises the issue of how to make it easy for AGIs to form these 
relationships and how to structure useful learning.

The value that I see in having an AGI look after a virtual pet is that it 
gets the AGI used to recognising:

- the existence of others

- the needs of others

- the positive things that the AGI could do for the other

- the need to avoid doing damage while trying to do good

etc.

I could well imagine that the first virtual pet could be *very simple* - 
maybe a simple virtual version of the Tamagotchi 'pets'.  It might just be 
a blob with inputs, outputs and some internal processes/state 
requirements.  So if the AGI doesn't diligently work on maintaining the 
inputs and handling the outputs and keeping the environmental 
conditions OK (eg. some arbitrary factor that could be modelled on 
temperature or protection from rain or whatever), then the pet will 
decline in health/happiness or could die. 
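
A minimal sketch of what such a Tamagotchi-style pet might look like in 
code (Python; the state variables, decay rates and thresholds are invented 
purely for illustration, not a proposal for an actual sandbox design):

    # A blob-like virtual pet: a few internal state variables that decay
    # every tick and must be maintained by the carer (the AGI).
    # All the numbers here are arbitrary illustrations.

    class VirtualPet:
        def __init__(self):
            self.food = 1.0       # 1.0 = fully fed, 0.0 = starving
            self.comfort = 1.0    # stands in for temperature / shelter
            self.alive = True

        def tick(self):
            """One time step: needs decay unless the carer intervenes."""
            if not self.alive:
                return
            self.food -= 0.1
            self.comfort -= 0.05
            if self.food <= 0.0 or self.comfort <= 0.0:
                self.alive = False

        def feed(self, amount=0.3):
            self.food = min(1.0, self.food + amount)

        def shelter(self, amount=0.2):
            self.comfort = min(1.0, self.comfort + amount)

        def health(self):
            """A single health/happiness signal the carer can observe."""
            return min(self.food, self.comfort) if self.alive else 0.0

    # A neglectful carer: with no feeding or sheltering the pet is dead
    # within about a dozen ticks.
    pet = VirtualPet()
    for _ in range(12):
        pet.tick()
    print(pet.alive, round(pet.health(), 2))   # -> False 0.0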

The AGI would need to be taught and/or given a built in empathy to 
help it avoid negative states for the pet.

Care would need to be exercised to make sure that the AGIs don't learn 
or get programmed to have sharp lines of demarcation between the 
'others' they should care for and all other 'others'.

(As far as I can see most of the really nasty things that people do arise 
when they place others into the 'not to be empathised with' category ie. 
others are put in the instrumental-object category or the enemy 
category.)

Cheers, Philip



[agi] Tools and techniques for complex adaptive systems

2004-06-14 Thread Philip Sutton
Evolving Logic has developed and is continuing to work on tools for 
handling complex adaptive systems, where no model less complex than 
the system itself can accurately predict in detail how the system will 
behave at future times:
http://www.evolvinglogic.com/Learn/pdf/ToolsTechniques.pdf

Tools and Techniques for Developing Policies for Complex and 
Uncertain Systems  

Steven C. Bankes ([EMAIL PROTECTED])
Senior Computer Scientist
RAND

Abstract: Complex Adaptive Systems (CAS) can be characterized as 
those systems for which no model less complex than the system itself 
can accurately predict in detail how the system will behave at future 
times. Consequently, the standard tools of policy analysis, based as 
they are on devising policies that perform well on some best estimate 
model of the system, cannot be reliably used for CAS. This paper 
argues that policy analysis for CAS requires an alternative approach to 
decision theory. The general characteristics of such an approach are 
described, and examples provided of its application to policy analysis.  

Keywords: Complex adaptive systems (CAS), Decision Theory, 
Computational Experiments, Deep Uncertainty, Robustness, Adaptive 
Policies.  


 



[agi] Networks for plugging in FAI / AGI ?

2004-06-12 Thread Philip Sutton
People involved in FAI / AGI development might like to have a look at 
Planetwork. This might be a useful network for plugging in FAIs / AGIs 
in development.

Cheers, Philip

http://www.planetwork.net/
From their homepage:
Planetwork illuminates the critical role that the conscious use of 
information technologies and the Internet can, and indeed must, play in 
creating a truly democratic, ecologically sane and socially just future.

Founded in 1998, Planetwork

* Convenes a unique forum exploring the most critical issues 
affecting civil society in the context of the strategic use of the Web and 
information technologies.

* Attracts a multidisciplinary community of highly skilled social 
change agents to envision, implement and share solutions to the most 
pressing interrelated crises on the planet.

* Incubates projects that demonstrate the role information 
technologies play in accelerating implementation of pragmatic solutions 
to local and global issues.

* Disseminates ideas, examples and case studies of digital solutions 
designed to bring about ecological sustainability and social justice 
worldwide.

* Galvanizes a new community; providing a rare opportunity to 
experience a renewed sense of hope, inspiration and empowerment.





[agi] Intelligent software agents

2004-05-31 Thread Philip Sutton
The work by European Telecoms might be of interest:

http://more.btexact.com/projects/ibsr/technologythemes/softwareagents.htm

The text below was taken from this webpage:

Software Agents

To support the future enterprise, we deploy intelligent technology based
on a decentralised philosophy in which decisions are made by
interacting autonomous units or agents. Global structure and behaviour
is emergent, resulting from the cumulative effects of actions and
interactions of agents.

Methodologies for Engineering Multi-agent systems

We are keen to utilise the knowledge we have developed in building
multi agent systems to help ourselves and others in modelling
distributed enterprises and building multi-agent solutions. Currently this
research is realised by the MESSAGE project and funded PhD projects.
Future research is likely to consider how we can add more
methodological support to the ZEUS agent toolkit.
http://www.btexact.com/projects/agents/zeus/

MESSAGE (Methodology for Engineering Systems of Software Agents)
is a methodology for developing multi-agent systems. It is a
collaborative project conducted by EURESCOM, an institute for
collaborative R&D in telecommunications. Also involved in MESSAGE
are France Télécom, TILAB, Portugal Telecom, Broadcom, Telefónica
and Belgacom.

While most current software engineering methodologies are designed
for an object-oriented approach, MESSAGE is specifically designed for
developing agent solutions. MESSAGE aims to extend existing
methodologies by allowing them to support agent oriented software
engineering. By using such concepts as goal, task, role and interaction
from analysis through design and implementation, the developer is able
to focus on and document the specific concerns of agent functionality,
leading to quick, robust solutions.

Patterns and Role Models
We are proponents of the use of organisational patterns and role
models in developing agent systems. Patterns are living methodologies
that are generative and can deal with change. Role models can be
used to describe organisations in terms of patterns of collaboration and
interaction. Role models can be used to conceptualise, specify, design
and implement new organisations made up of people, processes,
agents and other entities.



Cheers, Philip



Re: [agi] Open AGI?

2004-03-05 Thread Philip Sutton
Bill,  

 I'd definitely see creating the first open source AGI system as a big
 opportunity.

Do you see any overwhelming risks in making AGI technology available 
to everyone including malcontents and criminals?  Would the rest of 
society be able to handle these risks if they also had access to AGI 
computation power??

Cheers, Philip



Re: [agi] Open AGI?

2004-03-05 Thread Philip Sutton
Shane,  

In your first posting on the open AGI subject you mentioned that you 
were concerned about two risks: on the one hand,
*   inordinate power being concentrated in the hands of the controllers 
of the first advanced AGI; and on the other,
*   power to do serious harm being made widely available if AGI 
technology is available to all.

My guess is that if there is very restricted access to a *very* powerful 
technology - especially one that could be used to make lots of money 
or to make a person or an organisation or nation very powerful in other 
ways - then these sorts of forces will beat a path to the source of that 
power and they will make sure they have it (by whatever means 
works).  All it will take, I suspect, is a serious demonstration of the 'proof 
of concept' and this process will be set decisively in motion.

Making the whole technology available to everyone would be one way 
to avoid the concentration of power, but it would put the technology in 
the hands of every loner malcontent and criminal across the globe. So 
on the face of it that doesn't seem to be such a good way to go.

But perhaps if everyone had access to advanced AGI computational 
power in the way that most of us have access to desktop computers 
now - would that give the rest of society the computational power to 
keep the loner malcontents and crime syndicates in check??

Maybe the way to go is to make sure that AGI computational power is 
rapidly disseminated to a *medium-sized* initial circle of users - 
corporations, governments and civil society groups - so that none of the 
legitimate forces in society get a power advantage over the others and 
so the legitimate forces in society are widely empowered and can keep 
on top of the effects of the inadvertent (but inevitable) diffusion of AGI 
power to malcontents and criminals.

If super advanced AGI power emerges under the control of one or a 
few powerful governments then I think power mongers will simply work 
to make sure they can control the government and hence the AGI 
power (as they have worked to control the military industrial complexes 
of the most powerful nations).

If AGI power emerges as a purely commercial proposition then I think 
civil society will be priced out of the market and the power balance in 
society will be seriously disturbed in the direction of further 
concentration of power favouring corporations and/or governments.  

Cheers, Philip



[agi] UNU report 2003 identified human-machine intelligence as key issue

2004-02-28 Thread Philip Sutton
The Millennium Project of the United Nations University has produced 
the 2003 State of the Future report.

The second para of the executive summary says:

Dramatic increases in collective human-machine intelligence are 
possible within 25 years. It is also possible that within the next 25 years 
single individuals acting alone might use advanced science and 
technology (S&T) to create and use weapons of mass destruction 
(WMD).   

Exec summary
http://www.acunu.org/millennium/Executive-Summary-2003.pdf

Details of full report
http://www.acunu.org/millennium/sof2003.html



Cheers, Philip



[agi] Consolidated statement of Ben's preferred AGI goal structure? (was AGIs and emotions)

2004-02-23 Thread Philip Sutton
Hi Ben,  

 Yes, of course a brief ethical slogan like choice, growth and joy is
 underspecified and all the terms need to be better defined, either by
 example or by formal elucidation, etc.  I carry out some of this
 elucidation in the Encouraging a Positive Transcension essay that
 triggered this whole dialogue... 

Sorry. I wasn't trying to annoy you.  

I think it might be a good idea to create a web page somewhere where 
you collect the current best specification/formal elucidation of your 
preferred goal structure, including the best examples.  Then the 
consolidated results of what you take from the discussion can be seen.  
Otherwise it's hard to tell what's been picked up from all the threads of 
the discussion and which emails/documents are meant to be seen as 
the current state of the art.  

I now realise that some of my recent comments on the AGI list were 
influenced by the message on Defining growth, choice and joy that 
you sent to SL4.  

Having a single consolidated/actively updated web page on the goal 
structure would also be a way to make it easier for people who might 
not be following both the AGI and SL4 list discussions and so might 
otherwise miss key developments.   

What do you think?  

Cheers, Philip  



Re: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben,

 Question: Will AGI's experience emotions like humans do?
 Answer:
 http://www.goertzel.org/dynapsyc/2004/Emotions.htm

I'm wondering whether *social* organisms are likely to have a more 
active emotional life because inner psychological states need to be 
flagged physiologically to other organisms that need to be able to read 
their states.  This will also apply across species in the case of challenge 
and response situations (buzz off or I'll bite you, etc.).  Your point about 
the physiological states operating outside the mental processes (that 
are handled by the multiverse modeller) being likely to bring on feelings 
of emotion makes sense in a situation involving trans-entity 
communication.  It would be possible for physiologically flagged 
emotional states (flushed face/body, raised hackles, bared teeth snarl, 
broad grin, aroused sexual organs, etc.) to trigger a (pre-patterned?) 
response in another organism on an organism-wide decentralised basis 
- tying in with your idea that certain responses require a degree of 
speed that precludes centralised processing.

So my guess would be that emotions in AIs would be more 
common/stronger if the AIs are *social* (ie. capable of relating to other 
entities, ie. other AIs or social biological entities) and they are able to 
both 'read' (and perhaps 'express/flag') psychological states - through 
'body language' as well as verbal language.

Maybe emotions, as humans experience them, are actually a muddled 
(and therefore interesting!?) hybrid of inner confusion in the multiverse 
modelling system and also a broad patterned communication system 
for projecting and reading *psychological states* where the reason(s) 
for the state is not communicated but the existence of the state is 
regarded (subconsciously? pre-programmed?) by one or both of the 
parties in the communication as being important.

Will AIs need to be able to share *psychological states* as opposed to 
detailed rational data with other AIs?  If AIs are to be good at 
communicating with humans, then chances are that the AIs will need to 
be able to convey some psychological states to humans since humans 
seem to want to be able to read this sort of information.

Cheers, Philip



RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben,  

Why would an AGI be driven to achieve *general* harmony between 
inner and outer worlds - rather than just specific cases of congruence? 

Why would a desire for specific cases of congruence between the inner 
and outer worlds lead an AGI (that is not programmed or trained to do 
so) to appreciate (desire??) being at one with the *universe* (when you 
use that term do you mean the Universe or just the outer world?)?  

And is a desire to seek *general* congruence between the inner and 
outer world via changing the world rather than changing the self a good 
recipe for creating a megalomaniac?

Cheers, Philip



RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton



Hi Ben, 

 Adding Choice to the mix provides a principle-level motivation not to
 impose one's own will upon the universe without considering the wills
 of others as well... 

Whose choice - everyone's or the AGI's? That has to be specified in the 
ethic - otherwise it could be the AGI only - in which case the AGI would 
*certainly* consider the wills of others as well, but only to see that 
they did not block the will of the AGI. 


A carelessly structured goal set leading to the pursuit of 
choice/growth/joy could still lead to a megalomaniac, it seems to me.


Cheers, Philip











[agi] Re: Positive Transcension 2

2004-02-19 Thread Philip Sutton



Ben,


I've just finished reading your 14 February version of Encouraging a 
Positive Transcension.


It's taken me two reads of the paper to become clear on a few issues.


It seems to me that there are really three separate ethical issues at the 
heart of the paper that have been conflated, and they are: how can we 
ensure that the next big advance in cognitive capacity in our neck of 
the universe -

- is not a disaster for existing sentient beings (humans being the
 only ones we know of presently), 

- doesn't fail to carry forward the gains made so far by existing
 creative sentient beings (including humans) and 

- helps to drive (and does not prevent) further wondrous flowering of
 the universe. 


While these issues clearly interrelate (will protecting existing sentient 
beings lead to a stagnation in the flowering of the universe?) I think 
there is something to be gained from being clear about each one.


And there is a special aspect to the first issue that shouldn't be 
overlooked. The emergence of AGI is not some inevitable process that 
Fate deals up to us. On the earth at least, it is the outcome of 
deliberate actions by a few humans that could impact on the rest of 
humanity (and perhaps a lot of the rest of the universe as well). So 
while we discuss the ethics we want to see AGIs apply, we also need 
to think about the ethics of what we ourselves are doing. If we can't 
get our own ethics sorted out then I'm not too hopeful we'll be able to 
generate appropriate and adequate ethics in our AGI progeny.


So let's start with how some humans might feel about some other 
humans creating a 'thing' which could wipe out humans without their 
agreement. 


Ben, you said: "And this may or may not lead to the demise of humanity 
- which may or may not be a terrible thing." At best, loose language like 
this means one thing to most people - somebody else is being cavalier 
about their future; at worst they are likely to perceive an active threat 
to their existence.


Frankly I doubt if anyone will care if humanity evolves or transcends to 
a higher state of being so long as it's voluntary. To a timeless observer 
it might be arguable that the humanity of 2004 (or whatever) is no 
longer to be found - but the people who have evolved/transcended will 
still feel like the humanity of the new era - they will not have been 
obliterated. To mix this sort of change up with the death of humanity 
via, for example, rather unnecessary discussions of Nietzsche's 
notions of "a good death" and "Man is something to be overcome" 
seems to me to be pointless and dangerous. After the bad death of 
many thousands of people in the Twin Towers, the US has rained death 
on many more thousands of people in the rest of the world. For AGI 
advocates to be cavalier about the lives of billions of people is, to my 
mind, to invite - very understandably - similar very nasty reactions.


To withhold concern for other humans' lives because theoretically some 
AGI might form the view that our mass/energy could be deployed more 
beautifully/usefully seems simply silly. The universe is a big place with, 
most likely, a mind-bogglingly large amount of mass/energy not used 
by any sentient beings - so having a few billion humans on the Earth or 
the nearby planets is hardly going to cramp the style of any self-
respecting AGI with a big brain.


I think the first step in creating safe AGI is for the would-be creators of 
AGI to themselves make an ethical commitment to the protection of 
humans - not because humans are the peak of creation or all that 
stunningly special from the perspective of the universe as a whole, but 
simply because they exist and they deserve respect - especially from 
their fellow humans. If AGI developers cannot give their fellow humans 
that commitment or that level of respect, then I think they demonstrate 
they are not safe parents for growing AGIs! I was actually rather 
disturbed by your statement towards the end of your paper where you 
said: "In spite of my own affection for Voluntary Joyous Growth, 
however, I have strong inclinations toward both the Joyous Growth 
Guided Voluntarism and pure Joyous Growth variants as well." My 
reading of this is that you would be prepared to inflict a Joyous Growth 
future on people whether they wanted it or not, and even if this resulted 
in the involuntary elimination of people or other sentients that somehow 
were seen by the AGI or AGIs pursuing Joyous Growth as being an 
impediment in the way of the achievement of joyous growth. If I've 
interpreted what you are saying correctly that's pretty scary stuff!


I think the next step is to consider what values we would like AGIs to 
hold in order for them to be sound citizens in a community of sentients. 
I think the minimum that is needed is for them to have a tolerant, 
respectful, compassionate, live-and-let-live attitude. This is what I 
personally would hope for from all sentients - no matter how low or 
mighty their 

Re: [agi] Futurological speculations

2004-02-11 Thread Philip Sutton
Ben,

Which list do you want Encouraging a Positive Transcension discussed
on? AGI or SL4? It could get cumbersome having effectively the same
discussion on both lists.

Thanks for the paper. it was stimulating read.

I have a few quibbles.

You discuss the value of aligning with universal tendencies.  How can
you know what's really universal, since we're in a pretty small patch of
the universe at a pretty circumscribed moment in time?  If we
happened to be in a universe that oscillated between big bangs, what
looked universal might be rather different in the expanding universe
phase and in the contracting universe phase.

Also, socially, certain things look universal until some new possibilities
pop up.  If judged at an early stage of the said popping, the new thing
might look like a quaint Quixotic endeavour.  It is only after the quaint
quixotic idea becomes widespread that its inherent universalism
becomes clear.

On the subject of growth - do you really want to foster growth per se
(quantitative moreness?) or development (qualitative improvement?)??

I’ve got a feeling that promoting growth or even development is not a
rounded enough goal.  I think there’s something powerful in the idea of
promoting (using my pet terminology) ‘genuine progress’ AND
‘sustainability’.  So at all times philosophising entities are considering
what they want to change for the better for the first time and they are
also thinking about what should be maintained from the past/present
and carried through into the future.  So both continuity and change are
important paired notions.

Your principle that the more abstract the principle, the more likely it is
to survive successive self-modification seems to make intuitive sense
to me.

You said: “that in order to make a Megalomaniac AI, one would
probably need to explicitly program an AI with a lust for power.”
Wouldn’t it be rather easy to catch the lust for power bug from the
humans that raise an AGI - or even from our literature?  I think there’s
a high chance that at least a few baby AGIs will be brought up by
megalomaniacs.  And one super-powerful megalomaniac AGI is
probably more than we want and more than we can easily deal with.

If humans are allowed to stay in their present form if they wish, and if
some humans as they are now might go dangerously berserk with
advanced technology, and we go down the AI Buddha path, then the
logic developed in your paper seems to suggest that AI Buddhas will
have to take on a Big Brother role as well as whatever else they
might do.

Cheers, Philip




[agi] Within-cell computation in biological neural systems??

2004-02-06 Thread Philip Sutton
Does anyone have an up-to-date fix on how much computation occurs 
(if any) within cells that are part of biological brain systems (as opposed 
to at the traditional neural net level)?  Especially in the case of 
animals that have a premium placed on the number of neurones they 
can support (eg. limited by size, weight or energy supply compared to 
the need for computational capacity).

Cheers, Philip



Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Philip Sutton



Thanks Bill for the Eric Baum reference.


Deep thinker that I am, I've just read the book review on Amazon and 
that has orientated me to some of the key ideas in the book (I hope!) so 
I'm happy to start speculating without having actually read the book.


(See the review below.)


It seems that Baum is arguing that biological minds are amazingly quick 
at making sense of the world because, as a result of evolution, the 
structure of the brain is set up with inbuilt limitations/assumptions based 
on likely possibilities in the real world - thus cutting out vast areas of 
speculative but ultimately fruitless computation - but presumably limiting 
biological minds' ability to understand phenomena that go beyond the 
common sense that has been structurally summarised by evolved 
shortcuts. (That must be why non-Newtonian physics always makes my 
brain hurt!)


I'm sure that most people on the list who are heavily into developing 
AGIs will have traversed this ground before. But I wondered..


(By the way... what follows is most likely not of any interest to people 
well versed in this issue. What I'm doing is feeding back to the list my 
understanding of this issue in the hope that somebody who knows all this 
stuff can tell me if I'm on the right track... so I'm really hoping I can 
learn something from both my own cogitations and from the feedback 
others can offer someone still very much in the AGI sandbox.)


So here we go... On the face of it, any AGI that is not designed with all 
these short cuts and assumptions in place has a huge amount of 
catching up to do to develop (or learn) efficient rules of thumb 
(heuristics?). Given the flexibility of AGIs and their advantages of 
computation speed and accuracy, the 4000 million years of evolutionary 
learning could perhaps be recapitulated in rather less time. But how 
much less? Would it only take 1 million years? 100,000 years? 100 
years? I'm sure, Ben, that you won't want to be sitting around training a 
baby Novamente for that long.


Perhaps AGI's need to be structured so that their minds can do two 
things:

- absorb rules of thumb from observations of other players in the world
 around them (like children picking up ways of thinking from grown-ups
 around them) or utilise rules of thumb that are donated to them via
 data dumps. 

- be prepared to and be capable of challenging absorbed rules of thumb
 and be able to revert to a systematic, relatively unbiased
 exploration of an issue when rules of thumb turn up anomalous results
 or when the AGI simply feels curious to go beyond the current rules
 of thumb 

Maybe all the databases of common-sense relationships that Cyc is 
developing, and the WordNet database etc., can be considered to be huge 
sets of inherited rules of thumb ie. they are not derived from the 
experience of the AGI.


The biggest problem for an AGI starting to learn seems to me to be simply 
getting to first base, whereby the AGI can make *any* sense of its 
basic sensory input. It seems to me that this is the AGI's hardest task if it 
doesn't have any built-in rules of thumb to orientate it.


Maybe an AGI does have to see the world through the lens of inherited 
rules of thumb in its first hours and even years in order to boost its 
competence at interpreting the world around it, and then it can go about 
replacing inherited rules of thumb with its own grounded self-generated 
rules of thumb?


Maybe it needs to have an inbuilt program a bit like an optical character 
recognition program that takes each class of incoming data and sifts it 
into pre-recognised categories of data - ie. patterns can be letters, 
numbers, colours, shapes, spatial orientation (up, down, left, right, 
forward, back etc.). Once the AGI is used to dealing with these preset 
categories it could be fed more ambiguous data where it has to perhaps 
invent new categories of its own.
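
A minimal sketch of that 'sift into pre-recognised categories, invent a new 
one when nothing fits' idea (Python; the categories and membership tests are 
invented purely for illustration):

    # Route each incoming item to an inherited (preset) category; when no
    # preset category fits, open a new category rather than forcing a match.
    # The preset categories and their tests are illustrative only.

    PRESET_CATEGORIES = {
        "letter":    lambda x: isinstance(x, str) and len(x) == 1 and x.isalpha(),
        "number":    lambda x: isinstance(x, (int, float)),
        "colour":    lambda x: isinstance(x, str) and x in {"red", "green", "blue"},
        "direction": lambda x: isinstance(x, str) and x in {"up", "down", "left", "right"},
    }

    invented = {}   # categories the learner creates for unmatched data

    def categorise(item):
        for name, test in PRESET_CATEGORIES.items():
            if test(item):
                return name
        # Nothing inherited fits: file it under a newly invented category.
        key = type(item).__name__
        invented.setdefault(key, []).append(item)
        return "new:" + key

    for item in ["a", 7, "red", "up", [1, 2, 3]]:
        print(item, "->", categorise(item))
    # "a", 7, "red" and "up" land in inherited categories; the list forces
    # the learner to open a category of its own.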


Presumably this is all very obvious, but from comments Ben has made 
over a fair length of time, it seems he's very reluctant to fill an AGI's 
head full of downloaded data/rules of thumb or whatever. Ben, the 
language you use suggests that you'd be happy to start with none of this 
downloaded stuff. But it seems to me that a new Novamente would 
struggle really badly, perhaps floundering endlessly in its effort to 
interpret incoming data, unless it's primed to make some good guesses 
and to have some preset notions of what to do with this incoming data.


It seems to me that a new-born Novamente needs to be able to use lots 
of preset rules related to its first learning environment so that, of the data 
coming in, a very large amount of it already makes sense at some level, 
so that the AGI can apply most of its brain power to resolving a few very 
simple ambiguities - like we do when solving a jigsaw puzzle. It seems 
to me the key learning experience comes from successfully mastering 
these very minor areas of ambiguity, thus starting to build up some 
personally 

RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben,

 So, I am skeptical that an AI can really think effectively in ANY
 domain unless it has done a lot of learning based on grounded
 knowledge in SOME domain first; because I think advanced cognitive
 schemata will evolve only through learning based on grounded
 knowledge... 

OK. I think we're getting close to agreement on most of this except 
what could be the key starting point.

My intuition is that, if an AGI is to avoid an (admittedly accelerated) 
recapitulation of the 3,500-million-year evolution of functioning minds, it will 
have to start thinking *first* in one domain using inherited rules of 
thumb for interpreting data (and it might help to download some initial 
ungrounded data that otherwise would have had to be accumulated 
through exposure to its surroundings).  Once the infant AGI has some 
competence using these implanted rules of thumb, it can then go 
through the job of building its own grounded rules of thumb for data 
interpretation and substituting them for the rules of thumb provided at 
the outset by its creators/trainers.

So my guess is that the fastest (and still effective) path to learning 
would be:
-   *first* a partially grounded experience 
-   *then* a fully grounded mastery 
-   then a mixed learning strategy of grounded and non-grounded as need
and opportunity dictates 

Cheers, Philip



Re: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben,

What you said to Debbie Duong sounds intuitively right to me.  I think 
that most human intuition would be inferential rather than a simulation, 
but it seems that higher primates store a huge amount of data on the 
members of their clan - so my guess is that we do a lot of simulating of 
the in-group.  Maybe your comment about empathy throws interesting 
light on this.  If we simulate our in-group but use crude inferential 
intuition for most of the out-group (except favourite enemies that we 
fixate on!!) then maybe that explains why we have so little empathy for 
the out-group (and can so easily treat them abominably).

Given that simulation is much more computationally intensive, it gives 
us a really strong reason for emphasising this capacity in AGIs, 
precisely because they may be able to escape our limitations in this 
area to a great extent.  AGIs with strong simulation capacity could 
therefore be very valuable partners (complementors) for humans.

The empathy issue is interesting in the ethical context.  We can feel 
empathy because we can simulate the emotions of others.  Maybe the 
AllSeeing AI needs to make an effort to simulate not only the thinking of 
other beings but also their emotions as well.  I guess you'd have to do 
that anyway since emotions affect thinking so strongly in many (most?) 
beings.

Cheers, Philip




You and I have chatted a bit about the role of simulation in cognition, in
the past.  I recently had a dialogue on this topic with a colleague (Debbie
Duong), which I think was somewhat clarifying.  Attached is a message I
recently sent to her on the topic.

-- ben



Debbie,

Let's say that a mind observes a bunch of patterns in a system S: P1,
P2,...,Pn.

Then, suppose the mind wants to predict the degree to which a new pattern,
P(n+1), will occur in the system S.

There are at least two approaches it can take:

1) reverse engineer a simulation S' of the system, with the property that
if the simulation S' runs, it will display patterns P1, P2, ..., Pn.  There
are many possible simulations S' that will display these patterns, so you
pick the simplest one you can find in a reasonable amount of effort.

2) Do probabilistic reasoning based on background knowledge, to derive the
probability that P(n+1) will occur, conditional on the occurrence of
P1,...,Pn

My contention is that process 2 (inference) is the default one, with process
1 (simulation) followed only in cases where

a) fully understanding the system S is very important to the mind, so that
it's worth spending the large amount of effort required to build a
simulation of it [inference being much computationally cheaper]

b) the system S is very similar to systems that have previously been
modeled, so that building a simulation model of S can quickly be done by
analogy

About the simulation process.  Debbie, you call this process simulation;
in the Novamente design it's called predicate-driven schema learning, the
simulation S' being a SchemaNode and the conjunction P1 & P2 & ... & Pn
being a PredicateNode.

We plan to do this simulation-learning using two methods:

* combinator-BOA, where both the predicate and schema are represented as
CombinatorTrees.

* analogical inference, modifying existing simulation models to deal with
new contexts, as in case b) above

If we have a disagreement, perhaps it is just about the relative frequency
of processes 1 and 2 in the mind.  You seem to think 1 is more frequent
whereas I seem to think 2 is much more frequent.  I think we both agree that
both processes exist.

I think that our reasoning about other peoples' actions is generally a mix
of 1 and 2.  We are much better at simulating other humans than we are at
simulating nearly anything else, because we essentially re-use the wiring
used to control *ourselves*, in order to simulate others.

This re-use of self-wiring for simulation-of-others, as Eliezer Yudkowsky
has pointed out, may be largely responsible for the feeling of empathy we
get sometimes (i.e., if you're using your self-wiring to simulate someone
else, and you simulate someone else's emotions, then due to the use of your
self-wiring you're gonna end up feeling their (simulated) emotions to some
extent... presto! empathy...).
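
To make the contrast between the two processes concrete, here is a toy 
sketch (Python; the observation stream and both predictors are invented 
purely for illustration - this is not Novamente's combinator-BOA or 
analogical inference): route 2 estimates P(n+1) directly from conditional 
frequencies in the observed data, while route 1 fits the simplest generative 
model that reproduces the observations and runs it forward.

    # Toy contrast between the two prediction routes described above,
    # applied to a binary observation stream.  Purely illustrative.

    observations = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # patterns seen in system S

    # Route 2 ("inference"): estimate P(next = 1 | last symbol) from bigram
    # frequencies - cheap, and computed directly on the data.
    def infer_next_prob(history):
        last = history[-1]
        follows = [history[i + 1] for i in range(len(history) - 1)
                   if history[i] == last]
        return sum(follows) / len(follows) if follows else 0.5

    # Route 1 ("simulation"): search for the simplest generative model S'
    # that reproduces the observations - here, the shortest repeating
    # cycle - and then run that model forward to predict the next value.
    def fit_simplest_cycle(history):
        for period in range(1, len(history) + 1):
            if all(history[i] == history[i % period] for i in range(len(history))):
                return history[:period]
        return history

    def simulate_next(history):
        cycle = fit_simplest_cycle(history)
        return cycle[len(history) % len(cycle)]

    print("inference:  P(next = 1) =", infer_next_prob(observations))
    print("simulation: next value  =", simulate_next(observations))
    # Both routes predict a 1 here, but the simulation route paid for a
    # model search - which is why, as argued above, it is only worth doing
    # for systems that matter a lot or resemble systems already modelled.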






RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben,  

 Well, this appears to be the order we're going to do for the Novamente
 project -- in spite of my feeling that this isn't ideal -- simply due
 to the way the project is developing via commercial applications of the
 half-completed system.  And, it seems likely that the initial
 partially grounded experience will largely be in the domain of
 molecular biology... at least, that's a lot of what our Novamente code
 is thinking about these days... 

The order might be the same but I don't think the initial content will be 
right - unless you intend that a conscious Novababy be born into a 
molecular biology world/sandbox!

What were you imagining the Novababy's first simulated or real world would 
be?  A world with a blue square and a sim-self with certain senses and 
actuators?  Or whatever.  Then that is the world I think you'll need to 
help the Novababy understand by giving it ready-made rules of thumb 
for interpreting the data generated in that precise world.  I'd be inclined 
to move on to a molecular biology world a little later in Novababy's life!  
:)

Anyway - you can test my conjectures very easily with a bit of 
experimentation.

Cheers, Philip



RE: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben,

Maybe we do simulate a *bit* more with out-groups than I first thought - 
but we do it using caricature stereotypes based on *ungrounded* data - 
ie. we refuse to use grounded data (from our in-group), perhaps, since 
that would make these out-group people uncomfortably too much like 
us.

Cheers, Philip



RE: [agi] AGIs, sub-personalities, clones and safety

2004-01-27 Thread Philip Sutton
Hi Ben,  

(I sent this message a couple of hours ago and it didn't come through 
so I've just resent the message in case it's just disappeared into 
cyberspace - never to reappear.)

 An AI mind can spin off a clone of itself with parameters re-tuned to
 be more speculative and intellectually adventuresome, and give this
 clone no interest in life other than intellectual discovery.  Imagine
 if any of us could do that?  

This section of your last email raises very interesting questions about 
the fluidity of AGIs.

If an AGI can clone itself and retune the parameters of the clone to 
pursue specialised endeavours, complete with a new personality and 
goal structure, what is to stop this new clone from becoming 
independent - escaping the sandbox?  It's a bit like the old superstition 
that it's dangerous to think about an idea that involves dystopia or 
danger because the idea can become corporeal.  How can AGIs 
speculate about good or bad possibilities without reifying these 
speculations in a way that lets them escape from the sandbox and 
become independent entities??

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] probability theory and the philosophy of science

2004-01-26 Thread Philip Sutton
Hi Ben,

I've just read: Science, Probability and Human Nature: A  
Sociological/ Computational/ Probabilist Philosophy of Science.  It's 
accessible (thanks) and very thought provoking.

As I read the paper, I imagined how these questions might relate to the 
creation and training and activities of Novamentes.

Coming out of my own work I've also been thinking about how 
Novamentes might deal with the issue of ecological sustainability.  This 
question then links up with some of the ideas in your paper.

You mentioned that key attributes of people (and perhaps also 
Novamentes?) who are likely to contribute most to the development of 
science are an interest in 'novelty' and 'simplicity' of theories (in the 
Einsteinian sense of as simple as possible, but no simpler?).  This 
was counterposed to people who seek 'stability' and 'persistence'.

For a while I've been thinking that AGIs should have an inbuilt value of 
caution in the face of possibilities to change the real world (a 
precautionary principle).  But in the light of your paper it occurred to me 
that you might see such a principle as predisposing AGIs to a 
personality of seeking stability and persistence and hence you might 
not be so keen on the idea of an inbuilt precautionary principle.

In my own work I've been trying to work out how to handle 
simultaneous drives for continuity and change.  I think these lie at the 
heart of the notion of 'sustainable development'.

I think a balanced personality needs to have both drives - to identify 
what needs to or is desirable to persist from the present into the future 
and what needs to or is desirable to be changed for the better (for the 
first time).  Perhaps then wisdom lies in the ability to decide what 
should be managed for continuity and what for change and what can be 
left to survive or not as an outcome of the evolution of the system.

So maybe the challenge is not to privilege a drive for stability and 
persistence over a drive for novelty and change - or vice versa, but to 
enable people and AGIs to have *both* sub-personalities but have a 
system for applying these sub-personalities to different key issues.  
This then pushes the debate onto the question of what guides us to 
prefer to actively sustain versus to actively change in relation to 
different issues or questions.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Hi Ben,

 For example, consider the two scenarios where AGI's are developed by
 a) the US Army
 b) Sony's toy division
 
 In the one case, AGI's are introduced to the world as super-soldiers (or
 super virtual fighter pilots, super strategy analyzers,etc.); in the other
 case, as robot companions for their children...
 
  the nature of the socialization the AGI gets will be quite different
 in case b from case a. 

The Sony option is starting to look good! :)

Better in fact than working as the manager of the computer players in 
most advanced computer games since so many of these games are no 
more peaceful than the US Army!

If AGIs get involved in running aspects of computer games, my feeling 
is that the games they contribute to would have to be chosen *very* 
carefully - unless AGIs have a brilliant capacity to stop the work they do 
from significantly reshaping their ethics.  Maybe instilling this capacity 
is one essential general element in the implementation of friendliness 
regardless of what work they do.  The implementation of this capacity 
might need to be quite subtle since AGIs would need to be able to learn 
and refine their ethics in the light of experience and yet certain types of 
work that violate their ethics shouldn't result in the emergence of 
unfriendliness.  (I think some AGIs will be able to get work as ethics 
counsellors to their AGI colleagues!  In fact it could be a growth 
industry.)

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Why not get a few AGIs jobs working on modelling of the widespread 
introduction of AGIs - under a large number of scenario conditions to 
find the transition paths that don't result in mayhem and chaos - for us 
humans and for them too.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Ben,

 I think that modeling of transition scenarios could be interesting,
 but I also think we need to be clear about what its role will be: a
 stimulant to thought about transition scenarios.  I think it's
 extremely unlikely that such models are going to be *accurate* in any
 significant sense. 

I completely agree.  It's not predictive power in the crystal ball sense 
that I'm after but the ability to think through consequences and develop 
backcasting strategies (how to make preferred futures possible) in a 
much more complex way that is nevertheless manageable and 
effective.  Also the ability to consider masses of scenarios I think is 
important.

It might also be important to be able to do this kind of 
modelling/thinking in a way that people can join in as 'within-model' 
agents.  eg. via a hybrid modelling/role play process.  Then we can tap 
some of the unpredictable creativity of people but hold the whole 
process together in a coherent way using the special capabilities of 
AGIs.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Human Cyborg

2003-10-27 Thread Philip Sutton



Hi Kevin,

I was able to reach the article at a different address:

http://star-techcentral.com/tech/story.asp?file=/2003/10/14/itfeature/6414580sec=technology



Cheers, Philip





To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]





[agi] Early AGI training - multiple communications channels / multi-tasking

2003-09-02 Thread Philip Sutton
Hi Ben,

It just occurred to me that very early in a Novamente's training you 
might want to give it more than one set of coordinated communication 
channels so that the Novamente can learn to communicate with more 
than one external intelligence at a time.

My guess is that this would lead to a multilobed consciousness - 
where each communication channel (2 way, possibly multiple senses) 
would have its own mini-consciousness and the Novamente would 
have a metaconsciousness that knits all its mental parts together as a 
whole self.

I don't think we should assume a single communications channel mode 
for Novamentes just because that's how we think of biological minds 
communicating.

Maybe it's a bit like teaching a person to play the piano with two 
hands???  Or how people learn to use whole-body motor skills for 
sport.  But with a sharper and higher level of independent consciousness 
attached to each communication channel/conversation.

We learn to play with each hand and with both hands together.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] funky robot kits and AGI

2003-08-27 Thread Philip Sutton
Hi Ben,

I'm not an electronics expert but my electric tooth brush runs on an 
induction 'connection' so there's no need for a bare wire conection to an 
electric circuit.  Maybe a covered ground-level induction grid could be 
set up.  Also you could run an electric cord to the robot.  Also I had a 
goat (biological and alive) that was run that way - on a cord - otherwise 
it was dynamite with all the plants and clothes on the line!  :)

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Embedding AI agents in simulated worlds

2003-08-19 Thread Philip Sutton
Hi Ben,

I've just read your paper (Goertzel & Pennachin) at:
http://www.goertzel.org/dynapsyc/2003/NovamenteSimulations.htm

I'm not an expert in any of this - but I'm 10 years and three years into 
raising two kids so that gives me some experience that might or might 
not be useful 

I thought what you said made good sense.

I've got two suggestions for modifications to your approach.

One is that I wonder whether it's worth building into Novamente a pre-
set predisposition to distinguish between 'me' and 'not me'.  I would 
start by setting up an 'itch' to discover whether data flowing through 
Novamente is sourced from 'outside me' or from 'within me'.  The 
second 'itch' would be to label data generated outside of 'me' as being 
to do with 'me' or to do with 'other'.  I think a baby does the latter by 
noticing close correlations between internal feelings/intentions and 
seeing things happen 'outside' - I send messages to my arms/legs and I 
see objects move in a closely correlated way (turns out that as often as 
not I see my arms and legs move, etc.). 

Another itch that I imagine you already have built into Novamente is to 
try to find closely correlated streams of data - this would tend to speed 
the process of creating 'objects' or standard actions.

It might be worth running a few experiments to see if it significantly 
speeds up learning for a Novababy to have the 'itches' about 'me' / 'not 
me' built in at the start.
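
A minimal sketch of what such an 'itch' might compute (illustrative Python; the streams, threshold and labels are invented for this example, not part of any Novamente design): a movement stream gets tagged 'me' when it correlates strongly with the agent's own motor commands.

import random

def correlation(xs, ys):
    # Pearson correlation between two equal-length numeric streams.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Motor commands the agent issued over 50 time steps ("move" = 1, "rest" = 0).
motor = [random.choice([0, 1]) for _ in range(50)]

# Stream A: movement of the agent's own limb - follows the motor commands,
# with a little sensory noise.  Stream B: movement of some other object.
stream_a = [m if random.random() > 0.1 else 1 - m for m in motor]
stream_b = [random.choice([0, 1]) for _ in range(50)]

for name, stream in [("stream A", stream_a), ("stream B", stream_b)]:
    r = correlation(motor, stream)
    label = "me" if r > 0.5 else "not me"
    print("%s: correlation %.2f -> label '%s'" % (name, r, label))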

Another thought is about the way you've split learning into direct 
environmental learning, learning to be taught, and then learning 
symbolic communication.

I think learning symbolic communication is inseparable from learning to 
be taught.  And direct environmental learning is inseparable from the 
precursors to speech.

I'll explain what I mean.

A long time ago I picked up at second hand a rather crude notion of the 
Piagetian stages of learning.  What I absorbed was the notion that 
Piaget said that kids must first learn concretely before learning 
abstractly.

This had a surface common sense ring to it, but I've now decided that I 
don't at all agree that concrete learning has to preceed abstract 
learning.  I have two reasons for thinking this.  When babies are in the 
ealiest stages of development I think they face the hardest possible 
learning tasks - they are getting a staggering stream of sensory input 
data - most of which is meaningless.  Out of this they have to sift 
signals that are meaningful - so from the minute their brains are able to 
process input data (while in utero) they engage in abstract learning - 
take the stream of raw data and abstract from it...so a tight 
coupling of environmental experience and abstract thinking is required 
form the first moment of mental capability.  Babies are undoubtedly 
pre-programmed to be alert to certain patterns of data.  This might be 
useful to get the baby responding in ways that helps its immediate 
survival.  But it might be that the pre-programing sets up a process of 
awareness crystalisation - certain streams of data can be treated as 
meaningful - and then out of the soup of other non-meaningful data 
additional correlations to the currently meaningful data can be 
developed - a bit like the way we do jigsaw puzzles.

So, I think that the process of streaming data into objects and actions 
and relationships and characteristics is already a proccess of abstract 
learning.

The second reason for thinking that abstract thinking starts very early in 
human babies is that their primary carers are talking to them all the 
time - often using highly abstract notions like "I love you", "you gorgeous 
little thing", "how clever", "oh, don't be messy" etc. etc.  The baby hears 
the words (jigsaw puzzle-like at first - ie. a fuzzy set of sounds in 
amongst other words they know) and then over time they associate 
behaviours, feelings, settings, other known words, etc. that invest these 
abstract terms with more and more meaning.  But the abstract symbol 
comes first and the meaning later.  The words are like pegs to hang 
meaning on.

Given these ways of seeing things, it's not hard to say that 'learning to 
learn from a teacher' is already a process of symbolic learning.  If a 
robot is circling another object and hoping the NovaBaby will realise 
that it wants the NovaBaby to go and get the object (or whatever) then 
it is teaching symbolic communication.  But it's just doing it in a way 
that a mute person would teach it or the way that it would have to teach 
it to a deaf child.  This form of teaching is no less abstract than the use 
of verbal symbols and it is no easier to learn (might even be harder as 
the action might not correlate so uniquely to the symbolic meaning that 
the teacher is trying to convey).

My first child started speaking at 8 months and he was clearly 
understanding words long before that - so my guess is that symbolic 
reasoning starts very, very early - and that language take-off is more to 
do with 

Re: [agi] request for feedback

2003-08-14 Thread Philip Sutton



Hi Mike,

 Conceptual necessity .. Bosons, fermions, atoms, galaxies,
 stars, planets, DNA, cells, organisms, societies, information,
 computers, AGI's, the Singularity, it's all inevitable because of
 conceptual necessity. 

I think that what you are talking about is not conceptual necessity but 
structural necessity. And while I think the structural necessity notion 
holds for a vast amount of the universe's stuff - my guess is that as 
things get more complex there is room for a huge amount of divergent 
evolution - so that it's not at all clear that massively complex 
entities/systems will emerge similarly all over the universe or every 
time the universe is rerun (should such a thing be possible).


At a very abstract level, the emergence of higher intelligence 
entities/systems may well be a structural necessity but the emergence 
of humans (homo sapiens) or human-like intelligence was highly 
contingent. (Maybe raccoons could evolve into roughly human-level 
intelligences in the right circumstances but it's unlikely they would have 
human-like intelligence?)


Hmm. I just realised that this speculation is getting off the topic 
for the AGI list so I will desist now. If anyone wants to follow it up then 
maybe we should take it to the SL4 list??


Cheers, Philip




To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]





RE: [agi] Educating an AI in a simulated world

2003-07-19 Thread Philip Sutton
Hi Ben,

If Novababies are going to play and learn in a simulated world which is 
most likely based on an agent-based/object-orientated programming 
foundation, would it be useful for the basic Novamente to have prebuilt 
capacity for agent-based modelling? Would this be necessary if a 
Novababy is to process objects in their native format as suggested by 
Brad Wyble (eg. sprites in 3d coordinates)?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Tool for building virtual worlds

2003-07-13 Thread Philip Sutton
Hi Ben,

Have you come across Game Maker 5?  It's a freeware program that 
can be used to create reasonably simple computer games, fast.

See:  www.gamemaker.nl

It might be useful for very early stage virtual worlds where you don't 
need true 3D.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Educating an AI in a simulated world

2003-07-12 Thread Philip Sutton
Ben,

I think there's a prior question to a Novamente learning how to 
perceive/act through an agent in a simulated world.

I think the first issue is for Novamente to discover that, as an intrinsic 
part of its nature, it can interact with the world via more than one agent 
interface.

Biological intelligences are born with one body and a predetermined set 
of sensors and actuators.  Later humans learn that we can extend our 
powers via technologically added sensors and actuators.

But an AGI is a much more plastic beast at the outset - it can be 
hooked to any number of sensor/actuator sets/combinations and these 
can be in the real world or in virtual reality.

My guess is that it might be useful for an AGI to learn from the outset 
that it needs to make conscious choices about which sensor/actuator 
set to use when trying to interact with the world 'out there'.

Probably to reduce early learning confusion it might be useful initially to 
give the AGI only 2 choices - between an agent that is a fixed-location 
box and an agent that is mobile - but with similar sensor sets so that it 
can fairly quickly learn that there is a relationship between what it 
perceives/learns via each sensor/actuator set.  (Bilingual children often 
learn to speak quite a bit later than monolingual children - the young 
AGI doesn't want to have early learning hurdles set too high.)
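
A tiny illustration of the 'two sensor/actuator sets, explicit choice of body' idea (hypothetical Python; the Body class, the shared distance sense and the toy world are invented here, not part of any Novamente design):

class Body:
    # One sensor/actuator set the young AGI can choose to act through.
    def __init__(self, name, can_move):
        self.name, self.can_move = name, can_move
        self.position = 0

    def sense(self, world):
        # Both bodies share the same sense: distance to the target object.
        return abs(world["target"] - self.position)

    def act(self, direction):
        # Only the mobile body actually moves when commanded.
        if self.can_move:
            self.position += direction

world = {"target": 3}
bodies = [Body("fixed box", can_move=False), Body("mobile agent", can_move=True)]

# Choosing a body is an explicit act; the same command has different effects.
for body in bodies:
    body.act(+1)
    print(body.name, "now senses distance", body.sense(world))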

What I've said above I guess only matters if you are going to let a 
Novamente persist for a long period of time ie. you don't just reset it to 
the factory settings every time you run a learning session.  If the 
Novamente persists as an entity for any length of time then its early 
learning is going to play a huge role in shaping its capabilities and 
personality.

On a different matter, I think that it would be good for the AGI to learn 
to live initially in a world that is governed by the laws of physics, 
chemistry, ecology, etc.  So, although the best initial learning 
environment might be virtual world (mainly to reduce the need for 
massive sensory processing power), I think that world should simulate 
the bounded/non magical nature of the real world we live in.

Even if an AGI chooses to live in a non-bounded/magical virtual world 
most of the time in later life it needs to know that its fundamental 
existence is tied to a real world - it's going to need non-magical 
computational power and that's going to need real physical energy and 
its dependence on the real world has consequences for the other entities 
that live in the real world ie you and me and a few billion other people 
and some tens of millions of other forms of life.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Educating an AI in a simulated world

2003-07-11 Thread Philip Sutton
Hi Ben,

I think this is a great way to give one or more Novamentes the  
experience it/they need to develop mentally, in a controlled  
environment and in an environment where the need for massive  
computational power to handle sensory data is cut (I would imagine)  
hugely thus leaving Novamente a fair bit of computational power to do  
the cognitive self-development/thinking work.  

You've probably thought of this already, but the simulated environment 
could be the way for a Novamente's carers and teachers to interface 
with the Novamente.  Rather than trying to bring Novamente into our 
world we could enter its world via virtual reality - strictly both we and 
the Novamente(s) would enter each other's experience via a shared 
virtual reality world.  So a Novamente would control the behavior of an 
agent in a simulated world and its carers/mentors would do likewise. 
The playpen that you've often talked about would be a simulated world 
and both Novamente(s) and humans could be in there together.  

I'd be very keen to collaborate on the design of the simulated world and  
on the roles/goals that Novamente might be set in such an  
environment.  I haven't got the skills to help with the development of  
Novamente internal architecture but I think I have something to offer in  
relationship to the project you are now contemplating.

Cheers, Philip


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Are multiple superintelligent AGI's safer than a single AGI?

2003-03-04 Thread Philip Sutton



Eliezer,


As a counter to my own previous argument 
about the risk of the 
simultaneous failure of AGIs, your argument is likely to be closest to 
being right in certain circumstances after the time dimension is taken 
into account.


Our previous argument has been around 
the black and white yes/no 
notion of whether simultaneous failure is likely in an AGI population.


I have argued (to my own satisfaction :) ) that the likelihood of 
simultaneous failure in the most important areas of mentation and 
psychological health is very low - ie. that failure is likely to have a 
normal distribution (bell curve form).


But the practical question is, even 
if I'm right technically, how 
temporally compressed is the bell curve likely to be? Will we get 
enough time from when the first failures occur in a population to when 
the majority of the failures occur for corrective action to be taken by 
humans and the non-failed AGIs working together?
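
One way to make that question concrete is a toy simulation: draw failure times for a population of AGIs from a normal distribution and measure the gap between the first failure and the point where half the population has failed. All the numbers below are invented purely for illustration.

import random
import statistics

POPULATION = 1000
MEAN_FAILURE_DAY = 500     # hypothetical mean time to failure
SPREAD_DAYS = 60           # standard deviation: how "compressed" the curve is

failure_days = sorted(random.gauss(MEAN_FAILURE_DAY, SPREAD_DAYS)
                      for _ in range(POPULATION))

first = failure_days[0]
median = statistics.median(failure_days)
print("first failure on day %.0f, median failure on day %.0f" % (first, median))
print("warning window: %.0f days" % (median - first))
# Shrinking SPREAD_DAYS shrinks the warning window: a temporally compressed
# bell curve leaves little time for humans and non-failed AGIs to react.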


It seems to me that if the bell curve is compressed temporally then the 
message of your argument has practical significance. So we need to 
look carefully at design inadequacies and early childhood education 
inadequacies to see where temporally bunched failures might occur; 
and, given that AGI minds will be so complex that precise anticipation 
of temporally bunched failures is likely in many cases to be 
impossible, we probably need to implement AGI architectures, 
training programs and monitoring and improvement regimes that have 
a precautionary, preventive effect.


The people from Boeing, Airbus and 
NASA might have some 
experience in trying to make fail-safe super-complex systems - and 
they might be prepared to fund research into this area. Maybe AGI 
researchers/developers could get some $$s to further their work 
through this channel. There is an interesting loop here - AGIs might be 
useful entities on aircraft as part of an anti-terrorism strategy - but you 
would need to guard against AGI failure. So the civil aircraft industry 
might be interested in general AGI development as well as the safety 
issue.


Cheers, Philip





[agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Philip Sutton



Ben,

 would you rather have one person with an IQ of 200, or 4 people with
 IQ's of 50? Ten computers of intelligence N, or one computer with
 intelligence 10*N? Sure, the intelligence of the ten computers of
 intelligence N will be a little smarter than N, all together, because
 of cooperative effects But how much more? You can say that true
 intelligence can only develop thru socialization with peers -- but
 why? How do you know that will be true for AI's as well as humans?
 I'm not so sure 

I don't think we are faced with an either/or situation in the case of AGIs. 
I think AGIs will be able to create pooled intelligence with an efficiency 
that far exceeds what humans can accomplish by group-work.


I can see no reason why a community 
of AGIs wouldn't be able to link 
brains and pool some of the computing power of the platforms that 
each one manages - so by agreement with a groups of AGIs, one AGI 
might be given the right to use some of the computer hardware that is 
normally used by the other AGIs. This of course is the idea behind the 
United Devices grid computing.


Plus the efficiency and potency of 
what can be passed between AGI 
minds is likely to be significantly greater than what can be passed 
between human minds.


And as with humans, pooling brains 
with several different 
perspectives and specialisations 
is likely to yield significant gains in 
intelligence over the simple sum of the parts.


So my guess is that the pursuit of 
the safety in numbers strategy is 
not likely to result in a very large penalty in lost intelligence.


And even if there was a large intelligence 
loss due to dividing up the 
available computing power between multiple AGIs, I'd rather have less 
AGI intelligence, that was much safer, than more intelligence that was 
much less safe.

Cheers, Philip





Re: [agi] Playing with fire

2003-03-03 Thread Philip Sutton



Hi Pei / Colin,


 Pei: This is the conclusion that I have been most afraid of from this
 Friendly AI discussion. Yes, AGI can be very dangerous, and I don't
 think any of the solutions proposed so far can eliminate the danger
 completely. However I don't think this is a valid reason to slow down
 the research. 

Wow! This is an interesting 
statement: yes, new development X 
could be very dangerous, but since we can't get 100% certainty of 
safety, we should press ahead with an implementation that is very 
significantly less than 100% guaranteed safe because we might need 
this technology to ensure that we are safe! And we can't afford to slow 
down the development of this technology X even if the purpose is to 
make technology X safer


So far, no one on this list has suggested 
stopping AGI research or 
development. What has been suggested is that, if 
it is necessary to 
free resources to work on the means to make AGIs safe/friendly, 
then the work on building the basic AGI mentation architecture should 
be slowed to free those resources and to allow the work on friendliness 
implementation to catch up.


No-one on the list has suggested any 
reason for all the haste. Why is 
the haste important or necessary?


You might like to compare the AGI 
development issue to the 
Manhattan Project. There was an argument that having the A-bomb, 
while dangerous, was going to be a net benefit - in terms of ensuring 
that the Germans didn't get it first and then later in terms of bringing the 
Pacific war to a faster close.


But safety was always a consideration. 
Firstly at the obvious level that 
the bomb had to be safe enough for the US to handle and deliver. It 
was all pretty pointless building a bomb that was likely to blow up 
before it left the US! Secondly Oppenheimer was concerned that setting 
off an A-bomb could cause a run-away fire in the atmosphere - I've 
forgotten what he and others thought might combust (I guess it was 
oxygen and nitrogen). If such a run-away conflagration could be 
triggered then there was clearly no point in having the bomb since it 
would kill everyone. But the crucial point was that this issue of run-
away conflagration was (a) identified as a legitimate concern, (b) it was 
investigated, and (c) the bomb was not used until the issue had been 
shown to not be a problem.

 Pei: I don't think any of the solutions proposed so far can eliminate
 the danger completely

Maybe so, but reducing it at least 
somewhat seems to me to be worth 
the effort.


 Pei: So my position is: let's go ahead, but carefully.


So far at least, that's my own position 
too. But what do you mean by 
being careful if it doesn't include using multiple strategies to try to 
significantly improve the odds that AGIs will be safe and friendly?


You said:

 Pei: (2) Don't have AGI developed in time may be even more dangerous.
 We may encounter a situation where AGI is the only hope for the
 survival of the human species. I haven't seen a proof that AGI is
 more likely to be evil than otherwise. 

I haven't seen the case for why we 
actually are urgently and critically 
dependent on having AGIs to solve humans' big problems. (Safe & 
friendly AGIs could be useful in lots of areas but that's totally different 
from being something that we cannot survive without.)


I personally think humans as a society 
are capable of saving 
themselves from their own individual and collective stupidity. I've 
worked explicitly on this issue for 30 years and still retain some 
optimism on the subject.


 Colin: I'm with Pei Wang. Let's explore and deal with it.

OK, if you're with Pei, what exactly 
is the position that you are not with? 

Cheers, Philip





RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton



Ben,


 Ben: That paragraph gave one possible dynamic in a society of AGI's,
 but there are many many other possible social dynamics 

Of course. What you say is quite 
true. But so what?


Let's go back to that one possible 
dynamic. Can't you bring yourself to 
agree that if a one-and-only super-AGI went feral, humans would 
then be at a greater disadvantage relative to it than if there was more 
than one AGI around and the humans could call on the help of one or 
more of the other AGIs??


Forget about all the other possible 
hypotheticals. Is my assessment of 
the specific scenario above about right or not - doesn't it have some 
element of common sense about 
it?

If there is any benefit in having more than one AGI around in the case 
where an AGI does go feral, then your comment "I'm just not so sure 
that there's any benefit to the society of AGI as opposed to one big 
AGI approach" no longer holds as an absolute.


It then gets back to "having a society of AGIs might be an advantage 
in certain circumstances, but having more than one AGI might have 
the following downsides". At this point a balanced risk/benefit 
assessment can be made (not definitive of course, since we haven't 
seen super-intelligent AGIs operating yet). But at least we've got some 
relevant issues on the table to think about.

Cheers, Philip





[agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Pei,

 I also have a very low expectation on what the current Friendly AI
 discussion can contribute to the AGI research. 

OK - that's a good issue to focus on then.

In an earlier post Ben described three ways that ethical systems could 
be facilitated:
A)  Explicit programming-in of ethical principles (EPIP) 
B)  Explicit programming-in of methods specially made for the learning
of ethics through experience and teaching 
C)  Acquisition of ethics through experience and teaching, through
generic AI methods

It seems to me that (A) and (B) have immediate relevance to the 
research needed for the development of a friendly AGI.

And Kevin has proposed the development of machinery for a big red 
button which is another tangible issue.

So maybe we should take up your point and try to make the ethics 
discussion deliberately focussed on being relevant to the research and 
trial development issues.

Would you be prepared to help us focus the discussion in this way?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton



Ben,


 I think Pei's point is related to the following point
 We're now working on aspects of
 A) explicit programming-in of ideas and processes
 B) Explicit programming-in of methods specially made for the learning
 of ideas and processes through experience and teaching
 and that until we understand these better, there's not that much useful work
 to be done specifically pertaining to *ethical* ideas and processes.

OK. That makes sense. 
While there may be some interesting tweaks 
that arise from consideration of ethical issues, I can see the broad 
sense of this strategy.

Cheers, Philip





RE: [agi] What would you 'hard-wire'?

2003-03-03 Thread Philip Sutton



Ben,


 I can see some possible value in giving a system these goals, and
 giving it a strong motivation to figure out what the hell humans mean
 by the words care, living, etc. These rules are then really rule
 templates with instructions for filling them in... 

Yes.


 However, I view this as only a guide to learning ethical rules... the
 real rules the system learns will be based on the meanings with which
 the system fills in the words in the given template rules... For
 example, the system's idea of what humans mean by living may not be
 accurate, or may be biased in some way (since after all humans have a
 rather ambiguous shifty definition of living). 

Yes again.


Picking up on your point, when AGIs 
are first created most humans will 
not see them as life. So the AGIs will need to be able to extend the 
concept of life beyond where most humans 
locate it.


Cheers, Philip





Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton



Hi Eliezer,


 This does not follow. If an AI has a P chance of going feral, then a
 society of AIs may have P chance of all simultaneously going feral 

I can see your point but I don't agree with it.


If General Motors churns out 100,000 
identical cars with all the same 
characteristics and potential flaws, they will not all 
fail at the same 
instant in time. Each of them will be placed in a different operating 
environment and the failures will probably spread over a bell curve 
style distribution.


If we apply this logic to AGIs we 
have a chance to enlist the support of 
most of the AGIs to 'recall' the population to take preventive action to 
avoid failure and will have their help to deal with the AGIs that have 
already failed.


Cheers, Philip





Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Eliezer,

 That's because your view of this problem has automatically factored
 out all the common variables.  All GM cars fail when dropped off a
 cliff.  All GM cars fail when crashed at 120 mph.  All GM cars fail on
 the moon, in space, underwater, in a five-dimensional universe.  All
 GM cars are, under certain circumstances, inferior to telecommuting. 

Good point.

Not all failures will be of this sort, though, so the group strategy is still 
useful for at least a subset of the failure cases.

Seems to me then that safety lies in a combination of all our best safety 
factors:

-   designing all AGIs to be as effectively friendly as possible - as if
we had a one shot chance of getting it right and we can't afford the
risk of failure, and AS WELL

-   developing quite a few different types of AGI architecture so that
the risk of sharing the same class of critical error is reduced; and
AS WELL 

-   having a society of AGIs with multiples of each different type -
that are uniquely trained - so that the degree of sameness and hence
risk of failure is not so tightly linked. 

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton



Ben, 


OK - so Novamente has a system for 
handling 'importance' already and 
there is an importance updating function that feeds back to other aspects of 
Attention Value. That's good in terms of Novamente having an internal 
architecture capable of supporting an ethical system. 

 You're asking the AGI to solve the inverse problem: Find the concept
 that is consistent with these descriptions and associations, and then
 embody that concept in your own behavior. I think this is a very hard
 learning problem 


... which presumably means that it will be put off until the AGI has the 
capacity to undertake the learning process. So why is this a problem? 

 and the AGI might well come up with something subtly but dangerously
 twisted I don't trust this 1/1000 as much as experience-based
 learning. 


But it's not either/or - under the 
approach that I've suggested, Novamente 
would have an itch to learn about certain ethical concepts AND it would 
gain experience-based learning - so if experience-based learning is so good 
why wouldn't it help Novamente to handle its internal itch-driven learning 
without the subtle but dangerous twisting that you fear? 


And anyway why would your pure experience-based 
learning approach be 
any less likely to lead to subtly but dangerously warped ethical systems? 
The trainers could make errors and a Novamente's self-learning could be 
skewed by the limits of its experience and the modelling it observes. 

 H. I am not certain, but I don't have a good feeling about it.
 I think it's fine to stimulate things related to compassion and
 inhibit things opposed to it, but, I think this will be useful as a
 *guide* to an experience-based learning process, not as a primary means
 of stimulating the development of a sense of compassion. 


Your approach to ethics seems to be 
based almost 100% on learning and 
you seem to think that your own team will be training all the Novamentes 
before they leave the sandbox. How can you guarantee that your team will 
always be the trainers and the quality standards will always be maintained? 


For example, why couldn't someone 
outside your group get a copy of a 
Novamente and just strip out the learned data and then retrain the new 
copy of Novamente themselves? 


--- 


Getting back to your basic preferred 
concept of using experience-based 
learning to build a Novamente's ethical beliefs - this means that every 
Novamente has to start as a tabula rasa and in effect learn all the lessons 
of evolution all by itself. 


With anything less intelligent than 
a human-equivalent Novamente this 
would be a highly inefficient approach. But with something as intelligent as 
a human-equivalent Novamente this is a hugely dangerous strategy. 


Given that ethics were not hard wired 
into early animals - you have to ask 
why this hard wiring eventually emerged. My guess is that as animals 
became more powerful and potentially dangerous to their own kind it was 
only the ones with inbuilt ethics that could be activated soon after birth that 
were safe enough to survive and pass on their genes. 


In other words the lesson of evolution 
was that evolutionary recapitulation 
could not be relied on to get each animal to a point where it was safe for its 
fellows. 


 


Just as an aside, it seems that autism 
is a condition caused by problems 
with a human's pre-wired empathy system. According to your preferred 
approach to GI training it should only be a matter of training human GI in 
ethics and empathy. Why then does autism exist as a problem since 99% 
of autistic kids are put through a major training program by parents and 
others to get them to relate socially? I simply can't see why a Novamente 
that is without a modicum of ethical hardwiring will not end up being 
autistic - no matter how good the training program you might give it. 


Why will your Novamentes not be autistic 
- despite the training regime that 
you intend? 


 


At this stage in the discussion on 
the AGI list I haven't heard anything to 
convince me that a certain amount of ethical pre-wiring is certain to cause 
problems that are any greater than the problems that could be caused by 
NOT having a modicum of carefully designed ethical hardwiring. 


 


You have said many times that we need 
to suck it and see through 
experiment - that the theory of AGI psychological development is too 
underdeveloped because we don't know what we are dealing with. 


So why not proceed to develop Novamentes 
down two different paths 
simultaneously - the path you have already designed - where experience-
based learning is virtually the only strategy, and a variant where some 
Novamentes have a modicum of carefully designed pre-wiring for ethics? 


Then you've got some experiential 
basis for comparing the two proposed 
strategies - and quick corrective action will be easier if one or other strategy 
shows signs of running into problems. 


And less 

RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton



Ben,

 I don't have a good argument on this point, just an intuition, based
 on the fact that generally speaking in narrow AI, inductive learning
 based rules based on a very broad range of experience, are much more
 robust than expert-encoded rules. The key is a broad range of
 experience, otherwise inductive learning can indeed lead to rules that
 are overfit to their training situations and don't generalize well
 to fundamentally novel situations. 

I played around with expert systems 
years ago (I designed one to 
interpret a legal framework I was working on) and I'm familiar with the 
notion of inductive learning - using computers to generate algorithms 
representing patterns in large data sets. And I can see why the fuzzier 
system might be more robust in the face of partial novelty.


But I'm not proposing that AGIs rely only on 
pre-wired ethical drivers - 
a major program of experience-based learning would also be needed - 
just as you are planning.


And in any case I didn't propose that 
the modicum of hard-wiring take 
the form of a deductive 'expert system'-style rule-base. That would be 
very inflexible as the sole basis for ethical judgement formation (and in 
any case the AGI itself would be capable of developing very good 
deductive rule-bases and inductive expert system 'rule' bases without 
the need for these to be preloaded).

 If there need to be multiple Novamentes (not clear -- one might be
 enough), they could be produced through cloning rather than raising
 each one from scratch. 


Ok - I hadn't thought of cloning as 
a way to avoid having to directly 
train every Novamente.

But the idea of having just one Novamente 
seems somewhat 
unrealistic and quite risky to me. 

If the Novamente design is going to 
enable bootstrapping as you plan 
then your one Novamente is going to end up being very powerful. If you 
try to be the gatekeeper to this one powerful AGI then (a) the rest of the 
world will end up considering your organisation as worse than Microsoft 
and many of your clients are not going to want to be held to ransom by 
being dependent on your one AGI for their mission critical work and (b) 
the one super-Novamente might develop ideas of its own that might not 
include you or anyone else being the gatekeeper.


The idea of one super-Novamente is 
also dangerous because this one 
AGI will develop its own perspective on things and given its growing 
power that perspective or bias could become very dangerous for 
anyone or anything that didn't fit in with that perspective.


I think an AGI needs other AGIs to 
relate to as a community so that a 
community of learning develops with multiple perspectives available. 
This I think is the only way that the accelerating bootstrapping of AGIs 
can be handled with any possibility of being safe.

 The engineering/teaching of ethics in an AI system is pretty different
 from its evolution in natural systems... 

Of course. But that is not to 
say that there is nothing to be learned 
from evolution about the value of building in ethics in creatures that are 
very intelligent and very powerful.


You didn't respond to one part of 
my last message:

 Philip: So why not proceed to develop Novamentes down two different
 paths simultaneously - the path you have already designed - where
 experience-based learning is virtually the only strategy, and a variant
 where some Novamentes have a modicum of carefully designed pre-wiring
 for ethics. (coupled with a major program of experience-based
 learning)? 


On reflection I can well imagine that 
you are not ready to make any 
commitment to my suggestion to give the dual (simultaneous) 
development path approach a go. But would 
you be prepared to 
explore the possibility of a dual (simultaneous) development path 
approach? I think there would be much to be learned from at least 
examining the dual approach prior to making any commitment.


What do you think?


Cheers, Philip





RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton



Ben,

  Philip: I think an AGI needs other AGIs to relate to as a community so that a
  community of learning develops with multiple perspectives available.
  This I think is the only way that the accelerating bootstrapping of
  AGIs can be handled with any possibility of being safe. **
 
 Ben: That feels to me like a lot of anthropomorphizing...


Why? Why would humans be the 
only super-intelligent GI to have 
perspectives or points of view? I would have thought it was inevitable 
for any resource limited/experience limited GI system. And any AGI in 
the real world is going to be resource and experience limited.

 To me, it's an unanswered question whether it's a better use of, say,
 10^5 computers to make them all one Novamente, or to partition them
 into a society of Novamente's 

This was the argument that raged over 
mainframe vs mini/PC 
computers. 

The question is only partly technical 
- there are many other issues that 
will determine the outcome.


If for no other reason, the monopolies 
regulators are probably not going 
to allow all the work requiring an AGI to go through one company. Also 
users of AGI services are not going to want to have to deal with a 
monopolist - most big companies will want to have at the very least 
2-3 AGI service companies in the marketplace. And it's unlikely 
that these service companies are going to want to have to buy all their 
AGI grunt from just one company.


Even in the CPU market there's still 
AMD serving up a bit of 
competition to Intel. And Windows isn't the only OS in the market.


And then there's the wider community 
- if there are going to be AGIs at 
all will the community rest easier if they think there is just one super 
AGI?? What do people think of Oracle's plan to have one big 
government database?


In any case it's clearly not safe 
to have just one AGI in existence - if the 
one AGI goes feral the rest of us are going to need to access the power 
of some pretty powerful AGIs to contain/manage the feral one. 
Humans have the advantage of numbers but in the end we may not 
have the intellectual power or speed to counter an AGI that is actively 
setting out to threaten humans.


  Philip: So why not proceed to develop Novamentes down two different
  paths simultaneously - the path you have already designed - where
  experience-based learning is virtually the only strategy, and a
  variant where some Novamentes have a modicum of carefully designed
  pre-wiring for ethics. (coupled with a major program of
  experience-based learning)?


 Ben: I guess I'm accustomed to working in a limited-resources
 situation, where you just have to make an intuitive call as to the
 best way to do something and then go with it ... and then try the next
 way on the list, if one's first way didn't work... Of course, if
 there are a lot of resources available, one can explore parallel paths
 simultaneously and do more of a breadth-first rather than a
 depth-first search through design space ! 

There is at least one other option 
that you haven't mentioned and that 
is to take longer to create the AGI via the 100% experience-based 
learning route so you can free some resources to devote to following 
the 'hard-wiring plus experiential 
learning' route as well.


It's not going to be the end of the 
world if we take a little longer to 
create a safe AGI but it could be the end of the line for all humans or at 
least those humans not allied with the AGI if we get a feral or 
dangerous AGI by mistake.


And maybe by pursuing both routes 
simultaneously you might generate 
more goodwill that might increase the resourcing levels a bit further 
down the track.


Cheers, Philip





RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-02-27 Thread Philip Sutton



Ben,

 One question is whether it's enough to create general
 pattern-recognition functionality, and let it deal with seeking
 meaning for symbols as a subcase of its general behavior. Or does
 one need to create special heuristics/algorithms/structures just for
 guiding this particular process? 

Bit of both I think. It's a bit 
like there's a search for 'meaning' and a search 
for 'Meaning'.


I think all AGIs need to search for 
meaning behind patterns to be able to 
work out useful cause/effect webs. And when AGIs work with symbols this 
general 'seeking the meaning of patterns' process can be applied as the first 
level of contemplation.


But in the ethical context I think 
we are after 'Meaning' where this relates 
to some notion of the importance of 
the pattern or symbol for some 
significant entity - for the AGI, the AGI's mentors, other sentient beings and 
other life.


At the moment you have truth and attention 
values attached to nodes and 
links. I'm wondering whether you need to have a third numerical value type 
relating to 'importance'. Attention has a temporal implication - it's intended 
to focus significant mental resources on a key issue in the here and now. 
And truth values indicate the reliability of the data. Neither of these 
concepts captures the notion of importance.


I guess the next question is, what 
would an AGI do with data on importance? 
I'm just thinking off the top of my head, but my guess is that if the nodes 
and links had high importance values but low truth values that this should 
set up an 'itch' in the system driving the AGI to engage in learning and 
contemplation that would lift the truth values. Maybe the higher the 
dissonance between the importance values and the truth values, the more 
this would stimulate high attention values for the related nodes and links.
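
A rough sketch of that suggestion (illustrative Python; the Node fields and the dissonance formula are my own stand-ins, not anything from the Novamente design):

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    truth: float       # 0..1, reliability of what the AGI knows about this
    importance: float  # 0..1, how much the concept matters
    attention: float = 0.0

def update_attention(node, gain=1.0):
    # High importance plus low truth -> big itch -> more attention (and hence
    # more learning effort directed at raising the truth value).
    dissonance = node.importance * (1.0 - node.truth)
    node.attention = min(1.0, node.attention + gain * dissonance)
    return node.attention

nodes = [Node("compassion", truth=0.1, importance=0.9),
         Node("chess openings", truth=0.1, importance=0.2),
         Node("compassion, well understood", truth=0.9, importance=0.9)]

for n in nodes:
    print(n.name, "-> attention", round(update_attention(n), 2))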


Then there's the question of what 
would generate the importance values. I 
think these values would ultimately be derived from the perceived 
importance values conveyed by 'significant others' for the AGI and by the 
AGI's own ethical goal structure.


 I don't think that preloading symbols and behavior models for
 something as complex as *ethical issues* is really going to be
 possible. I think ethical issues and associated behavior models are
 full of nuances that really need to be learned. 

Of course ethical issues and 
associated behavior models are full of 
nuances that really need to be learned to make much deep sense. Even 
NGIs like us, with presumably loads of hardwired predisposition to ethical 
behaviour, can spend their whole life in ethical learning and contemplation! 
:) 


So I guess the issues are (a) whether 
it's worth preloading ethical concepts and 
(b) whether it's possible to do it.


I'll start with (b) first and then 
consider (a) (since lots of people have a 
pragmatic tendency not to bother about issues till the means for acting on 
them are available).


(Please bear in mind that I'm not 
experienced or expert in any of the 
domains I'm riding roughshod over... everything I say will be intuitive 
generalist ideas...)


Let's take the hardest case first. 
Let's take the most arcane abstract 
concept that you can think of or the one that has the most intricate and 
complex implications/shades of meaning for living.


Let's label the concept B31-58-DFT. 
We create a register in the AGI 
machinery to store important ethical concepts. We load in the label B31-
58-DFT and we give it a high importance value. We also load in a set of 
words in quite a few major languages into two other registers - one set of 
words are considered to have meaning very close to the concept that we 
have prelabelled as B31-58-DFT. We also load in words that are not the 
descriptive *meaning* of the B31-58-DFT concept but are often associated 
with it. We then set the truth value of B31-58-DFT to, say, zero. We also 
create a GoalNode associated to B31-58-DFT that indicates whether the 
AGI should link B31-58-DFT to its positive goal structure or to its negative 
goal structure ie. is B31-58-DFT more of an attractor or a repeller concept?
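
A minimal data-structure sketch of what such a register entry might look like (illustrative Python; every field name here is invented for the example):

from dataclasses import dataclass, field

@dataclass
class EthicalConceptEntry:
    label: str                     # opaque internal label, e.g. "B31-58-DFT"
    importance: float              # preset high importance value
    truth: float                   # starts near zero: nothing yet learned
    goal_polarity: int             # +1 attractor concept, -1 repeller
    meaning_words: dict = field(default_factory=dict)     # language -> words
    associated_words: dict = field(default_factory=dict)  # language -> words

entry = EthicalConceptEntry(
    label="B31-58-DFT",
    importance=0.95,
    truth=0.0,
    goal_polarity=+1,
    meaning_words={"en": ["kindness", "care"], "fr": ["bienveillance"]},
    associated_words={"en": ["smile", "help", "thank you"]},
)

# The gap between importance and truth is what would drive the learning itch.
print("itch strength for", entry.label, "=", entry.importance * (1 - entry.truth))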


(BTW, most likely there would need 
to be some system for ensuring that the 
urge to contemplate concept B31-58-DFT didn't get so strong that the AGI 
was incapable of doing anything else.)


We could also load in some body-language 
patterns often observed in 
association with the concept if there are such things in this case eg. smiles 
on human faces, wagging tails on dogs, purring in cats, etc. (or some other 
pattern, eg. (1) bared teeth, growling, hissing, frowns, red faces; (2) pricked 
ears, lifted eyebrows, quiet alertness; and so on).


We make sure that the words we load 
in to the language registers include 
words that the AGI in the infantile stages of development might most likely 
associate with concept B31-58-DFT - so that the association between the 
prebuilt info about B31-58-DFT and what the AGI learns early in its life can 

RE: [agi] more interesting stuff

2003-02-25 Thread Philip Sutton
Ben/Kevin,

 The dynamics of evolution through progressive self-re-engineering
 will, in my view, be pretty different from the dynamics of evolution
 through natural selection. 

Lamarckian evolution (cf. Darwinian evolution) gets a new lease of life!

Cheers, Philip


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


[agi] Can an AGI remain general if it can self-improve?

2003-02-22 Thread Philip Sutton



If an AGI can self-improve what is 
the likelihood that the AGI will 
remain general and will not instead evolve itself rapidly to be a super-
intelligent specialist following the goal(s) that grab its attention early in 
life? I think that most humans tend to move in the specialist direction 
as they develop.


Would biological life and especially 
humans face more of a threat from 
super-intelligent AGIs or from super-intelligent artificial specialist 
intelligences (ASIs)?


In the dystopian scenarios that people 
have played out on this list most 
of the intelligence upgrade paths seem to be implicitly from AGI to 
super-intelligent ASIs.


If AGIs are to be ethical (have compassionate 
concern for otherness) 
then I wonder whether they need to remain AGIs ie. to be able to think 
and empathise in a very rounded multi-faceted way.


If so what goals and structural features 
need to be built in to drive the 
AGI stably 'forever' in the direction of building 'general' intelligence (no 
matter what specialist intelligence might be developed along the way)?


By the way, is anyone on the list 
into cybernetics or control theory? It 
seems to me that one of the useful leads from this area is the use of 
clusters of goals that result in the desired behavioural trajectory (in 
effect a super goal) as an emergent. In other words the cluster of 
apparently lower order goals provide the necessary variety of 
feedbacks that are needed to keep the emergent super-system on track 
despite having to deal with a complex and unpredictable environment.


It might require something like a complex goal-set, built in at the start, 
to keep an AGI wanting to stay a general intelligence as it gets more 
intellectually powerful. Possibly sub-goals like curiosity and prudence 
when paired (and almost certainly when also combined with a number 
of other sub-goals) could deliver the persistent 'general intelligence'-
seeking behaviour that might be desirable?


Cheers, Philip





RE: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Philip Sutton
Ben,

 OK... life lesson #567: When a mathematical explanation confuses
 non-math people, another mathematical explanation is not likely to
 help 

While I can't help with the solution, I can say that this version of your 
problem at last made sense to me - previous versions were 
incomprehensible to me, this last version leaped off the page as 
comprehensible communication.  So your rule above holds very well.

If you can teach Novamente to do what you have just done here you've 
made a big leap forward in human / Novamente communication.

Cheers, Philip

From:   Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject:RE: [agi] A probabilistic/algorithmic puzzle...
Date sent:  Thu, 20 Feb 2003 14:25:54 -0500
Send reply to:  [EMAIL PROTECTED]



OK... life lesson #567: When a mathematical explanation confuses 
non-math people, another mathematical explanation is not likely to 
help

The basic situation can be thought of as follows.

Suppose you have a large set of people, say, all the people on Earth

Then you have a bunch of categories you're interested in, say:

Chinese
Arab
fat
skinny
smelly 
female
...


Then you have some absolute probabilities, e.g.

P(Chinese) = .2
P(fat) = .15

etc. , which tell you how likely a randomly chosen person is to fall into 
each of the categories

Then you have some conditional probabilities, e.g.

P(fat | skinny)=0
P(smelly|male) = .62
P(fat | American) = .4
P(slow|fat) = .7

The third one, for instance, tells you that if you know someone is 
American, then there's a .4 chance the person is fat (i.e. 40% of 
Americans are fat).

The problem at hand is, you're given some absolute and some 
conditional probabilities regarding the concepts at hand, and you want 
to infer a bunch of others.

In localized cases this is easy, for instance using probability theory one 
can get evidence for

P(slow|American)

from the combination of

P(slow|fat)

and

P(fat | American)
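
A minimal sketch of that chaining step in Python (the fallback to the prior 
P(A) for the "not B" cases is a simplifying independence assumption I'm 
adding here, not something given in the problem):

    # Estimate P(A|C) from P(A|B) and P(B|C), letting the prior P(A)
    # stand in for P(A | not-B, C).
    def chain(p_a_given_b, p_b_given_c, p_a):
        return p_a_given_b * p_b_given_c + p_a * (1.0 - p_b_given_c)

    # P(slow|fat) = .7, P(fat|American) = .4, and a made-up prior P(slow) = .2
    print(chain(0.7, 0.4, 0.2))   # evidence for P(slow|American): 0.4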

Given n concepts there are n^2 conditional probabilities to look at. 
The most interesting ones to find are the ones for which

P(A|B) is very different from P(A)

just as for instance

P(fat|American) is very different from P(fat)

This problem is covered by elementary probability theory. Solving it in 
principle is no issue. The tricky problem is solving it approximately, for 
a large number of concepts and probabilities, in a very rapid 
computational way.

Bayesian networks try to solve the problem by seeking a set of 
concepts that are arranged in an independence hierarchy (a directed 
acyclic graph with a concept at each node, so that each concept is 
independent of its non-descendants conditional on its parents -- and no I don't 
feel like explaining that in nontechnical terms at the moment ;). But 
this can leave out a lot of information because real conceptual 
networks may be grossly interdependent. Of course, then one can try 
to learn a whole bunch of different Bayes nets and merge the 
probability estimates obtained from each one

One thing that complicates the problem is that ,in some cases, as well 
as inferring probabilities one hasn't been given, one may want to make 
corrections to probabilities one HAS been given. For instance, 
sometimes one may be given inconsistent information, and one has to 
choose which information to accept.

For example, if you're told

P(male) = .5
P(young|male) = .4
P(young) = .1

then something's gotta give, because the first two probabilities already 
imply that P(young) is at least .5*.4 = .2
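
A minimal check of that bound (just the arithmetic, not the Novamente 
revision machinery):

    # P(young) can never be below P(young|male) * P(male), so the three
    # numbers above cannot all be right.
    p_male, p_young_given_male, p_young = 0.5, 0.4, 0.1
    lower_bound = p_young_given_male * p_male          # 0.2
    if p_young < lower_bound:
        print("inconsistent: P(young) =", p_young,
              "but it must be at least", lower_bound)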

Novamente's probabilistic reasoning system handles this problem pretty 
well, but one thing we're struggling with now is keeping this correction 
of errors in the premises under control. If you let the system revise its 
premises to correct errors (a necessity in an AGI context), then it can 
easily get carried away in cycles of revising premises based on 
conclusions, then revising conclusions based on the new premises, and 
so on in a chaotic trajectory leading to meaningless inferred 
probabilities.

As I said before, this is a very simple incarnation of a problem that 
takes a lot of other forms, more complex but posing the same essential 
challenge.

-- Ben G







---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AIXI and Solomonoff induction

2003-02-21 Thread Philip Sutton
Ed,

 From my adventures in physics, I came to the conclusion that my
 understanding of the physical world had more to do with 1. My ability
 to create and use tools for modeling, i.e. from the physical tools of
 an advanced computer system to my internal abstraction tools like a
 new theorem of group algebra that helps me organize the particle world,
 2. My internal mechanism for modeling, i.e. my internal neural
 structure, than it had to do with any 'physical reality'. 

Isn't the deterministic universe a working hypothesis that drives a lot of 
technological development and science?  In other words we expect to 
find regularities and causal webs when we know enough about the 
system?

It seems to me that we can't tell at this point whether we live in a 
universe that is deterministic all the way down.  The permanently 
inevitable limits on our perception, modelling skills and depth of 
knowledgebase prevent us from developing a fully deterministic model 
for all issues based on modelling all details of the universe down to the 
finest detail.  So for most questions we must simplify and work with 
black boxes at all sorts of levels.  This means that the statistical 
probablistic approach works best for lots of issues but as our 
knowledgebase, perception and modelling skills improve we can apply 
approximate deterministic approaches to more things.

My guess is that if, as we or AGIs improve our knowledgebase, 
perception and modelling skills, we find that 'we' can apply 
approximately deterministic models to explain more and more 
things that previously had to be grappled with using statistical 
probabilistic approaches, then that strengthens the value of 
the deterministic-universe working hypothesis - but of course, since we 
can never model the whole universe in full detail while we are within 
the universe itself, we will never know whether at bottom it really is 
deterministic or probabilistic - this is the Pooh Bear problem.  Is there 
really cheese at the bottom of the honey jar?  Can't tell till you get 
there.

I once skimmed a book that claimed we are actually artifacts living in 
some other being's simulation - which was supposedly why the 
Newtonian world of day-to-day life gives way to the probabilistic 
quantum world.  :)

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Developing biological brains and computer brains

2003-02-18 Thread Philip Sutton
Brad/Ben/all,

I think Ben's point about not trying to emulate biological brains with 
computers is quite important.

The media they are working with (living cells, computer chips) are 
very different.   Effective brains emerge out of an interplay between 
the fundamental substrate and the connections with the external 
environment that stimulate the need for and utility of mind processes 
(unconscious or conscious).

The emergence of mind requires an evolutionary interaction between 
the potential mind substrate and the environment.  In the case of the 
computer-based system, humans and later AGIs can also consciously 
design components/concepts that can be thrown into the mind 
generating architecture.  But over it all there will still be a powerful 
evolutionary process of try some things, see what happens, make a 
selection of what seems to work best, try some more things ...

One thing Ben said is very relevant:

 This precision allows entirely different structures and dynamics to be
 utilized, in digital AGI systems as opposed to brains.  For example,
 it allows correct probabilistic inference calculations (which humans,
 at least on the conscious level, are miserable at making); it allows
 compact expression of complex procedures as higher-order functions (a
 representation that is really profoundly unbrainlike); etc. 

In other words when you are dealing with a profoundly different 
substrate what you can try to do can be very different and the evolution 
of systems in different substrates will therefore inevitably be different.

So AGIs are our first experience with truly alien intelligence - ie. built 
on a profoundly different substrate to biological systems (that have 
Earth history).

That is not to say that there will not be convergent evolution in 
biological brains and computer brains - we share the same meta 
environment and we will interact with each other.

And I'm sure there will be lots of things that can be learned from the 
study of biological brains that will be useful for designing/evolving 
computer brains but it seems that starting with an awareness that 
biological brains and computer brains need to evolve differently due to 
their fundamental substrate difference makes sense to me.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] doubling time revisted.

2003-02-17 Thread Philip Sutton
Stephen Reed said:

 Suppose that 30-50 thousand state of the art computers are equivalent
 to the brain's processing power (using Moravec's assumptions).  If
 global desktop computer system sales are in the neighborhood of 130
 million units, then we have the computer processing equivalent of
 2,600 human brains should they all somehow be linked together. 

That means with 6 billion people in the world we have the installed 
capacity of humans equivalent to between 180,000,000,000,000 and 
300,000,000,000,000 state of the art computers.
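
The back-of-envelope arithmetic behind both figures (a sketch only):

    # Stephen's figure: yearly desktop sales divided by computers-per-brain.
    units_sold = 130_000_000
    computers_per_brain_low, computers_per_brain_high = 30_000, 50_000
    print(units_sold / computers_per_brain_high)       # ~2,600 brain equivalents

    # The figure above: people on Earth times computers-per-brain.
    people = 6_000_000_000
    print(people * computers_per_brain_low,            # 180,000,000,000,000
          people * computers_per_brain_high)           # 300,000,000,000,000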

Cheers, Philip


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Breaking AIXI-tl - AGI friendliness

2003-02-16 Thread Philip Sutton
Hi Eliezer/Ben,

My recollection was that Eliezer initiated the Breaking AIXI-tl 
discussion as a way of proving that friendliness of AGIs had to be 
consciously built in at the start and couldn't be assumed to be 
teachable at a later point. (Or have I totally lost the plot?)

Do you feel the discussion has covered enough technical ground and 
established enough consensus to bring the original topic back into 
focus?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] Novamente: how crtical is self-improvement to getting human parity?

2003-02-16 Thread Philip Sutton
Ben,

Thanks for that. Your explanation makes the whole thing a lot clearer. 
I'll come back to this thread again after Eliezer's discussion on AGI 
friendliness has progressed a bit further.

Cheers, Philip

From:   Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject:RE: [agi] Novamente: how crtical is self-improvement to 
getting human parity?
Date sent:  Sun, 16 Feb 2003 12:13:16 -0500
Send reply to:  [EMAIL PROTECTED]



Hi,

As we're thinking about it now, Novamente Version 1 will not have 
feature 4. It will involve Novamente learning a lot of small programs to 
use within its overall architecture, but not modifying its overall 
architecture.

Technically speaking: Novamente Version 1 will be C++ code, and 
within this C++ code, there will small programs running in a language 
called Sasha. Novamente will write its own Sasha code to run in its 
C++ Mind OS, but will not modify its C++ source.

The plan for Novamente Version 2 is still sketchy, because we're 
focusing on Version 1, which still has a long way to go. One possible 
path is to write a fast, scalable Sasha compiler and write the whole 
thing in Sasha. Then the Sasha-programming skills of Novamente 
Version 1 will fairly easily translate into skills at deeper-level self-
modification. (Of course, the Sasha compiler will be in C++ ... so 
eventually you can't escape teaching Novamente C++ ;-).
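
A toy analogy in Python (my own sketch, not Novamente's actual design 
and not the Sasha language) of that distinction: the agent may rewrite 
entries in its own script table, but the fixed host machinery that runs 
them is out of reach:

    # The "Mind OS" part: fixed machinery the agent cannot rewrite.
    scripts = {"greet": "print('hello')"}

    def run(name):
        exec(scripts[name])            # the host interpreter itself stays the same

    # The self-modification that *is* allowed: editing its own scripts.
    def self_modify(name, new_source):
        scripts[name] = new_source

    run("greet")
    self_modify("greet", "print('hello, version 2')")
    run("greet")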

How intelligent Novamente Version 1 will be -- well ... hmmm ... who 
knows!! 

Among the less sexy benefits of the Novamente Version 2 architecture, 
I really like the idea of having Novamente correct bugs in its own 
source code. It is really hard to get a complex system like this truly 
bug-free. An AGI should be a lot better at debugging very complex 
code than humans are! 

So the real answer to your question is, I'm not sure. My hope, and my 
guess, is that Novamente Version 1 will --- with ample program learning 
and self-modification on the Sasha level -- be able to achieve levels of 
intelligence that seem huge by human standards. 

Of course, a lot of sci-fi scenarios suggest themselves: What happens 
when we have a super-smart Version 1 system and it codes Version 2 
and finds a security hole in Linux and installs Version 2 in place of half 
of itself, then all of itself... etc. 


-- Ben G



-Original Message-
From: [EMAIL PROTECTED] [mailto:owner-
[EMAIL PROTECTED]]On Behalf Of Philip Sutton
Sent: Sunday, February 16, 2003 10:55 AM
To: [EMAIL PROTECTED]
Subject: [agi] Novamente: how crtical is self-improvement to 
getting human parity?

Hi Ben,

As far as I can work out, there are four things that could conceivably 
contribute to a Novamente reaching human intelligence parity:

1 the cleverness/power of the original architecture 

2 the intensity, length and effectiveness of the Novamente learning
 after being booted up

3 the upgrading of the architecture/code base by humans as a result of
 learning by anyone (including Novamentes). 

4 the self-improvement of the architecture/code base by the Novamente
 as a result of learning by anyone (humans and Novamentes). 

To what extent is the learning system of the current Novamente 
system (current or planned for the first switched-on version) dependent 
on or intertwined with the capacity for a Novamente to alter its own 
fundamental architecture?

It seems to me that the risk of getting to the singularity (or even a 
dangerous earlier stage) without the human plus AGI community being 
adequately prepared and sufficiently ethically mature lies in the 
possibility of AGIs self-improving on an unhalted exponential trajectory.

If you could get Novamentes to human parity using strategies 1-3 only 
then you might be able to control the process of moving beyond human 
parity sufficiently to make it safe.

If getting to human parity relies on strategy 4 then the safety strategy 
could well be very problematic - Eliezer's full Friendly AI program might 
need to be applied in full (ie. developing the theory of friendliness first 
and then applying Supersaturated Friendliness, as Eliezer calls it).

What do you reckon?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] The core of the current debate??

2003-02-16 Thread Philip Sutton

I was just thinking, it might be useful to make sure that in pursuing the 
Breaking AIXI-tl - AGI friendliness debate we should be clear what the 
starting issue is.

I think it is best defined by Eliezer's post on 12 Feb and Ben's reply of 
the same day:

Eliezer's post:
http://www.mail-archive.com/agi@v2.listbox.com/msg00792.html

Ben's post:
http://www.mail-archive.com/agi@v2.listbox.com/msg00799.html

Should the core issue be restated in any way or are these two posts 
adequate as the launch point?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] Breaking AIXI-tl - AGI friendliness - how to move on

2003-02-16 Thread Philip Sutton
Hi Ben,

From a high order implications point of view I'm not sure that we need 
too much written up from the last discussion.

To me it's almost enough to know that both you and Eliezer agree that 
the AIXItl system can be 'broken' by the challenge he set and that a 
human digital simulation might not be.  The next step is to ask 'so what?'  
What has this got to do with the AGI friendliness issue?

 Hopefully Eliezer will write up a brief paper on his observations
 about AIXI and AIXItl.  If he does that, I'll be happy to write a
 brief commentary on his paper expressing any differences of
 interpretation I have, and giving my own perspective on his points.  

That sounds good to me.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Philip Sutton
Eliezer/Ben,

When you've had time to draw breath can you explain, in non-obscure, 
non-mathematical language, what the implications of the AIXI-tl 
discussion are?

Thanks.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] who is this Bill Hubbard I keep reading about?

2003-02-14 Thread Philip Sutton
Bill,

Gulp..who was the Yank who said ... it was I ??? Johnny Appleseed 
or something?  

Well, it's my turn to fess up.  I'm pretty certain that it was my slip of the 
keyboard that started it all.  Sorry.

:)

My only excuse is that in my area of domain knowledge King Hubbard 
is very famous. He was chief geologist in the US Geological Survey in 
the 1950s or something like that.  He developed a model of oil 
depletion that is being played out now.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Philip Sutton
Ben/Bill,

My feeling is that goals and ethics are not identical concepts.  And I 
would think that goals would only make an intentional ethical 
contribution if they related to the empathetic consideration of others.

So whether ethics are built in from the start in the Novamente 
architecture depends on whether there are goals *with ethical purposes* 
included from the start.

And whether the ethical system is *adequate* from the start would 
depend on the specific content of the ethically related goals and the 
resourcing and sophistication of effort that the AGI architecture directs 
at understanding and acting on the implications of the goals vis-a-
vis any other activity that the AGI engages in.  I think the adequacy of 
the ethics system also depends on how well the architecture helps the 
AGI to learn about ethics.  If it is a slow learner then the fact that it has 
machinery there to handle what it eventually learns is great but not 
sufficient.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] AGI morality - goals and reinforcement values - plus early learning

2003-02-11 Thread Philip Sutton
Ben,

 Right from the start, even before there is an intelligent autonomous mind
 there, there will be goals that are of the basic structural character of
 ethical goals.  I.e. goals that involve the structure of compassion, of
 adjusting the system's actions to account for the well-being of others based
 on observation of and feedback from others. These one might consider as the seeds 
of future ethical goals.  They will
 grow into real ethics only once the system has evolved a real reflective
 mind with a real understanding of others...

Sounds good to me!  It feels right.

At some stage when we've all got more time, I'd like to discuss how the 
system architecture might be structured to assist the ethical learning of 
baby AGIs.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] unFriendly AIXI

2003-02-11 Thread Philip Sutton
Eliezer,

In this discussion you have just moved the focus to the superiority of 
one AGI approach versus another in terms of *interacting with 
humans*.

But once one AGI exists it's most likely not long before there are more 
AGIs and there will need to be a moral/ethical system to guide AGI-AGI 
interaction.  And with super clever AGIs around it's likely that 
human modification speeds up, leading the category 'human' to become a 
very loose term.  So we need a moral/ethical system to guide AGI-
once-were-human interactions.

So for these two reasons alone I think we need to start out thinking in 
more general terms than AGIs being focussed on 'interacting with 
humans'.

If you have a goal-modifying AGI it might figure this all out.  But why 
should the human designers/teachers not avoid the problem in the first 
place, since we can anticipate the issue already fairly easily?

Of course in terms of the 'unFriendly AIXI' debate this issue of a tight 
focus on interaction with humans is of no significance, but I think it is 
important in its own right. 

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Self, other, community

2003-02-10 Thread Philip Sutton
A number of people have expressed concern about making AGIs 'self' 
aware - fearing that this will lead to selfish behaviour.

However, I don't think that AGIs can actually be ethical without being 
able to develop awareness of the needs of others, and I don't think you 
can be aware of others' needs without being able to distinguish between 
your own needs and others' needs (ie. others' needs are not simply the 
self's needs).

Maybe the solution is to help AGIs to develop a basic suite of concepts:
- self
- other
- community

I think all social animals have these concepts.  

Where AGIs need to go further is to have a very inclusive sense of 
what the community is - humans, AGIs, other living things - and then to 
have a belief that they should modify their behaviour to optimise for all 
the entities in the community rather than for just 'self'.

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben,

 I agree that a functionally-specialized Ethics Unit could make sense in
 an advanced Novamente configuration ... devoting a Unit to ethics
 goal-refinement on an architectural level would be a simple way of
 ensuring resource allocation to ethics processing through successive
 system revisions. 

OK.  That's good.

You've discussed this in terms of GoalNode refinement.  I probably don't 
understand the full range of what this means, but my understanding of 
how ethics works is that an ethical sentient being starts with some 
general ethical goals (some hardwired, some taught and all blended!) 
and then the entity (a) frames action motivated by the ethics and (b) 
monitors the environment and internal processes to see if issues come 
up that call for an ethical response - then any or all of the following 
happen: the goals might be refined so that it's possible to apply the 
goals to the complex current context and/or the entity goes on to 
formulate actions informed by the ethical cogitation.

So on the face of it an Ethics Unit of an AGI would need to do more 
than GoalNode refinement??  Or have I missed the point?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] A thought.

2003-02-06 Thread Philip Sutton
Brad,

 But I think that the further down you go towards the primitive level,
 the more and more specialized everything is.  While they all use
 neurons, the anatomy, and neurophysiology of low level brain areas are
 so drastically different from one another as to be conceptually
 distinct. 

I can understand that the brain structure we see in intelligent animals would 
emerge from a process of biological evolution where no conscious 
design is involved (ie. specialised non-conscious functions emerge first, 
generalised processes emerge later), but why should AGI design 
emulate this, given that we can now apply conscious design processes 
in addition to the traditional evolutionary incremental trial and error 
methods? 

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Brain damage, anti-social behaviour and moral hard wiring?

2003-01-30 Thread Philip Sutton
Has anyone on the list looked in any detail at the link between brain 
damage and anti-social behaviour and the possible implications for 
hardwiring of moral capacity?  Specifically has anyone looked at the 
contribution that brain damage or brain development disorders may 
make towards the development of autism, Asperger's syndrome and the 
broad sweep of anti-social disorders and psychopathic conditions?

An article at:
http://news.bbc.co.uk/1/hi/sci/tech/479405.stm
includes the following intriguing quote in relation to damage to the 
prefrontal lobe:

 The team also noticed a difference between those people brain injured
 as children and those damaged as adults. The adult patients understood
 moral and social rules but appeared unable to apply them to their own
 lives. 
 
 Those damaged at an early age seemed unable to learn the rules in the
 first place, having as adults the moral reasoning skills of 10 year
 olds. They also were more likely to exhibit psychopathic behaviour like
 stealing and being violent. 

This reinforces my moderately uninformed intuition that the early 
learning of AGIs' morality might be assisted by structurally dedicating 
some AGI 'brainspace' to constantly reviewing the external 
environment and internal thinking processes to consider the moral 
implications, and to building in some sort of structure to motivate action 
on moral/ethical issues.

To save clogging up the AGI list airwaves with my explorations of this 
subject I would be interested to know if anyone is interested in a low 
volume discussion outside the list.  We could then report back to this 
list with any meaty information/ideas that we might come across or 
develop.

Are there any other lists  where these issues are discussed in relation to 
AGI development?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Emergent ethics via training - eg. game playing

2003-01-29 Thread Philip Sutton
Hi Jonathan,

I think Sim City and many of the Sim games would be good but 
Civilization 3 and Alpha Centauri and Black & White are highly 
competitive and allow huge scope for being combative.

Compared to earlier versions, Civilisation 3 has added more options for 
non-war based domination but unless players are committed to a 
peaceful approach the program is largely a war game.

I don't know Black & White personally but I picked up a review at:
http://www.game-revolution.com/games/pc/strategy/black_and_white.htm

 The premise is simple: you're a god and it's your task to convert as
 many nonbelievers to your cause as possible, thereby gaining power. You
 can be a good god or a bad god, an evil master of destruction or a
 benevolent flower daddy - or any of the millions of shades in between.
 By managing your villages and fighting other gods, you vie for ultimate
 control.

I'm not sure that Black & White would be good training for an AGI. Do 
we really want it to limber up as a dominating god - maybe benevolent 
and maybe not??

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Emergent ethics via training - eg. game playing

2003-01-28 Thread Philip Sutton
A very large number of computer games are based on competition and 
frequently combat.  If we train an AGI on an average selection of 
current computer games is it possible that a lot of implicit ethical 
training will happen at the same time (ie. the AGI starts to see the world 
as revolving around competition and even worse, combat?)

I'm having to deal with this problem in raising two young kids and I 
wonder why an AGI would not have the same problem.

Other games are based on mastery/competence improvement etc.  Is 
anyone working on selecting training games that are chosen on the 
basis of both skills/knowledge development and ethical development as 
well?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Philip . Sutton
I've just read the first chapter of The Metamorphosis of Prime Intellect.

http://www.kuro5hin.org/prime-intellect

It makes you realise that Ben's notion that ethical structures should be 
based on a hierarchy going from general to specific is very valid - if 
Prime Intellect had been programmed to respect all *life* and not just 
humans then the 490 worlds with sentient life, not to mention the 14,623 
worlds with life of some type, might have been spared.

It also makes it clear that when we talk about building AGIs for 'human 
friendliness' we are using language that does not follow Ben's 
recommended ethical goal structure.

I'm wondering (seriously) whether the AGI movement needs to change 
its shorthand language ('human friendly') in this case - in other arenas 
people talk about the need for ethical behaviour.  Would that term 
suffice?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



[agi] Urgent Letter from Zimbabwe SCAM

2002-11-21 Thread Philip Sutton
Dear AGIers,

I presume that Youlian Troyanov was speaking tongue-in-cheek,
because the Dr Mboyo email is of course a scam.

It has the same form as the now famous Nigerian scams.  See:
http://www.snopes.com/inboxer/scams/nigeria.htm

Nobody should touch this stuff with a 10 foot barge pole - or longer.

Cheers, Philip

Date sent:  Thu, 21 Nov 2002 03:56:16 -0800 (PST)
From:   Youlian Troyanov [EMAIL PROTECTED]
Subject:*SPAM* Re: [agi] Urgent Letter from Zimbabwe
To: [EMAIL PROTECTED]
Send reply to:  [EMAIL PROTECTED]

[ Double-click this line for list subscription options ]

i think dr mboyo can help all those ai startups that
need venture campital right now.

y

--- Dr.Wilfred Mboyo [EMAIL PROTECTED] wrote:
 Sir,

  URGENT BUSINESS
 RELATIONSHIP

 Firstly, I have to introduce myself to you. My name
 is Dr Wilfred Mboyo from Zimbabwe. I
 was the chairman of contract review panel in my
 country before the problem of the land
 reform program.
 Before the escalation of the situation in Zimbabwe I
 recovered $16.8Million US dollars from
 over inflated contracts by some government
 officials. But I was a member of the opposition
 party the MDC(Movement for Democratic Change), and
 the ruling Party, (ZANU PF) has
 been against us. So I had to flee the country for a
 neighbouring African Country which I am
 currently residing.

 Before the escalation of the situation in Zimbabwe I
 had not reported The recovery of my
 findings to the panel. So this money was in my
 possession and I lodged it in a security
 company here in Africa and currently this money has
 been moved to their security branch in
 Europe. I have been trying to fly to Europe but it
 has been difficult  for me to get a visa from
 Africa. So I want you to help me make claims of this
 fund($16.8m) in Europe as my
 beneficiary and transfer the money to your account
 or any account of your choice before I
 can get a visa to fly down. So that we can share
 this money.

 I have agreed to give you 10%,which would be
 ($1.6Million dollars) of this Money for your
 assistance, and 85% would be mine and the other 5%
 would be set aside for any expenses
 that we may incure during the course of this
 transaction. And my 85% would be invested in
 your country in any profitable business propossed by
 you.

 We have never met, but I want to trust you and
 please do not let me down when this fund
 finally gets into your account. Please if you are
 interested, get to me through the email
 address below to enable me feed you with more
 details and all necessary documentations.

 Please treat this as confidential.  (
 [EMAIL PROTECTED]   or   [EMAIL PROTECTED] )

 Regards,

 Dr.Wilfred Mboyo

 NOTE: In the event of your inability to handle this
 transaction please
 inform me so that i can look for another reliable
 person who can assist me.



 ---
 To unsubscribe, change your address, or temporarily
 deactivate your subscription,
 please go to http://v2.listbox.com/member/


__
Do you Yahoo!?
Yahoo! Mail Plus – Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com

---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



[agi] Re: Asimov-like reaction ?

2002-11-04 Thread Philip Sutton
Hi David

 What of the possibility, Ben, of an Asimov-like reaction to the
 possibility of thinking machines that compete with humans?  It's the
 kind of dumb, Man-Was-Not-Meant-to-Go-There, scenario we see all the
 time on Sci-Fi Channel productions, but it is plausible, especially in
 a world where so many people still haven't accepted that technology has
 improved lives, ignoring the evidence of much of their own environment. 

If the next big thing (advanced AGI) were to treat us like we treat the 
species we've advanced over, then I'd say humans have good reason 
to be nervous.

But I think the solution is for humans and AGIs to grow up together and 
for AGIs to have to develop with well-developed ethical 
capabilities/standards.  

Is anybody working on building ethical capacity into AGI from the 
ground up?

As I mentioned to Ben yesterday, AGIs without ethics could end up 
being the next decade's e-viruses (on steroids).

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



[agi] RE: Ethical drift

2002-11-04 Thread Philip Sutton
Ben Goertzel wrote:
 What if iterative self-revision causes the system's goal G to drift
 over time... 

I think this is inevitable - it's just evolution keeping on going as it always 
will.  The key issue then is what processes can be set in train to operate 
throughout time to keep evolution re-inventing/re-committing AGIs (and 
humans too) to ethical behaviour.  Maybe communities of AGIs can 
create this dynamic.

Can isolated, non-socialised AGIs be ethical in relation to the whole?

A book that I found fascinating on the ethics issue in earlier evolutionary 
stages is:

Good Natured: The Origins of Right and Wrong in Humans and Other 
Animals, by Frans de Waal (Harvard University Press; reprint edition, 
October 1997; ISBN 0674356616)

It's well worth a read.

Cheers, Philip


Of course, one can seek to architect one's AGI system to guard against
goal drift under iterative self-revisions.

But algorithmic information theory comes up again, here.

At some point, a self-revising AGI system, which adds new hardware onto
itself periodically, will achieve a complexity (in the alg. info. theory
sense) greater than that of the human brain.  At this point, one can
formally show, it is *impossible for humans to predict what it will do*.  We
just don't have the compute power in our measly little brains.  So we
certainly can't be sure that goal drift won't occur in a system of
superhuman complexity...

This is an issue to be rethought again & again as AGI gets closer & 
closer...

-- Ben



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/