Re: [agi] Cell

2005-02-14 Thread Philip Sutton
On 10 Feb 05 Steve Reed said: In 2014, according to trend, the semiconductor manufacturers may reach the 16 nanometer lithography node, with 32 CPU cores per chip, perhaps 150+ times more capable than today's x86 chip. I raised this issue with a colleague who said that he wondered whether
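A rough sanity check on that figure, assuming a 2005 baseline and ordinary doubling-time trends: nine years at an 18-month doubling time gives about 2^6 = 64 times, and at a 12-month doubling time about 2^9 = 512 times, so a 150+ times improvement by 2014 sits within that range.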

Re: [agi] What are qualia...

2005-01-26 Thread Philip Sutton
Hi Brad, This is not at all true. I could design a neural network, or perhaps even a symbolic computer program, that can evaluate the attractiveness of a peacock tail and tune it to behave in a similar fashion to that tiny portion of a real peacock's brain. Does this crude simulation contain

Re: [agi] What are qualia...

2005-01-26 Thread Philip Sutton
Brad/Eugen/Ben, We can conjecture that early living things, and current simple-minded living things, didn't/don't have perceptions that can be described as qualia. Then somewhere along the line humans began reporting perceptions that some of them describe as qualia. It seems that something has

Re: [agi] Setting up the systems....

2005-01-23 Thread Philip Sutton
Ben said: My experience is that in building any narrow-AI app based on Novamente components, around 80% of the work goes into the application and the domain-engineering and 20% into stuff of general value for Novamente. Andrew said: I would say that general high-quality financial market

RE: [agi] What are qualia...

2005-01-22 Thread Philip Sutton
Hi Ben, how the subjective experience of qualia is connected to the neural correlates of qualia. but the tricky question is how a physical system (the brain) can generate subjective, phenomenal experiences. Oh dear... having jumped in I feel like I'm in over my head already! :)

RE: [agi] What are qualia...

2005-01-22 Thread Philip Sutton
Hi Ben, I just read Chalmers' article and yours. You concluded your article with: In artificial intelligence terms, the present theory suggests that if an AI program is constructed so that its dynamics give rise to a constant stream of patterns that are novel and significant (measured

[agi] What are qualia...

2005-01-21 Thread Philip Sutton
Hi, I've just been thinking about qualia a bit more recently. (I have to make a disclaimer: I know next to nothing about them, but other people's ideas from this list have been fermenting in my mind for a while.) Anyway, here goes... How about the idea that qualia are properties

RE: [agi] A theorem of change and persistence????

2005-01-04 Thread Philip Sutton
Hi Ben, If you model a system in approximate detail then potentially you can avoid big surprises and have only small surprises. In chaotic systems, my guess is that compact models would capture many possibilities that would otherwise be surprises - especially in the near term. But I

RE: [agi] A theorem of change and persistence????

2004-12-30 Thread Philip Sutton
Hi Ben, On 23 Dec you said: I would say that if the universe remains configured roughly as it is now, then your statement (that long-term persistence requires goal-directed effort) is true. However, the universe could in the future find itself in a configuration in which your statement was

RE: [agi] Re: AI boxing

2004-09-19 Thread Philip Sutton
Hi Ben, One thing I agree with Eliezer Yudkowsky on is: Worrying about how to increase the odds of AGI, nanotech and biotech saving rather than annihilating the human race, is much more worthwhile than worrying about who is President of the US. It's the nature of evolution that getting to

[agi] Learning friendliness/morality in the sand box

2004-06-18 Thread Philip Sutton
Maybe a good way for AGIs to learn friendliness and morality, while still in the sand box, is to: - be able to form friendships - affiliations with 'others' that go beyond self-interest - virtual 'others' in the sand box - to have responsibility for caring for virtual 'pets' I guess

[agi] Tools and techniques for complex adaptive systems

2004-06-14 Thread Philip Sutton
Evolving Logic has developed and is continuing to work on tools for handling complex adaptive systems where no model less complex than the system itself can accurately predict in detail how the system will behave at future times: http://www.evolvinglogic.com/Learn/pdf/ToolsTechniques.pdf Tools

[agi] Networks for plugging in FAI / AGI ?

2004-06-12 Thread Philip Sutton
People involved in FAI / AGI development might like to have a look at PlaNetwork. This might be a useful network for plugging in FAIs / AGIs in development. Cheers, Philip http://www.planetwork.net/ From their homepage: Planetwork illuminates the critical role that the conscious use of

[agi] Intelligent software agents

2004-05-31 Thread Philip Sutton
The work by European Telecoms might be of interest: http://more.btexact.com/projects/ibsr/technologythemes/softwareagents.htm The text below was taken from this webpage: Software Agents To support the future enterprise, we deploy intelligent technology based on a decentralised philosophy in

Re: [agi] Open AGI?

2004-03-05 Thread Philip Sutton
Bill, I'd definitely see creating the first open source AGI system as a big opportunity. Do you see any overwhelming risks in making AGI technology available to everyone including malcontents and criminals? Would the rest of society be able to handle these risks if they also had access to

Re: [agi] Open AGI?

2004-03-05 Thread Philip Sutton
Shane, In your first posting on the open AGI subject you mentioned that you were concerned about the risk on the one hand of: * inordinate power being concentrated in the hands of the controllers of the first advanced AGI * power to do serious harm being made widely available if AGI

[agi] UNU report 2003 identified human-machine intelligence as key issue

2004-02-28 Thread Philip Sutton
The Millennium Project of the United Nations University has produced the 2003 State of the Future report. The second para of the executive summary says: Dramatic increases in collective human-machine intelligence are possible within 25 years. It is also possible that within the next 25 years

[agi] Consolidated statement of Ben's preferred AGI goal structure? (was AGIs and emotions)

2004-02-23 Thread Philip Sutton
Hi Ben, Yes, of course a brief ethical slogan like choice, growth and joy is underspecified and all the terms need to be better defined, either by example or by formal elucidation, etc. I carry out some of this elucidation in the Encouraging a Positive Transcension essay that triggered

Re: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben, Question: Will AGI's experience emotions like humans do? Answer: http://www.goertzel.org/dynapsyc/2004/Emotions.htm I'm wondering whether *social* organisms are likely to have a more active emotional life because inner psychological states need to be flagged physiologically to other

RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben, Why would an AGI be driven to achieve *general* harmony between inner and outer worlds - rather than just specific cases of congruence? Why would a desire for specific cases of congruence between the inner and outer worlds lead an AGI (that is not programmed or trained to do so) to

RE: [agi] AGI's and emotions

2004-02-22 Thread Philip Sutton
Hi Ben, Adding Choice to the mix provides a principle-level motivation not to impose one's own will upon the universe without considering the wills of others as well... Whose choice - everyone's or the AGI's? That has to be specified in the ethic - otherwise it could be the AGI only - then

[agi] Re: Positive Transcension 2

2004-02-19 Thread Philip Sutton
Ben, I've just finished reading your 14 February version of Encouraging a Positive Transcension. It's taken me two reads of the paper to become clear on a few issues. It seems to me that there are really three separate ethical issues at the heart of the paper that have been conflated

Re: [agi] Futurological speculations

2004-02-11 Thread Philip Sutton
Ben, Which list do you want Encouraging a Positive Transcension discussed on? AGI or SL4? It could get cumbersome having effectively the same discussion on both lists. Thanks for the paper. It was a stimulating read. I have a few quibbles. You discuss the value of aligning with universal

[agi] Within-cell computation in biological neural systems??

2004-02-06 Thread Philip Sutton
Does anyone have an up-to-date fix on how much computation occurs (if any) within cells (as opposed to at the traditional neural-net level) that are part of biological brain systems? Especially in the case of animals that have a premium placed on the number of neurones they can support

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Philip Sutton
Thanks Bill for the Eric Baum reference. Deep thinker that I am, I've just read the book review on Amazon and that has orientated me to some of the key ideas in the book (I hope!) so I'm happy to start speculating without having actually read the book. (See the review below.) It seems

RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben, So, I am skeptical that an AI can really think effectively in ANY domain unless it has done a lot of learning based on grounded knowledge in SOME domain first; because I think advanced cognitive schemata will evolve only through learning based on grounded knowledge... OK. I think

Re: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben, What you said to Debbie Duong sounds intuitively right to me. I think that most human intuition would be inferential rather than a simulation. But it seems that higher primates store a huge amount of data on the members of their clan - so my guess is that we do a lot of simulating of

RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben, Well, this appears to be the order we're going to do for the Novamente project -- in spite of my feeling that this isn't ideal -- simply due to the way the project is developing via commercial applications of the half-completed system. And, it seems likely that the initial

RE: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben, Maybe we do simulate a *bit* more with outgroups than I first thought - but we do it using caricature stereotypes based on *ungrounded* data - ie. we refuse to use grounded data (from our ingroup), perhaps, since that would make these outgroup people uncomfortably too much like us.

RE: [agi] AGIs, sub-personalities, clones and safety

2004-01-27 Thread Philip Sutton
Hi Ben, (I sent this message a couple of hours ago and it didn't come through so I've just resent the message in case it's just disappeared into cyberspace - never to reappear.) An AI mind can spin off a clone of itself with parameters re-tuned to be more speculative and intellectually

RE: [agi] probability theory and the philosophy of science

2004-01-26 Thread Philip Sutton
Hi Ben, I've just read: Science, Probability and Human Nature: A Sociological/ Computational/ Probabilist Philosophy of Science. It's accessible (thanks) and very thought provoking. As I read the paper, I imagined how these questions might relate to the creation and training and activities

RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Hi Ben, For example, consider the two scenarios where AGI's are developed by a) the US Army b) Sony's toy division In the one case, AGI's are introduced to the world as super-soldiers (or super virtual fighter pilots, super strategy analyzers, etc.); in the other case, as robot companions

Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Why not get a few AGIs jobs working on modelling the widespread introduction of AGIs - under a large number of scenario conditions - to find the transition paths that don't result in mayhem and chaos, for us humans and for them too. Cheers, Philip --- To unsubscribe, change your

RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Ben, I think that modeling of transition scenarios could be interesting, but I also think we need to be clear about what its role will be: a stimulant to thought about transition scenarios. I think it's extremely unlikely that such models are going to be *accurate* in any significant sense.

Re: [agi] Human Cyborg

2003-10-27 Thread Philip Sutton
Hi Kevin, I was able to reach the article at a different address: http://star-techcentral.com/tech/story.asp?file=/2003/10/14/itfeature/6414580sec=technology Cheers, Philip To unsubscribe, change your address, or temporarily deactivate your subscription, please go to

[agi] Early AGI training - multiple communications channels / multi-tasking

2003-09-02 Thread Philip Sutton
Hi Ben, It just occurred to me that very early in a Novamente's training you might want to give it more than one set of coordinated communication channels so that the Novamente can learn to communicate with more than one external intelligence at a time. My guess is that this would lead to a

Re: [agi] funky robot kits and AGI

2003-08-27 Thread Philip Sutton
Hi Ben, I'm not an electronics expert but my electric tooth brush runs on an induction 'connection' so there's no need for a bare wire connection to an electric circuit. Maybe a covered ground-level induction grid could be set up. Also you could run an electric cord to the robot. Also I had

Re: [agi] Embedding AI agents in simulated worlds

2003-08-19 Thread Philip Sutton
Hi Ben, I've just read your paper (Goertzel & Pennachin) at: http://www.goertzel.org/dynapsyc/2003/NovamenteSimulations.htm I'm no expert in any of this - but I'm 10 years and three years into raising two kids so that gives me some experience that might or might not be useful. I thought what

Re: [agi] request for feedback

2003-08-14 Thread Philip Sutton
Hi Mike, Conceptual necessity... Bosons, fermions, atoms, galaxies, stars, planets, DNA, cells, organisms, societies, information, computers, AGI's, the Singularity, it's all inevitable because of conceptual necessity. I think that what you are talking about is not conceptual

RE: [agi] Educating an AI in a simulated world

2003-07-19 Thread Philip Sutton
Hi Ben, If Novababies are going to play and learn in a simulated world which is most likely based on an agent-based/object-orientated programming foundation, would it be useful for the basic Novamente to have prebuilt capacity for agent-based modelling? Would this be necessary if a

[agi] Tool for building virtual worlds

2003-07-13 Thread Philip Sutton
Hi Ben, Have you come across Game Maker 5? It's a freeware program that can be used to create reasonably simple computer games, fast. See: www.gamemaker.nl It might be useful for very early stage virtual worlds where you don't need true 3D. Cheers, Philip --- To unsubscribe, change

Re: [agi] Educating an AI in a simulated world

2003-07-12 Thread Philip Sutton
Ben, I think there's a prior question to a Novamente learning how to perceive/act through an agent in a simulated world. I think the first issue is for Novamente to discover that, as an intrinsic part of its nature, it can interact with the world via more than one agent interface. Biological

Re: [agi] Educating an AI in a simulated world

2003-07-11 Thread Philip Sutton
Hi Ben, I think this is a great way to give one or more Novamentes the experience it/they need to develop mentally, in a controlled environment and in an environment where the need for massive computational power to handle sensory data is cut (I would imagine) hugely, thus leaving

[agi] Are multiple superintelligent AGI's safer than a single AGI?

2003-03-04 Thread Philip Sutton
Eliezer, As a counter to my own previous argument about the risk of the simultaneous failure of AGIs, your argument is likely to be closest to being right in certain circumstances after the time dimension is taken into account. Our previous argument has been around the black and white

[agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Philip Sutton
Ben, would you rather have one person with an IQ of 200, or 4 people with IQ's of 50? Ten computers of intelligence N, or one computer with intelligence 10*N ? Sure, the intelligence of the ten computers of intelligence N will be a little smarter than N, all together, because of

Re: [agi] Playing with fire

2003-03-03 Thread Philip Sutton
Hi Pei / Colin, Pei: This is the conclusion that I have been most afraid of from this Friendly AI discussion. Yes, AGI can be very dangerous, and I don't think any of the solutions proposed so far can eliminate the danger completely. However I don't think this is a valid reason to

RE: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Ben, Ben: That paragraph gave one possible dynamic in a society of AGI's, but there are many many other possible social dynamics Of course. What you say is quite true. But so what? Let's go back to that one possible dynamic. Can't you bring yourself to agree that if a one-and-only

[agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Pei, I also have a very low expectation on what the current Friendly AI discussion can contribute to the AGI research. OK - that's a good issue to focus on then. In an earlier post Ben described three ways that ethical systems could be facilitated: A) Explicit programming-in of ethical

RE: [agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Ben, I think Pei's point is related to the following point: We're now working on aspects of A) explicit programming-in of ideas and processes and B) explicit programming-in of methods specially made for the learning of ideas and processes through experience and teaching, and that until

RE: [agi] What would you 'hard-wire'?

2003-03-03 Thread Philip Sutton
Ben, I can see some possible value in giving a system these goals, and giving it a strong motivation to figure out what the hell humans mean by the words care, living, etc. These rules are then really rule templates with instructions for filling them in... Yes. However, I view

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Hi Eliezer, This does not follow. If an AI has a P chance of going feral, then a society of AIs may have P chance of all simultaneously going feral. I can see your point but I don't agree with it. If General Motors churns out 100,000 identical cars with all the same characteristics

Re: [agi] Why is multiple superintelligent AGI's safer than a single AGI?

2003-03-03 Thread Philip Sutton
Eliezer, That's because your view of this problem has automatically factored out all the common variables. All GM cars fail when dropped off a cliff. All GM cars fail when crashed at 120 mph. All GM cars fail on the moon, in space, underwater, in a five-dimensional universe. All GM cars

RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton
Ben, OK - so Novamente has a system for handling 'importance' already and there is an importance updating function that feeds back to other aspects of Attention Value. That's good in terms of Novamente having an internal architecture capable of supporting an ethical system. You're

RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton
Ben, I don't have a good argument on this point, just an intuition, based on the fact that generally speaking in narrow AI, inductively learned rules based on a very broad range of experience are much more robust than expert-encoded rules. The key is a broad range of experience,

RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-03-02 Thread Philip Sutton
Ben, Philip: I think an AGI needs other AGIs to relate to as a community so that a community of learning develops with multiple perspectives available. This I think is the only way that the accelerating bootstrapping of AGIs can be handled with any possibility of being safe. **

RE: [agi] Symbols in search of meaning - what is the meaning of B31-58-DFT?

2003-02-27 Thread Philip Sutton
Ben, One question is whether it's enough to create general pattern-recognition functionality, and let it deal with seeking meaning for symbols as a subcase of its general behavior. Or does one need to create special heuristics/algorithms/structures just for guiding this particular

RE: [agi] more interesting stuff

2003-02-25 Thread Philip Sutton
Ben/Kevin, The dynamics of evolution through progressive self-re-engineering will, in my view, be pretty different from the dynamics of evolution through natural selection. Lamarckian evolution (cf. Darwinian evolution) gets a new lease of life! Cheers, Philip --- To unsubscribe,

[agi] Can an AGI remain general if it can self-improve?

2003-02-22 Thread Philip Sutton
If an AGI can self-improve what is the likelihood that the AGI will remain general and will not instead evolve itself rapidly to be a super-intelligent specialist following the goal(s) that grab its attention early in life? I think that most humans tend to move in the specialist direction

RE: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Philip Sutton
Ben, OK... life lesson #567: When a mathematical explanation confuses non-math people, another mathematical explanation is not likely to help. While I can't help with the solution, I can say that this version of your problem at last made sense to me - previous versions were incomprehensible

Re: [agi] AIXI and Solomonoff induction

2003-02-21 Thread Philip Sutton
Ed, From my adventures in physics, I came to the conclusion that my understanding of the physical world had more to do with 1. My ability to create and use tools for modeling, i.e. from the physical tools of an advanced computer system to my internal abstraction tools like a new theorem of

[agi] Developing biological brains and computer brains

2003-02-18 Thread Philip Sutton
Brad/Ben/all, I think Ben's point about not trying to emulate biological brains with computers is quite important. The media they are working with (living cells, computer chips) are very different. Effective brains emerge out of an interplay between the fundamental substrate and the

Re: [agi] doubling time revisted.

2003-02-17 Thread Philip Sutton
Stephen Reed said: Suppose that 30-50 thousand state of the art computers are equivalent to the brain's processing power (using Moravec's assumptions). If global desktop computer system sales are in the neighborhood of 130 million units, then we have the computer processing equivalent of
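A rough back-of-envelope using the figures quoted: 130,000,000 units per year ÷ 50,000 computers per brain-equivalent ≈ 2,600, and ÷ 30,000 ≈ 4,300 - i.e. on the order of a few thousand human-brain equivalents of processing capacity shipped each year (assuming Moravec's estimate and treating every desktop sold as a state-of-the-art machine).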

Re: [agi] Breaking AIXI-tl - AGI friendliness

2003-02-16 Thread Philip Sutton
Hi Eliezer/Ben, My recollection was that Eliezer initiated the Breaking AIXI-tl discussion as a way of proving that friendliness of AGIs had to be consciously built in at the start and couldn't be assumed to be teachable at a later point. (Or have I totally lost the plot?) Do you feel the

RE: [agi] Novamente: how critical is self-improvement to getting human parity?

2003-02-16 Thread Philip Sutton
-----Original Message----- From: [EMAIL PROTECTED] [mailto:owner-[EMAIL PROTECTED]] On Behalf Of Philip Sutton Sent: Sunday, February 16, 2003 10:55 AM To: [EMAIL PROTECTED] Subject: [agi] Novamente: how critical is self-improvement to getting human parity? Hi Ben, As far

[agi] The core of the current debate??

2003-02-16 Thread Philip Sutton
I was just thinking, it might be useful to make sure that in pursuing the Breaking AIXI-tl - AGI friendliness debate we should be clear what the starting issue is. I think it is best defined by Eliezer's post on 12 Feb and Ben's reply of the same day. Eliezer's post:

RE: [agi] Breaking AIXI-tl - AGI friendliness - how to move on

2003-02-16 Thread Philip Sutton
Hi Ben, From a high order implications point of view I'm not sure that we need too much written up from the last discussion. To me it's almost enough to know that both you and Eliezer agree that the AIXItl system can be 'broken' by the challenge he set and that a human digital simulation

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Philip Sutton
Eliezer/Ben, When you've had time to draw breath can you explain, in non-obscure, non-mathematical language, what the implications of the AIXI-tl discussion are? Thanks. Cheers, Philip --- To unsubscribe, change your address, or temporarily deactivate your subscription, please go to

Re: [agi] who is this Bill Hubbard I keep reading about?

2003-02-14 Thread Philip Sutton
Bill, Gulp... who was the Yank who said ... it was I ??? Johnny Appleseed or something? Well, it's my turn to fess up. I'm pretty certain that it was my slip of the keyboard that started it all. Sorry. :) My only excuse is that in my area of domain knowledge King Hubbard is very famous.

RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Philip Sutton
Ben/Bill, My feeling is that goals and ethics are not identical concepts. And I would think that goals would only make an intentional ethical contribution if they related to the empathetic consideration of others. So whether ethics are built in from the start in the Novamente architecture

RE: [agi] AGI morality - goals and reinforcement values - plus early learning

2003-02-11 Thread Philip Sutton
Ben, Right from the start, even before there is an intelligent autonomous mind there, there will be goals that are of the basic structural character of ethical goals. I.e. goals that involve the structure of compassion, of adjusting the system's actions to account for the well-being of

Re: [agi] unFriendly AIXI

2003-02-11 Thread Philip Sutton
Eliezer, In this discussion you have just moved the focus to the superiority of one AGI approach versus another in terms of *interacting with humans*. But once one AGI exists it's most likely not long before there are more AGIs and there will need to be a moral/ethical system to guide AGI-AGI

[agi] Self, other, community

2003-02-10 Thread Philip Sutton
A number of people have expressed concern about making AGIs 'self' aware - fearing that this will lead to selfish behaviour. However, I don't think that AGIs can actually be ethical without being able to develop awareness of the needs of others, and I don't think you can be aware of others' needs

RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben, I agree that a functionally-specialized Ethics Unit could make sense in an advanced Novamente configuration. ...devoting a Unit to ethics goal-refinement on an architectural level would be a simple way of ensuring resource allocation to ethics processing through successive system

Re: [agi] A thought.

2003-02-06 Thread Philip Sutton
Brad, But I think that the further down you go towards the primitive level, the more and more specialized everything is. While they all use neurons, the anatomy and neurophysiology of low-level brain areas are so drastically different from one another as to be conceptually distinct. I

[agi] Brain damage, anti-social behaviour and moral hard wiring?

2003-01-30 Thread Philip Sutton
Has anyone on the list looked in any detail at the link between brain damage and anti-social behaviour and the possible implications for hardwiring of moral capacity? Specifically has anyone looked at the contribution that brain damage or brain development disorders may make towards the

Re: [agi] Emergent ethics via training - eg. game playing

2003-01-29 Thread Philip Sutton
Hi Jonathan, I think Sim City and many of the Sim games would be good, but Civilization 3 and Alpha Centauri and Black & White are highly competitive and allow huge scope for being combative. Compared to earlier versions, Civilization 3 has added more options for non-war based domination but

[agi] Emergent ethics via training - eg. game playing

2003-01-28 Thread Philip Sutton
A very large number of computer games are based on competition and frequently combat. If we train an AGI on an average selection of current computer games is it possible that a lot of implicit ethical training will happen at the same time (ie. the AGI starts to see the world as revolving

[agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Philip Sutton
I've just read the first chapter of The Metamorphosis of Prime Intellect. http://www.kuro5hin.org/prime-intellect It makes you realise that Ben's notion that ethical structures should be based on a hierarchy going from general to specific is very valid - if Prime Intellect had been programmed

[agi] Urgent Letter from Zimbabwe SCAM

2002-11-21 Thread Philip Sutton
Dear AGIers, I presume that Youlian Troyanov was speaking tongue-in-cheek, because the Dr Mboyo email is of course a scam. It has the same form as the now famous Nigerian scams. See: http://www.snopes.com/inboxer/scams/nigeria.htm Nobody should touch this stuff with a 10 foot barge pole - or

[agi] Re: Asimov-like reaction ?

2002-11-04 Thread Philip Sutton
Hi David, What of the possibility, Ben, of an Asimov-like reaction to the possibility of thinking machines that compete with humans? It's the kind of dumb, Man-Was-Not-Meant-to-Go-There scenario we see all the time on Sci-Fi Channel productions, but it is plausible, especially in a world

[agi] RE: Ethical drift

2002-11-04 Thread Philip Sutton
Ben Goertzel wrote: What if iterative self-revision causes the system's goal G to drift over time... I think this is inevitable - it's just evolution keeping on going as it always will. The key issue then is what processes can be set in train to operate throughout time to keep evolution