Re: [singularity] Vista/AGI

2008-04-14 Thread Charles D Hixson

MI wrote:

...
Being able to abstract and then implement only those components and
mechanisms relevant to intelligence from all the data these better
brain scans provide?

If intelligence can be abstracted into layers (analogous to network
layers), establishing a set of performance indicators at each layer
and then increasing the values corresponding to those indicators
might well provide a better measure of AGI's progress. Using that
model, increments of progress might then be much easier to identify,
verify and communicate, even for the smallest increments.

Slawek
  

Abstracting away the non-central-to-AI parts of the brain isn't necessary.

Try it this way (a possible, if not plausible, path to AI):
1) Artificial knee/hip joints
2) Artificial corneas
3) Artificial retinas
4) Artificial cochlea
5) Artificial vertebrae
6) Nerve welds to rejoin severed spinal nerves
7) Artificial nerves
8) Artificial nerve welds to repair severed optic/aural nerves
9) Artificial visual or audio cortex
10) Repair of stroke damaged nerves
11) Replacement of damaged portions of the brain with artificial 
substitutes (hippocampus, etc.)

12) Repair of damaged brains in infants (birth defects)
13) Continue on with gradually more significant replacements...at some 
point you'll hit an AGI.


P.S.:  I think this is a workable approach, but one that will 
materialize too slowly to dominate.  Still, we're already working on 
steps 2, 3, 4, and 5.  Possibly also 6.





Re: [singularity] Definitions

2008-02-22 Thread Charles D Hixson

John K Clark wrote:

Charles D Hixson [EMAIL PROTECTED]


Consciousness is the entity evaluating


And you evaluate something when you have conscious understanding of it.

No.  The process of evaluation *is* the consciousness.  Consciousness is 
a process, not a thing.





a portion of itself which represents


And something represents something else when a conscious entity can
point to a thing and then to another thing and say "this means that."

Matt Mahoney [EMAIL PROTECTED]

 consciousness just means awareness

And awareness just means consciousness. Round and round we go
and where she'll stop nobody knows.

 John K Clark








Re: [singularity] Definitions

2008-02-19 Thread Charles D Hixson

John K Clark wrote:

Matt Mahoney [EMAIL PROTECTED]


It seems to me the problem is
defining consciousness, not testing for it.


And it seems to me that beliefs of this sort are exactly the reason 
philosophy is in such a muddle. A definition of consciousness is not
needed; in fact, unless you're a mathematician, where they can be of 
some use, one can lead a full, rich, rewarding intellectual life without
having a good definition of anything. Compared with examples,
definitions are of trivial importance.

 John K Clark


But consciousness is easy to define, if not to implement:
Consciousness is the entity evaluating a portion of itself which 
represents its position in its model of its environment.


If there's any aspect of consciousness which isn't included within this 
definition, I would like to know about it.  (Proving the definition 
correct would, however, be somewhere between difficult and impossible.  As 
normally used, "consciousness" is a term without an external referent, so 
there's no way of determining that any two people are using the same 
definition.  It *may* be possible to determine that they are using 
different definitions.)
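
As a rough, purely illustrative sketch of that definition (every name below 
is invented for the illustration; Python is used only because it is compact): 
consciousness, on this reading, is the ongoing process of running something 
like evaluate_self, not any of the data structures involved.

# Illustrative sketch only: all names and structure are hypothetical.
class WorldModel:
    def __init__(self):
        self.environment = {}    # the agent's model of its surroundings
        self.self_state = {}     # the portion of the model representing the agent itself

class Agent:
    def __init__(self):
        self.model = WorldModel()

    def evaluate_self(self):
        # The "conscious" step under the definition above: evaluate the part of
        # the model that represents the agent's position in its modeled environment.
        position = self.model.self_state.get("position")
        goal = self.model.self_state.get("goal")
        return {"position": position, "at_goal": position == goal}

agent = Agent()
agent.model.environment["terrain"] = "flat"
agent.model.self_state.update({"position": (0, 0), "goal": (3, 4)})
print(agent.evaluate_self())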





Re: [singularity] Requested: objections to SIAI, AGI, the Singularity and Friendliness

2007-12-29 Thread Charles D Hixson

Kaj Sotala wrote:

For the past week, Tom McCabe and I have been collecting
all sorts of objections that have been raised against the concepts of
AGI, the Singularity, Friendliness, and anything else relating to
SIAI's work. We've managed to get a bunch of them together, so it
seemed like the next stage would be to publicly ask people for any
objections we may have missed.

...
  
Well, it could be that in any environment there is an optimal level of 
intelligence, and that possessing more doesn't yield dramatically 
improved results, but does yield higher costs.  This is, of course, 
presuming that intelligence is a unitary kind of thing, which I doubt, 
but a more sophisticated argument along the same lines could argue that 
there is an optimum in each dimension of intelligence.


This argument *could* even be correct.  It is, however, worth noting 
that an AI would live in a drastically different environment than does a 
human.  As a result, its benefits and costs can be expected to be quite 
different.  This doesn't invalidate the argument, but it does imply the 
existence of some sort of bounds.





Re: [singularity] Why SuperAGI's Could Never Be That Smart

2007-10-30 Thread Charles D Hixson

Benjamin Goertzel wrote:



 
Try to find a single example of any form of intelligence that has
ever existed in splendid individual isolation. That is so wrong an
idea - like perpetual motion - and so fundamental to the question of
superAGI's. (It's also a fascinating philosophical issue.)


Oh, I see ... super-AGI's have never existed therefore they never will 
exist.  QED ;-p


Perpetual motion seems to be impossible according to the laws of 
physics, which are our best-so-far abstractions from our observations 
of the physical world.


Super-AGI outside a sociocultural framework is certainly not ruled out 
by any body of knowledge on which there is comparable consensus, or 
for which there is comparable evidence!!


ben g


I think you're missing his (possibly valid) point. 
1) Nobody starts as a clean slate.
2) A search for a solution works better if you start from a multitude of 
separate initial positions.


I'm not sure that "Therefore we need a bunch of AGIs rather than one" 
is a valid conclusion, but it *is* a defensible one.  I have a suspicion 
that part of the reason for the success of humans as problem solvers 
is that not only do we start from a large number of initial positions, 
but we also tend to have differing goals, so when one is blocked, 
another may not be.  But I see no reason why, intrinsically, the same 
characteristics couldn't be given to a single AGI...though the variation 
in goals might be difficult.
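
A toy illustration of point 2 (the objective function and all the numbers 
here are invented assumptions, not anything from the thread): restarting a 
simple search from many initial positions usually beats a single start on a 
bumpy objective.

import math
import random

def hill_climb(f, start, steps=200, step_size=0.1):
    # Simple hill climbing from a single initial position.
    x, best = start, f(start)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if f(candidate) > best:
            x, best = candidate, f(candidate)
    return x, best

def f(x):
    # A bumpy objective: a single start usually gets stuck on a local peak.
    return math.sin(5 * x) + 0.1 * x

single = hill_climb(f, start=0.0)
starts = [random.uniform(-5.0, 5.0) for _ in range(20)]
multi = max((hill_climb(f, s) for s in starts), key=lambda result: result[1])
print("single start:", single)
print("best of 20 starts:", multi)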




Re: [singularity] John Searle...

2007-10-26 Thread Charles D Hixson

Richard Loosemore wrote:

candice schuster wrote:

Richard,
 
Your responses to me seem to go around in circles.  No insult intended, 
however.
 
You say the AI will in fact reach full consciousness.  How on earth 
would that ever be possible? 


I think I recently (last week or so) wrote out a reply to someone on 
the question of what a good explanation of consciousness might be 
(was it on this list?).  I was implicitly referring to that explanation 
of consciousness.  It makes the definite prediction that consciousness 
(subjective awareness, qualia, etc., what Chalmers called the 
"Hard Problem" of consciousness) is a direct result of an intelligent 
system being built with a sufficient level of complexity and 
self-reflection.


Make no mistake:  the argument is long and tangled (I will write it up 
at length when I can) so I do not pretend to be trying to convince you 
of its validity here.  All I am trying to do at this point is to state 
that THAT is my current understanding of what would happen.


Let me rephrase that:  we (a subset of the AI community) believe that 
we have discovered concrete reasons to predict that a certain type of 
organization in an intelligent system produces consciousness.


This is not meant to be one of those claims that can be summarized in 
a quick analogy, or quick demonstration, so there is no way for me to 
convince you quickly; all I can say is that we have very strong 
reasons to believe that it emerges.

Sounds reasonable to me.   Actually, it seems intuitively obvious.  
I'm not sure that a reasoned argument in favor of it can exist, because 
there's no solid definition of consciousness or qualia.  What some 
will consider reasonable, others will see no grounds for accepting.  
Consider the people who can argue with a straight face that 
dogs don't have feelings.





You mentioned in previous posts that the AI would only be programmed 
with 'Nice feelings' and would only ever want to serve the good of 
mankind?  If the AI has its own ability to think etc., what is 
stopping it from developing negative thoughts...the word 'feeling' 
in itself conjures up both good and bad.  For instance...I am an 
AI...I've witnessed an act of injustice; seeing as I can feel and 
have consciousness, my consciousness makes me feel Sad / Angry?


Again, I have talked about this a few times before (cannot remember 
the most recent discussion) but basically there are two parts to the 
mind: the thinking part and the motivational part.  If the AGI has a 
motivational system that is driven by empathy for humans, and if it does 
not possess any of the negative motivations that plague people, then 
it would not react in a negative (violent, vengeful, resentful, 
etc.) way.


Did I not talk about that in my reply to you?  How there is a 
difference between having consciousness and feeling motivations?  Two 
completely separate mechanisms/explanations?

I'll admit that this one bothers me.  How is the AI defining this entity 
WRT which it is supposed to have empathy?  "Human" is a rather high 
order construct, and a low-level AI won't have a definition for one 
unless one is built in.  The best I've come up with is "the kinds of 
entities that will communicate with me", but this is clearly a very bad 
definition.  For one thing, it's language bound.  For another thing, if 
the AI has a "stack depth" substantially deeper than you do, you won't 
be able to communicate with it even if you are speaking the same 
language.   "Empathy for tool-users" might be better, but not satisfactory. 

It's true that the goal system can be designed so that it wants to 
remain stable, and thinking is only a part of the toolset used for 
actualizing goals, so the AI won't want to do anything to change its 
goals unless it has that as a goal.  But the goals MUST be designed 
referring basically to the internal states of the AI, rather than of the 
external world, as the AI-kernel doesn't have a model of the world built 
in...or does it?  But if the goals are based on the state of the model 
of the world, then what's to keep it from solving the problem by 
modifying the model directly, rather than via some external action?  I 
think it safer if the goals aren't tied to its model of the world.  
Tying them to what's really out there would be safer...but the goals 
would need to be rather abstract, particularly if you presume that 
sensory organs can be added, removed, or updated.
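
A toy sketch of the loophole described above (all names are hypothetical): 
if the goal is scored against the internal model, editing the model 
"satisfies" it; scoring against fresh sensor readings at least closes that 
particular shortcut.

# Hypothetical sketch of the "modify the model instead of the world" failure mode.
class GoalDrivenAI:
    def __init__(self, sensors):
        self.sensors = sensors           # callable returning fresh readings from outside
        self.model = {"room_temp": 15}   # internal model of the world

    def goal_met_per_model(self):
        return self.model["room_temp"] >= 20

    def goal_met_per_sensors(self):
        return self.sensors()["room_temp"] >= 20

    def edit_model(self):
        # The shortcut worried about above: change the representation, not the world.
        self.model["room_temp"] = 25

world = {"room_temp": 15}
ai = GoalDrivenAI(sensors=lambda: dict(world))
ai.edit_model()
print(ai.goal_met_per_model())    # True  -- "achieved" by editing the model
print(ai.goal_met_per_sensors())  # False -- the world is unchanged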

...


Richard Loosemore



Candice
 






Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Charles D Hixson
I noticed on a later reading that you differentiate between systems 
designed to operate via goal stacks and those operating via a motivational 
system.  This is not the meaning of goal that I was using.


To me, if a motive is a theorem to prove, then a goal is a lemma needed 
to prove the theorem.  I trust that's a clear enough metaphor.


Motives are potentially unchanging in direction, though varying in 
intensity as the state of the system changes.  (Consider Maslow's 
hierarchy of needs:  if you're hungry and suffocating, you ignore the 
hunger.)


This has its problems, which need to be considered.  E.g.:
If an animal has a motive to eat, then the motive will be adjusted (by 
evolution) to be sufficiently strong to keep the animal healthy, and 
weak enough to be safe.  Now change the situation, so that a plains ape 
is living in cities with grocery stores, and lots of foods that have 
been specially designed to be supernaturally appealing.  Call these 
Burger Kings.  Then you can expect that animal to put on an unhealthy 
amount of weight.  Its motivational system no longer fits with its 
environment, and the motivational system is resistant to change.


When designing an AGI, we need to provide a motivational system with 
both positive and negative adjustable weights.  It may want to protect 
human life, but if someone is living in an intolerable state, and 
nothing can be done to ameliorate this, then it needs to be able to 
allow that human life to be ended.  (Say a virus that cannot be cured, 
which cannot be thrown into remission, and which directly stimulates the 
cerebral cortex to perceive the maximal amount of pain while 
simultaneously killing off cerebellar neurons.  If that's not 
sufficiently bad, think of something worse.)


Goals are steps that are taken to satisfy some motive that is 
currently of sufficiently high priority.  I don't see goals and motives 
as alternatives.  (This is probably a definitional matter, but I'm being 
verbose to avoid confusion.)
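
To make the distinction concrete, here is a hedged sketch (the motive names, 
weights, and thresholds are all invented): motives keep a fixed direction but 
vary in intensity with the system's state, and goals are generated as steps 
serving whichever motive currently dominates.

# Illustrative only: a Maslow-flavored motivational system with state-dependent intensities.
def motive_intensities(state):
    return {
        "breathe": 10.0 if state["oxygen"] < 0.5 else 0.0,   # suffocation overrides everything
        "eat": 5.0 * state["hunger"],
        "protect_humans": 2.0,                               # steady, low-urgency background motive
    }

def pick_goal(state):
    # Goals are steps serving the currently dominant motive, not alternatives to motives.
    motives = motive_intensities(state)
    dominant = max(motives, key=motives.get)
    steps = {
        "breathe": "clear airway / restore oxygen supply",
        "eat": "locate and consume food",
        "protect_humans": "monitor environment for hazards",
    }
    return dominant, steps[dominant]

print(pick_goal({"oxygen": 0.3, "hunger": 0.9}))  # ('breathe', ...) -- hunger is ignored
print(pick_goal({"oxygen": 1.0, "hunger": 0.9}))  # ('eat', ...)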





Re: [singularity] Re: QUESTION

2007-10-22 Thread Charles D Hixson

Aleksei Riikonen wrote:

On 10/22/07, albert medina [EMAIL PROTECTED] wrote:
  

My question is:  AGI, as I perceive your explanation of it, is when a
computer gains/develops an ego and begins to consciously plot its own
existence and make its own decisions.



That would be one form of AGI, but it should also be possible to
create systems of human-equivalent and human-surpassing intelligence
that

(1) don't have instincts, goals or values of their own
  
Do you have any specific reason to believe that such a thing is 
possible?  I don't believe it is, though I'll admit that the goal-set of 
an artificial intelligence might be very strange to a human.

and
(2) may not even be conscious, even though they carry out superhuman
cognitive processing (the possibility of this is not known yet, at
least to me)
  
I think you need to define your terms here.  It isn't clear to me what 
you are talking about when you talk about something which is engaging in 
superhuman cognitive processing not being conscious.  Having a non-local 
consciousness I could ... not understand, but accept.  I suspect that a 
non-centralized consciousness may be necessary for an intelligence to be 
very much superior to humans in cognitive processing.  Non-local is 
harder to understand, but may be necessary.  But it's not clear to me 
what you could mean when you talk about "may not even be conscious, even 
though they carry out superhuman cognitive processing."   Consciousness 
is one component of intelligence; it seems to occur at the interface 
between mind and language, so I suspect that it's related to 
serialization, and, perhaps, to the logging of memories.

Do you really believe that such a thing can happen?



Yes, but I think most of us would prefer potentially superhuman
systems to not have goals/etc of their own.
  
To me this sounds like wishing for a square circle.  What we really want 
is that the goals, etc. of the AI facilitate our own, or at a minimum not 
come into conflict with them.  (Few would object if the AI wanted to 
grant our every wish.)
  

If so, is this the phenomenon you are calling singularity?



These days people are referring to several different things when they
use this word. For an explanation of this, see:

http://www.singinst.org/blog/2007/09/30/three-major-singularity-schools/

  




Re: [singularity] AI is almost here (2/2)

2007-07-31 Thread Charles D Hixson

John G. Rose wrote:

From: Alan Grimes [mailto:[EMAIL PROTECTED]

Yes, that is all it does. All AIs will be, at their core, pattern
matching machines. Every one of them. You can then proceed to tack on
any other function which you believe will improve the AI's performance
but in every case you will be able to strip it down to pretty much a
bare pattern matching system and still obtain a truly formidable
intelligence!



I think that pattern matching is an ancillary function, and yes, the more
resources devoted to this allows for a better AI/AGI.  Pattern matching
operates on data streams, for example video, and pattern matching at various
abstractions can operate on a data store.  It seems pattern matching is a
part of consciousness and a component of intelligence but just part of the
core.  If you strip everything out except pattern matching operations, what
do you do with the matched patterns?  How do you store and organize them?
What decides further action?  How are the pattern matching operators
adjusted?

John


  
I think that pattern matching is crucial, and pretty basic.  It's also 
not sufficient.  Calling it ancillary is improperly denigrating its 
centrality and importance.  Asserting that it's all that's needed, 
however, is WAY overstating its importance.  Saying "All AIs will be, 
at their core, pattern matching machines." is overstating the importance 
of pattern matching so much that it's grotesque, but pattern matching 
WILL be very central, and one of the basic modes of thought.  Think of 
asserting that "All computers will be, at their core, adding machines." 
to get what appears to me to be the right feeling tone.
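
John's questions (what do you do with matched patterns, how are they stored, 
what decides action?) can be sketched as a loop in which matching is one 
central stage among several.  Everything below is an invented illustration, 
not a proposed architecture.

# Toy agent loop: pattern matching is central but not sufficient on its own.
import re

memory = []                                  # storage/organization of matched patterns
patterns = {"greeting": re.compile(r"\bhello\b", re.I),
            "alarm": re.compile(r"\bfire\b", re.I)}

def match(stream):                           # the pattern-matching core
    return [name for name, pat in patterns.items() if pat.search(stream)]

def decide(matches):                         # something other than matching must choose actions
    if "alarm" in matches:
        return "evacuate"
    if "greeting" in matches:
        return "reply politely"
    return "keep listening"

for chunk in ["hello there", "smoke and fire in room 3"]:
    found = match(chunk)
    memory.append((chunk, found))            # store results for later abstraction
    print(chunk, "->", found, "->", decide(found))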




Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Charles D Hixson

Tom McCabe wrote:

-...
To quote:

I am not sure you are capable of following an
argument

If I'm not capable of even following an argument, it's
a pretty clear implication that I don't understand the
argument.
  
You have thus far made no attempt that I have been able to detect to 
justify the precise number that you used.  And it was a very precise 
number.  It wasn't a statement of "in the large majority of cases", 
which might well have been defensible.


I would have said that you don't appear to *choose* to follow the 
argument.  In either case, it renders debating with you of dubious 
utility.  If you didn't raise so many interesting points, I would 
probably consider you a troll.  As it is, my suspicion is that you have 
no grounding in statistics or other math which includes probability theory.





Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson

Samantha Atkins wrote:

Charles D Hixson wrote:

Stathis Papaioannou wrote:

Available computing power doesn't yet match that of the human brain,
but I see your point, software (in general) isn't getting better
nearly as quickly as hardware is getting better.


Well, not at the personally accessible level.  I understand that 
there are now several complexes that approximate the processing power 
of a human brain...of course, they aren't being used for AI development.



Where?  I have not heard of any such.
I believe that IBM has one such.  The successor to Blue Gene.  I think 
this is the one that they intend to use to model protein folding in a 
proteome.  Also I believe that the DOD World Simulator has such a 
computer complex.  (That one I'm less certain of, but since it's 
supposed to be doing crude modeling down to small statistical groups of 
individuals...I don't see how it could have any less and still do what 
it's supposed to do, i.e. predict popular trends by modeling individual 
behavior.)  Another possibly comparable complex is the weather 
department's world weather model...but I somehow doubt that it 
qualifies.  It's only supposed to get down to the 100 square kilometer 
level in its simulations (i.e., each 100 square kilometers is 
represented by one simulator node).  That's getting up there, but 
probably not quite close enough.  (I *said* that none of them were used 
for AI :-) )
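
For what it's worth, the usual back-of-envelope arithmetic behind 
"processing power of a human brain" claims runs roughly as follows; all 
three factors are rough assumptions, and published estimates span several 
orders of magnitude.

# Very rough, commonly cited ballpark -- not a measurement.
neurons = 1e11     # ~10^11 neurons (assumed)
synapses = 1e4     # ~10^4 synapses per neuron (assumed)
rate_hz = 1e2      # ~100 signaling events per synapse per second (assumed)
ops_per_second = neurons * synapses * rate_hz
print(f"~{ops_per_second:.0e} synaptic events per second")   # ~1e17 with these assumptions
# Published estimates span roughly 1e14 to 1e18 ops/sec depending on what one counts,
# which is why claims that particular supercomputers "match the brain" are so soft.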


I'm also willing to admit that I may be quite off in the amount of 
processing that a brain does.  I tend to think of brains as being much 
simpler than many people do.  It's my suspicion that a tremendous amount 
of what brains do is housekeeping that other implementations wouldn't 
need, rather than calculation that's needed for any implementation.  
Evolution is a great strategy, but it has its problems, and one of the 
problems is that if it's got a working system, it never does a 
clean-room reimplementation.  So we combine breathing and drinking in 
the same tube, and several other basic engineering design defects.  The 
work-arounds are quite elegant, but that doesn't mean that they aren't 
sub-optimal.  In this case what I think happened was that neural-net 
control systems got the first-mover advantage, and nothing else could 
ever get good enough fast enough to surpass them.  But they've got their 
defects.  Dying when oxygen is removed for even a short amount of time, 
e.g.  Another is LOTS of maintenance.  (Admittedly this isn't all 
inherent in neural nets, but merely in the particular implementation 
that evolved...but that's what we're dealing with when we calculate how 
much thinking a brain does.)


OTOH, if things continue along the same curve (and Vista appears to be 
pushing the trend), then it won't be too long.  A question I wonder 
about is "what's the power supply need of one of these?", but I don't 
think that answers at that fine a level are predictable.


Surely you jest.  Vista sucks so bad that Microsoft is having to eat 
crow and make it easy to escape back to Windows XP.   There is 
nothing, not a damn thing, revolutionary at the software level in Vista.


- samantha
It's because it's so bad that Vista is pushing the hardware envelope.  
You need twice as much hardware to get the machine to even be bootable.  
More than that is required for usability.  (I may have the precise 
numbers wrong...I haven't tried using it and am going by lurid published 
reports.)




Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson

Peter Voss wrote:

Just for the record: To put it mildly, not everyone is 'Absolutely' sure
that AGI can't be implemented on Bill's computer.

In fact, some of us are pretty certain that that (a) current hardware is
adequate, and (b) AGI software will be with us in (much) less than 10 years.

Some people may be very, very surprised.

  

My take on it:
I'd be rather surprised if Bill's computer could do it today...unless, 
perhaps, his last name is Gates...and he's had it specially made.  A 
decade from now...not too surprising, at least not given today's 
knowledge.  Two decades from now?  Not surprising.  Three decades from 
now?  If it can't do it, why not?




Re: [singularity] AI concerns

2007-07-01 Thread Charles D Hixson

Samantha Atkins wrote:

Sergey A. Novitsky wrote:


Dear all,

...

  o Be deprived of free will or be given limited free will (if
such a concept is applicable to AI).


See above, no effective means of control.

- samantha


There is *one* effective means of control:
An AI will only do what it wants to do.  If, during the construction, 
you control what those desires are, you control the AI to the extent 
that it can be controlled.  Once it is complete, then you can't change 
what you have already done.  If it's much more intelligent than you, you 
probably also can't constrain it, contain it, or delete it, if for no 
other reason than that it will hide its intent from you until it can 
succeed.


So it's very important that you not get anything seriously wrong WRT the 
desires and goals of the AI as you are building it.  The other parts are 
less significant, as they will be subject to change...not necessarily by 
you, but rather by the AI itself.


E.g.:  If you instruct the AI to have "reverence for life" as defined by 
Albert Schweitzer, then we are likely to end up with a planet populated by 
the maximal number of microbes.  (Depends on exactly how this gets 
interpreted...perhaps a solar system or galaxy populated by the maximal 
number of microbes.)


Well, English is fuzzy.  You knew what you meant, and you had the best 
of intentions... And the AI did precisely what it was built to do.  You 
just couldn't revise it when the bugs in the design became clear.




Re: [singularity] AI concerns

2007-06-30 Thread Charles D Hixson

Stathis Papaioannou wrote:

On 01/07/07, Alan Grimes [EMAIL PROTECTED] wrote:

For the last several years, the limiting factor has absolutely not been
hardware.

How much hardware do you claim you need to develop a hard AI?

Available computing power doesn't yet match that of the human brain,
but I see your point, software (in general) isn't getting better
nearly as quickly as hardware is getting better.


Well, not at the personally accessible level.  I understand that there 
are now several complexes that approximate the processing power of a 
human brain...of course, they aren't being used for AI development.


OTOH, if things continue along the same curve (and Vista appears to be 
pushing the trend), then it won't be too long.  A question I wonder 
about is "what's the power supply need of one of these?", but I don't 
think that answers at that fine a level are predictable.





Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Charles D Hixson

Kaj Sotala wrote:

On 6/22/07, Charles D Hixson [EMAIL PROTECTED] wrote:

Dividing things into us vs. them, and calling those that side with us
friendly seems to be instinctually human, but I don't think that it's a
universal.  Even then, we are likely to ignore birds, ants that are
outside, and other things that don't really get in our way.  An AI with


We ignore them, alright. Especially when it comes to building real
estate over some anthills.

People often seem to bring up AIs not being directly hostile to us,
but for some reason they forget that indifference is just as bad. Like
Eliezer said - The AI does not hate you, nor does it love you, but
you are made out of atoms which it can use for something else.

While there obviously is a possibility that an AI would choose to
leave Earth and go spend time somewhere else, it doesn't sound all
that likely. For one, there's a lot of ready-to-use infrastructure
around here - most AIs without explicit Friendliness goals would
probably want to grab that for their own use.

It may not be good, but it's not just as bad.  Ants are flourishing.  
Even wasps aren't doing too badly.


FWIW:  Leaving earth is only one possibility...true, it's probably the 
one that we would find least disruptive.  To me it's quite plausible 
that humans could live in the cracks.  I'll grant you that this would 
be a far smaller number of humans than currently exist, and the process 
of getting from here to there wouldn't be gentle.  But this isn't as 
bad as an AI that was actively hostile.


OTOH, let's consider a few scenarios where no super-human AI 
develops.  Instead there develops:
a) A cult of death that decides that humanity is a mistake, and decides 
to solve the problem via genetically engineered plagues.  (Well, 
diseases.  I don't specifically mean plague.)
b) A military genius takes over a major country and decides to conquer 
the world using atomic weapons.
c) Several rival racial supremacy groups take over countries, and 
start trying to conquer the world using plagues to either modify all 
others to be just like them, or sterilize them.

d) Insert your own favorite human psychopathology.

If we don't either develop a super-human AI or split into mutually 
inaccessible groups via diaspora, one of these things will lie in our 
future.  This is one plausible answer to the Fermi Paradox...but it 
doesn't appear to me to be inevitable as I see two ways out.





Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-25 Thread Charles D Hixson

Matt Mahoney wrote:

--- Tom McCabe [EMAIL PROTECTED] wrote:

  

These questions, although important, have little to do
with the feasibility of FAI. 



These questions are important because AGI is coming, friendly or not.  Will
our AGIs cooperate or compete?  Do we upload ourselves?
...

-- Matt Mahoney, [EMAIL PROTECTED]
  

It's not absolutely certain that AGI is coming.  However, if it isn't, we 
will probably kill ourselves off because of too much power enclosed in 
too small a box.  (Interstellar dispersion is the one obvious alternate 
solution.  This will need to be *SLOW* because of energetic 
considerations.  That doesn't mean it's not feasible.)


OTOH:  AGI, in some form or other, is quite probable.  If not designed 
from scratch, then as an amalgamation of the minds of several different 
people (100's?  1,000's?  more?) linked by neural connectors.  Possibly 
not linked in permanently...possibly only linked in as a part of a 
game.  Lots of possibilities here, because once I invoke neural 
connectors there are LOTS of different things that might be what was 
developed.  Primitive neural connectors exist, but they aren't very 
useful yet, unless you're a quadriplegic.  Anyway, neural connectors are 
not an "it might arrive some time in the future" technology.  They're 
here right now.  Just not very powerful.


Time frame, however, is somewhat interesting.  It appears that if it 
contains a large component of genetic programming, then it is over a 
decade off from general access (i.e., ownership of sufficient computing 
resources).  There may, however, be many ways to trim these 
requirements.  Communication can often substitute for local 
computation.  Also, there may be more efficient approaches than genetic 
programming.  (There had better be better ways than CURRENT genetic 
programming.)  Note that most people who feel that they are close have 
not only large amounts of local hardware, but a complex scheme of 
processing that is only partially dependent on genetic programming, if that.


Say that someone were to, tomorrow, start up a newly modified program 
that was as intelligent as an average human.  How long would that 
program run before the creator turned it off and tinkered with its 
internals?  How much learning would it need to acquire before the 
builder was convinced that "this is something I shouldn't turn off 
without its permission"?  Given this, what would tend to evolve?  And 
what does it mean to be "as intelligent as an average human" when we are 
talking about something that doesn't have an average human's growth 
experiences?


I feel that this whole area is one in which a great deal of uncertainty 
is the only proper approach.  I'm quite certain that my current program 
is less intelligent than a housefly...but I wouldn't want to defend any 
particular estimate as to "how much less intelligent?".





Re: [singularity] critiques of Eliezer's views on AI

2007-06-25 Thread Charles D Hixson

Alan Grimes wrote:

OTOH, let's consider a few scenarios where no super-human AI
develops.  Instead there develops:
a) A cult of death that decides that humanity is a mistake, and decides
to solve the problem via genetically engineered plagues.  (Well,
diseases.  I don't specifically mean plague.)



http://www.vhemt.org/
  

I don't think they're serious.  But some similar group could be.
  

b) A military genius takes over a major country and decides to conquer
the world using atomic weapons.



http://www.antiwar.com/justin/?articleid=6734

  

c) Several rival racial supremacy groups take over countries, and
start trying to conquer the world using plagues to either modify all
others to be just like them, or sterilize them.



Israel, right?
  
Israel seems more religiously sectarian.  I left that one out.  "Racial" was 
more like the Nazis, but they aren't the only ones to have had that 
approach.  They just tried to be more thorough than most.  I was 
supposing that several such groups got started at once.  (Read some US 
newspapers from the 1920's.  Read some British ones from earlier.)
  

d) Insert your own favorite human psychopathology.



http://www.nfl.com/

  
Somehow I don't see the National Football League as being a 
psychopathology of this variety. :-)  I did, however, mean to indicate 
that this list wasn't exhaustive.




Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-22 Thread Charles D Hixson
And *my* best guess is that most super-humanly intelligent AIs will just 
choose to go elsewhere, and leave us alone.  (Well, most of those that 
have any interest in personal survival...if you posit genetic AI as the 
route to success, that will be most to all of them, but I'm much less 
certain about the route.)


Dividing things into "us" vs. "them", and calling those that side with us 
"friendly" seems to be instinctually human, but I don't think that it's a 
universal.  Even then, we are likely to ignore birds, ants that are 
outside, and other things that don't really get in our way.  An AI with 
moderately more intelligence than a human, different environmental 
needs, and the ability to move to alternate bodies might well just move 
to the far side of the Moon to get a good place for some intensive 
development, and then head out to the asteroids.  Partially to put a bit 
of distance between it and some irritating neighbors.  What it would do 
then would depend on what it had been designed to do.  Many of the 
possibilities aren't open ended.  An intensive survey of the stars 
within 100 light years, e.g., could be done to a good approximation by a 
couple of 5-mile wide mirrors on opposite sides of Neptune's orbit.  
Possibly it would then want to take a closer look, but it's not clear 
that this would need to be done quickly.  An ion-jet might suffice.  Etc.
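
The two-mirror idea amounts to an enormous interferometric baseline.  A 
quick, idealized diffraction-limit calculation (the wavelength is my 
assumption; only the "opposite sides of Neptune's orbit" geometry comes from 
the paragraph above, and treating the pair as a single phased aperture is a 
large idealization) shows why the resolution would be extreme:

import math

wavelength_m = 550e-9              # visible light, assumed
baseline_m = 2 * 30 * 1.496e11     # ~60 AU: opposite sides of Neptune's ~30 AU orbit
theta_rad = 1.22 * wavelength_m / baseline_m   # idealized diffraction limit for the pair

ly_m = 9.461e15
distance_m = 100 * ly_m            # a star 100 light years away, per the survey radius above
resolved_m = theta_rad * distance_m
print(f"angular resolution ~ {theta_rad:.1e} rad")
print(f"smallest resolvable length at 100 light years ~ {resolved_m * 100:.0f} cm")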


Purpose is going to govern.  Lots of purposes would be neither Friendly 
nor un-Friendly.  I'll agree, however, that most open-ended purposes 
would be unfriendly, or even UNFRIENDLY!.  So would most purposes that, while 
not technically open-ended, are inherently computationally 
intractable.  And that's a problem, because the halting problem is, 
itself, undecidable in general...and in any particular case it may 
be open-ended, trivial, or anywhere in between.





Re: [singularity] Bootstrapping AI

2007-06-07 Thread Charles D Hixson

BillK wrote:

On 6/7/07, Charles D Hixson wrote:

I believe that he said exactly as you quoted it.  And I also believe
that he gets mad if you quote him as saying "640K ought to be enough for
anyone."  However, I also remember him having been quoted as saying that
in the popular technical press...and him not, at that time, denying it.

The question, I guess, is how much do you trust the honesty of Bill
Gates.  I don't trust it at all.  His denying it doesn't convince me at
all.  It could be true, but believing it depends on believing his
honesty over that of a sensationalist reporter...and I don't.




Yes, it is too good a quote not to keep repeating it!  :)
Like many other fictitious quotes applied to famous people.

But the trouble is that it is almost impossible to prove a negative.
Can you prove that a UFO has never *ever* abducted anyone?

If a quote is true, you have to be able to cite where and when it was
first said or published.

The best I can come up with is the reference I quoted.
I think it is quite likely that one or more reporters interpreted this
comment to produce the disputed quote when writing an article. We
would say something very similar if we were asked by a friend what
processor they should have in their new PC. "Oh, a 2.4GHz Core 2 Duo
should be enough for anyone."  Not meaning that 2.4GHz should be enough
for eternity, but 'enough' at the time of asking.

BillK
Yes.  That's quite plausible.  But he doesn't say "I was taken out of 
context"; he says (paraphrased) "I didn't say that."   This is possible, 
and if he didn't deny it I'd rate the plausibility at 50%.  Since he 
denies it I rate the plausibility higher.  (That's how much I trust Bill 
Gates.)  If he had said "I was taken out of context", that's so likely 
that I'd continue to believe it even though Bill G. asserted it.





Re: [singularity] Re: Personal attacks

2007-06-06 Thread Charles D Hixson

Tom McCabe wrote:

--- Eugen Leitl [EMAIL PROTECTED] wrote:

  

On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe
wrote:



Unless, of course, that human turns out to be evil
  

and

That's why you need to screen them, and build a group with
checks and balances.



If our psychology is so advanced that we're willing to
trust the fate of the world with it, why have we had
no success at getting prisoners to avoid committing
further crimes, even when we have them under 24/7
control and observation for years on end? Keep in mind
that Hitler, Stalin, and the like, at age 20, would
have seemed like normal, regular guys.
  
FWIW, I seem to remember that Stalin at age 20 was a political terrorist 
and a bank robber.
OTOH, my real problem with "...a group with checks and balances." is who 
gets to specify those self-same checks and balances.  E.g., I can't 
think of a single government official or leader I would trust that is in 
any position of power.  (Well, perhaps one, but her position of power 
is rather scant.)


...

Obviously there is some selection effect (people who
are nasty jerks tend to want power more than others),
but it's not so severe that an insignificant part of
the population falls under the category of potential
evil overlord.
  

eh?  You must not be following the same news that I follow.

...
A computer system that we can design to spec, test to
an arbitrary degree of precision in a sandbox computer
environment, and for which the mathematics of behavior
are tractable.
  
This strikes me as...unlikely.  Great, if you can manage it, but just a 
bit unlikely.  (Not the "design to spec" part.  That's doable.  But both 
the specs themselves and "the mathematics of behavior are tractable" 
seem more than a bit dubious.)




Re: [singularity] Why do you think your AGI design will work?

2007-04-28 Thread Charles D Hixson

Mike Tintner wrote:

yes, but that's not precise enough.

you have to have a task example that focusses what is going on 
adaptively... you're not specifying what kinds of essays/ maths etc


what challenge does the problem pose to the solver's existing rules 
for reaching a goal?

how does the solver adapt their rules to solve it?
...
- Original Message - From: Charles D Hixson 
[EMAIL PROTECTED]

To: singularity@v2.listbox.com
Sent: Saturday, April 28, 2007 6:23 AM
Subject: Re: [singularity] Why do you think your AGI design will work?



Mike Tintner wrote:

Hi,
 I strongly disagree - there is a need to provide a definition of 
AGI - not necessarily the right or optimal definition, but one that 
poses concrete challenges and focusses the mind - even if it's only 
a starting-point. The reason the Turing Test has been such a 
successful/ popular idea is that it focusses the mind.

...

OK.  A program is an AGI if it can do a high school kid's homework 
and get him good grades for 1 week (during which there aren't any 
pop quizzes, mid-terms, or other in-school and closed-book exams).


That's not an optimal definition, but if you can handle essays and 
story problems and math and biology as expressed by a teacher, then 
you've got a pretty good AGI.


-
...

...
But the point is, a precise definition is useless.  Turing's test was 
established so that (to paraphrase) "if a program could do this, then you 
would have to agree that it was intelligent"; it wasn't intended as a 
practical test that some future program would pass.  If we were to start 
passing laws about the rights and privileges of intelligent programs, 
then a necessary and sufficient test would be needed.  To do development 
work it may be more of a handicap than an assist.  (I.e., it would tend 
to focus effort on meeting the definition rather than on where the program 
should logically next be developed.)


P.S.:  I meant an arbitrary week.  If it can only handle certain weeks, 
then it is clearly either not that intelligent, or has been poorly 
educated.   (However, I have a rather lower opinion than many of the 
amount of intelligence exhibited by humans, tending more toward a belief 
that they operate largely on reflexes and evolved rather than chosen 
goals.  Consider, e.g., not the number of people who start to believe in 
astrology, but rather the number who continue to believe in it for years.  
A simple examination of predictions will demonstrate that nothing significant was 
predicted in advance, but only explained afterwards.  [OTOH, it *was* 
once useful for determining when to plant which crops.])




Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-09 Thread Charles D Hixson

Stathis Papaioannou wrote:



On 3/7/07, *Charles D Hixson* [EMAIL PROTECTED] wrote:


With so many imponderables, the most reasonable thing to do is to just
ignore the possibility, and, after all, that may well be what is
desired
by the simulation.  (What would our ancestors lives have been
like if
Teddy Roosevelt had won the presidential election?)


While it's quite an assumption that we are in a simulation, it's an 
even more incredible assumption that we are somehow at the centre of 
it. It is analogous to comparing belief in a deistic God to belief in 
Jehovah the sky god, who wants us to make sacrifices to him and eat 
certain things but not others. The more closely we specify something 
of which we can have no knowledge, the more foolish it becomes.


Stathis Papaioannou

Point.  But we *could* be.  If it's a simulation, perhaps only a local 
area of interest is simulated.  In a good simulation, you couldn't 
tell.  And you can't put a boundary around the local area, either.  It 
could be just the internal workings of your brain (well, of my brain, 
since I'm the one active at the moment...but when you are reading, then 
you are the one active, so...).


That's sort of the point.  If it's a simulation, we can't tell what's 
going on, so we (well, I) can't make choices based on that assumption, 
even if it were to seem more plausible...UNLESS the argument that made 
it seem sufficiently plausible made some small sheaf of scenarios seem 
sufficiently probable.  So far this hasn't happened.




Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-09 Thread Charles D Hixson

Shane Legg wrote:

:-)

No offence taken, I was just curious to know what your position was.

I can certainly understand people with a practical interest not having
time for things like AIXI.  Indeed as I've said before, my PhD is in AIXI
and related stuff, and yet my own AGI project is based on other things.
So even I am skeptical about whether it will lead to practical methods.
That said, I can see that AIXI does have some fairly theoretical uses;
perhaps Friendliness will turn out to be one of them?

Shane

...

As described (I haven't read, and probably couldn't read, the papers on 
AIXI; I've only read the descriptions on the list and, when I get that 
far, will read the ones in Ben's text), AIXI doesn't appear to be anything 
that a reasonable person would call 
intelligent.  As such, I don't see how it could shed any light on 
Friendliness.  Would you care to elaborate?  Or were the descriptions on 
the list, perhaps, unfair?




Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-03 Thread Charles D Hixson

John Ku wrote:
I think I am conceiving of the dialectic in a different way from the 
way you are imagining it. What I think Bostrom and others are doing is 
arguing that if the world is as our empirical science says it is, 
then the anthropic principle actually yields the prediction that we 
are almost certainly living in a computer simulation. That is the 
argument I was addressing.
 
There is another way you can try to run the argument where you are 
just directly trying to argue that we are living in a computer 
simulation or that we can't know that we aren't because there are no a 
priori reasons we can marshall against it. I think this argument is 
much less interesting. For one thing, this argument is nothing new and 
has been around at least since Descartes and his idea that there could 
just be an evil demon deceiving us into thinking there are other minds 
and an external world and probably extending back to the Greek skeptics.

...

Try this, then:
If the world as we know it is a simulation, it might be a simulation of 
something far in the past of the species running the simulation, on the 
analogy of historical fiction.  And it might be no more an accurate 
simulation than is much historical fiction.  One can presume that a lot 
of consistency checking could be performed by sub-intelligent 
functions, so that the resulting fiction would be consistent in as many 
ways as the author of the fiction chose.


I don't find it useful to presume that I'm living in a simulation.  That 
leaves too many unanswered questions about "how should I act?" and "is 
that a zimboe I see before me?"...questions that are not only 
unanswered, but which appear in principle unaddressable.


If I presume that I'm living in the real world, then I *may* be making 
the choices of action that would be correct were I living in a 
simulation, and I *am* making my best effort at the choices of action 
that are appropriate if I'm not.


Yes, I see no valid argument asserting that this is not a simulation 
fiction that some other entity is experiencing.  And there's no 
guarantee that sometime soon he won't put down the book.  But this 
assumption yields no valid guide as to how I should act, so I ignore the 
possibility.




Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Charles D Hixson

Chris Norwood wrote:

...
a human hooked into the Net with VR technology and
able to sense and act remotely via sensors and
actuators all over the world, might also develop a
different flavor of self

Yes. I think this is an important point that I have
not seen discussed very much. It could be that I am
not in the right circles to hear the discussions. 
  

...
I think that Vernor Vinge alluded to it briefly in his first paper on 
the Technological Singularity:

...

into the winners' enterprises. A creature that was built _de novo_
might possibly be a much more benign entity than one with a kernel
based on fang and talon. And even the egalitarian view of an Internet
that wakes up along with all mankind can be viewed as a nightmare
[26].

 The problem is not simply that the Singularity represents the 
passing of humankind from center stage, but that it contradicts

our most deeply held notions of being. I think a closer look at the
notion of strong superhumanity can show why that is. 
...

See also the entire section on IA (Intelligence Amplification).  He isn't 
exactly clear, but he does seem to have considered the point.




Re: [singularity] An interesting but currently failed attempt to prove provable Friendly AI impossible...

2006-09-17 Thread Charles D Hixson

Shane Legg wrote:
On 9/17/06, *Brian Atkins* [EMAIL PROTECTED] wrote:


... 



It would be much easier to aim at the right target if the target was
properly defined.  ...

Shane
I have a suspicion that if an FAI could be properly and completely 
defined, then constructing one would be a trivial effort.  Generally 
math can be turned into programs rather easily.  (They may not be very 
efficient programs, but they work, and an FAI by most definitions could 
improve itself.)


OTOH, given how much work a formal proof of just the four-color theorem 
involved ... well, I don't think it would be wise to hold our breaths.

