Re: Remarks on the form of a TOE

2011-01-02 Thread silky
On Sun, Jan 2, 2011 at 7:05 PM, Evgenii Rudnyi use...@rudnyi.ru wrote:
 on 02.01.2011 08:47 silky said the following:

 On Sun, Jan 2, 2011 at 4:43 PM, Brian Tenneson tenn...@gmail.com
 wrote:

 We're talking about a mathematical theory about E.

 What relevance does this comment have?


 I would say that a model and reality are different things. Do you mean that
 they could be the same?

I feel the same as you! That was my comment to Brian; I have no idea
what his response to me is meant to mean ...


 Evgenii

-- 
silky

http://dnoondt.wordpress.com/  (Noon Silk) | http://www.mirios.com.au:8081 

Every morning when I wake up, I experience an exquisite joy — the joy
of being this signature.




Re: Remarks on the form of a TOE

2011-01-02 Thread silky
On Sun, Jan 2, 2011 at 8:31 PM, Brian Tenneson tenn...@gmail.com wrote:
 In the case of a TOE, the model IS reality.

Okay, I won't reply further; this has become irrelevant noise.

-- 
silky

http://dnoondt.wordpress.com/  (Noon Silk) | http://www.mirios.com.au:8081 

Every morning when I wake up, I experience an exquisite joy — the joy
of being this signature.




Re: Remarks on the form of a TOE

2011-01-01 Thread silky
On Sun, Jan 2, 2011 at 12:03 AM, Brian Tenneson tenn...@gmail.com wrote:

[...]

  One way to describe something, a real basic way to describe something,
  is to form an aggregate of all things that meet that description.
  There may be no effective procedure for deciding whether or not A is
  in that aggregate, whatever.  The point is that that is one way to
  describe something.
  Thus reality basically describes itself.
  Reality is an aggregate and as such is a TOE, a complete description
  of reality.

 But that is the trivial TOE. You are saying take the territory for
 map.

 That is all I need to show that a TOE exists.  It's a trivial, brutal
 proof. Not elegant, I know.

Reality is the E of TOE. What's the point of calling it a TOE? You
can't use it for anything. The TO part implies that we understand
it. We don't understand all of reality, so we can't use all its
properties to make predictions, so it's not useful to us as a
theory, and I don't see it as correct or appropriate to refer to it
as such.

-- 
silky

http://dnoondt.wordpress.com/  (Noon Silk) | http://www.mirios.com.au:8081 

Every morning when I wake up, I experience an exquisite joy — the joy
of being this signature.




Re: Remarks on the form of a TOE

2011-01-01 Thread silky
On Sun, Jan 2, 2011 at 4:43 PM, Brian Tenneson tenn...@gmail.com wrote:
 We're talking about a mathematical theory about E.

What relevance does this comment have?

-- 
silky

http://dnoondt.wordpress.com/  (Noon Silk) | http://www.mirios.com.au:8081 

Every morning when I wake up, I experience an exquisite joy — the joy
of being this signature.




Re: Was:Singularity - Re: Intelligence

2010-04-11 Thread silky
On Mon, Apr 12, 2010 at 5:50 AM, Jason Resch jasonre...@gmail.com wrote:

[...]

 In an uploaded state you could spend all day eating from an unlimited buffet
 of any food you could think of (and more) and get neither full nor fat.  In
 the end it is just firings of your neurons (artificial or otherwise) and if
 uploaded, that would be all there is to you, there would be no metabolism,
 and no additional resources would be sacrificed to provide the experience of
 eating that food.

Potentially an interesting question, though: would it still mean
anything if there were no consequences?


 Jason

-- 
silky

  http://www.programmingbranch.com/




Re: on consciousness levels and ai

2010-01-19 Thread silky
On Tue, Jan 19, 2010 at 8:43 PM, Stathis Papaioannou stath...@gmail.com wrote:

 2010/1/19 silky michaelsli...@gmail.com:

  Exactly my point! I'm trying to discover why I wouldn't be so rational
  there. Would you? Do you think that knowing all there is to know about
  a cat is impractical to the point of being impossible *forever*, or do
  you believe that once we do know, we will simply end them freely,
  when they get in our way? I think at some point we *will* know all
  there is to know about them, and even then, we won't end them easily.
  Why not? Is it the emotional projection that Brent suggests? Possibly.

 Why should understanding something, even well enough to have actually
 made it, make a difference?


I don't know, that's what I'm trying to determine.



   Obviously intelligence and the ability to have feelings and desires
  has something to do with complexity. It would be easy enough to write
  a computer program that pleads with you to do something but you don't
  feel bad about disappointing it, because you know it lacks the full
  richness of human intelligence and consciousness.
 
  Indeed; so part of the question is: What level of complexity
  constitutes this? Is it simply any level that we don't understand? Or
  is there a level that we *can* understand that still makes us feel
  that way? I think it's more complicated than just any level we don't
  understand (because clearly, I understand that if I twist your arm,
  it will hurt you, and I know exactly why, but I don't do it).

 I don't think our understanding of it has anything to do with it. It
 is more that a certain level of complexity is needed for the entity in
 question to have a level of consciousness which means we are able to
 hurt it.


But the basic question is: can you create this entity from scratch, using a
computer? And if so, do you owe it any obligations?


--
 Stathis Papaioannou






-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20

antagonist PATRIARCHATE scatterbrained professorship VENALLY bankrupt
adversity bored = unint...



Re: on consciousness levels and ai

2010-01-19 Thread silky
On Wed, Jan 20, 2010 at 2:50 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 19 Jan 2010, at 03:28, silky wrote:

 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat. I'm wondering to what
 degree that adds or removes to my moral obligations.



 I think there is a confusion of level. It seems related to the problem of
 free-will. Some people believe that free will is impossible in the
 deterministic frame.



My opinion is that we don't have free will, and my definition of free will
in this context is being able to do something that our programming doesn't
allow us to do.

For example, people explain free will as the ability to decide whether or
not you pick up a pen. Sure, you can do either thing, and no matter which
you do, you are exercising a choice. But I don't consider this free.
It's just as pre-determined as a program looking at some internal state and
deciding which branch to take:

if ( needToWrite && notHoldingPen ){ grabPen(); }

It goes without saying that it's significantly more complicated, but the
underlying concept remains.

I define free will as the concept of breaking out of a branch completely,
stepping outside the program. And clearly, from within the program (of
human consciousness) it's impossible. Thus, I consider free will as a
completely impossible concept.

If we re-define free will to mean the ability to choose between two actions,
based on state (as I showed above), then clearly, it's a fact of life, and
every single object in the universe has this type of free will.



But no machine can predict its own behavior in advance. If it could it could
 contradict the prediction.

 If my friend who knows me well can predict my action, it will not change
 the fact that I can do those action by free will, at my own level where I
 live.

 If not, determinism would eliminate all form of responsability. You can say
 to the judge: all right I am a murderer, but I am not guilty because I am
 just obeying the physical laws.
 This is an empty defense. The judge can answer: no problem. I still
 condemn you to fifty years in jail, but don't worry, I am just obeying
 myself to the physical laws.

 That is also why real explanation of consciousness don't have to explain
 consciousness *away*. (Eventually it is the status of matter which appear
 less solid).

 An explanation has to correspond to its correct level of relevance.

 Why did Obama win the election? Because Obama is made of particles obeying
 to the Schrödinger equation? That is true, but wrong as an explanation.
  Because Obama promise to legalize pot? That is false, but could have work
 as a possible  explanation. It is closer to the relevance level.

 When we reduce a domain to another ontologically, this does not need to
 eliminate the explanation power of the first domain. This is made palpable
 in computer science. You will never explain how a chess program works by
 referring to a low level.

 Bruno

 http://iridia.ulb.ac.be/~marchal/








-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20

UNBOUNDED-reconcilable crow's-feet; COKE? Intermarriage distressing: puke
tailoring bicyclist...



Re: on consciousness levels and ai

2010-01-18 Thread silky
On Mon, Jan 18, 2010 at 6:57 PM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know
 all of it's properties? Or is it not the knowning of the properties
 that is critical, but the fact that we, specifically, have direct
 control over it? Over its internals? (i.e. we can easily remove the
 lines of code that give it the desire to 'live'). But wouldn't, then,
 the removal of that code be equivelant to killing it? If not, why?



 I think the differences are

 1) we generally cannot kill an animal without causing it some distress


 Is that because our off function in real life isn't immediate?

 Yes.

So does that mean you would not feel guilty turning off a real cat, if
it could be done immediately?


 Or,
 as per below, because it cannot get more pleasure?


 No, that's why I made it separate.



 2) as
 long as it is alive it has a capacity for pleasure (that's why we
 euthanize
 pets when we think they can no longer enjoy any part of life)


  This is fair. But what if we were able to model this addition of
  pleasure in the program? It's easy to increase happiness++, and thus
  the desire to die decreases.

 I don't think it's so easy as you suppose.  Pleasure comes through
 satisfying desires and it has as many dimensions as there are kinds of
 desires.  A animal that has very limited desires, e.g. eat and reproduce,
 would not seem to us capable of much pleasure and we would kill it without
 much feeling of guilt - as swatting a fly.

Okay, so for you the moral responsibility comes in when we are
depriving the entity of pleasure AND because we can't turn it off
immediately (i.e. it will become aware it's being switched off and
become upset).


  Is this very simple variable enough to
  make us care? Clearly not, but why not? Is it because the animal is
  more conscious then we think? Is the answer that it's simply
  impossible to model even a cat's consciousness completely?
 
  If we model an animal that only exists to eat/live/reproduce, have we
  created any moral responsibility? I don't think our moral
  responsibility would start even if we add a very complicated
  pleasure-based system into the model.

 I think it would - just as we have ethical feelings toward dogs and tigers.

So assuming someone can create the appropriate model, and you can
see that you will be depriving it of pleasure and/or causing pain, you'd
start to feel guilty about switching the entity off? Probably it would
be as simple as having the cat/dog whimper as it senses that the
program is about to terminate (obviously, visual stimuli would help
as a deterrent), but then it must be asked: would the programmer feel
guilt? Or just an average user of the system, who doesn't know the
underlying programming model?


  My personal opinion is that it
  would hard to *ever* feel guilty about ending something that you have
  created so artificially (i.e. with every action directly predictable
  by you, casually).

 Even if the AI were strictly causal, it's interaction with the environment
 would very quickly make it's actions unpredictable.  And I think you are
 quite wrong about how you would feel.  People report feeling guilty about
 not interacting with the Sony artificial pet.

I've clarified my position above; does the programmer ever feel guilt,
or only the users?


  But then, it may be asked; children are the same.
  Humour aside, you can pretty much have a general idea of exactly what
  they will do,

 You must not have raised any children.

Sadly, I have not.


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

CHURLISH rigidness; individual tangibly insomuch sadness cheerfulness.

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com wrote:
 2010/1/18 silky michaelsli...@gmail.com:
  It would be my (naive) assumption, that this is arguably trivial to
  do. We can design a program that has a desire to 'live', as desire to
  find mates, and otherwise entertain itself. In this way, with some
  other properties, we can easily model simply pets.

 Brent's reasons are valid,

Where it falls down for me is the idea that the programmer should ever feel
guilt. I don't see how I could feel guilty for ending a program when I
know exactly how it will operate (what paths it will take), even if I
can't be completely sure of the specific decisions (due to some
randomisation or whatever). I don't see how I could ever think "No, you
can't harm X". But what I find very interesting is that even if I
knew *exactly* how a cat operated, I could never kill one.


 but I don't think making an artificial
 animal is as simple as you say.

So is it a complexity issue? That you only start to care about the
entity when it's significantly complex? But exactly how complex? Or is
it about the unknowingness: that the project is so large you only
work on a small part, and thus you don't fully know its workings, and
that is where the guilt comes in?


 Henry Markham's group are presently
 trying to simulate a rat brain, and so far they have done 10,000
 neurons which they are hopeful is behaving in a physiological way.
 This is at huge computational expense, and they have a long way to go
 before simulating a whole rat brain, and no guarantee that it will
 start behaving like a rat. If it does, then they are only a few years
 away from simulating a human, soon after that will come a superhuman
 AI, and soon after that it's we who will have to argue that we have
 feelings and are worth preserving.

Indeed, this is something that concerns me as well. If we do create an
AI, and force it to do our bidding, are we acting immorally? Or
perhaps we just withhold the desire for the program to do its own
thing, but is that in itself wrong?


 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

crib? Unshakably MINICAM = heckling millisecond? Cave-in RUMP =
extraterrestrial matrimonial ...




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com
 wrote:


 2010/1/18 silky michaelsli...@gmail.com:


 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)

 It's not just randomisation, it's experience.  If you create and AI at
 fairly high-level (cat, dog, rat, human) it will necessarily have the
 ability to learn and after interacting with it's enviroment for a while it
 will become a unique individual.  That's why you would feel sad to kill it
 - all that experience and knowledge that you don't know how to replace.  Of
 course it might learn to be evil or at least annoying, which would make
 you feel less guilty.

Nevertheless, though, I know its exact environment, so I can recreate
the things that it learned (I can recreate it all; it's all
deterministic: I programmed it). The only thing I can't recreate is
the randomness, assuming I introduced that (but as we know, I can
recreate that anyway, because I'd just use the same seed state,
unless the source of randomness is true).
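
As a minimal sketch of that seed-state point (Python standard library only;
the "learning" here is just a stand-in accumulator):

import random

def run(seed, steps=5):
    rng = random.Random(seed)    # pseudo-random source with a fixed seed
    learned = []                 # stand-in for whatever the cat picked up
    for _ in range(steps):
        learned.append(rng.randint(0, 9))   # each "experience" replays identically
    return learned

print(run(42) == run(42))   # True: same seed, same inputs, same history

Only a genuinely true (hardware) source of randomness would break that
equality.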



 I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.


 I think unknowingness plays a big part, but it's because of our experience
 with people and animals, we project our own experience of consciousness on
 to them so that when we see them behave in certain ways we impute an inner
 life to them that includes pleasure and suffering.

Yes, I agree. So does that mean that, over time, if we continue using
these computer-based cats, we would become attached to them (i.e. your
Sony toys example)?


  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do it's own
  thing, but is that in itself wrong?
 

 I don't think so.  We don't worry about the internet's feelings, or the air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human like emotions precisely
 because of the ethical implications.  But that means we need to understand
 consciousness and emotions less we accidentally do something unethical.

Fair enough. But by the same token, what if we discovered a way to
remove emotions from real-born children? Would it be wrong to do that?
Is emotion an inherent property that we should never be allowed to
remove, once created?


 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

FRACTURE THISTLEDOWN CURIOUSLY! Sixfold columned HOBBLER shouter
OVERLAND axon ZANY interbree...




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
stath...@gmail.com wrote:
 2010/1/19 silky michaelsli...@gmail.com:
  On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou stath...@gmail.com 
  wrote:
   2010/1/18 silky michaelsli...@gmail.com:
It would be my (naive) assumption, that this is arguably trivial to
do. We can design a program that has a desire to 'live', as desire to
find mates, and otherwise entertain itself. In this way, with some
other properties, we can easily model simply pets.
  
   Brent's reasons are valid,
 
  Where it falls down for me is that the programmer should ever feel
  guilt. I don't see how I could feel guilty for ending a program when I
  know exactly how it will operate (what paths it will take), even if I
  can't be completely sure of the specific decisions (due to some
  randomisation or whatever) I don't see how I could ever think No, you
  can't harm X. But what I find very interesting, is that even if I
  knew *exactly* how a cat operated, I could never kill one.

 That's not being rational then, is it?

Exactly my point! I'm trying to discover why I wouldn't be so rational
there. Would you? Do you think that knowing all there is to know about
a cat is impractical to the point of being impossible *forever*, or do
you believe that once we do know, we will simply end them freely,
when they get in our way? I think at some point we *will* know all
there is to know about them, and even then, we won't end them easily.
Why not? Is it the emotional projection that Brent suggests? Possibly.


   but I don't think making an artificial
   animal is as simple as you say.

 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.

 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.

Indeed; so part of the question is: What level of complexity
constitutes this? Is it simply any level that we don't understand? Or
is there a level that we *can* understand that still makes us feel
that way? I think it's more complicated than just any level we don't
understand (because clearly, I understand that if I twist your arm,
it will hurt you, and I know exactly why, but I don't do it).


   Henry Markham's group are presently
   trying to simulate a rat brain, and so far they have done 10,000
   neurons which they are hopeful is behaving in a physiological way.
   This is at huge computational expense, and they have a long way to go
   before simulating a whole rat brain, and no guarantee that it will
   start behaving like a rat. If it does, then they are only a few years
   away from simulating a human, soon after that will come a superhuman
   AI, and soon after that it's we who will have to argue that we have
   feelings and are worth preserving.
 
  Indeed, this is something that concerns me as well. If we do create an
  AI, and force it to do our bidding, are we acting immorally? Or
  perhaps we just withhold the desire for the program to do it's own
  thing, but is that in itself wrong?

 If we created an AI that wanted to do our bidding or that didn't care
 what it did, then it would not be wrong. Some people anthropomorphise
 and imagine the AI as themselves or people they know: and since they
 would not like being enslaved they assume the AI wouldn't either. But
 this is false. Eliezer Yudkowsky has written a lot about AI, the
 ethical issues, and the necessity to make a friendly AI so that it
 didn't destroy us whether through intention or indifference.

 --
 Stathis Papaioannou

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

JUGULAR MATERIALS: thundershower! PRETERNATURAL anise! Stressed
BATTERED KICKBALL neophyte: k...




Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker meeke...@dslextreme.com
 wrote:


 silky wrote:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com
 wrote:



 2010/1/18 silky michaelsli...@gmail.com:



 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.



 Brent's reasons are valid,



 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever)


 It's not just randomisation, it's experience.  If you create and AI at
 fairly high-level (cat, dog, rat, human) it will necessarily have the
 ability to learn and after interacting with it's enviroment for a while
 it
 will become a unique individual.  That's why you would feel sad to kill
 it
 - all that experience and knowledge that you don't know how to replace.
  Of
 course it might learn to be evil or at least annoying, which would make
 you feel less guilty.



 Nevertheless, though, I know it's exact environment,


 Not if it interacts with the world.  You must be thinking of a virtual cat
 AI in a virtual world - but even there the program, if at all realistic, is
 likely to be to complex for you to really comprehend.  Of course *in
 principle* you could spend years going over a few terrabites of data and
  you could understand, Oh that's why the AI cat did that on day 2118 at
 10:22:35, it was because of the interaction of memories of day 1425 at
 07:54:28 and ...(long string of stuff).  But you'd be in almost the same
 position as the neuroscientist who understands what a clump of neurons does
 but can't get a wholistic view of what the organism will do.

 Surely you've had the experience of trying to debug a large program you
 wrote some years ago that now seems to fail on some input you never tried
 before.  Now think how much harder that would be if it were an AI that had
 been learning and modifying itself for all those years.


I don't disagree with you that it would be significantly complicated, I
suppose my argument is only that, unlike with a real cat, I - the programmer
- know all there is to know about this computer cat. I'm wondering to what
degree that adds to or removes from my moral obligations.



  so I can recreate
 the things that it learned (I can recreate it all; it's all
 deterministic: I programmed it). The only thing I can't recreate, is
 the randomness, assuming I introduced that (but as we know, I can
 recreate that anyway, because I'd just use the same seed state;
 unless the source of randomness is true).



 I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.





 but I don't think making an artificial
 animal is as simple as you say.



 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.



 I think unknowingness plays a big part, but it's because of our
 experience
 with people and animals, we project our own experience of consciousness
 on
 to them so that when we see them behave in certain ways we impute an
 inner
 life to them that includes pleasure and suffering.



 Yes, I agree. So does that mean that, over time, if we continue using
 these computer-based cats, we would become attached to them (i.e. your
 Sony toys example



 Hell, I even become attached to my motorcycles.


Does it follow, then, that we'll start to have laws relating to ending
motorcycles humanely? Probably not. So there must be more to it than just
attachment.






 Indeed, this is something that concerns me as well. If we do create an
 AI, and force it to do our bidding, are we acting immorally? Or
 perhaps we just withhold the desire for the program to do it's own
 thing, but is that in itself wrong?



 I don't think so.  We don't worry about the internet's feelings, or the
 air
 traffic control system.  John McCarthy has written essays on this subject
 and he cautions against creating AI with human like emotions precisely
 because of the ethical implications.  But that means we need to
 understand
 consciousness and emotions less we accidentally do something unethical.



 Fair enough. But by the same token, what if we discover a way to
 remove emotions from real-born children. Would it be wrong to do

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 1:49 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
 stath...@gmail.com wrote:


 2010/1/19 silky michaelsli...@gmail.com:


 On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou 
 stath...@gmail.com wrote:


 2010/1/18 silky michaelsli...@gmail.com:


 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.


 Brent's reasons are valid,


 Where it falls down for me is that the programmer should ever feel
 guilt. I don't see how I could feel guilty for ending a program when I
 know exactly how it will operate (what paths it will take), even if I
 can't be completely sure of the specific decisions (due to some
 randomisation or whatever) I don't see how I could ever think No, you
 can't harm X. But what I find very interesting, is that even if I
 knew *exactly* how a cat operated, I could never kill one.


 That's not being rational then, is it?



 Exactly my point! I'm trying to discover why I wouldn't be so rational
 there. Would you? Do you think that knowing all there is to know about
 a cat is impractical to the point of being impossible *forever*, or do
 you believe that once we do know, we will simply end them freely,
 when they get in our way? I think at some point we *will* know all
 there is to know about them, and even then, we won't end them easily.
 Why not? Is it the emotional projection that Brent suggests? Possibly.




 but I don't think making an artificial
 animal is as simple as you say.


 So is it a complexity issue? That you only start to care about the
 entity when it's significantly complex. But exactly how complex? Or is
 it about the unknowningness; that the project is so large you only
 work on a small part, and thus you don't fully know it's workings, and
 then that is where the guilt comes in.


 Obviously intelligence and the ability to have feelings and desires
 has something to do with complexity. It would be easy enough to write
 a computer program that pleads with you to do something but you don't
 feel bad about disappointing it, because you know it lacks the full
 richness of human intelligence and consciousness.



 Indeed; so part of the question is: What level of complexity
 constitutes this? Is it simply any level that we don't understand? Or
 is there a level that we *can* understand that still makes us feel
 that way? I think it's more complicated than just any level we don't
 understand (because clearly, I understand that if I twist your arm,
 it will hurt you, and I know exactly why, but I don't do it).



 I don't think you know exactly why, unless you solved the problem of
  connecting qualia (pain) to physics (afferent nerve transmission) - but I
 agree that you know it heuristically.

 For my $0.02 I think that not understanding is significant because it
 leaves a lacuna which we tend to fill by projecting ourselves.  When people
 didn't understand atmospheric physics they projected super-humans that
 produced the weather.  If you let some Afghan peasants interact with a
 fairly simple AI program, such as used in the Loebner competition, they
 might well conclude you had created an artificial person; even though it
 wouldn't fool anyone computer literate.

 But even for an AI that we could in principle understand, if it is complex
 enough and acts enough like an animal I think we would feel ethical concerns
 for it.  I think a more difficult case is an intelligence which is so alien
 to us we can't project our feelings on it's behavior.  Stanislaw Lem has
 written stories on this theme: Solaris, His Masters Voice, Return from
 the Stars, Fiasco.

There doesn't seem to be much recognition of this possibility on this list.
 There's generally an implicit assumption that we know what consciousness is,
 we have it, and that's the only possible kind of consciousness.  All OMs are
 human OMs.  I think that's one interesting thing about Bruno's theory; it is
 definite enough (if I understand it) that it could elucidate different kinds
 of consciousness.  For example, I think Searle's Chinese room is conscious -
 but in a different way than we are.


I'll have to look into these things, but I do agree with you in general; I
don't think ours is the only type of consciousness at all. Though I do think
the point about not understanding completely is interesting, because it
suggests that a god should actually not particularly care what happens to
us, because to them it's all predictable. (And obviously, the idea of moral
obligations to computer programs is arguably interesting.)




 Brent


Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 2:19 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:

 On Tue, Jan 19, 2010 at 1:02 PM, Brent Meeker 
 meeke...@dslextreme.commailto:
 meeke...@dslextreme.com wrote:

silky wrote:

On Tue, Jan 19, 2010 at 10:09 AM, Brent Meeker
meeke...@dslextreme.com mailto:meeke...@dslextreme.com wrote:

silky wrote:

On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou
stath...@gmail.com mailto:stath...@gmail.com

wrote:


2010/1/18 silky michaelsli...@gmail.com
mailto:michaelsli...@gmail.com:



It would be my (naive) assumption, that this
is arguably trivial to
do. We can design a program that has a desire
to 'live', as desire to
find mates, and otherwise entertain itself. In
this way, with some
other properties, we can easily model simply pets.


Brent's reasons are valid,


Where it falls down for me is that the programmer
should ever feel
guilt. I don't see how I could feel guilty for ending
a program when I
know exactly how it will operate (what paths it will
take), even if I
can't be completely sure of the specific decisions
(due to some
randomisation or whatever)

It's not just randomisation, it's experience.  If you
create and AI at
fairly high-level (cat, dog, rat, human) it will
necessarily have the
ability to learn and after interacting with it's
enviroment for a while it
will become a unique individual.  That's why you would
feel sad to kill it
- all that experience and knowledge that you don't know
how to replace.  Of
course it might learn to be evil or at least annoying,
which would make
you feel less guilty.


Nevertheless, though, I know it's exact environment,


Not if it interacts with the world.  You must be thinking of a
virtual cat AI in a virtual world - but even there the program, if
at all realistic, is likely to be to complex for you to really
comprehend.  Of course *in principle* you could spend years going
over a few terrabites of data and  you could understand, Oh
that's why the AI cat did that on day 2118 at 10:22:35, it was
because of the interaction of memories of day 1425 at 07:54:28 and
...(long string of stuff).  But you'd be in almost the same
position as the neuroscientist who understands what a clump of
neurons does but can't get a wholistic view of what the organism
will do.

Surely you've had the experience of trying to debug a large
program you wrote some years ago that now seems to fail on some
input you never tried before.  Now think how much harder that
would be if it were an AI that had been learning and modifying
itself for all those years.


 I don't disagree with you that it would be significantly complicated, I
 suppose my argument is only that, unlike with a real cat, I - the programmer
 - know all there is to know about this computer cat.


 But you *don't* know all there is to know about it.  You don't know what it
 has learned - and there's no practical way to find out.


Here we disagree. I don't see (not that I have experience in AI-programming
specifically, mind you) how I can write a program and not have the results
be deterministic. I wrote it; I know, in general, the type of things it will
learn. I know, for example, that it won't learn how to drive a car. There
are no cars in the environment, and it doesn't have the capabilities to
invent a car, let alone the capabilities to drive it.
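
A rough sketch of that intuition (Python, with made-up names): whatever the
learning rule ends up doing, the agent can only recombine the actions and
feedback I wrote into its environment.

import random

ACTIONS = ["eat", "sleep", "play", "explore"]   # the entire world I gave it

def learn(episodes=1000, seed=0):
    rng = random.Random(seed)
    weights = {a: 1.0 for a in ACTIONS}         # its "learned" preferences
    for _ in range(episodes):
        action = rng.choices(ACTIONS, [weights[a] for a in ACTIONS])[0]
        reward = rng.random()                   # stand-in for environmental feedback
        weights[action] += reward               # reinforce whatever felt good
    return weights

print(learn())   # always some mix of eat/sleep/play/explore; never "drive a car"

The weights it ends up with vary from run to run, but the space of things it
can come to "know" is fixed in advance.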

If you're suggesting that it will materialise these capabilities out of the
general model that I've implemented for it, then clearly I can see this path
as a possible one.

Is there a fundamental misunderstanding on my part: that in most
sufficiently-advanced AI systems, not even the programmer has an *idea* of
what the entity may learn?


[...]



Suppose we could add and emotion that put a positive value on
running backwards.  Would that add to their overall pleasure in
life - being able to enjoy something in addition to all the other
things they would have naturally enjoyed?  I'd say yes.  In which
case it would then be wrong to later remove that emotion and deny
them the potential pleasure - assuming of course there are no
contrary ethical considerations.


 So the only problem you see is if we ever add emotion, and then remove it.
 The problem doesn't lie in not adding it at all? Practically, the result is
 the same.


 No, because if we add

Re: on consciousness levels and ai

2010-01-18 Thread silky
On Tue, Jan 19, 2010 at 3:02 PM, Brent Meeker meeke...@dslextreme.com wrote:

 silky wrote:



 [...]




 Here we disagree. I don't see (not that I have experience in
 AI-programming specifically, mind you) how I can write a program and not
 have the results be deterministic. I wrote it; I know, in general, the type
 of things it will learn. I know, for example, that it won't learn how to
 drive a car. There are no cars in the environment, and it doesn't have the
 capabilities to invent a car, let alone the capabilities to drive it.


 You seem to be assuming that your AI will only interact with a virtual
 world - which you will also create.  I was assuming your AI would be in
 something like a robot cat or dog, which interacted with the world.  I think
 there would be different ethical feelings about these two cases.


Well, it will be interacting with the real world; just a subset that I
specially allow it to interact with. I mean, the computer is still in the
real world, whether or not the physical box actually has legs :) In my mind,
though, I was imagining a cat that specifically existed inside my screen,
reacting to other such cats. Let's say the cat is allowed out, in the form of
a robot, and can interact with real cats. Even still, its programming will
allow it to act only in a deterministic way that I have defined (even if I
haven't defined all its behaviours; it may learn some from the other
cats).

So let's say that Robocat learns how to play with a ball from Realcat. Would
my guilt in ending Robocat only lie in the fact that it learned something,
and, given that I can't save it, that learning instance was unique? I'm not
sure. As a programmer, I'd simply be happy my program worked, and I'd
probably want to reproduce it. But if I showed it to a friend, they may wonder
why I turned it off; it worked, and now it needs to re-learn the next time
it's switched back on (interestingly, I would suggest that everyone would
consider it to be still the same Robocat, even though it needs to
effectively start from scratch).



  If you're suggesting that it will materialise these capabilities out of
 the general model that I've implemented for it, then clearly I can see this
 path as a possible one.


 Well it's certainly possible to write programs so complicated that the
 programmer doesn't forsee what it can do (I do it all the time  :-)  ).


 Is there a fundamental misunderstanding on my part; that in most
 sufficiently-advanced AI systems, not even the programmer has an *idea* of
 what the entity may learn?


 That's certainly the case if it learns from interacting with the world
 because the programmer can practically analyze all those interactions and
 their effect - except maybe by running another copy of the program on
 recorded input.



[...]


   Suppose we could add and emotion that put a positive value on
   running backwards.  Would that add to their overall pleasure in
   life - being able to enjoy something in addition to all the
other
   things they would have naturally enjoyed?  I'd say yes.  In
which
   case it would then be wrong to later remove that emotion
and deny
   them the potential pleasure - assuming of course there are no
   contrary ethical considerations.


So the only problem you see is if we ever add emotion, and
then remove it. The problem doesn't lie in not adding it at
all? Practically, the result is the same.


No, because if we add it and then remove it after the emotion is
experienced there will be a memory of it.  Unfortunately nature
already plays this trick on us.  I can remember that I felt a
strong emotion the first time a kissed girl - but I can't
experience it now.


 I don't mean we do it to the same entity, I mean to subsequent entites.
 (cats or real life babies). If, before the baby experiences anything, I
 remove an emotion it never used, what difference does it make to the baby?
 The main problem is that it's not the same as other babies, but that's
 trivially resolved by performing the same removal on all babies.
 Same applies to cat-instances; if during one compilation I give it
 emotion, and then I later decide to delete the lines of code that allow
 this, and run the program again, have I infringed on it's rights? Does the
 program even have any rights when it's not running?


 I don't think of rights as some abstract thing out there.  They are
 inventions of society saying we, as a society, will protect you when you
 want to do these things that you have a *right* to do.  We won't let others
 use force or coercion to prevent you.  So then the question becomes what
 rights is in societies interest to enforce for a computer program (probably
 none) or for an AI robot (maybe some).

 From this viewpoint the application to babies and cats is straightforward.
  What are the consequences for society and what kind of society do we want

on consciousness levels and ai

2010-01-17 Thread silky
I'm not sure if this question is appropriate here; nevertheless, the
most direct way to find out is to ask it :)

Clearly, creating AI on a computer is a goal, and generally we'll try
to implement it to the same degree of computationalness as a human.
But what would happen if we simply tried to re-implement the
consciousness of a cat, or some other still-living entity with a
lesser consciousness?

It would be my (naive) assumption that this is arguably trivial to
do. We can design a program that has a desire to 'live', a desire to
find mates, and to otherwise entertain itself. In this way, with some
other properties, we can easily model simple pets.
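
Something like this minimal sketch is all I have in mind (Python, with
hypothetical names; a caricature rather than a serious model):

import random

class Pet:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.desires = {"live": 1.0, "find_mate": 0.5, "entertain": 0.5}

    def step(self):
        # Act on whichever desire is currently strongest; acting on it
        # satisfies it a little, while the others drift back up.
        want = max(self.desires, key=self.desires.get)
        self.desires[want] *= 0.5
        for other in self.desires:
            if other != want:
                self.desires[other] += self.rng.random() * 0.2
        return want

pet = Pet(seed=1)
print([pet.step() for _ in range(5)])

Obviously that is a caricature, but it is the kind of thing I mean by
"modelling" a pet.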

I then wonder, what moral obligations do we owe these programs? Is it
correct to turn them off? If so, why can't we do the same to a real
life cat? Is it because we think we've not modelled something
correctly, or is it because we feel it's acceptable as we've created
this program, and hence know all its laws? On that basis, does it mean
it's okay to power off a real life cat, if we are confident we know
all of its properties? Or is it not the knowing of the properties
that is critical, but the fact that we, specifically, have direct
control over it? Over its internals? (i.e. we can easily remove the
lines of code that give it the desire to 'live'). But wouldn't, then,
the removal of that code be equivalent to killing it? If not, why?

Apologies if this is too vague or useless; it's just an idea that has
been interesting me.

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20




Re: on consciousness levels and ai

2010-01-17 Thread silky
On Mon, Jan 18, 2010 at 6:08 PM, Brent Meeker meeke...@dslextreme.com wrote:
 silky wrote:

 I'm not sure if this question is appropriate here, nevertheless, the
 most direct way to find out is to ask it :)

 Clearly, creating AI on a computer is a goal, and generally we'll try
 and implement to the same degree of computationalness as a human.
 But what would happen if we simply tried to re-implement the
 consciousness of a cat, or some lesser consciousness, but still
 alive, entity.

 It would be my (naive) assumption, that this is arguably trivial to
 do. We can design a program that has a desire to 'live', as desire to
 find mates, and otherwise entertain itself. In this way, with some
 other properties, we can easily model simply pets.

 I then wonder, what moral obligations do we owe these programs? Is it
 correct to turn them off? If so, why can't we do the same to a real
 life cat? Is it because we think we've not modelled something
 correctly, or is it because we feel it's acceptable as we've created
 this program, and hence know all its laws? On that basis, does it mean
 it's okay to power off a real life cat, if we are confident we know
 all of it's properties? Or is it not the knowning of the properties
 that is critical, but the fact that we, specifically, have direct
 control over it? Over its internals? (i.e. we can easily remove the
 lines of code that give it the desire to 'live'). But wouldn't, then,
 the removal of that code be equivelant to killing it? If not, why?


 I think the differences are

 1) we generally cannot kill an animal without causing it some distress

Is that because our off function in real life isn't immediate? Or,
as per below, because it cannot get more pleasure?


 2) as
 long as it is alive it has a capacity for pleasure (that's why we euthanize
 pets when we think they can no longer enjoy any part of life)

This is fair. But what if we were able to model this addition of
pleasure in the program? It's easy to increase happiness++, and thus
the desire to die decreases. Is this very simple variable enough to
make us care? Clearly not, but why not? Is it because the animal is
more conscious than we think? Is the answer that it's simply
impossible to model even a cat's consciousness completely?
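
By "increase happiness++" I mean literally something this crude (a
deliberately silly sketch, with made-up names):

class Pet:
    def __init__(self):
        self.happiness = 0
        self.desire_to_die = 10

    def feed(self):
        self.happiness += 1                                  # the happiness++ above
        self.desire_to_die = max(0, self.desire_to_die - 1)  # so the desire to die decreases

p = Pet()
p.feed()
print(p.happiness, p.desire_to_die)   # 1 9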

If we model an animal that only exists to eat/live/reproduce, have we
created any moral responsibility? I don't think our moral
responsibility would start even if we add a very complicated
pleasure-based system into the model. My personal opinion is that it
would be hard to *ever* feel guilty about ending something that you have
created so artificially (i.e. with every action directly predictable
by you, causally). But then, it may be asked: children are the same.
Humour aside, you can pretty much have a general idea of exactly what
they will do, and we created them, so why do we feel so responsible?
(Clearly, an easy answer is that it's chemical.)


 3) if we could
 create an artificial pet (and Sony did) we can turn it off and turn it back
 on.

Let's assume, for the sake of argument, that each instance of the
program is one unique pet, and it will never be re-created or saved.



 4) if a pet, artificial or otherwise, has capacity for pleasure and
 suffering we do have an ethical responsibility toward it.

 Brent

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

DOURNESS. KICKOFF! Exceed-submissiveness BRIBERY DEFOG schoolmistress.




Re: Everything List Survey

2010-01-13 Thread silky
On Thu, Jan 14, 2010 at 4:14 PM, Jason Resch jasonre...@gmail.com wrote:

 On Wed, Jan 13, 2010 at 10:17 PM, Stathis Papaioannou 
 stath...@gmail.com wrote:

 2010/1/14 Stathis Papaioannou stath...@gmail.com:

  Interesting so far:
  - people are about evenly divided on the question of whether computers
  can be conscious
  - no-one really knows what to make of OM's
  - more people believe cats are conscious than dogs

 Oh, and one person does not believe that they are conscious! Come on,
 who's the zombie?


The main problem I had with the question is that consciousness isn't
defined. So I took it to mean conscious like me, and hence didn't choose
anyone but 'me', 'others' and 'aliens' in the consciousness section.

That said, I don't really believe there is an 'absolute' consciousness; I
think everything is as conscious as it is allowed to be by its initial
configuration. It just so happens that our configuration is higher than a
dog's, thus we appear more conscious. For this reason I also, obviously,
answered that computers can become conscious (and that is to say, can be
brought to our level; i.e. we invent ourselves).

-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20

redeposit = hyperlink colorlessness: fetishism. PATHFINDER? Disarmingly:
BAPTISTERY nebulousl...



Re: the theory of everything

2010-01-10 Thread silky
On Mon, Jan 11, 2010 at 4:33 AM, Mark Buda her...@acm.org wrote:
 Greetings.

 I believe humanity now has all the pieces of the theory of everything.
 The only remaining problem is putting them together to make a
 beautiful picture.

 I believe one of the pieces is: Everything exists. (That's what this
 list is about, right?)

 Here's another: Consciousness is computation.

 The algorithm doesn't matter. We are all running the same universal
 algorithm. The difference is in our inputs, our starting state, the
 bits on our Turing machine's infinite tape.

 Some computations may terminate. Some computations may repeat. The
 kind of computations that human consciousness is is the kind that does
 not terminate or repeat.

 While composing this email, I apparently achieved enlightenment. (I'm
 serious. It's complicated.)

So I suppose we all achieved enlightenment then. Excellent.

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

beggary, pertinacity CAD kaddish.




Re: Definition of universe

2009-12-29 Thread silky
On Wed, Dec 30, 2009 at 1:07 AM, Mindey min...@gmail.com wrote:
 Hello,

 I was just wondering, we are talking so much about universes, but how
 do we define universe? Sorry if that question was answered
 somewhere, but after a quick search I didn't find it.

To me it would be that which is contained when you specify a number of
dimensions. 2d? The universe can be a piece of paper.


 Inyuki
 http://www.universians.org

 --

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

drape experimentation COLDBLOODED, verisimilitude: fragment-mum
gloriously? CONTRACTOR prickl...





Re: Crystallizing block universe?

2009-12-09 Thread silky
On Wed, Dec 9, 2009 at 9:25 PM, ronaldheld ronaldh...@gmail.com wrote:
 Anyone want to give this a try and comment?
  http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.0808v1.pdf

Yes, I too found it quite fascinating; I was reading it yesterday!

The most I have to offer on it is that it references the Wheeler
Delayed-Choice experiment as sort-of core to their argument; but the
results of that experiment are explained here:
http://arxiv.org/abs/quant-ph/0611034 by way of de Broglie and Bohm's
Pilot-Wave.

I am wondering: does this conflict with their conclusions? Of
specific interest to me is the suggestion that Heisenberg is possibly
contradicted as a result. Though it does seem they say that it's not
relevant, given that (they claim) it happened in the past.

I too am interested in other peoples thoughts ...


                                             Ronald

 --

-- 
silky
  http://www.mirios.com.au/
  http://island.mirios.com.au/t/rigby+random+20

UNADVISEDLY FRICTIONAL outspoken INTERJECTION; INTRIGUINGLY preclude,
crunchiness tactlessness.





Re: Artificial Intelligence may be far easier than generally thought

2008-09-22 Thread silky

On Tue, Sep 23, 2008 at 1:36 PM,  [EMAIL PROTECTED] wrote:
 On Sep 22, 11:53 pm, John Mikes [EMAIL PROTECTED] wrote:
  Marc,
  Your closing line is appreciated.
  Yet: I still cannot get it: how can you include into an algorithm
  those features that had not yet been discovered? Look at it
  historically: if you composed such compendium 3000 yeas ago would you
  have included 'blank potential' unfilled algorithm for those aspects
  that had been discovered as part of the human intelligence since then?
  And forwardly: how much would you keep blank for newly addable
  features in human intelligence for the next millennia?
  Is B2 a closed and complete image?
  B1 (IMO) includes potential so far undiscovered beyond the knowable.
  How is that part of the algorithm? John M

 Yes, its intuitively hard to swallow, John, but it's actually what
 evolution has been naturally doing... for instance the parents of
 Albert Einstein were not as smart as Einstein, so something smarter
 came from something less smart.

Be careful with this one. I think it's not possible to say this; all
you *can* say is that his parents did not use their intelligence in
the same way he did, not that one is smarter than the other, as if to
suggest he materialised some additional smarts out of thin air.
Obviously, when combined, you take components from the entire tree you
have available to you, thus allowing you to be different, but it still
does mean that you come from that tree and are hence limited by it.


 What I anticipate is that the original algorithm contains a few very
 simple, basic concepts (which I call 'Prims' or 'Primatives) which are
 very vague and fuzzy, but in some strange sense, these are all that
 are required to encompass all knowledge!  Hard to swallow yes, but
 consider the process of moving from a general idea to a more specific
 idea--- remember that game of questions where someone thinks of a word
 of you have to gues of what the word is.. you know... Is it animal,
 vegetable, mineral? and you keep asking more specific questions until
 you guess the word.

 So I think learning is just *elaboration* (optimization) of what is
 actually in some strange sense already in your mind, in a vague fuzzy
 way.  New knowledge is just making what is already there more
 specific.   Rather like the scultpor who already sees a work of art in
 a block of stone... he's just 'shaping' what is in some sense 'aready
 there'.

 And no B2 would not be complete either... there is no reason why it
 couldn't go on improving itself indefinitely.

 --

 This idea of course is the exact opposite of the way most researchers
 are thinking about this.  They are trying to start with hugely complex
 low-level mathematics, whereas I'm starting at the *highest* level of
 asbtraction, and trying to identify those few basic vague, fuzzy
 'Prims'  which I maintain are all that are needed to in some strange
 way, encompass all knowledge.

 So far I've identified 27 basic 'Prims'.  I suspect these are all of
 them.

-- 
noon silky
http://www.themonkeynet.com/armada/
