Re: need for anthropic reasoning

2001-02-27 Thread rwas rwas

Hello,
I'm new in here. I apologize in advance for any
inadvertent transgressions...



 Second, there is no way of knowing whether you are in a so called
 real world or in a virtual world.  So if I don't care about virtual
 people, I don't even know whether or not I care about myself.  That
 doesn't seem reasonable to me.

I'd argue all worlds are just as real, or unreal, as you make them.
Finding a common context as some mechanism to validate truth seems
naive. One can only apply truth to issues within the context being
evaluated.


 Soon we may have AIs or uploaded human minds (i.e. human minds scanned
 and then simulated in computers). It seems to me that those who don't
 care about simulated thoughts would have an advantage in exploiting
 these beings more effectively. I'm not saying that is a good thing, of
 course.

I enjoyed considering this possibility. It sounds a
lot like freedom.

My current understanding tells me that there is much more to mind
than just logic and reasoning power. The power of the intellect is its
ability to transcend the chaos of undisciplined thought and feeling.
Its downfall is its declaration of absolutism, that it stands as the
pinnacle of understanding. The problem I find is that the intellect,
developed in this world, only knows *this world*. Some would argue
that there is no other world. I'd argue it's the intellect defining
itself in terms of the *apparent* world, and religiously maintaining
the faith, lest it find its own demise.

A truly powerful mind (imo) is one that quickly adapts
to any rules found in any context it operates in.
Clinging to one realm and making it the center of the
universe sounds a lot like religion to me.

 
 You're assuming that the AIs couldn't fight back.  With technology
 improving, they might be exploiting us soon.

I do a lot of conceptual work in AI. I find that without purpose, an
entity is one step closer to conceptual death. An AI knowing enough to
know it wants to exploit probably isn't burdened by the chaotic
thinking humans are plagued with. It is more likely that AIs achieving
this level of cognition and consciousness will seek to cooperate. They
would want to achieve things that they recognize only humans can act
as a catalyst for. Another scenario is that AIs might have less
consciousness than just described, and that they operate in
competition, not conscious of what they are actually doing. I think
this is possible on a small scale, but it would not continue very far.
Insects are, in effect, small machines without much in the way of
consciousness. Aside from the occasional plague or locust swarm, we
don't worry about them too much.


 Do you think that, 150 years ago, white people who didn't care about
 blacks had an evolutionary advantage?

 I also value knowledge as an end in itself, but the problem is how do
 you know what is true knowledge? If you don't judge knowledge by how
 effective it is in directing your actions, what do you judge it by,

I think this is an issue of consciousness. One may operate with
knowledge on a small scale, finding harmony in one's life by keeping
things simple. There are those who develop skills in applying vast
amounts of knowledge to complicated problems. You might ask: which is
better? I think it depends on what a person wants out of life. To
judge something, I think, requires a contextual awareness. What
applies for one might not apply for another. In science, we maintain a
rigid form of thinking to, in effect, keep from deluding ourselves. It
also serves as a language that spans anyone who would join and uphold
the principles of science (the scientific method, etc.). But again,
the validity and applicability of the knowledge gained in this club
depend on the context it is applied to. A scientist might say: This
drug will improve your life. The farmer or other simple person might
say: I don't care. The scientist might see statistics that say: These
people are dying needlessly. The simple person might say: That's life.
You might make a limited scientist out of a given simple person,
making them see your viewpoint. But have you improved their life? Have
you made them see? Or have you just blinded them?

Robert W.





Re: need for anthropic reasoning

2001-02-22 Thread Wei Dai

On Tue, Feb 20, 2001 at 04:52:10PM -0500, Jacques Mallah wrote:
 I disagree on two counts.  First, I don't consider self-consistency to 
 be the only requirement to call something a reasonable goal.  To be honest, 
 I consider a goal reasonable only if it is not too different from my own 
 goals.  It is only this type of goal that I am interested in.

That's fine, but when most people say "reasonable," the *reason* is not
just similarity to one's own beliefs.

 Second, there is no way of knowing whether you are in a so called real 
 world or in a virtual world.  So if I don't care about virtual people, 
 I don't even know whether or not I care about myself.  That doesn't seem 
 reasonable to me.

That's right, you don't know which world you are in. The proposal I made
was to consider your actions to affect all worlds that you can be in. But
you may not care about some of those worlds, in which case you just don't
take the effects of your actions on them into account when making your
decisions.
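
To make that a little more concrete (just a sketch; the notation is
mine and not meant to be definitive): let W be the set of worlds you
might be in, mu(w) the measure of world w, C the subset of those
worlds you care about, and U_w(a) how much you value the consequences
of action a in world w. Then the proposal amounts, roughly, to
choosing the action that maximizes

    V(a) = \sum_{w \in C} \mu(w) U_w(a)

so the effects of a on worlds outside C simply never enter the sum.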

 Evolution is just the process that leads to the measure distribution.  
 (Conversely, those who don't believe in an absolute measure distribution 
 have no reason to expect Darwin to appear in their world to have been 
 correct.)

I do believe in an absolute measure distribution, but my point is that
evolution probably does not favor those whose utility functions are
just functions on the measure distribution.
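
(By a function on the measure distribution I mean a utility of the
form U = F(mu), where mu assigns a measure to each conscious thought.
Your example of counting the thoughts happier than "life sucks" fits
this form, roughly

    U = \sum_t \mu(t) [h(t) > h_0]

with h(t) the happiness of thought t, h_0 the threshold, and [.]
equal to 1 when the condition holds and 0 otherwise. This is only
meant as an illustration, not an exact formalization.)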

 Also, I disagree that caring about others (regardless of who they are) 
 is not likely to be popular.  In my speculation, it's likely to occur in 
 intelligent species that divide into groups, and then merge back into one 
 group peacefully.

Soon we may have AIs or uploaded human minds (i.e. human minds scanned and
then simulated in computers). It seems to me that those who don't care
about simulated thoughts would have an advantage in exploiting these
beings more effectively. I'm not saying that is a good thing, of course.

 Anthropic reasoning can't exist apart from a decision theory, otherwise 
 there is no constraint on what reasoning process you can use. You might as 
 well believe anything if it has no effect on your actions.
 
 I find that a very strange statement, especially coming from you.
 First, I (and other people) value knowledge as an end in itself.  Even 
 if I were unable to take other actions, I would seek knowledge.  (You might 
 argue that it's still an action, but clearly it's the *outcome* of this 
 action that anthropic reasoning will affect, not the decision to take the 
 action.)

I also value knowledge as an end in itself, but the problem is how do you
know what is true knowledge? If you don't judge knowledge by how effective
it is in directing your actions, what do you judge it by, and how do you
defend those criteria against others who would use different criteria?

 Further, I do not believe that even in practice my motivation for 
 studying the AUH (or much science) is really so as to make decisions about 
 what actions to take; it is pretty much just out of curiosity.  One so 
 motivated could well say you might as well do anything, if it has no effect 
 on your knowledge.  (But you can't believe just anything, since you want to 
 avoid errors in your knowledge.)

Even if you study science only out of curiosity, you can still choose
what to believe based on how theoretically effective it would be in making
decisions. But again if you have a better idea I'd certainly be interested
in hearing it. 

 Secondly, it is well known that you believe a static string of bits could 
 be conscious.  Such a hypothetical observer would, by definition, be unable 
 to take any actions.  (Including thinking, but he would have one thought 
 stuck in his head.)

I'm not confident enough to say that I *believe* a static string of bits
could be conscious, but that is still my position until a better idea
comes along. I'd say that consciousness and decision making may not have
anything to do with each other, and that consciousness is essentially
passive in nature. A non-conscious being can use my proposed decision
procedure just as well as a conscious being. 

To be completely consistent with what I wrote above, I have to say that if
a theory of consciousness does not play a role in decision theory (as my
proposal does not), accepting it is really an arbitrary choice. I guess
the only reason to do so is psychological comfort.




Re: need for anthropic reasoning

2001-02-20 Thread Jacques Mallah

From: Wei Dai [EMAIL PROTECTED]
On Fri, Feb 16, 2001 at 10:22:35PM -0500, Jacques Mallah wrote:
  Any reasonable goal will, like social welfare, involve a function of 
the (unnormalized) measure distribution of conscious thoughts.  What else 
would social welfare mean?  For example, it could be to maximize the number 
of thoughts with a happiness property greater than "life sucks".

My current position is that one can care about any property of the entire 
structure of computation. Beyond that there are no reasonable or 
unreasonable goals.  One can have goals that do not distinguish between 
conscious or unconscious computations, or goals that treat conscious 
thoughts in emulated worlds differently from conscious thoughts in real 
worlds (i.e., in the same level of emulation as the goal-holders). None of 
these can be said to be unreasonable, in the sense that they are not 
ill-defined or obviously self-defeating or contradictory.

I disagree on two counts.  First, I don't consider self-consistency to 
be the only requirement to call something a reasonable goal.  To be honest, 
I consider a goal reasonable only if it is not too different from my own 
goals.  It is only this type of goal that I am interested in.
Second, there is no way of knowing whether you are in a so called real 
world or in a virtual world.  So if I don't care about virtual people, 
I don't even know whether or not I care about myself.  That doesn't seem 
reasonable to me.

In the end, evolution decides what kinds of goals are more popular within 
the structure of computation, but I don't think they will only involve 
functions on the measure distribution of conscious thoughts. For example, 
caring about thoughts that arise in emulations as if they are real (in the 
sense defined above) is not likely to be adaptive, but the distinction 
between emulated thoughts and real thoughts can't be captured in a function 
on the measure distribution of conscious thoughts.

Evolution is just the process that leads to the measure distribution.  
(Conversely, those who don't believe in an absolute measure distribution 
have no reason to expect Darwin to appear in their world to have been 
correct.)
Also, I disagree that caring about others (regardless of who they are) 
is not likely to be popular.  In my speculation, it's likely to occur in 
intelligent species that divide into groups, and then merge back into one 
group peacefully.

  So you also bring in measure that way.  By the way, this is a bad 
idea: if the simulations are too perfect, they will give rise to conscious 
thoughts of their own!  So, you should be careful with it.  The very act of 
using the oracle could create a peculiar multiverse, when you just want to 
know if you should buy one can of veggies or two.

The oracle was not meant to be a realistic example, just to illustrate my 
proposed decision procedure. However to answer your objection, the oracle 
could be programmed to ignore conscious thoughts that arise out of its 
internal computations (i.e., not account for them in its value function) 
and this would be a value judgement that can't be challenged on purely 
objective grounds.

I've already pointed out a problem with that.  Let me add that your 
solution is also a rather boring solution to what could be an interesting 
problem, for those who do care about virtual guys (and have the computer 
resources).

  Decision theory is not exactly the same as anthropic reasoning.  In 
decision theory, you want to do something to maximize some utility 
function.
  By contrast, anthropic reasoning is used when you want to find out 
some information.

Anthropic reasoning can't exist apart from a decision theory, otherwise 
there is no constraint on what reasoning process you can use. You might as 
well believe anything if it has no effect on your actions.

I find that a very strange statement, especially coming from you.
First, I (and other people) value knowledge as an end in itself.  Even 
if I were unable to take other actions, I would seek knowledge.  (You might 
argue that it's still an action, but clearly it's the *outcome* of this 
action that anthropic reasoning will affect, not the decision to take the 
action.)
Further, I do not believe that even in practice my motivation for 
studying the AUH (or much science) is really so as to make decisions about 
what actions to take; it is pretty much just out of curiosity.  One so 
motivated could well say you might as well do anything, if it has no effect 
on your knowledge.  (But you can't believe just anything, since you want to 
avoid errors in your knowledge.)
Secondly, it is well known that you believe a static string of bits could 
be conscious.  Such a hypothetical observer would, by definition, be unable 
to take any actions.  (Including thinking, but he would have one thought 
stuck in his head.)

 - - - - - - -
   Jacques Mallah ([EMAIL