Hi,

I've got a simulation statistics problem and would really appreciate
some advice, as I'm a newbie to stats :)

My simulation (metaphor)

You're blindfolded and placed at a random position in a room 10
squares by 10 squares. A coin is placed randomly on the floor. You
move one square at a time in a completely random direction. The
simulation ends when you reach the coin.
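In case it helps, here's a rough sketch of one run of the simulation in
Python (the function name, the boundary handling, and the move set are
just my own illustrative choices, not anything fixed):

```python
import random

def walk_until_found(size=10, rng=random):
    """One run: random start and coin positions on a size-by-size grid,
    then random one-square moves until the walker reaches the coin.
    Returns the number of moves taken."""
    x, y = rng.randrange(size), rng.randrange(size)
    cx, cy = rng.randrange(size), rng.randrange(size)
    moves = 0
    while (x, y) != (cx, cy):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Stay inside the room: a step that would leave the grid is
        # clamped to the wall (still counted as a move).
        x = min(max(x + dx, 0), size - 1)
        y = min(max(y + dy, 0), size - 1)
        moves += 1
    return moves
```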

This is repeated (with the coin position and your start position
randomized on each run) until the average number of moves taken is
estimated accurately.

My question is: how do you measure the average in this situation, and
when do you stop? Is there some confidence measure for the average
(not a confidence interval)? One method someone suggested was to run
the simulation 10 times, then 20, then 40, etc., measure the average
at each stage, and look at the difference between successive averages.
Does this sound like a formal method?
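For context, here is the kind of sequential stopping rule I've been
wondering about, based on the standard error of the mean rather than
the doubling heuristic (this is just my own sketch; the function name
and parameters are made up):

```python
import math
import random

def estimate_mean(run_once, tol=0.01, batch=100, max_runs=100_000):
    """Run the simulation in batches until the standard error of the
    mean drops below tol * mean (relative precision), or max_runs is
    reached. Returns (mean, standard error, number of runs)."""
    n, total, total_sq = 0, 0.0, 0.0
    mean, se = 0.0, float("inf")
    while n < max_runs:
        for _ in range(batch):
            x = run_once()
            n += 1
            total += x
            total_sq += x * x
        mean = total / n
        # Sample variance from running sums (n >= batch > 1 here).
        var = (total_sq - n * mean * mean) / (n - 1)
        se = math.sqrt(max(var, 0.0) / n)
        if se < tol * mean:
            break
    return mean, se, n
```

Used with something like `estimate_mean(walk_until_found)`, it would
stop once the mean is pinned down to about 1% relative precision.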

Once the average is found, a parameter such as room size is changed,
and the simulation is repeated until the average has converged (is
that the correct terminology?).

Will this change how I should measure my averages, i.e. when comparing
two different populations?
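If it matters, the comparison I have in mind is something like an
approximate confidence interval on the difference of the two averages
(a Welch-style standard error; `diff_ci` and its details are my own
sketch, assuming independent runs in each configuration):

```python
import math

def diff_ci(xs, ys, z=1.96):
    """Approximate 95% confidence interval for mean(xs) - mean(ys),
    given two independent samples of simulation run lengths."""
    def mean_var(v):
        n = len(v)
        m = sum(v) / n
        var = sum((x - m) ** 2 for x in v) / (n - 1)
        return m, var, n
    mx, vx, nx = mean_var(xs)
    my, vy, ny = mean_var(ys)
    se = math.sqrt(vx / nx + vy / ny)  # Welch-style combined SE
    d = mx - my
    return d - z * se, d + z * se
```

If the interval excludes zero, the two room sizes would seem to give
genuinely different averages.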

Thanks in advance!!
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
                   http://jse.stat.ncsu.edu/
=================================================================
