OK.  You have a case where you sample from a 'population' of times from Situation A a
number of times, and from Situation B a number of times.  Maybe C, D, etc. too.

To compare 2 of these babies, use a t test.  Keep the sample sizes (the n's of each)
about equal, and go for it.  Student's 't' test is pretty darn robust.  The test of
means relies on the Central Limit Theorem, etc.  Depending on how much work it takes to
run a simulation, you might run each condition 20 times or so, to get pretty fair
discriminatory ability.
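As a sketch of that comparison in Python, using only the standard library rather than a stats package; the elapsed-time lists `times_a` and `times_b` are made-up numbers, not from any real simulation:

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance Student t statistic for two equal-ish samples."""
    na, nb = len(a), len(b)
    # pooling the variances assumes the two populations have similar spread
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2   # statistic and degrees of freedom

# hypothetical elapsed-time samples from conditions A and B
times_a = [12.1, 9.8, 11.4, 10.9, 12.7, 10.2]
times_b = [14.3, 13.1, 15.0, 12.8, 14.6, 13.5]
t, df = two_sample_t(times_a, times_b)
```

Compare `t` against the Student t critical value for `df` degrees of freedom at your chosen confidence; in practice a library routine such as SciPy's `ttest_ind` does the whole job, p-value included.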

To compare more than 2, say A through E, a one-way AoV would work, IF the respective
distributions are Normal and the variances are reasonably close together.  You have to
check this out first.  If not, you have some choices.  One is to back off the
confidence in your conclusion, and select the 'best' condition for further study.  If
the 'best' one is 'obvious', this may get you on to the next step.  Otherwise, some
(possibly effective) modifications of AoV may correct for the deviations from the
assumptions.
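A minimal sketch of the one-way AoV arithmetic, assuming the groups really are near-Normal with similar variances as cautioned above; the group data here are invented for illustration:

```python
import statistics

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group mean square."""
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: how far each condition mean sits from the grand mean
    ssb = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: scatter of runs around their own condition mean
    ssw = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# hypothetical elapsed-time samples for conditions A, B, C
f_stat = one_way_f([[10, 11, 12], [13, 14, 15], [16, 17, 18]])
```

A large F relative to the F distribution's critical value for (k-1, n-k) degrees of freedom says at least one condition mean differs. If the Normality or equal-variance checks fail, a rank-based alternative such as Kruskal-Wallis is one of the usual fallbacks.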

And I haven't said anything about testing for Power, but I suspect you are not up to
that yet.  Patience :)

This is a pretty quick way to do it.  Depending on how rigorous you want to be, it
could be more than enough.
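The 'certain amount' formula quoted further down (s*z/sqrt(n)) can also be turned around to budget the runs in advance; here is a sketch using a made-up pilot standard deviation:

```python
import math

def runs_needed(s, half_width, z=1.96):
    """Invert half_width = z*s/sqrt(n) for n, rounding up to whole runs."""
    return math.ceil((z * s / half_width) ** 2)

# hypothetical pilot values: stdev of 8.0 iterations across a few trial runs,
# and we want the mean pinned down to +/- 2.0 iterations at 95% confidence
n = runs_needed(8.0, 2.0)
```

This is how the simulation's stopping rule could work: re-estimate s as runs accumulate, and stop once the achieved half-width drops below the target.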

Jay

Gooseman wrote:

> Hi,
>
> Thanks for your help! I went over these ideas and now understand my
> problem better.
>
> If I explain my simulation, it may help. Basically, I have a
> simulation where various "agents" have to find a target. The
> simulation is terminated once the target has been found. The current
> measurement of performance is "iterations taken" - time. The
> simulation settings are kept constant, aside from the starting
> positions which are random. The simulation is repeated until the
> confidence interval reaches a certain percentage [this statement may
> be wrong once you read the next step!]
>
> The simulation then changes a parameter (such as number of "agents")
> and is then repeated to sample the new population. This is done quite
> a few times.
>
> What I really need to do, is to prove with a certain confidence, that
> the MEAN time taken from Simulation A comes from a different
> population than from Simulation B, C, D, E etc.
>
> From my understanding, this may imply that I need to concentrate on the
> accuracy of the mean of a simulation run wrt the real population mean
> (unknown) and then compare this to other simulation runs with
> different populations. Some suggestions have included doing an ANOVA
> analysis. Comparing multiple variances was also suggested, but this
> apparently can only be done with 2 populations.
>
> On top of this, there is a big requirement on computational efficiency
> - each simulation needs to stop when the results are accurate enough
> for the next step. So is confidence in the mean the solution (and how
> do I do that), or is it comparing various simulation runs together
> (and using what method) or is it something else, or a combination.
>
> Does this explain enough? If anyone requires any more info, just ask.
> Sorry if this explanation or question sounds vague - I am just
> starting to find my way around stats!!!!!
>
> Many thanks!
>
> [EMAIL PROTECTED] (Jay Warner) wrote in message news:<[EMAIL PROTECTED]>...
> > the real question is, 'how much accuracy (precision, variance) is
> > suitable?'
> >
> > If you were to repeat the simulation run (i.e., a test) a total of n
> > times, then you could say that the true mean elapsed time was x-bar +/-
> > (certain amount), with say 95% confidence.
> >
> > That is, if you were to then repeat the whole process, n times again, 95%
> > of the time the x-bar would fall within the +/- (certain amount) you had
> > calculated.  The average of your mean elapsed time is probably Normal, so
> > this equation can be used.  If you want to predict the one next elapsed
> > time from the next simulation run, then you have to believe that your
> > individual times are Normally distributed, or do some deeper analysis.
> > If that's confusing, I'm sorry, but it comes from what you asked.
> >
> > You can do the simulation run n times, and _estimate_ a value for mean
> > elapsed time that could be confirmed only by say 100*n runs.  Does this
> > sound like what you want?
> >
> > The eq. for the 'certain amount' is given by
> >
> > certain amount = s*z/sqrt(n)
> >
> > where s = stdev of your n run times, z = 1.96 for 95% confidence, and n =
> > number of simulation runs.
> >
> > Pick a confidence interval ('certain amount') that you like, then solve
> > for n to decide how many runs you will need to make.  Statistics cannot
> > tell you what confidence interval is suitable to your problem - that is a
> > technical issue.  It can tell you how many n's you need to reach that
> > confidence interval.
> >
> > Is this what you were looking for?
> >
> > Cheers,
> > Jay
> >
> > PS:    Yes, I know 'accuracy' and 'precision' refer to different things.
> > But you used the first of these words in a way which I infer meant the
> > latter, so I opened the first sentence in that manner.
> >
> > Gooseman wrote:
> >
> > > Hi,
> > >
> > > I am writing a computer simulation, and I really would appreciate some
> > > advice about statistics side of things!
> > >
> > > Each simulation run has fixed settings, but there is some randomness
> > > involved (e.g. start position). As a result, each simulation scenario
> > > needs to be run until the variance of the overall mean (say, the time
> > > taken for the objective to be met) is reduced.
> > >
> > > The simulation has just one output that needs measurement - time
> > > taken, and there is no transient state.
> > >
> > > The question is, what accuracy is acceptable, and how can I guarantee
> > > that the variance is small enough to be accurate, while being
> > > efficient on computing power. Any methods, techniques etc. gladly
> > > welcome, as I am new to stats!!
> > >
> > > Thanks.
> > >
> > > =================================================================
> > > Instructions for joining and leaving this list, remarks about the
> > > problem of INAPPROPRIATE MESSAGES, and archives are available at
> > >                   http://jse.stat.ncsu.edu/
> > > =================================================================
> >
>

--
Jay Warner
Principal Scientist
Warner Consulting, Inc.
4444 North Green Bay Road
Racine, WI 53404-1216
USA

Ph: (262) 634-9100
FAX: (262) 681-1133
email: [EMAIL PROTECTED]
web: http://www.a2q.com

The A2Q Method (tm) -- What do you want to improve today?





