I have a problem which, in the end, comes down to making an inference about
a difference between means, but it seems more complicated than any example I
can find in Croxton's Applied General Statistics or Sachs' Applied
Statistics: A Handbook of Techniques.
Subjects make dichotomous judgements, coded 0 or 1, in two quite different
conditions. The judgement concerns whether the appearance of a probe bar in
a 1 second interval was closer to the beginning of the interval or to the
end of the interval. The beginning of the interval is t=0, the end is t=1,
and the time of the probe bar is t_p. If we can ignore for the moment whether
this is a sensible thing to do, assume that I find a best-fit cumulative
Gaussian to the data, so that the later the probe bar appears (larger t_p),
the more likely the subject is to answer "late in the interval" (coded 1),
and the earlier it appears, the more likely the subject is to answer "early
in the interval" (coded 0). Essentially, this relies on the argument
that there is an underlying probability distribution which is then
dichotomized.
I'd rather not discuss the process up to this point since it is the *next*
part that is of greatest concern.
Assume that I have estimates, for each of the two conditions, of the mean
and standard deviation of the *best-fit* distributions, say {{mu1, sigma1,
n1},{mu2,sigma2,n2}} with the n's reflecting how many 0/1 judgments I had
from the same subject under the two aforementioned conditions. The
parameters are most likely derived from either a simplex or a
Levenberg-Marquardt fit of the model to the data.
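In case it helps to make the above concrete, here is a minimal sketch of the
kind of fit I have in mind, written in Python with numpy/scipy purely for
illustration (the names t_p, responses and fit_condition are made up for this
post; my real fits use a simplex or Levenberg-Marquardt routine, and the
simplex option below is meant as the same idea):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, t_p, responses):
    # Negative log-likelihood of the 0/1 responses under a cumulative Gaussian.
    mu, sigma = params
    if sigma <= 0:
        return np.inf                            # keep the search away from invalid sigma
    p_late = norm.cdf(t_p, loc=mu, scale=sigma)  # P("late" | probe time t_p)
    p_late = np.clip(p_late, 1e-9, 1 - 1e-9)     # avoid log(0)
    return -np.sum(responses * np.log(p_late)
                   + (1 - responses) * np.log(1 - p_late))

def fit_condition(t_p, responses):
    # Return {mu, sigma, n} for one condition via a simplex (Nelder-Mead) search.
    result = minimize(neg_log_likelihood, x0=[0.5, 0.1],
                      args=(t_p, responses), method='Nelder-Mead')
    mu, sigma = result.x
    return {'mu': mu, 'sigma': sigma, 'n': len(responses)}

So for each condition I end up with one such {mu, sigma, n} triple, i.e. the
{{mu1, sigma1, n1}, {mu2, sigma2, n2}} above.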
How can I now go about deciding whether the estimates of the two means {mu1,
mu2} are significantly different from one another? Experimentally, this is
the same as asking whether the subjective or perceptual mid-point of the 1
second interval is the same in the two conditions.
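The only idea I have come up with on my own is some sort of parametric
bootstrap on the difference mu1 - mu2, roughly along the lines below (Python
again purely for illustration, reusing the hypothetical fit_condition() from
the sketch above, with t_p1 and t_p2 standing for made-up probe-time arrays
for the two conditions), but I have no idea whether this is defensible, which
is partly why I am asking:

def bootstrap_mu_difference(t_p1, fit1, t_p2, fit2, n_boot=1000, seed=0):
    # Simulate 0/1 data from each fitted cumulative Gaussian, refit, and
    # collect the simulated differences between the two fitted means.
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        sim1 = rng.random(len(t_p1)) < norm.cdf(t_p1, fit1['mu'], fit1['sigma'])
        sim2 = rng.random(len(t_p2)) < norm.cdf(t_p2, fit2['mu'], fit2['sigma'])
        diffs[i] = (fit_condition(t_p1, sim1.astype(float))['mu']
                    - fit_condition(t_p2, sim2.astype(float))['mu'])
    return diffs    # spread of diffs ~ sampling variability of mu1 - mu2

Even if something like this is workable, I do not know how to turn the spread
of diffs into a proper significance statement, or whether there is a more
standard test on parameters estimated this way.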
I hope that someone may be able to offer either some guidance or a solution.