Mark Diamond wrote:
> (1) A prior experiment shows that a particular (special) subject who
> engages in a temporal bisection experiment, in which she is asked to say
> whether a probe bar appeared early or late in a temporal interval, can do
> so extremely accurately. That is, if the interval is, say, 500 ms long,
> and a probe stimulus is flashed 247 ms after the appearance of the
> stimulus that marks the beginning of the interval, then she will (with
> almost no errors) say that the probe stimulus was early. Similarly, if
> the probe appears 253 ms after the first stimulus, she will say that the
> probe was late. That is, there is only a period of 3 ms either side of
> the true midpoint during which she makes any errors.
>
> (2) Theory, and some other results, predict that, for this subject, if
> her attention is disturbed by an event that occurs in the first half of
> the interval, then she will judge the midpoint of the interval to be
> later than it really is. In other words, she will say that probes
> appearing up to 265 ms after the beginning of the interval were early,
> and probes appearing after 271 ms will be judged to be late, putting the
> subjective midpoint around 268 ms. However, the extent of the period
> over which she makes errors may well change from 6 ms to something else.
>
> A reversal of the prediction occurs for conditions in which the
> disturbance happens in the latter half of the interval. Now one expects
> the subjective midpoint to be around 232 ms, and there is no guarantee
> that the interval over which the errors extend is the same as in the
> previous condition or the same as in the control condition.
>
>
> How does one go about testing the prediction that the two subjective
> midpoints (as estimated by whatever method you suggest) are different
> from one another?

    This isn't really my area, but I'll take a stab at it: I would suggest
using logistic regression, with her "early-late" response as the dependent
variable and probe time and disturbance as predictors. Logistic regression
can be done using most stats packages; it fits a model in which, at a
stimulus level L, a dichotomous outcome occurs with probability P(L), where
P(L) is typically close to 0 for some stimulus levels & close to 1 for
others. You can read out the "50-50" levels from the fitted equations, and
determine whether a particular predictor (here, disturbance) affects the
probability. You can also look at interactions to find out if she is less
reliable when disturbed.
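
    To make that concrete, here is a minimal sketch in Python. Everything here
is simulated: the 250 ms and 268 ms midpoints are just the numbers from the
question, while the 0.4/ms steepness, the trial counts, and the hand-rolled
Newton-Raphson fit (rather than a particular stats package) are my own
assumptions for illustration.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50, ridge=1e-8):
    """Fit a logistic regression by Newton-Raphson.

    X must already contain an intercept column; the tiny ridge term keeps
    the Hessian invertible if the responses are nearly separable."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = np.clip(X @ beta, -30.0, 30.0)   # guard against overflow in exp()
        p = 1.0 / (1.0 + np.exp(-z))
        w = p * (1.0 - p)
        hessian = X.T @ (X * w[:, None]) + ridge * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(hessian, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
n = 400
t = rng.uniform(230.0, 270.0, n)             # probe times (ms)
d = rng.integers(0, 2, n).astype(float)      # 1 = attention disturbed
true_mid = np.where(d == 1.0, 268.0, 250.0)  # midpoints from the question
p_late = 1.0 / (1.0 + np.exp(-0.4 * (t - true_mid)))  # 0.4/ms: assumed slope
y = (rng.random(n) < p_late).astype(float)   # 1 = subject answers "late"

# Centre probe time at 250 ms for better numerical conditioning.
X = np.column_stack([np.ones(n), t - 250.0, d])
b0, b1, b2 = fit_logistic(X, y)

# The "50-50" point solves b0 + b1*(t - 250) + b2*d = 0 for each condition:
pse_control = 250.0 - b0 / b1
pse_disturbed = 250.0 - (b0 + b2) / b1
print(pse_control, pse_disturbed)
```

Adding a (t - 250)*d interaction column to X would let you test the second
part of the question as well: whether the *steepness* of the transition (the
width of the error region) differs between conditions, not just its location.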

    The effect of disturbance *time* may be somewhat complicated, perhaps
following a curve such as

                    *
            *            *
    *                        *                      *
                                *        *
                                    *

    0                      250                       500


so you might either want to introduce polynomial terms to allow an empirical
fit with (say) a third-order curve or simply use one disturbance time at a
time & use a 0-1 dummy variable to represent disturbance or no disturbance.
Actually finding a curve would be rather nice, though.
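
    As a sketch of the curve-fitting option: suppose (purely hypothetically;
the cosine shape and all the numbers below are made up, not data) that early
disturbances shift the subjective midpoint later and late disturbances shift
it earlier, with a smooth hand-off in between. A third-order empirical fit
would look like:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical pattern: positive shift (later midpoint) for early
# disturbances, negative for late ones; cosine chosen only for illustration.
d_time = rng.uniform(0.0, 500.0, 60)               # disturbance times (ms)
shift = 18.0 * np.cos(np.pi * d_time / 500.0) + rng.normal(0.0, 2.0, 60)

coeffs = np.polyfit(d_time, shift, 3)   # third-order empirical fit
fitted = np.polyval(coeffs, d_time)
```

The same idea inside the logistic model amounts to adding d_time, d_time**2,
and d_time**3 columns to the design matrix.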

    One precaution: design your experiment to have several measurements in
the range in which your subject is not completely reliable, as logistic
regression does odd things if the responses are all 0 below some level and
all 1 above that level. "This is not a bug, it's a feature": in such a case
there is no information present to do more than bound the steepness of the
transition from p~0 to p~1, or its location.

    A very simple but less informative experiment would simply be to test
the subject with stimuli at several times, with one time (say 260 ms)
actually of interest and the rest just to keep her from falling into a
pattern or deciding that they were all the same length. If your conjecture
is right, she will classify a 260 ms stimulus as early more often if
disturbed than if not disturbed. The analysis would now be a simple
two-sample test for proportions.
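
    That test needs nothing beyond the standard pooled z-test for two
proportions; here is a sketch with made-up counts (38/50 and 12/50 are
hypothetical, chosen only to show the mechanics):

```python
from math import sqrt, erf

def two_sample_prop_test(k1, n1, k2, n2):
    """Two-sided z-test for equality of two proportions (pooled SE)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_value

# Hypothetical counts of "early" responses to a 260 ms probe:
z, p = two_sample_prop_test(38, 50, 12, 50)   # disturbed vs. undisturbed
```

With samples this small you might prefer an exact test, but the z-test is
what most introductory packages will give you.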


    -Robert Dawson

