RE:[tips] signal detection and ROC curves

2016-01-30 Thread Mike Palij

On Fri, 29 Jan 2016 12:57:50 -0600, Douglas Peterson wrote:


No argument here.  Just me not being clear.


No problem. Just me being argumentative. ;-)


A' and AUC are valid measures comparing two systems
and much more interpretable than other SDT measures
given the parameters, as Mike explains, but they are not
direct measures of SDT parameters as typically explained.


I'm not sure why being a "direct measure of SDT" is more
important than the usefulness of the measure (e.g., A'
being more useful than d'). Although defining d' as the
difference Z(hit rate) - Z(false alarm rate) is one way
to calculate d', this form seems to depend upon the
assumption that the probability distributions are normal,
in which case the means and standard deviations are
independent, which is not the case for some other
distributions.
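The Z-difference definition just mentioned is easy to sketch; here is a minimal Python illustration of d' under the equal-variance normal assumption (the hit and false-alarm rates are made up):

```python
# A minimal sketch (hypothetical rates) of d' under the equal-variance
# normal assumption: d' = Z(hit rate) - Z(false-alarm rate).
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    z = NormalDist().inv_cdf          # Z: inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# rates symmetric about chance give a sensitivity of roughly 2 SD units
print(round(d_prime(0.84, 0.16), 2))  # ~1.99
```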

True story:  I learned about SDT from Sheila Chase with
whom I took experiment psych lab at Hunter College as
an undergraduate.  She did psychophysics work with
pigeons and we used Michael D'Amato's (an NYU Ph.D. ;-)
textbook on experimental psychology which covered
classical psychophysics and SDT.  When I got to grad
school in experimental psych at Stony Brook, I got
additional coverage of SDT in my graduate S class.
However, I had to take a year off from grad school to
deal with some family matters and enrolled part-time
at NYU's graduate experimental psych program so I
could keep up my studies.  One of the courses involved
George Sperling who taught the "sensation" and psychophysics
part of a course called "Basic Processes I" that all first
year NYU students had to take (Lloyd Kaufman taught
the "perception" part of the course).  Sperling took a
heavily mathematical approach and when he started to
cover SDT I thought "Cool, I know this stuff."

Well, unfortunately, Sperling started out with an example of
SDT that used two exponential distributions instead of normal
distributions.  Now, I had some idea of what an exponential
distribution was but that was never covered in any class I took
(I read about them in my readings on SDT).  He went into the math
for doing SDT with exponential distributions and I was lost --
a major problem was that Sperling didn't provide a reading list
on the topics, so you either followed what he said in class or
you had to find sources on your own.  I thought I was a total
idiot until I talked to my classmates who also had no idea
what an exponential distribution was.  This, of course, raised
the question why Sperling assumed we would know about
exponential distributions (later we learned that he just expected
a certain level of math stat knowledge in his students and if
they didn't have it, they could just flunk out).

Ultimately, Sperling had to point out to us idiots that the
significance of exponential distributions is that the variance
of such distributions is the square of the mean of the distribution
which poses problems for SDT because (a) the mean and SD
are not independent, and (b) as the mean increased, so did the
variance.  This violated traditional SDT based on normal
distributions but early work by Egan and others showed how
other distributions could be used -- one just needed either a
background in electrical engineering (Sperling was working
at Bell Labs at the time) or a masters in math stat.
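Sperling's point about exponential distributions can be checked numerically; a small simulation (sample size and means chosen arbitrarily for illustration) shows that the variance tracks the square of the mean:

```python
# Illustrates Sperling's point: for an exponential distribution the
# variance equals the square of the mean, so the mean and SD cannot be
# varied independently (means below are arbitrary illustrative values).
import random

random.seed(1)
for mean in (1.0, 2.0, 4.0):
    draws = [random.expovariate(1.0 / mean) for _ in range(100_000)]
    m = sum(draws) / len(draws)
    var = sum((x - m) ** 2 for x in draws) / len(draws)
    # the sample variance lands near mean**2, not near a constant
    print(f"mean ~ {m:.2f}   variance ~ {var:.2f}   mean^2 = {mean ** 2:.2f}")
```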

Needless to say, I bombed this part of the course -- it didn't make
me feel any better that the rest of the class also bombed out.
However, I did ace Kaufman's part of the course. ;-)

Moral:  the form of SDT that relies upon normal distributions is
only one form of SDT analysis and other forms (as we'll see shortly)
may make the analysis more complex and rely upon other aspects
of the analysis (e.g., A' actually is better than d' in certain 
situations).



Pastore, Crawley, Berens and Skelly (2003) present a good
discussion of the issues including the advantages and
disadvantages of A'.


*smacks forehead* How did I forget Pastore et al (2003)?  I looked
at the article after you mentioned it and realized that I had in fact
read it, but a while ago.  However, it turns out the situation may be
more complex than they present.  More on this shortly.


Specifically, A' is not independent from bias and is actually a
poorer estimate when performance is nearer to perfect in terms
of hits or false alarms.   For the 3 of us who care about this issue,


I think we will soon reach N <1 of Tipsters who care about this
issue. ;-)


estimates of d' aren't much good in those extremes either.


You mean like how Fechner's law breaks down at very low and very
high intensities because the Weber ratio is not constant for
all stimulus values; e.g., see:
https://books.google.com/books?id=ALsP3Rv3fFgC=PA288=%22fechner%27s+law%22+extremes=en=X=0ahUKEwjtzK7n3dHKAhXLWT4KHTnBBkoQ6AEIKDAC#v=onepage=%22fechner%27s%20law%22%20extremes=false


Macmillan and Creelman (1991) suggest adjusting hit rates of
100% to 1-(1/2n) and false alarm rates to 1/2n, and I don't have
any reason to doubt that; I just don't see it used very often.
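The Macmillan and Creelman adjustment just quoted can be sketched as follows (trial counts are hypothetical); without it, Z(1.0) is infinite and d' is undefined:

```python
# A sketch of the Macmillan & Creelman (1991) adjustment: hit/false-alarm
# rates of 1.0 become 1 - 1/(2n) and rates of 0.0 become 1/(2n), where n
# is the number of trials of that type, so Z stays finite.
# Trial counts below are made up.
from statistics import NormalDist

def corrected_rate(count: int, n: int) -> float:
    if count == n:
        return 1 - 1 / (2 * n)
    if count == 0:
        return 1 / (2 * n)
    return count / n

z = NormalDist().inv_cdf
h = corrected_rate(50, 50)    # perfect hit rate -> 0.99 instead of 1.0
f = corrected_rate(5, 50)     # 0.10
print(round(z(h) - z(f), 2))  # a finite d' (~3.61) instead of infinity
```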

Re: [tips] signal detection and ROC curves

2016-01-30 Thread Kenneth Steele




> On Jan 30, 2016, at 10:13 AM, Mike Palij  wrote:
> 
> 
> I believe that now we have officially reached N< 0 people
> interested in this topic. ;-)
> 
> -Mike Palij
> New York University
> m...@nyu.edu
> 


N = N + 1

I am still enjoying the discussion.  

Ken

-
Kenneth M. Steele, Ph.D.  steel...@appstate.edu 

Professor
Department of Psychology  http://www.psych.appstate.edu 

Appalachian State University
Boone, NC 28608
USA
-


---
You are currently subscribed to tips as: arch...@mail-archive.com.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5=T=tips=48021
or send a blank email to 
leave-48021-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu

re: [tips] signal detection and ROC curves

2016-01-29 Thread Mike Palij

On Thu, 28 Jan 2016 20:08:38 -0800, Carol DeVolder wrote:

Dear TIPSters,
I am currently teaching about the Theory of Signal Detectability,
Stevens's Power Law, and ROC curves in my Sensation and
Perception course.


I have to admit that I find your lumping of Stevens' Power Law
together with SDT and ROC curves a bit puzzling (or, depending
upon the phenomenon being studied, MOC or Memory Operating
Characteristic curves, or AOC or Attention Operating Characteristic
curves, or the more general measure AUC or Area Under the Curve).
Given that SDT was developed in the context of detecting
weak signals in the presence of noise while the Power law is
supposed to represent the relationship of stimulus magnitude
to sensory/subjective magnitude, I find it hard to reconcile
the two theories into a single framework.

Historically, Fechner leads to Stevens (among others)
for relating stimulus energies to sensation -- all above
an "absolute threshold" (if one believes in such a thing).
SDT does away with the concept of threshold in favor of
describing a person's performance in terms of sensitivity
(ability to detect a stimulus, usually in a background of
noise of some sort) and bias or willingness to say "Yes"
(in a Yes-No task; other responses in multiple-alternative
tasks), which is often assumed to be independent of sensitivity
(though this assumption may be wrong in certain situations).  This is why simple
measures of "accuracy" like "percent correct" are often
misleading indicators of a person's ability to detect or
discriminate stimuli.


Do any of you have any examples that you work on in class
or use to illustrate how to implement them?


You do understand that the types of task you would use with
SDT (ROC is just one way to represent the performance on
SDT tasks) would be different from those used with Power law?
If you put a gun to my head and say you'll blow my brains out
if I don't come up with appropriate tasks, I'd suggest:
(1) showing how the Self-Reference Effect (SRE) is studied -- typically
a recognition memory task that uses SDT analysis; see
the http://opl.apa.org website for their implementation --
and
(2) showing how to use magnitude estimation procedures for various
social phenomena, such as the seriousness of different crimes.
If Hugh Foley is still on Tips, he can provide more information
about this type of research from when he worked with Dave
Cross and others at Stony Brook back when he was in grad
school (a cohort of mine).


I want to do several things. First, I want to be able to
explain the logic of SDT, the power law, and ROCs.


It is probably me but I would have said the following instead
of what you wrote above:
(1) What SDT is, how it is a model of decision-making about
stimuli when they are difficult to detect or discriminate (not
limited to humans; animal psychophysics has also used SDT
analysis), and how the ROC provides a convenient representation
of the performance on a SDT task (i.e., it shows the degree of
sensitivity as reflected by d' or a similar measure, the effect of
payoffs and probabilities of stimuli [placement of Beta along the
ROC curve], and accuracy [the area under the ROC curve]).


Second, I want to be able to make the topics relevant and
convince the students that these concepts are active in their
daily lives.


I think you need to be a little bit more specific about which "concepts"
you're referring to.  Stevens' power law is just one example of the
"psychophysical law" and it has a number of problems associated
with it -- see the entry on Wikipedia for a brief presentation on the
objections to it:
https://en.wikipedia.org/wiki/Stevens'_power_law
Shepard has shown that what researchers want to do when it comes
to the psychophysical law is establish the following relationship:
Sensation = f(stimulus energy)
The problem is that we cannot directly observe sensation so we
typically rely upon the following empirical relationship:
Response = f(stimulus energy)
In both cases, f(stimulus energy) is a mathematical function relating
stimulus energy to sensation or response, but the function can take
a variety of forms (just ask any Fechnerian ;-).  Shepard, however,
has pointed out that this assumes that there is a simple relationship
between response and sensation, or
Response = f(sensation)
an assumption that cannot simply be ignored -- yet it has been ignored
or oversimplified in Stevens' and other psychophysical functions.  So,
the equation that is possibly operating is:
Response = f(sensation) = f(g(stimulus energy)), where sensation = g(stimulus energy)
That is, the observed response on, say, a magnitude estimation
task is the result of a function of a function, each of which may differ
for different stimuli.
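Shepard's function-of-a-function point can be made concrete with a toy example (both exponents below are made up): if the sensory and response functions are each power functions, magnitude-estimation data still look like a single power law, but the observed exponent is the product of the two.

```python
# A toy illustration of Shepard's point (both exponents are hypothetical):
# if sensation = stimulus**a and response = sensation**b, the observed
# data follow stimulus**(a*b), so magnitude estimation alone cannot
# separate the sensory function from the response function.
a, b = 0.6, 1.5   # made-up sensory and response exponents

def sensation(stimulus: float) -> float:
    return stimulus ** a

def response(s: float) -> float:
    return s ** b

for stim in (2.0, 10.0):
    observed = response(sensation(stim))
    predicted = stim ** (a * b)       # a single power law with exponent a*b
    print(f"{stim:5.1f}  observed {observed:.4f}  power-law fit {predicted:.4f}")
```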

With respect to SDT, originally it was based on Wald's statistical
decision theory, which we are most familiar with whenever we use
the Neyman-Pearson framework for doing statistical analysis in
contrast to classical Fisherian analysis (i.e., it involves the concepts
of Type II errors, statistical power, confidence intervals, etc.).  So,
SDT represents a model of how (some) people might make decisions
in certain situations (if one were so 

RE: [tips] signal detection and ROC curves

2016-01-29 Thread Peterson, Douglas (USD)
 with each 
system.  Recognizing that a system with no ability to distinguish will produce 
a straight line with a slope of 1 (that is FA rate and Hit rate rise and fall 
together) we have a representation of what a system with d'=0 would look like.  
The more the curve bows away from that straight line, the stronger the signal 
strength; responses in that system will fall along that curve depending on the 
bias, with a neutral bias falling along a line perpendicular to the d'=0 line 
and extending to the upper left corner (not many examples on Google Images 
have this line but you can find one).  From here you can talk about fuzzy 
signal detection theory with three outcome states (no signal, not sure, 
signal).  The simplest use is to treat the "not sure" as no signal in one 
computation and as a signal response in a second and you get two points from 
the same system and now you can estimate the curve.  
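Peterson's "not sure" trick can be sketched as follows (the response counts are invented for illustration):

```python
# A sketch of getting two ROC points from a three-category task
# (signal / not sure / no signal): fold "not sure" into "no" for a strict
# point and into "yes" for a lax point. Counts below are made up.
signal_trials = [60, 25, 15]   # 100 signal trials: yes / not sure / no
noise_trials = [10, 20, 70]    # 100 noise trials:  yes / not sure / no

n_sig, n_noise = sum(signal_trials), sum(noise_trials)

# Strict criterion: only "yes" counts as a detection -> (FA, Hit)
strict = (noise_trials[0] / n_noise, signal_trials[0] / n_sig)
# Lax criterion: "not sure" also counts as a detection
lax = ((noise_trials[0] + noise_trials[1]) / n_noise,
       (signal_trials[0] + signal_trials[1]) / n_sig)

print(strict)  # (0.1, 0.6)
print(lax)     # (0.3, 0.85)
```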

I realize this is long and I just tried to explain in e-mail what I spend an 
entire day of class talking about, but I hope it helps.  I'd be happy to make 
another attempt at explanation or maybe make a short video/screen-capture 
explanation.  SDT continues to be applicable in a number of settings, 
particularly medical tests; many use the AUC that Mike mentions, and while 
this isn't technically SDT (no z transforms) the ROC method is identical (here 
is a short and good example: 
http://www.nature.com/nmeth/journal/v12/n9/fig_tab/nmeth.3482_SF9.html)

All the best, 
Doug

Doug Peterson, PhD
Associate Professor of Psychology
The University of South Dakota
Vermillion SD 57069
605.677.5295

From: Carol DeVolder [devoldercar...@gmail.com]
Sent: Thursday, January 28, 2016 10:06 PM
To: Teaching in the Psychological Sciences (TIPS)
Subject: [tips] signal detection and ROC curves

[snip]








Re: [tips] signal detection and ROC curves

2016-01-29 Thread Paul Brandon
The main point I liked to make about Signal Detectability is that there is no 
such thing in the sense that a given stimulus has a given strength below which 
it cannot be detected.
First you must define the response being controlled by the stimulus.
We are really talking about changes in the likelihood of occurrence of a 
specified response given the presence of a certain stimulus situation.
A particular change in the strength of a stimulus may increase the likelihood 
of one response enough for it to be emitted, while not a different response.
So SDT is really about behavior under stimulus control, not just stimuli.

for my own experimental application:
"Brandon, Paul K. 
 A Signal Detection Analysis of Counting Behavior (1981). 
 in Quantitative Analysis of Behavior vol.I, Michael Commons and John A. Nevin, 
eds., Ballinger"



On Jan 28, 2016, at 10:06 PM, Carol DeVolder  wrote:

> [snip]


Paul Brandon
Emeritus Professor of Psychology
Minnesota State University, Mankato
pkbra...@hickorytech.net





Re: [tips] signal detection and ROC curves

2016-01-29 Thread Mike Palij

On Fri, 29 Jan 2016 07:47:20 -0800, Paul Brandon wrote:

The main point I liked to make about Signal Detectability is
that there is no such thing in the sense that a given stimulus
has a given strength below which it cannot be detected.


Exactly right.  The old idea of an absolute threshold is
shown to be wrong because it is not the threshold that
varies and produces a normal distribution (or other
probability distribution) of sensations; rather, there is an
intrinsic background level of "noise" (be it neural or
a combination of factors) that exists and is used as a
reference level against which the new distribution of
"signal+noise" is compared.  Thus, the ratio of the signal+noise
distribution to the noise distribution (i.e., the likelihood ratio)
serves as the basis for making a decision.  The
comparison of this ratio L(S+N/N) to Beta (the criterion, or
a fixed value of L(S+N/N) for the combination of payoffs,
probabilities of signals/stimuli, distributions, etc.) is what
serves as the person's/organism's decision rule:

If L(S+N/N) > Beta, say "Yes" or "Stimulus present"
if L(S+N/N) < Beta say "No" or "Stimulus absent"
If L(S+N/N) = Beta guess. ;-)
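The decision rule above can be sketched in a few lines, assuming equal-variance normal noise and signal+noise distributions (the means and Beta below are made-up values):

```python
# A minimal sketch (made-up parameters) of the likelihood-ratio decision
# rule, assuming equal-variance normal noise and signal+noise distributions.
from math import exp, pi, sqrt

def normal_pdf(x: float, mu: float, sigma: float = 1.0) -> float:
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def decide(observation: float, mu_noise: float = 0.0,
           mu_signal: float = 1.5, beta: float = 1.0) -> str:
    # L(S+N/N): likelihood under signal+noise relative to noise alone
    lr = normal_pdf(observation, mu_signal) / normal_pdf(observation, mu_noise)
    return "Yes" if lr > beta else "No"

print(decide(1.2))   # observation near the signal mean -> "Yes"
print(decide(0.1))   # observation near the noise mean  -> "No"
```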

So, unlike the old absolute threshold notion that there is
an energy level below which a stimulus cannot be detected, we
have sensations that are produced even by weak stimuli, and the
only question is whether they produce a S+N distribution of
sensations that differs from noise alone.  Of course, our
willingness to say "Yes" is only partly determined by this,
because the pay-off matrix (costs of being wrong, benefits of
being right) and the probability of the stimulus play important roles.


First you must define the response being controlled by the
stimulus. We are really talking about changes in the likelihood
of occurrence of a specified response given the presence of
a certain stimulus situation. A particular change in the strength
of a stimulus may increase the likelihood of one response enough
for it to be emitted, while not a different response.


Don't forget the effect of context on the underlying noise distribution.
Detecting the presence of a weak flash of light through a pinhole
or a small area of a computer screen will be affected by whether
you do the task in a room with bright lighting or one that is
completely dark.  David Krantz & Co have estimated that it might take
a single quantum of light to activate a rod in the eye under conditions
of pure darkness for the dark-adapted eye (remember the commercials
that said one could see the light of a candle several thousand feet
away on a dark night [assuming no light pollution]), but under ordinary
light conditions, a stimulus, even a weak one, will require many more
quanta in order to produce a sensation that leads to detection or,
in other words, a d-prime not equal to zero, a Hit rate not equal to
the False Alarm rate (or an AUC not equal to .50).

So SDT is really about behavior under stimulus control, not just 
stimuli.

for my own experimental application:


Your behavioristic tendencies are showing. ;-)


"Brandon, Paul K.
A Signal Detection Analysis of Counting Behavior (1981).
in Quantitative Analysis of Behavior vol. I, Michael Commons and
John A. Nevin, eds., Ballinger"


Remember Skinner's comparison of his approach to that of Tolman
that I mentioned in a previous post?  Tolman asserted that certain
variables operated within the organism while Skinner argued that
those variables operated in the environment.  The latter gives rise to
notions like "stimulus control" while the former gives rise to the
evaluation of evidence, an internal process.  This then raises the
question of whether SDT is correctly specified or even the correct
model (perhaps Luce's choice axioms provide a better description).

-Mike Palij
New York University
m...@nyu.edu




RE: [tips] signal detection and ROC curves

2016-01-29 Thread Mike Palij

On Fri, 29 Jan 2016 08:49:23 -0800, Douglas Peterson wrote:
[snip]

... SDT continues to be applicable in a number of settings,
particularly medical tests; many use the AUC that Mike mentions,
and while this isn't technically SDT (no z transforms) the ROC
method is identical (here is a short and good example:
http://www.nature.com/nmeth/journal/v12/n9/fig_tab/nmeth.3482_SF9.html)


A few points:
(1) As I mentioned in an earlier post, SDT is based on Wald's
statistical theory which serves as the basis for the Neyman-Pearson
framework for statistical testing.  The decision matrix originally
developed is a 2 x 2 table where the rows represent the response
("yes" or "no", "present" or "absent", etc.) and the columns
represent the "true state of nature", that is, stimulus presented
or not presented (this is known with absolute certainty since the
stimuli are selected by the researcher; given that the "true state"
is known, the question that remains is how well the responses or
decisions match the true state -- if the Hit rate is 100% and the
Correct Rejection rate is 100%, then the False Alarm rate = 0.00
and the Miss rate = 0.00; in other words, performance is perfect,
which with weak stimuli in psychophysics rarely/never occurs).

(2) I am puzzled by Peterson's statement that AUC is not really
SDT given that its equivalent, A', was developed by memory
researchers as early as the 1960s and has been shown to be
part of SDT.  In the http://opl.apa.org experiment on the "Self
Reference Effect", the dependent variable is a version of A'
that represents the area under "curve" created by the single
pair of Hit and False Alarm rates.  One reference on this point
is the following:
Macmillan, N. A., & Creelman, C. D. (1996). Triangles in ROC
space: History and theory of "nonparametric" measures of
sensitivity and response bias. Psychonomic Bulletin & Review,
3(2), 164-170.

Given that the ROC/MOC/AOC is presented in a unit square
-- the x-axis, representing the probability of a false alarm, is
limited to the range 0.00 to 1.00, and the y-axis, representing the
probability of a Hit, also ranges from 0.00 to 1.00 -- chance
performance is represented by the diagonal line where P(Hit)=P(FA).
In traditional SDT this implies d-prime is zero.  It also implies
that the area under the performance curve is 0.50 which can
be interpreted as a measure of accuracy; in this case, it represents
chance performance (hence the term "chance diagonal").  In
most Yes-No recognition memory experiments, only one hit rate
and one false alarm rate are obtained.  For nonrandom performance,
this provides a single point above the chance diagonal, forming
a triangle with the chance diagonal as the base.  The sum of
the area of the triangle and the area under the chance diagonal
(i.e., 0.50) becomes a measure of accuracy.  As the Hit rate
increases and the False Alarm rate decreases, the area in the
triangle increases -- in the limit when the False Alarm rate is zero,
the triangle fills the upper space and A' or AuC is 1.00 or the
entire area of the unit-square.  Thus, perfect performance is
represented by A' = AuC = 1.00.
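The triangle-plus-chance-diagonal area just described can be computed directly from a single (FA, Hit) pair; a sketch (with hypothetical rates) of that trapezoidal value alongside the commonly reported A' formula of Pollack and Norman (1964):

```python
# Two single-point area measures from one (hit, false-alarm) pair; the
# rates below are hypothetical. trapezoid_auc is the triangle-plus-
# chance-diagonal area described above; a_prime is the commonly used
# A' formula (Pollack & Norman, 1964), given here for the case h >= f.
def trapezoid_auc(h: float, f: float) -> float:
    # area under the segments (0,0)->(f,h)->(1,1): 0.5 + triangle area
    return 0.5 + (h - f) / 2

def a_prime(h: float, f: float) -> float:
    if h == f:
        return 0.5                    # chance diagonal
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

h, f = 0.80, 0.20
print(round(trapezoid_auc(h, f), 3))  # 0.8
print(round(a_prime(h, f), 3))        # 0.875
print(a_prime(1.0, 0.0))              # perfect performance -> 1.0
```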

(3) In making a medical diagnosis or interpreting a medical test,
the same reasoning as above is employed but the terms differ:

Hit rate becomes True Positive Rate = "Sensitivity"

Correct Rejection rate becomes True Negative Rate = "Specificity"

For more on these ideas and how they are used to determine how
good your usual medical test is, see the Wikipedia entry:
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
This entry eventually leads to d-prime, but go to the Wikipedia
entry on ROC curves for alternative measures, including AuC:
https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve

This is my third post to TiPS today, so no more till the morrow.

-Mike Palij
New York University
m...@nyu.edu





RE: [tips] signal detection and ROC curves

2016-01-29 Thread Peterson, Douglas (USD)
No argument here.  Just me not being clear.  

A' and AUC are valid measures comparing two systems and much more interpretable 
than other SDT measures given the parameters as Mike explains, but they are not 
direct measures of SDT parameters as typically explained.  Pastore, Crawley, 
Berens and Skelly (2003) present a good discussion of the issues including the 
advantages and disadvantages of A'.  Specifically, A' is not independent from 
bias and is actually a poorer estimate when performance is nearer to perfect in 
terms of hits or false alarms.   For the 3 of us who care about this issue, 
estimates of d' aren't much good in those extremes either.  Macmillan and 
Creelman (1991) suggest adjusting hit rates of 100% to 1-(1/2n) and false alarm 
rates to 1/2n and I don't have any reason to doubt that; I just don't see it 
used very often.

The use of sensitivity/specificity reporting doesn't capture both the 
sensitivity and the response bias as explained in SDT examples (i.e., an 
estimate of the distance between the two distributions).  I believe this is 
the reason that this entry is careful to distinguish the sensitivity index, 
called d', as something different from sensitivity as true positives.  The 
two approaches might be considered two sides of the same coin but they are 
not the same side of the same coin.

Macmillan, N.A., & Creelman, C.D. (1991). Detection Theory: A User’s Guide. NY: 
Cambridge
University Press.

Pastore, R.E., Crawley, E.J., Berens, M.S., & Skelly, M.A.  (2003).  
"Nonparametric" A' and other modern misconceptions about signal detection 
theory. Psychonomic Bulletin & Review, 10(3), 556-569.

  



Doug Peterson, PhD
Associate Professor of Psychology
The University of South Dakota
Vermillion SD 57069
605.677.5295


[tips] signal detection and ROC curves

2016-01-29 Thread Carol DeVolder
Thank you all for your great responses. Mike, I knew I could count on you,
and yes, I read your message in its entirety. :)  Why I lumped all of that
together is that it is all lumped together in the unit we are on. I talked
about each separately, but since my students tend to be math-phobic, I
wanted not only to convey how each procedure is carried out, but really
wanted some mundane examples in addition to practical ones. And Annette, I am
reading through the information on Wixted's page.
Thanks again to all, I appreciate your help.

-- 
Carol DeVolder, Ph.D.
Professor of Psychology
St. Ambrose University
518 West Locust Street
Davenport, Iowa  52803
563-333-6482


Re: [tips] signal detection and ROC curves

2016-01-29 Thread Paul Brandon

On Jan 29, 2016, at 10:54 AM, Mike Palij  wrote:

>> So SDT is really about behavior under stimulus control, not just stimuli.
>> for my own experimental application:
> 
> Your behavioristic tendencies are showing. ;-)

I’ll take that as a compliment ;-).

>> "Brandon, Paul K.
>> A Signal Detection Analysis of Counting Behavior (1981).
>> in Quantitative Analysis of Behavior vol.I, Michael Commons and John A. 
>> Nevin,
>> eds., Ballinger"
> 
> Remember Skinner's comparison of his approach to that of Tolman
> that I mentioned in a previous post?  Tolman asserted that certain
> variables operated within the organism while Skinner argued that
> those variables operated in the environment.  The latter gives rise to
> notions like "stimulus control" while the former gives rise to the
> evaluation of evidence, an internal process.  This then raises the
> question of whether SDT is correctly specified or even the correct
> model (perhaps Luce's choice axioms provide a better description).
> 
> -Mike Palij
> New York University
> m...@nyu.edu

As I read Skinner (and I’ve read most of it) he never denied the existence of 
immediate causation (internal mediating processes) — but he doubted that the 
state of neurology during his time was adequate to account for behavior at the 
level of internal mechanisms.  So we’re not talking about the same variables 
here; Tolman was talking about intervening variables (a mechanism mediating 
between environmental variables and behavior), while Skinner was talking about 
independent, directly observable variables (environment, history) as better 
predictors of behavior.

Paul Brandon
Emeritus Professor of Psychology
Minnesota State University, Mankato
pkbra...@hickorytech.net






[tips] signal detection and ROC curves

2016-01-28 Thread Carol DeVolder
Dear TIPSters,
I am currently teaching about the Theory of Signal Detectability, Stevens's
Power Law, and ROC curves in my Sensation and Perception course. Do any of
you have any examples that you work on in class or use to illustrate how to
implement them? I want to do several things. First, I want to be able to
explain the logic of SDT, the power law, and ROCs. Second, I want to be
able to make the topics relevant and convince the students that these
concepts are active in their daily lives. And third, I want to give them
some opportunities to practice. I've already talked about hits, misses,
false alarms, and correct rejections in class, and using payoffs to
manipulate response criteria; now I want to make it all applicable. I
welcome any and all ideas.

Thank you very much.
Carol

Carol DeVolder, Ph.D.
Professor of Psychology
St. Ambrose University
518 West Locust Street
Davenport, Iowa  52803
563-333-6482
