Re:[tips] NY Times Article on Reproducibility

2015-09-02 Thread Mike Williams
Psychologists have been using poor research methods for so long that we
think our current methods are valid.  We have been wise enough to
detect these problems, and we often comment on them and even study them,
but we don't change them because there is essentially no correction.
It's like the old quip about the weather: everybody talks about it
but nobody does anything about it.  A good example I use in stats classes
is reliability.  Psychologists have actually made contributions to the
study of measurement precisely because our measures are so unreliable.  I wonder
how many studies will fail to replicate, or to show stable effect sizes,
when the dependent measures have reliabilities of only .8.
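Spearman's classic correction for attenuation makes the worry concrete: the correlation we observe is the true correlation shrunk by the square root of the product of the two measures' reliabilities.  A minimal sketch in Python (my own illustration, with hypothetical numbers):

```python
import math

def attenuated_r(r_true, rel_x, rel_y):
    """Spearman's attenuation relation: the observed correlation is the
    true correlation shrunk by sqrt(rel_x * rel_y)."""
    return r_true * math.sqrt(rel_x * rel_y)

# A true effect of r = .30, measured with two instruments of reliability .80:
print(round(attenuated_r(0.30, 0.80, 0.80), 2))  # 0.24
```

With both reliabilities at .8, a fifth of the effect is gone before any sampling error enters the picture.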


If the dependent measures can't be improved, we still forge on, using
them as if they were perfectly valid and reliable.  One consequence
of this, of course, is a poor rate of replication.

Mike Williams
Drexel University

On 9/3/15 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: RE: NY Times Article on Reproducibility
From: "Mike Palij"
Date: Wed, 2 Sep 2015 08:54:34 -0400
X-Message-Number: 1

On Tue, 01 Sep 2015 08:02:05 -0700, Jim Clark wrote:

>Hi
>
>Piece in NY Times by psychologist defending the discipline.
>http://www.nytimes.com/2015/09/01/opinion/psychology-is-not-in-crisis.html?emc=edit_th_20150901&nl=todaysheadlines&nlid=26933398&_r=0
>
>Judging by comments, readers aren't buying the argument.

Maybe Scott Lilienfeld should write an op-ed piece, given
his background in evaluating psychology as a science versus
a pseudoscience.  He hasn't commented on the
reproducibility project, but one imagines he may have
useful insights, as well as explanations that go beyond
"this is just an example of the self-correcting nature of science".

-Mike Palij
New York University
m...@nyu.edu



---
You are currently subscribed to tips as: arch...@mail-archive.com.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5&n=T&l=tips&o=46630
or send a blank email to 
leave-46630-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu


[tips] Lilienfeld Article on Why Ineffective Therapies Appear to Work

2015-07-25 Thread Mike Williams
I found the article a superb summary of all the possible defects of
psychotherapy outcome research.

It succumbs to many of the defects it summarizes, however, by assuming
that any of the research-method fixes actually fix anything.
They fail for two major reasons.  1) Self-report measures (e.g., the Beck
Depression Inventory), the backbone of all dependent
measures used in outcome research, are not independent observations or
measurements.  Human participants are not
passive agents in the research study, providing objective assessments of
their mental state.  2) Humans are interactive as
they participate in the study.  They form ideas about which treatment
they are experiencing.  It is easy to tell when you are
in the control group.  The IRB occasionally gets complaints from
subjects who were assigned to the control
condition when they expected free treatment.  This factor makes it
impossible to have a blinded study of psychotherapy.
The same applies to outcome studies of psychotropic medications: if you
have a dry mouth and constipation, you are in
the drug treatment group.  There has never been a double-blind study of
psychotherapy outcome.  For this reason, even
the "empirically validated" studies are not valid.  I honestly don't
know how to solve these problems.  The first step might
be to recognize that humans will never behave like passive laboratory
rats, and simply to survey them about factors like
expectation bias.  How large an effect does expectation bias produce on
self-report measures?  Is it as large as the effects
previously reported as treatment effects?

Mike Williams
Drexel University



On 7/25/15 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Lilienfeld Article on Why Ineffective Therapies Appear to Work
From: Michael Britt <mich...@thepsychfiles.com>
Date: Fri, 24 Jul 2015 15:59:21 -0400
X-Message-Number: 2

Just finished discussing this article on my podcast:

Why Ineffective Psychotherapies Appear to Work: A Taxonomy of Causes of 
Spurious Therapeutic Effectiveness
http://www.latzmanlab.com/wp-content/uploads/2014/04/Lilienfeld-et-al-2014-CSTEs.pdf

Really worth reading.  I’d go so far as to say that it might be considered 
required reading for grad students studying to be therapists.  We all know how 
many pseudo-scientific therapies there are out there.  If we can’t conduct 
good research on them then we might as well at least be aware of some of the 
reasons why we think they work when they don’t.

Anyway, great article Scott and colleagues.

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt







Re:[tips] Longest Time To A Ph.D. Oral Defense EVER!

2015-06-11 Thread Mike Williams
This is a great story.  A small part of it is a testament to the role of 
the Medical College of Pennsylvania and Hahnemann University in training
women and minority doctors in Philadelphia.  In those days, the Dean of 
the Medical School interviewed every medical school applicant.
Unfortunately, neither institution exists any longer.  They were part of a
larger system that went bankrupt.  The remnants were incorporated into
the Drexel University College of Medicine.

Mike Williams

On 6/11/15 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Longest Time To A Ph.D. Oral Defense EVER!
From: Mike Palij <m...@nyu.edu>
Date: Wed, 10 Jun 2015 14:10:10 -0400
X-Message-Number: 2

There is a remarkable story in the popular press about
a German woman getting her Ph.D. in neonatology after
passing her oral defense.  The remarkable part is that
she is 102 years old and had to wait almost 80 years to
get to do her oral defense.  It's a story about racist German
Nazis and rabid U.S. anti-communists and, ultimately,
resolution.

Several outlets have the story, with varying degrees of
detail.  For those in a hurry, here's the story on the U.S. News
& World Report website:
http://www.usnews.com/news/world/articles/2015/06/09/102-year-old-jewish-woman-to-receive-doctorate-in-germany
and from the BBC website:
http://www.bbc.com/news/world-europe-33048927

The Wall Street Journal article has the greatest amount of
detail, explaining why, after coming to the US and earning
an MD degree, she and her husband went back to
Germany, to East Berlin, in the early 1950s after
McCarthy-era anti-communist investigators in the US
(see: http://en.wikipedia.org/wiki/Joseph_McCarthy)
got interested in their involvement with the US communist
party.  For the Wall Street Journal account, see:
http://www.wsj.com/articles/from-nazi-germany-a-tale-of-redemption-1431576062





[tips] A Clinical Trial Undone

2015-04-28 Thread Mike Williams
Here is an interesting NY Times news story on a clinical drug trial gone
bad.  I plan to discuss it in my stats class.


http://www.nytimes.com/2015/04/19/business/seroquel-xr-drug-trial-frayed-promise.html?_r=1

One aspect of the article highlights how desperate recruitment becomes
for the drug companies.  So much more could have been said about this.
In my experience, the stock of these companies rests on the drugs they
have in development.  If a trial ends for any reason, the stock value
falls.  Since the trial monitors and others are all given stock options,
their personal finances suffer if a trial is ended early.  As a
result, they pressure the investigators to get patients into the
trial at all costs.  The investigators are also paid a set amount when a
subject completes the trial.  These factors also work to keep the trials
as short in duration as possible.  Treating borderline personality
disorder in 8 weeks and getting positive findings is just expectation
bias influencing the self-report measures.  Most people are also unaware
of what the drug companies have done to the IRB process.  Virtually all
trials are now reviewed by for-profit, centralized IRBs.  This makes the
review much more efficient but presumably reduces local oversight by the
institutions involved in the trials.


Mike Williams
Drexel University



Re:[tips] Brain and Mind

2014-12-19 Thread Mike Williams
Both of these systems are still mediated by the brain.  Since the
software/hardware distinction and plasticity are widely accepted
as possible models of brain function, and are discussed in the neuroscience
literature, the Neurohacks article just sets up some straw men and
implies that no one believes the brain is plastic.  I didn't learn
anything new from it.  I'm preparing two papers on brain development
using DTI imaging and came across three great papers.  The papers by
Buckner & Krienen and Neubauer & Hublin are two of a small number of papers that
hit my brain like a bucket of cold water.  I was hanging on every word.
They represent theories of brain development that do incorporate a balance
between hardware and software development (especially Buckner's).

Mike Williams

Stiles, J., & Jernigan, T. L. (2010). The basics of brain development.
Neuropsychology Review, 20(4), 327-348. doi: 10.1007/s11065-010-9148-4

Buckner, R. L., & Krienen, F. M. (2013). The evolution of distributed
association networks in the human brain. Trends in Cognitive Sciences,
17(12), 648-665. doi: 10.1016/j.tics.2013.09.017

Neubauer, S., & Hublin, J.-J. (2012). The evolution of human brain
development. Evolutionary Biology, 39(4), 568-586. doi:
10.1007/s11692-011-9156-1


On 12/19/14 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:
 (1) to have all aspects of this system's functionality hardwired,
 in this case, embodied in form of individual neurons, neural
 networks, and systems of neural networks,

 or

 (2) to have all aspects of this system's functionality as software,
 in this case, general purpose neurons that are highly adaptable
 (e.g., stem cells), that can form general purpose neural networks
 that can be specialized to handle specific forms of information
 (e.g., representations of visual information or auditory info or etc.),
 and systems of such neural networks.

 On 12/18/14 11:00 PM, Teaching in the Psychological Sciences (TIPS)
 digest wrote:
 With all of the wack and rank neuroscience being promoted
 these days, it's easy for people to think that brain function is
 the same thing as cognitive function.  One popular-media
 article that points out some of the problems with this view is
 a Neurohacks article on the BBC website; see:

 http://www.bbc.com/future/story/20141216-can-you-live-with-half-a-brain



Re:[tips] Haters of Neuroscience

2014-07-13 Thread Mike Williams
I found the essay reasonably supportive of neuroscience
investigations.  The questions he raises would be great to investigate.
Neuroscience research like this has great potential for resolving many of
the nature-versus-nurture controversies.  The one thing that makes fMRI
research different from most psychology research is that it is empirically
driven and not influenced by the expectations of the researcher or the subjects.
In this way, the field is similar to studies of obesity.  The pattern I
get from the scanner is the pattern I have to live with.


The major reservation I have with the field is publication bias.  The 
investigators jump all the hurdles to get a study with positive findings 
published.
With negative findings, they tend to move on to the next study and 
neglect to publish the negative findings.


Mike Williams

On 7/13/14 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: For Haters Of Neuroscience...
From: Mike Palij <m...@nyu.edu>
Date: Sat, 12 Jul 2014 08:28:20 -0400
X-Message-Number: 2

And you know who you are.  The NY Times has an opinion piece
by NYU's Gary Marcus titled "The Trouble With Brain Science",
which goes into some of the difficulties that recent initiatives
(e.g., the EU's brain initiative) have, since we don't seem to
have the basic questions right; see:
http://www.nytimes.com/2014/07/12/opinion/the-trouble-with-brain-science.html?emc=edit_th_20140712&nl=todaysheadlines&nlid=389166&_r=0

NOTE: No dead salmon were involved.

-Mike Palij
New York University
m...@nyu.edu





[tips] Whose IQ Is It?

2014-07-06 Thread Mike Williams
McDermott, P. A., Watkins, M. W., & Rhoad, A. M. (2014). Whose IQ is
it? Assessor bias variance in high-stakes psychological assessment.
Psychological Assessment, 26(1), 207-214.

http://edpsychassociates.com/Papers/IQassessorBias(2014).pdf

This is a remarkable paper.  If you need to discuss the advantages and
disadvantages of IQ tests, here is a new angle.  From the point of view of
clinical assessment, my summary is that the findings show how poorly
designed the Wechsler scales are.  The paper exposes some big holes in
the scoring of the tests that could easily be remedied by design
changes: take the examiner out of the scoring.  Subtests in multiple-choice
format did not suffer from examiner defects in scoring.  I was surprised
by the magnitude of the error.  The examiners were probably just
expressing the scoring uncertainty that would apply to any group
of examiners.

The first sentence of the Discussion (quoted below) is a compelling
indictment of the profession and the Wechsler scales.  Maybe we can get
the blinders off and fix the tests.  It is unlikely we can fix the examiners.

"The degree of assessor bias variance conveyed by FSIQ and VCI scores
effectively vitiates the usefulness of those measures for differential
diagnosis and classification, particularly in the vicinity of the critical
cut points ordinarily applied for decision making."

Mike Williams


[tips] The Monty Hall Problem

2014-06-01 Thread Mike Williams
Hello all.  I wrote a standalone program to simulate the Monty Hall
problem.  The problem is a good example of the contrast between
intuitive and actual probability, and a great exercise for stats
classes when you cover probability.  If you have not heard of it, there
are a number of web sites that explain it.  There are also other
web-based simulators, but I could not find one that allowed the user to
set their own number of trials, so I wrote this one in LiveCode.


Mike Williams

Presentation:
http://www.learnpsychology.com/monty/Monty_Hall_Presentation.pptx

Mac Version:
http://www.learnpsychology.com/monty/Monty_Hall_Problem.zip

PC Version:
http://www.learnpsychology.com/monty/Monty_Hall_Problem.exe
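For anyone who wants a quick check without downloading the binaries, the same simulation is easy to sketch in a few lines of Python (my own illustrative version, not the LiveCode program above):

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game; return the proportion of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

Running it with a large number of trials shows the counterintuitive result directly: staying wins about a third of the time, switching about two-thirds.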



Re:[tips] Psychopaths' Brain

2014-04-19 Thread Mike Williams
I discussed these studies briefly with Kent when I attended one of his 
workshops sponsored by the MIND Institute.  The imaging work they do is 
first class.  The only reservation I have about psychopathy (and many 
areas like this in Psychiatry) is the determination of the independent 
variable.  They use an elaborate interview, self-report questionnaire 
and records review.  They are as thorough as possible.  Unfortunately, 
they still can't partial out psychopathy from correlated factors that 
may produce brain imaging differences, in particular, drug use and 
traumatic brain injury. The patterns they find are also essentially 
random areas roughly within the limbic system.  The pattern doesn't 
explain psychopathy.  If they examined a random sample of TBI patients, 
they would get similar results.


http://www.dailymail.co.uk/news/article-2608003/Study-Half-jailed-NYC-youths-brain-injury.html

Although many people on this list are disparaging of neuroimaging, this
is a good example of how it's done well and how complex it is.  In
contrast to virtually every other area of psychological research, the
investigator and the subjects are unable to manipulate the dependent
measure: what you get is what you see.  It's very hard for a subject who
knows the hypothesis to manipulate his grey-matter density; it's very
easy to endorse a lower level of depression if I don't want more ECT.


Mike Williams


On 4/19/14 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

The website for Wired has an interesting interview with the researcher
Kent Kiehl who has studied psychopaths for 20 years; the interview
is here:
http://www.wired.com/2014/04/psychopath-brains-kiehl/

The interview is partly a shill for Kiehl's new book "The Psychopath
Whisperer", which is geared to the general public (i.e., it is a
"money book": a book a scientist writes not for a limited
scientific or academic audience but to appeal to a broad audience,
and that is expected to make a fair amount of money -- most popular
science books are money books, though not all of them make a
lot of money).  Anyway, Kiehl has his own mobile MRI scanner
(there is a picture of him next to the trailer that contains the scanner),
so he's not doing too badly.





[tips] Construct Validity and IQ

2014-04-10 Thread Mike Williams
I think many of the responses to the IQ and g discussion have mistaken
the models of cognition and validity that underlie them.  I will
summarize by making two points.

The first is that a measurement device, such as an IQ test, either is or
is not valid and reliable; it does not embody multiple validities or
reliabilities.  Because there are several ways validity and reliability
are estimated, the tendency is to think that a measurement tool embodies
multiple validities and reliabilities.  There is only one validity, and
it is determined by 1) the logical integrity of the theory that defines
the construct and 2) the mapping of the construct onto a measurement
device.  The measurement device can be a multiple-choice test, a recall
test, a self-report questionnaire, etc.: whatever matches the theoretical
model of the construct.  Empirical studies are then conducted to test
hypotheses about the mapping of the theory to the device.  These studies
have been given different names that we are all familiar with, such as
criterion validity, concurrent validity, etc.  This suggests that there
are different validities for each test.  There is only one validity,
construct validity, and it is estimated using a variety of empirical
studies.  However, no empirical study can provide evidence for the
validity of a measurement device if the theoretical model defining the
construct is vague.

Since this is a teaching forum: the example I present in class is a set
of validity coefficients for a test I refer to as the Spelling subtest
of the Wide Range Achievement Test.  These include factor analyses and
prediction studies, using the test to predict grades and other criteria.
I then reveal the test items.  The first item is the first item from the
Arithmetic subtest of the WRAT, something like "7+5=?".  The correlations
I presented were all studies of the Arithmetic subtest.  They appear
very convincing, and they are generally in the same range as the
correlations of the Spelling subtest.  The point is that the constructs
were extremely different, yet the correlation patterns were
indistinguishable.  The only way I know this is because I have
theoretical models of spelling and arithmetic that are clear and
distinct.  Validity (i.e., construct validity) lives in the theoretical
understanding of the theorist; it is not conferred by empirical studies.
IQ and g are not theoretically clear.  Their validity is consequently
unknown, even if the device called an IQ test correlates with other
measurements in expected directions and magnitudes.

Once you get to the level of IQ battery subtests, many of these problems
become clear.  Just as an example, it is clear from item examination and
factor analyses that the Information, Vocabulary, Similarities and
Comprehension subtests of the WAIS measure a better-defined construct
called Semantic Knowledge.  If you are familiar with the subtests, just
consider this as a theoretical possibility.  For example, the
Comprehension item "Why do we pay taxes?" requires the semantic
knowledge associated with the word "taxes"; it is just another way of
asking for the definition of the word "taxes".  These subtests are
grouped by the test developer under a construct called Verbal
Intelligence.  All the Performance subtests group together because they
are timed tests.  They may also measure other constructs, but their
common variance is based on the subject solving problems quickly.  The
grouping is nevertheless given the name Performance Intelligence.  The
odd couple, Arithmetic and Digit Span, group together because they share
variance on sustained attention; Kaufman called the grouping Freedom
From Distractibility.

The general IQ score is just the average of all these scores compared to
the population average.  No factor analysis has ever supported averaging
all the subtests; that would require that semantic knowledge correlate
highly with sustained attention, etc.  The constructs of semantic
knowledge, sustained attention and timing are much better defined than
the constructs Verbal Intelligence, Performance Intelligence and Freedom
From Distractibility.  IQ has no clear definition as a measurement
construct; semantic knowledge does.  The WAIS should be called the
Semantic Knowledge, Sustained Attention and Timing Test.  There is no g
in the WAIS that I can discern.  The correlations of the WAIS IQ scores
with other tests, grades, etc. exist because semantic knowledge
correlates with these other measures.  Semantic knowledge exists, but
"intelligence" does not.  Inasmuch as semantic knowledge is acquired
through reading and education, the correlation of the WAIS with any
other measure is just the correlation of one measure of education with
other education measures (e.g., grades), or with other criteria that are
also influenced by education (e.g., occupational success, salary, etc.).


Somehow, psychologists were 

[tips] How Intelligent is IQ

2014-04-08 Thread Mike Williams
I couldn't agree more with Mike Palij's analysis.  IQ and g never
existed.  IQ is just an average score; g is just an artifact of factor
analysis.  Neither represents cognitive or brain processes.  They don't
explain anything, and they are hard to define.  Any vague construct has
unknown construct validity.  Check out Muriel Lezak's INS presidential
address ("IQ: R.I.P."):


http://www.ncbi.nlm.nih.gov/pubmed/3292568

Mike Williams

On 4/9/14 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Re: How Intelligent is IQ? - Neuroskeptic | DiscoverMagazine.com
From: Mike Palij <m...@nyu.edu>
Date: Tue, 8 Apr 2014 15:18:45 -0400
X-Message-Number: 8

John,

Create 10 random variables via SPSS or your favorite statistical
package.  The distributions don't matter (for simplicity's sake, they
can all be random normal variates, but for generality's sake use a
different probability distribution for each variable).  The correlation
matrix of these 10 variables will have rank = 10 (i.e., it cannot be
reduced to a smaller matrix because the rows and columns are
independent).  This is how modules are supposed to work.
But why then do we get correlations, especially among cognitive tests?
Chomsky might argue that for tests of language, the correlations are
artifacts of measurement or come from other sources, because the
language module is independent of all other cognitive modules.  And
Chomsky will argue until the cows come home that language is an
independent module, so take it up with him if you are feeling feisty. ;-)

Of course the real problem with g is that it is not a theory of mind but
a mathematical consequence of factor-analyzing correlation matrices.
Stop and consider: one theory of cognitive architecture for g is that
there is a single process that serves as the basis for thought.  This
breaks down as soon as we make a distinction like short-term memory
versus long-term memory, or declarative memory versus nondeclarative
memory, or [insert your own favorite distinction].  What is g supposed
to be besides a mathematical entity?

Or consider the following: let's call the performance of racing cars
"g", where g represents winning races.  All cars can be rank-ordered on
the basis of how many races they win, and g explains performance.  Cars
high in g win more races than cars low in g.  g is the general
ability of cars to win races.  How useful is that as a concept?
NOTE: assuming g in this case does not require one to know
anything about automotive engineering, just how well cars perform.

Now change cars to people and races to tests.  g is the general
ability of people to do well on tests.  How useful is that as a concept?
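The random-variables thought experiment above is easy to run.  A sketch in Python with NumPy (my own illustration, with hypothetical loadings): ten independent variables give a correlation matrix with no dominant eigenvalue, while ten variables that all share one common source give the big first factor that a factor analysis would then label g.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Ten mutually independent variables: the correlation matrix is (near)
# full rank and every eigenvalue hovers around 1 -- no general factor.
indep = rng.standard_normal((n, 10))
ev_indep = np.linalg.eigvalsh(np.corrcoef(indep, rowvar=False))

# Ten "tests" that all draw on one common source: the first eigenvalue
# dominates, and factor analysis would report a large general factor.
common = rng.standard_normal((n, 1))
tests = 0.7 * common + 0.7 * rng.standard_normal((n, 10))
ev_tests = np.linalg.eigvalsh(np.corrcoef(tests, rowvar=False))

print(ev_indep.max())   # about 1: nothing resembling g
print(ev_tests.max())   # far above 1: a "g"-like first factor
```

The mathematics delivers a first factor whenever the variables share variance, whatever the source of that sharing; it says nothing about whether a single cognitive process exists.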

-Mike Palij
New York University
m...@nyu.edu






Re:[tips] Help! Learning Styles are Eating the Brains of Our Young

2014-03-30 Thread Mike Williams
Fortunately, for every GS in the world there are 10 mentors or advisers
who want the best for you.  Whatever the intellectual talents of a GS,
their over-compensating attitude will leave them with no students who
want to work with them.


Mike Williams

On 3/30/14 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Re:Help! Learning Styles are Eating the Brains of Our Young
From: Mike Palij <m...@nyu.edu>
Date: Sat, 29 Mar 2014 09:02:23 -0400
X-Message-Number: 2

On Fri, 28 Mar 2014 21:11:27 -0700, Mike Williams wrote:

When responding to the research of students in high school
or undergrads, I go by a simple maxim: What would Mr. Rogers
say? They need to feel that the work is important and that
they are important. They can have the drivel shaken out
when they get to grad school.

I don't know if Mike Williams has lapsed into Louis Schmierism
(i.e., uncritical, unconditional positive regard that is usually safe
only for tenured professors and ill prepares students for
learning how to deal with professors and colleagues who will
ruthlessly exploit them in their quest for fame and fortune)
but let me provide a counterweight to the Mr. Rogers' position
by asking what would one of the most difficult professors I
ever had might do (and by difficult, I mean that in all possible
senses, from being intellectually opaque -- if you could not
understand him it was because you were too stupid -- to
emotionally distant -- the don't bother me with the reasons
why you can't make a deadline/get work done/need a social life/etc,
there are others who can do your job).

I'll refer to this professor as GS and ask the question
What would GS do?

A little more background:  when GS was hired for his professorship,
he initially taught a course at the undergraduate and graduate
level.  After the first semester, the complaints from the undergraduates
were so great that the university administration (who viewed GS
as a prized faculty member and a jewel in its crown) decided that
GS didn't have to teach undergraduate courses, only graduate
level courses (presumably he would cause the least amount of
damage with graduate students).  GS's level of productivity (often
through the efficient and effective use of graduate students) and
ability to get grant money secured his position in the university --
his teaching was secondary to all of this.  So, he would become
a power in the psychology department, in the university, and in
the field, ultimately making him a member of the National Academy
of Sciences.

So, what would GS do?  I imagine that he would argue that we
should not encourage people who cannot do good science or
are unable to distinguish between good science and bad science
from engaging in anything that can be construed as science
given the view that most of what passes for scientific research
is flawed, misleading, and a waste of precious resources.
With respect to high school students doing research projects,
I think that he might say that bad science has to be nipped in
the bud.  Perhaps the student would be better off doing something
more suited to their intellectual abilities, such as selling real
estate or becoming a politician.  This, however, is just speculation
on my part; I don't think GS would have cared what the student
did with their life -- there are far too many more important things
to be concerned about.

I'd like to point out that I have come across other faculty/researchers
who came from the mold that made GS:  some legitimately
brilliant but lacking in empathy and compassion, some who just
seemed good at denigrating and exploiting people even though
they never accomplished much in their own career.  I have stopped
being amazed that people like this seem to rise to high levels
of power in the discipline because that seems to be a primary
goal (though some can't get to a very high level because they
are B list or C list academic superstars, but an academic
superstar is still a superstar from the perspective of administrators).

In the situation of reviewing a student's work on learning styles,
I would try to point out the strengths and weaknesses of the
research, but would recommend that the student engage in
scholarship on the topic and be mindful of confirmation bias:
looking only for research that supports one's favorite hypothesis
or position.  Students need to come to their own realization of
the limitations of their understanding of the phenomenon --
like most of us, they probably won't really follow the advice
given to them.

But one has to look on the bright side of this situation:
the student could have attempted a replication of one of Bem's PSI
experiments and obtained a successful replication.  Who would
want to explain that retroactive causation doesn't really exist
and that the results are probably due to expectancy effects and
other problems?  What if the student's faculty sponsor actually
believes such stuff?  Good luck.

-Mike Palij
New York University
m...@nyu.edu

Re:[tips] Psychology and Politics

2014-03-01 Thread Mike Wiliams
Richard Redding wrote a nice paper on this issue for the American 
Psychologist:

http://www.ncbi.nlm.nih.gov/pubmed/11315246

Am Psychol. 2001 Mar;56(3):205-15.  Sociopolitical diversity in 
psychology: The case for pluralism.


Mike Williams

---
You are currently subscribed to tips as: arch...@jab.org.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5n=Tl=tipso=35051
or send a blank email to 
leave-35051-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu

Re:[tips] SAT and High School grade study

2014-02-19 Thread Mike Wiliams
Given the level of education debt in the country, it's obvious that 
colleges and universities are making far more money than test 
companies.  Has anyone ever calculated how much information is lost by 
converting a perfectly good test average into a letter?  Did I say 
letter?  We actually convert scores into letters?  Imagine if we 
converted IQ scores into letters.  Does anyone know the history of 
letter grades?  The error in grading as a measurement device contributes 
to the lower predictive power of grades.  If we scored courses better, I 
am willing to bet that grades would be completely redundant with the SAT 
and similar tests, and standardized testing would have no unique 
predictive power.
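As a rough illustration of that bet, here is a small simulation (entirely invented numbers, not data from any study) showing how collapsing a continuous course average into five letter-grade levels attenuates its correlation with an outcome:

```python
import random

# Illustrative simulation (hypothetical data): how much predictive power
# is lost when a continuous course average is collapsed into a letter grade?
random.seed(1)

def pearson(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def to_letter_points(avg):
    """Conventional cutoffs, mapped back onto 4-point grade values."""
    if avg >= 90: return 4.0
    if avg >= 80: return 3.0
    if avg >= 70: return 2.0
    if avg >= 60: return 1.0
    return 0.0

# A latent "ability", a noisy continuous course average, and a noisy outcome
ability = [random.gauss(0, 1) for _ in range(5000)]
course_avg = [75 + 10 * a + random.gauss(0, 5) for a in ability]
outcome = [a + random.gauss(0, 1) for a in ability]

letters = [to_letter_points(avg) for avg in course_avg]

r_continuous = pearson(course_avg, outcome)
r_letter = pearson(letters, outcome)
print(f"r using raw averages:  {r_continuous:.3f}")
print(f"r using letter grades: {r_letter:.3f}")  # noticeably lower
```

The exact sizes of the correlations depend on the invented noise levels, but the binned version is always weaker: the within-letter variance is simply thrown away.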


Mike Williams

On 2/20/14 12:00 AM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Assessment companies and the test prep companies that live symbiotically off of 
them make a great deal of money. The test score is held up and apart from the 
grades as being somehow more fair. So I think they invite the scrutiny.

I think any individual grade from the student's middle school or high school 
record might be less useful than an aggregate GPA. The 20-30 instructors 
together make an index with considerable predictive power. Not that they 
shouldn't be held accountable also. But it's unlikely that all 20 or so are 
grading too easy or too hard. And no individual instructor has the same 
financial investment in his or her product as the handful of institutions 
making coin from theirs.

That being said, SES, for both grades and test scores, is a problematic 
variable to tease out from merit/ability to succeed in higher education.

Nancy Melucci
Long Beach City College
Long Beach CA
-Original Message-
From: Mike Wiliams <jmicha5...@aol.com>
To: Teaching in the Psychological Sciences (TIPS) <tips@fsulist.frostburg.edu>
Sent: Tue, Feb 18, 2014 11:10 pm
Subject: Re:[tips] SAT and High School grade study

These studies of SAT and grades as predictors or criterion just 
highlight how grades are poorly designed as a measurement device.  What 
is their reliability and validity as measures of performance?  Somehow 
the College Board and SAT makers get the scrutiny that we don't apply to 
ourselves as grade makers.  The error goes both ways.



Mike Williams





Re:[tips] SAT and High School grade study

2014-02-18 Thread Mike Wiliams
These studies of SAT and grades as predictors or criterion just 
highlight how grades are poorly designed as a measurement device.  What 
is their reliability and validity as measures of performance?  Somehow 
the College Board and SAT makers get the scrutiny that we don't apply to 
ourselves as grade makers.  The error goes both ways.


Mike Williams

On 2/19/14 12:00 AM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Re: SAT and High School grade study





[tips] Re-Imagining HM's Brain

2014-01-26 Thread Mike Wiliams
I'm in the middle of the book-length summary of HM by Suzanne Corkin, 
entitled Permanent Present Tense.  It's a masterpiece.  It includes 
everything about HM, from his personal experiences to all the testing 
they did with him through the years.  I have learned a lot from it, even 
though I thought I knew everything.  I also attended a session by this 
group at a conference.  They plan to make all the images available at 
the site below.


Mike Williams

On 1/26/14 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: Re-Imagining HM's Brain
From: Mike Palij <m...@nyu.edu>
Date: Sun, 26 Jan 2014 22:12:19 -0500
X-Message-Number: 2

There are a couple of news articles on a project that is constructing a
3-dimensional representation of the brain of the famous Henry Molaison,
or HM.  The New Scientist has one such article; see:
http://www.newscientist.com/article/dn24944-neurosciences-most-famous-brain-is-reconstructed.html#.UuXK1vso7bQ
At the end of the above article, a source is given, but neither
the journal link (Nature Communications) nor the DOI seems to work.
For more information about the project, go to the Brain
Observatory website that contains more info on this and other
aspects of HM; see:
http://thebrainobservatory.org/hm

-Mike Palij
New York University
m...@nyu.edu





Re:[tips] For your friends who question tenure...

2014-01-23 Thread Mike Wiliams
IRBs are not completely independent.  Aside from the obvious 
dependencies resulting from the fact that the institution pays 
everyone's salary, the designated institutional official can override 
any IRB approval of research.  The IRB decision to disapprove a study 
cannot be overridden.


Mike Williams

On 1/23/14 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: Re: For your friends who question tenure...
From: Miguel Roig <miguelr...@comcast.net>
Date: Thu, 23 Jan 2014 13:59:30 + (UTC)
X-Message-Number: 1

Those of you interested in the IRB angle of Willingham's research may be interested 
in this 1-page document that was posted yesterday to the IRB forum:

http://research.unc.edu/files/2014/01/Willingham-media-clarification-1-21-2014.pdf

A line that kind of jumped out at me was this one: "The IRB at UNC operates with a very 
high degree of independence and authority, as it was intended."  'High degree of 
independence'?  Shouldn't that have been 'complete independence'?

Miguel





Re:[tips] For your friends who question tenure...

2014-01-20 Thread Mike Wiliams
Why don't we compel the professional sports organizations to start farm 
clubs for football and basketball, as we have for baseball?  It would 
resolve this hypocrisy, improve the sports, and probably help the economy 
of smaller cities.  I find it tragic that students at large universities 
with essentially professional sports teams cannot play on the school 
football and basketball teams.  Get the farm clubs out of the 
universities and these problems will be solved.


The IRB did the right thing by exempting her research.  Even if they 
had not exempted the study, and had reviewed it, it would likely have 
been approved.  Fortunately, one of the positive things about IRBs is 
that they are essentially independent of the administration.  If anyone 
in the administration tries to influence the deliberations, UNC could 
get into a lot more trouble than bad news about their athletes.


The administration is between a rock and a hard place.  If this was 
research about something other than the University itself, they could 
prohibit the research based on its poor quality.  Since the object of 
the research is the University, prohibiting the study makes it appear 
that the administration is suppressing the study.


Although tenure has a general bearing on the issues, I did not read that 
the investigator was tenured.  This case suggests that a free press 
(CNN) may keep the University fair and honest.


Mike Williams


On 1/20/14 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

For your friends who question tenure...


Subject: For your friends who question tenure...
From: Christopher Green <chri...@yorku.ca>
Date: Mon, 20 Jan 2014 08:44:25 -0500
X-Message-Number: 2

For those of you (probably not many on this list) who might have thought that tenure is 
unnecessary in this modern era to protect the integrity of research from the 
political motivations of a vindictive administration, consider this:

UNC IRB suddenly reverses its decision AFTER THE FACT on whether research that 
shows many of its athletes to be functionally illiterate requires oversight.

http://www.insidehighered.com/news/2014/01/20/u-north-carolina-shuts-down-whistle-blower-athletes

Sheesh!
Chris
---
Christopher D. Green
Department of Psychology
York University
Toronto, ON M3J 1P3
Canada





Re:[tips] Useful Hardware and Software for Computer Lab?

2013-12-12 Thread Mike Wiliams
The Memory Screening Test has an interesting history, and the main reason 
to describe it for the list is that it represents a test 
that cannot be normed.  I heard about the test indirectly from Charles 
Long while traveling back from a conference.  He went to a session in 
which someone was following a tumor patient using a memory recognition 
test: the examiner would present a card with 6 figures, point to 
one, and ask the patient to remember that that particular figure had been 
indicated.  After presenting 15 cards, the examiner then presents the 
first card and asks, "Which one of these did I point to before?"  The 
standard amnesic response is, "You never showed me these before."  The 
examiner then proceeds through the 15 cards asking the same question.  
All the patient has to do is point to, or name, the figure indicated on 
the first presentation.


The test is elegant, simple, and portable.  It was a perfect substitute 
for the testing I was doing while following trauma patients in the ICU 
and acute hospital setting.  I could administer the test every day as 
the patient recovered memory.  When the patient obtained 13/15 correct, 
I would administer a regular memory test and other neuropsych 
assessment.   The test was normed with the Memory Assessment Scales 
(MAS).  However, after giving it to approximately 800 subjects, I 
observed that there was no variance among normal subjects.  Maybe 10 
normal subjects made an error.  A test with no variance among normals 
cannot be normed.  
This was one major factor that encouraged me to rethink how norms are 
constructed and what they really mean.  This problem of low variance 
includes many tests used in neuropsychology.  In my experiences with 
patients at very low levels of ability, I have come to the realization 
that cognition is either "on" and working within normal limits, or 
essentially "off."  For example, the idea that memory increases 
monotonically with the memory score is a fallacy.  There may be major 
characterizations of memory disorder that might correspond to levels of 
ability but the idea that it increases and decreases monotonically with 
a memory score is just incorrect.  I also think that the scaling and 
construction of conventional norms reifies small differences reinforced 
by a bell curve model of ability.  The amount of variance in the raw 
score describing normal is much smaller than we think.  The raw score 
levels corresponding to norms are not reported because test publishers 
wish to protect their norms.  They consider them proprietary.  The 
scaling of ability using standard scores makes small differences in 
ability appear large.  Compare your memory to that of Commander Data on 
Star Trek and you will have an idea of what a large difference might 
be.  When it comes to memory, a normal level is essentially impaired.  
If one of the drug companies invented a medication that improved memory 
by a standard deviation, I would not be impressed.  iPads, iPhones, and 
continuous internet access have extended our memory ability far more 
than that.
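A tiny numerical sketch of the norming problem (hypothetical raw scores, not MAS data): a standard score divides the deviation from the normative mean by the normative SD, so when normals produce almost no variance, a single raw-score point explodes into a huge standardized difference, and with literally zero variance the norm is undefined:

```python
import statistics

# Hypothetical normative sample on a 15-item screening test:
# nine perfect scores and one subject who made a single error.
normal_sample = [15, 15, 15, 15, 15, 15, 15, 15, 14, 15]

mean = statistics.mean(normal_sample)   # 14.9
sd = statistics.pstdev(normal_sample)   # 0.3
print(f"mean = {mean:.2f}, sd = {sd:.2f}")

def z_score(raw, mean, sd):
    """Standard score; undefined when normals show no variance at all."""
    if sd == 0:
        raise ValueError("no variance among normals: norms are undefined")
    return (raw - mean) / sd

# With even one error the SD is tiny, so a two-point raw deficit becomes
# an enormous (and misleading) standardized difference:
print(f"z for a raw score of 13: {z_score(13, mean, sd):.1f}")
```

If the tenth subject had also scored 15, `sd` would be exactly zero and no z-score could be computed at all, which is the sense in which such a test "cannot be normed."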


I made iPad versions of the screening test and the Hahnemann Orientation 
and Memory Examination (HOME).  These were portable tests I developed to 
track trauma patients.  The data I collected were reported in Williams, 
J. M. (1990). The neuropsychological assessment of traumatic brain 
injury in the intensive care and acute care environment. In C. J. Long 
& L. Ross (Eds.), Traumatic Brain Injury. New York: Plenum.


Mike Williams

P.S. I also sell a beautiful Naming Test that also cannot be normed. 
Check Brainmetric.com.



On 12/11/13 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: Re: Useful Hardware and Software for Computer Lab?
From: Michael Britt <mich...@thepsychfiles.com>
Date: Wed, 11 Dec 2013 06:08:53 -0500
X-Message-Number: 3

Mike,

I'm curious about your Memory Screening Test.  The description in iTunes 
mentions some norming work that has been done on the test.  Do you have any 
published research on it?

Michael

Michael A. Britt, Ph.D.
mich...@thepsychfiles.com
http://www.ThePsychFiles.com
Twitter: @mbritt





[tips] Useful Hardware and Software for Computer Lab?

2013-12-10 Thread Mike Wiliams
This is a shameless plug for software I designed myself.  The programs 
are all sold through brainmetric.com.  Although they were originally 
designed for research studies, I have always had in the back of my head 
that many of them would be useful in teaching labs.  I also developed 
some programs for teaching statistics.


If you have E-Prime, Presentation, or similar systems installed, note 
that many researchers have developed procedures in these systems that 
could be used in class.


Mike Williams
Drexel University



Re:[tips] Higgs Bosons and the tenure system

2013-12-07 Thread Mike Wiliams
This past June, Marcus Raichle was awarded the Talairach award by the 
Organization for Human Brain Mapping. The award was given for his 
discovery and examination of the Resting State. Now, we take resting 
state scans as a matter of routine, just in case we might think of a 
study in the future.  The person reviewing Dr. Raichle's background and 
presenting the award actually commented that Dr. Raichle would 
never make it as a PI today, since he published only 6 papers a year.

I really have no proposed solution to this problem. People in academic 
life are wasting so much time publishing ruminative and redundant work. 
It seems that all the great discoveries come from outside this system. 
Somehow the reward system has to shift from counting publications to 
counting discoveries.

Mike Williams

On 12/7/13 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:
 TIPS Digest for Saturday, December 07, 2013.

 1. Random Thought:  Tis The Season To Be Grateful, II
 2. Higgs Bosons and the tenure system
 3. Why is this Funny?
 4. Re: Why is this Funny?
 5. Re: Higgs Bosons and the tenure system
 6. Re: Why is this Funny?
 7. Re: Higgs Bosons [and Barbara McClintock]

 --

 Subject: Random Thought:  Tis The Season To Be Grateful, II
 From: Louis Eugene Schmier <lschm...@valdosta.edu>
 Date: Sat, 7 Dec 2013 10:54:45 +
 X-Message-Number: 1

Gifts have been bought, shipped, and some given.  Getting ready for 
 some grandmunchkin spoiling.  Tis now the season of being consciously 
 grateful.   So, I especially was thinking about my two sons.  I deeply admire 
 them, deeply.  My Silicon Valley Michael always wants--needs--the challenge 
 and excitement of something new; my artist-with-food Robby is always looking 
 for a culinary new (oh, you should taste his pickles, bacon, and lox).  
 Neither are status quo people.  Old hat doesn't fit them.  They're always 
 deftly on the move in their professions and lives.  A risky let's see if 
 and an unsuccessful oops are not their enemies; fearfully stymying am 
 not, paralyzing can't and atrophying won't are.  They experiment; 
 they're not one-and-out people.  They're not embarrassed or diminished by 
 doesn't work.   They know behind every one of their accomplishments there 
 were a host of attempts.  Perseverance and practice are their names, 
 commitment is their game.  They've learned through trial and error.  
 They don't stick with one chiseled 
 in stone habit of doing one thing one way, over and over again.  They adapt, 
 adopt, invent, create, generate, discard, modify, adjust.  They venture out 
 into new worlds to pass milestones rather than being weighed down and slowed 
 by millstones..  They're really humble, knowing that too much pride can rob 
 them of their confident I wonder if.  They know so much about what they 
 don't know.  They know that mastering their craft has taken a lot of time and 
 pain, but they've learned to learn to put in the time in order to convert 
 that pain into gain.
   I am truly thankful to have them as my sons and humbled to have them 
 call me dad.  Love them both.  I've decided to keep them.  And, when I see 
 them in the coming weeks, I will hug them, kiss them, and tell them, thank 
 you for becoming you--and spoil their kids rotten.
   Susie and I would like to wish one and all a merry, happy, and all that.

 Make it a good day

 -Louis-


 Louis Schmier 
 http://www.therandomthoughts.edublogs.org
 203 E. Brookwood Pl http://www.therandomthoughts.com
 Valdosta, Ga 31602
 (C)  229-630-0821
 "If you want to climb mountains, don't practice on mole hills."

 --

 Subject: Higgs Bosons and the tenure system
 From: Lilienfeld, Scott O <slil...@emory.edu>
 Date: Sat, 7 Dec 2013 19:42:20 +
 X-Message-Number: 2

 Hi All TIPSTERs: I thought that some of you might find this piece worthy of 
 discussion and debate:

 http://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system

 ...Scott

 Scott O. Lilienfeld, Ph.D.
 President, Society for the Scientific Study of Psychopathy
 Professor, Department of Psychology
 Emory University
 Atlanta, Georgia 30322













 


Re:[tips] Psychology Apps

2013-09-11 Thread Mike Wiliams

Check out the software and mobile apps at Brainmetric.com

Mike Williams

 --

Subject: Good Psychology Mobile Apps
From: Michael Britt <mich...@thepsychfiles.com>
Date: Wed, 11 Sep 2013 09:17:10 -0400
X-Message-Number: 1

I'm preparing a video episode for the podcast in which I'll be showing some of what I 
consider to be the better psychology-related mobile apps.  As you can imagine, there is a 
lot of pseudo-science in the psychology apps available for mobile devices.  I 
have a draft of my list which anyone is free to take a look at - feedback welcome - here:

http://list.ly/draft/3e8618ef9ff62f89e12b17545049c7db






Re:[tips] Satel Lilienfeld book (Re: Watch A Legend In Action)

2013-06-25 Thread Mike Wiliams


I just returned from the annual meeting of the Organization for Human 
Brain Mapping 
(http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=1).  No 
one presented any of the extreme misinterpretations that Satel & 
Lilienfeld bring up in the book.  The hyperbole they criticize is 
derived completely from the world of "press release science" that we 
live in.  I attribute all this to the inability of the press to 
criticize itself.  They create the misinterpretations in trying to 
create and push a story, and then criticize the investigators when the 
interpretations get hyped.


Brain activations in fMRI are facts.  The interpretation of these 
facts is subject to all the strengths and weaknesses of human reasoning.


The undercurrent of the Satel and Lilienfeld book is a push-back against 
the "blame the brain" explanations that keep getting overhyped and 
rankle all of us.  Neuroimaging is being used to support such 
explanations, but I have a feeling that fMRI studies will produce 
findings that will cause people to qualify the reductionist, genetic 
models and finally understand how the systems work.  After all, fMRI is 
used to image environmental influences.  There is no activation pattern 
to interpret (except for resting state) unless the investigator presents 
a stimulus to the subject.


Since it is functional, fMRI has embedded within it the capacity for 
pushing our understanding of how the brain mediates complex function.  
It keeps getting better.  Five years ago, 1.5 and 3 T scanners could 
image the hippocampus only as a whole.  At this meeting there were 
presentations on high-resolution imaging of parts of the hippocampus.  
DTI imaging and spectroscopy keep getting better.  Data analysis keeps 
getting better.  The Allen Institute was present, and integrating gene 
expression with imaging is just beginning.  I saw high-resolution 
structural scans of leukemic children from St. Jude hospital 
illustrating white matter lesions that could not be imaged when I 
worked there in the 1980s.  The clinical applications of fMRI, DTI, and 
the new methods in neurological diseases are just emerging.  There are 
many applications of imaging in clinical neurology and neuropsychology 
that have nothing to do with the "God center," psychopathology, or 
abstract psychological functions.  Five or ten years ago, I thought 
the technology had run its course.  It is just beginning.


It's the media hype that has to run its course.  The media is 
presenting a bizarre picture of the science.


Mike Williams

On 6/25/13 2:00 AM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Satel & Lilienfeld book (Re: Watch A Legend In Action)







[tips] Subject: Re: Support Our Troops bumper stickers

2013-05-29 Thread Mike Wiliams
I meant powerless in the sense that they could not influence how the 
media and special interests depicted them.  Someone had to be blamed and 
they were easy targets.

I regret you had to choose one way or the other.  I was in the last 
group to register for the draft and I honestly don't know what I would 
have done.  I was fortunate to be just a year or two younger than those 
who had to make the choice.

I did keep my draft card, although I don't know if many people today 
realize its significance as a souvenir of those times.  We ceremoniously 
burned two things in the 1960s, bras and draft cards, sometimes in the 
same fire.

Mike Williams

On 5/30/13 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Mike-

They weren't powerless. They could have refused to serve. I did.

-Don.

- Original Message -
From: Mike Wiliams <jmicha5...@aol.com>
To: Teaching in the Psychological Sciences (TIPS) <tips@fsulist.frostburg.edu>
Sent: Wednesday, May 29, 2013 7:45:30 AM
Subject: Re:[tips] Support Our Troops bumper stickers

These expressions to support the troops and honor their service are part
of a general reaction to the way the troops were treated when they came
back from the Vietnam War.  I think it also reflects a general
appreciation for the service of WWII veterans who have received a lot of
attention in movies and documentaries.  The Vietnam era veterans were
blamed for losing the war by conservatives and blamed for atrocities by
liberals.  They were powerless scapegoats:

http://abcnews.go.com/blogs/politics/2012/05/obama-recalls-vietnam-vets-treatment-as-national-shame-vows-it-will-not-happen-again/

We can still argue about Iraq.  It should have nothing to do with
supporting the troops.

Mike Williams





Re:[tips] Support Our Troops bumper stickers

2013-05-28 Thread Mike Wiliams
These expressions to support the troops and honor their service are part 
of a general reaction to the way the troops were treated when they came 
back from the Vietnam War.  I think it also reflects a general 
appreciation for the service of WWII veterans who have received a lot of 
attention in movies and documentaries.  The Vietnam era veterans were 
blamed for losing the war by conservatives and blamed for atrocities by 
liberals.  They were powerless scapegoats:


http://abcnews.go.com/blogs/politics/2012/05/obama-recalls-vietnam-vets-treatment-as-national-shame-vows-it-will-not-happen-again/

We can still argue about Iraq.  It should have nothing to do with 
supporting the troops.


Mike Williams

On 5/29/13 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

TIPS Digest for Tuesday, May 28, 2013.

1. Re: Support Our Troops bumper stickers

--

Subject: Re: Support Our Troops bumper stickers
From: Joan Warmboldjwarm...@oakton.edu
Date: Tue, 28 May 2013 17:46:01 -0500
X-Message-Number: 1

These ongoing ceremonies to support and honor our troops have effectively
wiped out any further contemplation or discussion of our original
ill-conceived rationale for invading Iraq.  As you state, Beth, to
question our motives for going to war has been effectively distorted
into a critique of the motives and honor of our soldiers.

Personally, I feel this is an instructive example of the use of
indoctrination 'in the free world' that could be an interesting focus of
discussion in our classes.

Joan
jwarm...@oakton.edu



On 5/25/2013 9:18 AM, Beth Benoit wrote:

Giving thanks to our military on Memorial Day reminds me of my
aversion to the yellow Support Our Troops bumper stickers in the
U.S.Who DOESN'T support the soldiers themselves?  If you are not
in favor of invading a country with warlike intentions, does that also
mean you don't support our troops?  We may not all support the
momentum that has sent our troops wherever they are ordered to go, but
is there anyone who doesn't want them all to come home safely?

Years ago, when the war centered on looking for weapons of mass
destruction, I had a bumper sticker that said: "Support Our Troops.
Bring Them Home."  Someone thoughtfully scrawled TRAITOR across it
with a Magic Marker.

Sadly, those yellow ribbon bumper stickers seem to have become icons
that just indicate that the driver is a politically conservative person.

Beth Benoit
Granite State College
Plymouth State University
New Hampshire









Re:[tips] The evidence based bandwagon

2013-04-13 Thread Mike Wiliams
"Evidence-based practice" and "standard of care" are important concepts 
in medicine that are strongly related to Current Procedural Terminology 
(CPT) code-based reimbursement for health care services.  If you do not 
provide an evidence-based standard of care, you will not get reimbursed 
for services.  This is basically a good idea, because physicians, 
dentists, and others were providing all kinds of essentially ineffective, 
unusual therapies in order to maximize reimbursement.  Restricting 
reimbursement to effective treatments will obviously reduce health 
care costs.  Clinical psychologists have joined the chorus, and a number 
of groups are working on specifying the standard of care for 
psychological disorders.  I was peripherally involved in this regarding 
neuropsych assessment and the CPT codes used for clinical fMRI.


I would like to make two points about this.  The first is that cognitive 
behavioral therapists are using this movement to further criticize 
psychoanalysis and other therapy schools that are not their own.  The 
second is that the standard for empirical validity in medical treatment 
is the randomized, double-blind clinical trial.  There has never been 
one of these for any psychological treatment, including 
cognitive-behavioral therapies and psychotropic medications.  By not 
examining the blinding, the drug companies have been very crafty in 
convincing the FDA that the studies were actually blind, when a simple 
survey of the subjects would probably reveal that they knew whether they 
were in the treatment condition or the placebo condition.  The onset of 
dry mouth and constipation are sure signs you are not getting the placebo.


Mike Williams




RE:[tips] Childlessness hits men the hardest (n = 16)

2013-04-08 Thread Mike Wiliams
I got the numbers from another Tipster and just plugged them in.  I 
think they are correct now.


Mike

On 4/8/13 2:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

TIPS Digest for Sunday, April 07, 2013.

1. RE:Childlessness hits men the hardest (n = 16)
2. RE:Childlessness hits men the hardest (n = 16)

--

Subject: RE:Childlessness hits men the hardest (n = 16)
From: Mike Wiliams jmicha5...@aol.com
Date: Sun, 07 Apr 2013 01:09:25 -0400
X-Message-Number: 1

I updated the spreadsheet.  It now includes an analysis for both samples.

Mike Williams

On 4/7/13 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest
wrote:

  Subject: RE:Childlessness hits men the hardest (n = 16)
  From: David Epstein da...@neverdave.com
  Date: Sat, 6 Apr 2013 12:58:25 -0400 (EDT)
  X-Message-Number: 5

  On Sat, 6 Apr 2013, Mike Wiliams went:


  I did the spreadsheet quickly and I hope
  there are no errors.
  
  http://www.learnpsychology.com/Men_Woman_ChildDepression_Example.xlsx

  It's a little less bad (less bad than the p = .57 on your spreadsheet)
  when you use the author's denominator, which is the subset of people
  who had actually wanted to have children: 67 out of 108.  The numbers
  are then:

                 male   female   total
  depression        8       14      22
  no depression     8       37      45
  total            16       51      67

  And the p value is .09, two-tailed.

  But that's by far the biggest difference of the half dozen
  differences he reports.

  --David Epstein
   da...@neverdave.com
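For anyone who wants to check the quoted figure without opening the spreadsheet, here is a quick sketch (assuming NumPy and SciPy are installed; the counts are the ones tabled above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Depression by sex among the 67 (of 108) respondents who had wanted
# children, as tabled in the quoted message.
table = np.array([[8, 14],    # depressed:     male, female
                  [8, 37]])   # not depressed: male, female

# The uncorrected (Pearson) chi-square reproduces the quoted two-tailed p.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p comes out near .09
```

With the Yates continuity correction enabled (SciPy's default for 2x2 tables), the p value comes out larger, which may explain small discrepancies with other software.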





RE:[tips] Childlessness hits men the hardest (n = 16)

2013-04-06 Thread Mike Wiliams

I updated the spreadsheet.  It now includes an analysis for both samples.

Mike Williams

On 4/7/13 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: RE:Childlessness hits men the hardest (n = 16)
From: David Epstein da...@neverdave.com
Date: Sat, 6 Apr 2013 12:58:25 -0400 (EDT)
X-Message-Number: 5

On Sat, 6 Apr 2013, Mike Wiliams went:


  I did the spreadsheet quickly and I hope
  there are no errors.

  http://www.learnpsychology.com/Men_Woman_ChildDepression_Example.xlsx

It's a little less bad (less bad than the p = .57 on your spreadsheet)
when you use the author's denominator, which is the subset of people
who had actually wanted to have children: 67 out of 108.  The numbers
are then:

               male   female   total
depression        8       14      22
no depression     8       37      45
total            16       51      67

And the p value is .09, two-tailed.

But that's by far the biggest difference of the half dozen
differences he reports.

--David Epstein
da...@neverdave.com





RE:[tips] Childlessness hits men the hardest (n = 16)

2013-04-05 Thread Mike Wiliams
I used this example in class today because part of my introduction to 
stats involves examples of why statistics makes you think better.  It 
went reasonably well.  I quickly made up a 2x2 chi-square example using 
the depression frequencies reported in the article.  I also made a link 
to the article in the Blackboard site for the course.  Although we 
haven't covered chi-square yet, I think the example is simple enough 
that students can grasp the logic.  Here is a link to the spreadsheet.  
I like Excel for examples because I can change data values and 
immediately show what happens to the statistics when the change is 
made.  I did the spreadsheet quickly and I hope there are no errors.


http://www.learnpsychology.com/Men_Woman_ChildDepression_Example.xlsx

Mike Williams



Re:[tips] Nothing Personal: The questionable Myers-Briggs test | Science | guardian.co.uk

2013-03-22 Thread Mike Wiliams
This is another example of using a false measure that appears objective 
in order to enforce variance among people that does not exist.  In 
business and many other areas, the problem is selecting one person for a 
job from 100 applicants (or more) who are largely the same.  We feel 
compelled to use measures that appear to discriminate one applicant from 
another.  We want the selection to look objective.  Since the screening 
process has usually filtered out anyone objectively low on any 
reasonable selection variable, the remaining pool contains people who 
are essentially the same.  The only reasonable and fair solution to this 
is to use a lottery.  There is a lottery for admission to Philadelphia 
charter kindergartens that everyone endorses as fair.  Although no one 
provided a rationale for this lottery, I think all involved understand 
that there would be no fair method to select the children based on tests 
or some other factor.


We need to use lotteries more often in these situations.  We could even 
make a standard for constrained variance that would kick in a lottery 
when the screened group becomes too similar.  That would be fair and 
applicants would know why they were or were not selected.


Mike Williams
Drexel University

On 3/23/13 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Nothing Personal: The questionable Myers-Briggs test | Science | 
guardian.co.uk
From: Christopher Green chri...@yorku.ca
Date: Fri, 22 Mar 2013 15:15:14 -0400
X-Message-Number: 1

Here's a good article about the problems of the Myers-Briggs test that might be 
useful to pass along to students who ask about it.
http://www.guardian.co.uk/science/brain-flapping/2013/mar/19/myers-briggs-test-unscientific?CMP=twt_fd

Chris
---
Christopher D. Green
Department of Psychology
York University
Toronto, ON M3J 1P3
Canada

chri...@yorku.ca
http://www.yorku.ca/christo/





Re:[tips] AAUP recommends more researcher autonomy in IRB reform | Inside Higher Ed

2013-03-07 Thread Mike Wiliams
If you would like to get some control of the IRB process, compel your 
institutions to use the Huron IRB management system:


http://www.huronconsultinggroup.com/Insights/Webinar/Education/IRB_Automation_Best_Practices.aspx

We just went to this system and I estimated that I saved 30% of the 
usual aggravation and time.  The system has a set of checklists and 
worksheets that clarify the actions of both the investigators and the 
IRB. They include every detail that should be included to satisfy the 
regs, and no more.  I am also on the IRB and the Huron system constrains 
the behavior of the research department and IRB such that feedback to 
the investigators includes only information consistent with the 
regulations and no more.  It also saves the IRB considerable time.  
Since the checklists are so clear and have been refined as a result of 
practical use by hundreds of IRBs, we discuss and argue/ruminate very 
little.  The general result is efficiency that improves the relationship 
of investigators to the IRB and probably saves collectively thousands of 
hours of time. The system is worth much more than what it costs. Many of 
the unfortunate examples mentioned on this list would not have happened 
using the Huron system.  I should also state that I have no relationship 
to Huron.


Mike Williams



[tips] AAUP recommends more researcher autonomy in IRB reform | Inside Higher Ed

2013-03-06 Thread Mike Wiliams
This will have no impact since investigators are not represented in 
AAHRPP or PRIMR.  These organizations are sustained by more and more 
regulations.  They ignore the interests of investigators and essentially 
treat investigators like they are all intent on harming subjects.  It is 
a real them-vs-us attitude, and there is essentially no check on their 
behavior.  If there is one thing the Republicans got right, it is that 
when you create regulations, you create an industry and special 
interests that feed off supporting them.


http://www.primr.org/

http://www.aahrpp.org/

Mike Williams

On 3/7/13 12:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: AAUP recommends more researcher autonomy in IRB reform | Inside Higher 
Ed
From: Christopher Green chri...@yorku.ca
Date: Wed, 6 Mar 2013 09:02:04 -0500
X-Message-Number: 1

A very interesting development in the history of the IRB.
http://www.insidehighered.com/news/2013/03/06/aaup-recommends-more-researcher-autonomy-irb-reform

Chris
---
Christopher D. Green
Department of Psychology
York University
Toronto, ON M3J 1P3
Canada

chri...@yorku.ca
http://www.yorku.ca/christo/





Re:[tips] The Man Who Mistook...Though Not Perfect Still Quite Amazing

2013-02-13 Thread Mike Wiliams
I'm not sure what you mean by not perfect.  As far as I know, all the 
case reports are valid.  You have to keep in mind that Sacks is 
following the tradition of Luria's romantic mode of science. 
(http://www.youtube.com/watch?v=eGqLfP-LtgE).  The cases are not merely 
objective clinical descriptions.  They expand on the humanistic 
aspects of having these disorders.  They are his best descriptions of 
the life of the patient with the remaining abilities rather than a 
disability.  His book The Island of the Colorblind includes wonderful 
descriptions of life without color.  The PBS special is a great video 
for classes in Sensation & Perception:


http://www.youtube.com/watch?v=CM06G26X-rQ

Mike Williams

On 2/13/13 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: The Man Who Mistook . . .Though Not Perfect Still Quite Amazing
From: Joan Warmbold jwarm...@oakton.edu
Date: Wed, 13 Feb 2013 19:04:58 -0600 (CST)
X-Message-Number: 2

This is not a perfect book as I'm sure others who recommended it would
agree.  But it has so very much to offer.  How many other books focused on
brain function provide a glimpse into the lives of people with such an
extraordinary variety of brain malfunction and the consequences thereof?





[tips] TED talks and lectures

2013-01-27 Thread Mike Wiliams
This is a style of course construction that I follow now that still 
includes some form of lectures:


1) Computer-mediation: Everything is stored in a course within 
Blackboard.  I have written some programs using Livecode that provide 
for demonstrations and interactive lab exercises.  (see Windows version: 
http://www.learnpsychology.com/courses/spcourse/Limulus.exe; Mac 
version: http://www.learnpsychology.com/courses/spcourse/Limulus.zip).  
There are many sources for commercial software that do many of the 
things I programmed.


2) All the course content is divided into sections that students can 
reasonably study in one week.  This includes conventional readings from 
the textbook and/or other sources, all my PowerPoint lectures recorded 
by me, movies, and the interactive software that pertains to the unit.  
The advantage of recording the lectures is that the student does not 
have to take it all in within 50 minutes.  They can pause, rewind, etc.  
The lecture pace is under their control.


3) The units also contain a test on the material that is taken on-line 
using test construction and administration tools offered by Blackboard.  
All the test items I used in the past for midterms and finals were 
incorporated into these unit tests.  I was also able to add many more 
questions since the tests were not packed into a single testing 
session.  I also don't waste class time giving tests.  The students can 
take the tests when they finish studying a unit at any point in the 
course. They must have them all done by the end of the course.


4) The classes are now filled with questions, discussion, interactive 
lab exercises, and a short presentation like a TED talk that most often 
involves clinical cases; sometimes I present movie segments from the 
units as a way to start discussions.  Students are also given a short, 
on-line quiz on the interactive exercises by the end of the class.  They 
can't take the quiz if they don't come to class, which serves as some 
incentive to attend.


During the exercise on Limulus, I show a short video on horseshoe 
crabs.  Their great breeding event at the full moon is nearby in 
Delaware.  Did you know that their blood is blue and based on copper, 
not iron (kind of like Vulcans)?  They were also used to study vision 
etc.  This is how Lateral Inhibition works in a horseshoe crab - then we 
go into the short Limulus interactive program.  The quiz includes 
relatively easy items on Lateral Inhibition and the location of Cape 
Henlopen.  I usually take the quiz with them and give them the answer if 
someone doesn't shout it out before me.
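The classroom demonstration can be reduced to a few lines; this is only an illustrative sketch of the lateral-inhibition idea (the stimulus values and inhibitory weight are made up), not the actual Limulus program:

```python
import numpy as np

# Each "receptor" responds to its input minus a fraction of its two
# neighbors' inputs, the basic lateral-inhibition arrangement.  A step
# stimulus comes out with a dip before the edge and a peak after it,
# exaggerating the contrast (a Mach-band-like effect).
stimulus = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
inhibition = 0.2                      # assumed inhibitory weight
padded = np.pad(stimulus, 1, mode="edge")
response = stimulus - inhibition * (padded[:-2] + padded[2:])
print(response)                       # dip at index 3, peak at index 4
```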


5) The students also have the Blackboard discussion forums and email.  I 
find that they do most of the discussion in class.


In short, my lectures are treated like the textbook readings.  Classes 
are now interactive sessions.  The course is also more self-paced.  
Since the tests are open-book (even this term is an anachronism - with 
Google, life is open book), most no longer cram for exams.  Since there 
is never enough time to look up every answer, they must still study to 
get a good grade.  I can also ask questions that are more abstract and 
require more synthesis.  After they take one or two of the tests, they 
know that they cannot breeze through if they want a good grade.


I think one of the most important elements is that the students have 
more control over the pace of the course.  The pace isn't dictated to 
them.  They probably perceive the class sessions as more interesting 
since they are doing something.  Finally, I don't have the burden of 
lecturing more than once.  All my effort goes into inventing class 
activities that get students interested in the material.  After you do all 
this once, it becomes far easier to conduct future classes.


Mike Williams

P.S. This course is Sensation & Perception.

On 1/27/13 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: re: Aren't TED talks just lectures?
From: Annette Taylor tay...@sandiego.edu
Date: Sun, 27 Jan 2013 16:44:46 +
X-Message-Number: 2

I just don't get the brou-ha-ha over lectures.

I lecture.

I make no excuses for that.

There is a lot that can be done to make lectures relatively interactive. It's 
not rocket science: the pause for students to think about a question asked 
during a lecture, and then providing a CORRECT answer; the pause for students 
to formulate an answer, maybe a little pair-and-share, and then solicitation of 
the responses. I use lots of embedded demos, especially in cognitive. It does 
not have to be 100% delivery, but for most of my classes I'd say it's about 80% 
delivery with short film clips, demos and embedded questions.

Let's face it, discovery learning does not work especially well. Students are 
as likely, if not more likely, to hit upon a wrong answer and then convince 
their classmates of the wrong information. Go back and check the archives for 
many of Hake's postings for evidence to 

Re:[tips] my crummy knowledge of stats

2013-01-16 Thread Mike Wiliams


You can use a conventional paired t test.  Although you have dichotomous 
scores, that does not mean they are categorical.  Correct/incorrect is a 
ratio scale with a one-unit interval.


Green/Red, Accountant/Psychologist are the type of categorical 
dichotomies that bring in the nonparametric procedures like Chi-square 
or ranking tests.


Just calculate a mean difference and variance for each item and analyze 
them the usual way.  You might also try some of the test reliability 
stats that are now in SPSS, such as coefficient alpha.  Alpha is a 
general index of how well the items intercorrelate, or hang together.
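A minimal sketch of this suggestion, assuming NumPy and SciPy; the 0/1 scores below are invented purely for illustration, not anyone's actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores (0 = incorrect, 1 = correct) on one item
# for 20 students.
pre  = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0])
post = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

# A paired t test on the item, treating the 0/1 difference scores like
# any other paired measurements.
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")

# Coefficient alpha for a whole test (rows = students, columns = items).
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))
```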


Mike Williams


- Original Message -

From: Annette Taylor tay...@sandiego.edu
To: Teaching in the Psychological Sciences (TIPS) tips@fsulist.frostburg.edu
Sent: Tuesday, January 15, 2013 6:21:42 PM
Subject: [tips] my crummy knowledge of stats

I know this is a basic question but here goes:

I have categorical data, 0,1 which stands for incorrect (0) or correct (1) on a 
test item.

I have 25 items and I have a pretest and a posttest and I want to know on which 
items students improved significantly, and not just by chance. Just eyeballing 
the data I can tell that there are some on which they improved quite a bit, some 
not at all and some are someplace in the middle and I can't make a guess at 
all. That is why we have statistics. Yeah!  hbleh.

As far as I know, the best thing to do is a chi-square test for each of 25 
items; but of course that will mean that with a .05 sig level I will have at 
least one false positive, maybe more, but most assuredly at least one. This 
seems to be a risk. At any rate I can use SPSS, and the crosstabs command allows 
for calculation of the chi-square.

I know that when I do planned comparisons with multiple t-tests, I can do a 
Simes' correction in which I can rank order my final, obtained alphas, and 
adjust for the number of comparisons and reject from the point from which the 
obtained alpha failed to exceed the corrected-for-number-of-comps alpha. But as 
far as I know, I cannot do that with 25 chi-square tests. There is probably 
some reason why I cannot do that, which relates to the reason why I 
cannot do 25 t-tests in this situation with categorical data.
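The ranking-and-adjusting idea described above can be sketched in a few lines; this shows one common step-up variant (the Benjamini-Hochberg procedure, which builds on Simes's inequality), with invented p values:

```python
import numpy as np

def step_up_adjust(pvals, alpha=0.05):
    """Rank the obtained p values and reject up to the largest rank k
    whose p value is at or below (k/m) * alpha, the Simes criterion
    used by the Benjamini-Hochberg procedure."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    passed = np.nonzero(p[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size:
        reject[order[:passed.max() + 1]] = True
    return reject

# Five hypothetical test p values: the two smallest survive the step-up.
print(step_up_adjust([0.001, 0.008, 0.04, 0.30, 0.70]))
```

Nothing in the criterion depends on where the p values came from, so it applies to chi-square p values as readily as to t-test p values.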

Is there a better way to answer my research question? I need a major professor! 
Oh wait, that's me... drat! I need to hire a statistician. Oh wait, I'd need $$ 
for that and I don't have any. So I hope tipsters can stand in as a 
quasi-hired-statistician and help me out.

Oh, I get the digest. I don't mind waiting until tomorrow or the next day for a 
response, but a backchannel is fine. tay...@sandiego.edu

I will be at APS this year. Any other tipsters planning to be there? Let's have 
a party! I'd love to put personalities to names.

Thanks

Annette

Annette Kujawski Taylor, Ph. D.
Professor, Psychological Sciences
University of San Diego
5998 Alcala Park
San Diego, CA 92110
tay...@sandiego.edu  





Re:[tips] famous psychologists and federal grants

2013-01-02 Thread Mike Wiliams

Just a few points:

1) The base rate of grants available to psychologists is too low to fund 
any reasonable number of tenured faculty.  This is just another 
manipulation of criteria designed to cut costs.  We are headed for 
faculties composed of instructors, post-docs, and depressed nontenured 
assistant professors who spend each waking moment applying for grants 
they have a small chance of acquiring.

2) I have never seen grant acquisition stats just for psychologists.  The 
rate must be low because we don't have our own institute.  Each major 
discipline has its own funding institute.  NIMH is for psychiatry and 
for neuroscientists who believe that all mental disorders are a 
neurotransmitter disease.  Psychologists get a few grants out of guilt.  
The reviewers support their own disciplines.  Only areas of obvious 
psychological influence, such as obesity, smoking cessation, and drug 
addiction, receive any reasonable funding for psychologists.  Most of 
the funding in those areas still goes to neuroscientists who write 
grants supporting theories of obesity, drug addiction, etc. as brain 
disorders.

Even nurses have their own institute.  After the nurses advocated for 
their institute, they began to get grants on their own terms.


3) The R01 has become the currency of academia.  It has already become 
standard practice in the biomedical sciences to fire people because they 
don't get grants, even before they go up for tenure.

The longshoremen recently shook the foundations of our fragile economy 
by just suggesting they might go on strike over a minor compensation issue.


Maybe it's time to join a union.

Mike Williams

 On 1/2/13 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: famous psychologists and federal grants
From: Lilienfeld, Scott O slil...@emory.edu
Date: Wed, 2 Jan 2013 22:57:07 +
X-Message-Number: 1

Hi TIPSters...happy New Year.

I beg your indulgence for just a bit, as this message doesn't have much direct 
bearing on the teaching of psychology, although I do think it carries a number 
of implications for how we think about academia and what we value or do not 
value in our colleagues.





[tips] IQ RIP

2012-12-21 Thread Mike Wiliams
The discussion of g reminded me of the International Neuropsychological 
Society presidential address of Muriel Lezak.  Neuropsychologists and 
many others who use IQ tests every day recognized that the tests were 
measuring a variety of independent cognitive abilities.  The average 
performance did not capture the lost and retained abilities of people 
who sustain brain illness and injuries.  Unfortunately, the largely 
meaningless summary scores are still reified.


http://www.learnpsychology.com/papers/general/Lezak_IQ_RIP.pdf

Mike Williams





Re:[tips] MRI Diagnosis

2012-12-08 Thread Mike Wiliams
Two features of the study seem fishy.  The first is the manual 
designation of individual brain areas.  Although the authors studied the 
reliability of the designations and appeared very careful, there must be 
some persistent error in defining the brain areas.  Related to this, the 
people making these designations may have a systematic bias in the brain 
areas they think are associated with various disorders.  This is 
different from being blind to the subject's diagnosis.  We did something 
similar in a study that required designating the hippocampus.  I noticed 
that each person on the team had a different idea of where the 
hippocampus ended and the parahippocampus began.  There were other 
uncertain areas.  Each rater just had to make a decision.  However, if 
the raters in this study systematically sampled more of the hippocampus 
for the depression group and less for the schizophrenia group, then the 
classifications might represent such a systematic difference.  Since 
they did not use a normalized method common across all the subjects, 
this could have happened and they should have examined it.

The second fishy feature is the high rate of classification compared to 
the reliabilities of the diagnostic methods.  SCID reliability varies 
from approximately .6 to .9, depending on the diagnosis.  Presumably the 
classification rates should be lower, given the error in making 
anatomical designations and measurements and the error in making 
diagnoses.  The extremely high classification rates suggest that some 
systematic bias is linking the brain measurements to the diagnostic 
clusters.
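That ceiling argument can be illustrated with a toy simulation (the 0.75 agreement rate between diagnostic labels and the true disorder status is an assumption for illustration, roughly in the SCID reliability range mentioned above):

```python
import numpy as np

# Even a classifier that recovers the "true" disorder perfectly cannot
# score better, against unreliable diagnostic labels, than the labels'
# own agreement with the truth.
rng = np.random.default_rng(1)
n, label_reliability = 100_000, 0.75
truth = rng.integers(0, 2, n)             # true disorder status
flip = rng.random(n) > label_reliability  # labels that miss the truth
scid = np.where(flip, 1 - truth, truth)   # observed diagnoses
accuracy = (truth == scid).mean()         # a perfect classifier's score
print(round(accuracy, 2))                 # close to 0.75, the ceiling
```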

Finally, these overly empirical and descriptive studies do not enlighten 
us concerning how the brain mediates these disorders.


Mike Williams

Assaf, B., Mohamed, F. B., Abou-Khaled, K., Williams, J. M., Yazeji, M., 
Haselgrove, J., & Faro, S. (2003).  Diffusion tensor imaging of the 
hippocampal formation in temporal lobe epilepsy.  American Journal of 
Neuroradiology, 24, 1857-1862.




On 12/8/12 11:00 PM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:

Subject: MRI Diagnosis
From: don allen dap...@shaw.ca
Date: Sat, 8 Dec 2012 10:15:55 -0700 (MST)
X-Message-Number: 3

I just read the following study in PLOS one titled: Anatomical Brain Images 
Alone Can Accurately Diagnose Chronic Neuropsychiatric Illnesses.

It can be found here:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0050698

The claims of the study seem impressive:

In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, 
Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial 
risk for Major Depressive Disorder, our method discriminated with high specificity and 
nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric 
disorder from the brains of healthy participants and the brains of persons who had a 
different neuropsychiatric disorder.

The research design seemed to be adequate (at least to me) but I don't have 
enough detailed information about MRI to know whether this is a really 
important breakthrough or just another soon-to-be-forgotten study. The fact 
that it was published in PLOS ONE rather than Science or Nature makes me 
suspect the latter. Would anyone with more expertise in interpreting MRI data 
like to provide some comments on the study?

Thanks,

-Don.





[tips] Is p < .05?

2012-10-08 Thread Mike Wiliams

Sounds like your students need to eat more organic food.

Mike Williams

On 10/6/12 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Is p < .05?
From: Wuensch, Karl L wuens...@ecu.edu
Date: Fri, 5 Oct 2012 16:44:23 +
X-Message-Number: 3

Remember my rant about students not being able to tell which of two numbers (both between 0 and 
1) is larger?  Well look at this statement from one of my students:  The mean IQ of freshman at 
East Carolina Unviresity [sic] (N = 17, M = 107.65, SD = 9.95) was significantly less 
than that of the general population (100),...  Now 107.65 is less than 100.

Cheers,

Karl L. Wuensch





Re:[tips] TED talk on Publication Bias

2012-10-03 Thread Mike Wiliams
I guess it's all pseudoscience.  In the drug trials I have worked on, 
publication bias is just the tip of the iceberg.  Drug companies have an 
antagonistic relationship with the FDA.  They are also very sensitive to 
reporting any information that reduces the company stock values.  Since 
most of the drug trial managers have stock options, the conflict of 
interest is obvious.  They are not responding to publication bias.  They 
would not report negative findings even if there was no publication 
bias.  As Goldacre reports, they have been dragged kicking and screaming 
by the FDA to report all their trials.  They are still not 
cooperating.  If it wasn't for the FDA regulating them at the current 
level, we would be back to the days of patent medicines when cough syrup 
contained alcohol, heroin and cannabis extract:


http://antiquecannabisbook.com/chap25/BlackSheep_1.htm

Drink enough of that stuff and you will definitely stop coughing.

Mike Williams

On 10/3/12 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: TED talk on Publication Bias
From: Jeffrey Nagelbush nagel...@hotmail.com
Date: Tue, 2 Oct 2012 20:44:00 +
X-Message-Number: 8


I strongly recommend the recent TED talk by the wonderful Dr. Ben Goldacre. His 
focus is on publication bias that makes it very difficult for even doctors to 
know much about the drugs they prescribe. He does mention the Bem precognition 
study in the beginning.

http://www.ted.com/talks/ben_goldacre_what_doctors_don_t_know_about_the_drugs_they_prescribe.html

Jeffrey Nagelbush
Social Sciences Department
Ferris State University






Re:[tips] IRB Training

2012-09-22 Thread Mike Wiliams
CITI training just allows the institution to have standardized training 
that meets the regulations. Otherwise, the institution has to develop 
and maintain its own training, which may not meet the regulations.  
CITI solves the problem.  I wish I had thought of it.  It's a great lesson 
in the way regulations generate an industry to support them.  Every 
institution will go to a CITI program or something like it, just as a 
short-cut to standardization.


Mike Williams


On 9/22/12 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

TIPS Digest for Friday, September 21, 2012.

1. question about IRB training
2. Re: question about IRB training
3. Retirement For Bonzo

--

Subject: question about IRB training
From: Carol DeVolder devoldercar...@gmail.com
Date: Fri, 21 Sep 2012 14:46:22 -0500
X-Message-Number: 1

Hi all,
If you submit work to your institution's IRB, I assume you have to show
some sort of training certification, either from the NIH or an equivalent.
How many of you know if your institution is affiliated with CITI, and if
so, why did your institution go that route? Backchannel is fine.
Thanks,
Carol







Re:[tips] IRB Training

2012-09-22 Thread Mike Wiliams

You do not need to have all your students take the training, unless this is an 
educational exercise.  The only people who need training are the PI and others 
on the IRB submission, especially those taking consents.  The NIH and CITI 
training probably meet the requirements, but who knows?  The IRB regs are a 
moving target.  A great book on the subject is Ethical Imperialism, by Zachary 
Schrag.

Mike Williams



Subject: Re: IRB Training
From: Paul C Bernhardt pcbernha...@frostburg.edu
Date: Sat, 22 Sep 2012 13:27:53 +
X-Message-Number: 3

We go directly to NIH's website. I have no idea if it costs us anything, but 
I'm pretty confident it does not. I'm doing a study now that required about 10 
undergraduate research assistants and they are all just going to the site for 
training.

http://phrp.nihtraining.com/users/login.php

Paul




Re:[tips] Critique of Ethics Procedures

2012-09-06 Thread Mike Wiliams

Hello

In regard to IRBs, I thought it might be helpful to point out some 
things that most investigators do not know about the process.  I don't 
claim I know everything but I recently went through IRB Chair training 
that included attending the PRIMR conference (http://www.primr.org/) and 
working with a consulting group named Huron 
(http://www.huronconsultinggroup.com/researchdetails.aspx?articleId=1506).  
Most investigators do not know that PRIMR even exists.  It is also not 
referred to in any of the documents like the one from the AAUP.


PRIMR is a professional society of IRB Chairs and administrators.  All 
the IRBs have a budget for training that usually means periodic 
attendance at PRIMR and each year thousands of IRB staff attend. There 
is also a system for accrediting IRB professionals. They convene 
sessions in which the outline is the same: rationalize the intense 
review of protocols by citing all the abuses of investigators and then 
itemize all the additional ways research should be regulated to stop the 
investigators from harming people.  I attended a training session and 
the course leaders asked how many people were in each major category 
(Chairs, Vice-chairs, IRB members, investigators etc.)  Approx 3 people 
raised their hands admitting they were investigators.  The conference 
has a few sessions for investigators but these are usually just sessions 
used to inform investigators about new regulations.  There is one thing 
that the Republicans have correct: If you have a set of laws or 
regulations, you will create an industry that has an interest in 
protecting and extending the regulations.  That is what has happened: 
thousands of people now have a vested interest in keeping and extending 
the domain of IRBs.  They make money from these reviews and they need 
them to be as complicated as they can be.  The ambiguity present in the 
IRB regulations has fed this interest and PRIMR rationalizes all this 
because PRIMR itself feeds off the regulations.


Now, enter Huron.  The Huron system is a marvel of administrative 
process.  It has a worksheet for every decision and clear models for 
every process.  It even includes a model Thank You letter you might send 
to people who consult to the committee.  At Drexel, we are going to this 
system and it will be worth far more than the price.  Temple University 
now uses these forms and you can see them here: 
(http://www.temple.edu/research/regaffairs/irb/index.html).  I just 
completed a set for a study I will do with Temple.  If you want to see 
what we have to go through at Drexel now, check this page: 
(http://www.research.drexel.edu/compliance/irb/medical_irb.aspx).  It 
represents the idiosyncratic, common sense and overly-complicated system 
that most IRBs use.  Using the current Drexel system, and many others, 
represents hours of wasted preparation time.  Imagine if Temple and 
Drexel used the same forms.  Imagine if all the IRBs used the Huron 
system.  The Huron system also keeps the IRBs grounded.  Every decision 
is mediated by a worksheet.  IRBs don't fly unguided.  Most of the poor 
IRB decisions occur because the regulations are unclear and the IRBs 
have no guidance or supervision.  Since it is a system developed 
external to the IRB, and represents the best interpretation of the 
regulations, the Huron system implicitly supervises them.


Millions of wasted hours and considerable frustration would be saved if 
every IRB was required to use the same forms and review process.  Most 
of the people writing the pronouncement papers like this one from the 
AAUP have obviously never consulted their colleagues who work on the 
IRB.  There are many aspects of the IRB process that could actually be 
changed for the better that are never proposed because the authors are 
unaware of the IRB systems.


Mike Williams


Subject: Critique of Ethics Procedures
From: Jim Clark j.cl...@uwinnipeg.ca
Date: Wed, 05 Sep 2012 06:20:14 -0500
X-Message-Number: 1

Hi

A strong critique of current research ethics practices from the AAUP, with many 
implications for most psychology research if its recommendations were adopted 
(i.e., much would be exempt from IRB approval).

http://www.aaup.org/NR/rdonlyres/3F016909-1388-43DE-872B-18D7F1C373AC/0/IRBREPORT29August2012.pdf

Perhaps there is some hope that the flawed current practices will be revised?  
And should we be educating our students not only about the current regulations, 
but also about their weaknesses?

Take care
Jim


James M. Clark
Professor of Psychology and Chair
j.cl...@uwinnipeg.ca
Room 4L41A
204-786-9757
204-774-4134 Fax
Dept of Psychology, U of Winnipeg
515 Portage Ave, Winnipeg, MB
R3B 0R4  CANADA







Re:[tips] Smell Cilia

2012-09-04 Thread Mike Wiliams
The main problem with these studies is the use of a pathology that does 
not exist in nature.  The authors note: "... the relevance of IFT88 mutations 
to human pathology is unknown."  The logic follows the line that, "We 
produced a mouse that doesn't have protein IFT88, and this protein is 
necessary for cilia growth.  We discovered that when we give the mouse a 
treatment that increases protein IFT88, it grows cilia."  An IFT88 
protein deficit is not a natural illness.  It was apparently produced by 
a type of selective inbreeding.  It reminds me of the attempts to treat 
scopolamine-induced memory disorder.  A number of medications were 
effective but none panned out as effective with any naturally-occurring 
memory disorder.

I wonder if the hearing and balance systems are poor in these mice.  The 
cilia in these systems are much more important than smell.

Mike Williams


On 9/4/12 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:
 TIPS Digest for Monday, September 03, 2012.

 1. What's That Smell?
 2. What's That Smell: Dogs & Orcas Edition
 3. Re: What's That Smell: Dogs & Orcas Edition
 4. The Effective But Forgotten Benezet Method of K-8 Education

 --

 Subject: What's That Smell?
 From: Michael Palij m...@nyu.edu
 Date: Mon, 3 Sep 2012 08:59:40 -0400
 X-Message-Number: 1

 Some new research involving gene therapy in a mouse model shows
 promise for treating a group of disorders called ciliopathies which are
 dysfunctions of the cilia.  Most psychologists are familiar with cilia
 from the role they play in hearing, seeing, and smell.  The new research
 focuses on how to repair the cilia in mice that have genetically disabled
 olfactory cilia, that is, mice who are born without a sense of smell.
 If such gene therapy is effective in humans, then a number of ciliopathies
 might be cured or significantly improved.

 The popular media has picked up on the story and here is one example
 of their presentation:
 http://www.bbc.co.uk/news/health-19409154

 A pop science presentation on the Science Daily website is available
 here (it provides much more detail and additional links):
 http://www.sciencedaily.com/releases/2012/09/120902143147.htm

 Some of the researchers involved in the study are at the University
 of Michigan and the U of M media office provided this press release:
 http://www.uofmhealth.org/news/archive/201209/smell

 The original research is published in Nature Medicine:
 http://www.nature.com/nm/journal/vaop/ncurrent/full/nm.2860.html

 The reference for the article is:

 Jeremy C McIntyre, Erica E Davis, Ariell Joiner, Corey L Williams,
 I-Chun Tsai, Paul M Jenkins, Dyke P McEwen, Lian Zhang, John
 Escobado, Sophie Thomas, Katarzyna Szymanska, Colin A Johnson,
 Philip L Beales, Eric D Green, James C Mullikin, NISC Comparative
 Sequencing Program, Aniko Sabo, Donna M Muzny, Richard A Gibbs,
 Tania Attié-Bitach, Bradley K Yoder, Randall R Reed, Nicholas Katsanis,
 Jeffrey R Martens. (2012).
 Gene therapy rescues cilia defects and restores olfactory function
 in a mammalian ciliopathy model.
 Nature Medicine, 2012;
 DOI: 10.1038/nm.2860

 I suspect that if this research is successful in humans, then olfactory
 abilities lost to toxins and age might be successfully treated.  It may
 be particularly useful in the elderly who have developed a diminished
 sense of smell.

 -Mike Palij
 New York University
 m...@nyu.edu

 P.S.  One point for the person who can guess which movie the subject
 line is from. ;-)




[tips] One Ring to bind them into a tight social cluster

2012-08-14 Thread Mike Wiliams
I thought the paper referred to by the Columbia student was a good 
example of quantitative rumination and a good example of how statistics 
is only descriptive and helps us reveal patterns in the data. The 
interpretation is still up to our theories.

http://iopscience.iop.org/0295-5075/99/2/28002/

By analysis of the social networks, the authors compared mythological 
accounts to the characteristics of real social networks.  The 
presumption is that if a myth such as Beowulf depicts true social 
networks, it is more likely to have been a historical account rather 
than a fictional one.  They also analyzed the Iliad, an Irish epic, the 
Tain, The Fellowship of the Ring, the Marvel Universe and Harry Potter.  
These accounts obviously vary in the social networks they depict.  The 
Fellowship, Iliad and Harry Potter all depict a small group of close 
friends fighting adversity.  The Lord of the Rings has the weakest 
analysis.  In the Appendices of The Lord of the Rings and other works, 
Tolkien mapped out the explicit genealogies and built the story around 
these:

http://lotrproject.com/hobbits.php
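For anyone curious what "characteristics of real social networks" means concretely, here is a toy sketch in Python (the character graph and numbers are mine, purely illustrative, not the paper's) of two statistics this kind of study compares: mean degree and average local clustering.

```python
# Toy character network (hand-made edges, not data from the paper).
edges = {("Frodo", "Sam"), ("Frodo", "Gandalf"), ("Sam", "Gandalf"),
         ("Frodo", "Aragorn"), ("Aragorn", "Boromir"), ("Gandalf", "Aragorn")}

# Build adjacency sets from the undirected edge list.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# Mean degree: average number of ties per character.
mean_degree = sum(len(v) for v in adj.values()) / len(adj)

def local_clustering(node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for x in nbrs for y in nbrs if x < y and y in adj[x])
    return 2 * links / (k * (k - 1))

avg_clustering = sum(local_clustering(n) for n in adj) / len(adj)
print(mean_degree, round(avg_clustering, 2))
```

Real social networks tend to show high clustering and heavy-tailed degree distributions; the paper's test is essentially whether a myth's character graph looks more like that than like a random graph.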

The Fellowship of the Ring, the only part examined by the authors, 
refers to nine guys representing all the races of Middle Earth set 
against nine ringwraiths who were formerly men given rings of power by 
Sauron.  In a strange way, Sauron was bringing people together.  His 
overall plan was to bind the ringbearers into one tight crew:

One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.

If the authors had analyzed the Lord of the Rings thoroughly, they would 
have discovered that it is a complete, internally consistent history. 
Tolkien attempted to depict his understanding of England as it existed 
in Anglo-Saxon times, when Grendel, Beowulf and the Elves were still 
here.  He even invented the languages that he believed existed at the 
time.  Since the authors only examined the first book of the Lord of the 
Rings, they missed the complete social structure that existed 
in Middle Earth.

This reminds me of an analysis done of a possible new poem by 
Shakespeare comparing the number of new words used in the poem to the 
population distribution of new words used by Shakespeare:

http://www.learnpsychology.com/movies/infer_onescoresm.mov

It strikes me that the authors should have compared the mythical 
accounts to historical accounts of the same period and culture (e.g. 
compare Herodotus to Homer).  Including the Marvel Universe was really odd.

Mike Williams




Hi

A Columbia psychology graduate student gives an incoherent diatribe 
against scientific approaches in the humanities and social sciences, 
including psychology.
http://blogs.scientificamerican.com/literally-psyched/2012/08/10/humanities-arent-a-science-stop-treating-them-like-one/


Well, this kind of critique is not new though her incoherence and 
apparent lack of historical knowledge of the nature of psychology is 
troubling. This is particularly so, given some of the research that she 
has been involved in. Ms. Konnikova's bio on the Scientific American 
website is kind of vague on what she's doing and who she is working with 
at Columbia; see: 
http://blogs.scientificamerican.com/literally-psyched/about.php?author=314



[tips] Odd-ball Multiple Choice Questions

2012-07-27 Thread Mike Wiliams

These odd formats just force variance that doesn't exist.  The variance on the 
test is corrupted by items that test examination skill rather than the 
construct they were designed to measure.  Since people who figure out the 
puzzle will get higher scores on the test than people who don't, simple 
item analyses will show that test items like this appear to work.  However, 
test taking skill is now an extraneous factor.  You may as well teach people 
how to take tests.

I see this all the time in medical school classes.  The instructors are 
obsessed with getting a normal curve because a skewed distribution makes the 
test look too easy.  The variance in knowledge of the course content just doesn't 
exist.  Since they are so highly selected as studying machines, virtually all 
medical students know the answers to all the questions on the tests.  The only 
way to force a normal curve is to manipulate the tests so that even students 
who know the answer have a hard time responding.

There is no theoretical reason that all the students should not get high scores 
in every class they take.  If you did a good job teaching then the scores 
should be skewed.  Isn't that the goal?
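A quick simulation of the point above (my parameters, purely illustrative): if nearly every student knows nearly every item, scores pile up near the ceiling, and the only way to spread them out is to add items that even knowledgeable students miss.

```python
import random
import statistics

# Illustrative simulation: 200 students, 50 items.  With straightforward
# items that well-prepared students answer correctly 95% of the time,
# scores cluster at the ceiling (negative skew).  "Puzzle" items that drop
# the success rate to 60% force a spread -- but the added variance reflects
# test-taking skill, not knowledge of the content.
random.seed(1)
n_students, n_items = 200, 50

def total_score(p_correct):
    """Total test score for one student answering each item independently."""
    return sum(random.random() < p_correct for _ in range(n_items))

straight = [total_score(0.95) for _ in range(n_students)]
tricky   = [total_score(0.60) for _ in range(n_students)]

print(statistics.mean(straight), statistics.stdev(straight))
print(statistics.mean(tricky), statistics.stdev(tricky))
```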

Mike Williams



I don't like these question formats because I believe they evaluate 
puzzle-solving rather than specific learning outcomes related to knowledge of 
content,  application of a theory or model to a new problem, or other skills I 
am interested in evaluating.







Re:[tips] Neuroimaging Statistics

2012-07-13 Thread Mike Wiliams
Subject: Re: Some Problems with Neuroimaging
From: Michael Palij m...@nyu.edu
Date: Thu, 12 Jul 2012 08:54:06 -0400
X-Message-Number: 2

On Wed, 11 Jul 2012 23:09:57 -0700, Mike Wiliams wrote:


 However, the distribution of false positives across the voxel 
locations should be random.

Depends upon how one defines "random".  Consider:
(1) For all t-tests, is N1=N2?  I know that you say you're
using paired t-tests but what guarantee is there that there
is always a matching value? How is such missing data
treated?
There is no missing data.  The whole brain scans are replicated for 
active and rest phases of the study.
There is a measurement taken of signal strength for every voxel for 
every whole-brain scan.  I am
conducting a paired t-test for a single voxel and subtracting the mean 
for the active condition from
the mean for the rest condition.  The number of measurements (N) is the 
same for each condition.
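Concretely, the voxelwise test described above amounts to something like this sketch (illustrative signal values, not real BOLD data):

```python
import math
import statistics

# Paired t-test for a single voxel: 15 active-phase and 15 rest-phase
# signal measurements, differenced pairwise.
active = [102.1, 101.8, 102.5, 101.9, 102.3, 102.0, 102.4, 101.7,
          102.2, 102.6, 101.9, 102.1, 102.3, 102.0, 102.2]
rest   = [100.0, 100.3,  99.8, 100.1, 100.2,  99.9, 100.0, 100.4,
           99.7, 100.1, 100.0, 100.2,  99.9, 100.1, 100.0]

diffs = [a - r for a, r in zip(active, rest)]   # active minus rest
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)                  # sample SD of the differences
t = mean_d / (sd_d / math.sqrt(n))              # compare to t critical, df = n - 1
print(round(t, 2))
```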

(2)  How are violations of the assumptions of paired t-tests handled?
Variance away from the mean values is stochastic error and there is no 
skew.  If the distribution were abnormal and the variances unequal, then 
there would be something wrong with the scanner.  This would be obvious 
in the artifacts produced in the scans.


|The fact that the Salmon's randomly significant voxels clustered
|in the Salmon's brain cavity I consider extremely unlikely. What
|are the odds of this pattern occurring by chance?

Oh, so we're turning Bayesian now? ;-)  Let's start by asking
what is the baserate?

|There was likely some artifact that produced this, like they
|moved the Salmon's head slightly at the end of every activation run,
|or there was an intentional manipulation of the data.

Uh, yeah.

|From a random distribution of 1,000 t-tests, how many times do t-tests
|numbered 98, 99 and 100 come up significant and all the others come
|up nonsignificant?

I don't understand your sentence above.  If you're asking what is the
overall Type I error rate for 1000 t-tests, this is given by the formula:

alpha-overall = 1 - (1 - alpha-per-comparison)**1000

Your formula specifies the probability of at least one voxel coming up 
significant by chance.  Suppose you specify an alpha of .05; about 5% 
of the voxels should then be significant by chance.
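For concreteness, both quantities in this exchange, computed in a short Python sketch (numbers mine):

```python
# Per-comparison alpha across 1000 voxelwise t-tests.
alpha = 0.05       # per-comparison Type I error rate
n_tests = 1000     # number of voxels tested

# Familywise error rate: probability that at least one test is falsely
# significant.
alpha_familywise = 1 - (1 - alpha) ** n_tests

# Expected number of falsely significant voxels (about 5% of 1000).
expected_false_positives = alpha * n_tests

print(alpha_familywise, expected_false_positives)
```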

However, the significant t scores should be randomly distributed 
across the voxels and all areas of the image.  What are the odds of 
chance activations only in the voxels making up the Salmon's brain 
cavity?  What are the odds of just voxels 4, 5, 6 (the brain cavity) 
coming up significant and all the others coming up nonsignificant?  
The odds must be astronomical.  Why were there no chance activations 
in other areas of the Salmon image?  The odds of this occurring are so 
small that some kind of manipulation was conducted to produce this 
extremely rare pattern.
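The back-of-the-envelope arithmetic, assuming (unrealistically) independent voxels:

```python
# Probability that exactly voxels 4, 5 and 6 -- and no others -- come up
# significant among 1000 independent tests at alpha = .05.
alpha, n = 0.05, 1000
p_pattern = alpha ** 3 * (1 - alpha) ** (n - 3)
print(p_pattern)   # astronomically small
```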




But, if I am reading the literature correctly, the Pearson r and sample size
are not routinely reported.  Nor are the power levels associated with each
test -- reducing alpha-per comparison will reduce the statistical power for
each test, thus increasing the Type II errors. So, do the corrections trade
Type I errors for Type II errors?

In other words, what are you talking about Willis?


The sample sizes are reported when the model is described.  The number 
of whole brain scans for each
condition in a block design is the sample size.  I typically have 15 
measurements for each condition for
each voxel.  The effect size for the BOLD response is more-or-less 
standardized.  I just don't remember
what it is.  The % change in signal strength associated with the BOLD 
response was established very
early and it has a classic pattern of onset, peak and diminishment that 
is well known and modeled in the analyses.

Corrections are the default for the analysis software, such as SPM.  You 
have to actively uncorrect the
analysis if you want to see it uncorrected.  It is also typical to 
reduce the voxel extent for clusters.
Randomly distributed significant voxels don't usually cluster.  By 
specifying a minimum cluster of 5 voxels, I can usually eliminate most 
of the random results.
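Here is a rough simulation of that cluster-extent rule (my own sketch, not SPM's actual algorithm, which models spatial smoothness rather than assuming independent voxels):

```python
import random

# Scatter false positives at random over a 1-D strip of voxels, then
# drop any "cluster" smaller than 5 contiguous voxels.
random.seed(0)
n_voxels, alpha, min_extent = 10_000, 0.05, 5
sig = [random.random() < alpha for _ in range(n_voxels)]

# Find runs of contiguous significant voxels.
clusters, run = [], 0
for s in sig:
    if s:
        run += 1
    else:
        if run:
            clusters.append(run)
        run = 0
if run:
    clusters.append(run)

# Keep only clusters meeting the minimum extent.
surviving = [c for c in clusters if c >= min_extent]
print(len(clusters), len(surviving))  # nearly all random clusters are removed
```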

The hypotheses of neuroimaging are not at the single t-test, voxel 
level. The hypothesis is typically that a
cluster of voxels, a region of interest, demonstrates a BOLD response 
under the active condition.  When I
administer a Naming Test, I expect a typical pattern of voxel clusters 
representing the language areas of
the left hemisphere.  This usually happens.  The problems with fMRI are 
the same problems that
hamper any research design: researchers with weak theories and 
hypotheses about brain function are essentially on a fishing expedition 
for fame and glory and not science.

Mike Williams


Re:[tips] Some Problems with Neuroimaging

2012-07-11 Thread Mike Wiliams
I used to have many of the blog author's reservations until I began to 
use fMRI in clinical cases and verifying basic
brain functions.  As long as you are not looking for the "God Center" 
or verifying some crackpot cognitive theory, 
the results are reliable and valid.  This surprised me since I expected 
more uncertainty and error.  Most psychological
tests do not have the reliability of fMRI.  I know that sounds odd.  The 
examples given by the blogger suggest a lot
of statistical error, at least at the level of the usual cognitive 
test.  However, if I administer a simple cognitive task,
such as naming pictures, the fMRI analysis will show the same results 
every time.  The left image shows the results of an fMRI
study of a patient with a meningioma in the left parietal-temporal area:

http://www.learnpsychology.com/neuropsych/images/tumors.jpg

The Naming task is associated with BOLD responses in the occipital lobes 
and the language centers in the left hemisphere.
Notice that the language activation actually outlines the margins of the 
tumor.  The patient's language was normal.
Knowing that language centers are under the tumor is extremely important 
in surgery planning.  The image on the
right shows a tumor in the right temporal lobe that is interrupting 
vision pathways connecting the Lateral Geniculate
Body to the Occipital Lobe.  Notice that the left Occipital Lobe is 
active but the right Occipital Lobe is inactive. The
patient had a very clear loss of vision in the upper left quadrant of 
the visual field.

The problems with fMRI that I endorse have nothing to do with the method 
itself.  The fMRI method is continuing to be
developed and its underlying evolution is proceeding well.  New MRI 
scanners that can handle the data and give
radiologists a turn-key technology for fMRI are available now.  New 
connectivity modeling and other data analysis
procedures are also moving fMRI along.  If you come into the hospital 
today with a brain tumor, it is likely that you
will get an fMRI study while you are in the MRI scanner getting your 
structural scans.

The problems are all the result of people jumping on the bandwagon and 
trying to scoop the next sensational finding.
This is actually hurting the method.  We need to conduct the usual 
reliability and validity studies that psychologists are
well known for in the development of new methods.

Unfortunately, I don't think human brain function is as interesting as 
cognitive psychologists think it is.  Most cognitive
psychologists don't know enough about brain function to draw correct 
inferences.  After you consider all the tissue
mediating simple neurological and cognitive functions, there is not much 
left for all the complex cognitive abilities
cognitive psychologists believe are there.  When they conduct research 
that does not have explicit hypotheses connecting
a cognitive ability or construct to specific functional brain systems, 
they can show any activation pattern and proclaim that 
the "whatsy" center has been discovered.  Most activation patterns are not 
the result of the error suggested by the blogger.  They 
are usually the result of activation associated with the task that were 
not accounted for by the theory underlying the task.
For example, many language tasks will activate language areas that are 
not the focus of a particular cognitive neuroscience
language study.  If the investigator is unaware of these then they will 
appear as false positive errors.

I was also involved in one of the fMRI deception studies. Here, fMRI may 
actually pan out as a lie detector.  Lies involve a simple
inhibition of the truth and a construction of an alternate response.  
The truth just comes out.  The former requires much more
frontal lobe inhibition than telling the truth.  We worked with an 
excellent polygrapher who educated me on many things about
polygraphs and ways to study and detect lies.  The first was that 
polygraphs are not designed to detect lies; they are designed to
elicit confessions.  That is why they are in widespread use by police 
departments but not admitted into courts.  Here is a great
video segment I use in presentations:

http://www.learnpsychology.com/fmri/jerrypolysm.mov

Mohamed FB, Faro SH, Gordon NJ, Platek SM, Ahmad H, Williams M. (2006). 
Brain mapping of deception and truth telling about an ecologically valid 
situation: An fMRI and polygraph investigation--initial experience. 
Radiology, 238: 679-688.

Mike Williams






Re:[tips] ECT Expectations, or how I learned to love the FDA

2012-03-23 Thread Mike Wiliams
I am not going to beat a dead horse.  Until someone conducts a 
double-blind clinical trial with random assignment, no psychological 
treatment can claim empirical support.  We have to have some standard of 
empirical support.  If that is black-and-white then so be it.  The FDA 
is black and white.  Studies that do not maintain a minimum standard of 
empirical analysis are pseudoscience.

The motivations of the subjects are the expectation biases and Hawthorne 
effects built into all clinical trials.  If I believe ECT works, I will 
endorse positive change on the self-report measures regardless of how 
depressed I feel.  The use of sham ECT and blinding reduces these.  If 
these were not important issues, why would the investigators of ECT have 
a sham condition?

I hold neuropsychology studies to the same standard.  If a clinical drug 
trial for dementia were not blind or did not use random assignment, both 
the FDA and I would declare the study invalid and I would call it 
pseudoscience.  The issue of expectation bias is salient here as it is 
for depression clinical trials.  However, it is very difficult for the 
subject's expectations to influence the scores on a memory test.  It is 
very easy for the subject's expectations to influence the scores on a 
self-report memory questionnaire.  It is well known that demented 
subjects do not endorse problems on memory self-report measures even 
when their memory scores are low and they demonstrate obvious memory 
disorder in everyday behavior. They are unaware of their memory problems 
for the obvious reason that they cannot remember memory errors.  The 
opposite pattern prevails in depression.  Depressed subjects endorse 
numerous problems on memory self-report scales but score in the normal 
range on memory tests (See Williams, Little, Scates & Blockman, 
http://www.learnpsychology.com/papers/mypapers/memory_complaints.PDF). 
Depressed subjects are hyper-aware of any kind of problem, including 
memory errors.  It would be absurd to base the outcome of a dementia 
clinical trial on the self-report of memory problems.

I was very involved in clinical trials of Xanax, imipramine and ECT as a 
grad student.  I conducted a clinical trial of a medication to treat 
cognitive impairment following traumatic brain injury.  I even did 
animal research investigating cognitive enhancing meds for traumatic 
brain injury in rats. I have also been involved as a secondary member of 
other large trials.  I am now the vice-chair of one of the IRBs here at 
Drexel.  My observation of the contrasting environments of a typical 
outcome study of CBT vs the environment of research among the drug 
companies is that CBT outcome studies are sloppy.  They make a host of 
mistakes, including losing track of drop outs, numerous and undocumented 
protocol violations and neglecting control conditions and blinding.  
They don't even use consistent data collection methods, such as case 
report forms, let alone use the internet systems that are part of real 
clinical trials.  If the CBT outcome studies were held to the same FDA 
standards as the drug trials, they would all be declared invalid.  This 
is not black-and-white reasoning about science.  It is maintaining a 
minimum standard of empirical analysis.  Most psychologists are not 
involved in research with the drug companies and their research is not 
supervised by anyone, let alone a hard-headed organization like the 
FDA.  They have a very flimsy idea of empirical support.

You did not answer my question: Why do we blind clinical trials?

Mike

On 3/24/12 12:00 AM, Teaching in the Psychological Sciences (TIPS) 
digest wrote:
 Mike, no, actually I don't think you answered my question, unless I've missed 
 it in the back-and-forth flurry of multiple emails (but I don't think so).  
 I've asked, now three times, why depressed patients in controlled studies 
 would be motivated to self-report that their depressed symptoms are improved 
 (even when they're not) to avoid ECT, when they know that their participation 
 is voluntary and that they can refuse treatment at any time.  Mike's 
 discussion of Hawthorne (spelled Hawthorne, not Hawthorn) effects below is 
 interesting and raises some valid points, but it doesn't address my question.


Re:[tips] ECT

2012-03-22 Thread Mike Wiliams


I will concede that some studies randomly assigned patients to ECT vs 
drugs.  I claimed that I didn't know of any.  Given that ECT was a 
choice, I would have to check that there was true random assignment.  
Subjects offered the study who felt negative about ECT probably did 
not volunteer, on the chance they might be assigned to it.


My main gripe was not random assignment.  My main gripe is that the 
studies cannot blind the subjects to the treatment condition.  The 
strong expectation bias is the reason that the investigators and 
subjects are blinded to the treatment condition.  If this was not an 
important factor, the FDA would approve clinical trials without it.  The 
entire field of clinical trials ignores the fact that they are examining 
human beings who have the cognitive capacity to figure out which treatment 
they are receiving.  They should be compelled by regulations to ask the 
subjects to tell them which treatment they thought they were receiving 
as a check on the blinding method.  This is never done.  Expectation 
bias is a psychological phenomenon that psychologists should study.  No 
one does this, presumably because they will discover factors that may 
account for all the treatment effects.  The investigators don't even 
know that subjects talk to each other!  They talk about the side effects 
with each other.  They form a hypothesis of which treatment they 
received and behave consistent with it, regardless of the fact that they 
signed a consent form and participated on a voluntary basis.  Wouldn't 
it be cool to study this?  Expectation bias is presumably a strong 
influence on self report measures.  These self-report measures are the 
major dependent variables used in these clinical trials of psychological 
disorders.  Self-report measures are incidental in the study of drugs 
that do not involve psychological disorders.  We don't use self-report 
measures in studies of antibiotics.  It would likewise be absurd to ask 
someone receiving a statin if the they believed their cholesterol was 
lower.  We ask the subjects to estimate the treatment effects with 
anti-depression treatments.
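The blinding check suggested above is cheap to analyze.  A sketch with hypothetical counts (not from any real trial):

```python
import math

# At study end, ask each subject which arm they believe they were in.
# Under successful blinding in a two-arm trial, guesses should be right
# about 50% of the time.
guessed_correctly = 41   # hypothetical number of correct guesses
n_subjects = 50

p_hat = guessed_correctly / n_subjects
# Normal approximation to the binomial under H0: p = 0.5
z = (p_hat - 0.5) / math.sqrt(0.25 / n_subjects)
print(round(p_hat, 2), round(z, 2))  # z well above 1.96 suggests blinding failed
```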


You will notice that studies of the same drugs in animals are not 
blinded.  We blind the human studies because humans have the capacity to 
make their behavior conform to the demand characteristics of the study.  
Rats can't do this.


We neglect the simple fact that the investigators and subjects are human 
beings who interact with each other.  These interactions and other 
expectation reactions influence the dependent measures.  When Fisher was 
working out the basic treatment outcome designs and statistics we use 
today, he never figured that the plants he was studying would ever be 
cognizant of the treatment, talk to other plants, ask the investigators 
a lot of questions, drop out of treatment, miss appointments, decide to 
take other meds to mitigate the side effects of the antidepressants and 
so on.  These are real people and there is an assumption they behave 
like laboratory rats (or plants).


One of the positive things about ongoing IRB assessments is that we are 
discovering how common it is for subjects to drop out of treatment.  My 
hypothesis is that they just don't like the treatment because of side 
effects, that they got better while on the control waiting list, or any 
of a number of currently unstudied aspects of treatment.  
Investigators are holding blinders up and only see the world through a 
set of assumptions that support how they have been conducting research 
in the past.  These factors deserve investigation on their own.  We 
might discover new methods or come to the conclusion that current 
research designs are invalid with humans and dependent measures that 
require human judgments.


With regard to neuropsych assessment, and neuropsychology studies 
generally, it was these experiences studying depression treatment in 
graduate school that pushed me toward neuropsychology.  I realized that 
I might spend an entire career studying depression, anxiety, 
schizophrenia or other psychological disorders and never know if I ever 
made a contribution.  Constructs like memory, language, brain tumor and 
traumatic brain injury have a conceptual integrity that makes them 
easier to study.  Unfortunately, there are few treatment outcome studies 
in neuropsychology.  I have conducted studies of changes and recovery of 
memory and cognition following illnesses such as traumatic brain injury, 
childhood leukemia and dementia.  There are some treatment outcome 
studies of dementia treatments.  Can you imagine using a self-report 
measure as the dependent measure in such a treatment study?  People with 
dementia have no capacity to judge their own memory 
ability.  You have to use a test, or the ratings of observers, such as 
family members.  Even then, family ratings are influenced 
by expectation bias if the family members know which treatment the 
patient received.
Re:[tips] ECT

2012-03-22 Thread Mike Wiliams
 From Scott:
 Mike (and Paul B., if you'd like), can you please answer the following 
 question: If so, why would patients be motivated to self-report that their 
 depression is better if they know that the treatment team can't impose the 
 intervention on them?  I don't follow the reasoning here at all.

I just wanted to make sure I responded to this question since questions 
posed this way are often at the core of a disagreement.  The influence 
of expectation bias is more subtle than this.  I am sure that Scott and 
others are familiar with Hawthorne effects 
(http://en.wikipedia.org/wiki/Hawthorne_effect) and other expectation 
biases.  Their presence is the major reason studies are blinded in the 
first place.  If I believe that ECT is effective and I am randomly 
assigned to the drug condition, I will have a bias in the direction of 
not endorsing change on the self-report measure.  The self-appraisals of 
depressed mood, weight loss, constipation, etc. are subject to these 
biases, presumably because the constructs do not lend themselves to 
objective, monotonic, linear ratings made by humans:

How constipated are you now? 1) Extremely constipated; 2) Very 
constipated; 3) A little constipated; 4) Not constipated.

What do these items really mean?  If a subject endorses a number of 
items on the self-report scale, such as "I am constipated" or "I have 
recently lost weight," but does not endorse the item "I am sad," can 
the subject actually be depressed?

This ambiguity in the dependent measure makes a bias easy to manifest as 
a simple push in endorsing items slightly one way or the other.  The 
subject might do this because of beliefs about treatment or simply 
because they like the investigators and want to support them.  They may 
do this because they don't like the treatment and think that if they say 
they are not depressed, they will get out of treatment sooner.  Studies 
of Hawthorne effects suggest that the mere fact of being in a study 
produces change.  The effect sizes of antidepressant treatments are well 
within the magnitude of Hawthorne effects.  The investigators have the 
burden of proof in establishing that Hawthorne effects are not present. 
The first step is simply to ask the subjects: "Which treatment did you 
think you received?", "Were there any side effects?", "Do you think the 
treatment worked?", "Did you discuss the treatment with other 
patients?", "You decided to drop out of treatment.  What was the 
reason?", etc.
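To make the concern concrete, here is a minimal simulation (all numbers invented, not drawn from any actual trial) of how a small per-item endorsement bias, with no true treatment effect at all, can produce an effect size in the range reported for antidepressant trials:

```python
import numpy as np

rng = np.random.default_rng(0)
n, items = 400, 20            # subjects per arm; self-report items scored 0-3

# Both arms have identical underlying depression: no real treatment effect.
treated = rng.integers(0, 4, size=(n, items)).astype(float)
control = rng.integers(0, 4, size=(n, items)).astype(float)

# Subjects who believe they got the active treatment shade each item
# down slightly (expectation bias), clipped at the scale floor.
treated_biased = np.clip(treated - 0.15, 0, 3)

t = treated_biased.sum(axis=1)    # total scores, biased arm
c = control.sum(axis=1)           # total scores, control arm
pooled_sd = np.sqrt((t.var(ddof=1) + c.var(ddof=1)) / 2)
d = (c.mean() - t.mean()) / pooled_sd
print(f"Cohen's d produced by bias alone: {d:.2f}")
```

With these invented parameters the bias alone yields a d of roughly 0.4, without any subject's depression changing at all.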

Since blinding of the treatment is essentially impossible for ECT, I 
don't think we will ever feel confident that ECT is effective.  And 
because blinding is impossible, investigators have pretended that the 
research problem does not exist.

Mike Williams


---
You are currently subscribed to tips as: arch...@jab.org.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5n=Tl=tipso=16884
or send a blank email to 
leave-16884-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu

Re:[tips] ECT and the pseudoscience of clinical trials

2012-03-22 Thread Mike Wiliams
Scott, I think I did answer your question.  Subjects and investigators 
are manipulated by Hawthorne effects and expectation biases in ways they 
are not even aware of.  If your interpretation of consent were valid, 
then there would be no need to blind the clinical trials.  Since you are 
into questions, what is the reason clinical trials should be blinded?

It all comes down to the simple fact that no psychological treatment, 
including CPT, psychotropic drugs or ECT, has the support of a single 
blinded clinical trial.  They are all pseudoscience until this happens 
for at least one of them.

This conversation started up again because someone posted a paper on 
connectivity studies following ECT.  The abstract begins, "To date, 
electroconvulsive therapy (ECT) is the most potent treatment in severe 
depression."  PNAS paper: 
http://www.pnas.org/content/early/2012/03/12/1117206109

Detect any bias here?  Are they doing the study to test a hypothesis, 
or to support their ECT clinical service?  Now Time magazine has picked 
up on this study, and soon it will be full-blown, press-release pseudoscience:

Time article 
http://healthland.time.com/2012/03/21/how-electroconvulsive-therapy-works-for-depression/

All they discovered is that ECT scrambles your brain.  We now have firm 
confirmation of this with brain scans.  I bet any induced seizure 
scrambles your brain.  Since they had no nondepressed control group, 
maybe we should volunteer to have an induced seizure for the sake of 
science.

Mike

P. S. Connectivity modeling in fMRI is brand new and no one has figured 
out what it really means.  However, I do have high hopes for it and plan 
to apply it to an fMRI study of orienting the body in space.



From Scott:
   Mike (and Paul B., if you'd like), can you please answer the following 
  question: If so, why would patients be motivated to self-report that their 
  depression is better if they know that the treatment team can't impose the 
  intervention on them?  I don't follow the reasoning here at all.



Re:[tips] ECT

2012-03-20 Thread Mike Wiliams

I guess I will go point by point.

(1) Even though most patients describe the procedure as no more threatening 
than a trip to the dentist, their report is not especially plausible or at 
least not plausible enough to be taken on its own merits (see Paul's message 
below);
No one stated that ECT is more painful or otherwise more aversive than 
the dentist.  Just the possibility of experiencing the side effect of an 
induced seizure is sufficient.  People avoid the dentist too.  Clients 
endorse positive change on self-report measures just to get out of 
seeing a conventional therapist they don't like.

(2) Even though scores of published studies on ECT assure patients that their 
self-reports of depression are confidential, they somehow don't believe 
this assurance of confidentiality, and instead believe the treatment team 
will gain access to this information and use it to decide on the course of 
future treatment;
The published studies do not assure patients that their ratings are 
completely confidential.  They are known by the treatment team; the 
information is simply not revealed to people outside the team.  In 
addition, the team also usually completes the Hamilton Rating Scale, 
which includes an interview with the patient.

(3) Even though most (today, probably all) patients in published controlled 
outcome studies of ECT give full informed consent regarding whether to 
receive the treatment (and therefore the treatment is voluntary), they somehow 
don't believe that their participation is voluntary and instead believe that 
the treatment will be forced upon them against their will.
ECT is sold to the patients.  I don't know of any study that used random 
assignment of treatment types, unless it was to different types of ECT.  
It is very common to have random assignment of drugs or psychotherapy.  
Intractable patients are the ones offered ECT.

(4) Even though patients in contemporary controlled studies of ECT are told 
they will be randomly assigned to either a treatment arm or an alternative 
treatment arm, they don't actually believe that the assignment is random, and 
instead believe that the investigative team can decide at will whether to alter 
the treatment plan on the basis of their self-reports.
I know of no study of ECT that included other treatments in which the 
subjects were randomly assigned.


The major point you are missing is that there can be no blinding of an 
ECT condition.  The expectation biases associated with this are well 
known, and they can account for the treatment effects associated with 
all the depression treatments.  The investigators have the burden of 
proof here, and they neglect the problem in the same way that obesity 
researchers fail to notice that their entire science is based on the 
dieting behavior of young women.  The problem has persisted so long, and 
is so hard to fix, that the entire field assumes it doesn't exist.  If 
an expectation bias exists, it could account for the treatment effect, 
and the investigators have the burden of partialing it out.  I think it 
would be very illuminating if someone running a blind trial would just 
ask the patients to indicate which condition they thought they were in.
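A manipulation check of the kind proposed here is cheap to run: collect each patient's end-of-trial guess about their assignment and compare the correct-guess rate to chance.  A small sketch with entirely made-up records:

```python
from math import comb

# Hypothetical (invented) end-of-trial records: (actual arm, patient's guess).
records = [
    ("active", "active"), ("active", "active"), ("active", "placebo"),
    ("active", "active"), ("placebo", "placebo"), ("placebo", "active"),
    ("placebo", "placebo"), ("placebo", "placebo"), ("active", "active"),
    ("placebo", "placebo"),
]

n = len(records)
correct = sum(actual == guess for actual, guess in records)

# Exact binomial tail: probability of guessing this well or better
# by flipping a fair coin, i.e. if the blind had actually held.
p = sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n
print(f"{correct}/{n} correct guesses; one-sided p = {p:.3f}")
```

A guess rate well above 50 percent is evidence the blind failed, which is exactly the situation in which expectation bias can contaminate self-report outcomes.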


The final point I need to make is that ECT may be effective.  There is 
just no experiment that I can think of that will prove the effect.  The 
main confound is expectation bias.


Mike Williams





Re:[tips] How ECT Works

2012-03-19 Thread Mike Wiliams
ECT is just the induction of a seizure.  It should be just a matter of 
time before someone discovers that fMRI connectivity analyses show a 
reduction of connectivity following a seizure.  Notice that the measure 
of depression was still self-report.  ECT has no valid control condition: 
everyone who got ECT knows they received it.  It amounts to the patient 
reasoning, "What do I have to indicate on the self-report measure to get 
these people to stop?"

Mike Williams




Re:[tips] depression as crutch

2012-03-10 Thread Mike Wiliams
The comments on the adaptive nature of depression remind me of a 
character from the Dr. Katz TV show.  Referring to his mother, he stated 
(I am paraphrasing): "After Mom got depressed, she was put on so many 
medications that we never knew how she felt about anything."
I am usually happy to hear that a client is depressed.  If the client is 
suicidal then I worry but a mild-to-moderate level of depression is a
sign that the person is self-reflective.  Experiencing life's turmoil 
without depression or anxiety is an indication of a personality disorder and
a tendency to blame life's problems on external agents.  The 
anti-depression treatments can make everything worse.  Imagine losing your
appetite, ruminating all day AND having dry mouth.  Why are we using 
anti-depressants with such low treatment effects?  There are drugs
that have been available for years that will really make you happy.  We 
don't use those, or anything derived from them.  We use drugs that
basically communicate to the patient: "We will keep giving you this drug 
that makes you feel sick until you tell us that you feel better."  After
a sufficient bout of anticholinergic side effects, including dry mouth 
and constipation, the patient indicates in a session or by response to
a self-report measure that the depression is lifting and they don't 
think they need more medication.


Mike Williams




Re:[tips] Cognitive Reserve

2012-01-22 Thread Mike Wiliams
This concept emerged as a way to explain variance in loss of cognition 
with neurological disorders that produce a generalized
cognitive decline, such as dementia related illness.  The idea is that 
people with higher IQs have further to fall and hence they have
a greater reserve of cognitive abilities.  Higher cognitive function 
makes them more resilient to brain illness.  They have more
developed cognitive abilities to rely upon when they are injured.  It is 
an archaic concept that neuropsychologists have largely
abandoned.  However, in practice we often quantify the degree of 
impairment by a direct reference to an IQ or similar score.  The
current practice of estimating pre-injury IQ with demographics and using 
this as a metric to quantify the presumed loss in IQ associated
with a brain injury or illness incorporates the cognitive reserve 
concept. I can't imagine that anyone believes IQ is an amount of
something that can be lost by monotonic units like water poured from a 
pitcher.  After stating this, I have done a number of studies that 
assumed IQ had this relationship to severity (e.g., Williams, Gomes, 
Drudge & Kessler, 1983, Journal of Neurosurgery).  I predicted IQ from 
initial coma level, and there was a significant correlation.  This happens 
because neuropsych assessment itself is archaic and needs to
develop to the level of neuropsychological theory.  Many years ago, 
Muriel Lezak actually titled her INS Presidential Address 
"The Death of IQ."  She was complaining that this unitary concept was a 
poor way to describe cognitive function after a brain illness.
The IQ concept is still alive and kicking, like a zombie.  Maybe we need 
to give it a clear shot to the head.


Mike Williams




Re:[tips] Watch Out! Here Come The Web-Based Textbooks!

2011-12-18 Thread Mike Wiliams
The really strange thing is that there will likely be only a few of 
these textbooks and they will be used for every course taught on the 
planet.  Presumably the best few or even the best one will surface and 
beat the competition.  Everyone will be taught the same generic core of 
the topic.  Both
Steve Jobs and Bill Gates were surprised that education had not been 
substantially influenced by the PC and the internet.  Steve Jobs planned
to develop these books for free as a way to sell iPads.  Now that Apple 
has created a way for publishers to sell through the iTunes store and
protect their intellectual property, the authors and publishers have a 
greater incentive to create these books than ever before.  After
teaching undergrad statistics a few times, it became clear that 
interactive models were the way to go.  Here is a link to a set of them 
that I programmed using LiveCode:

http://www.learnpsychology.com/courses/statcourse/programs.htm

Now that LiveCode has the capability to save the programs as iOS and 
Android apps, I plan to group them into a set and sell them as a single 
app in the app store.  If they do well, I will keep developing them.
I would like to work one up for the general linear model.  Good old 
Excel can actually be used for some interactive exercises.
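For what it's worth, the general linear model itself takes only a few lines to demonstrate; here is a sketch in Python/NumPy (invented data, not one of the LiveCode exercises) of the kind of fit an interactive model could be built around:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(0, 1, n)                        # continuous predictor
group = rng.integers(0, 2, n).astype(float)    # two-group factor (dummy coded)

# Generate outcomes from known coefficients: intercept 1.0,
# slope 0.5 for x, group effect 2.0, plus unit-variance noise.
y = 1.0 + 0.5 * x + 2.0 * group + rng.normal(0, 1, n)

# GLM: design matrix of intercept, predictor, and dummy; least-squares fit.
X = np.column_stack([np.ones(n), x, group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(beta, 2))
```

The estimated coefficients land close to the generating values, which is the point an interactive exercise can make vivid by letting students drag the true parameters and noise level.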


Mike Williams




Re:[tips] Thank you Steve Jobs,but ...........

2011-12-04 Thread Mike Wiliams
I guess the easiest way to deal with the contribution of Steve Jobs is 
just to quote this passage from his biographer, Walter Isaacson:


The saga of Steve Jobs is the Silicon Valley creation myth writ large: 
launching a startup in his parents' garage and building it into the 
world's most valuable company.  He didn't invent many things outright, 
but he was a master at putting together ideas, art and technology in 
ways that invented the future.  He designed the Mac after appreciating 
the power of graphical interfaces in a way that Xerox was unable to do, 
and he created the iPod after grasping the joy of having a thousand 
songs in your pocket in a way that Sony, which had all the assets and 
heritage, never could accomplish.  Some leaders push innovation by being 
good at the big picture.  Others do so by mastering details.  Jobs did 
both, relentlessly.  As a result, he launched a series of products over 
three decades that transformed whole industries:


1) The Apple II, which took Wozniak's circuit board and turned it into 
the first personal computer that was not just for hobbyists.


2) The Macintosh, which begat the home computer revolution and 
popularized graphic user interfaces.


3) Toy Story and other Pixar blockbusters, which opened up the miracle 
of digital imagination.


4) Apple stores, which reinvented the role of a store in defining a brand.

5) The iPod, which changed the way we consume music.

6) The iTunes Store, which saved the music industry.

7) The iPhone, which turned mobile phones into music, photography, 
video, email and web devices.


8) The iPad, which launched tablet computing and offered a platform for 
digital newspapers, magazines, books, and videos.


9) iCloud, which demoted the computer from its central role in managing 
our content and let all our devices sync seamlessly.


10) And Apple itself, which Jobs considered his greatest creation, a 
place where imagination was nurtured, applied and executed in ways so 
creative that it became the most valuable company on earth.


No, Steve Jobs did not invent the MP3 format.  However, without the 
iPod, the MP3 format would have languished in the bowels of brain-dead 
MP3 players, and the music industry would have been dead after a few 
years of rampant piracy.


Steve Jobs brought his imagination to all these products.  Without his 
imagination and incredible drive to change the world, we would likely 
still be using brain-dead products like CP/M, MS-DOS, WordStar, dBASE II, 
Sony Walkmans, and Windows.  Systat was the first stats package to try a 
GUI.  The interface for SPSS is just plain brain-dead: Legacy Menus?


Steve Jobs also brought a philosophy of product development that proved 
incredibly successful.  The software and hardware must be united.  If 
you design using an open architecture, you design for a common element 
and not excellence.  Bill Gates could never yell at the engineers at 
IBM, Dell, or Gateway to make the hardware match his software.  As a 
result, Windows was designed for the common medium, the mediocre.  Jobs 
could demand that Bill Atkinson figure out how to layer the windows on 
a 9-inch Mac screen because all they had was 128K of RAM to work with.  
Gates never had that control, and Microsoft produced a brain-dead 
interface when he knew Windows could be better if he had Jobs's level of 
control.


The legacy of Steve Jobs is independence, imagination and the reality 
distortion field.  If we don't distort reality from time to time, we 
will remain stuck in a world of crappy, brain-dead products and systems.


Mike Williams



Re:[tips] Types of Brain Scans

2011-10-25 Thread Mike Wiliams
One of my research areas involves the study of clinical applications of 
fMRI.  Probably the best way to discriminate the methods is just to look 
at the
different scans.  The only scans that really look similar are PET and 
SPECT scans.  They use a similar process and the spatial resolution of 
each is
similar.  CT scans are very sharp structural images constructed 
from X-ray images.  CT scans have greatly improved in recent years and
show structural lesions, such as skull fractures and hemorrhages, very 
well.  They are also still less expensive than MRI scans.  They have been
improved by greater post-processing of the CT data.  CT is still the 
first level of scanning for traumatic brain injury.  The spatial 
resolution of
MRI and CT scanning is much higher than PET, SPECT or EEG scanning.  The 
temporal resolution of EEG is much higher than the others.


MRI scans are built from signals given off by the body's water molecules 
suspended in a very strong magnetic field.  Most of them line up with 
the field.  Send in a radio transmission and they precess away from the 
field because they absorbed some of the energy from the radio 
transmission.  They would really like to line back up with the field.  
When they do, they send out a tiny radio transmission that is picked up 
by an antenna coil wrapped around your head and body.  This small radio 
transmission is analyzed and mapped in gray scale.  This is obviously 
simplified, but it is the basic
process.  MRI scans also render a structural image that depicts the soft 
tissues extremely well.  It is the scanning workhorse of the hospital and
medical diagnosis made a great leap forward when it was invented.  The 
story of its invention is very entertaining and important for students to
learn.  It represents the best of American pragmatism and 
inventiveness.  The guys involved were great characters.  PBS did a nice 
story on its invention that you can use in class:

http://www.pbs.org/wnet/brain/scanning/mri.html

The best part was when they scanned the first person, one of the 
research team members.  They got no meaningful data.  They figured out 
that the reason was that he was too fat.  They all then turned to the thin 
guy on the team.  He stated that he would get in the magnet if nothing
happened to the first guy for 6 weeks.  Since no one had been scanned 
before, he felt there might be some harm caused by the magnet.  After a
time, they scanned him and graphed the data by hand coloring darker and 
lighter cells in graph paper.  When they did this, his internal organs
became visible.  Then they knew they had it.  The rest was history.  MRI 
scans have been greatly improved by faster data collection, larger, open
magnet cores, faster computer processing, and stronger magnets.  The 
images have become higher and higher resolution, constructed faster and 
faster.  Millions of MRI scans are conducted each year in the US.

fMRI uses the same magnetic resonance data as the MRI scan.  Sherrington 
discovered 100 years ago that neural activity causes a local increase
in the flow of oxygenated blood.  Ogawa showed that this increase in 
blood flow causes an increase in the MRI signal (BOLD).  By subtracting 
the signal level at a time when the brain is active from the signal 
level when it is not, I can map the locations of activity.  By 
controlling the cognitive activity, I can image where that activity 
occurs.  fMRI methods are improving along with MRI data collection in 
general, and with better data analysis methods.  The current emphasis is 
on methods to study changes in BOLD responses over time, which reveal 
the connectivity of brain areas functioning together.
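The task-versus-rest subtraction logic can be sketched with toy numbers (synthetic time series, not real scanner data): a boxcar design alternating rest and task blocks, one voxel whose signal follows the task and one that is pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols = 100
# Block design: 10 volumes of rest, 10 of task, repeated across the run.
boxcar = np.tile([0] * 10 + [1] * 10, n_vols // 20).astype(float)

# Two toy voxel time series: one tracks the task, one is pure noise.
active_voxel = 2.0 * boxcar + rng.normal(0, 1, n_vols)
noise_voxel = rng.normal(0, 1, n_vols)

def activation(ts, design):
    """Simple subtraction estimate: mean(task) - mean(rest)."""
    return ts[design == 1].mean() - ts[design == 0].mean()

print(f"active voxel effect: {activation(active_voxel, boxcar):.2f}")
print(f"noise voxel effect:  {activation(noise_voxel, boxcar):.2f}")
```

Real analyses convolve the boxcar with a hemodynamic response function and fit a GLM per voxel, but the contrast being estimated is this same task-minus-rest difference.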

DTI imaging is also something I have studied; it involves the detection 
of the movement of water molecules.  You can render the orientation of 
white matter pathways in a very detailed map.  DTI still uses the same 
MRI data, collected under a special protocol.  We used it to study 
lesions in seizure disorder.

I think many people who are disparaging of fMRI are reacting to the 
sensational stories using it.  I have used fMRI to study individual 
patients and
I have been very impressed by its utility.  We discovered patients who 
had receptive language (Wernicke's area) in the left hemisphere and
expressive language (Broca's area) in the RIGHT frontal lobe, language 
areas that defined the margins of a tumor, and clear evidence of a left 
upper visual field defect caused by a tumor in the right temporal lobe.

We are still in the early stages of developing fMRI.  It is already the 
dominant method of studying brain function and this will only get better 
as the
method improves.  When I started in this area, it took about 3 hours to 
analyze the data for a single subject, and this involved a lot of staring at
the computer while it crunched numbers.  Now, the new scanners analyze 
the data on the fly and render an image after each active period.  You
can adjust 

[tips] Making iPhone/iPad Apps for Education

2011-10-15 Thread Mike Wiliams

If you are interested in developing education apps, I highly recommend
LiveCode (runrev.com).  After making your application on a Mac or Windows
machine, you can save it for the Mac, Windows, Unix, Web application,
iOS (iPhone, iPad), Android and soon Windows Mobile.  LiveCode uses
natural language and you can test each component of your application
before compiling.  LiveCode saves considerable time and trouble over 
arcane programming languages such as Objective-C.  LiveCode is a 
derivative of the incredible innovation of Steve Jobs and Bill Atkinson, 
the inventor of HyperCard.  Think Different.

You can check out my apps by searching for Brainmetric at the app
store.  I also have a number of education programs at my personal
teaching site, learnpsychology.com

Mike Williams




Re:[tips] tips digest: October 07, 2011

2011-10-07 Thread Mike Wiliams
I find comments like these remarkable.  It reminds me of the old Mac vs 
PC arguments in the past two decades.  All the computers and operating 
systems mentioned, especially CP/M, are dinosaurs that did not survive 
the meteor of the Macintosh and its GUI.  Which of these systems still
remain?  The only systems we have now are Mac OSX, Mac iOS and a thing 
called Windows that is just the Mac OS done with crayons.


Steve had the balls to risk everything for a vision of information 
management products that have absolutely changed the world.  He did a
remarkable thing: he saw needs that we didn't see and satisfied these 
needs with products that make us look cool.


Mike Williams


On 10/8/11 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

I could be wrong but Steve Jobs is just a bit player in this epic.





[tips] CHRONICLE: Are Psychiatric Medications Making Us Sicker?

2011-09-22 Thread Mike Wiliams


No, that was definitely NOT Mike's point. I was particularly appalled by Mike's statement 
that ECT is pure behavior therapy: 'Mr. Smith, we understand that you are unhappy.  
We will continue to induce seizures until you feel better.' After a few seizures, Mr. 
Smith endorses positive change on the Beck Depression Inventory.  The psychiatrist stops 
inducing seizures. ECT is a punishment condition.

ECT has been extensively studied for many years and the idea that it is a 
punishment condition has been thoroughly debunked.


This is false.  It has not been debunked.  The expectation bias that existed back in the 
snake-pit days is the same one used today: "We will induce seizures until the 
patient changes for the better."  These changes are manifested on the Hamilton or 
Beck or some other self-report measure.  If I want the seizures to stop, I endorse 
positive change on the measure.  It's a very simple mechanism of change that has nothing 
to do with depression.  The investigators and supporters of ECT have the burden of proof 
to partial out this explanation and prove the punishment condition is not valid.  This is 
difficult or impossible to do because of all the limits on constructing blinded 
conditions that I presented before.



The most obvious objection to that idea is the fact that modern ECT uses 
general anesthesia. The patient wakes up and doesn't know whether or not the 
ECT has been administered.


This is false.  The patients are not asked to indicate which condition 
they were in.  Since the obvious intent of the anesthesia etc. is to 
create a placebo, why don't the researchers just ask?  I can only think 
the reason is that they are afraid of the answers.  They may discover 
the placebo did not work and ECT patients were aware that seizures 
produce side effects (memory loss, extended lethargy, etc.) that are 
different from anesthesia.  On our research floor, the ECT patients hang 
out with the patients receiving alternate treatments, and they all talk 
to each other about the treatment.


The research question is: How does the patient's understanding of the 
treatment condition influence ratings on the dependent measures?  You 
could even design a study in which all the patients receive a sham 
treatment and you examine the difference associated with believing 
you're in the treatment condition vs. believing you're in the placebo 
condition.  The research hypothesis is obviously that subjects will 
endorse change consistent with their beliefs.  If I believed I was 
receiving ECT and I would prefer not to continue receiving seizures, I 
would probably indicate that I am happier now than I was the last time I 
took the BDI.


The general point is that every human in a research study thinks about which
treatment or placebo they are receiving and makes dependent measure ratings
consistent with their beliefs.  I can't believe anyone thinks this is a
radical idea.  All the investigators have to do is study it.  Why is it not
studied?

Besides, if it was such a punishment, a painful shock should be even more
effective than a seizure.  It's not.  And eyes-open ECT (much scarier) should
be more effective than ECT done under anesthesia.  It's not.  And bilateral
ECT, with its severe retrograde amnesia, should be less effective than
unilateral ECT with its negligible retrograde amnesia.  It's not.


The expectation bias exists for all these conditions.  If patients feel they
are receiving seizures and they don't like seizures, they will endorse
positive change on the dependent measures in order to avoid more seizures.
This is a classic punishment condition.  It doesn't have to be related to
pain dosage.  The patients just endorse enough change to make it stop.
Again, why don't the researchers just ask patients about their expectations?

Mike's diatribe sounds more like a humanistic harangue than an informed opinion.


Name-calling is not an argument.


And while we're on the topic, would Mike be as critical of talk therapies as
of biological therapies?  Talk therapies are, of course, subject to most of
the same criticisms that he levels at biological therapies.  But that
discussion gets even more interesting, since one can argue that talk
therapies ARE a placebo and that their practitioners are the
institutionalized dispensers of placebos, per Martin Gross in The
Psychological Society.  And once that is said, is it a bad thing?


I was critical of all psychotherapies on similar grounds.  I don't think you 
read all the comments from other posts.  The main difference is that the talk 
therapies are even worse.  Since everyone in psychotherapy outcome studies 
accepts that placebo conditions are impossible to construct, they don't even 
ponder the consequences any longer.


Placebo effects are real, powerful, and have a clear biological basis.


There is no evidence placebos have a biological basis.  They represent
cognition working full time to produce expected changes on self-report or
observer measures.

[tips] CHRONICLE: Are Psychiatric Medications Making Us Sicker?

2011-09-22 Thread Mike Wiliams
On 9/23/11 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Although it's probably futile to do so, since you have been consistently
ignoring all contrary evidence to your claims (e.g., all the people who have
pointed out that many treatment studies include objective observational
measures and manipulation checks),


I have not ignored any of these.  There are no objective observational
measures if all the observers know who is being treated.  Please name a
measure that is not influenced by expectation.  The effect size for
antidepressant treatments was established using the BDI (self-report) and the
Hamilton (observer report), both measures that can be influenced by
expectations.  My hypothesis is that the treatment effect may simply be the
result of expectations, a factor well known to influence dependent measures.
How can you ignore this?

I'll point out here that there is a long history of demonstrated placebo 
effects on non-self-report measures, including:

heart rhythm
blood pressure
sensorimotor impairment
gastric acid secretion in ulcer patients
ACC, prefrontal, orbitofrontal, and amygdala activation
dopamine levels
immune system functioning
asthma symptoms
bronchitis symptoms
respiratory depression
I'm not sure what the point is here.  Are these cited as evidence that
placebo is biological, or that there are measures that are not influenced by
expectation?  These are not measures used to assess the outcome of
psychotropic medications.  All the measures used to assess the effects of
psychological treatment outcome are self-report or observer measures.


These are good examples of dependent measures that are influenced by
expectation bias in studies of medication to treat heart disease, high blood
pressure, etc.  This manipulation of expectation is the reason placebo
conditions were invented in the first place.  I have never seen a study of
depression or any other psychological treatment that included these measures.

I think any reasonable point I made has either been well taken or masked by
these curves.  I have to admit that posing such extreme qualifiers as "all"
and "every" usually generates irritation and disbelief.  The fact is that it
is all and every study.  I feel like Diogenes in modern times, looking for
one true, double-blinded study of psychological treatment.

Mike



---
You are currently subscribed to tips as: arch...@jab.org.
To unsubscribe click here: 
http://fsulist.frostburg.edu/u?id=13090.68da6e6e5325aa33287ff385b70df5d5n=Tl=tipso=12900
or send a blank email to 
leave-12900-13090.68da6e6e5325aa33287ff385b70df...@fsulist.frostburg.edu


Re:[tips] CHRONICLE: Are Psychiatric Medications Making Us Sicker?

2011-09-21 Thread Mike Wiliams

Hello All.

I guess I should respond to Scott's comments point by point.

Mike, I had thought your very point was because most studies of 
antidepressants aren't conducted in a strictly double-blind fashion 
(because of medication side effects...although you didn't address active 
placebo studies), we cannot draw clear-cut conclusions from them.  But 
Mike, you are now saying that we can conclude with confidence that 
antidepressants have no treatment effect.  One can't have things both 
ways - if the studies are categorically invalid (not merely imperfect) 
as you asserted in previous messages, then one can't draw conclusions 
from them one way or the other.  Mike, I don't follow your logic here.


Since the drugs are for sale, the FDA thinks they work.  By your statement
that we cannot draw clear-cut conclusions from the studies, we should
logically conclude that there is no evidence the drugs work.  Since the FDA
must make a decision when a drug company makes an application, the FDA should
assume the null hypothesis until there is evidence to support a treatment
effect.  My assertion that the drugs are ineffective comes from my own
personal observations of patients who don't get better but endorse change on
the measures.  I admit that my personal observations of depressed people are
not a basis for generalization.  However, this is all I have, since none of
the studies are properly blinded and valid.  I can't prove the negative.  It
is the burden of the drug companies to prove there is an effect before we
give the drugs to patients.  The drugs are being given now as if the effect
was proven.

Mike, you also never responded to my points or Jim Clark's questions
regarding your earlier claims that all of the dependent measures in
antidepressant studies come from either clients or therapists themselves.
When I pointed out (with references to meta-analyses) that this assertion was
false, you merely continued to reiterate your previous points without
acknowledging our criticisms.


These were just examples of a general point.  I will rephrase it: find a
dependent measure that is not influenced by expectation bias.  They all
involve someone making a rating of a psychological construct or ratings of
behavior.  All the people making the ratings are involved in the study and
influenced by expectations for treatment effectiveness.  This includes
parents of children who are experiencing the side effects of the drugs.  All
the investigators have to do is study the expectation bias.  Just ask the
subjects after the study is completed to indicate which condition they
thought they were in.  This is rarely, if ever, studied.  Studies of this
would go a long way toward explaining the role of cognition in treatment and
placebo.  For humans, placebo is always a cognitive manipulation of
expectation.
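The end-of-study survey proposed above amounts to a simple blinding check.
A minimal sketch with entirely hypothetical counts (the 2x2 table and all
numbers are invented for illustration): cross actual assignment with the
subject's guess and compute a Pearson chi-square by hand.

```python
import numpy as np

# Hypothetical end-of-study survey: each subject guesses their condition.
# Rows: actual assignment (drug, placebo); columns: guess (drug, placebo).
table = np.array([[42.0,  8.0],
                  [15.0, 35.0]])

n = table.sum()
# Expected counts under independence of guess and assignment.
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
chi2 = ((table - expected) ** 2 / expected).sum()  # Pearson chi-square, df = 1

accuracy = (table[0, 0] + table[1, 1]) / n
print(f"guess accuracy = {accuracy:.2f}")  # 0.77, vs. 0.5 if blinding held
print(f"chi-square = {chi2:.1f} (df = 1)")  # 29.7
```

A chi-square of about 29.7 on 1 df far exceeds the 3.84 critical value at
alpha = .05: in this invented dataset the guesses track actual assignment,
which is exactly what a failed blind would look like.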


Contrast this with a dependent measure that is mostly not influenced by
expectation bias: body weight.  Psychologists who study obesity treatment
actually have a dependent measure that is very hard to manipulate by
expectation.  If I have an expectation bias that I'm in treatment, it is
still very hard to lose weight (don't we know).  It is very easy to rate my
mood a point or two better on a self-report measure.


A meta-analysis of 100 unblinded studies is a meta-analysis of 100 poorly
designed studies.  If all the individual studies are noise, the meta-analysis
will just add up the noise.  The meta-analysis should come to the conclusion:
since none of the studies were properly blinded, we cannot conclude that
there is a treatment effect.  Instead, the possible effects of an expectation
confound are itemized and discussed at length.  The lack of blinding is never
measured or considered.  It's only in the context of many side effects and
treatment failures that issues like this even reach the surface.
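The arithmetic behind "adding up the noise" can be sketched with a toy
simulation (all numbers hypothetical): if every study carries the same
expectation-driven shift in ratings, a fixed-effect pool of 100 such studies
just recovers that shift with great precision, rather than cancelling it.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.0   # assume the drug itself does nothing
bias = 0.3          # hypothetical expectation-bias shift, same in every study
n_studies, n_per_arm = 100, 50

effects, variances = [], []
for _ in range(n_studies):
    drug = rng.normal(true_effect + bias, 1.0, n_per_arm)  # inflated ratings
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    effects.append(drug.mean() - placebo.mean())
    variances.append(drug.var(ddof=1) / n_per_arm
                     + placebo.var(ddof=1) / n_per_arm)

# Fixed-effect inverse-variance pooling.
w = 1.0 / np.array(variances)
pooled = (w * np.array(effects)).sum() / w.sum()
se = (1.0 / w.sum()) ** 0.5

print(f"pooled 'effect' = {pooled:.3f} +/- {se:.3f}")
# The pool converges on the bias (~0.3), not on the true effect (0.0):
# aggregation sharpens a systematic error, it does not average it away.
```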

I have to confess that I'm finding this TIPS discussion regarding
antidepressant and therapeutic efficacy increasingly troubling.  It seems to
be more of a discussion of ideology than science.  It also seems to be marked
by the kind of dichotomous, categorical claims (e.g., "studies of therapeutic
efficacy are invalid," "antidepressants have no treatment effect," "there is
nothing there," "ECT is pure behavior therapy," "ECT is a punishment
condition," "the Beck Depression Inventory ... is not a measure of mood")
that we would rightly criticize in our students.


This is just a veiled reference to my personal characterization of study
findings.  My qualifiers are extreme because the research deficits in this
area are extreme.  If all the studies are unblinded, then none of the studies
are blinded.  I don't have to say some studies are unblinded, because the
truth is that all are unblinded.  The studies remain unblinded by assumption,
and everyone behaves as if the studies are well designed.  Referring to ECT
as a punishment condition is just something you have never heard before.
This is exactly the

[tips] CHRONICLE: Are Psychiatric Medications Making Us Sicker?

2011-09-20 Thread Mike Wiliams
Reading this article brought back many memories and disillusionment with
clinical trials.  However, I believe there are opportunities to study what a
placebo is, and how this condition influences our dependent measures.


The only psychotropic medications that work are those that sedate 
patients who are anxious, manic, or actively psychotic.  They
actually help people because they chemically suppress the worst 
symptoms.  They don't cure people and they are associated with
so many adverse side effects that no one can take them day in and day 
out without becoming a zombie.


The other medications, including all the antidepressants, have no treatment
effect.  The effects represent the manipulation of the patients to endorse
positive changes on the dependent measures.  As a result of the expectation
biases I described before, the patients endorse change on the measures but
their mood stays the same.  Anyone who describes placebo as a treatment
effect is just trying to extract something positive from ingesting these
chemicals when there is nothing there.


The positive change endorsed by the subjects is not a positive change.  The
validity of the depression measures has been compromised by the expectation
bias.  The Beck Depression Inventory is now a measure of expectation bias and
not a measure of mood.

ECT is pure behavior therapy: "Mr. Smith, we understand that you are unhappy.
We will continue to induce seizures until you feel better."  After a few
seizures, Mr. Smith endorses positive change on the Beck Depression
Inventory.  The psychiatrist stops inducing seizures.


ECT is a punishment condition.

Just to belabor the point: there are no double-blinded studies of
psychotropic meds or of any psychotherapy interventions.  Given this
situation, we are currently ruminating about the significance of noise.


Mike Williams

Are Psychiatric Medications Making Us Sicker?
By John Horgan
Several generations of psychotropic drugs have proven to be of little or no 
benefit, and may be doing considerable harm.
http://chronicle.com/article/Are-Psychiatric-Medications/128976/





Re:[tips] Blinded Studies

2011-09-16 Thread Mike Wiliams

Hello All.

This is starting to look like a response to journal reviewers.  Rather 
than make a long list of point-by-point responses, I would like to just 
state some points that might  generate a paradigm shift in how we think 
of outcome research.


The major point I want to make is one that everyone will agree with but 
no one has thought of the consequences.


This point is that humans are sentient beings who attempt to understand and
figure out which treatment condition they are in.  After they make a
judgement about this, they behave consistently with the social demands of the
research setting and other expectation biases that are consistent with their
judgments.  All the research design guides, such as Campbell and Stanley,
assume that the subject is a passive agent in the process and does not
interact with the research design.  Some of the threats to internal validity
suggest something like this, but the guides really need a chapter called,
"How Human Cognition Screws Up Research."

Imagine you could create a placebo that had the same side effects as the
medication but did not have an active agent.  Both groups would have the same
side effects.  You then randomly assign the meds and placebo, and the
subjects and investigators are not told who got which one.

Now, I'm a subject in the placebo condition and I start experiencing dry
mouth and constipation.  I infer from this that I am getting medication.  I
then endorse positive change on the Beck Depression Inventory because I want
to help out the researchers, or otherwise demonstrate an expectation bias.
Some proportion of the subjects in the placebo condition behave as if they
were treated on the self-report measures.  This is a scenario in which the
side effects are controlled and the result is still noise.  I understand that
some of you will counter that you could inform all the subjects that they
might experience side effects.  This is always done on the consent form.
However, it is commonly understood that placebos do not cause effects, and
subjects will reason consistently with this.  There is also a limit to how
much any side effects can be emphasized, because you run the risk of actually
suggesting that subjects should experience a side effect.


It is understood that all psychotherapy outcome studies cannot be blinded.
It is so widely understood that people don't think about its consequences any
longer.

If the standard for empirical validation is one blinded study showing a
treatment effect, then no study of meds or psychotherapy meets the standard.

I think this area would be fascinating to study.  Survey the subjects and ask
them which condition they thought they were in and why.

Maybe we should only analyze the data of subjects whose beliefs were
consistent with the treatment condition they were actually assigned to.
Someone should at least analyze the relevant groups to see if there is a
difference.
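The subgroup comparison suggested above can be sketched in a few lines
(entirely hypothetical data and group sizes): cross actual assignment with
believed assignment and compare mean change scores in each of the four cells.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical change scores (rating-scale points improved) for the four
# subgroups formed by crossing actual assignment with subject belief.
groups = {
    ("drug",    "believed drug"):    rng.normal(6.0, 3.0, 30),
    ("drug",    "believed placebo"): rng.normal(2.0, 3.0, 10),
    ("placebo", "believed drug"):    rng.normal(5.5, 3.0, 12),
    ("placebo", "believed placebo"): rng.normal(1.5, 3.0, 28),
}

for (actual, belief), scores in groups.items():
    print(f"{actual:8s} / {belief:16s} mean change = {scores.mean():5.2f}")

# If the two "believed drug" cells look alike regardless of actual
# assignment, belief -- not the drug -- is driving the ratings.
```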


If the FDA mandated that the drug companies verify the blinding, the research
reps would piss their pants.  They know they are passing off a flimflam to
make millions on psychotropic meds.

Mike Williams



[tips] Blinded or Blind Studies

2011-09-14 Thread Mike Wiliams

Hello All.

I thought I would take on each of Mike P.'s points:

  It should be noted that drug treatment studies can be
conducted with within-subject designs such as crossover designs
where one group receives a drug treatment first and, after a
washout period, receives a placebo treatment.  Another group
has placebo first and drug later.  In any event, a competent researcher
will make sure that the design they use addresses threats to the
different types of validity involved in the study and try to make
sure that their effect is negated or minimized.
All the patients experience dry mouth and constipation at every cross-over in
the design.  They all know when the treatment has changed.  This does not
control for the problem.

It might be a somewhat useful to follow the research heuristic
that all treatment/medication studies involving human are invalid
but, as with all heuristics, there will be situations where it fails and
situations where it is right but for the wrong reasons.

This is not a heuristic, it is a fact.  If the studies are not blinded
then they are not valid.  They have no internal validity.

But if one uses
an outpatient population where the participants have no contact
with each other, it is hard to see the merit in Williams' critique.

Outpatients still get dry mouth and constipation.

(3) A minor point:  I would assert that though one's own personal
experience is, perhaps, a useful guide to think about things, it does
not necessarily constitute a valid guide.  It

I was using my own experience as an example.  It was also the only way to
assess this threat, since none of the research studies survey the subjects or
investigators.  I wonder why?

(4)  I have conducted the statistical analysis for a few drug studies
as represented in the following publications:

Your personal experiences are apparently not relevant (see 3 above).

Trying to claim that all studies are invalid or all studies are valid is
logically invalid from an inductive perspective -- it is as foolish as
claiming that all swans are white.  Those without experience with black swans
will swear that all swans are white if that has been their lifelong
experience.

If it barks like a duck and walks like a duck, it must be a swan.

What underlies my criticism is that humans will reason their way through a
study, and if they are given basic information like side effects, they will
infer the presence of treatment or placebo.  All the great research guides
assume that the subjects are passive agents of the treatment research design.
The idea that they would interact with the design causes great problems in
our own inferences.

I generalize to all studies simply because I cannot think of a way anyone,
including myself, can get around the problem.  When problems like this exist,
the very human researchers put their collective heads in the sand and say
it's not so.

We can never be confident that any study of a psychological intervention ever
worked.  We have to accept that none of these interventions will ever meet an
objective standard of empirical support.

Constipation trumps all.

Mike Williams





RE:[tips] Clinical training: Boulder and Denver

2011-09-13 Thread Mike Wiliams

Hello All.

When I was a grad student, we were conducting a clinical trial of Imipramine
vs. Xanax in the treatment of severe depression.  The study was conducted on
an inpatient research unit in the hospital.  The patients lived there, and I
noticed that they would sit in the day room in the evenings and discuss their
treatment.  Although the medications were assigned randomly and the
researchers did not know the assignment, the patients with dry mouth and
constipation knew they were taking the medications.  Those given placebo knew
this because they did not suffer constipation and dry mouth (the
anticholinergic side effects).  The patients knew which treatment they were
receiving, and they communicated this to the investigators because the
investigators constantly monitored the side effects.  The constant monitoring
of side effects unblinds the study.

This happens in every clinical trial of psychotropic medications.

This problem is even more obvious in every clinical trial of psychotherapy.  
All these studies are invalid.

I could explain why they are invalidated by referring to the gigantic 
literature on expectation biases.

Since all the dependent measures involve a judgement by the patient or the 
investigator that the disorder got better or worse, they are
all influenced by the expectation bias that the treatment worked.  I think many 
subjects want to help the researchers and they endorse
small positive changes on the dependent measures.  The people who get placebo 
behave consistent with this because they know they never
got treatment.

All the investigators have to do is anonymously survey the subjects.  The
results will blow their minds.  To my knowledge, this obvious, simple
assessment has never been made.

Now you may be able to understand why the treatment effect size today for
antidepressants is the same as the placebo effect for some studies in the
past - it's all noise.

Mike Williams

__

Hi Mike:

This is a very interesting point but I am not sure that I follow
the argument completely.  Please expand your argument, dotting
the 'i's and crossing the 't's.

Ken

On 9/12/2011 3:00 AM, Mike Wiliams wrote:


Clinical Psychology psychotherapy and psychotropic medication therapies will
never have sufficient empirical support simply because the subjects are never
blind to the treatment condition.  All the investigators are doing is
training the subjects to endorse change on the dependent measures.  That's
why the meta-analyses conclude that any therapy is effective.  I have never
seen an analysis that addressed this research problem.  It's similar to the
obesity researchers who never notice that their entire field is based on the
dieting behavior of young women.



Mike Williams
Drexel University



---
Kenneth M. Steele, Ph.D.   steel...@appstate.edu
Professor
Department of Psychology   http://www.psych.appstate.edu
Appalachian State University
Boone, NC 28608
USA




[tips] How to blind a treatment study of psychotropic meds or psychotherapy?

2011-09-13 Thread Mike Wiliams

Hello All,

It's interesting how Scott and Mike P. dismiss the threat to internal
validity as if a meta-analysis balances out the defects.  All a meta-analysis
does is add up the defects.

The meta-analyses can only present the data that are collected in the
individual studies.  As far as I know, no blinded subject has ever been asked
whether they were in the treatment condition or not.  No blinded investigator
has ever been asked if they could identify the treated subjects.  I
definitely could identify them from their report of side effects.  All the
investigators could have done the same by examining the adverse event
reports.

The observer ratings that Scott refers to can all be influenced by the same
internal defect.  All the observers, including parents, know that the
children are taking a medication because of the side effects.  All these same
people know when a child is being treated with a behavior intervention,
because it appears very different from a waiting-list control or other
control conditions.


All the drug companies and the psychotherapy outcome investigators need to do
is survey the subjects and the investigators to verify the blinding.  They
don't do this because they know that these studies can never be blinded.
They interpret the results as if they are.


Until a genuinely blinded treatment study is conducted, all the effect sizes
in all these studies could be the result of the internal biases that
Campbell, Stanley, and Shadish so eloquently present.

If anyone can present a study that was correctly blinded, or even present a
way this could be done, it would advance the field 100%, since all the
studies done up to this point have presented noise.

The research defect I described doesn't exist among studies in which the
blinding isn't threatened by side effects and other clear indications of the
treatment condition.

Until a genuinely blind treatment study is conducted, these drugs and 
psychotherapy interventions have no empirical validation.


No insurance company should pay for these treatments until they are 
empirically validated.


Isn't anyone but me curious about why placebos are sugar pills?  Why not try
a salt pill?  The control condition must be similar to the treatment
condition, or human subjects quickly figure out which condition they are in,
and they are very influenced by the social setting of research.


Mike Williams



On 9/14/11 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Mike W. is right to raise useful questions regarding the internal validity of
psychotherapy designs, but I agree with Mike P. that he is wrong to
categorically dismiss all of them as invalid.  Surely, no study is perfect,
but many of them yield highly useful inferences.  In addition to Mike P.'s
endorsement of Don Campbell's writings on internal validity, I'd like to add
Campbell's helpful principle of the "heterogeneity of irrelevancies."  The
most helpful inferences derive from the consilience of multiple independent
studies, all with largely offsetting flaws.





RE:[tips] Clinical training: Boulder and Denver

2011-09-12 Thread Mike Wiliams
Thought I would chip in since I have been teaching in PhD Clinical Psych 
programs and graduated from one.  I have also taught at a
clinical program within a medical school (Hahnemann) and those from 
traditional Arts  Sciences (Memphis, Drexel).


I am pessimistic that Scott's future world of BAs working in medical centers
supervised by PsyDs and PhDs will ever work.  Medicare does not reimburse
anyone below the licensed doctoral level.  The focus on empirically supported
treatments is not something invented by cognitive-behavioral psychologists
who conduct a lot of research.  It is a process of review put in place by
insurance companies to deny treatment.  Psychology is such a minor cost that
I am sure they could not care less about even getting documentation from us.
The insurance companies will always up the ante and require higher and higher
levels of empirical support, until only obvious, life-saving medical
interventions will be compensated.

I find it very interesting how the empirically supported therapies arguments
have factored into the theoretical differences in clinical psychology.
Groups that have been at it for years, such as psychoanalytic and other
dynamic therapies vs. behavior therapies, are fighting it out over who has
empirical support.  Since CBT and BT have always had more empirical study
than the others, the advocates for CBT and BT have held that these therapies
are superior to therapies that are unstudied.

Clinical Psychology psychotherapy and psychotropic medication therapies will
never have sufficient empirical support simply because the subjects are never
blind to the treatment condition.  All the investigators are doing is
training the subjects to endorse change on the dependent measures.  That's
why the meta-analyses conclude that any therapy is effective.  I have never
seen an analysis that addressed this research problem.  It's similar to the
obesity researchers who never notice that their entire field is based on the
dieting behavior of young women.


The best research in my specialty of neuropsychology is done in the 
clinic.  There are even private practice neuropsychologists who
conduct a lot of research.  You can't sit up in an Ivory Tower and 
conduct clinical psychology research.


I'm sure that Scott has noticed that the number of available PhD slots is
getting smaller.  It reminds me of the history of the English and French in
Quebec.  The French just eventually overpopulated the English.  If clinical
psychology is just practitioners, then we have failed.  We had a chance of
being unique with the Boulder or Vail models.  My interpretation of the
models was that we should train practitioners who conduct clinical research.
Many of our PhD graduates actually do this.  There are actually PsyD
graduates who do this.


Working in a medical setting, I was often consulted by physicians because of
my research training.  Although physicians often master the most esoteric
calculus, molecular biology, and genetics in order to get into and through
medical school, they are often surprised by this thing called the t test.  We
have this unique scientist-practitioner training that is best implemented
when a trained scientist is confronted with a real clinical problem.


The last point I want to make is that the base of academic jobs is not 
high enough to employ all these Ivory Tower, academic-only graduates.


Mike Williams
Drexel University









Re:[tips] Factor Analysis of Dichotomous Variables

2011-08-01 Thread Mike Wiliams
Any variable set that produces valid correlations can be factor analyzed.  FA
is just the reduction of a correlation matrix.  I even asked clinicians to
generate a correlation matrix for the WAIS subtests and then factor analyzed
the matrices.  Everyone was remarkably consistent in reifying the factors.
They all believed that the subtests were much more intercorrelated than they
are, and they were all apparently strong believers in Spearman's g.  IQ
reigns!  It was very difficult for people to incorporate noise into their
judgements.


Since you have a hypothesis about the item clusters, you should look into
confirmatory factor analysis.  You basically give the procedure the model you
have, and the analysis tests the hypothetical factor matrix against the
pattern in the data.
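A minimal illustration of "FA is just the reduction of a correlation matrix"
(a toy 4-variable correlation matrix, not real WAIS data): an
eigendecomposition of the matrix itself recovers the cluster structure.

```python
import numpy as np

# Toy correlation matrix: two tight clusters, variables (1,2) and (3,4).
R = np.array([
    [1.0, 0.7, 0.2, 0.2],
    [0.7, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.7],
    [0.2, 0.2, 0.7, 1.0],
])

eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]      # reorder largest-first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated component loadings: eigenvector scaled by sqrt(eigenvalue).
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
print("eigenvalues:", np.round(eigvals, 2))   # [2.1 1.3 0.3 0.3]
print("two-factor loadings:\n", np.round(loadings, 2))
```

The first two eigenvalues (2.1 and 1.3) carry 85% of the variance, so this
matrix "reduces" cleanly to two factors, matching the built-in clusters.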


Mike Williams
Drexel University



[tips] Puzzling Alzheimers diagnosis

2011-04-25 Thread Mike Wiliams
It's a ploy by the drug companies and the people they support to have 
more stages for drug development and treatment.  Imagine the market if 
you can sell a drug to treat mild cognitive impairment.  The drug 
companies tried this before: they called the disorder age-associated 
cognitive impairment.  The FDA did not approve the trials here, so they 
did most of the research in Europe.  Those trials did not produce 
significant results.


Mike Williams

Subject: Puzzling Alzheimers diagnosis
From: michael sylvester msylves...@copper.net
Date: Sun, 24 Apr 2011 16:12:32 -0100
X-Message-Number: 1

The new guidelines for evaluating Alzheimer's distinguish among three 
stages, namely, a pre-clinical stage, mild cognitive impairment, and 
dementia.  It is the pre-clinical stage (the stage without symptoms) 
that I find puzzling.  It would seem that if there are no symptoms, the 
condition does not deserve to be called a stage; otherwise virtually 
all conditions will have a pre-clinical stage.  I guess we are all in a 
pre-clinical stage for developing schizophrenia.  Come on, I can 
envision a post-clinical stage, but pre-clinical?

Michael omnicentric Sylvester,PhD
Daytona Beach,Florida





Re:[tips] B vitamins, Alzheimer's, and telling the whole story

2010-09-11 Thread Mike Wiliams
It appears to me that we have been struck once again by publication 
bias and press-release science.  The authors can't simply state 
negative findings because no one will publish the paper.  I also expect 
the study would never have made this discussion list if the findings 
didn't show an effect.

Mike Williams




Re:[tips] tips digest: Magic Tricks

2010-08-29 Thread Mike Wiliams
 Here is a trick I actually use in Perception class.  I usually throw 
it in during the color section:


http://www.youtube.com/watch?v=ppvGwKpUfMQ

The decks are reasonably priced and the trick is easy to learn.  It 
always gets them.  I think it can also work when you want to explain 
how expectations influence perception.  The deck is tricky because the 
viewer is expecting a "pick any card"-type trick, and no one expects 
the deck to change color.  People also have expectations about what a 
card deck should be, and they have great difficulty getting their minds 
out of the box in order to figure out what appears to be an easy trick.


Mike Williams
Drexel University



Re:[tips] tips digest: July 18, 2010

2010-07-18 Thread Mike Wiliams
The groups with special protection are pregnant women, children, 
prisoners, and subjects with mental impairment.  A flagrantly 
schizophrenic patient would fall into the last category.  The reason 
PETs are still done is that they render a spatially ambiguous image 
that the psychiatrist can interpret any way the diagnostic wind blows.  
The only imaging technique that should be conducted these days is fMRI.


The best consenting process still does not protect subjects from bad 
procedures.


Mike Williams

On 7/19/10 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: dangers of imaging studies
From: Annette Taylor tay...@sandiego.edu
Date: Sun, 18 Jul 2010 06:39:39 -0700
X-Message-Number: 1

Here's an article from the LA times. I guess I had never thought about the fact 
that there are no special protections for mentally ill people in terms of 
giving consent for research. There are other protected groups. I wonder how the 
mentally ill slipped through those cracks in the code. So someone who is 
flagrantly schizophrenic would not be required to have a legal guardian sign 
for them; and often they simply do not have one because they have never come 
under the purview of the legal system.

Safety violations at N.Y. brain lab may have bigger fallout

http://www.latimes.com/news/health/la-sci-columbia-20100718,0,782909.story


Annette Kujawski Taylor, Ph. D.
Professor, Psychological Sciences
   





[tips] on-line stats textbook

2010-06-27 Thread Mike Wiliams
I found the materials very short on explanation.  Some of the demos gave 
me ideas I might use in my stats classes.


Mike Williams

Please check my stats demos and let me know what you think.  I have been 
translating RunRev applications into web apps:


http://www.learnpsychology.com

Subject: On-line stats text
From: roig-rear...@comcast.net
Date: Sun, 27 Jun 2010 00:11:08 + (UTC)
X-Message-Number: 5

Our chairperson sent us the following URL, 
http://onlinestatbook.com/index.html, for an on-line stats textbook.  
I would appreciate comments about this resource from anyone who has 
used it.

Miguel





Re:[tips] U.S. Tax Dollars at Work Part 982,542: Pretest But Don't Posttest for Brain Injury

2010-06-16 Thread Mike Wiliams
The ANAM is not a conventional neuropsychological test battery.  It 
consists of a number of computer-mediated tasks (a good thing) that are 
very sensitive to sustained attention and high levels of cognitive 
ability.  It is best used when you want to decide who among the Navy 
SEALs should be selected for Delta Force.  The ANAM does a good job 
discriminating impaired pilots from unimpaired ones.  Even in this 
situation, some pilots are likely deemed impaired when they could 
probably function well as pilots.  Since they are compared to a norm of 
other pilots, they may appear disabled.  The absolute level of 
cognitive ability needed to competently fly an airplane is largely 
unknown.  When you administer this test to typical soldiers, an 
excessive number appear disabled (false positives).  That is what I 
mean by "too sensitive": it is extremely sensitive to small changes at 
high levels of ability.  A clinical test with the same problem is the 
PASAT.  This results from the design of the tests and may also be a 
result of the norming.  I don't know enough about the norms to make a 
judgment about them.  The fact that the person representing the tests 
described them as "no better than flipping a coin" suggests that 
someone did a validity study and found that the specificity/sensitivity 
approached random levels among typical soldiers.  I wonder what they 
used as an external standard for TBI?


The fundamental problem with this whole approach was found when 
psychological assessments were used to predict violent behavior among 
people discharged from mental institutions.  The predictive power was 
low.  It was low because the incidence of violence is so low.  If I 
have a violence test with an accuracy of 90%, I still end up 
identifying a large number of people as potentially violent who will 
never engage in violence.  The ANAM, or any test battery, has the same 
problem.  If the base rate of TBI is low (say 5%), then it will be 
impossible to detect with a neuropsych battery that is insufficiently 
valid and reliable to detect an effect that small.  When you add in 
extraneous factors like malingering and psychological depression, it 
will appear that many soldiers have cognitive impairment when they do 
not.  Someone with PTSD and no head injury will bomb the ANAM.  Some 
of the tests are so sensitive to attention that a poorly placed sneeze 
will lower your score.
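The arithmetic behind this base-rate problem is easy to check.  A 
minimal sketch (the numbers are illustrative, not figures from the ANAM 
or violence-prediction literature): compute the positive predictive 
value from sensitivity, specificity, and base rate via Bayes' rule:

```python
def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(condition | positive test) via Bayes' rule."""
    true_pos = sensitivity * base_rate              # hits
    false_pos = (1 - specificity) * (1 - base_rate) # false alarms
    return true_pos / (true_pos + false_pos)

# A test that is 90% sensitive and 90% specific, applied to a
# population where only 5% actually have the condition:
print(round(ppv(0.90, 0.90, 0.05), 3))  # ~0.321
```

At a 5% base rate, roughly two out of every three positive screens are 
false alarms, even though the test is right 90% of the time on both 
cases and non-cases.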


A sensible program would be to assess with the ANAM and conventional 
clinical tests any soldier who was rendered unconscious or had other 
evidence of a head injury.  The money being used to test all these 
soldiers could be better spent on the rehabilitation of the ones who 
have valid TBI.


Mike Williams
Drexel University

On 6/16/10 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest 
wrote:

Subject: Re:U.S. Tax Dollars at Work Part 982,542: Pretest But Don't Posttest 
for Brain Injury
From: Mike Palijm...@nyu.edu
Date: Tue, 15 Jun 2010 07:21:38 -0400
X-Message-Number: 3

On Mon, 14 Jun 2010 23:09:17 -0700, Mike Wiliams wrote:
   

The ANAM battery is far too sensitive for a general application like
this.
 

What are the specificity and sensitivity for the ANAM? Also,
what do you mean by too sensitive?  One interpretation is
that it is good at detecting cases with brain injury (sensitivity)
while the article suggests that it produces false positives
(i.e., 1 - specificity).

   

In addition, the base rate of TBI among returning soldiers is
so low that a screening with a test like this will be far too
expensive for what it is intended to do.
 

Please explain this to me. From what I have heard, the rate of
TBI is much higher than (a) that experienced in previous wars
and (b) in the general population.  If TBI can be researched in
these groups, why shouldn't it be researched in soldiers from
Iraq and Afghanistan?

  If I interpret the article correctly, the pretesting and posttesting
was part of an ongoing study which can be interpreted as gathering
baseline data under different conditions and in different groups.
This seems to me like a worthwhile thing to do unless the ANAM
has really bad diagnostic accuracy which raises the question of why
it was chosen in the first place.  In any event,  the premature cancellation
of posttests means that the pretest data makes it much more difficult
to reach any conclusions at all (outside of supporting the confirmation
bias).

   

The obvious approach is to only test soldiers who have
some history of head injury, especially those who were rendered
unconscious. Do they really expect that soldiers serving in low
risk assignments will come back with a brain injury and PTSD?
 

Although I agree that soldiers with a documented case of head
injury or concussion should be tested, you seem to suggest
that multiple control groups should not be used.  I assume that
pretesting will identify a certain percentage of people with
pre-existing problems -- are soldiers with pre-existing

Re:[tips] U.S. Tax Dollars at Work Part 982,542: Pretest But Don't Posttest for Brain Injury

2010-06-15 Thread Mike Wiliams

The ANAM battery is far too sensitive for a general application like
this.  In addition, the base rate of TBI among
returning soldiers is so low that a screening with a test like this will
be far too expensive for what it is intended to do.  The obvious
approach is to only test soldiers who have some history of head injury,
especially those who were rendered unconscious. Do they really expect
that soldiers serving in low risk assignments will come back with a
brain injury and PTSD?  The other thing they must include is an
assessment of malingering.  The military disability support system is
very ripe for abuse.  It is a waste of tax dollars to conduct these
assessments without checks on malingering.

Mike Williams
Drexel University

On 6/15/10 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest
wrote:


  Subject: U.S. Tax Dollars at Work Part 982,542:  Pretest But Don't Posttest 
for Brain Injury
  From: Mike Palijm...@nyu.edu
  Date: Mon, 14 Jun 2010 19:33:32 -0400
  X-Message-Number: 7

  A news story is making the round on how the U.S. Pentagon
  initiated a program of neuropsychological testing using pretest
  (i.e., prior to deployment to combat areas) and posttest (i.e.,
  return from deployment) but apparently has stopped administering
  the posttest because its false positive rate is too high.  For one
  source see:

  
http://www.intelihealth.com/IH/ihtIH/EMIHC267/333/8014/1369167.html?d=dmtICNNews

  The test in question is called the Automated Neuropsychological
  Assessment Metrics, or ANAM.  Anyone familiar with it?

  -Mike Palij
  New York University
  m...@nyu.edu
  
   





Re:[tips] Legal Fight Delays Paper on Psychopathy Scale 3 Years

2010-06-14 Thread Mike Wiliams
I plan to write an opinion review criticizing Psych Corp and Pearson 
for arbitrarily changing the Wechsler Scales and creating new editions 
just to churn the scales and squeeze more money out of us, in much the 
same way that Pearson churns textbook editions to squeeze more money 
out of our students.  Do you think Pearson will sue?  Since APA has a 
major publishing arm, do you think APA will support Pearson, or me?  
It's time that APA spun off its publishing arm as a separate company.  
These conflicts of interest between its financial interests and its 
ethics guidelines will come up again and again.


Mike Williams

