By the way, in both groups, first-year GPA was better predicted by HS rank than by SAT score, but SAT contributed a significant amount of prediction over and above class rank when the two were combined in multiple regression.
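
The incremental-prediction claim above can be sketched with a small simulation. The data and coefficients here are entirely made up for illustration (the post reports no numbers); the point is only the mechanics of comparing R-squared with and without the second predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical sample size; all numbers below are invented

# Synthetic data in the spirit of the claim: GPA driven mostly by HS rank,
# with a smaller independent SAT contribution; the predictors correlate.
hs_rank = rng.normal(size=n)
sat = 0.5 * hs_rank + rng.normal(size=n)
gpa = 0.6 * hs_rank + 0.2 * sat + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_rank = r_squared([hs_rank], gpa)
r2_both = r_squared([hs_rank, sat], gpa)
print(f"HS rank alone: R^2 = {r2_rank:.3f}")
print(f"+ SAT:         R^2 = {r2_both:.3f} (increment {r2_both - r2_rank:.3f})")
```

With real admissions data one would test whether that increment is significant (an F test on the R-squared change); here the increment is positive by construction.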

Bill Scott

What were the effect sizes?  How much variance was explained by the model?  This is the issue neglected in all these reports.  With the sample sizes in these studies I can get a statistically significant multiple R of .4, which explains only 16% of the variance.  Would you base a financial decision on a model with that level of error?
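
The arithmetic behind that complaint is worth making explicit. The sample size and predictor count below are hypothetical stand-ins (the post names neither); the formula is the standard F test for a multiple correlation.

```python
# Back-of-envelope arithmetic: a multiple R of .4 leaves most of the
# variance unexplained, yet is easily "significant" at a large sample size.
R = 0.4
r_squared = R ** 2                      # 0.16 -> 84% of the variance is error

n, k = 1000, 2                          # assumed sample size and 2 predictors
# F test for R^2 with k predictors: F = (R^2/k) / ((1 - R^2)/(n - k - 1))
F = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
print(f"R^2 = {r_squared:.2f}, F({k}, {n - k - 1}) = {F:.1f}")
```

An F near 95 on (2, 997) degrees of freedom is overwhelmingly "significant," which is exactly the author's point: significance says nothing about whether 16% explained variance is good enough for a financial decision.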

The major problem with accepting everyone and then paring down the classes each year based on performance is that it ignores competence, and it ignores the fact that grades have unknown validity and reliability.  It may be the case that all the students are actually within the same range of competence, and that the differences between them are accounted for by error in using grades to stratify them.
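
A toy simulation makes this scenario concrete. Everything here is my construction, not data from the post: every student gets the same true competence, observed grades differ only by random measurement error, and ranking on those grades still manufactures a "bottom" group.

```python
import random

random.seed(0)

# Hypothetical numbers: identical true competence, noisy observed grades.
TRUE_COMPETENCE = 85.0
GRADING_ERROR_SD = 5.0
grades = [TRUE_COMPETENCE + random.gauss(0, GRADING_ERROR_SD)
          for _ in range(100)]

ranked = sorted(grades)
bottom_decile = ranked[:10]   # the 10 students a yearly cut would remove
spread = max(grades) - min(grades)
print(f"grade spread among identical students: {spread:.1f} points")
print(f"worst 'performer': {ranked[0]:.1f}, best: {ranked[-1]:.1f}")
```

The students "selected for failure" here differ from the top of the class by nothing but the error term, which is the argument the paragraph above is making.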

This commonly happens in medical schools.  We accept super-students who all do extremely well in all their courses.  Since we have to enforce a bell curve where none exists, their tests are manipulated by their teachers, who generate sets of items that actually incorporate random error that looks like valid performance variance.  A very small number of medical students do poorly in classes.  They get jerked around by their instructors, because the instructor looks bad if everyone gets an A or a B.  It is completely logical for everyone in a class to get an A (or an F).  In particular, 98% of medical students actually perform in the range of an A.  We have hyper-selected them for a bizarre level of study and test-taking skill.  Unfortunately, the faculty cannot tolerate this.  They write answer options so ambiguously that a student who truly knows the correct answer cannot find it among them.

If we were to empirically terminate some percentage of students each year, we would be terminating a number of successful students who are selected for failure only because of the error in our tests and grading systems.

The issue of high school grade inflation is largely irrelevant.  High school grades don't predict college performance very well; we could only use them as a very general screening tool.  We use them and SAT scores as if they measured something important in a reliable and valid manner.  We know in our hearts that they do not.  As a consequence, we tell lies to the students who are accepted and to those who are rejected.  What we should tell them is that the selection was largely random either way.  I think we should be completely honest and hold fair lotteries if we need to restrict admission.  Otherwise, financial concerns, legacies, idiosyncratic foolish ideas, and all the weak aspects of human reasoning, such as confirmation bias, determine the selection.

The fact that most students, parents, admissions offices, and faculty are not aware of the random error involved in the process is a reflection of this.

We present a facade of validity and fairness when the process is actually, at best, a lottery.

Mike Williams
http://www.learnpsychology.com