Statistical Software Analyst/Programmer

1999-12-22 Thread Stat/Math Center

The Center for Statistical and Mathematical Computing at Indiana
University has an open position for a software analyst/programmer
(http://www.indiana.edu/~statmath).

Responsibilities:

Under minimal supervision, responsible for technical support of and
consultation with users of statistical software. Support statistical
software across all University Information Technology Services (UITS)
supported platforms (Windows, Mac, UNIX). Responsibilities involve the
evaluation, testing, and support of statistical software. Prepare
documentation and present short classes as needed; attend meetings as
required.

Qualifications: 

Master's degree in statistics/quantitative methodology or a closely
related area required; a more advanced degree preferred. In-depth
knowledge of statistical software required; experience with SPSS, SAS,
Minitab, and RATS particularly helpful. Experience with multiple computer
platforms required; experience in a university environment and with the
UNIX and Windows NT operating systems preferred.

Competitive salary.

To apply, send a cover letter and application with three references to:

Thea Brown ([EMAIL PROTECTED]) or 
University Information Technology Services, 
2711 East Tenth Street, 
Indiana University, 
Bloomington, Indiana  47408.  

Application deadline:  Friday, January 21, 2000.

Indiana University is an equal opportunity employer.



RE: Christmas Reading?

1999-12-22 Thread Eugene Komaroff

Have a go at Prof. Stigler's latest "Statistics on the Table" (1999,
Harvard University Press).   A very scholarly and often entertaining
collection of historical essays about lead characters and ideas in the
great story of statistics.  

It's amazing how deeply some apparently modern ideas are actually rooted
in time. 

Eugene 


***

Date: Tue, 21 Dec 1999 09:43:07 -0500
From: "Tatikola, Kanaka [PRI]" [EMAIL PROTECTED]
Subject: RE: 

I personally like THE HISTORY OF STATISTICS: The Measurement of
Uncertainty before 1900, by Stephen M. Stigler.

Kanaka



Re: adjusting marks; W. Edwards Deming

1999-12-22 Thread Peter Westfall



Jim Clark wrote:

 Artificially giving all students (or almost all) the same grade
 does not minimize variation in the underlying trait, achievement,
 in this case. It simply hides the variation so that one does not
 know to what extent one is minimizing differences in achievement,
 and rewards students for not trying to achieve more than some
 minimal level.

I don't think Deming would have said assignment of Pass/Fail should be "artificial".
If the student doesn't perform, then of course they shouldn't Pass.  He did say,
on the other hand, that grading imposes an artificial scarcity of A's (also of
C's and D's).  These are again Deming's words, and they echo Dennis Roberts's comments
about the pure subjectivity of the grading process.

The motivation for the students should be in Joy of Learning (one of Deming's 14
points) rather than the grade.  This I agree with wholeheartedly.  How can we
achieve this?  I think it is our main challenge as educators.  Using the grading
system as a motivational substitute for Joy of Learning is lazy, inefficient
management of our classes.

Students who are fairly sure they are not going to get the coveted A, or who
only need a "C or better" are going to give less effort.  This will increase
variation, and operates contrary to the stated goal of the system.




  My question is again: Is ranking really necessary?  Given the goal of
  reducing variation, what does it help? Students in competition for the
  scarce A's will withhold information from one another.  Does this achieve
  the stated aim of the system in an optimal way?  W. Edwards Deming would
  have said, most emphatically, no.  He spoke quite often of the
  educational system particularly in his later years; his message was not
  at all meant to be limited to manufacturing.

 Grading is not equivalent to ranking, unless one uses a forced
 distribution.  One can grade without any restriction on the
 number of As or other grades other than the achievement of the
 students.  I would be interested in hearing about any empirical
 evidence that not using grading schemes produces learning as good as or
 better than using grades.


I think this is a very important point: what can we do in place of ranking?
Now, as much as you say you don't use ranking, I am not sure you can get away
without it.  What if all of a sudden everyone got A's by your criteria?
Wouldn't the administration get on your case?  Then, you might say, just make
the criteria harder so that we get back to a "normal" proportion of As, Bs etc.
Well, aren't you just back to ranking?

I don't have any data from the classroom experience, but I do have an
observation from business.  Texas Instruments had a policy of ranking plants in
terms of their performance.  The employees at the top plants received bonuses.
Great idea, right?  Motivates people, makes them perform to the best of their
abilities, just like grading.  The problem is, the innovations were hoarded by
the individual plants to secure the bonuses, to the detriment of the company at
large.  Optimization of individual processes can be detrimental to the system,
if the system at large is not considered in the optimization process.

Thanks for the continuing discussion.  I have been profoundly influenced by the
words of W. Edwards Deming, and hope others will take a look at what he had to
say, at least to stimulate discussions such as this.  As he himself said, you
don't simply "implement" his system, much like you don't learn to play piano by
buying one and placing it in your living room.  In the same way, you don't
simply implement Deming's method as it applies to teaching by implementing P/F
and be done with it.

I would like to know, are there any others out there who have been influenced by
Deming?  Has his message lost its force in our current climate of economic
prosperity?

Peter Westfall





Re: adjusting marks

1999-12-22 Thread Peter Westfall



"David A. Heiser" wrote:

 - Original Message -
 From: Peter Westfall [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, December 21, 1999 6:45 PM
 Subject: Re: adjusting marks

 
 
  Bob Hayden wrote:
 
   - Forwarded message from Peter Westfall -
  
   Deming himself (if I remember correctly) graded everyone as "A" until
   the administration noticed, and then they made his courses Pass-Fail.
  
   Deming was also very much against ranking students in any way, except
   for the possible exception of identifying an exceptional student that
   others might emulate (the > 3*sigma student) and identifying the
   exceptionally poor student (< 3*sigma) for remediation.  All other
   students should be essentially equivalent, in Deming's philosophy.
  
   - End of forwarded message from Peter Westfall -
  
   Would you recommend this for drivers' license tests?  Oh, I get it,
   that's what we're doing already!  No wonder.
  
   I have to admit, it would sure simplify quality control if we
   considered anything within +- 3 s.d. to be OK.  Then I guess the
   motivation would be to throw in a few clunkers now and then to keep
   the s.d. as large as possible?
 
  Bob,
 
  Your remarks sound facetious. I was hoping to stimulate some serious
  discussion.  Have you read anything by Deming?
 
  Here is Deming's philosophy, as well as I can paraphrase it for the
  present situation:
 
  Students/teachers/administrators form a system. The system has an aim,
  which is (presumably) to educate everyone as well as possible, for the
  good of the students, and for the good of society.  What good does
  ranking do?  Does it help to achieve the aim of the system?  Or rather,
  is it simply a weeding process?  Is ranking necessary? (these are mainly
  Deming's words, but I must admit I see lots of value there.)
 
  Regarding making the standard deviation large, Deming would say that
  management's (professors, administrators) job entails minimizing
  variation among students.  This can be done in the usual ways -
  admissions procedures, advising, prerequisites.  Individual classes are
  "processes" within the larger system, and in the process of continual
  improvement, one seeks ways to minimize variation within the processes.
  Deming shows a diagram where the knowledge of people before training is
  scattered and highly variable, and after training the mean level is
  higher but the variation smaller.  The inference is that the more
  effective the classroom experience, the less variation in the final
  levels of knowledge and abilities of the students, as they pertain to the
  subject at hand.
 
  My question is again: Is ranking really necessary?  Given the goal of
  reducing variation, what does it help? Students in competition for the
  scarce A's will withhold information from one another.  Does this achieve
  the stated aim of the system in an optimal way?  W. Edwards Deming would
  have said, most emphatically, no.  He spoke quite often of the
  educational system particularly in his later years; his message was not
  at all meant to be limited to manufacturing.
 
 
  Peter
 
 ---
 Very Interesting

 I don't agree with Deming. Life is essentially a matter of diversity, and
 being able to find one's own "niche". The process of ranking is inherent in
 life whenever there is stress on a population. Going to college is indeed
 "stress".

 If in order to succeed, I need to obtain a PhD from Stanford, then I have to
 get high grades and attain other achievements to get into that few percent that
 gets accepted. If my college grades are all "pass", how am I going to
 compete with the applicant with A+++ grades from NCU?

 How are new hires for the expensive New York/Washington law firms selected? Not
 on pass/fail but on which law school, how the professors rated the
 student, and what the extracurricular activities were. Much of this is
 subjective, but when you have 300 applicants for one job, you have to do
 some ranking to pick the top 3 or 5.

 Deming, I think, has the quality control mindset of pass/fail in terms of
 manufactured objects, where everything is acceptable between -3 and +3 sigma
 (Now it is -6 to +6 sigma.) This may be fine for shop work on the floor. In

(I think Deming had some serious problems with 6 sigma QC, but that is beside
the point.)


 this country the only thing we manufacture now is credit and money to buy
 manufactured goods from other countries.

 You need a very diverse population now. The process of ranking, flawed as
 it is, works because there are so many areas where one can find his own
 niche, and ranking is one way of finding one's niche.

 DAH

No doubt about it, we can't make everyone the same, nor do we want to.  We can,
however, make their levels of understanding and logical thought processes
similar through proper education.   Human diversity is expected.  We can't
change people's 

Re: Correlation conversion

1999-12-22 Thread Robert Hamer

"haytham siala" [EMAIL PROTECTED] writes:

Can anyone please tell me how to convert a Kendall tau correlation to a
Pearson correlation?

It is easier to get a camel through
the eye of a needle than to convert a
Tau to a Pearson correlation.
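
(That said, if one is willing to assume the data are bivariate normal,
Greiner's relation r = sin(pi*tau/2) is sometimes used as a rough
conversion.  A minimal illustrative sketch in Python -- the normality
assumption is doing all of the work here:)

    # Rough Kendall tau -> Pearson conversion via Greiner's relation.
    # Only an approximation, and only under bivariate NORMALITY.
    import math

    def tau_to_pearson(tau):
        # Greiner's relation: r = sin(pi * tau / 2) for bivariate normal data.
        return math.sin(math.pi * tau / 2.0)

    print(tau_to_pearson(0.5))   # about 0.71 under the normal assumption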
-- 
--(Signature)  Robert M. Hamer 732 235 4218
  Use my last name @rci.rutgers.edu
  "Mit der Dummheit kaempfen Goetter selbst vergebens" -- Schiller



Re: adjusting marks; W. Edwards Deming

1999-12-22 Thread dennis roberts

this shows how naive deming really was ...
who says learning "should" be a joy? learning is WORK ... and, work is 
hard. now, some kids really relish the task and challenges ... but many 
others do not ... should we blame THEM?

but, i don't really see what deming has to do with our discussion of 
"adjusting" marks ...

At 08:33 AM 12/22/99 -0600, Peter Westfall wrote about deming:


The motivation for the students should be in Joy of Learning (one of 
Deming's 14
points) rather than the grade.

--
208 Cedar Bldg., University Park, PA 16802
AC 814-863-2401Email mailto:[EMAIL PROTECTED]
WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
FAX: AC 814-863-1002



Re: adjusting marks

1999-12-22 Thread Eric Bohlman

Michael Granaas ([EMAIL PROTECTED]) wrote:
: While more careful admissions processes would certainly limit the
: variability in students, and therefore grading, how is it any different
: from grading?  If you are going to be more careful with admissions you
: need a ranking system of some sort to determine who will succeed and who
: will fail.  This just puts the Social Darwinism issue at a different
: stage of the process.

There's a fundamental difference between admissions decisions and grading 
decisions: the former involve allocating an inherently scarce resource.  
There's a limit to the total number of students a school or program can 
admit, regardless of how certain qualities are distributed among the 
applicants.  However imperfect the available criteria for selecting a 
subset of applicants are, you're going to *have* to use *some* criteria.  
All you can do is try to make them as "fair" as possible.  There's a 
genuine cost associated with admitting another applicant.

But evaluating performances within a class doesn't involve any inherently 
scarce resource.  There's no particular cost that increases with the 
grade a student gets.



Re: adjusting marks

1999-12-22 Thread Ddeliberto

Not all grading practices "on a curve" are performed as described by Eric 
Bohlman.

OK maybe I am clueless about all of this but I often saw grading on a curve 
being implemented when lots of students performed poorly on a test.  Thus 
test scores were adjusted (usually in the upward direction) to make up for 
the poor performance that might be attributed to poor teaching, poor test 
construction, bad items or whatever.  I never, as a teacher, used any curving 
procedure to lower students' grades!

But obviously the scores of those students performing poorest on the test 
had the highest increases when the curve was applied, whereas those performing 
well saw little if any increase in their scores.  Perhaps that is the 
unfairness you and others are referring to.

Or are you referring to the decision to rescale test scores so they fit a 
more normal distribution?  In which case, I agree that there are problems 
with that approach and see no reason why anyone should assume that test 
scores should conform to a normal distribution or force them to do so.
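
(To be concrete, by "rescaling" I have in mind something like a rank-based
normal-scores transform of the raw scores.  A rough sketch of the mechanics
in Python; the target mean and SD are arbitrary, purely for illustration:)

    # Sketch of "curving" by forcing scores toward a normal distribution
    # (a van der Waerden-style normal-scores transform).
    import numpy as np
    from scipy.stats import rankdata, norm

    def curve_to_normal(scores, target_mean=75.0, target_sd=10.0):
        # Replace each raw score by its rank-based normal quantile,
        # then shift/scale to the (arbitrary) target mean and SD.
        scores = np.asarray(scores, dtype=float)
        n = len(scores)
        quantiles = rankdata(scores) / (n + 1.0)      # values in (0, 1)
        return norm.ppf(quantiles, loc=target_mean, scale=target_sd)

    raw = [42, 55, 58, 61, 63, 70, 88]                # invented scores
    print(np.round(curve_to_normal(raw), 1))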

In fact, most teacher-made tests (and here I really want to say all) are 
criterion-referenced tests so why can't all students meet the criterion?  
There is no reason at all why that cannot be done, except that some 
might think that one instructor grades more leniently than another, and at the 
university level students will sign up in droves for the class taught by the 
instructor who is the easier grader.

So am I off my rocker or what?  (After developing tests as a teacher, I now 
develop tests for states and local school districts so if I am missing a big 
point here, please let me know.  I would hate to think I was causing harm to 
students.)

Deanna
===
Deanna M. De'Liberto, President/Director of Assessment
D Squared Assessments, Inc.
(Specialists in Test Development/Validation and Test Administration)
9 Bedle Road, Suite 250
Hazlet, NJ 07730-1209
Phone:  (732) 888-9339
Email:[EMAIL PROTECTED]
Web: http://www.quikpage.com/D/dsquared

Member of the Association of Test Publishers
===




In a message dated 12/22/1999 2:16:36 PM Eastern Standard Time, 
[EMAIL PROTECTED] writes:

 EAKIN MARK E ([EMAIL PROTECTED]) wrote:
 : While I do not grade on a curve, I feel that if reasons exist, it is more
 : valid to adjust atypical grade distributions than not to adjust them. 
 : My reason for not grading on a curve is more for class harmony. Grading on
 : a curve often means taking points away from some students while adding to
  : others. I noticed that a class can suddenly become hostile if some
  : students are treated better than others. This hostile environment can be
  : detrimental to a class's performance also.
  
  To put it even more bluntly, grading "on a curve" really means 
  establishing a budget of grade points and then distributing that budget 
  among the students, which means that the grade a particular student gets 
  depends not only on the distribution decisions but on the size of the 
  budget.  Where on earth does this concept of a budget come from?  It 
  implies at least two questionable, to say the least, underlying 
assumptions:
  
  1) That the "total" of whatever it is that grades are supposed to measure 
  is a constant depending only on class size.
  
  2) That it's possible to evaluate the collective performance of a group 
  on a task *before* they've performed that task.
  
  The purpose of a budget is to make it possible to allocate limited 
  resources.  Since when is academic performance a limited resource, or 
  even any sort of resource subject to allocation?  What on earth does it 
  mean to say to a student "your performance would be an A, but that would 
  put me over budget so I can only give you a B" or "your performance would 
  be a D, but I've got some extra grade points left over so I can give you 
  a C"?
  
  The disharmony you talk about is really the result of pitting students 
  against each other in such a way that each student's success depends on 
  other students' failure.  Why would someone want to do this?  If we're 
  not talking about allocating an inherently scarce resource, the only 
  reason I can think of is a deliberate desire to create disharmony in 
  order to use "divide and conquer" to prevent collective action.  If 

grading on the curve

1999-12-22 Thread dennis roberts

this discussion is interesting ...

there seems to be TWO general kinds of "grading" on the curve ... it would
be interesting to try to "estimate" how frequently each happens ...

1. LOWERing cutoffs ... thus, INcreasing the #s of those getting various
higher grades

2. making cutoffs such that the distribution of GRADES resembles a normal
distribution

i assume that #1 occurs much more frequently and, from my perspective,
there is NO good rationale for doing #2 ... unless one assumes that ability
within a class is normally distributed AND ... far more crucially ...
that achievement SHOULD resemble the distribution of ability ... 

in any case ... instructors are supposed to give students some reasonable
description of the grading system used ... at the BEGINNING of a course ...
which i assume would include some facsimile of a grading scale ... or what
one has to do to earn certain grades ... and in this context, i would think
that anyone who might 'consider' RAISING cutoffs so that FEWER students get
higher grades ... would be challenged by students ... as this appears to
border on unethical practice ... 

At 02:32 PM 12/22/99 -0500, [EMAIL PROTECTED] wrote:
  I never, as a teacher, used any curving 
procedure to lower students' grades!

==
dennis roberts, penn state university
educational psychology, 8148632401
http://roberts.ed.psu.edu/users/droberts/droberts.htm



Re: Prediction Model Question

1999-12-22 Thread Rich Ulrich

There were several earlier messages, and then 
I thought Don Burrill said most of what needed to be said --

On 20 Dec 1999 22:43:52 -0800, [EMAIL PROTECTED] (Donald F.
Burrill) wrote:

 
 For openers, I quote from Pedhazur (2nd edition), p 329 (summary for 
 Chapter 9), so that we're all on the same wavelength, more or less:
   "... Regardless of the coding method used, the results of the 
   overall analysis are the same. ..."
(This is the point that other respondents and I had in mind when we 
were questioning your interpretation of Pedhazur.) 
Continuing a few sentences later:
   "... The coding methods do differ in the properties of their
   regression equations.  A brief summary ... follows. ..."
After the summaries of each method, the final paragraph:
   "Which method of coding one chooses depends on one's purpose and 
   interest.  [For one purpose], dummy coding is the preferred 
   method.  Orthogonal coding is most efficient [for another 
   purpose].  It was shown, however, that the different types of 
   multiple comparisons ... can be easily performed [with] effect 
   coding.  Consequently, effect coding is generally the preferred 
   method of coding categorical variables."
 
 Burke Johnson had written:
 
  1.  I agree with Joe that the term "dummy" in dummy coding is a rather 
[ snip, various details.]

Later Don recommended constructing dummies for the Interactions in
such a manner that they would be orthogonal to the main effects, in
order to reduce confusion of confounded interpretations; that bit of
advice received a minor criticism from someone else who pointed out
that you should never be trying to interpret *those* coefficients in
the first place.

Well, I agree with both Don and the critic.  I create my interactions
as orthogonal, or approximately orthogonal -- in the old days, your
program was too likely to blow up if you did not get rid of all the
numerical problems you could, whenever you could.  Further, if I
happen to look at the wrong listing, it will still have numbers that
are in the right range, and PROBABLY right.  Finally, it may be a
cheap piece of consistency, but it gives me one less item that I have
to explain to the non-statisticians who look at  various results.
Like the critic, though, I never want to interpret the coded main
effects in any regression that has also included the interactions.

 - I would not mind receiving guidance on this final point.  It is
*conceivable*  to use codings so that the coefficients and tests for
main effects do have meaning when the interaction is included in a
regression.  If it is what I remember seeing in an ANOVA text many
years ago, the weights and coefficients can be constructed to take
into account the Ns of the cells (more complicated than -1,0,1).

I believe:  The test that this gives you for main effects is either
exactly the same as some other way of constructing the problem, or it
is considered obsolete.  The construction that I like is Searle's
partitioning of sums of squares, usually in a hierarchy:   (A), (B|A),
(AB|A,B)  for instance.
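
A toy sketch of that hierarchical partitioning -- not Searle's own
formulation, just nested least-squares fits on an invented, deliberately
unbalanced data set, to show where the increments come from:

    import numpy as np

    def rss(X, y):
        # Residual sum of squares from an ordinary least-squares fit.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    # Two two-level factors, unbalanced on purpose; data are invented.
    A = np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=float)
    B = np.array([0, 1, 1, 0, 0, 1, 1, 1], dtype=float)
    y = np.array([3.1, 4.0, 4.2, 5.5, 5.1, 7.0, 6.8, 7.3])
    ones = np.ones_like(y)

    X0   = np.column_stack([ones])                # intercept only
    XA   = np.column_stack([ones, A])             # + A
    XAB  = np.column_stack([ones, A, B])          # + B
    XABI = np.column_stack([ones, A, B, A * B])   # + A:B

    print("SS(A)      =", round(rss(X0, y)  - rss(XA, y), 4))
    print("SS(B|A)    =", round(rss(XA, y)  - rss(XAB, y), 4))
    print("SS(AB|A,B) =", round(rss(XAB, y) - rss(XABI, y), 4))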

Today, Burke Johnson sent an SPSS-worked example to Don B., with a CC:
to me, since I had posted earlier.  The example was supposed to show
that different codings give different results.  The example shows that
the total SS and test is always the same.  And the example shows that
different codings can give you different results for coefficients and
tests when you look at Main effects when Interactions are already in
the equation -- which is entirely consistent with what Don and I (I
think) have both said, i.e., those effects should be  presumed to be
uninterpretable -- so the illustration just heightens the question of
whether those effects are *ever*  interpretable, since the
inconsistency proves that they are not strictly interpretable for most
sets of coefficients.
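
A similar toy sketch of the coding point (same invented data as above, not
the SPSS example): dummy coding and effect coding, each with the interaction
included, give the identical overall fit but different "main effect"
coefficients.

    import numpy as np

    def r_squared(X, y):
        # Overall fit from an ordinary least-squares regression.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return beta, 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

    A = np.array([0, 0, 0, 1, 1, 1, 1, 1], dtype=float)
    B = np.array([0, 1, 1, 0, 0, 1, 1, 1], dtype=float)
    y = np.array([3.1, 4.0, 4.2, 5.5, 5.1, 7.0, 6.8, 7.3])
    ones = np.ones_like(y)

    a_eff, b_eff = 2 * A - 1, 2 * B - 1             # effect (-1/+1) coding
    X_dummy  = np.column_stack([ones, A, B, A * B])
    X_effect = np.column_stack([ones, a_eff, b_eff, a_eff * b_eff])

    bd, r2d = r_squared(X_dummy, y)
    be, r2e = r_squared(X_effect, y)
    print("R^2, dummy coding :", round(r2d, 6))     # identical overall fit
    print("R^2, effect coding:", round(r2e, 6))
    print("'main effect' coefficients, dummy :", np.round(bd[1:3], 3))
    print("'main effect' coefficients, effect:", np.round(be[1:3], 3))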

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html



Re: teaching statistical methods by rules?

1999-12-22 Thread Robert Frick

Alan McLean wrote, among other things:

 On the other hand, a body of knowledge can be thought of as a set of
 'rules'.

I think you are concentrating on the information in what is learned and
ignoring the format.  This works for computers, which learn in only one
format (memory), but not for people, for which memory is just one
format.  My argument:
For the sake of example, suppose I want to teach students how to tie
their shoes.  I could observe what I do and create a verbal
description.  I could teach students this verbal description, and they
could memorize it.  I could test them on their ability to remember this
information.  A student who could remember it probably could tie their
shoes.
My students might end up knowing roughly the same information as me,
but their knowledge wouldn't be stored in their brains the same way it
is stored in mine.  I have a connected series of motor movements built
into my brain as a habit.  And these different storage formats have
different implications.  My students would be good at verbal
descriptions, but probably not so fast at actually tying their shoes.
Now to reality.  Research on implicit learning has suggested that
people can learn something without being able to report what they have
learned.  Presumably, they have no conscious knowledge of what they have
learned.  In my published opinion, there are three types of implicit
knowledge, with habits being just one.  Combined with conscious
knowledge, that makes four different types of learning.
The format in which something is learned has implications.  One is for
memory.  Research suggests that implicit learning is retained much
longer than explicit learning.  Another is for usage.  Obviously, for
verbal report, conscious knowledge is far superior to any other type
of knowledge.  But the other types of learning are probably
better for other types of performance.  For example, in one study, we
either gave subjects implicit knowledge of a rule or explicitly taught
them a collection of rules.  The subjects with implicit knowledge could
use the information in an identification task better than they could
report it.  The subjects with conscious knowledge could report the rules
better than they could use them.
The hardest type of learning to describe or define is what I call
mental models, and what often corresponds to what people call
understanding.  For example, you have a mental model of your spouse (or
friend).  You can use this mental model to predict what your spouse or
friend will do.  You can also try to use this mental model to verbally
describe your spouse or friend, but that isn't a natural use of the
mental model and that format of learning isn't that good for verbal
report.  Someone adept at statistics would have a mental model of
standard deviation, the t-test, statistical testing, etc.  Teaching
students rules or formulas does not develop mental models.

Bob F.



Re: Correlation conversion

1999-12-22 Thread Rich Ulrich

On 22 Dec 1999 12:07:58 -0800, [EMAIL PROTECTED] (W. Keith Moser)
wrote:

 Actually, the original quote referred to it being easier to get a camel
 through the eye of a needle than to get a rich man into heaven. It's from
 the Bible, Matthew 19:24.
 
 Not trying to open a debate, just properly attributing a quote.

Well, along with clearing up the attribution, you also  may have
disabused those readers who could have walked away 
thinking that the Bible had some lines about Pearson correlations.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html



anti-SPAM? was: [Re: Difference of means]

1999-12-22 Thread Rich Ulrich

On Wed, 22 Dec 1999 16:18:26 +0800, "DIAMOND Mark" [EMAIL PROTECTED]
wrote:

...  University policy is to avoid putting email
 addresses that can be extracted by spammers in the body of newsgroup
 postings.

That would seem to be a misguided and unnecessary policy.  And, having
just  decoded your address to type it in (since Forte Agent sends an
Email copy), I can say, "I will never do *that*  again" unless there
is some good reason.

I give my address as my REPLY address and in the body of my text, and I
have posted 15 or 20 times a week, for three years.  I hardly see any
SPAM at all -- a median of one or two items per week, and some of those
were sent to a different email address (so I know I can't blame the
newsgroup for them).

I hear that some bulk-mailers avoid  ".edu" addresses.  But  I don't
think all of them do.  Also, my ISP weeds out the stuff that is
detectable as a bulk mailing -- addressed to everyone at the ISP, or
with too many dollar signs in the subject line.  And if my University/
ISP can weed out spam so successfully, maybe you should ask yours for
advice?

No, by my own experience, and from what I have read elsewhere, I think
that your name is far more likely to be gleaned from websites that you
visit.  Though it is possible that some *other* particular
newsgroups happen to be a place where Spammers harvest names.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html