TIPsters:

FYI: EPAA article re: grade inflation, etc.

Excerpt from:
Education Policy Analysis Archives

http://epaa.asu.edu/epaa/v3n11.html

    Volume 3 Number 11

    June 26, 1995

    ISSN 1068-2341

------------------------------------------------------------------
        EDUCATION POLICY ANALYSIS ARCHIVES
    A peer-reviewed scholarly electronic journal.

    Editor: Gene V Glass, [EMAIL PROTECTED], College of Education,
    Arizona State University, Tempe AZ 85287-2411

    Copyright 1995, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is
    hereby granted to copy any article provided that EDUCATION POLICY
    ANALYSIS ARCHIVES is credited and copies are not sold.
--------------------------------------------------------------------

        Academic Policies and Practices
        that Contribute to Grade Inflation

        A variety of institutional practices and conditions are thought
        to contribute to grade inflation, and some of them are
        academically defensible. For example, any grade increases that
        are due to increased learning are not only defensible but
        welcomed. Others, however, are clearly questionable. One
        suspected cause of grade inflation is the admission of
        increasing numbers of poorly prepared students (Birnbaum, 1977).
        Despite additional instructor time and tutorial assistance, most
        poorly prepared students are unlikely to exceed the average
        levels of performance exhibited by their better prepared peers.
        Instructors whose classes include substantial numbers of such
        students must lower expectations or risk creating an
        insurmountable pedagogical task for themselves. If expectations
        are not lowered, many students fail, enrollment is lowered, and
        student satisfaction is decreased--all decidedly unrewarding
        outcomes. Conversely, lower expectations make passing grades and
        continued enrollment attainable for all.

        Other inflationary factors are curricular options that permit
        students to elect less challenging courses and programs
        (Prather, Smith, & Kodras, 1979; Sabot & Wakeman-Linn, 1991)--
        that is, the proverbial "underwater basket weaving" courses.
        Also, academic policies that permit students to enroll in a
        large number of courses and later drop the ones in which they
        are failing can inflate average grades. So can a policy such as
        permitting students to replace failing grades when they repeat a
        course (Geisinger, 1979). Whatever the wisdom of these policies
        with respect to other academic considerations, they all have the
        effect of adding to the need for expanded funding.

        Attempts have been made to offset the impact of inflated grades
        by adding plus and minus to the formal grading scale (Singleton
        & Smith, 1978) and by altering the GPA minimums for advanced
        programs. As is true with respect to policies such as the
        above-cited change in the GPA required for academic honors,
        these adjustments are more cosmetic than substantive.

        Student Ratings of Instruction as
        an Inducement to Grade Inflation

        Despite widespread usage and surface credibility, the use of
        student ratings with respect to merit, promotion, and tenure
        decisions remains a controversial matter (Zirkel, 1995). Many
        faculty suspect that institutional reliance on student ratings
        of instruction is a prime cause of lowered standards and grade
        inflation (Cohen, 1984; Goldman, 1985; Mieczkowski, 1995;
        Renner, 1981). A case for relying on them, however, can be made
        on the grounds of correlational studies that have shown student
        ratings to be valid in the sense of having a statistically
        significant relationship (r = 0.4) with measured achievement
        (Benton, 1982; Cohen, 1981; Marsh, 1984). Moreover, student
        ratings have had great appeal in higher education because they
        afford convenient and seemingly objective assessment of teaching
        without the exercise of peer or administrative judgment.
        Institutional reliance on student ratings of instruction has
        grown steadily since the mid to late 1960s. Seldin (1993)
        reports that over 85 percent of 600 colleges surveyed used them
        as of 1993. Of course, this is precisely the same era in which
        grades inflated and academic learning outcomes first declined
        and then stagnated. Thus, whatever it is that student ratings
        measure, they apparently failed to detect the flawed teaching
        practices that produced these unwanted phenomena.

        The apparent inconsistency between research reporting validity
        for student ratings and their failure to detect declining
        achievement and inflated grades may be due to the way in which
        they are used. A statistically established validity of r = 0.4
        means that 16 percent of the differences among the ratings of
        instructors are attributable to differences in measured student
        achievement. However, the remaining 84 percent of the
        variability in ratings is attributable to factors that must be
        controlled for in their use and interpretation with respect to
        individual instructors. The most sophisticated and convincing
        student rating validation studies carefully control student
        achievement, reported grades, and other possible sources of bias
        (Benton, 1982). Typical institutional applications of student
        ratings, however, rarely employ the monitoring and controls
        found in validation studies. Thus, rating instruments with a
        modest degree of validity may be producing invalid faculty
        evaluations because they are used without appropriate
        precautions.
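The arithmetic behind this point is the coefficient of determination: squaring the validity correlation gives the proportion of rating variance attributable to measured achievement, and the complement is the share driven by other factors. A minimal sketch in Python (the function and variable names are illustrative, not from the article):

```python
# Proportion of variance in instructor ratings explained by measured
# student achievement, given a validity correlation r.
def variance_explained(r: float) -> float:
    """Coefficient of determination (r squared) for a correlation r."""
    return r ** 2

r = 0.4  # validity coefficient cited in the article
explained = variance_explained(r)    # ~0.16, i.e. 16 percent
unexplained = 1 - explained          # ~0.84, i.e. 84 percent

print(f"explained by achievement: {explained:.0%}")   # 16%
print(f"attributable to other factors: {unexplained:.0%}")  # 84%
```

Note that the 84 percent is not all "bias"; it bundles measurement error together with the systematic factors (grading leniency, course difficulty) that validation studies attempt to control.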

        Although it is understood that student-opinion-based assessments
        of individual instructors should be corrected for watered-down
        objectives, inflated grades, and other potential sources of bias
        (Aleamoni, 1981; Seldin, 1993), in practice, ratings are
        typically presumed valid, sometimes despite indications to the
        contrary. For example, although instructors who receive high
        ratings often report higher grades, administrative interpreters
        of such data most often simply assume that such higher grades
        are a function of superior student learning. Their assumption is
        rarely checked.

        Instructors are evaluated on the basis of a comparison of their
        student ratings with those of colleagues who teach similar
        courses. Given that the biasing effects of insufficient
        expectations or too-lenient grading serve to increase, not
        lower, ratings, individuals who receive relatively low ratings
        must question not their own ratings, but the expectations and
        grading of their colleagues who have received higher ratings.
        Because most faculty are understandably reluctant to raise such
        questions, instructors who are judged unfairly have little
        choice but to accept student opinion and quietly find a means of
        competing with their higher rated peers. In contrast to those
        who receive less favorable ratings, instructors with ratings
        that may be biased by low expectations or lax grading are
        comfortably insulated from skeptical inquiries and otherwise
        given little cause to be concerned about the student
        rating/evaluation process.

        Without unrelenting vigilance, local standards against which
        teaching is judged can become biased. Because students have no
        basis for judging whether an instructor's expectations were too
        low or grades were too high, inflated ratings can occur any time
        student ratings are collected without appropriate checks
        (Seldin, 1993). If they are entered into the normative data
        base, the benchmarks against which faculty are compared become
        biased by default and to an unknown extent.

        The longer-term impact of teaching evaluated by student ratings
        now seems to be evidencing itself. It is as former American
        Association of University Professors president Fritz Machlup
        anticipated: we now have "Poor Learning from Good Teachers"
        (Machlup, 1979). The fact that learning has declined and
        stagnated during the twenty-five or so years that higher
        education has relied on student opinion as a measure of "good"
        teaching speaks for itself.

        Economists Gordon Tullock and Richard McKenzie (1985) argued
        that economic theory predicts professors will ease what is
        expected of their student customers in order to buy higher
        ratings. Are typical institutional procedures for interpreting
        student ratings sufficiently stringent to prevent such
        transactions? Proponents of student ratings must agree that even
        if student opinion can, under carefully circumscribed
        conditions, serve as a sound basis for evaluating teaching, the
        task of ensuring correctly interpreted ratings under real-world
        conditions may be beyond the practical limits of institutional
        ability.

        Although most students want to learn, their idea of academic
        accomplishment is often very different from that of the
        professor, or for that matter, that of the taxpayer
        (Mieczkowski, 1995). As the Integrity in the Curriculum
        (Association of American Colleges, 1985) report bluntly
        observed: "The credential is for most students more important
        than the course." Higher education makes a very great mistake if
        it permits its primary mission to become one of serving student
        "customers." Treating students as customers means shaping
        services to their taste. It also implies that students are
        entitled to use or waste the services as they see fit. Thus
        judging by enrollment patterns, students find trivial courses of
        study, inflated grades, and mediocre standards quite acceptable.
        If this were not the case, surely there would have long ago been
        a tidal wave of student protest. Of course, the reality is that
        student protest about such matters is utterly unknown. Tomorrow,
        when they are alumni and taxpayers, today's students will be
        vitally interested in academic standards and efficient use of
        educational opportunities. Today, however, the top priority of
        most students is to get through college with the highest grades
        and least amount of time, effort, and inconvenience (Chadwick &
        Ward, 1987).

        A fundamental misconception underlies the student-as-customer
        view. Students are not higher education's only customer or even
        its most important customer. Rather, higher education's
        forgotten customers are the taxpayers, the parents, and the
        employers who both "pay now and pay later" for higher
        education's failures. It is these customers--the "paying"
        customers-- who are insisting on better results.
        -----------------------------------------------------------


Contributed Commentary by Damon Runion on Volume 3 Number 11: Stone,
Inflated Grades, Inflated Enrollment, and Inflated Budgets: An Analysis
and Call for Review at the State Level.
------------------------------------------------------------------
31 May 1996
Damon Runion [EMAIL PROTECTED]

John E. Stone's work brings to light an issue of profound significance to
all educators and public administrators. Inflated grades clearly point to
structural problems in American higher education. If one accepts Stone's
argument, which is carefully constructed and well supported, the very idea
of such educational fraud is somewhat unnerving. On a national level,
consider the level of resources going to support students who twenty years
ago would have been academically dismissed early in their college careers.
Add to that the ongoing expense of retraining these "qualified" graduates,
and it does not take a degree in economics to see that much more harm is
being done than good.
Stone suggests that the large bureaucratic nature of university systems
necessitates funding levels which must be financed through increased
enrollments. Correspondingly, inflated grades serve to maintain these
preferred levels of enrollment. One can quickly lay blame on faculty
members for such grade inflation, but total responsibility does not lie
there. Faculty answer to department- and college-level administrators.
These administrators control faculty research funding, as well as other
financially linked items. Careful manipulation of these funds ensures that
the desired level of enrollment will be maintained from year to year.
Whether faculty members agree to this under pressure from key
administrative personnel or make a conscious decision independently is not
the issue. The issue is that grades are inflated.

Stone concludes his work by offering seven areas of potential change. What
I find missing in his conclusions is a simple tenet I have learned through
my own course work. As a student of Public Administration, I have had the
fortunate opportunity to read the classic work on public management by
Douglas M. Fox entitled "Managing the Public's Interest: A Results
Oriented Approach." Fox makes continuous reference to a systems-based
management approach that stresses production over process. What has
evolved over the years in higher education is a carbon copy of the
iron-fisted bureaucratic structure that runs the federal government.
Bureaucracies manage the flow of information, nothing else. As is apparent
in Stone's analysis, administrative personnel in higher education have far
too much control over the outcomes of the process.

Higher education is about learning. The faculty represent the most
important input in the process of making the product: students who can
recall, recognize, and comprehend any given subject, and who can apply
with ease and precision the skills learned from any given subject.
Accordingly, higher education needs to be structured around the idea of
production. Any process that does not contribute to producing more of the
product would be regarded as non-production. It is important to note that
these areas are by no means unimportant: campuses definitely need Student
Affairs personnel, Facilities Engineers, and a whole host of other support
services. But the focus needs to be on the classroom and on empowering the
faculty. Stone makes a rough comparison of the college faculty member to a
federal judge: faculty enjoy the same lifetime tenure as a judge but do
not enjoy the same freedom to control their destiny. Faculty members need
to regain such freedom if we are to rid higher education of the plague of
grade inflation. Higher education officials, faculty members, and public
administrators must take the time to study what makes higher education
function properly. It is clear that the case made by inflated grades does
not paint a favorable picture of the current state of higher education in
America. However unsavory the truth may be, it affords the opportunity for
examination and ultimately correction.



                             +
...John C Damron, PhD        *
...Douglas College, DLC   *  *  *
...P.O. Box 2503        *    *    *
...New Westminster, British Columbia
...Canada V3L 5B2  FAX: (604) 527-5969
...e-mail: [EMAIL PROTECTED]

                http://www.douglas.bc.ca/

        http://www.douglas.bc.ca/psychd/index.html


 Student Ratings Critique:

 http://www.mankato.msus.edu/dept/psych/Damron_politics.html





