First they came for the school teachers. Now it is the turn of
research universities.

They are talking about "measuring faculty productivity". We all know
what is coming next, right?

http://www.insidehighered.com/news/2011/07/20/controversial_former_texas_official_criticizes_productivity_of_university_faculty
http://www.insidehighered.com/views/2011/07/20/o_donnell_on_faculty_productivity_data
----------------------------snip
A former adviser to the University of Texas Board of Regents who is
aligned with controversial reforms that have been touted by
conservative groups and Governor Rick Perry issued a report Tuesday
identifying what he called a “faculty productivity gap” at the two
chief research institutions in the state.

Using data released by Texas A&M University and the University of
Texas, Rick O’Donnell, who had a brief and tumultuous stint as special
adviser to the UT regents, broke faculty into five groups -- Dodgers,
Coasters, Sherpas, Pioneers and Stars -- based on the number of
students they teach (weighted against their costs in salary, benefits
and overhead) and the amount of money they bring to their respective
universities in external research funding. Full-time administrators
who teach a course in addition to their regular duties were excluded.

“The data shows in high relief what anecdotally many have long
suspected, that the research university’s employment practices look
remarkably like a Himalayan trek, where indigenous Sherpas carry the
heavy loads so Western tourists can simply enjoy the view,” O’Donnell
wrote in his paper, “Higher Education’s Faculty Productivity Gap: The
Cost to Students, Parents & Taxpayers.”

[...]

Critics, however, saw O’Donnell’s analysis less as an attempt to shine
a light on serious issues than as an exercise in self-vindication by a
“disgruntled ex-employee,” said Pamela Willeford (specifying that she
was speaking for herself and not on behalf of others). Willeford is a
former chair of the Texas Higher Education Coordinating Board and a
member of the operating committee of the Texas Coalition for
Excellence in Higher Education, which was organized to counter the
ideas put forth by those aligned with Perry.

Stressing that she believes higher education can be improved,
Willeford argued that the universities' presidents and system
chancellors were already working, through such efforts as the
blue-ribbon Commission of 125, to better the institutions in ways that
will benefit students and the state.

“We think these are very simplistic ideas that are being pushed in a
heavy-handed way with an obvious bias of someone who no longer works
for the system,” Willeford said of O'Donnell's ideas. “Name-calling
like what’s going on in this report ... is certainly not helpful.”

In O'Donnell's nomenclature, “Dodgers” are the least productive
faculty because they bring in no external research funding and teach
few students. “In essence, they’ve figured out how to dodge any but
the most minimal of responsibilities,” O’Donnell wrote.

“Coasters” are senior and tenured faculty who have reduced teaching
loads and do not produce significant research funding.

“Sherpas,” on the other hand, are mostly untenured; they bear most of
the teaching load while carrying out little to no research.

The last two categories describe faculty who generate considerable
external research money. “Pioneers” are highly productive in research
-- especially in science, technology and engineering -- but teach
little. “Stars” are highly productive faculty who do considerable
teaching and funded research.
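The report's five-way scheme amounts to a simple classification on two
measures per faculty member (weighted teaching load and external research
funding), plus tenure status. As a rough sketch only: the article gives no
numeric cutoffs, so the thresholds and function name below are hypothetical
placeholders, not O'Donnell's actual methodology.

```python
# Hypothetical sketch of the five-way classification described above.
# HIGH_TEACHING and HIGH_FUNDING are assumed thresholds; the report's
# actual cutoffs are not given in the article.
HIGH_TEACHING = 200       # weighted students taught per year (assumed)
HIGH_FUNDING = 100_000    # external research dollars per year (assumed)

def classify(weighted_students, external_funding, tenured):
    """Return one of the report's five labels for a faculty member."""
    high_teaching = weighted_students >= HIGH_TEACHING
    high_funding = external_funding >= HIGH_FUNDING
    if high_funding and high_teaching:
        return "Star"      # considerable teaching and funded research
    if high_funding:
        return "Pioneer"   # productive in research but teaches little
    if high_teaching:
        return "Sherpa"    # heavy teaching load, little to no research
    # low on both measures: tenure separates Coasters from Dodgers
    return "Coaster" if tenured else "Dodger"
```

The sketch also makes Fenves's objections concrete: the two inputs say
nothing about scholarly impact, and nothing distinguishes tenure-track
faculty from instructors hired specifically to teach.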

Categorizing people in this way, said O’Donnell, was a way of
highlighting those who he said were coasting and not teaching enough,
which he argued results in higher college costs and lower educational
quality. “I think there’s a lot of evidence that there’s too much
research that might be mediocre and too many faculty members who might
not be engaging with students,” he said.

But Gregory L. Fenves, dean of the Cockrell School of Engineering at
UT-Austin, said that such conclusions were not supported by the facts
because the analysis was deeply flawed. Fenves had two chief
criticisms. The first was that the analysis didn’t disaggregate
tenure-track and tenured faculty from instructors and contingent
faculty, who serve very different functions within the university. The
second was that research productivity was measured only in terms of
external grants. “I don’t think we can have a valid analysis for
discussion if it’s based on those two premises,” said Fenves.

O’Donnell said he chose that metric because it was the only one
available in the data and that it could be evaluated on an
apples-to-apples basis. He added that peer review was embedded in the
process, which testified to the worth of the scholarship being funded.
He also said he supported the role of research in the humanities and
social sciences -- even though it would not be accounted for in his
measurement.

Fenves said the biggest problem with using this information in this
way was that it confused an input into the system (money raised
externally) with an output (scholarly impact). Impact, he said, is
typically measured in a researcher’s contributions to journals and
delivery of papers at conferences. “I’m not opposed to analysis,” said
Fenves. “It’s got to be the right analysis.”
_______________________________________________
pen-l mailing list
pen-l@lists.csuchico.edu
https://lists.csuchico.edu/mailman/listinfo/pen-l