November 14, 2010
Questionable Science Behind Academic Rankings
By D.D. GUTTENPLAN
http://www.nytimes.com/2010/11/15/education/15iht-educLede15.html?_r=1&hpw
LONDON - For institutions that regularly make the Top 10, the autumn announcement of university rankings is an occasion for quiet self-congratulation. When Cambridge beat Harvard for the No. 1 spot in the QS World University Rankings this September, Cambridge put out a press release. When Harvard topped the Times Higher Education list two weeks later, it was Harvard’s turn to gloat.

But the news that Alexandria University in Egypt had placed 147th on the list, just below the University of Birmingham and ahead of such academic powerhouses as Delft University of Technology in the Netherlands (151st) and Georgetown in the United States (164th), was cause for both celebration and puzzlement.

Alexandria’s Web site was quick to boast of its newfound status as the only Arab university among the top 200. Ann Mroz, editor of Times Higher Education magazine, issued a statement congratulating the Egyptian university, adding “any institution that makes it into this table is truly world class.”

But researchers who looked behind the headlines noticed that the list also ranked Alexandria fourth in the world in a subcategory that weighed the impact of a university’s research, behind only Caltech, M.I.T. and Princeton, and ahead of both Harvard and Stanford.

Like most university rankings, the list is made up of several different indicators, which are given weighted scores and combined to produce a final number or ranking.
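The weighted-score mechanism just described, and the way a single heavily weighted indicator can tilt it, can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Times Higher Education methodology: the citations weight uses the 32.5% figure discussed later in this article, while the other indicator names and weights are invented placeholders.

```python
# Illustrative sketch of a composite ranking: each indicator is
# normalized to a 0-100 score, multiplied by its weight, and summed.
# Only the citations weight (32.5%) comes from the article; the other
# indicators and weights are hypothetical placeholders.

WEIGHTS = {
    "citations": 0.325,   # weight reported for the citations indicator
    "teaching": 0.30,     # hypothetical
    "research": 0.30,     # hypothetical
    "income": 0.075,      # hypothetical
}

def composite_score(indicators):
    """Combine normalized 0-100 indicator scores into one weighted total."""
    return sum(WEIGHTS[name] * score for name, score in indicators.items())

# An otherwise middling institution with an extreme citation-impact
# score can outrank a stronger all-rounder, because that one indicator
# carries nearly a third of the total weight.
outlier = composite_score({"citations": 100, "teaching": 55, "research": 55, "income": 55})
all_rounder = composite_score({"citations": 50, "teaching": 75, "research": 75, "income": 75})
print(f"outlier: {outlier:.1f}  all-rounder: {all_rounder:.1f}")
```

Here the "outlier" institution scores roughly 69.6 against the all-rounder's roughly 66.9, despite being weaker on every indicator except citations, which is the general shape of the distortion the critics describe.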
As Richard Holmes, who teaches at the Universiti Teknologi MARA in Malaysia, wrote on his University Ranking Watch blog, according to the Webometrics Ranking of World Universities, published by the Spanish Ministry of Education, Alexandria University is “not even the best university in Alexandria.” The overall result, he wrote, was skewed by “one indicator, citations, which accounted for 32.5% of the total weighting.”

Phil Baty, deputy editor of Times Higher Education, acknowledged that Alexandria’s surprising prominence was actually due to “the high output from one scholar in one journal,” soon identified on various blogs as Mohamed El Naschie, an Egyptian academic who published over 320 of his own articles in a scientific journal of which he was also the editor. In November 2009, Dr. El Naschie sued the British journal Nature for libel over an article alleging his “apparent misuse of editorial privileges.” The case is still in court.

One swallow may not make a summer, but the revelation that one scholar can make a world-class university comes at a particularly embarrassing time for the rapidly burgeoning business of rating academic excellence.

“The problem is we don’t know what we’re trying to measure,” said Ellen Hazelkorn, dean of the Graduate Research School at the Dublin Institute of Technology and author of “Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence,” coming out this March. “We need cross-national comparative data that is meaningful. But we also need to know whether the way the data are collected makes it more useful or easier to game the system.”

Dr. Hazelkorn also questioned whether the widespread emphasis on bibliometrics, using figures for academic publications or how often faculty members are cited in scholarly journals as proxies for measuring the quality or influence of a university department, made any sense. “I understand that bibliometrics is attractive because it looks objective.
But as Einstein used to say, ‘Not everything that can be counted counts, and not everything that counts can be counted.’”

Unlike the Times Higher Education rankings, where surveys of academic reputation make up nearly 45 percent of the total, Shanghai Jiao Tong University relies heavily on faculty publication rates for its rankings; weight is also given to the number of Nobel Prizes or Fields Medals won by alumni or current faculty. The results, say critics, tip toward science and mathematics rather than arts or humanities, while the tally of prizewinners favors rich institutions able to hire faculty members whose best work may be long behind them.

“The big rap on rankings, which has a great deal of truth to it, is that they’re excessively focused on inputs,” said Ben Wildavsky, author of “The Great Brain Race,” who said that measuring faculty size or publications, or counting the books in the university library, as some rankings do, tells you more about a university’s resources than about how those resources affect students.

Nevertheless, Mr. Wildavsky, who edited U.S. News and World Report’s Best Colleges list from 2006 to 2008, described himself as “a qualified defender” of the process. “Just because you can’t measure everything doesn’t mean you shouldn’t measure anything,” said Mr. Wildavsky, adding that when U.S. News published its first college guide in 1987, a delegation of college presidents met with the magazine’s editors to ask that the whole exercise be stopped.

Today there are over 40 different rankings: some, like U.S. News, are focused on a single country or a single academic field like business administration, medicine or law, while others attempt to compare universities on a global scale. Mr. Wildavsky freely admits the system is subject to all kinds of bias. “A lot of ratings use graduation rates as a measure of student success,” he said.
“An urban-setting university is probably not going to have the same graduation rate as Dartmouth.”

“But there’s a real need for a globalized comparison on the part of students, academic policymakers, and governments,” he said. The difficulty, Dr. Hazelkorn said, “is that there is no such thing as an objective ranking.”

Mr. Baty said that when Times Higher Education magazine first set up its rankings in 2004, “it was a relatively crude exercise” aimed mainly at prospective graduate students and academics. Yet today those ratings have an impact on governments as well as on faculties. Dr. Hazelkorn pointed out that a recent Dutch immigration law explicitly targets foreigners who received their degree “from a university in the top 150” of the Shanghai or Times Higher Education rankings.

According to Mr. Baty, it was precisely the editors’ awareness that the Times Higher Education rankings “had become a global news event” that prompted them to overhaul their methodology for 2010. So it is particularly ironic that the new, improved model should prove so vulnerable. “When you’re looking at 25 million individual citations there’s no way to examine each one,” he said. “We have to rely on the data.”

That may not convince the critics, who apparently include Dr. El Naschie. “I do not believe at all in this ranking business and do not consider it anyway indicatory of any merit of the corresponding university,” he said in an e-mail.

But if rankings can’t always be relied on, they have become an indispensable part of the educational landscape. “For all their methodological shortcomings, rankings aren’t going to disappear,” said Jamil Salmi, an education expert at the World Bank. Mr. Salmi said that the first step in using rankings wisely is to be clear about what is actually measured.
He also called for policy makers to move “beyond rankings” to compare entire education systems. He offered the model of Finland, “a country that has achieved remarkable progress as an emerging knowledge economy, and yet does not boast any university among the top 50 in the world, but has excellent technology-focused institutions.”

_______________________________________________
pen-l mailing list
[email protected]
https://lists.csuchico.edu/mailman/listinfo/pen-l
