On the U-Multirank university ranking system

Posted on February 27, 2015


Op-Economica, 27-2-2015 — I happened to reread an old article, from two years ago, about the European ranking initiative and LERU's reaction to it. It is a collaboration to rank higher-education institutions worldwide using a more diverse set of ranking criteria.

EU universities are dissatisfied with the current rankings; in particular, the article points to the Shanghai Jiao Tong ranking (ARWU), in which EU institutions fare markedly worse than their US counterparts.

Other ranking tables also show many weaknesses. France eagerly took the lead in this initiative, and the alternative ranking methodology, nicknamed the 'Brussels Ranking', is championed by institutions in Germany and the Netherlands.

They ran a two-year pilot, then began collecting data, aiming to produce the first U-Multirank dataset for 2014. If so, publication would come in 2015, after the project received EU funding totaling €2 million. The pilot started with only 159 universities; the program has since grown to 500 participating institutions.

Still, there has been plenty of grumbling. Some say the project is unnecessary. Others, such as Edinburgh, argue that the method and data are easily distorted (perhaps because the system is complex and depends on institutions supplying their own data). Still others, such as the University of Zurich, have declared an outright boycott.

Most, however, seem to endorse the view of the Dublin Institute of Technology that the program is much needed to promote the all-round value of European education, and that it is a major contribution to awareness of education worldwide.

Whatever happens, it should be recognized that Europe still deserves its place as a great intellectual center, the cradle of the light of knowledge since the cultural and scientific Renaissance.

‘Brussels Ranking’ of Universities Off to a Rocky Start

Science 25 January 2013: Vol. 339 no. 6118 p. 383 DOI: 10.1126/science.339.6118.383

BRUSSELS—At a meeting in Dublin next week, researchers will unveil the details of a new system to grade the world’s universities. Backed by millions of euros from the European Union and designed by a group of higher education research centers, the new method is billed as less simplistic and more reliable than existing university rankings, which have long been the subject of fierce criticism.

But the new plan, dubbed U-Multirank, has already aroused controversy. The League of European Research Universities (LERU), which includes some of Europe’s top schools, says its 21 members won’t provide data. That could seriously undermine the new project. “It’s difficult to run a ranking if people are boycotting,” says Alex Usher, president of Higher Education Strategy Associates in Toronto, Canada. The ranking’s developers have also struggled to bring U.S. and Chinese universities on board, casting further doubt on the usefulness of the data.

Critics of current ranking systems—which include the Academic Ranking of World Universities, better known as the Shanghai ranking, and the Times Higher Education World University Rankings—argue that these charts rely on indirect, sometimes inadequate indicators, and that the aggregated scores, based on subjective weights for each indicator, are meaningless (Science, 24 August 2007, p. 1026). Some also complain that the rankings are biased toward research in the natural sciences.

Low scores for European universities have only fueled frustrations; now, for instance, only nine E.U. universities are listed in Shanghai’s top 50, compared with 36 from the United States. In 2008, France’s government championed the creation of a more sophisticated alternative, which has since been led by the Centre for Higher Education (CHE) in Gütersloh, Germany, and the Center for Higher Education Policy Studies at the University of Twente in the Netherlands.

U-Multirank is inspired by a popular CHE ranking of German universities and a clear departure from existing global efforts. Instead of focusing on research quality, it will assess universities more broadly, comparing research reputation, teaching quality, ties with the local community, international orientation, and success in tech transfer. And instead of publishing a single ranking that aggregates indicators using pre-assigned weights, U-Multirank will let users select their own criteria, providing data and graphs on dozens of measures. The system relies on a range of sources, including bibliometric databases, student surveys, and data provided by the universities themselves, for instance on student/staff ratios or gender balance.

After a 2-year pilot involving 159 universities, U-Multirank will now expand to at least 500 institutions. The first batch of data, in physics, business, mechanical engineering, and electrical engineering, will be collected this year and published in 2014; computing, social sciences, and the humanities may follow. U-Multirank is set to receive €2 million from the European Union in 2013–14.

LERU, which includes university heavyweights like Cambridge and Oxford—ranked fifth and tenth, respectively, on the Shanghai chart—initially welcomed the idea, albeit cautiously. In a 2010 LERU paper, Geoffrey Boulton from the University of Edinburgh in the United Kingdom said that U-Multirank could provide an "antidote to single monotonic lists." But more recently, the group has slammed the project as unnecessary and poorly designed. The pilot has shown the data collection to be a huge burden on universities, LERU claims, and some statistics—such as graduate employment—are hard to compare from one country to another. Universities could also be tempted to manipulate data to improve their scores, LERU Secretary-General Kurt Deketelaere says.

CHE researcher Gero Federkeil says the pilot study helped test and refine indicators. “To ensure that the data are comparable, you need to give universities very precise [indicator] definitions,” he says. Federkeil is aware that universities could cheat but says there will be checks and controls; for example, U-Multirank will look for outliers and check for country-specific inconsistencies. Federkeil is hoping to clear up what he calls “misunderstandings” about U-Multirank, and to get LERU back on board.

The European University Association, which unites about 850 universities, declined to say whether it would advise its members to participate. U-Multirank is taking a risk by asking the universities to supply data themselves, Usher says. “You open the process to hijacking by people who don’t want to play ball,” he says. “If that happens, I’m not sure the ranking will have credibility with consumers.”

Count them out. The University of Zurich, a member of a group that will boycott the new ranking.


Producing a truly global list may prove another headache. Of 24 U.S. universities selected for the pilot project, only four confirmed participation, and just one submitted data. Participation in China was equally low. That is no surprise, Usher says. He recalls a 2008 speech by Valérie Pécresse, then France’s higher education minister, who argued for “an objective, well thought-out European ranking, that will let [European] universities shine worldwide.” “It was perceived as an E.U. instrument,” Usher says. The rankers have tried to address that concern but admit they won’t achieve global coverage in the short-term.

Ellen Hazelkorn, vice president for research and enterprise at the Dublin Institute of Technology and an expert on rankings, calls the low international take-up “disappointing.” She praises U-Multirank as an ambitious attempt to go beyond single definitions of quality and says it could be a valuable tool to promote the merits of higher education in Europe. “It’s a huge undertaking and an important contribution to the debate,” Hazelkorn says.

* Tania Rabesandratana is a writer based in Brussels.