Correlation between University Rankings Based on Pure Bibliometrics and on Bibliometrics-Informed Peer Review: The Case of the UK Research Excellence Framework and the World's Prestigious Ranking Systems

Article Type: Research Article

Authors

1 M.A. in Knowledge & Information Sciences, Shiraz University.

2 Professor, Department of Knowledge & Information Sciences, Faculty of Education & Psychology, Shiraz University.

3 Assistant Professor, Department of Knowledge & Information Sciences, Faculty of Literature & Humanities, Persian Gulf University.

Abstract

Purpose: Some university rankings, such as the Research Excellence Framework (REF), use peer review to achieve higher evaluation quality; however, this method is inefficient. To assess how convergent the results of rankings with different approaches to peer review are, the present study examines the correlation between UK universities' scores in REF and in the prestigious Leiden, Shanghai (ARWU), QS, and Times (THE) ranking systems.
Methodology: This documentary study was conducted with a quantitative content analysis approach. Data were collected through a census and analyzed using correlation and regression analyses.
Findings: Universities' REF scores correlate strongly with their THE and QS scores, moderately with ARWU (Shanghai), and weakly to strongly with the Leiden dimensions. The QS overall score is affected by medical orientation, and an effect of subject orientation on scores is also observed in some Leiden dimensions.
Conclusion: Rankings based on bibliometrics-informed peer review show strong convergence with, and thus high similarity to, the method that combines bibliometrics with performance statistics and surveys. Combining bibliometrics with performance statistics alone, however, is only moderately similar and cannot substitute for it. The effect of subject orientation on ranking results challenges the use of these results for comparing universities with different subject coverage.



Article Title [English]

A Correlation Study of Bibliometric-Based and Informed-Peer-Review University Rankings: The Case of UK Research Excellence Framework (REF) and the World's Prestigious University Ranking Systems

Authors [English]

  • Somayeh Hesabi 1
  • Hajar Sotudeh 2
  • Zahra Yousefi 3
1 M.A. in Knowledge & Information Sciences, Faculty of Education & Psychology, Shiraz University, Shiraz, Iran.
2 Professor, Department of Knowledge & Information Sciences, Faculty of Education & Psychology, Shiraz University, Shiraz, Iran.
3 Assistant Professor, Department of Knowledge & Information Sciences, Faculty of Literature & Humanities, Persian Gulf University, Bushehr, Iran.
Abstract [English]

Purpose: Informed-peer-review university ranking systems like REF use peer review to achieve high-quality evaluations of university performance while taking bibliometric evidence into account. Despite its advantages and methodological success, the review-based exercise cannot be implemented in all countries, owing to its unaffordable costs, its vulnerability to bias, and differences in cultural, economic, managerial, and infrastructural conditions. As an alternative, some science systems prefer to use the results of international university rankings to gauge the academic performance of their higher education institutions. Using a different methodology, the international ranking systems maintain their efficiency by relying heavily on bibliometric information extracted from databases, performance data gathered from official authorities, and academic and employer surveys. This raises the question of the extent to which the two types of systems converge in their results. If their different methodologies lead to similar results, they can be interpreted as similarly effective in evaluating academic performance; by relying on the results of the international ranking systems, one could then avoid the shortcomings of the review-based method and maintain both the efficiency and the effectiveness of the evaluation system.
To reveal the convergence of the results obtained from the two methods, the present study examines the British universities evaluated by REF (2014) and investigates the correlation between their scores in REF and in the world's prestigious university rankings.
Methodology: Using a quantitative content analysis method, the present study concentrates on 150 British universities evaluated simultaneously by REF (2014) and by at least one of the world's prestigious ranking systems, including QS, THE, Leiden, and ARWU. Due to the small size of the population, all REF members are examined without sampling. The evaluation results of these universities are extracted from the systems and entered into a checklist. The subject fields and disciplines covered by the universities are also collected to investigate their probable effects on the results. The universities' subject-coverage similarity is calculated using the cosine similarity measure and the K-Nearest-Neighbor technique in the KNIME data mining platform. Finally, correlation and regression analyses are applied to analyze the data.
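As a rough illustration of the subject-coverage similarity step, the sketch below computes pairwise cosine similarities between universities' subject-coverage vectors and finds each university's nearest neighbor. The study itself performed this step in KNIME; the Python code and the coverage values here are illustrative assumptions, not the authors' workflow or data.

```python
# Minimal sketch of the cosine-similarity / K-Nearest-Neighbor step described
# above. The study ran this in KNIME; the vectors below are hypothetical.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neighbors import NearestNeighbors

# Rows: universities; columns: shares of output in broad fields
# (medicine, engineering, basic sciences, humanities) -- illustrative values.
coverage = np.array([
    [0.45, 0.20, 0.25, 0.10],   # medically oriented profile
    [0.10, 0.50, 0.30, 0.10],   # technically oriented profile
    [0.30, 0.25, 0.25, 0.20],   # balanced profile
])

# Pairwise cosine similarity of the subject profiles.
print(np.round(cosine_similarity(coverage), 3))

# Each university's nearest peer by cosine distance (k=2: self + one neighbor).
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(coverage)
_, idx = nn.kneighbors(coverage)
print(idx[:, 1])  # index of each university's most similar peer
```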
Findings: UK universities' scores in REF are significantly correlated with their scores in the international ranking systems: strongly with QS and THE, moderately with ARWU, and weakly to strongly with Leiden. The regression analyses show no significant effect of subject coverage on the overall scores, except for the effect of medical orientation on QS. However, subject coverage does affect some dimension scores. While it does not significantly predict any of the ARWU dimensions, it partially predicts the Citations and Industry Income dimensions in THE, with medical subjects and technical-and-engineering subjects, respectively, having the highest positive predictive power. In QS, subject coverage partially predicts the dimensions Academic reputation, Faculty/student ratio, International faculty ratio, International student ratio, and Citations per faculty. Medical science has the highest positive effect on Academic reputation, Faculty/student ratio, and Citations per faculty, while basic sciences have the strongest negative effect on International student ratio and International faculty ratio.
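The following sketch illustrates, on synthetic data, the kind of correlation and regression analysis reported above; the variables ref_score, qs_score, and medical_share are invented stand-ins for the study's actual measures, and the coefficients are arbitrary.

```python
# Hedged sketch with synthetic data: correlating REF scores with a ranking's
# scores, then regressing the ranking score on a subject-orientation share.
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 150                                          # population size in the study
ref_score = rng.uniform(0.0, 100.0, n)           # synthetic REF scores
medical_share = rng.uniform(0.0, 0.5, n)         # synthetic medical orientation
qs_score = 0.8 * ref_score + 20 * medical_share + rng.normal(0, 10, n)

# Strength of the REF-QS association.
r, p = pearsonr(ref_score, qs_score)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Does medical orientation predict the overall ranking score?
X = sm.add_constant(medical_share)               # intercept + predictor
print(sm.OLS(qs_score, X).fit().summary())
```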
Subject coverage can also predict Leiden scores for the dimensions PP(top 10%) (the proportion of a university's publications that, compared with other publications in the same field and year, belong to the top 10% most frequently cited), MCS (the average number of citations of a university's publications), MNCS (the average number of citations of a university's publications, normalized for field and publication year), PP(collab) (the proportion of a university's publications co-authored with one or more other organizations), PP(int collab) (the proportion of a university's publications co-authored by authors from two or more countries), PP(industry) (the proportion of a university's publications co-authored with one or more industrial organizations), and PP(>1000 km) (the proportion of a university's publications with a geographical collaboration distance of more than 1000 km). Medical science has the highest positive predictive power for the scores in all the mentioned dimensions except PP(industry).
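Restated as formulas (the standard CWTS Leiden Ranking definitions, matching the prose descriptions above), with $c_i$ the citation count of publication $i$, $e_i$ the expected citation count for publications of the same field and year, $t_i$ the field- and year-specific top-10% citation threshold, and $n$ the number of a university's publications:

$$\mathrm{MCS}=\frac{1}{n}\sum_{i=1}^{n} c_i,\qquad \mathrm{MNCS}=\frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i},\qquad \mathrm{PP}(\text{top }10\%)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\!\left[c_i \ge t_i\right].$$

The collaboration indicators PP(collab), PP(int collab), PP(industry), and PP(>1000 km) are analogous proportions, with the indicator function testing the relevant co-authorship condition instead of the citation threshold.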
Conclusion: The results of REF, as a peer-review-based university ranking informed by bibliometric data, are highly correlated with those of international evaluations that combine performance statistics with surveys, while being only moderately correlated with those based on performance statistics alone. The impact of subject coverage on the rankings challenges the application of their results to comparing universities with different subject coverages.

Keywords [English]

  • Research Evaluation
  • University Rankings
  • REF
  • THE
  • QS
  • ARWU
  • Leiden
 