Comparison of Experts’ Perspectives with the SciVal Database’s FWCI Index in the Identification of Top Authors (Case Study: Top Iranian Authors in the Fundamental Sciences Area, 2013 to 2018)

Document Type: Research Paper

Authors

1 Professor, Department of Knowledge and Information Science Studies, Shahid Chamran University of Ahvaz, Iran

2 Associate Professor, Department of Knowledge and Information Science Studies, Payam-e-Noor University

3 PhD in Knowledge and Information Science Studies, Shahid Chamran University of Ahvaz, Iran

Abstract

Purpose: The main objective of the present study is to compare a quantitative approach (the scientometric FWCI index) with a qualitative approach (experts’ perspectives) in identifying top authors (a working definition of FWCI is sketched after the abstract).
Methodology: This is applied research conducted with a mixed (qualitative and quantitative) method: the qualitative part drew on the experts' points of view, and the quantitative part used the scientific-productivity evaluation index (FWCI). The study involved two samples: the first group comprised the experts (n = 10) and the second group consisted of the top authors (n = 87) identified on the basis of FWCI. A checklist, a questionnaire, and the SciVal database were used for data gathering, and the data were analyzed with the nonparametric Friedman test (an illustrative sketch of this test follows the abstract).
Findings: The results indicated that, from the experts' point of view, the most important indicators influencing authors' scientific productivity are environmental and organizational factors (the time devoted to research, the scientific rank of the affiliated organization, and the author's national and international reputation), whereas in the top-author group the scientometric indicators (number of papers, organizational goals, citations, and the credibility of the publishing journal) showed the highest mean values and ranks based on FWCI relative to the other indicators. However, no significant difference was found between the two approaches in terms of the obtained ranks.
Conclusion: The present study showed no significant difference between the qualitative and quantitative approaches in terms of the obtained ranks (even though the order of the indicators differs between the two approaches), and the top authors identified on the basis of FWCI exhibit the indicators that experts in the area of scientific productivity consider decisive.
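
Note on FWCI: the Field-Weighted Citation Impact reported by SciVal is commonly described as the ratio of a publication's actual citations to the citations expected for comparable publications; the exact normalization is Elsevier's, so the formula below is only a rough sketch of that description:

\mathrm{FWCI} \;=\; \frac{\text{citations received by the publication}}{\text{expected citations for similar publications (same field, year, and document type)}}

On this reading, a value of 1.0 indicates citation performance at the world average for comparable publications.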
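As a purely illustrative sketch of the analysis step named in the Methodology (not the study's actual data or analysis script), the nonparametric Friedman test can be run in Python with SciPy; the indicator names and ratings below are hypothetical:

```python
# Illustrative sketch only: a Friedman test on hypothetical indicator ratings,
# not the study's data.
from scipy.stats import friedmanchisquare

# Hypothetical ratings of three indicators, each list holding one indicator's
# scores from the same set of respondents (repeated measures).
time_for_research   = [5, 4, 5, 3, 4, 5, 4, 5, 3, 4]
org_scientific_rank = [4, 4, 3, 3, 5, 4, 3, 4, 4, 3]
author_reputation   = [3, 5, 4, 4, 4, 3, 5, 4, 3, 4]

# The Friedman test checks whether the indicators receive systematically
# different ranks across respondents.
statistic, p_value = friedmanchisquare(time_for_research,
                                       org_scientific_rank,
                                       author_reputation)
print(f"Friedman chi-square = {statistic:.3f}, p = {p_value:.3f}")
```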

Keywords

