Identifying and categorizing the indicators used to evaluate scientific outputs in the scholarly publication ecosystem

Document Type: Research Paper

Authors

1 Ph.D. in Knowledge and Information Science, Research Institution for Information Science and Technology (IranDoc).

2 Ph.D. in Knowledge and Information Science; Assistant Professor, Research Institution for Information Science and Technology (IranDoc).

3 Associate Professor, Department of Educational Science, Yadegar-e Imam Khomeini (RAH) Shahre Rey Branch, Islamic Azad University, Tehran, Iran.

Abstract

Purpose: Science and technology are among the most critical infrastructures of a country's progress and a necessary tool for competition in various fields. Evaluation lies at the heart of all scientific efforts and has become more important with the explosion of scientific publications. Evaluation is neither a simple nor a transparent process; it is a sensitive activity. The existence of multiple evaluation indicators for determining the value of scientific outputs in texts, databases, and scientific centers or publications led us to study these three sources (texts, scientific networks, and experts) in order to create integrated criteria for evaluating outputs in the scholarly publication ecosystem. Some scientific outputs, such as lectures, workshops, and scientific meetings, are not taken into consideration because the scholarly publication ecosystem lacks an integrated framework. Moreover, only a few specific quantitative aspects, such as the impact factor, the number of citations, or the number of uses, are assessed in the evaluation of scientific works, and evaluations are limited to a short period. Identifying and categorizing these indicators as a framework can help resolve these issues and establish a continuous evaluation process both before and after the publication of scientific works. Therefore, the present research aims to identify comprehensive evaluation criteria in the scholarly publication ecosystem by considering the texts, scholarly publication networks, and the views of scientific publication experts.
Methodology: The triangulation method was used to develop a conceptual framework of evaluation indicators in the scholarly publication ecosystem. First, a systematic review was conducted to extract evaluation criteria from 331 sources. Then, to determine the validity of the extracted criteria and to complete the initial framework, the identified criteria were examined in 12 scientific databases; finally, the framework was approved by 30 domestic and foreign scholarly publication experts. Purposive sampling was used in all three studies.
Findings: The research shows that the scholarly publication ecosystem consists of various components, including experts, scientific centers, information media, subject areas, and information and knowledge systems, which require different evaluation indicators and methods. The data extracted from the systematic review were classified into three groups: form, type, and format. Forms of evaluation include content, open, altmetric, and bibliographic evaluation (creator and source evaluation). Some experts distinguish bibliometric evaluation indicators from scientometric and informetric evaluation indicators, but most experts in different subject areas group all three under bibliographic evaluation. In this form of evaluation, creators include individuals and scientific organizations such as universities. Open evaluation refers to judging an output not only by a jury of experts but by anyone interested in the output; in other words, open evaluation is an ongoing, transparent post-publication process of peer evaluation. Multiple paper-evaluation functions, freely defined by individuals or groups, provide various perspectives on the scientific literature. Such functions, alongside research evaluation criteria more diverse than traditional methods, are emerging, and with them come a range of practical, ethical, and social factors to consider. Altmetric evaluation comprises a set of methods based on the social web that measure and monitor the reach and impact of scholarly output through online interactions; simply put, altmetrics are metrics beyond traditional citations. This form of evaluation measures citations, likes, views, reuse, discussions, bookmarks, and so on. Types of evaluation include quantitative, qualitative, and mixed evaluation. Formats of evaluation include technical and non-technical evaluation (researcher-made and discussion-based evaluation).
Technical evaluations are indicators that follow predefined procedures or repeatable processes to reach a result, whereas non-technical evaluations are defined by experts according to specific situations and conditions.
Conclusion: The results of all three methods show that, for evaluating scientific output within the scholarly publication ecosystem, there are three key indicators and nine sub-indicators, grouped with a high degree of agreement according to form, type, and format. The results showed alignment among the three studies (systematic review, observation of scholarly publication networks, and survey of experts), although each study emphasized particular evaluation indicators. Based on the systematic review, the observation of the scholarly publication networks, and the experts' views, the priority in evaluating the scholarly publication ecosystem is the form and type of evaluation, and more emphasis is placed on common and well-known formats. Beyond these dimensions, the needs and goals of individuals and organizations play a decisive role in selecting evaluation indicators. Grouping the evaluation indicators will help stakeholders clarify the evaluation processes of the scholarly publication ecosystem and choose among different evaluation methods.
