Identifying and categorizing the evaluation criteria for scientific outputs in the scholarly publication ecosystem

Article type: Research Article

Authors

1 Ph.D. in Knowledge and Information Science, Iranian Research Institute for Information Science and Technology (IranDoc)

2 Ph.D. in Knowledge and Information Science; Assistant Professor, Iranian Research Institute for Information Science and Technology (IranDoc)

3 Associate Professor, Department of Educational Science, Yadegar-e Imam Khomeini (RAH) Shahr-e Rey Branch, Islamic Azad University, Tehran, Iran.

Abstract

Purpose: The main purpose of the present study is to identify the evaluation criteria in the scholarly publication ecosystem, drawing on the literature, scholarly publication networks, and the views of scholarly publication experts.
Methodology: A triangulation method was used to develop a conceptual framework of evaluation indicators in the scholarly publication ecosystem. First, the criteria used in the field of scholarly publishing were extracted from 331 sources through a systematic review. Then, to validate the extracted criteria and complete the initial framework, the identified criteria were examined in 12 scientific databases; the framework was finally approved by 30 domestic and foreign experts.
Findings: The data extracted from the systematic review of the evaluation literature were classified into three groups: the form, type, and format of evaluation.
Conclusion: The results show that the three studies (the systematic review, the observation of scholarly publication networks, and the expert survey) are aligned, although each emphasizes particular indicators of the evaluation component. Based on the systematic review of the literature, the observation of scholarly publication networks, and the experts' views, the priority in evaluating the scholarly publication ecosystem is the form and type of evaluation; regarding the evaluation format, the emphasis is mostly on common, well-known methods. The present study can serve as a suitable model for designing a system for evaluating scientific outputs.


Article Title [English]

Identifying and categorizing the indicators used to evaluate scientific outputs in the scholarly publication ecosystem

Authors [English]

  • Afrooz Hamrahi 1
  • Roya Pournaghi 2
  • Dariush Matlabi 3
1 Ph.D. in Knowledge and Information Science, Iranian Research Institute for Information Science and Technology (IranDoc).
2 Ph.D. in Knowledge and Information Science; Assistant Professor, Iranian Research Institute for Information Science and Technology (IranDoc).
3 Associate Professor, Department of Educational Science, Yadegar-e Imam Khomeini (RAH) Shahr-e Rey Branch, Islamic Azad University, Tehran, Iran.
Abstract [English]

Purpose: Science and technology are among the most critical infrastructures of a country's progress and a necessary tool for competition in various fields. Evaluation is at the heart of all scientific effort, and it has become more important with the explosion of scientific publications. Evaluation is neither a simple nor a transparent process; it is a sensitive activity. The existence of multiple evaluation indicators for determining the value of scientific outputs in the literature, in databases, and in scientific centers and publications led us to study these three sources (texts, scientific networks, and experts) in order to create integrated criteria for evaluating outputs in the scholarly publication ecosystem. Some scientific outputs, such as lectures, workshops, and scientific meetings, are not taken into consideration because the scholarly publication ecosystem lacks an integrated framework. Moreover, the evaluation of scientific works is often limited to a few quantitative aspects, such as the impact factor, the number of citations, or the number of uses, and evaluations are confined to a short period. Identifying and categorizing these indicators in a single framework can help solve these issues and support a continuous evaluation process both before and after the publication of scientific works. Therefore, the present research aims to identify comprehensive evaluation criteria in the scholarly publication ecosystem by considering the literature, scholarly publication networks, and the views of scholarly publication experts.
Methodology: A triangulation method was used to develop a conceptual framework of evaluation indicators in the scholarly publication ecosystem. First, a systematic review was conducted to extract the evaluation criteria from 331 sources. Then, to determine the validity of the extracted criteria and to complete the initial framework, the identified criteria were examined in 12 scientific databases; finally, the framework was approved by 30 domestic and foreign scholarly publication experts. Purposive sampling was used in all three studies.
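To make the coding step of the systematic review concrete, the following minimal Python sketch tallies extracted criteria into the three groups the study reports (form, type, format). The criterion names and counts are invented for illustration; the article does not describe any code or data schema.

```python
from collections import Counter, defaultdict

# Hypothetical (criterion, group) pairs, as a coder might record them
# while reviewing sources; the labels below are illustrative only.
extracted = [
    ("altmetric evaluation", "form"),
    ("bibliographic evaluation", "form"),
    ("qualitative evaluation", "type"),
    ("quantitative evaluation", "type"),
    ("technical evaluation", "format"),
    ("altmetric evaluation", "form"),  # same criterion found in another source
]

# Tally how often each criterion appears within its group.
by_group = defaultdict(Counter)
for criterion, group in extracted:
    by_group[group][criterion] += 1

# Rank sub-indicators per group by frequency, mirroring how high-frequency
# sub-indicators could be identified across the reviewed sources.
for group, counts in by_group.items():
    print(group, counts.most_common())
```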
Findings: The research shows that the scholarly publication ecosystem consists of various components, including experts, scientific centers, information media, subject areas, and information and knowledge systems, which require different evaluation indicators and methods. The data extracted from the systematic review of the evaluation literature were classified into three groups: form, type, and format. Forms of evaluation include content, open, altmetric, and bibliographic evaluation (creator and source evaluation). Some experts distinguish bibliometric evaluation indicators from scientometric and informetric ones, but most experts across subject areas treat all three categories as bibliographic evaluation. In this form of evaluation, creators include individuals and scientific organizations such as universities. Open evaluation refers to judging an output not only by a jury of experts but by anyone interested in the output; in other words, open evaluation is an ongoing post-publication process of transparent peer evaluation. Multiple paper-evaluation functions, freely defined by individuals or groups, provide various perspectives on the scientific literature. Such functions, alongside research evaluation criteria more diverse than traditional methods, are emerging, and with them come a range of practical, ethical, and social factors to consider. Altmetric evaluation is a set of methods based on the social web that measure and monitor the reach and impact of scholarly outputs through online interactions; simply put, altmetrics are metrics beyond traditional citations. This form of evaluation measures citations, likes, views, reuse, discussions, bookmarks, and so on. The types of evaluation include quantitative, qualitative, and mixed evaluation. The format of evaluation comprises technical evaluation and non-technical evaluation (researcher-made and discussion-based evaluation). Technical evaluations follow predefined procedures or repeatable processes to reach a result, whereas experts define non-technical evaluations according to specific situations and conditions.
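As a concrete illustration of the altmetric form of evaluation described above, the sketch below aggregates online-interaction counts for a single output into one indicative score. The event names and weights are assumptions made for this example; they are not prescribed by the article or by any particular altmetrics provider.

```python
# Hypothetical altmetric profile for one scholarly output: counts of
# online interactions beyond traditional citations.
events = {
    "cites": 14,       # traditional citations, kept for comparison
    "views": 1250,
    "downloads": 310,  # a proxy for reuse
    "bookmarks": 42,
    "discussions": 7,  # e.g., blog or social-media mentions
    "likes": 96,
}

# Weight lighter interactions less than scholarly ones; these weights are
# arbitrary placeholders for whatever scheme an evaluator might adopt.
weights = {"cites": 5.0, "discussions": 2.0, "bookmarks": 1.0,
           "downloads": 0.5, "likes": 0.25, "views": 0.1}

altmetric_score = sum(weights[name] * count for name, count in events.items())
print(f"illustrative altmetric score: {altmetric_score:.1f}")
```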
Conclusion: The results of all three methods show that, to evaluate scientific outputs within the scholarly publication ecosystem, there are three key indicators and nine high-frequency sub-indicators, grouped by form, type, and format. The results show that the three studies (the systematic review, the observation of scholarly publication networks, and the survey of experts) are aligned, although each emphasizes particular evaluation indicators. Based on the systematic review, the observation of scholarly publication networks, and the experts' views, the priority in evaluating the scholarly publication ecosystem is the form and type of evaluation; in the evaluation format, more emphasis is placed on common, well-known formats. Beyond these dimensions, the needs and goals of individuals and organizations play a decisive role in selecting evaluation indicators. Grouping the evaluation indicators will help stakeholders clarify the evaluation processes of the scholarly publication ecosystem and choose among different evaluation methods.

Keywords [English]

  • scholarly evaluation
  • evaluation indicators
  • scholarly publication ecosystem
  • triangulation