Editor's Note
Author
Editor-in-Chief of the Scientometrics Research Journal, a biannual scientific journal of Shahed University, and Professor, Department of Knowledge and Information Science, Shahed University, Tehran, Iran.
Identifying the actual roles of authors in the production of a scientific work, and clarifying the nature and extent of each contributor's participation in the conception, execution, and dissemination of research, have consistently been among the most important and contested concerns of research evaluation systems, remaining at the center of attention within the scientometrics community. As an illustrative example, one of the key concepts in this context is Dominance Factor Analysis (DFA), which takes the frequency with which an author occupies the first-author position as a basis for assessing that author's scientific dominance over other co-authors (Kumar et al., 2021). Alongside the extensive research conducted in this area, it is also necessary to refer to international efforts that have emerged in response to these same concerns. For example, the International Committee of Medical Journal Editors (ICMJE) specifies four explicit criteria for qualifying for authorship of a scientific article (ICMJE, as cited in Hosseini et al., 2025, p. 22):
1. Substantial contribution to the conception or design of the study, or to the analysis and interpretation of data;
2. Drafting the initial manuscript or critically revising it for important intellectual content;
3. Final approval of the version to be published;
4. Full accountability for all aspects of the research work.
Building on this line of argument, scholars such as Larivière et al. (2021, as cited in Hosseini et al., 2025, p. 22) have emphasized that a scientific author must satisfy all four of the above criteria. In this context, one of the most significant international initiatives aimed at enhancing transparency in contributors’ roles is the development of the ANSI/NISO Z39.104‑2022 standard (the CRediT standard). This standard, developed by the National Information Standards Organization (NISO) with the endorsement of the American National Standards Institute (ANSI), introduces fourteen clearly defined roles across the research process—from conceptualization and methodology to writing, analysis, and visualization—and is now widely adopted by journals, publishers, and research evaluation bodies (NISO, 2022).
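As a concrete illustration of how the CRediT standard supports transparent role attribution, the fourteen roles can be recorded as simple machine-readable metadata attached to each contributor. The minimal Python sketch below is illustrative only and is not part of any cited study; the role names follow NISO's published taxonomy (hyphens are used here in place of NISO's en dashes in the two writing roles):

```python
# The 14 contributor roles defined by the CRediT taxonomy
# (ANSI/NISO Z39.104-2022).
CREDIT_ROLES = {
    "Conceptualization",
    "Data curation",
    "Formal analysis",
    "Funding acquisition",
    "Investigation",
    "Methodology",
    "Project administration",
    "Resources",
    "Software",
    "Supervision",
    "Validation",
    "Visualization",
    "Writing - original draft",
    "Writing - review & editing",
}

def validate_contributions(declared: set) -> set:
    """Return any declared roles that fall outside the CRediT taxonomy."""
    return declared - CREDIT_ROLES

# Hypothetical author record: "Data entry" is not a CRediT role.
unknown = validate_contributions({"Conceptualization", "Formal analysis", "Data entry"})
print(sorted(unknown))  # -> ['Data entry']
```

Representing roles this way lets journals and evaluation bodies check declared contributions mechanically, while the judgment about who actually fulfilled each role remains, as argued above, a human responsibility.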
The common thread running through all these efforts is the emphasis on the unique and non-substitutable position of the human researcher in fulfilling three fundamental roles: articulating the research problem, interpreting the results, and verifying the validity of data and findings. These three core roles are inherently grounded in scientific judgment and human accountability and must necessarily be performed by the researcher themselves, such that no technological tool, whether analytical software or artificial intelligence systems, can substitute for the researcher's direct engagement in carrying out these functions. For this reason, even within standards that define and classify contributor roles in research, references to "software" or "tools" are intended to denote the researcher's use of analytical instruments, rather than the delegation of responsibility for reasoning, evaluation, and scientific judgment to machines. Accordingly, while artificial intelligence tools and data infrastructures can substantially accelerate and facilitate the collection and analysis of scientometric data, understanding the research problem, exercising judgment over results, and assuming scientific responsibility remain firmly and exclusively within the domain of the human researcher.
Artificial Intelligence and the Trajectory of Scientometrics
In light of the preceding discussion, it is evident that the introduction and adoption of artificial intelligence technologies in scientometric research, despite the substantial capacities they offer for facilitating and advancing various stages of the research process, cannot substitute for the three fundamental roles of the scientometrics researcher: articulating the research problem, interpreting the results, and verifying the validity of data and findings. Scientometrics has always been shaped and advanced through direct engagement with emerging technologies; from bibliographic databases and data analysis software to large language models, each technological development has opened new possibilities for enhancing the accuracy, speed, and depth of analysis in this field. Nevertheless, the contemporary landscape of scientometrics more than ever requires that the boundary between "technological capability" and "scientific judgment" be clearly and deliberately maintained.
The introduction of large language models into this field has ushered in a new phase of scientific data analysis, one in which artificial intelligence becomes a technological partner in human decision-making, and data analysis increasingly approaches an algorithmic interpretation of the scientific system (Zhang et al., 2025). From the perspective of national science policy, this transformation is also of considerable importance, as it can enhance the monitoring capacity of the science system and provide policymakers with more comprehensive evidence bases. This is because large language models are capable of (Saarela et al., 2025; Thelwall, 2025; Zhang et al., 2025):
- analyzing scientific texts at scale;
- identifying emerging concepts and research orientations;
- making the linkages between science, technology, and society more visible;
- improving the predictability of citations;
- enabling more effective analyses of knowledge trajectories;
- facilitating the identification of research axes and thematic areas; and
- extracting advanced textual features within data‑mining processes.
Saarela et al. (2025) demonstrate that large language models and knowledge graphs possess substantial potential for enriching scientometric analyses. They show that, in contemporary scientometric research, large language models and knowledge graphs are being employed for two primary purposes:
Enhancing traditional scientometric analyses to improve the accuracy, scalability, and analytical richness of existing research methods, including tasks such as:
- entity extraction;
- thematic mapping;
- trend analysis; and
- the construction of domain-specific knowledge graphs.
Exploratory and generative applications, functioning as engines of knowledge discovery for purposes such as:
- research idea generation;
- prediction of research interests;
- scientific question answering; and
- advanced interdisciplinary analysis.
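The first cluster of applications, such as entity extraction and the construction of domain-specific knowledge graphs, can be illustrated with a toy example. The sketch below uses entirely hypothetical data; in a real pipeline the entities would be extracted from full texts by an LLM or a named-entity-recognition model, not hand-coded. It builds a small co-occurrence knowledge graph from the keyword lists of three imaginary papers:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records: paper IDs mapped to extracted entities (keywords).
papers = {
    "p1": ["citation analysis", "machine learning", "research evaluation"],
    "p2": ["machine learning", "knowledge graph"],
    "p3": ["citation analysis", "knowledge graph", "research evaluation"],
}

def build_cooccurrence_graph(papers):
    """Weight an edge between two entities by the number of papers
    in which they co-occur."""
    graph = defaultdict(int)
    for entities in papers.values():
        for a, b in combinations(sorted(set(entities)), 2):
            graph[(a, b)] += 1
    return dict(graph)

graph = build_cooccurrence_graph(papers)
print(graph[("citation analysis", "research evaluation")])  # -> 2
```

Even in this toy form, the resulting edge weights make relations between concepts explicit and traceable, which is precisely the property Saarela et al. (2025) highlight when they describe knowledge graphs as a transparent semantic layer for LLM-assisted analysis.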
On this basis, Saarela et al. (2025) view scientometrics as undergoing a transition from retrospective mapping of science toward AI‑augmented discovery and insight-oriented analysis. Thelwall (2025) similarly lends support to this perspective, arguing that the use of large language models in research evaluation has the potential to reshape decision-making approaches and research management practices. It is evident that, for scientometrics in its role as a support mechanism for science and technology policy, this shift implies access to tools that facilitate not only an understanding of what has been, but also a clearer visibility into what is currently taking shape.
At the same time, AI-based systems, despite their considerable capabilities, are inherently susceptible to biases and inconsistencies arising from their training data and model design. Consequently, their use in research evaluation necessitates transparency, human oversight, and methodological accountability (Zhang et al., 2025). This position is fully aligned with the critical perspective articulated by Thelwall (2025), who warns that the absence of human oversight in AI-based research evaluation may lead to systemic distortions and misguided decisions. Moreover, Saarela et al. (2025), while acknowledging that artificial intelligence can alleviate many of the limitations of traditional scientometric methods, emphasize that the outputs of large language models are analytically meaningful only insofar as they are traceable to explicit relations, evidence, and data within knowledge graphs. From this perspective, knowledge graphs function as a transparent and explainable semantic layer within the analytical process. Nevertheless, in line with the views of Zhang et al. (2025) and Thelwall (2025), they likewise maintain that without explainability, human oversight, and the establishment of clear ethical frameworks, the application of artificial intelligence in research evaluation and analysis may result in misleading or unreliable analytical outcomes.
Conclusion and Outlook
Recent developments in scientometrics, particularly the emergence of large language models and generative artificial intelligence, have ushered in a new phase in understanding, analyzing, and interpreting the scientific system. Despite the extensive capacities of these technologies to accelerate data analysis and broaden the horizons of knowledge discovery, the very nature of scientometrics remains inseparably linked to human judgment, explainability, and scientific trust.
Zhang et al. (2025) characterize artificial intelligence as a turning point in the transformation of scientific paradigms; Saarela et al. (2025) emphasize the mediating role of human–AI interaction in enabling transparent and traceable analytical processes; and Thelwall (2025) cautions that, although large language models can enhance the capacity for research quality evaluation, the absence of human oversight may increase the risk of systemic biases and misinterpretations in research assessment. Accordingly, the future trajectory of scientometrics cannot be sought in a reliance on algorithms alone. Rather, it must be pursued within a framework of human–AI co-design, in which the researcher is preserved as the final overseer, interpreter, and accountable agent of scientific results, while simultaneously aligning with international standards for the attribution of contributor roles in research, such as the CRediT taxonomy.
Within this framework, any scientometric research- whether conducted at the national or international level- should be grounded in three fundamental principles:
1. Explainability and transparency of analytical processes;
2. Human oversight and accountability in the production and interpretation of results; and
3. Alignment of technology with the strategic objectives of science policy and research ethics.
The forward-looking vision for Iran's scientometrics community is therefore not one of rejection of, or resistance to, emerging technologies, but rather the cultivation of a systematic, informed, and institutionalized dialogue on their responsible use. Such a dialogue can ultimately lead to the development of a context-sensitive framework for human–AI-centered scientometrics. Within this horizon, the scientometrics researcher is not positioned as a competitor to artificial intelligence, but as the ethical and scientific regulator of interaction with it: a role that both safeguards the integrity of science and clarifies the path of future transformation.
Accordingly, the central question today is no longer whether artificial intelligence should be used in scientometrics, but how such use can be designed in a manner that is both effective and aligned with the overarching goals of science policy. It is a question of how large language models can be leveraged to advance scientometric research without undermining the intrinsic standards of the researcher’s role, including established frameworks such as CRediT, and how clear principles and guidelines can be established for research models based on human–AI interaction. Undoubtedly, scholars and researchers in the field of scientometrics, by approaching the emerging phenomenon of generative artificial intelligence with a critical and evaluative perspective, and by engaging with such reflective questions, will be able to pave the way for the intelligent and responsible use of these novel tools, thereby opening new opportunities for the scientometrics community.