
Artificial Intelligence for scientific research and discoveries

Authors: Saif Aldeen Alryalat, MD[1,2]
Affiliations:
1. University of Illinois Chicago, Chicago, Illinois, USA. 2. The University of Jordan, Amman, Jordan.
Saif Aldeen Alryalat <saifryalat@yahoo.com>
Published Date: December 1, 2025
Keywords: Artificial intelligence, Large language models, Generative model, Research

Abstract

The integration of artificial intelligence (AI) into scientific research is accelerating rapidly, with large language models (LLMs) and generative AI tools now widely adopted by researchers worldwide. While the most common use of LLMs remains enhancing written outputs, emerging applications include streamlining systematic reviews and improving research efficiency, with some AI-assisted workflows demonstrating over 350-fold acceleration while maintaining expert-level quality. Beyond text, AI is increasingly applied in drug and protein target discovery, enabling rapid identification of previously inaccessible targets and accelerating early-stage drug development. In medical imaging, multimodal AI models have shown the ability to detect pathologies such as glaucoma from fundus photographs with high accuracy, and generative models can create high-quality, medical-grade images to augment datasets, aid training, and produce educational figures. Despite potential risks, the benefits of these technologies are becoming increasingly evident, with AI poised to transform research methodology, diagnostics, and medical education.

Body

Main Text

A recent global survey of more than 2,400 researchers found that over 72% use artificial intelligence (AI)—and specifically large language models (LLMs)—at least once per month [1]. The most common application of LLMs is to refine and enhance the final written output of scientific work, a use case that aligns closely with the core strengths of these models as advanced language-processing tools. This marks a significant shift in the global perception of AI within the research ecosystem. Historically, concerns about misuse prompted considerable hesitation from the scientific community, including strong cautionary positions from publishers. However, as AI technologies have rapidly proliferated across all aspects of daily life, their integration into scientific research has become increasingly inevitable. In parallel, the community’s stance has evolved toward greater acceptance—albeit with clearly defined restrictions and an emphasis on maximum transparency. For example, Elsevier—one of the largest scientific publishers—outlined its position in the document Generative AI Policies for Journals [2], stating: “Elsevier recognizes the potential of generative AI and AI-assisted technologies (‘AI Tools’), when used responsibly, to help researchers work efficiently, gain critical insights fast and achieve better outcomes. Increasingly, these tools, including AI agents and deep-research tools, are helping researchers to synthesize complex literature, provide an overview of a field or research question, identify research gaps, generate ideas, and provide tailored support for tasks such as content organization and improving language and readability.” Other major publishers have adopted similar policies that emphasize transparency in the use of AI tools without prohibiting their integration into the research process.

While the impact of AI tools on scientific research will likely continue to expand in the coming years—and despite the inevitable misuse of these technologies—the potential benefits they offer to scientific discovery are substantial. In a prior project that assessed controlled LLM usage compared with human experts, we observed more than a 350-fold acceleration in completing a systematic review, with the AI-assisted review achieving quality comparable to that of expert-led reviews [3]. This serves as one example of how an AI-enabled platform can dramatically accelerate one of the most resource-intensive methods of scientific evidence generation. Traditionally, conducting a high-quality systematic review has been estimated to cost upwards of $100,000 per review, as reported in previous analyses [4]. A minimal sketch of how an LLM can be applied to the title-and-abstract screening step of such a review is provided below.

Other research groups have successfully leveraged AI-based protein structure prediction and generative chemistry to accelerate drug-target discovery, demonstrating the practical impact of AI beyond theoretical promise. For example, using AlphaFold-predicted structures as the starting point, a recent study combined a target-identification engine (PandaOmics) with a generative-chemistry platform (Chemistry42) to identify a previously “dark” protein target, CDK20, and produced a small-molecule “hit” inhibitor, even though no experimental structure for CDK20 existed [5].
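To make the systematic-review use case concrete, the following minimal Python sketch shows how an LLM could be asked to screen a single title and abstract against predefined inclusion criteria. It assumes the OpenAI Python SDK; the model name, criteria, and record are illustrative placeholders rather than the configuration used in the AIPRA study [3].

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

criteria = (
    "Include randomized controlled trials in adults with primary open-angle glaucoma "
    "that report intraocular pressure as an outcome; exclude case reports and reviews."
)

record = {
    "title": "Example title of a candidate study",
    "abstract": "Example abstract text of the candidate study.",
}

prompt = (
    f"Inclusion criteria: {criteria}\n\n"
    f"Title: {record['title']}\n"
    f"Abstract: {record['abstract']}\n\n"
    "Reply with 'INCLUDE' or 'EXCLUDE', followed by a one-sentence reason."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # keep screening decisions as deterministic as possible
)

print(response.choices[0].message.content)

In practice such a call would be wrapped in a loop over all retrieved records, with the model’s decisions logged alongside human adjudication; the sketch is meant only to show where the LLM fits in the workflow, not to replace duplicate human screening.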
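In the same spirit, structure-based pipelines such as the CDK20 study cited above [5] begin from an AlphaFold-predicted model of the target protein. The short sketch below retrieves such a prediction from the public AlphaFold Protein Structure Database; the UniProt accession for human CDK20 and the file-naming pattern are assumptions to be verified against the current database entry, and the downstream target-identification and generative-chemistry steps in the cited work rely on proprietary platforms that are not reproduced here.

import urllib.request

# Human CDK20 (assumed UniProt accession; verify against the current database entry)
uniprot_id = "Q8IZL9"

# AlphaFold DB download URL pattern (assumed; the model version may change over time)
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

urllib.request.urlretrieve(url, f"AF-{uniprot_id}.pdb")
print(f"Saved predicted structure to AF-{uniprot_id}.pdb")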
Image generation and processing capabilities of AI models have grown tremendously and remain on a steep upward trajectory. In the domain of ophthalmology, this revolution in “visual AI” is already bearing fruit. For instance, in our recent study titled “Evaluating the strengths and limitations of multimodal ChatGPT-4 in detecting glaucoma using fundus images”, we demonstrated that a multimodal LLM, ChatGPT‑4, was able to classify fundus photographs from the publicly available REFUGE dataset as “Likely Glaucomatous” or “Likely Non-Glaucomatous” with good overall accuracy [6]. A simple scripting example of this kind of multimodal prompting appears at the end of this section.

While emerging AI technologies certainly carry risks, their benefits are becoming increasingly evident with each new generation of generative models. The ability of modern AI systems to produce high-quality, medical-grade images has expanded rapidly, creating new opportunities across diagnostics, medical education, and research. To illustrate the capabilities of these models, we used Google’s Nano Banana Pro model [7] to transform a simple hand-drawn sketch of an eye into a textbook-quality medical illustration suitable for professional use (Figure 1). This example underscores how quickly AI is evolving and highlights the transformative potential these tools hold for the future of scientific communication and clinical practice.
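As a simple illustration of the multimodal prompting used in studies such as the REFUGE evaluation above [6], the sketch below sends one fundus photograph to a vision-capable model and requests the same two-label output. It again assumes the OpenAI Python SDK; the model name, prompt wording, and file path are illustrative and do not reproduce the study’s protocol.

import base64
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Encode a local fundus photograph (illustrative path) as base64 for the API
with open("fundus_example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder multimodal model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Classify this fundus photograph as 'Likely Glaucomatous' or "
                     "'Likely Non-Glaucomatous' and give a one-sentence rationale."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    temperature=0,
)

print(response.choices[0].message.content)

Any output from such a prompt is, of course, a research demonstration rather than a clinical diagnosis, and performance must be validated against expert grading, as was done in the cited study.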

Figures

Figure 1. Demonstration of how generative artificial intelligence can transform hand drawings (A) into high-quality textbook-grade figures (B).

References

1. ExplanAItions—Researchers and AI. (n.d.). Wiley. Retrieved November 26, 2025, from https://www.wiley.com/en-us/about-us/ai-resources/ai-study/for-researchers/
2. Generative AI policies for journals. (n.d.). Elsevier. Retrieved November 26, 2025, from https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
3. Musleh, A., Alwisi, N., Serhan, H. A., Toubasi, A., Malkawi, L., & Alryalat, S. A. (2025). Artificial Intelligence Powered Research Automation (AIPRA) versus human expert: A two-arm ophthalmology comparative study. medRxiv. https://doi.org/10.1101/2025.10.27.25338904
4. Michelson, M., & Reuter, K. (2019). The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials. Contemporary Clinical Trials Communications, 16, 100443. https://doi.org/10.1016/j.conctc.2019.100443
5. Ren, F., Ding, X., Zheng, M., Korzinkin, M., Cai, X., Zhu, W., Mantsyzov, A., Aliper, A., Aladinskiy, V., Cao, Z., Kong, S., Long, X., Liu, B. H. M., Liu, Y., Naumov, V., Shneyderman, A., Ozerov, I. V., Wang, J., Pun, F. W., … Zhavoronkov, A. (2023). AlphaFold accelerates artificial intelligence powered drug discovery: Efficient discovery of a novel CDK20 small molecule inhibitor. Chemical Science. https://doi.org/10.1039/D2SC05709C
6. AlRyalat, S. A., Musleh, A. M., & Kahook, M. Y. (2024). Evaluating the strengths and limitations of multimodal ChatGPT-4 in detecting glaucoma using fundus images. Frontiers in Ophthalmology, 4. https://doi.org/10.3389/fopht.2024.1387190
7. Introducing Nano Banana Pro. (2025, November 20). Google. https://blog.google/technology/ai/nano-banana-pro/