

PLOS BLOGS EveryONE

Editorial Spotlight: Issa Ali Atoum

This interview and blog post were prepared by PLOS ONE Associate Editor Daniel Parkes.

Issa Atoum is an Associate Professor in the Faculty of Information Technology at Philadelphia University (Jordan) and an Academic Editor for PLOS ONE. He also serves the community through program and technical committee membership for international conferences and as a reviewer for multiple peer-reviewed journals in software engineering and applied computing. His research spans software engineering, machine learning, and natural language processing, with an emphasis on rigorous evaluation, reproducible research artifacts, and practice-aware impact. He also brings industry experience in software development and quality assurance, supported by professional certifications including PMP, ITIL, and Scrum.

In this post, he discusses his thorough assessment of manuscripts, his insights on AI tools in research and editorial work, and his experience beyond academia.

I see AI as a tool rather than a target. The goal is not to use AI simply because it is popular, but to use it responsibly to improve methods, strengthen evidence, and help solve problems that are difficult to tackle manually at scale.


As a PLOS ONE Academic Editor, you have provided thorough assessments of reviewer reports and meticulous analyses of authors’ submissions, including the supplementary information. What is your approach? Do you have any advice for others?

I begin with the parts that often shape the entire assessment. Before looking at anything else, I read the cover letter to understand the authors’ intent and claims, then review any journal checks, as they can highlight issues that require extra care. From there, I assess the manuscript against the PLOS ONE criteria, with a clear emphasis on research integrity, methodological soundness, and the extent to which the evidence supports the conclusions.

In practical terms, I read the manuscript and supplementary materials together. In many submissions, the key methodological details and datasets sit in the supplementary files, so I treat them as essential. At the initial assessment stage, I set aside dedicated, uninterrupted time to review the entire submission before making any recommendations. I prioritize issues that affect reliability, such as data leakage, missing baselines, unclear evaluation protocols, or inconsistencies between the manuscript and shared artifacts.
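One of the reliability issues mentioned above, data leakage, can be illustrated with a minimal, hypothetical sketch (plain Python; the dataset, seed, and split are invented for illustration and do not come from any real submission): fitting preprocessing statistics on the full dataset before the train/test split lets test-set information influence training.

```python
import random

# Hypothetical toy dataset; no real submission is implied.
random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(100)]
train, test = data[:80], data[80:]

# Leaky pipeline: the normalization statistic is computed on the FULL
# dataset, so information from the held-out test split reaches training.
leaky_mean = sum(data) / len(data)

# Sound pipeline: the statistic is fitted on the training split only.
train_mean = sum(train) / len(train)

leaky_train = [x - leaky_mean for x in train]
sound_train = [x - train_mean for x in train]

# Whenever the test split shifts the overall mean, the two pipelines
# feed the model different training inputs -- the signature of leakage.
print(f"leak offset per sample: {abs(leaky_mean - train_mean):.6f}")
```

In a real review the check is the same in spirit: confirm that every fitted preprocessing step (scaling, feature selection, imputation) sees only the training split.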

As a professor and researcher, I often recognize when shared code appears AI-assisted or is provided as a skeleton rather than a working pipeline. Where artifacts are available, I replicate the workflow on my own machine to confirm that the underlying data support the results. The PLOS Editorial Board Services team recently shared positive feedback from an Associate Editor, noting that my decision letter identified clear errors in the submitted code—errors that suggested the underlying data did not support the reported results. Identifying such issues early helps protect research integrity.

My advice to other editors is to keep decision letters structured and specific. I group feedback into what must change to move forward (validity), what is needed for reproducibility, and what is recommended for clarity. This approach reduces unnecessary back-and-forth and supports a fair process for authors and reviewers.


Artificial intelligence is a major topic of discussion today. How does it relate to your work, and what do you see as its most exciting opportunities, as well as its biggest risks?

AI is closely connected to my research in software engineering and natural language processing, especially in large-scale, evidence-driven analysis. I see AI as a tool rather than a target. The goal is not to use AI simply because it is popular, but to use it responsibly to improve methods, strengthen evidence, and help solve problems that are difficult to tackle manually at scale.

From an editorial perspective, I follow the journal’s approved tools and processes. I do not share manuscript content with external AI tools unless there is explicit permission and an approved workflow, because editors must protect confidentiality and avoid creating integrity concerns. I also pay attention to provenance signals in submissions; if images lack clear sources or if AI assistance is used without disclosure, that is a concern.

The most exciting opportunities are practical and research-enabling. AI can help researchers handle complex datasets and improve documentation quality while maintaining rigor. However, the risks are real. Common pitfalls include hidden leakage, unclear preprocessing, unreported prompt sensitivity, and claims that cannot be independently verified. Used responsibly, AI can support stronger science. Used carelessly, it can create uncertainty that harms authors and readers alike.


You have a lot of experience beyond academia. How does this inform your research, and do you feel it helps with your work as a PLOS ONE Academic Editor?

My experience beyond academia has shaped how I think about rigor and impact. I have experience in software development and quality assurance, and I also hold formal training in project and service management, with certifications including PMP, ITIL, and Scrum. This background has taught me the value of traceability, precise requirements, and reliable delivery. Those principles translate directly into how I assess research.

In editorial work, this perspective helps me evaluate submissions not only for novelty but also for robustness and reproducibility. I look for methods that are clearly documented, results that are supported by evidence, and artifacts that can be reused and verified. I also understand the challenges of bridging research and practice, so I aim to provide precise, constructive feedback with clear guidance on what is needed to meet the journal’s criteria.

Beyond PLOS ONE, I contribute through reviewer service for multiple peer-reviewed journals and through program committee roles for international conferences. This broader service reinforces the importance of consistent standards and respectful communication. For me, editorial work is a form of stewardship; it strengthens the scientific record and supports research that is reliable and can be built upon.


Disclaimer: Views expressed by contributors are solely those of individual contributors, and not necessarily those of PLOS.

The Editor Spotlight series features engaged and dedicated PLOS ONE Editorial Board members who facilitate excellent peer review. If you’d like to be considered for the series, please fill out the interest form.
