Generative AI Policy
Guidelines for the Ethical Use of AI in Research and Writing
AUTHORSHIP AND ACCOUNTABILITY
The Journal adheres to strict ethical standards regarding the use of Artificial Intelligence (AI) and AI-assisted technologies. In alignment with international editorial practice:
- Non-Authorship: Generative AI tools (such as ChatGPT, Claude, or Gemini) must not be listed as an author or co-author, as AI lacks the legal and ethical capacity to take responsibility for the research.
- Full Accountability: Human authors remain fully responsible for the integrity of their manuscript, including the accuracy of data, the validity of citations, and the absence of plagiarism.
Usage Limits & Guidelines
Permissible Threshold: To preserve the authenticity and originality of the researcher's contribution, the Journal permits a maximum of 5% AI-generated content. This allowance is strictly limited to structural refinement, grammar correction, and language polishing.
Key Restrictions:
- Manuscripts exceeding the 5% AI-generated content threshold during screening will be returned for revision or rejected.
- AI must not be used to generate primary data, research findings, or conclusive arguments.
- AI-generated images or figures are strictly prohibited unless the research specifically investigates AI technology.
Disclosure Requirements
Transparency is essential. If AI-assisted tools were used (even within the 5% limit), authors must:
- Include a formal "Statement on Generative AI Use" before the References list.
- Clearly specify the tool used and the specific sections that were assisted by AI.
- Confirm that all AI-assisted content has been verified for accuracy by the human authors.