OLH Artificial Intelligence (AI) Policy.

Last revised December 2025.

The use of generative AI in scholarly publishing has evolved rapidly, coinciding with the development of better-trained AI models and other technological advances. AI technologies have become ubiquitous, and in many cases are seamlessly integrated into websites, search engines and software applications. The following guidance is provided to assist authors and editors with the acceptable and responsible use of this technology within the publishing workflow.

While the Open Library of Humanities (OLH) does not insist that its journals publish an 'AI Declaration Statement' with every article, authors planning to submit research to an OLH journal must consult these guidelines first. Authors must always declare any substantial and known uses of generative AI to the editors of the journal they are submitting to, for the editors' consideration.

The OLH aligns with the Committee on Publication Ethics' (COPE) stance that AI has no legal claim to the authorship of research, as it cannot be held accountable for or take responsibility for work, assert competing interests, or consent to licensing agreements. AI cannot be listed as an author or co-author of scholarly research for these reasons. As outlined by the European Commission's 'Living Guidelines on the responsible use of generative AI in research', researchers are accountable for the integrity of all AI-assisted content. The OLH therefore requires authors to take responsibility for their submitted research, including all substantial and intentional AI outputs.

The OLH's stance on the use of AI in scholarly research takes into consideration COPE's position statement on 'Authorship and AI tools', and their discussion document on 'Artificial intelligence in decision-making'.

Acceptable and responsible uses of generative AI.

Generative AI uses models trained by machine learning on datasets, together with other algorithms, to perform specific tasks for the user. This includes tools that summarise content from an article or from various sources, AI assistants that analyse and suggest changes to the grammar and phrasing of writing, and tools that assist with changes to the formatting and presentation of content. It also includes software applications that can create entire works, such as full-length articles or complete images, via trained machine-learning models and algorithms. In some cases, this is done with very little human input (such as a brief prompt alone).

Generative AI can be implemented as part of a standalone application that is very clear about its use case. However, it can also be seamlessly embedded in computer software to the extent that the user may not realise they are being assisted by a generative AI tool.

The OLH believes that, while it is not feasible for authors to know and declare every instance of generative AI use prior to the submission of their research, authors must be aware of generative AI and use it responsibly. At a minimum, all research should be thoroughly checked by its authors prior to submission to an OLH journal, and the authors must take sole responsibility for any outputs, including errors or inconsistencies that arise from generative AI use, such as errors that stem from the use of large language models (LLMs).

The OLH's position on the acceptable and responsible use of generative AI is as follows:

  1. Authors are increasingly using generative AI tools to assist with general parts of editing research, such as suggestions for more concise or grammatically correct phrasing. The OLH recognises that such tools can be helpful, and in some cases necessary, for enabling greater accessibility for users who require it (for example, the use of generative AI for the dictation and transcription of text by those with disabilities). However, the author's main argument or methodology must not have been generated wholesale by AI.
    A note on translation: authors are increasingly using LLMs and other generative AI tools to translate their articles into another language for submission to a journal. All submissions must be subject to the same degree of editorial scrutiny, and it should be self-evident whether a translated submission meets the written and intellectual standard required for the journal, regardless of the method of translation, including generative AI-assisted translation.
  2. Care should be taken when using an assistive generative AI tool to help with formatting or checking the references and citation information contained within an article. The outputs of using such tools must be thoroughly screened and verified by the author. The author must have consulted all the works being referenced and cited within their article, and as with any research, the cited and referenced works must be relevant to the author's research.
  3. When generative AI is used as part of research methods, such as for analysing data, supplementing partial datasets, running controlled simulations, or presenting data, substantial and known uses for these purposes are acceptable, but they must be adequately declared by the author(s) (see the 'AI Declaration Statement' section below, in particular 'Complex AI Declaration Statements'). All generative AI outputs that are a product of research methods and that involve the production or handling of datasets must be thoroughly vetted by the author(s) before their work is submitted to a journal.
  4. Generative AI that is used for demonstrative purposes as a quotation of its output(s) within an article (for example, the output of a work created by a generative AI model is being discussed or analysed by an author, and has relevance to the author's own research) is an acceptable use. When generative AI is used for demonstrative purposes the AI model, including its version number and any other important identifying information, must be clearly cited with its output(s).
  5. Although the OLH discourages this practice, we recognise that some editors may wish to use assistive generative AI tools to aid their editorial decision-making (for example, summarising an author's research or peer review reports for preliminary screening). The OLH believes that editors should give each submission to their journal their own due scrutiny and should ideally not use generative AI tools for this purpose where possible. However, the OLH is also aware that editors may require assistive generative AI tools for accessibility purposes, as in points 1 and 2 above. Editors using generative AI for this purpose must check that any assistive generative AI tool outputs are correct before using them to supplement editorial decisions. They must not rely solely on a generative AI tool's output to make and issue an editorial decision on an author's work. Please see COPE's 'Artificial intelligence in decision making' document, which gives further guidance on appropriate generative AI use for editors.
  6. The OLH asks peer reviewers not to use generative AI tools to bypass their own assessment of an article, such as for content analysis when producing peer review reports on scholarly research. For example, using generative AI LLMs to produce an analytical summary of an author's work without having read the research in full is not permitted. The OLH believes that a reviewer's own expertise is paramount in providing robust, helpful peer review for authors and editors, which extends to the thorough reading and critical analysis of the entirety of an author's submission. However, the OLH recognises that reviewers may use assistive generative AI tools in the same manner an author or editor might (as in points 1 and 2 above, for suggestions on concise phrasing or grammar, or for the formatting and structure of a report). All generative AI tool outputs used for this purpose must be thoroughly checked by the peer reviewer before a review report and recommendation are submitted.

Due to the instability of generative AI models and their outputs, in all cases the OLH advises that substantial and known uses of generative AI should be specific, undertaken with caution, and used as sparingly as possible at this juncture. For every known instance of generative AI use, human oversight of generated outputs is required prior to the submission of research.

OLH journal editorial teams should read the 'Guidance for Editors' section below for further advice on generative AI in journal editorial workflows.

For more information on where and how to declare a known and substantial use of generative AI, see the 'AI Declaration Statement' section below.

Unacceptable uses of generative AI.

The OLH recognises that while generative AI has the capability to assist authors with parts of their research, there are clear and unacceptable uses of generative AI that constitute academic malpractice and must be avoided.

The OLH does not permit substantial, major uses of generative AI to be passed off as the author's own research, nor does it permit any use of generative AI by the author that is wilfully fraudulent, with the intention of committing academic malpractice. By this, the OLH means the following:

  • A prompt or series of prompts used in a piece of generative AI software, such as an LLM, that results in the creation of full research arguments and their subsequent analyses which the author then presents as their own;
  • The fabrication of the majority of an article's text and/or other content using generative AI, such as using an LLM, as opposed to the refinement of, or assistive, general editing of the author's own writing;
  • The fabrication of false datasets using generative AI that are not clearly marked as synthetic and are not transparently explained, with the aim of supporting or justifying the author's argument. Such use falls outside a declared research method or the use of quoted generative AI outputs for purely demonstrative purposes;
  • The fabrication of references or citations (in particular, those that are wholly invented, incorrect, or refer to non-existent works) used to attempt to justify or supplement the author's research.

Should there be any concerns, during the publishing process or following publication, that an author has used generative AI in any of the ways described above, an investigation will be conducted by the journal's editorial team in line with the publisher's policy on complaints and appeals.

Editors must not use generative AI to fabricate an editorial report for authors about their submission. If an author has concerns that an editor is using AI to generate an editorial report on their article, they should raise this with the publisher in the first instance.

Reviewers must not use generative AI to fabricate a peer review report about an author's submission. Should a reviewer be found to be using generative AI to fabricate a peer review report, the journal editor(s) should disregard the reviewer entirely and not use their review in the editorial decision-making process for the article.

AI Declaration Statement.

The OLH requires the declaration of specific, substantial and known uses of generative AI in all research submitted by authors to an OLH journal.

The 'specific, substantial and known uses of generative AI' that the OLH requires to be declared in an 'AI Declaration Statement' are:

  • The use of generative AI that forms the basis of a specific research purpose. For example, this might mean the use of a trained generative AI model to recognise patterns across many works that would otherwise be impractical or almost impossible for a person to do without AI assistance;
  • The use of generative AI that forms the basis of a controlled simulation that produces outputs, such as synthetic data to supplement partial datasets, which informs the article's argument(s). For example, this may be the production of outputs as a result of entering precise, sophisticated prompts and datasets into a generative AI model that would not be possible to compute without the use of that model. This use must be fully and transparently outlined in the article's research methods and requires a more complex declaration of AI use (see 'Complex AI Declaration Statements' below);
  • The use of generative AI for specific demonstrative purposes, with generated output(s) quoted and cited clearly with the prompts used, the model version and the URL of the service as part of its citation or caption.

If the above uses of generative AI are present in an author's research, they should be clearly declared by the author in the interests of transparency and research integrity (please see the below sections on where and how to make a declaration).

The OLH does not require minor, non-substantial uses of generative AI, as outlined in the 'Acceptable and Responsible Uses of Generative AI' section above, to be declared in a formal 'AI Declaration Statement'. While transparency about specific, known and substantial uses of generative AI is important for journal editors, declaring every non-substantial use, including assistive tools used for accessibility purposes in particular, is not feasible or practical in all cases and could be a discriminating factor in publicly available declarations.

Before an article may progress through the scholarly workflow, editors must use their discretion in reviewing any declaration statements alongside the guidelines in this policy to decide whether the declared usage is deemed acceptable.

Where to make an AI Declaration Statement.

If your research meets the requirements for an 'AI Declaration Statement' to be made, the OLH requests that it is made in the following places:

  • Under the clearly defined heading 'AI Declaration Statement' on your submitted research manuscript (e.g. Word document, PDF); and
  • Entered into a field provided as part of the electronic, online submission process for the journal (via the Janeway platform); and
  • If any specific output(s) of generative AI have been quoted and cited within the body text of your work, as much information as possible should be given about the use of generative AI to accompany the quoted output.

How to make an AI Declaration Statement.

If you need to make an 'AI Declaration Statement', an example format is:

'I acknowledge the use of [generative AI model, its version number, and any URL of service/model] on [date] for the purpose of [insert use case]. I used the following prompts: [insert prompts]. The output(s) from these prompts were used to [explanation of use]. I take responsibility for the content of all AI generated outputs used in my research.'

Here are some simple and isolated examples of how this might look in practice:

'I acknowledge the use of the multimodal large language model OpenAI GPT-4o via ChatGPT (https://chatgpt.com) on 20 January 2025 for the purpose of answering a complex query. I used the following prompt: 'across the complete works of Charles Dickens how many mentions of poverty are made and how many words into each text before such a mention is made'. The output from this prompt was used with my own analysis to show how large language models have the capacity to quickly answer very specific queries that it would otherwise take a long time for a researcher to answer. I take responsibility for the content of all AI generated outputs used in my research.'
'I acknowledge the use of the generative AI music model Suno v4 (https://suno.com/) on 18 February 2025 for the purpose of creating a piece of music that I analyse in my research. I used the following prompts: 'instrumental'; 'jazz'. The output from these prompts has been used in my submission to demonstrate the advances in generative music AI model outputs as a growing threat to the creativity of jazz musicians. I take full responsibility for the content of all AI generated outputs used in my research.'

The use of generative AI for demonstrative purposes should also be made specifically clear at the point in the article where the AI outputs are presented. A caption should accompany the output(s) and include the model of generative AI used, the date of the creation of the output(s), and any input prompts used. If the output was generated by the author, the caption should also be concluded with the statement: 'The author accepts full responsibility for this AI generated content'. Output(s) generated by others should be cited in the usual manner.

Complex AI Declaration Statements.

The OLH appreciates that in some cases a declaration of generative AI use will intersect with a declaration of research methods, especially where generative AI models are used to supplement partial datasets or to produce simulations. Such uses of generative AI models have already started to emerge in published research in the Humanities, and this is likely an area of generative AI use that will continue to grow. See, for example, an article that uses generative AI to supplement partial datasets ('SIMMI: Synthetic Images for Medieval Musical Iconography', 2024, Picascia et al.); and an article that uses machine learning as part of a simulation ('On the transmission of texts: written cultures as complex systems', 2025, Camps, Randon-Furling and Godreau).

In such cases, the OLH asks that the 'Research Methods' section (or similar) and the 'AI Declaration Statement' are in agreement, so that the use of generative AI in the article's research methodology is clearly and transparently declared and explained. There must be a stated acknowledgement of the use of generative AI. Complex AI declarations that intersect with or underpin an article's research methods should provide as much detail as possible on how the generative AI model's outputs form a consistent and integral part of the article's datasets, conclusions, etc. Any open code used must be adequately cited and referenced, and the details of the human oversight of these research methods involving generative AI must be fully disclosed.

If your AI declaration falls into this category, you must indicate to the journal's editor, via the online submission of your research, that your manuscript contains the appropriate declarations and explanations of generative AI use, and where these may be found in the article. In all cases, authors must state that they take full responsibility for all generative AI outputs used in their research.

Guidance for Editors.

The OLH appreciates that journal editors have many concerns about the impact of the rise of generative AI on academic research submissions to their journals. The OLH fully condemns research malpractice as a result of generative AI use, and any concerns that an editor has with a submission should be fully investigated by editorial teams and raised with the publisher where necessary.

Below are some additional points that editors may find useful, particularly concerning the use of generative AI and potential research malpractice.

  • Editors must not use generative AI to make editorial decisions for them. This is because generative AI requires the careful human oversight of its outputs and does not have the legal agency to make decisions alone.
  • Try not to place undue reliance on any one assistive generative AI tool, such as tortured-phrase detectors, to help detect generative AI-fabricated research. AI models are continually learning and adapting to avoid what used to be considered telltale signs of AI-written text (for example, the overuse of em dashes). Such signs are no longer accurate markers of generative AI writing.
  • Make use of iThenticate (which OLH journals have access to for submitted articles) to perform a similarity check on articles. The resulting report can be useful in helping editors to spot outright plagiarism but also to trace incorrect references and citations, and for flagging the rephrasing of material from other sources that has not been sufficiently credited. In general, any article presenting a similarity score of 20% and above merits further, more granular examination by editors.
  • While we acknowledge there is frustration around the overuse of generative AI in the writing process for academic research, we encourage editors to bear in mind that some authors have disabilities that legitimately require them to make greater use of assistive generative AI in the writing process than other authors. We encourage an unbiased and sensitive approach to the consideration of manuscripts in all cases.
  • As with any poorly captioned figures or datasets presented by an author as part of an article, editors may ask authors to explain how a result or set of data has been reached or generated, as a matter of good practice at the review stage. Editors should remind authors that any substantial and known use of generative AI requires a declaration statement, which includes uses relating to datasets. Editors ultimately decide whether the explanations given by authors are sufficient.
  • While the OLH discourages the use of generative AI image creators to produce artwork for issue covers, article banner images or article thumbnails for OLH journals, we recognise that some journals may already do this. The OLH believes that it is far more ethical to use and appropriately credit an image that is CC BY licensed (see OLH's 'Image Permissions and Reproduction Guidance' for further information). However, any image generated by AI and used for this purpose should be captioned to state that it has been generated by AI. This may be stated in an issue's main description metadata, or in an article's 'Abstract' field metadata.

The OLH licenses this policy under the CC BY 4.0 attribution license. This means that the content of this policy may be freely used by others in accordance with the terms of that license.