Course Policies for Using AI

Overview

One of the difficult issues we face this fall in higher education is determining what policies to use for AI in our classrooms.  The use of generative AI can effectively break some of the ways we evaluate our students.  If, for example, students are graded on creative writing, those who use generative AI could gain an unfair advantage over those who do not.

To understand how this issue is being addressed elsewhere, I looked at the recently updated publication policies of several scientific journals and societies: Science, Nature, Cambridge University Press, Elsevier, IEEE, the American Chemical Society, the American Astronomical Society, the Association for Computing Machinery, and the American Physical Society.

The guidelines were instructive.  They ranged from very restrictive to open to AI use as long as the contribution was acknowledged in the paper.  No publisher allowed AI to be listed as a co-author, and all of them required the human authors to be responsible for the content being presented.

Based on these guidelines, and with the help of ChatGPT 4.0, I’ve put together a few template policies that might be suitable for higher-education classrooms.  These guidelines don’t address the ways we might change our courses, but they do make clear to students what is considered ethical within a given course.  The policies range from prohibiting AI entirely, like the Science publication standard, to being open to its use within some guidelines.  There can’t be a single standard, but I hope this will help you think about how you want to handle AI in your classes.

You are free to adapt and adopt these as needed for your classes. 

I’ve also included an example of how students might use generative AI to create study materials for their classes.

Using AI to Create Study Materials

Using the transcript of a lecture recorded with Panopto last year, I created useful material for students, including:

  • A lecture outline
  • A three-paragraph lecture summary
  • Sample lecture questions using multiple choice and essay formats
  • A vocabulary list
  • Sample data tables
  • A list of images and figures to study
  • A timeline of events discussed in the lecture
  • A list of common misconceptions

This material could be generated by either faculty or students using a generative AI with a sufficiently large context window.  (I used Claude 2 so I could load the entire transcript into the system.)  Please note: you need to review the material before you use or distribute it.  Some of the questions had multiple right answers, and the explanations were occasionally misguided.


Template Policies

Policy 1 – Use of AI is Prohibited

The use of AI-generated content, including text, images, code, figures, and any other material, is strictly prohibited for any work submitted in this class.  This includes using such content for homework, papers, code, or other creative works.  The restriction encompasses both the creation and the revision of work by AI.  Violation of this policy will be considered academic misconduct and will be dealt with accordingly.  The use of basic word-processing AI tools, including grammar and spelling checkers, is permitted and need not be disclosed in this class.


Policy 2 – Use of AI is Permitted with Explicit Disclosure

The use of AI-generated content, including text, images, code, figures, and other materials, is allowed in this class unless otherwise noted in a specific assignment.  However, any use of this content must be explicitly disclosed in all academic work.  You may use AI tools to aid content generation and revision within these guidelines.  All work must comply with MTSU’s policy on academic honesty, and students must ensure the originality of their own work.  The use of basic word-processing AI tools, including grammar and spelling checkers, need not be disclosed in this class.


Policy 3 – Controlled Use

The controlled use of AI-generated content is permitted in this class provided that it follows MTSU’s policy on academic honesty and the guidelines on research integrity.  Generative AI will not be considered an author but rather a tool that assists students in their work.  Students bear ultimate responsibility for the originality, integrity, and accuracy of their work for this course.  All use of generative AI must be declared and explained and must not violate the plagiarism policies of the campus or this course.  The use of basic word-processing AI tools, including grammar and spelling checkers, need not be disclosed.


Policy 4 – Go for it!

Because we recognize its potential for enhancing the educational process, the use of AI-generated content is welcome in this class.  However, the use of AI tools must be acknowledged, just like the use of any other software package.  (Note: because of their widespread use, AI systems for grammar and spelling checks need not be acknowledged.)  Because generative AI can copy work without citation, students remain responsible for ensuring the originality, integrity, and accuracy of their work.  Violations of academic honesty standards, including plagiarism, are prohibited under the MTSU academic honesty policy.


AI Authorship on Scientific Papers – August 3, 2023, A Snapshot

This is a compilation of the guidelines being given to authors regarding the use of AI-written text.  The policies vary from simple disclosure in a cover letter to a complete ban in Science journals.  This document is not meant to be complete; it quotes elements of the new AI policies I was able to find online, and these policies may change.

From Science:

Artificial intelligence (AI). Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct.

https://www.science.org/content/page/science-journals-editorial-policies#image-and-text-integrity

From Elsevier:

Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-) author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and the work does not infringe third party rights.

Elsevier will monitor developments around generative AI and AI-assisted technologies and will adjust or refine this policy should it be appropriate. More information about our authorship policy can be viewed here: https://www.elsevier.com/about/policies/publishing-ethics.

https://www.elsevier.com/about/policies/publishing-ethics/the-use-of-ai-and-ai-assisted-writing-technologies-in-scientific-writing

From Cambridge University Press:

AI Contributions to Research Content

AI use must be declared and clearly explained in publications such as research papers, just as we expect scholars to do with other software, tools, and methodologies.

AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge.

Authors are accountable for the accuracy, integrity, and originality of their research papers, including for any use of AI.

Any use of AI must not breach Cambridge’s plagiarism policy. Scholarly works must be the author’s own, and not present others’ ideas, data, words or other material without adequate citation and transparent referencing.

Please note, individual journals may have more specific requirements or guidelines for upholding this policy.

https://www.cambridge.org/core/services/authors/publishing-ethics/research-publishing-ethics-guidelines-for-journals/authorship-and-contributorship#ai-contributions-to-research-content

From IEEE:

Guidelines for Artificial Intelligence (AI)-Generated Text

The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text.

https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/submission-and-peer-review-policies/

From ACM:

Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work (including text, tables, graphs, code, data, citations, etc.). If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.

Basic word processing systems that recommend and insert replacement text, perform spelling or grammar checks and corrections, or systems that do language translations are to be considered exceptions to this disclosure requirement and are generally permitted and need not be disclosed in the Work. As the line between Generative AI tools and basic word processing systems like MS-Word or Grammarly becomes blurred, this Policy will be updated.

https://www.acm.org/publications/policies/new-acm-policy-on-authorship

From the American Chemical Society:

Science publishing is not an exception to the trend of growing use of artificial intelligence and large language models like ChatGPT. The use of AI tools is not a negative thing per se, but like all aspects of publishing research, transparency and accountability regarding their use are critical for maintaining the integrity of the scholarly record. It is impossible to predict how AI will develop in the coming years, but there is still value in establishing some basic principles for its use in preprints.

After consultation with ChemRxiv’s Scientific Advisory Board, ChemRxiv has made the two following adjustments to its selection criteria to cover the use of AI by our authors:

AI tools cannot be listed as an author, as they do not possess the ability to fundamentally review the final draft, give approval for its submission, or take accountability for its content. All co-authors of the text, however, will be accountable for the final content and should carefully check for any errors introduced through the use of an AI tool.

The use of AI tools, including the name of the tool and how it was used, should be divulged in the text of the preprint. This note could be in the Materials and Methods, a statement at the end of the manuscript, or another location that works best for the format of the preprint.

Some authors have already used AI language tools to help polish or draft the text of their work, and others have studied their effectiveness in handling chemistry concepts. See some recent preprints related to ChatGPT here.

ChemRxiv authors are welcome to use such tools ethically and responsibly in accordance with our policy. If you have any questions about the use of AI tools in preparing your preprint, please view our Policies page and the author FAQs or contact our team at curator@chemrxiv.org.

https://axial.acs.org/publishing/new-chemrxiv-policy-on-the-use-of-ai-tools

From the American Astronomical Society:

With this in mind, we offer two editorial guidelines for the use of chatbots in preparing manuscripts for submission to one of the journals of the AAS. First, these programs are not, in any sense, authors of the manuscript. They cannot explain their reasoning or be held accountable for the contents of the manuscript. They are a tool. Responsibility for the accuracy (or otherwise) of the submission remains with the (human) author or authors. Second, since their use can affect the contents of a manuscript more profoundly than, for example, the use of Microsoft Word or even the more sophisticated Grammarly, we expect authors to acknowledge their use and cite them as they would any other significant piece of software. Citing commercial software in the same style as scholarly citations may present difficulties. We urge authors to use whatever sources are most useful to readers, i.e., as detailed a description of the software as possible and/or a link to the software itself. Although these programs will surely evolve substantially in the near future, we think these guidelines should cover their use for years to come.

https://aas.org/posts/news/2023/03/use-chatbots-writing-scientific-manuscripts

From the American Physical Society Physical Review Journals:

Appropriate Use of AI-Based Writing Tools

Large Language Models, such as ChatGPT, are rapidly evolving, and the Physical Review Journals continue to observe their uses in creating and modifying text.

Authors and Referees may use ChatGPT and similar AI-based writing tools exclusively to polish, condense, or otherwise lightly edit their writing. As always, authors must take full responsibility for the contents of their manuscripts; similarly, referees must take full responsibility for the contents of their reports.

An AI-based writing tool does not meet the criteria for authorship because it is neither accountable nor can it take responsibility for a research paper’s contents. A writing tool should, therefore, not be listed as an author but could be listed in the Acknowledgments.

Authors should disclose the use of AI tools to editors in their Cover Letter and (if desired) within the paper itself. Referees should disclose the use of AI tools to editors when submitting a report. These disclosures will help editors understand how researchers use the tools in preparing manuscripts or other aspects of the peer review process.

To protect the confidentiality of peer-reviewed materials, referees should not upload the contents of submitted manuscripts into external AI-assistance tools.

https://journals.aps.org/authors/ai-based-writing-tools


Dr. John Wallin is the Director of the Computational and Data Science Ph.D. Program at Middle Tennessee State University.