Each instructor is responsible for setting their own policies around student use of generative AI. This page provides resources and examples for developing and communicating a course policy that reflects the course learning goals and clarifies acceptable and unacceptable use.

Considerations for Course Policies

  • Center your policy on learning. The AI uses you allow or prohibit should reflect the kinds of thinking students need to do for themselves to achieve learning goals.
  • Communicate your policy early, often, and with explanation. Stating your policy in your syllabus as well as addressing expectations when introducing specific assignments will help students understand what they can and cannot do and why.
  • Have a plan to address suspected policy violations. In the event of suspected misconduct, know what options and resources are available.

Creating a Course Policy on AI

Penn does not have a standardized policy on student AI use, and each instructor is responsible for setting their own policies. However, your department or program may have recommendations or resources for developing your course policy in addition to those provided here.

Under Penn’s Code of Academic Integrity, students may not use unauthorized assistance in their academic work. It is up to you to decide what that means and communicate that definition to students. 

While some instructors may consider any significant use of AI-generated work to be an academic integrity violation, others may allow students to use AI for a particular assignment or in specific parts of an assignment.

A statement about AI can be added to the academic integrity policy already in your syllabus and on Canvas. Some examples include:

  • You may not use generative AI for your work in this class. The work you produce should be your own, free from any AI assistance. Submitting work that contains AI-generated content will be considered a violation of Penn’s Code of Academic Integrity, and suspected use will be referred to the Center for Community Standards & Accountability (CSA).
  • It is plagiarism to submit work produced by generative AI as your own without attribution. Any use of AI must be in alignment with assignment guidelines, and all AI-generated contributions should be properly cited like any other reference material. Note that AI will not be considered a “scholarly source” for assignments requiring specific types of references.
  • You are welcome to use generative AI for any purpose in this class, provided that you cite it properly. However, note that all AI tools are prone to errors, including falsifying sources, and may even generate offensive content. You will be responsible for the accuracy and quality of the work you submit, regardless of whether it was generated by you or an AI tool. The university’s policy on plagiarism still applies to uncited or improperly cited work, whether from an AI or another human being.
  • Additional sample policies from Penn faculty.

If students are allowed to use AI in your course, you may want to define how they should acknowledge AI contributions. Students could embed screenshots of AI output, submit transcripts of AI interactions, or share the prompts they entered. 

If AI is referenced as a source in a bibliography, several common style guides have released advice on formatting such citations.

Communicating Your AI Policies to Students

Your students may encounter different expectations around AI use in each of their classes, so it is important to help them understand your course policies. Include your AI and overall academic integrity policies in your syllabus and on Canvas. You may also want to address these policies in class at the start of the semester or when introducing a new assignment, especially if guidelines for acceptable use vary between assignments.

Being transparent with students about an assignment’s purpose can help them appreciate what they are learning and why. Consider discussing the ways that having core knowledge in a field makes their use of generative AI more powerful, so they understand the value of that learning even when they have access to these tools in future work.

Fear of making mistakes can tempt some students to turn to AI. Consider reinforcing to students the value of learning from mistakes and showing evidence of their thinking. Course policies and practices that emphasize the importance of the learning process can show students that you value their progress in addition to the final product and reduce the incentive to use AI dishonestly.

Academic dishonesty, including unacceptable AI use, frequently results from students feeling unable to keep up with their work. Remind students of ways to get help if they are struggling in your course, such as office hours, Weingarten’s academic support, or the Marks Family Writing Center. Structured flexibility around due dates or resubmission policies may make students less tempted to turn to AI-related cheating as a last resort.

Addressing Potential Violations

If you encounter a student’s work in which you suspect AI may have been used inappropriately, you have several options for addressing the situation.

Talk with your instructional team. In some courses, TAs may do the bulk of the grading. Make sure they understand your AI policies and how students might use the tools. Determine a procedure for members of the instructional team to refer concerns about academic dishonesty to the instructor of record.

Remember that AI outputs are not reproducible. You are encouraged to familiarize yourself with AI tools and the type of content they can generate, and you may even choose to input your assignment prompt to see what AI produces. However, keep in mind that an AI response to the same prompt will be different each time and can be adjusted through more sophisticated prompting or use of a different tool.

Get familiar with your students’ work. Recognizing the type of work a student typically produces will make it easier to notice when something seems starkly different. Including an in-class assignment early in the semester is one strategy for comparing later assignments, but note that some students will produce different work when free from time constraints.

Check the reference list. Although AI is capable of generating citations that seem realistic, the cited sources may not exist or may be unrelated to the topic. Requiring students to cite sources and reviewing those citations can be an effective way to spot AI use. Fabricating sources is a violation of Penn’s Code of Academic Integrity, regardless of whether AI is involved.

Avoid AI detectors. Several companies have created “AI detectors” that claim to evaluate whether text was written by an AI or a human. However, none of these tools are sufficiently accurate to serve as evidence that a student has used AI. Additionally, AI detectors have demonstrated bias, disproportionately flagging work written by non-native English speakers. Uploading student work to an AI detector may also violate Penn’s AI privacy policies.

Talk with your students. A non-accusatory conversation with your students about their work product or process can help you gauge whether they relied substantially on AI.

Consult the Center for Community Standards & Accountability (CSA). CSA can consult with instructors on suspected or confirmed academic integrity violations. They can answer questions and offer help for talking with an affected student, even if you are not ready or planning to report a violation. Visit Reporting Academic Integrity Violations to learn more about the reporting and disciplinary process.

CETLI Can Help

CETLI staff are available to discuss ideas or concerns related to generative AI and your teaching, and we can work with your program or department to facilitate conversations about this technology. Contact CETLI to learn more.