Instructors are responsible for setting their own policies around student use of generative AI. This page provides resources and examples for developing and communicating policies that reflect the assignment learning goals and outline acceptable and unacceptable AI uses. 

Considerations for Creating Course-Level AI Policies

  • Center your policies on learning. The AI uses you allow or prohibit should reflect the kinds of thinking students need to do to achieve learning goals. Acceptable uses of AI are likely to differ by assignment, so you might consider creating AI policies that are specific to each assignment.
  • Address the specific tasks you ask students to do. Consider that the impact of AI use will vary by task, enhancing some activities while hindering learning in others. Design your policies with this in mind.  
  • Communicate your policies early, often, and with explanation. Stating general policies in your syllabus as well as addressing specific expectations that pertain to individual assignments throughout the semester will help students understand what they can and cannot do and why. 

Creating a Course Policy on AI

It is Penn’s policy that each instructor is responsible for setting their own guidelines. Your department or program may have recommendations or resources for developing your policies in addition to the considerations provided here. 

This section includes elements you might consider when creating your own course policy.

Under Penn’s Code of Academic Integrity, students may not use unauthorized assistance in their academic work. It is up to you to decide what that means and communicate that definition to students.  

While some instructors may consider any significant use of AI to be an academic integrity violation, others may allow students to use AI in specific ways or for particular parts of an assignment.

Students might share course materials with AI tools to study, simplify complicated instructions, prepare for class, or support academic accommodations such as live notetaking. For example, a student might ask AI to create a practice quiz based on a course reading or locate a particular diagram within a large collection of lecture slides.  

Sharing copyrighted materials with AI for “educational purposes” may be permissible if it meets the criteria for Fair Use and complies with the publisher’s guidelines. For material that is your own, you can decide how it may or may not be shared. If you are concerned about privacy, you may wish to set a policy that prohibits sharing your materials with AI tools that are not Penn-licensed. As with other aspects of AI-related policy, it is important to clearly communicate these expectations and the reasons behind them to your students. 

If students are allowed to use AI for assistance with their coursework, you may want to define how they should acknowledge AI’s contributions. Students could embed screenshots of AI output, submit transcripts of AI interactions, or share the prompts they entered. Less formally, you may ask students to submit an AI-assisted learning disclosure or reflection. 

If AI is referenced as a source in a bibliography, several common style guides have published guidance on how to format the citation. 

You may either create a standalone AI policy or add a statement about AI to the academic integrity policy already in your syllabus and on Canvas. Some examples include: 

  • You may not use generative AI for your work in this class. The work you produce should be your own, free from any AI assistance. Submitting work that contains AI-generated content will be considered a violation of Penn’s Code of Academic Integrity, and suspected use will be referred to the Center for Community Standards & Accountability (CSA). 
  • It is plagiarism to submit work produced by generative AI as your own without attribution. Any use of AI must be in alignment with assignment guidelines, and all AI-generated contributions should be properly cited like any other reference material. Note that AI will not be considered a “scholarly source” for assignments requiring specific types of references. 
  • You are welcome to use generative AI for any purpose in this class, provided that you cite it properly. However, note that all AI tools are prone to errors, including falsifying sources, and may even generate offensive content. You will be responsible for the accuracy and quality of the work you submit, regardless of whether it was generated by you or an AI tool. The university’s policy on plagiarism still applies to uncited or improperly cited work, whether from an AI or another human being. 

Your expectations for individual assignments may vary based on the learning goals of each task. When introducing a particular assignment, be sure to clarify any specific AI-related expectations beyond your general policy. Some examples include: 

  • This assessment will be completed in class using a secure browser; therefore, use of generative AI will be prohibited. The goal of this task is to produce evidence of your unaided critical thinking, not to evaluate the quality of your writing. I am interested in your ability to develop and communicate your original ideas, even if they are not fully polished in the limited time allotted for this exercise. 
  • Remember that generative AI is not considered to be a scholarly source for the purpose of this assignment. If you select a source that was suggested to you by an AI tool, you are responsible for evaluating its quality and content. Relying solely on an AI-generated summary rather than reading the original source may lead to false claims or citations, both of which constitute plagiarism. 
  • Along with your completed assignment, you are required to submit an AI-Assisted Learning Disclosure acknowledging the ways you did or did not use generative AI assistance. There are no “correct” answers. I encourage you to be honest in your responses, both as a way of practicing academic integrity and to help us learn together about specific uses that helped or hindered your process. 

Communicating Your AI Policies to Students

Your students may encounter different expectations around AI use in each of their classes, so it is important to help them understand your course policies. Include your AI policy and overall academic integrity policies in your syllabus and on Canvas. It can also help to address your policies in class at the start of the semester and when introducing a new assignment, especially if guidelines for acceptable use vary between assignments. 

Being transparent with students about an assignment’s purpose can help them appreciate what and why they are learning. Consider discussing the ways that having core knowledge in a field makes their use of AI more powerful, so they understand the value of that learning even with access to these tools in future work. 

Fear of making mistakes can tempt some students to turn to AI. Consider how you might reinforce the value of learning from errors and working through complex problems. Course policies and grading practices that emphasize the importance of the learning process can show students that you value their thinking in addition to the final product and encourage them to make thoughtful decisions about when and why they consult AI for help.

Academic dishonesty, including unacceptable AI use, often stems from students feeling unable to keep up with their work. Remind students of ways to get help if they are struggling in your course, such as office hours, Weingarten’s academic support, or the Marks Family Writing Center. Structured flexibility around due dates or resubmission policies may make students less tempted to turn to AI-related shortcuts. 

Should you decide to use AI to interact with student work, use a Penn-licensed tool and explain how and why student data will be shared and protected.

Addressing Potential Violations

If you encounter a student’s work in which you suspect AI may have been used inappropriately, you have several options for addressing the situation. 

There are steps you can take both before an assignment is due and after it has been submitted to frame your expectations around academic integrity.

Talk with your instructional team
In some courses, TAs may do the bulk of the grading. Make sure they understand your AI policies and how students might use the tools. Determine a procedure for members of the instructional team to refer concerns about academic dishonesty to the instructor of record. 

Remember that AI outputs include randomness
You are encouraged to familiarize yourself with AI tools and the type of content they can generate, and you may even choose to input your assignment prompt to see what AI produces. However, keep in mind that an AI response to the same prompt will be different each time and can be adjusted through more sophisticated prompting or use of a different tool. 

Get familiar with your students’ work
Recognizing the type of work a student typically produces will make it easier to notice when something seems starkly different. Including an in-class assignment early in the semester is one strategy for comparing later assignments, but note that some students will produce different work when free from time constraints. 

Check the reference list
Although AI is capable of referencing real sources, the source may or may not actually contain the information stated by AI. Requiring students to cite sources and reviewing those citations can be an effective way to spot inappropriate AI use. Fabricating sources is a violation of Penn’s Code of Academic Integrity, regardless of whether AI is involved. 

Avoid AI detectors
Several companies have created “AI detectors” that claim to evaluate whether text was written by an AI or a human. However, none of these tools are sufficiently accurate to serve as evidence that a student has used AI. Additionally, AI detectors demonstrate bias, disproportionately flagging work written by non-native English speakers as AI-generated. Uploading student work to an AI detector may also violate Penn’s AI privacy policies. 

Talk with your students
A non-accusatory conversation with your students about their work product or process can help you gauge whether they relied substantially on AI and can serve as a teachable moment. 

Consult the Center for Community Standards & Accountability (CSA)
CSA can consult with instructors on suspected or confirmed academic integrity violations. They can answer questions and offer help for talking with an affected student, even if you are not ready or planning to report a violation. Visit Reporting Academic Integrity Violations to learn more about the reporting and disciplinary process. 

CETLI Can Help

CETLI staff are available to discuss ideas or concerns related to generative AI and your teaching, and we can work with your program or department to facilitate conversations about this technology. Contact CETLI to learn more. 

Center for Excellence in Teaching, Learning and Innovation