Goals of this Guide
- Outline a clear, actionable process for detecting, addressing, and reporting the misuse of generative AI.
- Provide guidance on crafting transparent AI policy language for syllabi and assignment design.
The Process at a Glance
- Gather – Determine sufficient vs. insufficient evidence and save all relevant materials.
- Meet – Have a private, non-accusatory conversation to invite the student to explain their process.
- Refer – Forward the case to the Administrative Board.
- Review – Redesign your syllabus or assignment to be AI-aware.
Step 1: Gather Evidence
Before reaching out to a student, compile the facts.
Harvard policy requires a holistic approach to evidence.
What to Save
- The student's submission and draft history.
- The assignment prompt and rubric.
- Relevant course messages or emails.
Determine Sufficient vs. Insufficient Evidence
Insufficient Evidence (Does Not Meet standard for academic integrity charge)
- AI Detection Scores: Harvard strongly advises against relying on AI detection tools (like Turnitin's AI score). They are probabilistic and prone to false positives, especially for non-native English speakers.
- Generic Prose: Bland or highly formulaic writing.
Sufficient Evidence (Meets standard for further referral)
- Hallucinated Citations: References to fake sources or nonexistent data.
- Process Red Flags: Unexplained, dramatic shifts in writing style.
- Leftover Prompts: Text directly stating "As an AI language model..."
Step 2: Meet with the Student
If you suspect inappropriate generative AI use, the Bok Center [1] recommends meeting with the student soon after the work is submitted. During this private, non-accusatory conversation, you should:
- Share your concerns directly and ask whether they used generative AI or another aid.
- Discuss the specific reasons the work raised concerns and give the student an opportunity to explain.
- Ask targeted questions about the content and their process to gauge whether they plausibly authored the work.