
Creating clear policies around AI use is essential to maintaining academic integrity and setting student expectations. These policies should outline when and how AI tools like ChatGPT, Perplexity.ai, or Grammarly can be used and ensure students understand the ethical implications of AI in academic work.
If you’re interested, check out this AI Chatbot we created that can provide you with sample syllabus statements based on your unique course information and preferences.
Considerations for AI Policy Creation
Permitted Use Cases
Clearly define what AI tools can be used for in your course (e.g., brainstorming, idea generation, or drafting) and where they are prohibited (e.g., completing graded assignments without supervision).
Example Policy Statement: "Students may use AI tools like ChatGPT for initial idea generation and draft creation, but final submissions must reflect the student's original thoughts and revisions."
Prohibited Use Cases
Establish boundaries for AI use to prevent academic misconduct, such as using AI to generate full essays or solutions to homework problems.
Example Policy Statement: "Using AI tools to generate complete assignments, or submitting AI-generated work as your own, is considered plagiarism and will result in disciplinary action."
Ethical AI Use
Include guidelines on ethical AI usage, including proper attribution when AI-generated content is used in assignments.
Example Policy Statement: "Any AI-generated content used in assignments must be properly attributed. Failure to disclose AI assistance may result in plagiarism charges."
AI Syllabus Statement Templates
To help streamline the process of adding AI policies to your syllabus, here are templates and examples that can be adapted for your course. These templates ensure students are fully informed about when AI can be used and how to properly acknowledge AI-generated content. Remember that you can establish different AI policies for each assignment, depending on how you want the tools to be used (or not used). Just make sure to clearly communicate this decision to your students.
- "AI tools, such as ChatGPT or Grammarly, may be used for brainstorming or refining ideas, but all work submitted must be your own. If AI assistance is used in drafting, please include a note explaining how the tool was used."
- "AI tools, such as DALL-E or MidJourney, may be used for brainstorming or idea generation in art projects, but the student must create and refine all final artwork. If AI tools are used in any part of the creative process, students must disclose their role and provide a written reflection or recorded demonstration about how AI contributed to their creative decisions."
- "AI tools, such as AIVA or Amper Music, may be used for generating initial musical ideas or experimenting with different styles. However, students are required to modify and develop their compositions independently. Any AI-generated musical elements must be disclosed, and students must provide a written reflection on how they used AI in their compositional process."
- "Students may use AI tools to help with research, but all final written content must be original. Cite all sources, including AI-generated suggestions, where appropriate."
- "AI tools can be used to assist in music analysis or transcription tasks, such as identifying chord progressions or analyzing harmonic structures. However, students must critically evaluate AI output and include their own insights. AI tools must not be used to bypass critical analysis and understanding."
- "In group work, AI can be used as a collaborative tool for idea generation or outlining, but students must document the specific AI tool used and how it contributed to the project."
- "AI tools can be used to assist in collaborative art projects for idea generation or visual research. However, the final artwork must be original and primarily created by students. All AI-generated concepts or visuals must be properly credited and documented in the project submission."
- "If AI tools are used during the creative process, students must include a 1-2 page reflection outlining how the AI contributed to their project, the specific tools used, and how they adapted or refined the AI-generated content to fit their vision."
- For art- or music-based courses: "The reflection should also address the ethical implications of using AI in the creative arts."
Engaging Students in AI Policy Discussions
To foster a better understanding of AI’s role in education, engage students early in conversations about how AI should be used in academic work. These discussions can help students grasp the ethical responsibilities and critical thinking skills required to effectively and responsibly use AI.
Start the semester with an open discussion on using AI tools, including the ethical implications and expectations for proper use.
Discussion Prompts
"How do you think AI will impact your learning experience? What are the potential risks and benefits?"
“How do you think AI should be used in our classroom? How can we establish agreed-upon norms for AI usage?”
Assign students a reflection on how AI may assist them in their work and the importance of critical thinking beyond relying on AI-generated responses.
Example Assignments
"Write a reflection on how you used AI for your project and explain how it influenced your thinking process."
“Reflect on how relying on AI may have influenced your problem-solving approach and identify any gaps in understanding that could arise from over-reliance on AI-generated outputs.”
Privacy and Data Security Considerations
When using AI tools, it’s essential to consider privacy and data security. Do not share work containing student identifiers with any third-party service: no student names or other unique identifiers such as NetIDs or N-numbers. Sharing such identifiers with services like ChatGPT or Google Gemini violates the Family Educational Rights and Privacy Act (FERPA), which mandates careful handling of student records and restricts their disclosure to third parties. Sharing data that identifies students with tools that NYU has not licensed will never be FERPA-compliant, as FERPA permits disclosure only to third parties with whom the institution has a formal agreement governing the handling of student data. (See NYU’s Academic Integrity for Students at NYU.)
Addressing Academic Integrity Concerns
With the rise of AI tools like ChatGPT and other generative AI applications, concerns about academic integrity have become increasingly relevant. As AI becomes more integrated into the educational experience here at NYU Steinhardt, both faculty and students must navigate the fine line between appropriate use and academic dishonesty. Clear guidelines are necessary to help students understand when and how AI can be used responsibly.
If you suspect a violation, follow NYU Steinhardt’s policy on reporting an academic integrity violation.
Key strategies to address academic integrity concerns include:
- Establishing Clear AI Use Policies: Instructors must create explicit policies outlining when AI tools can be used and what constitutes misuse. For example, using AI for initial drafts may be allowed, but passing off fully AI-generated work as one's own without modification or attribution is not.
- Promoting Critical Thinking: Instead of banning AI, educators should focus on teaching students to use AI as a learning tool while encouraging them to critically evaluate AI-generated content. As AI is reshaping how work is done across industries, it’s essential for educators to adapt and incorporate these changes into their teaching practices, helping students navigate and responsibly engage with AI technologies.
- Emphasizing Ethical AI Use: Educators should emphasize ethical practices, such as proper attribution for AI assistance and transparency about the use of AI in assignments.
- Defining Plagiarism with AI: Clarify what constitutes plagiarism when AI tools are involved, such as submitting AI-generated work as your own or failing to attribute AI contributions.
Example
"Submitting AI-generated content without proper attribution is considered academic dishonesty and will be subject to disciplinary action."
Assessments and AI in Academic Integrity
Integrating AI considerations into assessments is crucial for maintaining academic integrity and aligning AI use with course objectives. Here are key elements to consider:
- Assessment Design: Design assignments that encourage original thought and critical analysis, minimizing opportunities for academic misconduct. For instance, open-ended questions, reflective responses, and analysis-based projects can make it harder for students to rely solely on AI-generated content.
- Clear Guidelines for AI in Assessments: Specify when and how students can use AI tools in assessments. For example, “AI tools can be used for initial research and brainstorming, but the final submission should be your original work.” If AI is prohibited for certain tasks (e.g., producing written essays or solving complex problems), state this explicitly.
- Reflective Assessment on AI Use: Include assignments where students reflect on their use of AI in completing tasks. This can help students critically assess AI’s impact on their learning and ensure that AI is a supplement, not a replacement for independent thought.
Incorporating these practices into your assessments fosters responsible AI use, supports academic integrity, and reinforces the value of original work.
Citing AI Tools in Academic Work
As AI tools like ChatGPT and other generative AI systems become more prevalent in academic settings, it’s crucial to provide proper citations when using them in your work. While AI doesn’t create “original” content like a human author, acknowledging its role in generating text or ideas is essential for maintaining academic integrity. Please see NYU Library’s Guide for Citing AI-Generated Text.
Here’s an overview of how to cite AI-generated content in different citation formats and key considerations:
APA
Basic Format: When citing AI tools in APA format, list the company (e.g., OpenAI) as the author and the tool name and version as the title, followed by a bracketed description such as [Large language model] and the URL.
Example:
- OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Resources:
ChatGPT and Other AI Generative Tools
How to Cite ChatGPT
Recommendations:
- APA warns against over-reliance on AI tools to generate substantial content. While AI can assist in drafting, it should not replace critical thinking or the author's original analysis.
- If you’ve used ChatGPT or other AI tools in your research, describe how you used the tool in the Method section or a comparable section of your paper. For literature reviews, essays, and response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and any portion of the relevant text generated in response.
MLA
Basic Format: In MLA style, do not treat the AI tool as an author. Instead, use your prompt as the title of the source, with the tool as the container, followed by the version, the company, and the date you generated the content.
Example:
- “Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
Resource:
How do I cite Generative AI in MLA Style?
Recommendations:
- MLA emphasizes the importance of using AI tools to complement other academic sources, not as the primary or sole content source.
- You should include the unique URL that the tool generates instead of the general URL.
Chicago
Basic Format: In Chicago style, AI-generated content is treated similarly to software, citing the platform, tool, and interaction date.
Example:
- Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.
Resources:
Citation, Documentation of Sources (AI)
Recommendations:
- In Chicago, make sure to cross-reference any AI-generated insights with credible, verified sources.
- If you’ve edited the AI-generated text, you should say so in the text or at the end of the note (e.g., “edited for style and content”).
NYU’s Notice on AI Detection Tools
AI detection tools like Turnitin have introduced features to identify AI-generated content, but these tools are not fully reliable. NYU therefore advises educators and students not to rely on them to accurately detect AI use in submissions. Faculty members are encouraged to engage students in conversations about academic integrity and the responsible use of AI rather than depending on detection tools to police AI use.