Teaching and Learning with AI
Action Guide for Merced College faculty
This page distills statewide guidance from the California Community Colleges Chancellor’s Office (CCCCO), the Academic Senate for California Community Colleges (ASCCC), and Merced College policies into seven quick steps. Follow them in order; the links and examples are all California resources, so you can copy‑paste with confidence.
- Explore the Human Centered Principles for Adopting Generative AI from the California Community College page to ensure alignment.
- Complete the self-paced Canvas course Foundations of AI (6–8 hrs), customized for Merced College, which covers:
- Foundations of AI (core concepts & terminology)
- AI for Educators (prompt engineering & lesson ideas)
- AI for Students (academic honesty & study skills)
- AI in the Workplace (career readiness)
- Try a district‑licensed AI tool (e.g., Microsoft Copilot in the MC tenant) and jot down an idea for how it could save you grading or prep time.
Choose your stance → paste it in both the syllabus and the Canvas Start‑Here page. The language below is adapted from the ASCCC Academic Integrity in the AI Age resource (asccc.org).
| Option | Sample statement (edit the yellow text) |
| --- | --- |
| Explore & Acknowledge | "Students may use generative AI (Copilot, Claude, etc.) for brainstorming, outlining, and grammar checks only if they paste their prompt and a note on edits in an appendix. Undisclosed AI use = plagiarism under BP 5500." |
| Limited Use | "AI tools are permitted only where the assignment explicitly says 'AI‑OK'. All other uses violate academic honesty rules." |
| No AI | "Generative AI tools are not allowed for any portion of graded work in this course." |
Revisit the policy whenever you release an assignment so students never have to ask, “Is this AI‑okay?”
Ex: Post a vague prompt → students improve it → compare Copilot outputs.
Ask for a simple Prompt Log (date · tool · prompt · what you accepted/edited). It takes students only seconds to keep, and it gives you transparency and faster feedback loops, evidence for academic‑honesty questions, and rich data to improve your next assignment.
- Model critical use: Post weekly announcements showing how you checked AI output and corrected errors.
- Bias guardrails: Rotate datasets/case studies to reflect diverse cultures, and ask students to note any stereotypes they spot.
- PII shield: Remind everyone: no IDs, grades, or health info in public models. Use district‑licensed tools whenever possible.
- Foundations of AI badge earned
- Syllabus box added
- One AI activity drafted
- Prompt Log attached to an assignment
1:1 Consult – Connect with your Instructional Designers, Pam Huntington or Keri Ortiz, to discuss.