Artificial Intelligence (AI) in Teaching, Learning, and the Workplace

Merced College supports the responsible, creative, and ethical use of Artificial Intelligence (AI) to enhance teaching, learning, and operational excellence.
AI is a support tool, not a replacement for human expertise. Faculty remain central to guiding student learning and maintaining academic integrity.

Considerations Before You Use AI

AI with Purpose: A Human-Centered Guide for Education, Work, and Life (Self Enroll)
(6–8 hours • self-paced • developed by Merced College for Merced College faculty and staff)

This course provides a practical, people-centered introduction to artificial intelligence, equipping you to integrate AI thoughtfully across teaching, learning, and everyday work.

You’ll explore:

  • Foundations of AI: Core concepts, terminology, and real-world applications.

  • AI for Educators: Prompt design, lesson integration, and ethical classroom use.

  • AI for Students: Academic integrity, study strategies, and digital literacy.

  • AI in the Workplace: Productivity, creativity, and preparing students for an AI-enabled future.

This course was custom-developed for the Merced College district and aligns with the statewide Human Centered Principles for Adopting Generative AI guidance from the California Community Colleges Chancellor’s Office.

Our approach follows CCCCO guidance and the HUMANS framework:

  • Human-Centered: Preserve faculty and student agency; AI complements—not replaces—people.

  • Universal Support: Equitable access to tools and training for all.

  • Managed Privacy: Protect personal and institutional data; use privacy-by-design.

  • Algorithmic Fairness: Reduce bias and test for equity impacts.

  • Notice & Transparency: Disclose when AI is used and how it shapes outcomes.

  • Safety & Security: Guard against misuse, unsafe outputs, or insecure systems.

Thanks to a partnership with Google and the California Community Colleges, Merced College faculty and staff (students coming soon!) now have access to two new generative AI tools: Google Gemini and NotebookLM. These tools offer exciting opportunities to explore and create using artificial intelligence.

🔗 Access here: Google Gemini | NotebookLM

How to Log In
Use your Merced College @mccd.edu email address. When prompted, select your Google Workspace account to authenticate and answer a few short setup questions. Because Merced College primarily uses Microsoft, some Google features may be unavailable; this restriction keeps the work environment secure and consistent. Even so, these tools provide great ways to explore and innovate with AI.

Free AI training resources are available to help you begin using Gemini and NotebookLM effectively.

Before using any AI tool, evaluate it with the Evaluating Artificial Intelligence (AI) Tools in an Academic Setting rubric developed by the ASCCC (asccc.org).

1. Data Privacy & Security

  • Require full compliance with FERPA, California data laws, and institutional policies.

  • Demand vendor-provided documentation showing encryption, secure storage, role-based access, deletion rights, and breach notification procedures.

  • Ensure that any data flows (input, output, backups) are transparent and auditable.

  • Disallow tools that require submitting student personal data to public or unsecured endpoints (see the sketch below).
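
To make that last point concrete, here is a minimal, illustrative sketch (in Python) of stripping obvious student identifiers from text before it leaves campus systems. The patterns and function name are assumptions for demonstration only; a regex pass like this does not by itself satisfy FERPA or district policy.

```python
# Illustrative only: redact obvious student identifiers before any text
# is sent to an external AI endpoint. The patterns below are assumptions,
# not Merced College policy, and do not constitute FERPA compliance.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\b\d{7,9}\b", "[STUDENT-ID]", text)         # ID-like numbers
    return text

print(redact("Summarize the feedback for jdoe@mccd.edu, ID 1234567."))
# -> Summarize the feedback for [EMAIL], ID [STUDENT-ID].
```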

2. Bias, Fairness & Equity

  • Ask vendors for bias audits, test results, and mitigation strategies across key demographic groups.

  • Reject tools that exhibit unexamined or systemic bias (e.g. differences in accuracy by race, gender, disability).

  • Require ongoing monitoring of fairness metrics as part of the contract.

  • Include equity in pilot evaluations and disaggregate outcomes by student subgroup.

3. Transparency & Explainability

  • The tool should clearly explain how outputs are generated, especially when offering recommendations or predictions.

  • Educators should be able to inspect or “peek inside” model behavior (e.g. feature weights, rules, decision logic) where possible.

  • The vendor should provide documentation in educator-friendly language about limitations, confidence, and assumptions.

4. Accountability

  • Define accountable roles within the institution: who is responsible if the tool errs, causes harm, or misclassifies a student?

  • The vendor contract should include liability, indemnity, and recourse for harmful outputs.

  • Maintain logs, audit trails, and version history of model updates.

  • Ensure a process for human-in-the-loop (HITL) review before decisions that can materially affect students (grades, placement, eligibility).

5. Pedagogical Alignment

  • The tool must support, not subvert, evidence-based teaching goals (critical thinking, metacognition, disciplinary reasoning).

  • It should enhance, not replace, the instructor’s role.

  • Reject AI features that complete assessments or responses without instructor oversight.

  • Favor tools that allow scaffolding: guiding students through steps instead of delivering final answers.

6. Student Engagement & Autonomy

  • Tools should encourage student agency (e.g. students shaping prompts, reflecting on outputs) rather than passive consumption.

  • Avoid “black box” features that give full answers without showing process.

  • Allow customization and adaptation for diverse learners (e.g. varying levels of scaffolding, support for different modalities).

7. Assessment & Feedback

  • The system should provide timely, meaningful, actionable feedback (not just generic statements).

  • Faculty must retain oversight of grading; the tool must expose its rubric logic and allow manual override (see the sketch after this list).

  • For any automated scoring, the tool should offer transparency in scoring decisions.
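
To make the “expose rubric logic and allow manual override” requirement concrete, here is a minimal sketch of what transparent scoring can look like. The criteria, scores, and class names are invented for illustration; they do not describe any actual vendor product.

```python
# Sketch: every criterion score is visible, and an instructor's
# manual override always takes precedence over the automated score.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CriterionScore:
    name: str
    auto_score: float                         # score suggested by the tool
    instructor_score: Optional[float] = None  # manual override, if any

    @property
    def final(self) -> float:
        # The human decision wins whenever one exists.
        return self.instructor_score if self.instructor_score is not None else self.auto_score

scores = [
    CriterionScore("Thesis clarity", 3.5),
    CriterionScore("Use of evidence", 4.0, instructor_score=3.0),  # overridden
]
for s in scores:
    print(f"{s.name}: auto={s.auto_score}, final={s.final}")
```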

8. Academic Integrity & Authorship

  • The tool should include safeguards against misuse (e.g. plagiarism checks, detection of AI-generated text, originality checks).

  • It must clarify authorship (distinguish student-generated vs AI outputs).

  • Be wary of tools that automatically rewrite or ghost-write without student involvement.

9. Usability & Accessibility

  • The interface must be intuitive and low-friction for faculty and students.

  • Demand compliance with WCAG and Section 508, and request a current VPAT from the vendor.

  • Ensure support for assistive technologies (screen readers, keyboard navigation, alt text, captioning).

  • Provide training, help documentation, and responsive vendor support.

10. Integration with Existing Systems

  • Prefer tools that seamlessly integrate with LMS, SIS, authentication (SSO), and campus IT infrastructure.

  • Avoid tools that require excessive manual data export/import.

  • Check compatibility, APIs, and long-term maintainability (see the sketch below).
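
As one small, concrete way to check compatibility before a pilot, the sketch below probes a vendor’s advertised SSO/LTI endpoints for basic reachability. The URLs are placeholders, not real vendor addresses, and a real review would also examine authentication flows, scopes, and data contracts.

```python
# Quick reachability smoke test for a vendor's advertised endpoints.
# Placeholder URLs only; substitute values from the vendor's documentation.
# Requires the third-party "requests" package (pip install requests).
import requests

ENDPOINTS = {
    "OIDC discovery": "https://vendor.example.com/.well-known/openid-configuration",
    "LTI 1.3 JWKS": "https://vendor.example.com/.well-known/jwks.json",
}

for name, url in ENDPOINTS.items():
    try:
        resp = requests.get(url, timeout=10)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```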

11. Scalability & Sustainability

  • The tool should scale from pilot to full-course or multi-department use without performance degradation.

  • Confirm vendor capacity: support, reliability, uptime, maintenance.

  • Evaluate total cost over time, including licensing, onboarding, training, infrastructure, and vendor escalation (see the toy estimate below).
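
As a back-of-the-envelope illustration of evaluating total cost over time, the toy estimate below multiplies seats, per-user pricing, and recurring training across a license term. Every number is an invented placeholder; substitute real vendor quotes.

```python
# Toy total-cost-of-ownership estimate. All figures are invented placeholders.
years = 3
users = 500                    # faculty and staff seats
per_user_per_year = 30.00      # licensing (placeholder)
onboarding = 5_000.00          # one-time setup and integration (placeholder)
training_per_year = 2_000.00   # workshops and documentation (placeholder)

total = onboarding + years * (users * per_user_per_year + training_per_year)
print(f"Estimated {years}-year cost: ${total:,.2f}")  # -> $56,000.00
```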

12. Cost, Licensing & Vendor Terms

  • Demand transparent pricing and clarity around usage limits (e.g. per-user, per-class).

  • Ensure licensing terms allow institutional use, future expansion, and data portability.

  • Require clauses for discontinuation, data export, and tool phase-out without lock-in.

  • Confirm that tool usage does not require extra payment at the student level or create inequities.

Strategies for Teaching and Learning with AI

1. Focus on Process Over Product

  • Encourage students to show how they arrived at their ideas, not just the final answer.

  • Use drafts, reflections, and oral defenses to assess reasoning and learning, not just output.

2. Design Authentic Assessments

  • Connect assignments to real-world, local, or community contexts.

  • Prioritize projects that require human judgment, personal insight, or original synthesis—things AI can’t easily replicate.

3. Teach AI Literacy

  • Help students understand how AI works, its limits, and its biases.

  • Discuss hallucinations, accuracy checking, source verification, and responsible citation.

  • Model critical evaluation of AI-generated content in class.

4. Be Transparent and Ethical

  • Disclose when AI tools are used in your teaching materials.

  • Discuss academic integrity expectations openly.

  • Require students to note when and how they used AI in their work.

5. Provide Alternatives

  • Offer non-AI options for students who lack access, have accessibility needs, or prefer to work without AI.

  • Ensure that AI use never becomes a barrier to participation or equity.

6. Keep the Human at the Center

  • Use AI to enhance—not replace—human connection in teaching and learning.

  • Prioritize empathy, mentorship, and creativity.

  • Encourage students to view AI as a partner for exploration, not a shortcut to completion.

7. Promote Critical Reflection

  • Ask students to reflect on their experience using AI—what worked, what didn’t, and what they learned about thinking, ethics, or creativity.

  • Use these reflections to build digital and ethical literacy.

8. Start Small, Iterate Thoughtfully

  • Begin with low-stakes activities (e.g., brainstorming, outlines).

  • Gather feedback from students, adjust course policies, and share what works with colleagues.

Review the materials provided in the Merced College AI Course (Direct link to course for self-enrollment). You can also explore the Cal State Recommendations for Teaching & Learning with AI or the 7 Practical AI Enhancements for Assessments document.

Carefully Consider and Include AI-Use Statements in the Syllabus and on Specific Assignments

Choose your stance, then paste it in both the syllabus and the Canvas Start-Here page. The language below is adapted from the ASCCC Academic Integrity in the AI Age resource (asccc.org).

Syllabus options and sample statements

Explore & Acknowledge
“Students may use generative AI (Copilot, Claude, etc.) for brainstorming, outlining, and grammar checks only if they paste their prompt and a note on edits in an appendix. Undisclosed AI use = plagiarism under BP 5500.”

Limited Use
“AI tools are permitted only where the assignment explicitly says ‘AI-OK’. All other uses violate academic honesty rules.”

No AI
“Generative AI tools are not allowed for any portion of graded work in this course.”

Revisit the policy whenever you release an assignment, so students never have to ask, “Is this AI-okay?” You may allow or prohibit AI use for entire assessments or for parts of them. Talk with an Instructional Designer to learn more.