Thoughtful Artificial Intelligence (AI) Use

We support the responsible, creative, and ethical use of Artificial Intelligence (AI) to enhance teaching, learning, and operational excellence across Merced College. AI serves as a collaborative support tool that amplifies human expertise, insight, and innovation.

Faculty, staff, and administrators each play a vital role in ensuring AI is used thoughtfully to strengthen student learning, improve efficiency, and foster a culture of integrity, creativity, and continuous improvement.

AI Considerations Before Use

AI with Purpose: A Human-Centered Guide for Education, Work, and Life (Self Enroll)
6–8 hours, self-paced, no assignments; developed by Merced College specifically for Merced College faculty and staff

This course provides a practical, people-centered introduction to artificial intelligence, equipping you to integrate AI thoughtfully across teaching, learning, and everyday work, grounded in human-centered principles for adopting generative AI. You’ll explore:

  • Foundations of AI: Core concepts, terminology, and real-world applications.
  • AI for Educators: Prompt design, lesson integration, and ethical classroom use.
  • AI for Students: Academic integrity, study strategies, and digital literacy.
  • AI in the Workplace: Productivity, creativity, and preparing students for an AI-enabled future.

👉 Faculty can potentially earn 3 DE recertification hours for teaching online by demonstrating completion of the AI with Purpose course: complete all quizzes at 100% and submit a copy of the grades page to the OSH.

Our approach follows CCCCO guidance and the HUMANS framework:

  • Human-Centered: Preserve faculty and student agency; AI complements—not replaces—people.

  • Universal Support: Equitable access to tools and training for all.

  • Managed Privacy: Protect personal and institutional data; use privacy-by-design.

  • Algorithmic Fairness: Reduce bias and test for equity impacts.

  • Notice & Transparency: Disclose when AI is used and how it shapes outcomes.

  • Safety & Security: Guard against misuse, unsafe outputs, or insecure systems.

FERPA-Compliant AI Tools

Protecting student data is foundational to how we approach AI at Merced College. Most publicly available AI tools are not FERPA compliant, which is why the District has invested in a small set of approved, FERPA-compliant AI tools designed specifically for instructional and professional use within secure college systems.

These tools allow faculty and staff to explore AI thoughtfully, ethically, and responsibly—without putting student information at risk.

Approved tools include:

  1. Google Gemini

  2. Google NotebookLM

  3. Nectir.AI Assistant (includes Claude 4.5 and ChatGPT 5.2)

  4. Microsoft Copilot

  5. Coming soon: BoodleBox AI for access to diverse models

If you are new to AI, we recommend starting with Google Gemini, Microsoft Copilot, or NotebookLM. As your comfort grows, Nectir.AI and BoodleBox provide more instructionally powerful and collaborative options. At every stage, support is available: reach out to your Instructional Designer, Pamela Huntington or Keri Ortiz, to explore possibilities.

Google Gemini & Google NotebookLM

General-Purpose AI for Exploration and Productivity

Google Gemini and Google NotebookLM are flexible, general-purpose AI tools available to Merced College faculty and staff through our Google Workspace for Education partnership with the California Community Colleges.

These tools are ideal for individual exploration and productivity, including drafting, brainstorming, summarizing content, and working with documents.

How These Tools Are Best Used

  • Google Gemini: A conversational AI useful for idea generation, drafting text, asking questions, and exploring topics.

  • Google NotebookLM: Designed to work with documents you upload, helping summarize, organize, and analyze information you provide.

These tools are especially well-suited for faculty who are new to AI and want a low-barrier way to experiment and learn.

FERPA and Secure Access

  • Gemini and NotebookLM are FERPA compliant only when accessed through the Merced College Google Workspace (MCCD).

  • Microsoft Copilot is FERPA compliant only when used within Microsoft 365 in the MCCD environment.

  • Do not upload or share sensitive or FERPA-protected information in any AI tool unless it is explicitly approved and secure.

🔗 Access here: Google Gemini | NotebookLM

How to Log In
Use your Merced College @mccd.edu email address. When prompted, select Google Workspace account to authenticate and answer a few short setup questions. Because Merced College primarily uses Microsoft, some Google features may not be available; this limitation helps ensure a secure and consistent work experience. Even so, these tools provide great ways to explore and innovate with AI.

Explore free AI training resources to help you begin using Gemini and NotebookLM effectively.

Nectir.AI Assistant Creator

Course-Aware AI Integrated with Canvas

Nectir.AI is a higher-education–focused AI platform designed specifically to support teaching, learning, and student success. Unlike general-purpose AI tools such as Google Gemini or NotebookLM, Nectir is built around instructional workflows and can integrate directly within Canvas.

Nectir is best suited for faculty who want to intentionally design AI support around their course content. By default, Nectir is hidden in the Canvas navigation menu and must be moved into the visible menu before students can access it. For setup guidance, see the tutorial Creating Your First AI Assistant in Canvas Using Nectir.

How Nectir.AI Is Different

  • Course-aware AI: Faculty can connect Nectir to course materials (syllabi, readings, assignments), allowing the AI to respond based on your content rather than general web data.

  • Student-centered support: Helps students ask questions, clarify expectations, and review concepts without replacing instructor judgment or feedback.

  • Instructionally aligned: Emphasizes transparency, source citation, and responsible use in academic contexts.

  • Variety of AI models: Nectir.AI provides access to a variety of models, including GPT, Claude, and Google Gemini.

FERPA and Responsible Use

Nectir.AI is designed for educational environments and prioritizes data security. As with all AI tools, sensitive or FERPA-protected information should only be used if the tool is explicitly approved and properly configured.

Training and Support

Faculty are encouraged to explore Nectir through guided tutorials and consultation with the Innovation team. If you are considering using Nectir in your Canvas course, your Instructional Designer can help you think through appropriate use cases and course design strategies.

🔗 Tutorial on Creating Your First Nectir AI Assistant

🔗 Connect with your Instructional Designer to learn more.

Microsoft Copilot
AI Built into Microsoft 365 for Everyday Work

Microsoft Copilot is an AI assistant integrated directly within Microsoft 365 applications used at Merced College, including Outlook, Word, Excel, PowerPoint, and Teams. Because Copilot operates inside our existing Microsoft environment, it is designed to enhance daily workflows while maintaining institutional security standards.

Copilot is ideal for increasing efficiency in routine tasks such as drafting emails, summarizing meetings, analyzing data, generating reports, and creating presentations.

How Microsoft Copilot Is Best Used
  • Integrated productivity: Works within Word, Excel, Outlook, PowerPoint, and Teams to assist with drafting, editing, summarizing, and data analysis.

  • Context-aware support: Uses information you already have access to in Microsoft 365 (emails, documents, meetings) to generate more relevant and personalized outputs.

  • Administrative and operational efficiency: Particularly useful for streamlining communication, documentation, reporting, and project coordination.

FERPA and Secure Use
  • Microsoft Copilot is FERPA compliant only when used within the official Microsoft 365 environment using your Merced College @mccd.edu account.

  • Sensitive or FERPA-protected information should only be used within approved, secure platforms.

Access
Copilot is available through Microsoft 365 applications when logged in with your Merced College credentials.

Faculty and staff are encouraged to explore Copilot for drafting, summarizing, and productivity enhancement within their existing Microsoft workflows.

Coming Soon: BoodleBox

Collaborative, Multi-Model AI for Group Learning

BoodleBox is a collaborative AI platform designed to support group work, shared inquiry, and collective problem-solving. Unlike tools built primarily for individual use, BoodleBox allows multiple users to engage with AI together in shared workspaces.

BoodleBox is best suited for collaborative learning, AI literacy, and group-based activities.

How BoodleBox Is Different

  • Collaboration first: Faculty and students can work together in shared AI spaces, making thinking, revision, and discussion visible.

  • Multiple AI models: Users can compare responses from different AI models side-by-side, supporting evaluation, bias detection, and critical thinking.

  • Process-focused learning: Encourages reflection, dialogue, and sense-making rather than simply producing answers.

FERPA and Responsible Use

BoodleBox is designed for educational use and emphasizes ethical, transparent AI engagement. As with all AI tools, faculty should avoid uploading sensitive or FERPA-protected information unless explicitly approved and configured for that purpose.

Training and Support

Once available, faculty will be invited to explore BoodleBox through guided tutorials and supported experimentation. BoodleBox is particularly well-suited for group activities, brainstorming, peer review, and teaching students how to critically engage with AI.

🔗 Getting Started with BoodleBox (Faculty Tutorial)
🔗 Connect with your Instructional Designer to explore classroom use cases once BoodleBox is available.

Artificial intelligence is showing up in more of the tools we use every day, from chat-based platforms like Gemini to specialized assistants and autonomous agents. As we continue to explore AI tools through pilots like PlaylabAI and Nectir AI, it’s helpful to understand the difference between these three levels of AI.


Level 1: Conversational AI

This is the most familiar kind of AI: the one you chat with directly. Tools like ChatGPT, Google Gemini, NotebookLM, and BoodleBox chats generate ideas, answer questions, summarize information, and help you think through problems.

They’re great for drafting, brainstorming, or tutoring, but they don’t remember past conversations or take independent action. Think of these as on-demand collaborators, ready when you ask, but reactive rather than proactive.


Level 2: AI Assistants

AI Assistants go a step further. They’re customized helpers built for specific workflows, tasks, or departments, and they can hold context across interactions.

At Merced College, we’re exploring pilots with Nectir AI, which lets us design our own assistants for education and campus operations. These assistants can answer student questions, help draft reports, or generate learning materials, all while remembering context and using relevant information securely.

Other examples include ChatGPT’s “GPTs” and Google Gemini’s “Gems”, both of which allow you to create specialized assistants inside their platforms.

These aren’t just chats; they’re context-aware AI Assistants that help manage recurring work tasks and connect with your existing tools.


Level 3: AI Agents

AI Agents take things even further: they don’t just respond or assist, they act.

Agents can make decisions and carry out multi-step tasks automatically, often by connecting to systems and data. For example, an agent could monitor enrollment patterns, send alerts, or manage scheduling without needing to be prompted each time.

This level of AI is still emerging in education, but it represents the next frontier: systems that proactively support staff, faculty, and students by handling tasks behind the scenes.


In Short

  • Level 1: Conversational AI
    What it does: Chat-based generation and Q&A; you ask, it responds.
    Example tools: ChatGPT (basic chat), Gemini (basic chat), NotebookLM, Copilot.
    How it’s different: Reactive; you prompt, it answers.

  • Level 2: AI Assistants
    What it does: Customizable, workflow-embedded helpers with context.
    Example tools: ChatGPT “GPTs”, Gemini “Gems”, PlaylabAI, Nectir AI.
    How it’s different: Context-aware; built for specific tasks, can integrate data and remember context.

  • Level 3: AI Agents
    What it does: Autonomous systems that plan and act toward goals.
    Example tools: Emerging agent frameworks (e.g., OpenAI Agents, institutional pilots).
    How it’s different: Proactive; can take action and make decisions on behalf of users.

Why It Matters

Understanding these levels helps us design and use AI thoughtfully as tools that enhance our creativity, insight, and capacity to serve others.

  • Conversational AI helps us explore ideas, learn efficiently, and expand our thinking.

  • AI Assistants support our work by handling repetitive tasks, organizing information, and giving us more space for creative and meaningful projects.

  • AI Agents represent a growing opportunity to automate background processes while ensuring people remain at the center — guiding, shaping, and reviewing outcomes.

At Merced College, we do not yet have a licensed tool for AI agents, but our focus remains on using AI to strengthen human potential, freeing time and energy for teaching, learning, innovation, and connection.

Prior to using any AI tool, be sure to evaluate it using the Evaluating Artificial Intelligence (AI) Tools in an Academic Setting rubric developed by the ASCCC (asccc.org). Faculty and staff should use only the approved, vetted, and secured AI tools within the district’s approved toolset.

1. Data Privacy & Security

  • Require full compliance with FERPA, California data laws, and institutional policies.

  • Demand vendor-provided documentation showing encryption, secure storage, role-based access, deletion rights, and breach notification procedures.

  • Ensure that any data flows (input, output, backups) are transparent and auditable.

  • Disallow use of tools that force us to submit student personal data to public or unsecured endpoints.

2. Bias, Fairness & Equity

  • Ask vendors for bias audits, test results, and mitigation strategies across key demographic groups.

  • Reject tools that exhibit unexamined or systemic bias (e.g. differences in accuracy by race, gender, disability).

  • Require ongoing monitoring of fairness metrics as part of the contract.

  • Include equity in pilot evaluation; disaggregate outcomes by student subgroups.

3. Transparency & Explainability

  • The tool should clearly explain how outputs are generated, especially when offering recommendations or predictions.

  • Educators should be able to inspect or “peek inside” model behavior (e.g. feature weights, rules, decision logic) where possible.

  • The vendor should provide documentation in educator-friendly language about limitations, confidence, and assumptions.

4. Accountability

  • Define accountable roles within the institution: who is responsible if the tool errs, harms, or misclassifies?

  • The vendor contract should include liability, indemnity, and recourse for harmful outputs.

  • Maintain logs, audit trails, and version history of model updates.

  • Ensure a process for human-in-the-loop (HITL) review before decisions that can materially affect students (grades, placement, eligibility).

5. Pedagogical Alignment

  • The tool must support, not subvert, evidence-based teaching goals (critical thinking, metacognition, disciplinary reasoning).

  • It should enhance, not replace, the instructor’s role.

  • Reject AI features that complete assessments or responses without instructor oversight.

  • Favor tools that allow scaffolding: guiding students through steps instead of delivering final answers.

6. Student Engagement & Autonomy

  • Tools should encourage student agency (e.g. students shaping prompts, reflecting on outputs) rather than passive consumption.

  • Avoid “black box” features that give full answers without showing process.

  • Allow customization and adaptation for diverse learners (e.g. varying levels of scaffolding, support for different modalities).

7. Assessment & Feedback

  • The system should provide timely, meaningful, actionable feedback (not just generic statements).

  • Faculty must retain oversight of grading; tool must expose rubric logic and allow manual override.

  • For any automated scoring, the tool should offer transparency in scoring decisions.

8. Academic Integrity & Authorship

  • The tool should include safeguards against misuse (e.g. plagiarism checks, detection of AI-generated text, originality checks).

  • It must clarify authorship (distinguish student-generated vs AI outputs).

  • Be wary of tools that automatically rewrite or ghost-write without student involvement.

9. Usability & Accessibility

  • The interface must be intuitive and low-friction for faculty and students.

  • Demand compliance with WCAG / Section 508 / VPAT standards.

  • Ensure support for assistive technologies (screen readers, keyboard navigation, alt text, captioning).

  • Provide training, help documentation, and responsive vendor support.

10. Integration with Existing Systems

  • Prefer tools that seamlessly integrate with LMS, SIS, authentication (SSO), and campus IT infrastructure.

  • Avoid tools that require excessive manual data export/import.

  • Check compatibility, APIs, and long-term maintainability.

11. Scalability & Sustainability

  • The tool should scale from pilot to full-course or multi-department use without performance degradation.

  • Confirm vendor capacity: support, reliability, uptime, maintenance.

  • Evaluate total cost over time, including licensing, onboarding, training, infrastructure, and vendor escalation.

12. Cost, Licensing & Vendor Terms

  • Demand transparent pricing and clarity around usage limits (e.g. per-user, per-class).

  • Ensure licensing terms allow institutional use, future expansion, and data portability.

  • Require clauses for discontinuation, data export, and tool phase-out without lock-in.

  • Confirm that tool usage does not require extra payment at the student level or create inequities.

Strategies for Thoughtful AI Use in Teaching

1. Focus on Process Over Product

  • Encourage students to show how they arrived at their ideas, not just the final answer.

  • Use drafts, reflections, and oral defenses to assess reasoning and learning, not just output.

2. Design Authentic Assessments

  • Explore Assessment Ideas for Leveraging AI with 7+ Practical AI Enhancements for Assessments – docx.

  • Connect assignments to real-world, local, or community contexts.

  • Prioritize projects that require human judgment, personal insight, or original synthesis—things AI can’t easily replicate.

3. Teach AI Literacy

  • Help students understand how AI works, its limits, and its biases.

  • Discuss hallucinations, accuracy checking, source verification, and responsible citation.

  • Model critical evaluation of AI-generated content in class.

4. Be Transparent and Ethical

  • Disclose when AI tools are used in your teaching materials.

  • Discuss academic integrity expectations openly.

  • Require students to note when and how they used AI in their work.

5. Provide Alternatives

  • Offer non-AI options for students who lack access, have accessibility needs, or prefer to work without AI.

  • Ensure that AI use never becomes a barrier to participation or equity.

6. Keep the Human at the Center

  • Use AI to enhance—not replace—human connection in teaching and learning.

  • Prioritize empathy, mentorship, and creativity.

  • Encourage students to view AI as a partner for exploration, not a shortcut to completion.

7. Promote Critical Reflection

  • Ask students to reflect on their experience using AI—what worked, what didn’t, and what they learned about thinking, ethics, or creativity.

  • Use these reflections to build digital and ethical literacy.

8. Start Small, Iterate Thoughtfully

  • Begin with low-stakes activities (e.g., brainstorming, outlines).
  • Gather feedback from students, adjust course policies, and share what works with colleagues.

9. Consider Thoughtful AI Use While Promoting Integrity

Review the materials provided in the Merced College AI Course (Direct link to course for self-enrollment). You can also explore the Cal State Recommendations for Teaching & Learning with AI. You can also explore a number of strategies from Arizona State University on upholding Integrity Through Design: Teaching in the Age of AI

10. Consult with Your Instructional Designer Faculty

Your Innovation team brings extensive expertise and a proven track record to this field. We are eager to explore new ideas and partner with faculty on innovative projects. To schedule a consultation, please contact [email protected] or [email protected].

Carefully Consider and Indicate AI Use Statements in the Syllabus and Specific Assignments

Choose your stance → paste it in both the syllabus and the Canvas Start‑Here page. Language below is adapted from the ASCCC Academic Integrity in the AI Age resource (asccc.org).

Select one of the syllabus sample statements found under “Syllabus Guidelines” within the OSH, then add details to the statement such as how and when AI may be used.

  • Use of Generative AI is Generally Permitted Within Guidelines
    Artificial Intelligence (AI) tools, including ChatGPT, are being used in workplaces all over the world to save time and improve outcomes by generating text, images, computer code, audio, or other media. Use of AI tools is generally welcome and even encouraged in this class, with attribution aligned with disciplinary guidelines. AI tools might be employed to brainstorm, draft, edit, revise, etc., and I will provide examples of how to properly cite AI use. Any submitted course assignment not explicitly identified as having used generative AI will be assumed to be your original work. Using AI tools to generate content without proper attribution will be considered a violation of the Merced College Academic Honesty Policy (Administrative Procedure 5540), and students may be sanctioned for confirmed, non-allowable use. If you have questions about what is permitted, contact the instructor to discuss before submitting work.
  • Use of Generative AI Permitted Under Some Circumstances or With Explicit Permission
    Some assignments in this course may include or allow use of Artificial Intelligence (AI), including ChatGPT or related tools for the creation of text, images, computer code, audio, or other media. The instructor will inform you when, where and how you may use these tools, and provide guidance for attribution. Use of generative AI tools in any other context in this course will be considered a violation of the Merced College Academic Honesty Policy (Administrative Procedure 5540) and students may be sanctioned for confirmed, non-allowable use. If you have questions about what is permitted, contact the instructor to discuss before submitting work.
  • No Generative AI Use Permitted
    In this course, all assignments must be completed by the student. Artificial Intelligence (AI) tools, including ChatGPT and other related tools for creating text, images, computer code, audio, or other media, are not permitted for any work in this class. Use of these generative AI tools will be considered a violation of the Merced College Academic Honesty Policy (Administrative Procedure 5540), and students may be sanctioned for confirmed, non-allowable use in this course.


Adding Assignment-Level Guidance on AI Use

In addition to your syllabus statement, you may provide brief, assignment-specific directions so students understand exactly what is permitted for that task. Examples include:

  • Explore & Acknowledge
    “Students may use generative AI tools (Gemini, ChatGPT, etc.) for brainstorming, outlining, or grammar support only if they include their AI prompt(s) and a short note explaining any edits in an appendix.”
  • Limited Use
    “AI tools may be used for [insert the specific portion of the assignment where AI is allowed, such as idea generation, coding assistance, or data visualization]. Any AI use outside of these allowed components violates academic honesty expectations for this assignment.”
  • No AI
    “Generative AI tools may not be used for any portion of this assignment.”

Revisit your policies whenever you release an assignment, so students never have to ask, “Is this AI‑okay?” You may choose to allow or prohibit AI use for entire assessments or portions of assessments. Reach out to one of our Instructional Designer faculty to learn more.

The AI Innovation Challenge invites Merced College faculty and staff to share creative, human-centered ideas for using artificial intelligence to improve teaching, learning, and college operations. Whether your idea is simple or ambitious, this is an opportunity to explore solutions that reflect our commitment to student success and continuous improvement.

Project ideas may focus on:

  • Solving a problem or removing barriers

  • Improving a process or workflow

  • Enhancing the student experience

Selected participants receive access to advanced AI tools and support from the Innovation Team, Instructional Design, and Educational Technology. Projects may be recognized through campus showcases, college communications, and additional awards or mini-grants.

👉 Submit your AI Innovation Challenge proposal

The AI Champions Network is a cross-functional community of 32 faculty and staff working together to explore practical, human-centered uses of artificial intelligence at Merced College. Champions are not expected to be experts. Instead, they serve as connectors and collaborators, helping identify meaningful use cases, share lessons learned, and support responsible AI adoption across teaching, learning, and operations.

The network meets monthly in a supportive cohort format focused on building AI confidence, exploring real-world examples, and moving from awareness to application. Sessions are recorded and shared for continued learning.

👉 [View the First AI Champions Network Meeting] 2/4/26 – Email [email protected] for password – available for all MCCD faculty, staff, and administrators.