CSDS 447: Responsible AI Engineering
Introduced this course, focusing on recent research in responsible AI and AI engineering.
This graduate-level course focused on the engineering of AI-enabled systems that satisfy key Responsible AI principles: fairness, robustness, safety, and explainability. Students examined real-world risks such as bias amplification, unsafe autonomy, and AI hallucinations, and learned rigorous methods for evaluating, testing, and mitigating these challenges. Through a mix of lectures, research paper discussions, presentations, and a substantial hands-on project, students gained both theoretical understanding and practical skills for building AI responsibly.
Learning Objectives
- Understand the unique challenges of engineering AI-enabled software systems.
- Apply software engineering methods to address Responsible AI properties.
- Evaluate and mitigate fairness, robustness, safety, and explainability issues in AI systems.
- Critically analyze recent research and emerging trends in Responsible AI.
- Communicate technical concepts effectively through scholarly presentations.
- Design and implement a project that applies responsible AI engineering methods to a real-world problem.
Topics Covered
- Introduction to Responsible AI and AI engineering fundamentals
- Emerging challenges: bias amplification, unsafe autonomy, hallucinations
- AI-specific software development life cycle (SDLC)
- Fairness: ethical principles, metrics, mitigation, and verification (see the metric sketch after this list)
- Robustness: resilient design and evaluation methods
- Safety: requirements, safety in production, safety-critical contexts
- Explainability: black-box vs. white-box analysis, building trust
- Trade-offs and synergies among Responsible AI properties
- Applied project work and scholarly presentations
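To give a flavor of the fairness metrics covered, below is a minimal illustrative sketch (not drawn from the course materials) that computes the demographic parity difference of a classifier's predictions across groups. The predictions, group labels, and function name are hypothetical and shown only as an example of the kind of measurement students worked with.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates across groups.

    y_pred: binary predictions (0/1); group: protected-attribute label per example.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    # Positive-prediction rate within each group
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions and group labels
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A value near zero indicates similar positive-prediction rates across groups; larger values flag a potential disparity worth investigating with the mitigation and verification methods listed above.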
Presentation
Each student selected a research paper from a curated list, conducted an in-depth review, and presented it in class. Presentations were 20 minutes long, followed by 5–10 minutes of Q&A and open discussion. The goal was to develop students’ skills in synthesizing research contributions, critically evaluating methodologies, and facilitating scholarly dialogue. Presentations were assessed on clarity, content quality, depth of analysis, and engagement with audience questions.
Project
Students conducted an individual or two-person project targeting one of the four Responsible AI properties. Projects involved surveying relevant literature, empirically evaluating existing techniques, identifying research gaps, and developing a proof-of-concept implementation. Deliverables included:
- A project proposal (1–2 pages, ACM format)
- Implementation code, data, and analysis results
- A comprehensive written report
- A final presentation