Learning
This project deepened my knowledge of AI ethics both conceptually and practically. I relearned machine learning fundamentals in Python while framing them through the lens of bias and fairness. I learned how to integrate coding tasks with reflective discussions, striking a balance between technical skill and critical thinking. Teaching remotely at Arcada also pushed me to find new ways to engage students virtually on a topic that can feel abstract.
Impact
The project gave students a practical and conceptual grounding in AI ethics. By coding their own models and simultaneously reflecting on ethical issues, they learned to see bias as both a statistical and a societal challenge. This made them better prepared to build and evaluate AI systems responsibly.
Challenge
The biggest challenge was defining scope: AI ethics is vast, and narrowing it to a set of examples that were rigorous but teachable was not easy. Another challenge was balancing code and reflection: if the material became too technical, the ethical questions got lost; if it became too reflective, students missed the technical grounding. Remote teaching also posed difficulties in keeping students engaged and ensuring they saw ethics as integral, not secondary, to AI practice.
Description
This teaching project explored the ethical challenges of machine learning, delivered in notebook format so students could code, analyze, and reflect in the same environment. The project predates the rise of LLMs, focusing instead on recommendation engines, search engines, and traditional machine learning models.
The first chapter asked why we love AI and what its limitations are, explaining typical pitfalls such as bias and lack of transparency. The next section examined biases in more depth, distinguishing between human and algorithmic biases and showing how statistical issues like poorly trained models create ethical failures. Real-world examples included problematic image classifiers and biased criminal recidivism prediction systems.
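In the course, discussions of statistical bias were paired with hands-on code. A minimal sketch of that pairing, using hypothetical recidivism-style predictions (the data and group labels below are invented for illustration, not drawn from the actual course materials): comparing false positive rates across groups makes a fairness problem visible as a plain statistic.

```python
# Minimal sketch: quantifying group-level bias via false positive rates.
# All predictions and group data below are hypothetical.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (0s) that the model flagged as positive (1)."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Hypothetical predictions for two demographic groups:
# 1 = predicted to reoffend, 0 = predicted not to; y_true is the observed outcome.
group_a_true = [0, 0, 0, 1, 1, 0, 0, 1]
group_a_pred = [1, 0, 1, 1, 1, 0, 1, 1]
group_b_true = [0, 0, 0, 1, 1, 0, 0, 1]
group_b_pred = [0, 0, 0, 1, 1, 0, 0, 1]

fpr_a = false_positive_rate(group_a_true, group_a_pred)
fpr_b = false_positive_rate(group_b_true, group_b_pred)
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
# Group A is wrongly flagged far more often than group B (0.60 vs 0.00),
# even though both groups have identical observed outcomes.
```

Exercises of this kind let students connect a societal harm to a number they computed themselves.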
The project also explored how recommendation engines work, showing students both Python-based implementations and the ethical issues behind them. I tied this to the alignment problem, using research and books to show how systems must be aligned with human norms.
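To illustrate the kind of Python-based implementation the students worked with, here is a minimal collaborative-filtering sketch (the ratings matrix and function names are invented for this example, not taken from the course notebooks): it scores a user's unrated items by the ratings of similar users, which also makes the ethical point concrete, since the system simply amplifies whatever patterns the existing users exhibit.

```python
# Minimal sketch of user-based collaborative filtering with cosine similarity.
# The ratings below are hypothetical; 0 means "not rated yet".
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 0, 1],
    "carol": [1, 1, 5, 4],
}

def recommend(user, ratings):
    """Return the index of the unrated item with the highest
    similarity-weighted score from the other users."""
    target = ratings[user]
    scores = [0.0] * len(target)
    for other, vec in ratings.items():
        if other == user:
            continue
        sim = cosine(target, vec)
        for i, r in enumerate(vec):
            if target[i] == 0:  # only score items the user has not rated
                scores[i] += sim * r
    return max(range(len(target)),
               key=lambda i: scores[i] if target[i] == 0 else float("-inf"))

print(recommend("bob", ratings))  # suggests item 1, rated highly by similar user alice
```

Because recommendations are driven entirely by similarity to existing users, the sketch also demonstrates how such systems can reinforce majority tastes and create feedback loops, which is where the alignment discussion in the course picked up.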
Topics
AI ethics, human vs. machine bias, accountability, machine learning, recommendation engines, alignment problem
Tools
Python, Observable
Year
2021 to 2023
Clients
Arcada University
