Course on Flexible Human-AI Interaction at the Interdisciplinary College 2022

A Mini Online Lecture Series on Human-AI Interaction

Header slide for the flexible human-ai interaction course at the Interdisciplinary College 2022.

I was very happy to be asked to deliver a mini online lecture series on “flexible” human-AI interaction (flexibility being the focus topic of IK 2022). The course highlighted selected topics from the CSC8611 module “Human-Artificial Intelligence (AI) Interaction & Futures” as delivered at Newcastle University in 2021, together with Yu Guan as co-lecturer, as part of the new MSc in HCI programme. That module in turn built on many underlying resources and related works, with the HAII@CMU module by Haiyi Zhu and Steven Wu (formerly by Chinmay Kulkarni and Beth Kery) as a key component.

Artificial Intelligence (AI) and machine learning (ML) are quickly changing the way we live and work. AI and ML enable the design and development of automated processes that mimic cognition and provide deep and complex integration of information, and, in robot and agent systems, even the capability of taking actions. The ultimate goal of AI is to support human decision making and action with informed intelligent services. This course concerns the critical and responsible design, development, and evaluation of AI technologies, with a focus on human-AI interaction. It aims to provide learners with a cross-disciplinary background, laying the roots for an advanced set of skills around utilising and critically evaluating the development and impact of human-AI concepts and technologies within their ecosystems.

The lecture series also included brief practical experiences as a foundation for discussion.

Original session overview:

Session 01: Ubiquitous AI & the Need for Human-AI Interaction

Session 02: Automation, Control & HAIx

Session 03: Bias, Fairness, x-ability

Session 04: Mental Models, Embodied Interaction & FutureHAIx

Selected readings:
  • Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91.
  • Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A historical perspective of explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391.
  • Dauvergne, P. (2020). AI in the Wild: Sustainability in the Age of Artificial Intelligence. The MIT Press.
  • Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer Nature.
  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  • Hassenzahl, M., Borchers, J., Boll, S., Rosenthal-von der Pütten, A., & Wulf, V. (2021). Otherware: How to best interact with autonomous systems. Interactions, 28(1), 54–57.
  • Le, H. V., Mayer, S., & Henze, N. (2021). Deep learning for human-computer interaction. Interactions, 28(1), 78–82.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (n.d.). Machine Bias. ProPublica. Retrieved 7 November 2021.
  • O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Reprint edition). Crown.
  • Pfau, J., Smeddinck, J. D., & Malaka, R. (2020). The Case for Usable AI: What Industry Professionals Make of Academic AI in Video Games. In Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play (pp. 330–334). Association for Computing Machinery.
  • Sheridan, T. B. (2001). Rumination on automation, 1998. Annual Reviews in Control, 25, 89–97.
  • Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. Interface agents. Interactions, 4(6), 42–61.
  • Swartz, L. (2003). Why People Hate the Paperclip: Labels, Appearance, Behavior, and Social Responses to User Interface Agents.
  • Thieme, A., Cutrell, E., Morrison, C., Taylor, A., & Sellen, A. (2020). Interpretability as a dynamic of human-AI interaction. Interactions, 27(5), 40–45.
  • Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083.
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.
  • Zimmerman, J., Oh, C., Yildirim, N., Kass, A., Tung, T., & Forlizzi, J. (2020). UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions, 28(1), 72–77.

About the Interdisciplinary College (IK) 2022

The virtual IK 2022 theme was Flexibility, viewed from the perspectives of the nervous system, the mind, communication, and AI & robotics. Flexibility can be interpreted as mental flexibility, physical flexibility (including dance improvisation), neuroplasticity, and adaptive artificial systems. 28 lecturers from backgrounds ranging from clinical psychology and robotics to science communication and Tibetan monastic debate filled this theme with life.