Lecturer: Jan Smeddinck
Fields: Human-Computer Interaction, Machine Learning, Artificial Intelligence, Interaction Design
Content
Machine learning (ML) and artificial intelligence (AI) services are having a growing impact on the way we live and work. The most prominent goal of contemporary AI is to support human decision making and action with intelligent services. Widely available ML and AI tools increasingly enable the design and development of automated processes that provide (potentially) deep integration of complex information, often with the capacity to respond autonomously, mimicking aspects of human cognition and behavior. However – even questionable marketing aside – the term “artificial intelligence” alone is prone to generating misunderstandings and inflated expectations, leading to bad user experiences or worse. In this context, the course will explore flexibility in human-AI interaction with a view to both the potential upsides and the pitfalls. The talk for this course will introduce the foundations of critical and responsible design, development, and evaluation of AI technologies with a focus on human-AI interaction. It aims to provide participants with an intuition for utilizing – and critically evaluating the impact of – human-AI interaction concepts and technologies. The workshop elements will scaffold further critical discussion around hands-on ML/AI use cases.
Literature
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
- Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A historical perspective of explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391. https://doi.org/10.1002/widm.1391
- Dauvergne, P. (2020). AI in the Wild: Sustainability in the Age of Artificial Intelligence. The MIT Press.
- Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer Nature.
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
- Hassenzahl, M., Borchers, J., Boll, S., Rosenthal-von der Pütten, A., & Wulf, V. (2021). Otherware: How to best interact with autonomous systems. Interactions, 28(1), 54–57. https://doi.org/10.1145/3436942
- Le, H. V., Mayer, S., & Henze, N. (2021). Deep learning for human-computer interaction. Interactions, 28(1), 78–82. https://doi.org/10.1145/3436958
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. Retrieved 7 November 2021, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Reprint edition). Crown.
- Pfau, J., Smeddinck, J. D., & Malaka, R. (2020). The Case for Usable AI: What Industry Professionals Make of Academic AI in Video Games. In Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play (pp. 330–334). Association for Computing Machinery. https://doi.org/10.1145/3383668.3419905
- Sheridan, T. B. (2001). Rumination on automation, 1998. Annual Reviews in Control, 25, 89–97. https://doi.org/10.1016/S1367-5788(01)00009-8
- Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. interface agents. Interactions, 4(6), 42–61. https://doi.org/10.1145/267505.267514
- Swartz, L. (2003). Why People Hate the Paperclip: Labels, Appearance, Behavior, and Social Responses to User Interface Agents. https://doi.org/10.13140/RG.2.1.2508.1047
- Thieme, A., Cutrell, E., Morrison, C., Taylor, A., & Sellen, A. (2020). Interpretability as a dynamic of human-AI interaction. Interactions, 27(5), 40–45. https://doi.org/10.1145/3411286
- Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083. https://doi.org/10.1098/rsta.2018.0083
- Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3313831.3376301
- Zimmerman, J., Oh, C., Yildirim, N., Kass, A., Tung, T., & Forlizzi, J. (2020). UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions, 28(1), 72–77. https://doi.org/10.1145/3436954
Lecturer
Jan Smeddinck is currently a Principal Investigator at – and the Co-Director of – the Ludwig Boltzmann Institute for Digital Health and Prevention (LBI-DHP) in Salzburg, Austria. At the LBI-DHP, he leads research programme lines on digital technologies and data analytics. Prior to this appointment, he was a Lecturer (Assistant Professor) in Digital Health at Open Lab and the School of Computing at Newcastle University in the UK. He also spent one year as a visiting postdoctoral research scholar at the International Computer Science Institute (ICSI) in Berkeley and retains an association with his PhD alma mater, the TZI Digital Media Lab at the University of Bremen in Germany. Building on his background in interaction design, serious games, web technologies, human computation, machine learning, and visual effects, he has found a home in the field of human-computer interaction (HCI), with a focus on digital health.
Affiliation: Ludwig Boltzmann Institute for Digital Health and Prevention
Homepage: https://smeddinck.com/