I study Artificial Intelligence at Northwestern University and I'm a member of the Qualitative Reasoning Group. I studied social and cognitive psychology for my undergraduate degree at the University of Chicago. I switched to AI from Psychology because I wanted to spend more time building minds, rather than figuring out how ours work (although I still think that's super interesting). I'm broadly interested in any AI research that produces something that looks like reasoning, cognition, or experience (my own work uses symbolic reasoning, analogy, and qualitative representations). I am also interested in how AI can be integrated into our everyday lives, and how people will interact with and respond to those systems. My main interest in human psychology is in moral reasoning, and I want to work on AI systems that not only reason morally, but that humans recognize as moral. I also have a secret passion for working on and interacting with virtual characters.
My cat is named Herschel, and he's the best.
My loved ones would agree that my greatest passion outside of my work and my family is eating high-quality cheese. I also enjoy art - particularly movies, music, books, and video games - that makes me think and reflect in new and interesting ways.
My research is focused on getting computers to think like humans (if not yet at the same level as humans), and in ways that humans can recognize and understand. I am particularly interested in aspects of social reasoning and behavior, especially moral reasoning. Recently I have also been working on commonsense reasoning techniques. In general, I am interested in:
- Intelligent systems
- Computational models of human cognition
- Social computers
- Knowledge representation and reasoning, especially qualitative reasoning and analogical reasoning
- How to make AIs that everyday people will trust
My thesis will focus on using analogical reasoning to learn about, understand, and make decisions within complex social situations, specifically situations where agents must consider the moral ramifications of their actions.
Blass, J.A., & Forbus, K.D. (2015). Moral Decision-Making by Analogy: Generalizations vs. Exemplars. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Ma, D.S., Blass, J.A., Tipping, M., Correll, J., & Wittenbrink, B. (2009). Racial Bias in Shot Lethality: Moving Beyond Reaction Time and Accuracy. American Psychological Association, Toronto, Canada.
Blass, J.A., & Horswill, I.D. (2015). Implementing Injunctive Social Norms Using Defeasible Reasoning. Workshop on Intelligent Narrative Technologies and Social Believability in Games at the 11th Conference on Artificial Intelligence and Interactive Digital Entertainment.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Doctoral Consortium, Twenty-Third International Conference on Case-Based Reasoning, Frankfurt am Main, Germany.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Students of Cognitive Science Workshop at the Third Conference on Advances in Cognitive Systems, Atlanta, Georgia, USA.
Blass, J.A. (2016). Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning. Doctoral Consortium, Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA.
joeblass <at> u <dot> northwestern <dot> edu
I am also on LinkedIn, which is pretty much the only online social network I participate in anymore.