I study Artificial Intelligence and Law at Northwestern University and I am a member of the Qualitative Reasoning Group. I studied social and cognitive psychology for my undergraduate degree at the University of Chicago. I switched to AI from Psychology because I wanted to spend more time building minds, rather than figuring out how ours work (although I still think that's super interesting). I'm broadly interested in any AI research that produces something that looks like reasoning, cognition, or experience (my own work uses symbolic reasoning, analogy, and qualitative representations). I am also interested in how AI can be integrated into our everyday lives, and how people will interact with and respond to those systems. My main interest in human psychology is in moral reasoning, and I want to work on AI systems that not only reason morally, but that humans recognize as moral. I also love working on and interacting with virtual characters, although I haven't gotten a chance to do much research in that area recently.
In the past few years I have become concerned by the fact that while AI is increasingly making important decisions for and about humans, AI is regulated (if at all) largely by laws that were written for humans and corporations, not autonomous computer systems. I had the privilege of being accepted into Northwestern's joint JD/PhD program, and am studying law full-time through 2019. I plan to focus on how AI is currently regulated and how it can and will be regulated in the future, and I'm interested in using that knowledge to develop ethical, legally-bound AI systems. I believe that true moral reasoning may prove to be an AI-complete process, but that ethical reasoning undergirded by the codes and laws of society is achievable with current technology. I also want to help educate AI researchers about how the law actually works, and legal scholars about how AI actually works, since I think there are frequently misconceptions on both sides. I do not know what my scholarly balance of legal and AI research will be a few years down the line, but am looking forward to finding out.
Oof, all of that is so serious. So let me say that my loved ones would agree that my greatest passion outside of my work and my family is eating high-quality cheese, and that my cats are Herschel and Mika, each perfect in their own way. I also enjoy art - particularly movies, books, music, prestige television, and video games - that makes me think and reflect in new and interesting ways.
My research is focused on getting computers to think like humans and in ways that humans can recognize, understand, and accept. I am particularly interested in aspects of social reasoning and behavior, especially moral reasoning: how we can teach computers to reason morally, make sure they make decisions morally, and make them able to explain those decisions to us. I have also worked on commonsense reasoning techniques. I recently began my J.D. studies at Northwestern's Pritzker School of Law as a member of Northwestern's joint JD/PhD program.
In general, I am interested in:
- Systems that exhibit what we recognize as intelligence
- Computational models of human cognition
- AI Law: how to define and regulate AI responsibilities, permissions, obligations, and restrictions
- Social computers
- Knowledge representation and reasoning, especially qualitative reasoning and analogical reasoning, and
- How to make AIs that everyday people can teach and whose decisions they will trust
My thesis is changing based on my participation in Northwestern's JD/PhD program. Before I joined the program, my plan was to focus on using analogical reasoning to learn about, understand, and make decisions within complex social situations, specifically situations where agents must consider the moral ramifications of their actions. Although it's not set in stone, my plan now is to do something similar involving legal reasoning about case law: to use analogical generalization and reasoning to synthesize, understand, and apply legal principles encoded in series of legal precedents.
I was an organizer and co-chair of the Computational Analogy Workshop at ICCBR-16, and an organizer of the workshop the following year.
I was a winner of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technology. You can read the essay here.
Blass, J.A., Forbus, K.D. (2017). Analogical Chaining with Natural Language Instruction for Commonsense Reasoning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Blass, J.A., Forbus, K.D. (2016). Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report. Proceedings of the Thirty-Eighth Annual Meeting of the Cognitive Science Society.
Blass, J.A., Forbus, K.D. (2015). Moral Decision-Making by Analogy: Generalizations vs. Exemplars. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Ma, D.S., Blass, J.A., Tipping, M., Correll, J., & Wittenbrink, B. (2009). Racial Bias in Shot Lethality: Moving Beyond Reaction Time and Accuracy. American Psychological Association, Toronto, Canada.
Blass, J.A., Rabkina, I., and Forbus, K. D. (2017). Towards a Domain-independent Method for Evaluating and Scoring Analogical Inferences. Computational Analogy Workshop at the 25th International Conference on Case-Based Reasoning.
Blass, J.A., and Forbus, K. D. (2016). Natural Language Instruction for Analogical Reasoning: An Initial Report. Computational Analogy Workshop at the 24th International Conference on Case-Based Reasoning.
Blass, J.A., and Horswill, I.D. (2015). Implementing Injunctive Social Norms Using Defeasible Reasoning. Workshop on Intelligent Narrative Technologies and Social Believability in Games at the 11th Conference on Artificial Intelligence and Interactive Digital Entertainment.
Blass, J.A. (2018). Legal, Ethical, Customizable Artificial Intelligence. Student Program, Artificial Intelligence, Ethics, and Society Conference, New Orleans, Louisiana, USA.
Spelke, E. and Blass, J.A. (2017). Intelligent Machines and Human Minds. Behavioral and Brain Sciences, 40, E277.
Blass, J.A. and Fitzgerald, T. (2017). The Computational Analogy Workshop at ICCBR-16. AI Magazine, Winter (2017): 91.
Blass, J.A. (2016). Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning. Doctoral Consortium, Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Doctoral Consortium, Twenty-Third International Conference on Case-Based Reasoning, Frankfurt am Main, Germany.
Blass, J.A. (2015). Interactively Learning Moral Norms By Analogy. Students of Cognitive Science Workshop at the Third Conference on Advances in Cognitive Systems, Atlanta, Georgia, USA.
joeblass <at> u <dot> northwestern <dot> edu
I am also just barely on LinkedIn, which is pretty much the only online social network I still participate in, but if you're trying to get in touch, use email.