Leilani H. Gilpin

Assistant Professor

UC Santa Cruz

I am an Assistant Professor in Computer Science and Engineering and an affiliate of the Science & Justice Research Center at UC Santa Cruz. I am part of the AI group @ UCSC and I lead the AI Explainability and Accountability (AIEA) Lab.

Previously, I was a research scientist at Sony AI working on explainability in AI agents. I received my PhD in Electrical Engineering and Computer Science from MIT, where I worked in CSAIL and continue as a collaborating researcher. During my PhD, I developed “Anomaly Detection through Explanations” (ADE), a self-explaining, full-system monitoring architecture that detects and explains inconsistencies in autonomous vehicles. This work enables machines and other complex mechanisms to interpret their actions and learn from their mistakes.

My research focuses on theories and methodologies for monitoring, designing, and augmenting complex machines that can explain themselves for diagnosis, accountability, and liability. My long-term research vision is self-explaining, intelligent machines by design.

Interests

  • Explainable AI (XAI)
  • Anomaly Detection
  • Commonsense Reasoning
  • Anticipatory Thinking for Autonomy
  • Semantic Representations of Language
  • Story-enabled Intelligence
  • AI & Ethics

Education

  • PhD in Electrical Engineering and Computer Science, 2020

    Massachusetts Institute of Technology

  • M.S. in Computational and Mathematical Engineering, 2013

    Stanford University

  • BSc in Computer Science, BSc in Mathematics, Music minor, 2011

    UC San Diego

News

  • October 2023: Our paper “Towards a fuller understanding of neurons with Clustered Compositional Explanations” was accepted as a poster at NeurIPS!
  • September 2023: Our workshop on eXplainable AI approaches for deep reinforcement learning (XAI4DRL) was accepted to AAAI! (Co-organized with Roberto Capobianco, Oliver Chang, Biagio La Rosa, Michela Proietti, and Alessio Ragno.)
  • August 2023: I’ll be speaking at the XAI in Action workshop at NeurIPS.
  • July 2023: Our special issue on Anticipatory Thinking (with Adam Amos-Binks and Dustin Dannenhauer) in AI Magazine is out! Learn more in our introductory article.
  • June 2023: I gave the Slugs and Steins Alumni Lecture.
  • May 2023: We were awarded a California Education Learning Lab Faculty Development grant on “Building Data Science Communities for Improving Student Success.”
  • April 2023: I organized the PhD open house for UCSC CSE.
  • March 2023: I gave a Data Science Matters Seminar at Brown University, and a Machine Learning Fairness Webinar at Illinois Tech.
  • February 2023: Our DoT national center on cybersecurity was awarded, with Clemson University as the lead. Press Release.
  • January 2023: I participated in the Northwestern CASMI workshop on “Toward a Safety Science of AI.”

Publications

The Anticipatory Paradigm

Anticipatory thinking is necessary for managing risk in the safety- and mission-critical domains where AI systems are being deployed. …

Accountability layers: explaining complex system failures by parts

With the rise of AI used for critical decision-making, many important predictions are made by complex and opaque AI algorithms. The aim …

DANGER: A Framework of Danger-Aware Novel Dataset Generator Extension for Robustness Test of Machine Learning

Benchmark datasets for autonomous driving, such as KITTI, Argoverse, or Waymo, are realistic, but they are designed to be too …

Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning

In this paper, we outline a new method for evaluating the human impact of machine-learning (ML) applications. In partnership with …

Outracing champion Gran Turismo drivers with deep reinforcement learning

Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with …

Recent & Upcoming Talks

Featured talks are available as videos.

Knowledge-based commonsense reasoning and explainability

US2TS KG and XAI tutorial with Filip Ilievski

Accountability Layers

Joint School of Computer Science Seminar with Razvan Marinescu

Accountability Layers

CMIC/WEISS + AI Centre Joint Seminar with Razvan Marinescu

Teaching

Lead Instructor

  • UCSC
    • CSE 140: Artificial Intelligence (Winter 2022, Spring 2022, Spring 2023)
    • CSE 240: Artificial Intelligence (Winter 2023)
    • CSE 246: Responsible Data Science (Fall 2022)
  • MIT - Artificial Intelligence and Global Risks (IAP 2018)
  • Stanford - SMASH Institute: Calculus (Summer 2015)

Lectures

Teaching Assistant

  • MIT - 6.905/6.945: Large-scale Symbolic Systems
  • Stanford University - CS 348A: Geometric Modeling (PhD Level Course)
  • UC San Diego - COGS 5A (Beginning Java), CSE 8A/8B (Beginning Java), CSE 5A (Beginning C), CSE 21 (Discrete Mathematics), CSE 100 (Advanced Data Structures), CSE 101 (Algorithms)

Projects

AI and ethics

The AI and ethics reading group is a student-led, campus-wide initiative.

Explanatory Games

Using internal symbolic, explanatory representations to robustly monitor agents.

Monitoring Decision Systems

An adaptable framework to supplement decision making systems with commonsense knowledge and reasonableness rules.

The Car Can Explain!

The methodologies and underlying technologies that allow self-driving cars and other AI-driven systems to explain their behaviors and failures.

Miscellaneous

Academic Interests as a Bookshelf

  • Sylvain Bromberger - On What We Know We Don’t Know
  • Yuval Noah Harari - Sapiens
  • Marvin Minsky - The Emotion Machine
  • Roger Schank - Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures
  • Patrick C. Suppes - Introduction to Logic

Note: This is a working list. It is inspired by my colleague. Let’s pass it along.

Other Happenings

  • My father, Brian M. Gilpin, is a retired manager and has a new book about white privilege in Hawaii. My mother is a retired recreation therapist who worked for over 30 years at Sonoma State Hospital, and my brother is an aspiring writer.
  • In fall 2018, I learned How to Make Almost Anything.
  • When I’m not working, I enjoy rowing, swimming, and hiking. I’m also a former water polo player.
  • Sometimes, I manage to take photos.
  • I am captivated by personality traits and analysis. I did a project on detecting personality traits using speech signals. I consistently score as an INTJ, but I fall near the middle on (T)hinking versus (F)eeling.
  • Currently reading: Deep Work.

Contact

lgilpin @ ucsc.edu