William Rudman – Towards Robust Interpretability Methods for Large Language Models (AI PHI Affinity Group)
December 1, 2023 @ 9:00 am - 10:00 am | Free
Thank you for coming!
The recording for this meeting is available below:
Towards Robust Interpretability Methods for Large Language Models
The “black box” nature of deep learning techniques has limited their application in clinical settings. Traditional interpretability methods, such as gradient-based saliency maps or model probing, are subject to “interpretability illusions,” in which networks spuriously appear to encode interpretable concepts. Our work seeks more robust techniques for understanding deep learning models by investigating the vector space of their representations. In particular, we find that a single basis dimension in fine-tuned large language models drives model decisions while preserving >99% of the full model’s original classification performance. Our ongoing research investigates how interpretability methods developed for large language models can be applied to multimodal clinical models: specifically, to understand how models that detect child abuse from free-text clinical narratives and patient demographic information reach their diagnostic decisions.
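The single-dimension finding can be illustrated with a minimal sketch (not the authors' code): fit a linear probe on hidden representations, then ablate every basis dimension except the one with the largest probe weight and reuse the same head. Here the hidden states, the signal-carrying dimension (17), and all sizes are synthetic assumptions chosen so the class signal is concentrated in one dimension, as the talk describes for fine-tuned LLMs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 64                      # examples, hidden size (hypothetical)
labels = rng.integers(0, 2, size=n)  # binary classification task
hidden = rng.normal(size=(n, d))
hidden[:, 17] += 3.0 * (2 * labels - 1)  # dimension 17 carries the class signal

# "Classifier head": least-squares linear probe on the full representation.
w, *_ = np.linalg.lstsq(hidden, 2.0 * labels - 1.0, rcond=None)
full_acc = np.mean((hidden @ w > 0) == labels)

# Ablate to a single basis dimension: keep only the dimension with the
# largest probe weight, zero out the rest, and reuse the same head.
top = int(np.argmax(np.abs(w)))
masked = np.zeros_like(hidden)
masked[:, top] = hidden[:, top]
single_acc = np.mean((masked @ w > 0) == labels)

print(f"full: {full_acc:.3f}, single dim {top}: {single_acc:.3f}")
```

In this toy setup the single retained dimension recovers nearly all of the full probe's accuracy, mirroring the >99% preservation the abstract reports for real fine-tuned models.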
Presented by William Rudman
I am a 4th year PhD student in the computer science department at Brown University and a member of the joint Health NLP Lab at Brown & Tuebingen University. My primary research direction focuses on understanding the structure of large language model representations and how models make downstream decisions. In addition to my interpretability research, I work on developing NLP methods for detecting child abuse from free-text clinical narratives.
Artificial Intelligence in Cancer Research – AI PHI Affinity Group
(First Friday of each month)
This group was formed to discuss current trends and applications of artificial intelligence in cancer research and clinical practice. It brings together AI researchers from a variety of fields (computer science, engineering, nutrition, epidemiology, radiology, etc.) with clinicians and advocates. Students, trainees, and faculty with any level of AI background, or none, are encouraged to attend. The goal is to foster collaborative interactions to solve problems in cancer that were thought unsolvable a decade ago, before the broad use of deep learning and AI in medicine.