Neel Guha
I am a fifth-year JD-PhD student in Computer Science at Stanford University (advised by Chris Ré). I am part of the Hazy Research Lab, the Stanford Center for Research on Foundation Models, and the RegLab. I graduated with an MS in Machine Learning from Carnegie Mellon University (‘19) and a BS (with Honors) in Computer Science from Stanford University (‘18). I am grateful to be supported by the Stanford Interdisciplinary Graduate Fellowship (SIGF) and the HAI Graduate Fellowship.
My computer science research explores (1) legal applications of large language models, and (2) methods for enabling machine learning systems to better leverage structure (e.g., type information, knowledge base triples, embeddings). My recent work includes leading the creation of LegalBench, a large-scale open effort to develop benchmark tasks for evaluating legal reasoning in LLMs.
My legal research agenda has two focuses. First, I am interested in using machine learning to advance empirical legal research. Traditionally, the scope of questions empirical legal scholars have been able to study has been limited by the types of datasets that can feasibly be constructed by hand. Machine learning offers an exciting path towards automatically, cheaply, and robustly constructing large-scale legal datasets, thus allowing us to explore substantively broader questions (e.g., how many private rights of action exist across all state codes?).
Second, I am interested in AI governance. As AI proliferates across sectors (e.g., healthcare, finance, energy), the types of AI systems deployed will be shaped by standards fashioned by both regulators and courts. My work attempts to describe how these stakeholders can approach governance choices in a technically and legally principled way. For instance, I’ve written about how courts might assess liability for medical AI, and about the technical and institutional tradeoffs raised by different regulatory mechanisms.
Recent News
- May 2024: Excited to contribute a chapter on benchmarking language models for legal applications to The Oxford Handbook on the Foundations and Regulation of Generative AI (OUP, 2024).
- May 2024: Two papers accepted to ICML 2024: (1) Prospectors, and (2) Long-Context Retrievers.
- January 2024: “Understanding Liability Risk from Using Health Care Artificial Intelligence Tools” out in The New England Journal of Medicine (with Michelle Mello).
- January 2024: “Private Enforcement in the States” out in University of Pennsylvania Law Review (with Diego Zambrano, Austin Peters, and Jeffrey Xia).