- Fairness: Models trained on data containing biases regarding protected attributes such as race or gender may learn these biases and automate discrimination through their predictions. What is the best way to measure such bias, and how can we construct models that mitigate it?
- Interpretability: It is crucial to know not just what an algorithm's prediction is, but why the algorithm came to this conclusion. How confident are we in the output? What were the most important variables?
- Robustness: As machine learning is applied to an increasingly broad range of tasks, we need models that perform adequately in the presence of distributional shift, outliers, and adversaries.
University of California, Berkeley
B.S. Computer Science