Machine learning, which can learn to predict from labeled data, holds great promise for medical applications. Yet experience shows that predictors that looked promising most often fail to bring the expected medical benefits. One reason is that they are evaluated detached from actual usage and medical outcomes. At the same time, test-running predictive models on actual medical decisions can be costly and dangerous. How do we bridge this gap? By improving machine-learning model evaluation. First, the metrics used to measure prediction error must capture as well as possible the cost-benefit tradeoffs of the intended usage. Second, the evaluation procedure must truly put models to the test: on a representative data sample, and accounting for uncertainty in the evaluation itself.
Gaël Varoquaux is a research director working on data science at Inria (the French national research institute for computer science), where he leads the Soda team on computational and statistical methods to understand health and society with data. Varoquaux is an expert in machine learning, with an eye on applications in health and social science. He develops tools to make machine learning easier and suited to real-life, messy data. He co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python. He currently develops data-intensive approaches for epidemiology and public health, and worked for 10 years on machine learning for brain function and mental health. Varoquaux holds a PhD in quantum physics supervised by Alain Aspect and is a graduate of École Normale Supérieure, Paris.
Please help us plan ahead by registering for the event at our
| What?  | Medical AI: Addressing the Validation Gap             |
| Who?   | Gaël Varoquaux, INRIA France                          |
| When?  | June 28th 2023 @ 11am                                 |
| Where? | Bioquant (lecture hall SR41), Im Neuenheimer Feld 267 |