801 22nd Street NW

The GW Department of Statistics is offering two short courses as part of our 90th anniversary event, "The Past, Present and Future of Statistics in the Era of AI." 

Introduction to Reinforcement Learning
May 8, 2025 | 2 – 4 p.m.
Zhengling Qi, Assistant Professor of Decision Sciences, George Washington University

Description: Reinforcement learning (RL) is a powerful branch of machine learning in which agents learn optimal decision-making through interaction with their environment. In this two-hour intensive course, participants will explore core RL concepts, delve into widely used algorithms, and examine modern applications such as fine-tuning large language models. Suitable for both beginners and professionals, the course offers a concise yet comprehensive introduction to reinforcement learning and its practical impact in today's AI-driven world.
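
To make "learning through interaction" concrete, below is a minimal, self-contained sketch of tabular Q-learning, one of the classic algorithms in this area. The toy chain environment and the hyperparameters are illustrative assumptions for this page, not materials from the course.

# Illustrative sketch only: tabular Q-learning on a tiny 1-D chain.
# The environment and hyperparameters are hypothetical, not course material.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # step size, discount, exploration rate

def step(state, action):
    """Move along the chain; reaching the right end yields reward 1."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: explore with probability EPS, otherwise act greedily
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max over a' of Q(s', a')
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy moves right toward the rewarding end state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})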

About the Instructor: Zhengling Qi is an assistant professor at the School of Business, George Washington University. He received his PhD from the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research focuses on reinforcement learning and causal inference. He has published papers in prestigious statistical journals such as JASA and AOS, and in top machine learning venues such as NeurIPS, ICML and ICLR.

Statistical Foundations of Deep Learning
May 8, 2025 | 10 a.m. – Noon
Lizhen Lin, Professor of Statistics, University of Maryland

Description: Deep learning has achieved groundbreaking performance in various application domains, including image recognition, speech recognition, natural language processing, and healthcare. Alongside this practical success, there has been a growing effort to explore the theoretical foundations of deep learning models. This short course focuses on the statistical foundations underlying deep neural network (DNN) models. From a statistical perspective, deep learning can largely be viewed as a nonparametric function or distribution estimation problem in which the underlying function or distribution is parameterized by a DNN. In supervised settings, such as regression and classification tasks, deep neural networks, including feedforward DNNs, are used to model the regression function or classification map. For distribution estimation, deep generative models, such as those based on DNNs, have become popular. The course examines these theoretical underpinnings through the lens of statistical theory; in particular, a key question is why deep neural networks often outperform classical nonparametric models. By characterizing the statistical properties of DNN estimators, the course aims to explain why these models perform exceptionally well in practice.
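
As a concrete illustration of the statistical framing above (the notation below is a minimal sketch written for this page, not taken from the course), the supervised case can be phrased as nonparametric regression: one observes pairs (X_i, Y_i) with

Y_i = f_0(X_i) + \varepsilon_i, \qquad i = 1, \ldots, n,

and estimates the unknown regression function f_0 by empirical risk minimization over a class of networks,

\hat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}_{\mathrm{DNN}}} \frac{1}{n} \sum_{i=1}^{n} \big( Y_i - f(X_i) \big)^2,

where \mathcal{F}_{\mathrm{DNN}} is a class of feedforward networks of a given depth and width. Statistical theory then asks how fast the risk \mathbb{E}\big[ (\hat{f}_n(X) - f_0(X))^2 \big] decreases with the sample size n, and how that rate compares with classical nonparametric estimators.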

About the Instructor: Lizhen Lin is a professor of statistics in the Department of Mathematics at the University of Maryland. Her areas of expertise encompass Bayesian modeling and theory for high-dimensional and infinite-dimensional models, statistics on manifolds, statistical network analysis, and the statistical theory and foundations of deep neural network models.
