Probability

Random matrix theory of high-dimensional optimization - Lecture 4

Speaker: 
Elliot Paquette
Date: 
Fri, Jul 5, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that this high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness remains). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension d becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.

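The three types of randomness named in the abstract can be seen in a toy computation. Below is a minimal sketch, not the course's model, of stochastic gradient descent on a random least-squares problem: Gaussian data, a random starting point, and one randomly sampled data point per step. All sizes and parameters are illustrative choices.

```python
import numpy as np

# Toy illustration of the abstract's three randomness sources:
# random data, random initialization, stochastic gradients.
# Objective: f(x) = ||A x - b||^2 / (2 n) in dimension d.
rng = np.random.default_rng(1)
n, d = 400, 200
A = rng.normal(size=(n, d)) / np.sqrt(d)   # random data matrix
x_star = rng.normal(size=d)                # planted minimizer
b = A @ x_star                             # consistent targets
x = rng.normal(size=d)                     # random initialization
lr = 0.5                                   # constant step size
for step in range(5000):
    i = rng.integers(n)                    # sample one data point per step
    x -= lr * (A[i] @ x - b[i]) * A[i]     # stochastic gradient step
print(np.linalg.norm(A @ x - b) / np.sqrt(n))   # residual decays toward 0
```

Because the system is consistent, the constant-step iteration drives the residual to zero; high-dimensional limits of exactly such dynamics are the subject of the third item in the list above.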

Random matrix theory of high-dimensional optimization - Lecture 3

Speaker: 
Elliot Paquette
Date: 
Thu, Jul 4, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that this high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness remains). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension d becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.


Random walks and branching random walks: old and new perspectives - Lecture 3

Speaker: 
Perla Sousi
Date: 
Thu, Jul 4, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

This course will focus on two well-studied models of modern probability: simple symmetric random walks and branching random walks in ℤ^d. The focus will be on the study of their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and not so classical) facts about simple random walks. We will introduce the notion of capacity and give many alternative forms for it. Then we will relate it to the problem of covering a domain by a simple random walk. We will review Lawler’s work on non-intersection probabilities and focus on the critical dimension d=4. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions d≥5.
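
For orientation, one standard form of the capacity mentioned above (a sketch, for transient dimensions d ≥ 3, with S a simple random walk and A ⊂ ℤ^d finite) is the total escape probability

\[
\operatorname{cap}(A) \;=\; \sum_{x \in A} \mathbb{P}_x\bigl( S_n \notin A \ \text{for all } n \ge 1 \bigr),
\]

and the probability of hitting A from a distant point x decays like \( \operatorname{cap}(A)\,|x|^{2-d} \) up to dimension-dependent constants, which is how capacity enters hitting and covering problems.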

A branching random walk (or tree-indexed random walk) in ℤ^d is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton–Watson tree or a critical Galton–Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in ℤ^d, and the location of every vertex is given by summing all the increments along the geodesic from the root to that vertex. When d≥5, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it appears naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems analogously to the random walk case.

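The edge-labelled construction in the paragraph above is straightforward to simulate. Below is a minimal sketch, assuming a Geometric(1/2) offspring law (critical, mean one) chosen for concreteness, and truncating the tree at a fixed number of vertices rather than conditioning it to survive; all names and parameters are illustrative.

```python
import random

def branching_random_walk(d=5, max_vertices=10_000):
    """Tree-indexed random walk: each edge of a critical Galton-Watson tree
    carries an independent simple-random-walk increment in Z^d, and each
    vertex sits at the sum of the increments on its root-to-vertex geodesic."""
    # The 2d unit increments +/- e_j of the simple random walk on Z^d.
    steps = [tuple(s if i == j else 0 for i in range(d))
             for j in range(d) for s in (1, -1)]
    positions = [(0,) * d]           # root at the origin
    queue = [(0,) * d]               # vertices whose children are unsampled
    while queue and len(positions) < max_vertices:
        parent = queue.pop()
        k = 0                        # Geometric(1/2) offspring: mean 1, critical
        while random.random() < 0.5:
            k += 1
        for _ in range(k):
            child = tuple(p + e for p, e in zip(parent, random.choice(steps)))
            positions.append(child)
            queue.append(child)
    return positions

pos = branching_random_walk()
print(len(pos), "vertices; distinct sites visited:", len(set(pos)))
```

A critical tree dies out quickly with high probability, so most runs are tiny; re-running until the truncation cap is reached is a crude stand-in for the conditioned-to-survive tree the abstract describes.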

Random matrix theory of high-dimensional optimization - Lecture 2

Speaker: 
Elliot Paquette
Date: 
Wed, Jul 3, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that this high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness remains). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension d becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.


Random walks and branching random walks: old and new perspectives - Lecture 2

Speaker: 
Perla Sousi
Date: 
Wed, Jul 3, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

This course will focus on two well-studied models of modern probability: simple symmetric random walks and branching random walks in ℤ^d. The focus will be on the study of their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and not so classical) facts about simple random walks. We will introduce the notion of capacity and give many alternative forms for it. Then we will relate it to the problem of covering a domain by a simple random walk. We will review Lawler’s work on non-intersection probabilities and focus on the critical dimension d=4. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions d≥5.

A branching random walk (or tree-indexed random walk) in ℤ^d is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton–Watson tree or a critical Galton–Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in ℤ^d, and the location of every vertex is given by summing all the increments along the geodesic from the root to that vertex. When d≥5, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it appears naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems analogously to the random walk case.


Random matrix theory of high-dimensional optimization - Lecture 1

Speaker: 
Elliot Paquette
Date: 
Tue, Jul 2, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer x ∈ ℝ^d of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, this has proven to be a manageable problem, but why? One explanation is that this high dimensionality is simultaneously mollified by three essential types of randomness: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness remains). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension d becomes large. In this course, we will show:

how random matrices can be used to describe high-dimensional inference
nonconvex landscape properties
high-dimensional limits of stochastic gradient methods.


Random walks and branching random walks: old and new perspectives - Lecture 1

Speaker: 
Perla Sousi
Date: 
Tue, Jul 2, 2024
Location: 
CRM, Montreal
Conference: 
2024 CRM-PIMS Summer School in Probability
Abstract: 

This course will focus on two well-studied models of modern probability: simple symmetric random walks and branching random walks in ℤ^d. The focus will be on the study of their traces in the regime where the trace is a small subset of the ambient space.
We will start by reviewing some useful classical (and not so classical) facts about simple random walks. We will introduce the notion of capacity and give many alternative forms for it. Then we will relate it to the problem of covering a domain by a simple random walk. We will review Lawler’s work on non-intersection probabilities and focus on the critical dimension d=4. With these tools at hand, we will study the tails of the intersection of two infinite random walk ranges in dimensions d≥5.

A branching random walk (or tree-indexed random walk) in ℤ^d is a non-Markovian process whose time index is a random tree. The random tree is either a critical Galton–Watson tree or a critical Galton–Watson tree conditioned to survive. Each edge of the tree is assigned an independent simple random walk increment in ℤ^d, and the location of every vertex is given by summing all the increments along the geodesic from the root to that vertex. When d≥5, the branching random walk is transient, and we will mainly focus on this regime. We will introduce the notion of branching capacity and show how it appears naturally as a suitably rescaled limit of hitting probabilities of sets. We will then use it to study covering problems analogously to the random walk case.


Random discrete surfaces

Speaker: 
Thomas Budzinski
Date: 
Wed, Sep 23, 2020
Location: 
Zoom
PIMS, University of British Columbia
Conference: 
Emergent Research: The PIMS Postdoctoral Fellow Seminar
Abstract: 

A triangulation of a surface is a way to divide it into a finite number of triangles. Let us pick a random triangulation uniformly among all those with a fixed size and genus. What can be said about the behaviour of these random geometric objects when the size gets large? We will investigate three different regimes: the planar case, the regime where the genus is not constrained, and the one where the genus is proportional to the size. Based on joint works with Baptiste Louf, Nicolas Curien and Bram Petri.


Surjectivity of random integral matrices on integral vectors

Speaker: 
Melanie Matchett Wood
Date: 
Fri, Nov 8, 2019 to Sat, Nov 9, 2019
Location: 
PIMS, University of British Columbia
Conference: 
PIMS Distinguished Colloquium
Abstract: 

A random n×m matrix gives a random linear transformation from ℤ^m to ℤ^n (between vectors with integer coordinates). Asking for the probability that such a map is injective is a question about the non-vanishing of determinants. In this talk, we discuss the probability that such a map is surjective, which is a more subtle integral question. We show that when m=n+u, for u at least 1, as n goes to infinity, the surjectivity probability converges to a non-zero product of inverse values of the Riemann zeta function. This probability is universal: we prove that it does not depend on the distribution from which the independent entries of the matrix are drawn. The same probability also arises in the Cohen-Lenstra heuristics predicting the distribution of class groups of real quadratic fields. This talk is on joint work with Hoi Nguyen.
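
For concreteness, the "product of inverse values of the Riemann zeta function" in the limit takes the form

\[
\lim_{n \to \infty} \mathbb{P}\bigl(\text{the map } \mathbb{Z}^{n+u} \to \mathbb{Z}^{n} \text{ is surjective}\bigr) \;=\; \prod_{k=u+1}^{\infty} \zeta(k)^{-1},
\]

so for u = 1 the limit is \( \prod_{k \ge 2} \zeta(k)^{-1} \approx 0.4358 \). One can probe this empirically: a map ℤ^m → ℤ^n given by an n×m integer matrix is surjective exactly when the gcd of its n×n minors equals 1. Below is a minimal Monte Carlo sketch; the sizes and entry range are arbitrary choices, and a finite entry range at finite n only approximates the universal limit.

```python
import itertools, math, random

def det_int(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]))

def surjective(A, n, m):
    """A (n x m, integer entries) maps Z^m onto Z^n iff the gcd of all
    n x n minors equals 1 (last invariant factor of the Smith form is 1)."""
    g = 0
    for cols in itertools.combinations(range(m), n):
        g = math.gcd(g, abs(det_int([[A[i][j] for j in cols] for i in range(n)])))
        if g == 1:
            return True
    return False

n, u = 4, 1
m, B, trials = n + u, 1000, 2000        # entries uniform in {-B, ..., B}
hits = sum(surjective([[random.randint(-B, B) for _ in range(m)]
                       for _ in range(n)], n, m)
           for _ in range(trials))
print(hits / trials)                    # compare with ~0.4358 for u = 1
```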


Depth Functions in Multivariate & Other Data Settings: Concepts, Perspectives, Tools, & Applications

Speaker: 
Robert Serfling
Date: 
Thu, Sep 28, 2017
Location: 
PIMS, University of Manitoba
Conference: 
PIMS-UManitoba Distinguished Lecture
Abstract: 

Depth functions were developed to extend the univariate notions of median, quantiles, ranks, signs, and order statistics to the setting of multivariate data. Whereas a probability density function measures local probability weight, a depth function measures centrality. The contours of a multivariate depth function induce closely associated multivariate outlyingness, quantile, sign, and rank functions. Together, these functions comprise a powerful methodology for nonparametric multivariate data description, outlier detection, data analysis, and inference, including, for example, location and scatter estimation, tests of symmetry, and multivariate boxplots. However, due to the lack of a natural order in dimensions higher than one, notions such as median and quantile are not uniquely defined, which poses a challenging conceptual arena: how should one define the middle? The middle half? Interesting competing formulations of depth functions in the multivariate setting have evolved, and extensions have been developed to functional data in Hilbert space and, more recently, to multivariate functional data. A key question is how generally a notion of depth function can be productively defined. This talk provides a perspective on depth, outlyingness, quantile, and rank functions, through an overview coherently treating concepts, roles, key properties, interrelations, data settings, applications, open issues, and new potentials.
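
To make "a depth function measures centrality" concrete, here is a minimal sketch of one classical choice, Mahalanobis depth; it is just one of the competing formulations the talk surveys (halfspace and simplicial depth are others), and the data here are illustrative.

```python
import numpy as np

def mahalanobis_depth(x, data):
    """Depth of point x relative to the cloud `data` (rows = observations):
    D(x) = 1 / (1 + (x - mean)^T Cov^{-1} (x - mean)).
    Large near the center of the cloud, small for outlying points."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = np.asarray(x) - mu
    return 1.0 / (1.0 + diff @ cov_inv @ diff)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 2))
print(mahalanobis_depth([0.0, 0.0], cloud))   # deep: near the center
print(mahalanobis_depth([4.0, 4.0], cloud))   # shallow: outlying
```

The induced outlyingness of x is then a monotone transform such as 1/D(x) - 1, and the depth contours {x : D(x) ≥ c} play the role of multivariate quantile regions.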

