# Scientific

## Conditional estimates for logarithms and logarithmic derivatives in the Selberg class

The Selberg class consists of functions that share key analytic properties with the Riemann zeta function; the zeta function itself is a member of the class. Estimates for the logarithms of Selberg class functions and for their logarithmic derivatives are connected, for example, to primes in arithmetic progressions.

In this talk, I will discuss effective and explicit estimates for logarithms and logarithmic derivatives of Selberg class functions when Re(s) ≥ 1/2+ where

## Shifted divergences for sampling, privacy, and beyond

Shifted divergences provide a principled way of making information theoretic divergences (e.g. KL) geometrically aware via optimal transport smoothing. In this talk, I will argue that shifted divergences provide a powerful approach towards unifying optimization, sampling, privacy, and beyond. For concreteness, I will demonstrate these connections via three recent highlights. (1) Characterizing the differential privacy of Noisy-SGD, the standard algorithm for private convex optimization. (2) Characterizing the mixing time of the Langevin Algorithm to its stationary distribution for log-concave sampling. (3) The fastest high-accuracy algorithm for sampling from log-concave distributions. A recurring theme is a certain notion of algorithmic stability, and the central technique for establishing this is shifted divergences. Based on joint work with Kunal Talwar, and with Sinho Chewi.
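As a toy illustration of the smoothing idea (my own sketch, not from the talk): for one-dimensional Gaussians with a common variance, translating one distribution toward the other by the shift amount z is a W∞-move of size z, so it yields an upper bound on the z-shifted KL divergence. The helper names below are hypothetical.

```python
import math


def kl_gauss(mu1: float, mu2: float, sigma: float) -> float:
    """KL(N(mu1, sigma^2) || N(mu2, sigma^2)) for equal variances."""
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2)


def shifted_kl_bound(mu1: float, mu2: float, sigma: float, z: float) -> float:
    """Upper bound on the z-shifted KL divergence, obtained by translating
    N(mu1, sigma^2) by at most z (an optimal-transport move of W_infinity
    cost z) before measuring KL against N(mu2, sigma^2)."""
    gap = max(0.0, abs(mu1 - mu2) - z)
    return gap ** 2 / (2 * sigma ** 2)


print(kl_gauss(0.0, 1.0, 1.0))                # 0.5
print(shifted_kl_bound(0.0, 1.0, 1.0, 0.4))   # ~0.18, smaller after smoothing
print(shifted_kl_bound(0.0, 1.0, 1.0, 2.0))   # 0.0, the shift absorbs the gap
```

The point of the toy example is only that the shift parameter trades transport distance for divergence, which is the mechanism behind the stability arguments in the talk.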

## Perceptual Learning in Olfaction: Flexibility, Stability, and Cortical Control

The ability to learn and remember is an essential property of the brain that is not limited to high-level processing. In fact, the perception of olfactory stimuli in rodents is strongly shaped by learning processes in the olfactory bulb, the very first brain area to process olfactory information. We developed computational models for the two structural plasticity mechanisms at work. The models capture key aspects of a host of experimental observations and show how the separate plasticity time scales allow perceptual learning to be fast and flexible, but nevertheless produce long-lasting memories. The modeling gives strong evidence for the formation of odor-specific neuronal subnetworks and indicates how their formation is likely under top-down control.

## Quantitative estimates for the size of an intersection of sparse automatic sets

In 1979, Erdős conjectured that for $k \ge 9$, $2^k$ is not the sum of distinct powers of 3. That is, the set of powers of two (which is 2-automatic) and the 3-automatic set consisting of numbers whose ternary expansions omit the digit 2 have finite intersection. In the theory of automata, a theorem of Cobham (1969) says that if $k$ and $\ell$ are two multiplicatively independent natural numbers, then a subset of the natural numbers that is both $k$- and $\ell$-automatic is eventually periodic. A multidimensional extension was later given by Semenov (1977). Motivated by Erdős' conjecture and in light of Cobham's theorem, we give a quantitative version of the Cobham-Semenov theorem for sparse automatic sets, showing that the intersection of a sparse $k$-automatic subset of $\mathbb{N}^d$ and a sparse $\ell$-automatic subset of $\mathbb{N}^d$ is finite. Moreover, we give effectively computable upper bounds on the size of the intersection in terms of data from the automata that accept these sets.
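The finite-intersection phenomenon behind Erdős' conjecture is easy to probe numerically. A small sketch (my own illustration, not from the abstract) searches for exponents $k$ with $2^k$ a sum of distinct powers of 3, i.e. with a ternary expansion avoiding the digit 2; the only hits in this range are the well-known $1$, $4$, and $256$.

```python
def ternary_digits(n: int) -> list[int]:
    """Base-3 digits of n, least significant first."""
    digits = []
    while n:
        digits.append(n % 3)
        n //= 3
    return digits or [0]


def omits_digit_two(n: int) -> bool:
    """True iff n is a sum of distinct powers of 3."""
    return 2 not in ternary_digits(n)


# Exponents k < 60 for which 2^k is a sum of distinct powers of 3.
hits = [k for k in range(60) if omits_digit_two(2 ** k)]
print(hits)  # [0, 2, 8]  (2^0 = 1, 2^2 = 4, 2^8 = 256 = 3^5 + 3^2 + 3 + 1)
```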

## An explicit estimate on the mean value of the error in the prime number theorem in intervals

The prime number theorem (PNT) gives us the density of primes amongst the natural numbers. We can extend this idea to consider whether a given interval contains the asymptotic number of primes predicted by the PNT. Currently, this has only been proven for sufficiently large intervals. We can also consider whether the PNT holds for sufficiently large intervals ‘on average’. This requires estimating the mean value of the error in the PNT in intervals. A new explicit estimate for this will be given based on the work of Selberg in 1943, along with two applications: one for primes in intervals, and one for Goldbach numbers in intervals.
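As a quick numerical illustration (my own, not part of the talk): the PNT heuristic predicts roughly $h/\log x$ primes in an interval $(x, x+h]$, and even modest intervals track the prediction reasonably well.

```python
import math


def primes_in(lo: int, hi: int) -> int:
    """Count primes p with lo < p <= hi by trial division (fine for small ranges)."""
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, math.isqrt(n) + 1))
    return sum(is_prime(n) for n in range(lo + 1, hi + 1))


x, h = 100, 100
actual = primes_in(x, x + h)
predicted = h / math.log(x)
print(actual, round(predicted, 1))  # 21 21.7
```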

## Understanding adversarial robustness via optimal transport

Deep learning-based approaches have been surprisingly successful in many fields of science. One of their first and most celebrated achievements is image classification, where deep learning-based algorithms now outperform humans. However, the machine learning community has observed that adding a well-designed small perturbation, called an ‘adversarial attack’, can seriously degrade the performance of a neural network: although humans still classify the perturbed image correctly, the machine fails completely. Since this issue is serious in practice, for example in security or self-driving cars, practitioners want to develop machines that are more robust against such attacks, which motivates the ‘adversarial training problem’. Until very recently, however, there was no theoretical understanding of it. In this talk, I will present recent progress on understanding the adversarial training problem. The key idea connecting the two areas originates from the (Wasserstein) barycenter problem, one of the well-known applications of optimal transport theory. I will introduce the ‘generalized barycenter problem’, an extension of the classical barycenter problem, and its multimarginal optimal transport formulations. Through the lens of these tools, one can understand the geometric structure of adversarial training problems. One crucial advantage of this result is that it allows one to utilize many computational optimal transport tools. Lastly, if time permits, I will present a result on the existence of optimal robust classifiers, which not only extends the binary setting to the multiclass one but also provides a clean interpretation via duality.
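A minimal sketch (my own toy example, not from the talk) of how a tiny, well-chosen perturbation flips a classifier's decision; the classifier, inputs, and budget below are all illustrative.

```python
# A fixed linear classifier: predict +1 if w . x > 0, else -1.
w = [1.0, -2.0]


def score(v) -> float:
    return sum(wi * vi for wi, vi in zip(w, v))


def predict(v) -> int:
    return 1 if score(v) > 0 else -1


def sign(t: float) -> float:
    return 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)


x = [0.3, 0.1]
eps = 0.2  # sup-norm perturbation budget

# FGSM-style attack on a linear model: push each coordinate against the score.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1   (score 0.1)
print(predict(x_adv))  # -1  (score -0.5): a small move flips the label
```

For a linear model this coordinate-wise step is exactly the worst-case sup-norm attack, which is why the perturbation is so effective despite being small.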

## What conifer trees can show us about how organs are positioned in developing organisms

One of the central questions in developmental biology is how organs form in the correct positions in order to create a functional mature organism. Plant leaves offer an easily observable example of organ positioning, with species-specific motifs for leaf arrangement (phyllotaxis). These patterns arise through a combination of chemical pattern formation, mechanical stresses and growth. Mathematical modelling in each of these areas (and their combinations) contributes to quantitative understanding of developmental mechanisms and morphogenesis in general. Conifer trees are some of the most characteristic plants of BC. They also display a type of ring patterning of their embryonic leaves (cotyledons), which I believe offers a unique route to understanding plant phyllotaxis in general. I will discuss how early work at UBC on similar patterning in algae led to application of reaction-diffusion models in conifer development. This framework has guided experiments at BCIT and recently led to a model that accounts for the natural variability in conifer cotyledon number. The model involves the kinetics of a highly conserved gene regulation module and therefore sheds light on the chemical pattern formation control of phyllotaxis across plants. Conifer patterning also demonstrates scaling of position to organism size, an active area of research in animal development: the model provides some mechanistic insight into how this can occur via chemical kinetics.

## Sign changes of the error term in the Piltz divisor problem

For an integer $k \ge 3$, let $\Delta_k(x) := \sum_{n \le x} d_k(n) - \operatorname{Res}_{s=1}\left(\zeta^k(s) x^s / s\right)$, where $d_k(n)$ is the $k$-fold divisor function, and $\zeta(s)$ is the Riemann zeta-function. In the 1950s, Tong showed that for all large enough $X$, $\Delta_k(x)$ changes sign at least once in the interval $[X, X + C_k X^{1-1/k}]$ for some positive constant $C_k$. For a large parameter $X$, we show that if the Lindelöf hypothesis is true, then there exist many disjoint subintervals of $[X, 2X]$, each of length $X^{1-1/k-\varepsilon}$, such that $\Delta_k(x)$ does not change sign in any of these subintervals. If the Riemann hypothesis is true, then we can improve the length of the subintervals to $\ll X^{1-1/k} (\log X)^{-k^2-2}$. These results may be viewed as higher-degree analogues of a theorem of Heath-Brown and Tsang, who studied the case $k = 2$. This is joint work with Siegfred Baluyot.
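For $k = 2$ the residue term is explicit: $\operatorname{Res}_{s=1}(\zeta^2(s) x^s/s) = x\log x + (2\gamma - 1)x$, with $\gamma$ the Euler-Mascheroni constant. A small numerical sketch (my own, for illustration only) computes $\Delta_2(x)$ directly from this formula.

```python
import math

EULER_GAMMA = 0.5772156649015329


def divisor_summatory(x: int) -> int:
    """Sum of d(n) for n <= x, using sum_{n<=x} d(n) = sum_{n<=x} floor(x/n)."""
    return sum(x // n for n in range(1, x + 1))


def delta2(x: int) -> float:
    """Error term Delta_2(x) in the Dirichlet divisor problem (k = 2)."""
    main = x * math.log(x) + (2 * EULER_GAMMA - 1) * x
    return divisor_summatory(x) - main


print(divisor_summatory(5))      # 10
print(round(delta2(5), 3))       # about 1.18
print(round(delta2(100), 3))     # about 6.04
```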

## Transcendence of period integrals over function fields

Speaker: Jacob Tsimerman (University of Toronto)

Periods are integrals of differential forms, and their study spans many branches of mathematics, including diophantine geometry, differential algebra, and algebraic geometry. If one restricts one's attention to periods arising over Q, then the Grothendieck Period Conjecture is a precise way of saying that these are as transcendental as the underlying geometry allows. While this is a remarkably general statement (and very open), it does not include another major (also open!) conjecture in transcendence theory - the Schanuel conjecture. In particular, e is not a period, even though it can be described through periods via the relation $\int_1^e \frac{dx}{x} = 1$. We shall present a generalization due to André which unifies the two conjectures in a satisfactory manner.
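The relation defining $e$ through a period is easy to check numerically; a quick sketch (my own illustration) approximates $\int_1^e dx/x$ with composite Simpson's rule.

```python
import math


def simpson(f, a: float, b: float, n: int = 1000) -> float:
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3


approx = simpson(lambda x: 1.0 / x, 1.0, math.e)
print(round(approx, 10))  # 1.0
```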

In the (complex) function field case, a lot more is known. The Grothendieck Period Conjecture has been formulated and proved by Ayoub and Nori. We shall explain the geometric analogue of the André-Grothendieck Period Conjecture and present its proof. It turns out that this conjecture is (almost) equivalent to a functional-transcendence statement of extreme generality known as the Ax-Schanuel conjecture, which has been the subject of a lot of study over the past decade in connection with unlikely intersection problems. The version relevant to us is a comparison between the algebraic and flat coordinates of geometric local systems. We will explain the ideas behind the proofs of this Ax-Schanuel conjecture and explain how it implies the relevant period conjecture.

## Understanding arithmetic and geometry through cutting and pasting

Euler’s famous formula tells us that (with appropriate caveats) a map on the sphere with f countries (faces), e borders (edges), and v border-ends (vertices) will satisfy v-e+f=2. And more generally, for a map on a surface with g holes, v-e+f=2-2g. Thus we can figure out the genus of a surface by cutting it into pieces (faces, edges, vertices) and just counting the pieces appropriately. This is an example of the topological maxim “think globally, act locally”. A starting point for modern algebraic geometry can be understood as the realization that when geometric objects are actually algebraic, cutting and pasting tells you far more than it does in “usual” geometry. I will describe some easy-to-understand statements (with hard-to-understand proofs), as well as easy-to-understand conjectures (some with very clever counterexamples, by M. Larsen, V. Lunts, L. Borisov, and others). I may also discuss some joint work with Melanie Matchett Wood.
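A quick sanity check of the formula (my own illustration): counting faces, edges, and vertices for a cube drawn on a sphere, and for a single square with opposite sides glued, which tiles a torus (g = 1).

```python
def euler_characteristic(v: int, e: int, f: int) -> int:
    """v - e + f for a map on a closed surface."""
    return v - e + f


def genus(v: int, e: int, f: int) -> int:
    """Solve v - e + f = 2 - 2g for the number of holes g."""
    chi = euler_characteristic(v, e, f)
    assert chi % 2 == 0, "Euler characteristic of a closed orientable surface is even"
    return (2 - chi) // 2


print(genus(8, 12, 6))   # 0: the cube's surface is a sphere (v-e+f = 2)
print(genus(1, 2, 1))    # 1: one square with opposite sides glued gives a torus
print(genus(12, 30, 20)) # 0: the icosahedron is also a sphere
```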

Speaker biography:

Ravi Vakil is a Professor of Mathematics and the Robert K. Packard University Fellow at Stanford University, and was the David Huntington Faculty Scholar. He received the Dean's Award for Distinguished Teaching, an American Mathematical Society Centennial Fellowship, a Frederick E. Terman Fellowship, an Alfred P. Sloan Research Fellowship, a National Science Foundation CAREER grant, the presidential award PECASE, and the Brown Faculty Fellowship. Vakil also received the Coxeter-James Prize from the Canadian Mathematical Society and the André-Aisenstadt Prize from the CRM in Montréal. He was the 2009 Earle Raymond Hedrick Lecturer at Mathfest and the Mathematical Association of America's Pólya Lecturer for 2012-2014. The article based on this lecture won the Lester R. Ford Award in 2012 and the Chauvenet Prize in 2014. In 2013, he was a Simons Fellow in Mathematics.