Probability Seminar

Every other Friday, 3:00–4:00 pm, in MSB 318

Schedule

April 27

Jason Swanson

The Principles of Probability

In the beginning, probability was part of logic. That was the view of Leibniz, Bernoulli, de Moivre, Laplace, and even Boole. It was a logic concerned with degrees of evidentiary support, and these degrees were called probabilities. The idea that probabilities are empirical properties of sequences is a modern one. It emerged from the logical empiricism of the early 20th century. (Logical empiricism itself died out in the 1960s.) This new idea came to be known as frequentism. Not everyone accepted the new way of thinking. In the 1950s, the frequentists coined a derogatory nickname for their opponents. They called them Bayesians.

Although shunned by the mainstream of the 20th century, Bayesian ideas now flourish. Modern science has become more probabilistic than ever before. Out of necessity, people are returning to the original way of thinking: probability is a form of reasoning.

In the 1930s, Kolmogorov established measure theory as the foundation for probability. Kolmogorov was a frequentist, and this motivated his work. But the mathematical apparatus he constructed was not bound to frequentism. It is perfectly compatible with the original idea of probability as logic. In fact, modern probabilists use measure theory every day. But they do not behave as if they are frequentists. They behave as if they are performing a kind of logic. They work exclusively by deriving probabilistic statements from probabilistic premises. But the logic that underlies their practice is unformalized. Or rather, it was unformalized, until now.

In The Principles of Probability [Lecture Notes in Mathematics, 2026], a formal system of probabilistic reasoning is presented. This system fully captures the modern practice of probability. It is the first logical system to accomplish this. Embedded in this system is Kolmogorov’s measure-theoretic treatment. In this way, we see that measure theory is but the tip of the probability iceberg. Below it lies a foundation of logic. In this talk, we will give a very brief overview of this logical system.

This talk will be based on a 10-page, self-contained summary of inductive logic, which is available here. You can also download the slides for the talk here.

April 10

Jay Nulph

Spatial Autoregression Model: Dependent Errors

Spatial autoregressive models are widely used in fields such as economics and agriculture, where observations at one spatial location depend on neighboring locations. In certain applications, the parameters governing spatial dependence may lie near the boundary of stability, leading to behavior analogous to unit roots in time series models and creating challenges for statistical inference.

In this work, we study estimation in an alternative unit root formulation of a spatial autoregressive model. We consider the null, near-unit-root, and explosive regimes, which exhibit markedly different asymptotic behavior. To estimate the model parameters, we employ the Gauss-Newton procedure and analyze the resulting estimators using asymptotic techniques. In particular, we apply a martingale central limit theorem for arrays to establish the asymptotic distribution of the estimators.
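The Gauss-Newton procedure mentioned above is a standard iteration for nonlinear least squares. The sketch below illustrates the generic algorithm on a toy one-parameter exponential model; it is not the spatial autoregressive setting of the talk, and the model and parameter names are illustrative only.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton iteration for nonlinear least squares.

    Minimizes sum(residual(theta)**2) by repeatedly solving the
    linearized least-squares problem J @ step = -r.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Solve the linearized problem via least squares for stability.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy example: fit y = exp(a * t) to lightly noised data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
a_true = 0.7
y = np.exp(a_true * t) + 0.001 * rng.standard_normal(t.size)

res = lambda th: np.exp(th[0] * t) - y
jac = lambda th: (t * np.exp(th[0] * t)).reshape(-1, 1)

a_hat = gauss_newton(res, jac, theta0=[0.0])
```

In the talk's setting the asymptotics of such estimators near the unit-root boundary are the delicate part; this sketch only shows the mechanics of the iteration itself.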

Our results show that the Gauss-Newton estimators are asymptotically normal under appropriate conditions, providing theoretical justification for their use in spatial autoregressive models near the unit root boundary. These findings contribute to the broader understanding of statistical inference in spatial processes.

March 27

Chathura Keshan

Some Extended Multilevel Dimension Iteration Algorithms for High-Dimensional Numerical Integration

This talk presents several enhancements to the Multilevel Dimension Iteration (MDI) framework for high-dimensional numerical integration. Classical MDI methods reduce computational complexity from exponential to polynomial order, but they are primarily suited to smooth and regularly structured integrands. They often struggle with irregular features such as singularities, complex structures, and truncated domains, leading to increased computational cost and degraded performance in higher dimensions. To overcome these limitations, we develop a family of modified MDI algorithms that extend the framework to a wider range of practical problems: a quadrature-adjusted MDI algorithm that refines quadrature points to simplify symbolic computation, a row extraction MDI algorithm that efficiently handles complex matrix structures arising from the spectral decomposition of correlated assets, and a hybrid MDI algorithm that partitions the domain to better manage truncation effects. The effectiveness of these approaches is demonstrated through the explicit solution of the multi-asset Black–Scholes equation and a set of comprehensive numerical examples.
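For context on the exponential cost that dimension-iteration methods address, the sketch below evaluates a full tensor-product Gauss–Legendre rule, whose cost is n**d function evaluations. It does not implement MDI itself (those details are the subject of the talk); it is only the classical baseline that such methods improve upon.

```python
import numpy as np
from itertools import product

def tensor_product_quadrature(f, d, n):
    """Full tensor-product Gauss-Legendre rule on [0,1]^d.

    Cost is n**d evaluations of f -- the exponential blow-up
    that dimension-iteration methods are designed to avoid.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    nodes = 0.5 * (nodes + 1.0)   # map nodes from [-1,1] to [0,1]
    weights = 0.5 * weights       # rescale weights accordingly
    total = 0.0
    for idx in product(range(n), repeat=d):
        x = nodes[list(idx)]
        w = np.prod(weights[list(idx)])
        total += w * f(x)
    return total

# Integrate f(x) = prod(x_i) over [0,1]^5; exact value is 0.5**5.
# Already at d = 5 and n = 4 this costs 4**5 = 1024 evaluations.
val = tensor_product_quadrature(lambda x: np.prod(x), d=5, n=4)
```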

March 13

Christian Keller

Non-local Hamilton–Jacobi–Bellman equations for the stochastic optimal control of path-dependent piecewise deterministic processes

We study the optimal control of path-dependent piecewise deterministic processes. An appropriate dynamic programming principle is established. We prove that the associated value function is the unique minimax solution of the corresponding non-local path-dependent Hamilton–Jacobi–Bellman equation. This is the first well-posedness result for nonsmooth solutions of fully nonlinear non-local path-dependent partial differential equations. This is joint work with Elena Bandini.

February 27

Michael Tseng

An Introduction to Informed Trading via the Kyle Model

We give a friendly glimpse of strategic (informed) trading theory through the canonical Kyle model, with a nod toward basic literacy in the finance literature more broadly. We cover the static version of the model and, time permitting, move to the dynamic version—a continuous-time game of asymmetric information.
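The static Kyle model has a well-known closed-form equilibrium: with fundamental value v ~ N(p0, Σ0) and noise-trader demand u ~ N(0, σu²), the insider trades x = β(v − p0) and the market maker sets p = p0 + λ(x + u), with λ = (1/2)√(Σ0/σu²) and β = √(σu²/Σ0). The sketch below computes these coefficients and checks market efficiency by Monte Carlo; the parameter values are illustrative, not from the talk.

```python
import numpy as np

def kyle_static(sigma0_sq, sigma_u_sq):
    """Closed-form equilibrium of the one-period Kyle (1985) model.

    v ~ N(p0, sigma0_sq) is the asset's fundamental value,
    u ~ N(0, sigma_u_sq) is noise-trader demand.
    The insider submits x = beta * (v - p0); the market maker
    sets the price p = p0 + lam * (x + u).
    """
    lam = 0.5 * np.sqrt(sigma0_sq / sigma_u_sq)   # price impact
    beta = np.sqrt(sigma_u_sq / sigma0_sq)        # insider trading intensity
    return lam, beta

lam, beta = kyle_static(sigma0_sq=1.0, sigma_u_sq=4.0)

# Monte Carlo check: the regression coefficient of v on total order
# flow x + u should recover lam (semi-strong market efficiency:
# the price equals the conditional expectation of v given order flow).
rng = np.random.default_rng(1)
n = 200_000
v = rng.normal(0.0, 1.0, n)      # take p0 = 0 for simplicity
u = rng.normal(0.0, 2.0, n)      # std dev 2 matches sigma_u_sq = 4
flow = beta * v + u
lam_hat = np.cov(v, flow)[0, 1] / np.var(flow)
```

In equilibrium exactly half of the insider's information is impounded in the price: the posterior variance of v after observing order flow is Σ0/2.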