Mathematical Finance Seminar
Date
Time
17:15
Location:
HUB; RUD 25; 1.115
Alex Tse (University College London)

Portfolio Selection in Contests

In an investment contest with incomplete information, a finite number of agents dynamically trade assets with idiosyncratic risk and are rewarded based on the relative ranking of their terminal portfolio values. We explicitly characterize a symmetric Nash equilibrium of the contest and rigorously verify its uniqueness. The connection between the reward structure and the agents’ portfolio strategies is examined. A top-heavy payout rule results in an equilibrium portfolio return distribution with high positive skewness, which entails a high likelihood of poor performance. Risky asset holdings increase when competition intensifies in a winner-takes-all contest. This is joint work with Yumin Lu.

Mathematical Finance Seminar
Date
Time
16:15
Location:
HUB; RUD 25; 1.115
Kristoffer Andersson (University of Verona)

Exponential convergence of fictitious-play FBSDEs in finite player stochastic differential games

We study finite player stochastic differential games on possibly bounded spatial domains. The equilibrium problem is formulated through the dynamic programming principle, leading to a coupled Nash system of HJB equations and, in probabilistic form, to a corresponding Nash FBSDE with stopping at the first exit from the parabolic domain (covering both boundary and terminal conditions). The main focus of the talk is the analysis of a fictitious-play procedure applied at the level of FBSDEs. At each iteration, a player solves a best-response FBSDE against fixed opponent strategies, giving rise to a sequence of fictitious-play FBSDEs. We show that this sequence converges exponentially fast to the Nash FBSDE. In unbounded domains, this holds under a small-time assumption; in bounded domains, exponential convergence is obtained for arbitrary horizons under additional regularity conditions. For completeness, we also discuss how the fictitious-play FBSDE is approximated by a numerically tractable surrogate FBSDE, which itself converges exponentially to the fictitious-play equation. Since the surrogate FBSDE admits a standard time-discrete approximation of order 1/2, this provides a transparent overall error structure for the numerical approximation of the Nash FBSDE. We conclude with representative numerical illustrations of the full approximation scheme.
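The exponential convergence of best-response iterations can be seen already in a toy setting. The sketch below (my illustration, not the talk's FBSDE framework) runs simultaneous best responses in a two-player quadratic game where player i minimizes (a_i - b + c·a_j)²; for |c| < 1 the iteration contracts, so the distance to the Nash point decays geometrically, mirroring the exponential convergence established for fictitious-play FBSDEs.

```python
# Toy best-response iteration for a two-player quadratic game.
# Player i minimizes (a_i - b + c * a_j)**2, so the best response is
# a_i = b - c * a_j. For |c| < 1 the map is a contraction and the
# iterates converge geometrically to the Nash point a* = b / (1 + c).
b, c = 1.0, 0.5
a = [0.0, 0.0]                        # initial strategy profile
nash = b / (1.0 + c)                  # symmetric Nash equilibrium
errors = []
for _ in range(20):
    a = [b - c * a[1], b - c * a[0]]  # simultaneous best responses
    errors.append(max(abs(a[0] - nash), abs(a[1] - nash)))
# consecutive errors shrink by the factor |c| = 0.5 at every step
```

Here the contraction factor is simply |c|; in the talk's setting the analogous rate depends on the time horizon and the regularity of the coefficients.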

Workshop/Conference
Date
Time
9:00
Location:
WIAS Berlin

CRC Day on Rough Volatility

Mathematical Finance Seminar
Date
Time
17:00
Location:
HUB; RUD 25; 1.115
Sam Cohen (Oxford)

tba

Mathematical Finance Seminar
Date
Time
16:15
Location:
RUD 25; 1.115
Xueru Liu (HU Berlin)

tba

Mathematical Finance Seminar
Date
Time
16:00
Location:
HUB; RUD 25; 1.115
Yilie Huang (Columbia U)

Mean-Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study

We study continuous-time mean-variance portfolio selection in markets where stock prices are diffusion processes driven by observable factors that are themselves diffusion processes, yet the coefficients of these processes are unknown. Based on the recently developed reinforcement learning (RL) theory for diffusion processes, we present a general data-driven RL algorithm that learns the pre-committed investment strategy directly, without attempting to learn or estimate the market coefficients. For multi-stock Black-Scholes markets without factors, we further devise a baseline algorithm and prove its performance guarantee by deriving a sublinear regret bound in terms of the Sharpe ratio. For performance enhancement and practical implementation, we modify the baseline algorithm and carry out an extensive empirical study to compare its performance, in terms of a host of common metrics, with a large number of widely used portfolio allocation strategies on S&P 500 constituents. The results demonstrate that the proposed continuous-time RL strategy is consistently among the best, especially in a volatile bear market, and decisively outperforms its model-based continuous-time counterparts by significant margins.

Workshop/Conference
Date
Time
9:00
Location:
Humboldt University, Main Building
Asaf Cohen, Boualem Djehiche, Jameson Graber, ...

Mean field games and applications

This workshop is intended as a forum for discussing new developments in mean-field games and control problems and their applications, in particular in Economics and Finance.

It is jointly organized by Giorgio Ferrari (Bielefeld University) and Ulrich Horst (Humboldt University of Berlin), and sponsored by the Center for Mathematical Economics at Bielefeld University, the CRC/TRR 388 "Rough Analysis, Stochastic Dynamics and Related Fields", and the IRTG 2544 "Stochastic Analysis in Interactions".

For more information, please visit the conference webpage.

Mathematical Finance Seminar
Date
Time
17:15
Location:
TUB, MA 043
Matthieu Lauriere (NYU Shanghai)

An Efficient On-Policy Deep Learning Framework for Stochastic Optimal Control

We present a novel on-policy algorithm for solving stochastic optimal control (SOC) problems. By leveraging the Girsanov theorem, our method directly computes on-policy gradients of the SOC objective without expensive backpropagation through stochastic differential equations or adjoint problem solutions. This approach significantly accelerates the optimization of neural network control policies while scaling efficiently to high-dimensional problems and long time horizons. We evaluate our method on classical SOC benchmarks as well as applications to sampling from unnormalized distributions via Schrödinger–Föllmer processes and fine-tuning pre-trained diffusion models. Experimental results demonstrate substantial improvements in both computational speed and memory efficiency compared to existing approaches. Joint work with Mengjian Hua and Eric Vanden-Eijnden.
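The Girsanov reweighting idea can be illustrated in a minimal setting. The sketch below (my own toy example, not the authors' algorithm) simulates paths of a one-dimensional SDE under the driftless reference measure and recovers a controlled-measure expectation via the Radon-Nikodym weight, so that changing the control only changes the weights, with no re-simulation or backpropagation through the SDE.

```python
# Girsanov reweighting for a controlled SDE dX = u dt + dW on [0, T].
# Sample paths under the reference measure (u = 0), then recover the
# controlled expectation via the discrete log-weight
#   log Z = sum_k u_k dW_k - 0.5 * sum_k u_k^2 dt.
# For constant u, E^u[X_T] = u * T, which the estimate should match.
import numpy as np

rng = np.random.default_rng(0)
T, n, N = 1.0, 100, 200_000           # horizon, time steps, sample paths
dt = T / n
u = 1.0                               # constant control, for checkability

dW = rng.normal(0.0, np.sqrt(dt), size=(N, n))
W_T = dW.sum(axis=1)                  # reference paths: X_t = W_t
logZ = u * W_T - 0.5 * u**2 * T       # discrete Girsanov log-weight
est = np.mean(np.exp(logZ) * W_T)     # approximates E^u[X_T] = u * T = 1
```

With a state- or parameter-dependent control, the log-weight becomes a differentiable function of the policy parameters, which is what makes direct on-policy gradients of the SOC objective tractable.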

Mathematical Finance Seminar
Date
Time
16:15
Location:
TUB; MA 043
Sören Christensen (Christian-Albrechts-Universität zu Kiel)

How to Learn from Data in Stochastic Control Problems - An Approach Based on Statistics

While theoretical solutions to many stochastic control problems are well understood, their practicality often suffers from the assumption of known dynamics of the underlying stochastic process, which raises the statistical challenge of developing purely data-driven controls. In this talk, we discuss how stochastic control and statistics can be brought together, which we study for various classical control problems with underlying one- and multi-dimensional diffusions and jump processes. The dilemma between exploration and exploitation plays an essential role in the considerations. We find exact sublinear-order convergence rates for the regret and compare the results numerically with those of deep Q-learning algorithms.
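The exploration-exploitation dilemma mentioned above can be made concrete with a stand-in far simpler than the talk's diffusion-control setting: a two-action explore-then-commit scheme (my illustration, with hypothetical reward parameters). Exploring longer costs regret up front but makes the committed decision reliable, so total regret grows sublinearly in the horizon.

```python
# Toy explore-then-commit scheme: sample each action m times, then commit
# to the empirically best one for the rest of the horizon. Regret is the
# cumulative shortfall versus always playing the truly best action.
import numpy as np

rng = np.random.default_rng(1)
mus = np.array([0.5, 0.6])            # unknown mean rewards of two actions
m, horizon = 500, 10_000              # exploration length, total rounds

samples = rng.normal(mus, 0.1, size=(m, 2))
best = int(np.argmax(samples.mean(axis=0)))   # commit to empirical best

regret = m * (mus.max() - mus).sum() + (horizon - 2 * m) * (mus.max() - mus[best])
```

Choosing the exploration length m as a suitable power of the horizon balances the two regret sources; the data-driven diffusion controls in the talk face the same trade-off in continuous time.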

Mathematical Finance Seminar
Date
Time
17:15
Location:
TUB; MA 043
Min Dai (HK PolyU)

Option Exercise Games and the q Theory of Investment

Firms should be able to respond to their competitors’ strategies over time. Back and Paulsen (2009) thus advocate using closed-loop equilibria to analyze classic real-option exercise games but point out difficulties in defining closed-loop equilibria and characterizing the solution. We define closed-loop equilibria and derive a continuum of them in closed form. These equilibria feature either linear or nonlinear investment thresholds. In all closed-loop equilibria, firms invest faster than in the open-loop equilibrium of Grenadier (2002). We confirm Back and Paulsen (2009)’s conjecture that their closed-loop equilibrium (with a perfectly competitive outcome) is the one with the fastest investment, and that in all other closed-loop equilibria firms earn strictly positive profits. This is joint work with Zhaoli Jiang and Neng Wang.