Fitting with more parameters than data points: Topology of the solutions to overparameterized problems

In university we're taught to be conservative when picking parameters for data fitting, and with good reason: many-parameter models are susceptible to overfitting. As the number of parameters approaches the number of data points, the model passes perfectly through all the data points but goes haywire in the space between them. The last few decades of work in machine learning have revealed a surprise: when the number of parameters is taken much higher than the number of data points, good (and sometimes better) fits are still possible. I will review what is known about when and why overparameterized fits work, touching on the relationship between algorithms, their initialization, and solution geometry. Then I will describe recent work quantifying the topology of the space of "perfect" fits and discuss what we might glean from this information.

See more: Jaron Kent-Dobias, "On the topology of solutions to random continuous constraint satisfaction problems", arXiv:2409.12781 [cond-mat.dis-nn], https://arxiv.org/abs/2409.12781
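A minimal sketch of the phenomenon described in the abstract, not taken from the talk or the paper: minimum-norm least-squares fits with random Fourier features on a toy regression task. The target function, feature construction, and all parameter values are assumptions chosen for illustration. Test error typically peaks when the number of parameters is comparable to the number of data points and falls again deep in the overparameterized regime.

import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Ground-truth function used to generate data (an assumption for this sketch).
    return np.sin(2 * np.pi * x)

n_train, n_test, noise = 30, 500, 0.1
x_train = rng.uniform(0, 1, n_train)
y_train = target(x_train) + noise * rng.normal(size=n_train)
x_test = np.linspace(0, 1, n_test)
y_test = target(x_test)

def features(x, freqs, phases):
    # Random Fourier features: one column per (frequency, phase) pair.
    return np.cos(np.outer(x, freqs) + phases)

for n_params in [5, 15, 30, 60, 300, 3000]:
    freqs = rng.normal(0, 20, n_params)
    phases = rng.uniform(0, 2 * np.pi, n_params)
    Phi_train = features(x_train, freqs, phases)
    Phi_test = features(x_test, freqs, phases)
    # np.linalg.lstsq returns the minimum-norm solution when the system is
    # underdetermined, i.e. when n_params > n_train, so the fit interpolates
    # the training data while keeping the coefficient vector small.
    w, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ w - y_test) ** 2)
    print(f"{n_params:5d} parameters: test MSE = {test_mse:.3f}")

Running the sweep usually shows the qualitative picture from the abstract: error grows as the parameter count approaches the number of training points, then improves again once the model is heavily overparameterized; the exact numbers depend on the random seed and the assumed feature distribution.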