\documentclass[a4paper,fleqn]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath,amssymb,latexsym,graphicx}
\usepackage{newtxtext,newtxmath}
\usepackage{bbold}
\usepackage[dvipsnames]{xcolor}
\usepackage[
  colorlinks=true,
  urlcolor=BlueViolet,
  citecolor=BlueViolet,
  filecolor=BlueViolet,
  linkcolor=BlueViolet
]{hyperref}
\usepackage[
  style=phys,
  eprint=true,
  maxnames=100
]{biblatex}
\usepackage{anyfontsize,authblk}
\usepackage{fullpage}
\addbibresource{topology.bib}

\title{On the topology of solutions to random continuous constraint satisfaction problems}
\author{Jaron Kent-Dobias\footnote{\url{jaron.kent-dobias@roma1.infn.it}}}
\affil{Istituto Nazionale di Fisica Nucleare, Sezione di Roma I, Italy}

\begin{document}
\maketitle

\begin{abstract}
  We consider the set of solutions to $M$ random polynomial equations with independent Gaussian coefficients on the $(N-1)$-sphere. When solutions exist, they form a manifold. We compute the average Euler characteristic of this manifold in the limit of large $N$, and find different behavior depending on the scaling of $M$ with $N$. When $\alpha=M/N$ is held constant, the average characteristic is 2 whenever solutions exist. When $M$ is constant, the average characteristic is also 2 up to a transition value $M_\textrm{th}$, above which it is exponentially large in $N$. To better interpret these results, we compute the average number of stationary points of a test function on the solution manifold. In both regimes, this reveals another transition between a regime with few stationary points and one with exponentially many. We conjecture that this second transition is geometric rather than topological.
\end{abstract}

\tableofcontents

\section{Introduction}

Constraint satisfaction problems seek configurations that simultaneously satisfy a set of equations, and form a basis for thinking about problems as diverse as neural networks \cite{Mezard_2009_Constraint}, granular materials \cite{Franz_2017_Universality}, ecosystems \cite{Altieri_2019_Constraint}, and confluent tissues \cite{Urbani_2023_A}. All but the last of these examples deal with sets of inequalities, while the last considers a set of equality constraints. Inequality constraints are familiar from situations like zero-cost solutions in neural networks with ReLU activations and stable equilibria of the forces between physical objects. Equality constraints naturally appear in the zero-gradient solutions of overparameterized smooth neural networks and in vertex models of tissues. In such problems, there is great interest in characterizing the structure of the set of solutions, which can influence how algorithms behave when trying to solve them \cite{Baldassi_2016_Unreasonable, Baldassi_2019_Properties, Beneventano_2023_On}. Here, we show how \emph{topological} information about the set of solutions can be calculated in a simple model of satisfying random nonlinear equalities. This allows us to reason about the connectivity of this solution set.

We consider the problem of finding configurations $\mathbf x\in\mathbb R^N$ lying on the $(N-1)$-sphere $\|\mathbf x\|^2=N$ that simultaneously satisfy $M$ nonlinear constraints $V_k(\mathbf x)=V_0$ for $1\leq k\leq M$ and some constant $V_0\in\mathbb R$.
The nonlinear constraints are taken to be centered Gaussian random functions with covariance
\begin{equation} \label{eq:covariance}
  \overline{V_i(\mathbf x)V_j(\mathbf x')}
  =\delta_{ij}F\left(\frac{\mathbf x\cdot\mathbf x'}N\right)
\end{equation}
for some choice of function $F$. When the covariance function $F$ is polynomial, the $V_k$ are also polynomial, with a term of degree $p$ in $F$ corresponding to all possible terms of degree $p$ in $V_k$. In particular, taking
\begin{equation}
  V_k(\mathbf x)
  =\sum_{p=0}^\infty\sqrt{\frac{F^{(p)}(0)}{p!\,N^p}}
  \sum_{i_1\cdots i_p}^NJ^{(k,p)}_{i_1\cdots i_p}x_{i_1}\cdots x_{i_p}
\end{equation}
with the elements of the tensors $J^{(k,p)}$ independently distributed as unit normal random variables, satisfies \eqref{eq:covariance}. The size of the series coefficients of $F$ therefore controls the variances of the coefficients of the random polynomial constraints.

This problem and small variations thereof have recently attracted attention for their resemblance to encryption, optimization, and vertex models of confluent tissues \cite{Fyodorov_2019_A, Fyodorov_2020_Counting, Fyodorov_2022_Optimization, Urbani_2023_A, Kamali_2023_Dynamical, Kamali_2023_Stochastic, Urbani_2024_Statistical, Montanari_2023_Solving, Montanari_2024_On, Kent-Dobias_2024_Conditioning, Kent-Dobias_2024_Algorithm-independent}. In each of these cases, the authors studied properties of the cost function
\begin{equation} \label{eq:cost}
  \mathcal C(\mathbf x)=\frac12\sum_{k=1}^M\big(V_k(\mathbf x)-V_0\big)^2
\end{equation}
which achieves zero only for configurations that satisfy all the constraints. Here we dispense with the cost function and study the set of solutions directly. This set can be written as
\begin{equation}
  \Omega=\big\{\mathbf x\in\mathbb R^N\mid \|\mathbf x\|^2=N,\,V_k(\mathbf x)=V_0 \;\forall\;k=1,\ldots,M\big\}
\end{equation}
Because the constraints are all smooth functions, $\Omega$ is almost always a manifold of dimension $N-M-1$ without singular points. The conditions for a singular point are that $0=\frac\partial{\partial\mathbf x}V_k(\mathbf x)$ for all $k$. This is equivalent to asking that the constraints $V_k$ all have a stationary point at the same place. When the $V_k$ are independent and random, this is vanishingly unlikely, requiring $NM+1$ independent equations to be simultaneously satisfied. This means that different connected components of the set of solutions do not intersect, nor are there self-intersections, without extraordinary fine-tuning. When $M$ is too large, no solutions exist and $\Omega$ becomes the empty set. Following previous work, a replica symmetric equilibrium calculation using the cost function \eqref{eq:cost} predicts that solutions vanish when the ratio $\alpha=M/N$ is larger than $\alpha_\text{\textsc{sat}}=f'(1)/f(1)$. Based on the results of this paper, and the fact that this $\alpha_\text{\textsc{sat}}$ is consistent with the behavior of the average Euler characteristic computed below, we expect this prediction to be accurate.

The Euler characteristic $\chi$ of a manifold is a topological invariant \cite{Hatcher_2002_Algebraic}. It is perhaps most familiar in the context of connected compact orientable surfaces, where it characterizes the number of handles in the surface: $\chi=2(1-\#)$ for $\#$ handles. For general $d$, the Euler characteristic of the $d$-sphere is $2$ if $d$ is even and 0 if $d$ is odd. The canonical method for computing the Euler characteristic is to define a complex on the manifold in question, essentially a higher-dimensional generalization of a polygonal tiling. Then $\chi$ is given by an alternating sum over the number of cells of increasing dimension, which for 2-manifolds corresponds to the number of vertices, minus the number of edges, plus the number of faces.
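As a simple check of this prescription, consider the tetrahedral tiling of the 2-sphere, which has 4 vertices, 6 edges, and 4 faces. The alternating sum gives
\begin{equation}
  \chi(S^2)=4-6+4=2
\end{equation}
in agreement with the value quoted above for even-dimensional spheres.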
Morse theory offers another way to compute the Euler characteristic using the statistics of stationary points of a function $H:\Omega\to\mathbb R$ \cite{Audin_2014_Morse}. For functions $H$ without any symmetries with respect to the manifold, the surfaces of gradient flow between adjacent stationary points form a complex. The alternating sum over cells used to compute $\chi$ then becomes an alternating sum over the number of stationary points of $H$ with increasing index, or
\begin{equation}
  \chi=\sum_{i=0}^N(-1)^i\mathcal N_H(\text{index}=i)
\end{equation}
Conveniently, we can express this abstract sum as an integral over the manifold using a small variation on the Kac--Rice formula for counting stationary points. Since the sign of the determinant of the Hessian matrix of $H$ at a stationary point is $(-1)$ raised to its index, counting stationary points weighted by the sign of the determinant yields the Euler characteristic, or
\begin{equation} \label{eq:kac-rice}
  \chi=\int_\Omega d\mathbf x\,\delta\big(\nabla H(\mathbf x)\big)\det\operatorname{Hess}H(\mathbf x)
\end{equation}
When the Kac--Rice formula is used to \emph{count} stationary points, the absolute value of the determinant is required, and one must take pains to preserve it \cite{Fyodorov_2004_Complexity}. Here we are correct to omit the absolute value and keep the signed determinant.

We need to choose a function $H$ for our calculation. Because $\chi$ is a topological invariant, any choice will work so long as it does not share a symmetry with the underlying manifold, i.e., so long as $H$ satisfies the Morse--Smale condition. Because our manifold of random constraints has no symmetries, we can take a simple height function $H(\mathbf x)=\mathbf x_0\cdot\mathbf x$ for some $\mathbf x_0\in\mathbb R^N$ with $\|\mathbf x_0\|^2=N$. $H$ is a height function because when $\mathbf x_0$ is used as the polar axis, $H$ gives the height on the sphere. On the sphere itself such a height function has a single minimum and a single maximum, reproducing $\chi(S^{N-1})=1+(-1)^{N-1}$, which is 2 when $N-1$ is even and 0 when it is odd.

\section{The average Euler characteristic}

We treat the integral over the implicitly defined manifold $\Omega$ using the method of Lagrange multipliers. We introduce one multiplier $\omega_0$ to enforce the spherical constraint and $M$ multipliers $\omega_k$ to enforce each of the constraints $V_k(\mathbf x)=V_0$, resulting in the Lagrangian
\begin{equation}
  L(\mathbf x,\pmb\omega)
  =H(\mathbf x)+\frac12\omega_0\big(\|\mathbf x\|^2-N\big)
  +\sum_{k=1}^M\omega_k\big(V_k(\mathbf x)-V_0\big)
\end{equation}
The integral over $\Omega$ in \eqref{eq:kac-rice} then becomes
\begin{equation} \label{eq:kac-rice.lagrange}
  \chi(\Omega)=\int_{\mathbb R^N} d\mathbf x\int_{\mathbb R^{M+1}}d\pmb\omega
  \,\delta\big(\partial L(\mathbf x,\pmb\omega)\big)
  \det\partial\partial L(\mathbf x,\pmb\omega)
\end{equation}
where $\partial=[\frac\partial{\partial\mathbf x},\frac\partial{\partial\pmb\omega}]$ is the vector of partial derivatives with respect to all $N+M+1$ variables. This integral is now in a form where standard techniques from mean-field theory can be applied to calculate it.

In order for certain Gaussian integrals in the following calculation to be well-defined, it is necessary to treat instead the Lagrangian problem above with $\pmb\omega\mapsto i\pmb\omega$. This transformation does not affect the Dirac $\delta$ functions of the gradient, but it does change the determinant by a factor of $i^{N+M+1}$. We will see that the result of the rest of the calculation neglecting this factor is real.
Since the Euler characteristic is also necessarily real, this factor seems to produce an inconsistency when $N+M+1$ is odd. However, when $N+M+1$ is odd the dimension $N-M-1$ of $\Omega$ is odd as well, and the Euler characteristic of an odd-dimensional manifold is always zero. The imaginary prefactor is simply the signature of this vanishing in our calculation.

To evaluate the average of $\chi$ over the constraints, we first translate the $\delta$ functions and determinant to integral form, with
\begin{align}
  \delta\big(\partial L(\mathbf x,\pmb\omega)\big)
  =\int\frac{d\hat{\mathbf x}}{(2\pi)^N}\frac{d\hat{\pmb\omega}}{(2\pi)^{M+1}}
  e^{i[\hat{\mathbf x},\hat{\pmb\omega}]\cdot\partial L(\mathbf x,\pmb\omega)}
  \\
  \det\partial\partial L(\mathbf x,\pmb\omega)
  =\int d\bar{\pmb\eta}\,d\pmb\eta\,d\bar{\pmb\gamma}\,d\pmb\gamma\,
  e^{-[\bar{\pmb\eta},\bar{\pmb\gamma}]^T\partial\partial L(\mathbf x,\pmb\omega)[\pmb\eta,\pmb\gamma]}
\end{align}
To make the calculation compact, we introduce superspace coordinates. Define the supervectors
\begin{equation}
  \pmb\phi(1)=\mathbf x+\bar\theta_1\pmb\eta+\bar{\pmb\eta}\theta_1+\bar\theta_1\theta_1i\hat{\mathbf x}
  \qquad
  \sigma_k(1)=\omega_k+\bar\theta_1\gamma_k+\bar\gamma_k\theta_1+\bar\theta_1\theta_1\hat\omega_k
\end{equation}
The Euler characteristic can be expressed using these supervectors as
\begin{equation}
  \begin{aligned}
    \chi(\Omega)
    &=\int d\pmb\phi\,d\pmb\sigma\,e^{\int d1\,L\big(\pmb\phi(1),\pmb\sigma(1)\big)}
    \\
    &=\int d\pmb\phi\,d\pmb\sigma\,\exp\left\{
      \int d1\left[
        H\big(\pmb\phi(1)\big)
        +\frac i2\sigma_0(1)\left(\|\pmb\phi(1)\|^2-N\right)
        +i\sum_{k=1}^M\sigma_k(1)\Big(V_k\big(\pmb\phi(1)\big)-V_0\Big)
      \right]
    \right\}
  \end{aligned}
\end{equation}
Since the exponential integrand is linear in the functions $V_k$, we can average over these functions to find
\begin{equation}
  \begin{aligned}
    \overline{\chi(\Omega)}
    =\int d\pmb\phi\,d\pmb\sigma\,\exp\Bigg\{
      \int d1\left[
        H(\pmb\phi(1))
        +\frac{i}2\sigma_0(1)\big(\|\pmb\phi(1)\|^2-N\big)
        -iV_0\sum_{k=1}^M\sigma_k(1)
      \right]
      \\
      -\frac12\int d1\,d2\,\sum_{k=1}^M\sigma_k(1)\sigma_k(2)F\left(\frac{\pmb\phi(1)\cdot\pmb\phi(2)}N\right)
    \Bigg\}
  \end{aligned}
\end{equation}
This is a Gaussian integral in the Lagrange multipliers with $1\leq k\leq M$. Performing that integral yields
\begin{equation}
  \begin{aligned}
    \overline{\chi(\Omega)}
    &=\int d\pmb\phi\,d\sigma_0\,\exp\Bigg\{
      \int d1\left[
        H(\pmb\phi(1))
        +\frac{i}2\sigma_0(1)\big(\|\pmb\phi(1)\|^2-N\big)
      \right]
      \\
      &\hspace{5em}-\frac M2V_0^2\int d1\,d2\,F\left(\frac{\pmb\phi(1)\cdot\pmb\phi(2)}N\right)^{-1}
      -\frac M2\log\operatorname{sdet}F\left(\frac{\pmb\phi(1)\cdot\pmb\phi(2)}N\right)
    \Bigg\}
  \end{aligned}
\end{equation}
The supervector $\pmb\phi$ enters this expression only through its scalar product with itself and with the vector $\mathbf x_0$ appearing in the function $H$.
We therefore change variables to the superoperator $\mathbb Q$ and the supervector $\mathbb M$ defined by
\begin{equation}
  \mathbb Q(1,2)=\frac{\pmb\phi(1)\cdot\pmb\phi(2)}N
  \qquad
  \mathbb M(1)=\frac{\pmb\phi(1)\cdot\mathbf x_0}N
\end{equation}
In particular, $H(\pmb\phi(1))=\mathbf x_0\cdot\pmb\phi(1)=N\,\mathbb M(1)$. These new variables can replace $\pmb\phi$ in the integral using a generalized Hubbard--Stratonovich transformation, which yields
\begin{equation}
  \begin{aligned}
    \overline{\chi(\Omega)}
    &=\int d\mathbb Q\,d\mathbb M\,d\sigma_0\,
    \left[g(\mathbb Q,\mathbb M)+O(N^{-1})\right]
    \,\exp\Bigg\{
      N\int d1\left[
        \mathbb M(1)
        +\frac{i}2\sigma_0(1)\big(\mathbb Q(1,1)-1\big)
      \right]
      \\
      &\hspace{5em}-\frac M2V_0^2\int d1\,d2\,F(\mathbb Q)^{-1}(1,2)
      -\frac M2\log\operatorname{sdet}F(\mathbb Q)
      +\frac N2\log\operatorname{sdet}(\mathbb Q-\mathbb M\mathbb M^T)
    \Bigg\}
  \end{aligned}
\end{equation}
where $g$ is a function of $\mathbb Q$ and $\mathbb M$ independent of $N$ and $M$, detailed in Appendix~\ref{sec:prefactor}.

To move on from this expression, we need to expand the superspace notation. We can write
\begin{equation}
  \begin{aligned}
    \mathbb Q(1,2)
    &=C-R(\bar\theta_1\theta_1+\bar\theta_2\theta_2)
    -G(\bar\theta_1\theta_2+\bar\theta_2\theta_1)
    -D\bar\theta_1\theta_1\bar\theta_2\theta_2
    \\
    &\qquad
    +(\bar\theta_1+\bar\theta_2)H
    +\bar H(\theta_1+\theta_2)
    -(\bar\theta_1\theta_1\bar\theta_2+\bar\theta_2\theta_2\bar\theta_1)\hat H
    -\bar{\hat H}(\theta_1\bar\theta_2\theta_2+\theta_2\bar\theta_1\theta_1)
  \end{aligned}
\end{equation}
and
\begin{equation}
  \mathbb M(1)
  =m+\bar\theta_1H_0+\bar H_0\theta_1-\hat m\bar\theta_1\theta_1
\end{equation}
The order parameters $C$, $R$, $G$, $D$, $m$, and $\hat m$ are ordinary numbers defined by
\begin{align}
  C=\frac{\mathbf x\cdot\mathbf x}N
  &&
  R=-i\frac{\mathbf x\cdot\hat{\mathbf x}}N
  &&
  G=\frac{\bar{\pmb\eta}\cdot\pmb\eta}N
  &&
  D=\frac{\hat{\mathbf x}\cdot\hat{\mathbf x}}N
  &&
  m=\frac{\mathbf x_0\cdot\mathbf x}N
  &&
  \hat m=-i\frac{\mathbf x_0\cdot\hat{\mathbf x}}N
\end{align}
while $\bar H$, $H$, $\bar{\hat H}$, $\hat H$, $\bar H_0$ and $H_0$ are Grassmann numbers defined by
\begin{align}
  \bar H=\frac{\bar{\pmb\eta}\cdot\mathbf x}N
  &&
  H=\frac{\pmb\eta\cdot\mathbf x}N
  &&
  \bar{\hat H}=\frac{\bar{\pmb\eta}\cdot\hat{\mathbf x}}N
  &&
  \hat H=\frac{\pmb\eta\cdot\hat{\mathbf x}}N
  &&
  \bar H_0=\frac{\bar{\pmb\eta}\cdot\mathbf x_0}N
  &&
  H_0=\frac{\pmb\eta\cdot\mathbf x_0}N
\end{align}
We can treat the integral over $\sigma_0$ immediately. It gives
\begin{equation}
  \int d\sigma_0\,e^{N\int d1\,\frac i2\sigma_0(1)(\mathbb Q(1,1)-1)}
  =2\pi\,\delta(C-1)\,\delta(G+R)\,\bar HH
\end{equation}
This therefore sets $C=1$ and $G=-R$ in the remainder of the integrand, as well as setting everything depending on $\bar H$ and $H$ to zero.
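For reference in what follows, imposing $C=1$ and $G=-R$ and dropping the terms involving $\bar H$ and $H$ reduces the expansion of $\mathbb Q$ above to
\begin{equation}
  \mathbb Q(1,2)
  =1-R(\bar\theta_1\theta_1+\bar\theta_2\theta_2-\bar\theta_1\theta_2-\bar\theta_2\theta_1)
  -D\bar\theta_1\theta_1\bar\theta_2\theta_2
  -(\bar\theta_1\theta_1\bar\theta_2+\bar\theta_2\theta_2\bar\theta_1)\hat H
  -\bar{\hat H}(\theta_1\bar\theta_2\theta_2+\theta_2\bar\theta_1\theta_1)
\end{equation}
while the expansion of $\mathbb M$ is unchanged.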
With these simplifications, the superdeterminants and the inverse operator appearing in the exponent evaluate to
\begin{equation}
  \begin{aligned}
    \operatorname{sdet}(\mathbb Q-\mathbb M\mathbb M^T)
    &=1+\frac{(1-m^2)D+\hat m^2-2Rm\hat m}{R^2}
    -\frac6{R^4}\bar H_0H_0\bar{\hat H}\hat H
    \\
    &\qquad+\frac2{R^3}\left[
      (mR-\hat m)(\bar{\hat H}H_0+\bar H_0\hat H)
      -(D+R^2)\bar H_0H_0
      +(1-m^2)\bar{\hat H}\hat H
    \right]
  \end{aligned}
\end{equation}
\begin{equation}
  \operatorname{sdet}F(\mathbb Q)
  =1+\frac{DF(1)}{R^2F'(1)}
  +\frac{2F(1)}{R^3F'(1)}\bar{\hat H}\hat H
\end{equation}
\begin{equation}
  \int d1\,d2\,F(\mathbb Q)^{-1}(1,2)
  =\left(F(1)+\frac{R^2F'(1)}{D}\right)^{-1}
  +2\frac{RF'(1)}{(DF(1)+R^2F'(1))^2}\bar{\hat H}\hat H
\end{equation}

\subsection{Behavior with extensively many constraints}

\subsection{Behavior with finitely many constraints}

The correct scaling to find a nontrivial answer with finite $M$ is to scale both the covariance function and the fixed constant with $N$, defining $v_0=\frac1NV_0$ and $f(q)=\frac1NF(q)$ so that $v_0$ and $f(q)$ remain finite at large $N$. With these scalings and $M=1$, this problem reduces to examining the level sets of the spherical spin glasses at energy density $E=v_0$. Two threshold values of $v_0$ appear in this analysis:
\begin{align}
  v_0^{\chi>2}=\sqrt{2f(1)}
  &&
  v_0^{m=0}=2\sqrt{f(1)-\frac{f(1)^2}{f'(1)}}
\end{align}

\subsection{What does the average Euler characteristic tell us?}

It is not straightforward to directly use the average Euler characteristic to infer something about the number of connected components in the set of solutions. To understand why, a simple example is helpful. Consider the set of solutions on the sphere $\|\mathbf x\|^2=N$ that satisfy the single quadratic constraint
\begin{equation}
  0=\sum_{i=1}^N\sigma_ix_i^2
\end{equation}
where each $\sigma_i$ is taken to be $\pm1$ with equal probability. If we take the coordinates of $\mathbf x$ to be ordered such that all terms with $\sigma_i=+1$ come first, this gives
\begin{equation}
  0=\sum_{i=1}^{N_+}x_i^2-\sum_{i=N_++1}^Nx_i^2
\end{equation}
where $N_+$ is the number of terms with $\sigma_i=+1$. The topology of the resulting manifold can be found by adding and subtracting this constraint from the spherical one, which gives
\begin{align}
  \frac N2=\sum_{i=1}^{N_+}x_i^2
  \qquad
  \frac N2=\sum_{i=N_++1}^{N}x_i^2
\end{align}
These are two independent equations for spheres of radius $\sqrt{N/2}$, one in the first $N_+$ coordinates and the other in the remaining $N-N_+$. Therefore, the topology of the configuration space is that of $S^{N_+-1}\times S^{N-N_+-1}$. The Euler characteristic of a product space is the product of the Euler characteristics, and so we have $\chi(\Omega)=\chi(S^{N_+-1})\chi(S^{N-N_+-1})$.

What is the average value of the Euler characteristic over values of $\sigma_i$? First, recall that the Euler characteristic of a sphere $S^d$ is 2 when $d$ is even and 0 when $d$ is odd. When $N$ is odd, any value of $N_+$ will result in one of the two spheres in the product being odd-dimensional, and therefore $\chi(\Omega)=0$, as is always true for odd-dimensional manifolds. When $N$ is even, there are two possibilities: when $N_+$ is even then both spheres are odd-dimensional, while when $N_+$ is odd then both spheres are even-dimensional.
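As a concrete illustration of these possibilities, take $N=4$. The possible solution manifolds are
\begin{equation}
  S^1\times S^1\quad(N_+=2),
  \qquad
  S^0\times S^2\quad(N_+=1,3),
  \qquad
  \varnothing\quad(N_+=0,4)
\end{equation}
that is, a torus with $\chi=0$, a pair of disjoint 2-spheres with $\chi=2\times2=4$, and the empty set when all the $\sigma_i$ share a sign and no solutions exist.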
The number of terms $N_+$ with $\sigma_i=+1$ is distributed with the binomial distribution
\begin{equation}
  P(N_+)=\frac1{2^N}\binom{N}{N_+}
\end{equation}
Therefore, the average Euler characteristic for even $N$ is
\begin{equation}
  \overline{\chi(\Omega)}
  =\sum_{N_+=0}^NP(N_+)\chi(S^{N_+-1})\chi(S^{N-N_+-1})
  =\frac4{2^N}\sum_{n=0}^{N/2-1}\binom{N}{2n+1}
  =\frac4{2^N}\,2^{N-1}
  =2
\end{equation}
since only odd values of $N_+$ contribute. Thus we find the average Euler characteristic in this simple example is 2 despite the fact that the possible manifolds resulting from the constraints have characteristics of either 0 or 4.

\begin{figure}
  \includegraphics[width=0.32\columnwidth]{figs/connected.pdf}
  \hfill
  \includegraphics[width=0.32\columnwidth]{figs/shattered.pdf}
  \hfill
  \includegraphics[width=0.32\columnwidth]{figs/gone.pdf}
  \includegraphics{figs/bar.pdf}
  \caption{
    Cartoon of the topology of the solution manifold implied by our calculation. The arrow shows the vector $\mathbf x_0$ defining the height function. The region of solutions is marked in black, and the critical points of the height function restricted to this region are marked with a point. For $\alpha<1$, there are few simply connected regions, with most of the minima and maxima contributing to the Euler characteristic concentrated at the height $m^*$. For $\alpha\geq1$, there are many simply connected regions, and most of their minima and maxima are concentrated at the equator.
  } \label{fig:cartoons}
\end{figure}

See also related work on continuous constraint satisfaction and perceptron problems \cite{Franz_2016_The, Franz_2017_Universality, Franz_2019_Critical, Annesi_2023_Star-shaped, Baldassi_2023_Typical}.

\section{Average number of stationary points of a test function}

\subsection{Behavior with extensively many constraints}

\subsection{Behavior with finitely many constraints}

\begin{equation}
  Mv_0^2=\frac4{f''(1)}\left[
    f'(1)-f(1)-2\frac{f(1)}{f'(1)}f''(1)
  \right]\left[
    \frac{f(1)}{f'(1)}f''(1)-2\big(f'(1)-f(1)\big)
  \right]
\end{equation}
\begin{equation}
  Mv_0^2=\frac{f'(1)^2}{f''(1)}
\end{equation}

\section{Interpretation of our results}

\paragraph{Quenched average of the Euler characteristic.}
\begin{equation}
  D=\beta R
  \qquad
  \hat\beta=-\frac{m+\sum_aR_{1a}}{\sum_aC_{1a}}
  \qquad
  \hat m=0
\end{equation}
\begin{align}
  &\mathcal S(m,C,R)
  =\frac12\log\det\big[I+\hat\beta R^{-1}(C-m^2)\big]
  \notag \\
  &\quad-\frac\alpha2\log\det\big[I+\hat\beta\big(R\odot f'(C)\big)^{-1}f(C)\big]
\end{align}
The quenched average of the Euler characteristic in the replica symmetric ansatz becomes, for $1<\alpha<\alpha_\text{\textsc{sat}}$,
\begin{align}
  \frac1N\overline{\log\chi}
  =\frac12\bigg[
    \log\left(-\frac 1{\tilde r_d}\right)
    -\alpha\log\left(
      1-\Delta f\frac{1+\tilde r_d}{f'(1)\tilde r_d}
    \right)
  \notag \\
    -\alpha f(0)\left(\Delta f-\frac{f'(1)\tilde r_d}{1+\tilde r_d}\right)^{-1}
  \bigg]
\end{align}
where $\Delta f=f(1)-f(0)$ and $\tilde r_d$ is given by
\begin{align}
  \tilde r_d
  =-\frac{f'(1)f(1)-\Delta f^2}{2(f'(1)-\Delta f)^2}
  \bigg(
    \alpha-2+\frac{2f'(1)f(0)}{f'(1)f(1)-\Delta f^2}
  \notag\\
    +\sqrt{
      \alpha^2
      -4\alpha\frac{f'(1)f(0)\Delta f\big(f'(1)-\Delta f\big)}{\big(f'(1)f(1)-\Delta f^2\big)^2}
    }
  \bigg)
\end{align}
When $\alpha\to\alpha_\text{\textsc{sat}}=f'(1)/f(1)$ from below, $\tilde r_d\to -1$, which produces $N^{-1}\overline{\log\chi}\to0$.

\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}

JK-D is supported by a \textsc{DynSysMath} Specific Initiative of the INFN. The author thanks Pierfrancesco Urbani for helpful conversations on these topics.
\appendix \section{Calculation of the prefactor of the average Euler characteristic} \label{sec:prefactor} \printbibliography \addcontentsline{toc}{section}{References} \end{document}