\documentclass[fleqn]{article} \usepackage{fullpage,amsmath,amssymb,latexsym,graphicx} \usepackage{appendix} \begin{document}
\title{Full solution of the Kac--Rice problem for mean-field models}
\author{Jaron Kent-Dobias \& Jorge Kurchan}
\maketitle
\begin{abstract}
We derive the general solution for the counting of saddle points of mean-field complex landscapes. The solution incorporates Parisi's solution for the ground state, as it should.
\end{abstract}
\section{Introduction}
The computation of the number of metastable states of mean-field spin glasses goes back to the beginning of the field. Over forty years ago, Bray and Moore \cite{Bray_1980_Metastable} attempted the first calculation for the Sherrington--Kirkpatrick model, in a paper remarkable for being the first practical application of a replica symmetry breaking scheme. As became clear when the actual ground state of the model was computed by Parisi \cite{Parisi_1979_Infinite} with a different scheme, the Bray--Moore result was not exact, and in fact the problem has been open ever since. To this day the program of computing the number of saddles of a mean-field glass has only been carried out for a small subset of models. These include most notably the (pure) $p$-spin model ($p>2$) \cite{Rieger_1992_The, Crisanti_1995_Thouless-Anderson-Palmer}. The problem of studying the critical points of these landscapes has evolved into an active field in probability theory \cite{Auffinger_2012_Random, Auffinger_2013_Complexity, BenArous_2019_Geometry}.
In this paper we present what we argue is the general replica ansatz for the computation of the number of saddles of generic mean-field models, including the Sherrington--Kirkpatrick model. It reproduces the Parisi result in the low-temperature limit for the lowest states, as it should.
\section{The model}
Here we consider, for definiteness, the mixed $p$-spin model, itself a particular case of the `Toy Model' of M\'ezard and Parisi \cite{Mezard_1992_Manifolds},
\begin{equation} H(s)=\sum_p\frac{a_p^{1/2}}{p!}\sum_{i_1\cdots i_p}J_{i_1\cdots i_p}s_{i_1}\cdots s_{i_p} \end{equation}
with $\overline{J^2}=p!/2N^{p-1}$. Then
\begin{equation} \overline{H(s_1)H(s_2)}=Nf\left(\frac{s_1\cdot s_2}N\right) \end{equation}
for
\begin{equation} f(q)=\frac12\sum_pa_pq^p. \end{equation}
The model can thus be thought of as describing generic Gaussian random functions on the sphere. To constrain the model to the sphere, we use a Lagrange multiplier $\mu$, with the total energy being
\begin{equation} H(s)+\frac\mu2(s\cdot s-N) \end{equation}
At any critical point, the Hessian is
\begin{equation} \operatorname{Hess}H=\partial\partial H+\mu I \end{equation}
$\partial\partial H$ is a GOE matrix with variance
\begin{equation} \overline{(\partial_i\partial_jH)^2}=\frac1Nf''(1)(1+\delta_{ij}) \end{equation}
and therefore its spectrum is given by the Wigner semicircle with radius $\sqrt{4f''(1)}$, or
\begin{equation} \rho(\lambda)=\frac1{2\pi f''(1)}\sqrt{4f''(1)-\lambda^2} \end{equation}
and the spectrum of $\operatorname{Hess}H$ is this shifted by $\mu$, or $\rho(\lambda-\mu)$. The parameter $\mu$ fixes the spectrum of the Hessian. When it is an integration variable, and one restricts the domain of all integrations to compute saddles of a certain macroscopic index, or of minima with a certain harmonic stiffness, its value is the `softest' mode that adapts to change the Hessian \cite{Fyodorov_2007_Replica}. When it is instead fixed, the restriction on the index of the saddles is `paid' by the realization of the eigenvalues of the Hessian, usually a `harder' mode.
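As a quick numerical illustration of these statements (not needed for anything that follows), one may sample $\partial\partial H$ with the stated variances, shift it by $\mu$, and compare the spectral edges with $\mu\pm\sqrt{4f''(1)}$; the mixture coefficients $a_p$ below are placeholder values.
\begin{verbatim}
import numpy as np

# Illustrative check of the Hessian spectrum described above: draw the GOE part with
# off-diagonal variance f''(1)/N, shift by mu, and compare the spectral edges with
# mu +/- 2 sqrt(f''(1)).  The mixture coefficients a_p are placeholders.

def f_pp_1(a):                     # f''(1) for f(q) = (1/2) sum_p a_p q^p
    return 0.5 * sum(p * (p - 1) * a_p for p, a_p in a.items())

a = {3: 1.0, 4: 0.5}               # placeholder mixture
N, mu = 2000, 1.0

J = np.random.normal(scale=np.sqrt(f_pp_1(a) / N), size=(N, N))
hessian = (J + J.T) / np.sqrt(2) + mu * np.eye(N)   # GOE with the stated variance, plus mu*I
evals = np.linalg.eigvalsh(hessian)

radius = 2 * np.sqrt(f_pp_1(a))
print(evals.min(), evals.max())    # should approach mu - radius and mu + radius for large N
\end{verbatim}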
\subsection{What to expect?}
In order to visualize what one should expect, consider two pure $p$-spin models, weakly coupled:
\begin{equation} H = H_1 + H_2=\alpha_1 \sum_{ijk} J^1_{ijk} s_i s_j s_k + \alpha_2 \sum_{ijk} J^2_{ijk} \bar s_i \bar s_j \bar s_k +\epsilon \sum_i s_i \bar s_i \end{equation}
The complexities of the first and second systems in terms of $H_1$ and of $H_2$ have, in the absence of coupling, the same form, but stretched with respect to one another:
\begin{equation} \Sigma_1(H_1)= \Sigma_o(H_1/\alpha_1) \qquad ; \qquad \Sigma_2(H_2)= \Sigma_o(H_2/\alpha_2) \end{equation}
Each system has a ground state energy $E_{gs}^{1,2}$, a threshold energy $E_{thres}^{1,2}$ (a well-defined notion, since we are considering pure $p$-spins), and the corresponding limit values $X^{1,2}_{gs}=\left. \frac{d \Sigma_{1,2}}{dE_{1,2}}\right|_{E^{1,2}_{gs}}$ and $X^{1,2}_{thres}=\left. \frac{d \Sigma_{1,2}}{dE_{1,2}}\right|_{E^{1,2}_{thres}}$.
Considering the Cartesian product of both systems, we have, in terms of the total energy $H=H_1+H_2$, the following regimes:
\begin{itemize}
\item {\bf Unfrozen}:
\begin{eqnarray} & & X_1 \equiv \frac{d \Sigma_1}{dE_1}= X_2 \equiv \frac{d \Sigma_2}{dE_2} \end{eqnarray}
\item {\bf Semi-frozen}: As we go down in energy, one of the systems (say, the first) reaches its frozen phase; the first system is thus concentrated in a few states of $O(1)$ energy, while the second is not, so that $X_1=X_1^{gs}> X_2$. The lowest energy is reached when both systems are frozen.
\item {\bf Semi-threshold}: As we go up in energy from the unfrozen regime, the second system reaches its threshold, where $X_2=X_2^{thres}$. At higher energies minima are extremely rare, so the minima of the second system remain stuck at its threshold.
\item {\bf Both systems reach their thresholds}: There are essentially no more minima above this point.
\end{itemize}
Consider now two combined vectors $({\bf s},\bar{\bf s})$ and $({\bf s}',\bar{\bf s}')$ chosen at the same energies.\\
$\bullet$ Their normalized overlap is close to one when both subsystems are frozen, between zero and one in the semi-frozen phase, and zero at all higher energies.\\
$\bullet$ In phases where one or both systems are stuck at their thresholds (and only in those), the minima are exponentially subdominant with respect to saddles, because a saddle is found by releasing the constraint of staying on the threshold.
\section{Equilibrium}
Here we review the equilibrium solution \cite{Crisanti_1992_The, Crisanti_1993_The, Crisanti_2004_Spherical, Crisanti_2006_Spherical}. The free energy is well known to take the form
\begin{equation} \beta F=-\frac12\lim_{n\to0}\frac1n\left(\beta^2\sum_{ab}^nf(Q_{ab})+\log\det Q\right)-\frac12(1+\log2\pi) \end{equation}
which must be extremized over the matrix $Q$. When the solution is a Parisi matrix, this can also be written in a functional form. If $P(q)$ is the probability distribution for elements $q$ in a row of the matrix, then
\begin{equation} \chi(q)=\int_q^1dq'\,\int_0^{q'}dq''\,P(q'') \end{equation}
Since it is the double integral of a probability distribution, $\chi$ must be concave, monotonically decreasing, and have $\chi(1)=0$, $\chi'(1)=-1$.
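As an elementary illustration, for a single step of RSB with $q_0=0$, inner overlap $q_1$ and breaking parameter $x_1$, the standard identification $P(q)=x_1\delta(q)+(1-x_1)\delta(q-q_1)$ gives the piecewise-linear function
\begin{equation} \chi(q)=\begin{cases} 1-q_1+x_1(q_1-q) & q<q_1\\ 1-q & q\geq q_1 \end{cases} \end{equation}
which indeed satisfies these conditions.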
The free energy can be written as a functional over $\chi$ as
\begin{equation} \beta F=-\frac12\int_0^1dq\,\left(\beta^2f''(q)\chi(q)+\frac1{\chi(q)}\right)-\frac12(1+\log2\pi) \end{equation}
We are especially interested in the zero-temperature limit of this expression. We insert a $k$-step RSB ansatz (the steps are standard, and are reviewed in Appendix A), and obtain, with $q_0=0$,
\begin{align*} \beta F= -\frac12\log S_\infty -\frac12\left(\beta^2f(1)+\beta^2\sum_{i=0}^k(x_i-x_{i+1})f(q_i) +\frac1{x_1}\log\left[ 1+\sum_{i=1}^{k}(x_i-x_{i+1})q_i \right]\right.\\ \left.+\sum_{j=1}^k(x_{j+1}^{-1}-x_j^{-1})\log\left[ 1+\sum_{i=j+1}^{k}(x_i-x_{i+1})q_i-x_{j+1}q_j \right] \right) \end{align*}
The zero-temperature limit is most easily obtained by putting $x_i=\tilde x_ix_k$ and $x_k=y/\beta$, $q_k=1-z/\beta$:
\begin{align*} \beta F= -\frac12\log S_\infty- \frac12\left(\beta^2f(1)+\beta^2(y\beta^{-1}-1)f(1-z\beta^{-1})+y\beta\sum_{i=0}^{k-1}(\tilde x_i-\tilde x_{i+1})f(q_i)\right. \\ +\frac\beta{\tilde x_1 y}\log\left[ y\sum_{i=1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-yz/\beta \right]\\ +\sum_{j=1}^{k-1}\frac\beta y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\left[ y\sum_{i=j+1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-yz/\beta-y\tilde x_{j+1}q_j \right]\\ \left.-\frac\beta{\tilde x_1 y}\log\beta-\sum_{j=1}^{k-1}\frac\beta y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\beta+(1-\beta y^{-1})\log\left[ z/\beta \right] \right) \end{align*}
Taking the limit, we get
\begin{align*} \lim_{\beta\to\infty}F= -\frac12\left(yf(1)+zf'(1)+y\sum_{i=0}^{k-1}(\tilde x_i-\tilde x_{i+1})f(q_i) +\frac1{\tilde x_1 y}\log\left[ y\sum_{i=1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z \right]\right.\\ \left.+\sum_{j=1}^{k-1}\frac1 y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\left[ y\sum_{i=j+1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-y\tilde x_{j+1}q_j \right] -\frac1y\log z \right) \end{align*}
This is a $(k-1)$-RSB ansatz with all eigenvalues scaled by $y$ and shifted by $z$, with $\tilde x_0=0$ and $\tilde x_k=1$. {\em We have lost one level of RSB because at zero temperature the states become points.}
\section{Kac--Rice}
Following the Kac--Rice prescription \cite{Auffinger_2012_Random, BenArous_2019_Geometry}, the number of stationary points with energy density $\epsilon$ and Lagrange multiplier $\mu$ is
\begin{equation} \mathcal N(\epsilon, \mu) =\int ds\,\delta(N\epsilon-H(s))\delta(\partial H(s)+\mu s)|\det(\partial\partial H(s)+\mu I)| \end{equation}
and the associated complexity is
\begin{equation} \Sigma(\epsilon,\mu)=\frac1N\log\mathcal N(\epsilon, \mu) \end{equation}
The `mass' term $\mu$ may take a fixed value, or it may be an integration variable, for example enforcing the spherical constraint. This will turn out to be important when we discriminate between counting all solutions, or selecting those of a given index, for example minima.
\subsection{The replicated problem}
For related replicated Kac--Rice computations, see \cite{Ros_2019_Complex, Folena_2020_Rethinking}. We have
\begin{equation} \begin{aligned} \Sigma(\epsilon, \mu) &=\frac1N\lim_{n\to0}\frac\partial{\partial n}\mathcal N^n(\epsilon) \\ &=\frac1N\lim_{n\to0}\frac\partial{\partial n}\int\prod_a^n ds_a\,\delta(N\epsilon-H(s_a))\delta(\partial H(s_a)+\mu s_a)|\det(\partial\partial H(s_a)+\mu I)| \end{aligned} \end{equation}
As noted by Bray and Dean \cite{Bray_2007_Statistics}, gradient and Hessian are independent for a Gaussian distribution, and the average over disorder breaks into a product of two independent averages, one for the gradient factor and one for the determinant. The integration of all variables, including the disorder in the last factor, may be restricted to the domain such that the matrix $\partial\partial H(s_a)+\mu I$ has a specified number of negative eigenvalues (the index $\mathcal I$ of the saddle); see Fyodorov \cite{Fyodorov_2007_Replica} for a detailed discussion.
Thus
\begin{equation} \begin{aligned} \overline{\Sigma(\epsilon, \mu)} &=\frac1N\lim_{n\to0}\frac\partial{\partial n}\int\left(\prod_a^nds_a\right)\,\overline{\prod_a^n \delta(N\epsilon-H(s_a))\delta(\partial H(s_a)+\mu s_a)} \times \overline{\prod_a^n |\det(\partial\partial H(s_a)+\mu I)|} \end{aligned} \end{equation}
Writing the $\delta$-functions in their Fourier representation,
\begin{equation} \prod_a^n\delta(N\epsilon-H(s_a))\delta(\partial H(s_a)+\mu s_a) =\int \frac{d\hat\epsilon}{2\pi}\prod_a^n\frac{d\hat s_a}{2\pi} e^{\sum_a^n\left[\hat\epsilon(N\epsilon-H(s_a))+i\hat s_a\cdot(\partial H(s_a)+\mu s_a)\right]} \end{equation}
where the parameter $\hat\epsilon$, conjugate to the energies of the stationary points, plays the role of an inverse temperature for the metastable states. The average over disorder of the exponential gives
\begin{equation} \begin{aligned} \overline{ \exp\left\{ \sum_a^n(i\hat s_a\cdot\partial_a-\hat\epsilon)H(s_a) \right\} } &=\exp\left\{ \frac12\sum_{ab}^n (i\hat s_a\cdot\partial_a-\hat\epsilon) (i\hat s_b\cdot\partial_b-\hat\epsilon) \overline{H(s_a)H(s_b)} \right\} \\ &=\exp\left\{ \frac N2\sum_{ab}^n (i\hat s_a\cdot\partial_a-\hat\epsilon) (i\hat s_b\cdot\partial_b-\hat\epsilon) f\left(\frac{s_a\cdot s_b}N\right) \right\} \\ &\hspace{-13em}=\exp\left\{ \frac N2\sum_{ab}^n \left[ \hat\epsilon^2f\left(\frac{s_a\cdot s_b}N\right) -2i\hat\epsilon\frac{\hat s_a\cdot s_b}Nf'\left(\frac{s_a\cdot s_b}N\right) -\frac{\hat s_a\cdot \hat s_b}Nf'\left(\frac{s_a\cdot s_b}N\right) +\left(i\frac{\hat s_a\cdot s_b}N\right)^2f''\left(\frac{s_a\cdot s_b}N\right) \right] \right\} \end{aligned} \end{equation}
We introduce the parameters
\begin{align} Q_{ab}=\frac1Ns_a\cdot s_b && R_{ab}=-i\frac1N\hat s_a\cdot s_b && D_{ab}=\frac1N\hat s_a\cdot\hat s_b \end{align}
The meaning of $R_{ab}$ is that of a response of replica $a$ to a linear field in replica $b$:
\begin{equation} R_{ab} = \frac 1 N \sum_i \overline{\frac{\delta s_i^a}{\delta h_i^b}} \end{equation}
The matrix $D$ may similarly be seen as the variation of the complexity with respect to a random field. In terms of these parameters, and following the usual steps (Appendix B), we arrive at the replicated action
\begin{equation} \begin{aligned} S =\mathcal D(\mu)+\hat\epsilon\epsilon+\lim_{n\to0}\frac1n\left( -\mu\sum_a^nR_{aa} +\frac12\sum_{ab}\left[ \hat\epsilon^2f(Q_{ab})+2\hat\epsilon R_{ab}f'(Q_{ab}) -D_{ab}f'(Q_{ab})+R_{ab}^2f''(Q_{ab}) \right] \right. \\ \left. +\frac12\log\det\begin{bmatrix}Q&iR\\iR&D\end{bmatrix} \right) \end{aligned} \end{equation}
where
\begin{equation} \begin{aligned} \mathcal D(\mu) &=\frac1N\overline{\log|\det(\partial\partial H(s_a)+\mu I)|} =\int d\lambda\,\rho(\lambda-\mu)\log|\lambda| \\ &=\operatorname{Re}\left\{ \frac12\left(1+\frac\mu{2f''(1)}\left(\mu\pm\sqrt{\mu^2-4f''(1)}\right)\right) -\log\left(\frac1{2f''(1)}\left(\mu\pm\sqrt{\mu^2-4f''(1)}\right)\right) \right\} \end{aligned} \end{equation}
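For orientation, $\mathcal D(\mu)$ can also be estimated directly from its definition by sampling finite-$N$ Hessians; a minimal sketch (the value of $f''(1)$ is a placeholder):
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of D(mu) = (1/N) E log|det(ddH + mu I)| directly from its
# definition, using the GOE statistics of ddH stated in the model section.

def D_estimate(mu, f_pp_1, N=1000, samples=10):
    vals = []
    for _ in range(samples):
        J = np.random.normal(scale=np.sqrt(f_pp_1 / N), size=(N, N))
        hessian = (J + J.T) / np.sqrt(2) + mu * np.eye(N)
        vals.append(np.mean(np.log(np.abs(np.linalg.eigvalsh(hessian)))))
    return np.mean(vals)

print(D_estimate(mu=1.0, f_pp_1=3.0))   # f''(1) = 3 is a placeholder value
\end{verbatim}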
\section{Replica ansatz}
We shall make the following ansatz:
\begin{eqnarray}\label{ansatz} Q_{ab}&=& \text{a Parisi matrix} \nonumber \\ R_{ab}&=&R_d \; \delta_{ab} \nonumber\\ D_{ab}&=& D_d \; \delta_{ab} \label{diagonal}\end{eqnarray}
From what we have seen above, this means that replica $a$ is insensitive to a small field applied to replica $b$ if $a \neq b$, a property related to ultrametricity. A similar situation happens in quantum replicated systems, with time appearing only in the diagonal terms: see Appendix C for details. From its very definition, one sees (by perturbing the equations with a small field) that $R_d$ is the normalized trace of the inverse Hessian, as one indeed expects of a response. Recalling the definition of $\mathcal D(\mu)$ above, this means that
\begin{equation} R_d = \mathcal D'(\mu) \end{equation}
\subsection{Solution}
Inserting the diagonal ansatz \eqref{diagonal}, one gets
\begin{equation} \label{eq:diagonal.action} \begin{aligned} S =\mathcal D(\mu) + \hat\epsilon\epsilon-\mu R_d +\frac12(2\hat\epsilon R_d-D_d)f'(1)+\frac12R_d^2f''(1)+\frac12\log R_d^2 \\ +\frac12\lim_{n\to0}\frac1n\left(\hat\epsilon^2\sum_{ab}f(Q_{ab})+\log\det((D_d/R_d^2)Q+I)\right) \end{aligned} \end{equation}
Using standard manipulations (Appendix B), one finds
\begin{equation} \label{eq:functional.action} \begin{aligned} S =\mathcal D(\mu) + \hat\epsilon\epsilon-\mu R_d +\frac12(2\hat\epsilon R_d-D_d)f'(1)+\frac12R_d^2f''(1)+\frac12\log R_d^2 \\ +\frac12\int_0^1dq\,\left( \hat\epsilon^2f''(q)\chi(q)+\frac1{\chi(q)+R_d^2/D_d} \right) \end{aligned} \end{equation}
Note the close similarity of this action to the equilibrium replica one at finite temperature,
\begin{equation} \beta F=-\frac12\lim_{n\to0}\frac1n\left(\beta^2\sum_{ab}f(Q_{ab})+\log\det Q\right)-\frac12(1+\log2\pi) \end{equation}
\subsubsection{Saddles}
\label{sec:counting.saddles}
The dominant stationary points are given by maximizing the action with respect to $\mu$. This gives
\begin{equation} \label{eq:mu.saddle} 0=\frac{\partial S}{\partial\mu}=\mathcal D'(\mu)-R_d \end{equation}
as expected. To take the derivative, we must resolve the real part inside the definition of $\mathcal D$. When saddles dominate, $\mu<\mu_m\equiv\sqrt{4f''(1)}$, and
\begin{equation} \mathcal D(\mu)=\frac12+\frac12\log f''(1)+\frac{\mu^2}{4f''(1)} \end{equation}
It follows that the dominant saddles have $\mu=2f''(1)R_d$. Their index is set by the fraction of negative eigenvalues of the shifted semicircular spectrum, $\mathcal I/N=\int_{-\infty}^0d\lambda\,\rho(\lambda-\mu)$.
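As a numerical illustration, this fraction follows from integrating the negative part of the shifted semicircle (the value of $f''(1)$ below is again a placeholder):
\begin{verbatim}
import numpy as np

# Fraction of negative eigenvalues of the shifted semicircle rho(lambda - mu),
# i.e. the index density of a stationary point with mu < mu_m = 2 sqrt(f''(1)).

def index_fraction(mu, f_pp_1):
    r = 2 * np.sqrt(f_pp_1)                                   # semicircle radius
    lam = np.linspace(mu - r, mu + r, 200001)
    rho = np.sqrt(np.clip(r**2 - (lam - mu)**2, 0.0, None)) / (2 * np.pi * f_pp_1)
    return np.trapz(np.where(lam < 0, rho, 0.0), lam)

f_pp_1 = 3.0                                                  # placeholder value of f''(1)
print(index_fraction(mu=np.sqrt(f_pp_1), f_pp_1=f_pp_1))      # a saddle with mu = mu_m / 2
\end{verbatim}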
\subsubsection{Minima}
\label{sec:counting.minima}
When minima dominate, $\mu>\mu_m$ and all the square roots inside $\mathcal D(\mu)$ are real. Therefore $\mathcal D(\mu)$ is given by its former expression with the real part dropped, and
\begin{equation} \mathcal D'(\mu)=\frac{2}{\mu+\sqrt{\mu^2-4f''(1)}} \end{equation}
so that the saddle-point equation \eqref{eq:mu.saddle} gives
\begin{equation} \mu=\frac1{R_d}+R_df''(1) \end{equation}
\begin{figure} \begin{center} \includegraphics[width=13cm]{frsb_complexity-2.pdf} \end{center} \end{figure}
\subsubsection{Recovering the replica ground state}
The ground-state energy corresponds to the energy at which the complexity of the dominant stationary points becomes zero: if the most common stationary points vanish, then there cannot be any stationary points at all. In this section, we show that this criterion reproduces the ground state obtained by taking the zero-temperature limit in the equilibrium case. Consider the extremum problem of \eqref{eq:diagonal.action} with respect to $R_d$ and $D_d$. This gives the equations
\begin{align} 0 &=\frac{\partial S}{\partial D_d} =-\frac12f'(1)+\frac12\frac1{R_d^2}\lim_{n\to0}\frac1n\operatorname{Tr}((D_d/R_d^2)Q+I)^{-1}Q \label{eq:saddle.d}\\ 0 &=\frac{\partial S}{\partial R_d} =-\mu+\hat\epsilon f'(1)+R_df''(1)+\frac1{R_d}-\frac{D_d}{R_d^3}\lim_{n\to0}\frac1n\operatorname{Tr}((D_d/R_d^2)Q+I)^{-1}Q \label{eq:saddle.r} \end{align}
Adding $2(D_d/R_d)$ times \eqref{eq:saddle.d} to \eqref{eq:saddle.r} and multiplying by $R_d$ gives
\begin{equation} 0=-R_d\mu+1+R_d^2f''(1)+f'(1)(R_d\hat\epsilon-D_d) \end{equation}
There are two scenarios: one where the dominant stationary points in the vicinity of the ground state are minima, and one where they are saddles. In the case where the dominant stationary points are minima, we can use the optimal $\mu$ from \S\ref{sec:counting.minima}, which gives
\begin{equation} 0=f'(1)(R_d\hat\epsilon-D_d) \end{equation}
Therefore, in any situation where minima dominate, the optimal $\mu$ will have $R_d\hat\epsilon=D_d$. When the dominant stationary points are saddles, we can use the $\mu$ from \S\ref{sec:counting.saddles}, which implies $R_d=\mu/2f''(1)$ and
\begin{equation} 0=1-\frac{\mu^2}{4f''(1)}+f'(1)(R_d\hat\epsilon-D_d) \end{equation}
If saddles dominate all the way to the ground state, then they must become marginal minima at the ground state. Therefore at the ground-state energy $\mu=\mu_m=\sqrt{4f''(1)}$, and once again $R_d\hat\epsilon-D_d=0$. In any case, at the ground state $D_d=R_d\hat\epsilon$. Substituting this into the action, and also substituting the optimal $\mu$ for saddles or minima, gives
\begin{equation} \epsilon_0 =-\lim_{n\to0}\frac1n\frac12\left(nR_df'(1)+\hat\epsilon\sum_{ab} f(Q_{ab}) +\frac1{\hat\epsilon}\log\det(\hat\epsilon R_d^{-1} Q+I) \right) \end{equation}
which is precisely \eqref{eq:ground.state.free.energy} with $R_d=z$ and $\hat\epsilon=y$. {\em We arrive at one of the main results of our paper: a $(k-1)$-RSB ansatz in Kac--Rice will predict the correct ground-state energy for a model whose equilibrium state at small temperatures is $k$-RSB.}
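As a minimal check of this statement, when the complexity is replica symmetric ($Q=I$, the case corresponding to a 1-RSB equilibrium), the expression above reduces to $\epsilon_0=-\frac12\left[R_df'(1)+\hat\epsilon f(1)+\hat\epsilon^{-1}\log(1+\hat\epsilon/R_d)\right]$, whose stationary point in $(R_d,\hat\epsilon)$ can be located numerically; the cubic mixture below is a placeholder choice.
\begin{verbatim}
import numpy as np
from scipy.optimize import root

# Replica-symmetric (Q = I) reduction of the ground-state expression above.  We look
# for a stationary point in (R_d, eps_hat) by root-finding on a numerical gradient.
# f(q) = q^3/2 is a placeholder mixture, so f(1) = 1/2 and f'(1) = 3/2.

f1, fp1 = 0.5, 1.5

def eps0(p):
    rd, eh = p
    return -0.5 * (rd * fp1 + eh * f1 + np.log(1.0 + eh / rd) / eh)

def grad(p, h=1e-6):
    p = np.asarray(p, dtype=float)
    return [(eps0(p + h * d) - eps0(p - h * d)) / (2 * h) for d in np.eye(2)]

sol = root(grad, x0=[0.5, 1.0])
print(sol.x, eps0(sol.x))   # stationary (R_d, eps_hat) and the predicted ground-state energy
\end{verbatim}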
\section{Ultrametricity rediscovered}
The frozen phase for a given index $\mathcal I$ is the one for values of $\hat\epsilon>\hat\epsilon_{freeze}^{\mathcal I}$. It is natural to ask whether $\hat\epsilon_{freeze}^{\mathcal I}$ is related to the largest $x$ of the ansatz; if so, this would give an interesting interpretation of the whole structure. The complexity of that index is zero, and we are looking at the lowest saddles in the problem, a question that to the best of our knowledge has not been discussed in the Kac--Rice context---for good reason, since the complexity, the original motivation, is zero. However, our ansatz tells us something about the actual organization of the lowest saddles of each index in phase space.
\section{Conclusion}
We have constructed a replica solution for the general problem of finding saddles of random mean-field landscapes, including systems with many steps of RSB. The main results of this paper are the ansatz \eqref{ansatz} and the check that the lowest energy is the correct one obtained with the usual Parisi ansatz. For systems with full RSB, we find that minima are, at all energy densities above the ground state, exponentially subdominant with respect to saddles. The solution contains valuable geometric information that has yet to be extracted in all detail.
\paragraph{Funding information} J K-D and J K are supported by the Simons Foundation Grant No. 454943.
\begin{appendix}
\section{RSB for the Gibbs--Boltzmann measure}
\begin{equation} \beta F=-\frac12\lim_{n\to0}\frac1n\left(\beta^2\sum_{ab}f(Q_{ab})+\log\det Q\right)-\frac12\log S_\infty \end{equation}
$\log S_\infty=1+\log2\pi$.
\begin{align*} \beta F= -\frac12\log S_\infty -\frac12\lim_{n\to0}\frac1n\left(\beta^2nf(1)+\beta^2\sum_{i=0}^kn(x_i-x_{i+1})f(q_i) +\log\left[ \frac{ 1+\sum_{i=0}^k(x_i-x_{i+1})q_i }{ 1+\sum_{i=1}^k(x_i-x_{i+1})q_i-x_1q_0 } \right]\right.\\ +\frac n{x_1}\log\left[ 1+\sum_{i=1}^k(x_i-x_{i+1})q_i-x_1q_0 \right]\\ \left.+\sum_{j=1}^kn(x_{j+1}^{-1}-x_j^{-1})\log\left[ 1+\sum_{i=j+1}^k(x_i-x_{i+1})q_i-x_{j+1}q_j \right] \right) \end{align*}
\begin{align*} \lim_{n\to0}\frac1n \log\left[ \frac{ 1+\sum_{i=0}^k(x_i-x_{i+1})q_i }{ 1+\sum_{i=1}^k(x_i-x_{i+1})q_i-x_1q_0 } \right] &= \lim_{n\to0}\frac1n \log\left[ \frac{ 1+\sum_{i=0}^k(x_i-x_{i+1})q_i }{ 1+\sum_{i=0}^k(x_i-x_{i+1})q_i-nq_0 } \right] \\ &=q_0\left(1+\sum_{i=0}^k(x_i-x_{i+1})q_i\right)^{-1} \end{align*}
\begin{align*} \beta F= -\frac12\log S_\infty -\frac12\left(\beta^2f(1)+\beta^2\sum_{i=0}^k(x_i-x_{i+1})f(q_i) +q_0\left(1+\sum_{i=0}^k(x_i-x_{i+1})q_i\right)^{-1}\right. \\ +\frac1{x_1}\log\left[ 1+\sum_{i=1}^{k}(x_i-x_{i+1})q_i-x_1q_0 \right]\\ \left.+\sum_{j=1}^k(x_{j+1}^{-1}-x_j^{-1})\log\left[ 1+\sum_{i=j+1}^{k}(x_i-x_{i+1})q_i-x_{j+1}q_j \right] \right) \end{align*}
Setting $q_0=0$,
\begin{align*} \beta F= -\frac12\log S_\infty -\frac12\left(\beta^2f(1)+\beta^2\sum_{i=0}^k(x_i-x_{i+1})f(q_i) +\frac1{x_1}\log\left[ 1+\sum_{i=1}^{k}(x_i-x_{i+1})q_i \right]\right.\\ \left.+\sum_{j=1}^k(x_{j+1}^{-1}-x_j^{-1})\log\left[ 1+\sum_{i=j+1}^{k}(x_i-x_{i+1})q_i-x_{j+1}q_j \right] \right) \end{align*}
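For numerical exploration, the expression above can be transcribed directly; the sketch below assumes the conventions $q_0=0$, $x_0=0$ and $x_{k+1}=1$ implicit above, and returns $\beta F+\frac12\log S_\infty$.
\begin{verbatim}
import numpy as np

# Direct transcription of the k-step RSB free energy above (q_0 = 0), for numerical use.
# Conventions assumed: x_0 = 0 <= x_1 <= ... <= x_k, x_{k+1} = 1.

def beta_F(beta, q, x, f, k):
    """q = [q_0=0, ..., q_k], x = [x_0=0, ..., x_k]; returns beta*F + (1/2) log S_inf."""
    xs = np.append(np.asarray(x, dtype=float), 1.0)           # append x_{k+1} = 1
    total = beta**2 * f(1.0)
    total += beta**2 * sum((xs[i] - xs[i + 1]) * f(q[i]) for i in range(k + 1))
    total += np.log(1 + sum((xs[i] - xs[i + 1]) * q[i] for i in range(1, k + 1))) / xs[1]
    for j in range(1, k + 1):
        inner = 1 + sum((xs[i] - xs[i + 1]) * q[i] for i in range(j + 1, k + 1)) - xs[j + 1] * q[j]
        total += (1 / xs[j + 1] - 1 / xs[j]) * np.log(inner)
    return -0.5 * total

# example: one step of RSB with placeholder parameters and f(q) = q^3/2
print(beta_F(beta=2.0, q=[0.0, 0.9], x=[0.0, 0.4], f=lambda q: 0.5 * q**3, k=1))
\end{verbatim}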
Now setting $x_i=\tilde x_ix_k$, $x_k=y/\beta$, $q_k=1-z/\beta$,
\begin{align*} \beta F= -\frac12\log S_\infty- \frac12\left(\beta^2f(1)+\beta^2(y\beta^{-1}-1)f(1-z\beta^{-1})+y\beta\sum_{i=0}^{k-1}(\tilde x_i-\tilde x_{i+1})f(q_i)\right. \\ +\frac\beta{\tilde x_1 y}\log\left[ y\sum_{i=1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-yz/\beta \right]\\ +\sum_{j=1}^{k-1}\frac\beta y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\left[ y\sum_{i=j+1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-yz/\beta-y\tilde x_{j+1}q_j \right]\\ \left.-\frac\beta{\tilde x_1 y}\log\beta-\sum_{j=1}^{k-1}\frac\beta y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\beta+(1-\beta y^{-1})\log\left[ z/\beta \right] \right) \end{align*}
\begin{align*} \lim_{\beta\to\infty}F= -\frac12\left(yf(1)+zf'(1)+y\sum_{i=0}^{k-1}(\tilde x_i-\tilde x_{i+1})f(q_i) +\frac1{\tilde x_1 y}\log\left[ y\sum_{i=1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z \right]\right.\\ \left.+\sum_{j=1}^{k-1}\frac1 y(\tilde x_{j+1}^{-1}-\tilde x_j^{-1})\log\left[ y\sum_{i=j+1}^{k-1}(\tilde x_i-\tilde x_{i+1})q_i+y+z-y\tilde x_{j+1}q_j \right] -\frac1y\log z \right) \end{align*}
This is a $(k-1)$-RSB ansatz with all eigenvalues scaled by $y$ and shifted by $z$, with $\tilde x_0=0$ and $\tilde x_k=1$. It can be written as
\begin{equation} \label{eq:ground.state.free.energy} \lim_{\beta\to\infty}F=-\lim_{n\to0}\frac1n\frac12\left(nzf'(1)+y\sum_{ab}f(\tilde Q_{ab})+\frac1y\log\det(yz^{-1}\tilde Q+I) \right) \end{equation}
\section{RSB for the Kac--Rice integral}
\subsection{Solution}
\begin{align*} \lim_{n\to0}\frac1n\log\det(\hat\epsilon R_d^{-1} Q+I) =x_1^{-1}\log\left(\hat\epsilon R_d^{-1}(1-\bar q_k)+1\right)+\int_{q_0^+}^{q_{k-1}}dq\,\mu(q)\log\left[\hat\epsilon R_d^{-1}\lambda(q)+1\right] \end{align*}
where
\[ \mu(q)=\frac{\partial x^{-1}(q)}{\partial q} \]
Integrating by parts,
\begin{align*} \lim_{n\to0}\frac1n\log\det(\hat\epsilon R_d^{-1} Q+I) &=x_1^{-1}\log\left(\hat\epsilon R_d^{-1}(1-\bar q_k)+1\right)+\left[x^{-1}(q)\log[\hat\epsilon R_d^{-1}\lambda(q)+1]\right]_{q=q_0^+}^{q=q_{k-1}}-\frac{\hat\epsilon}{R_d}\int_{q_0^+}^{q_{k-1}}dq\,\frac{\lambda'(q)}{x(q)}\frac1{\hat\epsilon R_d^{-1}\lambda(q)+1}\\ &=\log[\hat\epsilon R_d^{-1}\lambda(q_{k-1})+1]+\frac{\hat\epsilon}{R_d}\int_{q_0^+}^{q_{k-1}}dq\,\frac1{\hat\epsilon R_d^{-1}\lambda(q)+1} \end{align*}
In terms of $\lambda$, the complexity reads
\begin{align*} \Sigma =-\epsilon\hat\epsilon+ \frac12\hat\epsilon R_df'(1) +\frac12\int_0^1dq\,\left[ \hat\epsilon^2\lambda(q)f''(q) +\frac1{\lambda(q)+R_d/\hat\epsilon} \right] \end{align*}
for $\lambda$ concave, monotonic, with $\lambda(1)=0$ and $\lambda'(1)=-1$. Extremizing,
\[ 0=\frac{\delta\Sigma}{\delta\lambda(q)}=\frac12\hat\epsilon^2f''(q)-\frac12\frac1{(\lambda(q)+R_d/\hat\epsilon)^2} \]
\[ \lambda^*(q)=\frac1{\hat\epsilon}\left[f''(q)^{-1/2}-R_d\right] \]
We suppose that solutions are given by
\begin{equation} \lambda(q)=\begin{cases} \lambda^*(q) & q<q_{max}\\ 1-q & q\geq q_{max} \end{cases} \end{equation}
The $k$-RSB ansatz is equivalent to a piecewise linear $\chi$ with $k+1$ pieces, with replica symmetric or 0-RSB giving $\chi(q)=1-q$.
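For instance, the functional above can be evaluated numerically for the replica-symmetric trial function $\lambda(q)=1-q$ just mentioned; all parameter values in the sketch below are placeholders.
\begin{verbatim}
import numpy as np

# Numerical evaluation of Sigma[lambda] as written above, with the replica-symmetric
# trial function lambda(q) = 1 - q.  f(q) = q^3/2 is a placeholder mixture, so that
# f'(1) = 3/2 and f''(q) = 3q.

fp1 = 1.5
fpp = lambda q: 3.0 * q

def sigma(eps, eps_hat, rd, n_grid=10001):
    qs = np.linspace(0.0, 1.0, n_grid)
    lam = 1.0 - qs
    integrand = eps_hat**2 * lam * fpp(qs) + 1.0 / (lam + rd / eps_hat)
    return -eps * eps_hat + 0.5 * eps_hat * rd * fp1 + 0.5 * np.trapz(integrand, qs)

print(sigma(eps=-1.6, eps_hat=1.0, rd=0.5))
\end{verbatim}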
Our other major result is that, if the equilibrium state in the vicinity of zero temperature is given by a $k$-RSB ansatz, then the complexity is given by a $(k-1)$-RSB ansatz. Moreover, there is an exact correspondence between the parameters of the equilibrium saddle point in the limit of zero temperature and those of the complexity saddle at the ground state. If the equilibrium is given by $x_1,\ldots,x_k$ and $q_1,\ldots,q_k$, then the parameters $\tilde x_1,\ldots,\tilde x_{k-1}$ and $\tilde q_1,\ldots,\tilde q_{k-1}$ for the complexity in the ground state are
\begin{align} \hat\epsilon=\lim_{\beta\to\infty}\beta x_k && \tilde x_i=\lim_{\beta\to\infty}\frac{x_i}{x_k} && \tilde q_i=\lim_{\beta\to\infty}q_i && R_d=\lim_{\beta\to\infty}\beta(1-q_k) && D_d=R_d\hat\epsilon \end{align}
\section{A motivation for the ansatz}
We may encode the original variables in a superspace variable:
\begin{equation} \phi_a(1)= s_a + \bar\eta_a\theta_1+\bar\theta_1\eta_a + \hat s_a \bar \theta_1 \theta_1 \end{equation}
Here $\eta_a$, $\bar\eta_a$, $\theta_1$ and $\bar\theta_1$ are Grassmann variables, and we denote the full set of coordinates in a compact form as $1= \theta_1 \overline\theta_1$, $d1= d\theta_1 d\overline\theta_1$, etc. The correlations are encoded in
\begin{equation} \begin{aligned} \mathbb Q_{a,b}(1,2)&=\frac 1 N \phi_a(1)\cdot\phi_b (2) = Q_{ab} -i\left[\bar\theta_1\theta_1+\bar\theta_2\theta_2\right] R_{ab} +(\bar\theta_1\theta_2+\theta_1\bar\theta_2)F_{ab} + \bar\theta_1\theta_1 \bar \theta_2 \theta_2 D_{ab} \\ &+ \text{odd terms in the $\bar \theta,\theta$}~. \end{aligned} \label{Q12} \end{equation}
In this notation,
\begin{equation} \overline{\Sigma(\epsilon,\mu)} =\hat\epsilon\epsilon+\lim_{n\to0}\frac1n\left[ \mu\int d1\sum_a^n\mathbb Q_{aa}(1,1) +\int d2\,d1\,\frac12\sum_{ab}^n(1+\hat\epsilon\bar\theta_1\theta_1)f(\mathbb Q_{ab}(1,2))(1+\hat\epsilon\bar\theta_2\theta_2) +\frac12\log\operatorname{sdet}\mathbb Q \right] \end{equation}
The odd and even fermion numbers decouple, so we can neglect all odd terms in $\theta,\bar{\theta}$ \cite{Annibale_2004_Coexistence}. This encoding also works for dynamics, where the coordinates then read $1= (\bar \theta, \theta, t)$, etc. The variables $\bar \theta \theta$ and $\bar \theta ' \theta'$ play the role of `times' in a superspace treatment. There is long experience of making an ansatz for replicated quantum problems, which naturally involve a (Matsubara) time. The dependence on this time holds only for diagonal replica elements, a consequence of ultrametricity. The analogy strongly suggests that only the diagonal $\mathbb Q_{aa}$ depend on the $\theta$'s. This boils down to the ansatz \eqref{ansatz}. Not surprisingly, and for the same reason as in the quantum case, this ansatz closes, as we shall see. For example, consider the convolution:
\begin{equation} \begin{aligned} \int d3\,\mathbb Q_1(1,3)\mathbb Q_2(3,2) =\int d3\,( Q_1 -i(\bar\theta_1\theta_1+\bar\theta_3\theta_3) R_1 +(\bar\theta_1\theta_3+\theta_1\bar\theta_3)F_1 + \bar\theta_1\theta_1 \bar \theta_3 \theta_3 D_1 ) \\ ( Q_2 -i(\bar\theta_3\theta_3+\bar\theta_2\theta_2) R_2 +(\bar\theta_3\theta_2+\theta_3\bar\theta_2)F_2 + \bar\theta_3\theta_3 \bar \theta_2 \theta_2 D_2 ) \\ =-i(Q_1R_2+R_1Q_2) +Q_1D_2\bar\theta_2\theta_2+D_1Q_2\bar\theta_1\theta_1 -i\bar\theta_1\theta_1\bar\theta_2\theta_2R_1D_2 -i\bar\theta_1\theta_1\bar\theta_2\theta_2D_1R_2 \end{aligned} \end{equation}
\end{appendix}
\bibliographystyle{plain} \bibliography{frsb_kac-rice} \end{document}