path: root/monte-carlo.tex
author     Jaron Kent-Dobias <jaron@kent-dobias.com>  2017-11-07 15:25:33 -0500
committer  Jaron Kent-Dobias <jaron@kent-dobias.com>  2017-11-07 15:25:33 -0500
commit     94e69df7939a687cedfa614955950a8d251e0b2e (patch)
tree       b49502b6e45052f528540735a00121e79fdf7471 /monte-carlo.tex
parent     e7220fadb6b3775e55afd9a07831e90136cfaf24 (diff)
download   PRE_98_063306-94e69df7939a687cedfa614955950a8d251e0b2e.tar.gz (also .tar.bz2, .zip)
many changes
Diffstat (limited to 'monte-carlo.tex')
-rw-r--r--  monte-carlo.tex  385
1 file changed, 188 insertions, 197 deletions
diff --git a/monte-carlo.tex b/monte-carlo.tex
index 2490b31..283b609 100644
--- a/monte-carlo.tex
+++ b/monte-carlo.tex
@@ -82,7 +82,7 @@
\begin{document}
-\title{Efficiently sampling Ising states in an external field}
+\title{An efficient Wolff algorithm in an external magnetic field}
\author{Jaron Kent-Dobias}
\author{James P.~Sethna}
\affiliation{Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY, USA}
@@ -93,44 +93,56 @@
We introduce an extension of the Wolff algorithm that performs efficiently
in an external magnetic field. Near the Ising critical point, the
correlation time of our algorithm has a conventional scaling form that
- reduces to that of the Wolff algorithm at zero field. As an application, we
+ reduces to that of the Wolff algorithm at zero field and becomes more efficient
+ at any nonzero field. As an application, we
directly measure scaling functions of observables in the metastable state of
the 2D Ising model.
\end{abstract}
\maketitle
-The Ising model is a simple model of a magnet comprised of locally interacting
-spins. Like most large thermal systems, computation of its properties cannot
-be carried out explicitly and is preformed using Monte Carlo methods. Near its
-continuous phase transition, divergent correlation length leads to divergent
-correlation time in any locally-updating algorithm, hampering computation.
-At zero external field, this was largely alleviated by cluster algorithms,
-like the Wolff algorithm, whose dynamics are nonlocal and each step flips
-groups of spins whose size diverges with the correlation length. However, the
-Wolff algorithm only works at zero field. We describe an extension of this
-algorithm that works in arbitrary external field while preserving the Wolff
-algorithm's small dynamic exponent.
+The Ising model is a simple model of a magnet composed of discrete locally
+interacting one-component spins. Like most systems in statistical mechanics,
+calculation of its ensemble properties cannot be done explicitly and is often
+performed using Monte Carlo methods. Near its continuous phase transition,
+divergent correlation length leads to divergent correlation time in any
+locally-updating algorithm, hampering computation. With no external field,
+this critical slowing-down is largely alleviated by cluster algorithms---the
+most efficient of which is the Wolff algorithm---whose dynamics are nonlocal
+since each step flips a cluster of spins whose average size diverges with the
+correlation length. While less efficient cluster algorithms, like
+Swendsen--Wang, have been modified to perform in nonzero field, Wolff
+only works at zero field. We describe an extension of the Wolff
+algorithm that works in an arbitrary external field while preserving Wolff's efficiency throughout the entire temperature--field parameter
+space.
The Wolff algorithm works by first choosing a random spin and adding it to an
-empty cluster. Every neighbor of that spin pointed in the same direction as
-the spin is added to the cluster with probability $1-e^{-2\beta J}$, where
-$\beta=1/T$ and $J$ is the coupling between sites. This process is iterated
-again for neighbors of every spin added to the cluster. When all sites
-surrounding the cluster have been exhausted, the cluster is flipped. Our
-algorithm is a simple extension of this. An extra spin is introduced (often
-referred to as a ``ghost spin'') that couples to all others with coupling $H$.
-The traditional Wolff algorithm is then preformed on this larger lattice exactly as described above,
-with the extra spin treated no differently from any others. Observables in the
-original system can be exactly estimated on the new one using a simple
-mapping. As an application, we use our algorithm to measure critical scaling functions
-of the 2D Ising model in its metastable phase.
+empty cluster. Every neighbor of that spin that is pointed in the same
+direction as the spin is also added to the cluster with probability
+$1-e^{-2\beta J}$, where $\beta=1/T$ is inverse temperature and $J$ is the
+coupling between sites. This process is repeated for the neighbors of every
+spin added to the cluster. When all sites surrounding the cluster have been
+exhausted, the cluster is flipped. Our algorithm is a simple extension of
+this. An extra spin---often referred to as the ``ghost spin''---is introduced
+and made a nearest neighbor of all others with coupling $|H|$, the magnitude
+of the external field. The traditional Wolff algorithm is then performed on
+this new extended lattice exactly as described above, with the extra spin
+treated no differently from any others, i.e., allowed to be added to clusters
+and subsequently flipped. Observables in the original system can be exactly
+estimated using the new one by a simple correspondence.
+
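The cluster construction just described can be sketched in code. The following is a minimal illustration of one cluster update on a periodic square lattice extended by the ghost spin; it is not the authors' implementation, and the function name, the dictionary-based lattice representation, and the choice of seeding only from lattice sites are our own assumptions.

```python
import math
import random

def wolff_step_with_ghost(spins, L, beta, J, H, rng=random):
    """One cluster update of the Wolff algorithm on an L x L periodic
    square lattice extended by a single ghost spin that neighbors every
    site with coupling |H|.  `spins` maps (x, y) keys (plus the key
    'ghost') to values +1 or -1.  Returns the flipped cluster's size."""
    p_bond = 1 - math.exp(-2 * beta * J)        # activation prob., lattice bonds
    p_ghost = 1 - math.exp(-2 * beta * abs(H))  # activation prob., ghost bonds

    seed = (rng.randrange(L), rng.randrange(L))
    sigma = spins[seed]                          # orientation of the cluster
    stack, cluster = [seed], {seed}
    while stack:
        i = stack.pop()
        if i == 'ghost':                         # ghost neighbors every site
            neighbors = [((x, y), p_ghost) for x in range(L) for y in range(L)]
        else:
            x, y = i
            neighbors = [(((x + 1) % L, y), p_bond), (((x - 1) % L, y), p_bond),
                         ((x, (y + 1) % L), p_bond), ((x, (y - 1) % L), p_bond),
                         ('ghost', p_ghost)]
        for j, p in neighbors:
            # only aligned spins can join, with the bond probability above
            if j not in cluster and spins[j] == sigma and rng.random() < p:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                            # flip the whole cluster at once
        spins[i] = -spins[i]
    return len(cluster)
```

The physical magnetization is recovered from the extended system as $s_0\,\mathrm{sgn}(H)\sum_i s_i$, per the mapping described in the text.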
+This paper is divided into three sections. First, the Ising model (and our
+notation for it) is introduced, along with extant Monte Carlo algorithms.
+Second, we introduce our algorithm in detail and compare its efficiency with
+the existing ones. Finally, we use this new algorithm to directly measure
+critical scaling functions for observables of the 2D Ising model in its
+metastable state.
\section{Introduction}
-Consider an undirected graph $G=(V,E)$ describing a system of interacting spins. The set
-of vertices $V=\{1,\ldots,N\}$ enumerates the sites of the network, and the
-set of edges $E$ describes connections between interacting sites. On each site
+Consider an undirected graph $G=(V,E)$ describing a network of interacting spins. The set
+of vertices $V=\{1,\ldots,n\}$ enumerates the sites of the network, and the
+set of edges $E$ describes connections between neighboring sites. On each site
is a spin that can take any value from the set $S=\{-1,1\}$. The state of the
system is described by a function $s:V\to S$, leading to a configuration space
of all possible states $S^n=S\times\cdots\times S$. The Hamiltonian
@@ -140,17 +152,17 @@ by
\H(s)=-\sum_{\{i,j\}\in E}J_{ij}s_is_j-HM(s),
\label{eq:ham}
\]
-where $H$ is the external magnetic field, $J:E\to\R$ gives the coupling between spins on connected sites and
+where $H$ is the external magnetic field, $J:E\to\R$ gives the coupling
+between spins on neighboring sites and
$M:S^n\to\R$ is the magnetization of the system defined for a state
$s\in S^n$ by
\[
M(s)=\sum_{i\in V}s_i.
\]
For the purpose of this study, we will only be considering ferromagnetic
-systems where the function $J$ is nonnegative. All formal results can be
-generalized to the antiferromagnetic or mixed cases, but algorithmic
-efficiency for instance cannot.
-An observable of this system is a function $A:S^n\to\R$ depending on the
+systems where the function $J$ is nonnegative.
+
+An observable of the system is a function $A:S^n\to\R$ depending on the
system's state. Both the Hamiltonian and magnetization defined above are
observables. Assuming the ergodic hypothesis holds, the expected value $\avg
A$ of
@@ -161,18 +173,17 @@ or
\avg A=\frac1Z\sum_{s\in S^n}e^{-\beta\H(s)}A(s),
\label{eq:avg.obs}
\]
-where $\beta$ is the inverse temperature and the partition function $Z$ defined by
+where the partition function $Z$ is defined by
\[
Z=\sum_{s\in S^n}e^{-\beta\H(s)}
\]
-gives the correct normalization for the weighted sum.
-
-The sum over configurations in \eqref{eq:avg.obs} are intractable for all but
-for very small systems. Therefore expected values of observables are usually
+and gives the correct normalization for the weighted sum.
+Unfortunately, the sum over configurations in \eqref{eq:avg.obs} is intractable
+for all but the very smallest systems. Therefore expectation values are usually
approximated by some means. Monte Carlo methods are a common way of
-accomplishing this. These methods sample states $s$ in the configuration space
-$S^n$ according to the Boltzmann distribution $e^{-\beta\H(s)}$, so that
-averages of observables made using their samples asymptotically approach the
+accomplishing this. These methods sample states $s$ from the configuration space
+$S^n$ according to the Boltzmann distribution $e^{-\beta\H(s)}$ so that
+averages of observables computed from their finite samples asymptotically approach the
true expected value.
The Metropolis--Hastings algorithm
@@ -183,29 +194,29 @@ to the perturbation is then computed. If the change is negative the perturbed
state $s'$ is accepted as the new state. Otherwise the perturbed state
is accepted with probability $e^{-\beta\Delta\H}$. This process is repeated
indefinitely and a sample of states is made by sampling the state $s$
-between iterations. This algorithm is shown schematically in Algorithm
-\ref{alg:met-hast}. Metropolis--Hastings is very general, but unless the
+between iterations sufficiently separated that the successively sampled states
+are uncorrelated, e.g., at separations larger than the correlation time $\tau$. Metropolis--Hastings is very general, but unless the
perturbations are very carefully chosen the algorithm suffers in regimes where
large correlations are present in the system, for instance near continuous
phase transitions. Here the algorithm suffers from what is known as critical
slowing-down, where likely states consist of large correlated clusters that
take many perturbations to move between in configuration space.
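The procedure just described can be sketched as follows for the square-lattice Ising model with single-spin-flip perturbations; this is a generic illustration under our own naming conventions, not code from the paper.

```python
import math
import random

def metropolis_sweep(spins, L, beta, J, H, rng=random):
    """One sweep (L*L attempted single-spin flips) of Metropolis--Hastings
    for the square-lattice Ising model with periodic boundaries.
    `spins` is an L x L nested list of +1/-1 values, modified in place."""
    for _ in range(L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        s = spins[x][y]
        # energy change from flipping spin (x, y): Delta H = 2 s (J * nn + H)
        nn = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y]
              + spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
        dE = 2 * s * (J * nn + H)
        # accept downhill moves always, uphill with probability exp(-beta dE)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[x][y] = -s
```

Near the critical point a single sweep changes the state very little, which is the critical slowing-down discussed above.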
-\begin{figure}
- \begin{algorithm}[H]
- \begin{algorithmic}
- \REQUIRE $s\in S^n$
- \STATE $s'\gets$ \texttt{Perturb}($s$)
- \STATE $\Delta\H\gets\H(s')-\H(s)$
- \IF {$\exp(-\beta\Delta\H)>$ \texttt{UniformRandom}$(0,1)$}
- \STATE $s\gets s'$
- \ENDIF
- \end{algorithmic}
- \caption{Metropolis--Hastings}
- \label{alg:met-hast}
- \end{algorithm}
-\end{figure}
-
+%\begin{figure}
+% \begin{algorithm}[H]
+% \begin{algorithmic}
+% \REQUIRE $s\in S^n$
+% \STATE $s'\gets$ \texttt{Perturb}($s$)
+% \STATE $\Delta\H\gets\H(s')-\H(s)$
+% \IF {$\exp(-\beta\Delta\H)>$ \texttt{UniformRandom}$(0,1)$}
+% \STATE $s\gets s'$
+% \ENDIF
+% \end{algorithmic}
+% \caption{Metropolis--Hastings}
+% \label{alg:met-hast}
+% \end{algorithm}
+%\end{figure}
+%
The Wolff algorithm \cite{wolff1989collective} solves many of these problems, but only at zero external
field, $H=0$. It circumvents critical slowing-down by
flipping carefully-constructed clusters of spins at once in a way that samples
@@ -228,69 +239,69 @@ H}$ every time a spin is added to it.
The dynamic exponent of the Wolff algorithm is $z=0.29(1)$
\cite{wolff1989comparison,liu2014dynamic}, compared with $z=0.35(1)$ for
Swendsen--Wang \cite{swendsen1987nonuniversal}.
-\begin{figure}
- \begin{algorithm}[H]
- \begin{algorithmic}
- \REQUIRE $s\in\G^n$
- \STATE \textbf{let} $q$ be an empty stack
- \STATE \textbf{let} $i_0\in V$
- \STATE $q.\mathtt{push}(i_0)$
- \STATE $\sigma\gets s_{i_0}$
- \WHILE {$q$ is not empty}
- \STATE $i\gets q.\mathtt{pop}$
- \IF {$s_i=\sigma$}
- \FORALL {$j$ such that $\{i,j\}\in E$}
- \IF {$1-\exp(-2\beta
- J_{ij}s_is_j)>\mathop{\mathtt{UniformRandom}}(0,1)$}
- \STATE $q.\mathtt{push}(j)$
- \ENDIF
- \ENDFOR
- \STATE $s_i\gets-s_i$
- \ENDIF
- \ENDWHILE
- \end{algorithmic}
- \caption{Wolff (Zero-Field)}
- \label{alg:wolff}
- \end{algorithm}
-\end{figure}
-
-
-
-\begin{figure}
- \begin{algorithm}[H]
- \begin{algorithmic}
- \REQUIRE $s\in\G^n$
- \STATE $s'\gets s$
- \STATE \textbf{let} $q$ be an empty stack
- \STATE \textbf{let} $i_0\in V$
- \STATE $q.\mathtt{push}(i_0)$
- \STATE $\sigma\gets s_{i_0}'$
- \STATE \texttt{completed} $\gets$ \textbf{true}
- \WHILE {$q$ is not empty}
- \STATE $i\gets q.\mathtt{pop}$
- \IF {$s_i'=\sigma$}
- \FORALL {$j$ such that $\{i,j\}\in E$}
- \IF {$1-\exp(-2\beta
- J_{ij}s_i's_j')>\mathop{\mathtt{UniformRandom}}(0,1)$}
- \STATE $q.\mathtt{push}(j)$
- \ENDIF
- \ENDFOR
- \STATE $s_i'\gets-s_i'$
- \STATE $q.\mathtt{push}(i)$
- \IF {$1-\exp(-2\beta\sigma H)>\mathop{\mathtt{UniformRandom}}(0,1)$}
- \STATE \texttt{completed} $\gets$ \textbf{false}
- \STATE \textbf{break}
- \ENDIF
- \ENDIF
- \ENDWHILE
- \IF {completed}
- $s\gets s'$
- \ENDIF
- \end{algorithmic}
- \caption{Hybrid Wolff/Metropolis--Hastings}
- \label{alg:h.wolff}
- \end{algorithm}
-\end{figure}
+%\begin{figure}
+% \begin{algorithm}[H]
+% \begin{algorithmic}
+% \REQUIRE $s\in\G^n$
+% \STATE \textbf{let} $q$ be an empty stack
+% \STATE \textbf{let} $i_0\in V$
+% \STATE $q.\mathtt{push}(i_0)$
+% \STATE $\sigma\gets s_{i_0}$
+% \WHILE {$q$ is not empty}
+% \STATE $i\gets q.\mathtt{pop}$
+% \IF {$s_i=\sigma$}
+% \FORALL {$j$ such that $\{i,j\}\in E$}
+% \IF {$1-\exp(-2\beta
+% J_{ij}s_is_j)>\mathop{\mathtt{UniformRandom}}(0,1)$}
+% \STATE $q.\mathtt{push}(j)$
+% \ENDIF
+% \ENDFOR
+% \STATE $s_i\gets-s_i$
+% \ENDIF
+% \ENDWHILE
+% \end{algorithmic}
+% \caption{Wolff (Zero-Field)}
+% \label{alg:wolff}
+% \end{algorithm}
+%\end{figure}
+%
+%
+%
+%\begin{figure}
+% \begin{algorithm}[H]
+% \begin{algorithmic}
+% \REQUIRE $s\in\G^n$
+% \STATE $s'\gets s$
+% \STATE \textbf{let} $q$ be an empty stack
+% \STATE \textbf{let} $i_0\in V$
+% \STATE $q.\mathtt{push}(i_0)$
+% \STATE $\sigma\gets s_{i_0}'$
+% \STATE \texttt{completed} $\gets$ \textbf{true}
+% \WHILE {$q$ is not empty}
+% \STATE $i\gets q.\mathtt{pop}$
+% \IF {$s_i'=\sigma$}
+% \FORALL {$j$ such that $\{i,j\}\in E$}
+% \IF {$1-\exp(-2\beta
+% J_{ij}s_i's_j')>\mathop{\mathtt{UniformRandom}}(0,1)$}
+% \STATE $q.\mathtt{push}(j)$
+% \ENDIF
+% \ENDFOR
+% \STATE $s_i'\gets-s_i'$
+% \STATE $q.\mathtt{push}(i)$
+% \IF {$1-\exp(-2\beta\sigma H)>\mathop{\mathtt{UniformRandom}}(0,1)$}
+% \STATE \texttt{completed} $\gets$ \textbf{false}
+% \STATE \textbf{break}
+% \ENDIF
+% \ENDIF
+% \ENDWHILE
+% \IF {completed}
+% $s\gets s'$
+% \ENDIF
+% \end{algorithmic}
+% \caption{Hybrid Wolff/Metropolis--Hastings}
+% \label{alg:h.wolff}
+% \end{algorithm}
+%\end{figure}
\section{Cluster-flip in a Field}
@@ -300,66 +311,52 @@ Consider the new graph $\tilde G=(\tilde V,\tilde E)$ defined from $G$ by
\tilde E&=E\cup\{\{0,i\}\mid i\in V\},
\end{align}
or by adding a zeroth vertex and edges from every other vertex to the new
-one. The network of spins
+one. The spin at site zero is often known as the ``ghost spin.'' The network of spins
described by this graph now has states that occupy a configuration space
$S^{n+1}$. Extend the coupling between spins $\tilde J:\tilde E\to\R$ by
\[
- \tilde J_{ij}=\begin{cases}
- J_{ij} & i,j>0\\
- H & \text{otherwise},
+ \tilde J(e)=\begin{cases}
+ J(e) & e\in E\\
+ |H| & \text{otherwise},
\end{cases}
\]
so that each spin is coupled to every other spin the same way they were
-before, and coupled to the new spin with strength $H$. Now define a
+before, and coupled to the new spin with strength $|H|$, the magnitude of the
+external field. Now define a
Hamiltonian function
$\tilde\H:S^{n+1}\to\R$ on this new, larger configuration space defined by
\[
\tilde\H(s)=-\sum_{\{i,j\}\in\tilde E}\tilde J_{ij}s_is_j.
+ \label{eq:new.h.simple}
\]
This new Hamiltonian resembles the old one, but is composed only of
spin--spin interactions with no external field. However, by changing the terms
-considered in the sum we may write
+considered in the sum we may equivalently write
\[
\tilde\H(s)=-\sum_{\{i,j\}\in E}J_{ij}s_is_j-H\tilde M(s)
\]
-where the new magnetization $\tilde M:S^{n+1}\to\R$ is defined for $(s_0,s)\in
+where the new magnetization $\tilde M:S^{n+1}\to\R$ is defined for $s\in
S^{n+1}$ by
\[
- \tilde M(s)=s_0\sum_{i\in V}s_i=M(s_0\times(s_1,\ldots,s_n)).
+ \begin{aligned}
+ \tilde M(s)
+ &=\sgn(H)s_0\sum_{i\in V}s_i\\
+ &=M(s_0\sgn(H)(s_1,\ldots,s_n)).
+ \end{aligned}
\]
In fact, any observable $A$ of the original system can be written as an
observable $\tilde A$ of the new system by defining
\[
- \tilde A(s)=A(s_0\times(s_1,\ldots,s_n))
-\]
-and the expected value of the observable $A$ in the old system and that of its
-counterpart $\tilde A$ in the new system is unchanged. This can be seen by
-using the facts that
-$\tilde\H(s)=\tilde\H(-s)$, $\sum_{s\in S^n}f(s)=\sum_{s\in S^n}F(-s)$ for any
-$f:S^n\to\R$, and $\tilde\H(1,s)=\H(s)$ for $s\in S^n$, from which follows
-that
-\[
- \begin{aligned}
- \tilde Z\avg{\tilde A}
- &=\sum_{s\in S^{n+1}}\tilde A(s)e^{-\beta\tilde\H(s)}
- =\sum_{s_0\in S}\sum_{s\in S^n}\tilde A(s_0,s)e^{-\beta\tilde\H(s_0,s)}\\
- &=\bigg(\sum_{s\in S^n}A(s)e^{-\beta\tilde\H(1,s)}
- +\sum_{s\in S^n}A(-s)e^{-\beta\tilde\H(-1,s)}\bigg)\\
- &=\bigg(\sum_{s\in S^n}A(s)e^{-\beta\tilde\H(1,s)}
- +\sum_{s\in S^n}A(-s)e^{-\beta\tilde\H(1,-s)}\bigg)\\
- &=\bigg(\sum_{s\in S^n}A(s)e^{-\beta\tilde\H(1,s)}
- +\sum_{s\in S^n}A(s)e^{-\beta\tilde\H(1,s)}\bigg)\\
- &=\bigg(\sum_{s\in S^n}A(s)e^{-\beta\H(s)}
- +\sum_{s\in S^n}A(s)e^{-\beta\H(s)}\bigg)\\
- &=2Z\avg A.
-\end{aligned}
+ \tilde A(s)=A(s_0\sgn(H)(s_1,\ldots,s_n))
\]
-An identical calculation shows $\tilde Z=2Z$, therefore immediately proving
-$\avg{\tilde A}=\avg A$. Notice this correspondence also holds for the
-Hamiltonian.
-
-Our new spin system with an additional field is, when $H$ is greater than
-zero, simply a ferromagnetic spin system in the absence of an external field.
+such that $\avg{\tilde A}=\avg A$. This can be seen readily by using the
+symmetry $\tilde\H(-s)=\tilde\H(s)$ of the Hamiltonian
+\eqref{eq:new.h.simple}, the invariance of sums over configuration space under
+negation of the summation variable, and the fact that $\tilde\H(1,s)=\H(s)$ for any $s\in
+S^n$. Notice in particular that this is true for the Hamiltonian $\tilde\H$ as
+well.
+
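The correspondence argued above can be verified by brute force on a small graph. The sketch below (hypothetical helper names, written for $H\ge0$ so the $\sgn(H)$ factors drop out) checks both $\tilde\H(1,s)=\H(s)$ and the global spin-flip symmetry $\tilde\H(-s)=\tilde\H(s)$:

```python
import itertools
import random

def H_orig(s, edges, J, H):
    """Original Hamiltonian: -sum_{(i,j) in E} J_ij s_i s_j - H M(s)."""
    return (-sum(Je * s[i] * s[j] for Je, (i, j) in zip(J, edges))
            - H * sum(s))

def H_ext(s0, s, edges, J, H):
    """Extended Hamiltonian: the same bonds plus a ghost spin s0 coupled
    to every site with strength |H| (written for H >= 0)."""
    bonds = -sum(Je * s[i] * s[j] for Je, (i, j) in zip(J, edges))
    return bonds - abs(H) * s0 * sum(s)

# a small arbitrary ferromagnetic graph on five vertices
rng = random.Random(2)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
J = [rng.random() for _ in edges]   # nonnegative couplings
H = 0.7
for s in itertools.product([-1, 1], repeat=5):
    # fixing the ghost spin up recovers the original Hamiltonian
    assert abs(H_ext(1, s, edges, J, H) - H_orig(s, edges, J, H)) < 1e-12
    # global spin-flip symmetry of the extended Hamiltonian
    neg = tuple(-x for x in s)
    assert abs(H_ext(-1, neg, edges, J, H) - H_ext(1, s, edges, J, H)) < 1e-12
```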
+Our new spin system, extended by the ghost spin, is simply a ferromagnetic spin system in the absence of an external field.
Therefore, the Wolff algorithm can be applied to it with absolutely no
modifications. Since there is an exact correspondence between the expectation
values of our ordinary spin system in a field and their appropriately defined
@@ -368,35 +365,36 @@ system allows us to estimate the expectation values for the old. This ``new''
algorithm, if you can call it that, is shown in Algorithm
\ref{alg:wolff-field}.
-\begin{figure}
- \begin{algorithm}[H]
- \begin{algorithmic}
- \REQUIRE $s\in S^n$
- \STATE \textbf{let} $q$ be an empty stack
- \STATE \textbf{let} $i_0\in V$
- \STATE $q.\mathtt{push}(i_0)$
- \STATE $\sigma\gets s_{i_0}$
- \WHILE {$q$ is not empty}
- \STATE $i\gets q.\mathtt{pop}$
- \IF {$s_i=\sigma$}
- \FORALL {$j$ such that $\{i,j\}\in\tilde E$}
- \IF {$1-\exp(-2\beta \tilde
- J_{ij}s_is_j)>\mathop{\mathtt{UniformRandom}}(0,1)$}
- \STATE $q.\mathtt{push}(j)$
- \ENDIF
- \ENDFOR
- \STATE $s_i\gets-s_i$
- \ENDIF
- \ENDWHILE
- \end{algorithmic}
- \caption{Wolff (Nonzero Field)}
- \label{alg:wolff-field}
- \end{algorithm}
-\end{figure}
-
+%\begin{figure}
+% \begin{algorithm}[H]
+% \begin{algorithmic}
+% \REQUIRE $s\in S^n$
+% \STATE \textbf{let} $q$ be an empty stack
+% \STATE \textbf{let} $i_0\in V$
+% \STATE $q.\mathtt{push}(i_0)$
+% \STATE $\sigma\gets s_{i_0}$
+% \WHILE {$q$ is not empty}
+% \STATE $i\gets q.\mathtt{pop}$
+% \IF {$s_i=\sigma$}
+% \FORALL {$j$ such that $\{i,j\}\in\tilde E$}
+% \IF {$1-\exp(-2\beta \tilde
+% J_{ij}s_is_j)>\mathop{\mathtt{UniformRandom}}(0,1)$}
+% \STATE $q.\mathtt{push}(j)$
+% \ENDIF
+% \ENDFOR
+% \STATE $s_i\gets-s_i$
+% \ENDIF
+% \ENDWHILE
+% \end{algorithmic}
+% \caption{Wolff (Nonzero Field)}
+% \label{alg:wolff-field}
+% \end{algorithm}
+%\end{figure}
+%
-At sufficiently small $H$ both our algorithm and the hybrid Wolff--Metropolis reduce
-to the ordinary Wolff algorithm, and have its runtime properties. At very large $H$, the hybrid
+At sufficiently small $|H|$ both our algorithm and the hybrid Wolff--Metropolis reduce
+to the ordinary Wolff algorithm, and have its runtime properties. At very
+large $|H|$, the hybrid
Wolff--Metropolis behaves exactly like Metropolis, where typically only a single
spin is flipped at a time with probability $\sim e^{-2\beta|H|}$, since the
energy is dominated by contributions from the field.
@@ -405,7 +403,8 @@ We measured the autocorrelation time $\tau$ of the internal energy $\H$ of a
square-lattice Ising model ($J=1$ and $E$ is the set of nearest neighbor
pairs)
resulting from using each of these three algorithms at various
-fields and temperatures. This was done using a batch mean estimator. Time was
+fields and temperatures. This was done using an initial convex sequence
+estimator, as described in \cite{geyer1992practical}. Time was
measured as ``the number of spins that the algorithm has attempted to flip.''
For example, every Metropolis--Hastings step takes unit time, every Wolff step takes
time equal to the number of spins in the flipping cluster, and every hybrid
@@ -488,18 +487,10 @@ $\avg{|M|}$ of the absolute value of the magnetization and taking expectation
values $\avg M_\e$ of the magnetization on a reduced configuration space,
since
\begin{align}
- \avg{|M|}
- &=\frac1Z\sum_{s\in S^n}e^{-\beta\H(s)}|M(s)|\\
- &=\frac1{Z_\e+Z_\m+Z_0}\bigg(\sum_{s\in S^n_\e}e^{-\beta\H(s)}|M(s)|+
- \sum_{s\in S^n_\m}e^{-\beta\H(s)}|M(s)|+\sum_{s\in
- S^n_0}e^{-\beta\H(s)}|M(s)|\bigg)\\
- &=\frac1{2Z_\e+Z_0}\bigg(\sum_{s\in S^n_\e}e^{-\beta\H(s)}M(s)+
- \sum_{s\in S^n_\e}e^{-\beta\H(-s)}|M(-s)|\bigg)\\
- &=\frac2{2Z_\e+Z_0}\sum_{s\in S^n_\e}e^{-\beta\H(s)}M(s)\\
- &=\frac1{1+\frac{Z_0}{2Z_\e}}\eavg M
+ \avg{|M|}=\frac{1}{1+Z_0/(2Z_\e)}\eavg M
\end{align}
At infinite temperature, $Z_0/Z_\e\simeq n^{-1/2}\sim L^{-1}$ for large $L$,
-$N$. At any finite temperature, especially in the ferromagnetic phase,
+$n$. At any finite temperature, especially in the ferromagnetic phase,
$Z_0\ll Z_\e$ in a much more extreme way.
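At zero field, where $\H(-s)=\H(s)$, the identity relating $\avg{|M|}$ to the reduced-ensemble average can be checked exactly by enumeration. This sketch uses our own hypothetical setup, a $2\times2$ periodic lattice with $J=1$:

```python
import itertools
import math

L, beta = 2, 0.5
sites = [(x, y) for x in range(L) for y in range(L)]
# periodic nearest-neighbor bonds; frozensets deduplicate wrapped edges
edges = {frozenset(((x, y), ((x + 1) % L, y))) for (x, y) in sites} \
      | {frozenset(((x, y), (x, (y + 1) % L))) for (x, y) in sites}

def energy(s):
    """Zero-field Hamiltonian with J = 1 on every edge."""
    return -sum(s[i] * s[j] for i, j in (tuple(e) for e in edges))

Z = Z_e = Z_0 = num = num_e = 0.0
for vals in itertools.product([-1, 1], repeat=L * L):
    s = dict(zip(sites, vals))
    w = math.exp(-beta * energy(s))
    M = sum(vals)
    Z += w
    num += w * abs(M)
    if M > 0:        # the positive-magnetization sector S^n_e
        Z_e += w
        num_e += w * M
    elif M == 0:     # the zero-magnetization sector S^n_0
        Z_0 += w

lhs = num / Z                                  # <|M|>
rhs = (num_e / Z_e) / (1 + Z_0 / (2 * Z_e))    # <M>_e / (1 + Z_0/2Z_e)
```

By the spin-flip symmetry used in the derivation, `lhs` and `rhs` agree to machine precision, and $Z = 2Z_\e + Z_0$.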
If the ensemble average over only positive magnetizations can be said to