%
% test.tex - description
%
% Created on Mon Oct 22 19:11:00 EDT 2018.
% Copyright (c) 2018 pants productions. All rights reserved.
%
\documentclass[fleqn]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath,amssymb,latexsym,concmath,mathtools,fullpage}
\mathtoolsset{showonlyrefs=true}
\title{LS16406 Referee Response}
\author{Jaron Kent-Dobias \& James P Sethna}
\begin{document}
\def\[{\begin{equation}}
\def\]{\end{equation}}
\maketitle
We address the referee's comments on our first submission below.
\begin{quote}
\begin{verbatim}
As mentioned before, other methods for carrying out cluster
simulations in a field have been proposed. See:
V. Martin-Mayor, D. Yllanes Phys. Rev. E 80 (2009), 015701
In this paper it is shown how to construct a cluster algorithm with an
arbitrary conserved physical quantity, by working in a different
statistical ensemble. This is equivalent to fixing the conjugated
field: both are related by a Legendre transformation, dominated by a
saddle point, so there is a one-to-one relationship (fixing a
magnetization at a value m = c is equivalent to fixing the magnetic
field at a value h* such that <m>_h* = c). The method, Tethered Monte
Carlo, also works in situations where more than one order parameter is
fixed (or more than one field is applied): V. Martin-Mayor, B. Seoane,
D. Yllanes, J. Stat. Phys. 144 (2011) 554.
Of course, the present approach is very different from that of the
above reference, but perhaps the authors could address the
differences.
\end{verbatim}
\end{quote}
The indicated paper is indeed interesting, and differs in the ways already
outlined by the referee: the algorithm described operates in a different
statistical ensemble. Extracting values in the constant-field ensemble is done
by numerical integration over results from simulations of many
constant-magnetization systems. Another difference is that the algorithm
described relies on a heat-bath method to update the clusters once formed, and
therefore belongs in spirit to a broad class of existing algorithms we already
cite that form clusters and decide whether to flip them using Metropolis or
heat-bath methods directly in the constant-field ensemble. We now cite this
work alongside those others. Notably, a constant-magnetization ensemble
cluster algorithm that uses clusters without the need for a separate auxiliary
update already exists; see J.~R. Heringa \& H.~W.~J. Bl\"ote, Phys.\ Rev.\ E
\textbf{57}, 4976 (1998). This latter work is a nearer analogue to our own.
\begin{quote}
\begin{verbatim}
The part of the paper dealing with numerical tests of the method is
severely lacking in detail. First of all, the authors just say, after
eq. (12), that they measure tau "with standard methods", but cite a
paper from 1992 with a different approach to what is commonly done
nowadays. A useful reference could be
G. Ossola, A.D. Sokal, "Dynamic critical behavior of the Swendsen–Wang
algorithm for the three-dimensional Ising model" Nucl. Phys. B 691
(2004) 259, https://doi.org/10.1016/j.nuclphysb.2004.04.026
\end{verbatim}
\end{quote}
We found the suggested reference very helpful, and now use the methods
described therein for computation of correlation times and their uncertainties.
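For reference, that procedure amounts to the familiar self-consistent
windowing estimate of the integrated autocorrelation time described by Madras
\& Sokal and adopted by Ossola \& Sokal. A minimal sketch in Python follows;
the function name, the windowing constant \texttt{c}, and the direct
$O(n^2)$ estimator of the autocorrelation function are our illustrative
choices here, not code from the manuscript.
\begin{verbatim}
import numpy as np

def integrated_autocorrelation_time(x, c=6.0):
    # Self-consistent windowing estimate of tau_int for a time series x:
    # tau_int(W) = 1/2 + sum_{t=1}^{W} rho(t), with the window W chosen
    # as the smallest W satisfying W >= c * tau_int(W).
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = np.dot(x, x) / n
    # Normalized autocorrelation function; O(n^2) for transparency
    # (an FFT-based estimator is faster for long series).
    rho = [np.dot(x[:n - t], x[t:]) / ((n - t) * var) for t in range(n)]
    tau = 0.5
    for W in range(1, n):
        tau += rho[W]
        if W >= c * tau:
            # Madras-Sokal error estimate: var(tau) ~ 2(2W+1)/n * tau^2
            err = tau * np.sqrt(2.0 * (2 * W + 1) / n)
            return tau, err
    return tau, float("nan")  # series too short to satisfy the window
\end{verbatim}
Feeding a long series of measurements of any observable into such a routine
yields both $\tau_{\mathrm{int}}$ and its statistical error in one pass.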
\begin{quote}
\begin{verbatim}
In any case, more detailed is needed on the computation of tau, such
as showing some autocorrelation functions and explaining how the error
bars are estimated (this could be an appendix).
\end{verbatim}
\end{quote}
Since the autocorrelation times and their uncertainties are now computed using
the method suggested above, explicit reference to that method seems sufficient
to explain how the data shown were processed. The autocorrelation functions
themselves appear to be unremarkable pure exponentials, as the energy
autocorrelation functions were also found to be by Ossola \& Sokal. Moreover,
we compute $\tau$ for six models at no fewer than seven system sizes and at
least fifteen values of the field, meaning that there are hundreds of
independent autocorrelation functions, likely beyond the scope of even an
appendix.
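Concretely, if the measured autocorrelation functions take the pure
exponential form
\[\rho(t)\simeq e^{-t/\tau},\]
then the integrated and exponential autocorrelation times essentially
coincide, and a single $\tau$ per data point summarizes each function without
loss of information.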
\begin{quote}
\begin{verbatim}
A direct computation of z with their data would be much preferable
to the scaling collapses, which are semi-quantitative. Why has this
not been attempted?
\end{verbatim}
\end{quote}
In the revised manuscript we provide rough estimates of $z$ for the models
studied, but reiterate that since the algorithm is identical to Wolff at zero
field, $z$ is simply that of the Wolff algorithm on each model. We are
principally interested in how the autocorrelation time scales as one moves
away from the zero-field critical point---where the dynamic behavior of the
algorithm is already known---in the nonzero-field direction. Remeasuring $z$
for the Wolff algorithm does not accomplish this; we believe that showing the
scaling collapses, which in turn outline the form of the underlying universal
scaling functions, does.
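For concreteness, such a collapse presupposes a finite-size scaling ansatz of
the standard form (we write $h L^{\beta\delta/\nu}$ as the usual scaling
combination for a magnetic field; the precise variable used in the manuscript
may differ):
\[\tau = L^z \mathcal F\bigl(h L^{\beta\delta/\nu}\bigr),\]
so that data for all system sizes $L$ and fields $h$ fall on a single
universal curve $\mathcal F$ when plotted in rescaled variables.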
\begin{quote}
\begin{verbatim}
As another general point, the authors should provide some technical
details of their simulations, such as the number of MC steps. For
systems other than the 2D Ising model not even the sizes simulated are
specified.
\end{verbatim}
\end{quote}
Information about system sizes has been added. Since the work involves so
many separate data points, including such details for each would greatly
increase the size of the manuscript without adding much useful information. At
least $10^6$ runs were performed for every data point involving
autocorrelation times.
\begin{quote}
\begin{verbatim}
In Fig. 1, the authors show results for the 2D Ising model up to
sizes L = 256. This is a very small size for such a simple system,
especially considering that the point of these cluster algorithms is
that there is no critical slowing down. The figure should include a
legend saying which curve corresponds to which system size.
\end{verbatim}
\end{quote}
A $512\times512$ curve has been added, along with system-size labels. We
emphasize that, unlike Ossola \& Sokal, this is not meant to be a precision
study of any of these models, for which extensive computer time might be
dedicated to measuring quantities for much larger systems, as we ourselves
have done in another preprint (arXiv:1707.03791 [cond-mat.stat-mech]). We
believe the behavior we intend to show---the way the algorithm scales as one
moves away from the critical point in the field direction---is demonstrated
well by the system sizes used.
\begin{quote}
\begin{verbatim}
Why is tau only computed for the Ising model? In Fig. 2 the
efficiency of the method is demonstrated via a more indirect method
for the other systems. In addition, this figure does not even say
which system sizes have been simulated.
\end{verbatim}
\end{quote}
Autocorrelation times have been computed for the other models studied.
System-size labels have been added to all figures.
\begin{quote}
\begin{verbatim}
As the authors say, "the goal of statistical mechanics is to compute
expectation values of observables". In this sense, why don't the
authors compute some simple physical observable, such as the energy,
and show how much precision can be achieved for a given computational
effort? At the end of the day, this is the true measure of efficiency
for any Monte Carlo method.
\end{verbatim}
\end{quote}
A great deal is known about the efficiency of the Wolff algorithm in this
regard. The algorithm described here is exactly the Wolff algorithm when there
is no coupling to an external field. We hope that our numerical experiments
convincingly demonstrate that this algorithm's efficiency scales from the
already-known zero-field Wolff behavior into nonzero field as an ordinary
scaling analysis would predict. The supplied autocorrelation times are already
an indication of how much precision can be achieved for a given computational
effort.
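Indeed, for $n$ successive correlated measurements of an observable $O$ with
intrinsic variance $\sigma_O^2$, the variance of the sample mean obeys the
standard relation (valid for $n \gg \tau_{\mathrm{int}}$)
\[\operatorname{var}(\bar O) \simeq \frac{2\tau_{\mathrm{int}}}{n}\,\sigma_O^2,\]
so the reported autocorrelation times translate directly into attainable
statistical precision.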
\end{document}