[mareframe] /trunk/gadget/simann.cc (Revision 11)
/* ABSTRACT: */
/* Simulated annealing is a global optimization method that distinguishes */
/* between different local optima. Starting from an initial point, the */
/* algorithm takes a step and the function is evaluated. When minimizing a */
/* function, any downhill step is accepted and the process repeats from */
/* this new point. An uphill step may be accepted (thus, it can escape from */
/* local optima). This uphill decision is made by the Metropolis criterion. */
/* As the optimization process proceeds, the length of the steps declines */
/* and the algorithm closes in on the global optimum. Since the algorithm */
/* makes very few assumptions regarding the function to be optimized, it */
/* is quite robust with respect to non-quadratic surfaces. The degree of */
/* robustness can be adjusted by the user. In fact, simulated annealing */
/* can be used as a local optimizer for difficult functions. */
/* */
/* The author can be contacted at h2zr1001@vm.cis.smu.edu */
//
/* This file is a translation of a Fortran code, which is an example of the */
/* Corana et al. simulated annealing algorithm for multimodal and robust */
/* optimization as implemented and modified by Goffe et al. */
/* */
/* Use the sample function from Judge with the following suggestions */
/* to get a feel for how SA works. When you've done this, you should be */
/* ready to use it on almost any function with a fair amount of expertise. */
/* 1. Run the program as is to make sure it runs okay. Take a look at */
/*    the intermediate output and see how it optimizes as temperature */
/*    (T) falls. Notice how the optimal point is reached and how */
/*    falling T reduces VM. */
/* 2. Look through the documentation to SA so the following makes a */
/*    bit of sense. In line with the paper, it shouldn't be that hard */
/*    to figure out. The core of the algorithm is described on pp. 4-6 */
/*    and on p. 28. Also see Corana et al. pp. 264-9. */
/* 3. To see the importance of different temperatures, try starting */
/*    with a very low one (say T = 10E-5). You'll see (i) it never */
/*    escapes from the local optima (in annealing terminology, it */
/*    quenches) and (ii) the step length (VM) will be quite small. This */
/*    is a key part of the algorithm: as temperature (T) falls, step */
/*    length does too. As a minor point here, note how VM is quickly */
/*    reset from its initial value. Thus, the input VM is not very */
/*    important. This is all the more reason to examine VM once the */
/*    algorithm is underway. */
/* 4. To see the effect of different parameters and their effect on */
/*    the speed of the algorithm, try RT = .95 and RT = .1. Notice the */
/*    vastly different speed of optimization. Also try NT = 20. Note */
/*    that this sample function is quite easy to optimize, so it will */
/*    tolerate big changes in these parameters. RT and NT are the */
/*    parameters one should adjust to modify the runtime of the */
/*    algorithm and its robustness. */
/* 5. Try constraining the algorithm with either LB or UB. */
//
/* Synopsis: */
/* This routine implements the continuous simulated annealing global */
/* optimization algorithm described in Corana et al.'s article */
/* "Minimizing Multimodal Functions of Continuous Variables with the */
/* "Simulated Annealing" Algorithm" in the September 1987 (vol. 13, */
/* no. 3, pp. 262-280) issue of the ACM Transactions on Mathematical */
/* Software. */
//
/* A very quick (perhaps too quick) overview of SA: */
/* SA tries to find the global optimum of an N-dimensional function. */
/* It moves both uphill and downhill and, as the optimization process */
/* proceeds, it focuses on the most promising area. */
/* To start, it randomly chooses a trial point within the step length */
/* VM (a vector of length N) of the user-selected starting point. The */
/* function is evaluated at this trial point and its value is compared */
/* to its value at the initial point. */
/* In a maximization problem, all uphill moves are accepted and the */
/* algorithm continues from that trial point. Downhill moves may be */
/* accepted; the decision is made by the Metropolis criterion. It uses T */
/* (temperature) and the size of the downhill move in a probabilistic */
/* manner. The smaller T and the size of the downhill move are, the more */
/* likely that move will be accepted. If the trial is accepted, the */
/* algorithm moves on from that point. If it is rejected, another point */
/* is chosen instead for a trial evaluation. */
/* Each element of VM is periodically adjusted so that half of all */
/* function evaluations in that direction are accepted. */
/* A fall in T is imposed upon the system with the RT variable by */
/* T(i+1) = RT*T(i), where i is the ith iteration. Thus, as T declines, */
/* downhill moves are less likely to be accepted and the percentage of */
/* rejections rises. Given the scheme for the selection of VM, VM falls. */
/* Thus, as T declines, VM falls and SA focuses upon the most promising */
/* area for optimization. */
//
/* The importance of the parameter T: */
/* The parameter T is crucial in using SA successfully. It influences */
/* VM, the step length over which the algorithm searches for optima. For */
/* a small initial T, the step length may be too small; thus not enough */
/* of the function might be evaluated to find the global optima. The user */
/* should carefully examine VM in the intermediate output (set IPRINT = */
/* 1) to make sure that VM is appropriate. The relationship between the */
/* initial temperature and the resulting step length is function */
/* dependent. */
/* To determine a starting temperature that is consistent with */
/* optimizing a function, it is worthwhile to make a trial run first. Set */
/* RT = 1.5 and T = 1.0. With RT > 1.0, the temperature increases and VM */
/* rises as well. Then select the T that produces a large enough VM. */
//
/* For modifications to the algorithm and many details on its use */
/* (particularly for econometric applications), see Goffe, Ferrier */
/* and Rogers, "Global Optimization of Statistical Functions with */
/* Simulated Annealing," Journal of Econometrics (forthcoming). */
/* For a pre-publication copy, contact */
/* Bill Goffe */
/* Department of Economics */
/* Southern Methodist University */
/* Dallas, TX 75275 */
/* h2zr1001 @ smuvm1 (Bitnet) */
/* h2zr1001 @ vm.cis.smu.edu (Internet) */
//
/* As far as possible, the parameters here have the same names as in */
/* the description of the algorithm on pp. 266-8 of Corana et al. */
//
/* Input Parameters: */
/* Note: The suggested values generally come from Corana et al. To */
/* drastically reduce runtime, see Goffe et al., pp. 17-8 for */
/* suggestions on choosing the appropriate RT and NT. */
/* N - Number of variables in the function to be optimized. (INT) */
/* X - The starting values for the variables of the function to be */
/*     optimized. (DP(N)) */
/* MAX - Denotes whether the function should be maximized or */
/*     minimized. A true value denotes maximization while a false */
/*     value denotes minimization. */
/* RT - The temperature reduction factor. The value suggested by */
/*     Corana et al. is .85. See Goffe et al. for more advice. (DP) */
/* EPS - Error tolerance for termination. If the final function */
/*     values from the last NEPS temperatures differ from the */
/*     corresponding value at the current temperature by less than */
/*     EPS, and the final function value at the current temperature */
/*     differs from the current optimal function value by less than */
/*     EPS, execution terminates and IER = 0 is returned. (EP) */
/* NS - Number of cycles. After NS*N function evaluations, each element */
/*     of VM is adjusted so that approximately half of all function */
/*     evaluations are accepted. The suggested value is 20. (INT) */
/* NT - Number of iterations before temperature reduction. After */
/*     NT*NS*N function evaluations, temperature (T) is changed */
/*     by the factor RT. The value suggested by Corana et al. is */
/*     MAX(100, 5*N). See Goffe et al. for further advice. (INT) */
/* NEPS - Number of final function values used to decide upon */
/*     termination. See EPS. The suggested value is 4. (INT) */
/* MAXEVL - The maximum number of function evaluations. If it is */
/*     exceeded, IER = 1. (INT) */
/* LB - The lower bounds for the allowable solution variables. (DP(N)) */
/* UB - The upper bounds for the allowable solution variables. (DP(N)) */
/*     If the algorithm chooses X(I) .LT. LB(I) or X(I) .GT. UB(I), */
/*     I = 1, N, a point inside the bounds is randomly selected. This */
/*     focuses the algorithm on the region inside UB and LB. */
/*     Unless the user wishes to concentrate the search on a */
/*     particular region, UB and LB should be set to very large */
/*     positive and negative values, respectively. Note that the */
/*     starting vector X should be inside this region. Also note that */
/*     LB and UB are fixed in position, while VM is centered on the */
/*     last accepted trial set of variables that optimizes the */
/*     function. */
/* C - Vector that controls the step length adjustment. The suggested */
/*     value for all elements is 2.0. (DP(N)) */
/* T - On input, the initial temperature. See Goffe et al. for advice. */
/*     On output, the final temperature. (DP) */
/* VM - The step length vector. On input it should encompass the */
/*     region of interest given the starting value X. For point */
/*     X(I), the next trial point is selected from X(I) - VM(I) */
/*     to X(I) + VM(I). Since VM is adjusted so that about half */
/*     of all points are accepted, the input value is not very */
/*     important (i.e. if the value is off, SA adjusts VM to the */
/*     correct value). (DP(N)) */
//
/* Output Parameters: */
/* XOPT - The variables that optimize the function. (DP(N)) */
/* FOPT - The optimal value of the function. (DP) */
//
/* JMB: this has been modified to work with the gadget object structure. */
/* This means that the function has been replaced by a call to the */
/* Ecosystem object, and we can use the vector objects that have been */
/* defined. */

#include "gadget.h"
#include "optinfo.h"
#include "mathfunc.h"
#include "doublevector.h"
#include "intvector.h"
#include "errorhandler.h"
#include "ecosystem.h"
#include "global.h"
#include "seq_optimize_template.h"
#ifdef GADGET_OPENMP
#include <omp.h>
#endif

extern Ecosystem* EcoSystem;
#ifndef NO_OPENMP
extern Ecosystem** EcoSystems;
#endif
//extern StochasticData* data;

//void OptInfoSimann::OptimiseLikelihood() {
//
//  //set initial values
//  int nacc = 0;     //The number of accepted function evaluations
//  int nrej = 0;     //The number of rejected function evaluations
//  int naccmet = 0;  //The number of metropolis accepted function evaluations
//
//  double tmp, p, pp, ratio, nsdiv;
//  double fopt, funcval, trialf;
//  int a, i, j, k, l, offset, quit;
//  int rchange, rcheck, rnumber;  //Used to randomise the order of the parameters
//
//  handle.logMessage(LOGINFO, "\nStarting Simulated Annealing optimisation algorithm\n");
//  int nvars = EcoSystem->numOptVariables();
//  DoubleVector x(nvars);
//  DoubleVector init(nvars);
//  DoubleVector trialx(nvars, 0.0);
//  DoubleVector bestx(nvars);
//  DoubleVector scalex(nvars);
//  DoubleVector lowerb(nvars);
//  DoubleVector upperb(nvars);
//  DoubleVector fstar(tempcheck);
//  DoubleVector vm(nvars, vminit);
//  IntVector param(nvars, 0);
//  IntVector nacp(nvars, 0);
//
//  EcoSystem->resetVariables();  //JMB need to reset variables in case they have been scaled
//  if (scale)
//    EcoSystem->scaleVariables();
//  EcoSystem->getOptScaledValues(x);
//  EcoSystem->getOptLowerBounds(lowerb);
//  EcoSystem->getOptUpperBounds(upperb);
//  EcoSystem->getOptInitialValues(init);
//
//  for (i = 0; i < nvars; i++) {
//    bestx[i] = x[i];
//    param[i] = i;
//  }
//
//  if (scale) {
//    for (i = 0; i < nvars; i++) {
//      scalex[i] = x[i];
//      // Scaling the bounds, because the parameters are scaled
//      lowerb[i] = lowerb[i] / init[i];
//      upperb[i] = upperb[i] / init[i];
//      if (lowerb[i] > upperb[i]) {
//        tmp = lowerb[i];
//        lowerb[i] = upperb[i];
//        upperb[i] = tmp;
//      }
//    }
//  }
//
//  //funcval is the function value at x
//  funcval = EcoSystem->SimulateAndUpdate(x);
//  if (funcval != funcval) {  //check for NaN
//    handle.logMessage(LOGINFO, "Error starting Simulated Annealing optimisation with f(x) = infinity");
//    converge = -1;
//    iters = 1;
//    return;
//  }
//
//  //the function is to be minimised so switch the sign of funcval (and trialf)
//  funcval = -funcval;
//  offset = EcoSystem->getFuncEval();  //number of function evaluations done before loop
//  nacc++;
//  cs /= lratio;  //JMB save processing time
//  nsdiv = 1.0 / ns;
//  fopt = funcval;
//  for (i = 0; i < tempcheck; i++)
//    fstar[i] = funcval;
//
//  //Start the main loop. Note that it terminates if
//  //(i) the algorithm successfully optimises the function or
//  //(ii) there are too many function evaluations
//  while (1) {
//    for (a = 0; a < nt; a++) {
//      //Randomize the order of the parameters once in a while, to avoid
//      //the order having an influence on which changes are accepted
//      rchange = 0;
//      while (rchange < nvars) {
//        rnumber = rand_r(&seedP) % nvars;
//        rcheck = 1;
//        for (i = 0; i < rchange; i++)
//          if (param[i] == rnumber)
//            rcheck = 0;
//        if (rcheck) {
//          param[rchange] = rnumber;
//          rchange++;
//        }
//      }
//
//      for (j = 0; j < ns; j++) {
//        for (l = 0; l < nvars; l++) {
//          //Generate trialx, the trial value of x
//          newValue(nvars, l, param, trialx, x, lowerb, upperb, vm);
//
//          //Evaluate the function with the trial point trialx and return as -trialf
//          trialf = EcoSystem->SimulateAndUpdate(trialx);
//          trialf = -trialf;
//
//          //If too many function evaluations occur, terminate the algorithm
//          iters = EcoSystem->getFuncEval() - offset;
//          if (iters > simanniter) {
//            handle.logMessage(LOGINFO, "\nStopping Simulated Annealing optimisation algorithm\n");
//            handle.logMessage(LOGINFO, "The optimisation stopped after", iters, "function evaluations");
//            handle.logMessage(LOGINFO, "The temperature was reduced to", t);
//            handle.logMessage(LOGINFO, "The optimisation stopped because the maximum number of function evaluations");
//            handle.logMessage(LOGINFO, "was reached and NOT because an optimum was found for this run");
//            handle.logMessage(LOGINFO, "Number of directly accepted points", nacc);
//            handle.logMessage(LOGINFO, "Number of metropolis accepted points", naccmet);
//            handle.logMessage(LOGINFO, "Number of rejected points", nrej);
//
//            score = EcoSystem->SimulateAndUpdate(bestx);
//            handle.logMessage(LOGINFO, "\nSimulated Annealing finished with a likelihood score of", score);
//            return;
//          }
//
//          //Accept the new point if the new function value is better
//          if ((trialf - funcval) > verysmall) {
//            for (i = 0; i < nvars; i++)
//              x[i] = trialx[i];
//            funcval = trialf;
//            nacc++;
//            nacp[param[l]]++;  //JMB - not sure about this ...
//
//          } else {
//            //Accept according to the metropolis condition
//            p = expRep((trialf - funcval) / t);
//            pp = randomNumber(&seedM);
//            if (pp < p) {
//              //Accept point
//              for (i = 0; i < nvars; i++)
//                x[i] = trialx[i];
//              funcval = trialf;
//              naccmet++;
//              nacp[param[l]]++;
//            } else {
//              //Reject point
//              nrej++;
//            }
//          }
//
//          // JMB added check for really silly values
//          if (isZero(trialf)) {
//            handle.logMessage(LOGINFO, "Error in Simulated Annealing optimisation after", iters, "function evaluations, f(x) = 0");
//            converge = -1;
//            return;
//          }
//
//          //If greater than any other point, record as new optimum
//          if ((trialf > fopt) && (trialf == trialf)) {
//            for (i = 0; i < nvars; i++)
//              bestx[i] = trialx[i];
//            fopt = trialf;
//
//            if (scale) {
//              for (i = 0; i < nvars; i++)
//                scalex[i] = bestx[i] * init[i];
//              EcoSystem->storeVariables(-fopt, scalex);
//            } else
//              EcoSystem->storeVariables(-fopt, bestx);
//
//            handle.logMessage(LOGINFO, "\nNew optimum found after", iters, "function evaluations");
//            handle.logMessage(LOGINFO, "The likelihood score is", -fopt, "at the point");
//            EcoSystem->writeBestValues();
//          }
//        }
//      }
//
//      //Adjust vm so that approximately half of all evaluations are accepted
//      for (i = 0; i < nvars; i++) {
//        ratio = nsdiv * nacp[i];
//        nacp[i] = 0;
//        if (ratio > uratio) {
//          vm[i] = vm[i] * (1.0 + cs * (ratio - uratio));
//        } else if (ratio < lratio) {
//          vm[i] = vm[i] / (1.0 + cs * (lratio - ratio));
//        }
//
//        if (vm[i] < rathersmall)
//          vm[i] = rathersmall;
//        if (vm[i] > (upperb[i] - lowerb[i]))
//          vm[i] = upperb[i] - lowerb[i];
//      }
//    }
//
//    //Check termination criteria
//    for (i = tempcheck - 1; i > 0; i--)
//      fstar[i] = fstar[i - 1];
//    fstar[0] = funcval;
//
//    quit = 0;
//    if (fabs(fopt - funcval) < simanneps) {
//      quit = 1;
//      for (i = 0; i < tempcheck - 1; i++)
//        if (fabs(fstar[i + 1] - fstar[i]) > simanneps)
//          quit = 0;
//    }
//
//    handle.logMessage(LOGINFO, "Checking convergence criteria after", iters, "function evaluations ...");
//
//    //Terminate SA if appropriate
//    if (quit) {
//      handle.logMessage(LOGINFO, "\nStopping Simulated Annealing optimisation algorithm\n");
//      handle.logMessage(LOGINFO, "The optimisation stopped after", iters, "function evaluations");
//      handle.logMessage(LOGINFO, "The temperature was reduced to", t);
//      handle.logMessage(LOGINFO, "The optimisation stopped because an optimum was found for this run");
//      handle.logMessage(LOGINFO, "Number of directly accepted points", nacc);
//      handle.logMessage(LOGINFO, "Number of metropolis accepted points", naccmet);
//      handle.logMessage(LOGINFO, "Number of rejected points", nrej);
//
//      converge = 1;
//      score = EcoSystem->SimulateAndUpdate(bestx);
//      handle.logMessage(LOGINFO, "\nSimulated Annealing finished with a likelihood score of", score);
//      return;
//    }
//
//    //If the termination criteria are not met, prepare for another loop.
//    t *= rt;
//    if (t < rathersmall)
//      t = rathersmall;  //JMB make sure temperature doesn't get too small
//
//    handle.logMessage(LOGINFO, "Reducing the temperature to", t);
//    funcval = fopt;
//    for (i = 0; i < nvars; i++)
//      x[i] = bestx[i];
//  }
//}

445 :     #ifdef GADGET_OPENMP
446 :     void OptInfoSimann::OptimiseLikelihoodOMP() {
447 :    
448 : agomez 1 //set initial values
449 :     int nacc = 0; //The number of accepted function evaluations
450 :     int nrej = 0; //The number of rejected function evaluations
451 :     int naccmet = 0; //The number of metropolis accepted function evaluations
452 :    
453 :     double tmp, p, pp, ratio, nsdiv;
454 :     double fopt, funcval, trialf;
455 :     int a, i, j, k, l, offset, quit;
456 :     int rchange, rcheck, rnumber; //Used to randomise the order of the parameters
457 :    
458 : ulcessvp 11 struct Storage {
459 :     DoubleVector trialx;
460 :     double newLikelihood;
461 :     };
462 :    
463 : agomez 1 handle.logMessage(LOGINFO, "\nStarting Simulated Annealing optimisation algorithm\n");
464 :     int nvars = EcoSystem->numOptVariables();
465 :     DoubleVector x(nvars);
466 :     DoubleVector init(nvars);
467 :     DoubleVector trialx(nvars, 0.0);
468 :     DoubleVector bestx(nvars);
469 :     DoubleVector scalex(nvars);
470 :     DoubleVector lowerb(nvars);
471 :     DoubleVector upperb(nvars);
472 :     DoubleVector fstar(tempcheck);
473 :     DoubleVector vm(nvars, vminit);
474 :     IntVector param(nvars, 0);
475 :     IntVector nacp(nvars, 0);
476 :    
477 :     EcoSystem->resetVariables(); //JMB need to reset variables in case they have been scaled
478 :     if (scale)
479 :     EcoSystem->scaleVariables();
480 :     EcoSystem->getOptScaledValues(x);
481 :     EcoSystem->getOptLowerBounds(lowerb);
482 :     EcoSystem->getOptUpperBounds(upperb);
483 :     EcoSystem->getOptInitialValues(init);
484 :    
485 : ulcessvp 11 for (i = 0; i < nvars; ++i) {
486 : agomez 1 bestx[i] = x[i];
487 :     param[i] = i;
488 :     }
489 :    
490 :     if (scale) {
491 : ulcessvp 11 for (i = 0; i < nvars; ++i) {
492 : agomez 1 scalex[i] = x[i];
493 :     // Scaling the bounds, because the parameters are scaled
494 :     lowerb[i] = lowerb[i] / init[i];
495 :     upperb[i] = upperb[i] / init[i];
496 :     if (lowerb[i] > upperb[i]) {
497 :     tmp = lowerb[i];
498 :     lowerb[i] = upperb[i];
499 :     upperb[i] = tmp;
500 :     }
501 :     }
502 :     }
503 :    
504 :     //funcval is the function value at x
505 :     funcval = EcoSystem->SimulateAndUpdate(x);
506 :     if (funcval != funcval) { //check for NaN
507 :     handle.logMessage(LOGINFO, "Error starting Simulated Annealing optimisation with f(x) = infinity");
508 :     converge = -1;
509 :     iters = 1;
510 : ulcessvp 11 // EcoSystem->writeOptValuesOMP();
511 : agomez 1 return;
512 :     }
513 :    
514 :     //the function is to be minimised so switch the sign of funcval (and trialf)
515 :     funcval = -funcval;
516 :     offset = EcoSystem->getFuncEval(); //number of function evaluations done before loop
517 :     nacc++;
518 :     cs /= lratio; //JMB save processing time
519 :     nsdiv = 1.0 / ns;
520 :     fopt = funcval;
521 : ulcessvp 11 for (i = 0; i < tempcheck; ++i)
522 : agomez 1 fstar[i] = funcval;
523 :    
524 : ulcessvp 11
525 :    
526 :     int numThr = omp_get_max_threads ( );
527 :     int bestId=0;
528 :     int ini=0;
529 :    
530 :     Storage* storage = new Storage[numThr];
531 :    
532 :     for (i=0; i<numThr; ++i)
533 :     storage[i].trialx = trialx;
534 :    
535 : agomez 1 //Start the main loop. Note that it terminates if
536 :     //(i) the algorithm succesfully optimises the function or
537 :     //(ii) there are too many function evaluations
538 : ulcessvp 11 int aux;
539 :    
540 :     DoubleVector vns(nvars, 0);
541 :     int ns_ = ceil(numThr/2.);
542 :     double res;
543 :     aux=0;
544 : agomez 1 while (1) {
545 : ulcessvp 11 for (a = 0; a < nt; ++a) {
546 : agomez 1 //Randomize the order of the parameters once in a while, to avoid
547 :     //the order having an influence on which changes are accepted
548 :     rchange = 0;
549 :     while (rchange < nvars) {
550 : ulcessvp 11 rnumber = rand_r(&seedP) % nvars;
551 : agomez 1 rcheck = 1;
552 : ulcessvp 11 for (i = 0; i < rchange; ++i)
553 : agomez 1 if (param[i] == rnumber)
554 :     rcheck = 0;
555 :     if (rcheck) {
556 :     param[rchange] = rnumber;
557 :     rchange++;
558 :     }
559 :     }
560 :    
561 :    
562 : ulcessvp 11 for (j = 0; j < (ns*ns_); ++j) {
563 :     for (l = ini; l < nvars; l+=numThr) {
564 :     for (i=0; i<numThr;++i)
565 :     {
566 :     if ((l+i) < nvars)
567 :     newValue(nvars, l+i, param, storage[i].trialx, x, lowerb, upperb, vm);
568 :     else {
569 :     newValue(nvars, ini, param, storage[i].trialx, x, lowerb, upperb, vm);
570 :     ini++;
571 :     if (ini >= nvars)
572 :     ini=0;
573 :     }
574 :     }
575 : agomez 1
576 : ulcessvp 11 # pragma omp parallel private(res)
577 :     {
578 :     //Evaluate the function with the trial point trialx and return as -trialf
579 :     int id = omp_get_thread_num ();
580 :     res = EcoSystems[id]->SimulateAndUpdate(storage[id].trialx);
581 :     storage[id].newLikelihood = -res;
582 :     }
583 :     //best value from omp
584 :     trialf = storage[0].newLikelihood;
585 :     bestId=0;
586 :     for (i=0;i<numThr;++i)
587 :     {
588 :     if (storage[i].newLikelihood > trialf)
589 :     {
590 :     trialf=storage[i].newLikelihood;
591 :     bestId=i;
592 :     }
593 :     k = param[(l+i)%nvars];
594 : agomez 1
595 : ulcessvp 11 if ((storage[i].newLikelihood - funcval) > verysmall)
596 :     {
597 :     nacp[k]++;
598 :     aux++;
599 :     vns[k]++;
600 :     }
601 :     else {
602 :     //Accept according to metropolis condition
603 :     p = expRep((storage[i].newLikelihood - funcval) / t);
604 :     pp = randomNumber(&seedM);
605 :     if (pp < p)
606 :     aux++;
607 :     else {
608 :     vns[k]++;
609 :     nrej++;
610 :     }
611 :     }
612 : agomez 1
613 : ulcessvp 11 if (vns[k] >= ns) {
614 :     ratio = nsdiv * nacp[k];
615 :     nacp[k] = 0;
616 :     if (ratio > uratio) {
617 :     vm[k] = vm[k] * (1.0 + cs * (ratio - uratio));
618 :     } else if (ratio < lratio) {
619 :     vm[k] = vm[k] / (1.0 + cs * (lratio - ratio));
620 :     }
621 :     if (vm[k] < rathersmall){
622 :     vm[k] = rathersmall;
623 :     }
624 :     if (vm[k] > (upperb[k] - lowerb[k]))
625 :     {
626 :     vm[k] = upperb[k] - lowerb[k];
627 :     }
628 :     vns[k]=0;
629 :     }
630 :     }
631 :     aux--;
632 :     iters = (EcoSystems[bestId]->getFuncEval() * numThr) -aux;
633 :     if (iters > simanniter) {
634 :     handle.logMessage(LOGINFO, "\nStopping Simulated Annealing optimisation algorithm\n");
635 :     handle.logMessage(LOGINFO, "The optimisation stopped after", iters, "function evaluations");
636 :     handle.logMessage(LOGINFO, "The temperature was reduced to", t);
637 :     handle.logMessage(LOGINFO, "The optimisation stopped because the maximum number of function evaluations");
638 :     handle.logMessage(LOGINFO, "was reached and NOT because an optimum was found for this run");
639 :     handle.logMessage(LOGINFO, "Number of directly accepted points", nacc);
640 :     handle.logMessage(LOGINFO, "Number of metropolis accepted points", naccmet);
641 :     handle.logMessage(LOGINFO, "Number of rejected points", nrej);
642 : agomez 1
643 : ulcessvp 11 score = EcoSystem->SimulateAndUpdate(bestx);
644 :     handle.logMessage(LOGINFO, "\nSimulated Annealing finished with a likelihood score of", score);
645 :     delete[] storage;
646 :     return;
647 : agomez 1 }
648 :    
649 :     //Accept the new point if the new function value better
650 :     if ((trialf - funcval) > verysmall) {
651 : ulcessvp 11 for (i = 0; i < nvars; ++i)
652 :     x[i] = storage[bestId].trialx[i];
653 : agomez 1 funcval = trialf;
654 :     nacc++;
655 : ulcessvp 11 // vns[param[l+bestId]]++;
656 :     //nacp[param[l+bestId]]+=numThr; //JMB - not sure about this ...
657 :     // for (i = 0; i < numThr; ++i)
658 :     // nacp[param[l+i]]++;
659 :     // nacp[param[l+bestId]]++;
660 : agomez 1
661 :     } else {
662 :     //Accept according to metropolis condition
663 :     p = expRep((trialf - funcval) / t);
664 : ulcessvp 11 pp = randomNumber(&seedP);
665 : agomez 1 if (pp < p) {
666 :     //Accept point
667 : ulcessvp 11 for (i = 0; i < nvars; ++i)
668 :     x[i] = storage[bestId].trialx[i];
669 : agomez 1 funcval = trialf;
670 :     naccmet++;
671 : ulcessvp 11 nacp[param[(l+bestId)%nvars]]++;
672 :     // vns[param[l+bestId]]++;
673 :     // nacp[param[l+bestId]]+=numThr;
674 :     // for (i = 0; i < numThr; ++i)
675 :     // nacp[param[l+i]]++;
676 :     // nacp[param[l+bestId]]++;
677 : agomez 1 } else {
678 :     //Reject point
679 :     nrej++;
680 :     }
681 :     }
682 :     // JMB added check for really silly values
683 :     if (isZero(trialf)) {
684 :     handle.logMessage(LOGINFO, "Error in Simulated Annealing optimisation after", iters, "function evaluations, f(x) = 0");
685 :     converge = -1;
686 : ulcessvp 11 // EcoSystem->writeOptValuesOMP();
687 :     delete[] storage;
688 : agomez 1 return;
689 :     }
690 :    
691 :     //If greater than any other point, record as new optimum
692 :     if ((trialf > fopt) && (trialf == trialf)) { //trialf == trialf is false only for NaN
693 : ulcessvp 11 for (i = 0; i < nvars; ++i)
694 :     bestx[i] = storage[bestId].trialx[i];
695 : agomez 1 fopt = trialf;
696 :    
697 :     if (scale) {
698 : ulcessvp 11 for (i = 0; i < nvars; ++i)
699 : agomez 1 scalex[i] = bestx[i] * init[i];
700 : ulcessvp 11 //FIXME EcoSystems[]->
701 : agomez 1 EcoSystem->storeVariables(-fopt, scalex);
702 :     } else
703 : ulcessvp 11 EcoSystem->storeVariables(-fopt, bestx);
704 : agomez 1
705 :     handle.logMessage(LOGINFO, "\nNew optimum found after", iters, "function evaluations");
706 :     handle.logMessage(LOGINFO, "The likelihood score is", -fopt, "at the point");
707 :     EcoSystem->writeBestValues();
708 :     }
709 :     }
710 :     }
711 :    
712 :     //Adjust vm so that approximately half of all evaluations are accepted
713 : ulcessvp 11 // for (i = 0; i < nvars; ++i) {
714 :     //// cout << "nacp["<<i<<"]:"<<nacp[i]<<endl;
715 :     // ratio = nsdiv * nacp[i];
716 :     // nacp[i] = 0;
717 :     // if (ratio > uratio) {
718 :     // vm[i] = vm[i] * (1.0 + cs * (ratio - uratio));
719 :     // } else if (ratio < lratio) {
720 :     // vm[i] = vm[i] / (1.0 + cs * (lratio - ratio));
721 :     // }
722 :     //
723 :     // if (vm[i] < rathersmall)
724 :     // vm[i] = rathersmall;
725 :     // if (vm[i] > (upperb[i] - lowerb[i]))
726 :     // vm[i] = upperb[i] - lowerb[i];
727 :     //
728 :     //// cout << "vm["<<i<<"]:"<<vm[i]<<endl;
729 :     //
730 :     // }
731 : agomez 1 }
732 :    
733 :     //Check termination criteria
734 :     for (i = tempcheck - 1; i > 0; i--)
735 :     fstar[i] = fstar[i - 1];
736 :     fstar[0] = funcval;
737 :    
738 :     quit = 0;
739 :     if (fabs(fopt - funcval) < simanneps) {
740 :     quit = 1;
741 : ulcessvp 11 for (i = 0; i < tempcheck - 1; ++i)
742 : agomez 1 if (fabs(fstar[i + 1] - fstar[i]) > simanneps)
743 :     quit = 0;
744 :     }
745 :    
746 :     handle.logMessage(LOGINFO, "Checking convergence criteria after", iters, "function evaluations ...");
747 :    
748 :     //Terminate SA if appropriate
749 :     if (quit) {
750 :     handle.logMessage(LOGINFO, "\nStopping Simulated Annealing optimisation algorithm\n");
751 :     handle.logMessage(LOGINFO, "The optimisation stopped after", iters, "function evaluations");
752 :     handle.logMessage(LOGINFO, "The temperature was reduced to", t);
753 :     handle.logMessage(LOGINFO, "The optimisation stopped because an optimum was found for this run");
754 :     handle.logMessage(LOGINFO, "Number of directly accepted points", nacc);
755 :     handle.logMessage(LOGINFO, "Number of metropolis accepted points", naccmet);
756 :     handle.logMessage(LOGINFO, "Number of rejected points", nrej);
757 :    
758 :     converge = 1;
759 :     score = EcoSystem->SimulateAndUpdate(bestx);
760 :     handle.logMessage(LOGINFO, "\nSimulated Annealing finished with a likelihood score of", score);
761 : ulcessvp 11 // EcoSystem->writeOptValuesOMP();
762 :     delete[] storage;
763 : agomez 1 return;
764 :     }
765 :    
766 :     //If the termination criteria are not met, prepare for another loop.
767 :     t *= rt;
768 :     if (t < rathersmall)
769 :     t = rathersmall; //JMB make sure temperature doesn't get too small
770 :    
771 :     handle.logMessage(LOGINFO, "Reducing the temperature to", t);
772 :     funcval = fopt;
773 : ulcessvp 11 for (i = 0; i < nvars; ++i)
774 : agomez 1 x[i] = bestx[i];
775 :     }
776 : ulcessvp 11 // EcoSystem->writeOptValuesOMP();
777 : agomez 1 }
778 : ulcessvp 11 #endif
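The cooling step above reduces the temperature geometrically (`t *= rt`) and clamps it at a small floor so the Metropolis acceptance probability never degenerates to a division by zero. A minimal standalone sketch of that schedule; `FLOOR` here is an illustrative stand-in for Gadget's `rathersmall` constant, not its actual value:

```cpp
#include <algorithm>

// Illustrative stand-in for Gadget's rathersmall constant.
const double FLOOR = 1e-10;

// One geometric cooling step: multiply by the reduction factor rt,
// but never let the temperature fall below the floor.
double coolOnce(double t, double rt) {
    return std::max(t * rt, FLOOR);
}

// Apply n cooling steps starting from t0.
double coolN(double t0, double rt, int n) {
    double t = t0;
    for (int i = 0; i < n; ++i)
        t = coolOnce(t, rt);
    return t;
}
```

With `rt` close to 1 the temperature decays slowly and more uphill moves are accepted late into the run; smaller `rt` values cool faster and behave more like a local search.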
779 :    
780 :     void OptInfoSimann::newValue(int nvars, int l, IntVector& param, DoubleVector& trialx,
781 :     DoubleVector& x, DoubleVector& lowerb, DoubleVector& upperb, DoubleVector& vm)
782 :     {
783 :     int i, k;
784 :     // double old = trialx[param[l]];
785 :     // cout << "[" << endl;
786 :     for (i = 0; i < nvars; ++i) {
787 :     if (i == param[l]) {
788 :     trialx[i] = x[i] + ((randomNumber(&seed) * 2.0) - 1.0) * vm[i];
789 :    
790 :     //If trialx is out of bounds, try again until we find a point that is OK
791 :     if ((trialx[i] < lowerb[i]) || (trialx[i] > upperb[i])) {
792 :     //JMB - this used to just select a random point between the bounds
793 :     k = 0;
794 :     while ((trialx[i] < lowerb[i]) || (trialx[i] > upperb[i])) {
795 :     trialx[i] = x[i] + ((randomNumber(&seed) * 2.0) - 1.0) * vm[i];
796 :     k++;
797 :     if (k > 10) //after 10 failed tries inside the bounds, fall back to a uniform draw
798 :     trialx[i] = lowerb[i] + (upperb[i] - lowerb[i]) * randomNumber(&seed);
799 :     }
800 :     }
801 :     } else
802 :     trialx[i] = x[i];
803 :     // cout <<" " << trialx[i];
804 :     }
805 :    
806 :     // cout << endl<< "old[" <<param[l] << "]:" << old <<"-" << trialx[param[l]]<< " - " << vm[param[l]]<<endl;
807 :     }
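`newValue` above perturbs the selected parameter by a uniform step of width `vm` and, after ten failed attempts to land inside the bounds, falls back to a uniform draw over the whole interval. A simplified single-parameter sketch of the same retry logic, using `std::mt19937` in place of Gadget's `randomNumber`:

```cpp
#include <random>

// Perturb x by a uniform step of half-width vm; if the result leaves
// [lo, hi] after 10 retries, fall back to a uniform draw over the
// whole interval (which is guaranteed to be in bounds).
double perturbInBounds(double x, double vm, double lo, double hi,
                       std::mt19937& rng) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double trial = x + (u01(rng) * 2.0 - 1.0) * vm;
    int k = 0;
    while (trial < lo || trial > hi) {
        trial = x + (u01(rng) * 2.0 - 1.0) * vm;
        if (++k > 10)
            trial = lo + (hi - lo) * u01(rng);
    }
    return trial;
}
```

The fallback keeps the loop from spinning indefinitely when `vm` is much wider than the feasible interval, at the cost of briefly ignoring the current point.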
808 :    
809 :    
810 :     //Generate trialx, the trial value of x
811 :     void newValue(int nvars, int l, IntVector& param, DoubleVector& trialx,
812 :     DoubleVector& x, DoubleVector& lowerb, DoubleVector& upperb, DoubleVector& vm, unsigned* seed)
813 :     {
814 :     int i, k;
815 :     // double old = trialx[param[l]];
816 :     // cout << "[" << endl;
817 :     for (i = 0; i < nvars; ++i) {
818 :     if (i == param[l]) {
819 :     trialx[i] = x[i] + ((randomNumber(&*seed) * 2.0) - 1.0) * vm[i];
820 :    
821 :     //If trialx is out of bounds, try again until we find a point that is OK
822 :     if ((trialx[i] < lowerb[i]) || (trialx[i] > upperb[i])) {
823 :     //JMB - this used to just select a random point between the bounds
824 :     k = 0;
825 :     while ((trialx[i] < lowerb[i]) || (trialx[i] > upperb[i])) {
826 :     trialx[i] = x[i] + ((randomNumber(&*seed) * 2.0) - 1.0) * vm[i];
827 :     k++;
828 :     if (k > 10) //after 10 failed tries inside the bounds, fall back to a uniform draw
829 :     trialx[i] = lowerb[i] + (upperb[i] - lowerb[i]) * randomNumber(&*seed);
830 :     }
831 :     }
832 :     } else
833 :     trialx[i] = x[i];
834 :     // cout << trialx[i]<<" ";
835 :     }
836 :    
837 :     // cout << endl <<l<< "-old[" <<param[l] << "]:" << old <<"-" << trialx[param[l]]<< " - " << vm[param[l]]<<endl;
838 :     }
839 :    
840 :     void buildNewParams_f(Siman& seed, DoubleVector& params) {
841 :    
842 :     newValue(seed.getNvars(), seed.getL(), seed.getParam(), params, seed.getX(),
843 :     seed.getLowerb(), seed.getUpperb(), seed.getVm(), seed.getSeed());
844 :    
845 :     }
846 :    
847 :     /// Represents the function that computes how good the parameters are
848 :     double evaluate_f(const DoubleVector& params) {
849 :     double trialf;
850 :     #ifndef NO_OPENMP
851 :     int id = omp_get_thread_num ();
852 :     trialf = EcoSystems[id]->SimulateAndUpdate(params);
853 :     // cout << id << "-";
854 :     #else
855 :     trialf = EcoSystem->SimulateAndUpdate(params);
856 :     #endif
857 :     // cout << "trailf:" << trialf<< endl;
858 :     return -trialf;
859 :     }
860 :    
861 :     struct ControlClass {
862 :    
863 :     void adjustVm(Siman& seed) {
864 :     //Adjust vm so that approximately half of all evaluations are accepted
865 :     int i;
866 :     double ratio, nsdiv = seed.getNsdiv();
867 :    
868 :     DoubleVector vm = seed.getVm();
869 :     DoubleVector upperb = seed.getUpperb();
870 :     DoubleVector lowerb = seed.getLowerb();
871 :    
872 :     double uratio = seed.getUratio();
873 :     double lratio = seed.getLratio();
874 :    
875 :     // cout << "uratio:" << uratio << endl;
876 :     // cout << "lratio:" << lratio << endl;
877 :    
878 :     //cout << "adjust vm" << endl;
879 :     for (i = 0; i < seed.getNvars(); i++) {
880 :     // cout << "nacp[" << i << "]:" << seed.getNacp(i) << endl;
881 :     ratio = nsdiv * seed.getNacp(i);
882 :     seed.setNacp(i,0);
883 :     if (ratio > uratio) {
884 :     (vm)[i] = (vm)[i] * (1.0 + seed.getCs() * (ratio - seed.getUratio()));
885 :     } else if (ratio < lratio) {
886 :     (vm)[i] = (vm)[i] / (1.0 + seed.getCs() * (seed.getLratio() - ratio));
887 :     }
888 :    
889 :     if ((vm)[i] < rathersmall)
890 :     (vm)[i] = rathersmall;
891 :     if ((vm)[i] > (upperb[i] - lowerb[i]))
892 :     (vm)[i] = upperb[i] - lowerb[i];
893 :     }
894 :     seed.setVm(vm);
895 :     }
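`adjustVm` above implements the Corana step-length rule: if more than `uratio` of the recent trials in a direction were accepted, the step `vm[i]` is widened; if fewer than `lratio`, it is narrowed; and the result is clamped to the parameter's range. A self-contained sketch of the per-direction update; the 0.6/0.4 defaults are the conventional Corana values and an assumption here, as is the tiny lower clamp:

```cpp
#include <algorithm>

// One Corana step-length update for a single direction.
// ratio : fraction of recent trials that were accepted
// cs    : adjustment speed (Gadget pre-divides its cs by lratio)
// range : upperb - lowerb for this parameter
double adjustStep(double vm, double ratio, double cs, double range,
                  double uratio = 0.6, double lratio = 0.4) {
    if (ratio > uratio)
        vm *= 1.0 + cs * (ratio - uratio);       // too many accepts: widen
    else if (ratio < lratio)
        vm /= 1.0 + cs * (lratio - ratio);       // too few accepts: narrow
    return std::min(std::max(vm, 1e-10), range); // clamp to [tiny, range]
}
```

Driving the acceptance rate toward roughly one half keeps the walk exploring efficiently: steps too small accept everything but move nowhere, steps too large are almost always rejected.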
896 :    
897 :     void temperature(Siman& seed, DoubleVector& x){
898 :     int i;
899 :     double t = seed.getT();
900 :     t *= seed.getRt();
901 :     if (t < rathersmall)
902 :     t = rathersmall; //JMB make sure temperature doesn't get too small
903 :    
904 :     handle.logMessage(LOGINFO, "Reducing the temperature to", t);
905 :     seed.setT(t);
906 :    
907 :     DoubleVector* bestx = seed.getBestx();
908 :    
909 :     //funcval = fopt;
910 :     for (i = 0; i < seed.getNvars(); i++)
911 :     x[i] = (*bestx)[i];
912 :     }
913 :    
914 :     void optimum(double trialf, double &fopt, int iters, DoubleVector trialx,
915 :     DoubleVector init, Siman siman){
916 :     //If greater than any other point, record as new optimum
917 :     int i, nvars = siman.getNvars();
918 :     DoubleVector scalex(nvars);
919 :     DoubleVector* bestx = siman.getBestx();
920 :    
921 :     if ((trialf > fopt) && (trialf == trialf)) { //trialf == trialf is false only for NaN
922 :     for (i = 0; i < nvars; i++)
923 :     (*bestx)[i] = trialx[i];
924 :     fopt = trialf;
925 :    
926 :     if (siman.getScale()) {
927 :     for (i = 0; i < nvars; i++)
928 :     scalex[i] = (*bestx)[i] * init[i];
929 :     EcoSystem->storeVariables(-fopt, scalex);
930 :     } else
931 :     EcoSystem->storeVariables(-fopt, (*bestx));
932 :    
933 :     handle.logMessage(LOGINFO, "\nNew optimum found after", iters, "function evaluations");
934 :     handle.logMessage(LOGINFO, "The likelihood score is", -fopt, "at the point");
935 :     EcoSystem->writeBestValues();
936 :     }
937 :     }
938 :    
939 :     /**
940 :     @brief Decides whether the currently evaluated point must be accepted as the new optimum
941 :     @param funcval value of the old optimum
942 :     @param trialf value of the currently evaluated point
943 :     */
944 :     bool mustAccept(double funcval, double trialf, Siman &siman, int iters) {
945 :     //Accept the new point if the new function value is better
946 :     bool ret = true;
947 :     int i;
948 :     // cout << "mustAccept:" << trialf << "|" << funcval<< endl;
949 :     int aux = siman.getParam()[siman.getL()];
950 :     if ((trialf - funcval) > verysmall) {
951 :     siman.incrementNacp(aux);
952 :     siman.incrementNacc();
953 :     //JMB - not sure about this ...
954 :     } else {
955 :     double p, pp;
956 :     //Accept according to metropolis condition
957 :    
958 :     // cout << "t:" << siman.getT() << endl;
959 :    
960 :     p = expRep((trialf - funcval) / siman.getT());
961 :     pp = randomNumber(siman.getSeedM());
962 :     if (pp < p) {
963 :     // cout << pp << "<" << p << endl;
964 :     siman.incrementNacp(aux);
965 :     siman.incrementNaccmet();
966 :     // //Accept point
967 :     // for (i = 0; i < nvars; i++)
968 :     // x[i] = trialx[i];
969 :     // funcval = trialf;
970 :     // naccmet++;
971 :     // nacp[param[l]]++;
972 :     } else {
973 :     //Reject point
974 :     ret = false;
975 :     siman.incrementNrej();
976 :     }
977 :     }
978 :     // cout << "nacp[" << aux << "]:" << siman.getNacp(aux) << endl;
979 :     // JMB added check for really silly values
980 :     if (isZero(trialf)) {
981 :     handle.logMessage(LOGINFO, "Error in Simulated Annealing optimisation after",
982 :     iters, "function evaluations, f(x) = 0");
983 :    
984 :     siman.setConverge(-1);
985 :     return false;
986 :     }
987 :    
988 :     siman.incrementL();
989 :     if (siman.getL() == 0)
990 :     siman.incrementNS();
991 :     if (siman.getNS() >= siman.getNs()){
992 :    
993 :     siman.setNS(0);
994 :    
995 :     siman.incrementNT();
996 :     adjustVm(siman);
997 :     siman.randomizeParams();
998 :     // if (seed.getNT() >= seed.getNt()){
999 :     //
1000 :     // }
1001 :    
1002 :     }
1003 :    
1004 :     return ret;
1005 :    
1006 :     }
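`mustAccept` above applies the Metropolis criterion for a maximisation run: an improving point is always accepted, while a worsening point is accepted with probability exp((trialf - funcval) / t), so uphill escapes become rarer as the temperature falls. A minimal sketch of just the acceptance decision; plain `std::exp` stands in for Gadget's guarded `expRep`, and `std::mt19937` for its random number generator:

```cpp
#include <cmath>
#include <random>

// Metropolis acceptance for a maximisation problem.
// trialf  : value of the candidate point
// funcval : value of the current point
// t       : current annealing temperature
bool metropolisAccept(double trialf, double funcval, double t,
                      std::mt19937& rng) {
    if (trialf > funcval)
        return true;                 // improving move: always accept
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double p = std::exp((trialf - funcval) / t);
    return u01(rng) < p;             // worsening move: accept with prob p
}
```

At high temperatures p stays close to 1 and the walk moves almost freely; as t shrinks, only near-neutral worsening moves survive the coin flip.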
1007 :    
1008 :     /**
1009 :     @brief Decides whether the search must stop.
1010 :     It does not take into account the number of iterations, which is already considered by the template
1011 :     @param prev old optimum
1012 :     @param funcval new/current optimum
1013 :     */
1014 :     bool mustTerminate(double prev, double& funcval, Siman &siman, int iters) {
1015 :     bool quit = false;
1016 :     if (siman.getNT() >= siman.getNt())
1017 :     {
1018 :     // cout << "nt:" << siman.getNT();
1019 :     //Check termination criteria
1020 :     int i;
1021 :     DoubleVector fstar = siman.getFstar();
1022 :    
1023 :     siman.setNT(0);
1024 :    
1025 :     for (i = siman.getTempcheck() - 1; i > 0; i--)
1026 :     fstar[i] = fstar[i - 1];
1027 :     fstar[0] = funcval;
1028 :    
1029 :     if (fabs(prev - funcval) < siman.getSimanneps()) {
1030 :     quit = true;
1031 :     for (i = 0; i < siman.getTempcheck() - 1; i++)
1032 :     if (fabs(fstar[i + 1] - fstar[i]) > siman.getSimanneps())
1033 :     quit = false;
1034 :     }
1035 :    
1036 :     handle.logMessage(LOGINFO, "Checking convergence criteria after", iters,
1037 :     "function evaluations ...");
1038 :    
1039 :     temperature(siman, siman.getX());
1040 :    
1041 :     funcval = prev;
1042 :     }
1043 :     return quit;
1044 :     }
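`mustTerminate` keeps a short history `fstar` of the function value at the last `tempcheck` temperature reductions, and stops only when the current value sits within `simanneps` of the optimum and all consecutive history entries agree to the same tolerance. A standalone sketch of that test:

```cpp
#include <cmath>
#include <vector>

// Shift the history, record the newest value, and report whether the
// run has converged: the optimum and current value agree to within
// eps, and the recent history is flat to within eps.
bool checkConvergence(std::vector<double>& fstar, double fopt,
                      double funcval, double eps) {
    for (std::size_t i = fstar.size() - 1; i > 0; --i)
        fstar[i] = fstar[i - 1];
    fstar[0] = funcval;

    if (std::fabs(fopt - funcval) >= eps)
        return false;
    for (std::size_t i = 0; i + 1 < fstar.size(); ++i)
        if (std::fabs(fstar[i + 1] - fstar[i]) > eps)
            return false;
    return true;
}
```

Requiring the whole history to be flat, rather than just the latest value, guards against stopping on a single lucky plateau between temperature reductions.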
1045 :    
1046 :     void printResult(bool quit, Siman siman, int iters)
1047 :     {
1048 :     double * score = siman.getScore();
1049 :     DoubleVector * bestX = siman.getBestx();
1050 :    
1051 :     handle.logMessage(LOGINFO, "\nStopping Simulated Annealing optimisation algorithm\n");
1052 :     handle.logMessage(LOGINFO, "The optimisation stopped after", iters, "function evaluations");
1053 :     handle.logMessage(LOGINFO, "The temperature was reduced to", siman.getT());
1054 :     if (quit) {
1055 :     int* converge = siman.getConverge();
1056 :     handle.logMessage(LOGINFO, "The optimisation stopped because an optimum was found for this run");
1057 :    
1058 :    
1059 :     *converge = 1;
1060 :     }
1061 :     else {
1062 :     handle.logMessage(LOGINFO, "The optimisation stopped because the maximum number of function evaluations");
1063 :     handle.logMessage(LOGINFO, "was reached and NOT because an optimum was found for this run");
1064 :     }
1065 :     handle.logMessage(LOGINFO, "Number of directly accepted points", siman.getNacc());
1066 :     handle.logMessage(LOGINFO, "Number of metropolis accepted points", siman.getNaccmet());
1067 :     handle.logMessage(LOGINFO, "Number of rejected points", siman.getNrej());
1068 :     *score = EcoSystem->SimulateAndUpdate(*bestX);
1069 :     handle.logMessage(LOGINFO, "\nSimulated Annealing finished with a likelihood score of", *score);
1070 :     // EcoSystem->writeOptValues();
1071 :     }
1072 :     };
1073 :    
1074 :     // Required
1075 :     std::ostream &operator<<(std::ostream &os, const DoubleVector &p)
1076 :     {
1077 :     // os << "[ AUCH";
1078 :     // for (int i = 0; i< NPARAMS; ++i) {
1079 :     // std::cout << p[i] << ' ';
1080 :     // }
1081 :     // std::cout << "]";
1082 :     os << "";
1083 :     return os;
1084 :     }
1085 :    
1086 :    
1087 :     void OptInfoSimann::OptimiseLikelihood() {
1088 :    
1089 :     //set initial values
1090 :    
1091 :     double tmp, p, pp;
1092 :     double fopt, funcval, trialf;
1093 :     int a, i, j, k, l, quit;
1094 :     int rchange, rcheck, rnumber; //Used to randomise the order of the parameters
1095 :    
1096 :     handle.logMessage(LOGINFO,
1097 :     "\nStarting Simulated Annealing optimisation algorithm\n");
1098 :     int nvars = EcoSystem->numOptVariables();
1099 :     DoubleVector x(nvars);
1100 :     DoubleVector init(nvars);
1101 :     DoubleVector trialx(nvars, 0.0);
1102 :     DoubleVector bestx(nvars);
1103 :     DoubleVector scalex(nvars);
1104 :     DoubleVector lowerb(nvars);
1105 :     DoubleVector upperb(nvars);
1106 :     DoubleVector fstar(tempcheck);
1107 :     DoubleVector vm(nvars, vminit);
1108 :     IntVector param(nvars, 0);
1109 :    
1110 :     EcoSystem->resetVariables(); //JMB need to reset variables in case they have been scaled
1111 :     if (scale) {
1112 :     EcoSystem->scaleVariables();
1113 :     #ifndef NO_OPENMP
1114 :     int numThr = omp_get_max_threads ( );
1115 :     for(i = 0; i < numThr; i++)
1116 :     EcoSystems[i]->scaleVariables();
1117 :     #endif
1118 :     }
1119 :     EcoSystem->getOptScaledValues(x);
1120 :     EcoSystem->getOptLowerBounds(lowerb);
1121 :     EcoSystem->getOptUpperBounds(upperb);
1122 :     EcoSystem->getOptInitialValues(init);
1123 :    
1124 :     for (i = 0; i < nvars; i++) {
1125 :     bestx[i] = x[i];
1126 :     param[i] = i;
1127 :     }
1128 :    
1129 :     if (scale) {
1130 :     for (i = 0; i < nvars; i++) {
1131 :     scalex[i] = x[i];
1132 :     // Scaling the bounds, because the parameters are scaled
1133 :     lowerb[i] = lowerb[i] / init[i];
1134 :     upperb[i] = upperb[i] / init[i];
1135 :     if (lowerb[i] > upperb[i]) {
1136 :     tmp = lowerb[i];
1137 :     lowerb[i] = upperb[i];
1138 :     upperb[i] = tmp;
1139 :     }
1140 :     }
1141 :     }
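When scaling is enabled, the loop above divides each bound by the parameter's initial value; for a negative initial value this flips the interval's orientation, which is why the bounds are swapped back afterwards. A small sketch of that normalisation (`scaleBounds` is a hypothetical helper name, not part of Gadget):

```cpp
#include <utility>

// Divide the bounds by the initial value; if init is negative the
// interval flips orientation, so swap to restore lower <= upper.
// Hypothetical helper illustrating the in-loop logic above.
std::pair<double, double> scaleBounds(double lower, double upper,
                                      double init) {
    lower /= init;
    upper /= init;
    if (lower > upper)
        std::swap(lower, upper);
    return {lower, upper};
}
```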
1142 :    
1143 :     //funcval is the function value at x
1144 :     funcval = EcoSystem->SimulateAndUpdate(x);
1145 :     if (funcval != funcval) { //check for NaN
1146 :     handle.logMessage(LOGINFO,
1147 :     "Error starting Simulated Annealing optimisation with f(x) = infinity");
1148 :     converge = -1;
1149 :     iters = 1;
1150 :     return;
1151 :     }
1152 :    
1153 :     //the function is to be minimised so switch the sign of funcval (and trialf)
1154 :     funcval = -funcval;
1155 :     // offset = EcoSystem->getFuncEval(); //number of function evaluations done before loop
1156 :     // nacc++;
1157 :     cs /= lratio; //JMB save processing time
1158 :     // nsdiv = 1.0 / ns;
1159 :     fopt = funcval;
1160 :     for (i = 0; i < tempcheck; i++)
1161 :     fstar[i] = funcval;
1162 :    
1163 :     //unsigned seed = 1234;
1164 :    
1165 :     Siman s(seed, seedM, seedP, nvars, nt, ns, param, &x, &lowerb, &upperb, vm, t, rt, (1.0 / ns),
1166 :     tempcheck, simanneps, fstar, lratio, uratio, cs, &bestx, scale, &converge, &score);
1167 :    
1168 :     ReproducibleSearch<Siman, DoubleVector, ControlClass, evaluate_f, buildNewParams_f>
1169 :     pa(s, x, simanniter);
1170 :    
1171 :     // cout << "Sequential run" << endl;
1172 :     // pa.seq_opt();
1173 :    
1174 :     #ifndef NO_OPENMP
1175 :     int numThr = omp_get_max_threads ( );
1176 :     pa.paral_opt_omp(numThr,numThr);
1177 :     #else
1178 :     pa.seq_opt();
1179 :     #endif
1180 :    
1181 :     }
1182 :    
1183 :    
1184 :    
1185 :    
