Bug fix release.
line_search = "backtracking"
with a specified
step_down
parameter, an incorrectly large number of
gradient calculations was being reported.step_down
argument with
line_search = "backtracking"
, interpolation using function
and gradient evaluations is carried out. To get a typical Armijo-style
backtracking line search, specify a value for step_down
(e.g. step_down = 0.5
to halve the step size), and only
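Below is a minimal sketch of the two behaviours, assuming the usual mize() interface with an fg list holding fn and gr; the Rosenbrock function and starting point are purely illustrative.

```r
library(mize)

# Rosenbrock function and its gradient, used only as a test problem.
rosenbrock <- list(
  fn = function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2,
  gr = function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                     200 * (x[2] - x[1]^2))
)

# Without step_down: backtracking interpolates using function and
# gradient evaluations.
res_interp <- mize(c(-1.2, 1), rosenbrock, method = "L-BFGS",
                   line_search = "backtracking")

# With step_down = 0.5: Armijo-style backtracking that halves the step
# size, using function evaluations only.
res_armijo <- mize(c(-1.2, 1), rosenbrock, method = "L-BFGS",
                   line_search = "backtracking", step_down = 0.5)
```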
A patch release to fix an incompatibility with R-devel.

An object's class was being checked directly and a scalar value was assumed. The correct behavior is to use methods::is.
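As a sketch of the underlying issue (not code from the package): under R 4.0.0 and later, class() on a matrix returns a vector of length two, so comparing it to a single string fails, whereas methods::is() checks inheritance correctly.

```r
m <- matrix(1:4, nrow = 2)
class(m)                       # c("matrix", "array") in R >= 4.0.0
identical(class(m), "matrix")  # FALSE: assumes a scalar class value
methods::is(m, "matrix")       # TRUE: the correct check
```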
A patch release for a bug fix.
If the maximum number of function evaluations allowed during the line search was exceeded (e.g. via the ls_max_fn parameter), a 'bracket_step' not found error could result. Thank you to reporter Charles Driver.

A patch release to fix an incompatibility with R-devel.
method = "TN"
). Can be
controlled using the tn_init
and tn_exit
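A minimal sketch of selecting the new method, reusing the rosenbrock fg list from the sketch above; tn_init and tn_exit are left at their defaults here rather than guessing at specific values.

```r
# Truncated Newton; the inner loop can be tuned further via the
# tn_init and tn_exit options.
res_tn <- mize(c(-1.2, 1), rosenbrock, method = "TN")
```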
New optimization method: SR1 (method = "SR1"), falling back to the BFGS direction if a descent direction is not found.

New option: preconditioner, which applies to the conjugate gradient and truncated Newton methods. The only value currently available is preconditioner = "L-BFGS", which uses L-BFGS to estimate the inverse Hessian for preconditioning. The number of updates to store for this preconditioner is controlled by the memory parameter, just as if you were using method = "L-BFGS".
New option for the fg list: supply a function hi, which takes the par vector as input. The function can return a matrix (obviously not a great idea for memory use), or a vector, the latter of which is assumed to be the diagonal of the matrix.
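A sketch of assembling such an fg list, reusing the Rosenbrock fn and gr from above; the hi function below returns a vector, which is treated as the diagonal of the matrix. Its constant value and the choice of method here are purely illustrative.

```r
fg_with_hi <- list(
  fn = rosenbrock$fn,
  gr = rosenbrock$gr,
  # Vector return value: interpreted as the diagonal of the matrix.
  hi = function(par) rep(0.1, length(par))
)
res_hi <- mize(c(-1.2, 1), fg_with_hi, method = "L-BFGS")
```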
New option: ls_max_alpha (for line_search = "More-Thuente" only): sets the maximum value of alpha that can be attained during the line search.

New option: ls_max_alpha_mult (for Wolfe-type line searches only): sets the maximum value that can be attained by the ratio of the initial guess for alpha for the current line search to the final value of alpha of the previous line search. Used to stop line searches diverging due to very large initial guesses.

New option: ls_safe_cubic (for line_search = "More-Thuente" only): if TRUE, use the safe-guarded cubic modification suggested by Xie and Schlick.
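A sketch combining the three line-search options, reusing the rosenbrock fg list from above; the numeric values are illustrative only, and ls_max_alpha_mult is assumed to apply here since More-Thuente is a Wolfe-type line search.

```r
res_mt <- mize(c(-1.2, 1), rosenbrock, method = "L-BFGS",
               line_search = "More-Thuente",
               ls_max_alpha = 10, ls_max_alpha_mult = 10,
               ls_safe_cubic = TRUE)
```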
New option: cg_update = "prfr", the “PR-FR” (Polak-Ribiere/Fletcher-Reeves) conjugate gradient update suggested by Gilbert and Nocedal.

New conjugate gradient updates: Hestenes-Stiefel (cg_update = "hs"), Conjugate Descent (cg_update = "cd"), Dai-Yuan (cg_update = "dy") and Liu-Storey (cg_update = "ls").
Initial release.