NagLibrary Namespace

Namespace for all classes in the NAG Library for .NET.

Classes

  Class / Description
A00
The methods in this chapter provide information about the NAG Library.
a00aa enables you to determine the precise Mark and maintenance level of the NAG Library which is being used, and also details of the implementation.
a00ac enables you to check if a valid key is available for the library licence management system.
C05
This chapter is concerned with the calculation of real zeros of continuous real functions of one or more variables. (Complex equations must be expressed in terms of the equivalent larger system of real equations.)
C05.c05ndCommunications
Communications Class for c05nd. See the examples in the Library Introduction.
C05.c05pdCommunications
Communications Class for c05pd. See the examples in the Library Introduction.
C06
This chapter is concerned with the following tasks.
(a) Calculating the discrete Fourier transform of a sequence of real or complex data values.
(b) Calculating the discrete convolution or the discrete correlation of two sequences of real or complex data values using discrete Fourier transforms.
(c) Calculating the inverse Laplace transform of a user-supplied function.
(d) Direct summation of orthogonal series.
(e) Acceleration of convergence of a sequence of real values.
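Task (b) above exploits the fact that a discrete (circular) convolution becomes a pointwise product in the frequency domain. The following sketch illustrates the idea in Python with numpy as a stand-in; it is not the NAG .NET API.

```python
import numpy as np

# Circular convolution of two real sequences via the discrete Fourier
# transform: conv = IFFT(FFT(x) * FFT(y)).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, 0.25, 0.0, 0.0])

conv_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

# Direct O(n^2) circular convolution for comparison.
n = len(x)
conv_direct = np.array([sum(x[k] * y[(j - k) % n] for k in range(n))
                        for j in range(n)])
```

For long sequences the FFT route costs O(n log n) rather than O(n^2), which is why library methods compute convolutions and correlations this way.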
C09
This chapter is concerned with the analysis of datasets (or functions or operators) in terms of frequency and scale components using wavelet transforms. Wavelet transforms have been applied in many fields from time series analysis to image processing, and the localisation in either frequency or scale that they provide is useful for data compression or denoising. In general the standard wavelet transform uses dilation and scaling of a chosen function, ψ(t) (called the mother wavelet), such that
ψ_{a,b}(t) = (1/√a) ψ((t−b)/a)
where a gives the scaling and b determines the translation. Wavelet methods can be divided into continuous transforms and discrete transforms. In the continuous case, the pair a and b are real numbers with a > 0. For the discrete transform, a and b can be chosen as a = 2^{−j}, b = k·2^{−j} for integers j, k:
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k).
The discrete wavelet transform (DWT) at a single level together with its inverse and the multi-level DWT with inverse are provided. The choice of wavelets includes the orthogonal wavelets of Daubechies and a selection of biorthogonal wavelets.
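As an illustration of the single-level DWT and its inverse, the following Python sketch uses the simplest orthogonal Daubechies wavelet (the Haar wavelet); numpy stands in for the NAG methods here.

```python
import numpy as np

# Single-level discrete wavelet transform with the Haar wavelet:
# scaled pairwise sums give the approximation coefficients and scaled
# pairwise differences the detail coefficients; the inverse restores x.
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])

s = 1.0 / np.sqrt(2.0)
approx = s * (x[0::2] + x[1::2])   # low-pass (scale) coefficients
detail = s * (x[0::2] - x[1::2])   # high-pass (wavelet) coefficients

# Inverse single-level transform.
x_rec = np.empty_like(x)
x_rec[0::2] = s * (approx + detail)
x_rec[1::2] = s * (approx - detail)
```

A multi-level DWT simply repeats this split on the approximation coefficients.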
C09.C09Communications
Communications class for C09. See the examples in the Library Introduction.
D01
This chapter provides methods for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.
DataReader
DataReader class: an IO stream for reading NAG data files.
E01
This chapter is concerned with the interpolation of a function of one, two or three variables. When provided with the value of the function (and possibly one or more of its lowest-order derivatives) at each of a number of values of the variable(s), the methods provide either an interpolating function or an interpolated value. For some of the interpolating functions, there are supporting methods to evaluate, differentiate or integrate them.
E02
The main aim of this chapter is to assist you in finding a function which approximates a set of data points. Typically the data contain random errors, such as those of experimental measurement, which need to be smoothed out. To seek an approximation to the data, it is first necessary to specify for the approximating function a mathematical form (a polynomial, for example) which contains a number of unspecified coefficients: the appropriate fitting method then derives for the coefficients the values which provide the best fit of that particular form. The chapter deals mainly with curve and surface fitting (i.e., fitting with functions of one and of two variables) when a polynomial or a cubic spline is used as the fitting function, since these cover the most common needs. However, fitting with other functions and/or more variables can be undertaken by means of general linear or nonlinear methods (some of which are contained in other chapters), depending on whether the coefficients in the function occur linearly or nonlinearly. Cases where a graph rather than a set of data points is given can be treated simply by first reading a suitable set of points from the graph.
The chapter also contains methods for evaluating, differentiating and integrating polynomial and spline curves and surfaces, once the numerical values of their coefficients have been determined.
There is also a method for computing a Padé approximant of a mathematical function (see [Padé Approximants]).
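A Padé approximant can be computed from the Taylor coefficients of a function by solving a small linear system for the denominator coefficients. A minimal numpy sketch (not the NAG method) for the [2/2] approximant of exp(x):

```python
import numpy as np

c = np.array([1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24])  # Taylor coeffs of exp(x)
L, M = 2, 2   # numerator and denominator degrees (sketch assumes L + 1 >= M)

# Denominator coefficients q_1..q_M solve the linear system
#   sum_{j=0}^{M} q_j * c_{k-j} = 0  for k = L+1, ..., L+M,  with q_0 = 1.
A = np.array([[c[L + k - j] for j in range(1, M + 1)] for k in range(1, M + 1)])
b = -np.array([c[L + k] for k in range(1, M + 1)])
q = np.concatenate(([1.0], np.linalg.solve(A, b)))

# Numerator coefficients follow by truncated convolution of c with q.
p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
              for k in range(L + 1)])

x = 0.1
num = sum(p[k] * x**k for k in range(L + 1))
den = sum(q[j] * x**j for j in range(M + 1))
approx = num / den   # close to exp(0.1)
```

For exp(x) this recovers the classical [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12).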
E04
An optimization problem involves minimizing a function (called the objective function) of several variables, possibly subject to restrictions on the values of the variables defined by a set of constraint functions. Most methods in the Library are concerned with function minimization only, since the problem of maximizing a given objective function F(x) is equivalent to minimizing −F(x). Some methods allow you to specify whether you are solving a minimization or maximization problem, carrying out the required transformation of the objective function in the latter case.
In general, methods in this chapter find a local minimum of a function f, that is, a point x* such that f(x) ≥ f(x*) for all x near x*.
The E05 class contains methods to find the global minimum of a function f: a point x* such that f(x) ≥ f(x*) for all x.
The H chapter (not in this release) contains methods typically regarded as belonging to the field of operations research.
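The equivalence of maximization and minimization mentioned above is just a sign change. A minimal Python sketch (a crude grid search, purely illustrative; a library method would apply the same transformation internally when you request maximization):

```python
# Maximizing F is the same as minimizing -F: negate the objective,
# minimize, then negate the optimal value.

def F(x):
    # Concave objective with maximum F(3) = 5.
    return 5.0 - (x - 3.0) ** 2

def neg_F(x):
    return -F(x)

# Minimize -F over a coarse grid; the minimizer of -F maximizes F.
grid = [i * 0.001 for i in range(0, 6001)]   # x in [0, 6]
x_star = min(grid, key=neg_F)
F_max = -neg_F(x_star)
```

A real solver replaces the grid search with a proper minimization algorithm; the wrapping of the objective is unchanged.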
This introduction is only a brief guide to the subject of optimization designed for the casual user. Anyone with a difficult or protracted problem to solve will find it beneficial to consult a more detailed text, such as Gill et al. (1981) or Fletcher (1987).
If you are unfamiliar with the mathematics of the subject you may find some sections difficult at first reading; if so, you should concentrate on [Types of Optimization Problems], [Geometric Representation and Terminology], [Scaling], [Analysis of Computed Results] and [Recommendations on Choice and Use of Available Methods].
E04.e04dgOptions
Options Class for e04dg. See the examples in the Library Introduction.
E04.e04mfOptions
Options Class for e04mf. See the examples in the Library Introduction.
E04.e04ncOptions
Options Class for e04nc. See the examples in the Library Introduction.
E04.e04nkOptions
Options Class for e04nk. See the examples in the Library Introduction.
E04.e04nqOptions
Options Class for e04nq. See the examples in the Library Introduction.
E04.e04ucOptions
Options Class for e04uc. See the examples in the Library Introduction.
E04.e04ufOptions
Options Class for e04uf. See the examples in the Library Introduction.
E04.e04ugOptions
Options Class for e04ug. See the examples in the Library Introduction.
E04.e04usOptions
Options Class for e04us. See the examples in the Library Introduction.
E04.e04vhOptions
Options Class for e04vh. See the examples in the Library Introduction.
E04.e04wdOptions
Options Class for e04wd. See the examples in the Library Introduction.
E05
Global optimization involves finding the absolute maximum or minimum value of a function (the objective function) of several variables, possibly subject to restrictions (defined by a set of bounds or constraint functions) on the values of the variables. Such problems can be much harder to solve than local optimization problems (which are discussed in the E04 class) because it is difficult to determine whether a potential optimum found is global, and because of the nonlocal methods required to avoid becoming trapped near local optima. Most optimization methods in the NAG Library are concerned with function minimization only, since the problem of maximizing a given objective function F is equivalent to minimizing −F. In this chapter, you may specify whether you are solving a minimization or maximization problem; in the latter case, the required transformation of the objective function will be carried out automatically. In what follows we refer exclusively to minimization problems.
This introduction is a brief guide to the subject of global optimization, designed for the casual user. For further details you may find it beneficial to consult a more detailed text, such as Neumaier (2004). Furthermore, much of the material in the E04 class is relevant in this context also. In particular, it is strongly recommended that you read [Section Scaling] in the E04 class Chapter Introduction.
E05.e05jbOptions
Options Class for e05jb. See the examples in the Library Introduction.
F06
This chapter is concerned with basic linear algebra methods which perform elementary algebraic operations involving scalars, vectors and matrices. It includes methods which conform to the specifications of the BLAS (Basic Linear Algebra Subprograms).
F07
This chapter provides methods for the solution of systems of simultaneous linear equations, and associated computations. It provides methods for
  • matrix factorizations;
  • solution of linear equations;
  • estimating matrix condition numbers;
  • computing error bounds for the solution of linear equations;
  • matrix inversion;
  • computing scaling factors to equilibrate a matrix.
Methods are provided for both real and complex data.
The methods in this chapter (F07 class) handle only dense and band matrices (not matrices with more specialized structures, or general sparse matrices).
The methods in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
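The core F07 task, solving a dense real system A x = b by factorization and checking the result, looks like this conceptually; the sketch uses numpy's LAPACK-backed routines as a stand-in for the NAG .NET methods.

```python
import numpy as np

# A small dense real system A x = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([5.0, 5.0, 3.0])

# Solve via an LAPACK-backed factorization.
x = np.linalg.solve(A, b)

# A cheap a posteriori check: the residual should be near machine precision.
residual = np.linalg.norm(A @ x - b)

# Condition number, the quantity the error-bound methods are built around.
cond = np.linalg.cond(A)
```

A large condition number warns that the computed solution may have lost accuracy, which is why F07 also provides condition estimation and error bounds alongside the solvers.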
F08
This chapter provides methods for the solution of linear least-squares problems, eigenvalue problems and singular value problems, as well as associated computations. It provides methods for:
  • solution of linear least-squares problems
  • solution of symmetric eigenvalue problems
  • solution of nonsymmetric eigenvalue problems
  • solution of singular value problems
  • solution of generalized symmetric-definite eigenvalue problems
  • solution of generalized nonsymmetric eigenvalue problems
  • solution of generalized singular value problems
  • solution of generalized linear least-squares problems
  • matrix factorizations associated with the above problems
  • estimating condition numbers of eigenvalue and eigenvector problems
  • estimating the numerical rank of a matrix
  • solution of the Sylvester matrix equation
Methods are provided for both real and complex data.
The methods in this chapter (F08 class) handle only dense, band, tridiagonal and Hessenberg matrices (not matrices with more specialized structures, or general sparse matrices).
The methods in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
It is not expected that you will need to read all of the following sections, but rather you will pick out those sections relevant to your particular problem.
G01
This chapter covers three topics:
  • plots, descriptive statistics, and exploratory data analysis;
  • statistical distribution functions and their inverses;
  • testing for Normality and other distributions.
G02
This chapter is concerned with two techniques – correlation analysis and regression modelling – both of which are concerned with determining the inter-relationships among two or more variables.
Other chapters of the NAG Library which cover similar problems are E02 class and E04 class. E02 class methods may be used to fit linear models by criteria other than least-squares, and also for polynomial regression; E04 class methods may be used to fit nonlinear models and linearly constrained linear models.
G03
This chapter is concerned with methods for studying multivariate data. A multivariate dataset consists of several variables recorded on a number of objects or individuals. Multivariate methods can be classified as those that seek to examine the relationships between the variables (e.g., principal components), known as variable-directed methods, and those that seek to examine the relationships between the objects (e.g., cluster analysis), known as individual-directed methods.
Multiple regression is not included in this chapter as it involves the relationship of a single variable, known as the response variable, to the other variables in the dataset, the explanatory variables. Routines for multiple regression are provided in G02 class.
G05
This chapter is concerned with the generation of sequences of independent pseudorandom and quasi-random numbers from various distributions, and the generation of pseudorandom time series from specified time series models.
G05.G05State
Class holding the state for methods in G05. See the examples in the Library Introduction.
G13
This chapter provides facilities for investigating and modelling the statistical structure of series of observations collected at equally spaced points in time. The models may then be used to forecast the series.
The chapter covers the following models and approaches.
  1. Univariate time series analysis, including autocorrelation functions and autoregressive moving average (ARMA) models.
  2. Univariate spectral analysis.
  3. Transfer function (multi-input) modelling, in which one time series is dependent on other time series.
  4. Bivariate spectral methods including coherency, gain and input response functions.
  5. Vector ARMA models for multivariate time series.
  6. Kalman filter models.
  7. GARCH models for volatility.
PrintManager
Utility class to control the output of error messages and monitoring information. See the examples in the Library Introduction.
S
This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.
X01
This chapter is concerned with the provision of mathematical constants required by other methods within the Library.
X02
This chapter is concerned with parameters which characterise certain aspects of the computing environment in which the NAG Library is implemented. They relate primarily to floating-point arithmetic, but also to integer arithmetic, the elementary functions and exception handling. The values of the parameters vary from one implementation of the Library to another, but within the context of a single implementation they are constants.
The parameters are intended for use primarily by other methods in the Library, but users of the Library may sometimes need to refer to them directly.
X04
This chapter contains matrix printing utility methods.

Structures

  Structure / Description
Complex
Struct to denote a complex value as two doubles.

Delegates

  Delegate / Description
C05.C05AD_F
f must evaluate the function f whose zero is to be determined.
C05.C05AG_F
f must evaluate the function f whose zero is to be determined.
C05.C05AJ_F
f must evaluate the function f whose zero is to be determined.
C05.C05NB_FCN
fcn must return the values of the functions f_i at a point x.
C05.C05NC_FCN
fcn must return the values of the functions f_i at a point x.
C05.C05PB_FCN
Depending upon the value of iflag, fcn must either return the values of the functions f_i at a point x or return the Jacobian at x.
C05.C05PC_FCN
Depending upon the value of iflag, fcn must either return the values of the functions f_i at a point x or return the Jacobian at x.
D01.D01AH_F
f must return the value of the integrand f at a given point.
D01.D01AJ_F
f must return the value of the integrand f at a given point.
D01.D01AK_F
f must return the value of the integrand f at a given point.
D01.D01AL_F
f must return the value of the integrand f at a given point.
D01.D01AM_F
f must return the value of the integrand f at a given point.
D01.D01AN_G
g must return the value of the function g at a given point x.
D01.D01AP_G
g must return the value of the function g at a given point x.
D01.D01AQ_G
g must return the value of the function g at a given point x.
D01.D01AR_FUN
fun must return the value of the integrand f at a specified point.
D01.D01AS_G
g must return the value of the function g at a given point x.
D01.D01BD_F
f must return the value of the integrand f at a given point.
D01.D01DA_F
f must return the value of the integrand f at a given point.
D01.D01DA_PHI1
phi1 must return the lower limit of the inner integral for a given value of y.
D01.D01DA_PHI2
phi2 must return the upper limit of the inner integral for a given value of y.
D01.D01FC_FUNCTN
functn must return the value of the integrand f at a given point.
D01.D01GD_VECFUN
vecfun must evaluate the integrand at a specified set of points.
D01.D01GD_VECREG
vecreg must evaluate the limits of integration in any dimension for a set of points.
D01.D01JA_F
f must return the value of the integrand f at a given point.
D01.D01PA_FUNCTN
functn must return the value of the integrand f at a given point.
E04.E04AB_FUNCT
You must supply this method to calculate the value of the function F(x) at any point x in [a,b]. It should be tested separately before being used in conjunction with e04ab.
E04.E04BB_FUNCT
You must supply this method to calculate the values of F(x) and dF/dx at any point x in [a,b].
It should be tested separately before being used in conjunction with e04bb.
E04.E04CB_FUNCT
funct must evaluate the function F at a specified point. It should be tested separately before being used in conjunction with e04cb.
E04.E04CB_MONIT
monit may be used to monitor the optimization process. It is invoked once every iteration.
If no monitoring is required, monit may be the dummy monitoring method e04cbk supplied by the NAG Library.
E04.E04DG_OBJFUN
objfun must calculate the objective function F(x) and possibly its gradient as well for a specified n-element vector x.
E04.E04FC_LSQFUN
lsqfun must calculate the vector of values f_i(x) at any point x. (However, if you do not wish to calculate the residuals at a particular x, there is the option of setting a parameter to cause e04fc to terminate immediately.)
E04.E04FC_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint < 0, the dummy method e04fdz can be used as lsqmon.
E04.E04FY_LSFUN1
You must supply this method to calculate the vector of values f_i(x) at any point x. It should be tested separately before being used in conjunction with e04fy (see the E04 class).
E04.E04GD_LSQFUN
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting a parameter to cause e04gd to terminate immediately.)
E04.E04GD_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint < 0, the dummy method e04fdz can be used as lsqmon.
E04.E04GY_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. It should be tested separately before being used in conjunction with e04gy (see the E04 class).
E04.E04GZ_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. It should be tested separately before being used in conjunction with e04gz.
E04.E04HC_FUNCT
funct must evaluate the function and its first derivatives at a given point. (The minimization methods mentioned in [Description] give you the option of resetting parameters of funct to cause the minimization process to terminate immediately. e04hc will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04.E04HD_FUNCT
funct must evaluate the function and its first derivatives at a given point. (e04lb gives you the option of resetting parameters of funct to cause the minimization process to terminate immediately. e04hd will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04.E04HD_H
h must evaluate the second derivatives of the function at a given point. (As with funct, a parameter can be set to cause immediate termination.)
E04.E04HE_LSQFUN
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting a parameter to cause e04he to terminate immediately.)
E04.E04HE_LSQHES
lsqhes must calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, there is the option of causing e04he to terminate immediately.)
E04.E04HE_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint < 0, the dummy method e04fdz can be used as lsqmon.
E04.E04HY_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i/∂x_j at any point x. It should be tested separately before being used in conjunction with e04hy (see the E04 class).
E04.E04HY_LSHES2
You must supply this method to calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). It should be tested separately before being used in conjunction with e04hy (see the E04 class).
E04.E04JY_FUNCT1
You must supply funct1 to calculate the value of the function F(x) at any point x. It should be tested separately before being used with e04jy (see the E04 class).
E04.E04KD_FUNCT
funct must evaluate the function F(x) and its first derivatives ∂F/∂x_j at a specified point. (However, if you do not wish to calculate F or its first derivatives at a particular x, there is the option of setting a parameter to cause e04kd to terminate immediately.)
E04.E04KD_MONIT
If iprint ≥ 0, you must supply monit, which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters.
If iprint < 0, a monit with the correct parameter list must still be supplied, although it will not be called.
E04.E04KY_FUNCT2
You must supply funct2 to calculate the values of the function F(x) and its first derivatives ∂F/∂x_j at any point x. It should be tested separately before being used in conjunction with e04ky (see the E04 class).
E04.E04KZ_FUNCT2
You must supply this method to calculate the values of the function F(x) and its first derivatives ∂F/∂x_j at any point x. It should be tested separately before being used in conjunction with e04kz (see the E04 class).
E04.E04LB_FUNCT
funct must evaluate the function F(x) and its first derivatives ∂F/∂x_j at any point x. (However, if you do not wish to calculate F(x) or its first derivatives at a particular x, there is the option of setting a parameter to cause e04lb to terminate immediately.)
E04.E04LB_H
h must calculate the second derivatives of F at any point x. (As with funct, there is the option of causing e04lb to terminate immediately.)
E04.E04LB_MONIT
If iprint ≥ 0, you must supply monit, which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters.
If iprint < 0, a monit with the correct parameter list should still be supplied, although it will not be called.
E04.E04LY_FUNCT2
You must supply this method to calculate the values of the function F(x) and its first derivatives ∂F/∂x_j at any point x. It should be tested separately before being used in conjunction with e04ly (see the E04 class).
E04.E04LY_HESS2
You must supply this method to evaluate the elements H_{ij} = ∂²F/∂x_i∂x_j of the matrix of second derivatives of F(x) at any point x. It should be tested separately before being used in conjunction with e04ly (see the E04 class).
E04.E04NK_QPHX
For QP problems, you must supply a version of qphx to compute the matrix product Hx. If H has zero rows and columns, it is most efficient to order the variables x = (y, z)^T so that
Hx = [H1 0; 0 0][y; z] = [H1 y; 0],
where the nonlinear variables y appear first as shown. For FP and LP problems, qphx will never be called by e04nk and hence qphx may be the dummy method e04nku.
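The partitioned product above means only the small block H1 ever has to be applied; the rest of Hx is identically zero. A numpy sketch of the structure (illustrative only; not the e04nk interface):

```python
import numpy as np

# H = [[H1, 0], [0, 0]] with x = [y; z]: only the leading variables y
# enter the objective quadratically, so H x = [H1 y; 0].
H1 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
y = np.array([1.0, 2.0])
z = np.array([3.0, 4.0, 5.0])      # linear variables contribute nothing

# Only the small product H1 y need be formed.
Hx = np.concatenate((H1 @ y, np.zeros_like(z)))
```

Ordering the variables this way keeps the per-iteration cost proportional to the number of nonlinear variables rather than the full problem dimension.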
E04.E04NQ_QPHX
For QP problems, you must supply a version of qphx to compute the matrix product Hx for a given vector x. If H has rows and columns of zeros, it is most efficient to order x so that the nonlinear variables appear first. For example, if x = (y, z)^T and only y enters the objective quadratically, then ncolh should be the dimension of y, and qphx should compute H1 y. For FP and LP problems, qphx will never be called by e04nq and hence qphx may be the dummy method e04nsh.
E04.E04UC_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln = 0), confun will never be called by e04uc and confun may be the dummy method e04udm. (e04udm is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04.E04UC_OBJFUN
objfun must calculate the objective function F(x) and (optionally) its gradient g(x) = ∂F/∂x for a specified n-vector x.
E04.E04UG_CONFUN
confun must calculate the vector F(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂F/∂x) for a specified n1 (≤ n)-element vector x. If there are no nonlinear constraints (i.e., ncnln = 0), confun will never be called by e04ug and confun may be the dummy method e04ugm. (e04ugm is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04.E04UG_OBJFUN
objfun must calculate the nonlinear part of the objective function f(x) and (optionally) its gradient (= ∂f/∂x) for a specified n1 (≤ n)-element vector x. If there are no nonlinear objective variables (i.e., nonln = 0), objfun will never be called by e04ug and objfun may be the dummy method e04ugn. (e04ugn is included in the NAG Library.)
E04.E04US_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln = 0), confun will never be called by e04us and confun may be the dummy method e04udm. (e04udm is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04.E04US_OBJFUN
objfun must calculate either the ith element of the vector f(x) = (f_1(x), f_2(x), ..., f_m(x))^T or all m elements of f(x) and (optionally) its Jacobian (= ∂f/∂x) for a specified n-element vector x.
E04.E04VH_USRFUN
usrfun must define the nonlinear portion f(x) of the problem functions F(x) = f(x) + Ax, along with its gradient elements G_{ij}(x) = ∂f_i(x)/∂x_j. (A dummy method is needed even if f ≡ 0 and all functions are linear.)
In general, usrfun should return all function and gradient values on every entry except perhaps the last. This provides maximum reliability and corresponds to the default option setting, Derivative Option = 1.
The elements of G(x) are stored in the array g[i-1], for i = 1, 2, ..., leng, in the order specified by the input arrays igfun and jgvar.
In practice it is often convenient not to code gradients. e04vh is able to estimate them by finite differences, using a call to usrfun for each variable x_j for which some ∂f_i(x)/∂x_j needs to be estimated. However, this reduces the reliability of the optimization algorithm, and it can be very expensive if there are many such variables x_j.
As a compromise, e04vh allows you to code as many gradients as you like. This option is implemented as follows. Just before usrfun is called, each element of the derivative array g is initialized to a specific value. On exit, any element retaining that value must be estimated by finite differences.
Some rules of thumb follow:
(i) for maximum reliability, compute all gradients;
(ii) if the gradients are expensive to compute, specify the optional parameter Nonderivative Linesearch and use the value of the input parameter needg to avoid computing them on certain entries (there is no need to compute gradients if needg = 0 on entry to usrfun);
(iii) if not all gradients are known, you must specify Derivative Option = 0. You should still compute as many gradients as you can (it often happens that some of them are constant or zero);
(iv) again, if the known gradients are expensive, don't compute them if needg = 0 on entry to usrfun;
(v) use the input parameter status to test for special actions on the first or last entries;
(vi) while usrfun is being developed, use the optional parameter Verify Level to check the computation of gradients that are supposedly known;
(vii) usrfun is not called until the linear constraints and bounds on x are satisfied. This helps confine x to regions where the functions f_i(x) are likely to be defined. However, be aware of the optional parameter Minor Feasibility Tolerance if the functions have singularities on the constraint boundaries;
(viii) set status = -1 if some of the functions are undefined. The linesearch will shorten the step and try again;
(ix) set status ≤ -2 if you want e04vh to stop.
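The finite-difference estimation described above costs one extra function evaluation per variable whose gradient elements are unknown, which is the source of the expense the rules of thumb warn about. A minimal Python sketch of forward differencing (illustrative only; not the NAG implementation):

```python
# Forward-difference estimation of a gradient, the fallback used for
# gradient elements the user does not supply.

def f(x):
    # Example smooth function of two variables.
    return x[0] ** 2 + 3.0 * x[0] * x[1]

def fd_gradient(f, x, h=1e-7):
    # One extra function evaluation per variable: cheap per element,
    # but expensive when many gradient elements must be estimated.
    fx = f(x)
    grad = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += h
        grad.append((f(xp) - fx) / h)
    return grad

g = fd_gradient(f, [1.0, 2.0])   # analytic gradient is [2*x0 + 3*x1, 3*x0]
```

The truncation error of forward differences is O(h), which is why supplying exact gradients improves both the reliability and the accuracy of the solver.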
E04.E04VJ_USRFUN
usrfun must define the problem functions F(x). This method is passed to e04vj as the external parameter usrfun.
E04.E04WD_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian, ∂c/∂x, for a specified n-vector x. If there are no nonlinear constraints (i.e., ncnln = 0), e04wd will never call confun, so it may be the dummy method e04wdp. (e04wdp is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
If all constraint gradients (Jacobian elements) are known (i.e., Derivative Level = 2 or 3), any constant elements may be assigned to cjac once only at the start of the optimization. An element of cjac that is not subsequently assigned in confun will retain its initial value throughout. Constant elements may be loaded in cjac during the first call to confun (signalled by the value nstate = 1). The ability to preload constants is useful when many Jacobian elements are identically zero, in which case cjac may be initialized to zero and nonzero elements may be reset by confun.
It must be emphasised that, if Derivative Level < 2, unassigned elements of cjac are not treated as constant; they are estimated by finite differences, at nontrivial expense.
E04.E04WD_OBJFUN
objfun must calculate the objective function F(x) and (optionally) its gradient g(x) = ∂F/∂x for a specified n-vector x.
E04.E04XA_OBJFUN
If mode = 0 or 2, objfun must calculate the objective function; otherwise, if mode = 1, objfun must calculate the objective function and the gradients.
E04.E04YA_LSQFUN
lsqfun must calculate the vector of values f_i(x) and their first derivatives ∂f_i/∂x_j at any point x. (The minimization methods mentioned in [Description] give you the option of resetting a parameter to terminate immediately. e04ya will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04.E04YB_LSQFUN
lsqfun must calculate the vector of values f_i(x) and their first derivatives ∂f_i/∂x_j at any point x. (e04he gives you the option of resetting parameters of lsqfun to cause the minimization process to terminate immediately. e04yb will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04.E04YB_LSQHES
lsqhes must calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, a parameter can be set to cause immediate termination.)
E04.E04ZC_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and its Jacobian for a specified n-vector x. If there are no nonlinear constraints (ncnln = 0), confun will not be called by e04zc and confun may be the dummy method e04vdm. (e04vdm is included in the NAG Library.) If there are nonlinear constraints, e04zc always calls confun and objfun together, in that order.
E04.E04ZC_OBJFUN
objfun must calculate the objective function F(x) and its gradient for a specified n-element vector x.
E05.E05JB_MONIT
monit may be used to monitor the optimization process. It is invoked upon every successful completion of the procedure in which a sub-box is considered for splitting. It will also be called just before e05jb exits if that splitting procedure was not successful.
If no monitoring is required, monit may be the dummy monitoring method e05jbk supplied by the NAG Library.
E05.E05JB_OBJFUN
objfun must evaluate the objective function F(x) for a specified n-vector x.
G02.G02EF_MONFUN
You may define your own function or specify the NAG-defined default function (g02efh, not in this release).
G02.G02HB_UCV
ucv must return the value of the function u for a given value of its argument. The value of u must be non-negative.
G02.G02HD_CHI
If isigma > 0, chi must return the value of the weight function χ for a given value of its argument. The value of χ must be non-negative.
G02.G02HD_PSI
psi must return the value of the weight function ψ for a given value of its argument.
G02.G02HF_PSI
psi must return the value of the ψ function for a given value of its argument.
G02.G02HF_PSP
psp must return the value of ψ′(t) = (d/dt)ψ(t) for a given value of its argument.
G02.G02HL_UCV
ucv must return the values of the functions u and w and their derivatives for a given value of its argument.
G02.G02HM_UCV
ucv must return the values of the functions u and w for a given value of its argument.
PrintManager.MessageLogger
Delegate type to use for messages.