E04.e04nqOptions Class

Options Class for e04nq. See the examples in the Library Introduction.

Syntax

C#
public class e04nqOptions
Visual Basic (Declaration)
Public Class e04nqOptions
Visual C++
public ref class e04nqOptions
F#
type e04nqOptions =  class end
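The sketch below shows the typical pattern of creating the options object and supplying optional parameters before calling e04nq. It is not taken from the Library Introduction; the Set method name and the way the object is passed to e04nq are assumptions that should be checked against the examples there.

C#
using NagLibrary;

class OptionsExample
{
    static void Main()
    {
        // All optional parameters start at their default values.
        E04.e04nqOptions options = new E04.e04nqOptions();

        // Optional parameters are supplied as keyword strings as described
        // below; the Set method shown here follows the pattern used in the
        // Library Introduction examples (verify the exact name there).
        options.Set("Print Level = 1");
        options.Set("Feasibility Tolerance = 1.0e-6");
        options.Set("QPSolver Cholesky");

        // The options object is then passed to the corresponding E04.e04nq
        // call together with the problem data (arguments omitted here).
    }
}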

Description of the Optional Parameters

Check Frequency  i    Default = 60
Every ith iteration after the most recent basis factorization, a numerical test is made to see if the current solution (x, s) satisfies the linear constraints Ax - s = 0. If the largest element of the residual vector r = Ax - s is judged to be too large, the current basis is refactorized and the basic variables recomputed to satisfy the constraints more accurately. If i ≤ 0, the value i = 99999999 is used and effectively no checks are made.
Check Frequency=1 is useful for debugging purposes, but otherwise this option should not be needed.
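As an illustration only (not the library's internal code), the fragment below computes the residual r = Ax - s for a small dense A and tests whether its largest element exceeds a given tolerance; all names are hypothetical.

C#
// Illustrative: the numerical test behind Check Frequency.
// Returns true if max_i |(A*x - s)_i| exceeds tol, i.e., the basis
// would be refactorized and the basic variables recomputed.
static bool ResidualTooLarge(double[,] a, double[] x, double[] s, double tol)
{
    int m = a.GetLength(0), n = a.GetLength(1);
    double maxRes = 0.0;
    for (int i = 0; i < m; i++)
    {
        double ri = -s[i];
        for (int j = 0; j < n; j++) ri += a[i, j] * x[j];
        maxRes = Math.Max(maxRes, Math.Abs(ri));
    }
    return maxRes > tol;
}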
Crash Option  i    Default = 3
Crash Tolerance  r    Default = 0.1
If start="C", an internal Crash procedure is used to select an initial basis from various rows and columns of the constraint matrix A -I . The value of i determines which rows and columns of A are initially eligible for the basis, and how many times the Crash procedure is called. Columns of -I are used to pad the basis where necessary.
i Meaning
0 The initial basis contains only slack variables: B=I.
1 The Crash procedure is called once, looking for a triangular basis in all rows and columns of the matrix A.
2 The Crash procedure is called once, looking for a triangular basis in rows.
3 The Crash procedure is called twice, treating linear equalities and linear inequalities separately.
If i ≥ 1, certain slacks on inequality rows are selected for the basis first. (If i ≥ 2, numerical values are used to exclude slacks that are close to a bound.) The Crash procedure then makes several passes through the columns of A, searching for a basis matrix that is essentially triangular. A column is assigned to ‘pivot’ on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) For remaining unassigned rows, slack variables are inserted to complete the basis.
The Crash Tolerance allows the Crash procedure to ignore certain ‘small’ nonzero elements in each column of A. If amax is the largest element in column j, other nonzeros aij in the column are ignored if |aij| ≤ amax × r. (To be meaningful, r should be in the range 0 ≤ r < 1.)
When r>0.0, the basis obtained by the Crash procedure may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of A and fewer (arbitrary) slacks. A feasible solution may be reached sooner on some problems.
For example, suppose the first m columns of A form the matrix shown under LU Factor Tolerance; i.e., a tridiagonal matrix with entries -1, 4, -1. To help the Crash procedure choose all m columns for the initial basis, we would specify a Crash Tolerance of r > 0.5, say.
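The fragment below (illustrative only, hypothetical names) applies the rule just described to one column of A: entries with |aij| ≤ amax × r are treated as negligible by the Crash procedure.

C#
// Illustrative: entries of a column that the Crash procedure ignores
// for a given Crash Tolerance r (to be meaningful, 0 <= r < 1).
static bool[] NegligibleEntries(double[] column, double r)
{
    double amax = 0.0;
    foreach (double aij in column) amax = Math.Max(amax, Math.Abs(aij));

    bool[] ignored = new bool[column.Length];
    for (int i = 0; i < column.Length; i++)
        ignored[i] = Math.Abs(column[i]) <= amax * r;   // treated as zero
    return ignored;
}
// In the tridiagonal example above, amax = 4, so any r with 4r >= 1
// (in particular the r > 0.5 suggested there) makes the off-diagonal
// -1 entries negligible and each column looks triangular.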
Defaults
This special keyword may be used to reset all optional parameters to their default values.
Elastic Mode  i    Default = 1
This parameter determines if (and when) elastic mode is to be started. Three elastic modes are available as follows:
i Meaning
0 Elastic mode is never invoked. e04nq will terminate as soon as infeasibility is detected. There may be other points with significantly smaller sums of infeasibilities.
1 Elastic mode is invoked only if the constraints are found to be infeasible (the default). If the constraints are infeasible, continue in elastic mode with the composite objective determined by the values of the optional parameters Elastic Objective and Elastic Weight.
2 The iterations start and remain in elastic mode. This option allows you to minimize the composite objective function directly without first performing Phase 1 iterations.
The success of this option will depend critically on your choice of Elastic Weight. If Elastic Weight is sufficiently large and the constraints are feasible, the minimizer of the composite objective and the solution of the original problem are identical. However, if the Elastic Weight is not sufficiently large, the minimizer of the composite function may be infeasible, even if a feasible point exists.
Elastic Objective  i    Default = 1
This determines the form of the composite objective f(x) + γ ∑j (vj + wj) in Phase 2 (γ). Three types of composite objectives are available.
i Meaning
0 Include only the true objective f(x) in the composite objective. This option sets γ=0 in the composite objective and allows e04nq to ignore the elastic bounds and find a solution that minimizes f(x) subject to the non-elastic constraints. This option is useful if there are some ‘soft’ constraints that you would like to ignore if the constraints are infeasible.
1 Use a composite objective defined with γ determined by the value of Elastic Weight. This value is intended to be used in conjunction with Elastic Mode=2.
2 Include only the elastic variables in the composite objective. The elastics are weighted by γ=1. This choice minimizes the violations of the elastic variables at the expense of possibly increasing the true objective. This option can be used to find a point that minimizes the sum of the violations of a subset of constraints specified by the input array helast.
Elastic Weight  r    Default = 1.0
This defines the value of γ in the composite objective in Phase 2 (γ).
At each iteration of elastic mode, the composite objective is defined to be
minimize (over x)   σ f(x) + γ (sum of infeasibilities),
where σ = 1 for Minimize, σ = -1 for Maximize, and f(x) is the quadratic objective.
Note that the effect of γ is not disabled once a feasible point is obtained.
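For illustration (hypothetical names, not library code), the composite objective above can be evaluated as follows.

C#
// Illustrative: sigma*f(x) + gamma*(sum of infeasibilities), where
// sigma = +1 for Minimize, -1 for Maximize, and the infeasibilities
// are the nonnegative elastic variables v_j and w_j.
static double CompositeObjective(double fx, double[] v, double[] w,
                                 double gamma, bool maximize)
{
    double sigma = maximize ? -1.0 : 1.0;
    double sumInf = 0.0;
    for (int j = 0; j < v.Length; j++) sumInf += v[j] + w[j];
    return sigma * fx + gamma * sumInf;
}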
Expand Frequency  i    Default = 10000
This option is part of an anti-cycling procedure (see [Miscellaneous]) designed to allow progress even on highly degenerate problems.
The strategy is to force a positive step at every iteration, at the expense of violating the constraints by a small amount. Suppose that the value of the optional parameter Feasibility Tolerance is δ. Over a period of i iterations, the feasibility tolerance actually used by e04nq (i.e., the working feasibility tolerance) increases from 0.5δ to δ (in steps of 0.5δ/i).
Increasing the value of i helps reduce the number of slightly infeasible nonbasic variables (most of which are eliminated during the resetting procedure). However, it also diminishes the freedom to choose a large pivot element (see the description of the optional parameter Pivot Tolerance).
If i ≤ 0, the value i = 99999999 is used and effectively no anti-cycling procedure is invoked.
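As a sketch of the mechanism described above (not the actual anti-cycling implementation), the working feasibility tolerance after k iterations of a cycle of length i might be computed as follows; all names are hypothetical.

C#
// Illustrative: the working feasibility tolerance increases from
// 0.5*delta to delta over expandFrequency iterations, in steps of
// 0.5*delta/expandFrequency, and is then reset.
static double WorkingTolerance(double delta, int expandFrequency, int k)
{
    int kClamped = Math.Min(k, expandFrequency);  // iterations since the last reset
    return 0.5 * delta + kClamped * (0.5 * delta / expandFrequency);
}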
Factorization Frequency  i    Default = 100 (LP) or 50 (QP)
If i>0, at most i basis changes will occur between factorizations of the basis matrix.
For LP problems, the basis factors are usually updated at every iteration. Higher values of i  may be more efficient on problems that are extremely sparse and well scaled.
For QP problems, fewer basis updates will occur as the solution is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly according to the value of the optional parameter Check Frequency to ensure that the linear constraints Ax - s = 0 are satisfied. Occasionally, the basis will be refactorized before the limit of i updates is reached. If i ≤ 0, the default value is used.
Feasibility Tolerance  r    Default = max(10^-6, √ε)
A feasible problem is one in which all variables satisfy their upper and lower bounds to within the absolute tolerance r. (This includes slack variables. Hence, the general constraints are also satisfied to within r.)
e04nq attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is assumed to be infeasible. Let sInf be the corresponding sum of infeasibilities. If sInf is quite small, it may be appropriate to raise r by a factor of 10 or 100. Otherwise, some error in the data should be suspected.
Note that if sInf is not small and you have not asked e04nq to minimize the violations of the elastic variables (i.e., you have not specified Elastic Objective=2), there may be other points that have a significantly smaller sum of infeasibilities. e04nq will not attempt to find the solution that minimizes the sum unless Elastic Objective=2.
If the constraints and variables have been scaled (see the description of the optional parameter Scale Option), then feasibility is defined in terms of the scaled problem (since it is more likely to be meaningful).
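The fragment below (illustrative, hypothetical names) shows the feasibility check and the sum of infeasibilities sInf referred to above, applied to the combined vector of variables and slacks with bounds lower and upper.

C#
// Illustrative: a variable is counted as infeasible only if it violates
// its bounds by more than the Feasibility Tolerance r.
static double SumOfInfeasibilities(double[] xs, double[] lower,
                                   double[] upper, double r)
{
    double sInf = 0.0;
    for (int j = 0; j < xs.Length; j++)
    {
        if (xs[j] < lower[j] - r) sInf += lower[j] - xs[j];
        else if (xs[j] > upper[j] + r) sInf += xs[j] - upper[j];
    }
    return sInf;   // zero for a point that is feasible to within r
}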
Infinite Bound Size  r    Default = 10^20
If r ≥ 0, r defines the ‘infinite’ bound infbnd in the definition of the problem constraints. Any upper bound greater than or equal to infbnd will be regarded as +∞ (and similarly any lower bound less than or equal to -infbnd will be regarded as -∞). If r < 0, the default value is used.
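For illustration, the comparison described above can be expressed as follows (hypothetical helpers, not part of the library).

C#
// Illustrative: bounds at or beyond infbnd in magnitude are treated as infinite.
static bool IsPlusInfinity(double upperBound, double infbnd)
{
    return upperBound >= infbnd;       // treated as +infinity
}
static bool IsMinusInfinity(double lowerBound, double infbnd)
{
    return lowerBound <= -infbnd;      // treated as -infinity
}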
Iterations Limit  i    Default = max(10000, 10 max(m, n))
The value of i specifies the maximum number of iterations allowed before termination. Setting i=0 and Print Level>0 means that: the workspace needed to start solving the problem will be computed and printed; and feasibility and optimality will be checked. No iterations will be performed. If i<0, the default value is used.
LU Density Tolerance  r1    Default = 0.6
LU Singularity Tolerance  r2    Default = ε^(2/3)
The density tolerance r1 is used during LU factorization of the basis matrix. Columns of L and rows of U are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds r1, the Markowitz strategy for choosing pivots is terminated. The remaining matrix is factored by a dense LU procedure. Raising the density tolerance towards 1.0 may give slightly sparser LU factors, with a slight increase in factorization time.
If r2 > 0, r2 defines the singularity tolerance used to guard against ill-conditioned basis matrices. After B is refactorized, the diagonal elements of U are tested as follows. If |ujj| ≤ r2 or |ujj| < r2 maxi |uij|, the jth column of the basis is replaced by the corresponding slack variable. If r2 ≤ 0, the default value is used.
LU Factor Tolerance  r1    Default = 100.0
LU Update Tolerance  r2    Default = 10.0
The values of r1 and r2 affect the stability and sparsity of the basis factorization B=LU, during refactorization and updates respectively. The lower triangular matrix L is a product of matrices of the form
    ( 1   0 )
    ( μ   1 ),
where the multipliers μ will satisfy |μ| ≤ ri. The default values of r1 and r2 usually strike a good compromise between stability and sparsity. They must satisfy r1, r2 ≥ 1.0.
For large and relatively dense problems, r1 = 10.0 or 5.0 (say) may give a useful improvement in stability without impairing sparsity to a serious degree.
For certain very regular structures (e.g., band matrices) it may be necessary to reduce r1 and/or r2 in order to achieve stability. For example, if the columns of A include a sub-matrix of the form
    (  4  -1              )
    ( -1   4  -1          )
    (     -1   4  -1      )
    (         -1   4  -1  )
    (             -1   4  ),
one should set both r1 and r2 to values in the range 1.0 ≤ ri < 4.0.
LU Partial Pivoting    Default
LU Complete Pivoting
LU Rook Pivoting
The LU factorization implements a Markowitz-type search for pivots that locally minimize the fill-in subject to a threshold pivoting stability criterion. The default option is to use threshold partial pivoting. The options LU Complete Pivoting and LU Rook Pivoting are more expensive but more stable and better at revealing rank, as long as the LU Factor Tolerance is not too large (say <2.0).
Minimize    Default
Maximize
Feasible Point
This option specifies the required direction of the optimization. It applies to both linear and nonlinear terms (if any) in the objective function. Note that if two problems are the same except that one minimizes f(x) and the other maximizes -f(x), their solutions will be the same but the signs of the dual variables πi and the reduced gradients dj (see [Main Iteration]) will be reversed.
The option Feasible Point means ‘ignore the objective function, while finding a feasible point for the linear constraints’. It can be used to check that the constraints are feasible without altering the call to e04nq.
Nolist    Default
List
Normally each optional parameter specification is printed to unit Print File as it is supplied. Optional parameter Nolist may be used to suppress the printing and optional parameter List may be used to restore printing.
Old Basis File  i    Default = 0
If i>0 , the basis maps information will be obtained from this file. The file will usually have been output previously as a New Basis File or Backup Basis File. A full description of information recorded in New Basis File and Backup Basis File is given in Gill et al. (2005a).
The file will not be acceptable if the number of rows or columns in the problem has been altered.
Optimality Tolerance  r    Default = max(10^-6, √ε)
This is used to judge the size of the reduced gradients dj = gj - ajᵀπ, where gj is the jth component of the gradient, aj is the associated column of the constraint matrix (A  -I), and π is the set of dual variables.
By construction, the reduced gradients for basic variables are always zero. The problem will be declared optimal if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy
dj / ‖π‖ ≥ -r    or    dj / ‖π‖ ≤ r
respectively, and if |dj| / ‖π‖ ≤ r for superbasic variables.
In the above tests, ‖π‖ is a measure of the size of the dual variables. It is included to make the tests independent of a scale factor on the objective function. The quantity ‖π‖ actually used is defined by
‖π‖ = max(σ/√m, 1), where σ = ∑ |πi| (summed over i = 1,2,…,m),
so that only large scale factors are allowed for.
If the objective is scaled down to be very small, the optimality test reduces to comparing dj  against 0.01r .
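The sketch below (illustrative only, hypothetical names) mirrors the tests above: ‖π‖ = max(σ/√m, 1) with σ = ∑|πi|, and the reduced gradients of nonbasic and superbasic variables are compared with r.

C#
// Illustrative: the optimality test on the reduced gradients d_j.
static bool IsOptimal(double[] dAtLowerBound, double[] dAtUpperBound,
                      double[] dSuperbasic, double[] pi, double r)
{
    double sigma = 0.0;
    foreach (double p in pi) sigma += Math.Abs(p);
    double piNorm = Math.Max(sigma / Math.Sqrt(pi.Length), 1.0);

    foreach (double d in dAtLowerBound)       // nonbasic at lower bound
        if (d / piNorm < -r) return false;
    foreach (double d in dAtUpperBound)       // nonbasic at upper bound
        if (d / piNorm > r) return false;
    foreach (double d in dSuperbasic)         // superbasic variables
        if (Math.Abs(d) / piNorm > r) return false;
    return true;
}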
Partial Price  i    Default = 10 (LP) or 1 (QP)
This option is recommended for large FP or LP problems that have significantly more variables than constraints (i.e., n ≫ m). It reduces the work required for each pricing operation (i.e., when a nonbasic variable is selected to enter the basis). If i = 1, all columns of the constraint matrix (A  -I) are searched. If i > 1, A and I are partitioned to give i roughly equal segments Aj, Ij, for j = 1, 2, …, i (modulo i). If the previous pricing search was successful on Aj-1, Ij-1, the next search begins on the segments Aj and Ij. If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to enter the basis. If nothing is found, the search continues on the next segments Aj+1, Ij+1, and so on. If i ≤ 0, the default value is used.
Pivot Tolerance  r    Default = ε^(2/3)
Broadly speaking, the pivot tolerance is used to prevent columns entering the basis if they would cause the basis to become almost singular.
When x  changes to x+αp  for some search direction p , a ‘ratio test’ determines which component of x  reaches an upper or lower bound first. The corresponding element of p  is called the pivot element. Elements of p  are ignored (and therefore cannot be pivot elements) if they are smaller than the pivot tolerance r .
It is common for two or more variables to reach a bound at essentially the same time. In such cases, the optional parameter Feasibility Tolerance (say t ) provides some freedom to maximize the pivot element and thereby improve numerical stability. Excessively small values of t  should therefore not be specified. To a lesser extent, the optional parameter Expand Frequency (say f ) also provides some freedom to maximize the pivot element. Excessively large values of f  should therefore not be specified.
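A much simplified ratio test is sketched below (illustrative only, hypothetical names): elements of the search direction p smaller in magnitude than the pivot tolerance are skipped, so they can never become the pivot element.

C#
// Illustrative: choose the blocking variable in a step x -> x + alpha*p.
static int ChooseBlockingVariable(double[] x, double[] p,
                                  double[] lower, double[] upper,
                                  double pivotTol)
{
    int pivotIndex = -1;
    double alphaMin = double.PositiveInfinity;
    for (int j = 0; j < x.Length; j++)
    {
        if (Math.Abs(p[j]) < pivotTol) continue;   // too small to pivot on
        double bound = p[j] > 0.0 ? upper[j] : lower[j];
        double alpha = (bound - x[j]) / p[j];      // step at which x_j hits its bound
        if (alpha < alphaMin) { alphaMin = alpha; pivotIndex = j; }
    }
    return pivotIndex;   // p[pivotIndex] is the pivot element (-1 if no bound is hit)
}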
Print File  i    Default = 0
If i > 0, the following information is output to file i during the solution of each problem:
a listing of the optional parameters;
some statistics about the problem;
the amount of storage available for the LU  factorization of the basis matrix;
notes about the initial basis resulting from a Crash procedure or a Basis file;
the iteration log;
basis factorization statistics;
the exit ifail condition and some statistics about the solution obtained;
the printed solution, if requested.
The last four items are described in [Further Comments] and [Description of Monitoring Information]. Further brief output may be directed to the Summary File.
Print Frequency  i    Default = 100
If i > 0, one line of the iteration log will be printed every ith iteration. A value such as i = 10 is suggested for those interested only in the final solution. If i ≤ 0, the value of i = 99999999 is used and effectively no iteration log entries are printed.
Print Level  i    Default = 1
This controls the amount of printing produced by e04nq as follows.
i Meaning
0 No output except error messages. If you want to suppress all output, set Print File=0.
=1 The set of selected options, problem statistics, summary of the scaling procedure, information about the initial basis resulting from a Crash or a Basis file, a single line of output at each iteration (controlled by the optional parameter Print Frequency), and the exit condition with a summary of the final solution.
≥10 Basis factorization statistics.
Punch File  i1    Default = 0
Insert File  i2    Default = 0
These files provide compatibility with commercial mathematical programming systems. The Punch File from a previous run may be used as an Insert File for a later run on the same problem. A full description of information recorded in Insert File and Punch File is given in Gill et al. (2005a).
If i1>0 , the final solution obtained will be output to file i1 . For linear programs, this format is compatible with various commercial systems.
If i2>0 , the Insert File containing basis information will be read. The file will usually have been output previously as a Punch File. The file will not be accessed if Old Basis File is specified.
QPSolver Cholesky    Default
QPSolver CG
QPSolver QN
Specifies the active-set algorithm used to solve the quadratic program in Phase 2 (γ). QPSolver Cholesky holds the full Cholesky factor R of the reduced Hessian ZᵀHZ. As the QP iterations proceed, the dimension of R changes with the number of superbasic variables. If the number of superbasic variables needs to increase beyond the value of Reduced Hessian Dimension, the reduced Hessian cannot be stored and the solver switches to QPSolver CG. The Cholesky solver is reactivated if the number of superbasics stabilizes at a value less than Reduced Hessian Dimension.
QPSolver QN solves the QP using a quasi-Newton method. In this case, R is the factor of a quasi-Newton approximate Hessian.
QPSolver CG uses an active-set method similar to QPSolver QN, but uses the conjugate-gradient method to solve all systems involving the reduced Hessian.
The Cholesky QP solver is the most robust, but may require a significant amount of computation if there are many superbasics.
The quasi-Newton QP solver does not require computation of the exact R at the start of Phase 2 (γ). It may be appropriate when the number of superbasics is large but relatively few iterations are needed to reach a solution (e.g., if e04nq is called with a Warm Start).
The conjugate-gradient QP solver is appropriate for problems with many degrees of freedom (say, more than 2000 superbasics).
Reduced Hessian Dimension  i    Default = 1 (LP) or min(2000, nH + 1, n) (QP)
This specifies that an i by i triangular matrix R (to define the reduced Hessian according to RᵀR = ZᵀHZ) is to be available for use by the Cholesky QP solver.
Scale Option  i    Default = 2
Scale Tolerance  r    Default = 0.9
Scale Print
Three scale options are available as follows:
i Meaning
0 No scaling. This is recommended if it is known that x  and the constraint matrix never have very large elements (say, larger than 100).
1 The constraints and variables are scaled by an iterative procedure that attempts to make the matrix coefficients as close as possible to 1.0 (see Fourer (1982)). This will sometimes improve the performance of the solution procedures.
2 The constraints and variables are scaled by the iterative procedure. Also, a certain additional scaling is performed that may be helpful if the right-hand side b or the solution x is large. This takes into account columns of (A  -I) that are fixed or have positive lower bounds or negative upper bounds.
Optional parameter Scale Tolerance affects how many passes might be needed through the constraint matrix. On each pass, the scaling procedure computes the ratio of the largest and smallest nonzero coefficients in each column:
ρj = maxi |aij| / mini |aij|   (aij ≠ 0).
If maxj ρj is less than r times its previous value, another scaling pass is performed to adjust the row and column scales. Raising r from 0.9 to 0.99 (say) usually increases the number of scaling passes through A. At most 10 passes are made. The value of r should lie in the range 0 < r < 1.
Scale Print causes the row scales ri and column scales cj to be printed to the Print File, if System Information Yes has been specified. The scaled matrix coefficients are āij = aij cj / ri, and the scaled bounds on the variables and slacks are l̄j = lj / cj, ūj = uj / cj, where cj ≡ rj-n if j > n.
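For illustration (not the library's scaling routine, hypothetical names), the convergence test above can be written as follows.

C#
// Illustrative: rho_j = max_i|a_ij| / min_i|a_ij| over the nonzeros of
// column j; another scaling pass is made while the largest rho_j is
// still falling by more than the factor r (at most 10 passes).
static double MaxColumnRatio(double[,] a)
{
    int m = a.GetLength(0), n = a.GetLength(1);
    double maxRho = 0.0;
    for (int j = 0; j < n; j++)
    {
        double big = 0.0, small = double.PositiveInfinity;
        for (int i = 0; i < m; i++)
        {
            double v = Math.Abs(a[i, j]);
            if (v == 0.0) continue;            // only nonzero coefficients count
            if (v > big) big = v;
            if (v < small) small = v;
        }
        if (big > 0.0) maxRho = Math.Max(maxRho, big / small);
    }
    return maxRho;
}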
Solution File  i    Default = 0
If i>0 , the final solution will be output to file i  (whether optimal or not).
To see more significant digits in the printed solution, it will sometimes be useful to make i  refer to the system Print File.
Summary File  i1    Default = 0
Summary Frequency  i2    Default = 100
If i1 > 0, the Summary File is output to file i1, including a line of the iteration log every i2th iteration. In an interactive environment, it is useful to direct this output to the terminal, to allow a run to be monitored online. (If something looks wrong, the run can be manually terminated.) Further details are given in [Description of Monitoring Information]. If i2 ≤ 0, the value of i2 = 99999999 is used and effectively no iteration log lines are written to the Summary File.
Superbasics Limit  i    Default = 1 (LP) or min(nH + 1, n) (QP)
This places a limit on the storage allocated for superbasic variables. Ideally, i  should be set slightly larger than the ‘number of degrees of freedom’ expected at an optimal solution.
For linear programs, an optimum is normally a basic solution with no degrees of freedom. (The number of variables lying strictly between their bounds is no more than m , the number of general constraints.) The default value of i  is therefore 1.
For quadratic problems, the number of degrees of freedom is often called the ‘number of independent variables’. Normally, i  need not be greater than nH+1 , where nH is the number of leading nonzero columns of H . For many problems, i  may be considerably smaller than nH. This will save storage if nH is very large.
Suppress Parameters
Normally e04nq prints the options file as it is being read, and then prints a complete list of the available keywords and their final values. The optional parameter Suppress Parameters tells e04nq not to print the full list.
System Information No    Default
System Information Yes
This option prints additional information on the progress of major and minor iterations, and Crash statistics. See [Description of Monitoring Information].
Timing Level  i    Default = 0
If i>0 , some timing information will be output to the Print File, if Print File>0.
Unbounded Step Size  r    Default = infbnd
If r > 0, r specifies the magnitude of the change in variables that will be considered a step to an unbounded solution. (Note that an unbounded solution can occur only when the Hessian is not positive-definite.) If the change in x during an iteration would exceed the value of r, the objective function is considered to be unbounded below in the feasible region. If r ≤ 0, the default value is used. See Infinite Bound Size for the definition of infbnd.
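Illustratively (hypothetical names, not library code), the test amounts to comparing the largest change in any variable with r.

C#
// Illustrative: a step whose largest change in x exceeds the Unbounded
// Step Size is taken to indicate an objective unbounded below.
static bool StepLooksUnbounded(double[] xOld, double[] xNew, double r)
{
    double maxChange = 0.0;
    for (int j = 0; j < xOld.Length; j++)
        maxChange = Math.Max(maxChange, Math.Abs(xNew[j] - xOld[j]));
    return maxChange > r;
}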

Inheritance Hierarchy

System.Object
  NagLibrary.E04.e04nqOptions

See Also