Chapter 6

MULTIVARIABLE OPTIMIZATION PROCEDURES

Introduction

This is the most active area of optimization. As Sargent(33) pointed out in 1979, relevant papers were being published at a rate of about 200 per month in more than 30 journals, not counting conference proceedings and special collections; a single conference in 1979 added 450 papers to the list. Applications are varied and appear in almost every field.

     Production codes for nonlinear optimization comparable to MPSX (Mathematical Programming System Extended) for linear programming are not available. Over the past two decades, however, the capability to locate the optimum of a nonlinear economic model of a plant while complying with several hundred constraints associated with the process models, unit capacities, raw material availabilities, and product demands has been developed in the proprietary codes of major corporations(1). Generally available nonlinear codes for large problems have grown from university and government research programs on numerical experimentation with algorithms, and their documentation is somewhat limited(2). The theoretical foundations for these computer programs were developed in the 1960s, and, as with linear programming, the capability to solve optimization problems with increasing numbers of constraints has grown with improvements in computer hardware and software. There is still debate about which algorithms and computer codes are superior, and Lasdon(3) has recommended having several codes available that implement some of the more successful methods.

     The effectiveness of a multivariable optimization procedure depends on several interrelated factors: the optimization theory, the algorithms that implement the theory, the computer program and programming language used for the computations, the computer that runs the program, and the optimization problems being solved. For example, in the area of multivariable, unconstrained search methods, several hundred algorithms have been used with varying degrees of success. They have been programmed mainly in FORTRAN, run on various types of computers, and applied to a range of problems from simple algebraic expressions to plant simulations.

     This chapter describes unconstrained and constrained multivariable search algorithms that have been successful in solving industrial optimization problems. Examples are given to illustrate these methods, along with references to sources of computer programs for them. Also, references to recent and classical texts and articles are included for further information. For example, a two-volume set of books by Fletcher(4,5) is a recent comprehensive compilation of the mathematical aspects of nonlinear programming methods, as are the equally recent books by Gill et al.(6), McCormick(7), and Bertsekas(50). The books by Reklaitis et al.(15), Vanderplaats(24), Haftka and Kamat(54), and Dennis and Schnabel(55) describe the theory and recent computational practice, and Avriel's book(9) gives a broad mathematical coverage of the subject. Finally, Wilde's book(10), Optimum Seeking Methods, the first book devoted to the subject, still contains valuable information in a very readable style.

     In general form the nonlinear optimization problem can be stated as:

Optimize:      y(x)                                                           (6-1)

Subject to:    fi(x) = 0      for i = 1, 2, ..., h

               fi(x) ≥ 0      for i = h+1, ..., m

There are n independent variables, x = (x1, x2, ..., xn), and m constraint equations, of which h are equality constraints. Also, upper and lower bounds can be specified on the values of the xj's. For this general form, Avriel(11) points out that there is no unified approach to obtaining the optimal solution of the nonlinear optimization problem that is comparable to the unifying role of the Simplex Method in linear programming. He notes that the Simplex Method can efficiently solve a linear program in thousands of variables, yet even minimizing an unconstrained nonlinear function of only a few variables remains an important question.
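
     As a concrete illustration of this general form, the following minimal sketch poses a small two-variable problem with one equality constraint, one inequality constraint, and bounds, and solves it with SciPy's minimize routine. The objective and constraint functions here are illustrative assumptions, not taken from any particular application.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative objective y(x) and constraints in the form of (6-1)
    def y(x):
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    constraints = [
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},       # f1(x) = 0
        {"type": "ineq", "fun": lambda x: 4.0 - x[0]**2 - x[1]**2}, # f2(x) >= 0
    ]
    bounds = [(0.0, 3.0), (0.0, 3.0)]   # lower and upper bounds on x1 and x2

    result = minimize(y, x0=np.array([0.5, 0.5]),
                      bounds=bounds, constraints=constraints)
    print(result.x, result.fun)   # optimum at about (0.5, 1.5)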

     There are four classes of procedures for multivariable optimization that are applicable to nonlinear economic models with nonlinear constraints. These are multivariable search methods, multivariable elimination procedures, random search, and stochastic approximation procedures. Multivariable search methods are the most important and are discussed in detail. The capabilities and limitations of the other three methods are given in summary, with reference to other sources for more complete information.

     Multivariable search methods can be thought of as encompassing the theory and algorithms of nonlinear programming, along with the associated computational methods. These procedures use algorithms based on geometric or logical concepts to move rapidly from a starting point, possibly far from the optimum, to a point near the optimum. Also, they attempt to satisfy the constraints associated with the problem and the Kuhn-Tucker conditions as they generate improved values of the economic model.

     Multivariable elimination procedures are methods that reduce the feasible region (the hypersurface of the independent variables) by discarding regions that are known not to contain the optimum. They are similar to minimax single-variable search methods in that they eliminate intervals on each of the independent variables. However, these methods are restricted to certain types of functions, e.g., strongly unimodal functions. Also, to locate the best value of the profit function with these procedures, the effort required to achieve a given reduction in the range of the independent variables grows rapidly as the number of independent variables increases. This effect has been referred to as the curse of dimensionality and has been illustrated by Wilde(10). The single-variable minimax interval elimination procedures are not useful in several dimensions, because those procedures eliminate only line segments, and there is an infinite number of lines in a plane.
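
     A short calculation makes the dimensionality effect concrete. If each independent variable must be resolved into k subintervals, a full grid over the feasible region requires k to the power n points, so the work for a given per-variable resolution grows geometrically with n. The resolution k = 10 below is an arbitrary illustration.

    # Grid points needed to resolve each of n variables into k = 10
    # subintervals; the count grows as k**n (the curse of dimensionality).
    k = 10
    for n in (1, 2, 5, 10):
        print(f"n = {n:2d}: {k**n:>15,} grid points")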

     Random search is a method that places experiments randomly in the feasible region after it has been divided into a grid of discrete points. Knowing the number and location of the grid points, a set of experiments can be placed randomly, and it can then be determined, with a certain probability, that one of these points has a value of the profit function in a specified best fraction (the top x%). Unimodality is not required, and the number of independent variables is not directly a factor, as the calculation below shows. In adaptive, or creeping, random search, experiments are placed randomly in a selected section of the feasible region, and a best value is located. Then another section of the feasible region is centered on this best value, and random experiments are placed again. This procedure is repeated until a stopping criterion is met. In essence, random search is then being used as a multivariable search method.
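
     The probability statement follows from a standard argument, sketched below with a target fraction and confidence level chosen arbitrarily for illustration. Note that the number of independent variables, n, does not appear.

    import math

    # With N random points, P(at least one lies in the best fraction f)
    # is 1 - (1 - f)**N; solving for N:
    f = 0.01    # want a point in the best 1% of the gridded region
    P = 0.95    # with 95% probability
    N = math.ceil(math.log(1.0 - P) / math.log(1.0 - f))
    print(N)    # about 299 experiments, regardless of the dimension n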

     Stochastic approximation procedures are methods that apply to economic models that contain random error, e.g., the plant instead of a computer simulation of the plant. These techniques are similar to multivariable search methods, but they move slowly to avoid being confounded by the random error in the values of the economic model.
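
     The sketch below illustrates the idea with a Kiefer-Wolfowitz-type procedure on a one-variable profit function with simulated measurement noise; the noisy function and the step-size sequences are assumptions chosen to satisfy the usual convergence conditions, not a prescription from the text.

    import random

    def noisy_profit(x):
        # Stand-in for a plant measurement: true optimum at x = 2
        return -(x - 2.0)**2 + random.gauss(0.0, 0.1)

    x = 0.0
    for k in range(1, 201):
        a_k = 1.0 / k         # step sizes shrink so the noise averages out
        c_k = 1.0 / k**0.25   # difference interval shrinks more slowly
        slope = (noisy_profit(x + c_k) - noisy_profit(x - c_k)) / (2.0 * c_k)
        x += a_k * slope      # move slowly uphill despite the random error
    print(x)                  # should be near the true optimum, x = 2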

     These four methods are described such that they can be applied to industrial problems. The most important and widely used multivariable search methods are given first, and then the other three procedures are discussed.

Multivariable Search Methods Overview

Wilde(10) has proposed a strategy for multivariable search methods that contains some important ideas. This strategy has an opening gambit, a middle game, and an end game, in analogy with chess. In the opening gambit a starting point is selected. The middle game then involves moving from this starting point to a point near the optimum as rapidly as possible. In the end game a quadratic fit to the economic model is performed to avoid stopping at a saddle point or on a sharp ridge.

     Generally, selecting a starting point is not a problem, for the current design or plant operating conditions are usually known. If they are not available, then midpoints between the upper and lower limits on the independent variables can be used, and Wilde(10) has suggested others, such as the centroid and the minimax.

     In the middle game a multivariable search method is used that moves rapidly from the starting point to a point that appears to be an optimum. Only enough explorations are performed at each step to obtain information useful to locate future experiments and to keep the method moving rapidly toward the optimum. The objective is to attain a series of improved values of the economic model with a minimum of computational effort.

     The end game takes over once the middle game procedure has located what appears to be an optimum. A quadratic fit to the economic model at this best point is performed to determine whether it is an optimum, rather than a saddle point or a ridge. Based on this quadratic approximation, the strategy stops if an optimum has been found and resumes the middle game if not.
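
     One way to carry out this end-game check is to estimate the matrix of second derivatives at the candidate point by finite differences and examine its eigenvalues: all negative indicates a maximum, mixed signs a saddle point. The sketch below applies this test to an illustrative function with a saddle at the origin.

    import numpy as np

    def hessian_fd(g, x, h=1e-4):
        # Central-difference estimate of the matrix of second derivatives
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
                H[i, j] = (g(x + ei + ej) - g(x + ei - ej)
                           - g(x - ei + ej) + g(x - ei - ej)) / (4.0 * h * h)
        return H

    g = lambda x: -x[0]**2 + x[1]**2          # saddle point at the origin
    print(np.linalg.eigvalsh(hessian_fd(g, np.zeros(2))))
    # Mixed signs (about -2 and +2): a saddle, so the middle game continues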

     With these ideas in mind, multivariable search methods will be described that are middle game procedures applicable to unconstrained and constrained problems. One of the more frequently encountered unconstrained optimization problems is that of a nonlinear least-squares fit of a curve to experimental data. However, industrial optimization problems are constrained ones, almost without exception. Moreover, it will be seen that some constrained methods convert the problem into an unconstrained one, and then an unconstrained procedure is employed. Also, some of the more effective middle game procedures develop the information for the quadratic fit of the end game as they proceed from the starting point.
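
     As an aside on the least-squares application just mentioned, the following sketch fits a two-parameter exponential model to a handful of made-up data points with SciPy's curve_fit routine; the model and the data are illustrative assumptions only.

    import numpy as np
    from scipy.optimize import curve_fit

    model = lambda t, a, b: a * np.exp(b * t)    # illustrative two-parameter model
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    yobs = np.array([1.1, 1.9, 3.7, 7.5, 14.8])  # made-up observations

    params, _ = curve_fit(model, t, yobs, p0=(1.0, 0.5))
    print(params)   # least-squares estimates of a and b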

     There are several hundred unconstrained multivariable search methods, but most of these are variations on a few concepts, and these concepts can be used to classify the methods. Many techniques may be called geometric methods, for they use a local geometric property to find a direction giving an improved value of the economic model. Typically, derivative measurements are required. Two examples are the direction of steepest ascent (gradient search) and a quadratic fit to the profit function (Newton's method), as sketched below. Other techniques can be called logical methods, for they use an algorithm based on a logical concept to find a direction in which the profit function improves. Two examples are pattern search and flexible polyhedron search. Typically, derivative measurements are not required, and these types of procedures have been called function comparison methods(6). However, two methods that do not fit readily into these two categories are extensions of linear and quadratic programming. Here linear programming, for example, is applied iteratively to a linearized version of the nonlinear constrained problem to move toward the optimum from a starting point. These methods are called successive, or sequential, linear programming and successive, or sequential, quadratic programming.
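
     The two geometric ideas named above can be shown in a few lines on an illustrative quadratic profit function, here maximized; the matrix A and vector b are assumptions for the example. Steepest ascent steps along the gradient, while Newton's method solves the quadratic fit directly and reaches the maximum of a quadratic in a single step.

    import numpy as np

    A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
    b = np.array([1.0, 1.0])
    grad = lambda x: b - A @ x               # gradient of y = -x'Ax/2 + b'x

    x = np.zeros(2)
    x_sd = x + 0.1 * grad(x)                 # steepest ascent: small gradient step
    x_nw = x + np.linalg.solve(A, grad(x))   # Newton step: solve A d = grad
    print(x_sd, x_nw)                        # x_nw is the exact maximum, A x = b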

     Another, equally valid way to classify unconstrained methods has been given by Gill et al.(6). These categories are Newton, quasi-Newton, and conjugate gradient types, each with and without first or second derivatives, and function comparison methods. Also, some of the quasi-Newton methods are called variable metric methods, and some of the conjugate gradient methods are called conjugate direction methods. All are geometric methods, except for the function comparison methods, which are logical methods.

     There are essentially six types of procedures to solve constrained nonlinear optimization problems. Four of these convert the constrained problem into an unconstrained one, and then an unconstrained search procedure is applied. These four types are penalty and barrier function methods; augmented Lagrangian function methods; generalized reduced gradient methods; and feasible directions, or projection, methods, sometimes called methods of restricted movement. The other two are the previously mentioned procedures of successive, or sequential, linear and quadratic programming. A sketch of the penalty-function conversion follows.
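
     As a minimal sketch of the first of these conversions, an exterior penalty function folds the constraints into the objective with a weight r that is increased between unconstrained solves; SciPy's unconstrained BFGS routine is assumed, and the small problem is illustrative.

    import numpy as np
    from scipy.optimize import minimize

    y  = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2   # objective (minimized)
    f1 = lambda x: x[0] + x[1] - 2.0                   # equality: f1(x) = 0
    f2 = lambda x: x[0]                                # inequality: f2(x) >= 0

    def penalized(x, r):
        # Violations are squared and weighted; feasible points pay no penalty
        return y(x) + r * (f1(x)**2 + min(0.0, f2(x))**2)

    x = np.array([0.0, 0.0])
    for r in (1.0, 10.0, 100.0, 1000.0):
        x = minimize(lambda z: penalized(z, r), x, method="BFGS").x
    print(x)   # approaches the constrained minimum, about (1.5, 0.5)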
