- Introduction
- Search Problems and Search Plans
- Sequential Search Methods
- Open Initial Interval
- Other Methods
- Closure
- References
- Problems
With a sequential search, it is possible to use the information from previous experiments to locate subsequent ones. The result is a significantly larger reduction in the final interval of uncertainty for the same number of experiments. The most efficient of the sequential search plans is called Fibonacci search.
This search technique is considered the best of the minimax methods. It has the largest interval reduction of all of the procedures, but it requires that the number of experiments be specified in advance. The development of the algorithm follows that of Wilde (1) and Wilde and Beightler (2), where a dynamic programming approach is used. The derivation begins by placing the last two experiments optimally in the interval preceding the final interval of uncertainty; this is a simultaneous search with two experiments. The development then determines the location of each preceding experiment, working backwards to arrive at the location of the first two experiments. In Figure 5-8 the locations of the last two experiments, x_{n-1} and x_n, are shown placed symmetrically in the next-to-last interval, I_{n-1}, separated by the resolution. Evaluating the results of experiments x_{n-1} and x_n permits one of the two subintervals to be discarded and relates I_{n-1} to the final interval of uncertainty, I_n. We can now locate experiment x_{n-2}, which, with x_{n-1}, determines the interval I_{n-2}. We can locate experiment x_{n-3} in the same manner.
This equation can be combined with equation (5-31) to give the corresponding relation for the next interval. This reasoning can be repeated to locate points x_{n-3}, x_{n-4}, and so on, back to the first two experiments. Repeated substitution of equations like equation (5-34) gives the following results in terms of I_n, and the generalizations of equations (5-35) and (5-36) express each interval I_{n-j} and experiment location in terms of I_n and the Fibonacci numbers A_j.
The A_j are the Fibonacci numbers, defined by the recursion A_j = A_{j-1} + A_{j-2} with A_0 = A_1 = 1, which gives the first twelve values as 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, and 144. We are now in a position to obtain the equation to calculate the length of the final interval of uncertainty, I_n.
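Following the development of Wilde and Beightler cited above, the relations just described take the standard form (a reconstruction, not the text's own numbered displays; e is the resolution as a fraction of the initial interval I_1):

```latex
% Last two experiments placed symmetrically, separated by the resolution e I_1:
I_{n-1} = 2 I_n - e I_1
% Each preceding interval is the sum of the two that follow it:
I_{n-j} = I_{n-j+1} + I_{n-j+2}
% Repeated substitution gives, in terms of I_n (A_j are the Fibonacci numbers):
I_{n-j} = A_{j+1}\, I_n - A_{j-1}\, e I_1
% Setting j = n-1 (so that I_{n-j} = I_1) and solving for I_n:
I_n = \frac{1 + A_{n-2}\, e}{A_n}\, I_1
```

With e = 0 this reduces to I_n = I_1/A_n, which is consistent with the reduction factor of 144 quoted later for eleven experiments.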
However, it is more convenient to have the equations in terms of the resolution based on the initial interval, eI_1. Thus, knowing the initial interval I_1, the number of experiments n, and the resolution e, the length of the final interval of uncertainty can be computed. The location of the first two experiments can now be determined using equation (5-38). Both of the first two experiments are needed to determine the part of the interval that is to be retained for further experiments. The first two experiments are placed according to n - j = 2, or j = n - 2, and substituting into equation (5-38) gives their locations.
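The Fibonacci numbers and the placement ratios just described can be computed directly; a minimal sketch for the zero-resolution case, e = 0 (the function names are ours):

```python
def fibonacci_numbers(n):
    """Fibonacci numbers A_0 ... A_n with A_0 = A_1 = 1."""
    a = [1, 1]
    while len(a) <= n:
        a.append(a[-1] + a[-2])
    return a

def first_two_placements(lo, hi, n):
    """Symmetric placement of the first two experiments for an
    n-experiment Fibonacci search with zero resolution (e = 0)."""
    a = fibonacci_numbers(n)
    i1 = hi - lo
    x1 = lo + (a[n - 2] / a[n]) * i1   # distance A_{n-2}/A_n from the low end
    x2 = lo + (a[n - 1] / a[n]) * i1   # symmetric point, A_{n-1}/A_n from the low end
    return x1, x2

print(fibonacci_numbers(11))              # the twelve values, ending in 144
print(first_two_placements(0.0, 1.0, 4))  # (0.4, 0.6)
```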
Again, it is convenient to have the above equations in terms of the initial interval, I_1. The location of x_1 is symmetric with that of x_2 in the initial interval.
The procedure continues after the first two experiments are evaluated by discarding the interval that does not contain the optimum. The third experiment is placed in the retained interval, symmetric with the experiment remaining in that interval. Evaluating this experiment permits discarding another section of the interval that does not contain the optimum. The procedure is continued by placing each remaining experiment symmetric with the previous experiment having the best value, until the final experiment is placed. This final experiment will be symmetric in the final interval with the previous best experiment, and the two will be separated by a distance equal to the resolution, eI_1.
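The steps above can be sketched as a short routine. This is a minimal implementation under the zero-resolution simplification (e = 0), with a small offset standing in for the resolution when the last two experiments would otherwise coincide. The function and interval below are stand-ins chosen to have the optimum quoted in the example, x = 3/4 with y = 5.25; the actual profit function of the text's examples is not reproduced here.

```python
def fibonacci_search_max(f, lo, hi, n, tie_offset=0.01):
    """Maximize a unimodal f on [lo, hi] with n sequential experiments,
    placed according to the Fibonacci numbers (resolution e = 0)."""
    a = [1, 1]
    while len(a) <= n:
        a.append(a[-1] + a[-2])
    x1 = lo + (a[n - 2] / a[n]) * (hi - lo)
    x2 = lo + (a[n - 1] / a[n]) * (hi - lo)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                      # the optimum cannot lie in [lo, x1]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + hi - x1            # symmetric placement in the retained interval
            if abs(x2 - x1) < 1e-12:     # final experiments would coincide
                x2 = x1 + tie_offset * (hi - lo)
            f2 = f(x2)
        else:                            # the optimum cannot lie in [x2, hi]
            hi, x2, f2 = x2, x1, f1
            x1 = lo + hi - x2
            if abs(x1 - x2) < 1e-12:
                x1 = x2 - tie_offset * (hi - lo)
            f1 = f(x1)
    return (x1, f1) if f1 >= f2 else (x2, f2)

# Stand-in quadratic with its maximum at x = 3/4, y = 5.25:
y = lambda x: 3.0 + 6.0 * x - 4.0 * x**2
print(fibonacci_search_max(y, 0.0, 1.0, 4))
```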
The classical theory solution was given in Example 5-1, and the optimum is located at x = 3/4 with a value of y(3/4) = 5.25. This compares with the best value of 5.246 located at x = 0.78 with four sequential experiments using Fibonacci search. A rapid interval reduction is obtained with Fibonacci search. This can be illustrated with a simplified form of equation (5-41) with e = 0, which is I_n = I_1/A_n. For 11 experiments the initial interval is reduced by a factor of 144, because A_11 = 144.

The program performs a Fibonacci search to locate the maximum of the function given in the function FUNC in the interval between LBOUND and HBOUND. The input to this program is in free format, and it consists of the resolution desired (RESOL), the number of experiments (EXPNO), and the low and high bounds (LBOUND, HBOUND). The output includes the value of the function and the bounds on the interval of uncertainty for each application of the algorithm. The input and output shown with the program are the results obtained using the problem in Example 5-2. To maximize a new function, supply the equation in the function FUNC.

The search technique to be discussed next is a modification of Fibonacci search. Golden section search sacrifices being a minimax technique to relax the requirement that the number of experiments be specified in advance.
This method is used when the number of experiments is not known in advance. The search starts at essentially the same place as a Fibonacci search when more than about six Fibonacci experiments are specified; that is, the first two experiments fall at nearly the same points in the initial interval. The ratio of successive intervals, t = I_j/I_{j+1}, is defined as constant for all intervals. Using the relation that each interval is the sum of the two succeeding intervals, this definition can be combined with equation (5-46) to give a quadratic equation in t.
The solution of this quadratic equation is given by Wilde (1) as the golden ratio, t = (1 + sqrt(5))/2 = 1.618...
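In symbols, the steps leading to this result are (a reconstruction of the standard golden section relations, assuming the ratio t is the same for every pair of successive intervals):

```latex
t = \frac{I_j}{I_{j+1}}, \qquad I_j = I_{j+1} + I_{j+2}
\;\Rightarrow\; t = 1 + \frac{1}{t}
\;\Rightarrow\; t^2 - t - 1 = 0
\;\Rightarrow\; t = \frac{1 + \sqrt{5}}{2} = 1.618\ldots
```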
To begin the search, the second (or first) experiment is located using equation (5-47) at a distance I_1/t = 0.618 I_1 from one end of the interval, and the first (or second) experiment is located symmetrically in the interval, at a distance 0.382 I_1 from the same end. After the first two experiments are evaluated, the search proceeds as in Fibonacci search, with each succeeding experiment placed symmetrically in the retained interval. The following example illustrates the procedure using the same function as in the prior examples.
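A minimal golden section sketch, using the same stand-in quadratic as before (names and stopping rule are ours, not the text's program):

```python
import math

def golden_section_max(f, lo, hi, n_experiments):
    """Maximize a unimodal f on [lo, hi] with a budget of experiments; the
    loop could instead be stopped on an interval-size or value criterion,
    since the number of experiments need not be fixed in advance."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/t = 0.618...
    x1 = hi - invphi * (hi - lo)              # 0.382 of the interval
    x2 = lo + invphi * (hi - lo)              # 0.618 of the interval
    f1, f2 = f(x1), f(x2)
    for _ in range(n_experiments - 2):
        if f1 < f2:                           # discard [lo, x1]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
        else:                                 # discard [x2, hi]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
    return (x1, f1) if f1 >= f2 else (x2, f2)

y = lambda x: 3.0 + 6.0 * x - 4.0 * x**2      # stand-in quadratic, max at x = 3/4
print(golden_section_max(y, 0.0, 1.0, 11))
```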
A rapid interval reduction was obtained with golden section search also. For eleven experiments the initial interval would be reduced by a factor of 123, compared with 144 for Fibonacci search. Both techniques have been widely used in industrial applications.
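The two reduction factors quoted above can be checked with a quick computation (taking A_0 = A_1 = 1):

```python
import math

# Fibonacci search: the reduction after n experiments is A_n.
a = [1, 1]
for _ in range(10):
    a.append(a[-1] + a[-2])
fib_reduction = a[11]                      # n = 11 experiments

# Golden section: each experiment after the first keeps 1/t = 0.618... of the
# interval, so eleven experiments reduce the interval by t**10.
t = (1.0 + math.sqrt(5.0)) / 2.0
golden_reduction = t ** 10

print(fib_reduction, round(golden_reduction))   # 144 and 123
```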
In the next section we will briefly describe another extension of Fibonacci search, called lattice search.
This method has been developed for the problem where the independent variable takes on only discrete values, so a search method is needed that examines only these discrete values. Examples could be the determination of the optimum route for an aircraft or the optimum number of temperature or pressure measuring devices for a process. The approach is to modify the Fibonacci search technique such that each experiment falls on a discrete value of the independent variable as the search proceeds. This means that the first two experiments must start on discrete values of the independent variable, called lattice points. Beginning with equation (5-41), it is simplified for e = 0, i.e., zero resolution, to give I_n = I_1/A_n.
For this equation, I_1 is measured in lattice intervals, and the number of experiments n is chosen such that A_n equals or exceeds the number of lattice intervals in I_1. To start the search, equation (5-43) is used with e = 0, and using equation (5-54) to have this equation in terms of I_1, the second experiment is located at the lattice point corresponding to A_{n-1}.
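A sketch of the idea, assuming the search is over a list of values at the lattice points that is unimodal in the index; padding with fictitious points beyond the last lattice point is our device for making the lattice length a Fibonacci number:

```python
def lattice_fibonacci_max(values):
    """Locate the index of the maximum of a list of values that is
    unimodal in the index, evaluating only Fibonacci-placed lattice points."""
    m = len(values)                        # lattice points 0 .. m-1
    a = [1, 1]
    while a[-1] < m - 1:                   # smallest n with A_n >= number of intervals
        a.append(a[-1] + a[-2])
    n = len(a) - 1
    g = lambda i: values[i] if i < m else float("-inf")   # fictitious points
    lo, hi = 0, a[n]
    x1, x2 = a[n - 2], a[n - 1]            # first two experiments at lattice points
    f1, f2 = g(x1), g(x2)
    while hi - lo > 2:
        if f1 < f2:
            lo, x1, f1 = x1, x2, f2
            x2 = lo + hi - x1              # symmetric, and always an integer
            f2 = g(x2)
        else:
            hi, x2, f2 = x2, x1, f1
            x1 = lo + hi - x2
            f1 = g(x1)
    return x1 if f1 >= f2 else x2

vals = [-(i - 6) ** 2 for i in range(9)]   # unimodal with its peak at index 6
print(lattice_fibonacci_max(vals))         # index 6
```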
A further discussion of this procedure is given by Wilde (1). Now our attention will turn to extending the use of these methods to an open initial interval in the next section.
Many multivariable search techniques search along a line in space as they attempt to move from a starting point through better points to arrive at an optimum, as will be seen in the next chapter. This requires a single-variable search where one end of the interval is known, and the other is to be found in order to have the optimum bounded in the interval. The approach is to attempt to place two experiments along the line of search to bound the optimum and, at the same time, have the experiments in the correct positions to continue a Fibonacci or golden section search. The compromise is one of attempting to select an interval that is large enough to ensure that the optimum is bounded, but not so large that excessive effort is required to reduce the interval to an acceptable final value. Also, if the optimum is not bounded with two experiments, provision can be made to place a third experiment such that it is in the proper position to continue the search; the second experiment would then be used with the third as the two required by the search technique. Once the optimum is bounded, the previously described procedures are used to reduce the size of the interval to an acceptable value.

In addition to illustrating this procedure for bounding an optimum in an open interval, we want to illustrate how it can be used in a multivariable search procedure. The gradient line developed in Chapter 2 will be used for this purpose. A line in space can be represented in vector notation by the following equation.
where a is called the parameter of the line; for the value of a = 0, the line passes through the starting point x_0. The gradient line was given by equation (2-32) and is rewritten here using the plus sign to have the direction of steepest ascent. In this case values of the parameter a greater than zero generate points along the line in the direction of steepest ascent from x_0, and the problem of maximizing y(x) along the line becomes a single-variable search in the parameter a.
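In symbols, the line and the gradient line take the standard forms (a reconstruction, with s denoting an assumed direction vector; equation (2-32) is as cited in the text):

```latex
% A line in space through x_0 with direction s and parameter a:
\mathbf{x} = \mathbf{x}_0 + a\,\mathbf{s}
% The gradient line, with the plus sign for steepest ascent:
\mathbf{x} = \mathbf{x}_0 + a\,\nabla y(\mathbf{x}_0)
% The single-variable problem along the line:
\max_{a \ge 0}\; y\!\left(\mathbf{x}_0 + a\,\nabla y(\mathbf{x}_0)\right)
```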
Golden section search will be used to illustrate the procedure for bounding the optimum on an open interval. In this case it is not necessary to specify the number of experiments, and a stopping criterion can be used based on the computed values of y. The first two experiments are located using equations (5-51) and (5-52), at 0.382 and 0.618 of the trial interval.
The problem is one of specifying the trial interval I_1 so that it is large enough to bound the optimum. This should result in y(x) decreasing by the far experiment; if y is still increasing there, the optimum has not been bounded, and a third experiment is placed farther along the line such that it and the second experiment are in the correct positions to continue the search, as described previously. Wilde and Beightler (2) have an expanded discussion of this procedure, giving the generalization of equation (5-60) for placing the succeeding experiments.
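One way to sketch the bounding-then-search procedure is the following construction of ours (not the text's equations): the trial interval is doubled until the value at the far experiment falls below the interior experiment, after which golden section search reduces the bounded interval. The line function below is a hypothetical y along a gradient line, with its peak at a = 5.

```python
import math

INVPHI = (math.sqrt(5.0) - 1.0) / 2.0      # 1/t = 0.618...

def bound_then_golden_max(g, i1=1.0, grow=2.0, tol=1e-5):
    """Maximize g(a) for a >= 0 on an open interval: expand a trial interval
    until the optimum is bounded, then reduce it by golden section search."""
    hi = i1
    while g(INVPHI * hi) < g(hi):          # far experiment still improving:
        hi *= grow                         # the optimum is not yet bounded
    lo = 0.0
    x1, x2 = hi - INVPHI * hi, INVPHI * hi
    f1, f2 = g(x1), g(x2)
    while hi - lo > tol:
        if f1 < f2:
            lo, x1, f1 = x1, x2, f2
            x2 = lo + INVPHI * (hi - lo)
            f2 = g(x2)
        else:
            hi, x2, f2 = x2, x1, f1
            x1 = hi - INVPHI * (hi - lo)
            f1 = g(x1)
    return 0.5 * (lo + hi)

# Hypothetical profit along a gradient line, peak at a = 5:
y_line = lambda a: 10.0 - (a - 5.0) ** 2
print(round(bound_then_golden_max(y_line), 4))
```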
There are several nonminimax techniques that have been used successfully for single-variable optimization. They involve fitting a polynomial to the profit function and computing the stationary points of the polynomial iteratively until a satisfactory optimum has been obtained. The polynomial is usually quadratic or at most cubic, and procedures can be incorporated to bound the optimum when the interval is open. These procedures will be reviewed, and further descriptions are given by Avriel (3), Beveridge and Schechter (6), and Himmelblau (7). To have a quadratic fit of the profit (or cost) function, the coefficients of the function are evaluated by making three measurements. The quadratic equation can be written as y = ax^2 + bx + c. The stationary point of this equation is obtained by setting the first derivative equal to zero to give x* = -b/2a.
The values of a and b are determined by measurements at three points to determine y(x_1), y(x_2), and y(x_3), which gives three linear algebraic equations in a, b, and c that can be written in matrix form. This set of equations can be solved for a and b, and the results are substituted into equation (5-62) to compute x*.
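The fit, its stationary point, and the three-measurement system have the standard form (a reconstruction consistent with the description above; the subscripts are ours):

```latex
y = a x^2 + b x + c, \qquad
\frac{dy}{dx} = 2 a x + b = 0
\;\Rightarrow\; x^{*} = -\frac{b}{2a}
% Three measurements give the linear system:
\begin{bmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{bmatrix}
\begin{bmatrix} a \\ b \\ c \end{bmatrix}
=
\begin{bmatrix} y(x_1) \\ y(x_2) \\ y(x_3) \end{bmatrix}
```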
x The procedure involves
selecting three initial points and computing the optimum of the quadratic
approximation, x A complimentary
procedure to be used with the quadratic method for an open interval to
bound the optimum is described by Himmelblau (7) also. It is called the
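A minimal sketch of the successive quadratic approximation (our direct solution of the three-point fit in divided-difference form; dropping the point with the worst function value is one common replacement rule):

```python
def quadratic_step(f, x1, x2, x3):
    """Fit y = a x^2 + b x + c through three points and return the
    stationary point x* = -b / (2a)."""
    y1, y2, y3 = f(x1), f(x2), f(x3)
    # Divided differences avoid solving the 3x3 system directly.
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    a = (d2 - d1) / (x3 - x1)
    b = d1 - a * (x1 + x2)
    return -b / (2.0 * a)

def quadratic_search_max(f, x1, x2, x3, tol=1e-8, max_iter=50):
    """Iterate the quadratic step, replacing the worst of the three points."""
    xs = [x1, x2, x3]
    x_old = float("inf")
    for _ in range(max_iter):
        x_new = quadratic_step(f, *xs)
        if abs(x_new - x_old) < tol:
            break
        x_old = x_new
        xs.append(x_new)
        xs.remove(min(xs, key=f))          # drop the point with the worst value
        xs.sort()
    return x_old

y = lambda x: 3.0 + 6.0 * x - 4.0 * x**2   # stand-in quadratic, max at x = 3/4
print(quadratic_search_max(y, 0.0, 0.5, 1.0))  # -> 0.75
```

For a quadratic profit function the first step is already exact; for other smooth functions the iteration converges rapidly near a well-behaved maximum.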
A simple example illustrating the use of these algorithms is given by Himmelblau (7) and Beightler et al. (4). Also, Wilde and Beightler (2) give the equation for a cubic approximation to the profit function. It should be mentioned that the Armijo line search (11) has been used successfully with the multivariable search method successive quadratic programming (12). This line search does not have minimax properties, but it has been shown to converge to the minimum of a continuous function, as has a sequence for the parameter of the gradient line. Further details are given in the article by Armijo (11) for these theorems and in the article by Biegler and Cuthrell (12) for applications with the successive quadratic programming algorithm for optimization of process simulators.
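A sketch of the Armijo backtracking rule in one dimension (the parameter choices are ours): the rule accepts the first step size giving sufficient decrease along a descent direction.

```python
def armijo_step(f, grad_f, x, direction, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtrack from alpha0, halving the step until the Armijo sufficient
    decrease condition f(x + a*d) <= f(x) + sigma * a * f'(x) * d holds."""
    fx = f(x)
    slope = grad_f(x) * direction          # directional derivative; must be < 0
    alpha = alpha0
    while f(x + alpha * direction) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

# Minimize f(x) = (x - 3)^2 by steepest descent with Armijo steps.
f = lambda x: (x - 3.0) ** 2
df = lambda x: 2.0 * (x - 3.0)
x = 0.0
for _ in range(100):
    d = -df(x)                             # steepest descent direction
    if abs(d) < 1e-10:
        break
    x += armijo_step(f, df, x, d) * d
print(round(x, 6))                         # -> 3.0
```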