Chapter 5

  SINGLE VARIABLE SEARCH TECHNIQUES

Sequential Search Methods

   With a sequential search, it is possible to use the information from previous experiments to locate subsequent ones. The result is a significantly larger reduction in the final interval of uncertainty for the same number of experiments. The most efficient of the sequential search plans is Fibonacci search, named for the famous sequence of numbers that appears in the equations of the procedure. This minimax method requires that the number of experiments be specified in advance, which may be inconvenient. Almost as efficient as Fibonacci search is golden section search, which does not require that the number of experiments be specified in advance. A third, related search plan, lattice search, is designed for functions that are defined only at discrete points. These methods, which place experiments one at a time, will be developed in the following paragraphs. However, theory is also available to place any number of experiments simultaneously in an interval and repeat this placement sequentially; these methods are called even block, odd block, and golden block search. Their description and minimax proofs are given by Wilde and Beightler (2) and Avriel (3). It is important to know that these other methods are available for use, but they will not be discussed here, for their application to the solution of industrial problems has been limited.

Fibonacci Search

This search technique is considered the best of the minimax methods. It gives the largest interval reduction of all of the procedures, but it requires that the number of experiments be specified in advance. The development of this algorithm follows that of Wilde (1) and Wilde and Beightler (2), where a dynamic programming approach is used. The derivation begins by placing the last two experiments optimally in the interval preceding the final interval of uncertainty; this is a simultaneous search with two experiments. The development then determines the location of each preceding experiment to arrive at the location of the first two experiments.

     In Figure 5-8 the locations of the last two experiments, xn and xn-1, are shown. They are placed symmetrically about the center of the interval preceding the final interval and are separated by a distance equal to the resolution, dIn (= eIo). The final interval is In, the one preceding it is In-1, and d is the fractional resolution based on the final interval of uncertainty. For convenience, at the start of the derivation one end of the interval In-1 is the left boundary of the initial interval, Io, and the other end must be the location of experiment xn-2, i.e., y(xn-1) > y(xn-2) for maximizing.

     Evaluating the results of experiments xn and xn-1 will determine the location of the final interval, but not its length. The length of In is determined by the minimax simultaneous search of uniform pairs given by equation (5-10) for two experiments. This equation can be written as follows:

In = In-1/2 + dIn/2

or                                                                      (5-31)

In = In-1/(2 - d)

     We can now locate experiment xn-3 by considering that, had y(xn-1) < y(xn-2), xn would have been located to the right of xn-2 by a distance equal to dIn for maximizing. This is shown in Figure 5-8, and experiment xn-3 has to be located a distance In-1 to the right of xn-1. In addition, the final interval, In, could be located at any one of the four positions shown for In in Figure 5-8, depending on the outcomes of xn-2, xn-1, and xn.

     We can locate experiment xn-4 by considering that, had y(xn-2) < y(xn-3), xn-4 would have to be located to the right of xn-3 by a distance equal to In-2. This is shown in Figure 5-8 and ensures that the final interval is In. This also shows that the interval In-2 is the sum of the final interval In and In-1, i.e.:

In-2 = In + In-1                                         (5-32)

This equation can be combined with equation (5-31) to give

In-2 = In + (2 - d)In                                 (5-33)

or

In-2 = (3 - d)In                                         (5-34)

     This reasoning can be repeated to locate points xn-5, xn-6, ..., and the following results are obtained, i.e., each interval is equal to the sum of the two intervals that follow it.

In-3 = In-2 + In-1

In-4 = In-3 + In-2                                      (5-35)

In-5 = In-4 + In-3

...

Repeated substitution, in the manner used to obtain equation (5-34), gives the following results for equations (5-35) in terms of In.

In-3 = (5 - 2d)In

In-4 = (8 - 3d)In                                        (5-36)

In-5 = (13 - 5d)In

...

The generalization of equation (5-35) is:

In-j = In-(j-1) + In-(j-2)                               (5-37)

and the generalization of equation (5-36) is:

In-j = (Aj+2 - Ajd)In                                (5-38)

 

     The Aj's are a famous sequence of numbers dating back to 1202 (see Wilde (1) for details), and are called the Fibonacci numbers. The sequence can be generated by the following equations:

Ao = 0

A1 = 1

An = An-1 + An-2                          (5-39)

and

n      0    1    2    3    4    5    6    7     8     9     10    11    12

An     0    1    1    2    3    5    8    13    21    34    55    89    144

which gives the values of the Fibonacci numbers through A12.
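     Equation (5-39) maps directly to a few lines of code. The following is a minimal Python sketch that reproduces the table above; the function name is illustrative:

```python
def fibonacci_numbers(n):
    """Return the Fibonacci numbers A_0 .. A_n of equation (5-39)."""
    A = [0, 1]                         # A_0 = 0, A_1 = 1
    for k in range(2, n + 1):
        A.append(A[k - 1] + A[k - 2])  # A_n = A_{n-1} + A_{n-2}
    return A[: n + 1]

print(fibonacci_numbers(12))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```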

     We are now in a position to obtain the equation to calculate the length of the final interval of uncertainty, In, knowing the length of the initial interval, Io, and to determine the placement of the first two experiments. The value of n - j = 1 is used in equation (5-38) to determine I1 as given below.

I1 = (An+1 - An-1d)In                            (5-40)

However, it is more convenient to have the equations in terms of the resolution based on the initial interval, eIo. The substitution eIo = dIn is made, and I1 is set equal to Io, because two experiments are required before any interval reduction occurs. Equation (5-40) becomes:

In = [(1 + eAn-1) / An+1] Io                   (5-41)

Thus, knowing the initial interval, Io, the fractional resolution based on the initial interval, e, and the number of experiments, n, we determine the length of the final interval of uncertainty, In, by equation (5-41). The location of this final interval is determined by the outcome of the experiments.

     The location of the first two experiments can now be determined using equation (5-38). Both of the first two experiments are needed to determine the part of the interval that is to be retained for further experiments. These experiments are placed according to n - j = 2, or j = n - 2, and substituting into equation (5-38) gives:

I2 = (An - An-2d)In                                (5-42)

Again, it is convenient to have the above equation in terms of the initial interval Io and the corresponding resolution e. Making the substitution dIn = eIo and recognizing that x2 is located a distance I2 from the left-hand side of the interval gives:

x2 = I2 = AnIn - eAn-2 Io                       (5-43)

     The location of x2 is shown in Figure 5-8, along with the location of x1. The position of x1 is symmetric to that of x2, a distance I2 from the right-hand side of the interval. The position of x1 can also be computed by setting n - j = 3 in equation (5-38). The result is:

x1 = I3 = (An-1 - An-3d)In = An-1In - eAn-3Io                  (5-44)

     The procedure continues after the first two experiments are evaluated by discarding the interval that does not contain the optimum. The third experiment is placed symmetrically in the remaining interval with respect to the experiment already in it. Evaluating this experiment permits discarding another section of the interval that does not contain the optimum. The procedure continues by placing each remaining experiment symmetrically to the previous one with the best value, until the final experiment is placed. This final experiment is symmetric in the final interval with the previous best experiment, and the two are separated by a distance equal to the resolution, eIo = dIn. The following example illustrates the procedure using the function of Example 5-1.

Example 5-2

Search for the maximum of the following function using Fibonacci search with four experiments. The initial interval is Io = 1.0, and the resolution is e = 0.05:

Maximize: y = 3 + 6x - 4x2

The final interval of uncertainty is computed using equation (5-41).

I4 = [(1 + (0.05) (2)) /5] (1) = 0.22

The location of this interval will be determined by placing the experiments and is shown on the line at the end of the problem. The location of the second experiment is computed using equation (5-43).

x2 = 3(0.22) - (0.05)(1)(1) = 0.610

The location of the first experiment is symmetrical to the second one, i.e.:

x1 = 1.0 - 0.61 = 0.39

The same results could be obtained using equation (5-44). Evaluating the function gives:

y(0.39) = 4.732        y(0.61) = 5.172

The optimum lies in the interval 0.39 < x* < 1.0.

Experiment x3 is placed symmetrically to x2 in this interval as:

x3 = 0.39 + (1 - 0.61) = 0.78

and

y(0.78) = 5.246

The optimum lies in the interval 0.61 < x* < 1.

Experiment x4 is placed symmetrically to x3 in this interval as:

x4 = 0.61 + (1 - 0.78) = 0.83

and

y(0.83) = 5.224

The optimum lies in the region 0.61 < x* < 0.83, which is the location of the final interval computed at the beginning of the example.

[Figure: the four Fibonacci experiments and the final interval of uncertainty shown on the search line for Example 5-2]

The classical theory solution was given in Example 5-1, and the optimum is located at x = 3/4 with a value of y(3/4) = 5.25. This compares with the best value of 5.246, located at x = 0.78, from the four sequential experiments of the Fibonacci search.

     A rapid interval reduction is obtained with Fibonacci search. This can be illustrated with a simplified form of equation (5-41) with e = 0 which is:

In = Io / An+1                                          (5-45)

For 11 experiments the initial interval is reduced by a factor of 144, because A12 = 144. This rapid reduction of the interval containing the optimum, along with the simple procedure of placing experiments symmetrically in the interval to continue the search and its basis in the minimax principle, has made Fibonacci search one of the most widely used procedures. Table 5-1 gives a program for Fibonacci search, along with the input from Example 5-2 and the corresponding output.

     The search technique to be discussed next is a modification of Fibonacci search. Golden section search sacrifices being a minimax technique to relax the requirement that the number of experiments be specified in advance.

This program performs a Fibonacci search to locate the maximum of the function given in the function FUNCT in the interval between LBOUND and HBOUND. The input to this program is in free format, and it consists of the resolution desired (RESOL), the number of experiments (EXPNO), and the low and high bounds (LBOUND, HBOUND). The output includes the value of the function and the bounds on the interval of uncertainty for each application of the algorithm. The input and output shown with the program are the results obtained using the problem in Example 5-2. To maximize a new function, supply the equation in the function FUNCT.
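     Table 5-1 itself is not reproduced here; the following is a minimal Python sketch of the same logic, built from equations (5-39), (5-41), (5-43), and (5-44). The names parallel the program description above but are illustrative assumptions, not the Table 5-1 code:

```python
def fibonacci_search(func, lbound, hbound, n, resol):
    """Maximize func on [lbound, hbound] with n experiments and
    fractional resolution resol (e in the text)."""
    # Fibonacci numbers A_0 .. A_{n+1}, equation (5-39)
    A = [0, 1]
    for _ in range(2, n + 2):
        A.append(A[-1] + A[-2])
    Io = hbound - lbound
    In = (1 + resol * A[n - 1]) / A[n + 1] * Io        # final interval, eq. (5-41)
    x2 = lbound + A[n] * In - resol * A[n - 2] * Io    # eq. (5-43)
    x1 = lbound + (hbound - x2)                        # symmetric placement, eq. (5-44)
    lo, hi = lbound, hbound
    y1, y2 = func(x1), func(x2)
    for _ in range(n - 2):                             # experiments 3 .. n
        if y1 < y2:                                    # optimum lies in (x1, hi)
            lo, x1, y1 = x1, x2, y2
            x2 = lo + hi - x1                          # symmetric to retained point
            y2 = func(x2)
        else:                                          # optimum lies in (lo, x2)
            hi, x2, y2 = x2, x1, y1
            x1 = lo + hi - x2
            y1 = func(x1)
    if y1 < y2:                                        # final interval reduction
        lo = x1
    else:
        hi = x2
    return lo, hi, max((y1, x1), (y2, x2))

# Example 5-2: final interval (0.61, 0.83) with best value y(0.78) = 5.2464
print(fibonacci_search(lambda x: 3 + 6*x - 4*x**2, 0.0, 1.0, 4, 0.05))
```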

Golden Section Search

This method is used when the number of experiments is not known in advance. The search starts at essentially the same place as Fibonacci search when more than about six experiments are specified, i.e., x1 and x2 are located at nearly the same positions, x1 = 0.382 and x2 = 0.618, based on a unit initial interval. The idea is to keep the procedure of placing experiments symmetrically in the interval to rapidly reduce the section that contains the optimum. To do this, equation (5-37) is used, and the ratio of successive intervals is maintained constant. Dividing equation (5-37) by In-(j-2) gives:

In-j / In-(j-2) = In-(j-1) / In-(j-2) + 1                              (5-46)

The ratio of successive intervals t is defined as:

t = In-j / In-(j-1) = In-(j-1) / In-(j-2)                              (5-47)

Using the following relation:

In-j / In-(j-2) = (In-j / In-(j-1)) (In-(j-1) / In-(j-2)) = t^2        (5-48)

This equation can be combined with equation (5-46) to give:

t^2 = t + 1                                                            (5-49)

The solution of this quadratic equation is given by Wilde(1) as:

t = (1 + √5)/2 = 1.618033989...                           (5-50)

     To begin the search, the second (or first) experiment is located using equation (5-47) at:

x2  =  I2  =  I1/t  =  Io/t  =  0.618 Io                       (5-51)

and the first (or second) experiment is located symmetrically in the interval as:

x1 = Io - Io/t = (1 - 1/t)Io = 0.382 Io                      (5-52)

and after n experiments the final interval of uncertainty is:

In = I1 / t^(n-1) = Io / t^(n-1)                          (5-53)

     The following example illustrates the procedure using the same function in the prior examples.

 

Example 5-3

Search for the maximum of the following function using golden section search, stopping after four experiments to compare with previous results. The initial interval is Io = 1.0.

Maximize: y = 3 + 6x - 4x2

The final interval of uncertainty is computed using equation (5-53).

I4 = 1/(1.618)^3 = 0.236

The location of the second experiment is computed using equation (5-51)

x2 = Io/t = 1/t = 0.618

and the first experiment is located symmetrically in the interval as:

x1 = 1 - 0.618 = 0.382

Evaluating the function gives

y(0.382) = 4.71 y(0.618) = 5.18

The optimum lies in the interval 0.382 < x* < 1.0

Experiment x3 is placed symmetrically to x2 in this interval as:

x3 = 0.382 + (1 - 0.618) = 0.764

and

y(0.764) = 5.24

The optimum lies in the interval 0.618 < x* < 1.0.

Experiment x4 is placed symmetrically to x3 in this interval as:

x4 = 0.618 + (1 - 0.764) = 0.854

and

y(0.854) = 5.20

The optimum lies in the region 0.618 < x* < 0.854, which is the location of the final interval computed at the beginning of the example. It is slightly larger, 0.236, than the Fibonacci final interval value of 0.220.
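     The procedure of the example can be written as a short routine. The following is a minimal Python sketch of golden section search for maximization; it stops on an interval-length tolerance rather than a preset number of experiments, and the names are illustrative:

```python
import math

def golden_section_search(func, lo, hi, tol=1e-4):
    """Maximize func on [lo, hi]; stop when the interval is shorter than tol."""
    t = (1 + math.sqrt(5)) / 2          # golden ratio, equation (5-50)
    x1 = hi - (hi - lo) / t             # 0.382 point, equation (5-52)
    x2 = lo + (hi - lo) / t             # 0.618 point, equation (5-51)
    y1, y2 = func(x1), func(x2)
    while hi - lo > tol:
        if y1 < y2:                     # optimum in (x1, hi); discard left section
            lo, x1, y1 = x1, x2, y2
            x2 = lo + (hi - lo) / t
            y2 = func(x2)
        else:                           # optimum in (lo, x2); discard right section
            hi, x2, y2 = x2, x1, y1
            x1 = hi - (hi - lo) / t
            y1 = func(x1)
    return lo, hi

# Example 5-3 function: brackets the maximum near x* = 0.75
print(golden_section_search(lambda x: 3 + 6*x - 4*x**2, 0.0, 1.0, tol=1e-3))
```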

     A rapid interval reduction was also obtained with golden section search. For eleven experiments the initial interval would be reduced by a factor of 123, compared with a factor of 144 for Fibonacci search. Both techniques have been widely used in industrial applications. In the next section we will briefly describe another extension of Fibonacci search, called lattice search, which applies to functions that are defined only at discrete values of the independent variable.

Lattice Search

This method has been developed for problems where the independent variable takes on only discrete values. Thus, a search method is needed that operates on these discrete values. Examples include determining the optimum route for an aircraft or the optimum number of temperature or pressure measuring devices for a process.

     The approach is to modify the Fibonacci search technique such that each experiment falls on a discrete value of the independent variable as the search proceeds. This means that the first two experiments must start on discrete values of the independent variable, called lattice points, and the final two experiments are adjacent to each other at lattice points.

     Beginning with equation (5-41), it is simplified for e = 0, i.e., zero resolution. This gives the following equation.

In = Io /An+1                       (5-54)

For this equation, Io is the number of lattice points plus one, and In is equal to one to have the last two experiments adjacent to each other. However, these conditions overdetermine equation (5-54), and another approach is as follows. To have In equal to one, the number of experiments is determined by selecting the Fibonacci number, An+1, that is equal to or greater than Io. Usually, the Fibonacci number will not match Io exactly, and Io will have to be increased by adding fictitious points. These points are usually added at either the left or right end of the interval. However, they can be located in any convenient place as long as they are not points that must be evaluated during the search.

     To start the search, equation (5-43) is used with e = 0, i.e.:

x2 = An In                            (5-55)

Using equation (5-54) to have this equation in terms of Io gives:

x2 = An (Io /An+1)               (5-56)

Consequently, the second experiment is located at the lattice point corresponding to An. The first experiment is located symmetrically to x2 in the initial interval. Then the function is evaluated at the two points, and the third and subsequent experiments are placed symmetrically in the remaining interval. The last two experiments will be adjacent lattice points, differing by one, and the optimum lattice point will have been located. The following example illustrates the procedure using the function of the previous examples with discrete precision points.

Example 5-4

Search for the maximum of the following function using lattice search and 20 precision points on the initial interval of Io = 1.0.

y = 3 + 6x - 4x2

The precision points are given by the following equation:

xi = 0.05i - 0.025        for       i = 1,2,...,20

The first point is x1 = 0.025, and the last point is x20 = 0.975. The points are spaced on the line as shown at the end of the example. Using equation (5-54) to have In be 1.0, An+1 is selected to be equal to Io = 21. In this case 21 is the Fibonacci number A8, and no fictitious points are required. The second experiment is placed according to equation (5-56) at x2 = A7 = 13. The first experiment is placed symmetrically in the interval at x1 = 21 - 13 = 8. The results are shown on the line at the end of the example for the remaining experiments.

[Figure: the 20 lattice points and the experiment placements on the search line for Example 5-4]

Evaluating the outcomes of the experiments, there is a tie between x3 and x5. Consequently, for this lattice search either lattice point 15 or 16 could be taken as the optimal value.
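     A minimal Python sketch of lattice search follows. Padding the interval to the next Fibonacci number with fictitious points valued at minus infinity is an implementation assumption, consistent with the requirement that fictitious points never be retained; the names are illustrative:

```python
def lattice_search(func, points):
    """Return the index (1-based) of the lattice point that maximizes func."""
    Io = len(points) + 1                 # number of lattice points plus one
    A = [0, 1]
    while A[-1] < Io:                    # smallest Fibonacci number A_{n+1} >= Io
        A.append(A[-1] + A[-2])
    lo, hi = 0, A[-1]                    # fictitious points pad the right end
    i2 = A[-2]                           # x2 at lattice point A_n, equation (5-56)
    i1 = hi - i2                         # x1 placed symmetrically
    y = lambda i: func(points[i - 1]) if i <= len(points) else float("-inf")
    y1, y2 = y(i1), y(i2)
    while hi - lo > 3:                   # until the last two experiments are adjacent
        if y1 < y2:                      # discard the left section
            lo, i1, y1 = i1, i2, y2
            i2 = lo + hi - i1            # symmetric placement
            y2 = y(i2)
        else:                            # discard the right section
            hi, i2, y2 = i2, i1, y1
            i1 = lo + hi - i2
            y1 = y(i1)
    return i1 if y1 >= y2 else i2

# Example 5-4: 20 precision points xi = 0.05i - 0.025
pts = [0.05 * i - 0.025 for i in range(1, 21)]
print(lattice_search(lambda x: 3 + 6*x - 4*x**2, pts))   # 15 (ties with 16)
```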

     A further discussion of this procedure is given by Wilde (1). Our attention now turns to extending the use of these methods on an open interval in the next section.

Open Initial Interval

Many multivariable search techniques search on a line in space as they attempt to move from a starting point through better points to arrive at an optimum, as will be seen in the next chapter. This requires a single-variable search, where one end of the interval is known, and the other is to be found in order to have the optimum bounded in the interval.

     The approach is to attempt to place two experiments along the line of search to bound the optimum and, at the same time, have the experiments be in the correct positions to continue a Fibonacci or golden section search. The compromise is one of attempting to select an interval that is large enough to ensure that the optimum is bounded, but not so large that excessive effort is required to reduce the interval to an acceptable final value. Also, if the optimum is not bounded with two experiments, provision can be made to place a third experiment such that it is in the proper position to continue the search. The second experiment would then be used with the third as the two required by the search technique. Once the optimum is bounded, the previously described procedures are used to reduce the size of the interval to an acceptable value.

     In addition to illustrating the above procedure for bounding an optimum in an open interval, we want to illustrate how this procedure can be used in a multivariable search procedure. The gradient line developed in Chapter 2 will be used for this purpose.

     A line in space can be represented in vector notation by the following equation.

x = a + α(b - a)                                         (5-57)

where α is called the parameter of the line. The components of this vector equation can be written using the components of x and the known points a and b as:

x1 = a1 + α(b1 - a1)

x2 = a2 + α(b2 - a2)                                    (5-58)

...

xn = an + α(bn - an)

For the value of α = 0, x is equal to a, and for α = 1, x is equal to b. Values of α between zero and one generate values of x between a and b, values of α greater than one generate values of x beyond b, and values of α less than zero generate values of x beyond a.

     The gradient line was given by equation (2-32) and is rewritten here, using the plus sign to have the direction of steepest ascent, as:

x = xo + α∇y(xo)                                        (5-59)

In this case values of the parameter α greater than zero generate points x in the direction of steepest ascent.

     The problem of maximizing y(x), a multivariable function, is converted to maximizing y[xo + α∇y(xo)], a function of α only, along the line of steepest ascent starting at point xo. Consequently, there is an open interval along the gradient line beginning at xo, and it is necessary to bound the maximum by selecting two values of α. Then a Fibonacci or golden section search can be conducted on α to determine the location of the final interval and the values of y(x), as previously described. The following simple example illustrates the use of Fibonacci search in placing experiments on a line in space.

Example 5-5

The following is the equation of the direction of steepest ascent (gradient line) for a function through the point xo = (1, 1, 0, 2). It has been determined that the maximum lies between xo and the point x1 = (5, 2, 3, 7). Locate the positions of the first two experiments for a Fibonacci search having five experiments, and determine the length of the final interval of uncertainty on each coordinate axis. The resolution can be taken as zero. The equation for the gradient line is:

x = xo + α∇y(xo)

x1 = 1 + 4α

x2 = 1 + 1α

x3 = 0 + 3α

x4 = 2 + 5α

For this problem α = 1 generates point x1 and α = 0 gives the starting point xo. Consequently, a Fibonacci search is to be conducted on the interval 0 < α < 1 with n = 5. Equation (5-41) is used for the final interval on α:

I5 = Io /A6 = 1/8 = 0.125

Equations (5-43) and (5-44) are used to locate the first two experiments on α.

α2 = An In = 5/8 = 0.625

α1 = An-1 In = 3/8 = 0.375

Final interval of uncertainty:

variable     initial interval     length     final interval

α            0 < α < 1            1          0.125(1) = 0.125

x1           1 < x1 < 5           4          0.125(4) = 0.5

x2           1 < x2 < 2           1          0.125(1) = 0.125

x3           0 < x3 < 3           3          0.125(3) = 0.375

x4           2 < x4 < 7           5          0.125(5) = 0.625

Locations of the first two experiments are:

first experiment                          second experiment

x1 = 1 + 4(0.375) = 2.5               x1 = 1 + 4(0.625) = 3.5

x2 = 1 + 1(0.375) = 1.375           x2 = 1 + 1(0.625) = 1.625

x3 = 0 + 3(0.375) = 1.125           x3 = 0 + 3(0.625) = 1.875

x4 = 2 + 5(0.375) = 3.875           x4 = 2 + 5(0.625) = 5.125

Depending on the outcome of these experiments, sections of the interval are discarded, and the remaining experiments are placed symmetrically in the remaining parts of the intervals to obtain the final interval computed above for each variable. The length of the final interval of uncertainty is usually different for each variable, and one variable may ultimately determine the number of experiments required to have a sufficiently small final interval.
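     The experiment locations in this example follow directly from equations (5-58); a short numerical check in Python (array names are illustrative):

```python
import numpy as np

xo   = np.array([1.0, 1.0, 0.0, 2.0])    # starting point of Example 5-5
step = np.array([4.0, 1.0, 3.0, 5.0])    # direction x1 - xo along the gradient line

for alpha in (0.375, 0.625):             # Fibonacci placements a1 and a2
    print(alpha, xo + alpha * step)
# 0.375 [2.5   1.375 1.125 3.875]
# 0.625 [3.5   1.625 1.875 5.125]
```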

     Golden section search will be used to illustrate the procedure to bound the optimum on an open interval. In this case it is not necessary to specify the number of experiments, and a stopping criterion can be used based on the computed values of yn+1 and yn, e.g., y[xo + αn+1∇y(xo)] - y[xo + αn∇y(xo)] < s, where s is a specified small number.

     The first two experiments are located using equations (5-51) and (5-52), which are:

x2 = 0.618 Io                         (5-51)

x1 = 0.382 Io                         (5-52)

     The problem is one of specifying Io sufficiently large that y(x1) > y(x2), for maximizing, to have the maximum bounded between zero and x2. If this does not happen, then x2 will have to become x1, and a third experiment will have to be placed. The location of this experiment is given by the following equation, obtained using the two previous equations.

x3 = 0.618( x2 / 0.382 )          (5-60)

     This should result in y(x2) > y(x3), have the maximum bounded, and have x2 and x3 in the proper locations to continue the search on the interval x1 < x* < x3. Then golden section search can be conducted, placing experiments symmetrically in the interval until the stopping criterion is met.
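     A minimal Python sketch of this bounding procedure is given below. The renaming rule and the growth factor come from equations (5-51), (5-52), and (5-60); returning a bracketing triple ready for golden section search is an implementation assumption, and the names are illustrative:

```python
def bound_maximum(func, x2, max_tries=20):
    """Bound the maximum of func on the open interval starting at zero.
    x2 is the initial guess for the 0.618 point of equation (5-51)."""
    lo = 0.0
    x1 = 0.382 * (x2 / 0.618)          # companion 0.382 point, equation (5-52)
    y1, y2 = func(x1), func(x2)
    for _ in range(max_tries):
        if y1 > y2:                    # maximum bounded in (lo, x2), x1 interior
            return lo, x1, x2          # ready to continue golden section search
        lo, x1, y1 = x1, x2, y2        # not bounded: rename x2 to x1
        x2 = 0.618 * (x1 / 0.382)      # extend by equation (5-60)
        y2 = func(x2)
    raise RuntimeError("maximum not bounded; enlarge the initial guess x2")

# Example 5-6 companion point: 0.382 * (12.0 / 0.618) = 7.42
```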

     Wilde and Beightler (2) have an expanded discussion of this procedure, giving the generalization of equation (5-60) for h measurements using Fibonacci search. Their conclusion is "that underestimation of the reduction ratio needed costs twice as many extra measurements as overestimation by the same amounts." The following example illustrates the procedure for an unbounded function represented graphically.

Example 5-6

An unbounded function is shown in Figure 5-9. It will be used to illustrate the location of the first two, and four additional, experiments of golden section search. It is obvious from the function that one experiment must be greater than 9.0. This is information we would not normally have, but the problem remains to select a value of Io using equations (5-51) and (5-52) such that the function is bounded with two experiments. Having selected a value of Io and computed x1 and x2, the values of y(x1) and y(x2) can be readily evaluated using Figure 5-9. To simplify the procedure, x1 can be calculated directly from x2 by combining equations (5-51) and (5-52) as

x1 = 0.382( x2 / 0.618)

Arbitrarily selecting x2 = 12.0 and computing x1 using the above equation gives x1 = 7.42. The corresponding values of y are y(7.42) = 4.9 and y(12.0) = 3.5, and the maximum has been bounded. The remaining four experiments are placed symmetrically in the interval with the previous best value, as shown on the figure. The final interval is 8.52 < x* < 10.26, with the maximum value found, y(9.16) = 5.3, in this interval.

Other Methods

There are several nonminimax techniques that have been used successfully for single-variable optimization. They involve fitting a polynomial to the profit function and computing the stationary point of the polynomial iteratively until a satisfactory optimum has been obtained. The polynomial is usually quadratic, or at most cubic, and procedures can be incorporated to bound the optimum when the interval is open. These procedures will be reviewed briefly; further descriptions are given by Avriel (3), Beveridge and Schechter (6), and Himmelblau (7).

     To have a quadratic fit of the profit (or cost) function, the coefficients of the function are evaluated by making three measurements. The quadratic equation can be written as:

y = ax^2 + bx + c                                          (5-61)

The stationary point of the above equation is obtained by setting the first derivative equal to zero to give:

x* = -b /2a                                                   (5-62)

The values of a and b are determined by measurements at three points to determine y(x1), y(x2), and y(x3). Thus equation (5-61) can be written as:

y(x1) = ax1^2 + bx1 + c

y(x2) = ax2^2 + bx2 + c                                   (5-63)

y(x3) = ax3^2 + bx3 + c

which are three linear algebraic equations in a, b, and c. In matrix form they can be written as:

| x1^2   x1   1 | | a |   | y(x1) |
| x2^2   x2   1 | | b | = | y(x2) |                       (5-64)
| x3^2   x3   1 | | c |   | y(x3) |

This set of equations can be solved for a and b, and the results are substituted into equation (5-62) to compute x*, the stationary point of the quadratic approximation to the profit function. The result is:

x* = (1/2) [(x2^2 - x3^2) y(x1) + (x3^2 - x1^2) y(x2) + (x1^2 - x2^2) y(x3)]
          / [(x2 - x3) y(x1) + (x3 - x1) y(x2) + (x1 - x2) y(x3)]                 (5-65)

     The procedure involves selecting three initial points and computing the optimum of the quadratic approximation, x*, by equation (5-65). Typically, the procedure continues using x* as one of the next three points to repeat the application of equation (5-65); the point with the smallest value of y (for maximizing) is discarded. The procedure is repeated until a tolerance on the dependent variable is met, e.g., yn+1 - yn < s, where s is a specified small number. However, there is a possibility that the procedure can oscillate around the optimum. In this case logic has to be incorporated into the computer program that recognizes when yn+1 < yn (when maximizing) and takes corrective action. An algorithm that incorporates these procedures, called Powell's method, is described by Himmelblau (7). The technique is generally called the quadratic method (3).
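     A minimal Python sketch of the quadratic method follows, built on equation (5-65) and the discard rule described above. It omits the oscillation safeguard and is not Powell's method as published; all names are illustrative:

```python
def quadratic_step(f, x1, x2, x3):
    """Stationary point of the quadratic through x1, x2, x3, equation (5-65).
    Note: the denominator is zero if the three points are collinear."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den

def quadratic_search(f, x1, x2, x3, s=1e-6, max_iter=50):
    """Repeat equation (5-65), each pass keeping the three best points."""
    y_old = max(f(x1), f(x2), f(x3))
    for _ in range(max_iter):
        xs = quadratic_step(f, x1, x2, x3)
        if abs(f(xs) - y_old) < s:      # tolerance on the dependent variable
            return xs
        y_old = f(xs)
        # discard the point with the smallest y (for maximizing)
        x1, x2, x3 = sorted(sorted([x1, x2, x3, xs], key=f)[1:])
    return xs

# y = 3 + 6x - 4x^2: converges to the classical optimum x* = 0.75
print(quadratic_search(lambda x: 3 + 6*x - 4*x**2, 0.0, 0.5, 1.0))
```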

     A complementary procedure to be used with the quadratic method on an open interval to bound the optimum is also described by Himmelblau (7). It is called the DSC method and is a logical procedure that takes an initial step, Δx, from the starting point to evaluate the function. It continues the search using increments 2Δx, 4Δx, 8Δx, ..., until the optimum is bounded. These points provide the data for a second-order interpolation to estimate x*. However, Himmelblau (7) states that instead of using a second-order interpolation, it is better to shift to the quadratic method once the optimum has been bounded to have a more efficient algorithm.
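     The step-doubling idea can be sketched in a few lines of Python. The reversal logic for a wrong initial direction and the returned bracket are assumptions for illustration, not details of the DSC method as published:

```python
def step_doubling_bracket(f, x0, dx, max_doublings=30):
    """Bracket a maximum of f by stepping dx, 2*dx, 4*dx, ... from x0."""
    prev, f_prev = x0, f(x0)
    cur, f_cur = x0 + dx, f(x0 + dx)
    if f_cur < f_prev:                   # wrong way: reverse the step (assumption)
        dx = -dx
        cur, f_cur = x0 + dx, f(x0 + dx)
    left = x0
    for _ in range(max_doublings):
        if f_cur < f_prev:               # the function turned down: bounded
            return min(left, cur), max(left, cur)
        dx *= 2.0                        # double the increment
        left, prev, f_prev = prev, cur, f_cur
        cur, f_cur = cur + dx, f(cur + dx)
    raise RuntimeError("optimum not bounded")

# y = 3 + 6x - 4x^2 from x0 = 0 with dx = 0.1: returns (0.3, 1.5)
print(step_doubling_bracket(lambda x: 3 + 6*x - 4*x**2, 0.0, 0.1))
```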

     A simple example illustrating the use of these algorithms is given by Himmelblau (7) and Beightler, et al. (4). Also, Wilde and Beightler (2) give the equation for a cubic approximation to the profit function.

     It should be mentioned that the Armijo line search (11) has been used successfully with the multivariable search method successive quadratic programming (12). This line search does not have minimax properties, but it has been shown to converge to the minimum of a continuous function. Also, a sequence for the parameter of the gradient line has been shown to converge to the minimum of a continuous function. Further details are given in the article by Armijo (11) for these theorems and in the article by Biegler and Cuthrell (12) for applications with the successive quadratic programming algorithm in the optimization of process simulators.
