CLASSICAL THEORY OF MAXIMA AND MINIMA
Based on geometric intuition from the previous example, we can understand the famous Weierstrass theorem (11,12), which guarantees the existence of maxima and minima. It states: every function which is continuous in a closed domain possesses a maximum and a minimum value either in the interior or on the boundary of the domain. The proof is by contradiction. A continuous function of n variables attains a maximum or a minimum in the interior of a region only at those values of the variables for which the n partial derivatives either vanish simultaneously (stationary points) or at which one or more of these derivatives cease to exist (i.e., are discontinuous). The proof involves examining the Taylor series expansion at the points where the partial derivatives either vanish or cease to exist.
As we have seen, not all stationary points are local maxima or minima, since there is the possibility of saddle or inflection points. We therefore need procedures to determine whether a stationary point is a maximum or a minimum. These sufficient conditions will be developed for one independent variable first and then extended to two and n independent variables using the same concepts. Once the local maxima and minima are located, it is necessary to compare the individual points to locate the global maximum and minimum.
To develop criteria establishing whether a stationary point is a local maximum or minimum, we begin by performing a Taylor series expansion about the stationary point x_{o}:

y(x) = y(x_{o}) + y'(x_{o})(x - x_{o}) + ½y''(x_{o})(x - x_{o})^{2} + higher order terms

Now select x sufficiently close to x_{o} so the higher order terms become negligible compared to the second-order term. Since the first derivative is zero at the stationary point, the above equation becomes

y(x) = y(x_{o}) + ½y''(x_{o})(x - x_{o})^{2}     (2-1)

We can determine if x_{o} is a local maximum or minimum by examining the value of y''(x_{o}), since (x - x_{o})^{2} is always positive. If y''(x_{o}) is positive, then the term ½y''(x_{o})(x - x_{o})^{2} will always add to y(x_{o}) in equation (2-1) for x taking on values that are less than or greater than x_{o}. For this case y(x_{o}) is a local minimum. This is summarized in the following:

If y''(x_{o}) > 0, then y(x_{o}) is a minimum.
If y''(x_{o}) < 0, then y(x_{o}) is a maximum.
If y''(x_{o}) = 0, no statement can be made.

If the second derivative is zero, it is necessary to examine higher order derivatives. In general, if y''(x_{o}) = ... = y^{(n-1)}(x_{o}) = 0, the Taylor series expansion becomes

y(x) = y(x_{o}) + (1/n!)y^{(n)}(x_{o})(x - x_{o})^{n} + higher order terms
If n is even, then (x - x_{o})^{n} is always positive, and the result is:

If y^{(n)}(x_{o}) > 0, then y(x_{o}) is a minimum.
If y^{(n)}(x_{o}) < 0, then y(x_{o}) is a maximum.
In general, the point is or is not an extreme point according as the first nonvanishing derivative is of even or odd order. If it is even, there is a maximum or minimum according as the derivative is negative or positive.
Example 2-1

y(x) = x^{4}/4 - x^{2}/2
y'(x) = x^{3} - x = x(x^{2} - 1) = x(x - 1)(x + 1) = 0
Stationary points are x = 0, 1, -1
y''(x) = 3x^{2} - 1
y''(0) = -1   maximum
y''(1) = 2    minimum
y''(-1) = 2   minimum
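The classification in Example 2-1 can be checked symbolically. The following sketch uses sympy; the function and stationary points are those of the example, while the variable names and dictionary structure are my own.

```python
# Verify Example 2-1: classify the stationary points of y = x^4/4 - x^2/2
# by the sign of the second derivative.
import sympy as sp

x = sp.symbols('x')
y = x**4 / 4 - x**2 / 2

stationary = sp.solve(sp.diff(y, x), x)   # roots of y'(x) = x^3 - x: -1, 0, 1
y2 = sp.diff(y, x, 2)                     # y''(x) = 3x^2 - 1

classification = {
    pt: 'minimum' if y2.subs(x, pt) > 0
        else 'maximum' if y2.subs(x, pt) < 0
        else 'inconclusive'
    for pt in stationary
}
# classification -> {-1: 'minimum', 0: 'maximum', 1: 'minimum'}
```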
y(x) = x^{5}
y'(x) = 5x^{4} = 0, so the stationary point is x = 0
y''(x) = 20x^{3}, y''(0) = 0; no statement can be made, so examine higher derivatives:
y'''(x) = 60x^{2}, y'''(0) = 0
y^{(4)}(x) = 120x, y^{(4)}(0) = 0
y^{(5)}(x) = 120, y^{(5)}(0) = 120

Here n = 5 is odd, and the stationary point is an inflection point.
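The higher-order derivative test used above can be written as a small routine. This is a sketch; the function name and the cutoff order are my own choices, not from the text.

```python
# General higher-order derivative test: find the first nonvanishing
# derivative at a stationary point and classify by its order and sign.
import sympy as sp

def classify(y, x, x0, max_order=10):
    for n in range(2, max_order + 1):
        d = sp.diff(y, x, n).subs(x, x0)
        if d != 0:
            if n % 2 == 1:
                return 'inflection'   # odd order: not an extreme point
            return 'minimum' if d > 0 else 'maximum'
    return 'inconclusive'             # all derivatives up to max_order vanish

x = sp.symbols('x')
```

For the example above, classify(x**5, x, 0) stops at the fifth derivative (n = 5, odd) and reports an inflection point.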
To develop the criteria for a local maximum or minimum for a stationary point x_{o} = (x_{10}, x_{20}) of a function of two variables, a Taylor series expansion is made about this point:

y(x_{1}, x_{2}) = y(x_{10}, x_{20}) + y_{x1}(x_{1} - x_{10}) + y_{x2}(x_{2} - x_{20}) + ½[y_{x1x1}(x_{1} - x_{10})^{2} + 2y_{x1x2}(x_{1} - x_{10})(x_{2} - x_{20}) + y_{x2x2}(x_{2} - x_{20})^{2}] + higher order terms     (2-3)
where the subscripts x_{1} and x_{2} indicate partial differentiation with respect to those variables and evaluation at the stationary point. Again we select (x_{1}, x_{2}) sufficiently close to (x_{10}, x_{20}) so the higher order terms become negligible compared to the second order terms. Also, the first derivatives are zero at the stationary point. Thus equation (2-3) can be written in matrix-vector notation as

y(x) = y(x_{o}) + ½[(x - x_{o})^{T} H^{o} (x - x_{o})]     (2-5)
where H^{o} is the matrix of second partial derivatives evaluated at the stationary point x_{o} and is called the Hessian matrix. The term in the bracket of equation (2-5) is called a differential quadratic form, and y(x_{o}) will be a minimum or a maximum according as this term is always positive or always negative. Based on this concept, it can be shown (1) that x_{o} is a minimum if

y_{x1x1} > 0 and y_{x1x1}y_{x2x2} - (y_{x1x2})^{2} > 0

or a maximum if

y_{x1x1} < 0 and y_{x1x1}y_{x2x2} - (y_{x1x2})^{2} > 0

with the derivatives evaluated at x_{o}. If neither set of conditions holds, x_{o} could be a saddle point and is not a maximum or a minimum.
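The two-variable test amounts to checking the two principal-minor determinants of the 2 x 2 Hessian H^{o}. A minimal numerical sketch (the function name is mine, not the text's):

```python
# Classify a stationary point of a two-variable function from its
# Hessian matrix H^o, using the two principal-minor determinants.
import numpy as np

def classify_2x2(H):
    H = np.asarray(H, dtype=float)
    d1 = H[0, 0]               # D1: first principal minor, y_x1x1
    d2 = np.linalg.det(H)      # D2: determinant of H^o
    if d2 > 0:
        return 'minimum' if d1 > 0 else 'maximum'
    if d2 < 0:
        return 'saddle'
    return 'inconclusive'      # D2 = 0: higher-order analysis needed
```

For example, a Hessian of [[2, 0], [0, -3]] has a negative determinant, so the routine reports a saddle point.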
An illustration of the above results is given in Example 2-2. The term in the bracket of equation (2-5) is an example of a quadratic form. It will be necessary to describe a quadratic form briefly before giving the sufficient conditions for maxima and minima for n independent variables.
To perform a similar analysis for a function with more than two independent variables, it is necessary to determine what is called the sign of the quadratic form. The general quadratic form (1) is written as:

Q = Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij}x_{i}x_{j}

where a_{ij} are the components of a symmetric matrix A, i.e., a_{ij} = a_{ji}. It turns out (1) that we can determine if Q is always positive or negative, for all finite values of x_{i} and x_{j}, by evaluating the signs of D_{i}, the determinants of the principal submatrices of A. The important results that will be used subsequently are:

Q is positive definite (always positive) if D_{i} > 0 for i = 1, 2, ..., n.
Q is negative definite (always negative) if the D_{i} alternate in sign beginning with D_{1} < 0, i.e., (-1)^{i}D_{i} > 0 for i = 1, 2, ..., n.
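The principal-minor test generalizes directly to code. This sketch (the function name and return strings are my own) computes the determinants D_{i} of the leading principal submatrices and reads off the sign of Q:

```python
# Determine the sign of Q = x^T A x from the determinants D_i of the
# leading principal submatrices of the symmetric matrix A.
import numpy as np

def quadratic_form_sign(A):
    A = np.asarray(A, dtype=float)
    # D_i = det of the upper-left i x i submatrix, i = 1..n
    D = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
    if all(d > 0 for d in D):
        return 'positive definite'          # Q > 0 for all x != 0
    if all((-1) ** k * d > 0 for k, d in enumerate(D, 1)):
        return 'negative definite'          # Q < 0 for all x != 0
    return 'indefinite or semidefinite'     # sign of Q depends on x
```

Note that this test decides definiteness only; distinguishing semidefinite from indefinite cases requires examining all principal minors or the eigenvalues.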
The results of the previous two sections can be extended to the case of n independent variables by considering the Taylor series expansion for n independent variables around the stationary point x_{o}:
Again select x sufficiently close to x_{o}, so the higher order terms become negligible compared to the second order terms. Also, the first derivatives are zero at the stationary point. Thus, equation (2-8) can be written in matrix-vector notation as:
The second term on the right-hand side of equation (2-9) is called a differential quadratic form, as shown below.
We can now use the same procedure in evaluating the character of the stationary points for n independent variables. For example, if the term containing the Hessian matrix is always positive for perturbations of the independent variables around the stationary point, then the stationary point is a local minimum. For this differential quadratic form to be always positive, the determinants D_{i} of the principal submatrices of H^{o} must be positive for i = 1, 2, ..., n. The same reasoning can be applied for a local maximum, and the results for these two cases are summarized below:
The proof of this theorem employs arguments similar to those given above. The following example illustrates these methods.
Example 2-2
The flow diagram of a simple process is shown in Figure 2-2 (2), where the hydrocarbon feed is mixed with recycle and compressed before being passed into a catalytic reactor. The product and unreacted material are separated by distillation, and the unreacted material is recycled. The pressure, P, in psi and the recycle ratio, R, must be selected to minimize the total annual cost for the required production rate of 10^{7} pounds per year. The feed is brought up to pressure at an annual cost of $1000P, mixed with the recycle stream, and fed to the reactor at an annual cost of $4 x 10^{9}/PR. The product is removed in a separator at a cost of $10^{5}R per year, and the unreacted material is recycled in a recirculating compressor which consumes $1.5 x 10^{5}R annually. Determine the optimal operating pressure, recycle ratio, and total annual cost, and show that the cost is a minimum.
Solution: The equation giving the total annual cost is:

C(P, R) = 1000P + 4 x 10^{9}/PR + 10^{5}R + 1.5 x 10^{5}R = 1000P + 4 x 10^{9}/PR + 2.5 x 10^{5}R
Equating the partial derivatives of C with respect to P and R to zero gives two algebraic equations to be solved for P and R:

∂C/∂P = 1000 - 4 x 10^{9}/P^{2}R = 0
∂C/∂R = -4 x 10^{9}/PR^{2} + 2.5 x 10^{5} = 0

Solving these two equations simultaneously gives P = 1000 psi and R = 4.
Substituting to determine the corresponding total annual cost gives

C(1000, 4) = 1000(1000) + 4 x 10^{9}/(1000)(4) + 2.5 x 10^{5}(4) = $3 x 10^{6} per year
C(P, R) is a minimum if

∂^{2}C/∂P^{2} > 0 and (∂^{2}C/∂P^{2})(∂^{2}C/∂R^{2}) - (∂^{2}C/∂P∂R)^{2} > 0

Performing the appropriate partial differentiation and evaluating at the stationary point (P = 1000, R = 4) gives

∂^{2}C/∂P^{2} = 8 x 10^{9}/P^{3}R = 2
∂^{2}C/∂R^{2} = 8 x 10^{9}/PR^{3} = 1.25 x 10^{5}
∂^{2}C/∂P∂R = 4 x 10^{9}/P^{2}R^{2} = 250

so D_{1} = 2 > 0 and D_{2} = (2)(1.25 x 10^{5}) - (250)^{2} = 1.875 x 10^{5} > 0. Thus, the stationary point is a minimum since both determinants are positive.
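Example 2-2 can be reproduced end to end with sympy. This is a hedged check, not the text's own method: the cost expression follows the cost terms stated in the example, and the symbol names are mine.

```python
# Check Example 2-2: stationary point, minimum cost, and Hessian test.
import sympy as sp

P, R = sp.symbols('P R', positive=True)   # pressure (psi) and recycle ratio
C = 1000*P + 4*10**9 / (P*R) + 250000*R   # total annual cost, $/yr
                                          # (10^5 R + 1.5 x 10^5 R = 2.5 x 10^5 R)

# Set both partial derivatives to zero and solve for P and R.
sol = sp.solve([sp.diff(C, P), sp.diff(C, R)], [P, R], dict=True)[0]
cost = C.subs(sol)

# Hessian at the stationary point; both leading minors positive => minimum.
H = sp.hessian(C, (P, R)).subs(sol)
D1, D2 = H[0, 0], H.det()
# expected (matching the text): P = 1000, R = 4, cost = $3 x 10^6/yr
```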
