Bordered Hessian for Optimization

The bordered Hessian is a matrix method for optimizing an objective function f(x,y) of two variables (x and y here). The word optimization is used because in real life there are always limitations (constraints) we have to consider, and subject to those limitations we have to maximize the function (if it is an output or utility function) or minimize it (if it is a cost function).

So in this blog I build one mathematical derivation example of utility maximization from the consumption of two goods, x and y, under the constraint of the total income available (I) and the prices of these goods, p1 for good x and p2 for good y, with the general derivation and the solved example presented together.

As rational humans we want to get as much utility as possible from the consumption of x and y, but our resources are limited, so we try to maximize utility from those scarce resources.

Max x,y f(x,y)  subject to  g(x,y) = I

Utility (objective function) =   U = u(x,y) = 2xy

2xy means that the two goods are complements for us: if we do not buy any x (x = 0), the total utility becomes zero no matter how much y we have, and vice versa. The coefficient 2 simply scales utility, so each bundle gives twice the utility of the bare product x·y; for example u(1,1) = 2 and u(2,2) = 8. This specification of the utility function is imaginary, since utility cannot be measured in reality, and it is generally agreed that this function can be different for each individual.
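For a quick sanity check of these properties, here is a small Python snippet (purely illustrative, not part of the original derivation) that evaluates u(x,y) = 2xy at a few bundles:

def u(x, y):
    # illustrative utility function u(x, y) = 2xy from the example
    return 2 * x * y

print(u(0, 5))   # 0 -> no utility at all without good x (the goods are complements here)
print(u(1, 1))   # 2
print(u(2, 2))   # 8 -> doubling both goods quadruples utility with this specification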

Budget (constraint): g(x,y) = I, i.e. p1x + p2y = I; with the example values, 3x + 4y = 90

We also have the budget constraint, where nothing is imaginary: income can be measured, and the prices of good x and good y are observed in the market. Below, each step is shown first in general notation and then solved with the values of the example.

Lagrange function

General form (with notation):

L = U(x,y) + λ(I – g(x,y))

First order condition (FOC)

 

Lx = dL/dx = dU(x,y)/dx – λ[dg(x,y)/dx] = 0

Ly = dL/dy = dU(x,y)/dy – λ[dg(x,y)/dy] = 0

Lλ = dL/dλ = I – g(x,y) = 0

Solving these gives the values of x and y that optimize the Lagrange function. By optimization we mean that, within the limit of total income and the given prices, these are the quantities of good x and good y we should consume.

And the value of λ represents the shadow price; it tells how much the objective (utility) will increase if we increase the budget (resources) by one unit.

With the values of the example:

L = 2xy + λ(90 – 3x – 4y)

First order condition

Lx =  dL / dx = 2y  – 3λ = 0

Ly = dL / dy = 2x – 4λ = 0

Lλ = dL / dλ  = 90 – 3x – 4y = 0

From these three equations we have:

2y = 3λ  ;  λ = (2/3)y

2x = 4λ ; λ = (1/2)x

3x + 4y = 90

 

(2/3)y = (1/2)x

4y = 3x

y = (3/4)x

 

3x + 4 * (3/4) x = 90

6x = 90

x = 15 units

 

3(15) + 4y = 90

45 + 4y = 90

4y = 90-45

4y = 45

y= 11.25 units

λ = (1/2)(15) = 7.5

These are the values of x and y at which the Lagrange function is optimized.
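If you want to verify the algebra, here is a short SymPy sketch (assuming SymPy is installed; the variable names are mine) that solves the three first order conditions and also checks the shadow price interpretation of λ by raising income from 90 to 91:

import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
I, p1, p2 = 90, 3, 4                        # income and prices from the example
U = 2 * x * y                               # utility (objective) function
L = U + lam * (I - p1 * x - p2 * y)         # Lagrange function

# first order conditions: all partial derivatives of L set to zero
foc = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(foc, (x, y, lam), dict=True)[0]
print(sol)   # {x: 15, y: 45/4, lam: 15/2}, i.e. x = 15, y = 11.25, λ = 7.5

# shadow price check: one more unit of income should raise utility by roughly λ
L91 = U + lam * (91 - p1 * x - p2 * y)
sol91 = sp.solve([sp.diff(L91, v) for v in (x, y, lam)], (x, y, lam), dict=True)[0]
print(U.subs(sol91) - U.subs(sol))   # 181/24 ≈ 7.54, close to λ = 7.5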

But optimized does not yet mean that utility is at its maximum or minimum; it only tells us which units of x and y we can afford with this income and these prices, and a further check is needed to classify the point.

If this were a two-variable problem without a constraint, the second derivatives would settle the question; this second-derivative check in the unconstrained case is called the second order condition. With a constraint involved, the cross partials need to be considered as well. By cross partials we mean how an increase in the consumption of one good, relative to the other, affects our utility.

For the two-variable unconstrained case, the second order condition would be checked with the 2×2 Hessian matrix below.

|H| = | fxx  fxy |

            | fyx  fyy |

Regarding the determinant: |H| > 0 together with fxx > 0 means the point found in the FOC is a minimum, |H| > 0 together with fxx < 0 means it is a maximum, and |H| < 0 means it is a saddle point (neither a maximum nor a minimum).
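As a quick illustration of this unconstrained test (the function below is my own example, not from the derivation above), SymPy can build the 2×2 Hessian and check its determinant together with fxx:

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - x*y            # illustrative unconstrained objective
H = sp.hessian(f, (x, y))        # 2x2 matrix of second partials [[fxx, fxy], [fyx, fyy]]
print(H.det(), H[0, 0])          # 3 2: det > 0 and fxx > 0, so the stationary point (0, 0) is a minimum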

The bordered Hessian is just an expansion of the 2×2 Hessian determinant above to handle the constraint, and the following is the method to build and solve it.

Now Bordered Hessian Determinant

Consider the two-variable (x, y) model with one constraint. The bordered Hessian will then be of order 3×3, with all of its components being second derivatives of the Lagrange function defined above.

 

|H| =

Lλλ  Lλx  Lλy

Lxλ  Lxx  Lxy

Lyλ  Lyx  Lyy

As we know, in our Lagrange function λ (the multiplier on the constraint, i.e. the shadow price) appears only to the first power, so its second derivative with respect to λ is zero.

 Lλλ = 0

And from

L = U(x,y) + λ ( I – g(x,y))

We can see that

Lλx = -dg(x,y) / dx = -gx

and

Lλy = -dg(x,y) / dy = -gy

By differentiating in the other order we can confirm that

Lλx = Lxλ   and   Lλy = Lyλ

So by placing these values, the simplified bordered Hessian becomes

|H| =

0    -gx  -gy

-gx  Lxx  Lxy

-gy  Lyx  Lyy

= |B2|

Here Lxx is the second derivative of L with respect to x (the derivative of Lx w.r.t. x), and similarly for the other entries.

|B2| is the bordered Hessian determinant with two variables.
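As a check on this structure, here is a SymPy sketch (the setup is mine) that forms the bordered Hessian for a generic U(x,y) and g(x,y) by ordering λ first, and confirms the pattern of the border entries:

import sympy as sp

x, y, lam, I = sp.symbols('x y lam I')
U = sp.Function('U')(x, y)          # generic objective function
g = sp.Function('g')(x, y)          # generic constraint function
L = U + lam * (I - g)               # Lagrange function

# second partials of L taken in the order (lam, x, y) give the bordered Hessian
B2 = sp.hessian(L, (lam, x, y))
print(B2[0, 0])   # 0, i.e. L_λλ
print(B2[0, 1])   # -g_x (prints as -Derivative(g(x, y), x))
print(B2[1, 1])   # U_xx - lam*g_xx, i.e. L_xx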

Now utility will be maximized (the bordered Hessian is negative definite subject to the constraint) if

|B2| > 0

and minimized (positive definite subject to the constraint) if

|B2| < 0

Remember that this inference differs from the simple (unconstrained) Hessian determinant above: here a positive determinant indicates a maximum.

Now placing the values from the example above. From g(x,y) = 3x + 4y we get gx = 3 and gy = 4, and from Lx = 2y – 3λ and Ly = 2x – 4λ we get Lxx = 0, Lyy = 0 and Lxy = Lyx = 2:

|H| =

0     -gx   -gy

-gx  Lxx  Lxy

-gy  Lyx  Lyy

= |B2|

|H| =

0   -3   -4

-3    0    2

-4    2    0

= |B2|

Solving the determinant (expanding along the first row):

|B2| = –(–3)[(–3)(0) – (2)(–4)] + (–4)[(–3)(2) – (0)(–4)] = 3(8) + (–4)(–6) = 24 + 24 = 48 > 0
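The same number can be checked with a short SymPy sketch (again an illustration of mine, assuming SymPy is available) that builds the bordered Hessian of the example's Lagrange function by direct differentiation:

import sympy as sp

x, y, lam = sp.symbols('x y lam')
L = 2*x*y + lam*(90 - 3*x - 4*y)     # Lagrange function of the example
B2 = sp.hessian(L, (lam, x, y))      # bordered Hessian, with lam ordered first
print(B2)                            # Matrix([[0, -3, -4], [-3, 0, 2], [-4, 2, 0]])
print(B2.det())                      # 48 > 0, so the FOC bundle is a constrained maximum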

This |B2| > 0, being the second order condition, confirms that the units of x and y determined from the Lagrange function above, with our income and the market prices as our limitation, yield the maximum possible value of the objective function (utility in this case). The final verdict: in graphical analysis a consumer tries to reach the highest possible indifference curve that just touches the budget line. This numerical exercise is the same process in mathematics, with the objective function playing the role of the indifference curves and the constraint function the role of the budget line. The estimated values of x and y give us the coordinates where the budget line just touches the highest attainable indifference curve.


