Consider a general constrained optimization problem:

$$\min_{\mathbf{x}} f(\mathbf{x}),\qquad \text{subject to}\quad \mathbf{g}(\mathbf{x})=\mathbf{0},\quad \mathbf{x}\ge\mathbf{0} \tag{237}$$

where $\mathbf{g}(\mathbf{x})=[g_1(\mathbf{x}),\dots,g_m(\mathbf{x})]^T$ collects the $m$ equality constraints. The KKT conditions of this problem are:

$$\nabla f(\mathbf{x})+J_g^T(\mathbf{x})\boldsymbol{\lambda}-\boldsymbol{\mu}=\mathbf{0} \tag{238}$$

$$\mathbf{g}(\mathbf{x})=\mathbf{0},\qquad \mathbf{x}\ge\mathbf{0} \tag{239}$$

$$\boldsymbol{\mu}\ge\mathbf{0} \tag{240}$$

$$\mu_i x_i=0,\qquad (i=1,\dots,n) \tag{241}$$

To remove the inequality constraints $\mathbf{x}\ge\mathbf{0}$, we incorporate them into the objective function as a logarithmic barrier term weighted by a parameter $t>0$:

$$\min_{\mathbf{x}}\; f(\mathbf{x})-\frac{1}{t}\sum_{i=1}^n \ln x_i,\qquad \text{subject to}\quad \mathbf{g}(\mathbf{x})=\mathbf{0} \tag{242}$$

with the Lagrangian

$$L(\mathbf{x},\boldsymbol{\lambda})=f(\mathbf{x})-\frac{1}{t}\sum_{i=1}^n \ln x_i+\boldsymbol{\lambda}^T\mathbf{g}(\mathbf{x}) \tag{243}$$

Setting $\nabla_{\mathbf{x}}L=\mathbf{0}$ yields

$$\nabla f(\mathbf{x})-\frac{1}{t}X^{-1}\mathbf{1}+J_g^T(\mathbf{x})\boldsymbol{\lambda}=\mathbf{0} \tag{244}$$

where $X=\mathrm{diag}(x_1,\dots,x_n)$ and $\mathbf{1}=[1,\dots,1]^T$. We define

$$\boldsymbol{\mu}=\frac{1}{t}X^{-1}\mathbf{1}>\mathbf{0},\qquad \text{i.e.}\quad \mu_i=\frac{1}{t\,x_i} \tag{245}$$

Now the modified KKT conditions of the optimization problem can be written as:

$$\nabla f(\mathbf{x})+J_g^T(\mathbf{x})\boldsymbol{\lambda}-\boldsymbol{\mu}=\mathbf{0} \tag{246}$$

$$\mathbf{g}(\mathbf{x})=\mathbf{0} \tag{247}$$

$$\mu_i x_i=\frac{1}{t},\qquad (i=1,\dots,n) \tag{248}$$

i.e., in matrix form,

$$XM\mathbf{1}=\frac{1}{t}\mathbf{1} \tag{249}$$

where $M=\mathrm{diag}(\mu_1,\dots,\mu_n)$. Compared with the original KKT conditions:

- The non-negativity condition $\mathbf{x}\ge\mathbf{0}$ in the primal feasibility of the original KKT conditions is dropped;
- Consequently the inequality $\boldsymbol{\mu}\ge\mathbf{0}$ in the dual feasibility of the original KKT conditions is also dropped, as $\boldsymbol{\mu}=(1/t)X^{-1}\mathbf{1}>\mathbf{0}$ holds automatically;
- There is an extra term $1/t$ in the complementarity condition, which will vanish when $t\to\infty$.
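The effect of the relaxed complementarity condition can be seen on a minimal 1-D sketch (an illustration added here, not part of the original derivation): for the toy problem of minimizing $x$ subject to $x\ge 0$, the barrier objective is $x-(1/t)\ln x$, whose minimizer is $x(t)=1/t$, so the complementarity gap $\mu x$ equals exactly $1/t$ and shrinks to zero as $t$ grows:

```python
# Minimal 1-D illustration (assumed toy problem, not from the source):
# minimize x subject to x >= 0, via the barrier objective x - (1/t)*ln(x).
# Its derivative 1 - 1/(t*x) vanishes at x(t) = 1/t.
def barrier_minimizer(t):
    return 1.0 / t

for t in [1, 10, 100, 1000]:
    x = barrier_minimizer(t)
    mu = 1.0 / (t * x)       # multiplier from the modified KKT: mu = 1/(t*x)
    print(t, x, mu * x)      # complementarity gap mu*x = 1/t -> 0 as t grows
```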

The optimal solution that minimizes the objective function can be found by solving the simultaneous equations in the modified KKT conditions (with no inequalities) given above. To do so, we first express the equations in the KKT conditions above as a nonlinear equation system $F(\mathbf{w})=\mathbf{0}$ in the variable $\mathbf{w}=(\mathbf{x},\boldsymbol{\lambda},\boldsymbol{\mu})$:

$$F(\mathbf{w})=\begin{bmatrix}\nabla f(\mathbf{x})+J_g^T(\mathbf{x})\boldsymbol{\lambda}-\boldsymbol{\mu}\\ \mathbf{g}(\mathbf{x})\\ XM\mathbf{1}-\frac{1}{t}\mathbf{1}\end{bmatrix}=\mathbf{0} \tag{250}$$

and find its Jacobian matrix $J_F$ composed of the partial derivatives of the function $F$ with respect to each of the three variables $\mathbf{x}$, $\boldsymbol{\lambda}$, and $\boldsymbol{\mu}$:

$$J_F(\mathbf{w})=\frac{dF(\mathbf{w})}{d\mathbf{w}}=\left[\frac{\partial F}{\partial\mathbf{x}},\;\frac{\partial F}{\partial\boldsymbol{\lambda}},\;\frac{\partial F}{\partial\boldsymbol{\mu}}\right] \tag{251}$$

$$J_F(\mathbf{w})=\begin{bmatrix}\nabla^2 f(\mathbf{x})+\sum_{j=1}^m\lambda_j\nabla^2 g_j(\mathbf{x}) & J_g^T(\mathbf{x}) & -I\\ J_g(\mathbf{x}) & 0 & 0\\ M & 0 & X\end{bmatrix} \tag{252}$$

The upper-left block of $J_F$ is symmetric, as the Hessian matrices $\nabla^2 f(\mathbf{x})$ and $\nabla^2 g_j(\mathbf{x})$ are symmetric.

We can now use the Newton-Raphson method to find the solution of the nonlinear equation $F(\mathbf{w})=\mathbf{0}$ iteratively:

$$\Delta\mathbf{w}_n=-J_F^{-1}(\mathbf{w}_n)\,F(\mathbf{w}_n) \tag{253}$$

$$\mathbf{w}_{n+1}=\mathbf{w}_n+\alpha\,\Delta\mathbf{w}_n \tag{254}$$

where $0<\alpha\le 1$ is the step size.

To get the initial values for the iteration, we first find any point $\mathbf{x}_0$ inside the feasible region and then an initial $\boldsymbol{\mu}_0>\mathbf{0}$ (the code below uses $\boldsymbol{\mu}_0=\mathbf{x}_0/t$), based on which we further find $\boldsymbol{\lambda}_0$ by solving the first equation in the KKT conditions in Eq. (250).
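The whole iteration of Eqs. (250) through (254) can be sketched in a few lines. The following Python/NumPy snippet (a sketch, not the author's code; the data $\mathbf{c}$, $A$, $\mathbf{b}$ and all parameter values are illustrative assumptions) uses the simplest choice $f(\mathbf{x})=\mathbf{c}^T\mathbf{x}$ and $\mathbf{g}(\mathbf{x})=A\mathbf{x}-\mathbf{b}$, assembles $F(\mathbf{w})$ and $J_F$, takes damped Newton steps, and increases $t$ along the way:

```python
import numpy as np

# Assumed toy LP instance: minimize c@x subject to A@x = b, x >= 0,
# whose minimizer is the vertex x = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n, m = 2, 1

x = np.array([0.5, 0.5])          # x0: strictly feasible interior point
mu = np.ones(n)                   # mu0 > 0
lam = np.linalg.lstsq(A.T, mu - c, rcond=None)[0]  # lam0 from first KKT eq.
t = 1.0

for _ in range(200):
    t = min(t * 2.0, 1e8)         # increase the barrier parameter
    F = np.concatenate([c + A.T @ lam - mu,       # stationarity
                        A @ x - b,                # primal feasibility
                        x * mu - 1.0 / t])        # relaxed complementarity
    J = np.block([[np.zeros((n, n)), A.T,              -np.eye(n)],
                  [A,                np.zeros((m, m)), np.zeros((m, n))],
                  [np.diag(mu),      np.zeros((n, m)), np.diag(x)]])
    dw = np.linalg.solve(J, -F)   # Newton direction, Eq. (253)
    dx, dlam, dmu = dw[:n], dw[n:n+m], dw[n+m:]
    alpha = 1.0                   # damp the step so x and mu stay positive
    for v, dv in ((x, dx), (mu, dmu)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    x, lam, mu = x + alpha * dx, lam + alpha * dlam, mu + alpha * dmu

print(np.round(x, 4))             # approaches the vertex minimizing c@x
```

Note that a Newton step from a primal-feasible point preserves $A\mathbf{x}=\mathbf{b}$, so only the positivity of $\mathbf{x}$ and $\boldsymbol{\mu}$ needs to be guarded by the step-size damping.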

Actually the equation system above can be separated into two subsystems, which are easier to solve. Writing $H=\nabla^2 f(\mathbf{x})+\sum_j\lambda_j\nabla^2 g_j(\mathbf{x})$ for the upper-left block, the three rows of the Newton system $J_F\,\Delta\mathbf{w}=-F(\mathbf{w})$ are

$$H\Delta\mathbf{x}+J_g^T\Delta\boldsymbol{\lambda}-\Delta\boldsymbol{\mu}=-\left(\nabla f+J_g^T\boldsymbol{\lambda}-\boldsymbol{\mu}\right) \tag{255}$$

$$J_g\,\Delta\mathbf{x}=-\mathbf{g}(\mathbf{x}) \tag{256}$$

$$M\Delta\mathbf{x}+X\Delta\boldsymbol{\mu}=\frac{1}{t}\mathbf{1}-XM\mathbf{1} \tag{257}$$

Pre-multiplying $X^{-1}$ to the third equation:

$$X^{-1}M\Delta\mathbf{x}+\Delta\boldsymbol{\mu}=\frac{1}{t}X^{-1}\mathbf{1}-\boldsymbol{\mu} \tag{258}$$

Solving this for $\Delta\boldsymbol{\mu}$ and substituting into the first equation cancels $\boldsymbol{\mu}$ on the right-hand side:

$$\left(H+X^{-1}M\right)\Delta\mathbf{x}+J_g^T\Delta\boldsymbol{\lambda}=\frac{1}{t}X^{-1}\mathbf{1}-\nabla f-J_g^T\boldsymbol{\lambda} \tag{259}$$

which, together with the second equation, forms a symmetric $(n+m)$-dimensional subsystem in $\Delta\mathbf{x}$ and $\Delta\boldsymbol{\lambda}$ alone:

$$\begin{bmatrix}H+X^{-1}M & J_g^T\\ J_g & 0\end{bmatrix}\begin{bmatrix}\Delta\mathbf{x}\\ \Delta\boldsymbol{\lambda}\end{bmatrix}=\begin{bmatrix}\frac{1}{t}X^{-1}\mathbf{1}-\nabla f-J_g^T\boldsymbol{\lambda}\\ -\mathbf{g}(\mathbf{x})\end{bmatrix} \tag{260}$$

Once $\Delta\mathbf{x}$ is found, the second subsystem is just the back-substitution

$$\Delta\boldsymbol{\mu}=\frac{1}{t}X^{-1}\mathbf{1}-\boldsymbol{\mu}-X^{-1}M\Delta\mathbf{x} \tag{261}$$
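This separation can be checked numerically. The sketch below (all data are illustrative stand-ins: `H` for the Hessian block, `grad` for $\nabla f$, `res` for $\mathbf{g}(\mathbf{x})$) compares the Newton direction from the full $(2n+m)$-dimensional system with the one recovered from the reduced $(n+m)$ system plus back-substitution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
H = np.eye(n)                     # stand-in for the Hessian block (assumption)
A = rng.standard_normal((m, n))   # stand-in for the constraint Jacobian J_g
x = rng.uniform(0.5, 1.5, n)      # strictly positive primal point
mu = rng.uniform(0.5, 1.5, n)
lam = rng.standard_normal(m)
grad = rng.standard_normal(n)     # stand-in for grad f(x)
res = rng.standard_normal(m)      # stand-in for the residual g(x)
t = 10.0

# full (2n+m)-dimensional Newton system
F = np.concatenate([grad + A.T @ lam - mu, res, x * mu - 1.0 / t])
J = np.block([[H,           A.T,              -np.eye(n)],
              [A,           np.zeros((m, m)), np.zeros((m, n))],
              [np.diag(mu), np.zeros((n, m)), np.diag(x)]])
full = np.linalg.solve(J, -F)

# reduced (n+m) system obtained by eliminating dmu
K = np.block([[H + np.diag(mu / x), A.T],
              [A,                   np.zeros((m, m))]])
rhs = np.concatenate([1.0 / (t * x) - grad - A.T @ lam, -res])
red = np.linalg.solve(K, rhs)
dx, dlam = red[:n], red[n:]
dmu = 1.0 / (t * x) - mu - (mu / x) * dx   # back-substitution

print(np.allclose(full, np.concatenate([dx, dlam, dmu])))  # the two agree
```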

We now consider the two special cases of linear and quadratic programming:

**Linear programming:**
- $f(\mathbf{x})=\mathbf{c}^T\mathbf{x}$, so that $\nabla f(\mathbf{x})=\mathbf{c}$ and $\nabla^2 f(\mathbf{x})=0$;
- $\mathbf{g}(\mathbf{x})=A\mathbf{x}-\mathbf{b}$, so that $J_g(\mathbf{x})=A$ and $\nabla^2 g_j(\mathbf{x})=0$.

The nonlinear equation system and its Jacobian in Eqs. (250) and (252) then become

$$F(\mathbf{w})=\begin{bmatrix}\mathbf{c}+A^T\boldsymbol{\lambda}-\boldsymbol{\mu}\\ A\mathbf{x}-\mathbf{b}\\ XM\mathbf{1}-\frac{1}{t}\mathbf{1}\end{bmatrix}=\mathbf{0} \tag{263}$$

$$J_F(\mathbf{w})=\begin{bmatrix}0 & A^T & -I\\ A & 0 & 0\\ M & 0 & X\end{bmatrix} \tag{264}$$

The Matlab code for the interior point method for the LP problem is listed below:

```matlab
function x=InteriorPointLP(A,c,b,x)
    % given A, b, c of the LP problem and initial value of x,
    % find optimal solution x that minimizes c'*x
    [m n]=size(A);              % m constraints, n variables
    z1=zeros(n); z2=zeros(m);   % zero matrices in coefficient matrix
    z3=zeros(m,n);
    I=eye(n);                   % identity matrix in coefficient matrix
    y=ones(n,1);
    t=9;                        % initial value for parameter t
    alpha=0.3;                  % step size, small enough not to exceed boundary
    mu=x./t;
    lambda=pinv(A')*(mu-c);     % initial value of lambda
    w=[x; lambda; mu];          % initial values for all three variables
    B=[c+A'*lambda-mu; A*x-b; x.*mu-y/t];
    p=[x(1) x(2)];              % record trajectory of solution
    while norm(B)>10^(-7)
        t=t*9;                  % increase parameter t
        X=diag(x); M=diag(mu);
        C=[z1 A' -I; A z2 z3; M z3' X];         % Jacobian matrix J_F
        B=[c+A'*lambda-mu; A*x-b; x.*mu-y/t];   % function F(w)
        dw=-C\B;                % find search direction
        w=w+alpha*dw;           % step forward
        x=w(1:n); lambda=w(n+1:n+m); mu=w(n+m+1:length(w));
        p=[p; x(1), x(2)];
    end
    scatter(p(:,1),p(:,2));     % plot trajectory of solution
end
```

**Example:** A given linear programming problem shown below is first converted into the standard form:

We choose an initial value $\mathbf{x}_0$, and an initial parameter $t=9$, which is scaled up by a factor of $9$ in each iteration. We also used the full Newton step size of $\alpha=1$. After 8 iterations, the algorithm converged to the optimal solution $\mathbf{x}^*$ corresponding to the maximum function value $f(\mathbf{x}^*)$:

**Quadratic programming:**
- $f(\mathbf{x})=\frac{1}{2}\mathbf{x}^T Q\mathbf{x}+\mathbf{c}^T\mathbf{x}$, so that $\nabla f(\mathbf{x})=Q\mathbf{x}+\mathbf{c}$ and $\nabla^2 f(\mathbf{x})=Q$;
- $\mathbf{g}(\mathbf{x})=A\mathbf{x}-\mathbf{b}$, so that $J_g(\mathbf{x})=A$ and $\nabla^2 g_j(\mathbf{x})=0$.

$$F(\mathbf{w})=\begin{bmatrix}Q\mathbf{x}+\mathbf{c}+A^T\boldsymbol{\lambda}-\boldsymbol{\mu}\\ A\mathbf{x}-\mathbf{b}\\ XM\mathbf{1}-\frac{1}{t}\mathbf{1}\end{bmatrix}=\mathbf{0} \tag{266}$$

$$J_F(\mathbf{w})=\begin{bmatrix}Q & A^T & -I\\ A & 0 & 0\\ M & 0 & X\end{bmatrix} \tag{267}$$

The Matlab code for the interior point method for the QP problem is listed below:

```matlab
function [x mu lambda]=InteriorPointQP(Q,A,c,b,x)
    n=length(c);                % n variables
    m=length(b);                % m constraints
    z2=zeros(m); z3=zeros(m,n);
    I=eye(n);
    y=ones(n,1);
    t=9;                        % initial value for parameter t
    alpha=0.1;                  % step size, small enough not to exceed boundary
    mu=x./t;
    lambda=pinv(A')*(mu-c-Q*x); % initial value of lambda
    w=[x; lambda; mu];          % initial value
    B=[Q*x+c+A'*lambda-mu; A*x-b; x.*mu-y/t];
    while norm(B)>10^(-7)
        t=t*9;                  % increase parameter t
        X=diag(x); M=diag(mu);
        C=[Q A' -I; A z2 z3; M z3' X];              % Jacobian matrix J_F
        B=[Q*x+c+A'*lambda-mu; A*x-b; x.*mu-y/t];   % function F(w)
        dw=-C\B;                % find search direction
        w=w+alpha*dw;           % step forward
        x=w(1:n); lambda=w(n+1:n+m); mu=w(n+m+1:length(w));
    end
end
```
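As a cross-check of the routine's logic, here is a hypothetical Python/NumPy translation applied to a toy QP (the data `Q`, `c`, `A`, `b` and all parameter values are illustrative assumptions, not the example solved in the text):

```python
import numpy as np

# Assumed toy QP: minimize 0.5*x@Q@x + c@x subject to A@x = b, x >= 0.
# With Q = I, c = 0, and x1 + x2 = 1 the minimizer is x = (0.5, 0.5).
Q = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
n, m = 2, 1

x = np.array([0.2, 0.8])          # strictly feasible interior start
mu = np.ones(n)
lam = np.linalg.lstsq(A.T, mu - c - Q @ x, rcond=None)[0]
t = 1.0

for _ in range(200):
    t = min(t * 2.0, 1e8)         # increase the barrier parameter
    F = np.concatenate([Q @ x + c + A.T @ lam - mu,
                        A @ x - b,
                        x * mu - 1.0 / t])
    J = np.block([[Q,           A.T,              -np.eye(n)],
                  [A,           np.zeros((m, m)), np.zeros((m, n))],
                  [np.diag(mu), np.zeros((n, m)), np.diag(x)]])
    dw = np.linalg.solve(J, -F)   # Newton direction
    dx, dlam, dmu = dw[:n], dw[n:n+m], dw[n+m:]
    alpha = 1.0                   # damp the step so x and mu stay positive
    for v, dv in ((x, dx), (mu, dmu)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
    x, lam, mu = x + alpha * dx, lam + alpha * dlam, mu + alpha * dmu

print(np.round(x, 3))             # the minimizer of 0.5*||x||^2 on x1+x2=1
```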

**Example:**

A 2-D quadratic programming problem shown below is to minimize four different quadratic functions under the same linear constraints as the linear programming problem in the previous example. In general, the quadratic function is given in the matrix form: