
A multi-objective nonlinear integer programming problem


Picture of the efficient points


In [1], a small multi-objective non-convex MINLP problem is stated. In the notation used in the rest of this post, it boils down to the following (a restatement using the same functional forms as the base model further down):
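
\[\begin{align}\min\ & \color{darkred}f_1 = \color{darkblue}c_0 + \frac{\color{darkblue}c_1+\color{darkblue}c_2\cdot \color{darkred}x_2+\color{darkblue}c_5\cdot\color{darkred}x_2^2}{\color{darkred}x_1} + \color{darkblue}c_3\cdot \color{darkred}x_1 + \color{darkblue}c_4\cdot \color{darkred}x_2 \\ \min\ & \color{darkred}f_2 = \color{darkblue}d_1\cdot \color{darkred}x_1 + \color{darkblue}d_2\cdot \color{darkred}x_2 + \color{darkblue}d_3\cdot \color{darkred}x_1^2 + \color{darkblue}d_4\cdot \color{darkred}x_2^2 + \color{darkblue}d_5\cdot \color{darkred}x_1\cdot \color{darkred}x_2 \\ \min\ & \color{darkred}f_3 = -\color{darkblue}e_1\cdot \color{darkred}x_2 \\ & \color{darkred}x_1 + \color{darkred}x_2 \le \color{darkblue}a_3 \\ & \color{darkred}x_1 \in \{0,\dots,\color{darkblue}a_1\} \\ & \color{darkred}x_2 \in \{0,\dots,\color{darkblue}a_2\}\end{align}\]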


Let's see how one could approach a problem like this.

First of all, multi-objective problems don't have an obvious single optimal solution. Instead, we often generate (a subset of) the points that are non-dominated. These points form the efficient frontier. As this is a pure integer problem, the efficient frontier is not a line (or surface) but just a collection of points.

First, let's clean up the model a little bit. Objective 1 cannot be evaluated for \(\color{darkred}x_1=0\), so we introduce a lower bound of 1 on \(\color{darkred}x_1\). 

Here I want to demonstrate two very different approaches. The first is to just enumerate all feasible points \((\color{darkred}x_1,\color{darkred}x_2)\), evaluate the objective functions, and then filter out dominated solutions with a Pareto sorting algorithm. The second approach is to have an optimization model generate one Pareto-efficient point at a time, using a weighted objective function. After each solve, we add constraints so that we don't generate new points that are dominated, and we repeat until the model becomes infeasible.

The solution sets found by these two methods are identical.


Approach 1: No optimization


We start with a method that does not require optimization at all. The number of possible combinations of the \(x\) variables is small, so we can enumerate them all. Here is some Python code that just does that:


import pandas as pd
pd.set_option("display.max_rows", 10)

#
# just some values
#
c0 = 1.4; c1 = 1; c2 = 2; c3 = 3; c4 = 4; c5 = 5
d1 = 1; d2 = 2; d3 = 3; d4 = 4; d5 = 5
e1 = 1
a1 = 12; a2 = 8; a3 = 15

# list of lists
L = []

# evaluate the three objective functions for a given (x1,x2)
def evalf(x1,x2):
    f1 = c0 + (c1+c2*x2+c5*x2**2)/float(x1) + c3*x1 + c4*x2
    f2 = d1*x1 + d2*x2 + d3*x1**2 + d4*x2**2 + d5*x1*x2
    f3 = -e1*x2
    return f1,f2,f3

# enumerate all feasible (x1,x2) combinations
for x1 in range(1,a1+1):
    for x2 in range(0,min(a2,a3-x1)+1):
        f1,f2,f3 = evalf(x1,x2)
        L.append([x1,x2,f1,f2,f3])

# store in data frame
df = pd.DataFrame(L,columns=['x1','x2','f1','f2','f3'])
df


The results are just 93 records:

Feasible solutions


A sorting algorithm can be used to retain only the non-dominated solutions. I used the code from [2]:

from pareto import eps_sort   # epsilon-nondominated sorting code from [2]
pd.DataFrame(eps_sort([L], [2,3,4]), columns=['x1','x2','f1','f2','f3'])

This yielded:



Non-dominated solutions

This approach is very fast.
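
If you prefer not to depend on the code from [2], a brute-force filter is easy enough for a problem of this size. Here is a minimal sketch, assuming the list L (with the objectives in columns 2-4) from the enumeration code above:

def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly better
    # in at least one (all three objectives are minimized)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# keep only the rows whose objective part is not dominated by any other row
nondom = [r for r in L if not any(dominates(s[2:], r[2:]) for s in L if s is not r)]
pd.DataFrame(nondom, columns=['x1','x2','f1','f2','f3'])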


Approach 2: optimization model to generate non-dominated points


In this approach, an optimization model generates a single new non-dominated point. The idea is to solve this model in a loop until it becomes infeasible; at that point, we know that all points on the efficient frontier have been found.
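
Schematically, in Python-flavored pseudocode (solve_weighted_minlp is a hypothetical stand-in for the GAMS/Baron solve in the appendix):

found = []                        # objective vectors of the points found so far
while True:
    # minimize the weighted objective over the base model, plus cuts that
    # force an improvement over every point already in 'found'
    status, x, f = solve_weighted_minlp(cuts=found)   # hypothetical solver call
    if status != "optimal":       # infeasible: the frontier is complete
        break
    found.append(f)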

For this, we first develop a base model. It looks like:

Base model
\[\begin{align}\min&\sum_k \color{darkblue}w_k \cdot\color{darkred}f_k \\ & \color{darkred}f_1 = \color{darkblue}c_0 + (\color{darkblue}c_1+\color{darkblue}c_2\cdot \color{darkred}x_2+\color{darkblue}c_5\cdot\color{darkred}x_2^2)/\color{darkred}x_1 + \color{darkblue}c_3\cdot \color{darkred}x_1 + \color{darkblue}c_4\cdot \color{darkred}x_2 \\ & \color{darkred}f_2 = \color{darkblue}d_1\cdot \color{darkred}x_1 + \color{darkblue}d_2\cdot \color{darkred}x_2 + \color{darkblue}d_3\cdot \color{darkred}x_1^2 + \color{darkblue}d_4\cdot \color{darkred}x_2^2 + \color{darkblue}d_5\cdot \color{darkred}x_1\cdot \color{darkred}x_2 \\ & \color{darkred}f_3 = -\color{darkblue}e_1\cdot \color{darkred}x_2 \\ & \color{darkred}x_1 + \color{darkred}x_2 \le \color{darkblue}a_3 \\ & \color{darkred}x_1 \in \{1,\dots,\color{darkblue}a_1\} \\ & \color{darkred}x_2 \in \{0,\dots,\color{darkblue}a_2\}\end{align}\]

We want to find the box for the \(\color{darkred}f\)-space. We do this by solving the model with different values for the weights \(\color{darkblue}w_k\). The results are:

 
----     92 PARAMETER fbox  objective box

                lo          up

obj1         5.400     373.400
obj2         4.000     706.000
obj3        -8.000


----     95 PARAMETER M  big-M

obj1 368.000,    obj2 702.000,    obj3   8.000


From the box coordinates, we can calculate big-M values for the constraints I'll discuss next. (GAMS does not print zero values, so the empty cells in these listings are zeros.)
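
As a quick sanity check, the same box (and thus the big-M values) can be read off the enumeration of approach 1. A small sketch, assuming the dataframe df from the Python code above:

# min and max of each objective over all feasible points
fbox = df[["f1", "f2", "f3"]].agg(["min", "max"])
M = fbox.loc["max"] - fbox.loc["min"]
print(fbox)
print(M)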

The main ideas for tracing the frontier are:
  • A point found using positive weights \(\color{darkblue}w_k\) will be on the efficient frontier. That means: it will never be dominated by other feasible points.
  • I used weights \(\color{darkblue}w_k=1\).
  • We add constraints to make sure that a new point is both different from and not dominated by the earlier points. For this, we require that at least one of the objectives improves. We can write this as a set of constraints: \[\begin{align} &  \color{darkred}f_k \le \color{darkblue}{\mathit{points}}_{p,k} - \color{darkblue}\epsilon + (\color{darkblue}M_k + \color{darkblue}\epsilon) \cdot (1-\color{darkred}\delta_{p,k}) && \forall p,k\\ &\sum_k\color{darkred}\delta_{p,k} \ge 1 && \forall p \\ & \color{darkred}\delta_{p,k}\in \{0,1\}\end{align}\] Here \(\color{darkblue}{\mathit{points}}_{p,k}\) are the objective values of previously found points, and \(\color{darkblue}\epsilon\gt 0\) is a small tolerance that forces strictly improving function values; without it, we may find the same point again. I used \(\color{darkblue}\epsilon=0.01\). These constraints state what it means for a new point not to be dominated by any earlier point, which is a useful concept to know. A plain-Python version of this condition is shown right after this list.
  • The algorithm will stop when there are no more non-dominated points to be found.
  • I used Baron to solve the MINLP subproblems.
  • This approach can only be used for fairly small problems. Finding thousands of points would be too expensive.
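
Here is the improvement condition from the list above written out in plain Python (a sketch for intuition only; in the model it is enforced by the big-M constraints):

def acceptable(f_new, found, eps=0.01):
    # f_new is acceptable if, compared to every previously found point,
    # it improves at least one (minimized) objective by at least eps
    return all(any(fn <= fp - eps for fn, fp in zip(f_new, point))
               for point in found)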

The output looks like:


 
----    159 PARAMETER solutions  non-dominated solutions

                  x1          x2        obj1        obj2        obj3

point1         1.000                   5.400       4.000
point2         1.000       1.000      16.400      15.000      -1.000
point3         2.000       1.000      15.400      30.000      -1.000
point4         1.000       2.000      37.400      34.000      -2.000
point5         2.000       2.000      27.900      54.000      -2.000
point6         3.000       2.000      26.733      80.000      -2.000
point7         1.000       3.000      68.400      61.000      -3.000
point8         2.000       3.000      45.400      86.000      -3.000
point9         3.000       3.000      39.733     117.000      -3.000
point10        4.000       3.000      38.400     154.000      -3.000
point11        2.000       4.000      67.900     126.000      -4.000
point12        1.000       4.000     109.400      96.000      -4.000
point13        3.000       4.000      56.067     162.000      -4.000
point14        4.000       4.000      51.650     204.000      -4.000
point15        2.000       5.000      95.400     174.000      -5.000
point16        3.000       5.000      75.733     215.000      -5.000
point17        1.000       5.000     160.400     139.000      -5.000
point18        5.000       4.000      50.200     252.000      -4.000
point19        4.000       5.000      67.400     262.000      -5.000
point20        2.000       6.000     127.900     230.000      -6.000
point21        3.000       6.000      98.733     276.000      -6.000
point22        5.000       5.000      63.600     315.000      -5.000
point23        1.000       6.000     221.400     190.000      -6.000
point24        4.000       6.000      85.650     328.000      -6.000
point25        6.000       5.000      62.067     374.000      -5.000
point26        2.000       7.000     165.400     294.000      -7.000
point27        5.000       6.000      79.000     386.000      -6.000
point28        3.000       7.000     125.067     345.000      -7.000
point29        7.000       5.000      61.829     439.000      -5.000
point30        4.000       7.000     106.400     402.000      -7.000
point31        6.000       6.000      75.567     450.000      -6.000
point32        1.000       7.000     292.400     249.000      -7.000
point33        5.000       7.000      96.400     465.000      -7.000
point34        2.000       8.000     207.900     366.000      -8.000
point35        3.000       8.000     154.733     422.000      -8.000
point36        7.000       6.000      73.971     520.000      -6.000
point37        4.000       8.000     129.650     484.000      -8.000
point38        6.000       7.000      90.733     534.000      -7.000
point39        5.000       8.000     115.800     552.000      -8.000
point40        8.000       6.000      73.525     596.000      -6.000
point41        1.000       8.000     373.400     316.000      -8.000
point42        7.000       7.000      87.543     609.000      -7.000
point43        6.000       8.000     107.567     626.000      -8.000
point44        8.000       7.000      85.900     690.000      -7.000
point45        7.000       8.000     102.543     706.000      -8.000


We see that we can reproduce the results of the first approach. It always feels good when solutions are reproducible, especially when they are produced in very different ways.

Of course, many variants of this model are possible. For instance, we can prevent repeating points explicitly with no-good cuts, as sketched below.
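
For example, a no-good cut that excludes an already found integer point \((\color{darkblue}x_1^*,\color{darkblue}x_2^*)\) could look like the following sketch, where \(\color{darkblue}\mu_i\) is a big-M constant derived from the bounds on \(\color{darkred}x_i\) (the bound range plus one is large enough): \[\begin{align} & \color{darkred}x_i \le \color{darkblue}x_i^* - 1 + \color{darkblue}\mu_i\cdot(1-\color{darkred}\delta^{\mathit{lo}}_i) && \forall i \\ & \color{darkred}x_i \ge \color{darkblue}x_i^* + 1 - \color{darkblue}\mu_i\cdot(1-\color{darkred}\delta^{\mathit{up}}_i) && \forall i \\ & \sum_i \left(\color{darkred}\delta^{\mathit{lo}}_i+\color{darkred}\delta^{\mathit{up}}_i\right) \ge 1 \\ & \color{darkred}\delta^{\mathit{lo}}_i,\color{darkred}\delta^{\mathit{up}}_i \in \{0,1\}\end{align}\] At least one of the binary variables must be turned on, so the new point must differ from \((\color{darkblue}x_1^*,\color{darkblue}x_2^*)\) by at least one unit in at least one coordinate.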

References

  1. How to linearize a non-convex optimization objective function?, https://or.stackexchange.com/questions/7013/how-to-linearize-a-non-convex-optimization-objective-function
  2. pareto.py, epsilon-nondominated sorting for multi-objective problems, https://github.com/matthewjwoodruff/pareto.py

Appendix: GAMS model


$ontext

   https://or.stackexchange.com/questions/7013/how-to-linearize-a-non-convex-optimization-objective-function

   find efficient frontier point by point

$offtext

*------------------------------------------------------------------------------
* data
*------------------------------------------------------------------------------

scalars
   c0 / 1.4 /
   c1 / 1 /
   c2 / 2 /
   c3 / 3 /
   c4 / 4 /
   c5 / 5 /
   d1 / 1 /
   d2 / 2 /
   d3 / 3 /
   d4 / 4 /
   d5 / 5 /
   e1 / 1 /
   a1 / 12 /
   a2 / 8 /
   a3 / 15 /
;

set dummy /x1,x2/;

*------------------------------------------------------------------------------
* base model
*------------------------------------------------------------------------------

integer variables x1,x2;

set k /obj1*obj3/;
alias (k,k2);

parameter w(k) 'weights';

variables
   f(k)  'objectives'
   f0    'weighted sum'
;

equations
   obj1
   obj2
   obj3
   xsum
   fsum
;

x1.lo = 1;
x1.up = a1;
x2.up = a2;

obj1..  f('obj1') =e= c0 + (c1+c2*x2+c5*sqr(x2))/x1 + c3*x1 + c4*x2;
obj2..  f('obj2') =e= d1*x1 + d2*x2 + d3*sqr(x1) + d4*sqr(x2) + d5*x1*x2;
obj3..  f('obj3') =e= -e1*x2;

xsum.. x1+x2 =l= a3;

fsum.. f0 =e= sum(k2,w(k2)*f(k2));

option optcr=0, minlp=baron;

model base /all/;

*------------------------------------------------------------------------------
* determine fbox: min and max f(k)
*------------------------------------------------------------------------------
base.solprint = %solprint.Silent%;

parameters
   fbox(k,*) 'objective box'
   M(k)      'big-M'
;

loop(k,
   w(k) = 1;
   solve base minimizing f0 using minlp;
   fbox(k,'lo') = f.l(k);
   solve base maximizing f0 using minlp;
   fbox(k,'up') = f.l(k);
   w(k) = 0;
);
display fbox;

M(k) = fbox(k,'up')-fbox(k,'lo');
display M;

*------------------------------------------------------------------------------
* non-dominated constraints
*------------------------------------------------------------------------------
sets
   p     'points' /point1*point1000/
   pf(p) 'found points'
;
pf(p) = no;

parameter solutions(p,*) 'non-dominated solutions';
solutions(p,k) = 0;

binary variable
   df(p,k) 'non-dominated check'
;

equations
   fbetter1(k,p) 'obj(k) is better compared to point p'
   fbetter2(p)
;

fbetter1(k,pf).. f(k) =l= solutions(pf,k) - 0.01 + (M(k)+0.01)*(1-df(pf,k));
fbetter2(pf)..   sum(k,df(pf,k)) =g= 1;

model ndom /all/;

*------------------------------------------------------------------------------
* big loop
*------------------------------------------------------------------------------

ndom.solprint = %solprint.Silent%;
w(k) = 1;

loop(p,

   solve ndom minimizing f0 using minlp;
   break$(ndom.modelstat <> %modelStat.optimal%);

   solutions(p,'x1') = x1.l;
   solutions(p,'x2') = x2.l;
   solutions(p,k) = f.l(k);

   pf(p) = yes;

);

display solutions;

parameter sums(*) 'checksum';
set c /x1,x2,obj1,obj2,obj3/;
sums(c) = sum(pf,solutions(pf,c));
display sums;

