In [1] a small blending problem is proposed.
There are different raw steel materials that have to be mixed (blended) into a final material that has certain specifications. In this case the specifications are limits on the elements Carbon (C), Copper (Cu) and Manganese (Mn). We assume things blend linearly.
The problem is small so let's try PuLP here.
Problem data
The data for the problem is as follows:
Demand: 5000 kg
Specification of final material:
| Element | %Min | %Max |
|---|---|---|
| Carbon | 2 | 3 |
| Copper | 0.4 | 0.6 |
| Manganese | 1.2 | 1.65 |
Raw material inventory:
| Alloy | C% | Cu% | Mn% | Stock (kg) | Price (€/kg) |
|---|---|---|---|---|---|
| Iron alloy | 2.50 | 0.00 | 1.30 | 4000 | 1.20 |
| Iron alloy | 3.00 | 0.00 | 0.80 | 3000 | 1.50 |
| Iron alloy | 0.00 | 0.30 | 0.00 | 6000 | 0.90 |
| Copper alloy | 0.00 | 90.00 | 0.00 | 5000 | 1.30 |
| Copper alloy | 0.00 | 96.00 | 4.00 | 2000 | 1.45 |
| Aluminum alloy | 0.00 | 0.40 | 1.20 | 3000 | 1.20 |
| Aluminum alloy | 0.00 | 0.60 | 0.00 | 2500 | 1.00 |
Mathematical Model
The basic model is:
| Blending Model |
|---|
| \[\begin{align}\min & \sum_i \color{darkblue}{\mathit{Cost}}_i\cdot \color{darkred}{\mathit{Use}}_i\\ & \color{darkblue}{\mathit{Min}}_j \le \frac{\displaystyle\sum_i \color{darkblue}{\mathit{Element}}_{i,j}\cdot \color{darkred}{\mathit{Use}}_i}{\displaystyle \sum_i \color{darkred}{\mathit{Use}}_i} \le \color{darkblue}{\mathit{Max}}_j \\ & \sum_i \color{darkred}{\mathit{Use}}_i = \color{darkblue}{\mathit{Demand}} \\ & 0 \le \color{darkred}{\mathit{Use}}_i \le \color{darkblue}{\mathit{Available}}_i \end{align} \] |
The blending constraint is nonlinear: we divide by the total weight of the final product to calculate the percentages. We can linearize this fraction in two ways:
- multiply all sides by \(\sum_i \mathit{Use}_i\). This leads to \[\mathit{Min}_j \cdot\sum_i \mathit{Use}_i \le \sum_i \mathit{Element}_{i,j}\cdot\mathit{Use}_i \le \mathit{Max}_j \sum_i \mathit{Use}_i\]
- or by observing that \(\sum_i \mathit{Use}_i\) is constant: it is always equal to \(\mathit{Demand}\). I.e. \[ \mathit{Min}_j \le \frac{1}{\mathit{Demand}} \sum_i \mathit{Element}_{i,j}\cdot\mathit{Use}_i \le \mathit{Max}_j\]
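As a quick sanity check (with made-up usage levels and %C values, not the model's actual data), the second linearization agrees with the fractional form exactly when the total use equals the demand:

```python
# Hypothetical usage levels that sum to the demand, plus hypothetical %C values.
use = [4000.0, 400.0, 600.0]
element_c = [2.5, 0.0, 1.2]
demand = sum(use)                   # total use equals demand by the demand constraint

weighted = sum(e * u for e, u in zip(element_c, use))
fractional = weighted / sum(use)    # original nonlinear form
linearized = weighted / demand      # divide by the constant Demand instead

assert abs(fractional - linearized) < 1e-12
```

The first linearization (multiplying through by \(\sum_i \mathit{Use}_i\)) is the same statement after clearing the denominator.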
We often need to split sandwich equations into two simple inequalities, which leads to duplicate expressions: \[\begin{align} & \frac{1}{\mathit{Demand}}\sum_i \mathit{Element}_{i,j}\cdot\mathit{Use}_i \ge \mathit{Min}_j \\ & \frac{1}{\mathit{Demand}}\sum_i \mathit{Element}_{i,j} \cdot\mathit{Use}_i \le \mathit{Max}_j \end{align}\] For small problems this is not an issue. For larger problems, I prefer to introduce extra variables that prevent these duplicate expressions. We end up with the following linear programming formulation:
| Linear Programming Formulation |
|---|
| \[\begin{align}\min & \sum_i \color{darkblue}{\mathit{Cost}}_i \cdot\color{darkred}{\mathit{Use}}_i\\ & \color{darkblue}{\mathit{Demand}} \cdot \color{darkred}{\mathit{Content}}_j = \sum_i \color{darkblue}{\mathit{Element}}_{i,j} \cdot\color{darkred}{\mathit{Use}}_i \\ & \sum_i \color{darkred}{\mathit{Use}}_i = \color{darkblue}{\mathit{Demand}} \\ & \color{darkred}{\mathit{Use}}_i \in [0, \color{darkblue}{\mathit{Available}}_i]\\ & \color{darkred}{\mathit{Content}}_j \in [\color{darkblue}{\mathit{Min}}_j,\color{darkblue}{\mathit{Max}}_j] \end{align} \] |
Even for small, almost trivial models, it makes sense to first develop a mathematical model, especially if you are not very experienced in developing linear programming models. Starting with a pen and a piece of paper is sometimes better than immediately starting to code.
Implementation in Python/PuLP
An implementation using PuLP can look like:
from io import StringIO
import pandas as pd
import pulp as lp

# for inputting tabular data below
def table(s):
    return pd.read_csv(StringIO(s), sep=r'\s+', index_col='ID')

#------------------------------------------------------------------
# data
#------------------------------------------------------------------
demand = 5000

requirements = table("""
ID  Element    Min  Max
C   Carbon     2    3
Cu  Copper     0.4  0.6
Mn  Manganese  1.2  1.65
""")

supplyData = table("""
ID  Alloy             C     Cu     Mn    Stock  Price
A   "Iron alloy"      2.50  0.00   1.30  4000   1.20
B   "Iron alloy"      3.00  0.00   0.80  3000   1.50
C   "Iron alloy"      0.00  0.30   0.00  6000   0.90
D   "Copper alloy"    0.00  90.00  0.00  5000   1.30
E   "Copper alloy"    0.00  96.00  4.00  2000   1.45
F   "Aluminum alloy"  0.00  0.40   1.20  3000   1.20
G   "Aluminum alloy"  0.00  0.60   0.00  2500   1.00
""")

print("----- Data-------")
print(requirements)
print(supplyData)

#------------------------------------------------------------------
# derived data
#------------------------------------------------------------------
# our sets are stock items ["A","B",...] and elements ["C","Cu",...]
Items = supplyData.index
Elements = requirements.index
print("----- Indices-------")
print(Items)
print(Elements)

#------------------------------------------------------------------
# LP Model
#------------------------------------------------------------------
use = lp.LpVariable.dicts("Use", Items, 0, None, cat='Continuous')
content = lp.LpVariable.dicts("Content", Elements, 0, None, cat='Continuous')

model = lp.LpProblem("Steel", lp.LpMinimize)
# objective: minimize cost
model += lp.lpSum([use[i]*supplyData.loc[i,'Price'] for i in Items])
# upper bounds w.r.t. availability
for i in Items:
    model += use[i] <= supplyData.loc[i,'Stock']
# final content of elements and their bounds
for j in Elements:
    model += demand*content[j] == lp.lpSum([use[i]*supplyData.loc[i,j] for i in Items])
    model += content[j] >= requirements.loc[j,'Min']
    model += content[j] <= requirements.loc[j,'Max']
# meet demand
model += lp.lpSum([use[i] for i in Items]) == demand
# for debugging
# print(model)

#------------------------------------------------------------------
# Solve and reporting
#------------------------------------------------------------------
model.solve()
print("----- Model Results-------")
print("Status:", lp.LpStatus[model.status])
print("Objective:", lp.value(model.objective))

# collect results
L = []
for i in Items:
    L.append(['use', i, 0.0, use[i].varValue, supplyData.loc[i,'Stock']])
for j in Elements:
    L.append(['content', j, requirements.loc[j,'Min'], content[j].varValue, requirements.loc[j,'Max']])
results = pd.DataFrame(L, columns=['Variable','Index','Lower','Value','Upper'])
print(results)
Notes:
- We input the basic data as data frames. Data frames are a standard way to handle tabular data. Data frames are originally from the R statistical software system.
- Usually read_csv is for CSV files. Here we use it to read from a string. Blanks are used as separator to make the table more readable for humans.
- For each data frame we added an index column. This index will allow us to select a row from the data frame. Note that the index is a string. In general using strings as index is safer than using an index number. We see much earlier that things are wrong when making a mistake like using \(j\) (element) instead of \(i\) (raw material).
- Python Pandas allows duplicate indices. We can check for this using the duplicated() function.
- Because we access the data by name, it would not matter if the rows or columns are in a different position. This is more like a database table, where we assume no particular ordering.
- We also use a data frame for reporting. Data frames are printed in a nicer way than Python arrays, and they can be exported to CSV files or spreadsheets with one function call.
- The variables are also indexed by names. This is accomplished by lp.LpVariable.dicts(). This is safer than using a standard array of variables.
- AFAIK, PuLP can only handle scalar bounds in the LpVariable statement. This means we have a number of bounds specified as singleton constraints.
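The duplicate-index check mentioned above can look like this (a hypothetical frame with a deliberately repeated ID, just for illustration):

```python
import pandas as pd

# Hypothetical data frame where ID 'A' appears twice.
df = pd.DataFrame({'ID': ['A', 'B', 'A'],
                   'Stock': [4000, 3000, 6000]}).set_index('ID')

dup_mask = df.index.duplicated()    # marks repeats after the first occurrence
print(list(df.index[dup_mask]))     # the duplicated IDs
```

In a real model we would raise an error when `dup_mask.any()` is true, before building any constraints.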
The results look like:
----- Data-------
      Element  Min   Max
ID
C      Carbon  2.0  3.00
Cu     Copper  0.4  0.60
Mn  Manganese  1.2  1.65
             Alloy    C    Cu   Mn  Stock  Price
ID
A       Iron alloy  2.5   0.0  1.3   4000   1.20
B       Iron alloy  3.0   0.0  0.8   3000   1.50
C       Iron alloy  0.0   0.3  0.0   6000   0.90
D     Copper alloy  0.0  90.0  0.0   5000   1.30
E     Copper alloy  0.0  96.0  4.0   2000   1.45
F   Aluminum alloy  0.0   0.4  1.2   3000   1.20
G   Aluminum alloy  0.0   0.6  0.0   2500   1.00
----- Indices-------
Index(['A', 'B', 'C', 'D', 'E', 'F', 'G'], dtype='object', name='ID')
Index(['C', 'Cu', 'Mn'], dtype='object', name='ID')
----- Model Results-------
Status: Optimal
Objective: 5887.57427835
  Variable Index  Lower        Value    Upper
0      use     A    0.0  4000.000000  4000.00
1      use     B    0.0     0.000000  3000.00
2      use     C    0.0   397.763020  6000.00
3      use     D    0.0     0.000000  5000.00
4      use     E    0.0    27.612723  2000.00
5      use     F    0.0   574.624260  3000.00
6      use     G    0.0     0.000000  2500.00
7  content     C    2.0     2.000000     3.00
8  content    Cu    0.4     0.600000     0.60
9  content    Mn    1.2     1.200000     1.65
Safety
We will get an error if we misspell things. E.g., if we use Mgn instead of Mn in the second table, we will see:
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2896             try:
-> 2897                 return self._engine.get_loc(key)
   2898             except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Mn'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
9 frames
<ipython-input-3-d87ed8f34980> in <module>()
     59 # final content of elements and their bounds
     60 for j in Elements:
---> 61     model += demand*content[j] == lp.lpSum([use[i]*supplyData.loc[i,j] for i in Items])
     62     model += content[j] >= requirements.loc[j,'Min']
     63     model += content[j] <= requirements.loc[j,'Max']
<ipython-input-3-d87ed8f34980> in <listcomp>(.0)
     59 # final content of elements and their bounds
     60 for j in Elements:
---> 61     model += demand*content[j] == lp.lpSum([use[i]*supplyData.loc[i,j] for i in Items])
     62     model += content[j] >= requirements.loc[j,'Min']
     63     model += content[j] <= requirements.loc[j,'Max']
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in __getitem__(self, key)
   1416             except (KeyError, IndexError, AttributeError):
   1417                 pass
-> 1418             return self._getitem_tuple(key)
   1419         else:
   1420             # we by definition only have the 0th axis
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _getitem_tuple(self, tup)
    803     def _getitem_tuple(self, tup):
    804         try:
--> 805             return self._getitem_lowerdim(tup)
    806         except IndexingError:
    807             pass
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup)
    959                 return section
    960             # This is an elided recursive call to iloc/loc/etc'
--> 961             return getattr(section, self.name)[new_key]
    962
    963         raise IndexingError("not applicable")
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in __getitem__(self, key)
   1422
   1423         maybe_callable = com.apply_if_callable(key, self.obj)
-> 1424         return self._getitem_axis(maybe_callable, axis=axis)
   1425
   1426     def _is_scalar_access(self, key: Tuple):
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
   1848         # fall thru to straight lookup
   1849         self._validate_key(key, axis)
-> 1850         return self._get_label(key, axis=axis)
   1851
   1852
/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _get_label(self, label, axis)
    154         # but will fail when the index is not present
    155         # see GH5667
--> 156         return self.obj._xs(label, axis=axis)
    157     elif isinstance(label, tuple) and isinstance(label[axis], slice):
    158         raise IndexingError("no slices here, handle elsewhere")
/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in xs(self, key, axis, level, drop_level)
   3735             loc, new_index = self.index.get_loc_level(key, drop_level=drop_level)
   3736         else:
-> 3737             loc = self.index.get_loc(key)
   3738
   3739         if isinstance(loc, np.ndarray):
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   2897                 return self._engine.get_loc(key)
   2898             except KeyError:
-> 2899                 return self._engine.get_loc(self._maybe_cast_indexer(key))
   2900         indexer = self.get_indexer([key], method=method, tolerance=tolerance)
   2901         if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Mn'
PuLP is not giving a very well-formed error message here (not sure if PuLP could do better -- Python raises this before PuLP sees what is happening). But at least we are alerted (rather heavy-handedly) that there is something wrong with the column Mn. Careful inspection of the stack trace shows we have a problem in the constraint
model += demand*content[j] == lp.lpSum([use[i]*supplyData.loc[i,j] for i in Items])
This error is actually generating an exception inside an exception handler!
Of course a much better and simpler error message would be: "supplyData.loc["A","Mn"]: Column "Mn" not found in data frame supplyData." IMHO programmers do not pay enough attention to providing meaningful error messages.
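We can produce such a message ourselves with a small pre-flight check before building constraints. This is a sketch: check_columns is a hypothetical helper, not part of PuLP or pandas:

```python
import pandas as pd

def check_columns(df, name, required):
    """Raise a readable error if a data frame is missing expected columns."""
    missing = [c for c in required if c not in df.columns]
    if missing:
        raise KeyError(f"{name}: column(s) {missing} not found; "
                       f"available columns: {list(df.columns)}")

# Reproduce the misspelling: Mgn instead of Mn.
supplyData = pd.DataFrame({'C': [2.5], 'Cu': [0.0], 'Mgn': [1.3]}, index=['A'])
try:
    check_columns(supplyData, "supplyData", ['C', 'Cu', 'Mn'])
except KeyError as e:
    msg = str(e)
print(msg)
```

Calling this once per data frame, right after reading the data, fails fast with a message that names the frame and the missing column.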
Solver log
The default solver is CBC via a DLL call. I don't think it is possible to see the solver log using this setup. I prefer to see the solver log, just to make sure there are no surprises. For this I used:
model.solve(lp.COIN_CMD(msg=1))
This will call the CBC executable (via an MPS file) and it will show the CBC log:
Welcome to the CBC MILP Solver
Version: 2.9
Build Date: Jan 6 2019
command line - cbc.exe fbbd73baa2494109b8e990ce26eb79b6-pulp.mps branch printingOptions all solution fbbd73baa2494109b8e990ce26eb79b6-pulp.sol (default strategy 1)
At line 2 NAME MODEL
At line 3 ROWS
At line 22 COLUMNS
At line 64 RHS
At line 82 BOUNDS
At line 83 ENDATA
Problem MODEL has 17 rows, 10 columns and 34 elements
Coin0008I MODEL read with 0 errors
Presolve 4 (-13) rows, 7 (-3) columns and 18 (-16) elements
0 Obj 479.87991 Primal inf 10199.217 (4)
3 Obj 5887.5743
Optimal - objective value 5887.5743
After Postsolve, objective 5887.5743, infeasibilities - dual 1275.5592 (2), primal 0 (0)
Presolved model was optimal, full model needs cleaning up
Optimal - objective value 5887.5743
Optimal objective 5887.574275 - 3 iterations time 0.012, Presolve 0.00
Option for printingOptions changed from normal to all
Total time (CPU seconds): 0.03 (Wallclock seconds): 0.03
The solver log shows that the presolver removes 13 of the 17 rows. This high reduction rate is related to the singleton constraints: looking at the model, we generated 3+3+7=13 bound constraints. The presolver gets rid of these and turns them into proper bounds.
Note that msg=1 may not always work when running inside a Jupyter notebook.
Debugging
For debugging PuLP models, I recommend:
- print(model). Printing the model shows how PuLP interpreted the constraints.
- Writing an LP file: model.writeLP("steel.lp").
The output of print(model) is:
Steel:
MINIMIZE
1.2*Use_A + 1.5*Use_B + 0.9*Use_C + 1.3*Use_D + 1.45*Use_E + 1.2*Use_F + 1.0*Use_G + 0.0
SUBJECT TO
_C1: Use_A <= 4000
_C2: Use_B <= 3000
_C3: Use_C <= 6000
_C4: Use_D <= 5000
_C5: Use_E <= 2000
_C6: Use_F <= 3000
_C7: Use_G <= 2500
_C8: 5000 Content_C - 2.5 Use_A - 3 Use_B = 0
_C9: Content_C >= 2
_C10: Content_C <= 3
_C11: 5000 Content_Cu - 0.3 Use_C - 90 Use_D - 96 Use_E - 0.4 Use_F
- 0.6 Use_G = 0
_C12: Content_Cu >= 0.4
_C13: Content_Cu <= 0.6
_C14: 5000 Content_Mn - 1.3 Use_A - 0.8 Use_B - 4 Use_E - 1.2 Use_F = 0
_C15: Content_Mn >= 1.2
_C16: Content_Mn <= 1.65
_C17: Use_A + Use_B + Use_C + Use_D + Use_E + Use_F + Use_G = 5000
VARIABLES
Content_C Continuous
Content_Cu Continuous
Content_Mn Continuous
Use_A Continuous
Use_B Continuous
Use_C Continuous
Use_D Continuous
Use_E Continuous
Use_F Continuous
Use_G Continuous
The LP file looks like:
\* Steel *\
Minimize
OBJ: 1.2 Use_A + 1.5 Use_B + 0.9 Use_C + 1.3 Use_D + 1.45 Use_E + 1.2 Use_F
+ Use_G
Subject To
_C1: Use_A <= 4000
_C10: Content_C <= 3
_C11: 5000 Content_Cu - 0.3 Use_C - 90 Use_D - 96 Use_E - 0.4 Use_F
- 0.6 Use_G = 0
_C12: Content_Cu >= 0.4
_C13: Content_Cu <= 0.6
_C14: 5000 Content_Mn - 1.3 Use_A - 0.8 Use_B - 4 Use_E - 1.2 Use_F = 0
_C15: Content_Mn >= 1.2
_C16: Content_Mn <= 1.65
_C17: Use_A + Use_B + Use_C + Use_D + Use_E + Use_F + Use_G = 5000
_C2: Use_B <= 3000
_C3: Use_C <= 6000
_C4: Use_D <= 5000
_C5: Use_E <= 2000
_C6: Use_F <= 3000
_C7: Use_G <= 2500
_C8: 5000 Content_C - 2.5 Use_A - 3 Use_B = 0
_C9: Content_C >= 2
End
The information is basically the same, but the ordering of the rows is a bit different.
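One tweak that makes both listings easier to read is to name constraints explicitly instead of relying on the generated _C1, _C2, ... labels; PuLP accepts a (constraint, name) tuple. A sketch with a hypothetical two-item model (not the steel model itself):

```python
import pulp as lp

# Tiny throwaway model just to show named constraints.
model = lp.LpProblem("Demo", lp.LpMinimize)
use = lp.LpVariable.dicts("Use", ["A", "B"], 0)
model += use["A"] + use["B"]                 # objective
for i in ["A", "B"]:
    model += (use[i] <= 4000, f"avail_{i}")  # constraint with an explicit name
print(list(model.constraints.keys()))
```

With names like avail_A, the LP file and print(model) output immediately tell us which constraint is which.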
Comparison to CVXPY
We can try to model and solve the same problem using CVXPY. CVXPY is matrix oriented, so very different from PuLP. Here is my attempt:
from io import StringIO
import pandas as pd
import numpy as np
import cvxpy as cp

# for inputting tabular data below
def table(s):
    return pd.read_csv(StringIO(s), sep=r'\s+', index_col='ID')
#------------------------------------------------------------------
# data
#------------------------------------------------------------------
demand = 5000
requirements = table("""
ID Element Min Max
C Carbon 2 3
Cu Copper 0.4 0.6
Mn Manganese 1.2 1.65
""")
supplyData = table("""
ID Alloy C Cu Mn Stock Price
A "Iron alloy" 2.50 0.00 1.30 4000 1.20
B "Iron alloy" 3.00 0.00 0.80 3000 1.50
C "Iron alloy" 0.00 0.30 0.00 6000 0.90
D "Copper alloy" 0.00 90.00 0.00 5000 1.30
E "Copper alloy" 0.00 96.00 4.00 2000 1.45
F "Aluminum alloy" 0.00 0.40 1.20 3000 1.20
G "Aluminum alloy" 0.00 0.60 0.00 2500 1.00
""")
print("----- Data-------")
print(requirements)
print(supplyData)
#------------------------------------------------------------------
# derived data
#------------------------------------------------------------------
# our sets are stockItems ["A","B",..] and elements ["C","Cu",...]
Items = supplyData.index
Elements = requirements.index
# extract arrays (make sure order is identical)
Min = requirements.loc[Elements,"Min"]
Max = requirements.loc[Elements,"Max"]
Cost = supplyData.loc[Items,"Price"]
Avail = supplyData.loc[Items,"Stock"]
Element = supplyData.loc[Items,Elements]
# counts
NumItems = np.shape(Items)[0]
NumElements = np.shape(Elements)[0]
# reshape into proper column vectors to make cvxpy happy
Min = np.reshape(Min.to_numpy(),(NumElements,1))
Max = np.reshape(Max.to_numpy(),(NumElements,1))
Cost = np.reshape(Cost.to_numpy(),(NumItems,1))
Avail = np.reshape(Avail.to_numpy(),(NumItems,1))
Element = Element.to_numpy()
#------------------------------------------------------------------
# LP Model
#------------------------------------------------------------------
use = cp.Variable((NumItems,1),"Use",nonneg=True)
content = cp.Variable((NumElements,1),"Content",nonneg=True)
model = cp.Problem(cp.Minimize(Cost.T @ use),
[cp.sum(use) == demand,
cp.multiply(demand,content) == Element.T @ use,
content >= Min,
content <= Max,
use <= Avail
])
#------------------------------------------------------------------
# Solve and reporting
#------------------------------------------------------------------
model.solve(solver=cp.ECOS,verbose=True)
print("----- Model Results-------")
print("status:",model.status)
print("objective:",model.value)
results = pd.DataFrame({'variable':'use',
'index': Items,
'lower':0,
'level':use.value.flatten(),
'upper':Avail.flatten()
})
results = pd.concat([results,
                     pd.DataFrame({'variable':'content',
                                   'index': Elements,
                                   'lower':Min.flatten(),
                                   'level':content.value.flatten(),
                                   'upper':Max.flatten()
                                   })])
print(results)
Notes:
- I did my best to make sure that the ordering of rows and columns in the data frames is not significant.
- We convert the information in the data frames to standard NumPy arrays for the benefit of CVXPY.
- The model is compact, but we needed to put more effort into data extraction.
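The order-independence in the notes above relies on selecting with explicit label lists: .loc[rows, cols] returns data in the order of the lists we pass, not the frame's storage order. A small illustration with a hypothetical frame whose columns are deliberately stored "backwards":

```python
import pandas as pd

# Columns stored as Cu, C -- not in the order the model expects.
df = pd.DataFrame({'Cu': [0.0, 90.0], 'C': [2.5, 0.0]}, index=['A', 'D'])

Elements = ['C', 'Cu']                       # the order our model expects
M = df.loc[['D', 'A'], Elements].to_numpy()  # rows/cols follow our lists
print(M)
```

The first row of M corresponds to item 'D' and the first column to element 'C', regardless of how the frame was laid out.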
The results look like:
----- Data-------
      Element  Min   Max
ID
C      Carbon  2.0  3.00
Cu     Copper  0.4  0.60
Mn  Manganese  1.2  1.65
             Alloy    C    Cu   Mn  Stock  Price
ID
A       Iron alloy  2.5   0.0  1.3   4000   1.20
B       Iron alloy  3.0   0.0  0.8   3000   1.50
C       Iron alloy  0.0   0.3  0.0   6000   0.90
D     Copper alloy  0.0  90.0  0.0   5000   1.30
E     Copper alloy  0.0  96.0  4.0   2000   1.45
F   Aluminum alloy  0.0   0.4  1.2   3000   1.20
G   Aluminum alloy  0.0   0.6  0.0   2500   1.00
ECOS 2.0.7 - (C) embotech GmbH, Zurich Switzerland, 2012-15. Web: www.embotech.com/ECOS
It     pcost       dcost      gap    pres   dres    k/t     mu     step   sigma   IR    |   BT
 0  +6.293e+03  -4.362e+04  +9e+04  1e-01  8e-02  1e+00  4e+03    ---     ---    1  1  - |  -  -
 1  +5.462e+03  -6.110e+04  +7e+04  2e-01  5e-02  2e+03  3e+03  0.5361  8e-01  0  0  0 |  0  0
 2  +5.497e+03  +1.418e+03  +7e+03  1e-02  4e-03  5e+02  3e+02  0.9313  3e-02  0  0  0 |  0  0
 3  +4.981e+03  +3.654e+03  +2e+03  4e-03  1e-03  2e+02  1e+02  0.6947  5e-02  0  0  0 |  0  0
 4  +5.687e+03  +3.974e+03  +2e+03  7e-03  9e-04  3e+02  9e+01  0.4022  7e-01  0  0  0 |  0  0
 5  +5.653e+03  +5.326e+03  +5e+02  1e-03  2e-04  2e+01  2e+01  0.9127  1e-01  0  0  0 |  0  0
 6  +5.692e+03  +5.535e+03  +2e+02  5e-04  8e-05  1e+01  1e+01  0.5874  1e-01  0  0  0 |  0  0
 7  +5.791e+03  +5.642e+03  +2e+02  6e-04  5e-05  2e+01  7e+00  0.7361  5e-01  0  0  0 |  0  0
 8  +5.843e+03  +5.798e+03  +6e+01  2e-04  2e-05  5e+00  2e+00  0.9890  4e-01  0  0  0 |  0  0
 9  +5.886e+03  +5.883e+03  +4e+00  1e-05  1e-06  3e-01  2e-01  0.9454  1e-02  0  0  0 |  0  0
10  +5.888e+03  +5.888e+03  +5e-02  2e-07  3e-08  4e-03  2e-03  0.9890  2e-03  0  0  0 |  0  0
11  +5.888e+03  +5.888e+03  +6e-04  2e-09  5e-10  5e-05  2e-05  0.9890  1e-04  1  0  0 |  0  0
12  +5.888e+03  +5.888e+03  +6e-06  2e-11  6e-12  5e-07  3e-07  0.9890  1e-04  1  0  0 |  0  0
OPTIMAL (within feastol=1.9e-11, reltol=1.1e-09, abstol=6.3e-06).
Runtime: 0.000566 seconds.
----- Model Results-------
status: optimal
objective: 5887.574272281105
  variable index  lower         level    upper
0      use     A    0.0  4.000000e+03  4000.00
1      use     B    0.0  1.283254e-06  3000.00
2      use     C    0.0  3.977630e+02  6000.00
3      use     D    0.0  5.135476e-07  5000.00
4      use     E    0.0  2.761272e+01  2000.00
5      use     F    0.0  5.746243e+02  3000.00
6      use     G    0.0  5.163966e-06  2500.00
0  content     C    2.0  2.000000e+00     3.00
1  content    Cu    0.4  5.999999e-01     0.60
2  content    Mn    1.2  1.200000e+00     1.65
The results are not rounded, which is why this solution from an interior point algorithm looks a bit ugly. In essence it is the same solution as we found with PuLP/CBC.
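If the tiny residual values bother us, we can clean them up after solving. A sketch using a few of the levels above; the tolerance 1e-4 is an arbitrary choice:

```python
import numpy as np

# A few raw levels as reported by the interior point solver.
level = np.array([4.000000e+03, 1.283254e-06, 3.977630e+02, 5.135476e-07])

# Zero out anything below the tolerance, then round for display.
cleaned = np.where(np.abs(level) < 1e-4, 0.0, level).round(6)
print(cleaned)
```

This is purely cosmetic: the cleaned vector is no longer an exact solution of the LP, so it should only be used for reporting.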
References
- Translating a LP from Excel to Python, https://stackoverflow.com/questions/59579342/translating-a-lp-from-excel-to-python-pulp