In [1] an attempt was made to model the following situation:
- There are \(n=25\) bins (indexed by \(i\)), each containing a given number of parts \(q_i\).
- There are \(m=5\) pallets (indexed by \(j\)). We can load up to 5 bins onto a pallet.
- We want to minimize the standard deviation of the number of parts loaded onto a pallet.
The question of how to model a standard deviation objective comes up now and then, so let’s see how we can model this simple example.
First we notice that we need to introduce some kind of binary assignment variable to indicate on which pallet a bin is loaded:
\[x_{i,j}=\begin{cases}1 & \text{if bin $i$ is assigned to pallet $j$}\\0&\text{otherwise}\end{cases}\]
\[\bbox[lightcyan,10px,border:3px solid darkblue]{\begin{aligned}\min\>&\sqrt{\frac{1}{m}\sum_j (p_j-\mu)^2}\\&p_j = \sum_i q_i x_{i,j}&&\forall j\\&\mu = \frac{1}{m}\sum_j p_j\\&\sum_j x_{i,j} = 1&&\forall i\\&\sum_i x_{i,j} \le 5&&\forall j\\&x_{i,j}\in\{0,1\}\end{aligned}}\]
The variables \(p_j\) and \(\mu\) are (unrestricted) continuous variables (to be precise: \(p_j\) will automatically take an integer value when the \(q_i\) are integers).
The objective is complicated and would require an MINLP solver. We can simplify the objective as follows:
\[\min\>\sum_j (p_j-\mu)^2 \]
Now we have an MIQP model that can be solved with a number of good solvers.
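As a sanity check we can brute-force a tiny version of the problem (the data and sizes below are made up for illustration, not the \(n=25\) instance from the post) and verify that the simplified objective \(\sum_j (p_j-\mu)^2\) ranks assignments exactly like the standard deviation:

```python
import itertools
import statistics

# Tiny made-up instance: 6 bins, 2 pallets, at most 3 bins per pallet.
q = [7.0, 3.5, 5.2, 4.1, 6.3, 2.9]
n, m, cap = len(q), 2, 3

best = None
for assign in itertools.product(range(m), repeat=n):
    if any(assign.count(j) > cap for j in range(m)):
        continue                           # capacity: at most `cap` bins per pallet
    p = [0.0] * m                          # p[j] = parts loaded on pallet j
    for i, j in enumerate(assign):
        p[j] += q[i]
    mu = sum(p) / m
    sq = sum((pj - mu) ** 2 for pj in p)   # simplified MIQP objective
    sd = statistics.pstdev(p)              # original objective (std deviation)
    assert abs(sq - m * sd ** 2) < 1e-9    # sq = m * sd^2: identical ranking
    if best is None or sq < best[0]:
        best = (sq, sorted(p))

print(best)   # the most balanced feasible loading
```

Since \(\sum_j(p_j-\mu)^2 = m\cdot\text{sd}^2\) and \(m\) is fixed, any solution that is optimal for one objective is optimal for the other.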
It is possible to look at the spread in a different way and come up with a linear objective:
\[\begin{align}\min\>&p_{\text{max}}-p_{\text{min}}\\&p_{\text{max}}\ge p_j&&\forall j\\&p_{\text{min}}\le p_j&&\forall j\end{align}\]
In practice we might even use a simpler approach:
\[\begin{align}\min\>&p_{\text{max}}\\&p_{\text{max}}\ge p_j&&\forall j\end{align}\]
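On the same kind of tiny made-up instance we can evaluate the range objective directly: in the LP, the constraints \(p_{\text{max}}\ge p_j\) and \(p_{\text{min}}\le p_j\) make \(p_{\text{max}}-p_{\text{min}}\) equal to the range of the loads at the optimum, so enumerating loads and taking `max(p) - min(p)` mimics what the linear model does:

```python
import itertools

# Same made-up instance: 6 bins, 2 pallets, at most 3 bins per pallet.
q = [7.0, 3.5, 5.2, 4.1, 6.3, 2.9]
n, m, cap = len(q), 2, 3

def loads(assign):
    """Pallet loads p_j for a bin -> pallet assignment."""
    p = [0.0] * m
    for i, j in enumerate(assign):
        p[j] += q[i]
    return p

feasible = [loads(a) for a in itertools.product(range(m), repeat=n)
            if all(a.count(j) <= cap for j in range(m))]
# Minimize the range p_max - p_min over all feasible loadings.
best = min(feasible, key=lambda p: max(p) - min(p))
print(sorted(best), round(max(best) - min(best), 6))
```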
To be complete I also want to mention that I have seen cases where a quadratic objective of the form:
\[\min\>\sum_j p_j^2\]
was used.
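This works because every bin must be assigned, so \(\sum_j p_j=\sum_i q_i\) and hence \(\mu\) are constants; from the identity \(\sum_j p_j^2 = m\mu^2+\sum_j (p_j-\mu)^2\) the objective then differs from the quadratic deviation objective only by a constant. A quick numeric check of the identity (with hypothetical loads):

```python
# Check sum_j p_j^2 = m*mu^2 + sum_j (p_j - mu)^2 on made-up pallet loads.
p = [14.4, 14.6, 15.0, 13.8, 14.7]   # hypothetical loads, m = 5
m = len(p)
mu = sum(p) / m
lhs = sum(pj ** 2 for pj in p)
rhs = m * mu ** 2 + sum((pj - mu) ** 2 for pj in p)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-8
```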
The original post [1] suggests generating the number of parts \(q_i\) by drawing from a Normal distribution.
Obviously, drawing from the Normal distribution will give us fractional values. In practice I would expect the number of parts to be a whole number. Of course, if we considered another attribute, such as weight, fractional values would be natural. The model itself does not assume integer-valued \(q_i\)’s, so let’s stick with these numbers.
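A sketch of this kind of data generation in Python; the mean and standard deviation below are placeholders for illustration, not the values used in [1]:

```python
import random

# Draw q_i from a Normal distribution (parameters are assumptions).
random.seed(42)          # fixed seed for reproducibility
n = 25
q = [random.gauss(mu=100, sigma=10) for _ in range(n)]
print(min(q), max(q))    # fractional values, as discussed above
```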
To reduce some symmetry in the model we can add:
\[p_j \ge p_{j-1}\>\forall j>1\]
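The reason this helps: relabeling the pallets permutes the \(p_j\) without changing the objective, so each solution has up to \(m!\) equivalent copies, and the ordering constraint keeps just one representative. A small illustration with hypothetical loads:

```python
import itertools

# Hypothetical loads for m = 3 pallets; relabeling pallets permutes them
# but leaves the objective unchanged.
p = [14.4, 14.6, 15.0]
copies = set(itertools.permutations(p))
print(len(copies))           # 3! = 6 symmetric copies

# The ordering constraint p_j >= p_{j-1} keeps exactly one representative:
kept = [c for c in copies
        if all(c[j] >= c[j - 1] for j in range(1, len(c)))]
print(kept)                  # only the sorted copy survives
```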
Some results are below.
The two quadratic models were difficult to solve to optimality: I stopped after 1000 seconds and both were still working. Interestingly, just minimizing \(\sum_j p_j^2\) seems to find a somewhat better solution within the 1000-second time limit: it reduces the range from 0.1 to 0.052 and the standard deviation from 0.04 to 0.02.
The linear models solve to proven global optimality quickly and give better results: they reduce the range to 0.034 and 0.038 (and the standard deviation to 0.014 and 0.016). The model that minimizes \(p_{\text{max}}-p_{\text{min}}\) performs best: the solution quality is very good and the solution time is the fastest (34 seconds).
This is an example of a model where MIQP solvers are not as fast as we would like: they really fall behind their linear counterparts.
References
- LPSolve API, Minimize a function of constraints, https://stackoverflow.com/questions/46203974/lpsolve-api-minimize-a-function-of-constraints