The matrix balancing problem can be described as follows [1]: find a non-negative matrix \(\color{darkred}A_{i,j}\) that is as close as possible to a given matrix \(\color{darkblue}A^0_{i,j}\), while observing given row and column totals and a given sparsity pattern. Or, mathematically,
| Matrix Balancing Problem |
|---|
| \[\begin{align}\min_{\color{darkred}A}\>&{\bf{dist}}(\color{darkred}A,\color{darkblue}A^0)\\ & \sum_i \color{darkred}A_{i,j} = \color{darkblue}c_j && \forall j\\ & \sum_j \color{darkred}A_{i,j} = \color{darkblue}r_i && \forall i \\&\color{darkred}A_{i,j}=0 &&\forall i,j|\color{darkblue}A^0_{i,j}=0\\ &\color{darkred}A_{i,j}\ge 0 \end{align} \] |
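As a concrete illustration, the model above can be sketched directly with a general-purpose NLP solver. The snippet below is a minimal sketch using SciPy's SLSQP (not the tool used in the experiments [1]); the quadratic objective is picked here, all data is random, and the names `A0`, `r`, `c` follow the text.

```python
# Sketch of the matrix balancing model with the quadratic objective,
# solved by SciPy's SLSQP. Data is random; this is illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A0 = rng.uniform(1, 10, size=(3, 4))
A0[0, 2] = 0.0                       # impose a sparsity pattern
r = A0.sum(axis=1) * 1.05            # perturbed row totals
c = A0.sum(axis=0)
c *= r.sum() / c.sum()               # totals must be consistent: sum(r) = sum(c)

mask = A0.ravel() > 0                # structural zeros stay out of the problem
a0 = A0.ravel()[mask]

def full(a):
    """Re-embed the free entries into the full matrix (zeros stay zero)."""
    A = np.zeros(A0.size)
    A[mask] = a
    return A.reshape(A0.shape)

def obj(a):                          # quadratic distance to A0
    return np.sum((a - a0) ** 2)

# One column constraint is dropped: it is implied by the others
# because the row and column totals are consistent.
cons = [{"type": "eq", "fun": lambda a: full(a).sum(axis=1) - r},
        {"type": "eq", "fun": lambda a: full(a).sum(axis=0)[:-1] - c[:-1]}]
res = minimize(obj, a0, method="SLSQP",
               bounds=[(0, None)] * a0.size, constraints=cons)
A = full(res.x)
```

Swapping in the cross-entropy or relative quadratic objective only requires changing `obj`; the constraint structure stays the same.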
There are numerous ways to specify the objective. Here we focus on three popular ones:
- Cross-entropy: \(\displaystyle{\min \sum_{i,j} \color{darkred}A_{i,j} \ln \frac{\color{darkred}A_{i,j}}{\color{darkblue}A^0_{i,j}}}\)
- Quadratic: \(\displaystyle{\min \sum_{i,j} \left(\color{darkred}A_{i,j}-\color{darkblue}A^0_{i,j}\right)^2}\)
- Relative quadratic: \(\displaystyle{\min \sum_{i,j} \left(\frac{\color{darkred}A_{i,j}}{\color{darkblue}A^0_{i,j}}-1\right)^2}\)
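The cross-entropy case is special: it does not need a general NLP solver at all, since the classic RAS (biproportional scaling, a.k.a. iterative proportional fitting) procedure converges to the minimum cross-entropy solution when the totals are consistent. A minimal numpy sketch (the function name and random data are mine, for illustration):

```python
# RAS / iterative proportional fitting: alternately rescale rows and
# columns of A0 until both sets of totals are matched. Structural zeros
# in A0 are preserved automatically by the multiplicative updates.
import numpy as np

def ras(A0, r, c, iters=1000):
    A = A0.astype(float).copy()
    for _ in range(iters):
        A *= (r / A.sum(axis=1))[:, None]   # match row totals
        A *= (c / A.sum(axis=0))[None, :]   # match column totals
    return A

rng = np.random.default_rng(1)
A0 = rng.uniform(1, 10, size=(4, 5))
r = A0.sum(axis=1) * rng.uniform(0.9, 1.1, 4)
c = A0.sum(axis=0) * rng.uniform(0.9, 1.1, 5)
c *= r.sum() / c.sum()                      # totals must be consistent
A = ras(A0, r, c)
```

This assumes every row and column has at least one nonzero entry, so the scaling factors are well defined.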
One interesting way to compare these objectives is to look at the distributions of the errors \(\color{darkred}A-\color{darkblue}A^0\) (only for nonzero \(\color{darkblue}A^0_{i,j}\)). Using some random data and a large instance, we see the following pattern in the histograms of the errors:
The entropy objective is nestled nicely between the two quadratic distributions.
I think this is a rather appealing visualization, giving insight into how these objectives behave. What is missing is a sense of where these errors occur: for small \(\color{darkblue}A^0_{i,j}\) or for large ones. I am not sure how best to depict that.
References
- Some matrix balancing experiments, https://yetanothermathprogrammingconsultant.blogspot.com/2022/08/some-matrix-balancing-experiments.html