Hello, I'm starting to run into problem sizes that result in out-of-memory errors. The matrix is quite sparse, and the presolver does an excellent job of reducing the problem size (when it doesn't run out of memory first). For example:
lpx_simplex: original LP has 775626 rows, 3861761 columns, 302081 non-zeros
lpx_simplex: presolved LP has 18378 rows, 75520 columns, 302080 non-zeros

At some point, though, things get a bit out of hand:

lpx_simplex: original LP has 1319436 rows, 6581761 columns, 163841 non-zeros
umalloc: size = 8024; no memory available

First, is there a good rule of thumb for estimating memory usage given the numbers of rows, columns, and non-zeros? Second, is there a particular set of options I should be using (or avoiding) in order to minimize memory? Finally, would a realistic solution be to modify the presolver to use less memory, presumably at the expense of taking much, much longer to run?

I've also tried the interior-point method, and while it does handle larger matrices, it's running out of memory too.

Thanks much,
Barry Rountree

_______________________________________________
Help-glpk mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-glpk
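[A rough sketch of the kind of rule-of-thumb estimate asked about above. The per-element byte costs below are illustrative guesses, not GLPK's actual internal sizes; the real numbers depend on the GLPK version and platform, and the simplex method needs additional working storage (basis factorization, etc.) on top of the raw LP data.]

```python
# Crude lower-bound estimate of memory needed to hold the LP data alone.
# All three constants are assumptions for illustration -- check the GLPK
# source for the real per-row/per-column/per-nonzero structure sizes.

BYTES_PER_ROW = 100   # assumed overhead per row object
BYTES_PER_COL = 100   # assumed overhead per column object
BYTES_PER_NZ = 25     # assumed bytes per non-zero (value + indices + links)

def estimate_lp_bytes(rows, cols, nonzeros):
    """Return a rough estimate, in bytes, of the storage for the LP data."""
    return rows * BYTES_PER_ROW + cols * BYTES_PER_COL + nonzeros * BYTES_PER_NZ

# Example: the first LP from the post, reported in MiB.
mib = estimate_lp_bytes(775626, 3861761, 302081) / 2**20
print(f"~{mib:.0f} MiB for the LP data alone (under the assumed constants)")
```

Under these assumed constants, even the first (successfully presolved) LP already needs hundreds of MiB just for the problem data, which is consistent with the larger instance exhausting memory.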
