Thank you for your detailed e-mail. I tried parm.nrs_max = 10; but it did not help.
> The objective may be inaccurate also due to the reduced cost tolerance,
> which is 1e-7 by default. Besides, if the objective coefficients are too
> small (much less than 1.0), it would be desired to scale them.

My LP problems are also very degenerate. The first LP problem I solve has a zero objective; it is a feasibility check. Then I solve min x_j and max x_j for all columns. (In fact it is possible to skip some columns by analysing the non-basic variables.) So again, the objective is very simple: there is exactly one variable, with coefficient 1.0. All variables are box constrained, with initial lower and upper bounds of -1.0 and 1.0, respectively.

Does this help in getting an error estimate on the objective function value? Does this help in setting a stricter tolerated error on the objective function value? Is there a way to exploit this simple objective?

> If you think that a new row/column added to the instance is badly
> scaled, you can change its scale factor with glp_set_rii (for row)
> or with glp_set_sjj (for column).

Once the LP problem is built, I never add a new column or row; I only manipulate the objective as written above. I have tried all three scalings, but the results are the same.

Thank you for your help.

_______________________________________________
Help-glpk mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/help-glpk
