Hello Michael !
You are right and that's why I mention:
===
Probably a review of the "glp_gmi_gen" function and others that use qsort,
with this insight in mind, could reveal a better/more stable approach to
calculating cuts (or other tasks involving qsort).
===
And testing with miplib2017 problems with a hard
On Thu, 1 Oct 2020, Domingo Alvarez Duarte wrote:
But in reality it seems that musl's qsort behaves as a "stable sort", and
for hashi.mod and tiling.mod the resulting "tie-breaker" order leads to a
much shorter solving time.
Note that you have achieved consistency of result,
not
On Thu, 1 Oct 2020, Heinrich Schuchardt wrote:
On 9/30/20 10:02 PM, Michael Hennebry wrote:
On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:
I found why GLPK in wasm was faster with the "--cuts" option than native:
it was due to wasm using qsort from musl libc, which is a "stable sort". I've
added it
On 9/30/20 10:02 PM, Michael Hennebry wrote:
> On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:
>
>> I found why GLPK in wasm was faster with the "--cuts" option than native:
>> it was due to wasm using qsort from musl libc, which is a "stable sort". I've
>> added it to my GLPK repository and running all
On Tue, 29 Sep 2020, Domingo Alvarez Duarte wrote:
I found why GLPK in wasm was faster with the "--cuts" option than native: it
was due to wasm using qsort from musl libc, which is a "stable sort". I've
added it to my GLPK repository, and running all models in the "examples"
folder, only "color.mod"
On Sat, 26 Sep 2020, Domingo Alvarez Duarte wrote:
I did a revision of the usage of "glp_long_double"; see here:
https://github.com/mingodad/GLPK/commit/4941d1633e52b802bdc5f102715ac3db25db5245
Revised usage of glp_long_double; now it solves hashi.mod and tiling.mod
faster with
On Fri, 25 Sep 2020, Andrew Makhorin wrote:
Why do you want glpk to produce absolutely identical results on
different platforms? This makes no practical sense.
In some cases, it does make sense,
but in C89 it might be difficult to achieve.
If the different platforms have effectively
identical
On Sun, 2020-09-27 at 11:32 +0200, Manuel Muñoz Márquez wrote:
> I agree with you, Andrew, but the problem is when the output is not a
> real number.
>
> Suppose that you have to decide which of the projects that a big company
> is planning will be done in the next year. A little difference in
>
I agree with you, Andrew, but the problem is when the output is not a
real number.
Suppose that you have to decide which of the projects that a big company
is planning will be done in the next year. A little difference in
computation may lead to solutions that are far apart from one
another
Hello !
Activating glp_long_double one by one, I found that the ones that really
make hashi.mod with "--cuts" perform better are the ones below (but
doing so degrades performance for everything else).
=
/***
*
On Sat, 2020-09-26 at 09:51 +0200, Manuel Muñoz Márquez wrote:
> > Why do you want glpk to produce absolutely identical results on
> > different platforms? This makes no practical sense.
> >
>
> I think this is a desirable behavior. If you are solving a real
> problem, it looks very weird if you
Hello Michael !
I did a revision of the usage of "glp_long_double"; see here:
https://github.com/mingodad/GLPK/commit/4941d1633e52b802bdc5f102715ac3db25db5245
Revised usage of glp_long_double; now it solves hashi.mod and
tiling.mod faster with the "--cuts" option but hashi.mod without
> Why do you want glpk to produce absolutely identical results on
> different platforms? This makes no practical sense.
>
I think this is a desirable behavior. If you are solving a real
problem, it looks very weird if you provide a solution using your
computer and when you put it on the user web
> Something doesn't match for me, because I'm compiling GLPK/glpsol
> with and without optimization and get the same reported cuts:
>
> Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3; !! *
> the same
> =
> Cuts on level 0: gmi = 5; mir = 44; cov = 20; clq = 3; *
> the
Hello Andrew !
Something doesn't match for me, because I'm compiling GLPK/glpsol with
and without optimization and get the same reported cuts:
Compiled with "-O3 -DNDEBUG"
=
glpsol2 --cuts -m hashi.mod
GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
--cuts
Hello Mike !
I made changes in several places; see here:
https://github.com/mingodad/GLPK/commit/b370a854be0c10c06e025896dedc4e3461278497
===
Changed some declarations to glp_long_double in the hope that using it
mainly on "sums" would improve performance/accuracy, but right now
performance
On Fri, 2020-09-25 at 10:04 +0200, Domingo Alvarez Duarte wrote:
> Hello Michael !
>
> Thank you for reply !
>
> I'll take into account the possible use of a wider float format for
> intermediate values, using something like your suggestion of a
> redefinable
> type like "glp_double_t" (actually in
Hello Michael !
Thank you for reply !
I'll take into account the possible use of a wider float format for
intermediate values, using something like your suggestion of a redefinable
type like "glp_double_t" (actually with gcc 9 on Linux x86, "double_t" and
"double" are the same).
But also leave the
On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:
I just got glpsol with "long double" working and added binaries for anyone who
wants to test them, here: https://github.com/mingodad/GLPK/releases
As noted there, it'll benefit from tuning the constants in src/glpk_real.h
Any
Hello !
I just got glpsol with "long double" working and added binaries for anyone
who wants to test them, here: https://github.com/mingodad/GLPK/releases
As noted there, it'll benefit from tuning the constants in src/glpk_real.h
Any help/suggestion/comment is welcome !
Cheers !
On 22/9/20
Hello Michael !
Thanks for reply !
After your reply I did a refactoring on this branch
https://github.com/mingodad/GLPK/tree/local-set-param where I replaced
all occurrences of "double" with "glp_double" and added a definition for
it as shown below, but it seems that there are several
On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:
Due to the big difference in solver time, how could we figure out what it is,
in order to use this knowledge to improve the native solver time?
I mean, what debug/verbose options could help us get a clue?
I expect the answer is none.
My
It is well known that varying initial conditions can dramatically change
the running time of a solver (and the tree searched). In this case there
are floating point differences, and in the first instance the integer
solution is found very early and has the same value as the LP solution
Hello Andrew !
In this case I mean compiling with "-O3 -DNDEBUG -DWITH_SPLAYTREE" on
Arm7, "-O3 -DNDEBUG -flto -march=native -ffp-contract=off
-DWITH_SPLAYTREE" on x86_64 and "-O3 -DNDEBUG -flto -DWITH_SPLAYTREE" on
wasm.
How are the parameters for cut generation selected?
Isn't
On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:
> Hello again !
>
> On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm)
> solving the "hashi.mod" with "--cuts" takes 98s and without it 909s,
> using glpsol native compiled within termux takes 497s with "--cuts"
>
Hello again !
On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm)
solving the "hashi.mod" with "--cuts" takes 98s and without it 909s,
using glpsol native compiled within termux takes 497s with "--cuts" and
without it 925s.
Arm7 32bits Nexus-5:
wasm "--cuts -m hashi.mod"
Hello Tor !
Thanks for reply !
Do you know why that happens?
Which criterion is used to select one?
Can we somehow force/direct the solver to adopt a different one?
Cheers !
On 22/9/20 14:03, Tor Myklebust wrote:
Note that the two solvers made different branching decisions once it
came
Hello Andrew !
Due to the big difference in solver time, how could we figure out what it
is, in order to use this knowledge to improve the native solver time?
I mean, what debug/verbose options could help us get a clue?
Cheers !
On 21/9/20 17:11, Andrew Makhorin wrote:
On Mon, 2020-09-21
On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:
> Hello Andrew !
>
> Are you saying that floating point calculations are more
> efficient/precise in webassembly ?
No. I meant that, due to floating-point computations, running the same
computer program with the same data as a rule
Hello Andrew !
Are you saying that floating point calculations are more
efficient/precise in webassembly ?
Cheers !
On 21/9/20 15:08, Andrew Makhorin wrote:
Can someone give a possible explanation?
floating-point computations
> Can someone give a possible explanation?
floating-point computations