Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Michael Hennebry

On Tue, 22 Sep 2020, Domingo Alvarez Duarte wrote:

Given the big difference in solver time, how could we figure out what causes it,
in order to use this knowledge to improve the native solver time ?


I mean, what debug/verbose options could give us a clue ?


I expect the answer is none.
My guess is that neither platform is inherently better than the other.
Which small roundings will be better doubtless depends
on the particular problem and version of GLPK.
Andrew insists on C89.
That pretty much eliminates control over how floating point is done.

double x=1./3., y=1./3.;
C89 does not require x and y to have the same value.
IEEE arithmetic would, despite possible double rounding.

Even with IEEE, platforms are allowed to evaluate floating point
expressions with a precision larger than required by the expressions.
They are not required to be consistent.
double x=2.0+DBL_EPSILON-2.0, y=2.0+DBL_EPSILON-2.0;
x could be 0 or DBL_EPSILON, as could y.
They need not be the same.
Once upon a time, that was a real problem with gcc.
For C89, it might still be.

With a lot of casts to long double and sufficient
care in the representation of constants,
Andrew might be able to get consistent behaviour between
IEEE platforms with matching doubles and long doubles.

It's been a while since I looked at any GLPK code.
My expectation is that at least some of it, e.g. dot product, is memory bound.
In such a case, explicitly using long doubles would
not slow it down and could improve its accuracy.
Possibly Andrew already does that.
I haven't looked lately.

--
Michael   henne...@web.cs.ndsu.nodak.edu
"Sorry but your password must contain an uppercase letter, a number,
a haiku, a gang sign, a hieroglyph, and the blood of a virgin."
 --  someeecards



Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Chris Matrakidis
It is well known that varying initial conditions can dramatically change
the running time of a solver (and the tree searched). In this case there
are floating-point differences: in the first instance the integer
solution is found very early and has the same value as the LP solution,
terminating the search instantly, while in the other it takes a few
more iterations (and again the search then terminates instantly).

You can't draw any conclusions about solver performance from just one
instance, where you may be lucky (or not).

Best Regards,

Chris Matrakidis



Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Andrew !

In this case it means compiling with "-O3 -DNDEBUG -DWITH_SPLAYTREE" on
Arm7, "-O3 -DNDEBUG -flto -march=native -ffp-contract=off
-DWITH_SPLAYTREE" on x86_64, and "-O3 -DNDEBUG -flto -DWITH_SPLAYTREE" on
wasm.


How are the parameters for cut generation selected ?

Isn't it strange that wasm is faster than native ?

Doesn't this difference give insight into selecting the parameters differently ?

Cheers !



Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Andrew Makhorin
On Tue, 2020-09-22 at 15:53 +0200, Domingo Alvarez Duarte wrote:
> Hello again !
> 
> On an Android phone arm7 32bits Nexus-5 with chrome browser (wasm) 
> solving the "hashi.mod" with "--cuts" takes 98s and without it 909s, 
> using glpsol native compiled within termux takes 497s with "--cuts"
> and 
> without it 925s.


What does "native" mean? Just changing, for example, optimization level
of the compiler may essentially change the set of generated cuts and
thus the solution time.





Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello again !

On an Android phone (Arm7 32-bit Nexus-5) with the Chrome browser (wasm),
solving "hashi.mod" with "--cuts" takes 98s and without it 909s;
using glpsol compiled natively within termux takes 497s with "--cuts"
and 925s without it.


Arm7 32-bit Nexus-5:

    wasm "--cuts -m hashi.mod" -> 98s

    wasm " -m hashi.mod" -> 909s

    native "--cuts -m hashi.mod" -> 497s

    native " -m hashi.mod" -> 925s


Laptop Linux 64-bit i7:

    wasm "--cuts -m hashi.mod" -> 8s

    wasm " -m hashi.mod" -> 142s

    native "--cuts -m hashi.mod" -> 73s

    native " -m hashi.mod" -> 55s


On arm7 "--cuts" improves the performance in both wasm and native.

On x86_64 "--cuts" improves performance in wasm but degrades it in native.

I hope this data gives hints to improve GLPK solver performance by
inspecting the decision criteria and eventually finding better ones.


Can anyone offer any ideas based on this data ?

Cheers !



Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Tor !

Thanks for the reply !

Do you know why that happens ?

Which criterion is used to select one ?

Can we somehow force/direct the solver to adopt a different one ?

Cheers !

On 22/9/20 14:03, Tor Myklebust wrote:
Note that the two solvers made different branching decisions once it 
came time to solve the MIP.





Re: GLPSOL in webassembly faster than native ?

2020-09-22 Thread Domingo Alvarez Duarte

Hello Andrew !

Due to the big difference in solver time, how could we figure out what
causes it, in order to use this knowledge to improve the native solver time ?


I mean, what debug/verbose options could give us a clue ?

Cheers !

On 21/9/20 17:11, Andrew Makhorin wrote:

On Mon, 2020-09-21 at 16:09 +0200, Domingo Alvarez Duarte wrote:

Hello Andrew !

Are you saying that floating point calculations are more
efficient/precise in WebAssembly ?

No. I meant that, because of floating-point computations, running the same
program with the same data as a rule produces different results
on different platforms.


Cheers !

On 21/9/20 15:08, Andrew Makhorin wrote:

Can someone give a possible explanation ?

floating-point computations