[sage-devel] Error building Sage on UM 16.04: error making OpenBLAS

2017-07-13 Thread Christopher Phoenix
I'm building Sage 7.6 on my laptop, and the build failed with an error making 
openblas that directed me to the log files for that package. The log file said 
that CPU detection failed and suggested setting TARGET explicitly. It also 
suggested emailing this Google group to explain the problem, with the relevant 
part of the log files, so I've attached that below.

OS: Ubuntu Mate 16.04 LTS
Sage Version: 7.6
HW: Lenovo ThinkPad 11e, 500 GB HD, 4 GB RAM, Intel Celeron N2940 with 4 CPU 
cores

Before the build, I made sure that I had all the listed dependencies and 
suggested packages already installed. Then I cloned the Git repo, set 
MAKE='make -j5' and typed make. Make ran for about 45 minutes or more before 
it stopped and reported the error. I asked about this issue on sage-support 
earlier (https://groups.google.com/forum/#!topic/sage-support/NlRyew12xDQ).

Someone had the same issue on very similar hardware (another 11e) here: 
https://groups.google.com/d/msg/sage-devel/zQsZsivts0I/cblwvEkNDgAJ The log 
files look almost exactly the same. They reported that setting 
OPENBLAS_CONFIGURE="TARGET=ATOM" resolved this CPU detection issue, so I'm 
going to try setting this and building Sage again later today. I was a 
little confused, since a Celeron is not an Atom as far as I know; I'm guessing 
this is a catch-all setting for lower-end processors?
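In case it helps others hitting the same failure, the workaround from that thread amounts to something like the following (a sketch only; TARGET=ATOM is the value reported in the linked post, not something I've confirmed on this machine yet):

```shell
# Force OpenBLAS's kernel selection instead of relying on CPU autodetection.
# TARGET=ATOM is the value reported to work on the other ThinkPad 11e.
export OPENBLAS_CONFIGURE="TARGET=ATOM"
echo "$OPENBLAS_CONFIGURE"

# Then, from the Sage source tree, retry the build (not run here):
# make
```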

Any advice will be greatly appreciated!

-- 
You received this message because you are subscribed to the Google Groups 
"sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sage-devel+unsubscr...@googlegroups.com.
To post to this group, send email to sage-devel@googlegroups.com.
Visit this group at https://groups.google.com/group/sage-devel.
For more options, visit https://groups.google.com/d/optout.
Found local metadata for openblas-0.2.19.p0
Attempting to download package OpenBLAS-0.2.19.tar.gz from mirrors
http://files.sagemath.org/spkg/upstream/openblas/OpenBLAS-0.2.19.tar.gz
[..]
openblas-0.2.19.p0

Setting up build directory for openblas-0.2.19.p0
Finished extraction
Applying patches from ../patches...
Applying ../patches/openblas-0.2.19-MAKE.patch
patching file Makefile
Applying ../patches/openblas-0.2.19-OSX_DEPLOY.patch
patching file Makefile.system
Applying ../patches/openblas-0.2.19-SMP_conditional.patch
patching file Makefile.system
Hunk #1 succeeded at 299 (offset 2 lines).
Hunk #2 succeeded at 845 (offset 2 lines).
Hunk #3 succeeded at 1046 (offset 2 lines).
Hunk #4 succeeded at 1054 (offset 2 lines).
Applying ../patches/openblas-0.2.19-utest_ldflags.patch
patching file utest/Makefile

Host system:
Linux christopher-ThinkPad-11e 4.4.0-83-generic #106-Ubuntu SMP Mon Jun 26 
17:54:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

C compiler: gcc
C compiler version:
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/5/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 
5.4.0-6ubuntu1~16.04.4' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-vtable-verify --enable-libmpx --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-amd64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-amd64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-amd64 
--with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 
--with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib 
--with-tune=generic --enable-checking=release --build=x86_64-linux-gnu 
--host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 

Building OpenBLAS: make  USE_THREAD=0
make[3]: Entering directory 
'/home/christopher/sagemath/sage/local/var/tmp/sage/build/openblas-0.2.19.p0/src'
getarch_2nd.c: In function 'main':
getarch_2nd.c:12:35: error: 'SGEMM_DEFAULT_UNROLL_M' undeclared (first use in 
this function)
 printf("SGEMM_UNROLL_M=%d\n", SGEMM_DEFAULT_UNROLL_M);
   ^
getarch_2nd.c:12:35: note: each undeclared identifier is reported only once for 
each function it appears in
getarch_2nd.c:13:35: error: 

Re: [sage-devel] Fwd: [ODK participants] Blog post on fast multivariate arithmetic

2017-07-13 Thread Francesco Biscani
mppp also uses a small value optimisation. The number of limbs that can be
stored without dynamic memory allocation can be selected at compile time,
and the benchmarks on the website are done using 1 limb (64 bits) of static
storage.

I can think of a few things that might positively influence mppp's
performance with respect to FLINT:

- inlining (as mppp is header-only) avoids the overhead of function calls
and might allow the compiler to optimise better.
- mppp does not do automatic downsizing: once you are using dynamic storage,
it's up to you to demote to a static integer. I would imagine this saves a
few branches with respect to FLINT.
- I spent a lot of time tinkering with the add/sub/mul code, trying to find
the code flow/layout that would squeeze out the best performance for small
operands. Maybe I just got lucky with a specific way of arranging the code
that is particularly friendly to GCC, but I don't really know; I am not a
low-level/assembly type of guy. I just tried many different variations and
picked the one that performed best.

Cheers,

  Francesco.

On 13 July 2017 at 12:25, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

> So why is it faster than Flint say? Except for the overhead in the Flint
> fmpz type, which uses a single word initially and only upgrades to an mpz_t
> on overflow, it should currently be doing more allocations than Flint. And
> Flint should be faster for something like a dot product, especially if the
> integers are all small, since it never actually allocates mpz_t's in that
> case. What is the new innovation?
>
> Bill.
>
> On Wednesday, 12 July 2017 16:00:16 UTC+2, bluescarni wrote:
>>
>> In the benchmarks I use the C++ interfaces of FLINT and
>> Boost.Multiprecision only for ease of initialization/destruction. The bulk
>> of the operations is performed using directly the C API of FLINT and GMP.
>> mp++ itself has some moderate template metaprogramming in place, but for
>> instance it is currently lacking expression templates support (unlike
>> fmpzxx), the focus at the moment being on fast low-level primitives
>> (add/sub/mul/addmul etc.).
>>
>> Cheers,
>>
>>   Francesco.
>>
>> On 12 July 2017 at 15:13, 'Bill Hart' via sage-devel <
>> sage-...@googlegroups.com> wrote:
>>
>>> Beware, Bernard Parisse has just helped me track down why the Flint
>>> timings for the sparse division only benchmark looked so ridiculously low.
>>> It turns out that due to an accident of interfacing between Nemo and Flint,
>>> it was using reflected lexicographical ordering instead of true
>>> lexicographical ordering. If I had labelled them "exact division", instead
>>> of "quotient only" and not included the x^(n - 3) term in the benchmark
>>> itself, the timings could be considered correct (though Giac would also
>>> have been able to do the computation much faster in that case). But
>>> unfortunately, this discovery means I had to change the timings for Flint
>>> for that benchmark. It is now correct on the blog.
>>>
>>> The timings for mppp are really good. I'm surprised you beat the Flint
>>> timings there, since we use pretty sophisticated templating in our C++
>>> interface. But clearly there are tricks we missed!
>>>
>>> Bill.
>>>
>>> On Wednesday, 12 July 2017 12:16:33 UTC+2, bluescarni wrote:

 Interesting timings, they give me some motivation to revisit the dense
 multiplication algorithm in piranha :)

 As an aside (and apologies if this is a slight thread hijack?), I have
 been spending some time in the last few weeks decoupling the multiprecision
 arithmetic bits from piranha into its own project, called mp++:

 https://github.com/bluescarni/mppp

 So far I have extracted the integer and rational classes, and currently
 working on the real class (arbitrary precision FP).

 Cheers,

   Francesco.



[sage-devel] Re: Is it possible to make an arm port of sage?

2017-07-13 Thread Pavel Sayekat
Thanks everyone :). I found a working Sage 6.9 ARM build for Raspbian 
Jessie here: 
https://raspberrypi.stackexchange.com/questions/69183/is-there-any-raspberry-pi-version-of-sagemath?noredirect=1#comment109029_69183
:)

On Sunday, July 2, 2017 at 7:07:54 PM UTC+6, Pavel Sayekat wrote:
>
> Is it possible to make an arm port of sage like Raspberry PI?
>



Re: [sage-devel] Two issues about the coding theory method "weight_enumerator"

2017-07-13 Thread David Joyner
On Thu, Jul 13, 2017 at 5:59 AM, 'B. L.' via sage-devel wrote:
> Dear Sage-Developers,
>
> I'd like to report two issues that I came across when working with the
> coding theory classes of SAGE.
>
> The Sage Reference Manual: Coding Theory, Release 7.6 [1] explains on p. 31:
> weight_enumerator [...] This is the bivariate, homogeneous polynomial in x
> and y whose coefficient to x^i y^{n-i} is the number of codewords of
> self of Hamming weight i. Here, n is the length of self.
> Actually, Sage returns another polynomial, namely the polynomial in x and
> y whose coefficient to x^{n-i} y^i is the number of codewords of self of
> Hamming weight i. (So the roles of x and y are interchanged.)
> This can be checked directly with C.weight_enumerator?? in the example below [2].
>
> I suggest either changing the description in the reference or altering the
> code in Sage.
>

I'd propose just switching the x,y in the code:

(1) On line 3503 of linear_code.py, change "return
sum(spec[i]*x**(n-i)*y**i for i in range(n+1))" to "return
sum(spec[i]*x**(i)*y**(n-i) for i in range(n+1))"

(2) On line 3507, change "return sum(spec[i]*x**(n-i) for i in
range(n+1))" to "return sum(spec[i]*x**(i) for i in range(n+1))"

(3) Some of the examples may change accordingly.

This mistake could be my fault, since I wrote the original version
(long long ago). Unfortunately, I lack the git skills to submit a
patch.
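The proposed change can be sanity-checked in plain Python, without a Sage install. This is only a sketch, not the Sage source; `spec` stands in for the value of `C.spectrum()` from example [2] below:

```python
# Sketch of the enumerator after the proposed fix: the coefficient of
# x^i * y^(n-i) is spec[i], the number of codewords of Hamming weight i.
def bivariate_terms(spec):
    n = len(spec) - 1
    # map exponent pair (i, n - i) -> coefficient spec[i], skipping zeros
    return {(i, n - i): c for i, c in enumerate(spec) if c}

# Spectrum of the [5, 2] code in example [2]: one codeword of weight 0,
# two of weight 3, one of weight 4.
spec = [1, 0, 0, 2, 1, 0]
print(bivariate_terms(spec))
# Setting y = 1 then leaves coefficient spec[i] on x^i, i.e. 1 + 2*x^3 + x^4,
# matching the enumerator the report expects.
```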


> The function weight_enumerator(bivariate=False) returns the evaluation of
> the above polynomial at y=1. Shouldn't it be (in the current version)
> the evaluation at x=1? In other words: wouldn't one expect a polynomial in
> x (or y) whose coefficient to x^i (or y^i) is the number of codewords of
> self of Hamming weight i?
> The example below [2] illustrates my point: the code has four codewords, one
> of weight 0, two of weight 3, one of weight 4. Sage gives x^5 + 2*x^2 + x as
> the univariate weight enumerator. I would have expected either 1 + 2*x^3 +
> x^4 or 1 + 2*y^3 + y^4.
>
> If you agree, I suggest to alter the code accordingly.
>
> Kind regards
> Barbara
> PS: I tested the code with Sage version 7.6 on an iMac.
>
>
> [1] http://doc.sagemath.org/pdf/en/reference/coding/coding.pdf
>
> [2] Sage code for the SageMathCell
>
> C = LinearCode(Matrix(GF(2), 2, 5, [[1,1,0,1,0], [0,0,1,1,1]]))
> print C.list()
> print C.spectrum()
> print C.weight_enumerator()
> print C.weight_enumerator(bivariate=False)
> C.weight_enumerator??
>
> http://sagecell.sagemath.org/?z=eJxztlXwycxLTSxyzk9J1fBNLCnKrNBwd9Mw0tQx0jHVUYiONtQx1DEA4VggzwDMBMLYWE1NXq6Cosy8EgVnvZzM4hINJH5xQWpySVFpLrJYeWpmekZJfGpeaW5qUWJJfhF-yaTMssSizMSSVFu3xJziVKBaLKrs7QGIgD2K=sage
>



Re: [sage-devel] Fwd: [ODK participants] Blog post on fast multivariate arithmetic

2017-07-13 Thread 'Bill Hart' via sage-devel
So why is it faster than Flint say? Except for the overhead in the Flint 
fmpz type, which uses a single word initially and only upgrades to an mpz_t 
on overflow, it should currently be doing more allocations than Flint. And 
Flint should be faster for something like a dot product, especially if the 
integers are all small, since it never actually allocates mpz_t's in that 
case. What is the new innovation?

Bill.

On Wednesday, 12 July 2017 16:00:16 UTC+2, bluescarni wrote:
>
> In the benchmarks I use the C++ interfaces of FLINT and 
> Boost.Multiprecision only for ease of initialization/destruction. The bulk 
> of the operations is performed using directly the C API of FLINT and GMP. 
> mp++ itself has some moderate template metaprogramming in place, but for 
> instance it is currently lacking expression templates support (unlike 
> fmpzxx), the focus at the moment being on fast low-level primitives 
> (add/sub/mul/addmul etc.).
>
> Cheers,
>
>   Francesco.
>
> On 12 July 2017 at 15:13, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
>
>> Beware, Bernard Parisse has just helped me track down why the Flint 
>> timings for the sparse division only benchmark looked so ridiculously low. 
>> It turns out that due to an accident of interfacing between Nemo and Flint, 
>> it was using reflected lexicographical ordering instead of true 
>> lexicographical ordering. If I had labelled them "exact division", instead 
>> of "quotient only" and not included the x^(n - 3) term in the benchmark 
>> itself, the timings could be considered correct (though Giac would also 
>> have been able to do the computation much faster in that case). But 
>> unfortunately, this discovery means I had to change the timings for Flint 
>> for that benchmark. It is now correct on the blog.
>>
>> The timings for mppp are really good. I'm surprised you beat the Flint 
>> timings there, since we use pretty sophisticated templating in our C++ 
>> interface. But clearly there are tricks we missed!
>>
>> Bill. 
>>
>> On Wednesday, 12 July 2017 12:16:33 UTC+2, bluescarni wrote:
>>>
>>> Interesting timings, they give me some motivation to revisit the dense 
>>> multiplication algorithm in piranha :)
>>>
>>> As an aside (and apologies if this is a slight thread hijack?), I have 
>>> been spending some time in the last few weeks decoupling the multiprecision 
>>> arithmetic bits from piranha into its own project, called mp++:
>>>
>>> https://github.com/bluescarni/mppp
>>>
>>> So far I have extracted the integer and rational classes, and currently 
>>> working on the real class (arbitrary precision FP).
>>>
>>> Cheers,
>>>
>>>   Francesco.
>>>



[sage-devel] Two issues about the coding theory method "weight_enumerator"

2017-07-13 Thread 'B. L.' via sage-devel
Dear Sage-Developers,

I'd like to report two issues that I came across when working with the 
coding theory classes of SAGE.

   1. The Sage Reference Manual: Coding Theory, Release 7.6 [1] explains on 
   p. 31:
   weight_enumerator [...] This is the bivariate, homogeneous polynomial in 
   x and y whose coefficient to x^i y^{n-i} is the number of codewords of 
   self of Hamming weight i. Here, n is the length of self.
   Actually, Sage returns another polynomial, namely the polynomial in x 
   and y whose coefficient to x^{n-i} y^i is the number of codewords of 
   self of Hamming weight i. (So the roles of x and y are interchanged.)
   This can be checked directly with C.weight_enumerator?? in the example 
   below [2].
   
   I suggest either changing the description in the reference or altering 
   the code in Sage.
   
   2. The function weight_enumerator(bivariate=False) returns the 
   evaluation of the above polynomial at y=1. Shouldn't it be (in the 
   current version) the evaluation at x=1? In other words: wouldn't one 
   expect a polynomial in x (or y) whose coefficient to x^i (or y^i) is the 
   number of codewords of self of Hamming weight i?
   The example below [2] illustrates my point: the code has four codewords, 
   one of weight 0, two of weight 3, one of weight 4. Sage gives x^5 + 
   2*x^2 + x as the univariate weight enumerator. I would have expected 
   either 1 + 2*x^3 + x^4 or 1 + 2*y^3 + y^4.
   
   If you agree, I suggest altering the code accordingly.
   
Kind regards
Barbara
PS: I tested the code with Sage version 7.6 on an iMac.


[1] http://doc.sagemath.org/pdf/en/reference/coding/coding.pdf

[2] Sage code for the SageMathCell

C = LinearCode(Matrix(GF(2), 2, 5, [[1,1,0,1,0], [0,0,1,1,1]]))
print C.list()
print C.spectrum()
print C.weight_enumerator()
print C.weight_enumerator(bivariate=False)
C.weight_enumerator??

http://sagecell.sagemath.org/?z=eJxztlXwycxLTSxyzk9J1fBNLCnKrNBwd9Mw0tQx0jHVUYiONtQx1DEA4VggzwDMBMLYWE1NXq6Cosy8EgVnvZzM4hINJH5xQWpySVFpLrJYeWpmekZJfGpeaW5qUWJJfhF-yaTMssSizMSSVFu3xJziVKBaLKrs7QGIgD2K=sage
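For readers without a Sage install, the discrepancy can be reproduced in plain Python. This is a sketch only; `spec` mirrors `C.spectrum()` for the example code above, and the two formulas are the ones described in point 1:

```python
# spec[i] = number of codewords of Hamming weight i for the [5, 2] code above.
spec = [1, 0, 0, 2, 1, 0]
n = len(spec) - 1  # n = 5, the code length

def poly_str(coeffs):
    """Render a coefficient list as a polynomial in x, lowest degree first."""
    terms = []
    for i, c in enumerate(coeffs):
        if not c:
            continue
        if i == 0:
            terms.append(str(c))
        else:
            terms.append(("%d*" % c if c != 1 else "") + "x^%d" % i)
    return " + ".join(terms)

# What Sage 7.6 currently returns at y=1: coefficient of x^(n-i) is spec[i].
current = [0] * (n + 1)
for i, c in enumerate(spec):
    current[n - i] += c
# What one would expect: coefficient of x^i is spec[i].
expected = list(spec)

print(poly_str(current))   # x^1 + 2*x^2 + x^5 (Sage orders it x^5 + 2*x^2 + x)
print(poly_str(expected))  # 1 + 2*x^3 + x^4
```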
