Re: [sage-devel] linbox 64-bit charpoly
On Tue, Sep 27, 2016 at 8:34 PM, 'Bill Hart' via sage-devel < sage-devel@googlegroups.com> wrote: > > > On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote: >> >> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel < >> sage-...@googlegroups.com> wrote: >> >>> I'm pretty sure the charpoly routine in Flint is much more recent that 2 >>> years. Are you referring to a Sage implementation on top of Flint >>> arithmetic or something? >>> >> >> It is just a problem with Sage. >> > > Sure, I realised the problem was in Sage. I just wasn't sure if the > algorithm itself is implemented in Flint or Sage. > > >> Sorry, I thought I was clear about that. I assume that no one has been >> using the algorithm='flint' option in Sage in the last two years, which >> makes sense, because most people aren't going to bother changing the >> default. >> >> >>> The only timing that I can find right at the moment had us about 5x >>> faster than Sage. It's not in a released version of Flint though, just in >>> master. >>> >> >> That sounds really nice. On my laptop with current Sage, it might be the >> other way around. With Sage 7.3 on my laptop, with this particular matrix, >> I get >> > > Yes, Sage/Linbox was about 2.5x times faster than the old charpoly routine > in Flint, I believe. The new one is quite recent and much quicker. > > >> sage: %time f = A.charpoly(algorithm='flint') >> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s >> Wall time: 1min 24s >> >> sage: %time f = A.charpoly(algorithm='linbox') >> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s >> Wall time: 13.3 s >> >> However, perhaps the average runtime with linbox is infinity. (Also, this >> in an out of date Linbox.) >> >> I think that Linbox may be "cheating" in a way that Flint is not. I'm >> pretty sure both implementations work mod p (or p^n?) for a bunch of p and >> reconstruct. 
From my reading of the Flint source code (actually, I didn't
>> check the version in Sage) and comments from Clement Pernet, I think that
>> Flint uses an explicit Hadamard bound to determine how many primes to use,
>> while Linbox just waits for the CRT'd polynomial to stabilize for a few
>> primes.
>
> Ouch!
>
> Yes, in the new code we use an explicit proven bound. I can't quite recall
> all the details now, but I recall it is multimodular.
>
> I would give it a good amount of testing before trusting it. We've done
> quite a lot of serious testing of it and the test code is nontrivial, but
> some real world tests are much more likely to shake out any bugs, including
> the possibility I screwed up the implementation of the bound.

Ah, yes, I'm wrong again, as the multimodular code in Flint is pretty new. I didn't look at what Sage has until now (flint 2.5.2, which looks like it uses a fairly simple O(n^4) algorithm). I had previously looked at the source code of the version of flint that I've actually been using myself, which is from June. As I now recall (after reading an email I sent in June), I'm using a "non-released" version precisely for the nmod_mat_charpoly() function, which doesn't exist in the most recent release (which I guess might be 2.5.2, but flintlib.org seems to be having problems at the moment).

I've actually done some fairly extensive real world semi-testing of nmod_mat_charpoly() in the last few months (for almost the same reasons that have led me to investigate Sage/Linbox), but not fmpz_mat_charpoly(). "Semi" means that I haven't actually checked that the answers are correct. I'm actually computing characteristic polynomials of integer matrices, but writing down the integer matrices is too expensive, so I'm computing the polynomials more efficiently mod p and then CRTing. Also, I'm doing exactly what I think Linbox does, in that I am just waiting for the polynomials to stabilize.

Step 2, when it eventually happens, will separately compute the roots of these polynomials numerically, which will (heuristically) verify that they are correct. (Step 3 might involve actually proving somehow that everything is correct, but I strongly fear that it might involve confessing that everything is actually only "obviously" correct.) Once step 2 happens, I'll either report some problems or let you know that everything went well.

>
>> I have no idea how much of a difference that makes in this case.
>>
>>> Bill.
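The "wait for the CRT'd polynomial to stabilize" strategy described above can be sketched in a few lines of Python. The sketch reconstructs a single signed integer (standing in for one charpoly coefficient) from its residues modulo successive primes, stopping once the balanced lift is unchanged for a few extra primes. The function names and the stability threshold are illustrative, not Linbox's actual code; in real use the residue mod p would come from a mod-p charpoly computation rather than from reducing a known value.

```python
def crt_pair(r1, m1, r2, m2):
    # combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2), with gcd(m1, m2) = 1
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return r1 + m1 * t, m1 * m2

def balanced(r, m):
    # symmetric lift of r mod m into the range (-m/2, m/2]
    return r - m if r > m // 2 else r

def reconstruct(true_value, primes, stable_needed=3):
    # true_value % p stands in for a mod-p charpoly coefficient;
    # stop once the balanced lift survives stable_needed extra primes
    r, m = 0, 1
    lift, stable = None, 0
    for p in primes:
        r, m = crt_pair(r, m, true_value % p, p)
        new_lift = balanced(r, m)
        stable = stable + 1 if new_lift == lift else 0
        lift = new_lift
        if stable >= stable_needed:
            return lift
    return lift
```

The risk being discussed in this thread is visible in the stopping rule: nothing proves the lift is correct, it has merely stopped changing, so an adversarial (or unlucky) matrix could in principle stabilize early at a wrong answer.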
Re: [sage-devel] linbox 64-bit charpoly
On Tue, Sep 27, 2016 at 8:02 PM, William Steinwrote: > On Tue, Sep 27, 2016 at 11:53 AM, Jonathan Bober > wrote: > > On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel > > wrote: > >> > >> I'm pretty sure the charpoly routine in Flint is much more recent that 2 > >> years. Are you referring to a Sage implementation on top of Flint > arithmetic > >> or something? > > > > > > It is just a problem with Sage. Sorry, I thought I was clear about that. > I > > assume that no one has been using the algorithm='flint' option in Sage in > > the last two years, which makes sense, because most people aren't going > to > > bother changing the default. > > > >> > >> The only timing that I can find right at the moment had us about 5x > faster > >> than Sage. It's not in a released version of Flint though, just in > master. > > > > > > That sounds really nice. On my laptop with current Sage, it might be the > > other way around. With Sage 7.3 on my laptop, with this particular > matrix, I > > get > > > > sage: %time f = A.charpoly(algorithm='flint') > > CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s > > Wall time: 1min 24s > > > > sage: %time f = A.charpoly(algorithm='linbox') > > CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s > > Wall time: 13.3 s > > > > However, perhaps the average runtime with linbox is infinity. (Also, > this in > > an out of date Linbox.) > > > > I think that Linbox may be "cheating" in a way that Flint is not. I'm > pretty > > sure both implementations work mod p (or p^n?) for a bunch of p and > > reconstruct. 
From my reading of the Flint source code (actually, I didn't > > check the version in Sage) and comments from Clement Pernet, I think that > > Flint uses an explicit Hadamard bound to determine how many primes to > use, > > while Linbox just waits for the CRT'd polynomial to stabilize for a few > > If it is really doing this, then it should definitely not be the > default algorithm for Sage, unless proof=False is explicitly > specified. Not good. > > Yes, I've had the same thought, which is actually part of the reason I took the time to write this. I hope that Clement, or someone else who knows, will notice and confirm or deny. Also, eventually I will probably try to read the Linbox source code. It is possible that I am wrong. (I guess that there is some certification step, and that it is somewhat heuristic, but maybe it is more definitive than that.) > William > > > primes. I have no idea how much of a difference that makes in this case. > > > >> > >> Bill. > >> > >> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote: > >>> > >>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein > wrote: > > On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober > wrote: > > On Mon, Sep 26, 2016 at 11:52 PM, William Stein > > wrote: > > >> > >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober > >> wrote: > >> > In the matrix_integer_dense charpoly() function, there is a note > in > >> > the > >> > docstring which says "Linbox charpoly disabled on 64-bit > machines, > >> > since > >> > it > >> > hangs in many cases." > >> > > >> > As far as I can tell, that is not true, in the sense that (1) I > >> > have > >> > 64-bit > >> > machines, and Linbox charpoly is not disabled, (2) > >> > charpoly(algorithm='flint') is so horribly broken that if it were > >> > ever > >> > used > >> > it should be quickly noticed that it is broken, and (3) I can't > see > >> > anywhere > >> > where it is actually disabled. 
> >> > > >> > So I actually just submitted a patch which removes this note > while > >> > fixing > >> > point (2). (Trac #21596). > >> > > >> > However... > >> > > >> > In some testing I'm noticing problems with charpoly(), so I'm > >> > wondering > >> > where that message came from, and who knows something about it. > >> > >> Do you know about "git blame", or the "blame" button when viewing > any > >> file here: https://github.com/sagemath/sage/tree/master/src > > > > > > Ah, yes. Of course I know about that. And it was you! > > > > You added that message here: > > Dang... I had a bad feeling that would be the conclusion :-) > >>> > >>> > >>> Well, I'm sure you've done one or two things in the meantime that will > >>> allow me to forgive this one oversight. > >>> > > In my defense, Linbox/FLINT have themselves changed a lot over the > years... We added Linbox in 2007, I think. > > >>> > >>> Yes. As I said, this comment, and the design change, is ancient. In > some > >>> limiting testing, linbox tends to be faster than flint, but has very > high > >>> variance in the timings. (I haven't actually
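For contrast, the explicit-bound strategy attributed to Flint in this thread can be sketched as follows. This is a simplified illustration, not Flint's actual code: it uses only Hadamard's determinant bound, whereas a real charpoly bound must also cover the lower-order coefficients, and the names and the 62-bit prime size are assumptions.

```python
import math

def hadamard_bound_bits(A):
    # log2 of Hadamard's bound on |det(A)|: the product of the
    # Euclidean norms of the rows ("or 1" guards an all-zero row)
    return sum(0.5 * math.log2(sum(x * x for x in row) or 1) for row in A)

def primes_needed(A, prime_bits=62):
    # choose enough ~62-bit primes that the CRT modulus exceeds twice
    # the bound (the factor 2 covers signed coefficients); with this
    # many primes the reconstruction is provably correct, and no
    # stabilization heuristic is needed
    return math.ceil((hadamard_bound_bits(A) + 1) / prime_bits)
```

The price of the proof is that the bound is often pessimistic, so the explicit-bound method may use more primes (and more time) than the stabilization heuristic on typical inputs, which is consistent with the timings quoted above.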
Re: [sage-devel] Re: memory management issue: deleted variables not released
I just noticed this thread because of your recent reply, and happened to read through. (I haven't regularly read sage-devel for a while.)

As to your original email: I think there is a subtle Python memory management issue there. If you run

    sage: BIG=myfunction(somevars)
    sage: BIG=myfunction(somevars)

then on the second invocation of the function, I'm pretty sure that the way Python works, it calculates the result of the function call and then assigns it to the variable BIG. In between, the garbage collector will probably run sometimes, but because the variable BIG has not yet been reassigned, the garbage collector might not clean it up. So it seems reasonable to me that

    sage: BIG=myfunction(somevars)
    sage: BIG = 0
    sage: BIG=myfunction(somevars)

may behave differently.

Having said all that... It doesn't sound right that running the function once costs 50% of RAM, and running it twice (with the BIG = 0 in between) costs 75%. However, there are certainly situations where that can happen. As was mentioned, Sage caches some computations, and that can occasionally lead to unwanted memory use. Additionally, when running this sort of short test, it seems a good idea to manually invoke the Python garbage collector (import gc; gc.collect()) before conclusively declaring that there is a memory leak.

The _best_ way to help (and get help) and to get attention, if there is really a memory leak, is to write a short loop that looks something like

    while 1:
        x = some_simple_function()
        gc.collect()
        print get_memory_usage()

and outputs an increasing sequence of numbers. Going from some complicated code to a simple loop like that may be an arduous debugging task in itself, and is something I would consider a valuable service to Sage if it really finds a bug. In the intermediate regime, just sharing some code could be useful, if you are willing and able.
There are at least a few people (such as myself, during the occasional periods while I am paying attention) with >4 GB of RAM and 10 minutes of CPU cycles to spare, who may be willing to help.

Finally (and this is the reason that I read through this thread and replied), there was a change in the way that Sage manages PARI memory usage (between 7.0 and 7.1, I think; see https://trac.sagemath.org/ticket/19883) which probably affects a very small number of users, but affects them very badly. (I know about this because it affects me.) If, on your machine with 100 GB of RAM, the output of 'cat /proc/sys/vm/overcommit_memory' is 2, then it affects you. Alternatively, if overcommit_memory is 0, then it is possible you are misreading the memory usage: the virtual memory usage will be high, but not the actual memory usage. The problem will hopefully be fixed by 7.4 (see https://trac.sagemath.org/ticket/21582), but the high virtual memory usage confusion will probably persist. Of course, it is also quite possible that you've found some other bad problem that popped up between 7.0 and 7.1.

On Tue, Sep 27, 2016 at 9:44 PM, Denis wrote:
>
> Tried but it didn't work out. MathCloud admins say they can't help. Tried
> also at SageCell but the calculation wouldn't end either way after several
> hours. Any ideas?
>
> Denis
>
> --
> You received this message because you are subscribed to the Google Groups
> "sage-devel" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sage-devel+unsubscr...@googlegroups.com.
> To post to this group, send email to sage-devel@googlegroups.com.
> Visit this group at https://groups.google.com/group/sage-devel.
> For more options, visit https://groups.google.com/d/optout.
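The leak-hunting loop suggested above can be made self-contained in plain Python like this. The workload function is a placeholder to substitute with the suspect code, the RSS reading is Linux-specific (it parses /proc/self/status; inside Sage one could use get_memory_usage() instead), and the loop is bounded here only so the sketch terminates.

```python
import gc

def current_rss_kb():
    # Linux-specific: resident set size (VmRSS) from /proc/self/status
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

def some_simple_function():
    # placeholder workload: allocate and drop a pile of temporaries;
    # replace this with the code suspected of leaking
    return sum(len(str(i)) for i in range(100000))

for i in range(5):
    x = some_simple_function()
    gc.collect()
    print(i, current_rss_kb())  # steady growth across iterations suggests a leak
```

If the printed numbers plateau after the first iteration or two, the memory is being reused and there is probably no leak, just allocator or cache behavior.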
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Tue, Sep 27, 2016 at 2:10 PM, Volker Braunwrote: > I may have time next weekend to containerize the buildbot, though no > promises. It does need quite a lot of disk space (the old one was about 50GB > iirc) to hold all the build logs and binary builds. The new machine has 8TB of hard disks in it... > Whats the plan for external networking and secrets? Regarding secrets, I want to use kubernetes's secret management. However, I've not yet set that up in a way that I'm happy with, so for now we may have to just mount a directory with secrets in it. I haven't thought about network yet. > > > > On Tuesday, September 27, 2016 at 9:57:43 PM UTC+2, William wrote: >> >> On Tue, Sep 27, 2016 at 12:54 PM, Harald Schilly >> wrote: >> > On Tue, Sep 27, 2016 at 9:29 PM, Dima Pasechnik >> > wrote: >> >>> we can't, because there are filesize limits. >> >> how about git-lfs ? (which is probably not cheap to use) >> > >> > I don't think we need any of that, a normal CDN is fine, too. Problem >> > with using a commercial one is that the traffic is so expensive. >> > >> >> >> >> We just have to ask (via ODK). >> > >> > Well, any help is fine, but I bet there won't be a solution until >> > william's deadline. >> >> Sorry if it is was lost in the noise, but there is also a very nice >> brand new clean machine, read to run Docker images, at UW. (48 >> cores, 256GB RAM, brand new OS install.) It should work well for this >> purpose. Harald is looking into runnings files.sagemath.org on it >> now. >> >> William >> >> >> -- >> William (http://wstein.org) > > -- > You received this message because you are subscribed to the Google Groups > "sage-devel" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to sage-devel+unsubscr...@googlegroups.com. > To post to this group, send email to sage-devel@googlegroups.com. > Visit this group at https://groups.google.com/group/sage-devel. > For more options, visit https://groups.google.com/d/optout. 
-- William (http://wstein.org)
Re: [sage-devel] Re: [sagemath-admins] trac not responding
I may have time next weekend to containerize the buildbot, though no promises. It does need quite a lot of disk space (the old one was about 50GB iirc) to hold all the build logs and binary builds. Whats the plan for external networking and secrets? On Tuesday, September 27, 2016 at 9:57:43 PM UTC+2, William wrote: > > On Tue, Sep 27, 2016 at 12:54 PM, Harald Schilly >wrote: > > On Tue, Sep 27, 2016 at 9:29 PM, Dima Pasechnik > wrote: > >>> we can't, because there are filesize limits. > >> how about git-lfs ? (which is probably not cheap to use) > > > > I don't think we need any of that, a normal CDN is fine, too. Problem > > with using a commercial one is that the traffic is so expensive. > > > >> > >> We just have to ask (via ODK). > > > > Well, any help is fine, but I bet there won't be a solution until > > william's deadline. > > Sorry if it is was lost in the noise, but there is also a very nice > brand new clean machine, read to run Docker images, at UW. (48 > cores, 256GB RAM, brand new OS install.) It should work well for this > purpose. Harald is looking into runnings files.sagemath.org on it > now. > > William > > > -- > William (http://wstein.org) > -- You received this message because you are subscribed to the Google Groups "sage-devel" group. To unsubscribe from this group and stop receiving emails from it, send an email to sage-devel+unsubscr...@googlegroups.com. To post to this group, send email to sage-devel@googlegroups.com. Visit this group at https://groups.google.com/group/sage-devel. For more options, visit https://groups.google.com/d/optout.
Re: [sage-devel] Re: memory management issue: deleted variables not released
Tried but it didn't work out. MathCloud admins say they can't help. Tried also at SageCell but the calculation wouldn't end either way after several hours. Any ideas?

Denis
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Tue, Sep 27, 2016 at 12:54 PM, Harald Schilly wrote:
> On Tue, Sep 27, 2016 at 9:29 PM, Dima Pasechnik wrote:
>>> we can't, because there are filesize limits.
>> how about git-lfs ? (which is probably not cheap to use)
>
> I don't think we need any of that, a normal CDN is fine, too. Problem
> with using a commercial one is that the traffic is so expensive.
>
>> We just have to ask (via ODK).
>
> Well, any help is fine, but I bet there won't be a solution until
> william's deadline.

Sorry if it was lost in the noise, but there is also a very nice brand new clean machine, ready to run Docker images, at UW. (48 cores, 256GB RAM, brand new OS install.) It should work well for this purpose. Harald is looking into running files.sagemath.org on it now.

William

-- William (http://wstein.org)
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Tue, Sep 27, 2016 at 9:29 PM, Dima Pasechnik wrote:
>> we can't, because there are filesize limits.
> how about git-lfs ? (which is probably not cheap to use)

I don't think we need any of that, a normal CDN is fine, too. Problem with using a commercial one is that the traffic is so expensive.

> We just have to ask (via ODK).

Well, any help is fine, but I bet there won't be a solution until william's deadline.

-- h
Re: [sage-devel] linbox 64-bit charpoly
On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote: > > On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel < > sage-...@googlegroups.com > wrote: > >> I'm pretty sure the charpoly routine in Flint is much more recent that 2 >> years. Are you referring to a Sage implementation on top of Flint >> arithmetic or something? >> > > It is just a problem with Sage. > Sure, I realised the problem was in Sage. I just wasn't sure if the algorithm itself is implemented in Flint or Sage. > Sorry, I thought I was clear about that. I assume that no one has been > using the algorithm='flint' option in Sage in the last two years, which > makes sense, because most people aren't going to bother changing the > default. > > >> The only timing that I can find right at the moment had us about 5x >> faster than Sage. It's not in a released version of Flint though, just in >> master. >> > > That sounds really nice. On my laptop with current Sage, it might be the > other way around. With Sage 7.3 on my laptop, with this particular matrix, > I get > Yes, Sage/Linbox was about 2.5x times faster than the old charpoly routine in Flint, I believe. The new one is quite recent and much quicker. > sage: %time f = A.charpoly(algorithm='flint') > CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s > Wall time: 1min 24s > > sage: %time f = A.charpoly(algorithm='linbox') > CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s > Wall time: 13.3 s > > However, perhaps the average runtime with linbox is infinity. (Also, this > in an out of date Linbox.) > > I think that Linbox may be "cheating" in a way that Flint is not. I'm > pretty sure both implementations work mod p (or p^n?) for a bunch of p and > reconstruct. 
From my reading of the Flint source code (actually, I didn't > check the version in Sage) and comments from Clement Pernet, I think that > Flint uses an explicit Hadamard bound to determine how many primes to use, > while Linbox just waits for the CRT'd polynomial to stabilize for a few > primes. > Ouch! Yes, in the new code we use an explicit proven bound. I can't quite recall all the details now, but I recall it is multimodular. I would give it a good amount of testing before trusting it. We've done quite a lot of serious testing of it and the test code is nontrivial, but some real world tests are much more likely to shake out any bugs, including the possibility I screwed up the implementation of the bound. > I have no idea how much of a difference that makes in this case. > > >> Bill. >> >> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote: >>> >>> On Tue, Sep 27, 2016 at 4:18 AM, William Steinwrote: >>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober wrote: > On Mon, Sep 26, 2016 at 11:52 PM, William Stein wrote: >> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober wrote: >> > In the matrix_integer_dense charpoly() function, there is a note in the >> > docstring which says "Linbox charpoly disabled on 64-bit machines, since >> > it >> > hangs in many cases." >> > >> > As far as I can tell, that is not true, in the sense that (1) I have >> > 64-bit >> > machines, and Linbox charpoly is not disabled, (2) >> > charpoly(algorithm='flint') is so horribly broken that if it were ever >> > used >> > it should be quickly noticed that it is broken, and (3) I can't see >> > anywhere >> > where it is actually disabled. >> > >> > So I actually just submitted a patch which removes this note while >> > fixing >> > point (2). (Trac #21596). >> > >> > However... >> > >> > In some testing I'm noticing problems with charpoly(), so I'm wondering >> > where that message came from, and who knows something about it. 
>> >> Do you know about "git blame", or the "blame" button when viewing any >> file here: https://github.com/sagemath/sage/tree/master/src > > > Ah, yes. Of course I know about that. And it was you! > > You added that message here: Dang... I had a bad feeling that would be the conclusion :-) >>> >>> Well, I'm sure you've done one or two things in the meantime that will >>> allow me to forgive this one oversight. >>> >>> In my defense, Linbox/FLINT have themselves changed a lot over the years... We added Linbox in 2007, I think. >>> Yes. As I said, this comment, and the design change, is ancient. In some >>> limiting testing, linbox tends to be faster than flint, but has very high >>> variance in the timings. (I haven't actually checked flint much.) Right now >>> I'm running the following code on 64 cores, which should test linbox: >>> >>> import time >>> >>> @parallel >>> def test(n): >>> start = time.clock() >>> f =
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Tue, Sep 27, 2016 at 5:57 PM, Harald Schilly wrote:
> On Tue, Sep 27, 2016 at 7:39 PM, William Stein wrote:
>> no volunteers to migrate files/build/rsync.sagemath.org... We should
>> just switch to GitHub.
>
> we can't, because there are filesize limits.

how about git-lfs ? (which is probably not cheap to use)

Anyhow, it's probably better to use some free cloud - I'm in fact at the moment in Krakow, attending a series of workshops (http://www.digitalinfrastructures.eu/). It appears that they (e.g. egi.eu) have spare capacity for hosting and running things... We just have to ask (via ODK).

Dima

> -- h
>
> ---
> You received this message because you are subscribed to the Google Groups
> "sagemath-admins" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sagemath-admins+unsubscr...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
Re: [sage-devel] linbox 64-bit charpoly
On Tue, Sep 27, 2016 at 11:53 AM, Jonathan Boberwrote: > On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel > wrote: >> >> I'm pretty sure the charpoly routine in Flint is much more recent that 2 >> years. Are you referring to a Sage implementation on top of Flint arithmetic >> or something? > > > It is just a problem with Sage. Sorry, I thought I was clear about that. I > assume that no one has been using the algorithm='flint' option in Sage in > the last two years, which makes sense, because most people aren't going to > bother changing the default. > >> >> The only timing that I can find right at the moment had us about 5x faster >> than Sage. It's not in a released version of Flint though, just in master. > > > That sounds really nice. On my laptop with current Sage, it might be the > other way around. With Sage 7.3 on my laptop, with this particular matrix, I > get > > sage: %time f = A.charpoly(algorithm='flint') > CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s > Wall time: 1min 24s > > sage: %time f = A.charpoly(algorithm='linbox') > CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s > Wall time: 13.3 s > > However, perhaps the average runtime with linbox is infinity. (Also, this in > an out of date Linbox.) > > I think that Linbox may be "cheating" in a way that Flint is not. I'm pretty > sure both implementations work mod p (or p^n?) for a bunch of p and > reconstruct. From my reading of the Flint source code (actually, I didn't > check the version in Sage) and comments from Clement Pernet, I think that > Flint uses an explicit Hadamard bound to determine how many primes to use, > while Linbox just waits for the CRT'd polynomial to stabilize for a few If it is really doing this, then it should definitely not be the default algorithm for Sage, unless proof=False is explicitly specified. Not good. William > primes. I have no idea how much of a difference that makes in this case. > >> >> Bill. 
>> >> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote: >>> >>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein wrote: On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober wrote: > On Mon, Sep 26, 2016 at 11:52 PM, William Stein > wrote: >> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober >> wrote: >> > In the matrix_integer_dense charpoly() function, there is a note in >> > the >> > docstring which says "Linbox charpoly disabled on 64-bit machines, >> > since >> > it >> > hangs in many cases." >> > >> > As far as I can tell, that is not true, in the sense that (1) I >> > have >> > 64-bit >> > machines, and Linbox charpoly is not disabled, (2) >> > charpoly(algorithm='flint') is so horribly broken that if it were >> > ever >> > used >> > it should be quickly noticed that it is broken, and (3) I can't see >> > anywhere >> > where it is actually disabled. >> > >> > So I actually just submitted a patch which removes this note while >> > fixing >> > point (2). (Trac #21596). >> > >> > However... >> > >> > In some testing I'm noticing problems with charpoly(), so I'm >> > wondering >> > where that message came from, and who knows something about it. >> >> Do you know about "git blame", or the "blame" button when viewing any >> file here: https://github.com/sagemath/sage/tree/master/src > > > Ah, yes. Of course I know about that. And it was you! > > You added that message here: Dang... I had a bad feeling that would be the conclusion :-) >>> >>> >>> Well, I'm sure you've done one or two things in the meantime that will >>> allow me to forgive this one oversight. >>> In my defense, Linbox/FLINT have themselves changed a lot over the years... We added Linbox in 2007, I think. >>> >>> Yes. As I said, this comment, and the design change, is ancient. In some >>> limiting testing, linbox tends to be faster than flint, but has very high >>> variance in the timings. (I haven't actually checked flint much.) 
Right now >>> I'm running the following code on 64 cores, which should test linbox: >>> >>> import time >>> >>> @parallel >>> def test(n): >>> start = time.clock() >>> f = B.charpoly() >>> end = time.clock() >>> runtime = end - start >>> if f != g: >>> print n, 'ohno' >>> return runtime, 'ohno' >>> else: >>> return runtime, 'ok' >>> >>> A = load('hecke_matrix') >>> A._clear_cache() >>> B, denom = A._clear_denom() >>> g = B.charpoly() >>> B._clear_cache() >>> >>> import sys >>> >>> for result in test(range(10)): >>> print result[0][0][0], ' '.join([str(x) for x in result[1]]) >>> sys.stdout.flush() >>> >>> where the file hecke_matrix was produced by >>> >>>
Re: [sage-devel] linbox 64-bit charpoly
On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <sage-devel@googlegroups.com> wrote:

> I'm pretty sure the charpoly routine in Flint is much more recent than 2
> years. Are you referring to a Sage implementation on top of Flint
> arithmetic or something?

It is just a problem with Sage. Sorry, I thought I was clear about that. I assume that no one has been using the algorithm='flint' option in Sage in the last two years, which makes sense, because most people aren't going to bother changing the default.

> The only timing that I can find right at the moment had us about 5x faster
> than Sage. It's not in a released version of Flint though, just in master.

That sounds really nice. On my laptop with current Sage, it might be the other way around. With Sage 7.3 on my laptop, with this particular matrix, I get

sage: %time f = A.charpoly(algorithm='flint')
CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
Wall time: 1min 24s

sage: %time f = A.charpoly(algorithm='linbox')
CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
Wall time: 13.3 s

However, perhaps the average runtime with linbox is infinity. (Also, this is with an out-of-date Linbox.)

I think that Linbox may be "cheating" in a way that Flint is not. I'm pretty sure both implementations work mod p (or p^n?) for a bunch of p and reconstruct. From my reading of the Flint source code (actually, I didn't check the version in Sage) and comments from Clement Pernet, I think that Flint uses an explicit Hadamard bound to determine how many primes to use, while Linbox just waits for the CRT'd polynomial to stabilize for a few primes. I have no idea how much of a difference that makes in this case.

> Bill.
> > On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein wrote:
>>
>>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober wrote:
>>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein wrote:
>>> >>
>>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober wrote:
>>> >> > In the matrix_integer_dense charpoly() function, there is a note in the
>>> >> > docstring which says "Linbox charpoly disabled on 64-bit machines, since
>>> >> > it hangs in many cases."
>>> >> >
>>> >> > As far as I can tell, that is not true, in the sense that (1) I have
>>> >> > 64-bit machines, and Linbox charpoly is not disabled, (2)
>>> >> > charpoly(algorithm='flint') is so horribly broken that if it were ever
>>> >> > used it should be quickly noticed that it is broken, and (3) I can't
>>> >> > see anywhere where it is actually disabled.
>>> >> >
>>> >> > So I actually just submitted a patch which removes this note while
>>> >> > fixing point (2). (Trac #21596).
>>> >> >
>>> >> > However...
>>> >> >
>>> >> > In some testing I'm noticing problems with charpoly(), so I'm wondering
>>> >> > where that message came from, and who knows something about it.
>>> >>
>>> >> Do you know about "git blame", or the "blame" button when viewing any
>>> >> file here: https://github.com/sagemath/sage/tree/master/src
>>> >
>>> > Ah, yes. Of course I know about that. And it was you!
>>> >
>>> > You added that message here:
>>>
>>> Dang... I had a bad feeling that would be the conclusion :-)
>>
>> Well, I'm sure you've done one or two things in the meantime that will
>> allow me to forgive this one oversight.
>>
>>> In my defense, Linbox/FLINT have themselves changed a lot over the
>>> years... We added Linbox in 2007, I think.
>>
>> Yes. As I said, this comment, and the design change, is ancient. In some
>> limited testing, linbox tends to be faster than flint, but has very high
>> variance in the timings. (I haven't actually checked flint much.) Right now
>> I'm running the following code on 64 cores, which should test linbox:
>>
>> import time
>>
>> @parallel
>> def test(n):
>>     start = time.clock()
>>     f = B.charpoly()
>>     end = time.clock()
>>     runtime = end - start
>>     if f != g:
>>         print n, 'ohno'
>>         return runtime, 'ohno'
>>     else:
>>         return runtime, 'ok'
>>
>> A = load('hecke_matrix')
>> A._clear_cache()
>> B, denom = A._clear_denom()
>> g = B.charpoly()
>> B._clear_cache()
>>
>> import sys
>>
>> for result in test(range(10)):
>>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>>     sys.stdout.flush()
>>
>> where the file hecke_matrix was produced by
>>
>> sage: M = ModularSymbols(3633, 2, -1)
>> sage: S = M.cuspidal_subspace().new_subspace()
>> sage: H = S.hecke_matrix(2)
>> sage: H.save('hecke_matrix')
>>
>> and the results are interesting:
>>
>> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
>> 30 27.98 ok
>> 64 28.0 ok
>> 2762 28.02 ok
>> 2790 28.02 ok
>> 3066 28.02 ok
>> 3495 28.03 ok
>> 3540 28.03 ok
>> 292 28.04 ok
>> 437 28.04 ok
>> 941 28.04 ok
[sage-devel] Re: OS X testers needed
OS X 10.10.2 Yosemite:

$ ./sage -tp --long src/sage/libs/pari
--
All tests passed!
--
Total time for all tests: 9.1 seconds
cpu time: 14.0 seconds
cumulative wall time: 14.3 seconds

--
You received this message because you are subscribed to the Google Groups "sage-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sage-devel+unsubscr...@googlegroups.com.
To post to this group, send email to sage-devel@googlegroups.com.
Visit this group at https://groups.google.com/group/sage-devel.
For more options, visit https://groups.google.com/d/optout.
Re: [sage-devel] linbox 64-bit charpoly
I'm pretty sure the charpoly routine in Flint is much more recent than 2
years. Are you referring to a Sage implementation on top of Flint
arithmetic or something?

The only timing that I can find right at the moment had us about 5x faster
than Sage. It's not in a released version of Flint though, just in master.

Bill.

On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>
> On Tue, Sep 27, 2016 at 4:18 AM, William Stein wrote:
>
>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober wrote:
>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein wrote:
>> >>
>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober wrote:
>> >> > In the matrix_integer_dense charpoly() function, there is a note in the
>> >> > docstring which says "Linbox charpoly disabled on 64-bit machines, since
>> >> > it hangs in many cases."
>> >> >
>> >> > As far as I can tell, that is not true, in the sense that (1) I have
>> >> > 64-bit machines, and Linbox charpoly is not disabled, (2)
>> >> > charpoly(algorithm='flint') is so horribly broken that if it were ever
>> >> > used it should be quickly noticed that it is broken, and (3) I can't
>> >> > see anywhere where it is actually disabled.
>> >> >
>> >> > So I actually just submitted a patch which removes this note while
>> >> > fixing point (2). (Trac #21596).
>> >> >
>> >> > However...
>> >> >
>> >> > In some testing I'm noticing problems with charpoly(), so I'm wondering
>> >> > where that message came from, and who knows something about it.
>> >>
>> >> Do you know about "git blame", or the "blame" button when viewing any
>> >> file here: https://github.com/sagemath/sage/tree/master/src
>> >
>> > Ah, yes. Of course I know about that. And it was you!
>> >
>> > You added that message here:
>>
>> Dang... I had a bad feeling that would be the conclusion :-)
>
> Well, I'm sure you've done one or two things in the meantime that will
> allow me to forgive this one oversight.
>
>> In my defense, Linbox/FLINT have themselves changed a lot over the
>> years... We added Linbox in 2007, I think.
>
> Yes. As I said, this comment, and the design change, is ancient. In some
> limited testing, linbox tends to be faster than flint, but has very high
> variance in the timings. (I haven't actually checked flint much.) Right now
> I'm running the following code on 64 cores, which should test linbox:
>
> import time
>
> @parallel
> def test(n):
>     start = time.clock()
>     f = B.charpoly()
>     end = time.clock()
>     runtime = end - start
>     if f != g:
>         print n, 'ohno'
>         return runtime, 'ohno'
>     else:
>         return runtime, 'ok'
>
> A = load('hecke_matrix')
> A._clear_cache()
> B, denom = A._clear_denom()
> g = B.charpoly()
> B._clear_cache()
>
> import sys
>
> for result in test(range(10)):
>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>     sys.stdout.flush()
>
> where the file hecke_matrix was produced by
>
> sage: M = ModularSymbols(3633, 2, -1)
> sage: S = M.cuspidal_subspace().new_subspace()
> sage: H = S.hecke_matrix(2)
> sage: H.save('hecke_matrix')
>
> and the results are interesting:
>
> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
> 30 27.98 ok
> 64 28.0 ok
> 2762 28.02 ok
> 2790 28.02 ok
> 3066 28.02 ok
> 3495 28.03 ok
> 3540 28.03 ok
> 292 28.04 ok
> 437 28.04 ok
> 941 28.04 ok
>
> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | tail
> 817 2426.04 ok
> 1487 2466.3 ok
> 1440 2686.43 ok
> 459 2745.74 ok
> 776 2994.01 ok
> 912 3166.9 ok
> 56 3189.98 ok
> 546 3278.22 ok
> 1008 3322.74 ok
> 881 3392.73 ok
>
> jb12407@lmfdb5:~/sage-bug$ python analyze_output.py test_output3
> average time: 53.9404572616
> unfinished: [490, 523, 1009, 1132, 1274, 1319, 1589, 1726, 1955, 2019,
> 2283, 2418, 2500, 2598, 2826, 2979, 2982, 3030, 3057, 3112, 3166, 3190,
> 3199, 3210, 3273, 3310, 3358, 3401, 3407, 3434, 3481, 3487, 3534, 3546,
> 3593, 3594, 3681, 3685, 3695, 3748, 3782, 3812, 3858, 3864, 3887]
>
> There hasn't yet been an ohno, but on a similar run of 5000 tests
> computing A.charpoly() instead of B I have 1 ohno and 4 still running after
> 5 hours. (So I'm expecting an error in the morning...)
>
> I think that maybe I was getting a higher error rate in Sage 7.3. The
> current beta is using a newer linbox, so maybe it fixed something, but
> maybe it isn't quite fixed.
>
> Maybe I should use a small matrix to run more tests more quickly, but this
> came from a "real world" example.
>
>> --
>> William (http://wstein.org)
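[The analyze_output.py script itself isn't shown anywhere in the thread. Given the output format above, a plausible reconstruction (hypothetical, not the author's actual script) would parse lines of the form "<n> <seconds> ok|ohno", average the runtimes, and report any launched test index that never wrote a line, i.e. a job still running or hung:]

```python
# Hypothetical reconstruction of analyze_output.py; each finished test
# prints "<n> <seconds> ok|ohno", so anything launched but absent from
# the log is still running (or hung).

def analyze(lines, launched):
    finished, times = set(), []
    for line in lines:
        parts = line.split()
        if len(parts) >= 3:
            n, t = int(parts[0]), float(parts[1])
            finished.add(n)
            times.append(t)
    average = sum(times) / len(times) if times else 0.0
    unfinished = sorted(set(launched) - finished)
    return average, unfinished

if __name__ == '__main__':
    import sys
    with open(sys.argv[1]) as f:
        # 4000 is a guess at the number of launched tests
        average, unfinished = analyze(f, range(4000))
    print('average time:', average)
    print('unfinished:', unfinished)
```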
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Tue, Sep 27, 2016 at 7:39 PM, William Stein wrote:
> no volunteers to migrate files/build/rsync.sagemath.org... We should
> just switch to GitHub.

we can't, because there are filesize limits.

-- h
[sage-devel] Re: [sagemath-admins] trac not responding
William Stein wrote:
> On Mon, Sep 26, 2016 at 3:10 PM, William Stein wrote:
>>> On Mon, Sep 26, 2016 at 6:22 AM, Volker Braun wrote:
>>>> Yes, both the file server files.sagemath.org and buildbot
>>>> build.sagemath.org are down...
>>
>> I've done what I can and right now
>>
>>    - files.sagemath.org
>>    - rsync.sagemath.org
>>
>> seem to respond to pings. And
>>
>>    - build.sagemath.org doesn't.
>>
>> I have a brand new machine at UW running Ubuntu 16.04, with many
>> cores and tons of RAM. Please
>>
>> SOMEBODY VOLUNTEER TO move files.sagemath.org, rsync.sagemath, and
>> build.sagemath.org to run as **Docker containers** on this new
>> machine.
>>
>> ** I will definitely power off everything except this one new machine
>> one week from now. **
>
> no volunteers to migrate files/build/rsync.sagemath.org... We should
> just switch to GitHub.

Repost with a subject more on topic?

-leif
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Mon, Sep 26, 2016 at 3:10 PM, William Stein wrote:
>> On Mon, Sep 26, 2016 at 6:22 AM, Volker Braun wrote:
>>> Yes, both the file server files.sagemath.org and buildbot
>>> build.sagemath.org are down...
>
> I've done what I can and right now
>
>    - files.sagemath.org
>    - rsync.sagemath.org
>
> seem to respond to pings. And
>
>    - build.sagemath.org doesn't.
>
> I have a brand new machine at UW running Ubuntu 16.04, with many
> cores and tons of RAM. Please
>
> SOMEBODY VOLUNTEER TO move files.sagemath.org, rsync.sagemath, and
> build.sagemath.org to run as **Docker containers** on this new
> machine.
>
> ** I will definitely power off everything except this one new machine
> one week from now. **

no volunteers to migrate files/build/rsync.sagemath.org... We should
just switch to GitHub.

William
Re: [sage-devel] Re: openblas segfault?
On Monday, September 26, 2016 at 11:23:49 PM UTC+2, Jonathan Bober wrote:
>
> On Mon, Sep 26, 2016 at 10:10 PM, Jean-Pierre Flori wrote:
>
>>> I suspect that perhaps the copy I have that works is working because I
>>> built it as sage 7.3 at some point with SAGE_ATLAS_LIB set and then rebuilt
>>> it on the develop branch, which didn't get rid of the atlas symlinks that
>>> were already set up. So maybe it isn't actually using openBLAS.
>>
>> SAGE_ATLAS_LIB is just used for ATLAS, not OpenBlas.
>
> 1. So does that mean that on a clean build of the current development
> branch (or a recent enough beta) SAGE_ATLAS_LIB has, by default, no effect?

Yes.

> 2. Either way, I suspect that doesn't necessarily mean that nothing
> strange happens if I build Sage 7.3 (with SAGE_ATLAS_LIB set) and then do a
> 'git checkout develop', then build again without first doing a distclean.

Yes. I'm not exactly sure what situation you end up in when going this way, because:

* at 7.3, you ran configure/make, which set up ATLAS as the blas provider, built it, and linked everything to it
* when checking out develop, openblas became the default, but I'm not sure this actually changed anything if configure was not run again before make...

The best way to know for sure is to see if the libraries depending on blas got rebuilt (in particular, atlas and openblas do not provide binary compatible libraries; you have to link to libs with different names). Maybe Jeroen or Volker have a better idea of what situation you end up in. But if you run "make distclean" then you can be sure that openblas will be built and linked to...
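[For anyone wanting the clean path, the sequence would presumably look something like the following. This is an illustrative sketch assembled from the advice above, not commands taken from the thread, and exact behavior depends on the Sage version:]

```shell
# Illustrative: switching BLAS providers cleanly after an ATLAS-era build,
# run from the Sage source tree. "make distclean" wipes the old
# ATLAS-linked libraries so that configure/make rebuild against openblas.
unset SAGE_ATLAS_LIB    # ignored on the openblas-based develop branch anyway
git checkout develop
make distclean          # removes the previous build entirely
make                    # re-runs configure; openblas is built and linked
```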
Re: [sage-devel] Re: [sagemath-admins] trac not responding
On Monday, September 26, 2016 at 12:02:36 PM UTC+2, Dima Pasechnik wrote:
>
> On Monday, September 26, 2016 at 9:04:08 AM UTC, Volker Braun wrote:
>>
>> Somebody killed unauthorized git:// over the weekend... incorrect
>> firewall rule?
>
> It was William, I suppose.
> Actually, disabling anonymous git:// would perhaps help to reduce server
> load, without real functionality problems for development...

It does. At least for me. I already manage too many ssh keys and I found it
great to be able to fetch from the trac git without setting everything up...

>> On Monday, September 26, 2016 at 10:52:13 AM UTC+2, Harald Schilly wrote:
>>>
>>> On Mon, Sep 26, 2016 at 10:16 AM, Dima Pasechnik wrote:
>>> > this really has to be documented properly.
>>>
>>> just added to your ticket a comment, repeating it here:
>>> it's not only the git-trac page, but also the "the hard way" page:
>>> http://doc.sagemath.org/html/en/developer/manual_git.html#the-trac-server
>>> where git:// is mentioned.
>>> it's probably best to condense all this down to a single minimal case
>>> that covers the full development setup with ssh keys.
>>>
>>> -- h