Disappointed. That's about as controversial as the Pope being a Catholic.
Bill.
On Wednesday, 17 August 2016 07:02:16 UTC+2, William wrote:
>
>
> http://ask.sagemath.org/question/34442/can-i-create-commercial-software-using-sagemath
>
>
> I put: "ANSWER: It depends on what you mean by "commerci
I'm pretty sure the charpoly routine in Flint is much more recent than 2
years. Are you referring to a Sage implementation on top of Flint
arithmetic or something?
The only timing that I can find right at the moment had us about 5x faster
than Sage. It's not in a released version of Flint though.
On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote:
>
> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
>
>> I'm pretty sure the charpoly routine in Flint is much more recent than 2
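For anyone who wants to reproduce the Sage side of such a timing, a minimal
sketch (the matrix size and entry bounds are my own illustrative choices,
not the ones behind the 5x figure):

sage: A = random_matrix(ZZ, 100, x=-10, y=10)
sage: %time p = A.charpoly()   # note: Sage caches the result, so time the first call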
You don't need to post all your code, just a small example that
demonstrates the problem you are experiencing.
If your computation is using half the memory on the machine, the solution
is likely going to be to find a way to make it use less memory or to get a
machine with more memory.
The beh
projects. Hang tight.
Bill.
On Wednesday, 28 September 2016 14:47:31 UTC+2, leif wrote:
>
> Jonathan Bober wrote:
> > On Tue, Sep 27, 2016 at 8:34 PM, 'Bill Hart' via sage-devel
> > sage-...@googlegroups.com >> wrote:
> >
> >
> >
> >
By default, Singular uses 16 bit exponents. But it is perfectly capable of
working with exponents up to 64 bits. That will be slower of course.
I guess it isn't easy for Sage to change the relevant ring upon overflow to
one using 64 bit exponents.
I can't say whether it would be easy or hard for Sage to do that.
Note that Hans has fixed the fact that Singular wasn't reporting this as an
overflow.
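As a concrete illustration of what the overflow looks like from the Sage
side (a minimal sketch, assuming a Singular-backed multivariate ring; the
exact exception text may vary between versions):

sage: R.<x,y> = PolynomialRing(QQ)   # multivariate, so Singular is used
sage: x^(2^16)                       # exceeds the default 16 bit exponent bound
# expected to raise an OverflowError reporting an exponent overflow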
On Sunday, 9 October 2016 17:35:57 UTC+2, Bill Hart wrote:
>
> By default, Singular uses 16 bit exponents. But it is perfectly capable of
> working with exponents up to 64 bits. That will be slower of course.
>
On Sunday, 9 October 2016 18:08:29 UTC+2, Dima Pasechnik wrote:
>
>
>
> On Sunday, October 9, 2016 at 3:35:57 PM UTC, Bill Hart wrote:
>>
>> By default, Singular uses 16 bit exponents. But it is perfectly capable
>> of working with exponents up to 64 bits. That will be slower of course.
>>
>> wh
On Monday, 10 October 2016 12:31:25 UTC+2, Dima Pasechnik wrote:
>
>
>
> On Sunday, October 9, 2016 at 4:48:31 PM UTC, Bill Hart wrote:
>>
>>
>>
>> On Sunday, 9 October 2016 18:08:29 UTC+2, Dima Pasechnik wrote:
>>>
>>>
>>>
>>> On Sunday, October 9, 2016 at 3:35:57 PM UTC, Bill Hart wrote:
>
On Tuesday, 11 October 2016 09:33:57 UTC+2, Jean-Pierre Flori wrote:
>
> Yes, it is a feature of the Singular 4 update that Singular and Sage work
> by default with 16 bit exponents on both 32 and 64 bit platforms.
> If only all of you had read carefully the 543 comments of the update
>
On Tuesday, 11 October 2016 15:18:26 UTC+2, Dima Pasechnik wrote:
>
>
>
> On Tuesday, October 11, 2016 at 4:47:23 AM UTC, Bill Hart wrote:
>>
>>
>>
>> On Monday, 10 October 2016 12:31:25 UTC+2, Dima Pasechnik wrote:
>>>
>>>
>>>
>>> On Sunday, October 9, 2016 at 4:48:31 PM UTC, Bill Hart wrote:
>>
>
>
>
>>
>>>
>>> This should create the polynomial x, then try to raise it to the power
>>> of 2^30, which is about a billion I think.
>>>
>>> Along the way it will use the FFT, which is a bit of a memory hog.
>>>
>>> One day we ought to fix the powering code to handle monomials
>>> separately
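The monomial special case suggested in that quote could look roughly like
this (a sketch in Sage/Python with a hypothetical helper name; a real fix
would live inside the powering code itself):

def fast_pow(p, n):
    R = p.parent()
    if p.number_of_terms() == 1:
        # single term c*x1^e1*...*xk^ek: scale the exponents directly,
        # bypassing the general (FFT-backed) powering entirely
        (exps, c), = p.dict().items()
        return R({tuple(n * e for e in exps): c**n})
    return p ** n   # general case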
Hans does seem to fix most bugs that are reported unless they require
extensive rewriting or aren't considered bugs. It looks like this code was
written with the expectation that it would be maintained, so I'd just
report it to him [1].
Bill.
[1] https://www.singular.uni-kl.de/index.php/singul
Are you sure you need an account on the Trac to report it? The HTTP version
at least seems to accept essentially anonymous input.
Alternatively, I guess JP's suggestion should work fine. It might be even
better if some discussion is needed about what interface NTL should provide.
Bill.
On Tues
What are P and Q here? The polynomials to be factored?
Magma takes no appreciable time at all to factor these. It also finds the
factors of their product in less than 3s with 32MB of memory usage.
Bill.
On Sunday, 30 October 2016 19:21:24 UTC+1, parisse wrote:
>
> Unless I'm mistaken, the polynomi
On Friday, 28 October 2016 18:44:09 UTC+2, Dima Pasechnik wrote:
>
> 5 variables and degree 100 is really, really huge. Especially over QQ, the
> coefficients of
> polynomials will just totally blow up.
> In fact, 5 variables and degree 10 might still be quite hard, in
> particular over QQ or othe
On Thursday, 3 November 2016 23:42:27 UTC+1, Dima Pasechnik wrote:
>
> Are there open-source implementations of this available?
>
Pari might be a good place to look. Otherwise, I doubt it. If Bernard
Parisse hasn't done it, it probably doesn't exist.
I remember one of the Maple people gettin
On Friday, 4 November 2016 13:39:39 UTC+1, Bill Hart wrote:
>
>
>
> On Thursday, 3 November 2016 23:42:27 UTC+1, Dima Pasechnik wrote:
>>
>> Are there open-source implementations of this available?
>>
>
> Pari might be a good place to look. Otherwise, I doubt it. If Bernard
> Parisse hasn't done
On Friday, 4 November 2016 13:50:55 UTC+1, Jeroen Demeyer wrote:
>
> On 2016-11-04 13:41, 'Bill Hart' via sage-devel wrote:
> > Sorry just 1s after posting this, I remembered Pari doesn't have
> > multivariate factoring.
>
> PARI doesn't really have
BTW, I tried taking the GCD of the two polynomials below in Pari and it
just runs out of memory and takes ages. So it doesn't look like it does
much that is useful for multivariates.
On the other hand, their univariate factoring is quite good (better than
mine in some cases, at present).
f = z
>
>
> For use in multivariate GCD over Z, I eventually want to implement Soo
> Go's algorithm [1], and over fields, probably a highly optimised Zippel.
> For more generic GCD I can't find anything better than the subresultant
> pseudoremainder algorithm of Collins. If anyone knows of anything b
By the way, that (Masters) thesis is absolutely superbly written. I really
wish all mathematics was written like this.
Bill.
On Friday, 4 November 2016 14:17:01 UTC+1, Bill Hart wrote:
>
>
>> For use in multivariate GCD over Z, I eventually want to implement Soo
>> Go's algorithm [1], and over
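Since the subresultant pseudoremainder algorithm keeps coming up in this
thread, here is a minimal univariate sketch of it over ZZ in Python
(coefficient lists, lowest degree first; content handling is omitted and
the helper names are my own):

def deg(f): return len(f) - 1

def prem(f, g):
    # pseudo-remainder: lc(g)^(deg f - deg g + 1) * f reduced modulo g,
    # computed entirely over ZZ
    r, dg, lg = f[:], deg(g), g[-1]
    for k in range(deg(f) - dg, -1, -1):
        q = r[dg + k]
        r = [lg * c for c in r[:dg + k]]   # scale, the top term cancels
        for j in range(dg):
            r[k + j] -= q * g[j]
    while r and r[-1] == 0:
        r.pop()
    return r

def subres_gcd(f, g):
    # Collins-style subresultant PRS; returns a gcd up to content
    a, b = (f, g) if deg(f) >= deg(g) else (g, f)
    lc, h = 1, 1
    while True:
        d = deg(a) - deg(b)
        r = prem(a, b)
        if not r:
            return b                    # e.g. [2, 2] for x^2 - 1, 2x + 2
        if deg(r) == 0:
            return [1]                  # coprime up to content
        a, b = b, [c // (lc * h**d) for c in r]   # divisions are exact
        lc = a[-1]
        h = lc**d // h**(d - 1) if d > 0 else h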
amples. But just occasionally, really good ideas
turn up in papers by people who don't know how to use a computer, so this
isn't a winning strategy.
Bill.
>
> Anyway, sorry for the rantish mode, and good luck :)
>
> Francesco.
>
>
>
>
>
> On 4 N
On Friday, 4 November 2016 14:44:09 UTC+1, bluescarni wrote:
>
> On 4 November 2016 at 14:33, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
>>
>> There are many completely incorrect published algorithms for GCD, even in
>> the univariate case.
Sure, if you change the order of the variables it will finish in no time
using PSR.
On Friday, 4 November 2016 17:27:27 UTC+1, Martin R wrote:
>
> FriCAS does this in no time:
>
> sage: R.<x,y,z,t> = PolynomialRing(QQ)
>
> sage: f = z^40*y^6*x^2100 + 2*t^15*z^53*y^9*x^2078 + z^40*y^7*x^2003 +
> z^40*y^6*
On Friday, 4 November 2016 17:38:17 UTC+1, Bill Hart wrote:
>
> Sure, if you change the order of the variables it will finish in no time
> using PSR.
>
Sorry, I should have said that x has to be the main variable. I could
design some polys for which the timing didn't depend on the main variable.
Actually, I've now checked and it looks like FriCAS uses PSR, which will
take forever on this problem if you have the variable ordering with x the
main variable. Of course I'm not terribly familiar with the source code
layout of FriCAS, so I could be wrong here.
Bill.
On Friday, 4 November 201
I implemented a multivariate PSR GCD algorithm in Julia and now the
timings are as follows:
* if z or x is the main variable, it takes a long time (too long to wait
for)
* if t or y is the main variable it takes 0.1 - 0.2s on average, depending
on the exact ordering, with the one exception be
Thanks very much for writing up this brief note. I'm collecting algorithms
at the moment, for an eventual assault on multivariate factoring. This may
be useful!
Bill.
On Tuesday, 8 November 2016 21:03:01 UTC+1, parisse wrote:
>
> I have improved the sparse multivariate factorization algorithm i
I assume you are using the modular algorithm to remove the final lot of
content at the end of the psr algorithm. Otherwise the algorithm takes
quite a long time, since even if we remove the known factors of the content
along the way, as specified by the algorithm, the result is far from
primitive.
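For concreteness, removing the content with respect to a main variable can
be sketched in Sage like this (my own toy example, not the modular
algorithm referred to above):

sage: R.<x,y> = PolynomialRing(ZZ)
sage: f = 2*y*x^2 + 4*y^2*x + 6*y
sage: cont = gcd(f.polynomial(x).coefficients())   # content w.r.t. x: 2*y
sage: f // cont                                    # primitive part: x^2 + 2*x*y + 3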
On Friday, 11 November 2016 21:00:49 UTC+1, parisse wrote:
>
>
>
> Le vendredi 11 novembre 2016 07:03:46 UTC+1, Bill Hart a écrit :
>>
>> I assume you are using the modular algorithm to remove the final lot of
>> content at the end of the psr algorithm. Otherwise the algorithm takes
>> quite a
By the way, yesterday I implemented algorithm 3 in Brown's paper on psrgcd.
I didn't notice any improvement, so I abandoned it. I wasn't able to decide
whether algorithm 4 is actually an algorithm, let alone whether it would be
worth implementing, so I didn't bother with that.
Bill.
--
You re
On Saturday, 12 November 2016 07:42:53 UTC+1, parisse wrote:
>
>
>
> Le samedi 12 novembre 2016 06:50:45 UTC+1, Bill Hart a écrit :
>>
>>
>>
>> Looking how the content (188 monomials) is computed, it's done by exact
>>> divisions only (it's the leading coefficient up to trivial coefficients).
>>
On Saturday, 12 November 2016 08:14:25 UTC+1, Bill Hart wrote:
>
>
> I wonder if it is sometimes worth taking the gcd G of the leading
> coefficients of the original primitive polynomials and then taking the gcd
> of that with the leading coefficient L of the result of the psr process,
> and t
I actually tried taking linear combinations of the coefficients, to see if
that would help, after reading the suggestion in a paper. But it turned out
to be so much slower than not doing it, that I abandoned it. However, I
didn't take the size of the coefficients into account, so I certainly
wa
A colleague suggested looking at the Popov form. I didn't look at what Sage
is currently doing, so my apologies if this turns out to not be a useful
comment.
Here is a random paper on this that I found [1].
Bill.
[1] http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/issac96.pdf
On Tue
If there's anything specific I can do to help, just let me know.
On Thursday, 10 November 2016 22:26:50 UTC+1, Victor Shoup wrote:
>
> Just posted a new version. In addition to a few performance improvements,
> I've added new routines that give direct access to the underlying "limbs"
> of a ZZ,
The Sage Notebook isn't likely to work under the WSL. It's a text console
environment only. Microsoft intended it mainly to provide Linux development
tools to people, not as a way of running graphical applications.
You can probably fix the memory allocation issues though. You likely need
to in
Ah I see. I assumed it would also try to spawn a browser or something.
On the other hand, I understand some of the networking stuff is still not
completed. You can of course use wget and the like so it can't be too
broken.
Bill.
On Friday, 9 December 2016 15:15:43 UTC+1, Dima Pasechnik wrote:
Hi all,
We are seeking applicants for a two and a half year mathematical software
developer/researcher position here in Kaiserslautern, to start as soon as
possible, and certainly by March 1st 2017. This position is funded by the
European Union H2020 OpenDreamKit project.
The successful applicant
It works for me too!
On Thursday, 15 December 2016 07:40:11 UTC+1, GK wrote:
>
> Hi again, thanks to the insight. I managed to successfully run sage in
> WSL. notebook(automatic_login='False') triggered indeed a different
> reaction, but again did not work. The trick was to run bash on ubuntu on
On Tuesday, 27 December 2016 07:28:22 UTC+1, GK wrote:
>
> After some windows update, it does not work for me anymore... :(
>
> Sage starts, the notebook loads, but once I try to do anything (i.e.
> create a new worksheet or open an existing one) I get a server error (on
> the browser) and some
Why is the constant coefficient of the resultant enough to determine if the
resultant has a given root?
On Wednesday, 25 January 2017 08:09:31 UTC+1, Ralf Stephan wrote:
>
> Thanks Dima. It turns out to be absolutely essential for the implementation
> of Gosper's algorithm. I can now get WZ certifi
Hi all,
MPIR has been modified recently, and new tuning crossovers have been added.
If you have a machine that you want MPIR to run fast on, we would really
appreciate help getting tuning values for your machine. Here is how.
git clone https://github.com/wbhart/mpir
cd mpir
./configure --enable-
I can only answer one of your questions.
On Saturday, 11 February 2017 17:16:29 UTC+1, Jakob Kroeker wrote:
>
> By default, Singular uses 16 bit exponents. But it is perfectly capable of
>> working with exponents up to 64 bits. That will be slower of course.
>>
>
> How to change this? Is it runti
Apparently if you have a very recent machine, yasm may fail to build the
assembly files for your architecture. To get around this, install the
latest yasm [1] and use MPIR's --with-system-yasm option.
If your system is recent and detects as core2 or k8 or simply x86_64 or
something else obviously
Thanks very much! I'll insert these today. I'll attach the broadwell ones
to a ticket, until someone can sort out the configuration for that system.
There are probably numerous cpuid family/model pairs that correspond to
Broadwell, so we'll have to add these as they become known.
On 14 February 20
Hi all,
We have released mpir-3.0.0-alpha1 to give people the opportunity to report
any issues on Linux before we start issuing release candidates.
N.B.: this alpha release is Linux only! We will issue an alpha with
MSVC/MinGW support as soon as the Windows build files are finalised.
N.B.: when te
I asked Hans Schoenemann about this. Whilst Singular does support doing
Groebner bases over inexact fields, there is no error checking and so this
is not considered useful. It's only there for people who want to run the
computation and examine the output themselves and see if they think it is
m
Hi all,
MPIR-3.0.0-alpha2 has now been released. This adds support for MinGW and
MSVC 2013, 2015 and 2017. There are no changes to the Linux build, except
slightly better tuning for Broadwell.
We plan to start doing release candidates (for Linux, at least) early next
week, with a final release at
Hi all,
We have decided to change the requirements for the ~2.5 year mathematical
software developer job opening we have at TU Kaiserslautern.
The main changes are to the type of person we are seeking. It now says we
are interested in candidates with an interest in either:
* algebra or number
Hi all,
We have just released MPIR-3.0.0-rc1. If no issues are reported with this
release candidate by 28th Feb, we will make it the final MPIR-3.0.0 release.
The only changes since the last alpha were additional Broadwell CPUs
supported, which should only affect you if your Broadwell chip was
misdetected.
Possibly, but it is only calling Flint for gcd, presumably over Z/nZ. It
could just as easily be a bug in Sage itself, calling Flint with invalid
data.
On 21 February 2017 at 16:59, Dima Pasechnik wrote:
> Apparently it's a bug in Flint, no?
>
>
> On Tuesday, February 21, 2017 at 2:03:21 PM UTC,
The new MPIR fixes this and will be out by the 28th.
On Thursday, 23 February 2017 21:02:21 UTC+1, vdelecroix wrote:
>
> Thanks! Applying the patch it just went fine.
>
> (I might create a relevant patch for Sage but we might migrate to the
> newer mpir in preparation...)
>
> On 23/02/2017 20:3
Hi all,
We have just released MPIR-3.0.0.
http://mpir.org/
Note that you now need to have the latest yasm to build MPIR.
http://yasm.tortall.net/
To build yasm, download the tarball:
./configure
make
To test MPIR, download the tarball:
./configure --enable-gmpcompat --with-yasm=/path_to_yas
Because it is "divisibility test with quotient", not "divisibility test".
It is equivalent to calling Magma's "IsDivisibleBy" with two return
arguments.
On Monday, 10 July 2017 09:34:39 UTC+2, Ralf Stephan wrote:
>
> On Monday, July 10, 2017 at 8:55:23 AM UTC+2, vdelecroix wrote:
>>
>> He was ce
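In Sage terms, the distinction is roughly the following (a sketch;
IsDivisibleBy is Magma's name, the Sage phrasing is my own):

sage: R.<x,y> = PolynomialRing(QQ)
sage: f = (x + y)^2 * (x - y); g = x + y
sage: q, r = f.quo_rem(g)   # divisibility test *with* quotient
sage: r == 0, q
(True, x^2 - y^2)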
7.6
On Monday, 10 July 2017 11:56:32 UTC+2, vdelecroix wrote:
>
> On 10/07/2017 09:34, Ralf Stephan wrote:
> > On Monday, July 10, 2017 at 8:55:23 AM UTC+2, vdelecroix wrote:
> >>
> >> He was certainly not using the awfully slow symbolic ring
> >>
> >
> > Then his slow timings for e.g. "Divi
The reason that I required the quotient as well in the divisibility
benchmark was that Magma does the n = 20 dense case in 0.15s otherwise, and
I don't believe it is possible to do it that fast if you aren't doing it
heuristically, as I explained in the blog post. Therefore, all the systems
tim
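One cheap heuristic of the kind alluded to (my own sketch, not necessarily
what any of the benchmarked systems actually do) is to evaluate both
polynomials at random integer points and test integer divisibility, which
rejects most non-divisors without any polynomial division:

import random

def probably_divides(g, f, trials=3):
    # assumes integer coefficients; a False answer is always correct,
    # a True answer still needs an exact division to confirm
    gens = f.parent().gens()
    for _ in range(trials):
        pt = [random.randint(2, 10**6) for _ in gens]
        a, b = f(*pt), g(*pt)
        if b == 0:
            continue            # unlucky point, try another
        if a % b != 0:
            return False        # definitely does not divide
    return True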
On Monday, 10 July 2017 13:31:26 UTC+2, vdelecroix wrote:
>
> On 10/07/2017 12:48, mmarco wrote:
> > It is surprising the difference between singular and Sage, considering
> that
> > Sage mostly relies on Singular for multivariate polynomial arithmetic.
> In
> > the case of divisions, I susp
Switching to Singular for working over ZZ seems like a good idea. I timed
it over ZZ and QQ in Singular and don't notice much difference in the
timings.
I'm sure the following is obvious, but let me mention it just in case. In
the blog post I explain that some systems do not explicitly provide
On Monday, 10 July 2017 17:05:10 UTC+2, vdelecroix wrote:
>
> On 10/07/2017 16:36, 'Bill Hart' via sage-devel wrote:
> >> BTW, it would be good to have them in the post!
> >>
> >
> > It already says in the post that for systems that didn't
On Tuesday, 11 July 2017 12:20:15 UTC+2, Simon King wrote:
>
> Hi,
>
> On 2017-07-10, mmarco > wrote:
> > It is surprising the difference between singular and Sage, considering
> that
> > Sage mostly relies on Singular for multivariate polynomial arithmetic.
>
> Note that Singular is optimis
On Tuesday, 11 July 2017 20:26:51 UTC+2, Johan S. H. Rosenkilde wrote:
>
> > That's absolutely correct, and a point I make in my blog. One heuristic
> is
> > that GBs tend to have a large number of very small polynomials and so
> one
> > can dispatch larger arithmetic operations to a differen
Beware, Bernard Parisse has just helped me track down why the Flint timings
for the sparse division only benchmark looked so ridiculously low. It turns
out that due to an accident of interfacing between Nemo and Flint, it was
using reflected lexicographical ordering instead of true lexicographical
ordering.
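For the record, the two orderings can be set up side by side in Sage
('invlex' being Sage's closest analogue of the reflected ordering; a small
sketch):

sage: R1.<x,y,z> = PolynomialRing(QQ, order='lex')
sage: R2.<x,y,z> = PolynomialRing(QQ, order='invlex')
sage: R1.term_order(); R2.term_order()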
> ...at the moment being on fast low-level primitives
> (add/sub/mul/addmul etc.).
>
> Cheers,
>
> Francesco.
>
> On 12 July 2017 at 15:13, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
>
>> Beware, Bernard Parisse has just helped me
> I am not a
> low-level/assembly type of guy. I just tried many different variations and
> picked the one that performed better.
>
> Cheers,
>
> Francesco.
>
> On 13 July 2017 at 12:25, 'Bill Hart' via sage-devel <
> sage-...@googlegroups.com > wrote:
Alex Best pointed out this morning that something similar came up for the
GMP guys a while back. Apparently something to do with jump tables. That is
probably affecting our code here [1].
We'll look into it. Unfortunately, in the mean time, the best one can
really do is delete the affected file
It's certainly not a generic problem affecting Mac [0]. It's Skylake
specific: something their linker doesn't like, even though the same code
works on that architecture on Linux.
Bill.
[0]
https://travis-ci.org/wbhart/mpir/builds/252901954?utm_source=github_status&utm_medium=notification
On Wednesday, 19 July 2
The only workaround I'm currently aware of is to remove the offending
addmul_1.asm file. It is to do with our use of jump tables.
On Monday, 31 July 2017 19:13:57 UTC+2, Michael Frey wrote:
>
> Hi François,
>
> Thank you for your help.
>
> I tried as you suggested from a clean source. Unfortunat
Thanks for the quick reply Bill.
Your comments make more sense (to me) now, since I see that your concerns
are more about the way categories are handled in Sage, or at least how it
interacts with the parent/element system.
At this stage there is no category theory in Nemo (it wouldn't be
appropri
On 1 October 2015 at 20:23, William Stein wrote:
By the way, look at how coercion "works" in Magma:
>
> $ magma
> Magma V2.18-5 Thu Oct 1 2015 16:59:12 on compute3-us [Seed =
> 629019987]
> Type ? for help. Type -D to quit.
> > R<x> := PolynomialRing(IntegerRing());
> > x + 1/2;
>
> >> x + 1/
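For contrast, Sage's coercion framework pushes the same expression into a
polynomial ring over QQ instead (a quick check; output from memory):

sage: R.<x> = PolynomialRing(ZZ)
sage: (x + 1/2).parent()
Univariate Polynomial Ring in x over Rational Field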
Hi all,
A while ago the Sage project reported a bug in basecase division in MPIR on
32 bit machines [1].
Since then we found a rare bug in basecase division on a 64 bit machine,
likely caused by the same issue.
Fortunately most modern x86_64 machines were tuned to not use the broken
implementation.
Not in the published divide-and-conquer algorithm, no.
There was a bug in the implementation of the new basecase algorithm, which
was not published, nor very important. On most machines it was switched
off. And even on the machines where it was used (with the exception of
Itanium where it seemed t
Brian and I are now happy with the state of things for 2.7.1, so I will put
up the release when I get the chance, possibly over the weekend, Monday at
the latest (barring unforeseen interruptions).
Bill.
On 13 November 2015 at 16:08, Bill Hart wrote:
> Hi all,
>
> A while ago the Sage project r
On 13 November 2015 at 19:10, Jean-Pierre Flori wrote:
>
>
> On Friday, November 13, 2015 at 9:40:27 AM UTC-8, Bill Hart wrote:
>>
>> Not in the published divide-and-conquer algorithm, no.
>>
>> There was a bug in the implementation of the new basecase algorithm,
>> which was not published, nor v
On 13 November 2015 at 21:41, Jean-Pierre Flori wrote:
>
>
> On Friday, November 13, 2015 at 12:34:41 PM UTC-8, Bill Hart wrote:
>>
>>
>>
>> On 13 November 2015 at 19:10, Jean-Pierre Flori wrote:
>>
>>>
>>>
>>> On Friday, November 13, 2015 at 9:40:27 AM UTC-8, Bill Hart wrote:
Not in t
Incidentally, whilst removing the basecase implementation I found and
removed a bug in the asymptotically fast division code. Or rather, removing
the basecase implementation also removed that bug. However, this could not
have been what caused the bug we observed in the basecase code.
Bill.
On 13
Hi all,
We have released MPIR-2.7.1 to fix some rare bugs and a few other issues:
* Fix bug in Karatsuba assembly code on Windows
* Fix bug in basecase division code
* Add some missing information in AUTHORS
* Travis continuous integration
* Add building of tests to command line build for Windows
That looks like a stupid bug on our part.
I think the best way to fix this is to correct the .so version number in
our repo since the current numbering is useless.
Bill.
On 19 November 2015 at 14:28, Jeroen Demeyer wrote:
> The .so version number changed from
>
> libmpir.so.16.0.0 (MPIR 2.7.0
Hmm, actually, I have
LIBGMP_LT_CURRENT = 16
LIBGMP_LT_REVISION = 1
LIBGMP_LT_AGE = 6
I don't see where the 10 is coming from.
Bill.
On 19 November 2015 at 15:03, Bill Hart wrote:
> That looks like a stupid bug on our part.
>
> I think the best way to fix this is to correct the .so version nu
You can understand my confusion then, as in MPIR 2.7.0 we had:
LIBGMP_LT_CURRENT = 16
LIBGMP_LT_REVISION = 0
LIBGMP_LT_AGE = 6
and in MPIR-2.7.1 we have:
LIBGMP_LT_CURRENT = 16
LIBGMP_LT_REVISION = 1
LIBGMP_LT_AGE = 6
leading to .so version numbers 16.0.0 and 10.6.1.
So just to get this straight.
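A possible explanation of where the 10 came from, assuming libtool's
standard versioning rule (my reconstruction; it is not spelled out in the
thread): the installed filename carries (CURRENT - AGE).(AGE).(REVISION), so

current, revision, age = 16, 1, 6
print("libmpir.so.%d.%d.%d" % (current - age, age, revision))
# -> libmpir.so.10.6.1

The same rule is consistent with the 22 : 6 : 1 proposal below coming out
as 16.6.1.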
We have actually hit this problem ourselves and are quite frankly mystified
as to where it is picking up a broken MPIR or GMP from...
Anyway, I propose the following solution. If we change the numbers in
Makefile.am to 22 : 6 : 1 we should get a library version number 16.6.1
which should exceed all earlier ones.
Well, 30s ago I reissued 2.7.1.
I will now try to issue 2.7.2. But it may cause additional issues. Changing
the version number is a massively complicated job. Another massively
entertaining facepalm.
Bill.
On 20 November 2015 at 19:23, Jeroen Demeyer wrote:
> On 2015-11-20 17:06, 'Bill Hart' v
Hi all,
We discovered that MPIR-2.7.1 caused many systems to complain of missing
symbols (and other systems to be silently broken).
The reason was MPIR-2.7.0 had a broken .so version number, so the linker
considered MPIR-2.7.1 to be a downgrade compared to MPIR-2.7.0 (or other
versions of GMP etc
Hi all,
It is with great pleasure that we release Nemo-0.4.
Nemo is a computer algebra package written in the Julia programming
language, with the eventual aim of covering commutative algebra, number
theory and group theory.
For instructions on getting and using Nemo-0.4, including full
document
No there is no upstream repository for this.
We had a couple of GSoC students working on the quadratic sieve this year,
but neither finished the project.
We do have someone working on parallel linear algebra for the QS, and so at
some point this will initiate some more action in cleaning up the F
Hi all,
Today, Alex Best started in Kaiserslautern on this OpenDreamKit project.
He's working on support in MPIR for the latest Intel and AMD processors.
However, we have very little access to anything later than about 2010.
Does anyone have more recent Intel and AMD machines (linux servers) that
Nathann, you started this thread with words like, "help william earn
more $$ than he has now".
You are making a claim here, that William is personally making money for
himself from Sage. If you make the claim, it is up to you to prove the
claim. So, what is your evidence that William is now
Nathann, yes, I think William deserves it. Sage would not exist without
William, everyone knows this. That was certainly my point. But I've also
known William for years and he has spent a lot of time searching for ways
to pay Open Source developers. For example, he and I submitted a proposal
to
> to
> pay them equally.
>
> And yes, again, it is legal. Look at the newspapers, and you will get
> a good idea of the difference between legal and right.
>
> I don't think I ever said anything different than that.
>
> Nathann
>
> On 16 February 2016 at
Oh, but I thought this whole thread was all about publicly complaining about
people's CVs and where they are getting money from. So now
you don't like that idea? Curious indeed.
So you say that actually this is all about who you take money from. So
William is taking money from you? How s
On Tuesday, 16 February 2016 14:00:39 UTC+1, Nathann Cohen wrote:
> If I hire people to write research papers and stamp my name on it, and
> claim that it is my work, I would be a fraud.
>
> If I write a paper with coauthors and remove their name at the last
> minute, becoming the only one
On Tuesday, 16 February 2016 14:15:08 UTC+1, Bill Hart wrote:
>
>
> Anyway, any potential returns for my employer from continuing this
> discussion are now diminishing well beyond the point of being reasonable.
> So that's my last comment on this subject.
>
>
Just a legal disclaimer because I
Nathann,
Point taken, loud and clear. But consider boycotting the forum for a while
under protest and returning after a break.
Why sacrifice the things that you enjoy, just to make a point to others.
I'm pretty sure there are lots of people that value your contributions in
this community greatly.
Nathann, I'm glad you posted this so others can see your perspective on
things. It will no doubt be eye opening for some people who just don't
understand where you are coming from.
But as I said, it is your opinion that William is coming in and making
decisions that affect Sage without consultin
Hi Ибкс,
That sounds interesting. Is your source code available somewhere for people
to look at?
What method did you use to compute Weber polynomials? Does it use a
guaranteed precision? How do you guarantee the output is correct?
Bill.
On Saturday, 5 March 2016 13:22:09 UTC+1, Ибкс Спбпу wro
On Saturday, 5 March 2016 18:11:20 UTC+1, William wrote:
>
> Relevant: Sage has a Weber class polynomial database that David Kohel
> wrote:
>
> https://www.math.aau.at/user/cheuberg/sage/doc/6.10.beta3/en/reference/databases/sage/databases/db_class_polynomials.html#sage.databases.db_class_p
It looks like this thing is going to be pretty solid [1]. I'm actually
incredibly excited about it, more than I have been about Windows for about
20 years.
Bill.
[1] https://channel9.msdn.com/Events/Build/2016/P488
On Wednesday, 30 March 2016 18:51:50 UTC+2, Mike Hansen wrote:
>
> It's looking
It's almost a year since I predicted Microsoft would do something like
this. I have an email from September last year which was a followup email
some months after I told a colleague something like this would happen. If I
recall our conversation correctly, I was pretty specific about being able
Hi all,
Since February this year, Alex Best has been working for us in
Kaiserslautern on a new superoptimiser for assembly language in MPIR. The
superoptimiser is now working beautifully, but Alex has managed to get
himself a PhD position and will be leaving us at the end of July this year.
Th