An example of a simple webapp using the giac wasm kernel is available here:
https://www-fourier.univ-grenoble-alpes.fr/~parisse/giacjs/
On Wednesday, May 1, 2024 at 12:08:2
There is no universal answer; it depends on the matrix. For some,
Gauss-Bareiss will perform well; for some others, Lagrange interpolation
will, and you can guess which from total degree and partial degree bounds. For
some matrices (sparse ones) minor expansion will perform better (compute
first all
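To make the trade-off concrete, here is a minimal sketch of fraction-free Gauss-Bareiss elimination on an integer matrix (my own toy code, not giac's implementation; for polynomial entries the integer division below becomes exact polynomial division):

```python
def bareiss_det(m):
    """Determinant by fraction-free Gauss-Bareiss elimination.

    Exact over the integers: every division below is known to be exact,
    so intermediate entries stay integral and of moderate size.
    """
    a = [row[:] for row in m]
    n = len(a)
    sign, prev = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:
            # find a nonzero pivot; a row swap flips the determinant's sign
            for i in range(k + 1, n):
                if a[i][k] != 0:
                    a[k], a[i] = a[i], a[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # prev (the previous pivot) divides this 2x2 minor exactly
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[-1][-1]
```

The point of the interpolation alternative is that it avoids this elimination entirely when degree bounds are small; which strategy wins depends on the matrix, as said above.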
I believe that wasm is the future, because you don't have to install
anything and computations are done in the browser client; they do not
require resources from a server (except for the initial download of the
wasm file). Giac/Xcas has done that for many years now (initially it was a
request
the number of variables (whatever it is) at
> the beginning and returning an error message, rather than me checking with
> gdb, which I've already done!
>
> agape
> brent
>
>
> On Friday, June 9, 2023 at 6:37:44 AM UTC-4 parisse wrote:
>
>> There is code for up to 64
There is code for up to 64 variables. I'm not sure for more. Can you send
your input? That way I can check with gdb.
On Monday, June 5, 2023 at 9:19:04 PM UTC+2 Brent W. Baccala wrote:
> Hi -
>
> I don't think giac can handle more than 15 variables in a Gröbner basis
> calculation.
>
> This
Maple seems (much) slower than giac on this example
giac:
0>> int(floor(x)^2,x=0..10000);
333283335000
// Time 0.2
maple:
int(floor(x)^2,x=0..10000);
memory used=142.1MB, alloc=150.1MB, time=1.12
memory used=230.5MB, alloc=182.1MB, time=1.81
The antiderivative returned by giac (and by maple) for floor(x)^2 is only
piecewise continuous, and this is expected. But both CAS implement
additional code to check for non-continuous antiderivatives (in simple
situations for giac), and they correctly evaluate
integrate(floor(x)^2,x,0,3/2) to
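Such results are easy to sanity-check: on [k, k+1) the integrand floor(x)^2 is the constant k^2, which gives a simple closed form for the definite integral. A small Python helper (mine, not giac code):

```python
import math

def int_floor_sq(b):
    """Exact value of integral_0^b floor(x)^2 dx for b >= 0.

    On [k, k+1) the integrand equals k^2, so the integral is
    sum_{k=0}^{m-1} k^2 + m^2 * (b - m) with m = floor(b), and
    sum_{k=0}^{m-1} k^2 = (m - 1) * m * (2*m - 1) / 6.
    """
    m = math.floor(b)
    return (m - 1) * m * (2 * m - 1) // 6 + m * m * (b - m)
```

Here `int_floor_sq(1.5)` gives 0.5, the continuity-corrected value for the upper bound 3/2, and `int_floor_sq(10000)` gives 333283335000, the value quoted in the timing comparison above.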
our
> current
> favorite language on the precedence of operators. Or what the operators
> meant.
> Or what happens when integers "overflow".
>
> Consider learning Lisp.
>
>
>
> Parts of Macsyma/Maxima are more like 60 years old. Almost as old as Lisp
> (
:51 UTC+1, dim...@gmail.com wrote:
> On Thu, Jan 21, 2021 at 8:04 PM parisse
> wrote:
> >
> > Well, searching for "lisp infix notation" is not very convincing (unless
> I missed something?), compared to built-in infix support. You might prefer
> Lisp to C/C++,
that one can actually
write a CAS in C/C++ that compares very well with the Lisp-based CAS
Maxima.
On Thursday, January 21, 2021 at 18:07:14 UTC+1, dim...@gmail.com wrote:
> On Wed, Jan 20, 2021 at 7:13 PM parisse
> wrote:
> >
> > As the author of a CAS, I can state that you need much m
As the author of a CAS, I can state that you need much more than 2 weeks to
learn a programming language to make a CAS, and much, much more if you want
it to be fast. Life is short, therefore choose your programming language
carefully! I don't regret my choice of C (+ C++ STL and operator
It's probably easier just to add
AC_CHECK_LIB(cliquer,main)
On Friday, November 6, 2020 at 11:18:13 UTC+1, dim...@gmail.com wrote:
> On Thu, Nov 5, 2020 at 8:59 PM Dima Pasechnik wrote:
> >
> >
> >
> > On Thu, 5 Nov 2020, 19:23 parisse, wrote:
> >>
&g
When compiling giac with nauty, I install nauty by hand, statically
linked; nothing more than libnauty.a is required.
In configure.ac, the check is done by
AC_CHECK_LIB(nauty,main)
AC_CHECK_HEADERS(nauty/naututil.h)
I can of course add another check if it's required to compile with nauty,
On Friday, March 29, 2019 at 19:25:05 UTC+1, rpea...@gmail.com wrote:
>
> This is interesting. One thing we discovered with Maple, which I think
>> is known by others, is that when degree drops occur in the modular
>> computations, you can stop F4 and output the new polynomials that have
>>
computation (10 days real time).
For more details, see this report
<https://hal.archives-ouvertes.fr/hal-02081648>
Giac source code:
https://www-fourier.ujf-grenoble.fr/~parisse/debian/dists/stable/main/source/giac_1.5.0-49.tar.gz
--
You received this message because you are subscribed to the
On Tuesday, January 15, 2019 at 07:32:31 UTC+1, Dima Pasechnik wrote:
>
> On Fri, Jan 11, 2019 at 7:43 PM parisse > wrote:
> >
> > The latest version of giac is 1.5.0-35 (1.4.9-45 is exactly 1 year old
> now). Some compilation bugs were reported by Dima on Xcas forum and
The latest version of giac is 1.5.0-35 (1.4.9-45 is exactly 1 year old
now). Some compilation bugs were reported by Dima on Xcas forum and fixed
some weeks ago.
On Saturday, December 15, 2018 at 20:57:02 UTC+1, Bill Hart wrote:
>
>
>
> And even if giac did all that, it is one of many projects doing
> multivariate polynomial arithmetic in Europe. There's also Trip, Piranha,
> Factory, Pari/GP, Gap. I really don't think it is a valid argument that
> just
Bill, my feeling is that part of ODK money was used to improve multivariate
polynomial arithmetic implementations precisely in a domain where Giac
behaves well (and maybe I should emphasize that unlike almost all other
CAS, Giac is a library, i.e. it is interoperable with any software that can
Giac source code has been updated, with the following (much faster) timings
for gbasis computation on Q
server 32 processors Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, 64G of RAM
16 threads [CPU time, real time], 4 threads [CPU, real], 1 thread
cyclic8 : [43.37,11.25], [31.82,12.15], 26.12 (48
Got cyclic9 on Q with 16 threads in 45 minutes real time (about 6h and 20
minutes CPU time).
I made a few changes to the way parallelization is called, and got some
> progress. Now cyclic9 on Q takes a little more than 4h with 1 thread, 1h41
> real time/4h40 CPU with 6 threads (probably not
Giac on Geogebra SVN
<https://dev.geogebra.org/trac/browser/trunk/geogebra/giac/src/giac>
The giac tarball is self-contained for compiling on gnu systems. The stable
version corresponding to the latest debian packages is giac_stable.tgz
<https://www-fourier.ujf-grenoble.fr/~par
On Sunday, December 9, 2018 at 20:44:30 UTC+1, Dima Pasechnik wrote:
>
> On Sun, Dec 9, 2018 at 1:42 PM parisse > wrote:
> >
> > Efficient code does not depend on how you handle it (git, svn or
> tarballs or whatever).
>
> Efficiency of handling code does depe
Efficient code does not depend on how you handle it (git, svn, tarballs,
or whatever). And I don't think different practices are the real reason why
Giac was/is mostly ignored here.
After having done a few tests, I think I know why my code on Q is slower
with more threads (if the number of
On Saturday, December 8, 2018 at 23:44:32 UTC+1, Dima Pasechnik wrote:
>
> On Sat, Dec 8, 2018 at 5:03 PM parisse > wrote:
>
> > and even if I was, I don't want to depend from google or any company
> for something like that (the risk of IP problems is much too high
>
>
On Friday, December 7, 2018 at 12:15:56 UTC+1, Dima Pasechnik wrote:
>
> you can certainly get free cloud resources from Google, to spin out
> Linux (and not only) VMs with many cores, they have a faculty
> programme like this.
> I've been using it since Sept.
> https://cloud.google.com/edu/
On Friday, December 7, 2018 at 07:53:18 UTC+1, Markus Wageringel wrote:
>
>
> While there will be some overhead due to the conversion from and to Sage,
> it is the same in both cases. In fact, I observe similar times with the
> native Giac that is installed into the Sage environment, when
On Wednesday, December 5, 2018 at 23:44:43 UTC+1, Markus Wageringel wrote:
>
> On Saturday, November 24, 2018 at 23:11:26 UTC+1, parisse wrote:
>>
>>
>> Giac supports double revlex ordering, this is the order used by the
>> eliminate command of Giac. Geogebra has many
On Tuesday, November 27, 2018 at 12:00:16 UTC+1, Simon King wrote:
>
> Hi Bernard,
>
> On 2018-11-27, parisse > wrote:
> > I meant a more efficient elimination order like double revlex.
>
> Actually I've never heard of that. The only reference I could find with
I meant a more efficient elimination order like double revlex.
On Monday, November 26, 2018 at 22:04:56 UTC+1, Simon King wrote:
>
> Hi!
>
>
> What is your definition of "elimination order"? If I understand
> correctly, lex *is* an elimination order.
>
> Best regards,
> Simon
>
>
On Monday, November 26, 2018 at 17:16:16 UTC+1, Bill Hart wrote:
>
>
>
>
> From his recent talks, his implementation is nowadays more than
> competitive.
>
I confirm that his timings are very good: for example almost 3 times faster
than Giac for cyclic9 modular. On the other hand, the
On Friday, November 23, 2018 at 11:46:07 UTC+1, Martin Albrecht wrote:
>
> Hi,
>
> speaking of Giac (sorry, if this should rather be on sage-support or
> off-list):
>
> Can I get the degree reached during the computation and the sizes of the
> matrices considered out somehow?
>
export
On Friday, November 23, 2018 at 23:53:51 UTC+1, Markus Wageringel wrote:
>
> Thanks for the feedback everyone.
>
> On Thursday, November 22, 2018 at 09:53:43 UTC+1, parisse wrote:
>>
>> Did you make some comparisons with Giac ?
>>
>> Some benchmarks from Ro
On Thursday, November 22, 2018 at 10:11:39 UTC+1, Thierry (sage-googlesucks@xxx) wrote:
>
> Hi,
>
>
> It was on my todo list for a while too, since our implementations are very
> slow. Here "very" means "prohibitively", since some systems can not be
> solved with Sage in decent time (via
Did you make some comparisons with Giac?
Some benchmarks from Roman Pearce and my own tests, about 2 years old.
Roman used an Intel Core i5 4570 3.2 GHz with 8 GB DDR3-1600 running 64-bit
Linux (4 cores, 4 threads, 6M cache, turbo 3.2 -> 3.6GHz). I also checked
Giac on my Mac (Core i5 2.9Ghz,
On Tuesday, February 27, 2018 at 17:08:35 UTC+1, Erik Bray wrote:
>
>
> Another approach I've seen to this sort of thing that works completely
> locally is to pre-generate some data based on the range of inputs for
> the widgets, and bundle that all up so that there's no actual live
>
On Tuesday, September 26, 2017 at 10:43:10 UTC+2, Bill Hart wrote:
>
> We used to do this, and Daniel noticed that it wasn't really threadsafe.
>
It would be in my implementation, but inserting sometimes requires memory
allocation and it seems to slow things down too much. Anyway, as explained
earlier,
I found a way to get better timings by caching the index of the insertion
chain of the previous monomial. But now multi-threaded execution is slower
than single-threaded execution, most certainly because of locks during
insertion... I will probably force single-threaded sparse multiplication.
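For readers unfamiliar with the technique, the cached-insertion idea can be sketched in a few lines of Python (a toy model using a sorted list, nothing like giac's actual packed-monomial representation): within one pass over the sorted second factor, the partial products come out in increasing exponent order, so the search for the insertion point can resume from where the previous monomial landed.

```python
import bisect

def sparse_mul(p, q):
    """Multiply sparse univariate polynomials given as sorted
    (exponent, coeff) lists, caching the previous insertion index."""
    exps, coeffs = [], []
    for e1, c1 in p:
        pos = 0  # cached insertion index from the previous monomial
        for e2, c2 in q:
            e = e1 + e2
            # exponents e1 + e2 increase along q, so resume the
            # search at the cached position instead of from scratch
            pos = bisect.bisect_left(exps, e, lo=pos)
            if pos < len(exps) and exps[pos] == e:
                coeffs[pos] += c1 * c2
            else:
                exps.insert(pos, e)
                coeffs.insert(pos, c1 * c2)
    return [(e, c) for e, c in zip(exps, coeffs) if c]
```

In a multi-threaded version, each `insert` into the shared result would need a lock, which is exactly the contention described above.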
On Monday, September 25, 2017 at 18:56:26 UTC+2, Bill Hart wrote:
>
> Do you use anything special for memory management? Mickael Gastineau
> recommended jemalloc, which we haven't tried yet. I assume you expected to
> see better times for the threaded benchmarks with giac?
>
I'm not using
Hi,
Which compiler are you running?
And why not giac? flint is a little faster for basic multivariate
polynomial arithmetic on 1 thread, but giac is multithreaded and has more
advanced fast functionality like gcd, factorization, Groebner basis and
rational univariate representation.
On Sunday, September 3, 2017 at 16:06:46 UTC+2, rjf wrote:
>
> I was doing timing on the same task and found that one system
> (used for celestial mechanics) was spectacularly fast on a test just like
> this one.
> One reason was that it first changed f*(f+1) to
>
> f^2 +f
> and was clever in
FYI, this test takes a few seconds with the following giac script (6.2s on
my Mac with 1 thread):
threads:=1; n:=30;
f := symb2poly((1 + x + y + z+t)^n,[x,y,z,t]):;
time(p:=f*(f+1));
From a symbolic (calculus) point of view, 0^0 should return undef.
Otherwise you cannot do a first quick substitution if you are computing
limits.
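A classic illustration of why undef is the safe answer (my example, not from the thread): f(x) = x^(1/ln x) is identically e on (0, 1) and (1, +inf), yet naive substitution at x = 0 produces the form 0^0; if the CAS defined 0^0 = 1, a quick-substitution pass would report the limit 1 instead of e. A numeric check:

```python
import math

def f(x):
    # f(x) = x**(1/ln x); since ln f(x) = ln(x) / ln(x) = 1, f is the
    # constant e away from x = 0 and x = 1, so lim_{x->0+} f(x) = e,
    # while substituting x = 0 directly gives the indeterminate form 0**0
    return x ** (1.0 / math.log(x))

# sample points approaching 0 from the right; all values equal e
samples = [f(10.0 ** -k) for k in (2, 5, 8)]
```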
https://www-fourier.ujf-grenoble.fr/~parisse/giac/castex.html
I think that people who have never written symbolic integration algorithms
underestimate the work required (this is also true in other areas, for
example simplification, UI, etc.). I believe that the current symbolic
integration implementations are good enough whatever you choose among
Maxima, Axiom
My guess is that Mathematica added more special functions and integration
methods using them mainly for advertising, not because some researchers
needed them, otherwise some of them would probably work on this in an
open-source CAS.
About step by step, I cover some cases, for example
On Saturday, March 4, 2017 at 11:39:59 UTC+1, Dima Pasechnik wrote:
>
>
>
> On Saturday, March 4, 2017 at 10:24:19 AM UTC, parisse wrote:
>>
>>
>>
>> On Saturday, March 4, 2017 at 09:09:17 UTC+1, Dima Pasechnik wrote:
>>>
>>>
>>> Why is
On Saturday, March 4, 2017 at 09:09:17 UTC+1, Dima Pasechnik wrote:
>
>
> Why isn't xcas on Android Play store (so that the installation really goes
> as it is normally done with Android apps)?
>
Because the HTML5 version of Xcas is not an Android app. You can run it on
any web browser (it's
be able to do basic CAS stuff (simplify, derive,
integrate, plot, etc.) and a little more with CAS calculators. Running Xcas
on a smartphone is really straightforward, just open the URL
<http://www-fourier.ujf-grenoble.fr/~parisse/xcasen.html>, while installing
it locally for an exam t
On Wednesday, March 1, 2017 at 22:58:48 UTC+1, rjf wrote:
>
> As I have said before, the objective of most students taking calculus
> is to pass the course so they never have to know any of this integration
> stuff ever again. Thus computer systems are useful primarily to
> help them do homework
On Wednesday, March 1, 2017 at 06:38:28 UTC+1, rjf wrote:
>
> Other than the academic interest in 'anti-differentiation' it is not
> clear that this is such an important problem in (say) physics or
> engineering.
> Definite integration problems can be done by quadrature programs,
> and of course
On Tuesday, February 28, 2017 at 18:32:19 UTC+1, mmarco wrote:
>
> If it makes sense to use integration by parts or not deppends heavily on
> the actual expression. I suspect that, if you try to make a sane criterion
> te decide when to apply it, you could end up with something very
> complicated
On Tuesday, February 28, 2017 at 15:57:53 UTC+1, mmarco wrote:
>
> Many RUBI rules actually consist on applying that kind of algorithms. The
> trick with those algorithms is that sometimes they help, and sometimes they
> hurt (in the sense that you get something that is actually harder to
>
My opinion is that it's better to add new algorithms for failures than to
add rules. Of course adding rules will add a few successes, but it's not
like adding algorithms that can be combined together, like integration by
parts and partial fraction decomposition or integration of rational fractions of x
I have myself implemented symbolic integration in Giac/Xcas in a spirit
similar to Maxima or Axiom, that is, a few dozen *algorithms* for some
classes of integrands, then the Risch algorithm in the rational case, like
Maxima, while it seems that Axiom implements the more general algebraic
Risch
On Sunday, January 29, 2017 at 08:47:22 UTC+1, Ralf Stephan wrote:
>
>
> Note that replacing other parameters by 0 does not always work, for
>> example for sum((-1)^k*comb(n,k)/comb(k+a,k),k)=a/(n+a), I had to put non-0
>> values for the parameter.
>>
>
> Proving the identity does not seem a
I would say it's easier to check that the gcd of A(x) and B(x+t) is not
trivial for the values of t that are integer roots of the resultant (with
the other parameters replaced by 0). Note that replacing other parameters
by 0 does not always work, for example for
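In the special case where A is monic linear, A(x) = x - r, the resultant res_x(A(x), B(x+t)) is simply B(r+t), which makes the procedure easy to sketch (hypothetical helper names, not giac's API):

```python
def poly_eval(coeffs, x):
    # Horner evaluation; coeffs in decreasing-degree order
    v = 0
    for c in coeffs:
        v = v * x + c
    return v

def integer_shifts(r, b_coeffs):
    """Integer t with gcd(A(x), B(x+t)) nontrivial, for A(x) = x - r.

    res_x(A(x), B(x+t)) = B(r+t), a polynomial R(t) whose integer roots
    (by the rational root theorem) divide R(0) = B(r). Assumes B(r) != 0,
    otherwise t = 0 already gives a common root.
    """
    const = poly_eval(b_coeffs, r)
    divisors = [d for d in range(1, abs(const) + 1) if const % d == 0]
    return sorted(t for d in divisors for t in (d, -d)
                  if poly_eval(b_coeffs, r + t) == 0)

# A = x - 1, B = (x - 4)(x - 6): shifting B by t = 3 or t = 5 makes
# x - 1 a common factor, since B(1 + 3) = B(1 + 5) = 0
```

In the general case one computes the resultant of A(x) and B(x+t) with respect to x and looks for its integer roots in t, then confirms each candidate with a gcd computation, as described above.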
On Monday, January 23, 2017 at 20:03:49 UTC+1, Dima Pasechnik wrote:
>
>
>
> On Monday, January 23, 2017 at 6:48:14 PM UTC, parisse wrote:
>>
>> 12s on my Mac, with giac 1.2.3
>>
>
> Nice. Do you do interpolation of the determinant at a number of points?
12s on my Mac, with giac 1.2.3
On Monday, January 23, 2017 at 15:40:13 UTC+1, Ralf Stephan wrote:
>
> Hello,
> is there a faster way to compute resultants than
> what Singular provides? Is there software outside Sage
> that can do this faster?
>
> Resultants of big polynomials are needed by the
On Saturday, November 12, 2016 at 08:16:41 UTC+1, Bill Hart wrote:
>
>
>
>
> I wonder if it is sometimes worth taking the gcd G of the leading
> coefficients of the original primitive polynomials and then taking the gcd
> H of that with the leading coefficient L of the result of the psr process,
On Saturday, November 12, 2016 at 06:50:45 UTC+1, Bill Hart wrote:
>
>
>
> Looking how the content (188 monomials) is computed, it's done by exact
>> divisions only (it's the leading coefficient up to trivial coefficients).
>>
>
> Can you give me a hint where this is implemented. I didn't find this
On Friday, November 11, 2016 at 07:03:46 UTC+1, Bill Hart wrote:
>
> I assume you are using the modular algorithm to remove the final lot of
> content at the end of the psr algorithm. Otherwise the algorithm takes
> quite a long time, since even if we remove the known factors of the content
>
On Wednesday, November 9, 2016 at 17:42:32 UTC+1, Bill Hart wrote:
>
> I implemented a multivariable psr GCD algorithm in Julia and now the
> timings are as follows:
>
> * if z or x is the main variable, it takes a long time (too long to wait
> for)
>
> * if t or y is the main variable it takes
I have improved the sparse multivariate factorization algorithm in giac
1.2.2-101, it will factor the given polynomial in about 2s. It is based on
a simple idea of comparing a few bivariate factorizations, it is explained
here https://hal.archives-ouvertes.fr/hal-01394062 in order to help other
I tried the gcd of the 2 polynomials from Bill Hart, I get
2*z^33*y^6*x^1078*t^15-14*z^24*y^6*x^80*t^15+z^20*y^4*x^1003+z^20*y^3*x^1100+z^20*y^3*x^1000-2*z^13*y^3*x^79*t^16+78*z^13*y^3*x^78*t^15-7*z^11*y^4*x^5-7*z^11*y^3*x^102-7*z^11*y^3*x^2-y*x^4*t+39*y*x^3-x^101*t+39*x^100-x*t+39
after 20s
Unless I'm mistaken, the polynomials are at the end.
I guess that the heuristics used by Singular for sparse multivariate
factorization did not succeed for this polynomial (this pair is slightly
more complicated than the previous pairs), and reverted to dense
factorization (probably Hensel lift
The probabilistic early termination does not take much time here, the
charpoly stabilizes at about 85% of the primes required to reach the
Hadamard bound. Testing with a few random matrices, I often get
stabilization at about 80% (+/-10%), in this situation I think it's best to
wait a little
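The early-termination idea can be sketched on a simpler multimodular computation, an integer determinant (the charpoly case is analogous): combine residues prime by prime with incremental CRT and stop as soon as the symmetric lift stops changing, instead of accumulating primes all the way to the Hadamard bound. A toy sketch, not giac's code:

```python
def det_mod(m, p):
    # determinant of an integer matrix modulo a prime p (Gaussian elimination)
    a = [[x % p for x in row] for row in m]
    n = len(a)
    det = 1
    for k in range(n):
        piv = next((i for i in range(k, n) if a[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            det = -det  # row swap flips the sign
        det = det * a[k][k] % p
        inv = pow(a[k][k], -1, p)
        for i in range(k + 1, n):
            f = a[i][k] * inv % p
            for j in range(k, n):
                a[i][j] = (a[i][j] - f * a[k][j]) % p
    return det % p

def det_multimodular(m, primes):
    # incremental CRT with probabilistic early termination: return as soon
    # as the symmetrically lifted value repeats for one extra prime
    r, mod, prev = 0, 1, None
    for p in primes:
        h = (det_mod(m, p) - r) * pow(mod, -1, p) % p
        r, mod = r + mod * h, mod * p
        lifted = r if 2 * r <= mod else r - mod  # lift to (-mod/2, mod/2]
        if lifted == prev:
            return lifted  # stabilized: stop before the Hadamard bound
        prev = lifted
    return prev
```

As the message says, the value typically stabilizes well before the worst-case bound, at the cost of a (small) probability of stopping too early; certifying the result requires finishing up to the bound.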
On Wednesday, September 28, 2016 at 03:13:11 UTC+2, Jonathan Bober wrote:
>
>
> Ah, yes, I'm wrong again, as the multimodular in Flint is pretty new. I
> didn't look at what Sage has until now (flint 2.5.2, which looks likes it
> uses a fairly simple O(n^4) algorithm). I had previously looked at
1.2.2-67 is ready.
disable-gui should be working. If not, then I probably made a mistake while
copying the archive, you can try
http://www-fourier.ujf-grenoble.fr/~parisse/giac/giac-1.2.2.tar.gz
I don't know what you mean by ETA, but disable-ao should also be working.
For lapack, it was a little nightmare before
Tried
AX_BLAS([have_blas=yes],[have_blas=no])
AX_LAPACK([have_lapack=yes],[have_lapack=no])
without success as I feared (my autoconf is 2.69). I will stick to the
current checks.
error near unexpected token `,AX_LAPACK'
./configure: line 15831: `AX_BLAS(,AX_LAPACK())'
On Monday, July 4, 2016 at 14:25:15 UTC+2, Dima Pasechnik wrote:
>
>
>
> On Monday, July 4, 2016 at 1:06:06 PM UTC+1, parisse wrote:
>>
>> It's back online.
>>
>
> OK, I ju
It's back online.
You are welcome if you know how to fix configure.in for lapack support (as
long as it does not break my current compilation configurations).
I'm fixing the build for --disable-gui and I will also add a --disable-ao
flag in configure.in (ao is used for the playsnd command). You can
--disable-lapack if you think it will cause problems, LAPACK is interesting
inside giac only for large matrices (more than 1000x1000, otherwise giac
If you are writing code as part of your work in a French public
institution, for example the Université Paris-Sud, then the copyright
holder of the code you write is the University and you should get
authorization from the University to license it under the GPL (I suppose
you can assume that
On Monday, May 9, 2016 at 09:18:53 UTC+2, john_perry_usm wrote:
>
>
> For the homogeneous cyclic-8,
>
> > int RT = rtimer; int T=timer; size(sba(k,0,0)); rtimer-RT; timer-T;
> 1182
> 6854
> 5113
>
>
Strange figures: I get 455 for the first (which is correct for the basis
size, while 1182 is
On Sunday, May 8, 2016 at 04:08:54 UTC+2, john_perry_usm wrote:
>
> What about homogeneous cyclic-8? I'm not sure it will be any better; I'm
> just curious.
>
> I do know Singular is working on improving aspects of the sba()
> implementation, and I'm a bit surprised it's that slow.
>
That's
On Saturday, May 7, 2016 at 07:30:42 UTC+2, john_perry_usm wrote:
>
> I'm sorry. I got the name mixed up; the function you want to look at is
> sba(), not dstd() (which is something experimental of mine that never saw
> the light of day).
>
>
On Friday, May 6, 2016 at 15:07:48 UTC+2, john_perry_usm wrote:
>
>
> One of us misunderstands the other. Here's what I'm saying:
>
>- Singular's std() is neither an F4- nor F5-style algorithm; it is a
>traditional, Buchberger algorithm that uses a modified Gebauer-Möller pair
>
On Wednesday, May 4, 2016 at 23:00:23 UTC+2, john_perry_usm wrote:
>
>
> Unfortunately Roman doesn't mention on that page whether he used
> Singular's std() or dstd(). The numbers look vaguely std()ish to me (i.e.,
> when I compute the GB of Cyclic-8 using std(), it takes about 40 seconds;
>
On Wednesday, May 4, 2016 at 12:05:04 UTC+2, mmarco wrote:
>
> Can you also ask him about the license?
>
> Also, has somebody done timing comparisons with singular?
>
>
Perhaps you should have a look at the link I've posted, there is a
comparison of mgb with magma, singular and my own system
On Wednesday, May 4, 2016 at 10:25:42 UTC+2, Luca De Feo wrote:
>
> I was also thinking about writing an interface to FGb. Maple uses this
> library via the C API to compute Gröbner bases. As far as I know Magma uses
> an older version of this code too. So it must be doable.
>
>
>
Maple will
there is/will be a lot of competition in this area, that's why I
have chosen a complementary approach with my CAS: in the browser but
offline once downloaded
(http://www-fourier.ujf-grenoble.fr/~parisse/xcasen.html). Perhaps both
approaches will help gain more open-source math software users
Regarding the mission statement, I'm a little bit skeptical that one can
build a viable alternative to Magma on one side and Maple, Mathematica,
Matlab on the other. Magma is a very specialized software that is probably
unknown to most mathematicians, and almost certainly unknown in other
On Saturday, August 1, 2015 at 22:44:25 UTC+2, Rob Beezer wrote:
Dear Bernard,
I was thinking more of the static HTML pages you sent that had been
generated from TeX/LaTeX with your GIAC extensions (giac.tex). The
mathematics on those pages might look better with MathJax and that would be
I have updated the giac.tex
http://www-fourier.ujf-grenoble.fr/~parisse/giac/giac.tex file (and also Xcas
offline in the browser
http://www-fourier.ujf-grenoble.fr/~parisse/giac/xcasen.html), if the
HTML file is loaded from hard disk it detects Chrome (or IE) and renders
with mathjax
Problem solved (my Chrome was infected by an adware).
mathjax properly. And it's slow. Look at the
difference between Firefox/Mathml
http://www-fourier.ujf-grenoble.fr/~parisse/xcasen.html and Latex/Mathjax.
is
Javascript and can be configured to execute locally.
I got disappointing results for dynamic rendering (bugs, slowness) when
using latex/mathjax for Xcas offline in the browser, perhaps because I do
not use mathjax properly.
Compare latex/mathjax
http://www-fourier.ujf-grenoble.fr/~parisse/giac
in test.tex
http://www-fourier.ujf-grenoble.fr/%7Eparisse/giac/test.tex or a more
complete example I started a few weeks ago by adapting a course in French
http://www-fourier.ujf-grenoble.fr/~parisse/mat249/mat237.html (replace
html by tex in the link above to see the source code
I had a quick look, but I'm still a little bit confused about how the
sources are written. Do you write your source files in XML, or do you have
some kind of converter from a LaTeX source file?
Since this topic raises more interest than the giac topic raised by
Frederic yesterday, I will pollute it a little by pointing to the ticket
(http://trac.sagemath.org/ticket/18749)
where I made a few
On Wednesday, February 25, 2015 at 20:56:06 UTC+1, kcrisman wrote:
Who on earth thinks that the Sieve of Eratosthenes is designed for modern
production work???
Me. I'm using it for the ithprime function in Giac. ithprime(70)
returns 10570841 in 0.07 seconds while in sage
A perhaps more interesting benchmark : how long does it take to factor it
back?
On Friday, December 5, 2014 at 21:14:54 UTC+1, maldun wrote:
I don't think that the functionality of Sage is the big problem; in fact
Sage has great features for zero cost.
Nothing is really free.
My estimate for a google search is an energy cost of 16Wh per search
(equivalent to 7g
On Monday, December 8, 2014 at 16:24:30 UTC+1, Emmanuel Charpentier wrote:
How do you reach your estimate of 16 Wh/Google search ? Any source ?
In French:
http://www.planetoscope.com/electronique/980-emissions-de-co2-par-les-recherches-sur-google.html
7g CO2/request.
Google's own published