[sage-devel] Re: Final 3.2.2 sources released

2008-12-19 Thread Gary Furnish

Was that with a parallel test?  I'm aware of the marker issue but
haven't been able to reproduce it consistently.  The marker was a
temporary fix for even worse race conditions, so it's still a step
in the right direction.

On Fri, Dec 19, 2008 at 12:51 PM, mabshoff mabsh...@googlemail.com wrote:



 On Dec 19, 11:45 am, Justin C. Walker jus...@mac.com wrote:
 On Dec 18, 2008, at 23:23 , mabshoff wrote:

 SNIP

 Hi Justin,

 Upgraded rc1 to final on Mac OS X, both 10.5.5 and 10.4.11, without
 problems.

 Well, it was two tiny patches :)

 All tests passed on 10.5.5, but on 10.4.11, I see one failure:

 sage -t  devel/sage/sage/dsage/interface/dsage_interface.py
 **
 File "/Users/tmp/sage-3.2.2.rc1/devel/sage/sage/dsage/interface/dsage_interface.py", line 567:
  sage: j
 Expected:
  625
 Got:
  62500DSAGE00
 **
 1 items had failures:
 1 of   8 in __main__.example_16
 ***Test Failed*** 1 failures.
 [DSage] Closed connection to localhost
 [DSage] Closed connection to localhost
 [DSage] Closed connection to localhost
 [DSage] Closed connection to localhost
 For whitespace errors, see the file /Users/tmp/sage-3.2.2.rc1/tmp/.doctest_dsage_interface.py
   [45.9 s]

 I reran this test by itself and it succeeded.  Go figure.

 Should I create a trac ticket, or is this issue understood?

 I think Gary is working in that area, so we don't need a ticket for
 the specific doctest failure at the moment. I hit similar issues on
 occasion, but for now I would suggest we sit back a little and see
 which patches come down the pipeline. The above looks like a pexpect
 issue with a marker, but I am not 100% sure.

 Justin

 Cheers,

 Michael

 --
 Justin C. Walker, Curmudgeon-At-Large
 Institute for the Enhancement of the Director's Income
 
 When LuteFisk is outlawed,
 Only outlaws will have LuteFisk
 
 





[sage-devel] Re: Proposal for inclusion of GINAC/pynac-0.1 in Sage.

2008-08-25 Thread Gary Furnish

I've been trying to get an answer to this question for the last few
weeks: is the plan to extend GiNaC (write new algorithms in C++) or to
extend Sage (write new algorithms in Sage) using Cython/Python?  This
is very much a design-related question, and in the hurry to get GiNaC
through review I feel that design issues and questions have been very
much ignored.  To put the question somewhat differently: are
algorithms using the new symbolics system going to use GiNaC/pynac
enough that switching to any other low-level system will be very, very
difficult (because new code such as sums may depend directly on
GiNaC-specific behavior)?  If this is not intended, what will be done
to try to prevent Sage from becoming overly dependent on GiNaC in the
long term?

--Bill

On Mon, Aug 25, 2008 at 8:24 AM, Burcin Erocal [EMAIL PROTECTED] wrote:

 Hi,

 On Mon, 25 Aug 2008 07:12:27 -0700 (PDT)
 parisse [EMAIL PROTECTED] wrote:

 I still do not understand why giac is not even mentioned in the
 symbolics discussion, considering the fact that, like ginac, it is a C++
 library, but unlike ginac (GiNaC Is Not A CAS), giac (GIAC Is A CAS)
 has much more advanced calculus functions (functionality such as
 limits and integration) and good benchmarks.

 I think the only reason giac is not mentioned in the benchmarks is that
 it wasn't available. There are already interfaces to MMA and Maple from
 Sage, so they are easy to time. Sympy and sympycore are already in
 Python, so no trouble there. GiNaC was easy to build and understand, so
 I could create packages and write an interface in a matter of hours.

 There was already an attempt (by Ondrej) to make a package for giac,
 which is the first step to writing an interface. However, IIRC, it
 didn't succeed.


 There is also the question of why we would use GiNaC and not giac as
 the symbolic arithmetic engine in Sage. The answer lies in the
 formulation of the question, and the word "arithmetic".

 We already have a pretty good symbolic engine in Sage: Maxima does
 quite a good job of solving integrals, limits, etc. The main problem
 with Maxima is that we cannot extend it. The situation would be the
 same if we adopted yet another system, such as giac.

 The point of the pynac effort (at least from my pov), is to acquire a
 fast and capable arithmetic and manipulation engine and write the
 higher level algorithms on top of that. This way Sage can advance
 from being a user interface to become a research environment.


 I thought Sage was an
 effort to build a free alternative to Maple or Mathematica, and that
 collaboration between projects sharing this goal would prevail, not
 competition (how much time is lost duplicating the functionality
 already available in giac for pynac?).
 snip

 Pynac is a modification of GiNaC to use Python objects as coefficients
 in its data types. This was a rather tricky, but well-isolated, change,
 since GiNaC already abstracts this functionality into a single class. I
 don't think this duplicates any functionality already in giac.
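A toy Python picture of that isolation (hypothetical names; the real
GiNaC class is C++, called numeric): if all coefficient arithmetic
funnels through one wrapper class, swapping the underlying number type
touches only that class.

class Numeric(object):
    # Single choke point for coefficient arithmetic: GiNaC's C++
    # analogue wraps CLN numbers, pynac's wraps Python objects.
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return Numeric(self.value + other.value)
    def __mul__(self, other):
        return Numeric(self.value * other.value)
    def __repr__(self):
        return "Numeric(%r)" % self.value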


 Cheers,

 Burcin

 





[sage-devel] Re: Proposal for inclusion of GINAC/pynac-0.1 in Sage.

2008-08-25 Thread Gary Furnish


Make it so sympy also runs on top of GiNaC.  This will force the creation
of a clear interface specification.


If there is going to be a clear interface spec, then we should go and
write one explicitly, so that anyone, not just GiNaC, can
potentially conform to it.  Perhaps this is the best long-term
solution?

My symbolics code is already GPLv2+ so fixing the headers is just a
technicality.

On Mon, Aug 25, 2008 at 10:24 AM, William Stein [EMAIL PROTECTED] wrote:

 On Mon, Aug 25, 2008 at 8:54 AM, didier deshommes [EMAIL PROTECTED] wrote:

 On Mon, Aug 25, 2008 at 4:59 AM, William Stein [EMAIL PROTECTED] wrote:

 Hi,

 I propose that pynac be included in Sage.

 VOTE:
  [ ] Yes, include Pynac in Sage
  [ ] No, do not (please explain)
  [ ] Hmm, I have questions (please ask).

 I have a question: what will happen to gfurnish's work?
 Is it going to be completely abandoned?

 If he clearly licenses it under GPLv2+ then hopefully it won't be, and
 somebody will find ways to use it.  gfurnish -- are you willing to
 post a version of all your code with GPLv2+ headers?

  -- William

 





[sage-devel] Re: units, was: Suggestion components to add onto SAGE

2008-07-21 Thread Gary Furnish

This is done very often in physics; without this feature I largely
consider any units system useless.

On Mon, Jul 21, 2008 at 7:06 PM, Carl Witty [EMAIL PROTECTED] wrote:

 On Jul 20, 1:52 pm, William Stein [EMAIL PROTECTED] wrote:
(1) Is the list of all units one uses pretty standard? Is there
 a table, say in Wikipedia, with pretty much all of them?  Or do
 people make up new units in the course of their work or research?

 For just day-to-day unit conversion, it would be great to be able to
 make up new units and define their conversions; things like "1
 gallon_of_paint is 150 feet^2" or "100 paces is 237 feet".
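A minimal sketch of what user-defined units might look like (all names
hypothetical, not an actual Sage API):

class Unit(object):
    def __init__(self, name, factor=1.0, base=None):
        self.name = name
        self.factor = factor      # multiplier relative to the base unit
        self.base = base or name  # units sharing a base are convertible

    def convert(self, value, other):
        if self.base != other.base:
            raise ValueError("incompatible units")
        return value * self.factor / other.factor

feet = Unit("feet")
paces = Unit("paces", factor=2.37, base="feet")  # 100 paces = 237 feet
print(paces.convert(100, feet))                  # -> 237.0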

 Carl
 





[sage-devel] Re: working on Sage

2008-07-08 Thread Gary Furnish

On Tue, Jul 8, 2008 at 12:52 AM, William Stein [EMAIL PROTECTED] wrote:

 On Mon, Jul 7, 2008 at 9:12 PM, Elliott Brossard
 [EMAIL PROTECTED] wrote:
 Hi William,

 I am becoming more familiar with both Linux and Sage now, which makes things
 much easier. I finished porting the Maxima and Wester integration tests to
 Sage, though there are many that currently fail...I've attached them, if

 You attached only the ones that fail?   Where are the ones that succeed?

Perhaps we can set up a new tests repo for the very large test suite
currently under development?  Alternatively we could stick it in the
new symbolics, but that would change term ordering.  In any case I'd
like to try the test suite myself.

 you'd like to see. The problem that many of them have results from ambiguity
 of variables under a radical, in a denominator, or in a function with a
 restricted domain. As an example, inputting

 Can you just make a new test that tests each case?

+1
 var('a')
 integrate(log(x)/a, x, a, a+1)

 will throw an error: 'is a positive or negative?'  Two assume
 statements--assume(x+a+1>0) and assume(a-1>0)--render Sage capable of
 responding, and it outputs

 ((a + 1)*log(a + 1) - a*log(a) - 1)/a

 My TI-89 calculator, which uses Derive, gets the same result, though without
 using 'assume' in any form. Trouble is, I can't imagine there's a quick fix
 for this, and since most of the failing integrals are from Maxima's own test

 It would likely be possible -- though difficult (maybe not too difficult) --
 to have Sage automatically give all possible answers to Maxima and
 construct a conditional integral expression that gives each possible
 answer for given conditions.

I think this is a good idea if we want to go down the Derive road, but
it is not clear to me that it is the best idea.  The number of
different piecewise expressions here could blow up very quickly.
However, the alternative of ignoring all the other cases entirely
seems like a bad idea too, so the piecewise route is probably best.
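A toy sketch of that conditional-answer idea (hypothetical names; real
results would be symbolic expressions, not strings):

class ConditionalExpression(object):
    # One (condition, answer) pair per assumption Maxima asks about.
    def __init__(self, cases):
        self.cases = cases
    def __repr__(self):
        return "; ".join("%s if %s" % (expr, cond)
                         for cond, expr in self.cases)

result = ConditionalExpression([
    ("a > 0", "((a + 1)*log(a + 1) - a*log(a) - 1)/a"),
    # further branches for other sign assumptions would go here
])
print(result)  # -> ((a + 1)*log(a + 1) - a*log(a) - 1)/a if a > 0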

 suite, they must already know about it. Some of the other failing
 integration tests are merely identities that the current version of Maxima
 likely fixes...at least I hope so. I also wrote a series of tests for the
 various rules of differentiation, though all of them check out fine.

 You can get the current version of Maxima presumably from the maxima
 website.

 What I would like to start on next is the calc101 sort of website that I
 discussed with you before, though I was wondering if we could meet to go
 over the specifics of how I would implement it. I'll be at the UW on
 Thursday for a class from 3-5pm, so I would be happy to meet with you
 beforehand or afterward, though if that doesn't work another day would be
 fine as well.

 I am in San Diego.  I could meet with you next week maybe, but I'm
 not sure since I'm going to Europe on the morning of the 16th.

If nothing else we could meet and talk about Symbolics/Maxima if
you're interested.
  -- William

 





[sage-devel] Re: fast_float rewrite -- comments requested

2008-07-07 Thread Gary Furnish

I was one of the people who discussed this at dev1, and I give it a very
positive +1 (especially the possible code auto-generation).

On Tue, Jul 1, 2008 at 10:59 AM,  [EMAIL PROTECTED] wrote:




 On Tue, 1 Jul 2008, Carl Witty wrote:


 I can't find any caching for fast_float objects (I can't see anywhere
 they would get attached to a symbolic expression or a polynomial).  Am
 I just not looking in the right place?

 Compiled polynomials get attached to polynomial objects, but if
 you're not seeing fast floats getting stuck into dictionaries and the
 like then we're probably lucky and I doubt there's (m)any pickled
 versions of them floating around.

 - Robert

 Tom Boothby's compiled polynomials don't have anything to do with
 fast_float (yet).

 Carl

 1) Compiled polynomials are not currently pickleable, so no worries there.
 2) According to Carl's proposal, it should be trivial to alter my code to use 
 the new fast_eval objects.



 





[sage-devel] Re: pyprocessing

2008-06-24 Thread Gary Furnish

I already gave this a verbal +1, but does anyone know what happens if
you fork a Sage process with an open pexpect interface?
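A sketch of the hazard behind the question (hypothetical; "cat" stands
in for a real Sage pexpect interface): after a fork, both processes
hold the same pty file descriptor, so their reads and writes race.

import os
import pexpect

child = pexpect.spawn("cat")      # echoes back whatever it is sent
if os.fork() == 0:
    child.sendline("from child")  # same fd as the parent's copy...
else:
    child.sendline("from parent")
    child.expect("from")          # ...so which line each reads is a race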

On Tue, Jun 24, 2008 at 10:26 AM, William Stein [EMAIL PROTECTED] wrote:

 On Tue, Jun 24, 2008 at 10:21 AM, Nick Alexander [EMAIL PROTECTED] wrote:

 Anyway, since every single person voted +1 and nobody voted -1 or
 had issues, I declare this package officially accepted.

 -1!  That was fast.


 It was a full 3 days.


 What happened to the inclusion procedures?
   In particular, I am
 interested to know what other options were investigated and why
 pyprocessing is considered the best possible solution right now.


 This is discussed at great length in the pages that I linked to, since
 pyprocessing already passed the inclusion procedure for standard
 Python.   Please read things at the top of this thread.

 I'll reopen the voting for two more days.

  -- William

 





[sage-devel] Re: Pickling functions

2008-06-14 Thread Gary Furnish

This code pickles and unpickles functions.  Note it is semi-hackish
and I make no guarantee about it working in Python 3.0 (but I'll be
maintaining a copy for distributed stuff I'm working on for dev1).

import new, types, copy_reg, cPickle

# See the Python Cookbook for more details.
def code_ctor(*args):
    return new.code(*args)

def reduce_code(co):
    # Closures carry cell variables that cannot be reconstructed here.
    if co.co_freevars or co.co_cellvars:
        raise ValueError("Cannot pickle code objects from closures")
    return code_ctor, (co.co_argcount, co.co_nlocals, co.co_stacksize,
                       co.co_flags, co.co_code, co.co_consts, co.co_names,
                       co.co_varnames, co.co_filename, co.co_name,
                       co.co_firstlineno, co.co_lnotab)

# Register the reducer so code objects become picklable.
copy_reg.pickle(types.CodeType, reduce_code)

def picklefunction(func):
    return cPickle.dumps(func.func_code)

def unpicklefunction(pickled):
    recovered = cPickle.loads(pickled)
    return new.function(recovered, globals())
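Hypothetical usage of the helpers above:

def f(x):
    return x + 1

s = picklefunction(f)    # a plain byte string
g = unpicklefunction(s)
assert g(3) == 4         # g reruns f's bytecode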



On Sat, Jun 14, 2008 at 11:25 AM, David Roe [EMAIL PROTECTED] wrote:

 One way to get around this limitation in python is to use callable
 classes instead of functions.
 David
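A minimal sketch of that workaround (hypothetical names): the class is
still found by import on the unpickling side, but instance state
pickles by value, so dictionaries of such callables round-trip where
plain functions may not.

import pickle

class AddConstant(object):
    def __init__(self, c):
        self.c = c        # instance state pickles by value
    def __call__(self, x):
        return x + self.c

f = AddConstant(1)
g = pickle.loads(pickle.dumps(f))
assert g(2) == 3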

 On Sat, Jun 14, 2008 at 10:42 AM, David Harvey
 [EMAIL PROTECTED] wrote:

 On Jun 14, 2008, at 1:25 PM, Daniel Bump wrote:


 Some code that has been proposed by Nicolas Thiery
 for sage/combinat/families.py would create classes
 that have as attributes dictionaries of functions.
 However dumps(s) will raise an exception if s is
 such a class instance.
 Example: the simple reflections in a Weyl group. See:
 http://groups.google.com/group/sage-combinat-devel/msg/8b987cd471db3493?hl=en
 What it boils down to is this. The following is
 fine in native Python:

 >>> import pickle
 >>> def f(x): return x+1
 ...
 >>> pickle.dumps(f)
 'c__main__\nf\np0\n.'
 >>> pickle.dumps({1:f})
 '(dp0\nI1\nc__main__\nf\np1\ns.'
 But if you try to run this from within Sage,
 both calls to dumps() will raise exceptions.
 Is this a bug in Sage?

 I actually thought you couldn't really pickle functions, even in plain
 python.
 http://docs.python.org/lib/node317.html
 Note that functions (built-in and user-defined) are pickled by ``fully
 qualified'' name reference, not by value. This means that only the function
 name is pickled, along with the name of module the function is defined in.
 Neither the function's code, nor any of its function attributes are pickled.
 Thus the defining module must be importable in the unpickling environment,
 and the module must contain the named object, otherwise an exception will be
 raised.
 david

 


 





[sage-devel] Re: hg clone and symlinks...

2008-06-06 Thread Gary Furnish

This was not my point.  My point was that if you use multiple branches
at the same time, you're forced to have multiple physical copies of
the files, and while my conversations on IRC suggest this may be
possible, it's going to involve spending a bunch of effort to get what
we already have working again.

On Fri, Jun 6, 2008 at 12:57 AM, root [EMAIL PROTECTED] wrote:

I don't exactly understand these distributed control systems very
well, so hopefully this isn't an obvious question. Right now as I'm
working on symbolics I commonly have files from multiple branches open
(symbolics-stable/backup, symbolics-current, calculus-old).  I also
have to frequently have to toggle between them (to test backwards
compatibility).

 Doing a branch

  git-branch calculus-old

 generates a 40-character hash which points to a history tree node. That is
 all the disk space required.

  git-checkout calculus-old

 magically transforms the files so they conform to the calculus-old
 sources and you can work in that tree.

  git-checkout master

 will bring you back to the trunk, magically transforming the files again.

 Note that since git uses hashcodes to track changes it will not duplicate
 files. Two files with the same hash code occupy one file. One side effect
 is that a distro might actually be smaller when moved into git if it
 had copies of files in subdirectories.

 Thus the branch only needs to keep the changes as it shares the
 unchanged files by pointing at the same sources. So a branch only costs
 you the disk space and time to maintain the changes.






It seems like switching to branching would require
having three full sage installs at the cost of a huge amount of space
if I want to be able to switch between them quickly and edit multiple
versions of the same file, which seems like a massive negative side
effect.  Do I understand this right?

 So, in answer to your question, no it does not require much disk space.
 I have 20+ branches locally and there is almost no disk space overhead
 and changing between the branches happens in about the time it takes
 to type the command.

 Tim


 





[sage-devel] Re: hg clone and symlinks...

2008-06-05 Thread Gary Furnish

I don't exactly understand these distributed control systems very
well, so hopefully this isn't an obvious question. Right now as I'm
working on symbolics I commonly have files from multiple branches open
(symbolics-stable/backup, symbolics-current, calculus-old).  I also
frequently have to toggle between them (to test backwards
compatibility).  It seems like switching to branching would require
having three full sage installs at the cost of a huge amount of space
if I want to be able to switch between them quickly and edit multiple
versions of the same file, which seems like a massive negative side
effect.  Do I understand this right?

On Thu, Jun 5, 2008 at 5:27 PM, Carl Witty [EMAIL PROTECTED] wrote:

 On Jun 5, 3:28 pm, Glenn H Tarbox, PhD [EMAIL PROTECTED] wrote:
 On Thu, 2008-06-05 at 14:23 -0700, Carl Witty wrote:
  As far as the second half of your response, I'm confused.  The build
  system already understands chains where the output of one step is the
  input to the next: the .so file is made from the .o file, which is
  made from the .c file, which is made from a variety of .pyx, .pxd,
  and .pxi files.  When we change a .pxi file, all the required
  rebuilding is done automatically (which may take 10 or 20 minutes for
  changes to core .pxi files, which is why we don't want to do it
  every time we change branches).

 yea, as I said to Robert, the frequent branch toggle is a new one to me.
 Typically, a developer branches, hacks, occasionally shelving, but
 working through to local merge.  Maybe Sage development is different...
 I really don't know.

 Hmm... now that you mention it, I hardly ever change branches
 (clones), so I'm not sure why this feels so important to me (and
 evidently to Robert).  Also, most branch changes would rebuild fairly
 quickly (if every modification to Sage required a 20-minute rebuild,
 development would be very painful; but it's actually fairly rare to
 touch these core .pxi files).  My current theory is that it only
 takes 2 or 3 20-minute rebuilds on branch changes to leave you scarred
 for life, and willing to do almost anything to avoid the pain :)

  Are you talking about the sage - sage-NAME symlink?  I'm never
  confused by it because I never use it; I always use sage-NAME
  directly.  We could probably figure out a way to avoid this symlink,
  if we decided it was a problem.

 Yup, Robert brought up the same point...  and it's perhaps only a problem
 for me cuz I'm using PyDev which behaves poorly... it's my problem and I
 will fix it... but it exposed things to me which seem unnatural and
 upon further investigation got downright kinky... not that I'm not
 flexible (I did move to Seattle but didn't get all the piercings just
 yet :-)

 What does PyDev do with the symlink?

 I don't actually know what all the symlink is used for, so I don't
 know how hard it would be to eliminate; but I would guess that moving
 it somewhere else would not be too bad.

 Actually, we presumably can't rely on the symlink for a native Windows
 port anyway, so the people working on that may eliminate the symlink
 fairly soon.

 OTOH, Sage does all kinds of things differently... and it's amazing
 (there's no system out there where you download a 200MB tarball and type
 make... it just doesn't happen) so I'm a bit reluctant to mess around
 all that much just yet...

 Yep... it certainly impressed me the first time I installed it.

4) It's easier to abandon the history of a clone, both for hopelessly
broken development efforts and refereeing of other people's patches.

   git branch -D badbranch

   as opposed to the usual

   git merge developmentBranch
   git branch -d developmentBranch

   (btw, I'm a git user... I'm sure there's something similar in Hg)

  I don't think there is.  (I've looked.)

 I'll take your word for it but there definitely should be... since
 there isn't I'd posit that there is badness architecturally with Hg cuz
 this issue must come up multiple times a day

 I only know a little bit about mercurial internals, but I know enough
 to have a guess as to why this is hard.

 In mercurial, each source file has one corresponding file in the
 repository; the repository files are treated as append-only.  So if
 you make changes to a file in branch A, commit, change it in branch B,
 commit, branch A, commit, branch B, commit, then this repository file
 will end up with the changes ...ABAB.  Then expunging branch A would
 require rebuilding this file, which otherwise is never necessary in
 mercurial.  (There is a mercurial extension that lets you eliminate
 all changes in a repository after a given revision number; this is
 easier to handle, since you just have to go through all the repository
 files and truncate them to the appropriate point.)
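A toy illustration of that layout (not Mercurial's real format):
interleaved commits from two branches land interleaved in one file's
append-only history, so expunging one branch means rewriting the file
rather than truncating it.

revlog = []  # one source file's append-only history
for branch, rev in [("A", 1), ("B", 1), ("A", 2), ("B", 2)]:
    revlog.append((branch, rev))
without_a = [e for e in revlog if e[0] != "A"]  # full rewrite, not a truncate
print(revlog)     # [('A', 1), ('B', 1), ('A', 2), ('B', 2)]
print(without_a)  # [('B', 1), ('B', 2)]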

 Carl

 



[sage-devel] Re: coercing of log(2)*1.0

2008-06-03 Thread Gary Furnish

On Mon, Jun 2, 2008 at 11:41 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 2, 2008, at 12:55 PM, William Stein wrote:

 On Mon, Jun 2, 2008 at 12:53 PM, Gary Furnish
 [EMAIL PROTECTED] wrote:

 -1. First, everything cwitty said is correct.

 More on this below.

 Second, if we start
 using ZZ[sqrt(2)] and ZZ[sqrt(3)], then sqrt(2)+sqrt(3) requires
 going
 through the coercion system which was designed to be elegant instead
 of fast, so this becomes insanely slow for any serious use.

 The coercion system is designed to be elegant *and* fast. Writing
 something like 1 + sqrt(2) is going to require the coercion system no
 matter what we do, as is 1 + sqrt(2) + 1/2. Computing QQ(sqrt(2), sqrt
 (3)) may take a millisecond or two, but after that coercion into it
 from either ring will be fast.

As long as there are classes in pure Python that use MI (multiple
inheritance) on the critical path that symbolics has to use, the
argument that coercion was written to be fast makes no sense to me.
 Finally, this is going to require serious code duplication from
 symbolics, so
 I'm not sure what the big gain is over just using symbolics to do
 this
 in the first place.

 Number field element work completely differently than symbolics, I
 see little if any code duplication.

Fair enough.
 Also, cwitty pointed out that

 sage: sum([sqrt(p) for p in prime_range(1000)])

 works fine in Sage now, but with your (and my) proposal,
 it would be impossible, since it would require constructing
 a ring of integers of a number field of degree 2^168..

 This is the biggest argument against such a proposal, and I'm not
 quite sure how best to handle this. One would have to implement large
 number fields over sparse polynomials, and only lazily compute the
 ring of integers. Or, as John Cremona suggested, train users. None
 of the above are ideal.

 I would like to give my motivation for not having sqrt(2) be in SR:

 1) Speed. I know you're working very hard to make symbolics much,
 much faster than they currently are, but I still don't think it'll be
 able to compete in this very specialized domain. Currently:

 sage: timeit("((1+sqrt(2))^100).expand()")
 5 loops, best of 3: 52.9 ms per loop

 sage: timeit("((1+sqrt(2)+sqrt(3))^50).expand()")
 5 loops, best of 3: 579 ms per loop
 sage: sym_a = sqrt(2) + sqrt(3)
 sage: timeit("((1+sym_a)^50).expand()")
 5 loops, best of 3: 576 ms per loop

 Compared to


 sage: K.<a> = NumberField(x^2-2)
 sage: timeit("((1+a)^100)")
 625 loops, best of 3: 48.4 µs per loop

 sage: K.<a> = NumberField(x^4 - 10*x^2 + 1)
 sage: timeit("((1+a)^50)")
 625 loops, best of 3: 138 µs per loop

 That's over three orders of magnitude faster (and it's *not* due to
 pexpect/interface overhead as the actual input and output are
 relatively small). Making arithmetic involving a couple of radicals
 fast should probably be the most important, especially as one starts
 making structures over them.

Symbolics isn't going to approach number field speed.  I think we can
do much better than Maxima, but it's not going to be that much better
(maybe if we encapsulate number fields as a special case in SR).
 2) I personally don't like having to sprinkle expand and/or
 simplify all over the place. Now I don't think symbolics should be
 expanded automatically, but stuff like (1+sqrt(2))^2 or 1/(1+i)
 should be. It's like just getting the question back. (I guess I'm revealing
 my bias that I don't think of it as living in SR, but rather a number
 field...) On that note, I can't even figure out how to do simplify
 (sqrt(2)-3)/(sqrt(2)-1) in the symbolics...as opposed to

 sage: K.<sqrt2> = NumberField(x^2-2)
 sage: (sqrt2-3)/(sqrt2-1)
 -2*sqrt2 - 1
 sage: 1/(sqrt2-1)
 sqrt2 + 1

You're going to have a hard time convincing me that the default behavior
in Mathematica and Maple is wrong.  This makes sense for number theory
but not for people using calculus.

 3) The coercion system works best when things start as high up the
 tree as they can, and the Symbolic Ring is like the big black hole at
 the bottom that sucks everything in (and makes it slow). There is a
 coercion from ZZ[sqrt(2)] (with embedding) to SR, but not the other
 way around, and even trying to cast the other way is problematic. I'd
 rather that matrix([1, 1/2, 0, sqrt(2)]) land in a matrix space over
 the a number field (rather than over SR), and ZZ['x'].gen() + sqrt(2)
 be an actual polynomial in x. Also, the SR, though very useful,
 somehow seems less rigorous (I'm sure that this is improving).

When coercion is faster we can consider changing this.  My definition
of fast is 10 cycles if the parents are the same; no dictionary
lookups if one parent is contained in the other, for all common cases;
otherwise reasonably quick pure Cython code.  Both the new and old
coercion systems fail these tests of "sufficiently quick", and I'm not
waiting to finish symbolics until they pass them.

My alternative option is: let's throw in a flag, defaulting to off
(current behavior), that lets you turn on sqrt/powers

[sage-devel] Re: coercing of sqrt(2)

2008-06-03 Thread Gary Furnish

On Tue, Jun 3, 2008 at 2:34 AM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 12:09 AM, Gary Furnish wrote:


 On Mon, Jun 2, 2008 at 11:41 PM, Robert Bradshaw
 [EMAIL PROTECTED] wrote:

 On Jun 2, 2008, at 12:55 PM, William Stein wrote:

 On Mon, Jun 2, 2008 at 12:53 PM, Gary Furnish
 [EMAIL PROTECTED] wrote:

 -1. First, everything cwitty said is correct.

 More on this below.

 Second, if we start
 using ZZ[sqrt(2)] and ZZ[sqrt(3)], then sqrt(2)+sqrt(3) requires
 going
 through the coercion system which was designed to be elegant
 instead
 of fast, so this becomes insanely slow for any serious use.

 The coercion system is designed to be elegant *and* fast. Writing
 something like 1 + sqrt(2) is going to require the coercion system no
 matter what we do, as is 1 + sqrt(2) + 1/2. Computing QQ(sqrt(2),
 sqrt
 (3)) may take a millisecond or two, but after that coercion into it
 from ether ring will be fast.

 As long as there are classes in pure python that use MI on the
 critical path that symbolics has to use, the argument that coercion
 was written to be fast makes no sense to me.

 Not sure what you mean by MI here; could you explain? In any case,
 just because coercion isn't as fast as it could be doesn't mean that
 it's not written for speed, and it is much faster than it used to be. Of
 course there's room for improvement, but right now the focus is on
 getting the new system (which isn't really that new compared to the
 change made a year ago) in place.

Sets, and in particular a bunch of the category functionality (homsets),
get used in coercion and use MI, making them impossible to cythonize.
 Finally, this is going to require serious code duplication from
 symbolics, so
 I'm not sure what the big gain is over just using symbolics to do
 this
 in the first place.

 Number field element work completely differently than symbolics, I
 see little if any code duplication.

 Fair enough.
 Also, cwitty pointed out that

 sage: sum([sqrt(p) for p in prime_range(1000)])

 works fine in Sage now, but with your (and my) proposal,
 it would be impossible, since it would require constructing
 a ring of integers of a number field of degree 2^168..

 This is the biggest argument against such a proposal, and I'm not
 quite sure how best to handle this. One would have to implement large
 number fields over sparse polynomials, and only lazily compute the
 ring of integers. Or, as John Cremona suggested, train users. None
 of the above are ideal.

 I would like to give my motivation for not having sqrt(2) be in SR:

 1) Speed. I know you're working very hard to make symbolics much,
 much faster than they currently are, but I still don't think it'll be
 able to compete in this very specialized domain. Currently:

 sage: timeit("((1+sqrt(2))^100).expand()")
 5 loops, best of 3: 52.9 ms per loop

 sage: timeit("((1+sqrt(2)+sqrt(3))^50).expand()")
 5 loops, best of 3: 579 ms per loop
 sage: sym_a = sqrt(2) + sqrt(3)
 sage: timeit("((1+sym_a)^50).expand()")
 5 loops, best of 3: 576 ms per loop

 Compared to


 sage: K.<a> = NumberField(x^2-2)
 sage: timeit("((1+a)^100)")
 625 loops, best of 3: 48.4 µs per loop

 sage: K.<a> = NumberField(x^4 - 10*x^2 + 1)
 sage: timeit("((1+a)^50)")
 625 loops, best of 3: 138 µs per loop

 That's over three orders of magnitude faster (and it's *not* due to
 pexpect/interface overhead as the actual input and output are
 relatively small). Making arithmetic involving a couple of radicals
 fast should probably be the most important, especially as one starts
 making structures over them.

 Symbolics isn't going to approach number field speed.  I think we can
 do much better than Maxima, but it's not going to be that much better
 (maybe if we encapsulate number fields as a special case in SR).

 I'd rather have them be number field elements (with all the
 methods, etc.) than provide a wrapping in SR. Otherwise one ends up
 with something like PARI, where everything just sits in the same parent.

 2) I personally don't like having to sprinkle expand and/or
 simplify all over the place. Now I don't think symbolics should be
 expanded automatically, but stuff like (1+sqrt(2))^2 or 1/(1+i)
 should be. It's like just getting the question back. (I guess I'm revealing
 my bias that I don't think of it as living in SR, but rather a number
 field...) On that note, I can't even figure out how to do simplify
 (sqrt(2)-3)/(sqrt(2)-1) in the symbolics...as opposed to

 sage: K.<sqrt2> = NumberField(x^2-2)
 sage: (sqrt2-3)/(sqrt2-1)
 -2*sqrt2 - 1
 sage: 1/(sqrt2-1)
 sqrt2 + 1

 You're going to have a hard time convincing me that the default behavior
 in Mathematica and Maple is wrong.  This makes sense for number theory
 but not for people using calculus.

 OK, this is a valid point, though the non-calculus portions (and
 emphasis) of Sage are (relatively) more significant. Sage is not a
 CAS, that is just one (important) piece of it.

 Maple does

    1/(1+I);
    1/2 - 1/2 I

[sage-devel] Re: coercing of sqrt(2)

2008-06-03 Thread Gary Furnish

On Tue, Jun 3, 2008 at 12:11 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 11:06 AM, Gary Furnish wrote:

 I think we had a discussion on irc about how homsets still got used
 for determining the result of something in parent1 op something in
 parent2 (maybe it was with someone else?)

 I sure hope not. If so, that needs to change (but I'm pretty sure it
 isn't).

Well, allegedly there is some function, new in this system, that
computes what parent the result lives in if you have parent1 op
parent2 (but I can't find said function).  Allegedly said function has
to use homsets, so performance is going to be horrible (and this makes
sense, because it's what I have to use now).  I consider homsets to be
a gigantic flaw in coercion that absolutely has to be fixed for me to
consider using more of the coercion system in symbolics.

 I'm also -1 for hard-coding
 knowledge and logic about ZZ,QQ, etc into the coercion model.  I am +1
 for hardcoding it into the elements of say, ZZ,QQ,RR and then having
 them call the coercion model only if those hardcodes can't figure the
 situation out.

 That sounds much better, though I'm still not a fan.

Sure, it's kind of ugly, but it will give us the performance we need,
and I don't see a better way to allow us to use coercions all over the
place without having performance drop to zero.

 On Tue, Jun 3, 2008 at 11:48 AM, Robert Bradshaw
 [EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 7:13 AM, Gary Furnish wrote:

 As long as there are classes in pure python that use MI on the
 critical path that symbolics has to use, the argument that
 coercion
 was written to be fast makes no sense to me.

 Not sure what you mean by MI here; could you explain? In any case,
 just because coercion isn't as fast as it could be doesn't mean that
 it's not written for speed, and it is much faster than it used to be. Of
 course there's room for improvement, but right now the focus is on
 getting the new system (which isn't really that new compared to the
 change made a year ago) in place.

 Sets, and in particular a bunch of the category functionality
 (homsets), get used in coercion and use MI, making them impossible to
 cythonize.

 Ah, yes. Homsets. They're not used anywhere in the critical path
 though. (If so, that should be fixed.)




 2) I personally don't like having to sprinkle expand and/or
 simplify all over the place. Now I don't think symbolics should be
 expanded automatically, but stuff like (1+sqrt(2))^2 or 1/(1+i)
 should be. It's like just getting the question back. (I guess I'm
 revealing my bias that I don't think of it as living in SR, but
 rather a number field...) On that note, I can't even figure out how
 to simplify (sqrt(2)-3)/(sqrt(2)-1) in the symbolics...as opposed to

 sage: K.<sqrt2> = NumberField(x^2-2)
 sage: (sqrt2-3)/(sqrt2-1)
 -2*sqrt2 - 1
 sage: 1/(sqrt2-1)
 sqrt2 + 1

 You're going to have a hard time convincing me that the default
 behavior in Mathematica and Maple is wrong.  This makes sense for
 number theory but not for people using calculus.

 OK, this is a valid point, though the non-calculus portions (and
 emphasis) of Sage are (relatively) more significant. Sage is not a
 CAS, that is just one (important) piece of it.

 Maple does

 1/(1+I);
   1/2 - 1/2 I

 I somewhat ignored 1/(1+i) (I agree there is an obvious
 simplification), but (x+1)^2 shouldn't get simplified under any
 circumstances.  This has little to do with speed (for this small an
 exponent) and everything to do with being consistent with the
 high-degree cases and keeping expressions uncluttered.

 I agree that (x+1)^2 shouldn't get simplified, but for me this has a
 very different feel than (1+I)^2 or (1+sqrt(2))^2.

 at least. Looking to the M's for ideas is good, but they should not
 always dictate how we do things--none but Magma has the concept of
 parents/elements, and Sage uses a very OO model which differs from
 all of them. Why doesn't it make sense for Mathematica/Maple? I think
 it's because they view simplification (or even deciding to simplify)
 as expensive.

 3) The coercion system works best when things start as high up
 the
 tree as they can, and the Symbolic Ring is like the big black
 hole at
 the bottom that sucks everything in (and makes it slow). There
 is a
 coercion from ZZ[sqrt(2)] (with embedding) to SR, but not the
 other
 way around, and even trying to cast the other way is
 problematic. I'd
 rather that matrix([1, 1/2, 0, sqrt(2)]) land in a matrix space
 over
 the a number field (rather than over SR), and ZZ['x'].gen() +
 sqrt(2)
 be an actual polynomial in x. Also, the SR, though very useful,
 somehow seems less rigorous (I'm sure that this is improving).

 When coercion is faster we can consider changing this.

 Coercion speed is irrelevant to the issues I mentioned here...
 and as
 coercion+number fields is *currently* faster than what you could
 hope
 to get with SR (the examples above

[sage-devel] Re: coercing of log(2)*1.0

2008-06-03 Thread Gary Furnish

When I personally use Mathematica etc., I often don't expand
expressions: x^20+20*x^19+... doesn't tell me much about where an
expression comes from, while (x+5)^20 tells me a bunch.  Expanding
expressions generally causes information loss for many calculus and
physics problems, and going overboard can be bad (although this isn't
an issue for number theory).  Furthermore, sqrt(2)*sqrt(3) is not
necessarily equal to sqrt(6), so not simplifying there is appropriate
in most circumstances.
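A quick check of that last point with Python's principal square roots
(plain complex math, not Sage symbolics):

import cmath

lhs = cmath.sqrt(-2) * cmath.sqrt(-3)  # i*sqrt(2) * i*sqrt(3)
rhs = cmath.sqrt((-2) * (-3))          # sqrt(6)
print(lhs)  # (-2.449...+0j), i.e. -sqrt(6)
print(rhs)  # (2.449...+0j),  i.e. +sqrt(6)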

On Tue, Jun 3, 2008 at 11:28 AM, Soroosh Yazdani [EMAIL PROTECTED] wrote:


 On Tue, Jun 3, 2008 at 3:09 AM, Gary Furnish [EMAIL PROTECTED] wrote:
 snip

 Your going to have a hard time convincing me that the default behavior
 in Mathematica and Maple is wrong.  This makes sense for number theory
 but not for people using calculus.

 Hmm, I must say from using maple on expressions like this, I found the
 answers that it gave were completely pointless at times, and I was forced to
 run expand and simplify many times, and use many tricks to get the answer in
 the most simplified form. At times, the easiest way was to evaluate the
 expressions, and then use LLL to get the expression back. Admittedly I am a
 number theorist, although at the time when I was using maple extensively, I
 was still in undergrad. When I started using MAGMA, the learning curve
 seemed considerably higher, but completely worth the trouble for algebraic
 expressions. The current proposal by Robert seems to be the best of both
 world for my taste. (And I agree with him that a lot of it is a matter of
 taste.)

 As an aside, one of my gripes teaching calculus is that the students don't
 simplify expressions like sqrt(2)*sqrt(3) or 1/(1+i), and I would prefer if
 SAGE doesn't contribute to such atrocity. :)

 Soroosh
 





[sage-devel] Re: coercing of sqrt(2)

2008-06-03 Thread Gary Furnish

On Tue, Jun 3, 2008 at 2:48 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 11:17 AM, Gary Furnish wrote:

 On Tue, Jun 3, 2008 at 12:11 PM, Robert Bradshaw
 [EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 11:06 AM, Gary Furnish wrote:

 I think we had a discussion on irc about how homsets still got used
 for determining the result of something in parent1 op something in
 parent2 (maybe it was with someone else?)

 I sure hope not. If so, that needs to change (but I'm pretty sure it
 isn't).

 Well allegedly there is some function that computes what parent
 something is in if you have parent1 op parentt2 that is new in this
 system (but I can't find said function).  Allegedly said function has
 to use Homsets, so performance is going to be horrible(and this makes
 sense, because its what I have to use now).

 When a new morphism is created it needs a parent, which is a Homset
 that may be looked up/created at that time. This is probably what you
 are talking about. However, this is a single (tiny) constant overhead
 over the entire runtime of Sage. I'm sure this could be further
 optimized, but creating all the homsets ZZ -> QQ -> RR -> CC, etc.
 will be done long before any symbolic code gets hit.

This is not what I'm talking about.  What I'm talking about is that
you can't access members of a homset without leaving Cython and using
Python.
 I consider homsets to be
 a gigantic flaw in coercion that absolutely have to be fixed for me to
 consider using more of the coercion system in symbolics.

 Ironically, other people see it as a plus that coercion has been
 given a more categorical founding.

No, I consider categories to be good.  I consider bad implementations
of categories to be bad.  Implementations that make extensive use of
MI, and are thus impossible to cythonize or access without Python dict
lookups, are not good implementations of categories.  If coercion were
implemented in 100% pure Cython code (with an eye for speed where it
is needed), I would be significantly less upset with it than I am now,
when people tell me that if I need more speed I'm out of luck.

 I'm also -1 for hard-coding
 knowledge and logic about ZZ,QQ, etc into the coercion model.  I
 am +1
 for hardcoding it into the elements of say, ZZ,QQ,RR and then having
 them call the coercion model only if those hardcodes can't figure
 the
 situation out.

 That sounds much better, though I'm still not a fan.

 Sure, it's kind of ugly, but it will give us the performance we need,
 and I don't see a better way to allow us to use coercions all
 over the place without having performance drop to zero.

 One may be able to eek out a bit more performance by doing this, but
 it's not as if performance is awful in the current model.

For the things you do, perhaps.  There is no code in Sage that pushes
the coercion system anywhere near as hard as symbolics does.
 - Robert


 





[sage-devel] Re: coercing of sqrt(2)

2008-06-03 Thread Gary Furnish

On Tue, Jun 3, 2008 at 4:39 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 3:08 PM, Gary Furnish wrote:


 On Tue, Jun 3, 2008 at 2:48 PM, Robert Bradshaw
 [EMAIL PROTECTED] wrote:

 When a new morphism is created it needs a parent, which is a Homset
 that may be looked up/created at that time. This is probably what you
 are talking about. However, this is a single (tiny) constant overhead
 over the entire runtime of Sage. I'm sure this could be further
 optimized, but creating all the homsets ZZ -> QQ -> RR -> CC, etc.
 will be done long before any symbolic code gets hit.

 This is not what I'm talking about.  What I'm talking about is that
 you can't access members of a homset without leaving Cython and using
 Python.

 Of course homsets could be optimized, but once created they aren't
 used in the coercion itself. I don't know why you'd need to access
 the members of homset unless you were doing some very high-level
 programming. The domain and codomain are cdef attributes of
 Morphisms, which are in Cython (and that took a fair amount of work
 as they was a lot of MI going on with them until a year ago).

 I consider homsets to be
 a gigantic flaw in coercion that absolutely have to be fixed for
 me to
 consider using more of the coercion system in symbolics.

 Ironically, other people see it as a plus that coercion has been
 given a more categorical founding.

 No, I consider categories to be good.  I consider bad implementations
 of categories to be bad.  Implementations that make extensive use of
 MI and are thus impossible to cythonize or access without py dict
 lookups are not good implementations of categories.

 That depends on what they are being used for, but categories lend
 themselves very naturally to multiple inheritance because of what
 they are mathematically. I understand wanting, e.g., arithmetic to
 be super fast, but I don't see the gain in hacks/code duplication to
 strip out multiple inheritance from and cythonize categories. Python
 is a beautiful language, but it comes with a cost (e.g. dict
 lookups). Other than initial homset creation, what would be faster
 (for you) if homsets and categories were written in Cython?

 If coercion was implemented with 100% pure Cython code (with an eye
 for speed where it is needed),

 The critical path for doing arithmetic between elements is 100% pure
 Cython code. The path for discovering coercions has Python in it, but
 until (if ever) all Parents are re-written in Cython this isn't going
 to be fully optimal anyways. And it only happens once (compared to
 the old model where it happened every single time a coercion was
 needed).

But I don't have elements, I have variables that represent elements.
Maybe the solution here is to run an_element on each parent, multiply
them (and in fact I do this as a last resort), and then get the parent
of the result, but this is a pretty bad way to do things, as it
requires constructing elements during coercion.  Furthermore, this
isn't even implemented in some classes, the default is written in
Python, etc.  Even if those issues were dealt with, having to multiply
two elements to figure out where the result lives is a bad design.

 Of course, much of the discover part could and should be optimized.
 But right now we've already got enough on our plate trying to get the
 changes we have pushed through. (Any help on this front would be
 greatly appreciated.)

 I would be significantly less upset with it than I am now,
 when people tell me that if I need more speed I'm out of luck.

 That's not the message I'm trying to send--I'm sure there's room for
 improvement (though I have a strong distaste for hard-coding special
 cases in all over the place). I don't think make everything Cython
 is going to solve the problem...
No, but special cases and writing my own fast coercion code seem to work
significantly better than trying to use your system.  This is
definitely a bad situation, because we shouldn't need another set of
coercion code to deal with the cases where the main coercion code
is too slow.  Either coercion has to be designed to be fast, or
symbolics is going to have to reach into the innards of the coercion
framework to make it possible to deal with things quickly.  It would be
even better if we could have SymbolicRing over RR or SymbolicRing over
ZZ or whatnot, but this is 100% impossible unless the coercion framework
is done 100% in Cython, with special cases, with speed as a top
priority instead of beauty.

 I'm also -1 for hard-coding
 knowledge and logic about ZZ,QQ, etc into the coercion model.  I
 am +1
 for hardcoding it into the elements of say, ZZ,QQ,RR and then
 having
 them call the coercion model only if those hardcodes can't figure
 the
 situation out.

 That sounds much better, though I'm still not a fan.

 Sure, its kindof ugly, but it will give us the performance we need,
 and I don't see a better way to allow us to use coercions all
 over the
 place without having

[sage-devel] Re: coercing of sqrt(2)

2008-06-03 Thread Gary Furnish

On Tue, Jun 3, 2008 at 7:21 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 3, 2008, at 4:36 PM, Gary Furnish wrote:

 That depends on what they are being used for, but categories lend
 themselves very naturally to multiple inheritance because of what
 they are mathematically. I understand wanting, e.g., arithmetic to
 be super fast, but I don't see the gain in hacks/code duplication to
 strip out multiple inheritance from and cythonize categories. Python
 is a beautiful language, but it comes with a cost (e.g. dict
 lookups). Other than initial homset creation, what would be faster
 (for you) if homsets and categories were written in Cython?

 If coercion was implemented with 100% pure Cython code (with an eye
 for speed where it is needed),

 The critical path for doing arithmetic between elements is 100% pure
 Cython code. The path for discovering coercions has Python in it, but
 until (if ever) all Parents are re-written in Cython this isn't going
 to be fully optimal anyways. And it only happens once (compared to
 the old model where it happened every single time a coercion was
 needed).

 But I don't have elements, I have variables that represent elements.
 Maybe the solution here is to run an_element on each parent, multiply
 them (and in fact I do this as a last resort), and then get the parent
 of the result, but this is a pretty bad way to do things, as it
 requires constructing elements during coercion.  Furthermore, this
 isn't even implemented in some classes, the default is written in
 Python, etc.  Even if those issues were dealt with, having to multiply
 two elements to figure out where the result lives is a bad design.

 Ah, yes. We talked about this before, and I implemented the analyse
 and explain functions which carefully avoid all Python:

 http://cython.org/coercion/hgwebdir.cgi/sage-coerce/rev/1a5c8ccfd0df
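For reference, the coercion model's explain output looks like this in
a Sage session (assuming the modern get_coercion_model entry point,
which may not match the 2008 code being discussed):

sage: cm = sage.structure.element.get_coercion_model()
sage: cm.explain(ZZ, QQ)
Coercion on left operand via
    Natural morphism:
      From: Integer Ring
      To:   Rational Field
Arithmetic performed after coercions.
Result lives in Rational Field
Rational Field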


 Of course, much of the discover part could and should be optimized.
 But right now we've already got enough on our plate trying to get the
 changes we have pushed through. (Any help on this front would be
 greatly appreciated.)

 I would be significantly less upset with it than I am now,
 when people tell me that if I need more speed I'm out of luck.

 That's not the message I'm trying to send--I'm sure there's room for
 improvement (though I have a strong distaste for hard-coding special
 cases in all over the place). I don't think make everything Cython
 is going to solve the problem...
 No, but special cases and writing my own fast coercion code seem to work
 significantly better than trying to use your system.

 Writing a huge number of special cases is almost always going to be
 faster, no doubt about it. The Sage coercion code needs to be
 extremely generic, as it handles all kinds of objects (and I'm excited
 to have symbolics that respect this).

 This is
 definitely a bad situation, because we shouldn't need another set of
 coercion codes that deal with the cases where the main coercion code
 is too slow.

 Yes.

 Either coercion has to be designed to be fast, or
 symbolics is going to have to reach into the innards of the coercion
 framework to make it possible to deal with things quickly.  It would be
 even better if we could have SymbolicRing over RR or SymbolicRing over
 ZZ or whatnot, but this is 100% impossible unless the coercion framework
 is done 100% in Cython, with special cases, with speed as a top
 priority instead of beauty.

 Would it be 95% possible if it's 95% written in Cython, with only a
 5% performance hit :-).

 The best balance I see is you can hard code ZZ/RR/... for speed if
 you want (you're worried about virtual function calls anyways), and
 then call off to coercion in the generic cases. You're going to have
 to do this a bit anyways, as the coercion model doesn't handle stuff
 like "where is the sin(x) if x is in R?", which is handled via the
 normal OO sense based on the type of x (and may depend on x).

 To help with collaboration, what you want out of the coercion model
 is, given R and S (and an operation op), what will be the result of
 op between elements of R and S, and you want this to be as fast as
 possible (e.g. 100% Cython, no homsets, ...).

Correct
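A toy sketch of that contract (generic_discover is hypothetical;
has_coerce_map_from exists on Sage parents in the new model): cheap
checks first, fully generic machinery only as a fallback.

def result_parent(R, S, generic_discover):
    if R is S:                     # same parent: one pointer compare
        return R
    if S.has_coerce_map_from(R):   # R embeds in S
        return S
    if R.has_coerce_map_from(S):   # S embeds in R
        return R
    return generic_discover(R, S)  # slow, fully generic path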

 - Robert


 





[sage-devel] Re: log vs log_b

2008-06-02 Thread Gary Furnish

We could easily change this so that by default we go ahead and coerce
the log(10) to RR if it's multiplied by something in RR (so that we
end up with a numeric result).  This is probably a good idea.
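A toy version of that rule (hypothetical classes, nothing like the real
calculus module): a symbolic log that collapses to a float as soon as
it is multiplied by one, which would turn the 1.44269504088896*log(10)
output quoted below into a number.

import math

class Log(object):
    def __init__(self, arg):
        self.arg = arg
    def __mul__(self, other):
        if isinstance(other, float):           # RR-ish operand: go numeric
            return math.log(self.arg) * other
        return NotImplemented
    __rmul__ = __mul__

print(Log(10) * 1.44269504088896)  # -> 3.3219280948..., i.e. log(10)/log(2)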

On Mon, Jun 2, 2008 at 12:05 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:


 On Jun 1, 2008, at 2:13 AM, Martin Albrecht wrote:


 I didn't even know there was a log_b, so I would be *very* happy
 to delete it.

  -- William

 They are not the same:

 sage: log_b(10,2)
 3.32192809489
 sage: log(10,2)
 log(10)/log(2)

 but log(10,2).n() is.

 I don't think we need a separate log_b command for that. However,
 this has always annoyed me:

 sage: log(10.0,2)
 3.32192809488736
 sage: log(10,2.0)
 1.44269504088896*log(10)

 - Robert

 





[sage-devel] Re: coercing of log(2)*1.0

2008-06-02 Thread Gary Furnish

-1. First, everything cwitty said is correct.  Second, if we start
using ZZ[sqrt(2)] and ZZ[sqrt(3)], then sqrt(2)+sqrt(3) requires going
through the coercion system which was designed to be elegant instead
of fast, so this becomes insanely slow for any serious use.  Finally,
this is going to require serious code duplication from symbolics, so
I'm not sure what the big gain is over just using symbolics to do this
in the first place.

On Mon, Jun 2, 2008 at 12:31 PM, Robert Bradshaw
[EMAIL PROTECTED] wrote:

 On Jun 2, 2008, at 9:47 AM, William Stein wrote:

 On Mon, Jun 2, 2008 at 9:45 AM, Carl Witty [EMAIL PROTECTED]
 wrote:

 On Jun 2, 9:17 am, William Stein [EMAIL PROTECTED] wrote:
 On Mon, Jun 2, 2008 at 1:30 AM, Henryk Trappmann
 But back to SymbolicRing and SymbolicConstant.
 I have the following improvement
 SUGGESTION: when creating sqrt(2) or other roots from integers,
 then
 assign to them the parent AlgebraicReal or AlgebraicNumber
 accordingly
 instead of the too general Symbolic Ring.

 That's definitely planned.

 Actually, if you mean that sqrt(2) should become the same as
 AA(sqrt(2)) is now, I'm not sure that's a good idea, for two reasons.
 First, AA and QQbar by design don't maintain enough information to
 print nicely.  (This could be improved somewhat from the current
 state, but not enough to compete with symbolic radical expressions.)
 Second, since AA and QQbar incorporate complete decision procedures,
 it is easy to construct examples where they are very, very slow; I
 think people would often be happier with the less complete but much
 faster techniques used in symbolics.

 I think the plan is that algebraic elements won't just be generic
 symbolic
 elements, e.g., sqrt(2) would be a generator for ZZ[sqrt(2)].  This
 has
 been discussed a few times.  I didn't mean that using AA or QQbar
 by default was precisely what is planned.


 Yep. Specifically, the plan is for sqrt(2) to become an element of
 ZZ[sqrt(2)] *with* an embedding into RR (so stuff like RR(sqrt(2)) or
 even 1.123 + sqrt(2) works). We would want to use very nice AA/QQbar
 code to compute, say, sqrt(2) + sqrt(3) (the result would live in a
 specific number field with embedding). (Nice) number fields with
 embedding would coerce into SR.

 - Robert
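
A sketch of how that plan might look in a session (illustrative only; the
embedding keyword and the coercions shown are exactly the part being
proposed, not behavior that existed at the time):

    sage: K.<s2> = QuadraticField(2, embedding=1.414)
    sage: RR(s2)             # the chosen embedding makes this conversion work
    1.41421356237310
    sage: 1.123 + s2         # coerces into RR via the embedding
    2.53721356237310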


 





[sage-devel] Re: log vs log_b

2008-05-31 Thread Gary Furnish

A bunch of the stuff in misc.functional is junk... I've deleted a
bunch of it in my symbolics rewrite without any issues.

On Sat, May 31, 2008 at 4:04 PM, William Stein [EMAIL PROTECTED] wrote:

 On Sat, May 31, 2008 at 2:32 PM, John H Palmieri [EMAIL PROTECTED] wrote:

 What is the role of log_b (from sage.misc.functional)?  It looks like
 log (from sage.calculus.calculus) does everything log_b does.  Should
 perhaps log_b not be used anymore?  That is, is it a good idea to
 delete

  from functional import log as log_b

 ?

 I didn't even know there was a log_b, so I would be *very* happy
 to delete it.

  -- William

 P.S. Of course somebody will probably check the revision history
 of Sage and point out that I wrote log_b...

 





[sage-devel] Re: log vs log_b

2008-05-31 Thread Gary Furnish

The things in misc.functional are covered up by other imports in
general.  They only are usable if you change the import order in
all.py in major ways, and no other code in Sage uses them.  In fact,
there is code in there that is not doctested and doesn't even work.
In general I agree with you, but in this specific case I believe this
is 100% safe.

On Sat, May 31, 2008 at 4:40 PM, mabshoff [EMAIL PROTECTED] wrote:

 On Jun 1, 12:27 am, Gary Furnish [EMAIL PROTECTED] wrote:

 Hi Gary,

 A bunch of the stuff in misc.functional is junk... I've deleted a
 bunch of it in my symbolics rewrite without any issues.

 Just because you didn't have any problem there does not mean we can
 just delete things willy-nilly. I would very strongly recommend that
 we write down some deprecation procedure and make it mandatory that
 such procedures are followed. If you remove some function now and a
 month from now somebody's code breaks because we removed it, and the
 person either skipped a couple of upgrades or did not read all the
 details in the release notes, *we developers* look like idiots for
 breaking the code. Now, if it printed warnings per default for the
 next six months and then was removed, I am 100% fine with that.
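
A minimal sketch of the kind of deprecation shim being asked for here
(hypothetical helper, not an existing Sage API; jacob/gradient refer to the
example below):

    import warnings

    def deprecated_alias(new_func, old_name, removal_version):
        # Keep the old name alive, but warn on every call so users get
        # a full release cycle to migrate before the name disappears.
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated and will be removed in Sage %s; "
                          "use %s instead"
                          % (old_name, removal_version, new_func.__name__),
                          DeprecationWarning)
            return new_func(*args, **kwargs)
        return wrapper

    # e.g. keep the old jacob() name working while pointing at gradient():
    # jacob = deprecated_alias(gradient, "jacob", "3.2")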

 And I am not talking about some what if situation, but

 #3253 f.jacob() used to work to compute jacobian ideal. Now it
 doesn't

 illustrates the problem. The jacob() function was renamed in 2.10.3 to
 gradient(), but in 3.0.2 somebody noticed that his old code was broken
 all of a sudden.

 SNIP

 Cheers,

 Michael
 





[sage-devel] Re: coercing of log(2)*1.0

2008-05-31 Thread Gary Furnish


 But if so then I want to have something like SymbolicNumber which is
 the subset of SymbolicRing that does not contain variables. And that
 this SymbolicNumber is coerced automatically down when used with
 RealField.

There are really, really severe issues with coercion out of the
SymbolicRing because of assumptions that are made by the coercion
system.  If this was allowed, one could end up with situations where
the code would try to coerce complex numbers into RR, and this should
not be allowed.  I'm well aware this is not ideal, it is just that my
hands are largely tied by the coercion system.  I'd like to find a
better answer here (and this really becomes less of an issue with the
more advanced features in the new symbolics system I'm working on),
but this is a long term project to get coercion out of SR working.
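
An illustration of the hazard described above (illustrative session; the
exact error text may differ):

    sage: z = SR(I) + 1      # a non-real element of the Symbolic Ring
    sage: RR(z)              # coercion out of SR must be able to fail
    Traceback (most recent call last):
    ...
    TypeError: unable to coerce to a real number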




[sage-devel] Re: log vs log_b

2008-05-31 Thread Gary Furnish

+1

On Sat, May 31, 2008 at 5:15 PM, mabshoff [EMAIL PROTECTED] wrote:



 On Jun 1, 1:02 am, William Stein [EMAIL PROTECTED] wrote:
 On Sat, May 31, 2008 at 3:56 PM, Gary Furnish [EMAIL PROTECTED] wrote:

  The things in misc.functional are covered up by other imports in
  general.  They only are usable if you change the import order in
  all.py in major ways, and no other code in Sage uses them.  In fact
  there is code in there that is not doctested that doesn't even work.
  In general I agree with you, but in this specific case I believe this
  is 100% safe.

 Yes, misc.functional is a very, very old dumping-ground file, where I put
 stuff that I wanted to call with functional notation (instead of
 object-oriented method notation).  So I'm not surprised a lot of it is
 irrelevant and not even imported into sage.all.

 Ok, so I am concluding the same thing as William and would be more
 specific about deprecation procedures only for functions imported per
 default. Anything that is broken and has been broken for a while
 should just be removed.

 Thoughts?

 William

 Cheers,

 Michael
 





[sage-devel] Re: coercing of log(2)*1.0

2008-05-31 Thread Gary Furnish

-1 To this idea.  Better to try to improve the coercion system to
support something that interacts with symbolics better.

On Sat, May 31, 2008 at 5:23 PM, William Stein [EMAIL PROTECTED] wrote:

 On Sat, May 31, 2008 at 3:33 PM, Jason Grout
 [EMAIL PROTECTED] wrote:

 Henryk Trappmann wrote:
 On May 31, 10:55 pm, Carl Witty [EMAIL PROTECTED] wrote:
 Actually, there's no homomorphism either way;
 RR(R2(2)+R2(3)) != RR(R2(2)) + RR(R2(3))

 Hm, that's an argument. I somehow thought that it was closer to a
 homomorphism, but perhaps this reasoning has no basis.

 IMHO, giving a+b the precision of the less-precise operand is better
 than using the more-precise operand, because the latter has too much
 chance of confusing people who don't understand floating point.  For
 instance, if 1.3+RealField(500)(pi) gave a 500-bit number, I'll bet a
 lot of people would assume that this number matched 13/10+pi to almost
 500 bits.
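
The less-precise rule Carl endorses is what the coercion of mixed
precisions does, e.g. (illustrative session):

    sage: parent(1.3 + RealField(500)(pi))
    Real Field with 53 bits of precision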

 Hm, yes, but this binary/decimal conversion is another topic; I mean,
 nobody would assume that 0.333 coerced to 500 *decimal* digits
 matches 1/3 to 500 digits. Anyway, I wondered why one cannot specify a
 base in RealField, or did I merely overlook the option?

 Of course, maybe there are other choices that are better than either
 of these.  We could throw away RealField and always use
 RealIntervalField instead; but that's slower, has less special
 functions implemented, and has counterintuitive semantics for
 comparisons.  We could do pseudo-interval arithmetic, and say that
 1e6+R2(3) should be a 20-bit number, because we know about 20 bits of
 the answer; but to be consistent, we should do similar pseudo-interval
 arithmetic even if both operands are the same precision,

 At least RR would then be a ring ;)

 and then RR
 becomes less useful for people who do understand how floating point
 works and want to take advantage of it.

 Ya, I don't want to change floating point; it just seems somewhat
 arbitrary to me to coerce down precision while coercing up precision
 with integers. Of course, if you consider integers as having precision
 infinity (as indeed there are no rounding errors and it is an exact
 ring), then this makes sense again.

 But if so then I want to have something like SymbolicNumber which is
 the subset of SymbolicRing that does not contain variables. And that
 this SymbolicNumber is coerced automatically down when used with
 RealField.

 I hope this is not superfluous, but there is a SymbolicConstant class,
 which seems to be automatically used when dealing with numbers and not
 variables:

 sage: type(SR(3))
 <class 'sage.calculus.calculus.SymbolicConstant'>
 sage: type(1.0*SR(3))
 <class 'sage.calculus.calculus.SymbolicConstant'>

 The parent of SymbolicConstant is SymbolicRing.  The parent
 is what matters for coercion.

 sage: parent(SR(3))
 Symbolic Ring

 It would be conceivable that we could have a SymbolicConstant
 ring that is a subring of the full SymbolicRing.  I haven't thought
 through at all how this would work, but it might address the
 OP's confusion.  Note that it would be problematic in Sage, since,
 e.g., if x = var('x'), then x + 1 - x is the symbolic 1, so we have
 that the sum of two elements of SR would be in SC, i.e., rings
 wouldn't be closed under addition.  This would take a lot of care
 to actually do.  I'm not saying any of this should be done.

 William

 





[sage-devel] Trac #3276 + Maxima-isms

2008-05-23 Thread Gary Furnish

With the symbolics rewrite moving quickly, I'd like to request that, if
possible, people try to avoid adding more Maxima-isms to
sage.calculus.  Specifically, if functionality is being added, please
try to keep it general, in that the design of the functionality is
not dictated by what Maxima does.  Functions that pass arguments
directly to Maxima are especially bad, as I either have to break API
compatibility or write complicated, expensive (to write and to
execute) parsers to allow backwards compatibility, and I'm not a very
big fan of having to remove interfaces that were added less than a
month before.  This applies primarily to things like assume(x,
analytic) or other cases where we are just passing a symbolic
expression directly to Maxima without any real internal logic (which
is also bad because it means the code isn't really doing any error
checking either).
Thanks,
Gary
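
A hypothetical contrast of the two styles (all names below are invented for
illustration; maxima stands for Sage's pexpect interface to Maxima):

    # Maxima-ism: the argument goes straight through to Maxima, so the
    # API is tied to Maxima's syntax and nothing is validated.
    def assume_bad(var, prop):
        maxima.eval("assume(%s, %s)" % (var, prop))

    # Backend-neutral: validate against a known vocabulary and record
    # the assumption on the Sage side before any backend sees it.
    KNOWN_PROPERTIES = set(["real", "positive", "integer", "analytic"])

    def assume_good(var, prop):
        if prop not in KNOWN_PROPERTIES:
            raise ValueError("unknown assumption: %r" % prop)
        var._assumptions.add(prop)  # hypothetical Sage-side record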



[sage-devel] Re: Sage 3.0.2.rc0 released!

2008-05-23 Thread Gary Furnish

#3291 fixes this issue and will be merged in RC3/release, in case
anyone else is having this issue.

On Fri, May 23, 2008 at 3:56 PM, Jaap Spies [EMAIL PROTECTED] wrote:

 Jaap Spies wrote:
 mabshoff wrote:

 On May 23, 6:44 pm, John Cremona [EMAIL PROTECTED] wrote:
 All tests passed! with rc0 + the dsage fix manually applied.

 John
 Cool, I have released

 http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.2/sage-3.0.2.rc2.tar

 which fixes three issues:

 #3279: Michael Abshoff: Sage 3.0.2.rc0: Copy dsage_* scripts from the
 scrips.spkg
 #3280: Michael Abshoff: Sage 3.0.2.rc0: fix rebuild Sage documentation
 issues
 #3281: Gary Furnish: libecm fails to pbuild

 I verified with a fresh build that #3279 is truly fixed, so let's hope
 for the best.


 pbuild did it on Fedora 9, 32 bits with 2 cpu's.
 Now testing.


 [EMAIL PROTECTED] sage-3.0.2.rc2]$ ./sage
 ----------------------------------------------------------------------
 | SAGE Version 3.0.2.rc2, Release Date: 2008-05-23                    |
 | Type notebook() for the GUI, and license() for information.         |
 ----------------------------------------------------------------------

 ---------------------------------------------------------------------------
 ImportError   Traceback (most recent call last)

 /home/jaap/work/downloads/sage-3.0.2.rc2/local/bin/ipython console in 
  <module>()

 /home/jaap/downloads/sage-3.0.2.rc2/local/lib/python/site-packages/sage/all_cmdline.py
  in <module>()
  12 try:
  13
 --- 14 from sage.all import *
  15 from sage.calculus.predefined import x
  16 preparser(on=True)

 /home/jaap/downloads/sage-3.0.2.rc2/local/lib/python/site-packages/sage/all.py
  in <module>()
  60 from sage.misc.sh import sh
  61
 --- 62 from sage.libs.all   import *
  63
  64 get_sigs()

 /home/jaap/downloads/sage-3.0.2.rc2/local/lib/python/site-packages/sage/libs/all.py
  in <module>()
  10 from sage.libs.pari.all   import pari, pari_gen, allocatemem, 
 PariError
  11
 --- 12 from sage.libs.mwrank.all  import (mwrank_EllipticCurve, 
 mwrank_MordellWeil,
  13mwrank_initprimes,
  14set_precision as 
 mwrank_set_precision)

 /home/jaap/downloads/sage-3.0.2.rc2/local/lib/python/site-packages/sage/libs/mwrank/all.py
  in <module>()
   8set_precision)
   9
 --- 10 from mwrank import initprimes as mwrank_initprimes
  11
  12

 ImportError: 
 /home/jaap/downloads/sage-3.0.2.rc2/local/lib/python2.5/site-packages/sage/libs/mwrank/mwrank.so:
  undefined symbol: mw_del

 sage: 1+1
 ---------------------------------------------------------------------------
 NameError Traceback (most recent call last)

 /home/jaap/work/downloads/sage-3.0.2.rc2/local/bin/ipython console in 
  <module>()

 NameError: name 'Integer' is not defined

 sage:



 Jaap


 





[sage-devel] The Symbolics Feature Request thread

2008-05-22 Thread Gary Furnish

So after a discussion on irc about how log2(8) should evaluate to 3 by
default, I thought I'd start taking feature requests for the symbolics
rewrite I'm currently working on.  The current list is (with many of
these already done):
Symbolics must not rely on Maxima for most operations.
Symbolics must be fast.
Symbolics must be easier to understand.
Autosimplification of Constant==Constant to True or False.
Support for substitution of vectors/matrices.
Support for variables and expressions in arbitrary parents (yes, this
means that you can have x \in Z/5, for example).
Support for noncommutative multiplication (with simplification in some
cases).
Support for autodeduction of types (e.g. if x \in RR and y \in CC, then
x*y might not be real, so don't simplify im(x*y), whereas if
y \in RR it's safe to simplify im(x*y) -> 0; see the sketch below).
Im(x) simplifies to 0 IFF x is real (and so on), unlike now, when Im(x)
-> 0 unconditionally.
Better support for unevaluated functions and piecewise expressions.
Better assumption handling.
Support for more simplification options (such as in Mathematica's
FullSimplify command).
Function(Constant) should auto-evaluate for integers if the result is
exact.
Support for calling functions as cpdef/cdef functions instead of using
dictionary lookups.
Better support for multivariable calculus and differential geometry.
Support for representing numbers such as 7^10^10^2 symbolically if
they do not fit in memory.
I'm sure I forgot a few things I'm working on, but I was interested in
knowing if anyone had any annoyances about the current system that
they'd like fixed, especially as many of them will be easier to
tackle now.
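
A hypothetical session showing two of the requested items, typed variables
and autodeduction of types (the syntax is invented; none of this existed at
the time):

    sage: x = var('x', domain=RR)   # declare x real
    sage: y = var('y', domain=CC)
    sage: im(x)                     # x is known to be real
    0
    sage: im(x*y)                   # x*y may be non-real: left unevaluated
    im(x*y)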




[sage-devel] Re: 3.0.1 pbuild is much slower

2008-05-06 Thread Gary Furnish

Try building with SAGE_BUILD_THREADS=2.  If you are building with 3
threads on a 2-cpu system, you could be seeing some kind of
cache/scheduler issue, as pbuild fully saturates the threads it
launches to 100% cpu usage.
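
For example, from a clean source tree:

    export SAGE_PBUILD=yes
    export SAGE_BUILD_THREADS=2
    make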

On Tue, May 6, 2008 at 2:05 AM, Dan Drake [EMAIL PROTECTED] wrote:
 I have access to a dual processor machine, so I thought I'd try out the
  new pbuild stuff in 3.0.1. Unfortunately, exporting SAGE_BUILD_THREADS=3
  and SAGE_PBUILD=yes resulted in a *slower* build:


   real205m21.375s
   user184m45.427s
   sys 24m50.515s

  was at the bottom after running `make'. When I cleared those variables
  and ran `make' in a clean tree, I got

   real144m59.654s
   user111m43.727s
   sys 18m36.220s

  When the supposed pbuild was running, I ran top in another window and it
  seemed like it was never actually running more than one process at a
  time; top consistently reported about 50% idle and a load average near
  1.

  Below are some specs for the machine. Any thoughts on why it isn't
  actually doing a parallel build?


  It's a Fedora 7 box; uname -a gives

  Linux vinh505.math.umn.edu 2.6.23.15-80.fc7 #1 SMP Sun Feb 10 17:29:10
  EST 2008 i686 athlon i386 GNU/Linux

  and /proc/cpuinfo is

  processor   : 0
  vendor_id   : AuthenticAMD
  cpu family  : 6
  model   : 8
  model name  : AMD Athlon(tm) MP 2600+
  stepping: 1
  cpu MHz : 2133.460
  cache size  : 256 KB
  fdiv_bug: no
  hlt_bug : no
  f00f_bug: no
  coma_bug: no
  fpu : yes
  fpu_exception   : yes
  cpuid level : 1
  wp  : yes
  flags   : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov
  pat pse36 mmx fxsr sse syscall mp mmxext 3dnowext 3dnow ts
  bogomips: 4270.04
  clflush size: 32

  processor   : 1
  vendor_id   : AuthenticAMD
  cpu family  : 6
  model   : 8
  model name  : AMD Athlon(tm) Processor
  stepping: 1
  cpu MHz : 2133.460
  cache size  : 256 KB
  fdiv_bug: no
  hlt_bug : no
  f00f_bug: no
  coma_bug: no
  fpu : yes
  fpu_exception   : yes
  cpuid level : 1
  wp  : yes
  flags   : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov
  pat pse36 mmx fxsr sse syscall mp mmxext 3dnowext 3dnow ts
  bogomips: 4266.20
  clflush size: 32



  --
  ---  Dan Drake [EMAIL PROTECTED]
  -  KAIST Department of Mathematical Sciences
  ---  http://math.kaist.ac.kr/~drake







[sage-devel] Re: Informal: 3.0.1.final released!

2008-05-04 Thread Gary Furnish

This is bug #3097 -- http://trac.sagemath.org/sage_trac/ticket/3097

The patches do fix the problem for the parallel build case (at least
for me), but aren't positively reviewed yet for the slowbuild case.

On Sun, May 4, 2008 at 3:50 PM, John Cremona [EMAIL PROTECTED] wrote:

  Build ok but --testall still stalling here (using PBUILD):

  sage -t  devel/sage/sage/dsage/tests/testdoc.py

  [EMAIL PROTECTED] -a
  Linux host-57-71 2.6.18.8-0.3-default #1 SMP Tue Apr 17 08:42:35 UTC
  2007 x86_64 x86_64 x86_64 GNU/Linux
  [EMAIL PROTECTED] | grep SAGE
  SAGE_BUILD_THREADS=2
  SAGE_PBUILD=yes

  John

  2008/5/4 mabshoff [EMAIL PROTECTED]:


 
Here we go with 3.0.1.final. pbuild should work, but there
is still a known failure while doctesting (see #3098). 3.0.1
official sources will be rolled out in a couple hours and
binaries should follow tomorrow. The official announcement
will go out once we have binaries and mirrored them out.
  
 Sources & binaries in the usual space, i.e. check out
  
http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.1/
  
Cheers,
  
Michael
  
Merged in final:
  
#3010: Michael Abshoff: Numerical noise doctest failure in
  rings/complex_double.pyx
#3085: Jason Grout: fix identity matrix docs
#3087: Tim Abbott: Fix stupid mistakes in Debian palp copyright
  files
#3088: Tim Abbott: Fixes for Debian gfan build
#3092: Tim Abbott: Debian Singular permissions fixes
#3093: Tim Abbott: Debian lcalc package missing -DINCLUDE_PARI
  flag
#3094: Tim Abbott: Update to SAGE Debian packaging
#3095: Lars Fischer, Michael Abshoff: Notebook, Documentation
  of DATA has a small error
#3096: Michael Abshoff: Fix documentation rebuild issues for
  Sage 3.0.1.final
#3098: Willem Jan Palenstijn, William Stein: doctest failure
  in devel/sage/sage/rings/ring.pyx
#3101: Gary Furnish, Michael Abshoff: pbuild: mwrank.so needs
  to be build as a C++ extension

  

  





[sage-devel] Re: Sage 3.0.1-alpha1 released!

2008-05-01 Thread Gary Furnish

If you export SAGE_PBUILD=yes before you run make (so that it uses
pbuild from the beginning) it is necessary to link site-packages
yourself with the following command from $SAGE_ROOT:
ln -s devel/sage/build/sage/ local/lib/python/site-packages/sage

This will be fixed in rc0.

On Thu, May 1, 2008 at 12:58 AM, mabshoff
[EMAIL PROTECTED] wrote:

  Hello,

  this release should have been out two days ago, but somehow
  general slowness and me spending a lot of time on various
  porting issues did delay this release more than it should
  have. Gary's pbuild should now be fully functional, but it
  wasn't made the default build system yet. If you run

  export SAGE_PBUILD=yes

  before building Sage pbuild will be used. All the various
  sage [-b|-ba|-br|...] should work, as should -clone & friends.
  The number of threads used during the build is set via

  export SAGE_BUILD_THREADS=8

  for eight threads. Switching between pbuild and normal
  build is not possible in all cases, so in case of problems
  nuke the build directory on devel/sage. Please try out
  pbuild and report any trouble. We *really* want to switch
  to it per default soon.

  I am currently seeing an odd doctest failure in

  doctest failure in devel/sage/sage/server/simple/twist.py

  Other than that things should just work.

  Sources and binaries are in the usual place.

  
 http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.1/sage-3.0.1.alpha1.tar

  
 http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.1/sage-3.0.1.alpha0-sage.math-only-x86_64-Linux.tar.gz

  We are getting toward the end of the release cycle

  Merged in alpha1:

  #1549: Alex Ghitza: Sage 2.9: fix optional doctests in tut.tex
  #2216: Alex Ghitza: Creating an order in a number field --
infinite loop?
  #2504: Alex Ghitza: number field .units() method caches
proof=False result and returns it for proof=True
  #2523: Craig Citro: bug in modular symbols for GammaH subgroup
  #2716: Marshall Hampton: convex hulls and polyhedral functions
  #2741: William Stein, Timothy Clemans: Implement mesh lines
in 3d plots
  #2938: Craig Citro: Fix ModularSymbols(GammaH(8,[3])).decomposition() &
ModularSymbols(GammaH(81, [10])).decomposition();
  #3029: Tim Abbott: Move DEB_AUTO_UPDATE_DEBIAN_CONTROL out
of Debian packages
  #3030: Gary Furnish: Cython working directory command line
option patch
  #3031: Kiran Kedlaya, Craig Citro:  Add zeta_function method
for schemes
  #3032: Dan Bump: minor docstring cleanup in crystals.py and
tensor_product.py
  #3034: Tim Abbott: improved cleaning code for Debian packages
  #3036: Tim Abbott: SAGE_TESTDIR broken
  #3037: Gary Furnish, Robert Bradshaw: update cython to
0.9.6.13-20080426
  #3038: Tim Abbott: SAGE setup.py fixes for using Debian
packaged polybori, zn_poly
  #3039: Tim Abbott: Improve auto-generated version numbers
for Debian packages
  #3041: Francois Bissey, Michael Abshoff: optimization setting
in LinBox.spkg is broken
  #3054: Jason Grout: copying a graph doesn't copy _pos or
_boundary
  #3055: Jason Grout: creating subgraph does not delete _pos
entries
  #3057: Tom Boothby: MPolynomialRing_generic type-checks to
determine commutativity
  #3059: William Stein, Timothy Clemans: notebook -- rewrite
notebook(...) function to *not* use SSL by default
  #3061: Michael Abshoff, Max Murphy: use readlink and realpath
so that symlinking sage works
  #3063: Didier Deshommes: empty matrices: norm() returns a
ValueError
  #3064: Didier Deshommes: empty matrices: density() function
throws a ZeroDivisionError
  #3066: Didier Deshommes: empty matrices: gram_schmidt() throws
a NameError
  #3067: Didier Deshommes: matrices: numeric_array() is missing
an import

  Merged in alpha0:

  #783: Alex Ghitza: dilog is lame
  #1187: Alex Ghitza: bug in G.conjugacy_classes_subgroups()
  #1921: Alex Ghitza, Mike Hansen: add random_element to groups
  #2302: Michael Abshoff, William Stein, Scot Terry: Add a 64
bit glibc 2.3 based binary of Sage to the default
build platforms
  #2325: David Roe, Kiran Kedlaya: segfault in p-adic extension()
method
  #2821: Alex Ghitza: get rid of anything about this document
sections of any sage docs that say send email to stein
  #2939: David Joyner: piecewise.py improvements (docstring and
laplace fixes)
  #2985: Michael Abshoff: ITANIUM (RHEL 5) -- bug in rubik.py's
OptimalSolver()
  #2993: Michael Abshoff: OSX/gcc 4.2: disable padlock support
per default
  #2995: Alex Ghitza: some new functionality and doctests for
congruence subgroups
  #3003: Jason Bandlow: Bugfix for to_tableau() method of
CrystalOfTableaux elements
  #3005: Craig Citro: modabar -- failure to compute endomorphism
ring
  #3006: David Joyner: missing elliptic integrals in special.py

[sage-devel] Re: Sage 3.0.1-alpha1 released!

2008-05-01 Thread Gary Furnish

The symlink is needed only after make completes when you try to run
sage, not to build it.

On Thu, May 1, 2008 at 11:18 AM, John Cremona [EMAIL PROTECTED] wrote:

  Gary, that does not work!

  [EMAIL PROTECTED] SAGE_PBUILD=yes
  [EMAIL PROTECTED] -s devel/sage/build/sage/ 
 local/lib/python/site-packages/sage
  ln: creating symbolic link `local/lib/python/site-packages/sage': No
  such file or directory

  This is in a freshly unpacked SAGE_ROOT, which does not yet have even
  a devel subdirectory:

  [EMAIL PROTECTED]
  /home/jec/sage-3.0.1.alpha1
  [EMAIL PROTECTED]
  COPYING.txt  example.sage  HISTORY.txt  makefile  README.txt  sage
  sage-README-osx.txt  spkg


  Pity, as I was looking forward to speeding up build time on a
  4-processor machine!

  John

  2008/5/1 Gary Furnish [EMAIL PROTECTED]:


 
If you export SAGE_PBUILD=yes before you run make (so that it uses
pbuild from the beginning) it is necessary to link site-packages
yourself with the following command from $SAGE_ROOT:
ln -s devel/sage/build/sage/ local/lib/python/site-packages/sage
  
This will be fixed in rc0.
  
  
  
On Thu, May 1, 2008 at 12:58 AM, mabshoff
[EMAIL PROTECTED] wrote:

  Hello,

  this release should have been out two days ago, but somehow
  general slowness and me spending a lot of time on various
  porting issues did delay this release more than it should
  have. Gary's pbuild should now be fully functional, but it
  wasn't made the default build system yet. If you run

  export SAGE_PBUILD=yes

  before building Sage pbuild will be used. All the various
  sage [-b|-ba|-br|...] should work as should -clone  friends.
  The number of threads used during the build is set via

  export SAGE_BUILD_THREADS=8

  for eight threads. Switching between pbuild and normal
  build is not possible in all cases, so in case of problems
  nuke the build directory on devel/sage. Please try out
  pbuild and report any trouble. We *really* want to switch
  to it per default soon.

  I am currently seeing an odd doctest failure in

  doctest failure in devel/sage/sage/server/simple/twist.py

  Other than that things should just work.

  Sources and binaries are in the usual place.

  
 http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.1/sage-3.0.1.alpha1.tar

  
 http://sage.math.washington.edu/home/mabshoff/release-cycles-3.0.1/sage-3.0.1.alpha0-sage.math-only-x86_64-Linux.tar.gz

  We are getting toward the end of the release cycle

  Merged in alpha1:

  #1549: Alex Ghitza: Sage 2.9: fix optional doctests in tut.tex
  #2216: Alex Ghitza: Creating an order in a number field --
infinite loop?
  #2504: Alex Ghitza: number field .units() method caches
proof=False result and returns it for proof=True
  #2523: Craig Citro: bug in modular symbols for GammaH subgroup
  #2716: Marshall Hampton: convex hulls and polyhedral functions
  #2741: William Stein, Timothy Clemans: Implement mesh lines
in 3d plots
  #2938: Craig Citro: Fix ModularSymbols(GammaH(8,[3])).decomposition() &
ModularSymbols(GammaH(81, [10])).decomposition();
  #3029: Tim Abbott: Move DEB_AUTO_UPDATE_DEBIAN_CONTROL out
of Debian packages
  #3030: Gary Furnish: Cython working directory command line
option patch
  #3031: Kiran Kedlaya, Craig Citro:  Add zeta_function method
for schemes
  #3032: Dan Bump: minor docstring cleanup in crystals.py and
tensor_product.py
  #3034: Tim Abbott: improved cleaning code for Debian packages
  #3036: Tim Abbott: SAGE_TESTDIR broken
  #3037: Gary Furnish, Robert Bradshaw: update cython to
0.9.6.13-20080426
  #3038: Tim Abbott: SAGE setup.py fixes for using Debian
packaged polybori, zn_poly
  #3039: Tim Abbott: Improve auto-generated version numbers
for Debian packages
  #3041: Francois Bissey, Michael Abshoff: optimization setting
in LinBox.spkg is broken
  #3054: Jason Grout: copying a graph doesn't copy _pos or
_boundary
  #3055: Jason Grout: creating subgraph does not delete _pos
entries
  #3057: Tom Boothby: MPolynomialRing_generic type-checks to
determine commutativity
  #3059: William Stein, Timothy Clemans: notebook -- rewrite
notebook(...) function to *not* use SSL by default
  #3061: Michael Abshoff, Max Murphy: use readlink and realpath
so that symlinking sage works
  #3063: Didier Deshommes: empty matrices: norm() returns a
ValueError
  #3064: Didier Deshommes: empty matrices: density() function
throws a ZeroDivisionError
  #3066: Didier Deshommes: empty matrices: gram_schmidt() throws
a NameError

[sage-devel] Re: fast vs viable (offline post)

2008-04-30 Thread Gary Furnish

   - It suffers from the "I can do it better, do-it-yet-again-in-python"
syndrome, where it will be discovered that python is too slow
so we need to rewrite it in Cython and do obscure, undocumented,
performance-enhancing software hacks.

Unfortunately computers live in the physical world, not the
theoretical world.  Ugly performance enhancing software hacks are
often needed to make theoretical algorithms viable.

   - It suffers from the OpenMath communication issue (e.g. if you
take an Axiom expression, export it to maple, compute with it,
and re-import it to Axiom you have violated a lot of type
assumptions in Axiom, possibly violated branch cut assumptions
(e.g. acosh), done invalid simplifications, and any number of
violent mathematical mistakes)

If merely by exporting the data and re-importing it you have violated
assumptions, then Axiom is broken and needs a better exporting system.




[sage-devel] Re: fast vs viable (offline post)

2008-04-30 Thread Gary Furnish

My point was that information on branch cuts should either A) be
publicly available or B) preferably available as an export option.
Mathematica and Maple both do A.  Perhaps B is the better answer for
open systems.  In any event I stand by my point that this is only an
issue because people have a tendency to ignore branch cuts; it is not
actually a flaw inherent in aggregating many systems, and there are no
technical reasons it couldn't be handled better.

On Wed, Apr 30, 2008 at 6:39 PM, root [EMAIL PROTECTED] wrote:

 - It suffers from the OpenMath communication issue (e.g. if you
  take an Axiom expression, export it to maple, compute with it,
  and re-import it to Axiom you have violated a lot of type
  assumptions in Axiom, possibly violated branch cut assumptions
  (e.g. acosh), done invalid simplifications, and any number of
  violent mathematical mistakes)
  
  If merely by exporting the data and re-importing it you have violated
  assumptions, then Axiom is broken and needs a better exporting system.

  Well, that's something of the issue, actually.

  Suppose we're looking at an inverse function that uses branch cuts.
  System A uses cut 1, say $-\pi < x < \pi$
  System B uses cut 2, say $0 < x < 2\pi$

  Suppose you take a result from System A:

   x=A.getResult()

  simplify it with System B

   y=B.simplify()

  and hand it back to System A

   A.compute(y)

  Trigonometric simplification formulas depend on the branch cuts.
  Thus, the simplification performed in B, while perfectly valid under
  the branch cut assumptions in System B, may not be valid under the
  branch cut assumptions in System A.

  You get a final answer (assuming a LONG chain of using a lot of
  available systems in Sage). Is the answer correct?

  Do all of the subsystems in Sage that use transcendental functions
  use the same choice of branch cuts in all their routines? Frankly,
  I'm not sure how to begin to answer the question because (a) most
  (all?) of the systems do NOT document their branch cut assumptions
  and (b) I'd likely not be able to follow the logic of so many
  systems through their simplifier.

  Is this important? Only if you want a correct answer.
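
A tiny concrete instance of the kind of mistake Tim describes, using the
principal square root (plain Python, purely for illustration):

    import cmath

    z = complex(-1.0, 0.0)
    lhs = cmath.sqrt(z * z)   # principal branch: sqrt(1) == 1
    rhs = z                   # the naive rule sqrt(z**2) -> z predicts -1
    assert lhs != rhs         # the "simplification" changed the value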



  Tim





  





[sage-devel] Re: [fricas-devel] Re: [sage-devel] Re: FriCAS/Open-Axiom and SAGE

2008-04-23 Thread Gary Furnish
No doubt the sage system is a bit too ad hoc.  However, this is something
that will be fixed in the symbolics rewrite.  While externally it will
maintain most backwards compatibility, internally it will be vastly
different (for example, elements (including variables) in the symbolic
ring (which is no longer even necessarily a ring!) no longer necessarily
commute, etc).  Care is being taken to make it significantly less ad hoc.
Right now, most functionality of the sage.calculus module is dictated
strictly by how maxima is designed.  As we move to our own internal
representation, this obviously will not be sufficient.  I'd be interested
in hearing which features of FriCAS/OpenAxiom has that might be useful in
more detail.  My main technical concerns are that anything interacting
through a pexpect interface is going to be slow and will not interact with
Sage's internal coercion system (aside from the fact that adding another
dependency to the standard core increases code complexity for symbolics
dramatically).  I downloaded FriCAS to try to investigate, but I couldn't
find a good introduction to the high level structure of the FriCAS code (at
the file level).  Does such a thing exist?

-- Gary Furnish

On Wed, Apr 23, 2008 at 9:49 AM, Bill Page [EMAIL PROTECTED]
wrote:


 On Wed, Apr 23, 2008 at 4:28 AM, Michael.Abshoff wrote:
 
   Martin Rubey wrote:
 
I do not see how the problem of differing representations can be
resolved. Up to now I thought that Sage simply doesn't have an
internal representation, and just uses the one from the external
program - that's how my polymake interface for FriCAS/Axiom
works.
 
   I am not 100% clear, but I believe for elements in a symbolic ring
   we do not yet have a Sage internal representation. It is being written
   in Cython since arithmetic is probably slowed down by a factor of
  10,000 by using pexpect+Maxima. That is largely the fault of pexpect.
 

 I think that what Martin means is that if Sage has its own internal
 representation of something then the problem of converting from one
 representation to another is unavoidable if the symbolic manipulation
 is done by an external program. This is independent of whether one
 uses the pexpect interface or not. Of course if symbolic manipulation
 is implemented in the Sage interpreter or compiled Cython code, then
 this conversion may not be necessary. However it is sometimes
 convenient also to change the internal representation depending on the
 type of manipulation to be done.

 So far as I can see Sage does have an internal representation for
 Symbolic Ring.

 But the idea of a Symbolic Ring seems very strange to me. What is
 this from a mathematical point of view? How is it different from, say,
 a polynomial? Do you really mean that the variables of the ring do not
 commute as in the sense of a free algebra?

 http://en.wikipedia.org/wiki/Free_algebra

 I worry sometimes that the Sage type system is a little too ad hoc... :-(

 Anyway, here is an example computation that includes conversion from
 the internal Sage representation to the internal Axiom representation
 of a symbolic object:

 [EMAIL PROTECTED]:~# sage
 ----------------------------------------------------------------------
 | SAGE Version 3.0, Release Date: 2008-04-22                          |
 | Type notebook() for the GUI, and license() for information.         |
 ----------------------------------------------------------------------

 sage: # Construct some symbolic expression in Sage
 sage: var('a b c d')
 (a, b, c, d)

 sage: A = matrix([[a,b],[c,d]])

 sage: A

 [a b]
 [c d]

 sage: x=det(A)

 sage: x
 a*d - b*c

 sage: x.parent()
 Symbolic Ring

 sage: x?
 Type:   SymbolicArithmetic
 Base Class: <class 'sage.calculus.calculus.SymbolicArithmetic'>
 String Form:

   a d - b c
 Namespace:  Interactive
 Docstring:

Represents the result of an arithmetic operation on
f and g.

 sage: # Convert it to an object in Axiom
 sage: y=axiom(x)

 sage: y
 a*d+(-b*c)

 sage: y.type()
 Polynomial Integer

 sage: y?
 Type:   AxiomElement
 Base Class: <class 'sage.interfaces.axiom.AxiomElement'>
 String Form:a*d+(-b*c)
 Namespace:  Interactive
 Docstring:
a*d+(-b*c)

  ...
   So: What should you do:
 
   a) wait a week or two for ecls and gdbm to become optional
   b) build an optional FriCAS/OpenAxiom spkg using ecls
   c) write a pexpect interface to use integration/ODE/guess if that
   is something where you see FriCAS/OpenAxiom as better
   than anything else.
 
   In parallel take a look at
 
   http://wiki.sagemath.org/spkg/InclusionProcedure
 
   and write some proposal *why* FriCAS/OpenAxiom should become
   part of standard Sage and *what* it should/can [via pexpect] do.
   ...

 Martin, I would be interested in working with you on doing this (and
 anyone else who would like to join in), that is, provided you are
 willing to take the lead

[sage-devel] Re: sage lite

2008-04-01 Thread Gary Furnish
Right now pulling in group theory may end up pulling in calculus.  There are
similar issues all over with really tight coupling between subsystems.  It
ought to be possible to use group theory (maybe without a feature or two)
without calculus and vice versa.

On Tue, Apr 1, 2008 at 11:33 AM, Nick Alexander [EMAIL PROTECTED]
wrote:



 On 1-Apr-08, at 10:21 AM, Gary Furnish wrote:

  Maybe.  I see two real issues.
  1) Sage right now has really bad global namespace pollution issues
  that make it very hard to import just one or two files.  I don't
  see why this shouldn't be fixable, it just needs someone to work on
  it.  This would not be that hard, and would probably catch some
  subtle import bugs in the process.

 Warning: I'm no expert in this area.  AFAICT, the Sage library
 itself exports nothing globally -- it's all in a startup line similar
 to "from sage.all_cmdline import *".  You might be talking about
 another scenario, though.

 Nick

 





[sage-devel] Re: sage lite

2008-04-01 Thread Gary Furnish
Maybe.  I see two real issues.
1) Sage right now has really bad global namespace pollution issues that make
it very hard to import just one or two files.  I don't see why this
shouldn't be fixable, it just needs someone to work on it.  This would not
be that hard, and would probably catch some subtle import bugs in the
process.

2) Every Cython file compiles to a separate dll, dramatically increasing
used space.  This would require a change to Cython to fix, but ought to be
doable.  Maybe space is not as big an issue as ease of use, though.

The main dependency of the calculus package right now is on ZZ and RR.  If
an alternate integer and real ring was provided with no external
dependencies, it could be easily modified to use them.  Maybe including gmp
isn't an issue for you, however.  This is not something I am opposed to
doing (as it is simple to implement) but I don't want to do the work (to
create dependency free rings) myself.  If I was provided such rings, I would
have no problem adding an option for my calculus system to use them.

In short, if there was real interest in this, and someone (you?) wanted to
help with it, I'd get behind the idea very quickly.  I would very much like
to see the sage.symbolics package usable without importing all of Sage.

On Tue, Apr 1, 2008 at 10:30 AM, Ondrej Certik [EMAIL PROTECTED] wrote:


 Hi,

 Sage's motivation is to create a viable alternative to Ma*.

 There are also people, who don't need an alternative to Ma*, but
 rather a good library, read for example this email from Gael:

 http://groups.google.com/group/sympy/msg/f8f497d1d32fab30

 who works on Mayavi2 (yet we have paraview3, which imho is more mature,
 but it's a beast).

 What are your opinions -- can Sage become (in a year or two)
 something like what Gael (and I) want?

 Basically, I think we need a modular calculus package, that interacts
 well with other python scientific packages.

 Ondrej

 P.S. I am looking forward to Gary's calculus patch. :)

 





[sage-devel] Re: sage lite

2008-04-01 Thread Gary Furnish
Weird circular import issues can (should) be solved with circular cdef
imports.  I think the easiest fix to crazy deps (group theory on calculus)
might be to do something along the lines of

foo = None
def importcrazydeps():
    global foo
    import sage.foo as localfoo
    foo = localfoo

Then have sage.x.package import all package modules and sage.x.all import
sage.package, and then run importcrazydeps() in any function.
Perhaps another approach would be an "import optional foo" in Cython
(which does not throw an exception on failure).
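
In plain Python the optional import idea is just the following (sketch;
sage.foo is the placeholder module name from above):

    try:
        import sage.foo as foo
    except ImportError:
        foo = None   # callers must check for None before use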

On Tue, Apr 1, 2008 at 1:41 PM, Carl Witty [EMAIL PROTECTED] wrote:


 On Apr 1, 11:23 am, Robert Bradshaw [EMAIL PROTECTED]
 wrote:
  On Apr 1, 2008, at 10:45 AM, Nick Alexander wrote:
 
   On 1-Apr-08, at 10:36 AM, Gary Furnish wrote:
 
   Right now pulling in group theory may end up pulling in calculus.
   There are similar issues all over with really tight coupling
   between subsystems.  It ought to be possible to use group theory
   (maybe without a feature or two) without calculus and vice versa.
 
   This isn't really a global namespace pollution issue, but it is a
   concern.  One way to deal with this is to make (more) imports
   function or class local.  I'm not sure if there are performance
   penalties for this, especially in Cython.  Can anyone say?
 
  Importing locally takes time, even if just to discover the cached
  import if it has already been done once. This is independent of
  whether or not the file is in Cython (though the relative overhead
  may be much greater for a Cythonized function).

 For instance, see https://bugs.launchpad.net/cython/+bug/155076, where
 I measure the cost of a local import as around 2 microseconds.

  The order in which things are imported is really, really crazy right
  now, as anyone trying to hunt down an (easy to trigger) circular
  reference can attest to. It would be great if this could be cleaned
  up, both for developing Sage and for making things more modular so
  other projects can benefit from them.

 Yes, this would be great!

 It seems like it would be very difficult to fix, though.

 Carl
 





[sage-devel] Re: sage lite

2008-04-01 Thread Gary Furnish
Why not use "import sage.rings.integer_ring as module_integer_ring"?  If the
location changes, just change what it is imported as.
On Tue, Apr 1, 2008 at 3:01 PM, Robert Bradshaw 
[EMAIL PROTECTED] wrote:


 On Apr 1, 2008, at 1:50 PM, William Stein wrote:
 
  On Tue, Apr 1, 2008 at 1:36 PM, Robert Bradshaw
  [EMAIL PROTECTED] wrote:
 
   On Apr 1, 2008, at 1:23 PM, Gary Furnish wrote:
  Weird circular import issues can (should) be solved with circular
  cdef imports.  I think the easiest fix to crazy deps (group theory
  on calculus) might be to do something along the lines of
  foo = None
  def importcrazydeps():
  global foo
 
  import sage.foo as localfoo
  foo = localfoo
  Then have sage.x.package import all package modules and sage.x.all
  import sage.package and then run importcrazydeps() on any function.
 
   Yep, I think such things should be handled manually rather than
   adding special behavior and caching to import functions in Cython.
   Note that of you do cdef foo then accessing foo in the global
   namespace will be a C lookup.
 
   One problem is that then we will have all kinds of
   importcrazydepsfor_x() functions at the end of sage.x.all, which
  will
   get longer and longer, until we have circular dependencies among
   these, etc.
 
 
  Perhaps another approach would be in Cython with an import optional
  foo (does not throw an exception on failure).
 
   -1. Then it throws an error later on? It has to check every time?
 
   I think the longterm solution is to reduce the number of from foo
   import blah (if you just do import foo and don't use foo, it will
   handle it just fine), reduce the number of unneeded imports (e.g.
   from sage.rings.all import ZZ which needlessly imports lots of
   other stuff than ZZ), and perhaps set up a hierarchy (e.g. decide
   which of groups or calculus should be imported first, and then if
  you
   need to go the other way do it via late imports of without the
   from keyword.
 
  I'm not disagreeing but want to point out a potential drawback
  to the above suggestion.  Let's say I'm writing my modular abelian
  varieties package (which I am right now).  If I type
 
  from sage.rings.integer_ring import ZZ
 
  instead of
 
  from sage.rings.all import ZZ
 
  then my code will break if the rings directory is reorganized
  (and it does get reorganized sometimes, e.g., moving code
  into the polynomial subdirectory...).   Thus typing
 
  from sage.rings.all import ZZ
 
  results in my modabvar package being more robust against
  changes in Sage.
 
  I am not claiming that is enough of a reason to not do
  from sage.rings.integer_ring import ZZ
  just that there are pros and cons to both approaches.

 This is a very good point. It's kind of related to the hierarchy
 idea--one would hope that rings are all loaded before one starts
 loading the modular forms stuff.

 Actually, this very statement has caused me many headaches in places
 (not in the modular directory) where importing ZZ from rings.all
 imports a whole bunch of other stuff (e.g. algebraic reals) that in
 turn imports the thing I'm trying to work on. rings.all is so huge it
 might be worth having a rings.basic or something that just imports
 ZZ, QQ, and maybe a couple of others.

   Sometimes it amazes me that the whole library manages to load at
  all.
 
  It's even more amazing that it correctly runs 52 thousand
  lines of doctest input on a bunch of different platforms:
 
  was$ sage -grep sage: |wc -l
 52637

 :)

 





[sage-devel] Re: sage lite

2008-04-01 Thread Gary Furnish
I think it is entirely possible that fixing import problems may have to come
at the expense of ease of use.  I don't see rings.basic helping much.  I
envision grand discussions about what constitutes a basic ring and what
should and should not be included.  What happens if we still have issues
after a rings.basic... do we move to a rings.basic and rings.basic2 system?

On Tue, Apr 1, 2008 at 3:14 PM, Robert Bradshaw 
[EMAIL PROTECTED] wrote:


 On Apr 1, 2008, at 2:08 PM, Gary Furnish wrote:
  Why not use import sage.rings.integer_ring as module_integer_ring.
  If the location changes, just change what it is imported as.

 I think the point is that re-arranging the rings directory should
 have minimal impact outside of it. This is one of the big benifits of
 an .all file.

 
  On Tue, Apr 1, 2008 at 3:01 PM, Robert Bradshaw
  [EMAIL PROTECTED] wrote:
 
  On Apr 1, 2008, at 1:50 PM, William Stein wrote:
  
   On Tue, Apr 1, 2008 at 1:36 PM, Robert Bradshaw
   [EMAIL PROTECTED] wrote:
  
On Apr 1, 2008, at 1:23 PM, Gary Furnish wrote:
   Weird circular import issues can (should) be solved with circular
   cdef imports.  I think the easiest fix to crazy deps (group theory
   on calculus) might be to do something along the lines of
   foo = None
   def importcrazydeps():
   global foo
  
   import sage.foo as localfoo
   foo = localfoo
   Then have sage.x.package import all package modules and sage.x.all
   import sage.package and then run importcrazydeps() on any
  function.
  
Yep, I think such things should be handled manually rather than
adding special behavior and caching to import functions in Cython.
Note that of you do cdef foo then accessing foo in the global
namespace will be a C lookup.
  
One problem is that then we will have all kinds of
importcrazydepsfor_x() functions at the end of sage.x.all, which
   will
get longer and longer, until we have circular dependancies among
these, etc.
  
  
   Perhaps another approach would be in Cython with an import
  optional
   foo (does not throw an exception on failure).
  
-1. Then it throws an error later on? It has to check every time?
  
I think the longterm solution is to reduce the number of from foo
import blah (if you just do import foo and don't use foo, it
  will
handle it just fine), reduce the number of unneeded imports (e.g.
from sage.rings.all import ZZ which needlessly imports lots of
other stuff than ZZ), and perhaps set up a hierarchy (e.g. decide
which of groups or calculus should be imported first, and then if
   you
need to go the other way do it via late imports of without the
from keyword.
  
   I'm not disagreeing but want to point out a potential drawback
   to the above suggestion.  Let's say I'm writing my modular abelian
   varieties package (which I am right now).  If I type
  
   from sage.rings.integer_ring import ZZ
  
   instead of
  
   from sage.rings.all import ZZ
  
   then my code will break if the rings directory is reorganized
   (and it does get reorganized sometimes, e.g., moving code
   into the polynomial subdirectory...).   Thus typing
  
   from sage.rings.all import ZZ
  
   results in my modabvar package being more robust against
   changes in Sage.
  
   I am not claiming that is enough of a reason to not do
   from sage.rings.integer_ring import ZZ
   just that there are pros and cons to both approaches.
 
  This is a very good point. It's kind of related to the hierarchy
  idea--one would hope that rings are all loaded before one starts
  loading the modular forms stuff.
 
  Actually, this very statement has caused me many headaches in places
  (not in the modular directory) where importing ZZ from rings.all
  imports a whole bunch of other stuff (e.g. algebraic reals) that in
  turn imports the thing I'm trying to work on. rings.all is so huge it
  might be worth having a rings.basic or something that just imports
  ZZ, QQ, and maybe a couple of others.
 
Sometimes it amazes me that the whole library manages to load at
   all.
  
   It's even more amazing that it correctly runs 52 thousand
   lines of doctest input on a bunch of different platforms:
  
   was$ sage -grep sage: |wc -l
  52637
 
  :)
 
 
 
 
  


 





[sage-devel] Re: Glib algorithms #2436 vote

2008-03-27 Thread Gary Furnish
I'll move this to an spkg.

On Thu, Mar 27, 2008 at 2:07 AM, mabshoff 
[EMAIL PROTECTED] wrote:


 SNIP
  Furthermore, I intend to help
  maintain the C algorithms.  I fully intend to work on them actively
  if their speed is not sufficient.  Making a separate spkg dramatically
  increases the difficulty of active development.
 
 Why???  I could have said the same about Pyrex two years ago, and it
 would have been silly.  So could you please explain why this is true of
 glib-lite.  I'm not saying it isn't true!  I just don't see why it should
 be.  I know you're extremely good at writing and arguing points, so
 please be patient with me and explain why "Making a separate spkg
 dramatically increases the difficulty of active development."  Again,
 I'm *not* at all sure I'm right!  And you've been much closer to that
 code than I am, so you're much more likely to be right.  That said,
 you've not given any argument, and this is the time to work these
 things out -- that's why we have a vote, etc., for packages.
 
-- William
 

 Having slept on the whole spkg vs. sagelib issue: I think doing glib-
 lite in an spkg might be a little more work right now, but it forces
 us [ehh Gary] to factor out the pbuild bits or write a makefile that
 uses a bunch of env variables determined by spkg-install. Sticking
 that in an spkg-install isn't that much work. Making the makefile
 independent of pbuild would also increase the chances that other
 projects would actually use the code.

 Cheers,

 Michael
 





[sage-devel] Re: Glib algorithms #2436 vote

2008-03-26 Thread Gary Furnish
Honestly, independent of the spkg vs. libcsage issue, which in my
opinion is really a matter of semantics, Sage has no high-speed
implementations of C algorithms.  Sage cannot escape this forever.
Either someone will have to write their own at some point, or we can
use glib as a starting block.  It is a big chunk of code, but Sage
needs fast lists, hash tables, etc.  Using glib as a starting point
dramatically reduces debugging time, and is therefore preferable.  I
don't see how blocking a glib patch over maintenance issues helps
solve this problem in the long run.  Is it really preferable that I
code up custom versions of these structures just so that I can have
fast symbolic implementations?
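
To make the "fast lists, hash tables" point concrete, here is a minimal
C sketch of the kind of calls glib provides.  This is illustrative only,
not code from the #2436 patch; the polynomial-term strings are invented,
and the build line assumes a stock glib install rather than the trimmed
libcsage copy:

/* demo.c -- hypothetical usage sketch, not from the patch.
 * Build (assumption): gcc demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>

int main(void)
{
    GSList *terms = NULL;   /* singly linked list: O(1) prepend */
    GSList *l;
    GHashTable *degree;

    terms = g_slist_prepend(terms, "3*x");
    terms = g_slist_prepend(terms, "x^2");
    for (l = terms; l != NULL; l = l->next)
        g_print("term: %s\n", (char *) l->data);
    g_slist_free(terms);

    /* hash table keyed by C strings, small ints packed into pointers */
    degree = g_hash_table_new(g_str_hash, g_str_equal);
    g_hash_table_insert(degree, "x", GINT_TO_POINTER(1));
    g_hash_table_insert(degree, "x^2", GINT_TO_POINTER(2));
    g_print("deg(x^2) = %d\n",
            GPOINTER_TO_INT(g_hash_table_lookup(degree, "x^2")));
    g_hash_table_destroy(degree);
    return 0;
}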

Most of the bloat in glib is internationalization, which is not included
in this patch.  The parts that are included are simple enough, and well
enough documented (either in the code or in the glib documents), that
anyone should be able to maintain them.  Furthermore, I intend to help
maintain the C algorithms and will work on them actively if their speed
is not sufficient.  Making a separate spkg dramatically increases the
difficulty of active development.

On Wed, Mar 26, 2008 at 12:26 PM, William Stein [EMAIL PROTECTED] wrote:


 On Wed, Mar 26, 2008 at 11:16 AM, didier deshommes [EMAIL PROTECTED]
 wrote:
 
 
   On Wed, Mar 26, 2008 at 1:52 PM, mabshoff
   [EMAIL PROTECTED] wrote:
   
   
   
 On Mar 26, 6:43 pm, William Stein [EMAIL PROTECTED] wrote:
  On Wed, Mar 26, 2008 at 10:09 AM, mabshoff
 
   
 [EMAIL PROTECTED] wrote:
 
On Mar 26, 6:02 pm, William Stein [EMAIL PROTECTED] wrote:
 Is any of the code gpl v3+ only?
 
No.
 
  That's good.
 
 
 
  How difficult will it be to update our version whenever upstream
  changes?  Do only you know how to do this?
 
Not particularly hard.
 
  You didn't answer my second question.
   
  Gary did it and I didn't pay much attention to it. I assume it will be
  documented. I don't consider such a thing hard once it has been
  documented.
   
   
  Why put this in c_lib instead of a separate spkg called glib-min?
  Couldn't such a package be useful outside of sage?
 
It is easiest if we put it into libcsage.
 
  That's not a good enough answer.  Until now almost all code in
  libcsage and the main sage library has been new code we've written --
  with a few exceptions, which we greatly regretted and later moved
  out.  So from experience I'm very opposed to this code being in
  c_lib.
 
  I vote -1 to this code going into sage unless:
 (1) it is put in a separate spkg, and
   
 We can certainly do that.
   
   
 (2) the process of extracting glib-min from the official glib
  tarball is automated.
   
 That is unlikely to happen since it requires manual interaction. It
 will break in the next release in six months and writing automated
 tools will take longer than actually doing the work in the first
 place.
 
   How frequent are the glib releases? If they're not that frequent,
   this should be less of an issue as long as Gary documents what he's
   done somewhere :)

 If you've been maintaining packages for Sage for three years, and expect
 to be maintaining them for years to come, you'd view this as a much
 bigger deal. It's really bad when there is a big chunk of code in Sage
 that gets out of sync with upstream, with no easy way to resolve that
 problem.

  -- William

 


--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---



[sage-devel] Re: Glib algorithms #2436 vote

2008-03-26 Thread Gary Furnish
There are non-trivial changes involved in getting glib to compile without
internationalization, primarily because its error-handling system uses
internationalization in many places.  It's not just a copy-and-paste job,
but now that I've figured out exactly which headers we need and set up
the autoconf replacement, it's not too difficult either (I'd say I could
probably do it in an hour, and someone not familiar with the code in two
or three).  On the other hand, applying individual patches to files would
be very easy.  I also noticed many changes upstream, but most of them did
not seem to be centered on the algorithms I actually added.  I do not
agree with the maintenance argument for a separate spkg, but I would be
more than willing to move it out of libcsage if others are interested in
using the algorithms in other Sage spkgs.

I have not yet benchmarked the hash tables (nor actually tried them out),
but one of their big advantages is that they avoid many of the
memory-management issues, so I don't expect them to be slower (and if
they are, I may have to fix that).
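
Since no numbers exist yet, a rough harness along the following lines
could produce some (an illustrative sketch, not anything from the
patch).  It drives a CPython dict through the Python/C API -- roughly
what Cython emits for a variable declared as dict -- next to a
GHashTable.  The Python 2.x calls and the build line are assumptions
about a stock environment, and it times only C-level insertion, not
Cython call overhead:

/* bench.c -- hypothetical micro-benchmark sketch.
 * Build (assumption):
 *   gcc bench.c $(pkg-config --cflags --libs glib-2.0) \
 *       $(python-config --cflags --ldflags)
 */
#include <Python.h>
#include <glib.h>

#define N 1000000L

int main(void)
{
    GTimer *timer = g_timer_new();
    GHashTable *ht;
    PyObject *d;
    long i;

    /* glib: pointer keys and values, no per-entry object allocation */
    ht = g_hash_table_new(g_direct_hash, g_direct_equal);
    g_timer_start(timer);
    for (i = 1; i <= N; i++)
        g_hash_table_insert(ht, GINT_TO_POINTER(i), GINT_TO_POINTER(i));
    g_print("GHashTable: %.3fs\n", g_timer_elapsed(timer, NULL));
    g_hash_table_destroy(ht);

    /* CPython: every key/value is a heap-allocated, refcounted object */
    Py_Initialize();
    d = PyDict_New();
    g_timer_start(timer);
    for (i = 1; i <= N; i++) {
        PyObject *k = PyInt_FromLong(i);   /* Python 2.x API */
        PyDict_SetItem(d, k, k);
        Py_DECREF(k);   /* the dict holds its own references */
    }
    g_print("PyDict:     %.3fs\n", g_timer_elapsed(timer, NULL));
    Py_DECREF(d);
    Py_Finalize();
    g_timer_destroy(timer);
    return 0;
}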

On Wed, Mar 26, 2008 at 3:06 PM, Robert Bradshaw 
[EMAIL PROTECTED] wrote:


 On Mar 26, 2008, at 11:57 AM, Gary Furnish wrote:

  Honestly, independent of the spkg vs. libcsage issue, which in my
  opinion is really a matter of semantics, Sage has no high-speed
  implementations of C algorithms. Sage cannot escape this forever.
  Either someone will have to write their own at some point, or we can
  use glib as a starting block. It is a big chunk of code, but Sage
  needs fast lists, hash tables, etc. Using glib as a starting point
  dramatically reduces debugging time, and is therefore preferable.

 Browsing the glib documentation, looking at its features, and reading
 what other people have to say about it, I think this is a worthy set
 of things to include.

 One question I have is how much faster glib hash tables are than
 Python dictionaries (as accessed directly through the Python/C API,
 which they will be from Cython if declared as type dict) and how
 much faster glib (linked or non-linked) lists are than Python lists.
 Have you run any tests? If there is not a significant speed
 difference, it would be preferable to use the Python datatypes to
 store Python objects when possible (for cleaner code and to minimize
 refcounting woes). This wouldn't mean that glib isn't worth including,
 however.

  I don't see how blocking a glib patch over maintenance issues helps
  solve this problem in the long run. Is it really preferable that I
  code up custom versions of these structures just so that I can have
  fast symbolic implementations?

 No. I think the point is that before including it we should consider
 issues of maintenance, and this may influence where we put it. (Much
 easier, for instance, than trying to move it later.) I agree that it
 should probably be a separate spkg.

  Most of the bloat in glib is internationalization, which is not
  included in this patch. The parts that are included are simple
  enough, and well enough documented (either in the code or in the
  glib documents), that anyone should be able to maintain them.
  Furthermore, I intend to help maintain the C algorithms and will
  work on them actively if their speed is not sufficient. Making a
  separate spkg dramatically increases the difficulty of active
  development.

 I agree that making an spkg raises the barrier to working on it, but
 not by that much. Also, as an spkg, other components of Sage can make
 use of it, and I think it will be much easier to keep in sync with
 upstream and to contribute back.

 I'd also really like to avoid a fork, which is what this could easily
 turn into. I'm noticing a lot of changes from glib 2.16.1 at GNOME.
 Is this the version you started from? Are you simply copying a subset
 of the c/h files, or are there significant changes that need to be
 done every time glib is updated (every month or two looking at the
 history)?

 I would like to see this included, but I think these issues need to
 be resolved now, or they never will be until it hurts us.

 - Robert

 
 
  On Wed, Mar 26, 2008 at 12:26 PM, William Stein [EMAIL PROTECTED]
  wrote:
 
  On Wed, Mar 26, 2008 at 11:16 AM, didier deshommes
  [EMAIL PROTECTED] wrote:
  
  
   On Wed, Mar 26, 2008 at 1:52 PM, mabshoff
   [EMAIL PROTECTED] wrote:
   
   
   
On Mar 26, 6:43 pm, William Stein [EMAIL PROTECTED] wrote:
 On Wed, Mar 26, 2008 at 10:09 AM, mabshoff

   
 [EMAIL PROTECTED] wrote:

  On Mar 26, 6:02 pm, William Stein [EMAIL PROTECTED] wrote:
   Is any of the code gpl v3+ only?

  No.

 That's good.



   How difficult will it be to update our version whenever upstream
   changes? Do only you know how to do this?

  Not particularly hard.

 You didn't answer my second question.
   
Gary did it and I didn't pay much attention to it. I assume it
  will be
documented. I don't consider such a thing hard once it has been documented.

[sage-devel] Glib algorithms #2436 vote

2008-03-25 Thread Gary Furnish

Trac #2436 adds the following algorithms from glib to libcsage:
Multiplatform threads
Thread pools
Asynchronous Queues
Memory Slices
Doubly and Singly linked lists
Queues
Sequences
Hash Tables
Arrays
Balanced Binary Trees
N-ary Trees
Quarks
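
For a taste of two of the structures above, here is a minimal,
hypothetical sketch against the public glib API (illustrative only, not
code from the patch; the build line assumes a stock glib install):

/* demo.c -- hypothetical usage sketch.
 * Build (assumption): gcc demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <string.h>

static gboolean print_entry(gpointer key, gpointer value, gpointer data)
{
    g_print("%s -> %d\n", (char *) key, GPOINTER_TO_INT(value));
    return FALSE;   /* FALSE = keep traversing */
}

int main(void)
{
    GQueue *q = g_queue_new();   /* double-ended FIFO queue */
    GTree  *t;

    g_queue_push_tail(q, "first");
    g_queue_push_tail(q, "second");
    g_print("popped: %s\n", (char *) g_queue_pop_head(q));
    g_queue_free(q);

    /* balanced binary tree ordered by strcmp; g_tree_foreach visits
     * the keys in sorted order */
    t = g_tree_new((GCompareFunc) strcmp);
    g_tree_insert(t, "b", GINT_TO_POINTER(2));
    g_tree_insert(t, "a", GINT_TO_POINTER(1));
    g_tree_foreach(t, print_entry, NULL);
    g_tree_destroy(t);
    return 0;
}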

In particular it features a slab memory allocator based on:
http://citeseer.ist.psu.edu/bonwick94slab.html
http://citeseer.ist.psu.edu/bonwick01magazines.html
The documentation for glib is found at 
http://library.gnome.org/devel/glib/2.14/glib-Memory-Slices.html
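
For readers unfamiliar with the slice API documented there, a minimal,
hypothetical sketch (the Term struct is invented for illustration; build
as with the earlier demo):

/* slice.c -- hypothetical usage sketch of the slab allocator. */
#include <glib.h>

typedef struct {
    double coeff;
    int    exponent;
} Term;

int main(void)
{
    /* g_slice_new draws from a per-size slab/magazine cache, so repeated
     * alloc/free of small equal-sized nodes skips malloc and lets freed
     * blocks be reused -- unlike a grow-only pool. */
    Term *t = g_slice_new(Term);
    t->coeff = 3.0;
    t->exponent = 2;
    g_print("%g*x^%d\n", t->coeff, t->exponent);
    g_slice_free(Term, t);
    return 0;
}
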
The files are all GPL 2.0 or later.  Although glib normally has
extensive dependencies, all of them have been removed, as have the parts
of glib that are not strictly algorithms (such as string parsing).  To
avoid autoconf/make troubles, the new parallel build system currently
in experimental testing features a simple, elegant Python autoconf
replacement.  The extra build time for libcsage is minimal, and wall
time often decreases thanks to the parallel build system.  Finally,
until testing is complete, the parallel build and glib are orthogonal
and can coexist with all existing Sage code, making review
significantly easier.

Right now there are no easy-to-use C libraries included with Sage.
Using the C++ STL requires extremely painful Cython wrapping.
Therefore, wherever raw performance is needed, ad hoc solutions are
used (see integer.pyx, etc.), which often introduce subtle and painful
bugs.  Furthermore, there are many places in Sage that could benefit
from a unified slab allocator (as opposed to a pool, which can only
grow).  Finally, the extensive symbolics modifications I am working on
make use of glib (especially the trees and lists) to enable very fast
symbolic manipulation.  By using glib's algorithms I avoid having to
roll my own code, which would require extensive manual review and
debugging.  This code drop is big enough that Mabshoff would like a
formal +1 vote, and I'd be happy to address any concerns.

Mar 25 00:00:13 mabshoff  yes. It isn't an spkg, but the code drop is
large enough to warrant a vote.
Mar 25 00:00:21 mabshoff  You can quote me on that.
Mar 25 00:00:35 mabshoff  You should say *why* it is a good idea and
*what* goodies it does provide.
Mar 25 00:00:59 mabshoff  Since it will only become useful after the
switch to pbuild and doesn't
Mar 25 00:01:21 mabshoff  harm anything with the old build it also
doesn't have an impact on the current
Mar 25 00:01:24 mabshoff  codebase.
Mar 25 00:01:31 mabshoff  You can quote me on that, too.

--Gary

--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~--~~~~--~~--~--~---