On Mon, Mar 12, 2012 at 6:27 PM, Ronan Lamy <[email protected]> wrote:
> Le lundi 12 mars 2012 à 14:18 -0600, Aaron Meurer a écrit :
>> On Mon, Mar 12, 2012 at 1:42 PM, Ronan Lamy <[email protected]> wrote:
>> > Le lundi 12 mars 2012 à 20:13 +0100, [email protected] a
>> > écrit :
>> >> I thought that the old module is deprecated and it is to be removed.
>> >> In the new module all assumptions are "property caches" as you put it,
>> >> if I am not mistaken.
>> >
>> > The old assumptions are a fundamental part of the core, so they can't
>> > just be removed, or deprecated in any meaningful way.
>>
>> So what do you think should be done, in the long run? I agree that we
>> should (and pretty much will have to) keep the old API around, but
>> internally, we should just use the new assumptions, so that we don't
>> have code duplication.
>
> In the long run, I think the systems should be merged, by breaking up
> the old system and connecting its low-level parts to the new one.
I also think this is the way to go, at least in the sense that we want
to keep the fundamental aspects of the old system, namely, global
assumptions accessed through attribute calls. To me, saying that we
should unhook the old assumptions and hook them into the new system,
and saying that we should implement global assumptions in the new
system with the same API as the old one, are the same thing approached
from opposite directions.
Of course, that's a pretty broad description of the plan. As far as
the actual implementation details go, and the new APIs go, we will
have to discuss things to figure out the best way to do it.
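To illustrate what I mean by hooking the two together, here's a toy sketch (not actual SymPy code; all names and signatures are invented for illustration) of the old attribute-access API delegating to a new-style ask():

```python
def ask(predicate, obj):
    """Toy stand-in for the new-style ask(); only knows 'prime'."""
    if predicate == "prime":
        n = obj.value
        if n < 2:
            return False
        # naive trial division, standing in for the real deduction engine
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return None  # unknown predicate


class Integer:
    def __init__(self, value):
        self.value = value

    @property
    def is_prime(self):
        # Old API, new engine: attribute access is just a thin
        # wrapper around ask(), so there is one implementation.
        return ask("prime", self)
```

With something like this, `Integer(7).is_prime` and the new-style query share the same code path, which is the whole point of the merge.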
>>
>> >
>> > In the new assumptions, there's no cache: things like ask(Q.prime(7))
>> > are recomputed every time, unlike Integer(7).is_prime.
>>
>> Should this be changed? It seems like this would make things
>> inefficient if we used the new assumptions everywhere.
>>
> Yes, it would be rather inefficient, but this isn't the worst problem
> IIRC. The real pain point is that actual assumptions (i.e. Q.prime(n) as
> opposed to Q.prime(7)) are stored in a big set that must be inspected in
> full every time ask() is called.
By "in full" do you mean a linear-time search? Could this be solved
by storing the (global) assumptions in the objects themselves (or at
least caching them there)?
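For concreteness, here's a rough sketch of what I have in mind (purely illustrative; none of these names exist in SymPy): instead of one flat set that ask() scans in full, index the stored assumptions by the object they concern, so lookup is a dict access rather than a linear scan.

```python
from collections import defaultdict


class AssumptionStore:
    """Toy assumption store indexed by object instead of a flat set."""

    def __init__(self):
        # obj -> set of predicate names assumed about it
        self._by_obj = defaultdict(set)

    def assume(self, obj, predicate):
        self._by_obj[obj].add(predicate)

    def holds(self, obj, predicate):
        # Direct per-object lookup; no need to inspect every
        # stored assumption on each query.
        return predicate in self._by_obj[obj]


store = AssumptionStore()
store.assume("n", "integer")
store.assume("n", "positive")
```

Of course the real system has to handle compound facts relating several objects, which is exactly where the indexing gets harder; this only covers the simple per-object case.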
>
>> And if so, should the cache be stored in a global dict, or inside the
>> objects themselves? Anyway, I guess this is a (probably) trivial
>> implementation detail that can be worked out once we start replacing
>> the old assumptions.
>
> A global cache would keep objects alive indefinitely, which might be a
> problem, even though the cache currently tends to do that anyway.
Well, on the one hand, Integer(7) is unlikely to be the same object
every time (unless we assume that we are using the regular cache), so
storing the result in the object itself won't help there.
On the other hand, it makes perfect sense to store it inside
Symbol('n', integer=True). We have to at least store the given
assumption, by definition (unless we want to make Symbol('n',
integer=True) automatically assume that Symbol('n').is_integer is
True, which is a bad idea).
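Just to make the per-object storage idea concrete, here's a toy version (invented names, not SymPy's Symbol): the assumptions passed at construction are stored on the instance, and queries consult that local dict first, returning None when nothing is known.

```python
class Symbol:
    """Toy symbol that stores its declared assumptions locally."""

    def __init__(self, name, **assumptions):
        self.name = name
        # e.g. Symbol('n', integer=True) stores {'integer': True}
        self._assumptions = dict(assumptions)

    def query(self, predicate):
        # True/False if declared on this instance, None if unknown;
        # a plain Symbol('n') knows nothing about 'integer'.
        return self._assumptions.get(predicate)
```

Note that `Symbol('x').query('integer')` is None here, not False, which matches the three-valued logic both assumption systems already use.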
But I do think this is a real issue that we should try to think about,
even for our normal cache, because people use SymPy for very large
expressions all the time, as we've seen here on the mailing list. And
the worst part is, when you operate on such large expressions, that's
when you need the speed of the cache the most. And I really think
that we're stress testing SymPy far too little. But I'm digressing...
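On the keeping-objects-alive problem specifically, one possible direction (a sketch under invented names, not a worked-out proposal) is a weak-keyed cache, so that cached results disappear once nothing else references the expression:

```python
import weakref


class Expr:
    """Minimal stand-in for a SymPy expression."""

    def __init__(self, value):
        self.value = value


# expr -> {predicate: result}; entries vanish when the expression
# is garbage collected, so the cache doesn't pin objects forever.
_cache = weakref.WeakKeyDictionary()


def cached_query(expr, predicate, compute):
    results = _cache.setdefault(expr, {})
    if predicate not in results:
        results[predicate] = compute(expr)
    return results[predicate]
```

Whether the weakref overhead is acceptable on the hot path is exactly the kind of thing we'd have to benchmark against those large-expression workloads.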
Aaron Meurer