On 10.04.2013 17:12, Ronan Lamy wrote:
On 10/04/2013 16:59, Tom Bachmann wrote:
I don't think that a command-line flag is a good idea, particularly
considering that the new assumptions version isn't expected to work yet.
We already have way too many relevant combinations of flags and platform
characteristics to test them all (cache enabled or disabled, Python 2.5,
2.6, 2.7, 3.2, 3.3, PyPy, Python vs gmpy ground types, 32- vs 64-bit,
...)


Well what would you suggest instead of a command line flag? One could
use an environment variable, but clearly the difference is only cosmetic.
I'd suggest simply working in a separate branch, cherry-picking changes
that can apply to master immediately, and/or a switch in code (i.e. put
NEW_ASSUMPTIONS = False somewhere, and manually edit it to True when we
want to test the new version).
IMO, there's no point at which we'll want to allow end-users to choose
between old and new assumptions, so the switch should be purely for
devs' use.

Oh of course, I'm completely fine with that. I was thinking of the flag essentially only for my own convenience.


In any case, the new assumptions need some work before they can even be
considered as a realistic replacement for the old ones. In particular,
the issues I see include:
* ensuring consistency between inferred facts (i.e. making it impossible
to get ask(Q.real(x)) == False but ask(Q.positive(x)) == True)

The only way to get inconsistent conclusions is to start with
inconsistent assumptions.
No, it's also possible to derive different conclusions inconsistently
starting from the same assumptions. The old assumptions are quite clever
in this regard, and ensure (assuming sufficient test coverage) that
deriving x.is_real directly or inferring it from x.is_positive always
yields the same result. They also allow the is_* properties to call each
other recursively without worrying about infinite recursion, which helps
a lot to keep things consistent, no matter which set of assumptions one
starts with.
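A minimal sketch of the consistency property being described, using the public SymPy API (not code from this thread): the old core assumptions derive real-ness from positivity at construction time, and the new ask()-based system must encode the same implication in its known facts so the two queries can never disagree.

```python
from sympy import Symbol, ask, Q

# Old (core) assumptions: positive -> real is applied when the Symbol
# is constructed, so the two properties agree automatically.
x = Symbol('x', positive=True)
assert x.is_positive is True
assert x.is_real is True      # inferred from positivity

# New (ask-based) assumptions: the same implication must hold, so that
# ask(Q.real(x)) can never be False while ask(Q.positive(x)) is True.
y = Symbol('y')
assert ask(Q.positive(y), Q.positive(y)) is True
assert ask(Q.real(y), Q.positive(y)) is True
```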

I'm not sure I understand. Are you saying this could happen because of bugs in the new assumptions code?

Can you explain your point with an example?

Do we really want to safeguard against that? I suppose it makes sense,
so I will try to
figure out what the cost of this is going to be.

I'll keep it in mind.

* keeping reasonable performance even with hundreds or thousands of
assumptions in the system

Well, isn't being able to run a real-life example using the new
assumptions the easiest way to expose bottlenecks?
It doesn't take a clever benchmark to see that anything in ask() that is
O(n) in the number of assumptions causes critical performance issues.
For instance:
https://github.com/sympy/sympy/blob/master/sympy/assumptions/ask.py#L60
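To make the concern concrete, here is a hypothetical sketch (the names are illustrative and do not correspond to SymPy's real code): any lookup that walks the full list of assumptions scales linearly, so k queries against n assumptions cost O(k * n).

```python
# Hypothetical linear-scan ask(), for illustration only.
def naive_ask(proposition, assumptions):
    for fact in assumptions:      # O(n) scan per query
        if fact == proposition:
            return True
    return None                   # unknown

# With thousands of facts in the context, every query pays for a
# full scan of the list.
context = [('positive', 'x%d' % i) for i in range(10000)]
print(naive_ask(('positive', 'x9999'), context))  # True, after 10000 comparisons
print(naive_ask(('negative', 'x0'), context))     # None
```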

That is a valid point.

Serious question (exploring all directions here): do we want to support huge numbers of assumptions?

I guess the current system can do it, so we should only abandon it for good reasons.

* allowing new predicates to be defined at run-time


Why is this important?
The point is mostly to allow specialised predicates (for instance,
matrix predicates) to be defined in the modules that use them.
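For what it's worth, SymPy's Predicate class can already create an undefined predicate at run time; the sketch below uses a hypothetical `prime_power` predicate for illustration. With no handler registered, ask() simply returns None (unknown), so a module can introduce its own predicates and wire up handlers separately.

```python
from sympy import Symbol, ask
from sympy.assumptions import Predicate

# `prime_power` is a hypothetical predicate name, for illustration only.
prime_power = Predicate('prime_power')
x = Symbol('x')

# Without a registered handler, ask() cannot decide anything:
print(ask(prime_power(x)))  # None (unknown)
```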

I can see how that is desirable, but I fail to see why it is vital for removing the old assumptions.

Also,
using new-style assumptions everywhere will probably cause thorny
bootstrapping issues. The old assumptions are actually defined before
everything else and are required to instantiate any Basic instance.
Since the new assumptions currently require instantiating some Basic
objects before anything can work, the switch will need some thought
here.
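To make the bootstrapping point concrete (a sketch of the current behaviour, not of any proposed fix): old-style assumptions are consumed while a Symbol is being constructed, whereas a new-style fact like Q.positive(x) is itself a Basic instance, and so needs the core (which the old system underpins) to exist first.

```python
from sympy import Symbol, Q
from sympy.core.basic import Basic

# Old style: the keyword is processed during instantiation, before any
# new-assumptions machinery is involved at all.
x = Symbol('x', positive=True)
assert x.is_positive is True

# New style: Q.positive(x) is itself a Basic subclass instance
# (an AppliedPredicate), hence the chicken-and-egg problem.
fact = Q.positive(x)
assert isinstance(fact, Basic)
```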

Sure. I didn't claim to have thought through everything. I was just outlining a general strategy.

Making the new assumptions more modular should help.

What do you mean by this?

These can be tackled now, without worrying about the old assumptions.


Well according to the GSOC wiki page

"The project is to completely remove our old assumptions system,
replacing it with the new one."

so this is what I am trying to tackle...
What I'm saying is that these points don't require any changes to the
core, so they should be done first. Refactoring the assumptions will
involve a ton of tricky and interrelated changes. It'll be very useful
to deal separately with any change that can be isolated.


That is a *very* good point, which I should keep in mind.

I guess my main unease with this type of approach ("fix it before we hit the problem") is that it will never be clear whether we have fixed all the problems. But of course fixing the obvious ones first sounds like a good idea.

Thanks for your thoughts, they are much appreciated.
