On 03.07.2013 23:55, Ondřej Čertík wrote:
Maybe it is possible to implement pattern matching efficiently, but I am a bit skeptical.
From what I see in the linked web pages on Mathematica's rule engine, these patterns are similar to grammar rules for compilers, plus a layer of tests for applicability of a pattern.
It's possible to merge such patterns into a single super pattern. I.e. don't test all patterns in sequence; use a state machine that is fed one input symbol at a time, and label each state with the set of patterns that are still valid at that point; apply a pattern as soon as a state is reached where only one pattern is left. (Usually, you actually label only the end states, but I found state machines easier to debug if one could see the list of still-matching patterns on each node. This helps with diagnosing ambiguities in the input and with diagnosing bugs in the pattern matching code.) This is essentially how lexical analyzers (scanners) are constructed, except that we would feed tree nodes to the state machine where a scanner feeds characters.
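To make the idea concrete, here is a minimal sketch of such a merged matcher. All names are made up for illustration; patterns are plain symbol sequences (think of them as preorder-flattened tree nodes), states are dicts, and every state carries the set of still-viable patterns so you can inspect it while debugging. A real implementation would need wildcards, argument tests, and so on.

```python
def build_machine(patterns):
    """Merge several patterns into one trie-like state machine.

    patterns: dict mapping pattern name -> tuple of input symbols.
    Each state holds outgoing transitions, the set of patterns still
    viable when the machine is in that state, and the set of patterns
    that end exactly at that state.
    """
    start = {"next": {}, "viable": set(patterns), "ends": set()}
    for name, seq in patterns.items():
        state = start
        for sym in seq:
            state = state["next"].setdefault(
                sym, {"next": {}, "viable": set(), "ends": set()})
            state["viable"].add(name)  # this pattern is still in the running
        state["ends"].add(name)        # pattern matches completely here
    return start

def match(machine, symbols):
    """Feed symbols one at a time; return the name of a matching
    pattern, or None if no pattern matches the whole input."""
    state = machine
    for sym in symbols:
        state = state["next"].get(sym)
        if state is None:
            return None  # no pattern is viable any more
    # a match only counts if some pattern ends exactly at this state
    return min(state["ends"]) if state["ends"] else None
```

For example, with `{"sin2": ("Pow", "sin", "2"), "cos2": ("Pow", "cos", "2")}` both patterns share the `"Pow"` transition, so the state after `"Pow"` is labeled with both names, and the machine only commits once the second symbol disambiguates them.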
Setting up the state machine is usually O(number of tokens in the rules), though pathological rule sets can cause exponential size requirements. Running a tree through such a pattern matcher, on the other hand, should be linear in the number of tree nodes and require no object creation, which is as fast as it gets.
