Larry Hastings wrote:
> Chetan Pandya wrote:
> > I don't have a patch build, since I didn't download the revision used
> > by the patch.
> > However, I did look at values in the debugger and it looked like x in
> > your example above had a reference count of 2 or more within
> > string_concat even when there were no other assignments that would
> > account for it.
> It could be the optimizer.  If you concatenate hard-coded strings, the
> peephole optimizer does constant folding.  It says "hey, look, this
> binary operator is performed on two constant objects".  So it evaluates
> the expression itself and substitutes the result, in this case swapping
> (pseudotokens here) [PUSH "a" PUSH "b" PLUS] for [PUSH "ab"].

> Oddly, it didn't seem to optimize away the whole expression.  If you say
> "a" + "b" + "c" + "d" + "e", I would have expected the peephole
> optimizer to turn that whole shebang into [PUSH "abcde"].  But when I
> gave it a cursory glance it seemed to skip every other one; it
> constant-folded "a" + "b", then + "c", and optimized ("a" + "b" + "c") +
> "d", resulting ultimately, I believe, in [PUSH "ab" PUSH "cd" PLUS PUSH
> "e" PLUS].  But I suspect I missed something; it bears further
> investigation.

I looked at the optimizer, but couldn't find any place where it does constant folding for strings. However, I am unable to set breakpoints for some mysterious reason, so investigation is somewhat hard. But I am not bothered about it anymore, since it does not behave the way I originally thought it did.
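Even without breakpoints, the result can be observed from the Python side. A rough sketch (the exact bytecode and folding behaviour will depend on the interpreter build, so treat the output as illustrative only):

    import dis

    # Compile a chain of literal concatenations and look at what the
    # compiler and peephole optimizer actually emit.  If constant folding
    # happened, the folded string(s) show up in co_consts instead of a
    # series of BINARY_ADD operations.
    code = compile('"a" + "b" + "c" + "d" + "e"', '<test>', 'eval')
    print(code.co_consts)   # which constants survived folding
    dis.dis(code)           # the bytecode that will actually run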

> But this is all academic, as real-world performance of my patch is not
> contingent on what the peephole optimizer does to short runs of
> hard-coded strings in simple test cases.

> > The recursion limit seems to be optimistic, given the default stack
> > limit, but of course, I haven't tried it.
> I've tried it, on exactly one computer (running Windows XP).  The depth
> limit was arrived at experimentally.  But it is probably too optimistic
> and should be winched down.
> On the other hand, right now when you do x = "a" + x ten zillion times
> there are always two references to the concatenation object stored in x:
> the interpreter holds one, and x itself holds the other.  That means I
> have to build a new concatenation object each time, so it becomes a
> degenerate tree (one leaf and one subtree) recursing down the right-hand
> side.

This is the case I was thinking of (but not what I wrote).
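For what it's worth, the extra reference is visible even from pure Python. A minimal sketch (sys.getrefcount itself adds one temporary reference, so the numbers are only indicative):

    import sys

    x = "spam " * 100      # an ordinary, non-interned string

    # x is reachable from the module namespace, and getrefcount's own
    # argument adds one more, so this prints at least 2.  While "a" + x
    # is being evaluated, the eval loop's value stack holds yet another
    # reference, which is why string_concat always sees x as shared when
    # prepending.
    print(sys.getrefcount(x))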

> I plan to fix that in my next patch.  There's already code that says "if
> the next instruction is a store, and the location we're storing to holds
> a reference to the left-hand side of the concatenation, make the
> location drop its reference".  That was an optimization for the
> old-style concat code; when the left side only had one reference it
> would simply resize it and memcpy() in the right side.  I plan to add
> support for dropping the reference when it's the *right*-hand side of
> the concatenation, as that would help prepending immensely.  Once that's
> done, I believe it'll prepend ((depth limit) * (number of items in
> ob_sstrings - 1)) + 1 strings before needing to render.

I am confused as to whether you are referring to the LHS of the concatenation operation or of the assignment operation. But I haven't looked at how the reference-counting optimizations are done yet. In general, there are caveats about removing references, but I plan to look at that later.
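As for the prepend capacity you mention, if I read the formula correctly it works out as below. The constants are made up, since I don't know the actual values used in the patch:

    # Purely illustrative: the real depth limit and the size of the
    # ob_sstrings array are whatever the patch defines; these are guesses.
    depth_limit      = 2 ** 14   # hypothetical maximum tree depth
    strings_per_node = 8         # hypothetical ob_sstrings capacity

    max_prepends = depth_limit * (strings_per_node - 1) + 1
    print(max_prepends)          # 114689 with the guesses above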

There is another, possibly complementary way of reducing the recursion depth. While creating a new concatenation object, instead of inserting references to the two operand strings, the strings they themselves reference can be inserted into the new object. This can be done if the number of strings they contain is small. In the x = "a" + x case, for example, this will reduce the recursion depth of the string tree (but not reduce the allocations).
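To make the idea concrete, here is a toy Python sketch of what I mean; the real code would of course live in the C string object in your patch, and names like LazyConcat and MAX_INLINE are invented here purely for illustration:

    # Toy model only: the patch does this in C inside the string object,
    # not with a separate Python class.
    MAX_INLINE = 8   # hypothetical threshold for a "small" node

    class LazyConcat(object):
        def __init__(self, left, right):
            self.parts = []
            for side in (left, right):
                if isinstance(side, LazyConcat) and len(side.parts) < MAX_INLINE:
                    # Instead of holding a reference to the child node,
                    # copy its string references into this node, so the
                    # tree stays shallow for x = "a" + x style loops.
                    self.parts.extend(side.parts)
                else:
                    self.parts.append(side)

        def render(self):
            # Flatten recursively only when the value is actually needed.
            return "".join(p.render() if isinstance(p, LazyConcat) else p
                           for p in self.parts)

With this flattening, repeated x = "a" + x only adds a level of nesting every few prepends instead of on every prepend, while the number of node allocations stays the same.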


-Chetan