If you feel like I've attacked you, I apologize: that wasn't my intention.
Please don't take it personally: I only reported my honest opinion, though
on re-reading it, it comes across as too rude, and I'm sorry for that.

Regarding the post-bytecode-optimization issues, they mainly come from
the constant folding code, which still lives in the peephole optimizer.
Once it's moved to the proper place (the ASDL/AST level), that kind of
stack-calculation issue disappears, and the remaining ones can be
addressed by fixing the current stackdepth_walk function.
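
To make that concrete, here is a minimal sketch of what AST-level folding
can look like (using only the standard ast module; the FoldConstants
visitor is purely illustrative, not CPython's actual code):

    import ast

    class FoldConstants(ast.NodeTransformer):
        """Illustrative only: fold an addition of two numeric literals."""

        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold the children first
            if (isinstance(node.op, ast.Add) and
                    isinstance(node.left, ast.Num) and
                    isinstance(node.right, ast.Num)):
                return ast.copy_location(ast.Num(node.left.n + node.right.n),
                                         node)
            return node

    tree = ast.parse("x = 1 + 2 + 3")
    tree = ast.fix_missing_locations(FoldConstants().visit(tree))
    code = compile(tree, "<folded>", "exec")
    # The code generator only ever sees the constant 6, so the stack
    # calculation never has to model the original chain of additions.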

And just to be clear, I have nothing against your code. Based on my
experience, I simply think it doesn't fit into CPython.

Regards
Cesare

2016-05-18 18:50 GMT+02:00 <zr...@fastmail.com>:

> Your criticisms may very well be true. IIRC though, I wrote that pass
> because what was available was not general enough. The stackdepth_walk
> function made assumptions that, while true of code generated by the current
> cpython frontend, were not universally true. If a goal is to move this
> calculation after any bytecode optimization, something along these lines
> seems like it will eventually be necessary.
>
> Anyway, just offering things already written. If you don't feel it's
> useful, no worries.
>
>
> On Wed, May 18, 2016, at 11:35 AM, Cesare Di Mauro wrote:
>
> 2016-05-17 8:25 GMT+02:00 <zr...@fastmail.com>:
>
> In the project https://github.com/zachariahreed/byteasm I mentioned on
> the list earlier this month, I have a pass that computes stack usage
> for a given sequence of bytecodes. It seems to be a fair bit more
> aggressive than cpython. Maybe it's more generally useful. It's pure
> python rather than C though.
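>
> A much simplified illustration of the idea, using only the standard dis
> module and assuming straight-line code (the real pass also has to follow
> jumps and merge branches):
>
>     import dis
>
>     def straight_line_stack_usage(code):
>         """Illustrative only: track the stack depth over a jump-free
>         bytecode sequence and report the highest value reached."""
>         depth = max_depth = 0
>         for instr in dis.get_instructions(code):
>             # dis.stack_effect wants the oparg for opcodes that take one
>             depth += dis.stack_effect(instr.opcode, instr.arg)
>             max_depth = max(max_depth, depth)
>         return max_depth
>
>     print(straight_line_stack_usage(compile("a = b + c * d", "<s>", "exec")))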
>
>
> IMO it's too big, resource-hungry, and slower, even if you converted it to C.
>
> If you take a look at the current stackdepth_walk function that CPython
> uses, it's much smaller (not even 50 lines of simple C code) and quite
> efficient.
>
> Currently the problem is that it doesn't return the maximum depth of the
> tree; instead, it updates the intermediate/current maximum and *then* uses
> it for the subsequent calculations. So the depth grows artificially, as in
> the reported cases.
>
> It doesn't require a complete rewrite, just some time spent fine-tuning it.
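>
> Roughly, the shape would be something like this (a Python sketch with
> hypothetical names and data structures, not the actual compile.c code):
> each call returns the maximum depth reached inside its own subtree, and
> the caller combines the results, instead of threading one ever-growing
> value through the whole walk.
>
>     from types import SimpleNamespace
>
>     def max_subtree_depth(block, depth):
>         """Hypothetical sketch: return the maximum stack depth reached
>         anywhere in the subtree rooted at block (acyclic graph assumed)."""
>         highest = depth
>         for effect, target in block.instrs:  # (stack effect, jump target)
>             depth += effect
>             highest = max(highest, depth)
>             if target is not None:
>                 highest = max(highest, max_subtree_depth(target, depth))
>         if block.next is not None:  # fall-through successor
>             highest = max(highest, max_subtree_depth(block.next, depth))
>         return highest
>
>     # Toy block graph: push a value, push a condition, a conditional jump
>     # pops the condition, and each successor pops the remaining value.
>     branch = SimpleNamespace(instrs=[(-1, None)], next=None)
>     fallthrough = SimpleNamespace(instrs=[(-1, None)], next=None)
>     entry = SimpleNamespace(instrs=[(1, None), (1, None), (-1, branch)],
>                             next=fallthrough)
>     print(max_subtree_depth(entry, 0))  # -> 2, the true maximum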
>
> Regards
> Cesare
>
>
>