On Fri, Oct 25, 2019 at 11:30 AM Edward K. Ream <[email protected]> wrote:

> The new fstringify-file command will beautify as well as fstringify.
> There's no easy way *not* to beautify the file.
>

I have reached a good stopping point, which doesn't necessarily mean that
I'll stop :-)

This morning I spent several hours creating a "do-nothing" tokenizer.  I
then remembered the untokenize function in Python's tokenize module. This
is exactly what is needed!
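For example, a full round trip takes only a few lines (a minimal sketch of my own, not Leo's code):

```python
import io
import tokenize

# Tokenize a source string, then rebuild it from the full 5-tuple tokens.
source = "x = 1\nif x:\n    y = x + 2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
result = tokenize.untokenize(tokens)
assert result == source  # an exact, character-for-character round trip
```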

Those preliminary hours were not wasted--they helped me understand all the
issues. The untokenize code is short, but far from easy. The subtleties
involve recreating the whitespace *between* tokens.  Continued lines
(backslash-newlines) are the acid test. The add_whitespace method is the
crucial code. I'm so glad I don't have to recreate it!
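To see why continued lines are the acid test, feed untokenize a backslash-newline and check that the whitespace comes back (my own check against CPython's tokenize module, not Leo's code):

```python
import io
import tokenize

# A logical line split by a backslash-newline: the whitespace *between*
# the '+' token and the '2' token must be recreated as backslash,
# newline, then four spaces of indentation.
source = "total = 1 +\\\n    2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
assert tokenize.untokenize(tokens) == source
```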

The untokenize function supposedly *guarantees* round-tripping of source
code. I may study python's unit tests to see why this statement can be made
with confidence.

Round-tripping defines a do-nothing "beautifier".  My fstringify code will
be based on untokenize, but it will step in and handle string tokens.
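A sketch of that idea (my own illustration, not the actual fstringify code): pass 2-tuples to untokenize and transform only the STRING tokens. In 2-tuple mode, untokenize guarantees only that the result tokenizes back to matching tokens, so inter-token spacing may change:

```python
import io
import tokenize

def rewrite_strings(source, transform):
    """Hypothetical helper: re-emit source, passing every STRING token
    through transform and leaving all other tokens untouched."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.STRING:
            tokens.append((tokenize.STRING, transform(tok.string)))
        else:
            tokens.append(tok[:2])  # 2-tuples: (type, string)
    return tokenize.untokenize(tokens)

# With the identity transform, this is a do-nothing "beautifier".
print(rewrite_strings("a = 'hi'\n", lambda s: s))
```

A real fstringify would replace the transform with logic that converts %-format and .format() strings into f-strings.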

*Summary*

Imo, this project is worth any amount of work, because it shows how to base
black or fstringify on tokens. tokenize.untokenize implements a do-nothing
"beautifier".

A do-nothing beautifier could easily provide the foundation for the "real"
fstringify, and would also be extremely useful for black.  I am ever more
convinced that using tokens is superior to parse trees for text munging.

Edward

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/CAMF8tS3N3xFKo%3D%2BhVNPrAoW3Y3v30SfDwQvbYv-D-4PVhOySpA%40mail.gmail.com.