On Tuesday, November 30, 2021 at 9:41:39 AM UTC-6 Edward K. Ream wrote:

> On Tuesday, November 30, 2021 at 8:21:35 AM UTC-6 [email protected] wrote:
>
>> Would it be feasible to use Python's tokenizer?  That would eliminate the 
>> problem of unusual whitespace, as well as continuation lines, in a 
>> perfectly Python-compatible way.
>>
>
> The short answer is: feasible, yes, useful, no.
>

A slightly longer answer. The front end of Leo's importers is solid. 
gen_lines calls scan_line to determine the scan state. For Python, a line 
is "in a state" if the *end* of the line is in a string or docstring. In 
that case, the python importer simply adds the line to the present node.
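To make the front-end idea concrete, here is a minimal sketch (not Leo's actual scan_line, and ignoring details such as backslash continuations and f-strings) of a per-line scanner that tracks whether the end of a line falls inside a string or docstring:

```python
def scan_line(line, context=''):
    """Return the string context at the end of `line`.

    `context` is '' (not in a string), "'" or '"' (in a single-quoted
    string), or "'''" / '\"\"\"' (in a docstring). A toy sketch only.
    """
    i = 0
    while i < len(line):
        if context:
            # Inside a string: look only for the matching terminator.
            if line.startswith(context, i):
                i += len(context)
                context = ''
            elif line[i] == '\\':
                i += 2  # Skip an escaped character.
            else:
                i += 1
            continue
        ch3, ch = line[i:i + 3], line[i]
        if ch3 in ("'''", '"""'):
            context = ch3
            i += 3
        elif ch in ("'", '"'):
            context = ch
            i += 1
        elif ch == '#':
            break  # Rest of line is a comment.
        else:
            i += 1
    # Single-quoted strings cannot span lines, so only
    # triple-quote contexts survive past the end of the line.
    if context in ("'", '"'):
        context = ''
    return context
```

With this, the front end's rule is one comparison: if `scan_line(line, context)` returns a non-empty context, the line ends inside a docstring and is simply appended to the present node.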

The difficulties lie in the back end, not the front end. gen_lines assigns 
lines to nodes (creating new nodes as needed), based on a combination of:

- The indentation and 'kind' fields of node p and all its ancestors.
- The indentation of the to-be-assigned line.

The difficulty is the explosion of cases. In this respect (and others) the 
python importer resembles a compiler's code generator.
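As a toy illustration of why the back end is the hard part, here is a hypothetical sketch (again, not Leo's gen_lines) of indentation-driven node assignment: a stack of (indentation, node) pairs, where a dedent pops nodes and a `def` or `class` line opens a new child. Even this simplification hints at the case explosion once decorators, comments, and blank lines enter the picture.

```python
import re

def gen_lines(lines):
    """Assign source lines to a tree of nodes by indentation. A sketch."""
    root = {'headline': 'root', 'lines': [], 'children': []}
    stack = [(-1, root)]  # (indentation, node) pairs, innermost last.
    for line in lines:
        stripped = line.lstrip()
        if not stripped:
            stack[-1][1]['lines'].append(line)  # Blank: keep in place.
            continue
        indent = len(line) - len(stripped)
        # Pop every node whose body this line has dedented out of.
        while len(stack) > 1 and indent <= stack[-1][0]:
            stack.pop()
        parent = stack[-1][1]
        m = re.match(r'(def|class)\s+(\w+)', stripped)
        if m:
            # A def/class line starts a new child node.
            child = {'headline': m.group(2), 'lines': [line], 'children': []}
            parent['children'].append(child)
            stack.append((indent, child))
        else:
            parent['lines'].append(line)
    return root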

I think I have discovered a way to drastically reduce the number of "code 
generation" cases.  Perhaps later today.

Edward

To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/a79dc2f4-43f3-4863-8d2c-3a4a9e7b6b58n%40googlegroups.com.
