This Engineering Notebook post announces several momentous Ahas, all 
directly related to VNodes and our attitude towards them. These Ahas 
resolve long-standing questions and suggest new ways of simplifying Leo's 
most fundamental code.


*Acknowledgements*: These Ahas would not have happened without recent 
discussions with Félix and Vitalije.


*Aha! *The best way to check outline structure is to test the contents of 
the *v.parents* and *v.children* arrays.


The new *c.checkVnodeLinks* method is 100 times faster than previous checks 
and finds problems that have lain hidden until now. Leo performs these 
checks when reading, pasting or saving outlines.
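

For reference, the invariant being checked is simple. Here is an 
illustrative sketch, not Leo's actual c.checkVnodeLinks code: only 
v.parents, v.children and v.gnx are real VNode attributes.

def check_links(v):
    """Check the parent/child invariant for one vnode.

    Each child must back-link to v, with matching multiplicity,
    so that clones are represented consistently.
    """
    for child in v.children:
        if v.children.count(child) != child.parents.count(v):
            raise ValueError(f'bad links: {v.gnx} <-> {child.gnx}')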


The read check *has already failed* when loading an outline after switching 
branches. The paste checks cause two paste-related unit tests to fail.


*Aha!* c.checkVnodeLinks can *repair* broken or missing links.


The unified PR contains prototype code. I'll enable this code only after it 
passes a strict new unit test.
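

One plausible shape for such a repair, shown only as a sketch: it assumes 
v.children is treated as the source of truth, and the prototype in the PR 
may take a different approach.

def repair_links(v):
    """Add any missing back-links from v's children to v.

    A hypothetical sketch: treat v.children as authoritative and
    append v to each child's parents array until counts match.
    """
    for child in set(v.children):
        missing = v.children.count(child) - child.parents.count(v)
        for _ in range(missing):
            child.parents.append(v)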


*Aha!* VNodes are the proper context for most low-level operations.


I never realized how true this is until now.


*Aha!* The *c.all_unique_nodes* generator (or something similar, see below) 
should be the basis for many VNode operations.


Relying on c.all_unique_nodes is a new pattern for me! For example, 
c.checkVnodeLinks now uses c.all_unique_nodes instead of a bespoke 
recursion on v.children.
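

The pattern itself is just a non-recursive walk with a "seen" set. A 
sketch, assuming a list of top-level vnodes; Leo's real c.all_unique_nodes 
is a Commands method:

def all_unique_nodes(roots):
    """Yield every vnode in the tree exactly once.

    Walk v.children iteratively, using a set of gnxs so that
    clones are yielded only once.
    """
    seen = set()
    stack = list(roots)
    while stack:
        v = stack.pop()
        if v.gnx not in seen:
            seen.add(v.gnx)
            yield v
            stack.extend(v.children)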


*Aha!* I now know how to backup/restore a VNode and its descendants!


Neither recursion nor pickling will work because VNodes contain backlinks 
in their v.parents arrays. Instead, Leo's new archive code will create a 
json-like dict in which almost all keys and values will be gnxs!


Define *p.archive* (yes, p) something like this (untested, but the code 
passes mypy):


from typing import Any  # VNode is assumed to be in scope (leoNodes.py).

def archive(self) -> dict[str, Any]:
    """Return a json-like archival dictionary for p/v.unarchive."""
    p = self
    # Create a list of all unique vnodes in p.self_and_subtree.
    all_unique_vnodes: list[VNode] = []
    for p2 in p.self_and_subtree():
        if p2.v not in all_unique_vnodes:
            all_unique_vnodes.append(p2.v)
    # Archive the links and attributes of all_unique_vnodes,
    # keyed by gnx so the result contains no object references.
    parents_dict: dict[str, list[str]] = {}
    for v in all_unique_vnodes:
        parents_dict[v.gnx] = [z.gnx for z in v.parents]
    children_dict: dict[str, list[str]] = {}
    for v in all_unique_vnodes:
        children_dict[v.gnx] = [z.gnx for z in v.children]
    marks_dict: dict[str, str] = {}
    for v in all_unique_vnodes:
        marks_dict[v.gnx] = str(int(v.isMarked()))
    uas_dict: dict[str, dict] = {}
    for v in all_unique_vnodes:
        uas_dict[v.gnx] = v.archive_ua()  # To do.
    return {
        'vnodes': [v.gnx for v in all_unique_vnodes],
        'parents': parents_dict,
        'children': children_dict,
        'marks': marks_dict,
        'uAs': uas_dict,
    }
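

For completeness, the inverse operation mentioned in the docstring might 
look roughly like this. This is entirely hypothetical: gnx2v, a table 
mapping gnxs to live vnodes, is an assumed helper, and marks and uAs are 
not restored here.

def unarchive(d, gnx2v):
    """Restore parent/child links from an archival dict.

    A hypothetical sketch: d is the dict returned by p.archive;
    gnx2v maps each gnx to its (already created) VNode.
    """
    for gnx in d['vnodes']:
        v = gnx2v[gnx]
        v.parents = [gnx2v[z] for z in d['parents'][gnx]]
        v.children = [gnx2v[z] for z in d['children'][gnx]]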


*Summary*


*c.checkVnodeLinks* reveals previously hidden errors in Leo's code. Leo's 
codebase is about to become significantly more solid!


c.checkVnodeLinks can correct any broken or missing links in a 
straightforward way; during unit tests it will raise ValueError :-)


A generator on unique vnodes is the natural way to accumulate archival 
data. These data should be a json-like dict in which all keys will be gnxs 
and many values will be lists of gnxs.


*p.archive* and *v.archive_uas* will likely replace ad-hoc code scattered 
throughout Leo's codebase.


*I shall merge the unified PR into devel later today.* There is no reason 
to delay. I'll then create new issues and PRs:


- Fix the failing unit tests related to pasting nodes.

- Create a stringent new unit test for repairing links.

- Enable automatic link repair.

- Experiment with archives.


Edward
