On Sun, Oct 31, 2021 at 8:22 AM [email protected] <[email protected]>
wrote:

> Very large collections are best thought of as graphs, IMO, because there
> are usually many types of connections between their entries, depending of
> course on the type and intended use of those entries.  However, treelike
> *views* into the data are very often much better for a human to work with.
> With large collections, it can take a long time to create a view from
> scratch, so it is helpful to create the most important ones in advance.
> In the database world, the creation of such views is aided by indexes,
> temporary tables, and database views.  In Python (and other languages
> that have native map structures), dictionaries can play that role.
>
> With increasing size, finding something becomes harder.  It may well be
> that for Leo, once it can work with very large numbers of nodes, we will
> need new and faster ways to find items and peruse them.
>
> Another issue of size is the amount of data that a single node can hold.
> I recently crashed Leo by trying to read some 80 megabytes of text into the
> body of a node.  I was curious how fast it could do a search and replace on
> that much data, but I didn't find out because of the crash.  Of course, we
> are currently limited by Qt's capabilities, and Leo may never need to do
> such a thing, so it may not matter.
>
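
The point about dictionaries playing the role of database indexes can be
sketched roughly as follows. This is only an illustration, not Leo's actual
node API; `Node`, `headline`, and `tags` are hypothetical names:

```python
# Minimal sketch: a dict as a precomputed index into a large node
# collection.  Node and its fields are hypothetical, not Leo's API.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Node:
    headline: str
    tags: list = field(default_factory=list)

nodes = [
    Node("intro", tags=["doc"]),
    Node("parser", tags=["code", "core"]),
    Node("tests", tags=["code"]),
]

# Build the index once, up front.  Each later lookup is then an
# average O(1) dict access instead of a scan over every node.
by_tag = defaultdict(list)
for node in nodes:
    for tag in node.tags:
        by_tag[tag].append(node)

# A "view" of all code-related nodes, ready to present to the user.
code_nodes = [n.headline for n in by_tag["code"]]
```

The index trades a little memory and one pass over the data for fast
repeated lookups, which is the same trade-off a database index makes.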

Thanks for these comments. They are close to my thinking.

These issues have low priority at present. I have no great confidence that
they can be solved. More importantly, I have no need to solve them!

Edward

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/CAMF8tS0aJp4Y-wLgc9k__Bn8yzgkOzXkw2ock%2B1S4YQbQ9z-qQ%40mail.gmail.com.
