As you will see in a second, I am definitely not a programmer so, FWIW: I recall that a fast way to search for things is with a hash table. This can be generated in advance and only needs to be generated once, into a static list, I think. Then TW manages hash values instead of documents (...not sure if that makes sense), so that when you click to open a tiddler, the document gets located and presented. I'm guessing there are only a few links in each document, so once a document is opened (including the starting document), you can scan it to locate all its links and fetch these in the background (or even lazy loading), so that by the time the reader clicks a link, that doc has already been fetched.
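Maybe a sketch makes it less gobbledygook. This is NOT real TiddlyWiki code — every name here (`tiddlers`, `addTiddler`, `openTiddler`) is made up for illustration, and I'm assuming a plain `[[Title]]` link syntax. A JavaScript `Map` stands in for the hash table that gets built once in advance:

```javascript
// Hypothetical sketch, not TiddlyWiki internals. A Map gives the
// hash-table lookup, generated once up front; opening a tiddler then
// scans its text for [[links]] so those could be fetched in the background.

const tiddlers = new Map(); // title -> text, built once, static thereafter

function addTiddler(title, text) {
  tiddlers.set(title, text);
}

// Find plain [[Title]] links in a tiddler's text (the [[label|target]]
// form is ignored here to keep the sketch short).
function findLinks(text) {
  const links = [];
  const re = /\[\[([^\]]+)\]\]/g;
  let match;
  while ((match = re.exec(text)) !== null) {
    links.push(match[1]);
  }
  return links;
}

// O(1) lookup by title, then collect the links so a real implementation
// could prefetch them in the background.
function openTiddler(title) {
  const text = tiddlers.get(title);
  if (text === undefined) return null; // unknown title
  return { text, prefetched: new Set(findLinks(text)) };
}
```

So opening any tiddler is one hash lookup, and the handful of links it contains are known immediately, ready to prefetch before the reader clicks them.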
(Gobbledygook?) <:-)

On Monday, March 3, 2014 9:20:56 PM UTC+1, Timothy Groves wrote:
> Some friends of mine and I are writing a program that outputs a metric
> crapton of text, and we stumbled across TiddlyWiki whilst looking for an
> easy way to store and view the data. It seems perfect, except for one tiny
> detail: creating the file. To clarify, we are talking literally millions
> of wiki entries at once - somewhere in the neighbourhood of fifty to one
> hundred and fifty million entries per run. Clearly, we don't want to
> manually import.
>
> Is there an easy-to-follow guide for outputting a fully populated TW file?
> If not, I can tear the program apart and examine it line by line, but I
> was hoping that someone could point me in the right direction to save me
> some work.

