Hi Mat,

On 29/12/14 14:02, Mat wrote:
> Erwan, thanks for your reply. Very interesting. Some thoughts arise:

> I don't know how long a conversion from TW into TW node.js takes, but the other steps you describe should be pretty fast if automated (right?). In other words, if the TWs were already in node.js form then this would be much simpler, correct?

Sorry, maybe I wasn't clear: my process is already fully automated. My script does all the downloading, converting to Node.js, filtering (keeping only regular tiddlers), merging into the global wiki, and finally converting back to standalone HTML. The script is also run automatically every day by a cron job on a local machine.
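For what it's worth, the filtering step boils down to one test: "regular" tiddlers are simply those whose titles don't start with the `$:/` system prefix. A minimal sketch in Python (the dict shape is an assumption, loosely modelled on TiddlyWiki's JSON tiddler format, not my actual script):

```python
# Keep only "regular" tiddlers, i.e. drop system tiddlers whose titles
# start with the "$:/" prefix. The dict representation is illustrative.
def is_regular(tiddler):
    return not tiddler["title"].startswith("$:/")

tiddlers = [
    {"title": "$:/core/ui/MoreSideBar/All", "text": "..."},
    {"title": "HelloThere", "text": "A regular tiddler"},
    {"title": "@mat", "text": "A twitter-style tiddler"},
]

regular = [t for t in tiddlers if is_regular(t)]
# regular now holds only "HelloThere" and "@mat"
```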

I'm not sure, but I suspect that publishing public wikis as node.js on the web would not make much sense, because then they wouldn't be readable by a standard browser (and that is quite a convenient feature :) ). Anyway, IMHO the conversion is only one of the compute-intensive parts; to be honest I didn't measure which part takes how much time (downloading the wikis is also very significant).



> But how about this for the twitter-type idea, if at all possible: maybe it'd be enough to start off by looking at the tiddler $:/core/ui/MoreSideBar/All <http://tiddlywiki.com/#%24%3A%2Fcore%2Fui%2FMoreSideBar%2FAll> and filtering for the tiddlers that start with # and @. For twitter, you're not interested in all tiddlers, after all. Would this simplify and speed up your method?

> Further, if your engine were included in the local wiki, it could look at the date of the last update and filter not only the # and @ tiddlers, but also include only those added or modified since the last update!


Yes, I see your point, but I can't do that technically (or I don't know how): I need to download the full HTML page, and I can only decode it by converting it to node.js. Only at that stage can I start manipulating and filtering tiddlers.
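Once the tiddlers are in a manipulable form, though, the selection you describe could probably be written as an ordinary TiddlyWiki filter. An untested sketch (the prefix, !is[system] and sort operators do exist in TW5 filter syntax, but I haven't tried this exact expression):

```
[!is[system]prefix[#]] [!is[system]prefix[@]] +[!sort[modified]]
```

The "since last update" part would then mean comparing each tiddler's modified field against the date of the previous run.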

This whole process is cumbersome because it takes place outside TW. I have no idea whether it could be done from inside (I know very little about JavaScript and about web development in general), but that would indeed be the ideal situation: a wiki A would be able to "communicate" with another wiki B, so A could send a filter request to B, which would return only the required tiddlers, and A could then transclude them the way it wants. In that case no conversion is needed at all; storage might even be unnecessary, unless the workload is too much for the network. Even then, if the tiddlers are copied, the update process becomes much lighter, because we only need to fetch the most recent tiddlers, as you suggested.
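To make the idea concrete, the exchange might look something like this (everything here is hypothetical: the host name and the JSON endpoint accepting a filter are assumptions of mine, not an existing TiddlyWiki feature):

```
# Wiki A asks wiki B for just the tiddlers matching a filter
# (hypothetical endpoint and parameter):
GET http://wiki-b.example.org/tiddlers.json?filter=[!is[system]prefix[#]]

# B replies with only those tiddlers as JSON, which A can then
# transclude or store locally; no HTML download or conversion needed.
```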


Regards
Erwan


> BTW, I just posted a question <https://groups.google.com/forum/?fromgroups=#%21topic/tiddlywikidev/7KxIAyLmdjc> on the dev group asking whether it is possible to include a filter straight in the URL, which would be another approach to this.

> <:-)

--
You received this message because you are subscribed to the Google Groups "TiddlyWiki" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/tiddlywiki.
For more options, visit https://groups.google.com/d/optout.

