I'm attracted to the spate of recent flat-file blogging platforms, which 
use plain-text (Markdown) files to store the blog posts. I especially like 
the idea of keeping the files in Dropbox or a git repo. 

Has anyone experimented with this in web2py? I can imagine at least two 
ways of doing it:

1. A controller retrieves the text file at runtime based on its filename, 
parses it as necessary, and passes the result to the view. This strikes me 
as both the simplest and the slowest approach. It bypasses the web2py 
model structure altogether, but it could be sped up significantly by 
caching rendered pages on the server. 

2. A watcher of some sort (a cron/scheduler job, a utility watching for 
file changes, a git hook?) notices new or changed files in the content 
directory and triggers a background process that parses each file and 
stores its content in the db. This has the advantage of using a model and 
the associated data abstraction, and I assume it would be faster on first 
page access. But I'm not sure the small speed-up is worth the extra 
complexity, especially if the gain is negated by server-side caching in 
option 1.

I suppose that something like option 2 would have to be present in option 
1 anyway, since the system would still need to recognize a new file and 
add it to the index of available posts.

Are there other approaches I'm not thinking of? Tools or libraries that 
would be useful in the process? Or do you think it's all just not worth the 
trouble?

Thanks,

Ian 

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.