I don't know if anyone else has been looking into this, but as I had time on my hands I've knocked up some experimental code. I'm not pushing this as the ideal solution; really I just want something 'out there' that can be discussed in more concrete terms.
Inspiration...

On the OLPC wiki someone said "Something like a tiny webserver that used SQLite or dbh for storage would be enough". I don't know who wrote that, but thanks, you got me thinking and acting :D Also, as MediaWiki is the wiki I'm used to using, you may notice a bit of inspiration from there.

Construction...

- Small C program. (Please, no language advocacy :)
- Launched by (x)inetd on connection. (I wanted to avoid a long-running daemon. Will (x)inetd even be present?)
- It's a mini webserver (it only implements what's needed to perform its function); the UI is a web browser.

Dependencies...

- libsqlite3 for storing 'books'. (The library is a small, fast, ACID application SQL database.)
- libmagic (to identify the mime-type of uploaded files).

Theory of operation...

Very much a wiki. Each 'ebook' is a separate SQLite database that can have pages of arbitrary content entered into it (text, images, files, anything). It implements a subset of the wiki markup used by MediaWiki for basic document creation and linking.

Transfer of books is straightforward, as they are single files. You may want to gzip them to reduce their transit size.

In terms of styling documents, it's all down to CSS. I'm not quite sure of the best way to integrate it. Maybe the presence of a <page_name>_css page causes it to be added as an additional stylesheet?

Problems/missing...

String handling is done in a very traditional C manner and I suspect it will break on some multibyte character encodings. Does anyone know of a good C library for handling these character formats?

Revisions aren't in yet. I didn't find a nice C library for diffing; unless someone knows of one, I'll either write something or just pull code from GNU diff. Suggestions welcome.

Uploaded files have to be loaded completely into memory for database insert/retrieve operations. This limits the maximum size of file that can be handled to the available memory. Would anyone want to put a 50MB file into a book? Maybe.

Extraction isn't in yet.
This will let you copy pages between books.

Categories aren't in yet. Being able to tag content into zero or more categories will help both with in-book organisation and with extraction.

Security. There isn't any, although I've been toying with the idea of having the main program listen on localhost, with an authenticating proxy sitting on the public interface. That way the user can manage an ACL for public access to their books without it bloating the core.

Where's the code then?

I'm in the process of setting up a SourceForge project. When it's all set up and uploaded I'll drop another email to the list. I hope to have this done within a week. One problem: I've no idea what to call the project. I guess calling it 'olpcwiki' would be a bit bold ^_^

Hopefully this will draw out more discussion on the "wiki as ebook reader" topic.

ttfn,
John

--
olpc-software mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/olpc-software
