"Ben Wilson" wrote: >> what were the reasons for the current method to store page markup text >> (in one line, with newline and percent sign converted)? > >Think of the wiki page storage as a hash (associative array), with >each row of the file being a different key/value pair. When the page >is read into such a hash, then calling a specific value is a simple >hash reference. Specifically relating to the page text, it is >"$Page['text']"
> Aliasing the newlines allows all the page text to be
> displayed on one line this way.

...making the read and write functions of PageStore very simple. As I
understand PM's mail, that was the reason.

> This approach differs from DokuWiki where the current page is in one
> file, and all metadata (including history) is stored separately.

IMO that's a separate point. I could store the data without aliasing
newlines and still keep metadata and history in the same file. I
_strongly_ dislike the DokuWiki approach of separating metadata from
the page text. That's one of the reasons I don't use DokuWiki,
although apart from this it looks nice.

> As far as accessing the data separately, the PmWiki approach is
> actually quite technology independent. What I mean is you can grep
> out the page text and pipe through sed to display the source the way
> it is written. I've used grep/sed, and Python both to read PmWiki
> pages.

Do you want to share this? One could use it along with a diff/merge
tool. You don't have a solution for putting changes back into the page
data file? (A sketch of what I mean follows at the end of this mail.)

[...]

>> Before I start to hack an import filter for Beyond Compare: Are
>> there tools to convert, compare, edit the pages?
>>
>> Maybe I'm simply using the wrong tools...
>
> I'd need a bit more specificity for me to help here. If you're
> talking [...]

See my answer to Patrick.
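As mentioned above, here is a sketch of what I mean by putting changes
back: re-encode the text and rewrite the file. Untested Python again,
building on read_page from earlier in this mail. Note that it leaves
the history ("diff:...") lines alone, so the stored history would no
longer match the text; that is exactly the part a real tool would have
to get right.

# Sketch: put edited markup back into a page file by re-encoding
# the "text" value and passing every other line through untouched.
# Caveat: the history entries ("diff:..." lines) are NOT updated,
# so the page history would no longer match the current text.
def write_back(path, new_text):
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    # encode percent signs first, then newlines (reverse of decoding)
    encoded = new_text.replace("%", "%25").replace("\n", "%0a")
    with open(path, "w", encoding="utf-8") as f:
        for line in lines:
            if line.startswith("text="):
                line = "text=" + encoded
            f.write(line + "\n")

# page = read_page("wiki.d/Main.HomePage")
# page["text"] = page["text"].replace("foo", "bar")   # some edit
# write_back("wiki.d/Main.HomePage", page["text"])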
Oliver

-- 
Oliver Betz, Muenchen (oliverbetz.de)