Hi Dave,
oXygen, for example, transforms DocBook documents stored in XML databases
such as MarkLogic, eXist, Berkeley XML DB, xDB (formerly XHive), etc. We
access the database resources as URLs, implementing standard Java URL
handlers to connect to each database, so the transformation is no
different for a file stored in a database than for a file stored on a
web server or on the local file system.
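[A minimal sketch of the URL-handler technique George describes, using only the JDK: a custom `java.net.URLStreamHandler` is registered for a scheme so that `URL.openStream()` works uniformly regardless of where the bytes live. The `xmldb` scheme and the in-memory store are illustrative assumptions; a real handler would open a connection to MarkLogic, eXist, etc.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical "xmldb" scheme backed by an in-memory store;
// a real handler would talk to the database's client API instead.
public class XmlDbUrlDemo {
    static final Map<String, String> STORE =
        Map.of("/docs/book.xml", "<book><title>Demo</title></book>");

    static class XmlDbConnection extends URLConnection {
        XmlDbConnection(URL url) { super(url); }
        @Override public void connect() { }
        @Override public InputStream getInputStream() throws IOException {
            String doc = STORE.get(getURL().getPath());
            if (doc == null) throw new IOException("not found: " + getURL());
            return new ByteArrayInputStream(doc.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) throws Exception {
        // Register a handler for the custom scheme (allowed once per JVM),
        // then read the "database" document exactly as if it were a file or
        // HTTP URL -- the consuming code never knows the difference.
        URL.setURLStreamHandlerFactory(protocol ->
            "xmldb".equals(protocol) ? new URLStreamHandler() {
                @Override protected URLConnection openConnection(URL u) {
                    return new XmlDbConnection(u);
                }
            } : null);
        URL url = new URL("xmldb://server/docs/book.xml");
        try (InputStream in = url.openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
```

[Because the XSLT processor only ever sees a URL, the same transformation scenario works unchanged whether the source is `file:`, `http:`, or a database scheme like the one above.]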
Best Regards,
George
--
George Cristian Bina
<oXygen/> XML Editor, Schema Editor and XSLT Editor/Debugger
http://www.oxygenxml.com
On 5/6/10 11:40 AM, Dave Pawson wrote:
Has anyone tried processing XML source held in a database? Obviously an
XML database.
With a goal of chunked HTML output, I can see the problems.
I'm thinking of something like:
XQuery the [appropriate chunk level] out to get the id values.
Generate the TOC using those, then iterate through the [chunk levels],
processing one [say a chapter, section, or topic] at a time.
Seems doable. Anyone see any big holes?
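[The steps Dave outlines can be sketched with the JDK's built-in XPath support; in a real pipeline the id extraction would be an XQuery run against the database rather than an in-memory DOM query. The chapter-level chunking, the `id` attribute, and the one-HTML-file-per-chunk naming are assumptions for illustration.]

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class ChunkedToc {
    // Step 1: query out the ids of the chunk-level elements (chapters here).
    static List<String> chapterIds(Document doc) throws Exception {
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
            .evaluate("/book/chapter/@id", doc, XPathConstants.NODESET);
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++)
            ids.add(nodes.item(i).getNodeValue());
        return ids;
    }

    // Step 2: generate the TOC from those ids, one HTML file per chunk.
    static String toc(List<String> ids) {
        StringBuilder sb = new StringBuilder("<ul>\n");
        for (String id : ids)
            sb.append("  <li><a href=\"").append(id).append(".html\">")
              .append(id).append("</a></li>\n");
        return sb.append("</ul>").toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<book>"
            + "<chapter id='ch1'><title>One</title></chapter>"
            + "<chapter id='ch2'><title>Two</title></chapter>"
            + "</book>";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
        List<String> ids = chapterIds(doc);
        System.out.println(toc(ids));
        // Step 3: iterate, transforming one chunk at a time (stubbed here),
        // so no single transformation ever touches the whole document.
        for (String id : ids)
            System.out.println("processing chunk: " + id);
    }
}
```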
My higher-level goal is processing large files. My biggest document
is only 120K words, possibly 150 KB, but I'm sure the time will come
when people are processing far larger files.
Just curious.
regards
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]