Hi Mike and Ghislain,

> Of course, parsing a full language like SQL is best done using the typical
> approaches (lexer, grammar, etc) as Mike suggests, and is not trivial. But if
> the subset is really very simple (as simple as your example), known in
> advance, and if there are no irregularities in newlines, etc, then the above,
> more ad-hoc approach could work as well quite straightforwardly: a for to
> iterate on the lines, start tumbling windows at rows that start with "CREATE
> TABLE", then sub-windows to catch the parentheses and the COMMENTs, and then
> convert the contents to XML nodes.
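As a rough illustration of that quoted line-scan approach (sketched in Python rather than XQuery, and assuming DDL exactly as regular as the example below — real files with COMMENTs or multi-line definitions would need more cases):

```python
import re
import xml.etree.ElementTree as ET

# Sample input in the shape described in this thread.
DDL = """\
CREATE TABLE `mytable1` (
  `FIELD1` xxx DEFAULT NULL,
  `FIELD2` xxx DEFAULT NULL,
  `FIELD3` xxx DEFAULT NULL
)
"""

def ddl_to_xml(text):
    """Scan line by line: open a <table> at each CREATE TABLE,
    emit one <field> per column line, close at the ')' line."""
    root = ET.Element("tables")
    table = None
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"CREATE TABLE `([^`]+)`", line)
        if m:
            table = ET.SubElement(root, "table", name=m.group(1))
        elif table is not None:
            f = re.match(r"`([^`]+)`\s+(.*?),?$", line)
            if f:
                field = ET.SubElement(table, "field", name=f.group(1))
                field.text = f.group(2)  # the column's type/default text
            elif line.startswith(")"):
                table = None  # end of this CREATE TABLE block

    return root

root = ddl_to_xml(DDL)
print(ET.tostring(root, encoding="unicode"))
```

This already gives every FIELDx its own element, so the later DocBook step would not have to pick fields out of a text blob.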
Yes, the scenario is really that simple. I get files with roughly 40 to 100
CREATE TABLEs each and have to transform them into XML files (subsequently I
have to transform those XML files into DocBook entity files, where XQuery
would come into play anyway).

I tried to import those CREATE TABLEs into MySQL and to export the resulting
database as XML. Unfortunately, MySQL only exports down to the
<table name="mytable1"> tag level, with the CREATE TABLE statement as the
element's value:

<pma:table name="mytable1">
CREATE TABLE `mytable1` (
  `FIELD1` xxx DEFAULT NULL,
  `FIELD2` xxx DEFAULT NULL,
  `FIELD3` xxx DEFAULT NULL
)
</pma:table>

Maybe I'm missing something with the XML export feature of MySQL. I'm not an
XQuery expert... maybe that export is already enough to generate the DocBook
entity file with XQuery. Is it possible to take apart tag values with XQuery
so that every 'FIELDx' gets its own entry after the transformation?

@Ihe Onwuka: Using DB2 is not an option. I just get those files. The best I
can do is to use MySQL.

Best regards
Michael
_______________________________________________
firstname.lastname@example.org
http://x-query.com/mailman/listinfo/talk