I have been doing constant-space processing of large XML files using iteratees for several months. The file contains many XML elements (which are a bit more complex than a number). An element can be processed independently. After the parser finishes with one element and dumps the related data, the processing of the next element starts anew, so to speak. No significant state is accumulated across the overall parse, aside from counters of processed and bad elements, kept for statistics. XML is somewhat like JSON, only more complex: an XML parser has to deal with namespaces, parsed entities, CDATA sections and other interesting stuff. Therefore, I'm quite sure there should be no fundamental problems with constant-space parsing of JSON.
BTW, the parser itself is described here: http://okmij.org/ftp/Streams.html#xml
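
To make the shape of the thing concrete, here is a minimal, self-contained sketch of the idea, not the actual parser from the link: a toy iteratee-style consumer receives already-parsed elements one at a time, processes each independently, and the only state it carries across elements is a pair of counters. The Element type and the "bad element" test are made up for illustration.

{-# LANGUAGE BangPatterns #-}
module Main where

-- A stream delivers one element at a time, then EOF.
data Stream a = Chunk a | EOF

-- An iteratee is either finished with a result, or waiting for more input.
data Iter a b = Done b | Cont (Stream a -> Iter a b)

-- Hypothetical element type; a real parser produces something richer.
data Element = Element { elemBody :: String }

-- Count processed and bad elements. Per-element work (dumping the related
-- data, etc.) would happen where the counters are bumped; no other state
-- accumulates, so space stays constant regardless of how many elements
-- the file contains.
countElems :: Iter Element (Int, Int)
countElems = go 0 0
  where
    go !ok !nok = Cont $ \s -> case s of
      EOF -> Done (ok, nok)
      Chunk e
        | null (elemBody e) -> go ok (nok + 1)   -- "bad" element
        | otherwise         -> go (ok + 1) nok

-- Feed a (lazily produced) sequence of elements to the iteratee.
runIter :: Iter a b -> [a] -> b
runIter (Done b) _        = b
runIter (Cont k) []       = runIter (k EOF) []
runIter (Cont k) (x : xs) = runIter (k (Chunk x)) xs

main :: IO ()
main = print (runIter countElems [Element "payload", Element ""])  -- (1,1)

In the real setting the element stream comes from an incremental XML (or JSON) parser rather than a list, but the consumer side looks just like this.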