>> Splitting the input and parsing it top-level group at a time, and sort
>> of building the hash hierarchy myself, I can keep the memory usage
>> down.

> This sounds like another good practical solution to me.
Another reason this appeals to me is that you could split the file up and then run parslet on each segment in multiple processes, using something like 'procrastinate'. That would distribute the parsing over multiple cores, giving you a parallel speedup!
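As a rough illustration of the idea, here is a minimal sketch using only the Ruby standard library: it splits the input into top-level chunks, forks one worker process per chunk, and collects each result back through a pipe. The chunk format (lines starting with "group") and the `parse_group` stub standing in for a real parslet parse are both hypothetical, just to keep the example self-contained.

```ruby
# Hypothetical input format: each top-level group starts at column zero
# with the word "group". Split the text into one string per group.
def split_top_level(text)
  text.scan(/^group\b.*?(?=^group\b|\z)/m)
end

# Stub for the real parslet parse of one segment; returns a small hash
# so there is something concrete to send back to the parent process.
def parse_group(chunk)
  { name: chunk[/^group\s+(\S+)/, 1], lines: chunk.lines.size }
end

def parallel_parse(text)
  chunks = split_top_level(text)
  readers = chunks.map do |chunk|
    r, w = IO.pipe
    fork do                              # one worker process per chunk
      r.close
      Marshal.dump(parse_group(chunk), w) # ship the result to the parent
      w.close
    end
    w.close
    r
  end
  results = readers.map { |r| Marshal.load(r) } # read in submission order
  Process.waitall                               # reap the workers
  results
end
```

A process-management library like 'procrastinate' would replace the hand-rolled fork/pipe plumbing here, but the shape of the work is the same: independent segments in, one hash per segment out.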
