Been there, tried that. Pretty slow for big data sets. Though for read-only workloads, if you set up their in-memory table you can slice and dice the data pretty effectively once you've heaved it off the disk. For my use case, querying the data is not a priority.
For what I'm doing (trading backtesting) the data sets are just too big for conventional DBs. If exotically expensive corporate solutions aren't on the menu, the consensus seems to be that serialising to file is the answer. There's also a lot to be said for keeping it simple and fail-safe.
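To illustrate the serialise-to-file approach, here's a minimal sketch using NumPy: dump tick data as a flat binary file, then memory-map it back so you can read slices without loading the whole thing into RAM. The filename, row layout, and sample values are all hypothetical, not from any particular setup.

```python
import numpy as np

# Hypothetical tick data: rows of (timestamp, price, volume) as float64.
ticks = np.array([
    [1_700_000_000.0, 101.25, 500.0],
    [1_700_000_001.0, 101.30, 250.0],
    [1_700_000_002.0, 101.20, 750.0],
])

# Raw binary dump: no header, no schema, just the bytes. Simple and
# fast, but you must remember the dtype and row width yourself.
ticks.tofile("ticks.bin")

# Reload via memory map: the OS pages data in on demand, so even a
# file far larger than RAM can be sliced cheaply for a backtest.
loaded = np.memmap("ticks.bin", dtype=np.float64, mode="r").reshape(-1, 3)
prices = loaded[:, 1]  # price column, read lazily from disk
```

The trade-off is that a raw dump carries no metadata, so the dtype and column layout have to live in your code (or a small sidecar file), but for append-mostly backtesting data that simplicity is often exactly the point.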
