Hi Don,
damn, that was quick!
And thanks, I'll look into that. Reading the data in wasn't much of a
problem; I was able to use MS ODBC for that, since there's an ODBC
driver for CSV files. The problem is more the kind of data structure
I'd be reading it into. In SQL I would have the data indexed by several
different columns; with maps I'd only have one key, so if I need to
look up data in the map by a value that is not the key, the lookups
become quite expensive.
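
For example (the Person record and its columns are made up just to
illustrate), a lookup by the map's key is cheap, but querying by any
other column means scanning the whole map, unless I maintain a second
map as an index by hand:

import qualified Data.Map as M

data Person = Person { personId :: Int, city :: String }
    deriving Show

-- One map, indexed by a single column.
byId :: [Person] -> M.Map Int Person
byId ps = M.fromList [ (personId p, p) | p <- ps ]

-- Cheap: O(log n) lookup by the map's key.
findById :: Int -> M.Map Int Person -> Maybe Person
findById = M.lookup

-- Expensive: O(n) scan when querying by a non-key column.
findByCity :: String -> M.Map Int Person -> [Person]
findByCity c = filter ((== c) . city) . M.elems

-- A second map over the same rows, playing the role of
-- another SQL index.
byCity :: [Person] -> M.Map String [Person]
byCity ps = M.fromListWith (++) [ (city p, [p]) | p <- ps ]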
Any suggestions? What do you do in these cases?
Günther
Don Stewart wrote:
gue.schmidt:
Hi,
is the above-mentioned book still *the* authority on the subject?
I bought the book, read about 10 pages and then put it back on the
shelf. Um.
In my app I have to deal with 4 CSV files, each between 5 and 10 MB,
and some static data.
I had put all that data into an SQLite3 database and used SQL on it.
But as the requirements keep changing, the SQL becomes a bit messy. I
guess we've all had that experience.
So I'm wondering whether this book has clues on how to do my querying
and handling of moderately large data in a more Haskellish way, so
that I can drop the SQL.
Use the fine libraries on http://hackage.haskell.org.
E.g. parse with bytestring-csv, then load that into a finite map?
These days it is rare to have to roll your own new data structures...
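
A minimal sketch of that approach (this assumes bytestring-csv's
Text.CSV.ByteString.parseCSV; the choice of the first column as the
key is made up for the example):

import qualified Data.ByteString.Char8 as B
import qualified Data.Map as M
import Text.CSV.ByteString (parseCSV)

-- Parse a CSV file and index its rows by their first column.
-- On duplicate keys, later rows win; adjust to taste.
loadIndexed :: FilePath -> IO (M.Map B.ByteString [B.ByteString])
loadIndexed path = do
    bytes <- B.readFile path
    case parseCSV bytes of
        Nothing   -> error ("malformed CSV in " ++ path)
        Just rows -> return (M.fromList [ (k, v) | (k:v) <- rows ])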
-- don