David Worrall wrote:
> I'm working w. a dynamic dataset of some 3500 tables - each table
> grows sequentially. Total data ~= 5GB
Sorry, I've no comparisons for you, but what kind of data is it? If it's mostly numbers and short strings, PyTables + numpy could be very fast (a rough sketch of what that looks like is after my sig).

MySQL has a reputation for speed, but interestingly, we got better performance from Durus (a pure-Python object persistence system) for a read-only dataset, though not one as big as yours.

Beware the pitfalls of benchmarking -- I think you're probably going to need to build a prototype that does the kinds of things you actually need to do, and test a few systems yourself (there's a tiny timing harness below, too).

Good luck,

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception
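P.S. Here's a minimal PyTables sketch of the kind of thing I mean -- just a sketch, using current PyTables (3.x) names; the row layout (Reading), file name, and column types are placeholders you'd adapt to your own data:

    import numpy as np
    import tables

    # Fixed-width row layout: this is what makes PyTables fast
    # for "numbers and short strings"
    class Reading(tables.IsDescription):
        timestamp = tables.Float64Col()
        value = tables.Float32Col()
        label = tables.StringCol(16)   # short, fixed-width string

    h5 = tables.open_file("dataset.h5", mode="w")
    grp = h5.create_group("/", "series", "one table per series")
    tbl = h5.create_table(grp, "t0001", Reading)

    # Appending grows the table sequentially -- your access pattern
    row = tbl.row
    for i in range(1000):
        row["timestamp"] = float(i)
        row["value"] = i * 0.5
        row["label"] = "sample"
        row.append()
    tbl.flush()

    # Reads come back as a numpy structured array
    data = tbl.read()
    print(np.mean(data["value"]))
    h5.close()

Since HDF5 gives you one group/table per series inside a single file, 3500 sequentially-growing tables map onto it pretty naturally.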
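P.P.S. And a dead-simple timing harness for that prototype -- again only a sketch; append_rows/insert_rows are hypothetical stand-ins for whatever operations you're actually comparing:

    import time

    def bench(label, fn, repeats=5):
        # Run fn() a few times and report the best wall-clock time;
        # best-of-N damps out cache warmup and background noise.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - start)
        print("%s: %.3f s (best of %d)" % (label, best, repeats))

    # e.g.:
    # bench("pytables append", lambda: append_rows(tbl, chunk))
    # bench("mysql insert",    lambda: insert_rows(cursor, chunk))

The key thing is to drive each candidate system with the same operations, in the same mix, that your real workload will use.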