David Johnson wrote:
We're looking at 10-100 billion tx per year stored and performed.
Partly I'm trying to gauge where the dividing line is between using the ZODB
and not, and also estimate how many server instances should be running.
I'm not sure any transactional database will handle that sort of rate with
a single database. We did some tests 2 years ago and with commodity
hardware, we were able to commit around 50 simple transactions per second (tps)
to a file storage over ZEO. Your target, 100 billion tx per year or about
3000 tps, is roughly 60 times that rate. I imagine you could do somewhat
better than that if you got beefier hardware.
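The back-of-envelope arithmetic is simple enough to write down (assuming a 365-day year; the server names and workload figures are just the ones from this thread):

```python
# Required sustained throughput for a given yearly transaction volume.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def required_tps(tx_per_year):
    """Average transactions per second needed to sustain tx_per_year."""
    return tx_per_year / SECONDS_PER_YEAR

low = required_tps(10e9)     # low end of the stated range, ~317 tps
high = required_tps(100e9)   # high end, ~3171 tps
measured = 50                # tps we measured committing to FileStorage over ZEO
ratio = high / measured      # ~63x, i.e. "about 60 times" the measured rate
```

Note this is an *average* rate; if load is bursty, peak throughput requirements will be higher still.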
A *quick* google on transaction rates yielded a fairly old article:
At that time, most of the databases tested on Unix did less than 100 tps.
Many of them did much less. Of course, that was a long time ago.
Does anyone know of more recent data?
Of course, if you can segregate your data, you can get higher
transaction rates by employing multiple database servers, ZODB
or otherwise. I'm fairly confident that this is what you'll
need to do.
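A minimal sketch of what that segregation might look like: hash a natural
partition key (an account ID, say) to pick one of several independent database
servers, so each server only sees its slice of the traffic. The server
addresses and key names here are purely illustrative, not anything from an
actual deployment:

```python
import zlib

# Hypothetical pool of independent database servers (ZEO or otherwise).
SERVERS = ["zeo1:8100", "zeo2:8100", "zeo3:8100", "zeo4:8100"]

def server_for(key):
    """Map a partition key to one server, stably.

    crc32 is used (rather than hash()) so the mapping is the same
    across processes and Python runs.
    """
    return SERVERS[zlib.crc32(key.encode("utf-8")) % len(SERVERS)]
```

With N servers, each one needs to sustain only about 1/N of the total rate,
assuming keys distribute evenly; the catch is that transactions spanning two
partitions are no longer covered by a single database's transaction guarantees.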
Jim Fulton mailto:[EMAIL PROTECTED] Python Powered!
CTO (540) 361-1714 http://www.python.org
Zope Corporation http://www.zope.com http://www.zope.org
Zope3-users mailing list