Markus Schaber wrote:

Hi, John,

John Arbash Meinel wrote:



I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion values each month (possibly more).



If that 1 billion is spread perfectly evenly over the whole month, then you
need 1e9 / 30 / 24 / 3600 ≈ 386 transactions per second.



I hope that he does not use one transaction per inserted row.

In our in-house tests, we saw a speedup factor of up to a few hundred by
bundling many rows into each insert transaction. The best throughput came
from batches of a few thousand rows per transaction, with about 5
processes running in parallel.
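
For illustration, the batching pattern could look roughly like this in
Python with psycopg2 (the samples table, its columns, and the connection
string are made-up placeholders; the commit-per-batch structure is the
point):

import psycopg2
from psycopg2.extras import execute_values

def insert_batched(conn, rows, batch_size=5000):
    """Commit once per batch of a few thousand rows instead of once
    per row, so each commit (and its fsync) covers many rows."""
    with conn.cursor() as cur:
        for i in range(0, len(rows), batch_size):
            batch = rows[i:i + batch_size]
            # execute_values expands the batch into a single multi-row
            # INSERT ... VALUES (...), (...), ... statement.
            execute_values(
                cur,
                "INSERT INTO samples (ts, metric, value) VALUES %s",
                batch,
                page_size=batch_size,
            )
            conn.commit()

conn = psycopg2.connect("dbname=monitoring")  # placeholder DSN
insert_batched(conn, [("2005-01-01 00:00:00", "load_avg", 0.42)] * 10000)

Each of the ~5 parallel workers would run this loop over its own
connection; for pure bulk loading, COPY would be faster still.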


You're right. I guess it just depends on how the data comes in, and what
you can do at the client end. That is why I suggested putting a machine in
front that gathers up the information and then does a batch update. If
your clients can do this directly, then you get the same advantage.
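
A rough sketch of that front-end gatherer (the class name, thresholds, and
threading details are my illustrative assumptions, not anything from the
thread): buffer incoming values and hand them off in one batch whenever
the buffer fills up or a maximum delay passes.

import threading
import time

class BatchCollector:
    def __init__(self, flush_fn, max_rows=5000, max_delay=1.0):
        self.flush_fn = flush_fn    # e.g. insert_batched from above
        self.max_rows = max_rows    # flush when this many rows are buffered
        self.max_delay = max_delay  # ... or after this many seconds
        self.buffer = []
        self.lock = threading.Lock()
        self.last_flush = time.monotonic()

    def add(self, row):
        rows = None
        with self.lock:
            self.buffer.append(row)
            if (len(self.buffer) >= self.max_rows
                    or time.monotonic() - self.last_flush >= self.max_delay):
                rows, self.buffer = self.buffer, []
                self.last_flush = time.monotonic()
        if rows:
            # Flush outside the lock so producers are never blocked
            # by the database round trip.
            self.flush_fn(rows)

Whether this buffer lives on a separate machine or inside the client
process, the database only ever sees large transactions.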



John
=:->

