On 15.04.2010 at 20:52, Jim Fulton <> wrote:

> On Tue, Apr 13, 2010 at 8:42 PM, Nitro <> wrote:
>> On 14.04.2010 at 04:08, Laurence Rowe <> wrote:
>>> Running your test script on my small amazon EC2 instance on linux
>>> takes between 0.0 and 0.04 seconds (I had to remove the divide by
>>> total to avoid a zero division error). 0.02 is 5000/s.
>> I don't know how EC2 works in detail, but 5000 transactions per second
>> sound impossible to write to disk. Even 500 are impossible if your disk
>> doesn't have VERY fast access times.
> Unlike most other databases, ZODB records written to file storages are
> always appended, so there is no seeking involved. The only seeking
> involved in writes is that needed to read previous records, but if a
> test is simply writing the same object over and over, or updating a
> small corpus, the previous record is likely to be in disk cache. Of
> course, other things happening on the system will typically cause the
> disk heads to seek away from the end of the database file, but you're
> unlikely to see that in a simpler benchmark.

Well, I am no hard disk expert, but from what I know the heads are moving
relative to the platter all the time, because the platter is spinning.
With a 7200 rpm hard drive you get 120 revolutions per second. This means
that even if the head does not move at all, you may have to wait up to
8.3 ms until the desired write position on the track comes around under
the head. So you can still see up to 8.3 ms of effective "seek" time per
write. That's why I think 5000 true fsynced writes per second are
impossible.
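A quick back-of-the-envelope sketch of that arithmetic (assuming, as a
worst case, that every fsynced write has to wait one full rotation; a
real drive averages about half a rotation):

```python
# Rotational latency bound for a 7200 rpm disk.
rpm = 7200
revs_per_second = rpm / 60                      # 120 revolutions per second
worst_latency_s = 1 / revs_per_second           # ~8.3 ms: full rotation missed
avg_latency_s = worst_latency_s / 2             # ~4.2 ms: half a rotation on average

# Ceiling on truly fsynced writes per second under each assumption.
max_fsyncs_worst = 1 / worst_latency_s          # 120 writes/s
max_fsyncs_avg = 1 / avg_latency_s              # 240 writes/s

print(f"worst-case rotational latency: {worst_latency_s * 1000:.1f} ms")
print(f"fsyncs/s, worst case: {max_fsyncs_worst:.0f}")
print(f"fsyncs/s, average case: {max_fsyncs_avg:.0f}")
```

Either way the bound is two orders of magnitude below 5000/s, so a result
that high on spinning disks almost certainly means the writes were
buffered somewhere rather than durably synced.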


P.S.: When the _commit is doing its job, my disk makes an awful lot of
noise. So if I hammer it with 50 transactions per second, I'm inclined to
think the disk will die sooner than in buffered mode. I'm wondering
whether the _commit even has a detrimental effect on the durability of my
data...
ZODB-Dev mailing list