Guys,

Thanks for all the great feedback.  Still processing it, but here are
some things we will try.

RelStorage - we will try it in our app context to see whether it helps
or hurts, and will report back results.  Quick tests show some
improvement.  We will also look at tuning our current ZEO setup.  Last
time I looked, the only tunable I found was the invalidation queue.  I
poked around a bit for tuning docs and didn't see any.  Can someone
point me to them?
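For reference, the two ZEO knobs I know of are the server-side
invalidation queue and the client cache.  A minimal sketch of both, with
made-up values (the addresses, sizes, and cache name are assumptions to
tune for your workload, not recommendations):

```
# Server side (zeo.conf):
<zeo>
  address 8100
  # Larger queue lets reconnecting clients verify their caches
  # instead of flushing them; default is 100.
  invalidation-queue-size 1000
</zeo>

# Client side (zope.conf / zconfig storage section):
<zeoclient>
  server localhost:8100
  cache-size 200MB
  # Naming the client enables a persistent on-disk cache file,
  # so the cache survives restarts.
  client mycache
</zeoclient>
```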

zc.catalogqueue - we are adding fairly full indexing to our objects
(they aren't text pages, so it isn't truly a "full" index).  Hopefully
moving indexing out to a separate process will keep the impact of the
new index low and help with our current conflict issues.
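The pattern we're after can be sketched generically: writers append
cheap, conflict-free entries to a queue, and a single dedicated
processor is the only thing that touches the index.  This is just an
in-memory illustration of the idea, not zc.catalogqueue's actual API
(`DeferredIndexer`, `queue_update`, and `process` are invented names):

```python
import queue


class DeferredIndexer:
    """Buffer index updates and apply them later from one worker,
    so concurrent writers never write to the index itself."""

    def __init__(self):
        self.pending = queue.Queue()
        self.index = {}  # stand-in for a real catalog index

    def queue_update(self, oid, values):
        # Called from request threads: a cheap append, no index writes,
        # hence no write conflicts on the index data structures.
        self.pending.put((oid, values))

    def process(self, limit=100):
        # Called periodically by a single dedicated process/thread,
        # which is the only writer to the index.
        done = 0
        while done < limit:
            try:
                oid, values = self.pending.get_nowait()
            except queue.Empty:
                break
            self.index[oid] = values
            done += 1
        return done


indexer = DeferredIndexer()
indexer.queue_update(1, {"title": "a"})
indexer.queue_update(2, {"title": "b"})
processed = indexer.process()  # applies both queued updates
```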

Our slow-loading object was a persistent object with a regular list
inside the main pickle.  The objects the list pointed to were
themselves persistent, which I believe means they will load
separately.  In general we have tried to make our persistent objects
reasonably large to reduce the number of load round trips.  I haven't
actually checked its size yet, but it will be interesting to see.
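The "loads separately" behavior can be demonstrated with the stdlib
pickle hooks that ZODB builds on: a Pickler's persistent_id replaces
persistent sub-objects with just their OID, so the main record stays
small and each referenced object is a separate load when unpickled.
This is a sketch of the mechanism, not ZODB's actual serializer
(`PersistentRef` and the loader callback are stand-ins):

```python
import io
import pickle


class PersistentRef:
    """Stand-in for a persistent sub-object identified by an OID."""
    def __init__(self, oid):
        self.oid = oid


class OIDPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # Store persistent sub-objects as bare OIDs instead of
        # inlining their state; returning None pickles normally.
        if isinstance(obj, PersistentRef):
            return obj.oid
        return None


class OIDUnpickler(pickle.Unpickler):
    def __init__(self, file, loader):
        super().__init__(file)
        self.loader = loader

    def persistent_load(self, oid):
        # Each OID triggers a separate load (a round trip, in ZODB).
        return self.loader(oid)


# A "main" object whose regular list holds persistent sub-objects.
main = {"items": [PersistentRef(1), PersistentRef(2)]}

buf = io.BytesIO()
OIDPickler(buf).dump(main)
loaded = OIDUnpickler(io.BytesIO(buf.getvalue()),
                      loader=lambda oid: PersistentRef(oid)).load()
```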

-EAD



On Dec 4, 2009, at 6:21 PM, Shane Hathaway wrote:

> Jim Fulton wrote:
>> I find this a bit confusing.  For the warm numbers, it looks like  
>> ZEO didn't
>> utilize a persistent cache, which explains why the ZEO numbers are  
>> the
>> same for hot and cold. Is that right?
>
> Yes.  It is currently difficult to set up ZEO caches, which I  
> consider an issue with this early version of zodbshootout.   
> zodbshootout does include a sample test configuration that turns on  
> a ZEO cache, but it's not possible to run that configuration with a  
> concurrency level greater than 1.
>
>> What poll interval are you using for relstorage in the tests?
>> Assuming an application gets reasonable cache hit rates, I don't  
>> see any
>> meaningful difference between ZEO and relstorage in these numbers.
>
> You are entitled to your opinion. :-)  Personally, I have observed a  
> huge improvement for many operations.
>
>>>> Second, does the test still write and then read roughly the same
>>>> amount of data as before?
>>> That is a command line option.  The chart on the web page shows  
>>> reading and
>>> writing 1000 small persistent objects per transaction,
>> Which is why I consider this benchmark largely moot. The database is
>> small enough
>> to fit in the server's disk cache.  Even the slowest access times are
>> on the order of .5
>> milliseconds. Disk accesses are typically measured in 10s of
>> milliseconds.  With magnetic
>> disks, for databases substantially larger than the server's ram, the
>> network component
>> of loading objects will be noise compared to the disk access.
>
> That's why I think solid state disk is already a major win,  
> economically, for large ZODB setups.  The FusionIO cards in  
> particular are likely to be at least as reliable as any disk.  It's  
> time to change the way we think about seek time.
>
> Shane
>

_______________________________________________
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev