[ZODB-Dev] fixing far-future timestamps (patch)
Hi all - I'm one of the unfortunates who managed to break a Data.fs when migrating a ZEO backend to new hardware. Unfortunately, I missed the 'CRITICAL' error logged by ZEO (aside: is there a fail_on_critical option somewhere?) and ended up with transaction ids that parse as timestamps in the year 4732 (or thereabouts). This caused surprisingly few issues for our Zope/Plone install. ZODB itself didn't care; it just started using incremental tids. The biggest impact was that packing no longer had any effect. (Versions are Zope 2.9.10, ZODB 3.6.3.)

Long story short, I dug into the code for FileStorage and patched copyTransactionsFrom to detect and correct future timestamps. This approach was inspired by the existing code that detects and corrects out-of-order transactions, as well as the FileStorage __init__ code that detects future timestamps. I've attached a patch. I've successfully used this in a small Python script to correct my problem, reading one FileStorage and writing a new one.

I thought I should send this here, for comment. Is this something that should go into mainline ZODB?

Ross

--
Ross Reedstrom, Ph.D.                           reeds...@rice.edu
Systems Engineer & Admin, Research Scientist    phone: 713-348-6166
The Connexions Project  http://cnx.org          fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE

--- BaseStorage.py.orig	2009-05-28 12:17:31.0 -0500
+++ BaseStorage.py	2009-05-28 12:51:34.0 -0500
@@ -414,16 +414,24 @@
                     print ('Time stamps back in order %s' % (t))
                     ok=1
+            t=TimeStamp(tid)
+            t_now = time.time()
+            t_now = TimeStamp(*time.gmtime(t_now)[:5] + (t_now % 60,))
+            if t > t_now:
+                print ('Time stamp from the future, resetting: %s' % (t))
+                tid = None
+
             if verbose:
                 print _ts
             self.tpc_begin(transaction, tid, transaction.status)
+            tid=self._tid
             for r in transaction:
                 oid=r.oid
                 if verbose:
                     print oid_repr(oid), r.version, len(r.data)
                 if restoring:
-                    self.restore(oid, r.tid, r.data, r.version,
+                    self.restore(oid, tid, r.data, r.version,
                                  r.data_txn, transaction)
                 else:
                     pre=preget(oid, None)

___
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/
ZODB-Dev mailing list - ZODB-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zodb-dev
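For readers without the patch context, the future-timestamp check can be sketched in plain Python. This is a pure-stdlib sketch, not the patch itself; the 8-byte tid layout below (first 4 bytes packing year/month/day/hour/minute, last 4 bytes a fractional minute) follows ZODB's TimeStamp convention as I understand it, so treat the encoding details as an assumption:

```python
import struct
import time

def encode_tid(year, month, day, hour, minute, second=0.0):
    # Assumed ZODB TimeStamp layout: a big-endian minute counter since
    # 1900-01-01 in the first 4 bytes, seconds-as-fraction-of-a-minute
    # scaled to 2**32 in the last 4 bytes.
    v = ((year - 1900) * 12 + (month - 1)) * 31 + (day - 1)
    v = (v * 24 + hour) * 60 + minute
    frac = int((second % 60) / 60.0 * (2 ** 32))
    return struct.pack(">II", v, frac)

def decode_tid_minutes(tid):
    # Minutes since 1900-01-01 under the layout above (fraction ignored).
    v, _frac = struct.unpack(">II", tid)
    return v

def tid_is_future(tid, slack_minutes=24 * 60):
    # Flag tids more than `slack_minutes` ahead of the current wall clock,
    # mirroring the patch's "t > t_now" test (slack is my addition, to
    # tolerate clock skew between machines).
    now = time.gmtime()
    now_tid = encode_tid(now.tm_year, now.tm_mon, now.tm_mday,
                         now.tm_hour, now.tm_min)
    return decode_tid_minutes(tid) > decode_tid_minutes(now_tid) + slack_minutes
```

A year-4732 tid trips the check, while any historical tid passes; the real patch then sets `tid = None` so tpc_begin allocates a fresh, current tid.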
Re: [ZODB-Dev] fixing far-future timestamps (patch)
Hmm, I seem to not be receiving email from this list, even though I signed up. Anyway, saw this via the archives.

Would a doctest in FileStorage be o.k.? The change is in BaseStorage, but the only case I've experienced uses FileStorage (and I can leverage the tests in there for out-of-order tids ...)

And what's the preferred patch-submission format?

Ross

On Thu, Jun 04, 2009 at 01:13:04PM -0400, Jim Fulton wrote:
> On Jun 4, 2009, at 12:04 PM, Ross J. Reedstrom wrote:
> > I've successfully used this in a small python script to correct my
> > problem, reading one FileStorage and writing a new one. I thought I
> > should send this here, for comment. Is this something that should go
> > into mainline ZODB?
>
> +1, with tests. :)
>
> Jim
> --
> Jim Fulton
> Zope Corporation
Re: [ZODB-Dev] fixing far-future timestamps (patch)
On Tue, Jun 09, 2009 at 06:29:38PM -0400, Jim Fulton wrote:
> On Jun 9, 2009, at 4:19 PM, Ross J. Reedstrom wrote:
> > Hmm, I seem to not be receiving email from this list, even though I
> > signed up. Anyway, saw this via the archives. Would a doctest in
> > FileStorage be o.k.?
>
> Yup.
>
> > The change is in BaseStorage, but the only case I've experienced uses
> > FileStorage (and I can leverage the tests in there for out-of-order
> > tids ...)
>
> Base storage was a mistake. :/

I assume you mean for my patch? I put it next to the other code that also corrects tids by the same means: setting it to None. Walking the code, I don't really see anywhere else to do this, especially keeping it out of the main path: the next stop is tpc_begin, and that'd be right out.

> > And what's the preferred patch-submission format?
>
> *I* prefer svn branches. :) Are you a contributor? Otherwise, a
> launchpad submission, as Tres suggested, is fine.

*checks his zope.org account* 2000? Had it really been that long? Wow. I've got an account, but sadly never completed the contributor agreement (or at least, I don't recall if I have). Hmm, perhaps I should fill one out ...

Ross
Re: [ZODB-Dev] ZEO and RelStorage performance
On Tue, Oct 13, 2009 at 06:30:31PM -0600, Shane Hathaway wrote:
> Laurence Rowe wrote:
> > Shane's earlier benchmarks show MySQL to be the fastest RelStorage
> > backend:
> > http://shane.willowrise.com/archives/relstorage-10-and-measurements/
>
> Yep, despite my efforts to put PostgreSQL on top. :-) It seems that
> PostgreSQL has more predictable performance and behavior, while MySQL
> wins slightly in raw performance once every surprisingly slow query
> has been optimized.

The usual wisdom on that is that MySQL is faster at raw table reading, PostgreSQL at concurrency, especially with any writing thrown in. Often, the performance curves 'cross' at some point. Could be 16 in this case. For my druthers, I just _trust_ PostgreSQL a lot more: the one MySQL DB I use on an ongoing basis is my MythTV media PC, and once a month or so I have to repair a table. I've used PostgreSQL in professional high-load production for years and never had a corruption issue, even when running out of disk! (That's the main culprit in my experience with MySQL, since full is the default state for a PVR.)

<snip caching discussion>

> This leads to an interesting question. Memcached or ZEO cache--which
> is better? While memcached has a higher minimum performance penalty,
> it also has a lower maximum penalty, since memcached hits never have
> to wait for disk. Also, memcached can be shared among processes, there
> is a large development community around memcached, and memcached
> creates opportunities for developers to be creative with caching
> strategies.

Shared caches: this is the main reason I've been looking at RelStorage. We're running many Zope FEs against one ZEO right now, and due to the nature of the load-balancer, we're seeing little gain from the caches. I'm looking to fix that issue, to some extent, but sharing across all the FEs on one box would be a big win, I'm sure. So I'm inclined to stick with memcached even though the ZEO cache numbers look better.

Ross
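The tradeoff Shane describes — a fast private cache per process versus a slower cache shared by every FE — can be sketched as a toy two-tier lookup. All names here are hypothetical, and a plain dict stands in for memcached; a real deployment would use a memcached client and the ZEO/RelStorage load path:

```python
class TwoTierCache:
    """Per-process local cache backed by a shared store (memcached stand-in)."""

    def __init__(self, shared, local_size=1000):
        self.shared = shared    # shared across processes, survives FE restarts
        self.local = {}         # private to this process: fastest, but cold
        self.local_size = local_size

    def get(self, key, load):
        # 1. Cheapest: the per-process cache (analogous to the ZEO cache).
        if key in self.local:
            return self.local[key]
        # 2. Shared tier: slower than local memory, but warm for every FE.
        value = self.shared.get(key)
        if value is None:
            # 3. Full miss: hit the backing storage (ZEO server / RDBMS)
            #    and populate the shared tier for the other processes.
            value = load(key)
            self.shared[key] = value
        if len(self.local) < self.local_size:
            self.local[key] = value
        return value
```

The point of the sketch: once any one FE loads an object, every other FE gets a shared-tier hit instead of a storage round-trip — which is exactly the gain a naive load-balancer throws away when each FE keeps only a private cache.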
Re: [ZODB-Dev] ZEO and RelStorage performance
On Wed, Oct 14, 2009 at 02:20:50PM -0400, Benji York wrote:
> On Wed, Oct 14, 2009 at 1:08 PM, Ross J. Reedstrom reeds...@rice.edu wrote:
> > shared caches: this is the main reason I've been looking at
> > RelStorage: we're running many Zope FEs against one ZEO right now,
> > and due to the nature of the load-balancer, we're seeing little gain
> > from the caches. I'm looking to fix that issue, to some extent, but
> > sharing across all the FEs on one box would be a big win, I'm sure.
>
> For similar reasons I've been considering various affinity approaches
> lately. Most people are familiar with session affinity, but I'm
> thinking of something more like data affinity. Instead of having a big
> cache that is shared in order to increase the chance of a request's
> data being in the cache, you would instead have many smaller caches
> (just like ZEO works now) and send the requests to the process(es)
> that are most likely to have the appropriate data in their cache.

We're actually set up with squid in front of the Zope FEs, using ICP to talk to them all. The default behavior is just to respond with a CACHE MISS and use network access timings to select. This is non-optimal, since it does little true load-balancing until the FE is completely hammered (very non-linear response-time curve). I'd love to see an example where someone replaced the default response with something more meaningful.

The shoal that replying 'HIT' for ZEO-cached data breaks on is that the ICP request contains a URL, not ZODB object refs, and converting one to the other is what the whole dang machine _does_. And you have very little time to answer that ICP query, lest you destroy the gains you hope to get from having a hot cache. So some ad-hoc approximation, like keeping the last couple hundred URLs served and responding 'HIT' for those, might get some part of the benefit.

This is probably the wrong list for it, but does anyone know of a published example of replacing Zope's default ICP-server response? Last time I looked, I couldn't find one.

Ross
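The ad-hoc approximation mentioned above — remember the last couple hundred URLs served and answer ICP 'HIT' only for those — could look roughly like this. This is a hypothetical sketch of just the decision logic; the class name is invented, real ICP handling involves wire-format details omitted here, and the assumption is that a recently served URL means the objects behind it are still warm in that FE's ZEO cache:

```python
from collections import OrderedDict

class RecentURLTracker:
    """Remember the last N URLs served; answer ICP probes 'HIT' for those."""

    def __init__(self, capacity=200):
        self.capacity = capacity
        self.recent = OrderedDict()   # ordered oldest-served to newest

    def record(self, url):
        # Called when a response is actually served, i.e. when the objects
        # behind this URL have just been pulled into the ZEO cache.
        self.recent.pop(url, None)    # refresh recency if already present
        self.recent[url] = True
        if len(self.recent) > self.capacity:
            self.recent.popitem(last=False)   # evict least recently served

    def icp_reply(self, url):
        # 'HIT' steers squid to this FE; 'MISS' lets it pick another one.
        # Must be cheap: a dict lookup, no ZODB traversal.
        return "ICP_OP_HIT" if url in self.recent else "ICP_OP_MISS"
```

The key design constraint from the discussion above is answer speed: the probe handler never touches the ZODB, it only consults a bounded in-memory map, so replying costs microseconds rather than a traversal.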