Re: [ZODB-Dev] RFC: database ids
On 15.08.2013 00:49, Vincent Pelletier wrote:

On 15 August 2013 00:09, Jim Fulton j...@zope.com wrote: Comments?

Please make the database ID reachable wherever _p_oid is reachable (maybe on _p_jar; I don't mind a few levels of attribute lookups or trivial calls). I would love to be able to uniquely identify an object without risking collisions in multi-mountpoint setups, including in cases where traversal may not provide a canonical path (e.g. the CMF Skinnable kind of indirection behind the back of acquisition). I don't recall use cases that require also being able to retrieve the object from those coordinates (probably none exist; a path suits that better), but at least to use (db_id, _p_oid) as a cache key.

-- Vincent Pelletier

___ For more information about ZODB, see http://zodb.org/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev

+1 to both the original proposal and Vincent's comments.

Cheers, Jürgen

-- XLhost.de ® - Webhosting von supersmall bis eXtra Large XLhost.de GmbH Jürgen Herrmann, Geschäftsführer Boelckestrasse 21, 93051 Regensburg, Germany Geschäftsführer: Jürgen Herrmann Registriert unter: HRB9918 Umsatzsteuer-Identifikationsnummer: DE245931218 Fon: +49 (0)800 XLHOSTDE [0800 95467833] Fax: +49 (0)800 95467830 Web: http://www.XLhost.de
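The cache-key idea above can be sketched in plain Python. This is a sketch, not part of the proposal: the per-database id attribute discussed in this thread does not exist yet, so the existing `obj._p_jar.db().database_name` (set when a multi-database is configured) stands in for it, and the oids here are hand-made 8-byte values.

```python
import struct

def oid_hex(oid_bytes):
    """Render an 8-byte ZODB oid in the conventional 0x... form."""
    return hex(struct.unpack(">Q", oid_bytes)[0])

def cache_key(db_name, oid_bytes):
    """Composite key that stays unique across mount points: the oid
    alone can collide between databases, the (db id, oid) pair cannot."""
    return (db_name, oid_bytes)

# A per-process cache keyed by (database id, oid).  With ZODB the two
# components would come from obj._p_jar.db().database_name and
# obj._p_oid; "main" and "mount" are illustrative names.
cache = {}
same_oid = b"\x00" * 7 + b"\x01"
cache[cache_key("main", same_oid)] = "object from main"
cache[cache_key("mount", same_oid)] = "object from mount"
assert len(cache) == 2  # same oid, different databases: no collision
assert oid_hex(same_oid) == "0x1"
```

The point of the pair is exactly the multi-mountpoint case Vincent describes: two mounted databases can each hand out oid 0x1, so any cache keyed by oid alone silently mixes them up.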
Re: [ZODB-Dev] make ZODB as small and compact as expected
On 22.07.2013 13:27, Jim Fulton wrote:

On Sun, Jul 21, 2013 at 12:12 AM, Christian Tismer tis...@stackless.com wrote:

This is my last emission for tonight. I would be using ZODB as a nice little package if it was one. There should be nothing else but ZODB.some_package. Instead, there is: BTrees, persistent, transaction, zc.lockfile, zc.zlibstorage, ZConfig, zdaemon, ZEO, ZODB, ZODB3 (zlibstorage), zope.interface, and whatever I might have forgotten. Exception: there is also zodbpickle, which I think is very useful and general-purpose, and I want to keep it; I will also try to push it into standard CPython. So, while all the packages are not really large, there are too many namespaces touched, and things like Zope Enterprise Objects are not meant to be here as open-source-pretending modules which the user never asked for.

Despite its tech-bubbleish acronym expansion, which few people are aware of, ZEO is the standard client-server component of ZODB, is widely used, and is certainly open source.

I think these things could be re-packed into a common namespace and be made simpler.

If ZODB had been born much later, it would certainly have used a namespace package. Now, it would be fairly disruptive to change it.

Even zope.interface could be removed from this intended-to-be user-friendly simple package.

I don't understand what you're saying. It's a dependency of ZODB.

So while the amount of code is astonishingly small, the amount of abstraction layering tells the reader that this was never really meant to be small. And this makes average, simple-minded users like me shy away and go back to simpler modules like Durus. But the latter has serious other pitfalls, which made me want to re-package ZODB into something small, pretty, tool-ish and versatile for the pocket. Actually I'm trying to re-map ZODB to the simplistic Durus interface, without its shortcomings and lack of support.
I think a successfully down-scaled, isolated package with ZODB's great implementation but a more user-oriented interface would help ZODB a lot to get widely accepted and incorporated into very many projects. Right now people are just too concerned about an implied complexity which actually does not exist. I volunteer to start such a project. Proposing the name david, as opposed to goliath.

ZODB is an old project that has accumulated some cruft over the years, however:

- I've tried to simplify it and, with the exception of ZEO, I think it's pretty straightforward.
- ZODB is used by a lot of people with varying needs and tastes. The fact that it is pretty modular has allowed a lot of useful customizations.
- I'm pretty happy with the layered storage architecture.
- With modern package-installation tools like buildout and pip, having lots of dependencies shouldn't be a problem. ZODB uses lots of packages that have uses outside of ZODB. I consider this a strength, not a weakness. Honestly, I have no interest in catering to users who don't use buildout, or pip, or easy_install.
- The biggest thing ZODB needs right now is documentation. Unfortunately, this isn't easy. There is zodb.org, but much better documentation is needed.

Very much agreed. The most important thing to mention here is IMO the use of virtualenv, which keeps your Python installation's site-packages clean. I don't mind if a certain package I need pulls in any number of dependencies, as they are resolved automagically. @Chris: Maybe you could explain a bit more why that bothers you?

Which brings me to the second most important thing here: documentation. As you seem to be starting a new project using ZODB, maybe you could come up with a "Getting started with ZODB in your own project" or so, which would (as I picture it) include a step-by-step walkthrough from zero to writing some simple Persistent classes and using them.
That would make your life easier in the long run, as steps written down correctly tend to be very easy to reproduce. On the other hand, the community would benefit from a nice piece of documentation. Is there a ZODB wiki (didn't find one) where we could gather this stuff? I've come across a whole bunch of useful code snippets that I use seldom but which are very, very useful then. For example: "How do I find/retrieve an object with oid 0xYYY?", "How do I copy transactions starting with tid xxx from one ZODB to another?" I keep these in my private CMS, where they are obviously only useful to me. Time for a ZODB wiki?!

Cheers! Jürgen
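The "find an object by oid 0xYYY" snippet asked for above mostly boils down to converting the integer form to the 8-byte string ZODB uses. A minimal sketch of that conversion (ZODB ships the same helpers as ZODB.utils.p64/u64; this standalone version is for illustration):

```python
import struct

def p64(n):
    """Pack an integer oid/tid into the 8-byte big-endian form ZODB
    uses internally (same contract as ZODB.utils.p64)."""
    return struct.pack(">Q", n)

def u64(b):
    """Inverse of p64 (same contract as ZODB.utils.u64)."""
    return struct.unpack(">Q", b)[0]

# With an open ZODB connection at hand, the lookup itself is one call:
#     obj = connection.get(p64(0xfd19b5))
# The round trip below just exercises the packing logic.
assert u64(p64(0xfd19b5)) == 0xfd19b5
assert p64(0) == b"\x00" * 8  # oid 0x00 is the root object's oid
```

`connection.get(oid)` loads the object with that oid regardless of whether it is reachable by traversal, which is exactly what you want for this kind of forensic snippet.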
Re: [ZODB-Dev] zodb conversion questions
On 07.02.2013 17:11, Jim Fulton wrote:

On Thu, Feb 7, 2013 at 10:48 AM, Jürgen Herrmann juergen.herrm...@xlhost.de wrote:

On 06.02.2013 15:05, Jürgen Herrmann wrote:

Hi there! I have a RelStorage with MySQL backend that grew out of bounds and we're looking into different backend solutions now. Possibly also going back to FileStorage and using ZEO... Anyway, we'll have to convert the databases at some point. As these are live DBs we cannot shut them down for longer than the usual maintenance interval during the night, so for maybe 2-3h. A full conversion process will never complete in this time, so we're looking for a process that can split the conversion into two phases:

1. Copy transactions from a backup of the source DB to the destination DB. This can take a long time, we don't care. Note the last timestamp/transaction_id converted.
2. Shut down the source DB.
3. Copy transactions from the source DB to the destination DB, starting at the last converted transaction_id. This should be fast, as only a few transactions need to be converted, say 1%.

If I reimplemented copyTransactionsFrom() to accept a start transaction_id/timestamp, would this result in dest being an exact copy of source?

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source)
    last_txn_id = source.lastTransaction()
    source.close()
    dest.close()

    source = open_my_source_storage()
    # add some transactions
    source.close()

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source, last_txn_id=last_txn_id)
    source.close()
    dest.close()

I will reply to myself here :) This actually works, tested with a modified version of FileStorage for now. I modified the signature of copyTransactionsFrom to look like this:

    def copyTransactionsFrom(self, source, verbose=0, not_before_tid=None):

``start`` would be better, to be consistent with the iterator API.
not_before_tid is a packed tid or None, None meaning copy all (the default, so no existing API usage would break). Is there public interest in modifying this API permanently?

+.1

This API is a bit of an attractive nuisance. I'd rather people learn how to use iterators in their own scripts, as they are very useful and powerful. This API just hides that. The second part, replaying old transactions, is a bit more subtle, but it's still worth it for people to be aware of it. If I were doing this today, I'd make this documentation rather than API. But then, documentation ... whimper.

Anybody want to look at the actual code changes?

Sure, if they have tests. Unfortunately, we can only accept pull requests from Zope contributors. Are you one? Wanna be one? :) Jim

1. SUCCESS. I migrated to FileStorage/ZEO successfully with the modified copyTransactionsFrom(), with a downtime of only 10 minutes. Very cool :) DB sizes are reasonable again and repozo backups are set up, just like before the migration to RelStorage. Feels robust again, a good feeling. I'd advise anybody thinking about migrating to RelStorage to think about a proper (incremental) backup strategy for the underlying SQL DB first; this might become crucial! repozo makes your life easy there when using FileStorage.

2. Regarding the tests: I did not find any test for copyTransactionsFrom() in the current test suite, am I blind? If there isn't any test yet, maybe you could suggest a proper file for the test, maybe even a similar one to start from?

3. Regarding my request to have this in ZODB 3.10.5: after reading my own statement I quickly realized that changing an API can only happen in a major version.
Anyway, this was a one-shot action so I'll probably never need my own code again :)

best regards, Jürgen
Re: [ZODB-Dev] zodb conversion questions
On 06.02.2013 15:05, Jürgen Herrmann wrote:

Hi there! I have a RelStorage with MySQL backend that grew out of bounds and we're looking into different backend solutions now. Possibly also going back to FileStorage and using ZEO... Anyway, we'll have to convert the databases at some point. As these are live DBs we cannot shut them down for longer than the usual maintenance interval during the night, so for maybe 2-3h. A full conversion process will never complete in this time, so we're looking for a process that can split the conversion into two phases:

1. Copy transactions from a backup of the source DB to the destination DB. This can take a long time, we don't care. Note the last timestamp/transaction_id converted.
2. Shut down the source DB.
3. Copy transactions from the source DB to the destination DB, starting at the last converted transaction_id. This should be fast, as only a few transactions need to be converted, say 1%.

If I reimplemented copyTransactionsFrom() to accept a start transaction_id/timestamp, would this result in dest being an exact copy of source?

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source)
    last_txn_id = source.lastTransaction()
    source.close()
    dest.close()

    source = open_my_source_storage()
    # add some transactions
    source.close()

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source, last_txn_id=last_txn_id)
    source.close()
    dest.close()

I will reply to myself here :) This actually works, tested with a modified version of FileStorage for now. I modified the signature of copyTransactionsFrom to look like this:

    def copyTransactionsFrom(self, source, verbose=0, not_before_tid=None):

not_before_tid is a packed tid or None, None meaning copy all (the default, so no existing API usage would break). Is there public interest in modifying this API permanently? Anybody want to look at the actual code changes?
best regards, Jürgen Herrmann
Re: [ZODB-Dev] zodb conversion questions
@jim, resent to the list, sorry.

On 07.02.2013 17:11, Jim Fulton wrote:

On Thu, Feb 7, 2013 at 10:48 AM, Jürgen Herrmann juergen.herrm...@xlhost.de wrote:

On 06.02.2013 15:05, Jürgen Herrmann wrote:

Hi there! I have a RelStorage with MySQL backend that grew out of bounds and we're looking into different backend solutions now. Possibly also going back to FileStorage and using ZEO... Anyway, we'll have to convert the databases at some point. As these are live DBs we cannot shut them down for longer than the usual maintenance interval during the night, so for maybe 2-3h. A full conversion process will never complete in this time, so we're looking for a process that can split the conversion into two phases:

1. Copy transactions from a backup of the source DB to the destination DB. This can take a long time, we don't care. Note the last timestamp/transaction_id converted.
2. Shut down the source DB.
3. Copy transactions from the source DB to the destination DB, starting at the last converted transaction_id. This should be fast, as only a few transactions need to be converted, say 1%.

If I reimplemented copyTransactionsFrom() to accept a start transaction_id/timestamp, would this result in dest being an exact copy of source?

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source)
    last_txn_id = source.lastTransaction()
    source.close()
    dest.close()

    source = open_my_source_storage()
    # add some transactions
    source.close()

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source, last_txn_id=last_txn_id)
    source.close()
    dest.close()

I will reply to myself here :) This actually works, tested with a modified version of FileStorage for now. I modified the signature of copyTransactionsFrom to look like this:

    def copyTransactionsFrom(self, source, verbose=0, not_before_tid=None):

``start`` would be better, to be consistent with the iterator API.
This was my first approach, though for my use case it would be misleading, as the code roughly looks like this:

    if tid <= not_before_tid:
        continue

and it excludes the given tid from the transactions re-stored. Maybe we can come up with a better name, but "start" doesn't nail it :)

not_before_tid is a packed tid or None, None meaning copy all (the default, so no existing API usage would break). Is there public interest in modifying this API permanently?

+.1 This API is a bit of an attractive nuisance. I'd rather people learn how to use iterators in their own scripts, as they are very useful and powerful. This API just hides that.

Not sure I understand this correctly, maybe you could elaborate a bit more? For my use case you'd suggest I just use the storage iterator and walk/re-store the transactions in my own code? There's a lot of checking and branching going on inside copyTransactionsFrom(), that's why I asked if this would work in the first place.

The second part, replaying old transactions, is a bit more subtle, but it's still worth it for people to be aware of it. If I were doing this today, I'd make this documentation rather than API. But then, documentation ... whimper.

Anybody want to look at the actual code changes?

Sure, if they have tests. Unfortunately, we can only accept pull requests from Zope contributors. Are you one? Wanna be one? :)

I'll look at the supplied test and see if I can make my test script a proper test case for the test suite. Shouldn't be too hard. We'll decide about the contributor stuff after that :) BTW, I need this to be in the ZODB version the current Zope2 uses, is this one on GitHub already? If so, where can I find it? Even if I don't become a contributor this would make generating patches much easier.

Thanks for your help!
Jürgen
Re: [ZODB-Dev] Relstorage and over growing database.
On 07.02.2013 20:22, Shane Hathaway wrote:

On 02/06/2013 04:23 AM, Jürgen Herrmann wrote: I think this is not entirely correct. I ran into problems several times when new_oid was emptied! Maybe Shane can confirm this? (Results in read conflict errors.)

Ah, that's true. You do need to replicate new_oid.

Then I'd like to talk a little about my current RelStorage setup here: it's backed by MySQL, history-preserving setup. Recently one of our DBs started to grow very quickly, and its object_state.ibd (InnoDB) file is just over 86GB as of today. Packing now fails due to MySQL not being able to complete sorts in the object_ref table. object_ref is also very big (36GB MYD file, 25GB MYI file). I took a backup of the DB and let zodbconvert convert it back to a FileStorage; the resulting file is 6GB (!). I will pack it and see how big it is then. I will also investigate how big on disk this DB would be when stored in PostgreSQL.

This situation poses another problem for us: using zodbconvert to convert this mess to a FileStorage takes just over an hour when writing to a ramdisk. I suspect converting to Postgres will take more than 10 hours, which is unacceptable for us, as this is a live database and cannot be offline for more than 2-3 hours in the night. So we will have to investigate a special zodbconvert that uses a two-step process:

1. Import transactions to the new storage from a MySQL DB backup.
2. Import the rest of the transactions that occurred after the backup was made from the live database (which is offline for that time, of course).

Looking at zodbconvert using copyTransactionsFrom(), I think this should be possible, but up to now I did not investigate further. Maybe Shane could confirm this? Maybe this could also be transformed into a neat way of getting incremental backups out of ZODBs in general?

Yes, that could work. As for MySQL growing tables without bounds... well, that wouldn't surprise me very much.
I know that's entirely not your fault, but it may be worth mentioning in the docs. RelStorage with MySQL works *very* well for DB sizes up to 5GB or so; above that - not so much :/

That issue has given me some sleepless nights, especially because the conversion step to another storage type takes quite a long time. But in less than two hours I came up with a workable solution today, maybe see the other messages on the list regarding that issue. I LOVE OPEN SOURCE. I LOVE PYTHON. :)

best regards, Jürgen
Re: [ZODB-Dev] Relstorage and over growing database.
On 07.02.2013 21:18, Jürgen Herrmann wrote:

On 07.02.2013 20:22, Shane Hathaway wrote:

On 02/06/2013 04:23 AM, Jürgen Herrmann wrote: I think this is not entirely correct. I ran into problems several times when new_oid was emptied! Maybe Shane can confirm this? (Results in read conflict errors.)

Ah, that's true. You do need to replicate new_oid.

Then I'd like to talk a little about my current RelStorage setup here: it's backed by MySQL, history-preserving setup. Recently one of our DBs started to grow very quickly, and its object_state.ibd (InnoDB) file is just over 86GB as of today. Packing now fails due to MySQL not being able to complete sorts in the object_ref table. object_ref is also very big (36GB MYD file, 25GB MYI file). I took a backup of the DB and let zodbconvert convert it back to a FileStorage; the resulting file is 6GB (!). I will pack it and see how big it is then. I will also investigate how big on disk this DB would be when stored in PostgreSQL.

This situation poses another problem for us: using zodbconvert to convert this mess to a FileStorage takes just over an hour when writing to a ramdisk. I suspect converting to Postgres will take more than 10 hours, which is unacceptable for us, as this is a live database and cannot be offline for more than 2-3 hours in the night. So we will have to investigate a special zodbconvert that uses a two-step process:

1. Import transactions to the new storage from a MySQL DB backup.
2. Import the rest of the transactions that occurred after the backup was made from the live database (which is offline for that time, of course).

Looking at zodbconvert using copyTransactionsFrom(), I think this should be possible, but up to now I did not investigate further. Maybe Shane could confirm this? Maybe this could also be transformed into a neat way of getting incremental backups out of ZODBs in general?

Yes, that could work. As for MySQL growing tables without bounds... well, that wouldn't surprise me very much.
I know that's entirely not your fault, but it may be worth mentioning in the docs. RelStorage with MySQL works *very* well for DB sizes up to 5GB or so; above that - not so much :/

Also for the docs: on disk, RelStorage/MySQL uses 4x the size of a FileStorage with the same contents. As the packing tables are filled, this grows by another factor of ~2. If you don't pack very regularly you might quickly end up with DBs whose sheer size no longer permits packing.

best regards, Jürgen

That issue has given me some sleepless nights, especially because the conversion step to another storage type takes quite a long time. But in less than two hours I came up with a workable solution today, maybe see the other messages on the list regarding that issue. I LOVE OPEN SOURCE. I LOVE PYTHON. :)

best regards, Jürgen
[ZODB-Dev] zodb conversion questions
Hi there! I have a RelStorage with MySQL backend that grew out of bounds and we're looking into different backend solutions now. Possibly also going back to FileStorage and using ZEO... Anyway, we'll have to convert the databases at some point. As these are live DBs we cannot shut them down for longer than the usual maintenance interval during the night, so for maybe 2-3h. A full conversion process will never complete in this time, so we're looking for a process that can split the conversion into two phases:

1. Copy transactions from a backup of the source DB to the destination DB. This can take a long time, we don't care. Note the last timestamp/transaction_id converted.
2. Shut down the source DB.
3. Copy transactions from the source DB to the destination DB, starting at the last converted transaction_id. This should be fast, as only a few transactions need to be converted, say 1%.

If I reimplemented copyTransactionsFrom() to accept a start transaction_id/timestamp, would this result in dest being an exact copy of source?

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source)
    last_txn_id = source.lastTransaction()
    source.close()
    dest.close()

    source = open_my_source_storage()
    # add some transactions
    source.close()

    source = open_my_source_storage()
    dest = open_my_destination_storage()
    dest.copyTransactionsFrom(source, last_txn_id=last_txn_id)
    source.close()
    dest.close()

thanks in advance and best regards, Jürgen Herrmann
Re: [ZODB-Dev] Build compression into ZODB 3.11?
On 20.03.2012 15:27, Jim Fulton wrote:

On Thu, Mar 15, 2012 at 11:09 AM, Jim Fulton j...@zope.com wrote: On Wed, Mar 14, 2012 at 1:47 PM, Jim Fulton j...@zope.com wrote: ... At some point soonish, I'll do some tests against a large database.

On a database with 180 million records, 150 million of which are compressible:

          Compressed size        Compress time    Uncompress time
          (% of uncompressed)    (microseconds)   (microseconds)
    ----  -------------------    --------------   ---------------
    zlib  38                     96               12.5
    lz4   52                     7.4              1.6

lz4 is an order of magnitude faster than zlib; however, lz4-compressed records were 36% larger. For me, I don't think the speedup is worth the loss of compression. Jim

Do you have an estimate of how much of the whole store CPU time is used for compression?

best regards, Jürgen
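Jürgen's question about the per-record cost of compression can be approximated with a stdlib micro-benchmark. This is a rough, machine-dependent sketch in the spirit of the numbers quoted above, not Jim's actual harness; the sample record is made up.

```python
import time
import zlib

# A synthetic, fairly compressible "persistent object state".
record = b"some fairly repetitive persistent object state " * 64

t0 = time.perf_counter()
for _ in range(1000):
    compressed = zlib.compress(record)
t1 = time.perf_counter()
for _ in range(1000):
    zlib.decompress(compressed)
t2 = time.perf_counter()

ratio = 100.0 * len(compressed) / len(record)
# 1000 iterations in seconds -> microseconds per record.
print("compress:   %.1f us/record" % ((t1 - t0) * 1e3))
print("uncompress: %.1f us/record" % ((t2 - t1) * 1e3))
print("size:       %.1f%% of uncompressed" % ratio)
assert len(compressed) < len(record)  # repetitive data compresses well
```

Comparing the per-record compress time against the total time of a store() on your own hardware is the most direct way to answer "what fraction of store CPU time is compression?" for a given workload.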
Re: [ZODB-Dev] Build compression into ZODB 3.11?
On 14.03.2012 18:47, Jim Fulton wrote:

I'm pretty happy with how zc.zlibstorage has worked out. Should I build this into ZODB 3.11? BTW, lz4 compression looks interesting. The Python binding (at least from PyPI) is broken. I submitted an issue. Hopefully it will be fixed. Jim

+1

best regards, Jürgen
Re: [ZODB-Dev] RelStorage pack with history-free storage results in POSKeyErrors
    ._dostoreNP(oid, data=data)
      File "/var/buildout-eggs/ZODB3-3.9.6-py2.6-linux-i686.egg/ZODB/tests/StorageTestBase.py", line 202, in _dostoreNP
        return self._dostore(oid, revid, data, 1, user, description)
      File "/var/buildout-eggs/ZODB3-3.9.6-py2.6-linux-i686.egg/ZODB/tests/StorageTestBase.py", line 190, in _dostore
        r1 = self._storage.store(oid, revid, data, '', t)
      File "/home/zope/relstorage_co/relstorage/storage.py", line 565, in store
        cursor, self._batcher, oid_int, prev_tid_int, data)
      File "/home/zope/relstorage_co/relstorage/adapters/mover.py", line 453, in mysql_store_temp
        command='REPLACE',
      File "/home/zope/relstorage_co/relstorage/adapters/batch.py", line 67, in insert_into
        self.flush()
      File "/home/zope/relstorage_co/relstorage/adapters/batch.py", line 74, in flush
        self._do_inserts()
      File "/home/zope/relstorage_co/relstorage/adapters/batch.py", line 110, in _do_inserts
        self.cursor.execute(stmt, tuple(params))
      File "/var/buildout-eggs/MySQL_python-1.2.3-py2.6-linux-i686.egg/MySQLdb/cursors.py", line 174, in execute
        self.errorhandler(self, exc, value)
      File "/var/buildout-eggs/MySQL_python-1.2.3-py2.6-linux-i686.egg/MySQLdb/connections.py", line 36, in defaulterrorhandler
        raise errorclass, errorvalue
    OperationalError: (1153, "Got a packet bigger than 'max_allowed_packet' bytes")

    Ran 269 tests with 0 failures and 2 errors in 38.398 seconds.
    Tearing down left over layers: Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
    Total: 420 tests, 0 failures, 2 errors in 58.929 seconds.

This is a little weird, as I have max_allowed_packet set to 16M. Should these tests fail? That said, I don't think this has anything to do with the packing bug, as I didn't see any exceptions or, in fact, any logging or output at all from zodbpack, and the only other exceptions seen were the POSKeyErrors...

cheers, Chris

I also had to up max_allowed_packet to 32M to make the tests work.
best regards, Jürgen

-- XLhost.de ® - Webspace von supersmall bis eXtra Large XLhost.de GmbH Jürgen Herrmann, Geschäftsführer Boelckestrasse 21, 93051 Regensburg, Germany Geschäftsführer: Jürgen Herrmann Registriert unter: HRB9918 Umsatzsteuer-Identifikationsnummer: DE245931218 Fon: +49 (0)800 XLHOSTDE [0800 95467833] Fax: +49 (0)800 95467830 Web: http://www.XLhost.de

___ For more information about ZODB, see the ZODB Wiki: http://www.zope.org/Wikis/ZODB/ ZODB-Dev mailing list - ZODB-Dev@zope.org https://mail.zope.org/mailman/listinfo/zodb-dev
[ZODB-Dev] question about object invalidation after ReadConflictErrors
Hi there! I wrongly posted a bug to the ZODB bug tracker on Tuesday: https://bugs.launchpad.net/zodb/+bug/707332. As it turns out, that bug report was moot; it didn't fix my problem. Out of curiosity, why does the exception raised inside the loop effectively break the loop after the first object invalidated? My debug code showed that sometimes ~20 objects were in _readCurrent. Was this a side effect of the RelStorage bug involved, or can this happen more frequently? If so, why not invalidate all objects in _readCurrent before re-raising the ConflictError?

best regards, Jürgen
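The behaviour asked about above can be sketched in isolation. This is a stdlib-only illustration of the *suggested* alternative (invalidate every stale object, then raise once), not the actual ZODB.Connection code; the names `read_current`, `committed_tids`, and `invalidate` are illustrative stand-ins.

```python
class ReadConflictError(Exception):
    pass

def check_read_current(read_current, committed_tids, invalidate):
    """Instead of raising on the first stale object (which leaves the
    rest of _readCurrent stale for the retried transaction), collect
    *every* object whose committed serial moved, invalidate them all,
    then raise once."""
    stale = [oid for oid, serial in read_current.items()
             if committed_tids.get(oid, serial) != serial]
    if stale:
        for oid in stale:
            invalidate(oid)  # drop the cached copy so a retry reloads it
        raise ReadConflictError(stale)

invalidated = []
read_current = {"oid1": 10, "oid2": 20, "oid3": 30}
committed = {"oid1": 11, "oid3": 31}  # two objects changed under us
try:
    check_read_current(read_current, committed, invalidated.append)
except ReadConflictError:
    pass
assert sorted(invalidated) == ["oid1", "oid3"]  # all stale objects, not just the first
```

With the raise-inside-the-loop variant the mail describes, only the first stale oid would be invalidated per retry, so a transaction with ~20 stale entries could need many retries before all of them are flushed.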
[ZODB-Dev] ReadConflictErrors with ZODB3.10.1 and Relstorage 1.4.1
hi everybody! About a week ago i migrated our FileStorages to RelStorage instances. We have 5 databases, 4 of them mounted via ZodbMountPoints in a plain zope 2.13 installation. Since the migration i experience sporadic ReadConflictErrors (they occur about 1-4 hours after restarting the zope daemon). These ReadConflictErrors are not resolved by the ZPublisher retries, instead all of them finally hit the SiteErrorLog after 3 retries. Here's a traceback of such an exception (the final one): 2011-01-26T12:55:59 ERROR Zope.SiteErrorLog 1296042959.440.268001912558 https://new.xlhost.de:456/InstanceEditor/index_html Traceback (innermost last): Module Zope2.App.startup, line 197, in __call__ Module ZPublisher.Publish, line 134, in publish Module Zope2.App.startup, line 301, in commit Module transaction._manager, line 95, in commit Module transaction._transaction, line 329, in commit Module transaction._transaction, line 443, in _commitResources Module ZODB.Connection, line 599, in commit ReadConflictError: database read conflict error (oid 0xfd19b5, serial this txn started with 0x038bd509f31cdf33 2011-01-25 10:49:56.979558, serial currently committed 0x038bdad75eed2344 2011-01-26 11:35:22.248356) Please note that actually with plain ZODB 3.10.1 it's Connection.py line 570, but i added some logging code there for debugging purposes. It took me ages to find the cause for that problem and i think i found it by now, but not what causes this. The cause for the ReadConflictErrors is that the connection object has the oid 0xfd19b5 in its _readCurrent dict and never throws it out. i'm pretty sure the transaction above never read the object with oid 0xfd19b5, so either it's wrongly added to _readCurrent or (much more likely imho) it never gets removed from _readCurrent though it should. 
if i execute the following code in the debugger, the read conflicts concerning that oid are gone (please note that this happens in a new request, so i'm operating on the reused Connection object. _readCurrent is obviously reused and not cleared at transaction boundaries, is that expected?): from ZODB.utils import p64, u64 del object._p_jar._readCurrent[p64(0xfd19b5)] import transaction; transaction.commit() I'm sorry I cannot give you a minimal example to reproduce this error. That's also what makes it so difficult to debug, it's not reproducible or at least i don't know how to provoke those errors yet. Does anybody have a clue what's going on here, any pointers in which direction to look next? Best regards, Jürgen
Re: [ZODB-Dev] ReadConflictErrors with ZODB3.10.1 and Relstorage 1.4.1
On Wed, 26 Jan 2011 07:08:14 -0700, Shane Hathaway sh...@hathawaymix.org wrote: On 01/26/2011 05:29 AM, Jürgen Herrmann wrote: _readCurrent is obviously reused and not cleared at transaction boundaries, is that expected?): No! Thanks for the great analysis. This insight is key. RelStorage has a monkey patch of the Connection.sync() method, which has not changed in a long time, so the monkey patch seemed safe enough. Well, sync() changed in ZODB 3.10, but the monkey patch didn't change along with it. Sigh... sorry. I've checked in a fix in Subversion. Please try it out. I need to look at the possible pack issue recently reported before we make a release. Shane Just installed RelStorage trunk instead of 1.4.1, we'll see... I'll comment again in a couple of hours, as my last zodb bug report / attempted fix was a complete flop - posted too early :) thanks very much for your help though! best regards, jürgen
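The failure mode Shane describes is generic: a monkey patch freezes in the behaviour of the method it replaces, and when upstream later adds a step to that method, the patch silently drops it. A minimal self-contained sketch of exactly this trap (class and attribute names are illustrative, not RelStorage's actual code):

```python
class Connection:
    """Stand-in for ZODB.Connection, tracking current-revision checks."""
    def __init__(self):
        self._readCurrent = {}

    def sync(self):
        # Suppose the newer upstream version clears _readCurrent here
        # at the transaction boundary (illustrative of the 3.10 change).
        self._readCurrent.clear()

def patched_sync(self):
    """A patch written against the OLD sync(), which had nothing to clear."""
    pass  # the clearing step is missing -> stale entries survive

conn = Connection()
conn._readCurrent[b'\x00' * 8] = b'serial'
Connection.sync = patched_sync   # the monkey patch shadows the fixed method
conn.sync()
stale = len(conn._readCurrent)   # the entry leaks into the next transaction
```

With the patch applied, `stale` is 1 instead of 0: precisely the "oid never thrown out of _readCurrent" symptom reported above.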
Re: [ZODB-Dev] ReadConflictErrors with ZODB3.10.1 and Relstorage 1.4.1
On Wed, 26 Jan 2011 07:08:14 -0700, Shane Hathaway sh...@hathawaymix.org wrote: On 01/26/2011 05:29 AM, Jürgen Herrmann wrote: _readCurrent is obviously reused and not cleared at transaction boundaries, is that expected?): No! Thanks for the great analysis. This insight is key. RelStorage has a monkey patch of the Connection.sync() method, which has not changed in a long time, so the monkey patch seemed safe enough. Well, sync() changed in ZODB 3.10, but the monkey patch didn't change along with it. Sigh... sorry. I've checked in a fix in Subversion. Please try it out. I need to look at the possible pack issue recently reported before we make a release. Shane as of now the zope2 daemon has been running for a little over 5h with no read conflict errors, looking good... is the suspected packing bug only affecting 1.5.x or also 1.4? i upgraded to the trunk version now, should i stop packing until it's tested/released?
Re: [ZODB-Dev] RelStorage pack with history-free storage results in POSKeyErrors
On Wed, 26 Jan 2011 21:54:52 +, Chris Withers ch...@simplistix.co.uk wrote: On 26/01/2011 21:05, Shane Hathaway wrote: On 01/26/2011 11:52 AM, Chris Withers wrote: On 26/01/2011 14:08, Shane Hathaway wrote: I've checked in a fix in Subversion. Please try it out. I need to look at the possible pack issue recently reported before we make a release. Where is this pack issue documented/discussed? See the discussion here with Anton Stonor. We are still only hypothesizing that there's a bug. Well, my case matches that case pretty exactly... MySQL 5.1, RelStorage 1.4.0... Anton, were you using a history-free storage? Also, does RelStorage have a bug tracker anywhere? Not yet. The need for one has not been clear until very recently. RelStorage is turning into a community project and every community project needs a bug tracker. I suggest we use Launchpad. Meh, I'm no fan of Launchpad, I vastly prefer Trac, but it's not my project and I'll use whatever you choose more than happily ;-) That may be related, but first, are you mounting databases? You have to be careful with mounted databases and packing. This exact setup has been used with FileStorages served over ZEO for getting on for a decade now... Does RelStorage do anything different in this area? (in the affected project, I'm fairly certain there are no cross-database references...) cheers, Chris is there a script or some example code to search for cross db references? i'm also eager to find out... for now i disabled my packing cronjobs. 
jürgen
[ZODB-Dev] testing a mvcc storage
hi all! what tests do i have to run in order to ensure full working state of an mvcc storage? how do i run these tests? i have the latest zodb 3.9 beta in a virtualenv here. thanks, jürgen herrmann
[ZODB-Dev] increasing order of oids?
hi there! is there a strict requirement for oids to be increasing? having read a ton of code my head is not so sure anymore, though i can't remember seeing such a requirement anywhere in the code. if RadosStorage could pre-allocate oids in batches of thousands, that would probably give me a big performance win, or to put it the other way round: rados is not designed to be used for such time-critical and at the same time completely synced operations. and getting new oids will happen often... best regards, jürgen
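The batching idea sketched in the message above is easy to make concrete: reserve a contiguous range of oids in one backend round-trip, then hand them out locally until the batch is exhausted. Within one client the oids stay strictly increasing; with several clients each holding its own batch, oids are still unique but not globally allocated in commit order, which is the crux of the "must oids increase?" question. A minimal sketch (the `reserve_batch` callback stands in for the hypothetical RADOS round-trip):

```python
import threading

class BatchedOIDAllocator:
    """Hands out oids from a locally held batch; one backend round-trip
    per `batch_size` allocations instead of one per object."""
    def __init__(self, reserve_batch, batch_size=1000):
        self._reserve = reserve_batch   # returns the first oid of a fresh batch
        self._size = batch_size
        self._next = 0
        self._remaining = 0
        self._lock = threading.Lock()

    def new_oid(self):
        with self._lock:
            if self._remaining == 0:
                self._next = self._reserve(self._size)
                self._remaining = self._size
            oid = self._next
            self._next += 1
            self._remaining -= 1
            return oid

calls = []
def reserve_batch(n):
    # stands in for one synchronous round-trip reserving n consecutive oids
    start = len(calls) * n
    calls.append(n)
    return start

alloc = BatchedOIDAllocator(reserve_batch, batch_size=1000)
oids = [alloc.new_oid() for _ in range(2500)]   # only 3 backend round-trips
```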
[ZODB-Dev] zodb design document
hi there! is there a zodb design document? what i'm interested in are the following things: - some basic description of how the zodb works (f.ex. i don't understand what the difference between a serial and a transaction id is)? - looking at BaseStorage and the methods that have to be implemented, is there documentation on what the reimplementation in a concrete subclass has to do exactly (ordering of things, desired side effects etc.)? why do i ask? i'd like to create a RadosStorage zodb backend on top of ceph's underlying object store RADOS (see http://ceph.newdream.net/blog/category/rados/ ) thanks in advance for your replies! best regards, jürgen herrmann
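On the serial vs. transaction id question: both are 8-byte big-endian integers, and in the common (non-undo, non-conflict-resolution) case an object's serial simply is the id of the last transaction that wrote it. A toy model of that relationship, using the same `p64`/`u64` helpers ZODB.utils provides (this is the conceptual picture, not real storage code):

```python
import struct

def p64(n: int) -> bytes:
    """Pack an int into an 8-byte big-endian id, like ZODB.utils.p64."""
    return struct.pack('>Q', n)

def u64(b: bytes) -> int:
    """Unpack an 8-byte big-endian id back into an int."""
    return struct.unpack('>Q', b)[0]

class ToyStorage:
    def __init__(self):
        self._serials = {}   # oid -> serial of the last write
        self._tid = 0

    def commit(self, oids):
        """Every object written in a transaction gets that transaction's
        id as its new serial."""
        self._tid += 1
        tid = p64(self._tid)
        for oid in oids:
            self._serials[oid] = tid
        return tid

store = ToyStorage()
tid = store.commit([p64(1), p64(2)])
# store._serials[p64(1)] == tid: serial == tid of the last modifying txn
```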
Re: [ZODB-Dev] RelStorage MySQL - StorageError: Unable to acquire commit lock
On Wed, August 12, 2009 22:23, Rudá Porto Filgueiras wrote: I began to use RelStorage in a production site with Plone 2.5. Everything was running without failures since 01 august 2009. But today, after a failure in tpc_abort, all instances connected to MySQL can't acquire the commit lock. Follow the tpc_abort traceback: 2009-08-12T14:12:08 ERROR txn.1115806016 Error in tpc_abort() on manager MultiObjectResourceAdapter for ZODB.DB.TransactionalUndo object at 0x210f6790 at 654293328 Traceback (most recent call last): File /usr/local/zope/agecom-virtual/eggs/ZODB3-3.7.3_polling-py2.4-linux-x86_64.egg/transaction/_transaction.py, line 533, in _cleanup rm.tpc_abort(self) File /usr/local/zope/agecom-virtual/eggs/ZODB3-3.7.3_polling-py2.4-linux-x86_64.egg/transaction/_transaction.py, line 628, in tpc_abort self.manager.tpc_abort(txn) File /usr/local/zope/agecom-virtual/eggs/ZODB3-3.7.3_polling-py2.4-linux-x86_64.egg/ZODB/BaseStorage.py, line 194, in tpc_abort self._abort() File /usr/local/zope/agecom-virtual/eggs/RelStorage-1.2.0b2-py2.4.egg/relstorage/relstorage.py, line 710, in _abort self._rollback_load_connection() File /usr/local/zope/agecom-virtual/eggs/RelStorage-1.2.0b2-py2.4.egg/relstorage/relstorage.py, line 166, in _rollback_load_connection self._load_conn.rollback() OperationalError: (2006, 'MySQL server has gone away') And after, all instances report this exception: 2009-08-12T14:21:53 ERROR Zope.SiteErrorLog http://adm.agecom.ba.gov.br/login_form Traceback (innermost last): Module ZPublisher.Publish, line 121, in publish Module Zope2.App.startup, line 240, in commit Module transaction._manager, line 96, in commit Module transaction._transaction, line 395, in commit Module transaction._transaction, line 498, in _commitResources Module ZODB.Connection, line 730, in tpc_vote Module relstorage.relstorage, line 675, in tpc_vote Module relstorage.relstorage, line 659, in _vote Module relstorage.relstorage, line 566, in _prepare_tid Module relstorage.adapters.mysql, line 
506, in start_commit Module relstorage.adapters.mysql, line 672, in _hold_commit_lock StorageError: Unable to acquire commit lock I solved the problem by restarting all instances, and the site became operational again, but I have some questions: Can this be a bug or is there a problem in my environment/application? Is there another solution to release the commit lock without restarting all instances? Cheers, -- Rudá Porto Filgueiras http://python-blog.blogspot.com i just came back from reading relstorage code (research work for radosstorage) and the lock is actually held on the mysql server. my guess is that the connection drop you experienced earlier left the lock in place on the mysql server. obviously the mysql server did not notice your connection dying, otherwise it would have released the lock, see http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock probably restarting mysql would have solved your issue?! regards, jürgen
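For reference, the GET_LOCK semantics under discussion can be seen directly in SQL; the lock name below is illustrative, and `IS_USED_LOCK`/`KILL` are standard MySQL functions/statements, so a full server restart should not be necessary if the stale connection can be identified:

```sql
-- Session A (the client that later went away) took the lock:
SELECT GET_LOCK('example.commit', 0);    -- returns 1 on success
-- Session B can see which connection still holds it:
SELECT IS_USED_LOCK('example.commit');   -- returns A's connection id, or NULL
-- Terminating the stale connection releases its locks:
KILL 12345;                              -- 12345 = the id reported above
```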
[ZODB-Dev] ioerror on mounting db (zope mountpoint)
hi! after upgrading to zope 2.9 i receive the following errors for about 20-30s after restart: 2006-01-25 10:54:16 ERROR ZODB.lock_file Error locking file /home/bliss/zope/var/bliss_data.fs.lock Traceback (most recent call last): File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 63, in __init__ lock_file(self._fp) File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 42, in lock_file fcntl.flock(file.fileno(), _flags) IOError: [Errno 11] Resource temporarily unavailable 2006-01-25 10:54:16 ERROR Zope.ZODBMountPoint Failed to mount database. exceptions.IOError ([Errno 11] Resource temporarily unavailable) Traceback (most recent call last): File /home/bliss/zope/lib/python/Products/ZODBMountPoint/MountedObject.py, line 262, in _getOrOpenObject conn = self._getMountedConnection(anyjar) File /home/bliss/zope/lib/python/Products/ZODBMountPoint/MountedObject.py, line 149, in _getMountedConnection conn = self._getDB().open() File /home/bliss/zope/lib/python/Products/ZODBMountPoint/MountedObject.py, line 162, in _getDB return getConfiguration().getDatabase(self._path) File /home/bliss/zope/lib/python/Zope2/Startup/datatypes.py, line 280, in getDatabase db = factory.open(name, self.databases) File /home/bliss/zope/lib/python/Zope2/Startup/datatypes.py, line 178, in open DB = self.createDB(database_name, databases) File /home/bliss/zope/lib/python/Zope2/Startup/datatypes.py, line 175, in createDB return ZODBDatabase.open(self, databases) File /home/bliss/zope/lib/python/ZODB/config.py, line 97, in open storage = section.storage.open() File /home/bliss/zope/lib/python/ZODB/config.py, line 135, in open quota=self.config.quota) File /home/bliss/zope/lib/python/ZODB/FileStorage/FileStorage.py, line 112, in __init__ self._lock_file = LockFile(file_name + '.lock') File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 63, in __init__ lock_file(self._fp) File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 42, in lock_file fcntl.flock(file.fileno(), _flags) is this 
something to worry about? seems that the lock can be acquired some time after the restart (the abovementioned 20-30 seconds)... i have never seen this behaviour with zope 2.8. regards, juergen herrmann
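The "[Errno 11] Resource temporarily unavailable" (EAGAIN) in that traceback is exactly what `fcntl.flock` with `LOCK_NB` reports while another open file description still holds the exclusive lock, e.g. the old zope process not yet fully gone. A small self-contained demonstration (Linux flock semantics; two separate `open()` calls conflict even within one process):

```python
import fcntl
import os
import tempfile

fd, lock_path = tempfile.mkstemp()
os.close(fd)

holder = open(lock_path, 'w')                 # the "old" process's handle
fcntl.flock(holder.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)

contender = open(lock_path, 'w')              # the restarting process
try:
    fcntl.flock(contender.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    got_lock = True
except OSError:                               # errno 11: EAGAIN
    got_lock = False

holder.close()                                # lock released with the fd...
fcntl.flock(contender.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)  # ...now succeeds
```

Which matches the observed behaviour: once the old process's file descriptor is finally closed, the restart acquires the lock and mounting works.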
Re: [ZODB-Dev] ioerror on mounting db (zope mountpoint)
hmm, there's definitely no other zope process around at that time. could it be that the index is just being recreated at that time? (there are around 12 objects in the db according to zope's cache tab) i'm not a zodb specialist :) but i'd like to find out... regards, juergen herrmann On Wed, January 25, 2006 21:15, Dieter Maurer wrote: Jürgen Herrmann wrote at 2006-1-25 11:00 +0100: hi! after upgrading to zope 2.9 i receive the following errors for about 20-30s after restart: 2006-01-25 10:54:16 ERROR ZODB.lock_file Error locking file /home/bliss/zope/var/bliss_data.fs.lock Traceback (most recent call last): File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 63, in __init__ lock_file(self._fp) File /home/bliss/zope/lib/python/ZODB/lock_file.py, line 42, in lock_file fcntl.flock(file.fileno(), _flags) IOError: [Errno 11] Resource temporarily unavailable This means that the storage is opened by another process. Stop this other process and your problem should go away (unless your process tries to open it twice). -- Dieter
Re: FW: [ZODB-Dev] python types question
thanks tim, my mistake! regards, juergen herrmann [ Tim Peters wrote:] [fwd'ing private msg, since it appears to have been intended to go to the list] -Original Message- From: Jürgen Herrmann [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 23, 2005 7:31 AM To: Tim Peters Subject: RE: [ZODB-Dev] python types question hi! first of all, thanks to everybody who replied to my message. i experimented a bit with OOBTree and OOTreeSet. OOBTree is surely fine for the mapping part of what i need. But i think OOTreeSet doesn't fit as a replacement for PersistentList because i need the oids in the list to maintain their order. i want to be able to change the order of oids in the lists, too. any further hints what to use instead of OOTreeSet then? regards, juergen herrmann
[ZODB-Dev] python types question
hi! i have a backend to establish many2many relations between objects. the relations are all bi-directional. for the storage i use a PersistentMapping, keys are relation names (strings), values are PersistentLists that hold the (globally unique) oids of the related objects (strings also). now my question: regarding performance and having conflict errors in mind (an object might well be related to several thousand others, such a PersistentList object would be that big then) - is it wise to use these two classes or is there something better to use? (any hints are ok, i'll gladly do the rtfm then :) thanks in advance, juergen herrmann
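For this access pattern the usual advice is BTrees: an OOBTree for the relation-name mapping and, per object, an OOTreeSet of related oids. Both spread their state over many small persistent buckets, so two transactions adding different relations usually touch different buckets and don't conflict the way one big PersistentList does (at the cost of losing list order, as the follow-up message notes). A plain-Python sketch of the bi-directional bookkeeping, with dict and set standing in for OOBTree and OOTreeSet:

```python
class ManyToMany:
    """Bi-directional many2many relations keyed by relation name.
    In ZODB, _index would be an OOBTree and each inner set an OOTreeSet."""
    def __init__(self):
        self._index = {}   # relation name -> {oid -> set of related oids}

    def relate(self, name, a, b):
        rel = self._index.setdefault(name, {})
        rel.setdefault(a, set()).add(b)
        rel.setdefault(b, set()).add(a)    # keep both directions in sync

    def unrelate(self, name, a, b):
        rel = self._index.get(name, {})
        rel.get(a, set()).discard(b)
        rel.get(b, set()).discard(a)

    def related(self, name, oid):
        return self._index.get(name, {}).get(oid, set())

m = ManyToMany()
m.relate('member_of', 'oid-a', 'oid-b')    # one call maintains both directions
```

If user-controllable ordering is required, one compromise is a PersistentList per relation for order plus a companion set/OOSet for fast membership tests; that doubles the bookkeeping but keeps reads cheap.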
RE: [ZODB-Dev] zodb connection question
[ Tim Peters wrote:] [Jürgen Herrmann] ... so, what i need is a way to have my mechanism called before transaction commits (would be possible to use the hooks provided by zodb 3.4 for that, sure) and cycle through all modified objects (that's where the hooks in 3.4 are not enough for me, they don't provide any way to access a list of modified objects). Note that this isn't just a matter of exposing something to you: ZODB 3.4 has no list of modified objects, not even internally. Data managers (like ZODB.Connection) register with transactions in 3.3+, and data managers are responsible for keeping track of their own modified objects. Any number of data managers may register with a transaction, and each keeps track of its own details. So doing what you want would require at least two changes: 1. Adding a new method to the IDataManager interface, to deliver some notion of modified objects under the control of the data manager. Of course data managers would need to grow implementations of that too. Since you're presumably going to be changing even more objects based on the stream of modified objects returned, this can be tricky (do you get a point-in-time list, ignoring further changes? or an iterator that includes later changes? if the latter, is it OK to see an object more than once in the stream? like that). 2. Adding a related new method to ITransaction, to combine the notions of modified objects from all registered data managers. That's all doable, but it's hard to see it taking priority over other work. hmm, as it seemed quite impossible the way i wanted it, i almost dropped it from my wishlist. now you say, it's doable... i already had hacked into zodb.connection, the problem was that the objects maintained there (self._registered_objects) are not wrapped for acquisition, thus my reindexing method did not find the catalog to register with. how would you solve this problem? 
to answer your question above: a point-in-time list would be enough for me, i'd just like to reindex all modified model objects. that should only change indexes. regards, juergen herrmann
RE: [ZODB-Dev] zodb connection question
[ Tim Peters wrote:] [Jürgen Herrmann] hmm, as it seemed quite impossible the way i wanted it, i almost dropped it from my wishlist. now you say, it's doable... It would be possible to add new official APIs to ZODB to supply some notion of the collection of all modified objects, at the level ZODB sees objects. Of course that would be doable, given enough work (which I sketched). i already had hacked into zodb.connection, That's part of what would be needed, yes. the problem was that the objects maintained there (self._registered_objects) are not wrapped for acquisition, thus my reindexing method did not find the catalog to register with. how would you solve this problem? For the entirety of what you want to do, I'd pursue Jim's suggestion and forget about working at the ZODB level. You didn't actually want all modified objects to begin with: that appeared to be a hack, an indirect way to get at what you do want, which is a very specific subset of application objects that need to be reindexed. Jim suggested some principled ways to accomplish that without hacking ZODB internals. i'd have the objects that simply don't implement the necessary callback function sorted out. this way only the interesting objects would take part in that dance. knowing all modified objects just seemed an easy way to get where i wanted. to answer your question above: a point-in-time list would be enough for me, i'd just like to reindex all modified model objects. that should only change indexes. If we were to add official APIs to ZODB, we'd have to consider what everyone might want. Point-in-time is probably easiest to code, anyway. hmm, after several posts to the zope list and now to the zodb-dev list also, i guess that no one actually needs these hooks. everyone that replied to me suggested not to try it this way. 
now i wrote an index manager that keeps track of registered objects - still i have to register objects in set methods and methods that otherwise change the state of objects but all the other stuff is the way i want it now. in short: probably too much work if no one wants to use it afterwards (except me :))) regards, juergen herrmann
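For the record, the pattern Jim suggested maps onto `transaction.get().addBeforeCommitHook()` from the transaction package: queue reindex requests, register the hook once per transaction, drain the queue in the hook. A dependency-free sketch of that coalescing idea (the `Txn` class is a stand-in for the real transaction manager, not its actual implementation):

```python
class Txn:
    """Minimal stand-in for a transaction with before-commit hooks."""
    def __init__(self):
        self._hooks = []

    def addBeforeCommitHook(self, hook):
        self._hooks.append(hook)

    def commit(self):
        for hook in self._hooks:
            hook()

class QueueingCatalog:
    """Coalesces reindex requests: an object changed N times in one
    transaction is reindexed exactly once, at commit time."""
    def __init__(self, txn):
        self._txn = txn
        self._queue = None
        self.indexed = []          # what actually got (re)indexed

    def reindex_later(self, oid):
        if self._queue is None:    # first update in this transaction:
            self._queue = set()    # start a queue and register the hook once
            self._txn.addBeforeCommitHook(self._flush)
        self._queue.add(oid)

    def _flush(self):
        for oid in sorted(self._queue):
            self.indexed.append(oid)   # real code would reindex here
        self._queue = None

txn = Txn()
cat = QueueingCatalog(txn)
cat.reindex_later('a')
cat.reindex_later('a')     # same object touched again: still one reindex
cat.reindex_later('b')
txn.commit()
```

This gets the "setting 3 attributes reindexes 3 times" problem out of the way without any per-object callback support in ZODB itself.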
Re: [ZODB-Dev] zodb connection question
[ Jim Fulton wrote:] Jürgen Herrmann wrote: [ Jim Fulton wrote:] Jürgen Herrmann wrote: hi all! i'm trying to form a patch that will result in a method _before_commit() being called on each modified object in a transaction (if that method exists of course) right before commit. main sense is to automate/delay (re)cataloging. first i looked at the Transaction class, as there have been heavy modifications to it from zodb 3.2 to 3.4 Patching is generally a bad idea. I suggest creating a new catalog (by subclassing or adapting an existing one) that: - queues updates - registers a before-commit callback with the TM on the first update in a transaction - processes the queue in the callback I did this recently with Zope 3's catalog and it worked very well. (I happened to use subclassing and would use adaptation if I were to do it again.) Jim hi jim! thanks for your reply, i already thought about such a solution and discarded it because it would still be necessary to call a method on an object to recatalog it. this step (the programmer's responsibility) i want to eliminate for several reasons. Which are? i have developed a framework that nicely handles many2many relations, heavily uml based, python code generation for attribute getters/setters, code generation for methods (the def only), automatic index creation... in short, i want to make everything as easy as possible for the programmer (atm only me :) first i had the code generation framework write out an explicit self.reindex_object() at the end of each setter, ugly. it also had the drawback that setting 3 attributes caused the object to reindex itself 3 times. not the best for performance. so, what i need is a way to have my mechanism called before transaction commits (would be possible to use the hooks provided by zodb 3.4 for that, sure) and cycle through all modified objects (that's where the hooks in 3.4 are not enough for me, they don't provide any way to access a list of modified objects). 
I would oppose a per-object callback mechanism. OTOH, I would not oppose providing enough hooks to allow you to implement such a mechanism yourself without patching ZODB. Note however, that such a mechanism might not help you anyway, as discussed below. as i showed in my first post, the callback to modified objects works as expected, but in the called method i'm not able to acquire the responsible Catalog for the modified object. That's because objects registered are not acquisition wrapped. up to what point can i modify objects in a transaction? why doesn't acquisition on my objects work in Connection::tpc_begin() ??? The objects are not acquisition wrapped. IMO, ZODB is way too low a level to do what you want to do. Note that what you are trying to do would be easier in Zope 3 because Zope 3 relies far less on wrappers for acquisition. Jim oh well, unfortunately i began with that project about 8 months ago. at that time zope (x)3 was still not recommended to users on the zope mailinglist, so i went for zope2. anyways, there must be a solution to that problem. if zodb is way too low a level for my approach, where do i get a possibility to get notified about an imminent transaction commit? and how could i find out about the modified objects? another question: would it be possible to wrap the objects to enable acquisition in my first approach? guess i'd have to know the container for that, right (which is no problem, because i have one distinct container folder for each class)? if that works somehow, i'd bug you further to provide me with those hooks you described above :) thanks in advance for your replies... 
juergen herrmann