Hanno Schlichting wrote at 2009-4-11 14:43 +0200:
...
ZODB 3.9 removed a bunch of deprecated APIs. Look at
http://pypi.python.org/pypi/ZODB3/3.9.0a12#change-history to see how
much changed in this version.
The main changes were related to versions no longer being supported,
which changed some low
Dominique Lederer wrote at 2009-3-30 11:15 +0200:
I am using ZODB 3.8.1 with Relstorage 1.1.3 on Postgres 8.1
Frequently I am getting messages like:
Unexpected error
Traceback (most recent call last):
File
Sandra wrote at 2009-4-1 12:17 +:
...
def manage_upload(self,file='',REQUEST=None):
...
in python/OFS/image.py. But my Programm run without end.
Zope is not very efficient with uploading large files.
Thus, it may take some time -- but it should work.
Am I making some mistake?
You should
Miles Waller wrote at 2008-12-4 19:42 +:
fstest - no problems
checkbtrees - no problems
fsrefs - returns errors about invalid objects (and reports all objects
as last updated: 5076-10-09 17:19:26.809896!), and finally fails with a
KeyError
Traceback (most recent call last):
File
Leonardo Santagada wrote at 2008-10-4 16:42 -0300:
...
Why doesn't zodb have a table of some form for this info?
You can implement one -- if you think this is worth the effort.
The ZODB has a hook classFactory(connection, modulename, globalname)
on the DB class. It is responsible for mapping the
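A common use of this hook is mapping historical class locations to current ones, so old pickles keep loading after a refactoring. A minimal sketch, assuming a hypothetical rename table (the hook signature is real; the module and class names in RENAMES are illustrative):

```python
import importlib

# Illustrative rename table: old pickled (module, class) -> current location.
RENAMES = {
    ("myapp.oldmodule", "OldClass"): ("myapp.newmodule", "NewClass"),
}

def renaming_class_factory(connection, modulename, globalname):
    """A classFactory replacement that redirects renamed classes
    before resolving the global."""
    modulename, globalname = RENAMES.get(
        (modulename, globalname), (modulename, globalname))
    module = importlib.import_module(modulename)
    return getattr(module, globalname)

# With ZODB one would install it as:  db.classFactory = renaming_class_factory
```

For names not in the table it behaves like a plain import, e.g. `renaming_class_factory(None, "collections", "OrderedDict")` returns the stdlib class.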
Jim Fulton wrote at 2008-10-1 13:40 -0400:
...
It may well be that a restart *may* not lead into a fully functional
state (though this would indicate a storage bug)
A failure in tpc_finish already indicates a storage bug.
Maybe -- although "file system is full" might not be so easy to avoid
in
Christian Theune wrote at 2008-10-3 10:32 +0200:
On Fri, 2008-10-03 at 09:55 +0200, Dieter Maurer wrote:
Jim Fulton wrote at 2008-10-1 13:40 -0400:
...
It may well be that a restart *may* not lead into a fully functional
state (though this would indicate a storage bug)
A failure
Jim Fulton wrote at 2008-9-30 18:30 -0400:
...
c. Close the file storage, causing subsequent reads and writes to
fail.
Raise an easily recognizable exception.
I raise the original exception.
Sad.
The original exception may have many consequences -- most probably
harmless. The special
Wichert Akkerman wrote at 2008-9-24 09:44 +0200:
Jim Fulton wrote:
I'd appreciate it if people would try it out soon.
I can say that the combination of 3.8.1b8 and Dieter's
zodb-cache-size-bytes patch does not seem to work. With
zodb-cache-size-bytes set to 1 gigabyte on an instance with
Tres Seaver wrote at 2008-9-12 06:35 -0400:
...
Reimplementing Pickle Cache in Python
=====================================
...
from zope.interface import Attribute
from zope.interface import Interface

class IPickleCache(Interface):
    """API of the cache for a ZODB connection."""
Andreas Jung wrote at 2008-9-12 10:31 +0200:
Does anyone have experience with the performance of Relstorage on Zope
installations with heavy parallel writes (which are often a bottleneck)?
Does Relstorage provide any significant advantages over ZEO?
As Relstorage emulates FileStorage behaviour
Izak Burger wrote at 2008-9-17 12:10 +0200:
I'm sure this question has been asked before, but it drives me nuts so I
figured I'll ask again. This is a problem that has been bugging me for
ages. Why does zope memory use never decrease? Okay, I've seen it
decrease maybe by a couple megabyte, but
Roché Compaan wrote at 2008-8-25 17:36 +0200:
On Sun, 2008-08-24 at 08:55 +0200, Roché Compaan wrote:
Thanks for the feedback. I'll re-run the tests without any text indexes,
as well as run it with other implementations such as TextIndexNG3 and
SimpleTextIndex and compare the results.
Some
Roché Compaan wrote at 2008-8-24 14:00 +0200:
This is the fsdump output for a single IOBTree:
data #00032 oid=1bac size=5435 class=BTrees._IOBTree.IOBTree
What is persisted as part of the 5435 bytes? References to containing
buckets? What else?
For optimization reasons,
an IOBTree
Roché Compaan wrote at 2008-8-23 19:31 +0200:
On Sat, 2008-08-23 at 14:09 +0200, Dieter Maurer wrote:
Roché Compaan wrote at 2008-8-22 14:49 +0200:
I've been doing some benchmarks on Plone and got some surprising stats
on the pickle size of btrees and their buckets that are persisted with
each
Dieter Maurer wrote at 2008-8-23 14:09 +0200:
...
A typical IISet contains 90 value records and a persistent reference.
I expect that an integer is pickled in 5 bytes. Thus, about 0.5 kB
should be expected as typical size of an IISet.
Your IISet instances seem to be about 1.5 kB large
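Dieter's arithmetic can be checked with plain Python: in pickle protocol 1, a 4-byte integer costs 5 bytes (one opcode byte plus a 4-byte payload), so 90 of them come to roughly 0.5 kB. A quick sketch:

```python
import pickle

# A lone integer > 65535 pickles as the BININT opcode ('J') plus a
# 4-byte payload, plus the 1-byte STOP opcode that ends every pickle.
assert len(pickle.dumps(100000, protocol=1)) == 6

# Inside a container the STOP opcode and list framing are shared, so
# each integer adds only its 5 bytes; 90 integers land near 0.5 kB.
ninety = list(range(100000, 100090))
size = len(pickle.dumps(ninety, protocol=1))
assert 450 <= size <= 470
```

This is only the value side; a real IISet pickle also carries the class reference and any persistent references, which is part of the gap discussed above.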
Roché Compaan wrote at 2008-8-22 14:49 +0200:
I've been doing some benchmarks on Plone and got some surprising stats
on the pickle size of btrees and their buckets that are persisted with
each transaction. Surprising in the sense that they are very big in
relation to the actual data indexed. I
[EMAIL PROTECTED] wrote at 2008-7-31 15:09 -0400:
...
I don't have experience with running the db in readonly mode in
production.
There is no difference in cache handling between readonly and readwrite
mode.
An old thread explains why this (no-difference) is necessary.
--
Dieter
tsmiller wrote at 2008-5-28 19:55 -0700:
...
I have a bookstore that uses the ZODB as its storage. It uses qooxdoo as
the client and CherryPy for the server. The server has a 'saveBookById'
routine that works 'most' of the time. However, sometimes the
transaction.commit() does NOT commit the
Vincent Pelletier wrote at 2008-5-22 11:21 +0200:
...
BTW, the usual error hook treats conflict error exceptions differently from
others, and I guess it was done so because those can happen in TPC.
No, the reason is to repeat a transaction that failed due to
a ConflictError.
--
Dieter
Andreas Jung wrote at 2008-5-13 20:19 +0200:
...
Shared.DC.ZRDB.TM.TM is the standard Zope[2] way to implement a
ZODB DataManager.
Nowadays you create a datamanager implementing IDataManager and join it
with the current transaction. Shared.DC.ZRDB.TM.TM is pretty much
old-old-old-style.
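The modern pattern can be sketched as follows. The method names below are the real two-phase-commit protocol from the `transaction` package; the class itself and its logging behaviour are illustrative, and the protocol is driven by hand here so the sketch runs standalone:

```python
class LogDataManager:
    """Minimal sketch of the IDataManager protocol (method names are
    the real protocol; everything else is illustrative).

    With the real transaction package you would call
    transaction.get().join(LogDataManager(log)) and the transaction
    machinery would drive the methods below for you.
    """
    def __init__(self, log):
        self.log = log          # the "durable" destination
        self.pending = []       # changes buffered until tpc_finish

    def append(self, item):
        self.pending.append(item)

    # -- two-phase commit, normally called by the transaction manager --
    def tpc_begin(self, txn):
        pass

    def commit(self, txn):
        pass                    # first phase: push changes toward storage

    def tpc_vote(self, txn):
        pass                    # last chance to raise and veto the commit

    def tpc_finish(self, txn):
        self.log.extend(self.pending)   # make the changes durable
        self.pending = []

    def tpc_abort(self, txn):
        self.pending = []

    abort = tpc_abort

    def sortKey(self):
        # Orders participating data managers during commit.
        return "logdatamanager:%d" % id(self)


# Driving the protocol by hand, as the transaction manager would:
log = []
dm = LogDataManager(log)
dm.append("hello")
dm.tpc_begin(None); dm.commit(None); dm.tpc_vote(None); dm.tpc_finish(None)
```

After the commit sequence, `log` contains `"hello"` and the pending buffer is empty; raising in `tpc_vote` instead would veto the whole transaction.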
Vincent Rioux wrote at 2008-4-9 11:58 +0200:
I am using zodb FileStorage for a standalone application and looking for
some advices, tutorials or descriptions for using a zodb made of an
aggregation of smaller ones.
I have been told that the mount mechanism should make the trick. Any
pointers
Manuel Vazquez Acosta wrote at 2008-4-5 11:49 -0400:
...
I wonder if there's a way to actually see what objects (or object types)
are modified by those transactions. So I can go directly to the source
of the (surely unnecessary) transaction.
The ZODB utility fsdump generates a human readable
view
Benji York wrote at 2008-3-25 09:40 -0400:
Christian Theune wrote:
I talked to Brian Aker (MySQL guy) two weeks ago and he proposed that we
should look into a technique called `group commit` to get rid of the commit
contention.
...
Summary: fsync is slow (and the cornerstone of most commit
Benji York wrote at 2008-3-25 14:24 -0400:
... commit contentions ...
Almost surely there are several causes that all can lead to contention.
We already found:
* client side causes (while the client holds the commit lock)
- garbage collections (which can block a client in the
Chris Withers wrote at 2008-3-20 22:22 +:
Roché Compaan wrote:
Not yet, they are very time consuming. I plan to do the same tests over
ZEO next to determine what overhead ZEO introduces.
Remember to try introducing more app servers and see where the
bottleneck comes ;-)
We have seen
Dylan Jay wrote at 2008-3-10 17:37 +1100:
...
I have a few databases being served out of a zeo. I restarted them in a
routine operation and now I can't restart due to the following error
Any idea on how to fix this?
2008-03-10 06:29:12 ERROR ZODB.Connection Couldn't load state for 0x01
Thomas Lotze wrote at 2008-2-26 09:30 +0100:
Dieter Maurer wrote:
How often do you need it?
Is the additional index worth the cost? Especially in view that a storage may
contain a very large number of transactions?
We've done it differently now anyway, using real iterators which store
their state
Alan Runyan wrote at 2008-2-26 13:07 -0600:
...
Most people come at ZODB with previous experience in RDBMS.
How do they map SQL INSERT/UPDATE activities to ZODB data structures?
In a way that does not create hotspot.
I tend to view the objects in an application
as belonging to three types:
Thomas Lotze wrote at 2008-2-12 11:09 +0100:
...
I don't think that's going to work here. Iterating through the
transactions in the database for each iteration is going to be totally
non-scalable.
It seems to us that it would actually be the right thing to require that
storages have an
Roché Compaan wrote at 2008-2-7 21:21 +0200:
...
So if I asked you to build a data structure for the ZODB that can do
insertions at a rate comparable to Postgres on high volumes, do you
think that it can be done?
If you need a high write rate, the ZODB is probably not optimal.
Ask yourself
Mignon, Laurent wrote at 2008-2-6 08:06 +0100:
After a lot of tests and benchmarks, my feeling is that the ZODB does not seem
suitable for systems managing much data stored in a flat hierarchy.
The application that we currently develop is a business process management
system in opposition to a
Roché Compaan wrote at 2008-2-6 20:18 +0200:
On Tue, 2008-02-05 at 19:17 +0100, Dieter Maurer wrote:
Roché Compaan wrote at 2008-2-4 20:54 +0200:
...
I don't follow? There are 2 insertions and there are 1338046 calls
to persistent_id. Doesn't this suggest that there are 66 objects
Hello Shane,
Shane Hathaway wrote at 2008-2-3 23:57 -0700:
...
Looking into this more, I believe I found the semantic we need in the
PostgreSQL reference for the LOCK statement [1]. It says this about
obtaining a share lock in read committed mode: once you obtain the
lock, there are no
Roché Compaan wrote at 2008-2-4 20:54 +0200:
...
I don't follow? There are 2 insertions and there are 1338046 calls
to persistent_id. Doesn't this suggest that there are 66 objects
persisted per insertion? This seems way too high?
Jim told you that persistent_id is called for each object and
Meanwhile I have carefully studied your implementation.
There is only a single point I am not certain about:
As I understand isolation levels, they guarantee that certain bad
things will not happen, but not that everything will behave well.
For read committed this means: it guarantees that I
Roché Compaan wrote at 2008-2-3 09:15 +0200:
...
I have tried different commit intervals. The published results are for a
commit interval of 100, iow 100 inserts per commit.
Your profile looks very surprising:
I would expect that for a single insertion, typically
one persistent object
Roché Compaan wrote at 2008-2-1 21:17 +0200:
I have completed my first round of benchmarks on the ZODB and welcome
any criticism and advice. I summarised our earlier discussion and
additional findings in this blog entry:
Hello Shane,
Shane Hathaway wrote at 2008-1-31 13:45 -0700:
...
No, RelStorage doesn't work like that either. RelStorage opens a second
database connection when it needs to store data. The store connection
will commit at the right time, regardless of the polling strategy. The
load connection
Andreas Jung wrote at 2008-2-1 12:13 +0100:
--On 1 February 2008 03:03:53 -0800 Tarek Ziadé [EMAIL PROTECTED]
wrote:
Since BTrees are written in C, I couldn't add my own conflict manager to
try to merge buckets. (and this is
way over my head)
But you can inherit from the BTree classes and
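In the spirit of BTrees.Length, a counter that merges concurrent increments can be sketched like this. `_p_resolveConflict` is the real ZODB hook (called with the old, committed, and newly written states when two transactions modify the same object); the class itself is illustrative and omits the persistent.Persistent base so it runs standalone:

```python
class Counter:
    """Illustrative counter with ZODB-style conflict resolution."""

    def __init__(self, value=0):
        self.value = value

    def change(self, delta):
        self.value += delta

    def __getstate__(self):
        return self.value

    def __setstate__(self, state):
        self.value = state

    def _p_resolveConflict(self, old_state, saved_state, new_state):
        # Both transactions started from old_state; keep the committed
        # state and re-apply this transaction's delta on top of it,
        # turning the conflict into a successful merge.
        return saved_state + (new_state - old_state)
```

Two transactions starting at 0, one committing 3 and the other attempting 5, resolve to 8 instead of raising a ConflictError.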
Christian Theune wrote at 2008-1-30 21:21 +0100:
...
That would mean that the write skew phenomenon that you found would be
valid behaviour, wouldn't it?
No.
Am I missing something?
Yes. No matter how you order the two transactions in my example,
the result will be different from what the
Shane Hathaway wrote at 2008-1-31 01:08 -0700:
...
I admit that polling for invalidations probably limits scalability, but
I have not yet found a better way to match ZODB with relational
databases. Polling in both PostgreSQL and Oracle appears to cause no
delays right now, but if the polling
Shane Hathaway wrote at 2008-1-31 00:12 -0700:
...
1. Download ZODB and patch it with poll-invalidation-1-zodb-3-8-0.patch
What does poll invalidation mean?
The RelStorage maintains a sequence of (object) invalidations ordered
by transaction-id and the client can ask give me all
Shane Hathaway wrote at 2008-1-31 11:55 -0700:
...
Yes, quite right!
However, we don't necessarily have to roll back the Postgres transaction
on every ZODB.Connection close, as we're doing now.
That sounds very nasty!
In Zope, I definitely *WANT* to either commit or roll back the
transaction
Formerly, proposals lived on wiki.zope.org.
There, they could be commented and discussed.
Now proposals live somewhere. Usually, they can be neither commented on
nor discussed. But they are registered at Launchpad.
For me, it is completely unclear how Launchpad should be used
to guide the route from a
Zvezdan Petkovic wrote at 2008-1-23 17:15 -0500:
On Jan 23, 2008, at 4:05 PM, Flavio Coelho wrote:
sorry, I never meant to email you personally
I have been wrong: Flavio has not forgotten the list, I had not looked
carefully enough. Sorry!
--
Dieter
Izak Burger wrote at 2008-1-24 13:57 +0200:
...
I'm kind of breaking my normal rules of engagement here by immediately
sending mail to a new list I just subscribed to, but then Andreas Jung
did ask me to send a mail about this to the list.
This morning one of our clients suddenly got this
Andreas Jung wrote at 2008-1-24 19:20 +0100:
...
Module ZODB.utils, line 96, in cp
IOError: [Errno 27] File too large
Apparently, you do not have large file support and your storage
file has reached the limit for small files.
LFS is usually required for files larger than 2GB. According to
Flavio Coelho wrote at 2008-1-22 17:43 -0200:
...
Actually what I am trying to run away from is the packing monster ;-)
Jim has optimized pack considerably (--> zc.FileStorage).
I, too, have worked on pack optimization the last few days (we
cannot yet use Jim's work because we are using ZODB 3.4
Looking at the current (not Jim's new) pack algorithm to optimize
the reachability analysis, I recognized a behaviour that looks
like a potential data loss through packing.
The potential data loss can occur when an object unreachable at
pack time becomes reachable again after pack time.
The
ZODB.fsIndex tells us in its source code documentation that it splits
the 8 byte oid into a 6 byte prefix and a two byte suffix and
represents the index by an OOBTree(prefix -> fsBucket(suffix -> position)).
It explains that it uses fsBucket (instead of a full tree) because
the suffix -> position
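The split described above is plain byte slicing; a small sketch (the function name is illustrative, the 6/2 split is the one documented in ZODB.fsIndex):

```python
import struct

def split_oid(oid: bytes):
    # fsIndex keys the outer OOBTree by the 6-byte prefix and the
    # inner fsBucket by the 2-byte suffix, so consecutive oids
    # (which share a prefix) cluster into the same bucket.
    assert len(oid) == 8
    return oid[:6], oid[6:]

oid = struct.pack(">Q", 0x1BAC)   # oid 0x1bac as 8 big-endian bytes
prefix, suffix = split_oid(oid)
assert prefix == b"\x00" * 6
assert suffix == b"\x1b\xac"
```

Since oids are allocated sequentially, almost all oids of a small database share the all-zero prefix, which is exactly why a flat bucket per prefix suffices.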
Marius Gedminas wrote at 2008-1-21 00:08 +0200:
Personally, I'd be afraid to use deepcopy on a persistent object.
A deepcopy is likely to be no copy at all.
As Python's deepcopy does not know about object ids, it is likely
that the copy result uses the same oids as the original.
When you
Jim Fulton wrote at 2008-1-21 09:41 -0500:
... resurrections after pack time may get lost ...
I'm sure the new pack algorithm is immune to this. It would be
helpful to design a test case to try to provoke this.
I fear we cannot obtain full immunity at all -- unless we perform
packing
Kenneth Miller wrote at 2008-1-17 19:08 -0600:
...
Do I always
need to subclass persistent?
When you assign an instance of your (non persistent derived) class
as an attribute to a persistent object,
then your instance will be persisted together with its persistent
container.
However, local
Flavio Coelho wrote at 2008-1-17 14:57 -0200:
Some progress!
Apparently the combination of:
u._p_deactivate()
You do not need that when you use commit.
transaction.savepoint(True)
transaction.commit()
You can use u._p_jar.cacheGC() instead of the commit.
Tres Seaver wrote at 2008-1-17 01:30 -0500:
...
Mika, David P (GE, Research) wrote:
Can someone explain why the test below (test_persistence) is failing?
I am adding an attribute after object creation with __setstate__, but
I can't get the new attribute to persist.
You are mutating the
Jim Fulton wrote at 2007-12-28 10:20 -0500:
...
The Berkeley Database Storage supported automatic incremental packing
without garbage collection. If someone were to revitalize that effort
and if one was willing to do without cyclic garbage collection, then
that storage would remove the
Jim Fulton wrote at 2007-12-1 10:09 -0500:
...
AFAIK, there hasn't been a release that fixes this problem. A
contributor to the problem is that I don't think anyone working on
ZODB has ready access to 64-bit systems. :(
We are using an old (ZODB 3.4) version on a 64 bit linux without
Jim Fulton wrote at 2007-12-2 13:51 -0500:
...
With what version of Python?
2.4.x
I believe the problem is related to both Python 2.5 and 64-bit systems
-- possibly specific 64-bit systems.
Okay. No experience with this.
As we use Zope (2), we do not use Python 2.5.
--
Dieter
Thomas Clement Mogensen wrote at 2007-9-27 12:43 +0200:
...
Within the last few days something very strange has happened: All
newly created or modified objects get a _p_mtime that is clearly
incorrect and too big for DateTime to consider it a valid timestamp.
(ie. int(obj._p_mtime) returns
Manuzhai wrote at 2007-9-18 12:46 +0200:
...
the Documentation link points to a page
that seems to mostly have papers and presentation from 2000-2002.
There is a good guide to the ZODB from Andrew Kuchling (or similar).
It may be old -- but everything is still valid.
On the internet, there is
Alan Runyan wrote at 2007-9-11 09:27 -0500:
...
oid 0xD87110L BTrees._OOBTree.OOBucket
last updated: 2007-09-04 14:43:37.687332, tid=0x37020D3A0CC9DCCL
refers to invalid objects:
oid ('\x00\x00\x00\x00\x00\xb0+f', None) missing: 'unknown'
oid ('\x00\x00\x00\x00\x00\xb0N\xbc',
Alan Runyan wrote at 2007-9-10 09:34 -0500:
...
While debugging this I had a conversation with sidnei about mounted
databases. He recalled that if you're using a mounted database you
should not pack. If for some reason your mounted database had a cross
reference to another database and somehow
Jim Fulton wrote at 2007-8-20 10:32 -0400:
...
Application specific conflict resolution
would become a really difficult task.
I'm sure you realize that application specific conflict resolution
violates serializability.
No, I do not realize this.
Assume a counter which is not read only
Tres Seaver wrote at 2007-8-20 10:00 -0400:
...
Zope works for this case because each application thread uses a
per-request connection, to which it has exclusive access while the
connection is checked out from the pool (i.e., for the duration of the
request).
At least unless one makes persistency
Jim Fulton wrote at 2007-8-20 10:15 -0400:
Excellent analysis snipped
1. and 3. (but obviously not 2.) could be handled by
implementing STICKY not by a bit but by a counter.
This has been planned for some time. :/
I have (reread) this in your Different Cache Interaction proposal.
Thanks to
Jim Fulton wrote at 2007-8-20 10:45 -0400:
...
Dieter appears to have been bitten by this and he is one of we. :)
We, and I presume he, can be bitten by a Python function called from
BTree code calling back into the code on the same object. This is
possible, for example, in a __cmp__ or
Analysing the STICKY behaviour of 'Persistent', I recognized
that 'Persistent' does not customize the '__getattr__' but in fact
the '__getattribute__' method. Therefore, 'Persistent' is informed
about any attribute access and not only attribute access on a
ghosted instance.
Together with the
We currently see occasional SIGSEGVs in BTrees/BucketTemplate.c:_bucket_get.
I am not yet sure but it looks as if the object had been deactivated
during the BUCKET_SEARCH.
Trying to analyse the problem, I had a close look at the STICKY
mechanism of persistent.Persistent which should prevent
Jim Carroll wrote at 2007-8-12 16:45 +:
...
Somehow, the code that adds the message to the persistent
list is running more than once. I have read that ZEO will
re-run python code on a retry
You have read something wrong.
The only thing, ZEO does in case of a conflict is trying to
Stefan H. Holek wrote at 2007-7-7 12:42 +0200:
BTrees.Length is used in many places to maintain the length of
BTrees. Just the other day it was added to zope.app.container.btree.
While I am happy about the speed improvements, I am concerned about
the fact that BTrees.Length declares itself
Jim Carroll wrote at 2007-6-22 16:30 +:
...
I'll be checking the quixote mailing list, but quixote isn't going to have
anything zope-specific, and I do think that it's the interaction with zope
that's giving me trouble...
The other sendmail packages are Zope products and can use some
Zope
Joachim Schmitz wrote at 2007-5-31 12:07 +0200:
...
2007-05-31 09:45:06 INFO Skins.create_level A923157 finished to create
level 200
Now the conflict error, look at the transaction start-time, this is
before the restart of zope !!
You are probably tricked out here: the serials are in fact UTC
Chris Withers wrote at 2007-5-29 16:02 +0100:
...
Once again, it would be nice, now that you have access, if you could
feed back your changes in areas like these rather than keeping them in
your own private source tree :-(
I would be busy for about 1 to 2 weeks -- and I do not have that time
Joachim Schmitz wrote at 2007-5-28 17:45 +0200:
In ZODB.Connection.Connection.open I see:
if self._reset_counter != global_reset_counter:
    # New code is in place.  Start a new cache.
    self._resetCache()
else:
    self._flush_invalidations()
So
Perry wrote at 2007-5-25 13:16 +0200:
database conflict error (oid 0x7905e6, class BTrees._IOBTree.IOBucket,
serial this txn started with 0x036ddc2a44454dee 2007-05-25
09:14:16.000950, serial currently committed 0x036ddc2c21950377
2007-05-25 09:16:07.870801) (80 conflicts (10 unresolved) since
Andreas Jung wrote at 2007-5-1 11:23 +0200:
...
I think you are right (as always). Then let me rephrase the question: how
can one distinguish if two transaction objects represent the same or
different transactions in such case where memory address is identical?
Why are you interested in such a
Jim Fulton wrote at 2007-5-4 14:40 -0400:
On May 4, 2007, at 2:33 PM, Dieter Maurer wrote:
Jim Fulton wrote at 2007-5-2 11:52 -0400:
...
I think I still rather like explicit, but I'm on the fence about
which approach is best. What do other people think?
From your description, I would use
Andreas Jung wrote at 2007-5-4 21:13 +0200:
--On 4 May 2007 21:05:00 +0200 Dieter Maurer [EMAIL PROTECTED] wrote:
But, the transactions are not concurrent in your original description!
Instead, one transaction has been committed and (only!) then you
see a transaction with the same id again
Chris Withers wrote at 2007-5-4 18:53 +0100:
To try and find out which objects were referencing all these workflow
histories, we tried the following, starting with one of the oids of these
histories:
from ZODB.FileStorage import FileStorage
from ZODB.serialize import referencesf
fs =
Paul Winkler wrote at 2007-4-26 02:13 -0400:
In ExportImport._importDuringCommit() I found this little gem:
pfile = StringIO(data)
unpickler = Unpickler(pfile)
unpickler.persistent_load = persistent_load
newp = StringIO()
pickler =
Jim Fulton wrote at 2007-4-24 17:01 -0400:
I'm 99.9% sure that version commit and abort are broken in ZODB.DB.
The commit methods in CommitVersion, and AbortVersion (and
TransactionalUndo) call invalidate on the databse too soon -- before
the transaction has committed. This can have a
Alan Runyan wrote at 2007-4-11 11:31 -0500:
... ZEO lockups ...
PeterZ [EMAIL PROTECTED] reported today very similar problems
in [EMAIL PROTECTED]. He, too, gets:
File /opt/zope/Python-2.4.3/lib/python2.4/asyncore.py, line 343, in
recv
data = self.socket.recv(buffer_size)
error: (113, 'No
Paul Winkler wrote at 2007-4-6 13:30 -0400:
...
If I understand this stuff correctly, the code in question on a
filesystem that *doesn't* have the sparse file optimization would
equate to "write N null bytes to this file as fast as possible".
True?
Posix defines the semantics.
I have not looked
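The POSIX semantics in question are easy to observe with the stdlib: writing past EOF leaves a "hole" that reads back as null bytes, whether or not the filesystem stores it sparsely. A small standalone sketch:

```python
import os
import tempfile

# Seek past EOF, write one byte, and read the file back: the skipped
# region reads as null bytes regardless of sparse-file support; a
# sparse filesystem merely avoids allocating blocks for the hole.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.seek(1000)          # move past EOF without writing
        f.write(b"x")
    with open(path, "rb") as f:
        data = f.read()
    assert data[:1000] == b"\x00" * 1000
    assert data[1000:] == b"x"
finally:
    os.remove(path)
```

On a non-sparse filesystem the kernel has to materialize those null blocks, which is the cost being discussed above.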
Robert Gravina wrote at 2007-4-1 00:31 +0900:
...
Woohoo! I realised Connection.sync() does exactly what I need, but
this still doesn't work as expected.
class UpdatedDB(DB):
    def invalidate(self, tid, oids, connection=None, version=''):
        DB.invalidate(self, tid, oids, connection,
Tim Tisdall wrote at 2007-3-29 12:02 -0400:
Okay... I've managed to create a persistent object called 'p' with
the OID of the missing object. I have no idea how to determine the
database connection object to pass it to the
ZODB.Connection.Connection.add() .
If you have a persistent object
Lennart Regebro wrote at 2007-3-28 18:25 +0200:
On 3/27/07, Dieter Maurer [EMAIL PROTECTED] wrote:
However, this approach is only efficient when the sort index size
is small compared to the result size.
Sure. But with incremental searching, the result size is always one, right? ;-)
No. You
Atmasamarpan Novy wrote at 2007-3-28 11:02 +0200:
...
Problem:
Current ZODB design creates a separate cache for each ZODB connection
(ie. a thread in zope). It means that the same object could be
replicated in each connection cache. We cannot do much about it since we
do not know in advance
Tim Tisdall wrote at 2007-3-29 16:03 -0400:
It took me all day, but I finally managed to figure out how to do
what you suggested. Unfortunately, I still get the very same error:
POSKeyError, Error Value: 0x01edf2 . Just to make sure I did it
right, 0x01edf2 is the OID I should use in your
Tim Tisdall wrote at 2007-3-27 09:17 -0400:
The broken object is a 1gb plone instance. Which is what I'm trying
to recover.
You may try to find the (non broken) persistent subobjects of the broken
objects and relink them to a new object.
Then you can delete the broken object.
Whether you have
Jim Fulton wrote at 2007-3-26 15:55 -0400:
...
On Mar 26, 2007, at 3:28 PM, Dieter Maurer wrote:
Jim Fulton wrote at 2007-3-25 09:53 -0400:
On Mar 25, 2007, at 3:01 AM, Adam Groszer wrote:
MF I think one of the main limitations of the current catalog (and
MF hurry.query) is efficient
Tim Tisdall wrote at 2007-3-23 16:03 -0400:
When I run the fsrefs.py on the database I get the following:
-
oid 0x0L persistent.mapping.PersistentMapping
last updated: 2007-01-02 18:59:32.016077, tid=0x36AA393889A1800L
refers to invalid object:
oid ('\x00\x00\x00\x00\x00\x00\x00\x01',
Chris Withers wrote at 2007-3-22 08:43 +:
Dennis Allison wrote:
And that I'm lazy and really want to be able to do:
python rollback.py 2007-03-21 09:00
You have been told that you can specify a stop time
and the storage will stop at the given time.
Thus, you look at the code how this is
Jim Fulton wrote at 2007-3-21 10:06 -0400:
...
On Mar 21, 2007, at 9:59 AM, Tres Seaver wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Jim Fulton wrote:
On Mar 21, 2007, at 6:41 AM, Chris Withers wrote:
Hi All,
Is there any existing method or script for rolling back a ZODB
Ray Liere wrote at 2007-3-15 08:32 -0700:
...
Yes -- to the question in your subject.
--
Dieter
___
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/
ZODB-Dev mailing list - ZODB-Dev@zope.org
Ross Patterson wrote at 2007-3-15 14:25 -0700:
...
I recently became obsessed with this problem and sketched out an
architecture for presorted indexes. I thought I'd take this
opportunity to get some review of what I came to.
From my draft initial README:
Presort provides intids which assure
Chris Withers wrote at 2007-3-14 10:18 +:
Dieter Maurer wrote:
Yes, it looks like an error:
Apparently, assert end is not None failed.
Apparently storage.loadBefore returned a wrong value.
Unfortunately, neither of these means anything to me ;-)
That is because you did not look
Chris Withers wrote at 2007-3-13 11:34 +:
One of the users on one of my projects saw this error under high load:
Module Products.QueueCatalog.QueueCatalog, line 458, in reindexObject
Module Products.QueueCatalog.QueueCatalog, line 341, in catalog_object
Module
Jim Fulton wrote at 2007-2-25 08:21 -0600:
It might also be nice to have this generate events. That is, the
tracing storage should call zope.event.notify.
I intend in 3.8 or 3.9 to start having ZODB depend on zope.event. We
really should have used events rather than adding the callbacks
Petra Chong wrote at 2007-2-13 18:27 -:
...
In the docs I have read that it is possible for non-zodb apps to plug
into the transaction framework. However, I am unable to find any
specifics as to how to do this.
What I'd like to do is this:
1. Have my app import transaction
2. When