Tim Peters wrote:
Very nice, Florent -- thank you! I added some stuff (trust me, you won't
object <wink>), and merged it to the 3.4 branch and to the ZODB trunk. I'll
remove ZODB/branches/efge-beforeCommitHook next. One of the nice things
about SVN is that retired branches and tags don't have to
Hi there,
I have a non-zope zeo client that pumps data into a storage server for
later consumption by a zope zeo client. Everything is Zope 2.7.5.
The non-zope client has logic that looks roughly like:
for work in queue:
try:
get_transaction().begin()
# do work, change zodb objects,
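A loop like this usually needs conflict handling around the commit. A minimal self-contained sketch of the retry shape (ConflictError here is a stand-in for ZODB.POSException.ConflictError, and `work` is a placeholder for the "do work" step above):

```python
class ConflictError(Exception):
    """Stand-in for ZODB.POSException.ConflictError."""

def run_with_retries(work, attempts=3):
    # Retry a unit of work when a concurrent commit conflicts with it;
    # re-raise once the attempts are exhausted.
    for n in range(attempts):
        try:
            return work()
        except ConflictError:
            if n == attempts - 1:
                raise
```

In the real loop each retry would be preceded by an abort (or conn.sync(), as discussed below) so the objects are re-read before the work is redone.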
Christian Robottom Reis wrote:
On Fri, Apr 15, 2005 at 12:57:07PM +0100, Chris Withers wrote:
Okay, where in the above should I be calling sync()?
Where do I get sync from? get_transaction() doesn't have a synch attribute..
On the Connection, of course <wink>
And how do I get hold of the connection
Dieter Maurer wrote:
Currently, the ZODB cache can only be controlled via the maximal number
of objects. This makes configuration complex as the actual limiting
factor is the amount of available RAM and it is very difficult to
estimate the size of the objects in the cache.
I therefore propose
Christian Heimes wrote:
I'm migrating CMF objects to Archetypes objects including metadata,
security and so on.
Surely you mean hideous AT objects to efficient CMF objects? 0.8 <wink>
The migration of a typical object takes about 0.2
to 1 sec including catalog updates. A folderish object with
Sidnei da Silva wrote:
| Doesn't the *already included* zeoup.py do exactly what you are trying
| to do? See the bin directory of your zope home.
Unfortunately no. It uses ClientStorage, which goes through the 1000's
of lines of the connection dance using connect threads and large
timeouts.
Dieter Maurer wrote:
Let a bug occur in some component and then, instead (or in addition)
of fixing the bug, we say often: rip the component off Zope.
That was threatened for Versions, and ZClasses and now
for Refresh.
Well, they all don't work right, confuse people and aren't
Victor Safronovich wrote:
Hello Chris Withers,
Friday, September 23, 2005, 9:12:19 PM, you wrote:
CW Why not just use MaildropHost?
1. Because it is *nix only (it uses os.fork()); using Thread.setDaemon(1) is
more friendly for me.
I KNOW Jens will accept good patches ;-)
2. Because
Chris Spencer wrote:
I understand that, but my point was when you call transaction.commit(),
you don't necessarily know what you're committing.
Yes you do, each thread has its own connection to the database, and this
connection has an independent view of the database from any other thread.
Andreas Jung wrote:
Sounds like a total insane idea to me to use ZEO to distribute code :-)
Yeah, 'cos ZODB-based Script (Python)'s and ZPT's aren't code :-P
Chris
--
Simplistix - Content Management, Zope Python Consulting
- http://www.simplistix.co.uk
Tim Peters wrote:
I'm developing a ZODB based Collection Management software, and, for a
bunch of reasons, i have to know the list of modified objects before the
current transaction commit. Looking around, it seems there is no public
API to obtain this list
That's true.
How hard would it be
Hi Tim,
Thanks for the feedback,
[Chris]
AttributeError: ClientCache instance has no attribute '_get'
Both at random times while the servers are running...
Tim Peters wrote:
Then-- sorry --I don't have a clue. Like I said, I don't see any code
capable of removing a ClientCache's `_get`
Hi Tim,
Tim Peters wrote:
As before, this "Shouldn't load state ..." error is almost certainly due to
a logic error in some product you're using, or in Zope. Take this message
as meaning exactly what it says: something is trying to work with a
persistent object after the Connection it came from
Hi All,
ZEO's got rather noisy in Zope 2.8 :-S
Anyone mind if I make the following change to the ZEO trunk?
cheers,
Chris
PS: What branches should I merge this to to get it into the next 2.8
release, assuming it is okay?
Index: ZEO/ClientStorage.py
Dieter Maurer wrote:
And I posted a ZConfig extension that allows to read the environment
(thus using environment variables in the configuration file).
Did you talk to the ZConfig maintainer about merging this?
I know he wasn't keen but I think it's great extra functionality to have...
Chris
Tim Peters wrote:
ZEO's got rather noisy in Zope 2.8 :-S
Offhand I didn't find any message logged at INFO level by ClientStorage.py
in ZODB 3.4 (Zope 2.8) that wasn't also logged at INFO level by
ClientStorage.py in ZODB 3.2 (Zope 2.7). What specifically do you find
noisier under 2.8 than
Florent Guillaume wrote:
I guess I'm looking for a show_stack option...
import traceback; traceback.print_stack()
Yes thank you, my egg sucking is quite proficient ;-)
Tim, how would I go about getting one of those added to Python?
(It's not much of a code change, but the process of getting
Tim Peters wrote:
no inverse for oid_repr, and there isn't a need for one. For any 8-byte
string oid S,
p64(int(oid_repr(S), 0)) == S
so that's how to get an inverse of oid_repr if you really want one
(although I don't know why anyone would).
...'cos oid_repr is what gets used to log
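Tim's identity can be checked with small stand-ins for the two helpers (re-implemented here so the sketch is self-contained; the real functions live in ZODB.utils):

```python
import struct

def p64(n):
    # pack an integer into an 8-byte big-endian oid, like ZODB.utils.p64
    return struct.pack(">Q", n)

def oid_repr(oid):
    # hex form of an 8-byte oid, like the 0x6a7ed9 seen in log messages
    return hex(struct.unpack(">Q", oid)[0])

oid = p64(0x6a7ed9)
assert p64(int(oid_repr(oid), 0)) == oid   # the round trip Tim describes
```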
Dieter Maurer wrote:
I know he wasn't keen but I think it's great extra functionality to have...
Then, maybe, you lobby a bit?
That's what I'm doing.
Fred Drake is Mr ZConfig, no?
cheers,
Chris
--
Simplistix - Content Management, Zope Python Consulting
-
Hot on the heels of my posts about "Shouldn't load state for" comes the
eagerly awaited sequel "Couldn't load state for" ;-)
Here's the errors:
2005-11-29T14:24:24 ERROR ZODB.Connection Couldn't load state for 0x6a7ed9
Traceback (most recent call last):
File lib/python/ZODB/serialize.py, line
Yay! Where's the correct place to report these nowadays?
cheers,
Chris
Dieter Maurer wrote:
Chris Withers wrote at 2005-11-29 14:33 +:
Hot on the heels of my posts about "Shouldn't load state for" comes the
eagerly awaited sequel "Couldn't load state for" ;-)
...
File lib/python/ZEO/zrpc
Tim Peters wrote:
Sorry, I couldn't find a comprehensible question here after reasonable
effort to extract one. Clearly, Zope2's DateTime.DateTime.DateTime objects
are neither persistent nor do they define any mutating methods. Are those
relevant? If not, try to ask a question directly,
Hi All,
We recently upgraded to Zope 2.8.4 and have been seeing some different
and special errors every so often (why does this always and only ever
happen to me? ;-) Nothing noticeably bad appears to result from these,
but, as always, I lose hair over them and would like to know what's
Chris Spencer wrote:
I understand this has some drawbacks. Namely, it will only work for
new-style classes, but for a large code base this might be easier than
manually writing _p_changed = 1 everywhere.
The number of times you end up actually having to write this is pretty
minimal ;-)
Tim Peters wrote:
This is on a cluster of machines, with the errors not coming from any one
machine as far as I can see...
Networking gear and cables are also hardware ;-)
large bank datacenter - not a chance in hell of sensibly testing this :-(
No need, you already did, and I added
Jim Fulton wrote:
2. I think a real packaging system, like eggs would have helped here.
Eggs in particular would have allowed multiple versions of
zope.interface
to be installed. Zope would have gotten the version it needed and ZODB
would have gotten the version it needed. (Hm, maybe
Chris McDonough wrote:
See the egg intro doc at http://peak.telecommunity.com/DevCenter/
PythonEggs .
I've scanned that before... Jim's mentioned some pretty deep and
interesting stuff, I wondered if he'd found more in-depth docs or
whether I was just missing stuff on that page...
Chris
Tino Wildenhain wrote:
apt-get install wwwoffle :-)
C:\Zope>apt-get install wwwoffle
'apt-get' is not recognized as an internal or external command,
operable program or batch file.
:-(
Chris
Okay, now that we have 2.8.4 in place, we get proper reporting of
ConflictErrors and today we started seeing one happening over and over
again which looked roughly as follows:
Traceback (most recent call last):
File lib/python/Products/Transience/Transience.py, line 844, in
new_or_existing
Hi All,
This is with whatever ZODB ships with Zope 2.8.5...
I have a Stepper (zopectl run on steroids) job that deals with lots of
big objects.
After processing each one, Stepper does a transaction.get().commit(). I
thought this was enough to keep the object cache at a sane size, however
Hi Tim,
Tim Peters wrote:
Do:
import ZODB
print ZODB.__version__
to find out.
Good to know, thanks...
I have a Stepper (zopectl run on steroids) job that deals with lots of
big objects.
Can you quantify this?
60,000 File objects of the order of 2Mb each.
It does not do
Tim Peters wrote:
[Chris Withers]
...
...oh well, if only the ZODB cache was RAM-usage-based rather than object
count based ;-)
Ya, that's not feasible. More plausible would be to base ZODB cache targets
on aggregate pickle size; ZODB _could_ know that, and then it would also be
strongly
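The pickle-size idea can be sketched as an LRU cache bounded by aggregate bytes rather than entry count. This is a toy model of the concept, not ZODB's pickle cache:

```python
from collections import OrderedDict

class SizeBoundedCache:
    """Toy LRU cache bounded by aggregate byte size, not entry count."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self._data = OrderedDict()          # key -> (value, size)

    def put(self, key, value, size):
        if key in self._data:
            self.total -= self._data.pop(key)[1]
        self._data[key] = (value, size)
        self.total += size
        # evict least-recently-used entries until we fit (always keep one)
        while self.total > self.max_bytes and len(self._data) > 1:
            _, (_, evicted_size) = self._data.popitem(last=False)
            self.total -= evicted_size

    def get(self, key):
        value, size = self._data[key]       # KeyError if evicted
        self._data.move_to_end(key)         # mark as recently used
        return value
```

Here `size` would be the pickle size ZODB already knows at load time; as Dieter notes later in the thread, that is a very rough estimate of actual RAM usage.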
Dieter Maurer wrote:
I plan to implement that -- waiting for the new style contributor
agreement from the Zope Foundation ;-)
Yay! Go Dieter!
cheers,
Chris
Chris McDonough wrote:
What front end do you use to do the request distribution?
Pound.
*barf*
ew..
Chris
Chris McDonough wrote:
Pound.
*barf*
ew..
Works great.
Except it doesn't balance load and can't do SSL correctly, right? ;-)
Chris
Antonio Beamud Montero wrote:
Where Status is a Persistent Class, with an OOBTree attribute called
_dict, and several methods wrapping this _dict, like add, remove, etc.
OOBTree's have conflict resolution code which will try and do the right
thing instead of raising a ConflictError...
You live and learn, thanks Tim and Jeremy! :-)
Chris
Tim Peters wrote:
[Chris Withers]
Is it just me or does zeoup.py write a transaction to the end of Data.fs
containing a MinPO object?
Make sure a ZEO server is running.
usage: zeoup.py [options]
The test will connect to a ZEO server
Antonio Beamud Montero wrote:
But I can't minimize the cache any further... What do I need to do to free
more memory space? It always grows...
This is a problem with python, I hear rumours it'll be fixed in Python 2.5.
cheers,
Chris
Hi All,
I was wondering whether anyone had implemented a FIFO persistent queue
class which has the following conflict resolution strategy:
two concurrent adds: adds both new items to the end of the queue in a
time-based order
one add and one remove happening concurrently: add the new item
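The resolution rules above can be expressed as a pure merge over three queue states, the shape a `_p_resolveConflict(old, committed, new)` implementation would take. A sketch only; a real resolver must also cope with duplicate items and precise time-based ordering:

```python
def resolve_queue(old, committed, new):
    # Merge two concurrent changes to a FIFO queue:
    # - items removed in either branch stay removed
    # - items added in either branch are appended (committed branch
    #   first, as it was the earlier commit)
    removed = [x for x in old if x not in committed or x not in new]
    added = ([x for x in committed if x not in old] +
             [x for x in new if x not in old])
    return [x for x in old if x not in removed] + added
```

For example, if one transaction removes item 1 and adds 4 while a concurrent one removes 3 and adds 5, both removals and both adds survive the merge.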
Tim Peters wrote:
It means that some object in the Connection was in a modified state
when an attempt to close the Connection was made. The current
transaction must be committed or aborted first -- ZODB can't guess
whether the pending changes should be committed or thrown away, so it
won't let
Dieter Maurer wrote:
Pascal Peregrina wrote at 2006-4-12 08:35 +0100:
I use FileStorage (via ZEO).
I have switched a big dictionary from PersistentMapping to BTree.
In the past, it was easy to compute added/deleted keys from states (cause
PersistentMapping state contains the whole
Philipp von Weitershausen wrote:
in memory. Dieter estimates 20% to 35% slowdown for the C algorithms
(whatever that means), Tim seems to think it won't have such a big
effect. I guess we'll only know after some benchmarks.
Can we please not make any definite decisions until this issue has
Dieter Maurer wrote:
Chris Withers wrote at 2006-4-18 08:34 +0100:
...
If having two isn't acceptable, then why do we have an I and O BTree's,
not to mention the special ones used for in-memory ZODB indexes? Surely
we should just have one BTree class?
Using I versus O BTrees makes a huge
Sidnei da Silva wrote:
Got the following exception while doing some work on a Zope instance
here. It's the first time I see such error.
* Module ZEO.ClientStorage, line 746, in load
* Module ZEO.ClientStorage, line 769, in loadEx
* Module ZEO.ServerStub, line 192, in loadEx
*
Hi All,
Anyone else seen this?
(864) CW: error in notifyConnected (('x', y))
Traceback (most recent call last):
File C:\Zope\2.9.2\lib\python\ZEO\zrpc\client.py, line 506, in
notify_client
self.client.notifyConnected(self.conn)
File C:\Zope\2.9.2\lib\python\ZEO\ClientStorage.py, line
Hi All,
I'm trying to fix this bug:
http://www.zope.org/Collectors/Zope/2062
And I've narrowed it down to the following lines in History.py:
if serial != self._p_serial:
self.manage_beforeHistoryCopy()
state=self._p_jar.oldstate(self, serial)
print
the problem with the two tids being the
same, and do you have any idea why this could be occurring in production
code on Zope 2.9.x?
cheers,
Chris
Chris Withers wrote:
Hi All,
Anyone else seen this?
(864) CW: error in notifyConnected (('x', y))
Traceback (most recent call last):
File C:\Zope
Not sure if Jeremy still on this list so CCing...
Tim Peters wrote:
No, I don't. The internal docs/comments are inconsistent on this
point. FileCache.settid() starts with
##
# Update our idea of the most recent tid.
Is this the most recently used or the most recently available?
Florent Guillaume wrote:
base._p_changed=0
Marks the object not changed, to allow ghostifying.
base._p_deactivate()
Ghostifies the object.
base.__setstate__(state)
Updates the object's dict directly. This really shouldn't be called on a
ghost object,
Tim Peters wrote:
it knew about. To support this, a persistent ZEO cache stores the
value of the largest tid the ZEO client knew about in the cache file.
Hmmm, didn't think I was using a persistent client cache here...
...well, there are .zec files in the var directory, so I guess I must
Florent Guillaume wrote:
base._p_activate() # make sure we're not a ghost
base.__setstate__(state) # change the state
base._p_changed = True # mark object as dirty
OK, this is the code I went with.
Well the C code is pretty clear, it does a PyDict_Clear before doing
Dieter Maurer wrote:
Chris Withers wrote at 2006-5-1 11:34 +0100:
...
...well, there are .zec files in the var directory, so I guess I must
be. What controls whether a persistent or temporary client cache is used?
Your Zope configuration file, of course ;-)
Yeah, I knew that bit
Jens Vagelpohl wrote:
Any recent zope has this in its default zope.conf:
Come on, you know Chris *never* takes the "attempt to look something up
myself" step ;)
Actually, I think that's a little unfair.
This is a particularly meaninglessly named key...
Chris
Hi Jim,
Jim Fulton wrote:
BTW, I strongly discourage use of Undo except in emergencies.
Sadly, except when undoing the last (non-undo) transactions in
a database can lead to inconsistency.
What sort of inconsistencies are you referring to?
Undo can be a very attractive feature although
Jim Fulton wrote:
Chris Withers wrote:
Jim Fulton wrote:
Even if you did track reads, how would you distinguish an unsafe
read as above from a normal read that shouldn't cause a conflict?
A write (or the undo of a write) would conflict with any reads in later
transactions.
Wouldn't
Hi All,
I mentioned this before:
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 151, in setLastTid
self.fc.settid(tid)
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 1060, in settid
raise ValueError("new last tid (%s) must be greater than
ValueError: new last tid
Tim Peters wrote:
Sure, but no way to guess from here. The only thing I can really
guess from the above is that your client is going to the server a lot
to get data.
Well, the client and the server are on the same machine, which isn't
load or memory bound, and doesn't seem to be i/o bound
Pascal Peregrina wrote:
This reminds me something I noticed when we migrated from 2.7 to 2.8
Well, it's 2.7 to 2.9 here, but yeah, it's the same big jump ;-)
Our issue was a very big PersistentMapping based tree of objects, which was
involved in a lot of RW and RO transactions from
Andreas Jung wrote:
BTrees perform best when keys' prefixes are randomly distributed.
So if your application generates keys like 'foo001', 'foo002',... you'll
get lots of conflicts. Same for consecutive integers in IOBTree.
Tempted to call bullshit on this, since there's code in the catalog
Florent Guillaume wrote:
Chris Withers wrote:
Florent Guillaume wrote:
I can comment, I have a big brain too: the code in the catalog uses
per-connection series of keys, so no conflicts arise.
Really? I thought they were per-thread... wasn't aware that each
thread was tied to one connection
Dieter Maurer wrote:
Postgres does use locks, lots of them and for different purposes.
Could ZODB use locks to gain a similar performance boost?
The only thing for which Postgres does not use locks is reading.
For this it uses MVCC (which we meanwhile adapted for the ZODB
to get rid of
Jean Jordaan wrote:
It looks like '_tid' fits the bill. It's not available when using ZEO though,
which took me a while to figure out.
Might be good to check the relevant APIs. A lot of this has got much
clearer in recent zodb releases.
If you end up using variables starting with
Tres Seaver wrote:
Zope Corporation's Zope Replication Services products operates along
those lines:
http://www.zope.com/products/zope_replication_services.html
Yeah, but you can only write to one of the storages, right?
Chris
Jim Fulton wrote:
I'll note that I have been able to provoke the error message, but, for
me at least, the result was non-fatal. The error occurs during cache
verification and causes the connection to fail. The connection thread
keeps running and tries again.
Okay, with the proviso that
Hi Jim,
Sorry for the delay, was on holiday in Canada...
Jim Fulton wrote:
On May 31, 2006, at 4:03 AM, Chris Withers wrote:
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 151, in setLastTid
self.fc.settid(tid)
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 1060, in settid
Jim Fulton wrote:
Jim Fulton wrote:
On May 31, 2006, at 4:03 AM, Chris Withers wrote:
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 151, in setLastTid
self.fc.settid(tid)
File C:\Zope\2.9.2\lib\python\ZEO\cache.py, line 1060, in settid
raise ValueError("new last tid (%s) must
Andrew McLean wrote:
Any advice gratefully received.
I'd suggested moving to a ZODB version new enough to have MVCC support,
which will likely make your problem go away...
Chris
Patrick Gerken wrote:
system with ZEO for an ERP5 deployment. In my case I don't need to
care for data replication, all is stored on a SAN considered HA by the
customer already.
You run your live Data.fs off a SAN?
That usually makes for interesting performance problems in the best case!
So
Patrick Gerken wrote:
You run your live Data.fs off a SAN?
That usually makes for interesting performance problems in the best case!
Well, yes, that is/was the idea when suddenly HA requests popped up.
You might be right,
Painful experience has taught me that I am ;-)
need to access many
...rather than just incrementing integers?
I'm asking 'cos I've just started having time-stamp reduction errors
on a production system where a contingent system is having a .fs file
that's been re-constituted from repozo backups tested with fstest.py...
cheers,
Chris
Hi All,
One of my customers has a large (21GB) production zodb which they back
up onto a contingency server using repozo and rsync. The process is
roughly as follows:
1. pack the production database to 3 days once a day.
2. create a full backup with repozo and rsync this to the contingency
Dieter Maurer wrote:
You should be happy about the much more explicit information.
It may allow you to analyse your problem better.
This question has nothing to do with that problem, it just came up as a
result of once again being reminded that we use timestamps as
transaction ids.
For
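Since the question keeps coming back to transaction ids being timestamps: a ZODB tid is an 8-byte packed time. The sketch below follows my reading of the TimeStamp layout (minutes since 1900-01 in the high four bytes, counting every month as 31 days, and fraction-of-a-minute in the low four) -- treat the exact packing as an assumption, not authoritative:

```python
import struct
from datetime import datetime

def tid_for(dt):
    # high 4 bytes: minutes since 1900-01-01 (months padded to 31 days)
    v = ((dt.year - 1900) * 12 + dt.month - 1) * 31 + dt.day - 1
    v = (v * 24 + dt.hour) * 60 + dt.minute
    # low 4 bytes: seconds within the minute as a fraction of 2**32
    frac = int((dt.second + dt.microsecond / 1e6) / 60.0 * (1 << 32))
    return struct.pack(">II", v, frac)
```

The useful property is that tids compare bytewise in the same order as the times they encode, which is exactly what makes "new last tid must be greater than the previous one" checks possible.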
Dieter Maurer wrote:
The pickle size is a *VERY* rough estimation (probably wrong
by a factor of 5 to 15)
But, as you point out, much better than a hard coded 1 ;-)
We probably would get a much better estimation using
PySizer but probably at a significantly higher cost.
Right, I guess
Jim Fulton wrote:
- I wonder if an argument could be made that we shouldn't
implicitly deactivate an object that has been accessed in
a transaction while the transaction is still running.
Would this prevent ZODB from ever promising not to use more than a
certain amount of memory?
The
Jim Fulton wrote:
Chris Withers wrote:
Jim Fulton wrote:
- I wonder if an argument could be made that we shouldn't
implicitly deactivate an object that has been accessed in
a transaction while the transaction is still running.
Would this prevent ZODB from ever promising not to use more
Jim Fulton wrote:
My intuition is still that sharing objects between threads will
introduce a host of subtle bugs.
Yes, I'll +lots to this.
I'm much more interested in seeing a memory-limited cache of some
description and sad to see this thread derail into sharing data between
threads, which
+1 from me too, this feels like a really good proposal :-)
Chris
Jim Fulton wrote:
+1
Lennart Regebro wrote:
On 10/11/06, Roché Compaan [EMAIL PROTECTED] wrote:
http://mail.zope.org/pipermail/zodb-dev/2004-July/007682.html
I read this thread, and it seems to me that the ultimate solution
David Binger wrote:
This is an interesting point, and it makes me wonder if
there would be interest in having the fsync behavior vary on
a per-transaction basis instead of a per-storage basis.
Maybe the client submitting transactions that are just
Session-like changes could include a message to the
import transaction
Something simple:
s = transaction.savepoint()
s.rollback()
Something less so:
s = transaction.savepoint()
s1 = transaction.savepoint()
s.rollback()
...okay, so we can nest savepoints, yay!
s1.rollback()
Traceback (most recent call last):
File stdin, line 1, in ?
I'm hoping this is just a simple ordering bug...
Does anyone have any objections to the attached patch?
Chris
Index: _transaction.py
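For what it's worth, the traceback above matches the semantics where rolling back to an earlier savepoint discards the later ones, which would make s1 invalid after s.rollback(). A toy model of that rule (not the transaction package itself; whether it is intended behaviour or the ordering bug Chris suspects is the open question):

```python
class MiniSavepoint:
    def __init__(self, txn, index):
        self.txn, self.index, self.valid = txn, index, True

    def rollback(self):
        if not self.valid:
            raise ValueError("invalid savepoint: an earlier rollback discarded it")
        # rolling back to this savepoint discards every later savepoint
        for sp in self.txn._savepoints[self.index + 1:]:
            sp.valid = False
        del self.txn._savepoints[self.index + 1:]

class MiniTransaction:
    def __init__(self):
        self._savepoints = []

    def savepoint(self):
        sp = MiniSavepoint(self, len(self._savepoints))
        self._savepoints.append(sp)
        return sp
```

Under this model s.rollback() succeeds, s1.rollback() raises, and s itself can still be rolled back again afterwards.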
Chris Bainbridge wrote:
Hi Alan,
- You can't just catch ConflictError and pass
I do conn.sync() at the top of the loop which is supposed to abort the
connection and re-sync the objects with the zeo server.
Urm, sounds like you're looking for transaction.abort().
Also, be aware of the
Jim Fulton wrote:
You have. I spent a fair bit of time on it for the 2.10/3.3 releases.
This was mainly to
chase a problem on the Mac but I ended up cleaning up some internal
messiness
quite a bit.
OK.
Of course, there's also the blob work.
Not sure how this relates to persistent zeo
Simon Burton wrote:
btree.minKey(t) is documented* to return the smallest key at least
as big as t. It seems that if there is no such element it
returns the maximum key.
*in the programming guide, v3.6.0
Hmm, can you write a failing unit test that demonstrates this?
The BTrees package does
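The documented contract can be pinned down with a bisect-based model over a sorted key list. A real failing test would exercise BTrees itself, but this states the expected semantics: smallest key >= t, and ValueError when no such key exists (rather than silently returning the maximum key, as Simon reports):

```python
import bisect

def min_key(sorted_keys, t):
    # reference semantics for minKey(t): smallest key at least as big as t
    i = bisect.bisect_left(sorted_keys, t)
    if i == len(sorted_keys):
        raise ValueError("no key satisfies the minimum key requirement")
    return sorted_keys[i]
```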
Adam Groszer wrote:
Hello,
Just ran into the usual "Cannot pickle <type
'zope.security._proxy._Proxy'> objects" exception.
What does your patch give you that this error message doesn't?
+try:
+self._p.dump(state)
+except Exception, msg:
it's logger msg, it's the
(trying again to send to the right list)
Hi All,
One of the users on one of my projects saw this error under high load:
Module Products.QueueCatalog.QueueCatalog, line 458, in reindexObject
Module Products.QueueCatalog.QueueCatalog, line 341, in catalog_object
Module
Dieter Maurer wrote:
Yes, it looks like an error:
Apparently, assert end is not None failed.
Apparently storage.loadBefore returned a wrong value.
Unfortunately, neither of these means anything to me ;-)
I guess I should file a bug report?
Why collector?
cheers,
Chris
Dieter wrote:
Unfortunately, neither of these means anything to me ;-)
That is because you did not look at the code :-)
Much as I wish I had time to read and learn the whole zodb code base, I
don't. It wasn't clear what that code did and what those assertions
really meant...
Jim wrote:
Jeremy Hylton wrote:
transaction end committed. If end is None, it implies that the
revision returned by loadBefore() is the current revision. There is
an assert here, because the _setstate_noncurrent() is only called if
the object is in the invalidated set, which implies that there is a
Dieter Maurer wrote:
Chris Withers wrote at 2007-3-16 08:45 +:
...
Is there any way an object could be invalidated without there being a
non-current revision to read?
Sure (through a call to ZODB.DB.DB.invalidate), although usually
it is done only after the object changed.
OK. I'm
Hi All,
Is there any existing method or script for rolling back a ZODB
(filestorage-backed in this case, if that makes it easier) to a certain
point in time?
eg: Make this Data.fs as it was at 9am this morning
If not, I'll be writing one, where should I add it to when I'm done?
cheers,
Jim Fulton wrote:
On Mar 21, 2007, at 6:41 AM, Chris Withers wrote:
Hi All,
Is there any existing method or script for rolling back a ZODB
(filestorage-backed in this case,
Back end to what?
I meant as opposed to BDBStorage or OracleStorage ;-)
I don't know whether to attempt
Laurence Rowe wrote:
- Should this create a new FileStorage? Or should it modify the
existing FileStorage in place?
Probably create a new one (analogous to a pack). Seems safer than
truncating to me.
Nah, this is working on a copy of production data, not the real thing.
Disk space is an
Dennis Allison wrote:
The ZODB is an append-only file system so truncating works just fine.
Yup, but it's finding the location to truncate back to that's the
interesting bit.
And that I'm lazy and really want to be able to do:
python rollback.py 2007-03-21 09:00
You can use any
of the
Benji York wrote:
Nah, the changes need to be permanent, tested, and then rolled back...
I can't reconcile permanent
ie: committed to disk, not DemoStorage...
and rolled back. :)
undo the changes committed to disk, to a point in time, once the results
have been tested.
If the app
Adam Groszer wrote:
Somehow relevant to the subject, I just found an article on Wichert's
site:
http://www.wiggy.net/ , "Using a separate Data.fs for the catalog"
The win here is actually partitioning the object cache...
Similar wins could be achieved without making backup/pack/etc more
Alan Runyan wrote:
Do you have anything that is committing very large transactions?
No. In fact; these clients could be running in read only mode. As far
as I'm concerned.
How does data get into the ZEO storage then?
cheers,
Chris
Alan Runyan wrote:
We have 10 ZEO clients that are for public consumption READ ONLY.
We have a separate ZEO client that is writing that is on a separate box.
I'd put money on the client doing the writing causing problems.
That or client side cache thrash caused by zcatalog or similar ;-)
The
Alan Runyan wrote:
data = self.socket.recv(buffer_size)
error: (113, 'No route to host')
That *is* very odd, anything other than pound being used for load
balancing or traffic shaping?
This has to be a major problem maker in the system. Pound is simply
round robin connections to pool of
Hi All,
We have a big(ish) zodb, which is about 29GB in size.
Thanks to the laughable difficulty of getting larger disks in big
corporates, we've been looking into what's taking up that 29GB and were
a bit surprised by the results.
Using space.py from the ZODBTools in Zope 2.9.4, it turns
Gary Poster wrote:
you can call cache minimize after a threshold.. maybe every 100
iterations.
sounds good, assuming you know you are not writing.
I've used this trick loads, especially for huge datastructure migrations
where writing is happening. I wonder why I haven't bumped into
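The every-N-iterations trick can be factored out into a tiny helper; `flush` marks the points at which you would commit and call cacheMinimize() on the connection (a sketch, the names are mine):

```python
def with_checkpoints(items, every=100):
    # Yield (item, flush) pairs; flush is True on every `every`-th item,
    # the point at which to commit and shrink the ZODB object cache.
    for i, item in enumerate(items, 1):
        yield item, i % every == 0
```

Used in a migration loop it looks like: `for obj, flush in with_checkpoints(big_query()): migrate(obj); if flush: transaction.commit(); conn.cacheMinimize()` -- with `migrate`, `big_query` and `conn` standing in for whatever the job actually does.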