Hi
Safari 4.0.2 fails to send an Authorization header to the server when
the user is authenticated via basic authentication. This results in all
sorts of permission problems.
I realise this is not a Zope problem but perhaps I can hack some
temporary solution server-side to convince Safari to
Daniel Dekany wrote:
How to create a template context (here inside ZPT) that is not an
object from the ZODB, just a temporary object? This is what I tried:
from Acquisition import Implicit
from Products.PageTemplates.PageTemplateFile import PageTemplateFile

class AdhocContext(Implicit):
    pt = PageTemplateFile('whatever/path', globals())
    ...
MyZopeProduct:
def
On Mon, Apr 27, 2009 at 12:40 PM, Peter Bengtsson pete...@gmail.com wrote:
What have you done to investigate memory leaks?
What external connectors are you using, like MySQL or LDAP?
It is probably not a memory leak. The graph is what I'd expect in a
garbage collection scenario (i.e. Python).
I've followed this thread with interest since I have a Zope site with
tens of millions of entries in BTrees. It scales well, but it requires
many tricks to make it work.
Roche Compaan wrote these great pieces on ZODB, Data.fs size and
scalability at
My apologies. I sent this to the wrong list.
H
___
Zope-Dev maillist - Zope-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zope-dev
** No cross posts or HTML encoding! **
(Related lists -
http://mail.zope.org/mailman/listinfo/zope-announce
Hi all
I run my script foo.zctl with zopectl run foo.zctl param1 param2.
This script operates on a large ZODB and catches ConflictErrors
accordingly. It iterates over a set, updates data and commits the
transaction every 100 iterations. But I've noticed two things:
1. ConflictErrors are never
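The batching pattern described (update the data, commit every 100
iterations) can be sketched in plain Python. Here `process` and
`commit` are illustrative stand-ins for the real per-item ZODB work
and transaction.commit(); this is a sketch of the pattern, not the
original script:

```python
def commit_in_batches(items, process, commit, batch_size=100):
    """Apply process() to each item, calling commit() after every
    batch_size items and once more for the final partial batch."""
    pending = 0
    for item in items:
        process(item)
        pending += 1
        if pending == batch_size:
            commit()
            pending = 0
    if pending:
        commit()  # flush the last, possibly short, batch
```

Committing in batches keeps any single transaction small, which
shortens the window in which a ConflictError can occur.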
Hi Tres!
Thanks for the tips. I managed to get my script running in batches and
with manual intervention. When in future I encounter the same problems
I'll report back to this thread.
Hedley
___
Zope maillist - Zope@zope.org
Hi Tim
I'm more involved with Plone but can provide a slightly more
digestible answer :)
Chris mentioned unit tests. You do not have to write a new unit test.
What is required is to look at the existing tests and identify one
that is relevant to your problematic method. This test should be
The usual Plone catalogs (portal_catalog, uid_catalog,
reference_catalog and membrane_tool) all run above 90% hit rate if the
server is up to it. portal_catalog is invalidated the most so it
fluctuates the most.
If the server is severely underpowered then catalogcache is much less
effective.
Have you measured the time needed for some standard ZCatalog queries
on a Plone site, including the communication overhead with memcached?
Generally speaking, I think the ZCatalog is fast. Queries using a
full-text index are known to be more expensive, or if you have to deal
with large
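One way to answer the measurement question is a small timing harness.
This is a minimal sketch; `run_query` stands in for an actual catalog
call (e.g. a portal_catalog query), so the measured time naturally
includes any memcached round-trip the call performs:

```python
import time

def time_query(run_query, repeats=10):
    """Return average wall-clock seconds per call for a query
    callable, including any network round-trips it performs."""
    start = time.perf_counter()
    for _ in range(repeats):
        run_query()
    return (time.perf_counter() - start) / repeats
```

Running it once against the plain catalog and once against the cached
catalog would show whether the memcached overhead is worth it for a
given query.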
I'd love it if this weren't a monkey patch.
So would I, but I couldn't find another way in this case.
Also, there is nothing that makes this integrate correctly with
transactions. Your cache will happily deliver never-committed data and
also it will not isolate transactions from each
In addition, you need to include a serial in your cache keys to avoid
dirty reads.
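Including a serial in the cache key could look like the following
sketch (all names here are illustrative; the serial could be, e.g.,
the catalog object's ZODB _p_serial, so any write to the catalog
changes the key and stale entries are simply never looked up):

```python
import hashlib

def cache_key(catalog_id, query, serial):
    """Build a cache key that embeds the catalog's serial, so a
    cached result is automatically bypassed once the catalog
    changes, avoiding dirty reads."""
    raw = "%s|%r|%r" % (catalog_id, sorted(query.items()), serial)
    return hashlib.md5(raw.encode("utf-8")).hexdigest()
```

With this scheme invalidation is implicit: old entries are never
deleted, they just stop being addressed and eventually expire.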
The cache invalidation code actively removes items from the cache. Am
I understanding you correctly?
H
On Sat, Oct 25, 2008 at 2:57 PM, Andreas Jung [EMAIL PROTECTED] wrote:
On 25.10.2008 14:53 Uhr, Hedley Roos wrote:
I'd love it if this weren't a monkey patch.
So would I, but I couldn't find another way in this case.
Also, there is nothing that makes this integrate correctly
If you catalog an object, then search for it and then abort the
transaction, your cache will have data in it that isn't committed.
Kind of like how I came to the same conclusion in parallel to you and
stuffed up this thread :)
Additionally when another transaction is already running in parallel, it
will see cache inserts from other transactions.
A possible solution is to keep a module level cache which can be
committed to the memcache on transaction boundaries. That way I'll
incur no performance penalty.
H
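The module-level cache committed at transaction boundaries could be
sketched as follows. This is a stand-alone illustration, not the
collective.catalogcache code: `shared` stands in for the memcached
client, and in Zope the commit/abort methods would be wired to the
transaction machinery (e.g. via an after-commit hook on the current
transaction):

```python
class TransactionBoundCache:
    """Buffer cache writes per transaction; publish them to the
    shared cache only on commit and discard them on abort, so
    other transactions never see uncommitted inserts."""

    def __init__(self, shared):
        self.shared = shared   # stands in for the memcached client
        self.pending = {}      # local, per-transaction write buffer

    def set(self, key, value):
        self.pending[key] = value

    def get(self, key, default=None):
        if key in self.pending:    # our own uncommitted writes win
            return self.pending[key]
        return self.shared.get(key, default)

    def commit(self):
        # called at the transaction boundary: publish everything
        self.shared.update(self.pending)
        self.pending.clear()

    def abort(self):
        # transaction aborted: nothing reaches the shared cache
        self.pending.clear()
```

This gives the isolation discussed above and, since the shared cache
is touched only once per transaction, avoids a per-insert performance
penalty.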
Hi all
The past few weeks I've been optimizing a busy Plone site and so
collective.catalogcache was born.
It uses memcached as a distributed ZCatalog cache. It is currently in
production and seems to be holding just fine. The site went from being
unusable to serving quite a bit of data.
I'll