Dieter Maurer wrote:
3. If repozo is not to blame, what could be?
One possibility would be a bad call.
A bad call?
Chris
--
Simplistix - Content Management, Zope & Python Consulting
- http://www.simplistix.co.uk
___
For more informatio
Dieter Maurer wrote:
You should be happy about the much more explicit information.
It may allow you to analyse your problem better.
This question has nothing to do with that problem, it just came up as a
result of once again being reminded that we use timestamps as
transaction ids.
For exa
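Since transaction ids are timestamps, a tid can be decoded back into a wall-clock time. Here's a sketch of the conversion, following the TimeStamp layout as I understand it (minutes-since-1900 in the high four bytes, the second within the minute scaled across the low four bytes) — the function names are mine, not ZODB's:

```python
import struct
from datetime import datetime, timedelta

# 60 seconds are scaled onto the full 32-bit range in the low word.
SCONV = (1 << 32) / 60.0

def datetime_to_tid(dt):
    """Pack a datetime into an 8-byte ZODB-style transaction id."""
    minutes = ((((dt.year - 1900) * 12 + dt.month - 1) * 31
                + dt.day - 1) * 24 + dt.hour) * 60 + dt.minute
    seconds = dt.second + dt.microsecond / 1e6
    return struct.pack(">II", minutes, int(seconds * SCONV))

def tid_to_datetime(tid):
    """Unpack an 8-byte transaction id back into a datetime."""
    minutes, frac = struct.unpack(">II", tid)
    minute = minutes % 60
    hours = minutes // 60
    hour = hours % 24
    days = hours // 24
    day = days % 31 + 1
    months = days // 31
    month = months % 12 + 1
    year = months // 12 + 1900
    return datetime(year, month, day, hour, minute) + timedelta(
        seconds=frac / SCONV)
```

Handy for eyeballing when a given transaction was committed without firing up a debugger.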
Dieter Maurer wrote:
A new proposal:
http://www.zope.org/Wikis/ZODB/MemorySizeLimitedCache
It outlines how to implement a ZODB cache limited not by the
number of contained objects but by their estimated memory size.
Feedback welcome -- either here or in the Wiki.
I think any work in this
Dieter Maurer wrote:
The pickle size is a *VERY* rough estimation (probably wrong
by a factor of 5 to 15)
But, as you point out, much better than a hard coded "1" ;-)
We probably would get a much better estimation using
"PySizer" but probably at a significantly higher cost.
Right, I guess
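For what it's worth, a pickle-size-limited cache along the lines of the proposal fits in a few lines. This is just a toy illustration using len(pickle.dumps(...)) as the rough estimate discussed above — not the proposed implementation, and the class name is mine:

```python
import pickle
from collections import OrderedDict

class SizeLimitedCache:
    """LRU cache bounded by estimated memory size, not object count.

    Uses len(pickle.dumps(obj)) as the (very rough) size estimate."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self._data = OrderedDict()  # key -> (obj, estimated_size)

    def __setitem__(self, key, obj):
        size = len(pickle.dumps(obj))
        if key in self._data:
            self.total -= self._data.pop(key)[1]
        self._data[key] = (obj, size)
        self.total += size
        # Evict least-recently-used entries until back under the limit,
        # always keeping at least the entry just added.
        while self.total > self.max_bytes and len(self._data) > 1:
            _, (_, evicted) = self._data.popitem(last=False)
            self.total -= evicted

    def __getitem__(self, key):
        obj, size = self._data[key]
        self._data.move_to_end(key)  # mark as recently used
        return obj

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)
```

The pickle-size estimate is cheap but, as noted above, can be off by a large factor; swapping in a better estimator only means changing one line.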
Jim Fulton wrote:
- I wonder if an argument could be made that we shouldn't
implicitly deactivate an object that has been accessed in a
transaction while the transaction is still running.
Would this prevent ZODB from ever promising not to use more than a
certain amount of memory?
The b
Jim Fulton wrote:
Chris Withers wrote:
Jim Fulton wrote:
- I wonder if an argument could be made that we shouldn't
implicitly deactivate an object that has been accessed in a
transaction while the transaction is still running.
Would this prevent ZODB from ever promising not to use
Jim Fulton wrote:
My intuition is still that sharing objects between threads will
introduce a host of subtle bugs.
Yes, I'll +lots to this.
I'm much more interested in seeing a memory-limited cache of some
description and sad to see this thread derail into sharing data between
threads, which
Roché Compaan wrote:
I'm tempted to deploy ZODB without fsync on some production FileStorage
instances. Will I regret it?
Well, let us know when you find out ;-)
Chris
+1 from me too, this feels like a really good proposal :-)
Chris
Jim Fulton wrote:
+1
Lennart Regebro wrote:
On 10/11/06, Roché Compaan <[EMAIL PROTECTED]> wrote:
http://mail.zope.org/pipermail/zodb-dev/2004-July/007682.html
I read this thread, and it seems to me that the ultimate solution
David Binger wrote:
This is an interesting point, and it makes me wonder if
there would be interest having the fsync behavior vary on
a per-transaction basis instead of a per-storage basis.
Maybe the client submitting transactions that are just
Session-like changes could include a message to the
import transaction
Something simple:
s = transaction.savepoint()
s.rollback()
Something less so:
s = transaction.savepoint()
s1 = transaction.savepoint()
s.rollback()
...okay, so we can nest savepoints, yay!
>>> s1.rollback()
Traceback (most recent call last):
File "", line 1, in ?
I'm hoping this is just a simple ordering bug...
Does anyone have any objections to the attached patch?
Chris
Index: _transaction.py
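For anyone following along, the session above matches what I'd expect if rolling back an earlier savepoint invalidates all later ones. Here's a toy model of those semantics — not the real transaction package, all class names are mine — which reproduces the observed failure:

```python
class InvalidSavepointRollbackError(Exception):
    """Raised when rolling back a savepoint that is no longer valid."""

class Savepoint:
    def __init__(self, txn, state):
        self._txn = txn
        self._state = state
        self.valid = True

    def rollback(self):
        if not self.valid:
            raise InvalidSavepointRollbackError()
        self._txn.state = dict(self._state)
        # Rolling back discards every savepoint taken *after* this one.
        i = self._txn.savepoints.index(self)
        for later in self._txn.savepoints[i + 1:]:
            later.valid = False
        del self._txn.savepoints[i + 1:]

class Transaction:
    def __init__(self):
        self.state = {}
        self.savepoints = []

    def savepoint(self):
        sp = Savepoint(self, dict(self.state))
        self.savepoints.append(sp)
        return sp
```

Under these semantics s1.rollback() after s.rollback() *should* fail — the question is whether the real code fails for that reason or because of an ordering bug.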
Chris Bainbridge wrote:
Hi Alan,
- You can't just catch ConflictError and pass
I do conn.sync() at the top of the loop which is supposed to abort the
connection and re-sync the objects with the zeo server.
Urm, sounds like you're looking for transaction.abort().
Also, be aware of the weird
Christian Theune wrote:
I've tried to analyse the situation for this a bit. Some annotations are
in http://www.zope.org/Collectors/Zope/2151 and maybe this can trigger
more input.
Thanks for looking into this, any help is much appreciated!
Note: I was a bad boy and used ZODB trunk for the an
Jim Fulton wrote:
You have. I spent a fair bit of time on it for the 2.10/3.3 releases.
This was mainly to chase a problem on the Mac, but I ended up cleaning
up some internal messiness quite a bit.
OK.
Of course, there's also the blob work.
Not sure how this relates to persistent zeo cl
Simon Burton wrote:
btree.minKey(t) is documented* to return the smallest key at least
as big as t. It seems that if there is no such element it
returns the maximum key.
*in the programming guide, v3.6.0
Hmm, can you write a failing unit test that demonstrates this?
The BTrees package does
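The documented contract can be expressed as a small stand-in over a sorted list, which makes the failing test easy to write. A sketch, with min_key standing in for BTree.minKey (the helper name is mine):

```python
import bisect

def min_key(keys, t):
    """Smallest key >= t from a sorted sequence; ValueError if none.

    Mirrors what the programming guide documents for BTree.minKey(t):
    if no key is at least as big as t, it should raise rather than
    silently return the maximum key."""
    i = bisect.bisect_left(keys, t)
    if i == len(keys):
        raise ValueError("no key satisfies the conditions")
    return keys[i]
```

A test in the reported style would then assert that minKey(t), with t larger than every key in the tree, raises rather than returning the maximum key.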
Adam Groszer wrote:
Hello,
Just run into a usual "Cannot pickle objects" exception.
What does your patch give you that this error message doesn't?
+try:
+self._p.dump(state)
+except Exception, msg:
In the logged msg, it's the exception object being caught.
+
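A sketch of what such a patch could usefully log instead — modernised Python, function name mine — reporting *which* attribute is unpicklable rather than just that pickling failed:

```python
import logging
import pickle

logger = logging.getLogger(__name__)

def dump_with_diagnostics(state):
    """Try to pickle an object's state dict; on failure, log which
    attribute is the unpicklable one, then re-raise."""
    try:
        return pickle.dumps(state)
    except Exception as exc:
        for name, value in state.items():
            try:
                pickle.dumps(value)
            except Exception:
                logger.error("unpicklable attribute %r = %r (%s)",
                             name, value, exc)
        raise
```

That narrows a bare "Cannot pickle objects" down to the offending attribute, which is usually the information you actually want.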
(trying again to send to the right list)
Hi All,
One of the users on one of my projects saw this error under high load:
Module Products.QueueCatalog.QueueCatalog, line 458, in reindexObject
Module Products.QueueCatalog.QueueCatalog, line 341, in catalog_object
Module Products.QueueCatalog.Queue
Dieter Maurer wrote:
Yes, it looks like an error:
Apparently, "assert end is not None" failed.
Apparently "storage.loadBefore" returned a wrong value.
Unfortunately, neither of these means anything to me ;-)
I guess I should file a bug report?
Why collector?
cheers,
Chris
Dieter wrote:
Unfortunately, neither of these means anything to me ;-)
That is because you did not look at the code :-)
Much as I wish I had time to read and learn the whole zodb code base, I
don't. It wasn't clear what that code did and what those assertions
really meant...
Jim wrote:
Jeremy Hylton wrote:
transaction end committed. If end is None, it implies that the
revision returned by loadBefore() is the current revision. There is
an assert here, because the _setstate_noncurrent() is only called if
the object is in the invalidated set, which implies that there is a
non-cu
Dieter Maurer wrote:
Chris Withers wrote at 2007-3-16 08:45 +:
...
Is there any way an object could be invalidated without there being a
non-current revision to read?
Sure (through a call to "ZODB.DB.DB.invalidate"), although usually
it is done only after the object changed.
Hi All,
Is there any existing method or script for rolling back a ZODB
(filestorage-backed in this case, if that makes it easier) to a certain
point in time?
eg: Make this Data.fs as it was at 9am this morning
If not, I'll be writing one, where should I add it to when I'm done?
cheers,
Chr
Jim Fulton wrote:
On Mar 21, 2007, at 6:41 AM, Chris Withers wrote:
Hi All,
Is there any existing method or script for rolling back a ZODB
(filestorage-backed in this case,
Back end to what?
I meant as opposed to BDBStorage or OracleStorage ;-)
I don't know whether to attempt th
Rodrigo Senra wrote:
I guess what Chris meant was:
Given a pivot point in time (date and time),
*all* objects in the database (the whole Data.fs)
will suffer UNDO over transactions whose timestamp is
greater (more recent) than the given pivot timestamp.
Sounds about right...
Never
Tres Seaver wrote:
1. Open the existing file in time-travel mode (readonly, so that it
loads no transactions later than the given point). The 'stop'
parameter to the FileStorage initializer is the one which
triggers time-travel.
2. Open a new, writable filestorage.
3. Run
Jim Fulton wrote:
- Should this create a new FileStorage?
No.
Or should it modify the existing
FileStorage in place?
Yes.
- Should this work while the FileStorage is being used?
No.
- Should this behave transactionally?
Not sure what you mean... in terms of being able to "undo the u
Laurence Rowe wrote:
- Should this create a new FileStorage? Or should it modify the
existing FileStorage in place?
Probably create a new one (analogous to a pack). Seems safer than
truncating to me.
Nah, this is working on a copy of production data, not the real thing.
Disk space is an iss
Jens Vagelpohl wrote:
On 21 Mar 2007, at 17:29, Chris Withers wrote:
I'm hoping for some means to just lop transactions off the end of the
Data.fs until I get to the point in time I want...
Why don't you just record the exact size of your Data.fs before starting
your migratio
Dennis Allison wrote:
The ZODB is an append only file system so truncating works just fine.
Yup, but it's finding the location to truncate back to that's the
interesting bit.
And that I'm lazy and really want to be able to do:
python rollback.py 2007-03-21 09:00
You can use any
of the sta
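The scan such a rollback.py would need is straightforward if you can walk transaction records in order. Here's a sketch over a *toy* record layout — NOT the real FileStorage format, just 8-byte tid plus 4-byte length plus payload — to show the shape of finding the truncation offset:

```python
import struct

def truncate_offset(path, cutoff_tid):
    """Scan a toy append-only log of (8-byte tid, 4-byte length,
    payload) records and return the file offset of the first record
    whose tid is newer than cutoff_tid -- i.e. where a rollback
    would truncate.

    This is NOT the real FileStorage record layout, only the shape
    of the scan; a real script would parse FileStorage transaction
    headers instead."""
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            header = f.read(12)
            if len(header) < 12:
                return offset  # end of file: nothing newer to drop
            tid, length = struct.unpack(">QI", header)
            if tid > cutoff_tid:
                return offset
            f.seek(length, 1)  # skip this record's payload
```

Since tids are timestamps, the cutoff for "9am this morning" is just the tid corresponding to that time; truncate the file at the returned offset and the later transactions are gone.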
Jeremy Hylton wrote:
The MVCC implementation might not be prepared to cope with an explicit
call to invalidate.
oops?
You could also achieve this by calling some method on
the object, right? _p_invalidate()? I don't remember the details.
Well, we're not doing any explicit invalidation like tha
Benji York wrote:
Chris Withers wrote:
I'm writing/running a bunch of migration processes on a 30GB Data.fs,
I'm hoping it's easier to roll back the Data.fs to before I started my
migration run than it is to grab a new copy on the ZODB from somewhere...
Demostorage, perh
Benji York wrote:
Nah, the changes need to be permanent, tested, and then rolled back...
I can't reconcile "permanent"
ie: committed to disk, not DemoStorage...
and "rolled back". :)
undo the changes committed to disk, to a point in time, once the results
have been tested.
If the app n
Adam Groszer wrote:
Somehow relevant to the subject I just found an article on Wichert's
site:
http://www.wiggy.net/ , "Using a separate Data.fs for the catalog"
The win here is actually partitioning the object cache...
Similar wins could be achieved without making backup/pack/etc more
compl
Alan Runyan wrote:
Do you have anything that is committing very large transactions?
No. In fact, these clients could be running in read-only mode, as far
as I'm concerned.
How does data get into the ZEO storage then?
cheers,
Chris
Alan Runyan wrote:
We have 10 ZEO clients that are for public consumption "READ ONLY".
We have a separate ZEO client that is writing that is on a separate box.
I'd put money on the client doing the writing causing problems.
That or client side cache thrash caused by zcatalog or similar ;-)
Th
Alan Runyan wrote:
>data = self.socket.recv(buffer_size)
> error: (113, 'No route to host')
That *is* very odd, anything other than pound being used for load
balancing or traffic shaping?
This has to be a major problem maker in the system. Pound is simply
round robin connections to pool o
Hi All,
We have a big(ish) zodb, which is about 29GB in size.
Thanks to the laughable difficulty of getting larger disks in big
corporates, we've been looking into what's taking up that 29GB and were
a bit surprised by the results.
Using space.py from the ZODBTools in Zope 2.9.4, it turns out
Dieter Maurer wrote:
In our private Zope version, I have still a note like this:
# DM 2005-08-22: always call '_flush_invalidations' as it does
# more than cache handling only
self._flush_invalidations()
if self._reset_counter != global_reset_counter:
Gary Poster wrote:
you can call cache minimize after a threshold.. maybe every 100
iterations.
sounds good, assuming you know you are not writing.
I've used this trick loads, especially for huge datastructure migrations
where writing is happening. I wonder why I haven't bumped into problems?
Tommy Li wrote:
I don't understand how I can use ZODB to do this, however. From what I
can gather from reading the manual, if I simply stored variables
referring to the parent and children in every TreeNode, I'd end up
storing the whole tree of TreeNodes's every time I wanted to store that
one
Tommy Li wrote:
Great. So when I store the parent node of a tree structure into zodb,
all the children get recursively stored?
Yes.
Or do I need to manually store each of these Persistent subobjects?
No. But having your node class inherit from Persistent will mean that
altering a node does
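The point being that persistent objects are stored by reference: a parent holds its children's oids, not their pickles, so changing one node re-writes one record, not the whole tree. A toy in-memory store illustrating the mechanism — TreeNode here is a plain class standing in for a persistent.Persistent subclass, and Store for the database:

```python
import pickle

class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []
        self.oid = None

class Store:
    """Toy object store illustrating persistence by reference:
    each node is saved under its own oid, and a parent's record
    holds child *oids*, not child pickles."""

    def __init__(self):
        self.records = {}  # oid -> pickled state
        self._next_oid = 0

    def add(self, node):
        node.oid = self._next_oid
        self._next_oid += 1
        self.save(node)
        return node.oid

    def save(self, node):
        # Only this node's own state is pickled; children appear
        # as oid references, so they are stored independently.
        state = {"value": node.value,
                 "children": [c.oid for c in node.children]}
        self.records[node.oid] = pickle.dumps(state)
```

Altering one node and re-saving it touches exactly one record, which is the behaviour Persistent subclasses give you for free in ZODB.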
Hi All,
Where can I find the zodb tools (fsrefs.py, fstest.py, etc) as
appropriate for the zodb that ships with Zope 2.9.6?
iirc, 2.9.6 was one of the releases that didn't ship with ZODBTools due
to a bug, so I'm looking for places to hunt!
cheers,
Chris
Okay, so I found a fix for my problem, thought I'd share for others.
So, you have an error like this:
POSKeyError: 0x3e0a
...do a bin/zopectl debug and then do roughly the following:
>>> from ZODB.utils import p64
>>> oid = p64(0x3e0a)
So now we've got the oid of the broken object, lets crea
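For reference, p64 just packs the integer into ZODB's 8-byte big-endian oid form, so you can recreate it with struct if ZODB.utils isn't handy:

```python
import struct

def p64(v):
    """Pack an integer oid into the 8-byte big-endian form ZODB uses."""
    return struct.pack(">Q", v)

def u64(v):
    """Unpack an 8-byte oid back into an integer."""
    return struct.unpack(">Q", v)[0]
```

u64 is the inverse, useful for turning the oid out of a POSKeyError back into something printable.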
Sidnei da Silva wrote:
Jim has acknowledged that this situation was *possible*, but I didn't
follow up further with a reproducible test case when I reported it.
Well, it's not a unit test, but your recipe to reproduce is above ;-)
(oh, and to boot, it happened on a customer instance - yay!)
1
Hi All,
I believe I've bottomed out the bug at the centre of my POSKeyError
report this morning and it's a real nasty. It's pretty convoluted but
I'm fairly sure it's a bug.
Imagine the following zope.conf setup:
path $INSTANCE/var/UnPacked.fs
mount-point /
path
Chris Withers wrote:
I'm dealing with what I suspect are a couple of corrupt filestorages.
One is packed every time content is loaded by a batch process and the
other is never packed. The UnPacked storage is mounted as the root and
then the Packed one is mounted in to /content.
I forg
Hi All,
I'm dealing with what I suspect are a couple of corrupt filestorages.
One is packed every time content is loaded by a batch process and the
other is never packed. The UnPacked storage is mounted as the root and
then the Packed one is mounted in to /content.
Running fstest.py on both .
Hi Shane,
Shane Hathaway wrote:
As a slight improvement, something like this should also work (untested):
data._p_jar = app._p_jar
data._p_oid = oid
app._p_jar.register(data)
import transaction
transaction.get().note('Fix POSKeyError')
transaction.commit()
I did try this:
>>> data._p_jar =
Jim Fulton wrote:
It's a shame ZODB doesn't turn POSKeyErrors into proper Broken objects
as it does when the class can no longer be imported. The problem with
POSKeyErrors is that they prevent you accessing *any object that refers
to the broken object* not just the missing object itself. This m
Binger David wrote:
It seems like all of these potential advantages are available through
the use of multiple storages, but none of them really require
database-level
support for cross-storage references. On the other hand, it seems
clear that cross-storage references make the system as a w
Binger David wrote:
Uh. I thought we were talking about reported POSKeyErrors that would
not exist
if there were no such thing as cross-storage references. I don't
understand
what there is to disagree with here. Are you saying that cross-storage
references reduce the risk of other applicat
Shane Hathaway wrote:
Chris Withers wrote:
I did try this:
>>> data._p_jar = app._p_jar
>>> data._p_oid = oid
>>> app.x = data
>>> import transaction
>>> transaction.get().note('Fix POSKeyError')
>>> transaction.comm
Jim Fulton wrote:
There's also a problem that we made cross-database references so easy to
make that they are made accidentally.
There's also no way to say "actually, I want to *move* this object to
this other storage."
Semantically, that's what my customer was trying to do...
make them m
Jim Fulton wrote:
If someone can make something like this work without modifying ZODB, I
won't object.
The only thing I'd like is to be able to:
- specify in policy that cross storage references should result in the
target object being copied to the storage
or
- have an api to say "where
Binger David wrote:
On Feb 13, 2008, at 3:58 PM, Jim Fulton wrote:
Note that, IMO, some of the best use cases for multi databases are
separating catalog and session data from regular content.
Could you say more about what the benefit of this separation is,
and why cross-storage references ar
Christian Theune wrote:
Okay, so I count two issues:
- packing and multiple mounted storages
- POSKeyErrors resulting in failure to load referring object rather
than creation of a broken referred object
Where would you like me to file these bug reports?
How about the bugtracker?
What's
Christian Theune wrote:
- specify in policy that cross storage references should result in the
target object being copied to the storage
I think the semantics to that aren't as obvious as they seem. This can
go bad regarding consistency when multiple cross-database reference from
various data
Kenneth Miller wrote:
client to connect and I'm able to see the data, but once the second client
is running, it can't see any more changes made by the first client.
What version of ZODB are you using?
Are you using clients that have asyncore loops running?
cheers,
Chris
Dylan Jay wrote:
I have a few databases being served out of a zeo. I restarted them in a
routine operation and now I can't restart due to the following error
Do you use cross database references?
cheers,
Chris
Dylan Jay wrote:
Chris Withers wrote:
Dylan Jay wrote:
I have a few databases being served out of a zeo. I restarted them in
a routine operation and now I can't restart due to the following error
Do you use cross database references?
not to my knowledge.
That doesn't mean
Roché Compaan wrote:
Not yet, they are very time consuming. I plan to do the same tests over
ZEO next to determine what overhead ZEO introduces.
Remember to try introducing more app servers and see where the
bottleneck comes ;-)
Am I right in thinking the storage server is still essentially
Hey All,
Just finished doing some rough speed tests having got RelStorage up and
running against an Oracle 10g database using Oracle's InstantClient and
cx_Oracle 4.3.3...
The tests involved basically creating 10 folders, each with 10 pages in
them, in a Plone site, using zope.testbrowser.
Shane Hathaway wrote:
Chris Withers wrote:
FileStorage-over-ZEO managed the test in 3 mins 20 seconds
RelStorage-to-Oracle managed the test in 3 mins 18 seconds
Cool, although it's not clear the storage speed had a major impact.
Well, the point was that RelStorage-to-Oracle was of si
Hi Shane,
How do you pack a RelStorage?
Is there a script or do you have to click the button in the control panel?
If there are docs for this, can you point me at them?
cheers,
Chris
Shane Hathaway wrote:
Can't you use the standard manage_pack() method?
Yes, pack it about the same way you would pack a ZEO storage.
OK, but I do that using fspack.py ;-)
Is there anything similar for RelStorage or do I need to do a zopectl
run / stepper script?
cheers,
Chris
Hi Shane,
First real world error from the Oracle adapter:
Module ZPublisher.Publish, line 125, in publish
Module Zope2.App.startup, line 238, in commit
Module transaction._manager, line 96, in commit
Module transaction._transaction, line 395, in commit
Module transaction._transaction,
Chris Withers wrote:
Module relstorage.relstorage, line 323, in store
Module relstorage.adapters.oracle, line 454, in store_temp
DatabaseError: ORA-24813: cannot send or receive an unsupported LOB
This was while trying to create a Plone object...
Interestingly:
- I could reproduce this
Shane Hathaway wrote:
OK, but I do that using fspack.py ;-)
Running fspack doesn't work with the storage online, does it? Maybe
you're talking about zeopack.py.
You're right, I think I probably was...
Is there anything similar for RelStorage or do I need to do a zopectl
run / stepper scri
Shane Hathaway wrote:
I'd be really interested to try some tests with multiple ZEO clients
attached to a FileStorage versus multiple RelStorage clients attached
to Oracle. I suspect Oracle would win ;-)
I am certainly interested in hearing the results of that test.
Well, we tried this fairly
Shane Hathaway wrote:
http://ora-24813.ora-code.com/
That page suggests you may be running different versions of Oracle on
the client and server. I'm sure there are other possibilities, of course.
Indeed, hopefully Guy will take a look some time.
I should stress that this error was extremel
Andrew Thompson wrote:
I've been using the ZODB pretty intensively for a few weeks now, and
suddenly have started getting intermittent Database conflict errors, after
which calling conn.sync() makes no difference, and I cannot get back to the
pre-exception state in one of my ZEO clients, although
Hi All,
I have a project with one zeo server, one zope client, all Zope 2.9.8.
With the client running, I ran a script that did nothing other than scan
through data (ie: no changes, no commits) using "zopectl run" from the
instance home of the client.
After this, the zope client's web interf
Jens Vagelpohl wrote:
Sounds logical to me. The running instance and your zopectl run-script
work in the same space with identical configurations, so both will start
using the same ZEO cache file. To me this sounds like a recipe for
disaster. You should run your script in a dedicated instance
Alan Runyan wrote:
Chris,
Are you seeing this error?
Nope, don't think so.
cheers,
Chris
Jim Fulton wrote:
The most recent releases of ZODB 3.8 have *numerous* cache-management
fixes. They also lock persistent cache files so you can't corrupt them
by trying to open them in multiple processes. Some of the fixes affect
non-persistent as well as persistent cache files.
Has zodb 3.8
Fred Drake wrote:
On Thu, Jul 31, 2008 at 2:15 PM, Andreas Jung <[EMAIL PROTECTED]> wrote:
They aren't part of ZODB 3.8 but part of the trunk/3.9 - right?
These are bug fixes, and will be included in ZODB 3.8.1.
Are you referring to Christian and Shane's patches here?
cheers,
Chris
-
Jim Fulton wrote:
On Jul 31, 2008, at 1:53 PM, Chris Withers wrote:
What I'd *really* like is a stable zodb release with Christian's
patches for zeoraid and Shane's patches for RelStorage that then feeds
through into a stable release of Zope 2.
I'm not familiar wit
Christian Theune wrote:
However, I was hoping for inclusion of at least the
`storage-iterator-branch` in 3.9 and I asked a few times over the past
months already for review but haven't gotten any responses at all so
far. This branch has been used in production for a few months already.
In additio
Sidnei da Silva wrote:
> Keep in mind rsync is not erm, trivial to get going on Windows.
Really? I've never had problems with cygwin...
cheers,
Chris
This is a little disconcerting:
Failed to abort object: TransactionalUndo oid=
Traceback (most recent call last):
File "/opt/Zope-2.11/lib/python/transaction/_transaction.py", line
549, in abort
self.manager.abort(o, txn)
File "/opt/Zope-2.11/lib/python/ZODB/DB.py", line 809, in abort
Jim Fulton wrote:
>> File "/opt/Zope-2.11/lib/python/ZODB/DB.py", line 809, in abort
>> raise NotImplementedError
>> NotImplementedError
>
> You can ignore this error. It has no consequence.
What does it mean?
(It prevented my undo, which was a bit of a bummer :-S)
> There's a bunch of stuf
Christian Theune wrote:
> On Fri, 2008-10-24 at 09:51 -0400, Jim Fulton wrote:
>> 2. I doubt that blobs have been factored into ZODB exports. This is,
>> obviously, an oversight.
>
> They were factored in and we have tests. However, the initial pickle
> will empty them: copying blobs this way in
Christian Theune wrote:
> On Fri, 2008-10-24 at 15:06 +0100, Chris Withers wrote:
>> Christian Theune wrote:
>>> On Fri, 2008-10-24 at 09:51 -0400, Jim Fulton wrote:
>>>> 2. I doubt that blobs have been factored into ZODB exports. This is,
>>>> obviousl
Carlos de la Guardia wrote:
> That's happened to me before. As Jim says, this error is not the
> culprit. If you check the error log on the ZMI you will find the *real*
> error.
This *was* the error that showed up in the ZMI...
Chris
Jim Fulton wrote:
> I've posted a new proposal:
>
>http://wiki.zope.org/ZODB/ExternalGC
>
> That addresses multi-database garbage collection and can also be
> useful in other situations.
>
> Comments are welcome.
I assume this would fix the following bug:
http://bugs.launchpad.net/zodb/
Izak Burger wrote:
> Because we use in zope.conf to mail us errors, it means
> that every restart creates a couple of hundred emails.
If you use mailinglogger then you can filter these out:
http://www.simplistix.co.uk/software/python/mailinglogger
cheers,
Chris
--
Simplistix - Content Manag
Hi All,
I was just wondering if anyone had deployed any high availability
solutions for ZEO storage servers using a SAN for the filestore used by
the storage server (and any associated BLOB files)? If so, how, and how
has it panned out?
Likewise, is anyone using Red Hat Cluster Suite to implement par
buildout said easy_install barfed:
Installing zeoinstance.
Getting distribution for 'ZODB3'.
error: Setup script exited with error: None
An error occurred when trying to install ZODB3 3.9.0a12. Look above
this message for any errors that were output by easy_install.
While:
Installing zeoinstance
Wichert Akkerman wrote:
>> Any ideas?
>>
>
> buildout hides all compile errors unless you run it with -v (or -vv).
buildout -vv didn't exactly provide much more help ;-)
C:\Python26\python.exe "-c" ""from setuptools.command.easy_install
import main;
main()"" "-mUNxd" ""C:\buildout-eggs\tmpc
Adam GROSZER wrote:
> Hello Chris,
>
> You have a c compiler working on the machine?
Yep, mingw I think...
But that doesn't look like the normal "you don't have a compiler" whine...
Chris
Andrew Sawyers wrote:
> I'll let you know shortly; I'm waiting on the test hardware for exactly
> this. I will be using an EMC SAN and SRDF replication.
I'm particularly interested in how you'll move the SAN from the primary
to the secondary node in the event of primary node failure, and how
yo
Wichert Akkerman wrote:
> On 4/3/09 3:43 PM, Chris Withers wrote:
>> I'm particularly interested in how you'll move the SAN from the primary
>> to the secondary node in the event of primary node failure, and how
>> you'll bring the secondary's zeo server u
Jim Fulton wrote:
> ZODB doesn't work with Python 2.6 yet on Windows.
Andreas tells me Python 2.6 is the target for Zope 2.12.
What kind of problems are there in this specific combination?
Chris
Jim Fulton wrote:
>
>> Andreas tells me Python 2.6 is the target for Zope 2.12.
>> What kind of problems are there in this specific combination?
>
> The tests crash when built with the free ms compiler.
>
> If you want to help, you can debug this.
Sadly, easier to install on Linux. Blame vmware
Andrew Sawyers wrote:
>> I'm particularly interested in how you'll move the SAN from the primary
>> to the secondary node in the event of primary node failure,
> This won't be done by me; it's handled by another team.
Will it be done by software on the nodes or something else completely?
>> and ho
Hanno Schlichting wrote:
> Just be aware that ZODB 3.9 is not compatible with any stable Zope 2.x
> release. It only works and is required for Zope 2.12. It can be made to
> work with prior versions of Zope2 but that is a mild pain.
What are the problems with using ZODB 3.9 in Zope <2.12?
Chris
Forwarding this here in case it's related.
This is ZODB 3.9.0a11 on Python 2.5.1.
The problem only occurs for certain ParsedXML documents, the two I've
found so far were created in 2004-2005, which I guess would have been
Zope 2.7ish on Python 2.4.
Any ideas what this means or where it's comin
Hi All,
I get the following error when trying to open a filestorage .fs file
with ZODB3-3.9.0a11:
File "/ZODB3-3.9.0a11-py2.5-linux-i686.egg/ZODB/config.py", line 154,
in open
**options)
File
"/ZODB3-3.9.0a11-py2.5-linux-i686.egg/ZODB/FileStorage/FileStorage.py",
line 185, in __ini
Alan Runyan wrote:
> reported, https://bugs.launchpad.net/zodb/+bug/361184
> maybe chris can take a stab at it *wink*
That'll be a "no" I'm afraid ;-)
(I simply don't have the knowledge :-S)
Chris