On Feb 27, 2007, at 9:24 PM, Andre Mueninghoff wrote:

Hi Jared, (and Hi to other intrepid users),

Here's what occurred for my nine collections when restoring from
osaf.us after the migration to Cosmo 0.6, using my existing .ini file
with the test/restore settings menu option:

No. of collections     Error msg in the restore dialogue boxes

1                          Global name 'M2Crypto' is not defined
1                          Can't parse webpage
1                          <class 'zanshin.http.HTTPError'> (500)
1                          <class 'zanshin.http.HTTPError'> (501)
1                          That collection was not found
4                          [no error msg...collection restored]


I tried restoring using Andre's .ini file. Four collections worked. I didn't get the M2Crypto error, but Andi did point me to a source module that was missing an M2Crypto import, so I just checked a fix for that into the Chandler trunk.
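For the record, that error is just a missing import: a module references M2Crypto without importing it, so the first use raises a NameError. A minimal sketch of the failure and the fix (the actual module and call site aren't named here, so this is illustrative only):

# Without the import, the first reference to M2Crypto in the module fails
# at runtime with:
#     NameError: global name 'M2Crypto' is not defined
# The fix amounts to adding the import at the top of the offending module:
import M2Crypto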


On one collection I got a "ConnectionDone" exception. I'm not sure if that's really a timeout error in disguise, but when I looked at the traffic, I saw that Cosmo was taking an exceptionally long time to return this ics file: Andre_Cal/19de35ff-929c-5b34-f3bd-e50599f93eb9.ics. Sure enough, even using cadaver and curl, that particular ics file takes a couple of minutes to return a response to a GET. Other ics files in that collection are returned quickly. This seems like a Cosmo problem, so I am cc'ing cosmo-dev on this message.
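If anyone wants to reproduce the slow GET without cadaver or curl, here's a rough Python 2.5 sketch of the kind of timed request I mean. USERNAME, PASSWORD, and HOME are placeholders (the account and home-directory details aren't included in this message), so substitute your own:

import base64, httplib, time

# Placeholders -- substitute a real osaf.us account and Cosmo home directory.
USERNAME = "USERNAME"
PASSWORD = "PASSWORD"
HOME = "HOME"
PATH = "/cosmo/home/%s/Andre_Cal/19de35ff-929c-5b34-f3bd-e50599f93eb9.ics" % HOME

auth = "Basic " + base64.b64encode("%s:%s" % (USERNAME, PASSWORD))
conn = httplib.HTTPSConnection("osaf.us", 443)

start = time.time()
conn.request("GET", PATH, headers={"Authorization": auth})
resp = conn.getresponse()
body = resp.read()
print resp.status, len(body), "bytes in %.1f seconds" % (time.time() - start)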


Grant, here is the stack trace for the ConnectionDone exception:

Traceback (most recent call last):
File "/Users/morgen/dev/trees/trunk/util/task.py", line 64, in __threadStart
    result = self.run()
File "/Users/morgen/dev/trees/trunk/application/dialogs/ SubscribeCollection.py", line 255, in run
    forceFreeBusy=task.forceFreeBusy)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ __init__.py", line 784, in subscribe
    username=username, password=password)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ __init__.py", line 1274, in subscribe2
    username=username, password=password)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ __init__.py", line 1451, in subscribeCalDAV
    share.sync(updateCallback=updateCallback, modeOverride='get')
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ shares.py", line 559, in sync
    forceUpdate=forceUpdate, debug=debug)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ conduits.py", line 302, in sync
    updateCallback=updateCallback)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ conduits.py", line 959, in _get
    updateCallback=updateCallback, stats=stats)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ conduits.py", line 798, in _conditionalGetItem
    updateCallback=updateCallback, stats=stats)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ webdav_conduit.py", line 404, in _getItem
    resp = self._getServerHandle().blockUntil(resource.get)
File "/Users/morgen/dev/trees/trunk/parcels/osaf/sharing/ WebDAV.py", line 85, in blockUntil
    return zanshin.util.blockUntil(callable, *args, **keywds)
File "/Users/morgen/dev/release/Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/site-packages/zanshin/ util.py", line 208, in blockUntil
    res.raiseException()
File "/Users/morgen/dev/release/Library/Frameworks/ Python.framework/Versions/2.5/lib/python2.5/site-packages/twisted/ python/failure.py", line 259, in raiseException
    raise self.type, self.value, self.tb

Is this a timeout, or something else?
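For what it's worth, Twisted's ConnectionDone normally means the remote side closed the connection cleanly, which isn't the same thing as a client-side timeout. If the failure object ever reaches application code, something like this rough sketch could tell the cases apart (the handler name is made up, not actual zanshin/Chandler code):

from twisted.internet import error

def classifyFailure(failure):
    # failure is a twisted.python.failure.Failure wrapping the exception.
    if failure.check(error.TimeoutError, error.ConnectionLost):
        return "timed out or dropped mid-transfer"
    if failure.check(error.ConnectionDone):
        # Clean close by the peer -- e.g. Cosmo giving up on the slow GET
        # and closing the connection before sending a response.
        return "connection closed cleanly before a response arrived"
    return "something else: %r" % failure.type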


Also, I tried subscribing to individual collections one by one, and that seems to work. It's only when multiple subscribes are going on simultaneously (as happens when you do a .ini restore) that I see all sorts of parsing errors, as if the content from the multiple requests is getting co-mingled or something. In fact, I put some instrumentation into zanshin to dump whenever it gets a PROPFIND response it can't parse, and sure enough, the following is a response that zanshin gets tripped up on (sensitive data edited out):

<?xml version="1.0" encoding="UTF-8"?>
<D:multistatus xmlns:D="DAV:">
    <D:response>
        <D:href>https://osaf.us:443/cosmo/home/[EDITED OUT]/Aaron_Cal/</D:href>
        <D:propstat>
            <D:prop>
                <ticket:ticketdiscovery xmlns:ticket="http://www.xythos.com/namespaces/StorageServer">
                    <ticket:ticketinfo>
                        <ticket:id>[EDITED OUT]</ticket:id>
                        <D:owner>
                            <D:href>https://osaf.us:443/cosmo/home/[EDITED OUT]/</D:href>
                        </D:owner>
                        <ticket:timeout>Infinite</ticket:timeout>
                        <ticket:visits>infinity</ticket:visits>
                        <D:privilege>
      HTTP/1.1 207 Multi-Status
Server: Apache-Coyote/1.1
X-Cosmo-Version: 0.6.0
Set-Cookie: JSESSIONID=[EDITED OUT]; Path=/cosmo
Content-Type: text/xml;charset=UTF-8
Content-Length: 9305
Date: Wed, 28 Feb 2007 07:33:30 GMT

<?xml version="1.0" encoding="UTF-8"?>
<D:multistatus xmlns:D="DAV:">
    <D:response>
        <D:href>https://osaf.us:443/cosmo/home/[EDITED OUT]/Alex_Cal/884dc0f6-7f16-11db-9cfc-0014a51af847.ics</D:href>
        <D:propstat>
            <D:prop>
                <D:getetag>"1020-1172517185000"</D:getetag>
                <D:displayname>[EDITED OUT]</D:displayname>
                <D:resourcetype/>
            </D:prop>
            <D:status>HTTP/1.1 200 OK</D:status>
        </D:propstat>
    </D:response>
    <D:response>
        <D:href>https://osaf.us:443/cosmo/home/[EDITED OUT]/Alex_Cal/.chandler/d5f472b8-a1e0-11db-ac61-f75221bfdbaa.xml</D:href>
        <D:propstat>
            <D:prop>
                <D:getetag>"378-1171492609000"</D:getetag>
                <D:displayname>d5f472b8-a1e0-11db-ac61-f75221bfdbaa.xml</D:displayname>
                <D:resourcetype/>
            </D:prop>
            <D:status>HTTP/1.1 200 OK</D:status>
        </D:propstat>
    </D:response>
    <D:response>
        <D:href>https://osaf.us:443/cosmo/home/[EDITED OUT]/Alex_Cal/.chandler/884dc0f6-7f16-11

This sure looks like two different PROPFIND responses stepping on each other's toes. I'll need to work with Grant on this tomorrow to see whether this is on Chandler/Zanshin's end or on Cosmo's. I suppose it could also be a side effect of the server-side URL rewriting I read something about.
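A simple way to flag this kind of corruption automatically (not the instrumentation I actually added, just a sketch of the idea): before handing a PROPFIND body to the XML parser, look for a raw HTTP status line embedded in the body, like the stray "HTTP/1.1 207 Multi-Status" above:

import re

# A raw status line at the start of a line shouldn't appear in the middle of
# a multistatus body. <D:status> elements contain the same text, but not at
# the start of a line, so this check won't trip on them.
STATUS_LINE = re.compile(r"^\s*HTTP/1\.[01] \d{3} ", re.MULTILINE)

def looksCoMingled(body):
    """Return True if a PROPFIND body appears to contain a second response."""
    return STATUS_LINE.search(body) is not None

def dumpIfSuspicious(body, log):
    if looksCoMingled(body):
        log("PROPFIND body appears to contain a second HTTP response:\n" + body)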

~morgen

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev
