[Zope-dev] Can I configure additional loggers via zope.conf?

2004-10-28 Thread Ames Andreas (MPA/DF)
Hello,

Sorry if this is the wrong list; I just wasn't sure.

I'd like to use a Python library which creates and uses its own
logger objects/hierarchy.  This hierarchy is not a subtree of the
'event' hierarchy or the other default Zope logger hierarchies.
Unfortunately I wasn't able to define a logger section in my
zope.conf (without changing Zope/Startup/zopeschema.xml).

Given that the number of Python libraries using 'logging' seems to
keep growing, my question is:  Is this my fault, a Zope bug, a
wishlist item or a feature?  (I'm using 2.7.2, if that matters.)
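
A minimal workaround sketch, assuming the library logs under a
hypothetical 'mylib' root logger: configure that hierarchy by hand at
startup (for example from a Product's __init__.py) with the stdlib
logging module, instead of via zope.conf:

import logging

def setup_mylib_logging(logfile_path, level=logging.INFO):
    # 'mylib' stands in for the root of the library's logger hierarchy.
    logger = logging.getLogger('mylib')
    logger.setLevel(level)
    handler = logging.FileHandler(logfile_path)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s %(name)s %(message)s'))
    logger.addHandler(handler)
    return logger

# e.g. setup_mylib_logging('/path/to/mylib.log')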


TIA,

andreas



Re: [Zope-dev] Can I configure additional loggers via zope.conf?

2004-10-28 Thread Ames Andreas (MPA/DF)
Hello,

Fred Drake wrote:

 Please file a feature request for this so we don't forget it.  This
 can be assigned to me, though I'm not sure when I'll be able to get
 to it.

Please look at http://zope.org/Collectors/Zope/1555.  Unfortunately I
don't know how to assign it to you.


Thanks,

andreas



Re: [Zope-dev] Patch: let non-seekable streams be input for ZPublisher (updated)

2004-08-24 Thread Ames Andreas (MPA/DF)
Hello,

Leonardo Rochael Almeida wrote:

 Yes, if you file a feature request on the bug collector, which can
 be found at:

I've put the most recent version of the patch at
http://collector.zope.org/Zope/1472.


cheers,

andreas



Re: [Zope-dev] Patch: let non-seekable streams be input for ZPublisher (updated)

2004-08-23 Thread Ames Andreas (MPA/DF)
Hello,

Dieter Maurer wrote:

 Maybe, I have a much simpler solution:

   Something in ZServer makes all file fields seekable by
   delivering them through some temporary (either a StringIO
   or a temporary file).

   Maybe, you could do the same for your requests?

Yes, I could.  But avoiding exactly that was the reason I tried to
patch HTTPRequest in the first place.

What you propose is, for instance, what the HTTPServer already does.
I have at least two objections:

1) For big requests ('big' meaning above some hardcoded threshold)
   HTTPServer buffers the request on disk.  Then ZPublisher creates a
   FieldStorage instance which writes the request to disk once again.
   That also means the request is read from disk several times.
   That's a bit too much blocking disk I/O in a production webserver
   for my liking, and that's not counting retries; each retry adds
   another round of blocking disk I/O in the current implementation.
   What I'm trying to do is:

   - avoid unneeded disk I/O in retries

   - make it possible for self.stdin in HTTPRequest to be
     non-seekable, so that it doesn't have to be buffered on disk
     before FieldStorage is created (which does exactly that
     buffering).

2) I'm convinced that the one and only ZServer thread should do
   without any blocking (I/O) call (besides select(), that is), and I
   want AJPServer to also serve as an experiment to see whether this
   improves performance.  (A rough sketch of the buffering that point
   1 refers to follows after these two points.)
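
For concreteness, the kind of buffering discussed in point 1 amounts
to roughly the following.  This is an illustrative sketch only, not
the actual ZServer code; the threshold and helper name are invented:

import tempfile
from cStringIO import StringIO

THRESHOLD = 512 * 1024   # hypothetical "big request" cutoff

def make_seekable(stream, content_length):
    # Copy a non-seekable request body into a seekable temporary
    # (StringIO for small bodies, a temp file for big ones).  A
    # FieldStorage parsing the result will then buffer file fields
    # to disk a second time -- the duplication objected to above.
    if content_length <= THRESHOLD:
        buf = StringIO()
    else:
        buf = tempfile.TemporaryFile('w+b')   # first round of disk I/O
    remaining = content_length
    while remaining > 0:
        chunk = stream.read(min(remaining, 65536))
        if not chunk:
            break
        buf.write(chunk)
        remaining -= len(chunk)
    buf.seek(0)
    return buf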


cheers,

andreas



Re: [Zope-dev] Patch: let non-seekable streams be input for ZPublisher (updated)

2004-08-20 Thread Ames Andreas (MPA/DF)
Hi,

Andreas Ames wrote:

 next trial.  I hope I've got it straight now.

Not quite (although this may only be nitpicking ;-).

fs_env = environ

should have read

fs_env = environ.copy()

obviously.  See attachment.


cheers,

andreas



--- lib.orig/python/ZPublisher/HTTPRequest.py	2004-08-18 16:37:18.0 +0200
+++ lib/python/ZPublisher/HTTPRequest.py	2004-08-20 13:07:25.0 +0200
@@ -60,6 +60,37 @@
 class NestedLoopExit( Exception ):
     pass
 
+FILELIST_ENV_ENTRY = 'request.filestorage.filelist'
+
+class ZFieldStorage(cgi.FieldStorage):
+    """\
+    Tame the FieldStorage.
+
+    This class overrides the stock implementation to add all file
+    objects, generated while parsing a request, to a list.  This list
+    is given by the caller.
+
+    Unfortunately cgi.FieldStorage is not generic enough, to adapt the
+    list of constructor parameters, by just deriving from it.  Thus
+    the list is specified within the environ dict, given to the
+    constructor.  This is ugly and could be fixed by a simple patch
+    against cgi.FieldStorage.
+    """
+
+    def __init__(self, fp=None, headers=None, outerboundary="",
+                 environ=os.environ, keep_blank_values=0, strict_parsing=0):
+        self._filelist = environ.get(FILELIST_ENV_ENTRY, [])
+        cgi.FieldStorage.__init__(self, fp, headers, outerboundary,
+                                  environ, keep_blank_values,
+                                  strict_parsing)
+
+    def make_file(self, binary=None):
+        import tempfile
+        res = tempfile.TemporaryFile("w+b")
+        self._filelist.append(res)
+        return res
+
+
 class HTTPRequest(BaseRequest):
     """\
     Model HTTP request data.
@@ -125,10 +156,15 @@
 
     def retry(self):
         self.retry_count=self.retry_count+1
-        self.stdin.seek(0)
+        if not self._fs:                # not really needed
+            raise ValueError, "Retrying on partially initialized request is not supported.  Call processInputs()."
+        for f in self._fs_filelist:
+            f.seek(0)
         r=self.__class__(stdin=self.stdin,
                          environ=self._orig_env,
-                         response=self.response.retry()
+                         response=self.response.retry(),
+                         fs = self._fs,
+                         flist = self._fs_filelist
                          )
         r.retry_count=self.retry_count
         return r
@@ -138,6 +174,8 @@
         # removing tempfiles.
         self.stdin = None
         self._file = None
+        self._fs = None
+        self._fs_filelist = None
         self.form.clear()
         # we want to clear the lazy dict here because BaseRequests don't have
         # one.  Without this, there's the possibility of memory leaking
@@ -237,7 +275,7 @@
 
         return self._client_addr
 
-    def __init__(self, stdin, environ, response, clean=0):
+    def __init__(self, stdin, environ, response, clean=0, fs=None, flist=None):
         self._orig_env=environ
         # Avoid the overhead of scrubbing the environment in the
         # case of request cloning for traversal purposes. If the
@@ -261,7 +299,11 @@
         self.steps=[]
         self._steps=[]
         self._lazies={}
-
+        self._fs = fs
+        if flist is None:
+            self._fs_filelist = []
+        else:
+            self._fs_filelist = flist
 
         if environ.has_key('REMOTE_ADDR'):
             self._client_addr = environ['REMOTE_ADDR']
@@ -382,23 +424,26 @@
         taintedform=self.taintedform
 
         meth=None
-        fs=FieldStorage(fp=fp,environ=environ,keep_blank_values=1)
-        if not hasattr(fs,'list') or fs.list is None:
+        if not self._fs:
+            fs_env = environ.copy()
+            fs_env[FILELIST_ENV_ENTRY] = self._fs_filelist
+            self._fs=ZFieldStorage(fp=fp,environ=fs_env,keep_blank_values=1)
+        if not hasattr(self._fs,'list') or self._fs.list is None:
            # Hm, maybe it's an XML-RPC
-            if (fs.headers.has_key('content-type') and
-                fs.headers['content-type'] == 'text/xml' and
+            if (self._fs.headers.has_key('content-type') and
+                self._fs.headers['content-type'] == 'text/xml' and
                 method == 'POST'):
                 # Ye haaa, XML-RPC!
                 global xmlrpc
                 if xmlrpc is None: import xmlrpc
-                meth, self.args = xmlrpc.parse_input(fs.value)
+                meth, self.args = xmlrpc.parse_input(self._fs.value)
                 response=xmlrpc.response(response)
                 other['RESPONSE']=self.response=response
                 self.maybe_webdav_client = 0
             else:
-                self._file=fs.file
+                self._file=self._fs.file
         else:
-            fslist=fs.list
+            fslist=self._fs.list
             tuple_items={}
             lt=type([])
             CGI_name=isCGI_NAME

Re: [Zope-dev] Patch: let non-seekable streams be input for ZPublisher (updated)

2004-08-20 Thread Ames Andreas (MPA/DF)
Hi Dieter, *,

Dieter Maurer wrote:

   The FieldStorage must be rebuilt, because it can contain
   file objects.  These file objects must be reset as they may
   have been (partially) read in the previous request.
   This prevents reusing the previous FieldStorage.

As you may have seen in my first patch proposal, I actually tried to
address this problem, although in a half-baked and buggy way.  So
here's the next attempt.  I hope I've got it straight now.

Why didn't I do anything to reset the files that FieldStorage creates?

Because I thought it wouldn't be needed.  If you look at
cgi.FieldStorage you will see that accessing FieldStorage's 'value'
attribute always resets the file before reading it, and HTTPRequest's
'BODY' field does the same.

Because of your hint I looked again and realised that I was wrong.
HTTPRequest's 'BODYFILE' field fails to reset the file before
accessing it (as the File class in OFS/Image.py does).  That could
easily be fixed.  What's worse is that FieldStorage can create
subinstances of itself, which can in turn contain other file objects.
This is the case for multipart requests.


How to solve this?

I can see three alternative approaches:

1) In HTTPRequest.retry loop through self._fs.list (if it exists) and
   call seek(0) on every file object you can find there.  This can
   only work if the maximum depth of the FieldStorage tree is
   predictable, which in turn depends on the possible input.  I don't
   know enough about the possible HTTP requests to determine such a
   limit or even to 'prove' its existence.

2) In HTTPRequest.retry recursively walk through the FieldStorage
   trees in self._fs.list (if it exists); see the sketch after this
   list.  I wouldn't do that in Python without being able to predict
   the (worst case) number of function calls the recursion needs, and
   without refreshing my knowledge of compiler techniques I wouldn't
   be able to unroll the recursion into a loop.

3) This is a somewhat drastic (and possibly perceived as hackish?)
   approach, which OTOH is independent of the input and should
   perform quite well.  I override cgi.FieldStorage, especially its
   make_file method, so that it adds every file object it creates to
   a (possibly caller-provided) list.  Thus the caller
   (i.e. HTTPRequest) has a list of all those file objects that need
   to be reset if the FieldStorage is to be reused (as in
   ZPublisher's retries).
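
For illustration only (this is not the attached patch), approach 2
would amount to a recursive walk like the following over a
cgi.FieldStorage tree, rewinding every file object it finds:

def rewind_fieldstorage(fs):
    # fs.file is None for simple fields; fs.list is None unless the
    # part is itself a multipart container.  Recursion depth equals
    # the nesting depth of the multipart structure.
    if fs.file is not None:
        fs.file.seek(0)
    if fs.list:
        for item in fs.list:
            rewind_fieldstorage(item)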

For the attached patch I chose the third approach.  Please let me
know what you think about it.  If it is assessed to be OK I will
update the collector item I opened yesterday.


cheers,

andreas



--- lib.orig/python/ZPublisher/HTTPRequest.py	2004-08-18 16:37:18.0 +0200
+++ lib/python/ZPublisher/HTTPRequest.py	2004-08-19 17:42:31.0 +0200
@@ -60,6 +60,37 @@
 class NestedLoopExit( Exception ):
     pass
 
+FILELIST_ENV_ENTRY = 'request.filestorage.filelist'
+
+class ZFieldStorage(cgi.FieldStorage):
+    """\
+    Tame the FieldStorage.
+
+    This class overrides the stock implementation to add all file
+    objects, generated while parsing a request, to a list.  This list
+    is given by the caller.
+
+    Unfortunately cgi.FieldStorage is not generic enough, to adapt the
+    list of constructor parameters, by just deriving from it.  Thus
+    the list is specified within the environ dict, given to the
+    constructor.  This is ugly and could be fixed by a simple patch
+    against cgi.FieldStorage.
+    """
+
+    def __init__(self, fp=None, headers=None, outerboundary="",
+                 environ=os.environ, keep_blank_values=0, strict_parsing=0):
+        self._filelist = environ.get(FILELIST_ENV_ENTRY, [])
+        cgi.FieldStorage.__init__(self, fp, headers, outerboundary,
+                                  environ, keep_blank_values,
+                                  strict_parsing)
+
+    def make_file(self, binary=None):
+        import tempfile
+        res = tempfile.TemporaryFile("w+b")
+        self._filelist.append(res)
+        return res
+
+
 class HTTPRequest(BaseRequest):
     """\
     Model HTTP request data.
@@ -125,10 +156,15 @@
 
     def retry(self):
         self.retry_count=self.retry_count+1
-        self.stdin.seek(0)
+        if not self._fs:
+            raise ValueError, "Retrying on partially initialized request is not supported.  Call processInputs()."
+        for f in self._fs_filelist:
+            f.seek(0)
         r=self.__class__(stdin=self.stdin,
                          environ=self._orig_env,
-                         response=self.response.retry()
+                         response=self.response.retry(),
+                         fs = self._fs,
+                         flist = self._fs_filelist
                          )
         r.retry_count=self.retry_count
         return r
@@ -138,6 +174,8 @@
         # removing tempfiles.
         self.stdin = None
         self._file = None
+        self._fs = None
+        self._fs_filelist = None
         self.form.clear()
         # we want 

[Zope-dev] Patch: let non-seekable streams be input for ZPublisher (updated)

2004-08-18 Thread Ames Andreas (MPA/DF)
Hi,

I had a thinko in my previous patch
(http://mail.zope.org/pipermail/zope-dev/2004-August/023630.html)
which is corrected in the attached version.  Sorry for any
inconvenience I might have caused.

As I'm sort of protocol-challenged (thanks to the corporate
firewall), svn unfortunately isn't an option for me.  Thus I created
the patch against lib/python/ZPublisher/HTTPRequest.py as released in
2.7.2-0.  As the patch is almost trivial it shouldn't be a problem to
apply it, even manually, against other versions (at least 2.7.0,
2.7.1 and 2.7.2 should work out of the box, i.e. with
/usr/bin/patch).

To sum up its purpose again:  it's meant to avoid the stdin.seek()
call in the HTTPRequest class so that non-seekable streams can be
used as stdin.  It also potentially avoids unnecessary (blocking)
disk I/O in retries (this depends on the particular request being
worked on) and repeated request parsing in cgi.FieldStorage.  The
immediate cause for the patch is that AJPServer uses non-seekable
streams as input for the ZPublisher; to change that locally I'd have
to go with disk buffering.  Nevertheless I think this could provide a
general performance improvement.

To this end I simply keep the cgi.FieldStorage instance that
HTTPRequest.processInputs creates when the request is worked on for
the first time, and preserve it across retry boundaries.

Questions:

- Is this the right forum/place to send patches to?

- Is there any chance that this could be applied to Zope's mainline?
  (If not I will proceed with a local disk buffering scheme in the
  long term.)


cheers,

andreas



--- lib.orig/python/ZPublisher/HTTPRequest.py	2004-08-18 16:37:18.0 +0200
+++ lib/python/ZPublisher/HTTPRequest.py	2004-08-18 16:39:55.0 +0200
@@ -125,10 +125,12 @@
 
     def retry(self):
         self.retry_count=self.retry_count+1
-        self.stdin.seek(0)
+        if not self._fs:
+            raise ValueError, "Retrying on partially initialized request is not supported.  Call processInputs()."
         r=self.__class__(stdin=self.stdin,
                          environ=self._orig_env,
-                         response=self.response.retry()
+                         response=self.response.retry(),
+                         fs = self._fs
                          )
         r.retry_count=self.retry_count
         return r
@@ -138,6 +140,7 @@
         # removing tempfiles.
         self.stdin = None
         self._file = None
+        self._fs = None
         self.form.clear()
         # we want to clear the lazy dict here because BaseRequests don't have
         # one.  Without this, there's the possibility of memory leaking
@@ -237,7 +240,7 @@
 
         return self._client_addr
 
-    def __init__(self, stdin, environ, response, clean=0):
+    def __init__(self, stdin, environ, response, clean=0, fs=None):
         self._orig_env=environ
         # Avoid the overhead of scrubbing the environment in the
         # case of request cloning for traversal purposes. If the
@@ -261,6 +264,7 @@
         self.steps=[]
         self._steps=[]
         self._lazies={}
+        self._fs = fs
 
 
         if environ.has_key('REMOTE_ADDR'):
@@ -382,23 +386,24 @@
         taintedform=self.taintedform
 
         meth=None
-        fs=FieldStorage(fp=fp,environ=environ,keep_blank_values=1)
-        if not hasattr(fs,'list') or fs.list is None:
+        if not self._fs:
+            self._fs=FieldStorage(fp=fp,environ=environ,keep_blank_values=1)
+        if not hasattr(self._fs,'list') or self._fs.list is None:
             # Hm, maybe it's an XML-RPC
-            if (fs.headers.has_key('content-type') and
-                fs.headers['content-type'] == 'text/xml' and
+            if (self._fs.headers.has_key('content-type') and
+                self._fs.headers['content-type'] == 'text/xml' and
                 method == 'POST'):
                 # Ye haaa, XML-RPC!
                 global xmlrpc
                 if xmlrpc is None: import xmlrpc
-                meth, self.args = xmlrpc.parse_input(fs.value)
+                meth, self.args = xmlrpc.parse_input(self._fs.value)
                 response=xmlrpc.response(response)
                 other['RESPONSE']=self.response=response
                 self.maybe_webdav_client = 0
             else:
-                self._file=fs.file
+                self._file=self._fs.file
         else:
-            fslist=fs.list
+            fslist=self._fs.list
             tuple_items={}
             lt=type([])
             CGI_name=isCGI_NAME


[Zope-dev] Re: Are input streams seekable in Zope? (+ patch proposal)

2004-08-12 Thread Ames Andreas (MPA/DF)
Hi,

at first let me say that I'm very sorry for the long delay; it was
just an oversight.

Tres Seaver wrote:

 Is there a reason why the AJP protocol won't allow you to rewind
 to the beginning of the request stream?  I don't think that the
 publisher does any other seek than to the start of the stream.

There is nothing in AJP that supports such a rewind, but also nothing
that prevents it.  I think it's very comparable to the HTTP part of
ZServer.  The problem probably was that I simply didn't expect
'stdin.seek()' to do anything other than throw an error.

I just buffer a configurable number of bytes of an AJP request in
RAM; there are no disk buffers or the like for avoiding blocking I/O
in ZServer.  When the configured amount is in memory I simply stop
reading from the socket, thus relying on TCP's flow control.  The
respective ZPublisher thread informs the ZServer, via a trigger
event, when it has removed something from the input buffer.
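
Roughly, that buffering amounts to something like the following; the
class and its names are invented for illustration and are not the
actual AJPServer code:

class BoundedRequestBuffer:
    """Collect request body chunks up to a configured limit.

    While the buffer is full the channel simply stops reading from its
    socket, so TCP flow control throttles the client; the worker
    thread signals (via a trigger) once it has drained some data.
    """
    def __init__(self, limit):
        self.limit = limit
        self.chunks = []
        self.size = 0

    def wants_more(self):
        return self.size < self.limit

    def feed(self, data):
        self.chunks.append(data)
        self.size = self.size + len(data)

    def drain(self):
        data = ''.join(self.chunks)
        self.chunks = []
        self.size = 0
        return data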

 Perhaps you need to derive an AJPRequest from HTTPRequest, and
 arrange not to need the 'stdin' stream during a retry; another
 possibility would be to submit a patch which allowed the retry
 mechanism to work without re-parsing the request stream (basically,
 the patch would need to clone the cgi.FieldRequest set from the
 original request into the one used for the retry).

The first solution you propose is the one that immediately came to my
mind, because it sounds similar to what ZServer does for HTTP,
i.e. buffer big requests on disk and smaller ones in a StringIO.  The
reason I don't like it too much is that cgi.FieldStorage doesn't let
me control whether its input gets disk-buffered once again
internally.  Writing the same request to disk (and reading it back
in) potentially twice (or even more often in the retry case) is a bit
too heavy for me.

I thought about trying to come up with a patch against
cgi.FieldStorage that would leave it to the caller to decide whether
FieldStorage can do without its internal disk copy and just use the
input directly.  But even if I could do that, it would take a long
time to become effective, so there would still be the need for an
immediate workaround.

So your second proposal is what I actually want to try first.  I've
just finished a patch against ZPublisher/HTTPRequest.py, attached to
this mail, that, on retry, doesn't read the whole request from the
input stream again (as during the first attempt) but keeps the
toplevel cgi.FieldStorage created by the first attempt in
HTTPRequest.processInputs in a newly created instance variable,
self._fs.  This instance variable is then preserved across retry
boundaries.  I've tested the patch preliminarily and it worked for
me.

As I don't know how and where to send a patch against the Zope 2 core
(and also because I'd hope for a little review here :-), I'd just be
glad if you could tell me what the correct procedure is.  I just hope
that I won't have to go through some kind of 'paper war' just to get
three or four lines of code changed.


cheers,

andreas



--- ./orig.lib/python/ZPublisher/HTTPRequest.py	2004-03-12 21:00:48.0 +0100
+++ ./lib/python/ZPublisher/HTTPRequest.py	2004-08-12 17:11:55.0 +0200
@@ -125,10 +125,14 @@
 
     def retry(self):
         self.retry_count=self.retry_count+1
-        self.stdin.seek(0)
+        if self._fs:
+            self._fs.file.seek(0)   # XXX not sure this is really needed
+        else:
+            raise ValueError, "Retrying on partially initialized request is not supported.  Call processInputs()."
         r=self.__class__(stdin=self.stdin,
                          environ=self._orig_env,
-                         response=self.response.retry()
+                         response=self.response.retry(),
+                         fs = self._fs
                          )
         r.retry_count=self.retry_count
         return r
@@ -138,6 +142,7 @@
         # removing tempfiles.
         self.stdin = None
         self._file = None
+        self._fs = None
         self.form.clear()
         # we want to clear the lazy dict here because BaseRequests don't have
         # one.  Without this, there's the possibility of memory leaking
@@ -237,7 +242,7 @@
 
         return self._client_addr
 
-    def __init__(self, stdin, environ, response, clean=0):
+    def __init__(self, stdin, environ, response, clean=0, fs=None):
         self._orig_env=environ
         # Avoid the overhead of scrubbing the environment in the
         # case of request cloning for traversal purposes. If the
@@ -261,6 +266,7 @@
         self.steps=[]
         self._steps=[]
         self._lazies={}
+        self._fs = fs
 
 
         if environ.has_key('REMOTE_ADDR'):
@@ -382,23 +388,24 @@
         taintedform=self.taintedform
 
         meth=None
-        fs=FieldStorage(fp=fp,environ=environ,keep_blank_values=1)
-        if not hasattr(fs,'list') or fs.list is None:
+        if not self._fs:
+

[Zope-dev] Are input streams seekable in Zope?

2004-07-30 Thread Ames Andreas (MPA/DF)
Hi,

I've stumbled over some code in Zope 2.7.0 that seems to suggest that
input streams are meant to be seekable.  In an extension I wrote for
ZServer, namely AJPServer, I raise an exception when this is
attempted, and I'm quite sure the call to seek wasn't there in 2.6.x.
I found the call to seek in ZPublisher/HTTPRequest.py, line 128, in
the retry method.

Does this mean that only seekable streams are allowed in ZPublisher
(i.e. not stdin) or is this a bug?


TIA,

andreas


Re: [Zope-dev] [Q] Pickle support for C wrapper and ZEO

2004-04-23 Thread Ames Andreas (MPA/DF)
Hello Dieter,

thanks for your answer.


Dieter Maurer [EMAIL PROTECTED] writes:

 You mean ZEO client instances not ZEO (server) instances, don't you?

Exactly, sorry for having been ambiguous.


 Why do you think your ODBC objects should be consistent across
 ZEO clients?
 I do not think that it is necessary, as all your requests should
 be independent, which means all transactions happen inside a single
 request, which implies that connections need not be shared.

The reason why I think about this ZEO scenario is that I thought it
was a goal of a ZEO deployment that all ZEO clients behave identically
when they respond to the same request.  That requires some sort of
shared state, which is what ZODB is used for in ZEO, AFAIK.

Furthermore, such shared state is only useful when it persists
across requests.  So I guess my real question should have been:

Is it useful/sensible to keep state across request boundaries within a
database adapter in Zope (state such as ODBC environments,
connections, statements and descriptors)?

If you don't know ODBC well enough:  How do you do it in comparable
remote-access adapters (for example a CORBA client in Zope or
something similar)?


TIA,

andreas




[Zope-dev] [Q] Pickle support for C wrapper and ZEO

2004-04-22 Thread Ames Andreas (MPA/DF)
Hi all,

I'm reposting here because I had no luck on [EMAIL PROTECTED]

I'm in the course of writing a low-level wrapper for the ODBC API.
Just in case you wonder: I know of mxODBC and pyodbc; they don't fit
my needs, both license-wise and technically (I want better metadata
access, thread safety etc.).

I'll use my wrapper from within Zope.  There will be some Python
wrapper around the low-level stuff, and I wonder if it makes sense to
add pickle support to that Python wrapper.

The ODBC API is object-based and exhibits four object types:
environment, connection, statement and descriptor objects, each of
which has a set of methods and properties.  Pickle-wise I'm not so
concerned about persistence across shutdown/restart cycles (I think
it's reasonable to re-create your ODBC environments, connections
etc. after a restart) but rather about consistency across ZEO
instances.  My lack of experience makes me ask here for expert advice.

1)  Does the ZEO scenario demand some pickle support, e.g. to
    use consistent environments/connections etc. across ZEO instances,
    or do I just misunderstand ZEO?

2)  If pickle support is a good idea:  What scope would you find
    reasonable?  I.e. I can imagine that persistence of
    environments/descriptors could be useful, while persistent
    connections/statements could cause more trouble than they are
    worth.  (A sketch of one possible approach follows below.)
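
One common pattern, sketched here with invented names (this is not
the actual wrapper), is to pickle only the connection parameters,
never the live ODBC handles, and to reconnect lazily after
unpickling:

class ConnectionWrapper:
    def __init__(self, dsn, uid, pwd):
        self.dsn, self.uid, self.pwd = dsn, uid, pwd
        self._handle = None                  # opened on first use

    def _connect(self):
        # placeholder for the real low-level SQLConnect call
        self._handle = ('connected', self.dsn)

    def handle(self):
        if self._handle is None:
            self._connect()
        return self._handle

    def __getstate__(self):
        # live ODBC handles are not picklable; drop them and let the
        # unpickled copy reconnect on demand
        state = self.__dict__.copy()
        state['_handle'] = None
        return state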

As you can see, the whole thing isn't very clear to me, so I'd
appreciate your comments.


TIA,

andreas




Re: [Zope-dev] [Q] Pickle support for C wrapper and ZEO

2004-04-22 Thread Ames Andreas (MPA/DF)
Hello Martin,

[EMAIL PROTECTED] (Martin Kretschmar) writes:

Putting this back on the list; I hope that's OK with you.

 Hello,

 I was once debugging an application which usually
 crashed after 2-3 days during load tests. It was
 doing a lot of database operations. The access
 was to a Microsoft SQL Server.

 There has been once a warning, that constantly
 creating and deleting CDatabase-Objects leads to
 memory leaks. So the application had been written
 to reuse a given CDatabase object.

 The occasional crashes, shown to be somewhere in
 the ODBC code, were gone, when each Thread got back
 exactly his last CDatabase object in use.

 In this sense my definition of threadsafe is not
 always the way Microsoft sees it.

 Regards,
   Martin

Thanks for the hint, but I won't use MFC in my code (for
portability).  I'm currently using unixODBC to get started and hope
to preserve source-code compatibility with MS's ODBC Driver Manager
(and eventually iODBC, if needed).  If there are bugs in the platform
code, they won't bother me for quite some time, because I'm still
wrapping all (or rather a subset of) the SQL... functions that live
in sql[ext].h.  Currently my own refcount leaks are much worse than
any driver manager's memory leaks could ever be ;-).


cheers,

andreas

