Re: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-11 Thread Dario Lopez-Kästen

Tim Peters wrote:

[Dieter Maurer]


They did not tell us about their application. But, Zope's database
adapters work like this. They use the ZODB cache (and its pool) as an RDB
connection pool. Thus, if the ZODB caches are not released, the RDB
connections won't.



I believe you, but I'm not sure what to make of it; for example,

1. The OP is the only person who has this problem?

2. Other people have this problem too, and don't know what to do about
   it either, but never complain?

3. Other people have had this problem, but do know what to do about it?


I am one of the people described in points 2 and 3.

This behaviour, which I recognise in Zope from years back, is still
plaguing some apps we have. They still run Zope 2.6 and DCO2, so
maybe some of the problems are alleviated in newer Zopes, but I do
recognise the problem description here.


/dario

--
-- ---
Dario Lopez-Kästen, IT Systems & Services Chalmers University of Tech.
Lyrics applied to programming & application design:
"emancipate yourself from mental slavery" - redemption song, b. marley



RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-03 Thread Юдыцкий Игорь Владиславович
If you really want to know the numbers we have played with...
ZServer threads - pool size
  5 - 10
 15 - 25
 40 - 55
 70 - 100
100 - 150
But there is no difference. We raised these numbers as more clients came
on board; the project keeps taking on more workers and different divisions.
In case somebody is interested - it's a warehouse management/controlling
system.
In the future there will be more than 320 people working at once. Right now
we have about half of that load.
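For anyone unfamiliar with where these two knobs live, a hedged sketch of the
corresponding zope.conf fragment - directive names as in Zope 2.8-era
configurations, using the first row of the table above and a placeholder
storage path:

    # zope.conf fragment (illustrative sketch only)
    zserver-threads 5

    <zodb_db main>
        mount-point /
        pool-size 10
        <filestorage>
          path /path/to/var/Data.fs
        </filestorage>
    </zodb_db>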

And I want to let you know that Zope built on ZODB works almost perfectly
for this task, except for losing connections... or forgetting to forget
connections... :) and there is no limit on that, as far as I can observe at
this time. A small improvement would lift ZODB to a higher level of
reliability and stability under a heavy site load.

Tim, I don't much like the idea of resetting the cache when throwing away a
connection. I had played with this before Dieter suggested doing it. I can't
describe it very clearly, because it has no reproducible results. But I
suspect that some Zope code, maybe the publisher code, uses some connection
attributes after the connection is closed. It looked like some attributes
returned None as a result of the empty cache and closed connection, and some
data from the site was missing. You can try it yourself with frequent
requests to the Zope management interface and a small pool_size, with the
cache cleared on removal.
I know nothing should be using anything after the connection is closed, but I
think something is... and cache removal will lead to many changes in the
existing code built on top of ZODB.

I would prefer to break the circle of object references, so that the
connection could silently die without waiting for gc to get around to it... :)
But it's just a hope.

Pitcher.

On Tue, 03/01/2006 at 14:38 -0500, Tim Peters wrote:
> [Tim Peters]
> >> I'm still baffled by how you get into this state to begin with.  Florent
> >> explained why earlier, and I didn't see a reply to this part:
> 
> [Florent Guillaume]
> >>> By itself Zope never uses more than one connection per thread, and the
> >>> number of thread is usually small.
> 
> >> You replied that you had "hundreds" rather than "thousands" of
> >> connections, but it's a still mystery here how you manage to have that
> >> many.  For example, Zope defaults to using 4 ZServer threads, so do you
> >> specify a much larger zserver-threads value than the default?
> 
> [Dieter Maurer]
> > I explained how Zope can aggregate much more connections when the maximal
> > number of worker threads exceeds the size of the connection pool.
> > Apparently, you missed this message.
> 
> No, I saw it (and thanks), but in the absence of specifics from Pitcher
> there was no particular reason to believe it was a relevant guess.  Pitcher
> said yesterday:
> 
> At the time we used Zope-2.7.4 we had zope blocks very often and
> deny of serving, while high activity period, so to prevnt blocks we
> had set up the big pool-size and zserver threads parameters (we've
> played with different values but allways pool_size was larger number
> for about 1.5 times). It worked for a while. So...
> 
> I still don't know what pool-size and zserver-threads _are_ set to, but it's
> clear he believes his pool-size is substantially (~1.5x) larger than his
> zserver-threads setting (in which case, no, it doesn't seem likely that he's
> got more threads than connections).
> 


RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-03 Thread Tim Peters
[Tim Peters]
>> I'm still baffled by how you get into this state to begin with.  Florent
>> explained why earlier, and I didn't see a reply to this part:

[Florent Guillaume]
>>> By itself Zope never uses more than one connection per thread, and the
>>> number of thread is usually small.

>> You replied that you had "hundreds" rather than "thousands" of
>> connections, but it's a still mystery here how you manage to have that
>> many.  For example, Zope defaults to using 4 ZServer threads, so do you
>> specify a much larger zserver-threads value than the default?

[Dieter Maurer]
> I explained how Zope can aggregate much more connections when the maximal
> number of worker threads exceeds the size of the connection pool.
> Apparently, you missed this message.

No, I saw it (and thanks), but in the absence of specifics from Pitcher
there was no particular reason to believe it was a relevant guess.  Pitcher
said yesterday:

At the time we used Zope-2.7.4 we had zope blocks very often and
deny of serving, while high activity period, so to prevnt blocks we
had set up the big pool-size and zserver threads parameters (we've
played with different values but allways pool_size was larger number
for about 1.5 times). It worked for a while. So...

I still don't know what pool-size and zserver-threads _are_ set to, but it's
clear he believes his pool-size is substantially (~1.5x) larger than his
zserver-threads setting (in which case, no, it doesn't seem likely that he's
got more threads than connections).



RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-03 Thread Dieter Maurer
Tim Peters wrote at 2006-1-2 15:05 -0500:
>I'm still baffled by how you get into this state to begin with.  Florent
>explained why earlier, and I didn't see a reply to this part:
>
>[Florent Guillaume]
>> By itself Zope never uses more than one connection per thread, and the 
>> number of thread is usually small.
>
>You replied that you had "hundreds" rather than "thousands" of connections,
>but it's a still mystery here how you manage to have that many.  For
>example, Zope defaults to using 4 ZServer threads, so do you specify a much
>larger zserver-threads value than the default?

I explained how Zope can aggregate much more connections when
the maximal number of worker threads exceeds the size of the connection
pool. Apparently, you missed this message.

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-02 Thread Tim Peters
...

[Tim]
>> FYI, I added code to clear the cache in _ConnectionPool at the time a
>> closed Connection is forgotten.  This will be in ZODB 3.6 final, which
>> in turn will be in Zopes 2.9 and 3.2 final.

[Pitcher]
> Okay. Tank you. But it's good not always... You never said anything good
> or bad of the idle period. The idea of having this parameter doesn't seem
> to be useful for you?

I simply haven't thought about it, and won't have time for that in the
foreseeable future either.  ZODB wasn't designed with hundreds of
simultaneous connections in mind, and for that reason it's not surprising if
odd things happen when you push it that way.

> IMO it's good to have a little connection buffer right after the site
> activity gets down so in case of next activity jump we have ready
> connections (may be with cache, to keep good application perfomance). And
> after we have stabilized site load, connection pool may shrink to the
> size that would solve the incoming requests (depending of database server
> load and etc.) What should be the idle period value - let user to decide.
> And no need to the pool_size parameter, or may be the list value of the
> connection pool to have ready to server connections.  That's why subj was
> about connection poll that makes no sense in that architecture.

Connection management wasn't designed with this use case in mind at all.
Zope typically makes very modest demands on the number of connections, and
non-Zope ZODB uses typically make even fewer.  In both, it's considered to be
"a problem" in the app if the app _ever_ opens more than pool_size connections
simultaneously (that's why warning and critical log messages are emitted when
the app does); it's not expected that pool_size will be set to something
much larger than the default 7, and keeping caches around is important.
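For non-Zope ZODB use the same knob is just a DB() argument; a minimal sketch
(the storage path is a placeholder):

    import ZODB, ZODB.FileStorage

    st = ZODB.FileStorage.FileStorage('/path/to/Data.fs')
    db = ZODB.DB(st, pool_size=7)   # 7 is the default; opening more than
                                    # pool_size connections at once is what
                                    # triggers the warning/critical log messages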

> Didn't mean to be rude... Sorry if i sounded offensive.

No offense taken!  I don't mean to sound dismissive either -- you may (or
may not) have good ideas here, but I simply don't have time to think about
them either way.  Hence:

>> I don't have time for more here now, so if others want more it's up to
>> them ;-)

> We have discussion that long that would be enough to rewrite Connection,
> DB, and utils classes/modules totaly... I would be good testing case.
>
> I've implemented idle logic as i could without deep knowledge of zodb.
> And next time we upgrade our system to a new version - will have to
> rewrite this logic again.

This is the right list to discuss such changes.  It would probably help a
lot if more people who cared about these use cases volunteered to help.



RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-02 Thread Юдыцкий Игорь Владиславович
On Mon, 02/01/2006 at 15:05 -0500, Tim Peters wrote:
> I'm still baffled by how you get into this state to begin with.  Florent
> explained why earlier, and I didn't see a reply to this part:
> 
> [Florent Guillaume]
> > Huh are you sure? That would mean you have thousands of threads. Or 
> > hundreds or ZEO clients. Or hundreds of ZODB mountpoints.
> > 
> > By itself Zope never uses more than one connection per thread, and the 
> > number of thread is usually small.
> 
> You replied that you had "hundreds" rather than "thousands" of connections,
> but it's a still mystery here how you manage to have that many.  For
> example, Zope defaults to using 4 ZServer threads, so do you specify a much
> larger zserver-threads value than the default?

Well, yes.
We have many (50-150) clients working with the Zope application at once.
All the application logic is inside a Firebird database, so the application
gets all the data it displays from ZSQL methods over an RDB connection.
Activity is not a stable thing. It jumps up when users are logging into the
system, and during regular work it sometimes jumps up and down. Usually the
activity looks like a lot of small SQL requests or stored procedure calls.

Back when we used Zope 2.7.4 we very often had Zope block up and refuse to
serve during high-activity periods, so to prevent the blocking we set large
pool-size and zserver-threads parameters (we played with different values,
but pool_size was always the larger number, by about 1.5 times). It worked
for a while. So... we moved to Zope 2.8.5 and started to see, every day, a
lot of connections that had been opened once and were never used again. That
caused the DB server to eat up memory and the system to swap, which slowed
the DB server down and made the request queue grow.
So we started to look through the ZODB code to find out what happens.

> 
> FYI, I added code to clear the cache in _ConnectionPool at the time a closed
> Connection is forgotten.  This will be in ZODB 3.6 final, which in turn will
> be in Zopes 2.9 and 3.2 final.
> 

Okay. Thank you. But it's not always good...
You never said anything, good or bad, about the idle period. Does the idea of
having this parameter not seem useful to you? IMO it's good to keep a small
connection buffer right after site activity drops, so that on the next
activity jump we have ready connections (maybe with their caches, to keep
application performance good). And after the site load has stabilized, the
connection pool may shrink to whatever size handles the incoming requests
(depending on database server load, etc.). What the idle-period value should
be - let the user decide. Then there is no need for the pool_size parameter,
or maybe the connection pool keeps a list of ready-to-serve connections.
That's why the subject said the connection pool makes no sense in this
architecture. I didn't mean to be rude... sorry if I sounded offensive.

> I don't have time for more here now, so if others want more it's up to them
> ;-)
This discussion is already long enough that we could have rewritten the
Connection, DB, and utils classes/modules totally... It would be a good test
case.

I've implemented the idle logic as well as I could without deep knowledge of
ZODB. And the next time we upgrade our system to a new version, I will have
to rewrite this logic again.

Anyway, thank you.

> 


RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-02 Thread Tim Peters
FYI, I added code to clear the cache in _ConnectionPool at the time a closed
Connection is forgotten.  This will be in ZODB 3.6 final, which in turn will
be in Zopes 2.9 and 3.2 final.

I don't have time for more here now, so if others want more it's up to them
;-)

> ...
> P.S. Call me Pitcher, i don't like 'OP' name.

No offense intended!  OP is a traditional abbreviation of "Original Poster".
Your first message used a character set that rendered your name as

??? ? ?

in my email client, and "OP" seemed a lot clearer than "??? ?
?" :-)

> Happy New Year to everyone who is reading this!!!
> Best wishes to you! I wish you to be healthy and all your family members
> too! Long live ZODB!
> :)

Best wishes to you too, Pitcher!



RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-02 Thread Tim Peters
I'm still baffled by how you get into this state to begin with.  Florent
explained why earlier, and I didn't see a reply to this part:

[Florent Guillaume]
> Huh are you sure? That would mean you have thousands of threads. Or 
> hundreds or ZEO clients. Or hundreds of ZODB mountpoints.
> 
> By itself Zope never uses more than one connection per thread, and the 
> number of thread is usually small.

You replied that you had "hundreds" rather than "thousands" of connections,
but it's still a mystery here how you manage to have that many.  For
example, Zope defaults to using 4 ZServer threads, so do you specify a much
larger zserver-threads value than the default?



RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-01 Thread Dieter Maurer
David Rushby wrote at 2005-12-30 11:14 -0800:
> ...
>Since ZODB doesn't know whether the connection it's releasing from the
>pool is still in use, I don't know whether resetting the connection's
>cache is appropriate as a general solution.  But it fixes a definite
>problem with Zope's behavior as a ZODB client application.

It is appropriate because the connection is closed (and therefore cannot
be used any longer) and it cannot be opened again (as opening is
not a "Connection" but a "DB" method).

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2006-01-01 Thread Dieter Maurer
Tim Peters wrote at 2005-12-30 15:40 -0500:
> ...
> or maybe it would be better to break the reference cycles when
>_ConnectionPool forgets a Connection (so that the trash gets reclaimed right
>then, instead of waiting for a gc collection cycle to reach the generation
>in which the trash lives).

I think that would be a good idea.

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-31 Thread Юдыцкий Игорь Владиславович
Thanks to everyone who has taken part in this discussion: Tim, Dieter,
Dave. Sorry for keeping silent; things get messy just before the New Year
comes.

Back to what I have played with after your advice.

1. I've tried adding gc.collect() in App.ZApplication.Cleanup.__del__(), so
that when the connection is released we do a gc.collect(). It works, but it
makes the server too slow.
2. I've tried gc.set_threshold() in ZApplicationWrapper.__init__() to make gc
cycles run more often, and watched the gc cycles via
gc.set_debug(gc.DEBUG_STATS) - the result is that Zope starts very slowly and
works very slowly; it cleans up unused connections, but not all of them,
depending on the thresholdX values.
3. I've tried putting gc.collect() into the _reduce_size method. It works
too.

But none of this produces the required behaviour.
#1 cleans up the best way - but the application becomes unusable because the
latency gets so bad.
#2 is better than #1, but it slows down Zope startup and often causes the
application to freeze while it cleans up garbage. And not all connection
objects are cleaned up while collecting... some remain open for an
unpredictable amount of time.
#3 does clean up connections, but it slows the application down right after
activity drops, so in pool.available we end up with more than the desired
number of connection instances; the pool tries to clean them up and freezes
the application, and that triggers another wave of pool growth because the
application has become slower...

So there is no appropriate way of relying on gc, in my view.
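For reference, a minimal sketch of the kind of tuning tried in experiment #2
above; the threshold values here are purely illustrative, not the ones
actually used:

    import gc

    gc.set_debug(gc.DEBUG_STATS)   # log one line per collection pass
    # Defaults are (700, 10, 10); lower thresholds make the cyclic collector
    # run more often, at the cost of the extra pauses described above.
    gc.set_threshold(100, 5, 5)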

Another problem is the immediate cleaning of the pool.available list right
after request activity drops a little. There is no reason to believe that
activity will not get higher again in a few seconds, but we are already
cleaning the connection pool and maybe launching gc.collect(), which slows
the whole application. Having a bigger pool_size is good for avoiding
frequent pool shrinking during activity bursts.
On the other hand, it's not attractive to have a big pool_size, because
during low-activity periods there is no need to keep so many open connections
to ZODB, and especially to the RDB.

So my preference is to have an idle period rather than a pool_size.
When we do not need any RDB connection, or any other connection, the pool may
shrink to one or zero connections (one is better, so as not to hurt latency
and to keep a cache of objects). During high request activity the pool holds
however many connections are required to serve all the parallel requests,
depending on application speed.
And when activity trends downwards we clean the pool according to the
idle-period logic: if a connection was pushed more than the idle period ago,
then we do not need it any more.
I've implemented it this way:

    def push(self, c):
        assert c not in self.all
        assert c not in self.available
        c._pushed = time()
        self.all.add(c)
        self.available.append(c)

    def repush(self, c):
        assert c in self.all
        assert c not in self.available
        c._pushed = time()
        self.available.append(c)
        self._reduce_size()

    def _reduce_size(self):
        t = time()
        # self.available is ordered oldest-first, so we can stop at the first
        # connection that is not yet stale.
        while self.available and t - self.available[0]._pushed > 60:
            # 60 seconds is a stand-in; this should come from a configuration
            # parameter, but for testing it will do...
            c = self.available.pop(0)
            self.all.remove(c)

It works fine - if we go and collect garbage, of course.

So I can't use gc.collect() calls in my production application instance, for
the reasons given above. And to close unused connections I use another
function that knows the RDB DA handle oid and calls its close() method before
self.all.remove(c) is called. But that's no good, as you can see. So I'm
hoping that you will find a way to reimplement the Connection class to let it
die without waiting for its turn to come up after many, many gc cycles.

Once again I would like to thank everybody for digging into my problem and
being willing to help. I hope that new versions of ZODB will have a more
flexible mechanism for forgetting connections, ideally with an idle-period
parameter rather than pool_size.

Happy New Year to everyone who is reading this!!!
Best wishes to you! I wish you to be healthy and all your family members
too! Long live ZODB!
:)

P.S.
Call me Pitcher, I don't like the 'OP' name.

On Sat, 31/12/2005 at 09:20 +0100, Dieter Maurer wrote:
> Tim Peters wrote at 2005-12-30 14:51 -0500:
> >[Dieter Maurer]
> >> They did not tell us about their application. But, Zope's database
> >> adapters work like this. They use the ZODB cache (and its pool) as an RDB
> >> connection pool. Thus, if the ZODB caches are not released, the RDB
> >> connections won't.
> >
> >I believe you, but I'm not sure what to make of it; for example,
> >
> >1. The OP is the only person who has this problem?
> >
> >2. Other people have this problem too, and don't know what to do about
> >   it either, but never complain?
> 
> I expect (but the original poster may step

RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-31 Thread Dieter Maurer
Tim Peters wrote at 2005-12-30 14:51 -0500:
>[Dieter Maurer]
>> They did not tell us about their application. But, Zope's database
>> adapters work like this. They use the ZODB cache (and its pool) as an RDB
>> connection pool. Thus, if the ZODB caches are not released, the RDB
>> connections won't.
>
>I believe you, but I'm not sure what to make of it; for example,
>
>1. The OP is the only person who has this problem?
>
>2. Other people have this problem too, and don't know what to do about
>   it either, but never complain?

I expect (but the original poster may step in) that the problem
could occur in Zope only when the number of threads exceeds
the pool size (or additional connections are used in an application
specific way) as otherwise, there are no "dropped" connections.

Because formerly, it made no sense to have more worker threads than
that given by the pool size, this situation is likely to occur
rarely.

> ...
>>> If not, you may have better luck on the zope-db list (which is
>>> devoted to using other databases with Zope):
>
>> The problem is not with the RDB but with the ZODB connections that are
>> magically not garbage collected. He will probably not find help on
>> "zope-db".
>
>That suggestion was based on a guess that #3 (above) is most likely.  Of
>course I don't know, but #1 and #2 seem unlikely on the face of it.  If
>other people using RDB don't have this problem, then zope-db is the right
>place to ask how they manage to avoid it.

If the poster has no evidence that the ZODB connections are definitely kept,
but just sees that the relational database connections remain open,
the reason might indeed lie in a completely different place:

  There are some reports on "zope-db" about DAs leaking relational
  database connections.
  
  The problem is not related to ZODB connection handling.
  Instead, the relational database connection is kept open
  even if the DA object was invalidated (and especially cleared).


We observed such behaviour with Zope 2.7/ZODB 3.2 and "ZPsycopgDA".

   When we used "resetCaches" (which should in principle release
   the old caches; I also added a "minimizeCache" because
   the cyclic gc does not work with Zope 2.7's ExtensionClass objects),
   a new set of Postgres connections was opened without
   the old ones being closed.

   We worked around this problem by:

 *  avoiding the use of "resetCaches"

 *  restarting Zope once a week to get rid of
    stale Postgres connections

> ...
>OTOH, if that's not
>what's going on here, I'd expect to have heard about this here more than
>once in the last 5 years ;-)

The older ZODB code (ZODB 3.2 and before) was seriously flawed
with respect to cache release handling.

Fortunately, it was very rare that caches needed to be released.

I found the flaws because we used temporary connections extensively
and, of course, their caches need to go away with the temporary
connection. I had a hard fight to get rid of the memory leaks
induced by those flaws.


Now (ZODB 3.4 and above) caches might get released more often.
True, the garbage collector now has a chance to collect cycles
including "ExtensionClass" objects, but it is very easy to
defeat the GC -- an object with a "__del__" method is sufficient.

> Perhaps because the OP is unique in allowing
>hundreds (or thousands -- whatever) of Connections to be active
>simultaneously?  Don't know.

He did not say that.

  If his hypothesis is indeed true and some connections exceeding
  the pool size are kept indefinitely, then already slightly
  exceeding the pool size may lead to an unbounded number of connections,
  provided the "exceedance" occurs frequently enough.


>I suggested before that forcing calls to gc.collect() would give more
>evidence.  If that doesn't help, then it's most likely that the application
>is keeping Connections alive.  Since I'm not familiar with the RDB code, I
>suppose it's even possible that such code uses __del__ methods, and creates
>cycles of its own, that prevent cyclic gc from reclaiming them.  In that
>case, there are serious problems in that code.

There is also another "gc" attribute (gc.garbage) holding the garbage cycles
that were not released. The poster may examine them to check whether they
indeed contain ZODB connection objects.
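A minimal sketch of that check, using the import style seen elsewhere in this
thread - force a collection and see whether any ZODB Connection objects are
stuck in gc.garbage (the list of uncollectable cycles):

    import gc
    from ZODB import Connection

    gc.collect()
    stuck = [o for o in gc.garbage if isinstance(o, Connection.Connection)]
    print "%d uncollectable Connection object(s) in gc.garbage" % len(stuck)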

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-30 Thread Tim Peters
[Tim Peters]
...
>> I suggested before that forcing calls to gc.collect() would give more
>> evidence.  If that doesn't help, then it's most likely that the
>> application is keeping Connections alive.

[David Rushby]
> I'm not the OP, but I was able to reproduce the problem he was
> encountering.  Calls to gc.collect() *do* solve the problem, without
> any changes to the application.

Thank you!  That's progress.

> But mightn't clearing the released ZODB Connection's cache
> (http://mail.zope.org/pipermail/zodb-dev/2005-December/009688.html ) be a
> better solution?

I didn't suggest it as "a solution", but as a way to get evidence.  Now I'd
like to know whether adding gc.collect() solves the OP's problem, in the
OP's original context, too.

I expect that clearing the cache when _ConnectionPool "forgets" a Connection
is harmless, but that will take more thought to be sure, and in any case it
doesn't address that trash Connection objects, and trash caches, would still
accumulate.  If possible, I would rather fix the underlying problem than
just relieve the symptom du jour.  For example, maybe it's a bug in Python's
gc after all, or maybe it would be better to break the reference cycles when
_ConnectionPool forgets a Connection (so that the trash gets reclaimed right
then, instead of waiting for a gc collection cycle to reach the generation
in which the trash lives).
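Purely as an illustration of that second alternative, a hypothetical sketch of
what breaking the cycles at "forget" time might look like; _resetCache and
_cache are the attribute names mentioned in this thread, the rest is an
assumption rather than the actual ZODB change:

    # Hypothetical variant of the pool's "forget a Connection" step (sketch only)
    while len(self.available) > target:
        c = self.available.pop(0)
        self.all.remove(c)
        c._resetCache()    # drop the cached persistent objects
        c._cache = None    # sever the Connection <-> cache cycle so plain
                           # reference counting can reclaim both right away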



RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-30 Thread David Rushby
--- Tim Peters <[EMAIL PROTECTED]> wrote:
> I suggested before that forcing calls to gc.collect() would give more
> evidence.  If that doesn't help, then it's most likely that the
> application
> is keeping Connections alive.

I'm not the OP, but I was able to reproduce the problem he was
encountering.  Calls to gc.collect() *do* solve the problem, without
any changes to the application.

But mightn't clearing the released ZODB Connection's cache
(http://mail.zope.org/pipermail/zodb-dev/2005-December/009688.html ) be
a better solution?





RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-30 Thread Tim Peters
[Dieter Maurer]
> They did not tell us about their application. But, Zope's database
> adapters work like this. They use the ZODB cache (and its pool) as an RDB
> connection pool. Thus, if the ZODB caches are not released, the RDB
> connections won't.

I believe you, but I'm not sure what to make of it; for example,

1. The OP is the only person who has this problem?

2. Other people have this problem too, and don't know what to do about
   it either, but never complain?

3. Other people have had this problem, but do know what to do about it?

...

>> If not, you may have better luck on the zope-db list (which is
>> devoted to using other databases with Zope):

> The problem is not with the RDB but with the ZODB connections that are
> magically not garbage collected. He will probably not find help on
> "zope-db".

That suggestion was based on a guess that #3 (above) is most likely.  Of
course I don't know, but #1 and #2 seem unlikely on the face of it.  If
other people using RDB don't have this problem, then zope-db is the right
place to ask how they manage to avoid it.

> Some hints how to analyse garbage collection problems might help him.

Alas, the #1 cause for "garbage collection problems" is an application
keeping objects alive that the author thinks, or just assumes, "should be"
trash.  IOW, there usually isn't a problem with garbage collection when one
is _suspected_, because the disputed objects are in fact not trash.  If
that's what's going on here, mucking with ZODB might soften the symptoms but
without curing the underlying application problem.  OTOH, if that's not
what's going on here, I'd expect to have heard about this here more than
once in the last 5 years ;-)  Perhaps because the OP is unique in allowing
hundreds (or thousands -- whatever) of Connections to be active
simultaneously?  Don't know.

I suggested before that forcing calls to gc.collect() would give more
evidence.  If that doesn't help, then it's most likely that the application
is keeping Connections alive.  Since I'm not familiar with the RDB code, I
suppose it's even possible that such code uses __del__ methods, and creates
cycles of its own, that prevent cyclic gc from reclaiming them.  In that
case, there are serious problems in that code.



RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-30 Thread David Rushby
I don't mean to intrude in this discussion, but I've communicated with
the original poster privately.

--- Dieter Maurer <[EMAIL PROTECTED]> wrote:
> Tim Peters wrote at 2005-12-29 12:59 -0500:
> > ...
> >[Tim Peters]
> >>> It means that _ConnectionPool no longer has a reason to remember
> >>> anything about that Connection.  Application code can continue
> >>> keeping it alive forever, though.
> >
> >[Denis Markov]
> >> But what about RDB-Connection what stay in cache forever?
> >
> >Sorry, I don't know anything about how your app uses RDB
> connections.  ZODB
> >isn't creating them on its own ;-)
> 
> They did not tell us about their application. But, Zope's database
> adapters work like this. They use the ZODB cache (and its pool)
> as an RDB connection pool. Thus, if the ZODB caches are not released,
> the RDB connections won't.

That is exactly the problem.  The ZODB client application in this case
is Zope (2.8.5), and relational database connections are not being
garbage collected in a timely manner because of the cycle.  If there's
an intense burst of activity followed by a lull during which no clients
are hitting Zope, numerous relational database connections are
sometimes left open because the garbage collector hasn't collected
their parent objects yet.

In this situation, adding a 'c._resetCache()' call to the end of the
while loop in _ConnectionPool._reduce_size (line 122 of
lib/python/ZODB/DB.py in the Zope 2.8.5 source code) fixes the problem.

Since ZODB doesn't know whether the connection it's releasing from the
pool is still in use, I don't know whether resetting the connection's
cache is appropriate as a general solution.  But it fixes a definite
problem with Zope's behavior as a ZODB client application.
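For concreteness, a sketch of that change, with the surrounding loop
approximated from the _reduce_size snippet quoted elsewhere in this thread;
only the _resetCache() line is the proposed addition, the rest is context:

    def _reduce_size(self, strictly_less=False):
        target = self.pool_size - bool(strictly_less)
        while len(self.available) > target:
            c = self.available.pop(0)   # oldest closed Connection
            self.all.remove(c)          # drop the pool's weak reference
            c._resetCache()             # proposed: empty the cache now, so any
                                        # RDB connections held by cached objects
                                        # can be freed without waiting for gc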






RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-30 Thread Dieter Maurer
Tim Peters wrote at 2005-12-29 12:59 -0500:
> ...
>[Tim Peters]
>>> It means that _ConnectionPool no longer has a reason to remember
>>> anything about that Connection.  Application code can continue
>>> keeping it alive forever, though.
>
>[Denis Markov]
>> But what about RDB-Connection what stay in cache forever?
>
>Sorry, I don't know anything about how your app uses RDB connections.  ZODB
>isn't creating them on its own ;-)

They did not tell us about their application. But, Zope's database
adapters work like this. They use the ZODB cache (and its pool)
as an RDB connection pool. Thus, if the ZODB caches are not released,
the RDB connections won't.

> ...
>Nothing can be done
>to _force_ Connections to go away forever.

But most of their resources could be freed when they are removed from
the pool.
I definitely would do this -- in the sense of defensive programming.
Leaking the connections is much more critical than the "all" attribute
growth...

> ...
>> as a result we get many RDB-Connections what will never use but hang our
>> RDB
>
>At this point I have to hope that someone else here understands what you're
>doing.

I can recognize standard Zope behaviour -- although other applications
might show similar behaviour...

> If not, you may have better luck on the zope-db list (which is
>devoted to using other databases with Zope):

The problem is not with the RDB but with the ZODB connections
that are magically not garbage collected.
He will probably not find help on "zope-db".


Some hints how to analyse garbage collection problems might
help him.

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Tim Peters
[Tim]
>> ...
>> Or there are no strong reference to `obj`, but `obj` is part of cyclic
>> garbage so _continues to exist_ until a round of Python's cyclic garbage
>> collection runs.

[Dieter Maurer]
> And this is *VERY* likely as any persistent object in the cache has a
> (strong, I believe) reference to the connection which in turn references
> any of these objects indirectly via the cache.

I'm not sure I follow:  it's not just "very likely" that Connections end up
in cycles, it's certain that they do.  The small test code I posted later
should make that abundantly clear.  They end up in cycles even if they're
never used:  call DB.open(), and the Connection it returns is already in a
cycle (at least because a Connection and its cache each hold a strong
reference to the other).

> In my view, closed connections not put back into the pool

That never happens:  when an open Connection is closed, it always goes back
into the pool.  If that would cause the configured pool_size to be exceeded,
then other, older closed Connections are removed from the pool "to make
room".  It's an abuse of the system for apps even to get into that state:
that's why ZODB logs warnings if pool_size is ever exceeded, and logs at
critical level if it's exceeded "a lot".  Connections should be viewed as a
limited resource.

> should be explicitely cleaned e.g. their cache cleared or at least
> minimized.

The code that removes older Connections from the pool doesn't do that now;
it could, but there's no apparent reason to complicate it that I can see.  

> If for some reason, the garbage collector does not release the
> cache/cache content cycles, then the number of connections would grow
> unboundedly which is much worse than an unbound grow of the "all"
> attribute.

There's a big difference, though:  application code alone _could_ provoke
unbounded growth of .all without the current defensive coding -- that
doesn't require hypothesizing Python gc bugs for which there's no evidence.
If an application is seeing unbounded growth in the number of Connections,
it's a Python gc bug, a ZODB bug, or an application bug.

While cyclic gc may still seem novel to Zope2 users, it's been in Python for
over five years, and bug reports against it have been very rare -- most apps
stopped worrying about cycles years ago, and Zope3 has cycles just about
everywhere you look.  ZODB isn't a pioneer here.

I ran stress tests against ZODB a year or so ago (when the new connection
management code was implemented) that created millions of Connections, and
saw no leaks then, regardless of whether they were or weren't explicitly
closed.  That isn't part of the test suite because it tied up a machine for
a day ;-), but nothing material has changed since then that I know of.  It's
possible a new leak got introduced, but I'd need more evidence of that
before spending time on it; the small test code I posted before showed that
at least that much still works as designed, and that hit all the major paths
thru the connection mgmt code.

> Pitcher seem to observe such a situation (where for some unknown
> reason, the garbage collector does not collect the connection.

I don't believe we have any real idea what they're doing, beyond that
"something somewhere" is sticking around longer than they would like.



RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Dieter Maurer
Tim Peters wrote at 2005-12-29 11:28 -0500:
> ...
>Or there are no strong reference to `obj`, but `obj` is part of cyclic
>garbage so _continues to exist_ until a round of Python's cyclic garbage
>collection runs.

And this is *VERY* likely as any persistent object in the cache
has a (strong, I believe) reference to the connection which
in turn references any of these objects indirectly via the cache.
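A small, hedged illustration of the cycle being described, assuming a scratch
FileStorage; _cache and _p_jar are the standard Connection/Persistent
attribute names:

    import ZODB, ZODB.FileStorage

    db = ZODB.DB(ZODB.FileStorage.FileStorage('scratch.fs'))
    cn = db.open()
    print cn._cache is not None    # the Connection holds its pickle cache
    print cn.root()._p_jar is cn   # objects in that cache point back at the
                                   # Connection, closing the reference cycle
    cn.close()
    db.close()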

In my view, closed connections not put back into the pool
should be explicitly cleaned, e.g. their cache cleared or at least
minimized.

If, for some reason, the garbage collector does not release the
cache/cache-content cycles, then the number of connections
would grow unboundedly, which is much worse than unbounded growth
of the "all" attribute.
Pitcher seems to observe such a situation (where, for some unknown
reason, the garbage collector does not collect the connection).

-- 
Dieter


RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Tim Peters
Oops!  I sent this to zope-dev instead of zodb-dev by mistake.

[EMAIL PROTECTED]
>>> Not agree. Can you answer the question? Does self.all.remove(c) mean
>>> that we WANT to destroy connection instance?

[Tim Peters]
>> It means that _ConnectionPool no longer has a reason to remember
>> anything about that Connection.  Application code can continue
>> keeping it alive forever, though.

[Denis Markov]
> But what about RDB-Connection what stay in cache forever?

Sorry, I don't know anything about how your app uses RDB connections.  ZODB
isn't creating them on its own ;-)

> On the next peak load we get some next ZODB-Connections with
> RDB-Connection  After repush() old ZODB-Connections will be killed
> (if  > pool_size)

I don't like the word "killed" here, because it seems highly misleading.
ZODB doesn't destroy any Connections or any caches.  ZODB destroys all its
strong references to old Connections, and that's all.  Nothing can be done
to _force_ Connections to go away forever.  It's ZODB's job here to make
sure it isn't forcing Connections (beyond the pool_size limit) to stay
alive, and it's doing that job.  It can't "kill" Connections.

> but RDB-Connection stay in cache forever
> And so on

There's one cache per Connection.  If and when a Connection goes away, its
cache goes away too.  So when you say something "stays in cache forever", I
don't know what you mean -- you apparently have many (hundreds? thousands?)
of Connections, in which case you also have many (hundreds or thousands) of
caches.  I don't know how an RDB-Connection gets into even one of those
caches to begin with.

> as a result we get many RDB-Connections what will never use but hang our
> RDB

At this point I have to hope that someone else here understands what you're
doing.  If not, you may have better luck on the zope-db list (which is
devoted to using other databases with Zope):

http://mail.zope.org/mailman/listinfo/zope-db



RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Tim Peters
...

[Florent Guillaume]
>> The self.all.remove(c) in _ConnectionPool attempts to destroy the
>> connection. If something else has a reference to it once it's closed,
>> then that's a bug, and it shouldn't. It should only keep a weak
>> reference to it at most.

[EMAIL PROTECTED]
> But it's nonsense!

Please try to remain calm here.  It's not nonsense, but if you're screaming
too loudly you won't be able to hear :-)

> If weakref exists then some other object has ref to the obj!

Or there are no strong references to `obj`, but `obj` is part of cyclic
garbage, so it _continues to exist_ until a round of Python's cyclic garbage
collection runs.

> And weakValueDictionary is cleaned up automatically when the
> last strong ref disappears.

That's a necessary precondition, but isn't necessarily sufficient.  When the
last strong reference to a value in a weakValueDictionary goes away, if that
value is part of cyclic garbage then the weakValueDictionary does not change
until Python's cyclic gc runs. 
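A small demonstration of that point in plain CPython 2.x, nothing
ZODB-specific - a value caught in a reference cycle stays in a
WeakValueDictionary until cyclic gc runs:

    import gc, weakref

    class Node(object):
        pass

    d = weakref.WeakValueDictionary()
    a = Node()
    b = Node()
    a.other = b
    b.other = a        # a and b now form a reference cycle
    d['a'] = a
    del a, b           # drop our last direct (strong) references
    print len(d)       # 1 -- the cycle keeps the value alive
    gc.collect()
    print len(d)       # 0 -- gone once cyclic gc has run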

> Destroying obj with this logic is absurd:

I covered that before, so won't repeat it.  You misunderstood the intent of
this code.

...
> del self.data[id(obj)] <== there is no use to delete obj by
> deleting weakref... we just deleting weakref from the weakValueDictionary!

Yes, it's just deleting the weakref -- and that's all it's trying to do, and
there are good reasons to delete the weakref here (but are not the reasons
you thought were at work here).

> Try this: 1. add this method to Connection class definition
>
> def __del__(self):
> print 'Destruction...'
>
> then do this:

You're _really_ going to confuse yourself now ;-)  Because Connections are
always involved in reference cycles, adding a __del__ method to Connection
guarantees that Python's garbage collection will _never_ reclaim a
Connection (at least not until you explicitly break the reference cycles).

> >>> import sys
> >>> sys.path.append('/opt/Zope/lib/python')
> >>> from ZODB import Connection
> >>> c = Connection.Connection()
> >>> del(c)
> >>> c = Connection.Connection()
> >>> del(c._cache)

You're breaking a reference cycle "by hand" here, so that it becomes
_possible_ for gc to clean up the Connection.  But the only reason that was
necessary is because you added a __del__ method to begin with.

> >>> del(c)
> Destruction...
> >>>
>
> See? You can NOT delete object because _cache keeps reference to it...
> and connection remains forever!!!

That's because you added a __del__ method; it's not how Connection normally
works.  I'll give other code below illustrating this.

> It's cache has RDB connection objects and they are not closed. Connection
> becomes inaccessible and unobtainable trough the connection pool.

In your code above, `c` was never in a connection pool.  You're supposed to
get a Connection by calling DB.open(), not by instantiating Connection()
yourself (and I sure hope you're not instantiating Connection() directly in
your app!).

> That's what I wanted to say. It's definitely a BUG.

Sorry, there's no evidence of a ZODB bug here yet.

Consider this code instead.  It opens 10 Connections in the intended way
(via DB.open()), and creates a weakref with a callback to each so that we
can tell when they're reclaimed.  It then closes all the Connections, and
destroys all its strong reference to them:

"""
import weakref
import gc

import ZODB
import ZODB.FileStorage

class Wrap:
    def __init__(self, i):
        self.i = i

    def __call__(self, *args):
        print "Connection #%d went away." % self.i

N = 10
st = ZODB.FileStorage.FileStorage('blah.fs')

db = ZODB.DB(st)
cns = [db.open() for i in xrange(N)]
wrs = [weakref.ref(cn, Wrap(i)) for i, cn in enumerate(cns)]
print "closing connections"
for cn in cns:
    cn.close()
print "del'ing cns"
del cns  # destroy all our hard references
print "invoking gc"
gc.collect()
print "done"
"""

This is the output:

closing connections
del'ing cns
invoking gc
Connection #0 went away.
Connection #1 went away.
Connection #2 went away.
done

Note that "nothing happens" before Python's cyclic gc runs.  That's because
Connections are in reference cycles, and refcounting cannot reclaim objects
in trash cycles.  Because I used weakref callbacks instead of __del__
methods, cyclic gc _can_ reclaim Connections in trash cycles.

When the 10 Connections got closed, internally _ConnectionPool added them,
one at a time, to its .available queue.  When #7 was closed, the pool grew
to 8 objects, so it forgot everything it knew about the first Connection
(#0) in its queue.  "Nothing happens" then, though, because nothing _can_
happen before cyclic gc runs.  When #8 was closed, #1 got removed from
.available, and when #9 was closed, #2 got removed from .available.

When gc.collect() runs, those 3 Connections (#0, #1, and #2) are all
reclaimed.  The other 7 Connections (#3-#9) are still alive, sitting in the
.available queue waiting to be reused.


RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Tim Peters
 [Florent Guillaume]
>> ...
>> If you see many RDB connections, then it's a RDB problem and not a ZODB
>> problem. Something not releasing RDB connections quick enough, or
>> leaking RDB connections.

[EMAIL PROTECTED]
> Not agree. Can you answer the question? Does self.all.remove(c) mean that
> we WANT to destroy connection instance?

It means that _ConnectionPool no longer has a reason to remember anything
about that Connection.  Application code can continue keeping it alive
forever, though.

> If not then where in ZODB source code i can see connection destruction?
> Clearing cache and calling _v_database_connection.close() method?

There isn't any explicit "destruction" code.  Python's cyclic garbage
collection reclaims a Connection `c` (along with its cache, etc) if and when
no strong references to `c` exist at the time Python's cyclic gc runs.



RE: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Tim Peters
[Florent Guillaume]
> ...
> The self.all.remove(c) in _ConnectionPool attempts to destroy the
> connection.

Nope, it's simply getting rid of a weak reference that no longer serves a
purpose, to avoid unbounded growth of the .all set in case of ill-behaved
application code, and to speed Python's cyclic gc a little.  Removing the
Connection from .available removed _ConnectionPool's only strong reference
to the Connection.

> If something else has a reference to it once it's closed, then that's a
> bug, and it shouldn't.

Yup!

...

> Even hundreds of ZODB connections is absurd.

I'd settle for calling it uncommon and unexpected.

> Again, with 4 threads you should never get more than 4 Filestorage
> connections plus 4 TemporaryStorage connections.

Bears repeating ;-)



[ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Юдыцкий Игорь Владиславович
On Thu, 29/12/2005 at 12:27 +0100, Florent Guillaume wrote:
> >>If you see many RDB connections, then it's a RDB problem and not a ZODB 
> >>problem. Something not releasing RDB connections quick enough, or 
> >>leaking RDB connections.
> > 
> > 
> > Not agree. Can you answer the question? Does self.all.remove(c) mean
> > that we WANT to destroy connection instance?
> 
> The self.all.remove(c) in _ConnectionPool attempts to destroy the 
> connection. If something else has a reference to it once it's closed, then 
> that's a bug, and it shouldn't. It should only keep a weak reference to it 
> at most.

But it's nonsense! If a weakref exists then some other object has a ref to
the obj! And a WeakValueDictionary is cleaned up automatically when the
last strong ref disappears.

Destroying obj with this logic is absurd:

    def _reduce_size(self, strictly_less=False):
        target = self.pool_size - bool(strictly_less)
        while len(self.available) > target:
            c = self.available.pop(0)  # <== we have a ref to the connection
                                       #     here, before calling remove
            self.all.remove(c)

    def remove(self, obj):
        del self.data[id(obj)]  # <== there is no use in deleting obj by
                                #     deleting the weakref... we are just
                                #     deleting the weakref from the
                                #     WeakValueDictionary!

Try this:
1. add this method to Connection class definition

    def __del__(self):
        print 'Destruction...'

then do this:
>>> import sys
>>> sys.path.append('/opt/Zope/lib/python')
>>> from ZODB import Connection
>>> c = Connection.Connection()
>>> del(c)
>>> c = Connection.Connection()
>>> del(c._cache)
>>> del(c)
Destruction...
>>>
See?
You can NOT delete the object, because _cache keeps a reference to it... and
the connection remains forever!!! Its cache has RDB connection objects and
they are not closed. The connection becomes inaccessible and unobtainable
through the connection pool.
That's what I wanted to say. It's definitely a BUG.

> 
> > If not then where in ZODB source code i can see connection destruction?
> > Clearing cache and calling _v_database_connection.close() method?
> Sorry I don't know what a _v_database_connection is, it's not in ZODB or 
> transaction code. If it's RDB code I can't help you.
Don't bother... it's the RDB DA handle.
> 
> 
> > You've just caught me on "thousands" but gave no comments on deletion of
> > connection instances... but this is the clue to the topic.
> 
> Even hundreds of ZODB connections is absurd.
> Again, with 4 threads you should never get more than 4 Filestorage 
> connections plus 4 TemporaryStorage connections.
Okay... we moved from Zope 2.7.4, which blocked with a small number of
threads and pool_size under high site activity, so we had to increase those
numbers. Anyway, in the default configuration of 4 threads and a pool_size of
7 we can watch lots of lost connections, and we now know that it's a bug...
so we use a big pool_size to avoid connection "deletion" (losing them).
> 
> Florent
> 
--== *** ==--
Deputy Director, Department of Information Technology
Юдыцкий Игорь Владиславович


[ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Florent Guillaume

> > > A little bit of history...
> > > We have zope as an application server for heavy loaded tech process. We
> > > have high peaks of load several times a day and my question is about how
> > > can we can avoid unused connections to remain in memory after peak is
> > > passed?
> > > Before ZODB-3.4.1 connection pool was fixed size of pool_size and that
> > > caused zope to block down while load peaks.
> > > ZODB-3.4.2 that is shipped with Zope-2.8.5 has connection pool that does
> > > not limit the opened connections, but tries to reduce the pool to the
> > > pool_size and this behavior is broken IMO.
> > >
> > > Follow my idea...
> > > After peak load I have many (thousands of connections) that have cached
> > > up different objects including RDB connections.

> Hundreds... my mistake.

> > Huh are you sure? That would mean you have thousands of threads. Or
> > hundreds or ZEO clients. Or hundreds of ZODB mountpoints.
> >
> > By itself Zope never uses more than one connection per thread, and the
> > number of thread is usually small.
> >
> > If you see many RDB connections, then it's a RDB problem and not a ZODB
> > problem. Something not releasing RDB connections quick enough, or
> > leaking RDB connections.

> Not agree. Can you answer the question? Does self.all.remove(c) mean
> that we WANT to destroy connection instance?

The self.all.remove(c) in _ConnectionPool attempts to destroy the
connection. If something else has a reference to it once it's closed, then
that's a bug, and it shouldn't. It should only keep a weak reference to it
at most.

> If not then where in ZODB source code i can see connection destruction?
> Clearing cache and calling _v_database_connection.close() method?

Sorry I don't know what a _v_database_connection is, it's not in ZODB or
transaction code. If it's RDB code I can't help you.

> You've just caught me on "thousands" but gave no comments on deletion of
> connection instances... but this is the clue to the topic.

Even hundreds of ZODB connections is absurd.
Again, with 4 threads you should never get more than 4 Filestorage
connections plus 4 TemporaryStorage connections.


Florent

--
Florent Guillaume, Nuxeo (Paris, France)   CTO, Director of R&D
+33 1 40 33 71 59   http://nuxeo.com   [EMAIL PROTECTED]


Re: [ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Юдыцкий Игорь Владиславович
On Thu, 29/12/2005 at 11:30 +0100, Florent Guillaume wrote:
> > A little bit of history...
> > We have zope as an application server for heavy loaded tech process. We
> > have high peaks of load several times a day and my question is about how
> > can we can avoid unused connections to remain in memory after peak is
> > passed?
> > Before ZODB-3.4.1 connection pool was fixed size of pool_size and that
> > caused zope to block down while load peaks.
> > ZODB-3.4.2 that is shipped with Zope-2.8.5 has connection pool that does
> > not limit the opened connections, but tries to reduce the pool to the
> > pool_size and this behavior is broken IMO.
> > 
> > Follow my idea...
> > After peak load I have many (thousands of connections) that have cached
> > up different objects including RDB  connections.
Hundreds... my mistake.
> 
> Huh are you sure? That would mean you have thousands of threads. Or 
> hundreds or ZEO clients. Or hundreds of ZODB mountpoints.
> 
> By itself Zope never uses more than one connection per thread, and the 
> number of thread is usually small.
> 
> If you see many RDB connections, then it's a RDB problem and not a ZODB 
> problem. Something not releasing RDB connections quick enough, or 
> leaking RDB connections.

I don't agree. Can you answer the question? Does self.all.remove(c) mean
that we WANT to destroy the connection instance?
If not, then where in the ZODB source code can I see connection destruction?
Clearing the cache and calling the _v_database_connection.close() method?

You've just caught me on "thousands" but gave no comments on the deletion of
connection instances... but that is the key to the topic.

> 
> Florent
> 
--== *** ==--
Deputy Director, Department of Information Technology
Юдыцкий Игорь Владиславович


[ZODB-Dev] Re: Connection pool makes no sense

2005-12-29 Thread Florent Guillaume

> A little bit of history...
> We have zope as an application server for heavy loaded tech process. We
> have high peaks of load several times a day and my question is about how
> can we can avoid unused connections to remain in memory after peak is
> passed?
> Before ZODB-3.4.1 connection pool was fixed size of pool_size and that
> caused zope to block down while load peaks.
> ZODB-3.4.2 that is shipped with Zope-2.8.5 has connection pool that does
> not limit the opened connections, but tries to reduce the pool to the
> pool_size and this behavior is broken IMO.
>
> Follow my idea...
> After peak load I have many (thousands of connections) that have cached
> up different objects including RDB connections.


Huh, are you sure? That would mean you have thousands of threads. Or
hundreds of ZEO clients. Or hundreds of ZODB mountpoints.

By itself Zope never uses more than one connection per thread, and the
number of threads is usually small.

If you see many RDB connections, then it's an RDB problem and not a ZODB
problem. Something is not releasing RDB connections quickly enough, or is
leaking RDB connections.


Florent

--
Florent Guillaume, Nuxeo (Paris, France)   Director of R&D
+33 1 40 33 71 59   http://nuxeo.com   [EMAIL PROTECTED]