Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-22 Thread Toby Dickenson

On Fri, 19 Apr 2002 07:54:42 -0400, Paul Everitt [EMAIL PROTECTED]
wrote:

This pattern applies better when you have a lot of document cataloging 
to be done.  A separate process can wake up, make a ZEO connection, and 
process the queue.  I don't think that indexing documents *has* to be a 
transactional part of every document save.

I've used something similar to that in a previous project that didn't
get beyond the prototype stage.

Under this cron-style approach, you also pay less of a conflict-error 
penalty, as you can increase the backoff period.

You don't need a 'backoff period' as such; you just move any jobs that
have suffered a conflict further back in the work queue. In some cases
you can almost eliminate ConflictErrors by making the background
process single-threaded.
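Toby's requeue-on-conflict scheme can be sketched in a few lines of Python (a toy model: `ConflictError` here is a stand-in for ZODB's exception, and the queue simply holds callables):

```python
from collections import deque

class ConflictError(Exception):
    """Stand-in for ZODB's ConflictError."""

def process_queue(jobs, max_passes=10):
    """Run each job once; a job that raises ConflictError is moved
    to the back of the queue instead of waiting out a backoff period."""
    queue = deque(jobs)
    done = []
    attempts = 0
    while queue and attempts < max_passes * len(jobs):
        job = queue.popleft()
        attempts += 1
        try:
            done.append(job())
        except ConflictError:
            queue.append(job)  # retry later, after the other jobs
    return done
```

A job that conflicts simply goes to the back of the queue, so by the time it runs again the jobs it collided with have already committed.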



Toby Dickenson
[EMAIL PROTECTED]


___
Zope-Dev maillist  -  [EMAIL PROTECTED]
http://lists.zope.org/mailman/listinfo/zope-dev
**  No cross posts or HTML encoding!  **
(Related lists -
 http://lists.zope.org/mailman/listinfo/zope-announce
 http://lists.zope.org/mailman/listinfo/zope )



Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-22 Thread Romain Slootmaekers


Yo,

I have been following this thread for quite some time now,
and call me stupid if you must, but why don't you just keep 
the data in the session and write it all out when the session
gets cleaned up?

For the original problem (keeping statistics of site usage)
this will be more than enough. 

I did a web-mining project using this in 2000
(OK, it was JSP and not Zope, but the approach is still valid; moreover,
from Zope 2.5? onwards you have a built-in SESSION object you can use)
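A rough sketch of that accumulate-in-session, flush-at-cleanup idea (plain Python, names hypothetical; the `flush` callable stands in for whatever durable write runs when Zope cleans the session up):

```python
class HitSession:
    """Accumulate per-page hit counts in memory (like Zope's SESSION)
    and write one batch to durable storage when the session ends."""
    def __init__(self, flush):
        self.counts = {}
        self.flush = flush          # called once, at cleanup time

    def record(self, path):
        self.counts[path] = self.counts.get(path, 0) + 1

    def on_cleanup(self):
        if self.counts:
            self.flush(self.counts)  # one write transaction per session
            self.counts = {}
```

Many requests then cost only one write transaction per session, which is plenty for usage statistics.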
 
have fun,

Sloot. 






Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-22 Thread Chris McDonough

This is a pretty good idea... the default RAM-based storage that is used 
for sessions (TemporaryStorage) tries hard to resist conflicts.  It is 
also nonundoing and does its own reference counting, so it needn't be 
packed unless it contains cyclic data structures (there is no UI to 
pack the default mounted storage anyway, so the problem is kind of 
moot).  The TransientObject code (the SESSION object is an instance of 
TransientObject) can make use of ZODB conflict resolution in many cases.

However, conflicts are still a problem with TemporaryStorage because it 
is a ZODB Storage implementation and uses the same optimistic 
concurrency control as FileStorage et al.  But I imagine that for most 
applications the out-of-the-box configuration would work just fine 
for things like counters and whatnot.  Someone could probably implement 
a limited-functionality session data storage that did not rely on ZODB 
or any other database, which might be even better for this kind of thing.
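Chris's last suggestion, a session data store that skips ZODB entirely, might look like this minimal sketch (an in-process dict behind a lock: no persistence, hence no conflicts to resolve; all names hypothetical):

```python
import threading

class RamCounterStore:
    """Sketch of a limited-functionality session data store: a plain
    in-process dict guarded by a lock, with no ZODB involvement."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def increment(self, key, delta=1):
        # The lock replaces optimistic concurrency control entirely.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + delta
            return self._data[key]

    def get(self, key):
        with self._lock:
            return self._data.get(key, 0)
```

The obvious trade-off is that the counts live in one process and vanish on restart, which is why it suits counters and whatnot rather than real application data.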

Romain Slootmaekers wrote:
 Yo,
 
 I have been following this thread for quite some time now,
 and call me stupid if you must, but why don't you just keep 
 the data in the session and write it all out when the session
 gets cleaned up?
 
 For the original problem (keeping statistics of site usage)
 this will be more than enough. 
 
 I did a webmining project using this in 2000 
 (ok, it was jsp and not zope, but the approach is still valid, moreover
 since from 2.5? onwards, you have a built in SESSION object you can use) 
  
 have fun,
 
 Sloot. 
 
 
 


-- 
Chris McDonough        Zope Corporation
http://www.zope.org http://www.zope.com
Killing hundreds of birds with thousands of stones






Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Paul Everitt


ForkedStorage, I like it simply for the coolness of the name. :^)

But it sparked a different kind of idea, leveraging a pattern that might 
emerge in Zope 3.

Let's say we had a queue in Zope.  We could asynchronously send changes 
into the queue.  Later, based on some policy (e.g. idle time, clock 
ticks, etc.), those changes would be enacted/committed.

Imagine the queue itself is in a different storage, likely 
non-versioned.  Imagine that the queue is processed every N seconds.  It 
takes all the work to do and performs it, but in a subtransaction.

Thus you might send the queue ten increments to a counter, but only one 
will be committed to the main storage.

To make programmers think less about the queue (send in the 
object reference, the method to use, and the parameters), you could make 
it look like a special form of subtransaction.  That is, you say:

   tm.beginQueueingTransactions()
   self.incrementCounter()
   self.title = 'Simple change'
   self.body = upload_file
   tm.endQueueingTransactions()

At the transaction level, all enclosed changes are queued for later 
commit.  You don't have to think any differently than regular object 
state management.
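For the counter case Paul mentions, the queue's coalescing behaviour can be sketched like this (hypothetical names; `commit` stands in for the real subtransaction commit against the main storage):

```python
class QueueingCounterTM:
    """Sketch of the queued-commit idea for a counter: increments pile
    up in the queue, and processing the queue applies them all but
    performs only one commit against the main storage."""
    def __init__(self, commit):
        self.pending = 0
        self.commit = commit      # called once per flush, with final value

    def increment(self):
        self.pending += 1         # queued, not yet committed

    def process_queue(self, current_value):
        # Ten queued increments become one committed write.
        new_value = current_value + self.pending
        self.commit(new_value)
        self.pending = 0
        return new_value
```

Ten queued increments arrive at the main storage as a single committed write, which is exactly the bloat and conflict saving described above.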

This pattern applies better when you have a lot of document cataloging 
to be done.  A separate process can wake up, make a ZEO connection, and 
process the queue.  I don't think that indexing documents *has* to be a 
transactional part of every document save.

Under this cron-style approach, you also pay less of a conflict-error 
penalty, as you can increase the backoff period.  There's no web browser 
on the other end, impatiently waiting for their flaming logo. :^)

Ahh well, fun to talk about.  Maybe this time next year we can repeat 
the conversation. :^)

--Paul

Shane Hathaway wrote:
 Jeremy Hylton wrote:
 
 CM == Chris McDonough [EMAIL PROTECTED] writes:



Completely agreed.  My disagreement is portraying the counter
problem as impossible with the zodb.  I think some people, as
evidenced by some of the responses, are willing to live with the
tradeoffs.  Other people will find managing a log file on disk to
be a more manageable solution.

   CM It would be best to make a dual-mode undoing and nonundoing
   CM storage on a per-object basis.

 I'd really like to do this for ZODB4, but it seems hard to get it into
 FileStorage, without adding automatic incremental packing to
 FileStorage.

 Example: Object A is marked as 'save enough revisions to do a single
 undo.  When a transaction updates A and makes older revisions
 unnecessary, there's no obvious way to remove them without doing a
 pack.  We could write a garbage collector that removed unneeded things
 (as opposed to packing everything to a particular date), but it
 doesn't seem very useful if it needs to be run manually.
 
 
 One idea I've been floating in my head is a 'forked' storage, where 
 some objects are stored in an undoable storage and others 
 are stored in a non-undoable storage.  I could try to explain it in 
 English but pseudocode is easier:
 
 
 class ForkedStorage:
 
     def __init__(self, undoable_storage, non_undoable_storage):
         self.undoable = undoable_storage
         self.non_undoable = non_undoable_storage
 
     def store(self, oid, data, serial):
         if not serial or serial == '\0' * 8:
             # For new objects, choose a storage.
             if self.wantUndoableStorage(data):
                 storage = self.undoable
             else:
                 storage = self.non_undoable
         else:
             # For existing objects, use the storage chosen previously.
             if self.undoable.load(oid):
                 storage = self.undoable
             else:
                 storage = self.non_undoable
         storage.store(oid, data, serial)
 
     def load(self, oid):
         data, serial = self.undoable.load(oid)
         if not data:
             data, serial = self.non_undoable.load(oid)
         if not data:
             raise POSException.POSKeyError(oid)
         return data, serial
 
     def wantUndoableStorage(self, data):
         # The first pickle in the data record names the object's class.
         module, name = cPickle.loads(data)
         class_ = getattr(__import__(module), name)
         return getattr(class_, '_p_undoable', 1)
 
 
 Only a simple idea. :-)
 
 Also, how would you specify the object's packing policy?  I'm
 thinking an _p_revision_control attribute or something like that.  If
 the attribute exists on an object, it sets a particular policy for
 that object.  
 
 Do individual transactions need to play in this game, too?  I'm
 imagining a use case where an object is marked as no revisions but
 you want to be able to undo a particular transaction.  I'm not sure if
 that means:

 - you can undo the transaction, but the no revisions object
   keeps its current state.

 - you can undo the transaction, and because the transaction is
   specially marked as undoable, there actually is a revision

 - you can't undo the transaction

Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Toby Dickenson

On Wed, 17 Apr 2002 23:01:04 -0400, Chris McDonough
[EMAIL PROTECTED] wrote:

It would be best to make a dual-mode undoing and nonundoing storage on
a per-object basis.  But a half step would be to make it easier to use
mounted storages ala
http://dev.zope.org/Wikis/DevSite/Proposals/StorageAndConnectionTypeRegistries.

A dual mode storage, or simply dual storages?

Storing counter objects *only* in a non-undo storage would be more
pleasant if ZODB supported cross-storage object references. 

Toby Dickenson
[EMAIL PROTECTED]





Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Chris McDonough

 A dual mode storage, or simply dual storages?

The former as a long-term goal, the latter as a short-term goal.  The 
proposal I mentioned would make it easier to build tools that allow you 
to mount storages.

 Storing counter objects *only* in a non-undo storage would be more
 pleasant if ZODB supported cross-storage object references. 

Yup.  I don't think this is anywhere on the radar, though...

-- 
Chris McDonough        Zope Corporation
http://www.zope.org http://www.zope.com
Killing hundreds of birds with thousands of stones






Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Chris Withers

Chris McDonough wrote:
 
  Storing counter objects *only* in a non-undo storage would be more
  pleasant if ZODB supported cross-storage object references.
 
 Yup.  I don't think this is anywhere on the radar, though...

How hard would they be to add?

cheers,

Chris





Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Toby Dickenson

On Fri, 19 Apr 2002 08:18:47 -0400, Chris McDonough [EMAIL PROTECTED]
wrote:

 Storing counter objects *only* in a non-undo storage would be more
 pleasant if ZODB supported cross-storage object references. 

Yup.  I don't think this is anywhere on the radar, though...

Hmm. Cross-storage 'symbolic links' would help too. I think we could
implement that using the same trickery as mounted storages.

Toby Dickenson
[EMAIL PROTECTED]





Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-19 Thread Shane Hathaway

Paul Everitt wrote:
 Let's say we had a queue in Zope.  We could asynchronously send changes 
 into the queue.  Later, based on some policy (e.g. idle time, clock 
 ticks, etc.), those changes would be enacted/committed.
 
 Imagine the queue itself is in a different storage, likely 
 non-versioned.  Imagine that the queue is processed every N seconds.  It 
 takes all the work to do and performs it, but in a subtransaction.
 
 Thus you might send the queue ten increments to a counter, but only one 
 will be committed to the main storage.
 
 To make programmers have to think less about the queue (send in the 
 object reference, the method to use, and the parameters), you could make 
 it look like a special form of subtransactions.  That is, you say:
 
   tm.beginQueueingTransactions()
   self.incrementCounter()
   self.title = 'Simple change'
   self.body = upload_file
   tm.endQueueingTransactions()
 
 At the transaction level, all enclosed changes are queued for later 
 commit.  You don't have to think any differently than regular object 
 state management.

Wow, on the surface, that would be very easy to do. 
Transaction.register() might dump to a long-lived queue instead of the 
single-transaction queue.

 This pattern applies better when you have a lot of document cataloging 
 to be done.  A separate process can wake up, make a ZEO connection, and 
 process the queue.  I don't think that indexing documents *has* to be a 
 transactional part of every document save.

Right.  Here's another way to think about it: we could use a catalog 
lookalike which, instead of updating indexes directly, asks a special 
ZEO client to perform the reindexing.  The special client might decide 
to batch updates.

 Under this cron-style approach, you also pay less of a conflict-error 
 penalty, as you can increase the backoff period.  There's no web browser 
 on the other end, impatiently waiting for their flaming logo. :^)

A variant on your idea is that when the transaction is finishing, if 
there are any regular objects to commit, the long-lived queue gets 
committed too.  That would be beneficial for counters, logs, and objects 
like Python Scripts which have to cache the compiled code in ZODB, but 
not as beneficial for catalogs.

Ok, thinking further... how about a Zope object called a 'peer delegate' 
which can act like other Zope objects, but which actually calls out to 
another ZEO client to do the work?  It could be very interesting... it 
might use some standard RPC or RMI mechanism.  We would want to be 
careful to make it simple.

 Ahh well, fun to talk about.  Maybe this time next year we can repeat 
 the conversation. :^)

I hope we'll be talking about what we did instead of what we'll do. :-)

The change to transactions seems simple.  Another thought: the 
long-lived queue might be committed only when there are regular objects 
to commit *and* a certain amount of time has passed since the last 
commit of the long-lived queue.  That might work well for catalogs.  Cool!

Shane






Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-18 Thread Jeremy Hylton

 CM == Chris McDonough [EMAIL PROTECTED] writes:

   Completely agreed.  My disagreement is portraying the counter
   problem as impossible with the zodb.  I think some people, as
   evidenced by some of the responses, are willing to live with the
   tradeoffs.  Other people will find managing a log file on disk to
   be a more manageable solution.

  CM It would be best to make a dual-mode undoing and nonundoing
  CM storage on a per-object basis.

I'd really like to do this for ZODB4, but it seems hard to get it into
FileStorage, without adding automatic incremental packing to
FileStorage.

Example: Object A is marked as 'save enough revisions to do a single
undo'.  When a transaction updates A and makes older revisions
unnecessary, there's no obvious way to remove them without doing a
pack.  We could write a garbage collector that removed unneeded things
(as opposed to packing everything to a particular date), but it
doesn't seem very useful if it needs to be run manually.

Also, how would you specify the object's packing policy?  I'm
thinking an _p_revision_control attribute or something like that.  If
the attribute exists on an object, it sets a particular policy for
that object.  

Do individual transactions need to play in this game, too?  I'm
imagining a use case where an object is marked as no revisions but
you want to be able to undo a particular transaction.  I'm not sure if
that means:

- you can undo the transaction, but the no revisions object
  keeps its current state.

- you can undo the transaction, and because the transaction is
  specially marked as undoable, there actually is a revision

- you can't undo the transaction

The first choice seems appropriate for a counter (I think), but I'm
not sure if it makes sense for all possible revision-less objects.

Jeremy







Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-18 Thread Steve Alexander

Jeremy Hylton wrote:
CM == Chris McDonough [EMAIL PROTECTED] writes:

 
Completely agreed.  My disagreement is portraying the counter
problem as impossible with the zodb.  I think some people, as
evidenced by some of the responses, are willing to live with the
tradeoffs.  Other people will find managing a log file on disk to
be a more manageable solution.
 
   CM It would be best to make a dual-mode undoing and nonundoing
   CM storage on a per-object basis.
 
 I'd really like to do this for ZODB4, but it seems hard to get it into
 FileStorage, without adding automatic incremental packing to
 FileStorage.

This might be possible without incremental packing, if the object will 
be of a fixed size.

I'm thinking of a simple counter here, something like:

class Counter(object):

    __slots__ = ['__count']

    def __init__(self):
        self.__count = 0

    def increment(self):
        self.__count += 1

    def getValue(self):
        return self.__count

Now, imagine that Counter was somehow Persistent too. (There would need 
to be a few more _p_... declarations in __slots__, and possibly some 
changes in the persistence machinery to allow for slots based instances 
as well as __dict__ based ones.)

I would naively expect a pickle of a Counter instance to always remain 
the same size.  Therefore, it could be updated in place.

Of course, this would break various other nice behaviours of FileStorage.


Another variation on the same theme: have a fixed-size external 
reference instead of the object's pickle. The fixed-size reference 
points to a separate some_object.pickle file which contains the pickle 
for that one object. The some_object.pickle file gets overwritten on 
each update.
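Steve's update-in-place idea can be illustrated with an ordinary file holding one fixed-size record (a sketch using Python's struct module, not FileStorage itself):

```python
import struct

RECORD = struct.Struct('>Q')   # one fixed-size 8-byte counter record

def write_counter(path, value):
    with open(path, 'wb') as f:
        f.write(RECORD.pack(value))

def update_in_place(path, delta):
    """Overwrite the fixed-size record at offset 0.  Because the pickle
    (here, the packed integer) never changes size, the file never grows;
    that is the property the in-place scheme relies on."""
    with open(path, 'r+b') as f:
        (value,) = RECORD.unpack(f.read(RECORD.size))
        f.seek(0)
        f.write(RECORD.pack(value + delta))
    return value + delta

def read_counter(path):
    with open(path, 'rb') as f:
        (value,) = RECORD.unpack(f.read(RECORD.size))
    return value
```

As noted above, this trades away the append-only guarantees that give FileStorage its undo and crash-recovery behaviour.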

--
Steve Alexander








Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-18 Thread Shane Hathaway

Jeremy Hylton wrote:
CM == Chris McDonough [EMAIL PROTECTED] writes:

 
Completely agreed.  My disagreement is portraying the counter
problem as impossible with the zodb.  I think some people, as
evidenced by some of the responses, are willing to live with the
tradeoffs.  Other people will find managing a log file on disk to
be a more manageable solution.
 
   CM It would be best to make a dual-mode undoing and nonundoing
   CM storage on a per-object basis.
 
 I'd really like to do this for ZODB4, but it seems hard to get it into
 FileStorage, without adding automatic incremental packing to
 FileStorage.
 
 Example: Object A is marked as 'save enough revisions to do a single
 undo'.  When a transaction updates A and makes older revisions
 unnecessary, there's no obvious way to remove them without doing a
 pack.  We could write a garbage collector that removed unneeded things
 (as opposed to packing everything to a particular date), but it
 doesn't seem very useful if it needs to be run manually.

One idea I've been floating in my head is a 'forked' storage, where 
some objects are stored in an undoable storage and others 
are stored in a non-undoable storage.  I could try to explain it in 
English but pseudocode is easier:


class ForkedStorage:

    def __init__(self, undoable_storage, non_undoable_storage):
        self.undoable = undoable_storage
        self.non_undoable = non_undoable_storage

    def store(self, oid, data, serial):
        if not serial or serial == '\0' * 8:
            # For new objects, choose a storage.
            if self.wantUndoableStorage(data):
                storage = self.undoable
            else:
                storage = self.non_undoable
        else:
            # For existing objects, use the storage chosen previously.
            if self.undoable.load(oid):
                storage = self.undoable
            else:
                storage = self.non_undoable
        storage.store(oid, data, serial)

    def load(self, oid):
        data, serial = self.undoable.load(oid)
        if not data:
            data, serial = self.non_undoable.load(oid)
        if not data:
            raise POSException.POSKeyError(oid)
        return data, serial

    def wantUndoableStorage(self, data):
        # The first pickle in the data record names the object's class.
        module, name = cPickle.loads(data)
        class_ = getattr(__import__(module), name)
        return getattr(class_, '_p_undoable', 1)


Only a simple idea. :-)

 Also, how would you specify the object's packing policy?  I'm
 thinking an _p_revision_control attribute or something like that.  If
 the attribute exists on an object, it sets a particular policy for
 that object.

 Do individual transactions need to play in this game, too?  I'm
 imagining a use case where an object is marked as no revisions but
 you want to be able to undo a particular transaction.  I'm not sure if
 that means:
 
 - you can undo the transaction, but the no revisions object
   keeps its current state.
 
 - you can undo the transaction, and because the transaction is
   specially marked as undoable, there actually is a revision
 
 - you can't undo the transaction
 
 The first choice seems appropriate for a counter (I think), but I'm
 not sure if it makes sense for all possible revision-less objects.

The first choice also makes sense for a catalog.  Here's another 
possible variation: transactions that involve *only* non-undoable 
objects are non-undoable; all other transactions are undoable and revert 
the revision of non-undoable objects as well.

Shane






Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-17 Thread Chris McDonough

 That's only if you do it as a property.  It doesn't have to be done that
 way.  Shane and I discussed a counter that existed as a central
 datastructure.  Objects that were being counted would simply have
 methods to increment the count and display the count.

FWIW, this already mostly exists in Zope as the (tiny) BTrees.Length.Length
class.  It's an awfully nifty little piece of code.  Anybody who is
interested should read it and try to understand it, because it's subtly
mindbending and ingenious, and it is a prime example of why we love Jim. ;-)
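The trick can be sketched without ZODB: on a write conflict, ZODB calls `_p_resolveConflict` with the old, committed, and attempted states, and a Length-style counter merges the two deltas instead of raising. This is a simplified stand-in, not the real BTrees code:

```python
class ResolvingCounter:
    """Simplified stand-in for the BTrees.Length.Length conflict trick."""
    def __init__(self):
        self.value = 0

    def change(self, delta):
        self.value += delta

    def _p_resolveConflict(self, old_state, saved_state, new_state):
        # Both writers started from old_state; keep both of their deltas
        # rather than raising a ConflictError.
        return saved_state + (new_state - old_state)
```

Two transactions that each bump the counter then both succeed: the resolved state carries both increments.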

 Completely agreed.  My disagreement is portraying the counter problem as
 impossible with the zodb.  I think some people, as evidenced by some of
 the responses, are willing to live with the tradeoffs.  Other people
 will find managing a log file on disk to be a more manageable solution.

It would be best to make a dual-mode undoing and nonundoing storage on
a per-object basis.  But a half step would be to make it easier to use
mounted storages ala
http://dev.zope.org/Wikis/DevSite/Proposals/StorageAndConnectionTypeRegistries.








[Zope-dev] how bad are per-request-write-transactions

2002-04-16 Thread Ivo van der Wijk

Hi,

How bad are per-request transactions in a non-ZEO environment? I.e.
each request on a folder or its subobjects will cause a write transaction
(somewhat like a non-fs counter, but worse as it happens for all subobjects)

And if this is really bad, are there any workarounds except for writing
to the filesystem?

Cheers

Ivo

-- 
Drs. I.R. van der Wijk  -=-
Brouwersgracht 132  Amaze Internet Services V.O.F.
1013 HA Amsterdam, NL   -=-
Tel: +31-20-4688336   Linux/Web/Zope/SQL/MMBase
Fax: +31-20-4688337   Network Solutions
Web: http://www.amaze.nl/Consultancy
Email:   [EMAIL PROTECTED]   -=-





Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-16 Thread Casey Duncan

This will kill performance, especially concurrent use of the site. It 
will also cause large amounts of database bloat. Do you need real time 
numbers, or is a delay (such as 24 hours) acceptable?

If you can stand a delay, another approach would be to write a script 
which scans the z2.log file (or another log that you generate on page 
hits) each night and in a single transaction updates a counter on each 
object hit.

If you use the z2.log, no additional writing is needed to the FS, and 
you get the benefit of easy access to the counts directly from the 
objects, without degrading performance or db bloat.
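Casey's nightly scan might look something like this (assuming z2.log lines are in the usual Common Log Format; committing the tallied counts back onto the objects in a single transaction is elided):

```python
import re
from collections import Counter

# Common Log Format: host - user [date] "METHOD /path HTTP/1.x" status bytes
LOG_LINE = re.compile(r'"[A-Z]+ (\S+) HTTP/[^"]*"')

def tally_hits(lines):
    """One pass over the log, returning {path: hit_count}; the caller
    would then update every counter object in one transaction."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            hits[m.group(1)] += 1
    return dict(hits)
```

The whole day's traffic collapses into one write per counted object, instead of one write transaction per request.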

-Casey

Ivo van der Wijk wrote:
 Hi,
 
 How bad are per-request transactions in a non-ZEO environment? I.e.
 each request on a folder or its subobjects will cause a write transaction
 (somewhat like a non-fs counter, but worse as it happens for all subobjects)
 
 And if this is really bad, are there any workarounds except for writing
 to the filesystem?
 
 Cheers
 
   Ivo
 
 







Re: [Zope-dev] how bad are per-request-write-transactions

2002-04-16 Thread Eric Roby

I developed a profiler service for a production site about 8 months ago.  I
essentially did what you are asking.  I needed to see how customers were
using the various navigational elements and other services provided within
the site layout.  The logging service could not give me a sense of the
context.  To make a long story short,  I had a method in the
standard_html_header that kicked off the evaluation process.  I essentially
created a mirror of the site (containers/sub-containers/methods) for each
hit, for each day, for each month, etc.  This provided me with a way to see
specific site activity in real time.  Each object that was evaluated (for
each day) had two TinyTable instances.  One recorded each hit as a record
(IP, referrer, username, time) while the other tallied the numbers per hit
(per unique IP).

This was all running on a Sun on a terrible network, and I saw little or no
performance difference; the ZODB growth was as you might expect from adding
the additional folder objects and TinyTable instances.  It wasn't a
high-profile site (about 3000 hits per week).  I ran the service for three
months with no problems.  The key was that the hits recorded in the
TinyTables did not create a ZODB transaction.

Hope this helps

Eric
- Original Message -
From: Casey Duncan [EMAIL PROTECTED]
To: Ivo van der Wijk [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, April 16, 2002 10:04 AM
Subject: Re: [Zope-dev] how bad are per-request-write-transactions


 This will kill performance, especially concurrent use of the site. It
 will also cause large amounts of database bloat. Do you need real time
 numbers, or is a delay (such as 24 hours) acceptable?

 If you can stand a delay, another approach would be to write a script
 which scans the z2.log file (or another log that you generate on page
 hits) each night and in a single transaction updates a counter on each
 object hit.

 If you use the z2.log, no additional writing is needed to the FS, and
 you get the benefit of easy access to the counts directly from the
 objects, without degrading performance or db bloat.

 -Casey

 Ivo van der Wijk wrote:
  Hi,
 
  How bad are per-request transactions in a non-ZEO environment? I.e.
  each request on a folder or its subobjects will cause a write
transaction
  (somewhat like a non-fs counter, but worse as it happens for all
subobjects)
 
  And if this is really bad, are there any workarounds except for writing
  to the filesystem?
 
  Cheers
 
  Ivo
 
 






