RE: [Zope-dev] Boost.Python

2001-06-04 Thread Albert Langer

[...James Treleaven]
 are there any complications with respect to my using Boost.Python to bind
 any Zope Python scripts that I write to C++ code?

You've _really_ lost me; maybe someone else knows what you mean.  I don't
know what you mean by 'bind' (do you mean writing python extension modules
in C?), or what Boost.Python is, or what you are referring to as 'Zope
Python scripts' (do you mean the python modules that come with Zope?).

I don't know the answers to James' questions, but here are the links for
Boost.Python:

and comparison with Zope Extension Classes and other approaches:

Looks very interesting. Check it out.

Zope-Dev maillist  -  [EMAIL PROTECTED]
**  No cross posts or HTML encoding!  **
(Related lists - )

RE: Randomness (RE: [Zope-dev] CoreSessionTracking 0.8)

2001-05-23 Thread Albert Langer

It's obvious. This is just Zope's way of telling you not to live on hamburgers
and coke.

-Original Message-
Bjorn Stabell
Sent: Thursday, May 24, 2001 12:33 AM
To: Chris McDonough; Howard Zhang
Subject: Randomness (RE: [Zope-dev] CoreSessionTracking 0.8)

All right, let me try again.  I wish I had a small piece of code to give
you so you can reproduce it, but right now you'd have to get our entire
CMF-based website.

The bug basically manifests itself in that there are two versions of the
variable we put in the session (a shopping cart dict).  When I browse
through the site (not even updating the shopping cart) it'll show one
version for some links (1-40) before it switches to show the other, and
so on.  It looks like the website has two shopping carts that it
switches back and forth between.  You can see the shopping cart on every
page in the website (it's embedded into the template).

We were using frames, but I tried it several times without frames now
and the bug remains.  I even noticed that other variables disappeared
randomly as well, e.g., USER_PREF_LANGUAGES which is set by the
Localizer, resulting in a key error (I've probably seen 300 page views,
and then suddenly one going back to another page gives a key error?).

I'm very curious what could possibly be causing such problems.  I
thought there might be something wrong in the shared memory between
threads, as I can't see anything else changing but the threads (is there
a way to display which thread is doing the publishing?).
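(One crude way to answer that last question, sketched in plain Python: stamp each response with the publishing thread's identity and watch whether the cart flips track it. The header name and helper below are made up for illustration, not a Zope API.)

```python
import threading

def tag_response_with_thread(headers):
    # Hypothetical helper: record the id of the thread handling this
    # request in the response headers, so we can check whether the
    # alternating shopping carts correlate with thread identity.
    headers["X-Publishing-Thread"] = str(threading.get_ident())
    return headers
```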

I've seen similar randomness displayed in other situations where I've
been reloading pages that would sometimes (same interval, about every
1-40 times) show one character set, and other times another.  I think
nobody likes to see that kind of randomness.  It gives me a very bad
stomach feeling.  I definitely think it's something deeper than a
CoreSessionTracking problem.


-Original Message-
From: Chris McDonough [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 23, 2001 20:34
To: Howard Zhang
Subject: Re: [Zope-dev] CoreSessionTracking 0.8

I remember this problem, but I haven't been able to reproduce it.  But
maybe it's because I'm not understanding the steps to reproduce it.  The
sentence "user adds coke to shopping cart and click link to add coke
again before request finished" is hard to understand.  Can you explain?
Are you using frames?

Howard Zhang wrote:
 The problem about CoreSessionTracking we described before we can
 repeat again now.
 The steps are:
( 1 )  User adds Burger to shopping cart
( 2 )  User adds Coke to shopping cart and clicks the link to add
 again before the request finishes
( 3 )  The Burger disappears from the shopping cart and there is just one
 Coke ( not two )
( 4 )  Repeat step 2 and the Burger is back
 Anything you could tell me would be helpful.
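(The disappearing Burger reads like a classic lost update between two overlapping requests. A minimal sketch in plain Python - not actual CoreSessionTracking code - of how two requests that both load the cart before either saves can drop an item:)

```python
def lost_update_demo():
    # Simulated session store holding one shopping cart.
    session = {"cart": []}

    # Request A (add Burger) and request B (add Coke) both read the
    # cart before either has written its result back.
    cart_seen_by_a = list(session["cart"])
    cart_seen_by_b = list(session["cart"])

    cart_seen_by_a.append("Burger")
    session["cart"] = cart_seen_by_a   # A saves: cart == ["Burger"]

    cart_seen_by_b.append("Coke")
    session["cart"] = cart_seen_by_b   # B saves its stale copy: Burger gone

    return session["cart"]
```

The second save wins and the Burger disappears, just like step ( 2 ) and ( 3 ) above.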


[Zope-dev] RE: [Zope] xmlrpc slowness

2001-05-18 Thread Albert Langer

The new release is up on sourceforge.  It should be compatible with the
Zope client/server it was tested against.  It is at:

Let me know how it goes.  I'm curious to see what kind of speed increase
you see.  My guess is that the implementation at the other xmlrpc end will
be the bottleneck pretty soon.

Thanks *very* much for this!!!

1000 xml-rpc calls per second with commodity hardware sounds pretty

I've had to just pass it on to a friend to check out at the moment, but
will try to get back to you soon with speed results, as I'm sure the
other 3 CCs will. I've added the Zope list back to the addresses as
there may be others there interested in trying it out now that Phil Harris
has confirmed that the new version interworks correctly with the Zope
implementation. I've also added the [Zope-dev] list, as Zope developers
should be more interested, and Matt Hamilton, as the issues below may be
closely related to his posting "[Zope-dev] Asyncore in an external method".

In a previous email CC you said:

I'm really excited that zope people may be using this.  Let me know if you
have any questions/concerns/requests.

As you've also done binary releases for Windows and the various unixen that
Zope does releases for, it should be suitable for actually integrating
into Zope, rather than remaining an add-on Product or, in its present form,
a separate python process talking to Zope rather than implementing
Zope's xml-rpc itself.

So here goes with some questions/concerns/requests...

In the README you mention:

Some time I should document everything better, especially the nonblocking
aspects of the server and client.


* Non-Blocking IO:  This becomes important when you have critical
applications (especially servers) that CANNOT be held up because a client
is trickling a request in or is reading a response at 1 byte per second.

* Event based:  This goes hand in hand with non-blocking IO.  It lets you
use an event based model for your client/server application, where the
main body is basically the select loop on the file descriptors you are
interested in.  The work is done by registering callbacks for different
events you received on file descriptors.  This can be very nice.  There is
some work to be done here, but all the hooks are in place.  See TO DO for
a list of the features that have yet to be incorporated.

(BTW there is currently no TO DO file.)

These aspects are a very big advantage. For Zope as a client, I suspect
that the trickling issue may also be very important, since it
could be blocking an expensive Zope thread while waiting for a
response from a slow remote server to pass on some information in
response to a relayed request (ie http/xmlrpc relay mode or even straight
xmlrpc/xmlrpc proxy mode).

As you also mentioned, the implementation at the other xmlrpc
end will be the bottleneck pretty soon.

Sorry, I'm not very familiar with how to do this stuff myself, so I have
3 questions which maybe you could answer when you get around to doing
those docs (or even better, by also providing implementations in the
next release ;-). Or perhaps someone else on the CC list knows the answers
or is planning to do something about it?

1) I'm not sure if I've got it right, but my understanding is that despite
being based on a Medusa Reactor design that can handle many web hits
with a single thread, Zope also maintains a small pool of threads to
allow for (brief) blocking on calls to the file system for ZODB and
for (also brief) external connections. I suspect these threads are
expensive because they each end up keeping separate copies of
cached web pages etc (to avoid slow python thread switching). So
simply increasing the number of such threads is not recommended for
improving Zope performance - performance relies on the non-blocking
async Medusa Reactor design of Zserver, not on the threading, which
just seems to be a sort of extra workaround.

If that is correct, then a few concurrent external calls to slow external
xmlrpc servers (eg for credit card authorization taking 30 seconds
or more) could easily tie up a lot of Zope resources. The non-blocking
py-xmlrpc client could presumably surrender its turn in the main event
loop for its thread until a response is received and then be woken up
by the response, thus improving things greatly.
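If that guess about the thread pool is right, the arithmetic is stark. A back-of-the-envelope helper (the thread count and call duration below are made-up figures, not measured Zope numbers):

```python
def max_blocking_calls_per_minute(worker_threads, call_seconds):
    # Each blocking xml-rpc call pins one worker thread for its whole
    # duration, so a fixed pool caps throughput at
    # threads * (60 / seconds-per-call).
    return worker_threads * 60.0 / call_seconds

# Four worker threads and 30-second card authorizations allow at most
# 8 authorizations per minute - and while all four threads are waiting,
# none is free to serve ordinary page hits.
```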

Unfortunately I have no idea how to do this - whether it would just happen
automatically or there are built in facilities for doing that sort of thing
easily in Zope already, or whether it is difficult to do.

I am just guessing that there would be some special
tricks needed to wake up a channel when a response comes back (eg using
the stuff in Zserver/medusa/ and Zserver/PubCore/,
which I don't fully understand but looks relevant).

Maybe I have misunderstood, but it looks to me like existing
use of xmlrpc clients *from* Zope to external 

RE: [Zope-dev] OR mapping proposal

2001-05-16 Thread Albert Langer


Comments encouraged!

I've added some there.

Jim highlighted a project Risk there:

Updates to RDBMS data outside of the OR mapping could cause
cached data to be inconsistent.

This strikes me as rather fundamental.

Unless the goals include actual *sharing* of RDBMS data with
other applications completely independent of Zope I doubt
that the most important benefits of an OR mapping could
be achieved. Essentially, SQL RDBMSs are *about*
sharing data among applications. When customers want
SQL that is often what they actually want. An SQL
RDBMS can be overkill for other purposes which may
be just as well achieved by an embedded ODBMS like ZODB,
an SQL file system like MySQL or an LDAP directory.

Alternative goals for *exporting* ZODB data to an RDBMS
continuously, *importing* data from an RDBMS at regular
intervals and *embedding* an RDBMS database for exclusive
use by Zope with no write access for other applications
could all be met more easily.

There is certainly no major difficulty on the RDBMS
side, giving a Zope instance control over a set of
tables for its own use and providing append-only
and read only access to export and import tables
or views for regular or continuous replication.

But the combination of all 3 (which could be delivered
incrementally in any order) is *not* the same as *sharing*.

As I understand it, Zope's approach to cacheing inherently
prevents support for the Isolation part of ACID. Conflicting
writes to the same object are detected by version stamps but
the objects used by a transaction in one thread may have
been inconsistently changed by transactions in other threads.
This will not be detected unless those objects used are
also changed.

Similar problems are inherent in LDAP directories, which
are also designed for relatively static data with a low
rate of updates.

This is acceptable for many applications. Scope can and
should be limited to sharing that works with optimistic
checkout and does not require pessimistic locking. It is
common for an Enterprise Object to be read from an
RDBMS with its stamp noted, modified independently
by an application and then updated iff the stamp was not
changed. Only the simultaneous checking of the stamp and
update of the object needs to be wrapped within a short
ACID RDBMS transaction. For example ACS 4 maintains a
timestamp on every object which can be used for this
purpose. This is similar to the ZODB approach.
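That checkout pattern can be sketched against any SQL database. A minimal illustration using Python's sqlite3 (the table layout and names are invented for the example; ACS and ZODB each have their own equivalents):

```python
import sqlite3

class StaleStampError(Exception):
    """The row changed since we read it; the caller must retry or
    surface the conflict to the user."""

def read_object(conn, obj_id):
    # Read the object together with its stamp, noting the stamp for later.
    return conn.execute(
        "SELECT data, stamp FROM objects WHERE id = ?", (obj_id,)).fetchone()

def optimistic_update(conn, obj_id, new_data, seen_stamp):
    # The stamp check and the update happen in one atomic statement, so
    # only this short operation needs an ACID transaction.
    cur = conn.execute(
        "UPDATE objects SET data = ?, stamp = stamp + 1 "
        "WHERE id = ? AND stamp = ?",
        (new_data, obj_id, seen_stamp))
    conn.commit()
    if cur.rowcount == 0:
        raise StaleStampError(obj_id)
```

Everything between the read and the write happens outside any database transaction; only the single UPDATE needs the DBMS's guarantees.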

Note however that:

1) The application must be prepared to deal with an exception
that cannot just be handled as a lower layer ConflictError
by retrying.

2) The object will often be a composite - eg an order header
*and* all its line items, and fulfilments. Entanglement with
other objects such as products (for pricing) is avoided by
specific application programming (which may also be done in
stored procedures within the DBMS).

3) This does not support *any* cacheing of objects outside
of a transaction. The RDBMS itself provides internal
cacheing (often of the entire database for efficient
queries with web applications). This leads to the ACS
paradigm of the web server is the database client,
which is actually rather similar to Zope's Zserver is
the ZODB client. Both ACS and Zope involve complex
issues for database client side cacheing

Both 1 and 2 completely preclude any possibility of the
same level of transparency as for ZODB, while in no way
hindering use of pythonic syntax.

For most Zope web object publishing purposes cached objects
just need to be kept reasonably up to date rather than
synchronized with RDBMS transactions. The only viable
mechanism I can think of for dealing with item 3 in
a Zope context would involve the RDBMS maintaining a
Changes table which it appends to whenever any object
that has a special column for ZeoItem is changed without
also changing the value of ZeoItem. (ACS does not do
this and I'm not sure what it does do).

Zeo would monitor that table, either by regular polling
or continuously (eg with PostgreSQL as a LISTENer
responding to NOTIFY commands issued automatically
whenever the triggers append to the Changes table).

For each change Zeo would notify its Zope instances
to invalidate their caches for that item.
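A polling version of that monitor might look like the following (the Changes table layout and the invalidate callback are invented for illustration; with PostgreSQL the polling would be replaced by a LISTEN on the NOTIFY channel):

```python
import sqlite3

def poll_changes(conn, last_seen_seq, invalidate):
    # Fetch the rows the triggers appended since our last poll, and ask
    # each Zope instance to invalidate its cache for those items.
    rows = conn.execute(
        "SELECT seq, object_id FROM changes WHERE seq > ? ORDER BY seq",
        (last_seen_seq,)).fetchall()
    for seq, object_id in rows:
        invalidate(object_id)
        last_seen_seq = seq
    # The caller persists this high-water mark between polls.
    return last_seen_seq
```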

I'm not familiar enough with Zope cacheing internals
to know whether some other approach is feasible. Requiring
such changes in a shared database is certainly undesirable.

Q1. Could somebody please post specific URLs for relevant
documentation of Zope cacheing?

Q2. I have a related question about the Zope design overall. As far
as I can make out Zope actually keeps separate copies of persistent
objects in RAM for each thread, and relies on the fact that there
is a tree structure corresponding to the URL paths that ensures
objects from which attributes will be acquired tend to already
be in RAM when the acquisition occurs.

I assume this is trading off the horrendous inefficiency of

[Zope-dev] (LONG) RE: [SmartObjects] Wrap-up of the discussion going on on zope-dev?

2001-05-14 Thread Albert Langer

[Joachim Werner]
As most of you might have recognized, there is a very active thread
(actually two interwoven ones) about RO-mapping etc. going on on zope-dev.
I'd be very happy if someone could wrap up the stuff from there on the
SmartObjects list. ;-)

Thanks for the pointer. As I read them all through at once, after
seeing your pointer, I made notes before joining the discussion,
which may help, though far from a wrap-up.


There seem to be 6 separate topics:

1) Nature of SmartObjects, ZPatterns and Transwarp frameworks and
relation to Zope/ZODB framework.

2) Improvements to ZCatalog search efficiency

3) Query syntax - see Zwiki:

4) Schema transparency (an oxymoron?)

5) Storage of ZODB objects in separate DBMS tables per class - Object

6) Requirements for Object Relational Mapping.


Here's a list of all the items I found in the May archives
with the term ORMapping in the subject line, from thread
Experiments in ORMapping and interwoven thread oodb
philosophics was [above] until I subscribed. Will comment
later on subsequent messages rather than making notes.

If there are any other subject headings missed, or other postings
(including earlier months), please let me know.

Highlights before links attempt to be objective but are limited
by space, by my specific interests in the topic and by my lack of
thorough understanding of Zope internals and consequent previous
non-participation in [zope-dev] with consequent lack of appreciation
of where various posters are coming from.

Quotes following links are verbatim extracts (not necessarily
in the same order or in context). "Must read" means an important message
with no or inadequate highlights or extracts given here.

I have omitted numerous affirmations that ZODB meets most current
Zope requirements (perhaps with improvements to ZCatalog)
in view of the fact that nobody appeared to be arguing otherwise.

BTW, the date order used in the list archives would be better in numeric
order, as that might more closely reflect which messages are likely
to have been seen before replying. I have re-arranged them.

Starting from 10 May 2001:

Shane Hathaway (DC) - start of discussion - must read
includes link
overview in file: sketch

Tino Wildenhain - asks about ZPatterns, ZClasses, doesn't want
purely relational application server.

SH - answers TW, says not hard to implement by mapping classes
to a table.

TW - answers SH, prefers improved OODB because doesn't like
mapping classes to schema.

SH - answers TW, enhanced ZODB storage (RDBMS and Berkeley DB) is in some
ways the best OODB.
The other motivations for an RDBMS are (1) people have existing schemas
and want Zope to access the same data as their existing apps, and they
want it to be transparent, and (2) tables with millions of entries are
easily stored in Zope but the perception is that the catalog isn't as
fast as a database index.  No one has done any tests AFAIK. [...]
That's one reason ZODB is so nice.  You can write an application without
writing a formal schema.

Casey Duncan (Kaivo) - suggests Matisse or Objectivity but limited by
not supporting ZODB versions. Berkeley DB best soon. Mentions slow XML
storage startup.

JW - already 2 OR mapping projects, SmartObjects and TransWarp,
no duplication because Phillip from Transwarp participating in
SmartObjects list.

TW - answers CD, XML just another pickle format.

SH - answers JW. SmartObjects, ZPatterns and Transwarp require new
database API instead of maintaining transparency like ZODB. Projects
should look at replacing parts of ZODB instead of adding complexity.
ZODB has pieces that can be split apart and replaced as needed, such
as caching, persistence, transactions, the pickle jar, the
multi-threaded connection factory, and the storage layer.  I'm hoping
we can achieve OR mapping by only replacing the pickle jar, i.e.

Cees de Groot - answers SH. Confirms advantage of not having to
write formal schema, migrating from PostgreSQL to ZODB for that.
File Storage faster than Oracle and is basic transaction log
so nothing could be more reliable.
Are people using ZODB for non-Zope data? I'd be very interested to discuss
things like emulating extents, patterns for indexing, 

RE: oodb philosophics ;) was: Re: [Zope-dev] Experiments withORMapping

2001-05-14 Thread Albert Langer

[Karl Anderson]
Casey Duncan [EMAIL PROTECTED] writes:

 I am not arguing necessarily for SQL as a query language for the ZODB.
 It is an accepted standard, but not a perfect one by any means,
 especially for OODBs. Its appeal lies mainly in the high level of
 community familiarity and the plethora of SQL software to borrow from.

Does anyone have an opinion on the possible usefulness of XPath,
XQuery, and other XML standards for this?  Someone suggested (on the
zope-xml wiki) that it would be nice to be able to drop in a cataloger
that supported a presumably standard and presumably well-known XML
query syntax, and which would work throughout the database because
Zope objects would support DOM.

This is all speculation, and I personally don't know much right now
about XML database interfaces and how finished or well-regarded they

An excellent introduction to this topic is:

"Putting XML in context with hierarchical, relational, and
object-oriented models" by David Mertz.

Author is a python developer with lots of interesting XML stuff.
See also his xml_matters 1 and 2 for xml_object and xml_pickle with
much nicer pythonic syntax instead of using DOM directly.

Article is also *essential* background for the distinction between
Object Mapping and Object Relational Mapping which needs to be
understood by anyone participating in this discussion.

An example of a python ODBMS with some partial support for OQL is 4ODS
from 4 Suite, which uses a very natural pythonic syntax for objects
stored in and queried from PostgreSQL:

Following is from 4Suite-docs-0.11/4Suite-0.11/html/4ODS-userguide.html
available via:

How to use the system (a very basic walk through)

First create an ODL file, test.odl, that represents what you want to store:

module simple {
  class person {
    attribute string name;
    attribute double weight;
    relationship Person spouse inverse Person::spouse_of;
    relationship Person spouse_of inverse Person::spouse;
    relationship list<Person> children inverse Person::child_of;
    relationship Person child_of inverse Person::children;
  };
  class employee (extends person) {
    attribute string id;
  };
};

Now create a new database and initialize

 #OdlParse -ifp test test.odl

Now write some python code to do stuff with these people


#Everything that is persistent must be done inside a transaction and an
#open database
from Ft.Ods import Database
db = Database.Database()'test')

tx =

#Create a new instance of some objects
import person
import employee
dad =
mom =
son1 =
son2 =
daughter =

#Set some attributes = "Pops" = "Ma" = "Joey" = "Bobby" = "Betty"
dad.weight = 240.50

#We can set attributes not defined in the ODL but they will not persist
mom.address = "1234 Error Way"

#Set some relationships

#First set a one to one relationship
dad.spouse = mom

#Or we could have done it via the ODMG spec

#Add some children to the dad (our data model does not let mom have
#children; we'd need a family struct, left up to the reader)

#We can create relationships both ways

#Shortcut for adding
dad.children = daughter

#Now root the family to some top level object.
db.bind(dad, "The Fam")

#Make it so

#Outside of a transaction we can still access the objects.
#However, any changes we make will not persist.
#NOTE: because 4ODS caches relationships, any relationships that were not
#traversed during the transaction cannot be traversed now, because an
#object cannot be loaded from the db outside of a transaction.

#Start a new tx to fetch

tx =

newDad = db.lookup("The Fam")

print newDad.children[0].name
print newDad.spouse

#Discard this transaction

Ft/Ods/test_suite and Ft/Ods/demo are good places to look for more examples

See also:

Some other relevant references are:

Extraction of DBMS catalogs to XML using python.

PostgreSQL as XML repository

Note that none of this has much to do with the original topic of
Object-*Relational* Mapping.

*Essential* background for understanding what an object-relational
persistence layer looks like is:

It isn't very long and there *absolutely* isn't any point discussing
how to design such an OR persistence layer without first reading
and fully understanding it. (I say that after having carefully
studied all the messages in this discussion - though I also said
so before ;-)

The rest of that web site has 

[Zope-dev] RE: [Zope] REQUIRING Python 2.1??

2001-04-13 Thread Albert Langer

I can't quite help wondering whether someone at DC has maybe gotten so
"into" the development of Py 2.1 that they just can't wait to use its new
stuff, whether it's objectively what's best for Zope or not.  The prudent
thing to do would have been to add features as needed using
1.5.2-compatible code, or at best to offer a "new18n" branch that requires
2.1, which people who are THAT desperate for i18n could choose to follow if
they wanted.  Then, say 6-12 months after 2.1 is gold, you could unify and
require it for 3.0.  Instead, for the sake of being able to let the Python
developers stick a Zope logo on the 2.1 release, we are risking a boatload
of trouble.

As far as I can make out, the strategy you advocate is more or less exactly
what they *did* do - so smoothly you didn't even notice.

The *big* leap is from 1.5.2 to 2.0, which has been out for quite a while.
I18N is *desperately* needed but had to be delayed because of the
compatibility problems you are rightly concerned about. So even after
I18N became feasible with 2.0, the main branch was made compatible
with 2.0, but binaries were released with 1.5.2 to avoid risking a
boatload of trouble, while enabling people desperate for I18N to start
using 2.0 and at the same time discover as much as possible of the
hiccups before the general switchover.

Waiting for the "odd numbered release" is also a generally sound
policy. Essentially, you are confusing that prudent delay in
completing the smoothly planned (and very clearly announced long ago)
switch from 1.5.2 to 2.x with a sudden rush to 2.1. Whatever
problems do occur will overwhelmingly be from the 2.x, not from
it being 2.1 in particular.
