[Sorry for the first reply containing only the quote of Marc's mail...
Seems that I'm getting too tired...]

marc lindahl wrote:
> I've recently been testing a new setup with XFS
> (http://oss.sgi.com/projects/xfs/1.0_release.html) and decided to try to
> bloat data.fs before using this system for production.  The computer is set
> up with zope 2.3.3 installed from source (the old fashioned way, with
> everything in one directory), with a separate 18GB disk as /usr/local/zope.


>   File /usr/local/zope/Zope-2.3.3/lib/python/ZODB/FileStorage.py, line 745,
> in tpc_vote
>     (Object: /usr/local/zope/Zope-2.3.3/var/Data.fs)
> OverflowError: (see above)
> --------------------------
> Lines 744 and 745 are:
>             pos=self._pos
>             file.seek(pos)
> it looks like file.seek doesn't like the long int?
> I found one reference to large file support in Python:
> http://www.python.org/doc/current/lib/posix-large-files.html
> Could it be just that the default install script in the release needs to
> enable large file support?  Or....???


I had similar problems when I tried to build Python on a Linux box
with large file support. I haven't yet had time to test Zope with a
database > 2GB, but a tiny test program bailed out in much the same way.
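For reference, such a tiny test program might look like the following (a hypothetical sketch in modern Python syntax, not the exact program I used): it seeks one byte past the signed 32-bit limit and writes. On an interpreter built without large file support, the seek() call itself raises OverflowError; on a correctly built one it just creates a sparse ~2 GB file.

```python
# Hypothetical reproduction: seek past the signed 32-bit limit and
# write one byte. Without large file support, seek() raises
# OverflowError; with it, we get a sparse file just over 2 GB.
import os
import tempfile

TWO_GB = 2 ** 31  # 2147483648, the first offset a 32-bit off_t cannot hold

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.seek(TWO_GB)   # this is where a 32-bit-only build bails out
        f.write(b"\0")
    print("large files OK, size =", os.path.getsize(path))
finally:
    os.remove(path)
```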

In my case, it turned out that it was not enough to set the environment
variable CC, as described on the page on the Python site you mentioned.
(To be precise, I used CFLAGS instead of CC, as recommended in a comment
in config.h.)

Eventually, the following worked for me:


- check config.h: does it contain the lines
"#define "
"#define HAVE_FTELLO 1"

(HAVE_FTELL64 instead of HAVE_FTELLO might work too)

- add the following two lines at the top of config.h:

#define _FILE_OFFSET_BITS 64

- make && make install

Without the additional #defines, the glibc header files seem to "forget"
about large file support.
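Put together, the rebuild looks roughly like this (a sketch only: the source directory name is an assumption, and the flag spellings follow the glibc large-file-support convention rather than anything Zope-specific):

```shell
# Hypothetical rebuild recipe, assuming a Python 2.x source tree.
cd Python-2.1                                     # assumed directory name
CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure
grep FTELLO config.h      # expect: #define HAVE_FTELLO 1
# if the defines did not make it into config.h, add them at the top by hand
make && make install
```

Defining _FILE_OFFSET_BITS to 64 before the glibc headers are included makes off_t 64 bits wide, so the stdio seek/tell calls can address offsets beyond 2 GB.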



Zope-Dev maillist  -  [EMAIL PROTECTED]
**  No cross posts or HTML encoding!  **
(Related lists - 
 http://lists.zope.org/mailman/listinfo/zope )
