Hi Andreas,
it is handled fine in glibc with some additional flags defined. I added these to the current CVS HEAD and it works fine on Linux. Please make sure you use the latest CVS HEAD and rerun the configure script.
Thanks,
Alex
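The configure-time detection Alex describes is typically done with autoconf's large-file macro; a minimal sketch of such a check (this is the standard autoconf idiom, not necessarily Kannel's exact configure.in):

```m4
dnl Enable large-file support where available: on 32-bit glibc this
dnl arranges for _FILE_OFFSET_BITS=64 (and _LARGE_FILES where needed)
dnl to be defined, so off_t, fopen, fseeko etc. become 64-bit capable.
AC_SYS_LARGEFILE
```

After rerunning autoconf and configure, the generated config.h carries the defines and no source changes are needed.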
Enver ALTIN ealtin 'at' parkyeri.com writes:
On Thu, 2006-09-07 at 01:42 +0200, Andreas Fink wrote:
But if I open the file with fopen64, then it works. Rather strange, as Linux seems to be the only OS where this has been done like that.
There are several reasons why the maintainers of the GNU C Library have chosen this approach. Obviously, using
Hi,
The problem is not the kernel: 32-bit Linux kernels have no problem handling files larger than 2GB. But if the program is not compiled with support for large files, its writes fail (errno EFBIG) when they attempt to go past the 2GB barrier. The solution is to compile with the CFLAGS:
Try to compile your C code with -D_FILE_OFFSET_BITS=64
----- Original Message -----
From: Peter Christensen [EMAIL PROTECTED]
To: Andreas Fink [EMAIL PROTECTED]
Cc: devel@kannel.org
Sent: Wednesday, September 06, 2006 11:48 AM
Subject: Re: Logfiles above 2G
Andreas Fink andreas 'at' fink.org writes:
However, if kannel's gwlib runs in full debug mode and the logfile hits 2GB, the application quits/stops working; when you relaunch it, it appends to the same 2GB logfile and quits again because it can't go beyond this 2GB limit.
log.c seems to be
I didn't mean that the problem is IN 32-bit CPUs, but in the kernel filesystem drivers, libc, and VFS layer WHEN you are using 32-bit CPUs.
Of course you can deal with large filesystems; the fact that you can use XFS, for example, shows that.
That test is quite vague. Try it:
dd if=/dev/zero of=bigfile
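Run verbatim, that dd command only stops when the disk fills. A bounded variant of the same check (GNU dd; the filename is arbitrary) seeks past 3GB and writes a single byte, producing a sparse file in well under a second:

```shell
# Seek 3GiB into the output file, then write one byte. If the libc,
# VFS and filesystem handle large files, this succeeds instantly and
# the file's apparent size is past the 2GB barrier.
dd if=/dev/zero of=bigfile bs=1 count=1 seek=3G

stat -c '%s' bigfile   # apparent size: 3221225473 bytes
du -h bigfile          # actual allocation (tiny if the fs supports sparse files)
```

Remember to remove bigfile afterwards.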
Use of syslog is optional. I don't compile it with syslog, as I want to have separate logfiles.
My small C test program shows that a normal compilation hits this wall. But if I open the file with fopen64, then it works. Rather strange, as Linux seems to be the only OS where this has been done like that.
----- Original Message -----
From: Andreas Fink
To: devel Devel
Sent: Tuesday, September 05, 2006 8:33 PM
Subject: Logfiles above 2G
There were days when hard disks had a limit of 2GB or 4GB.
There were days when partitions had a limit of 2GB or 4GB.
There were days when files had a limit of 2GB or 4GB.
Those days are long, long gone.
Vincent says:
- logrotate is your friend... :)
- or use a 64-bit arch
1. Logrotate is already my friend (it was misconfigured, though...)
2. A 64-bit architecture is not required for having larger files. Mac OS X on Intel has no problem creating such large files.