Hi,

The problem is not the kernel - 32-bit Linux kernels have no problem handling files larger than 2GB. But if a program is not compiled with large file support, its writes will fail (with EFBIG, and on some kernels the process is sent SIGXFSZ) as soon as it tries to go past the 2GB barrier. The solution is to compile with the CFLAGS "-D_LARGE_FILES -D_FILE_OFFSET_BITS=64". In effect this just makes the program call open64() and lseek64() instead of open() and lseek().
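
A quick way to see what those flags change (a minimal sketch, not Kannel code; the file name lfs_check.c is just for illustration): with -D_FILE_OFFSET_BITS=64, a 32-bit glibc build gets a 64-bit off_t and the plain open()/lseek()/stdio calls are transparently mapped to their 64-bit variants.

/* lfs_check.c - minimal sketch (not Kannel code): shows how
   -D_FILE_OFFSET_BITS=64 changes the width of off_t on a 32-bit build. */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
        /* Without the flag this typically prints 4 on 32-bit Linux,
           so file offsets wrap at 2GB; with the flag it prints 8. */
        printf("sizeof(off_t) = %u bytes\n", (unsigned) sizeof(off_t));
        return 0;
}

Compiled plainly on a 32-bit system this should print 4; compiled with "gcc -D_FILE_OFFSET_BITS=64 lfs_check.c" it should print 8.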

Compile Kannel this way, and it can handle enormous log files without crashing.

I thought that large file support was actually enabled by default now (in the latest CVS) - I believe I saw a note about that some time ago.

Med venlig hilsen / Best regards

Peter Christensen

Developer
------------------
Cool Systems ApS

Tel: +45 2888 1600
Mai: [EMAIL PROTECTED]
www: www.coolsystems.dk


Andreas Fink wrote:
Vincent says:
- logrotate is your friend... :)
- or use 64 bits arch


1. Logrotate is already my friend (it was misconfigured though...)
2. A 64-bit architecture is not required for larger files. Mac OS X on Intel has no problem creating such large files.

On 06.09.2006, at 02:34, Mi Reflejo wrote:

The 2GB limit is in 32-bit CPUs: more exactly in libc, the kernel fs drivers and the VFS layer.

This is not correct anymore. The kernel is running 32-bit (but the machine is actually an AMD Opteron with 64-bit support) and can deal with large filesystems. The filesystem also supports files larger than 2GB (people using VMware would cry otherwise). So the only location left is libc.

See this:
Let me create a file larger than 2G

$ dd if=/dev/zero of=test bs=1M count=2050
2050+0 records in
2050+0 records out
2149580800 bytes (2.1 GB) copied, 48.2032 seconds, 44.6 MB/s
[EMAIL PROTECTED] ~]$ ls -l test
-rw-rw-r-- 1 afink afink 2149580800 Sep  6 05:42 test
[EMAIL PROTECTED] ~]$

So we can create files larger than 2GB on that filesystem. This was not the case in 2001, when I ran into that problem too and the filesystem really was the limit. So it cannot be the kernel, and not the filesystem. So it must be a libc limitation, but for appending data to an existing file it sounds rather strange to me that fprintf has this limit. Let's test this with a small piece of C:

#include <stdio.h>
#include <errno.h>

int main(void)
{
        FILE *f;
        int i;

        /* open the existing 2.1GB file for appending */
        f = fopen("test.bin", "a");
        if (f == NULL)
        {
                fprintf(stderr, "could not open file\nerrno= %d\n", errno);
                return -1;
        }
        fseek(f, 0, SEEK_END);
        for (i = 0; i < 1000; i++)
        {
                fprintf(f, "The world is nice\n");
        }
        fclose(f);
        return 0;
}

This code shows that when I do an fopen() on the 2.1GB file, I get error 27 (EFBIG) back. So it must be a libc issue. However, the same libc is used on my other platforms too and doesn't show this behaviour.
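
For comparison, the same append should go through on the 2.1GB file if the 64-bit open is used. A minimal sketch (fopen64() is a glibc extension; equivalently, recompile the original test with -D_FILE_OFFSET_BITS=64 so that plain fopen() maps to it):

#define _LARGEFILE64_SOURCE   /* glibc: expose fopen64() */
#include <stdio.h>
#include <errno.h>

int main(void)
{
        FILE *f;
        int i;

        /* fopen64() opens the file with O_LARGEFILE, so an existing
           file that is already past 2GB no longer fails with EFBIG. */
        f = fopen64("test.bin", "a");
        if (f == NULL)
        {
                fprintf(stderr, "could not open file\nerrno= %d\n", errno);
                return -1;
        }
        for (i = 0; i < 1000; i++)
        {
                fprintf(f, "The world is nice\n");
        }
        fclose(f);
        return 0;
}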

As "dd" has no probelm, Kannel can be made not having a problem too.
Furthermore, Kannel should not crash in that scenario which it currently does. And Kannel doesnt recover from that crash after a restart because the logfile doesnt change its size (we dont truncate it). So using the box around the box to autorestart it, wont solve it.
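
On the crash itself: a daemon can treat a full log file as a non-fatal condition. This is only a sketch of the idea (not Kannel's actual logging code; the log_line() helper and the file name kannel.log are made up): check the write result and hand EFBIG back to the caller, so it can rotate or stop logging instead of dying.

#include <stdio.h>
#include <errno.h>

/* Sketch only - not Kannel code. Returns 0 on success, -1 on any
   write error; on EFBIG the caller should rotate or truncate the
   log instead of the whole process aborting. */
static int log_line(FILE *f, const char *msg)
{
        if (fprintf(f, "%s\n", msg) < 0 || fflush(f) != 0)
        {
                if (errno == EFBIG)
                        return -1;   /* file-size limit reached */
                return -1;           /* other write error */
        }
        return 0;
}

int main(void)
{
        FILE *f = fopen("kannel.log", "a");   /* hypothetical name */

        if (f == NULL)
                return 1;
        if (log_line(f, "test entry") < 0)
                fprintf(stderr, "log write failed (errno=%d), "
                        "would rotate here instead of crashing\n", errno);
        fclose(f);
        return 0;
}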
