Re: Logfiles above 2G

2006-09-08 Thread Alexander Malysh
Hi Andreas,

it's handled fine in glibc when some additional flags are defined. I
added these to the current CVS HEAD, and it works fine on Linux. Please
make sure you use the latest CVS HEAD and rerun the configure script.
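
If you want to double-check that the rebuilt binaries really got
large-file support, here is a quick sketch (illustrative only, not
Kannel code) you can compile in the same environment:

/* prints 8 when configure put -D_FILE_OFFSET_BITS=64 into CFLAGS */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    printf("sizeof(off_t) = %u\n", (unsigned) sizeof(off_t));
    return 0;
}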

Thanks,
Alex

Andreas Fink wrote:

 There were days when hard disks had a limit of 2GB or 4GB
 There were days when partitions had a limit of 2GB or 4GB
 There were days when files had a limit of 2GB or 4GB
 Those days are long, long gone.
 
 However if kannel's gwlib runs in full debug mode and the logfile
 hits 2GB, the application quits/stops working, and when you relaunch
 it, it appends to the same 2GB logfile and quits again because it
 can't go beyond this 2GB limit.
 
 Now this is not a bug of Kannel but one of the operating system.
 This bug doesn't exist under MacOS X, but it does exist in Linux
 Fedora 5 with an ext3 filesystem. From reading log.c, I can see that
 kannel does fopen(filename, "a") and then vfprintf or fprintf. As
 the file is opened append-only, no seeks are used or anything fancy,
 so I cannot understand why the file is limited to 2GB even though the
 filesystem for sure can handle files larger than 2GB.
 
 On our system it takes 2-3 days to hit this problem with an empty log
 file. As we mainly use MacOS X we never see this problem, but having
 added a new Linux machine to the park, I'm puzzled to see this
 problem, which I already spotted in 2001 and would have expected to
 have been long fixed in current Linux kernels.
 
 Anyone have a hint here?
 
 Andreas Fink
 Fink Consulting GmbH
 ---
 Tel: +41-61-332 Fax: +41-61-331  Mobile: +41-79-2457333
 Address: Clarastrasse 3, 4058 Basel, Switzerland
 E-Mail:  [EMAIL PROTECTED]
 Homepage: http://www.finkconsulting.com
 ---
 ICQ: 8239353
 MSN: [EMAIL PROTECTED] AIM: smsrelay Skype: andreasfink
 Yahoo: finkconsulting SMS: +41792457333

-- 
Thanks,
Alex




Re: Logfiles above 2G

2006-09-07 Thread Enver ALTIN
On Thu, 2006-09-07 at 01:42 +0200, Andreas Fink wrote:
 But if I open the file with fopen64, then it works. Rather strange as  
 Linux seems to be the only OS where this has been done like that.

There are several reasons why the maintainers of the GNU C Library have
chosen this approach. Obviously, using 64-bit I/O functions on a 32-bit
system is slightly slower and also consumes more memory.

Since the C library is designed to be used by everybody, including
applications that deal with thousands of files of widely varying
sizes (remember squid?), 64-bit I/O is optional and can easily be
enabled at compile time.

I'd second this.
-- 
Enver




Re: Logfiles above 2G

2006-09-07 Thread Guillaume Cottenceau
Enver ALTIN ealtin 'at' parkyeri.com writes:

 On Thu, 2006-09-07 at 01:42 +0200, Andreas Fink wrote:
  But if I open the file with fopen64, then it works. Rather strange as  
  Linux seems to be the only OS where this has been done like that.
 
 There are several reasons why the maintainers of the GNU C Library have
 chosen this approach. Obviously, using 64-bit I/O functions on a 32-bit
 system is slightly slower and also consumes more memory.
 
 Since the C library is designed to be used by everybody, including
 applications that deal with thousands of files of widely varying
 sizes (remember squid?), 64-bit I/O is optional and can easily be
 enabled at compile time.

Conversely, applications that deal with thousands of files could just
as easily enable an option at compile time to stick to non-large
files.

-- 
Guillaume Cottenceau
Create your personal SMS or WAP Service - visit http://mobilefriends.ch/



Re: Logfiles above 2G

2006-09-06 Thread Peter Christensen

Hi,

The problem is not the kernel - 32-bit Linux kernels have no problem 
handling files larger than 2GB. But if the program is not compiled with 
support for large files, it is sent a SIGXFSZ signal (whose default 
action terminates the process) when it attempts to write past the 2GB 
barrier. The solution is to compile with the CFLAGS -D_LARGE_FILES 
-D_FILE_OFFSET_BITS=64. Actually this just makes the program call 
open64() and lseek64() instead of open() and lseek().
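
To illustrate (a minimal sketch, not Kannel's actual code; the file
name is made up), here is the same effect with the macro defined in the
source rather than on the command line:

#define _FILE_OFFSET_BITS 64   /* must precede every #include */
#include <stdio.h>

int main(void)
{
    /* With 64-bit offsets glibc maps fopen to the fopen64 family,
     * so appending past 2GB just works. */
    FILE *f = fopen("huge.log", "a");
    if (f == NULL)
        return 1;
    fprintf(f, "one more log line\n");
    fclose(f);
    return 0;
}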


Compile kannel in this way, and kannel can handle enormous logs without 
crashing.


I thought that large file support was actually enabled by default now 
(in the latest CVS) - I believe I saw a note about that some time ago.


Med venlig hilsen / Best regards

Peter Christensen

Developer
--
Cool Systems ApS

Tel: +45 2888 1600
Mail: [EMAIL PROTECTED]
www: www.coolsystems.dk


Andreas Fink wrote:

Vincent says:

- logrotate is your friend... :)
- or use a 64-bit arch

1. Logrotate is already my friend (it was misconfigured though...)
2. A 64-bit architecture is not required for larger files. MacOS X on
Intel has no problem creating such large files.


On 06.09.2006, at 02:34, Mi Reflejo wrote:


The 2GB limit is on 32-bit CPUs: more exactly, in libc, the kernel fs
drivers and the VFS layer.


This is not correct anymore. The kernel is running 32-bit (on what is 
actually an AMD Opteron with AMD64 support) but can deal with large 
filesystems. The filesystem also supports files larger than 2GB (people 
using VMware would cry otherwise). So the only location left is in libc.


See this:
Let me create a file larger than 2G

$ dd if=/dev/zero of=test bs=1M count=2050
2050+0 records in
2050+0 records out
2149580800 bytes (2.1 GB) copied, 48.2032 seconds, 44.6 MB/s
[EMAIL PROTECTED] ~]$ ls -l test
-rw-rw-r-- 1 afink afink 2149580800 Sep  6 05:42 test
[EMAIL PROTECTED] ~]$

So we can create files larger than 2GB on that filesystem. This was not 
the case in 2001, when I ran into this problem too and the filesystem 
really was the limit. So it cannot be the kernel, nor the filesystem. So 
it must be a libc limitation, but for appending data to an existing 
file, it sounds rather strange to me that fprintf has this limit. Let's 
test this with a small piece of C:


#include <stdio.h>
#include <errno.h>

int main(void)
{
    FILE *f;
    int i;

    f = fopen("test.bin", "a");
    if (f == NULL) {
        fprintf(stderr, "could not open file\nerrno= %d\n", errno);
        return -1;
    }
    fseek(f, 0, SEEK_END);
    for (i = 0; i < 1000; i++) {
        fprintf(f, "The world is nice\n");
    }
    fclose(f);
    return 0;
}

This code shows that when I do an fopen on the 2.1GB file, I get an error 
27 back (EFBIG). So it must be a libc issue. However, libc is used on my 
other platforms too and doesn't show this behaviour.


As dd has no problem, Kannel can be made to have no problem too.
Furthermore, Kannel should not crash in that scenario, which it currently 
does.
And Kannel doesn't recover from that crash after a restart, because the 
logfile doesn't change its size (we don't truncate it). So using the box 
around the box to autorestart it won't solve it.


Re: Logfiles above 2G

2006-09-06 Thread Alberto Devesa

Try to compile your C code with -D_FILE_OFFSET_BITS=64



On Wednesday 06 September 2006 06:08, Andreas Fink wrote:
 [...]

 This code shows that when I do an fopen on the 2.1GB file, I get an
 error 27 back (EFBIG). So it must be a libc issue. However, libc is
 used on my other platforms too and doesn't show this behaviour.

 [...]



Re: Logfiles above 2G

2006-09-06 Thread Vincent CHAVANIS

I'm in favor of putting this in as the default value,
but we need to be sure it does not break any existing installations
(for filesystems that do not support it, for example).

Vincent.

--
Telemaque - 06200 NICE - (FR)
Service Technique/Reseau - NOC
Developpement SMS/MMS/Kiosques
http://www.telemaque.fr/
[EMAIL PROTECTED]
Tel : +33 4 93 97 71 64 (fax 68)
- Original Message - 
From: Peter Christensen [EMAIL PROTECTED]

To: Andreas Fink [EMAIL PROTECTED]
Cc: devel@kannel.org
Sent: Wednesday, September 06, 2006 11:48 AM
Subject: Re: Logfiles above 2G



Hi,

The problem is not the kernel - 32-bit Linux kernels have no problem 
handling files larger than 2GB. [...]

I thought that large file support was actually enabled by default now (in 
the latest CVS) - I believe I saw a note about that some time ago.

[...]

Re: Logfiles above 2G

2006-09-06 Thread Guillaume Cottenceau
Andreas Fink andreas 'at' fink.org writes:

 However if kannel's gwlib runs in full debug mode and the logfile
 hits 2GB, the application quits/stops working, and when you relaunch
 it, it appends to the same 2GB logfile and quits again because it
 can't go beyond this 2GB limit.

log.c seems to be able to use syslog, so on Linux at least it
could use syslog and have no file-size limit or log rotation problem.
I'm rather confused.
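
For reference, a minimal sketch of the syslog route (identifiers are
made up; Kannel's log.c wires this up its own way):

#include <syslog.h>

int main(void)
{
    openlog("kannel-test", LOG_PID, LOG_DAEMON);  /* ident is made up */
    syslog(LOG_DEBUG, "debug line %d", 42);
    closelog();
    return 0;
}

syslogd then owns the files, so rotation and size limits become its
problem rather than the application's.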

-- 
Guillaume Cottenceau
Create your personal SMS or WAP Service - visit http://mobilefriends.ch/



Re: Logfiles above 2G

2006-09-06 Thread Mi Reflejo

I didn't mean the problem is IN 32-bit CPUs, but in the kernel
filesystem drivers, libc, and the VFS layer WHEN you are using 32-bit
CPUs. Of course you can deal with large filesystems; the fact that you
can use XFS, for example, shows that.

That test is quite vague. Try these:

dd if=/dev/zero of=bigfile bs=1024 count=3145728
dd if=/dev/zero of=bigfile bs=1024 count=5145728
dd if=/dev/zero bs=1024 count=5145728 >> bigfile

If you are really worried about the limit, you should look at LFS (the
Large File Summit work). (I'm part of the development and I can help you
if you need it, but I think you don't. ;) )

If you only need large file support in kannel, you should recompile
using -D_FILE_OFFSET_BITS=64 (you will probably need some code changes).
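
On the code-changes point: with 64-bit off_t, fseek()/ftell() still
traffic in a 32-bit long, so offset-handling code usually has to move
to fseeko()/ftello(). A minimal sketch, assuming _FILE_OFFSET_BITS=64
(file name made up):

#define _FILE_OFFSET_BITS 64   /* before any #include */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("bigfile", "r");
    if (f == NULL)
        return 1;
    /* off_t is now 64-bit, so offsets past 2GB survive the round trip;
     * ftell() would have to squeeze them into a 32-bit long. */
    if (fseeko(f, 0, SEEK_END) == 0)
        printf("file size: %lld bytes\n", (long long) ftello(f));
    fclose(f);
    return 0;
}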

M
On 06 Sep 2006 15:54:06 +0200, Guillaume Cottenceau [EMAIL PROTECTED] wrote:

Andreas Fink andreas 'at' fink.org writes:

 However if kannel's gwlib runs in full debug mode and the logfile
 hits 2GB, the application quits/stops working, and when you relaunch
 it, it appends to the same 2GB logfile and quits again because it
 can't go beyond this 2GB limit.

log.c seems to be able to use syslog, so on Linux at least it
could use syslog and have no file-size limit or log rotation problem.
I'm rather confused.

--
Guillaume Cottenceau
Create your personal SMS or WAP Service - visit http://mobilefriends.ch/






Re: Logfiles above 2G

2006-09-06 Thread Andreas Fink
Use of syslog is optional. I don't compile it with syslog as I want to
have separate logfiles.

My small test C program shows that a normal compilation hits this wall,
but if I open the file with fopen64, then it works. Rather strange, as
Linux seems to be the only OS where this has been done like that.
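
Here is the fopen64 variant that works for me (a minimal sketch;
_LARGEFILE64_SOURCE is what exposes fopen64 on glibc, and the file
name is made up):

#define _LARGEFILE64_SOURCE   /* exposes fopen64 on glibc */
#include <stdio.h>

int main(void)
{
    /* fopen64 opens with O_LARGEFILE, so an existing 2.1GB file can
     * be opened and appended to even in a 32-bit build. */
    FILE *f = fopen64("test.bin", "a");
    if (f == NULL)
        return 1;
    fprintf(f, "The world is nice\n");
    fclose(f);
    return 0;
}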


I'll check what the current CVS does about this.

-D_FILE_OFFSET_BITS=64 is not working here.
I will verify with -D_LARGE_FILES. I also found similar names (but not
the same) in the header files.

This was tested on Fedora 5.



On 06.09.2006, at 15:54, Guillaume Cottenceau wrote:


Andreas Fink andreas 'at' fink.org writes:


However if kannel's gwlib runs in full debug mode and the logfile
hits 2GB, the application quits/stops working, and when you relaunch
it, it appends to the same 2GB logfile and quits again because it
can't go beyond this 2GB limit.


log.c seems to be able to use syslog, so on Linux at least it
could use syslog and have no file-size limit or log rotation problem.
I'm rather confused.

--
Guillaume Cottenceau
Create your personal SMS or WAP Service - visit http://mobilefriends.ch/





Re: Logfiles above 2G

2006-09-05 Thread Vincent CHAVANIS



Two solutions here:

- logrotate is your friend... :)
- or use a 64-bit arch

Vincent

--
Telemaque - 06200 NICE - (FR)
Service Technique/Reseau - NOC
Developpement SMS/MMS/Kiosques
http://www.telemaque.fr/
[EMAIL PROTECTED]
Tel : +33 4 93 97 71 64 (fax 68)

  - Original Message -
  From: Andreas Fink
  To: devel Devel
  Sent: Tuesday, September 05, 2006 8:33 PM
  Subject: Logfiles above 2G

  [...]


Re: Logfiles above 2G

2006-09-05 Thread Mi Reflejo

The 2GB limit is on 32-bit CPUs: more exactly, in libc, the kernel fs
drivers and the VFS layer.

The limit exists because Linux ports for 32-bit CPUs use signed 32-bit
integers for file offsets and locking, so: 2^31 - 1 bytes = 2GB
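
To make the arithmetic concrete (a tiny illustrative sketch, not from
Kannel):

#include <stdio.h>

int main(void)
{
    /* largest offset a signed 32-bit integer can hold */
    long max_off = 2147483647L;   /* 2^31 - 1 */
    printf("max offset: %ld bytes (~%.3f GiB)\n",
           max_off, max_off / (1024.0 * 1024.0 * 1024.0));
    return 0;
}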

My solutions are:
1) You can implement LFS
2) Use XFS
3) Logrotate

M.

On 9/5/06, Vincent CHAVANIS [EMAIL PROTECTED] wrote:



[...]








Re: Logfiles above 2G

2006-09-05 Thread Andreas Fink

Vincent says:

- logrotate is your friend... :)
- or use a 64-bit arch

1. Logrotate is already my friend (it was misconfigured though...)
2. A 64-bit architecture is not required for larger files. MacOS X
on Intel has no problem creating such large files.


On 06.09.2006, at 02:34, Mi Reflejo wrote:


The 2GB limit is on 32-bit CPUs: more exactly, in libc, the kernel fs
drivers and the VFS layer.


This is not correct anymore. The kernel is running 32-bit (on what is  
actually an AMD Opteron with AMD64 support) but can deal with large  
filesystems. The filesystem also supports files larger than 2GB  
(people using VMware would cry otherwise). So the only location left  
is in libc.


See this:
Let me create a file larger than 2G

$ dd if=/dev/zero of=test bs=1M count=2050
2050+0 records in
2050+0 records out
2149580800 bytes (2.1 GB) copied, 48.2032 seconds, 44.6 MB/s
[EMAIL PROTECTED] ~]$ ls -l test
-rw-rw-r-- 1 afink afink 2149580800 Sep  6 05:42 test
[EMAIL PROTECTED] ~]$

So we can create files larger than 2GB on that filesystem. This was  
not the case in 2001, when I ran into this problem too and the  
filesystem really was the limit. So it cannot be the kernel, nor the  
filesystem. So it must be a libc limitation, but for appending data to  
an existing file, it sounds rather strange to me that fprintf has this  
limit. Let's test this with a small piece of C:


#include <stdio.h>
#include <errno.h>

int main(void)
{
    FILE *f;
    int i;

    f = fopen("test.bin", "a");
    if (f == NULL) {
        fprintf(stderr, "could not open file\nerrno= %d\n", errno);
        return -1;
    }
    fseek(f, 0, SEEK_END);
    for (i = 0; i < 1000; i++) {
        fprintf(f, "The world is nice\n");
    }
    fclose(f);
    return 0;
}

This code shows that when I do an fopen on the 2.1GB file, I get  
error 27 back (EFBIG). So it must be a libc issue. However, libc is  
used on my other platforms too and doesn't show this behaviour.


As dd has no problem, Kannel can be made to have no problem too.
Furthermore, Kannel should not crash in that scenario, which it  
currently does.
And Kannel doesn't recover from that crash after a restart, because the  
logfile doesn't change its size (we don't truncate it). So using the  
box around the box to autorestart it won't solve it.