Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Mark Kirkwood

Andrew Sullivan wrote:

On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:

I'm actually amazed that postgres isn't already using large file
support.  Especially for tools like dump. 


Except it would only cause confusion if you ran such a program on a
system that didn't itself have largefile support.  Better to make the
admin turn all these things on on purpose, until everyone is running
64 bit systems everywhere.

A

Ah yes ... extremely good point - I had not considered that.

I am pretty sure all reasonably current (kernel >= 2.4) Linux distros 
support largefile out of the box - so it should be safe for them.

Other operating systems where 64 bit file access can be disabled or 
unconfigured require more care - possibly (sigh) two binary RPMs with 
distinct 32 and 64 bit labels ... (I think the big O does this for 
Solaris).

Cheers

Mark 



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Oliver Elphick

On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:
  Are there any filesystems in common use (not including windows ones) that
  don't support 32-bit filesizes?
  
  Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),
  probably much more. What about the BSDs? XFS? etc
  
 
 Ext2 & 3 should be okay.  XFS (very sure) and JFS (reasonably sure)
 should also be okay...IIRC.  NFS and SMB are probably problematic, but I
 can't see anyone really wanting to do this. 

Hmm. Whereas I can't see many people putting their database files on an
NFS mount, I can readily see them using pg_dump to one, and pg_dump is
the program where large files are really likely to be needed.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Oliver Elphick

On Mon, 2002-08-12 at 21:07, Peter Eisentraut wrote:

 This is not the only issue.  You really need to check all uses of off_t
 (for example printf("%ld", off_t) will crash) and all places where off_t
 should have been used in the first place.  Furthermore you might need to
 replace ftell() and fseek() by ftello() and fseeko(), especially if you
 want pg_dump to support large archives.

Searching for fseek, ftell and off_t yields only 12 files in the whole
source tree, so fortunately the impact is not enormous.  As expected,
pg_dump is the main program involved.

There seem to be several places in the pg_dump code where int is used
instead of long int to receive the output of ftell().  I presume these
ought to be cleaned up as well.

Looking at how to deal with this, is the following going to be
portable?:

in pg_dump/Makefile:
CFLAGS += -D_LARGEFILE_SOURCE -D_OFFSET_BITS=64

in pg_dump.h:
#ifdef _LARGEFILE_SOURCE
  #define FSEEK fseeko
  #define FTELL ftello
  #define OFF_T_FORMAT "%Ld"
  typedef off_t OFF_T;
#else
  #define FSEEK fseek
  #define FTELL ftell
  #define OFF_T_FORMAT "%ld"
  typedef long int OFF_T;
#endif

In pg_dump/*.c:
change relevant occurrences of fseek and ftell to FSEEK and
FTELL

change all file offset parameters used or returned by fseek and
ftell to OFF_T (usually from int)

construct printf formats with OFF_T_FORMAT in appropriate places
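To make the three steps concrete, here is a sketch of the kind of call site the conversion would produce, using the non-largefile branch of the wrappers proposed above (save_data_offset is a hypothetical helper for illustration, not a real pg_dump function):

```c
#include <stdio.h>

/* Non-largefile branch of the proposed wrappers. */
#define FSEEK fseek
#define FTELL ftell
#define OFF_T_FORMAT "%ld"
typedef long int OFF_T;

/* Hypothetical call site after steps 1-3: the offset variable is
 * OFF_T (was int), the call goes through FTELL, and the printf
 * format is built by string concatenation with OFF_T_FORMAT. */
static OFF_T save_data_offset(FILE *fh)
{
    OFF_T pos = FTELL(fh);    /* was: int pos = ftell(fh); */
    fprintf(stderr, "data starts at " OFF_T_FORMAT "\n", pos);
    return pos;
}
```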

 Still, most of the configuration work is already done in Autoconf (see
 AC_FUNC_FSEEKO and AC_SYS_LARGEFILE), so the work might be significantly
 less than the time spent debating the merits of large files on these
 lists. ;-)

Since running autoconf isn't part of a normal build, I'm not familiar
with that.  Can autoconf make any of the above unnecessary?

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Larry Rosenman

On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:
 Andrew Sullivan wrote:
 
 On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
 
 I'm actually amazed that postgres isn't already using large file
 support.  Especially for tools like dump. 
 
 
 Except it would only cause confusion if you ran such a program on a
 system that didn't itself have largefile support.  Better to make the
 admin turn all these things on on purpose, until everyone is running
 64 bit systems everywhere.
 
 A
 
 Ah yes ... extremely good point - I had not considered that.
 
 I am pretty sure all reasonably current (kernel >= 2.4) Linux distros 
 support largefile out of the box - so it should be safe for them.
 
 Other operating systems where 64 bit file access can be disabled or 
 unconfigured require more care - possibly (sigh) two binary RPMs with 
 distinct 32 and 64 bit labels ... (I think the big O does this for 
 Solaris).
Then, of course, there are systems where largefile support is a
filesystem by filesystem (read: mountpoint by mountpoint) option (e.g.
OpenUNIX). 

I think this is going to be a Pandora's box. 



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Martijn van Oosterhout

On Tue, Aug 13, 2002 at 08:02:05AM -0500, Larry Rosenman wrote:
 On Tue, 2002-08-13 at 03:42, Mark Kirkwood wrote:
  Other operating systems where 64 bit file access can be disabled or 
  unconfigured require more care - possibly  (sigh) 2 binary RPMS with a 
  distinctive 32 and 64 bit label ...(I think the big O does this for 
  Solaris).
 Then, of course, there are systems where largefile support is a
 filesystem by filesystem (read: mountpoint by mountpoint) option (e.g.
 OpenUNIX). 
 
 I think this is going to be a Pandora's box. 

I don't understand. Why would you want large-file support enabled on a
per-filesystem basis? All your system programs would have to support the
lowest common denominator (i.e., with large file support). Is it to make the
kernel enforce a limit for the purposes of compatibility?

I'd suggest making it as simple as --enable-large-files and make it default
in a year or two.

-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 There are 10 kinds of people in the world, those that can do binary
 arithmetic and those that can't.




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Zeugswetter Andreas SB SD


 Looking at how to deal with this, is the following going to be
 portable?:
 
 in pg_dump/Makefile:
 CFLAGS += -D_LARGEFILE_SOURCE -D_OFFSET_BITS=64
 
 in pg_dump.h:
 #ifdef _LARGEFILE_SOURCE
   #define FSEEK fseeko
   #define FTELL ftello
   #define OFF_T_FORMAT "%Ld"
   typedef off_t OFF_T;
 #else
   #define FSEEK fseek
   #define FTELL ftell
   #define OFF_T_FORMAT "%ld"
   typedef long int OFF_T;
 #endif

No, look at the int8 code to see how to make it portable.
On AIX, e.g., it is "%lld" and long long int.

Andreas




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Oliver Elphick

On Tue, 2002-08-13 at 14:26, Zeugswetter Andreas SB SD wrote:
 
  Looking at how to deal with this, is the following going to be
  portable?:

 No, look at the int8 code to see how to make it portable.
 On AIX e.g it is %lld and long long int.

OK.  "%lld" is usable by glibc, so amend "%Ld" to "%lld".

Any other comments?

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Tom Lane

Oliver Elphick [EMAIL PROTECTED] writes:
 Looking at how to deal with this, is the following going to be
 portable?:

   #define OFF_T_FORMAT "%Ld"

That certainly will not be.  Use INT64_FORMAT from pg_config.h.

   typedef long int OFF_T;

Why not just use off_t?  In both cases?

regards, tom lane
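A minimal sketch of the portable formatting suggested here: never hand an off_t straight to printf; cast it to long long and use "%lld" (inside the backend, INT64_FORMAT from pg_config.h expands to the right format per platform). The helper name format_offset is illustrative, not real pg_dump code.

```c
#include <stdio.h>
#include <sys/types.h>

/* Format a file offset into buf. Casting to long long avoids
 * guessing whether off_t is 32 or 64 bits on this platform. */
static int format_offset(char *buf, size_t len, off_t pos)
{
    return snprintf(buf, len, "%lld", (long long) pos);
}
```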




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Oliver Elphick

On Tue, 2002-08-13 at 15:23, Tom Lane wrote:

typedef long int OFF_T;
 
 Why not just use off_t?  In both cases?

The prototype for fseek() is long int; I had assumed that off_t was not
defined if _LARGEFILE_SOURCE was not defined.

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Tom Lane

Oliver Elphick [EMAIL PROTECTED] writes:
 On Tue, 2002-08-13 at 15:23, Tom Lane wrote:
 Why not just use off_t?  In both cases?

 The prototype for fseek() is long int; I had assumed that off_t was not
 defined if _LARGEFILE_SOURCE was not defined.

Oh, you're right.  A quick look at HPUX shows it's the same way: ftell
returns long int, ftello returns off_t (which presumably is an alias
for long long int).  Okay, OFF_T seems a reasonable answer.

regards, tom lane




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Joe Conway

Oliver Elphick wrote:
 On Tue, 2002-08-13 at 03:57, Greg Copeland wrote:
Ext2 & 3 should be okay.  XFS (very sure) and JFS (reasonably sure)
should also be okay...IIRC.  NFS and SMB are probably problematic, but I
can't see anyone really wanting to do this. 
 
 Hmm. Whereas I can't see many people putting their database files on an
 NFS mount, I can readily see them using pg_dump to one, and pg_dump is
 the program where large files are really likely to be needed.

I wouldn't totally discount using NFS for large databases. Believe it or 
not, with an Oracle database and a Network Appliance for storage, NFS is 
exactly what is used. We've found that we get better performance with a 
(properly tuned) NFS mounted NetApp volume than with attached storage on 
our HPUX box with several 100+GB databases.

Joe





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Oliver Elphick

On Tue, 2002-08-13 at 17:11, Rod Taylor wrote:
  I wouldn't totally discount using NFS for large databases. Believe it or 
  not, with an Oracle database and a Network Appliance for storage, NFS is 
  exactly what is used. We've found that we get better performance with a 
  (properly tuned) NFS mounted NetApp volume than with attached storage on 
  our HPUX box with several 100+GB databases.
 
 We've also tended to keep logs local on raid 1 and the data on a pair of
 clustered netapps for PostgreSQL.

But large file support is not really an issue for the database itself,
since table files are split at 1GB.  Unless that changes, the database
is not a problem.
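The 1GB split works roughly as in this simplified sketch of segment-file naming: segment 0 is the bare relation file node number, and each additional gigabyte gets a ".n" suffix. (Illustrative only; the actual logic lives in the backend's storage manager.)

```c
#include <stdio.h>

#define SEGMENT_SIZE ((long long) 1024 * 1024 * 1024)  /* 1 GB */

/* Map a byte offset within a relation to its segment file name:
 * "16384" for segment 0, then "16384.1", "16384.2", ... so no
 * single file ever exceeds SEGMENT_SIZE. */
static void segment_name(char *buf, size_t len,
                         unsigned filenode, long long offset)
{
    long long seg = offset / SEGMENT_SIZE;

    if (seg == 0)
        snprintf(buf, len, "%u", filenode);
    else
        snprintf(buf, len, "%u.%lld", filenode, seg);
}
```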
 
-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 Watch ye therefore, and pray always, that ye may be 
  accounted worthy to escape all these things that shall
  come to pass, and to stand before the Son of man.
   Luke 21:36 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Tom Lane

Oliver Elphick [EMAIL PROTECTED] writes:
 But large file support is not really an issue for the database itself,
 since table files are split at 1GB.  Unless that changes, the database
 is not a problem.

I see no really good reason to change the file-split logic.  The places
where the backend might possibly need large-file support are
* backend-side COPY to or from a large file
* postmaster log to stderr --- does this fail if log output
  exceeds 2G?
There might be some other similar issues, but that's all that comes to
mind offhand.

On a system where building with large-file support is reasonably
standard, I agree that PG should be built that way too.  Where it's
not so standard, I agree with Andrew Sullivan's concerns ...

regards, tom lane




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Andrew Sullivan

On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
 
 I see no really good reason to change the file-split logic.  The places
 where the backend might possibly need large-file support are
   * backend-side COPY to or from a large file

I _think_ this causes a crash.  At least, I _think_ that's what
caused it one day (I was doing one of those jackhammer-the-server
sorts of tests, and it was one of about 50 things I was doing at the
time, to see if I could make it fall over.  I did, but not where I
expected, and way beyond any real load we could anticipate).

   * postmaster log to stderr --- does this fail if log output
 exceeds 2G?

Yes, definitely, at least on Solaris.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread strange

On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
 On a system where building with large-file support is reasonably
 standard, I agree that PG should be built that way too.  Where it's
 not so standard, I agree with Andrew Sullivan's concerns ...

What do you mean by "standard"? That only some filesystems are supported?
In Linux the vfat filesystem doesn't support largefiles, so the behaviour
is the same as if the application didn't specify O_LARGEFILE to open(2):
as Helge Bahmann pointed out, the kernel will refuse to write files larger than
2GB. In current Linux, a signal (SIGXFSZ) is sent to the application,
which then dumps core.


So, the use of O_LARGEFILE is nullified by the lack of support by the
filesystem, but no problem is introduced by the application supporting
largefiles, it already existed before.
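The failure mode described above can be reproduced without a 2GB file by shrinking RLIMIT_FSIZE. A hedged sketch for a POSIX system (write_past_limit is an illustrative helper, not postgresql code): with SIGXFSZ ignored, the oversized write fails with a plain EFBIG error instead of killing the process, which is exactly the condition an application can handle.

```c
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/stat.h>
#include <unistd.h>

/* Simulate writing past a filesystem's size limit by shrinking
 * RLIMIT_FSIZE to 4 kB. Returns the errno of the failed write
 * (EFBIG expected), or 0 if nothing failed. */
static int write_past_limit(const char *path)
{
    char block[8192];
    struct rlimit rl = { 4096, 4096 };
    int fd, err = 0;

    signal(SIGXFSZ, SIG_IGN);        /* fatal signal -> plain EFBIG */
    setrlimit(RLIMIT_FSIZE, &rl);    /* pretend the limit is 4 kB */

    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return errno;
    memset(block, 'x', sizeof block);
    (void) write(fd, block, sizeof block);   /* partial: stops at 4 kB */
    if (write(fd, block, sizeof block) < 0)  /* wholly past the limit */
        err = errno;
    close(fd);
    return err;
}
```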

All the crashes and problems presented on these lists occur when largefile
support isn't compiled in; I didn't see one occurring from any application
having the support, but not the filesystem. (Your "not so standard"
support?)

The changes to postgresql don't seem complicated; I can try to make them
myself (fcntl on stdout, stdin; add check to autoconf; etc.) if no one
else volunteers.

Regards,
Luciano Rocha

-- 
Consciousness: that annoying time between naps.




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Andrew Sullivan

On Tue, Aug 13, 2002 at 06:45:59PM +0100, [EMAIL PROTECTED] wrote:

 support isn't compiled in; I didn't see one occurring from any application
 having the support, but not the filesystem. (Your "not so standard"

Wrong.  The symptom is _exactly the same_ if the program doesn't have
the support, the filesystem doesn't have the support, or both, at
least on Solaris.  I've checked.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Greg Copeland

On Tue, 2002-08-13 at 12:45, [EMAIL PROTECTED] wrote:
 On Tue, Aug 13, 2002 at 01:04:02PM -0400, Tom Lane wrote:
  On a system where building with large-file support is reasonably
  standard, I agree that PG should be built that way too.  Where it's
  not so standard, I agree with Andrew Sullivan's concerns ...
 
 What do you mean by "standard"? That only some filesystems are supported?
 In Linux the vfat filesystem doesn't support largefiles, so the behaviour
 is the same as if the application didn't specify O_LARGEFILE to open(2):
 as Helge Bahmann pointed out, the kernel will refuse to write files larger than
 2GB. In current Linux, a signal (SIGXFSZ) is sent to the application,
 which then dumps core.
 
 
 So, the use of O_LARGEFILE is nullified by the lack of support by the
 filesystem, but no problem is introduced by the application supporting
 largefiles, it already existed before.
 

Thank you.  That's a point that I previously made...you just did
a much better job of it.  Specifically, I want to stress that enabling
large file support is not dangerous.

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Greg Copeland

On Tue, 2002-08-13 at 12:04, Tom Lane wrote:
 
 On a system where building with large-file support is reasonably
 standard, I agree that PG should be built that way too.  Where it's
 not so standard, I agree with Andrew Sullivan's concerns ...


Agreed.  This is what I originally asked for.

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread strange

On Tue, Aug 13, 2002 at 02:09:07PM -0400, Andrew Sullivan wrote:
 On Tue, Aug 13, 2002 at 06:45:59PM +0100, [EMAIL PROTECTED] wrote:
 
  support isn't compiled in; I didn't see one occurring from any application
  having the support, but not the filesystem. (Your "not so standard"
 
 Wrong.  The symptom is _exactly the same_ if the program doesn't have
 the support, the filesystem doesn't have the support, or both, at
 least on Solaris.  I've checked.

??

My point is that postgresql having the support doesn't bring NEW errors.

I never said postgresql would automagically gain support on filesystems
that don't support largefiles; I said no one mentioned an error caused by
postgresql *having* the support, but *not the filesystem*. Maybe I wasn't
clear, but I meant *new* errors.

As it seems, adding support for largefiles doesn't break anything.

Regards,
Luciano Rocha

-- 
Consciousness: that annoying time between naps.




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Peter Eisentraut

Tom Lane writes:

  The prototype for fseek() is long int; I had assumed that off_t was not
  defined if _LARGEFILE_SOURCE was not defined.

All that _LARGEFILE_SOURCE does is make fseeko() and ftello() visible on
some systems; on other systems they are available by default.

 Oh, you're right.  A quick look at HPUX shows it's the same way: ftell
 returns long int, ftello returns off_t (which presumably is an alias
 for long long int).  Okay, OFF_T seems a reasonable answer.

fseek() and ftell() using long int for the offset was a mistake, therefore
fseeko() and ftello() were invented.  (This is independent of whether the
large file interface is used.)

To activate the large file interface you define _FILE_OFFSET_BITS=64,
which transparently replaces off_t and everything that uses it with a 64
bit version.  There is no need to use any of the proposed macro tricks
(because that exact macro trick is already provided by the OS).
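The OS-provided switch Peter describes can be sketched as follows, assuming a platform that supplies the LFS transitional interface: build with -D_FILE_OFFSET_BITS=64 on the compile line and the same source handles large files, with no FSEEK/OFF_T macro wrappers in the code itself.

```c
/* Build with:  cc -D_FILE_OFFSET_BITS=64 file.c
 * and off_t, fseeko() and ftello() transparently become 64-bit. */
#include <stdio.h>
#include <sys/types.h>

/* Size of an open stream via the off_t interface; the code is
 * identical whether or not the large-file switch is set. */
static off_t stream_size(FILE *f)
{
    if (fseeko(f, (off_t) 0, SEEK_END) != 0)
        return (off_t) -1;
    return ftello(f);
}
```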

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-13 Thread Peter Eisentraut

Tom Lane writes:

   * postmaster log to stderr --- does this fail if log output
 exceeds 2G?

That would be an issue of the shell, not the postmaster.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:

 I'm actually amazed that postgres isn't already using large file
 support.  Especially for tools like dump. 

Except it would only cause confusion if you ran such a program on a
system that didn't itself have largefile support.  Better to make the
admin turn all these things on on purpose, until everyone is running
64 bit systems everywhere.

A
-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 09:39, Andrew Sullivan wrote:
 On Sat, Aug 10, 2002 at 09:21:07AM -0500, Greg Copeland wrote:
 
  I'm actually amazed that postgres isn't already using large file
  support.  Especially for tools like dump. 
 
 Except it would only cause confusion if you ran such a program on a
 system that didn't itself have largefile support.  Better to make the
 admin turn all these things on on purpose, until everyone is running
 64 bit systems everywhere.

If by "turn...on" you mean recompile, that's a horrible idea IMO. 
Besides, you're expecting that an admin is going to know that he even
needs to recompile to obtain this feature, let alone that he'd be interested
in compiling his own installation.  Whereas, more than likely he'll know
off hand (or can easily find out) if his FS/system supports large files
(64-bit file sizes).

Seems like, systems which can natively support this feature should have
it enabled by default.  It's a different issue if an admin attempts to
create files larger than what his system and/or FS can support.

I guess what I'm trying to say here is, it's moving the problem from
being a postgres specific issue (not compiled in -- having to recompile
and install and not knowing if it's (dis)enabled) to a general body of
knowledge (does my system support such-n-such capabilities).

If a compile-time option is still much preferred by the core developers,
perhaps a log entry can be created which at least denotes the current
status of such a feature when a compile time option is required.  Simply
having an entry of "LOG:  LARGE FILE SUPPORT (DIS)ABLED (64-bit file
sizes)", etc...things along those lines.  Of course, having a
--enable-large-files option would be nice too.

This would seemingly make sense in other contexts too.  Imagine a
back-end compiled with large file support and someone else using fe
tools which do not support it.  How are they going to know if their
fe/be supports this feature unless we let them know?

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:

 If by turn...on, you mean recompile, that's a horrible idea IMO. 

Ah.  Well, that is what I meant.  Why is it horrible?  PostgreSQL
doesn't take very long to compile.  

 I guess what I'm trying to say here is, it's moving the problem from
 being a postgres specific issue (not compiled in -- having to recompile
 and install and not knowing if it's (dis)enabled) to a general body of
 knowledge (does my system support such-n-such capabilities).

The problem is not just a system-level one, but a filesystem-level
one.  Enabling 64 bits by default might be dangerous, because a DBA
might think "oh, it supports largefiles by default" and therefore not
notice that the filesystem itself is not mounted with largefile
support.  But I suspect that the developers would welcome autoconfig
patches if someone offered them.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Lamar Owen

On Monday 12 August 2002 11:30 am, Andrew Sullivan wrote:
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think "oh, it supports largefiles by default" and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.

Interesting point.  Before I could deploy RPMs with largefile support by 
default, I would have to make sure it wouldn't silently break anything.  So 
keep discussing the issues involved, and I'll see what comes of it.  I don't 
have any direct experience with the largefile support, and am learning as I go 
with this.

I have to make the source RPMs buildable on distributions that might not have 
the largefile support available, so on those distributions the support will 
have to be unavailable -- and the decision to build it or not to build it 
must be automatable.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
  The problem is not just a system-level one, but a filesystem-level
  one.  Enabling 64 bits by default might be dangerous, because a DBA
  might think "oh, it supports largefiles by default" and therefore not
  notice that the filesystem itself is not mounted with largefile
  support.  

 keep discussing the issues involved, and I'll see what comes of it.  I don't 
 have an direct experience with the largefile support, and am learning as I go 
 with this.

I do have experience with both of these cases.  We're hosted in a
managed-hosting environment, and one day one of the sysadmins there
must've remounted a filesystem without largefile support.  Poof! I
started getting all sorts of strange pg_dump problems.  It wasn't
hard to track down, except that I was initially surprised by the
errors, since I'd just _enabled_ large file support.

This is an area that is not encountered terribly often, actually,
because postgres itself breaks its files at 1GB.  Most people's dump
files either don't reach the 2G limit, or they use split (a
reasonable plan).

There are, in any case, _lots_ of problems with these large files. 
You not only need to make sure that pg_dump and friends can support
files bigger than 2G.  You need to make sure that you can move the
files around (your file transfer commands), that you can compress the
files (how is gzip compiled?  bzip2?), and even that your backup
software takes the large file.  In a few years, when all
installations are ready for this, it seems like it'd be a good idea
to turn this on by default.  Right now, I think the risks are at
least as great as those incurred by telling people they need to
recompile.  

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 10:30, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 10:15:46AM -0500, Greg Copeland wrote:
 
  If by turn...on, you mean recompile, that's a horrible idea IMO. 
 
 Ah.  Well, that is what I meant.  Why is it horrible?  PostgreSQL
 doesn't take very long to compile.  


Many reasons.  A DBA is not always the same thing as a developer (which
means it's doubtful he's even going to know about needed options to pass
-- if any).  Using a self-compiled installation may not instill the same
sense of reliability (I know that sounds odd) as using a distribution's
package.  A DBA may not be an SA, which means he should probably not be
compiling and installing software on a system.  Furthermore, he may not
even have access to do so.

It also means upgrading in the future may be problematic.  Someone compiles with
large file support.  He leaves.  New release comes out.  Someone else
upgrades and now finds things are broken.  Why?  If it supported it out
of the box, the issue is avoided.

Lastly, and perhaps most obviously, SA and DBA bodies of knowledge are
fairly distinct.  You should not expect a DBA to function as an SA. 
Furthermore, SA and developer bodies of knowledge are also fairly
distinct.  You shouldn't expect an SA to know what compiler options he
needs to use to compile software on his system.  Especially for
something as obscure as large file support.


 
  I guess what I'm trying to say here is, it's moving the problem from
  being a postgres specific issue (not compiled in -- having to recompile
  and install and not knowing if it's (dis)enabled) to a general body of
  knowledge (does my system support such-n-such capabilities).
 
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think oh, it supports largefiles by default and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.


The distinction you make there is minor.  An SA should know and
understand the capabilities of the systems he maintains (this is true
even if the SA and DBA are one).  This includes filesystem
capabilities.  A DBA should only care about the system requirements and
trust that the SA can deliver those capabilities.  If an SA says "my
filesystems can support very large files" and installs postgres, the DBA
should expect that matching support in the database is already available. 
Woe is his surprise when he finds out that his postgres installation
can't handle it?!

As for the concern about danger.  Hmm...my understanding is that the
result is pretty much the same thing as exceeding max file size.  That
is, if you attempt to read/write beyond what the filesystem can provide,
you're still going to get an error.  Is this really more dangerous than
simply reading/writing to a file which exceeds max system capabilities? 
Either way, this issue exists, and having large file support, seemingly,
does not affect it one way or another.

I guess I'm trying to say that the risk of seeing filesystem corruption or
even database corruption should not be affected by the use of large file
support.  Please correct me if I'm wrong.


Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:04, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:44:24AM -0400, Lamar Owen wrote:
  keep discussing the issues involved, and I'll see what comes of it.  I don't 
 have any direct experience with the largefile support, and am learning as I go 
  with this.
 
 I do have experience with both of these cases.  We're hosted in a
 managed-hosting environment, and one day one of the sysadmins there
 must've remounted a filesystem without largefile support.  Poof! I
 started getting all sorts of strange pg_dump problems.  It wasn't
 hard to track down, except that I was initially surprised by the
 errors, since I'd just _enabled_ large file support.

And what if he had just remounted it read-only?  Mistakes will happen. 
That doesn't come across as being a strong argument to me.  Besides,
it's doubtful that a filesystem is going to be remounted while it's in
use.  Which means these issues are going to be secondary to actual
production use of the database.  That is, either the system is working
correctly or it's not.  If it's not, I guess it's not ready for production
use.

Furthermore, since fs mounting, if being done properly, is almost always
a matter of automation, this particular class of error should be few and
very far between.

Wouldn't you rather answer people with "remount your filesystem"
rather than "recompile with such-and-such option enabled and reinstall; oh
yeah, since you're reinstalling a modified version of your database,
a good paranoid option would be to back up and dump first, just to be
safe"?  Personally, I'd rather say "remount".


 There are, in any case, _lots_ of problems with these large files. 
 You not only need to make sure that pg_dump and friends can support
 files bigger than 2G.  You need to make sure that you can move the
 files around (your file transfer commands), that you can compress the
 files (how is gzip compiled?  bzip2?), and even that your backup
 software takes the large file.  In a few years, when all
 installations are ready for this, it seems like it'd be a good idea
 to turn this on by default.  Right now, I think the risks are at
 least as great as those incurred by telling people they need to
 recompile.  
 

All of those are SA issues.  Shouldn't we leave that domain of issues
for a SA to deal with rather than try to force a single view down
someone's throat?  Which, btw, results in creating more work for those
that desire this feature.

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Oliver Elphick

On Mon, 2002-08-12 at 16:44, Lamar Owen wrote:
 Interesting point.  Before I could deploy RPMs with largefile support by 
 default, I would have to make sure it wouldn't silently break anything.  So 
 keep discussing the issues involved, and I'll see what comes of it.  I don't 
 have any direct experience with the largefile support, and am learning as I go 
 with this.
 
 Given that I have to make the source RPMs buildable on distributions that 
 might not have largefile support available, on those distributions the 
 support will have to be unavailable -- and the decision to build it or not 
 to build it must be automatable.

I raised the question on the Debian developers' list.  As far as I can
see, the general feeling is that it won't break anything but will only
work with kernel 2.4.  It may break with 2.0, but 2.0 is no longer
provided with Debian stable, so I don't mind that.

The thread starts at
http://lists.debian.org/debian-devel/2002/debian-devel-200208/msg00597.html

I intend to enable it in the next version of the Debian packages (which
will go into the unstable archive if this works for me) by adding
-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 to CFLAGS for the entire build.

One person said:
However compiling with largefile support will change the size
of off_t from 32 bits to 64 bits - if postgres uses off_t or
anything else related to file offsets in a binary struct in one
of the database files you will break stuff pretty heavily. I
would not compile postgres with largefile support until it
is officially supported by the postgres developers.

but I cannot see that off_t is used in such a way.
-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 And he spake a parable unto them to this end, that men
  ought always to pray, and not to faint.   
 Luke 18:1 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:

 Many reasons.  A DBA is not always the same thing as a developer (which
 means it's doubtful he's even going to know about needed options to pass
 -- if any). 

This (and the upgrade argument) are simply documentation issues. 
If you check the FAQ_Solaris, there's already a line in there which
tells you how to do it.

 Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are
 fairly distinct.  You should not expect a DBA to function as a SA. 
 Furthermore, SA and developer bodies of knowledge are also fairly
 distinct.  You shouldn't expect a SA to know what compiler options he
 needs to use to compile software on his system.  Especially for
 something as obscure as large file support.

It seems to me that a DBA who is running a system which produces 2
Gig dump files, and who can't compile Postgres, is in for a rocky
ride.  Such a person needs at least a support contract, and in such a
case the supporting organisation would be able to provide the needed
binary.

Anyway, as I said, this really seems like the sort of thing that
mostly gets done when someone sends in a patch.  So if it scratches
your itch . . .

 The distinction you make there is minor.  An SA should know and
 understand the capabilities of the systems he maintains (this is true
 even if the SA and DBA are one).  This includes filesystem
 capabilities.  A DBA should only care about the system requirements and
 trust that the SA can deliver those capabilities.  If an SA says my
 filesystems can support very large files and installs postgres, the DBA
 should expect that matching support in the database is already available. 
 Imagine his surprise when he finds out that his postgres installation
 can't handle it!

And it seems to me the distinction you're making is an invidious one. 
I am sick to death of so-called experts who want to blather on about
this or that tuning parameter of [insert big piece of software here]
without knowing the slightest thing about the basic operating
environment.  A DBA has responsibility to know a fair amount about
the platform in production.  A DBA who doesn't is one day going to
find out what deep water is.

 result is pretty much the same thing as exceeding max file size.  That
 is, if you attempt to read/write beyond what the filesystem can provide,
 you're still going to get an error.  Is this really more dangerous than
 simply reading/writing to a file which exceeds max system capabilities? 

Only if you were relying on it for backups, and suddenly your backups
don't work.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Andrew Sullivan

On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
 
 And what if he had just remounted it read-only?  Mistakes will happen. 
 That doesn't come across as being a strong argument to me.  Besides,
 it's doubtful that a filesystem is going to be remounted while it's in
 use.  Which means these issues are going to be secondary to actual
 production use of the database.  That is, either the system is working
 correctly or it's not.  If it's not, I guess it's not ready for production
 use.

If it's already in production use, but was taken out briefly for
maintenance, and the supposed expert SAs do something dimwitted, then
it's broken, sure.  The point I was trying to make is that the
symptoms one sees from breakage can be from many different places,
and so a glib "enable largefile support" remark hides an actual,
real-world complexity.  Several steps can be broken, any one of
which causes problems.  Better to force the relevant admins to do the
work to set things up for an exotic feature, if it is desired. 
There's nothing about Postgres itself that requires large file
support, so this is really a discussion about pg_dump.  Using split
is more portable, in my view, and therefore preferable.  You can also
use the native-compressed binary dump format, if you like one big
file.  Both of those already work out of the box.

  There are, in any case, _lots_ of problems with these large files. 

 All of those are SA issues.  

So is compiling the software correctly, if the distinction has any
meaning at all.  When some mis-installed bit of software breaks, the
DBAs won't go running to the SAs.  They'll ask here.

A

-- 

Andrew Sullivan   87 Mowat Avenue 
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M6K 3E3
 +1 416 646 3304 x110





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:40, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:07:51AM -0500, Greg Copeland wrote:
 
  Many reasons.  A DBA is not always the same thing as a developer (which
  means it's doubtful he's even going to know about needed options to pass
  -- if any). 
 
 This (and the upgrade argument) are simply documentation issues. 
 If you check the FAQ_Solaris, there's already a line in there which
 tells you how to do it.

And?  What's your point?  That somehow makes it disappear?  Even if it
had been documented, it doesn't mean the documentation made it to the
right hands or was obviously located.  Just look at postgres'
documentation in general.  How often are people told to "read the
code"?  Give me a break.  Your argument is a very weak straw.

 
  Lastly, and perhaps the most obvious, SA and DBA bodies of knowledge are
  fairly distinct.  You should not expect a DBA to function as a SA. 
  Furthermore, SA and developer bodies of knowledge are also fairly
  distinct.  You shouldn't expect a SA to know what compiler options he
  needs to use to compile software on his system.  Especially for
  something as obscure as large file support.
 
 It seems to me that a DBA who is running a system which produces 2
 Gig dump files, and who can't compile Postgres, is in for a rocky
 ride.  Such a person needs at least a support contract, and in such a
 case the supporting organisation would be able to provide the needed
 binary.

LOL.  Managing data and compiling applications have nothing to do with
each other.  Try, try again.

You also don't seem to understand that this isn't as simple as
"recompile".  It's not!!!  Are we clear on this?!  It's as simple as
needing to KNOW that you have to recompile and then KNOWING you have to
use a series of obtuse options when compiling.

In other words, you seemingly know everything you don't know which is
more than the rest of us.

 Anyway, as I said, this really seems like the sort of thing that
 mostly gets done when someone sends in a patch.  So if it scratches
 your itch . . .
 
  The distinction you make there is minor.  An SA should know and
  understand the capabilities of the systems he maintains (this is true
  even if the SA and DBA are one).  This includes filesystem
  capabilities.  A DBA should only care about the system requirements and
  trust that the SA can deliver those capabilities.  If an SA says my
  filesystems can support very large files and installs postgres, the DBA
  should expect that matching support in the database is already available. 
  Imagine his surprise when he finds out that his postgres installation
  can't handle it!
 
 And it seems to me the distinction you're making is an invidious one. 
 I am sick to death of so-called experts who want to blather on about
 this or that tuning parameter of [insert big piece of software here]
 without knowing the slightest thing about the basic operating
 environment.  A DBA has responsibility to know a fair amount about

In other words, you can't have a subject matter expert unless he is an
expert on every subject?  Ya, right!

 the platform in production.  A DBA who doesn't is one day going to
 find out what deep water is.

Agreed...as it relates to the database.  DBAs shouldn't have to know
details about the filesystem...that's the job of an SA.  You seem to be
under the impression that SA = DBA or somehow a DBA is an SA with extra
knowledge.  While this is sometimes true, I can assure you this is not
always the case.

This is exactly why large companies often have DBAs in one department
and SA in another.  Their knowledge domains tend to uniquely differ.

 
  result is pretty much the same thing as exceeding max file size.  That
  is, if you attempt to read/write beyond what the filesystem can provide,
  you're still going to get an error.  Is this really more dangerous than
  simply reading/writing to a file which exceeds max system capabilities? 
 
 Only if you were relying on it for backups, and suddenly your backups
 don't work.
 

Correction.  Suddenly your backups never worked.  Seems like it would
have been caught in testing, prior to going into production.  Surely
you're not suggesting that people place a system into production without
testing the full life cycle?  Backup testing is part of your life cycle,
right?

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 11:48, Andrew Sullivan wrote:
 On Mon, Aug 12, 2002 at 11:17:31AM -0500, Greg Copeland wrote:
[snip]

   There are, in any case, _lots_ of problems with these large files. 
 
  All of those are SA issues.  
 
 So is compiling the software correctly, if the distinction has any
 meaning at all.  When some mis-installed bit of software breaks, the
 DBAs won't go running to the SAs.  They'll ask here.

Either case, they're going to ask.  You can give them a simple solution
or you can make them run around and pull their hair out.

You're also assuming that SA = developer.  I can assure you it does
not.  I've met many an SA whose development experience was make and
Korn shell scripts.  Expecting that he should know to use GNU_SOURCE and
BITS=64 is a pretty far reach.  Furthermore, you're even expecting that
he knows that such a recompile fix even exists.  Where do you think
he's going to turn?  The lists.  That's right.  Since he's going to
contact the list or review a FAQ item anyway, doesn't it make sense to
give them the easy way out (both the initiator and the mailing list)?

IMO, powerful tools always seem to be capable enough to shoot yourself
in the foot.  Why pay special attention to this sole feature,
which doesn't really address it to begin with?

Would you at least agree that --enable-large-files, rather than
CFLAGS=xxx, is a good idea, as might well be banners and log entries
stating whether large file support has been compiled in?

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Peter Eisentraut

Oliver Elphick writes:

 One person said:
 However compiling with largefile support will change the size
 of off_t from 32 bits to 64 bits - if postgres uses off_t or
 anything else related to file offsets in a binary struct in one
 of the database files you will break stuff pretty heavily. I
 would not compile postgres with largefile support until it
 is officially supported by the postgres developers.

 but I cannot see that off_t is used in such a way.

This is not the only issue.  You really need to check all uses of off_t
(for example printf("%ld", off_t) will crash) and all places where off_t
should have been used in the first place.  Furthermore you might need to
replace ftell() and fseek() by ftello() and fseeko(), especially if you
want pg_dump to support large archives.

Still, most of the configuration work is already done in Autoconf (see
AC_FUNC_FSEEKO and AC_SYS_LARGEFILE), so the work might be significantly
less than the time spent debating the merits of large files on these
lists. ;-)

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Martijn van Oosterhout

On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
 The problem is not just a system-level one, but a filesystem-level
 one.  Enabling 64 bits by default might be dangerous, because a DBA
 might think "oh, it supports largefiles by default" and therefore not
 notice that the filesystem itself is not mounted with largefile
 support.  But I suspect that the developers would welcome autoconfig
 patches if someone offered them.

Are there any filesystems in common use (not including windows ones) that
don't support 32-bit filesizes?

Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),
probably much more. What about the BSDs? XFS? etc

-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 There are 10 kinds of people in the world, those that can do binary
 arithmetic and those that can't.




Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-12 Thread Greg Copeland

On Mon, 2002-08-12 at 18:41, Martijn van Oosterhout wrote:
 On Mon, Aug 12, 2002 at 11:30:36AM -0400, Andrew Sullivan wrote:
  The problem is not just a system-level one, but a filesystem-level
  one.  Enabling 64 bits by default might be dangerous, because a DBA
  might think "oh, it supports largefiles by default" and therefore not
  notice that the filesystem itself is not mounted with largefile
  support.  But I suspect that the developers would welcome autoconfig
  patches if someone offered them.
 
 Are there any filesystems in common use (not including windows ones) that
 don't support 32-bit filesizes?
 
 Linux (ext2) I know supports by default at least to 2TB (2^32 x 512bytes),
 probably much more. What about the BSDs? XFS? etc
 

Ext2 & 3 should be okay.  XFS (very sure) and JFS (reasonably sure)
should also be okay...IIRC.  NFS and SMB are probably problematic, but I
can't see anyone really wanting to do this.  Maybe some of the
clustering file systems (GFS, etc) might have problems???  I'm not sure
where reiserfs falls.  I *think* it's not a problem but something
tingles in the back of my brain that there may be problems lurking...

Just for the heck of it, I did some searching.  Found these for
starters:
http://www.suse.de/~aj/linux_lfs.html
http://www.gelato.unsw.edu.au/~peterc/lfs.html

http://ftp.sas.com/standards/large.file/


So, in a nutshell, most modern (2.4.x+) x86 Linux systems should be
able to handle large files.

Enjoy,
Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-10 Thread Greg Copeland

On Sat, 2002-08-10 at 00:25, Mark Kirkwood wrote:
 Ralph Graulich wrote:
 
 Hi,
 
 just my two cents worth: I like having the files sized in a way I can
 handle them easily with any UNIX tool on nearly any system. No matter
 wether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at
 (whether, that is: any tool works as long as the file stays under the limit)
 a maximum size below any limits, handy for handling.
 
 Good point... however I was thinking that being able to dump the entire 
 database without resorting to gzips and splits was handy...
 
 
 For example, Oracle suggests it somewhere in their documentation, to keep
 datafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had
 any problems with it.
 
 Yep, fixed or controlled sizes for data files is great... I was thinking 
 about databases rather than data files (although I may not have made that 
 clear in my mail)
 

I'm actually amazed that postgres isn't already using large file
support.  Especially for tools like dump.  I do recognize the need to
keep files manageable in size, but my sizing needs may differ from yours.

Seems like it would be a good thing to enable and simply make it a
function for the DBA to handle.  After all, even if I'm trying to keep
my dumps at around 1GB, I probably would be okay with a dump of 1.1GB
too.  To me, that just seems more flexible.

Greg






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-09 Thread Oliver Elphick

On Fri, 2002-08-09 at 06:07, Lamar Owen wrote:
 On Thursday 08 August 2002 05:36 pm, Nigel J. Andrews wrote:
  Matt Kirkwood wrote:
 
   I just spent some of the morning helping a customer build Pg 7.2.1 from
   source in order to get Linux largefile support in pg_dump etc. They
   possibly would have kept using the binary RPMs if they had this feature.
 
 And you added this by doing what, exactly?  I'm not familiar with pg_dump 
 largefile support as a standalone feature.

As far as I can make out from the libc docs, largefile support is
automatic if the macro _GNU_SOURCE is defined and the kernel supports
large files.  

Is that a correct understanding? or do I actually need to do something
special to ensure that pg_dump supports large files?

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight, UK
http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 ...ask, and ye shall receive, that your joy may be 
  full.  John 16:24 





Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-09 Thread Nigel J. Andrews


On Fri, 9 Aug 2002, Helge Bahmann wrote:

  As far as I can make out from the libc docs, largefile support is
  automatic if the macro _GNU_SOURCE is defined and the kernel supports
  large files.
 
  Is that a correct understanding? or do I actually need to do something
  special to ensure that pg_dump supports large files?
 
 in this case you still have to use large file functions in the code
 explicitly
 
 the easiest way to get large file support is to pass
 -D_FILE_OFFSET_BITS=64 to the preprocessor, and I think I remember doing
 this once for pg_dump
 
 see /usr/include/features.h

There is some commentary on this in my /usr/doc/libc6/NOTES.gz, which I presume
Oliver has already found since I found it after reading his posting. It gives a
bit more detail than the header file for those who want to check this out. I
for one was completely unaware of those 64 bit functions.


-- 
Nigel J. Andrews
Director

---
Logictree Systems Limited
Computer Consultants





Re: [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-09 Thread Mark Kirkwood

Lamar Owen wrote:


And you added this by doing what, exactly?  I'm not familiar with pg_dump 
largefile support as a standalone feature.


Enabling largefile support for the utilities was accomplished by:

CFLAGS="-O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure ...

It seemed to me that the ability to dump databases over 2G without gzip, 
split etc. was a good thing. What do you think ?



You have this wrong.  The distributions do periodically sync up with my 
revision, and I with theirs, but they do their own packaging.

I see.  So if you enabled such support, they would probably sync 
that too?






Re: [GENERAL] [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-09 Thread Mark Kirkwood

Ralph Graulich wrote:

Hi,

just my two cents worth: I like having the files sized in a way I can
handle them easily with any UNIX tool on nearly any system. No matter
whether I want to cp, tar, dump, dd, cat or gzip the file: Just keep it at
a maximum size below any limits, handy for handling.

Good point... however I was thinking that being able to dump the entire 
database without resorting to gzips and splits was handy...


For example, Oracle suggests it somewhere in their documentation, to keep
datafiles at a reasonable size, e.g. 1 GB. Seems right to me, never had
any problems with it.

Yep, fixed or controlled sizes for data files is great... I was thinking 
about databases rather than data files (although I may not have made that 
clear in my mail)

best wishes

Mark






Re: [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-08 Thread Nigel J. Andrews



Note, I'm not sure this belongs in -hackers so I've added -general but left
-hackers in so that list can at least see that it's going to -general.


On Thu, 8 Aug 2002, mark Kirkwood wrote:

 Hi all,
 
 I just spent some of the morning helping a customer build Pg 7.2.1 from 
 source in order to get Linux largefile support in pg_dump etc. They 
 possibly would have kept using the binary RPMs if they had this feature.
 
 This got me to wondering why the Redhat/Mandrake...etc binary RPMS are 
 built without it.
 
 Would including default largefile support in Linux RPMs be a good idea ?
 
 (I am presuming that such RPMs are built by the Pg community and 
 supplied to the various distros... apologies if I have this all wrong...)


I must admit that I am fairly new to PostgreSQL but I have used it and read
stuff about it and I'm not sure what you mean. Could you explain what you
did?

A quick scan of the source shows that there may be an issue in
storage/file/buffile.c:BufFileSeek(); is that the sort of thing you are talking
about? Or maybe I've got it completely wrong and you're talking about adding
code to pg_dump, although I thought that could already handle large
objects. Actually, I'm going to shut up now before I really do show my
ignorance and let you answer.


-- 
Nigel J. Andrews
Director

---
Logictree Systems Limited
Computer Consultants





Re: [HACKERS] Linux Largefile Support In Postgresql RPMS

2002-08-08 Thread Lamar Owen

On Thursday 08 August 2002 05:36 pm, Nigel J. Andrews wrote:
 Matt Kirkwood wrote:

  I just spent some of the morning helping a customer build Pg 7.2.1 from
  source in order to get Linux largefile support in pg_dump etc. They
  possibly would have kept using the binary RPMs if they had this feature.

And you added this by doing what, exactly?  I'm not familiar with pg_dump 
largefile support as a standalone feature.

  (I am presuming that such RPMs are built by the Pg community and
  supplied to the various distros... apologies if I have this all
  wrong...)

You have this wrong.  The distributions do periodically sync up with my 
revision, and I with theirs, but they do their own packaging.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
