Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Ignoring my general dislike of enums, I have a few issues with the patch 
 as it is:

 1. What's the point of having comparison operators for enums? For most 
 use cases, there's no natural ordering of enum values.

If you would like to be able to index enum columns, or even GROUP BY one,
you need those; whether the ordering is arbitrary or not is irrelevant.
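Such a comparator is trivial; here is a sketch (simplified, invented names — not the patch's actual code) of the OID-backed three-way compare a btree opclass needs:

```c
#include <stdint.h>

typedef uint32_t Oid;            /* simplified stand-in for PostgreSQL's Oid */

/* Btree-style three-way comparison of two enum values by their backing
 * OIDs. The index machinery only needs a consistent total order; it does
 * not care whether that order is meaningful to the user. */
int enum_cmp(Oid a, Oid b)
{
    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}
```

Equality for GROUP BY and hashing falls out of the same total order.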

 2. The comparison routine compares oids, right? If the oids wrap around 
 when the enum values are created, the ordering isn't what the user expects.

This is a fair point --- it'd be better if the ordering were not
dependent on chance OID assignments.  Not sure what we are willing
to pay to have that though.

 3. 4 bytes per value is wasteful if you're storing simple status codes 
 etc.

I've forgotten exactly which design Tom is proposing to implement here,
but at least one of the contenders involved storing an OID that would be
unique across all enum types.  1 byte is certainly not enough for that
and even 2 bytes would be pretty marginal.  I'm unconvinced by arguments
about 2 bytes being so much better than 4 anyway --- in the majority of
real table layouts, the hoped-for savings would disappear into alignment
padding.
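The padding point can be seen directly with sizeof. Assuming a typical ABI where int needs 4-byte alignment (a sketch of the effect, not actual tuple layout, which has its own header and rules):

```c
#include <stddef.h>

/* Two hypothetical row layouts: a 2-byte status code followed by an
 * int4 column, versus a 4-byte status code. Where int is 4-byte
 * aligned, the compiler pads the short up to 4 bytes and the
 * hoped-for saving disappears. */
struct row_enum2 { short status; int quantity; };
struct row_enum4 { int   status; int quantity; };

size_t row_enum2_size(void) { return sizeof(struct row_enum2); }
size_t row_enum4_size(void) { return sizeof(struct row_enum4); }
```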

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Dave Page

Magnus Hagander wrote:

Also, it compiles fine on MSVC. I still haven't managed to get the MinGW
build environment working properly on Win64 even for building Win32
apps, so I haven't been able to build it on MinGW yet. It *should* work
since it's all standard functions, but might require further hacks. I'll
try to get around to that later, and Dave has promised to give it a
compile-try as well, but if someone wants to test it, please do ;)


:-(

gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
pg_dump.o common.o pg_dump_sort.o pg_backup_archiver.o pg_backup_db.o 
pg_backup_custom.o pg_backup_files.o pg_backup_null.o pg_backup_tar.o 
dumputils.o win32ver.o ../../../src/backend/parser/keywords.o 
-L../../../src/port -lpgport -L../../../src/interfaces/libpq -lpq 
-L../../../src/port -Wl,--allow-multiple-definition   -lpgport -lintl 
-lssleay32 -leay32 -lcomerr32 -lkrb5_32 -lz -lm  -lws2_32 -lshfolder -o 
pg_dump.exe
pg_backup_archiver.o(.text+0x2017):pg_backup_archiver.c: undefined 
reference to `_fseeki64'
pg_backup_archiver.o(.text+0x3dac):pg_backup_archiver.c: undefined 
reference to `_fseeki64'
pg_backup_custom.o(.text+0x773):pg_backup_custom.c: undefined reference 
to `_fseeki64'
pg_backup_custom.o(.text+0xaaa):pg_backup_custom.c: undefined reference 
to `_ftelli64'
pg_backup_custom.o(.text+0xed2):pg_backup_custom.c: undefined reference 
to `_ftelli64'
pg_backup_custom.o(.text+0xf21):pg_backup_custom.c: undefined reference 
to `_fseeki64'
pg_backup_tar.o(.text+0x845):pg_backup_tar.c: undefined reference to 
`_ftelli64'
pg_backup_tar.o(.text+0x10f9):pg_backup_tar.c: undefined reference to 
`_fseeki64'
pg_backup_tar.o(.text+0x1107):pg_backup_tar.c: undefined reference to 
`_ftelli64'
pg_backup_tar.o(.text+0x1162):pg_backup_tar.c: undefined reference to 
`_fseeki64'

collect2: ld returned 1 exit status
make[3]: *** [pg_dump] Error 1
make[3]: Leaving directory `/cvs/pgsql/src/bin/pg_dump'


Regards, Dave.



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Magnus Hagander
On Mon, Dec 18, 2006 at 08:56:22PM -0800, Joshua D. Drake wrote:
 On Mon, 2006-12-18 at 23:46 -0500, Tom Lane wrote:
  Magnus Hagander [EMAIL PROTECTED] writes:
   Oh, you mean MB vs Mb. Man, it had to be that simple :)
  
  ISTM we had discussed whether guc.c should accept units strings in
  a case-insensitive manner, and the forces of pedantry won the first
  round.  Shall we reopen that argument?
 
 I don't think that anyone is going to think, oh I am using 1000 Mega Bit
 of ram. Mb == MB in this case. That being said, it is documented and I
 don't know that it makes that much difference as long as the
 documentation is clear.

Is it possible to add an error hint to the message? Along the lines of
"HINT: Did you perhaps get your casing wrong?" (with better wording, of
course).
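A case-insensitive match is a one-liner; a sketch using POSIX strcasecmp (guc.c's real unit parsing is more involved than this):

```c
#include <strings.h>    /* strcasecmp() -- POSIX */

/* Accept a unit string regardless of casing, so "MB", "mb" and "Mb"
 * all match. Whether "Mb" (megabit?) should silently be accepted as
 * megabytes is exactly the pedantry debate in this thread. */
int unit_matches(const char *input, const char *unit)
{
    return strcasecmp(input, unit) == 0;
}
```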

//Magnus



Re: [HACKERS] Dirty pages in freelist cause WAL stuck

2006-12-19 Thread ITAGAKI Takahiro
Simon Riggs [EMAIL PROTECTED] wrote:

 I think what you are saying is: VACUUM places blocks so that they are
 immediately reused. This stops shared_buffers from being polluted by
 vacuumed-blocks, but it also means that almost every write becomes a
 backend dirty write when VACUUM is working, bgwriter or not. That also
 means that we flush WAL more often than we otherwise would.

That's right. I think it's acceptable for the vacuuming process to write the
dirty buffers it created itself, because only that process slows down; other
backends can run undisturbed. However, frequent WAL flushing should be avoided.

I found the problem when I ran VACUUM FREEZE on its own. But if there were
other backends running, dirty buffers made by VACUUM would be reused by those
backends, not by the vacuuming process.

 From above my thinking would be to have a more general implementation:
 Each backend keeps a list of cache buffers to reuse in its local loop,
 rather than using the freelist as a global list. That way the technique
 would work even when we have multiple Vacuums working concurrently. It
 would also then be possible to use this for the SeqScan case as well.

Great idea! The troubles are in the usage of buffers by SeqScan and VACUUM.
The former uses too many buffers and the latter uses too few buffers.
Your cache-looping will work around both cases.
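The per-backend reuse loop described above could look roughly like a small private ring of buffer IDs that the backend recycles instead of hitting the shared freelist. This is purely a sketch; the names and the fixed size are invented:

```c
#define RING_SIZE 8    /* illustrative; a real ring would be tuned */

typedef struct BufferRing
{
    int buf_ids[RING_SIZE];  /* buffer IDs owned by this backend's loop */
    int current;             /* next slot to recycle */
} BufferRing;

/* Fill the ring with an initial set of buffer IDs. */
void ring_init(BufferRing *ring, const int *ids)
{
    for (int i = 0; i < RING_SIZE; i++)
        ring->buf_ids[i] = ids[i];
    ring->current = 0;
}

/* Hand back the next buffer to reuse, cycling through the private
 * list so a VACUUM (or seqscan) keeps overwriting its own buffers
 * instead of evicting the rest of shared_buffers. */
int ring_next(BufferRing *ring)
{
    int id = ring->buf_ids[ring->current];

    ring->current = (ring->current + 1) % RING_SIZE;
    return id;
}
```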

 Another connected thought is the idea of a having a FullBufferList - the
 opposite of a free buffer list. When VACUUM/INSERT/COPY fills a block we
 notify the buffer manager that this block needs writing ahead of other
 buffers, so that the bgwriter can work more effectively. That seems like
 it would help with both this current patch and the additional thoughts
 above.

Do you mean that the bgwriter should take care of buffers in the freelist, not
only those at the tail of the LRU? We might need activity control for the
bgwriter: buffers are reused rapidly during VACUUM or bulk inserts, so the
bgwriter is not sufficient if its settings are the same as usual.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Magnus Hagander wrote:
 Is it possible to add an error hint to the message? Along the lines of
 "HINT: Did you perhaps get your casing wrong?" (with better wording,
 of course).

Or how about we just make everything case-insensitive -- but 
case-preserving! -- on Windows only?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Zeugswetter Andreas ADI SD

  I suspect we might need to create a pg_off_t type or some 
 such gadget.
  
  Bleah.
  
  But it does need to be fixed.
 
 Bummer. That might be what's needed, but I'm going to at least try to
 find some neater way first. I wonder why it didn't happen on MSVC...

I don't see how the error relates, but _fseeki64 and _ftelli64 are
only in msvcr80.dll and newer, not in older runtimes.

MinGW has fseeko64 and ftello64 with off64_t.
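One way to paper over the three toolchains is a pair of macros that pick the 64-bit-capable seek/tell per compiler. A sketch (the pg_fseeko/pg_ftello names are invented here):

```c
#include <stdio.h>

/*
 * Pick a 64-bit-capable seek/tell pair per toolchain:
 *   MSVC 8.0+ : _fseeki64 / _ftelli64 (msvcr80.dll and newer)
 *   MinGW     : fseeko64  / ftello64  (off64_t)
 *   elsewhere : fseeko    / ftello    (POSIX; 64-bit off_t on LFS builds)
 */
#if defined(_MSC_VER) && _MSC_VER >= 1400
#define pg_fseeko(fp, off, whence)  _fseeki64(fp, off, whence)
#define pg_ftello(fp)               _ftelli64(fp)
#elif defined(__MINGW32__)
#define pg_fseeko(fp, off, whence)  fseeko64(fp, off, whence)
#define pg_ftello(fp)               ftello64(fp)
#else
#define pg_fseeko(fp, off, whence)  fseeko(fp, off, whence)
#define pg_ftello(fp)               ftello(fp)
#endif

/* Round-trip helper: seek to an offset and report where we landed. */
long long seek_and_tell(FILE *fp, long long off)
{
    if (pg_fseeko(fp, off, SEEK_SET) != 0)
        return -1;
    return (long long) pg_ftello(fp);
}
```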

Andreas



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Magnus Hagander
On Tue, Dec 19, 2006 at 10:01:05AM +0100, Peter Eisentraut wrote:
 Magnus Hagander wrote:
  Is it possible to add an error hint to the message? Along the lines of
  "HINT: Did you perhaps get your casing wrong?" (with better wording,
  of course).
 
 Or how about we just make everything case-insensitive -- but 
 case-preserving! -- on Windows only?

Wouldn't help me one bit, I had this problem on the install for
search.postgresql.org, which runs on Ubuntu Linux.

//Magnus



Re: [HACKERS] Question about debugging bootstrapping and catalog entries

2006-12-19 Thread Gurjeet Singh

On 12/18/06, Tom Lane [EMAIL PROTECTED] wrote:


Gregory Stark [EMAIL PROTECTED] writes:
 Hm, I suppose. Though starting a second gdb is a pain. What I've done in
the
 past is introduce a usleep(3000) in strategic points in the backend
to
 give me a chance to attach.

There is already an option to sleep early in backend startup for the
normal case.  Not sure if it works for bootstrap, autovacuum, etc,
but I could see making it do so.



You are probably referring to the command-line switch -W to postgres, which
translates to the 'PostAuthDelay' GUC variable; I think that kicks in a bit too
late! Once I was trying to debug check_root() (called by main()), and had
to resort to my own pg_usleep() to make the process wait for a
debugger attach. We should somehow pull the sleep() code into main(), as far
up as possible.

BTW, here's how I made PG sleep until I attached to it (should be done only
in the function you intend to debug):

{
 bool wait_for_debugger = true;

 while (wait_for_debugger)
   pg_usleep(100);
}

It will wait here forever, until you attach a debugger, set a breakpoint on
the 'while', and set the variable to false.

The suggestion of single-stepping initdb will only work well if you
have a version of gdb that can step into a fork, which is something
that's never worked for me :-(.  Otherwise the backend will free-run
until it blocks waiting for input from initdb, which means you are
still stuck for debugging startup crashes ...

regards, tom lane






--
[EMAIL PROTECTED]
[EMAIL PROTECTED]{ gmail | hotmail | yahoo }.com


Re: [HACKERS] Load distributed checkpoint

2006-12-19 Thread ITAGAKI Takahiro
Takayuki Tsunakawa [EMAIL PROTECTED] wrote:

 I performed some simple tests, and I'll show the results below.

 (1) The default case
 235  80  226 77  240
 (2) No write case
 242  250  244  253  280
 (3) No checkpoint case
 229  252  256  292  276
 (4) No fsync() case
 236  112  215  216  221
 (5) No write by PostgreSQL, but fsync() by another program case
 9  223  260  283  292
 (6) case (5) + O_SYNC by write_fsync
 97  114  126  112  125
 (7) O_SYNC case
 182  103  41  50  74

I posted a patch to PATCHES. Please try it out.
It does write() smoothly, but fsync() at a burst.
I suppose the result will be between (3) and (5).
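The shape of that idea — write the dirty pages gradually, then sync once at the end — might be sketched like this. This is illustrative only: the real patch works on shared buffers, and the final step would be fsync() on each data file, with fflush() standing in here:

```c
#include <stdio.h>
#include <string.h>

/*
 * Spread the write()s of a checkpoint over time, then flush once at
 * the end, rather than paying a sync per write. Returns the number of
 * pages written, or -1 on error. The 128-byte "page" is illustrative.
 */
int smooth_checkpoint(FILE *fp, int npages)
{
    char page[128];

    memset(page, 0, sizeof(page));
    for (int i = 0; i < npages; i++)
    {
        if (fwrite(page, sizeof(page), 1, fp) != 1)
            return -1;
        /* real code would pg_usleep() here to smooth the I/O rate */
    }
    if (fflush(fp) != 0)        /* the deferred fsync-at-a-burst */
        return -1;
    return npages;
}
```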

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Peter Eisentraut
Heikki Linnakangas wrote:
 I'm sorry I missed the original discussions, but I have to ask: Why
 do we want enums in core? The only potential advantage I can see over
 using a look-up table and FK references is performance.

The difference is that foreign-key-referenced data is part of your data 
whereas enums would be part of the type system used to model the data.

An objection to enums on the ground that foreign keys can accomplish the 
same thing could be extended to object to any data type with a finite 
domain.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Yoshiyuki Asaba
Hi,

From: Hiroshi Saito [EMAIL PROTECTED]
Subject: Re: [HACKERS] pg_restore fails with a custom backup file
Date: Fri, 15 Dec 2006 00:57:50 +0900

  Win32 does not implement fseeko() and ftello(). So I think it is
  limited to handling 2GB files. Is this a specification?
 
 Yes, Magnus-san suggested the problem. It is a current TODO item. The entire 
 adjustment was still difficult though I had tried it. SetFilePointer might be 
 able to be used. However, I think it might be an attempt for 8.3...

Is it possible to use fsetpos()/fgetpos() instead of ftell()/fseek()?
fpos_t is an 8-byte type. I tested pg_dump/pg_restore with the attached
patch.
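For reference, the fgetpos()/fsetpos() pair round-trips a position without going through a long. A small sketch — noting that ISO C makes fpos_t opaque, so code that does arithmetic on offsets cannot simply treat it as an integer:

```c
#include <stdio.h>

/* Save the current position with fgetpos(), move away, and restore it
 * with fsetpos(). Unlike fseek()/ftell(), the fpos_t-based pair is not
 * limited to what fits in a long, which is the Win32 2GB problem.
 * Returns the restored offset, or -1 on error. */
long save_and_restore(FILE *fp)
{
    fpos_t saved;

    if (fseek(fp, 100, SEEK_SET) != 0)
        return -1;
    if (fgetpos(fp, &saved) != 0)
        return -1;
    if (fseek(fp, 0, SEEK_SET) != 0)
        return -1;
    if (fsetpos(fp, &saved) != 0)
        return -1;
    return ftell(fp);           /* back at offset 100 */
}
```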

--
Yoshiyuki Asaba
[EMAIL PROTECTED]
Index: src/include/c.h
===
RCS file: /projects/cvsroot/pgsql/src/include/c.h,v
retrieving revision 1.214
diff -c -r1.214 c.h
*** src/include/c.h 4 Oct 2006 00:30:06 -   1.214
--- src/include/c.h 19 Dec 2006 12:52:05 -
***
***
*** 74,79 
--- 74,82 
  #include <strings.h>
  #endif
  #include <sys/types.h>
+ #ifdef WIN32
+ #define off_t fpos_t
+ #endif
  
  #include <errno.h>
  #if defined(WIN32) || defined(__CYGWIN__)
Index: src/include/port.h
===
RCS file: /projects/cvsroot/pgsql/src/include/port.h,v
retrieving revision 1.106
diff -c -r1.106 port.h
*** src/include/port.h  28 Nov 2006 01:12:33 -  1.106
--- src/include/port.h  19 Dec 2006 12:52:05 -
***
*** 307,313 
  extern char *crypt(const char *key, const char *setting);
  #endif
  
! #if defined(bsdi) || defined(netbsd)
  extern int	fseeko(FILE *stream, off_t offset, int whence);
  extern off_t ftello(FILE *stream);
  #endif
--- 307,313 
  extern char *crypt(const char *key, const char *setting);
  #endif
  
! #if defined(bsdi) || defined(netbsd) || defined(WIN32)
  extern int	fseeko(FILE *stream, off_t offset, int whence);
  extern off_t ftello(FILE *stream);
  #endif
Index: src/include/port/win32.h
===
RCS file: /projects/cvsroot/pgsql/src/include/port/win32.h,v
retrieving revision 1.63
diff -c -r1.63 win32.h
*** src/include/port/win32.h19 Oct 2006 20:03:08 -  1.63
--- src/include/port/win32.h19 Dec 2006 12:52:05 -
***
*** 20,25 
--- 20,27 
  #include <sys/utime.h>	/* for non-unicode version */
  #undef near
  
+ #define HAVE_FSEEKO
+ 
  /* Must be here to avoid conflicting with prototype in windows.h */
  #define mkdir(a,b)	mkdir(a)
  
Index: src/port/fseeko.c
===
RCS file: /projects/cvsroot/pgsql/src/port/fseeko.c,v
retrieving revision 1.20
diff -c -r1.20 fseeko.c
*** src/port/fseeko.c   5 Mar 2006 15:59:10 -   1.20
--- src/port/fseeko.c   19 Dec 2006 12:52:06 -
***
*** 17,23 
   * We have to use the native defines here because configure hasn't
   * completed yet.
   */
! #if defined(__bsdi__) || defined(__NetBSD__)
  
  #include "c.h"
  
--- 17,23 
   * We have to use the native defines here because configure hasn't
   * completed yet.
   */
! #if defined(__bsdi__) || defined(__NetBSD__) || defined(WIN32)
  
  #include "c.h"
  



Re: [HACKERS] small pg_dump RFE: new --no-prompt (password) option

2006-12-19 Thread Alvaro Herrera
Martijn van Oosterhout wrote:
 On Fri, Dec 01, 2006 at 10:37:07AM -0500, Tom Lane wrote:
  Martijn van Oosterhout kleptog@svana.org writes:
   Seems to me that you could just do this by setting stdin to be
   /dev/null?
  
  No, because the simple_prompt code makes a point of reading from
  /dev/tty.
 
 Sure, but that would fail for any process which has detached from the
 terminal. The backup is stdin, which you can also set up.
 
 Admittedly, the OP may not be in a position to set up the environment in
 such a way as to make it work, which means that such an option may be
 worthwhile...
 
 If it's going to be an option, I'd suggest something like --daemon-mode
 since it's entirely possible other issues may come up later that might
 need this kind of attention.

So did we get anywhere with this?  I'd propose something like
--non-interactive rather than --daemon-mode.  Was there a patch
submitted?

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] small pg_dump RFE: new --no-prompt (password) option

2006-12-19 Thread Peter Eisentraut
Alvaro Herrera wrote:
 So did we get anywhere with this?  I'd propose something like
 --non-interactive rather than --daemon-mode.  Was there a patch
 submitted?

With the enhanced connection string management, wouldn't it suffice to 
pass a (fake/empty) password to pg_dump on the command line?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Magnus Hagander
On Tue, Dec 19, 2006 at 09:59:05PM +0900, Yoshiyuki Asaba wrote:
 Hi,
 
   Win32 does not implement fseeko() and ftello(). So I think it is
   limited to handling 2GB files. Is this a specification?
  
  Yes, Magnus-san suggested the problem. It is a current TODO item. The entire 
  adjustment was still difficult though I had tried it. SetFilePointer might 
  be 
  able to be used. However, I think it might be an attempt for 8.3...
 
 Is it possible to use fsetpos()/fgetpos() instead of ftell()/fseek()?
 fpos_t is an 8-byte type. I tested pg_dump/pg_restore with the attached
 patch.

Hmm. Yeah, that should work in principle.

However, did you test the actual backend after that change? Given where you
change the define of off_t, that would affect every call in the backend
that uses off_t, and it just seems very strange that you could get away
with that without touching anything else. (If we're lucky - but I
wouldn't count on it; there ought to be other functions in libc that we
call that take off_t..)

//Magnus



[HACKERS] Core dump in PL/pgSQL ...

2006-12-19 Thread Hans-Juergen Schoenig

one of our customers here found a bug in PL/pgSQL.
this is how you can create this one:

CREATE OR REPLACE FUNCTION public.make_victim_history () RETURNS  
trigger AS $body$ DECLARE


schemarec RECORD;
exec_schemaselect text;
curs2 refcursor;

BEGIN

  exec_schemaselect := 'SELECT nspname FROM pg_class c JOIN  
pg_namespace n ON n.oid = c.relnamespace WHERE c.oid = ' || TG_RELID;


  OPEN curs2 FOR EXECUTE exec_schemaselect;
  FETCH curs2 INTO schemarec;
  CLOSE curs2;

  RAISE NOTICE 'schemarecord: %',schemarec.nspname;

  RAISE NOTICE 'begin new block';
BEGIN
RAISE NOTICE 'insert now';
EXECUTE 'insert into public_history.victim SELECT * from  
public.victim where id=1;';


EXCEPTION
WHEN OTHERS THEN
 -- do nothing
END;

RETURN NEW;
END;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;


--TABLE ERSTELLEN
CREATE TABLE public.victim (
  id BIGINT,
  name TEXT,
  created TIMESTAMP WITHOUT TIME ZONE,
  create_user BIGINT,
  changed TIMESTAMP WITHOUT TIME ZONE,
  change_user BIGINT,
  state SMALLINT
) WITHOUT OIDS;

INSERT INTO victim VALUES (1, 'hans', now(), 2, now(), 3, 4);

-- TRIGGER ERSTELLEN
CREATE TRIGGER victim_tr BEFORE UPDATE OR DELETE ON  
public.victim FOR EACH ROW EXECUTE PROCEDURE  
public.make_victim_history();


-- BAD BAD STATEMENT
UPDATE public.victim SET changed=NOW(), change_user = 1;


a quick fix is to prevent the language from freeing the tuple twice -  
this should safely prevent the core dump here.

we still have to make sure that the tuple is freed properly. stay tuned.
here is the patch ...


hans



diff -rc postgresql-8.2.0-orig/src/backend/executor/spi.c  
postgresql-8.2.0/src/backend/executor/spi.c
*** postgresql-8.2.0-orig/src/backend/executor/spi.c	Tue Nov 21  
23:35:29 2006

--- postgresql-8.2.0/src/backend/executor/spi.c Tue Dec 19 15:04:42 2006
***
*** 264,270 
/* free Executor memory the same as _SPI_end_call would do */
MemoryContextResetAndDeleteChildren(_SPI_current->execCxt);
/* throw away any partially created tuple-table */
!   SPI_freetuptable(_SPI_current->tuptable);
_SPI_current->tuptable = NULL;
}
  }
--- 264,270 
/* free Executor memory the same as _SPI_end_call would do */
MemoryContextResetAndDeleteChildren(_SPI_current->execCxt);
/* throw away any partially created tuple-table */
! //SPI_freetuptable(_SPI_current->tuptable);
_SPI_current->tuptable = NULL;
}
  }




--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at




Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Andrew Dunstan

Tom Lane wrote:

Heikki Linnakangas [EMAIL PROTECTED] writes:
  
1. What's the point of having comparison operators for enums? For most 
use cases, there's no natural ordering of enum values.



If you would like to be able to index enum columns, or even GROUP BY one,
you need those; whether the ordering is arbitrary or not is irrelevant.
  



Heikki's assertion is wrong in any case. The enumeration definition 
defines the ordering, and I can think of plenty of use cases where it 
does matter. We do not use an arbitrary ordering. An enum type is an 
*ordered* set of string labels. Without this the feature would be close 
to worthless. But if a particular application doesn't need them ordered, 
it need not use the comparison operators. Leaving aside the uses for 
GROUP BY and indexes, I would ask what the justification would be for 
leaving off comparison operators?


  
2. The comparison routine compares oids, right? If the oids wrap around 
when the enum values are created, the ordering isn't what the user expects.



This is a fair point --- it'd be better if the ordering were not
dependent on chance OID assignments.  Not sure what we are willing
to pay to have that though.
  


This is a non-issue. The code sorts the oids before assigning them:

   /* allocate oids */
   oids = (Oid *) palloc(sizeof(Oid) * n);
   for (i = 0; i < n; i++)
   {
   oids[i] = GetNewOid(pg_enum);
   }
   /* wraparound is unlikely, but just to be safe...*/
   qsort(oids, n, sizeof(Oid), oid_cmp);
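The comparator behind that qsort is the usual three-way OID compare; a standalone sketch equivalent to the backend's oid_cmp:

```c
#include <stdlib.h>
#include <stdint.h>

typedef uint32_t Oid;       /* stand-in for PostgreSQL's Oid */

/* qsort() comparator matching the backend's oid_cmp(): plain numeric
 * order over unsigned 32-bit OIDs. */
static int oid_cmp(const void *p1, const void *p2)
{
    Oid a = *(const Oid *) p1;
    Oid b = *(const Oid *) p2;

    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}

/* Sort the freshly allocated OIDs before assigning them to labels in
 * declaration order, so label order always matches numeric OID order
 * even if the OID counter wrapped mid-allocation. */
void sort_oids(Oid *oids, size_t n)
{
    qsort(oids, n, sizeof(Oid), oid_cmp);
}
```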


  
3. 4 bytes per value is wasteful if you're storing simple status codes 
etc.



I've forgotten exactly which design Tom is proposing to implement here,
but at least one of the contenders involved storing an OID that would be
unique across all enum types.  1 byte is certainly not enough for that
and even 2 bytes would be pretty marginal.  I'm unconvinced by arguments
about 2 bytes being so much better than 4 anyway --- in the majority of
real table layouts, the hoped-for savings would disappear into alignment
padding.


  


Globally unique is the design adopted, after much on-list discussion. 
That was a way of getting it *down* to 4 bytes. The problem is that the 
output routines need enough info from just the internal representation 
of the type value to do their work. The original suggestion was for 8 
bytes - type oid + offset in value set. Having them globally unique lets 
us get down to 4.


As for efficiency, I agree with what Tom says about alignment and 
padding dissolving away any perceived advantage in most cases. If we 
ever get around to optimising record layout we could revisit it.


cheers

andrew




Re: [HACKERS] Core dump in PL/pgSQL ...

2006-12-19 Thread Stefan Kaltenbrunner

Hans-Juergen Schoenig wrote:

[...]

a quick fix is to prevent the language from freeing the tuple twice - 
this should safely prevent the core dump here.

we still have to make sure that the tuple is freed properly. stay tuned.
here is the patch ...


this seems to be already fixed with:

http://archives.postgresql.org/pgsql-committers/2006-12/msg00063.php


Stefan



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Andrew Dunstan

Magnus Hagander wrote:

On Tue, Dec 19, 2006 at 09:59:05PM +0900, Yoshiyuki Asaba wrote:
  

Hi,



Win32 does not implement fseeko() and ftello(). So I think it is
limited to handling 2GB files. Is this a specification?

Yes, Magnus-san suggested the problem. It is a current TODO item. The entire 
adjustment was still difficult though I had tried it. SetFilePointer might be 
able to be used. However, I think it might be an attempt for 8.3...
  

Is it possible to use fsetpos()/fgetpos() instead of ftell()/fseek()?
fpos_t is an 8-byte type. I tested pg_dump/pg_restore with the attached
patch.



Hmm. Yeah, that should work in principle.

However, did you test the actual backend after that change? Given where you
change the define of off_t, that would affect every call in the backend
that uses off_t, and it just seems very strange that you could get away
with that without touching anything else. (If we're lucky, but I
wouldn't count on it - there ought to be other functions in libc that we
call that take off_t..)
  


I'd feel much happier if we could just patch pg_dump, since this is the 
only place we know of that we need to do large file seek/tell operations.


Did you see this from Andreas?


MinGW has fseeko64 and ftello64 with off64_t.
  


Maybe we need separate macros for MSVC and MinGW. Given the other 
interactions we might need to push those deep into the C files after all 
the system headers. Maybe create pg_dump_fseek.h and put them in there 
and then #include that very late.


cheers

andrew






Re: [HACKERS] Core dump in PL/pgSQL ...

2006-12-19 Thread Hans-Juergen Schoenig

oh sorry, i think i missed that one ...
many thanks,

hans



On Dec 19, 2006, at 3:42 PM, Stefan Kaltenbrunner wrote:


Hans-Juergen Schoenig wrote:

[...]

a quick fix is to prevent the language from freeing the tuple  
twice - this should safely prevent the core dump here.
we still have to make sure that the tuple is freed properly. stay  
tuned.

here is the patch ...


this seems to be already fixed with:

http://archives.postgresql.org/pgsql-committers/2006-12/msg00063.php


Stefan





--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at




Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Alvaro Herrera
Andrew Dunstan wrote:

 As for efficiency, I agree with what Tom says about alignment and 
 padding dissolving away any perceived advantage in most cases. If we 
 ever get around to optimising record layout we could revisit it.

I don't, because there are always those that are knowledgeable enough to
know how to reduce space lost to padding.  So it would be nice to have
2-byte enums on-disk, and resolve them based on the column's typid.  But
then, I'm not familiar with the patch at all so I'm not sure if it's
possible.

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Magnus Hagander
 However, did you test the actual backend after that change? Given where you
 change the define of off_t, that would affect every call in the backend
 that uses off_t, and it just seems very strange that you could get away
 with that without touching anything else. (If we're lucky, but I
 wouldn't count on it - there ought to be other functions in libc that we
 call that take off_t..)
   
 
 I'd feel much happier if we could just patch pg_dump, since this is the 
 only place we know of that we need to do large file seek/tell operations.

My thoughts exactly.

 Did you see this from Andreas?
 
 MinGW has fseeko64 and ftello64 with off64_t.
   
 
 Maybe we need separate macros for MSVC and MinGW. Given the other 
 interactions we might need to push those deep into the C files after all 
 the system headers. Maybe create pg_dump_fseek.h and put them in there 
 and then #include that very late.

We need different macros, and possibly functions, yes.
I think I got enough patched at home last night to get it working with
this; I was just too focused on one set of macros at the time. It's not
enough to include them very late, because off_t is used in the shared
data structures in pg_dump/etc. It is possible to localise it to the
pg_dump binaries, though, given some header redirection *and* given that
we change all those off_t to pgoff_t (or similar). I couldn't find a way
to do it without changing the off_t define.

I'll try to take a look at merging these two efforts (again unless
beaten to it, have to do some of that dreaded christmas shopping as
well...)

//Magnus



Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Andrew Dunstan

Alvaro Herrera wrote:

Andrew Dunstan wrote:

  
As for efficiency, I agree with what Tom says about alignment and 
padding dissolving away any perceived advantage in most cases. If we 
ever get around to optimising record layout we could revisit it.



I don't, because there are always those that are knowledgeable enough to
know how to reduce space lost to padding.  So it would be nice to have
2-byte enums on-disk, and resolve them based on the column's typid.  But
then, I'm not familiar with the patch at all so I'm not sure if it's
possible.

  


The trouble is that we have one output routine for all enum types. See 
previous discussions about disallowing extra params to output routines. 
So if all we have is a 2 byte offset into the list of values for the 
given type, we do not have enough info to allow the output routine to 
deduce which particular enum type it is dealing with. With the globally 
unique oid approach it doesn't even  need to care - it just looks up the 
corresponding value. Note that this was a reduction from the previously 
suggested (by TGL) 8 bytes.
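With global uniqueness, a single output routine can go straight from the stored value to its label. A toy sketch of that lookup — the OIDs and labels are invented, and the real code would consult pg_enum via the syscache rather than a flat array:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t Oid;

/* Toy stand-in for pg_enum: one flat table keyed by globally unique
 * OID. Because no two enum types share an OID, the output routine
 * needs nothing beyond the stored 4-byte value to find its label. */
typedef struct EnumEntry
{
    Oid         oid;
    const char *label;
} EnumEntry;

static const EnumEntry enum_catalog[] = {
    {16401, "red"},   {16402, "green"},   /* type 'color' */
    {16403, "small"}, {16404, "large"},   /* type 'size'  */
};

const char *enum_out(Oid value)
{
    for (size_t i = 0; i < sizeof(enum_catalog) / sizeof(enum_catalog[0]); i++)
    {
        if (enum_catalog[i].oid == value)
            return enum_catalog[i].label;
    }
    return NULL;                /* unknown value */
}
```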


I'm not a big fan of ordering columns to optimise record layout, except 
in the most extreme cases (massive DW type apps). I think visible column 
order should be logical, not governed by physical considerations.


cheers

andrew





Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 I don't, because there are always those that are knowledgeable enough to
 know how to reduce space lost to padding.  So it would be nice to have
 2-byte enums on-disk, and resolve them based on the column's typid.  But
 then, I'm not familiar with the patch at all so I'm not sure if it's
 possible.

Remember that the value has to be decodable by the output routine.
So the only way we could do that would be by creating a separate output
function for each enum type.  (That is, a separate pg_proc entry
... they could all point at the same C function, which would have to
check which OID it was called as and work backward to determine the enum
type.)

While this is doubtless doable, it's slow, it bloats pg_proc, and
frankly no argument has been offered that's compelling enough to
require it.  The alignment issue takes enough air out of the
space-saving argument that it doesn't seem sufficient to me.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Gregory Stark
Andrew Dunstan [EMAIL PROTECTED] writes:

 I'm not a big fan of ordering columns to optimise record layout, except in the
 most extreme cases (massive DW type apps). I think visible column order should
 be logical, not governed by physical considerations.

Well as long as we're talking shoulds the database should take care of this
for you anyways.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Zeugswetter Andreas ADI SD

 Did you see this from Andreas?
 
  MinGW has fseeko64 and ftello64 with off64_t.

 
 Maybe we need separate macros for MSVC and MinGW. Given the other 

You mean something quick and dirty like this? That would work.
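
The actual patch was attached rather than inlined, but a "quick and
dirty" redirection of the kind being discussed would presumably look
something like the sketch below. This is hypothetical, not the attached
patch; as the follow-ups note, where in the headers it lands matters,
since off_t is used in shared pg_dump data structures.

```c
/* Hypothetical sketch for a port header: on MinGW, route seek/tell
 * through the 64-bit variants (fseeko64/ftello64/off64_t, which MinGW
 * provides) so custom dumps with members > 2^31 bytes seek correctly. */
#ifdef __MINGW32__
typedef off64_t pgoff_t;
#define fseeko(stream, offset, origin) fseeko64((stream), (offset), (origin))
#define ftello(stream) ftello64((stream))
#else
typedef off_t pgoff_t;
#endif
```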

Andreas


pg_dump_fseeko64.patch
Description: pg_dump_fseeko64.patch



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Magnus Hagander
On Tue, Dec 19, 2006 at 04:25:18PM +0100, Zeugswetter Andreas ADI SD wrote:
 
  Did you see this from Andreas?
  
   MinGW has fseeko64 and ftello64 with off64_t.
 
  
  Maybe we need separate macros for MSVC and MinGW. Given the other 
 
 You mean something quick and dirty like this? That would work.

Yes, except does that actually work? If so you found the place in the
headers to stick it without breaking things that I couldn't find ;-)

I got compile warnings (note: warnings, not errors, for some reason, but
very significant ones) about passing 64-bit ints to API functions that
took 32 bits, and such, without creating a separate define for off_t. It
could very well be that I was too tired and too focused on the websearch
stuff when I tried it, though :-)

//Magnus



column ordering, was Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Andrew Dunstan

Gregory Stark wrote:

Andrew Dunstan [EMAIL PROTECTED] writes:

  

I'm not a big fan of ordering columns to optimise record layout, except in the
most extreme cases (massive DW type apps). I think visible column order should
be logical, not governed by physical considerations.



Well as long as we're talking shoulds the database should take care of this
for you anyways.

  


Sure, but the only sane way I can think of to do that would be have 
separate logical and physical orderings, with a map between the two. I 
guess we'd need to see what the potential space savings would be and 
establish what the processing overhead would be, before considering it. 
One side advantage would be that it would allow us to do the often 
requested add column at position x.


cheers

andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Zeugswetter Andreas ADI SD

MinGW has fseeko64 and ftello64 with off64_t.
  
   
   Maybe we need separate macros for MSVC and MinGW. Given the other 
  
  You mean something quick and dirty like this? That would work.
 
 Yes, except does that actually work? If so you found the place in the
 headers to stick it without breaking things that I couldn't find ;-)

Compiles cleanly without warnings on MinGW, but not tested - sorry, no
time.

Andreas

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: column ordering, was Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Martijn van Oosterhout
On Tue, Dec 19, 2006 at 10:48:41AM -0500, Andrew Dunstan wrote:
 Sure, but the only sane way I can think of to do that would be have 
 separate logical and physical orderings, with a map between the two. I 
 guess we'd need to see what the potential space savings would be and 
 establish what the processing overhead would be, before considering it. 
 One side advantage would be that it would allow us to do the often 
 requested add column at position x.

A patch to allow separate physical and logical orderings was submitted
and rejected. Unless something has changed on that front, any
discussion in this direction isn't really useful.

Once this is possible it would allow a lot of simple savings. For
example, shifting all fixed-width fields to the front means they can
all be accessed without looping through the previous columns.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[HACKERS] ERROR: tuple concurrently updated

2006-12-19 Thread Stephen Frost
Greetings,

  Subject pretty much says it all.  I've put up with this error in the
  past when it has caused me trouble, but it's now starting to hit our
  clients on occasion, which is just unacceptable.

  The way I've seen it happen, and this is just empirically so I'm not
  sure that it's exactly right, is something like this:

  Running with pg_autovacuum on the system
  Run a long-running PL/PgSQL function which creates tables
  Wait for some sort of overlap, and the PL/PgSQL function dies with the
  above error.

  I've also seen it happen when I've got a long-running PL/PgSQL
  function going and I'm creating tables in another back-end.

  From a prior discussion I *think* the issue is the lack of
  versioning/visibility information in the SysCache which means that if
  the long-running function attempts to look-up data about a table which
  was created *after* the long-running function started but was put into
  the common SysCache by another backend, the long-running function gets
  screwed by the 'tuple concurrently updated' error and ends up failing
  and being rolled back.

  If this is correct then the solution seems to be either add versioning
  to the SysCache data, or have an overall 'this SysCache is only good
  for data past transaction X' so that a backend which is prior to that
  version could just accept that it can't use the SysCache and fall back
  to accessing the data directly (if that's possible).  I'm not very
  familiar with the way the SysCache system is envisioned but I'm not a
  terrible programmer (imv anyway) and given some direction on the
  correct approach to solving this problem I'd be happy to spend some
  time working on it.  I'd *really* like to see this error just go away
  completely for all non-broken use-cases.

Thanks,

Stephen




[HACKERS] Sync Scan update

2006-12-19 Thread Jeff Davis
I have updated my Synchronized Scan patch and have had more time for
testing.

Go to http://j-davis.com/postgresql/syncscan-results10.html
where you can download the patch, and see the benchmarks that I've run.

The results are very promising. I did not see any significant slowdown
for non-concurrent scans or for scans that fit into memory, although I
do need more testing in this area.

The benchmarks that I ran tested the concurrent performance, and the
results were excellent.

I also added two new simple features to the patch (they're just
#define'd tunables in heapam.h):
(1) If the table is smaller than
effective_cache_size*SYNC_SCAN_THRESHOLD then the patch doesn't do
anything different from current behavior.
(2) The scans can start earlier than the hint implies by setting
SYNC_SCAN_START_OFFSET between 0 and 1. This is helpful because it makes
the scan start in a place where the cache trail is likely to be
continuous between the starting point and the location of an existing scan.

I'd like any feedback, particularly any results that show a slowdown
from current behavior. I think I fixed Luke's problem (actually, it was
a fluke that it was even working at all), but I haven't heard back. Some
new feedback would be very helpful.

Thanks.

Regards,
Jeff Davis



[HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
I have written an article about the complexities of companies
contributing to open source projects:

http://momjian.us/main/writings/pgsql/company_contributions/

If you have any suggestions, please let me know.  I am going to add a
link to this from the developer's FAQ.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[HACKERS] Release 8.2.0 done, 8.3 development starts

2006-12-19 Thread Bruce Momjian
The 8.2.0 release went well.  We spent a month more in beta than we
planned, but that time helped to eliminate many bugs, and many that had
existed in previous PostgreSQL major releases as well.  We have had very
few bug reports for 8.2.0, and will be doing a minor release in 1-2
weeks to get those fixes out to the user community.

The development community is now focused on 8.3, and discussion and
patch application has already started.  This is scheduled to be a
shorter release cycle than normal, with feature freeze on April 1, with
major functionality discussed and hopefully reviewed by the community at
least a month before that.  This would put beta in mid-May, and final
release perhaps mid-July.  Of course, this all might change based on
community feedback.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Sync Scan update

2006-12-19 Thread Simon Riggs
On Tue, 2006-12-19 at 09:07 -0800, Jeff Davis wrote:
 I have updated my Synchronized Scan patch and have had more time for
 testing.
 
 Go to http://j-davis.com/postgresql/syncscan-results10.html
 where you can download the patch, and see the benchmarks that I've run.
 
 The results are very promising. I did not see any significant slowdown
 for non-concurrent scans or for scans that fit into memory, although I
 do need more testing in this area.

Yes, very promising.

Like to see some tests with 2 parallel threads, since that is the most
common case. I'd also like to see some tests with varying queries,
rather than all use select count(*). My worry is that these tests all
progress along their scans at exactly the same rate, so are likely to
stay in touch. What happens when we have significantly more CPU work to
do on one scan - does it fall behind??

I'd like to see all testing use log_executor_stats=on for those
sessions. I would like to know whether the blocks are being hit while
still in shared_buffers or whether we rely on the use of the full
filesystem buffer cache to provide performance.

It would be very cool to run a background performance test also, say a
pgbench run with a -S 100. That would show us what its like to try to
run multiple queries when most of the cache is full with something else.

It would be better to have a GUC to control the scanning
e.g.
synch_scan_threshold = 256MB

rather than link it to effective_cache_size always, since that is
related to index scan tuning.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 12:11 -0500, Bruce Momjian wrote:
 I have written an article about the complexities of companies
 contributing to open source projects:
 
   http://momjian.us/main/writings/pgsql/company_contributions/
 
 If you have any suggestions, please let me know.  I am going to add a
 link to this from the developer's FAQ.

This is certainly interesting. Is there a possibility of being able to
read it as a single page? I would like to comment on it but it is hard
to cross reference specific points (at least to me) with the small
sections.

Sincerely,

Joshua D. Drake


 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Sync Scan update

2006-12-19 Thread Jeff Davis
On Tue, 2006-12-19 at 17:43 +, Simon Riggs wrote:
 On Tue, 2006-12-19 at 09:07 -0800, Jeff Davis wrote:
  I have updated my Synchronized Scan patch and have had more time for
  testing.
  
  Go to http://j-davis.com/postgresql/syncscan-results10.html
  where you can download the patch, and see the benchmarks that I've run.
  
  The results are very promising. I did not see any significant slowdown
  for non-concurrent scans or for scans that fit into memory, although I
  do need more testing in this area.
 
 Yes, very promising.
 
 Like to see some tests with 2 parallel threads, since that is the most
 common case. I'd also like to see some tests with varying queries,
 rather than all use select count(*). My worry is that these tests all
 progress along their scans at exactly the same rate, so are likely to
 stay in touch. What happens when we have significantly more CPU work to
 do on one scan - does it fall behind??

Right, that's important. Hopefully the test you describe below sheds
some light on that.

 I'd like to see all testing use log_executor_stats=on for those
 sessions. I would like to know whether the blocks are being hit while
 still in shared_buffers or whether we rely on the use of the full
 filesystem buffer cache to provide performance.

Ok, will do.

 It would be very cool to run a background performance test also, say a
 pgbench run with a -S 100. That would show us what its like to try to
 run multiple queries when most of the cache is full with something else.

Do you mean '-S -s 100' or '-s 100'? Reading the pgbench docs it doesn't
look like '-S' takes an argument.

 It would be better to have a GUC to control the scanning
 e.g.
   synch_scan_threshold = 256MB
 
 rather than link it to effective_cache_size always, since that is
 related to index scan tuning.

I will make it completely unrelated to effective_cache_size. I'll do the
same with sync_scan_start_offset (by the way, does someone have a
better name for that?).

Regards,
Jeff Davis




Re: [HACKERS] Sync Scan update

2006-12-19 Thread Gregory Stark
Simon Riggs [EMAIL PROTECTED] writes:

 Like to see some tests with 2 parallel threads, since that is the most
 common case. I'd also like to see some tests with varying queries,
 rather than all use select count(*). My worry is that these tests all
 progress along their scans at exactly the same rate, so are likely to
 stay in touch. What happens when we have significantly more CPU work to
 do on one scan - does it fall behind??

If it's just CPU then I would expect the cache to help the followers keep up
pretty easily. What concerns me is queries that involve more I/O. For example
if the leader is doing a straight sequential scan and the follower is doing a
nested loop join driven by the sequential scan. Or worse, what happens if the
leader is doing a nested loop and the follower which is just doing a straight
sequential scan is being held back?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] Sync Scan update

2006-12-19 Thread Jeff Davis
On Tue, 2006-12-19 at 18:05 +, Gregory Stark wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
 
  Like to see some tests with 2 parallel threads, since that is the most
  common case. I'd also like to see some tests with varying queries,
  rather than all use select count(*). My worry is that these tests all
  progress along their scans at exactly the same rate, so are likely to
  stay in touch. What happens when we have significantly more CPU work to
  do on one scan - does it fall behind??
 
 If it's just CPU then I would expect the cache to help the followers keep up
 pretty easily. What concerns me is queries that involve more I/O. For example
 if the leader is doing a straight sequential scan and the follower is doing a
 nested loop join driven by the sequential scan. Or worse, what happens if the

That would be one painful query: scanning two tables in a nested loop,
neither of which fit into physical memory! ;)

If one table does fit into memory, it's likely to stay there since a
nested loop will keep the pages so hot.

I can't think of a way to test two big tables in a nested loop because
it would take so long. However, it would be worth trying it with an
index, because that would cause random I/O during the scan.

 leader is doing a nested loop and the follower which is just doing a straight
 sequential scan is being held back?
 

The follower will never be held back in my current implementation.

My current implementation relies on the scans to stay close together
once they start close together. If one falls seriously behind, it will
fall outside of the main cache trail and cause the performance to
degrade due to disk seeking and lower cache efficiency.

I think Simon is concerned about CPU because that will be a common case:
if one scan is CPU bound and another is I/O bound, they will progress at
different rates. That's bound to cause seeking and poor cache
efficiency.

Although I don't think either of these cases will be worse than current
behavior, it warrants more testing.

Regards,
Jeff Davis




Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Tue, 2006-12-19 at 12:11 -0500, Bruce Momjian wrote:
  I have written an article about the complexities of companies
  contributing to open source projects:
  
  http://momjian.us/main/writings/pgsql/company_contributions/
  
  If you have any suggestions, please let me know.  I am going to add a
  link to this from the developer's FAQ.
 
 This is certainly interesting. Is there a possibility of being able to
 read it as a single page? I would like to comment on it but it is hard
 to cross reference specific points (at least to me) with the small
 sections.

Thanks for the feedback, sectioning fixed.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 13:38 -0500, Bruce Momjian wrote:
 Joshua D. Drake wrote:
  On Tue, 2006-12-19 at 12:11 -0500, Bruce Momjian wrote:
   I have written an article about the complexities of companies
   contributing to open source projects:
   
 http://momjian.us/main/writings/pgsql/company_contributions/
   
   If you have any suggestions, please let me know.  I am going to add a
   link to this from the developer's FAQ.
  
  This is certainly interesting. Is there a possibility of being able to
  read it as a single page? I would like to comment on it but it is hard
  to cross reference specific points (at least to me) with the small
  sections.
 
 Thanks for the feedback, sectioning fixed.

Much better, thanks. I will comment shortly.

Sincerely,

Joshua D. Drake

 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Gurjeet Singh

On 12/20/06, Bruce Momjian [EMAIL PROTECTED] wrote:


Thanks for the feedback, sectioning fixed.



Spelling mistake:

because they have gone though a company process

to

because they have gone *through* a company process

Regards,

--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian

Fixed, thanks.

---

Gurjeet Singh wrote:
 On 12/20/06, Bruce Momjian [EMAIL PROTECTED] wrote:
 
  Thanks for the feedback, sectioning fixed.
 
 
 Spelling mistake:
 
 because they have gone though a company process
 
 to
 
 because they have gone *through* a company process
 
 Regards,
 
 -- 
 [EMAIL PROTECTED]
 [EMAIL PROTECTED] gmail | hotmail | yahoo }.com

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Gurjeet Singh

On 12/20/06, Bruce Momjian [EMAIL PROTECTED] wrote:


Fixed, thanks.



Following statement seems to be a bit mangled:

then when company('s?) needs diverge, going *it*(?) alone, then returning to
the community process at some later time.

--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
Gurjeet Singh wrote:
 On 12/20/06, Bruce Momjian [EMAIL PROTECTED] wrote:
 
  Fixed, thanks.
 
 
 Follwing statement seems to be a bit mangled:
 
 then when company('s?) needs diverge, going *it*(?) alone, then returning to
 the community process at some later time.

Thanks, clarified.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Lukas Kahwe Smith

Hi,

I think another point you need to bring out more clearly is that the 
community is also often miffed if they feel they have been left out of 
the design and testing phases. This is sometimes just a reflex that is 
not always based on technical reasoning. It's just that, as you correctly 
point out, people are worried about being hijacked by companies.


regards,
Lukas



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
Hello,

O.k. below are some comments. Your article, although well written, has a
distinctly "from the community" perspective ;) and I think there are some
points from the business side that are missed.

---
Employees working in open source communities have two bosses -- the
companies that employ them, and the open source community, which must
review their proposals and patches. Ideally, both groups would want the
same thing, but often companies have different priorities in terms of
deadlines, time investment, and implementation details. And,
unfortunately for companies, open source communities rarely adjust their
requirements to match company needs. They would often rather do
without than tie their needs to those of a company.
---

Employees don't have two bosses, at least not in the presentation above.
In the community the employee may choose to do things the community's way
or not. That choice is much more constrained within a boss's purview.

A company's priorities include one that is very powerful and that the
community does not share, and I believe it should be reflected in a
document such as this: actually feeding the employee. There is a tendency
for the community to forget that every minute spent on community work is
a direct cost to the immediate (note that I say immediate) bottom line.
That means that priorities must be balanced so that profit can be made,
employees can get bonuses and, god forbid, a steady paycheck.

---
This makes the employee's job difficult. When working with the
community, it can be difficult to meet company demands. If the company
doesn't understand how the community works, the employee can be seen as
defiant, when in fact the employee has no choice but to work in the
community process and within the community timetable.

By serving two masters, employees often exhibit various behaviors that
make their community involvement ineffective. Below I outline the steps
involved in open source development, and highlight the differences
experienced by employees involved in such activities.
---

The first paragraph seems to need some qualification. An employee is
hired to work in the best interests of the company, not the community.
Those two things may overlap, but that is subject to the company's
declaration. If the employee is not doing the task as delegated, that is
defiant.

I am suspecting that your clarification would be something to the effect
of:

When a company sets forth to donate resources to the community, it can
make an employee's job difficult. It is important for the company to
understand exactly what it is giving and the process that gift entails.

Or something like that.

I take issue with the term "serving two masters"; I am certainly not the
master of my team, but that may just be me.

---
Employees usually circulate their proposal inside their companies first
before sharing it with the community. Unfortunately, many employees
never take the additional step of sharing the proposal with the
community. This means the employee is not benefitting from community
oversight and suggestions, often leading to a major rewrite when a patch
is submitted to the community.
---

I think the above is not quite accurate. I see few proposals actually
come across to the community either and those that do seem to get bogged
down instead of progress being made.

The most successful topics I have seen are those that usually have some
footing behind them *before* they bring it to the community.

---
For employees, patch review often happens in the company first. Only
when the company is satisfied is the patch submitted to the community.
This is often done because of the perception that poor patches reflect
badly on the company. The problem with this patch pre-screening is that
it prevents parallel review, where the company and community are both
reviewing the patch. Parallel review speeds completion and avoids
unnecessary refactoring.
---

It does affect the perception of the company. Maybe not to the community,
but as someone who reads the comments on the patches that come through... I
do not look forward to the day when I have a customer that says, didn't
you submit that patch that was torn apart by...

---
As you can see, community involvement has unique challenges for company
employees. There are often many mismatches between company needs and
community needs, and the company must decide if it is worth honoring the
community needs or going it alone, without community involvement.
---

Hmmm... That seems a bit unfair don't you think? The people within the
company are likely members of the community. It seems that it makes
sense for the community to actually work with the company as well. 

I am not suggesting that the community develop code to the needs of a
company. I am suggesting that perhaps the community needs and the
company needs often overlap but both sides tend to ignore the overlap
and point at each other instead.

---
Company involvement in the community process usually has unforeseen
benefits, but also 

Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Lukas Kahwe Smith

Joshua D. Drake wrote:


O.k. in all Bruce I like your article but I must admit it seems to take
a The community is god perspective and that we must all bend to the
will of said community.

The community could learn a great deal from adopting some of the more
common business practices when it comes to development as well.

In short, I guess I think it is important to recognize that both are
partners in the open source world and that to ignore one over the other
is destined to fail.


I think Bruce's article is about painting a realistic picture for 
companies who want to get involved. And the reality is that people tend 
to be worried about company influence quite often, and they do expect a 
higher standard for patches coming from a (big) company. Of individuals 
they are more forgiving, but if an IBM engineer makes a little mistake it 
will produce a few (not necessarily open) snickers.


regards,
Lukas



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Andrej Ricnik-Bay

On 12/20/06, Joshua D. Drake [EMAIL PROTECTED] wrote:


O.k. in all Bruce I like your article but I must admit it seems to take
a The community is god perspective and that we must all bend to the
will of said community.

I'm not really in a position to judge how a company thinks about
donating resources to a project, but I certainly think that Bruce's
standpoint is correct, and that the community is *indeed* the driver of
a project; if a company doesn't like how the community deals with
their requirements/needs they can just maintain their own branch.



The community could learn a great deal from adopting some of the more
common business practices when it comes to development as well.

In short, I guess I think it is important to recognize that both are
partners in the open source world and that to ignore one over the other
is destined to fail.

Do you have any statistical data to back that hypothesis?


Sincerely,

Joshua D. Drake

Cheers,
Andrej



Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Andrew Dunstan

Magnus Hagander wrote:

We need different macros and possibly functions, yes.
I think I got enough patched at home last night to get it working with
this, I was just too focused on one set of macros at the time. It's not
enough to include them very late - because off_t is used in the shared
data structures in pg_dump/etc. It is possible to localise it to the
pg_dump binaries, though, given some header redirection *and* given that
we change all those off_t to pgoff_t (or similar). I couldn't find a way
to do it without changing the off_t define.

I'll try to take a look at merging these two efforts (again unless
beaten to it, have to do some of that dreaded christmas shopping as
well...)


  


What is needed to test this? Just a custom dump file with a member >
2^31 bytes?


cheers

andrew


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Magnus Hagander
Andrew Dunstan wrote:
 Magnus Hagander wrote:
 We need different macros and possibly functions, yes.
 I think I got enough patched at home last night to get it working with
 this, I was just too focused on one set of macros at the time. It's not
 enough to include them very late - because off_t is used in the shared
 datastructures in pg_dump/etc. It is possible to localise it to the
 pg_dump binaries, though, given some header redirection *and* given that
 we change all those off_t to pgoff_t (or similar). I couldn't find a way
 to do it without changing the off_t define.

 I'll try to take a look at merging these two efforts (again unless
 beaten to it, have to do some of that dreaded christmas shopping as
 well...)


   
 
 What is needed to test this? Just a custom dump file with a member > 2^31 bytes?

Yeah, I believe so. It backs it up fine, but it won't restore it
properly. You can create such a database with the pgbench tool and the
correct scaling factor, per the original mail in this thread.
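
As a rough illustration of the boundary involved, a file just past 2^31 bytes can be created sparsely without pgbench (assumes a filesystem that supports sparse files; this is no substitute for a real pgbench-generated dump):

```python
# Create a sparse file just past the 2**31-byte boundary that a 32-bit
# off_t cannot address (illustrative only; a real test needs an actual
# custom-format dump).
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bigmember.bin")
size = 2**31 + 16  # just past the 32-bit signed boundary

with open(path, "wb") as f:
    f.seek(size - 1)   # seeking this far already requires 64-bit offsets
    f.write(b"\x00")   # one byte at the end; the rest is a sparse hole

print(os.path.getsize(path) == size)  # True
```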

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Wed, 2006-12-20 at 09:51 +1300, Andrej Ricnik-Bay wrote:
 On 12/20/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
 
  O.k. in all Bruce I like your article but I must admit it seems to take
  a The community is god perspective and that we must all bend to the
  will of said community.
 I'm not really in a position to judge how a company thinks about
 donating  resources to a project, but I certainly think that Bruce'
 standpoint is correct, and that the community is *indeed* the driver of
 a project;  if a company doesn't like how the community deals with
 their requirements/needs they can just maintain their own branch.

It is definitely a tough distinction. My first thought on reply was that,
well, a company's employees become members of the community, but that
reinforces what you say above.

I think my overall thought is the tone seems a bit non-gracious to
companies, when IMO the community should be actively courting companies
to give resources. If companies feel unwelcome, they won't give.

 
 
  The community could learn a great deal from adopting some of the more
  common business practices when it comes to development as well.
 
  In short, I guess I think it is important to recognize that both are
  partners in the open source world and that to ignore one over the other
  is destined to fail.
 Do you have any statistical data to back that hypothesis?

Of which, the community learning or my take that if we ignore one over
the other it is destined to fail?

I don't really want to bring up the first point as it has been hashed
over and over. It lends to the project management, todo list, milestone
debacle :)

The second point is that if the community ignores the company trying to
give resources, the company is likely to ignore the community and thus
we both fail (and vice versa).

Sincerely,

Joshua D. Drake



 
  Sincerely,
 
  Joshua D. Drake
 Cheers,
 Andrej
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Andrej Ricnik-Bay

On 12/20/06, Joshua D. Drake [EMAIL PROTECTED] wrote:


I think my overall thought is the tone seems a bit non-gracious to
companies, when IMO the community should be actively courting companies
to give resources. If companies feel unwelcome, they won't give.

I appreciate that, but then Bruce' aim was (or at least that's how I
interpreted it) to point out difficulties that he as a long time member of
the postgres hacker community sees;  it would be a bit weird to expect
him to write something from the perspective of a company (even though
he conceivably could as an employee of enterprisedb).


Of which, the community learning or my take that if we ignore one over
the other it is destined to fail?

I meant the failure bit, sorry for the poor quoting.



I don't really want to bring up the first point as it has been hashed
over and over. It lends to the project management, todo list, milestone
debacle :)

Amen


The second point is that if the community ignores the company trying to
give resources, the company is likely to ignore the community and thus
we both fail (and vice versa).

I guess it depends on how you define "fail" for a group that hasn't
set its mind on making profit.



Sincerely,

Joshua D. Drake

Cheers,
Andrej



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Andrew Dunstan

Joshua D. Drake wrote:


I think my overall thought is the tone seems a bit non-gracious to
companies, when IMO the community should be actively courting companies
to give resources. If companies feel unwelcome, they won't give.

  


I have not been following closely. But IMNSHO we should be stressing the 
synergy involved in companies contributing to us. They benefit and we 
benefit. Yes there can be conflicts, but these are less likely to occur 
if communication stays open. Doing things behind closed doors is a 
recipe for disaster whether you are a contributing company or 
individual. Example: if I had developed notification payloads without 
getting Tom's redirection, I would have come up with a patch that would 
have been rejected, pissing me off and wasting my company's time. Now I 
feel I can come up with something acceptable.


cheers

andrew



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 10:01 +0100, Peter Eisentraut wrote:
 Magnus Hagander wrote:
  Is it possible to add an error hint to the message? Along the line of
  HINT: Did you perhaps get your casing wrong (with better wording,
  of course).
 
 Or how about we just make everything case-insensitive -- but 
 case-preserving! -- on Windows only?

Or we could simply add a helpful line to the postgresql.conf.

Sincerely,

Joshua D. Drake


 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Wed, 2006-12-20 at 10:27 +1300, Andrej Ricnik-Bay wrote:
 On 12/20/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
 
  I think my overall thought is the tone seems a bit non-gracious to
  companies, when IMO the community should be actively courting companies
  to give resources. If companies feel unwelcome, they won't give.
 I appreciate that, but then Bruce' aim was (or at least that's how I
 interpreted it) to point out difficulties that he as a long time member of
 the postgres hacker community sees;  it would be a bit weird to expect
 him to write something from the perspective of a company (even though
 he conceivably could as an employee of enterprisedb).

Well of course :), that is why I offered some feedback.


 
  The second point is that if the community ignores the company trying to
  give resources, the company is likely to ignore the community and thus
  we both fail (and vice versa).
 I guess it depends on how you define fail for a group that hasn't
 set its mind on making profit.
 

I am not speaking to profit or loss or money at all. I believe that just
the relationship not being productive between the community and the
company could be considered failing.

Sincerely,

Joshua D. Drake


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 16:30 -0500, Andrew Dunstan wrote:
 Joshua D. Drake wrote:
 
  I think my overall thought is the tone seems a bit non-gracious to
  companies, when IMO the community should be actively courting companies
  to give resources. If companies feel unwelcome, they won't give.
 

 
 I have not been following closely. But IMNSHO we should be stressing the 
 synergy involved in companies contributing to us. They benefit and we 
 benefit. Yes there can be conflicts, but these are less likely to occur 
 if communication stays open. Doing things behind closed doors is a 
 recipe for disaster whether you are a contributing company or 
 individual. Example: if I had developed notification payloads without 
 getting Tom's redirection, I would have come up with a patch that would 
 have been rejected, pissing me off and wasting my company's time. Now I 
 feel I can come up with something acceptable.

+1

Sincerely,

Joshua D. Drake

 
 cheers
 
 andrew
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 13:32 -0800, Joshua D. Drake wrote:
 On Tue, 2006-12-19 at 10:01 +0100, Peter Eisentraut wrote:
  Magnus Hagander wrote:
   Is it possible to add an error hint to the message? Along the line of
   HINT: Did you perhaps get your casing wrong (with better wording,
   of course).
  
  Or how about we just make everything case-insensitive -- but 
  case-preserving! -- on Windows only?
 
 Or we could simply add a helpful line to the postgresql.conf.


Index: postgresql.conf.sample
===
RCS file: /projects/cvsroot/pgsql/src/backend/utils/misc/postgresql.conf.sample,v
retrieving revision 1.199
diff -c -r1.199 postgresql.conf.sample
*** postgresql.conf.sample  21 Nov 2006 01:23:37 -  1.199
--- postgresql.conf.sample  19 Dec 2006 21:36:28 -
***
*** 24,29 
--- 24,33 
  # settings, which are marked below, require a server shutdown and restart
  # to take effect.
  
+ #
+ # Any memory setting may use a shortened notation such as 1024MB or 1GB.
+ # Please take note of the case next to the unit size.
+ #
  

#---
# FILE LOCATIONS


 
 Sincerely,
 
 Joshua D. Drake
 
 
  
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Tue, 2006-12-19 at 10:01 +0100, Peter Eisentraut wrote:
  Magnus Hagander wrote:
   Is it possible to add an error hint to the message? Along the line of
   HINT: Did you perhaps get your casing wrong (with better wording,
   of course).
  
  Or how about we just make everything case-insensitive -- but 
  case-preserving! -- on Windows only?
 
 Or we could simply add a helpful line to the postgresql.conf.

Looking at the documentation I see:

(possibly different) unit can also be specified explicitly.  Valid
memory units are <literal>kB</literal> (kilobytes),
<literal>MB</literal> (megabytes), and <literal>GB</literal>
(gigabytes); valid time units are <literal>ms</literal>
(milliseconds), <literal>s</literal> (seconds),
<literal>min</literal> (minutes), <literal>h</literal> (hours),
and <literal>d</literal> (days).  Note that the multiplier for
memory units is 1024, not 1000.

The only value to being case-sensitive in this area is to allow
upper/lower case with different meanings, but I don't see us using that,
so why do we bother caring about the case?
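
A case-insensitive scheme along these lines can be sketched as follows (illustrative Python, not guc.c's actual parser; the unit names and 1024 multiplier come from the docs quoted above):

```python
# Illustrative sketch (not PostgreSQL's actual guc.c logic): accept memory
# units case-insensitively, with the 1024 multiplier the docs describe.
MEMORY_UNITS = {"kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_memory(setting: str) -> int:
    """Return the setting in bytes, treating the unit case-insensitively."""
    s = setting.strip()
    for unit, factor in MEMORY_UNITS.items():
        if s.lower().endswith(unit):
            return int(s[: -len(unit)].strip()) * factor
    return int(s)  # a bare number carries no unit

# "MB", "Mb", and "mb" would all mean the same thing under this scheme:
print(parse_memory("100MB") == parse_memory("100Mb") == parse_memory("100mb"))  # True
```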

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 16:47 -0500, Bruce Momjian wrote:
 Joshua D. Drake wrote:
  On Tue, 2006-12-19 at 10:01 +0100, Peter Eisentraut wrote:
   Magnus Hagander wrote:
Is it possible to add an error hint to the message? Along the line of
HINT: Did you perhaps get your casing wrong (with better wording,
of course).
   
   Or how about we just make everything case-insensitive -- but 
   case-preserving! -- on Windows only?
  
  Or we could simply add a helpful line to the postgresql.conf.
 
 Looking at the documentation I see:
 
 (possibly different) unit can also be specified explicitly.  Valid
 memory units are <literal>kB</literal> (kilobytes),
 <literal>MB</literal> (megabytes), and <literal>GB</literal>
 (gigabytes); valid time units are <literal>ms</literal>
 (milliseconds), <literal>s</literal> (seconds),
 <literal>min</literal> (minutes), <literal>h</literal> (hours),
 and <literal>d</literal> (days).  Note that the multiplier for
 memory units is 1024, not 1000.
 
 The only value to being case-sensitive in this area is to allow
 upper/lower case with different meanings, but I don't see us using that,
 so why do we bother caring about the case?

Because it is technically correct :).

Sincerely,

Joshua D. Drake

 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Joshua D. Drake wrote:
 + #
 + # Any memory setting may use a shortened notation such as 1024MB or 1GB.
 + # Please take note of the case next to the unit size.
 + #

Well, if you add that, you should also list all the other valid units.  
But it's quite redundant, because nearly all the parameters that take 
units are already listed with units in the default file.  (Which makes 
Magnus's mistake all the more curious.)

In my mind, this is pretty silly.  There is no reputable precedent 
anywhere for variant capitalization in unit names.  Next thing we point 
out that zeros are significant in the interior of numbers, or what?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 22:59 +0100, Peter Eisentraut wrote:
 Joshua D. Drake wrote:
  + #
  + # Any memory setting may use a shortened notation such as 1024MB or 1GB.
  + # Please take note of the case next to the unit size.
  + #
 
 Well, if you add that, you should also list all the other valid units.  

Why? It is clearly just an example.

 But it's quite redundant, because nearly all the parameters that take 
 units are already listed with units in the default file.  (Which makes 
 Magnus's mistake all the more curious.)

Not really; most people I know don't even consider the difference
between MB and Mb... shoot, most people think that 1000MB equals one
gigabyte.

 
 In my mind, this is pretty silly.  There is no reputable precedent 
 anywhere for variant capitalization in unit names.

I am not suggestion variant capitalization. I am suggestion a simple
document patch to help eliminate what may not be obvious.

Sincerely,

Joshua D. Drake

-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Magnus Hagander
Peter Eisentraut wrote:
 Joshua D. Drake wrote:
 + #
 + # Any memory setting may use a shortened notation such as 1024MB or 1GB.
 + # Please take note of the case next to the unit size.
 + #
 
 Well, if you add that, you should also list all the other valid units.  
 But it's quite redundant, because nearly all the parameters that take 
 units are already listed with units in the default file.  (Which makes 
  Magnus's mistake all the more curious.)
 

The explanation is pretty simple. I was in a hurry to set it, just
opened the file up in vi, jumped to effective_cache_size, and set it. I
remembered that hey, I can spec it in Mb now, I don't have to think,
brilliant, and just typed it in. Restarted pg and noticed it wouldn't
start...

Had I actually read through all the documentation before I did it, it
certainly wouldn't have been a problem. I doubt many users actually do
that, though. In most cases, I just assume they would just assume they
can't use units on it because the default value in the file doesn't have
units.

And frankly, this is the only case I can recall having seen when the
input is actually case sensitive between Mb and MB. Could be that I'm
not exposed to enough systems that take such input, but I can't imagine
there aren't others who would make the same mistake.

//Magnus



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Bruce Momjian wrote:
 The only value to being case-sensitive in this area is to allow
 upper/lower case with different meanings, but I don't see us using
 that, so why do we bother caring about the case?

Because the units are what they are.

In broader terms, we may one day want to have other units or a 
units-aware data type, so introducing incompatibilities now would be 
unfortunate.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake

  In my mind, this is pretty silly.  There is no reputable precedent 
  anywhere for variant capitalization in unit names.
 
 I am not suggestion variant capitalization. I am suggestion a simple
 document patch to help eliminate what may not be obvious.

Good lord... *suggesting*

Joshua D. Drake

 Sincerely,
 
 Joshua D. Drake
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Gregory Stark

Tom Lane [EMAIL PROTECTED] writes:

 Magnus Hagander [EMAIL PROTECTED] writes:
 Oh, you mean MB vs Mb. Man, it had to be that simple :)

 ISTM we had discussed whether guc.c should accept units strings in
 a case-insensitive manner, and the forces of pedantry won the first
 round.  Shall we reopen that argument?

Nope, I just checked back in the archive and that's not what happened. There
was an extended discussion about whether to force users to use the silly KiB,
MiB, etc units. Thankfully the pedants lost that round soundly.

There was no particular discussion about case sensitivity though Simon made
the case for user-friendly behaviour:

 I think we are safe assume to that
 
   kB = KB = kb = Kb = 1024 bytes
 
   mB = MB = mb = Mb = 1024 * 1024 bytes
 
   gB = GB = gb = Gb = 1024 * 1024 * 1024 bytes
 
 There's no value in forcing the use of specific case and it will be just
 confusing for people.

http://archives.postgresql.org/pgsql-hackers/2006-07/msg01253.php

And Jim Nasby said something similar:

 Forcing people to use a specific casing scheme is just going to lead to
 confusion and user frustration. If there's not a very solid *functional*
 argument for it, we shouldn't do it. Wanting to enforce a convention that
 people rarely use isn't a good reason.

http://archives.postgresql.org/pgsql-hackers/2006-07/msg01355.php

There was a lone comment from Thomas Hallgren in favour of case sensitivity in
the name of consistency. But Nasby's comment was directly in response and
nobody else piped up after that.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Magnus Hagander wrote:
 In most cases, I just assume they would just assume
 they can't use units on it because the default value in the file
 doesn't have units.

But the default value *does* have units.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Magnus Hagander
Peter Eisentraut wrote:
 Magnus Hagander wrote:
 In most cases, I just assume they would just assume
 they can't use units on it because the default value in the file
 doesn't have units.
 
 But the default value *does* have units.
 
It does? Didn't in my file. I must've overwritten it with a config file
from some earlier beta (or snapshot) that didn't have it or so - my
default value certainly didn't have it ;-)

//Magnus



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
Lukas Kahwe Smith wrote:
 Hi,
 
 I think another point you need to bring out more clearly is that the 
 community is also often miffed if they feel they have been left out of 
 the design and testing phases. This is sometimes just a reflex that is 
 not always based on technical reasoning. It's just that, as you correctly 
 point out, they are worried about being hijacked by companies.

I hate to mention an emotional community reaction in this document. We
normally just highlight the inefficiency of a company doing things on
their own, and the wasted effort of them having to make adjustments.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 23:39 +0100, Magnus Hagander wrote:
 Peter Eisentraut wrote:
  Magnus Hagander wrote:
  In most cases, I just assume they would just assume
  they can't use units on it because the default value in the file
  doesn't have units.
  
  But the default value *does* have units.
  
 It does? Didn't in my file. I must've overwritten it with a config file
 from some earlier beta (or snapshot) that didn't have it or so - my
 default value certainly didn't have it ;-)

Just to confirm: yes, the sample I submitted a patch for does have units.

Joshua D. Drake


 
 //Magnus
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Joshua D. Drake wrote:
 I am not suggestion variant capitalization. I am suggestion a simple
 document patch to help eliminate what may not be obvious.

Perhaps it would be more effective to clarify the error message?  Right 
now it just says something to the effect of "invalid integer".  I'd 
imagine "invalid memory unit: TB" would be less confusing.
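
The distinction being suggested could look roughly like this (illustrative sketch; the real change would live in guc.c's input parsing):

```python
# Illustrative sketch of the clearer diagnostic suggested above: tell apart
# "not a number at all" from "number with an unrecognized unit".
import re

KNOWN_MEMORY_UNITS = {"kB", "MB", "GB"}

def diagnose(value: str) -> str:
    m = re.fullmatch(r"\s*(\d+)\s*([A-Za-z]*)\s*", value)
    if m is None:
        return "invalid integer"               # e.g. "lots"
    unit = m.group(2)
    if unit and unit not in KNOWN_MEMORY_UNITS:
        return "invalid memory unit: " + unit  # e.g. "16TB"
    return "ok"

print(diagnose("16TB"))  # invalid memory unit: TB
print(diagnose("16MB"))  # ok
```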

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[HACKERS] psql: core dumped

2006-12-19 Thread Mario

When psql is running and CTRL + \ is pressed, a core dump shows up.
At first I ran psql in gdb, saw the backtrace, and believed
it was a libc6 bug, so I reported it to my distro security team:
https://launchpad.net/distros/ubuntu/+source/glibc/+bug/76437

 Ubuntu Edgy has libc-2.4; a friend of mine tested on Linux with
libc-2.3 and nothing happened. I don't know why this happens with my
libc version (2.4).

 So, with humility :-) I want to help, and I wrote a patch with a
solution; you can get it at the following link:
http://media.forestal.udec.cl/pgsql/startup.diff (I did not want to
send it as an attachment.)

 My solution is to catch the SIGQUIT signal and abort in silence, just that.
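
A minimal sketch of that approach (hypothetical; the actual patch is at the link above):

```python
# Hypothetical sketch of the approach described: catch SIGQUIT so that
# Ctrl-\ exits quietly instead of leaving a core dump. POSIX-only.
import signal
import sys

def handle_sigquit(signum, frame):
    sys.exit(0)  # leave silently; no core file is written

signal.signal(signal.SIGQUIT, handle_sigquit)
```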


  Regards!

PS: sorry for my English
--
http://www.advogato.org/person/mgonzalez/



Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Gregory Stark

Kenneth Marshall [EMAIL PROTECTED] writes:

 My one comment is that a little 'b' is used to indicate bits normally
 and a capital 'B' is used to indicate bytes. So
kb = '1024 bits'
kB = '1024 bytes'
 I do think that whether or not the k/m/g is upper case or lower case
 is immaterial.

Yes, well, no actually there are standard capitalizations for the k and M and
G. A lowercase g is a gram and a lowercase m means milli-.

But I think that only gets you as far as concluding that Postgres ought to
consistently use kB MB and GB in its own output. Which afaik it does.

To reach a conclusion about whether it should restrict valid user input
similarly you would have to make some sort of argument about what problems it
could lead to if we allow users to be sloppy.

I could see such an argument being made but it requires a lot of speculation
about hypothetical future parameters and future problems. When we have known
real problems today.

And yes, btw, the case sensitivity of these units had already surprised and
bothered me earlier and I failed to mention it at the time.


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] psql: core dumped

2006-12-19 Thread Peter Eisentraut
Mario wrote:
  When psql is running and CTRL + \ is pressed, a core dump shows up.
 At first I ran psql in gdb, saw the backtrace, and believed
 it was a libc6 bug, so I reported it to my distro security team:
 https://launchpad.net/distros/ubuntu/+source/glibc/+bug/76437

This isn't a bug.  It's working as designed.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] psql: core dumped

2006-12-19 Thread Mario

On 20/12/06, Peter Eisentraut [EMAIL PROTECTED] wrote:

Mario wrote:
  When psql is running and CTRL + \ is pressed, a core dump shows up.
 At first I ran psql in gdb, saw the backtrace, and believed
 it was a libc6 bug, so I reported it to my distro security team:
 https://launchpad.net/distros/ubuntu/+source/glibc/+bug/76437

This isn't a bug.  It's working as designed.



  Even if you get a core dump every time you press CTRL+\ ? Why?





--
http://www.advogato.org/person/mgonzalez/



Re: [HACKERS] psql: core dumped

2006-12-19 Thread Andrew Dunstan
Mario wrote:
 On 20/12/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 Mario wrote:
   When psql is running and CTRL + \ is pressed, a core dump shows up.
  At first I ran psql in gdb, saw the backtrace, and believed
  it was a libc6 bug, so I reported it to my distro security team:
  https://launchpad.net/distros/ubuntu/+source/glibc/+bug/76437

 This isn't a bug.  It's working as designed.


Even if you get a core dump every time you press CTRL+\ ? Why?


This is normally a SIGQUIT, and on my machine at least the default action for
that is a core dump. Perhaps you need to say what you are trying to do and
why.

cheers

andrew






Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Bruce Momjian
Gregory Stark wrote:
 
 Kenneth Marshall [EMAIL PROTECTED] writes:
 
  My one comment is that a little 'b' is used to indicate bits normally
  and a capital 'B' is used to indicate bytes. So
 kb = '1024 bits'
 kB = '1024 bytes'
  I do think that whether or not the k/m/g is upper case or lower case
  is immaterial.
 
 Yes, well, no actually there are standard capitalizations for the k and M and
 G. A lowercase g is a gram and a lowercase m means milli-.

I will have 150 grams of shared memory, please.

 But I think that only gets you as far as concluding that Postgres ought to
 consistently use kB MB and GB in its own output. Which afaik it does.
 
 To reach a conclusion about whether it should restrict valid user input
 similarly you would have to make some sort of argument about what problems it
 could lead to if we allow users to be sloppy.
 
 I could see such an argument being made but it requires a lot of speculation
 about hypothetical future parameters and future problems. When we have known
 real problems today.
 
 And yes, btw, the case sensitivity of these units had already surprised and
 bothered me earlier and I failed to mention it at the time.

Agreed.  However, I see 'ms' as milliseconds, so perhaps the M vs. m is
already in use.  I think we at least need to document the case
sensitivity and improve the error message.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] psql: core dumped

2006-12-19 Thread Philip Yarra

Mario wrote:

  Even if you get a core dumped every time you press CTRL+\  ?  why?


Try "ulimit -c 0", then run it (you should get no core dump)
Then "ulimit -c 50", then run it (you should get a core dump)

SIGQUIT is supposed to dump core. Ulimit settings can suppress 
generation of core files. The difference between your machine and your 
friend's is likely just the ulimit settings.


Regards, Philip.

--
Philip Yarra
Senior Software Engineer, Utiba Pty Ltd
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] psql: core dumped

2006-12-19 Thread Gregory Stark
Mario [EMAIL PROTECTED] writes:

 On 20/12/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
 
  This isn't a bug.  It's working as designed.

   Even if you get a core dumped every time you press CTRL+\  ?  why?

That's what C-\ does. Try it with any other program:

$ sleep 1
Quit (core dumped)


Most distributions ship with coredumpsize limited to 0 by default though, so
you would only cause it to crash without a core dump by default. Either yours
doesn't, or you've enabled core dumps with "ulimit -c unlimited" (not that
that's a bad thing).

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
Joshua D. Drake wrote:
 On Wed, 2006-12-20 at 09:51 +1300, Andrej Ricnik-Bay wrote:
  On 12/20/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
  
   O.k. in all Bruce I like your article but I must admit it seems to take
   a The community is god perspective and that we must all bend to the
   will of said community.
  I'm not really in a position to judge how a company thinks about
  donating  resources to a project, but I certainly think that Bruce'
  standpoint is correct, and that the community is *indeed* the driver of
  a project;  if a company doesn't like how the community deals with
  their requirements/needs they can just maintain their own branch.
 
 It is definitely a tough distinction. My first thought on reply was that
 a company's employees become members of the community, but that
 reinforces what you say above.
 
 I think my overall thought is the tone seems a bit non-gracious to
 companies, when IMO the community should be actively courting companies
 to give resources. If companies feel unwelcome, they won't give.
 
  
  
   The community could learn a great deal from adopting some of the more
   common business practices when it comes to development as well.
  
   In short, I guess I think it is important to recognize that both are
   partners in the open source world and that to ignore one over the other
   is destined to fail.
  Do you have any statistical data to back that hypothesis?
 
 Of which, the community learning or my take that if we ignore one over
 the other it is destined to fail?

This actually brings up an important distinction.  Joshua is saying that
the community is painted as god in the article, and I agree there is a
basis for that, but I don't think you can consider the community and
company as equals either.  I remember the president of Great Bridge
saying that the company needs the community, but not vice versa --- if
the company dies, the community keeps going (as it did after Great
Bridge, without a hiccup), but if the community dies, the company dies
too.  Also, the community is developing the software at a rate that
almost no other company can match, so again the company is kind of in
tow if they are working with the community process.  For example, the
community is not submitting patches for the company to approve.

I do think I need to add a more generous outreach to companies in the
article, explaining how valuable they are to the community, so let me
work on that and I will post when I have an update.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 Perhaps it would be more effective to clarify the error message?  Right 
 now it just says something to the effect of "invalid integer".  I'd 
 imagine "invalid memory unit: TB" would be less confusing.

+1 on that, but I think we should just accept the strings
case-insensitively, too.  SQL in general is not case sensitive for
keywords, and neither is anything else in the postgresql.conf file,
so I argue it's inconsistent to be strict about the case for units.

Nor do I believe that we'd ever accept a future patch that made
the distinction between "kb" and "kB" significant --- if you think
people are confused now, just imagine what would happen then.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Release 8.2.0 done, 8.3 development starts

2006-12-19 Thread Thomas H.

The 8.2.0 release went well.  We spent a month more in beta than we
planned, but that time helped to eliminate many bugs, and many that had
existed in previous PostgreSQL major releases as well.  We have had very
few bug reports for 8.2.0, and will be doing a minor release in 1-2
weeks to get those fixes out to the user community.


any chance of having the win32 relation file permission bug looked at again 
for the minor release? this still worries us quite a bit here...


best regards,
thomas 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] psql: core dumped

2006-12-19 Thread Jeremy Drake
On Wed, 20 Dec 2006, Philip Yarra wrote:

 Mario wrote:
Even if you get a core dumped every time you press CTRL+\  ?  why?

 Try ulimit -c 0, then run it (you should get no core dump)
 Then ulimit -c 50, then run it (you should get a core dump)

 SIGQUIT is supposed to dump core. Ulimit settings can suppress generation of
 core files. The difference between your machine and your friend's is likely
 just the ulimit settings.

If you want to type CTRL+\ you can redefine what char generates SIGQUIT
with the "stty quit" command.  For instance,

stty quit ^@




-- 
fortune's Contribution of the Month to the Animal Rights Debate:

I'll stay out of animals' way if they'll stay out of mine.
Hey you, get off my plate
-- Roger Midnight

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake

The community could learn a great deal from adopting some of the more
common business practices when it comes to development as well.
   
In short, I guess I think it is important to recognize that both are
partners in the open source world and that to ignore one over the other
is destined to fail.
   Do you have any statistical data to back that hypothesis?
  
  Of which, the community learning or my take that if we ignore one over
  the other it is destined to fail?
 
 This actually brings up an important distinction.  Joshua is saying that
 the community is painted as god in the article, and I agree there is a
 basis for that, but I don't think you can consider the community and
 company as equals either. 

I can agree with that.

  I remember the president of Great Bridge
 saying that the company needs the community, but not vice versa --- if
 the company dies, the community keeps going (as it did after Great
 Bridge, without a hiccup), but if the community dies, the company dies
 too. 

I 95% agree here. If EDB or CMD were to go down in flames, it could hurt
the community quite a bit. It isn't that the community wouldn't go on,
but that it would definitely negatively affect the productivity of the
community for n amount of time.

  Also, the community is developing the software at a rate that
 almost no other company can match, so again the company is kind of in
 tow if they are working with the community process.  For example, the
 community is not submitting patches for the company to approve.

Agreed.

 
 I do think I need to add a more generous outreach to companies in the
 article, explaining how valuable they are to the community, so let me
 work on that and I will post when I have an update.

Cool, that is what I was really looking for.

Sincerely,

Joshua D. Drake



 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 19:16 -0500, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  Perhaps it would be more effective to clarify the error message?  Right 
  now it just says something to the effect of "invalid integer".  I'd 
  imagine "invalid memory unit: TB" would be less confusing.
 
 +1 on that, but I think we should just accept the strings
 case-insensitively, too.  SQL in general is not case sensitive for
 keywords, and neither is anything else in the postgresql.conf file,
 so I argue it's inconsistent to be strict about the case for units.

Hello,

Attached is a simple patch that replaces strcmp() with pg_strcasecmp().
Thanks to AndrewS for pointing out that I shouldn't use strcasecmp().

I compiled and installed, ran an initdb with 32mb (versus 32MB) and it
seems to work correctly with a "show shared_buffers;"

Sincerely,

Joshua D. Drake




-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


Index: guc.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/utils/misc/guc.c,v
retrieving revision 1.362
diff -c -r1.362 guc.c
*** guc.c	13 Dec 2006 05:54:48 -	1.362
--- guc.c	20 Dec 2006 00:59:15 -
***
*** 3630,3647 
  		while (*endptr == ' ')
  			endptr++;
  
! 		if (strcmp(endptr, "kB") == 0)
  		{
  			used = true;
  			endptr += 2;
  		}
! 		else if (strcmp(endptr, "MB") == 0)
  		{
  			val *= KB_PER_MB;
  			used = true;
  			endptr += 2;
  		}
! 		else if (strcmp(endptr, "GB") == 0)
  		{
  			val *= KB_PER_GB;
  			used = true;
--- 3630,3647 
  		while (*endptr == ' ')
  			endptr++;
  
! 		if (pg_strcasecmp(endptr, "kB") == 0)
  		{
  			used = true;
  			endptr += 2;
  		}
! 		else if (pg_strcasecmp(endptr, "MB") == 0)
  		{
  			val *= KB_PER_MB;
  			used = true;
  			endptr += 2;
  		}
! 		else if (pg_strcasecmp(endptr, "GB") == 0)
  		{
  			val *= KB_PER_GB;
  			used = true;
***
*** 3669,3698 
  		while (*endptr == ' ')
  			endptr++;
  
! 		if (strcmp(endptr, "ms") == 0)
  		{
  			used = true;
  			endptr += 2;
  		}
! 		else if (strcmp(endptr, "s") == 0)
  		{
  			val *= MS_PER_S;
  			used = true;
  			endptr += 1;
  		}
! 		else if (strcmp(endptr, "min") == 0)
  		{
  			val *= MS_PER_MIN;
  			used = true;
  			endptr += 3;
  		}
! 		else if (strcmp(endptr, "h") == 0)
  		{
  			val *= MS_PER_H;
  			used = true;
  			endptr += 1;
  		}
! 		else if (strcmp(endptr, "d") == 0)
  		{
  			val *= MS_PER_D;
  			used = true;
--- 3669,3698 
  		while (*endptr == ' ')
  			endptr++;
  
! 		if (pg_strcasecmp(endptr, "ms") == 0)
  		{
  			used = true;
  			endptr += 2;
  		}
! 		else if (pg_strcasecmp(endptr, "s") == 0)
  		{
  			val *= MS_PER_S;
  			used = true;
  			endptr += 1;
  		}
! 		else if (pg_strcasecmp(endptr, "min") == 0)
  		{
  			val *= MS_PER_MIN;
  			used = true;
  			endptr += 3;
  		}
! 		else if (pg_strcasecmp(endptr, "h") == 0)
  		{
  			val *= MS_PER_H;
  			used = true;
  			endptr += 1;
  		}
! 		else if (pg_strcasecmp(endptr, "d") == 0)
  		{
  			val *= MS_PER_D;
  			used = true;

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] psql: core dumped

2006-12-19 Thread Alvaro Herrera
Jeremy Drake wrote:
 On Wed, 20 Dec 2006, Philip Yarra wrote:
 
  Mario wrote:
 Even if you get a core dumped every time you press CTRL+\  ?  why?
 
  Try ulimit -c 0, then run it (you should get no core dump)
  Then ulimit -c 50, then run it (you should get a core dump)
 
  SIGQUIT is supposed to dump core. Ulimit settings can suppress generation of
  core files. The difference between your machine and your friend's is likely
  just the ulimit settings.
 
 If you want to type CTRL+\ you can redefine what char generates SIGQUIT
 with the "stty quit" command.

I think the problem Mario is really trying to solve is quitting at
psql's "Password:" prompt.  Ctrl-C is apparently ignored at that point.
SIGQUIT (thus Ctrl-\ in most people's setups) does it, but it also dumps
core.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Tom Lane wrote:
 Nor do I believe that we'd ever accept a future patch that made
 the distinction between "kb" and "kB" significant --- if you think
 people are confused now, just imagine what would happen then.

As I said elsewhere, I'd imagine future functionality like a units-aware 
data type, which has been talked about several times, and then this 
would be really bad.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Joshua D. Drake wrote:
 I compiled and installed, ran an initdb with 32mb (versus 32MB) and
 it seems to work correctly with a "show shared_buffers;"

Did it actually allocate 32 millibits of shared buffers?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] pg_restore fails with a custom backup file

2006-12-19 Thread Hiroshi Saito

Hi Asaba-san.

From: Yoshiyuki Asaba


Is it possible to use fsetpos()/fgetpos() instead of ftell()/fseek()?
fpos_t is an 8-byte type. I tested pg_dump/pg_restore with the attached
patch.


I'm sorry for the slow response... my machine is a poor one and reacts
slowly. Last night I was actually looking at your proposal. I had been
trying a fix from a different angle, which also looked like one possible
solution. I think your proposal contributes enough as well. However, I
think there is also the concern that Magnus-san raised.

There are now your proposal, Andreas-san's proposal, and my proposal.
Surely the best course is to arrange these into one good solution and
then offer it to Magnus-san again. :-)

P.S.)
I understand a lot of your time has been spent on this work:
patch-test-check, patch-test-check...
Oh, and big data takes a large amount of time...

Anyway, thanks!!:-)

Regards,
Hiroshi Saito







---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Peter Eisentraut
Tom Lane wrote:
 +1 on that, but I think we should just accept the strings
 case-insensitively, too.

I think if we allowed this to spread, documentation, example files and 
other material would use it inconsistently, and even more people would 
be confused and it would make us look silly.

It's not like anyone has pointed out a real use case here.  The default 
file has the units already, so it's not like they're hard to guess.  
And Magnus's issue was that the error message was confusing.  So let's 
fix that.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Tom Dunstan

Alvaro Herrera wrote:

I don't, because there are always those that are knowledgeable enough to
know how to reduce space lost to padding.  So it would be nice to have
2-byte enums on-disk, and resolve them based on the column's typid.  But
then, I'm not familiar with the patch at all so I'm not sure if it's
possible.


Not with this patch, and AFAIK not possible generally, without writing 
separate I/O functions for each type. I'd love to be able to do that, 
but I don't think it's possible currently. The main stumbling block is 
the output function (and cast-to-text function), because output 
functions do not get provided the oid of the type that they're dealing 
with, for security reasons IIRC. It was never clear to me why I/O 
functions should ever be directly callable by a user (and hence open to 
security issues), but apparently it was enough to purge any that were 
designed like that from the system, so I wasn't going to go down that 
road with the patch.


Cheers

Tom



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake
On Wed, 2006-12-20 at 02:19 +0100, Peter Eisentraut wrote:
 Joshua D. Drake wrote:
  I compiled and installed, ran an initdb with 32mb (versus 32MB) and
  it seems to work correctly with a "show shared_buffers;"
 
 Did it actually allocate 32 millibits of shared buffers?

Funny :)

Joshua D. Drake



 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] [PATCHES] Enums patch v2

2006-12-19 Thread Tom Dunstan

Peter Eisentraut wrote:
An objection to enums on the ground that foreign keys can accomplish the 
same thing could be extended to object to any data type with a finite 
domain.


Exactly. The extreme case is the boolean type, which could easily be 
represented by a two-value enum. Or, if you were feeling masochistic, a 
FK to a separate table. Which is easier?


People regularly do stuff like having domains over finite text values, 
or having a FK to a separate (static) table, or using some sort of EAV. 
Enums are type-safe, easily ordered, relatively efficient and don't 
leave zillions of little static tables all over the place, a combination 
of attributes that none of the alternative solutions in this space present.


Cheers

Tom


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Jonah H. Harris

On 12/19/06, Bruce Momjian [EMAIL PROTECTED] wrote:

This actually brings up an important distinction.  Joshua is saying that
the community is painted as god in the article, and I agree there is a
basis for that, but I don't think you can consider the community and
company as equals either.


Of course, this seems only true of PostgreSQL, FreeBSD, and very few
others... not the other 99% of open source communities, which are built
around a commercial product or consist of a handful of people.
The title of the document is generalized to "Open Source
Communities", and most of the items here just don't represent the
majority of open source communities, whether we'd like it or not.


if the company dies, the community keeps going (as it did after Great
Bridge, without a hickup), but if the community dies, the company dies
too.


In my opinion, if a PostgreSQL company dies, there will be a
ripple-effect felt in the community depending on the size of the
company, development/support sponsored, monetary contribution, etc.
If a company buys up (or buys off) the experienced major developers
for an open source project, it could easily spell disaster for the
project.

However, in regard to a dying community killing a company, I disagree
completely.  Commercial software companies most certainly do not rely
on outside contribution to survive.  And, like it or not, any company
could run with PostgreSQL as a normal software company exactly the
same way as they can with code they wrote from scratch.  Many open
source people forget there is a commercial software industry that not
only predates them, but will most likely continue on far into the
future.


Also, the community is developing the software at a rate that
almost no other company can match, so again the company is kind of in
toe if they are working with the community process.


Again, this thinking may apply only to a few projects with a
PostgreSQL-like model.  The reference to "the community" seems
directly linked to PostgreSQL.  I can name many communities that could
never compete on a development rate with their commercial
sponsors/counterparts.

Commercial companies (100+ names left out) can develop way more
features than most open source communities in the same span of time or
faster.  And, going back to the article being open-source in general,
most other open source communities don't actually contribute a ton of
code back to the project's parent software base; certainly not more
than the company writes itself.

As this document is supposed to be factual, I'd really like not to get
into a war over lines-of-code development rates vs. bugs, quality (or
lack thereof), etc.  The *fact* is, some commercial software companies
could easily churn out more, better quality code, if they chose to
hire the right people and put enough money and thought into it.


I do think I need to add a more generous outreach to companies in the
article, explaining how valuable they are to the community, so let me
work on that and I will post when I have an update.


The title of the document, "How Companies Can Effectively Contribute
To Open Source Communities", doesn't seem to fit the content.  I would
consider something more along the lines of "Enterprise Open Source:
Effectively Contributing Commercial Support to Open Source
Communities", or "What to Expect when Contributing to Open Source
Projects".  More specifically, I'd restrict the document to PostgreSQL
because it really doesn't represent the majority of open source
software communities which tend to be commercially-driven.

If this document is meant to help companies help open source and/or
PostgreSQL, I think that's a good idea.  This document doesn't seem to
be written in a way that a company looking to help fund or contribute to
an open source project would respond favorably to.  It seems more or
less from a community view as to "if you want to help us, this is
what we expect from you"; which may be the desired intent?

I read it over twice and that was my impression.  While I'm a big fan
of open source, prefer it to *most* commercial software, and think
it's a great thing all around, I'm a realist and am not going to turn
a blind eye to the enormously successful and profitable arena of
commercial software.

You and I have discussed these items before privately and, while we
always seem to disagree, I just figured I'd post them (for better or
worse) on-list.  For both sides of the discussion, I'm sure there are
others who think the same thing but remain silent :)


--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
33 Wood Ave S, 3rd Floor| [EMAIL PROTECTED]
Iselin, New Jersey 08830| http://www.enterprisedb.com/

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Joshua D. Drake

 Hello,
 
 Attached is a simple patch that replaces strcmp() with pg_strcasecmp().
 Thanks to AndrewS for pointing out that I shouldn't use strcasecmp().
 

That should be AndrewD :)

J


 I compiled and installed, ran an initdb with 32mb (versus 32MB) and it
 seems to work correctly with a "show shared_buffers;"
 
 Sincerely,
 
 Joshua D. Drake
 
 
 
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 I remember the president of Great Bridge
 saying that the company needs the community, but not vice versa --- if
 the company dies, the community keeps going (as it did after Great
 Bridge, without a hiccup), but if the community dies, the company dies
 too. 

 I 95% agree here. If EDB or CMD were to go down in flames, it could hurt
 the community quite a bit.

Josh, I hate to burst your bubble, but Great Bridge employed a much
larger fraction of the hacker-community-at-the-time than either EDB or
CMD do today.  We survived that, and if EDB or CMD or Greenplum or the
entire lot went down tomorrow, we'd survive that too.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake
On Tue, 2006-12-19 at 22:04 -0500, Tom Lane wrote:
 Joshua D. Drake [EMAIL PROTECTED] writes:
  I remember the president of Great Bridge
  saying that the company needs the community, but not vice versa --- if
  the company dies, the community keeps going (as it did after Great
  Bridge, without a hiccup), but if the community dies, the company dies
  too. 
 
  I 95% agree here. If EDB or CMD were to go down in flames, it could hurt
  the community quite a bit.
 
 Josh, I hate to burst your bubble, but Great Bridge employed a much
 larger fraction of the hacker-community-at-the-time than either EDB or
 CMD do today.  We survived that, and if EDB or CMD or Greenplum or the
 entire lot went down tomorrow, we'd survive that too.

I never once suggested that the community would not survive. I said it
would hurt productivity for n amount of time.

Let's just be realistic here:

In one fell swoop:

Devrim, Alvaro, Darcy, Heikki, Bruce, Simon, Greg, Dave, Marc and I are
all suddenly looking for employment...

You don't think there would be an issue that could cause some grief to
the community? Is it surmountable? Of course. But that isn't the point.
The point is that it is not painless.


Sincerely,

Joshua D. Drake


 
   regards, tom lane
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] effective_cache_size vs units

2006-12-19 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Nor do I believe that we'd ever accept a future patch that made
 the distinction between "kb" and "kB" significant --- if you think
 people are confused now, just imagine what would happen then.

 As I said elsewhere, I'd imagine future functionality like a units-aware 
 data type, which has been talked about several times, and then this 
 would be really bad.

Only if the units-aware datatype insisted on case-sensitive units, which
is at variance with the SQL spec's treatment of keywords, the existing
practice in postgresql.conf, the existing practice in our datatypes such
as timestamp and interval:

regression=# select '20-dec-2006'::timestamp;
  timestamp  
-
 2006-12-20 00:00:00
(1 row)

regression=# select '20-DEC-2006'::timestamp;
  timestamp  
-
 2006-12-20 00:00:00
(1 row)

regression=# select '20-Dec-2006 America/New_York'::timestamptz;
  timestamptz   

 2006-12-20 00:00:00-05
(1 row)

regression=# select '20-Dec-2006 AMERICA/NEW_york'::timestamptz;
  timestamptz   

 2006-12-20 00:00:00-05
(1 row)

regression=# select '20-Dec-2006 PST'::timestamptz;
  timestamptz   

 2006-12-20 03:00:00-05
(1 row)

regression=# select '20-Dec-2006 pst'::timestamptz;
  timestamptz   

 2006-12-20 03:00:00-05
(1 row)

regression=# select '1 day'::interval;
 interval 
--
 1 day
(1 row)

regression=# select '1 DAY'::interval;
 interval 
--
 1 day
(1 row)

and in general, there is simply not any other part of Postgres or SQL
that you can point to that supports the idea that case sensitivity
for keywords is expected behavior.  So I think we'd flat-out reject
any such datatype.

(Hmm, I wonder what Tom Dunstan's enum patch does about case
sensitivity...)

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [PATCHES] Load distributed checkpoint patch

2006-12-19 Thread ITAGAKI Takahiro
Bruce Momjian [EMAIL PROTECTED] wrote:

 OK, if I understand correctly, instead of doing a buffer scan, write(),
 and fsync(), and recyle the WAL files at checkpoint time, you delay the
 scan/write part with the some delay.

Exactly. The actual behavior of checkpoints is not changed by the patch.
Compared with existing checkpoints, it just takes longer in the scan/write part.

 Do you use the same delay autovacuum uses?

What do you mean by 'the same delay'? Autovacuum does VACUUM, not CHECKPOINT.
If you mean cost-based delay, I think we cannot use it here. It's hard to
estimate how much a checkpoint would be delayed by cost-based sleeping, but
we should finish an asynchronous checkpoint by the start of the next
checkpoint. So I gave priority to punctuality over load smoothing.

 As I remember, often the checkpoint is caused because
 we are using the last WAL file.  Doesn't this delay the creation of new
 WAL files by renaming the old ones to higher numbers (we can't rename
 them until the checkpoint is complete)?

A checkpoint must be finished by the time the next one starts, so we need WAL
files covering two checkpoint cycles. That is the same as now.
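"Punctuality over load smoothing" can be sketched roughly as follows — a hypothetical illustration, not the patch's actual code (`paced_checkpoint` and its parameters are invented here): pace the buffer writes evenly so the last write lands at the deadline, rather than sleeping a cost-based amount that might overshoot the start of the next checkpoint.

```python
import time

def paced_checkpoint(dirty_buffers, write_fn, deadline,
                     now=time.monotonic, sleep=time.sleep):
    """Write all dirty buffers, spreading the work so the last
    write completes no later than `deadline` (a monotonic time,
    e.g. the start of the next checkpoint)."""
    n = len(dirty_buffers)
    start = now()
    for i, buf in enumerate(dirty_buffers, start=1):
        write_fn(buf)
        # After writing i of n buffers, we should be i/n of the
        # way through the interval; sleep only if we are ahead.
        target = start + (deadline - start) * (i / n)
        delay = target - now()
        if delay > 0:
            sleep(delay)
```

Under this scheme the sleep amount adapts to how fast the writes actually go, so the checkpoint always finishes on time even if individual writes are slow.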

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Bruce Momjian
Joshua D. Drake wrote:
   I remember the president of Great Bridge
  saying that the company needs the community, but not vice versa --- if
  the company dies, the community keeps going (as it did after Great
  Bridge, without a hiccup), but if the community dies, the company dies
  too. 
 
 I 95% agree here. If EDB or CMD were to go down in flames, it could hurt
 the community quite a bit. It isn't that the community wouldn't go on,
 but that it would definitely negatively affect the productivity of the
 community for n amount of time.

The assumption is that other companies would jump in to support the paid
individuals affected by the company closings.  If that didn't happen,
there would be an effect, yes.

   Also, the community is developing the software at a rate that
  almost no other company can match, so again the company is kind of in
  tow if they are working with the community process.  For example, the
  community is not submitting patches for the company to approve.
 
 Agreed.
 
  
  I do think I need to add a more generous outreach to companies in the
  article, explaining how valuable they are to the community, so let me
  work on that and I will post when I have an update.
 
 Cool, that is what I was really looking for.

Yes, the original was pretty negative, and the end of it was very
negative, I felt.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Companies Contributing to Open Source

2006-12-19 Thread Joshua D. Drake

 In one fails swoop:

Sorry, a beer and email just don't mix. The above should be one fell
swoop.

 
 Devrim, Alvaro, Darcy, Heikki, Bruce, Simon, Greg, Dave, Marc and I are
 all suddenly looking for employment...
 
 You don't think there would be an issue that could cause some grief to
 the community? Is it surmountable? Of course. But that isn't the point.
 The point is that it is not painless. 
 
 
 Sincerely,
 
 Joshua D. Drake
 
 
  
  regards, tom lane
  
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate





