Re: [HACKERS] [COMMITTERS] pgsql/src/interfaces/ecpg ChangeLog preproc/ec ...

2002-05-21 Thread Michael Meskes

On Mon, May 20, 2002 at 10:31:57AM -0400, Tom Lane wrote:
> Yes.  Maybe we should contact the Bison developers and lobby for an
> increase in the standard value.  I don't mind saying "you must use
> Bison 1.36 or newer to rebuild the Postgres grammar" ... but having to
> say "you must use a nonstandardly patched Bison" would really suck :-(

I fully agree.

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Future plans

2002-05-21 Thread Michael Meskes

On Tue, May 21, 2002 at 12:35:20AM +0500, Hannu Krosing wrote:
> > - recursive views (you know, I wanted to implement this when I started
> >   my work on PostgreSQL, but never found the time)
> 
> A good start would be to make the parser recognize the full SQL99 syntax
> for it. It's quite big - see the attached GIF I generated from the grammar
> extracted from the standard:

Well, the parser seems to be the easier part. :-)

Michael

-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!




Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 21 May 2002 01:00
> To: Dave Page
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] More schema queries 
> 
> 
> "Dave Page" <[EMAIL PROTECTED]> writes:
> > I'm confused. Does the standalone backend not deal with 
> schemas fully 
> > and is silently failing 'cos there's nothing technically wrong with 
> > the pg_catalog.viewname syntax?
> 
> The standalone backend does schemas just fine.  What is 
> supposed to ensure that the views get created in pg_catalog 
> is the bit in initdb:
> 
> PGSQL_OPT="$PGSQL_OPT -O --search_path=pg_catalog"

That said, I'm still surprised that prepending 'pg_catalog.' to the view
names didn't force them into pg_catalog.

> The --search_path parameter should do the equivalent of
>   SET search_path = pg_catalog;
> but apparently it's not working for you; if it weren't there 
> then the views would indeed get created in public.
> 
> Any idea why it's not working?

Just to be doubly sure, I've installed a fresh Cygwin, and confirmed
that none of Jason's prepackaged 7.2 got in there by mistake. Built and
installed from CVS tip as of about 9:30AM BST 21/5/02. The problem still
remains. 

I've played with initdb, and confirmed that 

$PGSQL_OPT = -F -D/data -o /dev/null -O --search_path=pg_catalog 

immediately prior to the views being created. I then tried running a
single user backend in exactly the same way initdb does (bar the
redirection of the output), and checking the search path:


PC9 $ postgres -F -D/data -O --search_path=pg_catalog template1
LOG:  database system was shut down at 2002-05-21 10:44:50
LOG:  checkpoint record is at 0/49D6B0
LOG:  redo record is at 0/49D6B0; undo record is at 0/0; shutdown TRUE
LOG:  next transaction id: 103; next oid: 16570
LOG:  database system is ready

POSTGRES backend interactive interface
$Revision: 1.267 $ $Date: 2002/05/18 15:44:47 $

backend> select current_schemas();
blank
 1: current_schemas    (typeid = 1003, len = -1, typmod = -1, byval = f)

 1: current_schemas = "{public}"    (typeid = 1003, len = -1, typmod = -1, byval = f)



Which makes sense because, as you said previously, pg_catalog is implicitly
included at the beginning of the search path anyway. It then struck me
that, as that is the case, does the --search_path=pg_catalog get ignored?
I tested this by creating a view, and then examining its
pg_class.relnamespace:


backend> create view testview as select * from pg_class;
backend> select relnamespace from pg_class where relname = 'testview';
blank
 1: relnamespace    (typeid = 26, len = 4, typmod = -1, byval = t)

 1: relnamespace = "2200"   (typeid = 26, len = 4, typmod = -1, byval = t)



2200 is the oid of 'public', so it seems to me that the
--search_path=pg_catalog is being ignored by the standalone backend for
some reason. I then tried explicitly naming the schema:


backend> create view pg_catalog.testview2 as select * from pg_class;
backend> select relnamespace from pg_class where relname = 'testview2';
blank
 1: relnamespace    (typeid = 26, len = 4, typmod = -1, byval = t)

 1: relnamespace = "11" (typeid = 26, len = 4, typmod = -1, byval = t)



This appears to work fine, so I hacked initdb to prepend 'pg_catalog.'
to the view names. Cleared $PGDATA, confirmed I was running the correct
initdb, and still the views are in public - Arrrggghhh!

Any suggestions?

Regards, Dave.





Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Manfred Koizar

On Thu, 02 May 2002 21:10:40 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>Manfred Koizar <[EMAIL PROTECTED]> writes:
>> Is saving 4 bytes per tuple a "darn good reason"?
>
>[...] Now if
>we could get rid of 8 bytes in the header, I'd get excited ;-)

Tom,

what about WITHOUT OIDS?  I know dropping the OID from some tables and
keeping it for others is not trivial, because t_oid is the _first_
field of HeapTupleHeaderData.  I'm vaguely considering a few possible
implementations and will invest more work in a detailed proposal, if
it's wanted.

Servus
 Manfred




Re: [HACKERS] Future plans

2002-05-21 Thread Hannu Krosing

On Tue, 2002-05-21 at 10:18, Michael Meskes wrote:
> On Tue, May 21, 2002 at 12:35:20AM +0500, Hannu Krosing wrote:
> > > - recursive views (you know, I wanted to implement this when I started
> > >   my work on PostgreSQL, but never found the time)
> > 
> > A good start would be to make the parser recognize the full SQL99 syntax
> > for it. It's quite big - see the attached GIF I generated from the grammar
> > extracted from the standard:
> 
> Well, the parser seems to be the easier part. :-)

Sure.

My point was that we should put in the _full_ syntax in one shot and
then we could implement it in smaller pieces.

--
Hannu






Re: [HACKERS] More schema queries

2002-05-21 Thread Tom Lane

"Dave Page" <[EMAIL PROTECTED]> writes:
> This appears to work fine, so I hacked initdb to prepend the
> 'pg_catalog.' to the viewnames. Cleared $PGDATA, confirmed I was running
> the correct initdb, and still, the views are in public - Arrrggghhh!

Weird.  Maybe there is more than one bug involved, because adding
pg_catalog. to the create view should definitely have worked.
Will try to duplicate that here.

> Any suggestions?

Try changing the PGOPTS setting to use

-c search_path=pg_catalog 

That shouldn't make any difference but ...

Also, you could try setting a breakpoint at RangeVarGetCreationNamespace
(in backend/catalog/namespace.c) to see what it thinks it's doing and
what's in namespace_search_path at the time.

regards, tom lane




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Thomas Lockhart

...
> Our implementation is broken, then.  Thomas, is this fixable for a 7.2.x
> release, or something for 7.3?

"Our implementation is broken, then" is really not a conclusion to be
reached. The de facto behavior of mktime() on all world-class
implementations is to support pre-1970 times. This has been true forever
afaik, certainly far longer than PostgreSQL (or Postgres) has been in
existance.

Any standard which chooses to ignore pre-1970 dates is fundamentally
broken imho, and I'm really ticked off that the glibc folks have so
glibly introduced a change of this nature and magnitude without further
discussion.

 - Thomas




Re: [HACKERS] [GENERAL] Psql 7.2.1 Regress tests failed on RedHat7.3

2002-05-21 Thread Thomas Lockhart

> > They are just wanting to be standard.  I know this; I just can't say how I
> > know this.  But the link to the ISO definition is
> > http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14
...
> FWIW, here's what I see in the C99 spec pdf for mktime and the tm
> structure info.  I don't have C90 to compare to and I'm not sure that
> there's anywhere else to look, but  I assume that the change is
> over whether returning -1 from mktime is a "successful completion" of
> the function.
...
> 3 The mktime function returns the specified calendar time encoded as a
> value of type time_t. If the calendar time cannot be represented, the
> function returns the value (time_t)(-1).

Right. Both standards refer to what is defined, and neither specifies a
behavior for dates and times prior to 1970. "Undefined" means that the
standard chooses to not cover that territory.

In this case, one could fully conform to the standard by returning "-1"
from mktime() on error. That is *all* that the standard asks. One could
also look at tm_isdst to distinguish if this is a real error or whether
it happens to be a time 1 second before 1970-01-01 (-1 on error, 0 for
no DST, 1 for DST, if initialized to -1 before the call).

I'm not sure how to contact the glibc folks in a way which gives full
discussion to this issue. I'm certain that this recent change was done
in good faith but was arbitrary and capricious in formulation and
application. This fundamentally damages the capabilities of Linux-based
systems everywhere and is not in the best interests of anyone or
anything other than its competitors.

 - Thomas




Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Tom Lane

Manfred Koizar <[EMAIL PROTECTED]> writes:
> what about WITHOUT OIDS?  I know dropping the OID from some tables and
> keeping it for others is not trivial, because t_oid is the _first_
> field of HeapTupleHeaderData.  I'm vaguely considering a few possible
> implementations and will invest more work in a detailed proposal, if
> it's wanted.

Yeah, I had been toying with the notion of treating OID like a user
field --- ie, it'd be present in the variable-length part of the record
if at all.  It'd be a bit tricky to find all the places that would need
to change, but I think there are not all that many.

As usual, the major objection to any such change is losing the chance
of doing pg_upgrade.  But we didn't have pg_upgrade during the 7.2
cycle either.  If we put together several such changes and did them
all at once, the benefit might be enough to overcome that complaint.

regards, tom lane




Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 21 May 2002 14:17
> To: Dave Page
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] More schema queries 
> 
> 
> 
> Try changing the PGOPTS setting to use
> 
>   -c search_path=pg_catalog 
> 
> That shouldn't make any difference but ...

Shouldn't but does :-). Checked & double-checked, that works perfectly.

> Also, you could try setting a breakpoint at 
> RangeVarGetCreationNamespace (in backend/catalog/namespace.c) 
> to see what it thinks it's doing and what's in 
> namespace_search_path at the time.

I'm going to try to do this regardless of the fact it now works - this
will be my first play with gdb so it might take me a while but would
probably be a useful learning experience. I'll let you know what I find.

Regards, Dave.




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Manuel Sugawara

Lamar Owen <[EMAIL PROTECTED]> writes:

> On Monday 20 May 2002 08:08 pm, Manuel Sugawara wrote:
> > > Where would we go to ferret out the source of this bug?  More to the
> > > point: we need a test case in C that could expose this as a glibc
> > > bug.
> 
> > Seems like mktime(3) is having problems with dates before the
> > epoch. Attached is the a program to test this. The glibc source is now
> > downloading I will try to hunt down this bug but not until the next
> > week.
> 
> It's not a bug.  At least not according to the ISO C standard.  See
> http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap04.html#tag_04_14
> for the definition of 'Seconds Since the Epoch', then
> cross-reference to the man page of mktime.

I see. This behavior is consistent with the requirement that mktime
return -1 on error, but it then breaks compatibility with every other
Unix implementation that I know of.

Any workaround other than downgrading or installing FreeBSD?

Regards,
Manuel.




Re: [HACKERS] Unbounded (Possibly) Database Size Increase - Toasting

2002-05-21 Thread Tom Lane

<[EMAIL PROTECTED]> writes:
> The toast table gets about 90 percent of the growth, followed by the toast 
> index at about 9 percent. The actual table + primary key stay at about 2M each.

Odd.  I wonder whether you are looking at an unintended behavior of the
free space map's thresholding mechanism.  The toast table will generally
have large tuples of consistent size (about 2K each).  This will cause
the FSM threshold for whether to remember a page to approach 2K, which
probably will mean that we forget about pages that could still hold one
toast tuple.  That might be enough to cause the growth.  It may be
worth playing around with the details of the threshold-setting policy.

In particular, I'd suggest altering the code in GetPageWithFreeSpace
and RecordAndGetPageWithFreeSpace (in
src/backend/storage/freespace/freespace.c) to make the threshold
converge towards something less than the average request size, perhaps
average/2, which you could do with

-   cur_avg += ((int) spaceNeeded - cur_avg) / 32;
+   cur_avg += (((int) spaceNeeded)/2 - cur_avg) / 32;

Possibly the initial threshold set in create_fsm_rel also needs to be
smaller than it is.  Not sure about that though.

Let me know how that affects your results ...

regards, tom lane




Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -----Original Message-----
> From: Dave Page 
> Sent: 21 May 2002 14:39
> To: 'Tom Lane'
> Cc: [EMAIL PROTECTED]
> Subject: RE: [HACKERS] More schema queries 
> 
> 
> > Also, you could try setting a breakpoint at
> > RangeVarGetCreationNamespace (in backend/catalog/namespace.c) 
> > to see what it thinks it's doing and what's in 
> > namespace_search_path at the time.
> 
> I'm going to try to do this regardless of the fact it now 
> works - this will be my first play with gdb so it might take 
> me a while but would probably be a useful learning 
> experience. I'll let you know what I find.
> 

Sorry Tom, I know this isn't strictly a PostgreSQL problem, but despite
much time on Google I'm stuck with gdb. I can attach it to the
standalone backend at the relevant point in initdb, and have got it to
break in RangeVarGetCreationNamespace. I can also see the call stack &
registers etc. 

What I cannot do is get it to show me anything useful. I only seem to be
able to step through the assembly code (is it possible to load the C
source?), and more importantly, adding a watch (or print-ing)
namespace_search_path gives: 167839776. Attempting to watch or print
namespaceId gives 'Error: No symbol "namespaceId" in current context.'.

I'd appreciate any pointers you can give me...

Regards, Dave.




Re: [HACKERS] More schema queries

2002-05-21 Thread Tom Lane

> What I cannot do is get it to show me anything useful.

It sounds like gdb does not have access to debugging symbol tables.

Firstly, did you compile with -g (configure --enable-debug)?

Secondly, did you point gdb at the postgres executable when you
started it?

regards, tom lane




Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Manfred Koizar

On Tue, 21 May 2002 09:57:32 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>Manfred Koizar <[EMAIL PROTECTED]> writes:
>> what about WITHOUT OIDS?  I know dropping the OID from some tables and
>> keeping it for others is not trivial, because t_oid is the _first_
>> field of HeapTupleHeaderData.  I'm vaguely considering a few possible
>> implementations and will invest more work in a detailed proposal, if
>> it's wanted.
>
>Yeah, I had been toying with the notion of treating OID like a user
>field --- ie, it'd be present in the variable-length part of the record
>if at all.  It'd be a bit tricky to find all the places that would need
>to change, but I think there are not all that many.

That was one of the possible solutions I thought of, unfortunately the
one I'm most afraid of.  Not because I think it's not the cleanest way,
but because I don't (yet) feel comfortable enough with the code to rip
out OIDs from system tables.  However, if you tell me it's feasible and
give me some hints where to start, I'll give it a try...

Other possible implementations would leave the oid in the tuple
header:

. typedef two structs HeapTupleHeaderDataWithOid and
HeapTupleHeaderDataWithoutOid, wrap access to *all* HeapTupleHeader
fields in accessor macros/functions, give these accessors enough
information to know which variant to use.

. Decouple on-disk format from in-memory structures, use
HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract
header data to/from disk buffers.  Concurrency?

>As usual, the major objection to any such change is losing the chance
>of doing pg_upgrade.  But we didn't have pg_upgrade during the 7.2
>cycle either.

I thought it was quite common to need a pg_dump/restore when upgrading
between releases.  Or are you talking about those hackers, testers and
users(?) who are using a CVS version now?

Anyway, as long as our changes don't make heap tuples larger, it
should be possible to write a tool that converts version x data files
to version y data files.  I've done that before (not for PG though)
and I know it's a lot of work, but wouldn't it be great for the PG
marketing department ;-)

>If we put together several such changes [...]

I can't guarantee that;  my ideas come in drop by drop :-)
BTW, is there a 7.3 schedule?

Servus
 Manfred




[HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?

2002-05-21 Thread Joel Burton

Noticed that increasing NAMEDATALEN to 128 is still on the TODO.

Given that the addition of namespaces for 7.3 is going to require many
client utilities to be updated anyway, is this a reaonable time to bring
this increase into the standard distribution? It seems like it would be
minor pain whenever we do this, and 7.3 could be as good a time as any.

- J.

Joel BURTON | [EMAIL PROTECTED] | joelburton.com | aim: wjoelburton
Knowledge Management & Technology Consultant





Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Tom Lane

Manfred Koizar <[EMAIL PROTECTED]> writes:
> That was one of the possible solutions I thought of, unfortunately the
> one I'm most afraid of.  Not because I think it's not the cleanest
> way, but I don't (yet) feel comfortable enough with the code to rip
> out oids from system tables.

The system tables that have OIDs will certainly continue to have OIDs.

I suppose the messiest aspect of that solution would be changing all
the places that currently do "tuple->t_data->t_oid".  If OID is not at
a fixed offset in the tuple then it'll be necessary to change *all*
those places.  Ugh.  While certainly we should have been using accessor
macros for that, I'm not sure I want to try to change it.

> Other possible implementations would leave the oid in the tuple
> header:

> . typedef two structs HeapTupleHeaderDataWithOid and
> HeapTupleHeaderDataWithoutOid, wrap access to *all* HeapTupleHeader
> fields in accessor macros/functions, give these accessors enough
> information to know which variant to use.

If OID is made to be the last fixed-offset field, instead of the first,
then this approach would be fairly workable.  Actually I'd still use
just one struct definition, but do offsetof() calculations to decide
where the null-bitmap starts.

> Decouple on-disk format from in-memory structures, use
> HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract
> header data to/from disk buffers.  Concurrency?

Inefficient, and you'd have problems still with the changeable fields
(t_infomask etc).

>> As usual, the major objection to any such change is losing the chance
>> of doing pg_upgrade.  But we didn't have pg_upgrade during the 7.2
>> cycle either.

> I thought, it is quite common to need pg_dump/restore when upgrading
> between releases.

Yes, and we get loud complaints every time we require it...

> Anyway, as long as our changes don't make heap tuples larger, it
> should be possible to write a tool that converts version x data files
> to version y data files.  I've done that before (not for PG though)
> and I know it's a lot of work, but wouldn't it be great for the PG
> marketing department ;-)

I'd be afraid to use a conversion-in-place tool for this sort of thing.
If it crashes halfway through, what then?

regards, tom lane




Re: [HACKERS] [GENERAL] Psql 7.2.1 Regress tests failed on RedHat 7.3

2002-05-21 Thread Lamar Owen

On Tuesday 21 May 2002 09:23 am, Thomas Lockhart wrote:
> > While I don't agree with the standard, trying to be standard isn't really
> > a 'problem'.  Relying on a side-effect of a nonstandard call is the
> > problem.

> In my mind no one associated with glibc gets high marks for anything
> having to do with this change. It is unbelievably short sighted.

Oh, I most certainly agree with you on this Thomas.  The glibc people are just 
being adamant about it being 'standard.'  And I certainly didn't mean to step 
on your toes, either -- as I can tell this is a sore point for you.  I'm just 
trying to see what, if anything, we can do about it.

> > Can we pull in the BSD C library's mktime()?  OR otherwise utilize it to
> > fit this bill?

> Maybe, but probably forcing a *really* annoying code fork or patch to
> get the entry points to play nice with glibc. We'll also need to figure
> out how to manage the time zone database and how to keep it in sync.

> This is a seriously big problem, and we'll need to research what to do
> next. One possibility is to strip out all string time zone behavior and
> support only the SQL date/time standard, which specifies only numeric
> offsets and ignores real-world time zone usage and behaviors. Hmm, IBM
> contributed to that standard too, maybe the common thread is not a
> coincidence.

Well, the existing behavior, according to my first read of the code, is to 
assume UTC if the time_t is predicted to be out of range.  There is a macro 
for this, I see.  And the problem is that the out-of-range condition is 
happening at a different place.  I don't like this thought, but the most 
consistent, least-common-denominator tack would be to flag anything prior to 
the epoch as out-of-range, even if the underlying calls can handle negative 
time_t.  I don't like that one whit.  But I like inconsistent data even less.

> The new glibc behavior is really disappointing. Help and ideas are
> appreciated; reimplementing an entire date/time system may be a lot of
> work.

Well, it was good foresight on your part to put all the mktime stuff in the 
one place.  I'm going to go through it and see if I understand what I'm 
looking at first.

But I see a couple of possibilities that we can control:
1.) Have configure test for broken mktime and sub our own mktime in that case 
(if this is even possible -- while the various BSD's have mktime and friends, 
how difficult is it going to be to unshackle that from a BSD kernel 
underneath -- I've not looked at the code for OpenBSD's mktime (which I have 
on hand), but I guess I will take a look now);
2.) Rewrite our stuff to not depend on any mktime, and thus be more portable 
(perhaps?).

But, in any case, I didn't mean to step on your toes by any of my comments; I 
completely agree with you that glibc and the ISO C standard cited are daft in 
this.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 21 May 2002 16:33
> To: Dave Page
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] More schema queries 
> 
> 
> > What I cannot do is get it to show me anything useful.
> 
> It sounds like gdb does not have access to debugging symbol tables.
> 
> Firstly, did you compile with -g (configure --enable-debug)?

Yes, but when I read this I realised that I forgot to 'make clean'
before rebuilding.  Having done that, I then found that gdb eats about
100Mb of memory and 50% of CPU without actually displaying itself until
killed 10 minutes later.  I tried this twice - I guess that gdb under
Cygwin has trouble with large executables, as my machine should handle it
(PIII-M 1.13GHz, 512Mb).

> Secondly, did you point gdb at the postgres executable when 
> you started it?

Yes, I added a 60 second wait to the appropriate part of initdb (-W 60).
I could also get a stack trace which showed that I had broken in
RangeVarGetCreationNamespace as intended.

Regards, Dave.




Re: [HACKERS] [GENERAL] Psql 7.2.1 Regress tests failed on RedHat 7.3

2002-05-21 Thread Thomas Lockhart

...
> But, in any case, I didn't mean to step on your toes by any of my comments; I
> completely agree with you that glibc and the ISO C standard cited are daft in
> this.

No complaints from my toes; I was just ventilating about stupid
breakage.

 - Thomas




[HACKERS] Killing dead index tuples before they get vacuumed

2002-05-21 Thread Tom Lane

I'm planning to tackle the problem of killing index tuples for dead rows
during normal processing (ie, before VACUUM).  We've noticed repeatedly
that visits to dead heap rows consume a lot of time during indexscans
of heavily-updated tables.  This problem has been discussed before,
so the general outline of the solution is apparent, but several design
decisions remain to be made.  Here are my present thoughts:

1. The basic idea is for index_getnext, when it retrieves a tuple that
turns out to be invisible to the current transaction, to test whether
the tuple is dead to *all* transactions; if so, tell the index AM to mark
that index tuple as killed.  Subsequently the index tuple will be ignored
until it's finally vacuumed.  (We cannot try to remove the index tuple
immediately, because of concurrency issues; but not returning it out of
the index AM during an indexscan should largely solve the performance
problem.)  Under normal circumstances the time window between "dead to
my transaction" and "dead to all transactions" should not be very large,
so this approach should not cause very many extra tuple-visibility tests
to be performed.

2. The second visibility test is the same as VACUUM's: is the tuple
committed dead (or never good) and older than any running transaction's
xmin?  To call HeapTupleSatisfiesVacuum we need an idea of the global
xmin, but we surely don't want index_getnext calling GetOldestXmin()
every time it does this.  (Quite aside from the speed penalty, I'm worried
about possible deadlocks due to having to grab SInvalLock there.)  Instead
I propose that we modify GetSnapshotData() to compute the current global
xmin as a byproduct of its existing computation (which it can do almost
for free) and stash that in a global variable someplace.  index_getnext
can then use the global variable to call HeapTupleSatisfiesVacuum.  This
will effectively mean that we do index-tuple killing on the basis of the
global xmin as it stood when we started the current transaction.  In some
cases that might be a little out of date, but using an old xmin cannot
cause incorrect behavior; at worst an index entry will survive a little
longer than it really needs to.

3. What should the API to the index AMs look like?  I propose adding
two fields to the IndexScanDesc data structure:

bool    kill_prior_tuple;      /* true if previously returned tuple is dead */
bool    ignore_killed_tuples;  /* true to not return killed entries */

kill_prior_tuple is always set false during RelationGetIndexScan and at
the start of index_getnext.  It's set true when index_getnext detects
a dead tuple and loops around to call the index AM again.  So the index
AM may interpret it as "kill the tuple you last returned, ie, the one
indicated by currentItemData".  Performing this action as part of
amgetnext minimizes the extra overhead needed to kill a tuple --- we don't
need an extra cycle of re-locking the current index page and re-finding
our place.

ignore_killed_tuples will be set true in RelationGetIndexScan, but could
be set false by callers that want to see the killed index tuples.
(Offhand I think only VACUUM would want to do that.)

Within the index AMs, both kill_prior_tuple and ignore_killed_tuples would
be examined only by the topmost amgetnext routine.  A "killed" entry
behaves normally with respect to all internal operations of the index AM;
we just don't return it to callers when ignore_killed_tuples is true.
This will minimize the risk of introducing bugs into the index AMs.
As long as we can loop around for the next index tuple before we've
released page locks inside the AM, we should get most of the possible
performance benefit with just a localized change.

4. How exactly should a killed index tuple be marked on-disk?  While there
is one free bit available in IndexTupleData.t_info, I would prefer to use
that bit to expand the index tuple size field to 14 bits instead of 13.
(This would allow btree index entries to be up to 10K when BLCKSZ is 32K,
rather than being hard-limited to 8K.)  What I am thinking of doing is
using the LP_DELETE bit in ItemIdData.lp_flags --- this appears to be
unused for index tuples.  (I'm not sure it's ever set for heap tuples
either, actually, but it definitely looks free for index tuples.)

Whichever bit we use, the index AM can simply set it and mark the buffer
dirty with SetBufferCommitInfoNeedsSave.  We do not need to WAL-log the
action, just as we do not WAL-log marking heap tuple commit status bits,
because the action could be done over by someone else if it were lost.

Comments?  Anyone see any flaws or better ideas?

regards, tom lane




Re: [HACKERS] Unbounded (Possibly) Database Size Increase - Toasting

2002-05-21 Thread Manfred Koizar

On Tue, 21 May 2002 11:10:04 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>Odd.  I wonder whether you are looking at an unintended behavior of the
>free space map's thresholding mechanism.  The toast table will generally
>have large tuples of consistent size (about 2K each).

So we have 4 tuples per page?

>This will cause
>the FSM threshold for whether to remember a page to approach 2K, which
>probably will mean that we forget about pages that could still hold one
>toast tuple.

I thought I was able to follow you up to here.

>That might be enough to cause the growth.

Here I'm lost.  The effect you mention explains growth up to a state
where each toast table page holds 3 instead of 4 tuples (1.33 *
initial size).  Now with each UPDATE we get pages with significantly
more free space than 2K.  Even if a few thousand pages are added
before the next VACUUM, we still reach a stable size.  Of course this
only holds if there are enough FSM slots, which Mark claims to have.

So IMHO there have to be additional reasons causing *unbounded*
growth.  Or am I missing something?

Just my 0.02.
Servus
 Manfred

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Trond Eivind Glomsrød

On Tue, 21 May 2002, Lamar Owen wrote:

> On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:
> > I see. This behavior is consistent with the fact that mktime is
> > supposed to return -1 on error, but then is broken in every other Unix
> > implementation that I know.
> 
> > Any other workaround than downgrade or install FreeBSD?
> 
> Complain to Red Hat.  Loudly. However, as this is a glibc change, other 
> distributors are very likely to fold in this change sooner rather than 
> later. 

Relying on nonstandardized/nondocumented behaviour is a program bug, not a 
glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at 
it, but glibc is not the component with a problem.

-- 
Trond Eivind Glomsrød
Red Hat, Inc.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Lamar Owen

On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:
> I see. This behavior is consistent with the fact that mktime is
> supposed to return -1 on error, but then is broken in every other Unix
> implementation that I know.

> Any other workaround than downgrade or install FreeBSD?

Complain to Red Hat.  Loudly. However, as this is a glibc change, other 
distributors are very likely to fold in this change sooner rather than later.

Try using timestamp without timezone?
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Unbounded (Possibly) Database Size Increase - Toasting

2002-05-21 Thread Tom Lane

Manfred Koizar <[EMAIL PROTECTED]> writes:
> Here I'm lost.  The effect you mention explains growth up to a state
> where each toast table page holds 3 instead of 4 tuples (1.33 *
> initial size).  Now with each UPDATE we get pages with significantly
> more free space than 2K.

Good point, it should still stabilize with at worst 33% overhead.  So
maybe I'm barking up the wrong tree.

Still, the FSM code is new in 7.2 and I'm quite prepared to believe that
the effect Mark is seeing indicates some problem in it.  Anyone care to
sit down and read through freespace.c?  It's pretty liberally commented.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] More schema queries

2002-05-21 Thread Tom Lane

"Dave Page" <[EMAIL PROTECTED]> writes:
> Yes, but when I read this I realised that I forgot to 'make clean'
> before rebuilding. Having done that I then found that gdb eats about
> 100Mb of memory and 50% of cpu without actually displaying itself until
> killed 10 minutes later. I tried this twice - I guess that gdb under
> cygwin has trouble with large exe's as my machine should handle it
> (PIII-M 1.13GHz, 512Mb).

That's annoying.  gdb is quite memory-hungry when dealing with big
programs, but as long as you're not running out of memory or swap it
should work.  AFAIK anyway.  I remember having to compile only parts
of a big program with debug support, years ago on a machine that was
pretty small and slow by current standards.

If you can't get gdb to work then another possibility is the low-tech
approach: add some debugging printf's to RangeVarGetCreationNamespace.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Manfred Koizar

On Tue, 21 May 2002 11:53:04 -0400, Tom Lane <[EMAIL PROTECTED]>
wrote:
>The system tables that have OIDs will certainly continue to have OIDs.

That's clear. I should have written: "... rip out oids from tuple
headers of system tables."

>Ugh.  While certainly we should have been using accessor
>macros for that, I'm not sure I want to try to change it.

I already did this for xmin, xmax, cmin, cmax, and xvac (see my patch
posted 2002-05-12).

>If OID is made to be the last fixed-offset field, instead of the first,

That would introduce some padding.

>then this approach would be fairly workable.  Actually I'd still use
>just one struct definition, but do offsetof() calculations to decide
>where the null-bitmap starts.

... and for calculating the tuple header size.

>> Decouple on-disk format from in-memory structures, use
>> HeapTupleHeaderPack() and HeapTupleHeaderUnpack() to store/extract
>> header data to/from disk buffers.  Concurrency?
>
>Inefficient,

Just to be sure: You mean the CPU cycles wasted by Pack() and
Unpack()?

>I'd be afraid to use a conversion-in-place tool for this sort of thing.

Me too.  No, not in place!  I thought of a filter reading an old
format data file, one page at a time, and writing a new format data
file.  This would work as long as the conversions don't cause page
overflow.

No comment on a planned 7.3 timeframe? :-(

Servus
 Manfred

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] getting oid of tuple in executePlan

2002-05-21 Thread Tom Lane

Dhruv Pilania <[EMAIL PROTECTED]> writes:
> Basically, I am making a small change in executePlan() function of
> executor/execMain.c. Right after a tupleslot is retrieved, I try to find
> out the oid of the tuple that has been retrieved.

The retrieved tuple doesn't have an OID, because it's not a raw pointer
to a row on disk: it's a computed tuple (the result of ExecProject).

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Per tuple overhead, cmin, cmax, OID

2002-05-21 Thread Tom Lane

Manfred Koizar <[EMAIL PROTECTED]> writes:
> No comment on a planned 7.3 timeframe? :-(

I think we are planning to go beta in late summer (end of August, say).
Probably in July we'll start pressing people to finish up any major
development items, or admit that they won't happen for 7.3.  So we've
still got a couple months of full-tilt development mode before we
start to worry about tightening up for release.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?

2002-05-21 Thread Tom Lane

"Joel Burton" <[EMAIL PROTECTED]> writes:
> Noticed that increasing NAMEDATALEN to 128 is still on the TODO.
> Given that the addition of namespaces for 7.3 is going to require many
> client utilities to be updated anyway, is this a reaonable time to bring
> this increase into the standard distribution?

Right at the moment we are still trying to understand/eliminate the
performance penalty from increasing NAMEDATALEN.  At last report
someone had measured it as still being annoyingly large on pgbench.

I have not done any profiling but my best theory at the moment is that
the remaining cost must be in lookup key matching for in-memory hash
tables.  dynahash.c treats keys as fixed-length and always does a
memcmp(), which is going to get slower with bigger NAMEDATALEN, even
if the actually used names aren't getting longer.

The issue might be fixable by teaching this code to use strcmp() for
Name keys, but I haven't tried.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Lamar Owen

On Tuesday 21 May 2002 12:31 pm, Trond Eivind Glomsrød wrote:
> On Tue, 21 May 2002, Lamar Owen wrote:
> > However, as this is a glibc change, other
> > distributors are very likely to fold in this change sooner rather than
> > later.

> Relying on nonstandardized/nondocumented behaviour is a program bug, not a
> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at
> it, but glibc is not the component with a problem.

In your opinion.  Not everyone agrees with the new glibc behavior.  In fact, 
some here are rather livid about it.  But that's a digression.  The matter at 
hand is making it work right again.  

It seems to me someone should have thought about this before making the glibc 
change that had the potential for breaking a large application -- regardless 
of who is at fault, it's egg in Red Hat's face for not catching it sooner 
(and egg in my face as well, as I must admit that I of all people should have 
caught this earlier).  

Is the change in glibc behavior noted in the release notes?  The man page 
isn't changed either, from Red Hat 6.2, in fact.  The only change is adhering 
to the ISO definition of 'Seconds Since the Epoch' rather than the de facto 
industry-accepted definition that has been with us a very long time.  

Like PostgreSQL's refusal to upgrade in a sane manner, this should have 
received release notes billing, IMHO.  Then I, nor anyone else, could 
reasonably complain.

But this does show the need for the regression testing package, no? :-) And it 
also shows the danger in becoming too familiar with certain regression tests 
failing due to locale -- to the extent that a locale issue was the first 
thing thought of.  To the extent that I'm going to change my build process to 
include regression testing as a part of the build -- and any failure will 
abort the build. Maybe that will get my attention.  And anyone else's 
rebuilding from the source RPM.  SuSE already does this.  I wonder how 
they've handled this issue with 8.0? 

In any case, this isn't just a Red Hat problem, as it's going to cause 
problems with the use of timestamps on ANY glibc 2.2.5 dist.  That's more 
than Red Hat, by a large margin.

And I think that it is naive to think that PostgreSQL is the only program that 
has used mktime's behavior in the negative-time_t zone.  Other packages are 
likely impacted, to a greater or lesser extent.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Manuel Sugawara

Trond Eivind Glomsrød <[EMAIL PROTECTED]> writes:

> Relying on nonstandardized/nondocumented behaviour is a program bug,
> not a glibc bug.

The question is: how did this not show up before? ISTM that
someone is not doing his work correctly.

> PostgreSQL needs fixing.

Arguably, however, right now it is *a lot easier* to fix glibc, and it's
really needed for production systems using PostgreSQL and running on
Red Hat. But Red Hat users don't matter; the most important thing is
*strict* conformance to standards, right?

> Since we ship both, we're looking at it, but glibc is not the
  ^^^
The sad truth is: you only answered when the 'Complain to Red Hat'
statement appeared, not a single word before and not a single word
when the bug report was closed. I'm really disappointed.

The nice thing is: glibc is free software, and we don't have to wait for
or rely on some of the Red Hat staff members (thank god) for this to get
fixed, or rather for the standard to get extended again. The patch to
glibc is pretty straightforward and attached.

Regards,
Manuel.



--- glibc-2.2.5/time/mktime.c.org	Tue May 21 11:37:06 2002
+++ glibc-2.2.5/time/mktime.c	Tue May 21 11:39:28 2002
@@ -259,11 +259,13 @@
 
   int sec_requested = sec;
 
+#if 0
   /* Only years after 1970 are defined.
  If year is 69, it might still be representable due to
  timezone differnces.  */
   if (year < 69)
 return -1;
+#endif
 
 #if LEAP_SECONDS_POSSIBLE
   /* Handle out-of-range seconds specially,



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Trond Eivind Glomsrød

On 21 May 2002, Manuel Sugawara wrote:

> Trond Eivind Glomsrød <[EMAIL PROTECTED]> writes:
> 
> > Relying on nonstandardized/nondocumented behaviour is a program bug,
> > not a glibc bug.
> 
> The question is: how this thing didn't show up before? ISTM that
> someone is not doing his work correctly.

FWIW, I ran the regression tests some time ago (probably before that 
change to glibc). Since the tests are known 
to be broken wrt. time issues anyway (as well as currency, math and sorting), 
it's easy to overlook.

> > PostgreSQL needs fixing.
> 
> Arguably, however, right now is *a lot easier* to fix glibc, and it's
> really needed for production systems using postgreSQL and working on
> RedHat. 

You're not "fixing" glibc, you're reintroducing non-standardized, upstream 
removed behaviour. That's typically a very bad thing. If anything, it 
demonstrates the importance of not using or relying on 
unstandardized/undocumented behaviour (and given that time_t is pretty 
restrictive anyway, you'll need something else to keep dates. It doesn't 
even cover all living people, and definitely not historical dates).

> > Since we ship both, we're looking at it, but glibc is not the
>   ^^^
> The sad true is: you only answered when the 'Complain to Red Hat'
> statement appeared, not a single word before and not a single word
> when the bug report were closed. I'm really disappointed.

The bug wasn't open for long, and was closed by someone else.

> The nice thing is: glibc is free software 

Also, notice that this was where the fix came from: The upstream 
maintainers (some of whom work for us, others don't).

-- 
Trond Eivind Glomsrød
Red Hat, Inc.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 21 May 2002 20:09
> To: Dave Page
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] More schema queries 
>
> 
> I guess your version of getopt() won't cooperate with -- 
> switches. I've committed this change in CVS.

Thanks.

> 
> I'm still interested in why explicitly saying "create view 
> pg_catalog.foo" didn't work ...

I've just been playing with this as you suggested, and using an initdb
with both 'create view foo' and 'create view pg_catalog.bar', with the
-- style switch I get (for both types of view): 

namespace_search_path   = $user,public
newRelation->schemaname = null
namespaceId = 2200 (public)

So I guess the problem is a combination of the getopt() that we've
already found, and schemaname being null in the newRelation structure. 

Using the -c style switch in PGSQL_OPTS gives namespace_search_path =
pg_catalog as expected.

I am interested in learning more about this so any pointers you might
offer would be useful (I seriously doubt I'd find the fault myself
though) but I do understand that you probably have better things to do
than help me begin to understand the internals so I won't be overly
offended if you don't have time :-)

Cheers, Dave.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] 2nd cut at SSL documentation

2002-05-21 Thread Bear Giles

A second cut at SSL documentation



SSL Support in PostgreSQL
=

Who needs it?
=

The sites that require SSL fall into one (or more) of several broad
categories.

*) They have insecure networks. 

   Examples of insecure networks are anyone in a "corporate hotel,"
   any network with 802.11b wireless access points (WAP) (in 2002,
   this protocol has many well-known security weaknesses and even
   'gold' connections can be broken within 8 hours), or anyone 
   accessing their database over the internet.

   These sites need a Virtual Private Network (VPN), and either
   SSH tunnels or direct SSL connections can be used.

*) They are storing extremely sensitive information.

   An example of extremely sensitive information is logs from
   network intrusion detection systems.  This information *must*
   be fully encrypted between front- and back-end since an attacker
   is presumably sniffing all traffic within the VPN, and if they
   learn that you know what they are doing they may attempt to
   cover their tracks with a quick 'rm -rf /' and 'dropdb'

   In the extreme case, the contents of the database itself may
   be encrypted with either the crypt package (which provides
   symmetrical encryption of the records) or the PKIX package
   (which provides public-key encryption of the records).

*) They are storing information which is considered confidential
   by custom, law or regulation.

   This includes all records held by your doctor, lawyer, accountant,
   etc.  In these cases, the motivation for using encryption is not
   a conscious evaluation of risk, but the fear of liability for 
   'failure to perform due diligence' if encryption is available but
   unused and an attacker gains unauthorized access to the harm of
   others.

*) They have 'road warriors.'

   This includes all sites where people need to have direct access
   to the database (not through a proxy such as a secure web page)
   from changing remote addresses.  Client certificates provide a
   clean way to grant this access without opening up the database
   to the world.

Who does not need it?
-

It's at least as important to know who does not need SSL as it
is to know who does.  Sites that do not need SSL fall into several
broad categories.

*) Access is limited to the Unix socket.

*) Access is limited to a physically secure network.

   "Physically secure" networks are common in the clusters and
   colocation sites - all database traffic is restricted to dedicated
   NIC cards and hubs, and all servers and cabling are maintained in
   locked cabinets.


Using SSH/OpenSSH as a Virtual Private Network (VPN)


SSH and OpenSSH can be used to construct a Virtual Private Network
(VPN) to provide confidentiality of PostgreSQL communications.  
These tunnels are widely available and fairly well understood, but 
do not provide any application-level authentication information.

To set up a SSH/OpenSSH tunnel, a shell account for each
user should be set up on the database server.  It is acceptable
for the shell program to be bogus (e.g., /bin/false), if the
tunnel is set up so as to avoid launching a remote shell.

On each client system the $HOME/.ssh/config file should contain
an additional line similar to

 LocalForward  psql.example.com:5432

(replacing psql.example.com with the name of your database server).
By putting this line in the configuration file, instead of specifying
it on the command line, the tunnel will be created whenever a 
connection is made to the remote system.

The psql(1) client (or any client) should be wrapped with a script
that establishes an SSH tunnel when the program is launched:

  #!/bin/sh
  HOST=psql.example.com
  IDENTITY=$HOME/.ssh/identity.psql
  /usr/bin/ssh -1 -i $IDENTITY -n $HOST 'sleep 60' & \
/usr/bin/psql -h $HOST -p  $1

Alternatively, the system could run a daemon that establishes and maintains
the tunnel.  This is preferable when multiple users need to establish
similar tunnels to the same remote site.

Unfortunately, there are many potential drawbacks to SSH tunnels:

*) the SSH implementation or protocol may be flawed.  Serious problems
   are discovered about once every 18 to 24 months.

*) the systems may be misconfigured by accident.

*) the database server must provide shell accounts for all users
   needing access.  This can be a chore to maintain, especially if
   all other user access should be denied.

*) neither the front- nor back-end can determine the level of
   encryption provided by the SSH tunnel - or even whether an
   SSH tunnel is in use.  This prevents security-aware clients
   from refusing any connection with unacceptably weak encryption.

*) neither the front- nor back-end can get any authentication
   information pertaining to the SSH tunnel.

Bottom line: if you just need a VPN, SSH tunnels are a good solution.
But if you explicitly need a secure conn

Re: [HACKERS] More schema queries

2002-05-21 Thread Tom Lane

"Dave Page" <[EMAIL PROTECTED]> writes:
>> Try changing the PGOPTS setting to use
>> -c search_path=pg_catalog 
>> That shouldn't make any difference but ...

> Shouldn't but does :-). Checked & double-checked, that works perfectly.

I guess your version of getopt() won't cooperate with -- switches.
I've committed this change in CVS.

I'm still interested in why explicitly saying "create view pg_catalog.foo"
didn't work ...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?

2002-05-21 Thread Neil Conway

On Tue, 21 May 2002 11:41:26 -0400
"Joel Burton" <[EMAIL PROTECTED]> wrote:
> Noticed that increasing NAMEDATALEN to 128 is still on the TODO.

The last benchmarks I saw indicate that there's still a significant
performance hit when increasing NAMEDATALEN, whether to 64 or 128.

Given that only a small percentage of PostgreSQL users need long
identifiers, and *everyone* would suffer the performance hit, I'd
rather that we not touch NAMEDATALEN until more work has been
done on attempting to reduce the performance penalty.

Until then, the people who absolutely, positively must have long
identifiers can just raise NAMEDATALEN themselves.

Cheers,

Neil

-- 
Neil Conway <[EMAIL PROTECTED]>
PGP Key ID: DB3C29FC

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Lamar Owen

On Tuesday 21 May 2002 03:09 pm, Trond Eivind Glomsrød wrote:
> FWIW, I ran the regressions tests some time ago(probably before that
> change to glibc) . Since the tests are known
> to be broken wrt. time issues anyway (as well as currency, math and
> sorting), it's easy to overlook.

The time tests have never broken in this manner before on Red Hat.  When the 
original regression failure report was posted, I saw right away that this was 
not the run of the mill locale issue -- this was a real problem.  Regression 
testing must become a regularly scheduled activity, methinks.  In the RPM 
build process, we can control the locale to the extent that the tests will 
pass (except on DST days) reliably.  I am going to implement this for my next 
RPM set.  Along with a patch to this problem -- we _can_ patch around this, I 
believe, but it's not likely going to be an easy one.

We have gotten blind to the regular locale-induced failures -- this is not a 
good thing.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] More schema queries

2002-05-21 Thread Tom Lane

"Dave Page" <[EMAIL PROTECTED]> writes:
>> I'm still interested in why explicitly saying "create view 
>> pg_catalog.foo" didn't work ...

> I've just been playing with this as you suggested, and using an initdb
> with both 'create view foo' and 'create view pg_catalog.bar', with the
> -- style switch I get (for both types of view): 

> namespace_search_path   = $user,public
> newRelation->schemaname = null
> namespaceId = 2200 (public)

> So I guess the problem is a combination of the getopt() that we've
> already found, and schemaname being null in the newRelation structure. 

Given that getopt wasn't working, I'd expect namespace_search_path to be
that, and since there won't be any $user view at initdb time, public
should be the default creation target.  For "create view foo",
newRelation->schemaname *should* be null and thus public would be
selected.  But if you say "create view pg_catalog.foo" then
newRelation->schemaname should be "pg_catalog".  Can you trace it back a
little further and try to see why it's not?  It works fine here AFAICT,
so I'm wondering about portability problems ...

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Tom Lane

Trond Eivind Glomsrød <[EMAIL PROTECTED]> writes:
> Relying on nonstandardized/nondocumented behaviour is a program bug, not a 
> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at 
> it, but glibc is not the component with a problem.

A library that can no longer cope with dates before 1970 is NOT my idea
of a component without a problem.  We will be looking at ways to get
around glibc's breakage at the application level, since we have little
alternative other than to declare Linux an unsupported platform;
but it's still glibc (and the ISO spec:-() that are broken.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] More schema queries

2002-05-21 Thread Dave Page



> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]] 
> Sent: 21 May 2002 20:31
> To: Dave Page
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] More schema queries 
> 
> Can you 
> trace it back a
> little further and try to see why it's not?  It works fine 
> here AFAICT, so I'm wondering about portability problems ...

This week just gets weirder. I haven't a clue what I overlooked, but
there must have been something - I put initdb back with the -- switch &
pg_catalog. prefixes. Ran it, same problem as expected.

I then added various printf's right back to DefineRelation (iirc), ran
initdb again and _could_ see the schema name in every function, and, the
views were created in pg_catalog!!

Took all the printf's back out, and it still works as expected.

Oh well :-)

Thanks for your help with this Tom, if nothing else, at least I've
learnt a fair bit.

Regards, Dave.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Unbounded (Possibly) Database Size Increase - Toasting

2002-05-21 Thread Mark kirkwood

On Wed, 2002-05-22 at 03:10, Tom Lane wrote:
> (snippage)  That might be enough to cause the growth.  It may be
> worth playing around with the details of the threshold-setting policy.
> 
(snippage)
> Possibly the initial threshold set in create_fsm_rel also needs to be
> smaller than it is.  Not sure about that though.
> 
> Let me know how that affects your results ...
> 
I will try some changes out here (and look at freespace.c in general) -
but don't let that stop anyone else examining it as well... :-)

(I am on holiday for 10 days as of 24/05, so I may not report anything
for a little while)

regards

Mark  



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Graphical Tool

2002-05-21 Thread Gaston Micheri

Hello, can anybody tell me about a graphical tool that helps me relate
tables from a PostgreSQL database and set up referential integrity between
them?
Thank you!
Gaston.-

_
Únase con MSN Hotmail al servicio de correo electrónico más grande del 
mundo. http://www.hotmail.com


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Hannu Krosing

On Tue, 2002-05-21 at 21:31, Trond Eivind Glomsrød wrote:
> On Tue, 21 May 2002, Lamar Owen wrote:
> 
> > On Tuesday 21 May 2002 11:04 am, Manuel Sugawara wrote:
> > > I see. This behavior is consistent with the fact that mktime is
> > > supposed to return -1 on error, but then is broken in every other Unix
> > > implementation that I know.
> > 
> > > Any other workaround than downgrade or install FreeBSD?
> > 
> > Complain to Red Hat.  Loudly. However, as this is a glibc change, other 
> > distributors are very likely to fold in this change sooner rather than 
> > later. 
> 
> Relying on nonstandardized/nondocumented behaviour is a program bug, not a 
> glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at 
> it, but glibc is not the component with a problem.

Still, it seems kind of silly to have a function that works differently
from all other implementations and forces people to use their own
function of the same name (lifted from BSD and also compliant).

Speaking of nonstandardized/nondocumented behaviour, I read from "The
Open Sources" book that if you implement TCP/IP strictly from the RFCs
then it won't interoperate with any other TCP/IP stack. 

I hope that Red Hat is not going to be "standards compliant" here ;)

--
Hannu



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Oliver Elphick

On Tue, 2002-05-21 at 18:24, Lamar Owen wrote:
> In any case, this isn't just a Red Hat problem, as it's going to cause 
> problems with the use of timestamps on ANY glibc 2.2.5 dist.  That's more 
> than Red Hat, by a large margin.

I'm running glibc 2.2.5 on Debian and all regression tests pass OK (with
make check).  I don't see any note in the glibc Debian changelog about
reversing an upstream change to mktime().

I missed the first messages in this thread and I can't find them in the
archive.  What should I be looking for to see if I have the problem you
have encountered or to see why I don't have it if I ought to have?

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 "O come, let us worship and bow down; let us kneel
  before the LORD our maker."  Psalms 95:6 





[HACKERS] Shouldn't large objects be MVCC-aware?

2002-05-21 Thread Tom Lane

I notice that the large-object operations in pg_largeobject.c and 
inv_api.c all use SnapshotNow to access large-object tuples.  This
means they are not really MVCC compliant.  For example, I could be
reading a large object that someone else is writing; if he commits
mid-read, then I will see some old data and some updated data.
This seems wrong.

In particular, pg_dump cannot promise to dump a consistent snapshot
of large objects, because what it reads will be read under SnapshotNow.

I suggest that large object tuples are user data and so should be
read using the QuerySnapshot established at start of transaction.

Comments anyone?  Is it possible that changing this will break any
existing applications that depend on the current behavior?

regards, tom lane




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Lamar Owen

On Tuesday 21 May 2002 06:09 pm, Oliver Elphick wrote:
> On Tue, 2002-05-21 at 18:24, Lamar Owen wrote:
> > In any case, this isn't just a Red Hat problem, as it's going to cause
> > problems with the use of timestamps on ANY glibc 2.2.5 dist.  That's more
> > than Red Hat, by a large margin.

> I'm running glibc 2.2.5 on Debian and all regression tests pass OK (with
> make check).  I don't see any note in the glibc Debian changelog about
> reversing an upstream change to mktime().

> I missed the first messages in this thread and I can't find them in the
> archive.  What should I be looking for to see if I have the problem you
> have encountered or to see why I don't have it if I ought to have?

Hmmm.  Compile and run the attached program.  If you get -1, it's the new 
behavior.  It might be interesting to see the differences here.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

#include <stdio.h>
#include <time.h>

int main(int argc, char *argv[])
{
int failout;

struct tm fails;

fails.tm_sec = 0;
fails.tm_min = 0;
fails.tm_hour = 0;
fails.tm_isdst = -1;

fails.tm_year = 69;
fails.tm_mon = 11;
fails.tm_mday = 30;

failout = mktime(&fails);
printf("The system thinks 11/30/1969 is a timestamp of %d \n", failout);

return 0;
}






Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?

2002-05-21 Thread Dann Corbit

> -Original Message-
> From: Neil Conway [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 21, 2002 12:19 PM
> To: Joel Burton
> Cc: [EMAIL PROTECTED]
> Subject: Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?
> 
> 
> On Tue, 21 May 2002 11:41:26 -0400
> "Joel Burton" <[EMAIL PROTECTED]> wrote:
> > Noticed that increasing NAMEDATALEN to 128 is still on the TODO.
> 
> The last benchmarks I saw indicate that there's still a significant
> performance hit when increasing NAMEDATALEN, whether to 64 or 128.
> 
> Given that only a small percentage of PostgreSQL users need long
> identifiers, and *everyone* would suffer the performance hit, I'd
> rather that we not touch NAMEDATALEN until more work has been
> done on attempting to reduce the performance penalty.
> 
> Until then, the people who absolutely, positively must have long
> identifiers can just raise NAMEDATALEN themselves.

I'm sure that this is an idiotic thing to say, but why not just make it
varchar?

Most of the time the database objects will be small (maybe 10 characters
on average) but sometimes you want them to be really large.




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Tom Lane

Manuel Sugawara <[EMAIL PROTECTED]> writes:
 
> +#if 0
>/* Only years after 1970 are defined.
>   If year is 69, it might still be representable due to
>   timezone differnces.  */
>if (year < 69)
>  return -1;
> +#endif

Hm.  If that fixes it, it implies that all the other support for
pre-1970 dates is still there (notably, entries in the timezone tables).

Should we assume that future glibc releases will rip out the timezone
database entries and other support for pre-1970 dates?  Or is the
breakage going to stop with mktime?

regards, tom lane




Re: [HACKERS] 2nd cut at SSL documentation

2002-05-21 Thread Neil Conway

On Tue, 21 May 2002 14:27:00 -0600 (MDT)
"Bear Giles" <[EMAIL PROTECTED]> wrote:
> A second cut at SSL documentation

I've pointed out some minor things I noticed while reading through.
Yeah, I was bored :-)

> The sites that require SSL fall into one (or more) of several broad
> categories.
> 
> *) They have insecure networks. 
> 
>Examples of insecure networks are anyone in a "corporate hotel,"

What's a corporate hotel?

> *) They have 'road warriors.'

This section title sounds confusingly similar to the 1st item.
Perhaps "They need to authenticate clients securely" or something
similar? The need to use client certificates does not apply only to
"road warriors" -- I've seen situations where client certs are used for
clients connecting to a server over a LAN.

> *) Access is limited to the Unix socket.

"the" sounds wrong, there's more than just 1 :-)

> *) Access is limited to a physically secure network.
> 
>"Physically secure" networks are common in the clusters and
>colocation sites - all database traffic is restricted to dedicated
>NIC cards and hubs, and all servers and cabling are maintained in
>locked cabinets.

Perhaps add a note on the performance hit here?

> Using SSH/OpenSSH as a Virtual Private Network (VPN)

I'm unsure why you're bothering to differentiate between SSH
and OpenSSH.

> SSH and OpenSSH can be used to construct a Virtual Private Network
> (VPN)

No need to include the abbreviation for VPN here, you've explained
the term before.

> to provide confidentiality of PostgreSQL communications.  
> These tunnels are widely available and fairly well understood, but 
> do not provide any application-level authentication information.

You might want to clarify what "application-level authentication
information" means, or else leave out all discussion of drawbacks
until later.

> To set up a SSH/OpenSSH tunnel, a shell account for each
> user should be set up on the database server.  It is acceptable
> for the shell program to be bogus (e.g., /bin/false), if the
> tunnel is set up in to avoid launching a remote shell.
> 
> On each client system the $HOME/.ssh/config file should contain
> an additional line similiar to
> 
>  LocalForward  psql.example.com:5432

"pgsql.example.com" strikes me as a better example hostname (I always
think that psql == DB client, postgres/postmaster/pgsql == DB server).
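For readers of the archive: the LocalForward entry being reviewed can also be expressed as a one-off command, which makes the moving parts easier to see (hostname and user are hypothetical, and note that LocalForward syntax also takes a local port before the remote host:port pair):

```shell
# Hypothetical example, not from the document under review.
# Forward local port 5432 to PostgreSQL on the server; -N skips
# running a remote command, so a /bin/false shell account suffices.
ssh -N -L 5432:localhost:5432 dbuser@pgsql.example.com &

# Then point the client at the local end of the tunnel:
psql -h localhost -p 5432 mydb
```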

> Unfortunately, there are many potential drawbacks to SSL tunnels:

I think you mean SSH tunnels.

> *) the SSH implementation or protocol may be flawed.  Serious problems
>are discovered about once every 18- to 24- months.

I'd be skeptical whether this weakness is specific to SSH -- there
can be security holes in OpenSSL, the SSL protocol, the SSL
implementation in PostgreSQL, etc.

> *) the database server must provide shell accounts for all users
>needing access.  This can be a chore to maintain, esp. in if

Remove the "in".

> *) neither the front- or back-end can determine the level of
>encryption provided by the SSH tunnel - or even whether an
>SSH tunnel is in use.  This prevents security-aware clients
>from refusing any connection with unacceptly weak encryption.

Spelling error.

> Finally, the client library can have one or more trusted root
> certificates compiled into it.  This allows clients to verify
> certificates without the need for local copies.  To do this,
> the source file src/interfaces/libpq/fe-ssl.c must be edited
> and the database recompiled.

"PostgreSQL" recompiled -- database versus RDBMS can be ambiguous.

> Mutual authentication requires that servers and clients each
> authenticate to the other.  This protects the server from
> false clients in addition to protecting the clients from false
> servers.

"false" in this context?

Cheers,

Neil

-- 
Neil Conway <[EMAIL PROTECTED]>
PGP Key ID: DB3C29FC




[HACKERS] Timestamp & Interval - Part 1

2002-05-21 Thread Josh Berkus

Tom, Oliver,

I haven't finished writing up my ideas for INTERVAL.  However, here's 
something to get started:

PROPOSAL FOR ADJUSTMENTS OF POSTGRESQL TIMESTAMP AND INTERVAL HANDLING
Draft 0.1 - Part 1

Timestamp
--
Proposal #1:  TIMESTAMP WITHOUT TIME ZONE as default

Description:  Currently, the data type invoked when users select TIMESTAMP is 
TIMESTAMP WITH TIME ZONE.  We should change this so that TIMESTAMP defaults 
to TIMESTAMP WITHOUT TIME ZONE unless WITH TIME ZONE is specified.

Reason:  Handling time zones is tricky and non-intuitive for the beginning 
user.  TIMESTAMP WITH TIME ZONE should be reserved for DBAs who know what 
they're doing.


Proposal #2: We need more time zones.

Description:  We need to add, or be able to add, many new time zones to 
Postgresql.  Ideal would be some kind of "create time zone" statement.

Reason:  Current included time zones do not cover all real-world time zones, 
and the situation is likely to get worse as various governments play with 
their calendars.  For example, there is no current time zone which would be 
appropriate for the state of Arizona, i.e. "Central Standard Time without 
Daylight Savings Time".  

Further:  A CREATE TIME ZONE statement would have the following syntax:
CREATE TIME ZONE GMT_adjustment, abbreviation, uses_DST, DST_starts 
(optional), DST_ends (optional) 
This would allow, to some degree, DBA creation of time zones to take into 
account local laws and weirdnesses.

-- 
-Josh Berkus






[HACKERS] Wrong dependency in parallel regression test

2002-05-21 Thread Peter Eisentraut

Notice that the parallel regression test runs

parallel group (7 tests):  create_aggregate create_operator inherit
triggers constraints create_misc create_index

create_index creates an index on a table "onek2" which is created in
create_misc.  I just saw this fail because create_index got there first.
On the next run everything was OK.

It's interesting that no one has seen this before, so it's quite
low-probability.  I'll just mention it here for the archives.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Peter Eisentraut

Lamar Owen writes:

> SuSE already does this.  I wonder how they've handled this issue with
> 8.0?

Their glibc doesn't have that problem.

Personally, I think if you need time (zone) support before 1970, obtain
one of the various operating systems that support it.  There's little
value in hacking around it in PostgreSQL, since the rest of your system
will be broken as well.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] Timestamp & Interval - Part 1

2002-05-21 Thread Thomas Lockhart

> Proposal #1:  TIMESTAMP WITHOUT TIME ZONE as default

Hmm. Already done for 7.3 :)

7.2 introduced that data type, and 7.1 did not have it, so we had one
release cycle to allow dump/restore to do the right thing.

> Proposal #2: We need more time zones.

The other complaint is that we have too many time zones. Certainly it is
not ideal (but it may be optimal from an execution time standpoint) that
these time zones are hardcoded into lookup tables; moving these into
external files will be *much* slower, moving these into database tables
will be somewhat slower. But asking us to deal with Arizona may be a bit
too much; those guys do things just to be different ;)

btw, on my Linux box the time zone rule is 'US/Arizona', as in

lockhart=# SET TIME ZONE 'US/Arizona';

My Linux box thinks that for Arizona, time input would always be in
'MST', which is recognized by the PostgreSQL date/time parser, so things
are handled consistently (at least until I upgrade glibc :((

Let's see how the glibc breakage discussion pans out. I haven't worried
about pre-1903 dates and times because time zones were not as
standardized then as they are today. But if we end up rolling our own
then we might consider pulling more of this into the backend and getting
rid of our y2038 problems at the same time :))

 - Thomas




Re: [HACKERS] Timestamp & Interval - Part 1

2002-05-21 Thread Dann Corbit

Why not fix it completely with this stuff:
ftp://elsie.nci.nih.gov/pub/

Just an idea.




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Thomas Lockhart

> > SuSE already does this.  I wonder how they've handled this issue with
> > 8.0?
> Their glibc doesn't have that problem.

My strong recollection is that a SuSE guy was the one applying the
change. So this is coming to those systems too. I may not remember that
correctly though...

> Personally, I think if you need time (zone) support before 1970, obtain
> one of the various operating systems that support it.  There's little
> value in hacking around it in PostgreSQL, since the rest of your system
> will be broken as well.

Yes, I'm afraid I agree. In practice, maybe most applications won't
notice. But after getting the Linux time zone databases set up to be
better than most (Solaris has the best I've found for fidelity to
pre-1970 year-to-year conventions) throwing that work away is just plain
silly. I consider this a major gaffe on the part of the commercial Linux
houses to not see this coming and to contribute to a better solution.

 - Thomas




Re: [HACKERS] Timestamp & Interval - Part 1

2002-05-21 Thread Thomas Lockhart

> Why not fix it completely with this stuff:
> ftp://elsie.nci.nih.gov/pub/
> Just an idea.

Ah, the real zic implementation. afaik this is public domain or BSD or
at least compatible with our BSD license wrt distribution.

Great idea. We may end up doing this! Though I hate for the project to
pick up the task of maintaining sync with that distro.

We already have a NO_MKTIME_BEFORE_1970 #define'd for AIX and IRIX
(always paragons of standard behavior :/).  Funnily enough, it doesn't
actually guard the mktime() code, since I think that there is a good
signature from the exit from mktime() on those systems (independent of
the return value) to detect that there is a problem. glibc sinks to new
depths in its lack of support for this feature by brute-force exiting
early on.

It looks like we might (easily?) get good behavior beyond y2038, since
we might be able to redefine time_t within our code. At the moment zic
looks for it from sys/types.h, but maybe we could isolate it and force
it to be a 64-bit number on systems which support it. Hmm, need to find
how to translate current system time to that representation...

  - Thomas




Re: [HACKERS] Is 7.3 a good time to increase NAMEDATALEN ?

2002-05-21 Thread Tom Lane

"Dann Corbit" <[EMAIL PROTECTED]> writes:
> I'm sure that this is an idiotic thing to say, but why not just make it
> varchar?

The main reason NAME is a fixed-length datatype is that we'd have to
rewrite (and make slower) a lot of catalog-accessing code that expects
to be able to access other fields in catalog tuples at fixed offsets.
I do not think it's worth it.

Also, the existing performance bottlenecks look to me to be associated
with assumptions that NAME is fixed-length.  To convert to varlena NAME,
we'd still have to fix all that code.

regards, tom lane




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Oliver Elphick

On Tue, 2002-05-21 at 23:47, Lamar Owen wrote:
> Hmmm.  Compile and run the attached program.  If you get -1, it's the new 
> behavior.  It might be interesting to see the differences here.

$ a.out
The system thinks 11/30/1969 is a timestamp of -176400 
$ dpkg -l libc6
...
||/ Name   VersionDescription
+++-==-==-
ii  libc6  2.2.5-6GNU C Library: Shared libraries and Timezone

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 "We are troubled on every side, yet not distressed; we 
  are perplexed, but not in despair; persecuted, but not
  forsaken; cast down, but not destroyed; Always bearing
  about in the body the dying of the Lord Jesus, that 
  the life also of Jesus might be made manifest in our 
  body."II Corinthians 4:8-10 





Re: [HACKERS] Wrong dependency in parallel regression test

2002-05-21 Thread Tom Lane

Peter Eisentraut <[EMAIL PROTECTED]> writes:
> create_index creates an index on a table "onek2" which is created in
> create_misc.  I just saw this fail because create_index got there first.
> On the next run everything was OK.

> It's interesting that no one has seen this before, so it's quite
> low-probability.

Wow.  Has anyone tried to do an exhaustive check that the parallel
regression test schedule is OK?

I'd think that it could be done in a reasonable amount of time by
running each test of each parallel group (a) first and (b) last
among its group.

regards, tom lane




Re: [HACKERS] Redhat 7.3 time manipulation bug

2002-05-21 Thread Hannu Krosing

On Wed, 2002-05-22 at 02:14, Tom Lane wrote:
> Trond Eivind Glomsrød <[EMAIL PROTECTED]> writes:
> > Relying on nonstandardized/nondocumented behaviour is a program bug, not a 
> > glibc bug. PostgreSQL needs fixing. Since we ship both, we're looking at 
> > it, but glibc is not the component with a problem.
> 
> A library that can no longer cope with dates before 1970 is NOT my idea
> of a component without a problem.  We will be looking at ways to get
> around glibc's breakage at the application level, since we have little
> alternative other than to declare Linux an unsupported platform;
> but it's still glibc (and the ISO spec:-() that are broken.

IIRC the spec is not _really_ broken - it still allows the correct
behaviour :)

The fact that the ISO spec is broken usually means that at least one of
the big vendors involved in creating it must have had a broken
implementation at the time.

Most likely they have fixed it by now ...

Does anyone know _any_ other libc that has this behaviour ?

--
Hannu


