Re: [HACKERS] Memory leaks

2002-10-23 Thread Nigel J. Andrews
On Tue, 22 Oct 2002, Tom Lane wrote:

 Greg Copeland [EMAIL PROTECTED] writes:
 
  Interesting.  Having not looked at memory management schemes used in the
  pl implementations, can you enlighten me by what you mean by integrate
  the memory-context notion?  Does that mean they are not using
  palloc/pfree stuff?
 
 Not everywhere.  plpgsql is full of malloc's and I think the other PL
 modules are too --- and that's not to mention the allocation policies of
 the perl, tcl, etc, language interpreters...

I was going to suggest that malloc et al. could be replaced with palloc etc.,
but that raises too many complications unless we just shove everything into a
long-lived context anyway. Also, I think we've got to rely on the underlying
language handling memory correctly; it is sensible to do so.
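
For illustration, the trade-off looks roughly like this in code. It is a
minimal sketch using the real backend primitives (palloc, pfree,
MemoryContextSwitchTo, TopMemoryContext); the fragment is hypothetical,
not actual PL source:

/* before: malloc'd memory is simply lost if elog(ERROR) aborts the call */
char *buf = malloc(len);

/* after (sketch): allocate in a long-lived context instead; the chunk
 * survives past the current query but can still be released explicitly */
MemoryContext oldcxt = MemoryContextSwitchTo(TopMemoryContext);
char *buf2 = palloc(len);
MemoryContextSwitchTo(oldcxt);
/* ... use buf2 ... */
pfree(buf2);            /* explicit release, analogous to free() */

The catch is exactly the one above: once everything lives in a long-lived
context, you get none of the automatic cleanup that makes contexts
attractive in the first place.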

Hmmm... there do seem to be a few mallocs in plpython.c. I haven't looked very
closely, but nothing jumped out at me as being obviously wrong from the grep
output.


-- 
Nigel J. Andrews


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] 'epoch'::timestamp and Daylight Savings

2002-10-23 Thread Hosen, John
Hi there all PostgreSQL Hackers.

We have just upgraded from 7.1.3 to 7.2.3 over the weekend, and have just
noticed something weird with regard to 'epoch'::timestamp.

In 7.1.3, 'epoch'::timestamp always returned the value 1970-01-01
00:00:00+00, following the upgrade we now get:-

      timestamptz
------------------------
 1970-01-01 01:00:00+01

Also, any fields set to 'epoch'::timestamp before the upgrade now return:-

    e_app_xfer_date
------------------------
 1970-01-01 00:00:00+01

If we issue a SET TIMEZONE TO 'GMT' the results are the same as for the
previous release.
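
As a sanity check outside the database, the same behaviour can be reproduced
with a few lines of C: Unix time zero is a single instant everywhere, and only
its rendering depends on the timezone (the UK ran UTC+1 year-round in 1970,
hence the 01:00:00+01). A small standalone sketch, assuming a zoneinfo-aware
libc:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t epoch = 0;           /* 'epoch' = Unix time zero */
    char buf[64];

    setenv("TZ", "Europe/London", 1);
    tzset();
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %z", localtime(&epoch));
    printf("London: %s\n", buf);    /* 1970-01-01 01:00:00 +0100 */

    setenv("TZ", "GMT", 1);
    tzset();
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %z", localtime(&epoch));
    printf("GMT:    %s\n", buf);    /* 1970-01-01 00:00:00 +0000 */
    return 0;
}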

This is causing us a problem as we use epoch to indicate whether the data
has been processed, and to log the date and time of processing.

We are running on RedHat 7.1 with kernel 2.4.9-21smp, and glibc
glibc-2.2.4-19.3 with PostgreSQL compiled from the source tarfile.

Output from version():  PostgreSQL 7.2.3 on i686-pc-linux-gnu, compiled by
GCC 2.96

With regard to PostgreSQL in general, we have been using it since version
6.4.x and have found it (even the earlier versions) extremely reliable and
powerful. Keep up the good work guys!

John Hosen
Senior Network Analyst
Capita RAS
http://www.capitaras.co.uk/


***

This email and any files attached to it are confidential and intended 
solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify 
the system manager.

The message and any files attached to it have been scanned by 
MIMEsweeper with Sophos Sweep and found to be free from all known
viruses.

Information on MIMEsweeper can be found at http://www.mimesweeper.com/

***


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [ADMIN] Security question : Database access control

2002-10-23 Thread dima
edit pg_hba.conf
# Allow any user on the local system to connect to any
# database under any username, but only via an IP connection:
host    all    127.0.0.1    255.255.255.255    trust
# The same, over Unix-socket connections:
local   all                                    trust
What about reading the pg_hba.conf comments?
  local   all                                    md5


or, my nightmare, a cygwin on Win 98 where everybody can access everything
:-(
/me shrugs
i don't use cygwin




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Memory leaks

2002-10-23 Thread Greg Copeland
On Tue, 2002-10-22 at 22:28, Tom Lane wrote:
 Greg Copeland [EMAIL PROTECTED] writes:
 So again, I'm not really sure if they are meaningful at
  this point.
 
 psql might well have some internal leaks; the backend memory-context
 design doesn't apply to it.

Okay.  Thanks.  I'll probably take another look at it a little later and
report back if I find anything of value.

 Does that mean they are not using
  palloc/pfree stuff?
 
 Not everywhere.  plpgsql is full of malloc's and I think the other PL
 modules are too --- and that's not to mention the allocation policies of
 the perl, tcl, etc, language interpreters.  We could use a thorough
 review of that whole area.
 

Okay.  I've started looking at plpython to better understand its memory
needs.  I'm seeing a mix of plain mallocs and PLy_malloc.  The PLy version is
basically malloc plus a check that reports memory allocation failures.
Does anyone know if the places where plain malloc was used were done that
way purposely for performance reasons, or simply as the flavor of the day?

I'm thinking that, for starters, the plpython module could be normalized to
use the PLy_malloc stuff across the board.  Then again, I still need to
spend some more time on it. ;)
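
For reference, the checking wrapper amounts to something like the following
minimal sketch; the error call is illustrative rather than the exact
plpython.c source, and it assumes the backend environment for elog():

/* sketch of a checked allocator in the PLy_malloc style */
static void *
PLy_malloc(size_t size)
{
    void *ptr = malloc(size);

    if (ptr == NULL)
        elog(ERROR, "plpython: out of memory");   /* report and abort */
    return ptr;
}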

  Well, the thing that really got my attention is that dmalloc is
  reporting frees on null pointers.
 
 AFAIK that would dump core on many platforms (it sure does here...),
 so I don't think I believe it without seeing chapter and verse.  

I actually expected it to do that here on my x86-Linux platform, but a
quick check showed that it was quite happy with it.  What platforms are
you using -- just curious?

 But if you can point out where it's really happening, then we must fix it.
 

I'll try to track this down some more this coming week to see if this is
really occurring.  After thinking about it, I'm not sure why dmalloc
would ever report a free on null if it were not actually occurring.
After all, any call to free should still be showing some memory address
(valid or otherwise).  Off the top of my head, I can't think of an
artifact that would cause it to falsely report it.


Greg



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Memory leaks

2002-10-23 Thread Tom Lane
Nigel J. Andrews [EMAIL PROTECTED] writes:
 On Tue, 22 Oct 2002, Tom Lane wrote:
 Not everywhere.  plpgsql is full of malloc's and I think the other PL
 modules are too --- and that's not to mention the allocation policies of
 the perl, tcl, etc, language interpreters...

 I was going to suggest that malloc et al. could be replaced with palloc
 etc., but that raises too many complications unless we just shove everything
 into a long-lived context anyway. Also, I think we've got to rely on the
 underlying language handling memory correctly; it is sensible to do so.

If the perl/tcl/etc interpreters have internal memory leaks, there's
little we can do about that except file bug reports.  What I was
wondering about is whether there isn't deliberate action we need to
take to inform those interpreters when data is no longer required.

An example: when a procedure is updated with CREATE OR REPLACE FUNCTION,
the only thing pltcl does about it is a solitary Tcl_DeleteHashEntry();
it doesn't try to tell Tcl to delete the existing Tcl procedure.  That
might happen for free (will we always regenerate the same Tcl procedure
name? not sure), but if the omission causes a leak it's surely not Tcl's
fault.  That's on top of our own data structures about the pltcl
function (pltcl_proc_desc and related stuff), which are definitely
leaked in this scenario.

Sticking all the data about a given function into a memory context
that's specific to the function would make it easier to reclaim our own
memory in this scenario, but we'd still have to tell Tcl to clean up
its own memory.
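
For the record, telling Tcl to drop the procedure is a one-liner with the
public API. A hedged sketch of what the replace path could do; the helper
and its arguments are hypothetical, while Tcl_DeleteHashEntry and
Tcl_DeleteCommand are the real Tcl calls:

/* on CREATE OR REPLACE FUNCTION: drop our lookup entry *and* ask the
 * interpreter to delete the old proc, so Tcl can free its body */
static void
pltcl_forget_proc(Tcl_Interp *interp, Tcl_HashEntry *hashent,
                  char *internal_proname)
{
    Tcl_DeleteHashEntry(hashent);                 /* our side */
    Tcl_DeleteCommand(interp, internal_proname);  /* Tcl's side */
}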

Actually pltcl's internal structures about a function look simple enough
that it may not be worth using a context for them.  It would definitely
be useful to do that for plpgsql, though, which builds an extremely
complicated structure for each function (and leaks it all on
function redefinition :-().

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Memory leaks

2002-10-23 Thread Tom Lane
Greg Copeland [EMAIL PROTECTED] writes:
 Okay.  I've started looking at plpython to better understand its memory
 needs.  I'm seeing a mix of plain mallocs and PLy_malloc.  The PLy version is
 basically malloc plus a check that reports memory allocation failures.
 Does anyone know if the places where plain malloc was used were done that
 way purposely for performance reasons, or simply as the flavor of the day?

Probably either oversight or the result of different people's different
coding styles.

 I'm thinking that, for starters, the plpython module could be normalized to
 use the PLy_malloc stuff across the board.  Then again, I still need to
 spend some more time on it. ;)

Consistency is good.  What I'd wonder about, though, is whether you
shouldn't be using palloc ;-).  malloc, with or without a PLy_ wrapper,
doesn't provide any leverage to help you get rid of stuff when you don't
want it anymore.

 Well, the thing that really got my attention is that dmalloc is
 reporting frees on null pointers.
 
 AFAIK that would dump core on many platforms (it sure does here...),

I have to take that back: I was thinking about pfree() not free().
The ANSI C spec says that free(NULL) is a legal no-op, and there are
just a few ancient C libraries (perhaps none anymore) where it'll crash.
I tend to do if (ptr) free(ptr) from force of habit, but I notice that
psql (among other places) relies heavily on the ANSI behavior.  It's
probably pointless to try to convince people to change that coding style.
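
The guarantee in question, as a tiny standalone check; pfree(NULL), by
contrast, is an error in the backend:

#include <stdlib.h>

int main(void)
{
    char *p = NULL;
    free(p);                /* ANSI C: free(NULL) is a legal no-op */

    char *q = malloc(16);
    if (q)                  /* the habit Tom describes: guard anyway */
        free(q);
    return 0;
}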

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
First of all, PREPARE/EXECUTE is a wonderful thing for speeding things up
significantly.
I wonder if there is a way to store a parsed/rewritten/planned query in
a table so that it can be loaded again.

This might be useful when it comes to VERY complex queries (> 10 tables).
In many applications the situation is like this:

a. The user connects to the database.
b. The user sends various different queries to the server (some might be 
the same)
c. The user disconnects.

If there was a way to store execution plans in a table the user could 
load the execution plans of the most time consuming stuff into the 
backend without parsing and optimizing it every time he authenticates.

Does it sound useful to anybody? Is it possible to do it or are there 
some technical problems?

Maybe this is worth thinking about.
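
To make the intent concrete, this is the per-connection pattern that
PREPARE/EXECUTE already gives us, sketched with libpq; the table and query
are hypothetical and error handling is omitted, and the plan still dies with
the connection, which is exactly the complaint here:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* hypothetical DSN */
    PGresult *res;
    int       i;

    /* pay the parse/plan cost once per connection ... */
    res = PQexec(conn,
                 "PREPARE big_join (integer) AS "
                 "SELECT t1.name FROM t1, t2 "
                 "WHERE t1.id = t2.id AND t1.id = $1");
    PQclear(res);

    /* ... then execute many times without replanning */
    for (i = 0; i < 100; i++)
    {
        res = PQexec(conn, "EXECUTE big_join(42)");
        PQclear(res);
    }
    PQfinish(conn);
    return 0;
}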

   Hans

--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] BTree free pages again

2002-10-23 Thread Manfred Koizar
Alvaro,

some time ago I started to collect ideas for btree reorganization but
never got beyond brainstorming.  Maybe it helps if I add them to your
ideas ...

On Tue, 22 Oct 2002 00:12:30 -0300, Alvaro Herrera
[EMAIL PROTECTED] wrote:
I propose instead a hybrid approach:  the metapage has an array of
BlockNumbers, _and_ a pointer to a next freelist page.  That page has
another array of BlockNumbers and another link to next freelist page.
This allows for easier compaction of the freelist, an operation which
should be done on a regular basis (with each VACUUM FULL, for example).
The list of freelist-pages should actually be double linked; that way,
the compaction process can take the BlockNumbers from the last page and
put it on the first to fill it up, etc.

(Remember that each newpage
operation has to sequentially scan the freelist, and put a zero when it
takes one).

What do you mean by sequentially scan the freelist?  Scan the array
of page numbers on the meta page or scan the whole list of freelist
pages?  I'd take a page number from the meta page.  When the list of
free pages on the meta page is empty, read the first freelist page,
move its freelist to the metapage, let the meta page point to its
successor, and return the (formerly) first freelist page as the new
page.

Putting a page on the free list:  If there is room for one more page
number on the meta page, put the page number there and we are
finished.  Otherwise move the freelist from the meta page to the new
page and insert this page at the start of the freelist chain.

To avoid freelist ping-pong you might want to move only, say, 90% or
95% of the freelist from the metapage to the new free page ...
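
A toy in-memory model of this two-level freelist may make the shape clearer;
the types and sizes are invented for illustration and say nothing about the
on-disk layout:

#include <stdlib.h>

#define SLOTS 32                     /* free-page slots per array */
#define INVALID_BLOCK ((unsigned) -1)

typedef struct FreelistPage          /* an overflow freelist page */
{
    unsigned pageno;                 /* its own block number */
    unsigned free[SLOTS];            /* free block numbers it stores */
    int      nfree;
    struct FreelistPage *next;       /* next page in the chain */
} FreelistPage;

typedef struct Meta                  /* the meta page's view */
{
    unsigned free[SLOTS];            /* in-meta array */
    int      nfree;
    FreelistPage *chain;             /* first overflow freelist page */
} Meta;

/* newpage: pop from the meta array; when it runs dry, pull the first
 * overflow page's array up to the meta page and hand out that page */
static unsigned
get_free_page(Meta *meta)
{
    if (meta->nfree > 0)
        return meta->free[--meta->nfree];

    if (meta->chain != NULL)
    {
        FreelistPage *fp = meta->chain;
        unsigned result = fp->pageno;
        int i;

        for (i = 0; i < fp->nfree; i++)
            meta->free[i] = fp->free[i];
        meta->nfree = fp->nfree;
        meta->chain = fp->next;
        free(fp);                    /* the page itself is reused */
        return result;
    }
    return INVALID_BLOCK;            /* caller extends the relation */
}

int main(void)
{
    Meta meta = { {0}, 0, NULL };
    return get_free_page(&meta) == INVALID_BLOCK ? 0 : 1;
}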

I don't know if we need to doubly link the list.  When do we have to
scan it backwards?


Each time a
tuple is deleted the page is checked to see if it's empty.  If it is,
it's added to a candidate-empty list.

At this point the page is marked as dead (by setting a new flag in
BTPageOpaqueData.btpo_flags) and the lock is released (not sure about
releasing or keeping the pin).  A dead page is not yet available for
reuse.  It is always empty, so it is immediately skipped by index
scans.  The insertion code is modified to never put an index tuple
onto a dead page (you can put it onto the right sibling without
breaking consistency).

At the end of the
btbulkdelete() operation, we have a list of pages that are candidates
for adding into the freelist.

There are several possible approaches:
(a) Maintain a list of dead pages in memory
(b) Rescan the whole index for dead pages
(c) Handle each dead page immediately

Each of these has its drawbacks: (a) leaves behind dead pages when the
reorganization crashes, (b) might have to read lots of non-dead pages,
(c) is inefficient if there are adjacent dead pages.

For each one, the page is
exclusive-locked, and rechecked if it's empty.

The dead-flag guarantees emptiness, just assert it's still dead.
(Can VACUUMs run concurrently?  If yes, this algorithm has to be
adjusted.)  Also no need to lock the page now.

If it is, the parent is
also exclusive-locked (beware of deadlock here!) and also its left and
right siblings.  In the parent, the item pointing to this page is
deleted; in the siblings, the side pointers are updated (high key
on the left sibling also?).

Do one page at a time:  Exclusively lock the parent page, remove the
index tuple pointing to the dead page, unlock parent page.  Lock one
sibling, update pointer, unlock.  Lock the other sibling, update
pointer, unlock.

Attention!  Maybe there's a problem lurking regarding recent splits of
the left sibling!  Have to think more about this ...

Now we have reached a state where the dead page will not be visited by
any new index scan.  There might still be danger from scans that were
paused just before they were about to touch the dead page.  Have to
look at the code whether pins overlap when a scan is sent down from a
parent page or left/right from a sibling ...

Then this page is added to the freelist.
On the add-to-free-list operation, the page is checked to see if it's
the last page of the relation.  If it is, the page just before is
checked for emptiness (using the BTP_FREE flag) iteratively until a
nonfree page is found.

Special handling for freelist pages required here.

All those pages are deleted from the freelist.

You don't have to delete them explicitly.  Just store the maximum free
page number on the meta page and let the free page search ignore (and
set to 0) all page numbers greater than this number.  Making this work
with newroot always extending the relation needs some thought ...

Then the relation is shrunk by that many pages.
[...]
To prevent deadlocks, the newroot operation
does not get a page from the freelist; it always extends the relation.

I think I've put too many things in one email.  Sorry for this.

Me too :-)

Servus
 Manfred

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])

Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 01:02 AM 23/10/2002 -0400, Bruce Momjian wrote:
 
 OK, you are saying if we don't have fseeko(), there is no reason to use
 off_t, and we may as well use long.  What limitations does that impose,
 and are the limitations clear to the user?
 
 What I'm saying is that if we have not got fseeko then we should use any 
 'seek-class' function that returns a 64 bit value. We have already made the 
 assumption that off_t is an integer; the same logic that came to that 
 conclusion, applies just as validly to the other seek functions.

Oh, I see, so try to use fsetpos/fgetpos?  I can write wrappers for
those to look like fseeko/ftello and put them in /port.
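
A hedged sketch of such a /port wrapper, under this thread's assumption that
fpos_t is a plain integer wide enough to hold off_t on the platforms in
question (SEEK_END is omitted; a real version would need fstat() for it):

#include <stdio.h>
#include <sys/types.h>

/* fseeko() built from fsetpos()/fgetpos(); ONLY valid where fpos_t
 * is an integral type at least as wide as off_t */
int
pg_fseeko(FILE *stream, off_t offset, int whence)
{
    fpos_t pos;

    switch (whence)
    {
        case SEEK_SET:
            pos = (fpos_t) offset;
            return fsetpos(stream, &pos);
        case SEEK_CUR:
            if (fgetpos(stream, &pos) != 0)
                return -1;
            pos = (fpos_t) ((off_t) pos + offset);
            return fsetpos(stream, &pos);
        default:
            return -1;               /* SEEK_END not handled here */
    }
}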

 Secondly, if there is no 64 bit 'seek-class' function, then we should 
 probably use a size_t, but a long would probably be fine too. I am not 
 particularly attached to this part; long, int etc etc. Whatever is most 
 likely to return an integer and work with whatever function we choose.
 
 As to implications: assuming they are all integers (which as you know I 
 don't like), we should have no problems.
 
 If a system does not have any function to access 64 bit file offsets, then 
 I'd say they are pretty unlikely to have files > 2GB.

OK, my OS can handle 64-bit files, but has only fgetpos/fsetpos, so I
could get that working.  The bigger question is what about OS's that
have 64-bit off_t/files but don't have any seek-type functions.  I did
research to find mine, but what about others that may have other
variants?

I think you are right that we have to not use off_t and use long if we
can't find a proper 64-bit seek function, but what are the failure modes
of doing this?  Exactly what happens for larger files?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Memory leaks

2002-10-23 Thread Greg Copeland
On Wed, 2002-10-23 at 08:48, Tom Lane wrote:
 Greg Copeland [EMAIL PROTECTED] writes:
  Okay.  I've started looking at plpython to better understand its memory
  needs.  I'm seeing a mix of plain mallocs and PLy_malloc.  The PLy version is
  basically malloc plus a check that reports memory allocation failures.
  Does anyone know if the places where plain malloc was used were done that
  way purposely for performance reasons, or simply as the flavor of the day?
 
 Probably either oversight or the result of different people's different
 coding styles.

My local copy has this changed to PLy stuff now.  Testing shows it's
good...then again, I didn't really expect it to change anything.  I'll
submit patches later.

 
  I'm thinking that, for starters, the plpython module could be normalized to
  use the PLy_malloc stuff across the board.  Then again, I still need to
  spend some more time on it. ;)
 
 Consistency is good.  What I'd wonder about, though, is whether you
 shouldn't be using palloc ;-).  malloc, with or without a PLy_ wrapper,
 doesn't provide any leverage to help you get rid of stuff when you don't
 want it anymore.

Ya, I'm currently looking to see how the memory is being used and why.
I'm trying to better understand its life cycle.  Are you implying that even
the short-term memory should be using the palloc stuff?  What about long
term?  Is the blanket statement that pretty much all the PLy stuff should
really be using palloc?

 
  Well, the thing that really got my attention is that dmalloc is
  reporting frees on null pointers.
  
  AFAIK that would dump core on many platforms (it sure does here...),
 
 I have to take that back: I was thinking about pfree() not free().
 The ANSI C spec says that free(NULL) is a legal no-op, and there are

Oh really.  I didn't realize that.  I've been using the if(  ptr ) 
stuff for so long I didn't realize I didn't need to anymore.  Thanks for
the update.  That was, of course, the cause for alarm.

 It's
 probably pointless to try to convince people to change that coding style.

Well, at this late time, I think it's safe to say that it's not causing
problems for anyone on any of the supported platforms.  So I'll not
waste time looking for it, even though I happen to think it's a poor
practice just the same.


Thanks,

Greg



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Greg Copeland
If you were using them that frequently, couldn't you just keep a
persistent connection?  If it's not used that often, wouldn't the
overhead of preparing the query following a new connection become noise?

Greg


On Wed, 2002-10-23 at 09:24, Hans-Jürgen Schönig wrote:
 First of all PREPARE/EXECUTE is a wonderful thing to speed up things 
 significantly.
 I wonder if there is a way to store a parsed/rewritten/planned query in 
 a table so that it can be loaded again.
 
 This might be useful when it comes to VERY complex queries (> 10 tables).
 In many applications the situation is like this:
 
 a. The user connects to the database.
 b. The user sends various different queries to the server (some might be 
 the same)
 c. The user disconnects.
 
 If there was a way to store execution plans in a table the user could 
 load the execution plans of the most time consuming stuff into the 
 backend without parsing and optimizing it every time he authenticates.
 
 Does it sound useful to anybody? Is it possible to do it or are there 
 some technical problems?
 
 Maybe this is worth thinking about.
 
 Hans
 
 -- 
 *Cybertec Geschwinde u Schoenig*
 Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
 Tel: +43/1/913 68 09; +43/664/233 90 75
 www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at
 
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
 http://archives.postgresql.org



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 01:02 AM 23/10/2002 -0400, Bruce Momjian wrote:
 
 OK, you are saying if we don't have fseeko(), there is no reason to use
 off_t, and we may as well use long.  What limitations does that impose,
 and are the limitations clear to the user.
 
 What I'm saying is that if we have not got fseeko then we should use any 
 'seek-class' function that returns a 64 bit value. We have already made the 
 assumption that off_t is an integer; the same logic that came to that 
 conclusion, applies just as validly to the other seek functions.
 
 Secondly, if there is no 64 bit 'seek-class' function, then we should 
 probably use a size_t, but a long would probably be fine too. I am not 
 particularly attached to this part; long, int etc etc. Whatever is most 
 likely to return an integer and work with whatever function we choose.
 
 As to implications: assuming they are all integers (which as you know I 
 don't like), we should have no problems.
 
 If a system does not have any function to access 64 bit file offsets, then 
 I'd say they are pretty unlikely to have files > 2GB.

Let me see if I can be clearer.  With shifting off_t, if that fails, we
will find out right away, at compile time.  I think that is acceptable.

What I am concerned about are cases that fail at runtime, specifically
during a restore of a 2gig file.  In my reading of the code, those
failures will be silent or will produce unusual error messages.  I don't
think we can ship code that has strange failure modes for data restore.

Now, if someone knows those failure cases, I would love to hear about
it.  If not, I will dig into the code today and find out where they are.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Memory leaks

2002-10-23 Thread Tom Lane
Greg Copeland [EMAIL PROTECTED] writes:
 Ya, I'm currently looking to see how the memory is being used and why.
 I'm trying to better understand its life cycle.  Are you implying that even
 the short-term memory should be using the palloc stuff?  What about long
 term?  Is the blanket statement that pretty much all the PLy stuff should
 really be using palloc?

Short-term stuff almost certainly should be using palloc, IMHO; anything
that is not going to survive the current function invocation should be
palloc'd in CurrentMemoryContext.  The main reason for this is that you
don't need to fear leaking such memory if the function is aborted by
elog().  Depending on what you are doing, you may not have to bother
with explicit pfree's at all for such stuff.  (In a PL handler you could
probably get away with omitting pfree's for stuff allocated once per
call, but not for stuff allocated once per statement.  Unless you were to
make a new context that gets reset for each statement ... which might be
a good idea.)
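
That last parenthetical, sketched with the real context API
(AllocSetContextCreate and friends, as of the 7.x tree; the context name is
made up and this fragment assumes the backend environment):

/* make a per-statement context once, at handler setup */
MemoryContext stmt_cxt =
    AllocSetContextCreate(CurrentMemoryContext,
                          "PL statement context",
                          ALLOCSET_DEFAULT_MINSIZE,
                          ALLOCSET_DEFAULT_INITSIZE,
                          ALLOCSET_DEFAULT_MAXSIZE);

/* per statement: allocate freely, then throw it all away at once */
MemoryContext oldcxt = MemoryContextSwitchTo(stmt_cxt);
/* ... palloc() whatever the statement needs ... */
MemoryContextSwitchTo(oldcxt);
MemoryContextReset(stmt_cxt);   /* frees everything palloc'd above */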

For stuff that is going to live a long time and then be explicitly
freed, I don't think there's a hard-and-fast rule about which to use.
If you are building a complex data structure (parsetree, say) then it's
going to be easier to build it in a memory context and then just free
the context rather than tearing down the data structure piece-by-piece.
But when you are talking about a single object, there's not a heck of a
lot of difference between malloc() and palloc in TopMemoryContext.

I'd lean towards using the palloc routines anyway, for consistency of
coding style, but that's a judgment call not a must-do thing.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Tom Lane
Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
 I wonder if there is a way to store a parsed/rewritten/planned query in 
 a table so that it can be loaded again.

The original version of the PREPARE patch used a shared-across-backends
cache for PREPAREd statements.  We rejected that for a number of
reasons, one being the increased difficulty of keeping such a cache up
to date.  I think actually storing the plans on disk would have all the
same problems, but worse.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread Nicolae Mihalache
Hello!

I'm trying to move a database and an application from Linux to IRIX and I
have some problems getting postgresql to work on IRIX.
My problem is that postgres crashes; it is killed by the kernel because of
this (from /var/log/SYSLOG):
unix: |$(0x6db)ALERT: Process [postgres] pid 105818 killed: not enough
memory to lock stack
This happens when running the regression tests at the trigger test, and also
when I try to import my db schema that I've dumped with pg_dump on the
Linux machine.
Unfortunately I do not have much experience with IRIX, so I don't know if
it is postgres' fault or a bad configuration. What I've tried to do is
to increase the maxkmem kernel parameter from 2000 to 2 but it
makes no difference. The machine has 256MB RAM, and when running, postgres
seems to take no more than 15MB.

Some system information:
uname -a: IRIX cassini 6.5 04101931 IP32 mips
cc -version: MIPSpro Compilers: Version 7.2.1
src/templates/irix5 file:
CC=cc
CFLAGS='-O2'
LDFLAGS='-O2 -lcurses -n32'

configure command:
./configure --prefix=/disk1/users/robust/famous/local --with-tcl 
--with-includes=/disk1/users/robust/famous/local/include 
--with-libraries=/disk1/users/robust/famous/local/lib --disable-locale 
--without-java --without-CXX


Thank you for any idea that can help,
Nicolae Mihalache


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


Re: [HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread Luis Alberto Amigo Navarro
Could you send the output from:
limit
and
systune | grep shm
Also, what kind of machine is it?
Regards



- Original Message - 
From: Nicolae Mihalache [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, October 23, 2002 5:57 PM
Subject: crashes with postgresql 7.2.1 on IRIX 6.5


 Hello!
 
 I'm trying to move a database and an application from Linux to IRIX and I
 have some problems getting postgresql to work on IRIX.
 My problem is that postgres crashes; it is killed by the kernel because of
 this (from /var/log/SYSLOG):
 unix: |$(0x6db)ALERT: Process [postgres] pid 105818 killed: not enough
 memory to lock stack
 This happens when running the regression tests at the trigger test, and also
 when I try to import my db schema that I've dumped with pg_dump on the
 Linux machine.
 Unfortunately I do not have much experience with IRIX, so I don't know if
 it is postgres' fault or a bad configuration. What I've tried to do is
 to increase the maxkmem kernel parameter from 2000 to 2 but it
 makes no difference. The machine has 256MB RAM, and when running, postgres
 seems to take no more than 15MB.
 
 Some system information:
 uname -a: IRIX cassini 6.5 04101931 IP32 mips
 cc -version: MIPSpro Compilers: Version 7.2.1
 src/templates/irix5 file:
 CC=cc
 CFLAGS='-O2'
 LDFLAGS='-O2 -lcurses -n32'
 
 configure command:
 ./configure --prefix=/disk1/users/robust/famous/local --with-tcl 
 --with-includes=/disk1/users/robust/famous/local/include 
 --with-libraries=/disk1/users/robust/famous/local/lib --disable-locale 
 --without-java --without-CXX
 
 
 Thank you for any idea that can help,
 Nicolae Mihalache
 
 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread Nicolae Mihalache
Luis Alberto Amigo Navarro wrote:

could send the output from:
limit
and
systune | grep shm
also, what kind of machine is
regards


robust@cassini limit
cputime unlimited
filesizeunlimited
datasize2097152 kbytes
stacksize   524288 kbytes
coredumpsizeunlimited
memoryuse   524288 kbytes
vmemoryuse  2097152 kbytes
descriptors 2500
threads 1024

robust@cassini systune | grep shm
 group: shm (statically changeable)
sshmseg = 2000 (0x7d0)
shmmni = 1064 (0x428)
shmmin = 1 (0x1)
shmmax = 214748365 (0xccccccd)

robust@cassini hinv
CPU: MIPS R10000 Processor Chip Revision: 2.7
FPU: MIPS R10010 Floating Point Chip Revision: 0.0
1 195 MHZ IP32 Processor
Main memory size: 256 Mbytes
Secondary unified instruction/data cache size: 1 Mbyte on Processor 0
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
FLASH PROM version 4.17
Integral SCSI controller 0: Version ADAPTEC 7880
  Disk drive: unit 2 on SCSI controller 0
  CDROM: unit 4 on SCSI controller 0
Integral SCSI controller 1: Version ADAPTEC 7880
  Disk drive: unit 5 on SCSI controller 1
On-board serial ports: tty1
On-board serial ports: tty2
On-board EPP/ECP parallel port
CRM graphics installed
Integral Ethernet: ec0, version 1
Iris Audio Processor: version A3 revision 0
Video: MVP unit 0 version 1.4
AV: AV2 Card version 0, Camera not connected.
Vice: TRE
IEEE 1394 High performance serial bus controller 0: Type: OHCI, Version 
0x104C-1 0


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Bruno Wolff III
On Wed, Oct 23, 2002 at 18:04:01 +0200,
  Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
 
 An example:
 I have a join across 10 tables  + 2 subselects across 4 tables
 on the machine I use for testing:
planner: 12 seconds
executor: 1 second
 
 The application will stay the same forever.
 It could be 10 times faster if there was a way to load the execution plan
 into the backend.

One option you have is to explicitly give the join order. You can look at
explain to see what order the joins are done in and then rewrite the sql
to force them to be done in that order. This should keep things simple
for the planner.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
   Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
 I have a join across 10 tables  + 2 subselects across 4 tables
 on the machine I use for testing:
 planner: 12 seconds
 executor: 1 second

 One option you have is to explicitly give the join order.

Yes, this is exactly the sort of situation where forcing the join order
is a big performance win.  See
http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/explicit-joins.html

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread Luis Alberto Amigo Navarro
System configuration looks fine to me; there is no upper limit. The only
configuration problem I can see is that the stacksize limit is set to 512MB
when you only have 256MB of RAM. Try limit stacksize 65536 (64MB), a more
sensible size; if it does not help I can't be of any more help.
If it does help, you may need to change the hard limit (this is a soft limit)
with systune.
Regards



- Original Message -
From: Nicolae Mihalache [EMAIL PROTECTED]
To: Luis Alberto Amigo Navarro [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, October 23, 2002 6:09 PM
Subject: Re: crashes with postgresql 7.2.1 on IRIX 6.5


 Luis Alberto Amigo Navarro wrote:
  could send the output from:
  limit
  and
  systune | grep shm
  also, what kind of machine is
  regards
 
 robust@cassini limit
 cputime unlimited
 filesizeunlimited
 datasize2097152 kbytes
 stacksize   524288 kbytes
 coredumpsizeunlimited
 memoryuse   524288 kbytes
 vmemoryuse  2097152 kbytes
 descriptors 2500
 threads 1024

 robust@cassini systune |grep shm
   group: shm (statically changeable)
  sshmseg = 2000 (0x7d0)
  shmmni = 1064 (0x428)
  shmmin = 1 (0x1)
 shmmax = 214748365 (0xccccccd)

 robust@cassini hinv
 CPU: MIPS R10000 Processor Chip Revision: 2.7
 FPU: MIPS R10010 Floating Point Chip Revision: 0.0
 1 195 MHZ IP32 Processor
 Main memory size: 256 Mbytes
 Secondary unified instruction/data cache size: 1 Mbyte on Processor 0
 Instruction cache size: 32 Kbytes
 Data cache size: 32 Kbytes
 FLASH PROM version 4.17
 Integral SCSI controller 0: Version ADAPTEC 7880
Disk drive: unit 2 on SCSI controller 0
CDROM: unit 4 on SCSI controller 0
 Integral SCSI controller 1: Version ADAPTEC 7880
Disk drive: unit 5 on SCSI controller 1
 On-board serial ports: tty1
 On-board serial ports: tty2
 On-board EPP/ECP parallel port
 CRM graphics installed
 Integral Ethernet: ec0, version 1
 Iris Audio Processor: version A3 revision 0
 Video: MVP unit 0 version 1.4
 AV: AV2 Card version 0, Camera not connected.
 Vice: TRE
 IEEE 1394 High performance serial bus controller 0: Type: OHCI, Version
 0x104C-1 0





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread Nicolae Mihalache
Luis Alberto Amigo Navarro wrote:

System configuration looks fine to me; there is no upper limit. The only
configuration problem I can see is that the stacksize limit is set to 512MB
when you only have 256MB of RAM. Try limit stacksize 65536 (64MB), a more
sensible size; if it does not help I can't be of any more help.


No it does not help :(
Is there any place where I can download binaries for postgresql that 
include support for pltcl?

Thanks,
mache


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
This is exactly what we do in case of complex stuff. I know that it can
help to reduce the problem for the planner.
However: if you have explicit joins across 10 tables, the SQL statement
is not that readable any more, and it is still slower than a prepared
execution plan.

I guess it is worth thinking about prepared plans somewhere on disk.
Is there a way to transform ASCII -> plan?

   Hans



Bruno Wolff III wrote:

On Wed, Oct 23, 2002 at 18:04:01 +0200,
 Hans-Jürgen Schönig [EMAIL PROTECTED] wrote:
 

An example:
I have a join across 10 tables  + 2 subselects across 4 tables
on the machine I use for testing:
  planner: 12 seconds
  executor: 1 second

The application will stay the same forever.
It could be 10 times faster if there was a way to load the execution plan
into the backend.
   


--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
The idea is not to have it across multiple backends and keep it in
sync with the tables in the database. This is not the point.
My problem is that I have seen many performance-critical applications
sending just a few complex queries to the server. The problem is: if you
have many queries where the ratio of planner time to executor time is
very high (e.g. complex joins with just one value as the result).
These applications stay the same for a long time (maybe even years), and
so there is no need to worry about new tables and so forth; maybe there
is not even a need to worry about new data. In these cases we could
speed up the database significantly just by avoiding the use of the planner:

An example:
I have a join across 10 tables  + 2 subselects across 4 tables
on the machine I use for testing:
   planner: 12 seconds
   executor: 1 second

The application will stay the same forever.
It could be 10 times faster if there was a way to load the execution plan
into the backend.
There is no way to use a persistent connection (many clients on 
different machines, dynamic IPs, etc. ...)
There is no way to have an invalid execution plan because there are no 
changes (new tables etc.) in the database.

Also: If people execute a prepared query and it fails they will know why 
- queries will fail if people drop a table even if these queries are not 
prepared.
A new feature like the one we are discussing might be used rarely but if 
people use it they will benefit A LOT.

If we had a simple ASCII interface to load the stuff into the planner,
people could save MANY cycles.
When talking about tuning it is nice to gain 10% or even 20%, but in many
cases that does not solve a problem; if a problem can be reduced by 90%,
it is a REAL gain.
Gaining 10% can be done by tweaking the database a little. Gaining
1000% cannot, so it might be worth thinking about it even if the
feature is only used by 20% of the users out there.  20% of all
postgres users is most likely more than 15,000 people.

Again: it is not supposed to be an everyday solution. It is a solution
for applications staying the same for a very long time.

   Hans


Tom Lane wrote:

Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
 

I wonder if there is a way to store a parsed/rewritten/planned query in 
a table so that it can be loaded again.
   


The original version of the PREPARE patch used a shared-across-backends
cache for PREPAREd statements.  We rejected that for a number of
reasons, one being the increased difficulty of keeping such a cache up
to date.  I think actually storing the plans on disk would have all the
same problems, but worse.

			regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 



--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] 'epoch'::timestamp and Daylight Savings

2002-10-23 Thread Hosen, John
Thomas,

Thanks for replying so quickly.

Upon some further investigation, the conversion of the time to BST is OK on
data inserted before the upgrade (the display was just a red herring); the
problem is just with new data inserted into the table with a default value for
the timestamp fields.

The output from \d on the table is below:

capitaras_live=# \d e_application
                          Table "e_application"
      Column       |           Type           |          Modifiers
-------------------+--------------------------+----------------------------
 e_app_id_num  | integer  | not null default
nextval('e_application_e_app_id_num_seq'::text)
 e_app_cand_id_num | integer  | 
 e_app_job_id  | character varying(30)| not null
 e_app_cid_num | bigint   | 
 e_app_cts | character varying(30)| 
 e_app_mts | character varying(30)| 
 e_app_print_date  | timestamp with time zone | default
timestamp('epoch'::text)
 e_app_xfer_date   | timestamp with time zone | default
timestamp('epoch'::text)
 e_app_completed   | character varying(6) | 
 e_app_xml | text | 
 e_app_modified| timestamp with time zone | 
 e_app_created | timestamp with time zone | 
 e_app_wd_feedback | text | 
 e_app_submitted   | timestamp with time zone | 
Indexes: e_app_completed_idx,
 e_app_job_id_and_completed_idx
Primary key: e_application_pkey
Triggers: RI_ConstraintTrigger_27961

With a bit more playing, it definitely looks like something wrong with the
way the default value has been set following the export & import for the
upgrade:-

Create table arnold (
a   int8,
b   timestamp default 'epoch'::timestamp,
c   timestamp default timestamp('epoch'::text)
);

capitaras_test=# \d arnold
                 Table "arnold"
 Column |           Type           |   Modifiers
--------+--------------------------+----------------------------------------------
 a  | bigint   | 
 b  | timestamp with time zone | default '1970-01-01
01:00:00+01'::timestamptz
 c  | timestamp with time zone | default timestamp('epoch'::text) 

capitaras_test=# insert into arnold (a) values (1);
INSERT 182907 1
capitaras_test=# insert into arnold (a) values (2);
INSERT 182907 1

capitaras_test=# set timezone to 'Europe/London';
SET VARIABLE
capitaras_test=# select * from arnold;
 a |           b            |           c
---+------------------------+------------------------
 1 | 1970-01-01 01:00:00+01 | 1970-01-01 00:00:00+01
 2 | 1970-01-01 01:00:00+01 | 1970-01-01 00:00:00+01
(2 rows)

capitaras_test=# set timezone to 'GMT';
SET VARIABLE
capitaras_test=# select * from arnold
capitaras_test-# ;
 a |           b            |           c
---+------------------------+------------------------
 1 | 1970-01-01 00:00:00+00 | 1969-12-31 23:00:00+00
 2 | 1970-01-01 00:00:00+00 | 1969-12-31 23:00:00+00
(2 rows)


I think that the best way forward for us (short of re-writing the backend to
use NULL) will be to just alter the default value to the one in column b in
the test table above.

With regard to the disclaimer, don't you just love lawyers ;-)

Once again, thanks for replying so quickly.

Regards

John.



-Original Message-
From: Thomas Lockhart [mailto:lockhart@fourpalms.org]
Sent: 23 October 2002 16:06
To: Hosen, John
Cc: '[EMAIL PROTECTED]'
Subject: Re: 'epoch'::timestamp and Daylight Savings


 We have just upgraded from 7.1.3 to 7.2.3 over the weekend, and have 
 just noticed something weird with regard to 'epoch'::timestamp. In
 7.1.3, 'epoch'::timestamp always returned the value 1970-01-01 
 00:00:00+00,...

I would expect 'epoch'::timestamp to always act as though the value were 
the same as Unix time zero. But it did not explicitly return that value:

lockhart=# select version();
--------------------------------------------------------------
  PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC 2.96

lockhart=# select 'epoch'::timestamp;
-----------
 epoch

lockhart=# select 'epoch'::timestamp + '0 sec'::interval;
------------------------
 1969-12-31 16:00:00-08


 following the upgrade we now get:-
   timestamptz
 ------------------------
  1970-01-01 01:00:00+01

Which is consistent with the previous result in absolute time.

You will find that the most recent versions of PostgreSQL convert 
'epoch' on input, and that with- and without time zone data types are 
available:

lockhart=# select cast('epoch' as timestamp with time zone);
      timestamptz
------------------------
 1969-12-31 16:00:00-08

lockhart=# select cast('epoch' as timestamp without time zone);
      timestamp
---------------------
 1970-01-01 00:00:00


 Also, any fields set to 'epoch'::timestamp before the upgrade now return:-
  

Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Greg Copeland
Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy?  I believe this would result in the desired
performance boost and behavior.

Now, the next obvious question...anyone know of any proxy apps available
for postgresql?

Regards,

Greg


On Wed, 2002-10-23 at 11:04, Hans-Jürgen Schönig wrote:
 The idea is not to have it across multiple backends and keep it in
 sync with the tables in the database. This is not the point.
 My problem is that I have seen many performance-critical applications
 sending just a few complex queries to the server. The problem is: if you
 have many queries where the ratio of planner time to executor time is
 very high (e.g. complex joins with just one value as the result).
 These applications stay the same for a long time (maybe even years), and
 so there is no need to worry about new tables and so forth; maybe there
 is not even a need to worry about new data. In these cases we could
 speed up the database significantly just by avoiding the use of the planner:
 
 An example:
 I have a join across 10 tables  + 2 subselects across 4 tables
 on the machine I use for testing:
 planner: 12 seconds
 executor: 1 second
 
 The application will stay the same forever.
 It could be 10 times faster if there was a way to load the execution plan
 into the backend.
 There is no way to use a persistent connection (many clients on 
 different machines, dynamic IPs, etc. ...)
 There is no way to have an invalid execution plan because there are no 
 changes (new tables etc.) in the database.
 
 Also: If people execute a prepared query and it fails they will know why 
 - queries will fail if people drop a table even if these queries are not 
 prepared.
 A new feature like the one we are discussing might be used rarely but if 
 people use it they will benefit A LOT.
 
 If we had a simple ASCII interface to load the stuff into the planner,
 people could save MANY cycles.
 When talking about tuning it is nice to gain 10% or even 20%, but in many
 cases that does not solve a problem; if a problem can be reduced by 90%,
 it is a REAL gain.
 Gaining 10% can be done by tweaking the database a little. Gaining
 1000% cannot, so it might be worth thinking about it even if the
 feature is only used by 20% of the users out there.  20% of all
 postgres users is most likely more than 15,000 people.

 Again: it is not supposed to be an everyday solution. It is a solution
 for applications staying the same for a very long time.
 
 Hans
 
 
 Tom Lane wrote:
 
 Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
   
 
 I wonder if there is a way to store a parsed/rewritten/planned query in 
 a table so that it can be loaded again.
 
 
 
 The original version of the PREPARE patch used a shared-across-backends
 cache for PREPAREd statements.  We rejected that for a number of
 reasons, one being the increased difficulty of keeping such a cache up
 to date.  I think actually storing the plans on disk would have all the
 same problems, but worse.
 
  regards, tom lane
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
   
 
 
 
 -- 
 *Cybertec Geschwinde u Schoenig*
 Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
 Tel: +43/1/913 68 09; +43/664/233 90 75
 www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at
 
 
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Hans-Jürgen Schönig
Greg Copeland wrote:


Could you use some form of connection proxy where the proxy is actually
keeping persistent connections but your application is making transient
connections to the proxy?  I believe this would result in the desired
performance boost and behavior.

Now, the next obvious question...anyone know of any proxy apps available
for postgresql?

Regards,

	Greg

 


There is one piece of load-balancing software available, based on the ACE
library.
Just have a look at:

http://freshmeat.net/projects/dbbalancer/

I haven't tested it up to now.

I am now looking for a workaround; I am sure that there are many
workarounds for this issue (explicit joins, persistent connections, etc.
...).
I thought it might be useful to have something like a data type (or
maybe a binary field) used to store execution plans.

People could use this feature as some sort of server-side function or
so ...
It can be seen as some sort of optimized function in the backend which
can be loaded/executed more efficiently.

Maybe others would like to see that feature as well.

   Hans

--
*Cybertec Geschwinde u Schoenig*
Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria
Tel: +43/1/913 68 09; +43/664/233 90 75
www.postgresql.at, cluster.postgresql.at, www.cybertec.at, kernel.cybertec.at



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] 'epoch'::timestamp and Daylight Savings

2002-10-23 Thread Tom Lane
Hosen, John [EMAIL PROTECTED] writes:

  e_app_print_date  | timestamp with time zone | default
 timestamp('epoch'::text)

Yeah, there's your problem.  You are casting 'epoch' to type timestamp
without time zone, and thence to timestamp with time zone.  The first
step gives midnight and the second assumes that that means midnight
local time.

For example, in US EST zone I get:

regression=# select timestamp('epoch'::text);
      timestamp
---------------------
 1970-01-01 00:00:00
(1 row)

regression=# select (timestamp('epoch'::text))::timestamptz;
      timestamptz
------------------------
 1970-01-01 00:00:00-05
(1 row)

whereas what is wanted is

regression=# select timestamptz('epoch'::text);
      timestamptz
------------------------
 1969-12-31 19:00:00-05
(1 row)

So you can fix the problem just by setting the default to be
'epoch'::timestamptz.

The problem is probably related to the renaming we've been carrying out
to get closer to the SQL spec: timestamp now means timestamp without
time zone, which is not what it meant in older Postgres releases.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Zeugswetter Andreas SB SD

 The idea is not to have it across multiple backends and keep it in
 sync with the tables in the database. This is not the point.
 My problem is that I have seen many performance-critical applications
 sending just a few complex queries to the server. The problem is: if you
 have many queries where the ratio of planner time to executor time is
 very high (e.g. complex joins with just one value as the result).

The standard approach to such a scenario would imho be to write stored procedures
for the complex queries (e.g. plpgsql) and use that from the client.
Maybe even eliminate a few ping pongs between client and server.

Andreas

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] PREPARE / EXECUTE

2002-10-23 Thread Neil Conway
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
 The standard approach to such a scenario would imho be to write
 stored procedures for the complex queries (e.g. plpgsql) and use
 that from the client.  Maybe even eliminate a few ping pongs between
 client and server.

Since PL/PgSQL cached query plans are flushed when the backend exits,
how would this help?

Regarding the original suggestion of storing prepared plans on disk, I
agree with Tom -- it's basically the same idea as storing plans in
shared memory, which we previously considered (and Karel implemented),
but ultimately decided to remove. IMHO, the utility of this feature
doesn't justify the problems that would come with implementing it (see
the archives for the original implementation discussions).

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Peter Eisentraut
Bruce Momjian writes:

 I think you are right that we have to not use off_t and use long if we
 can't find a proper 64-bit seek function, but what are the failure modes
 of doing this?  Exactly what happens for larger files?

First we need to decide what we want to happen and after that think about
how to implement it.  Given sizeof(off_t) > sizeof(long) and no fseeko(),
we have the following options:

1. Disable access to large files.

2. Seek in some other way.

What's it gonna be?

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Peter Eisentraut wrote:
 Bruce Momjian writes:
 
  I think you are right that we have to not use off_t and use long if we
  can't find a proper 64-bit seek function, but what are the failure modes
  of doing this?  Exactly what happens for larger files?
 
 First we need to decide what we want to happen and after that think about
  how to implement it.  Given sizeof(off_t) > sizeof(long) and no fseeko(),
 we have the following options:
 
 1. Disable access to large files.
 
 2. Seek in some other way.
 
 What's it gonna be?

OK, well BSD/OS now works, but I wonder if there are any other quad
off_t OS's out there without fseeko.

How would we disable access to large files?  Do we fstat the file and
see if it is too large?   I suppose we are looking for cases where the
file system has large files, but fseeko doesn't allow us to access them.
Should we leave this issue alone and wait to find another OS with this
problem, and we can then rejigger fseeko.c to handle that OS too?

Looking at the pg_dump code, it seems the fseeks are optional in there
anyway because it already has code to read the file sequentially rather
than use fseek, and the TOC case in pg_backup_custom.c says that is
optional too.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 First we need to decide what we want to happen and after that think about
 how to implement it.  Given sizeof(off_t) > sizeof(long) and no fseeko(),
 we have the following options:

It seems obvious to me that there are no platforms that offer
sizeof(off_t) > sizeof(long) but have no API for doing seeks with off_t.
That would be just plain silly.  IMHO it's acceptable for us to fail at
configure time if we can't figure out how to seek.

The question is *which* seek APIs we need to support.  Are there any
besides fseeko() and fgetpos()?
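
For illustration, a minimal sketch contrasting the two families (a hedged
example, not code from the tree; it assumes a platform that has all four
calls declared):

    #include <stdio.h>
    #include <sys/types.h>

    /* Remember a file position and return to it, once per API family. */
    static int
    remember_and_return(FILE *fh)
    {
        off_t   where;
        fpos_t  pos;

        /* off_t flavour: the position is an arithmetic value */
        where = ftello(fh);
        if (where == (off_t) -1 || fseeko(fh, where, SEEK_SET) != 0)
            return -1;

        /* fpos_t flavour: the position is opaque; only save/restore work */
        if (fgetpos(fh, &pos) != 0 || fsetpos(fh, &pos) != 0)
            return -1;

        return 0;
    }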

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 How would we disable access to large files?

I think configure should fail if it can't find a way to seek.
Workaround for anyone in that situation is configure --disable-largefile.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] crashes with postgresql 7.2.1 on IRIX 6.5

2002-10-23 Thread mache
Hello again to everybody!

I've just downloaded and installed a binary version of postgresql 7.1.1 
from ftp.postgresql.org and I have the same bad results: crashes during 
regression tests. Can someone please confirm that the binary version 
posted there is able to pass all the regression tests? This way I will 
know that the problem is somewhere in my IRIX setup.

Thank you,
mache

On Wed, 23 Oct 2002, Nicolae Mihalache wrote:

 Hello!
 
 I'm trying to move a database and an application from linux to IRIX and I 
 have some problems getting postgresql to work on IRIX.
 My problem is that postgres crashes: it is killed by the kernel because of 
 (this is from /var/log/SYSLOG):
 unix: |$(0x6db)ALERT: Process [postgres] pid 105818 killed: not enough 
 memory to lock stack
 This happens when running the regression tests at the trigger and also 
 when I try to import my db schema that I've dump with pg_dump on the 
 linux machine.
 Unfortunately I don't have much experience with IRIX, so I don't know if 
 it is postgres' fault or a bad configuration. What I've tried to do is 
 to increase the maxkmem kernel parameter from 2000 to 2 but it 
 makes no difference. The machine has 256MB RAM, and when running, postgres 
 seems to take no more than 15MB.
 
 Some system information:
 uname -a: IRIX cassini 6.5 04101931 IP32 mips
 cc -version: MIPSpro Compilers: Version 7.2.1
 src/templates/irix5 file:
 CC=cc
 CFLAGS='-O2'
 LDFLAGS='-O2 -lcurses -n32'
 
 configure command:
 ./configure --prefix=/disk1/users/robust/famous/local --with-tcl 
 --with-includes=/disk1/users/robust/famous/local/include 
 --with-libraries=/disk1/users/robust/famous/local/lib --disable-locale 
 --without-java --without-CXX
 
 
 Thank you for any idea that can help,
 Nicolae Mihalache
 
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly
 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  First we need to decide what we want to happen and after that think about
  how to implement it.  Given sizeof(off_t) > sizeof(long) and no fseeko(),
  we have the following options:
 
 It seems obvious to me that there are no platforms that offer
 sizeof(off_t) > sizeof(long) but have no API for doing seeks with off_t.
 That would be just plain silly.  IMHO it's acceptable for us to fail at
 configure time if we can't figure out how to seek.

I would certainly be happy failing at configure time, so we know at the
start what is broken, rather than failures during restore.

 The question is *which* seek APIs we need to support.  Are there any
 besides fseeko() and fgetpos()?

What I have added is BSD/OS specific because only on BSD/OS do I know
fpos_t and off_t are the same type.  If we come up with other platforms,
we will have to deal with it then.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] One 7.3 item left

2002-10-23 Thread Bruce Momjian
Peter Eisentraut wrote:
 Bruce Momjian writes:
 
  OK, we are down to one open item, related to pg_dumping on 64-bit off_t.
  We had discussion today on this so it should be completed shortly.
 
 I hate to spoil the fun, but we have at least the Linux + Perl
 5.8.1 + _GNU_SOURCE and the AIX + Large File + Flex problems to fix.

I was hoping those had gone away.  :-(

Open items updated.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

   P O S T G R E S Q L

  7 . 3  O P E N  I T E M S


Current at ftp://momjian.postgresql.org/pub/postgresql/open_items.

Required Changes
----------------
Schema handling - ready? interfaces? client apps?
Drop column handling - ready for all clients, apps?
Add configure check for sizeof(off_t) > sizeof(long) and no fseeko()
Fix Linux + Perl 5.8.1 + _GNU_SOURCE problem
Fix AIX + Large File + Flex problem

Optional Changes
----------------


Documentation Changes
---------------------



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Giles Lean

Bruce Momjian [EMAIL PROTECTED] writes:

 OK, well BSD/OS now works, but I wonder if there are any other quad
 off_t OS's out there without fseeko.

NetBSD prior to 1.6, released September 14, 2002. (Source: CVS logs.)

OpenBSD prior to 2.7, released June 15, 2000.  (Source: release notes.)

FreeBSD has had fseeko() for some time, but I'm not sure which release
introduced it -- perhaps 3.2.0, released May, 1999. (Source: CVS logs.)

Regards,

Giles





---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] Standalone backend doesn't read postgresql.conf

2002-10-23 Thread Bruce Momjian
Does anyone remember why a standalone backend doesn't read
postgresql.conf?

For example, I just set 'autocommit' to off in postgresql.conf, and
after starting the standalone backend, it doesn't have autocommit off.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 05:50 PM 23/10/2002 -0400, Bruce Momjian wrote:

Looking at the pg_dump code, it seems the fseeks are optional in there
anyway because it already has code to read the file sequentially rather


But there are features that are not available if it can't seek: e.g. it will 
not restore in a different order to that in which it was written; it will 
not dump data offsets in the TOC, so dump files cannot be restored in 
alternate orders; and restore times will be large for a single table (it 
potentially has to read the entire file).



Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 05:50 PM 23/10/2002 -0400, Bruce Momjian wrote:
 Looking at the pg_dump code, it seems the fseeks are optional in there
 anyway because it already has code to read the file sequentially rather
 
 But there are features that are not available if it can't seek: e.g. it will 
 not restore in a different order to that in which it was written; it will 
 not dump data offsets in the TOC, so dump files cannot be restored in 
 alternate orders; and restore times will be large for a single table (it 
 potentially has to read the entire file).

OK, that helps.  We just got a list of 2 other OS's without fseeko and
with large file support.  Any NetBSD before August 2002 has that
problem.  We are going to need to either get fseeko workarounds for
those, or disable those features in a meaningful way.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Giles Lean wrote:
 
 Bruce Momjian [EMAIL PROTECTED] writes:
 
  OK, well BSD/OS now works, but I wonder if there are any other quad
  off_t OS's out there without fseeko.
 
 NetBSD prior to 1.6, released September 14, 2002. (Source: CVS logs.)

OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 10:42 AM 23/10/2002 -0400, Bruce Momjian wrote:

What I am concerned about are cases that fail at runtime, specifically
during a restore of a 2gig file.


Please give an example that would still apply assuming we get a working 
seek/tell pair that works with whatever we use as an offset?

If you are concerned about reading a dump file with 8 byte offsets on a 
machine with 4 byte off_t, that case and its permutations are already covered.



Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 10:42 AM 23/10/2002 -0400, Bruce Momjian wrote:
 What I am concerned about are cases that fail at runtime, specifically
 during a restore of a 2gig file.
 
 Please give an example that would still apply assuming we get a working 
 seek/tell pair that works with whatever we use as an offset?

If we get this, everything is fine.  I have done that for BSD/OS today. 
I may need to do the same for NetBSD/OpenBSD too.

 If you are concerned about reading a dump file with 8 byte offsets on a 
 machine with 4 byte off_t, that case and its permutations are already covered.

No, I know that is covered because it will report a proper error message
on the restore on the 4-byte off_t machine.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 11:50 PM 23/10/2002 +0200, Peter Eisentraut wrote:


1. Disable access to large files.

2. Seek in some other way.


This gets my vote, but I would like to see a clean implementation (not huge 
quantities of ifdefs every time we call fseek); either we write our own 
fseek as Bruce seems to be suggesting, or we have a single header file that 
defines the FSEEK/FTELL/OFF_T to point to the 'right' functions, where 
'right' is defined as 'most likely to generate an integer and which makes 
use of the largest number of bytes'.

The way the code is currently written it does not matter if this is a 16 or 
3 byte value - so long as it is an integer.
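
As a hedged sketch of that single-header idea (the FSEEK/FTELL/OFF_T names
are Philip's; the header name and HAVE_FSEEKO glue are illustrative, not
taken from the actual tree):

    /* hypothetical pg_seek.h: map FSEEK/FTELL/OFF_T onto the widest
     * seek API configure found */
    #ifndef PG_SEEK_H
    #define PG_SEEK_H

    #include <stdio.h>
    #include <sys/types.h>

    #ifdef HAVE_FSEEKO
    typedef off_t OFF_T;
    #define FSEEK(fh, offset, whence)  fseeko((fh), (offset), (whence))
    #define FTELL(fh)                  ftello(fh)
    #else
    typedef long OFF_T;
    #define FSEEK(fh, offset, whence)  fseek((fh), (offset), (whence))
    #define FTELL(fh)                  ftell(fh)
    #endif

    #endif   /* PG_SEEK_H */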




Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 11:50 PM 23/10/2002 +0200, Peter Eisentraut wrote:
 
 1. Disable access to large files.
 
 2. Seek in some other way.
 
 This gets my vote, but I would like to see a clean implementation (not huge 
 quantities of ifdefs every time we call fseek); either we write our own 
 fseek as Bruce seems to be suggesting, or we have a single header file that 
 defines the FSEEK/FTELL/OFF_T to point to the 'right' functions, where 
 'right' is defined as 'most likely to generate an integer and which makes 
 use of the largest number of bytes'.

We have to write another function because fsetpos doesn't do SEEK_CUR so
you have to implement it with more complex code.  It isn't a drop-in
replacement.

 The way the code is currently written it does not matter if this is a 16 or 
 3 byte value - so long as it is an integer.

Right. What we are assuming now is that off_t can be seeked using
whatever we defined for fseeko, which is incorrect on one OS, and now I
hear on more than one.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 09:36 PM 23/10/2002 -0400, Bruce Momjian wrote:

We are going to need to either get fseeko workarounds for
those, or disable those features in a meaningful way.


If we have not got a 64 bit seek function of any kind, then use a 32 
bit seek - the features don't need to be disabled. AFAICT, this is a 
non-issue: no 64 bit seek means no large files.

I'm not sure we should even worry about it, but if you are genuinely 
concerned that we have no 64 bit seek call while files can exceed 4GB, 
and you really want to disable seeking, just modify the code that sets 
'hasSeek' - don't screw around with every seek call. And only clear 
hasSeek if the file is > 4GB.
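
A sketch of that approach (hedged: the stat()-based size test and the
function name are illustrative; only the 'hasSeek' flag is from the
actual pg_dump code):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <limits.h>

    /* With only a 32-bit fseek(), offsets are trustworthy only while
     * every position in the file fits in a long. */
    static int
    offsets_fit_in_long(const char *path)
    {
        struct stat st;

        if (stat(path, &st) != 0)
            return 0;           /* can't tell; play it safe */
        return st.st_size <= LONG_MAX;
    }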







Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 09:41 PM 23/10/2002 -0400, Bruce Momjian wrote:

If we get this, everything is fine.  I have done that for BSD/OS today.
I may need to do the same for NetBSD/OpenBSD too.


What did you do to achieve this?



Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 09:41 PM 23/10/2002 -0400, Bruce Momjian wrote:
 If we get this, everything is fine.  I have done that for BSD/OS today.
 I may need to do the same for NetBSD/OpenBSD too.
 
 What did you do to achieve this?

See src/port/fseeko.c in current CVS, with some configure.in glue.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 09:45 PM 23/10/2002 -0400, Bruce Momjian wrote:

We have to write another function because fsetpos doesn't do SEEK_CUR so
you have to implement it with more complex code.  It isn't a drop in
place thing.


The only code that uses SEEK_CUR is the code to check if seek is available 
- I am very happy to change that to SEEK_SET - I can't even recall why I 
used SEEK_CUR. The code that does the real seeks uses SEEK_SET.




Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 11:55 AM 24/10/2002 +1000, Philip Warner wrote:


The only code that uses SEEK_CUR is the code to check if seek is available 
- I am very happy to change that to SEEK_SET - I can't even recall why I 
used SEEK_CUR. The code that does the real seeks uses SEEK_SET.

Come to think of it:

ctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);

should be replaced by:

#ifdef HAS_FSEEK[O]
    ctx->hasSeek = TRUE;
#else
    ctx->hasSeek = FALSE;
#endif

Since we're now checking for it in configure, we should remove the checks 
from the pg_dump code.





Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 09:45 PM 23/10/2002 -0400, Bruce Momjian wrote:
 We have to write another function because fsetpos doesn't do SEEK_CUR so
 you have to implement it with more complex code.  It isn't a drop in
 place thing.
 
 The only code that uses SEEK_CUR is the code to check if seek is available 
 - I am very happy to change that to SEEK_SET - I can't even recall why I 
 used SEEK_CUR. The code that does the real seeks uses SEEK_SET.

There are other problems.  fgetpos() expects a pointer to an fpos_t,
while ftello just returns off_t, so you need a local variable in the
function to pass to fgetpos() and then return that from the function.

It is much cleaner to just duplicate the entire API so you don't have
any limitations or failure cases.
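
For reference, a sketch of such an fseeko()/ftello() emulation over
fgetpos()/fsetpos(), in the spirit of src/port/fseeko.c (hedged: it assumes
fpos_t is an arithmetic type with the same representation as off_t, as on
BSD/OS and older NetBSD per this thread):

    #include <stdio.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int
    fseeko(FILE *stream, off_t offset, int whence)
    {
        fpos_t      floc;
        struct stat filestat;

        switch (whence)
        {
            case SEEK_CUR:
                /* fsetpos() has no relative mode: fetch, adjust, store */
                if (fgetpos(stream, &floc) != 0)
                    return -1;
                floc += offset;
                return fsetpos(stream, &floc) != 0 ? -1 : 0;
            case SEEK_SET:
                floc = offset;
                return fsetpos(stream, &floc) != 0 ? -1 : 0;
            case SEEK_END:
                fflush(stream);     /* sync stdio before asking the kernel */
                if (fstat(fileno(stream), &filestat) != 0)
                    return -1;
                floc = filestat.st_size + offset;
                return fsetpos(stream, &floc) != 0 ? -1 : 0;
            default:
                errno = EINVAL;
                return -1;
        }
    }

    off_t
    ftello(FILE *stream)
    {
        fpos_t      floc;       /* same bits as off_t, by assumption */

        if (fgetpos(stream, &floc) != 0)
            return (off_t) -1;
        return (off_t) floc;
    }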

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian

Well, that certainly changes the functionality of the code.  I thought
that fseeko test was done so that things that couldn't be seeked on were
detected.  Not sure what isn't seekable, maybe named pipes.  I thought
it was testing that so I didn't touch that variable.

This was my original thought, that we have non-fseeko code in place. 
Can we just trigger the non-fseeko code on HAS_FSEEKO.  The code would
be something like:

    if (sizeof(long) >= sizeof(off_t))
        ctx->hasSeek = TRUE;
    else
#ifdef HAVE_FSEEKO
        ctx->hasSeek = TRUE;
#else
        ctx->hasSeek = FALSE;
#endif

---

Philip Warner wrote:
 At 11:55 AM 24/10/2002 +1000, Philip Warner wrote:
 
 The only code that uses SEEK_CUR is the code to check if seek is available 
  - I am very happy to change that to SEEK_SET - I can't even recall why I 
 used SEEK_CUR. The code that does the real seeks uses SEEK_SET.
 
 Come to think of it:
 
  ctx->hasSeek = (fseeko(AH->FH, 0, SEEK_CUR) == 0);
 
 should be replaced by:
 
 #ifdef HAS_FSEEK[O]
     ctx->hasSeek = TRUE;
 #else
     ctx->hasSeek = FALSE;
 #endif
 
 Since we're now checking for it in configure, we should remove the checks 
 from the pg_dump code.
 
 
 
 
 
 Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
 Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
 PGP key available upon request, and from pgp5.ai.mit.edu:11371
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 10:08 PM 23/10/2002 -0400, Bruce Momjian wrote:

Well, that certainly changes the functionality of the code.  I thought
that fseeko test was done so that things that couldn't be seeked on were
detected.


You are quite correct. It should read:

#ifdef HAVE_FSEEKO
    ctx->hasSeek = (fseeko(..., SEEK_SET) == 0);
#else
    ctx->hasSeek = FALSE;
#endif

pipes are the main case for which we are checking.
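
A minimal sketch of that probe (hedged; the function name is illustrative,
and it assumes fseeko() is available): a no-op seek succeeds on a regular
file but fails with ESPIPE on a pipe, so its result can drive 'hasSeek':

    #include <stdio.h>

    static int
    stream_is_seekable(FILE *fh)
    {
        /* reposition to the current offset: harmless on regular files,
         * fails (errno == ESPIPE) on pipes and other unseekable streams */
        return fseeko(fh, 0, SEEK_CUR) == 0;
    }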



Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Philip Warner
At 10:03 PM 23/10/2002 -0400, Bruce Momjian wrote:

It is much cleaner to just duplicate the entire API so you don't have
any limitations or failure cases.


We may still end up using macros in pg_dump to cope with cases where off_t 
and fseeko are not defined - if there are any. I presume we would then just 
revert to calling fseek/ftell etc.





Philip Warner | Albatross Consulting Pty. Ltd. (A.B.N. 75 008 659 498)
Tel: (+61) 0500 83 82 81 | Fax: (+61) 0500 83 82 82 | Http://www.rhyme.com.au
PGP key available upon request, and from pgp5.ai.mit.edu:11371


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian
Philip Warner wrote:
 At 10:03 PM 23/10/2002 -0400, Bruce Momjian wrote:
 It is much cleaner to just duplicate the entire API so you don't have
 any limitations or failure cases.
 
  We may still end up using macros in pg_dump to cope with cases where off_t 
  and fseeko are not defined - if there are any. I presume we would then just 
 revert to calling fseek/ftell etc.

Well, we have fseeko falling back to fseek already, so that is working
fine.  I don't think we will find any OS's without off_t.  We just need
a little smarts.  Let me see if I can work on it now.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Giles Lean

 OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?

Yes:

int
fgetpos(FILE *stream, fpos_t *pos);

int
fsetpos(FILE *stream, const fpos_t *pos);

Per comments in stdio.h fpos_t is the same format as off_t, and
off_t and fpos_t have been 64 bit since 1994.

http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h

Regards,

Giles





---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian

Looks like I have some more work to do.  Thanks.

---

Giles Lean wrote:
 
  OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?
 
 Yes:
 
 int
 fgetpos(FILE *stream, fpos_t *pos);
 
 int
 fsetpos(FILE *stream, const fpos_t *pos);
 
 Per comments in stdio.h fpos_t is the same format as off_t, and
 off_t and fpos_t have been 64 bit since 1994.
 
 http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h
 
 Regards,
 
 Giles
 
 
 
 
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] One 7.3 item left

2002-10-23 Thread Bruce Momjian
Marc G. Fournier wrote:
 On Tue, 22 Oct 2002, Tom Lane wrote:
 
  Peter Eisentraut [EMAIL PROTECTED] writes:
   Bruce Momjian writes:
   OK, we are down to one open item, related to pg_dumping on 64-bit off_t.
   We had discussion today on this so it should be completed shortly.
 
   I hate to spoil the fun, but we have at least the Linux + Perl
   5.8.1 + _GNU_SOURCE and the AIX + Large File + Flex problems to fix.
 
  We should not, however, wait longer before pushing out a beta3 release.
  Portability problems on individual platforms may hold up RC1, but we're
  overdue to put out a final beta...
 
 Was just about to ask that ... Friday sound reasonable for beta3 then?
 Bruce, can you have all your files updated by then?

Done.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] pg_dump and large files - is this a problem?

2002-10-23 Thread Bruce Momjian

OK, NetBSD added.  

Any other OS's need this?  Is it safe for me to code something that
assumes fpos_t and off_t are identical?  I can't think of a good way to
test if two data types are identical.  I don't think sizeof is enough.
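
One hedged possibility is a configure-time test program that compiles and
exits 0 only when fpos_t behaves as an arithmetic type the same width as
off_t (a sketch, not the actual configure test):

    #include <stdio.h>
    #include <sys/types.h>

    int
    main(void)
    {
        fpos_t  pos;
        off_t   off = 0;

        pos = off;              /* fails to compile on struct-typed fpos_t */
        pos += 1;               /* fails to compile on non-arithmetic fpos_t */
        off = (off_t) pos;      /* round trip must be representable */
        return (sizeof(fpos_t) == sizeof(off_t) && off == 1) ? 0 : 1;
    }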

---

Giles Lean wrote:
 
  OK, does pre-1.6 NetBSD have fgetpos/fsetpos that is off_t/quad?
 
 Yes:
 
 int
 fgetpos(FILE *stream, fpos_t *pos);
 
 int
 fsetpos(FILE *stream, const fpos_t *pos);
 
 Per comments in stdio.h fpos_t is the same format as off_t, and
 off_t and fpos_t have been 64 bit since 1994.
 
 http://cvsweb.netbsd.org/bsdweb.cgi/basesrc/include/stdio.h
 
 Regards,
 
 Giles
 
 
 
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] One 7.3 item left

2002-10-23 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Marc G. Fournier wrote:
 Was just about to ask that ... Friday sound reasonable for beta3 then?
 Bruce, can you have all your files updated by then?

 Done.

Are we going to back-merge CVS tip into the REL7_3_STABLE branch now?
What about opening CVS tip for 7.4 development?

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] One 7.3 item left

2002-10-23 Thread Marc G. Fournier
On Wed, 23 Oct 2002, Tom Lane wrote:

 Bruce Momjian [EMAIL PROTECTED] writes:
  Marc G. Fournier wrote:
  Was just about to ask that ... Friday sound reasonable for beta3 then?
  Bruce, can you have all your files updated by then?

  Done.

 Are we going to back-merge CVS tip into the REL7_3_STABLE branch now?
 What about opening CVS tip for 7.4 development?

considering the number of changes that have led up to beta3, I'd say wait
... I'm almost thinking that rc1 makes more sense to do it, since once rc1
goes out, then we're at the point of only critical changes, which means
a lot fewer commits than what we see through beta ...



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org