Re: [HACKERS] CVS tip problems

2004-05-31 Thread Tom Lane
Oliver Elphick [EMAIL PROTECTED] writes:
 1.  There are regression failures on timestamptz and horology which seem
 to have come about either on input or output of timestamps with
 fractional seconds.

Hmm ... I'm seeing slightly different misbehaviors in the
integer-datetimes and float-datetimes cases.  I think we are about to
have to dig down to the root of the off-by-one-second cases that Tom
Lockhart noticed but never really solved.  I suspect that the issue has
to do with platform-specific rounding of negative values to integer, but
I haven't got the full story quite yet.

 I tried debugging this but got a segmentation fault and apparent stack
 corruption in gdb, with the reported break point not anywhere I had set
 one.  I don't know what to do about that.

Recompiling with -O0 might help debuggability.

 2.  If the postmaster is not running, there is garbage in psql's error
 message:

Works OK for me --- could you trace that one more fully?

 3.  There is a compilation warning that a constant will not fit into a
 long in adt.c.  There are two more files where INT64CONST() is required
 but not supplied.  Patch attached.

Not sure about these ... does anyone else get these warnings?

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] Can't detect time zone

2004-05-31 Thread Christopher Kings-Lynne
Any reason why postgres can't detect time zone on my machine automatically?
LOG:  could not recognize system timezone, defaulting to Etc/GMT-8
HINT:  You can specify the correct timezone in postgresql.conf.
postmaster starting
-bash-2.05b$ LOG:  database system was shut down at 2004-05-31 15:19:00 
GMT-8
LOG:  checkpoint record is at 0/A47C80
LOG:  redo record is at 0/A47C80; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 536; next OID: 17218
LOG:  database system is ready

-bash-2.05b$ date
Mon May 31 15:21:10 WST 2004
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Claudio Natoli


Christopher Kings-Lynne writes:
 Any reason why postgres can't detect time zone on my machine 
 automatically?
 
 LOG:  could not recognize system timezone, defaulting to Etc/GMT-8
 HINT:  You can specify the correct timezone in postgresql.conf.
 postmaster starting
 -bash-2.05b$ LOG:  database system was shut down at 
 2004-05-31 15:19:00 
 GMT-8
 LOG:  checkpoint record is at 0/A47C80
 LOG:  redo record is at 0/A47C80; undo record is at 0/0; shutdown TRUE
 LOG:  next transaction ID: 536; next OID: 17218
 LOG:  database system is ready
 
 
 -bash-2.05b$ date
 Mon May 31 15:21:10 WST 2004

I'm guessing the TZ code isn't identifying Aussie timezones. I'm getting the
same message, albeit with Etc/GMT-10. If anyone who hacked on the TZ code
(Magnus, Bruce, ...) wants me to add a few printfs to trace this down, please
get in touch with me.

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] yet another contrib module

2004-05-31 Thread Darko Prenosil
Here:
http://archives.postgresql.org/pgsql-hackers-win32/2004-05/msg00125.php

I'm not sure about the Makefile changes, because I think the Makefiles for contrib 
have been changed since then.

Regards !

On Monday 31 May 2004 04:48, Oleg Bartunov wrote:
 On Sun, 30 May 2004, Darko Prenosil wrote:
  Oleg, can you take a look at my diffs for tsearch2 (win32 build)?
  Since the changes are trivial, can you make those changes and commit them
  together with the schema support?

 Where can we get the diff?

  Regards !
 
  - Original Message -
  From: Bruce Momjian [EMAIL PROTECTED]
  To: Oleg Bartunov [EMAIL PROTECTED]
  Cc: Pgsql Hackers [EMAIL PROTECTED]
  Sent: Friday, May 28, 2004 9:23 PM
  Subject: Re: [HACKERS] yet another contrib module
 
   Sounds fine.
  
   ---------------------------------------------------------------
 
   Oleg Bartunov wrote:
Hello,
   
June 1 is pretty close now, so I'm asking about yet another
contrib module, pg_trgm, which is rather mature and quite useful.
Is it worth putting it into 7.5 contrib?
   
trgm - Trigram matching for PostgreSQL
--
   
The pg_trgm contrib module provides functions and index classes
for determining the similarity of text based on trigram
matching.
   
Also, we plan to submit schema support to contrib/tsearch2 this
weekend.
   
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
   
  
   --
 Bruce Momjian|  http://candle.pha.pa.us
 [EMAIL PROTECTED]   |  (610) 359-1001
 +  If your life is a hard drive, |  13 Roberts Road
 +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
 
 

   Regards,
   Oleg
 _
 Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
 Sternberg Astronomical Institute, Moscow University (Russia)
 Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
 phone: +007(095)939-16-83, +007(095)939-23-83



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Christopher Kings-Lynne

Any reason why postgres can't detect time zone on my machine 
automatically?

I'm guessing the TZ code isn't identifying Aussie timezones. I'm getting the
same message, albeit with Etc/GMT-10. If anyone who hacked on the TZ code
(Magnus, Bruce, ...) wants me to add a few printfs to trace this down, please
get in touch with me.
Yeah, I set australian_timezones to true in my postgresql.conf and it 
still fails.

Chris
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Shridhar Daithankar
On Sunday 30 May 2004 21:33, Tom Lane wrote:
 Shridhar Daithankar [EMAIL PROTECTED] writes:
  I was toying around with idea of converting all the memory related
  parameters in postgresql.conf to kilobytes for simplicity and
  uniformity.

 Why is that a good idea?

Right now following are measured in pages
wal_buffers
shared_buffers 
effective_cachesize 

Following are in kbs
work_mem 
maintenance_work_mem 
max_stack_depth 

while the rest of the memory parameters are in kB. I thought being uniform would 
be good. Besides, it will make them independent of page size as well.

 Shridhar

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] pg_dump --comment?

2004-05-31 Thread Jan Wieck
This can be accomplished with 10-20 lines of shell scripting. I don't 
think it's worth the trouble bloating pg_dump with it at all.

Jan
On 5/27/2004 10:10 PM, Chris Campbell wrote:
I've encountered a situation where I'd like to store some information 
about the database when I do a pg_dump. For instance, the timestamp of 
the dump. And some other information that I pull from the database.

If pg_dump had a --comment flag that allowed me to pass a string that 
would be stored inside the dumpfile, that I could then retrieve in some 
way (without doing a full restore of the dump), that would meet my 
needs. In discussing this idea with other people, it sounded like a 
general-use feature that mankind as a whole could benefit from. :)

Here's what I'm envisioning:
pg_dump --comment 'This is a comment' [more pg_dump args]
That would store the comment ('This is a comment') in the dump file 
somehow. The definition of "somehow" would vary depending on the output 
format (text, tar, or custom). Initially, since I only use the custom 
format, I would only focus on getting it to work with that. But for the 
text format, there could be a SQL comment at the top of the file with

-- COMMENT: This is a comment
or something. In the tar format, there could be a comment file in the 
archive that contains the text "This is a comment".

For the custom format...I haven't looked at the format specification, so 
I don't know exactly where the comment would go. It could go at the very 
top of the file, and have a special delimiter after it. pg_restore would 
just skim over the file until the delimiter is reached, and then go on 
about its business. The benefit of this scheme is that any program could 
read the comment -- just open a file and read the bytes until the delimiter.

There could also be a pg_dump or pg_restore option that prints out the 
comment stored in a given dump file, or another binary (pg_comment?) 
that does that.

Is this a desirable feature? Should I work it up like described and 
submit a patch? Any comments/suggestions?

Thanks!
- Chris

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Bruce Momjian
Shridhar Daithankar wrote:
 On Sunday 30 May 2004 21:33, Tom Lane wrote:
  Shridhar Daithankar [EMAIL PROTECTED] writes:
   I was toying around with idea of converting all the memory related
   parameters in postgresql.conf to kilobytes for simplicity and
   uniformity.
 
  Why is that a good idea?
 
 Right now following are measured in pages
 wal_buffers
 shared_buffers 
 effective_cachesize 
 
 Following are in kbs
 work_mem 
 maintenance_work_mem 
 max_stack_depth 
 
 while the rest of the memory parameters are in kB. I thought being uniform would 
 be good. Besides, it will make them independent of page size as well.

Sounds good to me.  How are you handling cases where the value has to be
a multiple of page size --- rounding or throwing an error?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Nested xacts: looking for testers and review

2004-05-31 Thread Jan Wieck
On 5/28/2004 2:52 PM, Alvaro Herrera wrote:
On Fri, May 28, 2004 at 01:43:16PM -0400, Bruce Momjian wrote:
In this case, I want to try all of the inserts, but any of them can
fail; then I want the bottom part done.
I wonder where everyone was when I asked this question a long time
ago.  I said I thought the behavior should be like I described, and no
one objected.
Personally I think it would be a mistake to allow the COMMIT for the
subtransaction to ignore the fact that the subxact was aborted.  However
I realize what you are proposing, and maybe this can be implemented
using a parameter to COMMIT (indicating to not propagate the error if
it's in aborted state, but commit normally otherwise).
I agree on this one. Subtransactions are a feature to give applications 
finer control, not to let scripts ignore error checking.

However if everyone disagrees, I can take that part out, and the code
would be simpler.  IMHO however, it would be less reliable.
Please don't.
Jan
--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] yet another contrib module

2004-05-31 Thread Teodor Sigaev
Committed, thank you.
I didn't apply the Makefile changes, because symbolic links to the pg_reg* functions 
will be resolved at runtime. And I made some small changes in spell.c (added 
pg_wchar conversion to the regcomp call).


Darko Prenosil wrote:
Here:
http://archives.postgresql.org/pgsql-hackers-win32/2004-05/msg00125.php
I'm not sure about the Makefile changes, because I think the Makefiles for contrib 
have been changed since then.

Regards !
On Monday 31 May 2004 04:48, Oleg Bartunov wrote:
On Sun, 30 May 2004, Darko Prenosil wrote:
Oleg, can you take a look at my diffs for tsearch2 (win32 build)?
Since the changes are trivial, can you make those changes and commit them
together with the schema support?
Where can we get the diff?

Regards !
- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
To: Oleg Bartunov [EMAIL PROTECTED]
Cc: Pgsql Hackers [EMAIL PROTECTED]
Sent: Friday, May 28, 2004 9:23 PM
Subject: Re: [HACKERS] yet another contrib module

Sounds fine.
---------------------------------------------------------------

Oleg Bartunov wrote:
Hello,
June 1 is pretty close now, so I'm asking about yet another
contrib module, pg_trgm, which is rather mature and quite useful.
Is it worth putting it into 7.5 contrib?
trgm - Trigram matching for PostgreSQL
--
The pg_trgm contrib module provides functions and index classes
for determining the similarity of text based on trigram
matching.
Also, we plan to submit schema support to contrib/tsearch2 this
weekend.
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
--
 Bruce Momjian|  http://candle.pha.pa.us
 [EMAIL PROTECTED]   |  (610) 359-1001
 +  If your life is a hard drive, |  13 Roberts Road
 +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

--
Teodor Sigaev  E-mail: [EMAIL PROTECTED]
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] pg_dump --comment?

2004-05-31 Thread Bruce Momjian
Jan Wieck wrote:
 This can be accomplished with 10-20 lines of shell scripting. I don't 
 think it's worth the trouble bloating pg_dump with it at all.

Considering how much bloat -v adds, adding a datestamp to it is trivial.
 -v outputs things like:

--
-- TOC entry 1449 (class 0 OID 0)
-- Dependencies: 4
-- Name: public; Type: ACL; Schema: -; Owner: postgres
--

so adding a timestamp at the top is probably a good extension.

Actually, the pg_dump -v docs are incorrect because they only mention
additional stderr output, not the additional comments.  I updated the
text to:

Specifies verbose mode.  This will cause
<application>pg_dump</application> to output detailed object
comments in the dump file, and progress messages to standard error.

This -v functionality controlling comments is new in 7.5.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Shridhar Daithankar
On Monday 31 May 2004 18:41, Bruce Momjian wrote:
 Shridhar Daithankar wrote:
  On Sunday 30 May 2004 21:33, Tom Lane wrote:
   Shridhar Daithankar [EMAIL PROTECTED] writes:
I was toying around with idea of converting all the memory related
parameters in postgresql.conf to kilobytes for simplicity and
uniformity.
  
   Why is that a good idea?
 
  Right now following are measured in pages
  wal_buffers
  shared_buffers
  effective_cachesize
 
  Following are in kbs
  work_mem
  maintenance_work_mem
  max_stack_depth
 
  while the rest of the memory parameters are in kB. I thought being uniform
  would be good. Besides, it will make them independent of page size as well.

 Sounds good to me.  How are you handling cases where the value has to be
 a multiple of page size --- rounding or throwing an error?

The patch rounds it down, owing to the assignment of a possibly fractional value 
to an integer.  But I hadn't anticipated this requirement, to be honest. 

This seems to be better behavior than throwing an error.
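The rounding-down behavior described here amounts to integer division by the page size. A small sketch, assuming the default 8 kB BLCKSZ (this is an illustration, not the actual GUC-assignment code):

```c
#include <stdio.h>

#define BLCKSZ_KB 8             /* default PostgreSQL page size, in kB */

/* Convert a kilobyte setting to a whole number of buffer pages.
 * Integer division truncates, so a non-multiple rounds down --
 * the behavior the patch gets for free from the int assignment. */
static int kb_to_pages(int kb)
{
    return kb / BLCKSZ_KB;
}

int main(void)
{
    printf("%d\n", kb_to_pages(1024));   /* exact multiple: 128 pages */
    printf("%d\n", kb_to_pages(1023));   /* rounds down to 127 pages */
    return 0;
}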

 Shridhar

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] yet another contrib module

2004-05-31 Thread Darko Prenosil
Looking forward to seeing it work on WIN32! 

Regards !


On Monday 31 May 2004 15:32, Teodor Sigaev wrote:
 Committed, thank you.
 I didn't apply the Makefile changes, because symbolic links to the pg_reg* functions
 will be resolved at runtime. And I made some small changes in spell.c (added
 pg_wchar conversion to the regcomp call).

 Darko Prenosil wrote:
  Here:
  http://archives.postgresql.org/pgsql-hackers-win32/2004-05/msg00125.php
 
  I'm not sure about the Makefile changes, because I think the Makefiles for
  contrib have been changed since then.
 
  Regards !
 
  On Monday 31 May 2004 04:48, Oleg Bartunov wrote:
 On Sun, 30 May 2004, Darko Prenosil wrote:
 Oleg, can you take a look at my diffs for tsearch2 (win32 build)?
 Since the changes are trivial, can you make those changes and commit them
 together with the schema support?
 
 Where can we get the diff?
 
 Regards !
 
 - Original Message -
 From: Bruce Momjian [EMAIL PROTECTED]
 To: Oleg Bartunov [EMAIL PROTECTED]
 Cc: Pgsql Hackers [EMAIL PROTECTED]
 Sent: Friday, May 28, 2004 9:23 PM
 Subject: Re: [HACKERS] yet another contrib module
 
 Sounds fine.
 
 ---------------------------------------------------------------
 
 Oleg Bartunov wrote:
 Hello,
 
 June 1 is pretty close now, so I'm asking about yet another
 contrib module, pg_trgm, which is rather mature and quite useful.
 Is it worth putting it into 7.5 contrib?
 
 trgm - Trigram matching for PostgreSQL
 --
 
 The pg_trgm contrib module provides functions and index classes
 for determining the similarity of text based on trigram
 matching.
 
 Also, we plan to submit schema support to contrib/tsearch2 this
 weekend.
 
 Regards,
 Oleg
 _
 Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
 Sternberg Astronomical Institute, Moscow University (Russia)
 Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
 phone: +007(095)939-16-83, +007(095)939-23-83
 
 
 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
 
 
 
 Regards,
 Oleg
 _
 Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
 Sternberg Astronomical Institute, Moscow University (Russia)
 Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
 phone: +007(095)939-16-83, +007(095)939-23-83
 
 


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] CVS tip compiler error with --enable-thread-safety

2004-05-31 Thread Shridhar Daithankar
On Sunday 30 May 2004 19:37, Bruce Momjian wrote:
  The relevant configure messages read
  -
  checking whether pthreads work without any flags... no
  checking whether pthreads work with -Kthread... no
  checking whether pthreads work with -kthread... no
  checking for the pthreads library -llthread... no
  checking whether pthreads work with -pthread... yes
  -
 
  Here is relevant portion of src/Makefile.global
 
  -
  PTHREAD_CFLAGS  = -pthread -D_REENTRANT -D_THREAD_SAFE
  -D_POSIX_PTHREAD_SEMANTICS
  PTHREAD_LIBS=
  LIBS = -lz -lreadline -ltermcap -lcrypt -lresolv -lnsl -ldl -lm
  -
 
  It worked after I manually added -lpthread to LIBS and did a make
  clean;make

 OK, I have applied the following patch which should fix it.  Turns out I
 wasn't using the thread libs as part of library creation.

I updated from anoncvs; the patch to src/port/Makefile is there, but initdb still 
fails to compile.

I suspect this is because PTHREAD_LIBS is still blank in Makefile.global. I 
need to add -lpthread there to get things working. However, I could not figure 
out what creates PTHREAD_LIBS in Makefile.global.

Is there something else I should try?

 Shridhar

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] CVS tip compiler error with --enable-thread-safety

2004-05-31 Thread Bruce Momjian
Shridhar Daithankar wrote:
 On Sunday 30 May 2004 19:37, Bruce Momjian wrote:
   The relevant configure messages read
   -
   checking whether pthreads work without any flags... no
   checking whether pthreads work with -Kthread... no
   checking whether pthreads work with -kthread... no
   checking for the pthreads library -llthread... no
   checking whether pthreads work with -pthread... yes
   -
  
   Here is relevant portion of src/Makefile.global
  
   -
   PTHREAD_CFLAGS= -pthread -D_REENTRANT -D_THREAD_SAFE
   -D_POSIX_PTHREAD_SEMANTICS
   PTHREAD_LIBS  =
   LIBS = -lz -lreadline -ltermcap -lcrypt -lresolv -lnsl -ldl -lm
   -
  
   It worked after I manually added -lpthread to LIBS and did a make
   clean;make
 
  OK, I have applied the following patch which should fix it.  Turns out I
  wasn't using the thread libs as part of library creation.
 
 I updated from anoncvs; the patch to src/port/Makefile is there, but initdb still 
 fails to compile.
 
 I suspect this is because PTHREAD_LIBS is still blank in Makefile.global. I 
 need to add -lpthread there to get things working. However, I could not figure 
 out what creates PTHREAD_LIBS in Makefile.global.

Well, that certainly is strange.  config.log should show checks for a
variety of thread flags, and it should add whichever ones your compiler
supports.  I would look in there for a cause, or email me the whole file
off-list.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Tom Lane
Shridhar Daithankar [EMAIL PROTECTED] writes:
 Right now following are measured in pages
 wal_buffers
 shared_buffers 
 effective_cachesize 
 while the rest of the memory parameters are in kB. I thought being uniform would 
 be good. Besides, it will make them independent of page size as well.

It would probably be reasonable to change effective_cache_size, since we
really do not know what the kernel's unit of buffering is (except on
Linux, where we *do* know that it ain't 8K ;-)).  Personally I'd opt for
measuring it in MB not KB, though; that would be a much more convenient
unit on modern machines.  We could easily make it a float for anyone who
thinks they know the cache size to sub-MB accuracy.

As for the others, I'll side with Emerson: a foolish consistency is the
hobgoblin of little minds.  We know very well what the unit of
allocation of those is, and it's pages.  There's no advantage to using
KB except making it harder to work out what's really happening.  We
could measure max_connections in KB too if we had a mind to: there's
a very definite shared-mem cost per connection slot.  Or the FSM
parameters, or checkpoint_segments, or max_locks_per_transaction.
The fact that they have quantifiable space costs doesn't mean that space
is the most useful way to measure them.

BTW, were you intending to convert KB to NBuffers by charging exactly 8K
per buffer, or were you intending to allow for the additional shmem
costs such as buffer headers, per-buffer LWLocks, etc?  If not the
latter, then what are you really measuring?  For sure it's not shared
memory size --- charging an artificial number isn't going to help anyone
who's trying to pick shared_buffers to arrive at a particular actual
shmem size.  But if it is the latter then it'll become even more
impossible to tell what's really happening, and we'll be forced to
invent some way of reading out how many buffers really got allocated.

So I disagree with the premise.  Measuring these things in KB is not an
improvement.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Bruce Momjian
Tom Lane wrote:
 Shridhar Daithankar [EMAIL PROTECTED] writes:
  Right now following are measured in pages
  wal_buffers
  shared_buffers 
  effective_cachesize 
  while the rest of the memory parameters are in kB. I thought being uniform would 
  be good. Besides, it will make them independent of page size as well.
 
 It would probably be reasonable to change effective_cache_size, since we
 really do not know what the kernel's unit of buffering is (except on
 Linux, where we *do* know that it ain't 8K ;-)).  Personally I'd opt for
 measuring it in MB not KB, though; that would be a much more convenient
 unit on modern machines.  We could easily make it a float for anyone who
 thinks they know the cache size to sub-MB accuracy.

I thought the idea was that you could put 'm', 'k', or maybe 'p' after
the value to specify the units.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 Any reason why postgres can't detect time zone on my machine 
 automatically?

 I'm guessing the TZ code isn't identifying Aussie timezones.

So what *is* your zone, exactly?  I don't see how you expect us to
figure out why it's not matching when we don't know what it's not
matching.

There already are some debug printouts in CVS tip.  If you set
log_min_messages to debug4 or debug5 you should get reams of 
log messages about why each tested TZ setting didn't seem to match
the system's timezone.  If you could send along the output for
the zones that seem like reasonable candidates to match, it'd help.
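For anyone following along, the settings Tom mentions live in postgresql.conf. A sketch (the explicit zone name below is just an illustrative example, not taken from this thread):

```
# Crank up logging so each timezone-matching attempt is reported:
log_min_messages = debug5

# Or bypass detection entirely, per the HINT in the original report
# (Australia/Perth is an illustrative zone name for WST, GMT+8):
timezone = 'Australia/Perth'
```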

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] New pg_ctl has retrogressed in error messages

2004-05-31 Thread Tom Lane
7.4, on not finding a postmaster:

[EMAIL PROTECTED] pgsql]$ pg_ctl stop
/home/tgl/version74/bin/pg_ctl: line 274: kill: (15273) - No such process
waiting for postmaster to shut down failed
pg_ctl: postmaster does not shut down

CVS tip, same scenario:

[EMAIL PROTECTED] pgsql]$ pg_ctl stop
stop signal failed

Not waiting for the results of a failed kill() is good, but the error
message is exceedingly unhelpful.  It should mention the PID it tried to
kill and give the errno string.  Perhaps

failed to signal postmaster process 15273: No such process

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[HACKERS] Official Freeze Date for 7.5: July 1st, 2004

2004-05-31 Thread Marc G. Fournier
Just so that everyone is aware, we are going to push the freeze date for 
7.5 to July 1st.

Although we feel that there are enough improvements and features already 
in place for 7.5, Tom felt that if we gave it that extra month, we could 
also have PITR in place for 7.5 ...

If anyone is working on other features that they feel can be polished off 
before the July 1st deadline, we would be most happy to incorporate those 
as well, but we do recommend submitting patches for review *sooner* rather 
than later, so that any recommended corrections can be addressed before 
the deadline.


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Tom Lane
Oliver Elphick [EMAIL PROTECTED] writes:
 1.  There are regression failures on timestamptz and horology which seem
 to have come about either on input or output of timestamps with
 fractional seconds.

I believe I've fixed this.

 2.  If the postmaster is not running, there is garbage in psql's error
 message:

I can't duplicate that here.  It looks to me like the probable
explanation is a broken or incompatible version of strerror_r() on your
machine.  Does the failure go away if you build without thread-safety?

 3.  There is a compilation warning that a constant will not fit into a
 long in adt.c.  There are two more files where INT64CONST() is required
 but not supplied.  Patch attached.

Patch applied, thanks.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Ingres to be released as open source

2004-05-31 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Christopher Kings-Lynne):
 Ingres is to be released as open source:

 http://developers.slashdot.org/article.pl?sid=04/05/25/0043219&mode=nested&tid=126&tid=137&tid=163&tid=185&tid=198

 Like the article says, I wonder if there is any synergy between the
 products, i.e. can we grab features from their codebase?

I would be quite surprised if that were permissible.  CA is making up
yet another open source license, and it seems really unlikely that
this would represent a thin enough rewriting of the BSD license to
allow inclusion in PostgreSQL.  

Most new licenses seem to be intended to do something akin to what
MySQL does, with dual licenses: some open source license for 'toy'
usage and to allow free publicity via inclusion with Linux
distributions, but a traditional proprietary license for anyone making
'serious' use of the product.  It would be surprising for CA's
instance to turn out otherwise...
-- 
output = reverse(moc.enworbbc @ enworbbc)
http://www3.sympatico.ca/cbbrowne/finances.html
Rules of the Evil Overlord  #105. I will design all doomsday machines
myself. If I must hire a mad  scientist to assist me, I will make sure
that he is sufficiently twisted to never regret his evil ways and seek
to undo the damage he's caused. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread John Hansen
On Fri, 2004-05-28 at 08:15, [EMAIL PROTECTED] wrote:
  [EMAIL PROTECTED] wrote:
 
 You are absolutely wrong on all counts here. A RAID5 system is slower
 than a single spindle, as it is only as fast as the slowest disk in the
 stripe plus the overhead of the RAID.
 
  Huh, what kind of controller do you use... Sounds like some value IDE
  one. I'd never suggest IDE raid5 for DBMS purposes anyway.
 
 Actually, my RAID system, currently on my test system, is fully UWLVD SCSI
 with fast spindles.
 
 Here is a logical, factual question for you to answer: how can a set of
 disks, let's say seven (six data drives with one parity), deliver results
 faster than the slowest drive in the stripe?
 
 If you say predictive and intelligent caching, yeah, maybe, but *all*
 disks today have caching, and the initial request still has to wait for
 the longest seek time across all spindles and the slowest spindle
 position. I've been dealing with RAID systems for almost a decade now,
 and they are not a magic bullet.
 
 RAID systems are always slower than their component disks. This is the
 drawback to using them and a fundamental limitation. A single disk will
 average 1/2 spindle seek, assuming its initial head placement is random,
 and average 1/2 spindle revolution to track, assuming no out-of-order
 sector access. A RAID system has to wait for the slowest disk; thus, while
 a single disk can average 1/2 seek and rotation, two disks will not. So
 your RAID disk access will ALWAYS be as slow as or slower than a single
 disk access, not including the additional RAID processing.
 

Some high-end SCSI drives come with an option for using an external
source for spindle synchronization. These drives will thus not have to
wait for rotation, as head positions are aligned.

 The advantage of a RAID is that a number of smaller disks can look like a
 big disk with some redundancy. The advantage of a RAID controller is that
 the RAID processing and parity-generation overhead is done on an external
 device. Using a RAID controller that presents a SCSI LUN is great because
 you don't need to trust third-party drivers. All in all, RAID is a good
 idea, but it isn't faster.
 
 As for IDE RAID, IDE RAID is an awesome idea. SCSI disks are just too
 expensive. Infortrend has a cool IDE to SCSI or Fibre RAID system that
 rocks.
 
Addonics has these too, I've been using them with great results.
 
 
 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] Help

2004-05-31 Thread Edgar Cante
Hello. I have my PostgreSQL database installed on Linux, but when I try to
query it through PHP it does not respond, and it generates the following
message:

Call to undefined function:  pg_connect()


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread Andreas Pflug
[EMAIL PROTECTED] wrote:
As for IDE RAID, IDE RAID is an awesome idea. SCSI disks are just too
expensive. Infortrend has a cool IDE to SCSI or Fibre RAID system that
rocks.
 

Obviously, you're caught by those marketing geeks. You're taking
bandwidth (MB/s) as the performance index, which is irrelevant for
database access. The limiting factor is average access time, and there is
still no IDE disk with a 3 ms seek time. This is not a problem of the
interface; it's just a fact that (for marketing reasons?) no server-grade
disks come equipped with IDE.
A good RAID system will be able to have independent seeks issued on all
disks in parallel, thus scaling by spindle count (only for parallel
accessing processes, of course, not for serialized access). What you're
proposing is that the app should parallelize the I/O, instead of leaving
this to the instance that can (and should) do it better.

Regards,
Andreas
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread pgsql
 [EMAIL PROTECTED] wrote:


As for IDE RAID, IDE RAID is an awesome idea. SCSI disks are just too
expensive. Infortrend has a cool IDE to SCSI or Fibre RAID system that
rocks.



 Obviously, you're caught by those marketing geeks. You're taking
 bandwidth (MB/s)as performance index, which is irrelevant for database
 access. Limiting factor is average access time, and there's still no 3ms
 seek time ide disk. This is not a problem of the interface, it's just a
 fact that (for marketing reasons?) all server grade disks are not
 equipped with ide.

Depending on your application,  IDE RAID is a very cost effective system.
Sometimes speed is not important.

 A good raid system will be able to have independend seeks issued on all
 disks in parallel, thus scaling by spindle number (only for parallel
 accessing processes of course, not for serialized access). What you're
 proposing is that the app should parallelize it, instead of leaving this
 to the instance that can (should) do this better.

I'm not suggesting this at all, and clearly you have not read what I
wrote. It is physically impossible for RAID to be faster than its
component disks. Period. To argue that a single RAID system is faster than
separate (comparable) disks managed independently is just not true. I have
even explained why.

Yes, RAID systems do scale by spindle, and seeks are issued in parallel,
but you STILL need to wait for all spindles to complete the operation.
Operations on a RAID system are at least as slow as the slowest disk.

What you are missing is that the RAID is dealing with the multiple drives
as one drive. Two operations have to happen serially, one after the other,
whereas with separate disks, the two can happen simultaneously.




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread Andreas Pflug
[EMAIL PROTECTED] wrote:
What you are missing is that the RAID is dealing with the multiple drives
as one drive. Two operations have to happen serially,
You're kidding, or vastly underestimating RAID controllers. The average
db access is well served by a single block of data, stored on a single
drive. Nicely parallelizable by a RAID controller if it has a minimum of
smartness.

Regards,
Andreas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread Andreas Pflug
[EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
   

What you are missing is that the RAID is dealing with the multiple drives
as one drive. Two operations have to happen serially,
 

You're kidding or vastly underestimating raid controllers. The average
db access is well served with a single block of data, stored on a single
drive. Nicely parallelizable by a raid controller if it has a minimum of
smartness.
   

The data contained on a RAID is spread across all the drives in the raid,
is this not true?
 

Data is spread *blockwise*, usually 32k or 64k blocks of data. This 
means, that typically 8 to 16 database blocks will reside on a *single* 
disk, with additional parity data on other disks.

To access data on a drive, one must get the data off all of the drives at
the same time, is this not true?
The data is usually completely on a single drive.
If you perform two different operations on the RAID, you must access each
RAID drive twice.
If you perform different operations on multiple different drives, you can
access the same amount of data as you would with the RAID, but have
parallelized operations.
This is a fact. It is *the* drawback to RAID systems. If you do not
understand this, then you do not understand RAID systems.
 

You indicate clearly that it's you having strange opinions of raid 
controller/subsystem functionality executing multiple commands.

Perform any benchmark you want. Take any RAID system you want. Or,
actually, I have a factual reason why RAID systems perform worse than
multiple single drives, I have written a quick program to show it. I have
even double checked on my own RAID system here. 

As I said, the benchmark you wrote by no means simulates DBMS
access patterns; it might be good for showing video streaming performance
or so.
Please do read dbms disk io white papers, e.g. 
http://msdn.microsoft.com/archive/en-us/dnarsqlsg/html/sqlperftune.asp
Teaching hardware issues is OT for this list.

Regards,
Andreas
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Win32, PITR, nested transactions, tablespaces

2004-05-31 Thread Christopher Browne
[EMAIL PROTECTED] (Bruce Momjian) wrote:
   Win32 has 98% of its code in CVS, so it will make it
   Tablespaces - Christopher says it is ready, and has run tests
   PITR - some functionality might be in 7.5, but we aren't sure
   Nested transactions - Alvaro thinks it is close, but we don't know

Does this mean that 2PC is likely to be deferred?  I believe that
integration work on that was awaiting having nested transactions in
the codebase...
-- 
(reverse (concatenate 'string gro.mca @ enworbbc))
http://www3.sympatico.ca/cbbrowne/emacs.html
Rules  of the  Evil  Overlord #30.   All  bumbling conjurers,  clumsy
squires, no-talent  bards, and  cowardly thieves in  the land  will be
preemptively put  to death.  My foes  will surely give  up and abandon
their quest if they have no source of comic relief.
http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread pgsql
 [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:



What you are missing is that the RAID is dealing with the multiple
 drives
as one drive. Two operations have to happen serially,



You're kidding or vastly underestimating raid controllers. The average
db access is well served with a single block of data, stored on a single
drive. Nicely parallelizable by a raid controller if it has a minimum of
smartness.




The data contained on a RAID is spread across all the drives in the raid,
is this not true?


 Data is spread *blockwise*, usually 32k or 64k blocks of data. This
 means, that typically 8 to 16 database blocks will reside on a *single*
 disk, with additional parity data on other disks.

That may or may not be true depending on the RAID OEM, setup, and caching
parameters.


To access data on a drive, one must get the data off all of the drives at
the same time, is this not true?

 The data is usually completely on a single drive.

That may or may not be true, and you *don't* know that because the RAID
shields you from it.



If you perform two different operations on the RAID, you must access each
RAID drive twice.

If you perform different operations on multiple different drives, you can
access the same amount of data as you would with the RAID, but have
parallelized operations.

This is a fact. It is *the* drawback to RAID system. If you do not
understand this, then you do not understand RAID systems.



 You indicate clearly that it's you having strange opinions of raid
 controller/subsystem functionality executing multiple commands.

Wait, it gets better.


Perform any benchmark you want. Take any RAID system you want. Or,
actually, I have a factual reason why RAID systems perform worse than
multiple single drives, I have written a quick program to show it. I have
even double checked on my own RAID system here.


 As I said, the benchmark you wrote does by no means simulate DBMS
 access patterns, it might be good to show video streaming performance or
 so.
 Please do read dbms disk io white papers, e.g.
 http://msdn.microsoft.com/archive/en-us/dnarsqlsg/html/sqlperftune.asp
 Teaching hardware issues is OT for this list.

From the top of the very article you cite:
Archived content. No warranty is made as to technical accuracy
Typical Microsoft hogwash, but they do have a few nuggets:

Note   As a general rule of thumb, be sure to stripe across as many disks
as necessary to achieve solid performance. Windows NT/SQL Performance
Monitor will indicate if Windows NT disk I/O is bottlenecking on a
particular RAID array. Be ready to add disks and redistribute data across
RAID arrays and/or SCSI channels as necessary to balance disk I/O and
maximize performance.

They are suggesting that you use multiple RAID arrays or data channels.
Hmmm, sound familiar? Isn't that EXACTLY what I've been saying?

How about this heading title:
Creating as Much Disk I/O Parallelism as Possible
Distinct disk I/O channels refer mainly to distinct sets of hard drives
or distinct RAID arrays, because hard drives are the most likely point of
disk I/O bottleneck. But also consider distinct sets of RAID or SCSI
controllers and distinct sets of PCI buses as ways to separate SQL Server
activity if additional RAID controllers and PCI buses are available.

Your own documents don't even support your claims.




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread Andreas Pflug
Dear anonymous,
This is really making me tired, and it's still OT. Anybody interested may
read the document you're citing abusively and decide whether or not to
believe you that storage controllers are only capable of one command at a
time.

Regards,
Andreas

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread pgsql
 Dear anonymous,

 This is really making me tired, and still OT. May anybody interested
 read the document you're citing abusively, or believe you that storage
 controllers are only capable of one command at a time or not.

I would say this is totally off topic, except that it does present
opinions on how to optimize a database.

What I find troubling is that you are not debating this with facts; you
use insults and try to impugn my opinion or abilities, rather than present
a solid reason why you hold the position you do. I know why I know what I
know; I've shown you examples and explained why it acts as it does.

You present a Microsoft document that basically supports what I've been
saying, and accuse me of citing abusively (whatever the hell that
means).

While I hate that this sort of exchange is on -hackers, the content, if
debated reasonably, should have been very useful for people struggling
with these issues.


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] tablespaces and DB administration

2004-05-31 Thread pgsql
 [EMAIL PROTECTED] wrote:


What you are missing is that the RAID is dealing with the multiple drives
as one drive. Two operations have to happen serially,


 You're kidding or vastly underestimating raid controllers. The average
 db access is well served with a single block of data, stored on a single
 drive. Nicely parallelizable by a raid controller if it has a minimum of
 smartness.


The data contained on a RAID is spread across all the drives in the raid,
is this not true?

To access data on a drive, one must get the data off all of the drives at
the same time, is this not true? (Yes, there is flexibility with mirroring
and ignoring parity on reads.)

If you perform two different operations on the RAID, you must access each
RAID drive twice.

If you perform different operations on multiple different drives, you can
access the same amount of data as you would with the RAID, but have
parallelized operations.

This is a fact. It is *the* drawback to RAID systems. If you do not
understand this, then you do not understand RAID systems.

Perform any benchmark you want. Take any RAID system you want. Actually,
I have a factual reason why RAID systems perform worse than multiple
single drives, and I have written a quick program to show it. I have even
double-checked on my own RAID system here. Now, your turn: show me some
facts that support your position, not just assert that it should be so.
Show me how it can; show proof as to how a RAID system can be faster than
its component disks.

If you can't, I'm pretty sure I can safely ignore this part of the thread.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Help

2004-05-31 Thread pgsql
 Hello. I have my PostgreSQL database installed on Linux, but when I try
 to query it through PHP it does not respond, and it generates the
 following message:

 Call to undefined function:  pg_connect()


This really isn't the right group for this, but, you need to make sure
that the PostgreSQL interface library is either compiled into PHP (it
isn't by default) or if it is a shared module, that it is loaded in
php.ini.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Official Freeze Date for 7.5: July 1st, 2004

2004-05-31 Thread Simon Riggs
On Mon, 2004-05-31 at 19:09, Marc G. Fournier wrote:
 Although we feel that there are enough improvements and features already 
 in place for 7.5, Tom's felt that if we gave it that extra month, we could 
 also have PITR in place for 7.5 ...
 

You have my full support and commitment for 1 July freeze.

...as-early-as-possible is understood...

Best Regards, Simon Riggs, 2nd Quadrant




---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Oliver Elphick
On Mon, 2004-05-31 at 19:55, Tom Lane wrote:
 Oliver Elphick [EMAIL PROTECTED] writes:
  1.  There are regression failures on timestamptz and horology which seem
  to have come about either on input or output of timestamps with
  fractional seconds.
 
 I believe I've fixed this.

All regression tests pass now.

  2.  If the postmaster is not running, there is garbage in psql's error
  message:
 
 I can't duplicate that here.  It looks to me like the probable
 explanation is a broken or incompatible version of strerror_r() on your
 machine.  Does the failure go away if you build without thread-safety?

Yes it does.

I'll see if I can run with a debugging libc and find it.

-- 
Oliver Elphick  [EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/A54310EA  92C8 39E7 280E 3631 3F0E  1EC0 5664 7A2F A543 10EA
 
 How precious also are thy thoughts unto me, O God! how
  great is the sum of them! If I should count them, they
  are more in number than the sand; when I awake, I am 
  still with thee.        Psalms 139:17,18


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Extended customizing, SQL functions,

2004-05-31 Thread Shridhar Daithankar
On Saturday 29 May 2004 18:10, [EMAIL PROTECTED] wrote:
 Having internal PostgreSQL variables that are not present on disk, or
 maybe, variables that are mirrored on disk may be good.

Yes, I agree. I can see why you proposed "no transactions" a few posts ago.
Take the example of a count variable: it may not have transactions, but it
is not expected to be very accurate anyway.

Being able to declare variables that can be changed/read in a locked
fashion and are visible to all the backends would be of really great use.
They shouldn't have transactions because they are not data but state. They
are in the database so that other connections and stored procedures can
see them.

Coupled with the fact that PostgreSQL has custom data types, there is no
end to how this could be put to use. A lot of things that now sit in the
application layer could move inside PostgreSQL, I imagine.

 The whole reason why I made this post was to see if other people have had
 similar issues and looked for a similar solution, and to think about if
 there is a solution that fits within PostgreSQL and how it would work.

AFAIK, there is no way of doing it in postgresql. But I would love to see it 
happen. (I wish I could work on it...:-( )

 Shridhar

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[HACKERS] Embedded SQL - Unable to connect to PostgreSQL Database

2004-05-31 Thread vicky
Hello experts,
  I am trying to run sample PostgreSQL ECPG (Embedded SQL) programs on
RHL 9.0.

I am unable to connect to the PostgreSQL database (sirishadb) when I run
the program:


# su postgres (enter)

# /usr/bin/ecpg -t -I/usr/include/pgsql sample1.pgc (enter)

# gcc -g -o sample1 -I /usr/include/pgsql sample1.c -lecpg -lpq
(enter)

# ./sample1(enter)

Error Code: -220
Message: No such connection NULL in line 12 ,
Rows : 0
Warning:
sql error No such connection Null in line 18


I did a lot of googling, searched Usenet groups, and changed the
following:


*


In the source code (sample1.pgc) I tried out various
combinations of connect statements; they are:
-

exec sql connect to '[EMAIL PROTECTED]'  /* where sirishadb is
databasename */

exec sql connect to '[EMAIL PROTECTED]'  /* where sirishadb is
databasename and lucky is hostname of the server */

exec sql connect to tcp:postgresql://localhost:5432/sirishadb as
connect_2 user postgresql using post123


In  /var/lib/pgsql/data/postgresql.conf

tcpip_socket = true 

In  /var/lib/pgsql/data/pg_hba.conf

local all all trust
host all all 127.0.0.1 255.255.255.255 trust
host all all xxx.xxx.xxx.0 255.255.255.0 trust

In /etc/rc.d/init.d/postgresql 
--
In this file I added -i to the following statement

su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p
/usr/bin/postmaster -o '-p ${PGPORT} -i' start > /dev/null 2>&1"
< /dev/null

*


Please help me connect to the database. Thanks in advance.

Regards, 
Vikram



Source code of sample1.pgc
---
#include <stdlib.h>
#include <stdio.h>

exec sql include sqlca;

int main(void) {

exec sql connect to '[EMAIL PROTECTED]';

/* exec sql connect to tcp:postgresql://localhost:5432/sirishadb as
connect_2 user postgresql using post123; */

exec sql BEGIN WORK;

exec sql UPDATE XXchildren SET fname = 'Emma' WHERE age = 0;

printf("error code %ld, message %s, rows %ld, warning %c\n",
       sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc, sqlca.sqlerrd[2],
       sqlca.sqlwarn[0]);

exec sql COMMIT WORK;

exec sql disconnect all;

return EXIT_SUCCESS;

}

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Win32, PITR, nested transactions, tablespaces

2004-05-31 Thread Christopher Browne
The world rejoiced as [EMAIL PROTECTED] (Greg Stark) wrote:
 Tom Lane [EMAIL PROTECTED] writes:

 Greg Stark [EMAIL PROTECTED] writes:
  This is the only place where I see hardly any movement on major
  items the whole development cycle, then a rush of radical changes
  just before the freeze.
 
 [blink] There's been plenty of stuff done all through this
 development cycle (and previous ones too).  Read the CVS logs if
 you've forgotten.

 Sure, but that's parallel to what I'm saying. This is the only place
 I see Please delay the freeze so I can squeeze this major change in
 just before the release. In other projects I see Please hurry up
 and release so I can start committing major changes again.

 Perhaps it's an artifact of people doing most of their work offline
 and submitting patches, rather than using the CVS tree as their
 development environment. Or perhaps it's an artifact of ~nobody
 using the CVS version of postgres except for testing patches. Or
 perhaps it's a consequence of the freeze period being so long.

I did a compile the other day of the latest CVS version on AIX; I
wasn't seriously testing it in general, but was at least pleased to
see it passing regression cleanly.

One of the things that looks interesting is 2 Phase Commit.  THAT has
been holding off for nested transactions to get in place, as both
these features twiddle with how transactions work.  Regrettably,
integrating it all together is sure to take a while.  From what I see
now, I hope nested transactions make it into 7.5 so that 2PC can make
it into 7.6.

 Or perhaps the serious postgres developers are just so good that
 they're justified in being confident applying major changes just
 before a freeze.  Experience does seem to justify that somewhat;
 I've been repeatedly impressed at how such drastic changes seem to
 just work with hardly any changes.

I suspect we may need additional regression tests to check the
behaviour of lazy vacuum and ARC, since they are _usually_ supposed to
be pretty invisible to applications; the regular application and
reapplication of the set of regression tests has been pretty effective
at preventing the code from getting too far away from working validly
along the way.

 Fwiw, I do feel that 7.4 is pretty fresh. At least in my case I
 don't plan on upgrading to 7.5 immediately because 7.4 meets all our
 needs. When we upgrade it'll probably be for PITR, but we don't
 really need it yet.

ARC and lazy vacuum look like valuable things for the transactional
systems I'm working with.  It looks like we sometimes see delays
coming from checkpoints that 7.5 looks like it may moderate.  Mind
you, some systems only recently leapt to 7.4, so it would be quite
surprising to quickly leap again to 7.5.

PITR may turn out to be a "don't care" item if Slony1 winds up
providing its own approach to PITR.  (e.g. - if you write out to
disk the sets of SQL statements that are to be applied to a replica,
then the spooled sets of these statements represent a history of
updates that can be used to do PITR.)
-- 
cbbrowne,@,ntlug.org
http://www.ntlug.org/~cbbrowne/emacs.html
I'm sorry Dave, I can't let you do that.
Why don't you lie down and take a stress pill?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] pg_dump --comment?

2004-05-31 Thread Chris Campbell
Harald Fuchs wrote:
Why don't you just do
  ( echo "-- This is my comment"
    pg_dump whatever
  ) > dumpfile
?
How could I dump using the custom format, and then use dumpfile with
pg_restore to restore the dump? If I just prepend the comment to the
file, then pg_restore will choke, since the file won't be in the proper
custom format. I would have to remove the comment before sending the
file to pg_restore. Is there an easy way to do that, one that can be
easily automated and doesn't take a huge amount of time, given a 4 GB
dump file that must be modified?

Given those requirements, building a commenting mechanism into the 
custom format would work out very nicely, I think.

Thanks!
- Chris




[HACKERS] Fast index build vs. PITR

2004-05-31 Thread Tom Lane
I was just about to commit a patch that revises the btree index build
procedure as discussed here:
http://archives.postgresql.org/pgsql-general/2004-05/msg00480.php
specifically, not using shared buffers during index build and bypassing
WAL-logging in favor of just fsyncing the index file before commit.

I was actually writing the commit message when it occurred to me that
this would seriously break PITR.  If the WAL datastream doesn't contain
enough info to rebuild the index then rolling forward from a past backup
isn't gonna work.

I thought for a little bit about a magic "reconstruct the index" WAL
entry that would invoke the index build procedure in toto, but that
doesn't look like it will fly either.  (Two problems: during crash
recovery, you couldn't be sure that what's on disk for the underlying
table exactly matches the index you need to build --- it could be a
later state of the table; and besides, the environment of the WAL replay
process isn't capable of running user-defined functions, so it couldn't
work for functional indexes.)

So AFAICS, we've got to dump the index contents into WAL to support
PITR.  This is a tad annoying.

What I'm thinking about right now is tweaking the index-build code to
write to WAL only if it sees that PITR is actually in use.  It would
have to look at the GUC variables to determine whether WAL archiving
is enabled.  If archiving isn't turned on, then we could assume that
rollforward from a past backup isn't needed in this installation, and
use the WAL-less index build method.

Comments?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Tom Lane
Oliver Elphick [EMAIL PROTECTED] writes:
 On Mon, 2004-05-31 at 19:55, Tom Lane wrote:
 I can't duplicate that here.  It looks to me like the probable
 explanation is a broken or incompatible version of strerror_r() on your
 machine.  Does the failure go away if you build without thread-safety?

 Yes it does.
 I'll see if I can run with a debugging libc and find it.

First you might want to check which flavor of strerror_r() your platform
has --- does it return int or char* ?  The Linux man page for
strerror_r() says

   strerror_r() with prototype as given above is specified by SUSv3, and
   was in use under Digital Unix and HP Unix. An incompatible function,
   with prototype

   char *strerror_r(int errnum, char *buf, size_t n);

   is a GNU extension used by glibc (since 2.0), and must be regarded as
   obsolete in view of SUSv3. The GNU version may, but need not, use the
   user-supplied buffer. If it does, the result may be truncated in case
   the supplied buffer is too small. The result is always NUL-terminated.

The code we have appears to assume that the result will always be placed
in the user-supplied buffer, which is apparently NOT what the glibc
version does.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Claudio Natoli


 So what *is* your zone, exactly?  I don't see how you expect us to
 figure out why it's not matching when we don't know what it's not
 matching.

In my case, one of these should be right:

DEBUG:  Reject TZ Australia/NSW: at 1086048000 EST versus AUS Eastern
Standard Time 
DEBUG:  Reject TZ Australia/Sydney: at 1086048000 EST versus AUS
Eastern Standard Time 


I imagine Christopher ought to be matching one of these two; presumably with
an error message as per above except with WST and Western.

DEBUG:  Reject TZ Australia/Perth: at 1086048000 2004-06-01 08:00:00 std
versus 2004-06-01 10:00:00 std 
DEBUG:  Reject TZ Australia/West: at 1086048000 2004-06-01 08:00:00 std
versus 2004-06-01 10:00:00 std 


Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[HACKERS] sync vs. fsync question

2004-05-31 Thread Christopher Kings-Lynne
Hi,
I had this question posed to me on IRC and I didn't know the answer.
If all that is needed to ensure integrity is that the WAL is fsynced, 
what is wrong with just going:

wal_sync_method = fsync
fsync = false
??
Chris
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Gavin Sherry
On Mon, 31 May 2004, Tom Lane wrote:

[snip]

 I thought for a little bit about a magic reconstruct the index WAL
 entry that would invoke the index build procedure in toto, but that
 doesn't look like it will fly either.  (Two problems: during crash
 recovery, you couldn't be sure that what's on disk for the underlying
 table exactly matches the index you need to build --- it could be a
 later state of the table; and besides, the environment of the WAL replay
 process isn't capable of running user-defined functions, so it couldn't
 work for functional indexes.)

 So AFAICS, we've got to dump the index contents into WAL to support
 PITR.  This is a tad annoying.

Is it possible in this case to dump the index block by block into the log
after it has been generated? It seems to me this should reduce the
amount of data we write to WAL and (therefore) speed up the rebuild.

Gavin

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] Official Freeze Date for 7.5: July 1st, 2004

2004-05-31 Thread Oliver Jowett
Marc G. Fournier wrote:
If anyone is working on other features that they feel can be polished 
off before the July 1st deadline, we would be most happy to incorporate 
those as well, but do recommend submitting patches for review *sooner*, 
rather than later, so that any recommended corrections can be addressed 
before the deadline.
I have a patch for delayed planning of unnamed statements when using the 
extended query protocol that's in need of review:

  http://archives.postgresql.org/pgsql-patches/2004-05/msg00348.php
-O
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Tom Lane
Claudio Natoli [EMAIL PROTECTED] writes:
 In my case, one of these should be right:

 DEBUG:  Reject TZ Australia/NSW: at 1086048000 EST versus AUS Eastern
 Standard Time 
 DEBUG:  Reject TZ Australia/Sydney: at 1086048000 EST versus AUS
 Eastern Standard Time 

Hm, is this a Windows box?  We already suspected that we'd have to fudge
on matching the timezone abbreviations on Windows ...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] sync vs. fsync question

2004-05-31 Thread Bruce Momjian
Christopher Kings-Lynne wrote:
 Hi,
 
 I had this question posed to me on IRC and I didn't know the answer.
 
 If all that is needed to ensure integrity is that the WAL is fsynced, 
 what is wrong with just going:
 
 wal_sync_method = fsync
 fsync = false

wal_sync_method is only used if fsync is true.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Christopher Kings-Lynne
What I'm thinking about right now is tweaking the index-build code to
write to WAL only if it sees that PITR is actually in use.  It would
have to look at the GUC variables to determine whether WAL archiving
is enabled.  If archiving isn't turned on, then we could assume that
rollforward from a past backup isn't needed in this installation, and
use the WAL-less index build method.
Seems reasonable.
Chris
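[The GUC check Tom sketches could be as small as the following; this is purely illustrative, and the variable name is an assumption standing in for whatever the archiving GUC ends up being called:]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the WAL-archiving GUC variable. */
static const char *wal_archive_dest = NULL;

/* If no archive destination is configured, rollforward from a past
 * backup is impossible anyway, so the WAL-less build path is safe. */
static bool
index_build_needs_wal(void)
{
    return wal_archive_dest != NULL && wal_archive_dest[0] != '\0';
}
```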
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Bruce Momjian
Tom Lane wrote:
 Oliver Elphick [EMAIL PROTECTED] writes:
  On Mon, 2004-05-31 at 19:55, Tom Lane wrote:
  I can't duplicate that here.  It looks to me like the probable
  explanation is a broken or incompatible version of strerror_r() on your
  machine.  Does the failure go away if you build without thread-safety?
 
  Yes it does.
  I'll see if I can run with a debugging libc and find it.
 
 First you might want to check which flavor of strerror_r() your platform
 has --- does it return int or char* ?  The Linux man page for
 strerror_r() says
 
strerror_r() with prototype as given above is specified by  SUSv3,  and
was  in  use  under Digital Unix and HP Unix. An incompatible function,
with prototype
 
char *strerror_r(int errnum, char *buf, size_t n);
 
is a GNU extension used by glibc (since 2.0), and must be  regarded  as
obsolete  in view of SUSv3.  The GNU version may, but need not, use the
user-supplied buffer.  If it does, the result may be truncated in  case
the  supplied buffer is too small. The result is always NUL-terminated.
 
 The code we have appears to assume that the result will always be placed
 in the user-supplied buffer, which is apparently NOT what the glibc
 version does.

What is "may, but need not, use the user-supplied buffer" supposed to
mean in practical terms?  How do they expect us to use it?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Bruce Momjian
Christopher Kings-Lynne wrote:
  What I'm thinking about right now is tweaking the index-build code to
  write to WAL only if it sees that PITR is actually in use.  It would
  have to look at the GUC variables to determine whether WAL archiving
  is enabled.  If archiving isn't turned on, then we could assume that
  rollforward from a past backup isn't needed in this installation, and
  use the WAL-less index build method.
 
 Seems reasonable.

What happens if someone turns on archiving while the index is being
built?  Is that possible?

I assume if someone turns on archiving in postgresql.conf, sighups the
postmaster, then does a tar backup, they should be able to do archiving,
no?


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Neil Conway
Tom Lane wrote:
So I disagree with the premise.  Measuring these things in KB is not an
improvement.
I agree, although I think changing effective_cache_size to be 
measured in KB/MB is worth doing.

-Neil
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [HACKERS] Can't detect time zone

2004-05-31 Thread Claudio Natoli


Tom Lane writes:
 Hm, is this a Windows box?  We already suspected that we'd 
 have to fudge on matching the timezone abbreviations on Windows ...

In my case, yes, Win XP.

Cheers,
Claudio


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 The code we have appears to assume that the result will always be placed
 in the user-supplied buffer, which is apparently NOT what the glibc
 version does.

 What is "may, but need not, use the user-supplied buffer" supposed to
 mean in practical terms?  How do they expect us to use it?

AFAICS they expect you to use the function's return value.

The current PG code is really erroneous for *both* strerror_r specs,
since the SUS-spec version doesn't promise to put anything into the
buffer if it returns a failure code.  I think you will have to write
some autoconf code to detect which return type is provided, and do
the right things with the return value in both cases.
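[A portable wrapper along those lines might look like this sketch; it assumes autoconf's AC_FUNC_STRERROR_R, which defines STRERROR_R_CHAR_P when the GNU char *-returning variant is in use, and the wrapper name is made up:]

```c
#include <stdio.h>
#include <string.h>

/* Sketch of a wrapper that yields a usable message string under either
 * strerror_r flavor.  STRERROR_R_CHAR_P is the macro autoconf's
 * AC_FUNC_STRERROR_R defines for the GNU char *-returning version. */
static const char *
pg_strerror_r(int errnum, char *buf, size_t buflen)
{
#ifdef STRERROR_R_CHAR_P
    /* GNU flavor: may return a static string and leave buf untouched,
     * so use the return value, never the buffer directly. */
    return strerror_r(errnum, buf, buflen);
#else
    /* SUS flavor: returns int, and promises nothing about buf on
     * failure, so supply a fallback message ourselves. */
    if (strerror_r(errnum, buf, buflen) != 0)
        snprintf(buf, buflen, "unrecognized error %d", errnum);
    return buf;
#endif
}
```

[Callers would then always print the return value, which is correct under both specs.]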

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 I assume if someone turns on archiving in postgresql.conf, sighups the
 postmaster, then does a tar backup, they should be able to do archiving,
 no?

I would have zero problem with labeling the archive parameter as
changeable only at postmaster start.

regards, tom lane

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Alvaro Herrera
On Tue, Jun 01, 2004 at 11:34:15AM +1000, Gavin Sherry wrote:
 On Mon, 31 May 2004, Tom Lane wrote:
 
  I thought for a little bit about a magic reconstruct the index WAL
  entry that would invoke the index build procedure in toto, but that
  doesn't look like it will fly either.  (Two problems: during crash
  recovery, you couldn't be sure that what's on disk for the underlying
  table exactly matches the index you need to build --- it could be a
  later state of the table; and besides, the environment of the WAL replay
  process isn't capable of running user-defined functions, so it couldn't
  work for functional indexes.)
 
  So AFAICS, we've got to dump the index contents into WAL to support
  PITR.  This is a tad annoying.
 
 Is it possible in this case to dump the index block by block into the log
 after it has been generated? It seems to me this should reduce the
 amount of data we write to WAL and (therefore) speed up the rebuild.

Is this less log traffic?  You save a lot of per-record overhead, but
you have to save internal pages (which are not saved with the standard
code).  Plus it would be a lot different from standard btree WAL
traffic, so it'd be more code.

Maybe there's more to be saved by logging only leaf pages.  But then
there would be even more code to be able to reconstruct the index from
only leaf pages, and there are not that many internal pages either.

A completely different idea would be to log a logical index creation,
so that during normal recovery those entries are saved somewhere; after
the rest of WAL recovery is done, the system is taken into a more normal
post-recovery pre-usable state, on which those indexes are recreated
from user data.  This would be cheapest in WAL traffic, but probably
it'll also require more code and new hooks in the startup mechanism.
Also, it'd require examining later WAL entries that refer to the index
and acting accordingly (e.g. ignore the entry if it modifies the index, and
forget the creation if it's a DROP INDEX command.)

Not that I really like either of those ideas ... issuing normal WAL
index creation traffic if PITR is active is certainly the easiest way.

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
In Europe they call me Niklaus Wirth; in the US they call me Nickel's worth.
That's because in Europe they call me by name, and in the US by value!


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I assume if someone turns on archiving in postgresql.conf, sighups the
  postmaster, then does a tar backup, they should be able to do archiving,
  no?
 
 I would have zero problem with labeling the archive parameter as
 changeable only at postmaster start.

I guess the question is whether it would be possible to start/stop it at
other times.  And what process are we going to use to do a tar backup? 
Do they turn off archiving before doing the tar, or tell the system to
tar to another location?  

And you brought up the issue of how we feed multiple archive files
back into the xlog directory during restore if they don't all fit on the
disk.

I think we need to explore the procedures we are going to use for PITR.

Also, do we need to do the tar in a special way, like tarring up a
specific file first?


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] sync vs. fsync question

2004-05-31 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 what is wrong with just going:

 wal_sync_method = fsync
 fsync = false

fsync = false causes wal_sync_method to be ignored.  You get no syncing.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Tom Lane
Gavin Sherry [EMAIL PROTECTED] writes:
 So AFAICS, we've got to dump the index contents into WAL to support
 PITR.  This is a tad annoying.

 Is it possible in this case to dump the index block by block into the log
 after it has been generated?

That's what we do now, and it more or less doubles the amount of I/O
needed to create an index ...

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Fast index build vs. PITR

2004-05-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 A completely different idea would be to log a logical index creation,
 so that during normal recovery those entries are saved somewhere; after
 the rest of WAL recovery is done, the system is taken into a more normal
 post-recovery pre-usable state, on which those indexes are recreated
 from user data.

I think an actually implementable version of this would be:

1. Don't log any index operations at all in WAL.

2. When recovering from WAL, restore all the table contents by WAL
replay.  (This would of course include the system catalog contents that
describe the indexes.)  Then sit there and do a global REINDEX to
rebuild all the indexes.

This would gain a reduction of some percentage in WAL traffic, at the
cost of a hugely expensive recovery cycle any time you actually needed
to use the WAL.  I guess this could be attractive to some installations,
but I'm not sure very many people would want it ...

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [HACKERS] Official Freeze Date for 7.5: July 1st, 2004

2004-05-31 Thread Tom Lane
Oliver Jowett [EMAIL PROTECTED] writes:
 I have a patch for delayed planning of unnamed statements when using the 
 extended query protocol that's in need of review:

Right, I have it on my to-do list.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Nesting level in protocol?

2004-05-31 Thread Alvaro Herrera
Hackers,

Is there a way to add the current transaction nesting level in the be/fe
protocol without breaking compatibility?  AFAICS the ReadyForQuery
message is not extensible :-(

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
The problem with the future is that it keeps turning into the present
(Hobbes)


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Converting postgresql.conf parameters to kilobytes

2004-05-31 Thread Greg Stark
Neil Conway [EMAIL PROTECTED] writes:

 Tom Lane wrote:
  So I disagree with the premise.  Measuring these things in KB is not an
  improvement.
 
 I agree, although I think changing effective_cache_size to be measured in KB/MB
 is worth doing.

I have to say as a user the parameters that are measured in postgres blocks
are really annoying and confusing. Really really annoying and confusing.

If someone's playing with this I would suggest they should work something like
dd's parameters and take a unit suffix.  So you could specify effective_cache=500k or
effective_cache=1M or effective_cache=1G or whatever unit you wanted. 

And this should be true for _all_ parameters that measure space. Consistency
makes it much much easier for people to learn a new system.
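[A unit-suffix parser of the kind Greg describes could be sketched like so; this is a hypothetical helper, not actual GUC code, and the function name is made up:]

```c
#include <stdlib.h>
#include <ctype.h>

/* Hypothetical sketch of a dd-style size parser: accepts values like
 * "500k", "1M", "1G", or a bare number, and normalizes everything to
 * kilobytes.  Returns -1 for an unrecognized unit suffix. */
static long
parse_size_kb(const char *val)
{
    char   *end;
    long    n = strtol(val, &end, 10);

    switch (tolower((unsigned char) *end))
    {
        case '\0':  return n;                   /* bare number: assume KB */
        case 'k':   return n;                   /* kilobytes */
        case 'm':   return n * 1024L;           /* megabytes */
        case 'g':   return n * 1024L * 1024L;   /* gigabytes */
        default:    return -1;                  /* unrecognized suffix */
    }
}
```

[With something like this, effective_cache=1G would come out as 1048576 KB, and every space-measured parameter could share the one parser.]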

-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] SPI issue with nested xacts

2004-05-31 Thread Alvaro Herrera
I was playing with PL/pgSQL functions that would start and commit
transactions and got very weird behavior.  First I just took the check
for TransactionStmt out, so I could execute them.  With this change I
can write a function that opens a subtransaction, and it works, sort of;
I can open a subtransaction, but I can't open another one because
CommitTransactionCommand is not called, so when the next
BeginTransactionBlock is called the transaction state is not valid.

To get a sane behavior I had to modify SPI so that whenever a
TransactionStmt is executed, it issues CommitTransactionCommand()
immediately followed by StartTransactionCommand().  Again it seems to
work ... sort of.

The current problem is that a function along the lines of

begin   -- plpgsql's begin
start transaction;
commit transaction;
end;-- plpgsql's end

causes a SIGSEGV because the commit closes the outermost transaction,
because we were not in a transaction block when it started, only in a
transaction command.  At this point the SPI info is freed, and when the
function ends the SPI info is corrupted :-(

Of course, if I execute the function inside a transaction block it works
fine.

One way to fix this would be to automatically enclose calls to SPI in a
transaction block.  This may carry a nontrivial overhead, but only that
of starting a subtransaction (because the overhead of the main
transaction was already paid anyway).  Is this acceptable?  Other ideas?

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
Learn to be more ashamed before yourself than before others (Democritus)


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [HACKERS] Nesting level in protocol?

2004-05-31 Thread Oliver Jowett
Alvaro Herrera wrote:
Hackers,
Is there a way to add the current transaction nesting level in the be/fe
protocol without breaking compatibility?  AFAICS the ReadyForQuery
message is not expansible (sp?) :-(
Perhaps you could treat the nesting level as a runtime parameter and 
generate a ParameterStatus message whenever it changes? (and SHOW 
nesting_level sounds useful anyway)

-O
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] CVS tip problems

2004-05-31 Thread Oliver Elphick
On Tue, 2004-06-01 at 01:33, Tom Lane wrote:
 First you might want to check which flavor of strerror_r() your platform
 has --- does it return int or char* ?  The Linux man page for
 strerror_r() says

From the definition in /usr/include/string.h, glibc 2.3.2 still has the
version that returns char* 

-- 
Oliver Elphick  [EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/A54310EA  92C8 39E7 280E 3631 3F0E  1EC0 5664 7A2F A543 10EA
 
 Thou will show me the path of life; in thy presence 
  is fullness of joy; at thy right hand there are  
  pleasures for evermore. Psalms 16:11 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html