Re: [HACKERS] weird buildfarm failures on arm/mipsel and --with-tcl

2007-01-08 Thread Stefan Kaltenbrunner
Tom Lane wrote:
> Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
>> one of my new buildfarm boxes (a Debian/Etch-based ARM box) is
>> sometimes failing to stop the database during the regression tests:
> 
>> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=quagga&dt=2007-01-08%2003:03:03
> 
>> this only seems to happen sometimes and only if --with-tcl is enabled on
>> quagga.
> 
>> lionfish (my mipsel box) is able to trigger that on every build if I
>> enable --with-tcl but it is nearly impossible to debug it there because
>> of the low amount of memory and diskspace it has.
> 
> Hm, could pl/tcl somehow be preventing the backend from exiting once
> it's run any pl/tcl stuff?  I have no idea why though, and even less
> why it wouldn't be repeatable. 
> 
>> After the stopdb failure we still have those processes running:
>> pgbuild   3488  0.0  2.4  43640  6300 ?        Ss   06:15   0:01
>> postgres: pgbuild pl_regression [local] idle
> 
> Can you get a stack trace from this process?

(gdb) bt
#0  0x406b9d80 in __pthread_sigsuspend () from /lib/libpthread.so.0
#1  0x406b8a7c in __pthread_wait_for_restart_signal () from
/lib/libpthread.so.0
#2  0x406b91f8 in pthread_onexit_process () from /lib/libpthread.so.0
#3  0x40438658 in exit () from /lib/libc.so.6
#4  0x40438658 in exit () from /lib/libc.so.6
Previous frame identical to this frame (corrupt stack?)



> 
>> pgbuild   3489  0.0  0.0      0     0 ?        Z    06:15   0:00
>> [postgres]
> 
> This is a bit odd ... if that process is a direct child of the
> postmaster it should have been reaped promptly.  Could it be a child
> of the other backend?  If so, why was it started?  Please try the
> ps again with whatever switch it needs to list parent process ID.

looks like you are right - the defunct 3489 seems to be a child of 3488:

 PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    1  3389 18341 18341 ?           -1 S     1001   0:03
/home/pgbuild/pgbuildfarm/HEAD/inst/bin/postgres -D data
 3389  3391  3391  3391 ?           -1 Ss    1001   0:00 postgres:
writer process
 3389  3392  3392  3392 ?           -1 Ss    1001   0:00 postgres: stats
collector process
 3389  3488  3488  3488 ?           -1 Ss    1001   0:01 postgres:
pgbuild pl_regression [local] idle
 3488  3489  3488  3488 ?           -1 Z     1001   0:00 [postgres]



Stefan


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [COMMITTERS] pgsql: Widen the money type to 64 bits.

2007-01-08 Thread Tom Lane
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> I'm sure it got stuck in the Maia filter (happens all the time, with
> delays of a couple of hours up to several days) or with the moderator
> (Steve), if the sender wasn't on the list's allow list; that can happen
> since it was a long time since he last committed.

D'Arcy hasn't committed anything since [ checks CVS history... ]
2003-02-27, so I'm betting on theory B.  But can we change that setup
for the -committers list?  Any currently-authorized committer should be
whitelisted whether he's committed anything recently or not.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [COMMITTERS] pgsql: Widen the money type to 64 bits.

2007-01-08 Thread Magnus Hagander
> > > This commit message is rather old, I remarked on its absence earlier, but
> > > better late than never I guess ;)
> > 
> > Not sure what you mean.  I just committed it recently.  I held off
> > until after the release as requested.  What do you mean by "old?"
> 
> > You committed it five days ago (the date on the email was Jan 2), but
> > it only arrived today.  It should have appeared within minutes of the
> > commit.
> 

I'm sure it got stuck in the Maia filter (happens all the time, with delays of
a couple of hours up to several days) or with the moderator (Steve), if the
sender wasn't on the list's allow list; that can happen since it was a long
time since he last committed.  Check the headers if you still have the
original mail around to see where it got stuck.

/Magnus


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] is "add MOVE point" in ToDo valid still?

2007-01-08 Thread Pavel Stehule

>Pavel Stehule wrote:
>> Hello,
>>
>> I am working on scrollable cursors, and the MOVE statement is related
>> to it.
>
>You mean we should add MOVE to pl/pgsql.  Yes, still on the TODO list,
>and I don't think it is completed --- not sure why it is so hard.
>
>--

It's a simple task, but it needs some enhanced SPI.  Next week I'll send a
patch for scrollable support in SPI, and after it is committed I can send a
patch for MOVE in plpgsql and scrollable support in plpgsql.
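(As an aside, a minimal sketch of the SQL-level MOVE that the plpgsql support
would mirror; the cursor and query here are made up:)

BEGIN;
DECLARE cur SCROLL CURSOR FOR SELECT generate_series(1, 100) AS n;
MOVE FORWARD 10 IN cur;    -- skip ten rows without fetching them
FETCH NEXT FROM cur;       -- returns 11
MOVE BACKWARD ALL IN cur;  -- reposition before the first row
COMMIT;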


Regards

Pavel Stehule




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] [PATCHES] vcbuild optional packages

2007-01-08 Thread Tom Lane
Magnus Hagander <[EMAIL PROTECTED]> writes:
> This patch fixes vcbuild so you can build without OpenSSL and libz
> should you want to. This disables the sslinfo and pgcrypto modules
> because they require the libraries to build at all.

Applied.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] README for vcbuild

2007-01-08 Thread Tom Lane
Magnus Hagander <[EMAIL PROTECTED]> writes:
>>> Attached is a simple README file (part from Dave P, if my memory is not
>>> mistaken) for the vcbuild stuff. Please put in src/tools/msvc.

Applied, with Mark K's fix and a bit of polishing of my own.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] ideas for auto-processing patches

2007-01-08 Thread Michael Glaesemann


On Jan 8, 2007, at 19:25 , Jim C. Nasby wrote:

> Actually, I see a point in both... I'd think you'd want to know if a
> patch worked against the CVS checkout it was written against.


Regardless, it's unlikely that the patch was tested against all of  
the platforms available on the build farm. If it fails on some of the  
build|patch farm animals, or if it fails due to bitrot, the point is  
it fails: whatever version the patch was generated against is pretty  
much moot: the patch needs to be fixed. (And isn't the version number  
included in the patch if generated as a diff anyway?)


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [COMMITTERS] pgsql: Widen the money type to 64 bits.

2007-01-08 Thread Bruce Momjian
D'Arcy J.M. Cain wrote:
> On Sun, 7 Jan 2007 21:04:09 -0800 (PST)
> Jeremy Drake <[EMAIL PROTECTED]> wrote:
> > This commit message is rather old, I remarked on its absence earlier, but
> > better late than never I guess ;)
> 
> Not sure what you mean.  I just committed it recently.  I held off
> until after the release as requested.  What do you mean by "old?"

You committed it five days ago (the date on the email was Jan 2), but
it only arrived today.  It should have appeared within minutes of the
commit.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] is "add MOVE point" in ToDo valid still?

2007-01-08 Thread Bruce Momjian
Pavel Stehule wrote:
> Hello,
> 
> I am working on scrollable cursors, and the MOVE statement is related to it.

You mean we should add MOVE to pl/pgsql.  Yes, still on the TODO list,
and I don't think it is completed --- not sure why it is so hard.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [PATCHES] SGML index build fix

2007-01-08 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Peter Eisentraut wrote:
> >> The problem is that this requires two runs even to proof the documentation,
> >> which I think no one wants.
> 
> > So what would the API be to signal you want a draft build?
> > gmake DRAFT="Y" html
> 
> I'd vote for
> 
>   gmake draft

OK, I used that syntax (and needed another use of recursion to do it).
Attached.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/Makefile
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/Makefile,v
retrieving revision 1.87
diff -c -c -r1.87 Makefile
*** doc/src/sgml/Makefile   7 Jan 2007 08:49:31 -   1.87
--- doc/src/sgml/Makefile   9 Jan 2007 03:13:06 -
***
*** 65,71 
  ## Man pages
  ##
  
! .PHONY: man
  
  DEFAULTSECTION := $(sqlmansect_dummy)
  
--- 65,71 
  ## Man pages
  ##
  
! .PHONY: man draft
  
  DEFAULTSECTION := $(sqlmansect_dummy)
  
***
*** 95,115 
  ifeq ($(vpath_build), yes)
@cp $(srcdir)/stylesheet.css .
  endif
  
  
  COLLATEINDEX := LC_ALL=C $(PERL) $(COLLATEINDEX) -f -g
  
! # If HTML.index does not exist, create a dummy bookindex.sgml.  During the
! # next build, create bookindex.sgml with the proper index contents.  A proper
! # bookindex.sgml is required to have an index in the output.
! ifeq (,$(wildcard HTML.index))
! bookindex.sgml:
!   $(COLLATEINDEX) -o $@ -N
  else
! bookindex.sgml: HTML.index
!   $(COLLATEINDEX) -i 'bookindex' -o $@ $<
  endif
  
  version.sgml: $(top_builddir)/src/Makefile.global
{ \
  echo ""; \
--- 95,132 
  ifeq ($(vpath_build), yes)
@cp $(srcdir)/stylesheet.css .
  endif
+ ifndef DRAFT
+   @cmp -s HTML.index.start HTML.index || $(MAKE) $*
+ endif
  
  
  COLLATEINDEX := LC_ALL=C $(PERL) $(COLLATEINDEX) -f -g
  
! draft:
! ifndef DRAFT
! ifneq ($(MAKECMDGOALS), draft)
! # Call ourselves with the DRAFT value set.  This seems to be the only
! # way to set gmake variables in a rule.
!   @$(MAKE) DRAFT="Y" $(MAKECMDGOALS)
  else
! # run default 'all' rule
!   @$(MAKE) DRAFT="Y" html
! endif
  endif
  
+ bookindex.sgml: HTML.index
+ # create a dummy bookindex.html
+   test -s HTML.index || $(COLLATEINDEX) -o $@ -N
+ # If HTML.index is valid, create a valid bookindex.sgml.  This 
+ # is required so the output has a proper index.
+   test ! -s HTML.index || $(COLLATEINDEX) -i 'bookindex' -o $@ $<
+ # save copy of HTML.index for later comparison
+   @cp HTML.index HTML.index.start
+ 
+ HTML.index:
+ # create HTML.index if it does not exist
+   @$(if $(wildcard HTML.index), , touch HTML.index)
+ 
  version.sgml: $(top_builddir)/src/Makefile.global
{ \
  echo ""; \
***
*** 141,155 
--- 158,184 
  
  %-A4.tex-ps: %.sgml $(ALLSGML) stylesheet.dsl bookindex.sgml
$(JADE.tex.call) -V texdvi-output -V '%paper-type%'=A4 -o $@ $<
+ ifndef DRAFT
+   @cmp -s HTML.index.start HTML.index || $(MAKE) $*
+ endif
  
  %-US.tex-ps: %.sgml $(ALLSGML) stylesheet.dsl bookindex.sgml
$(JADE.tex.call) -V texdvi-output -V '%paper-type%'=USletter -o $@ $<
+ ifndef DRAFT
+   @cmp -s HTML.index.start HTML.index || $(MAKE) $*
+ endif
  
  %-A4.tex-pdf: %.sgml $(ALLSGML) stylesheet.dsl bookindex.sgml
$(JADE.tex.call) -V texpdf-output -V '%paper-type%'=A4 -o $@ $<
+ ifndef DRAFT
+   @cmp -s HTML.index.start HTML.index || $(MAKE) $*
+ endif
  
  %-US.tex-pdf: %.sgml $(ALLSGML) stylesheet.dsl bookindex.sgml
$(JADE.tex.call) -V texpdf-output -V '%paper-type%'=USletter -o $@ $<
+ ifndef DRAFT
+   @cmp -s HTML.index.start HTML.index || $(MAKE) $*
+ endif
  
  %.dvi: %.tex-ps
@rm -f $*.aux $*.log
***
*** 291,297 
  # print
rm -f *.rtf *.tex-ps *.tex-pdf *.dvi *.aux *.log *.ps *.pdf *.out *.eps 
*.fot
  # index
!   rm -f HTML.index $(GENERATED_SGML)
  # text
rm -f INSTALL HISTORY regress_README
  # XSLT
--- 320,326 
  # print
rm -f *.rtf *.tex-ps *.tex-pdf *.dvi *.aux *.log *.ps *.pdf *.out *.eps 
*.fot
  # index
!   rm -f HTML.index HTML.index.start $(GENERATED_SGML)
  # text
rm -f INSTALL HISTORY regress_README
  # XSLT
Index: doc/src/sgml/docguide.sgml
===================================================================
RCS file: /cvsroot/pgsql/doc/src/sgml/docguide.sgml,v
retrieving revision 1.63
diff -c -c -r1.63 docguide.sgml
*** doc/src/sgml/docguide.sgml  15 Dec 2006 16:50:07 -  1.63
--- doc/src/sgml/docguide.sgml  9 Jan 2007 03:13:06 -
***
*** 542,552 
 
  
 
! When the HTML documentation is built, the process also generates
! the linking information for the index entries.  Thus, if you want
! your documentation to have a concept index at the end, you need to

Re: [HACKERS] pltcl regression failures with ORDER BY ... USING change

2007-01-08 Thread Tom Lane
Jeremy Drake <[EMAIL PROTECTED]> writes:
> It looks like pltcl regression tests are failing due to the recent ORDER
> BY ... USING change.

Ooops ... I checked the contrib regression tests but it didn't occur to
me to try pltcl :-(

regards, tom lane

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Gurjeet Singh

On 1/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:


(This is not a statement that I approve of the specific plugin hooks
proposed --- I don't particularly.  But if we can come up with something
a bit cleaner, that's how I'd approach it.)



I have another idea for making the hooks a bit cleaner; I will try that
and run it by you guys later today.

Best regards,


--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | yahoo }.com


Re: [HACKERS] weird buildfarm failures on arm/mipsel and --with-tcl

2007-01-08 Thread Tom Lane
Stefan Kaltenbrunner <[EMAIL PROTECTED]> writes:
> one of my new buildfarm boxes (a Debian/Etch-based ARM box) is
> sometimes failing to stop the database during the regression tests:

> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=quagga&dt=2007-01-08%2003:03:03

> this only seems to happen sometimes and only if --with-tcl is enabled on
> quagga.

> lionfish (my mipsel box) is able to trigger that on every build if I
> enable --with-tcl but it is nearly impossible to debug it there because
> of the low amount of memory and diskspace it has.

Hm, could pl/tcl somehow be preventing the backend from exiting once
it's run any pl/tcl stuff?  I have no idea why though, and even less
why it wouldn't be repeatable. 

> After the stopdb failure we still have those processes running:
> pgbuild   3488  0.0  2.4  43640  6300 ?        Ss   06:15   0:01
> postgres: pgbuild pl_regression [local] idle

Can you get a stack trace from this process?

> pgbuild   3489  0.0  0.0      0     0 ?        Z    06:15   0:00
> [postgres]

This is a bit odd ... if that process is a direct child of the
postmaster it should have been reaped promptly.  Could it be a child
of the other backend?  If so, why was it started?  Please try the
ps again with whatever switch it needs to list parent process ID.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] pltcl regression failures with ORDER BY ... USING change

2007-01-08 Thread Jeremy Drake
It looks like pltcl regression tests are failing due to the recent ORDER
BY ... USING change.

http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=mongoose&dt=2007-01-09%2002:30:01

-- 
Horse sense is the thing a horse has which keeps it from betting on
people.
-- W. C. Fields

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Load distributed checkpoint

2007-01-08 Thread Takayuki Tsunakawa
Happy new year

From: "Simon Riggs" <[EMAIL PROTECTED]>
> Have you tried setting deadline scheduler on the WAL device and CFQ on
> the data device? That should allow the I/Os to move through different
> queues and prevent interference.

No, I've not tried yet.  Inaam-san told me that Linux had a few I/O
schedulers but I'm not familiar with them.  I'll find information
about them (how to change the scheduler settings) and try the same
test.


- Original Message - 
From: "Simon Riggs" <[EMAIL PROTECTED]>
To: "Takayuki Tsunakawa" <[EMAIL PROTECTED]>
Cc: "ITAGAKI Takahiro" <[EMAIL PROTECTED]>;

Sent: Thursday, December 28, 2006 7:07 AM
Subject: Re: [HACKERS] Load distributed checkpoint


> On Mon, 2006-12-18 at 14:47 +0900, Takayuki Tsunakawa wrote:
>> Hello, Itagaki-san, all
>>
>> Sorry for my long mail.  I've had trouble in sending this mail because
>> it's too long for pgsql-hackers to accept (I couldn't find how large
>> mail is accepted.)  So I'm trying to send several times.
>> Please see the attachment for the content.
>
> Your results for fsync are interesting.
>
> I've noticed that a checkpoint seems to increase the activity on the WAL
> drive as well as increasing I/O wait times. That doesn't correspond to
> any real increase in WAL traffic I'm aware of.
>
> Have you tried setting deadline scheduler on the WAL device and CFQ on
> the data device? That should allow the I/Os to move through different
> queues and prevent interference.
>
> -- 
>  Simon Riggs
>  EnterpriseDB   http://www.enterprisedb.com
>
>
>



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] ideas for auto-processing patches

2007-01-08 Thread Jim C. Nasby
On Fri, Jan 05, 2007 at 11:02:32PM -0600, Andrew Dunstan wrote:
> Tom Lane wrote:
> > "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> >> Jim Nasby wrote:
> >>> More important, I see no reason to tie applying patches to pulling
> >>> from CVS. In fact, I think it's a bad idea: you want to build just
> >>> what's in CVS first, to make sure that it's working, before you start
> >>> testing any patches against it.
> >
> >> Actually, I think a patch would need to be designated against a
> >> particular
> >> branch and timestamp, and the buildfarm member would need to "update" to
> >> that on its temp copy before applying the patch.
> >
> > I think I like Jim's idea better: you want to find out if some other
> > applied patch has broken the patch-under-test, so I cannot see a reason
> > for testing against anything except branch tip.
> >
> > There certainly is value in being able to test against a non-HEAD branch
> > tip, but I don't see the point in testing against a back timestamp.
> >
> 
> OK, if the aim is to catch patch bitrot, then you're right, of course.

Actually, I see a point in both... I'd think you'd want to know if a patch
worked against the CVS checkout it was written against. But of course
each member would only need to test that once. You'd also want to set
something up to capture the exact timestamp that a repo was checked out
at so that you could submit that info along with your patch (btw, a plus
to subversion is that you'd be able to refer to the exact checkout with
a single version number).

But since setting that up would require non-trivial additional work, I'd
just save it for later and get testing against the latest HEAD up and
running.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] SGML index build fix

2007-01-08 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Peter Eisentraut wrote:
>> The problem is that this requires two runs even to proof the documentation,
>> which I think no one wants.

> So what would the API be to signal you want a draft build?
>   gmake DRAFT="Y" html

I'd vote for

gmake draft

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] Strange sort operators vs the regression tests

2007-01-08 Thread Tom Lane
There was some discussion recently of simplifying the sorting code (and
hopefully making it a tad faster) by eliminating support for using
randomly-chosen operators in ORDER BY ... USING, and requiring USING to
specify an operator that is the < or > member of some btree opclass.
This strikes me as a good idea in any case since a USING operator that
doesn't act like < or > is probably not going to yield a consistent sort
order.

However, when I went to do this as part of the NULLS FIRST/LAST + DESC
index order patch I'm working on, I found out that removing this feature
makes the regression tests fail: specifically, there are tests that
assume they can ORDER BY with these operators:

<(circle,circle)        circle_lt
<<(polygon,polygon)     poly_left
<(box,box)              box_lt
<<(point,point)         point_left

I thought for a bit about adding btree opclasses covering these cases,
but it seems a bit silly ... and actually I think the sorts on poly_left
and point_left are not even self-consistent because I don't think these
operators satisfy the trichotomy law.

What I'm inclined to do is change the tests a bit, eg do "ORDER BY
area(circle)", which is what's really happening with circle_lt anyway.
Anybody unhappy with that plan?
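(Concretely, something like the following, using the geometry regression
table name from memory; treat the exact test as illustrative:)

-- current style, which relies on ORDER BY ... USING with circle_lt:
SELECT * FROM circle_tbl ORDER BY f1 USING <;

-- proposed replacement, sorting on the quantity circle_lt compares anyway:
SELECT * FROM circle_tbl ORDER BY area(f1);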

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] -f option for pg_dumpall

2007-01-08 Thread Andreas Pflug
Jim C. Nasby wrote:
>
>
> It might make sense to provide a programmatic interface to pg_dump to
> provide tools like pgAdmin more flexibility. 
Are you talking about "pg_dump in a lib"? Certainly a good idea, because
it allows better integration (e.g. progress bar). 
> But it certainly doesn't make sense to re-create the dumping logic.
>
> In terms of integrating pg_dumpall and pg_dump; I don't really care if
> that happens, 
I can't make much sense of integrating pg_dumpall anywhere. Dumping
a whole cluster is mostly a planned job, not an interactive online one,
because its output usually won't be usable except for disaster recovery.
There's not much functionality to "re-create" here; the single exception
is extracting cluster-wide data (the -g option), which is why I mentioned
scripting. But apparently this didn't get into pgAdmin SVN any more, so I
need to retract this proposal.

Regards,
Andreas


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Bruce Momjian
Devrim GUNDUZ wrote:
-- Start of PGP signed section.
> Hi Bruce,
> 
> On Mon, 2007-01-08 at 11:35 -0500, Bruce Momjian wrote:
> 
> > OK, naming suggestions?
> 
> BTW, why do you keep those pages in your homepage, but not in
> postgresql.org? Just wondering.
> 
> --and personally, I'd prefer to see them in our (PG) web page.

Because the minute I add something to the queue, it has to be visible. 
Uploading it to postgresql.org adds an unnecessary delay, and deleting
it unnecessary overhead. I actually am momjian.postgresql.org, so I
don't see the issue of which machine it is on.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Devrim GUNDUZ
Hi Bruce,

On Mon, 2007-01-08 at 11:35 -0500, Bruce Momjian wrote:

> OK, naming suggestions?

BTW, why do you keep those pages in your homepage, but not in
postgresql.org? Just wondering.

--and personally, I'd prefer to see them in our (PG) web page.

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







Re: [HACKERS] -f option for pg_dumpall

2007-01-08 Thread Jim C. Nasby
On Fri, Jan 05, 2007 at 08:05:26PM +, Dave Page wrote:
> Andreas Pflug wrote:
> >Dave Page wrote:
> >>In pgAdmin we use pg_dump's -f option to write backup files. The IO
> >>streams are redirected to display status and errors etc. in the GUI.
> >>
> >>In order to enhance the interface to allow backup of entire clusters as
> >>well as role and tablespace definitions, we need to be able to get
> >>pg_dumpall to write it's output directly to a file in the same way,
> >>because we cannot redirect the child pg_dump IO streams (which also
> >>means we may miss errors, but I need to think about that some more).
> >>
> >>As far as I can see, adding a -f option to pg_dumpall should be straight
> >>forward, the only issue being that we'd need to pass pg_dump an
> >>additional (undocumented?) option to tell it to append to the output
> >>file instead of writing it as normal.
> >>
> >>Any thoughts or better ideas?
> >>  
> >Use pgAdmin's "create script" funcion on the server.
> 
> We said long ago we weren't going to replicate pg_dump functionality in 
> pgAdmin - a brief discussion on the pgadmin-hackers list earlier 
> indicates that people still feel the same way, and that time would be 
> better spend fixing pg_dump/pg_dumpall.

It might make sense to provide a programmatic interface to pg_dump to
provide tools like pgAdmin more flexibility. But it certainly doesn't
make sense to re-create the dumping logic.

In terms of integrating pg_dumpall and pg_dump, I don't really care if
that happens, but I would like to see pg_dumpall supporting all the
options that pg_dump does (a quick diff shows that pg_dumpall is
currently missing support for -f, -F, -v, -Z, -C (which may or may not
make sense), -E, -n & -t (might not make sense), -U and -W (probably
OK)).
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] (SETOF) RECORD AS complex_type

2007-01-08 Thread David Fetter
On Mon, Jan 08, 2007 at 05:25:17PM +0200, Hannu Krosing wrote:
> On Wed, 2006-12-27 at 14:06, David Fetter wrote:
> > Folks,
> > 
> > While using DBI-Link, I've noticed a little lacuna in how functions
> > returning (SETOF) RECORD work, namely, that you have to cast them to
> > explicit lists of columns, even when that list of columns corresponds
> > to an existing complex type.
> > 
> > What would be involved in fixing the casting operation so that the
> > following would work?
> > 
> > CREATE TYPE foo AS (
> > a INT4,
> > b INT8,
> > c POINT,
> > d TEXT
> > );
> > 
> > CREATE FUNCTION bar(output_type TEXT)
> > RETURNS SETOF RECORD
> > ...
> > 
> > SELECT * FROM bar('foo') AS foo;
> > 
> > Cheers,
> > D
> 
> using OUT parameters works nicely for me
> 
> hannu=# CREATE FUNCTION bar(IN cnt INT, OUT a INT4, OUT b INT8, OUT c
> POINT, OUT d TEXT)
> hannu-# RETURNS SETOF RECORD
> hannu-# LANGUAGE SQL
> hannu-# AS $$
> hannu$# SELECT '1'::INT4,'1'::INT8,'(1,1)'::POINT,'text'::TEXT FROM
> generate_series(1,3);

You're assuming here that you know at function creation time what the
structure of the returning rowset will be.  In the case of DBI-Link, I
don't.
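(For context, the difference in caller syntax, reusing the type and function
from the quoted message; today the column definition list is mandatory:)

-- what callers must write now, repeating columns the type already declares:
SELECT * FROM bar('foo') AS t(a INT4, b INT8, c POINT, d TEXT);

-- what the proposal would allow:
SELECT * FROM bar('foo') AS foo;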

Cheers,
D
> hannu$# $$;
> CREATE FUNCTION
> hannu=# select * from bar(1);
>  a | b |   c   |  d
> ---+---+---+--
>  1 | 1 | (1,1) | text
>  1 | 1 | (1,1) | text
>  1 | 1 | (1,1) | text
> (3 rows)
> 
> 
> 
> -- 
> 
> Hannu Krosing
> Database Architect
> Skype Technologies OÜ
> Akadeemia tee 21 F, Tallinn, 12618, Estonia
> 
> Skype me:  callto:hkrosing
> Get Skype for free:  http://www.skype.com
> 
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend

-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] weird buildfarm failures on arm/mipsel and --with-tcl

2007-01-08 Thread Stefan Kaltenbrunner
one of my new buildfarm boxes (a Debian/Etch-based ARM box) is
sometimes failing to stop the database during the regression tests:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=quagga&dt=2007-01-08%2003:03:03

this only seems to happen sometimes and only if --with-tcl is enabled on
quagga.

lionfish (my mipsel box) is able to trigger that on every build if I
enable --with-tcl but it is nearly impossible to debug it there because
of the low amount of memory and diskspace it has. (two consecutive
failures will run the kernel out of memory due to the resources consumed
by the still running processes).

After the stopdb failure we still have those processes running:

pgbuild   3389  0.0  1.5  39632  4112 ?        S    06:14   0:03
/home/pgbuild/pgbuildfarm/HEAD/inst/bin/postgres -D data
pgbuild   3391  0.0  0.9  39632  2540 ?        Ss   06:14   0:00
postgres: writer process
pgbuild   3392  0.0  0.5  11220  1348 ?        Ss   06:14   0:00
postgres: stats collector process
pgbuild   3488  0.0  2.4  43640  6300 ?        Ss   06:15   0:01
postgres: pgbuild pl_regression [local] idle
pgbuild   3489  0.0  0.0      0     0 ?        Z    06:15   0:00
[postgres]


Any ideas on how to debug that any further?


Stefan

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Mark/Restore and avoiding RandomAccess sorts

2007-01-08 Thread Jim C. Nasby
On Mon, Jan 08, 2007 at 10:37:25AM +, Heikki Linnakangas wrote:
> >Simon Riggs wrote:
> >>Implementing the variable mark/restore buffer as a dumb Tuplestore would
> >>mean that the space usage of the Sort could in worst case go as high as
> >>x2 total space. The worst case is where the inner scan is all a single
> >>value. The best case is where the inner scan is sufficiently unique over
> >>all its values that it never writes back to disk at all. 
> >>
> >>So a further refinement of this idea would be to simply defer the final
> >>merge operation for the sort until the history required for the Mark
> >>operation exceeded, say, 10% of the sort size. That would then be
> >>sufficient to improve performance for most common cases, without risking
> >>massive space overflow for large and highly non-unique data. There's no
> >>problem with running the final merge slightly later than before;
> >>everything's still there to allow it. Reusing space in the tuplestore is
> >>also straightforward since that's exactly what the final merge already
> >>does, so some rework of that code should be sufficient.
> 
> Should definitely be done by reusing the space in the tuplestore, we 
> don't want to use double the space we do now in the degenerate case.

Another idea comes to mind, which would apply to all sorts needing
random access.

Rather than completely copying every tuple to build a resultset you can
step through randomly, why not just build a list of tuple locations as
the sort returns results? That would allow seeking to any position in
the resultset with minimal overhead.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [COMMITTERS] pgsql: Widen the money type to 64 bits.

2007-01-08 Thread D'Arcy J.M. Cain
On Sun, 7 Jan 2007 21:04:09 -0800 (PST)
Jeremy Drake <[EMAIL PROTECTED]> wrote:
> This commit message is rather old, I remarked on its absence earlier, but
> better late than never I guess ;)

Not sure what you mean.  I just committed it recently.  I held off
until after the release as requested.  What do you mean by "old?"

-- 
D'Arcy J.M. Cain  |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Bruce Momjian
Heikki Linnakangas wrote:
> Bruce Momjian wrote:
> > Heikki Linnakangas wrote:
> >> Bruce Momjian wrote:
> >>> Simon Riggs wrote:
>  All have been awaiting review for at least a month (though in one case
>  the latest version is quite recent). They probably ought to be on the
>  hold queue; all are ready to be reviewed for final
>  application/rejection.
> 
>  I'd hasten to add that none of those are mine. My patches have received
>  good attention, so I'm not complaining just completing admin.
> >>> You might remember months ago that people were complaining I was pushing
> >>> things into CVS too quickly, so while the patches are in my mailbox,
> >>> they are not in the queue until I feel the community has the time to
> >>> focus on it.
> >> So, there's a queue of patches in your mailbox waiting to get to the 
> >> queue? A queue to the queue :). All the patches clearly need review, so 
> >> let's not rush them into the CVS, but it'd be nice to have them all in 
> >> one queue.
> > 
> > Right, because even the decision of whether they should be in the queue
> > is a decision for us.  The hold queue additions are less stringent than
> > the main patch queue.
> 
> I'm confused, I thought the difference between the pgpatches queue and 
> the pgpatches_hold queue is the release the patch is targeted for. If 
> there's a third queue for patches that need review before being added to 
> another queue, could we have that visible somewhere, so that we know 
> what's in it?

Well, sort of.  During 8.2 feature freeze the 8.2 hold queue was for
8.3, and the patches queue was for 8.2, but once we started 8.3, they
were both for 8.3.

> 
> >> Ps. I agree with the later comments that the naming of the two patch 
> >> queues is a bit confusing. Having queues named after the release numbers 
> >> the patches are targeted for seems like a good idea.
> > 
> > OK, naming suggestions?
> 
> The "8.3 patch queue", and the "8.4 patch queue"?

Not really, no, as outlined above.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Bruce Momjian
Dave Page wrote:
> Bruce Momjian wrote:
> 
> > Right, because even the decision of whether they should be in the queue
> > is a decision for us.  The hold queue additions are less stringent than
> > the main patch queue.
> 
> Isn't that always the case though, not just after FF when the hold queue
> starts getting activity again? That would imply the need to a permanent
> triage(?) queue, and a version specific one imho.

The hold queue is not used during normal development. I don't see value
in a triage queue.


-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Simon Riggs
On Mon, 2007-01-08 at 12:16 -0500, Tom Lane wrote:
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > On Mon, 2007-01-08 at 11:28 -0500, Bruce Momjian wrote:
> >> The complex part of this is that the feature requires patches to the
> >> backend, and has a /contrib component.
> 
> > The plugin approach is exactly what happened with the debugger. The
> > backend has an appropriate plugin API and the debugger is a plugin.
> 
> > The patch to the backend shouldn't be in contrib, definitely.
> 
> > I would say it's up to the installer to offer the opportunity to load the
> > adviser plugin, or not. I like plugins because they encourage faster
> > paced development, diversity and choice.
> 
> I would suggest that if we want to encourage faster development, we
> should do the same thing we did with the plpgsql debugger support:
> put the plugin hooks into the backend and keep the actual plugin(s)
> as separate pgfoundry projects.  That way the index advisor can have
> a release every few weeks if it needs it --- and it will, for awhile.
> Stuff in contrib is necessarily tied to the backend release cycle.
> 
> (This is not a statement that I approve of the specific plugin hooks
> proposed --- I don't particularly.  But if we can come up with something
> a bit cleaner, that's how I'd approach it.)

Sounds good to me.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Lukas Kahwe Smith

Andrew Dunstan wrote:


> The latter does not exist, AFAIK. Before feature freeze for cycle X, we
> don't usually hold patches for release X+1, as I understand it.
>
> In general, we should try to hold patches for as little time as
> possible. That way they don't go stale as easily.


I did not follow this thread closely, but it would be nice if someone
could compile all of these de facto standards into a wiki page.


regards,
Lukas

PS: Don't make me read this entire thread just to create this wiki page
myself :P


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Tom Lane
"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Mon, 2007-01-08 at 11:28 -0500, Bruce Momjian wrote:
>> The complex part of this is that the feature requires patches to the
>> backend, and has a /contrib component.

> The plugin approach is exactly what happened with the debugger. The
> backend has an appropriate plugin API and the debugger is a plugin.

> The patch to the backend shouldn't be in contrib, definitely.

> I would say it's up to the installer to offer the opportunity to load the
> adviser plugin, or not. I like plugins because they encourage faster
> paced development, diversity and choice.

I would suggest that if we want to encourage faster development, we
should do the same thing we did with the plpgsql debugger support:
put the plugin hooks into the backend and keep the actual plugin(s)
as separate pgfoundry projects.  That way the index advisor can have
a release every few weeks if it needs it --- and it will, for awhile.
Stuff in contrib is necessarily tied to the backend release cycle.

(This is not a statement that I approve of the specific plugin hooks
proposed --- I don't particularly.  But if we can come up with something
a bit cleaner, that's how I'd approach it.)

regards, tom lane

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Andrew Dunstan
Heikki Linnakangas wrote:

> I'm confused,

So I see.

> I thought the difference between the pgpatches queue and
> the pgpatches_hold queue is the release the patch is targeted for. If
> there's a third queue for patches that need review before being added to
> another queue, could we have that visible somewhere, so that we know
> what's in it?
>

>>
>> OK, naming suggestions?
>
> The "8.3 patch queue", and the "8.4 patch queue"?
>

The latter does not exist, AFAIK. Before feature freeze for cycle X, we
don't usually hold patches for release X+1, as I understand it.

In general, we should try to hold patches for as little time as
possible. That way they don't go stale as easily.

cheers

andrew


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Heikki Linnakangas

Bruce Momjian wrote:
> Heikki Linnakangas wrote:
>> Bruce Momjian wrote:
>>> Simon Riggs wrote:
>>>> All have been awaiting review for at least a month (though in one case
>>>> the latest version is quite recent). They probably ought to be on the
>>>> hold queue; all are ready to be reviewed for final
>>>> application/rejection.
>>>>
>>>> I'd hasten to add that none of those are mine. My patches have received
>>>> good attention, so I'm not complaining just completing admin.
>>> You might remember months ago that people were complaining I was pushing
>>> things into CVS too quickly, so while the patches are in my mailbox,
>>> they are not in the queue until I feel the community has the time to
>>> focus on it.
>> So, there's a queue of patches in your mailbox waiting to get to the
>> queue? A queue to the queue :). All the patches clearly need review, so
>> let's not rush them into the CVS, but it'd be nice to have them all in
>> one queue.
>
> Right, because even the decision of whether they should be in the queue
> is a decision for us.  The hold queue additions are less stringent than
> the main patch queue.

I'm confused, I thought the difference between the pgpatches queue and
the pgpatches_hold queue is the release the patch is targeted for. If
there's a third queue for patches that need review before being added to
another queue, could we have that visible somewhere, so that we know
what's in it?

>> Ps. I agree with the later comments that the naming of the two patch
>> queues is a bit confusing. Having queues named after the release numbers
>> the patches are targeted for seems like a good idea.
>
> OK, naming suggestions?

The "8.3 patch queue", and the "8.4 patch queue"?

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Simon Riggs
On Mon, 2007-01-08 at 11:28 -0500, Bruce Momjian wrote:
> Simon Riggs wrote:
> > On Sat, 2007-01-06 at 16:08 -0500, Bruce Momjian wrote:
> > 
> > > I have looked over this patch, and it completes part of this TODO item:
> > > 
> > > o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
> > >   ANALYZE, and CLUSTER
> > 
> > > It involves a patch to the backend, and a /contrib module to access it.
> > > 
> > > I think we have to decide if we want this, and whether it should be in
> > > /contrib or fully integrated into the backend.  I am thinking the API
> > > needs to be simpified, perhaps by removing the system table and having
> > > the recommendations just logged to the server logs.
> > 
> > The patch to the backend is in the form of a plugin API, which does
> > nothing when there is no plugin. IMHO there is a significant amount of
> > code there and it is too early to try to get all of that into the
> > backend, especially when more tested things like Tsearch2 haven't.
> > Plugins are cool because we can update them without needing to bounce a
> > production server, which means the code can evolve faster than it would
> > do if it was directly in the backend. (You do need to reconnect to allow
> > local_preload_libraries to be re-read). Tuning out the weird
> > recommendations will take some time/effort - I don't know there are any,
> > but then my gut tells me there very likely are some.
> > 
> > The output isn't a system table, it's a user-space table. The reason for
> > having an output table is that we can use multiple invocations of the
> > adviser to build up a set of new indexes for a complete workload.
> > Reading things back out of the log would make that more difficult, since
> > we really want this to be automated by pgAdmin et al.
> 
> The complex part of this is that the feature requires patches to the
> backend, and has a /contrib component.  If it could be just in /contrib,
> I agree we would just keep it there until there is a clear direction,
> but having it in both places seems difficult.  I don't think we can
> maintain a patch to the backend code in /contrib, so it would have to
> ship with our backend code.  That's why I was asking about getting it
> integrated fully.

The plugin approach is exactly what happened with the debugger. The
backend has an appropriate plugin API and the debugger is a plugin.

The patch to the backend shouldn't be in contrib, definitely.

I would say it's up to the installer to offer the opportunity to load the
adviser plugin, or not. I like plugins because they encourage faster
paced development, diversity and choice; e.g., multiple Java language
plugins give users choice. We could include an adviser plugin with the
main distribution, as happens with PL/pgSQL...

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Dave Page
Bruce Momjian wrote:

> Right, because even the decision of whether they should be in the queue
> is a decision for us.  The hold queue additions are less stringent than
> the main patch queue.

Isn't that always the case though, not just after FF when the hold queue
starts getting activity again? That would imply the need for a permanent
triage(?) queue, and a version-specific one imho.

Regards, Dave

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Bruce Momjian
Heikki Linnakangas wrote:
> Bruce Momjian wrote:
> > Simon Riggs wrote:
> >> All have been awaiting review for at least a month (though in one case
> >> the latest version is quite recent). They probably ought to be on the
> >> hold queue; all are ready to be reviewed for final
> >> application/rejection.
> >>
> >> I'd hasten to add that none of those are mine. My patches have received
> >> good attention, so I'm not complaining just completing admin.
> > 
> > You might remember months ago that people were complaining I was pushing
> > things into CVS too quickly, so while the patches are in my mailbox,
> > they are not in the queue until I feel the community has the time to
> > focus on it.
> 
> So, there's a queue of patches in your mailbox waiting to get to the 
> queue? A queue to the queue :). All the patches clearly need review, so 
> let's not rush them into the CVS, but it'd be nice to have them all in 
> one queue.

Right, because even the decision of whether they should be in the queue
is a decision for us.  The hold queue additions are less stringent than
the main patch queue.

> Ps. I agree with the later comments that the naming of the two patch 
> queues is a bit confusing. Having queues named after the release numbers 
> the patches are targeted for seems like a good idea.

OK, naming suggestions?

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Bruce Momjian
Simon Riggs wrote:
> On Sat, 2007-01-06 at 16:08 -0500, Bruce Momjian wrote:
> 
> > I have looked over this patch, and it completes part of this TODO item:
> > 
> > o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
> >   ANALYZE, and CLUSTER
> 
> > It involves a patch to the backend, and a /contrib module to access it.
> > 
> > I think we have to decide if we want this, and whether it should be in
> > /contrib or fully integrated into the backend.  I am thinking the API
> > needs to be simplified, perhaps by removing the system table and having
> > the recommendations just logged to the server logs.
> 
> The patch to the backend is in the form of a plugin API, which does
> nothing when there is no plugin. IMHO there is a significant amount of
> code there and it is too early to try to get all of that into the
> backend, especially when more tested things like Tsearch2 haven't.
> Plugins are cool because we can update them without needing to bounce a
> production server, which means the code can evolve faster than it would
> do if it was directly in the backend. (You do need to reconnect to allow
> local_preload_libraries to be re-read). Tuning out the weird
> recommendations will take some time/effort - I don't know there are any,
> but then my gut tells me there very likely are some.
> 
> The output isn't a system table, it's a user-space table. The reason for
> having an output table is that we can use multiple invocations of the
> adviser to build up a set of new indexes for a complete workload.
> Reading things back out of the log would make that more difficult, since
> we really want this to be automated by pgAdmin et al.

The complex part of this is that the feature requires patches to the
backend, and has a /contrib component.  If it could be just in /contrib,
I agree we would just keep it there until there is a clear direction,
but having it in both places seems difficult.  I don't think we can
maintain a patch to the backend code in /contrib, so it would have to
ship with our backend code.  That's why I was asking about getting it
integrated fully.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] (SETOF) RECORD AS complex_type

2007-01-08 Thread Hannu Krosing
On Wed, 2006-12-27 at 14:06, David Fetter wrote:
> Folks,
> 
> While using DBI-Link, I've noticed a little lacuna in how functions
> returning (SETOF) RECORD work, namely, that you have to cast them to
> explicit lists of columns, even when that list of columns corresponds
> to an existing complex type.
> 
> What would be involved in fixing the casting operation so that the
> following would work?
> 
> CREATE TYPE foo AS (
> a INT4,
> b INT8,
> c POINT,
> d TEXT
> );
> 
> CREATE FUNCTION bar(output_type TEXT)
> RETURNS SETOF RECORD
> ...
> 
> SELECT * FROM bar('foo') AS foo;
> 
> Cheers,
> D

using OUT parameters works nicely for me

hannu=# CREATE FUNCTION bar(IN cnt INT, OUT a INT4, OUT b INT8, OUT c
POINT, OUT d TEXT)
hannu-# RETURNS SETOF RECORD
hannu-# LANGUAGE SQL
hannu-# AS $$
hannu$# SELECT '1'::INT4,'1'::INT8,'(1,1)'::POINT,'text'::TEXT FROM
generate_series(1,3);
hannu$# $$;
CREATE FUNCTION
hannu=# select * from bar(1);
 a | b |   c   |  d
---+---+---+--
 1 | 1 | (1,1) | text
 1 | 1 | (1,1) | text
 1 | 1 | (1,1) | text
(3 rows)



-- 

Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me:  callto:hkrosing
Get Skype for free:  http://www.skype.com


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [BUGS] BUG #2873: Function that returns an empty set with a 'not null' domain errors in 8.2 but not 8.1

2007-01-08 Thread Tom Lane
"Jonathan Hull" <[EMAIL PROTECTED]> writes:
> The key feature for the error is that when a result structure (e.g., pg_foo)
> is defined with a domain type that is not null, only PG 8.2 errors if the
> result is an empty set.

The problem is explained well enough by this comment in plpgsql's code
for FOR-over-query:

/*
 * If the query didn't return any rows, set the target to NULL and return
 * with FOUND = false.
 */

At the time this code was written, there weren't any potential negative
side-effects of trying to set a row value to all NULLs, but now it's
possible that that fails because of domain constraints.

I think the idea was to ensure that a record variable would have the
correct structure (matching the query output) post-loop, even if the
query produced zero rows.  But it's not clear that that is really
useful for anything, given plpgsql's dearth of introspection facilities.
So we could make Jonathan's problem go away if we just take out the
assignment of nulls, and say that FOR over no rows leaves the record
variable unchanged.  The documentation doesn't specify the current
behavior.
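
(A sketch of the failure mode with invented names; untested, but this is the
shape of case the comment above is about:)

CREATE DOMAIN int_notnull AS integer NOT NULL;
CREATE TABLE foo_tbl (id int_notnull);

CREATE FUNCTION scan_foo() RETURNS void AS $$
DECLARE
    r foo_tbl%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM foo_tbl LOOP
        RAISE NOTICE 'id = %', r.id;
    END LOOP;
    -- when foo_tbl is empty, the FOR sets r to all NULLs per the comment
    -- above, and the domain's NOT NULL constraint rejects that on 8.2
END;
$$ LANGUAGE plpgsql;

SELECT scan_foo();   -- fails on 8.2 when foo_tbl is empty, per the report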

Looking through the code, I see another place that does the same thing:
FETCH from a cursor, when the cursor has no more rows to return.  It's
a bit harder to argue that it's sane to leave the variable unchanged
in this case.  However, the documentation doesn't actually promise that
the target gets set to null in this case either.

Thoughts?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [PATCHES] Patch to log usage of temporary files

2007-01-08 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Bill Moran wrote:
> In response to Tom Lane <[EMAIL PROTECTED]>:
> 
> > Bill Moran <[EMAIL PROTECTED]> writes:
> > > Andrew Dunstan <[EMAIL PROTECTED]> wrote:
> > >>> Might be more robust to say
> > >>> if (trace_temp_files >= 0)
> > 
> > > I specified in the GUC config that minimum allowable value is -1.
> > 
> > I'd still tend to go with Andrew's suggestion because it makes this
> > particular bit of code self-defending against bad values.  Yes, it's
> > reasonably safe given that bit of coding way over yonder in guc.c,
> > but there's no particularly good reason why this code has to depend
> > on that to avoid doing something stupid.  And it's easier to understand
> > too --- you don't have to go looking in guc.c to convince yourself it's
> > safe.
> 
> Ahh ... well, I've probably already argued about it more than it's worth.
> The patch is easy enough to adjust, find attached.
> 
> -- 
> Bill Moran
> Collaborative Fusion Inc.

[ Attachment, skipping... ]

> 
> ---(end of broadcast)---
> TIP 7: You can help support the PostgreSQL project by donating at
> 
> http://www.postgresql.org/about/donate

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]

2007-01-08 Thread Kenneth Marshall
One problem with only putting this information in the system logs
is that when we provide database services to a member of our
community, we do not actually give them an account on the DB server
or log server. This means that this very useful information would
need to be passed through an intermediary or another tool developed
to allow access to this information. I think that having this available
from a table would be very nice. My two cents.
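
(For readers skimming: a rough sketch of the workflow described in the quoted
message below.  The advise_index table name comes from that description; the
queried table and the plugin-loading details are invented here.)

-- with the adviser plugin loaded for the session (the thread mentions
-- local_preload_libraries, re-read on a fresh connection), just EXPLAIN:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- index candidates that made it into the final plan accumulate here:
SELECT * FROM advise_index;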

Ken

On Sat, Jan 06, 2007 at 04:08:24PM -0500, Bruce Momjian wrote:
> 
> I have looked over this patch, and it completes part of this TODO item:
> 
> o Add SET PERFORMANCE_TIPS option to suggest INDEX, VACUUM, VACUUM
>   ANALYZE, and CLUSTER
> 
> Here is the foundation of it:
> 
>   For an incoming EXPLAIN command, the planner generates the plan and, if
>   the Index Adviser is enabled, then the query is sent to the Index
>   Adviser for any suggestions it can make. The Adviser derives a set of
>   potentially useful indexes (index candidates) for this query by
>   analyzing the query predicates. These indexes are inserted into the
>   system catalog as virtual indexes; that is, they are not created on
>   disk.
>   
>   Then, the query is again sent to the planner, and this time the planner
>   makes its decisions taking the just-created virtual indexes into account
>   too. All index candidates used in the final plan represent the
>   recommendation for the query and are inserted into the advise_index
>   table by the Adviser.
>   
>   The gain of this recommendation is estimated by comparing the execution
>   cost difference of this plan to the plan generated before virtual
>   indexes were created.
> 
> It involves a patch to the backend, and a /contrib module to access it.
> 
> I think we have to decide if we want this, and whether it should be in
> /contrib or fully integrated into the backend.  I am thinking the API
> needs to be simplified, perhaps by removing the system table and having
> the recommendations just logged to the server logs.
> 
> ---
> 
> Gurjeet Singh wrote:
> > Hi All,
> > 
> >Please find attached the latest version of the patch. It
> > is based on REL8_2_STABLE.
> > 
> >It includes a few bug fixes and an improvement to the size
> > estimation function. It also includes a work-around for the problem we
> > were facing earlier in xact.c; it now poses as a PL/xxx module by
> > wrapping the BIST()/RARCST() calls in an SPI_connect()/SPI_finish()
> > block.
> > 
> >Please note that the sample_*.txt files in the contrib module,
> > which show a few different sample runs, may be a little out of date.
> > 
> > Best regards,
> > 
> > 
> > -- 
> > [EMAIL PROTECTED]
> > [EMAIL PROTECTED] gmail | hotmail | yahoo }.com
> 
> [ Attachment, skipping... ]
> 
> > 
> > ---(end of broadcast)---
> > TIP 5: don't forget to increase your free space map settings
> 
> -- 
>   Bruce Momjian   [EMAIL PROTECTED]
>   EnterpriseDBhttp://www.enterprisedb.com
> 
>   + If your life is a hard drive, Christ can be your backup. +
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
> 
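As a rough illustration of the workflow described in the quoted message above, and of Ken's point about table access, a hypothetical session might look like the following. The advise_index table is named in the description; how the Adviser is enabled and what columns the table exposes are assumptions here.

    -- Hypothetical session; enabling the Adviser (GUC, LOAD, or otherwise)
    -- is not specified in the description above and is not shown.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
    -- Per the description, the Adviser analyzes the predicates, creates
    -- virtual indexes, replans, and records the ones the final plan used:
    SELECT * FROM advise_index;   -- column layout is an assumption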

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] -f option for pg_dumpall

2007-01-08 Thread Dave Page
Dave Page wrote:
>> I don't object to it in principle, but I think a bit more thought is
>> needed as to what's the goal.  A stupid "append" option would be enough
>> for pg_dumpall's current capabilities (ie, text output only) --- but is
>> it reasonable to consider generalizing -Fc and -Ft modes to deal with
>> multiple databases, and if so how would that need to change pg_dump's
>> API?  (I'm not at all sure this is feasible, but let's think about it
>> before plastering warts onto pg_dump, not after.)
> 
> Hmm, OK. I'll need to have a good look at the code before I can even
> think about commenting on that, which will have to wait until after I've
> finished bundling releases.

And having done so, I agree that without significant effort it isn't
really feasible to allow each archive format to be closed and re-opened
across multiple invocations of pg_dump and pg_dumpall, or to make the
formats support multiple databases and global objects (though the globals
can effectively live in the default DB, of course) within a single
archive. I'm fairly certain it would be easier to merge the two programs
as originally suggested, though that does indeed look trickier (and more
dangerous) than I originally envisaged.

How about adding the append option, but leaving it undocumented? That
way, if anyone gets the itch to do a full rewrite in the future, we
aren't necessarily committed to supporting an option we no longer want.

Regards, Dave.


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Mark/Restore and avoiding RandomAccess sorts

2007-01-08 Thread Heikki Linnakangas

Bruce Momjian wrote:
> I saw no replies to this.

Sounds like a good idea to me.

(further comments below)

> Simon Riggs wrote:
> > Implementing the variable mark/restore buffer as a dumb Tuplestore would
> > mean that the space usage of the Sort could in the worst case go as high
> > as 2x total space. The worst case is where the inner scan is all a single
> > value. The best case is where the inner scan is sufficiently unique over
> > all its values that it never writes back to disk at all.
> >
> > So a further refinement of this idea would be to simply defer the final
> > merge operation for the sort until the history required for the Mark
> > operation exceeded, say, 10% of the sort size. That would then be
> > sufficient to improve performance for most common cases, without risking
> > massive space overflow for large and highly non-unique data. There's no
> > problem with running the final merge slightly later than before;
> > everything's still there to allow it. Reusing space in the tuplestore is
> > also straightforward since that's exactly what the final merge already
> > does, so some rework of that code should be sufficient.

Should definitely be done by reusing the space in the tuplestore; we
don't want to use double the space we do now in the degenerate case.

> > This is a separate, but related idea of being able to avoid
> > mark/restores completely when the outer scan is provably unique.

We probably wouldn't need to try to avoid the mark/restore completely,
if the buffering scheme has low enough overhead when restore is not called.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
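For readers unfamiliar with the scenario, the kind of plan being discussed arises from queries like the one below (table and column names are purely illustrative, and the planner may of course choose a different join method). A merge join has to mark and restore its inner input whenever the outer side can contain duplicate join keys, which is what currently forces the inner sort to remain random-access.

    -- Illustrative only: if the planner picks a merge join here, the side
    -- joined on l.order_id can repeat values, so the other side's sorted
    -- input must support mark/restore (today: a random-access sort).
    EXPLAIN
    SELECT o.id, l.qty
    FROM   order_lines l
    JOIN   orders      o ON o.id = l.order_id;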

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] 8.3 pending patch queue

2007-01-08 Thread Heikki Linnakangas

Bruce Momjian wrote:
> Simon Riggs wrote:
> > All have been awaiting review for at least a month (though in one case
> > the latest version is quite recent). They probably ought to be on the
> > hold queue; all are ready to be reviewed for final
> > application/rejection.
> >
> > I'd hasten to add that none of those are mine. My patches have received
> > good attention, so I'm not complaining, just completing admin.
>
> You might remember months ago that people were complaining I was pushing
> things into CVS too quickly, so while the patches are in my mailbox,
> they are not in the queue until I feel the community has the time to
> focus on them.

So, there's a queue of patches in your mailbox waiting to get to the
queue? A queue to the queue :). All the patches clearly need review, so
let's not rush them into CVS, but it'd be nice to have them all in one
queue.

PS. I agree with the later comments that the naming of the two patch
queues is a bit confusing. Having queues named after the release numbers
the patches are targeted for seems like a good idea.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] [PATCHES] SGML index build fix

2007-01-08 Thread Peter Eisentraut
On Monday, 8 January 2007 05:10, Bruce Momjian wrote:
> Here is a patch that runs the build twice when HTML.index does not
> exist, and once every time after that.  This is not ideal, but it is a
> start.

The problem is that this requires two runs even to proof the documentation,
which I think no one wants.

> ! # If HTML.index is zero length, create a dummy bookindex.sgml
> ! test -s HTML.index || $(COLLATEINDEX) -o $@ -N
> ! # If HTML.index is valid, create valid bookindex.sgml.  This 
> ! # is required so the output has a proper index.
> ! test ! -s HTML.index || $(COLLATEINDEX) -i 'bookindex' -o $@ $<

Please indent the comments properly so they don't appear in the output.

> ! HTML.index:
> ! test -f HTML.index || (touch HTML.index && $(MAKE) $(MAKECMDGOALS))

I think this is partially redundant.  If HTML.index exists, then this
rule will never be called.

> ! rm -f HTML.manifest *.html *.gif bookindex.skip

I don't see bookindex.skip mentioned anywhere else.  Left over from a
previous version?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] security definer default for some PL languages (SQL/PSM)?

2007-01-08 Thread Albe Laurenz
Peter Eisentraut wrote:
>> SQL/PSM default for SQL procedures are SECURITY DEFINER (like views),
> 
> I can't find this in the standard.  Where did you get this 
> information?

I only have a draft version of SQL:2003, which says in the 'Foundation'
book, chapter 11.50 ("<SQL-invoked routine>"), about the "<rights clause>"
(which can be "SQL SECURITY INVOKER" or "SQL SECURITY DEFINER") in
Syntax Rule 19) b):

"If R is an SQL routine, then if <rights clause> is not specified,
then SQL SECURITY DEFINER is implicit."

It adds, however, in chapter 4.27.2, that
"An SQL routine is an SQL-invoked routine whose <language clause>
specifies SQL."

Rule 20) b) of chapter 11.50 says that

"If R is an external routine, then if <rights clause> is not
specified, then EXTERNAL SECURITY IMPLEMENTATION DEFINED is implicit."

Here, "An external routine is one whose <language clause> does not
specify SQL."

"Implementation defined" here means the obvious thing.


So one can make a case that SQL functions should be SECURITY DEFINER by
default, but for all other procedural languages the standard explicitly
sets no rules.

Coming from a UNIX background, I find SECURITY INVOKER the natural
default, and the standard's decision surprises me. I guess it is
counter-intuitive to most people, and moreover following the standard
here would break compatibility with current behaviour.
I think it is wise to break with the standard in this case, SECURITY
INVOKER being the safer option.

It should be noted, however, that Oracle's PL/SQL functions have
AUTHID DEFINER by default, which corresponds to our SECURITY DEFINER.

Yours,
Laurenz Albe
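To make the difference concrete, this is roughly what the two options look like in PostgreSQL today, where SECURITY INVOKER is the default (the function and table names are made up for illustration):

    -- With no SECURITY clause, PostgreSQL uses SECURITY INVOKER: the function
    -- runs with the privileges of the caller.
    CREATE FUNCTION lookup_salary(emp integer) RETURNS numeric
        LANGUAGE sql
        AS 'SELECT salary FROM payroll WHERE id = $1';

    -- SECURITY DEFINER must be requested explicitly; the function then runs
    -- with the privileges of its owner (the behaviour the standard would make
    -- the default for SQL routines, and what Oracle's AUTHID DEFINER does).
    CREATE FUNCTION lookup_salary_definer(emp integer) RETURNS numeric
        LANGUAGE sql SECURITY DEFINER
        AS 'SELECT salary FROM payroll WHERE id = $1';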

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] is "add MOVE point" in ToDo valid still?

2007-01-08 Thread Pavel Stehule

Hello,

I am working on scrollable cursors, and the MOVE statement is related to that work.

Regards
Pavel Stehule
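For reference, a minimal sketch of the scrollable-cursor and MOVE behaviour in question (using a catalog table purely as a convenient example):

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT relname FROM pg_class ORDER BY relname;
    MOVE FORWARD 10 IN c;     -- reposition the cursor without returning rows
    FETCH 5 FROM c;           -- returns rows 11 through 15
    MOVE BACKWARD ALL IN c;   -- back to before the first row; needs SCROLL
    COMMIT;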

_
Chcete sdilet sve obrazky a hudbu s prateli? http://messenger.msn.cz/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly