Re: [HACKERS] Is there really no interest in SQL Standard?

2011-09-15 Thread Brendan Jurd
On 16 September 2011 16:24, Susanne Ebrecht  wrote:
> Isn't it possible to create a closed mailing list - a list that won't get
> published - on which I can discuss SQL Standard stuff with the people who
> want to support me?
>
> I don't fear making decisions on my own - but speaking for the whole
> project without getting feedback is a worse feeling.
>
> Usually, when I feel unsure how I should decide, I just bother Peter - but
> I would prefer to have some more people behind me.

I for one would definitely be interested in reading such a list.

Cheers,
BJ

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Is there really no interest in SQL Standard?

2011-09-15 Thread Susanne Ebrecht

Hello all,

What I saw at the PHP unconference told me that I should ask you all again.

I feel lonely. Believe me, it is a bad feeling when it seems that nobody has
any interest in what you are doing.

For four years now I have been the PostgreSQL representative on the SQL
Standard committee.

Whenever I suggested talking about my work on the SQL committee at community
events, the event committee rejected it. That just showed me that nobody
really has any interest in my work.

I have now learned that such event committees are not always able to estimate
interest correctly.


The only two people who sometimes support me are David F. and Peter.

The next ISO meeting will be soon - and of course there are lots of drafts
that need decisions.

I am not allowed to share the drafts in public, because the drafts are
confidential. But I am allowed to share the drafts with the group of people
who are supporting me, and of course I am allowed to discuss them with that
group before I give my votes and comments.


Are David and Peter really the only ones who are interested?

I don't really want to believe that.

Isn't it possible to create a closed mailing list - a list that won't get
published - on which I can discuss SQL Standard stuff with the people who
want to support me?

I don't fear making decisions on my own - but speaking for the whole
project without getting feedback is a worse feeling.

Usually, when I feel unsure how I should decide, I just bother Peter - but
I would prefer to have some more people behind me.

Susanne

--
Susanne Ebrecht - 2ndQuadrant
PostgreSQL Development, 24x7 Support, Training and Services
www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Fujii Masao
On Thu, Sep 15, 2011 at 11:37 PM, Tom Lane  wrote:
> This seems like it's already predetermining the outcome of the argument
> about recovery.conf.  Mind you, I'm not unhappy with this choice, but
> it's hardly implementing only behavior that's not being debated.
>
> If we're satisfied with not treating recovery-only parameters differently
> from run-of-the-mill GUCs, this is fine.

Okay, we need to reach a consensus about the treatment of
recovery.conf.

We have three choices. Which do you like the best?

#1
Use an empty recovery.ready file to enter archive recovery. recovery.conf
is not read automatically. All recovery parameters are expected to be
specified in postgresql.conf. If you must specify them in recovery.conf,
you need to add "include 'recovery.conf'" to postgresql.conf. But note
that recovery.conf will not be renamed to recovery.done at the
end of recovery. This is what the patch I've posted does. This is the
simplest approach, but it might confuse people who use tools which
depend on recovery.conf.
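
To make #1 concrete, a minimal standby setup would look something like this
(just a sketch; it assumes the existing recovery.conf parameter names are
simply moved over unchanged):

    $ touch $PGDATA/recovery.ready

    # postgresql.conf
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.1.50 port=5432 user=foo'
    # or, if you insist on a separate file:
    # include 'recovery.conf'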

#2
Use an empty recovery.ready file to enter archive recovery. recovery.conf
is read automatically. You can specify recovery parameters in
recovery.conf without adding "include 'recovery.conf'" to
postgresql.conf.  But note that recovery.conf will not be renamed
to recovery.done at the end of recovery. If we adopt this, we might
need to implement what Dimitri suggested. I guess that is not so difficult.
http://archives.postgresql.org/pgsql-hackers/2011-09/msg00745.php

#3
Use the recovery.conf file to enter archive recovery. recovery.conf is read
automatically, and will be renamed to recovery.done at the end of
recovery. This is the least confusing approach for existing users, but
I guess we need to add lots of code (e.g., as Peter suggested, we might
need to invent a new context setting like PGC_RECOVERY) to address
the problem I pointed out before.
http://archives.postgresql.org/pgsql-hackers/2011-09/msg00482.php


If we want to use recovery.conf as a temporary configuration file for
recovery (i.e., a configuration file that disappears after use), we must
choose #3. If we can live with recovery.conf being treated as a permanent
configuration file but don't want to add "include 'recovery.conf'" to
postgresql.conf, #2 is best. If we don't use recovery.conf at all, or don't
mind editing postgresql.conf to include it, #1 is best. Which behavior
are you expecting?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Separating bgwriter and checkpointer

2011-09-15 Thread Fujii Masao
On Fri, Sep 16, 2011 at 7:53 AM, Simon Riggs  wrote:
> This patch splits bgwriter into 2 processes: checkpointer and
> bgwriter, seeking to avoid contentious changes. Additional changes are
> expected in this release to build upon these changes for both new
> processes, though this patch stands on its own as both a performance
> vehicle and in some ways a refactoring to simplify the code.

I like this idea of simplifying the code. How much performance gain can we
expect from this patch?

> Current patch has a bug at shutdown I've not located yet, but seems
> likely is a simple error. That is mainly because for personal reasons
> I've not been able to work on the patch recently. I expect to be able
> to fix that later in the CF.

You seem to have forgotten to include checkpointer.c and .h in the patch.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql setenv command

2011-09-15 Thread Andrew Dunstan
On Thu, September 15, 2011 6:10 pm, Josh Kupershmidt wrote:
> On Thu, Sep 15, 2011 at 10:46 AM, Andrew Dunstan 
> wrote:
>> this time with patch.
>
> I think help.c should document the \setenv command. And a link from
> the "Environment" section[1] of psql's doc page to the section about
> \setenv might help too.


Good point.

>
> The existing \set command lists all internal variables and their
> values when called with no arguments. Perhaps \setenv with no
> arguments should work similarly: we could display only those
> environment variables interesting to psql if there's too much noise.
> Or at least, I think we need some way to check the values of
> environment variables inside psql, if we're able to set them from
> inside psql. To be fair, I notice that \pset doesn't appear to offer a
> way to show its current values either, but maybe that should be fixed
> separately as well.

\! echo $foo

(which is how I tested the patch, of course)
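
For example, with the patch applied, a round trip looks roughly like this
(assuming the patch's syntax of \setenv followed by a name and an optional
value):

    \setenv foo bar
    \! echo $foo
    bar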

>
> Typo: s/vakue/value/
>


Yeah, I noticed that after, thanks.

cheers

andrew



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Initialization of ResultTupleSlot in AppendNode

2011-09-15 Thread Tom Lane
Amit Kapila  writes:
> I observed that during initialization of the planstate for an Append node, we
> allocate the ResultTupleSlot; however, it is used only to return a NULL slot
> to indicate that there are no more tuples.

> Is that right, or does it have some other purpose?

That also holds the plan's output tuple descriptor.  If you tried to
remove it, I think the ExecAssignResultTypeFromTL call would crash.
And if you removed *that*, upper nodes would get unhappy, cf
ExecGetResultType.

The use as an end-of-scan signal seems a bit vestigial, since we
could just as well return NULL, but it doesn't really cost enough
to be worth changing ...
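
(Roughly speaking - a simplified sketch, not the exact source - the
dependency looks like this:

    /* in ExecInitAppend */
    ExecInitResultTupleSlot(estate, &appendstate->ps);
    ...
    ExecAssignResultTypeFromTL(&appendstate->ps);   /* puts the tupdesc
                                                     * into ps_ResultTupleSlot */

    /* later, used by upper nodes */
    TupleDesc
    ExecGetResultType(PlanState *planstate)
    {
        return planstate->ps_ResultTupleSlot->tts_tupleDescriptor;
    }

so removing the slot would leave the descriptor with nowhere to live.)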

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql setenv command

2011-09-15 Thread Josh Kupershmidt
On Thu, Sep 15, 2011 at 10:46 AM, Andrew Dunstan  wrote:
> this time with patch.

I think help.c should document the \setenv command. And a link from
the "Environment" section[1] of psql's doc page to the section about
\setenv might help too.

The existing \set command lists all internal variables and their
values when called with no arguments. Perhaps \setenv with no
arguments should work similarly: we could display only those
environment variables interesting to psql if there's too much noise.
Or at least, I think we need some way to check the values of
environment variables inside psql, if we're able to set them from
inside psql. To be fair, I notice that \pset doesn't appear to offer a
way to show its current values either, but maybe that should be fixed
separately as well.

Typo: s/vakue/value/

Josh
--
[1] 
http://www.postgresql.org/docs/current/static/app-psql.html#APP-PSQL-ENVIRONMENT

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Perl xsubpp

2011-09-15 Thread Alex Hunsaker
On Thu, Sep 15, 2011 at 15:53, David E. Wheeler  wrote:
> On Sep 15, 2011, at 4:41 PM, Alex Hunsaker wrote:
>
>> ExtUtils searches @INC, privlibexp maybe we should do that?
>
> Yes, I just got an email from David Golden to that effect. So perhaps the 
> attached patch is better?

Close. It seems I was wrong about the typemap: ExtUtils::ParseXS does not
install a new one, so we still need to point to the one in privlib.
Also, xsubpp is not executable, so the test should be -r or something.

Also, I don't think we should change the configure switch tests to test XSUBPPDIR.

Those fixes, plus some minor typos, are in the attached.
*** a/src/pl/plperl/GNUmakefile
--- b/src/pl/plperl/GNUmakefile
***
*** 55,60  endif
--- 55,63 
  # where to find psql for running the tests
  PSQLDIR = $(bindir)
  
+ # where to find xsubpp for building XS.
+ XSUBPPDIR = $(shell $(PERL) -e 'use List::Util qw(first); print first { -r "$$_/ExtUtils/xsubpp" } @INC')
+ 
  include $(top_srcdir)/src/Makefile.shlib
  
  plperl.o: perlchunks.h plperl_opmask.h
***
*** 71,81  all: all-lib
  
  SPI.c: SPI.xs
  	@if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi
! 	$(PERL) $(perl_privlibexp)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@
  
  Util.c: Util.xs
  	@if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi
! 	$(PERL) $(perl_privlibexp)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@
  
  
  install: all install-lib install-data
--- 74,84 
  
  SPI.c: SPI.xs
  	@if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi
! 	$(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@
  
  Util.c: Util.xs
  	@if [ x"$(perl_privlibexp)" = x"" ]; then echo "configure switch --with-perl was not specified."; exit 1; fi
! 	$(PERL) $(XSUBPPDIR)/ExtUtils/xsubpp -typemap $(perl_privlibexp)/ExtUtils/typemap $< >$@
  
  
  install: all install-lib install-data
*** a/src/tools/msvc/Mkvcbuild.pm
--- b/src/tools/msvc/Mkvcbuild.pm
***
*** 13,18  use Project;
--- 13,20 
  use Solution;
  use Cwd;
  use File::Copy;
+ use Config;
+ use List::Util qw(first);
  
  use Exporter;
  our (@ISA, @EXPORT_OK);
***
*** 106,116  sub mkvcbuild
  (my $xsc = $xs) =~ s/\.xs/.c/;
  if (Solution::IsNewer("$plperlsrc$xsc","$plperlsrc$xs"))
  {
  print "Building $plperlsrc$xsc...\n";
  system( $solution->{options}->{perl}
. '/bin/perl '
. $solution->{options}->{perl}
!   . '/lib/ExtUtils/xsubpp -typemap '
. $solution->{options}->{perl}
. '/lib/ExtUtils/typemap '
. "$plperlsrc$xs "
--- 108,119 
  (my $xsc = $xs) =~ s/\.xs/.c/;
  if (Solution::IsNewer("$plperlsrc$xsc","$plperlsrc$xs"))
  {
+ my $xsubppdir = first { -e "$_\\ExtUtils\\xsubpp.BAT" } @INC;
  print "Building $plperlsrc$xsc...\n";
  system( $solution->{options}->{perl}
. '/bin/perl '
. $solution->{options}->{perl}
!   . "$xsubppdir/ExtUtils/xsubpp -typemap "
. $solution->{options}->{perl}
. '/lib/ExtUtils/typemap '
. "$plperlsrc$xs "

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Perl xsubpp

2011-09-15 Thread David E. Wheeler
On Sep 15, 2011, at 4:41 PM, Alex Hunsaker wrote:

> ExtUtils searches @INC, privlibexp maybe we should do that?

Yes, I just got an email from David Golden to that effect. So perhaps the 
attached patch is better?

Best,

David


xsubpp2.patch
Description: Binary data



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Perl xsubpp

2011-09-15 Thread Alex Hunsaker
On Thu, Sep 15, 2011 at 10:44, David E. Wheeler  wrote:
> Hackers,
>
> Since installing Perl 5.14.1, I installed a newer version of ExtUtils::ParseXS 
> from CPAN. I installed it with `make install UNINST=1`, which removes the 
> copy of xsubpp that ships with core Perl. This results in an error during 
> PostgreSQL `make`:
>
> make -C plperl install
> gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
> -Wdeclaration-after-statement -Wendif-labels -Wformat-security 
> -fno-strict-aliasing -fwrapv  -I. -I. -I../../../src/include 
> -I/usr/local/include/libxml2  -I/usr/local/include 
> -I/usr/local/lib/perl5/5.14.1/darwin-thread-multi-2level/CORE  -c -o plperl.o 
> plperl.c
> '/usr/local/bin/perl' /usr/local/lib/perl5/5.14.1/ExtUtils/xsubpp -typemap 
> /usr/local/lib/perl5/5.14.1/ExtUtils/typemap SPI.xs >SPI.c
> Can't open perl script "/usr/local/lib/perl5/5.14.1/ExtUtils/xsubpp": No such 
> file or directory
>
> I [asked][] Perl 5 Porters for the proper way to find xsubpp, and was 
> [told][] that it was probably best to look in @Config{qw(installsitebin 
> installvendorbin installbin)}.

Doesn't work for me :-( I have:
  'installbin' => '/usr/bin',
  'installsitebin' => '/usr/bin',
  'installvendorbin' => '/usr/bin',
  'installscript' => '/usr/bin/core_perl',
  'installprivlib' => '/usr/share/perl5/core_perl',
  'installsitescript' => '/usr/bin/site_perl',

$ ls /usr/bin/xsubpp
ls: cannot access /usr/bin/xsubpp: No such file or directory

$ ls /usr/bin/core_perl/xsubpp
/usr/bin/core_perl/xsubpp

The worst part is it tells me I need to configure with --with-perl.
It seems it's complaining that it couldn't find xsubpp - but I did
configure with Perl!

Normally it uses the one in /usr/share/perl5/core_perl/ExtUtils/xsubpp.

Also, it looks like it uses the wrong typemap file; it still uses the one
from privlib.

So then I tried to install the newer ExtUtils::ParseXS to see where it
installed xsubpp for me. It reports:
...
Installing /usr/share/perl5/site_perl/ExtUtils/xsubpp

Installing /usr/bin/site_perl/xsubpp

Looking at its Makefile, it looks like it installs xsubpp into
installsitescript. Seems install(site|vendor)bin isn't quite right :-(.

ExtUtils searches @INC, privlibexp maybe we should do that?

ExtUtils/MM_Unix.pm:

# line 3456
sub tool_xsubpp {

my @xsubpp_dirs = @INC;

# Make sure we pick up the new xsubpp if we're building perl.
unshift @xsubpp_dirs, $self->{PERL_LIB} if $self->{PERL_CORE};

foreach my $dir (@xsubpp_dirs) {
$xsdir = $self->catdir($dir, 'ExtUtils');
if( -r $self->catfile($xsdir, "xsubpp") ) {
last;
}
}
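
FWIW, a quick way to see what that @INC search would find on a given box is
a one-liner along the same lines, e.g.:

    perl -MList::Util=first -le 'print first { -r "$_/ExtUtils/xsubpp" } @INC'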

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ginfastupdate.. slow

2011-09-15 Thread Oleg Bartunov

Jesper,

are you sure you have autovacuum configured properly, so that the posting
lists don't grow too much? It's true that concurrency on the posting lists
isn't good, since entries are all appended to them.

Oleg
On Thu, 15 Sep 2011, Jesper Krogh wrote:


Hi List.

This is just an "observation" I'll try to reproduce it in a test set later.

I've been trying to performance-tune a database system which does
a lot of updates on GIN indexes. I currently have 24 workers running,
executing quite CPU-intensive stored procedures that help generate
the body for the GIN index (full-text search).

The system is all memory resident for the data that gets computed on
and there is a 1GB BBWC before data hits the disk-system. The iowait
is 5-10% while running.

The system is nearly twice as fast with fastupdate=off as with fastupdate=on.
Benchmark done on a 9.0.latest

System AMD Opteron, 4x12 cores @ 2.2ghz, 128GB memory.

It is probably not as surprising as it may seem, since the "fastupdate" is
about batching up in a queue for processing later, but when the "later"
arrives, concurrency seems to stop.

Is it worth a documentation comment?




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Double sorting split patch

2011-09-15 Thread Alexander Korotkov
On Thu, Sep 15, 2011 at 7:27 PM, Heikki Linnakangas <
heikki.linnakan...@enterprisedb.com> wrote:

> I've looked at the patch, and took a brief look at the paper - but I still
> don't understand the algorithm. I just can't get my head around the concepts
> of split pairs and left/right groups. Can you explain that in layman's
> terms? Perhaps an example would help?
>

In short, the algorithm works as follows:
1) Each box can be projected onto an axis as an interval. Box (x1,y1)-(x2,y2)
is projected onto the X axis as the interval (x1,x2) and onto the Y axis as
the interval (y1,y2). At the first step we search for splits of those
intervals and select the best one.
2) At the second step the produced split is converted back into terms of
boxes and ambiguities are resolved.

Let's look a little deeper at how the interval split search is performed, by
considering an example. We have the intervals (0,1), (1,3), (2,3), (2,4). We
assume the bounding intervals of the two groups to be (0,a) and (b,4). So "a"
can be some upper bound of an interval: {1,3,4}, and "b" can be some lower
bound of an interval: {0,1,2}.
We consider the following splits: each "a" with the greatest possible "b":
(0,1), (1,4)
(0,3), (2,4)
(0,4), (2,4)
and each "b" with the least possible "a". In this example those splits are:
(0,1), (0,4)
(0,1), (1,4)
(0,3), (2,4)
After removing duplicates we have the following splits:
(0,1), (0,4)
(0,1), (1,4)
(0,3), (2,4)
(0,4), (2,4)
The proposed algorithm finds these splits in a single pass over two arrays:
one sorted by the lower bounds of the intervals and another sorted by the
upper bounds.
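
To illustrate, here is a rough sketch (not the patch code) of the sweep over
the array sorted by lower bound, which yields the "each 'b' with least
possible 'a'" splits for the non-degenerate cases; the sweep over the array
sorted by upper bound is symmetric and yields the other half:

#include <stdio.h>

typedef struct
{
    double lower;
    double upper;
} SplitInterval;

/* "ent" must already be sorted ascending by lower bound */
static void
splits_by_lower_bound(const SplitInterval *ent, int n)
{
    /* greatest upper bound among entries forced into the left group */
    double  max_upper_so_far = ent[0].upper;
    int     i;

    for (i = 1; i < n; i++)
    {
        double  b = ent[i].lower;       /* candidate start of the right group */
        double  a = max_upper_so_far;   /* least "a" covering the forced-left entries */

        if (b != ent[i - 1].lower)      /* skip duplicate candidates */
            printf("left group (min, %g), right group (%g, max)\n", a, b);

        if (ent[i].upper > max_upper_so_far)
            max_upper_so_far = ent[i].upper;
    }
}

Running this over the example above (sorted by lower bound) prints the
(0,1),(1,4) and (0,3),(2,4) splits; the degenerate split where "b" equals the
global minimum is handled separately.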

--
With best regards,
Alexander Korotkov.


[HACKERS] ginfastupdate.. slow

2011-09-15 Thread Jesper Krogh

Hi List.

This is just an "observation" I'll try to reproduce it in a test set later.

I've been trying to performance-tune a database system which does
a lot of updates on GIN indexes. I currently have 24 workers running,
executing quite CPU-intensive stored procedures that help generate
the body for the GIN index (full-text search).

The system is all memory resident for the data that gets computed on
and there is a 1GB BBWC before data hits the disk-system. The iowait
is 5-10% while running.

The system is nearly twice as fast with fastupdate=off as with 
fastupdate=on.

Benchmark done on a 9.0.latest

System AMD Opteron, 4x12 cores @ 2.2ghz, 128GB memory.

It is probably not as surprising as it may seem, since the "fastupdate" is
about batching up in a queue for processing later, but when the "later"
arrives, concurrency seems to stop.

Is it worth a documentation comment?
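
(For anyone who wants to reproduce this: the knob I toggled is the GIN
storage parameter, e.g. something like

    ALTER INDEX my_gin_index SET (fastupdate = off);

where my_gin_index is just a placeholder for whatever the index is called.)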

--
Jesper

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch: Perl xsubpp

2011-09-15 Thread David E. Wheeler
Hackers,

Since installing Perl 5.14.1, I installed a newer version of ExtUtils::ParseXS 
from CPAN. I installed it with `make install UNINST=1`, which removes the copy 
of xsubpp that ships with core Perl. This results in an error during PostgreSQL 
`make`:

make -C plperl install
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wformat-security 
-fno-strict-aliasing -fwrapv  -I. -I. -I../../../src/include 
-I/usr/local/include/libxml2  -I/usr/local/include 
-I/usr/local/lib/perl5/5.14.1/darwin-thread-multi-2level/CORE  -c -o plperl.o 
plperl.c
'/usr/local/bin/perl' /usr/local/lib/perl5/5.14.1/ExtUtils/xsubpp -typemap 
/usr/local/lib/perl5/5.14.1/ExtUtils/typemap SPI.xs >SPI.c
Can't open perl script "/usr/local/lib/perl5/5.14.1/ExtUtils/xsubpp": No such 
file or directory

I [asked][] Perl 5 Porters for the proper way to find xsubpp, and was [told][] 
that it was probably best to look in @Config{qw(installsitebin installvendorbin 
installbin)}.

[asked]: 
http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2011-09/msg00501.html
[told]: 
http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2011-09/msg00686.html

The attached patch makes this change. I've tested it on Mac OS X and it works 
fine. Someone else will have to test it on Windows.

Best,

David


xsubpp.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Initialization of ResultTupleSlot in AppendNode

2011-09-15 Thread Amit Kapila
Hi All,

 

I observed that during initialization of the planstate for an Append node, we
allocate the ResultTupleSlot; however, it is used only to return a NULL slot
to indicate that there are no more tuples.

Is that right, or does it have some other purpose?

 

Amit

 



 



Re: [HACKERS] memory barriers (was: Yes, WaitLatch is vulnerable to weak-memory-ordering bugs)

2011-09-15 Thread Heikki Linnakangas

On 14.09.2011 23:29, Robert Haas wrote:

On Mon, Aug 8, 2011 at 7:47 AM, Robert Haas  wrote:

I've been thinking about this too and actually went so far as to do
some research and put together something that I hope covers most of
the interesting cases.  The attached patch is pretty much entirely
untested, but reflects my present belief about how things ought to
work.


And, here's an updated version, with some of the more obviously broken
things fixed.


s/atomic/barrier/


+/*
+ * A compiler barrier need not (and preferably should not) emit any actual
+ * machine code, but must act as an optimization fence: the compiler must not
+ * reorder loads or stores to main memory around the barrier.  However, the
+ * CPU may still reorder loads or stores at runtime, if the architecture's
+ * memory model permits this.
+ *
+ * A memory barrier must act as a compiler barrier, and in addition must
+ * guarantee that all loads and stores issued prior to the barrier are
+ * completed before any loads or stores issued after the barrier.  Unless
+ * loads and stores are totally ordered (which is not the case on most
+ * architectures) this requires issuing some sort of memory fencing
+ * instruction.


This seems like a strange way to explain the problem. I would suggest 
structuring those paragraphs along the lines of:


"
A PostgreSQL memory barrier guarantees that any loads/stores before the 
barrier are completely finished and visible to other CPUs, before the 
loads/stores after the barrier are performed.


That involves two things: 1. We must stop the compiler from rearranging 
loads/stores across the barrier. 2. We must stop the CPU from reordering 
the loads/stores across the barrier.

"

Do we have any use for compiler barriers that are not also memory 
barriers? If not, I would suggest not exposing the pg_compiler_barrier() 
macro, but keeping it as an implementation detail in the implementations 
of pg_memory_barrier().


Some examples on the correct usage of these barriers would be nice, too.
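
For instance, something along these lines (just a sketch of the classic
flag-and-payload pattern, not code from the patch; shared, compute_value()
and use() are made up):

    /* writer */
    shared->data = compute_value();
    pg_memory_barrier();    /* make data visible before the flag */
    shared->flag = true;

    /* reader */
    if (shared->flag)
    {
        pg_memory_barrier();    /* don't let the data read move ahead of the flag check */
        use(shared->data);
    }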

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for cursor calling with named parameters

2011-09-15 Thread Yeb Havinga

On 2011-09-15 16:31, Cédric Villemain wrote:

There also exists a mechanism to order the parameters of 'EXECUTE ...
USING ...' (it's using a cursor); can the current work benefit
EXECUTE USING, so that it can use named parameters?


I looked at it a bit, but it seems there is no benefit, since dynamic
SQL handling and cursor declaration/opening touch different code
paths in both the plpgsql grammar and the SPI functions that are called.


regards,
Yeb


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Double sorting split patch

2011-09-15 Thread Heikki Linnakangas

On 11.09.2011 22:30, Alexander Korotkov wrote:

Hackers,

I've got my patch with double sorting picksplit impementation for GiST into
more acceptable form. A little of testing is below. Index creation time is
slightly higher, but search is much faster. The testing datasets were
following:
1) uniform dataset - 10M rows
2) geonames points - 7.6M rows


I've looked at the patch, and took a brief look at the paper - but I 
still don't understand the algorithm. I just can't get my head around 
the concepts of split pairs and left/right groups. Can you explain that 
in layman's terms? Perhaps an example would help?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql setenv command

2011-09-15 Thread Andrew Dunstan
On Thu, September 15, 2011 10:44 am, Andrew Dunstan wrote:
>
> As discussed, patch attached.
>


this time with patch.



setenv.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql setenv command

2011-09-15 Thread Andrew Dunstan

As discussed, patch attached.

cheers

andrew


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Tom Lane
Fujii Masao  writes:
> It seems to need a bit more time until we've reached a consensus about
> the treatment of recovery.conf. How about committing the core patch
> first, and addressing the recovery.conf issue as a different patch later?

> The attached patch provides a core part of the feature, i.e., it moves
> every recovery parameters from recovery.conf to postgresql.conf.
> Even if you create recovery.conf, the server doesn't read it automatically.

> The patch renames recovery.conf to recovery.ready, so if you want to
> enter archive recovery or standby mode, you need to create
> recovery.ready file in the cluster data directory. Since recovery.ready is
> just a signal file, its contents have no effect.

This seems like it's already predetermining the outcome of the argument
about recovery.conf.  Mind you, I'm not unhappy with this choice, but
it's hardly implementing only behavior that's not being debated.

If we're satisfied with not treating recovery-only parameters differently
from run-of-the-mill GUCs, this is fine.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for cursor calling with named parameters

2011-09-15 Thread Cédric Villemain
2011/9/15 Yeb Havinga :
> Hello list,
>
> The following patch implements cursor calling with named parameters in
> addition to the standard positional argument lists.
>
> c1 cursor (param1 int, param2 int) for select * from rc_test where a >
> param1 and b > param2;
> open c1($1, $2);                     -- this is currently possible
> open c1(param2 := $2, param1 := $1); -- this is the new feature
>
> Especially for cursors with a lot of arguments, this increases readability
> of code. This was discussed previously in
> http://archives.postgresql.org/pgsql-hackers/2010-09/msg01433.php. We
> actually made two patches: one with => and then one with := notation.
> Attached is the patch with := notation.
>
> Is it ok to add it to the next commitfest?

I think it is, as you have provided a patch.

There also exists a mechanism to order the parameters of 'EXECUTE ...
USING ...' (it's using a cursor); can the current work benefit
EXECUTE USING, so that it can use named parameters?
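
(Today that is positional only, e.g. roughly:

    EXECUTE 'SELECT count(*) FROM rc_test WHERE a > $1 AND b > $2'
        INTO c USING x, y;

where c, x and y are plpgsql variables - the names are just for
illustration.)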



> regards,
> Yeb Havinga, Willem Dijkstra
>
> --
> Yeb Havinga
> http://www.mgrid.net/
> Mastering Medical Data
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>



-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Robert Haas
On Thu, Sep 15, 2011 at 5:58 AM, Peter Eisentraut  wrote:
> Alternatively, we could just forget about the whole thing and move
> everything to postgresql.conf and treat recovery.conf as a simple empty
> signal file.  I don't know if that's necessarily better.

Seems like it might be simpler.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Peter Eisentraut
On tor, 2011-09-15 at 16:54 +0900, Fujii Masao wrote:
> On Wed, Sep 14, 2011 at 6:33 PM, Peter Eisentraut  wrote:
> > On tis, 2011-09-13 at 17:10 +0100, Simon Riggs wrote:
> >> So treat postgresql.conf as if it has an automatic "include
> >> recovery.conf" in it. The file format is the same.
> >
> > Sounds good.  That would also have the merit that you could use, say,
> > different memory settings during recovery.
> 
> One problem of this is that recovery.conf is relative to the configuration
> file directory instead of data directory if we treat it as an "include" file.

Treat it *as if* it were an included file.  A special included file if
you will.

> If we'd like to treat recovery.conf as if it's under the data directory, I'm
> afraid that we should add complicated code to parse recovery.conf after
> the value of data_directory has been determined from postgresql.conf.
> Furthermore, what if recovery.conf has another setting of data_directory?

Perhaps that could be addressed by inventing a new context setting
PGC_RECOVERY so that you can only set sane values.

> Since recovery.conf is a configuration file, it's intuitive for me to put it
> in configuration file directory rather than data directory. So I'm not 
> inclined
> to treat recovery.conf as if it's under data directory. Is this OK?

It's not a configuration file, because it disappears after use.  (And a
lot of configuration file management systems would be really upset if we
had a configuration file that behaved that way.)  The whole point of
this exercise is allowing the permanent configuration file parameters to
be moved to a real configuration file (postgresql.conf), while keeping
the temporary settings separate.

Alternatively, we could just forget about the whole thing and move
everything to postgresql.conf and treat recovery.conf as a simple empty
signal file.  I don't know if that's necessarily better.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] pg_last_xact_insert_timestamp

2011-09-15 Thread Fujii Masao
On Wed, Sep 14, 2011 at 6:21 PM, Kyotaro HORIGUCHI
 wrote:
> Hi, This is a review for pg_last_xact_insert_timestamp patch.
> (https://commitfest.postgresql.org/action/patch_view?id=634)

Thanks for the review!

> Q1: The shmem entry for timestamp is not initialized on
> allocating. Is this OK? (I don't know that for OSs other than
> Linux) And zeroing double field is OK for all OSs?

CreateSharedBackendStatus() initializes those shmem entries by doing
MemSet(BackendStatusArray, 0, size). Do you think this is not enough?

> Nevertheless this is ok for all OSs, I don't know whether
> initializing TimestampTz(double, int64 is ok) field with 8 bytes
> zeros is OK or not, for all platforms. (It is ok for
> IEEE754-binary64).

Which code are you concerned about?

> == Modification detection protocol in pgstat.c
>
> In pgstat_report_xact_end_timestamp, `beentry->st_changecount
> protocol' is used. It is for avoiding reading halfway-updated
> beentry as described in pgstat.h. Meanwhile,
> beentry->st_xact_end_timestamp is not read or (re-)initialized in
> pgstat.c and xlog.c reads only this field of whole beentry and
> st_changecount is not get cared here..

No, st_changecount is used to read st_xact_end_timestamp.
st_xact_end_timestamp is read from the shmem to the local memory
in pgstat_read_current_status(), and this function always checks
st_changecount when reading the shmem value.
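
(Roughly, the reader side looks like this - a simplified sketch of the loop
in pgstat_read_current_status(), not the literal code:

    for (;;)
    {
        int     before = beentry->st_changecount;

        memcpy(localentry, beentry, sizeof(PgBackendStatus));

        if (before == beentry->st_changecount &&
            (before & 1) == 0)      /* even means no update was in progress */
            break;
        /* the copy may be inconsistent; retry */
    }

so a torn read of st_xact_end_timestamp is simply retried.)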

> == Code duplication in xact.c
>
> in xact.c, same lines inserted into the end of both IF and ELSE
> blocks.
>
> xact.c:1047>    pgstat_report_xact_end_timestamp(xlrec.xact_time);
> xact.c:1073>    pgstat_report_xact_end_timestamp(xlrec.xact_time);
>
> These two lines refer to xlrec.xact_time, both of which comes
> from xactStopTimestamp freezed at xact.c:986
>
> xact.c:986>     SetCurrentTransactionStopTimestamp();
> xact.c:1008>    xlrec.xact_time = xactStopTimestamp;
> xact.c:1051>    xlrec.xact_time = xactStopTimestamp;
>
> I think it is better to move this line to just after this ELSE
> block using xactStopTimestamp instead of xlrec.xact_time.

Okay, I've changed the patch in that way.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 13996,14001  SELECT set_config('log_statement_stats', 'off', false);
--- 13996,14004 
  pg_current_xlog_location
 
 
+ pg_last_xact_insert_timestamp
+
+
  pg_start_backup
 
 
***
*** 14049,14054  SELECT set_config('log_statement_stats', 'off', false);
--- 14052,14064 


 
+ pg_last_xact_insert_timestamp()
+ 
+timestamp with time zone
+Get last transaction log insert time stamp
+   
+   
+
  pg_start_backup(label text , fast boolean )
  
 text
***
*** 14175,14180  postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
--- 14185,14197 
 
  
 
+ pg_last_xact_insert_timestamp displays the time stamp of the last inserted
+ transaction. This is the time at which the commit or abort WAL record for that transaction was inserted.
+ If no transaction has been committed or aborted yet since the server started,
+ this function returns NULL.
+
+ 
+
  For details about proper usage of these functions, see
  .
 
*** a/doc/src/sgml/high-availability.sgml
--- b/doc/src/sgml/high-availability.sgml
***
*** 867,872  primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
--- 867,881 
   ps command (see  for details).
  
  
+  You can also calculate the lag in time stamp by comparing the last
+  WAL insert time stamp on the primary with the last WAL replay
+  time stamp on the standby. They can be retrieved using
+  pg_last_xact_insert_timestamp on the primary and
+  the pg_last_xact_replay_timestamp on the standby,
+  respectively (see  and
+   for details).
+ 
+ 
   You can retrieve a list of WAL sender processes via the
   
   pg_stat_replication view. Large differences between
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
***
*** 1066,1071  RecordTransactionCommit(void)
--- 1066,1074 
  
  			(void) XLogInsert(RM_XACT_ID, XLOG_XACT_COMMIT_COMPACT, rdata);
  		}
+ 
+ 		/* Save timestamp of latest transaction commit record */
+ 		pgstat_report_xact_end_timestamp(xactStopTimestamp);
  	}
  
  	/*
***
*** 1434,1439  RecordTransactionAbort(bool isSubXact)
--- 1437,1445 
  
  	(void) XLogInsert(RM_XACT_ID, XLOG_XACT_ABORT, rdata);
  
+ 	/* Save timestamp of latest transaction abort record */
+ 	pgstat_report_xact_end_timestamp(xlrec.xact_time);
+ 
  	/*
  	 * Report the latest async abort LSN, so that the WAL writer knows to
  	 * flush this abort. There's nothing to be gained by delaying this, sin

Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Dimitri Fontaine
Fujii Masao  writes:
> If we'd like to treat recovery.conf as if it's under the data directory, I'm
> afraid that we should add complicated code to parse recovery.conf after
> the value of data_directory has been determined from postgresql.conf.
> Furthermore, what if recovery.conf has another setting of data_directory?

We already do this dance for custom_variable_classes.  IIRC the only
thing you have to care about is setting the GUC before any other one.  I
guess you could just process the data_directory the same way, before the
main loop, and be done with it.

> Since recovery.conf is a configuration file, it's intuitive for me to put it
> in configuration file directory rather than data directory. So I'm not 
> inclined
> to treat recovery.conf as if it's under data directory. Is this OK?

I think that if we keep some compatibility with recovery.conf, then we
need to include finding it in the data_directory or the configuration
file directory, in that order, and with a LOG message that says where we
did find it.

I think we should *NOT* load both of them with some priority rules.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch for cursor calling with named parameters

2011-09-15 Thread Yeb Havinga

Hello list,

The following patch implements cursor calling with named parameters in 
addition to the standard positional argument lists.


c1 cursor (param1 int, param2 int) for select * from rc_test where a > 
param1 and b > param2;

open c1($1, $2); -- this is currently possible
open c1(param2 := $2, param1 := $1); -- this is the new feature

Especially for cursors with a lot of arguments, this increases 
readability of code. This was discussed previously in 
http://archives.postgresql.org/pgsql-hackers/2010-09/msg01433.php. We 
actually made two patches: one with => and then one with := notation. 
Attached is the patch with := notation.


Is it ok to add it to the next commitfest?

regards,
Yeb Havinga, Willem Dijkstra

--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data

diff --git a/src/pl/plpgsql/src/gram.y b/src/pl/plpgsql/src/gram.y
new file mode 100644
index 92b54dd..192f278
*** a/src/pl/plpgsql/src/gram.y
--- b/src/pl/plpgsql/src/gram.y
*** read_sql_expression(int until, const cha
*** 2335,2340 
--- 2335,2352 
  			  "SELECT ", true, true, NULL, NULL);
  }
  
+ /*
+  * Convenience routine to read a single unchecked expression with two possible
+  * terminators, returning an expression with an empty sql prefix.
+  */
+ static PLpgSQL_expr *
+ read_sql_one_expression(int until, int until2, const char *expected,
+ 		int *endtoken)
+ {
+ 	return read_sql_construct(until, until2, 0, expected,
+ 			  "", true, false, NULL, endtoken);
+ }
+ 
  /* Convenience routine to read an expression with two possible terminators */
  static PLpgSQL_expr *
  read_sql_expression2(int until, int until2, const char *expected,
*** check_labels(const char *start_label, co
*** 3384,3399 
  /*
   * Read the arguments (if any) for a cursor, followed by the until token
   *
!  * If cursor has no args, just swallow the until token and return NULL.
!  * If it does have args, we expect to see "( expr [, expr ...] )" followed
!  * by the until token.  Consume all that and return a SELECT query that
!  * evaluates the expression(s) (without the outer parens).
   */
  static PLpgSQL_expr *
  read_cursor_args(PLpgSQL_var *cursor, int until, const char *expected)
  {
  	PLpgSQL_expr *expr;
! 	int			tok;
  
  	tok = yylex();
  	if (cursor->cursor_explicit_argrow < 0)
--- 3396,3418 
  /*
   * Read the arguments (if any) for a cursor, followed by the until token
   *
!  * If cursor has no args, just swallow the until token and return NULL.  If it
!  * does have args, we expect to see "( expr [, expr ...] )" followed by the
!  * until token, where expr may be a plain expression, or a named parameter
!  * assignment of the form IDENT := expr. Consume all that and return a SELECT
!  * query that evaluates the expression(s) (without the outer parens).
   */
  static PLpgSQL_expr *
  read_cursor_args(PLpgSQL_var *cursor, int until, const char *expected)
  {
  	PLpgSQL_expr *expr;
! 	PLpgSQL_row *row;
! 	int tok;
! 	int argc = 0;
! 	char **argv;
! 	StringInfoData ds;
! 	char *sqlstart = "SELECT ";
! 	int startlocation = yylloc;
  
  	tok = yylex();
  	if (cursor->cursor_explicit_argrow < 0)
*** read_cursor_args(PLpgSQL_var *cursor, in
*** 3412,3417 
--- 3431,3439 
  		return NULL;
  	}
  
+ 	row = (PLpgSQL_row *) plpgsql_Datums[cursor->cursor_explicit_argrow];
+ 	argv = (char **) palloc0(sizeof(char *) * row->nfields);
+ 
  	/* Else better provide arguments */
  	if (tok != '(')
  		ereport(ERROR,
*** read_cursor_args(PLpgSQL_var *cursor, in
*** 3420,3429 
  		cursor->refname),
   parser_errposition(yylloc)));
  
! 	/*
! 	 * Read expressions until the matching ')'.
! 	 */
! 	expr = read_sql_expression(')', ")");
  
  	/* Next we'd better find the until token */
  	tok = yylex();
--- 3442,3527 
  		cursor->refname),
   parser_errposition(yylloc)));
  
! 	for (argc = 0; argc < row->nfields; argc++)
! 	{
! 		int argpos;
! 		int endtoken;
! 		PLpgSQL_expr *item;
! 
! 		if (plpgsql_isidentassign())
! 		{
! 			/* Named parameter assignment */
! 			for (argpos = 0; argpos < row->nfields; argpos++)
! if (strncmp(row->fieldnames[argpos], yylval.str, strlen(row->fieldnames[argpos])) == 0)
! 	break;
! 
! 			if (argpos == row->nfields)
! ereport(ERROR,
! 		(errcode(ERRCODE_SYNTAX_ERROR),
! 		 errmsg("cursor \"%s\" has no argument named \"%s\"",
! cursor->refname, yylval.str),
! 		 parser_errposition(yylloc)));
! 		}
! 		else
! 		{
! 			/* Positional parameter assignment */
! 			argpos = argc;
! 		}
! 
! 		/*
! 		 * Read one expression at a time until the matching endtoken. Checking
! 		 * the expressions is postponed until the positional argument list is
! 		 * made.
! 		 */
! 		item = read_sql_one_expression(',', ')', ",\" or \")", &endtoken);
! 
! 		if (endtoken == ')' && !(argc == row->nfields - 1))
! 			ereport(ERROR,
! 	(errcode(ERRCODE_SYNTAX_ERROR),
! 	 errmsg("not enough 

Re: [HACKERS] unite recovery.conf and postgresql.conf

2011-09-15 Thread Fujii Masao
On Wed, Sep 14, 2011 at 6:33 PM, Peter Eisentraut  wrote:
> On tis, 2011-09-13 at 17:10 +0100, Simon Riggs wrote:
>> So treat postgresql.conf as if it has an automatic "include
>> recovery.conf" in it. The file format is the same.
>
> Sounds good.  That would also have the merit that you could use, say,
> different memory settings during recovery.

One problem of this is that recovery.conf is relative to the configuration
file directory instead of data directory if we treat it as an "include" file.
If your configuration file directory is the same as the data directory, you
don't need to worry about this problem. But if you set data_directory to
the directory other than ConfigDir, the problem might break your tool
which depends on recovery.conf under the data directory.

If we'd like to treat recovery.conf as if it's under the data directory, I'm
afraid that we should add complicated code to parse recovery.conf after
the value of data_directory has been determined from postgresql.conf.
Furthermore, what if recovery.conf has another setting of data_directory?

Since recovery.conf is a configuration file, it's intuitive for me to put it
in configuration file directory rather than data directory. So I'm not inclined
to treat recovery.conf as if it's under data directory. Is this OK?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DBI-LINK not support special support?

2011-09-15 Thread Albe Laurenz
paulo matadr wrote:
[has problems displaying non-ASCII characters]

> Environment has been applied
> [postgres@gcomdesenv oracle]$ export
[...]
> declare -x NLS_LANG="AMERICAN_AMERICA.AL32UTF8"

The environment has not been applied correctly, because you
see question marks, which is Oracle's replacement for characters
it cannot translate. I don't know what went wrong. The environment
variable must be set when the postmaster is started.

[...]
> You can specify environment variables by passing them in YAML format
> as fifth argument to make_accessor_functions(), for example:
[...]
> ERROR:  error from Perl function "make_accessor_functions": error from
> Perl function "set_up_connection": error from Perl function
> "add_dbi_connection_environment": In dbi_link.add_dbi_connection_environment,
> settings is a >HASH<, not an array reference at line 8. at line 94. at line 35.

Your YAML is wrong.

> Any ideia for help

As Robert Haas mentioned, write to dbi-link-general:
http://lists.pgfoundry.org/mailman/listinfo/dbi-link-general

Yours,
Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] What Would You Like To Do?

2011-09-15 Thread Kaare Rasmussen

On 2011-09-14 17:27, David E. Wheeler wrote:

On Sep 14, 2011, at 5:49 AM, Kaare Rasmussen wrote:


[brief]: http://postgresopen.org/2011/schedule/presentations/83/

You list Job scheduling as one item here,

but not here

Here's my preliminary list:

Could you expand your idea about this here?

It was something suggested to me on IRC a few months ago, but I don't know who 
would do it. Also, I think that pgAgent might actually offer the functionality.

   http://www.pgadmin.org/docs/1.4/pgagent.html

I would vote for inclusion of such a feature in PostgreSQL.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers