Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-12 Thread Etsuro Fujita
(2014/12/12 11:33), Etsuro Fujita wrote:
 (2014/12/12 11:19), Tom Lane wrote:
 Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 (2014/12/12 10:37), Tom Lane wrote:
 Yeah, this is clearly a thinko: really, nothing in the planner should
 be using get_parse_rowmark().  I looked around for other errors of the
 same type and found that postgresGetForeignPlan() is also using
 get_parse_rowmark().  While that's harmless at the moment because we
 don't support foreign tables as children, it's still wrong.  Will
 fix that too.

 I don't think we need to fix that too.  In order to support that, I'm
 proposing to modify postgresGetForeignPlan() in the following way [1]
 (see fdw-inh-5.patch).

 My goodness, that's ugly.  And it's still wrong, because this is planner
 code so it shouldn't be using get_parse_rowmark at all.  The whole point
 here is that the rowmark info has been transformed into something
 appropriate for the planner to use.  While that transformation is
 relatively trivial today, it might not always be so.
 
 OK, I'll update the inheritance patch on top of the revision you'll make.

Thanks for your speedy work.

While updating the inheritance patch, I noticed that the fix for
postgresGetForeignPlan() is not right.  Since PlanRowMarks for foreign
tables get the ROW_MARK_COPY markType during preprocess_rowmarks(),
we can't get the locking strength from the PlanRowMarks, IIUC.  In order
to get the locking strength, I think we need to see the RowMarkClauses
and thus still need to use get_parse_rowmark() in
postgresGetForeignPlan(), though I agree with you that that is ugly.
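
To make that concrete, here is a minimal sketch (simplified, with assumed
details; not the actual patch) of what postgresGetForeignPlan() would do to
recover the locking strength from the parse-time rowmark:

    /*
     * The PlanRowMark for a foreign child has been flattened to
     * ROW_MARK_COPY, so the locking strength has to come from the
     * parse-time RowMarkClause instead.
     */
    RowMarkClause *rc = get_parse_rowmark(root->parse, baserel->relid);

    if (rc != NULL)
    {
        /* Reflect the requested strength in the remote query. */
        switch (rc->strength)
        {
            case LCS_FORUPDATE:
                appendStringInfoString(&sql, " FOR UPDATE");
                break;
            case LCS_FORSHARE:
                appendStringInfoString(&sql, " FOR SHARE");
                break;
            default:
                break;
        }
    }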

Thanks,

Best regards,
Etsuro Fujita


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical column ordering

2014-12-12 Thread Andres Freund
On 2014-12-10 19:06:28 -0800, Josh Berkus wrote:
 On 12/10/2014 05:14 PM, Stephen Frost wrote:
  * Andres Freund (and...@2ndquadrant.com) wrote:
   But the scheduling of commits with regard to the 9.5 schedule actually
   opens a relevant question: When are we planning to release 9.5? Because
   if we try ~ one year from now it's a whole different ballgame than if we
   try to go back to september. And I think there's pretty good arguments
   for both.
  This should really be on its own thread for discussion...  I'm leaning,
  at the moment at least, towards the September release schedule.  I agree
  that having a later release would allow us to get more into it, but
  there's a lot to be said for the consistency we've kept up over the past
  few years with a September release (our last non-September release was 8.4).
 
 Can we please NOT discuss this in the thread about someone's patch?  Thanks.

Well, it's relevant for the arguments made about the patch's future...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review of Refactoring code for sync node detection

2014-12-12 Thread Heikki Linnakangas

On 12/12/2014 04:29 AM, Michael Paquier wrote:

On Thu, Dec 11, 2014 at 10:07 PM, Heikki Linnakangas 
hlinnakan...@vmware.com wrote:


I propose the attached (I admit I haven't tested it).


Actually if you do it this way I think that it would be worth adding the
small optimization Fujii-san mentioned upthread: if priority is equal to 1,
we leave the loop earlier and return the pointer immediately. All those
things gathered give the attached patch, which I actually tested FWIW with
multiple standbys and multiple entries in s_s_names.


Ok, committed.
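
For anyone following along, here is a self-contained sketch of the early-exit
pattern in question (illustrative names only; this is not the committed
syncrep.c code):

    #include <stddef.h>

    typedef struct Standby
    {
        int pid;                    /* 0 means the slot is unused */
        int sync_standby_priority;  /* 0 means async */
    } Standby;

    /*
     * Pick the active standby with the lowest priority value.  Priority 1
     * is the best possible, so return as soon as we see it instead of
     * scanning the remaining slots.
     */
    static Standby *
    pick_sync_standby(Standby *slots, size_t n)
    {
        Standby *result = NULL;
        int      best = 0;
        size_t   i;

        for (i = 0; i < n; i++)
        {
            Standby *s = &slots[i];

            if (s->pid == 0 || s->sync_standby_priority == 0)
                continue;   /* skip unused slots and async standbys */

            if (s->sync_standby_priority == 1)
                return s;   /* early exit: nothing beats priority 1 */

            if (result == NULL || s->sync_standby_priority < best)
            {
                result = s;
                best = s->sync_standby_priority;
            }
        }
        return result;
    }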

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review of Refactoring code for sync node detection

2014-12-12 Thread Michael Paquier
On Fri, Dec 12, 2014 at 9:38 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 12/12/2014 04:29 AM, Michael Paquier wrote:

 On Thu, Dec 11, 2014 at 10:07 PM, Heikki Linnakangas 
 hlinnakan...@vmware.com wrote:

 I propose the attached (I admit I haven't tested it).

 Actually if you do it this way I think that it would be worth adding the
 small optimization Fujii-san mentioned upthread: if priority is equal to 1,
 we leave the loop earlier and return the pointer immediately. All those
 things gathered give the attached patch, which I actually tested FWIW with
 multiple standbys and multiple entries in s_s_names.


 Ok, committed.
Thanks!
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Peter Eisentraut
On 12/9/14 1:18 PM, Josh Berkus wrote:
 The one exception I might make above is pg_standby.  What do we need
 this for today, exactly?

This was discussed recently and people wanted to keep it.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Peter Eisentraut
On 12/9/14 4:10 PM, Alvaro Herrera wrote:
 Maybe it makes sense to have a distinction between client programs and
 server programs.  Can we have src/sbin/ and move stuff that involves the
 server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
 pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
 could also move pg_resetxlog, pg_controldata and initdb there.)

I was thinking about that.  What do others think?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Peter Eisentraut
On 12/9/14 4:32 PM, Bruce Momjian wrote:
 On Tue, Dec  9, 2014 at 06:10:02PM -0300, Alvaro Herrera wrote:
 (For pg_upgrade you also need to do something about pg_upgrade_support,
 which is good because that is one very ugly crock.)
 
 FYI, pg_upgrade_support was segregated from pg_upgrade only because we
 wanted separate binary and shared object build/install targets.

I think the actual reason is that the makefile structure won't let you
have them both in the same directory.  I don't see why you would need
separate install targets.

How about we move these support functions into the backend?  It's not
like we don't already have other pg_upgrade hooks baked in all over the
place.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Heikki Linnakangas

On 12/12/2014 03:07 PM, Peter Eisentraut wrote:

On 12/9/14 4:10 PM, Alvaro Herrera wrote:

Maybe it makes sense to have a distinction between client programs and
server programs.  Can we have src/sbin/ and move stuff that involves the
server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
could also move pg_resetxlog, pg_controldata and initdb there.)


I was thinking about that.  What do others think?


Sounds good. We already separate server and client programs in the docs, 
and packagers put them in different packages too. This should make 
packagers' life a little bit easier in the long run.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Andres Freund
On 2014-12-12 15:11:01 +0200, Heikki Linnakangas wrote:
 On 12/12/2014 03:07 PM, Peter Eisentraut wrote:
 On 12/9/14 4:10 PM, Alvaro Herrera wrote:
 Maybe it makes sense to have a distinction between client programs and
 server programs.  Can we have src/sbin/ and move stuff that involves the
 server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
 pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
 could also move pg_resetxlog, pg_controldata and initdb there.)
 
 I was thinking about that.  What do others think?
 
 Sounds good. We already separate server and client programs in the docs, and
 packagers put them in different packages too. This should make packagers'
 life a little bit easier in the long run.

Wouldn't make install-server/client targets or something similar
actually achieve the same thing? That seems simpler to maintain to me.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Heikki Linnakangas

On 12/12/2014 03:11 PM, Heikki Linnakangas wrote:

On 12/12/2014 03:07 PM, Peter Eisentraut wrote:

On 12/9/14 4:10 PM, Alvaro Herrera wrote:

Maybe it makes sense to have a distinction between client programs and
server programs.  Can we have src/sbin/ and move stuff that involves the
server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
could also move pg_resetxlog, pg_controldata and initdb there.)


I was thinking about that.  What do others think?


Sounds good. We already separate server and client programs in the docs,
and packagers put them in different packages too. This should make
packagers' life a little bit easier in the long run.


src/sbin might not be a good name for the directory, though. We're not 
going to install the programs in /usr/sbin, are we? Maybe src/server-bin 
and src/client-bin.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching

2014-12-12 Thread Robert Haas
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:
 The idea was that if we could increase the load a bit (e.g. using 2
 tuples per bucket instead of 1), we would still use a single batch in
 some cases (when we miss the work_mem threshold by just a bit). The
 lookups will be slower, but we'll save the I/O.

 Yeah.  That seems like a valid theory, but your test results so far
 seem to indicate that it's not working out like that - which I find
 quite surprising, but, I mean, it is what it is, right?

 Not exactly. My tests show that as long as the outer table batches fit
 into page cache, increasing the load factor results in worse performance
 than batching.

 When the outer table is sufficiently small, the batching is faster.

 Regarding the sufficiently small - considering today's hardware, we're
 probably talking about gigabytes. On machines with significant memory
 pressure (forcing the temporary files to disk), it might be much lower,
 of course. Of course, it also depends on kernel settings (e.g.
 dirty_bytes/dirty_background_bytes).

Well, this is sort of one of the problems with work_mem.  When we
switch to a tape sort, or a tape-based materialize, we're probably far
from out of memory.  But trying to set work_mem to the amount of
memory we have can easily result in a memory overrun if a load spike
causes lots of people to do it all at the same time.  So we have to
set work_mem conservatively, but then the costing doesn't really come
out right.  We could add some more costing parameters to try to model
this, but it's not obvious how to get it right.
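
As a back-of-the-envelope illustration of the load-factor trade-off (the
accounting below is hypothetical and much cruder than the planner's real
costing): raising the tuples-per-bucket target shrinks the bucket array, so
a build side that misses work_mem by a small margin can sometimes stay in a
single batch at the price of longer bucket chains.

    #include <stdbool.h>
    #include <stddef.h>

    /* Rough memory check: tuple data plus one pointer per hash bucket. */
    static bool
    fits_in_one_batch(double ntuples, double tuple_bytes,
                      double work_mem_bytes, int ntup_per_bucket)
    {
        double bucket_bytes = (ntuples / ntup_per_bucket) * sizeof(void *);
        double data_bytes   = ntuples * tuple_bytes;

        return data_bytes + bucket_bytes <= work_mem_bytes;
    }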

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Thu, Dec 11, 2014 at 10:33 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Tue, Dec 9, 2014 at 4:09 AM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
  * parameter should be SUSET - it doesn't *need* to be set only at
  server start since all records are independent of each other

 Why not USERSET?  There's no point in trying to prohibit users from
 doing things that will cause bad performance because they can do that
 anyway.

 Using SUSET or USERSET has a small memory cost: we should
 unconditionally palloc the buffers containing the compressed data
 until WAL is written out. We could always call an equivalent of
 InitXLogInsert when this parameter is updated but that would be
 bug-prone IMO and it does not plead in favor of code simplicity.

I don't understand what you're saying here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
 compression = 'on'  : 1838 secs
 = 'off' : 1701 secs

 Difference is around 140 secs.

 OK, so the compression took 2x the cpu and was 8% slower.  The only
 benefit is WAL files are 35% smaller?

Compression didn't take 2x the CPU.  It increased user CPU from 354.20
s to 562.67 s over the course of the run, so it took about 60% more
CPU.

But I wouldn't be too discouraged by that.  At least AIUI, there are
quite a number of users for whom WAL volume is a serious challenge,
and they might be willing to pay that price to have less of it.  Also,
we have talked a number of times before about incorporating Snappy or
LZ4, which I'm guessing would save a fair amount of CPU -- but the
decision was made to leave that out of the first version, and just use
pg_lz, to keep the initial patch simple.  I think that was a good
decision.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_regress writes into source tree

2014-12-12 Thread Alvaro Herrera
Peter Eisentraut wrote:
 When using a vpath build pg_regress writes the processed input/*.source
 files into the *source* tree, which isn't supposed to happen.
 
 This appears to be a thinko introduced in this patch:
 e3fc4a97bc8ee82a78605b5ffe79bd4cf3c6213b

Oh, I noticed this while doing the dummy_seclabel move to
src/test/modules and I thought it was on purpose; if I'm not
mistaken this is why we had to add the .sql file to .gitignore.

Another thing in that patch was that I had to add the sql/ directory to
the source tree, but other than that .gitignore file it was empty.
Maybe pg_regress should create the sql/ directory in the build dir if it
doesn't exist.  This is only a problem if a pg_regress suite only runs
stuff from input/, because otherwise the sql/ dir already exists in the
source.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Streaming replication and WAL archive interactions

2014-12-12 Thread Heikki Linnakangas
There have been a few threads on the behavior of WAL archiving, after a 
standby server is promoted [1] [2]. In short, it doesn't work as you 
might expect. The standby will start archiving after it's promoted, but 
it will not archive files that were replicated from the old master via 
streaming replication. If those files were not already archived in the 
master before the promotion, they are not archived at all. That's not 
good if you wanted to restore from a base backup + the WAL archive later.


The basic setup is a master server, a standby, a WAL archive that's 
shared by both, and streaming replication between the master and 
standby. This should be a very common setup in the field, so how are 
people doing it in practice? Just live with the risk that you might miss 
some files in the archive if you promote? Don't even realize there's a 
problem? Something else?


And how would we like it to work?

There was some discussion in August on enabling WAL archiving in the 
standby, always [3]. That's a related idea, but it assumes that you have 
a separate archive in the master and the standby. The problem at 
promotion happens when you have a shared archive between the master and 
standby.


[1] 
http://www.postgresql.org/message-id/CAHGQGwHVYqbX=a+zo+avfbvhlgoypo9g_qdkbabexgxbvgd...@mail.gmail.com


[2] http://www.postgresql.org/message-id/20140904175036.310c6466@erg

[3] 
http://www.postgresql.org/message-id/CAHGQGwHNMs-syU=mevsesthna+exd9pfo_ohhfpjcwovayr...@mail.gmail.com.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Robert Haas
On Thu, Dec 11, 2014 at 5:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Josh Berkus j...@agliodbs.com writes:
 How about *you* run the next one, Tom?

 I think the limited amount of time I can put into a commitfest is better
 spent on reviewing patches than on managing the process.

That's not really the point.  The point is that managing the last
CommitFest in particular is roughly equivalent to having your arm
boiled in hot oil.  A certain percentage of the people whose patches
are obviously not ready to commit complain and moan about how (a)
their patch really is ready for prime-time, despite all appearances to
the contrary, and/or (b) their patch is so important that it deserves
an exception, and/or (c) how you are a real jerk for treating them so
unfairly.  This is not fun, which is why I've given up on doing it.  I
could not get a single person to support me when I tried to enforce
any scheduling discipline, so my conclusion was that the community did
not care about hitting the schedule; and it took weeks of 24x7 effort
to build a consensus to reject even one large, problematic patch whose
author wasn't willing to admit defeat.  If the community is prepared
to invest some trusted individuals with real authority, then we might
be able to remove some of the pain here, but when that was discussed
at a PGCon developer meeting a few years back, it was clear that no
more than 20% of the people in the room were prepared to support that
concept.

At this point, though, I'm not sure how much revisiting that
discussion would help.  I think the problem we need to solve here is
that there are just not enough senior people with an adequate amount
of time to review.  Whether it's because the patches are more complex
or that there are more of them or that those senior people have become
less available due to other commitments, we still need more senior
people involved to be able to handle the patches we've got in a timely
fashion without unduly compromising stability.  And we also need to do
a better job recruiting and retaining mid-level reviewers, both
because that's where senior people eventually come from, and because
it reduces the load on the senior people we've already got.

(I note that the proposal to have the CFM review everything is merely
one way of meeting the need to have senior people spend more time
reviewing.  But I assure all of you that I spend as much time
reviewing as I can find time for.  If someone wants to pay me the same
salary I'm making now to do nothing but review patches, I'll think
about it.  But even then, that would also mean that I wasn't spending
time writing patches of my own.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BUG: *FF WALs under 9.2 (WAS: .ready files appearing on slaves)

2014-12-12 Thread Heikki Linnakangas

On 12/10/2014 04:32 PM, Dennis Kögel wrote:

Hi,

Am 04.09.2014 um 17:50 schrieb Jehan-Guillaume de Rorthais j...@dalibo.com:

For a few months now, we have occasionally seen .ready files appearing on some
slave instances in various contexts. The two I have in mind are under 9.2.x. […]
So it seems that, for some reason, these old WALs were forgotten by the
restartpoint mechanism when they should have been recycled/deleted.


Am 08.10.2014 um 11:54 schrieb Heikki Linnakangas hlinnakan...@vmware.com:

1. Where do the FF files come from? In 9.2, FF-segments are not supposed to be
created, ever. […]
2. Why are the .done files sometimes not being created?




We’ve encountered behaviour which seems to match what has been described here: 
On Streaming Replication slaves, there is an odd piling up of old WALs and 
.ready files in pg_xlog, going back several months.

The fine people on IRC have pointed me to this thread, and have encouraged me 
to revive it with our observations, so here we go:

Environment:

Master,  9.2.9
|- Slave S1, 9.2.9, on the same network as the master
'- Slave S2, 9.2.9, some 100 km away (occasional network hiccups; *not* a 
cascading replication)

wal_keep_segments M=100 S1=100 S2=30
checkpoint_segments M=100 S1=30 S2=30
wal_level hot_standby (all)
archive_mode on (all)
archive_command on both slaves: /bin/true
archive_timeout 600s (all)


- On both slaves, we have „ghost“ WALs and corresponding .ready files (currently 
600 of each on S2, slowly becoming a disk space problem)

- There are always gaps in the ghost WAL names, often roughly 0x20, but not always

- The slave with the „bad“ network link has significantly more of these files, 
which suggests that disturbances of the Streaming Replication increase chances 
of triggering this bug; OTOH, the presence of a name gap pattern suggests the 
opposite

- We observe files named *FF as well


As you can see in the directory listings below, this setup is *very* low 
traffic, which may explain the pattern in WAL name gaps (?).

I’ve listed the entries by time, expecting to easily match WALs to their .ready 
files.
There sometimes is an interesting delay between the WAL’s mtime and the .ready 
file — especially for *FF, where there’s several days between the WAL and the 
.ready file.

- Master:   http://pgsql.privatepaste.com/52ad612dfb
- Slave S1: http://pgsql.privatepaste.com/58b4f3bb10
- Slave S2: http://pgsql.privatepaste.com/a693a8d7f4


I’ve only skimmed through the thread; my understanding is that there were 
several patches floating around, but nothing was committed.
If there’s any way I can help, please let me know.


Yeah. It wasn't totally clear how all this should work, so I got 
distracted with other stuff and dropped the ball; sorry.


I'm thinking that we should change the behaviour on master so that the 
standby never archives any files from older timelines, only the new one 
that it generates itself. That will solve the immediate problem of old 
WAL files accumulating, and bogus .ready files appearing in the standby. 
However, it will not solve the bigger problem of how do you ensure that 
all WAL files are archived, when you promote a standby server. There is 
no guarantee on that today anyway, but this will make it even less 
reliable, because it will increase the chances that you miss a file on 
the old timeline in the archive, after promoting. I'd argue that that's 
a good thing; it makes the issue more obvious, so you are more likely to 
encounter it in testing, and you won't be surprised in an emergency. But 
I've started a new thread on that bigger issue, hopefully we'll come up 
with a solution 
(http://www.postgresql.org/message-id/548af1cb.80...@vmware.com).


Now, what do we do with the back-branches? I'm not sure. Changing the 
behaviour in back-branches could cause nasty surprises. Perhaps it's 
best to just leave it as it is, even though it's buggy.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] On partitioning

2014-12-12 Thread Robert Haas
On Thu, Dec 11, 2014 at 11:43 PM, Amit Langote
langote_amit...@lab.ntt.co.jp wrote:
 In case of what we would have called a 'LIST' partition, this could look like

 ... FOR VALUES (val1, val2, val3, ...)

 Assuming we only support a partition key containing a single column in such
 a case.

 In case of what we would have called a 'RANGE' partition, this could look like

 ... FOR VALUES (val1min, val2min, ...) TO (val1max, val2max, ...)

 How about BETWEEN ... AND ... ?

Sure.  Mind you, I'm not proposing that the syntax I just mooted is
actually for the best.  What I'm saying is that we need to talk about
it.

 I am not sure but perhaps RANGE and LIST as partitioning kinds may as well 
 just be noise keywords. We can parse those values into a parse node such that 
 we don’t have to care about whether they describe a partition as being one kind 
 or the other. Say a List of something like,

 typedef struct PartitionColumnValue
 {
     NodeTag     type;
     Oid        *partitionid;
     char       *partcolname;
     Node       *partrangelower;
     Node       *partrangeupper;
     List       *partlistvalues;
 } PartitionColumnValue;

 Or we could still add a (char) partkind just to say which of the fields 
 matter.

 We don't need any defining values here for hash partitions if and when we add 
 support for the same. We would either be using a system-wide common hash 
 function, or we could add something to the partitioning key definition.

Yeah, range and list partition definitions are very similar, but hash
partition definitions are a different kettle of fish.  I don't think
we really need hash partitioning for anything right away - it's pretty
useless unless you've got, say, a way for the partitions to be foreign
tables living on remote servers - but we shouldn't pick a design that
will make it really hard to add later.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 03:13:44PM +0200, Heikki Linnakangas wrote:
 On 12/12/2014 03:11 PM, Heikki Linnakangas wrote:
 On 12/12/2014 03:07 PM, Peter Eisentraut wrote:
 On 12/9/14 4:10 PM, Alvaro Herrera wrote:
 Maybe it makes sense to have a distinction between client programs and
 server programs.  Can we have src/sbin/ and move stuff that involves the
 server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
 pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
 could also move pg_resetxlog, pg_controldata and initdb there.)
 
 I was thinking about that.  What do others think?
 
 Sounds good. We already separate server and client programs in the docs,
 and packagers put them in different packages too. This should make
 packagers' life a little bit easier in the long run.
 
 src/sbin might not be a good name for the directory, though. We're
 not going to install the programs in /usr/sbin, are we? Maybe
 src/server-bin and src/client-bin.

I am confused by the above because you are mixing /src and /bin.  If we
install the binaries in new directories, that is going to require
multiple adjustments to $PATH --- that doesn't seem like a win, and we
only have 25 binaries in pgsql/bin now (my Debian /usr/bin has 2306
binaries).  I assume I am misunderstanding something.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 08:27:59 -0500, Robert Haas wrote:
 On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
  compression = 'on'  : 1838 secs
  = 'off' : 1701 secs
 
  Difference is around 140 secs.
 
  OK, so the compression took 2x the cpu and was 8% slower.  The only
  benefit is WAL files are 35% smaller?
 
 Compression didn't take 2x the CPU.  It increased user CPU from 354.20
 s to 562.67 s over the course of the run, so it took about 60% more
 CPU.
 
 But I wouldn't be too discouraged by that.  At least AIUI, there are
 quite a number of users for whom WAL volume is a serious challenge,
 and they might be willing to pay that price to have less of it.

And it might actually result in *higher* performance in a good number of
cases if the WAL flushes are a significant part of the cost.

IIRC the test used a single process - that's probably not too
representative...

 Also,
 we have talked a number of times before about incorporating Snappy or
 LZ4, which I'm guessing would save a fair amount of CPU -- but the
 decision was made to leave that out of the first version, and just use
 pg_lz, to keep the initial patch simple.  I think that was a good
 decision.

Agreed.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg_rewind in contrib

2014-12-12 Thread Heikki Linnakangas

Hi,

I'd like to include pg_rewind in contrib. I originally wrote it as an 
external project so that I could quickly get it working with the 
existing versions, and because I didn't feel it was quite ready for 
production use yet. Now, with the WAL format changes in master, it is a 
lot more maintainable than before. Many bugs have been fixed since the 
first prototypes, and I think it's fairly robust now.


I propose that we include pg_rewind in contrib/ now. Attached is a patch 
for that. It just includes the latest sources from the current pg_rewind 
repository at https://github.com/vmware/pg_rewind. It is released under 
the PostgreSQL license.


For those who are not familiar with pg_rewind, it's a tool that allows 
repurposing an old master server as a new standby server, after 
promotion, even if the old master was not shut down cleanly. That's a 
very often requested feature.


- Heikki
commit 2300e28b0d07328c7b37a92f7150e75edf24b10c
Author: Heikki Linnakangas heikki.linnakan...@iki.fi
Date:   Fri Dec 12 16:08:14 2014 +0200

Add pg_rewind to contrib.

diff --git a/contrib/Makefile b/contrib/Makefile
index 195d447..2fe861f 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -32,6 +32,7 @@ SUBDIRS = \
 		pg_buffercache	\
 		pg_freespacemap \
 		pg_prewarm	\
+		pg_rewind	\
 		pg_standby	\
 		pg_stat_statements \
 		pg_test_fsync	\
diff --git a/contrib/pg_rewind/.gitignore b/contrib/pg_rewind/.gitignore
new file mode 100644
index 000..cb50df2
--- /dev/null
+++ b/contrib/pg_rewind/.gitignore
@@ -0,0 +1,32 @@
+# Object files
+*.o
+
+# Libraries
+*.lib
+*.a
+
+# Shared objects (inc. Windows DLLs)
+*.dll
+*.so
+*.so.*
+*.dylib
+
+# Executables
+*.exe
+*.app
+
+# Dependencies
+.deps
+
+# Files generated during build
+/xlogreader.c
+
+# Binaries
+/pg_rewind
+
+# Generated by test suite
+/tmp_check/
+/regression.diffs
+/regression.out
+/results/
+/regress_log/
diff --git a/contrib/pg_rewind/Makefile b/contrib/pg_rewind/Makefile
new file mode 100644
index 000..d50a8cf
--- /dev/null
+++ b/contrib/pg_rewind/Makefile
@@ -0,0 +1,47 @@
+# Makefile for pg_rewind
+#
+# Copyright (c) 2013 VMware, Inc. All Rights Reserved.
+#
+
+PGFILEDESC = pg_rewind - repurpose an old master server as standby
+PGAPPICON = win32
+
+PROGRAM = pg_rewind
+OBJS	= pg_rewind.o parsexlog.o xlogreader.o util.o datapagemap.o timeline.o \
+	fetch.o copy_fetch.o libpq_fetch.o filemap.o
+
+REGRESS = basictest extrafiles databases
+REGRESS_OPTS=--use-existing --launcher=./launcher
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS = $(libpq_pgport)
+
+override CPPFLAGS := -DFRONTEND $(CPPFLAGS)
+
+EXTRA_CLEAN = $(RMGRDESCSOURCES) xlogreader.c
+
+all: pg_rewind
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/pg_rewind
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
+
+xlogreader.c: % : $(top_srcdir)/src/backend/access/transam/%
+	rm -f $@ && $(LN_S) $< .
+
+check-local:
+	echo Running tests against local data directory, in copy-mode
+	bindir=$(bindir) TEST_SUITE=local $(MAKE) installcheck
+
+check-remote:
+	echo Running tests against a running standby, via libpq
+	bindir=$(bindir) TEST_SUITE=remote $(MAKE) installcheck
+
+check-both: check-local check-remote
diff --git a/contrib/pg_rewind/README b/contrib/pg_rewind/README
new file mode 100644
index 000..cac6095
--- /dev/null
+++ b/contrib/pg_rewind/README
@@ -0,0 +1,100 @@
+pg_rewind
+=========
+
+pg_rewind is a tool for synchronizing a PostgreSQL data directory with another
+PostgreSQL data directory that was forked from the first one. The result is
+equivalent to rsyncing the first data directory (referred to as the old cluster
+from now on) with the second one (the new cluster). The advantage of pg_rewind
+over rsync is that pg_rewind uses the WAL to determine changed data blocks,
+and does not require reading through all files in the cluster. That makes it
+a lot faster when the database is large and only a small portion of it differs
+between the clusters.
+
+Download
+========
+
+The latest version of this software can be found on the project website at
+https://github.com/vmware/pg_rewind.
+
+Installation
+============
+
+Compiling pg_rewind requires the PostgreSQL source tree to be available.
+There are two ways to do that:
+
+1. Put pg_rewind project directory inside PostgreSQL source tree as
+contrib/pg_rewind, and use make to compile
+
+or
+
+2. Pass the path to the PostgreSQL source tree to make, in the top_srcdir
+variable: make USE_PGXS=1 top_srcdir=<path to PostgreSQL source tree>
+
+In addition, you must have pg_config in $PATH.
+
+The current version of pg_rewind is compatible with PostgreSQL version 9.4.
+
+Usage
+-
+
+pg_rewind --target-pgdata=<path> \
+--source-server=<new server's conn string>
+
+The contents of the old data directory will be overwritten with the new data
+so that after pg_rewind finishes, the 

Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 08:11:31AM -0500, Peter Eisentraut wrote:
 On 12/9/14 4:32 PM, Bruce Momjian wrote:
  On Tue, Dec  9, 2014 at 06:10:02PM -0300, Alvaro Herrera wrote:
  (For pg_upgrade you also need to do something about pg_upgrade_support,
  which is good because that is one very ugly crock.)
  
  FYI, pg_upgrade_support was segregated from pg_upgrade only because we
  wanted separate binary and shared object build/install targets.
 
 I think the actual reason is that the makefile structure won't let you
 have them both in the same directory.  I don't see why you would need
 separate install targets.
 
 How about we move these support functions into the backend?  It's not
 like we don't already have other pg_upgrade hooks baked in all over the
 place.

Yes, we can easily do that, and it makes sense.  The functions are
already protected to not do anything unless the server is in binary
upgrade mode.  If we move them into the backend I think we need to add a
super-user check as well.  The reason we don't have one now is that they
are installed/uninstalled by the super-user as part of the pg_upgrade
process.
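
For illustration, a hedged sketch (the function name is made up; this is not
the actual pg_upgrade_support source) of what such a backend-resident support
function could look like, with the existing binary-upgrade-mode guard plus the
proposed superuser check:

    #include "postgres.h"
    #include "fmgr.h"
    #include "miscadmin.h"      /* IsBinaryUpgrade, superuser() */

    Datum
    binary_upgrade_example(PG_FUNCTION_ARGS)
    {
        if (!superuser())
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("must be superuser to call this function")));

        if (!IsBinaryUpgrade)
            ereport(ERROR,
                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                     errmsg("function may only be called in binary upgrade mode")));

        /* ... set the next OID to assign, or similar ... */
        PG_RETURN_VOID();
    }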

Moving pg_upgrade out of contrib is going to give me additional gloating
opportunities at conferences.  :-)

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2014-12-12 Thread Michael Paquier
On Fri, Dec 12, 2014 at 10:23 PM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Dec 11, 2014 at 10:33 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Tue, Dec 9, 2014 at 4:09 AM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
  * parameter should be SUSET - it doesn't *need* to be set only at
  server start since all records are independent of each other

 Why not USERSET?  There's no point in trying to prohibit users from
 doing things that will cause bad performance because they can do that
 anyway.

 Using SUSET or USERSET has a small memory cost: we should
 unconditionally palloc the buffers containing the compressed data
 until WAL is written out. We could always call an equivalent of
 InitXLogInsert when this parameter is updated but that would be
 bug-prone IMO and it does not plead in favor of code simplicity.

 I don't understand what you're saying here.
I just meant that the scratch buffers used to temporarily store the
compressed and uncompressed data should be palloc'd all the time, even
if the switch is off.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Alvaro Herrera
Robert Haas wrote:

 (I note that the proposal to have the CFM review everything is merely
 one way of meeting the need to have senior people spend more time
 reviewing.  But I assure all of you that I spend as much time
 reviewing as I can find time for.  If someone wants to pay me the same
 salary I'm making now to do nothing but review patches, I'll think
 about it.  But even then, that would also mean that I wasn't spending
 time writing patches of my own.)

I have heard the idea of a cross-company PostgreSQL foundation of some
sort that would hire a developer just to manage commitfests, do patch
reviews, apply bugfixes, etc, without the obligations that come from
individual companies' schedules for particular development roadmaps,
customer support, and the like.  Of course, only a senior person would
be able to fill this role because it requires considerable experience.

Probably this person should be allowed to work on their own patches if
they so desire; otherwise there is a risk that experience dilutes.
Also, no single company should dictate what this person's priorities
are, other than general guidelines: general stability, submitted patches
get attention, bugs get closed, releases get out, coffee gets brewed.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 08:27:59AM -0500, Robert Haas wrote:
 On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
  compression = 'on'  : 1838 secs
  = 'off' : 1701 secs
 
  Difference is around 140 secs.
 
  OK, so the compression took 2x the cpu and was 8% slower.  The only
  benefit is WAL files are 35% smaller?
 
 Compression didn't take 2x the CPU.  It increased user CPU from 354.20
 s to 562.67 s over the course of the run, so it took about 60% more
 CPU.
 
 But I wouldn't be too discouraged by that.  At least AIUI, there are
 quite a number of users for whom WAL volume is a serious challenge,
 and they might be willing to pay that price to have less of it.  Also,
 we have talked a number of times before about incorporating Snappy or
 LZ4, which I'm guessing would save a fair amount of CPU -- but the
 decision was made to leave that out of the first version, and just use
 pg_lz, to keep the initial patch simple.  I think that was a good
 decision.

Well, the larger question is why wouldn't we just have the user compress
the entire WAL file before archiving --- why have each backend do it? 
Is it the write volume we are saving?  I thought this WAL compression
gave better performance in some cases.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Peter Eisentraut
On 12/8/14 10:50 PM, Tom Lane wrote:
 oid2name and vacuumlo, besides being of very dubious general utility,
 are fails from a namespacing standpoint.  If we were to promote them
 into standard install components I think a minimum requirement should be
 to rename them to pg_something.  (oid2name is an entirely bogus name for
 what it does, anyway.)  That would also be a good opportunity to revisit
 their rather-ad-hoc APIs.

I'm going to leave these two out for now.

I'll start investigating whether they can be removed, or replaced by
something else.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-12 Thread Andres Freund
Hi,

On 2014-12-12 16:13:13 +0200, Heikki Linnakangas wrote:
 I'd like to include pg_rewind in contrib. I originally wrote it as an
 external project so that I could quickly get it working with the existing
 versions, and because I didn't feel it was quite ready for production use
 yet. Now, with the WAL format changes in master, it is a lot more
 maintainable than before. Many bugs have been fixed since the first
 prototypes, and I think it's fairly robust now.

Obviously there's a need for a fair amount of review, but generally I
think it should be included.

Not sure if the copyright notices in the current form are actually ok?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 09:18:01 -0500, Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 08:27:59AM -0500, Robert Haas wrote:
  On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
   compression = 'on'  : 1838 secs
   = 'off' : 1701 secs
  
   Difference is around 140 secs.
  
   OK, so the compression took 2x the cpu and was 8% slower.  The only
   benefit is WAL files are 35% smaller?
  
  Compression didn't take 2x the CPU.  It increased user CPU from 354.20
  s to 562.67 s over the course of the run, so it took about 60% more
  CPU.
  
  But I wouldn't be too discouraged by that.  At least AIUI, there are
  quite a number of users for whom WAL volume is a serious challenge,
  and they might be willing to pay that price to have less of it.  Also,
  we have talked a number of times before about incorporating Snappy or
  LZ4, which I'm guessing would save a fair amount of CPU -- but the
  decision was made to leave that out of the first version, and just use
  pg_lz, to keep the initial patch simple.  I think that was a good
  decision.
 
 Well, the larger question is why wouldn't we just have the user compress
 the entire WAL file before archiving --- why have each backend do it? 
  Is it the write volume we are saving?  I thought this WAL compression
 gave better performance in some cases.

Err. Streaming?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
 On 2014-12-12 09:18:01 -0500, Bruce Momjian wrote:
  On Fri, Dec 12, 2014 at 08:27:59AM -0500, Robert Haas wrote:
   On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us wrote:
compression = 'on'  : 1838 secs
= 'off' : 1701 secs
   
Difference is around 140 secs.
   
OK, so the compression took 2x the cpu and was 8% slower.  The only
benefit is WAL files are 35% smaller?
   
   Compression didn't take 2x the CPU.  It increased user CPU from 354.20
   s to 562.67 s over the course of the run, so it took about 60% more
   CPU.
   
   But I wouldn't be too discouraged by that.  At least AIUI, there are
   quite a number of users for whom WAL volume is a serious challenge,
   and they might be willing to pay that price to have less of it.  Also,
   we have talked a number of times before about incorporating Snappy or
   LZ4, which I'm guessing would save a fair amount of CPU -- but the
   decision was made to leave that out of the first version, and just use
   pg_lz, to keep the initial patch simple.  I think that was a good
   decision.
  
  Well, the larger question is why wouldn't we just have the user compress
  the entire WAL file before archiving --- why have each backend do it? 
   Is it the write volume we are saving?  I thought this WAL compression
  gave better performance in some cases.
 
 Err. Streaming?

Well, you can already set up SSL for compression while streaming.  In
fact, I assume many are already using SSL for streaming as the majority
of SSL overhead is from connection start.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Peter Eisentraut
On 12/12/14 8:13 AM, Andres Freund wrote:
 Wouldn't make install-server/client targets or something similar
 actually achieve the same thing? That seems simpler to maintain to me.

Adding non-standard makefile targets comes with its own set of
maintenance issues.

Restructuring the source tree and having the existing makefile structure
just work might end up being simpler.

Just to be clear, I'm far from convinced that any of this is worthwhile;
I'm just keeping the conversation going.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Alvaro Herrera
Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 03:13:44PM +0200, Heikki Linnakangas wrote:
  On 12/12/2014 03:11 PM, Heikki Linnakangas wrote:

  Sounds good. We already separate server and client programs in the docs,
  and packagers put them in different packages too. This should make
  packagers' life a little bit easier in the long run.
  
  src/sbin might not be a good name for the directory, though. We're
  not going to install the programs in /usr/sbin, are we? Maybe
  src/server-bin and src/client-bin.
 
 I am confused by the above because you are mixing /src and /bin.  If we
 install the binaries in new directories, that is going to require
 multiple adjustments to $PATH --- that doesn't seem like a win, and we
 only have 25 binaries in pgsql/bin now (my Debian /usr/bin has 2306
 binaries).  I assume I am misunderstanding something.

We already have src/bin/; the mixture of src/ and bin/ predates us.
Of course, the stuff we keep in there is not binaries but source code
that produces binaries.

As for src/sbin/, we wouldn't install anything to the system's
/usr/sbin/ of course, only /usr/bin/, just like the stuff in src/bin/.
But it would be slightly more clear what we keep in each src/ subdir.

I think our current src/bin/ is a misnomer, but it seems late to fix
that.  In a greenfield I think we could have src/clients/ and
src/srvtools/ or something like that, and everything would install to
/usr/bin.  Then there would be no doubt where to move each program from
contrib.

Maybe there is no point to all of this and we should just move it all to
src/bin/ as originally proposed, which is simpler anyway.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 09:24:27 -0500, Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
   Well, the larger question is why wouldn't we just have the user compress
   the entire WAL file before archiving --- why have each backend do it? 
    Is it the write volume we are saving?  I thought this WAL compression
   gave better performance in some cases.
  
  Err. Streaming?
 
 Well, you can already set up SSL for compression while streaming.  In
 fact, I assume many are already using SSL for streaming as the majority
 of SSL overhead is from connection start.

That's not really true. The overhead of SSL during streaming is
*significant*. Both the kind of compression it does (which is far more
expensive than pglz or lz4) and the encryption itself. In many cases
it's prohibitively expensive - there are even a fair number of on-list
reports about this.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 03:20:47PM +0100, Andres Freund wrote:
 Hi,
 
 On 2014-12-12 16:13:13 +0200, Heikki Linnakangas wrote:
  I'd like to include pg_rewind in contrib. I originally wrote it as an
  external project so that I could quickly get it working with the existing
  versions, and because I didn't feel it was quite ready for production use
  yet. Now, with the WAL format changes in master, it is a lot more
  maintainable than before. Many bugs have been fixed since the first
  prototypes, and I think it's fairly robust now.
 
 Obviously there's a need for a fair amount of review, but generally I
 think it should be included.

I certainly think it is useful enough to be in /contrib.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2014-12-12 Thread Michael Paquier
On Fri, Dec 12, 2014 at 11:32 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Dec 12, 2014 at 9:15 AM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 I just meant that the scratch buffers used to temporarily store the
 compressed and uncompressed data should be palloc'd all the time, even
 if the switch is off.

 If they're fixed size, you can just put them on the heap as static globals.
 static char space_for_stuff[65536];
Well sure :)

 Or whatever you need.
 I don't think that's a cost worth caring about.
OK, I thought it was.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 9:15 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 I just meant that the scratch buffers used to temporarily store the
 compressed and uncompressed data should be palloc'd all the time, even
 if the switch is off.

If they're fixed size, you can just put them on the heap as static globals.

static char space_for_stuff[65536];

Or whatever you need.

I don't think that's a cost worth caring about.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 9:15 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas wrote:
 (I note that the proposal to have the CFM review everything is merely
 one way of meeting the need to have senior people spend more time
 reviewing.  But I assure all of you that I spend as much time
 reviewing as I can find time for.  If someone wants to pay me the same
 salary I'm making now to do nothing but review patches, I'll think
 about it.  But even then, that would also mean that I wasn't spending
 time writing patches of my own.)

 I have heard the idea of a cross-company PostgreSQL foundation of some
 sort that would hire a developer just to manage commitfests, do patch
 reviews, apply bugfixes, etc, without the obligations that come from
 individual companies' schedules for particular development roadmaps,
 customer support, and the like.  Of course, only a senior person would
 be able to fill this role because it requires considerable experience.

 Probably this person should be allowed to work on their own patches if
 they so desire; otherwise there is a risk that experience dilutes.
 Also, no single company should dictate what this person's priorities
 are, other than general guidelines: general stability, submitted patches
 get attention, bugs get closed, releases get out, coffee gets brewed.

Yeah, that would be great, and even better if we could get 2 or 3
positions funded so that the success or failure isn't too much tied to
a single individual.  But even getting 1 position funded in a
stable-enough fashion that someone would be willing to bet on it seems
like a challenge.  (Maybe other people here are less risk-averse than
I am.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 9:34 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 I don't think that's a cost worth caring about.
 OK, I thought it was.

Space on the heap that never gets used is basically free.  The OS
won't actually allocate physical memory unless the pages are actually
accessed.
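
A tiny standalone illustration of that point (hypothetical sizes; any OS with
lazy page allocation behaves this way):

    /*
     * A 64 MB zero-initialized global lives in BSS: the virtual address
     * space is reserved up front, but physical pages are only faulted in
     * when the memory is first touched.
     */
    static char space_for_stuff[64 * 1024 * 1024];

    int
    main(void)
    {
        space_for_stuff[0] = 1;   /* backs just one page (typically 4 kB) */
        return 0;
    }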

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_rewind in contrib

2014-12-12 Thread Michael Paquier
On Fri, Dec 12, 2014 at 11:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 I'd like to include pg_rewind in contrib. I originally wrote it as an
 external project so that I could quickly get it working with the existing
 versions, and because I didn't feel it was quite ready for production use
 yet. Now, with the WAL format changes in master, it is a lot more
 maintainable than before. Many bugs have been fixed since the first
 prototypes, and I think it's fairly robust now.

 I propose that we include pg_rewind in contrib/ now. Attached is a patch for
 that. It just includes the latest sources from the current pg_rewind
 repository at https://github.com/vmware/pg_rewind. It is released under the
 PostgreSQL license.

 For those who are not familiar with pg_rewind, it's a tool that allows
 repurposing an old master server as a new standby server, after promotion,
 even if the old master was not shut down cleanly. That's a very often
 requested feature.
Indeed the code got quite a bit cleaner with the new WAL API. Btw, gitignore
has many unnecessary entries.
-- 
Michael




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Rahila Syed
Hello,

Well, the larger question is why wouldn't we just have the user compress
the entire WAL file before archiving --- why have each backend do it?
Is it the write volume we are saving?

IIUC, the idea here is not only to save the on-disk size of WAL but to
reduce the overhead of flushing WAL records to disk in servers with heavy
write operations. So yes, improving the performance by saving write volume
is a part of the requirement.

Thank you,
Rahila Syed


On Fri, Dec 12, 2014 at 7:48 PM, Bruce Momjian br...@momjian.us wrote:

 On Fri, Dec 12, 2014 at 08:27:59AM -0500, Robert Haas wrote:
  On Thu, Dec 11, 2014 at 11:34 AM, Bruce Momjian br...@momjian.us
 wrote:
   compression = 'on'  : 1838 secs
   compression = 'off' : 1701 secs
  
   Different is around 140 secs.
  
   OK, so the compression took 2x the cpu and was 8% slower.  The only
   benefit is WAL files are 35% smaller?
 
  Compression didn't take 2x the CPU.  It increased user CPU from 354.20
  s to 562.67 s over the course of the run, so it took about 60% more
  CPU.
 
  But I wouldn't be too discouraged by that.  At least AIUI, there are
  quite a number of users for whom WAL volume is a serious challenge,
  and they might be willing to pay that price to have less of it.  Also,
  we have talked a number of times before about incorporating Snappy or
  LZ4, which I'm guessing would save a fair amount of CPU -- but the
  decision was made to leave that out of the first version, and just use
  pg_lz, to keep the initial patch simple.  I think that was a good
  decision.

 Well, the larger question is why wouldn't we just have the user compress
 the entire WAL file before archiving --- why have each backend do it?
 Is it the write volume we are saving?  I thought this WAL compression
 gave better performance in some cases.

 --
   Bruce Momjian  br...@momjian.us        http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +



Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 03:27:33PM +0100, Andres Freund wrote:
 On 2014-12-12 09:24:27 -0500, Bruce Momjian wrote:
  On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
Well, the larger question is why wouldn't we just have the user compress
the entire WAL file before archiving --- why have each backend do it? 
 Is it the write volume we are saving?  I thought this WAL compression
gave better performance in some cases.
   
   Err. Streaming?
  
  Well, you can already set up SSL for compression while streaming.  In
  fact, I assume many are already using SSL for streaming as the majority
  of SSL overhead is from connection start.
 
 That's not really true. The overhead of SSL during streaming is
 *significant*. Both the kind of compression it does (which is far more
 expensive than pglz or lz4) and the encryption itself. In many cases
 it's prohibitively expensive - there's even a fair number on-list
 reports about this.

Well, I am just trying to understand when someone would benefit from WAL
compression.  Are we saying it is only useful for non-SSL streaming?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Michael Paquier
On Wed, Dec 10, 2014 at 11:25 PM, Bruce Momjian br...@momjian.us wrote:

 On Wed, Dec 10, 2014 at 07:40:46PM +0530, Rahila Syed wrote:
  The tests ran for around 30 mins. Manual checkpoint was run before each
 test.
 
  Compression | WAL generated | %compression | Latency-avg | CPU usage (seconds)                    | TPS    | Latency stddev
  on          | 1531.4 MB     | ~35 %        | 7.351 ms    | user diff: 562.67s system diff: 41.40s | 135.96 | 13.759 ms
  off         | 2373.1 MB     |              | 6.781 ms    | user diff: 354.20s system diff: 39.67s | 147.40 | 14.152 ms
 
  The compression obtained is quite high, close to 35 %.
  CPU usage at user level when compression is on is quite noticeably high
  compared to that when compression is off. But the gain in terms of
  WAL reduction is also high.

 I am sorry but I can't understand the above results due to wrapping.
 Are you saying compression was twice as slow?


I got curious to see how the compression of an entire record would perform
and how it compares for small WAL records, so here are some numbers based
on the patch attached. This patch compresses the whole record including the
block headers, letting only XLogRecord out of it, with a flag indicating
that the record is compressed (note that the replay portion of this patch is
untested; still, it gives an idea of how much compression of the whole
record affects user CPU in this test case). It uses a buffer of 4 * BLCKSZ;
if the record is longer than that, compression is simply given up.
Those tests are using the hack upthread calculating user and system CPU
using getrusage() in the backend.
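
(For reference, a minimal standalone version of that measurement could
look like the sketch below; it mimics the user/system diff output, but
it is an illustration, not the actual hack.)

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

/* Snapshot CPU usage before and after a workload and print the
 * user/system deltas, in the same format as the results below. */
static double
tv_seconds(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int
main(void)
{
    struct rusage before;
    struct rusage after;

    getrusage(RUSAGE_SELF, &before);
    /* ... run the workload to be measured here ... */
    getrusage(RUSAGE_SELF, &after);

    printf("user diff: %.6fs system diff: %.6fs\n",
           tv_seconds(after.ru_utime) - tv_seconds(before.ru_utime),
           tv_seconds(after.ru_stime) - tv_seconds(before.ru_stime));
    return 0;
}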

Here is the simple test case I used with 512MB of shared_buffers and small
records, filling up a bunch of buffers, dirtying them and then compressing
FPWs with a checkpoint.
#!/bin/bash
psql <<EOF
SELECT pg_backend_pid();
CREATE TABLE aa (a int);
CREATE TABLE results (phase text, position pg_lsn);
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
ALTER TABLE aa SET (FILLFACTOR = 50);
INSERT INTO results VALUES ('pre-insert', pg_current_xlog_location());
INSERT INTO aa VALUES (generate_series(1,700)); -- 484MB
SELECT pg_size_pretty(pg_relation_size('aa'::regclass));
SELECT pg_prewarm('aa'::regclass);
CHECKPOINT;
INSERT INTO results VALUES ('pre-update', pg_current_xlog_location());
UPDATE aa SET a = 700 + a;
CHECKPOINT;
INSERT INTO results VALUES ('post-update', pg_current_xlog_location());
SELECT * FROM results;
EOF

Note that autovacuum and fsync are off.
=# select phase, user_diff, system_diff,
pg_size_pretty(pre_update - pre_insert),
pg_size_pretty(post_update - pre_update) from results;
       phase        | user_diff | system_diff | pg_size_pretty | pg_size_pretty
--------------------+-----------+-------------+----------------+----------------
 Compression FPW    | 42.990799 |    0.868179 | 429 MB         | 567 MB
 No compression     | 25.688731 |    1.236551 | 429 MB         | 727 MB
 Compression record | 56.376750 |    0.769603 | 429 MB         | 566 MB
(3 rows)
If we do record-level compression, we'll need to be very careful in
defining a lower bound so as not to eat unnecessary CPU resources, perhaps
something that should be controlled with a GUC. I presume that this stands
true as well for the upper bound.

Regards,
-- 
Michael
From f1579d37a9f293d7cc911ea048b68d3270b2cdf5 Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Wed, 10 Dec 2014 22:10:16 +0900
Subject: [PATCH] Prototype to support record-level compression

This will be enough for tests with compression.
---
 src/backend/access/transam/xlog.c   |  1 +
 src/backend/access/transam/xloginsert.c | 64 +
 src/backend/access/transam/xlogreader.c | 17 +
 src/backend/utils/misc/guc.c| 10 ++
 src/include/access/xlog.h   |  1 +
 src/include/access/xlogrecord.h |  5 +++
 6 files changed, 98 insertions(+)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 0f09add..a0e15be 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -88,6 +88,7 @@ char	   *XLogArchiveCommand = NULL;
 bool		EnableHotStandby = false;
 bool		fullPageWrites = true;
 bool		wal_log_hints = false;
+bool		wal_compression = false;
 bool		log_checkpoints = false;
 int			sync_method = DEFAULT_SYNC_METHOD;
 int			wal_level = WAL_LEVEL_MINIMAL;
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c
index f3d610f..a395842 100644
--- a/src/backend/access/transam/xloginsert.c
+++ b/src/backend/access/transam/xloginsert.c
@@ -29,6 +29,7 @@
 #include "storage/proc.h"
 #include "utils/memutils.h"
 #include "pg_trace.h"
+#include "utils/pg_lzcompress.h"
 
 /*
  * For each block reference registered with XLogRegisterBuffer, we fill in
@@ -56,6 +57,9 @@ static 

Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 09:46:13 -0500, Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 03:27:33PM +0100, Andres Freund wrote:
  On 2014-12-12 09:24:27 -0500, Bruce Momjian wrote:
   On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
 Well, the larger question is why wouldn't we just have the user 
 compress
 the entire WAL file before archiving --- why have each backend do it? 
 Is it the write volume we are saving?  I though this WAL compression
 gave better performance in some cases.

Err. Streaming?
   
   Well, you can already set up SSL for compression while streaming.  In
   fact, I assume many are already using SSL for streaming as the majority
   of SSL overhead is from connection start.
  
  That's not really true. The overhead of SSL during streaming is
  *significant*. Both the kind of compression it does (which is far more
  expensive than pglz or lz4) and the encyrption itself. In many cases
  it's prohibitively expensive - there's even a fair number on-list
  reports about this.
 
 Well, I am just trying to understand when someone would benefit from WAL
 compression.  Are we saying it is only useful for non-SSL streaming?

No, not at all. It's useful in a lot more situations:

* The amount of WAL in pg_xlog can make up a significant portion of a
  database's size. Especially in large OLTP databases. Compressing
  archives doesn't help with that.
* The original WAL volume itself can be quite problematic because at
  some point it's exhausting the underlying IO subsystem. Both due to the
  pure write rate and to the fsync()s regularly required.
* ssl compression can often not be used for WAL streaming because it's
  too slow, as it uses a much more expensive algorithm. Which is why we
  even have a GUC to disable it.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_rewind in contrib

2014-12-12 Thread Heikki Linnakangas

On 12/12/2014 04:20 PM, Andres Freund wrote:

Not sure if the copyright notices in the current form are actually ok?


Hmm. We do have such copyright notices in the source tree, but I know 
that we're trying to avoid it in new code. They had to be there when the 
code lived as a separate project, but now that I'm contributing this to 
PostgreSQL proper, I can remove them if necessary.


- Heikki





Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 23:50:43 +0900, Michael Paquier wrote:
 I got curious to see how the compression of an entire record would perform
 and how it compares for small WAL records, so here are some numbers based
 on the patch attached. This patch compresses the whole record including the
 block headers, letting only XLogRecord out of it, with a flag indicating
 that the record is compressed (note that the replay portion of this patch is
 untested; still, it gives an idea of how much compression of the whole
 record affects user CPU in this test case). It uses a buffer of 4 * BLCKSZ;
 if the record is longer than that, compression is simply given up.
 Those tests are using the hack upthread calculating user and system CPU
 using getrusage() in the backend.
 
 Here is the simple test case I used with 512MB of shared_buffers and small
 records, filling up a bunch of buffers, dirtying them and then compressing
 FPWs with a checkpoint.
 #!/bin/bash
 psql <<EOF
 SELECT pg_backend_pid();
 CREATE TABLE aa (a int);
 CREATE TABLE results (phase text, position pg_lsn);
 CREATE EXTENSION IF NOT EXISTS pg_prewarm;
 ALTER TABLE aa SET (FILLFACTOR = 50);
 INSERT INTO results VALUES ('pre-insert', pg_current_xlog_location());
 INSERT INTO aa VALUES (generate_series(1,700)); -- 484MB
 SELECT pg_size_pretty(pg_relation_size('aa'::regclass));
 SELECT pg_prewarm('aa'::regclass);
 CHECKPOINT;
 INSERT INTO results VALUES ('pre-update', pg_current_xlog_location());
 UPDATE aa SET a = 700 + a;
 CHECKPOINT;
 INSERT INTO results VALUES ('post-update', pg_current_xlog_location());
 SELECT * FROM results;
 EOF
 
 Note that autovacuum and fsync are off.
 =# select phase, user_diff, system_diff,
 pg_size_pretty(pre_update - pre_insert),
 pg_size_pretty(post_update - pre_update) from results;
        phase        | user_diff | system_diff | pg_size_pretty | pg_size_pretty
 --------------------+-----------+-------------+----------------+----------------
  Compression FPW    | 42.990799 |    0.868179 | 429 MB         | 567 MB
  No compression     | 25.688731 |    1.236551 | 429 MB         | 727 MB
  Compression record | 56.376750 |    0.769603 | 429 MB         | 566 MB
 (3 rows)
 If we do record-level compression, we'll need to be very careful in
 defining a lower bound so as not to eat unnecessary CPU resources, perhaps
 something that should be controlled with a GUC. I presume that this stands
 true as well for the upper bound.

Record level compression pretty obviously would need a lower boundary
for when to use compression. It won't be useful for small heapam/btree
records, but it'll be rather useful for large multi_insert, clean or
similar records...
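
To sketch that gating (the threshold and the names are invented for the
example, and the stub merely stands in for a real compressor such as
pglz or lz4):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Invented threshold, not from the patch: records below this size are
 * written uncompressed, since the CPU cost outweighs the bytes saved. */
#define COMPRESS_MIN_LEN 512

/* Stand-in for pglz/lz4: "succeeds" only on highly repetitive input,
 * which is enough to exercise the gating logic below. */
static bool
compress_stub(const char *src, uint32_t len, char *dst, uint32_t *dstlen)
{
    uint32_t repeats = 0;

    for (uint32_t i = 1; i < len; i++)
        if (src[i] == src[i - 1])
            repeats++;
    if (repeats < len / 2)
        return false;           /* would not shrink enough; store raw */
    memcpy(dst, src, len);      /* a real compressor emits compressed bytes */
    *dstlen = len / 2;          /* pretend a 2:1 ratio */
    return true;
}

/* Only records large enough to plausibly benefit reach the compressor. */
static bool
maybe_compress(const char *rec, uint32_t len, char *dst, uint32_t *dstlen)
{
    if (len < COMPRESS_MIN_LEN)
        return false;
    return compress_stub(rec, len, dst, dstlen);
}

int
main(void)
{
    char buf[4096];
    char out[4096];
    uint32_t outlen;

    memset(buf, 'x', sizeof(buf));
    printf("64-byte record compressed? %d\n",
           maybe_compress(buf, 64, out, &outlen));          /* 0: below the bound */
    printf("4096-byte record compressed? %d\n",
           maybe_compress(buf, sizeof(buf), out, &outlen)); /* 1 */
    return 0;
}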

Greetings,

Andres Freund




Re: [HACKERS] pg_rewind in contrib

2014-12-12 Thread Tom Lane
Heikki Linnakangas hlinnakan...@vmware.com writes:
 I'd like to include pg_rewind in contrib.

I don't object to adding the tool as such, but let's wait to see what
happens with Peter's proposal to move contrib command-line tools into
src/bin/.  If it should be there it'd be less code churn if it went
into there in the first place.

regards, tom lane




Re: [HACKERS] Commitfest problems

2014-12-12 Thread David Fetter
On Thu, Dec 11, 2014 at 05:55:56PM -0500, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  How about *you* run the next one, Tom?
 
 I think the limited amount of time I can put into a commitfest is
 better spent on reviewing patches than on managing the process.

With utmost respect, Tom, you seem to carve off an enormous amount of
time to follow -bugs and -general.  What say you unsubscribe to those
lists for the duration of your tenure as CFM?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Andres Freund
On 2014-12-12 11:27:01 -0300, Alvaro Herrera wrote:
 We already have src/bin/; the mixture of src/ and bin/ predates us.
 Of course, the stuff we keep in there is not binaries but source code
 that produces binaries.
 
 As for src/sbin/, we wouldn't install anything to the system's
 /usr/sbin/ of course, only /usr/bin/, just like the stuff in src/bin/.
 But it would be slightly more clear what we keep in each src/ subdir.

I think sbin is a spectacularly bad name, let's not go there. If
anything, make it srvbin or something like that.

 I think our current src/bin/ is a misnomer, but it seems late to fix
 that.  In a greenfield I think we could have src/clients/ and
 src/srvtools/ or something like that, and everything would install to
 /usr/bin.  Then there would be no doubt where to move each program from
 contrib.

Maybe. We could just do that now - git's file change tracking is good
enough for that kind of move.

 Maybe there is no point to all of this and we should just move it all to
 src/bin/ as originally proposed, which is simpler anyway.

+1. Packagers already don't use the current boundaries for packaging...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Heikki Linnakangas hlinnakan...@vmware.com writes:
 On 12/12/2014 03:07 PM, Peter Eisentraut wrote:
 On 12/9/14 4:10 PM, Alvaro Herrera wrote:
 Maybe it makes sense to have a distinction between client programs and
 server programs.  Can we have src/sbin/ and move stuff that involves the
 server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
 pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
 could also move pg_resetxlog, pg_controldata and initdb there.)

 I was thinking about that.  What do others think?

 Sounds good. We already separate server and client programs in the docs, 
 and packagers put them in different packages too. This should make 
 packagers' life a little bit easier in the long run.

I'm pretty much -1 on relocating anything that's under src/bin already.
The history mess and back-patching pain would outweigh any notional
cleanliness --- and AFAICS it's entirely notional.  As an ex-packager
I can tell you that where stuff sits in the source tree makes precisely
*zero* difference to a packager.  She's going to do make install-world
and then her package recipe will list out which files in the install tree
go into which sub-package.  Perhaps it would get clearer to packagers if
we also installed stuff into $INSTALLDIR/sbin, but I doubt that such a
change is going to fly with anyone else.  The bin vs sbin distinction
is not universal.

regards, tom lane




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On 12/9/14 4:32 PM, Bruce Momjian wrote:
 On Tue, Dec  9, 2014 at 06:10:02PM -0300, Alvaro Herrera wrote:
 (For pg_upgrade you also need to do something about pg_upgrade_support,
 which is good because that is one very ugly crock.)

 FYI, pg_upgrade_support was segregated from pg_upgrade only because we
 wanted separate binary and shared object build/install targets.

 I think the actual reason is that the makefile structure won't let you
 have them both in the same directory.  I don't see why you would need
 separate install targets.

 How about we move these support functions into the backend?  It's not
 like we don't already have other pg_upgrade hooks baked in all over the
 place.

I don't particularly object to having the C code built into the backend;
there's not that much of it, and if we could static-ize some of the global
variables that are involved presently, it'd be a Good Thing IMO.  However,
the current arrangement makes sure that the functions are not accessible
except during pg_upgrade, and that seems like a Good Thing as well.  So
I think pg_upgrade should continue to have SQL scripts that create and
delete the SQL function definitions for these.

regards, tom lane




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On 12/12/14 8:13 AM, Andres Freund wrote:
 Wouldn't a make install-server/client targets or something similar
 actually achieve the same thing? Seems simpler to maintain to me.

 Adding non-standard makefile targets comes with its own set of
 maintenance issues.

It would be of zero value to packagers anyway; certainly so for those
following the Red Hat tradition, in which you tell the package Makefile
to install everything and then what goes into which subpackage is
sorted out in a separate, subsequent step.  Possibly Debian or other
packaging infrastructures do it differently, but I doubt that.

Really, if we want to tell packagers that foo is a client program and
bar is a server-side program, the documentation is where to address it.

regards, tom lane




Re: [HACKERS] Commitfest problems

2014-12-12 Thread Andres Freund
On 2014-12-12 07:10:40 -0800, David Fetter wrote:
 On Thu, Dec 11, 2014 at 05:55:56PM -0500, Tom Lane wrote:
  Josh Berkus j...@agliodbs.com writes:
   How about *you* run the next one, Tom?
  
  I think the limited amount of time I can put into a commitfest is
  better spent on reviewing patches than on managing the process.
 
 With utmost respect,

FWIW, the way you frequently use this phrase doesn't come across as
actually being respectful.

 Tom, you seem to carve off an enormous amount of
 time to follow -bugs and -general.  What say you unsubscribe to those
 lists for the duration of your tenure as CFM?

And why on earth would that be a good idea? These bugs need to be fixed
- we're actually behind on that front. Are we now really trying to
dictate how other developers manage their time? It's one thing to make
up rules that say one review for one commit or something, it's
something entirely else to try to assign tasks to them.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Andres Freund
On 2014-12-12 10:20:58 -0500, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On 12/12/14 8:13 AM, Andres Freund wrote:
  Wouldn't a make install-server/client targets or something similar
  actually achieve the same thing? Seems simpler to maintain to me.
 
  Adding non-standard makefile targets comes with its own set of
  maintenance issues.
 
 It would be of zero value to packagers anyway; certainly so for those
 following the Red Hat tradition, in which you tell the package Makefile
 to install everything and then what goes into which subpackage is
 sorted out in a separate, subsequent step.  Possibly Debian or other
 packaging infrastructures do it differently, but I doubt that.

Debian has that step as well - you don't really have to use it, but the
postgres debian packages do so. They already don't adhere to the current
distinction.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning recovery.conf into GUCs

2014-12-12 Thread Alex Shulgin
Alex Shulgin a...@commandprompt.com writes:

 Alex Shulgin a...@commandprompt.com writes:

 Here's an attempt to revive this patch.

 Here's the patch rebased against current HEAD, that is, including the
 recently committed action_at_recovery_target option.

 The default for the new GUC is 'pause', as in HEAD, and
 pause_at_recovery_target is removed completely in favor of it.

 I've also taken the liberty to remove that part that errors out when
 finding $PGDATA/recovery.conf.  Now get your rotten tomatoes ready. ;-)

This was rather short-sighted, so I've restored that part.

Also, rebased on current HEAD, following the rename of
action_at_recovery_target to recovery_target_action.

--
Alex



recovery_guc_v5.5.patch.gz
Description: application/gzip



Re: [HACKERS] Commitfest problems

2014-12-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Dec 12, 2014 at 9:15 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
 Robert Haas wrote:
 (I note that the proposal to have the CFM review everything is merely
 one way of meeting the need to have senior people spend more time
 reviewing.  But I assure all of you that I spend as much time
 reviewing as I can find time for.  If someone wants to pay me the same
 salary I'm making now to do nothing but review patches, I'll think
 about it.  But even then, that would also mean that I wasn't spending
 time writing patches of my own.)

 I have heard the idea of a cross-company PostgreSQL foundation of some
 sort that would hire a developer just to manage commitfests, do patch
 reviews, apply bugfixes, etc, without the obligations that come from
 individual companies' schedules for particular development roadmaps,
 customer support, and the like.  Of course, only a senior person would
 be able to fill this role because it requires considerable experience.

 Yeah, that would be great, and even better if we could get 2 or 3
 positions funded so that the success or failure isn't too much tied to
 a single individual.  But even getting 1 position funded in a
 stable-enough fashion that someone would be willing to bet on it seems
 like a challenge.  (Maybe other people here are less risk-averse than
 I am.)

Yeah, it would be hard to sell anyone on that unless the foundation
was so well funded that it could clearly afford to keep paying you
for years into the future.

I'm not really on board with the CFM-reviews-everything idea anyway.
I don't think that can possibly work well, because it supposes that senior
reviewers are interchangeable, which they aren't.  Everybody's got pieces
of the system that they know better than other pieces.

Also, one part of the point of the review mechanism is that it's supposed
to provide an opportunity for less-senior reviewers to look at parts of
the code that they maybe don't know so well, and thereby help grow them
into senior people.  If we went over to the notion of some one (or a few)
senior people doing all the reviewing, it might make the review process
more expeditious but it would lose the training aspect.  Of course, maybe
the training aspect was never worth anything; I'm not in a position to
opine on that.  But I don't really think that centralizing that
responsibility would be a good thing in the long run.

regards, tom lane




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Alvaro Herrera
Tom Lane wrote:
 Heikki Linnakangas hlinnakan...@vmware.com writes:
  On 12/12/2014 03:07 PM, Peter Eisentraut wrote:
  On 12/9/14 4:10 PM, Alvaro Herrera wrote:
  Maybe it makes sense to have a distinction between client programs and
  server programs.  Can we have src/sbin/ and move stuff that involves the
  server side in there?  I think that'd be pg_xlogdump, pg_archivecleanup,
  pg_upgrade, pg_test_timing, pg_test_fsync.  (If we were feeling bold we
  could also move pg_resetxlog, pg_controldata and initdb there.)
 
  I was thinking about that.  What do others think?
 
  Sounds good. We already separate server and client programs in the docs, 
  and packagers put them in different packages too. This should make 
  packagers' life a little bit easier in the long run.
 
 I'm pretty much -1 on relocating anything that's under src/bin already.

So let's put the whole bunch under src/bin/ and be done with it.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 10:04 AM, Andres Freund and...@anarazel.de wrote:
 Note that autovacuum and fsync are off.
 =# select phase, user_diff, system_diff,
 pg_size_pretty(pre_update - pre_insert),
 pg_size_pretty(post_update - pre_update) from results;
        phase        | user_diff | system_diff | pg_size_pretty | pg_size_pretty
 --------------------+-----------+-------------+----------------+----------------
  Compression FPW    | 42.990799 |    0.868179 | 429 MB         | 567 MB
  No compression     | 25.688731 |    1.236551 | 429 MB         | 727 MB
  Compression record | 56.376750 |    0.769603 | 429 MB         | 566 MB
 (3 rows)
 If we do record-level compression, we'll need to be very careful in
 defining a lower bound so as not to eat unnecessary CPU resources, perhaps
 something that should be controlled with a GUC. I presume that this stands
 true as well for the upper bound.

 Record level compression pretty obviously would need a lower boundary
 for when to use compression. It won't be useful for small heapam/btree
 records, but it'll be rather useful for large multi_insert, clean or
 similar records...

Unless I'm missing something, this test is showing that FPW
compression saves 298MB of WAL for 17.3 seconds of CPU time, as
against master.  And compressing the whole record saves a further 1MB
of WAL for a further 13.39 seconds of CPU time.  That makes
compressing the whole record sound like a pretty terrible idea - even
if you get more benefit by reducing the lower boundary, you're still
burning a ton of extra CPU time for almost no gain on the larger
records.  Ouch!

(Of course, I'm assuming that Michael's patch is reasonably efficient,
which might not be true.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 11:08:52 -0500, Robert Haas wrote:
 Unless I'm missing something, this test is showing that FPW
 compression saves 298MB of WAL for 17.3 seconds of CPU time, as
 against master.  And compressing the whole record saves a further 1MB
 of WAL for a further 13.39 seconds of CPU time.  That makes
 compressing the whole record sound like a pretty terrible idea - even
 if you get more benefit by reducing the lower boundary, you're still
 burning a ton of extra CPU time for almost no gain on the larger
 records.  Ouch!

Well, that test pretty much doesn't have any large records besides FPWs
afaics. So it's unsurprising that it's not beneficial.

Greetings,

Andres Freund




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 11:00 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 I'm pretty much -1 on relocating anything that's under src/bin already.

I agree.  I can't see packagers putting it anywhere except for
$SOMETHING/bin in the final install, so what do we get out of dividing
it up in some weird way in our tree?

 So let's put the whole bunch under src/bin/ and be done with it.

I'm not really convinced this is a very good idea.  What do we get out
of moving everything, or even anything, from contrib?  It will make
back-patching harder, but more importantly, it will possibly create
the false impression that everything we distribute is on equal
footing.  Right now, we've got stuff like vacuumlo in contrib which is
useful but, let's face it, also a cheap hack.  If we decide that
executables can no longer live in contrib, then every time somebody
submits something in the future, we've got to decide whether it
deserves parity with psql and pg_dump or whether we shouldn't include
it at all.  contrib is a nice middle-ground.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 11:12 AM, Andres Freund and...@anarazel.de wrote:
 On 2014-12-12 11:08:52 -0500, Robert Haas wrote:
 Unless I'm missing something, this test is showing that FPW
 compression saves 298MB of WAL for 17.3 seconds of CPU time, as
 against master.  And compressing the whole record saves a further 1MB
 of WAL for a further 13.39 seconds of CPU time.  That makes
 compressing the whole record sound like a pretty terrible idea - even
 if you get more benefit by reducing the lower boundary, you're still
 burning a ton of extra CPU time for almost no gain on the larger
 records.  Ouch!

 Well, that test pretty much doesn't have any large records besides FPWs
 afaics. So it's unsurprising that it's not beneficial.

Not beneficial is rather an understatement.  It's actively harmful,
and not by a small margin.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-12 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 (2014/12/12 10:37), Tom Lane wrote:
 Yeah, this is clearly a thinko: really, nothing in the planner should
 be using get_parse_rowmark().  I looked around for other errors of the
 same type and found that postgresGetForeignPlan() is also using
 get_parse_rowmark().  While that's harmless at the moment because we
 don't support foreign tables as children, it's still wrong.  Will
 fix that too.

 While updating the inheritance patch, I noticed that the fix for
 postgresGetForeignPlan() is not right.  Since PlanRowMarks for foreign
 tables get the ROW_MARK_COPY markType during preprocess_rowmarks(), so
 we can't get the locking strength from the PlanRowMarks, IIUC.

Ugh, you're right.

 In order
 to get the locking strength, I think we need to see the RowMarkClauses
 and thus still need to use get_parse_rowmark() in
 postgresGetForeignPlan(), though I agree with you that that is ugly.

I think this needs more thought; I'm still convinced that having the FDW
look at the parse rowmarks is the Wrong Thing.  However, we don't need
to solve it in existing branches.  With 9.4 release so close, the right
thing is to revert that change for now and consider a HEAD-only patch
later.  (One idea is to go ahead and make a ROW_MARK_COPY item, but
add a field to PlanRowMark to record the original value.  We should
probably also think about allowing FDWs to change these settings if
they want to.  The real source of trouble here is that planner.c
has a one-size-fits-all approach to row locking for FDWs; and we're
now seeing that that one size doesn't fit postgres_fdw.)
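
As a sketch only, with illustrative names rather than a worked-out
design, the idea would be something like:

#include <stdbool.h>

/* Not actual PostgreSQL source: a plan-time row mark that remembers the
 * strength the query originally asked for, so an FDW can still see it
 * after the mark has been downgraded to ROW_MARK_COPY. */
typedef enum SketchRowMarkType
{
    SKETCH_ROW_MARK_EXCLUSIVE,
    SKETCH_ROW_MARK_SHARE,
    SKETCH_ROW_MARK_REFERENCE,
    SKETCH_ROW_MARK_COPY
} SketchRowMarkType;

typedef struct SketchPlanRowMark
{
    int         rti;            /* range table index of the marked relation */
    SketchRowMarkType markType; /* how the executor will mark rows now */
    SketchRowMarkType origMarkType;     /* what the query's locking clause
                                         * asked for, before any downgrade */
    bool        noWait;         /* NOWAIT option from the locking clause */
} SketchPlanRowMark;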

regards, tom lane




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Andres Freund
On 2014-12-12 11:15:46 -0500, Robert Haas wrote:
 On Fri, Dec 12, 2014 at 11:12 AM, Andres Freund and...@anarazel.de wrote:
  On 2014-12-12 11:08:52 -0500, Robert Haas wrote:
  Unless I'm missing something, this test is showing that FPW
  compression saves 298MB of WAL for 17.3 seconds of CPU time, as
  against master.  And compressing the whole record saves a further 1MB
  of WAL for a further 13.39 seconds of CPU time.  That makes
  compressing the whole record sound like a pretty terrible idea - even
  if you get more benefit by reducing the lower boundary, you're still
  burning a ton of extra CPU time for almost no gain on the larger
  records.  Ouch!
 
  Well, that test pretty much doesn't have any large records besides FPWs
  afaics. So it's unsurprising that it's not beneficial.
 
 Not beneficial is rather an understatement.  It's actively harmful,
 and not by a small margin.

Sure, but that's just because it's too simplistic. I don't think it
makes sense to make any inference about the worthiness of the general
approach from the nearly obvious fact that compressing every tiny
record is a bad idea.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Joshua D. Drake


On 12/08/2014 07:50 PM, Tom Lane wrote:

Peter Eisentraut pete...@gmx.net writes:

Last time this was attempted, the discussion got lost in exactly which
extensions are worthy enough to be considered official or something like
that.  I want to dodge that here by starting at the opposite end:
1. move programs to src/bin/



Here are the contrib programs:



oid2name
pg_archivecleanup
pg_standby
pg_test_fsync
pg_test_timing
pg_upgrade
pg_xlogdump
pgbench
vacuumlo



The proposal would basically be to mv contrib/$x src/bin/$x and also
move the reference pages in the documentation.


Personally, I'm good with moving pg_archivecleanup, pg_standby,
pg_upgrade, pg_xlogdump, and pgbench this way.  (Although wasn't there
just some discussion about pg_standby being obsolete?  If so, shouldn't
we remove it instead of promoting it?)  As for the others:



Let's not forget pg_upgrade, which is arguably the most important of 
everything listed.


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I'm not really convinced this is a very good idea.  What do we get out
 of moving everything, or even anything, from contrib?  It will make
 back-patching harder, but more importantly, it will possibly create
 the false impression that everything we distribute is on equal
 footing.  Right now, we've got stuff like vacuumlo in contrib which is
 useful but, let's face it, also a cheap hack.  If we decide that
 executables can no longer live in contrib, then every time somebody
 submits something in the future, we've got to decide whether it
 deserves parity with psql and pg_dump or whether we shouldn't include
 it at all.  contrib is a nice middle-ground.

Yeah, that's a good point.  I think part of the motivation here is the
thought that some of these programs, like pg_upgrade, *should* now be
considered on par with pg_dump et al.  But it does not follow that
everything in contrib is, or should be, on that level.

regards, tom lane




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Andres Freund
On 2014-12-12 11:14:56 -0500, Robert Haas wrote:
 I'm not really convinced this is a very good idea.  What do we get out
 of moving everything, or even anything, from contrib?

The benefit of moving relevant stuff is that it'll actually be installed
by default when installing postgres on many platforms. That's currently
often not the case. The contrib umbrella, as used by many other
projects, actually justifies not doing so.

I don't think that's a good argument for moving everything, rather the
contrary, but relevant stuff that we properly support should imo be
moved.

 It will make back-patching harder

I think the amount of effort created by simply renaming a directory that
wholly contains a binary is acceptable. Just use patch -p4 instead of
patch -p1...

 Right now, we've got stuff like vacuumlo in contrib which is
 useful but, let's face it, also a cheap hack.

On the other hand, we really don't provide any other solution. Since
large objects are part of core we really ought to provide at least some
support for cleanup.

 If we decide that executables can no longer live in contrib, then
 every time somebody submits something in the future, we've got to
 decide whether it deserves parity with psql and pg_dump or whether we
 shouldn't include it at all.  contrib is a nice middle-ground.

I think it makes sense to still have it as a middleground for future
things.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 11:26 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I'm not really convinced this is a very good idea.  What do we get out
 of moving everything, or even anything, from contrib?  It will make
 back-patching harder, but more importantly, it will possibly create
 the false impression that everything we distribute is on equal
 footing.  Right now, we've got stuff like vacuumlo in contrib which is
 useful but, let's face it, also a cheap hack.  If we decide that
 executables can no longer live in contrib, then every time somebody
 submits something in the future, we've got to decide whether it
 deserves parity with psql and pg_dump or whether we shouldn't include
 it at all.  contrib is a nice middle-ground.

 Yeah, that's a good point.  I think part of the motivation here is the
 thought that some of these programs, like pg_upgrade, *should* now be
 considered on par with pg_dump et al.  But it does not follow that
 everything in contrib is, or should be, on that level.

Yeah.  We have put enough effort collectively into pg_upgrade that I
think it's fair to say that it is on a par with pg_dump.  I still
think the architecture there is awfully fragile and we should try to
improve it, but it's very widely-used and people rely on it to work,
which it generally does.  And certainly we have put a lot of sweat
into making it work.

I would also say that pg_archivecleanup is a fundamental server tool
and that it belongs in src/bin.

But after that, I get fuzzy.  For me, the next tier of things would
consist of pgbench, pg_test_fsync, pg_test_timing, and pg_xlogdump.
Those are all useful, but I would also classify them as optional.  If
you are running a PostgreSQL installation, you definitely need initdb
and postgres and pg_dump and pg_dumpall and psql, but you don't
definitely need these.  I think they are all robust enough to go in
src/bin, but they are not as necessary as much of the stuff that is in
that directory today, so it's unclear to me whether we want to put
them there.

Finally, there is the stuff that is either hacky or deprecated:
oid2name, pg_standby, vacuumlo.  Putting that stuff in src/bin clearly
makes no sense IMV.  But I wouldn't necessarily want to remove it all
either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 11:40 AM, Andres Freund and...@2ndquadrant.com wrote:
 The benefit of moving relevant stuff is that it'll actually be installed
 by default when installing postgres on many platforms. That's currently
 often not the case. The contrib umbrella, as used by many other
 projects, actually justifies not doing so.

Agreed.  See my other response for my thoughts on that topic.

 It will make back-patching harder

 I think the amount of effort a simple renamed directory which wholly
 contains a binary creates is acceptable. Just use patch -p4 instead of
 patch -p1...

That is fine if you are manually applying a patch that touches only
that directory, but if the patch also touches other stuff then it's
not as simple.  And I don't know how well git cherry-pick will follow
the moves.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Andres Freund
On 2014-12-12 11:42:57 -0500, Robert Haas wrote:
  I think the amount of effort created by simply renaming a directory that
  wholly contains a binary is acceptable. Just use patch -p4 instead of
  patch -p1...
 
 That is fine if you are manually applying a patch that touches only
 that directory, but if the patch also touches other stuff then it's
 not as simple.

I think backpatchable commits that touch individual binaries and other
code at the same time are (and ought to be!) pretty rare.

 And I don't know how well git cherry-pick will follow
 the moves.

Not well if the patch is done in master first.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching

2014-12-12 Thread Tomas Vondra
On 12.12.2014 14:19, Robert Haas wrote:
 On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:

 Regarding the sufficiently small - considering today's hardware, we're
 probably talking about gigabytes. On machines with significant memory
 pressure (forcing the temporary files to disk), it might be much lower,
 of course. Of course, it also depends on kernel settings (e.g.
 dirty_bytes/dirty_background_bytes).
 
 Well, this is sort of one of the problems with work_mem.  When we
 switch to a tape sort, or a tape-based materialize, we're probably far
 from out of memory.  But trying to set work_mem to the amount of
 memory we have can easily result in a memory overrun if a load spike
 causes lots of people to do it all at the same time.  So we have to
 set work_mem conservatively, but then the costing doesn't really come
 out right.  We could add some more costing parameters to try to model
 this, but it's not obvious how to get it right.

Ummm, I don't think that's what I proposed. What I had in mind was a
flag indicating whether the batches are likely to stay in the page cache.
Because when that is likely, batching is probably faster (compared to an
increased load factor).
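
A back-of-the-envelope version of such a flag might look like the
following (the 1/4 RAM fraction and the _SC_PHYS_PAGES probe are
assumptions for illustration, not a proposal for the real heuristic):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Guess whether the spilled batch files of a hash join are likely to
 * stay in the OS page cache by comparing their total size against a
 * slice of physical RAM.  _SC_PHYS_PAGES is a glibc extension. */
static bool
batches_likely_cached(uint64_t inner_bytes, int nbatch)
{
    uint64_t ram = (uint64_t) sysconf(_SC_PHYS_PAGES) *
        (uint64_t) sysconf(_SC_PAGE_SIZE);

    /* everything except the one in-memory batch sits in temp files */
    uint64_t spilled = inner_bytes - inner_bytes / nbatch;

    return spilled < ram / 4;
}

int
main(void)
{
    printf("1 GB inner, 8 batches cached? %d\n",
           batches_likely_cached((uint64_t) 1 << 30, 8));
    return 0;
}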

Tomas




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2014-12-12 11:42:57 -0500, Robert Haas wrote:
 And I don't know how well git cherry-pick will follow
 the moves.

 Not well if the patch is done in master first.

FWIW, I always patch master first, and have zero intention of changing
that workflow.  (I have given reasons for that in the past, and don't
feel like repeating them right now.)  So I'm really not on board with
moving code around without *very* good reasons.  This thread hasn't
done very well at coming up with good reasons to move stuff out of
contrib.

In the particular case of pg_upgrade, while it may be now on par
usefulness-wise with src/bin stuff, I think it is and always will
be a special case anyway so far as packagers are concerned; the
reason being that it needs to ride along with back-branch executables.
So I'm not sure that we're making their lives easier by moving it.

regards, tom lane




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 10:16:05AM -0500, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On 12/9/14 4:32 PM, Bruce Momjian wrote:
  On Tue, Dec  9, 2014 at 06:10:02PM -0300, Alvaro Herrera wrote:
  (For pg_upgrade you also need to do something about pg_upgrade_support,
  which is good because that is one very ugly crock.)
 
  FYI, pg_upgrade_support was segregated from pg_upgrade only because we
  wanted separate binary and shared object build/install targets.
 
  I think the actual reason is that the makefile structure won't let you
  have them both in the same directory.  I don't see why you would need
  separate install targets.
 
  How about we move these support functions into the backend?  It's not
  like we don't already have other pg_upgrade hooks baked in all over the
  place.
 
 I don't particularly object to having the C code built into the backend;
 there's not that much of it, and if we could static-ize some of the global
 variables that are involved presently, it'd be a Good Thing IMO.  However,
 the current arrangement makes sure that the functions are not accessible
 except during pg_upgrade, and that seems like a Good Thing as well.  So
 I think pg_upgrade should continue to have SQL scripts that create and
 delete the SQL function definitions for these.

Oh, hmmm, would pg_upgrade_support still be a separate shared object
file, or would we just link to functions that already exist in the
backend binary, i.e. is it just the SQL callability that you want
pg_upgrade to control?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Alvaro Herrera
Robert Haas wrote:
 On Fri, Dec 12, 2014 at 11:00 AM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:

  So let's put the whole bunch under src/bin/ and be done with it.
 
 I'm not really convinced this is a very good idea.  What do we get out
 of moving everything, or even anything, from contrib?

We show that it's not contrib (== possibly low quality) stuff
anymore.  At the beginning of pg_upgrade, for example, we didn't want it
in src/bin because it wasn't stable enough, it was full of bugs, there
were always going to be scenarios it wouldn't handle.  Now that is all
gone, so we promote it to the next status level.

 It will make back-patching harder,

Yes.  We can deal with that.  It's not that hard anyway.

 but more importantly, it will possibly create the false impression
 that everything we distribute is on equal footing.

Stuff in contrib is of lower quality.  Some items have improved enough
that we can let them out of that sack now.  What we're doing is create
the correct impression that stuff that's no longer in contrib is of
better quality than what remains in contrib.

 Right now, we've got stuff like vacuumlo in contrib which is
 useful but, let's face it, also a cheap hack.

Then we don't move vacuumlo.  I agree we shouldn't move it.  (And
neither oid2name.)

 If we decide that executables can no longer live in contrib,

Nobody is saying that.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 05:19:42PM +0100, Andres Freund wrote:
 On 2014-12-12 11:15:46 -0500, Robert Haas wrote:
  On Fri, Dec 12, 2014 at 11:12 AM, Andres Freund and...@anarazel.de wrote:
   On 2014-12-12 11:08:52 -0500, Robert Haas wrote:
   Unless I'm missing something, this test is showing that FPW
   compression saves 298MB of WAL for 17.3 seconds of CPU time, as
   against master.  And compressing the whole record saves a further 1MB
   of WAL for a further 13.39 seconds of CPU time.  That makes
   compressing the whole record sound like a pretty terrible idea - even
   if you get more benefit by reducing the lower boundary, you're still
   burning a ton of extra CPU time for almost no gain on the larger
   records.  Ouch!
  
   Well, that test pretty much doesn't have any large records besides FPWs
   afaics. So it's unsurprising that it's not beneficial.
  
  Not beneficial is rather an understatement.  It's actively harmful,
  and not by a small margin.
 
 Sure, but that's just because it's too simplistic. I don't think it
  makes sense to make any inference about the worthiness of the general
  approach from the nearly obvious fact that compressing every tiny
 record is a bad idea.

Well, it seems we need to see some actual cases where compression does
help before moving forward.  I thought Amit had some amazing numbers for
WAL compression --- has that changed?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Commitfest problems

2014-12-12 Thread Bruce Momjian
On Fri, Dec 12, 2014 at 10:50:56AM -0500, Tom Lane wrote:
 Also, one part of the point of the review mechanism is that it's supposed
 to provide an opportunity for less-senior reviewers to look at parts of
 the code that they maybe don't know so well, and thereby help grow them
 into senior people.  If we went over to the notion of some one (or a few)
 senior people doing all the reviewing, it might make the review process
 more expeditious but it would lose the training aspect.  Of course, maybe
 the training aspect was never worth anything; I'm not in a position to
 opine on that.  But I don't really think that centralizing that
 responsibility would be a good thing in the long run.

That is a very good point --- we have certainly had people doing reviews
long enough to know if the review process is preparing developers for
more complex tasks.  I don't know the answer myself, which might say
something.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] jsonb generator functions

2014-12-12 Thread Andrew Dunstan


On 12/08/2014 01:00 PM, Andrew Dunstan wrote:


On 12/08/2014 04:21 AM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


OK, here is a new patch version that

  * uses find_coercion_pathway() to find the cast function if any, as
discussed elsewhere
  * removes calls to getTypeOutputInfo() except where required
  * honors a cast to json only for rendering both json and jsonb
  * adds processing for the date type that was previously missing in
datum_to_jsonb

Did this go anywhere?



Not, yet. I hope to get to it this week.





OK, here is a new version.

The major change is that the aggregate final functions now clone the 
transition value rather than modifying it directly, avoiding a similar 
nearby error which Tom fixed recently.


Also here is a patch factored out which applies the 
find_coercion_pathway change to json.c. I'm inclined to say we should 
backpatch this to 9.4 (and with a small change 9.3). Thoughts?
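
For anyone not following the internals: the code path in question is where
datum_to_json / datum_to_jsonb honor a user-defined cast from a type to
json.  A minimal sketch of that behavior (the mood type and its cast
function are hypothetical, not part of the patch):

  CREATE TYPE mood AS ENUM ('happy', 'sad');
  CREATE FUNCTION mood_to_json(mood) RETURNS json
    AS $$ SELECT to_json(upper($1::text)) $$ LANGUAGE sql IMMUTABLE;
  CREATE CAST (mood AS json) WITH FUNCTION mood_to_json(mood);

  -- serialized via the cast rather than the type's output function
  SELECT to_json('happy'::mood);   -- "HAPPY"

find_coercion_pathway is what looks mood_to_json() up there; the change
only affects how the lookup is done, not which cast function gets selected.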


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb generator functions

2014-12-12 Thread Andrew Dunstan


On 12/12/2014 01:10 PM, Andrew Dunstan wrote:


On 12/08/2014 01:00 PM, Andrew Dunstan wrote:


On 12/08/2014 04:21 AM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


OK, here is a new patch version that

  * uses find_coercion_path() to find the cast function if any, as
discussed elsewhere
  * removes calls to getTypeOutputInfo() except where required
  * honors a cast to json only for rendering both json and jsonb
  * adds processing for the date type that was previously missing in
datum_to_jsonb

Did this go anywhere?



Not, yet. I hope to get to it this week.





OK, here is a new version.

The major change is that the aggregate final functions now clone the 
transition value rather than modifying it directly, avoiding a similar 
nearby error which Tom fixed recently.


Also here is a patch factored out which applies the 
find_coercion_pathway change to json.c. I'm inclined to say we should 
backpatch this to 9.4 (and with a small change 9.3). Thoughts?




Er this time with patches.

cheers

andrew

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index da138e1..ef69b94 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -10245,9 +10245,10 @@ table2-mapping
 
   para
xref linkend=functions-json-creation-table shows the functions that are
-   available for creating typejson/type values.
-   (Currently, there are no equivalent functions for typejsonb/, but you
-   can cast the result of one of these functions to typejsonb/.)
+   available for creating typejson/type and typejsonb/type values.
+   (There are no equivalent functions for typejsonb/, of the literalrow_to_json/
+   and literalarray_to_json/ functions. However, the literalto_jsonb/
+   function supplies much the same functionality as these functions would.)
   /para
 
   indexterm
@@ -10268,6 +10269,18 @@ table2-mapping
   indexterm
primaryjson_object/primary
   /indexterm
+  indexterm
+   primaryto_jsonb/primary
+  /indexterm
+  indexterm
+   primaryjsonb_build_array/primary
+  /indexterm
+  indexterm
+   primaryjsonb_build_object/primary
+  /indexterm
+  indexterm
+   primaryjsonb_object/primary
+  /indexterm
 
   table id=functions-json-creation-table
 titleJSON Creation Functions/title
@@ -10282,17 +10295,18 @@ table2-mapping
  /thead
  tbody
   row
+   entryparaliteralto_json(anyelement)/literal
+  /paraparaliteralto_jsonb(anyelement)/literal
+   /para/entry
entry
- literalto_json(anyelement)/literal
-   /entry
-   entry
- Returns the value as JSON.  Arrays and composites are converted
+ Returns the value as typejson/ or typejsonb/.
+ Arrays and composites are converted
  (recursively) to arrays and objects; otherwise, if there is a cast
  from the type to typejson/type, the cast function will be used to
- perform the conversion; otherwise, a JSON scalar value is produced.
+ perform the conversion; otherwise, a scalar value is produced.
  For any scalar type other than a number, a Boolean, or a null value,
- the text representation will be used, properly quoted and escaped
- so that it is a valid JSON string.
+ the text representation will be used, in such a fashion that it is a 
+ valid typejson/ or typejsonb/ value.
/entry
entryliteralto_json('Fred said Hi.'::text)/literal/entry
entryliteralFred said \Hi.\/literal/entry
@@ -10321,9 +10335,9 @@ table2-mapping
entryliteral{f1:1,f2:foo}/literal/entry
   /row
   row
-   entry
- literaljson_build_array(VARIADIC any)/literal
-   /entry
+   entryparaliteraljson_build_array(VARIADIC any)/literal
+  /paraparaliteraljsonb_build_array(VARIADIC any)/literal
+   /para/entry
entry
  Builds a possibly-heterogeneously-typed JSON array out of a variadic
  argument list.
@@ -10332,9 +10346,9 @@ table2-mapping
entryliteral[1, 2, 3, 4, 5]/literal/entry
   /row
   row
-   entry
- literaljson_build_object(VARIADIC any)/literal
-   /entry
+   entryparaliteraljson_build_object(VARIADIC any)/literal
+  /paraparaliteraljsonb_build_object(VARIADIC any)/literal
+   /para/entry
entry
  Builds a JSON object out of a variadic argument list.  By
  convention, the argument list consists of alternating
@@ -10344,9 +10358,9 @@ table2-mapping
entryliteral{foo: 1, bar: 2}/literal/entry
   /row
   row
-   entry
- literaljson_object(text[])/literal
-   /entry
+   entryparaliteraljson_object(text[])/literal
+  /paraparaliteraljsonb_object(text[])/literal
+   /para/entry
entry
  Builds a JSON object out of a text array.  The array must have either
  exactly one dimension with an even number of members, in which case
@@ -10359,9 +10373,9 @@ table2-mapping
entryliteral{a: 1, b: def, c: 
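
To make the new functions concrete, a few examples matching the
documentation above (a sketch, assuming the patch as posted; jsonb does
not preserve object key order, so object output can differ from the json
variants):

  SELECT to_jsonb('Fred said "Hi."'::text);
  -- "Fred said \"Hi.\""

  SELECT jsonb_build_array(1, 2, 'foo', 4, 5);
  -- [1, 2, "foo", 4, 5]

  SELECT jsonb_build_object('foo', 1, 'bar', 2);
  -- {"bar": 2, "foo": 1}

  SELECT jsonb_object('{a, 1, b, "def", c, 3.5}');
  -- {"a": "1", "b": "def", "c": "3.5"}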

Re: [HACKERS] Commitfest problems

2014-12-12 Thread David Fetter
On Fri, Dec 12, 2014 at 04:21:43PM +0100, Andres Freund wrote:
 On 2014-12-12 07:10:40 -0800, David Fetter wrote:
  On Thu, Dec 11, 2014 at 05:55:56PM -0500, Tom Lane wrote:
   Josh Berkus j...@agliodbs.com writes:
How about *you* run the next one, Tom?
   
   I think the limited amount of time I can put into a commitfest is
   better spent on reviewing patches than on managing the process.
  
  With utmost respect,
 
 FWIW, the way you frequently use this phrase doesn't come over as
 actually being respectful.

Respect is quantified, and in this case, the most afforded is the most
earned.  In the case of criticizing the work of others without an
offer to help them do it better, respect for that behavior does have
some pretty sharp upper limits, so yes, "utmost" is apt in that context.

  Tom, you seem to carve off an enormous amount of time to follow
  -bugs and -general.  What say you unsubscribe to those lists for
  the duration of your tenure as CFM?
 
 And why on earth would that be a good idea?

Because Tom Lane is not the person whose time is best spent screening
these mailing lists.

 These bugs need to be fixed - we're actually behind on that front.

So you're proposing a bug triage system, which is a separate
discussion.  Let's have that one in a separate thread.

 Are we now really trying to dictate how other developers manage
 their time?

I was merely pointing out that time can be allocated, and that it
appeared it could be allocated from a bucket for which persons less
knowledgeable--perhaps a good bit less knowledgeable--about the entire
code base than Tom are well suited.

 It's one thing to make up rules that say one review for one commit
 or something,

And how do you think that would work out?  Are you up for following
it?

 it's something entirely else to try to assign tasks to them.

I was, as I mentioned, merely pointing out that trade-offs are
available.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Josh Berkus
On 12/12/2014 06:30 AM, Robert Haas wrote:
 Yeah, that would be great, and even better if we could get 2 or 3
 positions funded so that the success or failure isn't too much tied to
 a single individual.  But even getting 1 position funded in a
 stable-enough fashion that someone would be willing to bet on it seems
 like a challenge.  (Maybe other people here are less risk-averse than
 I am.)

Well, first, who would that person be? Last I checked, all of the senior
committers were spoken for.  I like this idea, but the list of people
who could fill the role is pretty short, and I couldn't possibly start
fundraising unless I had a candidate.

Second, I don't think someone's employment will make a difference in
fixing the commitfest and patch review *process* unless the contributors
agree that it needs fixing and that they are willing to make changes to
their individual workflow to fix it.  Right now there is no consensus
about moving forward in our patch review process; everyone seems to want
the problem to go away without changing anything.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-12 Thread Simon Riggs
On 12 December 2014 at 18:04, Bruce Momjian br...@momjian.us wrote:

 Well, it seems we need to see some actual cases where compression does
 help before moving forward.  I thought Amit had some amazing numbers for
 WAL compression --- has that changed?

For background processes, like VACUUM, WAL compression will be
helpful. The numbers show the benefit only applies to FPWs.

I remain concerned about the cost in foreground processes, especially
since the cost will be paid immediately after checkpoint, making our
spikes worse.

What I don't understand is why we aren't working on double buffering,
since that cost would be paid in a background process and would be
evenly spread out across a checkpoint. Plus we'd be able to remove
FPWs altogether, which is like 100% compression.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb generator functions

2014-12-12 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 Also here is a patch factored out which applies the 
 find_coercion_pathway change to json.c. I'm inclined to say we should 
 backpatch this to 9.4 (and with a small change 9.3). Thoughts?

Meh.  Maybe I'm just feeling gunshy because I broke something within
the past 24 hours, but at this point (with 9.4.0 wrap only 3 days away)
I'm inclined to avoid any 9.4 code churn that's not clearly necessary.
You argued upthread that this change would not result in any behavioral
changes in which cast method gets selected.  If that's true, then we don't
really need to back-patch; while if it turns out not to be true, we
definitely don't want it in 9.3 and I'd argue it's too late for 9.4 also.

In short, I think it's fine for the 9.4 JSON code to start diverging
from HEAD at this point ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Tomas Vondra
On 12.12.2014 19:07, Bruce Momjian wrote:
 On Fri, Dec 12, 2014 at 10:50:56AM -0500, Tom Lane wrote:
 Also, one part of the point of the review mechanism is that it's
 supposed to provide an opportunity for less-senior reviewers to
 look at parts of the code that they maybe don't know so well, and
 thereby help grow them into senior people. If we went over to the
 notion of some one (or a few) senior people doing all the
 reviewing, it might make the review process more expeditious but it
 would lose the training aspect. Of course, maybe the training
 aspect was never worth anything; I'm not in a position to opine on
 that. But I don't really think that centralizing that 
 responsibility would be a good thing in the long run.
 
 That is a very good point --- we have certainly had people doing
 reviews long enough to know if the review process is preparing
 developers for more complex tasks. I don't know the answer myself,
 which might say something.

I can't speak for the others, but for me it certainly is a useful way to
learn new stuff. Maybe not as important as working on my own patches,
but it usually forces me to learn something new, and gives me a
different perspective.

regards
Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Simon Riggs
On 12 December 2014 at 15:10, David Fetter da...@fetter.org wrote:
 On Thu, Dec 11, 2014 at 05:55:56PM -0500, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  How about *you* run the next one, Tom?

 I think the limited amount of time I can put into a commitfest is
 better spent on reviewing patches than on managing the process.

IIRC Tom was pretty much the only person doing patch review for
probably 5 years during 2003-2008, maybe others. AFAICS he was
managing that process. Thank you, Tom.

I've never seen him moan loudly about this, so I'm surprised to hear
such things from people that have done much less.

Any solution to our current problems will come from working together,
not by fighting.

We just need to do more reviews. Realising this, I have begun to do
more. I encourage others to do this also.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb generator functions

2014-12-12 Thread Andrew Dunstan


On 12/12/2014 01:55 PM, Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:

Also here is a patch factored out which applies the
find_coercion_pathway change to json.c. I'm inclined to say we should
backpatch this to 9.4 (and with a small change 9.3). Thoughts?

Meh.  Maybe I'm just feeling gunshy because I broke something within
the past 24 hours, but at this point (with 9.4.0 wrap only 3 days away)
I'm inclined to avoid any 9.4 code churn that's not clearly necessary.
You argued upthread that this change would not result in any behavioral
changes in which cast method gets selected.  If that's true, then we don't
really need to back-patch; while if it turns out not to be true, we
definitely don't want it in 9.3 and I'd argue it's too late for 9.4 also.

In short, I think it's fine for the 9.4 JSON code to start diverging
from HEAD at this point ...


Ok

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Joshua D. Drake


On 12/11/2014 02:55 PM, Tom Lane wrote:

Josh Berkus j...@agliodbs.com writes:

How about *you* run the next one, Tom?


I think the limited amount of time I can put into a commitfest is better
spent on reviewing patches than on managing the process.


Agreed, but...

That means committers/hackers have to suck it up when the manager closes 
the commit fest.


We don't get our cake and eat it too. We either accept that the CFM has 
the authority to do exactly what they are supposed to do, or we don't.

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Joshua D. Drake


On 12/12/2014 06:30 AM, Robert Haas wrote:


Yeah, that would be great, and even better if we could get 2 or 3
positions funded so that the success or failure isn't too much tied to
a single individual.  But even getting 1 position funded in a
stable-enough fashion that someone would be willing to bet on it seems
like a challenge.  (Maybe other people here are less risk-averse than
I am.)


We (not CMD, the community) with proper incentive could fund this. It 
really wouldn't be that hard. That said, there would have to be a clear 
understanding of expectations, results, and authority.


JD






--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Joshua D. Drake


On 12/12/2014 10:59 AM, Simon Riggs wrote:


On 12 December 2014 at 15:10, David Fetter da...@fetter.org wrote:

On Thu, Dec 11, 2014 at 05:55:56PM -0500, Tom Lane wrote:

Josh Berkus j...@agliodbs.com writes:

How about *you* run the next one, Tom?


I think the limited amount of time I can put into a commitfest is
better spent on reviewing patches than on managing the process.


IIRC Tom was pretty much the only person doing patch review for
probably 5 years during 2003-2008, maybe others. AFAICS he was
managing that process. Thank you, Tom.

I've never seen him moan loudly about this, so I'm surprised to hear
such things from people that have done much less.

Any solution to our current problems will come from working together,
not by fighting.

We just need to do more reviews. Realising this, I have begun to do
more. I encourage others to do this also.



Simon,

Well said, but again, I think a lot of people are hand-waving about a 
simple problem (within the current structure), and that problem is just 
one of submission.


Those doing the patch review/writing need to submit to the authority of 
the CFM or CFC (commit fest committee). Once that happens, a lot of the 
angst around this process goes away.


Sincerely,

Joshua D. Drake


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Alvaro Herrera
Joshua D. Drake wrote:
 
 On 12/12/2014 06:30 AM, Robert Haas wrote:
 
 Yeah, that would be great, and even better if we could get 2 or 3
 positions funded so that the success or failure isn't too much tied to
 a single individual.  But even getting 1 position funded in a
 stable-enough fashion that someone would be willing to bet on it seems
 like a challenge.  (Maybe other people here are less risk-averse than
 I am.)
 
 We (not CMD, the community) with proper incentive could fund this. It really
 wouldn't be that hard. That said, there would have to be a clear
 understanding of expectations, results, and authority.

Uh, really?  Last I looked at the numbers from SPI treasurer reports,
they are not impressive enough to hire a full-time engineer, let alone a
senior one.

The Linux Foundation has managed to pay for Linus Torvalds somehow, so
it does sound possible.  We have a number of companies making money all
over the globe, at least.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Tomas Vondra
On 11.12.2014 16:06, Bruce Momjian wrote:
 On Wed, Dec 10, 2014 at 11:00:21PM -0800, Josh Berkus wrote:

 I will add:

 4. commitfest managers have burned out and refuse to do it again
 
 Agreed. The fun, if it was ever there, has left the commitfest 
 process.

I've never been a CFM, but from my experience as a patch author and
reviewer, I think there are two or three reasons for that (of course,
I'm not saying those are the most important ones or the only ones):

1) unclear definition of what CFM is expected to do

   The current wiki page describing the role of CFM [1] is rather
   obsolete, IMHO. For example it says that the CFM assigns patches to
   reviewers, posts announcements to pgsql-rrreviewers, etc.

   I don't think this was really followed in recent CFs.

   This however results in people filling the gaps with what they
   believe the CFM should do, causing misunderstandings etc. Shall
   we update the description a bit, to reflect the current state
   of affairs?

   Maybe we should also consider which responsibilities should be
   shifted back to the developers and reviewers. E.g. do we really
   expect the CFM to assign patches to reviewers?


2) not really following the rules

   We do have a few rules that we don't follow as much as we should,
   notably:

   * 1:1 for patches:reviews (one review for each submitted patch)
   * no new patches after the CF starts (post it to the next one)
   * CF ends at a specific date

   I believe violating those rules is related to (1) because it may
   lead to the perception that the CFM makes them up or does not enforce them
   equally for all patches.


3) manual processing that could be automated

   I think the CF site was a huge step forward, but maybe we could
   improve it, to automate some of the CFM tasks? For example
   integrating it a bit more tightly with the mailinglist (which would
   make the life easier even for patch authors and reviewers)?


However as I said before, I never was a CFM - I'd like to hear from the
actual CFMs what's their opinion on this.


kind regards
Tomas

[1] https://wiki.postgresql.org/wiki/Running_a_CommitFest


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Josh Berkus
On 12/12/2014 11:35 AM, Alvaro Herrera wrote:
 Uh, really?  Last I looked at the numbers from SPI treasurer reports,
 they are not impressive enough to hire a full-time engineer, let alone a
 senior one.
 
 The Linux Foundation has managed to pay for Linus Torvalds somehow, so
 it does sound possible.  We have a number of companies making money all
 over the globe, at least.

You're looking at this wrong.  We have that amount of money in the
account based on zero fundraising whatsoever, which we don't do because
we don't spend the money.  We get roughly $20,000 per year just by
putting up a donate link, and not even promoting it.

So, what this would take is:

1) a candidate who is currently a known major committer

2) clear goals for what this person would spend their time doing

3) buy-in from the Core Team, the committers, and the general hackers
community (including buy-in to the idea of favorable publicity for
funding supporters)

4) an organizing committee with the time to deal with managing
foundation funds

If we had those four things, the fundraising part would be easy.  I
speak as someone who used to raise $600,000 per year for a non-profit in
individual gifts alone.

However, *I'm* not clear on what problems this non-profit employed
person would be solving for the community.  I doubt anyone else is
either.  Until we have consensus on that, there's no point in talking
about anything else.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Joshua D. Drake


On 12/12/2014 11:35 AM, Alvaro Herrera wrote:


We (not CMD, the community) with proper incentive could fund this. It really
wouldn't be that hard. That said, there would have to be a clear
understanding of expectations, results, and authority.


Uh, really?


Yeah, I think so. Money can be easy to get when clear leadership and 
goals are presented.



Last I looked at the numbers from SPI treasurer reports,
they are not impressive enough to hire a full-time engineer, let alone a
senior one.


1. We don't need a full-time engineer to manage a commitfest. We need a 
manager or PM.


2. The original idea came from cross-company (which is part of the 
community)


3. There are more non-profits in this game than just SPI

4. We could do it on a 6-month or 1-year contract

Sincerely,

Joshua D. Drake



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, @cmdpromptinc
If we send our children to Caesar for their education, we should
 not be surprised when they come back as Romans.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Magnus Hagander
On Fri, Dec 12, 2014 at 8:43 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 11.12.2014 16:06, Bruce Momjian wrote:
 On Wed, Dec 10, 2014 at 11:00:21PM -0800, Josh Berkus wrote:
 3) manual processing that could be automated

I think the CF site was a huge step forward, but maybe we could
improve it, to automate some of the CFM tasks? For example
integrating it a bit more tightly with the mailinglist (which would
make the life easier even for patch authors and reviewers)?

Just as a note about this one part alone (I'll read the rest later). I
do have the new version of the CF app more or less ready to deploy,
but I got bogged down by thinking I'll do it "between two commitfests"
to not be disruptive. But there has been no "between two
commitfests". Hopefully I can get around to doing it during the
holidays. It does integrate much tighter with the archives, that's
probably the core feature of it.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Josh Berkus
On 12/12/2014 11:52 AM, Magnus Hagander wrote:
 On Fri, Dec 12, 2014 at 8:43 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 11.12.2014 16:06, Bruce Momjian wrote:
 On Wed, Dec 10, 2014 at 11:00:21PM -0800, Josh Berkus wrote:
 3) manual processing that could be automated

I think the CF site was a huge step forward, but maybe we could
improve it, to automate some of the CFM tasks? For example
integrating it a bit more tightly with the mailinglist (which would
make the life easier even for patch authors and reviewers)?
 
 Just as a note about this one part alone (I'll read the rest later). I
 do have the new version of the CF app more or less ready to deploy,
 but I got bogged down by thinking I'll do it "between two commitfests"
 to not be disruptive. But there has been no "between two
 commitfests". Hopefully I can get around to doing it during the
 holidays. It does integrate much tighter with the archives, that's
 probably the core feature of it.

It also automates a bunch of the emailing no?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Christoph Berg
Re: Andres Freund 2014-12-12 20141212152723.go31...@awork2.anarazel.de
 On 2014-12-12 10:20:58 -0500, Tom Lane wrote:
  Peter Eisentraut pete...@gmx.net writes:
   On 12/12/14 8:13 AM, Andres Freund wrote:
   Wouldn't a make install-server/client targets or something similar
   actually achieve the same thing? Seems simpler to maintain to me.

Ack. The default install location would still be .../bin, but invoked
from different targets.

   Adding non-standard makefile targets comes with its own set of
   maintenance issues.
  
  It would be of zero value to packagers anyway; certainly so for those
  following the Red Hat tradition, in which you tell the package Makefile
  to install everything and then what goes into which subpackage is
  sorted out in a separate, subsequent step.  Possibly Debian or other
  packaging infrastructures do it differently, but I doubt that.
 
 Debian has that step as well - you don't really have to use it, but the
 postgres debian packages do so. They already don't adhere to the current
 distinction.

The standard Debian package installs into debian/tmp/ and then picks
files from there into individual packages.

However, for PostgreSQL this means lengthy debian/*.install files
(the equivalent of %files in rpm spec speak):

$ wc -l debian/*.install
   2 debian/libecpg6.install
   1 debian/libecpg-compat3.install
  17 debian/libecpg-dev.install
   1 debian/libpgtypes3.install
   2 debian/libpq5.install
  14 debian/libpq-dev.install
  39 debian/postgresql-9.4.install
  40 debian/postgresql-client-9.4.install
  65 debian/postgresql-contrib-9.4.install
   2 debian/postgresql-doc-9.4.install
   3 debian/postgresql-plperl-9.4.install
   2 debian/postgresql-plpython3-9.4.install
   3 debian/postgresql-plpython-9.4.install
   5 debian/postgresql-pltcl-9.4.install
   3 debian/postgresql-server-dev-9.4.install
 199 total

If there were separate install-client, install-server, and
install-contrib targets, that would probably shorten those files
quite a bit. Especially messy is the part where *.so needs to be
sorted into server/contrib, along with a similarly large bunch of
binaries.

Of course that would only solve part of the problem (I'm not going to
suggest creating 15 targets for the 15 binary packages we are
building), but it would solve the uglier part.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Tom Lane
Christoph Berg c...@df7cb.de writes:
 However, for PostgreSQL this means lengthy debian/*.install files
 (the equivalent of %files in rpm spec speak):

Right ...

 If there were separate install-client, install-server, and
 install-contrib targets, that would probably shorten those files
 quite a bit. Especially messy is the part where *.so needs to be
  sorted into server/contrib, along with a similarly large bunch of
 binaries.

Pardon me for not knowing much about Debian packages, but how would
that work exactly?  Is it possible to do make install-client, then
package the installed files, then rm -rf the install tree, then
repeat for install-server and install-contrib?  In the RPM world
this would never work because the build/install step happens in
toto before the packaging step.  Even without that, it seems like
it'd be hard to make it entirely automatic since some files would
be installed in multiple cases (and directories even more so).

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Magnus Hagander
On Fri, Dec 12, 2014 at 9:05 PM, Josh Berkus j...@agliodbs.com wrote:
 On 12/12/2014 11:52 AM, Magnus Hagander wrote:
 On Fri, Dec 12, 2014 at 8:43 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 11.12.2014 16:06, Bruce Momjian wrote:
 On Wed, Dec 10, 2014 at 11:00:21PM -0800, Josh Berkus wrote:
 3) manual processing that could be automated

I think the CF site was a huge step forward, but maybe we could
improve it, to automate some of the CFM tasks? For example
integrating it a bit more tightly with the mailinglist (which would
make the life easier even for patch authors and reviewers)?

 Just as a note about this one part alone (I'll read the rest later). I
 do have the new version of the CF app more or less ready to deploy,
 but I got bogged down by thinking I'll do it "between two commitfests"
 to not be disruptive. But there has been no "between two
 commitfests". Hopefully I can get around to doing it during the
 holidays. It does integrate much tighter with the archives, that's
 probably the core feature of it.

 It also automates a bunch of the emailing no?

Yes.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] moving from contrib to bin

2014-12-12 Thread Alvaro Herrera
Tom Lane wrote:
 Christoph Berg c...@df7cb.de writes:
  However, for PostgreSQL this means lengthy debian/*.install files
  (the equivalent of %files in rpm spec speak):
 
 Right ...
 
  If there were separate install-client, install-server, and
  install-contrib targets, that would probably shorten those files
  quite a bit. Especially messy is the part where *.so needs to be
  sorted into server/contrib, along with a similarly large bunch of
  binaries.
 
 Pardon me for not knowing much about Debian packages, but how would
 that work exactly?  Is it possible to do make install-client, then
 package the installed files, then rm -rf the install tree, then
 repeat for install-server and install-contrib?  In the RPM world
 this would never work because the build/install step happens in
 toto before the packaging step.

Uh, couldn't you just run make install-client DESTDIR=.../client for
client-only files, and so on?  You would end up with separate
directories containing files for each subpackage.

 Even without that, it seems like it'd be hard to make it entirely
 automatic since some files would be installed in multiple cases (and
 directories even more so).

Yeah, you would need to fix that somehow.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Peter Geoghegan
On Fri, Dec 12, 2014 at 12:30 PM, Magnus Hagander mag...@hagander.net wrote:
 It also automates a bunch of the emailing no?

 Yes.

Please let me know the details (privately or otherwise). I'd like to
try it out again.


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Josh Berkus

 Just as a note about this one part alone (I'll read the rest later). I
 do have the new version of the CF app more or less ready to deploy,
 but I got bogged down by thinking I'll do it "between two commitfests"
 to not be disruptive. But there has been no "between two
 commitfests". Hopefully I can get around to doing it during the
 holidays. It does integrate much tighter with the archives, that's
 probably the core feature of it.

 It also automates a bunch of the emailing no?
 
 Yes.

I can key in a bunch of the backlog of patches into the new app over the
holidays, but not before then.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Jim Nasby

On 12/12/14, 2:38 PM, Josh Berkus wrote:



Just as a note about this one part alone (I'll read the rest later). I
do have the new version of the CF app more or less ready to deploy,
but I got bogged down by thinking I'll do it "between two commitfests"
to not be disruptive. But there has been no "between two
commitfests". Hopefully I can get around to doing it during the
holidays. It does integrate much tighter with the archives, that's
probably the core feature of it.


It also automates a bunch of the emailing no?


Yes.


I can key in a bunch of the backlog of patches into the new app over the
holidays, but not before then.


FWIW, I suspect a call for help on -general or IRC would find volunteers for 
any necessary data entry work...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: hashjoin - gracefully increasing NTUP_PER_BUCKET instead of batching

2014-12-12 Thread Robert Haas
On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra t...@fuzzy.cz wrote:
 On 12.12.2014 14:19, Robert Haas wrote:
 On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra t...@fuzzy.cz wrote:

  Regarding the "sufficiently small" - considering today's hardware, we're
 probably talking about gigabytes. On machines with significant memory
 pressure (forcing the temporary files to disk), it might be much lower,
 of course. Of course, it also depends on kernel settings (e.g.
 dirty_bytes/dirty_background_bytes).

 Well, this is sort of one of the problems with work_mem.  When we
 switch to a tape sort, or a tape-based materialize, we're probably far
 from out of memory.  But trying to set work_mem to the amount of
 memory we have can easily result in a memory overrun if a load spike
 causes lots of people to do it all at the same time.  So we have to
 set work_mem conservatively, but then the costing doesn't really come
 out right.  We could add some more costing parameters to try to model
 this, but it's not obvious how to get it right.

 Ummm, I don't think that's what I proposed. What I had in mind was a
 flag "the batches are likely to stay in page cache". Because when it is
 likely, batching is probably faster (compared to increased load factor).

How will you know whether to set the flag?
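
For concreteness, the batching behavior under discussion shows up in any
hash join once the inner side outgrows work_mem.  A made-up sketch (table
names and reported numbers are illustrative only):

  SET work_mem = '1MB';
  EXPLAIN (ANALYZE, COSTS OFF)
  SELECT * FROM big b JOIN small s ON s.id = b.small_id;
  -- the Hash node then reports something like:
  --   Buckets: 4096  Batches: 16  Memory Usage: 993kB
  -- with work_mem large enough, Batches drops back to 1 and no batch
  -- temp files are written at all.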

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Commitfest problems

2014-12-12 Thread Jim Nasby

On 12/12/14, 1:44 PM, Joshua D. Drake wrote:

1. We don't need a full-time engineer to manage a commitfest. We need a manager 
or PM.


I don't think that's actually true. The major points on this thread are that 1) 
we don't have enough capacity for doing reviews and 2) the CFM has no authority 
to enforce anything.

I see no way that #2 can be addressed by a mere manager/PM. If three *very* 
senior community members (David, Josh and Robert) couldn't get this done, 
there's no way a PM could. (Well, I suppose if Tom was standing behind them 
with a flaming sword it might work...)

Even so, this still wouldn't address the real problem, which is lack of review 
capacity.

FWIW, I faced the same problem at Enova: good, solid reviews were very 
important for maintaining the quality of the data and database code, yet were a 
constant source of pain and friction. And that was with people being paid to do 
them, as well as a very extensive set of unit tests.

In other words, this isn't an easy problem to solve.

One thing that I think would help is modifying the CF app to allow for multiple 
reviewers. IIRC I reviewed 4 or 5 patches but I didn't mark myself as reviewer 
of any of them because I don't feel I have enough knowledge to fulfill that 
role.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >