Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-02-04 Thread Robert Haas
On Sat, Feb 2, 2013 at 4:38 PM, Andres Freund  wrote:
> On 2013-01-28 16:55:52 -0500, Steve Singer wrote:
>> If you're using non-surrogate/natural primary keys this tends to come up
>> occasionally due to data-entry errors or renames.  I'm looking at this from
>> the point of view of what I need to use this as a source for a production
>> replication system with fewer sharp edges compared to trigger-sourced slony.
>> My standard is a bit higher than a 'first' version because I intend to use it
>> in version 3.0 of slony, not 1.0.  If others feel I'm asking for too much
>> they should speak up; maybe I am. Also, the way things will fail if someone
>> were to try and update a primary key value is pretty nasty (it will leave
>> them with inconsistent databases). We could install UPDATE triggers to
>> try and detect this type of thing, but I'd rather see us just log the old
>> values so we can use them during replay.
>
> I pushed support for this. I am not yet 100% happy with it due to a few
> issues:
>
> * it increases the xlog size logged by heap_update by 2 bytes even with
>   wal_level < logical as it uses a variant of xl_heap_header that
>   includes its length. Conditionally using xl_heap_header would make the
>   code even harder to read. Is that acceptable?

I think it's important to avoid adding to DML WAL volume when
wal_level < logical.  I am not positive that 2 bytes is noticeable,
but I'm not positive that it isn't either: heap insert/update must be
our most commonly-used WAL records.  On the other hand, we also need
to keep in mind that branches in hot code paths aren't free either.  I
would be more concerned about the increased run-time cost of
constructing the correct WAL record than about the related code
complexity.  None of that code is simple anyway.
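For reference, here is the stock header logged for heap tuples alongside a
sketch of the length-carrying variant under discussion; xl_heap_header below
matches the upstream definition, while xl_heap_header_len is only an
assumption inferred from the 2-byte overhead described above.

/* stock header logged for heap tuples (upstream definition) */
typedef struct xl_heap_header
{
    uint16      t_infomask2;
    uint16      t_infomask;
    uint8       t_hoff;
} xl_heap_header;

/*
 * Assumed shape of the patch's variant: the same header prefixed with the
 * tuple length, which is where the extra 2 bytes per record come from.
 * (uint16/uint8 are PostgreSQL's typedefs from c.h.)
 */
typedef struct xl_heap_header_len
{
    uint16          t_len;
    xl_heap_header  header;
} xl_heap_header_len;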

> * multi_insert should be converted to use xl_heap_header_len as well,
>   instead of using xl_multi_insert_tuple, that would also reduce the
>   amount of multi-insert specific code in decode.c
> * both for update and delete we should denote more explicitly that
>   ->oldtuple points to an index tuple, not to a full tuple

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-02-02 Thread Andres Freund
On 2013-01-28 16:55:52 -0500, Steve Singer wrote:
> If you're using non-surrogate/natural primary keys this tends to come up
> occasionally due to data-entry errors or renames.  I'm looking at this from
> the point of view of what I need to use this as a source for a production
> replication system with fewer sharp edges compared to trigger-sourced slony.
> My standard is a bit higher than a 'first' version because I intend to use it
> in version 3.0 of slony, not 1.0.  If others feel I'm asking for too much
> they should speak up; maybe I am. Also, the way things will fail if someone
> were to try and update a primary key value is pretty nasty (it will leave
> them with inconsistent databases). We could install UPDATE triggers to
> try and detect this type of thing, but I'd rather see us just log the old
> values so we can use them during replay.

I pushed support for this. I am not yet 100% happy with it due to a few
issues:

* it increases the xlog size logged by heap_update by 2 bytes even with
  wal_level < logical as it uses a variant of xl_heap_header that
  includes its length. Conditionally using xl_heap_header would make the
  code even harder to read. Is that acceptable?
* multi_insert should be converted to use xl_heap_header_len as well,
  instead of using xl_multi_insert_tuple, that would also reduce the
  amount of multi-insert specific code in decode.c
* both for update and delete we should denote more explicitly that
  ->oldtuple points to an index tuple, not to a full tuple

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-28 Thread Steve Singer

On 13-01-28 06:17 AM, Andres Freund wrote:

Hi,

3.  Pass the delete (with no key values) onto the replication client and let
it deal with it (see 1 and 2)
Hm.


While I agree that nicer behaviour would be good, I think the real
enforcement should happen on a higher level, e.g. with event triggers or
some such. It seems way too late to do anything about it when we're
already decoding. The transaction will already have committed...


Ideally the first line of enforcement would be with event triggers. The
thing with user-level mechanisms for enforcing things is that they can
sometimes be disabled or bypassed.  I don't have a lot of sympathy for
people who do this, but I like the idea of at least having the option of
coding defensively to detect the situation and whine to the user.



How do you plan on dealing with sequences?
I don't see my plugin being called on sequence changes and I don't see
XLOG_SEQ_LOG listed in DecodeRecordIntoReorderBuffer.  Is there a reason why
this can't be easily added?

I basically was hoping for Simon's sequence-am to get in before doing
anything real here. That didn't really happen yet.
I am not sure whether there's a real use case in decoding normal
XLOG_SEQ_LOG records; their content isn't all that easy to interpret
unless you're rather familiar with pg's innards.

So, adding support wouldn't be hard from a technical POV, but it seems the
semantics are a bit hard to nail down.


Also, what do we want to do about TRUNCATE support?  I could always leave a
TRUNCATE trigger in place that logs the truncate to an sl_truncates table and
have my replication daemon respond to the insert on the sl_truncates table
by actually truncating the data on the replica.

I have planned to add some generic "table_rewrite" handling, but I have
to admit I haven't thought too much about it yet. Currently if somebody
rewrites a table, e.g. with an ALTER ... ADD COLUMN .. DEFAULT .. or
ALTER COLUMN ... USING ..., you will see INSERTs into a temporary
table. That basically seems to be a good thing, but the user needs to be
told about that ;)


I've spent some time this weekend updating my prototype plugin that
generates slony 2.2 style COPY output.  I have attached my progress here
(also https://github.com/ssinger/slony1-engine/tree/logical_repl).  I have
not gotten as far as modifying slon to act as a logical log receiver, or
made a version of the slony apply trigger that would process these
changes.

I only gave it a quick look and have a couple of questions and
remarks. The way you used the options, it looks like you're thinking of
specifying all the tables as options? I would have thought those would
get stored & queried locally and only something like the 'replication
set' name or such would be set as an option.


The way slony works today is that the list of tables to pull for a SYNC 
comes from the subscriber because the subscriber might be behind the 
provider, where a table has been removed from the set in the meantime.  
The subscriber still needs to receive data from that table until it is 
caught up to the point where that removal happens.


Having a time-travelled version of a user table (sl_table) might fix 
that problem but I haven't yet figured out how that needs to work with 
cascading (since that is a feature of slony today I can't ignore the 
problem). I'm also not sure how that will work with table renames.  
Today if the user renames a table inside of an EXECUTE SCRIPT slony will 
update the name of the table in sl_table.   This type of change wouldn't 
be visible (yet) in the time-travelled catalog.  There might be a 
solution to this yet, but I haven't figured it out.  Sticking with what
slony does today seemed easier as a first step.



Iterating over a list with
for(i = 0; i < options->length; i= i + 2 )
{
DefElem * def_schema = (DefElem*) list_nth(options,i);
is not a good idea btw; that's quadratic in complexity ;)


Thanks, I'll rewrite this to walk a list of ListCell objects with next.
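A minimal sketch of the linear-time idiom, using the standard
foreach()/lfirst() accessors; the function name and the DEBUG1 message are
purely illustrative:

#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/parsenodes.h"

static void
walk_plugin_options(List *options)
{
    ListCell   *lc;

    /* each cell is visited exactly once: O(n) instead of list_nth() per item */
    foreach(lc, options)
    {
        DefElem    *def = (DefElem *) lfirst(lc);

        elog(DEBUG1, "option \"%s\"", def->defname);
    }
}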



In the REORDER_BUFFER_CHANGE_UPDATE case I suggest using
relation->rd_primary, just as in the DELETE case; that should always
give you a consistent candidate key in an efficient manner.


I haven't looked into the details of what is involved in setting up a
subscription with the snapshot exporting.

That hopefully shouldn't be too hard... At least that's the idea :P


I couldn't get the options on the START REPLICATION command to parse so I
just hard coded some list building code in the init method. I do plan on
passing the list of tables to replicate from the replica to the plugin
(because this list comes from the replica).  Passing what could be a few
thousand table names as a list of arguments is a bit ugly, and I admit my
list processing code is rough.  Does this make us want to reconsider the
format of the option_list?

Yea, something's screwed up there, sorry. Will push a fix later today.


I guess I should provide an opinion on whether I think that the patch in this CF,
if commi

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-28 Thread Andres Freund
Hi,

On 2013-01-27 23:07:51 -0500, Steve Singer wrote:
> A few more comments;
> 
> In decode.c DecodeDelete
> 
> +   if (r->xl_len <= (SizeOfHeapDelete + SizeOfHeapHeader))
> +   {
> +   elog(DEBUG2, "huh, no primary key for a delete on wal_level = logical?");
> +   return;
> +   }
> +
 
> I think we should be passing deletes with candidate key data logged to the
> plugin.  If the table isn't a replicated table then ignoring the delete is
> fine.  If the table is a replicated table but someone has deleted the unique
> index from the table then the plugin will receive INSERT changes on the
> table but not DELETE changes. If this happens the plugin wouldn't have any way
> of knowing that it is missing delete changes.  If my plugin gets passed a
> DELETE change record but with no key data then my plugin could do any of

I basically didn't do that because I thought people would forget to
check whether oldtuple is empty. I have no problem with adding support
for that, though.

> 1.  Start screaming for help (ie log errors)

Yes.

> 2.  Drop the table from replication

No, you can't write from an output plugin, and I don't immediately see
support for that coming. There are no fundamental blockers; it just makes
things more complicated.

> 3.  Pass the delete (with no key values) onto the replication client and let
> it deal with it (see 1 and 2)

Hm.


While I agree that nicer behaviour would be good, I think the real
enforcement should happen on a higher level, e.g. with event triggers or
some such. It seems way too late to do anything about it when we're
already decoding. The transaction will already have committed...

> Also, 'huh' isn't one of our standard log message phrases :)

You're right there ;). I basically wanted to remove the log message
almost instantly, but it was occasionally useful so I kept it around...

> How do you plan on dealing with sequences?
> I don't see my plugin being called on sequence changes and I don't see
> XLOG_SEQ_LOG listed in DecodeRecordIntoReorderBuffer.  Is there a reason why
> this can't be easily added?

I basically was hoping for Simon's sequence-am to get in before doing
anything real here. That didn't really happen yet.
I am not sure whether there's a real use case in decoding normal
XLOG_SEQ_LOG records; their content isn't all that easy to interpret
unless you're rather familiar with pg's innards.

So, adding support wouldn't be hard from a technical POV, but it seems the
semantics are a bit hard to nail down.
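To illustrate why: an XLOG_SEQ_LOG record body is essentially just the
relation identity followed by the raw on-disk sequence tuple (the upstream
definition from commands/sequence.h is reproduced below), so a consumer
would have to understand the sequence relation's internal tuple layout to
make sense of it.

#include "storage/relfilenode.h"

typedef struct xl_seq_rec
{
    RelFileNode node;           /* which sequence relation was touched */
    /* SEQUENCE TUPLE DATA FOLLOWS AT END OF STRUCT */
} xl_seq_rec;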

> Also, what do we want to do about TRUNCATE support?  I could always leave a
> TRUNCATE trigger in place that logs the truncate to an sl_truncates table and
> have my replication daemon respond to the insert on the sl_truncates table
> by actually truncating the data on the replica.

I have planned to add some generic "table_rewrite" handling, but I have
to admit I haven't thought too much about it yet. Currently if somebody
rewrites a table, e.g. with an ALTER ... ADD COLUMN .. DEFAULT .. or
ALTER COLUMN ... USING ..., you will see INSERTs into a temporary
table. That basically seems to be a good thing, but the user needs to be
told about that ;)

> I've spent some time this weekend updating my prototype plugin that
> generates slony 2.2 style COPY output.  I have attached my progress here
> (also https://github.com/ssinger/slony1-engine/tree/logical_repl).  I have
> not gotten as far as modifying slon to act as a logical log receiver, or
> made a version of the slony apply trigger that would process these
> changes.

I only gave it a quick look and have a couple of questions and
remarks. The way you used the options, it looks like you're thinking of
specifying all the tables as options? I would have thought those would
get stored & queried locally and only something like the 'replication
set' name or such would be set as an option.

Iterating over a list with
for(i = 0; i < options->length; i= i + 2 )
{
DefElem * def_schema = (DefElem*) list_nth(options,i);
is not a good idea btw; that's quadratic in complexity ;)

In the REORDER_BUFFER_CHANGE_UPDATE case I suggest using
relation->rd_primary, just as in the DELETE case; that should always
give you a consistent candidate key in an efficient manner.

> I haven't looked into the details of what is involved in setting up a
> subscription with the snapshot exporting.

That hopefully shouldn't be too hard... At least that's the idea :P

> I couldn't get the options on the START REPLICATION command to parse so I
> just hard coded some list building code in the init method. I do plan on
> passing the list of tables to replicate from the replica to the plugin
> (because this list comes from the replica).  Passing what could be a few
> thousand table names as a list of arguments is a bit ugly, and I admit my
> list processing code is rough.  Does this make us want to reconsider the
> format of the option_list?

Yea, something's screwed up there, sorry. Will push a fix later today.

> I guess I should provide

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-28 Thread Andres Freund
On 2013-01-26 16:20:33 -0500, Steve Singer wrote:
> On 13-01-24 11:15 AM, Steve Singer wrote:
> >On 13-01-24 06:40 AM, Andres Freund wrote:
> >>
> >>Fair enough. I am also working on a user of this infrastructure but that
> >>doesn't help you very much. Steve Singer seemed to make some stabs at
> >>writing an output plugin as well. Steve, how far did you get there?
> >
> >I was able to get something that generated output for INSERT statements in
> >a format similar to what a modified slony apply trigger would want.  This
> >was with the list of tables to replicate hard-coded in the plugin.  This
> >was with the patchset from the last commitfest. I had gotten a bit hung up
> >on the UPDATE and DELETE support because slony allows you to use an
> >arbitrary user specified unique index as your key.  It looks like better
> >support for tables with a unique non-primary key is in the most recent
> >patch set.  I am hoping to have time this weekend to update my plugin to
> >use parameters passed in on the init and other updates in the most recent
> >version.  If I make some progress I will post a link to my progress at the
> >end of the weekend.  My big issue is that I have limited time to spend on
> >this.
> >
>
> This isn't a complete review, just a few questions I've hit so far that I
> thought I'd ask to see if I'm not seeing something related to updates.

> + extern void relationFindPrimaryKey(Relation pkrel, Oid *indexOid,
> +                                    int16 *nratts, int16 *attnums,
> +                                    Oid *atttypids, Oid *opclasses);
> +
>
> I don't see this defined anywhere; could it be left over from a previous
> version of the patch?

Yes, its dead and now gone.

> In decode.c
> DecodeUpdate:
> +
> +   /*
> +* FIXME: need to get/save the old tuple as well if we want primary key
> +* changes to work.
> +*/
> +   change->newtuple = ReorderBufferGetTupleBuf(reorder);
>
> I also don't see any code in heap_update to find + save the old primary key
> values like you added to heap_delete.   You didn't list "Add ability to
> change the primary key on an UPDATE" in the TODO so I'm wondering if I'm
> missing something.  Is there another way I can get the primary key values
> for the old_tuple?

Nope, there isn't any right now. I had considered it as something not all
that interesting for real-world use cases based on my experience, but
adding support shouldn't be that hard anymore, so I can just bite the
bullet...

> I think the name of the test contrib module was changed but you didn't
> update the make file. This fixes it

Yea, I had forgotten to add that hunk when committing. Fixed.

Thanks,

Andres

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-27 Thread Steve Singer

On 13-01-24 11:15 AM, Steve Singer wrote:

On 13-01-24 06:40 AM, Andres Freund wrote:


Fair enough. I am also working on a user of this infrastructure but that
doesn't help you very much. Steve Singer seemed to make some stabs at
writing an output plugin as well. Steve, how far did you get there?


I was able to get something that generated output for INSERT 
statements in a format similar to what a modified slony apply trigger 
would want.  This was with the list of tables to replicate hard-coded 
in the plugin.  This was with the patchset from the last commitfest. I
had gotten a bit hung up on the UPDATE and DELETE support because 
slony allows you to use an arbitrary user specified unique index as 
your key.  It looks like better support for tables with a unique 
non-primary key is in the most recent patch set.  I am hoping to have 
time this weekend to update my plugin to use parameters passed in on 
the init and other updates in the most recent version.  If I make some 
progress I will post a link to my progress at the end of the weekend.  
My big issue is that I have limited time to spend on this.






A few more comments;

In decode.c DecodeDelete

+   if (r->xl_len <= (SizeOfHeapDelete + SizeOfHeapHeader))
+   {
+   elog(DEBUG2, "huh, no primary key for a delete on wal_level = logical?");
+   return;
+   }
+

I think we should be passing deletes with candidate key data logged to
the plugin.  If the table isn't a replicated table then ignoring the
delete is fine.  If the table is a replicated table but someone has
deleted the unique index from the table then the plugin will receive
INSERT changes on the table but not DELETE changes. If this happens the
plugin wouldn't have any way of knowing that it is missing delete changes.
If my plugin gets passed a DELETE change record but with no key data
then my plugin could do any of

1.  Start screaming for help (ie log errors)
2.  Drop the table from replication
3.  Pass the delete (with no key values) onto the replication client and 
let it deal with it (see 1 and 2)


Also, 'huh' isn't one of our standard log message phrases :)


How do you plan on dealing with sequences?
I don't see my plugin being called on sequence changes and I don't see 
XLOG_SEQ_LOG listed in DecodeRecordIntoReorderBuffer.  Is there a reason 
why this can't be easily added?


Also, what do we want to do about TRUNCATE support?  I could always leave
a TRUNCATE trigger in place that logs the truncate to an sl_truncates
table and have my replication daemon respond to the insert on the
sl_truncates table by actually truncating the data on the replica.


I've spent some time this weekend updating my prototype plugin that 
generates slony 2.2 style COPY output.  I have attached my progress here 
(also https://github.com/ssinger/slony1-engine/tree/logical_repl).  I 
have not gotten as far as modifying slon to act as a logical log 
receiver, or made a version of the slony apply trigger that would 
process these changes.  I haven't looked into the details of what is 
involved in setting up a subscription with the snapshot exporting.


I couldn't get the options on the START REPLICATION command to parse so 
I just hard coded some list building code in the init method. I do plan 
on passing the list of tables to replicate from the replica to the plugin
(because this list comes from the replica).  Passing what could be a
few thousand table names as a list of arguments is a bit ugly, and I
admit my list processing code is rough.  Does this make us want to
reconsider the format of the option_list?


I guess I should provide an opinion on whether I think that the patch in this
CF, if committed, could be used to act as a source for slony instead of
the log trigger.



The biggest missing piece is the one I mentioned in my email yesterday: we
aren't logging the old primary key on row UPDATEs.  I don't see building
a credible replication system where you don't allow users to update any
column of a row.


The other issues I've raised (DecodeDelete hiding bad deletes,
replication options not parsing for me) look like easy fixes.


The lack of WAL decoding support for sequences or TRUNCATE is something I
could work around by doing things much like slony does today.  The SYNC
can still capture the sequence changes in a table (where the INSERTs
would be logged) and I can have a trigger capture truncates.


I mostly did this review from the point of view of someone trying to use 
the feature, I haven't done a line-by-line review of the code.


I suspect Andres can address these issues and get an updated patch out
during this CF.  I think a more detailed code review by someone more
familiar with postgres internals will reveal a handful of other issues
that hopefully can be fixed without a lot of effort. If this were the
only patch in the commitfest I would encourage Andres to push to get
these changes done.  If the standard for CF4 is that a patch needs to be
basically in a committable stat

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-26 Thread Steve Singer

On 13-01-24 11:15 AM, Steve Singer wrote:

On 13-01-24 06:40 AM, Andres Freund wrote:


Fair enough. I am also working on a user of this infrastructure but that
doesn't help you very much. Steve Singer seemed to make some stabs at
writing an output plugin as well. Steve, how far did you get there?


I was able to get something that generated output for INSERT 
statements in a format similar to what a modified slony apply trigger 
would want.  This was with the list of tables to replicate hard-coded 
in the plugin.  This was with the patchset from the last commitfest. I
had gotten a bit hung up on the UPDATE and DELETE support because 
slony allows you to use an arbitrary user specified unique index as 
your key.  It looks like better support for tables with a unique 
non-primary key is in the most recent patch set.  I am hoping to have 
time this weekend to update my plugin to use parameters passed in on 
the init and other updates in the most recent version.  If I make some 
progress I will post a link to my progress at the end of the weekend.  
My big issue is that I have limited time to spend on this.




This isn't a complete review, just a few questions I've hit so far that I
thought I'd ask to see if I'm not seeing something related to updates.



*** a/src/include/catalog/index.h
--- b/src/include/catalog/index.h
*** extern bool ReindexIsProcessingHeap(Oid
*** 114,117 
--- 114,121 
  extern bool ReindexIsProcessingIndex(Oid indexOid);
  extern Oid  IndexGetRelation(Oid indexId, bool missing_ok);

+ extern void relationFindPrimaryKey(Relation pkrel, Oid *indexOid,
+                                    int16 *nratts, int16 *attnums,
+                                    Oid *atttypids, Oid *opclasses);
+
  #endif   /* INDEX_H */


I don't see this defined anywhere; could it be left over from a previous
version of the patch?



In decode.c
DecodeUpdate:
+
+   /*
+* FIXME: need to get/save the old tuple as well if we want primary key
+* changes to work.
+*/
+   change->newtuple = ReorderBufferGetTupleBuf(reorder);

I also don't see any code in heap_update to find + save the old primary 
key values like you added to heap_delete.   You didn't list "Add ability 
to change the primary key on an UPDATE" in the TODO so I'm wondering if 
I'm missing something.  Is there another way I can get the primary key
values for the old_tuple?


Also,

I think the name of the test contrib module was changed but you didn't 
update the Makefile. This fixes it:


diff --git a/contrib/Makefile b/contrib/Makefile
index 1cc30fe..36e6bfe 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -50,7 +50,7 @@ SUBDIRS = \
tcn \
test_parser \
test_decoding   \
-   test_logical_replication \
+   test_logical_decoding \
tsearch2\
unaccent\
vacuumlo\









Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Bruce Momjian
On Fri, Jan 25, 2013 at 02:16:09AM +0100, Andres Freund wrote:
> What I am afraid of, though, is that it basically goes on like this in the
> next commitfests:
> * 9.4-CF1: no "serious" reviewer comments because they are busy doing release 
> work
> * 9.4-CF2: all are relieved that the release is over and a bit tired
> * 9.4-CF3: first deeper review, some more complex restructuring required
> * 9.4-CF4: too many changes to commit.
> 
> If you look at the development of the feature, after the first prototype
> and the resulting design changes nobody with decision power had a more
> than cursory look at the proposed interfaces. That's very, very, very
> understandable; you all are busy people and the patch & the interfaces
> are complex so it takes noticeable amounts of time, but it unfortunately
> doesn't help in getting an acceptable interface nailed down.
> 
> The problem with that is not only that it sucks huge amounts of energy
> out of me and others but also that its very hard to really build the
> layers/users above changeset extraction without being able to rely on
> the interface and semantics. So we never get to the actually benefits
> :(, and we don't get the users people require for the feature to be
> committed.
> 
> So far, the only really effective way of getting people to comment on
> patches in this state & complexity is the threat of an upcoming commit
> because of the last commitfest :(
> 
> I honestly don't know how to go on about this...

This is very accurate and the big challenge of large, invasive patches. 
You almost need to hit it perfectly the first time to get it committed in
less than a year.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Andres Freund
Hi!

On 2013-01-24 13:27:00 -0500, Robert Haas wrote:
> On Thu, Jan 24, 2013 at 6:14 AM, Andres Freund  wrote:

> Before getting bogged down in technical commentary, let me say this
> very clearly: I am enormously grateful for your work on this project.
> Logical replication based on WAL decoding is a feature of enormous
> value that PostgreSQL has needed for a long time, and your work has
> made that look like an achievable goal.  Furthermore, it seems to me
> that you have pursued the community process with all the vigor and
> sincerity for which anyone could ask.  Serious design concerns were
> raised early in the process and you made radical changes to the design
> which I believe have improved it tremendously, and you've continued to
> display an outstanding attitude at every phase of this process about
> which I can't say enough good things.

Very much appreciated. Especially as I can echo your feeling of not only
having positive feelings about the process ;)

> Now, the bad news is, I don't think it's very reasonable to try to
> commit this to 9.3.  I think it is just too much stuff too late in the
> cycle.  I've reviewed some of the patches from time to time but there
> is a lot more stuff and it's big and complicated and it's not really
> clear that we have the interface quite right yet, even though I think
> it's also clear that we are a lot of closer than we were.  I don't
> want to be fixing that during beta, much less after release.

It pains me to admit that you have a point there.

What I am afraid of, though, is that it basically goes on like this in the
next commitfests:
* 9.4-CF1: no "serious" reviewer comments because they are busy doing release 
work
* 9.4-CF2: all are relieved that the release is over and a bit tired
* 9.4-CF3: first deeper review, some more complex restructuring required
* 9.4-CF4: too many changes to commit.

If you look at the development of the feature, after the first prototype
and the resulting design changes nobody with decision power had a more
than cursory look at the proposed interfaces. That's very, very, very
understandable; you all are busy people and the patch & the interfaces
are complex so it takes noticeable amounts of time, but it unfortunately
doesn't help in getting an acceptable interface nailed down.

The problem with that is not only that it sucks huge amounts of energy
out of me and others but also that it's very hard to really build the
layers/users above changeset extraction without being able to rely on
the interface and semantics. So we never get to the actual benefits
:(, and we don't get the users people require for the feature to be
committed.

So far, the only really effective way of getting people to comment on
patches in this state & complexity is the threat of an upcoming commit
because of the last commitfest :(

I honestly don't know how to go on about this...

> > I tried very, very hard to get the basics of the design & interface
> > solid. Which obviously doesn't mean I am succeeding - luckily not being
> > superhuman after all ;). And I think that's very much where input is
> > desperately needed and where I failed to raise enough attention. The
> > "output plugin" interface followed by the walsender interface is what
> > needs to be most closely vetted.
> > Those are the permanent, user/developer exposed UI and the one we should
> > try to keep as stable as possible.
> >
> > The output plugin callbacks are defined here:
> > http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/include/replication/output_plugin.h;hb=xlog-decoding-rebasing-cf4
> > To make it more agnostic of the technology to implement changeset
> > extraction we possibly should replace the ReorderBuffer(TXN|Change)
> > structs being passed by something more implementation agnostic.
> >
> > walsender interface:
> > http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/backend/replication/repl_gram.y;hb=xlog-decoding-rebasing-cf4
> > The interesting new commands are:
> > 1) K_INIT_LOGICAL_REPLICATION NAME NAME
> > 2) K_START_LOGICAL_REPLICATION NAME RECPTR plugin_options
> > 3) K_FREE_LOGICAL_REPLICATION NAME
> >
> > 1 & 3 allocate (respectively free) the permanent state associated with
> > one changeset consumer whereas START_LOGICAL_REPLICATION streams out
> > changes starting at RECPTR.
> 
> Forgive me for not having looked at the patch, but to what extent is
> all this, ah, documented?

There are several mails on -hackers where I ask for input on whether
that interface is what people want but all the comments have been from
non-core pg people, although mildly favorable.

I couldn't convince myself to write real low-level documentation,
as opposed to just the example code I needed for testing anyway and some
higher-level docs, before I had input from that side. Perhaps that
was a mistake.

So, here's a slightly less quick overview of the walsender interface:

Whenever a new replication consumer wants to stream dat

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Heikki Linnakangas

On 24.01.2013 20:27, Robert Haas wrote:

Before getting bogged down in technical commentary, let me say this
very clearly: I am enormously grateful for your work on this project.
Logical replication based on WAL decoding is a feature of enormous
value that PostgreSQL has needed for a long time, and your work has
made that look like an achievable goal.  Furthermore, it seems to me
that you have pursued the community process with all the vigor and
sincerity for which anyone could ask.  Serious design concerns were
raised early in the process and you made radical changes to the design
which I believe have improved it tremendously, and you've continued to
display an outstanding attitude at every phase of this process about
which I can't say enough good things.


+1. I really appreciate all the work you, Andres, have put into this. I've
argued in the past myself that there should be a little tool that 
scrapes the WAL to do logical replication. Essentially, just what you've 
implemented.


That said (hah, you knew there would be a "but" ;-)), now that I see 
what that looks like, I'm feeling that maybe it wasn't such a good idea 
after all. It sounded like a fairly small patch that greatly reduces the 
overhead in the master with existing replication systems like slony, but 
it turned out to be a huge patch with a lot of new concepts and interfaces.


- Heikki




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> Now, the bad news is, I don't think it's very reasonable to try to
> commit this to 9.3.  I think it is just too much stuff too late in the
> cycle.  I've reviewed some of the patches from time to time but there
> is a lot more stuff and it's big and complicated and it's not really
> clear that we have the interface quite right yet, even though I think
> it's also clear that we are a lot of closer than we were.  I don't
> want to be fixing that during beta, much less after release.

The only way to avoid this happening again and again, imv, is to get it
committed early in whatever cycle it's slated to release for.  We've got
some serious challenges there though because we want to encourage
everyone to focus on beta testing and going through the release process,
plus we don't want to tag/branch too early or we create more work for
ourselves.

It would have been nice to get this into 9.3, and I can certainly
understand needing to move it back, but can we get a slightly more
specific plan around getting it in then?

Thanks,

Stephen




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Robert Haas
On Thu, Jan 24, 2013 at 6:14 AM, Andres Freund  wrote:
> That's way more along the lines of what I am afraid of than the
> performance stuff - but Heikki cited those, so I replied to that.
>
> Note that I didn't say this must, must go in - I just don't think
> Heikki's reasoning about why not hit the nail on the head.

Fair enough, no argument.

Before getting bogged down in technical commentary, let me say this
very clearly: I am enormously grateful for your work on this project.
Logical replication based on WAL decoding is a feature of enormous
value that PostgreSQL has needed for a long time, and your work has
made that look like an achievable goal.  Furthermore, it seems to me
that you have pursued the community process with all the vigor and
sincerity for which anyone could ask.  Serious design concerns were
raised early in the process and you made radical changes to the design
which I believe have improved it tremendously, and you've continued to
display an outstanding attitude at every phase of this process about
which I can't say enough good things.  There is no question in my mind
that this work is going to be the beginning of a process that
revolutionizes the way people think about replication and PostgreSQL,
and you deserve our sincere thanks for that.

Now, the bad news is, I don't think it's very reasonable to try to
commit this to 9.3.  I think it is just too much stuff too late in the
cycle.  I've reviewed some of the patches from time to time but there
is a lot more stuff and it's big and complicated and it's not really
clear that we have the interface quite right yet, even though I think
it's also clear that we are a lot of closer than we were.  I don't
want to be fixing that during beta, much less after release.

> I tried very, very hard to get the basics of the design & interface
> > solid. Which obviously doesn't mean I am succeeding - luckily not being
> > superhuman after all ;). And I think that's very much where input is
> > desperately needed and where I failed to raise enough attention. The
> > "output plugin" interface followed by the walsender interface is what
> needs to be most closely vetted.
> Those are the permanent, user/developer exposed UI and the one we should
> try to keep as stable as possible.
>
> The output plugin callbacks are defined here:
> http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/include/replication/output_plugin.h;hb=xlog-decoding-rebasing-cf4
> To make it more agnostic of the technology to implement changeset
> extraction we possibly should replace the ReorderBuffer(TXN|Change)
> structs being passed by something more implementation agnostic.
>
> walsender interface:
> http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/backend/replication/repl_gram.y;hb=xlog-decoding-rebasing-cf4
> The interesting new commands are:
> 1) K_INIT_LOGICAL_REPLICATION NAME NAME
> 2) K_START_LOGICAL_REPLICATION NAME RECPTR plugin_options
> 3) K_FREE_LOGICAL_REPLICATION NAME
>
> 1 & 3 allocate (respectively free) the permanent state associated with
> one changeset consumer whereas START_LOGICAL_REPLICATION streams out
> changes starting at RECPTR.

Forgive me for not having looked at the patch, but to what extent is
all this, ah, documented?

> Btw, there are currently *no* changes to the wal format at all if
> wal_level < logical except that xl_running_xacts are logged more
> frequently, which obviously could easily be made conditional. Barring bugs,
> of course.
> The changes with wal_level>=logical aren't that big either imo:
> * heap_insert, heap_update prevent full page writes from removing their
>   normal record by using a separate XLogRecData block for the buffer and
>   the record
> * heap_delete adds more data (the pkey of the tuple) after the unchanged
>   xl_heap_delete struct
> * On changes to catalog tables (relfilenode, tid, cmin, cmax) are logged.
>
> No changes to mvcc for normal backends at all, unless you count the very
> slightly changed *Satisfies interface (getting passed a HeapTuple
> instead of HeapTupleHeader).
>
> I am not sure what you're concerned about WRT the on-disk format of the
> tuples? We are pretty much nailed down on that due to pg_upgrade'ability
> anyway and it could be changed from this patch's POV without a problem,
> the output plugin just sees normal HeapTuples? Or are you concerned
> about the code extracting them from the xlog records?

Mostly, my concern is that you've accidentally broken something, or
that your code will turn out to be flaky in ways we can't now predict.
 My only really specific concern at this point is about the special
treatment of catalog tables.  We've never done anything like that
before, and it feels like a bad idea.  In particular, the fact that
you have to WAL-log new information about cmin/cmax really suggests
that we're committing ourselves to the MVCC infrastructure in a way
that we weren't previously.  There's some category of stuff that our
MVCC 

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Steve Singer

On 13-01-24 06:40 AM, Andres Freund wrote:


Fair enough. I am also working on a user of this infrastructure but that
doesn't help you very much. Steve Singer seemed to make some stabs at
writing an output plugin as well. Steve, how far did you get there?


I was able to get something that generated output for INSERT statements 
in a format similar to what a modified slony apply trigger would want.  
This was with the list of tables to replicate hard-coded in the plugin.  
This was with the patchset from the last commitfest. I had gotten a bit
hung up on the UPDATE and DELETE support because slony allows you to use 
an arbitrary user specified unique index as your key.  It looks like 
better support for tables with a unique non-primary key is in the most 
recent patch set.  I am hoping to have time this weekend to update my 
plugin to use parameters passed in on the init and other updates in the 
most recent version.  If I make some progress I will post a link to my 
progress at the end of the weekend.  My big issue is that I have limited 
time to spend on this.




BTW, why does all the transaction reordering stuff have to be in core?

It didn't use to, but people argued pretty damned hard that no undecoded
data should ever be allowed to leave the postgres cluster. And to be fair
it makes writing an output plugin *way* easier. Check
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=contrib/test_decoding/test_decoding.c;hb=xlog-decoding-rebasing-cf4
If you skip over tuple_to_stringinfo(), which is just pretty generic
scaffolding for converting a whole tuple to a string, writing out the
changes in some format by now is pretty damn simple.



I think we will find that replication systems won't be the only
users of this feature.  I have often seen systems that have a logging
requirement for auditing purposes, or that need to log and then reconstruct
the sequence of changes made to a set of tables in order to feed a
downstream application.  Triggers and a journaling table are the
traditional way of doing this, but it should be pretty easy to write a
plugin that accomplishes the same thing with better
performance.  If the reordering stuff wasn't in core this would be much
harder.




How much of this infrastructure is to support replicating DDL changes? IOW,
if we drop that requirement, how much code can we slash?

Unfortunately I don't think too much unless we add in other code that
allows us to check whether the current definition of a table is still
the same as it was back when the tuple was logged.


Any other features or requirements that could be dropped? I think it's clear at 
this stage that
this patch is not going to be committed as it is. If you can reduce it to a
fraction of what it is now, that fraction might have a chance. Otherwise,
it's just going to be pushed to the next commitfest as whole, and we're
going to be having the same doubts and discussions then.

One thing that reduces complexity is to declare the following as
unsupported:
- CREATE TABLE foo(data text);
- DECODE UP TO HERE;
- INSERT INTO foo(data) VALUES(very-long-to-be-externally-toasted-tuple);
- DROP TABLE foo;
- DECODE UP TO HERE;

but thats just a minor thing.

I think what we can do more realistically than to chop off required parts
of changeset extraction is to start applying some of the preliminary
patches independently:
- the relmapper/relfilenode changes + pg_relation_by_filenode(spc,
   relnode) should be independently committable if a bit boring
- allowing walsenders to connect to a database possibly needs an interface 
change
   but otherwise it should be fine to go in independently. It also has
   other potential use-cases, so I think that's fair.
- logging xl_running_xact's more frequently could also be committed
   independently and makes sense independently as it allows a standby to
   enter HS faster if the master is busy
- Introducing InvalidCommandId should be relatively uncontroversial. The
   fact that no invalid value for command ids exists is imo an oversight
- the *Satisfies change could be applied and they are imo ready but
   there's no use-case for it without the rest, so I am not sure whether
   there's a point
- currently not separately available, but we could add wal_level=logical
   independently. There would be no user of it, but it would be partial
   work. That includes the relcache support for keeping track of the
   primary key which already is available separately.


Greetings,

Andres Freund







Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Andres Freund
On 2013-01-24 12:38:25 +0200, Heikki Linnakangas wrote:
> On 24.01.2013 00:30, Andres Freund wrote:
> >Hi,
> >
> >I decided to reply on the patches thread to be able to find this later.
> >
> >On 2013-01-23 22:48:50 +0200, Heikki Linnakangas wrote:
> >>"logical changeset generation v4"
> >>This is a boatload of infrastructure for supporting logical replication, yet
> >>we have no code actually implementing logical replication that would go with
> >>this. The premise of logical replication over trigger-based was that it'd be
> >>faster, yet we cannot asses that without a working implementation. I don't
> >>think this can be committed in this state.
> >
> >It's a fair point that this is a huge amount of code without a user in
> >itself in-core.
> >But the reason it got no user included is because several people
> >explicitly didn't want a user in-core for now but said the first part of
> >this would be to implement the changeset generation as a separate
> >piece. Didn't you actually prefer not to have any users of this in-core
> >yourself?
>
> Yes, I certainly did. But we still need to see the other piece of the puzzle
> to see how this fits with it.

Fair enough. I am also working on a user of this infrastructure but that
doesn't help you very much. Steve Singer seemed to make some stabs at
writing an output plugin as well. Steve, how far did you get there?

> BTW, why does all the transaction reordering stuff have to be in core?

It didn't use to, but people argued pretty damned hard that no undecoded
data should ever be allowed to leave the postgres cluster. And to be fair
it makes writing an output plugin *way* easier. Check
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=contrib/test_decoding/test_decoding.c;hb=xlog-decoding-rebasing-cf4
If you skip over tuple_to_stringinfo(), which is just pretty generic
scaffolding for converting a whole tuple to a string, writing out the
changes in some format by now is pretty damn simple.

> How much of this infrastructure is to support replicating DDL changes? IOW,
> if we drop that requirement, how much code can we slash?

Unfortunately I don't think too much unless we add in other code that
allows us to check whether the current definition of a table is still
the same as it was back when the tuple was logged.

> Any other features or requirements that could be dropped? I think it's clear 
> at this stage that
> this patch is not going to be committed as it is. If you can reduce it to a
> fraction of what it is now, that fraction might have a chance. Otherwise,
> it's just going to be pushed to the next commitfest as whole, and we're
> going to be having the same doubts and discussions then.

One thing that reduces complexity is to declare the following as
unsupported:
- CREATE TABLE foo(data text);
- DECODE UP TO HERE;
- INSERT INTO foo(data) VALUES(very-long-to-be-externally-toasted-tuple);
- DROP TABLE foo;
- DECODE UP TO HERE;

but thats just a minor thing.

I think what we can do more realistically than to chop off required parts
of changeset extraction is to start applying some of the preliminary
patches independently:
- the relmapper/relfilenode changes + pg_relation_by_filenode(spc,
  relnode) should be independently committable if a bit boring
- allowing walsenders to connect to a database possibly needs an interface 
change
  but otherwise it should be fine to go in independently. It also has
  other potential use-cases, so I think that's fair.
- logging xl_running_xact's more frequently could also be committed
  independently and makes sense independently as it allows a standby to
  enter HS faster if the master is busy
- Introducing InvalidCommandId should be relatively uncontroversial. The
  fact that no invalid value for command ids exists is imo an oversight
- the *Satisfies change could be applied and they are imo ready but
  there's no use-case for it without the rest, so I am not sure whether
  there's a point
- currently not separately available, but we could add wal_level=logical
  independently. There would be no user of it, but it would be partial
  work. That includes the relcache support for keeping track of the
  primary key which already is available separately.


Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Heikki Linnakangas
One random thing that caught my eye in the patch, I thought I'd mention
it while I still remember: In heap_delete, you call heap_form_tuple() in 
a critical section. That's a bad idea, because if it runs out of memory 
-> PANIC.
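A minimal sketch of the safe ordering, with hypothetical names
(log_old_key_sketch and its arguments); the point is simply that
heap_form_tuple(), which palloc()s, runs before START_CRIT_SECTION():

#include "postgres.h"
#include "access/htup_details.h"    /* heap_form_tuple(), heap_freetuple() */
#include "miscadmin.h"              /* START_CRIT_SECTION(), END_CRIT_SECTION() */

static void
log_old_key_sketch(TupleDesc tupdesc, Datum *values, bool *isnull)
{
    HeapTuple   oldkey;

    /* may allocate memory, so any out-of-memory failure stays a plain ERROR */
    oldkey = heap_form_tuple(tupdesc, values, isnull);

    START_CRIT_SECTION();
    /* ... assemble the XLogRecData chain referencing oldkey and insert it ... */
    END_CRIT_SECTION();

    heap_freetuple(oldkey);
}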


- Heikki




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Andres Freund
Hi Robert, Hi all,

On 2013-01-23 20:17:04 -0500, Robert Haas wrote:
> On Wed, Jan 23, 2013 at 5:30 PM, Andres Freund  wrote:
> > The only reason the submitted version of logical decoding is
> > comparatively slow is that its xmin update policy is braindamaged,
> > working on that right now.
> 
> I agree.  The thing that scares me about the logical replication stuff
> is not that it might be slow (and if your numbers are to be believed,
> it isn't), but that I suspect it's riddled with bugs and possibly some
> questionable design decisions.  If we commit it and release it, then
> we're going to be stuck maintaining it for a very, very long time.  If
> it turns out to have serious bugs that can't be fixed without a new
> major release, it's going to be a serious black eye for the project.

That's way more along the lines of what I am afraid of than the
performance stuff - but Heikki cited those, so I replied to that.

Note that I didn't say this must, must go in - I just don't think
Heikki's reasoning about why not hit the nail on the head.

> Of course, I have no evidence that that will happen.  But it is a
> really big piece of code, and therefore unless you are superman, it's
> probably got a really large number of bugs.  The scary thing is that
> it is not as if we can say, well, this is a big hunk of code, but it
> doesn't really touch the core of the system, so if it's broken, it'll
> be broken itself, but it won't break anything else.  Rather, this code
> is deeply in bed with WAL, with MVCC, and with the on-disk format of
> tuples, and makes fundamental changes to the first two of those.  You
> agreed with Tom that 9.2 is the buggiest release in recent memory, but
> I think logical replication could easily be an order of magnitude
> worse.

I tried very, very hard to get the basics of the design & interface
solid. Which obviously doesn't mean I am succeeding - luckily not being
superhuman after all ;). And I think that's very much where input is
desperately needed and where I failed to raise enough attention. The
"output plugin" interface followed by the walsender interface is what
needs to be most closely vetted.
Those are the permanent, user/developer exposed UI and the one we should
try to keep as stable as possible.

The output plugin callbacks are defined here:
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/include/replication/output_plugin.h;hb=xlog-decoding-rebasing-cf4
To make it more agnostic of the technology to implement changeset
extraction we possibly should replace the ReorderBuffer(TXN|Change)
structs being passed by something more implementation agnostic.

walsender interface:
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=blob;f=src/backend/replication/repl_gram.y;hb=xlog-decoding-rebasing-cf4
The interesting new commands are:
1) K_INIT_LOGICAL_REPLICATION NAME NAME
2) K_START_LOGICAL_REPLICATION NAME RECPTR plugin_options
3) K_FREE_LOGICAL_REPLICATION NAME

1 & 3 allocate (respectively free) the permanent state associated with
one changeset consumer whereas START_LOGICAL_REPLICATION streams out
changes starting at RECPTR.

Btw, there are currently *no* changes to the wal format at all if
wal_level < logical except that xl_running_xacts are logged more
frequently, which obviously could easily be made conditional. Barring bugs,
of course.
The changes with wal_level>=logical aren't that big either imo:
* heap_insert, heap_update prevent full page writes from removing their
  normal record by using a separate XLogRecData block for the buffer and
  the record
* heap_delete adds more data (the pkey of the tuple) after the unchanged
  xl_heap_delete struct
* On changes to catalog tables (relfilenode, tid, cmin, cmax) are logged.

No changes to mvcc for normal backends at all, unless you count the very
slightly changed *Satisfies interface (getting passed a HeapTuple
instead of HeapTupleHeader).

I am not sure what you're concerned about WRT the on-disk format of the
tuples? We are pretty much nailed down on that due to pg_upgrade'ability
anyway and it could be changed from this patch's POV without a problem,
the output plugin just sees normal HeapTuples? Or are you concerned
about the code extracting them from the xlog records?

So I think the "won't break anything else" argument can be made rather
fairly if the heapam.c changes, which aren't that complex, are vetted
closely.

Now, the discussion about all the code that's active *during* decoding is
something else entirely :/

> You
> agreed with Tom that 9.2 is the buggiest release in recent memory, but
> I think logical replication could easily be an order of magnitude
> worse.

I unfortunately think that not providing more builtin capabilities in
this area also has significant dangers. Imo this is one of the weakest,
or even the weakest, areas of postgres.

I personally have the impression that just about nobody did actual beta
testing of the latest releases, especially 9.2, and t

Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Heikki Linnakangas

On 24.01.2013 00:30, Andres Freund wrote:

Hi,

I decided to reply on the patches thread to be able to find this later.

On 2013-01-23 22:48:50 +0200, Heikki Linnakangas wrote:

"logical changeset generation v4"
This is a boatload of infrastructure for supporting logical replication, yet
we have no code actually implementing logical replication that would go with
this. The premise of logical replication over trigger-based was that it'd be
faster, yet we cannot asses that without a working implementation. I don't
think this can be committed in this state.


It's a fair point that this is a huge amount of code without a user in
itself in-core.
But the reason it got no user included is because several people
explicitly didn't want a user in-core for now but said the first part of
this would be to implement the changeset generation as a separate
piece. Didn't you actually prefer not to have any users of this in-core
yourself?


Yes, I certainly did. But we still need to see the other piece of the 
puzzle to see how this fits with it.


BTW, why does all the transaction reordering stuff have to be in core?

How much of this infrastructure is to support replicating DDL changes? 
IOW, if we drop that requirement, how much code can we slash? Any other 
features or requirements that could be dropped? I think it's clear at 
this stage that this patch is not going to be committed as it is. If you 
can reduce it to a fraction of what it is now, that fraction might have 
a chance. Otherwise, it's just going to be pushed to the next commitfest 
as a whole, and we're going to be having the same doubts and discussions then.


- Heikki




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-24 Thread Simon Riggs
On 24 January 2013 01:17, Robert Haas  wrote:

> I agree.  The thing that scares me about the logical replication stuff
> is not that it might be slow (and if your numbers are to be believed,
> it isn't), but that I suspect it's riddled with bugs and possibly some
> questionable design decisions.  If we commit it and release it, then
> we're going to be stuck maintaining it for a very, very long time.  If
> it turns out to have serious bugs that can't be fixed without a new
> major release, it's going to be a serious black eye for the project.
>
> Of course, I have no evidence that that will happen.


This is a generic argument against applying any invasive patch. I
agree 9.2 had major bugs on release, though that was because of the
invasive nature of some of the changes, even in seemingly minor
patches.

The most invasive and therefore risky changes in this release are
already committed - changes to the way WAL reading and timelines work.
If we don't apply a single additional patch in this CF, we will still
in my opinion have a major requirement for beta testing prior to
release.

The code executed here is isolated to users of the new feature and is
therefore low risk to non-users. Of course there will be bugs.
Everybody understands what a new feature means, and we as a project
aren't exposed to risks from this. A new feature also means
groundbreaking new capabilities, so the balance of high reward and low
risk means this gets my vote to apply. I'm just about to spend some
days giving a final review on it to confirm/refute that opinion in
technical detail.

Code using these features is available and is clearly marked as a full
copyright transfer to PGDG, TPL-licensed. That code is external not by
the author's choice, but at the specific request of the project to make
it that way. I personally will be looking to add code to core over time.
It was useful for everybody that replication solutions started out of
core, but replication is now a core requirement for databases and we
must fully deliver on that thought.

I agree with your concern re: checksums and foreign key locks. FK
locks have had considerable review and support, so I expect that to be
a manageable issue.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-23 Thread Joshua D. Drake


On 01/23/2013 05:17 PM, Robert Haas wrote:


> Of course, I have no evidence that that will happen.  But it is a
> really big piece of code, and therefore unless you are superman, it's
> probably got a really large number of bugs.  The scary thing is that
> it is not as if we can say, well, this is a big hunk of code, but it
> doesn't really touch the core of the system, so if it's broken, it'll
> be broken itself, but it won't break anything else.  Rather, this code
> is deeply in bed with WAL, with MVCC, and with the on-disk format of
> tuples, and makes fundamental changes to the first two of those.  You
> agreed with Tom that 9.2 is the buggiest release in recent memory, but
> I think logical replication could easily be an order of magnitude
> worse.


Command Prompt worked for YEARS to get logical replication right and we 
never got it to the point where I would have been happy submitting it to 
-core.


It behooves .Org to be extremely conservative about this feature. 
Granted, it is a feature we should have had years ago but still. It is 
not a simple thing, it is not an easy thing. It is complicated and 
complex to get correct.




JD




--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-23 Thread Robert Haas
On Wed, Jan 23, 2013 at 5:30 PM, Andres Freund  wrote:
> pgbench upstream:
> tps: 22275.941409
> space overhead: 0%
> pgbench logical-submitted
> tps: 16274.603046
> space overhead: 2.1%
> pgbench logical-HEAD (will submit updated version tomorrow or so):
> tps: 20853.341551
> space overhead: 2.3%
> pgbench single plpgsql trigger (INSERT INTO log(data) VALUES(NEW::text))
> tps: 14101.349535
> space overhead: 369%
>
> Note that in the single trigger case nobody consumed the queue while the
> logical version streamed the changes out and stored them to disk.
>
> Adding a default NOW() or similar to the tables immediately makes
> logical decoding faster by a factor of about 3 in comparison to the
> above trivial trigger.
>
> The only reason the submitted version of logical decoding is
> comparatively slow is that its xmin update policy is braindamaged; I'm
> working on that right now.

I agree.  The thing that scares me about the logical replication stuff
is not that it might be slow (and if your numbers are to be believed,
it isn't), but that I suspect it's riddled with bugs and possibly some
questionable design decisions.  If we commit it and release it, then
we're going to be stuck maintaining it for a very, very long time.  If
it turns out to have serious bugs that can't be fixed without a new
major release, it's going to be a serious black eye for the project.

Of course, I have no evidence that that will happen.  But it is a
really big piece of code, and therefore unless you are superman, it's
probably got a really large number of bugs.  The scary thing is that
it is not as if we can say, well, this is a big hunk of code, but it
doesn't really touch the core of the system, so if it's broken, it'll
be broken itself, but it won't break anything else.  Rather, this code
is deeply in bed with WAL, with MVCC, and with the on-disk format of
tuples, and makes fundamental changes to the first two of those.  You
agreed with Tom that 9.2 is the buggiest release in recent memory, but
I think logical replication could easily be an order of magnitude
worse.

I also have serious concerns about checksums and foreign key locks.
Any single one of those three patches could really inflict
unprecedented damage on our community's reputation for stability and
reliability if they turn out to be seriously buggy, and unfortunately
I don't consider that an unlikely outcome.  I don't know what to do
about it, either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v4 - Heikki's thoughts about the patch state

2013-01-23 Thread Andres Freund
Hi,

I decided to reply on the patches thread to be able to find this later.

On 2013-01-23 22:48:50 +0200, Heikki Linnakangas wrote:
> "logical changeset generation v4"
> This is a boatload of infrastructure for supporting logical replication, yet
> we have no code actually implementing logical replication that would go with
> this. The premise of logical replication over trigger-based was that it'd be
> faster, yet we cannot assess that without a working implementation. I don't
> think this can be committed in this state.

It's a fair point that this is a huge amount of code without an in-core
user of its own.
But the reason no user is included is that several people explicitly
didn't want one in-core for now and said the first part of this would be
to implement the changeset generation as a separate piece. Didn't you
actually prefer not to have any users of this in-core yourself?

Also, while the apply side surely isn't benchmarkable without an
implementation being submitted, the changeset generation itself can very
well be benchmarked.

A very, very ad hoc benchmark:
 -c max_wal_senders=10
 -c max_logical_slots=10 --disabled for anything but logical
 -c wal_level=logical --hot_standby for anything but logical
 -c checkpoint_segments=100
 -c log_checkpoints=on
 -c shared_buffers=512MB
 -c autovacuum=on
 -c log_min_messages=notice
 -c log_line_prefix='[%p %t] '
 -c wal_keep_segments=100
 -c fsync=off
 -c synchronous_commit=off

pgbench -p 5440 -h /tmp -n -M prepared -c 16 -j 16 -T 30 

pgbench upstream:
tps: 22275.941409
space overhead: 0%
pgbench logical-submitted
tps: 16274.603046
space overhead: 2.1%
pgbench logical-HEAD (will submit updated version tomorrow or so):
tps: 20853.341551
space overhead: 2.3%
pgbench single plpgsql trigger (INSERT INTO log(data) VALUES(NEW::text))
tps: 14101.349535
space overhead: 369%

Note that in the single trigger case nobody consumed the queue while the
logical version streamed the changes out and stored them to disk.

Adding a default NOW() or similar to the tables immediately makes
logical decoding faster by a factor of about 3 in comparison to the
above trivial trigger.

The only reason the submitted version of logical decoding is
comparatively slow is that its xmin update policy is braindamaged; I'm
working on that right now.

Greetings,

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

