Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Robert Haas
On Mon, Oct 7, 2013 at 9:32 AM, Andres Freund and...@2ndquadrant.com wrote:
 - RelationIsDoingTimetravel is still a crappy name.  How about
 RelationRequiredForLogicalDecoding?  And maybe the reloption
 treat_as_catalog_table can become required_for_logical_decoding.

 Hm. I don't really like the name: "required" seems to imply that it's
 necessary to turn this on to get data replicated in that relation. How
 about accessible_during_logical_decoding or user_catalog_table? The
 latter would allow us to use it to add checks for user relations used in
 indexes which need a treatment similar to enums.

user_catalog_table is a pretty good description, but should we worry
about the fact that logical replication isn't mentioned in there
anywhere?

In what way do you feel that it's more clear to say *accessible
during* rather than *required for* logical decoding?

I was trying to make the naming consistent; i.e. if we have
RelationRequiredForLogicalDecoding then name the option to match.

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

 How about using the current logic by default but allow to tune it
 additionally with an option like that?

I'm OK with defaulting to the primary key if there is one, but I think
that no other candidate key should be entertained unless the user
configures it.  I think the behavior we get without that will be just
too weird.  We could use the same logic you're proposing here for
CLUSTER, too, but we don't; that's because we've (IMHO, rightly)
decided that the choice of index is too important to be left to
chance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Steve Singer

On 10/07/2013 09:32 AM, Andres Freund wrote:

Todo:
* rename treat_as_catalog_table, after agreeing on the new name
* rename remaining timetravel function names
* restrict SuspendDecodingSnapshots usage to RelationInitPhysicalAddr,
   that ought to be enough.
* add InLogicalDecoding() function.
* throw away older data when reading xl_running_xacts records, to deal
   with immediate shutdowns/crashes


What is your current plan for decoding sequence updates?  Is this 
something that you were going to hold off on supporting till a future 
version? (I know this was discussed a while ago but I don't remember 
where it stands now)


From a Slony point of view this isn't a big deal, I can continue to 
capture sequence changes in sl_seqlog when I create each SYNC event and 
then just replicate the INSERT statements in sl_seqlog via logical 
decoding.  I can see why someone building a replication system not based 
on the concept of a SYNC would have a harder time with this.


I am guessing we would want to pass sequence operations to the plugins 
as we encounter the WAL for them out-of-band of any transaction.   This 
would mean that a set of operations like


begin;
insert into a (id) values(4);
insert into a (id) values(nextval('some_seq'));
commit;

would be replayed on the replicas as
setval('some_seq',100);
begin;
insert into a (id) values (4);
insert into a (id) values (100);
commit;



Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Andres Freund
On 2013-10-08 15:02:39 -0400, Steve Singer wrote:
 On 10/07/2013 09:32 AM, Andres Freund wrote:
 Todo:
 * rename treat_as_catalog_table, after agreeing on the new name
 * rename remaining timetravel function names
 * restrict SuspendDecodingSnapshots usage to RelationInitPhysicalAddr,
that ought to be enough.
 * add InLogicalDecoding() function.
 * throw away older data when reading xl_running_xacts records, to deal
with immediate shutdowns/crashes
 
 What is your current plan for decoding sequence updates?  Is this something
 that you were going to hold off on supporting till a future version? (I know
 this was discussed a while ago but I don't remember where it stands now)

I don't plan to implement it as part of this - the optimizations in
sequences make it really unsuitable for that (nontransactional, allocated
in bulk, ...).
Simon had previously posted about sequence AMs, and I have a prototype
patch that implements that concept (which needs considerable cleanup). I
plan to post about it whenever this is finished.

I think many replication solutions that care about sequences in a
nontrivial way will want to implement their own sequence logic anyway, so I
think that's not a bad path.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Andres Freund
On 2013-10-08 12:20:22 -0400, Robert Haas wrote:
 On Mon, Oct 7, 2013 at 9:32 AM, Andres Freund and...@2ndquadrant.com wrote:
  - RelationIsDoingTimetravel is still a crappy name.  How about
  RelationRequiredForLogicalDecoding?  And maybe the reloption
  treat_as_catalog_table can become required_for_logical_decoding.
 
  Hm. I don't really like the name: "required" seems to imply that it's
  necessary to turn this on to get data replicated in that relation. How
  about accessible_during_logical_decoding or user_catalog_table? The
  latter would allow us to use it to add checks for user relations used in
  indexes which need a treatment similar to enums.
 
 user_catalog_table is a pretty good description, but should we worry
 about the fact that logical replication isn't mentioned in there
 anywhere?

I personally don't worry about it, although I see why somebody could.

 In what way do you feel that it's more clear to say *accessible
 during* rather than *required for* logical decoding?

Because "required for" can easily be understood to mean that you need to
set it if you want a table's changes to be replicated. Which is not the case...

 I was trying to make the naming consistent; i.e. if we have
 RelationRequiredForLogicalDecoding then name the option to match.

Maybe this should be RelationAccessibleInLogicalDecoding() then - that
seems like a better description anyway?

  All in all, it seems to me that we shouldn't try to punt.  Maybe we
  should have something that works like ALTER TABLE name CLUSTER ON
  index_name to configure which index should be used for logical
  replication.  Possibly this same syntax could be used as ALTER
  MATERIALIZED VIEW to set the candidate key for that case.
 
  How about using the current logic by default but allow to tune it
  additionally with an option like that?
 
 I'm OK with defaulting to the primary key if there is one, but I think
 that no other candidate key should be entertained unless the user
 configures it.  I think the behavior we get without that will be just
 too weird.  We could use the same logic you're proposing here for
 CLUSTER, too, but we don't; that's because we've (IMHO, rightly)
 decided that the choice of index is too important to be left to
 chance.

I don't understand why this would be a good path. If you DELETE/UPDATE
and you don't have a primary key you get something that definitely
identifies the row with the current behaviour. It might not be the best
thing, but it sure is better than nothing. E.g. for auditing it's
probably quite sufficient to just use any of the candidate keys if
there (temporarily) is no primary key.
If you implement a replication solution and don't want that behaviour
there, you are free to guard against it there - which is a good thing
to do.
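For illustration (an assumed example, not from the patch set): a unique index over a NOT NULL column forms such a candidate key, so decoding could still identify rows while the table (temporarily) lacks a primary key:

```sql
-- Hypothetical table: no primary key, but "id" is NOT NULL and unique,
-- so the unique index is a candidate key that logical decoding could
-- fall back to when identifying the old version of an updated/deleted row.
CREATE TABLE audit_demo (
    id   integer NOT NULL,
    note text
);
CREATE UNIQUE INDEX audit_demo_id_key ON audit_demo (id);
```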

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-02 Thread Andres Freund
On 2013-10-02 10:56:38 -0400, Robert Haas wrote:
 On Tue, Oct 1, 2013 at 1:56 PM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
  - It seems that HeapSatisfiesHOTandKeyUpdate is now
  HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
  was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
  afraid that something unscalable is happening to this function.  On a
 related note, any overhead added here costs broadly; I'm not sure if
  there's enough to worry about.
 
  Ok, I had to think a bit, but now I remember why I think these changes
  are not really a problem: Neither the addition of keys nor candidate keys
  will add any additional comparisons since the columns compared for
  candidate keys are a subset of the set of key columns which in turn are a
  subset of the columns checked for HOT. Right?
 
 TBH, my primary concern was with maintainability more than performance.
 
 On performance, I think any time you add code it's going to cost
 somehow.  However, it might not be enough to care about.

The easy alternative seems to be to call such a function multiple times
- which I think is prohibitive from a performance POV. More radically we
could simply compute the overall set/bitmap of differing columns and
then use bms_is_subset() to determine whether any index columns/key/ckey
columns changed. But that will do comparisons we don't do today...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-02 Thread Robert Haas
On Wed, Oct 2, 2013 at 11:05 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-10-02 10:56:38 -0400, Robert Haas wrote:
 On Tue, Oct 1, 2013 at 1:56 PM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
  - It seems that HeapSatisfiesHOTandKeyUpdate is now
  HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
  was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
  afraid that something unscalable is happening to this function.  On a
  related note, any overhead added here costs broadly; I'm not sure if
  there's enough to worry about.
 
  Ok, I had to think a bit, but now I remember why I think these changes
   are not really a problem: Neither the addition of keys nor candidate keys
  will add any additional comparisons since the columns compared for
  candidate keys are a subset of the set of key columns which in turn are a
  subset of the columns checked for HOT. Right?

 TBH, my primary concern was with maintainability more than performance.

 On performance, I think any time you add code it's going to cost
 somehow.  However, it might not be enough to care about.

 The easy alternative seems to be to call such a function multiple times
 - which I think is prohibitive from a performance POV. More radically we
 could simply compute the overall set/bitmap of differing columns and
 then use bms_is_subset() to determine whether any index columns/key/ckey
 columns changed. But that will do comparisons we don't do today...

Yeah, there may be no better alternative to doing things as you've
done them here.  It just looks grotty, so I was hoping we had a better
idea.  Maybe not.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-02 Thread Andres Freund
On 2013-10-02 11:06:59 -0400, Robert Haas wrote:
 On Wed, Oct 2, 2013 at 11:05 AM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-10-02 10:56:38 -0400, Robert Haas wrote:
  On Tue, Oct 1, 2013 at 1:56 PM, Andres Freund and...@2ndquadrant.com 
  wrote:
   On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
   - It seems that HeapSatisfiesHOTandKeyUpdate is now
   HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
   was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
   afraid that something unscalable is happening to this function.  On a
   related note, any overhead added here costs broadly; I'm not sure if
   there's enough to worry about.
  
   Ok, I had to think a bit, but now I remember why I think these changes
    are not really a problem: Neither the addition of keys nor candidate keys
   will add any additional comparisons since the columns compared for
   candidate keys are a subset of the set of key columns which in turn are a
   subset of the columns checked for HOT. Right?
 
  TBH, my primary concern was with maintainability more than performance.
 
  On performance, I think any time you add code it's going to cost
  somehow.  However, it might not be enough to care about.
 
  The easy alternative seems to be to call such a function multiple times
  - which I think is prohibitive from a performance POV. More radically we
  could simply compute the overall set/bitmap of differing columns and
  then use bms_is_subset() to determine whether any index columns/key/ckey
  columns changed. But that will do comparisons we don't do today...
 
 Yeah, there may be no better alternative to doing things as you've
 done them here.  It just looks grotty, so I was hoping we had a better
 idea.  Maybe not.

Imo the code now looks easier to understand - which is not saying much -
than in 9.3/HEAD...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-02 Thread Robert Haas
On Tue, Oct 1, 2013 at 1:56 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
 - It seems that HeapSatisfiesHOTandKeyUpdate is now
 HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
 was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
 afraid that something unscalable is happening to this function.  On a
 related note, any overhead added here costs broadly; I'm not sure if
 there's enough to worry about.

 Ok, I had to think a bit, but now I remember why I think these changes
 are not really a problem: Neither the addition of keys nor candidate keys
 will add any additional comparisons since the columns compared for
 candidate keys are a subset of the set of key columns which in turn are a
 subset of the columns checked for HOT. Right?

TBH, my primary concern was with maintainability more than performance.

On performance, I think any time you add code it's going to cost
somehow.  However, it might not be enough to care about.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Robert Haas
Review comments on 0004:

- In heap_insert and heap_multi_insert, please rewrite the following
comment for clarity: "add record for the buffer without actual content
thats removed if fpw is done for that buffer".
- In heap_delete, the assignment to need_tuple_data() need not
separately check RelationNeedsWAL(), as RelationIsLogicallyLogged()
does that.
- It seems that HeapSatisfiesHOTandKeyUpdate is now
HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
afraid that something unscalable is happening to this function.  On a
related note, any overhead added here costs broadly; I'm not sure if
there's enough to worry about.
- MarkCurrentTransactionIdLoggedIfAny has superfluous braces.
- AssignTransactionId changes "Mustn't" to "May not", which seems like
an entirely pointless change.
- You've removed a blank line just before IsSystemRelation; this is an
unnecessary whitespace change.
- Do none of the callers of IsSystemRelation() care about the fact
that you've considerably changed the semantics?
- RelationIsDoingTimetravel is still a crappy name.  How about
RelationRequiredForLogicalDecoding?  And maybe the reloption
treat_as_catalog_table can become required_for_logical_decoding.
- I don't understand the comment in xl_heap_new_cid to the effect that
the combocid isn't needed for decoding.  How not?
- xlogreader.h includes an additional header with no other changes.
Doesn't seem right.
- relcache.h has a cuddled curly brace.

Review comments on 0003:

I have no problem with caching the primary key in the relcache, or
with using that as the default key for logical decoding, but I'm
extremely uncomfortable with the fallback strategy when no primary key
exists.  Choosing any old unique index that happens to present itself
as the primary key feels wrong to me.  The choice of key is
user-visible.  If we say, "update the row with a = 1 to
(a,b,c)=(2,2,2)", that's different than saying "update the row with b =
1 to (a,b,c)=(2,2,2)".  Suppose the previous contents of the target
table are (a,b,c)=(1,2,3) and (a,b,c)=(2,1,4).  You get different
answers depending on which you choose.  I think multi-master
replication just isn't going to work unless the two sides agree on the
key, and I think you'll get strange conflicts unless that key is
chosen by the user according to their business logic.

In single-master replication, being able to pick the key is clearly
not essential for correctness, but it's still desirable, because if
the system picks the wrong key, the change stream will in the end
get the database to the right state, but it may do so by turning one
record into a different one from the user's perspective.

All in all, it seems to me that we shouldn't try to punt.  Maybe we
should have something that works like ALTER TABLE name CLUSTER ON
index_name to configure which index should be used for logical
replication.  Possibly this same syntax could be used as ALTER
MATERIALIZED VIEW to set the candidate key for that case.
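By analogy, such a command might look like the following (a hypothetical syntax sketch for this discussion only; the second statement does not exist in this patch set, and the table/index names are made up):

```sql
-- Existing precedent: CLUSTER records a chosen index in the catalog.
ALTER TABLE orders CLUSTER ON orders_pkey;

-- Hypothetical analogue: let the user pin the candidate-key index
-- that logical decoding should use to identify old tuple versions.
-- ALTER TABLE orders DECODE USING INDEX orders_order_no_key;
```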

What happens if new unique indexes are created or old ones dropped
while logical replication is running?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Andres Freund
Hi,

On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
 - AssignTransactionId changes "Mustn't" to "May not", which seems like
 an entirely pointless change.

It was "Musn't" before ;). I am not sure why I changed it to "May not"
instead of "Mustn't".

 - Do none of the callers of IsSystemRelation() care about the fact
 that you've considerably changed the semantics?

Afaics no. I think the semantics are actually consistent until somebody
manually creates a relation in pg_catalog (using allow_...). And in that
case the new semantics actually seem more useful.

 - RelationIsDoingTimetravel is still a crappy name.  How about
 RelationRequiredForLogicalDecoding?  And maybe the reloption
 treat_as_catalog_table can become required_for_logical_decoding.

Fine with me.

 - I don't understand the comment in xl_heap_new_cid to the effect that
 the combocid isn't needed for decoding.  How not?

We don't use the combocid for anything - since we have the original
cmin/cmax, we can just use those and ignore the value of the combocid itself.

 - xlogreader.h includes an additional header with no other changes.
 Doesn't seem right.

Hm. I seem to remember having a reason for that, but for the heck of it I
can't see it anymore...

 I have no problem with caching the primary key in the relcache, or
 with using that as the default key for logical decoding, but I'm
 extremely uncomfortable with the fallback strategy when no primary key
 exists.  Choosing any old unique index that happens to present itself
 as the primary key feels wrong to me.
 [stuff I don't disagree with]

People lobbied vigorously to allow candidate keys before. I personally
would never want to use anything but an actual primary key for
replication, but there are other usecases than replication.

I think it's going to be the domain of the replication solution to
enforce the presence of primary keys. I.e. they should (be able to) use
event triggers or somesuch to enforce it...
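As a sketch of that enforcement idea (illustrative only: all names are invented, and pg_event_trigger_ddl_commands() is assumed available, which was not yet the case when this was written):

```sql
-- Reject CREATE TABLE commands that leave a table without a primary key.
CREATE FUNCTION require_pk() RETURNS event_trigger AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN
        SELECT * FROM pg_event_trigger_ddl_commands()
        WHERE object_type = 'table'
    LOOP
        IF NOT EXISTS (SELECT 1 FROM pg_index i
                       WHERE i.indrelid = obj.objid AND i.indisprimary) THEN
            RAISE EXCEPTION 'table % has no primary key', obj.object_identity;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER enforce_pk ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE PROCEDURE require_pk();
```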

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

I'd be fine with that, but I am also not particularly interested in it
because I personally don't see much of a usecase.
For replication ISTM the only case where there would be no primary key
is a) initial load b) replacing the primary key by another index.

 What happens if new unique indexes are created or old ones dropped
 while logical replication is running?

Should just work, but I'll make sure the tests cover this.

The output plugin needs to lookup the current index used, and it will
use a consistent syscache state and thus will find the same index.
In bdr the output plugin simply includes the name of the index used in
the replication stream to make sure things are somewhat consistent.

Will fix or think about the rest.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Robert Haas
On Tue, Oct 1, 2013 at 10:31 AM, Andres Freund and...@2ndquadrant.com wrote:
 I have no problem with caching the primary key in the relcache, or
 with using that as the default key for logical decoding, but I'm
 extremely uncomfortable with the fallback strategy when no primary key
 exists.  Choosing any old unique index that happens to present itself
 as the primary key feels wrong to me.
 [stuff I don't disagree with]

 People lobbied vigorously to allow candidate keys before. I personally
 would never want to use anything but an actual primary key for
 replication, but there are other usecases than replication.

I like allowing candidate keys; I just don't like assuming that any
old one we select will be as good as any other.

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

 I'd be fine with that, but I am also not particularly interested in it
 because I personally don't see much of a usecase.
 For replication ISTM the only case where there would be no primary key
 is a) initial load b) replacing the primary key by another index.

The latter is the case I'd be principally concerned about.  I once had
to change the columns that formed the key for a table being used in a
production web application; fortunately, it has traditionally not
mattered much whether a unique index is the primary key, so creating a
new unique index and dropping the old primary key was good enough.
But I would have wanted to control the point at which we changed our
notion of what the candidate key was, I think.

One other thought: you could just log the whole old tuple if there's
no key available.  That would let this work on tables that don't have
indexes.  Replaying the changes might be horribly complex and slow,
but extracting them would work.  If a replication plugin got "old
tuple, new tuple" with no information on keys, it could find *a* tuple
(not all tuples) that matches the old tuple exactly and update each
column to the value from the new tuple.  From a correctness point of view,
there's no issue there; it's all about efficiency.  But the user can
solve that problem whenever they like by indexing the destination
table.  It need not even be a unique index, so long as it's reasonably
selective.
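A replay of such a keyless change might look like this on the destination (hypothetical table and values, matching the (a,b,c) example earlier in the thread; ctid serves only to restrict the update to a single matching row):

```sql
-- Received change: old tuple (1,2,3), new tuple (2,2,2), no key columns.
UPDATE t
   SET a = 2, b = 2, c = 2
 WHERE ctid = (SELECT ctid FROM t
                WHERE a = 1 AND b = 2 AND c = 3  -- match old tuple exactly
                LIMIT 1);
```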

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Andres Freund
On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
 - It seems that HeapSatisfiesHOTandKeyUpdate is now
 HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
 was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
 afraid that something unscalable is happening to this function.  On a
 related note, any overhead added here costs broadly; I'm not sure if
 there's enough to worry about.

Ok, I had to think a bit, but now I remember why I think these changes
are not really a problem: Neither the addition of keys nor candidate keys
will add any additional comparisons since the columns compared for
candidate keys are a subset of the set of key columns which in turn are a
subset of the columns checked for HOT. Right?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-09-30 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 Attached you can find an updated version of the series taking in some of
 the review comments

I don't know whether this is related to the previously-reported
build problems, but when I apply each patch in turn, with make -j4
world && make check-world for each step, I die during compile of
0004.

make[4]: Entering directory `/home/kgrittn/pg/master/src/backend/access/transam'
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g 
-I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2   -c -o xlog.o 
xlog.c -MMD -MP -MF .deps/xlog.Po
xlog.c:44:33: fatal error: replication/logical.h: No such file or directory
compilation terminated.
make[4]: *** [xlog.o] Error 1

I tried maintainer-clean and a new ./configure to see if that would
get me past it; no joy.  I haven't dug further, but if this is not
a known issue I can poke around.  If it is known -- how do I get
past it?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-09-28 Thread Steve Singer

On 09/27/2013 11:44 AM, Andres Freund wrote:

I'm encountering a make error:
Gah. Lastminute changes. Always the same... Updated patch attached.

Greetings,

Andres Freund




I'm still encountering an error in the make.

make clean
.
.make[3]: Entering directory 
`/usr/local/src/postgresql/src/bin/pg_basebackup'

rm -f pg_basebackup pg_receivexlog pg_recvlogical(X) \
pg_basebackup.o pg_receivexlog.o pg_recvlogical.o \
receivelog.o streamutil.o
/bin/sh: 1: Syntax error: ( unexpected
make[3]: *** [clean] Error 2

I had to add quotes to the clean commands to make it work.



Re: [HACKERS] logical changeset generation v6.1

2013-09-28 Thread Alvaro Herrera
Steve Singer wrote:

 I'm still encountering an error in the make.
 
 make clean
 .
 .make[3]: Entering directory
 `/usr/local/src/postgresql/src/bin/pg_basebackup'
 rm -f pg_basebackup pg_receivexlog pg_recvlogical(X) \
 pg_basebackup.o pg_receivexlog.o pg_recvlogical.o \
 receivelog.o streamutil.o
 /bin/sh: 1: Syntax error: ( unexpected
 make[3]: *** [clean] Error 2
 
 I had to add quotes to the clean commands to make it work

The proper fix is to add a $ to the pg_recvlogical(X) in the clean rule --
it should be $(X).

There's another bug in the Makefile: the install target is installing
recvlogical$(X) as receivellog$(X).

-- 
Álvaro Herrera            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v6.1

2013-09-27 Thread Thom Brown
On 27 September 2013 16:14, Andres Freund and...@2ndquadrant.com wrote:
 Hi,

 Attached you can find an updated version of the series taking in some of
 the review comments (the others are queued, not ignored), including:
 * split off things from the big "Introduce wal decoding via ..." patch
 * fix the bug Steve noticed where CreateLogicalDecodingContext was passed
   the wrong is_init = false where it should have been true
 * A number of smaller bugs I noticed while reviewing
 * Renaming of some variables, including guaranteedlyLogged ;)
 * Comment improvements in decode.c
 * rename pg_receivellog to pg_recvlogical

 I'll work more on the other points in the next days, so far they are
 clear of other big stuff.


 0001 wal_decoding: Allow walsender's to connect to a specific database
 - as before

 0002 wal_decoding: Log xl_running_xact's at a higher frequency than 
 checkpoints are done
 - as before

 0003 wal_decoding: Add information about a tables primary key to struct 
 RelationData
 - as before

 0004 wal_decoding: Add wal_level = logical and log data required for logical 
 decoding
 - split-off patch that contains the wal format changes including the
   addition of a new wal_level option

 0005 wal_decoding: Add option to treat additional tables as catalog tables
 - Option to treat user defined table as a catalog table which means it
   can be accessed during logical decoding from an output plugin

 0006 wal_decoding: Introduce wal decoding via catalog timetravel
 - The guts of changeset extraction, without a user interface

 0007 wal_decoding: logical changeset extraction walsender interface
 - split-off patch containing the walsender changes, which allow receiving
   the changeset data in a streaming fashion, supporting sync rep and
   such fancy things

 0008 wal_decoding: Only peg the xmin horizon for catalog tables during 
 logical decoding
 - split-off optimization which reduces the pain 06 introduces by pegging
   the xmin horizon to the smallest of the logical decoding slots. Now
   it's pegged differently for data tables than from catalog tables

 0009 wal_decoding: test_decoding: Add a simple decoding module in contrib
 - Example output plugin which is also used in tests

 0010 wal_decoding: pg_recvlogical: Introduce pg_receivexlog equivalent for 
 logical changes
 - renamed client for the walsender interface

 0011 wal_decoding: test_logical_decoding: Add extension for easier testing of 
 logical decoding
 - SQL SRF to get data from a decoding slot, also used as a vehicle for
   tests

 0012 wal_decoding: design document v2.4 and snapshot building design doc v0.5

I'm encountering a make error:

install  pg_basebackup '/home/thom/Development/psql/bin/pg_basebackup'
install  pg_receivexlog '/home/thom/Development/psql/bin/pg_receivexlog'
install  pg_recvlogical(X) '/home/thom/Development/psql/bin/pg_receivellog'
/bin/dash: 1: Syntax error: ( unexpected
make[3]: *** [install] Error 2
make[3]: Leaving directory
`/home/thom/Development/postgresql/src/bin/pg_basebackup'
make[2]: *** [install-pg_basebackup-recurse] Error 2

Thom


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] logical changeset generation v6.1

2013-09-27 Thread Andres Freund
On 2013-09-27 16:35:53 +0100, Thom Brown wrote:
 On 27 September 2013 16:14, Andres Freund and...@2ndquadrant.com wrote:
  Hi,
 
  Attached you can find an updated version of the series taking in some of
  the review comments (the others are queued, not ignored), including:
  * split off things from the big Introduce wal decoding via ... patch
  * fix the bug Steve noticed where CreateLogicalDecodingContext was passed
    the wrong is_init = false where it should have been true
  * A number of smaller bugs I noticed while reviewing
  * Renaming of some variables, including guaranteedlyLogged ;)
  * Comment improvements in decode.c
  * rename pg_receivellog to pg_recvlogical
 
  I'll work more on the other points in the next days, so far they are
  clear of other big stuff.
 
 
  0001 wal_decoding: Allow walsenders to connect to a specific database
  - as before
 
  0002 wal_decoding: Log xl_running_xact's at a higher frequency than 
  checkpoints are done
  - as before
 
  0003 wal_decoding: Add information about a table's primary key to struct 
  RelationData
  - as before
 
  0004 wal_decoding: Add wal_level = logical and log data required for 
  logical decoding
  - split-off patch that contains the wal format changes including the
    addition of a new wal_level option
 
  0005 wal_decoding: Add option to treat additional tables as catalog tables
  - Option to treat a user-defined table as a catalog table, which means it
    can be accessed during logical decoding from an output plugin
 
  0006 wal_decoding: Introduce wal decoding via catalog timetravel
  - The guts of changeset extraction, without a user interface
 
  0007 wal_decoding: logical changeset extraction walsender interface
  - split-off patch containing the walsender changes, which allow receiving
    the changeset data in a streaming fashion, supporting sync rep and
    such fancy things
 
  0008 wal_decoding: Only peg the xmin horizon for catalog tables during 
  logical decoding
  - split-off optimization which reduces the pain 06 introduces by pegging
    the xmin horizon to the smallest of the logical decoding slots. Now
    it's pegged differently for data tables than for catalog tables
 
  0009 wal_decoding: test_decoding: Add a simple decoding module in contrib
  - Example output plugin which is also used in tests
 
  0010 wal_decoding: pg_recvlogical: Introduce pg_receivexlog equivalent for 
  logical changes
  - renamed client for the walsender interface
 
  0011 wal_decoding: test_logical_decoding: Add extension for easier testing 
  of logical decoding
  - SQL SRF to get data from a decoding slot, also used as a vehicle for
tests
 
  0012 wal_decoding: design document v2.4 and snapshot building design doc 
  v0.5
 
 I'm encountering a make error:

Gah. Last-minute changes. Always the same... Updated patch attached.
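
For anyone hitting the same thing: the "Syntax error: ( unexpected" from
/bin/dash is what a POSIX shell prints when a command word contains an
unquoted parenthesis, consistent with the install rule emitting a literal
"(X)" where the executable-suffix variable $(X) should have been expanded.
A minimal reproduction of the shell behaviour (a hypothetical
reconstruction of the cause, not the actual Makefile text):

```shell
# Feeding the literal command from the failed install rule back through a
# POSIX shell reproduces the error: the unquoted '(' in "pg_recvlogical(X)"
# is a syntax error, so the command never runs and make aborts with a
# non-zero status. Nothing is actually installed.
sh -c 'install pg_recvlogical(X) /tmp/bin/pg_recvlogical' 2>&1
echo "shell exit status: $?"
```

With the variable expanded by make before the shell sees it, the command
word is just "pg_recvlogical" (plus any platform suffix) and parses fine.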

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


0010-wal_decoding-pg_recvlogical-Introduce-pg_receivexlog.patch.gz
Description: application/patch-gzip
