Re: [GENERAL] [HACKERS] pgjdbc logical replication client throwing exception

2017-09-15 Thread Dipesh Dangol
Hi Vladimir,
Yes, initially I was trying withStatusInterval(20, TimeUnit.SECONDS);
that didn't work, so only then did I switch to withStatusInterval(20,
TimeUnit.MILLISECONDS), but that is not working either. I am not aware of
the kind of test cases you are pointing to.
Could you please send me a link for that?

For generating the load, I am using BenchmarkSQL, which generates around
9000 transactions per second. I am trying to run the stream API at the same
time BenchmarkSQL is generating load. If I don't run BenchmarkSQL it works
fine - I mean, when there are only a few transactions to replicate at a
time, it works fine. But when I run it together with BenchmarkSQL and try to
add some logic, like some conditions, then it breaks down in between, most
of the time within a few seconds.

Hi Andres,
I haven't checked the server log yet. Right now I don't have access to my
working environment; I will be able to check that only on Monday. If I find
anything suspicious in the log, I will let you know.

Thank you guys.







On Fri, Sep 15, 2017 at 10:05 PM, Andres Freund  wrote:

> On 2017-09-15 20:00:34 +, Vladimir Sitnikov wrote:
> > ++pgjdbc dev list.
> >
> > >I am facing unusual connection breakdown problem. Here is the simple
> code
> > that I am using to read WAL file:
> >
> > Does it always fail?
> > Can you create a test case? For instance, if you file a pull request with
> > the test, it will get automatically tested across various PG versions, so
> > it would be easier to reason about.
> >
> > Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20
> > millis? I don't think it matters much; however, 20ms seems to be overkill.
>
> Also, have you checked the server log?
>
> - Andres
>


Re: [GENERAL] New interface to PG from Chapel?

2017-09-15 Thread Steve Atkins

> On Sep 15, 2017, at 12:56 PM, Thelonius Buddha  wrote:
> 
> I’m interested to know the level of effort to build a psycopg2-like library 
> for Chapel: http://chapel.cray.com/ Not being much of a programmer myself, 
> does someone have an educated opinion on this?

It looks like you can call C libraries from Chapel, so you can use libpq
directly. Psycopg2 complies with the Python database API, but there doesn't
seem to be anything like that for Chapel, so there's not really an equivalent.

So it depends on how complex a wrapper you want. At the low end, very little
effort - libpq exists; you can call it from Chapel if you just declare the API,
I think.

Cheers,
  Steve





Re: [GENERAL] New interface to PG from Chapel?

2017-09-15 Thread John R Pierce

On 9/15/2017 12:56 PM, Thelonius Buddha wrote:
I’m interested to know the level of effort to build a psycopg2-like 
library for Chapel: http://chapel.cray.com/ Not being much of a 
programmer myself, does someone have an educated opinion on this?



I don't see any standard database interface frameworks to hang a SQL 
library/driver on.


The fact that it's a heavily concurrent/parallel language would likely
mean there are many booby traps en route to successfully using SQL, as you
need to ensure that one PG connection is only ever used by the thread
that created it.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] [HACKERS] pgjdbc logical replication client throwing exception

2017-09-15 Thread Andres Freund
On 2017-09-15 20:00:34 +, Vladimir Sitnikov wrote:
> ++pgjdbc dev list.
> 
> >I am facing unusual connection breakdown problem. Here is the simple code
> that I am using to read WAL file:
> 
> Does it always fail?
> Can you create a test case? For instance, if you file a pull request with
> the test, it will get automatically tested across various PG versions, so
> it would be easier to reason about.
> 
> Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20
> millis? I don't think it matters much; however, 20ms seems to be overkill.

Also, have you checked the server log?

- Andres




Re: [GENERAL] [HACKERS] pgjdbc logical replication client throwing exception

2017-09-15 Thread Vladimir Sitnikov
++pgjdbc dev list.

>I am facing unusual connection breakdown problem. Here is the simple code
that I am using to read WAL file:

Does it always fail?
Can you create a test case? For instance, if you file a pull request with
the test, it will get automatically tested across various PG versions, so
it would be easier to reason about.

Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of 20
millis? I don't think it matters much; however, 20ms seems to be overkill.

Vladimir

пт, 15 сент. 2017 г. в 19:57, Dipesh Dangol :

> hi,
>
> I am trying to implement the logical replication stream API of PostgreSQL.
> I am facing an unusual connection breakdown problem. Here is the simple code
> that I am using to read the WAL:
>
> String url = "jdbc:postgresql://pcnode2:5432/benchmarksql";
> Properties props = new Properties();
> PGProperty.USER.set(props, "benchmarksql");
> PGProperty.PASSWORD.set(props, "benchmarksql");
> PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
> PGProperty.REPLICATION.set(props, "database");
> PGProperty.PREFER_QUERY_MODE.set(props, "simple");
>
> Connection conn = DriverManager.getConnection(url, props);
> PGConnection replConnection = conn.unwrap(PGConnection.class);
>
> PGReplicationStream stream = replConnection.getReplicationAPI()
> .replicationStream().logical()
> .withSlotName("replication_slot3")
> .withSlotOption("include-xids", true)
> .withSlotOption("include-timestamp", "on")
> .withSlotOption("skip-empty-xacts", true)
> .withStatusInterval(20, TimeUnit.MILLISECONDS).start();
> while (true) {
>
> ByteBuffer msg = stream.read();
>
> if (msg == null) {
> TimeUnit.MILLISECONDS.sleep(10L);
> continue;
> }
>
> int offset = msg.arrayOffset();
> byte[] source = msg.array();
> int length = source.length - offset;
> String data = new String(source, offset, length);
>     System.out.println(data);
>
> stream.setAppliedLSN(stream.getLastReceiveLSN());
> stream.setFlushedLSN(stream.getLastReceiveLSN());
>
> }
>
> Even the slightest modification in the code, like commenting out
> System.out.println(data); (which just prints the data to the console),
> causes the connection breakdown problem with the following error message:
>
> org.postgresql.util.PSQLException: Database connection failed when reading
> from copy
> at
> org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1028)
> at
> org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:41)
> at
> org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:155)
> at
> org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:124)
> at
> org.postgresql.core.v3.replication.V3PGReplicationStream.read(V3PGReplicationStream.java:70)
> at Server.main(Server.java:52)
> Caused by: java.net.SocketException: Socket closed
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> at java.net.SocketInputStream.read(SocketInputStream.java:171)
> at java.net.SocketInputStream.read(SocketInputStream.java:141)
> at
> org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
> at
> org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
> at
> org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:191)
> at org.postgresql.core.PGStream.receive(PGStream.java:495)
> at org.postgresql.core.PGStream.receive(PGStream.java:479)
> at
> org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1161)
> at
> org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1026)
> ... 5 more
>
> I am trying to implement some logic, like filtering out unrelated tables
> after reading the log.
> But due to this unusual behavior I couldn't implement it properly.
> Can somebody give me a hint on how to solve this problem?
>
> Thank you.
>


[GENERAL] New interface to PG from Chapel?

2017-09-15 Thread Thelonius Buddha
I’m interested to know the level of effort to build a psycopg2-like library for 
Chapel: http://chapel.cray.com/  Not being much of a 
programmer myself, does someone have an educated opinion on this?

Thank you,
b

Re: [GENERAL] cursor declare

2017-09-15 Thread Tom Lane
Peter Koukoulis  writes:
> This is my first cursor attempt:

> according to docs

> DECLARE
> curs1 refcursor;
> curs2 CURSOR FOR SELECT * FROM tenk1;
> curs3 CURSOR (key integer) FOR SELECT * FROM tenk1 WHERE unique1 = key;

> this should work, but getting error:

> ft_node=# declare cur_test1 CURSOR (key integer) for select * from test1
> where x=key;
> ERROR:  syntax error at or near "("
> LINE 1: declare cur_test1 CURSOR (key integer) for select * from tes...

It looks like you're trying to use the plpgsql syntax for a cursor
variable as part of a DECLARE CURSOR SQL-level command.  They're not
the same thing at all.  In particular, there isn't any concept of
parameters in the SQL DECLARE CURSOR command.
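For comparison, a minimal sketch of both forms against the test1 table from
the original message (x integer) - the first is the SQL-level command, which
takes no parameters, the second is the plpgsql variant, which does:

BEGIN;
DECLARE cur_test1 CURSOR FOR SELECT * FROM test1 WHERE x = 42;
FETCH ALL FROM cur_test1;
COMMIT;

DO $$
DECLARE
    cur_test1 CURSOR (key integer) FOR SELECT * FROM test1 WHERE x = key;
    r record;
BEGIN
    OPEN cur_test1(42);
    LOOP
        FETCH cur_test1 INTO r;
        EXIT WHEN NOT FOUND;
        RAISE NOTICE '%', r;
    END LOOP;
    CLOSE cur_test1;
END $$;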

regards, tom lane




Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Rafal Pietrak


W dniu 15.09.2017 o 20:49, Rob Sargent pisze:
> 
> 
> On 09/15/2017 12:45 PM, Adam Brusselback wrote:
>>
>> I cannot imagine a single postgres index covering more than one
>> physical table. Are you really asking for that?
>>
>>
>> While not available yet, that is a feature that has had discussion
>> before.  Global indexes are what I've seen it called in those
>> discussions.  One of the main use cases is to provide uniqueness
>> across multiple tables, which would also allow things like foreign
>> keys on partitioned tables.
> I had a sneaking suspicion that partitioning would be the use-case, but
> clearly there's at least the 'notion' of a single entity
> 

But (from my particular application perspective) it's quite vital.

Still, pondering ways to restructure my schema, I came to the conclusion
that having an index covering an inheritance hierarchy could help automate
partitioning - as opposed to the current requirement of putting explicit
CHECKs into every child table of that hierarchy and keeping those
checks consistent as the schema grows. I'm not too sure though, as I'm
not familiar with postgres implementation internals.

-R




[GENERAL] cursor declare

2017-09-15 Thread Peter Koukoulis
Hi

This is my first cursor attempt:

according to docs

DECLARE
curs1 refcursor;
curs2 CURSOR FOR SELECT * FROM tenk1;
curs3 CURSOR (key integer) FOR SELECT * FROM tenk1 WHERE unique1 = key;

this should work, but getting error:

ft_node=# declare cur_test1 CURSOR (key integer) for select * from test1
where x=key;
ERROR:  syntax error at or near "("
LINE 1: declare cur_test1 CURSOR (key integer) for select * from tes...

Table is defined as:
psql (9.6.4)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384,
bits: 256, compression: off)
Type "help" for help.

ft_node=# \d+ test1
Table "public.test1"
 Column | Type  | Modifiers | Storage  | Stats target |
Description
+---+---+--+--+-
 x  | integer   |   | plain|  |
 y  | character varying(20) |   | extended |  |


Can somebody please help?

P


Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Christopher Browne
On 15 September 2017 at 14:45, Adam Brusselback
 wrote:
>> I cannot imagine a single postgres index covering more than one physical
>> table. Are you really asking for that?
>
>
> While not available yet, that is a feature that has had discussion before.
> Global indexes are what I've seen it called in those discussions.  One of
> the main use cases is to provide uniqueness across multiple tables, which
> would also allow things like foreign keys on partitioned tables.

It certainly does come up periodically; it seems like a challengingly different
thing to implement (as compared to "regular" indexes), from two perspectives:

a) The present index implementation only needs to reference tuples from one
table, so the tuple references can be direct heap references.

If multiple tables (partitions being the most obvious case) were to be covered,
each index entry would also require indication of which table it comes from.

b) Referencing which index entries can be dropped (e.g. - vacuumed out)
is a fair bit more complex because the index entries depend on multiple
tables.  This adds, um, concurrency complications, if data is being deleted
from multiple tables concurrently.  (Over-simplifying question:  "When
a table that participates in the sharing is vacuumed, does the shared
index get vacuumed?  What if two such tables are vacuumed concurrently?")

This has added up to make it not an easy thing to implement.

To be sure, if a shared index required greatly worsened locking to do
maintenance, or suffered from inability to keep it tidy, that would make the
feature of rather less interest...
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"




Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Rob Sargent



On 09/15/2017 12:45 PM, Adam Brusselback wrote:


I cannot imagine a single postgres index covering more than one
physical table. Are you really asking for that?


While not available yet, that is a feature that has had discussion 
before.  Global indexes are what I've seen it called in those 
discussions.  One of the main use cases is to provide uniqueness 
across multiple tables, which would also allow things like foreign 
keys on partitioned tables.
I had a sneaking suspicion that partitioning would be the use-case, but 
clearly there's at least the 'notion' of a single entity




Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Adam Brusselback
>
> I cannot imagine a single postgres index covering more than one physical
> table. Are you really asking for that?


While not available yet, that is a feature that has had discussion before.
Global indexes are what I've seen it called in those discussions.  One of
the main use cases is to provide uniqueness across multiple tables, which
would also allow things like foreign keys on partitioned tables.


[GENERAL] pgjdbc logical replication client throwing exception

2017-09-15 Thread Dipesh Dangol
hi,

I am trying to implement the logical replication stream API of PostgreSQL.
I am facing an unusual connection breakdown problem. Here is the simple code
that I am using to read the WAL:

String url = "jdbc:postgresql://pcnode2:5432/benchmarksql";
Properties props = new Properties();
PGProperty.USER.set(props, "benchmarksql");
PGProperty.PASSWORD.set(props, "benchmarksql");
PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
PGProperty.REPLICATION.set(props, "database");
PGProperty.PREFER_QUERY_MODE.set(props, "simple");

Connection conn = DriverManager.getConnection(url, props);
PGConnection replConnection = conn.unwrap(PGConnection.class);

PGReplicationStream stream = replConnection.getReplicationAPI()
        .replicationStream().logical()
        .withSlotName("replication_slot3")
        .withSlotOption("include-xids", true)
        .withSlotOption("include-timestamp", "on")
        .withSlotOption("skip-empty-xacts", true)
        .withStatusInterval(20, TimeUnit.MILLISECONDS).start();
while (true) {

    ByteBuffer msg = stream.read();

    if (msg == null) {
        TimeUnit.MILLISECONDS.sleep(10L);
        continue;
    }

    int offset = msg.arrayOffset();
    byte[] source = msg.array();
    int length = source.length - offset;
    String data = new String(source, offset, length);
    System.out.println(data);

    stream.setAppliedLSN(stream.getLastReceiveLSN());
    stream.setFlushedLSN(stream.getLastReceiveLSN());
}
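For reference, the slot named above would have been created on the server
with something along these lines (the test_decoding output plugin is only a
guess, based on the include-xids / include-timestamp / skip-empty-xacts
options used):

SELECT * FROM pg_create_logical_replication_slot('replication_slot3', 'test_decoding');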

Even the slightest modification in the code, like commenting out
System.out.println(data); (which just prints the data to the console),
causes the connection breakdown problem with the following error message:

org.postgresql.util.PSQLException: Database connection failed when reading from copy
    at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1028)
    at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:41)
    at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData(V3PGReplicationStream.java:155)
    at org.postgresql.core.v3.replication.V3PGReplicationStream.readInternal(V3PGReplicationStream.java:124)
    at org.postgresql.core.v3.replication.V3PGReplicationStream.read(V3PGReplicationStream.java:70)
    at Server.main(Server.java:52)
Caused by: java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
    at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
    at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:191)
    at org.postgresql.core.PGStream.receive(PGStream.java:495)
    at org.postgresql.core.PGStream.receive(PGStream.java:479)
    at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1161)
    at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1026)
    ... 5 more

I am trying to implement some logic, like filtering out unrelated tables
after reading the log.
But due to this unusual behavior I couldn't implement it properly.
Can somebody give me a hint on how to solve this problem?

Thank you.


Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Rob Sargent



Isn't this typically handled with an inheritance (parent-children)
setup.  MasterDocument has id, subtype and any common columns (create
date etc) then dependents use the same id from master to complete the
data for a given type.  This is really common in ORM tools.  Not clear
from the description if the operations could be similarly handled
(operation id, operation type as master of 17 dependent
operationSpecifics; there is also the "Activity Model")

I do that, but maybe I do that badly.

Currently I do have 6 levels of inheritance which partition my
document-class space. But I cannot see any way to have a unique index
(unique constraint) to cover all those partitions at once.

This is actually the core of my question: How to make one?


I cannot imagine a single postgres index covering more than one physical 
table. Are you really asking for that? Remember each dependent record 
has an entry in the master so the master guarantees a unique set of keys 
across all the dependents.  Now if you have enough documents you may get 
into partitioning but that's a separate issue.
How you model the work done on (or state transition of) those documents 
is a yet another design, but at least the work flow model can safely, 
consistently refer to the master table.







Re: [GENERAL] Performance with high correlation in group by on PK

2017-09-15 Thread Alban Hertroys
On 8 September 2017 at 00:23, Jeff Janes  wrote:
> On Tue, Aug 29, 2017 at 1:20 AM, Alban Hertroys  wrote:
>>
>> On 28 August 2017 at 21:32, Jeff Janes  wrote:
>> > On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys 
>> > wrote:
>> >>
>> >> Hi all,
>> >>
>> >> It's been a while since I actually got to use PG for anything serious,
>> >> but we're finally doing some experimentation @work now to see if it is
>> >> suitable for our datawarehouse. So far it's been doing well, but there
>> >> is a particular type of query I run into that I expect we will
>> >> frequently use and that's choosing a sequential scan - and I can't
>> >> fathom why.
>> >>
>> >> This is on:
>> >>
>> >>
>> >> The query in question is:
>> >> select "VBAK_MANDT", max("VBAK_VBELN")
>> >>   from staging.etl1_vbak
>> >>  group by "VBAK_MANDT";
>> >>
>> >> This is the header-table for another detail table, and in this case
>> >> we're already seeing a seqscan. The thing is, there are 15M rows in
>> >> the table (disk usage is 15GB), while the PK is on ("VBAK_MANDT",
>> >> "VBAK_VBELN") with very few distinct values for "VBAK_MANDT" (in fact,
>> >> we only have 1 at the moment!).
>> >
>> >
>> > You need an "index skip-scan" or "loose index scan".  PostgreSQL doesn't
>> > currently detect and implement them automatically, but you can use a
>> > recursive CTE to get it to work.  There are some examples at
>> > https://wiki.postgresql.org/wiki/Loose_indexscan
>>
>> Thanks Jeff, that's an interesting approach. It looks very similar to
>> correlated subqueries.
>>
>> Unfortunately, it doesn't seem to help with my issue. The CTE is
>> indeed fast, but when querying the results from the 2nd level of the
>> PK with the CTE results, I'm back at a seqscan on pdw2_vbak again.
>
>
> Something like this works:
>
> create table foo as select trunc(random()*5) as col1, random() as col2 from
> generate_series(1,1);
> create index on foo (col1, col2);
> vacuum analyze foo;
>
>
> with recursive t as (
>select * from (select col1, col2 from foo order by col1 desc, col2 desc
> limit 1) asdfsaf
> union all
>   select
>  (select col1 from foo where foo.col1 < t.col1 order by col1 desc, col2
> desc limit 1) as col1,
>  (select col2 from foo where foo.col1 < t.col1 order by col1 desc, col2
> desc limit 1) as col2
>from t where t.col1 is not null
> )
> select * from t where t is not null;
>
> It is pretty ugly that you need one subquery in the select list for each
> column to be returned.  Maybe someone can find a way to avoid that part.  I
> tried using lateral joins to get around it, but couldn't make that work.
>
> Cheers,
>
> Jeff

Thanks Jeff. That does indeed look ugly.

Since we're dealing with a 4GL language (FOCUS) that translates to
SQL, I don't think we'll attempt your workaround, even though we can
use SQL directly in our reports if we want to.

But, I just remembered giving someone else in a similar situation some
advice on this very list: obviously, when my first primary key field
is not very selective, I should change the order of the fields in the
PK!
But let's first enjoy the weekend.

Alban.
-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [GENERAL] Table partition - parent table use

2017-09-15 Thread Francisco Olarte
Luiz:

1st thing, do not top-quote. It's hard to read and I, personally,
consider it insulting ( not the first time it's done, and for obvious
reasons ).

On Fri, Sep 15, 2017 at 4:24 PM, Luiz Hugo Ronqui  wrote:
> Our usage allows us to insert all rows into the hot partition, since it's a
> rare event to receive data that otherwise would have to be redirected to a
> "colder" partition.
> This way, it's not a problem that the parent table would always be searched.
> In fact it would guarantee that these bits, received "out of time", would get
> accounted.

The problem of always being searched is not for recent rows, but for
historic. Imagine hot=2016-7, warm=2013-5 and cold=rest

If hot=parent and you make a query for 2014 data it's going to search
hot and warm, not just warm. If hot!=parent it is going to search
parent and warm (and use a seq-scan in parent in the normal case, as
stats show it as empty, and it will be if things are going well).

> The number of partitions, especially the "cold" ones, is not a hard limit... 
> we can expand it with time.

I know, my recommendation was to make them in such a way that once a
row lands in a historic partition it never moves if you use more than
one (i.e., use things like cold-200x, cold-201x, not cold-prev-decade,
cold-two-decades-ago).

> The idea includes schemas and tablespaces, along with its management 
> benefits,  specifically for these partitioned data. One of our current 
> problems is exactly the time it takes for backup and restore operations. I 
> did not mention it before because of the size of the original message.

We normally do the schema trick, and as 90% of data is in historic
schema, we skip most of it.

Francisco Olarte.




RES: [GENERAL] Table partition - parent table use

2017-09-15 Thread Luiz Hugo Ronqui
Thanks for your tips!


Our usage allows us to insert all rows into the hot partition, since it's a rare
event to receive data that otherwise would have to be redirected to a "colder"
partition.

This way, it's not a problem that the parent table would always be searched. In
fact it would guarantee that these bits, received "out of time", would get
accounted.

The number of partitions, especially the "cold" ones, is not a hard limit... we 
can expand it with time.

The idea includes schemas and tablespaces, along with their management benefits,
specifically for this partitioned data. One of our current problems is exactly
the time it takes for backup and restore operations. I did not mention it
before because of the size of the original message.


Luiz Hugo Ronqui


-----Original Message-----
From: pgsql-general-owner+m233282=lronqui=tce.sp.gov...@postgresql.org 
[mailto:pgsql-general-owner+m233282=lronqui=tce.sp.gov...@postgresql.org] On
behalf of Francisco Olarte
Sent: Friday, September 15, 2017 08:37
To: Luiz Hugo Ronqui
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Table partition - parent table use

Hi Luiz:

On Thu, Sep 14, 2017 at 11:06 PM, Luiz Hugo Ronqui
 wrote:
...
> We have a database with data being inserted for almost 10 years and no
> policy defined to get rid of old records, even though we mostly use only the
> current and last year's data. Some etl processes run on older data from time
> to time.
> After this time, some tables have grown to a point where even their indexes
> are bigger than the server's available RAM. Because some queries were
> getting slower, despite optimizations, we started experimenting with table
> partitioning.
> The idea was creating 3 partitions for each table of interest: the "hot",
> the "warm" and the "cold". The first would have the last 2 years. The
> second, data from 3 to 5 years and the third, all the rest.

I would consider using more than one cold partition, and maybe moving
them AND warm to a different schema. Maybe 5 years in each, something
like cold-2000-2009, cold-2010-2019. You can update the constraints
adequately, but the thing is you periodically update your constraints
in the hot, warm and last cold, moving data among them appropriately,
then do a really good backup of warm and colds and you can forget
about them in daily backups, and also if you want to drop "stale" in
the future, or un-inherit them to speed up queries, it is easier to
do.

...
> Then one thing came to mind: Why not to use the "parent" table as the hot
> one, without doing any redirection at all? That way we could:
> 1)  Keep the referential integrity of the current model untouched;
> 2)  Dismiss the trigger redirection along with the hybernate issue;
> 3)  Have a much smaller dataset to use in most of our queries;
> 4)  Have all the historic data when needed

You can do it, but remember the parent normally does not have constraints,
so it is always scanned (quickly, as it is known to be empty). Also, selecting
from ONLY the parent, which is useful to detect when you are missing partitions,
won't work in this case. But you can test it.

...
> I have run some basic tests and all seemed to work as expected, but since I
> couldn't find any use of  the parent table besides  being the head of the
> hierarchy, I am affraid of doing something that could stop  because it wasnt
> designed to work like that to begin with...

Seems fine to me. I never used that because I normally use special
insertion programs for my partitioned tables (my usage allows that),
so I always insert directly into the appropriate partition (so I just
use inheritance, no triggers or rules).

Francisco Olarte.




[GENERAL] 10 beta 4 foreign table partition check constraint broken?

2017-09-15 Thread Paul Jones
Is this a bug in Postgres 10b4?  Looks like neither partition ranges
nor check constraints are honored in 10b4 when inserting into
partitions that are foreign tables.

Here is a nearly shovel-ready example.  Just replace with your
servers/passwords.

-- --
-- Server 1
-- --

CREATE DATABASE cluster;
\c cluster

CREATE EXTENSION postgres_fdw;

CREATE SERVER server2 FOREIGN DATA WRAPPER postgres_fdw
OPTIONS(host 'server2', dbname 'cluster');

CREATE USER MAPPING FOR postgres SERVER server2
OPTIONS(user 'postgres', password 'pgpassword');

CREATE TABLE foo (
    id      INT NOT NULL,
    name    TEXT
) PARTITION BY RANGE (id);

CREATE FOREIGN TABLE foo_1
PARTITION OF foo
FOR VALUES FROM (0) TO (1)
SERVER server2 OPTIONS (table_name 'foo_1');

-- --
-- Server 2
-- --

CREATE DATABASE cluster;
\c cluster

CREATE TABLE foo_1 (
    id      INT NOT NULL,
    name    TEXT
);

-- --
-- Server 1
-- --

INSERT INTO foo_1 VALUES(0,'funky bug'),
(100, 'wiggle frank'),
(15000, 'boegger snot');

SELECT * FROM foo;

DROP FOREIGN TABLE foo_1;

CREATE FOREIGN TABLE foo_1
PARTITION OF foo
(id CONSTRAINT f1 CHECK ((id >= 0) AND (id < 1)))
FOR VALUES FROM (0) TO (1)
SERVER server2 OPTIONS (table_name 'foo_1');

INSERT INTO foo_1 VALUES(0,'funky bug'),
(100, 'wiggle frank'),
(15000, 'boegger snot');

SELECT * FROM foo;






Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Rafal Pietrak


W dniu 15.09.2017 o 13:36, Alban Hertroys pisze:
> On 15 September 2017 at 11:03, Rafal Pietrak  wrote:
> 

[-]
> 
> For example, if we define:
> create table master_table (
> year int
> ,   month int
> ,   example text
> ,   primary key (year, month)
> );
> 
> create table child2016_table () inherits (master_table);
> 
> alter table child2016_table add constraint child2016_year_chk check (year = 2016);
> alter table child2016_table add constraint child2016_pk primary key (year, month);
> 
> create table child2017_table () inherits (master_table);
> 
> alter table child2017_table add constraint child2017_year_chk check (year = 2017);
> alter table child2017_table add constraint child2017_pk primary key (year, month);
> 
> In above, the three separate primary keys are guaranteed to contain
> distinct ranges of year - provided that we forbid any records to go
> directly into the master table or that those records do not have years
> already covered by one of the child tables.
> 
> Perhaps you can apply this concept to your problem?
> 

I do it exactly this way.

The problem is that the documents undergo "postprocessing" - 17 other
tables "would" describe those and MUST keep track of what's done
and to which document, which is done by an FK into the relevant document table.

Having this partitioning, instead of having those 17 "process-tables" I
have to create 17 * 12 = 204 tables to be able to implement those FKs (ID
+ child-selector); whereas if only I could avoid that "child-selector"
(like YEAR in your example), it would let me reduce my schema roughly
10-fold (hmm, not exactly - 12+17 = 29 tables vs. 204 tables - but close
enough).

The complexity of this design has already made me stop adding
functionality ... so I'm looking for means of reducing it. An obvious
candidate would be something like a "global primary key" over all the
partitions of the master document table.

But I understand, no such thing exists.

Thanks anyway,

-R





Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Kenneth Marshall
> >
> Hmm...
> 
> 2 4 6 8 10
> 
> 3 6
> 
> 5 10
> 
> Adding a prime as an increment is not sufficient to guarantee uniqueness!
> 
> You have to ensure that the product of the 2 smallest primes you use
> is greater than any number you'd need to generate.  With such large
> primes you may run out of sequence numbers faster than you would
> like!
> 
> 
> Cheers,
> Gavin

Yes, you are right. That would not help.

Regards,
Ken




Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Justin Pryzby
On Fri, Sep 15, 2017 at 06:49:06AM -0500, Ron Johnson wrote:
> On 09/15/2017 06:34 AM, Justin Pryzby wrote:
> [snip]
> >But you might consider: 1) looping around tables/indices rather than "REINDEX
> >DATABASE", and then setting a statement_timeout=9s for each REINDEX 
> >statement;
> 
> Is there a way to do that within psql?  (Doing it from bash is trivial, but
> I'd rather do it from SQL.)

Not that I know of, but it wouldn't help me, since our script also calls pg_repack
(for indices on system and some other tables), and also has logic to handle
historic partition tables differently.

Justin




Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Ron Johnson

On 09/15/2017 06:34 AM, Justin Pryzby wrote:
[snip]

But you might consider: 1) looping around tables/indices rather than "REINDEX
DATABASE", and then setting a statement_timeout=9s for each REINDEX statement;


Is there a way to do that within psql?  (Doing it from bash is trivial, but 
I'd rather do it from SQL.)


--
World Peace Through Nuclear Pacification





Re: [GENERAL] Table partition - parent table use

2017-09-15 Thread Francisco Olarte
Hi Luiz:

On Thu, Sep 14, 2017 at 11:06 PM, Luiz Hugo Ronqui
 wrote:
...
> We have a database with data being inserted for almost 10 years and no
> policy defined to get rid of old records, even though we mostly use only the
> current and last year's data. Some etl processes run on older data from time
> to time.
> After this time, some tables have grown to a point where even their indexes
> are bigger than the server's available RAM. Because some queries were
> getting slower, despite optimizations, we started experimenting with table
> partitioning.
> The idea was creating 3 partitions for each table of interest: the "hot",
> the "warm" and the "cold". The first would have the last 2 years. The
> second, data from 3 to 5 years and the third, all the rest.

I would consider using more than one cold partition, and maybe moving
them AND warm to a different schema. Maybe 5 years in each, something
like cold-2000-2009, cold-2010-2019. You can update the constraints
adequately, but the thing is you periodically update your constraints
in the hot, warm and last cold, moving data among them appropriately,
then do a really good backup of warm and colds and you can forget
about them in daily backups, and also if you want to drop "stale" in
the future, or un-inherit them to speed up queries, it is easier to
do.
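A minimal sketch of that layout with classic inheritance - the events table
and event_date column are made-up names, just for illustration:

CREATE TABLE events_cold_2000_2009 (
    CHECK (event_date >= DATE '2000-01-01' AND event_date < DATE '2010-01-01')
) INHERITS (events);

CREATE TABLE events_cold_2010_2019 (
    CHECK (event_date >= DATE '2010-01-01' AND event_date < DATE '2020-01-01')
) INHERITS (events);

-- later, to stop queries from scanning a stale decade (or before dropping it):
ALTER TABLE events_cold_2000_2009 NO INHERIT events;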

...
> Then one thing came to mind: Why not to use the "parent" table as the hot
> one, without doing any redirection at all? That way we could:
> 1)  Keep the referential integrity of the current model untouched;
> 2)  Dismiss the trigger redirection along with the hybernate issue;
> 3)  Have a much smaller dataset to use in most of our queries;
> 4)  Have all the historic data when needed

You can do it, but remember the parent normally does not have constraints,
so it is always scanned (quickly, as it is known to be empty). Also, selecting
from ONLY the parent, which is useful to detect when you are missing partitions,
won't work in this case. But you can test it.

...
> I have run some basic tests and all seemed to work as expected, but since I
> couldn't find any use of  the parent table besides  being the head of the
> hierarchy, I am affraid of doing something that could stop  because it wasnt
> designed to work like that to begin with...

Seems fine to me. I never used that because I normally use special
insertion programs for my partitioned tables (my usage allows that),
so I always insert directly into the appropriate partition (so I just
use inheritance, no triggers or rules).

Francisco Olarte.




Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Alban Hertroys
On 15 September 2017 at 11:03, Rafal Pietrak  wrote:

>> Isn't this typically handled with an inheritance (parent-children)
>> setup.  MasterDocument has id, subtype and any common columns (create
>> date etc) then dependents use the same id from master to complete the
>> data for a given type.  This is really common in ORM tools.  Not clear
>> from the description if the operations could be similarly handled
>> (operation id, operation type as master of 17 dependent
>> operationSpecifics; there is also the "Activity Model")
>
> I do that, but maybe I do that badly.
>
> Currently I do have 6 levels of inheritance which partition my
> document-class space. But I cannot see any way to have a unique index
> (unique constraint) to cover all those partitions at once.
>
> This is actually the core of my question: How to make one?
>
> So far I only have separate unique indexes on all those 12 child-table
> document-class subtables. Is there a way to combine those indexes? I
> experimented, and an index created on the parent table does not cover the
> content of child/inheriting tables. If it did, that would solve the problem.
>
> ... or have I just misinterpreted your MasterDocument suggestion?

With table partitioning, provided the partitions are based on the
value(s) of a particular field that is part of the primary key of the
master table, the combination of the child tables' primary keys and the
partitions' check constraints on the partitioning field guarantees that
records across the partitioned tables are unique.

For example, if we define:
create table master_table (
year int
,   month int
,   example text
,   primary key (year, month)
);

create table child2016_table () inherits (master_table);

alter table child2016_table add constraint child2016_year_chk check (year = 2016);
alter table child2016_table add constraint child2016_pk primary key (year, month);

create table child2017_table () inherits (master_table);

alter table child2017_table add constraint child2017_year_chk check (year = 2017);
alter table child2017_table add constraint child2017_pk primary key (year, month);

In the above, the three separate primary keys are guaranteed to contain
distinct ranges of year - provided that we forbid any records from going
directly into the master table, or that those records do not have years
already covered by one of the child tables.
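One hedged way to enforce the "no records directly into the master table"
part, assuming the master table itself is empty:

ALTER TABLE master_table
    ADD CONSTRAINT master_table_no_rows CHECK (false) NO INHERIT;

The NO INHERIT keeps the constraint off the children, so inserts into the
partitions still work, while any insert into master_table itself is rejected.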

Perhaps you can apply this concept to your problem?

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Justin Pryzby
On Fri, Sep 15, 2017 at 12:25:58PM +0200, s19n wrote:

> 1. with "\set AUTOCOMMIT off" in my psqlrc, issue a
> "SELECT * FROM pg_stat_activity;" and leave it there
This probably obtains a read lock on some shared, system tables/indices..

> 2. in a different connection, issue a database REINDEX (of any database
> different from 'postgres')
.. and this waits to get an EXCLUSIVE lock on those tables/inds, but has to
wait on the read lock;

> * Any further attempt to create new connections to the server, to any
> database, does not succeed and leads to a "FATAL: canceling authentication
> due to timeout" in the server logs.
.. and logins are apparently waiting on the reindex (itself waiting to get
exclusive) lock.

You can look at the locks (granted vs waiting) in SELECT * FROM pg_locks

But you might consider: 1) looping around tables/indices rather than "REINDEX
DATABASE", and then setting a statement_timeout=9s for each REINDEX statement;
and/or, 2) use pg_repack, but I don't think it handles system tables.
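A rough sketch of option 1, generating one REINDEX per table and running them
with psql's \gexec (available from psql 9.6 on); adjust the schema filter and
the timeout to taste:

SET statement_timeout = '9s';
SELECT format('REINDEX TABLE %I.%I;', schemaname, tablename)
  FROM pg_tables
 WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
 ORDER BY 1
\gexec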

Justin




Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Michael Paquier
On Fri, Sep 15, 2017 at 7:25 PM, s19n  wrote:
> Is this expected? I am failing to see the relation between an idle
> transaction in the 'postgres' database, a reindex operation and subsequent
> logins.

REINDEX DATABASE processes system indexes as well, and takes an
exclusive lock on them in order to do so. The lock held by the
transaction of session 1 conflicts with what REINDEX tries to take, and
REINDEX can proceed only when the index is free from any lookups. The
reason why logins are not possible is that they were likely waiting for
a lock on an index of pg_authid, which is looked up at
authentication.
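While the REINDEX is stuck, something along these lines should show the
ungranted lock it is waiting for:

SELECT locktype, relation::regclass, mode, granted, pid
  FROM pg_locks
 WHERE NOT granted;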
-- 
Michael




[GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread s19n


Hello,

I am experiencing the following while using PostgreSQL 
9.6.3-1.pgdg16.04+1 (deb package from official repository):


1. with "\set AUTOCOMMIT off" in my psqlrc, issue a
"SELECT * FROM pg_stat_activity;" and leave it there

2. in a different connection, issue a database REINDEX (of any database 
different from 'postgres')


* Any further attempt to create new connections to the server, to any 
database, does not succeed and leads to a "FATAL: canceling authentication 
due to timeout" in the server logs.


* The REINDEX doesn't actually complete unless I end the transaction 
started at point 1.


Is this expected? I am failing to see the relation between an idle 
transaction in the 'postgres' database, a reindex operation and 
subsequent logins.


Thank you very much for your attention,
Best regards

--
https://s19n.net




Re: [GENERAL] looking for a globally unique row ID

2017-09-15 Thread Rafal Pietrak


W dniu 14.09.2017 o 23:15, Rob Sargent pisze:
> 
> 
> On 09/14/2017 02:39 PM, Rafal Pietrak wrote:
>>
>> W dniu 14.09.2017 o 19:30, Rob Sargent pisze:
>>>
>>> On 09/14/2017 11:11 AM, Rafal Pietrak wrote:

[--]
>>
>> Throwing actual numbers: 12 basic classes of documents; 17 tables
>> registering various operations document may undergo during its lifetime.
>> Variant (2) above make it 12*17 = 204 tables, which I'm currently
>> maintaining and it's too much. With variant (1) I simply wasn't able
>> to effectively keep document attributes consistent.
>>
>> Thus I'm searching for tools (paradigms/sql-idioms) that would fit the
>> problem.
> 
> Isn't this typically handled with an inheritance (parent-children)
> setup.  MasterDocument has id, subtype and any common columns (create
> date etc) then dependents use the same id from master to complete the
> data for a given type.  This is really common in ORM tools.  Not clear
> from the description if the operations could be similarly handled
> (operation id, operation type as master of 17 dependent
> operationSpecifics; there is also the "Activity Model")

I do that, but maybe I do that badly.

Currently I do have 6 levels of inheritance which partition my
document-class space. But I cannot see any way to have a unique index
(unique constraint) to cover all those partitions at once.

This is actually the core of my question: How to make one?

So far I only have separate unique indexes on all those 12 child-table
document-class subtables. Is there a way to combine those indexes? I
experimented, and an index created on the parent table does not cover the
content of child/inheriting tables. If it did, that would solve the problem.

... or have I just misinterpreted your MasterDocument suggestion?


-R




Re: [GENERAL] pg_rewind copy so much data

2017-09-15 Thread Michael Paquier
On Fri, Sep 15, 2017 at 2:57 PM, Hung Phan  wrote:
> [...]

Please do not top-post. This breaks the logic of the thread.

> I use ver 9.5.3.

You should update to the latest minor version available; there have
been quite a few bug fixes in Postgres since 9.5.3.

> I have just run again and get the debug log. It is very long so I attach in 
> mail
In this case the LSN where the promoted standby and the rewound node
diverged is clear:
servers diverged at WAL position 2/D69820C8 on timeline 12
rewinding from last common checkpoint at 2/D6982058 on timeline 12
The last segment on timeline 13 is 000D000200E0, which may
be a recycled segment, still that's up to 160MB worth of data...

And from what I can see a lot of the data comes from WAL segments from
past timelines, close to 1.3GB. The rest is more or less completely
coming from relation files from a different tablespace than the
default, tables with OID 16665 and 16683 covering the largest part of
it. What is strange to begin with is that there are many segments from
past timelines. Those should not stick around.

Could you check if the relfilenodes 16665 and 16683 exist on the source
server but do *not* exist on the target server? When issuing a rewind,
a relation file that exists on both has no action taken on it (see
process_source_file in filemap.c), and only a set of blocks is
registered. Based on what comes from your log file, the file is being
copied from the source to the target, not its blocks:
pg_tblspc/16386/PG_9.5_201510051/16387/16665 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/16665.1 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/16665_fsm (COPY)
And this leads to an increase of the data included in what is rewound.
So aren't you, for example, re-creating a new database after the standby
is promoted, or something like that?
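One way to run that check on both nodes, connected to the database that owns
those files (OID 16387 according to the paths), assuming 16665 and 16683 are
indeed relfilenodes:

SELECT oid::regclass AS relation, relfilenode
  FROM pg_class
 WHERE relfilenode IN (16665, 16683);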
-- 
Michael




Re: [GENERAL] pg_rewind copy so much data

2017-09-15 Thread Hung Phan
I use version 9.5.3. I have just run it again and got the debug log. It seems
that I cannot send such a big file to the PostgreSQL list, so I copy some parts
here:

fetched file "global/pg_control", length 8192
fetched file "pg_xlog/000D.history", length 475
servers diverged at WAL position 2/D69820C8 on timeline 12
rewinding from last common checkpoint at 2/D6982058 on timeline 12
backup_label.old (COPY)
base/1/1247_fsm (COPY)
base/1/1247_vm (COPY)
base/1/1249_fsm (COPY)
base/1/1249_vm (COPY)
base/1/1255_fsm (COPY)
base/1/1255_vm (COPY)
base/1/1259_fsm (COPY)
base/1/1259_vm (COPY)
base/1/13125_fsm (COPY)
base/1/13125_vm (COPY)
base/1/13130_fsm (COPY)
base/1/13130_vm (COPY)
base/1/13135_fsm (COPY)
base/1/13135_vm (COPY)
base/1/13140_fsm (COPY)
base/1/13140_vm (COPY)
base/1/13145_fsm (COPY)
base/1/13145_vm (COPY)
base/1/13150_fsm (COPY)
base/1/13150_vm (COPY)
base/1/1417_vm (COPY)
base/1/1418_vm (COPY)
base/1/2328_vm (COPY)
base/1/2336_vm (COPY)
base/1/2600_fsm (COPY)


global/3592_vm (COPY)
global/4060_vm (COPY)
global/6000_vm (COPY)
global/pg_control (COPY)
global/pg_filenode.map (COPY)
global/pg_internal.init (COPY)
pg_clog/ (COPY)
pg_clog/0001 (COPY)
pg_clog/0002 (COPY)
pg_hba.conf (COPY)
pg_ident.conf (COPY)
pg_multixact/members/ (COPY)
pg_multixact/offsets/ (COPY)
pg_notify/ (COPY)
pg_stat_tmp/db_0.stat (COPY)
pg_stat_tmp/db_13294.stat (COPY)
pg_stat_tmp/global.stat (COPY)
pg_subtrans/0025 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/112 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/113 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1247 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1247_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1247_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1249 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1249_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1249_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1255 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1255_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1255_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1259 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1259_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1259_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13125 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13125_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13125_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13127 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13129 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13130 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13130_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13130_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13132 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13134 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13135 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13135_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13135_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13137 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13139 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13140 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13140_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13140_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13142 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13144 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13145 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13145_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13145_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13147 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13149 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13150 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13150_fsm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13150_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13152 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13154 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13155 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13157 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/13159 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1417 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1417_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1418 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/1418_vm (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/16401 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/16422 (COPY)
pg_tblspc/16386/PG_9.5_201510051/16387/16440 (COPY)



received chunk for file "base/1/2603_fsm", offset 0, size 24576
received chunk for file "base/1/2603_vm", offset 0, size 8192
received chunk for file "base/1/2605_fsm", offset 0, size 24576
received chunk for file "base/1/2605_vm", offset 0, size 8192
received chunk for file "base/1/2606_fsm", offset 0, size 24576
received chunk for file "base/1/2606_vm", offset 0, size 8192
received chunk for file "base/1/2607_fsm", offset 0, size 24576
received chunk for file "base/1/2607_vm", offset 0, size 8192
received chunk for file "base/1/2608_fsm", offset 0, size 24576
received chunk for file "base/1/2608_vm", offset 0, size 8192
received chunk for file "base/1/2609_fsm", offset 0, size 24576