Re: [HACKERS] Choosing parallel_degree

2016-04-11 Thread tushar

On 04/08/2016 08:53 PM, Robert Haas wrote:

On Fri, Apr 8, 2016 at 1:22 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:

Other than that, patch looks good and I have marked it as Ready For
Committer.  Hope, we get this for 9.6.

Committed.  I think this is likely to make parallel query
significantly more usable in 9.6.


While testing, I observed a couple of things:

Case 1: Parallel seq scan is not chosen when parallel_degree is set to 0

postgres=# create table fok2(n int) with (parallel_degree=0);
CREATE TABLE
postgres=# insert into fok2 values (generate_series(1,100)); analyze fok2; vacuum fok2;

INSERT 0 100
ANALYZE
VACUUM
postgres=# set max_parallel_degree =5;
SET
postgres=# explain analyze verbose   select * from fok2  where n<=10;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Seq Scan on public.fok2  (cost=0.00..16925.00 rows=100 width=4) (actual time=0.027..217.882 rows=10 loops=1)
   Output: n
   Filter: (fok2.n <= 10)
   Rows Removed by Filter: 90
 Planning time: 0.084 ms
 Execution time: 217.935 ms
(6 rows)

I am assuming parallel_degree=0 is the same as not setting it at all, i.e.

create table fok2(n int) with (parallel_degree=0);  =  create table fok2(n int);

so in this case it should have chosen a parallel seq scan.
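For comparison, a minimal sketch of the two cases (hedged: it may be that parallel_degree=0 is deliberately defined as "never parallelize this table" rather than "use the default", which would explain the serial plan above; table name fok3 is illustrative):

```sql
-- Table with the reloption set to 0: serial plan observed above
create table fok2(n int) with (parallel_degree=0);

-- Table without the reloption: the planner is free to pick a parallel seq scan
create table fok3(n int);
```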

Case 2: The total number of workers does not match the per-worker information -

postgres=# alter table fok set (parallel_degree=10);
ALTER TABLE
postgres=# set max_parallel_degree =9;
SET
postgres=# explain analyze verbose   select * from fok  where n<=1;
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..6823.89 rows=100 width=4) (actual time=0.621..107.755 rows=1 loops=1)
   Output: n
   Number of Workers: 9
   ->  Parallel Seq Scan on public.fok  (cost=0.00..5814.00 rows=11 width=4) (actual time=83.382..95.157 rows=0 loops=9)
         Output: n
         Filter: (fok.n <= 1)
         Rows Removed by Filter: 11
         Worker 0: actual time=82.181..82.181 rows=0 loops=1
         Worker 1: actual time=97.236..97.236 rows=0 loops=1
         Worker 2: actual time=93.586..93.586 rows=0 loops=1
         Worker 3: actual time=94.159..94.159 rows=0 loops=1
         Worker 4: actual time=88.459..88.459 rows=0 loops=1
         Worker 5: actual time=90.245..90.245 rows=0 loops=1
         Worker 6: actual time=101.577..101.577 rows=0 loops=1
         Worker 7: actual time=102.955..102.955 rows=0 loops=1
 Planning time: 0.119 ms
 Execution time: 108.585 ms
(17 rows)

Expected: information for Worker 8 as well, and also loops=10 (including the master).


Case 3: Getting an error if we set the maximum value for max_parallel_degree as well as for parallel_degree.


postgres=# create table abd(n int) with (parallel_degree=262144);
ERROR:  value 262144 out of bounds for option "parallel_degree"
DETAIL:  Valid values are between "0" and "262143".

postgres=# create table abd(n int) with (parallel_degree=262143);
CREATE TABLE
postgres=# insert into abd values (generate_series(1,100)); analyze abd; vacuum abd;

INSERT 0 100
ANALYZE

postgres=# set max_parallel_degree =262;
ERROR:  262 is outside the valid range for parameter "max_parallel_degree" (0 .. 262143)


postgres=# set max_parallel_degree =262143;
SET
postgres=#

postgres=# explain analyze verbose select * from abd  where n<=1;
ERROR:  requested shared memory size overflows size_t

If we remove the ANALYZE keyword, the query runs successfully.

Expected: wouldn't it be better to throw the error at the time of setting max_parallel_degree, if the value cannot be supported?
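For what it's worth, a back-of-the-envelope sketch of the arithmetic behind the size_t overflow error above (hedged: the 64 kB per-worker tuple-queue size and the formula are illustrative assumptions, not the server's actual shared-memory layout):

```python
# Illustrative only: rough shared-memory request for N parallel workers.
TUPLE_QUEUE_SIZE = 64 * 1024  # assumed bytes of queue per worker

def approx_request(n_workers):
    """Rough lower bound on shared memory requested for n_workers."""
    return n_workers * TUPLE_QUEUE_SIZE

req = approx_request(262143)   # the parallel_degree maximum seen above
print(req)                     # 17179803648 bytes, about 16 GB
print(req > 2**32 - 1)         # True: larger than a 32-bit size_t can hold
```

A ~16 GB request fits comfortably in a 64-bit size_t, which is why overflowing it on a 64-bit machine is surprising.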


--
regards,tushar



Re: [HACKERS] Choosing parallel_degree

2016-04-12 Thread tushar

On 04/11/2016 09:14 PM, Robert Haas wrote:

>> postgres=# explain analyze verbose select * from abd  where n<=1;
>> ERROR:  requested shared memory size overflows size_t
>>
>> if we remove the analyze keyword then query running successfully.
>>
>> Expected = Is it not better to throw the error at the time of setting
>> max_parallel_degree, if not supported ?
>
> +1

It surprises me that that request overflowed size_t.  I guess we
should look into why that's happening.  Did you test this on a 32-bit
system?

No, I tested it on a 64-bit machine.

--
regards,tushar



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Choosing parallel_degree

2016-04-12 Thread tushar

On 04/11/2016 08:57 PM, Julien Rouhaud wrote:

>Expected = Expecting worker8 information , also loops=10 (including the
>Master)
>

Even if you set a per-table parallel_degree higher than
max_parallel_degree, it'll be maxed at max_parallel_degree.

Then, the explain shows that the planner assumed it'll launch 9 workers,
but only 8 were available (or needed perhaps) at runtime.
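Julien's rule can be sketched as (hedged: illustrative pseudologic, not the planner's actual code; the launched count further depends on how many background-worker slots are free at runtime):

```python
def planned_workers(table_parallel_degree, max_parallel_degree):
    """The per-table setting is capped at the max_parallel_degree GUC."""
    return min(table_parallel_degree, max_parallel_degree)

def launched_workers(planned, free_worker_slots):
    """At runtime, only as many workers launch as there are free slots."""
    return min(planned, free_worker_slots)

print(planned_workers(10, 9))   # 9, matching "Number of Workers: 9" above
print(launched_workers(9, 8))   # 8, matching the Worker 0..7 lines above
```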

Right, if we increase the max_worker_processes value in the postgresql.conf file, then we are able to see all the worker information in the explain plan.

If the parallel_degree value is higher than max_parallel_degree, e.g. parallel_degree = 20 and max_parallel_degree = 10, then the select query gets 10 workers.

But in general, when a table does not have parallel_degree set and max_parallel_degree is set to 10, the select query shows only 2 workers.
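The default of 2 workers is consistent with a table-size heuristic (a hedged sketch: it assumes one worker once the relation passes a minimum size, 8 MB here, plus one more worker per threefold size increase; both the cutoff and the growth factor are assumptions for illustration):

```python
import math

MIN_PARALLEL_SIZE = 8 * 1024 * 1024   # assumed 8 MB threshold

def default_workers(table_bytes):
    """Assumed heuristic: 1 worker at the threshold, +1 per 3x growth."""
    if table_bytes < MIN_PARALLEL_SIZE:
        return 0
    return 1 + int(math.log(table_bytes / MIN_PARALLEL_SIZE, 3))

print(default_workers(35 * 1024 * 1024))   # 2 for a ~35 MB table
```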

--
regards,tushar





[HACKERS] "parallel= " information is not coming in pg_dumpall for create aggregate

2016-04-18 Thread tushar

Hi,

I noticed in PG 9.6 that if we create an aggregate function specifying parallel=safe/restricted/unsafe and then take a pg_dumpall of the entire cluster, "parallel=" is missing from the CREATE AGGREGATE syntax.


Steps to reproduce -

.) Connect to a psql terminal and create an aggregate function:

postgres=# CREATE AGGREGATE unsafe_sum100 (float8)
(
stype = float8,
sfunc = float8pl,
mstype = float8,
msfunc = float8pl,
minvfunc = float8mi,
parallel=safe);
CREATE AGGREGATE

.) Perform pg_dumpall against that cluster.

.) Check the content of CREATE AGGREGATE unsafe_sum100 in the file:

"
-
-- Name: unsafe_sum100(double precision); Type: AGGREGATE; Schema: public; Owner: centos

--

CREATE AGGREGATE unsafe_sum100(double precision) (
SFUNC = float8pl,
STYPE = double precision,
MSFUNC = float8pl,
MINVFUNC = float8mi,
MSTYPE = double precision
);

"

--
regards,tushar



Re: [HACKERS] "parallel= " information is not coming in pg_dumpall for create aggregate

2016-04-21 Thread tushar

On 04/21/2016 08:36 AM, Robert Haas wrote:

Nice catch, Tushar.  Thanks for the patch, Fabrízio.  Committed.

Thanks, verified against the latest sources of PG 9.6 - the issue has been fixed now.


--
regards,tushar





Re: [HACKERS] Parallel Index Scans

2017-02-01 Thread tushar

On 02/01/2017 06:50 PM, Amit Kapila wrote:

Used large table parallel index scans (both forward and backward
scans).  These tests have been done by Tushar and you can find
detailed report up thread [2].  Apart from that, the patch has been
tested with TPC-H queries at various scale factors and it is being
used in multiple queries and we have verified the results of same as
well.  TPC-H tests have been done by Rafia.

Tushar has done some further extensive test of this patch.  Tushar,
can you please share your test results?

Yes, we have:
0) Tested on a high-end machine with the following configuration

[edb@ip-10-0-38-61 pg_log]$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             4
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E7-8880 v3 @ 2.30GHz

[edb@ip-10-0-38-61 pg_log]$ df -h
Filesystem  Size  Used Avail Use% Mounted on
devtmpfs961G   60K  961G   1% /dev
tmpfs   961G  556K  961G   1% /dev/shm
/dev/xvda1  197G  156G   42G  80% /

[edb@ip-10-0-38-61 pg_log]$ free
             total       used       free     shared    buffers     cached
Mem:    2014742800  170971292 1843771508     142668     166128  162463396
-/+ buffers/cache:    8341768 2006401032
Swap:            0          0          0

1) Executed the testcases with multiple clients (e.g. ran our testcase file against 4 different psql terminals of the same server simultaneously) for concurrency; we made an effort to execute the same set of tests (testcase.sql file) via different terminals against the same server.
2) We checked the count(*) of the query before and after disabling/enabling max_parallel_workers_per_gather to make sure the end result (output) is consistent.
3) We were able to get parallel workers = 14 (the highest degree of parallelism) in our case.
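The consistency check in step 2 can be sketched as (hedged: the table and predicate here are illustrative):

```sql
-- Run the same aggregate with parallelism off and on; results must match.
SET max_parallel_workers_per_gather = 0;
SELECT count(*) FROM pgbench_accounts WHERE aid < 5000;   -- serial plan

SET max_parallel_workers_per_gather = 14;
SELECT count(*) FROM pgbench_accounts WHERE aid < 5000;   -- parallel plan
```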


pgbench with scaling factor = 10,000 (149 GB of data in the database, 100 million rows inserted) on an Amazon instance (128 cores, 4 nodes).


We are able to see 14 workers launched out of 14 workers planned for the query below.


postgres=# \di+ pgbench_accounts_pkey
                                     List of relations
 Schema |         Name          | Type  | Owner |      Table       | Size  | Description
--------+-----------------------+-------+-------+------------------+-------+-------------
 public | pgbench_accounts_pkey | index | edb   | pgbench_accounts | 21 GB |
(1 row)

The index size is now 21 GB.

postgres=# explain analyse verbose select * from pgbench_accounts where aid <5000 and bid <=1 ;
                                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.57..1745380.10 rows=4691 width=97) (actual time=0.546..2316.118 rows=10 loops=1)
   Output: aid, bid, abalance, filler
   Workers Planned: 14
   Workers Launched: 14
   ->  Parallel Index Scan using pgbench_accounts_pkey on public.pgbench_accounts  (cost=0.57..1745380.10 rows=335 width=97) (actual time=0.081..2253.234 rows=6667 loops=15)
         Output: aid, bid, abalance, filler
         Index Cond: (pgbench_accounts.aid < 5000)
         Filter: (pgbench_accounts.bid <= 1)
         Rows Removed by Filter: 3326667
         Worker 0: actual time=0.069..2251.456 rows=7036 loops=1
         Worker 1: actual time=0.070..2256.772 rows=6588 loops=1
         Worker 2: actual time=0.071..2257.164 rows=6954 loops=1
         Worker 3: actual time=0.079..2255.166 rows=6222 loops=1
         Worker 4: actual time=0.063..2254.814 rows=6588 loops=1
         Worker 5: actual time=0.091..2253.872 rows=6588 loops=1
         Worker 6: actual time=0.093..2254.237 rows=6222 loops=1
         Worker 7: actual time=0.068..2254.749 rows=7320 loops=1
         Worker 8: actual time=0.060..2253.953 rows=6588 loops=1
         Worker 9: actual time=0.127..2253.546 rows=8052 loops=1
         Worker 10: actual time=0.091..2252.737 rows=7686 loops=1
         Worker 11: actual time=0.087..2252.056 rows=7320 loops=1
         Worker 12: actual time=0.091..2252.600 rows=7320 loops=1
         Worker 13: actual time=0.057..2252.341 rows=7686 loops=1
 Planning time: 0.165 ms
 Execution time: 2357.132 ms
(25 rows)

Even for array keys, where the index size is in MB, we are able to see 9 workers launched out of 9 workers planned:


postgres=# set enable_bitmapscan =0;
SET
postgres=# set enable_seqscan =0;
SET
postgres=# \di+ ary_idx
List of relations
 Schema |  Name   | Type  | Owner |  Table  | Size  | Description
+-+---+---+-+---+-
 public | ary_idx | index | edb   | ary_tab | 56 MB |
(1 row)

postgres=# explain analyze

Re: [HACKERS] Parallel Index-only scan

2017-01-30 Thread tushar

On 01/19/2017 05:37 PM, Rafia Sabih wrote:

Please find the attached file rebased patch of parallel index-only
scan on the latest Parallel index-scan patch [1].
We did some testing of this feature and wrote a few testcases. PFA the sql script (along with the expected .out files).


In addition, we generated the LCOV (code coverage) report and compared the files which changed for this patch. You can see the numbers for "with_patch" vs. "with_patch+TestCases" (.pdf file is attached).


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company

--create database
--create database test;
-- Running script for pgbench tables against a scaling factor of 1,000
--\! ./pgbench -i -s 1000 test > /tmp/my_logs 2>&1
--\c test
-- TestCase: 1
--Description 
/*
Explain plan catching Parallel index Only scan when in WHERE condition
   a) Single column of integer Type having index.
   b) condition: where the column having index is trying to fetch NOT NULL values("IS NOT NULL").
-- Need to disable SEQUENTIAL SCAN  to reproduce explain plan catch "parallel index Only scan"
*/
\c test
set enable_seqscan =0;
explain analyze  verbose select count(*) from pgbench_accounts where aid is not null;
-- TestCase: 2
--Description 
/*
Explain plan catching  parallel index Only scan:
   a) both columns are of integer Type.
   b) 1 column is having index and another is non key column.
   c) condition: where the column having BETWEEN .. AND .. condition both in Index and Non-Index column in WHERE condition.
*/
explain analyze verbose  select count(aid) from pgbench_accounts where aid between 1000 and 9000 ;

-- TestCase: 3
--Description 
/*
Explain plan catching  parallel index Only scan :
   a) both columns are of integer Type.
   b) 1 column is having index and another is non key column.
   c) condition: The column having SAFE FUNCTION against Index column in WHERE condition.
*/
CREATE or replace function fun_pis (n int) returns int parallel safe  as $$ begin return 1000; end; $$ language 'plpgsql';
explain  analyze verbose select aid from pgbench_accounts where aid > fun_pis(9) and aid < 9000 ;
-- TestCase: 4
--Description 
/*
Explain plan catching Parallel index only scan when  in WHERE condition
   a) 3 columns, 1 column is having PRIMARY KEY on "int" Datatype and another non key columns having "int" and "char" datatype.
   b) condition: WHERE clause having 3 conditions, index column is selecting more records as compared to other column conditions.
-- Need to disable SEQUENTIAL  to reproduce explain plan catches "parallel index Only scan"
*/
\c test
CREATE TABLE tt2(c1 serial primary key, c2 int, c3 char(10));
INSERT INTO tt2(c2, c3) VALUES (generate_series(1,30), 'abc');
INSERT INTO tt2(c2, c3) VALUES (generate_series(31,100), 'pqrs');
analyze tt2;
set enable_seqscan =0;
explain analyze select count(c1) from tt2 where c1 < 00;

-- TestCase: 5
--Description
/*
Explain plan catching Parallel index only scan when in WHERE condition
   a) 3 columns, 2 column is having composite index on "int" and "character" Datatype and another non key columns having "int" datatype.
   b) condition: WHERE clause having 1 multi-column condition selecting few records.
  -- Need to disable BITMAPSCAN, SEQUENTIALSCAN to reproduce explain plan catch "parallel index only scan"
*/

set enable_seqscan =0;
explain analyze verbose select count(*) from tt2 where (c1) NOT  IN((100));
-- TestCase: 6
--Description 
/*
Explain plan catching Parallel index only scan when in WHERE condition
   a) 2 columns, 1 non-key column having "text" datatype and another column having "array of integer[]" Datatype having index.
   b) condition: WHERE clause having 2 conditions, the array index column is selecting more records as compared to the other non-key column condition.
*/
CREATE TABLE ary_tab (c1 text, c2 integer[]);
INSERT INTO ary_tab VALUES ('one', '{1,2,3}');
INSERT INTO ary_tab VALUES ('two', '{4,5,6}');
INSERT INTO ary_tab VALUES ('three', '{2,4,6}');
INSERT INTO ary_tab  (select 'four', '{7,8,9,10}' from generate_series(1,50));
INSERT INTO ary_tab  (select 'five', '{7,8,9,10}' from generate_series(1,100));
CREATE INDEX ary_idx on ary_tab (c2);
analyze;
explain analyze  verbose select count(*) from ary_tab where ARRAY[7,8,9,10]=c2 ;

-- TestCase: 7
--Description 
/*
Explain plan catching Parallel index only scan when in WHERE condition
   a) 4 columns, 1 non-key column having "TEXT" datatype and others are "INTEGER", "FLOAT", "VARCHAR" column having "COMPOSIT INDEX", and the same "INTEGER" column have "INDEX".
   b) condition: WHERE clause having 1 conditions, the index column is selecting more records.
  -- Need to disable SEQUENTIALSCAN /BITMAP  to repro

Re: [HACKERS] Parallel bitmap heap scan

2017-01-09 Thread tushar

On 01/09/2017 07:22 PM, Dilip Kumar wrote:

Thanks, Tushar. I have fixed it. The defect was in 0002. I have also
observed another issue related to code refactoring, Actually, there
was some code present in 0001 which supposed to be in 0003.

Thanks, I have checked at my end and it is fixed now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Parallel bitmap heap scan

2017-01-11 Thread tushar

On 01/10/2017 05:16 PM, Dilip Kumar wrote:

Please try attached patch and confirm from your side.

Thanks, the issue seems to be fixed now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Parallel bitmap heap scan

2017-01-09 Thread tushar

On 01/09/2017 01:05 PM, Dilip Kumar wrote:

This patch can be used by 0003-parallel-bitmap-heap-scan-v7.patch
attached in the mail and also parallel-index-scan[2] can be rebased on
this, if this get committed,

After applying your patches against the fresh sources of PG v10, we are not able to perform initdb.


[centos@tusharcentos7 bin]$ ./initdb -D data
The files belonging to this database system will be owned by user "centos".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... sh: line 1: 30709 Segmentation fault  "/home/centos/PG10_9ja/postgresql/edbpsql/bin/postgres" --single -F -O -j -c search_path=pg_catalog -c exit_on_error=true template1 > /dev/null

child process exited with exit code 139
initdb: removing data directory "data"
[centos@tusharcentos7 bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Parallel bitmap heap scan

2017-01-09 Thread tushar

On 01/09/2017 04:36 PM, Dilip Kumar wrote:

I have taken the latest code, applied all 3 patches and compiled.
Initdb is working fine for me.

Can you please verify, do you have any extra patch along with my patch?
Did you properly clean the code?
Thanks Dilip. The issue is reproducible only if we use the '--enable-cassert' switch in configure.
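For reference, a build sketch that enables assertions (hedged: the prefix path and the extra --enable-debug flag are illustrative, not the exact build used here):

```shell
# Assertion-enabled build: required to reproduce the initdb crash above
./configure --prefix=/home/centos/pgsql --enable-cassert --enable-debug
make -j4 && make install
```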


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Parallel bitmap heap scan

2017-01-10 Thread tushar

On 01/10/2017 11:29 AM, tushar wrote:

On 01/09/2017 07:22 PM, Dilip Kumar wrote:

Thanks, Tushar. I have fixed it. The defect was in 0002. I have also
observed another issue related to code refactoring, Actually, there
was some code present in 0001 which supposed to be in 0003.

Thanks, I have checked at my end and it is fixed now.

We found a regression: earlier the testcase worked fine (against the older Parallel bitmap heap scan patches), but now we get a server crash against the v8 patches.

Testcase - (one of the table of TPC-H )

postgres=# explain analyze verbose
SELECT SUM(l_extendedprice) FROM lineitem
WHERE (l_shipdate >= '1995-01-01'::date)
AND (l_shipdate <='1996-03-31'::date);
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost

Here is the stack trace - ( Two core dump file generated)

[centos@centos-cpula bin]$ gdb -q -c data/core.25434 /home/centos/PG10_10jan/postgresql/edbpsql/bin/postgres
Reading symbols from /home/centos/PG10_10jan/postgresql/edbpsql/bin/postgres...done.

[New Thread 25434]
Missing separate debuginfo for
Try: yum --enablerepo='*-debug*' install /usr/lib/debug/.build-id/7f/719af91ee951b4fcb6647e7868f95f766a616b
Reading symbols from /usr/lib64/libssl.so.10...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/libssl.so.10
Reading symbols from /usr/lib64/libcrypto.so.10...(no debugging symbols found)...done.
Loaded symbols for /usr/lib64/libcrypto.so.10
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libgssapi_krb5.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgssapi_krb5.so.2
Reading symbols from /lib64/libkrb5.so.3...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkrb5.so.3
Reading symbols from /lib64/libcom_err.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libcom_err.so.2
Reading symbols from /lib64/libk5crypto.so.3...(no debugging symbols found)...done.
Loaded symbols for /lib64/libk5crypto.so.3
Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libkrb5support.so.0...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkrb5support.so.0
Reading symbols from /lib64/libkeyutils.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libkeyutils.so.1
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/libselinux.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libselinux.so.1
Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libnss_files.so.2
Core was generated by `postgres: bgworker: parallel worker for PID 25433  '.
Program terminated with signal 11, Segmentation fault.
#0  0x006f2fa6 in pagetable_destroy (tb=0x2079bf0) at ../../../src/include/lib/simplehash.h:361
361             tb->alloc->HashFree(tb->data, tb->alloc->args);
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.192.el6.x86_64 keyutils-libs-1.4-5.el6.x86_64 krb5-libs-1.10.3-57.el6.x86_64 libcom_err-1.41.12-22.el6.x86_64 libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-48.el6_8.1.x86_64 zlib-1.2.3-29.el6.x86_64

(gdb) bt
#0  0x006f2fa6 in pagetable_destroy (tb=0x2079bf0) at ../../../src/include/lib/simplehash.h:361
#1  0x006f3b52 in tbm_free (tbm=0x2077fe0) at tidbitmap.c:296
#2  0x006ab29b in ExecEndBitmapHeapScan (node=0x207e760) at nodeBitmapHeapscan.c:717
#3  0x00691701 in ExecEndNode (node=0x207e760) at execProcnode.c:689
#4  0x006a8f86 in ExecEndAgg (node=0x207e878)

Re: [HACKERS] Parallel Index Scans

2016-12-22 Thread tushar

On 12/22/2016 01:35 PM, tushar wrote:

On 12/22/2016 09:49 AM, Amit Kapila wrote:

I think you can focus on the handling of array scan keys for testing.
In general, one of my colleagues has shown interest in testing this
patch and I think he has tested as well but never posted his findings.
I will request him to share his findings and what kind of tests he has
done, if any.

Sure, we (Prabhat and I) have done some testing for this feature internally but never published the test scripts on this forum. PFA the sql scripts (along with the expected .out files) we have used for testing, for your ready reference.


In addition, we generated the LCOV (code coverage) report and compared the files which changed for the "Parallel index scan" patch. You can see the numbers for "with patch" vs. "without patch" (.pdf file is attached).


In addition to that, we ran sqlsmith against PG v10 + PIS (parallel index scan) patches and found a crash, but it occurs on plain PG v10 (without applying any patches) as well:


postgres=# select
   70 as c0,
   pg_catalog.has_server_privilege(
cast(ref_0.indexdef as text),
cast(cast(coalesce((select name from pg_catalog.pg_settings limit 1 offset 16), null) as text) as text)) as c1,
   pg_catalog.pg_export_snapshot() as c2,
   ref_0.indexdef as c3,
   ref_0.indexname as c4
 from
  pg_catalog.pg_indexes as ref_0
 where (ref_0.tablespace = ref_0.tablespace)
   or (46 = 22)
 limit 103;
TRAP: FailedAssertion("!(keylen < 64)", File: "hashfunc.c", Line: 139)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: 2016-12-23 11:19:50.627 IST [2314] LOG:  server process (PID 2322) was terminated by signal 6: Aborted
2016-12-23 11:19:50.627 IST [2314] DETAIL:  Failed process was running: select
   70 as c0,
   pg_catalog.has_server_privilege(
cast(ref_0.indexdef as text),
cast(cast(coalesce((select name from pg_catalog.pg_settings limit 1 offset 16), null) as text) as text)) as c1,
   pg_catalog.pg_export_snapshot() as c2,
   ref_0.indexdef as c3,
   ref_0.indexname as c4
 from
  pg_catalog.pg_indexes as ref_0
 where (ref_0.tablespace = ref_0.tablespace)
   or (46 = 22)
 limit 103;
2016-12-23 11:19:50.627 IST [2314] LOG:  terminating any other active server processes
2016-12-23 11:19:50.627 IST [2319] WARNING:  terminating connection because of crash of another server process
2016-12-23 11:19:50.627 IST [2319] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2016-12-23 11:19:50.627 IST [2319] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2016-12-23 11:19:50.629 IST [2323] FATAL:  the database system is in recovery mode

Failed.
!> 2016-12-23 11:19:50.629 IST [2314] LOG:  all server processes terminated; reinitializing
2016-12-23 11:19:50.658 IST [2324] LOG:  database system was interrupted; last known up at 2016-12-23 11:19:47 IST
2016-12-23 11:19:50.810 IST [2324] LOG:  database system was not properly shut down; automatic recovery in progress
2016-12-23 11:19:50.812 IST [2324] LOG:  invalid record length at 0/155E408: wanted 24, got 0
2016-12-23 11:19:50.812 IST [2324] LOG:  redo is not required
2016-12-23 11:19:50.819 IST [2324] LOG:  MultiXact member wraparound protections are now enabled
2016-12-23 11:19:50.822 IST [2314] LOG:  database system is ready to accept connections
2016-12-23 11:19:50.822 IST [2328] LOG:  autovacuum launcher started

--
regards,tushar



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Server Crash while running sqlsmith [TRAP: FailedAssertion("!(keylen < 64)", File: "hashfunc.c", Line: 139) ]

2016-12-23 Thread tushar
key=0x7ffdfabd09e0) at catcache.c:209
#6  0x0095a62b in SearchCatCache (cache=0x1e96750, v1=33238784, 
v2=0, v3=0, v4=0) at catcache.c:1144
#7  0x0096ebac in SearchSysCache (cacheId=29, key1=33238784, 
key2=0, key3=0, key4=0) at syscache.c:1006
#8  0x0096ecc8 in GetSysCacheOid (cacheId=29, key1=33238784, 
key2=0, key3=0, key4=0) at syscache.c:1084
#9  0x006c7ab0 in get_foreign_server_oid (servername=0x1fb2f00 
"CREATE UNIQUE INDEX pg_authid_rolname_index ON pg_authid USING btree 
(rolname)", missing_ok=0 '\000')

at foreign.c:688
#10 0x008453cc in convert_server_name (servername=0x1fb2e68) at 
acl.c:3995
#11 0x00845187 in has_server_privilege_name (fcinfo=0x1f76fe8) 
at acl.c:3885
#12 0x0068aef0 in ExecMakeFunctionResultNoSets 
(fcache=0x1f76f78, econtext=0x1f7b178, isNull=0x1f8cac1 "", 
isDone=0x1f8ccac) at execQual.c:2046
#13 0x0068b7f1 in ExecEvalFunc (fcache=0x1f76f78, 
econtext=0x1f7b178, isNull=0x1f8cac1 "", isDone=0x1f8ccac) at 
execQual.c:2442
#14 0x00691f39 in ExecTargetList (targetlist=0x1f8cbf8, 
tupdesc=0x1f7b340, econtext=0x1f7b178, values=0x1f8ca68, 
isnull=0x1f8cac0 "", itemIsDone=0x1f8cca8,

isDone=0x7ffdfabd0f0c) at execQual.c:5501
#15 0x006926b5 in ExecProject (projInfo=0x1f8cae0, 
isDone=0x7ffdfabd0f0c) at execQual.c:5725
#16 0x006b1624 in ExecNestLoop (node=0x1f7b060) at 
nodeNestloop.c:267
#17 0x00687571 in ExecProcNode (node=0x1f7b060) at 
execProcnode.c:476

#18 0x006a9bac in ExecLimit (node=0x1f7ae28) at nodeLimit.c:91
#19 0x00687676 in ExecProcNode (node=0x1f7ae28) at 
execProcnode.c:531
#20 0x00683101 in ExecutePlan (estate=0x1f757f8, 
planstate=0x1f7ae28, use_parallel_mode=0 '\000', operation=CMD_SELECT, 
sendTuples=1 '\001', numberTuples=0,

direction=ForwardScanDirection, dest=0x7fe3b9ccafe8) at execMain.c:1580
#21 0x006811da in standard_ExecutorRun (queryDesc=0x1f11d68, 
direction=ForwardScanDirection, count=0) at execMain.c:340
#22 0x0068105e in ExecutorRun (queryDesc=0x1f11d68, 
direction=ForwardScanDirection, count=0) at execMain.c:288
#23 0x008253ea in PortalRunSelect (portal=0x1f737e8, forward=1 
'\001', count=0, dest=0x7fe3b9ccafe8) at pquery.c:946
#24 0x0082507d in PortalRun (portal=0x1f737e8, 
count=9223372036854775807, isTopLevel=1 '\001', dest=0x7fe3b9ccafe8, 
altdest=0x7fe3b9ccafe8,

completionTag=0x7ffdfabd1300 "") at pquery.c:787
#25 0x0081f37f in exec_simple_query (
query_string=0x1ec5978 "select\n", ' ' <repeats ... times>, "70 as 
c0,\n", ' ' <repeats ... times>, "pg_catalog.has_server_privilege(\n", 
' ' <repeats ... times>, "cast(ref_0.indexdef as text),\n", ' ' <repeats ... 
times>, "cast(cast(coalesce((select name from pg_catalog.pg_settings 
limit 1 offset"...) at postgres.c:1094
#26 0x00823433 in PostgresMain (argc=1, argv=0x1e71eb8, 
dbname=0x1e47de8 "postgres", username=0x1e71d20 "centos") at postgres.c:4072

#27 0x0079718f in BackendRun (port=0x1e69730) at postmaster.c:4275
#28 0x00796917 in BackendStartup (port=0x1e69730) at 
postmaster.c:3947

#29 0x0079302e in ServerLoop () at postmaster.c:1704
#30 0x0079266d in PostmasterMain (argc=3, argv=0x1e45c60) at 
postmaster.c:1312

#31 0x006db982 in main (argc=3, argv=0x1e45c60) at main.c:228
(gdb) ^CQuit
(gdb)

--
regards,tushar





Re: [HACKERS] Parallel Index Scans

2016-12-23 Thread tushar

On 12/23/2016 05:38 PM, Robert Haas wrote:

So why are you reporting it here rather than on a separate thread?
We found it while testing parallel index scans, and later it turned out 
to be a crash in general.

Sure - makes sense, will do that.

--
regards,tushar





[HACKERS] Getting server crash after running sqlsmith

2017-03-28 Thread tushar

Hi,

After running sqlsmith against the latest sources of PG v10, I am able to 
see a crash -


here is the standalone testcase -

Make sure 'logging_collector=on' is set in the postgresql.conf file.
Connect to the psql terminal and run this query:
postgres=# select
  80 as c0,
  pg_catalog.pg_backend_pid() as c1,
  68 as c2,
  subq_1.c0 as c3,
  subq_1.c0 as c4
from
  (select
ref_0.specific_schema as c0
  from
information_schema.role_routine_grants as ref_0,
lateral (select
  ref_0.grantor as c0,
  50 as c1
from
  information_schema.routines as ref_1
where (63 = 86)
  or (pg_catalog.pg_advisory_lock(
  cast(ref_1.result_cast_datetime_precision as 
integer),

  cast(pg_catalog.bttidcmp(
cast(null as tid),
cast(null as tid)) as integer)) is NULL)
limit 143) as subq_0
  where pg_catalog.pg_rotate_logfile() is NULL) as subq_1
where 50 <> 45;
 c0 | c1 | c2 | c3 | c4
----+----+----+----+----
(0 rows)

postgres=# select 1;
FATAL:  terminating connection due to unexpected postmaster exit
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q

here is the stack trace -
[centos@tushar-centos bin]$ gdb -q -c mdata/core.4254 
/home/centos/pg10_28march/postgresql/edbpsql/bin/postgres
Reading symbols from 
/home/centos/pg10_28march/postgresql/edbpsql/bin/postgres...done.

[New Thread 4254]
Missing separate debuginfo for
Try: yum --disablerepo='*' --enablerepo='*-debug*' install 
/usr/lib/debug/.build-id/5f/7d4ef6f6ba15505d3c42a7a09e2a7ca9ae5ba6

--
--
Loaded symbols for /lib/libkrb5support.so.0
Reading symbols from /lib/libkeyutils.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libkeyutils.so.1
Reading symbols from /lib/libselinux.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libselinux.so.1
Reading symbols from /lib/libnss_files.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libnss_files.so.2
Core was generated by 
`/home/centos/pg10_28march/postgresql/edbpsql/bin/postgres -D mdata'.

Program terminated with signal 11, Segmentation fault.
#0  0x00a75424 in __kernel_vsyscall ()
Missing separate debuginfos, use: debuginfo-install 
glibc-2.12-1.132.el6.i686 keyutils-libs-1.4-5.el6.i686 
krb5-libs-1.10.3-57.el6.i686 libcom_err-1.41.12-22.el6.i686 
libselinux-2.0.94-7.el6.i686 openssl-1.0.1e-48.el6_8.4.i686 
zlib-1.2.3-29.el6.i686

(gdb) bt
#0  0x00a75424 in __kernel_vsyscall ()
#1  0x00aa1d7b in sigprocmask () from /lib/libc.so.6
#2  0x083d2d79 in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5081

#3  <signal handler called>
#4  0x00a75424 in __kernel_vsyscall ()
#5  0x00aa1d7b in sigprocmask () from /lib/libc.so.6
#6  0x083d2d79 in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5081

#7  <signal handler called>
#8  0x00a75424 in __kernel_vsyscall ()
#9  0x00aa1d7b in sigprocmask () from /lib/libc.so.6
#10 0x083d2d79 in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5081

#11 <signal handler called>
#12 0x00a75424 in __kernel_vsyscall ()
--
--
#52380 0x00a75424 in __kernel_vsyscall ()
#52381 0x00aa1d7b in sigprocmask () from /lib/libc.so.6
#52382 0x083d2d79 in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5081

#52383 <signal handler called>
#52384 0x00a75424 in __kernel_vsyscall ()
#52385 0x00b5208d in ___newselect_nocancel () from /lib/libc.so.6
#52386 0x083ce40e in ServerLoop () at postmaster.c:1693
#52387 0x083cdbcb in PostmasterMain (argc=3, argv=0x957ca10) at 
postmaster.c:1337

#52388 0x083236fc in main (argc=3, argv=0x957ca10) at main.c:228

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [HACKERS] Getting server crash after running sqlsmith

2017-03-30 Thread tushar

On 03/29/2017 12:06 AM, Tom Lane wrote:

Hm ... I don't see a crash here,

I am getting this issue only on Linux32.

  but I wonder whether you have parameters
set that would cause this query to be run as a parallel query?  Because
pg_rotate_logfile() is marked as parallel-safe in pg_proc, which seems
probably insane.
No, I have not changed any parameters except logging_collector=on in the 
postgresql.conf file.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] increasing the default WAL segment size

2017-03-09 Thread tushar

On 03/10/2017 11:23 AM, Beena Emerson wrote:



Thank you for your reviews Kuntal, Jim, Ashutosh

Attached in an updated 02 patch which:

 1. Call RetrieveXLogSegSize(conn) in pg_receivewal.c
 2. Remove the warning in Windows
 3. Change PATH_MAX in pg_waldump with MAXPGPATH

Regarding the usage of the wal file size as the XLogSegSize, I agree 
with what Robert has said. Generally, the wal size will be of the 
expected wal_segment_size, and to have it any other size, especially 
a valid power-of-2 value, is extremely rare, and I feel it is not a major 
cause of concern.

We (Prabhat and I) have started basic testing of this feature -
2 quick issues -

1) At the time of initdb, we have set "--wal-segsize 4", so all the 
WAL files should be 4 MB each, but in the postgresql.conf file it 
is mentioned:


#wal_keep_segments = 0  # in logfile segments, 16MB each; 0 
disables


so the comment (16MB) mentioned against the parameter 'wal_keep_segments' 
looks wrong; either we should remove it or modify it.
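For instance, with --wal-segsize 4 one would expect the generated comment to 
reflect the actual segment size - a sketch of the corrected line, assuming 
the comment is made aware of the chosen size:

```
#wal_keep_segments = 0          # in logfile segments, 4MB each; 0 disables
```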


2) Getting an "Aborted (core dumped)" error at the time of running 
pg_basebackup (this issue is only coming on Linux32, not on Linux64 - 
we have double-checked to confirm it).
Steps to reproduce on Linux32
===
fetch the sources
apply both the patches
 ./configure --with-zlib   --enable-debug  --enable-cassert 
--enable-depend --prefix=$PWD/edbpsql --with-openssl CFLAGS="-g -O0"; 
make all install

Performed initdb with switch "--wal-segsize 4"
start the server
run pg_basebackup

[centos@tushar-centos bin]$ ./pg_basebackup -v -D /tmp/myslave
*** glibc detected *** ./pg_basebackup: free(): invalid pointer: 
0x08da7f00 ***

=== Backtrace: =
/lib/libc.so.6[0xae7e31]
/home/centos/pg10_10mar/postgresql/edbpsql/lib/libpq.so.5(PQclear+0x16d)[0x6266f5]
./pg_basebackup[0x8051441]
./pg_basebackup[0x804e7b5]
/lib/libc.so.6(__libc_start_main+0xe6)[0xa8dd26]
./pg_basebackup[0x804a231]
=== Memory map: 
00153000-0017b000 r-xp  fc:01 1271 /lib/libk5crypto.so.3.1
0017b000-0017c000 r--p 00028000 fc:01 1271 /lib/libk5crypto.so.3.1
0017c000-0017d000 rw-p 00029000 fc:01 1271 /lib/libk5crypto.so.3.1
0017d000-0017e000 rw-p  00:00 0
0017e000-0018 r-xp  fc:01 1241 /lib/libkeyutils.so.1.3
0018-00181000 r--p 1000 fc:01 1241 /lib/libkeyutils.so.1.3
00181000-00182000 rw-p 2000 fc:01 1241 /lib/libkeyutils.so.1.3
002ad000-002b9000 r-xp  fc:01 1152 /lib/libnss_files-2.12.so
002b9000-002ba000 r--p b000 fc:01 1152 /lib/libnss_files-2.12.so
002ba000-002bb000 rw-p c000 fc:01 1152 /lib/libnss_files-2.12.so
004ad000-004b r-xp  fc:01 1267 /lib/libcom_err.so.2.1
004b-004b1000 r--p 2000 fc:01 1267 /lib/libcom_err.so.2.1
004b1000-004b2000 rw-p 3000 fc:01 1267 /lib/libcom_err.so.2.1
004ec000-005c3000 r-xp  fc:01 1199   /lib/libkrb5.so.3.3
005c3000-005c9000 r--p 000d6000 fc:01 1199   /lib/libkrb5.so.3.3
005c9000-005ca000 rw-p 000dc000 fc:01 1199   /lib/libkrb5.so.3.3
00617000-00642000 r-xp  fc:01 2099439 
/home/centos/pg10_10mar/postgresql/edbpsql/lib/libpq.so.5.10
00642000-00644000 rw-p 0002a000 fc:01 2099439 
/home/centos/pg10_10mar/postgresql/edbpsql/lib/libpq.so.5.10

00792000-0079c000 r-xp  fc:01 1255 /lib/libkrb5support.so.0.1
0079c000-0079d000 r--p 9000 fc:01 1255 /lib/libkrb5support.so.0.1
0079d000-0079e000 rw-p a000 fc:01 1255 /lib/libkrb5support.so.0.1
007fd000-0083b000 r-xp  fc:01 1280 /lib/libgssapi_krb5.so.2.2
0083b000-0083c000 r--p 0003e000 fc:01 1280 /lib/libgssapi_krb5.so.2.2
0083c000-0083d000 rw-p 0003f000 fc:01 1280 /lib/libgssapi_krb5.so.2.2
0083f000-009ed000 r-xp  fc:01 292057 /usr/lib/libcrypto.so.1.0.1e
009ed000-009fd000 r--p 001ae000 fc:01 292057 /usr/lib/libcrypto.so.1.0.1e
009fd000-00a04000 rw-p 001be000 fc:01 292057 /usr/lib/libcrypto.so.1.0.1e
00a04000-00a07000 rw-p  00:00 0
00a51000-00a6f000 r-xp  fc:01 14109  /lib/ld-2.12.so
00a6f000-00a7 r--p 0001d000 fc:01 14109  /lib/ld-2.12.so
00a7-00a71000 rw-p 0001e000 fc:01 14109  /lib/ld-2.12.so
00a77000-00c08000 r-xp  fc:01 14110  /lib/libc-2.12.so
00c08000-00c0a000 r--p 00191000 fc:01 14110  /lib/libc-2.12.so
00c0a000-00c0b000 rw-p 00193000 fc:01 14110  /lib/libc-2.12.so
00c0b000-00c0e000 rw-p  00:00 0
00c1-00c22000 r-xp  fc:01 14355  /lib/libz.so.1.2.3
00c22000-00c23000 r--p 00011000 fc:01 14355  /lib/libz.so.1.2.3
00c23000-00c24000 rw-p 00012000 fc:01 14355  /lib/libz.so.1.2.3
00c52000-00c55000 r-xp  fc:01 14375  /lib/libdl-2.12.so
00c55000-00c56000 r--p 2000 fc:01 14375  /lib/libdl-2.12.so
00c56000-00c57000 rw-p 3000 fc:01 14375  /lib/libdl-2.12.so
00c59000-00c7 r-xp  fc:01 14379 /lib/libpthread-2.12.so
00c7-00c71000 r--p 00016000 fc:01 14379 /lib/libpthread-2.12.so
00c71000-00c72000 rw-p 00017000 fc:01 14379 /

Re: [HACKERS] increasing the default WAL segment size

2017-03-14 Thread tushar

On 03/14/2017 11:14 AM, Beena Emerson wrote:

Hello,

Attached is the updated patch. It fixes the issues and also updates 
few code comments.


 Can you please check with the new patch?

Thanks, both issues have been fixed now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



[HACKERS] pg_upgrade failed if view contain natural left join condition

2017-07-20 Thread tushar

Steps to reproduce -

v9.6

postgres=# create table t(n int);
CREATE TABLE
postgres=# create table t1(a int);
CREATE TABLE
postgres=# create view ttt1 as SELECT e.n FROM t e NATURAL LEFT JOIN t1 d;
CREATE VIEW

v10 -

run pg_upgrade -

it is going to fail with this error -

"
pg_restore: creating TABLE "public.t"
pg_restore: creating TABLE "public.t1"
pg_restore: creating VIEW "public.ttt1"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 187; 1259 16390 VIEW 
ttt1 edb
pg_restore: [archiver (db)] could not execute query: ERROR:  syntax 
error at or near ")"

LINE 16:  LEFT JOIN "t1" "d");
^
Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('16392'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_type array oid
SELECT 
pg_catalog.binary_upgrade_set_next_array_pg_type_oid('16391'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_class oids
SELECT 
pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('16390'::pg_catalog.oid);


CREATE VIEW "ttt1" AS
 SELECT "e"."n"
   FROM ("t" "e"
 LEFT JOIN "t1" "d");

"
I think this issue should be present in the older branches as well, but 
I have not checked that.
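For reference: since t and t1 share no common column, the NATURAL LEFT JOIN 
degenerates to a join with no condition, so a restorable dump would have to 
spell out the (vacuous) join condition explicitly. A sketch of what valid 
output could look like:

```sql
-- Sketch only: with no common columns, NATURAL LEFT JOIN is equivalent
-- to LEFT JOIN ... ON (true), which parses and restores cleanly.
CREATE VIEW "ttt1" AS
 SELECT "e"."n"
   FROM ("t" "e"
     LEFT JOIN "t1" "d" ON (true));
```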


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] pg_upgrade failed if view is based on sequence

2017-07-20 Thread tushar

Steps to reproduce -

v9.6

postgres=# create sequence seq_9166 start 1 increment 1;
CREATE SEQUENCE
postgres=# create or replace view v3_9166 as select * from seq_9166;
CREATE VIEW

v10

run pg_upgrade; it is going to fail with this error


command: "./pg_restore" --host 
/home/centos/pg10_14july/postgresql/edbpsql/bin --port 50432 --username 
edb --exit-on-error --verbose --dbname 'dbname=postgres' 
"pg_upgrade_dump_13269.custom" >> "pg_upgrade_dump_13269.log" 2>&1

pg_restore: connecting to database for restore
pg_restore: creating pg_largeobject "pg_largeobject"
pg_restore: creating pg_largeobject_metadata "pg_largeobject_metadata"
pg_restore: creating COMMENT "postgres"
pg_restore: creating SCHEMA "public"
pg_restore: creating COMMENT "SCHEMA "public""
pg_restore: creating TABLE "public.fb17136_tab1"
pg_restore: creating SEQUENCE "public.seq_9166"
pg_restore: creating VIEW "public.v3_9166"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 187; 1259 16392 VIEW 
v3_9166 edb
pg_restore: [archiver (db)] could not execute query: ERROR:  column 
seq_9166.sequence_name does not exist

LINE 14:  SELECT "seq_9166"."sequence_name",
 ^
Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('16394'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_type array oid
SELECT 
pg_catalog.binary_upgrade_set_next_array_pg_type_oid('16393'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_class oids
SELECT 
pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('16392'::pg_catalog.oid);


CREATE VIEW "v3_9166" AS
 SELECT "seq_9166"."sequence_name",
"seq_9166"."last_value",
"seq_9166"."start_value",
"seq_9166"."increment_by",
"seq_9166"."max_value",
"seq_9166"."min_value",
"seq_9166"."cache_value",
"seq_9166"."log_cnt",
"seq_9166"."is_cycled",
"seq_9166"."is_called"
   FROM "seq_9166";
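(For context: in v10 a sequence relation itself exposes only last_value, 
log_cnt and is_called, while the remaining attributes moved to the 
pg_sequences catalog view, which is why the dumped view no longer restores. 
A roughly equivalent query on v10, sketched under that assumption, would be:)

```sql
SELECT sequencename AS sequence_name, last_value, start_value,
       increment_by, max_value, min_value, cache_size AS cache_value,
       cycle AS is_cycled
  FROM pg_catalog.pg_sequences
 WHERE sequencename = 'seq_9166';
```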

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Create language syntax is not proper in pg_dumpall and not working using pg_upgrade

2017-07-25 Thread tushar

v9.6

postgres=# CREATE LANGUAGE alt_lang1 HANDLER plpgsql_call_handler;
CREATE LANGUAGE
postgres=# \q

v10 - run pg_upgrade; it fails with this error -

pg_restore: creating pg_largeobject_metadata "pg_largeobject_metadata"
pg_restore: creating COMMENT "postgres"
pg_restore: creating SCHEMA "public"
pg_restore: creating COMMENT "SCHEMA "public""
pg_restore: creating PROCEDURAL LANGUAGE "alt_lang1"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 560; 2612 16384 
PROCEDURAL LANGUAGE alt_lang1 edb
pg_restore: [archiver (db)] could not execute query: ERROR: unsupported 
language "alt_lang1"
HINT:  The supported languages are listed in the pg_pltemplate system 
catalog.

Command was: CREATE OR REPLACE PROCEDURAL LANGUAGE "alt_lang1";


take the cluster dump using pg_dumpall
"
--
-- Name: alt_lang1; Type: PROCEDURAL LANGUAGE; Schema: -; Owner: edb
--

CREATE OR REPLACE PROCEDURAL LANGUAGE alt_lang1;


ALTER PROCEDURAL LANGUAGE alt_lang1 OWNER TO edb;
"

The HANDLER part is missing, and due to that it throws an error if we 
try to execute it on the psql terminal.
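For comparison, a dump that preserved the handler from the original 9.6 
definition would presumably read:

```sql
-- Sketch of the expected pg_dumpall output, carrying the HANDLER clause
-- from the original CREATE LANGUAGE statement:
CREATE OR REPLACE PROCEDURAL LANGUAGE alt_lang1
    HANDLER plpgsql_call_handler;

ALTER PROCEDURAL LANGUAGE alt_lang1 OWNER TO edb;
```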


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_dump does not handle indirectly-granted permissions properly

2017-07-26 Thread tushar

On 07/26/2017 02:12 AM, Tom Lane wrote:

AFAICT, pg_dump has no notion that it needs to be careful about the order
in which permissions are granted.  I did

regression=# create user joe;
CREATE ROLE
regression=# create user bob;
CREATE ROLE
regression=# create user alice;
CREATE ROLE
regression=# \c - joe
You are now connected to database "regression" as user "joe".
regression=> create table joestable(f1 int);
CREATE TABLE
regression=> grant select on joestable to alice with grant option;
GRANT
regression=> \c - alice
You are now connected to database "regression" as user "alice".
regression=> grant select on joestable to bob;
GRANT

and then pg_dump'd that.  The ACL entry for joestable looks like this:

--
-- TOC entry 5642 (class 0 OID 0)
-- Dependencies: 606
-- Name: joestable; Type: ACL; Schema: public; Owner: joe
--

SET SESSION AUTHORIZATION alice;
GRANT SELECT ON TABLE joestable TO bob;
RESET SESSION AUTHORIZATION;
GRANT SELECT ON TABLE joestable TO alice WITH GRANT OPTION;

Unsurprisingly, that fails to restore:

db2=# SET SESSION AUTHORIZATION alice;
SET
db2=> GRANT SELECT ON TABLE joestable TO bob;
ERROR:  permission denied for relation joestable


I am not certain whether this is a newly introduced bug or not.
However, I tried it in 9.2-9.6, and all of them produce the
GRANTs in the right order:

GRANT SELECT O
I am also getting the same error while doing pg_upgrade from v9.6 to v10, 
although I am not able to reproduce it with a v9.5 -> v9.6 pg_upgrade.


v9.6

CREATE TABLE deptest (f1 serial primary key, f2 text);
\set VERBOSITY default
CREATE USER regress_dep_user0;
CREATE USER regress_dep_user1;
CREATE USER regress_dep_user2;
SET SESSION AUTHORIZATION regress_dep_user0;
REASSIGN OWNED BY regress_dep_user0 TO regress_dep_user1;
REASSIGN OWNED BY regress_dep_user1 TO regress_dep_user0;
CREATE TABLE deptest1 (f1 int unique);
GRANT ALL ON deptest1 TO regress_dep_user1 WITH GRANT OPTION;
SET SESSION AUTHORIZATION regress_dep_user1;
GRANT ALL ON deptest1 TO regress_dep_user2;

v10 - run pg_upgrade.
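Per Tom's analysis, the dump would restore if the WITH GRANT OPTION grant 
came first; for the repro above, a correctly ordered ACL section would look 
roughly like:

```sql
-- Sketch of the expected (restorable) GRANT ordering: the grantor's own
-- grant option must be in place before the grant it authorizes.
GRANT ALL ON deptest1 TO regress_dep_user1 WITH GRANT OPTION;
SET SESSION AUTHORIZATION regress_dep_user1;
GRANT ALL ON deptest1 TO regress_dep_user2;
RESET SESSION AUTHORIZATION;
```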

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] pg_upgrade failed with error - ERROR: column "a" in child table must be marked NOT NULL

2017-07-26 Thread tushar

v9.5/9.6

create these objects -
CREATE TABLE constraint_rename_test (a int CONSTRAINT con1 CHECK (a > 
0), b int, c int);
CREATE TABLE constraint_rename_test2 (a int CONSTRAINT con1 CHECK (a > 
0), d int) INHERITS (constraint_rename_test);

ALTER TABLE constraint_rename_test ADD CONSTRAINT con3 PRIMARY KEY (a);

v9.6/v10 - run pg_upgrade

pg_restore: creating COMMENT "SCHEMA "public""
pg_restore: creating TABLE "public.constraint_rename_test"
pg_restore: creating TABLE "public.constraint_rename_test2"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 351; 1259 16388 TABLE 
constraint_rename_test2 edb
pg_restore: [archiver (db)] could not execute query: ERROR:  column "a" 
in child table must be marked NOT NULL

Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('16390'::pg_catalog.oid);


Manually, I am able to create all these objects.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] increasing the default WAL segment size

2017-07-06 Thread tushar

On 07/06/2017 12:04 PM, Beena Emerson wrote:

The 04-initdb-walsegsize_v2.patch has the following improvements:
- Rebased over the new 03 patch
- Pass the wal-segsize initdb option as a command-line option rather
than in an environment variable.
- Since the new function check_wal_size had only two checks and was
used once, moved the code to ReadControlFile where it is used and
removed this function.
- Improve comments and add validations where required.
- Use DEFAULT_XLOG_SEG_SIZE to set the min_wal_size and
max_wal_size, instead of the value 16.
- Use XLogSegMaxSize and XLogSegMinSize to calculate the range of the GUC
wal_segment_size instead of 16 - INT_MAX.
Thanks Beena. I tested with the following scenarios and all are 
working as expected:
.) Different WAL segment sizes, i.e. 1,2,8,16,32,64,512,1024, at the time 
of initdb

.) SR setup in place.
.) Combinations of max/min_wal_size in postgresql.conf with different 
wal_segment_size.
.) Shut down the server forcefully (kill -9) / promote slave, to make 
sure recovery happened successfully.

.) With different utilities like pg_resetwal/pg_upgrade/pg_waldump
.) Running pg_bench for substantial workloads (~1 hour)
.) WAL segment size is not the default (changed at the time of ./configure) + 
different wal_segment_size (at the time of initdb).


Things look fine.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] If subscription to foreign table valid ?

2017-05-12 Thread tushar

On 05/11/2017 07:13 PM, Petr Jelinek wrote:

I think it does make sense to add check for this into CREATE/ALTER
SUBSCRIBER though so that user is informed immediately about the mistake
rather than by errors in the logs later.
+1, there are a few similar cases where the user does not get an error at 
the prompt, for instance:

--when publication doesn't not exist
postgres=# create subscription sub connection 'dbname=postgres port=5000 
user=centos password=a' publication nowhere;

NOTICE:  synchronized table states
NOTICE:  created replication slot "sub" on publisher
CREATE SUBSCRIPTION
--No validation check for the publication name in ALTER
postgres=# alter subscription sub set publication _ refresh;
ALTER SUBSCRIPTION
--slot name given in ALTER
postgres=# alter subscription sub with ( slot name='nowhere');
ALTER SUBSCRIPTION
--and due to that , we are not able to drop it later.
postgres=# drop subscription sub;
ERROR:  could not drop the replication slot "nowhere" on publisher
DETAIL:  The error was: ERROR:  replication slot "nowhere" does not exist
postgres=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Getting error at the time of dropping subscription and few more issues

2017-05-12 Thread tushar

Hi,

There are a few more issues found in logical replication:

(1)ERROR:  tuple concurrently updated

Publication Server - (X machine)
\\create table \ create publication \ insert rows
 create table t(n int);
 create publication pub for table t;
 insert into t values (generate_series(1,100));

Subscription Server-(Y machine)
\\create table t / create subscription
create table t(n int);
create subscription sub connection 'dbname=postgres  port=5000 
user=centos  password=a' publication pub;


\\drop subscription and  re-create  (repeat this 2-3 times)
postgres=# drop subscription sub;
NOTICE:  dropped replication slot "sub" on publisher
DROP SUBSCRIPTION
postgres=# create subscription sub connection 'dbname=postgres port=5000 
user=centos  password=a' publication pub;

NOTICE:  synchronized table states
NOTICE:  created replication slot "sub" on publisher
CREATE SUBSCRIPTION
postgres=# select count(*) from t;
 count 
-------
   100
(1 row)

postgres=# drop subscription sub;
ERROR:  tuple concurrently updated

(2) Not able to drop the subscription even though 'nocreate slot' is specified

postgres=# create subscription s2s1 connection 'dbname=postgres 
port=5000 user=t  password=a' publication t with (nocreate 
slot,enabled,copydata,SYNCHRONOUS_COMMIT='on');

NOTICE:  synchronized table states
CREATE SUBSCRIPTION

--not able to drop the subscription; I have checked on the publication 
side - no such slot was created, but it is still looking for the slot.

postgres=# drop subscription s2s1;
ERROR:  could not drop the replication slot "s2s1" on publisher
DETAIL:  The error was: ERROR:  replication slot "s2s1" does not exist
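As a possible workaround (this assumes the final v10 spelling 
slot_name = NONE, which differs from the beta-era 'slot name' syntax shown 
elsewhere in this thread), the subscription could presumably be dropped 
after dissociating it from the non-existent slot:

```sql
-- Sketch: mark the subscription as owning no remote slot, so that
-- DROP SUBSCRIPTION skips the slot cleanup on the publisher.
ALTER SUBSCRIPTION s2s1 SET (slot_name = NONE);
DROP SUBSCRIPTION s2s1;
```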

(3) The ALTER PUBLICATION ... SET command doesn't give you a NOTICE message 
about tables which got removed.


postgres=# create publication pub for table t,t1,t2 ;
CREATE PUBLICATION

postgres=# select * from pg_publication_tables ;
 pubname | schemaname | tablename
---------+------------+-----------
 pub | public | t
 pub | public | t1
 pub | public | t2
(3 rows)

postgres=# alter publication pub set table t;
ALTER PUBLICATION

postgres=# select * from pg_publication_tables ;
 pubname | schemaname | tablename
---------+------------+-----------
 pub | public | t
(1 row)

In the subscription, we do get NOTICE messages about tables which 
got removed:


postgres=#  alter subscription sub set publication pub refresh;
NOTICE:  removed subscription for table public.t1
NOTICE:  removed subscription for table public.t2
ALTER SUBSCRIPTION

I think in publication too, we should provide NOTICE messages.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company








[HACKERS] Create publication syntax is not coming properly in pg_dump / pg_dumpall

2017-05-15 Thread tushar

Hi,

I observed that in pg_dump/pg_dumpall, the 'create publication' syntax does 
not come out properly if only a specific value is mentioned in publish.


Testcase to reproduce -

\\create a publication

postgres=# CREATE PUBLICATION abc for all tables with (publish='insert');
CREATE PUBLICATION

\\take the plain dump

[centos@centos-cpula bin]$ ./pg_dump -FP -p 5000 postgres  > /tmp/a.a

\\check the syntax

[centos@centos-cpula bin]$ cat /tmp/a.a |grep 'create publication abc' -i
CREATE PUBLICATION abc FOR ALL TABLES WITH (publish = 'insert, , ');

\\try to execute the same syntax against psql terminal

postgres=# CREATE PUBLICATION abc FOR ALL TABLES WITH (publish = 
'insert, , ');

ERROR:  invalid publish list
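The expected dump output here would presumably be just the single value that 
was specified, without the trailing empty list items:

```sql
CREATE PUBLICATION abc FOR ALL TABLES WITH (publish = 'insert');
```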

The same is valid for pg_dumpall as well.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Server Crashes if try to provide slot_name='none' at the time of creating subscription.

2017-05-15 Thread tushar

Hi,

The server crashes if we try to provide slot_name='none' at the time of 
creating a subscription -


postgres=#  create subscription sub2 connection 'dbname=postgres 
port=5000 user=centos password=f' publication abc with (slot_name='none');

NOTICE:  synchronized table states
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] If subscription to foreign table valid ?

2017-05-11 Thread tushar

Hi,

I observed that we cannot publish a "foreign table" in a publication:

postgres=# create foreign table t (n int) server db1_server options 
(table_name 't1');

CREATE FOREIGN TABLE

postgres=# create publication pub for table t;
ERROR:  "t" is not a table
DETAIL:  Only tables can be added to publications.
postgres=#

but the same thing is not true for subscriptions:

postgres=# create foreign table t (n int) server db1_server options 
(table_name 't');

CREATE FOREIGN TABLE
postgres=# alter subscription sub refresh publication ;
NOTICE:  added subscription for table public.t
ALTER SUBSCRIPTION

Is this expected behavior? If we cannot publish it, then how can we 
add a subscription for it?


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] 'nocopy data' option is set in SUBSCRIPTION but still data is getting migrated

2017-05-10 Thread tushar

Hi,

Please refer to this scenario, where the 'nocopy data' option is set on
the subscription but data is still migrated:


Publication - (X)
create table t(n int);
insert into t values (generate_series(1,99));
create publication pub for table  t;

Subscription (Y)
create table t(n int);
CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres host=localhost 
port=5000 user=centos password=a' PUBLICATION pub WITH (copy 
data,SYNCHRONOUS_COMMIT=on);

select count(*) from t;  -> shows 99 rows
alter subscription sub refresh publication with (nocopy data);
restart the server (Y)

X - insert more records into table 't'
Y - check the row count; the rows have been migrated from X.

Is this the right behavior when the 'nocopy data' option is set?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Issues with replication slots(which created manually) against logical replication

2017-05-09 Thread tushar

Hi,

While testing logical replication against v10, I encountered a couple of
issues when logical/physical slots are created manually.


Case 1 - with a logical replication slot (created manually)
Publication Server(X)
\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table/Insert 1 row -> create table test(n int); insert into t 
values (1);

\\create publication for all -> create publication pub for table t;
\\create a logical replication slot; before that, run 'make && make 
install' in the contrib/test_decoding folder
select * from 
pg_create_logical_replication_slot('my_logical','test_decoding');



Subscription Server (Y)
\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table -> create table test(n int);
\\create Subscription , used the existing slot

postgres=#  CREATE SUBSCRIPTION sub CONNECTION 'host=localhost 
dbname=postgres  port=5000 user=centos ' publication pub with (NOCREATE 
SLOT ,Slot name=my_logical);

NOTICE:  synchronized table states
CREATE SUBSCRIPTION

If we check the publication server (X) and the subscription server (Y),
we see this error in the log file:


2017-05-09 10:41:49.570 BST [1809] LOG:  starting logical replication 
worker for subscription "sub"
2017-05-09 10:41:49.579 BST [2346] LOG:  logical replication apply for 
subscription sub started
2017-05-09 10:41:49.588 BST [2346] ERROR:  could not receive data from 
WAL stream: ERROR:  option "proto_version" = "1" is unknown
CONTEXT:  slot "my_logical", output plugin "test_decoding", in the 
startup callback
2017-05-09 10:41:49.589 BST [1801] LOG:  worker process: logical 
replication worker for subscription 16391 (PID 2346) exited with exit code 1


Case 2 - when used with a physical replication slot


Publication Server(X)
\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table/Insert 1 row -> create table test(n int); insert into t 
values (1);

\\create publication for all -> create publication pub for table t;

\\create physical replication slot
postgres=# select * from pg_create_physical_replication_slot('my_test');
 slot_name | wal_position
---+--
 my_test   |
(1 row)


Subscription Server (Y)
\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table -> create table test(n int);
\\create a subscription using the existing slot, which is physical

postgres=# CREATE SUBSCRIPTION sub CONNECTION 'host=localhost 
dbname=postgres  port=5000 user=centos ' publication pub with (NOCREATE 
SLOT ,Slot name=my_test);

NOTICE:  synchronized table states
CREATE SUBSCRIPTION
postgres=#

In the subscription server log file, we get this error:

2017-05-09 10:51:44.037 BST [2738] ERROR:  could not receive data from 
WAL stream: ERROR:  cannot use physical replication slot for logical 
decoding
2017-05-09 10:51:44.038 BST [1801] LOG:  worker process: logical 
replication worker for subscription 16393 (PID 2738) exited with exit code 1


I think we should throw an error while creating the subscription.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Server Crashes if try to provide slot_name='none' at the time of creating subscription.

2017-05-16 Thread tushar

On 05/16/2017 06:35 AM, Masahiko Sawada wrote:

I've updated Kuntal's patch, added regression test for option
combination and updated documentation.
While testing the patch, I found that after dump/restore we get an error
in the log file once we enable the subscription.


\\create subscription

postgres=# CREATE SUBSCRIPTION m1 CONNECTION 'dbname=postgres port=5000 
' PUBLICATION qdd WITH (slot_name='none');

NOTICE:  synchronized table states
CREATE SUBSCRIPTION

\\take the dump
[centos@centos-cpula bin]$ ./pg_dump -Fp  -p 9000 postgres > /tmp/d.c
\\check the syntax
[centos@centos-cpula bin]$ cat /tmp/d.c |grep 'create subsc*' -i
CREATE SUBSCRIPTION m1 CONNECTION 'dbname=postgres port=5000 ' 
PUBLICATION qdd WITH (connect = false, slot_name = '');

\\execute this same syntax against a new database
postgres=# create database  test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "centos".
test=# CREATE SUBSCRIPTION m1 CONNECTION 'dbname=postgres port=5000 ' 
PUBLICATION qdd WITH (connect = false, slot_name = '');
WARNING:  tables were not subscribed, you will have to run ALTER 
SUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables

CREATE SUBSCRIPTION

test=# alter subscription m1 refresh publication ;
ERROR:  ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled 
subscriptions

test=# alter subscription m1 enable ;
ALTER SUBSCRIPTION

Check the message in the log file:

2017-05-16 14:04:48.373 BST [18219] LOG:  logical replication apply for 
subscription m1 started
2017-05-16 14:04:48.381 BST [18219] ERROR:  could not start WAL 
streaming: ERROR:  replication slot name "" is too short
2017-05-16 14:04:48.382 BST [17843] LOG:  worker process: logical 
replication worker for subscription 16386 (PID 18219) exited with exit 
code 1
2017-05-16 14:04:53.388 BST [17850] LOG:  starting logical replication 
worker for subscription "m1"
2017-05-16 14:04:53.396 BST [18224] LOG:  logical replication apply for 
subscription m1 started
2017-05-16 14:04:53.403 BST [18224] ERROR:  could not start WAL 
streaming: ERROR:  replication slot name "" is too short


Is this error message (ERROR:  replication slot name "" is too short)
expected now?


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] synchronous_commit option is not visible after pressing TAB

2017-05-16 Thread tushar

Hi,

While creating a subscription, if we press TAB to see the available
parameters, the synchronous_commit parameter is not visible.


postgres=# CREATE SUBSCRIPTION sub123 CONNECTION 'dbname=postgres 
port=5000' PUBLICATION pub WITH (

CONNECT  COPY_DATA  CREATE_SLOT  ENABLED  SLOT_NAME

The synchronous_commit option is not visible.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] pg_filedump doesn't compile with v10 sources

2017-06-26 Thread tushar

Hi,

While trying to run make for pg_filedump against v10 sources, I get an
error:


[centos@centos-cpula pg_filedump]$ make
cc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic 
-DLINUX_OOM_ADJ=0 -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
-fwrapv  -I/home/centos/pg10_/postgresql/src/include/ pg_filedump.c -c

pg_filedump.c: In function ‘FormatControl’:
pg_filedump.c:1650: error: ‘ControlFileData’ has no member named 
‘enableIntTimes’

make: *** [pg_filedump.o] Error 1
[centos@centos-cpula pg_filedump]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] HINT message for "ALTER SUBSCRIPTION.. WITH" need to change with SET keyword

2017-05-19 Thread tushar

Hi,

There is a small issue in the HINT message we provide when dropping a
subscription: it says WITH (slot_name), which needs to change to
SET (slot_name).


postgres=# drop subscription sub;
ERROR:  could not connect to publisher when attempting to drop the 
replication slot "pub"
DETAIL:  The error was: could not connect to server: No such file or 
directory

Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
HINT:  Use ALTER SUBSCRIPTION ... *WITH* (slot_name = NONE) to 
disassociate the subscription from the slot.


Expected: "HINT:  Use ALTER SUBSCRIPTION ... *SET* (slot_name = NONE)
to disassociate the subscription from the slot."


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



[HACKERS] ALTER SUBSCRIPTION ..SET PUBLICATION refresh is not throwing error.

2017-05-23 Thread tushar

Hi,

ALTER SUBSCRIPTION .. SET PUBLICATION .. SKIP REFRESH removes all the
attached table subscriptions.


X Machine -

s=# create table t(n int);
CREATE TABLE
s=# create table t1(n int);
CREATE TABLE
s=# create publication pub for table  t,t1;
CREATE PUBLICATION
s=#

Y Machine -

s=# create table t(n int);
CREATE TABLE
s=# create table t1(n int);
CREATE TABLE
s=# create subscription s1 connection 'dbname=s port=5432 user=centos 
host=localhost' publication pub;

NOTICE:  synchronized table states
NOTICE:  created replication slot "s1" on publisher
CREATE SUBSCRIPTION

s=# alter subscription s1 set publication  skip refresh ;
NOTICE:  removed subscription for table public.t
NOTICE:  removed subscription for table public.t1
ALTER SUBSCRIPTION
s=#

I think this is probably because no publication NAME was given in the SQL
query.


We do a syntax check for REFRESH but not for SKIP REFRESH:


s=# alter subscription s1 set publication   refresh ;
ERROR:  syntax error at or near ";"
LINE 1: alter subscription s1 set publication   refresh ;
^
s=# alter subscription s1 set publication  skip refresh ;
ALTER SUBSCRIPTION
s=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Getting server crash after running sqlsmith

2017-05-22 Thread tushar

On 03/29/2017 12:06 AM, Tom Lane wrote:

Hm ... I don't see a crash here, but I wonder whether you have parameters
set that would cause this query to be run as a parallel query?  Because
pg_rotate_logfile() is marked as parallel-safe in pg_proc, which seems
probably insane.
Well, I am able to see a crash. Enable logging_collector=on in
postgresql.conf, restart the server, and run the SQL query below 5 or 6
times:


select
  80 as c0,
  pg_catalog.pg_backend_pid() as c1,
  68 as c2,
  subq_1.c0 as c3,
  subq_1.c0 as c4
from
  (select
ref_0.specific_schema as c0
  from
information_schema.role_routine_grants as ref_0,
lateral (select
  ref_0.grantor as c0,
  50 as c1
from
  information_schema.routines as ref_1
where (63 = 86)
  or (pg_catalog.pg_advisory_lock(
  cast(ref_1.result_cast_datetime_precision as 
integer),

  cast(pg_catalog.bttidcmp(
cast(null as tid),
cast(null as tid)) as integer)) is NULL)
limit 143) as subq_0
  where pg_catalog.pg_rotate_logfile() is NULL) as subq_1
where 50 <> 45;

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Getting server crash after running sqlsmith

2017-05-23 Thread tushar

On 05/23/2017 06:25 PM, Robert Haas wrote:

Just out of curiosity, what happens if you try it with the attached patch?

Thanks, the issue seems to be fixed after applying your patch.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] server closed the connection message in the log file of standby while performing logical replication synchronization

2017-05-24 Thread tushar

Hi,

I am getting a 'server closed the connection' message in the standby's
log file after the 'logical replication synchronization worker finished
processing' LOG message:


2017-05-24 08:58:04.451 BST [28496] LOG:  logical replication sync for 
subscription s_5434, table t1039 started
2017-05-24 08:58:05.447 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:05.453 BST [28498] LOG:  logical replication sync for 
subscription s_5434, table t104 started
2017-05-24 08:58:06.449 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:06.455 BST [28500] LOG:  logical replication sync for 
subscription s_5434, table t1040 started
2017-05-24 08:58:07.451 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:07.458 BST [28502] LOG:  logical replication sync for 
subscription s_5434, table t1041 started
2017-05-24 08:58:08.453 BST [28388] LOG:  starting logical replication 
worker for subscription "s_5434"
2017-05-24 08:58:08.461 BST [28504] LOG:  logical replication sync for 
subscription s_5434, table t1042 started
2017-05-24 08:58:23.540 BST [28463] LOG:  logical replication 
synchronization worker finished processing
2017-05-24 08:58:23.550 BST [28461] LOG:  logical replication 
synchronization worker finished processing

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:23.622 BST [28465] ERROR:  error reading result of 
streaming command: server closed the connection unexpectedly

This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:23.627 BST [28340] LOG:  worker process: logical 
replication worker for subscription 31385 sync 19463 (PID 28465) exited 
with exit code 1

server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
2017-05-24 08:59:33.624 BST [28467] ERROR:  error reading result of 
streaming command: server closed the connection unexpectedly

This probably means the server terminated abnormally
before or while processing the request.

Steps to reproduce -

X Cluster -
Modify/add the parameters below in postgresql.conf:
wal_level=logical
logging_collector=on
max_replication_slots = 100
max_worker_processes = 50
max_logical_replication_workers = 35
max_sync_workers_per_subscription = 25
max_wal_senders=50
log_min_duration_statement = 0
port=5432

Start the server

connect to psql
create 5000 tables
create publication -> create publication pub for all tables;

Y Cluster-

Modify/add the parameters below in postgresql.conf:

wal_level=logical
logging_collector=on
max_replication_slots = 100
max_worker_processes = 50
max_logical_replication_workers = 35
max_sync_workers_per_subscription = 25
max_wal_senders=50
log_min_duration_statement = 0
port=5433

Start the server

connect to psql
create 5000 tables
create subscription ->create subscription suv connection 
'dbname=postgres port=5432 user=centos host=localhost' publication pub;


Check the log file on the standby (within 5-10 minutes, the 'server
closed ...' message should appear).
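For anyone re-running this, a small generator script can stand in for the manual "create 5000 tables" step on both clusters. This is my sketch, not part of the original report; the t1..tN naming and single-int-column shape just mirror the tables used elsewhere in this thread.

```python
# Hypothetical helper: emit the DDL for the "create 5000 tables" step as a
# single script that can be fed to psql on both clusters.

def make_ddl(n_tables: int) -> str:
    """Return SQL creating n_tables one-column tables named t1..tN."""
    return "\n".join(f"create table t{i}(n int);"
                     for i in range(1, n_tables + 1))

# Write the script once, then run it on each cluster, e.g.
#   psql -p 5432 -f create_tables.sql postgres
#   psql -p 5433 -f create_tables.sql postgres
print(make_ddl(3))
```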


attaching both log files ( master and standby).

Reproducing this issue is not consistent; out of 5 attempts, I am able to
reproduce it only 2 times.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



files.tar.bz2
Description: application/bzip



Re: [HACKERS] logical replication - still unstable after all these months

2017-05-26 Thread tushar

On 05/26/2017 12:57 PM, Erik Rijkers wrote:
The failure is that in the result state the replicated tables differ 
from the original tables. 

I am also seeing similar behavior.

Master=
run pgbench with scaling factor = 1  (./pgbench -i -s 1 postgres)
delete rows from pgbench_history  ( delete from pgbench_history)
create publication  (create publication pub for table pgbench_history)

Slave=
run pgbench with scaling factor = 1  (./pgbench -i -s 1 postgres -p 5000)
delete rows from pgbench_history  ( delete from pgbench_history)
create subscription (create subscription sub connection 'dbname=postgres 
host=localhost user=centos' publication pub;)


Create a test.sql file containing an insert statement:
[centos@centos-cpula bin]$ cat test.sql
insert into pgbench_history values (1,1,1,1,now(),'anv');

Now run pgbench with the -t/-T, -c, and -j options.
First time: ./pgbench -t 5   -c 90 -j 90  -f test.sql  postgres

The row counts on master/slave are the SAME.

Run a second time:
./pgbench -T 20   -c 90 -j 90  -f test.sql  postgres

Check the row count on master/standby:
Master=
postgres=# select count(*) from pgbench_history ;
 count

 536836
(1 row)

Standby =

postgres=#  select count(*) from pgbench_history ;
  count
-
 1090959
(1 row)
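As a sanity check (my sketch, not from the thread): with a custom script that does exactly one INSERT per transaction, a pgbench run in -t mode performs clients × transactions-per-client inserts, so both sides can be compared against that number; for -T runs, use the "number of transactions actually processed" figure that pgbench prints instead.

```python
# Expected row delta for a pgbench -t run whose script does a fixed number
# of INSERTs per transaction (here, test.sql does one INSERT).

def expected_rows(clients: int, transactions_per_client: int,
                  inserts_per_tx: int = 1) -> int:
    return clients * transactions_per_client * inserts_per_tx

# First run above: ./pgbench -t 5 -c 90 -j 90 -f test.sql
print(expected_rows(90, 5))  # 450 rows should appear on both sides
```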

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table

2017-05-25 Thread tushar

On 05/25/2017 12:44 AM, Petr Jelinek wrote:

There is still outstanding issue that sync worker will keep running
inside the long COPY because the invalidation messages are also not
processed until it finishes but all the original issues reported here
disappear for me with the attached patches applied.
After applying all your patches, drop subscription no longer hangs, but
there is an "ERROR:  tuple concurrently updated" in the standby's log
file.

---
logical replication synchronization worker finished processing
2017-05-25 09:15:52.654 BST [18575] LOG:  logical replication 
synchronization worker finished processing
2017-05-25 09:15:52.656 BST [18563] LOG:  starting logical replication 
worker for subscription "sub"
2017-05-25 09:15:52.662 BST [18577] LOG:  logical replication sync for 
subscription sub, table t14 started
2017-05-25 09:15:53.657 BST [18563] LOG:  starting logical replication 
worker for subscription "sub"
2017-05-25 09:15:53.663 BST [18579] LOG:  logical replication sync for 
subscription sub, table t15 started
2017-05-25 09:15:53.724 BST [18563] FATAL:  terminating logical 
replication worker due to administrator command
2017-05-25 09:15:53.725 BST [18521] LOG:  worker process: logical 
replication worker for subscription 16684 (PID 18563) exited with exit 
code 1

2017-05-25 09:15:54.734 BST [18579] ERROR:  tuple concurrently updated
2017-05-25 09:15:54.735 BST [18577] ERROR:  tuple concurrently updated
2017-05-25 09:15:54.736 BST [18521] LOG:  worker process: logical 
replication worker for subscription 16684 sync 16426 (PID 18579) exited 
with exit code 1
2017-05-25 09:15:54.736 BST [18521] LOG:  worker process: logical 
replication worker for subscription 16684 sync 16423 (PID 18577) exited 
with exit code 1

~
~
~

Steps to reproduce -
X cluster -> create 100 tables, publish all tables (create publication 
pub for all tables;)
Y cluster -> create 100 tables, create a subscription (create subscription 
sub connection 'user=centos host=localhost' publication pub;)

Y cluster ->drop subscription - drop subscription sub;

check the log file on Y cluster.

Sometimes I have seen this error at the psql prompt, and the drop
subscription operation failed on the first attempt:


postgres=# drop subscription sub;
ERROR:  tuple concurrently updated
postgres=# drop subscription sub;
NOTICE:  dropped replication slot "sub" on publisher
DROP SUBSCRIPTION

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

On 05/22/2017 05:31 PM, Tom Lane wrote:

Do we have a prohibition against publishing/subscribing anything
in pg_catalog?

Yes.

postgres=# create publication pub for table pg_catalog.pg_AM;
ERROR:  "pg_am" is a system table
DETAIL:  System tables cannot be added to publications.
postgres=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

On 05/22/2017 05:12 PM, Kuntal Ghosh wrote:

pg_dump ignores anything created under object name "pg_*" or
"information_schema".
In the scenario below, I can see pg_dump catching the information of a
table created under information_schema:


postgres=# create database  ntest;
CREATE DATABASE
postgres=# \c ntest
You are now connected to database "ntest" as user "centos".
ntest=# create table information_schema.abc(n int);
CREATE TABLE
ntest=# create   view e1  as select * from information_schema.abc;
CREATE VIEW

[centos@centos-cpula regress]$ pg_dump -Fp  ntest > /tmp/a.a

cat /tmp/a.a

SET search_path = public, pg_catalog;

--
-- Name: e1; Type: VIEW; Schema: public; Owner: centos
--

CREATE VIEW e1 AS
 SELECT abc.n
   FROM information_schema.abc;


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] "create publication..all tables" ignore 'partition not supported' error

2017-05-22 Thread tushar

Hi,

I observed that "create publication .. all tables" ignores the 'partition
not supported' error:


\\create a partition table

You are now connected to database "s" as user "centos".
s=# CREATE TABLE measurement (
s(# city_id int not null,
s(# logdate date not null,
s(# peaktemp int,
s(# unitsales   int
s(# ) PARTITION BY RANGE (logdate);
CREATE TABLE
s=#

\\try to publish only this table

s=# create publication p for table  measurement;
ERROR:  "measurement" is a partitioned table
DETAIL:  Adding partitioned tables to publications is not supported.
HINT:  You can add the table partitions individually.

\\try to create publication for all tables
s=# create publication p for all tables ;
CREATE PUBLICATION
s=# \d+ measurement
 Table "public.measurement"
  Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
-----------+---------+-----------+----------+---------+---------+--------------+-------------
 city_id   | integer |           | not null |         | plain   |              |
 logdate   | date    |           | not null |         | plain   |              |
 peaktemp  | integer |           |          |         | plain   |              |
 unitsales | integer |           |          |         | plain   |              |

Partition key: RANGE (logdate)
Publications:
"p"

Publication 'p' has been set against a partitioned table, which is not
supported.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

Hi,

pg_dump ignores tables created under the information_schema schema for
CREATE PUBLICATION:


postgres=# create database  test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "centos".
test=# create table information_schema.abc(n int);
CREATE TABLE
test=# create publication test for table information_schema.abc;
CREATE PUBLICATION
test=# select * from pg_publication_tables;
 pubname | schemaname | tablename
-++---
 test| information_schema | abc
(1 row)

test=# \q
[centos@centos-cpula regress]$ pg_dump -Fp  test > /tmp/a.a
[centos@centos-cpula regress]$ cat /tmp/a.a|grep publication -i
-- Name: test; Type: PUBLICATION; Schema: -; Owner: centos
CREATE PUBLICATION test WITH (publish = 'insert, update, delete');
ALTER PUBLICATION test OWNER TO centos;
[centos@centos-cpula regress]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Alter subscription..SET - NOTICE message is coming for table which is already removed

2017-05-25 Thread tushar

On 05/25/2017 04:40 PM, Masahiko Sawada wrote:

I think you did ALTER SUBSCRIPTION while table sync for 100 tables is
running, right?

Yes, I didn't wait long while executing the commands.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Alter subscription..SET - NOTICE message is coming for table which is already removed

2017-05-25 Thread tushar

Hi,

While performing ALTER SUBSCRIPTION .. SET, I found that a NOTICE message
appears again the next time, even though it is no longer applicable.


X cluster=
create 100 tables
create publication ( create publication pub for all tables;)

Y cluster=
create 100 tables
create subscription (create subscription sub connection 
'dbname=postgres user=centos host=localhost' publication pub;)


X cluster =
create 1 more table (create table r(n int));
create publication for this above table only (create publication pub1 
for table r;)


Y cluster=
create table r - create table r(n int);
alter the subscription - alter subscription sub set publication pub1 refresh;

In the NOTICE messages, table 'r' is added and the 100 tables are removed
from the subscription:


postgres=# alter subscription sub set publication pub1 refresh;
NOTICE:  added subscription for table public.r
NOTICE:  removed subscription for table public.t1
NOTICE:  removed subscription for table public.t2
NOTICE:  removed subscription for table public.t3
NOTICE:  removed subscription for table public.t4
NOTICE:  removed subscription for table public.t5
NOTICE:  removed subscription for table public.t6
--
--
--
ALTER SUBSCRIPTION

Now run the same SQL query again:

postgres=# alter subscription sub set publication pub1 refresh;
NOTICE:  removed subscription for table public.t78
ALTER SUBSCRIPTION

This NOTICE message should not appear, as t78 was already removed by the
earlier run of the same command.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Re: Alter subscription..SET - NOTICE message is coming for table which is already removed

2017-05-25 Thread tushar

On 05/25/2017 03:38 PM, tushar wrote:
While performing - Alter subscription..SET  , I found that NOTICE 
message is coming duplicate next time , which  is not needed anymore. 
There is another example, where I am getting "ERROR: subscription 
table 16435 in subscription 16684 does not exist" in the standby log file:


2017-05-25 13:51:48.825 BST [32138] NOTICE:  removed subscription for 
table public.t96
2017-05-25 13:51:48.825 BST [32138] NOTICE:  removed subscription for 
table public.t97
2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for 
table public.t98
2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for 
table public.t99
2017-05-25 13:51:48.826 BST [32138] NOTICE:  removed subscription for 
table public.t100
2017-05-25 13:51:48.827 BST [32138] LOG:  duration: 35.404 ms statement: 
alter subscription c1 set publication p1 refresh;
2017-05-25 13:51:49.192 BST [32347] LOG:  starting logical replication 
worker for subscription "c1"
2017-05-25 13:51:49.198 BST [32368] LOG:  logical replication sync for 
subscription c1, table t16 started
2017-05-25 13:51:49.198 BST [32368] ERROR:  subscription table 16429 in 
subscription 16684 does not exist
2017-05-25 13:51:49.199 BST [32347] LOG:  starting logical replication 
worker for subscription "c1"
2017-05-25 13:51:49.200 BST [32065] LOG:  worker process: logical 
replication worker for subscription 16684 sync 16429 (PID 32368) exited 
with exit code 1
2017-05-25 13:51:49.204 BST [32369] LOG:  logical replication sync for 
subscription c1, table t17 started
2017-05-25 13:51:49.204 BST [32369] ERROR:  subscription table 16432 in 
subscription 16684 does not exist
2017-05-25 13:51:49.205 BST [32347] LOG:  starting logical replication 
worker for subscription "c1"
2017-05-25 13:51:49.205 BST [32065] LOG:  worker process: logical 
replication worker for subscription 16684 sync 16432 (PID 32369) exited 
with exit code 1
2017-05-25 13:51:49.209 BST [32370] LOG:  logical replication sync for 
subscription c1, table t18 started
2017-05-25 13:51:49.209 BST [32370] ERROR:  subscription table 16435 in 
subscription 16684 does not exist
2017-05-25 13:51:49.210 BST [32347] LOG:  starting logical replication 
worker for subscription "c1"
2017-05-25 13:51:49.210 BST [32065] LOG:  worker process: logical 
replication worker for subscription 16684 sync 16435 (PID 32370) exited 
with exit code 1
2017-05-25 13:51:49.213 BST [32371] LOG:  logical replication sync for 
subscription c1, table t19 started
2017-05-25 13:51:49.213 BST [32371] ERROR:  subscription table 16438 in 
subscription 16684 does not exist
2017-05-25 13:51:49.214 BST [32347] LOG:  starting logical replication 
worker for subscription "c1"



Steps to reproduce -
X cluster ->
create 100 tables, publish all tables (create publication pub for table 
t1,t2,...,t100;)
create one more table (create table t101(n int);), then create a 
publication for only that table (create publication p1 for table t101;)


Y Cluster ->
create subscription (create subscription c1 connection 'host=localhost  
port=5432 user=centos ' publication pub;

alter subscription c1 set publication p1 refresh;
alter subscription c1 set publication pub refresh;
alter subscription c1 set publication p1 refresh;

check the log file.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] No parameter values checking while creating Alter subscription...Connection

2017-05-25 Thread tushar

Hi,

We usually validate connection parameter values at the time of CREATE 
SUBSCRIPTION


\\port is WRONG

postgres=# create subscription c1 connection 'port=4000 ' publication pub;
ERROR:  could not connect to the publisher: could not connect to server: 
No such file or directory

Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.4000"?
postgres=#

\\when database doesn't exist

postgres=# create subscription c1 connection 'dbname=postgre ' 
publication pub;
ERROR:  could not connect to the publisher: FATAL:  database "postgre" 
does not exist

postgres=#

but no such check is performed at the time of ALTER SUBSCRIPTION .. 
CONNECTION


postgres=# alter subscription c1 connection 'port=4000';
ALTER SUBSCRIPTION
postgres=# alter subscription c1 connection 'dbname=cc';
ALTER SUBSCRIPTION

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_resetwal is broken if run from v10 against older version of PG data directory

2017-05-29 Thread tushar

On 05/29/2017 03:10 PM, Amit Kapila wrote:

  What makes you think above is a valid usage and should
pass?
With earlier versions, for instance v9.6 vs. v9.5, pg_resetxlog was still 
printing guessed pg_control values.


I installed v9.6 and v9.5 and ran the v9.6 pg_resetxlog against the v9.5 
data directory:


[centos@centos-cpula ~]$ /tmp/pg9.6/bin/pg_resetxlog -D /tmp/pg9.5/bin/data/
pg_resetxlog: pg_control exists but is broken or unknown version; 
ignoring it

Guessed pg_control values:

pg_control version number:960
Catalog version number:   201608131
Database system identifier:   6425491233437069295
Latest checkpoint's TimeLineID:   1
Latest checkpoint's full_page_writes: off
Latest checkpoint's NextXID:  0:3
Latest checkpoint's NextOID:  1
Latest checkpoint's NextMultiXactId:  1
Latest checkpoint's NextMultiOffset:  0
Latest checkpoint's oldestXID:3
Latest checkpoint's oldestXID's DB:   0
Latest checkpoint's oldestActiveXID:  0
Latest checkpoint's oldestMultiXid:   1
Latest checkpoint's oldestMulti's DB: 0
Latest checkpoint's oldestCommitTsXid:0
Latest checkpoint's newestCommitTsXid:0
Maximum data alignment:   8
Database block size:  8192
Blocks per segment of large relation: 131072
WAL block size:   8192
Bytes per WAL segment:16777216
Maximum length of identifiers:64
Maximum columns in an index:  32
Maximum size of a TOAST chunk:1996
Size of a large-object chunk: 2048
Date/time type storage:   64-bit integers
Float4 argument passing:  by value
Float8 argument passing:  by value
Data page checksum version:   0


Values to be changed:

First log segment after reset:00010002

If these values seem acceptable, use -f to force reset.
[centos@centos-cpula ~]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] pg_resetwal is broken if run from v10 against older version of PG data directory

2017-05-29 Thread tushar

Hi,

I have installed PG v9.6 and v9.5. If we run pg_resetwal from the v10 
binaries against a v9.6/v9.5 data directory, we get this error -


[centos@centos-cpula bin]$ ./pg_resetwal -D /tmp/pg9.6/bin/data/
pg_resetwal: pg_control exists but is broken or unknown version; ignoring it
pg_resetwal: could not open directory "pg_wal": No such file or directory
[centos@centos-cpula bin]$

Steps to reproduce-
installed PG v9.6
installed PG v10
go to the bin directory of v10 and run pg_resetwal, passing -D pointing at 
the v9.6 data directory.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Error while creating subscription when server is running in single user mode

2017-05-31 Thread tushar

Hi,

There is an error while creating a subscription when the server is running 
in single-user mode -


[centos@centos-cpula bin]$ ./postgres --single postgres -D m1data
PostgreSQL stand-alone backend 10beta1
backend> create subscription sub connection 'dbname=postgres port=5433 
user=centos' publication p with (create_slot=0,enabled=off);
2017-05-31 12:53:09.318 BST [10469] LOG:  statement: create subscription 
sub connection 'dbname=postgres port=5433 user=centos' publication p 
with (create_slot=0,enabled=off);


2017-05-31 12:53:09.326 BST [10469] ERROR:  epoll_ctl() failed: Bad file 
descriptor


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] statement_timeout is not working as expected with postgres_fdw

2017-05-04 Thread tushar

On 05/04/2017 08:01 AM, Robert Haas wrote:

Patch attached.


I tried this at my end after applying the patch against PG HEAD.

Case 1 - without setting statement_timeout, i.e. the default

X machine -
create table test1(a int);

Y machine -
CREATE SERVER myserver_ppas FOREIGN DATA WRAPPER postgres_fdw OPTIONS 
(host 'X', dbname 'postgres', port '5432', connect_timeout '3');
CREATE USER MAPPING FOR centos SERVER myserver_ppas OPTIONS (user 
'centos', password 'adminedb');
create foreign table ft_test_ppas (a  int ) server myserver_ppas options 
(table_name 'test1');

set statement_timeout = 0;
\timing
insert  into ft_test_ppas  values  (generate_series(1,1000));

X machine-
disconnect network

Y machine -
postgres=# insert  into ft_test_ppas  values (generate_series(1,1000));
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
^CCancel request sent
WARNING:  could not send cancel request: PQcancel() -- connect() failed: 
Connection timed out


ERROR:  canceling statement due to user request
Time: 81073.872 ms (01:21.074)

Case 2- when statement_timeout=6000

Y machine -
CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'X', dbname 'postgres', port '5432',keepalives '1', keepalives_interval 
'3',keepalives_idle '3', keepalives_count '1');
CREATE USER MAPPING FOR centos SERVER myserver OPTIONS (user 'centos', 
password 'adminedb');
create foreign table ft_test_ppas1 (a  int ) server myserver options 
(table_name 'test1');

set statement_timeout=6000;
\timing
insert  into ft_test_ppas1  values  (generate_series(1,1000));

X machine-
disconnect network

Y machine
postgres=# insert  into ft_test_ppas1  values 
(generate_series(1,1000));
WARNING:  could not send cancel request: PQcancel() -- connect() failed: 
Connection timed out


ERROR:  canceling statement due to statement timeout
Time: 69009.875 ms (01:09.010)
postgres=#

Case 3-when statement_timeout=2

Y machine -
CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'X', dbname 'postgres', port '5432',keepalives '1', keepalives_interval 
'3',keepalives_idle '3', keepalives_count '1');
CREATE USER MAPPING FOR centos SERVER myserver OPTIONS (user 'centos', 
password 'adminedb');
create foreign table ft_test_ppas1 (a  int ) server myserver options 
(table_name 'test1');

set statement_timeout=2;
\timing
insert  into ft_test_ppas1  values  (generate_series(1,1000));

X machine-
disconnect network

Y machine -
postgres=# insert  into ft_test_ppas1  values 
(generate_series(1,1000));
WARNING:  could not send cancel request: PQcancel() -- connect() failed: 
Connection timed out

ERROR:  canceling statement due to statement timeout
Time: 83014.503 ms (01:23.015)

We can see that statement_timeout is working, but it takes some extra 
time; I am not sure whether this is expected behavior in the above case.
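A back-of-the-envelope look at the timings above (a sketch only; the 
interpretation that the ~63 s overhead is the cancel request's connect() 
timing out is a guess based on the PQcancel() WARNING, not something 
confirmed in this thread):

```python
# Overhead beyond statement_timeout in the two remote-server runs above.
# In both cases the extra time is roughly the same (~63 s), consistent
# with the cancel request's connect() itself blocking until the TCP
# connection attempt timed out.
cases = {
    "statement_timeout=6000":  (6.000, 69.010),    # (timeout s, total s)
    "statement_timeout=20000": (20.000, 83.015),
}
for name, (timeout_s, total_s) in cases.items():
    extra = total_s - timeout_s
    print(f"{name}: {total_s:.3f}s total, {extra:.3f}s beyond the timeout")
```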


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] statement_timeout is not working as expected with postgres_fdw

2017-05-04 Thread tushar

On 05/04/2017 03:53 PM, tushar wrote:
We can see statement_timeout is working but it is taking some extra 
time,not sure this is an expected behavior in above case or not. 
This happens only when a remote server is involved. When both servers are 
on the same machine, it works as expected.


d1=# CREATE SERVER myserver_ppas FOREIGN DATA WRAPPER postgres_fdw 
OPTIONS (host 'localhost', dbname 'postgres', port '5432', 
connect_timeout '3');

CREATE SERVER
d1=# CREATE USER MAPPING FOR centos SERVER myserver_ppas OPTIONS (user 
'centos', password 'adminedb');

CREATE USER MAPPING
d1=# create foreign table ft_test_ppas (a  int ) server myserver_ppas 
options (table_name 'test1');

CREATE FOREIGN TABLE
d1=#
d1=# insert into ft_test_ppas values (1);
INSERT 0 1
Case 1-
d1=# \timing
Timing is on.
d1=# set statement_timeout =6000;
SET
Time: 0.360 ms
d1=# insert  into ft_test_ppas  values (generate_series(1,1000));
ERROR:  canceling statement due to statement timeout
Time: 6002.509 ms (00:06.003)
d1=#
Case 2 -
d1=# set statement_timeout =2;
SET
Time: 0.693 ms
d1=# insert  into ft_test_ppas  values (generate_series(1,1000));
ERROR:  canceling statement due to statement timeout
Time: 20001.741 ms (00:20.002)
d1=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Not getting error if ALTER SUBSCRIPTION syntax is wrong.

2017-05-05 Thread tushar

Hi,

While testing 'logical replication' against v10, I encountered an issue 
where data stopped migrating after ALTER SUBSCRIPTION.


X Server
\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table/insert 1 row -> create table test(n int); insert into test 
values (1);

\\create publication for all -> create publication pub for ALL TABLES ;


Y server

\\ Make sure wal_level is set to logical in postgresql.conf file
\\create table -> create table test(n int);

\\create Subscription

CREATE SUBSCRIPTION sub CONNECTION 'host=localhost dbname=postgres 
port=5432 ' PUBLICATION pub;


postgres=# select * from test;
 n
---
 1
(1 row)

\\Alter subscription
postgres=# alter subscription sub connection 'host=localhost 
dbname=postgres PUBLICATION pub';

ALTER SUBSCRIPTION

X server
postgres=# insert into test values (1);
INSERT 0 1
postgres=# select * from test;
 n
---
 1
 1
(2 rows)

Y server
postgres=# select * from test;
 n
---
 1
(1 row)

I think the syntax of that ALTER SUBSCRIPTION is not correct - the closing 
quote comes only after 'PUBLICATION pub', so the whole string is treated 
as the connection string - but surprisingly it does not throw an error.
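A toy connection-string parser (hypothetical and simplified; PostgreSQL's 
real validation is libpq's PQconninfoParse plus an actual connection 
attempt, and the keyword set below is an assumed subset) shows that even a 
keyword-level check would flag the mis-quoted string:

```python
# Simplified sketch of conninfo keyword validation (assumed subset of
# libpq's keywords; real parsing also handles quoting and URI syntax).
KNOWN_KEYWORDS = {"host", "hostaddr", "port", "dbname", "user",
                  "password", "connect_timeout", "sslmode"}

def parse_conninfo(conninfo: str) -> dict:
    """Parse 'key=value' pairs, rejecting malformed or unknown keywords."""
    params = {}
    for token in conninfo.split():
        if "=" not in token:
            raise ValueError(f"missing '=' in {token!r}")
        key, _, value = token.partition("=")
        if key not in KNOWN_KEYWORDS:
            raise ValueError(f"invalid connection option {key!r}")
        params[key] = value
    return params

print(parse_conninfo("dbname=postgres port=5432"))
# The mis-quoted string from the example above fails immediately:
try:
    parse_conninfo("host=localhost dbname=postgres PUBLICATION pub")
except ValueError as e:
    print("rejected:", e)
```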


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] parallel worker (PID ) exited with exit code 1

2017-10-06 Thread tushar

Hi,

I got a few queries after running sqlsmith against PG HEAD for which I am 
getting a LOG message like - "parallel worker (PID) exited with exit 
code 1"


set force_parallel_mode =1;
 select
  pg_catalog.pg_wal_replay_pause() as c0,
  ref_0.ev_type as c1
    from
  pg_catalog.pg_rewrite as ref_0
    where ref_0.ev_enabled > ref_0.ev_type
    limit 53;

2017-10-06 13:15:34.785 BST [5680] LOG:  background worker "parallel 
worker" (PID 5964) exited with exit code 1

ERROR:  recovery is not in progress
HINT:  Recovery control functions can only be executed during recovery.
CONTEXT:  parallel worker

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] parallelize queries containing initplans

2017-10-11 Thread tushar

On 10/09/2017 03:26 PM, Amit Kapila wrote:

I have reverted the check
in the attached patch.


I have applied this patch against PG HEAD, run sqlsmith, and analyzed the 
results; I didn't find any specific failures against this patch.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] Query started showing wrong result after Ctrl+c

2017-10-12 Thread tushar

Hi,

Steps to reproduce -

\\ PG HEAD / PG v10  sources . Connect to psql terminal -  create these 
following object


create table tv(n int,n1 char(100));
insert into tv values (generate_series(1,100),'aaa');
insert into tv values (generate_series(1,100),'a');
analyze tv;
vacuum tv;

\\1st  query

postgres=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * 
from (select n from tv limit 1) c)) as c  ;

 n
---
 1
(1 row)

\\2nd query
postgres=# SELECT  *  FROM ( SELECT n   from  tv  where n!=ALL (select * 
from (select n from tv) c)) as c  ;  (the query was taking time, so I 
pressed Ctrl-C)
^C2017-10-12 10:54:49.004 BST [9073] ERROR:  canceling statement due to 
user request
2017-10-12 10:54:49.004 BST [9073] STATEMENT:  SELECT  *  FROM ( SELECT 
n   from  tv  where n!=ALL (select * from (select n from tv) c)) as c  ;
2017-10-12 10:54:49.004 BST [9129] FATAL:  terminating connection due to 
administrator command
2017-10-12 10:54:49.004 BST [9129] STATEMENT:  SELECT  *  FROM ( SELECT 
n   from  tv  where n!=ALL (select * from (select n from tv) c)) as c  ;
2017-10-12 10:54:49.004 BST [9130] FATAL:  terminating connection due to 
administrator command
2017-10-12 10:54:49.004 BST [9130] STATEMENT:  SELECT  *  FROM ( SELECT 
n   from  tv  where n!=ALL (select * from (select n from tv) c)) as c  ;

Cancel request sent
2017-10-12 10:54:49.005 BST [9058] LOG:  background worker "parallel 
worker" (PID 9129) exited with exit code 1
2017-10-12 10:54:49.005 BST [9058] LOG:  background worker "parallel 
worker" (PID 9130) exited with exit code 1

ERROR:  canceling statement due to user request

\\again fired 1st query

postgres=# vacuum ANALYZE tv;
VACUUM
postgres=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * 
from (select n from tv limit 1) c)) as c  ;

  n
--
 3713
(1 row)

This time, the query started showing a wrong result. Is this expected 
behavior, and if yes, how does one get the correct result?


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Query started showing wrong result after Ctrl+c

2017-10-12 Thread tushar

On 10/12/2017 03:46 PM, Marko Tiikkaja wrote:

The subquery:

    select n from tv limit 1

could in theory return any row due to the lack of ORDER BY. What I'm 
guessing happened is that you're seeing a synchronized sequential scan 
in follow-up queries.  Add an ORDER BY.


Bang on. After adding an ORDER BY clause, I get the same result 
consistently. But why did I get a different result only after canceling 
the query?


test=# \c f2
You are now connected to database "f2" as user "centos".
f2=# create table tv(n int,n1 char(100));
CREATE TABLE
f2=# insert into tv values (generate_series(1,100),'aaa');
INSERT 0 100
f2=# insert into tv values (generate_series(1,100),'a');
INSERT 0 990001
f2=# analyze tv;
ANALYZE
f2=# vacuum tv;
VACUUM
f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

 n
---
 1
(1 row)

f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

 n
---
 1
(1 row)

f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

 n
---
 1
(1 row)

f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

 n
---
 1
(1 row)

f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

 n
---
 1
(1 row)

f2=#

Even after restarting the server, I get the same result.

But after canceling the operation, the next result comes out different:

f2=# SELECT  *  FROM ( SELECT n   from  tv  where n!=ALL (select * from 
(select n from tv) c)) as c  ;

^CCancel request sent
ERROR:  canceling statement due to user request
f2=# SELECT  *  FROM ( SELECT n   from  tv  where n= (select * from 
(select n from tv  limit 1) c)) as c  ;

  n
--
 3713
(1 row)
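Marko's explanation can be illustrated with a toy model (a hypothetical 
sketch, not PostgreSQL internals): with synchronized sequential scans, a 
new scan may start from wherever a previous scan left off, so "limit 1" 
without ORDER BY need not return the physically first row.

```python
# Toy model of synchronized sequential scans: each scan starts at a
# shared position hint left behind by the previous scan and wraps
# around the table.
class Table:
    def __init__(self, rows):
        self.rows = rows
        self.scan_start = 0   # shared start hint, updated by each scan

    def seq_scan(self, stop_after=None):
        n = len(self.rows)
        out = []
        for i in range(n):
            pos = (self.scan_start + i) % n
            out.append(self.rows[pos])
            if stop_after is not None and len(out) == stop_after:
                # an interrupted/limited scan leaves the hint mid-table
                self.scan_start = (pos + 1) % n
                return out
        self.scan_start = 0
        return out

tv = Table(list(range(1, 11)))
print(tv.seq_scan(stop_after=1))   # first "limit 1" sees row 1
tv.seq_scan(stop_after=4)          # a canceled scan stops mid-table
print(tv.seq_scan(stop_after=1))   # now "limit 1" sees a later row
print(sorted(tv.seq_scan()))       # an ORDER BY-style sort is stable
```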

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] SendRowDescriptionMessage() is slow for queries with a lot of columns

2017-09-29 Thread tushar

On 09/27/2017 10:50 PM, Andres Freund wrote:

This'll allow the later patches to allow the compiler to perform the
relevant optimizations. It also allows to optimize e.g. pq_sendint64()
to avoid having to do multiple byteswaps.


After applying all the required patches, I was able to see some performance gain.

Virtual Machine configuration - Centos 6.5 x64 / 16 GB RAM / 8 VCPU core 
processor


./pgbench -M prepared -j 10 -c 10 -f /tmp/pgbench-many-cols.sql postgres 
-T TIME


After taking the median of 3 runs -

Case 1 – TIME=300

PG HEAD =>41285.089261 (excluding connections establishing)
PG HEAD+patch =>tps= 42446.626947(2.81+% vs. head)

Case 2- TIME=500

PG HEAD =>tps = 41252.897670 (excluding connections establishing)
PG HEAD+patch =>tps= 42257.439550(2.43+% vs. head)

Case 3- TIME=1000

PG HEAD =>tps = 1061.031463 (excluding connections establishing)
PG HEAD+patch => tps= 8011.784839(3.30+% vs. head)

Case 4-TIME=1500

PG HEAD =>tps = 40365.099628 (excluding connections establishing)
PG HEAD+patch =>tps= 42385.372848(5.00+% vs. head)
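For reference, the "% vs. head" figures above are the relative throughput 
gain of the patched build; the arithmetic for Case 1 is:

```python
# How the "% vs. head" figures are derived (Case 1 numbers from above).
def pct_gain(head_tps: float, patched_tps: float) -> float:
    """Relative throughput improvement of the patched build, in percent."""
    return (patched_tps - head_tps) / head_tps * 100.0

gain = pct_gain(41285.089261, 42446.626947)
print(f"Case 1: {gain:.2f}% vs. head")   # -> Case 1: 2.81% vs. head
```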

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [HACKERS] Binary search in fmgr_isbuiltin() is a bottleneck.

2017-09-28 Thread tushar

On 09/27/2017 05:29 PM, tushar wrote:
After discussion with Jeevan Ladhe, we created a SQL query which contains 
lots of built-in functions and tested it with pgbench, master vs. patch, 
and found an improvement


I tested it again and found around a +2% improvement

./pgbench -c 8 -j 8 -f /tmp/mytest.sql -T TIME


After taking the median of 3 runs -

Case 1 – TIME=300

PG HEAD =>tps = 7831.999245 (excluding connections establishing)
PG HEAD+patch =>tps= 8008.895177 (2.26+% vs. head)

Case 2- TIME=500

PG HEAD =>tps = 7817.781756 (excluding connections establishing)
PG HEAD+patch =>tps= 8050.410040(2.98+% vs. head)

Case 3- TIME=1000

PG HEAD =>tps = 7817.173640 (excluding connections establishing)
PG HEAD+patch => tps= 8011.784839(2.48+% vs. head)

Case 4-TIME=1500

PG HEAD =>tps = 7764.607133 (excluding connections establishing)
PG HEAD+patch =>tps= 8013.421628(3.20+% vs. head)
--

regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [HACKERS] Improve catcache/syscache performance.

2017-09-26 Thread tushar

On 09/22/2017 11:45 AM, Andres Freund wrote:

Here's a variant that cleans up the previous changes a bit, and adds
some further improvements:


I tested different pgbench options with master vs. patch and found an 
improvement. I applied the 0001 and 0003 patches on PG HEAD; patch 0002 
was already committed.

Virtual Machine configuration - Centos 6.5 x64 / 16 GB RAM / 8 VCPU core 
processor

Scaling factor=30

pgbench -M prepared -T 200 postgres

PG Head   -  tps = 902.225954 (excluding connections establishing).
PG HEAD+patch -  tps = 1001.896381 (10.97+% vs. head)


pgbench -M prepared -T 300 postgres

PG Head   -  tps = 920.108333 (excluding connections establishing).
PG HEAD+patch -  tps = 1023.89542 (11.19+% vs. head)

pgbench -M prepared -T 500 postgres

PG Head   -  tps = 995.178227 (excluding connections establishing)
PG HEAD+patch -  tps = 1078.3 (+8.34% vs. head)


Later I modified the create_many_cols.sql file (previously attached): 
instead of using only int columns, I mixed in varchar/int4/numeric/float 
and ran pgbench for different durations.


pgbench -M prepared -f /tmp/pgbench-many-cols.sql -T 300  postgres

PG Head   -  tps =  5540.143877 (excluding connections establishing).
PG HEAD+patch -  tps =  5679.713493 (2.50+% vs. head)


pgbench -M prepared -f /tmp/pgbench-many-cols.sql -T 500  postgres

PG Head   -  tps = 5519.212709 (excluding connections establishing).
PG HEAD+patch -  tps = 5967.059155 (8.11+% vs. head)


pgbench -M prepared -f /tmp/pgbench-many-cols.sql -T 700  postgres

PG Head   -  tps = 5640.314495(excluding connections establishing).
PG HEAD+patch -  tps = 6012.223147 (6.59+% vs. head)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company







Re: [HACKERS] Binary search in fmgr_isbuiltin() is a bottleneck.

2017-09-27 Thread tushar

On 09/14/2017 12:21 PM, Andres Freund wrote:

Hi,

Surprising myself I discovered that in workloads that do a large number
of fmgr_info* lookups, fmgr_isbuiltin() is actually quite the
bottleneck.

After discussion with Jeevan Ladhe, we created a SQL query which contains 
lots of built-in functions and tested it with pgbench, master vs. patch, 
and found an improvement


Virtual Machine configuration - Centos 6.5 x64 / 16 GB RAM / 8 VCPU core 
processor



pgbench -c 8 -j 8 -f /tmp/mytest.sql -T 300 postgres

PG Head -   tps = 5309.810807 (excluding connections 
establishing).

PG HEAD+patch -  tps =  5751.745767(8.32+% vs. head)


pgbench -c 8 -j 8 -f /tmp/mytest.sql   -T 500 postgres

PG Head -  tps = 7701.176220(excluding connections 
establishing).

PG HEAD+patch -  tps = 7953.934043(3.27+% vs. head)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



mytest.sql
Description: application/sql



Re: [HACKERS] parallelize queries containing initplans

2017-10-30 Thread tushar

On 10/30/2017 09:02 AM, Amit Kapila wrote:

Thanks a lot Tushar for testing this patch.  In the latest patch, I
have just rebased some comments, there is no code change, so I don't
expect any change in behavior, but feel free to test it once again.


Thanks Amit. Sure.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] parallelize queries containing initplans

2017-10-31 Thread tushar

On 10/30/2017 01:36 PM, tushar wrote:

On 10/30/2017 09:02 AM, Amit Kapila wrote:

Thanks a lot Tushar for testing this patch.  In the latest patch, I
have just rebased some comments, there is no code change, so I don't
expect any change in behavior, but feel free to test it once again.


Thanks Amit. Sure.


I have done re-verification ,Everything looks good.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company


