Re: [HACKERS] Allow pg_dumpall to work without pg_authid

2017-03-15 Thread Sachin Kotwal
Hi Stephen,

Thanks. I understand this is a small but new feature, not a bug fix.
Still, we should be able to back-patch it if there is no dependency.

It would help users get the benefit of this feature on pg9.6 and pg9.5 in RDS
until pg10 is available in RDS.


If it is against community policy, then that is OK; I understand.
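
For reference, on pg10 (where the patch landed) the invocation should look
something like this, assuming the option name from the committed patch:

pg_dumpall --globals-only --no-role-passwords > globals.sql

That dumps role and tablespace definitions without reading pg_authid, which is
what RDS-like environments need.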


Regards,
Sachin

On Wed, Mar 15, 2017 at 6:49 PM, Stephen Frost <sfr...@snowman.net> wrote:

> Greetings,
>
> * Sachin Kotwal (kotsac...@gmail.com) wrote:
> > Can we back-patch this patch to PostgreSQL 9.6 and earlier releases?
>
> No.  This is a new feature and new features are not back-patched.
>
> Thanks!
>
> Stephen
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] Allow pg_dumpall to work without pg_authid

2017-03-15 Thread Sachin Kotwal
Thanks for this nice patch; it is quite relevant to AWS RDS.

Can we back-patch this patch to PostgreSQL 9.6 and earlier releases?



Regards,
Sachin

On Sun, Mar 5, 2017 at 12:30 PM, Simon Riggs <si...@2ndquadrant.com> wrote:

> On 28 February 2017 at 17:49, Simon Riggs <si...@2ndquadrant.com> wrote:
>
> > I've edited the stated reason for the patch on the CF app, so its
> > clearer as to why this might be acceptable.
>
> Robins,
>
> I'm looking to commit the patch version I posted, so I would like your
> comments that it does continue to solve the problems you raised.
>
> Thanks
>
> --
> Simon Riggs    http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] Why postgres take RowExclusiveLock on all partition

2016-09-16 Thread Sachin Kotwal
Hi Tom,

What I understood from
https://www.postgresql.org/docs/9.5/static/explicit-locking.html#TABLE-LOCK-COMPATIBILITY
is:

RowExclusiveLock conflicts with queries that want SHARE, SHARE ROW EXCLUSIVE,
EXCLUSIVE, and ACCESS EXCLUSIVE locks.

In one of our customer environments we run DDL operations every day through
cron jobs. These cron jobs get blocked by the RowExclusiveLock taken by an
UPDATE query, and then many more queries end up waiting behind the cron job,
because the SQL run by the cron job holds ACCESS EXCLUSIVE locks on tables
that other SELECT queries also need.
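
As a sketch of the blocking pattern (table names match the test case shown
later in this thread):

-- Session 1: an UPDATE holds RowExclusiveLock on t1 and its children
-- until commit
BEGIN;
UPDATE t1 SET b = 555 WHERE a = 101;

-- Session 2 (the cron job): ACCESS EXCLUSIVE conflicts with RowExclusiveLock,
-- so this DDL waits, and later SELECTs on t1_p2 queue up behind it
ALTER TABLE t1_p2 ADD COLUMN c integer;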


If we cannot reduce locking in the partition scenario, that is fine; we can
consider it a limitation of PostgreSQL, as of any other RDBMS.


Regards,
Sachin

On Fri, Sep 16, 2016 at 7:41 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Sachin Kotwal <kotsac...@gmail.com> writes:
> > Does it release the locks after making the decision and then perform the
> > actual update operation on the partition table?
>
> No, there's no attempt to do that, and we're unlikely to consider doing so
> because it would result in more lock-table thrashing.  Why do you care?
> RowExclusiveLock does not block any normal DML operation, so there's no
> apparent benefit from releasing it early.
>
>     regards, tom lane
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] Why postgres take RowExclusiveLock on all partition

2016-09-16 Thread Sachin Kotwal
Hi Tom,


Thanks for the reply.

To make that decision the planner should need the locks only for a very short
interval. Does it release the locks after making the decision and then perform
the actual update operation on the partition table?
I feel the update operation can take much longer than the planner's
examination, and should not require the lock in the later stages of query
execution.

Locking all partition tables blocks every query (mostly SELECT-style queries)
that wants a lock on another partition.
If we could release the locks immediately after the planner's examination,
other queries running against other partitions could acquire their locks.

I would be happy if we could reduce locking in the above scenario.
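
A quick sketch of how one can see that the planner's locks outlive planning
(using the t1 test tables shown below in this thread):

-- even planning alone (EXPLAIN) locks every child table, and the locks
-- persist until the transaction ends, not until the planner finishes
BEGIN;
EXPLAIN UPDATE t1 SET b = 555 WHERE a = 101;
SELECT relation::regclass, mode FROM pg_locks
 WHERE locktype = 'relation' AND pid = pg_backend_pid();
-- t1, t1_p1 and t1_p2 all still show RowExclusiveLock here
COMMIT;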


Regards,
Sachin

On Fri, Sep 16, 2016 at 7:16 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Sachin Kotwal <kotsac...@gmail.com> writes:
> > In another Terminal :
>
> > postgres=# select locktype, database::regclass ,
> > relation::regclass,virtualtransaction, pid, mode , granted from pg_locks
> > where locktype='relation';
> >  locktype | database | relation | virtualtransaction |  pid  |       mode       | granted
> > ----------+----------+----------+--------------------+-------+------------------+---------
> >  relation | 13241    | pg_locks | 3/3867             | 28635 | AccessShareLock  | t
> >  relation | 13241    | t1_p2    | 2/14038            | 28633 | RowExclusiveLock | t
> >  relation | 13241    | t1_p1    | 2/14038            | 28633 | RowExclusiveLock | t
> >  relation | 13241    | t1       | 2/14038            | 28633 | RowExclusiveLock | t
> > (4 rows)
>
> The planner must take some type of lock on each partition, because it
> has to examine that table and decide whether or not it needs to be
> scanned, and that at least requires locking the table's DDL state.
> So those locks will be there whether or not the query ultimately scans
> the tables.  This isn't a bug.
>
> regards, tom lane
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] Why postgres take RowExclusiveLock on all partition

2016-09-16 Thread Sachin Kotwal
Hi Ashutosh,

Thanks for the reply.

Below are my findings:


In 1 Terminal:

postgres=# create table t1 (a int, b int);
CREATE TABLE
postgres=# show constraint_exclusion ;
 constraint_exclusion
----------------------
 partition
(1 row)
postgres=# create table t1_p1() inherits (t1);
CREATE TABLE
postgres=# alter table t1_p1 add constraint a_part check (a > 0 and a < 100);
ALTER TABLE
postgres=# create table t1_p2() inherits (t1);
CREATE TABLE
postgres=# alter table t1_p2 add constraint a_part check (a > 100 and a < 200);
ALTER TABLE
postgres=# insert into t1_p1 select i,i from generate_series(1, 5) i;
INSERT 0 5
postgres=# insert into t1_p2 select i,i from generate_series(101, 105) i;
INSERT 0 5
postgres=# select * from t1_p1;
 a | b
---+---
 1 | 1
 2 | 2
 3 | 3
 4 | 4
 5 | 5
(5 rows)
postgres=# select * from t1_p2;
  a  |  b
-+-
 101 | 101
 102 | 102
 103 | 103
 104 | 104
 105 | 105
(5 rows)
postgres=# begin;
BEGIN
postgres=# update t1 set b=555 where a=101;
UPDATE 1


In another Terminal :

postgres=# select locktype, database::regclass ,
relation::regclass,virtualtransaction, pid, mode , granted from pg_locks
where locktype='relation';
 locktype | database | relation | virtualtransaction |  pid  |       mode       | granted
----------+----------+----------+--------------------+-------+------------------+---------
 relation | 13241    | pg_locks | 3/3867             | 28635 | AccessShareLock  | t
 relation | 13241    | t1_p2    | 2/14038            | 28633 | RowExclusiveLock | t
 relation | 13241    | t1_p1    | 2/14038            | 28633 | RowExclusiveLock | t
 relation | 13241    | t1       | 2/14038            | 28633 | RowExclusiveLock | t
(4 rows)


I hope the above findings help you understand the problem.
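
For a narrower view, the same check can be restricted to the test tables
(a small variant of the query above):

select relation::regclass as rel, pid, mode, granted
  from pg_locks
 where locktype = 'relation'
   and relation in ('t1'::regclass, 't1_p1'::regclass, 't1_p2'::regclass);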


Regards,
Sachin


On Fri, Sep 16, 2016 at 6:20 PM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

> On Fri, Sep 16, 2016 at 4:31 PM, Sachin Kotwal <kotsac...@gmail.com>
> wrote:
> > Hi Hackers,
> >
> >
> > I checked what happens when there is an update transaction on a master
> > table involved in partitioning: PostgreSQL takes RowExclusiveLock on all
> > partition tables.
> >
> > constraint_exclusion is set on.
>
> I checked this under the debugger and found that only the partitions
> which are scanned get locked at this point. The partitions excluded by
> constraints are not locked.
>
> postgres=# create table t1 (a int);
> CREATE TABLE
> postgres=# set constraint_exclusion to partition;
> SET
> postgres=# create table t1_p1() inherits (t1);
> CREATE TABLE
> postgres=# alter table t1_p1 add constraint a_part check (a > 0 and a <
> 100);
> ALTER TABLE
> postgres=# create table t1_p2() inherits (t1);
> CREATE TABLE
> postgres=# alter table t1_p2 add constraint a_part check (a > 100 and a <
> 200);
> ALTER TABLE
> postgres=# insert into t1_p1 select i from generate_series(1, 5) i;
> INSERT 0 5
> postgres=# insert into t1_p2 select i from generate_series(101, 105) i;
> INSERT 0 5
> postgres=# explain verbose select * from t1 where a > 100;
>                            QUERY PLAN
> ----------------------------------------------------------------
>  Append  (cost=0.00..41.88 rows=851 width=4)
>    ->  Seq Scan on public.t1  (cost=0.00..0.00 rows=1 width=4)
>          Output: t1.a
>          Filter: (t1.a > 100)
>    ->  Seq Scan on public.t1_p2  (cost=0.00..41.88 rows=850 width=4)
>          Output: t1_p2.a
>          Filter: (t1_p2.a > 100)
> (7 rows)
>
> postgres=# explain verbose update t1 set a = a where a > 100;
>                             QUERY PLAN
> ------------------------------------------------------------------
>  Update on public.t1  (cost=0.00..41.88 rows=851 width=10)
>    Update on public.t1
>    Update on public.t1_p2
>    ->  Seq Scan on public.t1  (cost=0.00..0.00 rows=1 width=10)
>          Output: t1.a, t1.ctid
>          Filter: (t1.a > 100)
>    ->  Seq Scan on public.t1_p2  (cost=0.00..41.88 rows=850 width=10)
>          Output: t1_p2.a, t1_p2.ctid
>          Filter: (t1_p2.a > 100)
> (9 rows)
>
> The RowExclusiveLock is taken in InitPlan(), which is called after the
> partitions have been excluded.
>
>  817│ foreach(l, resultRelations)
>  818│ {
>  819│     Index       resultRelationIndex = lfirst_int(l);
>  820│     Oid         resultRelationOid;
>  821│     Relation    resultRelation;
>  822│
>  823│     resultRelationOid = getrelid(resultRelationIndex, rangeTable);
>  824├>    resultRelation = heap_open(resultRelationOid, RowExclusiveLock);
>  825│     InitResultRelInfo(resultRelInfo, …

[HACKERS] Why postgres take RowExclusiveLock on all partition

2016-09-16 Thread Sachin Kotwal
Hi Hackers,


I checked what happens when there is an update transaction on a master table
involved in partitioning: PostgreSQL takes RowExclusiveLock on all partition
tables.

constraint_exclusion is set on.

My question is: why does it lock all partition tables instead of only the one
partition table where the data resides?
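
In short, with the inheritance setup shown elsewhere in this thread
(t1 with children t1_p1 and t1_p2), the observation is:

-- the row matches only t1_p2's constraint, yet pg_locks shows
-- RowExclusiveLock on t1, t1_p1 AND t1_p2
BEGIN;
UPDATE t1 SET b = 555 WHERE a = 101;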


Feel free to ask if any further information is required.


-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] pgbench unable to scale beyond 100 concurrent connections

2016-06-30 Thread Sachin Kotwal
Hi All,

Sorry for troubling you with a small environment setup for testing;
I should test this on a larger machine.
What I was testing involved multiple things at the same time, so it was quite
confusing.

Possible reasons for this test failure:
1. Small hardware.
2. HAProxy not able to balance connections 100/100 across the two coordinators.
3. The postgres_fdw foreign servers unable to establish a large number of
connections with the remote servers/shards.


I was testing multiple coordinators using postgres_fdw (sharding), with
HAProxy on top of them for load balancing.

as below

pg_fdw (conn=100, diff pg
instance on diff machine)  |
/
\|
pgbench (haproxy-port)->Haproxy/ (should accept 200 conn)
   \ | Shards/Nodes (1…N)
  \
/ | remote
pg servers
\
/ |
   pg_fdw(conn=100, diff pg
instance on diff machine) |


I hope to test this scenario in detail once I get time and good hardware.

If someone tests this scenario, please let me know.

Thanks and regards,
Sachin Kotwal




On Thu, Jun 30, 2016 at 4:03 AM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 29 June 2016 at 21:49, Sachin Kotwal <kotsac...@gmail.com> wrote:
>
>> Hi,
>>
>>
>> On Wed, Jun 29, 2016 at 6:29 PM, Craig Ringer <cr...@2ndquadrant.com>
>> wrote:
>>
>>> On 29 June 2016 at 18:47, Sachin Kotwal <kotsac...@gmail.com> wrote:
>>>
>>>
>>>> I am testing pgbench with more than 100 connections.
>>>> I also set max_connections in postgresql.conf to more than 100.
>>>>
>>>> Initially pgbench tries to scale to nearly 150 connections, but later it
>>>> comes down to 100 and stays there.
>>>>
>>>> Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?
>>>>
>>>
>>> What makes you think this is a pgbench limitation?
>>>
>>
>> As I mentioned, when I tried the same thing with sysbench it gave me 200+
>> concurrent connections with the same method on the same machine.
>>
>
> What command lines / configs are you using? Details are necessary, talking
> about this in general hand-waving terms is not getting anywhere.
>
>
>>
>>
>>> It sounds like you're benchmarking the client and server on the same
>>> system. Couldn't this be a limitation of the backend PostgreSQL server?
>>>
>> I think having the client and server on the same machine should not be a
>> problem. Since I can do this with a different benchmarking tool, it should
>> not be a limitation of the backend PostgreSQL server.
>>
>
> OK, so your sysbench use is actually talking to PostgreSQL as well. Then
> yes. Assuming they're testing roughly the same thing, which I somewhat
> doubt.
>
>> There should be no connection and disconnection because I am not using the
>> -C option of pgbench, which causes a connect and disconnect for each query.
>>
>
> OK, in that case it's hard to explain the behaviour you're seeing.
>
> More details please.
>
>
>> If I set max_connections in postgresql.conf to 200 and test with -c 150,
>> this should work fine, but it does not.
>>
>
> If you're using FDWs to connect to the same server again, you'll need a
> max_connections slot for each FDW connection as well.
>
>
>
>> I am testing a scenario with multiple coordinators, using postgres_fdw to
>> enhance the connection capacity of postgres without any connection pooling.
>> The setup might be difficult to explain here, but I will explain it if
>> required.
>>
>
> Yes, you need to explain it.
>
>
>> Could you simply test a scale-100 pgbench database and run pgbench with
>> 200+ connections on a small virtual box to see the same observation?
>>
>
> It works fine - of course. There's more to this story than you've
> explained so far.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] pgbench unable to scale beyond 100 concurrent connections

2016-06-29 Thread Sachin Kotwal
Hi,


On Wed, Jun 29, 2016 at 6:29 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 29 June 2016 at 18:47, Sachin Kotwal <kotsac...@gmail.com> wrote:
>
>
>> I am testing pgbench with more than 100 connections.
>> I also set max_connections in postgresql.conf to more than 100.
>>
>> Initially pgbench tries to scale to nearly 150 connections, but later it
>> comes down to 100 and stays there.
>>
>> Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?
>>
>
> What makes you think this is a pgbench limitation?
>

As I mentioned, when I tried the same thing with sysbench it gave me 200+
concurrent connections with the same method on the same machine.



> It sounds like you're benchmarking the client and server on the same
> system. Couldn't this be a limitation of the backend PostgreSQL server?
>
I think having the client and server on the same machine should not be a
problem. Since I can do this with a different benchmarking tool, it should not
be a limitation of the backend PostgreSQL server.



> It also sounds like your method of counting concurrent connections is
> probably flawed. You're not allowing for setup and teardown time; if you
> want over 200 connections really running at very high rates of connection
> and disconnection you'll probably need to raise max_connections a bit to
> allow for the ones that're starting up or tearing down at any given time.
>
Maybe. Please let me know how I can count concurrent connections in this case.
There should be no connection and disconnection because I am not using the -C
option of pgbench, which causes a connect and disconnect for each query.
If I set max_connections in postgresql.conf to 200 and test with -c 150, this
should work fine, but it does not.




> Really, though, why would you want to do this? I can measure my car's
> speed falling off a cliff, but that's not a very interesting benchmark for
> a car. I can't imagine any sane use of the database this way, with
> incredibly rapid setup and teardown of lots of connections. Look into
> connection pooling, either client side or in a proxy like pgbouncer.
>
>
I am testing a scenario with multiple coordinators, using postgres_fdw to
enhance the connection capacity of postgres without any connection pooling.
The setup might be difficult to explain here, but I will explain it if
required.

Could you simply test a scale-100 pgbench database and run pgbench with 200+
connections on a small virtual box to see the same observation?

Please let me know if I can help you reproduce this problem.
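
For concreteness, the kind of run I mean is sketched below (host and port are
placeholders for the HAProxy front end; the other flags are standard pgbench
options):

pgbench -i -s 100 postgres
pgbench -c 200 -j 2 -T 300 -h <haproxy-host> -p <haproxy-port> postgres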





> --
>  Craig Ringer       http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] pgbench unable to scale beyond 100 concurrent connections

2016-06-29 Thread Sachin Kotwal
Hi Fabien,

Sorry for the very short report.
I felt pgbench was not such a complex tool.

Please see my answers to your questions below.


On Wed, Jun 29, 2016 at 5:07 PM, Fabien COELHO <coe...@cri.ensmp.fr> wrote:

>
> Hello Sachin,
>
> Your report is very imprecise so it is hard to tell anything.
>
> What version of client and server are you running?

I am testing with 9.6beta1 binaries; server and client are the same version.
I am using pgbench on top of postgres_fdw.



> On what hardware ? (200 connections => 200 active postgres processes, how
> many processes per core are you expecting to run?

I am running in a small VirtualBox machine with 1GB RAM and 2 cores.
I think there should not be a problem with 200 processes on a 2-core machine.
I tested the same number of concurrent connections on the same machine with
sysbench, and it works fine.

I am not sure what the difference between pgbench and sysbench is; they might
be process-based or thread-based.
But I can say that if I can create 200+ concurrent connections with sysbench,
I should be able to create the same with pgbench.

Thoughts?



> the recommended value is about 2 connections per physical core...)

I think 2 connections per core is a very small value; for 200 connections I
would need at least a 100-core machine, which is not good.


> What precise command is started?




> How do you know it "comes down to 100 connections"?


I put a watch on the live connections to the database, something like:

watch -n 1 'ps -ef | grep postgres | grep 192.168.56.101 | wc -l'

NOTE: the grep command may change per environment.


I also used the query below to see active connections:

# select count(*) from pg_stat_activity;
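
A breakdown by state separates genuinely active backends from idle ones
(a sketch; pg_stat_activity.state is available in 9.6):

# select state, count(*) from pg_stat_activity group by state;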




> Are there error messages from pgbench or postgresql?
>
PostgreSQL does not give any error.

pgbench says:

client 36 aborted in state 2: ERROR:  could not connect to server "server_1"
DETAIL:  FATAL:  sorry, too many clients already
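
That FATAL comes from the remote side: each pgbench client that reaches a
coordinator opens one more backend on server_1 through postgres_fdw, so the
shard's max_connections must cover those too. A sketch of one way to raise it
(the value 300 is illustrative; a restart is required):

-- run on the shard behind server_1
ALTER SYSTEM SET max_connections = 300;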



> My random guess would be that you start too many connections with only one
> thread client side and/or on a too small hardware client or server-side for
> the expected scale, so given the load and latency some connections just
> never get to do anything?
>
This may be the reason, but pgbench should be able to maintain the idle
connections during that time if they never get to do anything.



> Maybe try with "-j 20" so that there are not too many connections per
> pgbench thread?
>
I do not have such good hardware for now.
I feel pgbench should be able to perform well on small hardware.


Feel free to ask any questions regarding the setup.




>> I am testing pgbench with more than 100 connections. I also set
>> max_connections in postgresql.conf to more than 100.
>>
>> Initially pgbench tries to scale to nearly 150 connections, but later it
>> comes down to 100 and stays there.
>>
>> Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?
>>
>> ---
>> I tested it with max_connections = 200 in postgresql.conf
>> and pgbench with -c 180/190/200
>>
>
> Please reply.
>>
>
> Please send precise information instead of expecting people to guess...
>
> --
> Fabien
>



-- 

Thanks and Regards,
Sachin Kotwal


[HACKERS] pgbench unable to scale beyond 100 concurrent connections

2016-06-29 Thread Sachin Kotwal
Hi,

I am testing pgbench with more than 100 connections.
I also set max_connections in postgresql.conf to more than 100.

Initially pgbench tries to scale to nearly 150 connections, but later it comes
down to 100 and stays there.

Is this a limitation of pgbench? Or a bug? Or am I doing it the wrong way?

---
I tested it with max_connections = 200 in postgresql.conf
and pgbench with -c 180/190/200.

Please reply.

-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] Template for commit messages

2016-01-31 Thread Sachin Kotwal
Sorry for being a little late.

Can we add a severity level to each patch, with just three levels (High,
Moderate, Low)?

Many of our customers might not understand the overall importance of a patch.
If we add this, people/customers can decide whether a patch is important for
them or not.
People other than the author and hackers cannot easily judge the overall
importance of a patch.

Please consider this if you feel it is important to add such a parameter to
the commit message format.







On Mon, Feb 1, 2016 at 10:04 AM, Joshua D. Drake <j...@commandprompt.com>
wrote:

> On 01/31/2016 04:34 PM, Michael Paquier wrote:
>
>> On Mon, Feb 1, 2016 at 2:44 AM, Joshua D. Drake wrote:
>>
>>> On 01/29/2016 03:05 PM, Alvaro Herrera wrote:
>>>
>>>> Joshua D. Drake wrote:
>>>> One of the offers is to credit them (I'm not exactly clear
>>>> on what is the group to benefit from this, but the phrasing used in the
>>>> meeting was "contributors to the release") by having a section somewhere
>>>> in the release notes with a list of their names.
>>>>
>>>
>>>
>>> I can see this as being a nice thing but knowing that someone is a
>>> contributor isn't hard. There is a contributor list on the website and
>>> it is
>>> obvious from mail lists, archives and simple searches who is actually
>>> participating.
>>>
>>
>> This page would need a refresh IMO. I think it has not been touched
>> for the last couple of years.
>>
>
> No doubt but if we can't bother to keep that refreshed what makes us think
> that a structured format in commit messages that magically (through a lot
> of hard work and extra parsing anyway) is going to be any more accurate?
>
> Sincerely,
>
> JD
>
>
>>
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
>
>
>



-- 

Thanks and Regards,
Sachin Kotwal


Re: [HACKERS] WAL replay bugs

2014-04-10 Thread Sachin Kotwal

I executed the given steps many times to reproduce this bug,
but I am still unable to hit it.
I used the attached script while trying to reproduce it.

Can I get the scripts that were used to reproduce this bug?


wal_replay_bug.sh
http://postgresql.1045698.n5.nabble.com/file/n5799512/wal_replay_bug.sh  



-
Thanks and Regards,

Sachin Kotwal
NTT-DATA-OSS Center (Pune)

