Re: [GENERAL] Streaming Replica Master-Slave Config.

2016-08-07 Thread Eduardo Morras
On Fri, 5 Aug 2016 12:43:43 -0700
John R Pierce  wrote:

> On 8/4/2016 9:15 AM, Eduardo Morras wrote:
> > If you set max_connections too high, those connections will
> > compete/fight for the same resources: CPU processing, I/O to disks,
> > memory and caches, locks; and postgres will spend more time
> > managing the resources than doing real work. Believe me (or us), set
> > it as we say and use a bouncer like pgbouncer. It can run on the
> > same server.
> 
> idle connections only use a small amount of memory, a process, a
> socket, and some file handles. When you have multiple databases,
> it's impossible to share a connection pool across them.
> 
> the OP is talking about having 350 'tenants' each with their own 
> database and user on a single server.

No, I was answering Periko; the mail you cite is from Moreno Andreo,
which I copy and indent here:

> On 04/08/2016 18:15, Eduardo Morras wrote:
>> [...]
>> a) As others said, max_connections = 200 is too high. Set it to your
>> number of cores (I use the number of cores - 1)
> Excuse me for cross-threading, but I need to make things clearer to
> myself. That's one of the things I find hard to approach in
> my architecture.
> My server has 350 DBs and 350 users, each with their own DB. Every user
> has a "persistent" connection (used for replication with rubyrep) and
> some "burst" connections to connect to colleagues' DBs when necessary.
> I'm going to split it across 2 servers, but it doesn't change things.
> Even with pgbouncer, how can I manage all these replication
> connections (plus the "burst" connections) with a low
> (<100) max_connections? Is it even possible?
> Thanks
> Moreno.

He asks about a different scenario, with multiple users and multiple databases.

> your 1 connection per core suggestion is ludicrous for this
> scenario. In many database applications, most connections are
> idle most of the time. Sure, you don't want much over about 2-4x
> your CPU thread count actually active doing queries at the same time
> if you want the maximum aggregate transactions/second throughput, but
> you can still get acceptable performance several times higher than
> that, depending on the workload. In my benchmarks the aggregate TPS
> rolls off fairly slowly for quite a way past the 2-4 connections per
> hardware thread or core level, at least doing simple OLTP on a
> high-concurrency storage system (lots of fast disks in RAID 10).

Yes, for this scenario, where each user/app has its own database
(A*(1:1)) or several (A*(1:D)), you are right. The OP, Periko, asked
about a Pentaho setup of type (A*(C:1)), where pgbouncer could help,
and OLAP queries, which are CPU/IO bound.

A*(C:D)
A = number of users/clients/Apps
C = number of Connections per A
D = number of Databases in server
a = small A value (lower than a "normal" threshold/value)
c = small C value (lower than a "normal" threshold/value)
d = small D value (lower than a "normal" threshold/value)


> -- 
> john r pierce, recycling bits in santa cruz

---   ---
Eduardo Morras 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Streaming Replica Master-Slave Config.

2016-08-05 Thread John R Pierce

On 8/4/2016 9:15 AM, Eduardo Morras wrote:

If you set max_connections too high, those connections will compete/fight for 
the same resources: CPU processing, I/O to disks, memory and caches, locks; and 
postgres will spend more time managing the resources than doing real work. 
Believe me (or us), set it as we say and use a bouncer like pgbouncer. It can 
run on the same server.


idle connections only use a small amount of memory, a process, a socket, 
and some file handles. When you have multiple databases, it's 
impossible to share a connection pool across them.


the OP is talking about having 350 'tenants' each with their own 
database and user on a single server.


your 1 connection per core suggestion is ludicrous for this 
scenario. In many database applications, most connections are idle 
most of the time. Sure, you don't want much over about 2-4x your CPU 
thread count actually active doing queries at the same time if you want 
the maximum aggregate transactions/second throughput, but you can still get 
acceptable performance several times higher than that, depending on the 
workload. In my benchmarks the aggregate TPS rolls off fairly slowly for 
quite a way past the 2-4 connections per hardware thread or core level, 
at least doing simple OLTP on a high-concurrency storage system 
(lots of fast disks in RAID 10).
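The rolloff described above is easy to measure for yourself. A minimal sketch of such a sweep with pgbench (the database name "bench" and all numbers are illustrative, not tuned values; it assumes "pgbench -i -s 100 bench" was run beforehand; DRY_RUN=1, the default here, only prints the commands instead of running them):

```shell
# Sweep client counts and watch where aggregate TPS rolls off.
# -S runs the select-only (simple OLTP) script; -T 60 runs each
# point for 60 seconds; -j 4 uses 4 pgbench worker threads.
DRY_RUN=${DRY_RUN:-1}
for clients in 4 8 16 32 64 128; do
    cmd="pgbench -c $clients -j 4 -T 60 -S bench"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        $cmd | grep "tps"
    fi
done
```

Plotting TPS against the client count should show the knee somewhere past 2-4x the hardware thread count, workload and storage permitting.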

--
john r pierce, recycling bits in santa cruz





Re: [SPAM] Re: [GENERAL] Streaming Replica Master-Slave Config.

2016-08-05 Thread Moreno Andreo

On 04/08/2016 18:15, Eduardo Morras wrote:

[...]
a) As others said, max_connections = 200 is too high. Set it to your number of 
cores (I use the number of cores - 1)

Excuse me for cross-threading, but I need to make things clearer to myself.
That's one of the things I find hard to approach in my 
architecture.
My server has 350 DBs and 350 users, each with their own DB. Every user 
has a "persistent" connection (used for replication with rubyrep) and some 
"burst" connections to connect to colleagues' DBs when necessary.

I'm going to split it across 2 servers, but it doesn't change things.
Even with pgbouncer, how can I manage all these replication 
connections (plus the "burst" connections) with a low (<100) 
max_connections? Is it even possible?

Thanks
Moreno.
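For what it's worth, pgbouncer keeps a separate pool per (user, database) pair, so a many-tenant layout along these lines is possible; the following pgbouncer.ini sketch is illustrative (paths and pool sizes are assumptions, not tuned values):

```ini
[databases]
; one wildcard entry covers all 350 per-user databases;
; pgbouncer still creates one pool per user/database pair
* = host=127.0.0.1 port=5432

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 2      ; server connections per user/database pair
max_client_conn = 1000     ; client side can stay high; server side stays low
```

Note that long-lived clients such as the rubyrep replication sessions generally need session pooling (transaction pooling breaks session state like prepared statements), so those connections may have to bypass the bouncer or use a separate session-mode pool.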



Re: [GENERAL] Streaming Replica Master-Slave Config.

2016-08-04 Thread Ian Barwick
On 8/5/16 12:35 AM, Periko Support wrote:
>  Hi.
> 
>  I'm testing streaming replica with Ubuntu 14 and PostgreSQL 9.3.
> 
>  I'm trying to understand how this works; my plan is to set up the
> slave server for Pentaho reports, for queries only.
> 
>  Now, my master server has 128 GB; max_connections = 200, maybe I will add 
> more.
>  shared_memory=18GB.
> 
>  My slave server doesn't have those resources; it is a Xeon with 32 GB of
> RAM, running the same PostgreSQL version
> on Ubuntu 14.
> 
> My question: does the slave server need the same resources?
> 128 GB, and the same settings, specifically max_connections and
> shared_memory?

max_connections must be the same (or higher) on the standby
as on the master:

  
https://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS

but shared_buffers (which is what I guess you mean by "shared_memory")
can be a different value, appropriate to the standby's hardware resources.
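Concretely, a pair of configs along these lines would satisfy that constraint (the standby's shared_buffers value is illustrative, roughly 25% of its 32 GB of RAM, not a tuned figure):

```ini
# master (128 GB RAM) -- postgresql.conf
max_connections = 200
shared_buffers = 18GB

# standby (32 GB RAM) -- postgresql.conf
max_connections = 200    # must be >= the master's setting
shared_buffers = 8GB     # sized for this machine, not copied from the master
```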


Regards

Ian Barwick



-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services




Re: [GENERAL] Streaming Replica Master-Slave Config.

2016-08-04 Thread Eduardo Morras
On Thu, 4 Aug 2016 08:35:37 -0700
Periko Support  wrote:

>  Hi.
> 
>  I'm testing streaming replica with Ubuntu 14 and PostgreSQL 9.3.
> 
>  I'm trying to understand how this works; my plan is to set up the
> slave server for Pentaho reports, for queries only.
> 
>  Now, my master server has 128 GB; max_connections = 200, maybe I
> will add more. shared_memory=18GB.
> 
>  My slave server doesn't have those resources; it is a Xeon with 32 GB of
> RAM, running the same PostgreSQL version
> on Ubuntu 14.
> 
> My question: does the slave server need the same resources?
> 128 GB, and the same settings, specifically max_connections and
> shared_memory?
> 
> Any comment appreciated, thanks!!!

a) As others said, max_connections = 200 is too high. Set it to your number of 
cores (I use the number of cores - 1).
b) Use pgbouncer as middleware between your apps and the master.
c) Upgrade postgres to the current 9.5.
d) Are you sure the Pentaho report queries aren't time- and resource-hungry? 
They are (afaik) OLAP queries, not OLTP. Perhaps you should swap the master 
with the slave.
e) To answer your question: no, the slave doesn't need to have the same 
resources or settings.
f) For the slave, if the Pentaho queries are OLAP, increase the work_mem setting.
g) Do some metrics, check, more metrics, recheck (by metrics I mean measure 
performance, calculate statistics, and compare results).
h) Read the documentation and wiki chapters on these topics.
i) Ask.

If you set max_connections too high, those connections will compete/fight for 
the same resources: CPU processing, I/O to disks, memory and caches, locks; and 
postgres will spend more time managing the resources than doing real work. 
Believe me (or us), set it as we say and use a bouncer like pgbouncer. It can 
run on the same server.
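A minimal pgbouncer.ini sketch for a single-app setup like this (the paths and pool sizes are illustrative assumptions, not tuned values):

```ini
[databases]
; route everything on this bouncer to the local master
* = host=127.0.0.1 port=5432

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 8    ; roughly "number of cores - 1", per item (a) above
max_client_conn = 200    ; clients can hold many idle sockets cheaply
```

The point of the split is that max_client_conn bounds the cheap client-side sockets while default_pool_size bounds the expensive server-side backends.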

 


---   ---
Eduardo Morras 




[GENERAL] Streaming Replica Master-Slave Config.

2016-08-04 Thread Periko Support
 Hi.

 I'm testing streaming replica with Ubuntu 14 and PostgreSQL 9.3.

 I'm trying to understand how this works; my plan is to set up the
slave server for Pentaho reports, for queries only.

 Now, my master server has 128 GB; max_connections = 200, maybe I will add more.
 shared_memory=18GB.

 My slave server doesn't have those resources; it is a Xeon with 32 GB of
RAM, running the same PostgreSQL version
on Ubuntu 14.

My question: does the slave server need the same resources?
128 GB, and the same settings, specifically max_connections and
shared_memory?

Any comment appreciated, thanks!!!

