RE: Range partitioning and overlap

2020-11-13 Thread Edson Richter
From: Tom Lane
Sent: Friday, November 13, 2020 17:58
To: Edson Richter
Cc: David G. Johnston ; pgsql-general

Subject: Re: Range partitioning and overlap

Edson Richter  writes:
> Further on the documentation: "When creating a range partition, the lower 
> bound specified with FROM is an inclusive bound, whereas the upper bound 
> specified with TO is an exclusive bound."

> I'm pretty sure this statement is not on the PostgreSQL 13 documentation
> page about partitioning. Maybe it is on another page?

It's in the CREATE TABLE reference page.  Seems like it would be a good
idea to have it also in ddl.sgml's discussion of partitioning, though.

regards, tom lane

Thanks!

Regards,

Edson



RE: Range partitioning and overlap

2020-11-13 Thread Edson Richter

From: David G. Johnston
Sent: Friday, November 13, 2020 17:32
To: Edson Richter
Cc: pgsql-general
Subject: Re: Range partitioning and overlap

On Fri, Nov 13, 2020 at 1:29 PM Edson Richter wrote:
"Range Partitioning

The table is partitioned into “ranges” defined by a key column or set of 
columns, with no overlap between the ranges of values assigned to different 
partitions. For example, one might partition by date ranges, or by ranges of 
identifiers for particular business objects."

Is there a misinterpretation on my side, or are the examples inconsistent with
the paragraph above?

Further on the documentation: "When creating a range partition, the lower bound 
specified with FROM is an inclusive bound, whereas the upper bound specified 
with TO is an exclusive bound."

David J.

I'm pretty sure this statement is not on the PostgreSQL 13 documentation page
about partitioning. Maybe it is on another page?
Would you mind sharing the URL of the page with this statement?

Thanks,

Edson




Range partitioning and overlap

2020-11-13 Thread Edson Richter
Hi,

Using PostgreSQL 13.1 - I need your guidance on correctly implementing
partitioning by timestamp ranges.

Looking at the documentation
( https://www.postgresql.org/docs/13/ddl-partitioning.html ), there is a
statement saying explicitly:

"Range Partitioning

The table is partitioned into “ranges” defined by a key column or set of 
columns, with no overlap between the ranges of values assigned to different 
partitions. For example, one might partition by date ranges, or by ranges of 
identifiers for particular business objects."


But afterwards, looking at the examples, the ranges overlap:


CREATE TABLE measurement_y2006m02 PARTITION OF measurement
FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');

CREATE TABLE measurement_y2006m03 PARTITION OF measurement
FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');

...
CREATE TABLE measurement_y2007m11 PARTITION OF measurement
FOR VALUES FROM ('2007-11-01') TO ('2007-12-01');

CREATE TABLE measurement_y2007m12 PARTITION OF measurement
FOR VALUES FROM ('2007-12-01') TO ('2008-01-01')
TABLESPACE fasttablespace;

Is there a misinterpretation on my side, or are the examples inconsistent with
the paragraph above?
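For the record, the example ranges above do not overlap: the TO bound is
exclusive, so a row timestamped exactly '2006-03-01' lands in
measurement_y2006m03, not measurement_y2006m02. A minimal bash sketch of that
half-open [FROM, TO) logic (hypothetical helper; dates are compared
numerically after stripping the dashes):

```shell
#!/usr/bin/env bash
# Mimic PostgreSQL's range-partition semantics: FROM is inclusive,
# TO is exclusive, i.e. the interval [from, to).
in_range() {
  local v=${1//-/} from=${2//-/} to=${3//-/}
  [ "$v" -ge "$from" ] && [ "$v" -lt "$to" ]
}

pick_partition() {
  if in_range "$1" 2006-02-01 2006-03-01; then echo measurement_y2006m02
  elif in_range "$1" 2006-03-01 2006-04-01; then echo measurement_y2006m03
  else echo "no partition"
  fi
}

pick_partition 2006-02-15   # -> measurement_y2006m02
pick_partition 2006-03-01   # boundary value -> measurement_y2006m03, no overlap
```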

Thanks,

Edson




"Go" (lang) standard driver

2020-08-17 Thread Edson Richter
Is there any standard PostgreSQL driver for the Go language?
Could members of this group share their experiences with Go + PostgreSQL
development?

Thanks,

E.R.


RE: How to double-quote a double quoted identifier?

2020-06-15 Thread Edson Richter
From: Tom Lane
Sent: Monday, June 15, 2020 22:24
To: Edson Richter
Cc: pgsql-general
Subject: Re: How to double-quote a double quoted identifier?

Edson Richter  writes:
> How do I remotely execute the following command:
> CREATE EXTENSION "uuid-ossp"
> SCHEMA public
> VERSION "1.1";

> I'm using PostgreSQL 9.6, Linux x64, and bash.

> I've tried the following:

> ssh -l postgres -2 -C -p 2022 192.168.0.70 "psql -h 127.0.0.1 -d test_db -p 
> 5432 -c \"CREATE EXTENSION ""uuid-ossp"" SCHEMA pg_catalog;\" "

Yeah, nesting quoting levels in shell is kind of a bear.

> I know this is a kind of "bash-psql" issue, but I would appreciate it if
> anyone could share their experience with this.

Since you're using bash, you can get bash to do the work for you,
via multiple iterations of the ${variable@Q} construct.

$ SQLCMD='CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
$ echo $SQLCMD
CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;
$ echo ${SQLCMD@Q}
'CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
-- not too exciting so far, but wrap it into another layer of quoting:
$ SHCMD="psql -h 127.0.0.1 -d test_db -p 5432 -c ${SQLCMD@Q}"
$ echo $SHCMD
psql -h 127.0.0.1 -d test_db -p 5432 -c 'CREATE EXTENSION "uuid-ossp" SCHEMA 
pg_catalog;'
$ echo ${SHCMD@Q}
'psql -h 127.0.0.1 -d test_db -p 5432 -c '\''CREATE EXTENSION "uuid-ossp" 
SCHEMA pg_catalog;'\'''
-- and now your answer is:
$ echo ssh -l postgres -2 -C -p 2022 192.168.0.70 ${SHCMD@Q}
ssh -l postgres -2 -C -p 2022 192.168.0.70 'psql -h 127.0.0.1 -d test_db -p 
5432 -c '\''CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'\'''

You could, of course, just use the end result of that -- but it's
probably better to have a shell script recalculate it on the fly
given the desired SQL command as input.

regards, tom lane
Ingenious!
Thanks a lot!

Edson Richter
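Tom's closing suggestion — a script that recomputes the quoting on the fly —
might be sketched like this (hypothetical wrapper reusing the host, port, and
database from the thread; requires bash 4.4+ for ${var@Q}; it prints the
command instead of running it, so the result can be inspected first):

```shell
#!/usr/bin/env bash
# Build an ssh command line that runs an arbitrary SQL string remotely
# via psql, using ${var@Q} to add each layer of shell quoting safely.
build_remote_psql() {
  local sql=$1
  local shcmd="psql -h 127.0.0.1 -d test_db -p 5432 -c ${sql@Q}"
  printf 'ssh -l postgres -2 -C -p 2022 192.168.0.70 %s\n' "${shcmd@Q}"
}

build_remote_psql 'CREATE EXTENSION "uuid-ossp" SCHEMA pg_catalog;'
```

Piping the output to bash (or replacing the printf with a direct ssh
invocation) would run the command for real.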



How to double-quote a double quoted identifier?

2020-06-15 Thread Edson Richter
How do I remotely execute the following command:

CREATE EXTENSION "uuid-ossp"
SCHEMA public
VERSION "1.1";

I'm using PostgreSQL 9.6, Linux x64, and bash.

I've tried the following:

ssh -l postgres -2 -C -p 2022 192.168.0.70 "psql -h 127.0.0.1 -d test_db -p 
5432 -c \"CREATE EXTENSION ""uuid-ossp"" SCHEMA pg_catalog;\" "

I know this is a kind of "bash-psql" issue, but I would appreciate it if anyone
could share their experience with this.

Thanks a lot,

Edson


Re: Table partitioning for cloud service?

2020-05-23 Thread Edson Richter


-- Original message --
From: Ravi Krishna
Date: Thu, May 21, 2020 18:50
To:
Cc: PostgreSQL Mailing Lists;
Subject: Re: Table partitioning for cloud service?


>
> The database/schema-per-tenant solution can be tedious when you want to
> modify something in the structure and you have numerous tenants.
> Therefore I used the "tables with tenant_id" version in a similar situation,
> but with a slight twist. One of the biggest issues with this solution is that
> if you forget to add the tenant_id to the WHERE clause, you are going to
> reveal one tenant's data to another.
> I came up with the solution that the database user has no privileges for
> accessing the base tables. Instead, I generate views for each tenant and
> they can access their own data in the underlying table through these views.
> Now if I forget to address the right tenant in my client code (it still
> happens sometimes) and try to directly access the base tables, I get a
> strongly worded reminder from the server.

1. If you have 50 tables and say 100 tenants, we are talking about 5,000 views.
I am not sure whether that is any more elegant than having 100 schemas.
2. In your approach you can't do a phased DDL upgrade. It is typical to do
rolling upgrades in multi-tenant databases, starting with the least risky
tenant.


Be careful to plan your backup/recovery strategy. How do you plan to recover
one customer from backup without interfering with the others?

What will be your disaster recovery strategy? Have you considered replication?

Large databases take longer to back up.

Keeping schemas in sync can be easily solved.


Regards,


Edson
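The per-tenant-view scheme discussed above could be sketched roughly like this
(hypothetical table, tenant, and role names; each tenant role can reach the
base table only through its own filtered view):

```sql
-- Base table: tenant roles get no direct privileges on it.
CREATE TABLE invoices (
    tenant_id integer NOT NULL,
    id        bigserial,
    amount    numeric
);
REVOKE ALL ON invoices FROM PUBLIC;

-- One view per tenant, fixed to that tenant's rows.
CREATE VIEW invoices_tenant42 AS
    SELECT id, amount FROM invoices WHERE tenant_id = 42;
GRANT SELECT, INSERT, UPDATE, DELETE ON invoices_tenant42 TO tenant42_role;
```

Any query that bypasses the view and touches invoices directly then fails with
a permission error — the "strongly worded reminder" mentioned in the thread.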


RE: It is possible to force periodically switch xlog?

2020-04-01 Thread Edson Richter



From: Alvaro Herrera
Sent: Wednesday, April 1, 2020 23:38
To: Edson Richter
Cc: pgsql-general
Subject: Re: It is possible to force periodically switch xlog?

On 2020-Apr-02, Edson Richter wrote:

> Actually, I do have a standby server I wish to update a bit more
> frequently on low-traffic servers.

It is better to use streaming replication for that.  Using the WAL
archive for that is an archaic mechanism now.

--
Álvaro Herrera
https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Actually, I do use streaming.
Our scenario is a bit more complex than an ordinary WAL archive or streaming
setup.
With this setup we are able to back up 1.5 TB of data in less than 12 hours,
even with geographically distributed servers, while production keeps running.

1) We have streaming from production to a report server (geographically
distributed).
2) WAL archiving is set up from the standby server to a backup server (same
location).
3) From time to time, we take barman backups from the standby server (same
location).
4) Twice a week, we restore every single database on a backup server to test
the backups.

Although nonstandard, this setup is working really well for our needs.

But not everything shines like gold, and sometimes, for low-traffic servers,
barman complains that not all WAL segments were received.
Then we need to manually run pg_switch_xlog on the master, followed by a
"barman check-database" on the backup server: this is what we wanted to
automate.
And it went well with a bash script and cron. At least for 40 databases it is
working really well.
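The bash-plus-cron automation described here might be sketched as follows
(hypothetical host names; the function only prints the psql invocations, so
the output can be reviewed or piped to bash from a cron entry; note that
PostgreSQL 10+ renamed the function to pg_switch_wal()):

```shell
#!/usr/bin/env bash
# Emit one forced-WAL-switch command per master host, one per line.
emit_switch_commands() {
  local host
  for host in "$@"; do
    printf 'psql -h %s -U postgres -Atc "SELECT pg_switch_xlog();"\n' "$host"
  done
}

emit_switch_commands master1.example.com master2.example.com
```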

Regards,

Edson






RE: It is possible to force periodically switch xlog?

2020-04-01 Thread Edson Richter



From: Kyotaro Horiguchi
Sent: Wednesday, April 1, 2020 22:22
To: da...@pgmasters.net
Cc: edsonrich...@hotmail.com ;
pgsql-gene...@postgresql.org ;
alvhe...@2ndquadrant.com
Subject: Re: It is possible to force periodically switch xlog?

At Wed, 1 Apr 2020 21:09:20 -0400, David Steele  wrote in
> On 4/1/20 7:19 PM, Edson Richter wrote:
> > IIRC archive_timeout does precisely that.
> > I would assume that "archive_mode" must be "on" or "always" - but I do
> > have "archive_mode=off".
>
> I'm not sure what the point of switching the WAL segment would be with
> archive_mode=off.
>
> What are you trying to accomplish?

Also I'm a bit curious about that, but anyway archive_timeout works
even with archive_mode=off. The parameter's name is a bit misleading
in that respect.
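For reference, the setting under discussion looks like this in postgresql.conf
(the value is illustrative):

```
archive_timeout = 300   # switch to a new WAL segment file after 300 seconds
                        # if any activity occurred since the last switch
```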


regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center


Good to know!
But meanwhile I've written a small bash script that executes pg_switch_xlog()
and scheduled it in cron.

Regards,

Edson



RE: It is possible to force periodically switch xlog?

2020-04-01 Thread Edson Richter



From: David Steele
Sent: Wednesday, April 1, 2020 22:09
To: Edson Richter ; pgsql-general
Cc: Alvaro Herrera
Subject: Re: It is possible to force periodically switch xlog?

On 4/1/20 7:19 PM, Edson Richter wrote:
>
> IIRC archive_timeout does precisely that.
>
> I would assume that "archive_mode" must be "on" or "always" - but I do
> have "archive_mode=off".

I'm not sure what the point of switching the WAL segment would be with
archive_mode=off.

What are you trying to accomplish?


Actually, I do have a standby server I wish to update a bit more frequently on
low-traffic servers.

Regards,


Regards,
--
-David
da...@pgmasters.net


RE: It is possible to force periodically switch xlog?

2020-04-01 Thread Edson Richter


From: Alvaro Herrera
Sent: Wednesday, April 1, 2020 19:55
To: Abdul Sayeed
Cc: Edson Richter ; pgsql-general
Subject: Re: It is possible to force periodically switch xlog?

On 2020-Apr-02, Abdul Sayeed wrote:

> Hi Edson,
>
> On versions below PG 10, you can switch using the command below.
>
> Select pg_switch_xlog();
>
> And there is no parameter present to switch it.

IIRC archive_timeout does precisely that.

I would assume that "archive_mode" must be "on" or "always" - but I do have 
"archive_mode=off".

Regards,

Edson

--
Álvaro Herrera
https://www.2ndquadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


It is possible to force periodically switch xlog?

2020-04-01 Thread Edson Richter
I would like to know if there is any parameter I can set to force Postgres to
switch xlog periodically, even if it has not reached the 16 MB size.

Thanks,

Edson.



RE: How to query "primary_slot_name" in slave server?

2020-03-25 Thread Edson Richter
From: Paul Förster
Sent: Wednesday, March 25, 2020 11:42
To: Edson Richter
Cc: pgsql-general
Subject: Re: How to query "primary_slot_name" in slave server?

Hi Edson,

query the following on the replica:

pg_stat_wal_receiver
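For example, a query against that view on the standby might look like this
(column names from the 9.6 catalog; slot_name shows the primary slot the
walreceiver is using):

```sql
-- Run on the standby (replica):
SELECT slot_name, status, conninfo
  FROM pg_stat_wal_receiver;
```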


Perfect, thanks!

Edson



Cheers,
Paul

> On 25. Mar, 2020, at 13:59, Edson Richter  wrote:
>
> Hi!
>
> I've been searching the net, but cannot find a reference.
> I know that on the server I can execute "select * from pg_replication_slots"
> and get all created slots.
> But how can I query the slave to know which primary slot it is connecting to?
>
> Thanks in advance,
>
> Edson



How to query "primary_slot_name" in slave server?

2020-03-25 Thread Edson Richter
Hi!

I've been searching the net, but cannot find a reference.
I know that on the server I can execute "select * from pg_replication_slots"
and get all created slots.
But how can I query the slave to know which primary slot it is connecting to?

Thanks in advance,

Edson


RE: Replication: slave server has 3x size of production server?

2020-02-23 Thread Edson Richter


From: Adrian Klaver
Sent: Sunday, February 23, 2020 15:42
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/23/20 8:04 AM, Edson Richter wrote:
> 
>
> *From:* Adrian Klaver
> *Sent:* Saturday, February 22, 2020 20:34
> *To:* Edson Richter ; pgsql-general
>
> *Subject:* Re: Replication: slave server has 3x size of production
> server?
>     On 2/22/20 2:51 PM, Edson Richter wrote:
>
> >
> > Yes, it is working. The last xlog file is present on all three servers.
> > Also, comparing the last transaction number on master and slave shows that
> > all are in sync.
> > Last, but not least, select max(id) from a busy table shows the same id
> > (when queried almost simultaneously using a simple test routine).
>
> Well something is keeping those WAL files around. You probably should
> analyze your complete setup to see what else is touching those servers.
>
>
> Is it safe to add "--remove-source-files" to my archive_command as
> follows on my slave server?

I would say not. See:

https://www.postgresql.org/docs/12/wal-configuration.html

"Checkpoints are points in the sequence of transactions at which it is
guaranteed that the heap and index data files have been updated with all
information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the log file. (The change records were previously flushed to
the WAL files.) In the event of a crash, the crash recovery procedure
looks at the latest checkpoint record to determine the point in the log
(known as the redo record) from which it should start the REDO
operation. Any changes made to data files before that point are
guaranteed to be already on disk. Hence, after a checkpoint, log
segments preceding the one containing the redo record are no longer
needed and can be recycled or removed. (When WAL archiving is being
done, the log segments must be archived before being recycled or removed.)"

So there is a window after a WAL segment is written but before the data it
represents has been checkpointed, during which the segment is still needed.

I see. Makes sense.
I suppose those long-lived xlog files are of no use then... I would expect
PostgreSQL to delete them automatically.
Perhaps, since I have full backups happening every other day, I can create a
"post backup command" in the barman script so it will delete files older than
one week from the server it is backing up from...
I understand there is no guarantee that these files have already been
processed... but if they are needed, they can be recovered from the barman
server...
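A "post backup command" along those lines could be sketched as follows
(hypothetical helper; as noted, file age alone does not guarantee a segment
has been processed, so this is a sketch of the idea, not a recommendation):

```shell
#!/usr/bin/env bash
# Remove files older than a given number of days (default 7) from a WAL
# holding directory. With DRY_RUN=1, only print what would be removed.
prune_old_wal() {
  local dir=$1 days=${2:-7}
  if [ "${DRY_RUN:-0}" = 1 ]; then
    find "$dir" -maxdepth 1 -type f -mtime +"$days" -print
  else
    find "$dir" -maxdepth 1 -type f -mtime +"$days" -delete
  fi
}
```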

Thanks,

Edson

>
>
> archive_command = 'rsync --remove-source-files -e "ssh -2 -C -p 2022"
> -az %p barman@192.168.0.2:/dados/barman/dbcluster/incoming/%f'
>
>
> and remove the xlog file after it is copied to barman?
> I mean, when the archive command starts, the WAL has already been
> processed by the slave server, so we don't need it after copying to the
> backup server, right?
>
>
> Regards,
>
> Edson
>
> >
> > >
> > >
> > > The files are about 7 months old.
> >
>     > Are there newer files that would indicate that the streaming is 
> working?
> >
> >
> > Yes, streaming is working properly (as stated above).
> >
> > Thanks,
> >
> >
> > Edson Richter
> >
> >
> >>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


--
Adrian Klaver
adrian.kla...@aklaver.com


RE: Replication: slave server has 3x size of production server?

2020-02-23 Thread Edson Richter


From: Adrian Klaver
Sent: Saturday, February 22, 2020 20:34
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/22/20 2:51 PM, Edson Richter wrote:

>
> Yes, it is working. The last xlog file is present on all three servers.
> Also, comparing the last transaction number on master and slave shows that
> all are in sync.
> Last, but not least, select max(id) from a busy table shows the same id
> (when queried almost simultaneously using a simple test routine).

Well something is keeping those WAL files around. You probably should
analyze your complete setup to see what else is touching those servers.

Is it safe to add "--remove-source-files" to my archive_command as follows
on my slave server?


archive_command = 'rsync --remove-source-files -e "ssh -2 -C -p 2022" -az %p 
barman@192.168.0.2:/dados/barman/dbcluster/incoming/%f'


and remove the xlog file after it is copied to barman?
I mean, when the archive command starts, the WAL has already been processed by
the slave server, so we don't need it after copying to the backup server, right?


Regards,

Edson

>
> >
> >
> > The files are about 7 months old.
>
> Are there newer files that would indicate that the streaming is working?
>
>
> Yes, streaming is working properly (as stated above).
>
> Thanks,
>
>
> Edson Richter
>
>
>>



--
Adrian Klaver
adrian.kla...@aklaver.com


RE: Replication: slave server has 3x size of production server?

2020-02-22 Thread Edson Richter


From: Adrian Klaver
Sent: Saturday, February 22, 2020 18:12
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/22/20 11:23 AM, Edson Richter wrote:
> 
>
> *From:* Adrian Klaver
> *Sent:* Saturday, February 22, 2020 16:16
> *To:* Edson Richter ; pgsql-general
>
> *Subject:* Re: Replication: slave server has 3x size of production
> server?
> On 2/22/20 11:03 AM, Edson Richter wrote:
> > 
> 
> >
>
> >
> >
> > Streaming replication. Initiated via pg_basebackup.
> >
> > Settings on master server:
> >
> > # - Sending Server(s) -
> > # Set these on the master and on any standby that will send replication
> > data.
> > max_wal_senders = 2 # max number of walsender processes
> > (change requires restart)
> > wal_keep_segments = 25  # in logfile segments, 16MB each; 0 
> disables
> > #wal_sender_timeout = 60s   # in milliseconds; 0 disables
> > max_replication_slots = 2   # max number of replication
> > slots (change requires restart)
> > #track_commit_timestamp = off   # collect timestamp of transaction
> > commit (change requires restart)
> > # - Master Server -
> > # These settings are ignored on a standby server.
> > #synchronous_standby_names = '' # standby servers that provide sync
> > rep number of sync standbys and comma-separated list of
> > application_name from standby(s); '*' = all
> > #vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is
> > delayed
> >
> >
> >
> > Settings on slave server:
> >
> > # - Standby Servers -
> > # These settings are ignored on a master server.
> > hot_standby = on# "on" allows queries during
> > recovery (change requires restart)
> > max_standby_archive_delay = -1  # max delay before canceling
> > queries when reading WAL from archive; -1 allows indefinite delay
> > max_standby_streaming_delay = -1# max delay before canceling
> > queries when reading streaming WAL; -1 allows indefinite delay
> > wal_receiver_status_interval = 10s  # send replies at least this
> > often 0 disables
> > hot_standby_feedback = on   # send info from standby to
> > prevent query conflicts
> > wal_receiver_timeout = 0# time that receiver waits for
> > communication from master in milliseconds; 0 disables
> > wal_retrieve_retry_interval = 5s# time to wait before retrying
> > to retrieve WAL after a failed attempt
>
> What are the settings for:
>
> archive_mode
> archive_command
>
> on the standby?
>
> Are the files in pg_xlog on the standby mostly from well in the past?
>
>
> Actually, the standby server is sending WALs to a backup (barman) server:
>
> archive_mode = always   # enables archiving; off, on, or always
> (change requires restart)
> archive_command = 'rsync -e "ssh -2 -C -p 2022" -az %p
> barman@192.168.0.2:/dados/barman/dbcluster/incoming/%f'

And the above is working, the files are showing up on the barman server?

Yes, it is working. The last xlog file is present on all three servers.
Also, comparing the last transaction number on master and slave shows that all
are in sync.
Last, but not least, select max(id) from a busy table shows the same id (when
queried almost simultaneously using a simple test routine).

>
>
> The files are about 7 months old.

Are there newer files that would indicate that the streaming is working?

Yes, streaming is working properly (as stated above).

Thanks,


Edson Richter


>
>
> Thanks,
>
> Edson
>
> >
> >
> > Regards,
> >
> > Edson
> >
> > >
> > >
> > > Edson
> > >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


--
Adrian Klaver
adrian.kla...@aklaver.com


RE: Replication: slave server has 3x size of production server?

2020-02-22 Thread Edson Richter


From: Adrian Klaver
Sent: Saturday, February 22, 2020 16:16
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/22/20 11:03 AM, Edson Richter wrote:
> 
>

>
>
> Streaming replication. Initiated via pg_basebackup.
>
> Settings on master server:
>
> # - Sending Server(s) -
> # Set these on the master and on any standby that will send replication
> data.
> max_wal_senders = 2 # max number of walsender processes
> (change requires restart)
> wal_keep_segments = 25  # in logfile segments, 16MB each; 0 disables
> #wal_sender_timeout = 60s   # in milliseconds; 0 disables
> max_replication_slots = 2   # max number of replication
> slots (change requires restart)
> #track_commit_timestamp = off   # collect timestamp of transaction
> commit (change requires restart)
> # - Master Server -
> # These settings are ignored on a standby server.
> #synchronous_standby_names = '' # standby servers that provide sync
> rep number of sync standbys and comma-separated list of
> application_name from standby(s); '*' = all
> #vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is
> delayed
>
>
>
> Settings on slave server:
>
> # - Standby Servers -
> # These settings are ignored on a master server.
> hot_standby = on# "on" allows queries during
> recovery (change requires restart)
> max_standby_archive_delay = -1  # max delay before canceling
> queries when reading WAL from archive; -1 allows indefinite delay
> max_standby_streaming_delay = -1# max delay before canceling
> queries when reading streaming WAL; -1 allows indefinite delay
> wal_receiver_status_interval = 10s  # send replies at least this
> often 0 disables
> hot_standby_feedback = on   # send info from standby to
> prevent query conflicts
> wal_receiver_timeout = 0# time that receiver waits for
> communication from master in milliseconds; 0 disables
> wal_retrieve_retry_interval = 5s# time to wait before retrying
> to retrieve WAL after a failed attempt

What are the settings for:

archive_mode
archive_command

on the standby?

Are the files in pg_xlog on the standby mostly from well in the past?

Actually, the standby server is sending WALs to a backup (barman) server:

archive_mode = always   # enables archiving; off, on, or always (change 
requires restart)
archive_command = 'rsync -e "ssh -2 -C -p 2022" -az %p 
barman@192.168.0.2:/dados/barman/dbcluster/incoming/%f'


The files are about 7 months old.


Thanks,

Edson

>
>
> Regards,
>
> Edson
>
> >
> >
> > Edson
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


--
Adrian Klaver
adrian.kla...@aklaver.com


RE: Replication: slave server has 3x size of production server?

2020-02-22 Thread Edson Richter


From: Adrian Klaver
Sent: Saturday, February 22, 2020 15:50
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/22/20 10:05 AM, Edson Richter wrote:
> 
>
> *From:* Adrian Klaver
> *Sent:* Saturday, February 22, 2020 14:33
> *To:* Edson Richter ; pgsql-general
>
> *Subject:* Re: Replication: slave server has 3x size of production
> server?
>     On 2/22/20 9:25 AM, Edson Richter wrote:
> > Hi!
> >
> > I have a database cluster created at 9.6.10 on a Linux x64 RHEL server. I
> > made progressive upgrades, first upgrading the slave and then the master.
> > Currently both are running 9.6.17.
> > The current production server is 196 GB in size.
> > Nevertheless, the replicated (slave) server is 598 GB in size.
> > The replication server is 3x the size of production; is that normal?
>
> How are you measuring the sizes?
>
>
> This is the command:
>
> du --max-depth 1 -h pgDbCluster
>
>
> Production:
>
> du --max-depth 1 -h pgDbCluster
>
> 56M  pgDbCluster/pg_log
> 444K pgDbCluster/global
> 4,0K pgDbCluster/pg_stat
> 4,0K pgDbCluster/pg_snapshots
> 16K  pgDbCluster/pg_logical
> 20K  pgDbCluster/pg_replslot
> 61M  pgDbCluster/pg_subtrans
> 4,0K pgDbCluster/pg_commit_ts
> 465M pgDbCluster/pg_xlog
> 4,0K pgDbCluster/pg_twophase
> 12M  pgDbCluster/pg_multixact
> 4,0K pgDbCluster/pg_serial
> 195G pgDbCluster/base
> 284K pgDbCluster/pg_stat_tmp
> 12M  pgDbCluster/pg_clog
> 4,0K pgDbCluster/pg_dynshmem
> 12K  pgDbCluster/pg_notify
> 4,0K pgDbCluster/pg_tblspc
> 196G pgDbCluster
>
>
> Slave:
>
> du -h --max-depth 1 pgDbCluster
>
> 403G pgDbCluster/pg_xlog
> 120K pgDbCluster/pg_log
> 424K pgDbCluster/global
> 0    pgDbCluster/pg_stat
> 0    pgDbCluster/pg_snapshots
> 4,0K pgDbCluster/pg_logical
> 8,0K pgDbCluster/pg_replslot
> 60M  pgDbCluster/pg_subtrans
> 0    pgDbCluster/pg_commit_ts
> 0    pgDbCluster/pg_twophase
> 11M  pgDbCluster/pg_multixact
> 0    pgDbCluster/pg_serial
> 195G pgDbCluster/base
> 12M  pgDbCluster/pg_clog
> 0    pgDbCluster/pg_dynshmem
> 8,0K pgDbCluster/pg_notify
> 12K  pgDbCluster/pg_stat_tmp
> 0    pgDbCluster/pg_tblspc
> 598G pgDbCluster

So the WAL logs are not being cleared.

What replication method is being used?

What are the settings for the replication?

Streaming replication. Initiated via pg_basebackup.

Settings on master server:

# - Sending Server(s) -
# Set these on the master and on any standby that will send replication data.
max_wal_senders = 2 # max number of walsender processes (change 
requires restart)
wal_keep_segments = 25  # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s   # in milliseconds; 0 disables
max_replication_slots = 2   # max number of replication slots (change 
requires restart)
#track_commit_timestamp = off   # collect timestamp of transaction commit 
(change requires restart)
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep number 
of sync standbys and comma-separated list of application_name from standby(s); 
'*' = all
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed



Settings on slave server:

# - Standby Servers -
# These settings are ignored on a master server.
hot_standby = on# "on" allows queries during recovery 
(change requires restart)
max_standby_archive_delay = -1  # max delay before canceling queries 
when reading WAL from archive; -1 allows indefinite delay
max_standby_streaming_delay = -1# max delay before canceling queries 
when reading streaming WAL; -1 allows indefinite delay
wal_receiver_status_interval = 10s  # send replies at least this often 0 
disables
hot_standby_feedback = on   # send info from standby to prevent 
query conflicts
wal_receiver_timeout = 0# time that receiver waits for 
communication from master in milliseconds; 0 disables
wal_retrieve_retry_interval = 5s# time to wait before retrying to 
retrieve WAL after a failed attempt


Regards,

Edson

>
>
> Edson
>

--
Adrian Klaver
adrian.kla...@aklaver.com


RE: Replication: slave server has 3x size of production server?

2020-02-22 Thread Edson Richter


From: Adrian Klaver
Sent: Saturday, February 22, 2020 14:33
To: Edson Richter ; pgsql-general
Subject: Re: Replication: slave server has 3x size of production server?

On 2/22/20 9:25 AM, Edson Richter wrote:
> Hi!
>
> I have a database cluster created at 9.6.10 on a Linux x64 RHEL server. I
> made progressive upgrades, first upgrading the slave and then the master.
> Currently both are running 9.6.17.
> The current production server is 196 GB in size.
> Nevertheless, the replicated (slave) server is 598 GB in size.
> The replication server is 3x the size of production; is that normal?

How are you measuring the sizes?


This is the command:

du --max-depth 1 -h pgDbCluster


Production:

du --max-depth 1 -h pgDbCluster

56M  pgDbCluster/pg_log
444K pgDbCluster/global
4,0K pgDbCluster/pg_stat
4,0K pgDbCluster/pg_snapshots
16K  pgDbCluster/pg_logical
20K  pgDbCluster/pg_replslot
61M  pgDbCluster/pg_subtrans
4,0K pgDbCluster/pg_commit_ts
465M pgDbCluster/pg_xlog
4,0K pgDbCluster/pg_twophase
12M  pgDbCluster/pg_multixact
4,0K pgDbCluster/pg_serial
195G pgDbCluster/base
284K pgDbCluster/pg_stat_tmp
12M  pgDbCluster/pg_clog
4,0K pgDbCluster/pg_dynshmem
12K  pgDbCluster/pg_notify
4,0K pgDbCluster/pg_tblspc
196G pgDbCluster


Slave:

du -h --max-depth 1 pgDbCluster

403G pgDbCluster/pg_xlog
120K pgDbCluster/pg_log
424K pgDbCluster/global
0    pgDbCluster/pg_stat
0    pgDbCluster/pg_snapshots
4,0K pgDbCluster/pg_logical
8,0K pgDbCluster/pg_replslot
60M  pgDbCluster/pg_subtrans
0    pgDbCluster/pg_commit_ts
0    pgDbCluster/pg_twophase
11M  pgDbCluster/pg_multixact
0    pgDbCluster/pg_serial
195G pgDbCluster/base
12M  pgDbCluster/pg_clog
0    pgDbCluster/pg_dynshmem
8,0K pgDbCluster/pg_notify
12K  pgDbCluster/pg_stat_tmp
0    pgDbCluster/pg_tblspc
598G pgDbCluster
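As a quick cross-check of listings like the ones above, pg_xlog usage can be
estimated from the segment count alone, since 9.6 WAL segments are 16 MB each
(hypothetical helper):

```shell
#!/usr/bin/env bash
# Rough pg_xlog size estimate in MB: number of files times 16 MB.
estimate_xlog_mb() {
  local count
  count=$(find "$1" -maxdepth 1 -type f | wc -l)
  echo $(( count * 16 ))
}
```

Run against pgDbCluster/pg_xlog on each server; a large gap between the two
estimates points at retained segments, as seen here.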


Edson



Where is the space being taken up on disk?

>
> Shall I drop the slave server and re-create it? How to avoid this
> situation in future?
>
> Thanks,
>
> Edson
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com


Replication: slave server has 3x size of production server?

2020-02-22 Thread Edson Richter
Hi!

I have a database cluster created at 9.6.10 on a Linux x64 RHEL server. I made
progressive upgrades, first upgrading the slave and then the master.
Currently both are running 9.6.17.
The current production server is 196 GB in size.
Nevertheless, the replicated (slave) server is 598 GB in size.
The replication server is 3x the size of the production server; is that normal?

Shall I drop the slave server and re-create it? How can I avoid this situation
in the future?

Thanks,

Edson