Hi,
I intend to increase the speed of streaming replication with logical
decoding, using the following configuration:
wal_level = logical
fsync = on
synchronous_commit = off
wal_sync_method = fdatasync
wal_buffers = 256MB
wal_writer_delay = 2s
checkpoint_timeout = 15min
max_wal_size = 10GB
Th
Hi,
I am running streaming replication with WAL archiving.
As you can see below, the master's pg_xlog is at WAL segment
000101330093, but archive_status is well behind:
000101330088.done
What could be the reason for this? How can I make PostgreSQL
archive the WAL from
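The gap between the current segment and the last `.done` file can be estimated from the segment names themselves. A rough sketch, using hypothetical full 24-character names (the names quoted in the message are truncated), exact only while both names fall within the same log file:

```shell
# Hypothetical full-length WAL segment names (timeline|log|segment, 8 hex
# chars each); the names quoted in the message are truncated.
current=000000010000013300000093
last_done=000000010000013300000088
# Drop the timeline ID (first 8 chars) and compare log+segment as one hex value.
cur=$(( 0x$(echo "$current" | cut -c9-24) ))
old=$(( 0x$(echo "$last_done" | cut -c9-24) ))
echo "archiver is $(( cur - old )) segment(s) behind"
```

Across log-file boundaries (and on pre-9.3 servers, which skip the FF segment) this simple subtraction overcounts slightly; it is only meant as a quick health check.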
On Wed, Sep 27, 2017 at 2:55 PM, Jerry Sievers
wrote:
> John Britto writes:
>
> > Hello,
> >
> > I have a streaming replication setup along with WAL archive.
> >
> > archive_command = 'test ! -f /var/pg_archive/%f && cp %p <archive location>/%f && scp %p postgres@192.168.0.123:/%f'
> >
> > When the SCP
John Britto writes:
> Hello,
>
> I have a streaming replication setup along with WAL archive.
>
> archive_command = 'test ! -f /var/pg_archive/%f && cp %p <archive location>/%f && scp %p postgres@192.168.0.123:/%f'
>
> When the SCP command fails, the master repeatedly tries to send the
> archived WAL to s
Hello,
I have a streaming replication setup along with WAL archive.
archive_command = 'test ! -f /var/pg_archive/%f && cp %p <archive location>/%f && scp %p postgres@192.168.0.123:/%f'
When the SCP command fails, the master repeatedly tries to send the
archived WAL to standby. But during this time, the pg_xlog dir
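For reference, a common shape for such a command chains the local copy and the remote copy, so that an scp failure makes the whole command return nonzero and PostgreSQL retries the same segment; while it keeps failing, completed segments accumulate in pg_xlog because their `.ready` status files are never cleared. A sketch only; the paths and host below are the placeholders from the thread:

```conf
# Sketch: a nonzero exit from any step makes PostgreSQL retry this segment.
archive_command = 'test ! -f /var/pg_archive/%f && cp %p /var/pg_archive/%f && scp /var/pg_archive/%f postgres@192.168.0.123:/%f'
```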
On Tue, Sep 12, 2017 at 11:43 PM, Ron Johnson wrote:
> On 09/07/2017 09:32 AM, Tom Lane wrote:
>>
>> Ron Johnson writes:
>>>
>>> On 09/07/2017 09:08 AM, Tom Lane wrote:
Manual cleanup shouldn't be very hard, fortunately. Run pg_controldata
to see where the last checkpoint is, and
On 09/07/2017 09:32 AM, Tom Lane wrote:
Ron Johnson writes:
On 09/07/2017 09:08 AM, Tom Lane wrote:
Manual cleanup shouldn't be very hard, fortunately. Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one inclu
On 09/07/2017 05:07 PM, Michael Paquier wrote:
On Thu, Sep 7, 2017 at 11:08 PM, Tom Lane wrote:
Manual cleanup shouldn't be very hard, fortunately. Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including
On Thu, Sep 7, 2017 at 11:08 PM, Tom Lane wrote:
> Manual cleanup shouldn't be very hard, fortunately. Run pg_controldata
> to see where the last checkpoint is, and delete WAL files whose names
> indicate they are before that (but not the one including the checkpoint!).
> If you don't intend to d
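The rule quoted above works because 24-character WAL segment names sort lexically in creation order, so "before the checkpoint" is a plain string comparison. A minimal sketch with hypothetical names; in practice pg_archivecleanup performs this comparison for you, with the checkpoint's segment taken from pg_controldata's "Latest checkpoint's REDO WAL file":

```shell
# Hypothetical segment names; checkpoint_wal stands in for the value that
# pg_controldata would report for the latest checkpoint.
checkpoint_wal=0000000100000133000000A0
removable=""
for f in 000000010000013300000088 0000000100000133000000A0 0000000100000133000000B1; do
  # 24-char WAL names sort lexically in creation order
  first=$(printf '%s\n' "$f" "$checkpoint_wal" | sort | head -n1)
  if [ "$f" != "$checkpoint_wal" ] && [ "$first" = "$f" ]; then
    removable="$removable $f"
  fi
done
echo "removable:$removable"
```

Note the segment containing the checkpoint itself is excluded, matching the warning in the advice above.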
Ron Johnson writes:
> On 09/07/2017 09:08 AM, Tom Lane wrote:
>> Manual cleanup shouldn't be very hard, fortunately. Run pg_controldata
>> to see where the last checkpoint is, and delete WAL files whose names
>> indicate they are before that (but not the one including the checkpoint!).
> All WAL
On 09/07/2017 09:08 AM, Tom Lane wrote:
Ron Johnson writes:
After disabling log shipping by setting "archive_mode = off" and then
running "pg_ctl reload", old WAL files and their associated .ready files
aren't being deleted.
Hmm. I might be misremembering, but I think that it's the archive
Ron Johnson writes:
> After disabling log shipping via setting "archive_mode = off", and then
> running, "pg_ctl reload", old WAL files and their associated .ready files
> aren't being deleted.
Hmm. I might be misremembering, but I think that it's the archiver
process that is in charge of dele
Hi,
v8.4 (and there's nothing I can do about it).
After disabling log shipping by setting "archive_mode = off" and then
running "pg_ctl reload", old WAL files and their associated .ready files
aren't being deleted.
Is there any document you can point me to as to why this is happening, and
On Tue, Aug 15, 2017 at 4:45 AM, basti wrote:
> I have fixed it; pg_update had created a wrong cluster
Let's be sure that we are not talking about a bug here, because you
are giving no details so it is hard to know if what you are seeing is
caused by an incorrect operation, or if that's an actual bug
I have fixed it; pg_update had created a wrong cluster
On 14.08.2017 20:52, basti wrote:
Hello,
I am trying to replicate my database. What have I done:
- create a cluster on the slave (UTF8, en_US.utf8 collate/ctype)
- stop the cluster and clean up the datadir
- do a basebackup from the master
- start the db cluster
Master
On Tue, Aug 15, 2017 at 3:52 AM, basti wrote:
> The master and slave have the same locales set.
> I don't understand: I can create a database in en_US.utf8, and then when I
> do the basebackup it changes to the C locale.
> I can't find any option for pg_basebackup to set locale/collate.
> I use this how
Hello,
I am trying to replicate my database. What have I done:
- create a cluster on the slave (UTF8, en_US.utf8 collate/ctype)
- stop the cluster and clean up the datadir
- do a basebackup from the master
- start the db cluster
The master has UTF8, en_US.utf8 collate/ctype.
Now my db on the slave has UTF8, c.utf8 collate/ctype.
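Worth noting for the exchange above: pg_basebackup clones the master's cluster byte for byte, so the cluster pre-created on the slave (and the locale chosen at its initdb) is simply overwritten; pg_basebackup has no locale option because the locales always come from the master. One way to compare per-database locales on both sides, a sketch:

```sql
-- Run on master and slave and compare; columns exist since 8.4.
SELECT datname, datcollate, datctype FROM pg_database;
```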
2017-04-10 16:49 GMT+02:00 Bill Moran :
>
> > >> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo
> > >> wrote:
> > >> > Hi,
> > >> > I have a very big table (10GB).
> > >> > I noticed that many WAL segments are being written when elaborating
> read
> > >> > only transactions like this:
> > >> > sele
> >> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo
> >> wrote:
> >> > Hi,
> >> > I have a very big table (10GB).
> >> > I noticed that many WAL segments are being written when elaborating read
> >> > only transactions like this:
> >> > select * from dati256 where id >4300 limit 100
2017-04-06 17:51 GMT+02:00 Tom DalPozzo :
>
>
> 2017-04-04 19:18 GMT+02:00 Scott Marlowe :
>
>> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo
>> wrote:
>> > Hi,
>> > I have a very big table (10GB).
>> > I noticed that many WAL segments are being written when elaborating read
>> > only transactions
On Thu, Apr 6, 2017 at 8:51 AM, Tom DalPozzo wrote:
>
> What is the meaning of FPI_FOR_HINT?
>
>
Full Page Image for Hint [Bits]
It's noted as being dependent on checksums being enabled.
I have a feel for the interactions involved here but not enough to explain
them in detail.
David J.
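One way to observe the effect being discussed, sketched for a 9.x server (the function was renamed in v10), is to compare the WAL insert position before and after the "read-only" scan; the table name is taken from the thread:

```sql
SELECT pg_current_xlog_insert_location();      -- note the position
SELECT count(*) FROM dati256 WHERE id > 4300;  -- the "read-only" scan
SELECT pg_current_xlog_insert_location();      -- advances if hint-bit FPIs were logged
```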
2017-04-04 19:18 GMT+02:00 Scott Marlowe :
> On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo wrote:
> > Hi,
> > I have a very big table (10GB).
> > I noticed that many WAL segments are being written when elaborating read
> > only transactions like this:
> > select * from dati256 where id >43
On Tue, Apr 4, 2017 at 9:46 AM, Tom DalPozzo wrote:
> Hi,
> I have a very big table (10GB).
> I noticed that many WAL segments are being written when elaborating read
> only transactions like this:
> select * from dati256 where id >4300 limit 100;
> I don't understand why are there
On 04/04/17 16:46, Tom DalPozzo wrote:
Hi,
I have a very big table (10GB).
I noticed that many WAL segments are being written when elaborating read
only transactions like this:
select * from dati256 where id >4300 limit 100;
I don't understand why are there WAL writings during rea
Hi,
I have a very big table (10GB).
I noticed that many WAL segments are being written while processing
read-only transactions like this:
select * from dati256 where id >4300 limit 100;
I don't understand why there are WAL writes during read-only transactions.
Regards
Pupillo
On Mon, Dec 12, 2016 at 12:37 PM, Albe Laurenz
wrote:
> Torsten Förtsch wrote:
> > if I do something like this:
> >
> > BEGIN;
> > UPDATE tbl SET data='something' WHERE pkey='selector';
> > UPDATE tbl SET data=NULL WHERE pkey='selector';
> > COMMIT;
> >
> > Given 'selector' actually exists, I get
Torsten Förtsch wrote:
> if I do something like this:
>
> BEGIN;
> UPDATE tbl SET data='something' WHERE pkey='selector';
> UPDATE tbl SET data=NULL WHERE pkey='selector';
> COMMIT;
>
> Given 'selector' actually exists, I get a separate WAL entry for each of the
> updates. My question is,
> does
Hi,
if I do something like this:
BEGIN;
UPDATE tbl SET data='something' WHERE pkey='selector';
UPDATE tbl SET data=NULL WHERE pkey='selector';
COMMIT;
Given 'selector' actually exists, I get a separate WAL entry for each of
the updates. My question is, does the first update actually hit the data
On Mon, Dec 12, 2016 at 9:31 AM, Patrick B wrote:
> No, it didn't copy; I tested it here. I had to manually copy the history file
> from /var/lib/pgsql/9.2/data/pg_xlogs from the new master to the same
> directory on the slaves.
An archive command is able to properly fetch the history files to
defi
2016-12-12 12:09 GMT+13:00 Patrick B :
> 2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
>>
>> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> Are the history files copied with the wal_files? Or I have to do it
>>> separated?
>>>
>>> 0003.history': No such file o
2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> Are the history files copied with the wal_files? Or I have to do it
>> separated?
>>
>> 0003.history': No such file or directory
>>
>>
>> I'm using PostgreSQL 9.2.
>>
>
>
On Mon, Dec 12, 2016 at 7:48 AM, Patrick B wrote:
> Hi guys,
>
> Are the history files copied with the wal_files? Or I have to do it
> separated?
>
> 0003.history': No such file or directory
>
>
> I'm using PostgreSQL 9.2.
>
Can you please explain the scenario you are referring to? During s
Hi guys,
Are the history files copied with the wal_files? Or do I have to do it
separately?
0003.history': No such file or directory
I'm using PostgreSQL 9.2.
Cheers
Patrick
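For the question above: when WAL shipping goes through an archive/restore command pair, history files travel through the same `%f` mechanism as ordinary segments, so no separate handling is needed. A hedged recovery.conf sketch; the path is a placeholder:

```conf
# %f is substituted with history-file names (e.g. 00000003.history) as well
# as ordinary 16MB segment names, so one restore_command covers both.
restore_command = 'cp /path/to/archive/%f %p'
```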
On Tue, Dec 6, 2016 at 7:24 AM, Israel Brewster wrote:
> Simple question: are WAL files archived when full, or when recycled?
When full.
> That is, are the WAL archive files "up-to-date" other than the current WAL
> file,
> or will the archives always be wal_keep_segments behind?
WAL archives
Simple question: are WAL files archived when full, or when recycled? That is,
are the WAL archive files "up-to-date" other than the current WAL file, or
will the archives always be wal_keep_segments behind?
---
Israel Brewster
Systems Analyst II
Ravn Alaska
52
2016-12-01 14:15 GMT+13:00 David G. Johnston :
> On Wed, Nov 30, 2016 at 6:05 PM, Patrick B
> wrote:
>
>> https://www.postgresql.org/docs/9.2/static/runtime-config-
>> replication.html
>>
>> wal_keep_segments is the parameter responsible for streaming replication
>> be able to recover itself with
On Wed, Nov 30, 2016 at 6:05 PM, Patrick B wrote:
> https://www.postgresql.org/docs/9.2/static/runtime-config-replication.html
>
> wal_keep_segments is the parameter that allows streaming replication
> to recover by itself without using wal_files, is that right?
>
[...] without using w
2016-11-29 23:59 GMT+13:00 Patrick B :
>
>
> 2016-11-29 16:36 GMT+13:00 David G. Johnston :
>
>> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
>> wrote:
>>
>>>
>>> How is that even possible?? I don't understand!
>>>
>>>
>> https://www.postgresql.org/docs/9.2/static/warm-standby.html
>>
2016-11-29 16:36 GMT+13:00 David G. Johnston :
> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
> wrote:
>
>>
>> How is that even possible?? I don't understand!
>>
>>
> https://www.postgresql.org/docs/9.2/static/warm-standby.html
> """
>
> If you use streaming replication without file-ba
On Mon, Nov 28, 2016 at 8:22 PM, Patrick B wrote:
>
> How is that even possible?? I don't understand!
>
>
https://www.postgresql.org/docs/9.2/static/warm-standby.html
"""
If you use streaming replication without file-based continuous archiving,
you have to set wal_keep_segments in the
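As a sizing aside for the wal_keep_segments advice quoted above: each kept segment is a 16 MB file in a 9.2-era cluster, so the retention cost on the master is easy to bound. A sketch with a hypothetical setting:

```shell
# Hypothetical setting; each 9.2-era WAL segment is 16 MB.
wal_keep_segments=32
seg_mb=16
extra_mb=$(( wal_keep_segments * seg_mb ))
echo "extra WAL retained on the master: ${extra_mb} MB"
```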
2016-11-29 15:21 GMT+13:00 David Steele :
> On 11/24/16 8:05 PM, Patrick B wrote:
>
> > hmm.. I really don't get it.
> >
> >
> >
> > If I get messages like:
> >
> > cp: cannot stat '/walfiles/00021AF800A5': No such file or
> > directory
> >
> > In my head, it's saying that it was una
On 11/24/16 8:05 PM, Patrick B wrote:
> hmm.. I really don't get it.
>
>
>
> If I get messages like:
>
> cp: cannot stat '/walfiles/00021AF800A5': No such file or
> directory
>
> In my head, it's saying that it was unable to recover that file and,
> because of that, there is m
On Fri, Nov 25, 2016 at 10:05 AM, Patrick B wrote:
> If I get messages like:
>
> cp: cannot stat '/walfiles/00021AF800A5': No such file or
> directory
>
> In my head, it's saying that it was unable to recover that file and, because
> of that, there is missing data.
> Even if the server
2016-11-23 16:18 GMT+13:00 Venkata B Nagothi :
>
> On Wed, Nov 23, 2016 at 1:59 PM, Patrick B
> wrote:
>
>>
>>
>> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>>
>>>
>>>
>>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
>>> wrote:
>>>
Hi guys,
I currently have a slave02 server that
On Wed, Nov 23, 2016 at 1:59 PM, Patrick B wrote:
>
>
> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>
>>
>>
>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> I currently have a slave02 server that is replicating from another
>>> slave01 via Cascading replication.
2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>
>
> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I currently have a slave02 server that is replicating from another
>> slave01 via Cascading replication. The master01 server is shipping
>> wal_files (via ssh) to both slav
On Wed, Nov 23, 2016 at 1:03 PM, Patrick B wrote:
> Hi guys,
>
> I currently have a slave02 server that is replicating from another slave01
> via Cascading replication. The master01 server is shipping wal_files (via
> ssh) to both slaves.
>
>
> I'm doing some tests on slave02 to test the recovery
Hi guys,
I currently have a slave02 server that is replicating from another slave01
via Cascading replication. The master01 server is shipping wal_files (via
ssh) to both slaves.
I'm doing some tests on slave02 to test the recovery via wal_files... The
goal here is to stop Postgres, wait a few min
2016-11-14 15:33 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Nov 14, 2016 at 1:22 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> My current scenario is:
>>
>> master01 - Postgres 9.2 master DB
>> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
>> read-only queries
>> slave02 -
On Mon, Nov 14, 2016 at 1:22 PM, Patrick B wrote:
> Hi guys,
>
> My current scenario is:
>
> master01 - Postgres 9.2 master DB
> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
> read-only queries
> slave02 - Postgres 9.2 streaming replication + wal_files slave server @
Hi guys,
My current scenario is:
master01 - Postgres 9.2 master DB
slave01 - Postgres 9.2 streaming replication + wal_files slave server for
read-only queries
slave02 - Postgres 9.2 streaming replication + wal_files slave server @ AWS
master01 sends wal_files to both slaves via ssh.
On the ma
On 11/3/16 1:16 PM, Tom DalPozzo wrote:
so if I understand right, the ...DE file's previous name was less than
...C6, and it was renamed well in advance for later use. I had missed
this advance renaming.
That is correct.
--
-David
da...@pgmasters.net
--
Sent via pgsql-general mailing list (pgsql-g
Hi,
so if I understand right, the ...DE file's previous name was less than
...C6, and it was renamed well in advance for later use. I had missed this
advance renaming.
Thanks!
Pupillo
2016-11-03 11:45 GMT+01:00 hubert depesz lubaczewski :
> On Thu, Nov 03, 2016 at 11:28:57AM +0100, Tom DalPozzo wrote:
On Thu, Nov 03, 2016 at 11:28:57AM +0100, Tom DalPozzo wrote:
> What am I missing?
David already explained, but you might want to read also:
https://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/
depesz
--
The b
On 11/3/16 12:28 PM, Tom DalPozzo wrote:
Hi,
I found, in the pg_xlog dir, several WAL segment files with old modification
timestamps but with names greater than those of more recent files.
Ex.:
000100C6 modified today
000100DE modified yesterday
This is complet
Hi,
I found, in the pg_xlog dir, several WAL segment files with old modification
timestamps but with names greater than those of more recent files.
Ex.:
000100C6 modified today
000100DE modified yesterday
I thought that could not be possible.
I'm doing some tests w
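The usual explanation, confirmed later in the thread, is segment recycling: at checkpoint time a no-longer-needed segment is renamed to a *future* name for reuse, keeping its old mtime. A sketch of how a recycled file can carry a higher name than the current one, using hypothetical full-length names (the names above are truncated) and an illustrative head-room value:

```shell
# Hypothetical timeline+log prefix; the names in the message are truncated.
prefix=0000000100000000
cur_seg=$(( 0xC6 ))   # segment currently being written (...C6)
headroom=24           # illustrative recycling head-room, not a fixed rule
future=$(printf '%s000000%02X' "$prefix" $(( cur_seg + headroom )))
echo "a recycled segment can be pre-named: $future"
```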
Hi Moreno:
On Wed, Aug 3, 2016 at 1:07 PM, Moreno Andreo wrote:
It's already been answered, but as it seems to be answering a chunk of
my mail...
> Should I keep fsync off? I'd think it would be better leaving it on, right?
Yes. If you have to ask whether fsync should be on, it should.
I mean,
Il 03/08/2016 18:01, Jeff Janes ha scritto:
On Thu, Jul 28, 2016 at 6:25 AM, Moreno Andreo wrote:
Hi folks! :-)
I'm about to bring up my brand new production server and I was wondering if
it's possible to calculate (approx.) the WAL directory size.
I have to choose what's better in terms of cos
On Thu, Jul 28, 2016 at 6:25 AM, Moreno Andreo wrote:
> Hi folks! :-)
> I'm about to bring up my brand new production server and I was wondering if
> it's possible to calculate (approx.) the WAL directory size.
> I have to choose what's better in terms of cost vs. performance (we are on
> Google C
On Thu, Jul 28, 2016 at 6:33 AM, David G. Johnston
wrote:
> On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo
> wrote:
>>
>> I've read somewhere that the formula should be 16 MB * 3 *
>> checkpoint_segment in size.
>
> [...]
>
>>
>> Using the above formula I have:
>> 16 MB * 3 * 1 GB
>> that lea
Il 03/08/2016 14:12, Michael Paquier ha scritto:
On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo wrote:
Should I keep fsync off? I'd think it would be better leaving it on, right?
>From the docs:
https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
Whil
On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo wrote:
> Should I keep fsync off? I'd think it would be better leaving it on, right?
>From the docs:
>https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
While turning off fsync is often a performance benefi
Il 29/07/2016 17:26, Francisco Olarte ha scritto:
Hi:
On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
wrote:
After Andreas' post and thinking about it for a while, I came to the decision
that it's better not to use RAM but another persistent disk, because there
can be an instant between when a WAL
Il 29/07/2016 15:30, David G. Johnston
ha scritto:
On Fri, Jul 29, 2016 at
7:08 AM, Moreno Andreo wrote:
R
Hi:
On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
wrote:
> After Andreas' post and thinking about it for a while, I came to the decision
> that it's better not to use RAM but another persistent disk, because there
> can be an instant between when a WAL is written and when it's fsync'ed, and if a
> failur
To: FarjadFarid(ChkNet)
;
pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL directory
size calculation
Il 29/07/2016 11:44, FarjadFarid(ChkNet)
ha scritto:
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo
wrote:
> Regarding backups I disagree. Files related to database must be consistent
> to the database itself, so backup must be done saving both database and
> images.
>
I'd suggest you consider that such binary data be defined as immutable.
T
Sorry, the URL should have been https://www.maytech.net/
Of course there are other companies in this space.
From: Moreno Andreo [mailto:moreno.and...@evolu-s.it]
Sent: 29 July 2016 12:08
To: FarjadFarid(ChkNet) ;
pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL
. They also distributed servers around the
world.
Hope this helps.
From: Moreno Andreo [mailto:moreno.and...@evolu-s.it]
Sent: 29 July 2016 12:08
To: FarjadFarid(ChkNet) ;
pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL directory size calculation
Il 29/07/2016
Il 29/07/2016 10:43, John R Pierce ha
scritto:
Aside of this
luck.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Moreno Andreo
Sent: 29 July 2016 10:19
To: pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL directory size calculation
Il 29/07/2016 10:43, John R Pierce ha
Il 29/07/2016 10:43, John R Pierce ha scritto:
Aside from this: I have 350 DBs that sum to a bit more than 1 TB, and I plan
to use wal_level=archive because I plan to have a backup server with barman.
Il 29/07/2016 10:43, John R Pierce ha
scritto:
Aside of this,
I'm having 350 DBs that sum up a bit more than 1 TB, and
plan
to use wal_level=archive because I plan to have a backup
Aside from this: I have 350 DBs that sum to a bit more than 1 TB, and I plan
to use wal_level=archive because I plan to have a backup server with
barman.
With that many databases, that many objects, and no doubt many client
connections, I'd want to spread that across a cluster of smaller
Il 28/07/2016 20:45, Francisco Olarte ha scritto:
On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo wrote:
Obviously ramdisk will be times faster disk, but having a, say, 512 GB
ramdisk will be a little too expensive :-)
Besides defeating the purpose of WAL, if you are going to use non
persistent
On Thu, Jul 28, 2016 at 9:54 AM, Andreas Kretschmer wrote:
> Without replication 1 GB would be fine, even with replication. But it must
> be reliable!
>
>
The required size of WAL depends on your intended checkpoint_timeout versus
the amount of WAL generated by data turnover. A rather smal
On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo wrote:
> Obviously ramdisk will be times faster disk, but having a, say, 512 GB
> ramdisk will be a little too expensive :-)
Besides defeating the purpose of WAL, if you are going to use non
persistent storage for WAL you could as well use minimal le
Il 28/07/2016 15:33, David G. Johnston
ha scritto:
On Thu, Jul 28, 2016 at
9:25 AM, Moreno Andreo wrote:
I've
read somewhere that the formula should be 16 MB * 3 *
Il 28/07/2016 15:54, Andreas Kretschmer ha scritto:
Am 28.07.2016 um 15:25 schrieb Moreno Andreo:
Hi folks! :-)
I'm about to bring up my brand new production server and I was
wondering if it's possible to calculate (approx.) the WAL directory
size.
I have to choose what's better in terms of co
Am 28.07.2016 um 15:25 schrieb Moreno Andreo:
Hi folks! :-)
I'm about to bring up my brand new production server and I was
wondering if it's possible to calculate (approx.) the WAL directory size.
I have to choose what's better in terms of cost vs. performance (we
are on Google Cloud Platform)
On Thu, Jul 28, 2016 at 9:25 AM, Moreno Andreo
wrote:
> I've read somewhere that the formula should be 16 MB * 3 *
> checkpoint_segment in size.
>
[...]
> Using the above formula I have:
> 16 MB * 3 * 1 GB
> that leads to ... uh ... 48000 TB?
>
You seem to be mis-remembering the formu
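The mis-remembered part is that checkpoint_segments is a *count* of 16 MB segments, not a size, so nothing in the formula is multiplied by "1 GB". A sketch of the arithmetic with a hypothetical checkpoint_segments of 64:

```shell
# checkpoint_segments is a count of 16 MB segments, not a size;
# 64 here is a hypothetical setting.
seg_mb=16
checkpoint_segments=64
max_mb=$(( seg_mb * 3 * checkpoint_segments ))
echo "approximate pg_xlog ceiling: ${max_mb} MB"
```

That comes to about 3 GB, nowhere near the 48000 TB in the message above.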
Hi folks! :-)
I'm about to bring up my brand new production server and I was wondering
if it's possible to calculate (approx.) the WAL directory size.
I have to choose what's better in terms of cost vs. performance (we are
on Google Cloud Platform) between a ramdisk or a separate persistent
dis
On 17/05/16 00:54, Scott Moynes wrote:
> wal_keep_segments is set to 32.
>
> Here is the replication slot:
>
> slot_name|
> n6lbb2vmohwuxoyk_00018732_f58b5354_79ad_4e6e_b18b_47acb1d7ce1f
> plugin | test_decoding
> slot_type| logical
> datoid | 18732
> datab
wal_keep_segments is set to 32.
Here is the replication slot:
slot_name|
n6lbb2vmohwuxoyk_00018732_f58b5354_79ad_4e6e_b18b_47acb1d7ce1f
plugin | test_decoding
slot_type| logical
datoid | 18732
database | omitted
active | f
xmin |
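An inactive slot like the one above (active = f) still pins WAL from its restart_lsn onward, which is the usual reason segments are never recycled even though archive_command succeeds. A sketch for checking, and, if the consumer is gone for good, releasing the slot so recycling can resume:

```sql
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
-- Dropping a truly unused slot releases the WAL it was pinning:
-- SELECT pg_drop_replication_slot('n6lbb2vmohwuxoyk_00018732_f58b5354_79ad_4e6e_b18b_47acb1d7ce1f');
```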
On 05/16/2016 03:33 PM, Scott Moynes wrote:
I have a PostgreSQL server that is not recycling WAL files. Log files
are continually created and no old log files are ever removed.
Running PostgreSQL v 9.4.8 with archive settings:
archive_mode = on
archive_command = /bin/true
Checkpoint lo
Scott Moynes wrote:
> I have a PostgreSQL server that is not recycling WAL files. Log files are
> continually created and no old log files are ever removed.
>
> Running PostgreSQL v 9.4.8 with archive settings:
>
> archive_mode = on
> archive_command = /bin/true
>
> Checkpoint logging is
I have a PostgreSQL server that is not recycling WAL files. Log files are
continually created and no old log files are ever removed.
Running PostgreSQL v 9.4.8 with archive settings:
archive_mode = on
archive_command = /bin/true
Checkpoint logging is enabled and does not record any logs
On 12/15/15 2:49 AM, Jov wrote:
I think this behavior of recovery_min_apply_delay is not good, because
if the receiver does not fetch the WAL for a long time (in these cases it
must replay 3 days' WAL before the WAL receiver starts), the master will delete
the WAL, and the standby will need to be redone.
AFAIK,
I raise this problem because twice recently the WAL receiver process
did not start after a long time.
First time:
I changed recovery_min_apply_delay from the default to 3d on a standby; the
standby started but there was no receiver process, and on the
master, pg_stat_replication showed nothing. After 3
On Mon, Sep 28, 2015 at 12:53:37PM -0600, Scott Marlowe wrote:
> The issue was reported as omnipitr-cleanup is SLOOOW, so we run
> purgewal by hand, because the cleanup is so slow it can't keep up. But
> running it by hand is not supported.
>
> We fixed the problem though; we wrote our own script
On Mon, Sep 28, 2015 at 9:12 AM, Keith Fiske wrote:
>
>
> On Mon, Sep 28, 2015 at 10:54 AM, Scott Marlowe
> wrote:
>>
>> On Mon, Sep 28, 2015 at 8:48 AM, CS DBA
>> wrote:
>> > All;
>> >
>> > We have a 3 node replication setup:
>> >
>> > Master (node1) --> Cascading Replication Node (node2) -->
On Mon, Sep 28, 2015 at 08:54:54AM -0600, Scott Marlowe wrote:
> Look up WAL-E. It's works really well. We tried using OmniPITR and
> it's buggy and doesn't seem to get fixed very quickly (if at all).
Any examples? I'm developer of OmniPITR, and as far as I know there are
(currently) no unfixed bu
On Mon, Sep 28, 2015 at 10:54 AM, Scott Marlowe
wrote:
> On Mon, Sep 28, 2015 at 8:48 AM, CS DBA
> wrote:
> > All;
> >
> > We have a 3 node replication setup:
> >
> > Master (node1) --> Cascading Replication Node (node2) --> Downstream
> > Standby node (node3)
> >
> > We will be deploying WAL a
On Mon, Sep 28, 2015 at 8:48 AM, CS DBA wrote:
> All;
>
> We have a 3 node replication setup:
>
> Master (node1) --> Cascading Replication Node (node2) --> Downstream
> Standby node (node3)
>
> We will be deploying WAL archiving from the master for PITR backups and
> we'll use the staged WAL file
All;
We have a 3 node replication setup:
Master (node1) --> Cascading Replication Node (node2) --> Downstream
Standby node (node3)
We will be deploying WAL archiving from the master for PITR backups and
we'll use the staged WAL files in the recovery.conf files in case the
standbys need to
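For the setup described above, the standbys' recovery.conf would typically combine streaming with a restore_command pointing at the staged files; the host, user, and path below are placeholders, a sketch only:

```conf
standby_mode = 'on'
primary_conninfo = 'host=master01 user=replicator'   # placeholder connection info
restore_command = 'cp /var/staged_wal/%f %p'         # the staged WAL files
```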
Hi Adrian
Thank you for your explanation. I did read the WAL document that you
mentioned and all the settings look OK.
I will keep an eye on the situation, and I really appreciate your help.
--
View this message in context:
http://postgresql.nabble.com/wal-files-stay-in-the-pg-xlog-dir-tp586378
On 08/28/2015 01:59 PM, kingl wrote:
Hi Adrian
Thank you for your prompt reply.
For more in depth information take a look here:
http://www.postgresql.org/docs/9.4/interactive/wal-configuration.html
which deals with the WAL configuration settings and explains what you
are seeing. To get up t
Hi Adrian
Thank you for your prompt reply.
In pg_xlog there are 2,015 WAL files now. repmgr recommends keeping 5000
WAL files, but for our environment that would be overkill, so I changed it to
2000.
The other issue is that the standby node has only 1345 WAL files in
pg_xlog; I thought tha
On 08/28/2015 01:07 PM, kingl wrote:
To whom it may concern:
We have a 2 nodes postgres cluster, postgres server v9.3.8 and repmgr is
used to enable the cluster function. barman v1.4.1 is used to take backup of
the master postgres node.
everything seems to be working except the wal files in pg_
To whom it may concern:
We have a 2-node Postgres cluster (server v9.3.8), and repmgr is used to
enable the cluster function. barman v1.4.1 is used to take backups of the
master Postgres node.
Everything seems to be working except that the WAL files in pg_xlog on node1
keep accumulating.
ther
ther