On Tue, Dec 8, 2015 at 4:43 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Dec 8, 2015 at 3:33 AM, FattahRozzaq <ssoor...@gmail.com> wrote:
>
>> Hi all,
>>
>> Please help...
>>
>> I have 1 master PostgreSQL and 1 standby PostgreSQL.
>> Both servers have the same OS (Debian Wheezy) and the same hardware.
>>
>> Both server hardware:
>> CPU: 24 cores
>> RAM: 128GB
>> Disk-1: 800GB SAS (for OS, logs, WAL archive directory)
>> Disk-2: 330GB SSD (for PostgreSQL data directory, except WAL archive
>> and except pg_log)
>>
>> The relevant parts of the configuration are as follows:
>> checkpoint_segments = 64
>> checkpoint_completion_target = 0.9
>> default_statistics_target = 10
>> maintenance_work_mem = 1GB
>> effective_cache_size = 64GB
>> shared_buffers = 24GB
>> work_mem = 5MB
>> wal_buffers = 8MB
>> wal_keep_segments = 4096
>> wal_level = hot_standby
>> max_wal_senders = 10
>> archive_mode = on
>> archive_command = 'cp -i %p /home/postgres/archive/master/%f'
>>
>>
>> The WAL archive is at /home/postgres/archive/master/, right?
>> This directory consumes more than 750GB of Disk-1.
>> Each segment in /home/postgres/archive/master/ is 16MB.
>> There are currently 47443 files in this folder.
>>
>> I want to limit the total size use by WAL archive to around 200-400 GB.
>>
>> Do I set the segment too big?
>> wal_keep_segments = 4096
>> checkpoint_segments = 64
>>
>> What value should I set for it?
>>
>
> In that case you need to calculate how long it takes to accumulate that
> much archive data and then perform a base backup roughly that often; after
> that, any WAL older than the point at which you began the backup can be
> removed.
>
> You cannot simply limit how large the WAL archive is, since removing any
> WAL file will pretty much make any attempt at restoration fail.
>
> David J.
>
>
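For reference, the numbers in the question line up with a quick back-of-the-envelope check (a sketch; the segment size and file count are taken from the quoted message):

```shell
#!/bin/sh
# Each WAL segment is 16MB (the default build-time segment size).
seg_mb=16

# wal_keep_segments = 4096 pins this much WAL in pg_xlog on the master:
keep_segments=4096
keep_gb=$(( keep_segments * seg_mb / 1024 ))
echo "wal_keep_segments pins ~${keep_gb} GB in pg_xlog"   # ~64 GB

# 47443 archived 16MB files explain the ~750GB used on Disk-1:
archive_files=47443
archive_gb=$(( archive_files * seg_mb / 1024 ))
echo "archive directory holds ~${archive_gb} GB"          # ~741 GB
```

So wal_keep_segments accounts for only about 64GB on the data disk; the 750GB is the archive itself, which nothing in the quoted configuration ever prunes.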
While the above is valid, I missed that you have a streaming replica on the
other end, which should be removing files as they are loaded, subject to the
retention setting; see Michael's response for better information.
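One concrete way to have the standby prune the archive is PostgreSQL's bundled pg_archivecleanup tool (a sketch, not tested against this setup; the path comes from the quoted archive_command, and the WAL segment name below is a placeholder, not one of your files):

```shell
# On the standby, recovery.conf can prune the shared archive automatically
# each time a restartpoint completes; PostgreSQL expands %r to the oldest
# WAL file that must still be kept:
#
#   archive_cleanup_command = 'pg_archivecleanup /home/postgres/archive/master %r'

# Manual dry run against the archive: -n only lists what would be removed,
# deleting nothing. Substitute a real segment name (e.g. the restartpoint's
# "REDO WAL file" shown by pg_controldata) for the placeholder below.
pg_archivecleanup -n /home/postgres/archive/master 000000010000000A00000080
```

Note that archive_cleanup_command runs on the standby, so this only works if the standby can reach the archive directory (shared storage or the same host).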

David J.
