On 13 January 2017 at 18:17, Ivan Voras <ivo...@gmail.com> wrote:

> On 13 January 2017 at 12:00, Stuart Bishop <stu...@stuartbishop.net>
> wrote:
>
>>
>>
>> On 7 January 2017 at 02:33, Ivan Voras <ivo...@gmail.com> wrote:
>>
>>>
>>> I forgot to add one more piece of information: the databases are 50 GB+
>>> each, so doing the base backup on demand over the network is not a great
>>> option.
>>>
>>
>> If you don't want to rebuild your report databases, you can use
>> PostgreSQL's built-in replication to keep them in sync. Just promote the
>> replica to a primary, run your reports, then wind it back to a standby and
>> let it catch up.
>>
>
>
> Ah, that's a nice option, didn't know about pg_rewind! I need to read
> about it some more...
> So far, it seems like the best one.
>
>
>
>> Personally though, I'd take the opportunity to set up wal shipping and
>> point in time recovery on your primary, and rebuild your reporting database
>> regularly from these backups. You get your fresh reporting database on
>> demand without overloading the primary, and regularly test your backups.
>>
>
> I don't think that would solve the main problem. If I set up WAL shipping,
> then the secondary server will periodically need to ingest the logs, right?
> And then I'm either back to running it for a while and rewinding it, as
> you've said, or basically restoring it from scratch every time, which will
> be slower than just doing a base backup, right?
>

It solves a different problem (reliable, tested backups). As a side
effect, you end up with a copy of your main database that you can run
reports on. I'm suggesting that the slow restoration of the database may
not actually be a problem; instead, you can turn it to your advantage.
Maybe this fits into your bigger picture. Or maybe having a dozen hot
standbys of your existing dozen servers is a better option for you.
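For the record, the promote/report/rewind cycle looks roughly like this. The
paths, port, and connection string below are placeholders for your setup, and
note that pg_rewind only works if the source cluster was started with
wal_log_hints=on or was initdb'd with data checksums:

```shell
# Sketch of the promote / report / rewind cycle. All paths and the
# connection string are assumptions -- substitute your own.

PGDATA=/var/lib/postgresql/9.6/standby              # assumed standby data dir
PRIMARY="host=primary.example.com user=postgres"    # assumed primary conninfo

# 1. Promote the standby so the reporting queries can run against a
#    writable, up-to-date copy of the data.
pg_ctl -D "$PGDATA" promote

# 2. Run the reports against the (now writable) former standby.
psql -d reports -f run_reports.sql

# 3. Stop the server and wind it back with pg_rewind so it can rejoin
#    the primary as a standby instead of being rebuilt from scratch.
pg_ctl -D "$PGDATA" stop -m fast
pg_rewind --target-pgdata="$PGDATA" --source-server="$PRIMARY"

# 4. Recreate recovery.conf (pre-PostgreSQL 10) pointing at the primary,
#    then start the server; it catches up by streaming the missing WAL.
pg_ctl -D "$PGDATA" start
```

pg_rewind copies only the blocks that changed since the timelines diverged,
which is why this is so much cheaper than a fresh base backup on a 50 GB+
database.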

-- 
Stuart Bishop <stu...@stuartbishop.net>
http://www.stuartbishop.net/
