Re: Warm standby can't start because logs stream too quickly from the master

2017-12-02 Thread Zach Walton
This was my fault. I'd restored recovery.conf from recovery.done to try to
recover manually after automated recovery failed. Everything's working
after stopping the database and running pgpool online recovery again.

Thanks for the help.

On Sat, Dec 2, 2017 at 1:55 PM, Joshua D. Drake 
wrote:

> On 12/02/2017 11:02 AM, Zach Walton wrote:
>
>>
>> Generally this works itself out if I wait (sometimes a really long time).
>> Is there a configuration option that allows a warm standby to start without
>> having fully replayed the logs from the master?
>>
>> * Note: wal_keep_segments is set to 8192 on these servers, which have
>> large disks, to allow for recovery within a couple of hours of a failover
>> without resorting to restoring from archive
>> * This is specifically an issue for pgpool recovery, which fails if a
>> standby can't start within (by default) 300 seconds. Open to toggling that
>> param if there's no way around this.
>>
>
> It only needs to reach a consistent state; it doesn't need to restore all
> the logs. What does your recovery.conf say, and are you *100% sure* you
> issued a pg_stop_backup()?
>
> JD
>
>
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
>
> PostgreSQL Centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://pgconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>


Re: Warm standby can't start because logs stream too quickly from the master

2017-12-02 Thread Jeff Janes
On Sat, Dec 2, 2017 at 11:02 AM, Zach Walton  wrote:

> Looking at the startup process:
>
> postgres 16749  4.1  6.7 17855104 8914544 ?Ss   18:36   0:44 postgres:
> startup process   recovering 00085B1C0030
>
> Then a few seconds later:
>
> postgres 16749  4.2  7.0 17855104 9294172 ?Ss   18:36   0:47 postgres:
> startup process   recovering 00085B1C0047
>
> It's replaying logs from the master, but it's always a few behind, so
> startup never finishes. Here's a demonstration:
>
> # while :; do echo $(ls data/pg_xlog/ | grep -n $(ps aux | egrep "startup
> process" | awk '{print $15}')) $(ls data/pg_xlog/ | wc -l); sleep 1; done
> # current replay location # number of WALs in pg_xlog
> 1655:00085B1C0064 1659
> 1656:00085B1C0065 1660
> 1658:00085B1C0067 1661
> 1659:00085B1C0068 1662
> 1660:00085B1C0069 1663
>
> Generally this works itself out if I wait (sometimes a really long time).
> Is there a configuration option that allows a warm standby to start without
> having fully replayed the logs from the master?
>


Warm standbys aren't supposed to start up; that is what makes them warm.
Are you trying to set up a hot standby?  Or are you trying to promote a
warm standby to be the new master?  (Usually you would do that when the
current master has died and so is no longer generating logs.)

Cheers,

Jeff
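
For reference, the distinction comes down to one setting on the standby: with it off the server stays warm and refuses connections until promoted, with it on the server accepts read-only queries as soon as replay reaches a consistent state. A sketch for 9.x (the connection string is hypothetical, not taken from the thread):

```
# postgresql.conf on the standby
hot_standby = on        # accept read-only connections during recovery

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master.example.com user=replicator'
```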


Re: Warm standby can't start because logs stream too quickly from the master

2017-12-02 Thread Joshua D. Drake

On 12/02/2017 11:02 AM, Zach Walton wrote:


Generally this works itself out if I wait (sometimes a really long 
time). Is there a configuration option that allows a warm standby to 
start without having fully replayed the logs from the master?


* Note: wal_keep_segments is set to 8192 on these servers, which have 
large disks, to allow for recovery within a couple of hours of a 
failover without resorting to restoring from archive
* This is specifically an issue for pgpool recovery, which fails if a 
standby can't start within (by default) 300 seconds. Open to toggling 
that param if there's no way around this.


It only needs to reach a consistent state; it doesn't need to restore all 
the logs. What does your recovery.conf say, and are you *100% sure* you 
issued a pg_stop_backup()?
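
For context, a base backup taken by hand has to be bracketed by these calls on the master so the standby can find a consistent stopping point. A sketch using the 9.x exclusive-backup functions, not taken from the thread:

```sql
-- On the master, before copying the data directory to the standby:
SELECT pg_start_backup('base backup for standby');

-- ... copy the data directory ...

-- Afterwards, so the backup has a defined end point; without this,
-- recovery may never reach a consistent state:
SELECT pg_stop_backup();
```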


JD



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.org
* Unless otherwise stated, opinions are my own.   *




Warm standby can't start because logs stream too quickly from the master

2017-12-02 Thread Zach Walton
Looking at the startup process:

postgres 16749  4.1  6.7 17855104 8914544 ?Ss   18:36   0:44 postgres:
startup process   recovering 00085B1C0030

Then a few seconds later:

postgres 16749  4.2  7.0 17855104 9294172 ?Ss   18:36   0:47 postgres:
startup process   recovering 00085B1C0047

It's replaying logs from the master, but it's always a few behind, so
startup never finishes. Here's a demonstration:

# while :; do echo $(ls data/pg_xlog/ | grep -n $(ps aux | egrep "startup
process" | awk '{print $15}')) $(ls data/pg_xlog/ | wc -l); sleep 1; done
# current replay location # number of WALs in pg_xlog
1655:00085B1C0064 1659
1656:00085B1C0065 1660
1658:00085B1C0067 1661
1659:00085B1C0068 1662
1660:00085B1C0069 1663

Generally this works itself out if I wait (sometimes a really long time).
Is there a configuration option that allows a warm standby to start without
having fully replayed the logs from the master?

* Note: wal_keep_segments is set to 8192 on these servers, which have large
disks, to allow for recovery within a couple of hours of a failover without
resorting to restoring from archive
* This is specifically an issue for pgpool recovery, which fails if a
standby can't start within (by default) 300 seconds. Open to toggling that
param if there's no way around this.
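
The ls/grep loop above counts the replay file's position among the files in pg_xlog. The same lag can be computed directly from the WAL file names themselves. A sketch (the example file names below are hypothetical, and it assumes the 9.3+ naming scheme of 0x100 segments per "log" number):

```python
# Sketch: compute how many WAL segments a standby's replay position
# trails the newest segment, given two standard 24-character WAL file
# names (8 hex digits each for timeline, log number, and segment).

def wal_segment_number(filename: str) -> int:
    """Convert a 24-character WAL file name to an absolute segment number."""
    if len(filename) != 24:
        raise ValueError("expected a 24-character WAL file name")
    log = int(filename[8:16], 16)   # middle 8 hex digits
    seg = int(filename[16:24], 16)  # last 8 hex digits
    return log * 0x100 + seg        # 0x100 segments per log (9.3+)

def segments_behind(replaying: str, newest: str) -> int:
    """How many segments the file being replayed trails the newest one."""
    return wal_segment_number(newest) - wal_segment_number(replaying)

print(segments_behind("000000010000000000000064",
                      "000000010000000000000069"))  # 5
```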


Re: Problems with triggers and table lock

2017-12-02 Thread Melvin Davidson
On Fri, Dec 1, 2017 at 4:39 PM, Job  wrote:

> Dear guys,
>
> we are using PostgreSQL 9.6.1 with Rubyrep to replicate some tables
> across two different, remote database servers.
> For a few weeks now, when inserting/updating some rows, the statement
> sometimes remains waiting and the table seems locked for inserts/updates.
>
> When I issue "select * from pg_stat_activity", no query shows as locked.
> I only get this error message when I cancel the stuck insert with
> "ctrl+c" (I also tried directly from the shell with psql):
>
> ERROR:  canceling statement due to user request
> CONTEXT:  while updating tuple (0,7) in relation "TABLE"
> SQL statement "UPDATE TABLE SET FIELD=NOW() WHERE FIELD IS NULL"
> PL/pgSQL function TRIGGER_NAME() line 3 at SQL statement
>
> How can i see what is locking my table/tables?
>
> Thank you!
> F
>


> How can I see what is locking my table/tables?

The query below will give you all the necessary info.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
SELECT l.database AS db_oid,
       d.datname,
       l.relation,
       n.nspname,
       c.relname,
       a.pid,
       a.usename,
       l.locktype,
       l.mode,
       l.granted,
       l.tuple
  FROM pg_locks l
  JOIN pg_class c ON (c.oid = l.relation)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_database d ON (d.oid = l.database)
  JOIN pg_stat_activity a ON (a.pid = l.pid)
 ORDER BY l.database,
          l.relation,
          a.pid;


Re: pg data backup from vps

2017-12-02 Thread Sameer Kumar
On Sat, Dec 2, 2017 at 8:02 AM John R Pierce  wrote:

> On 12/1/2017 12:44 PM, basti wrote:
> > Replication is not a backup.
> > It's more like RAID.
> >
> > That means tuples that are deleted on the master by mistake are also
> > deleted on the slave.
> >
> > Correct me if I'm wrong.
>

You have not mentioned the version. If you are using 9.4 or above, you can
apply a delay between master and slave by specifying
"recovery_min_apply_delay" in the slave's recovery.conf. This might be the
fastest way to recover a dropped/truncated/deleted table, especially for
large databases.

https://www.postgresql.org/docs/9.4/static/standby-settings.html

The challenge with this approach is choosing the duration of the delay.
Usually something up to 6 hours should be fine. If you cannot identify a
dropped table within 6 hours, then very likely you can go another 6 hours
without it (enough time to restore a backup and perform PITR as suggested
in other answers).
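
The setting itself is a single line (a sketch; the 6-hour figure is just the one suggested above):

```
# recovery.conf on the slave (9.4+)
recovery_min_apply_delay = '6h'   # WAL is received immediately but replayed 6 hours late
```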


>
> a wal archive plus occasional basebackups lets you restore to any point
> in time (PITR) covered since the oldest basebackup.
>
> think of a base backup as a 'full' backup, and the wal logs in the
> archive as incrementals. One such approach might be a weekly
> basebackup, where you keep the last 4 weeks, and keep all wal files
> since the start of the oldest basebackup.  Yes, this will take quite a
> bit of space.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
> --
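
The weekly-basebackup-plus-WAL-archive scheme John describes could be configured along these lines (a sketch; the archive path and schedule are hypothetical):

```
# postgresql.conf on the master
wal_level = replica      # use 'hot_standby' on releases before 9.6
archive_mode = on
archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'

# then, e.g. weekly from cron:
#   pg_basebackup -D /backups/base_$(date +%Y%m%d) -Ft -z -X none
```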

-- 

Best Regards,

*Sameer Kumar | Senior Solution Architect*

*ASHNIK PTE. LTD.*

36 Robinson Road, #14-04 City House, Singapore 068877

T: +65 6438 3504 | www.ashnik.com

Skype: sameer.ashnik |   M: +65 8110 0350


Re: Problems with triggers and table lock

2017-12-02 Thread Alban Hertroys

> On 1 Dec 2017, at 22:39, Job  wrote:
> 
> Dear guys,
> 
> we are using PostgreSQL 9.6.1 with Rubyrep to replicate some tables 
> across two different, remote database servers.
> For a few weeks now, when inserting/updating some rows, the statement 
> sometimes remains waiting and the table seems locked for inserts/updates.
> 
> When I issue "select * from pg_stat_activity", no query shows as locked.
> I only get this error message when I cancel the stuck insert with 
> "ctrl+c" (I also tried directly from the shell with psql):
> 
> ERROR:  canceling statement due to user request
> CONTEXT:  while updating tuple (0,7) in relation "TABLE"
> SQL statement "UPDATE TABLE SET FIELD=NOW() WHERE FIELD IS NULL"
> PL/pgSQL function TRIGGER_NAME() line 3 at SQL statement
> 
> How can i see what is locking my table/tables?

In its most basic form:

select * from pg_locks;

There are a few more elaborate versions of that query around that look up 
identifiers and such. Those are probably provided in the docs, but otherwise 
the mailing list archives or your favourite internet search engine should do 
the trick.

Possibly you're waiting on an uncommitted transaction from a process in state 
"Idle in transaction" from pg_stat_activity.
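
Since the server in question is 9.6, pg_blocking_pids() is available, which makes that lookup direct (a sketch, not a tested recipe):

```sql
-- For each waiting backend, show which backends block it and what they
-- were last doing; an "idle in transaction" blocker is the classic culprit.
SELECT waiting.pid    AS waiting_pid,
       waiting.query  AS waiting_query,
       blocker.pid    AS blocking_pid,
       blocker.state  AS blocking_state,
       blocker.query  AS blocking_last_query
  FROM pg_stat_activity waiting
  JOIN pg_stat_activity blocker
    ON blocker.pid = ANY (pg_blocking_pids(waiting.pid));
```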

If that doesn't get you anywhere, an explain plan of your query would make it 
easier to help you.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.