Re: [GENERAL] Migration Query

2017-05-08 Thread Venkata B Nagothi
On Tue, May 9, 2017 at 1:13 AM, PAWAN SHARMA 
wrote:

> Hi All,
>
> Does anyone have a list of bottlenecks and workarounds for migrating data
> from Oracle to PostgreSQL, i.e. which things can be migrated from an
> Oracle database to PostgreSQL and which cannot?
>

In general, you should be able to migrate pretty much any data type to
PostgreSQL. It would be good to understand the data types specifically used
in the to-be-migrated Oracle database and how they would affect application
functionality and performance after migrating to PostgreSQL.

If you can provide details on the specific data types used by Oracle, more
specific guidance can be given. In my experience, as far as data types are
concerned, issues generally crop up when migrating large object types.
Having said that, PostgreSQL supports a wide range of data types.

Which is the best tool for migration from Oracle to Postgresql?
>

As already suggested, ora2pg is the best open-source tool.
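For anyone picking this up later: ora2pg is driven by a configuration file
plus an export type. A minimal sketch follows; the DSN, credentials and
output file name are placeholders, not values from this thread:

```
# ora2pg.conf (fragment) -- DSN and credentials are placeholders
ORACLE_DSN      dbi:Oracle:host=oracle-host;sid=ORCL;port=1521
ORACLE_USER     system
ORACLE_PWD      changeme
TYPE            TABLE          # export table definitions first
OUTPUT          tables.sql
```

ora2pg also has a SHOW_REPORT export type (`ora2pg -c ora2pg.conf -t
SHOW_REPORT`) that produces a migration assessment report, which is a
useful first step for finding the bottlenecks asked about above.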

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] effective_cache_size X shared_buffer

2017-04-02 Thread Venkata B Nagothi
On Mon, Apr 3, 2017 at 1:33 PM, Patrick B  wrote:

> 2017-04-03 13:23 GMT+12:00 Patrick B :
>
>> Hi guys.
>>
>> I'm thinking about increasing the query cache for my PG 9.2 server.
>> I've got a project happening, which is doing lots and lots of writes and
>> reads during the night, and in the morning I see PG cache warming up again,
>> as all the cache "was used" by those write and read tasks.
>>
>> So my environment gets very slow for a few hours, until the queries used
>> on a daily basis go to the cache.
>>
>> Question:
>> Should I increase effective_cache_size or shared_buffer? What's the
>> difference between them?
>>
>> Thanks
>> Patrick
>>
>
> Can I also increase shared_buffer on my slave only? Would that make any
> difference if using selects on the slave? Or this parameter must be the
> same across all servers (Master/slaves) ?
>

Yes, you can increase the shared_buffers in the slave database.
Configuration can be different across master and slave databases.

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] effective_cache_size X shared_buffer

2017-04-02 Thread Venkata B Nagothi
On Mon, Apr 3, 2017 at 11:23 AM, Patrick B  wrote:

> Hi guys.
>
> I'm thinking about increasing the query cache for my PG 9.2 server.
> I've got a project happening, which is doing lots and lots of writes and
> reads during the night, and in the morning I see PG cache warming up again,
> as all the cache "was used" by those write and read tasks.
>
> So my environment gets very slow for a few hours, until the queries used
> on a daily basis go to the cache.
>
> Question:
> Should I increase effective_cache_size or shared_buffer? What's the
> difference between them?
>

shared_buffers is PostgreSQL's main shared memory area: data blocks that
queries read from disk are cached there, so subsequent requests for the
same blocks are served from memory. In short, the first read of a data
block is a disk read, and most subsequent reads come from memory.
Increasing this parameter increases the chance of memory reads. Note that
changing it requires a server restart.

effective_cache_size is important for the performance of SELECT queries,
but unlike shared_buffers it does not allocate anything: it is only a hint
that tells the planner how much memory (shared_buffers plus the OS file
system cache) is likely to be available for caching data, which influences
how readily index scans are chosen. It is commonly set to around 50-75% of
RAM; setting it far above the cache actually available can mislead the
planner.

It all depends on various factors like database size, size of the data
being accessed, database server hardware, RAM size etc. You have not
provided any such details.
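Purely as an illustration (the right numbers depend on the details above),
on a hypothetical dedicated server with 16 GB of RAM the two settings might
look like this in postgresql.conf:

```
# postgresql.conf -- illustrative values for a hypothetical dedicated 16 GB server
shared_buffers = 4GB           # allocates shared memory; changing it needs a restart
effective_cache_size = 12GB    # planner hint only; no memory is allocated
```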

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Postgres HA

2017-02-22 Thread Venkata B Nagothi
On Thu, Feb 23, 2017 at 9:58 AM, Dylan Luong 
wrote:

> Hi
>
>
>
> I am a DBA at the University of South Australia. For PostgreSQL High
> Availability, we currently have setup a Master/Slave across two datacenters
> using PostgreSQL (WAL) streaming replication. We use an LTM (load balancer)
> server that sits between the application servers and the PostgreSQL server
> that directs connections to the Master (and the Slave if failover occurs).
> We also have watchdog processes on the PostgreSQL servers that polls the
> LTM to determine who is Master and perform automatic failover if required.
> I am looking at options to improve our high availability.
>
> I would like to know how other organizations in different industries
> (other than education) setup High Availability on their PostgreSQL
> environments.
>
> What  tools do you use. Are they commercial licensed products? How is the
> architecture setup and how do you do recovery of new slave.
>
> Your information is greatly appreciated.
>

An efficient high-availability setup for PostgreSQL depends on various
factors: the application, the infrastructure, and other business-continuity
requirements. In your case, you have already mentioned that the load
balancer continuously polls the master's status and fails over to the slave
when the master is down. If you are looking to improve this setup, it is
important for us to know how the slave promotion happens. Is it done by a
tool such as pgpool-II? That is one of the open-source tools available if
you wish to automate slave promotion when the master is down. How efficient
a high-availability setup can be depends on how well a) application
failover and b) slave promotion go hand-in-hand. The following are some of
the factors to consider which can help improve the efficiency of PostgreSQL
high availability:

- Application requirements for continued / uninterrupted data operations on
slave post the fail-over
- How fast the slave gets promoted when master fails
- You need to ensure Master and Slave are in absolute sync all the time
(importantly just before fail-over)

- Various other factors related to infrastructure like Network, database
load etc.
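On the third point, one way to keep an eye on master/standby sync (run on
the master; the view and column names below are the 9.2-9.6 ones, renamed
in later releases) is:

```sql
-- Per-standby replication positions and byte lag, PostgreSQL 9.2-9.6
SELECT application_name,
       state,
       sent_location,
       replay_location,
       pg_xlog_location_diff(sent_location, replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```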

Hope that helps !

Regards,
Venkata B N

Database Consultant


Re: [GENERAL] Streaming Replication Without Downtime

2017-02-20 Thread Venkata B Nagothi
On Tue, Feb 21, 2017 at 6:53 AM, Gabriel Ortiz Lour 
wrote:

> Hi!
>
>   Thanks for pointing out pg_basebackup
>
>   The issue I'm facing now is about missing WAL files.
>
>   What i'm doing:
> # su postgres -c 'pg_basebackup -D /var/lib/postgresql/9.1/main/ -x -h
> master -U sa_rep' ; service postgresql start
>
>   The idea is to call "postgresql start" as soon as pg_basebackup ends.
>   But I'm getting the following error:
>
>   FATAL: could not receive data from WAL stream: FATAL: requested WAL
> segment XXX has already been removed
>
>   Should I just increase 'wal_keep_segments'?
>

Yes, that is the way to go. But you need to work out what value to increase
wal_keep_segments to, and that depends purely on the rate at which WAL
files are generated. Which version of PostgreSQL are you using, by the way?
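One hedged way to size wal_keep_segments: sample `SELECT
pg_current_xlog_location();` on the master some interval apart and convert
the LSN difference into 16 MB segments. The two LSNs below are made-up
sample values; the arithmetic (high word x 2^32 + low word, exact on 9.3+
and a close approximation on 9.2 and earlier) is the point:

```shell
#!/bin/sh
# Sketch: turn two sampled LSNs into a WAL-generation rate in 16 MB segments.
# lsn_to_bytes treats 'HHHHHHHH/LLLLLLLL' as hi*2^32 + lo.
lsn_to_bytes() {
    echo $(( 0x${1%%/*} * 4294967296 + 0x${1##*/} ))
}

# Hypothetical samples taken one hour apart, e.g. from:
#   psql -Atc "SELECT pg_current_xlog_location();"
start='1A7C/14AEB2C0'
end='1A7C/94AEB2C0'

bytes=$(( $(lsn_to_bytes "$end") - $(lsn_to_bytes "$start") ))
segments=$(( bytes / 16777216 ))    # one WAL segment = 16 MB
echo "~${segments} segments/hour; set wal_keep_segments with headroom above this"
```

Sampling over a busy period (not a quiet one) gives a safer upper bound.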


>   If I do so, do I need do restart the master or just a reload will do it?
>

No need to restart, "reload" will do.

Venkata B N
Database consultant


> -- Forwarded message --
> From: Achilleas Mantzios 
> Date: 2017-02-17 11:20 GMT-02:00
> Subject: Re: [GENERAL] Streaming Replication Without Downtime
> To: pgsql-general@postgresql.org
>
>
> Gabriel, you are thinking about this in the correct way, but it's really:
>
> pg_basebackup -D  --write-recovery-conf --progress
> --xlog-method=stream -h 
> then you just edit recovery.conf (if needed), tweak postgresql.conf (if
> needed) and start the standby.
>
> On 17/02/2017 15:09, Gunnar "Nick" Bluth wrote:
>
> (sorry for the toppost, mobile device)
> What you're looking for is pg_basebackup with --xlog-method=stream, I guess.
>
> Regards,
>
> Nick
>
>
> Am 17. Februar 2017 14:06:36 MEZ schrieb Gabriel Ortiz Lour
>  :
>>
>> Hi all,
>>   I've been searching for a way to initialize a new Hot Standby node with
>> Streaming Replication withou the need for stop or even restarting the
>> master.
>>   Of course the master is already with the needed SR configs.
>>
>>   I know I have to use pg_start_backup/pg_stop_backup, but i'd like some
>> tips, or a link to some tutorial, with the order of the steps.
>>
>>   I assume will be something like:
>>
>> -  configure Slave for SR
>> -  pg_start_backup()
>> -  rsync PGDATA to slave
>> -  start PG on the slave
>> -  pg_stop_backup()
>>
>>   Anything i'm thinking wrong?
>>
>> Thanks in advance,
>> Gabriel
>>
>
> --
> Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail
> gesendet.
>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>


Re: [GENERAL] clarification about standby promotion

2017-02-09 Thread Venkata B Nagothi
On Fri, Feb 10, 2017 at 2:42 AM, Jehan-Guillaume de Rorthais <iog...@free.fr
> wrote:

> On Thu, 9 Feb 2017 10:41:15 +1100
> Venkata B Nagothi <nag1...@gmail.com> wrote:
>
> > On Thu, Feb 9, 2017 at 4:53 AM, Benoit Lobréau <benoit.lobr...@gmail.com
> >
> > wrote:
> >
> > > Hi,
> > >
> > >
> > > I would like to clarify something about standby promotion. From the
> > > sentence below. I understand that, during the promotion process,
> postgres
> > > will replay all the available wals (from the archive or pg_xlog).
> > >
> >
> > Yes, that is correct.
> >
> >
> > > https://www.postgresql.org/docs/9.5/static/warm-standby.
> > > html#STREAMING-REPLICATION
> > >
> > > 25.2.2. Standby Server Operation
> > > ...
> > > Standby mode is exited and the server switches to normal operation when
> > > pg_ctl promote is run or a trigger file is found (trigger_file). Before
> > > failover, any WAL immediately available in the archive or in pg_xlog
> will
> > > be restored, but no attempt is made to connect to the master.
> > >
> > > I have seen several articles like this one (
> https://www.enterprisedb.com/
> > > switchoverswitchback-postgresql-93) where they say that
> > > pg_last_xlog_receive_location() and pg_last_xlog_replay_location()
> should
> > > be checked before promotion. I don't understand why since they will be
> > > replayed anyway. Did something changed since 9.3 about this ?
> > >
> >
> > The above link is about improvements related to switch-over/switch-back
> > process from the version 9.3. What you are asking is about standby
> > promotion process. When the standby is promoted, as mentioned in the
> docs,
> > the standby server attempts to apply the available WAL during the
> promotion
> > process and will not attempt to connect to master.
>
>
> Sure, but when you are doing a switchover, the standby is supposed to be
> connected to the master when you shutdown the master. So based on the doc,
> the standby should receive **everything** from the master before the master
> actually shutdown.
>

Yes, the standby would receive everything from the master before the master
shuts down.
For switchover/switchback to work, it is important that the standby also
receives the master's final (shutdown checkpoint) WAL position. This did
not happen automatically in 9.2 and earlier, which is why a clean role swap
used to require rebuilding the old master; from 9.3 onwards the master
waits at shutdown until pending WAL has been streamed to the standby. This
improvement is only relevant when you want to swap the roles of master and
standby.

> Which means, you would not know if there are any pending WALs yet to be
> > streamed from master or in other words, standby may not know if the
> master is
> > a-head. It is important that you know the standby current position by
> > executing the above mentioned *xlog* functions.
>
> Sure, but in the link pointed by Benoit, the check is only comparing what
> the
> **standby** received with what the standby **replayed**. It seems there's
> no
> point to do such check.
>
> What you are describing is to check the very last LSN from the master (its
> shutdown checkpoint) with what the slave received. The only way to check
> this
> is to compare LSN from the shut down master to the LSN the slave received.
>

I think I need to be clearer here. What I meant was: if you wish to
intentionally promote the standby, then yes, before shutting down the
master, check the last xlog/checkpoint position. In practice this must be
done after the application has been shut down completely and the database
is no longer receiving data changes; in application terms, the data must be
in complete sync between master and standby. But yes, I agree that the only
way to check the master-standby position sync after the master has shut
down is to compare the pg_controldata output of the master and the standby.
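A sketch of that last check: pull the "Latest checkpoint location" line out
of `pg_controldata "$PGDATA"` on each node and compare. The two strings
below are hypothetical stand-ins for the real command output:

```shell
#!/bin/sh
# Hypothetical pg_controldata excerpts; on a real system run on each node:
#   pg_controldata "$PGDATA" | grep 'Latest checkpoint location'
master_ctl='Latest checkpoint location:           1AF8/D3F47A80'
standby_ctl='Latest checkpoint location:           1AF8/D3F47A80'

master_lsn=$(echo "$master_ctl" | awk '{print $NF}')
standby_lsn=$(echo "$standby_ctl" | awk '{print $NF}')

if [ "$master_lsn" = "$standby_lsn" ]; then
    echo "in sync at $master_lsn"
else
    echo "NOT in sync: master=$master_lsn standby=$standby_lsn"
fi
```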

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] clarification about standby promotion

2017-02-08 Thread Venkata B Nagothi
On Thu, Feb 9, 2017 at 4:53 AM, Benoit Lobréau 
wrote:

> Hi,
>
>
> I would like to clarify something about standby promotion. From the
> sentence below. I understand that, during the promotion process, postgres
> will replay all the available wals (from the archive or pg_xlog).
>

Yes, that is correct.


> https://www.postgresql.org/docs/9.5/static/warm-standby.
> html#STREAMING-REPLICATION
>
> 25.2.2. Standby Server Operation
> ...
> Standby mode is exited and the server switches to normal operation when
> pg_ctl promote is run or a trigger file is found (trigger_file). Before
> failover, any WAL immediately available in the archive or in pg_xlog will
> be restored, but no attempt is made to connect to the master.
>
> I have seen several articles like this one (https://www.enterprisedb.com/
> switchoverswitchback-postgresql-93) where they say that
> pg_last_xlog_receive_location() and pg_last_xlog_replay_location() should
> be checked before promotion. I don't understand why since they will be
> replayed anyway. Did something changed since 9.3 about this ?
>

The above link is about improvements to the switchover/switchback process
from version 9.3 onwards. What you are asking about is the standby
promotion process. When the standby is promoted, as mentioned in the docs,
it applies the WAL available to it during the promotion and makes no
attempt to connect to the master. This means you would not know whether any
pending WAL was yet to be streamed from the master; in other words, the
standby may not know that the master is ahead. That is why it is important
to know the standby's current position by executing the above-mentioned
*xlog* functions. Hope I answered your question.
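Concretely, the check those articles suggest amounts to running this on the
standby before promoting it (pre-10 function names, as used in this
thread):

```sql
-- On the standby, before promotion: everything received should be replayed
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed,
       pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
           AS fully_replayed;
```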

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Question slow query

2017-01-27 Thread Venkata B Nagothi
On Tue, Jan 17, 2017 at 6:27 AM, Patrick B  wrote:

>
>
> 2017-01-12 16:48 GMT+13:00 Andreas Joseph Krogh :
>
>> På torsdag 12. januar 2017 kl. 03:15:59, skrev Patrick B <
>> patrickbake...@gmail.com>:
>>
>> Hi guys,
>>
>> I've got a slow query, running at 25 seconds.
>>
>>
>>   ->  Bitmap Heap Scan on ja_notes r_1103088  
>> (cost=234300.55..1254978.62 rows=553275 width=101) (actual 
>> time=1423.411..10572.549 rows=475646 loops=1)
>> Recheck Cond: (n_type = ANY 
>> ('{note,note_1,note_2}'::n_type[]))
>> Filter: (timezone('Etc/UTC'::text, 
>> d_modified) >= '2016-08-07 23:12:34'::timestamp without time zone)
>> Rows Removed by Filter: 13725231
>>
>>
>>
>> As you can see, 13.725.231 rows were removed by Filter..
>>
>> Should I create an index for this column?
>>
>>> d_modified  | timestamp with time zone | default
>>> statement_timestamp()
>>
>> I tried but it didn't help... the query doesn't even hit the index.
>>
>>
>>
>> CREATE INDEX q ON test USING BTREE (d_modified);
>>
>>
>>
>> Am I missing something?
>>
>>
>> Yes, you've not shown us:
>> 1. The query
>> 2. The schema
>> 3. Complete EXPLAIN-output
>>
>>
>
> Sorry about that.. just wanted to understand how to index a timestamp
> column. I was able to do this way:
>
>>
>> CREATE INDEX CONCURRENTLY ON note (timezone('Etc/UTC'::text, d_date));
>
>
What about the SQL query itself? What do the column statistics show? And is
the EXPLAIN output you posted the complete output?
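For the archives, the indexing point this thread converged on: the planner
only considers an expression index whose indexed expression matches the
filter expression in the query. A hedged sketch, reusing the table and
column names from the EXPLAIN snippet (everything else is assumed):

```sql
-- A plain index on d_modified cannot serve a filter written as
-- timezone('Etc/UTC', d_modified) >= ...; an expression index can:
CREATE INDEX CONCURRENTLY ja_notes_d_modified_utc_idx
    ON ja_notes (timezone('Etc/UTC'::text, d_modified));

-- The query must repeat the same expression for the index to be usable:
SELECT count(*)
FROM ja_notes
WHERE timezone('Etc/UTC'::text, d_modified)
      >= '2016-08-07 23:12:34'::timestamp;
```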

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Transaction apply speed on the standby

2017-01-27 Thread Venkata B Nagothi
On Fri, Jan 27, 2017 at 3:34 AM, Rakesh Kumar 
wrote:

> Ver 9.6.1
>
> In a streaming replication can it be assumed that if both primary and
> standby are of the same hardware, then the rate at which transactions are
> applied on the standby will be same as that on primary. Or standbys are
> always slower than primary in applying transactions because of the way
> replication works.
>
> The reason why I am asking is that I am trying to find what are the
> circumstances when there can be a large gap between primary and standby in
> async replication mode.
>

As already said, how closely the standby keeps up with the master depends
on a lot of factors: CPUs, disks, network bandwidth, etc.

If the application demands that data be in complete sync on the standby at
all times, then the best way to go is synchronous replication. In this mode
PostgreSQL ensures all the changes on the master are replicated to the
standby, and the master waits until the changes are committed on the
standby. This is even better in 9.6 with the "remote_apply" setting for the
synchronous_commit parameter, which ensures the data from the master has
actually been replayed on the synchronous standby. However, this
replication mode can be a risk when there is a failure (for example, a
network breakdown) and the standby is not connected to the master: during
that time, transactions on the master wait until the standby responds.
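A minimal sketch of those master-side settings in postgresql.conf on 9.6
('standby1' is a placeholder matching the application_name in the standby's
primary_conninfo):

```
# postgresql.conf on the master (9.6); 'standby1' is a placeholder name
synchronous_standby_names = 'standby1'
synchronous_commit = remote_apply   # commit returns only after the standby has replayed
```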

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] migrate Sql Server database to PostgreSql

2017-01-18 Thread Venkata B Nagothi
On Thu, Jan 19, 2017 at 6:17 AM, PAWAN SHARMA 
wrote:

> Hello All,
>
> I am using postgres 9.5  enterprise edition.
>
> I want to know how to migrate a SQL Server database to PostgreSQL. What
> are the things required for migration, and what are the pros and cons of
> migration?
>

Well, how challenging the migration will be depends purely on the
complexity of the database and of the applications. As far as I know, there
is no fully featured open-source tool for migrating a SQL Server database
to PostgreSQL. You can try "sqlserver2pgsql", but note that it does not
migrate stored procedures.

Overall, in my experience, migrating the application is the most
challenging part:

- Do a preliminary analysis of the database: its object types, data types
and the code complexity of its procedures. T-SQL procedures will need to be
rewritten, which can in turn trigger application code changes; data types
and procedures can pose real challenges when migrating applications.
- If the applications being used against SQL Server are proprietary, you
need to ensure they work with PostgreSQL.
- Evaluate the amount of application code change required; that will give
you an idea of how difficult the migration will be.

Hope that helps !


Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-06 Thread Venkata B Nagothi
On Sat, Jan 7, 2017 at 2:56 AM, Job  wrote:

> Hi guys,
>
> really much appreciated your replies.
>
>  >> You might want to include the query plans for each server
>
> We use a function; the explain analyze output is quite similar:
> POSTGRESQL 8.4.22:
>
> explain analyze select 'record.com' where 'record.com' like '%.%' and
> function_cloud_view_orari('53', '192.168.10.234', 'record.com') != ''
> limit 1;
>
> QUERY PLAN
> 
> 
> -
>  Limit  (cost=0.03..0.04 rows=1 width=0) (actual time=1.488..1.488 rows=0
> loops=1)
>->  Result  (cost=0.03..0.04 rows=1 width=0) (actual time=1.485..1.485
> rows=0 loops=1)
>  One-Time Filter: ((function_cloud_view_orari('53'::character
> varying, '192.168.10.234'::character varying, 'record.com'::character
> varying))::text <> ''::text)
>  Total runtime: 1.531 ms
>
> POSTGRES 9.6.1:
> explain analyze select 'record.com' where 'record.com' like '%.%' and
> function_cloud_view_orari('53', '192.168.10.234', 'record.com') != ''
> limit 1;
>
> QUERY PLAN
> 
> 
> -
>  Limit  (cost=0.03..0.04 rows=1 width=32) (actual time=4.216..4.216 rows=0
> loops=1)
>->  Result  (cost=0.03..0.04 rows=1 width=32) (actual time=4.215..4.215
> rows=0 loops=1)
>  One-Time Filter: ((function_cloud_view_orari('53'::character
> varying, '192.168.10.234'::character varying, 'record.com'::character
> varying))::text <> ''::text)
>  Planning time: 0.046 ms
>  Execution time: 4.230 ms
>
> There is only one condition which, if removed, makes the query on the new
> 9.6.1 PostgreSQL server very fast, even in a massive benchmark test.
> The condition is this:
> "exists ( select 1 from gruorari where gruorari.idgrucate=grucategorie.id
> and ( (('{'||gg_sett||'}')::int[] && array[EXTRACT(DOW FROM
> NOW())::int])='t' and  now()::time between gruorari.dalle::time and
> gruorari.alle::time) )"
>
> We have a table of "weekly events", as example:
> - monday from 12 to 14
> - tuesday from 18 to 20
> ...
>

As already mentioned by others, I do not see a major performance problem
(at least based on the information you gave) due to upgrading to 9.6.1. Do
you have up-to-date statistics in place? What about the data: is it the
same on both servers?

If you look at the EXPLAIN output, there is a difference in the *width*: in
9.6.1 the width is 32. Any idea why?

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Postgres 9.6 Streaming Replication on Solaris 10

2016-12-19 Thread Venkata B Nagothi
On Mon, Dec 19, 2016 at 4:40 AM,  wrote:

> Hello List,
> I am setting up Postgres 9.6 for streaming replication.  The OS is
> Solaris X86  I downloaded the postgres solaris binaries from the Postres
> site and installed on 2 instances of Solaris 10.  I then used pg_dumpall to
> load the master server with data from our devel0pment server.  That all
> worked as advertised and was checked using pgAdmin Version 4.  No
> problems.  I then setup up the standby configuration and the master
> configuration and then used pg_basebackup from the standby to initialize
> its data.  Works, not a problem and verified using pgAdmin Version 4 that
> the data was there and read only.  However when a change was made on the
> master it does not appear on the standby.  A check of the standby
> server.log revealed the following message:
>
> FATAL: could not load library "/usr/postgres/9.6-pgdg/lib/li
> bpqwalreceiver.so": ld.so.1: postgres: fatal: relocation error: file
> /usr/postgres/9.6-pgdg/lib/libpqwalreceiver.so: symbol PQconnectdbParams:
> referenced symbol not found
>
> The message repeats until the database is shutdown and restarted and then
> appears if a change is made on the master.  I think it is strange that this
> error is occurring on what appears to be a
>

Is "/usr/postgres/9.6-pgdg/lib" part of the LD_LIBRARY_PATH variable in the
environment the server is started from?
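If it is not, a sketch of the usual fix is to export the library path in
the environment that starts the server (the path comes from the error
message above); on Solaris, crle(1) is an alternative way to adjust the
runtime linker's search path:

```shell
# Prepend the PostgreSQL lib directory for the shell that starts postgres
LD_LIBRARY_PATH="/usr/postgres/9.6-pgdg/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```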

Regards,
Venkata B N

Database Consultant


Re: [GENERAL] WAL history files - Pgsql 9.2

2016-12-11 Thread Venkata B Nagothi
On Mon, Dec 12, 2016 at 7:48 AM, Patrick B  wrote:

> Hi guys,
>
> Are the history files copied with the wal_files? Or I have to do it
> separated?
>
> 0003.history': No such file or directory
>
>
> I'm using PostgreSQL 9.2.
>

Can you please explain the scenario you are referring to? During streaming
replication, or during standby promotion?

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Venkata B Nagothi
On Wed, Nov 23, 2016 at 1:59 PM, Patrick B <patrickbake...@gmail.com> wrote:

>
>
> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi <nag1...@gmail.com>:
>
>>
>>
>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>>
>>> Hi guys,
>>>
>>> I currently have a slave02 server that is replicating from another
>>> slave01 via Cascading replication. The master01 server is shipping
>>> wal_files (via ssh) to both slaves.
>>>
>>>
>>> I'm doing some tests on slave02 to test the recovery via wal_files...
>>> The goal here is to stop postgres, wait few minutes, start postgres again,
>>> watch it recovering from wal_files, once it's done see the streaming
>>> replication start working again.
>>>
>>> 1 - Stop postgres on slave02(streaming replication + wal_files)
>>> 2 - Wait for 5 minutes
>>> 3 - Start postgres - The goal here is to tail the logs to see if the
>>> wal_files are being successfully recovered
>>>
>>> However, when doing step3 I get these messages:
>>>
>>> cp: cannot stat '/walfiles/00021AF800A4': No such file or
>>> directory
>>>
>>> cp: cannot stat '/walfiles/00021AF800A5': No such file or
>>> directory
>>>
>>> cp: cannot stat '/walfiles/00021AF800A6': No such file or
>>> directory
>>> LOG:  consistent recovery state reached at 1AF8/AB629F90
>>> LOG:  database system is ready to accept read only connections
>>> LOG:  streaming replication successfully connected to primary
>>>
>>>
>>>
>>> still on slave01: *Sometimes the log_delay time is bigger.. sometimes
>>> is lower*
>>>
>>> SELECT CASE WHEN pg_last_xlog_receive_location() =
>>> pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
>>> pg_last_xact_replay_timestamp()) END AS log_delay;
>>>
>>>  log_delay
>>>
>>> ---
>>>
>>>   0.386863
>>>
>>>
>>>
>>> On master01:
>>>
>>> select * from pg_current_xlog_location();
>>>
>>>  pg_current_xlog_location
>>>
>>> --
>>>
>>>  1AF8/D3F47A80
>>>
>>>
>>>
>>> *QUESTION:*
>>>
>>> So.. I just wanna understand what's the risk of those errors... what's
>>> happening?
>>> *cp: cannot stat '/walfiles/00021AF800A5': No such file or
>>> director*y - Means it didn't find the file. However, the file exists on
>>> the Master, but it didn't start shipping yet. What are the consequences of
>>> that?
>>>
>>
>> That is just saying that the slave cannot find the WAL file. That should
>> not be of big importance. Eventually, that message will vanish when the
>> log file gets shipped from the master. Also, the "cp: cannot stat" errors
>> have been fixed in 9.3, I believe.
>>
>
> Hi Venkata !
>
> Yeah that's fine.. the streaming replication is already working fine.
>
> But, as it didn't find/recover some of the wal_files, doesn't that mean
> that the DB isn't up-to-date?
>

Not necessarily. The standby periodically checks whether the WAL file it is
looking for is available at the restore_command location and emits that
message if the file is not available. These messages do no harm.
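For context, the restore_command producing those "cannot stat" lines lives
in the standby's recovery.conf and has this shape; the path is the one from
the thread, while the connection values are placeholders (the %f/%p
placeholders are expanded by the server):

```
# recovery.conf on the standby (9.2)
standby_mode = 'on'
restore_command = 'cp /walfiles/%f %p'                 # cp fails here until %f is shipped
primary_conninfo = 'host=master01 user=replicator'     # host/user are placeholders
```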

Below link might help you :

https://www.postgresql.org/message-id/4DDC9515.203%40enterprisedb.com

Regards,
Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Wal files - Question | Postgres 9.2

2016-11-22 Thread Venkata B Nagothi
On Wed, Nov 23, 2016 at 1:03 PM, Patrick B  wrote:

> Hi guys,
>
> I currently have a slave02 server that is replicating from another slave01
> via Cascading replication. The master01 server is shipping wal_files (via
> ssh) to both slaves.
>
>
> I'm doing some tests on slave02 to test the recovery via wal_files... The
> goal here is to stop postgres, wait few minutes, start postgres again,
> watch it recovering from wal_files, once it's done see the streaming
> replication start working again.
>
> 1 - Stop postgres on slave02(streaming replication + wal_files)
> 2 - Wait for 5 minutes
> 3 - Start postgres - The goal here is to tail the logs to see if the
> wal_files are being successfully recovered
>
> However, when doing step3 I get these messages:
>
> cp: cannot stat '/walfiles/00021AF800A4': No such file or
> directory
>
> cp: cannot stat '/walfiles/00021AF800A5': No such file or
> directory
>
> cp: cannot stat '/walfiles/00021AF800A6': No such file or
> directory
> LOG:  consistent recovery state reached at 1AF8/AB629F90
> LOG:  database system is ready to accept read only connections
> LOG:  streaming replication successfully connected to primary
>
>
>
> still on slave01: *Sometimes the log_delay time is bigger.. sometimes is
> lower*
>
> SELECT CASE WHEN pg_last_xlog_receive_location() =
> pg_last_xlog_replay_location() THEN 0 ELSE EXTRACT (EPOCH FROM now() -
> pg_last_xact_replay_timestamp()) END AS log_delay;
>
>  log_delay
>
> ---
>
>   0.386863
>
>
>
> On master01:
>
> select * from pg_current_xlog_location();
>
>  pg_current_xlog_location
>
> --
>
>  1AF8/D3F47A80
>
>
>
> *QUESTION:*
>
> So.. I just wanna understand what's the risk of those errors... what's
> happening?
> *cp: cannot stat '/walfiles/00021AF800A5': No such file or
> director*y - Means it didn't find the file. However, the file exists on
> the Master, but it didn't start shipping yet. What are the consequences of
> that?
>

That is just saying that the slave cannot find the WAL file. That should
not be of big importance. Eventually, that message will vanish when the log
file gets shipped from the master. Also, the "cp: cannot stat" errors have
been fixed in 9.3, I believe.

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Check integrity between servers

2016-11-16 Thread Venkata B Nagothi
On Thu, Nov 17, 2016 at 10:19 AM, Patrick B 
wrote:

> Would be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files) but I believe
> one of them, when recovered from wal_files in a fast outage we got, got
> recovered not 100%.
>

Do you mean that the recovery was not successful or not complete? How do
you know the recovery is not 100%? Were there any errors or messages?


> How could I check the data between both DB?
>

That is something the application needs to do. From the database
perspective, you can check whether the databases are in complete sync.
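With the pre-10 functions available on 9.2, that database-level check is a
comparison of positions, along these lines:

```sql
-- On the master:
SELECT pg_current_xlog_location();

-- On each slave (should equal, or sit just behind, the master's value):
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed;
```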

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Fwd: Mail to be posted in PostgreSQL community

2016-11-15 Thread Venkata B Nagothi
On Tue, Nov 15, 2016 at 4:15 PM, kaustubh kelkar 
wrote:

>
> Hi,
>
>
>
> Trying to create multiple database instances of PostgreSQL 9.6
> In this case , trying to create 2nd instance with port 5434/5435.
>
>
>
> *1st attempt:*
>
>
>
>
> *Create new database cluster : *
>
> C:\Program Files\PostgreSQL\9.6\installer\server>initcluster.vbs postgres
> postgr
>
> es 12345 "C:\Program Files\PostgreSQL\9.6" "C:\Program
> Files\PostgreSQL\9.6\data
>
> 5434" 5433 DEFAULT
>
>
>
> *Register as Windows Service: *
>
> C:\Program Files\PostgreSQL\9.6\installer\server>startupcfg.vbs 9.6
> postgres 123
>
> 45 "C:\Program Files\PostgreSQL\9.6" "C:\Program
> Files\PostgreSQL\9.6\data5434"
>
> postgresql-x64-9.6-5434
>
>
>
> But after this step, new service is supposed to be found in services.msc
> which is not successful.
>
> (Reference http://stackoverflow.com/questions/6241793/are-independent-
> instances-of-postgresql-possible )
>
>
>
> *2nd attempt :*
>
>
>
> *Create new database cluster : *
>
> C:\Program Files\PostgreSQL\9.6\bin>initdb C:\PostgreSQL\9.6\data5435
>
> The files belonging to this database system will be owned by user
> "Administrator
>
> ".
>
> This user must also own the server process.
>
>
>
> The database cluster will be initialized with locale "English_United
> States.1252
>
> ".
>
> The default database encoding has accordingly been set to "WIN1252".
>
> The default text search configuration will be set to "english".
>
>
>
> Data page checksums are disabled.
>
>
>
> creating directory C:/PostgreSQL/9.6/data5435 ... ok
>
> creating subdirectories ... ok
>
> selecting default max_connections ... 100
>
> selecting default shared_buffers ... 128MB
>
> selecting dynamic shared memory implementation ... windows
>
> creating configuration files ... ok
>
> running bootstrap script ... ok
>
> performing post-bootstrap initialization ... ok
>
> syncing data to disk ... ok
>
>
>
> WARNING: enabling "trust" authentication for local connections
>
> You can change this by editing pg_hba.conf or using the option -A, or
>
> --auth-local and --auth-host, the next time you run initdb.
>
>
>
> Success. You can now start the database server using:
>
>
>
> "pg_ctl" -D "C:\PostgreSQL\9.6\data5435" -l logfile start
>
>
> *Register as Windows Service: *
>
>
> C:\Program Files\PostgreSQL\9.6\bin>pg_ctl register -N pgsql-9.6-5435 -U
> administrator -P test@123 -D "C:\PostgreSQL\9.6\data5435" -o "-p 5435"
>
>
>
> C:\Program Files\PostgreSQL\9.6\bin>net start pgsql-9.6-5435
>
> The pgsql-9.6-5435 service is starting.
>
> The pgsql-9.6-5435 service was started successfully.
>
>
>
> (Reference: http://www.dbforums.com/showthread.php?1669767-How-to-create
> -two-different-postgresql-services-for-configure-database  )
>
>
>
> *After trying to login through SQL shell*
>
> Server [localhost]:
>
> Database [postgres]:
>
> Port [5433]: 5435
>
> Username [postgres]:
>
> psql: FATAL:  role "postgres" does not exist
>

Under which OS user account did you create the PostgreSQL instance? The initdb
output above shows it was run as "Administrator", so that is the bootstrap
superuser role; you need to log in as the "Administrator" user rather than
"postgres".
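A connection attempt under that assumption, against the port-5435 instance
from the second attempt, might look like the following; the role name is taken
from the initdb output above:

```shell
psql -p 5435 -U Administrator postgres
```

Once connected, you can also create the familiar role so the default prompt
works, e.g. CREATE ROLE postgres LOGIN SUPERUSER;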

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Wal files being delayed - Pgsql 9.2

2016-11-13 Thread Venkata B Nagothi
On Mon, Nov 14, 2016 at 1:22 PM, Patrick B  wrote:

> Hi guys,
>
> My current scenario is:
>
> master01 - Postgres 9.2 master DB
> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
> read-only queries
> slave02 - Postgres 9.2 streaming replication + wal_files slave server @ AWS
>
> master01 sends wal_files to both slaves via ssh.
>
>
> *On the master:*
>
> select * from pg_current_xlog_location();
>
>  pg_current_xlog_location
>
> --
>
>  1A7C/14AEB2C0
>
> (1 row)
>
>
> *On the slaves:*
>
> ls -ltr /var/lib/pgsql/archive/
>
> -rw--- 1 postgres postgres 16777216 Nov 14 01:21
> 00021A7A00F9
>
>
> As you can see, the last wal_files on the slaves isn't the
> 00021A7C* ones. I think the SSH delivery is being delayed. Not sure
> why tho.
> How can I see how many files are behind?
>


You can identify the file name by using the function pg_xlogfile_name().

"select pg_xlogfile_name('1A7C/14AEB2C0');"
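To see how far behind the archives are in bytes rather than file names, 9.2
also provides pg_xlog_location_diff(). A sketch, run on the master; the second
argument is the start of segment ...00F9, derived from the file name in your
listing, so treat it as an assumption:

```sql
-- bytes of WAL between the master's current location and the start of the
-- last segment seen on the slave; divide by 16777216 for an approximate
-- count of 16 MB files the slave is behind
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '1A7A/F9000000');
```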


Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] replication setup: advice needed

2016-11-04 Thread Venkata B Nagothi
On Thu, Nov 3, 2016 at 8:17 PM, Dmitry Karasik 
wrote:

> Dear all,
>
> I'd like to ask for help or advice with choosing the best replication
> setup for
> my task.
>
> I need to listen to continuous inserts/deletes/updates over a set of
> tables,
> and serve them over http, so I would like to off-load this procedure to a
> separate slave machine.  I thought that logical master-slave replication
> could
> be the best match here, but I couldn't find enough details in the
> documentation
> which implementation would match my needs best.
>

Which version of PostgreSQL are you using?


>
> Basically, I need to:
>
> a) replicate selected tables to a hot standby slave
> b) on the slave, listen for the insert/update/delete events (either
> through triggers or logical decoder plugin)
>
> While I see that a) should be feasible, I can't see if it's possible to do
> b) at all.
> Also, with so many replication solutions, I don't want to test them all
> one by one, but
> rather would like to ask for help choosing the one goes best here -- and
> if there's none,
> an alternative setup then.
>

You could look at Slony, which is a trigger-based replication solution. If you
are using PostgreSQL version 9.4 or higher, you can also explore "pglogical",
which is WAL-based and uses the logical decoding capability.

If you are just looking at replicating specific tables, then either of the
above solutions would work fine.

Regards,

Venkata B N
Database Consultant

Fujitsu Australia


Re: [GENERAL] Checking Postgres Streaming replication delay

2016-10-30 Thread Venkata B Nagothi
On Mon, Oct 31, 2016 at 11:57 AM, Patrick B 
wrote:

> Hi guys,
>
> I'm using this query to measure the delay between a Master and a Streaming
> Replication Slave server, using PostgreSQL 9.2.
>
> SELECT
>> pg_last_xlog_receive_location() receive,
>> pg_last_xlog_replay_location() replay,
>> (
>> extract(epoch FROM now()) -
>> extract(epoch FROM pg_last_xact_replay_timestamp())
>> )::int lag;
>
>
> In your opinion, is that right?
>
Yes, that's right.
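One caveat worth noting: pg_last_xact_replay_timestamp() reflects the last
replayed transaction, so on a quiet master the computed lag keeps growing even
though the standby is fully caught up. A common guard, sketched under that
assumption:

```sql
SELECT CASE
         WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
           THEN 0  -- everything received has been replayed; no real lag
         ELSE extract(epoch FROM now() - pg_last_xact_replay_timestamp())::int
       END AS lag_seconds;
```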

Regards,

Venkata B N
Database Consultant


Re: [GENERAL] Multi tenancy : schema vs databases

2016-09-30 Thread Venkata B Nagothi
On Fri, Sep 30, 2016 at 10:16 AM, Rakesh Kumar <rakeshkumar...@outlook.com>
wrote:

>
>
> ____
> From: Venkata B Nagothi <nag1...@gmail.com>
> Sent: Thursday, September 29, 2016 17:25
> To: Rakesh Kumar
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Multi tenancy : schema vs databases
>
> On Fri, Sep 30, 2016 at 5:18 AM, Rakesh Kumar <rakeshkumar...@outlook.com<
> mailto:rakeshkumar...@outlook.com>> wrote:
>
> Hi
>
> I would like to know which technique is better for supporting
> multi-tenancy=
>  applications, going upto hundreds or even thousands of tenants.
>
> 1 - One database with difference schemas (one schema per tenant)
> or
> 2 - One database per tenant.
>
> Did you mean one database with-in a postgresql cluster ?
>
> Yes.  Say something like this within a PG cluster
>
> db4978
> db6234
> ...
> 100s of such databases.
>

That would make things worse if you go for one database per tenant. As John
just said, it would end up as a very complex and bad design, contributing to
very poor performance and high maintenance overhead. A schema per tenant would
be a better idea, though it is hard to say for sure without knowing the
data-isolation levels you require for each tenant.

Regards,
Venkata B N

Database consultant / Architect


Re: [GENERAL] Multi tenancy : schema vs databases

2016-09-29 Thread Venkata B Nagothi
On Fri, Sep 30, 2016 at 5:18 AM, Rakesh Kumar 
wrote:

>
> Hi
>
> I would like to know which technique is better for supporting
> multi-tenancy=
>  applications, going upto hundreds or even thousands of tenants.
>
> 1 - One database with difference schemas (one schema per tenant)
> or
> 2 - One database per tenant.
>

Did you mean one database within a PostgreSQL cluster?


> The points to be considered are:
>
> 1 - which is more light weight from resources point of view.
> 2 - which is easier for backup/restore
> 3 - Which is better from security p.o.v
>

A schema per tenant would probably be a good approach. Since you say there
could be thousands of tenants, going for a single database per tenant could
easily end up in a very bad and complex database design.

One point to consider is how much the backup/restore, security, or other
database policies differ between tenants.
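A minimal sketch of the schema-per-tenant layout, with hypothetical tenant and
table names:

```sql
CREATE SCHEMA tenant_a;
CREATE SCHEMA tenant_b;

-- each tenant gets its own copy of the application tables
CREATE TABLE tenant_a.orders (id serial PRIMARY KEY, total numeric);
CREATE TABLE tenant_b.orders (id serial PRIMARY KEY, total numeric);

-- a per-tenant application role, limited to its own schema
CREATE ROLE tenant_a_app LOGIN;
GRANT USAGE ON SCHEMA tenant_a TO tenant_a_app;
ALTER ROLE tenant_a_app SET search_path = tenant_a;
```

With search_path set per role, the application queries plain table names and
transparently hits its own tenant's schema.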


Regards,

Venkata B N
Database Consultant / Architect


Re: [GENERAL] Extract date from a TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL column

2016-09-21 Thread Venkata B Nagothi
On Thu, Sep 22, 2016 at 1:37 PM, Patrick B  wrote:

> Hi guys,
>
> I'm using postgres 9.2 and got the following column:
>
> start TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL
>
>
> SELECT start FROM test1;
>
>
> 2015-12-18 02:40:00
>
>  I need to split that date into two columns on my select:
>
> 2015-12-18 = date column
> 02:40:00 = time column
>
> How can I do that without modifying any column/data?
> Maybe in a select?
>

postgres=# select to_char(start, 'DD-MM-YYYY') from test;
  to_char
------------
 18-12-2015
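To get both of the columns the original question asked for, a simple cast also
works; a sketch against the same test1 table:

```sql
SELECT start::date AS date_column,
       start::time AS time_column
FROM test1;
-- 2015-12-18 | 02:40:00
```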


Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] We have a requirement to downgrade from PostgreSQL 9.5.4 to 9.5.2

2016-09-20 Thread Venkata B Nagothi
> We are currently running at 9.5.2 and plan to upgrade to 9.5.4, but our
> change requirement needs to plan for a downgrade with data preservation,
> before upgrade is authorized.
>
> Thus, I am asking this: Is it safe to downgrade from 9.5.4 to 9.5.2 by
> simply replacing the binaries? We will keep the data folder same.
>

Technically, that should not be a problem. The only thing to watch out for is
that, if you are upgrading to fix any bugs or CVEs, downgrading would revert
those fixes.
Just curious to know: when you say downgrade, do you mean there has to be a
back-out plan in place if the upgrade fails, or are you really going to
downgrade again?

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] FATAL: could not receive data from WAL stream

2016-09-19 Thread Venkata B Nagothi
On Tue, Sep 20, 2016 at 12:38 PM, Patrick B 
wrote:

> Hi guys,
>
> I got a slave server running Postgres 9.2 with streaming replication and
> wal_archive in an EC2 Instance at Amazon.
>
> Postgres logs are showing me this error:
>
>> restored log file "0002179A00F8" from archive
>> invalid record length at 179A/F8FFF3D0
>> WAL segment `/var/lib/pgsql/9.2/archive/0003.history` not found
>> streaming replication successfully connected to primary
>> FATAL:  could not receive data from WAL stream: FATAL:  requested WAL
>> segment 0002179A00F8 has already been removed
>
>
> However, 0002179A00F8 file is inside
> /var/lib/pgsql/9.2/archive directory:
>
>> postgres@devops:/var/lib/pgsql/9.2/archive$ ls -la | grep
>> 0002179A00F8
>> -rw--- 1 postgres postgres 16777216 Sep 16 05:16
>> 0002179A00F8
>
>
>
> It's an UBUNTU instance, so my recovery.conf is:
>
>
> */etc/postgresql/9.2/main/recovery.conf:*
>
>> restore_command = 'exec /var/lib/pgsql/bin/restore_wal_segment.bash
>> "/var/lib/pgsql/9.2/wal_archive/%f" "%p"'
>> archive_cleanup_command = '/var/lib/postgresql/bin/pg_ar
>> chivecleaup_mv.bash'
>> recovery_target_timeline = 'latest'
>> standby_mode = on
>> primary_conninfo = 'host=IP_MY_SLAVE port=5432 user=replicator
>> application_name=devops'
>
>
>
> What can be happening, if the file is in there?
>

Do you mean to say that the WAL file "0002179A00F8" is
available at the "/var/lib/pgsql/9.2/archive" location ?

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] large number dead tup - Postgres 9.5

2016-09-11 Thread Venkata B Nagothi
On Mon, Sep 12, 2016 at 9:17 AM, Patrick B  wrote:

> Hi guys,
>
> select schemaname,relname,n_live_tup, n_dead_tup from pg_stat_all_tables
> where relname = 'parts';
>
>
> schemaname relname   n_live_tup n_dead_tup
>> -- - -- --
>> public parts 191623953  182477402
>
>
> See the large number of dead_tup?
>
> My autovacuum parameters are:
>
> "autovacuum_vacuum_threshold" : "300",
>> "autovacuum_analyze_threshold" : "200",
>> "autovacuum_vacuum_scale_factor" : "0.005",
>> "autovacuum_analyze_scale_factor" : "0.002",
>
>
> Table size: 68 GB
>
> Why does that happen? Autovacuum shouldn't take care of dead_tuples?
>

Can you check whether the table is getting vacuumed regularly at all? When
were the last_autovacuum and last_autoanalyze times?
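Those timestamps are in the same view you already queried; a quick check:

```sql
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname = 'parts';
```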


>
> Because of that the table is very slow...
> When I do a select on that table it doesn't use an index, for example:
>
>
> \d parts;
>
>> "index_parts_id" btree (company_id)
>> "index_parts_id_and_country" btree (company_id, country)
>
>
>
>
>
> explain select * from parts WHERE company_id = 12;
>
> Seq Scan on parts  (cost=0.00..6685241.40 rows=190478997 width=223)
>>   Filter: (company_id = 12)
>
>
That is likely due to VACUUM and ANALYZE not having run. Run VACUUM ANALYZE
and see whether the query starts picking up the index. An index scan is only
likely if "company_id" is reasonably selective.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Setup pgpool-II with streaming replication

2016-09-08 Thread Venkata B Nagothi
On Fri, Sep 9, 2016 at 3:14 AM, Ashish Chauhan 
wrote:

> Thanks Venkata for your reply.
>
>
>
> Currently, we have two Postgres server with master-slave streaming
> replication. All application servers are directly pointing to master server
> only. I am planning to setup new server for PgPool, at the time of divert
> app server to PgPool, I will pool out application server one by one from
> Load balancer and point it to PgPool which will not affect our application.
>

That should probably minimize your downtime. Still, you need to change the
port number (from the database port to the pgPool port) in the app
configuration; will that not require an application restart? I am just curious
to know.


> Could you pls send me steps how to install PgPool without PgPoolAdmin?
>

Below link has complete instructions to install and configure pgPool

http://www.pgpool.net/docs/latest/tutorial-en.html#install

pgPool Admin is a separate tool which does not come along with pgPool
software by default.

Regards,
Venkata B N

Fujitsu Australia

>


Re: [GENERAL] Setup pgpool-II with streaming replication

2016-09-07 Thread Venkata B Nagothi
I gone through below articles but some or other way require DB server
> restart which is not possible in our case. Could you please provide some
> documentation or steps how to achive connection pooling without any
> downtime?
>

DB server restart? Why do you think you need to restart the database? We can
confirm that if you share your planned architecture and the existing database
configuration settings.

Yes, some downtime is definitely needed, as the application needs to connect
to the database using the pgPool port instead of the DB port, which is a
change on the application-server side.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Postgres UPGRADE from 9.2 to 9.4

2016-09-06 Thread Venkata B Nagothi
On Wed, Sep 7, 2016 at 10:43 AM, Patrick B  wrote:

>
>> how large is the full database cluster?
>>
>> if its only a few GB or whatever, and not grossly complex, pg_dumpall -h
>> oldhost | psql -h newhost, is the simplest way to move a complete set of
>> databases from an old server to a new. if there are large tables with
>> millions of rows indexed, make sure you set maintenance_work_mem on the new
>> server to 2GB before running this.
>>
>>
>> it's a full copy from the production, so it's 2.3TB
>

That's quite large. pg_upgrade would be a better choice, and yes, downtime is
needed: the database must remain shut down throughout the upgrade process.

How long it takes depends on the type of disks you have. It is definitely a
lot faster than pg_dump/pg_restore.
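A minimal pg_upgrade invocation might look like the following; the binary and
data-directory paths are assumptions and need to match your installation. The
--link option avoids copying 2.3TB of data files, at the cost of not being
able to start the old cluster afterwards:

```shell
# both clusters must be stopped first
pg_upgrade \
  -b /usr/pgsql-9.2/bin       -B /usr/pgsql-9.4/bin \
  -d /var/lib/pgsql/9.2/data  -D /var/lib/pgsql/9.4/data \
  --link --check   # dry run first; drop --check for the real upgrade
```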

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] How to reduce WAL files in Point in time recovery

2016-09-02 Thread Venkata B Nagothi
On Fri, Sep 2, 2016 at 2:58 PM, Amee Sankhesara - Quipment India <
amee.sankhes...@quipment.nl> wrote:

> Hi,
>
>
>
> I have setup PITR in PostgreSQL. I am taking base backup at every specific
> interval and also kept WAL files of size 16 MB each.
>
>
>
> Now the situation is that even there is no any major change in database,
> it suddenly started creating too many WAL files.
>
>
>
> I have gathered statistics with count of WAL files created on specific
> dates as shown below:
>
>
>
> Date   | WAL file count
>
> ---| -
>
> 2016-08-31 |  1569
>
> 2016-08-30 |  3031
>
> 2016-08-29 |  2664
>
> 2016-08-28 |  1251
>
> 2016-08-27 |  1231
>
> 2016-08-26 |  1946
>
> 2016-08-25 |  1850
>
> 2016-08-24 |  1666
>
> 2016-08-23 |  1562
>
> 2016-08-22 |  1525
>
> 2016-08-21 |   765
>
> 2016-08-20 |   761
>
> 2016-08-19 |  1180
>
> 2016-08-18 |  1077
>
> 2016-08-17 |  1064
>
> 2016-08-16 |   832
>
> 2016-08-15 |   732
>
> 2016-08-14 |   402
>
> 2016-08-13 |   691
>
> 2016-08-12 |  1991
>
> 2016-08-11 |   465
>
>
>
> here we are expecting normal count to be between 600 to 800 according to
> our database transactions. But in above statistics you can see major
> fluctuation in file counts. I do not understand where the problem is and
> how can I find the root cause of the problem ?
>

Why do you think this is a problem? PostgreSQL generates WAL for a reason. If
there are not many transactions on the database, the extra WAL could be coming
from maintenance operations like VACUUM.


>
>
> Also I would like to inform that few days back I had performed full vacuum
> on 2 or 3 tables which were having size of 3 to 4 GB. But I do not guess it
> is because of this vacuum.
>

Yes, more WALs are generated when you perform VACUUM.


>
>
> So could you please provide me guidelines to get this problem solved ?
>

If you are really concerned about the volume of WAL generated, consider
analyzing your application to see whether you can reduce any unnecessary
transactions and so avoid WAL generation.
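If you want to keep producing the per-day counts shown above without manual
bookkeeping, a small script can group archived segments by modification date.
The archive path in the usage comment is an assumption:

```python
import os
from collections import Counter
from datetime import date

def wal_counts_by_day(archive_dir):
    """Count files in archive_dir grouped by their modification date."""
    counts = Counter()
    for name in os.listdir(archive_dir):
        path = os.path.join(archive_dir, name)
        if os.path.isfile(path):
            counts[date.fromtimestamp(os.path.getmtime(path))] += 1
    return counts

# e.g.:
# for day, n in sorted(wal_counts_by_day("/var/lib/pgsql/archive").items()):
#     print(day, n)
```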

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-02 Thread Venkata B Nagothi
On Fri, Sep 2, 2016 at 2:40 PM, Patrick B <patrickbake...@gmail.com> wrote:

>
>
> 2016-09-02 15:36 GMT+12:00 Venkata B Nagothi <nag1...@gmail.com>:
>
>>
>> On Fri, Sep 2, 2016 at 12:48 PM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>>
>>> Hi guys,
>>>
>>> I'll be performing a migration on my production master database server,
>>> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
>>> I've got some questions about it, and it would be nice if u guys could
>>> share your experiences/thoughts:
>>>
>>> *SCENARIO:*
>>>
>>> I currently have one MASTER and two Streaming Replication Slaves
>>> servers...
>>>
>>> master01
>>>> slave01 (Streaming replication + wal_files)
>>>> slave02 (Streaming replication + wal_files)
>>>
>>>
>>> ...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
>>> installed only on my Master server, because my main problem is Writes and
>>> not reads.
>>>
>>> The new SSD volume will be mounted on /var/lib/pgsql2/
>>>
>>>
>>>
>>>- The slave02 server will loose the streaming replication connection
>>>to the master, once slave01 becomes the new master a new timeline will be
>>>settled? Will slave02 be able to connect to the slave01 server for
>>>streaming replication?
>>>
>>> Yes, slave01 becomes new master with a new timeline id. Cascading
>> replication is supported in 9.2, but, the dependency on WAL archives is a
>> bit heavy. You need to ensure .history file is copied over to slave02. I
>> think, you have WAL archiving enabled, so, should be fine.
>>
>
>
> hmmm is the .history files located into pg_xlog? I can't see none
> are they only generated when a new timeline id is created?
> If so, I think it will be fine as they're already being shipped to the
> slaves
>

No. The issue in 9.2 (or rather, until version 9.2) is that the WAL generated
after the master is shut down (which carries the last known state of the
master) is not automatically transferred to the slave, and that WAL is very
important when you are swapping master-slave roles. You can only transfer it
manually; there is no way postgres does it automatically, and the same issue
arises when you attempt to make the old master a new slave (without rebuilding
the slave from scratch). When you promote a slave, a .history file is
generated with a new timeline id, which you need in order to make the old
master a new slave. All of this is fixed in 9.3.

I wanted to stress this to point out that master-slave roles can be reversed
without actually rebuilding replication from scratch, but it is trickier in
versions 9.2 and earlier.

Anyways, this may not be that important now as you confirmed that you are
going for option 2

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-01 Thread Venkata B Nagothi
On Fri, Sep 2, 2016 at 12:48 PM, Patrick B  wrote:

> Hi guys,
>
> I'll be performing a migration on my production master database server,
> which is running PostgreSQL 9.2 atm, from SATA disks to SSD disks.
> I've got some questions about it, and it would be nice if u guys could
> share your experiences/thoughts:
>
> *SCENARIO:*
>
> I currently have one MASTER and two Streaming Replication Slaves servers...
>
> master01
>> slave01 (Streaming replication + wal_files)
>> slave02 (Streaming replication + wal_files)
>
>
> ...Postgres is mounted on: /var/lib/pgsql/... The SSD disks will be
> installed only on my Master server, because my main problem is Writes and
> not reads.
>
> The new SSD volume will be mounted on /var/lib/pgsql2/
>
>
>
>- The slave02 server will loose the streaming replication connection
>to the master, once slave01 becomes the new master a new timeline will be
>settled? Will slave02 be able to connect to the slave01 server for
>streaming replication?
>
Yes, slave01 becomes the new master with a new timeline id. Cascading
replication is supported in 9.2, but the dependency on WAL archives is a bit
heavy. You need to ensure the .history file is copied over to slave02. I
think you have WAL archiving enabled, so that should be fine.

*MIGRATION OPTIONS:*
>
> *Migration Option 1:* *I know this option will work*
>
>
>1. Mount the new volume */var/lib/pgsql2/ *on the master01 server
>2. Turn slave01 into a master server
>3. once I can confirm everything is working fine, I can go to step 4
>4. Stop postgres on the master01, start copying the DB using
>pg_basebackup from slave02 to master01 (Will have to edit postgres to
>use /var/lib/pgsql2/ instead /var/lib/pgsql - Is that possible? Or I'd have
>to create a symbolic link?)
>5. Start postgres on master01 server and check if all goes well as
>streaming replication server (Will test it for days)
>6. Turn master01 into a master server and I'll have to re-copy the DB
>into slave01 to make it a streaming replication server again
>
@ Step 4, you can consider making master01 a slave directly by creating a new
recovery.conf file and copying over slave02's .history file, by doing which
you can avoid rebuilding streaming replication from scratch.
When you say "edit postgres", did you mean changing postgresql.conf? If yes,
changing the parameter in postgresql.conf to use the new location should not
be a problem.

@ Step 6, once you turn master01 (the new slave) back into the master, you can
make slave01 (the new master) a slave again by copying over the .history files
and required WALs. You do not have to rebuild replication from scratch.

*Migration Option 2:* *I don't know if this is possible - IS THIS
> POSSIBLE*
>
>1. Mount the new volume */var/lib/pgsql2/* on the master01 server
>2. Stop postgres on the server (I won't stop postgres on the slave so
>the users will be able to use the server as read-only)
>3. Copy the data from /var/lib/pgsql/ to /var/lib/pgsql2/
>4. Configure postgres to start using the new volume(/var/lib/pgsql2/)
>
This looks pretty straightforward. The only issue is that users will not be
able to perform writes during the copy. If you are bringing the master down
and starting it up again, that should not be a problem, and the slaves should
be able to catch up once it is back.
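Step 3 of option 2 is the only long-running part. A sketch of the copy, using
the paths from the scenario above; the service name and init-script details
are assumptions for your distribution:

```shell
service postgresql stop                      # step 2: stop the master
rsync -a /var/lib/pgsql/ /var/lib/pgsql2/    # step 3: copy onto the SSD volume
# step 4: point postgres at the new volume (data_directory in
# postgresql.conf, or the init script's PGDATA), then:
service postgresql start
```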

What do you guys think? Is option possible? if so it would be much easier :)
>

Well, both options can work; which is better depends on your expectations,
application requirements on downtime, SLAs, etc.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Vacuum Full - Questions

2016-08-31 Thread Venkata B Nagothi
On Thu, Sep 1, 2016 at 10:32 AM, Patrick B <patrickbake...@gmail.com> wrote:

>
>
> 2016-09-01 11:53 GMT+12:00 Venkata B Nagothi <nag1...@gmail.com>:
>
>>
>> On Thu, Sep 1, 2016 at 8:41 AM, Patrick B <patrickbake...@gmail.com>
>> wrote:
>>
>>> Hi guys,
>>>
>>> A dev has ran a VACUUM FULL command into our test database running
>>> PostgreSQL 9.5 (I know... goddamn)...
>>>
>>> ... after the Vacuum Full, some queries start using SEQ scans instead of
>>> indexes...
>>>
>>> Does that happen because of the size of the table? The table that I'm
>>> referring to is 150MB big after the vacuum (Before was 1G)...
>>>
>>
>> Yes, it is possible that sequential scans after the vacuum full are cheaper
>> than index scans were before it. Do you see an improvement in query
>> response times? How does the cost differ?
>>
>> Regards,
>> Venkata B N
>>
>> Fujitsu Australia
>>
>
>
> Well... the response time was worst than when using index.. that's very
> weird... I've re-created the indexes now a ran ANALYZE and the query is
> using again the index.. just wanted understand what happened...
>

There you go: running ANALYZE made the difference. Whenever you execute VACUUM
or VACUUM FULL, make sure you also execute ANALYZE so that the latest stats
are available to the planner. Sorry, I should have mentioned this earlier.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Vacuum Full - Questions

2016-08-31 Thread Venkata B Nagothi
On Thu, Sep 1, 2016 at 8:41 AM, Patrick B  wrote:

> Hi guys,
>
> A dev has ran a VACUUM FULL command into our test database running
> PostgreSQL 9.5 (I know... goddamn)...
>
> ... after the Vacuum Full, some queries start using SEQ scans instead of
> indexes...
>
> Does that happen because of the size of the table? The table that I'm
> referring to is 150MB big after the vacuum (Before was 1G)...
>

Yes, it is possible that sequential scans after the vacuum full are cheaper
than index scans were before it. Do you see an improvement in query response
times? How does the cost differ?

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Queries on async replicas locked each other after index rename on master

2016-08-30 Thread Venkata B Nagothi
On Wed, Aug 31, 2016 at 8:07 AM, Chinh Nguyen  wrote:

> Hello All,
>
> We recently tried to reindex on a heavy used database cluster (master
> + multiple hot-standby async replicas, all taking traffic) and
> replicas were effectively blocked for 10 minutes resulting in drop of
> a lot of read traffic. We reindex by create new index concurrently,
> then rename old and new index and drop old index afterwards.
> After we execute the renaming on master, we found a quick burst of
> exclusive locks as expected:
>
> Process 34482 waits for AccessShareLock on relation 4153657 of
> database 16420; blocked by process 18953.
> Process 18953: ALTER INDEX public.old_index RENAME TO temp_index;
>
> But all hot-standby replicas started to see queries locking up for 10
> minutes, sample log from a replica is below:
>
> LOG: process 41040 still waiting for AccessShareLock on relation
> 4153657 of database 16420 after 1000.072 ms
>
> We have in config:
>
> max_standby_streaming_delay = 10min
> lock_timeout = 10s
>
> Any idea what happened with the replicas? Why queries on replica
> locked for so long vs sub-second on master?
>

I think it is not the lock-release wait time; it is the delay in WAL replay on
the standby, caused by the parameter "max_standby_streaming_delay".
If you have read queries running on the standby, WAL replay waits up to
"max_standby_streaming_delay" (which is 10 minutes in your case) before
canceling the conflicting queries on the replica.


Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Question about performance - Postgres 9.5

2016-08-16 Thread Venkata B Nagothi
On Mon, Jun 13, 2016 at 8:37 AM, Patrick B  wrote:

> Hi guys,
>
> In the db I'm working one, it will be three tables:
>
> visits, work, others.
>
> Everything the customer do, will be logged. All inserts/updates/deletes
> will be logged.
>
> Option 1: Each table would have its own log table.
> visits_logs, work_logs, others_logs
>
> Option 2: All the logs would be stored here...
> log_table
>
> Can you please guys tell me which option would be faster in your opinion,
> and why?
>

Did you mean that you will be auditing the activities happening on those 3
tables? If yes, can you clarify what exactly you will be logging?

What volume of transactions will the 3 tables receive over a business day? If
the volume is manageable, then one table logging all the actions across the 3
tables would be fine. That said, a separate audit table for each table is
generally a good idea, as it makes tracking activities easier.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Venkata B Nagothi
On Wed, Aug 17, 2016 at 2:09 PM, Sameer Kumar <sameer.ku...@ashnik.com>
wrote:

>
>
> On Wed, Aug 17, 2016 at 12:00 PM Venkata B Nagothi <nag1...@gmail.com>
> wrote:
>
>> On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar <sameer.ku...@ashnik.com>
>> wrote:
>>
>>>
>>>
>>> On Wed, Aug 17, 2016 at 10:34 AM Patrick B <patrickbake...@gmail.com>
>>> wrote:
>>>
>>>> Hi guys,
>>>>
>>>> I'm using PostgreSQL 9.2 and I got one master and one slave with
>>>> streaming replication.
>>>>
>>>> Currently, I got a backup script that runs daily from the master, it
>>>> generates a dump file with 30GB of data.
>>>>
>>>> I changed the script to start running from the slave instead the
>>>> master, and I'm getting this errors now:
>>>>
>>>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>>>> failed.
>>>>> pg_dump: Error message from server: ERROR:  canceling statement due to
>>>>> conflict with recovery
>>>>> DETAIL:  User was holding a relation lock for too long.
>>>>
>>>>
>>> Looks like while your pg_dump sessions were trying to fetch the data,
>>> someone fired a DDL or REINDEX or VACUUM FULL on the master database.
>>>
>>>>
>>>> Isn't that possible? I can't run pg_dump from a slave?
>>>>
>>>
>>> Well you can do that, but it has some limitation. If you do this quite
>>> often, it would be rather better to have a dedicated standby for taking
>>> backups/pg_dumps. Then you can set max_standby_streaming_delay and
>>> max_standby_archiving_delay to -1. But I would not recommend doing this if
>>> you use your standby for other read queries or for high availability.
>>>
>>> Another option would be avoid such queries which causes Exclusive Lock
>>> on the master database during pg_dump.
>>>
>>
>> Another work around could be to pause the recovery, execute the pg_dump
>> and then, resume the recovery process. Not sure if this work around has
>> been considered.
>>
>> You can consider executing "pg_xlog_replay_pause()" before executing
>> pg_dump and then execute "pg_xlog_replay_resume()" after the pg_dump
>> process completes.
>>
>
> Ideally I would not prefer if I had only one standby. If I am right, it
> would increase the time my standby would take to complete recovery and
> become active during a promotion (if I need it during a failure of master).
> It may impact high availability/uptime. Isn't it?
>

Yes, depending on how long pg_dump takes on the standby, the lag would
increase by that much time. It does have an impact on high-availability
uptime, which needs to be taken into consideration, and the same is true of
the max_standby_* parameters you mentioned above. I am just pointing out an
option for a clean pg_dump from the standby, provided, of course, that it fits
within the high-availability uptime SLA.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] schema advice for event stream with tagging and filtering

2016-08-16 Thread Venkata B Nagothi
On Tue, Aug 16, 2016 at 6:38 PM, Chris Withers 
wrote:

> Hi All,
>
> What would be the best schema to use when looking to implement an event
> stream with tagging and filtering?
>
> An event is a row with a primary key along the lines of (colo, host,
> category) and an associated set of tags, where each tag has a type and a
> value (eg: {"color": "red", "owner": "fred", "status": "open"...}).
>
> Events come in as a streams of creates/updates as a cluster of http posts
> to a web app.
>

Not sure which version of PostgreSQL you are using. Did you consider the
JSON/JSONB data types, which are well supported by PostgreSQL? You could have
a table with a JSON or JSONB column; if you can use JSONB, it supports
indexing as well.

That should make it easy for the web app to push JSON-format data to
PostgreSQL.
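A minimal sketch of that shape (JSONB needs PostgreSQL 9.4+), with
hypothetical table and column names:

```sql
CREATE TABLE events (
    colo     text NOT NULL,
    host     text NOT NULL,
    category text NOT NULL,
    tags     jsonb,
    PRIMARY KEY (colo, host, category)
);

-- a GIN index supports containment queries over the tags
CREATE INDEX events_tags_idx ON events USING gin (tags);

-- filter events by tag value
SELECT * FROM events WHERE tags @> '{"status": "open"}';
```

The @> containment operator is index-assisted by the GIN index, so filtering
on arbitrary tag key/value pairs stays fast as the stream grows.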

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Running pg_dump from a slave server

2016-08-16 Thread Venkata B Nagothi
On Wed, Aug 17, 2016 at 1:31 PM, Sameer Kumar 
wrote:

>
>
> On Wed, Aug 17, 2016 at 10:34 AM Patrick B 
> wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 and I got one master and one slave with
>> streaming replication.
>>
>> Currently, I got a backup script that runs daily from the master, it
>> generates a dump file with 30GB of data.
>>
>> I changed the script to start running from the slave instead the master,
>> and I'm getting this errors now:
>>
>> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>>> failed.
>>> pg_dump: Error message from server: ERROR:  canceling statement due to
>>> conflict with recovery
>>> DETAIL:  User was holding a relation lock for too long.
>>
>>
> Looks like while your pg_dump sessions were trying to fetch the data,
> someone fired a DDL or REINDEX or VACUUM FULL on the master database.
>
>>
>> Isn't that possible? I can't run pg_dump from a slave?
>>
>
> Well you can do that, but it has some limitation. If you do this quite
> often, it would be rather better to have a dedicated standby for taking
> backups/pg_dumps. Then you can set max_standby_streaming_delay and
> max_standby_archiving_delay to -1. But I would not recommend doing this if
> you use your standby for other read queries or for high availability.
>
> Another option would be avoid such queries which causes Exclusive Lock on
> the master database during pg_dump.
>

Another work around could be to pause the recovery, execute the pg_dump and
then, resume the recovery process. Not sure if this work around has been
considered.

You can consider executing "pg_xlog_replay_pause()" before executing
pg_dump and then execute "pg_xlog_replay_resume()" after the pg_dump
process completes.
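Sketched as a sequence, where the standby host and database names are
placeholders:

```shell
psql -h standby01 -d mydb -c "SELECT pg_xlog_replay_pause();"
pg_dump -h standby01 -Fc mydb > /backups/mydb.dump
psql -h standby01 -d mydb -c "SELECT pg_xlog_replay_resume();"
```

Note that replication lag accumulates while replay is paused, so the standby
will need time to catch up after the dump finishes.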

Regards,
Venkata B N

Fujitsu Australia