Suggestion to Monitoring Tool

2020-05-27 Thread postgann2020 s
Hi Team,

Thanks for your support.

Environment Details:
OS: RHEL 7.2
Postgres: 9.5.15
Master-Slave with Streaming replication

We are planning to implement a monitoring tool for our environment.

Could someone please suggest a monitoring tool based on your experience?

We are looking to cover the areas below.

1. Monitoring metrics and alerting.
2. Monitoring events and alerting.
3. Consolidate all the PROD DB logs and provide insights on log data.
4. Logging explain plans and providing insights on them (e.g., storing
explain plans, comparing them, and alerting on deviations).
5. Logging audit data and insights from audit data.


Thanks & Regards,
Postgann.


Re: Suggestion on the process to sync the data from PROD DB to Staging environment

2020-05-27 Thread postgann2020 s
Thanks, David,

Please find the environment details.

Environment:
PROD:
OS: RHEL 7.1
Postgres: 9.5.15

Staging:
OS: RHEL 7.1
Postgres: 9.5.15

Thanks,
PostgAnn.

On Wed, May 27, 2020 at 9:51 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, May 27, 2020 at 8:56 AM postgann2020 s 
> wrote:
>
>> Could someone please suggest the process to *sync the data from PROD DB
>> to the Staging environment* with minimal manual intervention or
>> automatically.
>>
>
> Read up on the general purpose "bash" scripting language, the PostgreSQL
> "pg_dump" and "pg_restore" commands, "ssh", and "cron".
>
> "cron" and "bash" provide for the "automatically" requirement.
>
> It is possible to assemble something functional with those tools.  Whether
> it will actually work in your specific situation is impossible to say since
> you provide zero information about your environment.
>
> David J.
>
>
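David's recipe above can be sketched as a cron-driven bash script. Everything below (hostnames, database name, paths, the cron line) is a placeholder assumption, not a tested procedure; the commands are echoed rather than executed so the sketch is safe to run as-is.

```shell
#!/bin/sh
# Hypothetical nightly PROD -> Staging refresh using pg_dump/pg_restore.
PROD_HOST="prod-db.example.com"   # placeholder
DB_NAME="appdb"                   # placeholder
DUMP_FILE="/tmp/${DB_NAME}_$(date +%F).dump"

# Custom-format dump from PROD, then restore into Staging, dropping
# existing objects first. Echoed here instead of executed.
DUMP_CMD="pg_dump -h $PROD_HOST -U postgres -Fc -f $DUMP_FILE $DB_NAME"
RESTORE_CMD="pg_restore -h localhost -U postgres -d $DB_NAME --clean --if-exists $DUMP_FILE"

echo "$DUMP_CMD"
echo "$RESTORE_CMD"

# A crontab entry covers the "daily, automatic" requirement, e.g. at 01:00:
#   0 1 * * * /usr/local/bin/refresh_staging.sh >> /var/log/refresh_staging.log 2>&1
```

Whether `--clean` is appropriate depends on whether Staging holds data that must survive the refresh.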


Re: Suggestion on the process to sync the data from PROD DB to Staging environment

2020-05-27 Thread postgann2020 s
Hi Luan,

Thanks for your email.

How frequently do you sync *hourly/daily/weekly*?  > We are looking to sync on
a daily basis.
Do you have other processes writing data into Staging env?  > Yes, we have
old PROD data there.

Thanks & Regards,
Postgann.

On Wed, May 27, 2020 at 9:33 PM Luan Huynh  wrote:

> Hi Postgann,
>
> How frequently do you sync *hourly/daily/weekly*? Do you have other
> processes writing data into Staging env?
>
> Regards,
>
> On Wed, May 27, 2020 at 5:56 PM postgann2020 s 
> wrote:
>
>> Hi Team,
>>
>> Thanks for your support.
>>
>> Could someone please suggest the process to *sync the data from PROD DB
>> to the Staging environment* with minimal manual intervention or
>> automatically.
>>
>> Thanks & Regards,
>> Postgann.
>>
>


Suggest a scheduler for activities

2020-05-27 Thread postgann2020 s
Hi Team,

Thanks for your support.

Currently, we are using Tomcat for scheduling and want to replace it with
DB-specific schedulers.

Could someone please suggest a scheduler for application activities
instead of creating Tomcat schedulers, and also a scheduler for DB-specific
activities instead of cron.

Thanks & Regards,
Postgann.
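One candidate worth evaluating for the DB-side scheduling is the pg_cron extension, which stores schedules inside the database and runs jobs from a background worker. This is a suggestion only, not something the thread confirms works on this exact 9.5 setup; check the extension's version requirements first. A minimal sketch:

```sql
-- Assumes pg_cron is installed and listed in shared_preload_libraries.
CREATE EXTENSION pg_cron;

-- Schedule a nightly maintenance job at 03:00 (job name and command are
-- illustrative placeholders).
SELECT cron.schedule('nightly-analyze', '0 3 * * *', 'ANALYZE');

-- Inspect and remove jobs (unschedule takes the job id from cron.job):
SELECT jobid, schedule, command FROM cron.job;
SELECT cron.unschedule(jobid) FROM cron.job WHERE command = 'ANALYZE';
```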


Suggestion on the process to sync the data from PROD DB to Staging environment

2020-05-27 Thread postgann2020 s
Hi Team,

Thanks for your support.

Could someone please suggest a process to *sync the data from PROD DB to
the Staging environment* with minimal manual intervention, or fully automatically.

Thanks & Regards,
Postgann.


Re: Suggestion to improve query performance of data validation in proc.

2020-05-22 Thread postgann2020 s
Hi David,

Thanks for your feedback.

We are using the below kind of validation in multiple locations throughout the
proc, with statements like this:

-- Check whether data is available for structure_id1
IF EXISTS (SELECT 1 FROM schema.table_name WHERE column1 = structure_id1) THEN
    is_exists1 := true;
END IF;

We are looking for a better query than "*SELECT 1 FROM schema.table_name
WHERE column1 = structure_id1*" for this data validation.

Please suggest whether there are other ways to perform this kind of validation
that would improve the overall performance.

Regards,
Postgann.

On Fri, May 22, 2020 at 12:36 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You should read through the page below and the contained linked FAQ - note especially
> the concept and recommendation for “cross-posting”.
>
> https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics
>
> On Thursday, May 21, 2020, postgann2020 s  wrote:
>
>>
>> We have multiple long procs that are having 100s of data validations and
>> currently we have written as below.
>>
>> ***
>>
>> if (SELECT 1  FROM SCHEMA.TABLE WHERE column=data AND column=data) then
>> statements
>> etc..
>>
>> ***
>>
>> Are there any other ways to validate the data, which will help us to
>> improve the performance of the query?
>>
>
> I have no idea what you are trying to get at here.  You should try
> providing SQL that actually runs.  Though at first glance it seems quite
> probable you are doing useless work anyway.
>
> David J.
>
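For what it's worth, the usual plpgsql idiom here is an EXISTS test assigned directly to the boolean, which stops at the first matching row; whether it helps in practice depends on indexing, which the thread doesn't show. A sketch using the names from the post (all placeholders):

```sql
-- EXISTS short-circuits on the first matching row; a btree index on
-- column1 turns this into a single index probe.
is_exists1 := EXISTS (SELECT 1 FROM schema.table_name
                      WHERE column1 = structure_id1);

-- Hundreds of separate per-id probes can sometimes be batched into one
-- set-based check, e.g. with = ANY over an array of ids:
-- SELECT array_agg(column1) FROM schema.table_name
--  WHERE column1 = ANY(the_ids_to_validate);
```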


Suggestion to improve query performance of data validation in proc.

2020-05-22 Thread postgann2020 s
Hi Team,
Thanks for your support.

Could you please advise on the query below.

We have multiple long procs that have hundreds of data validations, currently
written as below.

***

IF (SELECT 1 FROM SCHEMA.TABLE WHERE column=data AND column=data) THEN
    statements;
    etc..
END IF;

***

Are there any other ways to validate the data that would help us to
improve the performance of the query?

Thanks for your support.

Regards,
PostgAnn.


Re: Suggestion on index creation for TEXT data field

2020-05-21 Thread postgann2020 s
Hi David, Adrian,

Thanks for the information.
Sure, will post on PostGIS community.

Regards,
PostgAnn.

On Thu, May 21, 2020 at 8:21 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, May 21, 2020 at 7:45 AM postgann2020 s 
> wrote:
>
>> >And what type of data exactly are we talking about.  ==> Column is
>> stroing GIS data.
>>
>
> GIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this
> is maybe better posted to the PostGIS community directly...
>
> David J.
>
>


Re: Suggestion on table analyze

2020-05-21 Thread postgann2020 s
Hi Adrian,

Thanks, I'll check it out.

Regards,
PostgAnn.

On Thu, May 21, 2020 at 8:11 PM Adrian Klaver 
wrote:

> On 5/21/20 7:18 AM, postgann2020 s wrote:
> > Hi Team,
> >
> > Thanks for your support.
> >
> > Could you please suggest on below query.
> >
> > Environment
> > PostgreSQL: 9.5.15
> > Postgis: 2.2.7
> >
> > Mostly table contain GIS data.
> >
> > While analyzing the table getting below NOTICE. It seems is pretty
> > understanding, but needs help on the below points.
> >
> > 1 . What might be the reason for getting the NOTICE?.
> > 2. Is this lead to any problems in the future?.
> >
> > ANALYZE SCHEMA.TABLE;
> >
> > NOTICE:  no non-null/empty features, unable to compute statistics
> > NOTICE:  no non-null/empty features, unable to compute statistics
> > Query returned successfully with no result in 1.1 secs.
>
> This is coming from PostGIS:
>
> postgis/gserialized_estimate.c:
> /* If there's no useful features, we can't work out stats */
>  if ( ! notnull_cnt )
>  {
>  elog(NOTICE, "no non-null/empty features, unable to
> compute statistics");
>  stats->stats_valid = false;
>  return;
>  }
>
>
>
> You might find more information from here:
>
> https://postgis.net/support/
>
> Though FYI PostGIS 2.2.7 is past EOL:
>
> https://postgis.net/source/
>
> >
> > Thanks for your support.
> >
> > Regards,
> > PostgAnn.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Suggestion on index creation for TEXT data field

2020-05-21 Thread postgann2020 s
Hi David,

Thanks for your email.

>And what type of data exactly are we talking about.  ==> The column is storing
GIS data.

Regards,
PostgAnn.

On Thu, May 21, 2020 at 8:06 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, May 21, 2020 at 7:28 AM postgann2020 s 
> wrote:
>
>> which is having an avg width of 149bytes.
>>
>
> The average is meaningless if your maximum value exceeds a limit.
>
> 2. What type of index is the best suited for this type of data?.
>>
>
> And what type of data exactly are we talking about.  "TEXT" is not a
> useful answer.
>
> If the raw data is too large no index is going to be "best" -  as the hint
> suggests you either need to drop the idea of indexing the column altogether
> or apply some function to the raw data and then index the result.
>
> David J.
>
>


Suggestion on index creation for TEXT data field

2020-05-21 Thread postgann2020 s
Hi Team,
Thanks for your support.
Could you please advise on the query below.

Environment

PostgreSQL: 9.5.15
Postgis: 2.2.7
The table mostly contains GIS data, and we are trying to create an index on
a column whose average width is 149 bytes.

 CREATE INDEX index_idx
  ON SCHEMA.TABLE
  USING btree
  (column);

ERROR:  index row size 2976 exceeds maximum 2712 for index "index_idx"
HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full-text
indexing.

Could you please advise on the questions below.
1. How can we solve this issue?
2. What type of index is best suited for this type of data?

Thanks for your support.

Regards,
PostgAnn.
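Following the HINT in the error, the standard workaround is to index a digest of the value and rewrite queries through the same expression; for actual geometries, a spatial index is the usual route instead. Schema, table, and column names below are the placeholders from the thread, not verified DDL:

```sql
-- Expression index on an MD5 hash keeps every index entry small (32 hex chars):
CREATE INDEX index_md5_idx ON SCHEMA.TABLE (md5(column));

-- Equality lookups must go through the same expression to use the index:
SELECT * FROM SCHEMA.TABLE WHERE md5(column) = md5('value to find');

-- If the column really holds PostGIS geometries, a GiST index is normally
-- what you want rather than btree:
-- CREATE INDEX index_geom_gist ON SCHEMA.TABLE USING gist (geom_column);
```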


Suggestion on table analyze

2020-05-21 Thread postgann2020 s
Hi Team,

Thanks for your support.

Could you please advise on the query below.

Environment
PostgreSQL: 9.5.15
Postgis: 2.2.7

The table mostly contains GIS data.

While analyzing the table, we get the NOTICE below. The message itself seems
understandable, but we need help on the points below.

1. What might be the reason for the NOTICE?
2. Could this lead to any problems in the future?

ANALYZE SCHEMA.TABLE;

NOTICE:  no non-null/empty features, unable to compute statistics
NOTICE:  no non-null/empty features, unable to compute statistics
Query returned successfully with no result in 1.1 secs.

Thanks for your support.

Regards,
PostgAnn.


Suggestion to improve query performance.

2020-05-20 Thread postgann2020 s
Hi Team,

Thanks for your support.

We are using below environment:

Application :
Programming Language : JAVA
Geoserver

Database Stack:
PostgreSQL : 9.5.15
Postgis

We have 3 GeoServer queries and are seeing performance issues after changing
them. I have posted the queries and explain plans for both the old and new
versions.

The same type of issue was found for all 3 queries:
1. The index scan changed to a bitmap scan.
2. In all new queries, the filter condition is checked again.



Old Query:

>>

Query No:1

1. No issue while executing the query.
2. It fetches only 38 rows.

===

EXPLAIN ANALYZE SELECT
"underground_route_id","ug_route_sub_type","sw_uid22",encode(ST_AsBinary(ST_Simplify(ST_Force2D("the_geom"),
1.506687768824122E-5, true)),'base64') as "the_geom" FROM
"schema"."underground_route" WHERE  ("the_geom" && ST_GeomFromText('POLYGON
((77.20637798309326 28.627887618687176, 77.20637798309326
28.632784466413323, 77.21195697784424 28.632784466413323, 77.21195697784424
28.627887618687176, 77.20637798309326 28.627887618687176))', 4326) AND
(("ug_route_sub_type" = 'IP1-IRU-Intercity' AND "ug_route_sub_type" IS NOT
NULL ) OR ("ug_route_sub_type" = 'IP1-IRU-Intracity' AND
"ug_route_sub_type" IS NOT NULL ) OR ("ug_route_sub_type" = 'IRU-Intracity'
AND "ug_route_sub_type" IS NOT NULL ) OR ("ug_route_sub_type" =
'IRU-Intercity' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'IP1-Own-Intercity' AND "ug_route_sub_type" IS NOT
NULL ) OR ("ug_route_sub_type" = 'IP1-Own-Intracity' AND
"ug_route_sub_type" IS NOT NULL ) OR ("ug_route_sub_type" = 'Own-Intracity'
AND "ug_route_sub_type" IS NOT NULL ) OR ("ug_route_sub_type" =
'Own-Intercity' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'Own-Intercity-Patch-replacement' AND
"ug_route_sub_type" IS NOT NULL ) OR ("ug_route_sub_type" =
'Own-Intracity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'Clamping' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'None' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'On kerb' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'Other' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'Suspend' AND "ug_route_sub_type" IS NOT NULL ) OR
("ug_route_sub_type" = 'In Duct Chamber' AND "ug_route_sub_type" IS NOT
NULL ) OR ("ug_route_sub_type" = '' AND "ug_route_sub_type" IS NOT NULL )
OR "ug_route_sub_type" IS NULL  OR ("sw_uid22" = 'Overhead' AND "sw_uid22"
IS NOT NULL  AND "ug_route_sub_type" = 'Own-Intercity' AND
"ug_route_sub_type" IS NOT NULL ) OR ("sw_uid22" = 'Overhead' AND
"sw_uid22" IS NOT NULL  AND "ug_route_sub_type" = 'Own-Intracity' AND
"ug_route_sub_type" IS NOT NULL ) OR ("sw_uid22" = 'Overhead' AND
"sw_uid22" IS NOT NULL  AND "ug_route_sub_type" =
'Own-Intercity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL ) OR
("sw_uid22" = 'Overhead' AND "sw_uid22" IS NOT NULL  AND
"ug_route_sub_type" = 'Own-Intracity-Patch-replacement' AND
"ug_route_sub_type" IS NOT NULL )));

 Explain Plan:
 

 Index Scan using underground_route_the_geom_geo_idx on underground_route
 (cost=0.41..41.20 rows=7 width=157) (actual time=0.158..1.010 rows=38
loops=1)
   Index Cond: (the_geom &&
'010320E6100100054C354D534022DEBDA03C40004C354D53407BA9AC29FEA13C4000B4904D53407BA9AC29FEA13C4000B49
04D534022DEBDA03C40004C354D534022DEBDA03C40'::geometry)
   Filter: ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND
(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =
'IP1-IRU-Intracity'::text) AN
D (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =
'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
(((ug_route_sub_type)::text = 'IRU-In
tercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
(((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND
(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub
_type)::text = 'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT
NULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND
(ug_route_sub_type IS NOT NUL
L)) OR (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND
(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =
'Own-Intercity-Patch-replacement'::tex
t) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =
'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT
NULL)) OR (((ug_route_su
b_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR
(((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT
NULL)) OR (((ug_rout
e_sub_type)::text 
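An editorial note on the query above: the long chain of `OR ("ug_route_sub_type" = '…' AND "ug_route_sub_type" IS NOT NULL)` terms can usually be collapsed with `IN`, which PostgreSQL rewrites to `= ANY (…)`; the `IS NOT NULL` guards are redundant, since equality to a literal already excludes NULLs. A sketch with the column names from the thread (untested against this schema, value list abbreviated):

```sql
SELECT underground_route_id, ug_route_sub_type, sw_uid22,
       encode(ST_AsBinary(ST_Simplify(ST_Force2D(the_geom), 1.506687768824122E-5, true)),
              'base64') AS the_geom
FROM schema.underground_route
WHERE the_geom && ST_GeomFromText('POLYGON ((...))', 4326)  -- same bbox as above
  AND (ug_route_sub_type IN ('IP1-IRU-Intercity', 'IP1-IRU-Intracity',
                             'IRU-Intracity', 'IRU-Intercity'
                             /* , ...remaining values from the original query */)
       OR ug_route_sub_type IS NULL
       OR (sw_uid22 = 'Overhead'
           AND ug_route_sub_type IN ('Own-Intercity', 'Own-Intracity',
                                     'Own-Intercity-Patch-replacement',
                                     'Own-Intracity-Patch-replacement')));
```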

Re: Could someone please share the procedure to troubleshoot lock issues on procs.

2020-04-02 Thread postgann2020 s
Thanks Adrian, will share the details.

On Fri, Apr 3, 2020 at 4:30 AM Adrian Klaver 
wrote:

> On 4/2/20 12:37 PM, postgann2020 s wrote:
> > Hi Team,
> >
> > Good Evening,
> >
> > Could someone please help us share the procedure to troubleshoot the
> > locks on proc issues.
> >
> > Environment:
> > 
> >   1 pgpool server (Master Pool Node) using Straming replication with
> > load balancing
> >   4 DB nodes (1Master and 3 Slaves).
> >
> >   Versions:
> >   1. postgres: 9.5.15
> >   2. pgpool   : 3.9
> >   3. repmgr:  4.1
> >
> > We are continuously facing locking issues for below procedures , due to
> > this the  rest of the call for these procs going into waiting
> > state.Which cause the DB got hung. Below are the procs  running with
> > DB_User2 from the application.
> >
> > 1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This
> > proc it self calling Schema1.cable_remove_validation($1,$2).
> > 2. select * from Schema1.cable_remove_validation($1,$2)  ==> This is
> > also calling from the applications
>
> To figure out below we need to see what is happening in above.
>
> >
> > if we ran explain analyze, its taking msec only, but if we run
> > simultaneouly from application getting locked and waiting state.
> >
> > We have ran below query for showing blocking queries and attached output
> > in Blocking_Queries_with_PID.csv file:
> >
> > SELECT
> > pl.pid as blocked_pid
> > ,psa.usename as blocked_user
> > ,pl2.pid as blocking_pid
> > ,psa2.usename as blocking_user
> > ,psa.query as blocked_statement
> > FROM pg_catalog.pg_locks pl
> > JOIN pg_catalog.pg_stat_activity psa
> > ON pl.pid = psa.pid
> > JOIN pg_catalog.pg_locks pl2
> > JOIN pg_catalog.pg_stat_activity psa2
> > ON pl2.pid = psa2.pid
> > ON pl.transactionid = pl2.transactionid
> > AND pl.pid != pl2.pid
> > WHERE NOT pl.granted;
> >
> > Output: attached output in Blocking_Queries_with_PID.csv file
> >
> >
> > The waiting connections are keep on accumulating and cause DB hung.
> > I have attached pg_stat_activity excel file with the user along with the
> > proc queries which cause waiting state.
> >
> > Finds:
> >
> > There are total 18 connections for DB_User2 which are running only above
> > 2 procs, Out of that only one connection with 18732 is running proc
> > (select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long
> > time  and reset of all 17 connections are in waiting state from the long
> > time.
> >
> > There are many exclusive locks on table for 18732 and other process as
> > well. I have attached pg_locks reference excel(Lock_Reference_For_PROC)
> > with highlighted pid 18732.
> >
> > Could someone please suggest the procedure to troubleshoot this issue.
> > Please find the attachment for reference.
> >
> > Thanks,
> > Postgann.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Help to find the maximum length of a field in a particular column in all the tables

2020-04-02 Thread postgann2020 s
Hi Team,

Good Evening,

We are trying to sync the table field sizes with another DB's table field
sizes.

We have multiple tables in multiple schemas in the DB. We are trying to find
out the maximum size of each column, for all tables and all schemas in the DB.

How can we find the maximum length of the data in a particular column of a
table, and then for all tables in a schema?

Example column names in table: a,b,c,d,e,f,g
Example schema names in DB: A,B,C,D

Expected output:
column_name Max_size_of_column
a 10
b 20

or
column_name, Max_size_of_column, column_table, table_schema
a 10 Table1 Schema1
b 20 Table1 Schema1

I have tried the query below, but was not able to get the desired output.

SELECT (select column_name from INFORMATION_SCHEMA.COLUMNS where
table_name='building'), select max(length(select column_name from
INFORMATION_SCHEMA.COLUMNS where table_name='building')) from from
INFORMATION_SCHEMA.COLUMNS where table_name='building'
group by column_name;

Please help us to get the desired output.

Below is the reference I used for the above:
https://stackoverflow.com/questions/43123311/how-to-find-the-maximum-length-of-data-in-a-particular-field-in-postgresql

Regards,
Postgann.
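The attempted query above cannot work as written, because `max(length(...))` has to run against the data table itself, not against `information_schema.columns`. One common approach is to generate a per-column query from the catalog and run it against each table. The helper below is a hypothetical sketch that only builds the SQL string (the column list would come from `information_schema.columns`); it does not touch a database, and identifier quoting is ignored for brevity:

```python
# Hypothetical helper: build one UNION ALL query reporting the maximum
# data length per column of a table. Schema/table/column names are assumed
# to have been fetched from information_schema.columns beforehand.
def max_length_query(schema, table, columns):
    parts = [
        (f"SELECT '{col}' AS column_name, "
         f"max(length({col}::text)) AS max_size_of_column "
         f"FROM {schema}.{table}")
        for col in columns
    ]
    return "\nUNION ALL\n".join(parts) + ";"

print(max_length_query("public", "building", ["a", "b", "c"]))
```

Running the generated SQL (via psql or any driver) yields one row per column with its maximum data length, matching the expected output format described above.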


Could someone please share the procedure to troubleshoot lock issues on procs.

2020-04-02 Thread postgann2020 s
Hi Team,

Good Evening,

Could someone please share the procedure to troubleshoot lock issues
on procs.

Environment:

 1 pgpool server (Master Pool Node) using Streaming replication with load
balancing
 4 DB nodes (1 Master and 3 Slaves).

 Versions:
 1. postgres: 9.5.15
 2. pgpool   : 3.9
 3. repmgr:  4.1

We are continuously facing locking issues with the procedures below; because of
this, the rest of the calls to these procs go into a waiting state, which
causes the DB to hang. Below are the procs running as DB_User2 from the
application.

1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This proc
itself calls Schema1.cable_remove_validation($1,$2).
2. select * from Schema1.cable_remove_validation($1,$2)  ==> This is also
called directly from the applications.

If we run EXPLAIN ANALYZE, each call takes only milliseconds, but when run
simultaneously from the application, the calls get locked and go into a
waiting state.

We have run the query below to show blocking queries; the output is attached
in the Blocking_Queries_with_PID.csv file:

SELECT
pl.pid as blocked_pid
,psa.usename as blocked_user
,pl2.pid as blocking_pid
,psa2.usename as blocking_user
,psa.query as blocked_statement
FROM pg_catalog.pg_locks pl
JOIN pg_catalog.pg_stat_activity psa
ON pl.pid = psa.pid
JOIN pg_catalog.pg_locks pl2
JOIN pg_catalog.pg_stat_activity psa2
ON pl2.pid = psa2.pid
ON pl.transactionid = pl2.transactionid
AND pl.pid != pl2.pid
WHERE NOT pl.granted;

Output: attached output in Blocking_Queries_with_PID.csv file


The waiting connections keep accumulating and cause the DB to hang.
I have attached a pg_stat_activity excel file with the user along with the
proc queries that are in a waiting state.

Findings:

There are a total of 18 connections for DB_User2, all running only the above
2 procs. Of those, only one connection, PID 18732, has been running the proc
(select * from Schema1.duct_remove_validation($1,$2,$3,$4)) for a long time,
and the rest, all 17 connections, have been in a waiting state for a long
time.

There are many exclusive locks on the table for PID 18732 and other processes
as well. I have attached a pg_locks reference excel (Lock_Reference_For_PROC)
with PID 18732 highlighted.

Could someone please suggest the procedure to troubleshoot this issue.
Please find the attachment for reference.

Thanks,
Postgann.
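A side note on the diagnostics above: on 9.5 specifically, a quick complementary check is the boolean `waiting` column in `pg_stat_activity` (replaced by `wait_event`/`wait_event_type` in 9.6), which lists waiting backends together with how long they have been stuck. Standard catalog columns only, but untested against this cluster:

```sql
-- Sessions currently waiting on a lock, oldest first (PostgreSQL 9.5).
SELECT pid, usename, now() - query_start AS waiting_for, query
FROM pg_stat_activity
WHERE waiting
ORDER BY query_start;
```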


Blocking_PROCS_With_PIDS.csv
Description: MS-Excel spreadsheet


pg_stat_activity_output_for_clarification.xlsx
Description: MS-Excel 2007 spreadsheet


Lock_Reference_For_PROC.xlsx
Description: MS-Excel 2007 spreadsheet