Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-23 Thread pavan95
Hi Matthew,

Yes, you are right. I have analyzed the entire database and created
appropriate indexes on the columns used in the WHERE/JOIN clauses.

Okay, I will just provide the fourth union part of the query, which should be
easier to analyze (it is not that big).

Please find that part of the query below, and refer to the table definitions
in my previous posts.
Query:

select
    0 as id,
    header.id as header_id,
    '0' as emp_id,
    0 as sno,
    users.alias_id as alias,
    partner.name as name,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where work_order_no != 'CORPORATE'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select end_date from tms_timesheet_period where id=127),
                                            '1 day'::interval) date)) as billed_hrs,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where release_no = 'unbillable_time'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select end_date from tms_timesheet_period where id=127),
                                            '1 day'::interval) date)) as unbilled_hrs,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where release_no = 'paid_time_off'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select end_date from tms_timesheet_period where id=127),
                                            '1 day'::interval) date)) as paid_time_off,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where release_no = 'unpaid_leave'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select end_date from tms_timesheet_period where id=127),
                                            '1 day'::interval) date)) as unpaid_leave,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where release_no = 'bereavement_time'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select end_date from tms_timesheet_period where id=127),
                                            '1 day'::interval) date)) as breavement_time,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where res_employee_id = users.res_employee_id
        and date >= '2018-04-16' and date <= '2018-04-30'
        and release_no in ('sick_leave','casual_leave','privilege_leave','optional_holiday')) as leave,
    'Not Submitted' as state,
    header.res_employee_id as header_emp_id,
    'Not Submitted' as header_status

from res_users users,
     res_partner partner,
     tms_timesheet_status status,
     tms_timesheet_header header
where header.res_employee_id = users.res_employee_id
  and status.id = header.status_id
  and users.partner_id = partner.id
  and status.name = 'Draft'
  and header.timesheet_period_id = 127
  and header.res_employee_id in ('14145', '14147',
'ON-14148', '11331', '11332', '11333', 'ON-11334', '65432', '65416',
'54643', '23266', '4681', '56464', '64649', '89564', '98798', '1',
'44466', '87852', '65464', '65464', '44655', '8201', '65465', 'ON-78785',
'13233', 'ON-5544', 'ON-54654', '23131', '98765', '25134', '13218', '84645',
'4687', '6546', '4988', '89796', '79878', '7198', '15726', '2132', '5310',
'13056', '4446', '16825', '16740', '3912', '19601', '13200', '12981',
'ON-3332', '13166', 'ON-3144', 'ON-1251', 'ON-2799', 'ON-2338', '7286',
'ON-2381', 'ON-3102', 'ON-2938', '64782', '5407', '54641', '46379',
'G151151', '5007', '6011', '5050', '20869', '20204', '12410', '10488',
'14582', '13574', '12982', '7884', '7788', '13417', '7922', '16744',
'16746', '16756', '8292', '16745', '19989', '8297', '5020', '14184',
'17161', '20767', '20753', '20289', '19979', '19975', '20272', '4292',
'G9341010', '14791', '5121', 'ON-1767', 'ON-581', 'ON-700', 'ON-437',
'ON-562', 'ON-1726', 'OFF-1060', 'ON-147', 'OFF-612', 'OFF-635', 'OFF-857',
'ON-900280', 'ON-1934', 'ON-1922', 'ON-2258', 'OFF-2537', 'ON-2872',
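
A sketch of one way to cut the repeated work in this branch (not advice
already given in the thread): the five correlated subqueries each rescan
tms_timesheet_details once per output row, and the date IN
(... generate_series ...) test can become a plain range check because the
generated series is contiguous. One grouped pass computes all the buckets at
once; the CASE form below works on pre-9.4 servers, while SUM(...) FILTER
(WHERE ...) is the 9.4+ spelling:

select res_employee_id,
       SUM(case when work_order_no != 'CORPORATE'
                then floor(effort_hours) + (effort_hours - floor(effort_hours)) / 0.6
           end) as billed_hrs,
       SUM(case when release_no = 'unbillable_time'
                then floor(effort_hours) + (effort_hours - floor(effort_hours)) / 0.6
           end) as unbilled_hrs,
       -- paid_time_off, unpaid_leave and bereavement_time follow the same pattern
       SUM(case when release_no = 'paid_time_off'
                then floor(effort_hours) + (effort_hours - floor(effort_hours)) / 0.6
           end) as paid_time_off
from tms_timesheet_details
where date between (select start_date from tms_timesheet_period where id = 127)
               and (select end_date   from tms_timesheet_period where id = 127)
group by res_employee_id;

Joining this derived table once against header/users/partner would replace
five index probes per employee with a single scan of the period's rows.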

Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-22 Thread pavan95
Hi Justin,

Please find the output of explain (analyze, buffers) for the whole query at
the link below.

Link:  https://explain.depesz.com/s/dNkb   

Thanks in Advance!


Regards,
Pavan






Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-22 Thread pavan95
Hi all,
Thank you so much for your valuable responses. I tried everything you
suggested for my sub-query.
I had hoped for a bigger decrease in the cost of my main query; the cost did
drop, but not to a great extent.
So I think it is best to provide the main query along with the definitions of
the tables it uses. Please help me tune the following big query.
select res.id id,
  row_number() OVER () as sno,
   res.header_id,
   res.emp_id,
   res.alias alias,
   res.name as name,
   res.billed_hrs billed_hrs,
   res.unbilled_hrs unbilled_hrs,
   res.paid_time_off paid_time_off,
   res.unpaid_leave unpaid_leave,
   res.breavement_time breavement_time,
   res.leave leave,
   res.state,
   count(*) OVER() AS full_count,
   res.header_emp_id,
   res.header_status
 from (
select
    history.id as id,
    0 as header_id,
    '0' as emp_id,
    row_number() OVER () as sno,
    user1.alias_id as alias,
    partner.name as name,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and work_order_no != 'CORPORATE') as billed_hrs,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and release_no = 'unbillable_time') as unbilled_hrs,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and release_no = 'paid_time_off') as paid_time_off,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and release_no = 'unpaid_leave') as unpaid_leave,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and release_no = 'bereavement_time') as breavement_time,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where timesheet_header_id = header.id
        and date >= '2018-04-16' and date <= '2018-04-30'
        and release_no in ('sick_leave','casual_leave','privilege_leave','optional_holiday')) as leave,
    (case when tl_status.state = '' then 'Waiting for approval'
          else tl_status.state end) as state,
    header.res_employee_id as header_emp_id,
    status.name as header_status
from tms_workflow_history history,
     res_users users,
     res_users user1,
     res_partner partner,
     tms_timesheet_status status,
     tms_timesheet_header header
     left join tms_workflow_history tl_status
            on tl_status.timesheet_id = header.id
           and tl_status.active = True
           and tl_status.group_id = 13
where history.timesheet_id = header.id
  and header.res_employee_id = user1.res_employee_id
  and status.id = header.status_id
  and history.user_id = users.id
  and user1.partner_id = partner.id
  and header.timesheet_period_id = 127
  and history.state = 'Approved'
  and history.current_activity = 'N'
  and history.is_final_approver = True
  and history.active = True
union
select
    0 as id,
    header.id as header_id,
    '0' as emp_id,
    0 as sno,
    users.alias_id as alias,
    partner.name as name,
    (select SUM(floor(effort_hours) + ((effort_hours - floor(effort_hours)) / 0.6))
       from tms_timesheet_details
      where work_order_no != 'CORPORATE'
        and res_employee_id = users.res_employee_id
        and date in (select date::date
                       from generate_series((select start_date from tms_timesheet_period where id=127),
                                            (select 

Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread pavan95
Hi David,

Thank you so much for your valuable inputs. Is there anything more I need to
look at from an index or join-order perspective?

Kindly let me know if it can be tuned further.

Thank you very much.
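
One index-side idea to test against the correlated subqueries shown earlier
in this digest (a sketch; whether it wins depends on the data distribution):
those subqueries filter on timesheet_header_id plus release_no, so a
composite index could cover both predicates in one probe instead of the
single-column ts_detail_hdr_id_idx. The index name here is hypothetical:

create index concurrently ts_detail_hdr_release_idx
    on tms_timesheet_details (timesheet_header_id, release_no);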


Regards,
Pavan






Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread pavan95
Hi abbas,

Thank you so much. I was given this query by my development team and asked to
improve its performance.

Now I have a pretty clear idea of it. Is this the furthest extent to which
the performance can be tuned?

If there is still a way, please give me some tips to enhance the query
performance further.

But kudos for your "floor" function. After a long struggle with indexes,
joins and hints, I have learned that query performance can also be tuned by
rewriting the query itself.

Thanks in advance.

Regards,
Pavan






Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread pavan95
Hi Abbas,

Thanks for your valuable suggestions. To my surprise I got the same output as
the query I had executed before.

But unfortunately I am unable to understand the logic of the code,
specifically: what is the base-60 number? The data type of the
"effort_hours" column is 'double precision'.

Kindly help me understand the logic. Thanks in advance.
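
A plausible reading (not spelled out in this digest): effort_hours encodes
minutes in its fractional digits, so 7.30 means 7 hours 30 minutes, and
dividing the fractional part by 0.6 converts that base-60 remainder into
decimal hours:

select v as effort_hours,
       floor(v) + (v - floor(v)) / 0.6 as decimal_hours
from (values (7.15), (7.30), (7.45), (8.00)) t(v);
-- 7.15 -> 7.25, 7.30 -> 7.5, 7.45 -> 7.75, 8.00 -> 8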

Regards,
Pavan






Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread pavan95
Hi mlunon,

Many thanks for your timely response. And yes, it worked when I rewrote the
query.

The rewrite improved the estimated cost by roughly 1000 planner cost units.
You can see it in the explain plan below:

amp_test=# explain select
    sum(
        CASE MOD(cast(effort_hours as decimal), 1)
            WHEN 0.45 THEN cast(effort_hours as int) + 0.75
            WHEN 0.15 THEN cast(effort_hours as int) + 0.25
            WHEN 0.30 THEN cast(effort_hours as int) + 0.5
            WHEN 0    THEN cast(effort_hours as int)
        END
    )
from tms_timesheet_details detail, tms_wsr_header header
where wsr_header_id = header.id and work_order_no != 'CORPORATE';
                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=8813.60..8813.61 rows=1 width=8)
   ->  Hash Join  (cost=608.27..5647.67 rows=70354 width=8)
         Hash Cond: (detail.wsr_header_id = header.id)
         ->  Seq Scan on tms_timesheet_details detail  (cost=0.00..3431.14 rows=72378 width=12)
               Filter: ((work_order_no)::text <> 'CORPORATE'::text)
         ->  Hash  (cost=399.23..399.23 rows=16723 width=4)
               ->  Seq Scan on tms_wsr_header header  (cost=0.00..399.23 rows=16723 width=4)
(7 rows)


But is this optimal? Can we reduce the cost further, at least to around 5000
planner cost units? This is only a sub-part of the query, and it is called
multiple times in the main query.

Sending the main query along with the table definitions and explain plan
would make a very long message, so I have sent you just this sub-part.

Please help me tune it further. Thanks in advance.
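
A sketch of one further rewrite (it rests on the assumption that the join to
tms_wsr_header only filters out detail rows without a matching header, which
holds if header.id is the primary key): expressing the join as a semi-join,
here combined with the floor() form used elsewhere in this thread, lets the
planner treat it purely as an existence check:

select sum(floor(effort_hours) + (effort_hours - floor(effort_hours)) / 0.6)
from tms_timesheet_details d
where d.work_order_no != 'CORPORATE'
  and exists (select 1 from tms_wsr_header h where h.id = d.wsr_header_id);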

Regards,
Pavan






Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread pavan95
Hi all,

Hope my mail finds you at a good time. I have a problem with a query which is
seriously hitting production. Below is the sub-part of the query for which I
cannot reduce the CPU cost.

Please check and verify whether I am doing something wrong, or whether that
index type suits it or not.

Kindly help me resolve this issue.

*Query*:

explain select sum(CASE
                       WHEN MOD(cast(effort_hours as decimal), 1) = 0.45 THEN
                           cast(effort_hours as int) + 0.75
                       ELSE CASE
                           WHEN MOD(cast(effort_hours as decimal), 1) = 0.15 THEN
                               cast(effort_hours as int) + 0.25
                           ELSE CASE
                               WHEN MOD(cast(effort_hours as decimal), 1) = 0.30 THEN
                                   cast(effort_hours as int) + 0.5
                               ELSE CASE
                                   WHEN MOD(cast(effort_hours as decimal), 1) = 0 THEN
                                       cast(effort_hours as int)
                                   END
                               END
                           END
                       END)
from tms_timesheet_details, tms_wsr_header header
where wsr_header_id = header.id and work_order_no != 'CORPORATE';

                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Aggregate  (cost=9868.91..9868.92 rows=1 width=8)
   ->  Hash Join  (cost=608.27..5647.67 rows=70354 width=8)
         Hash Cond: (tms_timesheet_details.wsr_header_id = header.id)
         ->  Seq Scan on tms_timesheet_details  (cost=0.00..3431.14 rows=72378 width=12)
               Filter: ((work_order_no)::text <> 'CORPORATE'::text)
         ->  Hash  (cost=399.23..399.23 rows=16723 width=4)
               ->  Seq Scan on tms_wsr_header header  (cost=0.00..399.23 rows=16723 width=4)
(7 rows)
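
For reference, the nested CASE above only handles fractional parts of
exactly .00/.15/.30/.45; under that assumption it is equivalent to the
floor() expression that appears elsewhere in this thread, which a quick
check confirms:

select v as effort_hours,
       CASE MOD(cast(v as decimal), 1)
           WHEN 0.45 THEN cast(v as int) + 0.75
           WHEN 0.15 THEN cast(v as int) + 0.25
           WHEN 0.30 THEN cast(v as int) + 0.5
           WHEN 0    THEN cast(v as int)
       END as case_form,
       floor(v) + (v - floor(v)) / 0.6 as floor_form
from (values (7.00), (7.15), (7.30), (7.45)) t(v);
-- both columns yield 7, 7.25, 7.5, 7.75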


The row counts of the tables used are as follows:

1) tms_timesheet_details:

amp_test=# select count(*) from tms_timesheet_details;
 count
--------
 110411
(1 row)

2) tms_wsr_header:

amp_test=# select count(*) from tms_wsr_header;
 count
-------
 16723
(1 row)


The definitions of the tables and columns used are below:

1) tms_timesheet_details:

amp_test=# \d tms_timesheet_details
                                  Table "public.tms_timesheet_details"
       Column        |            Type             |                     Modifiers
---------------------+-----------------------------+----------------------------------------------------
 id                  | integer                     | not null default nextval('tms_timesheet_details_id_seq'::regclass)
 status              | character varying           |
 create_uid          | integer                     |
 effort_hours        | double precision            |
 work_order_no       | character varying           |
 res_employee_id     | character varying           |
 wsr_header_id       | integer                     |
 remarks             | character varying           |
 write_date          | timestamp without time zone |
 timesheet_header_id | integer                     |
 date                | date                        |
 create_date         | timestamp without time zone |
 write_uid           | integer                     |
 release_no          | character varying           |
 project_id          | character varying           |
 loc_name            | character varying           |
 user_id             | integer                     |
 ao_emp_id           | character varying           |
Indexes:
    "tms_timesheet_details_pkey" PRIMARY KEY, btree (id)
    "tms_timesheet_details_uniq_res_employee_id_efforts" UNIQUE, btree (res_employee_id, work_order_no, release_no, date, project_id)
    "timesheet_detail_inx" btree (wsr_header_id, timesheet_header_id)
    "ts_detail_date_idx" btree (date)
    "ts_detail_hdr_id_idx" btree (timesheet_header_id)
    "ts_detail_release_no_idx" btree (release_no)
    "work_order_no_idx" btree (work_order_no)
Foreign-key constraints:
    "tms_timesheet_details_create_uid_fkey" FOREIGN KEY (create_uid) REFERENCES res_users(id) ON DELETE SET NULL
    "tms_timesheet_details_timesheet_header_id_fkey" FOREIGN KEY (timesheet_header_id) REFERENCES tms_timesheet_header(id) ON DELETE SET NULL
    "tms_timesheet_details_user_id_fkey" FOREIGN KEY (user_id) REFERENCES res_users(id) ON DELETE SET NULL
    "tms_timesheet_details_write_uid_fkey" FOREIGN KEY (write_uid) REFERENCES res_users(id) ON DELETE SET NULL
    "tms_timesheet_details_wsr_header_id_fkey" FOREIGN KEY (wsr_header_id) REFERENCES tms_wsr_header(id) ON DELETE SET NULL


2) tms_wsr_header:

amp_test=# \d tms_wsr_header
 

Re: 8.2 Autovacuum BUG ?

2018-01-31 Thread pavan95
Hi all,

Regarding this archive-log generation, I have one observation.

A table named abc_table was found to be behind the archiving at every 9th
and 39th minute of the hour. We can see the number of tuples deleted from
the pg_stat_user_tables view.

But to my surprise, the number of tuple inserts is shown as 0. How can there
be any deletes without any inserts?

It was found that the table holds 2060 rows, all of which get deleted at
every 9th and 39th minute of an hour. This implies the deleted rows must be
inserted again before each delete operation.

I also performed a vacuum freeze on that table before the 9th minute of an
hour and it generated 36 archive logs; when I tried the same operation after
the 9th minute (say, the 11th minute of the same hour), it generated the
same number of archive logs.

This is possible only if the entire table gets updated/recreated. Now my
final doubt is: why do the tuple inserts in pg_stat_user_tables show 0 when
the corresponding deletes exist?
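
One way to catch the writer (a sketch; it assumes superuser access, and uses
the prod_erp database shown in the session below): temporarily log every
data-modifying statement around the 9th and 39th minute, then search the
server log for whatever touches abc_table.

-- new sessions in this database will log INSERT/UPDATE/DELETE statements
alter database prod_erp set log_statement = 'mod';
-- ... observe one 9th/39th-minute window, then revert:
alter database prod_erp reset log_statement;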

Please find the below outputs FYR.


--Steps performed on production server:--

--1. Found Count Of Rows in Production
--**
prod_erp=# select count(*) from abc_table;
 count
---
  2060
(1 row)

--2. Issued 'Select pg_stat_reset();'

--3. Before Delete Statements (Before JAN 31st 2018 14:09 Hrs)
--

Issued:

select * from pg_stat_user_tables where relname ='abc_table';
-[ RECORD 1 ]-+
relid | 550314
schemaname| public
relname   | abc_table
seq_scan  | 2
seq_tup_read  | 4120
idx_scan  | 0
idx_tup_fetch | 0
n_tup_ins | 0
n_tup_upd | 0
n_tup_del | 0
n_tup_hot_upd | 0
n_live_tup| 0
n_dead_tup| 0
last_vacuum   |
last_autovacuum   |
last_analyze  |
last_autoanalyze  |
vacuum_count  | 0
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 0


--4. After Delete Statements (Before JAN 31st 2018 14:09 Hrs)
--

select * from pg_stat_user_tables where relname ='abc_table';
-[ RECORD 1 ]-+
relid | 550314
schemaname| public
relname   | abc_table
seq_scan  | 3
seq_tup_read  | 6180
idx_scan  | 2060
idx_tup_fetch | 2060
n_tup_ins | 0
n_tup_upd | 0
n_tup_del | 2060
n_tup_hot_upd | 0
n_live_tup| 0
n_dead_tup| 0
last_vacuum   |
last_autovacuum   |
last_analyze  |
last_autoanalyze  |
vacuum_count  | 0
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 0


--5. After Delete Statements (Before JAN 31st 2018 14:39 Hrs)
--  

select * from pg_stat_user_tables where relname ='abc_table';
-[ RECORD 1 ]-+
relid | 550314
schemaname| public
relname   | abc_table
seq_scan  | 4
seq_tup_read  | 8240
idx_scan  | 4120
idx_tup_fetch | 4120
n_tup_ins | 0
n_tup_upd | 0
n_tup_del | 4120
n_tup_hot_upd | 0
n_live_tup| 0
n_dead_tup| 0
last_vacuum   |
last_autovacuum   |
last_analyze  |
last_autoanalyze  |
vacuum_count  | 0
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 0


--6. After Delete Statements (Before JAN 31st 2018 15:09 Hrs)
--  


select * from pg_stat_user_tables where relname ='abc_table';
-[ RECORD 1 ]-+
relid | 550314
schemaname| public
relname   | abc_table
seq_scan  | 5
seq_tup_read  | 10300
idx_scan  | 6180
idx_tup_fetch | 6180
n_tup_ins | 0
n_tup_upd | 0
n_tup_del | 6180
n_tup_hot_upd | 0
n_live_tup| 0
n_dead_tup| 0
last_vacuum   |
last_autovacuum   |
last_analyze  |
last_autoanalyze  |
vacuum_count  | 0
autovacuum_count  | 0
analyze_count | 0
autoanalyze_count | 0



As said above, if we compare the n_tup_del values in steps 4, 5 and 6, it
tells us that the entire table is getting deleted (correct me if I'm wrong),
yet n_tup_ins stays 0.

Regards,
Pavan






Re: 8.2 Autovacuum BUG ?

2018-01-30 Thread pavan95
Hello all,

Will a sudden restart (stop/start) of a Postgres database generate this huge
WAL?

Regards,
Pavan



Re: 8.2 Autovacuum BUG ?

2018-01-24 Thread pavan95
Hi Álvaro Herrera,

Please find the corresponding output:

*1). select name, setting, source, sourcefile, sourceline from pg_settings where name like '%vacuum%';*
-[ RECORD 1 ]
name   | autovacuum
setting| on
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 437
-[ RECORD 2 ]
name   | autovacuum_analyze_scale_factor
setting| 0.1
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 451
-[ RECORD 3 ]
name   | autovacuum_analyze_threshold
setting| 50
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 448
-[ RECORD 4 ]
name   | autovacuum_freeze_max_age
setting| 200000000
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 452
-[ RECORD 5 ]
name   | autovacuum_max_workers
setting| 3
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 443
-[ RECORD 6 ]
name   | autovacuum_naptime
setting| 60
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 445
-[ RECORD 7 ]
name   | autovacuum_vacuum_cost_delay
setting| 20
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 454
-[ RECORD 8 ]
name   | autovacuum_vacuum_cost_limit
setting| -1
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 457
-[ RECORD 9 ]
name   | autovacuum_vacuum_scale_factor
setting| 0.2
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 450
-[ RECORD 10 ]---
name   | autovacuum_vacuum_threshold
setting| 50
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 446
-[ RECORD 11 ]---
name   | log_autovacuum_min_duration
setting| 100
source | configuration file
sourcefile | /etc/postgresql/9.1/main/postgresql.conf
sourceline | 439
-[ RECORD 12 ]---
name   | vacuum_cost_delay
setting| 0
source | default
sourcefile |
sourceline |
-[ RECORD 13 ]---
name   | vacuum_cost_limit
setting| 200
source | default
sourcefile |
sourceline |
-[ RECORD 14 ]---
name   | vacuum_cost_page_dirty
setting| 20
source | default
sourcefile |
sourceline |
-[ RECORD 15 ]---
name   | vacuum_cost_page_hit
setting| 1
source | default
sourcefile |
sourceline |
-[ RECORD 16 ]---
name   | vacuum_cost_page_miss
setting| 10
source | default
sourcefile |
sourceline |
-[ RECORD 17 ]---
name   | vacuum_defer_cleanup_age
setting| 0
source | default
sourcefile |
sourceline |
-[ RECORD 18 ]---
name   | vacuum_freeze_min_age
setting| 50000000
source | default
sourcefile |
sourceline |
-[ RECORD 19 ]---
name   | vacuum_freeze_table_age
setting| 150000000
source | default
sourcefile |
sourceline |


*2). select oid::regclass, reloptions from pg_class where reloptions is not null;*

(No rows)



Thanks in Advance.


Regards,
Pavan







Re: need help on memory allocation

2018-01-23 Thread pavan95
Then we should check whether there are any idle sessions with uncommitted
transactions. Those might be the culprits.
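
A sketch for spotting them (these are the pre-9.2 column names, matching the
9.1 configuration shown elsewhere in this digest; on 9.2+ use pid, state and
query instead):

select procpid, usename, query_start, current_query
from pg_stat_activity
where current_query = '<IDLE> in transaction'
order by query_start;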

Regards,
Pavan






Re: 8.2 Autovacuum BUG ?

2018-01-23 Thread pavan95
Hi Claudio,

We didn't configure any replication for our production server. Which strace
are you talking about?

We made a careful observation: the archive logs are generated only at the
9th and 39th minute of the hour, even when nobody is connecting from the
application (outside business hours). A minimum of 76 files is produced in
these two intervals of an hour. We tried to monitor the DMLs, but they are
the same DMLs as in the past. Any idea?

Thanks in advance.






Re: need help on memory allocation

2018-01-23 Thread pavan95
Hi Rambabu,

If you are only finding idle sessions then of course your database is
perfectly alright, as idle sessions won't consume any memory.

Kindly specify the issue briefly.

Regards,
Pavan






Re: 8.2 Autovacuum BUG ?

2018-01-23 Thread pavan95
Hello,

Is there any way to check how many transactions have happened to date, from
the point the database was created and started accepting transactions?

The reason for this doubt is to find out whether my database has crossed 2
million transactions or not.
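
There is a way to approximate it (a sketch; note that txid_current() itself
consumes one transaction ID per call):

select txid_current();             -- transaction ID counter since initdb
select datname, age(datfrozenxid)  -- distance from wraparound, per database
from pg_database
order by 2 desc;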

Strangely, I had an interesting observation: when I tried a vacuum full, it
generated 1 GB of archive logs per second, and yes, it's true.

So I had a doubt whether this is related to vacuum.

Please help me cope with this.

Regards,
Pavan






Re: 8.2 Autovacuum BUG ?

2018-01-23 Thread pavan95
Hi Andreas,

Yes, I'm facing a problem because of this huge WAL (archive log) generation,
as it is seriously consuming a lot of disk space, close to 50 GB per day,
even though the DMLs don't have that much impact on WAL generation.

Previously the archive-log volume was nearly 2 to 3 GB a day. How it has
grown to 50 GB with the same set of DMLs is my burning doubt.

I just want to know how to stabilize this issue, as checking and deleting
the archive logs on an hourly basis is not a good idea.

Finally, I'm looking at how to reduce this back to normal. Thanks in advance.
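
A sketch for quantifying the rate (pre-10 function names; PostgreSQL 10
renamed these to pg_current_wal_lsn and pg_wal_lsn_diff, and
pg_xlog_location_diff itself needs 9.2+; the LSN below is a placeholder):

select pg_current_xlog_location();  -- note the returned position
-- wait a known interval, then diff against the noted position:
select pg_xlog_location_diff(pg_current_xlog_location(), '2B/4A8E1000');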

Regards,
Pavan  


