Re: [PERFORM] Improving PostgreSQL insert performance

2017-06-09 Thread John Gorman
You need to be careful with setFetchSize: we have tables with over 10 million 
rows and many columns, and the PostgreSQL JDBC driver silently fails, ignoring 
the fetch size and trying to read the entire table contents into memory. 
I spent many agonizing days on this.

ps.setFetchSize(65536);
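
For what it's worth, as far as I know the driver only switches to cursor-based 
fetching when autocommit is off and the result set is forward-only; a rough 
sketch of that setup (assuming an open java.sql.Connection conn and a made-up 
table big_table):

conn.setAutoCommit(false);                       // cursor-based fetching needs autocommit off
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT * FROM big_table",               // hypothetical table
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY)) {
    ps.setFetchSize(65536);                      // rows fetched per round trip
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process one row at a time
        }
    }
}
conn.commit();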

Regards
John


From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Frits Jalvingh
Sent: Friday, June 09, 2017 7:55 AM
To: Sunkara, Amrutha; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Improving PostgreSQL insert performance

I am not doing anything special, I guess. I am adding the results of the tests 
and the programs I'm using to the following page:

https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests

The copy example, in Java, is at the end. All of the examples use the same 
trivial data. If you find fault, please let me know ;) But the copy does insert 
the records, as can be seen ;)
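
For anyone who does not want to open the page, the core of a JDBC COPY looks 
roughly like this (a sketch only, not the code from that page; it assumes the 
pgjdbc CopyManager API, an open Connection conn, and a made-up table 
perf_test(id int, payload text)):

CopyManager copy = conn.unwrap(org.postgresql.PGConnection.class).getCopyAPI();
StringReader rows = new StringReader("1\tfoo\n2\tbar\n");   // tab-separated, one row per line
long copied = copy.copyIn("COPY perf_test (id, payload) FROM STDIN", rows);
System.out.println(copied + " rows copied");
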
On Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha wrote:
Frits,

When you use the copy command, are you doing anything special to get the run 
time that you are indicating?


Re: [PERFORM] Auto generate number in Postgres-9.1.

2017-03-20 Thread John Gorman
Sequences are stored as a separate object in PostgreSQL.

Here is an example table; you can see that rec_id is a sequence number and 
that the sequence object's name is whiteboards_rec_id_seq.

mydb=> \d whiteboards

 Table "public.whiteboards"
Column |Type |  
Modifiers   
---+-+--
 rec_id| integer | not null default 
nextval('whiteboards_rec_id_seq'::regclass)
 board_name| character varying(24)   | not null
 board_content | text| not null
 updatets  | timestamp without time zone | default now()
Indexes:
"whiteboards_pkey" PRIMARY KEY, btree (rec_id)

Now I can display the whiteboards_rec_id_seq object

mydb=> \dS whiteboards_rec_id_seq
      Sequence "public.whiteboards_rec_id_seq"
    Column     |  Type   |         Value
---------------+---------+------------------------
 sequence_name | name    | whiteboards_rec_id_seq
 last_value    | bigint  | 12
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 31
 is_cycled     | boolean | f
 is_called     | boolean | t
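
On the application side, the value the sequence assigned to rec_id can be read 
back right after the insert; a rough JDBC sketch of my own (assuming an open 
Connection conn):

String sql = "INSERT INTO whiteboards (board_name, board_content) VALUES (?, ?)";
try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
    ps.setString(1, "demo board");
    ps.setString(2, "hello");
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {       // pgjdbc implements this via RETURNING
        if (keys.next()) {
            System.out.println("rec_id = " + keys.getLong(1));   // rec_id is the table's first column
        }
    }
}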

-Original Message-
From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Josh Berkus
Sent: Monday, March 20, 2017 6:43 AM
To: Dinesh Chandra 12108; pgsql-performance-ow...@postgresql.org
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Auto generate number in Postgres-9.1.

Dinesh,

> I have to add one column "ID" in a Postgres table which will generate an
> auto-incremented number.
> 
> Example:
> 
> Suppose I have five records, and if I insert 1 new record it should auto
> generate 6.

https://www.postgresql.org/docs/9.6/static/sql-createsequence.html
also SERIAL on this page:
https://www.postgresql.org/docs/9.6/static/datatype-numeric.html


> 
> If I truncate the same table and then insert again, rows should start
> with 1 in the "ID" column.

That's not how it works, normally.  I'd suggest adding an ON TRUNCATE
trigger to the table.
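
Something along these lines might do it; an untested sketch, assuming a table 
mytable whose "ID" column defaults to nextval('mytable_id_seq'), with the DDL 
run through an open JDBC Connection conn:

try (Statement st = conn.createStatement()) {
    st.execute(
        "CREATE OR REPLACE FUNCTION reset_mytable_id() RETURNS trigger AS $$ " +
        "BEGIN PERFORM setval('mytable_id_seq', 1, false); RETURN NULL; END; " +
        "$$ LANGUAGE plpgsql");
    st.execute(
        "CREATE TRIGGER mytable_on_truncate AFTER TRUNCATE ON mytable " +
        "FOR EACH STATEMENT EXECUTE PROCEDURE reset_mytable_id()");   // next nextval() is 1 again
}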


-- 
Josh Berkus
Containers & Databases Oh My!


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance




Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?

2017-02-27 Thread John Gorman
Even though it's not listed in any of the documentation or in “pg_dump --help”, 
you can check the return code of the process. A return code greater than 0 
(zero) usually indicates a failure:

./bin >pg_dump -U dummy_user  dummy_database; echo $?
1

From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Rick Otten
Sent: Monday, February 27, 2017 3:36 AM
To: Dinesh Chandra 12108
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not 
?

Although it doesn't really tell if the pg_dump was successful (you'll need to 
do a full restore to be sure), I generate an archive list.  If that fails, the 
backup clearly wasn't successful, and if it succeeds, odds are pretty good that 
it worked:

-- bash code snippet --
archiveList=`pg_restore -l ${backupFolder}`
if [[ ! ${archiveList} =~ "Archive created at" ]]
then
    echo "PostgreSQL backup - Archive List Test Failed for ${hostName}:${dbName}"
    echo "Archive listing:"
    echo ${archiveList}
    exit 1
fi
---



On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108 wrote:
Hi,

We are taking a daily full backup of the PostgreSQL database using PG_DUMP, 
which is automatically scheduled through cron jobs.

How can I check whether yesterday's backup was successful or not?
Is there any query or view by which I can check it?

Regards,
Dinesh Chandra
|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.
--
Mobile: +91-9953975849 | Ext 1078 
|dinesh.chan...@cyient.com
Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.







Re: [PERFORM] Correct use of cursors for very large result sets in Postgres

2017-02-23 Thread John Gorman
Yes, of course; that’s all verified and taken into account during code 
initialization.


From: Vitalii Tymchyshyn [mailto:v...@tym.im]
Sent: Wednesday, February 22, 2017 8:14 PM
To: John Gorman; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in 
Postgres

For JDBC there are certain prerequisites for setFetchSize to work, e.g. using 
forward-only result sets and transactions.

On Tue, Feb 21, 2017 at 09:06 John Gorman <jgor...@eldocomp.com> wrote:
My experience with cursors in PostgreSQL with Java has been to stay away from 
them. We support 2 databases with our product, PostgreSQL (default) and SQL 
Server. While re-encrypting data in a database the application used cursors 
with a fetch size of 1000.

Worked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL 
table with more than 11 million rows. After spending weeks trying to figure out 
what was happening, I realized that when it got to a table with more than 10 
million rows, for some reason the cursor functionality just silently stopped 
working and it was reading the entire table. I asked another very senior 
architect to look at it and he came to the same conclusion. Because of limited 
time, I ended up working around it using limit/offset.

Again we are using Java, so the problem could just be in the PostgreSQL JDBC 
driver. Also we were on 9.1 at the time.

Regards
John

From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mike Beaton
Sent: Tuesday, February 21, 2017 6:49 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in 
Postgres

Thanks, Tom.

Wouldn't this mean that cursors are noticeably non-optimal even for normal data 
sizes, since the entire data to be streamed from the table is always duplicated 
into another buffer and then streamed?

> if you want the whole query result at once, why are you bothering with a 
> cursor?

The PostgreSQL docs 
(https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) 
clearly recommend cursors as a way to return a reference to a large result set 
from a function (as I understood, this is recommended precisely as a way to 
avoid tuple-based buffering of the data).

So following that advice, it's not unreasonable that I would actually have a 
cursor to a large dataset.

Then, I would ideally want to be able to fetch the data from that cursor 
without the entire data getting duplicated (even if only a bit at a time 
instead of all at once, which seems to be the best case behaviour) as I go.

Additionally, I thought that if I had a streaming use-case (which I do), and a 
streaming data-access layer (which I do), then since `SELECT * FROM large` is 
absolutely fine, end-to-end, in that situation, then by symmetry and the 
principle of least astonishment `FETCH ALL FROM cursor` might be fine too.
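
For context, consuming such a refcursor from a client looks roughly like this 
(a Java/JDBC sketch, for illustration only; it assumes a hypothetical function 
get_large_results() RETURNS refcursor, an open Connection conn, and 
java.sql.Types; the cursor only lives inside a transaction, so autocommit must 
be off):

conn.setAutoCommit(false);                     // the refcursor only survives inside the transaction
try (CallableStatement cs = conn.prepareCall("{ ? = call get_large_results() }")) {
    cs.registerOutParameter(1, Types.OTHER);
    cs.execute();
    try (ResultSet rs = (ResultSet) cs.getObject(1)) {   // pgjdbc maps the refcursor to a ResultSet
        while (rs.next()) {
            // stream rows here
        }
    }
}
conn.commit();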



Re: [PERFORM] Correct use of cursors for very large result sets in Postgres

2017-02-21 Thread John Gorman
My experience with cursors in PostgreSQL with Java has been to stay away from 
them. We support 2 databases with our product, PostgreSQL (default) and SQL 
Server. While re-encrypting data in a database the application used cursors 
with a fetch size of 1000.

Worked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL 
table with more than 11 million rows. After spending weeks trying to figure out 
what was happening, I realized that when it got to a table with more than 10 
million rows, for some reason the cursor functionality just silently stopped 
working and it was reading the entire table. I asked another very senior 
architect to look at it and he came to the same conclusion. Because of limited 
time, I ended up working around it using limit/offset.

Again we are using Java, so the problem could just be in the PostgreSQL JDBC 
driver. Also we were on 9.1 at the time.

Regards
John

From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Mike Beaton
Sent: Tuesday, February 21, 2017 6:49 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets in 
Postgres

Thanks, Tom.

Wouldn't this mean that cursors are noticeably non-optimal even for normal data 
sizes, since the entire data to be streamed from the table is always duplicated 
into another buffer and then streamed?

> if you want the whole query result at once, why are you bothering with a 
> cursor?

The PostgreSQL docs 
(https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) 
clearly recommend cursors as a way to return a reference to a large result set 
from a function (as I understood, this is recommended precisely as a way to 
avoid tuple-based buffering of the data).

So following that advice, it's not unreasonable that I would actually have a 
cursor to a large dataset.

Then, I would ideally want to be able to fetch the data from that cursor 
without the entire data getting duplicated (even if only a bit at a time 
instead of all at once, which seems to be the best case behaviour) as I go.

Additionally, I thought that if I had a streaming use-case (which I do), and a 
streaming data-access layer (which I do), then since `SELECT * FROM large` is 
absolutely fine, end-to-end, in that situation, then by symmetry and the 
principle of least astonishment `FETCH ALL FROM cursor` might be fine too.



Re: [PERFORM] Index not used

2016-06-16 Thread John Gorman
When you run psql, are you running it on the application server or the 
database server? Does the application run on the same server as the database, 
and how is the application connecting to the database (JDBC, ODBC, etc.)?

In other words, is there a difference in network time between the two?

Also, the queries are not exactly the same. With psql you use "select *", while 
the application specifies which columns it wants returned and the order to 
return them. Try running the exact query on both.
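
For example, something along these lines (a rough JDBC sketch, assuming an open 
Connection conn; the column list is taken from the push_topic definition quoted 
below, and the ORM may bind the guid parameter differently) runs the 
application's query with the same bound value and times it from the client side:

String sql = "SELECT id, guid, authenticatorsending, authenticatorsubscription, "
           + "countpushed, datecreated, datelastpush "
           + "FROM push_topic WHERE guid = ?";
long start = System.nanoTime();
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5");   // adjust the parameter type if the driver complains
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) { /* drain the rows */ }
    }
}
System.out.printf("query took %.1f ms%n", (System.nanoTime() - start) / 1e6);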

Regards
John
 
-Original Message-
From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of 
meike.talb...@women-at-work.org
Sent: Thursday, June 16, 2016 12:59 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Index not used

Hello,
 
I've a basic table with about 100K rows:
 


CREATE TABLE "public"."push_topic" (
 "id" Serial PRIMARY KEY,
 "guid" public.push_guid NOT NULL,
 "authenticatorsending" Varchar(32) NOT NULL,
 "authenticatorsubscription" Varchar(32) NOT NULL,
 "countpushed" Integer NOT NULL,
 "datecreated" timestamp NOT NULL,
 "datelastpush" timestamp
)
CREATE UNIQUE INDEX push_topic_idx_topicguid ON push_topic
  USING btree (guid)


 
When I query this through pgsql, the queries are fast as expected.

This is the query:

select * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'

And the plan:



Index Scan using push_topic_idx_topicguid on push_topic  (cost=0.42..8.44 
rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)
  Index Cond: ((guid)::bpchar = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)
  Buffers: shared hit=3 read=1
Total runtime: 0.191 ms



However, when I run the exact query through a different application 
(CodeSynthesis ORM), the query is very slow (~115 ms logged). I noted this is 
due to a sequential scan happening on the table instead of an index scan.

This is query plan in the log file:



LOG:  plan:
DETAIL: {PLANNEDSTMT 
   :commandType 1 
   :queryId 0 
   :hasReturning false 
   :hasModifyingCTE false 
   :canSetTag true 
   :transientPlan false 
   :planTree 
  {SEQSCAN 
  :startup_cost 0.00 
  :total_cost 2877.58 
  :plan_rows 429 
  :plan_width 103 
  :targetlist (
 {TARGETENTRY 
 :expr 
{VAR 
:varno 1 
:varattno 1 
:vartype 23 
:vartypmod -1 
:varcollid 0 
:varlevelsup 0 
:varnoold 1 
:varoattno 1 
:location 7
}
 :resno 1 
 :resname id 
 :ressortgroupref 0 
 :resorigtbl 16393 
 :resorigcol 1 
 :resjunk false
 }
 {TARGETENTRY 
 :expr 
{VAR 
:varno 1 
:varattno 2 
:vartype 16385 
:vartypmod -1 
:varcollid 100 
:varlevelsup 0 
:varnoold 1 
:varoattno 2 
:location 26
}
 :resno 2 
 :resname guid 
 :ressortgroupref 0 
 :resorigtbl 16393 
 :resorigcol 2 
 :resjunk false
 }
 {TARGETENTRY 
 :expr 
{VAR 
:varno 1 
:varattno 3 
:vartype 1043 
:vartypmod 36 
:varcollid 100 
:varlevelsup 0 
:varnoold 1 
:varoattno 3 
:location 47
}
 :resno 3 
 :resname authenticatorsending 
 :ressortgroupref 0 
 :resorigtbl 16393 
 :resorigcol 3 
 :resjunk false
 }
 {TARGETENTRY 
 :expr 
{VAR 
:varno 1 
:varattno 4 
:vartype 1043 
:vartypmod 36 
:varcollid 100 
:varlevelsup 0 
:varnoold 1 
:varoattno 4 
:location 84
}
 :resno 4 
 :resname authenticatorsubscription 
 :ressortgroupref 0 
 :resorigtbl 16393 
 :resorigcol 4 
 :resjunk false
 }
 {TARGETENTRY 
 :expr 
{VAR 
:varno 1 
  

Re: [PERFORM] Database transaction with intermittent slow responses

2016-05-16 Thread John Gorman
Hi Gerardo and Robert,

We turned DEBUG on in the application logs, and this weekend I was going to 
write a multi-threaded test program to reproduce the issue in a stand-alone 
program. When I got the debug logs and stripped out everything except the 
database queries for the ChangeHistory table, I found some other unexpected 
transactions which were hitting the ChangeHistory table fairly hard and in 
rapid succession.

At this point development is looking into it, and we believe this application 
transaction is the source of the issue, which is why I have not responded to 
your emails.

I should know more after tonight.

Thanks again for your feedback and responses

Regards
John

-Original Message-
From: Gerardo Herzig [mailto:gher...@fmed.uba.ar] 
Sent: Friday, May 13, 2016 4:11 PM
To: John Gorman
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database transaction with intermittent slow responses

Oh, so *all* the transactions are being slowed down at that point... What about 
CPU I/O wait% at that moment? Could some other processes be stressing the 
system out?

Now I'm thinking about hard disk issues... maybe some "smart" messages?

Do you have some other hardware to give it a try?

Gerardo

----- Original Message -----
> From: "John Gorman" <jgor...@eldocomp.com>
> To: "Gerardo Herzig" <gher...@fmed.uba.ar>
> CC: pgsql-performance@postgresql.org, "John Gorman" <jgor...@eldocomp.com>
> Sent: Friday, May 13, 2016 18:25:37
> Subject: RE: [PERFORM] Database transaction with intermittent slow responses
> 
> Hi Gerardo,
> 
> Thanks for the quick response. We do not appear to be hitting a connection
> limit; since our application is the only thing talking to the
> database, the connections are somewhat limited. We are using about
> 126 of a max allowed 350 connections. We keep these metrics in a
> different database, and we also generate alerts if we get close to
> the catalog/cluster limit.
> 
> Also I have been monitoring heavily and watching for locks while the
> transaction runs for a long time. While I see occasional locks, they
> are on other tables and are brief, so I do not believe there is a
> database lock issue/contention.
> 
> The application is timing the transaction out. When we detect that
> the timeout limit has occurred, we cancel the database connection
> (conn.cancel();) - we have been doing this for several years with no
> issue.
> 
> I set up an ad hoc monitor which runs every 2 seconds, executes
> "select * from pg_stat_activity where datname = 'p306' and
> current_query not like '<IDLE%';" and then writes the output to a log.
> I can see the transaction being executed in the database for over 50
> seconds, so I do believe the database actually is working on it.
> 
> We have a few monitoring programs that track and record quite a few
> things, including database locks (number and type), connections
> (number and where). I have reviewed the history and do not see any
> trends.
> 
> If it helps, here is a monitor snippet of the transaction taking over
> 50 seconds (SELECT * FROM ChangeHistory):
> 
> 
> >> Wed May 11 07:50:09 MST 2016
>  3709009 | p306|5644 |16387 | p306|
>   | 172.20.0.82 | coreb   |   59871 |
>  2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |
>  2016-05-11 07:50:09.396161-07 | f   | SELECT * FROM
>  ChangeHistory WHERE Category BETWEEN $1 AND $2 AND
>  PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser
>  DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11
> 
> >> Wed May 11 07:50:11 MST 2016
>  3709009 | p306|   15014 |16387 | p306|
>   | 172.20.0.82 | coreb   |   35859
>  | 2016-05-11 07:31:31.968087-07 | 2016-05-11 07:50:11.575881-07 |
>  2016-05-11 07:50:11.766942-07 | f   | SELECT * FROM Employee
>  WHERE SocialSecurityNumber BETWEEN $1 AND $2 ORDER BY LastName,
>  FirstName, MiddleName, BlkOfBusID, ClientID, CertificateNumber,
>  SocialSecurityNumber, MedicareID, BirthDate, AlternateID1,
>  AlternateID2 LIMIT 11
>  3709009 | p306|5644 |16387 | p306|
>   | 172.20.0.82 | coreb   |   59871
>  | 2016-05-11 07:04:16.503194-07 | 2016-05-11 07:50:09.394202-07 |
>  2016-05-11 07:50:09.396161-07 | f   | SELECT * FROM
>  ChangeHistory WHERE Category BETWEEN $1 AND $2 AND
>  PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser
>  DESC, Category DESC, PrimaryKeyOfChange DESC LIMIT 11
>  3709009 | p306|   17312 |16387 | p306|
>   | 172.20.0.86 | batchb.eldocomp.com |   54263
>  | 2016-05-11 07:38:08.464797-07 | 2016-05-11 07:47:42.982944-07 |
>  2016-05-11 07:50:11.712848

Re: [PERFORM] Database transaction with intermittent slow responses

2016-05-13 Thread John Gorman
1.247388-07 
| 2016-05-11 07:51:02.125906-07 | 2016-05-11 07:51:02.311956-07 | f   | 
SELECT * FROM EmpEligibilityCoverage WHERE EmployeeID = $1 AND EffectiveDate <= 
$2 ORDER BY EmployeeID DESC, EffectiveDate DESC LIMIT 101
 3709009 | p306|   17312 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   54263 | 2016-05-11 07:38:08.464797-07 
| 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:02.23586-07  | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001
 3709009 | p306|   16771 |16387 | p306|  | 
172.20.0.82 | coreb   |   37470 | 2016-05-11 07:36:18.535139-07 
| 2016-05-11 07:51:02.16-07 | 2016-05-11 07:51:02.295888-07 | f   | 
DELETE FROM ToothChartMaintenance WHERE Claimnumber = $1
 3709009 | p306|8671 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   55055 | 2016-05-11 07:17:52.292909-07 
| 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:02.235869-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001

>> Wed May 11 07:51:04 MST 2016
 3709009 | p306|5644 |16387 | p306|  | 
172.20.0.82 | coreb   |   59871 | 2016-05-11 07:04:16.503194-07 
| 2016-05-11 07:50:09.394202-07 | 2016-05-11 07:50:09.396161-07 | f   | 
SELECT * FROM ChangeHistory WHERE Category BETWEEN $1 AND $2 AND 
PrimaryKeyOfChange BETWEEN $3 AND $4 ORDER BY ChgTS DESC, ChgUser DESC, 
Category DESC, PrimaryKeyOfChange DESC LIMIT 11
 3709009 | p306|   17312 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   54263 | 2016-05-11 07:38:08.464797-07 
| 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:04.277287-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001
 3709009 | p306|8671 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   55055 | 2016-05-11 07:17:52.292909-07 
| 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:04.277543-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001

>> Wed May 11 07:51:06 MST 2016
 3709009 | p306|   17312 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   54263 | 2016-05-11 07:38:08.464797-07 
| 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:06.313649-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001
 3709009 | p306|8671 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   55055 | 2016-05-11 07:17:52.292909-07 
| 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:06.313855-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001

>> Wed May 11 07:51:08 MST 2016
 3709009 | p306|   22530 |16387 | p306|  | 
172.20.0.82 | coreb   |   42494 | 2016-05-11 07:51:04.419169-07 
| 2016-05-11 07:51:08.351721-07 | 2016-05-11 07:51:08.373929-07 | f   | 
SELECT * FROM ChangeHistory WHERE Category = $1 AND PrimaryKeyOfChange = $2 
ORDER BY Category, PrimaryKeyOfChange, ChgTS, ExcludedKeyFields LIMIT 2001
 3709009 | p306|   17312 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   54263 | 2016-05-11 07:38:08.464797-07 
| 2016-05-11 07:47:42.982944-07 | 2016-05-11 07:51:08.335854-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001
 3709009 | p306|8671 |16387 | p306|  | 
172.20.0.86 | batchb.eldocomp.com |   55055 | 2016-05-11 07:17:52.292909-07 
| 2016-05-11 07:40:50.525528-07 | 2016-05-11 07:51:08.359281-07 | f   | 
SELECT * FROM Employee WHERE EmployeeID BETWEEN $1 AND $2 ORDER BY EmployeeID 
LIMIT 1001

Regards
John

-Original Message-
From: Gerardo Herzig [mailto:gher...@fmed.uba.ar] 
Sent: Friday, May 13, 2016 2:05 PM
To: John Gorman
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Database transaction with intermittent slow responses

After a quick reading, I'm thinking about a couple of possibilities:

1) You are hitting a connection_limit
2) You are hitting lock contention (perhaps some other backend is locking the 
table and not releasing it)

Who throws the timeout? Is it Postgres or your JDBC connector?

My initial blind guess is that your "timed out queries" never get to Postgres at 
all, and are blocked prior to that by some other issue. If I'm wrong, well, you 
should at least have the timeout recorded in your logs.

You should also track #of_connections and #of_locks on those tables.

See http://www.postgresql.org/docs/9.1/static/view-pg-locks.html for pg_locks 
information.

That would be my starting point for viewing what's going on.
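
For example, something like this (a rough JDBC sketch, assuming an open 
Connection conn) samples the lock modes currently held on changehistory; 
pg_locks.relation joins to pg_class.oid:

String sql = "SELECT l.mode, count(*) " +
             "FROM pg_locks l JOIN pg_class c ON c.oid = l.relation " +
             "WHERE c.relname = 'changehistory' GROUP BY l.mode";
try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
    while (rs.next()) {
        System.out.println(rs.getString(1) + ": " + rs.getLong(2));   // e.g. AccessShareLock: 3
    }
}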

HTH
Gerardo


[PERFORM] Database transaction with intermittent slow responses

2016-05-13 Thread John Gorman
 Schema |         Name          | Type  | Owner |     Table     |  Size   | Description
--------+-----------------------+-------+-------+---------------+---------+-------------
 public | changehistory_idx1| index | p306  | changehistory | 9597 MB |
 public | changehistory_idx3| index | p306  | changehistory | 11 GB   |
 public | changehistory_idx4| index | p306  | changehistory | 4973 MB |
 public | changehistory_pkey| index | p306  | changehistory | 2791 MB |
 public | changehistory_search2 | index | p306  | changehistory | 9888 MB |
 public | changehistory_search3 | index | p306  | changehistory | 10 GB   |
 public | changehistory_search4 | index | p306  | changehistory | 9240 MB |
 public | changehistory_search5 | index | p306  | changehistory | 8373 MB |
(8 rows)


>>>
p306=> select count(*) from changehistory ;
    count
-------------
 129,185,024

>>>
Show all (filtered)
==

  name   | setting
-+
 autovacuum  | on
 autovacuum_analyze_scale_factor | 0.001
 autovacuum_analyze_threshold| 500
 autovacuum_freeze_max_age   | 2
 autovacuum_max_workers  | 5
 autovacuum_naptime  | 1min
 autovacuum_vacuum_cost_delay| 0
 autovacuum_vacuum_cost_limit| -1
 autovacuum_vacuum_scale_factor  | 0.001
 autovacuum_vacuum_threshold | 500
 bgwriter_delay  | 200ms
 block_size  | 8192
 check_function_bodies   | on
 checkpoint_completion_target| 0.9
 checkpoint_segments | 256
 checkpoint_timeout  | 1h
 checkpoint_warning  | 30s
 client_encoding | UTF8
 commit_delay| 0
 commit_siblings | 5
 cpu_index_tuple_cost| 0.005
 cpu_operator_cost   | 0.0025
 cpu_tuple_cost  | 0.01
 cursor_tuple_fraction   | 0.1
 deadlock_timeout| 1s
 default_statistics_target   | 100
 default_transaction_deferrable  | off
 default_transaction_isolation   | read committed
 default_transaction_read_only   | off
 default_with_oids   | off
 effective_cache_size| 8GB
 from_collapse_limit | 8
 fsync   | on
 full_page_writes| on
 ignore_system_indexes   | off
 join_collapse_limit | 8
 krb_caseins_users   | off
 lo_compat_privileges| off
 maintenance_work_mem| 1GB
 max_connections | 350
 max_files_per_process   | 1000
 max_function_args   | 100
 max_identifier_length   | 63
 max_index_keys  | 32
 max_locks_per_transaction   | 64
 max_pred_locks_per_transaction  | 64
 max_prepared_transactions   | 0
 max_stack_depth | 2MB
 max_wal_senders | 5
 random_page_cost| 4
 segment_size| 1GB
 seq_page_cost   | 1
 server_encoding | UTF8
 server_version  | 9.1.14
 shared_buffers  | 2GB
 sql_inheritance | on
 statement_timeout   | 0
 synchronize_seqscans| on
 synchronous_commit  | on
 synchronous_standby_names   |
 tcp_keepalives_count| 0
 tcp_keepalives_idle | -1
 tcp_keepalives_interval | 0
 track_activities| on
 track_activity_query_size   | 1024
 track_counts| on
 track_functions | none
 transaction_deferrable  | off
 transaction_isolation   | read committed
 transaction_read_only   | off
 transform_null_equals   | off
 update_process_title| on
 vacuum_cost_delay   | 0
 vacuum_cost_limit   | 200
 vacuum_cost_page_dirty  | 20
 vacuum_cost_page_hit| 1
 vacuum_cost_page_miss   | 10
 vacuum_defer_cleanup_age    | 0
 vacuum_freeze_min_age   | 5000
 vacuum_freeze_table_age | 15000

John Gorman | Manager of Production Support, Architecture, Release Engineering 
| Eldorado | a Division of MPHASIS | www.eldoinc.com/ |
5353 North 16th Street, Suite 400, Phoenix, Arizona 85016-3228 | Tel 
602.604.3100 | Fax: 602.604.3115