Re: Issue in PG start

2021-05-07 Thread David G. Johnston
On Friday, May 7, 2021, sivapostg...@yahoo.com 
wrote:

> Hello,
>
> PG 11.8 in Windows 10 and currently PG 11.11
>
> Yesterday [07th May] morning when we switched on the computer and
> subsequently pgAdmin, we got the following message:
> FATAL: the database system is starting up
>
> I uninstalled PG and installed [v 11.11] PG with the same data directory.
> Fortunately it worked even when we switched off/on the computer for
> lunch.   NO issues yesterday.
>
> But today morning [08th May], we again have the same issue and the same
> message.   The log file contains only the following message:
>
> 2021-05-08 10:26:20.783 IST [7360] LOG:  database system was interrupted;
> last known up at 2021-05-07 18:46:00 IST
>
>
> How/where can I debug this issue and correct it?
>

To see why the server is not completing its startup cycle you need to look
at the server’s log file.  All you’ve shown is a client failing to
establish a connection every ten seconds.

David J.
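[Editor's note: once the server does come up, these settings show where the server writes its own log. This is a sketch; paths vary by installation — the Windows installer typically puts log files under the data directory's log/ or pg_log/ subfolder.]

```sql
-- Where does the server write its log? (run once a connection succeeds)
SHOW data_directory;      -- e.g. C:\Program Files\PostgreSQL\11\data
SHOW logging_collector;   -- 'on' if the server captures its own log files
SHOW log_directory;       -- relative to data_directory unless absolute
```

While the server is still in its startup/recovery phase, that file (not the client-side FATAL messages) is where the reason for a slow or stuck startup will appear.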


Re: Issue in PG start

2021-05-07 Thread sivapostg...@yahoo.com
Hello,
PG 11.8 in Windows 10 and currently PG 11.11
Yesterday [07th May] morning when we switched on the computer and subsequently 
pgAdmin, we got the following message:
FATAL: the database system is starting up

I uninstalled PG and installed [v 11.11] PG with the same data directory.  
Fortunately it worked even when we switched off/on the computer for lunch.   
NO issues yesterday.
But today morning [08th May], we again have the same issue and the same message.   
The log file contains only the following messages:
2021-05-08 10:26:20.783 IST [7360] LOG:  database system was interrupted; last known up at 2021-05-07 18:46:00 IST
2021-05-08 10:28:36.159 IST [7048] FATAL:  the database system is starting up
2021-05-08 10:28:46.199 IST [15240] FATAL:  the database system is starting up
2021-05-08 10:28:56.233 IST [7036] FATAL:  the database system is starting up
2021-05-08 10:29:06.266 IST [5656] FATAL:  the database system is starting up
[... the same FATAL line repeats at roughly 10-second intervals, through 2021-05-08 10:36:27.975 IST ...]

Re: Postgres upgrade 12 - issues with OIDs

2021-05-07 Thread Laurenz Albe
On Sat, 2021-05-08 at 13:37 +1000, Venkata B Nagothi wrote:
> We are thinking of upgrading to PG 11 instead, so that we can avoid doing
> ALTER TABLE ... SET WITHOUT OIDS.
>  Does that make sense? Please advise if there are any gotchas!

It makes sense, but it means that you will have to face the same problem later.
However, for upgrading from v11 with little down time you may be able to use
logical replication.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
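[Editor's note: for reference, the affected tables can be listed from the catalog on the old cluster before running the per-table conversion. A sketch — `big_table` is a placeholder, and note that SET WITHOUT OIDS takes an ACCESS EXCLUSIVE lock and physically rewrites the column away, which explains the long runtime on 1 TB tables.]

```sql
-- List tables still declared WITH OIDS (pg_class.relhasoids exists up to v11)
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relhasoids AND c.relkind = 'r';

-- Then, per table (locks the table exclusively while it runs):
ALTER TABLE big_table SET WITHOUT OIDS;
```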





Re: Postgres upgrade 12 - issues with OIDs

2021-05-07 Thread Venkata B Nagothi
Missed including pgsql-general group.

On Sat, 8 May 2021 at 11:06 am, Venkata B Nagothi  wrote:

>
>
> On Wed, 5 May 2021 at 9:22 am, Bruce Momjian  wrote:
>
>> On Wed, May  5, 2021 at 07:49:29AM +1000, Venkata B Nagothi wrote:
>> > Hi There,
>> >
>> > We are attempting to upgrade our Postgres databases from 9.5 to 12.5 using
>> > pg_upgrade link mode and are facing issues with OIDs.
>> >
>> > ALTER TABLE ... SET WITHOUT OIDS on the larger tables is taking very long
>> > and is locking up the table as well. We do have tables of more than 1 TB
>> > in size.
>> > Is there any way to make this complete faster?  Any suggestions would be
>> > great.
>>
>> Uh, I see this on our code:
>>
>> pg_fatal("Your installation contains tables declared WITH OIDS, which is not\n"
>>          "supported anymore.  Consider removing the oid column using\n"
>>          "ALTER TABLE ... SET WITHOUT OIDS;\n"
>>          "A list of tables with the problem is in the file:\n"
>>          "%s\n\n", output_path);
>>
>> Uh, I don't know of any way to speed that up, though it might be faster
>
>
> That’s a big challenge for us, as we want to achieve this with zero downtime
> on our live database and very minimal downtime on our DR.
>
>>
>
>> if it was done while no one else was accessing the table.  I see this
>> comment in our PG 11 code:
>>
>> /*
>>  * If we dropped the OID column, must adjust pg_class.relhasoids and tell
>>  * Phase 3 to physically get rid of the column.  We formerly left the
>>  * column in place physically, but this caused subtle problems.  See
>>  * http://archives.postgresql.org/pgsql-hackers/2009-02/msg00363.php
>>  */
>
>
> We are thinking of upgrading to PG 11 instead, so that we can avoid doing
> ALTER TABLE ... SET WITHOUT OIDS. Does that make sense? Please advise if
> there are any gotchas!
>
-- 

Regards,

Venkata B N
Database Consultant


Re: trigger impacting insertion of records

2021-05-07 Thread Adrian Klaver

On 5/6/21 11:37 PM, Atul Kumar wrote:

hi,

The data is inserted using a JSON script, which works fine on our
staging server and inserts complete records.

But in production, data insertion is slow and after some insertions it
just aborts somehow.

DB logs are given below:


#PostGreSQL: idle^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: BEGIN
#PostGreSQL: idle in transaction^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: DEALLOCATE pdo_stmt_00d6
#PostGreSQL: INSERT^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  execute pdo_stmt_00d7: insert into
bonzipay.bp_ach_trans_response
(payment_pastransid,payment_status,code,site,accountnumber,amount,bankaccountnumber,accountype,TranId)

  select
payment_pastransid,'SETTLED',code
,site,accountnumber,amount,bankaccountnumber,accountype,TranId from
bonzipay.bp_ach_trans

  where payment_status ='PROCESS' and
accountnumber='USR=647376' and bankaccountnumber='3027469304'

  and amount='6000' and accountype='22'
order by 1 desc limit 1
#PostGreSQL: idle in transaction^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: COMMIT
#PostGreSQL: idle^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: BEGIN
#PostGreSQL: idle in transaction^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: DEALLOCATE pdo_stmt_00d7
#PostGreSQL: UPDATE^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  execute pdo_stmt_00d8: UPDATE
bonzipay.BP_ACH_TRANS SET payment_status='SETTLED' where
payment_pastransid= (select payment_pastransid from
bonzipay.BP_ACH_TRANS where payment_status ='PROCESS' and
accountnumber='USR=647376' and

  amount='6000'and
bankaccountnumber='3027469304' and accountype='22' order by 1 desc
limit 1)






#PostGreSQL: idle in transaction^^2021-05-06 18:06:12
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  unexpected EOF on client connection with an open
transaction



The formatting of the above makes it hard to follow. I am also trying to 
figure out why there are log lines at '2021-05-06
18:00:01' following those at '2021-05-06 18:06:09'.  If I am following 
correctly, it looks like a connection is timing out/being disconnected.


Where is the production server located relative to the process running 
the script, same machine, same local network or a remote network?


Also, to be complete, what Postgres version (though I suspect 9.5)?





Please suggest how to troubleshoot this further.




Regards,
Atul








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Strange behavior of function date_trunc

2021-05-07 Thread Pavel Luzanov

David,


On 06.05.2021 17:28, David G. Johnston wrote:
On Thu, May 6, 2021 at 6:44 AM Tom Lane wrote:


This case is the reason we invented the "stable" attribute to begin
with.  People have since misinterpreted it as authorizing caching of
function results, but that's not what it was intended for.


This is a good paragraph... if something like it gets added to the 
CREATE FUNCTION documentation, misinterpretations are likely to decrease.




I found additional details in the documentation, in particular about 
index scans with stable functions:
https://www.postgresql.org/docs/13/xfunc-volatility.html

A link to this section is on the CREATE FUNCTION page. Maybe 
that's enough.


--
Pavel Luzanov
Postgres Professional: https://postgrespro.com
The Russian Postgres Company



Re: Strange behavior of function date_trunc

2021-05-07 Thread Pavel Luzanov

I will try to summarize what was said before.

We have discussed the details of executing STABLE functions in queries 
of the form:

SELECT * FROM t WHERE col oper stable_func();

* Marking a function STABLE does not guarantee that it will be executed 
only once. If the table is scanned sequentially, the function is 
executed for each row of the query.


* If the table has an index on the col column, the planner can choose to 
scan the index. In this case, the STABLE mark gives the right to 
calculate the function value once and use that value to search the index.


* In the case of a sequential scan, the total cost of the plan includes, 
among other things, the cost of the function multiplied by the number of 
rows.  For user-defined functions, the default cost is 100. It may be 
worth changing this value for a more adequate estimate. Decreasing the 
cost of a function will decrease the cost of a seq scan and vice versa. 
Refining the function cost estimate will enable the planner to make a 
more accurate choice between seq scan and index scan.
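[Editor's note: the cost refinement mentioned above is a one-line DDL change — a sketch, using the `stable_func` placeholder from the examples:]

```sql
-- Tell the planner this function is cheap
-- (the default cost for user-defined functions is 100)
ALTER FUNCTION stable_func() COST 10;
```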


* If seq scan is preferred, you can avoid executing the function 
multiple times by materializing the result of the function.


* There are two ways to materialize the result: a scalar subquery and a CTE.
    SELECT * FROM t WHERE col oper (SELECT stable_func());
    WITH m AS MATERIALIZED (SELECT stable_func() AS f)
    SELECT * FROM t, m WHERE col oper m.f;


* When materializing a function result, the planner has no way to use 
the function value to build the plan. Therefore, it will not be able to 
use the statistics for t.col to select the optimal plan.  The 
generic algorithm will be used.


Thank you very much for sharing.

--
Pavel Luzanov
Postgres Professional: https://postgrespro.com
The Russian Postgres Company






Re: [RPM/CentOS7] Need to disable repo_gpgcheck on pgdg-common when using RPM version 42.0-17.1

2021-05-07 Thread Thomas Boussekey
Hello,

On Fri, 7 May 2021 at 13:55, Devrim Gündüz wrote:

>
> Hi,
>
> On Thu, 2021-05-06 at 16:29 +0200, Thomas Boussekey wrote:
> > >
> > This morning, a new RPM version has been sent to
> > https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/ wit
> > h
> > version ID 42.0-17.1
> > We had to adapt our tooling to comply with this new repository file.
> >
> > We faced the following error on section pgdg-common
> > > Failure talking to yum: failure: repodata/repomd.xml.asc from pgdg-
> > > common
> >
> > Searching on the internet, we managed to work around it by disabling the
> > parameter `repo_gpgcheck` in the pgdg-common section.
> >
> > Hope it can help others,
>
> Actually please put that setting back. There was a sync issue which I
> fixed yesterday. repo_gpgcheck should be 1, per:
>
>
> https://people.planetpostgresql.org/devrim/index.php?/archives/113-CVE-2021-20271-and-PostgreSQL-YUMDNF-repo.html

I saw your blog article yesterday, nice job!

Yesterday, I didn't find the file `repomd.xml.asc` in the folder:
https://download.postgresql.org/pub/repos/yum/srpms/common/redhat/rhel-7-x86_64/repodata/

Now it is available!
I have removed my workaround.

Have a nice week-end,
Thomas

>
>
> Regards,
>
> --
> Devrim Gündüz
> Open Source Solution Architect, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR
>


Re: Optimizing search query with sorting by creation field

2021-05-07 Thread Vijaykumar Jain
What is your baseline expectation?
For what size of table, how long should a given query take, and how much
of the server's resources can be used?


If this seems to be a time-series workload:
Are the rows append-only, and is the insertion order random?

You can create partitions based on time, and sub-partitions on some other
field that can be used as a predicate to exclude scanning other partitions
and speed up row retrieval.
Clustering tables on the timestamp may help reduce sort time if ORDER BY
is always used.

Basically, many of the improvements I would list are implemented by
timescaledb, which is optimized for time-series queries.

Whether or not you want to use timescaledb, its documentation has enough
content to help you plan time-series queries and data retrieval.
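[Editor's note: the time-based partitioning suggested above might look like this with declarative partitioning (PG 10+). A sketch with hypothetical table and column names, not the poster's schema:]

```sql
-- Range-partition by time so queries on recent data can prune old partitions
CREATE TABLE events (
    created  timestamptz NOT NULL,
    payload  text
) PARTITION BY RANGE (created);

-- One partition per month; partitions outside the WHERE range are skipped
CREATE TABLE events_2021_05 PARTITION OF events
    FOR VALUES FROM ('2021-05-01') TO ('2021-06-01');
```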






On Fri, May 7, 2021, 6:12 AM Droid Tools  wrote:

> Hi,
>
> I'm looking for tips on optimizing a search query where someone searches
> for content within a post and wants to sort the results by a timestamp. I
> have a table `posts` with a `created` field (timestamp) and a post_tsv
> column (TSVECTOR). Likewise I have a GIN Index on the `post_tsv` field and
> a separate index on the `created` field.
>
> My initial, naive, attempt was to simply to do something like:
>
> ```
> SELECT * FROM posts WHERE post_tsv @@ websearch_to_tsquery(?) ORDER BY
> created DESC
> ```
>
> However, I didn't realize in this case the `created` index would be
> ignored, which means if there was a large number of posts returned, this
> query would take several seconds to execute. Also the planner would do
> weird things even if the result set was small and still take several
> seconds to execute. Currently I've papered over the problem by issuing a
> subquery and sorting that instead.
>
>  ```
> SELECT * FROM
>   (SELECT * FROM posts WHERE post_tsv @@ websearch_to_tsquery(?) LIMIT
> 10000) q
> ORDER BY created DESC
> ```
>
> In short I execute the search, limit that to 10,000 rows, and then order
> the 10,000 rows that were returned. This worked amazingly for queries that
> returned fewer than 10,000 rows as those queries went from taking several
> seconds to run down to a handful of milliseconds. The problem is for
> queries with more than 10,000 rows you essentially end up with random
> results. I'm still not using the created index, but sorting 10,000 rows in
> memory is relatively fast.
>
> I'm stuck where to go from here. What I would like, since I know I will
> only ever ORDER BY the created field, is to build some index whose
> default ordering is by the created field. GIN, as I understand it, doesn't
> support indexing in this manner (using one of the columns as a sort field).
> Is there anything else I could try?
>
>


Re: [RPM/CentOS7] Need to disable repo_gpgcheck on pgdg-common when using RPM version 42.0-17.1

2021-05-07 Thread Devrim Gündüz

Hi,

On Thu, 2021-05-06 at 16:29 +0200, Thomas Boussekey wrote:
> >
> This morning, a new RPM version has been sent to
> https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/ wit
> h
> version ID 42.0-17.1
> We had to adapt our tooling to comply with this new repository file.
> 
> We faced the following error on section pgdg-common
> > Failure talking to yum: failure: repodata/repomd.xml.asc from pgdg-
> > common
> 
> Searching on the internet, we managed to work around it by disabling the
> parameter `repo_gpgcheck` in the pgdg-common section.
> 
> Hope it can help others,

Actually please put that setting back. There was a sync issue which I
fixed yesterday. repo_gpgcheck should be 1, per:

https://people.planetpostgresql.org/devrim/index.php?/archives/113-CVE-2021-20271-and-PostgreSQL-YUMDNF-repo.html

Regards,

-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: idle_in_transaction_session_timeout

2021-05-07 Thread luis . roberto


----- Original Message -----
> From: "Atul Kumar" 
> To: "pgsql-general" 
> Sent: Friday, 7 May 2021 3:34:44
> Subject: idle_in_transaction_session_timeout

> Hi,

> I have postgres 9.5 version running on my machine.

> When I try to look up the parameter
> idle_in_transaction_session_timeout, it shows me the error below:

> postgres=# show idle_in_transaction_session_timeout;
> ERROR: unrecognized configuration parameter
> "idle_in_transaction_session_timeout"

> I also checked postgresql.conf, but there is no such parameter in that
> file either.

> Please help me to find this parameter.

> Regards,
> Atul

idle_in_transaction_session_timeout first appeared in v9.6 [1].

[1] https://www.postgresql.org/docs/9.6/runtime-config-client.html

Luis R. Weck 
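[Editor's note: for anyone on 9.6 or later, the parameter can be set per session or server-wide — a sketch:]

```sql
-- Abort any session that sits idle inside a transaction for over 5 minutes
SET idle_in_transaction_session_timeout = '5min';

-- Or server-wide, without editing postgresql.conf by hand:
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```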




Re: idle_in_transaction_session_timeout

2021-05-07 Thread Gustavsson Mikael

Hi,

idle_in_transaction_session_timeout arrived in PG 9.6, if I'm not mistaken.

KR
Mikael


From: Atul Kumar 
Sent: 7 May 2021 8:34
To: pgsql-general
Subject: idle_in_transaction_session_timeout

Hi,

I have postgres 9.5 version running on my machine.

When I try to look up the parameter
idle_in_transaction_session_timeout, it shows me the error below:

postgres=# show idle_in_transaction_session_timeout;
ERROR:  unrecognized configuration parameter
"idle_in_transaction_session_timeout"


I also checked postgresql.conf, but there is no such parameter in that file either.

Please help me to find this parameter.




Regards,
Atul




Re: trigger impacting insertion of records

2021-05-07 Thread Atul Kumar
hi,

The data is inserted using a JSON script, which works fine on our
staging server and inserts complete records.

But in production, data insertion is slow and after some insertions it
just aborts somehow.

DB logs are given below:


#PostGreSQL: idle^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: BEGIN
#PostGreSQL: idle in transaction^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: DEALLOCATE pdo_stmt_00d6
#PostGreSQL: INSERT^^2021-05-06 18:06:09
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  execute pdo_stmt_00d7: insert into
bonzipay.bp_ach_trans_response
(payment_pastransid,payment_status,code,site,accountnumber,amount,bankaccountnumber,accountype,TranId)

 select
payment_pastransid,'SETTLED',code
,site,accountnumber,amount,bankaccountnumber,accountype,TranId from
bonzipay.bp_ach_trans

 where payment_status ='PROCESS' and
accountnumber='USR=647376' and bankaccountnumber='3027469304'

 and amount='6000' and accountype='22'
order by 1 desc limit 1
#PostGreSQL: idle in transaction^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: COMMIT
#PostGreSQL: idle^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: BEGIN
#PostGreSQL: idle in transaction^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  statement: DEALLOCATE pdo_stmt_00d7
#PostGreSQL: UPDATE^^2021-05-06 18:06:10
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  execute pdo_stmt_00d8: UPDATE
bonzipay.BP_ACH_TRANS SET payment_status='SETTLED' where
payment_pastransid= (select payment_pastransid from
bonzipay.BP_ACH_TRANS where payment_status ='PROCESS' and
accountnumber='USR=647376' and

 amount='6000'and
bankaccountnumber='3027469304' and accountype='22' order by 1 desc
limit 1)






#PostGreSQL: idle in transaction^^2021-05-06 18:06:12
PDT^^bonzipay^^usmliv1^^127.0.0.1(38435)^^60949111.16a5^^2021-05-06
18:00:01 PDTLOG:  unexpected EOF on client connection with an open
transaction




Please suggest how to troubleshoot this further.




Regards,
Atul





On 5/6/21, Adrian Klaver  wrote:
> On 5/6/21 12:45 AM, Atul Kumar wrote:
>> Hi,
>>
>> I have simple table having structure like given below:
>>
>> \d bp_ach_trans
>>Table "bonzipay.bp_ach_trans"
>> Column   |  Type  |
>>  Modifiers
>> ++---
>> bptransid  | integer| not null default
>> nextval('bp_ach_trans_bptransid_seq1'::regclass)
>>
>> filename   | character varying(50)  |
>>   payment_status | character varying(30)  |
>>   settledate | character varying(15)  |
>>   payment_pastransid | bigint |
>>   tname  | character varying(250) |
>>   code   | character varying(5)   |
>>   error_txt  | character varying(200) |
>>   routingnumber  | character varying(15)  |
>>   tracenumber| character varying(10)  |
>>   accountnumber  | character varying(15)  |
>>   bankaccountnumber  | character varying(17)  |
>>   type   | character varying(1)   |
>>   amount | numeric|
>>   site   | character varying(30)  |
>>   accountype | character varying(2)   |
>>   tranid | character varying(15)  |
>>
>> Triggers:
>>  ins_ussf_rec AFTER INSERT ON bp_ach_trans FOR EACH ROW EXECUTE
>> PROCEDURE ussf_accountnumber_update()
>>
>>
>>
>> the function definition is like below:
>>
>> CREATE OR REPLACE FUNCTION bonzipay.ussf_accountnumber_update()
>>   RETURNS trigger
>>   LANGUAGE plpgsql
>> AS $function$
>> BEGIN
>>   UPDATE bonzipay.bp_ach_trans
>>     SET accountnumber = replace(accountnumber, '_', ' ')
>>   WHERE left(accountnumber, 3) = 'US_';
>>   RETURN NEW;
>> END;
>> $function$
>>
>>
>> my query is:
>>
>> When I insert around 1000 records into the table, with accountnumber
>> values that do not start with 'US_', only about 300 records get
>> inserted; the remaining ~700 are not inserted.
>
> How are you determining this?
>
> How are you doing the 
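[Editor's note: one thing worth checking — the quoted trigger runs an UPDATE over the whole table for every inserted row, which gets slower as the table grows. If the intent is only to fix the row being inserted, a BEFORE trigger that rewrites NEW avoids the table-wide UPDATE entirely. A sketch of that alternative, not the poster's code:]

```sql
CREATE OR REPLACE FUNCTION bonzipay.ussf_accountnumber_update()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
    -- Fix only the incoming row instead of re-scanning the whole table
    IF left(NEW.accountnumber, 3) = 'US_' THEN
        NEW.accountnumber := replace(NEW.accountnumber, '_', ' ');
    END IF;
    RETURN NEW;
END;
$function$;

-- Must be BEFORE INSERT so the modified NEW row is what gets stored
-- (replaces the existing AFTER INSERT trigger)
CREATE TRIGGER ins_ussf_rec
    BEFORE INSERT ON bonzipay.bp_ach_trans
    FOR EACH ROW EXECUTE PROCEDURE bonzipay.ussf_accountnumber_update();
```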

idle_in_transaction_session_timeout

2021-05-07 Thread Atul Kumar
Hi,

I have postgres 9.5 version running on my machine.

When I try to look up the parameter
idle_in_transaction_session_timeout, it shows me the error below:

postgres=# show idle_in_transaction_session_timeout;
ERROR:  unrecognized configuration parameter
"idle_in_transaction_session_timeout"


I also checked postgresql.conf, but there is no such parameter in that file either.

Please help me to find this parameter.




Regards,
Atul