On Wed, Jan 24, 2018 at 3:54 AM, pavan95 wrote:
> Hi Claudio,
>
> We didn't configure any replication to our production server. Which strace
> are you talking about?
>
This one: https://linux.die.net/man/1/strace
You can attach it to a process (assuming you have
Then we should find out whether there are any idle sessions with uncommitted
transactions. Those might be the culprits.
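On the PostgreSQL side, idle-in-transaction sessions can be spotted in pg_stat_activity; a minimal sketch, assuming the 9.2+ column names:

```sql
-- Sessions sitting idle inside an open transaction, oldest first.
-- Column names as in pg_stat_activity on PostgreSQL 9.2 and later.
SELECT pid, usename, state,
       now() - xact_start AS xact_age,
       query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;
```

Long-lived entries here hold back the xmin horizon and can keep vacuum from reclaiming space.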
Regards,
Pavan
--
Sent from:
http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html
On Tue, Jan 23, 2018 at 10:54:01PM -0700, pavan95 wrote:
> If you are seeing some sessions, then of course your database is
> perfectly alright, as sessions won't consume any memory.
Those have a cost as well when building transaction snapshots. Having too
many of them is no good either, let's not
Thomas Kellerer wrote on 19.01.2018 at 17:48:
>
> I wonder what performance implications that has on a server with
> around 50-100 active connections (through pgBouncer).
>
> My understanding of the documentation is that Postgres will work
> just fine if we lower the limit; it simply releases
Hi Claudio,
We didn't configure any replication to our production server. Which strace
are you talking about?
We made a careful observation that the so-called archive logs are generated
only at the 9th minute and the 39th minute of each hour, even when nobody is
connecting from
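A WAL switch at fixed minutes of the hour usually points at a time-based trigger rather than write activity; checking archive_timeout and the archiver statistics is a cheap first step — a sketch:

```sql
-- Time-based settings that can force WAL switches and checkpoints:
SELECT name, setting, unit FROM pg_settings
WHERE name IN ('archive_timeout', 'checkpoint_timeout');

-- What the archiver has been doing (view available since 9.4):
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_time
FROM pg_stat_archiver;
```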
Hi Rambabu,
If you are seeing some sessions, then of course your database is
perfectly alright, as sessions won't consume any memory.
Kindly describe the issue briefly.
Regards,
Pavan
Hi,
I have to provide a summary of how much space is used in the large objects
table, based on a group by condition.
I would expect an index-only scan on the large object table, but a full seq
scan that lasts for hours is performed.
BigSql distribution
PostgreSQL 9.6.5 on x86_64-pc-mingw64,
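For what it's worth, per-object space can be summed straight from the pg_largeobject catalog; because the query has to read the data column itself, an index-only scan on the (loid, pageno) index cannot satisfy it, which may explain the seq scan — a sketch:

```sql
-- Bytes per large object: pg_largeobject stores each object as
-- rows of up to 2 kB pages in the "data" column.
SELECT loid, sum(octet_length(data)) AS bytes
FROM pg_largeobject
GROUP BY loid
ORDER BY bytes DESC
LIMIT 20;
```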
Hello,
Is there any way to check how many transactions have happened to date, from
the point the database was created and started accepting transactions?
The reason for this doubt is to find out whether my database has crossed 2
million transactions or not.
Strangely, I had an interesting observation when
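One way to approximate this on 9.6 is via txid_current(), which returns an epoch-qualified transaction ID (note it allocates an XID itself), or via the age of each database's frozen XID — a sketch:

```sql
-- Roughly how many transaction IDs have been consumed since initdb
-- (this call itself allocates one more XID):
SELECT txid_current();

-- Transactions since each database's last anti-wraparound freeze:
SELECT datname, age(datfrozenxid) FROM pg_database;
```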
I've had a look at the plan with pgAdmin, and I think the problem is
rather here:
-> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual
time=27.427..28.896 rows=7359 loops=1)
Sort Key: amendment.id
Sort Method: quicksort Memory: 1227kB
-> Nested Loop
2018-01-23 16:18 GMT+01:00 Justin Pryzby :
> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:
>
>> Here is the default plan :
>
> Can you resend without line breaks or paste a link to explain.depesz?
I hope it's better like that. I've attached it too, just
On 23.01.2018 at 16:20, Pavan Teja wrote:
Hi David,
If so, what needs to be done in order to stabilize this issue?
Don't top-post ;-)
You can't prevent the generation of WALs (apart from using unlogged
tables, but I'm sure that will not be your solution).
Regards, Andreas
--
Hi David,
If so, what needs to be done in order to stabilize this issue?
Thanks in advance.
Regards,
Pavan
On Jan 23, 2018 8:15 PM, "David G. Johnston"
wrote:
> On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja
> wrote:
>
>> "
Yes, so many WALs are continuing to be produced.
We are deleting the WALs after a backup of the database.
Yes, archiving mode is on. And the warning message in the log file is
"checkpoints are frequently occurring (1 second apart). Consider increasing
the checkpoint_segments parameter".
My doubt is, previously
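For the warning above, the usual remedy is to let checkpoints happen further apart; the values below are illustrative only and should be sized against the observed WAL volume:

```
# postgresql.conf sketch -- spread checkpoints out
checkpoint_segments = 32            # pre-9.5 servers
# max_wal_size = '2GB'              # the 9.5+ replacement for checkpoint_segments
checkpoint_completion_target = 0.9  # smooth out checkpoint I/O
```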
Hi Laurenz,
Any Update, this is continuously hitting our production database.
Regards,
Rambabu Vakada,
PostgreSQL DBA.
On Tue, Jan 23, 2018 at 6:12 PM, Rambabu V wrote:
> Hi Laurenz,
>
> The OOM error is not recorded at the server level; it is only recorded in our
> database
Hello all,
So I have a view for which I can select all rows in about 3s (it returns ~80k
rows), but if I add a where clause on a column, it takes 300+ s to return
the ~8k matching lines.
From the plan, I see that it expects to return only 1 row and so chooses to
perform some nested loops. Of course, I did
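When the planner expects 1 row but gets ~7k, refreshing (and possibly enlarging) statistics on the underlying tables is the first thing to try; the table name below is taken from the posted plan's sort key, while the column name is a placeholder:

```sql
-- Refresh planner statistics; a rows=1 estimate that is off by
-- three orders of magnitude often comes from stale stats.
ANALYZE amendment;

-- If the estimate stays off, sample the filtered column harder
-- ("some_column" is a placeholder for the WHERE-clause column):
ALTER TABLE amendment ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE amendment;
```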
Hi Andreas,
Yes, I'm facing a problem because of this huge WAL (archive log) generation,
as it is seriously consuming a lot of disk space, close to 50 GB per day,
even though the DMLs don't have that much impact on this WAL generation.
Previously the archive log size was nearly 2 to 3 GB a day. Now with
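To quantify the generation rate, the WAL position can be sampled twice and diffed; 9.6 still uses the xlog-named functions, and the two LSNs below are placeholders:

```sql
-- Sample now, and again a few minutes later:
SELECT pg_current_xlog_location();

-- Bytes of WAL written between two sampled locations
-- ('1A/28000000' and '1A/2B000000' are placeholder LSNs):
SELECT pg_size_pretty(
         pg_xlog_location_diff('1A/2B000000', '1A/28000000'));
```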