Note that if you still have the settings you showed in your original
post, you're just moving the goalposts a few feet further back. Any
heavy load can still trigger this kind of behaviour.
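As a rough illustration of why those settings leave so little headroom:
each backend may allocate work_mem for every sort or hash step it runs,
so 1500 allowed connections can dwarf whatever RAM is left after
shared_buffers. A minimal back-of-the-envelope sketch in Python, assuming
the work_mem quoted further down is 110MB (the unit is cut off in the
quoted configuration):

    # Worst-case memory estimate for the settings quoted in this thread.
    # Assumption (not stated in the thread): work_mem is 110 MB and each
    # busy backend holds roughly one work_mem-sized allocation at a time.
    GB, MB = 1024 ** 3, 1024 ** 2

    total_ram       = 500 * GB
    shared_buffers  = 188 * GB
    max_connections = 1500
    work_mem        = 110 * MB

    leftover   = total_ram - shared_buffers
    worst_case = max_connections * work_mem

    print("RAM left after shared_buffers: %.0f GB" % (leftover / GB))
    print("Worst-case backend work_mem:   %.0f GB" % (worst_case / GB))
    # ~312 GB left vs ~161 GB of potential work_mem, and a single complex
    # query can use several work_mem allocations, so a heavy burst can
    # still push the box into swap or kernel memory-reclaim storms.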
On Tue, Jul 7, 2015 at 5:29 AM, eudald_v wrote:
> Hello guys!
>
> I finally got rid of it.
> It looks like, in the end, it was all due to the transparent_hugepages values.
Hello guys!
I finally got rid of it.
It looks like, in the end, it was all due to the transparent_hugepages values.
I disabled them and the CPU spikes disappeared. I'm sorry, as it's something
I usually disable on PostgreSQL servers, but I forgot to do so on this one
and never thought about it.
Thanks
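For anyone hitting the same symptom: a minimal sketch of how the THP
setting can be checked and switched off at runtime, written in Python
for illustration (the same thing is usually done with a couple of echo
lines in an init script). It assumes the mainline sysfs path; RHEL 6-era
kernels expose it under /sys/kernel/mm/redhat_transparent_hugepage/
instead, and a boot parameter or rc script is still needed for the change
to survive a reboot.

    # Read and disable transparent hugepages via sysfs (needs root).
    THP_DIR = "/sys/kernel/mm/transparent_hugepage"

    def thp_state(name):
        with open("%s/%s" % (THP_DIR, name)) as f:
            return f.read().strip()        # e.g. "always madvise [never]"

    def disable_thp():
        for name in ("enabled", "defrag"):
            with open("%s/%s" % (THP_DIR, name), "w") as f:
                f.write("never")

    print(thp_state("enabled"))
    disable_thp()
    print(thp_state("enabled"))            # brackets should now be on [never]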
On Tue, Jun 30, 2015 at 8:52 AM, eudald_v wrote:
> Hello all,
> This is my very first message to the PostgreSQL community, and I really hope
> you can help me solve the trouble I'm facing.
>
> I have an 80-core server (multithreaded) with close to 500GB of RAM.
>
> My configuration is:
> MaxConn: 1500 (was 850)
Josh Berkus wrote:
> On 07/02/2015 08:41 AM, eudald_v wrote:
>> And this is what it looks like when the spike happens:
>> [system CPU staying over 90%]
> I think you have a driver, kernel, Linux memory management, or IO
> stack issue.
In my experience this is usually caused by failure to disable transparent huge pages.
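A quick way to check whether THP is in play during a spike is to watch
the AnonHugePages counter in /proc/meminfo (a standard field on mainline
kernels); a rough Python sketch:

    # How much anonymous memory is currently backed by transparent
    # hugepages. A large, rapidly changing value during the spikes points
    # at THP allocation/compaction rather than at Postgres itself.
    def anon_hugepages_kb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("AnonHugePages:"):
                    return int(line.split()[1])   # reported in kB
        return 0

    print("AnonHugePages: %.0f MB" % (anon_hugepages_kb() / 1024.0))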
On 07/02/2015 08:41 AM, eudald_v wrote:
> And this is what it looks like when the spike happens:
> http://pastebin.com/2hAYuDZ5
Hmm, those incredibly high system % indicate that there's something
wrong with your system. If you're not using software RAID or ZFS, you
should never see that.
I think you have a driver, kernel, Linux memory management, or IO stack issue.
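When system time dominates like that, the kernel is doing the work
(memory compaction, reclaim, the IO stack, a driver) rather than the
database. For reference, a small sketch that samples the user/system
split straight from /proc/stat, independently of top:

    import time

    def cpu_times():
        # First line of /proc/stat: "cpu user nice system idle iowait ..."
        fields = open("/proc/stat").readline().split()
        user = int(fields[1]) + int(fields[2])      # user + nice
        system = int(fields[3])
        total = sum(int(x) for x in fields[1:])
        return user, system, total

    u1, s1, t1 = cpu_times()
    time.sleep(5)
    u2, s2, t2 = cpu_times()
    dt = float(t2 - t1)
    print("user: %.1f%%  system: %.1f%%"
          % (100 * (u2 - u1) / dt, 100 * (s2 - s1) / dt))
    # Sustained system time far above user time on a dedicated database
    # box is the signature of a kernel-side problem, not of slow queries.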
On 07/02/2015 08:41 AM, eudald_v wrote:
> All that was recorded during a spike. From this log I have to point
> out something:
> Tables TABLE_X and TABLE_Y both have a TRIGGER that does an INSERT into
> TABLE_Z.
> As you can see, TABLE_Z was being VACUUM ANALYZED. I wonder if TRIGGERS and
> VACUUM work well together.
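Whether the VACUUM ANALYZE on TABLE_Z lines up with the spikes can be
checked from the statistics views; a small psycopg2 sketch (the DSN and
the lower-cased table name are placeholders, not taken from the thread):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT relname, n_dead_tup,
               last_autovacuum, autovacuum_count,
               last_autoanalyze, autoanalyze_count
        FROM pg_stat_all_tables
        WHERE relname = %s
    """, ("table_z",))
    print(cur.fetchone())
    # Comparing last_autovacuum / last_autoanalyze with the spike times
    # shows whether the vacuum on TABLE_Z is a cause or just a bystander
    # while the triggers keep inserting into it.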
Dear Josh,
I'm sorry I didn't write back sooner, but we have been very busy with this issue
and, you know, when something goes wrong, the apocalypse comes with it.
I've been working on everything you suggested.
I used your tables and script and I can give you a sample of it on
locked_query_start
2015
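For reference, the kind of sample that ends up in such a locked_query
table can be taken with a few lines against pg_stat_activity. This
sketch assumes a 9.2–9.5 era server, where pg_stat_activity still has a
boolean "waiting" column (later releases replaced it with wait_event
columns), and a placeholder DSN:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT now() AS sample_time, pid, state, waiting,
               query_start AS locked_query_start, query
        FROM pg_stat_activity
        WHERE waiting
        ORDER BY query_start
    """)
    for row in cur.fetchall():
        print(row)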
On 06/30/2015 07:52 AM, eudald_v wrote:
> For two days now, I've been experiencing that, randomly, the connections
> rise until they reach max connections, and the load average of the server
> goes to around 300~400, making every command issued on the server take
> forever. When this happens, ram
Dear Tom,
Thanks for your quick response.
First of all, yes, queries seem to take more time to process and they look
queued up (you can even see inserts with status waiting in top/htop).
I didn't know about that connection tip, and I will absolutely find a moment
to add a pg_pooler to reduce the number of connections.
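On the pooling point: besides putting pgbouncer or pgpool in front of
the server, the application side can cap its own connection count. A
minimal sketch with psycopg2's built-in pool (the sizes and DSN are
illustrative only):

    import psycopg2.pool

    # Far fewer server connections than the 1500 max_connections that
    # were configured originally.
    pool = psycopg2.pool.ThreadedConnectionPool(
        minconn=5, maxconn=40, dsn="dbname=mydb user=app")

    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            print(cur.fetchone())
    finally:
        pool.putconn(conn)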
eudald_v writes:
> This is my very first message to the PostgreSQL community, and I really hope
> you can help me solve the trouble I'm facing.
> I have an 80-core server (multithreaded) with close to 500GB of RAM.
> My configuration is:
> MaxConn: 1500 (was 850)
> Shared buffers: 188GB
> work_mem: 110