On Thu, Jun 5, 2014 at 6:57 PM, Vincent Lasmarias wrote:
Thanks for the informative responses and suggestions. My responses below:
* Sorry for the double post. I posted the original message using my gmail
account and got a "is not a member of any of the restrict_post groups"
response and when I didn't see it for a day, I ended up wondering if it was due
On Thu, Jun 5, 2014 at 2:47 PM, Deron wrote:
We saw very similar issues with a CentOS server with 40 cores (32
virtualized) when moving from a physical server to a virtual server (I
think it had 128GB RAM). Never had the problem on a physical server. We
checked the same things as noted here, but never found a bug. We really
thought it ha
On Thu, Jun 5, 2014 at 10:58 AM, Jeff Janes wrote:
> This sounds like a kernel problem, probably either the zone reclaim issue,
> or the transparent huge pages issue.
At first I thought the same, but I don't think THP was introduced
until 2.6.38...OP is running 2.6.32-431.11.2.el6.x86_6. Maybe
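For what it's worth, the RHEL/CentOS 6 kernels carry a THP backport even though they are 2.6.32-based, so checking is still worthwhile. A quick check might look like this (the paths below are the RHEL 6 and mainline defaults; adjust for your distro):

```shell
# RHEL/CentOS 6 uses the redhat_* name; mainline kernels use
# transparent_hugepage. Read whichever path exists.
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled 2>/dev/null ||
    cat /sys/kernel/mm/transparent_hugepage/enabled

# Zone reclaim: 0 means off; nonzero can cause stalls on NUMA machines
cat /proc/sys/vm/zone_reclaim_mode

# To try turning both off at runtime (as root):
# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
# sysctl -w vm.zone_reclaim_mode=0
```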
On Wed, Jun 4, 2014 at 5:27 PM, vlasmarias wrote:
> For the past few days, we've been seeing unexpected extremely high CPU
> spikes in our system. We observed the following: the 'free' memory would
> go down to lower than 300 MB; at that point, 'cached' slowly starts to go
> down, and then CP
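To catch the symptom described above as it develops, something as simple as the following can be left running (standard procps tools; exact column names vary a little between versions):

```shell
# Sample memory and CPU once per 5 seconds; watch the "free" and "cache"
# columns alongside the CPU columns as the spike builds
vmstat 5

# One-off snapshot in MB; on older procps the "-/+ buffers/cache" row
# shows the memory that is actually reclaimable
free -m
```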
kiki wrote:
> I expanded work_mem to 256 MB and created an index on the table
>
> create index xxx on system_alarm (id_camera, date, time) where confirmed =
> 'false' and dismissed = 'false';
That index is not used for the query (as could be expected).
You better remove it.
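If the goal was to help the query quoted elsewhere in the thread (equality filters on id_camera, confirmed, and dismissed, with ORDER BY date DESC, time DESC), an index would have to match both the predicate and the sort order. A sketch, untested, with a placeholder database name, and assuming PostgreSQL 8.3 or later for DESC index columns:

```shell
psql -d yourdb -c "
CREATE INDEX system_alarm_cam_date_time_idx
    ON system_alarm (id_camera, date DESC, time DESC)
    WHERE confirmed = 'false' AND dismissed = 'false';
"
```

On versions before 8.3, a plain index on (id_camera, date, time) can still serve the descending ORDER BY via a backward index scan.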
> the processor load now exe
I expa
kiki wrote:
> The speed of the query is not a problem but the strange thing is the
> processor load with postmaster when the query is executed.
> I don't know how to reduce the processor load.
Did you try without the ORDER BY?
Where are the execution plans?
Yours,
Laurenz Albe
--
Sent via pgsql-perf
Please try to avoid top-posting where inappropriate.
On Mon, Sep 29, 2008 at 10:29:45AM +0200, [EMAIL PROTECTED] wrote:
> >> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND
> >> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC
> >> LIMIT 1;
> Sorry, without LIMIT returns around 70 rows.
> Tried to index da
Hello Herald,
the queried table is used for communication between the server application
and the web user interface.
When the application detects an event, it writes it to the table.
The web client checks every 10 seconds whether something new has been
written to the table.
Usually nothing new is written, but the client h
Sorry, without LIMIT returns around 70 rows.
Tried to index date column and time column but the performance is pretty
much the same.
Everything is OK, I just don't understand why this query is burdening the
processor so much.
Regards,
Maja
kiki wrote:
> First I have increased shared_buffers from 2000 to 8000. Since the
> postgresql is on Debian I had to increase SHMMAX kernel value.
> Everything is working much faster now.
Good to hear that the problem is gone.
> There is still heavy load of postmaster process (up to 100%) for a si
Hello Maja,
> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND
> confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC
> LIMIT 1;
>
> (the table is indexed by id_camera, has around 1 million rows, and this
> query returns around 70 rows and is executed (EXPLAI
Thanks for the instructions for detecting the problem.
It helped a lot.
First I have increased shared_buffers from 2000 to 8000. Since PostgreSQL
is on Debian, I had to increase the SHMMAX kernel value.
Everything is working much faster now.
There is still heavy load of postmaster process (up to 1
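For reference, in the 8.x era a bare shared_buffers number counts 8 kB buffers, so the arithmetic behind the SHMMAX bump looks roughly like this (the 128 MB figure is only an example value):

```shell
# shared_buffers = 8000 means 8000 buffers of 8 kB (default BLCKSZ) each:
echo $(( 8000 * 8192 ))   # 65536000 bytes, about 62.5 MB

# kernel.shmmax must cover this plus other shared-memory overhead, e.g.:
# sysctl -w kernel.shmmax=134217728   # 128 MB, run as root
```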
It would be useful to confirm that this is a backend process.
In top, press the 'c' key to show the full command line of each process.
Backend postgres processes then display more useful descriptions that
identify what they are doing.
You can also confirm what query is caus
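The truncated last line presumably points at pg_stat_activity; a sketch for the 8.x versions in this thread (procpid and current_query are the pre-9.2 spellings, renamed pid and query in 9.2; the database name is a placeholder):

```shell
psql -d yourdb -c "
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';
"
```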
kiki wrote:
> The number of rows returned by the query varies, right now is:
>
> 49 row(s)
> Total runtime: 3,965.718 ms
> The table currently has 971582 rows.
>
> But the problem is that when database server is restarted everything works
> fine and fast. No heavy loads of the processor and as time
Thanks for your response.
The situation is that the top result is when the server is already
exhibiting problems.
The number of rows returned by the query varies, right now is:
49 row(s)
Total runtime: 3,965.718 ms
The table currently has 971582 rows.
But the problem is that when database serv
> If that's what it looks like your server is running just fine. Load
> of 1.31, 85+% idle, no wait time. Or is that top and vmstat output
> from when the server is running fine?
Don't forget that there are 8 CPUs, and the backend will only run on one
of them.
But I concur that this seems ok.
H
2008/9/25 <[EMAIL PROTECTED]>:
> The result of the top command:
>
> top - 20:44:58 up 5:36, 1 user, load average: 1.31, 1.39, 1.24
> Tasks: 277 total, 2 running, 275 sleeping, 0 stopped, 0 zombie
> Cpu(s): 11.5%us, 2.2%sy, 0.0%ni, 86.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
> Mem: 3