Also, it is worth checking what your sysctl vm.zone_reclaim_mode is set
to - if it is 1, override it to 0. As Jeff mentioned, this gotcha on
machines with larger CPU counts has been discussed at length on this
list - but it still traps us now and again!
Cheers
Mark
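For reference, checking and overriding zone_reclaim_mode looks like this
(a sketch only; run as root, and the sysctl.conf path may differ per distro):

cat /proc/sys/vm/zone_reclaim_mode                     # 1 means zone reclaim is on
sysctl -w vm.zone_reclaim_mode=0                       # override at runtime
echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf    # persist across reboots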
On 02/04/13 19:33, Armand du Plessis wrote:
I had my reservations about the almost 0% IO usage on the raid0 array as
well. I'm looking at the numbers in atop and the array doesn't seem to
reflect the aggregate of the individual volumes as one would expect. I'm
just happy I'm seeing numbers on the volumes, and they're not too bad.
One thing I was wondering, as a
On 02/04/13 19:08, Jeff Janes wrote:
On Monday, April 1, 2013, Mark Kirkwood wrote:
Your provisioned volumes are much better than the default AWS ones,
but are still not hugely fast (i.e. 1000 IOPS is about 8 MB/s worth
of Postgres 8k buffers). So you may need to look at adding more
Thanks Jeff, yup, I'm actually busy setting up pg_pool now. Preliminary
results looks promising after switching some client nodes to connect
through the pool.
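For what it's worth, the pooling side of that setup can be quite small; a
minimal pgpool-II sketch (the host name, path and sizes below are assumptions,
not the thread's actual configuration):

cat >> /etc/pgpool-II/pgpool.conf <<'EOF'
listen_addresses = '*'
port = 9999
backend_hostname0 = 'db-master.internal'   # hypothetical backend host
backend_port0 = 5432
num_init_children = 90    # max concurrent client connections to the pool
max_pool = 1              # cached backend connections per child process
connection_cache = on
EOF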
Here's the output of pg_stat_database, but it also doesn't seem to be
spending more time there either. I'll have a look through the archives fo
On Monday, April 1, 2013, Armand du Plessis wrote:
> I've run an EXPLAIN ANALYZE on one of the queries that appeared in the
> pg_locks (although like you say that might be a red herring) both during
> normal response times (2) and also after the locks backlog materialized (1)
>
> The output below,
On Monday, April 1, 2013, Mark Kirkwood wrote:
>
>
> Your provisioned volumes are much better than the default AWS ones, but
> are still not hugely fast (i.e. 1000 IOPS is about 8 MB/s worth of Postgres
> 8k buffers). So you may need to look at adding more volumes into the array,
> or adding some sep
On 02/04/13 13:55, Bruce Momjian wrote:
On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
Due to the security nature of the release, the source and binaries will
only be publicly available on April 4 --- there are no pre-release
versions available.
The PostgreSQL homepage h
Yeah, as I understand it you should have 6000 IOPS available for the md
device (ideally).
The iostats you display certainly look benign... but the key time to be
sampling would be when you see the lock list explode - it could look very
different then.
Re vm.dirty* - I would crank the values down
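With six 1000 IOPS volumes striped together, the md device should give
roughly 6 x 1000 IOPS x 8 KB ~ 48 MB/s of random 8k I/O, so the useful
moment to sample is exactly when the lock list explodes. A rough capture
loop (a sketch only; the database name 'yourdb' is an assumption):

iostat -x 5 &      # extended per-device stats every 5s; watch the md and xvd devices
while true; do
    # count ungranted locks alongside the I/O samples
    psql -d yourdb -Atc "SELECT now(), count(*) FROM pg_locks WHERE NOT granted;"
    sleep 5
done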
On Mon, Apr 1, 2013 at 5:55 PM, Bruce Momjian wrote:
> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
> > > Due to the security nature of the release, the source and binaries will
> > > only be publicly available on April 4 --- there are no pre-release
> > > versions availa
Hi Steven,
Sounds very familiar. Painfully familiar :(
But I really don't know. All I can see is that in this particular
configuration the instance has 2 x Intel Xeon E5-2670 eight-core
processors. I can't find any info on whether it's flex or round robin. AWS
typically doesn't make the underlying
Armand,
All of the symptoms you describe line up perfectly with a problem I had
recently when upgrading DB hardware.
Everything ran fine until we hit some threshold somewhere, at which point
the locks would pile up in the thousands just as you describe, all while we
were not I/O bound.
I was movin
I've run an EXPLAIN ANALYZE on one of the queries that appeared in the
pg_locks (although like you say that might be a red herring) both during
normal response times (2) and also after the locks backlog materialized (1)
The output is below; I've just blanked out some columns. The IO timings do
seem a
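For the I/O timing side, one way to capture that kind of plan is with
track_io_timing (new in 9.2, superuser-only to change per session) plus
EXPLAIN (ANALYZE, BUFFERS). A sketch; the database name and query below are
placeholders, not the ones from the thread:

psql -d yourdb <<'SQL'
SET track_io_timing = on;                  -- adds I/O read/write times to the plan
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE id = 42;    -- placeholder query
SQL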
On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
> > Due to the security nature of the release, the source and binaries will
> > only be publicly available on April 4 --- there are no pre-release
> > versions available.
>
> The PostgreSQL homepage has a big announcement saying
Hi Jeff,
Sorry, I should've mentioned the new instance is Postgres 9.2.3. The old
instance maxed out the disk I/O available on a single EBS volume on AWS.
It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping
6 x 1000 IOPS volumes in RAID-0 would give me some breathing space on
2013/4/2 Bruce Momjian :
> On Mon, Apr 1, 2013 at 05:10:22PM -0700, Tory M Blue wrote:
>> So It was announced that there would be a security patch for all versions
>> released on the 4th. I see it's been announced/released on the website, but
>> the
>> versions available show Feb dates.
>>
>> Sho
Thanks Mark,
I had a look at the iostat output (on a 5s interval) and pasted it below.
The utilization and waits seem low. I've included a sample below: #1 was
taken during normal operation, and when the locks happen it basically
drops to 0 across the board. My (mis)understanding of the IOPS was that
On Mon, Apr 1, 2013 at 05:10:22PM -0700, Tory M Blue wrote:
> So It was announced that there would be a security patch for all versions
> released on the 4th. I see it's been announced/released on the website, but
> the
> versions available show Feb dates.
>
> Should the source be current? Or do
On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis wrote:
> [Apologies, I first sent this to the incorrect list, postgres-admin, in
> the event you receive it twice]
>
> Hi there,
>
> I'm hoping someone on the list can shed some light on an issue I'm having
> with our Postgresql cluster. I'm liter
In addition to tuning the various Postgres config knobs you may need to
look at how your AWS server is set up. If your load is causing an IO
stall then the *symptoms* of this will be lots of locks...
You have quite a lot of memory (60G), so look at tuning the
vm.dirty_background_ratio, vm.dirty_ratio
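On a 60G box the default ratios translate into several gigabytes of dirty
page cache, so the usual move is to shrink them. A sketch (the values are
illustrative, not a recommendation from the thread):

sysctl vm.dirty_background_ratio vm.dirty_ratio    # check current values
sysctl -w vm.dirty_background_ratio=1              # start background writeback earlier
sysctl -w vm.dirty_ratio=5                         # cap dirty memory before writers block
echo 'vm.dirty_background_ratio = 1' >> /etc/sysctl.conf   # persist across reboots
echo 'vm.dirty_ratio = 5' >> /etc/sysctl.conf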
So it was announced that there would be a security patch for all versions,
to be released on the 4th. I see it's been announced/released on the
website, but the versions available show Feb dates.
Should the source be current? Or does it take a while for the source and
other downloads to be made available?
Figured if
Try these SQL statements; they will give you more information about what's
happening in your server, lock-wise:
SELECT
locktype, virtualtransaction,transactionid,nspname,relname,mode,granted,
cast(date_trunc('second',query_start) AS timestamp) AS query_start,
substr(current_query,1,25) AS query
FROM
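The preview cuts the query off at the FROM clause; a runnable sketch of the
same idea (column names assume 9.2, where pg_stat_activity exposes pid and
query rather than procpid and current_query; 'yourdb' is a placeholder):

psql -d yourdb <<'SQL'
SELECT l.locktype, l.virtualtransaction, l.transactionid,
       n.nspname, c.relname, l.mode, l.granted,
       date_trunc('second', a.query_start) AS query_start,
       substr(a.query, 1, 25) AS query
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.pid <> pg_backend_pid()
ORDER BY query_start;
SQL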
Apologies, the checkpoint log entry came a few seconds after I sent this
email. pg_locks is now at 7000.
And vmstat:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 7  0      0 351212
Thanks for the reply.
I've now updated the background writer settings to:
# - Background Writer -
bgwriter_delay = 200ms                # 10-10000ms between rounds
bgwriter_lru_maxpages = 400           # 0-1000 max buffers written/round
bgwriter_lru_multiplier = 2.0         # 0-10.0 multiplier on buffers scanned/round
Hello, I think that during the checkpoint your system pauses all clients in
order to flush all data from the controller's cache to the disks. If I were
you I'd try to tune my checkpoint parameters better; if that doesn't work,
show us some vmstat output please.
Vasilis Ventirozos
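One way to check whether the stalls line up with checkpoints is to turn on
checkpoint logging and watch vmstat at the same time. A sketch; the config
path is an assumption, and ALTER SYSTEM isn't available on 9.2, so the file
is edited directly:

echo 'log_checkpoints = on' >> /etc/postgresql/postgresql.conf   # path varies by install
psql -c "SELECT pg_reload_conf();"    # pick up the change without a restart
vmstat 5                              # watch run queue, swap-in/out, block I/O and CPU wait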
[Apologies, I first sent this to the incorrect list, postgres-admin, in the
event you receive it twice]
Hi there,
I'm hoping someone on the list can shed some light on an issue I'm having
with our Postgresql cluster. I'm literally tearing out my hair and don't
have a deep enough understanding of
On Sat, Mar 30, 2013 at 11:00 PM, Kelphet Xiong wrote:
> I guess it is because postgres only uses a single thread to read
> the data or “pushing the data around in RAM”, according to Kevin’s statement.
> Then my question is actually why postgres cannot use the remaining
> 93.4% of the CPU.
postgres can u