On 03/10/17 04:29, Tom Lane wrote:
Mariel Cherkassky writes:
explain analyze SELECT Ma.User_Id,
COUNT(*) COUNT
FROM Manuim Ma
WHERE Ma.Bb_Open_Date =
(SELECT Bb_Open_Date
On 19/08/17 13:49, Mark Kirkwood wrote:
On 19/08/17 02:21, Jeremy Finzel wrote:
On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe
<scott.marl...@gmail.com> wrote:
So do iostat or iotop show you if / where your disks are working
hardest? Or is this CPU overhead that
to put my tables on.
Thanks!
Charles
On Sat, Jul 15, 2017 at 1:57 AM, Mark Kirkwood
<mark.kirkw...@catalyst.net.nz> wrote:
Thinking about this a bit more - if somewhat more blazing
performance is needed, then this could be achieved via losing the
RAID card and spin
On 19/08/17 02:21, Jeremy Finzel wrote:
On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe
<scott.marl...@gmail.com> wrote:
So do iostat or iotop show you if / where your disks are working
hardest? Or is this CPU overhead that's killing performance?
Sorry for the delayed reply.
thermal runaway", i.e. when the server can't get a
temperature reading from a PCIe card, it spins the fans at full speed
to protect the server against high temperature. The machine being next
to my desk I worry about the deafening noise it will create.
Thanks!
Charles
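For anyone wanting to answer that question on their own box, a couple of commands that show where the disks are working hardest (a generic sketch, not from the original thread):

```shell
$ iostat -xm 5     # per-device utilization, MB/s and await, refreshed every 5s
$ sudo iotop -o    # only show processes currently doing I/O
```

If `%util` sits near 100% on the data volume while CPUs idle, the bottleneck is the disks rather than CPU overhead.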
On Sat, Jul 15, 2017
onger
lifetime. However for your workload the Sammy is probably fine.
regards
Mark
On 15/07/17 11:09, Mark Kirkwood wrote:
Ah yes - that seems more sensible (but still slower than I would
expect for 5 disks RAID 0).
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To
dom_page_cost from 22 to 4
Reducing min_parallel_relation_size to 512kB to have more workers
when doing sequential parallel scan of my biggest table
Thanks for recommending this test, I now know what the real throughput
should be!
Charles
On Wed, Jul 12, 2017 at 4:11 AM, Mark Kirkwood
mailto:mark.kirkw...@ca
Hmm - how are you measuring that sequential scan speed of 4MB/s? I'd
recommend doing a very simple test e.g, here's one on my workstation -
13 GB single table on 1 SATA drive - cold cache after reboot, sequential
scan using Postgres 9.6.2:
bench=# EXPLAIN SELECT count(*) FROM pgbench_accounts
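A cold-cache run of that kind can be reproduced like this (paths, database and service names are assumptions, not from the original message):

```shell
$ sudo sync && sudo sysctl vm.drop_caches=3      # flush the OS page cache
$ sudo systemctl restart postgresql              # also clears shared_buffers
$ psql bench -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pgbench_accounts"
```

Dividing the table size by the reported execution time then gives the effective sequential read throughput, which can be compared against the raw speed of the drives.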
On 23/05/17 08:21, Clemens Eisserer wrote:
> Hi,
>
> I have a latency-sensitive legacy application, where the time consumed
> by query planning was always causing some headaches.
> Currently it is running on postgresql-8.4 - will postgresql-10 support
> generating plans using multiple CPU cores to
On 15/12/16 23:28, Dinesh Chandra 12108 wrote:
Dear expert,
In PostgreSQL 9.1, the size of *pgsql_tmp* inside the tablespace (temp
tablespace) increased by 544G in one day.
However, the DB size is as usual but the tablespace size keeps increasing.
Could you please suggest why it is happening ?
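Without seeing the queries it is hard to say, but a natural first step is to check the cumulative temp-file statistics (columns available from 9.2 on; on 9.1 itself the closest equivalent is setting log_temp_files):

```sql
-- temp files and bytes spilled per database since the last stats reset
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
ORDER BY temp_bytes DESC;
```

Large values usually point at sorts or hashes exceeding work_mem.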
On 19/09/16 19:40, Job wrote:
Hello,
I would please like to have some suggestions to optimize Postgres 8.4 for a
very heavy volume of SELECT (with join) queries.
The queries read data, very rarely they write.
We probably need to see schema and query examples to help you (with
EXPLAIN ANALY
On 01/09/16 17:56, Mark Kirkwood wrote:
the other way to attack this is to trace your backend postgres
process (err perfmon...no idea how to do this on windows...)
No idea why I thought you were on windows (maybe was reading another
message just before yours) - sorry!
--
Sent via pgsql
On 01/09/16 10:01, Bobby Mozumder wrote:
Is it possible to find the number of disk IOs performed for a query? EXPLAIN
ANALYZE looks like it shows number of sequential rows scanned, but not number
of IOs.
My database is on an NVMe SSD, and am trying to cut microseconds of disk IO per
query b
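EXPLAIN (ANALYZE, BUFFERS) does expose block-level counts, and with track_io_timing enabled (9.2+) the time spent reading as well; a generic example (table and predicate are hypothetical):

```sql
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE id = 42;
-- look for "Buffers: shared hit=... read=..." (read = blocks fetched from the OS)
-- and "I/O Timings: read=..." in the output
```

Note that "shared read" counts blocks missing from shared_buffers; some of those may still be served from the OS page cache rather than the device itself.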
On 22/07/16 13:07, Johan Fredriksson wrote:
And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest
version in postgresql.org's own repository) without improvement.
Not sure what repo you are using, but 9.5.3 and 9.6 Beta are the
*actual* latest versions. Now I'm not sure
On 08/07/16 02:09, Wes Vaske (wvaske) wrote:
The Crucial drive does not have power loss protection. The Samsung drive does.
(The Crucial M550 has capacitors to protect data that's already been written to the
device but not the entire cache. For instance, if data is read from the device during
On 06/07/16 07:17, Mkrtchyan, Tigran wrote:
Hi,
We had a similar situation and the best performance was with 64MB
background_bytes and 512 MB dirty_bytes.
Tigran.
On Jul 5, 2016 16:51, Kaixi Luo wrote:
Here are my server specs:
RAID1 - 2x480GB Samsung SSD with power loss protecti
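For anyone wanting to try the byte-based knobs Tigran mentions, they would look like this in /etc/sysctl.conf (values from his message; the right numbers depend on your write cache and workload):

```shell
# flush dirty pages in absolute bytes rather than as a % of RAM
vm.dirty_background_bytes = 67108864    # 64MB: start background writeback early
vm.dirty_bytes = 536870912              # 512MB: hard limit before writers block
```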
On 23/07/15 23:37, domenico febbo wrote:
is the problem also in PostgreSQL 9.4.x?
I'm going to buy a production server with 4 sockets E7-4850 12 cores
so 12*4 = 48 cores (and 96 threads using HT).
What do you suggest?
Using or not HT?
From my experience 9.4 is considerably better (we are us
On 21/07/15 20:04, David Rowley wrote:
On 21 July 2015 at 14:59, Jeison Bedoya Delgado
<jeis...@audifarma.com.co> wrote:
hi everyone,
Recently I updated a database to a machine with RHEL7, but I see that the
performance is better if the hyperthreading technology is
deactivated
On 24/06/15 09:05, Jim Nasby wrote:
> On 6/19/15 9:57 AM, Ian Pushee wrote:
>>
>>
>> On 6/19/2015 10:47 AM, Andreas Kretschmer wrote:
Explain Analyze outputs (links as requested):
Default plan: http://explain.depesz.com/s/ib3k
Forced index (random_page_cost=1.0): http://explain.depes
On 04/06/15 12:58, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake wrote:
On 06/03/2015 03:16 PM, Tomas Vondra wrote:
What is more important, though, is the amount of memory. OP reported the
query writes ~95GB of tem
On 13/02/15 00:20, Gudmundsson Martin (mg) wrote:
Hi all!
- checkpoint_segments 1000
- checkpoint_completion_target 0.9
- wal_buffers 256MB
- shared_buffers 31 gb
- max_connections 500
I see that some of you are using wal_buffers = 256MB.
I was under the impression that Postgres will not ben
On 10/02/15 10:29, Gavin Flower wrote:
On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:
Hi,
A survey: with pgbench using TPC-B, what is the maximum TPS you've
ever seen?
For me: 12000 TPS.
--
Luis Antonio Dias de Sá Junior
Important to specify:
1. O/S
2. version of PostgreSQL
3. Pos
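For the numbers to be comparable it also matters how pgbench was invoked; a typical TPC-B-style run looks like this (scale factor and client counts are examples only):

```shell
$ pgbench -i -s 100 bench           # initialize at scale factor 100
$ pgbench -c 32 -j 8 -T 300 bench   # 32 clients, 8 threads, 5 minutes, default TPC-B-like script
```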
On 16/01/15 16:28, Josh Berkus wrote:
On 01/16/2015 04:17 PM, Mark Kirkwood wrote:
On 16/01/15 16:06, Mark Kirkwood wrote:
A bit more poking about shows that the major factor (with this fake
dataset anyway) is the default for effective_cache_size (changes from
128MB to 4GB in 9.4
On 16/01/15 16:06, Mark Kirkwood wrote:
A bit more poking about shows that the major factor (with this fake
dataset anyway) is the default for effective_cache_size (changes from
128MB to 4GB in 9.4). Increasing this makes 9.2 start using the
files_in_flight index in a plain index scan too
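On the older release the same behaviour can be checked by raising the setting by hand and re-planning (the query below is a placeholder for whatever statement is being investigated):

```sql
SHOW effective_cache_size;         -- 128MB default before 9.4
SET effective_cache_size = '4GB';  -- the new 9.4 default
EXPLAIN SELECT * FROM files WHERE state = 'done';  -- compare the chosen plan
```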
On 16/01/15 15:32, Mark Kirkwood wrote:
On 16/01/15 13:37, Mark Kirkwood wrote:
On 16/01/15 11:30, Josh Berkus wrote:
This is an obfuscation and mock up, but:
table files (
id serial pk,
filename text not null,
state varchar(20) not null
... 18 more columns
)
index file_state
On 16/01/15 13:37, Mark Kirkwood wrote:
On 16/01/15 11:30, Josh Berkus wrote:
This is an obfuscation and mock up, but:
table files (
id serial pk,
filename text not null,
state varchar(20) not null
... 18 more columns
)
index file_state on (state)
(35GB in size)
index
On 16/01/15 11:30, Josh Berkus wrote:
This is an obfuscation and mock up, but:
table files (
id serial pk,
filename text not null,
state varchar(20) not null
... 18 more columns
)
index file_state on (state)
(35GB in size)
index file_in_flight_state (stat
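Reconstructed as DDL, the mock-up above is roughly the following (the column list is abbreviated exactly as in the obfuscation; the second index definition is cut off in the snippet, so only the first is shown):

```sql
CREATE TABLE files (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    state    varchar(20) NOT NULL
    -- ... 18 more columns
);
CREATE INDEX file_state ON files (state);   -- 35GB in size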
On 12/12/14 11:36, Eric Pierce wrote:
From: pgsql-performance-ow...@postgresql.org
on behalf of Evgeniy Shishkin
Sent: Thursday, December 11, 2014 7:11 AM
To: Andrea Suisani
Cc: mfatticci...@mbigroup.it; pgsql-performance@postgresql.org
Subject: Re: [P
On 10/12/14 21:30, Strahinja Kustudić wrote:
On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:
That is interesting: I've done some testing on this type of card with 16
(slightly faster Hitachi) SSD attached. Setting WT and NORA should enable
th
On 10/12/14 12:28, Strahinja Kustudić wrote:
* These tests are with the H710p controller set to write-back (WB) and
with adaptive read ahead (ADRA). I ran a few tests with
write-through (WT) and no read ahead (NORA), but the results were worse.
That is interesting: I've done some tes
On 15/11/14 15:08, Jim Nasby wrote:
On 11/14/14, 5:00 PM, Mark Kirkwood wrote:
as the 'rule of thumb' for setting shared_buffers. However I was
recently benchmarking a machine with a lot of ram (1TB) and entirely
SSD storage [1], and that seemed quite happy with 50GB of shared
buffe
On 15/11/14 06:06, Shaun Thomas wrote:
Alexey,
The issue is not that 8GB is the maximum. You *can* set it higher. What I'm
saying, and I'm not alone in this, is that setting it higher can actually
decrease performance for various reasons. Setting it to 25% of memory on a
system with 512GB of
On 25/09/14 01:03, Mkrtchyan, Tigran wrote:
With pg_test_timing I can see, that overhead is 48 nsec on my server and 32
nsec on the laptop.
what makes this difference, and does it have any influence on the overall performance?
Hmm - 22 nsec for my workstation, so while it could be a factor, your
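pg_test_timing (shipped with the server binaries) is the easy way to compare machines like this:

```shell
$ pg_test_timing        # reports per-loop timing overhead and a histogram
$ pg_test_timing -d 10  # run for 10 seconds instead of the default 3
```

Overheads in the tens of nanoseconds mostly matter for EXPLAIN ANALYZE accuracy on very hot plans, rather than for raw tps.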
On 24/09/14 21:23, Mkrtchyan, Tigran wrote:
Hi Merlin et al.
after building postgres 9.4 myself from sources I get the same performance as
with 9.3. The difference was in the value of debug_assertions setting.
Now the next step. Why does my 3-year-old laptop get 1.8x more tps than my
one mo
On 19/09/14 19:24, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Tigran Mkrtchyan"
Cc: "Merlin Moncure" , "postgres performance list"
Sent: Friday, September 19, 2014 8:26:27 AM
Subject: Re: [PERFORM] postgres 9
On 19/09/14 17:53, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3
one for 9.4), see below for results.
I'm running xfs on them with trim/discard enabled:
$ mount|
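For reference, an xfs mount with online discard looks like this (device and mount point are examples):

```shell
# /etc/fstab
/dev/sdb1  /mnt/pg94  xfs  noatime,discard  0  0
```

Many setups prefer a periodic `fstrim` over the `discard` mount option, since inline TRIM can add latency on some drives.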
On 19/09/14 10:16, Mark Kirkwood wrote:
On 19/09/14 09:10, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Merlin Moncure" , "Tigran Mkrtchyan"
Cc: "postgres performance list"
Sent: Thursday, September 18, 2014 10:
On 19/09/14 09:10, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Merlin Moncure" , "Tigran Mkrtchyan"
Cc: "postgres performance list"
Sent: Thursday, September 18, 2014 10:56:36 PM
Subject: Re: [PERFORM] postgre
On 19/09/14 08:32, Merlin Moncure wrote:
On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran
wrote:
9.3.5:
0.035940END;
9.4beta2:
0.957854END;
time being spent on 'END' is definitely suggesting i/o related issues.
This is making me very skeptical that post
On 18/09/14 21:58, Mkrtchyan, Tigran wrote:
Hi Folk,
I am trying to investigate some performance issues which we have with postgres
(a different topic by itself) and tried postgres 9.4beta2, with the hope that it
performs better.
It turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.
On 26/08/14 10:13, Josh Berkus wrote:
On 08/22/2014 07:02 AM, Andres Freund wrote:
On 2014-08-21 14:02:26 -0700, Josh Berkus wrote:
On 08/20/2014 07:40 PM, Bruce Momjian wrote:
Not sure how you can make such a blanket statement when so many people
have tested and shown the benefits of hyper-th
On 26/08/14 06:47, Jeison Bedoya Delgado wrote:
hi, recently I changed the hardware of my database from 32 cores up to 64
cores and 128GB RAM, but the performance is the same. Perhaps I have to
change some parameter in postgresql.conf?
In addition to the points that others have made, even if yo
On 22/08/14 11:29, Josh Berkus wrote:
On 08/21/2014 04:08 PM, Steve Crawford wrote:
On 08/21/2014 03:51 PM, Josh Berkus wrote:
On 08/21/2014 02:26 PM, Scott Marlowe wrote:
I'm running almost the exact same setup in production as a spare. It
has 4 of those CPUs, 256G RAM, and is currently set t
On 21/08/14 11:14, Mark Kirkwood wrote:
You didn't mention what
cpu this is for (or how many sockets etc), would be useful to know.
Just to clarify - while you mentioned that the production system was 40
cores, it wasn't immediately obvious that the same system was the sour
On 21/08/14 07:13, Josh Berkus wrote:
Mark, all:
So, this is pretty damning:
Read-only test with HT ON:
[pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 30
query mode: simple
number of clients: 20
number of threads: 4
durati
On 15/08/14 06:18, Josh Berkus wrote:
Mark,
Is the 60-core machine using some of the Intel chips which have 20
hyperthreaded virtual cores?
If so, I've been seeing some performance issues on these processors.
I'm currently doing a side-by-side hyperthreading on/off test.
Hi Josh,
The board
On 01/08/14 09:38, Alvaro Herrera wrote:
Matt Clarkson wrote:
The LWLOCK_STATS below suggest that ProcArrayLock might be the main
source of locking that's causing throughput to take a dive as the client
count increases beyond the core count.
Any thoughts or comments on these results are welc
On 31/07/14 00:47, Tomas Vondra wrote:
On 30 Červenec 2014, 14:39, Tom Lane wrote:
"Tomas Vondra" writes:
On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:
While these numbers look great in the middle range (12-96 clients),
then
benefit looks to be tailing off as client numbers incr
respect to comments like "it shouldn't make a difference" etc etc,
well the profile suggests otherwise, and the change in tps numbers
support the observation.
regards
Mark
On 30/07/14 20:42, Tomas Vondra wrote:
On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:
While these number
On 17/07/14 11:58, Mark Kirkwood wrote:
Trying out with numa_balancing=0 seemed to get essentially the same
performance. Similarly wrapping postgres startup with --interleave.
All this made me want to try with numa *really* disabled. So rebooted
the box with "numa=off" appended to
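The two settings tried above, for anyone repeating the experiment (PGDATA path is a placeholder):

```shell
$ sudo sysctl kernel.numa_balancing=0                 # disable automatic NUMA balancing
$ numactl --interleave=all pg_ctl -D "$PGDATA" start  # interleave postgres memory across nodes
```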
On 12/07/14 01:19, Kevin Grittner wrote:
It might be worth a test using a cpuset to interleave OS cache and
the NUMA patch I submitted to the current CF to see whether this is
getting into territory where the patch makes a bigger difference.
I would expect it to do much better than using numactl
On 11/07/14 20:22, Andres Freund wrote:
On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:
Full report http://paste.ubuntu.com/886/
#  8.82%  postgres  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
          |
          --- _raw_spin_lock_irqsave
On 11/07/14 20:22, Andres Freund wrote:
On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:
Postgres 9.4 beta
rwlock patch
pgbench scale = 2000
On that scale - that's bigger than shared_buffers IIRC - I'd not expect
the patch to make much of a difference.
Right - we did te
On 01/07/14 22:13, Andres Freund wrote:
On 2014-07-01 21:48:35 +1200, Mark Kirkwood wrote:
- cherry picking the last 5 commits into 9.4 branch and building a package
from that and retesting:
Clients | 9.4 tps 60 cores (rwlock)
+--
6 | 70189
12
On 01/07/14 21:48, Mark Kirkwood wrote:
[1] from git://git.postgresql.org/git/users/andresfreund/postgres.git,
commits:
4b82477dcaf81ad7b0c102f4b66e479a5eb9504a
10d72b97f108b6002210ea97a414076a62302d4e
67ffebe50111743975d54782a3a94b15ac4e755f
fe686ed18fe132021ee5e557c67cc4d7c50a1ada
On 27/06/14 21:19, Andres Freund wrote:
On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:
My feeling is spinlock or similar, 'perf top' shows
kernel find_busiest_group
kernel _raw_spin_lock
as the top time users.
Those don't tell that much by themselves, could you do a hiera
On 27/06/14 14:01, Scott Marlowe wrote:
On Thu, Jun 26, 2014 at 5:49 PM, Mark Kirkwood
wrote:
I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].
The context is the current machine in use by the customer is a 32 core one,
and due to growth we are looking at something
I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].
The context is the current machine in use by the customer is a 32 core
one, and due to growth we are looking at something larger (hence 60 cores).
Some initial tests show similar pgbench read only performance to what
Rob
On 21/01/14 21:45, Mark Kirkwood wrote:
On 21/01/14 21:37, Katharina Koobs wrote:
Dear Heikki,
thank you for your valuable feedback. Regarding your questions: it gets
gradually slower every day. The database size is increasing only
slightly over time.
I will try your hint regarding CLUSTERING. The
On 21/01/14 21:37, Katharina Koobs wrote:
Dear Heikki,
thank you for your valuable feedback. Regarding your questions: it gets
gradually slower every day. The database size is increasing only
slightly over time.
I will try your hint regarding CLUSTERING. The difference in effect of
VACUUM FULL in ver
On 28/12/13 18:19, ankush upadhyay wrote:
Hello All,
I am using a multi-tenant system and doing performance testing of a
multi-tenant application. With a single tenant it works fine, but once
I enable more tenants the database server sometimes stops responding. Any clue?
It is a bit tricky to
On 23/12/13 21:58, Johann Spies wrote:
On 19 December 2013 16:48, Tom Lane <t...@sss.pgh.pa.us> wrote:
Johann Spies <johann.sp...@gmail.com> writes:
> I would appreciate some help optimising the following query:
It's a mistake to imagine that indexes are going to h
On 20/12/13 06:53, Shaun Thomas wrote:
On 12/17/2013 08:48 PM, Kai Sellgren wrote:
This is your select:
SELECT *
FROM "Log"
LEFT JOIN "NewsArticle" ON "NewsArticle".id = "Log"."targetId" AND
"Log"."targetType" = 'NewsArticle'
ORDER BY "Log"."createdAt" DESC
LIMIT 10
This is your index:
CRE
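One index that commonly serves this ORDER BY … LIMIT pattern (an illustration only; the index the original message went on to show is cut off above):

```sql
CREATE INDEX log_created_at_idx ON "Log" ("createdAt" DESC);
```

With such an index the planner can walk the newest rows in order and stop after 10, instead of sorting the whole join result.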
On 06/12/13 05:13, Skarsol wrote:
On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe wrote:
On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
psql (PostgreSQL) 9.2.5
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013
x86_64
x86
On 26/11/13 09:28, Andrew Dunstan wrote:
On 11/25/2013 03:19 PM, Heikki Linnakangas wrote:
On 25.11.2013 22:01, Lee Nguyen wrote:
Hi,
Having attended a few PGCons, I've always heard the remark from a few
presenters and attendees that Postgres shouldn't be run inside a VM.
That
bare metal is t
On 06/08/13 22:46, Ivan Voras wrote:
Here are two more unexpected results. Same test table (1 mil. records,
"id" is SERIAL PRIMARY KEY, PostgreSQL 9.1, VACUUM ANALYZE performed
before the experiments):
ivoras=# explain analyze select * from lt where id > 90 limit 10;
On 23/05/13 14:26, Mark Kirkwood wrote:
On 23/05/13 14:22, Greg Smith wrote:
On 5/22/13 10:04 PM, Mark Kirkwood wrote:
Make that quite a few capacitors (top right corner):
http://regmedia.co.uk/2013/05/07/m500_4.jpg
There are some more shots and descriptions of the internals in the
On 23/05/13 14:22, Greg Smith wrote:
On 5/22/13 10:04 PM, Mark Kirkwood wrote:
Make that quite a few capacitors (top right corner):
http://regmedia.co.uk/2013/05/07/m500_4.jpg
There are some more shots and descriptions of the internals in the
excellent review at
http://techreport.com/review
On 23/05/13 13:32, Mark Kirkwood wrote:
On 23/05/13 13:01, Joshua D. Drake wrote:
On 05/22/2013 04:37 PM, Merlin Moncure wrote:
On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake
wrote:
I am curious how the 710 or S3700 stacks up against the new M500 from
Crucial? I know Intel is kind of
On 23/05/13 13:01, Joshua D. Drake wrote:
On 05/22/2013 04:37 PM, Merlin Moncure wrote:
On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake
wrote:
I am curious how the 710 or S3700 stacks up against the new M500 from
Crucial? I know Intel is kind of the goto for these things but the
m500 is
p
On 17/05/13 12:06, Tomas Vondra wrote:
Hi,
On 16.5.2013 16:46, Cuong Hoang wrote:
Pro for the master server. I'm aware of the write cache issue on SSDs in
case of power loss. However, our hosting provider doesn't offer any
other choice of SSD drives with a supercapacitor. To minimise risk, we
will
On 14/05/13 10:10, Marti Raudsepp wrote:
On Mon, May 13, 2013 at 6:01 PM, ach wrote:
what I'm wondering is, since
the unique constraint already covers the whole table and all rows in
entirety, is it really necessary for statistics to be set that high on
those?
AFAIK if there are exact-matchin
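The per-column target can be lowered explicitly if the big sample really is wasted effort; a hypothetical example (table and column names invented):

```sql
-- collect a smaller sample for a column that is unique anyway
ALTER TABLE my_table ALTER COLUMN my_unique_col SET STATISTICS 100;
ANALYZE my_table;
```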
On 11/05/13 01:30, Tom Lane wrote:
Mark Kirkwood writes:
Unfortunately a trigger will not really do the job - analyze ignores in
progress rows (unless they were added by the current transaction), and
then the changes made by analyze are not seen by any other sessions. So
no changes to plans
added.
On 7 May 2013 08:33, "Mark Kirkwood" <mark.kirkw...@catalyst.net.nz> wrote:
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, <mark.kirkw...@catalyst.net.nz> wrote:
I'm thinking that a variant of (2) might be simpl
On 07/05/13 19:33, Simon Riggs wrote:
On 7 May 2013 07:32, Mark Kirkwood wrote:
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, wrote:
I'm thinking that a variant of (2) might be simpler to implement:
(I think Matt C essentially beat me to this suggestion - he origi
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, wrote:
I'm thinking that a variant of (2) might be simpler to implement:
(I think Matt C essentially beat me to this suggestion - he originally
discovered this issue). It is probably good enough for only *new* plans to
react to the i
> Simon Riggs wrote:
>
> Patch works and improves things, but we're still swamped by the block
> accesses via the index.
Which *might* be enough to stop it making the server go unresponsive,
we'll look at the effect of this in the next few days, nice work!
>
> Which brings me back to Mark's origi
On 05/05/13 00:49, Simon Riggs wrote:
On 3 May 2013 13:41, Simon Riggs wrote:
(3) to make the check on TransactionIdIsInProgress() into a heuristic,
since we don't *need* to check that, so if we keep checking the same
xid repeatedly we can reduce the number of checks or avoid xids that
seem to
> mark.kirkw...@catalyst.net.nz wrote on 03.05.2013 00:19:
>> I think the idea of telling postgres that we are doing a load is
>> probably
>> the wrong way to go about this. We have a framework that tries to
>> automatically figure out the best plans...I think some more thought
>> about
>> how to m
> On 2 May 2013 01:49, Mark Kirkwood wrote:
>
> I think we need a problem statement before we attempt a solution,
> which is what Tom is alluding to.
>
Actually no - I think Tom (quite correctly) was saying that the patch was
not a viable solution. With which I agree.
I belie
On 02/05/13 02:06, Tom Lane wrote:
Mark Kirkwood writes:
I am concerned that the deafening lack of any replies to my original
message is a result of folk glancing at your original quick reply and
thinking... incomplete problem spec...ignore... when that is not that
case - yes I should have
On 26/04/13 15:34, Gavin Flower wrote:
On 26/04/13 15:19, Mark Kirkwood wrote:
While in general you are quite correct - in the above case
(particularly as I've supplied a test case) it should be pretty
obvious that any moderately modern version of postgres on any
supported platform will ex
On 26/04/13 14:56, Gavin Flower wrote:
On 26/04/13 14:33, Mark Kirkwood wrote:
Recently we encountered the following unhappy sequence of events:
1/ system running happily
2/ batch load into table begins
3/ very quickly (some) preexisting queries on said table go orders of
magnitude slower
4
Recently we encountered the following unhappy sequence of events:
1/ system running happily
2/ batch load into table begins
3/ very quickly (some) preexisting queries on said table go orders of
magnitude slower
4/ database instance becomes unresponsive
5/ application outage
After looking down
On 02/04/13 21:47, Mark Kirkwood wrote:
On 02/04/13 21:34, Dave Page wrote:
On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood
wrote:
On 02/04/13 13:55, Bruce Momjian wrote:
On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
Due to the security nature of the release, the
On 02/04/13 21:34, Dave Page wrote:
On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood
wrote:
On 02/04/13 13:55, Bruce Momjian wrote:
On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
Due to the security nature of the release, the source and binaries will
only be publicly
Also it is worth checking what your sysctl vm.zone_reclaim_mode is set
to - if 1 then override it to 0. As Jeff mentioned, this gotcha for
machines with larger cpu counts has been discussed at length on this list - but
still traps us now and again!
Cheers
Mark
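Checking and overriding vm.zone_reclaim_mode looks like this (add the setting to /etc/sysctl.conf to survive reboots):

```shell
$ sysctl vm.zone_reclaim_mode           # 1 = reclaim local memory first: bad for Postgres
$ sudo sysctl vm.zone_reclaim_mode=0
```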
On 02/04/13 19:33, Armand du Plessis wr
On 02/04/13 19:08, Jeff Janes wrote:
On Monday, April 1, 2013, Mark Kirkwood wrote:
Your provisioned volumes are much better than the default AWS ones,
but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth
of Postgres 8k buffers). So you may need to look at adding more
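The 8 MB/s figure follows directly from the page size; a quick sanity check:

```python
# 1000 provisioned IOPS of random 8 kB Postgres buffers is only ~8 MB/s of bandwidth
iops = 1000
block_size = 8 * 1024                          # Postgres default page size in bytes
throughput_mb = iops * block_size / (1024 * 1024)
print(throughput_mb)                           # -> 7.8125, i.e. roughly 8 MB/s
```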
On 02/04/13 13:55, Bruce Momjian wrote:
On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:
Due to the security nature of the release, the source and binaries will
only be publicly available on April 4 --- there are no pre-release
versions available.
The PostgreSQL homepage h
Yeah, as I understand it you should have 6000 IOPS available for the md
device (ideally).
The iostats you display certainly look benign... but the key time to be
sampling would be when you see the lock list explode - could look very
different then.
Re vm.dirty* - I would crank the values dow
In addition to tuning the various Postgres config knobs you may need to
look at how your AWS server is set up. If your load is causing an IO
stall then *symptoms* of this will be lots of locks...
You have quite a lot of memory (60G), so look at tuning the
vm.dirty_background_ratio, vm.dirty_ra
On 21/03/13 13:44, David Rees wrote:
On Thu, Mar 14, 2013 at 4:37 PM, David Boreham wrote:
You might want to evaluate the performance you can achieve with a single-SSD
(use several for capacity by all means) before considering a RAID card + SSD
solution.
Again I bet it depends on the applicatio
On 16/03/13 07:06, Rick Otten wrote:
I'm not convinced about the need for BBU with SSD - you *can* use them
without one, just need to make sure about suitable longevity and also
the presence of (proven) power off protection (as discussed
previously). It is worth noting that using unproven or SSD kn
On 15/03/13 11:34, Bruce Momjian wrote:
I don't think any drive that corrupts on power-off is suitable for a
database, but for non-db uses, sure, I guess they are OK, though you
have to be pretty money-constrained to like that tradeoff.
Agreed - really *all* SSD should have capacitor (or equ
On 15/03/13 10:37, Mark Kirkwood wrote:
Also, in terms of performance, the faster PCIe SSD do about as well by
themselves as connected to a RAID card with BBU.
Sorry - I meant to say "the faster **SAS** SSD do...", since you can't
currently plug PCIe SSD into RAID cards (co
On 15/03/13 07:54, Bruce Momjian wrote:
Only use SSDs with a BBU cache, and don't set SSD caches to
write-through because an SSD needs to cache the write to avoid wearing
out the chips early, see:
http://momjian.us/main/blogs/pgblog/2012.html#August_3_2012
I'm not convinced about the ne
On 14/03/13 09:16, David Boreham wrote:
On 3/13/2013 1:23 PM, Steve Crawford wrote:
What concerns me more than wear is this:
InfoWorld Article:
http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715
Referenced research paper:
https://ww