making it MATERIALIZED, or creating an
equivalent trigger-based summary table (there are examples in the docs
of how to do this).
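For anyone finding this in the archives, here is a minimal sketch of the materialized view variant (the table and view names are invented for illustration):

```sql
-- Hypothetical per-customer order summary. A materialized view
-- stores the query result, so reads don't recompute the aggregate.
CREATE MATERIALIZED VIEW customer_totals AS
SELECT customer_id, count(*) AS orders, sum(amount) AS total
FROM orders
GROUP BY customer_id;

-- A unique index lets REFRESH ... CONCURRENTLY (9.4+) rebuild the
-- contents without locking out readers.
CREATE UNIQUE INDEX ON customer_totals (customer_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY customer_totals;
```

The trade-off is that the view's contents go stale between refreshes, whereas a trigger-based summary table stays current on every write.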
Cheers
Mark
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 19/08/17 13:49, Mark Kirkwood wrote:
On 19/08/17 02:21, Jeremy Finzel wrote:
On Tue, Aug 15, 2017 at 12:07 PM, Scott Marlowe
<scott.marl...@gmail.com> wrote:
So do iostat or iotop show you if / where your disks are working
hardest? Or is this CPU overhead that
my old
Crucial 550 can do 350 MB/s sustained writes (so two of them in RAID0
are doing 700 MB/s for hours).
Bigger capacity drives can do better - but overall I'm not that
impressed with the current trend of using TLC NAND.
regards
Mark
On 21/07/17 00:50, Charles Nadeau wrote:
Mark,
I r
(I've seen SAN
disks with iostat utilizations of 105% <-- Lol... and await numbers that
scroll off the page in that scenario)!
regards
Mark
Right, that is a bit of a show stopper for those SSDs (the Intel needs
SATA 6Gb/s and the Samsungs need PCIe 3.0 to perform to their rated specs).
regards
Mark
On 16/07/17 04:12, Charles Nadeau wrote:
Mark,
The server is a . It doesn't really work with SATA drives. And when
yo
onger
lifetime. However for your workload the Sammy is probably fine.
regards
Mark
On 15/07/17 11:09, Mark Kirkwood wrote:
Ah yes - that seems more sensible (but still slower than I would
expect for 5 disks RAID 0).
RAID stripe size - for DW work it makes
sense for it to be reasonably big (256K to 1M), which again will help
speed up sequential scans.
Cheers
Mark
On 15/07/17 02:09, Charles Nadeau wrote:
Mark,
First I must say that I changed my disk configuration from 4 disks in
RAID 10 to 5 disks in RAID 0
So might be useful for us to see something like that from your system -
note you need to check you really have flushed the cache, and that no
other apps are using the db.
regards
Mark
On 12/07/17 00:46, Charles Nadeau wrote:
After reducing random_page_cost to 4 and testing m
tiple CPU cores to reduce the time required
> to generate a single plan?
>
> Thank you in advance and best regards, Clemens
>
>
Hi,
Might be worthwhile posting an example (query + EXPLAIN ANALYZE etc), so
we can see what type of queries are resulting in long plan times.
C
some additional indexes (logging the queries will
help you decide what if anything needs to be done).
regards
Mark
ANALYZE output). Also - err 8.4 - I (and others probably) will
recommend you upgrade to a more recent (and supported for that matter)
version - currently 9.5/9.6 - lots of performance improvements you are
missing out on!
Best wishes
Mark
On 01/09/16 17:56, Mark Kirkwood wrote:
the other way to attack this is to trace your backend postgres
process (err perfmon...no idea how to do this on windows...)
No idea why I thought you were on windows (maybe was reading another
message just before yours) - sorry!
count read and write
calls.
regards
Mark
I'm not sure they will actually help your
particular query, but they are probably worth trying out!
regards
Mark
the MX200 board (see):
http://www.anandtech.com/show/9258/crucial-mx200-250gb-500gb-1tb-ssd-review
looks to have the same sort of capacitors that the M550 uses, so not
ideal for db or transaction logs!
Cheers
Mark
Crucial M550s, which have capacitors
and (originally) claimed to be power-off safe, but testing has shown
them to be not really power-off safe at all. I'd be dubious about
Samsungs too.
The Intel Datacenter range (S3700 and similar) is known to have power-off
safety that does work.
regards
On 08/01/16 19:07, Nicolas Paris wrote:
Hello Mark,
As far as I know, MongoDB is able to get better write performance
thanks to scaling (sharding is easy to manage). PostgreSQL cannot (it is
not designed for that - it's complicated).
Why compare PostgreSQL & MongoDB performance on a standalone
inst
Hi all, I just wrote an article about the postgres performance
optimizations I've been working on recently, especially compared to our
old MongoDB platform:
https://mark.zealey.org/2016/01/08/how-we-tweaked-postgres-upsert-performance-to-be-2-3-faster-than-mongodb
initial data dump we don't change the data again so we
could just do this at the end of the import process.
Mark
even if they have not been updated in a year. I
can't understand why it has to be like this; it seems that it would
be much nicer to not automatically truncate if it doesn't have to. This
would be great in the situation where you can tolerate a low chance of
data-loss but want very quick
If the last data modification statement was run more
than eg 30 seconds or 1 minute before an unclean shutdown (or the data
was otherwise flushed to disk and there was no IO since then) can we not
assume that the data is not corrupted and hence not truncate the
unlogged tables?
Thanks
Mark
number of clients. With 48 cores you should
(hopefully) see a tps curve that increases and then gently flattens off
somewhere. If 96 cores are "too many" then you will see a tps curve that
initially increases then sharply drops.
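A sketch of such a sweep (this assumes a database called "bench" already initialized with pgbench -i; the client counts and duration are arbitrary):

```shell
#!/bin/sh
# Run pgbench at increasing client counts and record the tps line.
# A healthy box flattens off; a sharp drop suggests too many clients.
for c in 12 24 48 96 192; do
    echo "== clients: $c =="
    pgbench -c "$c" -j 4 -T 300 bench | grep "tps ="
done
```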
Cheers
Mark
ly the number of cores
that is the problem - particularly as benchmark results for single
socket cpus clearly show hyperthreading helps performance...
Regards
Mark
heard others recommend setting it as low as 1.0 on an all SSD setup.
>
> It's also worth noting that there's some consensus that the optimizer is
> generally too eager to switch from an index scan to a seqscan.
Mind you, this eagerness could be caused by the OP having
effective_cache_size set to the default. This should be changed (set to
a few GB...)!
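For reference, a sketch of checking and raising it (the 8GB value is purely illustrative - it should reflect the RAM realistically available for caching):

```sql
SHOW effective_cache_size;            -- the old default was only 128MB

SET effective_cache_size = '8GB';     -- per-session, to test a plan change

-- Or persistently on 9.4+, then reload:
ALTER SYSTEM SET effective_cache_size = '8GB';
SELECT pg_reload_conf();
```

Note this is a planner hint only - it allocates no memory, so setting it generously is cheap.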
Cheers
Mark
probably need to:
- get more disk, or
- tweak postgres params to get a less disk-hungry plan (need to see that
explain analyze)!
Cheers
Mark
better performance than 16MB (probably related to the fact
that the default WAL file size is 16MB). We just experimented further
with bigger values, and saw some improvement.
Cheers
Mark
ting how tuning configurations are helping (or not) for a particular
hardware and software setup, but is less useful for answering the
question "how many TPS can postgres do"...
Regards
Mark
On 16/01/15 16:28, Josh Berkus wrote:
On 01/16/2015 04:17 PM, Mark Kirkwood wrote:
On 16/01/15 16:06, Mark Kirkwood wrote:
A bit more poking about shows that the major factor (with this fake
dataset anyway) is the default for effective_cache_size (changes from
128MB to 4GB in 9.4
On 16/01/15 16:06, Mark Kirkwood wrote:
A bit more poking about shows that the major factor (with this fake
dataset anyway) is the default for effective_cache_size (changes from
128MB to 4GB in 9.4). Increasing this makes 9.2 start using the
files_in_flight index in a plain index scan too
On 16/01/15 15:32, Mark Kirkwood wrote:
On 16/01/15 13:37, Mark Kirkwood wrote:
On 16/01/15 11:30, Josh Berkus wrote:
This is an obfuscation and mock up, but:
table files (
id serial pk,
filename text not null,
state varchar(20) not null
... 18 more columns
)
index file_state
On 16/01/15 13:37, Mark Kirkwood wrote:
On 16/01/15 11:30, Josh Berkus wrote:
This is an obfuscation and mock up, but:
table files (
id serial pk,
filename text not null,
state varchar(20) not null
... 18 more columns
)
index file_state on (state)
(35GB in size)
index
g it gets
you a plan on the files_in_flight index. I'm seeing this scenario with a
fake/generated dataset a bit like yours in 9.2 (9.5 uses the
files_in_flight w/o any coercing).
regards
Mark
g factor is (usually)
not raw sequential speed but fsync latency. These days a modern SSD has
fsync response pretty much equal to that of a card with BBU + spinners -
and has "more" high speed storage available (cards usually have only
1GB or so of RAM on them).
regards
Mark
On 10/12/14 21:30, Strahinja Kustudić wrote:
On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <
mark.kirkw...@catalyst.net.nz> wrote:
That is interesting: I've done some testing on this type of card with 16
(slightly faster Hitachi) SSD attached. Setting WT and NORA should enable
th
lagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf).
Cheers
Mark
On 15/11/14 15:08, Jim Nasby wrote:
On 11/14/14, 5:00 PM, Mark Kirkwood wrote:
as the 'rule of thumb' for setting shared_buffers. However I was
recently benchmarking a machine with a lot of ram (1TB) and entirely
SSD storage [1], and that seemed quite happy with 50GB of shared
buffe
bigger than about 100GB - but this seems like a good thing to
come out with some numbers for, i.e. pgbench read-write and read-only tps
vs shared_buffers from 1GB to 100GB in size.
Cheers
Mark
[1] I may be in a position to benchmark the machines these replaced at
some not too distant time. These
On 25/09/14 01:03, Mkrtchyan, Tigran wrote:
With pg_test_timing I can see, that overhead is 48 nsec on my server and 32
nsec on the laptop.
what makes this difference, and does it have any influence on the overall performance?
Hmm - 22 nsec for my workstation, so while it could be a factor, your
month old server?
And Mark Kirkwood's desktop gets x2 times more tps as well? Is there some
special optimization
for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?
Yes - firstly, nicely done re finding the assertions (my 9.4 beta2 was
built from src - never thought to mention
On 19/09/14 19:24, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Tigran Mkrtchyan"
Cc: "Merlin Moncure" , "postgres performance list"
Sent: Friday, September 19, 2014 8:26:27 AM
Subject: Re: [PERFORM] postgres 9
On 19/09/14 17:53, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3
one for 9.4), see below for results.
I'm running xfs on them with trim/discard enabled:
$ mount|
On 19/09/14 10:16, Mark Kirkwood wrote:
On 19/09/14 09:10, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Merlin Moncure" , "Tigran Mkrtchyan"
Cc: "postgres performance list"
Sent: Thursday, September 18, 2014 10:
On 19/09/14 09:10, Mkrtchyan, Tigran wrote:
- Original Message -
From: "Mark Kirkwood"
To: "Merlin Moncure" , "Tigran Mkrtchyan"
Cc: "postgres performance list"
Sent: Thursday, September 18, 2014 10:56:36 PM
Subject: Re: [PERFORM] postgre
note that my Crucial/Micron M550's are very fast for
most writes *but* are much slower for sync writes (and fsync) that
happen at commit...
Cheers
Mark
elapsed time to get reasonably repeatable
numbers (to ensure you get about 1 checkpoint in your run). In addition
I usually do
psql <before each run so that I've got some confidence that we are starting
from approximately the same state each time (and getting hopefully only
*one* checkpoint
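The flush-before-run procedure might look roughly like this (a sketch - the exact cache-flushing steps depend on the setup):

```shell
#!/bin/sh
# Start each pgbench run from approximately the same state:
# checkpoint to flush dirty buffers, then sync the OS, then run.
psql -d bench -c "CHECKPOINT;"
sync
pgbench -c 32 -j 4 -T 300 bench
```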
| 1913
32 | 3574 | 3560
64 | 5873 | 5412
128 | 8351 | 7450
256 | 9426 | 7840
512 | 9357 | 7288
Regards
Mark
simply throw up new factors to inhibit performance. My *guess* (and
it is a guess) is that we are seeing 2 (perhaps more) performance
bottlenecks very close to each other: numa and spinlock contention at least.
Regards
Mark
results at the time. I've
just got some new ssd's to play with so might run some pgbench tests on
my home machine (Haswell i7) with HT on and off.
Regards
Mark
Are there issues persisting in
3.2+? The 12.04 LTS release of Ubuntu Server was 3.2 but the 14.04 is 3.13.
The issues I know of were fixed in 3.9.
There is a 3.11 kernel series for Ubuntu 12.04 Precise.
Regards
Mark
On 21/08/14 11:14, Mark Kirkwood wrote:
You didn't mention what
cpu this is for (or how many sockets etc), would be useful to know.
Just to clarify - while you mentioned that the production system was 40
cores, it wasn't immediately obvious that the same system was the sour
On 21/08/14 07:13, Josh Berkus wrote:
Mark, all:
So, this is pretty damming:
Read-only test with HT ON:
[pgtest@db ~]$ pgbench -c 20 -j 4 -T 600 -S bench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 30
query mode: simple
number of clients: 20
number of threads: 4
On 15/08/14 06:18, Josh Berkus wrote:
Mark,
Is the 60-core machine using some of the Intel chips which have 20
hyperthreaded virtual cores?
If so, I've been seeing some performance issues on these processors.
I'm currently doing a side-by-side hyperthreading on/off test.
Hi Josh,
650 | 0.014609
So we're seeing delay coming fairly equally from 5 lwlocks.
Thanks again - any other suggestions welcome!
Cheers
Mark
On 31/07/14 00:47, Tomas Vondra wrote:
On 30 Červenec 2014, 14:39, Tom Lane wrote:
"Tomas Vondra" writes:
On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:
While these numbers look great in the middle range (12-96 clients),
the benefit looks to be tailing off as client numbers incr
With respect to comments like "it shouldn't make difference" etc etc,
well the profile suggests otherwise, and the change in tps numbers
support the observation.
regards
Mark
On 30/07/14 20:42, Tomas Vondra wrote:
On 30 Červenec 2014, 3:44, Mark Kirkwood wrote:
While these number
On 17/07/14 11:58, Mark Kirkwood wrote:
Trying out with numa_balancing=0 seemed to get essentially the same
performance. Similarly wrapping postgres startup with --interleave.
All this made me want to try with numa *really* disabled. So rebooted
the box with "numa=off" appended to
ssing that with 60 cores I do:
$ sudo /bin/bash -c "echo 0-59 >/dev/cpuset/postgres/cpus"
i.e cpus are cores not packages...? If I've stuffed it up I'll redo!
Cheers
Mark
On 11/07/14 20:22, Andres Freund wrote:
On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:
Full report http://paste.ubuntu.com/886/
#
8.82%postgres [kernel.kallsyms][k]
_raw_spin_lock_irqsave
|
--- _raw_spin_lock_irqsave
On 11/07/14 20:22, Andres Freund wrote:
On 2014-07-11 12:40:15 +1200, Mark Kirkwood wrote:
Postgres 9.4 beta
rwlock patch
pgbench scale = 2000
On that scale - that's bigger than shared_buffers IIRC - I'd not expect
the patch to make much of a difference.
Right - we did te
On 01/07/14 22:13, Andres Freund wrote:
On 2014-07-01 21:48:35 +1200, Mark Kirkwood wrote:
- cherry picking the last 5 commits into 9.4 branch and building a package
from that and retesting:
Clients | 9.4 tps 60 cores (rwlock)
+--
6 | 70189
12
On 01/07/14 21:48, Mark Kirkwood wrote:
[1] from git://git.postgresql.org/git/users/andresfreund/postgres.git,
commits:
4b82477dcaf81ad7b0c102f4b66e479a5eb9504a
10d72b97f108b6002210ea97a414076a62302d4e
67ffebe50111743975d54782a3a94b15ac4e755f
fe686ed18fe132021ee5e557c67cc4d7c50a1ada
On 27/06/14 21:19, Andres Freund wrote:
On 2014-06-27 14:28:20 +1200, Mark Kirkwood wrote:
My feeling is spinlock or similar, 'perf top' shows
kernel find_busiest_group
kernel _raw_spin_lock
as the top time users.
Those don't tell that much by themselves, could you do a hiera
On 27/06/14 14:01, Scott Marlowe wrote:
On Thu, Jun 26, 2014 at 5:49 PM, Mark Kirkwood
wrote:
I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].
The context is the current machine in use by the customer is a 32 core one,
and due to growth we are looking at something
clients (e.g 200 - 500). I have yet to break out the
perf toolset, but I'm wondering if any folk has compared 32 and 60 (or
64) core read write pgbench performance?
regards
Mark
[1] Details:
4x E7-4890 15 cores each.
1 TB ram
16x Toshiba PX02SS SATA SSD
4x Samsung NVMe XS1715 PCIe SSD
On 21/01/14 21:45, Mark Kirkwood wrote:
On 21/01/14 21:37, Katharina Koobs wrote:
Dear Heikki,
thank you for your valuable feedback. Regarding your questions: It gets
gradually slower every day.
slightly over time.
I will try your hint regarding CLUSTERING. The
version 9.0 sounds very interesting. I will discuss the
update to version 9.0 with my colleague.
Any further idea or feedback is much appreciated.
Index bloat could be a factor too - performing a regular REINDEX on the
relevant tables could be worth a try.
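A minimal sketch (table and index names invented); bear in mind REINDEX takes an exclusive lock on the table, so run it in a quiet window:

```sql
-- Rebuild every index on a bloated table:
REINDEX TABLE student_results;

-- Or just one suspect index:
REINDEX INDEX student_results_pkey;
```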
Regards
Mark
to tell without any relevant information (e.g schema
description). But a likely culprit would be a missing index on the
relevant 'tenant_id' type field in each table that you are using to
distinguish the various tenant datasets.
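Something along these lines (all names are placeholders for whatever the real schema uses):

```sql
-- Without this, each per-tenant query scans the whole table.
CREATE INDEX invoices_tenant_id_idx ON invoices (tenant_id);

-- EXPLAIN should then show an index scan for tenant-filtered queries:
EXPLAIN SELECT * FROM invoices WHERE tenant_id = 42;
```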
Regards
Mark
e work_mem?
You can disable the hash and merge join options by doing:
SET enable_hashjoin=off;
SET enable_mergejoin=off;
before running the query again. Timing it (or EXPLAIN ANALYZE) should
demonstrate if that planner made the right call by choosing hash or
merge in the first place.
regards
Ma
ating an index on NewsArticle(id) so that the
join to this table does not require a full table scan:
CREATE INDEX newsarticle_id_idx ON "NewsArticle" (id);
(probably not a problem when you only have a few articles - but will be
as the volume increases over time).
Regards
Mark
work better with deadline than
noop (as their own scheduling firmware may be pretty poor). Also, check
if there are updates for the SSD firmware. I have a couple of Crucial
M4s that changed from being fairly average to very fast indeed after
getting later firmware...
Cheers
Mark
8 cpus tended to stop scaling so we are
using more smaller VMs rather than fewer big ones [1].
regards
Mark
[1] This was with pgbench. Note this was over a year ago, so this effect
may no longer be present (different kernels and kvm versions), or the magic
number may be higher than 8 now...
Good Afternoon,
I also came across this too.
The issue goes away if you keep your join columns the same data type on
both tables.
The nested loop happens when the join columns are not the same data type.
Hope this helps.
Best
-Mark
On Fri, Sep 6, 2013 at 2:35 PM, Tom Lane wrote:
> Br
*Sorry correction.
I meant the Materialize disappears when the join columns are the same data
type.
On Fri, Sep 6, 2013 at 3:46 PM, Mark Mayo wrote:
> Good Afternoon,
>
> I also came across this too.
> The issue goes away if you keep your join columns the same data type on
> bot
idth=4) (actual time=0.014..0.016
rows=20 loops=1)
Index Cond: (aid > 1000)
Total runtime: 0.029 ms
(4 rows)
...and we have index scans for both cases.
Cheers
Mark
On 23/05/13 14:26, Mark Kirkwood wrote:
On 23/05/13 14:22, Greg Smith wrote:
On 5/22/13 10:04 PM, Mark Kirkwood wrote:
Make that quite a few capacitors (top right corner):
http://regmedia.co.uk/2013/05/07/m500_4.jpg
There are some more shots and descriptions of the internals in the
On 23/05/13 14:22, Greg Smith wrote:
On 5/22/13 10:04 PM, Mark Kirkwood wrote:
Make that quite a few capacitors (top right corner):
http://regmedia.co.uk/2013/05/07/m500_4.jpg
There are some more shots and descriptions of the internals in the
excellent review at
http://techreport.com/review
On 23/05/13 13:32, Mark Kirkwood wrote:
On 23/05/13 13:01, Joshua D. Drake wrote:
On 05/22/2013 04:37 PM, Merlin Moncure wrote:
On Wed, May 22, 2013 at 5:42 PM, Joshua D. Drake
wrote:
I am curious how the 710 or S3700 stacks up against the new M500 from
Crucial? I know Intel is kind of
y for 5 years
- S3700 100GB drive: 1000GB per day for 5 years
But great to see more reasonably priced SSD with power off protection.
Cheers
Mark
way
more than a few seconds...which means look out for huge data loss.
I'd be inclined to apply more leverage to hosting provider to source
SSDs suitable for your needs, or change hosting providers.
Regards
Mark
a little suspicious!
Regards
Mark
On Tue, 30 Apr 2013 06:20:55 -0500, Christoph Berg
wrote:
Hi,
this is more of a report than a question, because we thought this
would be interesting to share.
We recently (finally) migrated an Request Tracker 3.4 database running
on 8.1.19 to 9.2.4. The queries used by rt3.4 are sometimes wei
On 11/05/13 01:30, Tom Lane wrote:
Mark Kirkwood writes:
Unfortunately a trigger will not really do the job - analyze ignores in-progress rows (unless they were added by the current transaction), and
then the changes made by analyze are not seen by any other sessions. So
no changes to plans
added.
On 7 May 2013 08:33, "Mark Kirkwood" <mark.kirkw...@catalyst.net.nz> wrote:
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, <mark.kirkw...@catalyst.net.nz> wrote:
I'm thinking that a variant of (2) might be simpl
On 07/05/13 19:33, Simon Riggs wrote:
On 7 May 2013 07:32, Mark Kirkwood wrote:
On 07/05/13 18:10, Simon Riggs wrote:
On 7 May 2013 01:23, wrote:
I'm thinking that a variant of (2) might be simpler to implement:
(I think Matt C essentially beat me to this suggestion - he origi
g secret incantations to
make things work well :-)
I'm still thinking that making postgres smarter about having current
stats for getting the actual optimal plan is the best solution.
Cheers
Mark
e here is the partial execution numbers
would need to be sent. Clearly one would need to avoid doing this too
often (!) - possibly only when number of changed rows >
autovacuum_analyze_scale_factor proportion of the relation concerned or
similar.
regards
Mark
checking of as yet-incomplete xids, and to bulk concurrent
transactions.
ISTM we can improve performance of TransactionIdIsInProgress() by
caching the procno of our last xid.
Mark, could you retest with both these patches? Thanks.
Thanks Simon, will do and report back.
;
> I wonder if "freezing" (analyze, then disable autovacuum) the statistics
> for the large number of rows would work.
>
>
>
I'm thinking that the issue is actually the opposite - it is that a new
plan is needed because the new (uncommitted) rows are changing the data
d
> On 2 May 2013 01:49, Mark Kirkwood wrote:
>
> I think we need a problem statement before we attempt a solution,
> which is what Tom is alluding to.
>
Actually no - I think Tom (quite correctly) was saying that the patch was
not a viable solution. With which I agree.
I belie
On 02/05/13 02:06, Tom Lane wrote:
Mark Kirkwood writes:
I am concerned that the deafening lack of any replies to my original
message is a result of folk glancing at your original quick reply and
thinking... incomplete problem spec... ignore... when that is not the
case - yes I should have
On 26/04/13 15:34, Gavin Flower wrote:
On 26/04/13 15:19, Mark Kirkwood wrote:
While in general you are quite correct - in the above case
(particularly as I've supplied a test case) it should be pretty
obvious that any moderately modern version of postgres on any
supported platform will ex
e has "exists" in both HQL and criteria API
> (e.g. see
> http://www.cereslogic.com/pages/2008/09/22/hibernate-criteria-subqueries-exists/
> for
> criteria). So, maybe it's easier for you to tune your hibernate query to
> use exists
>
>
> 2013/4/30 Mark Hamp
I have a Hibernate-generated query (That's not going to change, so let's
just focus on the Postgres side for now) like this:
SELECT *
from PERSON p
where p.PERSON_ID in (
select distinct p2.PERSON_ID
from PERSON p2
left oute
On 26/04/13 14:56, Gavin Flower wrote:
On 26/04/13 14:33, Mark Kirkwood wrote:
Recently we encountered the following unhappy sequence of events:
1/ system running happily
2/ batch load into table begins
3/ very quickly (some) preexisting queries on said table go orders of
magnitude slower
4
3 rows=2 loops=1)
Index Cond: (typ = 3)
Total runtime: 6.615 ms
Regards
Mark
plan.tar.gz
Description: application/gzip
*** analyze.c.orig 2013-04-26 10:40:06.634942283 +1200
--- analyze.c 2013-04-26 11:36:13.537404101 +1200
***
*** 1173,1183
* has to adjust th
, 2013 at 12:31 PM, Mark Davidson wrote:
>
>> Thanks for your response Vasillis. I've run pgbench on both machines
>> `./pgbench -c 10 -t 1 pgbench` getting 99.800650 tps on my local
>> machine and 23.825332 tps on the server so quite a significant difference.
>
Sorry Vasillis I missed you asking that I just did './pgbench -i pgbench'
didn't specific set any values. I can try some specific ones if you can
suggest any.
On 8 April 2013 21:28, Vasilis Ventirozos wrote:
>
>
>
> On Mon, Apr 8, 2013 at 11:18 PM, Mark Davidson wro
18.976496 (including connections establishing)
> tps = 1119.180126 (excluding connections establishing)
>
> i am assuming that you didn't populate your pgbench db with the default
> values , if you tell me how you did i will be happy to re run the test and
> see the dif