> e the OP may not be seeing an issue from too
> large of a pg buffer, my point still stands, large pg_buffer can cause
> problems with heavy or even moderate write loads.
Sure, but that can go the other way as well. What additional
instrumentation is needed so that people can actually know which it is?
Is the nature of the queries the same, just the duration that changes?
Or are the queries of a different nature?
Cheers,
Jeff
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
estimated cost, I
think this issue would be enough to tip it into the seq scan. Also, your
poor setting of effective_cache_size might also be enough to tip it. And
both combined, almost certainly are.
But ultimately, I think you are optimizing for a case that does not
actually exist.
Cheers,
Jeff
hed and coming from RAM, which is almost
surely the case here.
Cheers,
Jeff
e doing this sort in the first place.
Cheers,
Jeff
.252 rows=39 loops=1)
It would be interesting to know why that is.
Cheers,
Jeff
the first place. If you installed from source, then
just follow "sudo make install" with "cd contrib; sudo make install")
Then, just change postgresql.conf so that
shared_preload_libraries = 'pg_stat_statements'
And restart the server.
Then in psql run
create extension pg_stat_statements ;
Cheers,
Jeff
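Once the extension is enabled this way, a quick sketch of using it (column names per the pg_stat_statements docs for the 9.x releases discussed here; total_time was later renamed total_exec_time):

```sql
-- Top five statements by cumulative execution time.
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```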
ther evidence. If
a table only needs to be vacuumed once a day and you preemptively do
it at 3a.m., then autovac won't bother to do it itself during the day.
So there is no point, but much risk, in also turning autovac off.
Cheers,
Jeff
rop
the trigger, then the live table should have good-enough preexisting
statistics to make the trigger choose a good plan.
Cheers,
Jeff
in-between times
> that I could shutdown the app and perform a (re-) cluster on the overall
> table? Problem is, with a table that size, and the hardware I'm "blessed
> with", the cluster takes quite a bit of time. :(
Probably not. If the data starts out clustered and gets up
> 't match on
> both tables, it's quite likely indices won't be used when checking the
> FK, and that spells trouble.
Will PG allow you to add a FK constraint where there is no usable
index on the referenced side?
I have failed to do so, but perhaps I am not being devious enough.
Cheers,
Jeff
n one that is the culprit?
You can look in pg_statio_user_tables to see what tables and indexes have
high io being driven by the bulk loading.
Use "top" to see if the server is mostly IO bound or CPU bound.
Cheers,
Jeff
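For example, a sketch of such a query (these statistics views and columns exist in the releases under discussion):

```sql
-- Tables whose blocks are most often read from disk rather than found
-- in shared_buffers; heavy bulk loading shows up near the top here.
SELECT relname, heap_blks_read, heap_blks_hit,
       idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read + idx_blks_read DESC
LIMIT 10;
```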
On Fri, Jan 11, 2013 at 12:13 AM, Andrzej Zawadzki wrote:
> On 10.01.2013 19:17, Jeff Janes wrote:
>> Also, if you temporarily set enable_seqscan=off on 9.2, what plan do
>> you then get?
>
> Plan is different.
>
> "Join Filter: (sa.arrear_im
o
you then get?
Cheers,
Jeff
(analyze, buffers) rather
than merely EXPLAIN. In this case, I would also try setting
enable_hashjoin=off and enable_mergejoin=off in the session, in order to
force the planner to use the plan you think you want, so we can see what PG
thinks of that one.
Cheers,
Jeff
>
number of partitions to make that
happen. But if the table is clustered, this is exactly what you would
get--the trouble would be keeping it clustered. If most of the
line-items are inserted at the same time as each other, they probably
should be fairly well clustered to start with.
Cheers,
Jeff
On Sunday, January 6, 2013, AJ Weber wrote:
> All fair questions...
>
> Thank you for your detailed response!
>
>
> On 1/4/2013 11:03 PM, Jeff Janes wrote:
>
> On Friday, January 4, 2013, AJ Weber wrote:
>
>> Hi all,
>>
>> I have a table that has abou
creating a
temp table was the preferred method to force a flush. Although I
wonder if that behavior might be optimized away at some point.
Cheers,
Jeff
re hard drives.
> Finally, if anyone has any comments about my settings listed above that
> might help improve performance, I thank you in advance.
>
Your default statistics target seemed low. Without knowing the nature of
your most resource intensive queries or how much memory tomcat is using, it
is hard to say more.
Cheers,
Jeff
s it took the previously mentioned
plpgsql trigger to do the same thing.
This was under 9.1.7.
In 9.2.2, it seems to get 3 times worse yet for RULEs in the insert loop.
But that result seems hard to believe, so I am repeating it.
Cheers
Jeff
>
>
then return 5;
when (random()*10)::integer then return 6;
when (random()*10)::integer then return 7;
when (random()*10)::integer then return 8;
when (random()*10)::integer then return 9;
when (random()*10)::integer then return 10;
else return -6;
Cheers,
Jeff
>
On Thursday, December 20, 2012, Jeff Janes wrote:
> On Thursday, December 20, 2012, Tom Lane wrote:
>
>>
>> What I did to try to duplicate Richard's situation was to create a test
>> table in which all the exit_state values were NULL, then build the
>> index,
On Thursday, December 27, 2012, Andrew Dunstan wrote:
> On 12/26/2012 11:03 PM, Jeff Janes wrote:
>
>>
>> Do you have a huge number of tables? Maybe over the course of a
>> long-lived connection, it touches enough tables to bloat the relcache /
>> syscache. I don
most gain,
especially if you use COPY or \copy. Since the main goal of partitioning
is to allow your physical storage layout to conspire with your bulk
operations, it is hard to see how you can get the benefits of partitioning
without having your bulk loading participate in that conspiracy.
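One way to make the bulk load participate in that conspiracy, sketched here with hypothetical partition and file names, is to COPY directly into the child table instead of routing every row through a trigger on the parent:

```sql
-- Hypothetical example: load January's data straight into its partition,
-- bypassing the parent table's routing trigger entirely.
\copy lineitem_2013_01 from 'lineitem_2013_01.csv' with csv
```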
Cheers,
Jeff
On Wednesday, December 26, 2012, Pavel Stehule wrote:
> 2012/12/27 Jeff Janes :
> >
> > More automated would be nice (i.e. one operation to make both the check
> > constraints and the trigger, so they can't get out of sync), but would
> not
> > necessarily mean
On Thursday, December 27, 2012, Richard Neill wrote:
>
>
> On 27/12/12 16:17, Jeff Janes wrote:
>
>>
>> I still think your best bet is to get rid of the partial index and trade
>> the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
>> I
On Thursday, December 20, 2012, Jeff Janes wrote:
> On Thursday, December 20, 2012, Richard Neill wrote:
>
>>
>>
>> -> Bitmap Index Scan on tbl_tracker_exit_state_idx
>> (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277
>> lo
reindex should not be needed in
> "normal" operation... is this true? Or are the docs wrong? Or have I got
> such an edge case?
>
Your case seems pretty far out there to me.
Cheers,
Jeff
> minutes.
Do you have a huge number of tables? Maybe over the course of a long-lived
connection, it touches enough tables to bloat the relcache / syscache. I
don't know how the autovac would be involved in that, though.
Cheers,
Jeff
k. rollback first, then ruminate on the results of the explain.
Also, this will forcibly cancel any autovacuums occurring on the table. I
think one of the reasons he needs to reindex so much is that he is already
desperately short of vacuuming behavior.
Cheers,
Jeff
>
>
ll the SQL engine that has to parse, plan, and execute
> > it.
> >
>
> Jeff, I've changed the code from dynamic to:
>
> CREATE OR REPLACE FUNCTION quotes_insert_trigger()
> RETURNS trigger AS $$
> DECLARE
> r_date text;
> BEGIN
> r_date = to_ch
ne. I
have not examined this code, and don't know whether it is doing its job but
just isn't enough to prevent the bloat, or if for some reason it is not
applicable to your situation.
Cheers,
Jeff
On Tuesday, December 11, 2012, Tom Lane wrote:
> Jeff Janes writes:
> > On Tue, Dec 11, 2012 at 2:50 AM, Patryk Sidzina
> > > wrote:
> >> The differences come up when you change the "INSERT" to "EXECUTE
> 'INSERT'" (
>
ur IFs are nested
like a binary search, not a linear search. And if they are mostly for
"today's" date, then make sure you search backwards.
Cheers,
Jeff
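The shape of that suggestion, sketched outside SQL (the 8-partition layout and all names here are invented for illustration, not taken from the thread):

```python
# A linear IF-chain checks up to 8 conditions to route a row to one of
# 8 monthly partitions; nesting the comparisons like a binary search
# checks at most 3.

def route_linear(month):
    # One equality test per partition, newest months checked first
    # ("search backwards" when most rows are for today's date).
    for m in range(8, 0, -1):
        if month == m:
            return f"part_{m}"
    raise ValueError(month)

def route_binary(month):
    # Nested like a binary search: at most 3 comparisons for 8 partitions.
    if month <= 4:
        if month <= 2:
            return "part_1" if month == 1 else "part_2"
        return "part_3" if month == 3 else "part_4"
    if month <= 6:
        return "part_5" if month == 5 else "part_6"
    return "part_7" if month == 7 else "part_8"
```

The same row always lands in the same partition either way; only the number of comparisons per row changes.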
are many other ways of approaching it, but mostly you have to already
have a good theory about what is going on in order know which one to use or
to interpret the results, and many of them require you to make custom
compiles of the postgres server code.
Cheers,
Jeff
;
> -> Bitmap Index Scan on tbl_tracker_exit_state_idx
> (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277
> loops=1)
>
This is finding 100 times more rows than it thinks it will. If that could
be fixed, surely this plan would not look as good. But then, it would
probably just switch to another plan that is not the one you want, either.
Cheers,
Jeff
>
ch one, if that is what you are doing.
Some kind of bulk join would probably be more efficient.
Cheers,
Jeff
>
>
On Thursday, December 20, 2012, Tom Lane wrote:
> Jeff Janes writes:
> > In any case, I can't get it to prefer the full index in 9.1.6 at all.
> The
> > partial index wins hands down unless the table is physically clustered by
> > the parcel_id_code column.
On Thursday, December 20, 2012, Charles Gomes wrote:
> Jeff,
>
> The 8288 writes are fine, as the array has a BBU, it's fine. You see about
> 4% of the utilization.
>
BBU is great for latency, but it doesn't do much for throughput, unless it
is doing write combining behi
> Device:        rrqm/s  wrqm/s    r/s      w/s  rsec/s    wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
> Pgresql--data    0.00    0.00   0.00  8288.00    0.00  66304.00      8.00     60.92   7.35   0.01   4.30
8288 randomly scattered writes per second sound like enough to
bottlen
.
This is what I did for the table:
create table tbl_tracker as select case when random()<0.001 then 2 else
case when random()< 0.3 then NULL else 1 end end as exit_state,
(random()*9)::int as parcel_id_code from generate_series(1,500) ;
Cheers,
Jeff
>
You all were right. The time-outs for TRUNCATE were due to a rogue pg_dump.
And the issue with the inserts was due to an unrelated code change.
Thanks for your help!
--Jeff O
On Dec 11, 2012, at 5:34 PM, Osborn, Jeff wrote:
> Yeah I've been running a cron pulling relevant i
On Wed, Dec 12, 2012 at 8:46 AM, Niels Kristian Schjødt
wrote:
>
> Den 11/12/2012 kl. 18.25 skrev Jeff Janes :
>
>> On Tue, Dec 11, 2012 at 2:04 AM, Niels Kristian Schjødt
>> wrote:
>>
>>> Maybe I should mention, that I never see more than max 5Gb out of my
On Thu, Nov 29, 2012 at 11:58 AM, Claudio Freire wrote:
> On Thu, Nov 29, 2012 at 3:32 PM, Jeff Davis wrote:
>>
>> I tried a quick test with 2M tuples and 3 indexes over int8, numeric,
>> and text (generated data). There was also an unindexed bytea column.
>> Using
(cost=0.00..267.03 rows=1 width=108)"
This looks like the same large-index over-penalty as discussed in the
recent thread "[PERFORM] Slow query: bitmap scan troubles".
Back-patching the log(npages) change is starting to look like a good idea.
Cheers,
Jeff
the slowdown is that it calls AllocSetAlloc more often. I suspect
that this slow-down will be considered acceptable trade-off for
getting good parameterized plans.
commit e6faf910d75027bdce7cd0f2033db4e912592bcc
Author: Tom Lane
Date: Fri Sep 16 00:42:53 2011 -0400
Redesign th
Yeah I've been running a cron pulling relevant info from pg_stat_activity.
Haven't seen anything yet. Currently looking into the pg_dump situation.
--Jeff O
On Dec 11, 2012, at 5:16 PM, Merlin Moncure wrote:
> On Tue, Dec 11, 2012 at 3:38 PM, Sergey Konoplev wrote:
>
the issue?
Thanks,
--Jeff O
On Tue, Dec 11, 2012 at 2:04 AM, Niels Kristian Schjødt
wrote:
> Den 11/12/2012 kl. 00.58 skrev Jeff Janes :
>
>>
>> The fact that there is much more writing than reading tells me that
>> most of your indexes are in RAM. The amount of index you are rapidly
>> readin
disks. Maybe you can fix that for less
money than it will cost you in your optimization time to make the best
of the disks you already have.
Cheers,
Jeff
t; Any ideas why those results differ?
Did you just run it once each?
The run-to-run variability in timing can be substantial.
I put the above into a custom file for "pgbench -f sidzina.sql -t 1 -p
$port" and ran it on both versions in random order for several hundred
iterations. Th
On Sat, Dec 8, 2012 at 5:19 AM, Guillaume Smet wrote:
> Hi Jeff,
>
> On Sat, Dec 8, 2012 at 3:32 AM, Jeff Janes wrote:
>> If those estimates are better, it probably means that your filter
>> condition is picking a part of the "el JOIN l" that has much different
&g
e joins, it is hard
to speculate.
If you remove all filters (all members of the "where" which are not
join criteria), then what does the plan look like?
If those estimates are better, it probably means that your filter
condition is picking a part of the "el JOIN l" that has much d
On Thu, Dec 6, 2012 at 12:05 PM, Claudio Freire wrote:
> On Thu, Dec 6, 2012 at 2:27 PM, Jeff Janes wrote:
>> On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire
>> wrote:
>>> As far as I can see on the explain, the misestimation is 3x~4x not 200x.
>>
>> It is 3
On Wed, Dec 5, 2012 at 9:43 AM, Claudio Freire wrote:
> On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes wrote:
>> I'm not sure that this change would fix your problem, because it might
>> also change the costs of the alternative plans in a way that
>> neutralizes things.
On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes wrote:
(Regarding http://explain.depesz.com/s/4MWG, wrote)
>
> But I am curious about how the cost estimate for the primary key look
> up is arrived at:
>
> Index Scan using cons_pe_primary_key on position_effect
> (cost=0.00..42.
rver know that.
Well, that part is fairly easy. Make random_page_cost and
seq_page_cost much smaller than their defaults. Like, 0.04 and 0.03,
for example.
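For example, such settings can be tried for a single session before touching postgresql.conf (the values are the illustrative ones above, not recommendations):

```sql
-- Session-local experiment; reverts when the connection closes.
SET random_page_cost = 0.04;
SET seq_page_cost = 0.03;
```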
I think the *_page_cost should strictly be an estimate of actually doing
IO, with a separate parameter to reflect the likelihood of needing to do
t
nt of the fact
that a bitmap scan which overflows work_mem and so becomes "lossy" is
quite a performance set-back. Nor does it look like explain analyze
informs you of this happening. But maybe I'm just looking in the
wrong places.
Cheers,
Jeff
with your results. Run each
> query several times.
If that is not how the production system works (running the same query
over and over) then you want to model the cold cache, not the hot one.
But in any case, the posted explains indicates that all buffers were
cached.
Cheers,
Jeff
On Thu, Nov 29, 2012 at 10:14 AM, Mike Blackwell wrote:
>
>
>
> On Thu, Nov 29, 2012 at 12:09 PM, Jeff Janes wrote:
>>
>>
>> But If you do keep the drop index inside the transaction, then you
>> would probably be better off using truncate rather than
s not really
worth the effort to batch if that is the size of the update.
Regards,
Jeff Davis
n you
would probably be better off using truncate rather than delete, and
rebuild the index non-concurrently and move that inside the
transaction as well.
Cheers,
Jeff
subtransactions, make sure to release them as quickly as
you create them (don't just use ROLLBACK TO, that still leaves the
savepoint there); having 1500 open subtransactions might cause
performance problems elsewhere.
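A sketch of the pattern being described:

```sql
BEGIN;
SAVEPOINT s;
-- ... statement that might fail ...
ROLLBACK TO SAVEPOINT s;  -- undoes the work, but the savepoint still exists
RELEASE SAVEPOINT s;      -- now the subtransaction is actually gone
COMMIT;
```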
Regards,
Jeff Davis
arate
transactions, but only up to a point, after which it levels off. I'm not
sure exactly when that point is, but after that, the downsides of
keeping a transaction open (like inability to remove the previous
version of an updated tuple) take over.
Regards,
Jeff Davis
s?
If the extra 130 columns are mostly null, the difference will be very
small. Or, if the where clause is such that you only do a single-row
lookup on a primary key column, for example, the difference will also
be small.
Cheers,
Jeff
d be. While the more traditional type of hint is easy to use,
because the end user understands their data more than they understand
the guts of the planner.
Cheers,
Jeff
On Wed, Nov 14, 2012 at 12:04 PM, Jon Nelson wrote:
> On Wed, Nov 14, 2012 at 1:01 PM, Jeff Janes wrote:
>>
>> While the WAL is suppressed for the table inserts, it is not
>> suppressed for the index inserts, and the index WAL traffic is enough
>> to lead to conten
It does seem like the docs could at least be clarified here.
Cheers,
Jeff
On Tue, Nov 13, 2012 at 7:12 PM, Denis wrote:
> Jeff Janes wrote
>> On Thu, Nov 8, 2012 at 1:04 AM, Denis <
>
>> socsam@
>
>> > wrote:
>>>
>>> Still I can't understand why pg_dump has to know about all the tables?
>>
>>
ok at the churn in pg_xlog) during those
loads?
Maybe your contention is someplace else. Since they must all be using
different tables, I don't think it would be the relation extension
lock. Maybe buffer mapping lock or freelist lock?
Cheers,
Jeff
ons would bypass WAL entirely. I can't figure
out if there is a corresponding optimization in 8.4, though.
Cheers,
Jeff
rewrite pg_dump from the ground up to achieve your
specific needs (dumping one schema, with no dependencies between to
other schemata) you could probably make it much more efficient. But
then it wouldn't be pg_dump, it would be something else.
Cheers,
Jeff
On Mon, Nov 5, 2012 at 3:58 PM, Marko Kreen wrote:
> On Tue, Nov 6, 2012 at 1:31 AM, Jeff Janes wrote:
>> On Mon, Nov 5, 2012 at 2:58 PM, Marko Kreen wrote:
>>> On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes wrote:
>>>> On a 4 CPU machine, if I run pgbench -c10 -j10
On Mon, Nov 5, 2012 at 2:58 PM, Marko Kreen wrote:
> On Sun, Nov 4, 2012 at 1:53 AM, Jeff Janes wrote:
>> On a 4 CPU machine, if I run pgbench -c10 -j10 with dummy queries
>> (like "select 1;" or "set timezone...") against 2 instances of
>> pgbouncer,
On Mon, Nov 5, 2012 at 8:48 AM, Claudio Freire wrote:
> On Mon, Nov 5, 2012 at 1:44 PM, Jeff Janes wrote:
>>> Well, I'm pretty sure that having more work_mem is a good thing (tm)
>>> normally ;-)
>>
>> In my experience when doing sorts in isolation, having m
r of
tape-merging. I always blamed it on the L1/L2 etc. levels of caching.
Cheers,
Jeff
On Wed, Oct 31, 2012 at 11:39 AM, Merlin Moncure wrote:
> On Tue, Oct 30, 2012 at 4:58 PM, Jeff Janes wrote:
>> On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob
>> wrote:
>>
>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
>>>
On Oct 29, 2012, at 12:42 PM, Jeff Janes wrote:
> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz wrote:
>> I am configuring streaming replication with hot standby
>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
>> PostgreSQL was compiled from source.
On Tue, Oct 30, 2012 at 3:16 PM, Scott Marlowe wrote:
> On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson
> wrote:
>> Jeff / Catalin --
>>
>> Jeff Janes wrote:
>>
>>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob
>>>wrote:
>>>
>>
n to Postgres via Unix socket.
Isn't pgbouncer single-threaded?
If you are hitting it with tiny queries as fast as possible from 20
connections, I would think that it would become the bottleneck.
Cheers,
Jeff
:
> it took the system more than 80 minutes to replay 48 WAL files
> and connect to the primary.
>
> Can anybody think of an explanation why it takes that long?
Could the slow log files be replaying into randomly scattered pages
which are not yet in RAM?
Do you have sar or vms
enable_seqscan) might not be
the right knobs, but they are the knobs that currently exist.
Cheers,
Jeff
On Fri, Oct 26, 2012 at 8:30 AM, Böckler Andreas wrote:
>
> Am 25.10.2012 um 18:20 schrieb Jeff Janes:
>
>> Can you load the data into 9.2 and see if it does better? (I'm not
>> optimistic that it will be.)
>
> This takes months, the customer has to pay us for th
> 'till you find the next hole instead of
> making it right at the beginning of construction process
We are not at the beginning of the construction process. You are
already living in the house.
Version 9.3 is currently under construction. Maybe this will be a fix
for this problem in t
> know the right values before SELECT to get good results ;)
Not sure what you mean here. If you change the settings just for the
query, it should be safe because when the query is already fast it is
not using the seq scan, so discouraging it from using one even further
is not going to do an
On Fri, Oct 19, 2012 at 8:07 AM, Shaun Thomas wrote:
> On 10/19/2012 10:05 AM, Jeff Janes wrote:
>
>> http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html
>
>
> Yep. I realized my error was not checking the ALTER page after going through
> CREATE. I swore I
to the
> tablespaces themselves?
Been done already:
http://www.postgresql.org/docs/9.0/static/sql-altertablespace.html
Cheers,
Jeff
ant type
you expect to see.
The default page cost settings already assume that random fetches are
far more likely to be cache hits than sequential fetches are. If that
is not true, then the default random page cost is way too low,
regardless of the number of spindles or the concurrency.
but rather
how much of the table is in cache.
>
> @Jeff I have 4 drives in RAID10. The database has around 80GB of indices.
That seems like a pretty small disk set for a server of this size.
Do you know what percentage of that 80GB of indices gets dirtied
during any given round of batch loading
ct to benefit
from it concurrently) are worse than the consequences of
underestimating it--assuming you have the types of queries for which
it makes much of a difference.
Cheers,
Jeff
On Sun, Oct 7, 2012 at 7:43 AM, Ants Aasma wrote:
> On Wed, Sep 26, 2012 at 11:11 PM, Jeff Janes wrote:
>> On Wed, Sep 12, 2012 at 11:40 PM, Ants Aasma wrote:
>>> I don't have any links for OS level monitoring, but with version 9.2
>>> track_io_timing would do th
n_date, or on (creation_date, c) might. How
many records are there per day? If you add a count(*) to your select,
what would typical values be?
Cheers,
Jeff
ndex. But wouldn't that remove most of the
benefits of partitioning? You could no longer add or remove
partitions instantaneously, for example.
Cheers,
Jeff
ence anyway? If so,
in what circumstances?
In my hands, queries for which effective_cache_size might come into
play (for deciding between seq scan and index scan) are instead
planned as bitmap scans.
Cheers,
Jeff
2
> Swap:21000 51 20949
>
> So it did a little swapping, but only minor,
The kernel has, over the entire time the server has been up, found 51
MB of process memory to swap. That doesn't really mean anything. Do
you see active swapping going on, like with vmstat
On Tue, Oct 9, 2012 at 1:56 PM, Shaun Thomas wrote:
> On 10/09/2012 03:12 PM, Craig James wrote:
>
>>~3200 TPS max with hyperthreading
>>~9000 TPS max without hyprethreading
>
>
> That's really odd. We got almost the opposite effect on our X5645's.
>
> Also, there's no way your RAID is sus
ideas how to prevent this situation?
The bulk update could take an Exclusive (not Access Exclusive) lock.
Or the delete could perhaps be arranged to delete the records in ctid
order (although that might still deadlock). Or you could just repeat
the failed transaction.
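A sketch of taking that lock up front (table and column names are hypothetical):

```sql
BEGIN;
-- EXCLUSIVE conflicts with other writers but, unlike ACCESS EXCLUSIVE,
-- still allows plain SELECTs against the table.
LOCK TABLE queue_items IN EXCLUSIVE MODE;
UPDATE queue_items SET processed = true WHERE processed = false;
COMMIT;
```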
Cheers,
Jeff
That might result
in the geom being inserted in a more cache-friendly order.
> Any ideas? Partitioning?
Do most selects against this table specify user_name as well as a
geometry query? If so, that might be a good partitioning key.
Otherwise, I don't see what you could partition on in
elects is not estimated, except to the extent they are folded into
something else, like the page visiting costs.
Cheers,
Jeff