On 04/03/13 13:52, Niels Kristian Schjødt wrote:
LSI MegaRAID SAS 9260-4i with four Intel SSDSC2CW240A3K5 SSDs OR four Hitachi
Ultrastar 15K600 SAS drives?
My app is pretty write-heavy and I have a lot of concurrent connections (300) -
(though considering adding pgpool2 in front to increase
On 27/01/13 02:45, k...@rice.edu wrote:
On Sun, Jan 27, 2013 at 03:15:45AM +0300, belal hamed wrote:
I connect to my server through ADSL connection 4Mbps
Here is your problem. You need to understand the performance
characteristics of your communication channel. ADSL is a VERY
asymmetric
Dear Shaun,
Thanks for that - it's really interesting to know.
On 02/01/13 21:46, Shaun Thomas wrote:
Hey everyone!
After much testing and hair-pulling, we've confirmed two kernel
settings that should always be modified in production Linux systems.
Especially new ones with the completely fair
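The excerpt cuts off before naming the two settings. For illustration only, scheduler-related knobs of this kind are applied via sysctl; the specific parameters and values below are assumptions on my part, not taken from the original post:

```ini
# /etc/sysctl.conf -- illustrative only; apply with `sysctl -p`.
# Completely Fair Scheduler knobs often discussed for database hosts:
kernel.sched_migration_cost = 5000000   # make the scheduler less eager to migrate tasks
kernel.sched_autogroup_enabled = 0      # disable per-tty task grouping
```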
The partial index is highly leveraged. If every tuple in the
table is updated once, that amounts to every tuple in the index
being updated 25,000 times.
How so? That sounds like O(n²) behaviour.
If the table has 5 million rows while the index has 200 (active)
On 27/12/12 16:17, Jeff Janes wrote:
I still think your best bet is to get rid of the partial index and trade
the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
I think that will be much less fragile than reindexing in a cron job.
So, at the moment, I have 3
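Jeff's suggested swap could be sketched as follows; the table name comes from the thread, but the exact index names and definitions are assumptions:

```sql
-- Drop the fragile partial index and the single-column full index,
-- and replace both with one composite index that serves both access
-- patterns (lookups by parcel_id_code, filtered by exit_state).
DROP INDEX tbl_tracker_performance_1_idx;       -- the partial index
DROP INDEX tbl_tracker_parcel_id_code_idx;      -- assumed name of the full index
CREATE INDEX tbl_tracker_parcel_exit_idx
    ON tbl_tracker (parcel_id_code, exit_state);
```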
On 27/12/12 17:21, François Beausoleil wrote:
On 2012-12-27 at 12:10, Nikolas Everett wrote:
We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.
When we EXPLAIN ANALYZE our queries the explain result claim that the queries
are reasonably fast but the wall clock
Dear All,
I think periodic reindex may be the solution. Even after reducing the
autovacuum fraction to 0.05, the index still seems to bloat.
After another couple of days runtime, the index is using 11MB, and
I get a query that takes 2.448ms. Then I reindex (takes about 3 sec),
and the index
I've now installed 9.2. As you said, thanks to the change in 9.2 it
initially prefers the partial index.
BUT, after 1 cycle of inserting 500k rows, then deleting them all,
then starting to insert again, I find that the planner has reverted
to the former bad behaviour.
On 21/12/12 05:15, Jeff Janes wrote:
- What I'm trying to do is trace the history of the books
through the system and assign each one a proper unique id.
So, if I see a book with parcel_id_code = 37,
is it a new book (after pid wrap), or is it the same book I saw 1
Dear Tom,
Thanks again for your help on this.
On 20/12/12 03:06, Tom Lane wrote:
Richard Neill rn...@richardneill.org writes:
The problem is, when I now run my query, the planner ignores the
dedicated index tbl_tracker_performance_1_idx, and instead uses both
of the full indexes... resulting
Dear All,
I've just joined this list, and I'd like to request some advice.
I have a table (1 GB in size) with 24 columns, and 5.6 million rows. Of
these, we're interested in two columns, parcel_id_code, and exit_state.
parcel_id_code has a fairly uniform distribution of integers
On 19/12/12 22:59, Sergey Konoplev wrote:
On Wed, Dec 19, 2012 at 1:13 PM, Richard Neill rn...@richardneill.org wrote:
Index Scan using tbl_tracker_performance_1_idx on tbl_tracker
(cost=0.00..5440.83 rows=1 width=174) (actual time=0.052..0.052 rows=0
loops=1)
Index Cond
Thanks for your help,
On 20/12/12 00:08, Sergey Konoplev wrote:
On Wed, Dec 19, 2012 at 3:49 PM, Richard Neill rn...@richardneill.org wrote:
* The reindex solution doesn't work. I just tried it, and the query planner
is still using the wrong indexes.
Can you show the explain analyze
Dear Tom,
Thanks very much for your advice.
A psql session is below. This shows that, if I force the planner to use
the partial index, by dropping the others, then it's fast. But as soon
as I put the full indexes back (which I need for other queries), the
query planner chooses them instead,
Dear Jeff,
Thanks for your help,
* The reindex solution doesn't work. I just tried it, and
the query planner
is still using the wrong indexes.
It switched to a better one of the wrong indices, though, and got
several times faster.
I think that this is a
Rod MacNeil wrote:
Hi All,
I have a server running CentOS5 with 6gb of memory that will run
postgres 8.3 exclusively.
I would like to allocate 4gb of the memory to shared buffers for postgres.
It might be worth pausing at this point:
The various postgresql tuning guides usually suggest
Dear All,
Just wondering whether there is a missing scope for the query planner
(on 8.4.2) to be cleverer than it currently is.
Specifically, I wonder whether the optimiser should know that by
converting a CASE condition into a WHERE condition, it can use an index.
Have I found a possible
without being any uglier than the original query is.
You're absolutely right, but I'm afraid this won't help. I'd simplified
the original example query, but in real life, I've got about 50
different sub-ranges, which cover virtually all the id-space.
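The rewrite under discussion can be sketched with a hypothetical query (table and range are placeholders, not from the thread): the CASE form hides the condition from the planner, while the WHERE form exposes it to an index on id.

```sql
-- The CASE form must scan every row, because the condition is
-- inside an expression rather than a filter:
SELECT sum(CASE WHEN id BETWEEN 1 AND 1000 THEN 1 ELSE 0 END) FROM tbl;

-- The equivalent WHERE form can use an index on id
-- (sum-of-ones equals the count of matching rows):
SELECT count(*) FROM tbl WHERE id BETWEEN 1 AND 1000;
```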
--
Tom Lane wrote:
Richard Neill rn
DM wrote:
Is there any script/tool to identify if the table requires full vacuum?
or to re-index an existing index table?
Don't know if there is a script to specifically do this, though you may
find this query a useful one:
SELECT relname, reltuples, relpages FROM pg_class ORDER BY
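The quoted query is truncated at the ORDER BY; a hedged completion (the sort key and LIMIT here are my assumptions) would rank relations by on-disk size as a rough bloat indicator:

```sql
-- List the largest relations by page count (each page is normally 8 kB).
-- A relpages figure far above what reltuples would suggest hints at bloat.
SELECT relname, reltuples, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;
```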
Dear All,
I'm trying to optimise the speed of some selects with the where condition:
WHERE id =
(SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code='43024')
This is relatively slow, taking about 15-20ms, even though I have a
joint index on both fields:
CREATE INDEX testidx3 ON
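The CREATE INDEX statement is truncated above. A sketch of the composite index (column order is an assumption) and of a rewrite that lets the planner walk that index backwards instead of aggregating:

```sql
-- Composite index: equality column first, then the ordered column.
CREATE INDEX testidx3 ON tbl_sort_report (parcel_id_code, id);

-- Effectively equivalent to MAX(id) for this parcel (it returns no row
-- rather than NULL when there is no match), and index-friendly:
SELECT id FROM tbl_sort_report
WHERE parcel_id_code = '43024'
ORDER BY id DESC
LIMIT 1;
```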
Yup, I even tried manually running 'VACUUM FULL' and it didn't help. As
for upgrading;
VACUUM FULL is usually considered a bad idea. What you probably want to
do instead is CLUSTER, followed by ANALYZE.
Basically, VACUUM makes the indexes smaller (but doesn't reclaim much
space from the
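The suggested alternative, sketched with assumed table and index names:

```sql
-- CLUSTER rewrites the table in index order, compacting both the heap
-- and (by rebuilding) the indexes; ANALYZE then refreshes the planner
-- statistics for the rewritten table.
CLUSTER tbl_tracker USING tbl_tracker_pkey;
ANALYZE tbl_tracker;
-- Caveat: CLUSTER holds an ACCESS EXCLUSIVE lock for its whole duration.
```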
A couple of thoughts occur to me:
1. For reads, RAID 1 should also be good: it will allow a read to occur
from whichever disk can provide the data fastest.
2. Also, for reads, the more RAM you have, the better (for caching). I'd
suspect that another 8GB of RAM is a better expenditure than a 2nd
Greg Smith wrote:
Richard Neill wrote:
3. RAID 0 is twice as unreliable as no raid. I'd recommend using RAID 1
instead. If you use the Linux software mdraid, remote admin is easy.
The main thing to be wary of with Linux software RAID-1 is that you
configure things so that both drives
Jeremy Harris wrote:
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
Both of the last machines I bought (an IBM X3550 and an HP DL380
with a hideous firmware bug.
And there is no way to bypass it.
The HP have the P400 cards, which are decent in themselves, just not as
good as software raid.
Richard
Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill rn...@cam.ac.uk wrote:
Jeremy Harris
Scott Marlowe wrote:
On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill rn...@cam.ac.uk wrote:
Adam Tauno Williams wrote:
This isn't true. IBM's IPS series controllers can be checked and
configured via the ipssend utility, which works very well in 2.6.x Linux.
Unfortunately, what we got
Dear All,
Thanks for all your help so far. This page was particularly helpful:
http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm
(does the advice for 8.3 apply unchanged to 8.4?)
I'm still hitting issues with this though: sync is taking 7-10 seconds
and I need to get it down to
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
That does look weird. Do we have a self-contained test case?
Not at the moment. It seems to only occur with relatively complex joins.
Richard, could you capture the schema for the affected tables and
views with pg_dump -s and
Kevin Grittner wrote:
Richard Neill rn...@cam.ac.uk wrote:
Regarding pg_statistic, I don't understand how to find the
relevant rows - what am I looking for? (the pg_statistic table is
247M in size).
I think the only relevant rows would be the ones with starelid =
pg_class.oid
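Kevin's pointer can be turned into a concrete query; the table name below is a placeholder, and the join through pg_attribute (to label each row with its column name) is my addition:

```sql
-- Pull the statistics rows for one table: pg_statistic.starelid points
-- at pg_class.oid, and staattnum identifies the column.
SELECT s.staattnum, a.attname
FROM pg_statistic s
JOIN pg_class c ON c.oid = s.starelid
JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = s.staattnum
WHERE c.relname = 'tbl_tracker';
```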
amount of
lower priority IO be getting in the way thanks to write barriers?
If so, can I safely turn off write barriers?
Thanks,
Richard
P.S. Should I rename this thread?
Richard Neill wrote:
Dear All,
It definitely looks checkpoint-related - the checkpoint timeout is set
to 5 minutes
Dear All,
I don't know if this is a stupid question, or not, but I can't
understand the following.
I have a pretty simple query, which runs in about 7ms
SELECT * FROM h.inventory WHERE demand_id = 289276563;
The result of this is a 15 row x 17 column table. However, I want this
to be
Jean-Michel Pouré wrote:
On Tuesday, 1 December 2009 at 18:52 +, Richard Neill wrote:
Is this normal? Have I hit a bug?
PostgreSQL query analyzer needs to run a couple of times before it can
rewrite and optimize the query. Make sure demand_id, id and join IDs
carry indexes.
I did
Dear Kevin,
Thanks for a very helpful reply.
Kevin Grittner wrote:
Richard Neill rn...@cam.ac.uk wrote:
Am I wrong in thinking that ORDER BY is always applied after the
main query is run?
Yes, you are wrong to think that. It compares the costs of various
plans, and when it has an index
Thanks for your explanations.
Tom Lane wrote:
Richard Neill rn...@cam.ac.uk writes:
Now, I understand that increasing checkpoint_segments is generally a
good thing (subject to some limit), but doesn't that just mean that
instead of say a 1 second outage every minute, it's a 10 second outage
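The trade-off being discussed is usually tuned with two settings together; the values below are illustrative assumptions for an 8.4-era postgresql.conf, not figures from the thread:

```ini
# postgresql.conf -- illustrative values only
checkpoint_segments = 64             # more WAL between checkpoints (fewer of them)
checkpoint_completion_target = 0.9   # spread each checkpoint's writes over 90%
                                     # of the interval, instead of one big burst
```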
Sergey Aleynikov wrote:
Hello,
2009/11/25 Richard Neill rn...@cam.ac.uk:
Also, if you find odd statistics of freshly analyzed table - try
increasing statistics target, using
ALTER TABLE .. ALTER COLUMN .. SET STATISTICS ...
If you're using defaults - it's again low for large tables. Start
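A concrete form of the suggested command; the table, column, and target value are placeholders:

```sql
-- Raise the per-column statistics target (default was 100 in 8.4-era
-- releases; values up to 10000 are allowed), then re-gather stats.
ALTER TABLE tbl_tracker ALTER COLUMN parcel_id_code SET STATISTICS 1000;
ANALYZE tbl_tracker;
```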
Dear All,
I'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon)
is responsible for some deadlocks/dropouts I'm seeing.
One particular table gets hit about 5 times a second (for single row
updates and inserts) + associated index changes. This is a very light
load for the
Matthew Wakeling wrote:
On Sun, 22 Nov 2009, Richard Neill wrote:
Worse still, doing a cluster of most of the tables and vacuum full
analyze
Why are you doing a vacuum full? That command is not meant to be used
except in the most unusual of circumstances, as it causes bloat to indexes
Matthew Wakeling wrote:
On Wed, 25 Nov 2009, Richard Neill wrote:
On Sun, 22 Nov 2009, Richard Neill wrote:
Worse still, doing a cluster of most of the tables and vacuum full
analyze
In fact, cluster is exactly the command you are looking for. It will
drop the indexes, do a complete table
Sergey Aleynikov wrote:
Hello,
* Is there any way I can nail the query planner to a particular query plan,
rather than have it keep changing its mind?
All these setting leads to choosing different plans. If you have small
number of complex sensitive queires, you can run explain on them with
Dear All,
Thanks very much for your help so far. My understanding of PG is getting
a lot better!
I wonder if I've understood analyze properly: I'm not sure I quite
understand how specific the statistics gathered actually are.
In particular, what happens in the following case:
1. I start
Matthew Wakeling wrote:
We're about to purchase a new server to store some of our old databases,
and I was wondering if someone could advise me on a RAID card. We want
to make a 6-drive SATA RAID array out of 2TB drives, and it will be RAID
5 or 6 because there will be zero write traffic.
Thanks very much for your help so far.
(it is pretty confusing that the HashAggregate reports ~6M rows, but
the sort does 41M rows, but maybe I am misreading this).
Anyway, I think that if you up the work_mem for this query to 512MB,
the sort will be done in memory, and thus be plenty faster.
Tried this
to ask sensible queries,
why can you trust them to set their own work_mem values?
Richard
On Nov 20, 2009, at 4:39 PM, Richard Neill wrote:
Justin Pitts wrote:
Set work_mem in postgresql.conf down to what the 200 clients need,
which sounds to me like the default setting.
In the session
Greg Smith wrote:
Richard Neill wrote:
Am I missing something though, or is this project dormant, without
having released any files?
My bad--gave you the wrong url.
http://git.postgresql.org/gitweb?p=pg_top.git;a=summary has the project
I meant to point you toward.
Will try that out
Dear All,
Thanks for your help earlier with the previous question. I wonder if I
might ask another.
We have various queries that need to run, of which I'm going to focus on
2, vox and du_report.
Both of them are extremely sensitive to the precise values of
random_page_cost and
Axel Rau wrote:
Am 13.11.2009 um 14:57 schrieb Laszlo Nagy:
I was thinking about ARECA 1320 with 2GB memory + BBU. Unfortunately,
I cannot find information about using ARECA cards with SSD drives.
They told me: currently not supported, but they have positive customer
reports. No date yet for
Kevin Grittner wrote:
Richard Neill wrote:
SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
FROM
core.demand,
viwcs.previous_wave
LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
For comparison, how
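One possible restructuring of the quoted query, using explicit join syntax throughout; this is a sketch, and note that the original WHERE clause already forces the LEFT OUTER JOIN to behave as an inner join (NULL rows from wave_end_demand cannot satisfy the equality):

```sql
-- Equivalent intent with explicit joins, which is easier for both the
-- reader and the planner to reason about than mixing comma-joins with
-- OUTER JOIN:
SELECT core.demand.qty - viwcs.wave_end_demand.qty_remaining
FROM viwcs.previous_wave
LEFT OUTER JOIN viwcs.wave_end_demand USING (wid)
JOIN core.demand ON core.demand.id = viwcs.wave_end_demand.demand_id;
```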
Thom Brown wrote:
It looks like your statistics are way out of sync with the real data.
Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual
time=248577.879..253168.466 rows=347308 loops=1)
This shows that it thinks there will be 8,686 rows, but actually
traverses
Thom Brown wrote:
Okay, have you tried monitoring the connections to your database?
Try: select * from pg_stat_activity;
Tried that - it's very useful as far as it goes. I can see that in most
cases, the DB is running just the one query.
What I really want to know is, how far through
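pg_stat_activity can at least show how long each query has been running, even if not how far through it is; the sketch below uses the 8.x-era column names seen in this thread (procpid and current_query were renamed to pid and query in 9.2):

```sql
-- Elapsed time of every non-idle backend, longest-running first.
SELECT procpid, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;
```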
Thom Brown wrote:
2009/11/20 Richard Neill rn...@cam.ac.uk mailto:rn...@cam.ac.uk
It might also help if you posted your postgresql.conf too.
Below (have removed the really non-interesting bits).
Thanks,
Richard
I can't actually see anything in your config that would
Fernando Hevia wrote:
-Original Message-
From: Richard Neill
max_connections = 500 # (change requires restart)
work_mem = 256MB                        # min 64kB
Not that it has to do with your current problem but this combination could
bog your server
Justin Pitts wrote:
Set work_mem in postgresql.conf down to what the 200 clients need, which
sounds to me like the default setting.
In the session which needs more work_mem, execute:
SET SESSION work_mem TO '256MB'
Isn't that terribly ugly? It seems to me less hackish to rely on the
many
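A middle ground between a global setting and per-session SET commands is to attach the setting to a role, so every session for that role inherits it; the role name here is a placeholder:

```sql
-- Sessions logging in as report_user get the larger work_mem
-- automatically, with no application-side SET needed.
ALTER ROLE report_user SET work_mem = '256MB';
```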
Dear All,
I've just joined this list, so let me first thank you in advance for
your hospitality.
I'm having lots of trouble with variously slow running queries on a
production system. I've tried all the obvious fixes: changing the
query planner, checking for indexing, autovacuum, making
Thanks for your help. This issue splits into 2 bits:
1. Fixing specific queries.
2. Finding out when a specific running query is going to complete.
(At the moment, this is the bit I really need to know).
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your SQL here; ...
be sure to run it in a transaction if you want to be able roll it
back. Perhaps try EXPLAIN your SQL; first as it is faster, but
EXPLAIN ANALYZE shows what the planner is doing.
Is
Greg Williamson wrote:
Richard --
You might post the results of EXPLAIN ANALYZE your SQL here; ...
be sure to run it in a transaction if you want to be able roll it
back. Perhaps try EXPLAIN your SQL; first as it is faster, but
EXPLAIN ANALYZE shows what the planner is doing.
Here's