[PERFORM] Table partitioning

2011-03-05 Thread Tobias Brox
Sorry for not responding directly to your question and for changing the subject ... ;-) On 4 March 2011 18:18, Landreville landrevi...@deadtreepages.com wrote: That is partitioned into about 3000 tables by the switchport_id (FK to a lookup table), each table has about 30 000 rows currently (a
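A minimal sketch of that kind of setup, using hypothetical table and column names around the switchport_id key (inheritance plus CHECK constraints, the usual partitioning approach in this PostgreSQL era):

    -- Hypothetical parent table; switchport_id is the partitioning key (FK to a lookup table).
    CREATE TABLE port_samples (
        switchport_id integer     NOT NULL,
        sample_time   timestamptz NOT NULL,
        octets        bigint
    );

    -- One child table per switchport_id, constrained so the planner can exclude it.
    CREATE TABLE port_samples_sp1 (
        CHECK (switchport_id = 1)
    ) INHERITS (port_samples);

    CREATE INDEX port_samples_sp1_time_idx ON port_samples_sp1 (sample_time);

    -- With constraint_exclusion enabled, queries against the parent only touch the
    -- children whose CHECK constraints can match the WHERE clause.
    SET constraint_exclusion = on;
    SELECT * FROM port_samples
     WHERE switchport_id = 1
       AND sample_time > now() - interval '1 day';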

Re: [PERFORM] Table partitioning

2011-03-05 Thread Tobias Brox
On 5 March 2011 12:59, Mark Thornton mthorn...@optrak.co.uk wrote: If your partitions are loosely time based and you don't want to discard old data, then surely the number of partitions will grow without limit. True, but is it relevant? With monthly table partitioning it takes hundreds of years


Re: [PERFORM] Why we don't want hints Was: Slow count(*) again...

2011-02-11 Thread Tobias Brox
2011/2/11 Віталій Тимчишин tiv...@gmail.com: If the list is hard-coded, you can create partial index  on account_transaction(account_id, created desc) where trans_type_id in ( ... long, hard-coded list ...) My idea as well, though it looks ugly and it would be a maintenance head-ache
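A sketch of the partial index being proposed here; the account_transaction column names come from the snippet, while the IN-list values are hypothetical:

    -- Partial index restricted to the transaction types the report cares about.
    -- Because the list is hard-coded, every new transaction type means rebuilding
    -- or replacing the index - the maintenance headache mentioned above.
    CREATE INDEX account_transaction_hot_types_idx
        ON account_transaction (account_id, created DESC)
     WHERE trans_type_id IN (3, 7, 12, 14);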

Re: [PERFORM] Why we don't want hints Was: Slow count(*) again...

2011-02-11 Thread Tobias Brox
2011/2/11 Vitalii Tymchyshyn tiv...@gmail.com: My idea as well, though it looks ugly and it would be a maintenance head-ache (upgrading the index as new transaction types are added would mean costly write locks on the table, Create new one concurrently. Concurrently? Are there any ways to
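The "create new one concurrently" suggestion refers to CREATE INDEX CONCURRENTLY (available since 8.2), which builds the replacement index without holding a long write lock on the table; a sketch continuing the hypothetical index from the previous entry:

    -- Build the widened partial index without blocking writes; it takes longer and
    -- scans the table twice, but only takes brief locks.
    CREATE INDEX CONCURRENTLY account_transaction_hot_types_new_idx
        ON account_transaction (account_id, created DESC)
     WHERE trans_type_id IN (3, 7, 12, 14, 21);

    -- Once the new index is valid, drop the old one.
    DROP INDEX account_transaction_hot_types_idx;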

Re: [PERFORM] Why we don't want hints Was: Slow count(*) again...

2011-02-10 Thread Tobias Brox
On 4 February 2011 04:46, Josh Berkus j...@agliodbs.com wrote: Optimizer hints are used to work around problems in the optimizer and introduce upgrade and maintenance issues. We would rather have the problems reported and fixed. We have discussed a more sophisticated system of per-class cost

Re: [PERFORM] table partitioning and select max(id)

2011-02-05 Thread Tobias Brox
[Greg Smith] Here's the comment from that describing the main technique used to fix it: This module tries to replace MIN/MAX aggregate functions by subqueries of the form (SELECT col FROM tab WHERE ... ORDER BY col ASC/DESC LIMIT 1) Huh ... that sounds a bit like pg 8.0 to me ;-) I

[PERFORM] table partitioning and select max(id)

2011-02-04 Thread Tobias Brox
I implemented table partitioning, and it caused havoc with a select max(id) on the parent table - the query plan has changed from a lightning-fast backwards index scan to a deadly seq scan. Both partitions are set up with a primary key index and draw new IDs from the same sequence ... select
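A workaround commonly used before the planner learned to push MIN/MAX down into child tables was to take the maximum per partition explicitly and combine the results; a sketch with hypothetical partition names:

    -- Each inner query can use the fast backwards index scan on its own partition;
    -- the outer max() just compares the per-partition results.
    SELECT max(id)
      FROM (SELECT max(id) AS id FROM my_table_2010
            UNION ALL
            SELECT max(id) AS id FROM my_table_2011) AS per_partition;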

Re: [PERFORM] Full Text index is not using during OR operation

2010-11-29 Thread Tobias Brox
Just a general note re the subject, I've also had troubles with postgres being unable to optimize a query with OR. The work-around, although a bit messy, was to use a UNION-query instead.
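A sketch of the OR-to-UNION rewrite mentioned here, with hypothetical table and column names (one branch served by a full-text index, the other by a btree):

    -- In some cases the planner fails to use an index for either side of the OR ...
    SELECT id FROM documents
     WHERE to_tsvector('english', body) @@ to_tsquery('english', 'postgres')
        OR author_id = 42;

    -- ... but it can when each branch is its own query; UNION also removes duplicates.
    SELECT id FROM documents
     WHERE to_tsvector('english', body) @@ to_tsquery('english', 'postgres')
    UNION
    SELECT id FROM documents
     WHERE author_id = 42;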

Re: [PERFORM] Difference between explain analyze and real execution time

2010-11-15 Thread Tobias Brox
[Tom Lane] EXPLAIN ANALYZE doesn't account for all of the runtime involved.  In this case, I'd bet that session startup/shutdown is a big part of the difference. The session startup/shutdown should be the same for the real SQL and the broken SQL, shouldn't it? [Artur Zając] time psql -c

Re: [PERFORM] Memory usage - indexes

2010-09-29 Thread Tobias Brox
I just got this crazy, stupid or maybe genius idea :-) One thing that I've learned in this thread is that fat indexes (i.e. some index on some_table(a,b,c,d,e,f)) are to be avoided as much as possible. One of our biggest indexes looks like this: acc_trans(customer_id, trans_type, created) For

Re: [PERFORM] Memory usage - indexes

2010-09-29 Thread Tobias Brox
On 29 September 2010 10:03, Mark Kirkwood mark.kirkw...@catalyst.net.nz Yeah, I think the idea of trying to have a few smaller indexes for the 'hot' customers is a good idea. However I am wondering if just using single column indexes and seeing if the bitmap scan/merge of smaller indexes is

Re: [PERFORM] Memory usage - indexes

2010-09-25 Thread Tobias Brox
On 25 September 2010 00:00, Greg Smith g...@2ndquadrant.com wrote: Overindexed tables containing more columns than are actually selective is a very popular source of PostgreSQL slowdowns.  It's easy to say oh, I look this data up using columns a,b,c, so lets put an index on a,b,c.  But if an

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 18:23, Bob Lunney bob_lun...@yahoo.com wrote: Consult pg_statio_user_indexes to see which indexes have been used and how much. What are the main differences between pg_statio_user_indexes and pg_stat_user_indexes?  Indexes with comparatively low usage rates aren't
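A sketch of how the two views are usually combined, to see both how often an index is used (pg_stat_user_indexes.idx_scan) and how its blocks are read from disk versus found in the buffer cache (pg_statio_user_indexes):

    -- Rarely used indexes and their I/O pattern; idx_scan counts index scans,
    -- idx_blks_read/idx_blks_hit count block reads from disk vs. buffer-cache hits.
    SELECT s.relname,
           s.indexrelname,
           s.idx_scan,
           io.idx_blks_read,
           io.idx_blks_hit
      FROM pg_stat_user_indexes   s
      JOIN pg_statio_user_indexes io USING (indexrelid)
     ORDER BY s.idx_scan;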

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 00:12, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: All good questions! Before (or maybe as well as) looking at index sizes vs memory I'd check to see if any of your commonly run queries have suddenly started to use different plans due to data growth, e.g: - index

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 19:16, Brad Nicholson bnich...@ca.afilias.info wrote: [Brad Nicholson] Why is the vacuum dragging out over time?  Is the size of your data increasing, are you doing more writes that leave dead tuples, or are your tables and/or indexes getting bloated? Digressing a bit here

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 21:06, Bob Lunney bob_lun...@yahoo.com wrote: First off, what version of PostgreSQL are you running?  If you have 8.4, nightly vacuuming shouldn't be necessary with properly tuned autovacuum jobs. 8.3. We'll upgrade to 9.0 during the December holidays fwiw. But point

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 00:12, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: Re index size, you could try indexes like: some_table(a) some_table(b) which may occupy less space, and the optimizer can bitmap and/or them to work like the compound index some_table(a,b). Hm ... never
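The idea here is to let the planner combine two narrow indexes with a bitmap AND instead of maintaining one wide compound index; a sketch using the some_table(a, b) names from the snippet:

    CREATE INDEX some_table_a_idx ON some_table (a);
    CREATE INDEX some_table_b_idx ON some_table (b);

    -- For a query filtering on both columns, look for a BitmapAnd node over two
    -- Bitmap Index Scans in the plan - that is the two indexes working together.
    EXPLAIN ANALYZE
    SELECT * FROM some_table WHERE a = 1 AND b = 2;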

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
On 24 September 2010 21:24, Brad Nicholson bnich...@ca.afilias.info wrote: The pertinent difference between pg_stat_user_indexes and pg_statio_user_indexes is the latter shows the number of blocks read from disk or found in the cache. I have a minor, but very important correction involving

Re: [PERFORM] Memory usage - indexes

2010-09-24 Thread Tobias Brox
Thanks for spending your time on this ... amidst all the useful feedback I've received, I'd rate your post as the most useful post. 1) Are there any good ways to verify my hypothesis? You can easily confirm whether the contents of the PostgreSQL buffer cache contain what you think they do by
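The usual tool for checking what is actually sitting in shared_buffers is the contrib module pg_buffercache; a sketch assuming the module is installed, an 8.x-style join on relfilenode, and the default 8 kB block size:

    -- Which relations occupy the most shared_buffers pages right now?
    SELECT c.relname,
           count(*)                        AS buffers,
           pg_size_pretty(count(*) * 8192) AS approx_size  -- assumes 8 kB blocks
      FROM pg_buffercache b
      JOIN pg_class c ON c.relfilenode = b.relfilenode
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 20;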

Re: [PERFORM] locking issue on simple selects?

2010-09-23 Thread Tobias Brox
On 15 September 2010 12:05, Tobias Brox tobi...@gmail.com wrote: Recently we've frequently encountered issues where some simple selects (meaning, selects doing an index lookup and fetching one row) have become stuck for several minutes.  Apparently all requests on one exact table gets stuck

[PERFORM] Memory usage - indexes

2010-09-23 Thread Tobias Brox
We've come to a tipping point with one of our database servers, it's generally quite loaded but up until recently it was handling the load well - but now we're seeing that it struggles to process all the selects fast enough. Sometimes we're observing some weird lock-like behaviour (see my other

Re: [PERFORM] locking issue on simple selects?

2010-09-23 Thread Tobias Brox
On 23 September 2010 22:55, Kevin Grittner kevin.gritt...@wicourts.gov wrote: Have you turned on checkpoint logging? Yes ... it seems so: 13:19:13.840 - LOG: checkpoint complete: wrote 3849 buffers (0.2%); 0 transaction log file(s) added, 0 removed, 5 recycled; write=269.551 s, sync=0.103 s,

[PERFORM] locking issue on simple selects?

2010-09-15 Thread Tobias Brox
We have a production database server ... it's quite busy but usually working completely fine, simple queries taking a fraction of a millisecond to run. Recently we've frequently encountered issues where some simple selects (meaning, selects doing an index lookup and fetching one row) have become

Re: [PERFORM] locking issue on simple selects?

2010-09-15 Thread Tobias Brox
On 15 September 2010 15:39, Tom Lane t...@sss.pgh.pa.us wrote: An exclusive lock will block selects too.  Have you looked into pg_locks for ungranted lock requests? Well - I thought so, we have a logging script that logs the content of the pg_locks table, it didn't log anything interesting but
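A sketch of the sort of query such a logging script would run, listing ungranted lock requests together with the waiting backend (procpid and current_query are the pre-9.2 column names in pg_stat_activity):

    -- Lock requests that are not granted, and what the waiting backend is doing.
    SELECT l.pid,
           l.locktype,
           l.relation::regclass  AS relation,
           l.mode,
           now() - a.query_start AS waiting_for,
           a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;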

Re: [PERFORM] locking issue on simple selects?

2010-09-15 Thread Tobias Brox
On 15 September 2010 21:28, Greg Smith g...@2ndquadrant.com wrote:  There are some useful examples of lock views on the wiki: http://wiki.postgresql.org/wiki/Lock_Monitoring http://wiki.postgresql.org/wiki/Lock_dependency_information http://wiki.postgresql.org/wiki/Find_Locks Thanks. I think

Re: [PERFORM] Disable WAL completely

2008-02-18 Thread Tobias Brox
[Erik Jones] Right. Without the xlog directory you'll have very little chance of ever doing any kind of clean stop/start of your database. If you don't need the reliability offered by Postgres's use of transaction logs you'll probably be much better served with a different database

Re: [PERFORM] Anyone using a SAN?

2008-02-13 Thread Tobias Brox
[Peter Koczan - Wed at 10:56:54AM -0600] We're considering setting up a SAN where I work. Is there anyone using a SAN, for postgres or other purposes? If so I have a few questions for you. Some time ago, my boss was planning to order more hardware - including a SAN - and coincidentally, SANs

Re: [PERFORM] Anyone using a SAN?

2008-02-13 Thread Tobias Brox
[Arjen van der Meijden] Your SAN-pusher should have a look at the HP-submissions for TPC-C... The recent Xeon systems are all without SAN's and still able to connect hundreds of SAS-disks. Yes, I had a feeling that the various alternative solutions for direct connection hadn't been

Re: [PERFORM] autovacuum: recommended?

2007-11-18 Thread Tobias Brox
[EMAIL PROTECTED] The table was quite huge (say 20k of products along with detailed descriptions etc.) and was completely updated about 12x each day, i.e. it grew to about 12x the original size (and 11/12 of the rows were dead). This caused a serious slowdown of the application each day,

Re: [PERFORM] autovacuum: recommended?

2007-11-16 Thread Tobias Brox
[Gábor Farkas - Fri at 10:40:43AM +0100] my question is: is it recommended to use it? or in other words, should i only use autovacuum? or it's better to use manual-vacuuming? which one is the way of the future :) ? or should i use both auto-vacuum and manual-vacuum? Nightly vacuums are

Re: [PERFORM] SAN vs Internal Disks

2007-09-11 Thread Tobias Brox
[Decibel! - Tue at 06:07:44PM -0500] It's still in the same chassis, though, which means if you lose memory or mobo you're still screwed. In a SAN setup for redundancy, there's very little in the way of a single point of failure; generally only the backplane, and because there's very little

Re: [PERFORM] SAN vs Internal Disks

2007-09-07 Thread Tobias Brox
We're also considering to install postgres on SAN - that is, my boss is convinced this is the right way to go. Advantages: 1. Higher I/O (at least the salesman claims so) 2. Easier to upgrade the disk capacity 3. Easy to set up warm standby functionality. (Then again, if the postgres server

Re: [PERFORM] Long running transaction in pg_activity_log

2007-08-22 Thread Tobias Brox
[Sachchida Ojha - Wed at 04:40:09PM -0400] I see some long running transaction in my pg_activity_log table. My app becomes almost unusable. My question is How can I query the database to see what sql these transactions are running. IDLE in transaction means that no sql query is running at
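A sketch of how to list those backends; since "IDLE in transaction" means no SQL is currently running, the interesting number is how long the backend has been sitting there (procpid/current_query are the pre-9.2 column names):

    -- Backends idle inside an open transaction; these block vacuum from removing
    -- dead rows and may hold locks.
    SELECT procpid,
           usename,
           now() - query_start AS idle_for
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
     ORDER BY query_start;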

Re: [PERFORM] Performance Testing Utility

2007-06-13 Thread Tobias Brox
[Christo Du Preez - Wed at 12:25:20PM +0200] Is there some kind of performance testing utility available for postgresql Something I can run after installing postgresql to help me identify if my installation is optimal. I've been battling for days now trying to sort out performance issues

Re: [PERFORM] pg_stat_* collection

2007-05-03 Thread Tobias Brox
[Alexander Staubo - Thu at 04:52:55PM +0200] I have been considering tarring them up as a proper release at some point. Anyone interested? Yes. Eventually I have my own collection as well: db_activity - counts the number of (all, slow, very slow, stuck idle in transaction) queries in
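A sketch of the kind of counting query such a monitoring plugin can run (the thresholds and the pre-9.2 '<IDLE>' conventions in current_query are assumptions):

    -- Counts suitable for graphing: active queries, slow ones, very slow ones,
    -- and backends stuck idle in an open transaction.
    SELECT sum(CASE WHEN current_query NOT LIKE '<IDLE>%' THEN 1 ELSE 0 END) AS active,
           sum(CASE WHEN current_query NOT LIKE '<IDLE>%'
                     AND now() - query_start > interval '5 seconds' THEN 1 ELSE 0 END) AS slow,
           sum(CASE WHEN current_query NOT LIKE '<IDLE>%'
                     AND now() - query_start > interval '1 minute' THEN 1 ELSE 0 END) AS very_slow,
           sum(CASE WHEN current_query = '<IDLE> in transaction'
                     AND now() - query_start > interval '5 minutes' THEN 1 ELSE 0 END) AS stuck_idle
      FROM pg_stat_activity;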

Re: [PERFORM] pg_stat_* collection

2007-05-03 Thread Tobias Brox
[Greg Smith - Fri at 12:53:55AM -0400] Munin is a very interesting solution to this class of problem. They've managed to streamline the whole data collection process by layering clever Perl hacks three deep. It's like the anti-SNMP--just build the simplest possible interface that will

[PERFORM] Long running transactions again ...

2007-04-10 Thread Tobias Brox
We had problems again, caused by long running transactions. I'm monitoring the pg_stat_activity view, checking the query_start of all requests that are not idle - but this one slipped under the radar as the application was running frequent queries towards the database. That's not what concerns
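A sketch of the distinction being made here: query_start only shows the age of the latest statement, while xact_start (added in 8.3) shows the age of the surrounding transaction, which is what catches an application that keeps a transaction open while firing frequent short queries:

    SELECT procpid,
           usename,
           now() - xact_start  AS transaction_age,
           now() - query_start AS current_query_age,
           current_query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_start
     LIMIT 10;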

Re: OT: Munin (was Re: [PERFORM] Determining server load from client)

2007-03-21 Thread Tobias Brox
I have my postgres munin monitoring script at http://oppetid.no/~tobixen/pg_activity.munin.txt (had to suffix it with .txt to make the local apache happy). I would like to see what others have done as well.

Re: OT: Munin (was Re: [PERFORM] Determining server load from client)

2007-03-21 Thread Tobias Brox
[Erik Jones - Wed at 09:31:48AM -0500] I use cacti (http://cacti.net) which does the same thing that munin does but in php instead. Here's what I use to send db stats to it (again, php): I haven't tried cacti, but our sysadm has done a little bit of research and concluded cacti is better.

[PERFORM] extract(field from timestamp) vs date dimension

2007-01-23 Thread Tobias Brox
Does anyone have experience with using postgres for data warehousing? Right, I saw one post suggesting to use mysql for a mostly read-only database ... but anyway, I think it's not a question to change the database platform for this project, at least not today ;-) Ralph Kimball seems to be some

Re: [PERFORM] extract(field from timestamp) vs date dimension

2007-01-23 Thread Tobias Brox
[Chad Wagner - Tue at 08:24:34AM -0500] I guess go with your gut, but at some point the expressions are going to be too complicated to maintain, and inefficient. The layout of my system is quite flexible, so it should eventually be fairly trivial to throw in a date dimension at a later stage.

[PERFORM] Planner statistics, correlations

2007-01-12 Thread Tobias Brox
We have a table with a timestamp attribute (event_time) and a state flag which usually changes value around the event_time (it goes to 4). Now we have more than two years of events in the database, and around 5k of future events. It is important to frequently pick out overdue events, say:

Re: [PERFORM] Planner statistics, correlations

2007-01-12 Thread Tobias Brox
[Peter Childs - Fri at 08:56:54AM +] Can you say what state might be rather than what it is not. I'm guessing that state is an int but there is only a limited list of possible states; if you can say what it might be rather than what it is not, the index is more likely to be used. explain select

Re: [PERFORM] Planner statistics, correlations

2007-01-12 Thread Tobias Brox
[Richard Huxton - Fri at 09:17:48AM +] Try a partial index: CREATE INDEX my_new_index ON events (event_time) WHERE state in (1,2,3); I have that, the index is used and the query is lightning fast - the only problem is that the planner is using the wrong estimates. This becomes a real
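For reference, the partial index quoted above and the kind of overdue-events query it serves look roughly like this:

    -- Only rows whose state can still change are indexed; finished events
    -- (state = 4) never enter the index, so it stays small.
    CREATE INDEX my_new_index
        ON events (event_time)
     WHERE state IN (1, 2, 3);

    -- The query must repeat the same predicate for the planner to match it
    -- against the partial index.
    SELECT *
      FROM events
     WHERE event_time < now()
       AND state IN (1, 2, 3);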

Re: [PERFORM] Planner statistics, correlations

2007-01-12 Thread Tobias Brox
[Heikki Linnakangas - Fri at 10:41:34AM +] I thought about partitioning the table by state, putting rows with state=4 into one partition, and all others to another partition. That sounds like a good idea - but wouldn't that be costly when changing state?

Re: [PERFORM] Missing the point of autovacuum

2007-01-06 Thread Tobias Brox
[Daryl Herzmann - Sat at 12:59:03PM -0600] As the months have gone by, I notice many of my tables having *lots* of unused item pointers. For example, Probably not the issue here, but we had some similar issue where we had many long-running transactions - i.e. some careless colleague entering

[PERFORM] max/min and index usage

2006-12-05 Thread Tobias Brox
We're using 8.1 - I thought such a construct was safe in pg 8.1: select max(indexed_value) from huge_table; while earlier we had to use: select indexed_value from huge_table order by indexed_value desc limit 1; seems like I was wrong: mydb= explain analyze select indexed_value1 from

Re: [PERFORM] max/min and index usage

2006-12-05 Thread Tobias Brox
[Tobias Brox - Wed at 04:01:56AM +0100] We're using 8.1 - I thought such a construct was safe in pg 8.1: select max(indexed_value) from huge_table; while earlier we had to use: select indexed_value from huge_table order by indexed_value desc limit 1; seems like I was wrong

Re: [PERFORM] max/min and index usage

2006-12-05 Thread Tobias Brox
[Tom Lane - Tue at 10:29:53PM -0500] These are not actually exactly the same thing. In particular, I suppose your table contains a lot of nulls? Yes; I'm sorry I was a bit quick with the first posting.
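The difference being pointed at: max() ignores NULLs, while ORDER BY indexed_value DESC puts NULLs first, so the manual LIMIT 1 form has to exclude them explicitly. A sketch of the usual way to make the manual form equivalent (assuming that is indeed the issue here):

    -- Equivalent to select max(indexed_value): skipping the NULL entries lets the
    -- backwards index scan stop at the first real value instead of returning NULL.
    SELECT indexed_value
      FROM huge_table
     WHERE indexed_value IS NOT NULL
     ORDER BY indexed_value DESC
     LIMIT 1;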

Re: [PERFORM] Defining performance.

2006-11-30 Thread Tobias Brox
[Paul Lathrop - Thu at 02:59:27PM -0800] growing disk space usage. The DBA had come to the conclusion that the VACUUM command did/does not work on these systems, because even after a VACUUM FULL, the size of the database was continually increasing. So, as things stand with the PG7.2 machines,

Re: [PERFORM] Defining performance.

2006-11-30 Thread Tobias Brox
[Jeff Davis - Thu at 04:57:54PM -0800] We're having the same issues, so we do the dumping and restoring every now and then to be sure everything is properly cleaned up. With 8.1. What's causing that? Is it index bloat? I would think a REINDEX would avoid having to dump/restore,

Re: [PERFORM] Defining performance.

2006-11-30 Thread Tobias Brox
[EMAIL PROTECTED] - Thu at 06:37:12PM -0600] As my dataset has gotten larger I have had to throw more metal at the problem, but I have also had to rethink my table and query design. Just because your data set grows linearly does NOT mean that the performance of your query is guaranteed to

Re: [PERFORM] Defining performance.

2006-11-30 Thread Tobias Brox
[Chris - Fri at 02:32:05PM +1100] Not really. A bad query is a bad query (eg missing a join element). It won't show up for 3000 rows, but will very quickly if you increase that by a reasonable amount. Even as simple as a missing index on a join column won't show up for a small dataset but

Re: [PERFORM] Setting nice values

2006-11-06 Thread Tobias Brox
[Madison Kelly - Mon at 08:10:12AM -0500] to run, which puts it into your drawback section. The server in question is also almost under load of some sort, too. A great tip and one I am sure to make use of later, thanks! I must have been sleepy, listing up cons vs drawbacks ;-) Anyway, the

Re: [PERFORM] Setting nice values

2006-11-06 Thread Tobias Brox
[Madison Kelly - Mon at 08:48:19AM -0500] Ah, sorry, long single queries is what you meant. No - long running single transactions :-) If it's only read-only queries, one will probably benefit by having one transaction for every query.

Re: [PERFORM] Setting nice values

2006-11-02 Thread Tobias Brox
[Madison Kelly - Thu at 10:25:07AM -0500] Will the priority of the script pass down to the pgsql queries it calls? I figured (likely incorrectly) that because the queries were executed by the psql server the queries ran with the server's priority. I think you are right, and in any case, I

[PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
I just came to think about /proc/sys/vm/swappiness ... When this one is set to a high number (say, 100 - which is maximum), the kernel will aggressively swap out all memory that is not being accessed, to allow more memory for caches. For a postgres server, OS caches are good, because postgres

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Jim C. Nasby - Thu at 10:28:31AM -0500] I think it'd be much better to experiment with using much larger shared_buffers settings. The conventional wisdom there is from 7.x days when you really didn't want a large buffer, but that doesn't really apply with the new buffer management we got in

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Jim C. Nasby - Thu at 10:28:31AM -0500] I think it'd be much better to experiment with using much larger shared_buffers settings. The conventional wisdom there is from 7.x days when you really didn't want a large buffer, but that doesn't really apply with the new buffer management we got in

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Jim C. Nasby - Thu at 11:31:26AM -0500] Yeah, test setups are a good thing to have... We would need to replicate the production traffic as well to do reliable tests. Well, we'll get to that one day ... The issue with pg_xlog is you don't need bandwidth... you need super-low latency. The

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Jim C. Nasby - Thu at 11:45:32AM -0500] The issue with pg_xlog is you don't need bandwidth... you need super-low latency. The best way to accomplish that is to get a battery-backed RAID controller that you can enable write caching on. Sounds a bit risky to me :-) Well, you do

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Jim C. Nasby - Thu at 12:00:39PM -0500] What's reasonable for work_mem depends on your workload. If you've got some reporting queries that you know aren't run very concurrently they might benefit from large values of work_mem. For stats.distributed.net, I set work_mem to something like 2MB in

Re: [PERFORM] Swappiness setting on a linux pg server

2006-10-19 Thread Tobias Brox
[Ron - Thu at 03:10:35PM -0400] Jim is correct that traditional 7.x folklore regarding shared buffer size is nowhere near as valid for 8.x. Jim tends to know what he is talking about when speaking about pg operational issues. I would not doubt it, but it's always better to hear it from more

Re: [ADMIN] [PERFORM] autovacuum on a -mostly- r/o table

2006-10-15 Thread Tobias Brox
[Matthew T. O'Connor - Wed at 02:33:10PM -0400] In addition autovacuum respects the work of manual or cron based vacuums, so if you issue a vacuum right after a daily batch insert / update, autovacuum won't repeat the work of that manual vacuum. I was experimenting a bit with autovacuum now.

Re: [ADMIN] [PERFORM] autovacuum on a -mostly- r/o table

2006-10-15 Thread Tobias Brox
[Matthew T. O'Connor - Sun at 10:42:34AM -0400] Yeah, I think if the delay settings are too high it can cause problems, that's part of the reason we have yet to turn these on be default since we won't have enough data to suggest good values. Can you tell us what settings you finally

[PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
While doing a verbose vacuum, I'm constantly hitting things like: DETAIL: 3606 dead row versions cannot be removed yet. I believe this is a problem, because I still do have some empty tables requiring up to 3-400 ms just to check if the table is empty (see thread slow queue-like empty table).

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tom Lane - Tue at 12:23:40PM -0400] Look in pg_locks to see the lowest-numbered transaction ID --- each transaction will be holding exclusive lock on its own XID. You can correlate that back to pg_stat_activity via the PID. Thanks a lot for the quick reply - I've already identified one

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tobias Brox - Tue at 06:39:13PM +0200] Thanks a lot for the quick reply - I've already identified one long-running transaction. belonging to autovacuum ... how come?

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tom Lane - Tue at 12:42:52PM -0400] belonging to autovacuum ... how come? Blocked on someone else's lock, maybe? hardly, the autovacuum is the only one having such a low transaction id, and also the only one hanging around when waiting a bit and rechecking the pg_locks table.

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tom Lane - Tue at 01:18:27PM -0400] Hmph. Is the autovac process actually doing anything (strace would be revealing)? It's definitely doing something; mostly reading, but also a few writes, semops and opens. If not, can you attach to the autovac process with gdb and get a stack

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tom Lane - Tue at 02:04:55PM -0400] It seems stuck, has had the same transid for a long while, and the number of undeletable dead rows in our tables are increasing. Perhaps you have overly aggressive vacuum cost delay settings? Perhaps, though I wouldn't expect it to sleep in the middle

Re: [PERFORM] long running transactions

2006-10-10 Thread Tobias Brox
[Tom Lane - Tue at 02:26:53PM -0400] autovacuum_vacuum_cost_delay = 500 autovacuum_vacuum_cost_limit = 200 Well, that's going to cause it to sleep half a second after every dozen or so page I/Os. I think you'd be well advised to reduce the delay. Modified it to 20/250, and it

Re: [PERFORM] Simple join optimized badly?

2006-10-09 Thread Tobias Brox
[Jim C. Nasby - Mon at 04:18:27PM -0500] I can agree to that, but we'll never get any progress so long as every time hints are brought up the response is that they're evil and should never be in the database. I'll also say that a very simple hinting language (ie: allowing you to specify access

Re: [PERFORM] slow queue-like empty table

2006-10-04 Thread Tobias Brox
[Csaba Nagy - Thu at 10:45:35AM +0200] So you should check for idle in transaction sessions, those are bad... or any other long running transaction. Thank you (and others) for pointing this out, you certainly set us on the right track. We did have some few unclosed transactions; transactions

Re: [PERFORM] Unsubscribe

2006-10-04 Thread Tobias Brox
To be a bit constructive, could it be an idea to add unsubscribe information as one of the standard tailer tips? Then unsubscribe info wouldn't appear in every mail, but often enough for people considering to unsubscribe. To be totally non-constructive, let me add a bit to the noise below:

[PERFORM] Multi-key index not beeing used - bug?

2006-10-04 Thread Tobias Brox
Look at this: NBET= explain select * from account_transaction where users_id=123456 order by created desc limit 10; QUERY PLAN

Re: [PERFORM] Multi-key index not beeing used - bug?

2006-10-04 Thread Tobias Brox
[Tom Lane - Wed at 04:33:54PM -0400] We have indices on the users_id field and the (users_id, created)-tuple. Neither of those indexes can provide the sort order the query is asking for. Ah; that's understandable - the planner has two options, to do an index traversal without any extra
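A workaround often suggested for this limitation in old releases (a common trick of the era, not necessarily the outcome of this particular thread) was to spell out the leading index column in the ORDER BY, so that a backwards scan of the (users_id, created) index visibly matches the requested ordering:

    -- users_id is pinned by the WHERE clause, so adding it to the ORDER BY does not
    -- change the result, but lets the planner see that the compound index, read
    -- backwards, already delivers rows in the requested order.
    SELECT *
      FROM account_transaction
     WHERE users_id = 123456
     ORDER BY users_id DESC, created DESC
     LIMIT 10;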

Re: [PERFORM] Poor performance on very simple query ?

2006-10-03 Thread Tobias Brox
[Arnaud Lesauvage - Tue at 01:25:10PM +0200] I have a performance problem, but I am not sure whether it really is a problem or not. QUERY PLAN

Re: [PERFORM] Poor performance on very simple query ?

2006-10-03 Thread Tobias Brox
[Tobias Brox - Tue at 02:10:04PM +0200] Did you try analyze as well? It's weird it's using seq scan, since you have a primary key it's supposed to have an index ... though 500 rows is little. I just checked up our own production database, takes 0.08 ms to fetch a row by ID from one of our

Re: [PERFORM] Poor performance on very simple query ?

2006-10-03 Thread Tobias Brox
[Arnaud Lesauvage - Tue at 02:13:59PM +0200] Tobias Brox wrote: Oh, the gid is not primary key. I guess I should also apologize for adding noise here :-) Yes, it is a primary key, but I am the noise maker here ! ;-) Oh - it is. How can you have a default value on a primary key

[PERFORM] slow queue-like empty table

2006-09-28 Thread Tobias Brox
I have a query which really should be lightning fast (limit 1 from index), but which isn't. I've checked the pg_locks table, there are no locks on the table. The database is not under heavy load at the moment, but the query seems to draw CPU power. I checked the pg_locks view, but found nothing

Re: [PERFORM] slow queue-like empty table

2006-09-28 Thread Tobias Brox
[Tobias Brox - Thu at 08:56:31AM +0200] It really seems like some transaction is still viewing the queue, since it found 38k of non-removable rows ... but how do I find the pid of the transaction viewing the queue? As said, the pg_locks didn't give me any hints ... Dropping the table

Re: [PERFORM] Merge Join vs Nested Loop

2006-09-27 Thread Tobias Brox
[Tom Lane - Tue at 06:09:56PM -0400] If your tables are small enough to fit (mostly) in memory, then the planner tends to overestimate the cost of a nestloop because it fails to account for caching effects across multiple scans of the inner table. This is addressed in 8.2, but in earlier

Re: [PERFORM] Merge Join vs Nested Loop

2006-09-27 Thread Tobias Brox
[Scott Marlowe - Wed at 09:58:30AM -0500] Have you tried changing the cpu_* cost options to see how they affect merge versus nested loop? As said in the original post, increasing any of them shifts the planner towards nested loops instead of merge_join. I didn't check which one of the cost

Re: [PERFORM] Merge Join vs Nested Loop

2006-09-27 Thread Tobias Brox
[Scott Marlowe - Wed at 10:19:24AM -0500] So, by decreasing them, you should move away from nested loops then, right? Has that not worked for some reason? I want to move to nested loops, they are empirically faster in many of our queries, and that makes sense since we've got quite big tables

Re: [PERFORM] Merge Join vs Nested Loop

2006-09-27 Thread Tobias Brox
[Scott Marlowe - Wed at 10:31:35AM -0500] And remember, you can always change any of those settings in session for just this one query to force the planner to make the right decision. sure ... I could identify the most problematic queries, and hack up the software application to modify the

Re: [PERFORM] autovacuum on a -mostly- r/o table

2006-09-27 Thread Tobias Brox
[Edoardo Ceccarelli - Wed at 06:49:23PM +0200] ...another thing is, how could autovacuum check for machine load, this is something I cannot imagine right now... One solution I made for our application, is to check the pg_stat_activity view. It requires some config to get the stats available

Re: [PERFORM] autovacuum on a -mostly- r/o table

2006-09-27 Thread Tobias Brox
[Edoardo Ceccarelli - Wed at 06:08:30PM +0200] We also activated the autovacuum feature to give it a try and that's were our problems started. (...) How can I configure the vacuum to run after the daily batch insert/update? I think you shouldn't use autovacuum in your case. We haven't dared

Re: [PERFORM] Merge Join vs Nested Loop

2006-09-27 Thread Tobias Brox
I found a way to survive yet some more weeks :-) One of the queries we've had most problems with today is in principle something like: select A.*,sum(B.*) from A join B where A.created < x and ... order by A.created desc limit 32 group by A.* There are on average two rows in B for every row in
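One common shape for such a rewrite is to apply the LIMIT to A in a subquery first, and only then join and aggregate B, so the aggregation only touches a handful of A rows; a sketch with hypothetical column names (amount, a_id) and a stand-in constant for x:

    SELECT a.id,
           a.created,
           sum(b.amount) AS b_sum           -- 'amount' is a stand-in column name
      FROM (SELECT id, created
              FROM A
             WHERE created < '2006-09-01'   -- stand-in for "A.created < x and ..."
             ORDER BY created DESC
             LIMIT 32) AS a
      JOIN B ON b.a_id = a.id
     GROUP BY a.id, a.created
     ORDER BY a.created DESC;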

[PERFORM] Merge Join vs Nested Loop

2006-09-26 Thread Tobias Brox
I have some odd cases here joining two tables - the planner insists on Merge Join, but Nested Loop is really faster - and that makes sense, since I'm selecting just a small partition of the data available. All planner constants seems to be set at the default values, the only way to get a shift

[PERFORM] The need for full vacuum / reindex

2005-09-27 Thread Tobias Brox
By occation, we dropped the whole production database and refreshed it from a database backup - and all our performance problems seems to have gone. I suppose this means that to keep the database efficient, one eventually does have to do reindexing and/or full vacuum from time to time? --

Re: [PERFORM] Swapping

2005-08-31 Thread Tobias Brox
[Carlos Henrique Reimer - Wed at 03:25:15PM -0300] I'm trying to tune a linux box with a 12 GB database and 4 GB RAM. First of all I would like to stop the swapping, so the shared_buffers and sort_mem were decreased but even so it started swapping two hours after DBMS started up. I would

Re: [PERFORM] Swapping

2005-08-31 Thread Tobias Brox
[Tobias Brox - Wed at 09:22:17PM +0200] I'd trust linux to handle swap/cache sensibly. Eventually, become involved with kernel hacking ;-) Of course, there are also some files in /proc/sys/vm that you may want to peek into, for tuning the swapping. Particularly, at later 2.6-kernels (I'm

Re: [PERFORM] Limit + group + join

2005-08-27 Thread Tobias Brox
[Tom Lane] I looked into this and (...) I've committed some changes that hopefully will let 8.1 be smarter about GROUP BY ... LIMIT queries. [Mark Kirkwood] Very nice :-) (...) This is 8.1devel from today. Splendid :-) Unfortunately we will not be upgrading for some months still, but anyway

[PERFORM] Limit + group + join

2005-08-25 Thread Tobias Brox
Consider this setup - which is a gross simplification of parts of our production system ;-) create table c (id integer primary key); create table b (id integer primary key, c_id integer); create index b_on_c on b(c_id) insert into c (select ... lots of IDs ...); insert into b (select

Re: [PERFORM] Limit + group + join

2005-08-25 Thread Tobias Brox
[Jeffrey W. Baker - Thu at 06:56:59PM -0700] explain select c.id from c join b on c_id=c.id group by c.id order by c.id desc limit 5; Where's b in this join clause? join b on c_id=c.id It's just a funny way of writing: select c.id from c,b where c_id=c.id group by c.id order by c.id desc

[PERFORM] partial index regarded more expensive

2005-08-10 Thread Tobias Brox
to be better? ('vacuum analyze game' did not significantly impact the numbers, and I've tried running the queries a few times with and without the game_by_state-index to rule out caching effects)

Re: [PERFORM] partial index regarded more expensive

2005-08-10 Thread Tobias Brox
[PFC - Wed at 08:15:13PM +0200] why not simply create an index on (game_end, state) ? No, the planner prefers to use the partial index (I dropped the index on game(state)).

[PERFORM] nice/low priority Query

2005-08-02 Thread Tobias Brox
. Ideally, if it was trivial to give priorities, it should be possible to keep the CPUs running at 100% for hours without causing critical problems...?

Re: [PERFORM] nice/low priority Query

2005-08-02 Thread Tobias Brox
[Tobias Brox - Tue at 06:04:34PM +0200] (...) and at one point in the code I'm even asking the database for explain plan, grepping out the estimated cost number, and referring the user to take contact with the IT-dept if he really needs the report. But I digress. I just came to think about
