Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-14 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 Yeah, I'm not familiar with the logic in that area of the code, so I
 can't comment all that intelligently.  However, I feel like there's a
 class of things that could potentially be optimized if we know that
 the only snapshot they could affect is the one we're currently using.

Yeah, perhaps.  The other thing I noticed while looking at the code is
that CREATE INDEX's test to see whether there are broken HOT chains is
borderline brain-dead: if there are any recently-dead HOT-updated tuples
in the table, it assumes they represent broken HOT chains, whether they
really do or not.  In principle you could find the live member of the
chain and see whether or not it is really different from the dead member
in the columns used by the new index.  In Jon's example that would win
because his update didn't actually change the indexed column.  It's
unclear though that it would be useful often enough to be worth the
extra code and cycles.
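
(For the record, the marking in question is the pg_index.indcheckxmin
flag; a sketch of looking at it for the example table, run inside the
same transaction that created the index:)

SELECT indexrelid::regclass, indcheckxmin
FROM pg_index
WHERE indrelid = 'foo'::regclass;   -- indcheckxmin ends up true for foo_b_idx here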

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 OK. This is a highly distilled example that shows the behavior.

 BEGIN;
 CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,
 ''::text AS c from generate_series(1,500) AS x;
 UPDATE foo SET c = 'foo' WHERE b = 'A' ;
 CREATE INDEX foo_b_idx on foo (b);
 [ and the rest of the transaction can't use that index ]

OK, this is an artifact of the HOT update optimization.  Before
creating the index, you did updates on the table that would have been
executed differently if the index had existed.  When the index does get
created, its entries for those updates are incomplete, so the index
can't be used in transactions that could in principle see the unmodified
rows.

You could avoid this effect either by creating the index before you do
any updates on the table, or by not wrapping the entire process into a
single transaction.
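
For instance, a sketch of the first option applied to your distilled
example (untested; the only change is the ordering, and the same holds
if the index is built after the CREATE TABLE AS but before the first
UPDATE):

BEGIN;
CREATE TEMPORARY TABLE foo AS SELECT x AS a, chr(x % 75 + 32) AS b,
''::text AS c FROM generate_series(1,500) AS x;
CREATE INDEX foo_b_idx ON foo (b);   -- index exists before any UPDATE
UPDATE foo SET c = 'foo' WHERE b = 'A';
-- later statements in the same transaction can now consider foo_b_idx
EXPLAIN UPDATE foo SET c = 'bar' WHERE b = 'C';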

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Jon Nelson
On Sat, Nov 13, 2010 at 9:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 OK. This is a highly distilled example that shows the behavior.

 BEGIN;
 CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,
 ''::text AS c from generate_series(1,500) AS x;
 UPDATE foo SET c = 'foo' WHERE b = 'A' ;
 CREATE INDEX foo_b_idx on foo (b);
 [ and the rest of the transaction can't use that index ]

 OK, this is an artifact of the HOT update optimization.  Before
 creating the index, you did updates on the table that would have been
 executed differently if the index had existed.  When the index does get
 created, its entries for those updates are incomplete, so the index
 can't be used in transactions that could in principle see the unmodified
 rows.

Aha!  When you indicated that HOT updates were part of the problem, I
googled HOT updates for more detail and ran across this article:
http://pgsql.tapoueh.org/site/html/misc/hot.html
which was very useful in helping me to understand things.

If I understand things correctly, after a tuple undergoes a HOT-style
update, there is a chain from the original tuple to the updated tuple.
If an index already exists on the relation (and involves the updated
column), a *new entry* in the index is created.  However, if an index
does not already exist and one is created (which involves a column
with tuples that underwent HOT update) then it seems as though the
index doesn't see either version. Is that description inaccurate?
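
As a crude way to watch this, I imagine something like the following
would show the new tuple version landing on the same heap page
(hot_demo is just a made-up scratch table, and I haven't run this):

BEGIN;
CREATE TEMPORARY TABLE hot_demo AS SELECT 1 AS id, 'before'::text AS val;
SELECT ctid FROM hot_demo;           -- e.g. (0,1)
UPDATE hot_demo SET val = 'after';   -- no index on val, so this can be a HOT update
SELECT ctid FROM hot_demo;           -- e.g. (0,2): new version, same page, chained to the old one
ROLLBACK;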

What would the effect be of patching postgresql to allow indexes to
see and follow the HOT chains during index creation?

The reason I did the update before the index creation is that the
initial update (in the actual version, not this test version) updates
2.8 million of some 7.5 million rows (or a bit under 40% of the entire
table), and such a large update seems like it would have a deleterious
effect on the index (although in either case the planner properly
chooses a sequential scan for this update).

 You could avoid this effect either by creating the index before you do
 any updates on the table, or by not wrapping the entire process into a
 single transaction.

I need the whole thing in a single transaction because I make
/extensive/ use of temporary tables and many dozens of statements that
need to either succeed or fail as one.

Is this HOT update optimization interaction with indexes documented
anywhere? It doesn't appear to be common knowledge as there are now 20
messages in this topic and this is the first mention of the HOT
updates / index interaction. I would like to suggest that an update to
the CREATE INDEX documentation might contain some caveats about
creating indexes in transactions on relations that might have HOT
updates.

Again, I'd like to thank everybody for helping me to figure this out.
It's not a huge burden to create the index before the updates, but
understanding *why* it wasn't working (even if it violates the
principle-of-least-surprise) helps quite a bit.


-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 What would the effect be of patching postgresql to allow indexes to
 see and follow the HOT chains during index creation?

It would break things.  We did a *lot* of thinking about this when
HOT was implemented; there are not simple improvements to be made.

The particular case you have here might be improvable because you
actually don't have any indexes at all during the UPDATE, and so
maybe there's no need for it to create HOT-update chains.  But that
would still fall over if you made an index, did the update, then
made more indexes.
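
That is, a sequence like this (a sketch along the lines of the earlier
distilled example; assuming the updated rows stay on their original
pages, I'd expect the second index to remain unusable for the rest of
the transaction):

BEGIN;
CREATE TEMPORARY TABLE foo AS SELECT x AS a, chr(x % 75 + 32) AS b,
''::text AS c FROM generate_series(1,500) AS x;
CREATE INDEX foo_b_idx ON foo (b);       -- an index already exists ...
UPDATE foo SET c = 'foo' WHERE b = 'A';  -- ... so this can be a HOT update (c is not indexed)
CREATE INDEX foo_c_idx ON foo (c);       -- made after the update
EXPLAIN SELECT * FROM foo WHERE c = 'foo';  -- foo_c_idx would not be considered here
ROLLBACK;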

 Is this HOT update optimization interaction with indexes documented
 anywhere? It doesn't appear to be common knowledge as there are now 20
 messages in this topic and this is the first mention of the HOT
 updates / index interaction.

The reason it wasn't mentioned before was that you kept on not showing
us what you did, and there was no reason for anyone to guess that you
were mixing updates and index creations in a single transaction.  We
have seen people run into this type of issue once or twice since 8.3
came out, but it's sufficiently uncommon that it doesn't spend time at
the front of anybody's mind.

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Robert Haas
On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 OK. This is a highly distilled example that shows the behavior.

 BEGIN;
 CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,
 ''::text AS c from generate_series(1,500) AS x;
 UPDATE foo SET c = 'foo' WHERE b = 'A' ;
 CREATE INDEX foo_b_idx on foo (b);
 [ and the rest of the transaction can't use that index ]

 OK, this is an artifact of the HOT update optimization.  Before
 creating the index, you did updates on the table that would have been
 executed differently if the index had existed.  When the index does get
 created, its entries for those updates are incomplete, so the index
 can't be used in transactions that could in principle see the unmodified
 rows.

Is the "in principle" here because there might be an open snapshot
other than the one under which CREATE INDEX is running, like a cursor?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 OK, this is an artifact of the HOT update optimization.  Before
 creating the index, you did updates on the table that would have been
 executed differently if the index had existed.  When the index does get
 created, its entries for those updates are incomplete, so the index
 can't be used in transactions that could in principle see the unmodified
 rows.

 Is the "in principle" here because there might be an open snapshot
 other than the one under which CREATE INDEX is running, like a cursor?

Well, the test is based on xmin alone, not cmin, so it can't really tell
the difference.  It's unclear that it'd be worth trying.
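
(For reference, the row versions involved can be inspected via the
system columns. A sketch against the still-open transaction from the
distilled example: all rows share the transaction's xmin, and only cmin
records which command within it wrote each version.)

SELECT xmin, cmin, ctid, b FROM foo LIMIT 5;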

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-13 Thread Robert Haas
On Sat, Nov 13, 2010 at 7:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 OK, this is an artifact of the HOT update optimization.  Before
 creating the index, you did updates on the table that would have been
 executed differently if the index had existed.  When the index does get
 created, its entries for those updates are incomplete, so the index
 can't be used in transactions that could in principle see the unmodified
 rows.

 Is the "in principle" here because there might be an open snapshot
 other than the one under which CREATE INDEX is running, like a cursor?

 Well, the test is based on xmin alone, not cmin, so it can't really tell
 the difference.  It's unclear that it'd be worth trying.

Yeah, I'm not familiar with the logic in that area of the code, so I
can't comment all that intelligently.  However, I feel like there's a
class of things that could potentially be optimized if we know that
the only snapshot they could affect is the one we're currently using.
For example, when bulk loading a newly created table with COPY or
CTAS, we could set the xmin-committed hint bit if it weren't for the
possibility that some snapshot with a command-ID equal to or lower
than our own might take a look and get confused.  That seems to
require a BEFORE trigger or another open snapshot.  And, if we
HOT-update a tuple created by our own transaction that can't be of
interest to anyone else ever again, it would be nice to either mark it
for pruning or maybe even overwrite it in place; similarly if we
delete such a tuple it would be nice to schedule its execution.  There
are problems with all of these ideas, and I'm not totally sure how to
make any of it work, but to me this sounds suspiciously like another
instance of a somewhat more general problem.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] temporary tables, indexes, and query plans

2010-11-12 Thread Jon Nelson
On Thu, Oct 28, 2010 at 9:23 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 It thinks it's faster, or there is some reason why it *can't* use the
 index, like a datatype mismatch.  You could tell which by trying set
 enable_seqscan = off to see if that will make it change to another
 plan; if so, the estimated costs of that plan versus the original
 seqscan would be valuable information.

 When I place the index creation and ANALYZE right after the bulk
 update, follow it with 'set enable_seqscan = false', the next query
 (also an UPDATE - should be about 7 rows) results in this plan:

 Seq Scan on foo_table  (cost=100.00..1004998.00 rows=24 
 width=236)

 OK, so it thinks it can't use the index.  (The cost=100 bit is
 the effect of enable_seqscan = off: it's not possible to just never use
 seqscans, but we assign an artificially high cost to discourage the
 planner from selecting them if there's any other alternative.)

 So we're back to wondering why it can't use the index.  I will say
 once more that we could probably figure this out quickly if you'd
 post an exact example instead of handwaving.

OK. This is a highly distilled example that shows the behavior.
The ANALYZE doesn't appear to change anything, nor the SET STATISTICS
(followed by ANALYZE), nor disabling sequential scans. Re-writing the
table with ALTER TABLE does, though.
If the initial UPDATE (the one before the index creation) is commented
out, then the subsequent updates don't use sequential scans.

\timing off
BEGIN;
CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,
''::text AS c from generate_series(1,500) AS x;
UPDATE foo SET c = 'foo' WHERE b = 'A' ;
CREATE INDEX foo_b_idx on foo (b);

-- let's see what it looks like
EXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';

-- does forcing a seqscan off help?
set enable_seqscan = false;
EXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';

-- what about analyze?
ANALYZE VERBOSE foo;
EXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';

-- what about statistics?
ALTER TABLE foo ALTER COLUMN b SET STATISTICS 1;
ANALYZE VERBOSE foo;
EXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';

-- let's re-write the table
ALTER TABLE foo ALTER COLUMN a TYPE int;
EXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';

ROLLBACK;

-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-29 Thread Justin Pitts
If you alter the default_statistics_target or any of the specific
statistics targets (via ALTER TABLE ... SET STATISTICS), the change
will not have an effect until an ANALYZE is performed.

This is implied by
http://www.postgresql.org/docs/9.0/static/planner-stats.html and
http://www.postgresql.org/docs/9.0/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET,
but it might save questions like this if it were much more explicit.
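
For example (a sketch, using the foo table from the distilled example
elsewhere in this thread):

ALTER TABLE foo ALTER COLUMN b SET STATISTICS 500;
ANALYZE foo;   -- the new target only takes effect here
EXPLAIN UPDATE foo SET c = 'bar' WHERE b = 'C';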

On Wed, Oct 27, 2010 at 2:52 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson reid.thomp...@ateb.com wrote:
 On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
 set it to 500 and restarted postgres.

 did you re-analyze?

 Not recently. I tried that, initially, and there was no improvement.
 I'll try it again now that I've set the stats to 500.
 The most recent experiment shows me that, unless I create whatever
 indexes I would like to see used *before* the large (first) update,
 then they just don't get used. At all. Why would I need to ANALYZE the
 table immediately following index creation? Isn't that part of the
 index creation process?

 Currently executing is a test where I place an ANALYZE foo after the
 COPY, first UPDATE, and first index, but before the other (much
 smaller) updates.

 ..

 Nope. The ANALYZE made no difference. This is what I just ran:

 BEGIN;
 CREATE TEMPORARY TABLE foo
 COPY ...
 UPDATE ... -- 1/3 of table, approx
 CREATE INDEX foo_rowB_idx on foo (rowB);
 ANALYZE ...
 -- queries from here to 'killed' use WHERE rowB = 'someval'
 UPDATE ... -- 7 rows. seq scan!
 UPDATE ... -- 242 rows, seq scan!
 UPDATE .. -- 3700 rows, seq scan!
 UPDATE .. -- 3100 rows, seq scan!
 killed.


 --
 Jon





Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-29 Thread Justin Pitts
 Justin Pitts:
 RE: changing default_statistics_target (or via ALTER TABLE SET STATS)
 not taking effect until ANALYZE is performed.

 I did already know that, but it's probably good to put into this
 thread. However, you'll note that this is a temporary table created at
 the beginning of a transaction.


( giving up on replying to the group; the list will not accept my posts )
I've been following the thread so long I had forgotten that. I rather
strongly doubt that analyze can reach that table's content inside that
transaction, if you are creating, populating, and querying it all
within that single transaction.



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-29 Thread Robert Haas
On Wed, Oct 27, 2010 at 3:44 PM, Justin Pitts justinpi...@gmail.com wrote:
 Justin Pitts:
 RE: changing default_statistics_target (or via ALTER TABLE SET STATS)
 not taking effect until ANALYZE is performed.

 I did already know that, but it's probably good to put into this
 thread. However, you'll note that this is a temporary table created at
 the beginning of a transaction.


 ( giving up on replying to the group; the list will not accept my posts )

Evidently it's accepting some of them...

 I've been following the thread so long I had forgotten that. I rather
 strongly doubt that analyze can reach that table's content inside that
 transaction, if you are creating, populating, and querying it all
 within that single transaction.

Actually I don't think that's a problem, at least for a manual ANALYZE.
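
A quick way to convince yourself (a sketch, with throwaway names):

BEGIN;
CREATE TEMPORARY TABLE t AS SELECT g % 100 AS x FROM generate_series(1, 10000) AS g;
ANALYZE t;
-- the statistics rows our own ANALYZE just wrote are visible to us:
SELECT n_distinct, null_frac FROM pg_stats WHERE tablename = 't' AND attname = 'x';
ROLLBACK;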

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-28 Thread Jon Nelson
On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 I'd like to zoom out a little bit and, instead of focusing on the
 specifics, ask more general questions:
..
 - is there some way for me to determine /why/ the planner chooses a
 sequential scan over other options?

 It thinks it's faster, or there is some reason why it *can't* use the
 index, like a datatype mismatch.  You could tell which by trying set
 enable_seqscan = off to see if that will make it change to another
 plan; if so, the estimated costs of that plan versus the original
 seqscan would be valuable information.

When I place the index creation and ANALYZE right after the bulk
update, follow it with 'set enable_seqscan = false', the next query
(also an UPDATE - should be about 7 rows) results in this plan:

Seq Scan on foo_table  (cost=100.00..1004998.00 rows=24 width=236)

The subsequent queries all have the same first-row cost and similar
last-row costs, and of course the rows value varies some as well. All
of them, even the queries which update exactly 1 row, have similar
cost:

Seq Scan on foo_table  (cost=100.00..1289981.17 rows=1 width=158)

I cranked the logging up a bit, but I don't really know what to fiddle
there, and while I got a lot of output, I didn't see much in the way
of cost comparisons.

-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-28 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 It thinks it's faster, or there is some reason why it *can't* use the
 index, like a datatype mismatch.  You could tell which by trying set
 enable_seqscan = off to see if that will make it change to another
 plan; if so, the estimated costs of that plan versus the original
 seqscan would be valuable information.

 When I place the index creation and ANALYZE right after the bulk
 update, follow it with 'set enable_seqscan = false', the next query
 (also an UPDATE - should be about 7 rows) results in this plan:

 Seq Scan on foo_table  (cost=100.00..1004998.00 rows=24 width=236)

OK, so it thinks it can't use the index.  (The cost=100 bit is
the effect of enable_seqscan = off: it's not possible to just never use
seqscans, but we assign an artificially high cost to discourage the
planner from selecting them if there's any other alternative.)

So we're back to wondering why it can't use the index.  I will say
once more that we could probably figure this out quickly if you'd
post an exact example instead of handwaving.

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Mladen Gogala

On 10/27/2010 1:29 PM, Jon Nelson wrote:

I have an app which imports a lot of data into a temporary table, does
a number of updates, creates some indexes, and then does a bunch more
updates and deletes, and then eventually inserts some of the columns
from the transformed table into a permanent table.

Things were not progressing in a performant manner - specifically,
after creating an index on a column (INTEGER) that is unique, I
expected statements like this to use an index scan:

update foo set colA = 'some value' where indexed_colB = 'some other value'

but according to the auto_explain module (yay!) the query plan
(always) results in a sequential scan, despite only 1 row getting the
update.

In summary, the order goes like this:

BEGIN;
CREATE TEMPORARY TABLE foo ...;
copy into foo 
UPDATE foo  -- 4 or 5 times, updating perhaps 1/3 of the table all told
CREATE INDEX ... -- twice - one index each for two columns
ANALYZE foo;  -- didn't seem to help
UPDATE foo SET ... WHERE indexed_column_B = 'some value'; -- seq scan?
Out of 10 million rows only one is updated!
...

What might be going on here?

How big is your default statistics target? The default is rather small; 
it doesn't produce very good or usable histograms.
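
For what it's worth, a sketch of checking and raising it; the new
target only matters once the table is analyzed again:

SHOW default_statistics_target;
SET default_statistics_target = 500;   -- per-session; or change it in postgresql.conf
ANALYZE foo;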


--

Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com
The Leader in Integrated Media Intelligence Solutions






Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
mladen.gog...@vmsinfo.com wrote:
 On 10/27/2010 1:29 PM, Jon Nelson wrote:
 How big is your default statistics target? The default is rather small; it
 doesn't produce very good or usable histograms.

Currently, default_statistics_target is 50.

I note that if I create the indexes earlier in the process (before the
copy) then they are used.
I'm not trying creating them after the first UPDATE (which updates
2.8 million of the 10 million rows).
The subsequent UPDATE statements update very few (3-4 thousand for 2
of them, less than a few dozen for the others) and the ones that use
the index only update *1* row.

I'll also try setting a higher default_statistics_target and let you know!

-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 12:59 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
 mladen.gog...@vmsinfo.com wrote:
 On 10/27/2010 1:29 PM, Jon Nelson wrote:
 How big is your default statistics target? The default is rather small, it
 doesn't produce very good or usable histograms.

 Currently, default_statistics_target is 50.

I set it to 500 and restarted postgres. No change in (most of) the query plans!
The update statement that updates 7 rows? No change.
The one that updates 242 rows? No change.
3714? No change.
I killed the software before it got to the 1-row-only statements.

 I'm not trying creating them after the first UPDATE (which updates
 2.8 million of the 10 million rows).

I mean to say that I (tried) creating the indexes after the first
UPDATE statement. This did not improve things.
I am now trying to see how creating the indexes between the
COPY and the UPDATE performs.
I didn't really want to do this because I know that the first UPDATE
statement touches about 1/3 of the table, and this would bloat the
index and slow the UPDATE (which should be a full table scan anyway).
It's every subsequent UPDATE that touches (at most) 4000 rows (out of
10 million) that I'm interested in.

-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Reid Thompson
On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
 set it to 500 and restarted postgres.

did you re-analyze?


Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson reid.thomp...@ateb.com wrote:
 On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
 set it to 500 and restarted postgres.

 did you re-analyze?

Not recently. I tried that, initially, and there was no improvement.
I'll try it again now that I've set the stats to 500.
The most recent experiment shows me that, unless I create whatever
indexes I would like to see used *before* the large (first) update,
then they just don't get used. At all. Why would I need to ANALYZE the
table immediately following index creation? Isn't that part of the
index creation process?

Currently executing is a test where I place an ANALYZE foo after the
COPY, first UPDATE, and first index, but before the other (much
smaller) updates.

..

Nope. The ANALYZE made no difference. This is what I just ran:

BEGIN;
CREATE TEMPORARY TABLE foo
COPY ...
UPDATE ... -- 1/3 of table, approx
CREATE INDEX foo_rowB_idx on foo (rowB);
ANALYZE ...
-- queries from here to 'killed' use WHERE rowB = 'someval'
UPDATE ... -- 7 rows. seq scan!
UPDATE ... -- 242 rows, seq scan!
UPDATE .. -- 3700 rows, seq scan!
UPDATE .. -- 3100 rows, seq scan!
killed.


-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 1:52 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson reid.thomp...@ateb.com wrote:
 On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
 set it to 500 and restarted postgres.

 did you re-analyze?

 Not recently. I tried that, initially, and there was no improvement.
 I'll try it again now that I've set the stats to 500.
 The most recent experiment shows me that, unless I create whatever
 indexes I would like to see used *before* the large (first) update,
 then they just don't get used. At all. Why would I need to ANALYZE the
 table immediately following index creation? Isn't that part of the
 index creation process?

 Currently executing is a test where I place an ANALYZE foo after the
 COPY, first UPDATE, and first index, but before the other (much
 smaller) updates.

 ..

 Nope. The ANALYZE made no difference. This is what I just ran:

 BEGIN;
 CREATE TEMPORARY TABLE foo
 COPY ...
 UPDATE ... -- 1/3 of table, approx
 CREATE INDEX foo_rowB_idx on foo (rowB);
 ANALYZE ...
 -- queries from here to 'killed' use WHERE rowB = 'someval'
 UPDATE ... -- 7 rows. seq scan!
 UPDATE ... -- 242 rows, seq scan!
 UPDATE .. -- 3700 rows, seq scan!
 UPDATE .. -- 3100 rows, seq scan!
 killed.


Even generating the index beforehand (sans ANALYZE) was no help.
If I generate *all* of the indexes ahead of time, before the COPY,
that's the only time index usage jives with my expectations.

Here is an example of the output from auto_explain (NOTE: the WHERE
clause in this statement specifies a single value in the same column
that has a UNIQUE index on it):

Seq Scan on foo_table  (cost=0.00..289897.04 rows=37589 width=486)

and yet the actual row count is exactly 1.

If I change the order so that the index creation *and* analyze happen
*before* the first (large) update, then things appear to proceed
normally and the indexes are used when expected, although in some
cases the stats are still way off:

Bitmap Heap Scan on foo_table  (cost=40.96..7420.39 rows=1999 width=158)

and yet there are only 7 rows that match. The others seem closer (only
off by 2x rather than 250x).

It seems as though creating an index is not enough. It seems as though
ANALYZE after index creation is not enough, either. I am theorizing
that I have to touch (or just scan?) some percentage of the table in
order for the index to be used?  If that's true, then what is ANALYZE
for?  I've got the stats cranked up to 500. Should I try 1000?


Justin Pitts:
RE: changing default_statistics_target (or via ALTER TABLE SET STATS)
not taking effect until ANALYZE is performed.

I did already know that, but it's probably good to put into this
thread. However, you'll note that this is a temporary table created at
the beginning of a transaction.


-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 The most recent experiment shows me that, unless I create whatever
 indexes I would like to see used *before* the large (first) update,
 then they just don't get used. At all.

You're making a whole lot of assertions here that don't square with
usual experience.  I think there is some detail about what you're
doing that affects the outcome, but since you haven't shown a concrete
example, it's pretty hard to guess what the critical detail is.

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 2:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 The most recent experiment shows me that, unless I create whatever
 indexes I would like to see used *before* the large (first) update,
 then they just don't get used. At all.

 You're making a whole lot of assertions here that don't square with
 usual experience.  I think there is some detail about what you're
 doing that affects the outcome, but since you haven't shown a concrete
 example, it's pretty hard to guess what the critical detail is.

First, let me supply all of the changed (from the default) params:

default_statistics_target = 500
maintenance_work_mem = 240MB
work_mem = 256MB
effective_cache_size = 1GB
checkpoint_segments = 128
shared_buffers = 1GB
max_connections = 30
wal_buffers = 64MB
shared_preload_libraries = 'auto_explain'

The machine is a laptop with 4GB of RAM running my desktop. Kernel is
2.6.36, filesystem is ext4 (for data) and ext2 (for WAL logs). The
disk is a really real disk, not an SSD.

The sequence goes exactly like this:

BEGIN;
CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
COPY (approx 8 million rows, ~900 MB)[1]
UPDATE (2.8 million of the rows)
UPDATE (7 rows)
UPDATE (250 rows)
UPDATE (3500 rows)
UPDATE (3100 rows)
a bunch of UPDATE (1 row)
...

Experimentally, I noticed that performance was not especially great.
So, I added some indexes (three indexes on one column each). One index
is UNIQUE.
The first UPDATE can't use any of the indexes. The rest should be able to.

In my experiments, I found that:

If I place the index creation *before* the copy, the indexes are used.
If I place the index creation *after* the copy but before first
UPDATE, the indexes are used.
If I place the index creation at any point after the first UPDATE,
regardless of whether ANALYZE is run, the indexes are not used (at
least, according to auto_explain).

Does that help?


[1] I've been saying 10 million. It's really more like 8 million.
-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 The sequence goes exactly like this:

 BEGIN;
 CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
 COPY (approx 8 million rows, ~900 MB)[1]
 UPDATE (2.8 million of the rows)
 UPDATE (7 rows)
 UPDATE (250 rows)
 UPDATE (3500 rows)
 UPDATE (3100 rows)
 a bunch of UPDATE (1 row)
 ...

 Experimentally, I noticed that performance was not especially great.
 So, I added some indexes (three indexes on one column each). One index
 is UNIQUE.
 The first UPDATE can't use any of the indexes. The rest should be able to.

Please ... there is *nothing* exact about that.  It's not even clear
what the datatypes of the indexed columns are, let alone what their
statistics are, or whether there's something specific about how you're
declaring the table or the indexes.

Here's an exact test case, which is something I just tried to see if
I could easily reproduce your results:

begin;
create temp table foo (f1 int, f2 text, f3 text);
insert into foo select x, 'xyzzy', x::text from generate_series(1,100) x;
update foo set f2 = 'bogus' where f1 < 50;
explain update foo set f2 = 'zzy' where f1 = 42;
create index fooi on foo(f1);
explain update foo set f2 = 'zzy' where f1 = 42;
analyze foo;
explain update foo set f2 = 'zzy' where f1 = 42;
rollback;

I get a seqscan, a bitmap index scan, then a plain indexscan, which
is about what I'd expect.  Clearly there's something you're doing
that deviates from this, but you are failing to provide the detail
necessary to figure out what the critical difference is.

regards, tom lane



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Jon Nelson
On Wed, Oct 27, 2010 at 4:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Jon Nelson jnelson+pg...@jamponi.net writes:
 The sequence goes exactly like this:

 BEGIN;
 CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
 COPY (approx 8 million rows, ~900 MB)[1]
 UPDATE (2.8 million of the rows)
 UPDATE (7 rows)
 UPDATE (250 rows)
 UPDATE (3500 rows)
 UPDATE (3100 rows)
 a bunch of UPDATE (1 row)
 ...

 Experimentally, I noticed that performance was not especially great.
 So, I added some indexes (three indexes on one column each). One index
 is UNIQUE.
 The first UPDATE can't use any of the indexes. The rest should be able to.

 Please ... there is *nothing* exact about that.  It's not even clear
 what the datatypes of the indexed columns are, let alone what their
 statistics are, or whether there's something specific about how you're
 declaring the table or the indexes.

The indexed data types are:
- an INT (this is a unique ID, and it is declared so)
- two TEXT fields. The initial value of one of the text fields is
NULL, and it is updated to values no longer than 10 characters. The
other text field is never more than 4 characters long. My guesstimate
is that this column has no more than two dozen distinct values.

I am not doing anything when I define the table except using TEMPORARY.
The indexes are as bog-standard as one can get. No WHERE clause, no
functions, nothing special at all.

I'd like to zoom out a little bit and, instead of focusing on the
specifics, ask more general questions:

- does the table being temporary affect anything? Another lister
emailed me and wondered if ANALYZE on a temporary table might behave
differently.
- is there some way for me to determine /why/ the planner chooses a
sequential scan over other options? I'm already using auto explain.
- in the general case, are indexes totally ready to use after creation
or is an analyze step necessary?
- do hint bits come into play here at all?



-- 
Jon



Re: [PERFORM] temporary tables, indexes, and query plans

2010-10-27 Thread Tom Lane
Jon Nelson jnelson+pg...@jamponi.net writes:
 I'd like to zoom out a little bit and, instead of focusing on the
 specifics, ask more general questions:

 - does the table being temporary affect anything? Another lister
 emailed me and wondered if ANALYZE on a temporary table might behave
 differently.

Well, the autovacuum daemon can't do anything with temp tables, so
you're reliant on doing a manual ANALYZE if you want the planner to
have stats.  Otherwise it should be the same.

 - is there some way for me to determine /why/ the planner chooses a
 sequential scan over other options?

It thinks it's faster, or there is some reason why it *can't* use the
index, like a datatype mismatch.  You could tell which by trying set
enable_seqscan = off to see if that will make it change to another
plan; if so, the estimated costs of that plan versus the original
seqscan would be valuable information.
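
Something like this, for instance (a sketch; the table and column
names come from the distilled example elsewhere in this thread, so
substitute the real statement you're wondering about):

SET enable_seqscan = off;
EXPLAIN UPDATE foo SET c = 'bar' WHERE b = 'C';
RESET enable_seqscan;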

 - in the general case, are indexes totally ready to use after creation
 or is an analyze step necessary?

They are unless you said CREATE INDEX CONCURRENTLY, which doesn't seem
like it's relevant here; but since you keep on not showing us your code,
who knows?

 - do hint bits come into play here at all?

No.

regards, tom lane



Re: [PERFORM] Temporary tables

2004-08-06 Thread Gaetano Mendola
G u i d o B a r o s i o wrote:
The box:
Linux 2.4.24-ck1 
8  Intel(R) Xeon(TM) MP CPU 2.80GHz
4 gb RAM.
Postgresql 7.4.2

The problem: 
Short on disk space (waiting for a new hard drive).

The real problem:
Developers usually write queries involving the creation of temporary tables. 
I have seen this behavior too, until I explained that this is valid SQL:
select T.* from ( select * from table t where a = 5 ) AS T join foo using ( bar );
Show us a typical function that uses temporary tables.

Regards
Gaetano Mendola




[PERFORM] Temporary tables

2004-08-05 Thread G u i d o B a r o s i o
The box:
Linux 2.4.24-ck1 
8  Intel(R) Xeon(TM) MP CPU 2.80GHz
4 gb RAM.
Postgresql 7.4.2

The problem: 
Short on disk space (waiting for a new hard drive).

The real problem:
Developers usually write queries involving the creation of temporary tables. 

The BF question:
Is it a good idea to link these tmp tables to another partition?
If so, how can I link these tmp tables to another partition?
Suggestions?
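
(On later PostgreSQL releases, though not on 7.4, the direct knob for
this is the temp_tablespaces setting; a sketch, with a made-up mount
point and table name:)

CREATE TABLESPACE scratch LOCATION '/mnt/other_disk/pg_scratch';
SET temp_tablespaces = 'scratch';   -- temporary tables and temp files now go there
CREATE TEMPORARY TABLE work_tbl AS SELECT 1 AS dummy;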

Thanks in advance!
Guido






Re: [PERFORM] Temporary tables

2004-08-05 Thread Gaetano Mendola
G u i d o B a r o s i o wrote:
| The box:
| Linux 2.4.24-ck1
| 8  Intel(R) Xeon(TM) MP CPU 2.80GHz
| 4 gb RAM.
| Postgresql 7.4.2
|
| The problem:
| Short on disk space (waiting for a new hard drive).
|
| The real problem:
| Developers usually write queries involving the creation of temporary tables.
I have seen this behavior too, until I explained that this is valid SQL:
select T.* from ( select * from table t where a = 5 ) AS T join foo using ( bar );
Show us a typical function that uses temporary tables.

Regards
Gaetano Mendola

