[PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Віталій Тимчишин
Hi, all.

All this optimizer vs. hints discussion reminded me of a crazy idea that got
into my head some time ago.
I currently have two problems with the postgresql optimizer:
1) Dictionary tables. A very common pattern is something like select * from
big_table where dictionary_id = (select id from dictionary where
name=value). This performs awfully if the dictionary_id distribution is not
uniform. What helps is to retrieve the subselect value first and then simply
do select * from big_table where dictionary_id=id_value.
2) Complex queries. If there are over 3 levels of subselects, the optimizer's
row estimates often become less and less correct as we go up the levels. At
around the 3rd level this often leads to wrong plan choices. What helps is to
create temporary tables from the subselects, analyze them, and then run the
main select using those temporary tables.
While the first one could be fixed by introducing some correlation
statistics, I don't think there is any simple way to fix the second one.
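To illustrate both workarounds, a minimal sketch with made-up table and
column names:

-- 1) Dictionary case: fetch the id first, then pass it as a literal, so the
--    planner can use the per-value statistics for dictionary_id:
select id from dictionary where name = 'some name';  -- say it returns 42
select * from big_table where dictionary_id = 42;

-- 2) Deep subselects: materialize one level, analyze it, and let the planner
--    work from real statistics:
create temporary table sub1 as
  select dictionary_id, count(*) as cnt
    from big_table
   group by dictionary_id;
analyze sub1;
select d.name, s.cnt
  from sub1 s
  join dictionary d on d.id = s.dictionary_id;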

But what if the optimizer could, in some cases, say "fetch this and this
first, and then I'll plan the other part of the query based on statistics
from what you've fetched"?

-- 
Best regards,
 Vitalii Tymchyshyn


Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Torsten Zühlsdorff

Mladen Gogala wrote:

Well, the problem will not go away.  As I've said before, all other 
databases have that feature and none of the reasons listed here 
convinced me that everybody else has a crappy optimizer.  The problem 
may go away altogether if people stop using PostgreSQL.


A common problem of programmers is that they want a solution they 
already know for a problem they already know, even if it is the worst 
solution they could choose.


There are so many possible ways to solve a given problem, and you even 
have time to explore them before your application gets released.


Also: if you rely so heavily on hints, then use a database which 
supports hints. A basic mantra in every training I have given is: use 
the tool/technique/people that best fit the needs of the project. 
There are many databases out there - choose for every project the one 
which fits best!


Greetings from Germany,
Torsten
--
http://www.dddbl.de - a database layer that abstracts the work with 8 
different database systems, separates queries from the application, 
and can automatically evaluate query results.




Re: [PERFORM] How to best use 32 15k.7 300GB drives?

2011-02-04 Thread Vitalii Tymchyshyn

On 03.02.11 20:42, Robert Haas wrote:

2011/1/30 Віталій Тимчишин tiv...@gmail.com:

I was thinking whether a table file could be deleted if it contains not a
single live row, and whether this could be done by vacuum. In this case a
vacuum on a table that was fully updated recently could be almost as good as
cluster - any scan would skip such non-existent files really fast. Also,
almost no disk space would be wasted.

VACUUM actually already does something along these lines.  If there
are 1 or any larger number of entirely-free pages at the end of a
table, VACUUM will truncate them away.  In the degenerate case where
ALL pages are entirely-free, this results in zeroing out the file.

The problem with this is that it rarely does much.  Consider a table
with 1,000,000 pages, 50% of which contain live rows.  On average, how
many pages will this algorithm truncate away?  Answer: if the pages
containing live rows are randomly distributed, approximately one.
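(A quick check of that expectation: if each page is independently free with
probability 1/2, the chance that the last k pages are all free is (1/2)^k,
so the expected length of the free tail is the sum over k >= 1 of (1/2)^k,
which is exactly 1 page.)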
Yes, but take into account operations on tables that are clustered (for 
various reasons), like removing archived data (yes, I know this is best 
done with partitioning, but one must still get to the point of deciding 
to use partitioning :) ).

Your idea of having a set of heaps rather than a single heap is an
interesting one, but it's pretty much catering to the very specific
case of a full-table update.  I think the code changes needed would be
far too invasive to seriously contemplate doing it just for that one
case - although it is an important case that I would like to see us
improve.
Why do you expect such invasive code changes? I know little about the 
postgresql code layering, but what I propose (with changing delete to 
truncate) is:

1) Leave tuple addressing as it is now
2) Allow truncated files, treating the non-existing part as if it contained 
unused tuples

3) Make vacuum truncate the file if it has only unused tuples at the end.

The only (relatively) tricky thing I can see is synchronizing truncation 
with parallel ongoing scans.


Best regards, Vitalii Tymchyshyn





Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-04 Thread Greg Smith

Andy Colson wrote:
Yes, I agree... for today.  If you gaze into 5 years... double the 
core count (but not the speed), double the IO rate.  What do you see?


Four more versions of PostgreSQL addressing problems people are having 
right now.  When we reach the point where parallel query is the only way 
around the actual bottlenecks in the software people are running into, 
someone will finish parallel query.  I am not a fan of speculative 
development in advance of real demand for it.  There are multiple much 
more serious bottlenecks impacting scalability in PostgreSQL that need 
to be addressed before this one reaches #1 on my development priority 
list.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




[PERFORM] Query performance with disabled hashjoin and mergejoin

2011-02-04 Thread Ivan Voras
I'm running all this on a 9.0 server with good enough hardware. The 
query is:


SELECT  news.id AS news_id
,   news.layout_id
,   news.news_relation_id
,   news.author_id
,   news.date_created
,   news.date_published
,   news.lastedit
,   news.lastedit_user_id
,   news.lastedit_date
,   news.approved_by
,   news.state
,   news.visible_from
,   news.visible_to
,   news.archived_by
,   news.archived_date
,   news.priority
,   news.collection_id
,   news.comment
,   news.keywords
,   news.icon
,   news.icon_url
,   news.icon_width
,   news.icon_height
,   news.icon_position
,   news.icon_onclick
,   news.icon_newwindow
,   news.no_lead
,   news.content_exists
, news.title, news.lead, news.content


,   author.public_name AS author_public_name
,   lastedit_user.public_name AS lastedit_user_public_name
,   approved_by_user.public_name AS approved_by_public_name
,   archived_by_user.public_name AS archived_by_public_name

FROM news
JOIN users AS author ON news.author_id = author.id
LEFT JOIN users AS lastedit_user ON news.lastedit_user_id = lastedit_user.id
LEFT JOIN users AS approved_by_user ON news.approved_by = approved_by_user.id
LEFT JOIN users AS archived_by_user ON news.archived_by = archived_by_user.id

WHERE (news.layout_id = 8980) AND (state = 2)
  AND (date_published <= 1296806570
       AND (visible_from IS NULL OR 1296806570 BETWEEN visible_from AND visible_to))
ORDER BY priority DESC, date_published DESC
;

The vanilla plan, with default settings is:

 Sort  (cost=7325.84..7329.39 rows=1422 width=678) (actual time=100.846..100.852 rows=7 loops=1)
   Sort Key: news.priority, news.date_published
   Sort Method:  quicksort  Memory: 38kB
   ->  Hash Left Join  (cost=2908.02..7251.37 rows=1422 width=678) (actual time=100.695..100.799 rows=7 loops=1)
         Hash Cond: (news.archived_by = archived_by_user.id)
         ->  Hash Left Join  (cost=2501.75..6819.47 rows=1422 width=667) (actual time=76.742..76.830 rows=7 loops=1)
               Hash Cond: (news.approved_by = approved_by_user.id)
               ->  Hash Left Join  (cost=2095.48..6377.69 rows=1422 width=656) (actual time=53.248..53.318 rows=7 loops=1)
                     Hash Cond: (news.lastedit_user_id = lastedit_user.id)
                     ->  Hash Join  (cost=1689.21..5935.87 rows=1422 width=645) (actual time=29.793..29.846 rows=7 loops=1)
                           Hash Cond: (news.author_id = author.id)
                           ->  Bitmap Heap Scan on news  (cost=1282.94..5494.05 rows=1422 width=634) (actual time=5.532..5.560 rows=7 loops=1)
                                 Recheck Cond: ((layout_id = 8980) AND (state = 2) AND ((visible_from IS NULL) OR (1296806570 <= visible_to)))
                                 Filter: ((date_published <= 1296806570) AND ((visible_from IS NULL) OR ((1296806570 >= visible_from) AND (1296806570 <= visible_to))))
                                 ->  BitmapAnd  (cost=1282.94..1282.94 rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)
                                       ->  Bitmap Index Scan on news_index_layout_id_state  (cost=0.00..150.14 rows=2587 width=0) (actual time=0.909..0.909 rows=3464 loops=1)
                                             Index Cond: ((layout_id = 8980) AND (state = 2))
                                       ->  BitmapOr  (cost=1132.20..1132.20 rows=20127 width=0) (actual time=4.136..4.136 rows=0 loops=1)
                                             ->  Bitmap Index Scan on news_visible_from  (cost=0.00..1122.09 rows=19976 width=0) (actual time=3.367..3.367 rows=19932 loops=1)

Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Grant Johnson



Yes.  And this has little to do with hints.  It has to do with years
of development lead with THOUSANDS of engineers who can work on the
most esoteric corner cases in their spare time.  Find the pg project a
couple hundred software engineers and maybe we'll catch Oracle a
little quicker.  Otherwise we'll have to marshal our resources to do
the best we can on the project, and that means avoiding maintenance
black holes and having the devs work on the things that give the most
benefit for the cost.  Hints are something only a tiny percentage of
users could actually use and use well.

Write a check, hire some developers and get the code done and present
it to the community.  If it's good and works it'll likely get
accepted.  Or use EDB, since it has oracle compatibility in it.

I have to disagree with you here.   I have never seen Oracle outperform 
PostgreSQL on complex joins, which is where the planner comes in.  
Perhaps it does on certain throughput things, but that is likely due to 
how we handle dead rows and counts (which is definitely a consequence 
of how dead rows are handled), and the easier maintenance makes up for 
those.  Also, both of those differ by only a small percentage.


I have many times had Oracle queries that never finish (OK, maybe not 
never, but not over a long weekend) on large hardware, but that can be 
finished on PostgreSQL in a matter of minutes on cheap hardware.   This 
happens to the point that I have often set up a PostgreSQL database to 
copy the data to for querying and running the complex reports, even 
though the origin of the data was Oracle, since the application was 
Oracle specific.   It took less time to duplicate the database and run 
the query on PostgreSQL than it did to just run it on Oracle.




Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Mladen Gogala

Віталій Тимчишин wrote:

Hi, all.

All this optimizer vs hint thread
There is no optimizer vs. hint. Hints are a necessary part of the 
optimizer in all other databases. Without hints Postgres will not get 
used in the company that I work for, period. I was willing to wait but 
the fatwa against hints seems unyielding, so that's it. I am even 
inclined to believe that deep down under the hood, this fatwa has an 
ulterior motive, which disgusts me deeply. With hints, there would be 
far fewer consulting gigs.


Mladen Gogala 
Sr. Oracle DBA

1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com 





Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Shaun Thomas

On 02/04/2011 07:56 AM, Mladen Gogala wrote:


Hints are a necessary part of the
optimizer in all other databases. Without hints Postgres will not get
used in the company that I work for, period.


I've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the 
hints you seek, yet you seem to enjoy berating the PostgreSQL community 
as if it owes you something.


Also, we don't care if you don't use PostgreSQL. If I put something up 
for free, some random guy not taking it won't exactly hurt my feelings.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Greg Smith

Mladen Gogala wrote:
I am even inclined to believe that deep down under the hood, this 
fatwa has an ulterior motive, which disgusts me deeply. With hints, 
there would be far fewer consulting gigs.


Now you're just being rude.  Given that you have direct access to the 
developers of the software, for free, on these mailing lists, the main 
reason there is consulting work anyway is because some companies can't 
publish their queries or data publicly.  All of us doing PostgreSQL 
consulting regularly take those confidential reports and turn them into 
feedback to improve the core software.  That is what our clients want, 
too:  a better PostgreSQL capable of handling their problem, not just a 
hacked up application that works today, but will break later once data 
volume or distribution changes.


You really just don't get how open-source development works at all if 
you think money is involved in why people have their respective 
technical opinions on controversial subjects.  Try and hire the 
sometimes leader of this particular fatwa, Tom Lane, for a consulting 
gig if you think that's where his motivation lies.  I would love to have 
a recording of *that* phone call.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Kenneth Marshall
On Thu, Feb 03, 2011 at 04:39:12PM -0800, da...@lang.hm wrote:
 On Thu, 3 Feb 2011, Robert Haas wrote:

 On Thu, Feb 3, 2011 at 3:54 PM,  da...@lang.hm wrote:
 with the current code, this is a completely separate process that knows
 nothing about the load, so if you kick it off when you start the load, it
 makes a pass over the table (competing for I/O), finishes, you continue to
 update the table, so it makes another pass, etc. As you say, this is a bad
 thing to do. I am saying to have an option that ties the two together,
 essentially making the data feed into the Analyze run be a fork of the data
 coming out of the insert run going to disk. So the Analyze run doesn't do
 any I/O and isn't going to complete until the insert is complete. At which
 time it will have seen one copy of the entire table.

 Yeah, but you'll be passing the entire table through this separate
 process that may only need to see 1% of it or less on a large table.
 If you want to write the code and prove it's better than what we have
 now, or some other approach that someone else may implement in the
 meantime, hey, this is an open source project, and I like improvements
 as much as the next guy.  But my prediction for what it's worth is
 that the results will suck.  :-)

 I will point out that 1% of a very large table can still be a lot of disk 
 I/O that is avoided (especially if it's random I/O that's avoided)

 David Lang


In addition, the streaming ANALYZE can provide better statistics at
any time during the load, and it will be complete immediately. As far
as passing the entire table through the ANALYZE process goes, a simple
counter can be used to send only the required samples based on the
statistics target. Where this would seem to help the most is with
temporary tables, which currently do not work with autovacuum; it
would streamline their use for more complicated queries that need
an analyze to perform well.

Regards,
Ken



Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Mladen Gogala

Shaun Thomas wrote:

On 02/04/2011 07:56 AM, Mladen Gogala wrote:

  

Hints are a necessary part of the
optimizer in all other databases. Without hints Postgres will not get
used in the company that I work for, period.



I've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the 
hints you seek, yet you seem to enjoy berating the PostgreSQL community 
as if it owes you something.


Also, we don't care if you don't use PostgreSQL. If I put something up 
for free, some random guy not taking it won't exactly hurt my feelings.


  
Shaun, I don't need to convince you or the Postgres community. I needed 
an argument to convince my boss.
My argument was that the sanctimonious and narrow-minded Postgres 
community is unwilling to even consider creating the tools I need for 
large porting projects, tools provided by other major databases. This 
discussion served my purpose wonderfully.  Project is killed, here we 
part ways. No more problems for either of us. Good luck with the 
perfect optimizer and good riddance. My only regret is about the time 
I have wasted.


--
Mladen Gogala 
Sr. Oracle DBA

1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com 





Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Vitalii Tymchyshyn

On 04.02.11 16:33, Kenneth Marshall wrote:


In addition, the streaming ANALYZE can provide better statistics at
any time during the load and it will be complete immediately. As far
as passing the entire table through the ANALYZE process, a simple
counter can be used to only send the required samples based on the
statistics target. Where this would seem to help the most is in
temporary tables which currently do not work with autovacuum but it
would streamline their use for more complicated queries that need
an analyze to perform well.

Actually for me the main con with streaming analyze is that it adds a 
significant CPU burden to an already not-too-fast load process, especially 
if it's done automatically for every load operation performed (and I can't 
see how it could be enabled only above some threshold).
And you can't start sampling only after some threshold of data has passed, 
since you may lose significant information (like minimum values).


Best regards, Vitalii Tymchyshyn



Re: [PERFORM] Query performance with disabled hashjoin and mergejoin

2011-02-04 Thread Greg Smith

Ivan Voras wrote:

The vanilla plan, with default settings is:


Pause here for a second:  why default settings?  A default PostgreSQL 
configuration is suitable for systems with about 128MB of RAM.  Since 
you say you have good enough hardware, I'm assuming you have a bit 
more than that.  The first things to try here are the list at 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad 
query here looks like it might benefit from a large increase to 
effective_cache_size, and possibly an increase to work_mem as well.  
Your bad plan here is doing a lot of sequential scans instead of 
indexed lookups, which makes me wonder if the change in join types 
you're forcing isn't fixing that part as a coincidence.
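As a rough sketch of a first pass, with placeholder values rather than
recommendations (size them to the RAM actually in the machine):

SET effective_cache_size = '4GB';  -- planner's estimate of OS + shared cache
SET work_mem = '32MB';             -- per sort/hash, so mind the connection count

Both can be tried per-session first and persisted in postgresql.conf once
they prove out.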


Note that the estimated number of rows coming out of each form of plan 
is off by a factor of about 200X, so it's not that the other plan type 
is estimating anything better.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




[PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
I am having huge performance problems with a table. Performance deteriorates
every day and I have to run REINDEX and ANALYZE on it every day.  Autovacuum
is on.  Yes, I am reading the other thread about count(*) :)

but obviously I'm doing something wrong here


explain analyze select count(*) from fastadder_fastadderstatus;

Aggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual
time=77130.000..77130.000 rows=1 loops=1)
   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61701.18
rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)
 Total runtime: *77250.000 ms*

directly after REINDEX and ANALYZE:

 Aggregate  (cost=62348.70..62348.71 rows=1 width=0) (actual
time=15830.000..15830.000 rows=1 loops=1)
   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61613.16
rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)
 Total runtime: 15830.000 ms

still very bad for a 300k row table

a similar table:

explain analyze select count(*) from fastadder_fastadderstatuslog;

 Aggregate  (cost=8332.53..8332.54 rows=1 width=0) (actual
time=1270.000..1270.000 rows=1 loops=1)
   ->  Seq Scan on fastadder_fastadderstatuslog  (cost=0.00..7389.02
rows=377402 width=0) (actual time=0.000..910.000 rows=377033 loops=1)
 Total runtime: 1270.000 ms


It gets updated quite a bit each day, and this is perhaps the problem.
To me it doesn't seem like that many updates

100-500 rows inserted per day
no deletes

10k-50k updates per day
mostly of this sort:   set priority=1 where id=12345

is it perhaps this that is causing the performance problem ?

I could rework the app to be more efficient and do updates using batches
where id IN (1,2,3,4...)

I assume that means a more efficient index update compared to individual
updates.

There is one routine that updates position_in_queue using a lot (too many)
update statements.
Is that likely to be the culprit ?

*What else can I do to investigate ?*


                      Table "public.fastadder_fastadderstatus"
      Column       |           Type           |                       Modifiers
-------------------+--------------------------+-------------------------------------------------------
 id                | integer                  | not null default nextval('fastadder_fastadderstatus_id_seq'::regclass)
 apt_id            | integer                  | not null
 service_id        | integer                  | not null
 agent_priority    | integer                  | not null
 priority          | integer                  | not null
 last_validated    | timestamp with time zone |
 last_sent         | timestamp with time zone |
 last_checked      | timestamp with time zone |
 last_modified     | timestamp with time zone | not null
 running_status    | integer                  |
 validation_status | integer                  |
 position_in_queue | integer                  |
 sent              | boolean                  | not null default false
 built             | boolean                  | not null default false
 webid_suffix      | integer                  |
 build_cache       | text                     |
Indexes:
    "fastadder_fastadderstatus_pkey" PRIMARY KEY, btree (id)
    "fastadder_fastadderstatus_apt_id_key" UNIQUE, btree (apt_id, service_id)
    "fastadder_fastadderstatus_agent_priority" btree (agent_priority)
    "fastadder_fastadderstatus_apt_id" btree (apt_id)
    "fastadder_fastadderstatus_built" btree (built)
    "fastadder_fastadderstatus_last_checked" btree (last_checked)
    "fastadder_fastadderstatus_last_validated" btree (last_validated)
    "fastadder_fastadderstatus_position_in_queue" btree (position_in_queue)
    "fastadder_fastadderstatus_priority" btree (priority)
    "fastadder_fastadderstatus_running_status" btree (running_status)
    "fastadder_fastadderstatus_service_id" btree (service_id)
Foreign-key constraints:
    "fastadder_fastadderstatus_apt_id_fkey" FOREIGN KEY (apt_id) REFERENCES
nsproperties_apt(id) DEFERRABLE INITIALLY DEFERRED
    "fastadder_fastadderstatus_service_id_fkey" FOREIGN KEY (service_id)
REFERENCES fastadder_fastadderservice(id) DEFERRABLE INITIALLY DEFERRED


thanks !


Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread hubert depesz lubaczewski
On Fri, Feb 04, 2011 at 03:46:35PM +0100, felix wrote:
 directly after REINDEX and ANALYZE:
 
  Aggregate  (cost=62348.70..62348.71 rows=1 width=0) (actual
 time=15830.000..15830.000 rows=1 loops=1)
    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61613.16
 rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)
  Total runtime: 15830.000 ms

Do run a vacuum on the table. REINDEX doesn't matter for seq scans, and
ANALYZE, while it can help choose a different plan, will not help here
anyway.
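
For example, against the table from this thread:

VACUUM VERBOSE fastadder_fastadderstatus;

The verbose output lists removable and nonremovable row versions per table
and index, which will show whether dead-row bloat is the real problem.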

Best regards,

depesz




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Kenneth Marshall
On Thu, Feb 03, 2011 at 09:05:52PM -0500, Robert Haas wrote:
 On Thu, Feb 3, 2011 at 8:37 PM,  da...@lang.hm wrote:
  On Thu, 3 Feb 2011, Robert Haas wrote:
 
  On Thu, Feb 3, 2011 at 7:39 PM, da...@lang.hm wrote:
 
  Yeah, but you'll be passing the entire table through this separate
  process that may only need to see 1% of it or less on a large table.
  If you want to write the code and prove it's better than what we have
  now, or some other approach that someone else may implement in the
  meantime, hey, this is an open source project, and I like improvements
  as much as the next guy.  But my prediction for what it's worth is
  that the results will suck.  :-)
 
  I will point out that 1% of a very large table can still be a lot of disk
  I/O that is avoided (especially if it's random I/O that's avoided)
 
  Sure, but I think that trying to avoid it will be costly in other ways
  - you'll be streaming a huge volume of data through some auxiliary
  process, which will have to apply some algorithm that's very different
  from the one we use today.  The reality is that I think there's little
  evidence that the way we do ANALYZE now is too expensive.  It's
  typically very cheap and works very well.  It's a bit annoying when it
  fires off in the middle of a giant data load, so we might need to
  change the time of it a little, but if there's a problem with the
  operation itself being too costly, this is the first I'm hearing of
  it.  We've actually worked *really* hard to make it cheap.
 
  I could be misunderstanding things here, but my understanding is that it's
  'cheap' in that it has little impact on the database while it is running.
 
 I mean that it's cheap in that it usually takes very little time to complete.
 
  the issue here is that the workflow is
 
  load data
  analyze
  start work
 
  so the cost of analyze in this workflow is not a 1% impact on query speed
  for the next X time, it's that the database can't be used for the next X
  time while we wait for analyze to finish running
 
 OK.
 
  I don't understand why the algorithm would have to be so different than
  what's done today, surely the analyze thread could easily be tweaked to
  ignore the rest of the data (assuming we don't have the thread sending the
  data to analyze do the filtering)
 
 If you want to randomly pick 10,000 rows out of all the rows that are
 going to be inserted in the table without knowing in advance how many
 there will be, how do you do that?  Maybe there's an algorithm, but
 it's not obvious to me.  But mostly, I question how expensive it is to
 have a second process looking at the entire table contents vs. going
 back and rereading a sample of rows at the end.  I can't remember
 anyone ever complaining ANALYZE took too long to run.  I only
 remember complaints of the form I had to remember to manually run it
 and I wish it had just happened by itself.
 
 -- 
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

Probably doomed to be shot down, but since you are effectively inline,
you could sample assuming a range of table row counts. Start at the
size of a table where random (index) lookups are faster than a sequential
scan and then at appropriate multiples, 100x, 100*100X,... then you should
be able to generate appropriate statistics. I have not actually looked at
how that would happen, but it would certainly allow you to process far, far
fewer rows than the entire table.
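
For what it's worth, Robert's question upthread (how do you randomly pick
10,000 rows without knowing in advance how many there will be?) does have a
textbook answer: reservoir sampling, e.g. Vitter's Algorithm R, keeps a
fixed-size uniform sample in a single pass. A purely illustrative sketch in
PL/pgSQL, where the table name is a stand-in for whatever stream of rows the
load would feed it:

CREATE OR REPLACE FUNCTION reservoir_sample_ids(sample_size int)
RETURNS int[] AS $$
DECLARE
    reservoir int[] := '{}';
    seen      bigint := 0;
    pos       int;
    rec       record;
BEGIN
    -- stand-in stream; in the proposal this would be the rows arriving
    -- from the bulk load
    FOR rec IN SELECT id FROM some_big_table LOOP
        seen := seen + 1;
        IF seen <= sample_size THEN
            -- fill the reservoir with the first sample_size rows
            reservoir := array_append(reservoir, rec.id);
        ELSE
            -- replace a random slot with probability sample_size/seen
            pos := 1 + floor(random() * seen)::int;
            IF pos <= sample_size THEN
                reservoir[pos] := rec.id;
            END IF;
        END IF;
    END LOOP;
    RETURN reservoir;
END;
$$ LANGUAGE plpgsql;

The same replacement logic would work on rows as they stream in from an
insert, which is essentially the counter idea mentioned above.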

Regards,
Ken



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Greg Smith

felix wrote:

explain analyze select count(*) from fastadder_fastadderstatus;

Aggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual 
time=77130.000..77130.000 rows=1 loops=1)
    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61701.18 
rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)

 Total runtime: *77250.000 ms*



PostgreSQL version?  If you're running on 8.3 or earlier, I would be 
suspicious that your Free Space Map has been overrun.


What you are seeing is that the table itself is much larger on disk than 
it's supposed to be.  That can be caused by frequent UPDATEs: if you 
don't have vacuum cleanup working effectively, you'll get lots of dead 
sections left behind from UPDATEs in the middle.  The best way to fix 
all this is to run CLUSTER on the table.  That will introduce a bit of 
downtime while it holds a lock on the table (only a few minutes based on 
what you've shown here), but the copy you'll have afterwards won't be 
spread all over disk anymore.
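
On 8.3 that uses the older syntax, naming the index first; clustering on the
primary key is the obvious default if no other index matches your access
pattern:

CLUSTER fastadder_fastadderstatus_pkey ON fastadder_fastadderstatus;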


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 08:46 AM, felix wrote:


explain analyze select count(*) from fastadder_fastadderstatus;

Aggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual
time=77130.000..77130.000 rows=1 loops=1)
    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61701.18
rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)
  Total runtime: *77250.000 ms*


How big is this table when it's acting all bloated and ugly?

SELECT relpages*8/1024 FROM pg_class
 WHERE relname='fastadder_fastadderstatus';

That's the number of MB it's taking up that would immediately affect a 
count statement.



directly after REINDEX and ANALYZE:

  Aggregate  (cost=62348.70..62348.71 rows=1 width=0) (actual
time=15830.000..15830.000 rows=1 loops=1)
    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61613.16
rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)
  Total runtime: 15830.000 ms


That probably put it into cache, explaining the difference, but yeah... 
that is pretty darn slow. Is this the only thing running when you're 
doing your tests? What does your disk IO look like?



10k-50k updates per day
mostly of this sort:   set priority=1 where id=12345


Well... that's up to 16% turnover per day, but even then, regular 
vacuuming should keep it manageable.



I could rework the app to be more efficient and do updates using batches
where id IN (1,2,3,4...)


No. Don't do that. You'd be better off loading everything into a temp 
table and doing this:


UPDATE fastadder_fastadderstatus s
   SET priority = 1
  FROM temp_statuses t
 WHERE t.id=s.id;

It's a better practice, but still doesn't really explain your 
performance issues.



fastadder_fastadderstatus_pkey PRIMARY KEY, btree (id)
fastadder_fastadderstatus_apt_id_key UNIQUE, btree (apt_id, service_id)
fastadder_fastadderstatus_agent_priority btree (agent_priority)
fastadder_fastadderstatus_apt_id btree (apt_id)
fastadder_fastadderstatus_built btree (built)
fastadder_fastadderstatus_last_checked btree (last_checked)
fastadder_fastadderstatus_last_validated btree (last_validated)
fastadder_fastadderstatus_position_in_queue btree (position_in_queue)
fastadder_fastadderstatus_priority btree (priority)
fastadder_fastadderstatus_running_status btree (running_status)
fastadder_fastadderstatus_service_id btree (service_id)


Whoh! Hold on, here. That looks like *way* too many indexes. Definitely 
will slow down your insert/update performance. The index on 'built' for 
example, is a boolean. If it's evenly distributed, that's 150k matches 
for true or false, rendering it useless, yet still requiring space and 
maintenance. I'm guessing the story is similar for quite a few of the 
others.


It doesn't really explain your count speed, but it certainly isn't helping.

Something seems fishy, here.

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 08:56 AM, Greg Smith wrote:


PostgreSQL version?  If you're running on 8.3 or earlier, I would be
suspicious that your Free Space Map has been overrun.


That's my first inclination. If he says autovacuum is running, there's 
no way it should be bloating the table that much.


Felix, If you're running a version before 8.4, what is your 
max_fsm_pages setting? If it's too low, autovacuum won't save you, and 
your tables will continue to grow daily unless you vacuum full 
regularly, and I wouldn't recommend that to my worst enemy. ;)


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Frank Heikens



On 04 Feb, 2011, at 02:56 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:


Віталій Тимчишин wrote:
 Hi, all.

 All this optimizer vs hint thread
There is no optimizer vs. hint. Hints are a necessary part of the
optimizer in all other databases. 


That has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it doesn't 
have hints and everybody knows it.

Without hints Postgres will not get
used in the company that I work for, period. 


That's up to you, that's fine. But why did you start with PostgreSQL in the 
first place? You knew PostgreSQL doesn't have hints and the wiki told you hints 
are not wanted as well. When hints are an essential requirement for your 
company, you should pick another product, EnterpriseDB Postgres Plus for 
example.

I was willing to wait but
the fatwa against hints seems unyielding, 


There is no fatwa. The PostgreSQL project prefers to spend resources on a 
better optimizer to solve the real problems, not on hints for working around 
the problems. That has nothing to do with any fatwa or religion.

so that's it. I am even
inclined to believe that deep down under the hood, this fatwa has an
ulterior motive, which disgusts me deeply. With hints, there would be
far fewer consulting gigs.


The consulting guys are the ones who love hints: they know they will have to 
come back a month later, because the old hint does more harm than good once 
the data changes. And data will change over time.

You said it's so simple to implement hints in PostgreSQL, so please, show us. 
Or ask/pay somebody to write this simple code for you to support hints; nobody 
will ever stop you from doing that. When you have a use case that proves the 
usage of hints improves the performance of PostgreSQL, and you have some code 
that can be maintained by the PostgreSQL project, it might be implemented 
in contrib or even core. It's up to you, not somebody else.



Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com




Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Cédric Villemain
2011/2/4 Frank Heikens frankheik...@mac.com:


 On 04 Feb, 2011, at 02:56 PM, Mladen Gogala mladen.gog...@vmsinfo.com
 wrote:

 Віталій Тимчишин wrote:
 Hi, all.

 All this optimizer vs hint thread
 There is no optimizer vs. hint. Hints are a necessary part of the
 optimizer in all other databases.


 That has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it
 doesn't have hints and everybody knows it.

 Without hints Postgres will not get
 used in the company that I work for, period.


 That's up to you, that's fine. But why did you start with PostgreSQL in the
 first place? You knew PostgreSQL doesn't have hints and the wiki told you
 hints are not wanted as well. When hints are an essential requirement for
 your company, you should pick another product, EnterpriseDB Postgres Plus
 for example.

 I was willing to wait but
 the fatwa against hints seems unyielding,


 There is no fatwa. The PostgreSQL project prefers to spend resources on a
 better optimizer to solve the real problems, not on hints for working around
 the problems. That has nothing to do with any fatwa or religion.

 so that's it. I am even
 inclined to believe that deep down under the hood, this fatwa has an
 ulterior motive, which disgusts me deeply. With hints, there would be
 far fewer consulting gigs.


 The consulting guys are the ones who love hints: They know they have to come
 back the other month because the old hint does more harm than good when data
 changes. And data will change over time.

 You said it's so simple to implement hints in PostgreSQL, so please, show
 us. Or ask/pay somebody to write this simple code for you to support hints,
 nobody will ever stop you from doing that. When you have a use case that
 proves the usage of hints will improve the performance of PostgreSQL and you
 have some code that can be maintained by the PostgreSQL project, it might be
 implemented in the contrib or even core. It's up to you, not somebody else.

Just in case you missed it:
http://www.sai.msu.su/~megera/wiki/plantuner

Btw, feel free to do as you like; it is open source and BSD-licensed, so you
can take PostgreSQL, add hints, and go sell that to your boss.





 Mladen Gogala
 Sr. Oracle DBA
 1500 Broadway
 New York, NY 10036
 (212) 329-5251
 www.vmsinfo.com


 --
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance




-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 6:05 AM, Grant Johnson gr...@amadensor.com wrote:

 Yes.  And this has little to do with hints.  It has to do with years
 of development lead with THOUSANDS of engineers who can work on the
 most esoteric corner cases in their spare time.  Find the pg project a
 couple hundred software engineers and maybe we'll catch Oracle a
 little quicker.  Otherwise we'll have to marshall our resources to do
 the best we can on the project ,and that means avoiding maintenance
 black holes and having the devs work on the things that give the most
 benefit for the cost.  Hints are something only a tiny percentage of
 users could actually use and use well.

 Write a check, hire some developers and get the code done and present
 it to the community.  If it's good and works it'll likely get
 accepted.  Or use EDB, since it has oracle compatibility in it.

 I have to disagree with you here.   I have never seen Oracle outperform
 PostgreSQL on complex joins, which is where the planner comes in.  Perhaps
 on certain throughput things, but this is likely due to how we handle dead
 rows, and counts, which is definitely because of how dead rows are handled,
 but the easier maintenance makes up for those.  Also both of those are by a
 small percentage.

 I have many times had Oracle queries that never finish (OK maybe not never,
 but not over a long weekend) on large hardware, but can be finished on
 PostgreSQL in a matter of minutes on cheap hardware.   This happens to the
 point that often I have set up a PostgreSQL database to copy the data to for
 querying and running the complex reports, even though the origin of the data
 was Oracle, since the application was Oracle specific.   It took less time
 to duplicate the database and run the query on PostgreSQL than it did to
 just run it on Oracle.

It very much depends on the query.  With lots of tables to join, and
with pg 8.1, which is what I used when we were running Oracle 9, Oracle
won.  With fewer tables to join in an otherwise complex reporting
query, PostgreSQL won.  I did the exact thing you're talking about: I
actually wrote a simple replication system from Oracle to PostgreSQL
(it was allowed to be imperfect because it was stats data and we could
recreate it at a moment's notice).

PostgreSQL on a PIV workstation with 2GB RAM and 4 SATA drives in
RAID-10 stomped Oracle on much bigger Sun hardware into the ground for
reporting queries.  Queries that ran for hours or didn't finish in
Oracle ran in 5 to 30 minutes on the pg box.

But not all queries were like that.



Fwd: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
sorry, reply was meant to go to the list.

-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 5:17 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: stho...@peak6.com




On Fri, Feb 4, 2011 at 4:00 PM, Shaun Thomas stho...@peak6.com wrote:

 How big is this table when it's acting all bloated and ugly?

458MB

 Is this the only thing running when you're doing your tests? What does your
 disk IO look like?


this is on a live site.  best not to scare the animals.

I have the same config on the dev environment but not the same table size.


  10k-50k updates per day
 mostly of this sort:   set priority=1 where id=12345


 Well... that's up to 16% turnover per day, but even then, regular vacuuming
 should keep it manageable.


something is definitely amiss with this table.

I'm not sure if it's something that happened at one point when killing a
task that was writing to it, or if it's something about the way the app is
updating.  It SHOULDN'T be that much of a problem, though I can find ways to
improve it.


No. Don't do that. You'd be better off loading everything into a temp table
 and doing this:

 UPDATE fastadder_fastadderstatus s
   SET priority = 1
  FROM temp_statuses t
  WHERE t.id=s.id;


ok, that is one of the solutions I was thinking about.

are updates of the where id IN (1,2,3,4) form generally not efficient ?
how about for select queries ?


 fastadder_fastadderstatus_pkey PRIMARY KEY, btree (id)
 fastadder_fastadderstatus_apt_id_key UNIQUE, btree (apt_id, service_id)
 fastadder_fastadderstatus_agent_priority btree (agent_priority)
 fastadder_fastadderstatus_apt_id btree (apt_id)
 fastadder_fastadderstatus_built btree (built)
 fastadder_fastadderstatus_last_checked btree (last_checked)
 fastadder_fastadderstatus_last_validated btree (last_validated)
 fastadder_fastadderstatus_position_in_queue btree (position_in_queue)
 fastadder_fastadderstatus_priority btree (priority)
 fastadder_fastadderstatus_running_status btree (running_status)
 fastadder_fastadderstatus_service_id btree (service_id)


 Whoh! Hold on, here. That looks like *way* too many indexes.


I actually just added most of those yesterday in an attempt to improve
performance. priority and agent_priority were missing indexes and that was a
big mistake.

overall performance went way up on my primary selects


 Definitely will slow down your insert/update performance.


there are a lot more selects happening throughout the day


 The index on 'built' for example, is a boolean. If it's evenly distributed,
 that's 150k matches for true or false,


ok,

built True is in the minority.

here is the test query that caused me to add indices to the booleans.  this
is a 30k table which is doing selects on two booleans constantly.  again:
True is the minority

explain analyze SELECT nsproperties_apt.id,
nsproperties_apt.display_address, nsproperties_apt.apt_num,
nsproperties_apt.bldg_id, nsproperties_apt.is_rental,
nsproperties_apt.is_furnished, nsproperties_apt.listing_type,
nsproperties_apt.list_on_web, nsproperties_apt.is_approved,
nsproperties_apt.status, nsproperties_apt.headline,
nsproperties_apt.slug, nsproperties_apt.cross_street,
nsproperties_apt.show_apt_num, nsproperties_apt.show_building_name,
nsproperties_apt.external_url, nsproperties_apt.listed_on,
nsproperties_bldg.id, nsproperties_bldg.name FROM nsproperties_apt
LEFT OUTER JOIN nsproperties_bldg ON (nsproperties_apt.bldg_id =
nsproperties_bldg.id) WHERE (nsproperties_apt.list_on_web = True AND
nsproperties_apt.is_available = True ) ;
                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Hash Left Join  (cost=408.74..10062.18 rows=3344 width=152) (actual time=12.688..2442.542 rows=2640 loops=1)
   Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)
   ->  Seq Scan on nsproperties_apt  (cost=0.00..9602.52 rows=3344 width=139) (actual time=0.025..2411.644 rows=2640 loops=1)
         Filter: (list_on_web AND is_available)
   ->  Hash  (cost=346.66..346.66 rows=4966 width=13) (actual time=12.646..12.646 rows=4966 loops=1)
         ->  Seq Scan on nsproperties_bldg  (cost=0.00..346.66 rows=4966 width=13) (actual time=0.036..8.236 rows=4966 loops=1)
 Total runtime: 2444.067 ms
(7 rows)

=

 Hash Left Join  (cost=1232.45..9784.18 rows=5690 width=173) (actual time=30.000..100.000 rows=5076 loops=1)
   Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)
   ->  Bitmap Heap Scan on nsproperties_apt  (cost=618.23..9075.84 rows=5690 width=157) (actual time=10.000..60.000 rows=5076 loops=1)
         Filter: (list_on_web AND is_available)
         ->  BitmapAnd  (cost=618.23..618.23 rows=5690 width=0) (actual time=10.000..10.000 rows=0 loops=1)
               ->  Bitmap Index Scan on nsproperties_apt_is_available  (cost=0.00..131.81 rows=6874 width=0) (actual time=0.000..0.000

[PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
reply was meant for the list

-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith g...@2ndquadrant.com




On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith g...@2ndquadrant.com wrote:

 PostgreSQL version?  If you're running on 8.3 or earlier, I would be
 suspicous that your Free Space Map has been overrun.


8.3




 What you are seeing is that the table itself is much larger on disk than
 it's supposed to be.


which part of the explain told you that ?

 shaun thomas

SELECT relpages*8/1024 FROM pg_class
 WHERE relname='fastadder_fastadderstatus';

458MB

way too big. build_cache is text between 500-1k chars




  That can be caused by frequent UPDATEs if you don't have vacuum cleanup
 working effectively, you'll get lots of dead sections left behind from
 UPDATEs in the middle.


ok, I just vacuumed it (did this manually a few times as well), and
autovacuum is on.

still:
32840.000ms
and still 458MB



 The best way to fix all this is to run CLUSTER on the table.


http://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html

now that would order the data on disk by id (primary key)
the usage of the table is either by a query or by position_in_queue which is
rewritten often (I might change this part of the app and pull it out of this
table)

is this definitely the best way to fix this ?

thanks for your help !


That will introduce a bit of downtime while it holds a lock on the table
 (only a few minutes based on what you've shown here), but the copy you'll
 have afterwards won't be spread all over disk anymore.

 --
  Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
 PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
 PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Kenneth Marshall
On Fri, Feb 04, 2011 at 05:20:27PM +0100, felix wrote:
 reply was meant for the list
 
 -- Forwarded message --
 From: felix crucialfe...@gmail.com
 Date: Fri, Feb 4, 2011 at 4:39 PM
 Subject: Re: [PERFORM] Really really slow select count(*)
 To: Greg Smith g...@2ndquadrant.com
 
 
 
 
 On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith g...@2ndquadrant.com wrote:
 
  PostgreSQL version?  If you're running on 8.3 or earlier, I would be
   suspicious that your Free Space Map has been overrun.
 
 
 8.3
 
 
 
 
  What you are seeing is that the table itself is much larger on disk than
  it's supposed to be.
 
 
 which part of the explain told you that ?
 
  shaun thomas
 
 SELECT relpages*8/1024 FROM pg_class
  WHERE relname='fastadder_fastadderstatus';
 
 458MB
 
 way too big. build_cache is text between 500-1k chars
 

As has been suggested, you really need to CLUSTER the table
to remove dead rows. VACUUM will not do that, VACUUM FULL will
but will take a full table lock and then you would need to
REINDEX to fix index bloat. CLUSTER will do this in one shot.
You almost certainly have your free space map way too small,
which is how you bloated in the first place.

Cheers,
Ken



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 10:17 AM, felix wrote:


 How big is this table when it's acting all bloated and ugly?

458MB


Wow! There's no way a table with 300k records should be that big unless 
it's just full of text. 70 seconds seems like a really long time to read 
half a gig, but that might be because it's fighting for IO with other 
processes.


For perspective, we have several 1-2 million row tables smaller than 
that. Heck, I have an 11-million row table that's only 30% larger.



are updates of the where id IN (1,2,3,4) generally not efficient ?
how about for select queries ?


Well, IN is notorious for being inefficient. It's been getting better, 
but even EXISTS is a better bet than using IN. We've got a lot of stuff 
using IN here, and we're slowly phasing it out. Every time I get rid of 
it, things get faster.
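
As an illustration of that kind of rewrite (temp_statuses is the
hypothetical staging table from the earlier UPDATE example):

-- instead of: ... WHERE id IN (SELECT id FROM temp_statuses)
SELECT s.*
  FROM fastadder_fastadderstatus s
 WHERE EXISTS (SELECT 1 FROM temp_statuses t WHERE t.id = s.id);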



I actually just added most of those yesterday in an attempt to improve
performance. priority and agent_priority were missing indexes and that
was a big mistake.


Haha. Well, that can always be true. Ironically, one of the things you 
actually did by creating the indexes is create fast lookup values that 
circumvent your table bloat. It would help with anything except sequential 
scans, which is what you saw with your count query.



ok,
built True is in the minority.


Ok, in that case, use a partial index. If a boolean value is only 1% of 
your table or something, why bother indexing the rest anyway?


CREATE INDEX fastadder_fastadderstatus_built
    ON fastadder_fastadderstatus (built)
 WHERE built;

But only if it really is the vast minority. Check this way:

SELECT built, count(1)
   FROM fastadder_fastadderstatus
  GROUP BY 1;

We used one of these to ignore a status that was over 90% of the table, 
where the other statuses combined were less than 10%. The index was 10x 
smaller and much faster than before.


If you know both booleans are used together often, you can combine them 
into a single index, again using a partial where it only indexes if both 
values are true. Much smaller, much faster index if it's more selective 
than the other indexes.
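
For the query you posted, that might look like the sketch below; the choice
of bldg_id as the indexed column is an assumption (it serves the join), with
the predicate doing the filtering:

CREATE INDEX nsproperties_apt_web_available
    ON nsproperties_apt (bldg_id)
 WHERE list_on_web AND is_available;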


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 10:03 AM, felix wrote:


  max_fsm_pages   | 153600 | Sets the
maximum number of disk pages for which free space is tracked.
  max_fsm_relations   | 1000   | Sets the
maximum number of tables and indexes for which free space is tracked.

how do I determine the best size or if that's the problem ?


Well, the best way is to run:

vacuumdb -a -v -z > vacuum.log

And at the end of the log, it'll tell you how many pages it wants, and 
how many pages were available.


 From the sounds of your database, 150k is way too small. If a single 
table is getting 10-50k updates per day, it's a good chance a ton of 
other tables are getting similar traffic. With max_fsm_pages at that 
setting, any update beyond 150k effectively gets forgotten, and 
forgotten rows aren't reused by new inserts or updates.


Your database has probably been slowly expanding for months without you 
realizing it. The tables that get the most turnover will be hit the 
hardest, as it sounds like what happened here.


You can stop the bloating by setting the right max_fsm_pages setting, 
but you'll either have to go through and VACUUM FULL every table in your 
database, or dump/restore to regain all the lost space and performance 
(the latter would actually be faster). Before I even touch an older 
PostgreSQL DB, I set it to some value over 3-million just as a starting 
value to be on the safe side. A little used memory is a small price to 
pay for stopping gradual expansion.
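
As a concrete sketch, using the safe starting value mentioned above (size
yours from the vacuum output instead):

SHOW max_fsm_pages;
-- then in postgresql.conf (changing it requires a restart on 8.3):
-- max_fsm_pages = 3000000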


Your reindex was a good idea. Indexes do sometimes need that. But your 
base tables need work too. Unless you're on 8.4 or above, autovacuum 
isn't enough.


Just to share an anecdote, I was with a company about five years ago and 
they also used the default max_fsm_pages setting. Their DB had expanded 
to 40GB and was filling their disk, only a couple weeks before 
exhausting it. I set the max_fsm_pages setting to 2-million, set up a 
bunch of scripts to vacuum-full the tables from smallest to largest (to 
make enough space for the larger tables, you see) and the database ended 
up at less than 20GB.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas stho...@peak6.com wrote:



 vacuumdb -a -v -z vacuum.log

 And at the end of the log, it'll tell you how many pages it wants, and how
 many pages were available.


this is the dev, not live. but this is after it gets done with that table:

CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing public.fastadder_fastadderstatus
INFO:  fastadder_fastadderstatus: scanned 2492 of 2492 pages, containing
154378 live rows and 0 dead rows; 3 rows in sample, 154378 estimated
total rows

and there's nothing at the end of the whole vacuum output about pages

actual command:

vacuumdb -U postgres -W -v -z djns4 > vacuum.log

I tried it with all databases too

?

thanks


Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 10:38 AM, felix crucialfe...@gmail.com wrote:


 On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas stho...@peak6.com wrote:


 vacuumdb -a -v -z > vacuum.log

 And at the end of the log, it'll tell you how many pages it wants, and how
 many pages were available.

 this is the dev, not live. but this is after it gets done with that table:
 CPU 0.00s/0.00u sec elapsed 0.00 sec.
 INFO:  analyzing public.fastadder_fastadderstatus
 INFO:  fastadder_fastadderstatus: scanned 2492 of 2492 pages, containing
 154378 live rows and 0 dead rows; 3000 rows in sample, 154378 estimated
 total rows
 and there's nothing at the end of the whole vacuum output about pages
 actual command:
 vacuumdb -U postgres -W -v -z djns4 > vacuum.log
 I tried it with all databases too

I believe you have to run it on the whole db to get that output.



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
vacuumdb -a -v -z -U postgres -W > vacuum.log

that's all, isn't it ?

it did each db

8.3 in case that matters

the very end:

There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing public.seo_partnerlinkcategory
INFO:  seo_partnerlinkcategory: scanned 0 of 0 pages, containing 0 live
rows and 0 dead rows; 0 rows in sample, 0 estimated total rows



On Fri, Feb 4, 2011 at 6:40 PM, Scott Marlowe scott.marl...@gmail.com wrote:


  I tried it with all databases too

 I believe you have to run it on the whole db to get that output.



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 11:38 AM, felix wrote:


CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing public.fastadder_fastadderstatus
INFO: fastadder_fastadderstatus: scanned 2492 of 2492 pages,
containing 154378 live rows and 0 dead rows; 3 rows in sample,
154378 estimated total rows

and there's nothing at the end of the whole vacuum output about pages


I'm not sure if it gives it to you if you pick a single DB, but if you 
use -a for all, you should get something at the very end like this:


INFO:  free space map contains 1365918 pages in 1507 relations
DETAIL:  A total of 1326656 page slots are in use (including overhead).
1326656 page slots are required to track all free space.
Current limits are:  3000000 page slots, 3500 relations, using 38784 kB.
VACUUM

That's on our dev system. Your dev table seems properly sized, but prod 
probably isn't. If you run an all-database vacuum after-hours, you'll 
see the stuff at the end. And if your 'page slots are required' is 
greater than your 'page slots are in use,' you've got a problem.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
vacuumdb -a -v -z -U postgres -W > vacuum.log
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
cruxnu:nsbuildout crucial$

do you think it's possible that it just doesn't have anything to complain
about ?
or the password is affecting it ?

In any case I'm not sure I want to run this even at night on production.

what is the downside to estimating max_fsm_pages too high ?

3000000 should be safe
it's certainly not 150k

I have one very large table (10m) that is being analyzed before I warehouse
it.
that could've been the monster that ate the free map.
I think today I've learned that even unused tables affect postgres
performance.


and do you agree that I should turn CLUSTER ON ?
I have no problem to stop all tasks to this table at night and just reload
it



On Fri, Feb 4, 2011 at 6:47 PM, Shaun Thomas stho...@peak6.com wrote:

 On 02/04/2011 11:44 AM, felix wrote:

  the very end:

 There were 0 unused item pointers.
 0 pages are entirely empty.
 CPU 0.00s/0.00u sec elapsed 0.00 sec.
 INFO:  analyzing public.seo_partnerlinkcategory
 INFO: seo_partnerlinkcategory: scanned 0 of 0 pages, containing 0 live
 rows and 0 dead rows; 0 rows in sample, 0 estimated total rows


 That looks to me like it didn't finish. Did you fork it off with '&' or run
 it and wait until it gave control back to you?

 It really should be telling you how many pages it wanted, and are in use.
 If not, something odd is going on.





Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Scott Marlowe
You can run vacuum verbose on just the postgres database and get the
global numbers at the end.  gotta be a superuser as well.

# \c postgres postgres
postgres=# vacuum verbose;
 lots deleted.
DETAIL:  A total of 7664 page slots are in use (including overhead).
7664 page slots are required to track all free space.
Current limits are:  1004800 page slots, 5000 relations, using 6426 kB.



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 12:14 PM, felix wrote:


do you think it's possible that it just doesn't have anything to
complain about ? or the password is affecting it ?


Why is it asking for the password over and over again? It shouldn't be 
doing that. And also, are you running this as a user with superuser 
privileges? You might want to think about setting up a .pgpass file, or 
setting up local trust for the postgres user so you can run maintenance 
without having to manually enter a password.

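A minimal .pgpass sketch (the file lives in the connecting user's home 
directory and must be chmod 600 or libpq will ignore it; the password here is 
obviously a placeholder):

# ~/.pgpass format is hostname:port:database:username:password
localhost:5432:*:postgres:yourpasswordhere

With that in place, vacuumdb -U postgres works without -W.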


In any case I'm not sure I want to run this even at night on
production.


You should be. Even with auto vacuum turned on, all of our production 
systems get a nightly vacuum over the entire list of databases. It's non 
destructive, and about the only thing that happens is disk IO. If your 
app has times where it's not very busy, say 3am, it's a good time.


This is especially true since your free space map is behind.

We actually turn off autovacuum because we have a very transactionally 
intense DB, and if autovacuum launches on a table in the middle of the 
day, our IO totally obliterates performance. We only run a nightly 
vacuum over all the databases when very few users or scripts are using 
anything.



what is the downside to estimating max_fsm_pages too high ?


Nothing really. It uses more memory to track it, but on modern servers, 
it's not a concern. The only risk is that you don't know what the real 
setting should be, so you may not completely stop your bloating.



and do you agree that I should turn CLUSTER ON ?


Cluster isn't really something you turn on, but something you do. It's 
like vacuum full, in that it basically rebuilds the table and all 
indexes from scratch. The major issue you'll run into is that it 
reorders the table by the index you chose, so you'd best select the 
primary key unless you have reasons to use something else. And you have 
to do it table by table, which will really suck since we already know 
your whole db has bloated, not just one or two tables.


You're going to be doing some scripting, buddy. :) Well, unless you just 
do a dump/restore and start over with sane postgresql.conf settings.

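A sketch of generating that script from the catalogs, using the 8.3 CLUSTER 
syntax and assuming every user table should be clustered on its primary key:

SELECT 'CLUSTER ' || i.relname || ' ON ' || c.relname || ';'
  FROM pg_index x
  JOIN pg_class c ON (c.oid = x.indrelid)
  JOIN pg_class i ON (i.oid = x.indexrelid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
 WHERE x.indisprimary
   AND c.relkind = 'r'
   AND n.nspname = 'public';

Pipe the output to a file and run it during a quiet window; each CLUSTER takes 
an exclusive lock on its table while it rebuilds.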


I have no problem to stop all tasks to this table at night and just
reload it


That will work for this table. Just keep in mind all your tables have 
been suffering since you installed this database. Tables with the 
highest turnover were hit hardest, but they all have non-ideal sizes 
compared to what they would be if your maintenance was working.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Greg Smith

felix wrote:

and do you agree that I should turn CLUSTER ON ?
I have no problem to stop all tasks to this table at night and just 
reload it


You don't turn it on; it's a one time operation that does a cleanup.  It 
is by far the easiest way to clean up the mess you have right now.  
Moving forward, if you have max_fsm_pages set to an appropriate number, 
you shouldn't end up back in this position again.  But VACUUM alone 
won't get you out of there, and VACUUM FULL is always a worse way to 
clean this up than CLUSTER.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Robert Haas
On Fri, Feb 4, 2011 at 9:38 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
 Actually for me the main con with streaming analyze is that it adds
 significant CPU burden to already not too fast load process.

Exactly.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] How to best use 32 15k.7 300GB drives?

2011-02-04 Thread Robert Haas
On Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
 Why do you expect such a invasive code changes? I know little about
 postgresql code layering, but what I propose (with changing delete to
 truncate) is:
 1) Leave tuple addressing as it is now

i.e. a block number and a slot position within the block?

Seems like you'd need file,block,slot.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 11:38 AM, Greg Smith g...@2ndquadrant.com wrote:
 You don't turn it on; it's a one time operation that does a cleanup.  It is
 by far the easiest way to clean up the mess you have right now.  Moving
 forward, if you have max_fsm_pages set to an appropriate number, you
 shouldn't end up back in this position again.  But VACUUM alone won't get
 you out of there, and VACUUM FULL is always a worse way to clean this up
 than CLUSTER.

note that for large, randomly ordered tables, cluster can be pretty
slow, and you might want to do the old:

begin;
select * into temporaryholdingtable order by somefield;
truncate oldtable;
insert into oldtable select * from temporaryholdingtable;
commit;

for fastest performance.  I've had CLUSTER take hours to do what the
above does in 1/4th the time.



[PERFORM] Write-heavy pg_stats_collector on mostly idle server

2011-02-04 Thread Josh Berkus
All,

Seeing an issue which is new on me.  On a mostly idle PostgreSQL server,
the stats collector is rewriting the entire stats file twice per second.

Version: 8.4.4
Server: Ubuntu, kernel 2.6.32
Server set up: ApacheMQ server.  25 databases, each of which hold 2-3
tables.
Filesystem: Ext4, defaults
Active connections: around 15
Autovacuum settings: defaults

Symptoms: on a server which gets around 20 reads and 15 writes per
minute, we are seeing average 500K/second writes by the stats collector
to pg_stat.tmp.  pg_stat.tmp is around 270K.

An strace of the stats collector process shows that the stats collector
is, in fact, rewriting the entire stats file twice per second.

Anyone seen anything like this before?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 01:01 PM, Scott Marlowe wrote:


begin;
select * into temporaryholdingtable order by somefield;
truncate oldtable;
insert into oldtables select * from temporaryholdingtable;
commit;


That's usually how I do it, except for larger tables, I also throw in a 
DROP INDEX for all the indexes on the table before the insert, and 
CREATE INDEX statements afterwards.

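A sketch of that variant; table, column, and index names here are invented:

BEGIN;
SELECT * INTO holding FROM oldtable ORDER BY somefield;
DROP INDEX oldtable_somefield_idx;  -- repeat per index; skip constraint-backed ones
TRUNCATE oldtable;
INSERT INTO oldtable SELECT * FROM holding;
CREATE INDEX oldtable_somefield_idx ON oldtable (somefield);
DROP TABLE holding;
COMMIT;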

Which actually brings up a question I've been wondering to myself that I 
may submit to [HACKERS]: Can we add a parallel option to the reindexdb 
command? We added one to pg_restore, so we already know it works.


I have a bunch of scripts that get all the indexes in the database and 
order them by size (so they're distributed evenly), round-robin them 
into separate REINDEX sql files, and launches them all in parallel 
depending on how many threads you want, but that's so hacky I feel dirty 
every time I use it.

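A rough sketch of the same idea with xargs doing the distribution instead of 
round-robin files (database name and worker count are placeholders):

psql -At -d mydb -c "SELECT i.indexrelid::regclass FROM pg_index i
       JOIN pg_class c ON (c.oid = i.indexrelid)
       JOIN pg_namespace n ON (n.oid = c.relnamespace)
       WHERE n.nspname = 'public'" \
  | xargs -P 4 -I{} psql -d mydb -c 'REINDEX INDEX {};'

This loses the order-by-size balancing, but GNU xargs -P hands each worker a 
new index as soon as it finishes the last one, which evens out reasonably well.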

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
On Fri, Feb 4, 2011 at 7:34 PM, Shaun Thomas stho...@peak6.com wrote:

 Why is it asking for the password over and over again? It shouldn't be
 doing that.


because I asked it to: -W
on the production server I need to enter password and I'm testing on dev
first.

I just sudo tried it but still no report


 and do you agree that I should turn CLUSTER ON ?


 Cluster isn't really something you turn on, but something you do.


djns4=# cluster fastadder_fastadderstatus;
ERROR:  there is no previously clustered index for table
fastadder_fastadderstatus

http://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html

djns4=# alter table fastadder_fastadderstatus CLUSTER ON fastadder_fastadderstatus_pkey;
ALTER TABLE
djns4=# CLUSTER fastadder_fastadderstatus;
CLUSTER

ok, that's why I figured I was turning something on. the table has been
altered.

it will be pk ordered, new entries always at the end and no deletes

but this means I have to manually run cluster from time to time, right ? not
that there will be much or any reordering.  or it should be fine going
forward with vacuum and enlarging the free space memory map.



 It's like vacuum full, in that it basically rebuilds the table and all
 indexes from scratch. The major issue you'll run into is that it reorders
 the table by the index you chose, so you'd best select the primary key
 unless you have reasons to use something else. And you have to do it table
 by table, which will really suck since we already know your whole db has
 bloated, not just one or two tables.


do we know that ?  many of the tables are fairly static.

only this one is seriously borked, and yet other related tables seem to be
fine.




 You're going to be doing some scripting, buddy. :) Well, unless you just do
 a dump/restore and start over with sane postgresql.conf settings.


well who knew the defaults were unsane ? :)

scripting this is trivial, I already have the script

I have made the mistake of doing VACUUM FULL in the past. in fact on this
table, and it had to be killed because it took down my entire website !
 that may well be the major borking event. a credit to postgres that the
table still functions if that's the case.

scott marlowe:

begin;
 select * into temporaryholdingtable order by somefield;
 truncate oldtable;
 insert into oldtable select * from temporaryholdingtable;
 commit;


that sounds like a good approach.

gentlemen, 300,000 + thanks for your generous time !
(a small number, I know)

-felix


Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 12:26 PM, felix crucialfe...@gmail.com wrote:
 I just sudo tried it but still no report

It's not about who you are in Unix / Linux, it's about who you are in
Postgresql.  \du will show you who is a superuser.  psql -U username
will let you connect as that user.



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 01:26 PM, felix wrote:


because I asked it to: -W on the production server I need to enter
password and I'm testing on dev first.


Right. I'm just surprised it threw up the prompt so many times.


I just sudo tried it but still no report


Nono... you have to run the vacuum command with the -U for a superuser 
in the database. Like the postgres user.



but this means I have to manually run cluster from time to time, right ?
not that there will be much or any reordering.  or it should be fine
going forward with vacuum and enlarging the free space memory map.


It should be fine going forward. You only need to re-cluster if you want 
to force the table to remain in the order you chose, since it doesn't 
maintain the order for updates and new inserts. Since you're only doing 
it as a cleanup, that's not a concern for you.



do we know that ?  many of the tables are fairly static. only this
one is seriously borked, and yet other related tables seem to be
fine.


Probably not in your case. I just mean that any non-static table is 
going to have this problem. If you know what those are, great. I don't 
usually have that luxury, so I err on the side of assuming the whole DB 
is borked. :)


Also, here's a query you may find useful in the future. It reports the 
top 20 tables by size, but also reports the row counts and what not. 
It's a good way to find possibly bloated tables, or tables you could 
archive:


SELECT n.nspname AS schema_name, c.relname AS table_name,
   c.reltuples AS row_count,
   c.relpages*8/1024 AS mb_used,
   pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid=c.relnamespace)
 WHERE c.relkind = 'r'
 ORDER BY total_mb_used DESC
 LIMIT 20;

The total_mb_used column is the table + all of the indexes and toast 
table space. The mb_used is just for the table itself. This will also 
help you see index bloat, or if a table has too much toasted data.



well who knew the defaults were unsane ? :)


Not really unsane, but for any large database, they're not ideal. This 
also goes for the default_statistics_target setting. If you haven't 
already, you may want to bump this up to 100 from the default of 10. Not 
enough stats can make the planner ignore indexes and other bad things, 
and it sounds like your DB is big enough to benefit from that.


Later versions have made 100 the default, so you'd just be catching up. :)

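If changing the global setting feels too invasive at first, the same thing can 
be done per column; a sketch with an invented column name:

ALTER TABLE fastadder_fastadderstatus
  ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE fastadder_fastadderstatus;
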
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
ah right, duh.
yes, I did it as -U postgres, verified as a superuser

just now did it from inside psql as postgres

\c djns4
vacuum verbose analyze;

still no advice on the pages



On Fri, Feb 4, 2011 at 8:34 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Fri, Feb 4, 2011 at 12:26 PM, felix crucialfe...@gmail.com wrote:
  I just sudo tried it but still no report

 It's not about who you are in Unix / Linux, it's about who you are in
 Postgresql.  \du will show you who is a superuser.  psql -U username
 will let you connect as that user.



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 01:59 PM, felix wrote:



still no advice on the pages


I think it just hates you.

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread felix
it probably has good reason to hate me.



ns=# SELECT n.nspname AS schema_name, c.relname AS table_name,
ns-#    c.reltuples AS row_count,
ns-#    c.relpages*8/1024 AS mb_used,
ns-#    pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used
ns-#   FROM pg_class c
ns-#   JOIN pg_namespace n ON (n.oid=c.relnamespace)
ns-#  WHERE c.relkind = 'r'
ns-#  ORDER BY total_mb_used DESC
ns-#  LIMIT 20;
 schema_name |            table_name            |  row_count  | mb_used | total_mb_used
-------------+----------------------------------+-------------+---------+---------------
 public      | django_session                   | 1.47843e+07 |    4122 |         18832
 public      | traffic_tracking2010             | 9.81985e+06 |     811 |          1653
 public      | mailer_mailingmessagelog         | 7.20214e+06 |     441 |          1082
 public      | auth_user                        | 3.20077e+06 |     572 |           791
 public      | fastadder_fastadderstatus        |      302479 |     458 |           693
 public      | registration_registrationprofile | 3.01345e+06 |     248 |           404
 public      | reporting_dp_6c93734c            |  1.1741e+06 |      82 |           224
 public      | peoplez_contact                  |       79759 |      18 |           221
 public      | traffic_tracking201101           | 1.49972e+06 |     163 |           204
 public      | reporting_dp_a3439e2a            | 1.32739e+06 |      82 |           187
 public      | nsproperties_apthistory          |       44906 |      69 |           126
 public      | nsproperties_apt                 |       30780 |      71 |           125
 public      | clients_showingrequest           |       85175 |      77 |           103
 public      | reporting_dp_4ffe04ad            |      330252 |      26 |            63
 public      | fastadder_fastadderstatuslog     |      377402 |      28 |            60
 public      | nsmailings_officememotoagent     |      268345 |      15 |            52
 public      | celery_taskmeta                  |        5041 |      12 |            32
 public      | mailer_messagelog                |      168298 |      24 |            32
 public      | datapoints_job                   |        9167 |      12 |            23
 public      | fastadder_fastadderstatus_errors |      146314 |       7 |            21

oh and there in the footnotes to django they say don't forget to run the
delete-expired-sessions management command every once in a while. thanks guys.

it won't run now because it's too big, I can delete them from psql though

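For reference, a minimal sketch of that manual cleanup (Django's session 
table keeps its expiry in expire_date):

DELETE FROM django_session WHERE expire_date < now();

Note that a plain DELETE leaves the file the same size on disk, which is why 
the reply below suggests rebuilding the table instead.
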
well just think how sprightly my website will run tomorrow once I fix these.




On Fri, Feb 4, 2011 at 9:00 PM, Shaun Thomas stho...@peak6.com wrote:

 On 02/04/2011 01:59 PM, felix wrote:


  still no advice on the pages


 I think it just hates you.





Re: [PERFORM] Really really slow select count(*)

2011-02-04 Thread Shaun Thomas

On 02/04/2011 02:14 PM, felix wrote:


oh and there in the footnotes to django they say don't forget to run
the delete-expired-sessions management command every once in a while.
thanks guys.


Oh Django... :)


it won't run now because its too big, I can delete them from psql though


You might be better off deleting the inverse. You know, start a 
transaction, select all the sessions that *aren't* expired, truncate the 
table, insert them back into the session table, and commit.


BEGIN;
CREATE TEMP TABLE foo_1 AS
SELECT * FROM django_session WHERE date_expired > CURRENT_DATE;
TRUNCATE django_session;
INSERT INTO django_session SELECT * from foo_1;
COMMIT;

Except I don't actually know what the expired column is. You can figure 
that out pretty quick, I assume. That'll also have the benefit of 
cleaning up the indexes and the table all at once. If you just do a 
delete, the table won't change at all, except that it'll have less 
active records.



well just think how sprightly my website will run tomorrow once I fix
these.


Maybe. :)

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com



[PERFORM] Re: getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-04 Thread Mark Stosberg
On 02/03/2011 10:57 AM, gnuo...@rcn.com wrote:
 For PG to prosper in the future, it has to embrace the 
 multi-core/processor/SSD machine at the query level

As the person who brought up the original concern, I'll add that
multi-core at the query level really isn't important for us. Most of
our PostgreSQL usage is through a web application which fairly
automatically takes advantage of multiple cores, because there are
several parallel connections.

A smaller but important piece of what we do is a cron script that
needs to run hundreds of thousands of variations of the same complex
SELECT as fast as it can.

What honestly would have helped most is not technical at all-- it would
have been some documentation on how to take advantage of multiple cores
for this case.

It looks like it's going to be trivial-- Divide up the data with a
modulo, and run multiple parallel cron scripts that each processes a
slice of the data. A benchmark showed that this approach sped up our
processing 3x when splitting the application 4 ways across 4 processors.
(I think we failed to achieve a 4x improvement because the server was
already busy handling some other tasks).

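A sketch of the slicing, assuming an integer primary key named id and four 
worker slots; each cron entry runs the same script with a different slot 
number from 0 to 3 (the table name here is invented):

SELECT * FROM work_queue WHERE id % 4 = :slot;

Since the slices are disjoint, the workers never contend for the same rows.
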
Part of our case is likely fairly common *today*: many servers are
multi-core now, but people don't necessarily understand how to take
advantage of that if it doesn't happen automatically.

Mark



[PERFORM] checkpoint_completion_target and Ext3

2011-02-04 Thread Josh Berkus
Greg (Smith),

Given your analysis of fsync'ing behavior on Ext3, would you say that it
is better to set checkpoint_completion_target to 0.0 on Ext3?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [PERFORM] Re: getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 2:18 PM, Mark Stosberg m...@summersault.com wrote:
 It looks like it's going to be trivial-- Divide up the data with a
 modulo, and run multiple parallel cron scripts that each processes a
 slice of the data. A benchmark showed that this approach sped up our
 processing 3x when splitting the application 4 ways across 4 processors.
 (I think we failed to achieve a 4x improvement because the server was
 already busy handling some other tasks).

I once had about 2 months of machine work ahead of me for one server.
Luckily it was easy to break up into chunks and run it on all the
workstations at night in the office, and we were done in < 1 week.
pgsql was the data store for it, and it was just like what you're
talking about, break it into chunks, spread it around.



Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Bruce Momjian
Greg Smith wrote:
 Check out 
http://www.indeed.com/jobtrends?q=postgres%2C+mysql%2C+oracle&relative=1&relative=1
  
 if you want to see the real story here.  Oracle has a large installed 
 base, but it's considered a troublesome legacy product being replaced 

+1 for Oracle being a troublesome legacy product.

-- 
Bruce Momjian  br...@momjian.us   http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Bruce Momjian
Mladen Gogala wrote:
 Chris Browne wrote:
  Well, the community declines to add hints until there is actual
  consensus on a good way to add hints.

 OK. That's another matter entirely.   Who should make that decision? Is 
 there a committee or a person who would be capable of making that decision?
 
  Nobody has ever proposed a way to add hints where consensus was arrived
  at that the way was good, so...

 
 So, I will have to go back on my decision to use Postgres and 
 re-consider MySQL? I will rather throw away the effort invested in 

You want to reconsider using MySQL because Postgres doesn't have hints. 
Hard to see how that logic works.

-- 
Bruce Momjian  br...@momjian.us   http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



[PERFORM] table partitioning and select max(id)

2011-02-04 Thread Tobias Brox
I implemented table partitioning, and it caused havoc with a select
max(id) on the parent table - the query plan has changed from a
lightning fast backwards index scan to a deadly seq scan.  Both
partitions are set up with primary key index and draws new IDs from
the same sequence ... select max(id) on both partitions are fast.
Are there any tricks I can do to speed up this query?  I can't add the
ID to the table constraints, we may still get in old data causing
rows with fresh IDs to get into the old table.

(I decided to keep this short rather than include lots of details -
but at least worth mentioning that we're using PG9)



Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Mark Kirkwood

On 05/02/11 03:36, Mladen Gogala wrote:
Shaun, I don't need to convince you or the Postgres community. I 
needed an argument to convince my boss.
My argument was that the sanctimonious and narrow minded Postgres 
community is unwilling to even consider creating the tools I need for 
large porting projects, tools provided by other major databases. This 
discussion served my purpose wonderfully.  Project is killed, here we 
part ways. No more problems for either of us. Good luck with the 
perfect optimizer and good riddance. My only regret is about the 
time I have wasted.




I think it is unlikely that your boss is going to dismiss Postgres on 
the basis of some minor technical point (no optimizer hints). Bosses 
usually (and should) care about stuff like reference sites, product 
pedigree and product usage in similar sized companies to theirs. 
Postgres will come out rather well if such an assessment is actually 
performed I would think.


The real question you should be asking is this:

Given that there are no hints, what do I do to solve the problem of a 
slow query suddenly popping up in production? If and when this situation 
occurs, see how quickly the community steps in to help you solve it (and 
it'd bet it will solved be very quickly indeed).


Best wishes

Mark






Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Bruce Momjian
Mladen Gogala wrote:
 Actually, it is not unlike a religious dogma, only stating that hints 
 are bad. It even says so in the wiki. The arguments are
 1) Refusal to implement hints is motivated by distrust toward users, 
 citing that some people may mess things up.
 Yes, they can, with and without hints.
 2) All other databases have them. This is a major feature and if I were 
 in the MySQL camp, I would use it as an
argument. Asking me for some proof is missing the point. All other 
 databases have hints precisely because
they are useful. Assertion that only Postgres is so smart that can 
 operate without hints doesn't match the
reality. As a matter of fact, Oracle RDBMS on the same machine will 
 regularly beat PgSQL in performance.
That has been my experience so far.   I even posted counting query 
 results.
 3) Hints are make it or break it feature. They're absolutely needed in 
 the fire extinguishing situations.
 
 I see no arguments to say otherwise and until that ridiculous we don't 
 want hints dogma is on wiki, this is precisely what it is:  a dogma. 

Uh, that is kind of funny considering that text is on a 'wiki', meaning
everything there is open to change if the group agrees.

 Dogmas do not change and I am sorry that you don't see it that way. 
 However, this discussion
 did convince me that I need to take another look at MySQL and tone down 
 my engagement with PostgreSQL community. This is my last post on the 
 subject because posts are becoming increasingly personal. This level of 
 irritation is also
 characteristic of a religious community chastising a sinner. Let me 
 remind you again: all other major databases have that possibility: 
 Oracle, MySQL, DB2, SQL Server and Informix. Requiring burden of proof 
 about hints is equivalent to saying that all these databases are 
 developed by idiots and have a crappy optimizer.

You need to state the case for hints independent of what other databases
do, and independent of fixing the problems where the optimizer doesn't
match reality.

-- 
Bruce Momjian  br...@momjian.us   http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-04 Thread Chris Browne
gnuo...@rcn.com writes:
 Time for my pet meme to wiggle out of its hole (next to Phil's, and a
 day later).  For PG to prosper in the future, it has to embrace the
 multi-core/processor/SSD machine at the query level.  It has to.  And
 it has to because the Big Boys already do so, to some extent, and
 they've realized that the BCNF schema on such machines is supremely
 efficient.  PG/MySql/OSEngineOfChoice will get left behind simply
 because the efficiency offered will be worth the price.

 I know this is far from trivial, and my C skills are such that I can
 offer no help.  These machines have been the obvious current machine
 in waiting for at least 5 years, and those applications which benefit
 from parallelism (servers of all kinds, in particular) will filter out
 the winners and losers based on exploiting this parallelism.

 Much as it pains me to say it, but the MicroSoft approach to software:
 write to the next generation processor and force users to upgrade,
 will be the winning strategy for database engines.  There's just way
 too much to gain.

I'm not sure how true that is, really.  (e.g. - too much to gain.)

I know that Jan Wieck and I have been bouncing thoughts on valid use of
threading off each other for *years*, now, and it tends to be
interesting but difficult to the point of impracticality.

But how things play out are quite fundamentally different for different
usage models.

It's useful to cross items off the list, so we're left with the tough
ones that are actually a problem.

1.  For instance, OLTP applications, that generate a lot of concurrent
connections, already do perfectly well in scaling on multi-core systems.
Each connection is a separate process, and that already harnesses
multi-core systems perfectly well.  Things have improved a lot over the
last 10 years, and there may yet be further improvements to be found,
but it seems pretty reasonable to me to say that the OLTP scenario can
be treated as solved in this context.

The scenario where I can squint and see value in trying to multithread
is the contrast to that, of OLAP.  The case where we only use a single
core, today, is where there's only a single connection, and a single
query, running.

But that can reasonably be further constrained; not every
single-connection query could be improved by trying to spread work
across cores.  We need to add some further assumptions:

2.  The query needs to NOT be I/O-bound.  If it's I/O bound, then your
system is waiting for the data to come off disk, rather than to do
processing of that data.

That condition can be somewhat further strengthened...  It further needs
to be a query where multi-processing would not increase the I/O burden.

Between those two assumptions, that cuts the scope of usefulness to a
very considerable degree.

And if we *are* multiprocessing, we introduce several new problems, each
of which is quite troublesome:

 - How do we decompose the query so that the pieces are processed in
   ways that improve processing time?

   In effect, how to generate a parallel query plan?

   It would be more than stupid to consider this to be obvious.  We've
   got 15-ish years worth of query optimization efforts that have gone
   into Postgres, and many of those changes were not obvious until
   after they got thought through carefully.  This multiplies the
   complexity, and opportunity for error.

 - Coordinating processing

   Becomes quite a bit more complex.  Multiple threads/processes are
   accessing parts of the same data concurrently, so a parallelized
   query that harnesses 8 CPUs might generate 8x as many locks and
   analogous coordination points.

 - Platform specificity

   Threading is a problem in that each OS platform has its own
   implementation, and even when they claim to conform to common
   standards, they still have somewhat different interpretations.  This
   tends to go in one of the following directions:

a) You have to pick one platform to do threading on.

   Oops.  There's now PostgreSQL-Linux, that is the only platform
   where our multiprocessing thing works.  It could be worse than
   that; it might work on a particular version of a particular OS...

b) You follow some apparently portable threading standard

   And find that things are hugely buggy because the platforms
   follow the standard a bit differently.  And perhaps this means
   that, analogous to a), you've got a set of platforms where this
   works (for some value of works), and others where it can't.
   That's almost as evil as a).

c) You follow some apparently portable threading standard

   And need to wrap things in a pretty thick safety blanket to make
   sure it is compatible with all the bugs in interpretation and
   implementation.  Complexity++, and performance probably suffers.

   None of these are particularly palatable, which is why threading
   proposals get a lot of pushback.

At the end of the day, if this is 

Re: [PERFORM] Does auto-analyze work on dirty writes?

2011-02-04 Thread Mark Mielke

On 02/04/2011 10:41 AM, Tom Lane wrote:

1. Autovacuum fires when the stats collector's insert/update/delete
counts have reached appropriate thresholds.  Those counts are
accumulated from messages sent by backends at transaction commit or
rollback, so they take no account of what's been done by transactions
still in progress.

2. Only live rows are included in the stats computed by ANALYZE.
(IIRC it uses SnapshotNow to decide whether rows are live.)

Although the stats collector does track an estimate of the number of
dead rows for the benefit of autovacuum, this isn't used by planning.
Table bloat is accounted for only in terms of growth of the physical
size of the table in blocks.


Thanks, Tom.

Does this un-analyzed bloat not impact queries? I guess the worst case 
here is if autovaccum is disabled for some reason and 99% of the table 
is dead rows. If I understand the above correctly, I think analyze might 
generate a bad plan under this scenario, thinking that a value is 
unique, using the index - but every tuple in the index has the same 
value and each has to be looked up in the table to see if it is visible?


Still, I guess the idea here is not to disable autovacuum, making dead 
rows insignificant in the grand scheme of things. I haven't specifically 
noticed any performance problems here - PostgreSQL is working great for 
me as usual. Just curiosity...


Cheers,
mark

--
Mark Mielke   m...@mielke.cc




Re: [PERFORM] Query performance with disabled hashjoin and mergejoin

2011-02-04 Thread Ivan Voras

On 04/02/2011 15:44, Greg Smith wrote:

Ivan Voras wrote:

The vanilla plan, with default settings is:


Pause here for a second: why default settings? A default PostgreSQL
configuration is suitable for systems with about 128MB of RAM. Since you
say you have good enough hardware, I'm assuming you have a bit more
than that. The first things to try here are the list at
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad
query here looks like it might benefit from a large increase to
effective_cache_size, and possibly an increase to work_mem as well. Your
bad plan here is doing a lot of sequential scans instead of indexed
lookups, which makes me wonder if the change in join types you're
forcing isn't fixing that part as a coincidence.


My earlier message didn't get through so here's a repeat:

Sorry for the confusion: by "default settings" I meant the planner's default 
settings, not the generic shared buffers, WAL logs, work memory etc. - which 
are adequately tuned.



Note that the estimated number of rows coming out of each form of plan
is off by a factor of about 200X, so it's not that the other plan type
is better estimating anything.


Any ideas how to fix the estimates? Or will I have to simulate hints by 
issuing set enable_hashjoin=f; set enable_mergejoin=f; for this query? :)

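If it does come to simulating hints that way, SET LOCAL at least confines the 
damage to one transaction; a sketch:

BEGIN;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_mergejoin = off;
-- run the problem query here
COMMIT;

Both settings revert automatically at COMMIT or ROLLBACK.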





Re: [PERFORM] table partitioning and select max(id)

2011-02-04 Thread Ken Cox
This is a known limitation of partitioning.  One solution is to use a
recursive stored proc, which can use indexes.  Such a solution is
discussed here:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php

Regards,
Ken


On Fri, Feb 4, 2011 at 6:24 PM, Tobias Brox tobi...@gmail.com wrote:
 I implemented table partitioning, and it caused havoc with a select
 max(id) on the parent table - the query plan has changed from a
 lightning fast backwards index scan to a deadly seq scan.  Both
 partitions are set up with primary key index and draws new IDs from
 the same sequence ... select max(id) on both partitions are fast.
 Are there any tricks I can do to speed up this query?  I can't add the
 ID to the table constraints, we may still get in old data causing
 rows with fresh IDs to get into the old table.

 (I decided to keep this short rather than include lots of details -
 but at least worth mentioning that we're using PG9)





-- 
-Ken



Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Scott Marlowe
On Fri, Feb 4, 2011 at 5:17 PM, Bruce Momjian br...@momjian.us wrote:
 Mladen Gogala wrote:
 characteristic of a religious community chastising a sinner. Let me
 remind you again: all other major databases have that possibility:
 Oracle, MySQL, DB2, SQL Server and Informix. Requiring burden of proof
 about hints is equivalent to saying that all these databases are
 developed by idiots and have a crappy optimizer.

 You need to state the case for hints independent of what other databases
 do, and independent of fixing the problems where the optimizer doesn't
 match reality.

And that kind of limits it to an area where we would want the ability to nudge
costs instead of just setting them for an individual part of a query.
i.e. join b on (a.a=b.b set selectivity=0.01) or (a.a=b.b set
selectivity=1.0) or something like that.  i.e. a.a and b.b have a lot
of matches or few, etc.  If there's any thought of hinting it should
be something that a DBA, knowing his data model well, WILL know more
than the current planner because the planner can't get cross table
statistics yet.

But then, why not do something to allow cross table indexes and / or
statistics?  To me that would go much further to helping fix the
issues where the current planner flies blind.

-- 
To understand recursion, one must first understand recursion.



Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Scott Marlowe
2011/2/4 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
 Given that there are no hints, what do I do to solve the problem of a slow
 query suddenly popping up in production? If and when this situation occurs,
 see how quickly the community steps in to help you solve it (and I'd bet it
 will be solved very quickly indeed).

That is EXACTLY what happened to me.  I had a query killing my
production box because it was running VERY long by picking the wrong
plan.  Turned out it was ignoring the number of NULLs and this led to
it thinking one access method that was wrong was the right one.  I had
a patch within 24 hours of identifying the problem, and it took me < 1
hour to have it applied and running in production.

If Oracle can patch their query planner for you in 24 hours, and you
can apply patch with confidence against your test then production
servers in an hour or so, great.  Til then I'll stick to a database
that has the absolutely, without a doubt, best coder support of any
project I've ever used.

My point in the other thread is that if you can identify a point where
a hint would help, like my situation above, you're often better off
presenting a test case here and getting a patch to make it smarter.

However, there are places where the planner just kind of guesses.  And
those are the places to attack when you find a pathological behaviour.
 Or to rewrite your query or use a functional index.



Re: [PERFORM] table partitioning and select max(id)

2011-02-04 Thread Greg Smith

Tobias Brox wrote:

I implemented table partitioning, and it caused havoc with a select
max(id) on the parent table - the query plan has changed from a
lightning fast backwards index scan to a deadly seq scan. 


This problem was fixed in the upcoming 9.1: 


http://archives.postgresql.org/pgsql-committers/2010-11/msg00028.php
http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=034967bdcbb0c7be61d0500955226e1234ec5f04

Here's the comment from that describing the main technique used to fix it:

This module tries to replace MIN/MAX aggregate functions by subqueries 
of the form


(SELECT col FROM tab WHERE ... ORDER BY col ASC/DESC LIMIT 1)

Given a suitable index on tab.col, this can be much faster than the 
generic scan-all-the-rows aggregation plan.  We can handle multiple 
MIN/MAX aggregates by generating multiple subqueries, and their 
orderings can be different.  However, if the query contains any 
non-optimizable aggregates, there's no point since we'll have to scan 
all the rows anyway.

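The same rewrite can be applied by hand on pre-9.1 releases; a sketch 
assuming two partitions named tab_old and tab_new:

SELECT greatest(
    (SELECT id FROM tab_old ORDER BY id DESC LIMIT 1),
    (SELECT id FROM tab_new ORDER BY id DESC LIMIT 1)
) AS max_id;

Each inner subquery gets the fast backwards index scan on its partition's 
primary key, and greatest() ignores a NULL from an empty partition.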

Unfortunately that change ends a series of 6 commits of optimizer 
refactoring in this area, so it's not the case that you just apply this 
one commit as a bug-fix to a 9.0 system.  I have a project in process to 
do the full backport needed, which I might be able to share with you if that 
works out and you're willing to run a custom-patched server 
process.  Using one of the user-space ideas Ken suggested may very well 
be easier for you.  I'm stuck with an app I can't rewrite to do that.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [PERFORM] Talking about optimizer, my long dream

2011-02-04 Thread Mark Kirkwood


If Oracle can patch their query planner for you in 24 hours, and you
can apply patch with confidence against your test then production
servers in an hour or so, great.  Til then I'll stick to a database
that has the absolutely, without a doubt, best coder support of any
project I've ever used.

My point in the other thread is that if you can identify a point where
a hint would help, like my situation above, you're often better off
presenting a test case here and getting a patch to make it smarter.



By way of contrast - I had a similar situation with DB2 (a few years 
ago) with a bad plan being chosen for BETWEEN predicates in some cases. 
I found myself having to spend about an hour or two a week chasing the 
support organization for - wait for it - 6 months to get a planner patch!




Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread david

On Fri, 4 Feb 2011, Vitalii Tymchyshyn wrote:


04.02.11 16:33, Kenneth Marshall wrote:


In addition, the streaming ANALYZE can provide better statistics at
any time during the load and it will be complete immediately. As far
as passing the entire table through the ANALYZE process, a simple
counter can be used to only send the required samples based on the
statistics target. Where this would seem to help the most is in
temporary tables which currently do not work with autovacuum but it
would streamline their use for more complicated queries that need
an analyze to perform well.

Actually for me the main con with streaming analyze is that it adds 
significant CPU burden to already not too fast load process. Especially if 
it's automatically done for any load operation performed (and I can't see how 
it can be enabled on some threshold).


two thoughts

1. if it's a large enough load, isn't it I/O bound?


2. this could be done in a separate process/thread from the load itself, 
that way the overhead of the load is just copying the data in memory to 
the other process.


with a multi-threaded load, this would eat up some cpu that could be used 
for the load, but cores/chip are still climbing rapidly so I expect that 
it's still pretty easy to end up with enough CPU to handle the extra load.


David Lang

And you can't start after some threshold of data has passed by, since you may 
lose significant information (like minimal values).


Best regards, Vitalii Tymchyshyn





Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-04 Thread Robert Haas
On Sat, Feb 5, 2011 at 12:46 AM,  da...@lang.hm wrote:
 Actually for me the main con with streaming analyze is that it adds
 significant CPU burden to already not too fast load process. Especially if
 it's automatically done for any load operation performed (and I can't see
 how it can be enabled on some threshold).

 two thoughts

 1. if it's a large enough load, isn't it I/O bound?

Sometimes.  Our COPY is not as cheap as we'd like it to be.

 2. this could be done in a separate process/thread from the load itself,
 that way the overhead of the load is just copying the data in memory to the
 other process.

I think that's more expensive than you're giving it credit for.

But by all means implement it and post the patch if it works out...!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements

2011-02-04 Thread david

On Fri, 4 Feb 2011, Chris Browne wrote:


2.  The query needs to NOT be I/O-bound.  If it's I/O bound, then your
system is waiting for the data to come off disk, rather than to do
processing of that data.


yes and no on this one.

it is very possible to have a situation where the process generating the 
I/O is waiting for the data to come off disk, but there are still idle 
resources in the disk subsystem.


it may be that the best way to address this is to have the process 
generating the I/O send off more requests, but that sometimes is 
significantly more complicated than splitting the work between two 
processes and letting them each generate I/O requests


with rotating disks, ideally you want to have at least two requests 
outstanding, one that the disk is working on now, and one for it to start 
on as soon as it finishes the one that it's on (so that the disk doesn't 
sit idle while the process decides what the next read should be). In 
practice you tend to want to have even more outstanding from the 
application so that they can be optimized (combined, reordered, etc) by 
the lower layers.


if you end up with a largish raid array (say 16 disks), this can translate 
into a lot of outstanding requests that you want to have active to fully 
utilize the array, but having the same number of requests outstanding 
with a single disk would be counterproductive as the disk would not be 
able to see all the outstanding requests and therefore would not be able to 
optimize them as effectively.


David Lang
