Re: [SQL]
My understanding is that you already did it the best way. Alternatively, you could word your query as

  SELECT i.cname
  FROM fc_client_info i
  WHERE EXISTS (
          SELECT *
          FROM fc_communication c
          WHERE c.acode = i.acode
            AND c.contactdate = '09/06/2001'
        )
  ORDER BY lower(i.cname);

instead of

  SELECT i.cname
  FROM fc_client_info i, fc_communication c
  WHERE i.acode = c.acode
    AND c.contactdate = '09/06/2001'
  ORDER BY lower(cname);

I suspect the correlated subquery with the EXISTS predicate, which results in a nested-loop join, is much slower than the sort-merge join your original query uses. Try it, but make sure both tables are well populated, otherwise the benchmark result is no good.

Christoph
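If it helps, EXPLAIN shows the plan the optimizer chooses for each wording, so the two variants can be compared directly. A minimal sketch, reusing the queries above; the real test is the run time on well-populated tables:

  EXPLAIN
  SELECT i.cname
  FROM fc_client_info i, fc_communication c
  WHERE i.acode = c.acode
    AND c.contactdate = '09/06/2001'
  ORDER BY lower(i.cname);

  EXPLAIN
  SELECT i.cname
  FROM fc_client_info i
  WHERE EXISTS (
          SELECT *
          FROM fc_communication c
          WHERE c.acode = i.acode
            AND c.contactdate = '09/06/2001'
        )
  ORDER BY lower(i.cname);

Compare the estimated costs and, more importantly, the actual elapsed times of both statements.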
[SQL] ERROR: DefineQueryRewrite: rule plan string too big.
I tried to create a view as follows:

  CREATE VIEW sesql_userindexes AS
  SELECT c.relname      AS TBL_NAME,
         i.relname      AS IDX_NAME,
         x.indisunique  AS UNIQUE_FLAG,
         1 + (CASE WHEN x.indkey[1] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[2] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[3] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[4] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[5] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[6] = 0 THEN 0 ELSE 1 END)
           + (CASE WHEN x.indkey[7] = 0 THEN 0 ELSE 1 END) AS IDXCOL_TOTAL,
         x.indkey       AS COL_SEQ
  FROM pg_index x, pg_class c, pg_class i
  WHERE (c.oid = x.indrelid)
    AND (i.oid = x.indexrelid)
    AND NOT (c.relname ~* 'pg_');

and received the error message

  ERROR: DefineQueryRewrite: rule plan string too big.

If I run the SELECT without creating the view, it works as intended. I have to mention that I am still working with version 6.1 (shame on me). Is this fixed in later versions? Or does anybody have an idea how to word it more compactly?

Thanks in advance,
Christoph
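As for wording it smaller: in later PostgreSQL releases pg_index carries an indnatts column (the number of columns in the index), which would replace the whole CASE chain; whether 6.1 already has that column is doubtful and would need checking against its catalogs. A sketch under that assumption, not tested on 6.1:

  CREATE VIEW sesql_userindexes AS
  SELECT c.relname      AS TBL_NAME,
         i.relname      AS IDX_NAME,
         x.indisunique  AS UNIQUE_FLAG,
         x.indnatts     AS IDXCOL_TOTAL,   -- assumes pg_index.indnatts exists
         x.indkey       AS COL_SEQ
  FROM pg_index x, pg_class c, pg_class i
  WHERE (c.oid = x.indrelid)
    AND (i.oid = x.indexrelid)
    AND NOT (c.relname ~* 'pg_');

The shorter target list also keeps the stored rule plan much smaller, which is what the error message is complaining about.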
Re: [SQL] ERROR: DefineQueryRewrite: rule plan string too big.
Haller Christoph <[EMAIL PROTECTED]> writes:
> I received the error message
>   ERROR: DefineQueryRewrite: rule plan string too big.
> If I run the SELECT without creating the view, it works as intended.
> I have to mention that I am still working with version 6.1 (shame on me).

Indeed.

> Is this fixed in later versions?

Yes.

			regards, tom lane
[SQL] optimizing queries and indexes...
I'm fairly new at this whole database design thing, and my grasp of set theory is not what it was when I was in college lo these many years past. But I want to get a better idea of how to optimize SQL statements. I have several friends who are DBAs by profession and work on Oracle and/or MS SQL Server. They have all told me that, while there are some general rules to follow, each database is different. For example, one thing one of my friends said is:

  select X from big_table, little_table

  Generally speaking, Oracle optimizes better when the smaller/reducing
  tables are on the bottom and the larger tables are on the top. I believe
  SQL Server likes them in the opposite direction.

and also:

  Generally speaking, indexes should be built with column names in order of
  higher cardinality. I frequently screw this up because it runs counter to
  the way you think about building your joins. An example might be an
  index with:

    Company_Id  (distinct count = 4)
    Dept_Id     (distinct count = 40)
    Employee_Id (distinct count = 1000+)

  This index should probably be in the illogical order of:

    Employee_Id
    Dept_Id
    Company_Id

So I am hopeful that there is some sort of PostgreSQL performance FAQ for queries.

Also, is there a general rule as to what is considered expensive when running EXPLAIN on a query? I had one query running a join and calculation across two unindexed tables. The initial plan cost about 800, with the majority of the cost taken up by a sequential scan of about 5000 rows costing 210, followed by a hash join of 225 rows across the two tables costing 585. After creating indexes, the costs dropped roughly in half, to 106 and 299 respectively. Is 800 expensive? Is 400 expensive? Will the cost go up when the data balloons to 100,000 rows?

Many thanks!
rjsjr
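To make the index-column-order advice quoted above concrete, here is a minimal sketch. The table and column names are hypothetical, and whether putting the highest-cardinality column first actually helps depends on the queries and on the optimizer, so treat it as something to measure with EXPLAIN rather than a rule:

  CREATE TABLE employee (
      company_id  integer,
      dept_id     integer,
      employee_id integer,
      name        text
  );

  -- Multicolumn index with the most selective column first,
  -- following the quoted advice.
  CREATE INDEX employee_emp_dept_co_idx
      ON employee (employee_id, dept_id, company_id);

  -- Compare the optimizer's estimates before and after creating the index.
  EXPLAIN
  SELECT name
  FROM employee
  WHERE employee_id = 4711
    AND dept_id = 12
    AND company_id = 1;

Note that a multicolumn index is only useful when its leading column(s) appear in the WHERE clause, so the "right" order also depends on which columns your queries actually filter on. The absolute cost numbers EXPLAIN prints are unit-less estimates; they matter mainly for comparing alternative plans of the same query, not as a fixed threshold for "expensive".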
