Re: [SQL] Optimize query sql
Hi,
I removed the date_part function and now the query takes 464 ms!
Limit  (cost=1373.32..1373.50 rows=15 width=27) (actual time=463.762..463.857 rows=15 loops=1)
  ->  HashAggregate  (cost=1373.32..1408.52 rows=2816 width=27) (actual time=463.755..463.820 rows=15 loops=1)
        ->  Hash Join  (cost=149.72..1189.22 rows=24546 width=27) (actual time=42.106..348.561 rows=18543 loops=1)
              Hash Cond: ("outer".idxreseller = "inner".idxreseller)
              ->  Seq Scan on stat a  (cost=0.00..545.27 rows=24877 width=8) (actual time=0.054..167.340 rows=24881 loops=1)
                    Filter: ((month = 8) AND (year = 2007))
              ->  Hash  (cost=142.68..142.68 rows=2816 width=23) (actual time=41.954..41.954 rows=2816 loops=1)
                    ->  Seq Scan on reseller b  (cost=0.00..142.68 rows=2816 width=23) (actual time=0.035..28.447 rows=2816 loops=1)
                          Filter: (asp = 6)
Total runtime: 464.337 ms
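
For reference, a sketch of the rewritten query this plan corresponds to, with the int4 cast from depesz's advice (quoted below) applied; the exact rewrite is an assumption, reconstructed from the Filter line above:

  -- A sketch, not quoted verbatim from the thread: date_part() results are
  -- cast to int4 so the planner compares integers against the integer
  -- month/year columns instead of float8 values.
  SELECT b.idxreseller,
         sum(a.nbrq),
         b.namereseller
  FROM   stat a
  JOIN   reseller b ON a.idxreseller = b.idxreseller
  WHERE  b.asp = 6
    AND  a.month = cast(date_part('month', now() - interval '1 month') AS int4)
    AND  a.year  = cast(date_part('year',  now() - interval '1 month') AS int4)
  GROUP BY b.idxreseller, b.namereseller
  LIMIT 15;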
Do you have advice to optimize the query further, please?
Stan
2007/9/14, hubert depesz lubaczewski <[EMAIL PROTECTED]>:
> On Fri, Sep 14, 2007 at 03:02:59PM +0200, Stanislas de Larocque wrote:
> > I want to optimize my SQL query (execution time: 2665 ms):
> SELECT
> b.idxreseller,
> sum(a.nbrq),
> b.namereseller
> from
> stat a,
> reseller b
> where
> b.asp=6
> and a.idxreseller=b.idxreseller
> and a.month=date_part('month',now() - interval '1 month')
> and a.year=date_part('year',now() - interval '1 month')
> GROUP BY
> b.idxreseller,b.namereseller limit 15;
>
> 1. cast all date_parts to int4, like in:
> a.month = cast( date_part('month',now() - interval '1 month') as int4)
> 2. why is there a LIMIT without any ORDER BY?
> 3. change it to get namereseller from a subselect, not from a join.
>
> depesz
>
> --
> quicksil1er: "postgres is excellent, but like any DB it requires a
> highly paid DBA. here's my CV!" :)
> http://www.depesz.com/ - blog dla ciebie (i moje CV)
>
--
Stanislas de Larocque
[EMAIL PROTECTED]
06 63 64 00 47
Re: [SQL] Optimize query sql
On Mon, Sep 17, 2007 at 09:17:58AM +0200, Stanislas de Larocque wrote:
> Do you have advice to optimize the query further, please?

For that many rows, 400 ms looks quite reasonable. The best thing you can
do to speed things up is to precompute the counts with triggers.

depesz
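
To illustrate the trigger idea (a sketch only; the summary table
stat_monthly_totals and the function/trigger names are hypothetical, and
UPDATEs or DELETEs on stat would need similar handling):

  -- Hypothetical summary table keyed the same way the GROUP BY runs.
  CREATE TABLE stat_monthly_totals (
      idxreseller integer NOT NULL,
      year        integer NOT NULL,
      month       integer NOT NULL,
      total_nbrq  bigint  NOT NULL DEFAULT 0,
      PRIMARY KEY (idxreseller, year, month)
  );

  -- Keep the per-reseller monthly sum current on every insert into stat.
  CREATE OR REPLACE FUNCTION stat_totals_trg() RETURNS trigger AS $$
  BEGIN
      UPDATE stat_monthly_totals
         SET total_nbrq = total_nbrq + NEW.nbrq
       WHERE idxreseller = NEW.idxreseller
         AND year = NEW.year
         AND month = NEW.month;
      IF NOT FOUND THEN
          INSERT INTO stat_monthly_totals (idxreseller, year, month, total_nbrq)
          VALUES (NEW.idxreseller, NEW.year, NEW.month, NEW.nbrq);
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER stat_totals_aiu
      AFTER INSERT ON stat
      FOR EACH ROW EXECUTE PROCEDURE stat_totals_trg();

The report query then becomes a cheap lookup against stat_monthly_totals
instead of an aggregate over ~25,000 rows.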
Re: [SQL] Format interval as hours/minutes etc
On Mon, 2007-09-17 at 02:15 +0930, Shane Ambler wrote:
> I get the idea you want the interval to be expressed as 2,765 days and
> 23 hours or 66,383 hours, which I think would be useful (more so for
> shorter intervals).
>
> I am thinking the exact function you are after isn't there - from what I
> can find a larger interval is always given as x years y months z days...
> which is why extracting the epoch is the easiest point to start your calcs.
>
> Maybe this can be a feature request - functions to give an interval in
> total number of days/hours/minutes instead of years months days

Doesn't the SQL standard allow one to define intervals as YEAR TO MONTH,
DAY TO HOUR, HOUR TO SECOND and so on? This sets both the greatest unit to
report and the resolution. (YEAR/MONTH cannot be mixed with the other types
because of the uncertainty of month lengths.) Is there any plan to support
that?

Oliver Elphick  [EMAIL PROTECTED]
Isle of Wight   http://www.lfix.co.uk/oliver
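
A small illustration of the epoch workaround described above (a sketch;
the two timestamps are arbitrary examples, not values from the thread):

  -- Express an interval as total hours plus leftover minutes by going
  -- through extract(epoch ...), since large intervals otherwise render
  -- as "x years y months z days".
  SELECT secs / 3600        AS total_hours,
         (secs % 3600) / 60 AS minutes
  FROM (SELECT cast(extract(epoch FROM
               timestamp '2007-09-17 02:15' - timestamp '2000-02-23 01:00')
               AS bigint) AS secs) AS t;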
Re: [SQL] Extracting hostname from URI column
On Mon, Sep 17, 2007 at 12:18:56AM +0100, Gregory Stark wrote:
> Not all countries break up their national tld space into sections like .co.uk
> or .com.au. Canadian domains can be bought directly under .ca, like amazon.ca.
>
> I think you'll have to store a specific list of tlds and how deep you want to
> look.

Another way to look at this is that any containing domain is just a domain.
So .info is a domain, example.info is a domain, and if host1.some.example.info
exists, then some.example.info is a domain. Also, co.uk is just as much a
domain as yahoo.co.uk. Some poor misguided people at Netscape badly
misunderstood the DNS many years ago, and web browser cookies have been
trying to recover from that misunderstanding ever since (sometimes with
serious security implications). There's really no reliable way to deduce
common responsibility from DNS labels, and it is sometimes terrifically
important not to make that mistake.

A

--
Andrew Sullivan | [EMAIL PROTECTED]
The plural of anecdote is not data. --Roger Brinner
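
As a sketch of the hostname extraction itself (the table uris and its text
column uri are hypothetical, not from the thread): substring() with a POSIX
regex returns the first parenthesized group, i.e. the host part:

  -- Pull everything between '://' and the next '/', ':', '?' or '#'.
  SELECT substring(uri FROM '^[A-Za-z][A-Za-z0-9+.-]*://([^/:?#]+)') AS hostname
  FROM uris;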
[SQL] SQL performance help: self join or static var
Hi,
Given a table (truncated some real fields for simplicity):
CREATE TABLE city
(
id serial NOT NULL,
muni_city_id integer,
post_code_city_id integer,
alias_city_id integer,
city_type character varying(15),
post_code_type character varying(15),
CONSTRAINT city_pkey PRIMARY KEY (id)
)
WITHOUT OIDS;
CREATE INDEX index_city_on_muni_city_id
ON city
USING btree
(muni_city_id);
CREATE INDEX index_city_on_post_code_type
ON city
USING btree
(post_code_type);
Filled with ~168,000 records
Which of the following SQL statements should I expect to perform better?
select * from city
where post_code_type in ('P', 'R')
EXPLAIN ANALYZE:
"Seq Scan on city (cost=0.00..4492.82 rows=76172 width=290) (actual
time=0.039..163.564 rows=30358 loops=1)"
" Filter: ((post_code_type)::text = ANY (('{P,R}'::character
varying[])::text[]))"
"Total runtime: 231.947 ms"
OR
select * from city
where id = muni_city_id
EXPLAIN ANALYZE:
"Seq Scan on city (cost=0.00..3535.41 rows=383 width=290) (actual
time=0.022..124.463 rows=30200 loops=1)"
" Filter: (muni_city_id = id)"
"Total runtime: 195.342 ms"
In my case both statements are semantically equivalent, and I'm trying to
figure out whether I should prefer searching a varchar field for static
values over the "self join" constraint against an indexed integer column.
My (almost completely ignorant) eyes say that the latter (id = muni_city_id)
is a little faster - any voices of support or other ideas in this regard?
Caveats?
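
One aside not raised above: either predicate can be backed by a partial
index, which may matter more than the raw comparison cost. A sketch with
hypothetical index names:

  -- Partial indexes matching each WHERE clause exactly.
  CREATE INDEX index_city_muni_self ON city (id)
      WHERE id = muni_city_id;

  CREATE INDEX index_city_pct_p_r ON city (post_code_type)
      WHERE post_code_type IN ('P', 'R');

With roughly 30,000 of 168,000 rows matching, the planner may still prefer
a sequential scan, so it is worth re-running EXPLAIN ANALYZE after adding
them.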
Thanks,
Steve
