Hi,
Given a table (truncated some real fields for simplicity):
CREATE TABLE city
(
id serial NOT NULL,
muni_city_id integer,
post_code_city_id integer,
alias_city_id integer,
city_type character varying(15),
post_code_type character varying(15),
CONSTRAINT city_pkey PRIMARY KEY (id)
);
On Mon, Sep 17, 2007 at 12:18:56AM +0100, Gregory Stark wrote:
>
> Not all countries break up their national tld space into sections like .co.uk
> or .com.au. Canadian domains can be bought directly under .ca like amazon.ca.
>
> I think you'll have to store a specific list of tlds and how deep you
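One way to hold such a list (table and column names here are made up for illustration; they are not from the original thread) is a suffix table with the label depth, matched longest-suffix-first:

```sql
-- Hypothetical sketch: one row per registry suffix, with the number of
-- labels in the suffix itself.
CREATE TABLE tld_rule
(
    suffix text PRIMARY KEY,   -- e.g. 'co.uk', 'com.au', 'ca'
    depth  integer NOT NULL    -- labels in the suffix
);

INSERT INTO tld_rule (suffix, depth) VALUES
    ('co.uk', 2),
    ('com.au', 2),
    ('ca', 1);

-- Find the rule that applies to a host name by taking the deepest
-- matching suffix:
SELECT suffix, depth
FROM tld_rule
WHERE 'amazon.ca' LIKE '%' || suffix
ORDER BY depth DESC
LIMIT 1;
```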
On Mon, 2007-09-17 at 02:15 +0930, Shane Ambler wrote:
> I get the idea you want the interval to be expressed as 2,765 days and
> 23 hours or 66,383 hours, which I think would be useful (more so for
> shorter intervals).
>
> I am thinking the exact function you are after isn't there - from what
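If the goal is the whole interval in a single unit, one way to get it without a dedicated function is to extract the interval's epoch (total seconds) and divide:

```sql
-- Total hours in an interval via extract(epoch ...):
-- 2765 days 23 hours = 2765 * 24 + 23 = 66383 hours.
SELECT floor(extract(epoch FROM interval '2765 days 23 hours') / 3600)
       AS total_hours;
```

Dividing by 86400 instead gives whole days, and the remainder can be broken down the same way.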
On Mon, Sep 17, 2007 at 09:17:58AM +0200, Stanislas de Larocque wrote:
> Have you advices to optimize the query please ?
For that many rows, 400 ms looks quite reasonable.
The best thing you can do to speed things up is to maintain the
counts with triggers.
depesz
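A minimal sketch of trigger-maintained counts (the table and column names below are invented, since the original query wasn't quoted in this message; it also assumes the counts row already exists):

```sql
-- Side table holding one precomputed count per group.
CREATE TABLE item_counts
(
    category_id integer PRIMARY KEY,
    n           bigint NOT NULL DEFAULT 0
);

-- Keep the count in step with inserts and deletes on the base table.
CREATE OR REPLACE FUNCTION items_count_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE item_counts SET n = n + 1 WHERE category_id = NEW.category_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE item_counts SET n = n - 1 WHERE category_id = OLD.category_id;
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_count_trg();
```

Queries then read the count from item_counts instead of aggregating the base table each time.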
Hi,
I removed the date_part function and now the query time is 464 ms!
Limit  (cost=1373.32..1373.50 rows=15 width=27) (actual time=463.762..463.857 rows=15 loops=1)
  ->  HashAggregate  (cost=1373.32..1408.52 rows=2816 width=27) (actual time=463.755..463.820 rows=15 loops=1)
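One likely reason date_part mattered: wrapping an indexed column in a function defeats a plain b-tree index on that column. If the filter must stay, an expression index matching the call can be used instead (table and column names below are hypothetical, since the query itself wasn't quoted):

```sql
-- A plain index on some_timestamp_column cannot serve a predicate like
--   WHERE date_part('day', some_timestamp_column) = 17
-- but an index on the expression itself can:
CREATE INDEX some_table_day_idx
    ON some_table (date_part('day', some_timestamp_column));
```

After building the index, rerun EXPLAIN ANALYZE to confirm the planner actually picks it up.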