Re: [SQL] Poor performance in inet << cidr join (Resolved)

2005-11-03 Thread Axel Rau



Am 03.11.2005 um 00:22 schrieb Tom Lane:


Axel Rau <[EMAIL PROTECTED]> writes:

Question: Can rtree_inet be included in the core?


No, because rtree is going away in 8.2.  Feel like converting that
code to be a GIST opclass, instead?
Perhaps. It would be a big step for me, since I started with pg 3 months
ago.

As starting points, I found
- chapter 48 in the 8.0 manual
- rtree_gist in 8.0 contrib
- btree_gist in 8.1 contrib

Any more?
Axel
Axel Rau, Frankfurt, Germany   +49-69-951418-0


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[SQL] Slow query - SELECTing one row from 'big' table.

2005-11-03 Thread Mario Splivalo
I have a procedure that goes something like this:
-- Function: create_message(varchar, varchar, varchar, int4)
-- DROP FUNCTION create_message("varchar", "varchar", "varchar", int4);

CREATE OR REPLACE FUNCTION create_message("varchar", "varchar",
"varchar", int4)
  RETURNS SETOF messages AS
'
DECLARE
aFrom ALIAS FOR $1;
aTo ALIAS FOR $2;
aRawText ALIAS FOR $3;
aDestinationId ALIAS FOR $4;
rec messages%ROWTYPE;

BEGIN
INSERT INTO messages ("from", "to", "raw_text", "destination_id")
   VALUES (aFrom, aTo, aRawText, aDestinationId);

 FOR rec IN
 SELECT *
 FROM messages
 WHERE (id = currval(''public.message_id_seq''::text))
 LOOP
 RETURN NEXT rec;
 END LOOP;

 RETURN;
END;
' LANGUAGE 'plpgsql';

The table messages is like this:
CREATE TABLE messages
(
  id int4 NOT NULL DEFAULT nextval('public.message_id_seq'::text),
  "from" varchar(15) NOT NULL,
  "to" varchar(10) NOT NULL,
  receiving_time timestamptz(0) NOT NULL DEFAULT now(),
  raw_text varchar NOT NULL,
  keyword varchar,
  destination_id int4,
  vpn_id int4,
  service_id int4,
  status int4 NOT NULL DEFAULT 2,
  reply varchar,
  CONSTRAINT pk_messages PRIMARY KEY (id),
  CONSTRAINT fk_messages_destination_id FOREIGN KEY (destination_id)
REFERENCES destinations (id) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_messages_service_id FOREIGN KEY (service_id) REFERENCES
services (id) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_messages_vpn_id FOREIGN KEY (vpn_id) REFERENCES vpns
(id) ON UPDATE NO ACTION ON DELETE NO ACTION
);

And extra indices on table messages are like this:

CREATE INDEX idx_messages_receiving_time ON messages USING btree
  (receiving_time);
CREATE INDEX idx_messages_vpn_id ON messages USING btree (vpn_id);
CREATE INDEX idx_service_id ON messages USING btree (service_id);

Now, the table messages has well over 700k rows. When I call the
function, it takes almost 5 seconds to run on a busy server; on my
laptop with its poor I/O it takes as long as 30 seconds (no users
but me). But when I run SELECT * FROM messages WHERE id = 12317 (or
any other random integer) directly, the response time is much lower.

Here is the query plan:
test_db=# explain select * from messages where id = 742001;
  QUERY PLAN
--
 Index Scan using pk_messages on messages  (cost=0.00..6.01 rows=2
width=197) (actual time=0.030..0.034 rows=1 loops=1)
   Index Cond: (id = 742001)
 Total runtime: 0.086 ms
(3 rows)

Now I'm wondering why the SELECT inside the procedure takes that long.
I've tried first extracting the return value of currval into a
local variable and then supplying that variable to the WHERE, but nothing
changed. It seems that the SELECT itself is slow, not the currval.

Am I doing something wrong? Or is that just the way it is?

I have a workaround now: I declared a local variable of type messages%ROWTYPE,
I fill that variable with the parameters received from the caller of
the function, and then I do RETURN NEXT, RETURN. But still, why is that
SELECT so slow inside the function?

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."





Re: [SQL] Welcome to the pgsql-sql list!

2005-11-03 Thread Achilleus Mantzios

I got unsubscribed from the -sql list for no apparent reason.
(I can't recall doing anything that would cause it,
and I never intended to unsubscribe from the -sql list.)

The [EMAIL PROTECTED] person might be 
interested in investigating this.

Thank you.

-- 
-Achilleus




Re: [SQL] Slow query - SELECTing one row from 'big' table.

2005-11-03 Thread Tom Lane
Mario Splivalo <[EMAIL PROTECTED]> writes:
>  SELECT *
>  FROM messages
>  WHERE (id = currval(''public.message_id_seq''::text))

That cannot legally be converted into an indexscan, because currval() is
a volatile function --- the planner cannot be certain that its value
won't change during the query.  (In this case we can assume it's safe
because nothing in that query would call nextval(), but the planner
isn't omniscient enough to make that conclusion.)

Fetch the currval into a local variable and use the variable in the
query.

regards, tom lane
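A minimal sketch of that rewrite, assuming the function otherwise stays as posted (the variable name is illustrative):

```sql
-- Hedged sketch: capture currval() once into a plain integer variable;
-- comparing id to that variable lets the planner use the pk_messages index.
-- Doubled quotes because this sits inside a single-quoted function body.
DECLARE
    m_id integer;
BEGIN
    INSERT INTO messages ("from", "to", "raw_text", "destination_id")
        VALUES (aFrom, aTo, aRawText, aDestinationId);

    m_id := currval(''public.message_id_seq'');

    FOR rec IN SELECT * FROM messages WHERE id = m_id LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END
```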



[SQL] Encoding on 8.0.4

2005-11-03 Thread Don Drake
I recently upgraded my DB from 7.4.3 to 8.0.4 and I've noticed the following errors appearing in my serverlog:


2005-11-03 05:56:57 CST 127.0.0.1(38858) ERROR:  Unicode characters greater than or equal to 0x1 are not supported
2005-11-03 06:04:09 CST 127.0.0.1(38954) ERROR:  invalid byte sequence for encoding "UNICODE": 0xe02d76
2005-11-03 06:04:21 CST 127.0.0.1(38964) ERROR:  invalid byte sequence for encoding "UNICODE": 0xe02d76
2005-11-03 06:11:35 CST 127.0.0.1(39072) ERROR:  Unicode characters greater than or equal to 0x1 are not supported
2005-11-03 06:23:23 CST 127.0.0.1(39657) ERROR:  invalid byte sequence for encoding "UNICODE": 0xd40d
2005-11-03 08:10:02 CST 127.0.0.1(44073) ERROR:  invalid byte sequence for encoding "UNICODE": 0xe46973
2005-11-03 08:21:13 CST 127.0.0.1(44711) ERROR:  Unicode characters greater than or equal to 0x1 are not supported
2005-11-03 08:26:36 CST 127.0.0.1(44745) ERROR:  invalid byte sequence for encoding "UNICODE": 0xc447
2005-11-03 08:40:59 CST 127.0.0.1(45087) ERROR:  invalid byte sequence for encoding "UNICODE": 0xdd20
2005-11-03 09:14:52 CST 127.0.0.1(46009) ERROR:  Unicode characters greater than or equal to 0x1 are not supported

I never received these errors when running 7.4.3.  I used the
default encodings on 7.4.3, and I tried changing client_encoding from
sql_ascii to UNICODE, but I'm still seeing this. I'm storing email text
that contains characters from other character sets in a text column.

Any ideas on how to resolve this?
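If the root cause is that pre-8.0 servers accepted malformed byte sequences into a database declared as UNICODE, one common remedy (a sketch, assuming the stray bytes are really Latin-1; file names are illustrative) is to recode the data with iconv before loading it:

```shell
# Assumption: the invalid bytes are Latin-1 text that 7.4 let through.
# Recode a dump to valid UTF-8 before restoring it into the new cluster:
#   pg_dump olddb > olddb.sql
#   iconv -f LATIN1 -t UTF-8 olddb.sql > olddb.utf8.sql
#   psql -d newdb -f olddb.utf8.sql
# iconv maps each Latin-1 byte to its UTF-8 encoding, e.g. 0xe9 -> 'é':
printf 'caf\351' | iconv -f LATIN1 -t UTF-8
```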

-Don

-- 
Donald Drake
President
Drake Consulting
http://www.drakeconsult.com/
http://www.MailLaunder.com/
312-560-1574


[SQL] date question

2005-11-03 Thread Judith Altamirano Figueroa




Hi everybody, in Postgres 7.0.2 I have the following query:

SELECT * from clientes_proceso where fecha_mod::date <= now() -1;

but in version 8.0.1 it returns this error:

ERROR: operator does not exist: timestamp with time zone - integer

How can I subtract a day from now()?




Re: [SQL] date question

2005-11-03 Thread lucas

Quoting Judith Altamirano Figueroa <[EMAIL PROTECTED]>:

Hi everybody, in Postgres 7.0.2 I have the following query:
SELECT * from clientes_proceso where fecha_mod::date <= now() -1;
but in version 8.0.1 it returns this error:
ERROR: operator does not exist: timestamp with time zone - integer
How can I subtract a day from now()?


Try using "now()::date", or "interval". Like:
 select * from clientes_proceso where fecha_mod::date <= now()::date -1;
or:
 select * from clientes_proceso where fecha_mod::date <= now() - '1 day'::interval;
---
Lucas




Re: [SQL] date question

2005-11-03 Thread Michael Glaesemann


On Nov 4, 2005, at 1:17 , Judith Altamirano Figueroa wrote:


Hi everybody, in Postgres 7.0.2 I have the following query:

SELECT * from clientes_proceso where fecha_mod::date <= now() -1;

but in version 8.0.1 it returns this error:

ERROR: operator does not exist: timestamp with time zone - integer

How can I subtract a day from now()?



You can try a couple of different things: one is to use CURRENT_DATE - 1
instead of now() - 1. Another would be to cast now() to date,
e.g., now()::date - 1.


Hope this helps.


Michael Glaesemann
grzm myrealbox com






[SQL] serial in output

2005-11-03 Thread alessandra de gregorio
Hi,

What function should I use to get a serial number, together with my results,
from a query?

Ex. Of output I want:

1   ooo pp  ij
2   hou joo iu  
3   bhi ft  yh

Basically, I would like to have one column with integers, from 1 onwards, no
matter how many tables I join in, or the data that I get back.
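One way to do this (a sketch; PostgreSQL of this era has no row_number(), so a temporary sequence stands in, and all table and column names below are illustrative):

```sql
-- Number the output rows 1, 2, 3, ... regardless of which tables are joined.
-- nextval() is evaluated once per output row, so fix the ordering in the
-- subquery first, then number the rows in the outer query.
CREATE TEMPORARY SEQUENCE rownum_seq;

SELECT nextval('rownum_seq') AS rownum, t.*
  FROM (SELECT a.col1, b.col2
          FROM a JOIN b ON a.id = b.a_id
         ORDER BY a.col1) AS t;
```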

Thanks






[SQL] Changing location of ORDER BY has large effect on performance, but not results...

2005-11-03 Thread Jason Turner
I have two queries that return the same results, but one is six times
slower than the other. Can anyone enlighten me as to why?

My initial guess is that it is not able to utilize the index on
foo.tracktitle to sort the result set after foo has been joined with
other tables. This seems kind of broken to me. I am running 8.0.4 on
Gentoo Linux.

Thanks,
Jason

--- First Query ---

select
  foo.*,
genre.genrename,
album.albumtitle,
(select performer.performername from performer as p, trackperformers as tp
where p.performerid = tp.performerid
and tp.trackid = foo.trackid
limit 1) AS performername,

  (SELECT coverartid
FROM trackcoverart
WHERE trackcoverart.trackid = foo.trackid
LIMIT 1) as trackcoverart,

 (SELECT albumcoverart.coverartid
  FROM albumcoverart, track
 WHERE foo.trackid = trackid
  AND  albumcoverart.albumid = foo.albumid
 LIMIT 1) as albumcoverart

FROM  (select * from track order by tracktitle) as foo, album, genre

where foo.albumid = album.albumid
and foo.genreid = genre.genreid

offset 2000
limit 20;

--- First Explain Analyze ---

Limit  (cost=20915.07..21123.71 rows=20 width=338) (actual
time=184.997..186.417 rows=20 loops=1)
  ->  Hash Join  (cost=50.81..131860.75 rows=12635 width=338) (actual
time=5.085..185.202 rows=2020 loops=1)
Hash Cond: ("outer".albumid = "inner".albumid)
->  Hash Join  (cost=13.07..938.94 rows=12635 width=318)
(actual time=1.317..34.143 rows=2020 loops=1)
  Hash Cond: ("outer".genreid = "inner".genreid)
  ->  Subquery Scan foo  (cost=0.00..736.34 rows=12635
width=288) (actual time=0.021..16.317 rows=2020 loops=1)
->  Index Scan using track_tracktitle on track 
(cost=0.00..609.99 rows=12635 width=332) (actual time=0.012..4.266
rows=2020 loops=1)
  ->  Hash  (cost=11.66..11.66 rows=566 width=34) (actual
time=1.267..1.267 rows=0 loops=1)
->  Seq Scan on genre  (cost=0.00..11.66 rows=566
width=34) (actual time=0.004..0.737 rows=566 loops=1)
->  Hash  (cost=33.59..33.59 rows=1659 width=24) (actual
time=3.646..3.646 rows=0 loops=1)
  ->  Seq Scan on album  (cost=0.00..33.59 rows=1659
width=24) (actual time=0.012..2.194 rows=1659 loops=1)
SubPlan
  ->  Limit  (cost=0.00..7.53 rows=1 width=4) (actual
time=0.021..0.021 rows=1 loops=2020)
->  Nested Loop  (cost=0.00..7.53 rows=1 width=4)
(actual time=0.019..0.019 rows=1 loops=2020)
  ->  Index Scan using albumcoverart_albumid on
albumcoverart  (cost=0.00..3.01 rows=1 width=4) (actual
time=0.007..0.007 rows=1 loops=2020)
Index Cond: (albumid = $1)
  ->  Index Scan using track_pkey on track 
(cost=0.00..4.51 rows=1 width=0) (actual time=0.007..0.007 rows=1
loops=2020)
Index Cond: ($0 = trackid)
  ->  Limit  (cost=0.00..2.78 rows=1 width=4) (actual
time=0.006..0.006 rows=0 loops=2020)
->  Index Scan using trackcoverart_trackid on
trackcoverart  (cost=0.00..27.80 rows=10 width=4) (actual
time=0.004..0.004 rows=0 loops=2020)
  Index Cond: (trackid = $0)
  ->  Limit  (cost=0.00..0.03 rows=1 width=17) (actual
time=0.028..0.028 rows=1 loops=2020)
->  Nested Loop  (cost=0.00..64.89 rows=2142 width=17)
(actual time=0.025..0.025 rows=1 loops=2020)
  ->  Nested Loop  (cost=0.00..6.05 rows=2
width=0) (actual time=0.019..0.019 rows=1 loops=2020)
->  Index Scan using
trackperformers_trackid on trackperformers tp  (cost=0.00..3.01 rows=1
width=4) (actual time=0.007..0.007 rows=1 loops=2020)
  Index Cond: (trackid = $0)
->  Index Scan using performer_pkey on
performer p  (cost=0.00..3.02 rows=1 width=4) (actual
time=0.007..0.007 rows=1 loops=2020)
  Index Cond: (p.performerid =
"outer".performerid)
  ->  Seq Scan on performer  (cost=0.00..18.71
rows=1071 width=17) (actual time=0.002..0.002 rows=1 loops=2020)
Total runtime: 186.706 ms

--- Second Query ---

select
  foo.*,
genre.genrename,
album.albumtitle,
(select performer.performername from performer as p, trackperformers as tp
where p.performerid = tp.performerid
and tp.trackid = foo.trackid
limit 1) AS performername,

  (SELECT coverartid
FROM trackcoverart
WHERE trackcoverart.trackid = foo.trackid
LIMIT 1) as trackcoverart,

 (SELECT albumcoverart.coverartid
  FROM albumcoverart, track
 WHERE foo.trackid = trackid
  AND  albumcoverart.albumid = foo.albumid
 LIMIT 1) as albumcoverart

FROM  track as foo, album, genre

where foo.albumid = album.albumid
and foo.genreid = genre.genreid

order by foo.tracktitle

offset 2000
limit 20;

--- Second Explain Analyze ---

Limit  (cost=134126.42..134126.47 rows=20 width=382) (actual
time=1068.650..1068.698 rows=20 loops=1)
  ->  S

[SQL] JOIN condition confusion

2005-11-03 Thread Thomas Good
Hi,

I am wondering if there is a way to set conditions on a left joined
table without hosing the join altogether:

query = qq |SELECT p.*, a.user_id
 FROM patient_dosing p
 LEFT JOIN patient_assignment a
 ON p.patient_id = a.patient_id
 WHERE p.dose_type = 'Missed (AWOL)'
 AND (p.dose_date >= $start_date AND p.dose_date <= $end_date)
 -- the next two conditions hose the left join
 -- AND a.end_date IS NULL
 -- AND lower(a.assign_type) = 'primary'
 ORDER BY a.user_id, p.patient_id| if ($dbtype ne "oracle");

Thanks much!
Tom
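For reference, the usual fix for this pattern is to move the filters on the left-joined table out of WHERE and into the ON clause, so rows with no match still survive the join (a sketch using the columns above; $start_date/$end_date are placeholders as in the original):

```sql
-- Sketch: predicates on patient_assignment go into ON, not WHERE.
-- Rows of patient_dosing with no qualifying assignment are then kept,
-- with a.user_id coming back NULL, instead of being filtered out.
SELECT p.*, a.user_id
  FROM patient_dosing p
  LEFT JOIN patient_assignment a
    ON p.patient_id = a.patient_id
   AND a.end_date IS NULL
   AND lower(a.assign_type) = 'primary'
 WHERE p.dose_type = 'Missed (AWOL)'
   AND p.dose_date >= $start_date AND p.dose_date <= $end_date
 ORDER BY a.user_id, p.patient_id;
```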





[SQL] Design question: Scalability and tens of thousands of tables?

2005-11-03 Thread zackchandler
Hi everybody.  Quick question:

I'm designing an application that will allow users to create custom objects
on the application level.  A custom object can have zero or more attributes.
Attributes can be one of 5-10 types (ex. String, Number, List, Date, Time,
Currency, etc...). This will allow users to track anything exactly as they
want.  My first thought on how to structure this is to make a custom table
in the db for each custom object.  The attributes would map to fields and
everything would work just dandy.

The problem is I am very concerned about scalability with having a different
table created for each custom object.  I want to design the site to handle
tens of thousands of users.  If each user has 3-5 custom objects, the
database would have to handle tens of thousands of tables.

So it would appear that this is a broken solution and will not scale.  Has
anyone designed a similar system or have ideas to share?
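One widely used alternative is a fixed entity-attribute-value layout: three shared tables instead of one table per custom object (a sketch; all names are illustrative):

```sql
-- Every user's custom objects live in the same three tables, so the
-- table count stays constant no matter how many users sign up.
CREATE TABLE custom_objects (
    object_id   serial PRIMARY KEY,
    user_id     integer NOT NULL,
    object_name varchar NOT NULL
);

CREATE TABLE custom_attributes (
    attribute_id   serial PRIMARY KEY,
    object_id      integer NOT NULL REFERENCES custom_objects,
    attribute_name varchar NOT NULL,
    attribute_type varchar NOT NULL  -- 'string', 'number', 'date', ...
);

CREATE TABLE custom_values (
    object_row_id integer NOT NULL,  -- groups values into logical rows
    attribute_id  integer NOT NULL REFERENCES custom_attributes,
    value         varchar            -- or one typed column per attribute_type
);
```

The trade-off is that retrieving one logical record becomes a join (or several) over custom_values, and type checking moves into the application.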

Thanks,
Zack



Re: [SQL] Design question: Scalability and tens of thousands of tables?

2005-11-03 Thread Michael Glaesemann


On Nov 4, 2005, at 2:58 , zackchandler wrote:

The problem is I am very concerned about scalability with having a
different table created for each custom object.  I want to design the
site to handle tens of thousands of users.  If each user has 3-5
custom objects the database would have to handle tens of thousands
of tables.

So it would appear that this is a broken solution and will not scale.
Has anyone designed a similar system or have ideas to share?


One resource you may want to look at is Joe Conway's site:
http://www.joeconway.com/

He has a PDF available for download that may help you accomplish this:
OSCON 2004 presentation: Flexible Data Acquisition and Analysis

Joe's an active member of the PostgreSQL community and there's a lot  
of good stuff there.


Hope this helps.

Michael Glaesemann
grzm myrealbox com



