I was more or less toying with an idea for a project of mine, which involves renumbering a primary key (don't ask, it's necessary :/ )
Anyway, I was looking into the usefulness of an INSERT INTO newtable SELECT field, field, CASE pkey WHEN x1 THEN y1 WHEN x2 THEN y2 ... END FROM oldtable
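To make that one-liner a bit more readable, here is roughly what the generated query looked like (table and column names are invented; the real query had on the order of 100,000 WHEN branches):

```sql
-- Hypothetical sketch of the generated renumbering query.
INSERT INTO newtable (pkey, field1, field2)
SELECT
    CASE pkey
        WHEN 1 THEN 1001
        WHEN 2 THEN 1002
        WHEN 3 THEN 1003
        -- ... one WHEN branch per row, ~100,000 in total ...
    END,
    field1,
    field2
FROM oldtable;
```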
The resulting query was about 1.7MB of text, mostly the CASE statement. I discarded that idea, but I still wanted to know how long it would take on my database (MySQL): about 1100 seconds, in contrast to simply selecting the data, which takes about 0.7 seconds. The table I tested this on is about 30MB.
Of course I also wanted to know how long it would take on PostgreSQL; selecting only the pkey field (without the CASE) likewise took about 0.7 seconds (selecting the entire table may have taken longer).
But the CASE version took 9026139.201 ms, i.e. over 9000 seconds, about 8 times slower than MySQL.
What I'm wondering about:
Although I wasn't expecting PostgreSQL to heavily beat MySQL, I was surprised to see it so much slower. Is the CASE statement in PostgreSQL that inefficient? Or does it simply not scale (i.e. don't try to have 100,000 cases like I did)?
The database is a lightly optimised Gentoo compile of PostgreSQL 7.4.2; the MySQL version was 4.0.18, in case anyone wants to know.
Arjen van der Meijden
PS: don't try to "help improve the query"; I discarded the idea as too inefficient and went with a simple left join to get the "new pkey" out of a temporary table ;)
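For the record, the join-based approach I settled on looks roughly like this (a sketch with invented names, not the exact queries I ran):

```sql
-- Temporary mapping table holding old pkey -> new pkey pairs.
CREATE TEMPORARY TABLE pkey_map (
    old_pkey integer PRIMARY KEY,
    new_pkey integer NOT NULL
);

-- ... populate pkey_map with the renumbering ...

-- One join instead of a 100,000-branch CASE expression.
INSERT INTO newtable (pkey, field1, field2)
SELECT m.new_pkey, o.field1, o.field2
FROM oldtable o
LEFT JOIN pkey_map m ON m.old_pkey = o.pkey;
```

The planner can turn the join into a hash or index lookup, which is why this scales so much better than one giant CASE evaluated per row.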