Re: [PERFORM] Sorted union

2005-11-03 Thread Kevin Grittner
Just as an FYI, if you want to reassure yourself that the ORDER BY
is being applied as intended, you could do the following:

(
 select 1 as hint, start_time as when [...]
 union all
 select 2 as hint, end_time as when [...]
) order by hint, when

This is ANSI/ISO standard, and works in PostgreSQL (based on
a quick test).


>>> "Merlin Moncure" <[EMAIL PROTECTED]>  >>>

hmm, try pushing the union into a subquery...this is better style
because it's kind of ambiguous if the ordering will apply before/after
the union.

select q.when from
(
 select 1 as hint, start_time as when [...]
 union all
 select 2 as hint, end_time as when [...]
) q order by q.hint, when


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Sorted union

2005-11-03 Thread Scott Lamb

On Nov 3, 2005, at 10:21 AM, Merlin Moncure wrote:
> Reading the previous paragraphs I was just about to suggest this.  This
> is a much more elegant method...you are reaping the benefits of having
> normalized your working set.  You were trying to denormalize it back to
> what you were used to.  Yes, now you can drop your index and simplify
> your queries...normalized data is always more 'natural'.


I'm not sure normalized is the right word. In either case, I'm  
storing it in the same form. In either case, my ConcurrencyProcessor  
class gets the same form. The only difference is if the database  
splits the rows or if my application does so.


But we're essentially agreed. This is the algorithm I'm going to try  
implementing, and I think it will work out well. It also means  
sending about half as much data from the database to the application.



Mind you, I still think PostgreSQL should be able to perform that
sorted union fast. Maybe sometime I'll have enough free time to take
my first plunge into looking at a database query planner.


> I'm not so sure I agree, by using union you were basically pulling two
> independent sets (even if they were from the same table) that needed to
> be ordered.


Yes.


> There is zero chance of using the index here for ordering
> because you are ordering a different set than the one being indexed.


I don't think that's true. It just needs to look at the idea of  
independently ordering each element of the union and then merging  
that, compared to the cost of grabbing the union and then ordering  
it. In this case, the former cost is about 0 - it already has  
independently ordered them, and the merge algorithm is trivial.  
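Scott's argument boils down to merging already-sorted streams, which is linear-time. A minimal Python sketch of that idea, using heapq.merge on made-up event tuples (illustrative data, not the real transaction table):

```python
import heapq

# Two independently ordered event streams, like the output of the two
# index scans in the union (times below are made-up sample data).
starts = [(1, "start"), (4, "start"), (5, "start")]
stops = [(3, "stop"), (6, "stop"), (7, "stop")]

# heapq.merge walks each sorted input once: O(n) total, no re-sort.
merged = list(heapq.merge(starts, stops))
```

Each input is consumed lazily, so the first merged row is available almost immediately, which is exactly the property Scott wants from the planner.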



Regards,
Scott

--
Scott Lamb 





Re: [PERFORM] Sorted union

2005-11-03 Thread Merlin Moncure
> The ANSI/ISO specs are not at all ambiguous on this.  An
> ORDER BY is not allowed for the SELECT statements within
> a UNION.  It must come at the end and applied to the resulting
> UNION.

Interesting :/ 

Merlin



Re: [PERFORM] Sorted union

2005-11-03 Thread Kevin Grittner
The ANSI/ISO specs are not at all ambiguous on this.  An
ORDER BY is not allowed for the SELECT statements within
a UNION.  It must come at the end and applied to the resulting
UNION.

Similarly, the column names in the result come from the first
query in the UNION.  Column names in the query on the right
side of a UNION are immaterial.
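For what it's worth, the column-name rule is easy to check from any client; a quick sketch using Python's sqlite3 (SQLite follows the same convention of taking names from the left-most SELECT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The right-hand SELECT uses different column names on purpose;
# the result set should carry the names from the first SELECT only.
cur = conn.execute(
    "select 1 as hint, 'a' as t union all select 2 as ignored, 'b' as u"
)
names = [d[0] for d in cur.description]  # names come from the left SELECT
```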

Unless we have reason to believe that PostgreSQL is
non-compliant on this point, I don't think it is a good idea to
slow the query down with the subquery.

-Kevin


>>> "Merlin Moncure" <[EMAIL PROTECTED]>  >>>

> Merlin Moncure wrote:
> > hmm, try pushing the union into a subquery...this is better style
> > because it's kind of ambiguous if the ordering will apply before/after
> > the union.
> 
> Seems to be a little slower. There's a new "subquery scan" step.

I figured.  However, it's more correct; I'm not sure the original
query is guaranteed to give the right answer (in terms of
ordering).  It might, though.




Re: [PERFORM] Sorted union

2005-11-03 Thread Merlin Moncure
> Wow. I hadn't known about generate_series, but there are a bunch of
> places I've needed it.

It's a wonder tool :).
 
> But I think there is something I can do: I can just do a query of the
> transaction table sorted by start time. My graph tool can keep a [...]

Reading the previous paragraphs I was just about to suggest this.  This
is a much more elegant method...you are reaping the benefits of having
normalized your working set.  You were trying to denormalize it back to
what you were used to.  Yes, now you can drop your index and simplify
your queries...normalized data is always more 'natural'.

> Mind you, I still think PostgreSQL should be able to perform that
> sorted union fast. Maybe sometime I'll have enough free time to take
> my first plunge into looking at a database query planner.

I'm not so sure I agree, by using union you were basically pulling two
independent sets (even if they were from the same table) that needed to
be ordered.  There is zero chance of using the index here for ordering
because you are ordering a different set than the one being indexed.
Had I not been able to talk you out of de-normalizing your table I was
going to suggest rigging up a materialized view and indexing that:

http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html

Merlin



Re: [PERFORM] Sorted union

2005-11-03 Thread Scott Lamb

On Nov 3, 2005, at 8:20 AM, Merlin Moncure wrote:

> select t, (select count(*) from transaction where t between when_started
> and when_stopped) from
> (
> select ((generate_series(1,60) * scale)::text::interval) + '12:00 pm'::time as t
> ) q;


Wow. I hadn't known about generate_series, but there are a bunch of  
places I've needed it.


As cool as this is, though, I don't think it helps me. There's  
another event-driven graph that I need. For lack of a better name, I  
call it the slot graph. Every single transaction is graphed as a  
horizontal line from its start time to its end time, with a vertical  
line at the start and stop. Successful, timed out, and failed  
transactions are green, black, and red, respectively. I use it in a  
couple different ways:


(1) On short timescales, it's nice to look at individual
transactions. My tester will max out at either a rate or a  
concurrency. If I'm having problems, I'll get bursts of timeouts.  
This graph is the one that makes it clear why - it shows how things  
align, etc. Actually, even for longer timespans, this is still  
helpful - it's nice to see that most of the slots are filled with  
timing-out transactions when the rate falls.


(2) It can show you if something affects all of the transactions at  
once. When we did a database failover test, we saw a bunch of  
failures (as expected; our application isn't responsible for  
retries). This graph is the one that showed us that _all_  
transactions that were active at a specific time failed and that no  
other transactions failed. (There was a sharp vertical line of reds  
and blacks in the larger block of greens).


I wish I could just show these to you, rather than describing them.  
It's all proprietary data, though. Maybe soon I'll have similar  
graphs of my open source SSL proxy.


But the point is, I don't think I can represent this information  
without sending every data point to my application. I assign slots by  
the start time and free them by the stop time.


But I think there is something I can do: I can just do a query of the  
transaction table sorted by start time. My graph tool can keep a  
priority queue of all active transactions, keyed by the stop time.  
Whenever it grabs a new event, it can peek at the next start time but  
check if there are any stop times before it. Then at the end, it can  
pick up the rest of the stop times. The concurrency will never exceed  
a few thousand, so the additional CPU time and memory complexity are  
not a problem. As a bonus, I will no longer need my index on the stop  
time. Dropping it will save a lot of disk space.
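The sweep Scott describes can be sketched as a generator; the function name, tuple shapes, and event labels below are illustrative assumptions, with Python's heapq standing in for the priority queue:

```python
import heapq

def events(transactions):
    """Turn (start, stop) pairs, sorted by start time, into one
    time-ordered event stream using a min-heap of pending stop times.

    A start that coincides with a stop is emitted first, matching the
    order_hint convention from the original query.
    """
    pending = []  # min-heap of stop times for active transactions
    for start, stop in transactions:
        # Emit any stop that strictly precedes the next start.
        while pending and pending[0] < start:
            yield heapq.heappop(pending), "stop"
        yield start, "start"
        heapq.heappush(pending, stop)
    while pending:  # drain the remaining stops at the end
        yield heapq.heappop(pending), "stop"
```

Memory stays bounded by the peak concurrency (a few thousand stop times in the heap), and only one sorted query is needed.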


Thanks for getting me off the "I need a fast query that returns these  
exact results" mindset. It is good to step back and look at the big  
picture.


Mind you, I still think PostgreSQL should be able to perform that  
sorted union fast. Maybe sometime I'll have enough free time to take  
my first plunge into looking at a database query planner.


Regards,
Scott

--
Scott Lamb 





Re: [PERFORM] Sorted union

2005-11-03 Thread Merlin Moncure

> Merlin Moncure wrote:
> > hmm, try pushing the union into a subquery...this is better style
> > because it's kind of ambiguous if the ordering will apply before/after
> > the union.
> 
> Seems to be a little slower. There's a new "subquery scan" step.

I figured.  However, it's more correct; I'm not sure the original
query is guaranteed to give the right answer (in terms of
ordering).  It might, though.

> 
> > question: why do you want to flatten the table...is it not easier to
> > work with as records?
> 
> For most things, yes. But I'm making a bunch of different graphs from
> these data, and a few of them are much easier with events. The best
> example is my concurrency graph. Whenever there's a start event, it goes
> up one. Whenever there's a stop event, it goes down one. It's completely
> trivial once you have it separated into events.

well, if you don't mind attempting things that are not trivial, how
about trying: 

select t,
       (select count(*)
        from transaction
        where t between when_started and when_stopped)
from (
    select ((generate_series(1,60) * scale)::text::interval)
           + '12:00 pm'::time as t
) q;

For example, to check the concurrency at each second for a minute
(starting one second after 12:00 pm), drop the scale factor:

select t,
       (select count(*)
        from transaction
        where t between when_started and when_stopped)
from (
    select (generate_series(1,60)::text::interval) + '12:00 pm'::time as t
) q;

this could be a win depending on how much data you pull into your
concurrency graph.  maybe not though.  
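In application terms, the correlated subquery counts, for each sampled instant, the transactions that span it. A toy Python equivalent of the same idea (illustrative names and data, not the real schema):

```python
def concurrency_at(sample_times, transactions):
    """For each sample time t, count transactions active at t, i.e.
    those with start <= t <= stop -- the same test as the SQL
    'between' predicate in the subquery."""
    return [
        (t, sum(1 for start, stop in transactions if start <= t <= stop))
        for t in sample_times
    ]
```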

Merlin



Re: [PERFORM] Sorted union

2005-11-03 Thread Scott Lamb

Merlin Moncure wrote:

> hmm, try pushing the union into a subquery...this is better style
> because it's kind of ambiguous if the ordering will apply before/after
> the union.


Seems to be a little slower. There's a new "subquery scan" step.

explain analyze
select q.when_happened from (
    select when_stopped as when_happened,
           1 as order_hint
    from   transaction t
    where  '2005-10-25 15:00:00' <= when_stopped
      and  when_stopped <= '2005-10-26 10:00:00'
    union all
    select when_stopped as when_happened,
           2 as order_hint
    from   transaction t
    where  '2005-10-25 15:00:00' <= when_stopped
      and  when_stopped <= '2005-10-26 10:00:00'
) q order by when_happened, order_hint;


                                    QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=713013.96..721751.25 rows=3494916 width=12) (actual time=34392.264..37237.148 rows=3364006 loops=1)
   Sort Key: when_happened, order_hint
   ->  Subquery Scan q  (cost=0.00..229474.11 rows=3494916 width=12) (actual time=0.194..20283.452 rows=3364006 loops=1)
         ->  Append  (cost=0.00..194524.95 rows=3494916 width=8) (actual time=0.191..14967.632 rows=3364006 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..97262.48 rows=1747458 width=8) (actual time=0.189..5535.139 rows=1682003 loops=1)
                     ->  Index Scan using transaction_stopped on "transaction" t  (cost=0.00..79787.90 rows=1747458 width=8) (actual time=0.186..3097.268 rows=1682003 loops=1)
                           Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_stopped) AND (when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))
               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..97262.48 rows=1747458 width=8) (actual time=0.173..5625.155 rows=1682003 loops=1)
                     ->  Index Scan using transaction_stopped on "transaction" t  (cost=0.00..79787.90 rows=1747458 width=8) (actual time=0.169..3146.714 rows=1682003 loops=1)
                           Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_stopped) AND (when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))
 Total runtime: 39775.225 ms
(11 rows)


> question: why do you want to flatten the table...is it not easier to
> work with as records?


For most things, yes. But I'm making a bunch of different graphs from 
these data, and a few of them are much easier with events. The best 
example is my concurrency graph. Whenever there's a start event, it goes 
up one. Whenever there's a stop event, it goes down one. It's completely 
trivial once you have it separated into events.
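That +1/-1 running tally is only a few lines once the events are separated; a minimal sketch (the event-tuple shape is an assumption, not Scott's actual tool):

```python
def concurrency_curve(events):
    """Given (time, kind) events sorted by time, kind being 'start'
    or 'stop', return the active-transaction count after each event:
    the y values of the concurrency graph."""
    level = 0
    curve = []
    for t, kind in events:
        level += 1 if kind == "start" else -1
        curve.append((t, level))
    return curve
```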


Thanks,
Scott



Re: [PERFORM] Sorted union

2005-11-03 Thread Merlin Moncure
>  select when_stopped as when_happened,
>         1 as order_hint
>  from   transaction t
>  where  '2005-10-25 15:00:00' <= when_stopped
>    and  when_stopped <= '2005-10-26 10:00:00'
>  union all
>  select when_stopped as when_happened,
>         2 as order_hint
>  from   transaction t
>  where  '2005-10-25 15:00:00' <= when_stopped
>    and  when_stopped <= '2005-10-26 10:00:00'
>  order by when_happened, order_hint;

hmm, try pushing the union into a subquery...this is better style
because it's kind of ambiguous if the ordering will apply before/after
the union.

select q.when from
(
 select 1 as hint, start_time as when [...]
 union all
 select 2 as hint, end_time as when [...]
) q order by q.hint, when

question: why do you want to flatten the table...is it not easier to
work with as records?

Merlin
 



Re: [PERFORM] Sorted union

2005-11-02 Thread Scott Lamb

On 2 Nov 2005, at 21:13, Scott Lamb wrote:

> I want to retrieve all the events during a timespan in one list;
> typical timespans will involve up to a couple rows.


Err, I meant up to a couple million rows. With two rows, I wouldn't  
be so concerned about performance. ;)


--
Scott Lamb 




[PERFORM] Sorted union

2005-11-02 Thread Scott Lamb

Using PostgreSQL 8.0.4.

I've got a table with 4.5 million rows that I expect to become huge  
(hundred million? who knows). Each row has a start and end time. I  
want to retrieve all the events during a timespan in one list;  
typical timespans will involve up to a couple rows. If the start and  
stop happen at the same time (which is possible), the start must come  
first in my sequence. So essentially, I want this:


select   when_stopped as when_happened,
         1 as order_hint
from     transaction t
where    '2005-10-25 15:00:00' <= when_stopped
  and    when_stopped <= '2005-10-26 10:00:00'
union all
select   when_stopped as when_happened,
         2 as order_hint
from     transaction t
where    '2005-10-25 15:00:00' <= when_stopped
  and    when_stopped <= '2005-10-26 10:00:00'
order by when_happened, order_hint;

I'd really like the first row to be retrieved in O(1) time and the  
last in O(n) time (n = number of rows in the timespan, not the whole  
table). I previously was doing things manually with flat files. But  
there's a sort in PostgreSQL's plan, so I think I'm getting O(n log  
n) time for both. It's frustrating to start using a real database and  
get performance regressions.



                                    QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=667469.90..676207.19 rows=3494916 width=8) (actual time=28503.612..31377.254 rows=3364006 loops=1)
   Sort Key: when_happened, order_hint
   ->  Append  (cost=0.00..194524.95 rows=3494916 width=8) (actual time=0.191..14659.712 rows=3364006 loops=1)
         ->  Subquery Scan "*SELECT* 1"  (cost=0.00..97262.48 rows=1747458 width=8) (actual time=0.190..5375.925 rows=1682003 loops=1)
               ->  Index Scan using transaction_stopped on "transaction" t  (cost=0.00..79787.90 rows=1747458 width=8) (actual time=0.186..2962.585 rows=1682003 loops=1)
                     Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_stopped) AND (when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))
         ->  Subquery Scan "*SELECT* 2"  (cost=0.00..97262.48 rows=1747458 width=8) (actual time=0.167..5449.151 rows=1682003 loops=1)
               ->  Index Scan using transaction_stopped on "transaction" t  (cost=0.00..79787.90 rows=1747458 width=8) (actual time=0.163..3026.730 rows=1682003 loops=1)
                     Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_stopped) AND (when_stopped <= '2005-10-26 10:00:00'::timestamp without time zone))
 Total runtime: 33312.814 ms
(10 rows)

Each side of the union is retrieved in sorted order, but it doesn't  
seem to realize that. There seem to be two things it's missing:


(1) It doesn't recognize that constant expressions are irrelevant to  
the sort. I.e., the first half of the union:


select   when_started as when_happened,
         1 as order_hint
from     transaction t
where    '2005-10-25 15:00:00' <= when_started
  and    when_started <= '2005-10-26 10:00:00'
order by when_happened, order_hint;

does this:


                                    QUERY PLAN
------------------------------------------------------------------------------
 Sort  (cost=291770.42..296139.05 rows=1747453 width=8) (actual time=8462.026..9895.715 rows=1681994 loops=1)
   Sort Key: when_started, 1
   ->  Index Scan using transaction_started on "transaction" t  (cost=0.00..79788.21 rows=1747453 width=8) (actual time=0.190..2953.393 rows=1681994 loops=1)
         Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_started) AND (when_started <= '2005-10-26 10:00:00'::timestamp without time zone))
 Total runtime: 10835.114 ms
(5 rows)

The sort is unnecessary. If I take out the constant order_hint, it  
works:


select   when_started as when_happened
from     transaction t
where    '2005-10-25 15:00:00' <= when_started
  and    when_started <= '2005-10-26 10:00:00'
order by when_happened;


                                    QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using transaction_started on "transaction" t  (cost=0.00..79788.21 rows=1747453 width=8) (actual time=0.189..2715.513 rows=1681994 loops=1)
   Index Cond: (('2005-10-25 15:00:00'::timestamp without time zone <= when_started) AND (when_started <= '2005-10-26 10:00:00'::timestamp without time zone))

Total runtim