There was some discussion recently of simplifying the sorting code (and
hopefully making it a tad faster) by eliminating support for arbitrary
operators in ORDER BY ... USING, and instead requiring USING to specify
an operator that is the < or > member of some btree opclass.  This
strikes me as a good idea in any case, since a USING operator that
doesn't act like < or > is probably not going to yield a consistent sort
order.
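
To make that concrete (this is just my own sketch, with made-up table
and column names): the first of these would still be accepted, while
the second is the kind of thing that would now be rejected:

        -- fine: < is the less-than member of the default integer
        -- btree opclass
        SELECT * FROM t ORDER BY i USING <;

        -- rejected under the proposed rule: <<(point,point) is not
        -- the < or > member of any btree opclass
        SELECT * FROM p ORDER BY pt USING <<;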

However, when I went to do this as part of the NULLS FIRST/LAST + DESC
index order patch I'm working on, I found out that removing this feature
makes the regression tests fail: specifically, there are tests that
assume they can ORDER BY ... USING these operators:

        <(circle,circle)                circle_lt
        <<(polygon,polygon)             poly_left
        <(box,box)                      box_lt
        <<(point,point)                 point_left

I thought for a bit about adding btree opclasses covering these cases,
but it seems a bit silly ... and actually I think the sorts on poly_left
and point_left are not even self-consistent because I don't think these
operators satisfy the trichotomy law.
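
(To spell out the point_left case, purely as my own sanity check: take
two distinct points with the same x coordinate.  Neither is left of the
other, and they are not the same point either, so none of the three
trichotomy cases holds:)

        -- both of these return false, and the two points are not ~=
        -- either, so << cannot induce a consistent ordering
        SELECT point '(0,0)' << point '(0,1)';
        SELECT point '(0,1)' << point '(0,0)';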

What I'm inclined to do is change the tests a bit, e.g. do "ORDER BY
area(circle)", which is what's really happening with circle_lt anyway;
roughly the kind of change sketched below.  Anybody unhappy with that
plan?
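
(Illustrative only; table and column names made up, not the actual
test text:)

        -- current style: relies on circle_lt being directly usable
        -- as a sort operator
        SELECT f1 FROM circle_tbl ORDER BY f1 USING <;

        -- proposed style: sort on the area, which is what circle_lt
        -- compares anyway
        SELECT f1 FROM circle_tbl ORDER BY area(f1);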

                        regards, tom lane
