While thinking about Isaac Morland's patch to add abs(interval), I happened to notice that interval_cmp_value() seems rather inefficiently written: it's expending an int64 division -- or even two of them, if the compiler's not very smart -- to split up the "time" field into days and microseconds. That's quite pointless, since we're immediately going to recombine the results into microseconds. Integer divisions are pretty expensive, too, on a lot of hardware.
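(As a standalone illustration, not part of the patch, and with names that are merely stand-ins for the fields involved: C guarantees that (a / b) * b + a % b == a, so splitting the time field into whole days plus a remainder and then scaling the days back to microseconds just reproduces the value we started with.)

#include <stdint.h>
#include <stdio.h>

#define USECS_PER_DAY INT64_C(86400000000)

int
main(void)
{
	int64_t		usecs = INT64_C(123456789012345);	/* stand-in for interval->time */
	int64_t		dayfraction = usecs % USECS_PER_DAY;	/* old code: one division */
	int64_t		days = usecs / USECS_PER_DAY;		/* and possibly a second one */

	/* Recombining yields exactly the value we started with. */
	printf("%d\n", days * USECS_PER_DAY + dayfraction == usecs);	/* prints 1 */
	return 0;
}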
I suppose this is a hangover from when the code supported float as well as int64 time fields; but since that's long gone, I see no reason not to do the attached.

			regards, tom lane
diff --git a/src/backend/utils/adt/timestamp.c b/src/backend/utils/adt/timestamp.c
index 1c0bf0aa5c..1978f85873 100644
--- a/src/backend/utils/adt/timestamp.c
+++ b/src/backend/utils/adt/timestamp.c
@@ -2352,20 +2352,17 @@ static inline INT128
 interval_cmp_value(const Interval *interval)
 {
 	INT128		span;
-	int64		dayfraction;
 	int64		days;
 
 	/*
-	 * Separate time field into days and dayfraction, then add the month and
-	 * day fields to the days part.  We cannot overflow int64 days here.
+	 * Combine the months and days fields into an integer number of days.
+	 * Since the inputs are int32, int64 arithmetic suffices here.
 	 */
-	dayfraction = interval->time % USECS_PER_DAY;
-	days = interval->time / USECS_PER_DAY;
-	days += interval->month * INT64CONST(30);
+	days = interval->month * INT64CONST(30);
 	days += interval->day;
 
-	/* Widen dayfraction to 128 bits */
-	span = int64_to_int128(dayfraction);
+	/* Widen time field to 128 bits */
+	span = int64_to_int128(interval->time);
 
 	/* Scale up days to microseconds, forming a 128-bit product */
 	int128_add_int64_mul_int64(&span, days, USECS_PER_DAY);
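(A back-of-the-envelope check of the new comment's claim that int64 arithmetic suffices -- again just an illustration, not part of the patch: even with both int32 fields at their maximum magnitude, the combined day count stays far below the int64 range.)

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Worst case: both int32 fields at their maximum value. */
	int64_t		max_days = (int64_t) INT32_MAX * 30 + INT32_MAX;

	/* Roughly 6.66e10 days, versus an int64 limit of about 9.2e18. */
	printf("%lld of %lld\n", (long long) max_days, (long long) INT64_MAX);
	return 0;
}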