Hi Dean,

On 2017/07/05 23:18, Dean Rasheed wrote:
> On 5 July 2017 at 10:43, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:
>> In retrospect, that sounds like something that was implemented in the
>> earlier versions of the patch, whereby there was no ability to specify
>> UNBOUNDED on a per-column basis.  So the syntax was:
>> FROM { (x [, ...]) | UNBOUNDED } TO { (y [, ...]) | UNBOUNDED }
> Yes, that's where I ended up too.

I see.

>> But, it was pointed out to me [1] that that doesn't address the use case,
>> for example, where part1 goes up to (10, 10) and part2 goes from (10, 10)
>> up to (10, unbounded).
>> The new design will limit the usage of unbounded range partitions to the
>> tail ends.
> True, but I don't think that's really a problem. When the first column
> is a discrete type, an upper bound of (10, unbounded) can be rewritten
> as (11) in the new design. When it's a continuous type, e.g. floating
> point, it can no longer be represented, because (10.0, unbounded)
> really means (col1 <= 10.0). But we've already decided not to support
> anything other than inclusive lower bounds and exclusive upper bounds,
> so allowing this upper bound goes against that design choice.
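For a concrete illustration of the discrete-type case (hypothetical table and partition names, using the proposed syntax in which trailing bound values may be omitted):

```sql
-- Hypothetical example under the design being discussed.
CREATE TABLE parent (a int, b int) PARTITION BY RANGE (a, b);

-- An upper bound of (10, unbounded) means a <= 10.  With a discrete
-- first column, the same set of rows is covered by stopping after 11:
CREATE TABLE part1 PARTITION OF parent
    FOR VALUES FROM (0, 0) TO (11);
```

With a float first column there is no "next value" after 10.0, which is why that case drops out, as you say.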


>>> Of course, it's pretty late in the day to be proposing this kind of
>>> redesign, but I fear that if we don't tackle it now, it will just be
>>> harder to deal with in the future.
>>> Actually, a quick, simple hacky implementation might be to just fill
>>> in any omitted values in a partition bound with negative infinity
>>> internally, and when printing a bound, omit any values after an
>>> infinite value. But really, I think we'd want to tidy up the
>>> implementation, and I think a number of things would actually get much
>>> simpler. For example, get_qual_for_range() could simply stop when it
>>> reached the end of the list of values for the bound, and it wouldn't
>>> need to worry about an unbounded value following a bounded one.
>>> Thoughts?
>> I cooked up a patch for the "hacky" implementation for now, just as you
>> described in the above paragraph.  Would you be willing to give it a look?
>> I will also think about the non-hacky way of implementing this.
> OK, I'll take a look.

Thanks a lot for your time.
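Just to convince myself that the simplification works, here is a rough sketch in Python (not the actual C code in partition.c; the function name and output format are made up) of how the qual generation can simply stop at the end of a bound's value list, relying on lexicographic row comparison:

```python
def range_partition_qual(columns, lower, upper):
    """Sketch: build a range-partition constraint when a bound list may be
    shorter than the partition key (omitted trailing values are unbounded).

    Because an omitted value behaves like negative infinity, the qual can
    just stop at the end of each bound list and emit a row comparison over
    the prefix of the key columns that the bound actually covers.
    """
    quals = []
    if lower:  # inclusive lower bound over the first len(lower) columns
        cols = ", ".join(columns[:len(lower)])
        vals = ", ".join(str(v) for v in lower)
        quals.append(f"({cols}) >= ({vals})")
    if upper:  # exclusive upper bound over the first len(upper) columns
        cols = ", ".join(columns[:len(upper)])
        vals = ", ".join(str(v) for v in upper)
        quals.append(f"({cols}) < ({vals})")
    return " AND ".join(quals) if quals else "TRUE"
```

E.g., a partition FROM (1, 1) TO (10) would get the qual
"(a, b) >= (1, 1) AND (a) < (10)", with no special handling needed for
an unbounded value following a bounded one.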

> Meanwhile, I already had a go at the "non-hacky" implementation (WIP
> patch attached). The more I worked on it, the simpler things got,
> which I think is a good sign.

It definitely looks good to me.  I was thinking of more or less the same
approach, but couldn't have done as clean a job as you've done with your
patch.

> Part-way through, I realised that the PartitionRangeDatum Node type is
> no longer needed, because each bound value is now necessarily finite,
> so the lowerdatums and upperdatums lists in a PartitionBoundSpec can
> now be made into lists of Const nodes, making them match the
> listdatums field used for LIST partitioning, and then a whole lot of
> related code gets simplified.

Yeah, seems that way.

> It needed a little bit more code in partition.c to track individual
> bound sizes, but there were a number of other places that could be
> simplified, so overall this represents a reduction in the code size
> and complexity.

Sounds reasonable.

> It's not complete (e.g., no doc updates yet), but it passes all the
> tests, and so far seems to work as I would expect.

Thanks a lot for working on it.


Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)