On Sat, Apr 7, 2018 at 7:25 AM, David Rowley wrote:
> On 7 April 2018 at 13:50, Amit Langote <amitlangot...@gmail.com> wrote:
>> On Sat, Apr 7, 2018 at 10:31 AM, David Rowley
>>> I looked at all the regression test diffs for each of the servers you
>>> mentioned and I verified that the diffs match on each of the 7 servers.
>>> Maybe the best solution is to pull those tests out of
>>> partition_prune.sql, then create partition_prune_hash and just have an
>>> alternative .out file with the partitions which match on big-endian
>>> machines.
>>> We could also keep them in the same file, but that's a much bigger
>>> alternative file to maintain and more likely to get broken if someone
>>> forgets to update it.
>>> What do you think?
>> Yeah, that's an idea.
>> Is it alright though that the same data may end up in different hash
>> partitions depending on the architecture? IIRC, that's the way we
>> decided to go with hash partitioning, but it would've been
>> clearer if there were already some evidence in the regression tests that
>> that's what we've chosen, such as some existing tests for tuple
>> routing.
> The only alternative would be to change all the hash functions so that
> they normalise their endianness. It does not sound like something that
> will perform very well. Plus it would break everyone's hash indexes on
> a pg_upgrade.
> pg_basebackups can't be transferred over to other architectures
> anyway, so I'm not so worried about tuples being routed to other
> partitions on a different architecture.
> Maybe someone else can see a reason why this is bad?
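The normalisation David mentions can be illustrated with a small sketch. This is not PostgreSQL's actual hashing (the real code uses hash_any and the per-type hash opclass functions in C); it is an assumed Python model just showing why hashing a key's in-memory bytes is architecture-dependent, and that making it portable means a byte swap on big-endian machines for every hashed datum:

```python
import hashlib
import struct
import sys

NUM_PARTS = 4  # hypothetical number of hash partitions


def part_native(key: int) -> int:
    """Route a key by hashing its in-memory bytes (sys.byteorder).

    On a big-endian machine the byte sequence differs, so the same key
    can land in a different partition than on a little-endian machine.
    """
    raw = key.to_bytes(8, byteorder=sys.byteorder, signed=True)
    h = int.from_bytes(hashlib.blake2b(raw, digest_size=4).digest(), "little")
    return h % NUM_PARTS


def part_normalized(key: int) -> int:
    """Route a key after normalising its bytes to little-endian first.

    This yields the same partition on every architecture, at the cost
    of a byte swap per datum on big-endian hardware.
    """
    raw = struct.pack("<q", key)  # always little-endian
    h = int.from_bytes(hashlib.blake2b(raw, digest_size=4).digest(), "little")
    return h % NUM_PARTS
```

On a little-endian machine the two functions agree; the divergence only shows up on big-endian hardware, which is exactly why the regression diffs split along that line.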
I don't think the concept is bad by itself. That's expected; in fact,
we have added an option to pg_dump (dump through parent or some such)
to handle exactly this case. What Amit seems to be complaining about,
though, is the regression test. We need to write the regression tests so
that they produce the same plans, pruning the same partitions by name, on
all architectures.