On Sat, Apr 7, 2018 at 8:37 AM, David Rowley
<david.row...@2ndquadrant.com> wrote:
> On 7 April 2018 at 15:03, Ashutosh Bapat
> <ashutosh.ba...@enterprisedb.com> wrote:
>> On Sat, Apr 7, 2018 at 7:25 AM, David Rowley
>>> The only alternative would be to change all the hash functions so that
>>> they normalise their endianness. It does not sound like something that
>>> will perform very well. Plus it would break everyone's hash indexes on
>>> a pg_upgrade.
>>> pg_basebackups can't be transferred over to other architectures
>>> anyway, so I'm not so worried about tuples being routed to other
>>> partitions.
>>> Maybe someone else can see a reason why this is bad?
>> I don't think the concept is bad by itself. That's expected, in fact,
>> we have added an option to pg_dump (dump through parent or some such)
>> to handle exactly this case. What Amit seems to be complaining though
>> is the regression test. We need to write regression tests so that they
>> produce the same plans, pruning same partitions by name, on all
>> architectures.
> Why is writing tests that produce the same output required?
> We have many tests with alternative outputs. Look in
> src/tests/regress/expected for files matching _1.out

That's true, but we usually add alternative outputs when we know all
the possible variants in advance and the set of variants is small.
AFAIU, that's not the case here. Also, on a given machine a particular
row is guaranteed to fall in a given partition; on a different machine
it may fall in some other partition, but always the same partition on
that machine. We don't have a way to select an alternative output
based on the architecture. Maybe a better idea is to use a .source
file, creating the .out on the fly based on the architecture of the
machine, e.g. by testing the hash output for a given value to decide
which partition it will fall into and then crafting the .out with that
partition's name.
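To make the endianness point concrete, here is a toy sketch (plain
Python with a generic byte-oriented hash, not PostgreSQL's actual hash
functions) of why hashing a value's in-memory byte representation
gives architecture-dependent results, and hence architecture-dependent
tuple routing, while normalising to a fixed byte order before hashing
would restore portability at the cost of extra work per hash:

```python
import hashlib
import struct

def hash_raw_bytes(v, byteorder):
    """Hash the raw bytes of a 32-bit int as stored in memory.

    byteorder simulates the host architecture: 'little' or 'big'.
    A byte-oriented hash of the in-memory form differs by endianness.
    """
    fmt = '<i' if byteorder == 'little' else '>i'
    return hashlib.md5(struct.pack(fmt, v)).hexdigest()

def hash_normalised(v, byteorder):
    """Normalise to a fixed (here: big-endian) byte order first.

    Both 'architectures' now produce the same hash, but each call
    pays for the byte-order conversion.
    """
    return hashlib.md5(struct.pack('>i', v)).hexdigest()

# same logical value, different hash per simulated architecture
assert hash_raw_bytes(42, 'little') != hash_raw_bytes(42, 'big')
# normalised hashing routes identically everywhere
assert hash_normalised(42, 'little') == hash_normalised(42, 'big')
```

This is only an illustration of the trade-off under discussion;
changing the real hash functions this way would still invalidate
existing hash indexes across pg_upgrade, as noted above.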

Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
