On 2022-Apr-14, Benjamin Tingle wrote:
> It doesn't help if I partition temp_data by textfield beforehand either
> (using the same scheme as the target table). It still opts to concatenate
> all of temp_data, hash it, then perform a sequential scan against the
> target partitions.
Does it still
Benjamin Tingle writes:
> Interesting. Why is it impossible to prune hash partitions? Maybe "prune"
> isn't the best word; more so, use them to advantage. At the very least, it
> should be possible to utilize a parallel insert against a table partitioned
> by hash. (Partition the query rows, then distribute these rows to parallel
> workers.)
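For context, a minimal sketch of the kind of setup under discussion (the table and column names here are invented for illustration). In PostgreSQL 11 and later the planner can prune hash partitions for an equality condition on the partition key, though that by itself does not parallelize the insert path being asked about:

```sql
-- Invented names; a minimal hash-partitioned table for illustration.
CREATE TABLE target (
    textfield text NOT NULL,
    payload   bigint
) PARTITION BY HASH (textfield);

CREATE TABLE target_p0 PARTITION OF target
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE target_p1 PARTITION OF target
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE target_p2 PARTITION OF target
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE target_p3 PARTITION OF target
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);

-- An equality predicate on the partition key can be pruned to one partition:
EXPLAIN SELECT * FROM target WHERE textfield = 'abc';
```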
On Sun, Apr 17, 2022 at 8:53 AM Kumar, Mukesh wrote:
> We request your assistance on the issue below, as it is impacting the
> migration project.
>
I suggest you try to rewrite the loop-based function as a set-oriented
view.
Specifically, I think doing:
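Since the Oracle script itself isn't shown in this thread, here is only a generic sketch of that kind of rewrite, with invented table and column names: a row-by-row PL/pgSQL loop replaced by one set-oriented statement, or by a view over the aggregation.

```sql
-- Invented schema; illustrates the loop-to-set rewrite pattern only.
--
-- Row-by-row loop (slow, one UPDATE per input row):
--   FOR r IN SELECT cust_id, amount FROM orders LOOP
--       UPDATE totals SET total = total + r.amount
--        WHERE cust_id = r.cust_id;
--   END LOOP;

-- Set-oriented equivalent: one statement over all rows.
UPDATE totals t
   SET total = t.total + o.amt
  FROM (SELECT cust_id, SUM(amount) AS amt
          FROM orders
         GROUP BY cust_id) o
 WHERE t.cust_id = o.cust_id;

-- Or expose the aggregation itself as a view:
CREATE VIEW order_totals AS
SELECT cust_id, SUM(amount) AS total
  FROM orders
 GROUP BY cust_id;
```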
Hi All,
We request your assistance on the issue below, as it is impacting the
migration project.
Thanks and Regards,
Mukesh Kumar
From: Kumar, Mukesh
Sent: Friday, April 15, 2022 11:43 AM
To: Bhupendra Babu
Cc: Michel SALAIS; Ranier Vilela; postgres performance list
Hi Babu,
Please find below the script for the function from Oracle.
Please revert in case of any query.
Thanks and Regards,
Mukesh Kumar
From: Bhupendra Babu
Sent: Friday, April 15, 2022 3:44 AM
To: Kumar, Mukesh
Greetings Postgres Developers,
I've recently started taking advantage of the PARTITION BY HASH feature for
my database system. It's a really great fit since my tables can get quite
large (900M+ rows for some) and splitting them up into manageable chunks
should let me upload to them without having