Hi,

On 2019-04-03 10:19:17 +0530, Pavan Deolasee wrote:
> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
> index c1fd7b78ce..09df70a3ac 100644
> --- a/src/backend/commands/copy.c
> +++ b/src/backend/commands/copy.c
> @@ -2833,6 +2833,15 @@ CopyFrom(CopyState cstate)
>                                       !has_instead_insert_row_trig &&
>                                       resultRelInfo->ri_FdwRoutine == NULL;
>  
> +				/*
> +				 * Note: As of PG12, COPY FREEZE is not supported on
> +				 * partitioned table. Nevertheless have this check in place so
> +				 * that we do the right thing if it ever gets supported.
> +				 */
> +				if (ti_options & TABLE_INSERT_FROZEN)
> +					CheckAndSetAllVisibleBulkInsertState(resultRelInfo->ri_RelationDesc,
> +							bistate);
> +
>  				/*
>  				 * We'd better make the bulk insert mechanism gets a new
>  				 * buffer when the partition being inserted into changes.
> @@ -3062,6 +3071,15 @@ CopyFrom(CopyState cstate)
> 							firstBufferedLineNo);
> 	}
>  
> +	/*
> +	 * If we are inserting frozen tuples, check if the last page used can also
> +	 * be marked as all-visible and all-frozen. This ensures that a table can
> +	 * be fully frozen when the data is loaded.
> +	 */
> +	if (ti_options & TABLE_INSERT_FROZEN)
> +		CheckAndSetAllVisibleBulkInsertState(resultRelInfo->ri_RelationDesc,
> +				bistate);
> +
>       /* Done, clean up */
>       error_context_stack = errcallback.previous;

I'm totally not OK with this from a layering POV:
CheckAndSetAllVisibleBulkInsertState is entirely heap-specific (without
being named as such), yet it is called directly from CopyFrom, whereas
all the heap-specific bits are being moved below tableam.

Greetings,

Andres Freund
