On Tue, Aug 27, 2019 at 4:46 PM vignesh C <vignes...@gmail.com> wrote:

> Few comments:
> Comment:
> + buf = (char *) malloc(statbuf->st_size);
> + if (buf == NULL)
> +     ereport(ERROR,
> +             (errcode(ERRCODE_OUT_OF_MEMORY),
> +              errmsg("out of memory")));
> +
> + if ((cnt = fread(buf, 1, statbuf->st_size, fp)) > 0)
> + {
> +     Bitmapset  *mod_blocks = NULL;
> +     int         nmodblocks = 0;
> +
> +     if (cnt % BLCKSZ != 0)
> +     {
>
> We can use the same size as the full page size.
> After pg_start_backup, full-page writes will be enabled.
> We can use the same file size to maintain data consistency.
>

Can you please explain which size?
The aim here is to read the entire file into memory, and thus
statbuf->st_size is used.
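
To make the pattern concrete, here is a minimal standalone sketch of
reading a whole file into memory and checking block alignment, along
the same lines as the quoted hunk (the function name and error
handling are illustrative, not the patch's actual code):

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

#define BLCKSZ 8192             /* PostgreSQL's default block size */

/*
 * Read an entire file into a malloc'd buffer and verify that its
 * length is a whole number of BLCKSZ blocks, as the patch does before
 * scanning for modified blocks.  Returns NULL on any failure.
 */
static char *
read_whole_file(const char *path, size_t *len)
{
    struct stat statbuf;
    FILE       *fp;
    char       *buf;
    size_t      cnt;

    if (stat(path, &statbuf) != 0 || statbuf.st_size == 0)
        return NULL;            /* unreadable or empty */

    fp = fopen(path, "rb");
    if (fp == NULL)
        return NULL;

    buf = malloc(statbuf.st_size);
    if (buf == NULL)
    {
        fclose(fp);
        return NULL;            /* out of memory */
    }

    cnt = fread(buf, 1, statbuf.st_size, fp);
    fclose(fp);

    /* A valid relation file is always a multiple of BLCKSZ. */
    if (cnt % BLCKSZ != 0)
    {
        free(buf);
        return NULL;
    }

    *len = cnt;
    return buf;
}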

> Comment:
> Should we check if it is the same timeline as the system's timeline?
>

At the time of taking the incremental backup, we can't check that.
However, while combining, I made sure that the timeline is the same for all
backups.
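
For what it's worth, the combine-time check amounts to something like
the sketch below, assuming each backup's timeline ID has already been
parsed out of its backup_label (the struct and function names here are
hypothetical, for illustration only):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TimeLineID;

/* Hypothetical: one entry per backup being combined. */
typedef struct
{
    const char *backup_dir;     /* backup directory path */
    TimeLineID  tli;            /* timeline parsed from backup_label */
} BackupInfo;

/* Return true only if every backup sits on the same timeline. */
static bool
timelines_match(const BackupInfo *backups, int nbackups)
{
    int         i;

    for (i = 1; i < nbackups; i++)
    {
        if (backups[i].tli != backups[0].tli)
            return false;
    }
    return true;
}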


>
> Comment:
>
> Should we support the compression formats supported by pg_basebackup?
> This can be an enhancement after the functionality is completed.
>

For the incremental backup, it just works out of the box.
For combining backups, as discussed up-thread, the user has to
uncompress them first, combine them, and re-compress if required.


> Comment:
> We should provide some mechanism to validate the backup, i.e. to
> identify whether a backup is corrupt or some file is missing
> (deleted) from a backup.
>

Maybe, but not for the first version.
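
For the archives, one plausible shape for such a check is a manifest
recorded at backup time and re-verified later. A rough sketch follows;
the ManifestEntry type and validate_backup function are hypothetical,
not part of the patch:

#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Hypothetical manifest entry, one per file recorded at backup time. */
typedef struct
{
    const char *relpath;        /* path relative to the backup root */
    long        size;           /* file size in bytes when backed up */
} ManifestEntry;

/*
 * Cheap validation pass: every manifest entry must still exist and
 * have its recorded size.  A real implementation would also verify
 * a per-file checksum to catch silent corruption.
 */
static bool
validate_backup(const char *backup_root,
                const ManifestEntry *entries, int nentries)
{
    char        path[4096];
    struct stat st;
    int         i;

    for (i = 0; i < nentries; i++)
    {
        snprintf(path, sizeof(path), "%s/%s",
                 backup_root, entries[i].relpath);

        if (stat(path, &st) != 0)
            return false;       /* file missing or unreadable */
        if (st.st_size != entries[i].size)
            return false;       /* file truncated or grown */
    }
    return true;
}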


> Comment:
> +/*
> + * Threshold for sending the whole file rather than individual
> + * blocks: the fraction of blocks modified (90%).
> + */
> +#define WHOLE_FILE_THRESHOLD 0.9
> +
> This can be a user-configurable value.
> This can be an enhancement after the functionality is completed.
>

Yes.
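
If we do expose it, the natural route in core would be a real-valued
GUC; roughly, an entry like the following in guc.c's
ConfigureNamesReal table, assuming WHOLE_FILE_THRESHOLD becomes a
double variable (the GUC name, variable name, and grouping below are
made up for illustration):

/* somewhere in the backend: */
double      whole_file_threshold = 0.9;

/* entry in ConfigureNamesReal[] in guc.c: */
{
    {"incremental_backup_whole_file_threshold", PGC_SIGHUP, REPLICATION_SENDING,
        gettext_noop("Fraction of modified blocks above which the whole "
                     "file is sent in an incremental backup."),
        NULL
    },
    &whole_file_threshold,
    0.9, 0.0, 1.0,
    NULL, NULL, NULL
},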


> Comment:
> We can add a README file with all the details regarding incremental
> backup and combining backups.
>

Will have a look.


>
> Regards,
> Vignesh
> EnterpriseDB: http://www.enterprisedb.com
>

Thanks
-- 
Jeevan Chalke
Technical Architect, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
