On Tue, Jan 25, 2022 at 09:52:12PM +0530, tushar wrote:
> C) the -R option is silently ignored
>
> Go to the /tmp/pp folder and extract it - there is no "standby.signal" file, and
> if we start a cluster against this data directory, it will not be in standby
> mode.
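The missing-file symptom described above can be checked mechanically. A minimal sketch in Python (the archive contents here are illustrative, not a real base backup):

```python
import io
import tarfile

def has_standby_signal(fileobj):
    """Check whether a base-backup tar contains standby.signal."""
    with tarfile.open(fileobj=fileobj, mode="r") as tar:
        return any(m.name.endswith("standby.signal") for m in tar.getmembers())

# Build a tiny stand-in archive that lacks standby.signal, mimicking the
# reported case where -R was silently ignored.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    tar.addfile(tarfile.TarInfo("postgresql.auto.conf"), io.BytesIO(b""))
buf.seek(0)
print(has_standby_signal(buf))  # False: the cluster would not start as a standby
```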
Yeah, I don't think it's good to silently
On Tue, Jan 25, 2022 at 03:54:52AM +, Shinoda, Noriyoshi (PN Japan FSIP)
wrote:
> Michael, I am proposing that we remove this message as part of
> this commit:
>
> -pg_log_info("no value specified for compression level, switching to default");
>
> I think most people
On Tue, Jan 25, 2022 at 8:42 AM Dagfinn Ilmari Mannsåker
wrote:
> I just noticed there was a superfluous [ in the SGML documentation, and
> that the short form was missing the [{client|server}-] part. Updated
> patch attached.
Committed, thanks.
--
Robert Haas
EDB: http://www.enterprisedb.com
On 1/22/22 12:03 AM, Robert Haas wrote:
I committed the base backup target patch yesterday, and today I
updated the remaining code in light of Michael Paquier's commit
5c649fe153367cdab278738ee4aebbfd158e0546. Here is the resulting patch.
Thanks Robert, I tested against the latest PG Head and
Dagfinn Ilmari Mannsåker writes:
> "Shinoda, Noriyoshi (PN Japan FSIP)" writes:
>
>> Hi,
>> Thank you for committing a great feature. I have tested the committed
>> features.
>> The attached small patch fixes the output of the --help message. In the
>> previous commit, only gzip and none
"Shinoda, Noriyoshi (PN Japan FSIP)" writes:
> Hi,
> Thank you for committing a great feature. I have tested the committed
> features.
> The attached small patch fixes the output of the --help message. In the
> previous commit, only gzip and none were output, but in the attached
> patch,
Shinoda
-----Original Message-----
From: Robert Haas
Sent: Saturday, January 22, 2022 3:33 AM
To: Dipesh Pandit ; Michael Paquier
Cc: Jeevan Ladhe ; tushar
; Dmitry Dolgov <9erthali...@gmail.com>; Mark
Dilger ; pgsql-hack...@postgresql.org
Subject: Re: refactoring basebackup.c
On Wed,
On Mon, Jan 24, 2022 at 9:30 AM Dipesh Pandit wrote:
> v13 patch does not apply on the latest head, it requires a rebase. I have
> applied
> it on commit dc43fc9b3aa3e0fa9c84faddad6d301813580f88 to validate gzip
> decompression patches.
It only needed trivial rebasing; I have committed it after
Hi,
> Here is a more detailed review.
Thanks for the feedback. I have incorporated the suggestions
and posted an updated version of the patch (v3-0001).
The required documentation changes are also incorporated in the
updated patch (v3-0001).
> Interesting approach. This unfortunately has the effect of
On Thu, Jan 20, 2022 at 11:10 AM Robert Haas wrote:
> On Thu, Jan 20, 2022 at 8:00 AM Dipesh Pandit wrote:
> > Thanks for the feedback, I have incorporated the suggestions and
> > updated a new patch v2.
>
> Cool. I'll do a detailed review later, but I think this is going in a
> good direction.
On Wed, Jan 19, 2022 at 4:26 PM Robert Haas wrote:
> I spent some time thinking about test coverage for the server-side
> backup code today and came up with the attached (v12-0003).
I committed the base backup target patch yesterday, and today I
updated the remaining code in light of Michael
On Thu, Jan 20, 2022 at 8:00 AM Dipesh Pandit wrote:
> Thanks for the feedback, I have incorporated the suggestions and
> updated a new patch v2.
Cool. I'll do a detailed review later, but I think this is going in a
good direction.
> I tried to add the test coverage for server side gzip
Hi,
Thanks for the feedback, I have incorporated the suggestions and
updated a new patch v2.
> I spent some time thinking about test coverage for the server-side
> backup code today and came up with the attached (v12-0003). It does an
> end-to-end test that exercises server-side backup and
On Wed, Jan 19, 2022 at 7:16 AM Dipesh Pandit wrote:
> I have done initial testing and
> working on updating the test coverage.
I spent some time thinking about test coverage for the server-side
backup code today and came up with the attached (v12-0003). It does an
end-to-end test that exercises
On 1/18/22 8:12 PM, Jeevan Ladhe wrote:
Similar to LZ4 server-side compression, I have also tried to add a ZSTD
server-side compression in the attached patch.
Thanks Jeevan. While testing, I found one scenario where the server
crashes while performing pg_basebackup
against
On Wed, Jan 19, 2022 at 7:16 AM Dipesh Pandit wrote:
> I have added support for decompressing a gzip compressed tar file
> at client. pg_basebackup can enable server side compression for
> plain format backup with this change.
>
> Added a gzip extractor which decompresses the compressed archive
>
Hi,
I have added support for decompressing a gzip compressed tar file
at client. pg_basebackup can enable server side compression for
plain format backup with this change.
Added a gzip extractor which decompresses the compressed archive
and forwards it to the next streamer. I have done initial
On Tue, Jan 18, 2022 at 9:43 AM Jeevan Ladhe
wrote:
> The patch surely needs some grooming, but I am expecting some initial
> review, especially in the area where we are trying to close the zstd stream
> in bbsink_zstd_end_archive(). We need to tell the zstd library to end the
> compression by
On Tue, Nov 16, 2021 at 4:47 PM Robert Haas wrote:
> Here's a new patch set.
And here's another one.
I've committed the first two patches from the previous set, the second
of those just today, and so we're getting down to the meat of the
patch set.
0001 adds "server" and "blackhole" as backup
Hi,
Similar to LZ4 server-side compression, I have also tried to add a ZSTD
server-side compression in the attached patch. I have done some initial
testing and things seem to be working.
Example run:
pg_basebackup -t server:/tmp/data_zstd -Xnone --server-compression=zstd
The patch surely needs
On Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe
wrote:
> Hi Tushar,
>
> You need to apply Robert's v10 version patches 0002, 0003 and 0004, before
> applying the lz4 patch(v8 version).
> Please let me know if you still face any issues.
>
Thanks, Jeevan.
I tested the --server-compression option using
On Wed, Jan 5, 2022 at 5:11 AM tushar wrote:
> One scenario where I feel the error message is confusing; if this is not
> supported at all, then the error message needs to be a little clearer
>
> if we use -z (or -Z ) with -t , we are getting this error
> [edb@centos7tushar bin]$ ./pg_basebackup
On 1/4/22 8:07 PM, Robert Haas wrote:
Before sending an email like this, it would be a good idea to read the
documentation for the --server-compression option.
Sure, Thanks Robert.
One scenario where I feel the error message is confusing; if this is not
supported at all, then the error message needs
On Mon, Jan 3, 2022 at 12:12 PM tushar wrote:
> On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
> > Please find the lz4 compression patch here that basically has:
> Please refer to this scenario, where --server-compression is only
> compressing
> the base backup into lz4 format but not the pg_wal directory
>
On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
Please find the lz4 compression patch here that basically has:
Please refer to this scenario, where --server-compression is only
compressing
the base backup into lz4 format but not the pg_wal directory
[edb@centos7tushar bin]$ ./pg_basebackup -Ft
On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
Please find the lz4 compression patch here that basically has:
One small issue: in the "pg_basebackup --help" output, we are not displaying
the lz4 value under the --server-compression option
[edb@tusharcentos7-v14 bin]$ ./pg_basebackup --help | grep
On 12/28/21 1:11 PM, Jeevan Ladhe wrote:
You need to apply Robert's v10 version patches 0002, 0003 and 0004,
before applying the lz4 patch(v8 version).
Thanks, able to apply now.
--
regards, tushar
EnterpriseDB https://www.enterprisedb.com/
The Enterprise PostgreSQL Company
Hi Tushar,
You need to apply Robert's v10 version patches 0002, 0003 and 0004, before
applying the lz4 patch(v8 version).
Please let me know if you still face any issues.
Regards,
Jeevan Ladhe
On Mon, Dec 27, 2021 at 7:01 PM tushar
wrote:
> On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
> > Please
On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
Please find the lz4 compression patch here that basically has:
Thanks. Could you please rebase your patch? It is failing at my end -
[edb@centos7tushar pg15_lz]$ git apply /tmp/v8-0001-LZ4-compression.patch
error: patch failed:
Hi Robert,
Please find the lz4 compression patch here that basically has:
1. Documentation
2. pgindent run over it.
3. your comments addressed for using "+="
I have not included the compression level per your comment below:
-
> "On second thought, maybe we don't need to do this. There's
On Mon, Nov 15, 2021 at 2:23 PM Robert Haas wrote:
> Yeah, that's what it should be doing. I'll commit a fix, thanks for
> the report and diagnosis.
Here's a new patch set.
0001 - When I committed the patch to add the missing 2 blocks of zero
bytes to the tar archives generated by the server, I
On Mon, Nov 15, 2021 at 11:25 AM Dmitry Dolgov <9erthali...@gmail.com> wrote:
> Walsender tries to send a backup manifest, but crashes on the throttling sink:
>
> #2 0x560857b551af in ExceptionalCondition
> (conditionName=0x560857d15d27 "sink->bbs_next != NULL",
>
> On Fri, Nov 05, 2021 at 11:50:01AM -0400, Robert Haas wrote:
> On Tue, Nov 2, 2021 at 10:32 AM Robert Haas wrote:
> > Meanwhile, I think it's probably OK for me to go ahead and commit
> > 0001-0003 from my patches at this point, since it seems we have pretty
> > good evidence that the
On Mon, Nov 8, 2021 at 4:41 PM Robert Haas wrote:
> On Mon, Nov 8, 2021 at 11:34 AM Robert Haas wrote:
> > Anyway, here's my proposal for fixing the issue immediately before us.
> > 0001 adds logic to pad out the unterminated tar archives, and 0002
> > makes the server terminate its tar archives
On Mon, Nov 8, 2021 at 11:34 AM Robert Haas wrote:
> Anyway, here's my proposal for fixing the issue immediately before us.
> 0001 adds logic to pad out the unterminated tar archives, and 0002
> makes the server terminate its tar archives while preserving the logic
> added by 0001 for cases where
On Mon, Nov 8, 2021 at 10:59 AM Tom Lane wrote:
> Robert Haas writes:
> > It turns out that these commits are causing failures on prairiedog.
> > Per email from Tom off-list, that's apparently because prairiedog has
> > a fussy version of tar that doesn't like it when you omit the trailing
> >
Robert Haas writes:
> It turns out that these commits are causing failures on prairiedog.
> Per email from Tom off-list, that's apparently because prairiedog has
> a fussy version of tar that doesn't like it when you omit the trailing
> NUL blocks that are supposed to be part of a tar file.
FTR,
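For reference, the end-of-archive marker that fussy tar implementations insist on is two 512-byte zero blocks, conventionally padded out to the 10240-byte default record size. A rough sketch of that padding rule (an illustration of the format, not the committed 0001 patch):

```python
BLOCK_SIZE = 512

def terminate_tar(archive: bytes) -> bytes:
    """Append the end-of-archive marker a strict tar reader expects:
    two 512-byte zero blocks, then zero padding so the total length is
    a multiple of the default 10240-byte record size (blocking factor 20)."""
    archive += b"\0" * (2 * BLOCK_SIZE)
    record = 20 * BLOCK_SIZE
    if len(archive) % record:
        archive += b"\0" * (record - len(archive) % record)
    return archive

# An "unterminated" archive of three member blocks gains the trailing marker.
body = b"x" * (3 * BLOCK_SIZE)
fixed = terminate_tar(body)
assert fixed[3 * BLOCK_SIZE:5 * BLOCK_SIZE] == b"\0" * (2 * BLOCK_SIZE)
assert len(fixed) % (20 * BLOCK_SIZE) == 0
```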
On Fri, Nov 5, 2021 at 11:50 AM Robert Haas wrote:
> I went ahead and committed 0001 and 0002, but got nervous about
> proceeding with 0003.
It turns out that these commits are causing failures on prairiedog.
Per email from Tom off-list, that's apparently because prairiedog has
a fussy version
On Tue, Nov 2, 2021 at 10:32 AM Robert Haas wrote:
> Meanwhile, I think it's probably OK for me to go ahead and commit
> 0001-0003 from my patches at this point, since it seems we have pretty
> good evidence that the abstraction basically works, and there doesn't
> seem to be any value in holding
On Tue, Nov 2, 2021 at 10:32 AM Robert Haas wrote:
> Looks pretty good. I think you should work on stuff like documentation
> and tests, and I need to do some work on that stuff, too. Also, I
> think you should try to figure out how to support different
> compression levels.
On second thought,
On Tue, Nov 2, 2021 at 7:53 AM Jeevan Ladhe
wrote:
> I have implemented the cleanup callback bbsink_lz4_cleanup() in the attached
> patch.
>
> Please have a look and let me know of any comments.
Looks pretty good. I think you should work on stuff like documentation
and tests, and I need to do
I have implemented the cleanup callback bbsink_lz4_cleanup() in the
attached patch.
Please have a look and let me know of any comments.
Regards,
Jeevan Ladhe
On Fri, Oct 29, 2021 at 6:54 PM Robert Haas wrote:
> On Fri, Oct 29, 2021 at 8:59 AM Jeevan Ladhe
> wrote:>
> > bbsink_gzip_ops
On Fri, Oct 29, 2021 at 8:59 AM Jeevan Ladhe
wrote:>
> bbsink_gzip_ops have the cleanup() callback set to NULL, and when the
> bbsink_cleanup() callback is triggered, it tries to invoke a function that
> is NULL. I think either bbsink_gzip_ops should set the cleanup callback
> to
Thanks, Robert, for the patches.
I tried to take a backup using gzip compression and got a core.
$ pg_basebackup -t server:/tmp/data_gzip -Xnone --server-compression=gzip
NOTICE: WAL archiving is not enabled; you must ensure that all required
WAL segments are copied through other means to
Hi Robert,
I mean #define CHUNK_SIZE is still in the patch.
>
Oops, removed now.
> > > With just that change, you can set
> > > next_buf_len = LZ4F_HEADER_SIZE_MAX + mysink->output_buffer_bound --
> > > but that's also more than you need. You can instead do next_buf_len =
> > >
On Fri, Oct 15, 2021 at 7:54 AM Jeevan Ladhe
wrote:
> > The loop is gone, but CHUNK_SIZE itself seems to have evaded the
> > executioner.
>
> I am sorry, but I did not really get it. Or is it what you have pointed
> out in the following paragraphs?
I mean #define CHUNK_SIZE is still in the patch.
Hi Robert,
> The loop is gone, but CHUNK_SIZE itself seems to have evaded the
executioner.
I am sorry, but I did not really get it. Or is it what you have pointed
out in the following paragraphs?
> I think this is not the best way to accomplish the goal. Adding
> LZ4F_compressBound(0) to
On Thu, Oct 14, 2021 at 1:21 PM Jeevan Ladhe
wrote:
> Agree. Removed the CHUNK_SIZE and the loop.
Try harder. :-)
The loop is gone, but CHUNK_SIZE itself seems to have evaded the executioner.
> Fair enough. I have made the change in the bbsink_lz4_begin_backup() to
> make sure we reserve
Thanks, Robert, for reviewing the patch.
On Tue, Oct 12, 2021 at 11:09 PM Robert Haas wrote:
> This is the only place where CHUNK_SIZE gets used, and I don't think I
> see any point to it. I think the 5th argument to LZ4F_compressUpdate
> could just be avail_in. And as soon as you do that then I
On Tue, Oct 5, 2021 at 5:51 AM Jeevan Ladhe
wrote:
> I have fixed the autoFlush issue. Basically, I was wrongly initializing
> the lz4 preferences in bbsink_lz4_begin_archive() instead of
> bbsink_lz4_begin_backup(). I have fixed the issue in the attached
> patch, please have a look at it.
Hi Robert,
I think the patch v6-0007-Support-base-backup-targets.patch has broken
the case for multiple tablespaces. When I tried to take the backup
for target 'none' and extract the base.tar I was not able to locate
tablespace_map file.
I debugged and figured out in normal tar backup i.e. '-Ft'
Hi Robert,
I have fixed the autoFlush issue. Basically, I was wrongly initializing
the lz4 preferences in bbsink_lz4_begin_archive() instead of
bbsink_lz4_begin_backup(). I have fixed the issue in the attached
patch, please have a look at it.
Regards,
Jeevan Ladhe
On Fri, Sep 24, 2021 at 6:27
>
> Still, there's got to be a simple way to make this work, and it can't
> involve setting autoFlush. Like, look at this:
>
> https://github.com/lz4/lz4/blob/dev/examples/frameCompress.c
>
> That uses the same APIs that we're using here, with a fixed-size input buffer
> and a fixed-size output buffer,
On Wed, Sep 22, 2021 at 12:41 PM Jeevan Ladhe
wrote:
> If I set prefs->autoFlush to 0, then LZ4F_compressUpdate() returns an
> error: ERROR_dstMaxSize_tooSmall after a few iterations.
>
> After digging a bit in the source of LZ4F_compressUpdate() in LZ4 repository,
> I
> see that it throws this
On Tue, Sep 21, 2021 at 10:50 PM Robert Haas wrote:
>
> + if (opt->compression == BACKUP_COMPRESSION_LZ4)
>
> else if
>
> + /* First of all write the frame header to destination buffer. */
> + Assert(CHUNK_SIZE >= LZ4F_HEADER_SIZE_MAX);
> + headerSize = LZ4F_compressBegin(mysink->ctx,
> +
On Tue, Sep 21, 2021 at 10:27 PM Robert Haas wrote:
> On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe
> wrote:
> > Yes, you are right here, and I could verify this fact with an experiment.
> > When autoflush is 1, the file gets less compressed i.e. the compressed
> file
> > is of more size than
On Tue, Sep 21, 2021 at 9:35 AM Jeevan Ladhe
wrote:
> Here is a patch for lz4 based on the v5 set of patches. The patch adapts with
> the
> bbsink changes, and is now able to make the provision for the required length
> for the output buffer using the new callback function
>
On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe
wrote:
> Yes, you are right here, and I could verify this fact with an experiment.
> When autoflush is 1, the file gets less compressed, i.e., the compressed file
> is larger than the one generated when autoflush is set to 0.
> But, as of now, I
On Tue, Sep 14, 2021 at 11:30 AM Sergei Kornilov wrote:
> I found that in 0001 you propose to rename a few options. Probably we could
> rename another option for clarity? I think FAST (it's about some bw limits?)
> and WAIT (wait for what? checkpoint?) option names are confusing.
> Could we
Hi Robert,
Here is a patch for lz4 based on the v5 set of patches. The patch adapts
to the bbsink changes, and is now able to make provision for the required
length of the output buffer using the new callback
function bbsink_lz4_begin_backup().
Sample command to take backup:
pg_basebackup
>
> >> + /*
> >> + * LZ4F_compressUpdate() returns the number of bytes written into output
> >> + * buffer. We need to keep track of how many bytes have been cumulatively
> >> + * written into the output buffer (bytes_written). But,
> >> + * LZ4F_compressUpdate() returns 0 in case the data is
Thanks for the newer set of patches, Robert!
I was wondering if we should change the bbs_buffer_length in bbsink to
be size_t instead of int, because that's how most of the compression
libraries define their length variables.
Regards,
Jeevan Ladhe
On Mon, Sep 13, 2021 at 9:42 PM
Hello
I found that in 0001 you propose to rename a few options. Probably we could
rename another option for clarity? I think FAST (it's about some bw limits?)
and WAIT (wait for what? checkpoint?) option names are confusing.
Could we replace FAST with "CHECKPOINT [fast|spread]" and WAIT to
On Mon, Sep 13, 2021 at 9:42 PM Robert Haas wrote:
>
> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar wrote:
> > Seems like nothing has been done about the issue reported in [1]
> >
> > This one-line change should fix the issue,
>
> Oops. Try this version.
Thanks, this version works fine.
--
On Mon, Sep 13, 2021 at 6:03 AM Jeevan Ladhe
wrote:
>> + /*
>> + * If we do not have enough space left in the output buffer for this
>> + * chunk to be written, first archive the already written contents.
>> + */
>> + if (nextChunkLen > mysink->base.bbs_next->bbs_buffer_length -
>>
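The check quoted above follows a common fixed-buffer pattern: if the next compressed chunk will not fit in the space remaining, archive what has been written so far before appending. A toy model of that policy (names are illustrative, not the actual bbsink code):

```python
class FixedBuffer:
    """Toy model of the quoted check: if the next chunk exceeds the space
    left in the output buffer, ship ("archive") the bytes written so far
    before appending the new chunk."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = bytearray()
        self.flushes = 0

    def write_chunk(self, chunk: bytes):
        if len(chunk) > self.capacity - len(self.buf):
            self.flushes += 1   # corresponds to archiving the written contents
            self.buf.clear()
        self.buf.extend(chunk)

sink = FixedBuffer(capacity=1024)
for _ in range(5):
    sink.write_chunk(b"y" * 300)    # the 4th chunk forces a flush
print(sink.flushes)  # 1
```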
On Fri, Sep 10, 2021 at 5:25 AM Robert Haas wrote:
>
> On Wed, Sep 8, 2021 at 3:39 PM Robert Haas wrote:
> > The way the gzip APIs I used work, you tell it how big the output
> > buffer is and it writes until it fills that buffer, or until the input
> > buffer is empty, whichever happens first.
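Python's zlib wrapper manages the output buffer internally, so the following only mirrors the streaming shape described above: feed fixed-size input chunks, collect whatever compressed output each call produces, and flush at the end (a sketch, not the server-side code):

```python
import zlib

CHUNK = 8192
data = b"base backup payload " * 5000

# wbits=31 selects the gzip container, matching the gzip framing discussed here.
comp = zlib.compressobj(6, zlib.DEFLATED, 31)
out = bytearray()
for i in range(0, len(data), CHUNK):
    out.extend(comp.compress(data[i:i + CHUNK]))  # may emit 0..n bytes per call
out.extend(comp.flush())                          # drain whatever remains

# Round-trip check: the stream decompresses back to the original input.
assert zlib.decompress(bytes(out), 31) == data
assert len(out) < len(data)
```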
Thanks, Robert for your response.
On Thu, Sep 9, 2021 at 1:09 AM Robert Haas wrote:
> On Wed, Sep 8, 2021 at 2:14 PM Jeevan Ladhe
> wrote:
> > To give an example, I put some logging statements, and I can see in the
> log:
> > "
> > bytes remaining in mysink->base.bbs_next->bbs_buffer: 16537
>
On Wed, Sep 8, 2021 at 2:14 PM Jeevan Ladhe
wrote:
> To give an example, I put some logging statements, and I can see in the log:
> "
> bytes remaining in mysink->base.bbs_next->bbs_buffer: 16537
> input size to be compressed: 512
> estimated size for compressed buffer by LZ4F_compressBound():
>
> 0007 adds server-side compression; currently, it only supports
> server-side compression using gzip, but I hope that it won't be hard
> to generalize that to support LZ4 as well, and Andres told me he
> thinks we should aim to support zstd since that library has built-in
> parallel compression
On Thu, Jul 22, 2021 at 1:14 PM tushar wrote:
> On 7/19/21 8:29 PM, Dilip Kumar wrote:
> > I am not sure why this is working; from the code I could not find whether,
> > if the backup target is server, we are doing anything with the -R
> > option or we are just silently ignoring it
>
> OK, in an
On 7/19/21 8:29 PM, Dilip Kumar wrote:
I am not sure why this is working; from the code I could not find whether,
if the backup target is server, we are doing anything with the -R
option or we are just silently ignoring it
OK, in another scenario I can see "-t server" working with
On Wed, Jul 21, 2021 at 12:11 PM Mark Dilger
wrote:
> If you were going to support lots of formats, not just tar, you might want
> the streamer class for each format to have a callback which sets up the
> injector, rather than having CreateBackupStreamer do it directly. Even then,
> having
> On Jul 21, 2021, at 8:09 AM, Robert Haas wrote:
>
> A callback where?
If you were going to support lots of formats, not just tar, you might want the
streamer class for each format to have a callback which sets up the injector,
rather than having CreateBackupStreamer do it directly. Even
On Tue, Jul 20, 2021 at 4:03 PM Mark Dilger
wrote:
> I was only imagining having a callback for injecting manifests or recovery
> configurations. It is not necessary that this be done in the current patch
> set, or perhaps ever.
A callback where?
I actually think the ideal scenario would be
> On Jul 20, 2021, at 11:57 AM, Robert Haas wrote:
>
> I don't really understand what your problem is with how the patch set
> leaves pg_basebackup.
I don't have a problem with how the patch set leaves pg_basebackup.
> On the server side, because I dropped the
> bbarchiver stuff,
On Mon, Jul 19, 2021 at 2:51 PM Mark Dilger
wrote:
> The difficulty in v3-0007 with pg_basebackup only knowing how to parse tar
> archives seems to be a natural consequence of not sufficiently abstracting
> out the handling of the tar format. If the bbsink and bbstreamer
> abstractions fully
> On Jul 8, 2021, at 8:56 AM, Robert Haas wrote:
>
> The interesting
> patches in terms of functionality are 0006 and 0007;
The difficulty in v3-0007 with pg_basebackup only knowing how to parse tar
archives seems to be a natural consequence of not sufficiently abstracting out
the handling
On Mon, Jul 19, 2021 at 6:02 PM tushar wrote:
>
> On 7/16/21 12:43 PM, Dilip Kumar wrote:
> > I think the problem is that bbsink_gzip_end_archive() is not
> > forwarding the end request to the next bbsink. The attached patch should
> > fix it.
>
> Thanks Dilip. Reported issue seems to be fixed now
On 7/16/21 12:43 PM, Dilip Kumar wrote:
I think the problem is that bbsink_gzip_end_archive() is not
forwarding the end request to the next bbsink. The attached patch should
fix it.
Thanks Dilip. Reported issue seems to be fixed now with your patch
[edb@centos7tushar bin]$ ./pg_basebackup
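The bug and fix described above reduce to a chain-of-sinks pattern: each sink must forward the end request to the next one in the chain. A toy model (illustrative names, not the real bbsink API):

```python
class Sink:
    """Minimal chain-of-sinks model. Each sink must forward end_archive()
    to the next one; the reported bug was a compression sink that did not."""
    def __init__(self, nxt=None):
        self.nxt = nxt
        self.ended = False

    def end_archive(self):
        self.ended = True
        if self.nxt is not None:
            self.nxt.end_archive()  # the fix: forward the end request

final = Sink()
gzip_sink = Sink(nxt=final)
gzip_sink.end_archive()
print(final.ended)  # True once the request is forwarded
```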
On Fri, Jul 16, 2021 at 12:43 PM Dilip Kumar wrote:
>
> On Mon, Jul 12, 2021 at 5:51 PM tushar wrote:
> >
> > On 7/8/21 9:26 PM, Robert Haas wrote:
> > > Here at last is a new version.
> > Please refer to this scenario, where a backup target using
> > --server-compression is closing the server
> >
On 7/8/21 9:26 PM, Robert Haas wrote:
Here at last is a new version.
If I try to perform pg_basebackup using the "-t server" option against
localhost vs. a remote machine,
I can see a difference in backup size.
The data directory's size is
[edb@centos7tushar bin]$ du -sch data/
578M data/
578M
On Mon, Jul 12, 2021 at 5:51 PM tushar wrote:
>
> On 7/8/21 9:26 PM, Robert Haas wrote:
> > Here at last is a new version.
> Please refer to this scenario, where a backup target using
> --server-compression is closing the server
> unexpectedly if we don't provide the --no-manifest option
>
>
On 7/8/21 9:26 PM, Robert Haas wrote:
Here at last is a new version.
Please refer to this scenario, where a backup target using
--server-compression is closing the server
unexpectedly if we don't provide the --no-manifest option
[tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4 -t
> On Jul 29, 2020, at 8:31 AM, Robert Haas wrote:
>
> On Fri, May 8, 2020 at 4:55 PM Robert Haas wrote:
>> So it might be good if I'd remembered to attach the patches. Let's try
>> that again.
>
> Here's an updated patch set.
Hi Robert,
v2-0001 through v2-0009 still apply cleanly, but
On Fri, Jul 31, 2020 at 12:49 PM Andres Freund wrote:
> Have you tested whether this still works against older servers? Or do
> you think we should not have that as a goal?
I haven't tested that recently but I intended to keep it working. I'll
make sure to nail that down before I get to the
Hi,
On 2020-07-29 11:31:26 -0400, Robert Haas wrote:
> Here's an updated patch set. This is now rebased over master and
> includes as 0001 the patch I posted separately at
> http://postgr.es/m/CA+TgmobAczXDRO_Gr2euo_TxgzaH1JxbNxvFx=hyvbinefn...@mail.gmail.com
> but drops some other patches that
On Tue, Jun 30, 2020 at 10:45 AM Dipesh Pandit
wrote:
> Hi,
>
> I have repeated the experiment with 8K block size and found that the
> results are not varying much after applying the patch.
> Please find the details below.
>
>
> Later I connected with Suraj to validate the experiment details and
Hi,
I have repeated the experiment with 8K block size and found that the
results are not varying much after applying the patch.
Please find the details below.
*Backup type*: local backup using pg_basebackup
*Data size*: Around 200GB (200 tables - each table around 1.05 GB)
*TAR_SEND_SIZE value*:
Hi,
On Wed, May 13, 2020 at 7:49 PM Robert Haas wrote:
>
> So the patch came out slightly faster at 8kB and slightly slower in the
> other tests. That's kinda strange. I wonder if it's just noise. How much do
> the results vary run to run?
>
It is not varying much except for the 8kB run. Please see
On Wed, May 13, 2020 at 12:01 AM Suraj Kharage <
suraj.khar...@enterprisedb.com> wrote:
>                          8kB          32kB (default)  128kB        1024kB
> Without refactor patch
>   real                   10m22.718s   8m36.245s       6m54.299s
>   user                   1m23.629s    1m8.471s        0m55.690s
>   sys                    8m51.410s    7m21.520s       5m46.502s
Hi Suraj,
Two points I wanted to mention.
1. The max rate at which the transfer happens when the tar send size is
128kB is at most .48 GB/sec. Is there a possibility to understand what
buffer size is being used? That could help us explain some part
of the puzzle.
2.
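The .48 GB/sec figure is consistent with the 128kB timing quoted earlier in the thread (roughly 200GB transferred in real 6m54.299s):

```python
# Back-of-the-envelope check of the ~.48 GB/sec estimate for the 128kB case,
# using the numbers quoted earlier in the thread (assumed: ~200GB of data).
data_gb = 200.0
elapsed_s = 6 * 60 + 54.299   # real time for the 128kB run
rate_gb_per_s = data_gb / elapsed_s
print(round(rate_gb_per_s, 2))  # 0.48
```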
Hi,
Did some performance testing by varying TAR_SEND_SIZE with Robert's
refactor patch and without the patch to check the impact.
Below are the details:
*Backup type*: local backup using pg_basebackup
*Data size*: Around 200GB (200 tables - each table around 1.05 GB)
*different TAR_SEND_SIZE
On Wed, May 13, 2020 at 1:56 AM Robert Haas wrote:
>
> On Tue, May 12, 2020 at 4:32 AM Dilip Kumar wrote:
> > Some of the bbsink_libpq_* functions are directly calling pq_* e.g.
> > bbsink_libpq_begin_backup whereas others are calling SendCopy*
> > functions and therein those are calling pq_*
On Tue, May 12, 2020 at 4:32 AM Dilip Kumar wrote:
> Some of the bbsink_libpq_* functions are directly calling pq_* e.g.
> bbsink_libpq_begin_backup whereas others are calling SendCopy*
> functions and therein those are calling pq_* functions. I think
> bbsink_libpq_* function can directly call
On Sat, May 9, 2020 at 2:23 AM Robert Haas wrote:
>
> Hi,
>
> I'd like to propose a fairly major refactoring of the server's
> basebackup.c. The current code isn't horrific or anything, but the
> base backup mechanism has grown quite a few features over the years
> and all of the code knows about
Hi Robert,
Please see my comments inline below.
On Tue, May 12, 2020 at 12:33 AM Robert Haas wrote:
> Yeah, that needs to be tested. Right now the chunk size is 32kB but it
> might be a good idea to go larger. Another thing is that right now the
> chunk size is tied to the protocol message
On Fri, May 8, 2020 at 5:27 PM Andres Freund wrote:
> I wonder if there's cases where recursively forwarding like this will
> cause noticable performance effects. The only operation that seems
> frequent enough to potentially be noticable would be "chunks" of the
> file. So perhaps it'd be good
Hi,
On 2020-05-08 16:53:09 -0400, Robert Haas wrote:
> They represent closely-related concepts, so much so that I initially
> thought we could get by with just one new abstraction layer. I found
> on experimentation that this did not work well, so I split it up into
> two and that worked a lot