Now that I've poked at this again: even a network error, such as the client
going away while a backup is in progress, will leave these files behind. A
really unstable client could leave a lot of orphaned rsyncTmp files.
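A hedged cleanup sketch for those leftovers, demonstrated on a throwaway directory tree rather than a live install: the TopDir layout (pc/<host>/), the rsyncTmp.* name pattern, and the one-day age threshold are assumptions. Point it at your real TopDir only after confirming no backup is running.

```python
# Sweep orphaned rsyncTmp files left when a client dies mid-backup.
# Demonstrated against a temporary stand-in for BackupPC's TopDir.
import os
import pathlib
import tempfile
import time

top_dir = pathlib.Path(tempfile.mkdtemp())      # stand-in for the real TopDir
host_dir = top_dir / "pc" / "examplehost"
host_dir.mkdir(parents=True)
orphan = host_dir / "rsyncTmp.1234.0.5"
orphan.touch()
# Back-date the file so it looks like a day-old leftover.
old = time.time() - 2 * 86400
os.utime(orphan, (old, old))

removed = []
for path in (top_dir / "pc").rglob("rsyncTmp.*"):
    # Only delete files untouched for more than a day, to spare
    # any backup that is still in progress.
    if time.time() - path.stat().st_mtime > 86400:
        path.unlink()
        removed.append(path.name)
print(removed)  # the back-dated orphan is listed
```

The age check is the important part: a fresh rsyncTmp file may belong to a running backup, so only stale ones are candidates for removal.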
Mike
--
On Tue, Jan 31, 2017 at 1:14 AM, Johan Ehnberg wrote:
> On 01/30/2017 04:58 PM, Kent Tenney wrote:
> > The error message has changed with the current version:
> > commit edbd1a4613e0125ed65372738abff935230db075
> >
> > when the script executes 'makeDist ...'
> >
> > Unexpected Conf
On Wed, Feb 1, 2017 at 2:53 AM, Jan Stransky
wrote:
>
> 3) A full backup of each dataset as a separate host, then a second run with
> an already-filled pool. Preferably from SSD to SSD so as not to be I/O limited.
>
In practice, if you use the --checksum-seed option with rsync, the
timing
Hi,
having installed BackupPC, I was pleasantly surprised by its
compression effectiveness (data savings), but unpleasantly surprised
by how CPU-bound it is.
Therefore, I am thinking about preparing a compression CPU
performance benchmark for BackupPC. Potential new users or HW buyers
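A minimal sketch of such a benchmark, assuming zlib as the compression in play (BackupPC's pool compression is zlib-based via Compress::Zlib); the sample buffer and the set of levels tested are placeholders:

```python
# Time zlib at several compression levels over a sample buffer,
# reporting throughput (MB/s) and compression ratio.
import time
import zlib

sample = (b"BackupPC pool benchmark sample line\n" * 4096) * 8  # ~1 MB
results = {}
for level in (1, 3, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(sample, level)
    elapsed = time.perf_counter() - start
    results[level] = (
        len(sample) / max(elapsed, 1e-9) / 1e6,  # MB/s
        len(sample) / len(compressed),           # compression ratio
    )
for level, (speed, ratio) in results.items():
    print("level %d: %7.1f MB/s, ratio %.1fx" % (level, speed, ratio))
```

Real pool data will compress far worse than this repetitive sample, so a serious benchmark should read representative files from disk instead.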
I think you would have to write a script that modifies the host-specific
config file for BackupPC.
Jan
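A hedged sketch of that idea: a script that writes a per-host BackupPC config file. The path layout (a pc/ config directory with <host>.pl files) and the generated $Conf{BackupFilesExclude} setting reflect a typical installation, but adapt both to yours and reload BackupPC afterwards.

```python
# Generate a host-specific BackupPC config (.pl) with an exclude list.
import pathlib
import tempfile

def write_host_config(conf_dir, host, excludes):
    """Write (or overwrite) a per-host config setting BackupFilesExclude."""
    lines = ["$Conf{BackupFilesExclude} = {"]
    # '*' applies the excludes to every share of this host.
    lines.append("    '*' => [%s]," % ", ".join("'%s'" % e for e in excludes))
    lines.append("};")
    path = pathlib.Path(conf_dir) / ("%s.pl" % host)
    path.write_text("\n".join(lines) + "\n")
    return path

conf_dir = tempfile.mkdtemp()          # stand-in for /etc/BackupPC/pc
p = write_host_config(conf_dir, "examplehost", ["/tmp", "/var/cache"])
print(p.read_text())
```

Driving this from a host list (or from whatever marks a directory as ignorable) would give you per-host configs without hand-editing each file.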
On 02/01/2017 09:26 AM, Andreas Roth wrote:
> Hi all,
>
> I want to back up the filesystem structure of a big data application. The
> application has a data directory structure. Every directory
> which
Hi all,
I want to back up the filesystem structure of a big data application. The
application has a data directory structure. Every directory
which does not need to be backed up contains a kind of ignore file.
Is there a way to configure BackupPC to skip a complete folder when
there is a