Just following up on this because I got a very useful reply from Holger, which
explained that a variable can be used to hold a list of excludes, but noted
that doing so will break the ability to use the GUI: if the GUI is used to edit
a host's config after manually setting a variable, all manual changes
I was under the incorrect assumption that specifying an asterisk would apply
the exclusion list to all ShareNames, until I struggled to make it work and
re-read the tutorial [1], where it specifies that the asterisk "means it applies
to all shares that don't have a specific entry".
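For reference, a config.pl sketch of the two styles (the share names and paths here are made-up examples):

```perl
# The '*' key applies only to shares WITHOUT their own entry; a share
# listed explicitly uses its own list INSTEAD of the '*' list.
$Conf{BackupFilesExclude} = {
    '*'       => ['/proc', '/tmp'],       # every share not listed below
    '/export' => ['/export/scratch'],     # /export ignores the '*' list
};
```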
On 04 Aug 2016 14:04, Adam Goryachev wrote:
I can't comment on the rest, but directory entries are always created,
because backuppc needs them for the backup structure (and there is no
disk saving/not possible to hard link them)...
Makes perfect sense! Thanks for the response Adam.
Backups are taking about three hours for a particular fileserver and records
indicate that over 300k new directories are being created every run.
I opened the XferLOG in a browser and searched for the phrase "create d", which
catches every newly-created directory. The count was 348k matches. But
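The same count can be taken on the command line; a minimal sketch using fabricated XferLOG-style lines (the real log is compressed on disk and would normally be read through BackupPC's log viewer or BackupPC_zcat first):

```shell
# 'create d ...' marks a newly created directory entry in the XferLOG;
# the sample lines below are invented for illustration.
printf 'create d 755 0/0 4096 etc\ncreate   644 0/0 120 etc/hosts\ncreate d 755 0/0 4096 var\n' \
  | grep -c 'create d'
# prints: 2
```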
To relieve the impact of backing up hundreds of gigabytes in a short time, I
added several larger folders to the exclude list for a share. Then each day I
would remove a couple from the exclude list to add more data to the backup.
What's happening is that backup #12 is trying to go back to
On 08 Jun 2016 20:32 Juergen Harms wrote:
I have opened a corresponding issue on GitHub - I am not sure whether the
transfer of access rights to the content at SourceForge has been completed
successfully. Either way, this makes sure that your remark is not lost.
Giving this another go and crossing my fingers. Using tar to seed data from a
local file-level backup into the BackupPC pool.
I realized that I had removed some, but not all, of the plus signs from the tar
command lines. This is mentioned several times in emails (eg:
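For context: BackupPC shell-escapes any substitution variable written with a trailing '+' (e.g. $shareName+), and when the command is run without a shell the escaped form can mangle the arguments, so the plain forms are the ones that work here. A config.pl sketch, with paths taken from this thread:

```perl
# '$shareName+' would be the shell-escaped form; with sudo/tar invoked
# directly, the unescaped '$shareName' is used instead.
$Conf{TarClientCmd} = '/usr/bin/sudo $tarPath -cvf - -C /home/backup/$host/cur$shareName . --totals';
```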
On 06 Jun 2016 21:06, Carl Wilhelm wrote:
A negative result is still a result. Thanks for reporting your result.
Good point. I'm glad to contribute to the collective knowledge :)
Well that was a huge waste of time. Attempting to import the backups from the
local store using this TarClientCmd:
/usr/bin/sudo $tarPath -cvf - -C /home/backup/$host/cur$shareName . --totals
did appear to import all the files, but when I switched it back to rsync and
scheduled another backup
Just following up on my efforts to import a host's existing file-level backups
located under /home/backup/hostname/cur/ on the BackupPC server into the
BackupPC pool located in /home/BackupPC/pool. I think I can do what I need
without additional scripts or having to tar.gz the current files.
Johan Ehnberg wrote:
To do it directly from directories, you could try changing 'zcat' to
'tar --strip-components=X' and pointing it at the directory. Set X to the
number of leading path components that should not be included in the
archive, so that its structure matches that of the actual host. The point
is to ensure that the paths you get in BackupPC from the seeding match
those that you get when backing up the actual host.
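A self-contained illustration of the stripping, using GNU tar (all directory names are invented; here X=4 removes the seed prefix home/backup/myhost/cur so the restored path matches the host's real layout):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/home/backup/myhost/cur/etc"
echo '127.0.0.1 localhost' > "$tmp/home/backup/myhost/cur/etc/hosts"

# Archive the seed tree; the entries carry the full seed prefix.
tar -C "$tmp" -cf "$tmp/seed.tar" home/backup/myhost/cur/etc/hosts

# Extract while stripping the 4 leading components so the path comes out
# as 'etc/hosts', exactly as a backup of the live host would store it.
mkdir "$tmp/restore"
tar -C "$tmp/restore" -xf "$tmp/seed.tar" --strip-components=4
ls "$tmp/restore/etc"
# prints: hosts
rm -rf "$tmp"
```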
I updated the script with improved documentation with the help of your
experiences. Thanks!
Best regards,
Johan
On 2016-05-14 00:03, cardiganimpatience wrote:
method here:
http://johan.ehnberg.net/backuppc-pre-loading-seeding-and-migrating-moving-script/
You may be able to use tar with --strip-components to strip the
extra path components on the fly.
Good luck!
Johan
On 2016-05-06 17:19, cardiganimpatience wrote:
BackupPC is installed and working great for new hosts. Is there a way to take
the hundreds of GB from old hosts that exist on the backup server and import
them into the BackupPC storage pool?
The old backup system uses rsync to dump all files to a local disk on the same
server where BackupPC now runs.