On 07.04.2017 at 11:20, Stefano Zamboni wrote:
On 07/04/2017 09:45, Terry Fage wrote:
Forwarded on behalf of Jesper Knudsen
I am personally using https://www.urbackup.org/ which does a
great job backing up all my clients to the SME server. I am
quite sure that the same software can do an SME backup if the
Linux client is used.
this topic has been discussed many times around here (BZ, forums,
MLs...)
there are many products out there that are good for backup, but
there are some aspects we can't forget:
1) we need to make backups of SME outside SME itself. backuppc
and affa are really good tools, but if you use them on SME itself,
they'll use (a good part of) the disk space, and they must be able
to export a full working backup set to remote storage (remote
means USB disks too).
If we're talking about affa and backuppc outside our server, it
could be tricky because they both need a server to run (and in
some cases we don't have a second server, just a NAS or, worse, a
USB disk).
2) we'd like to keep the backup as easy as possible, meaning that
restoring from a backup set should be feasible without any
hassle. From this point of view, if you don't have an SME server
running and you want to extract a file/dir from dar, it's not so
easy, and you must have a good dar catalogue. It would also be
good to be able to open a backup set from a Windows client.
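To make the "extract without a running SME server" point concrete, here is a rough sketch of what a bare-machine restore from dar looks like. All paths and archive names below are illustrative assumptions, and it presumes the dar tools are installed on the rescue machine:

```shell
# Hypothetical paths; the archive basename is given without the .1.dar
# slice suffix. Assumes dar/dar_manager are available on the rescue box.

# List the contents of a backup set:
dar -l /mnt/usb/backup/full-2017-04-01

# Extract only one subtree into the current directory:
dar -x /mnt/usb/backup/full-2017-04-01 -g home/e-smith/files/users/alice

# With a dar_manager catalogue, find which archive holds a given file:
dar_manager -B /mnt/usb/backup/dar-catalog -f home/e-smith/files/users/alice/doc.txt
```

None of this works from a Windows client without extra tooling, which is exactly the hassle described above.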
Referring to my other post mentioning otibackup: this was one of
the big reasons why I chose to write something of my own. It is
not otibackup itself but additional tools which let you create an
smeserver.tar.gz file of just the server configuration. Once you
have restored the server to its original config, you sync back
whatever backup of the user data/content is in the history. I've
done two upgrades with this approach, going from SME 7 to 8 to 9:
I always did a fresh install with the configuration from the
tar.gz file and synced all the data back as a post-install step.
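The "config tarball plus data sync" approach above can be sketched in a few shell steps. This is a minimal illustration using a throwaway sandbox directory so it can be run anywhere; the real file layout (e-smith config files, user ibays) is an assumption, not SME's actual structure, and in practice the final copy would be an rsync from the kept history:

```shell
# Minimal sketch of the config-tarball + data-sync restore flow.
# Sandbox paths stand in for the real server layout (an assumption).
set -e
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/etc" "$SANDBOX/data" "$SANDBOX/restore"
echo "ServerName=myserver" > "$SANDBOX/etc/server.conf"   # stand-in config
echo "user file" > "$SANDBOX/data/report.txt"             # stand-in user data

# 1) capture just the configuration into a single tarball
tar -czf "$SANDBOX/smeserver.tar.gz" -C "$SANDBOX" etc

# 2) after a "fresh install", unpack the config tarball first ...
tar -xzf "$SANDBOX/smeserver.tar.gz" -C "$SANDBOX/restore"

# 3) ... then sync the user data back from the kept history
#    (cp -a stands in for rsync -a here)
cp -a "$SANDBOX/data" "$SANDBOX/restore/data"

cat "$SANDBOX/restore/etc/server.conf"   # the restored configuration
```

The appeal of this flow is that the configuration tarball is tiny and human-readable, so a fresh install plus data sync replaces a monolithic full-system restore.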
I don't know when/why dar was adopted, but it has some issues:
- it can't use parallel compression, so even if you have 2000
cores, only one will be used during backup; using pigz (available
from EPEL) won't help because dar doesn't call an external gzip.
- it's fragile, and this is the reason we are here :-)
- it can be a bottleneck for the server. If you keep many sets,
the backup can be fast, but the removal of an old set can be
slow. I have a server that needs 5 hours to back up (and that's
fine, starting at 22:00), but the post-backup stage (i.e. the
/usr/bin/dar_manager -Q -B /mnt/smb/FQDN/dar-catalog -D 15 one)
can take the same time, and this is bad, since users complain
that the server is slow.
- it has no log/diagnostic output while it's running (though
maybe that's related to how we use it); you just have to wait and
see.
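On the single-core compression point: one possible workaround worth testing (an assumption on my part, not a recommendation) is to have dar write an uncompressed archive to stdout and hand compression to pigz, which does use all cores. Paths are placeholders, and note this produces a single externally compressed file, which changes how slicing, the catalogue, and restores work:

```shell
# Sketch only: dar writes an uncompressed archive to stdout ("-c -"),
# pigz compresses it on all cores. Paths are illustrative placeholders.
dar -c - -R /home/e-smith | pigz -p "$(nproc)" > /mnt/backup/full.dar.gz

# Restoring means decompressing first, then pointing dar at the slice:
pigz -d -c /mnt/backup/full.dar.gz > /tmp/full.1.dar
dar -x /tmp/full -R /tmp/restore
```

Whether this interacts sanely with dar_manager catalogues would need real testing before it could replace the current setup.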
Urbackup, Affa, Backuppc, and Amanda are more "backup server"
oriented IMO, i.e. they are tools to create a backup server, not
to back up a server :-)
NS uses duplicity (see http://duplicity.nongnu.org/), which seems
to have good features (http://duplicity.nongnu.org/features.html).
We'd like to take a look at NS's scripts and run some tests.
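For anyone who hasn't used it, the basic duplicity workflow looks roughly like this. The source path, SFTP target, and file names are placeholder assumptions, not NS's actual configuration:

```shell
# Hedged sketch of basic duplicity usage; target URL and paths are
# placeholders. GPG encryption uses this passphrase by default.
export PASSPHRASE='change-me'

# Full backup of the data tree to a remote SFTP target:
duplicity full /home/e-smith/files sftp://backup@nas.example.org//srv/backup/sme

# Subsequent runs can be incremental:
duplicity incremental /home/e-smith/files sftp://backup@nas.example.org//srv/backup/sme

# Restore a single file from the latest set:
duplicity restore --file-to-restore users/alice/doc.txt \
    sftp://backup@nas.example.org//srv/backup/sme /tmp/doc.txt
```

The single-file restore without a full unpack is notable here, since that is exactly the pain point described for dar above.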
ciao
S.
_______________________________________________
Discussion about project organisation and overall direction
To unsubscribe, e-mail [email protected]
Searchable archive at
https://lists.contribs.org/mailman/public/discussion/