Re: [gentoo-user] {OT} backups... still backups....

2013-07-18 Thread Grant
 You're welcome. A pull system does rely on the server being secure, which
 is why I don't use it for offsite backups to the cloud :-O

 Wouldn't a push/pull combination be a good compromise?

 The remote servers push their backups to their own location on a staging
 server.
 The backup-storage server then pulls the backups from there.

 The staging can then be a VM, with the real backups being moved onto
 host storage to which the VM has no access.

 This way, if the staging server is compromised, only the latest backup can be
 accessed.
 If the remote server is compromised, only the latest backup can potentially be
 overwritten.
 But the actual backups cannot be accessed, as the host will have no outgoing
 connectivity, and while the backups are being pulled the VM will be stopped
 to allow access to the disk containing the backups.

 The steps would be as follows:
 1) remote server(s) push backup to the VM
 2) host shuts down VM
 3) host mounts backup-store of VM locally
 4) host takes a backup of the backup-store
 5) host starts VM

 By using LVM snapshots, the downtime of the VM can be significantly reduced.
 Additionally, the VM OS and software can be restored from a known-good
 copy prior to each restart and the VM can be configured to only be running
 during the backup-window of the remote systems. This would then
 significantly reduce the window of opportunity for any security breach
 attempts.

I think I follow. :)  Do you think that would be better than having
the clients push to the backup server via rsync, then having the
backup server update an rdiff-backup repository that the clients have
no access to, then having another system pull from the backup server's
rsynced data and create its own rdiff-backup repository?  To me that
seems like it would have the right combination of security,
redundancy, and simplicity.

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-02 Thread Grant
 I'd rather lose my backups than lose my backups and give up root
 read/write to every system I back up. :)

 If you want to leave your backup server open to exploitation attempts,
 maybe you should be looking at a different solution :)

If "open to exploitation attempts" means accepting inbound connections
from the internet, then I agree.  I'm grateful for your help with this,
Neil!

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-02 Thread Neil Bothwick
On Mon, 1 Jul 2013 23:24:55 -0700, Grant wrote:

  I'd rather lose my backups than lose my backups and give up root
  read/write to every system I back up. :)  
 
  If you want to leave your backup server open to exploitation attempts,
  maybe you should be looking at a different solution :)  
 
 If "open to exploitation attempts" means accepting inbound connections
 from the internet, then I agree.  I'm grateful for your help with this,
 Neil!

You're welcome. A pull system does rely on the server being secure, which
is why I don't use it for offsite backups to the cloud :-O


-- 
Neil Bothwick

It compiled? The first screen came up? Ship it! -- Bill Gates




Re: [gentoo-user] {OT} backups... still backups....

2013-07-02 Thread J. Roeleveld
On Tue, July 2, 2013 10:08, Neil Bothwick wrote:

 You're welcome. A pull system does rely on the server being secure, which
 is why I don't use it for offsite backups to the cloud :-O

Wouldn't a push/pull combination be a good compromise?

The remote servers push their backups to their own location on a staging
server.
The backup-storage server then pulls the backups from there.

The staging can then be a VM, with the real backups being moved onto
host storage to which the VM has no access.

This way, if the staging server is compromised, only the latest backup can be
accessed.
If the remote server is compromised, only the latest backup can potentially be
overwritten.
But the actual backups cannot be accessed, as the host will have no outgoing
connectivity, and while the backups are being pulled the VM will be stopped
to allow access to the disk containing the backups.

The steps would be as follows:
1) remote server(s) push backup to the VM
2) host shuts down VM
3) host mounts backup-store of VM locally
4) host takes a backup of the backup-store
5) host starts VM

By using LVM snapshots, the downtime of the VM can be significantly reduced.
Additionally, the VM OS and software can be restored from a known-good
copy prior to each restart and the VM can be configured to only be running
during the backup-window of the remote systems. This would then
significantly reduce the window of opportunity for any security breach
attempts.
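
For illustration, steps 2-5 might look roughly like the following host-side
shell sketch. This assumes a libvirt-managed VM; the domain name, device
paths and archive directory are made-up placeholders, and the LVM-snapshot
variant is only hinted at in the comments:

  #!/bin/sh
  # Host-side backup cycle for the staging VM (sketch, untested).
  set -e
  VM=backup-staging
  STORE=/dev/vg0/staging-store     # disk the remote servers push their backups to
  MNT=/mnt/staging-store
  ARCHIVE=/srv/backup-archive      # host-only storage the VM cannot reach

  virsh shutdown "$VM"                            # 2) host shuts down VM
  while virsh domstate "$VM" | grep -q running; do sleep 5; done
  mount "$STORE" "$MNT"                           # 3) host mounts backup-store of VM locally
  rsync -a "$MNT"/ "$ARCHIVE/$(date +%F)"/        # 4) host takes a backup of the backup-store
  umount "$MNT"
  virsh start "$VM"                               # 5) host starts VM
  # With LVM, taking a snapshot of $STORE (lvcreate -s) right after the
  # shutdown and copying from the snapshot instead lets the VM be restarted
  # almost immediately, which is the downtime reduction mentioned above.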

--
Joost




Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
  Isn't that a gaping security hole?  I think this amounts to granting
  the backup server root read access (and write access if you want to
  restore) on each client?
 
  How can you back up system files without root read access? You are
  granting this to a specific user, one without a login shell, on the
  server.

 If the backup server is infiltrated, the infiltrator would have root
 read access to each of the clients, correct?  If the clients push to
 the backup server instead, their access on the server can be
 restricted to the backup directory.

 Yes, but with push you have to secure each machine whereas with pull
 backups it's only the server to secure. And you'd still need to grant
 access to the server from the clients, which could be escalated. With
 backuppc, the server does not need to be accessible from the Internet at
 all, all requests are outgoing. If the server machine serves other
 purposes and needs to be net-accessible, run the backup server in a
 chroot or VM.

I'm planning to rsync --fake-super the important files from each
client to a particular folder on the backup server as an unprivileged
user and then have the backup server run rdiff-backup locally to
maintain a history of those files.  authorized_keys on the server
would restrict the clients to a particular rsync command in a
particular directory.  That way if the backup server is infiltrated,
the clients aren't exposed in any way, and if a client is infiltrated,
the only extra exposure is the rsync'ed copy of the files on the
server which isn't a real vulnerability because of the rdiff-backup
history.  I'd also like to have a secondary backup server pull those
same rsync'ed files from the primary backup server and run its own
rdiff-backup repository on them.  That way all copies of any system's
backups are never made vulnerable by the break-in of a single system.
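
For illustration, a minimal sketch of that chain. The host names, paths, key
and user names are invented, and the forced-command option string has to be
captured from the client's actual invocation, since it varies with rsync
version and options:

  # On each client (as root, e.g. from cron) -- push to an unprivileged user:
  rsync -a --delete -e "ssh -i /root/.ssh/backup_key" \
      /etc /home /var/www backup@backupserver:incoming/client1/

  # In ~backup/.ssh/authorized_keys on the backup server, pin the key to the
  # matching server-side rsync command (log $SSH_ORIGINAL_COMMAND once to get
  # the exact string); --fake-super here makes the unprivileged receiver store
  # ownership/permissions in xattrs:
  command="rsync --server -logDtpre.iLsfxC --fake-super --delete . incoming/client1/",no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... root@client1

  # On the backup server (cron), version the pushed copy locally; the clients
  # never see this repository:
  rdiff-backup /home/backup/incoming/client1 /srv/rdiff/client1

  # A secondary backup server can pull incoming/ from here read-only and keep
  # its own rdiff-backup repository in the same way.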

Doesn't that compare favorably to a layout like backuppc's?

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
 You did not tell us what you are trying to back up: the entire system or just
 particular files.

Right now I'm working on particular files and folders, but it sounds
nice to eventually back up each system in its entirety.  That does
sound like a lot of data to move offsite though.

 Are you afraid of updates or data loss?

I'm mainly thinking about data loss and break-ins.  With updates I
find that the problematic package is usually fairly obvious and Gentoo
makes it easy to roll back, especially with FEATURES=buildpkg.

 I have two machines in a remote location as well.  So I usually upgrade my
 local machine first, wait one week, and if there are no surprises I upgrade
 the remote main server first.  If everything goes OK (no surprises and/or
 complaints), I upgrade the remote backup machine.
 I run vpn so I just use rsync over vpn to make an incremental backup daily
 (Mon. to Fri.).

That's the same sort of backup process I'm working on.  I have a
similar staggered update strategy as well.  It came in handy with the
udev network device name change recently.

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Neil Bothwick
On Mon, 1 Jul 2013 01:39:56 -0700, Grant wrote:

  Yes, but with push you have to secure each machine whereas with pull
  backups it's only the server to secure. And you'd still need to grant
  access to the server from the clients, which could be escalated. With
  backuppc, the server does not need to be accessible from the Internet
  at all, all requests are outgoing. If the server machine serves other
  purposes and needs to be net-accessible, run the backup server in a
  chroot or VM.  
 
 I'm planning to rsync --fake-super the important files from each
 client to a particular folder on the backup server as an unprivileged
 user and then have the backup server run rdiff-backup locally to
 maintain a history of those files.

How does that work with files that aren't world-readable?

 authorized_keys on the server
 would restrict the clients to a particular rsync command in a
 particular directory.  That way if the backup server is infiltrated,
 the clients aren't exposed in any way, and if a client is infiltrated,
 the only extra exposure is the rsync'ed copy of the files on the
 server which isn't a real vulnerability because of the rdiff-backup
 history.  I'd also like to have a secondary backup server pull those
 same rsync'ed files from the primary backup server and run its own
 rdiff-backup repository on them.  That way all copies of any system's
 backups are never made vulnerable by the break-in of a single system.
 
 Doesn't that compare favorably to a layout like backuppc's?

It's a lot more work and doesn't cover everything. One of the advantages
of a pull system like BackupPC is that the only work needed on the client
is adding the backuppc user's key to authorized keys. Everything else is
done by the server. If the server cannot contact the client, or the
connection is broken mid-backup, it tries again. It also gives a single
point of configuration. If you want to change the backup plan for all
machines, you make one change on one computer.

It works well, saves work and minimises disk space usage, especially with
multiple similar clients. Preventing infiltration is simple, as you don't
need to open it to the Internet at all; the backup server can be
completely stealthed and still do its job.


-- 
Neil Bothwick

Better to understand a little than to misunderstand a lot.




Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
 I'm planning to rsync --fake-super the important files from each
 client to a particular folder on the backup server as an unprivileged
 user and then have the backup server run rdiff-backup locally to
 maintain a history of those files.

 How does that work with files that aren't world-readable?

The client can run rsync as root, the unprivileged user would be
writing on the backup server.  --fake-super writes all original
ownership/permissions to xattrs in the files.
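
For illustration, the receiving side records the real ownership and mode in an
rsync-specific extended attribute (user.rsync.%stat), which getfattr can show;
the file name and values here are invented:

  backup@backupserver $ getfattr -d -m 'user.rsync' incoming/client1/etc/shadow
  # file: incoming/client1/etc/shadow
  user.rsync.%stat="100640 0,0 0:0"     # octal mode, rdev, uid:gid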

 authorized_keys on the server
 would restrict the clients to a particular rsync command in a
 particular directory.  That way if the backup server is infiltrated,
 the clients aren't exposed in any way, and if a client is infiltrated,
 the only extra exposure is the rsync'ed copy of the files on the
 server which isn't a real vulnerability because of the rdiff-backup
 history.  I'd also like to have a secondary backup server pull those
 same rsync'ed files from the primary backup server and run its own
 rdiff-backup repository on them.  That way all copies of any system's
 backups are never made vulnerable by the break-in of a single system.

 Doesn't that compare favorably to a layout like backuppc's?

 It's a lot more work and doesn't cover everything. One of the advantages
 of a pull system like BackupPC is that the only work needed on the client
 is adding the backuppc user's key to authorized keys. Everything else is
 done by the server. If the server cannot contact the client, or the
 connection is broken mid-backup, it tries again. It also gives a single
 point of configuration. If you want to change the backup plan for all
 machines, you make one change on one computer.

If you have a crazy number of machines to back up, I could see
sacrificing some security for convenience.  Still I would think you
could use something like puppet to have the best of both worlds.  I
have 5 machines and I think I can get it down to 3.

 It works well, saves work and minimises disk space usage, especially with
 multiple similar clients. Preventing infiltration is simple, as you don't
 need to open it to the Internet at all; the backup server can be
 completely stealthed and still do its job.

Obviously the backup server has to be able to make outbound
connections in order to pull so I think you're saying it could drop
inbound connections, but then how could you talk to it?  Do you mean a
local backup server?

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Neil Bothwick
On Mon, 1 Jul 2013 05:29:58 -0700, Grant wrote:

  It's a lot more work and doesn't cover everything. One of the
  advantages of a pull system like BackupPC is that the only work
  needed on the client is adding the backuppc user's key to authorized
  keys. Everything else is done by the server. If the server cannot
  contact the client, or the connection is broken mid-backup, it tries
  again. It also gives a single point of configuration. If you want to
  change the backup plan for all machines, you make one change on one
  computer.  
 
 If you have a crazy number of machines to back up, I could see
 sacrificing some security for convenience.  Still I would think you
 could use something like puppet to have the best of both worlds.  I
 have 5 machines and I think I can get it down to 3.

There is no sacrifice, you are running rsync as root on the client
either way. Alternatively, you could run rsyncd on the client, which
avoids the need for the server to be able to run an SSH session.
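
For illustration, a minimal read-only rsyncd.conf on the client for that
approach might look something like this; the module name, addresses and
secrets file are placeholders, not a recommended production config:

  # /etc/rsyncd.conf on the client (sketch) -- the backup server pulls from
  # this module on port 873; no SSH session is involved.
  uid = root                     # daemon reads files that aren't world-readable
  gid = root
  read only = yes                # backups only; the server cannot write back

  [backup]
      path = /
      hosts allow = 192.0.2.10   # the backup server only
      hosts deny = *
      auth users = backuppc
      secrets file = /etc/rsyncd.secrets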

  It works well, saves work and minimises disk space usage, especially
  with multiple similar clients. Preventing infiltration is simple, as
  you don't need to open it to the Internet at all; the backup server
  can be completely stealthed and still do its job.
 
 Obviously the backup server has to be able to make outbound
 connections in order to pull so I think you're saying it could drop
 inbound connections, but then how could you talk to it?  Do you mean a
 local backup server?

Yes, you talk to the server over the LAN, or a VPN. There need be no way
of connecting to it from outside of your LAN.


-- 
Neil Bothwick

There's a fine line between fishing and standing on the shore looking
like an idiot.




Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
  It's a lot more work and doesn't cover everything. One of the
  advantages of a pull system like BackupPC is that the only work
  needed on the client is adding the backuppc user's key to authorized
  keys. Everything else is done by the server. If the server cannot
  contact the client, or the connection is broken mid-backup, it tries
  again. It also gives a single point of configuration. If you want to
  change the backup plan for all machines, you make one change on one
  computer.

 If you have a crazy number of machines to back up, I could see
 sacrificing some security for convenience.  Still I would think you
 could use something like puppet to have the best of both worlds.  I
 have 5 machines and I think I can get it down to 3.

 There is no sacrifice, you are running rsync as root on the client
 either way. Alternatively, you could run rsyncd on the client, which
 avoids the need for the server to be able to run an SSH session.

I think the sacrifice is that with the backuppc method, if someone
breaks into the backup server they will have read(/write) access to
the clients.  The method I'm describing requires more management if
you have a lot of machines, but it doesn't have the aforementioned
vulnerability.

The rsyncd option is interesting.  If you don't want to restore
directly onto the client, there are no SSH keys involved at all?

  It works well, saves work and minimises disk space usage, especially
  with multiple similar clients. Preventing infiltration is simple, as
  you don't need to open it to the Internet at all; the backup server
  can be completely stealthed and still do its job.

 Obviously the backup server has to be able to make outbound
 connections in order to pull so I think you're saying it could drop
 inbound connections, but then how could you talk to it?  Do you mean a
 local backup server?

 Yes, you talk to the server over the LAN, or a VPN. There need be no way
 of connecting to it from outside of your LAN.

To me it seems presumptuous to be sure a particular machine will never
be infiltrated to the degree that you're OK with such an infiltration
giving read(/write) access on every client to the infiltrator.

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Neil Bothwick
On Mon, 1 Jul 2013 06:31:38 -0700, Grant wrote:

  There is no sacrifice, you are running rsync as root on the client
  either way. Alternatively, you could run rsyncd on the client, which
  avoids the need for the server to be able to run an SSH session.  
 
 I think the sacrifice is that with the backuppc method, if someone
 breaks into the backup server they will have read(/write) access to
 the clients.  The method I'm describing requires more management if
 you have a lot of machines, but it doesn't have the aforementioned
 vulnerability.
 
 The rsyncd option is interesting.  If you don't want to restore
 directly onto the client, there are no SSH keys involved at all?

Not even then, the server talks to the client in the same way for
restores as it does for backups, so it would still use rsyncd if you
wanted it to.

  Obviously the backup server has to be able to make outbound
  connections in order to pull so I think you're saying it could drop
  inbound connections, but then how could you talk to it?  Do you mean
  a local backup server?  
 
  Yes, you talk to the server over the LAN, or a VPN. There need be no
  way of connecting to it from outside of your LAN.  
 
 To me it seems presumptuous to be sure a particular machine will never
 be infiltrated to the degree that you're OK with such an infiltration
 giving read(/write) access on every client to the infiltrator.

I don't think it too unreasonable to assume that a machine with no ports
exposed to the Internet will not be compromised from the Internet.
Whereas a push approach requires that the server have open ports.


-- 
Neil Bothwick

Just when you got it all figured out:  An UPGRADE!




Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
  There is no sacrifice, you are running rsync as root on the client
  either way. Alternatively, you could run rsyncd on the client, which
  avoids the need for the server to be able to run an SSH session.

 I think the sacrifice is that with the backuppc method, if someone
 breaks into the backup server they will have read(/write) access to
 the clients.  The method I'm describing requires more management if
 you have a lot of machines, but it doesn't have the aforementioned
 vulnerability.

 The rsyncd option is interesting.  If you don't want to restore
 directly onto the client, there are no SSH keys involved at all?

 Not even then, the server talks to the client in the same way for
 restores as it does for backups, so it would still use rsyncd if you
 wanted it to.

Hmmm, now that I think about it, I guess the server accessing the
client via rsyncd still provides the server with root read/write
access to the client just like SSH keys.

 I don't think it too unreasonable to assume that a machine with no ports
 exposed to the Internet will not be compromised from the Internet.
 Whereas a push approach requires that the server have open ports.

Agreed, but this requires that the backup server is local to the admin,
which may not be possible.  openvpn requires open ports of course.
There's also the possibility of a local break-in.

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Michael Hampicke
Am 01.07.2013 16:08, schrieb Grant:
 There is no sacrifice, you are running rsync as root on the client
 either way. Alternatively, you could run rsyncd on the client, which
 avoids the need for the server to be able to run an SSH session.

 I think the sacrifice is that with the backuppc method, if someone
 breaks into the backup server they will have read(/write) access to
 the clients.  The method I'm describing requires more management if
 you have a lot of machines, but it doesn't have the aforementioned
 vulnerability.

 The rsyncd option is interesting.  If you don't want to restore
 directly onto the client, there are no SSH keys involved at all?

 Not even then, the server talks to the client in the same way for
 restores as it does for backups, so it would still use rsyncd if you
 wanted it to.
 
 Hmmm, now that I think about it, I guess the server accessing the
 client via rsyncd still provides the server with root read/write
 access to the client just like SSH keys.
 
 I don't think it too unreasonable to assume that a machine with no ports
 exposed to the Internet will not be compromised from the Internet.
 Whereas a push approach requires that the server have open ports.
 
 Agreed, but this requires that the backup server is local to the admin,
 which may not be possible.  openvpn requires open ports of course.
 There's also the possibility of a local break-in.
 
That's how we do it. The backuppc server is in our local LAN, and only
accessible from the local LAN. It pulls backups from all our machines in
offsite data centers. To compromise our backuppc machine one would have
to physically break into our company's building.
But if somebody has physical access to the machine on which you store
your backups, you're screwed, no matter whether you use push or pull backups :)





Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
 That's how we do it. The backuppc server is in our local LAN, and only
 accessible from the local LAN. It pulls backups from all our machines in
 offsite data centers. To compromise our backuppc machine one would have
 to physically break into our company's building.
 But if somebody has physical access to the machine on which you store
 your backups, you're screwed, no matter whether you use push or pull backups :)

I'd rather lose my backups than lose my backups and give up root
read/write to every system I back up. :)

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Neil Bothwick
On Mon, 1 Jul 2013 16:14:02 -0700, Grant wrote:

 I'd rather lose my backups than lose my backups and give up root
 read/write to every system I back up. :)

If you want to leave your backup server open to exploitation attempts,
maybe you should be looking at a different solution :)


-- 
Neil Bothwick

Y'know how s'm people treat th'r body like a TEMPLE?
Well, I treat mine like 'n AMUSEMENT PARK...  S'great...




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Neil Bothwick
On Sat, 29 Jun 2013 16:42:33 -0700, Grant wrote:

 Can anyone think of an automated method that remotely and securely
 backs up data from one system to another, preserves permissions and
 ownership, and keeps the backups safe even if the backed-up system is
 compromised?

app-backup/backuppc

It uses hard links, but to save space: all versions of all files are
kept for your entire history, while unchanged files are stored only once,
even if present on multiple targets.
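
The saving comes from ordinary hard links: several directory entries pointing
at one inode, so identical content is stored once. A quick illustration (the
directory names and inode number are only for show, not BackupPC's actual pool
layout):

  $ ln pool/abc123 pc/host1/latest/etc/services
  $ ln pool/abc123 pc/host2/latest/etc/services
  $ stat -c '%h %i %n' pool/abc123 pc/host1/latest/etc/services
  3 131215 pool/abc123
  3 131215 pc/host1/latest/etc/services   # same inode, link count 3: one copy on disk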


-- 
Neil Bothwick

Time for a diet! -- [NO FLABBIER].




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Grant
 Can anyone think of an automated method that remotely and securely
 backs up data from one system to another, preserves permissions and
 ownership, and keeps the backups safe even if the backed-up system is
 compromised?

 app-backup/backuppc

 It uses hard links, but to save space: all versions of all files are
 kept for your entire history, while unchanged files are stored only once,
 even if present on multiple targets.

Thank you for the recommendation.

How far would I have to open my systems in order for backuppc to function?

Can the web server reside on a different system than the backup server?

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Neil Bothwick
On Sun, 30 Jun 2013 01:11:35 -0700, Grant wrote:

  app-backup/backuppc
 
  It uses hard links, but to save space: all versions of all files
  are kept for your entire history, while unchanged files are stored only
  once, even if present on multiple targets.
 
 Thank you for the recommendation.
 
 How far would I have to open my systems in order for backuppc to
 function?

You have to grant root rsync access to the backuppc user on the server.

 Can the web server reside on a different system than the backup server?

I haven't tried that but I don't see why not.


-- 
Neil Bothwick

...Advert for restaurant:
  Exotic foods for all occasions. Police balls a speciality.




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Stefan G. Weichinger
Am 30.06.2013 01:42, schrieb Grant:

 Can anyone think of an automated method that remotely and securely
 backs up data from one system to another, preserves permissions and
 ownership, and keeps the backups safe even if the backed-up system is
 compromised?
 
 I did delve into bacula but decided it was overkill for just a few systems.

I use amanda but it might be overkill for you as well. The initial
learning curve is a bit steep, but then it is reliable and rather easy to
add new systems.

What about using duplicity? And that dupinanny-helper-script.




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread William Kenworthy
On 30/06/13 17:58, Stefan G. Weichinger wrote:
 Am 30.06.2013 01:42, schrieb Grant:
 
 Can anyone think of an automated method that remotely and securely
 backs up data from one system to another, preserves permissions and
 ownership, and keeps the backups safe even if the backed-up system is
 compromised?

 I did delve into bacula but decided it was overkill for just a few systems.
 
 I use amanda but it might be overkill for you as well. The initial
 learning curve is a bit steep, but then it is reliable and rather easy to
 add new systems.
 
 What about using duplicity? And that dupinanny-helper-script.
 
 

Dirvish sounds something like bacula in that it uses hard links, but it is
also much simpler.  To restore, you just rsync the file/files/everything back
as needed.  It can be automated (passwordless logins using certs) and
basically just works (and has for quite a few years now!).

BillK




*  app-backup/dirvish
  Latest version available: 1.2.1
  Latest version installed: 1.2.1
  Size of downloaded files: 47 kB
  Homepage:http://www.dirvish.org/
  Description: Dirvish is a fast, disk based, rotating network
backup system.
  License: OSL-2.0



Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread David Relson
On Sun, 30 Jun 2013 01:11:35 -0700
Grant wrote:

  Can anyone think of an automated method that remotely and securely
  backs up data from one system to another, preserves permissions and
  ownership, and keeps the backups safe even if the backed-up system
  is compromised?
 
  app-backup/backuppc
 
  It uses hard links, but to save space, so all versions of all files
  are kept for your entire history, but unchanged files are kept only
  once, even if present on multiple targets.
 
 Thank you for the recommendation.
 
 How far would I have to open my systems in order for backuppc to
 function?
 
 Can the web server reside on a different system than the backup
 server?
 
 - Grant

I've been using backuppc since 2007 and am very happy with it.



Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Mick
On Sunday 30 Jun 2013 12:05:05 William Kenworthy wrote:
 On 30/06/13 17:58, Stefan G. Weichinger wrote:
  Am 30.06.2013 01:42, schrieb Grant:
  Can anyone think of an automated method that remotely and securely
  backs up data from one system to another, preserves permissions and
  ownership, and keeps the backups safe even if the backed-up system is
  compromised?
  
  I did delve into bacula but decided it was overkill for just a few
  systems.
  
  I use amanda but it might be overkill for you as well. The initial
  learning curve is a bit steep, but then it is reliable and rather easy to
  add new systems.
  
  What about using duplicity? And that dupinanny-helper-script.
 
 Dirvish sounds something like bacula in that it uses hard links, but it is
 also much simpler.  To restore, you just rsync the file/files/everything back
 as needed.  It can be automated (passwordless logins using certs) and
 basically just works (and has for quite a few years now!).
 
 BillK
 
 
 
 
 *  app-backup/dirvish
   Latest version available: 1.2.1
   Latest version installed: 1.2.1
   Size of downloaded files: 47 kB
   Homepage:http://www.dirvish.org/
   Description: Dirvish is a fast, disk based, rotating network
 backup system.
   License: OSL-2.0

What file system are you using with Dirvish and how much space compared to the 
source fs is it using?
-- 
Regards,
Mick




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Grant
 How far would I have to open my systems in order for backuppc to
 function?

 You have to grant root rsync access to the backuppc user on the server.

Isn't that a gaping security hole?  I think this amounts to granting
the backup server root read access (and write access if you want to
restore) on each client?

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Neil Bothwick
On Sun, 30 Jun 2013 13:12:29 -0700, Grant wrote:

  You have to grant root rsync access to the backuppc user on the
  server.  
 
 Isn't that a gaping security hole?  I think this amounts to granting
 the backup server root read access (and write access if you want to
 restore) on each client?

How can you back up system files without root read access? You are granting
this to a specific user, one without a login shell, on the server.

You don't need to grant write access if you don't want to. BackupPC has
an option to restore to a tar or zip archive, which you can restore
manually.


-- 
Neil Bothwick

It's no use crying over spilt milk -- it only makes it salty for the cat.




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Grant
  You have to grant root rsync access to the backuppc user on the
  server.

 Isn't that a gaping security hole?  I think this amounts to granting
 the backup server root read access (and write access if you want to
 restore) on each client?

 How can you back up system files without root read access? You are granting
 this to a specific user, one without a login shell, on the server.

If the backup server is infiltrated, the infiltrator would have root
read access to each of the clients, correct?  If the clients push to
the backup server instead, their access on the server can be
restricted to the backup directory.

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread William Kenworthy
I used reiserfs3 (very good) and now btrfs (so-so, but getting better) -
stay away from anything ext* - they fall apart under the load, eventually
losing the lot ... the filesystem gets hammered when it's creating tons
of hard links.  From personal experience I have a very poor view of ext2
and ext3 ... less experience (and failures!) with ext4 though, as I avoid
ext* on principle now where I can.

The first copy takes the same space as the original; subsequent copies only
include changes (hard links to existing files use essentially zero space).
Over time it stabilises at ~2x the original size for full Gentoo systems
with regular updates (configurable; I keep dailies for +2 weeks and Sunday
backups for +6 months - dirvish-expire can run as a weekly cron job to cull
expired versions).  My current setup uses a manually run script (simple
bash) to pull the wanted directories from a number of VMs and a desktop.
I used to do it automatically, but until I stabilise my network changes
it's easier to run manually.
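
For reference, that retention policy maps onto dirvish expire rules along
these lines; this is only a sketch in dirvish's cron-style rule format, not a
complete or tested configuration:

  # fragment of a dirvish master.conf / vault default.conf (sketch)
  expire-default: +2 weeks
  expire-rule:
  #   MIN  HR   DOM  MON  DOW   expire
      *    *    *    *    sun   +6 months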

Development looks slow/old from their website, but the activity is
elsewhere.

BillK

* From the dirvish web site: "In other news, I've learned from the
director of the Oregon State University Open Source Lab that they will
be backing up their servers with dirvish. These servers are the primary
mirror sites for Mozilla, Kernel.org, Gentoo, Drupal, and other major
open source projects."  - if it's good enough for them, it's good enough ...


On 01/07/13 02:08, Mick wrote:
 On Sunday 30 Jun 2013 12:05:05 William Kenworthy wrote:
 On 30/06/13 17:58, Stefan G. Weichinger wrote:
 Am 30.06.2013 01:42, schrieb Grant:
 Can anyone think of an automated method that remotely and securely
 backs up data from one system to another, preserves permissions and
 ownership, and keeps the backups safe even if the backed-up system is
 compromised?

 I did delve into bacula but decided it was overkill for just a few
 systems.

 I use amanda but it might be overkill for you as well. The initial
 learning curve is a bit steep, but then it is reliable and rather easy to
 add new systems.

 What about using duplicity? And that dupinanny-helper-script.

 Dirvish sounds something like bacula in that it uses hard links, but it is
 also much simpler.  To restore, you just rsync the file/files/everything back
 as needed.  It can be automated (passwordless logins using certs) and
 basically just works (and has for quite a few years now!).

 BillK




 *  app-backup/dirvish
   Latest version available: 1.2.1
   Latest version installed: 1.2.1
   Size of downloaded files: 47 kB
   Homepage:http://www.dirvish.org/
   Description: Dirvish is a fast, disk based, rotating network
 backup system.
   License: OSL-2.0
 
 What file system are you using with Dirvish and how much space compared to
 the source fs is it using?
 




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Neil Bothwick
On Sun, 30 Jun 2013 14:36:14 -0700, Grant wrote:

  Isn't that a gaping security hole?  I think this amounts to granting
  the backup server root read access (and write access if you want to
  restore) on each client?  
 
  How can you back up system files without root read access? You are
  granting this to a specific user, one without a login shell, on the
  server.
 
 If the backup server is infiltrated, the infiltrator would have root
 read access to each of the clients, correct?  If the clients push to
 the backup server instead, their access on the server can be
 restricted to the backup directory.

Yes, but with push you have to secure each machine whereas with pull
backups it's only the server to secure. And you'd still need to grant
access to the server from the clients, which could be escalated. With
backuppc, the server does not need to be accessible from the Internet at
all, all requests are outgoing. If the server machine serves other
purposes and needs to be net-accessible, run the backup server in a
chroot or VM.


-- 
Neil Bothwick

Religious error: (A)tone, (R)epent, (I)mmolate?




Re: [gentoo-user] {OT} backups... still backups....

2013-06-30 Thread Joseph

On 06/29/13 16:42, Grant wrote:

Setting up remote, automated, secure backups is the most difficult and
time-consuming Gentoo project I've undertaken.

Right now I'm pushing data from each of my systems to a backup server
via rdiff-backup.  The main problem with this is that if a system is
compromised, its backup is also vulnerable.  Also, you can't restrict
rdiff-backup to a particular directory in authorized_keys like you can
with rsync, and rdiff-backup isn't very good over the internet (I've
had trouble on sub-optimal connections); it's recommended on the
mailing list to use rdiff-backup either before or after rsync'ing over
the internet.

We've discussed this vulnerability here before and it was suggested
that I use hard links to version the rdiff-backup repository on the
backup server in case it's tampered with.  I've been studying hard
links, cp -al, rsnapshot (which uses rsync and hard links), and rsync
--link-dest (which uses hard links) but I can't figure out how that
would work without the inevitable duplication of data on a large
scale.

Can anyone think of an automated method that remotely and securely
backs up data from one system to another, preserves permissions and
ownership, and keeps the backups safe even if the backed-up system is
compromised?

I did delve into bacula but decided it was overkill for just a few systems.

- Grant


You did not tell us what you are trying to back up: the entire system or just
particular files.
Are you afraid of updates or data loss?

I have two machines in a remote location as well.  So I usually upgrade my
local machine first, wait one week, and if there are no surprises I upgrade
the remote main server first.  If everything goes OK (no surprises and/or
complaints), I upgrade the remote backup machine.


I run vpn so I just use rsync over vpn to make an incremental backup daily 
(Mon. to Fri.).

--
Joseph



[gentoo-user] {OT} backups... still backups....

2013-06-29 Thread Grant
Setting up remote, automated, secure backups is the most difficult and
time-consuming Gentoo project I've undertaken.

Right now I'm pushing data from each of my systems to a backup server
via rdiff-backup.  The main problem with this is that if a system is
compromised, its backup is also vulnerable.  Also, you can't restrict
rdiff-backup to a particular directory in authorized_keys like you can
with rsync, and rdiff-backup isn't very good over the internet (I've
had trouble on sub-optimal connections); it's recommended on the
mailing list to use rdiff-backup either before or after rsync'ing over
the internet.

We've discussed this vulnerability here before and it was suggested
that I use hard links to version the rdiff-backup repository on the
backup server in case it's tampered with.  I've been studying hard
links, cp -al, rsnapshot (which uses rsync and hard links), and rsync
--link-dest (which uses hard links) but I can't figure out how that
would work without the inevitable duplication of data on a large
scale.
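
For illustration, the --link-dest pattern usually suggested looks roughly like
this on the backup server (paths invented); files unchanged since the previous
snapshot become hard links into it, so only changed files take new space:

  # One dated snapshot per run, hard-linked against the previous one (sketch).
  TODAY=$(date +%F)
  rsync -a --delete \
      --link-dest=/srv/snapshots/client1/latest \
      /srv/incoming/client1/ "/srv/snapshots/client1/$TODAY/"
  ln -sfn "$TODAY" /srv/snapshots/client1/latest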

Can anyone think of an automated method that remotely and securely
backs up data from one system to another, preserves permissions and
ownership, and keeps the backups safe even if the backed-up system is
compromised?

I did delve into bacula but decided it was overkill for just a few systems.

- Grant