Re: [gentoo-user] {OT} backups... still backups....
On Sat, 29 Jun 2013 16:42:33 -0700, Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? app-backup/backuppc It uses hard links to save space: all versions of all files are kept for your entire history, but unchanged files are stored only once, even if present on multiple targets. -- Neil Bothwick Time for a diet! -- [NO FLABBIER].
Re: [gentoo-user] {OT} backups... still backups....
Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? app-backup/backuppc It uses hard links to save space: all versions of all files are kept for your entire history, but unchanged files are stored only once, even if present on multiple targets. Thank you for the recommendation. How far would I have to open my systems in order for backuppc to function? Can the web server reside on a different system than the backup server? - Grant
Re: [gentoo-user] {OT} backups... still backups....
On Sun, 30 Jun 2013 01:11:35 -0700, Grant wrote: app-backup/backuppc It uses hard links to save space: all versions of all files are kept for your entire history, but unchanged files are stored only once, even if present on multiple targets. Thank you for the recommendation. How far would I have to open my systems in order for backuppc to function? You have to grant root rsync access to the backuppc user on the server. Can the web server reside on a different system than the backup server? I haven't tried that but I don't see why not. -- Neil Bothwick ...Advert for restaurant: Exotic foods for all occasions. Police balls a speciality.
Re: [gentoo-user] {OT} backups... still backups....
On 30.06.2013 01:42, Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? I did delve into bacula but decided it was overkill for just a few systems. I use amanda but it might be overkill for you as well. The initial learning curve is a bit steep, but then it is reliable and rather easy to add new systems. What about using duplicity? And that dupinanny helper script.
Re: [gentoo-user] {OT} backups... still backups....
On 30/06/13 17:58, Stefan G. Weichinger wrote: On 30.06.2013 01:42, Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? I did delve into bacula but decided it was overkill for just a few systems. I use amanda but it might be overkill for you as well. The initial learning curve is a bit steep, but then it is reliable and rather easy to add new systems. What about using duplicity? And that dupinanny helper script. Sounds something like bacula in that it uses hard links, but also is much simpler. To restore, you just rsync the file/files/everything back as needed. Can be automated (passwordless logins using certs) and basically just works (for quite a few years now!). BillK * app-backup/dirvish Latest version available: 1.2.1 Latest version installed: 1.2.1 Size of downloaded files: 47 kB Homepage: http://www.dirvish.org/ Description: Dirvish is a fast, disk based, rotating network backup system. License: OSL-2.0
Re: [gentoo-user] {OT} backups... still backups....
On Sun, 30 Jun 2013 01:11:35 -0700 Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? app-backup/backuppc It uses hard links to save space: all versions of all files are kept for your entire history, but unchanged files are stored only once, even if present on multiple targets. Thank you for the recommendation. How far would I have to open my systems in order for backuppc to function? Can the web server reside on a different system than the backup server? - Grant I've been using backuppc since 2007 and am very happy with it.
[gentoo-user] configuring NFS server to handle client reboots
Hi, I am using Gentoo Linux as an NFS server while doing development on a Blackfin embedded system. The Blackfin is running uClinux and the development host is Gentoo testing version (~amd64). The NFS server version is 1.2.7. Here is the problem that I am observing: I start from a known state, restarting the NFS server on Gentoo and power cycling the Blackfin. Once the Blackfin system has booted, I mount a folder on the Blackfin. This succeeds without any problem. Then I power cycle the Blackfin again and after it is up, I try to mount that same folder again. But this time, the mount command hangs for a minute or so and eventually fails with a timeout error. Here is the mount command: mount -o nolock,tcp 10.2.2.254:/romfs_2011R1 /mnt When I look into the NFS server's system log, I can see that the mount was authenticated: Jun 30 13:54:53 bonsai rpc.mountd[1597]: authenticated mount request from 10.2.2.220:911 for /home/ta/uclinux_2011R1/db1/uclinux-dist/romfs (/home/ta/uclinux_2011R1) I have captured what is happening using wireshark, and what I am seeing is that the mount request succeeds, but the client initiates another TCP connection (SYN) and this SYN is never responded to by the server. I know that an sm-notify program is used on both NFS clients/servers to notify of reboots, but this embedded system does not have the sm-notify capability. And I would rather not try to port it to uClinux. So, my question is: can I somehow configure the NFS server to allow mounting the same directory repeatedly whenever the NFS client reboots? -- Timur
Re: [gentoo-user] configuring NFS server to handle client reboots
On 06/30/13 20:10, Timur Aydin wrote: Here is the mount command: mount -o nolock,tcp 10.2.2.254:/romfs_2011R1 /mnt BTW, when I use UDP instead of TCP, then the mount works after repeated reboots. But I would rather use TCP, because based on past experiments I did, TCP mounted NFS shares have a higher bandwidth. Also, when using TCP, if I restart the NFS server on the gentoo host, /etc/init.d/nfs restart Then I can mount the NFS share on the Blackfin repeatedly. This all tells me that the NFS server is preventing subsequent TCP mounts unless the existing mount is unmounted first. -- Timur
Re: [gentoo-user] configuring NFS server to handle client reboots
The server configuration is as follows: === bonsai ~ # cat /etc/conf.d/nfs # /etc/conf.d/nfs # If you wish to set the port numbers for lockd, # please see /etc/sysctl.conf # Optional services to include in default `/etc/init.d/nfs start` # For NFSv4 users, you'll want to add rpc.idmapd here. NFS_NEEDED_SERVICES=rpc.idmapd # Number of servers to be started up by default OPTS_RPC_NFSD=8 # Options to pass to rpc.mountd # ex. OPTS_RPC_MOUNTD=-p 32767 OPTS_RPC_MOUNTD= # Options to pass to rpc.statd # ex. OPTS_RPC_STATD=-p 32765 -o 32766 OPTS_RPC_STATD= # Options to pass to rpc.idmapd OPTS_RPC_IDMAPD= # Options to pass to rpc.gssd OPTS_RPC_GSSD= # Options to pass to rpc.svcgssd OPTS_RPC_SVCGSSD= # Options to pass to rpc.rquotad (requires sys-fs/quota) OPTS_RPC_RQUOTAD= # Timeout (in seconds) for exportfs EXPORTFS_TIMEOUT=30 # Options to set in the nfsd filesystem (/proc/fs/nfsd/). # Format is option=value. Multiple options are allowed. #OPTS_NFSD=nfsv4leasetime=30 max_block_size=4096 === bonsai ~ # cat /etc/exports /home/ta/uclinux_2011R1 *(rw,sync,all_squash,anonuid=1000,anongid=100,no_subtree_check) /home/ta/tmp/flash_nfsroot *(rw,sync,all_squash,anonuid=1000,anongid=100,no_subtree_check) -- Timur
Re: [gentoo-user] {OT} backups... still backups....
On Sunday 30 Jun 2013 12:05:05 William Kenworthy wrote: On 30/06/13 17:58, Stefan G. Weichinger wrote: On 30.06.2013 01:42, Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? I did delve into bacula but decided it was overkill for just a few systems. I use amanda but it might be overkill for you as well. The initial learning curve is a bit steep, but then it is reliable and rather easy to add new systems. What about using duplicity? And that dupinanny helper script. Sounds something like bacula in that it uses hard links, but also is much simpler. To restore, you just rsync the file/files/everything back as needed. Can be automated (passwordless logins using certs) and basically just works (for quite a few years now!). BillK * app-backup/dirvish Latest version available: 1.2.1 Latest version installed: 1.2.1 Size of downloaded files: 47 kB Homepage: http://www.dirvish.org/ Description: Dirvish is a fast, disk based, rotating network backup system. License: OSL-2.0 What file system are you using with Dirvish and how much space compared to the source fs is it using? -- Regards, Mick
[gentoo-user] Re: SMplayer: Update notification
On 30/06/2013 09:19, Mick wrote: On Saturday 29 Jun 2013 17:45:31 meino.cra...@gmx.de wrote: Hi, I am using smplayer to play DVB-T, since Kaffeine gets stuck on some channels. This evening, smplayer notified me that a new version will be available. SMplayer is only able to know this by autonomously accessing the internet and its home site. I don't like this. How can I switch this off? Best regards, mcc In my ~/.config/smplayer/smplayer.ini, I have these lines: [update_checker] checked_date=@Variant(\0\0\0\xe\0%{\x99) last_known_version=0.8.5.5487 [smplayer] stable_version=0.8.5 check_for_new_version=true You may want to try setting this to 'check_for_new_version=false' and restart it. Hi, If this is bothering people, please file a bug with this information and we can probably have it included by default. I know we disable update checkers for a number of other packages too. Best regards, Michael
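If you would rather not edit the file by hand, the same flag can be flipped with a one-line sed (a sketch; the path and key name are taken from the ini snippet quoted in this thread):

```shell
# Turn off smplayer's update check in its config file.
# Path and key name are as quoted earlier in this thread.
ini="$HOME/.config/smplayer/smplayer.ini"
sed -i 's/^check_for_new_version=true$/check_for_new_version=false/' "$ini"
```

Run it while smplayer is closed, since the player rewrites its config on exit.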
Re: [gentoo-user] {OT} backups... still backups....
How far would I have to open my systems in order for backuppc to function? You have to grant root rsync access to the backuppc user on the server. Isn't that a gaping security hole? I think this amounts to granting the backup server root read access (and write access if you want to restore) on each client? - Grant
Re: [gentoo-user] {OT} backups... still backups....
On Sun, 30 Jun 2013 13:12:29 -0700, Grant wrote: You have to grant root rsync access to the backuppc user on the server. Isn't that a gaping security hole? I think this amounts to granting the backup server root read access (and write access if you want to restore) on each client? How can you back up system files without root read access? You are granting this to a specific user, one without a login shell, on the server. You don't need to grant write access if you don't want to. BackupPC has an option to restore to a tar or zip archive, which you can manually restore. -- Neil Bothwick It's no use crying over spilt milk -- it only makes it salty for the cat.
Re: [gentoo-user] {OT} backups... still backups....
You have to grant root rsync access to the backuppc user on the server. Isn't that a gaping security hole? I think this amounts to granting the backup server root read access (and write access if you want to restore) on each client? How can you back up system files without root read access? You are granting this to a specific user, one without a login shell, on the server. If the backup server is infiltrated, the infiltrator would have root read access to each of the clients, correct? If the clients push to the backup server instead, their access on the server can be restricted to the backup directory. - Grant
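The push-side restriction Grant describes is commonly done with the rrsync helper script that ships with rsync, which confines an incoming rsync to a single directory. A sketch, assuming a server account named backup and a per-client directory /backups/clienthost (both illustrative names):

```shell
# One line in ~backup/.ssh/authorized_keys on the backup server.
# The rrsync path varies by distro (often /usr/share/rsync/rrsync or
# /usr/share/doc/rsync/scripts/rrsync); account, directory, and key
# comment are example values.
command="/usr/share/rsync/rrsync /backups/clienthost",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... client-backup-key
```

With this in place, the client's key can run rsync against /backups/clienthost and nothing else, so a compromised client cannot read other clients' backups on the server.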
Re: [gentoo-user] {OT} backups... still backups....
I used reiserfs3 (very good) and now btrfs (so-so, but getting better) - stay away from anything ext* - they fall apart under the load, eventually losing the lot ... the filesystem gets hammered when it's creating tons of hardlinks. From personal experience I have a very poor view of ext2 and ext3 ... less experience (and failures!) with ext4 though, as I avoid ext* on principle now where I can. The first copy takes the same space as the original; subsequent ones only include changes (as hard links for existing files use zero space). Over time, it stabilises at ~2x the original size for full gentoo systems with regular updates (configurable, I keep +2weeks daily, and +6months Sunday backups - dirvish-expire can be a weekly cron job to cull expired versions). My current setup uses a manually run script (simple bash) to pull the wanted directories from a number of vm's and a desktop. I used to do it automatically, but until I stabilise my network changes it's easier manually. Development looks slow/old from their website, but the activity is elsewhere. BillK * from the dirvish web site: In other news, I've learned from the director of the Oregon State University Open Source Lab that they will be backing up their servers with dirvish. These servers are the primary mirror sites for Mozilla, Kernel.org, Gentoo, Drupal, and other major open source projects. - if it's good enough for them, it's good enough ... On 01/07/13 02:08, Mick wrote: On Sunday 30 Jun 2013 12:05:05 William Kenworthy wrote: On 30/06/13 17:58, Stefan G. Weichinger wrote: On 30.06.2013 01:42, Grant wrote: Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? I did delve into bacula but decided it was overkill for just a few systems. I use amanda but it might be overkill for you as well. 
The initial learning curve is a bit steep, but then it is reliable and rather easy to add new systems. What about using duplicity? And that dupinanny helper script. Sounds something like bacula in that it uses hard links, but also is much simpler. To restore, you just rsync the file/files/everything back as needed. Can be automated (passwordless logins using certs) and basically just works (for quite a few years now!). BillK * app-backup/dirvish Latest version available: 1.2.1 Latest version installed: 1.2.1 Size of downloaded files: 47 kB Homepage: http://www.dirvish.org/ Description: Dirvish is a fast, disk based, rotating network backup system. License: OSL-2.0 What file system are you using with Dirvish and how much space compared to the source fs is it using?
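BillK's retention scheme (daily snapshots kept two weeks, Sunday snapshots kept six months, culled by dirvish-expire) maps onto a single scheduled job. A sketch, with an illustrative schedule rather than his actual crontab:

```shell
# /etc/crontab - run dirvish-expire weekly to delete snapshots whose
# expiry date (set by the keep rules in the dirvish config) has passed.
# Day and time are example values.
30 4 * * 0  root  /usr/sbin/dirvish-expire
```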
Re: [gentoo-user] {OT} backups... still backups....
On Sun, 30 Jun 2013 14:36:14 -0700, Grant wrote: Isn't that a gaping security hole? I think this amounts to granting the backup server root read access (and write access if you want to restore) on each client? How can you back up system files without root read access? You are granting this to a specific user, one without a login shell, on the server. If the backup server is infiltrated, the infiltrator would have root read access to each of the clients, correct? If the clients push to the backup server instead, their access on the server can be restricted to the backup directory. Yes, but with push you have to secure each machine, whereas with pull backups it's only the server to secure. And you'd still need to grant access to the server from the clients, which could be escalated. With backuppc, the server does not need to be accessible from the Internet at all; all requests are outgoing. If the server machine serves other purposes and needs to be net-accessible, run the backup server in a chroot or VM. -- Neil Bothwick Religious error: (A)tone, (R)epent, (I)mmolate?
Re: [gentoo-user] configuring NFS server to handle client reboots
On Sun, Jun 30, 2013 at 08:10:53PM +0300, Timur Aydin wrote I know that a sm-notify program is used on both NFS clients/servers to notify reboots, but this embedded system does not have the sm-notify capability. And I would rather not try to port it to uClinux. So, my question is, can I somehow configure the NFS server to allow mounting the same directory repeatedly whenever the NFS client reboots? A possible quick-n-dirty approach is to run a script that first does a umount of the share, and then does the mount. Ignore error messages from the umount attempt. -- Walter Dnes waltd...@waltdnes.org I don't run desktop environments; I run useful applications
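A minimal sketch of Walter's suggestion, using the server address and export path quoted earlier in the thread (adjust to your setup). The umount's exit status is deliberately ignored because the stale mount may or may not still exist after a power cycle:

```shell
#!/bin/sh
# Remount the NFS share after a client power cycle: unmount first,
# ignoring errors if nothing was mounted, then mount fresh.
# Server, export, and mount point are the values from this thread.
SERVER=10.2.2.254
EXPORT=/romfs_2011R1
MNT=/mnt

umount "$MNT" 2>/dev/null || true
mount -o nolock,tcp "$SERVER:$EXPORT" "$MNT"
```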
Re: [gentoo-user] {OT} backups... still backups....
On 06/29/13 16:42, Grant wrote: Remote, automated, secure backups is the most difficult and time-consuming Gentoo project I've undertaken. Right now I'm pushing data from each of my systems to a backup server via rdiff-backup. The main problem with this is if a system is compromised its backup is also vulnerable. Also, you can't restrict rdiff-backup to a particular directory in authorized_keys like you can with rsync, and rdiff-backup isn't very good over the internet (I've had trouble on sub-optimal connections) and it's recommended on the mailing list to use rdiff-backup either before or after rsync'ing over the internet. We've discussed this vulnerability here before and it was suggested that I use hard links to version the rdiff-backup repository on the backup server in case it's tampered with. I've been studying hard links, cp -al, rsnapshot (which uses rsync and hard links), and rsync --link-dest (which uses hard links) but I can't figure out how that would work without the inevitable duplication of data on a large scale. Can anyone think of an automated method that remotely and securely backs up data from one system to another, preserves permissions and ownership, and keeps the backups safe even if the backed-up system is compromised? I did delve into bacula but decided it was overkill for just a few systems. - Grant You did not tell us what are you trying to backup; entire system or just particular files. Are you afraid of updates or data loss? I have two machine in remote location as well. So I usually upgrade my local machine first, wait one week and if there are no surprises I upgrade remote main server first. If everything goes OK (no surprises and/or complains), I upgrade remote backup machine. I run vpn so I just use rsync over vpn to make an incremental backup daily (Mon. to Fri.). -- Joseph
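The hard-link versioning Grant mentions (cp -al, rsnapshot, rsync --link-dest) avoids the large-scale duplication he worries about precisely because unchanged files are extra links, not extra copies. A minimal sketch of the idea, with illustrative paths and a made-up snapshot naming scheme:

```shell
#!/bin/sh
# Hard-link snapshot rotation, the technique underlying rsnapshot,
# dirvish, and rsync --link-dest: each snapshot looks like a full tree,
# but files unchanged since the previous snapshot share inodes with it
# and cost no additional space.

# snapshot SRC SNAPDIR NAME: create SNAPDIR/NAME from SRC, hard-linking
# against the most recent existing snapshot in SNAPDIR.
snapshot() {
    src=$1 snapdir=$2 name=$3
    mkdir -p "$snapdir"
    prev=$(ls "$snapdir" | tail -n 1)
    if [ -n "$prev" ]; then
        # Instant, near-zero-space copy of the previous snapshot.
        cp -al "$snapdir/$prev" "$snapdir/$name"
    else
        mkdir "$snapdir/$name"
    fi
    # rsync replaces (rather than rewrites in place) changed files,
    # which breaks their hard link while leaving unchanged files shared.
    rsync -a --delete "$src/" "$snapdir/$name/"
}

# Example (names illustrative): snapshot /home /backups/home 2013-06-30
```

Since the function only appends new snapshots and never rewrites old ones, tampering with the live source after a compromise cannot silently alter snapshots already taken, as long as the snapshot store itself is out of the attacker's reach.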