Re: Home made backup system

2019-12-26 Thread rhkramer
Thanks for the reply and the useful explanations (and for being upfront about 
the limits of your personal knowledge).  I will add one question / comment 
below:

On Thursday, December 26, 2019 10:23:54 AM Greg Wooledge wrote:
> For most people, it comes down to "when you can't write to the device
> any more, you throw it away and get another".

I guess that is the rub, and a question: I'll want to throw it away when I 
can't read from it anymore.  (But only after I somehow either copy all the 
still-good data off it, or confirm I have another copy that is still readable.)

(I once read a thread about long-term archival storage -- I think the medium 
was CDs at the time -- that suggested starting with more than one copy of the 
data to be archived and then making one or more additional copies every year, 
though I really don't remember the details.  IIRC, the objective was to keep 
the data intact for 100 years or something like that.)

I wonder whether failures typically start with a failure to write or a failure 
to read?  I suspect it depends on a lot of factors, e.g., the age of the 
written data -- I mean, under the wrong conditions (for someone looking for 
long-term backup / archival storage), something might be written successfully 
but might not be readable 20 years later.

Of course, I am unlikely to need my backups for more than a few months, so 
most of the above is probably moot. ;-)



Re: Home made backup system

2019-12-26 Thread Thomas Schmitt
Hi,

Greg Wooledge wrote:
> > > Remember, tar was designed for magnetic tapes,
> > > which are read sequentially.  It provides no way for a reader to learn
> > > that file xyz is at byte offset 31337 and that it should skip ahead to
> > > that point if it only wants that one file.

rhkra...@gmail.com wrote:
> > Just to confirm, I assume that is true ("no way to skip ahead to byte
> > 31337") even if the underlying media is a (somewhat random access) disk
> > instead of (serial access) tape?

It is about not knowing what byte address to skip to.
tar is simply a sequence of file containers: file header, data, next file
header, data, and so on.
What is lacking is a kind of directory that records where a particular file
begins.

There are archivers which have such a catalog. With a few more frills this
constitutes a filesystem.


> > In other words, I suspect it would be more reliable if it functioned a
> > little bit more like a WORM (Write Once, Read Many) type device

That would be CD-R, DVD-R, DVD+R, and BD-R media.


> > data is appended by  writing in previously unused locations
> > rather than deleting some data,

That's called multi-session.  It has other advantages beyond reducing the wear
on the media.  The typical filesystem for multi-session on write-once media is
ISO 9660.
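
For illustration, a minimal multi-session sketch using growisofs from
dvd+rw-tools (the device path and directories are placeholders, not part of
the original mail):

  # first session: create a new ISO 9660 filesystem on the blank disc
  growisofs -Z /dev/sr0 -R -J /home/myuser/backup-2019-12

  # later sessions: append more data without touching what is already there
  growisofs -M /dev/sr0 -R -J /home/myuser/backup-2020-01

Each -M run adds a new session; the earlier sessions stay on the disc, so the
older files remain readable.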


Greg Wooledge wrote:
> "Write Once, Read Many" is an entirely different data storage paradigm.
> Think of a large dusty vault full of optical media.

One can destroy them physically: put them stack-wise into the oven at 200 C /
400 F for 10 minutes, and wear robust gloves when bending and breaking the hot
media.  Single discs can be destroyed with the help of a lighter.


> Very expensive, and very niche.

One can buy a 25 GB BD-R for less than a dollar, a 50 GB one for less than two
dollars.  The usefulness depends on the storage constraints of the original
and the backup.


> You can't reuse the medium, nor do you WANT to

If you want to re-use media, there are CD-RW, DVD-RW, DVD+RW, DVD-RAM, and
BD-RE.  Multi-session is possible on them with ISO 9660 filesystems.


Have a nice day :)

Thomas



Re: Home made backup system

2019-12-26 Thread Charles Curley
On Thu, 26 Dec 2019 09:51:59 -0500
rhkra...@gmail.com wrote:

> Again, I assume (I know what assume does) that "USB mass-storage
> device that acts like a hard drive" is (or might be) a pen drive type
> of device.  I've had a lot of bad luck (well, more bad luck than I'd
> like) with that kind of device, and I suspect that the problem is
> more likely to occur when parts of the device are erased to allow
> something new to be written to it.

When I first started working with the technology behind what we now
call flash drives, back in the late Pliocene, their capacities were
measured in bits, and I think I worked with 256 bit and 512 bit
devices. At that time, you had to read several bytes worth, modify the
relevant bits, and write out the several bytes worth to make a change,
much like changing a sector on a floppy disk or hard drive.

As you conjectured, device life was measured in write cycles, usually
on the order of tens or hundreds of write cycles.

Today all of that is still more or less true, except the capacities
and lives of the devices are greatly extended.

And one other change: When I was working with these things, the host
computer's operating system device driver had to take care of all that,
including using different "sectors" to spread out the wear and avert
device failure. Today, all of that is "under the hood" of the flash
drive, completely invisible to the host computer.

This is similar to the evolution of the hard drive. Way back when five
megabytes was a lot of hard drive (I started working with the Seagate
ST-506), the operating system driver had to worry about encoding; tracks,
sectors, and heads; error correction; and detecting and re-mapping bad
sectors. SCSI, and later IDE, moved all that onto the drive
itself, and all the OS sees is a linear expanse of sectors. 

So, much of what you conjectured indeed goes on, but on the flash device
itself, at a level completely invisible to the host operating
system. And it almost certainly does it better than most of us here
could do it.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Thu, Dec 26, 2019 at 09:51:59AM -0500, rhkra...@gmail.com wrote:
> Just to confirm, I assume that is true ("no way to skip ahead to byte 31337") 
> even if the underlying media is a (somewhat random access) disk instead of 
> (serial access) tape?

Correct.  There's no central index inside the tar archive that says
"file xyz begins at byte 12345".  This is by design, so that you can
append new content to an existing tar archive.  When you append a new
file to an existing archive, you simply drop a new metadata header
record, and then the new content.  So, the entire archive is a long
string of

header file header file header file 

The only way to find a file is to read the entire thing from the beginning
until you find the file you want.
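
For illustration (the archive and member names are invented), even on a disk
the whole archive has to be scanned:

  # listing the contents reads every header from the start to the end
  tar -tvf backup.tar

  # extracting one member still scans from the first header until the
  # matching one is found -- there is no index to seek to
  tar -xf backup.tar home/myuser/notes.txt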

> Again, I assume (I know what assume does) that "USB mass-storage device that 
> acts like a hard drive" is (or might be) a pen drive type of device.

Yes.

> I've had 
> a lot of bad luck (well, more bad luck than I'd like) with that kind of 
> device, and I suspect that the problem is more likely to occur when parts of 
> the device are erased to allow something new to be written to it.
> 
> In other words, I suspect it would be more reliable if it functioned a little 
> bit more like a WORM (Write Once, Read Many) type device

"Write Once, Read Many" is an entirely different data storage paradigm.
Think of a large dusty vault full of optical media.  Once you've backed up
your full database (or whatever) to one of these media, it goes into
the vault.  You can't reuse the medium, nor do you WANT to, for legal
reasons.  You've chosen this technology specifically because it CANNOT
be altered once written, and therefore gives you some sort of debatably
reliable legal trail of evidence.  "On May 7th, this is what we had."

Very expensive, and very niche.

> -- not that the whole 
> device necessarily has to be written in one go, but more that, for highest 
> reliability, data is appended by writing in previously unused locations 
> rather than deleting some data, and then writing new data in previously used 
> and erased locations.

I am not an expert in solid state storage, so I won't even try to
address the questions about long-term reliability of various USB mass
storage devices.

For most people, it comes down to "when you can't write to the device
any more, you throw it away and get another".

> I don't know whether rsync, in the normal course of events will delete 
> (erase) 
> and write data in previously used locations, but it would be helpful to have 
> comments, with respect to:
> 
>* whether rsync will rewrite to previously used locations, [...]

Rsync does not operate at the disk sector level.  It operates at the
file level.  If you've modified a file since the last backup, then rsync
knows it needs to modify the backed-up copy of the file.  It will use
various algorithms to decide whether it should just copy the entire
file from the source, or try to preserve pieces of the file that are
already on the destination.

The main goal there is to reduce the transmission of bytes from a
source host to a destination host, because one of rsync's main use
cases is backing up files across a network.

Since you're focusing on the case where there's no network involved,
a lot of that work is just not relevant.  In the end, as far as I
understand it, rsync will create a new file on the destination, which
contains the new content (however it gets the new content).  Then the
older copy of the file will be deleted.
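
For what it's worth, rsync has switches that control that behaviour; a small
sketch with invented paths:

  # default: the new content goes into a temporary file in the destination
  # directory, which is then renamed over the old copy
  rsync -a /home/myuser/ /mnt/backup/home/

  # --inplace overwrites the existing destination file directly instead
  rsync -a --inplace /home/myuser/ /mnt/backup/home/

  # -W / --whole-file skips the delta algorithm entirely (already the
  # default when no network is involved)
  rsync -aW /home/myuser/ /mnt/backup/home/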

How the storage device's controller works (how it decides which parts
of the device get the new file, how the part where the old file used to
be get recycled, etc.) is outside of rsync's purview, and definitely
outside of *my* personal knowledge.



Re: Home made backup system

2019-12-26 Thread rhkramer
Thanks for addressing this -- I have a few questions I want to ask for my own 
edification / clarification:

On Thursday, December 26, 2019 08:18:12 AM Greg Wooledge wrote:
> The drawback of using tar is that it creates an *archive* of files -- that
> is, a single file (or byte stream) that contains a mashup of metadata and
> file contents.  If you want to extract one file from this archive, you
> have to read the entire archive from the beginning until you find the
> file you're looking for.  Remember, tar was designed for magnetic tapes,
> which are read sequentially.  It provides no way for a reader to learn
> that file xyz is at byte offset 31337 and that it should skip ahead to
> that point if it only wants that one file.

Just to confirm, I assume that is true ("no way to skip ahead to byte 31337") 
even if the underlying media is a (somewhat random access) disk instead of 
(serial access) tape?
 
> For most people, a backup using rsync to a removable *random access*
> medium (an external hard drive, or USB mass-storage device that acts
> like a hard drive) is a much better fit for their needs.

Again, I assume (I know what assume does) that "USB mass-storage device that 
acts like a hard drive" is (or might be) a pen drive type of device.  I've had 
a lot of bad luck (well, more bad luck than I'd like) with that kind of 
device, and I suspect that the problem is more likely to occur when parts of 
the device are erased to allow something new to be written to it.

In other words, I suspect it would be more reliable if it functioned a little 
bit more like a WORM (Write Once, Read Many) type device -- not that the whole 
device necessarily has to be written in one go, but more that, for highest 
reliability, data is appended by writing in previously unused locations 
rather than deleting some data, and then writing new data in previously used 
and erased locations.

I once looked into the rsync type of thing (for example, I read the author's 
thesis back in the day) but I don't remember all I'd like to remember.  
(Including, I don't remember if he used the term rsync in the thesis, maybe it 
was rcopy or something.)

I don't know whether rsync, in the normal course of events will delete (erase) 
and write data in previously used locations, but it would be helpful to have 
comments, with respect to:

   * whether rsync will rewrite to previously used locations (I think it 
does -- I mean, I think under certain circumstances (maybe based on certain 
options), e.g., if a file is deleted from the "working space", that file is (or 
can be) deleted from the rsynced backup, and then that space can be reused)

   * if when you say a "USB mass-storage device that acts like a hard drive" 
you refer to (or include) a pendrive type device

   * your experience as to the reliability of a pendrive type device, either 
in a WORM type usage (as described above) or when rewriting over previously 
used areas

Thanks!





Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Thu, Dec 26, 2019 at 08:18:12AM -0500, Greg Wooledge wrote:
> On Wed, Dec 25, 2019 at 11:07:22AM -0800, David Christensen wrote:
> > > I was amazed that nobody yet considered tar.

Sorry... that sentence was actually written by Franco Martelli.  I
replied to the wrong email.



Re: Home made backup system

2019-12-26 Thread Greg Wooledge
On Wed, Dec 25, 2019 at 11:07:22AM -0800, David Christensen wrote:
> > I was amazed that nobody yet considered tar.

The best use case for tar is creating a full backup to removable media
(magnetic tapes are literally what it was designed for -- the "t" stands
for tape).

The drawback of using tar is that it creates an *archive* of files -- that
is, a single file (or byte stream) that contains a mashup of metadata and
file contents.  If you want to extract one file from this archive, you
have to read the entire archive from the beginning until you find the
file you're looking for.  Remember, tar was designed for magnetic tapes,
which are read sequentially.  It provides no way for a reader to learn
that file xyz is at byte offset 31337 and that it should skip ahead to
that point if it only wants that one file.

Tar also provides no means to *update* the copy of a file contained
within an existing archive.  I.e. you can't do any kind of incremental
or differential backup with it -- not realistically.  The closest you
could come would be appending a series of binary patches to the end
of the existing archive.

(Appending to an archive only works if the archive is uncompressed, by
the way.)
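
A quick illustration of both points (file names are made up):

  # appending just writes another header + data pair at the end,
  # but only on an uncompressed archive
  tar -cf full.tar /home/myuser
  tar -rf full.tar /etc/hosts

  # the same on a compressed archive fails: GNU tar cannot update
  # compressed archives
  tar -czf full.tar.gz /home/myuser
  tar -rzf full.tar.gz /etc/hosts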

It's certainly not *wrong* to do backups using tar, but for a lot of
people, it's not the strategy they want to employ.

For most people, a backup using rsync to a removable *random access*
medium (an external hard drive, or USB mass-storage device that acts
like a hard drive) is a much better fit for their needs.



Re: Home made backup system

2019-12-25 Thread David Christensen

On 2019-12-25 08:42, Franco Martelli wrote:

On 18/12/19 at 18:02, rhkra...@gmail.com wrote:

Aside / Admission: I don't backup all that I should and as often as I should,
so I'm looking for ways to improve.  One thought I have is to write my own
backup "system" and use it, and I've thought about that a little, and provide
some of my thoughts below.
...


I was amazed that nobody yet considered tar. My backup with tar is based
on a script that invokes tar, reading two hidden files, .tarExclude and
.tarInclude:

~# cat .tarExclude
/home/myuser/.cache
/home/myuser/.kde
/home/myuser/.mozilla/firefox/.default
/home/myuser/VirtualBox\ VMs
/home/myuser/Shared
/home/myuser/Sources
/home/myuser/Video
/home/myuser/Scaricati
/home/myuser/Modelli
/home/myuser/Documenti
/home/myuser/Pubblici
/home/myuser/Desktop
/home/myuser/Immagini
/home/myuser/Musica
/home/myuser/linux-source-4.19

~# cat .tarInclude
/home/myuser
/root/
/etc/
/usr/local/bin/
/usr/local/etc/
/boot/grub/grub.cfg
/boot/config-4.19.67

then the script invokes the tar command this way:

/bin/tar -X /root/.tarExclude -zcpvf /tmp/$f -T /root/.tarInclude

The $f variable is the filename; the archive will be moved to a USB stick once
tested with the command:

/bin/tar ztf /tmp/$f >/dev/null

One thing you must take care of is that the -X switch must come before
the -T switch, otherwise the tar command fails.
HTH



tar(1) is very flexible:

1.  I tend to formulate my archive jobs by host and (sub-)directory -- 
e.g. tinkywinky:/home, cvs:/var/local/cvs, etc..


2.  Within each archive job, I include everything by default and then 
specify what to exclude via the various --exclude* options.


3.  For a given host and directory, I may have multiple archive jobs 
that are run at different frequencies (daily, weekly, monthly, etc.). 
Frequent jobs exclude the most files and infrequent jobs exclude few (or 
none).


4.  The --exclude-tag* options have the advantage (and risk) that the 
administrator(s) and user(s) can maintain archive exclusion tag files 
(e.g. ".noarchive") throughout the live filesystem as archiving 
requirements change over time.  This reduces or eliminates the need for 
the administrator to make changes to the archiving scripts, 
configuration files, and job files.


5.  All that said, I do have one VPS whose archive job is inverted by 
design -- based at root, exclude everything by default, and specify what 
to include via the --files-from option.
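
A rough sketch of both styles described above (the paths, dates, and the
".noarchive" tag name here are illustrative examples only):

  # include-everything style: archive /home but skip any directory the
  # users have marked with a .noarchive tag file
  tar --exclude-tag-all=.noarchive -czpf /backup/home-$(date +%Y%m%d).tar.gz /home

  # inverted style: archive only the paths listed in a file
  tar -czpf /backup/vps-$(date +%Y%m%d).tar.gz --files-from=/root/include.list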



David



Re: Home made backup system

2019-12-25 Thread Franco Martelli
On 18/12/19 at 18:02, rhkra...@gmail.com wrote:
> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve.  One thought I have is to write my own 
> backup "system" and use it, and I've thought about that a little, and provide 
> some of my thoughts below.
> ...

I was amazed that nobody yet considered tar. My backup with tar is based
on a script that invokes tar, reading two hidden files, .tarExclude and
.tarInclude:

~# cat .tarExclude
/home/myuser/.cache
/home/myuser/.kde
/home/myuser/.mozilla/firefox/.default
/home/myuser/VirtualBox\ VMs
/home/myuser/Shared
/home/myuser/Sources
/home/myuser/Video
/home/myuser/Scaricati
/home/myuser/Modelli
/home/myuser/Documenti
/home/myuser/Pubblici
/home/myuser/Desktop
/home/myuser/Immagini
/home/myuser/Musica
/home/myuser/linux-source-4.19

~# cat .tarInclude
/home/myuser
/root/
/etc/
/usr/local/bin/
/usr/local/etc/
/boot/grub/grub.cfg
/boot/config-4.19.67

then the script invokes the tar command this way:

/bin/tar -X /root/.tarExclude -zcpvf /tmp/$f -T /root/.tarInclude

The $f variable is the filename; the archive will be moved to a USB stick once
tested with the command:

/bin/tar ztf /tmp/$f >/dev/null

One thing you must take care of is that the -X switch must come before
the -T switch, otherwise the tar command fails.
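
For illustration, a complete wrapper along those lines might look like this
(the date-based filename and the USB mount point are invented, not the
original script):

  #!/bin/sh
  f=backup-$(date +%Y%m%d).tar.gz

  # -X (exclude list) must come before -T (include list)
  /bin/tar -X /root/.tarExclude -zcpvf /tmp/$f -T /root/.tarInclude

  # test the archive, then move it to the USB stick
  /bin/tar ztf /tmp/$f >/dev/null && mv /tmp/$f /media/usbstick/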
HTH

Merry Xmas

-- 
Franco Martelli



Re: Home made backup system

2019-12-23 Thread Celejar
On Mon, 23 Dec 2019 20:11:07 -0600
Nate Bargmann  wrote:

> Thanks for the tips!

Sure! Let us know if you hack together anything interesting.

Celejar



Re: Home made backup system

2019-12-23 Thread Nate Bargmann
Thanks for the tips!

- Nate

-- 

"The optimist proclaims that we live in the best of all
possible worlds.  The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819



signature.asc
Description: PGP signature


Re: Home made backup system

2019-12-23 Thread Celejar
On Thu, 19 Dec 2019 14:25:24 -0600
Nate Bargmann  wrote:

> I also use rsnapshot on this machine to backup to another drive in the
> same case.  I'd thought about off site, perhaps AWS or such but haven't
> spent enough time trying to figure out how I might do that with
> rsnapshot.

One way to do this is by just using something like rclone (which speaks
AWS) to sync the rsnapshot backup to AWS. I do this with borg backups: I
used to sync to hubiC, which unfortunately has been offline for a while,
and I currently sync a borg backup to a c14 cold storage repository
using a tool I wrote for this purpose:

https://github.com/tmo1/c14sync

(rclone doesn't have the capability to automate the moving of data into
and out of c14's "safes" (cold storage repositories), or at least it
didn't when I wrote my utility, but in general, rclone is the standard
tool to sync local data with cloud storage providers, assuming you
don't have access to the cloud storage via traditional protocols like
ssh and rsync.)
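
The basic usage is simple; a sketch (the remote name and bucket are
placeholders set up beforehand with 'rclone config'):

  # one-way sync of the local rsnapshot tree to the cloud remote
  rclone sync /srv/rsnapshot remote:backup-bucket

  # or copy without deleting anything already present on the remote
  rclone copy /srv/rsnapshot remote:backup-bucket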

Celejar



Re: Home made backup system

2019-12-21 Thread Klaus Fuerstberger
Am 18.12.19 um 18:02 schrieb rhkra...@gmail.com:
> A purpose of sending this to the mailing-list is to find out if there already 
> exists a solution (or parts of a solution) close to what I'm thinking about 
> (no sense re-inventing the wheel), or if someone thinks I've overlooked 
> something or making a big mistake.

For my Linux-based servers I use Dirvish, an rsync-based backup
solution which works with hardlinks, so you can always have a backup of
the whole root tree of your servers while saving a lot of space. It works
locally and also remotely over ssh, and you can add pre and post scripts,
for example to stop and start database servers or make database snapshots:

http://dirvish.org/

Klaus



Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread songbird
rhkra...@gmail.com wrote:
> On Friday, December 20, 2019 09:40:28 PM songbird wrote:
>> Kenneth Parker wrote:
>
>> > Could you please ship me a personal email, on how you configured gmane
>> > and LKML to read debian-user?
>
>>   i'd rather post public messages as that way if anyone
>> else is reading along or searching they can also use the
>> information if they like.  that's why i like usenet.
>
>
> +1, and thanks from the peanut gallery

  you're welcome!

  i should also say that when you first subscribe to a 
group on gmane then it will not need any setup that i
recall.  however, when you reply to your first message
from a gmane group it will send you a confirmation e-mail
asking you to make sure it was really you who sent the
message.  you only need to do this once per each gmane
group you actually reply to.


  songbird



Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread Ralph Katz
On 12/20/19 7:40 PM, songbird wrote:

[snip] ...[configuring gmane to read debian-user]

> 
>   gmane is a mail to usenet gateway service.
> 
>   when you install leafnode and your favorite newsreader 
> and get them configured you will still have to download
> an active list from the news service provider and then
> have to subscribe to each group you would like to use.
> 
>   so in the case of LKML the group name is:
> 
> gmane.linux.kernel 
>   which is an alias used for the linux kernel mailing list.
> 
>   when you use your newsagent to search for that group
> you will have to pull articles to read (via whatever you
> use to get messages).  leafnode is what i use because i
> understand how it works.  other people use gnus, but 
> i've always been used to trn, rn, tin like interfaces
> so slrn works well for me.
> 
>   so basically there are three chunks to set up to get
> going.  leafnode, a newsreader/writer and the news
> acct itself.  none of this is super easy but doable to
> anyone who likes to poke at linux/debian, etc.
> 
>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.
> 
> 
>   songbird
> 
> 

I read several newsgroups on gmane from time to time with thunderbird.
Nothing needs to be installed.  Setup is easy:
- Setup new newsgroup account on thunderbird; name news.gmane.org,
enter your SMTP server.
- Server settings:  Type: NNTP, Name: news.gmane.org  port: 119
- choose newsgroups:  gmane.linux.debian.user
- download messages.

I choose to post directly to the debian-user list as it posts faster
than thru gmane and still keeps the threading.
gmane provides a very effective means to quickly browse and read newsgroups.

Regards
Ralph



signature.asc
Description: OpenPGP digital signature


Re: Fetchnews (was Re: Home made backup system)

2019-12-21 Thread rhkramer
On Friday, December 20, 2019 09:40:28 PM songbird wrote:
> Kenneth Parker wrote:

> > Could you please ship me a personal email, on how you configured gmane
> > and LKML to read debian-user?

>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.


+1, and thanks from the peanut gallery



Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Fri, Dec 20, 2019 at 9:41 PM songbird  wrote:

> Kenneth Parker wrote:
> >songbird wrote:
> ...
> >>   check out eternal-september.org  :)  no binaries.  just
> >> text.  that is all i want to read anyways.
>

You may see a sea7kenp username pop up occasionally.



> Could you please ship me a personal email, on how you configured gmane and
> > LKML to read debian-user?
>
>   gmane is a mail to usenet gateway service.
>

(Google doesn't like leafnode, by the way).

>
>   when you install leafnode and your favorite newsreader
> and get them configured you will still have to download
> an active list from the news service provider and then
> have to subscribe to each group you would like to use.
>
>   so in the case of LKML the group name is:
>
> gmane.linux.kernel
>   which is an alias used for the linux kernel mailing list.
>
>   when you use your newsagent to search for that group
> you will have to pull articles to read (via whatever you
> use to get messages).  leafnode is what i use because i
> understand how it works.  other people use gnus, but
> i've always been used to trn, rn, tin like interfaces
> so slrn works well for me.
>
>   so basically there are three chunks to set up to get
> going.  leafnode, a newsreader/writer and the news
> acct itself.  none of this is super easy but doable to
> anyone who likes to poke at linux/debian, etc.
>

Many thanks.  I just got approval to use gmane and leafnode for purposes such
as this.  There might be some excitement at the Eye Blink Universe
(http://eyeblinkuniverse.com) in the next few months.

>
>   i'd rather post public messages as that way if anyone
> else is reading along or searching they can also use the
> information if they like.  that's why i like usenet.
>

Fair enough.  Someone else is sure to combine your two, unrelated tasks on
the same line, just like I did.

Kenneth Parker


Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread songbird
Kenneth Parker wrote:
>songbird wrote:
...
>>   check out eternal-september.org  :)  no binaries.  just
>> text.  that is all i want to read anyways.
>>
>
> Thanks!  Name Servers couldn't find it without the "www" in front.  I am
> investigating it now.
>
> Not likely to get too far down the nntp "Rabbit Hole" tonight,  but will
> look closer at what's there.
>
> Could you please ship me a personal email, on how you configured gmane and
> LKML to read debian-user?

  gmane is a mail to usenet gateway service.

  when you install leafnode and your favorite newsreader 
and get them configured you will still have to download
an active list from the news service provider and then
have to subscribe to each group you would like to use.

  so in the case of LKML the group name is:

gmane.linux.kernel 
  which is an alias used for the linux kernel mailing list.

  when you use your newsagent to search for that group
you will have to pull articles to read (via whatever you
use to get messages).  leafnode is what i use because i
understand how it works.  other people use gnus, but 
i've always been used to trn, rn, tin like interfaces
so slrn works well for me.

  so basically there are three chunks to set up to get
going.  leafnode, a newsreader/writer and the news
acct itself.  none of this is super easy but doable to
anyone who likes to poke at linux/debian, etc.

  i'd rather post public messages as that way if anyone
else is reading along or searching they can also use the
information if they like.  that's why i like usenet.


  songbird



Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Fri, Dec 20, 2019 at 7:47 PM songbird  wrote:

> Kenneth Parker wrote:
> > songbird wrote:
> ...
> >>   i only use a few commands regularly and have them either
> >> aliased or stuck in history for me in my .bashrc
> >> (i start every session by history -c to get rid of
> >> anything and then use history -s "command" so pretty
> >> much my routine when signing on in the morning is to
> >> do !1 and then !2, !3 if i need to do a dist-upgrade.
> >>
> >> !1 is apt-get update & fetchnews
> >>
> >
> > I opened a possible Hornet's Nest (at least in my understanding) on a
> > Public Server that I administer.   I had not seen "fetchnews" before and,
> > thinking it might give info on Upgradeable Packages, tried it, getting a
> > message about "leafnode" not found.
>
>   i think that would be apt-listchanges (i think).


Bingo!  That's what I thought you were doing!  I installed that also, right
now, and now have a tool, to display changes


> i don't use
> it i just use the apt-get update or apt-get dist-upgrade output
> to see what is potentially being changed and make sure there's
> nothing in there too scary before i answer Y to the download
> and update prompt.
>
>
> > Fair enough.  So, "apt-get install leafnode", which starts asking me when
> > it's supposed to Fetch this News.  Finally smelling something fishy, I
> > clicked on "none", and am examining what I just did to a "Debian-Like"
> > Ubuntu 16.04 Server.
> >
> > Obviously, I am partly to blame, as the "&" means multiple commands on
> one
> > Command Line.  It only "seemed" to be related to the "Apt-get update"
> > command.
>
>   oops sorry!
>

I didn't finish configuring it, so No Problem!

   > So, Mr. Songbird, are we talking  nntp here?  (That takes me back
several years!)

>
>   yes!  :)  it is how i read this group and write back
> via gmane.  it is much faster way to read LKML and a
> bunch of other lists too than via a web interface.  i can
> skim a few thousand posts in just a few moments and
> pick out the stuff that interests me.  mark the rest all
> read and it's done.
>
>
> > On Topic for here, is to make sure others don't make my error.  Off Topic
> > (and feel free to personal reply me, Songbird) is my curiosity on, what
> > remains of our nntp News System.  Are there still good Servers out there?
> > (Last time I checked, I found a smelly batch of Spam, mixed with Make
> Money
> > Fast and Porn, which caused me to back off!)
>
>   check out eternal-september.org  :)  no binaries.  just
> text.  that is all i want to read anyways.
>

Thanks!  Name Servers couldn't find it without the "www" in front.  I am
investigating it now.

Not likely to get too far down the nntp "Rabbit Hole" tonight,  but will
look closer at what's there.

Could you please ship me a personal email, on how you configured gmane and
LKML to read debian-user?

Thank you and best regards,

Kenneth Parker


Re: Fetchnews (was Re: Home made backup system)

2019-12-20 Thread songbird
Kenneth Parker wrote:
> songbird wrote:
...
>>   i only use a few commands regularly and have them either
>> aliased or stuck in history for me in my .bashrc
>> (i start every session by history -c to get rid of
>> anything and then use history -s "command" so pretty
>> much my routine when signing on in the morning is to
>> do !1 and then !2, !3 if i need to do a dist-upgrade.
>>
>> !1 is apt-get update & fetchnews
>>
>
> I opened a possible Hornet's Nest (at least in my understanding) on a
> Public Server that I administer.   I had not seen "fetchnews" before and,
> thinking it might give info on Upgradeable Packages, tried it, getting a
> message about "leafnode" not found.

  i think that would be apt-listchanges (i think).  i don't use
it i just use the apt-get update or apt-get dist-upgrade output
to see what is potentially being changed and make sure there's
nothing in there too scary before i answer Y to the download
and update prompt.


> Fair enough.  So, "apt-get install leafnode", which starts asking me when
> it's supposed to Fetch this News.  Finally smelling something fishy, I
> clicked on "none", and am examining what I just did to a "Debian-Like"
> Ubuntu 16.04 Server.
>
> Obviously, I am partly to blame, as the "&" means multiple commands on one
> Command Line.  It only "seemed" to be related to the "Apt-get update"
> command.

  oops sorry!


> So, Mr. Songbird, are we talking  nntp here?  (That takes me back several
> years!)

  yes!  :)  it is how i read this group and write back
via gmane.  it is much faster way to read LKML and a
bunch of other lists too than via a web interface.  i can
skim a few thousand posts in just a few moments and
pick out the stuff that interests me.  mark the rest all
read and it's done.


> On Topic for here, is to make sure others don't make my error.  Off Topic
> (and feel free to personal reply me, Songbird) is my curiosity on, what
> remains of our nntp News System.  Are there still good Servers out there?
> (Last time I checked, I found a smelly batch of Spam, mixed with Make Money
> Fast and Porn, which caused me to back off!)

  check out eternal-september.org  :)  no binaries.  just
text.  that is all i want to read anyways.


>> !2 is apt-get upgrade
>> !3 is apt-get dist-upgrade
>
>
>
>
> But, in your Personal response, I'm interested in if there are, properly
> Moderated Newsgroups that actually Work?

  there used to be one i was busy in which is now moribund like
many of the others, but you can still find some active enough
corners for some good conversations in good ol' text.  even ones
that are not moderated you can filter out people who you'd 
rather not read any more.


> Thank you and best regards,
>
> Kenneth Parker (Doesn't sing like a Bird, but took College Chorus Classes).

  i also have fetchnews and postnews as separate entries in
the history but as a habit the first thing i want to do when
i sign on is get the news and update my package lists.


  songbird



Fetchnews (was Re: Home made backup system)

2019-12-20 Thread Kenneth Parker
On Thu, Dec 19, 2019 at 11:29 AM songbird  wrote:

> Greg Wooledge wrote:
> ...
> > History expansion is a bloody nightmare.  I recommend simply turning
> > it off and living without it.  Of course, that's a personal preference,
> > and you're free to continue banging your head against it, if you feel
> > that the times it helps you outweigh the times that it hurts you.
>
>   i only use a few commands regularly and have them either
> aliased or stuck in history for me in my .bashrc
> (i start every session by history -c to get rid of
> anything and then use history -s "command" so pretty
> much my routine when signing on in the morning is to
> do !1 and then !2, !3 if i need to do a dist-upgrade.
>
> !1 is apt-get update & fetchnews
>

I opened a possible Hornet's Nest (at least in my understanding) on a
Public Server that I administer.   I had not seen "fetchnews" before and,
thinking it might give info on Upgradeable Packages, tried it, getting a
message about "leafnode" not found.

Fair enough.  So, "apt-get install leafnode", which starts asking me when
it's supposed to Fetch this News.  Finally smelling something fishy, I
clicked on "none", and am examining what I just did to a "Debian-Like"
Ubuntu 16.04 Server.

Obviously, I am partly to blame, as the "&" means multiple commands on one
Command Line.  It only "seemed" to be related to the "Apt-get update"
command.

So, Mr. Songbird, are we talking  nntp here?  (That takes me back several
years!)

On Topic for here, is to make sure others don't make my error.  Off Topic
(and feel free to personal reply me, Songbird) is my curiosity on, what
remains of our nntp News System.  Are there still good Servers out there?
(Last time I checked, I found a smelly batch of Spam, mixed with Make Money
Fast and Porn, which caused me to back off!)

!2 is apt-get upgrade
> !3 is apt-get dist-upgrade




But, in your Personal response, I'm interested in if there are, properly
Moderated Newsgroups that actually Work?

Thank you and best regards,

Kenneth Parker (Doesn't sing like a Bird, but took College Chorus Classes).


Re: Home made backup system

2019-12-19 Thread Keith Bainbridge

On 19/12/19 4:02 am, rhkra...@gmail.com wrote:

Aside / Admission: I don't backup all that I should and as often as I should,
so I'm looking for ways to improve.  One thought I have is to write my own
backup "system" and use it, and I've thought about that a little,



I understand. For a while I used mc to copy/update files to USB. I had 
to watch the copy -- time consuming. Unreliable, because it was manual.


I also realised that I need to be able to go back, and not just to last 
week or last month. The file I change often I may use only a few days a year.


I get that if you create a file and never touch it again, the rotating 
back-ups are useful.  I use timeshift for system partition back-ups 
(and it has saved me several times).



Anyway, I use rsync with its --backup option to a backup/year/month/date/hour 
tree using $NAMEs. The scripts are on the destination to ensure that the 
destination is available.


Each USB drive is listed in /etc/fstab to prevent automount on insertion; 
mount options include noauto,noexec.  root's cron mounts the 
destination, remounts it exec, then runs the script, which ends with an 
unmount command:


2 * * * * mount /mnt/g502 && mount -o remount,exec /mnt/g502/ && cd 
/mnt/g502  && ./daily.sh


extract from daily.sh:


DAY=`date +%Y%b%d`
NOW=`date +%Y%b%d%H`
HOUR=`date +%H`
YEAR=`date +%Y`

cd /mnt/g502

cp /home/keith/rsyncExclusionList.txt ./

date   >>  ./copydailyStarted
echo "g502" >>  ./copydailyStarted

#$DAY >>
#date >> /mnt/g502/copydailyStarted
#$NOW >>  /mnt//g502/copydailyStarted

mkdir -p ./rsynccBackupp/$DAY/$HOUR

rsync -rubvLH --backup-dir=./rsynccBackupp/$DAY/$HOUR  --exclude 
'**ache' --exclude '.thunderb**' --exclude '**mozilla**' --exclude 
'**mzzlla**' --exclude '**eamonkey**' --exclude '**hromium** ' 
/mnt/data/keith/ ./



The --exclude bits aren't working yet; and neither did an 
--exclude-from list attempt.
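
One possible culprit is the trailing space inside the quoted pattern
'**hromium** ', and rsync's option for reading patterns from a file is
--exclude-from.  A sketch reusing the exclusion list the script already
copies to the destination (not the original command):

  rsync -rubvLH --backup-dir=./rsynccBackupp/$DAY/$HOUR \
  --exclude-from=./rsyncExclusionList.txt /mnt/data/keith/ ./

with the list file containing one pattern per line, e.g. .cache/, .mozilla/,
.thunderbird/.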



Anyhow, just my 2 bobs worth.




--
Keith Bainbridge

kkeith.bainbridge.3...@gmail.com
+61 (0)447 667 468



Re: Home made backup system

2019-12-19 Thread David Christensen

On 2019-12-19 21:04, David Christensen wrote:

So, ~47 snapshots of ~892 GB of data.  That is ~51 TB.


Correction -- 42 TB.


David



Re: Home made backup system

2019-12-19 Thread David Christensen

On 2019-12-19 09:45, ghe wrote:


How about writing a little script for rsync saying how you want it to
backup, what to backup, and what not to backup and set cron jobs for
when you want it to run. In the cron jobs, tell it to write to different
directories, so as to keep several days of backups.


The fundamental problem is duplication.


Here is the data on my SOHO server:

2019-12-19 20:33:28 toor@soho2 ~
# du -sg /jail/cvs/var/local/cvs /jail/samba/var/local/samba
1   /jail/cvs/var/local/cvs
891 /jail/samba/var/local/samba


So, ~892 GB of live data.


Here are the snapshots (backups):

2019-12-19 20:46:37 toor@soho2 ~
# ls -1 /jail/cvs/var/local/cvs/.zfs/snapshot 
/jail/samba/var/local/samba/.zfs/snapshot

/jail/cvs/var/local/cvs/.zfs/snapshot:
manual-20190530-1804
manual-20190530-1830
manual-20191209-1728
manual-20191209-1741
manual-20191209-1802
zfs-auto-snap_d-2019-12-07-00h07
zfs-auto-snap_d-2019-12-08-00h07
zfs-auto-snap_d-2019-12-14-00h07
zfs-auto-snap_d-2019-12-15-00h07
zfs-auto-snap_d-2019-12-16-00h07
zfs-auto-snap_d-2019-12-17-00h07
zfs-auto-snap_d-2019-12-18-00h07
zfs-auto-snap_d-2019-12-19-00h07
zfs-auto-snap_f-2019-09-05-20h12
zfs-auto-snap_f-2019-09-07-00h00
zfs-auto-snap_f-2019-09-15-23h00
zfs-auto-snap_f-2019-09-19-22h48
zfs-auto-snap_f-2019-10-05-23h12
zfs-auto-snap_f-2019-10-07-20h00
zfs-auto-snap_f-2019-10-15-20h00
zfs-auto-snap_f-2019-11-03-14h36
zfs-auto-snap_f-2019-11-14-19h36
zfs-auto-snap_f-2019-11-15-21h12
zfs-auto-snap_f-2019-11-25-19h48
zfs-auto-snap_f-2019-11-29-17h00
zfs-auto-snap_f-2019-12-19-20h36
zfs-auto-snap_h-2019-12-18-20h02
zfs-auto-snap_h-2019-12-18-21h02
zfs-auto-snap_h-2019-12-18-22h02
zfs-auto-snap_h-2019-12-18-23h02
zfs-auto-snap_h-2019-12-19-00h02
zfs-auto-snap_h-2019-12-19-01h02
zfs-auto-snap_h-2019-12-19-02h02
zfs-auto-snap_h-2019-12-19-03h02
zfs-auto-snap_h-2019-12-19-04h02
zfs-auto-snap_h-2019-12-19-05h02
zfs-auto-snap_h-2019-12-19-06h02
zfs-auto-snap_h-2019-12-19-07h02
zfs-auto-snap_h-2019-12-19-08h02
zfs-auto-snap_h-2019-12-19-09h02
zfs-auto-snap_h-2019-12-19-10h02
zfs-auto-snap_h-2019-12-19-11h02
zfs-auto-snap_h-2019-12-19-12h02
zfs-auto-snap_h-2019-12-19-13h02
zfs-auto-snap_h-2019-12-19-14h02
zfs-auto-snap_h-2019-12-19-15h02
zfs-auto-snap_h-2019-12-19-16h02
zfs-auto-snap_h-2019-12-19-17h02
zfs-auto-snap_h-2019-12-19-18h02
zfs-auto-snap_h-2019-12-19-19h02
zfs-auto-snap_h-2019-12-19-20h02
zfs-auto-snap_m-2019-09-01-00h17
zfs-auto-snap_m-2019-10-01-00h17
zfs-auto-snap_m-2019-11-01-00h17
zfs-auto-snap_m-2019-12-01-00h17
zfs-auto-snap_w-2019-11-17-00h12
zfs-auto-snap_w-2019-11-24-00h12
zfs-auto-snap_w-2019-12-01-00h12
zfs-auto-snap_w-2019-12-08-00h12
zfs-auto-snap_w-2019-12-15-00h12

/jail/samba/var/local/samba/.zfs/snapshot:
manual-20190530-1804
manual-20190530-1830
manual-20191210-1736
zfs-auto-snap_d-2019-12-09-00h07
zfs-auto-snap_d-2019-12-10-00h07
zfs-auto-snap_d-2019-12-14-00h07
zfs-auto-snap_d-2019-12-15-00h07
zfs-auto-snap_d-2019-12-16-00h07
zfs-auto-snap_d-2019-12-17-00h07
zfs-auto-snap_d-2019-12-18-00h07
zfs-auto-snap_d-2019-12-19-00h07
zfs-auto-snap_f-2019-12-08-11h36
zfs-auto-snap_f-2019-12-19-20h36
zfs-auto-snap_h-2019-12-18-20h02
zfs-auto-snap_h-2019-12-18-21h02
zfs-auto-snap_h-2019-12-18-22h02
zfs-auto-snap_h-2019-12-18-23h02
zfs-auto-snap_h-2019-12-19-00h02
zfs-auto-snap_h-2019-12-19-01h02
zfs-auto-snap_h-2019-12-19-02h02
zfs-auto-snap_h-2019-12-19-03h02
zfs-auto-snap_h-2019-12-19-04h02
zfs-auto-snap_h-2019-12-19-05h02
zfs-auto-snap_h-2019-12-19-06h02
zfs-auto-snap_h-2019-12-19-07h02
zfs-auto-snap_h-2019-12-19-08h02
zfs-auto-snap_h-2019-12-19-09h02
zfs-auto-snap_h-2019-12-19-10h02
zfs-auto-snap_h-2019-12-19-11h02
zfs-auto-snap_h-2019-12-19-12h02
zfs-auto-snap_h-2019-12-19-13h02
zfs-auto-snap_h-2019-12-19-14h02
zfs-auto-snap_h-2019-12-19-15h02
zfs-auto-snap_h-2019-12-19-16h02
zfs-auto-snap_h-2019-12-19-17h02
zfs-auto-snap_h-2019-12-19-18h02
zfs-auto-snap_h-2019-12-19-19h02
zfs-auto-snap_h-2019-12-19-20h02
zfs-auto-snap_m-2019-09-01-00h17
zfs-auto-snap_m-2019-10-01-00h17
zfs-auto-snap_m-2019-11-01-00h17
zfs-auto-snap_m-2019-12-01-00h17
zfs-auto-snap_w-2019-11-17-00h12
zfs-auto-snap_w-2019-11-24-00h12
zfs-auto-snap_w-2019-12-01-00h12
zfs-auto-snap_w-2019-12-08-00h12
zfs-auto-snap_w-2019-12-15-00h12


So, ~47 snapshots of ~892 GB of data.  That is ~51 TB.  My backup disks 
are 2.9 TB.



ZFS with de-duplication and compression consumes 1.16 TB for the live 
filesystem plus all snapshots:


2019-12-19 20:39:53 toor@soho2 ~
# zpool list p2
NAME  SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
p2    4.06T  1.16T  2.90T  -        -         3%    28%  1.13x  ONLINE  -


Multiple rsync destination directories are not an option for me.


David



Re: Home made backup system

2019-12-19 Thread Nate Bargmann
I also use rsnapshot on this machine to backup to another drive in the
same case.  I'd thought about off site, perhaps AWS or such but haven't
spent enough time trying to figure out how I might do that with
rsnapshot.

- Nate

-- 

"The optimist proclaims that we live in the best of all
possible worlds.  The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819



signature.asc
Description: PGP signature


Re: Home made backup system

2019-12-19 Thread Charles Curley
On Thu, 19 Dec 2019 10:45:22 -0700
ghe  wrote:

> How about writing a little script for rsync saying how you want it to
> backup, what to backup, and what not to backup and set cron jobs for
> when you want it to run. In the cron jobs, tell it to write to
> different directories, so as to keep several days of backups.

Or look into rsnapshot, which does all this and more.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-19 Thread Celejar
On Wed, 18 Dec 2019 12:02:56 -0500
rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve.  One thought I have is to write my own 
> backup "system" and use it, and I've thought about that a little, and provide 
> some of my thoughts below.
> 
> A purpose of sending this to the mailing-list is to find out if there already 
> exists a solution (or parts of a solution) close to what I'm thinking about 
> (no sense re-inventing the wheel), or if someone thinks I've overlooked 
> something or making a big mistake.

There are certainly tools that do at least most of what you want. For
example, I use rsnapshot, basically a front-end to rsync that is
designed to harness rsync's power to streamline the taking of
incremental backups.

...

>* the backups should be in formats such that I can access them by a 
> variety 
> of other tools (as appropriate) if I need to -- if I backup an entire 
> directory or partition, I should be able to easily access and restore any 
> particular file from within that backup, and do so even if encrypted (i.e., 
> encryption would be done by "standard programs" (a bad example might be 
> ccrypt) that I could use "outside" of the backup system.

rsnapshot uses rsync + hardlinks to recreate the portions of
the filesystem that you want to back up (source) to wherever you tell it
to (target). That recreated filesystem can be accessed in any way that
the original filesystem can - no special tools are required for access
or recovery.

>* the bash subroutine (command) that I write should basically do the 
> following:
> 
>   * check that the specified target exists (for things like removable 
> drives or NAS type things) and has (sufficient) space (not sure I can tell 
> that 

rsnapshot does have a check for target availability. I don't think it
can check for sufficient space before initiating a backup - as you note,
it's a tricky thing to do - but it does have a 'du' option to report on
the target's current level of usage.

> until after backup is attempted) (or an encrypted drive that is not mounted / 
> unencrypted, i.e., available to write to)

>   * if the right conditions don't exist (above) tell me (I'm thinking of 
> an email as email is something that always gets my attention, maybe not 
> immediately, but soon enough)

rsnapshot will fail with an error code if something is wrong - assuming
you run it from cron, cron will email the error message.
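
A typical crontab entry for that arrangement (the time, address, and interval
name are placeholders; the interval must be defined in rsnapshot.conf):

  MAILTO=me@example.org
  # cron mails whatever the job prints; rsnapshot is quiet on success,
  # so normally only failures produce mail
  30 3 * * * /usr/bin/rsnapshot daily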

>   * if the right conditions do exist, invoke the commands to backup the 
> files
> 
>   * if the backup is unsuccessful for any reason, notify me (email again)

As above.

>   * optionally notify me that the backup was successful (at least to the 
> extent of writing something)

By default rsnapshot prints nothing to stdout upon success (although
it does have a 'verbose' option), but it does log a 'success' message to
syslog, which I suppose you can keep an eye on with a log analyzer
(something like logwatch). Alternatively, I just reconfigured my
rsnapshot deployment to run rsnapshot with this wrapper, which results
in a notification for success but not for failure (since rsnapshot
pulls backups from the source, and in my case, the laptop it's
backing up is often not present, I would normally be flooded with
unnecessary failure notices):

*

#!/bin/sh

# usage 'rsnapshot-script x', where 'x' is a backup interval defined in the
# rsnapshot configuration file

if nc -z lila 22 2>/dev/null
then
    echo "Running 'rsnapshot $1' ..."
    if rsnapshot $1
    then echo Success
    fi
fi

*

>   * optionally actually do something to confirm that the backup is 
> readable 
> / usable (need to think about what that could be -- maybe write it (to /tmp 
> or 
> to a ramdrive), do something like a checksum (e.g., sha-256 or whatever makes 
> sense) on it and the original file, and confirm they match

rsnapshot has a hook system that allows you to add commands to be run
by it.

>   * ???
> 
> All of the commands invoked by the script should be parameters so that the 
> commands can be easily changed in the future (e.g., cp / tar / rsync, sha-256 
> or whatever, ccrypt or whatever, etc.) 

rsnapshot has configuration options 'cmd_cp', 'cmd_rm', 'cmd_rsync',
'cmd_ssh', 'cmd_logger', 'cmd_du' to do exactly that.
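
A minimal sketch of the relevant part of an rsnapshot.conf (fields must be
separated by tabs, not spaces; the paths and retain counts are just examples):

  snapshot_root   /srv/rsnapshot/
  cmd_cp          /bin/cp
  cmd_rm          /bin/rm
  cmd_rsync       /usr/bin/rsync
  cmd_ssh         /usr/bin/ssh
  cmd_logger      /usr/bin/logger
  cmd_du          /usr/bin/du
  retain          daily   7
  retain          weekly  4
  backup          /home/  localhost/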

> Then the master script (actually probably scripts, e.g. one or more each for 
> hourly, daily, weekly, ... backups) would be invoked by cron (or maybe 
> include 
> the at command? --my computers run 24/7 unless they crash, but for others, at 
> or something similar might be a better choice) would invoke that subroutine / 
> command for each file, directory, or partition to be backed up, spec

Re: Home made backup system

2019-12-19 Thread ghe


How about writing a little script for rsync saying how you want it to
backup, what to backup, and what not to backup and set cron jobs for
when you want it to run. In the cron jobs, tell it to write to different
directories, so as to keep several days of backups.

Not as smart as amanda (it'll backup more than necessary), but I think
it'll do the job with a whole lot less configuration.

I use something like this to backup a domain a thousand miles away.

-- 
Glenn English



Re: Home made backup system

2019-12-19 Thread songbird
Greg Wooledge wrote:
...
> History expansion is a bloody nightmare.  I recommend simply turning
> it off and living without it.  Of course, that's a personal preference,
> and you're free to continue banging your head against it, if you feel
> that the times it helps you outweigh the times that it hurts you.

  i only use a few commands regularly and have them either
aliased or stuck in history for me in my .bashrc
(i start every session by history -c to get rid of
anything and then use history -s "command" so pretty
much my routine when signing on in the morning is to
do !1 and then !2, !3 if i need to do a dist-upgrade.

!1 is apt-get update & fetchnews
!2 is apt-get upgrade
!3 is apt-get dist-upgrade


...
> ... and then, to add insult to injury, the command with the failed history
> expansion isn't even recorded in the shell's history, so you can't just
> "go up" and edit the line.  You have to start all over from scratch, or
> copy and paste the command with the mouse like some kind of Windows user.

  ha, yeah...

  i rarely use shell recording or other tools like
that but once in a while i've been rescued by my
habit of cat'ing the contents of a file to the terminal
to look at it instead of using an editor (and having
an infinite scroll window).


  songbird



Re: Home made backup system

2019-12-19 Thread tomas
On Thu, Dec 19, 2019 at 08:51:51AM -0500, Greg Wooledge wrote:
> On Thu, Dec 19, 2019 at 10:03:57AM +0200, Andrei POPESCU wrote:
> > On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> > > On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > > >   #!/bin/bash
> > > >   home=${HOME:-~}
> > 
> > It will set the variable 'home' to the value of the variable 'HOME' if 
> > set (yes, case matters), otherwise to '~'.
> 
> It appears to expand the ~, rather than assigning a literal ~ character
> to the variable.

For bash, it's in the docs:

Quoth the man page:

   ${parameter:-word}
  Use Default Values.  If parameter is unset or null, the expansion
  of word is substituted.  Otherwise, the value of parameter is
  substituted.

For the rest...

I agree that the script is full of bashisms. I usually don't care very
much when it's a script "to use around home". Whenever scripts get
larger or more widely distributed, I put in some effort.

But thanks for your (as always) insightful comments!

[...]

> So, home=${HOME:-~} seems like some sort of belt-and-suspenders fallback
> check in case the script is executed in a context where $HOME hasn't been
> set.  Maybe in a systemd service or something similar?  That's all I
> can think of.

You are right: HOME belongs to the blessed shell variables (in bash, at
least). Moreover, tilde expansion is done, according to the docs, using
HOME.

Quoth (again) the man page:

  HOME   The home directory of the current user; the default argument
 for the cd builtin command.  The value of this variable is
 also used when performing tilde expansion.

In practical terms:

  tomas@trotzki:~$ export HOME=rumpelstilzchen
  tomas@trotzki:/home/tomas$ echo ~
  rumpelstilzchen

:-)

So this whole "fallback to tilde" thing is redundant (at least in bash)!

Cheers
-- tomás


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 09:47:03AM +0100, to...@tuxteam.de wrote:
> So this "if" means:
> 
>   if   ## if
>   test ##
>   -z "$home"   ## the value of $home is empty
>   -o   ## or
>   \!   ## there is NOT
>   -d "$home"   ## a directory named "$home"
>## we're homeless.

Expanding on what I said in a previous message, the reason this is not
portable is because parsing this kind of expression is hard, and shells
did not all agree on how to do it.

So rather than try to enforce some kind of difficult parsing within
test, POSIX decided to scrap the whole thing.  In POSIX's wording:

  The XSI extensions specifying the -a and -o binary primaries and the
  '(' and ')' operators have been marked obsolescent. (Many expressions
  using them are ambiguously defined by the grammar depending on the
  specific expressions being evaluated.) Scripts using these expressions
  should be converted to the forms given below.

Shells that don't support binary -o and -a are compliant by default, and
shells that DO support it are simply offering an extension.  BUT, this is
only true for some expressions involving -o and -a.  Not all expressions.

What POSIX actually settled on for the test command is a strict
interpretation based on the number of arguments passed.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html

  0 arguments:
Exit false (1).

  1 argument:
Exit true (0) if $1 is not null; otherwise, exit false.

  2 arguments:
If $1 is '!', exit true if $2 is null, false if $2 is not null.

If $1 is a unary primary, exit true if the unary test is true,
false if the unary test is false.

Otherwise, produce unspecified results.

  3 arguments:
If $2 is a binary primary, perform the binary test of $1 and $3.

If $1 is '!', negate the two-argument test of $2 and $3.

[OB XSI]  If $1 is '(' and $3 is ')', perform the unary test of
$2.   On systems that do not support the XSI option, the results
are unspecified if $1 is '(' and $3 is ')'.

Otherwise, produce unspecified results.

  4 arguments:
If $1 is '!', negate the three-argument test of $2, $3, and $4.

[OB XSI]  If $1 is '(' and $4 is ')', perform the two-argument
test of $2 and $3.   On systems that do not support the XSI option,
the results are unspecified if $1 is '(' and $4 is ')'.

Otherwise, the results are unspecified.

  >4 arguments:
The results are unspecified.


So... your binary -o and -a are only allowed as extensions in one of the
"results are unspecified" cases, e.g. when there are 5 or more arguments
given to test.  Your code above has 6 arguments, so this is allowable, if
a given shell chooses to attempt it.  Bash is one of the shells that does.

Still, you shouldn't be writing this type of code.  If you're going
to require bash extensions, just go all in and use [[ -z $v || ! -d $v ]]
instead.  Otherwise, string together two test commands.

(Also remember that test -a is a legacy synonym for test -e, so a shell
that wants to parse binary -a first has to figure out whether it's
looking at a unary -a or a binary -a.  Bash's [[ || ]] doesn't have
that problem.)

The POSIX page actually goes into a lot more detail about some of the
historical glitches with test.  It's worth a read.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html#tag_20_128_16



Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 10:03:57AM +0200, Andrei POPESCU wrote:
> On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> > On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > >   #!/bin/bash
> > >   home=${HOME:-~}
> 
> It will set the variable 'home' to the value of the variable 'HOME' if 
> set (yes, case matters), otherwise to '~'.

It appears to expand the ~, rather than assigning a literal ~ character
to the variable.

wooledg:~$ x=${FOO:-~}; echo "$x"
/home/wooledg

I'm not sure I would trust this, though.  Even if the standards require
this behavior (and I'd have to lawyer my way through them to try to
figure out whether they actually DO require it), I wouldn't trust all
shell implementations to get it right.

And in any case, $HOME and ~ should normally both be the same thing,
so long as the ~ isn't quoted, and the $HOME isn't single-quoted.

   Tilde Expansion
   [...]
   If  this  login name is the null string, the tilde is replaced with the
   value of the shell parameter HOME.  If HOME is unset, the  home  direc‐
   tory  of  the  user executing the shell is substituted instead.

So, home=${HOME:-~} seems like some sort of belt-and-suspenders fallback
check in case the script is executed in a context where $HOME hasn't been
set.  Maybe in a systemd service or something similar?  That's all I
can think of.

If that's the intent, then I might prefer something more explicit,
and less likely to trigger an obscure shell bug, like:

if [ -z "$HOME" ]; then HOME=~; export HOME; fi

Then you can simply use $HOME in the rest of the script.

(See also .  And if you're
a set -u person, too bad.  Mangle it for -u compatibility yourself.  You
should know how, or else you shouldn't be using -u.)
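
(If you do want a -u-safe spelling, one minimal sketch is to expand
"${HOME-}" instead of "$HOME", e.g.

  if [ -z "${HOME-}" ]; then HOME=~; export HOME; fi

since ${HOME-} expands to an empty string rather than tripping nounset
when HOME is unset.)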



Re: Home made backup system

2019-12-19 Thread Greg Wooledge
On Thu, Dec 19, 2019 at 09:53:46AM +0100, to...@tuxteam.de wrote:
> > ...
> > >>   if test -z "$home" -o \! -d "$home" ; then

The main issue here is that the use of the binary -o and -a operators
in "test" or "[" is not portable.  It might work in bash's implementation
of test (sometimes), but you can't count on it in other shells.

The preferred way to write this in a bash script would be:

if [[ -z $home || ! -d $home ]]; then

Or, in an sh script:

if test -z "$home" || test ! -d "$home"; then

or:

if [ -z "$home" ] || [ ! -d "$home" ]; then


> > the backslash is just protecting the ! operator 
> > which is the not operator on what follows.

In the shell, backslash is a form of quoting.  \! is exactly the same as
'!' but it's one character shorter, so you'll see people use the shorter
form a lot of the time.

You don't actually NEED to quote a lone ! character in a shell command.

wooledg:~$ echo hi !
hi !

However, when the ! character is NOT all alone, in bash's default
interactive mode (with history expansion enabled), certain !x
combinations can trigger unexpected and undesired history expansion.

wooledg:~$ set -o histexpand
wooledg:~$ echo hi!!
echo hiset -o histexpand
hiset -o histexpand

So, some people who have run into this in the past have probably
developed a defense mechanism of "always quote ! characters, no matter
what".  Which isn't wrong... but even then, it's not always enough.

History expansion is a bloody nightmare.  I recommend simply turning
it off and living without it.  Of course, that's a personal preference,
and you're free to continue banging your head against it, if you feel
that the times it helps you outweigh the times that it hurts you.

wooledg:~$ set -o histexpand
wooledg:~$ echo "Oh Yeah!.mp3"
bash: !.mp3: event not found

... and then, to add insult to injury, the command with the failed history
expansion isn't even recorded in the shell's history, so you can't just
"go up" and edit the line.  You have to start all over from scratch, or
copy and paste the command with the mouse like some kind of Windows user.
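
(For the record, turning it off is one line -- this is stock bash, nothing
exotic:

  set +o histexpand     # or equivalently: set +H

in your ~/.bashrc, or interactively for the current shell.)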



Re: Home made backup system

2019-12-19 Thread tomas
On Wed, Dec 18, 2019 at 10:38:26PM -0500, songbird wrote:
> rhkra...@gmail.com wrote:
> ...
> >>   if test -z "$home" -o \! -d "$home" ; then
> >
> > What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> > -- 
>   no, -o is logical or in that context.

Yes, exactly: it's not bash operating on that, but test [1],
so for bash it's a plain old parameter passed to test.

> the backslash is just protecting the ! operator 
> which is the not operator on what follows.

Again, this is supposed to be passed to test unharmed,
so the \ is telling bash "nothing to see here, pass
along".

>   i'm not going to go any further with reading
> whatever script that is.  i don't want to be
> here all evening.  ;)

Shell can be entertaining, can't it [2]?

Cheers

[1] Of course, this was a little white lie: there is a /bin/test,
   but for bash "test" is a builtin, so it's part of bash anyway;
   it just behaves as if it were a separate binary. Oh, my ;-)

[2] Recommended: Greg Wooledge's pages. He's a regular here. He
   knows much more about shells than me!
   https://mywiki.wooledge.org/

-- tomás


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-19 Thread tomas
On Wed, Dec 18, 2019 at 09:42:21PM -0500, rhkra...@gmail.com wrote:
> Thanks to all who replied!
> 
> This script (or elements of it) looks useful to me, but I don't fully 
> understand it -- I plan to work my way through it -- I have a few questions 
> now, I'm sure I will have more after I get past the first 3 (or more 
> encouraging to me, first 6) lines.
> 
> Questions below:
> 
> On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> > On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:
> 
> >   #!/bin/bash
> >   home=${HOME:-~}
> 
> What does that line do, or more specifically, what does the :-~ do -- note 
> the 
> following:

The "-" doesn't belong to the "~" but to the ":" ;-)

The construction is (see the section "Parameter Expansion" in the bash
manual):

   ${parameter:-word}
 Use Default Values.  If parameter is unset or null, the expansion
 of word is substituted.  Otherwise, the value of parameter is
 substituted.

("parameter" is the bash manual's jargon for what we colloquially call
"shell variable").

So this means: "if HOME is set, then use that. Otherwise use whatever
tilde ('~') expands to".

This is my way to find a home, but to allow the script's user to override
it by setting HOME to some other value.
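
A quick demonstration (foo is just a throwaway name) -- note that with the
":" an empty-but-set variable falls back too, not only an unset one:

  $ unset foo;  echo "${foo:-fallback}"
  fallback
  $ foo="";     echo "${foo:-fallback}"
  fallback
  $ foo=value;  echo "${foo:-fallback}"
  value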

> rhk@s19:/rhk/git_test$ echo ${HOME:-~}
> /home/rhk
> rhk@s19:/rhk/git_test$ echo ${HOME}
> /home/rhk
> 
> >   if test -z "$home" -o \! -d "$home" ; then
> 
> What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> -- 
> I guess I should look for it in man bash...

No, this exclamation mark ain't for bash -- it's an argument to "test"
which it interprets as "not". Since the "!" can mean to bash something
in some contexts, as you found out, I escaped it with the "\" [1].

So this "if" means:

  if   ## if
  test ##
  -z "$home"   ## the value of $home is empty
  -o   ## or
  \!   ## there is NOT
  -d "$home"   ## a directory named "$home"
   ## we're homeless.

> I'm sure I'll have more questions as I continue, but that is enough for me 
> for 
> tonight.

Questions welcome!

Cheers

[1] Actually, on revisiting things, I would tend to write '!' instead
   of \! these days.

-- tomás


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-19 Thread Andrei POPESCU
On Mi, 18 dec 19, 21:42:21, rhkra...@gmail.com wrote:
> On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> 
> >   #!/bin/bash
> >   home=${HOME:-~}
> 
> What does that line do, or more specifically, what does the :-~ do -- note 
> the 
> following:

It will set the variable 'home' to the value of the variable 'HOME' if 
set (yes, case matters), otherwise to '~'.

See the bash manpage, section 'Parameter Expansion'.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Home made backup system

2019-12-18 Thread David Christensen

On 2019-12-18 09:02, rhkra...@gmail.com wrote:

[...]


I wrote and use a homebrew backup and archive solution that started with 
a Perl script to invoke rsync (backup) and tar/ gzip (archive) over ssh 
from a central server according to configurable job files.  

Re: Home made backup system

2019-12-18 Thread songbird
rhkra...@gmail.com wrote:
...
>>   if test -z "$home" -o \! -d "$home" ; then
>
> What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner 
> -- 
  no, -o is logical or in that context.
the backslash is just protecting the ! operator 
which is the not operator on what follows.

  i'm not going to go any further with reading
whatever script that is.  i don't want to be
here all evening.  ;)

  when searching the bash man pages you have to
be aware of context as some of the operators
and options are used in many places but have 
quite different meanings.


  songbird



Re: Home made backup system

2019-12-18 Thread Charles Curley
On Wed, 18 Dec 2019 12:02:56 -0500
rhkra...@gmail.com wrote:

> Aside / Admission: I don't backup all that I should and as often as I
> should, so I'm looking for ways to improve.  One thought I have is to
> write my own backup "system" and use it, and I've thought about that
> a little, and provide some of my thoughts below.

There are different backup programs for different purposes. Some
thoughts:
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/



-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Home made backup system

2019-12-18 Thread rhkramer
Thanks to all who replied!

This script (or elements of it) looks useful to me, but I don't fully 
understand it -- I plan to work my way through it -- I have a few questions 
now, I'm sure I will have more after I get past the first 3 (or more 
encouraging to me, first 6) lines.

Questions below:

On Wednesday, December 18, 2019 12:26:04 PM to...@tuxteam.de wrote:
> On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:

>   #!/bin/bash
>   home=${HOME:-~}

What does that line do, or more specifically, what does the :-~ do -- note the 
following:

rhk@s19:/rhk/git_test$ echo ${HOME:-~}
/home/rhk
rhk@s19:/rhk/git_test$ echo ${HOME}
/home/rhk

>   if test -z "$home" -o \! -d "$home" ; then

What does the -o \! do -- hmm, I guess \! is a bash "reference" to the owner -- 
I guess I should look for it in man bash...

Hmm, but that means (in bash) the "history number" of the command

"  \! the history number of this command"

> echo "can't backup the homeless, sorry"
> exit 1
>   fi

I'm sure I'll have more questions as I continue, but that is enough for me for 
tonight.

>   backup=/media/backup/${home#/}
>   rsync -av --delete --filter="merge $home/.backup/filter" $home/ $backup/
>   echo -n "syncing..."
>   sync
>   echo " done."
>   df -h
> 
> I mount an USB stick (currently 128G) on /media/backup (the stick has a
> LUKS encrypted file system on it) and invoke backup.
> 
> The only non-quite obvious thing is the option
> 
>   --filter="merge $home/.backup/filter"
> 
> which controls what (not) to back up. This one has a list of excludes
> (much shortened) like so
> 
>   - /.cache/
>   [...much elided...]
>   - /.xsession-errors
>   - /tmp
>   dir-merge .backup-filter
> 
> The last line is interesting: it tells rsync to merge a file .backup-filter
> in each directory it visits -- so I can exclude huge subdirs I don't need
> to keep (e.g. because they are easy to re-build, etc.).
> 
> One example of that: I've a subdirectory virt, where I keep virtual images
> and install media. Then virt/.backup-filter looks like this:
> 
>   + /.backup-filter
>   + /notes
>   - /*
> 
> i.e. "just keep .backup-filter and notes, ignore the rest".
> 
> This scheme has served me well over the last ten years. It does have its
> limitations: it's sub-optimal with huge files, it probably won't scale
> well for huge amounts of data.
> 
> But it's easy to use and easy to understand.
> 
> Cheers
> -- t



Re: Home made backup system

2019-12-18 Thread elvis
If you don't to reinvent the wheel, and have more than one computer to 
backup...


try Bacula  www.bacula.org


does everything you want

On 19/12/19 3:02 am, rhkra...@gmail.com wrote:

[...]


--
If we aren't supposed to eat animals, why are they made of meat?



Re: Home made backup system

2019-12-18 Thread tomas
On Wed, Dec 18, 2019 at 12:02:56PM -0500, rhkra...@gmail.com wrote:
> Aside / Admission: I don't backup all that I should and as often as I should, 
> so I'm looking for ways to improve [...]

> Part of the reason for doing my own is that I don't want to be trapped into 
> using a system that might disappear or change and leave me with a problem.

I just use rsync. The whole thing is driven from a minimalist script:

  #!/bin/bash
  home=${HOME:-~}
  if test -z "$home" -o \! -d "$home" ; then
echo "can't backup the homeless, sorry"
exit 1
  fi
  backup=/media/backup/${home#/}
  rsync -av --delete --filter="merge $home/.backup/filter" $home/ $backup/
  echo -n "syncing..."
  sync
  echo " done."
  df -h

I mount an USB stick (currently 128G) on /media/backup (the stick has a
LUKS encrypted file system on it) and invoke backup.

The only non-quite obvious thing is the option

  --filter="merge $home/.backup/filter"

which controls what (not) to back up. This one has a list of excludes
(much shortened) like so

  - /.cache/
  [...much elided...]
  - /.xsession-errors
  - /tmp
  dir-merge .backup-filter

The last line is interesting: it tells rsync to merge a file .backup-filter
in each directory it visits -- so I can exclude huge subdirs I don't need
to keep (e.g. because they are easy to re-build, etc.).

One example of that: I've a subdirectory virt, where I keep virtual images
and install media. Then virt/.backup-filter looks like this:

  + /.backup-filter
  + /notes
  - /*

i.e. "just keep .backup-filter and notes, ignore the rest".

This scheme has served me well over the last ten years. It does have its
limitations: it's sub-optimal with huge files, it probably won't scale
well for huge amounts of data.

But it's easy to use and easy to understand.

Cheers
-- t


signature.asc
Description: Digital signature


Re: Home made backup system

2019-12-18 Thread billium

On 18/12/2019 17:02, rhkra...@gmail.com wrote:

[...]

The rsync web site has some good examples.  There is a daily rotating 
one and also one for full backups.  I use these to back up to a Debian 
NAS and a VPS.





Re: Home made backup system

2019-12-18 Thread Levente
It depends on what you want to back up. If it is code or text files, use
git. If they are photos, videos, or mostly binary, use some script and
magnetic tapes.


Levente

On Wed, Dec 18, 2019, 18:03  wrote:

> Aside / Admission: I don't backup all that I should and as often as I
> should,
> so I'm looking for ways to improve.  One thought I have is to write my own
> backup "system" and use it, and I've thought about that a little, and
> provide
> some of my thoughts below.
>
> A purpose of sending this to the mailing-list is to find out if there
> already
> exists a solution (or parts of a solution) close to what I'm thinking
> about
> (no sense re-inventing the wheel), or if someone thinks I've overlooked
> something or making a big mistake.
>
> Part of the reason for doing my own is that I don't want to be trapped
> into
> using a system that might disappear or change and leave me with a
> problem.  (I
> subscribe to a mailing list for one particular backup system, and I wrote
> to
> that list with my concerns and a little bit of my thoughts about my own
> system
> (well, at the time, I was hoping for a "universal" configuration file (the
> file
> that would specify what, where, when, how each file, directory, or
> partition to
> be backed up would be treated), one that could be read and acted upon by a
> great variety (and maybe all future backup programs).
>
> The only response I got (iirc) was that since their program was open
> source,
> it would never go away.  (Yet, if I'm not mixing up backup programs, they
> were
> transitioning from using Python 2 as the underlying language to Python 3
> --
> I'm not sure Python 2 would ever go completely away, or become
> non-functional,
> but it reinforces my belief / fear that any (complex?) backup program,
> even
> open source, would someday become unusable.
>
> So, here are my thoughts:
>
> After I thought about (hoped for) a universal config file for backup
> programs
> and it seeming that no such thing exists (not surprising), I thought I'd
> try
> to create my own -- this morning as I thought about it a little more
> (despite
> a headache and a non-working car what I should be working on), I thought
> that
> the simplest thing for me to do is write a bash script and a bash
> subroutine,
> something along these lines:
>
>* the backups should be in formats such that I can access them by a
> variety
> of other tools (as appropriate) if I need to -- if I backup an entire
> directory or partition, I should be able to easily access and restore any
> particular file from within that backup, and do so even if encrypted
> (i.e.,
> encryption would be done by "standard programs" (a bad example might be
> ccrypt) that I could use "outside" of the backup system.
>
>* the bash subroutine (command) that I write should basically do the
> following:
>
>   * check that the specified target exists (for things like removable
> drives or NAS type things) and has (sufficient) space (not sure I can tell
> that
> until after backup is attempted) (or an encrypted drive that is not
> mounted /
> unencrypted, i.e., available to write to)
>
>   * if the right conditions don't exist (above) tell me (I'm thinking
> of
> an email as email is something that always gets my attention, maybe not
> immediately, but soon enough)
>
>   * if the right conditions do exist, invoke the commands to backup
> the
> files
>
>   * if the backup is unsuccessful for any reason, notify me (email
> again)
>
>   * optionally notify me that the backup was successful (at least to
> the
> extent of writing something)
>
>   * optionally actually do something to confirm that the backup is
> readable
> / usable (need to think about what that could be -- maybe write it (to
> /tmp or
> to a ramdrive), do something like a checksum (e.g., sha-256 or whatever
> makes
> sense) on it and the original file, and confirm they match
>
>   * ???
>
> All of the commands invoked by the script should be parameters so that the
> commands can be easily changed in the future (e.g., cp / tar / rsync,
> sha-256
> or whatever, ccrypt or whatever, etc.)
>
> Then the master script (actually probably scripts, e.g. one or more each
> for
> hourly, daily, weekly, ... backups) would be invoked by cron (or maybe
> include
> the at command? --my computers run 24/7 unless they crash, but for others,
> at
> or something similar might be a better choice) would invoke that
> subroutine /
> command for each file, directory, or partition to be backed up, specifying
> the
> commands to use, what files to backup, where

Home made backup system

2019-12-18 Thread rhkramer
Aside / Admission: I don't backup all that I should and as often as I should, 
so I'm looking for ways to improve.  One thought I have is to write my own 
backup "system" and use it, and I've thought about that a little, and provide 
some of my thoughts below.

A purpose of sending this to the mailing-list is to find out if there already 
exists a solution (or parts of a solution) close to what I'm thinking about 
(no sense re-inventing the wheel), or if someone thinks I've overlooked 
something or making a big mistake.

Part of the reason for doing my own is that I don't want to be trapped into 
using a system that might disappear or change and leave me with a problem.  (I 
subscribe to a mailing list for one particular backup system, and I wrote to 
that list with my concerns and a little bit of my thoughts about my own system 
(well, at the time, I was hoping for a "universal" configuration file (the file 
that would specify what, where, when, how each file, directory, or partition to 
be backed up would be treated), one that could be read and acted upon by a 
great variety (and maybe all future backup programs).

The only response I got (iirc) was that since their program was open source, 
it would never go away.  (Yet, if I'm not mixing up backup programs, they were 
transitioning from using Python 2 as the underlying language to Python 3 -- 
I'm not sure Python 2 would ever go completely away, or become non-functional, 
but it reinforces my belief / fear that any (complex?) backup program, even 
open source, would someday become unusable.

So, here are my thoughts:

After I thought about (hoped for) a universal config file for backup programs 
and it seeming that no such thing exists (not surprising), I thought I'd try 
to create my own -- this morning as I thought about it a little more (despite 
a headache and a non-working car which I should be working on), I thought that 
the simplest thing for me to do is write a bash script and a bash subroutine, 
something along these lines:

   * the backups should be in formats such that I can access them by a variety 
of other tools (as appropriate) if I need to -- if I backup an entire 
directory or partition, I should be able to easily access and restore any 
particular file from within that backup, and do so even if encrypted (i.e., 
encryption would be done by "standard programs" (a bad example might be 
ccrypt) that I could use "outside" of the backup system.

   * the bash subroutine (command) that I write should basically do the 
following:

  * check that the specified target exists (for things like removable 
drives or NAS type things) and has (sufficient) space (not sure I can tell that 
until after backup is attempted) (or an encrypted drive that is not mounted / 
unencrypted, i.e., available to write to)

  * if the right conditions don't exist (above) tell me (I'm thinking of 
an email as email is something that always gets my attention, maybe not 
immediately, but soon enough)

  * if the right conditions do exist, invoke the commands to backup the 
files

  * if the backup is unsuccessful for any reason, notify me (email again)

  * optionally notify me that the backup was successful (at least to the 
extent of writing something)

  * optionally actually do something to confirm that the backup is readable 
/ usable (need to think about what that could be -- maybe write it (to /tmp or 
to a ramdrive), do something like a checksum (e.g., sha-256 or whatever makes 
sense) on it and the original file, and confirm they match

  * ???

All of the commands invoked by the script should be parameters so that the 
commands can be easily changed in the future (e.g., cp / tar / rsync, sha-256 
or whatever, ccrypt or whatever, etc.) 
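
As a very rough sketch of the kind of subroutine I mean (the command names, 
paths, and mail address below are only placeholders):

   backup_one() {
       target=$1; shift                 # e.g. /media/backup
       if ! mountpoint -q "$target"; then
           echo "target $target not mounted" | mail -s "backup FAILED" me@example.com
           return 1
       fi
       if ! "$@"; then                  # the actual backup command (rsync, tar, ...)
           echo "backup command failed: $*" | mail -s "backup FAILED" me@example.com
           return 1
       fi
       echo "backup OK: $*" | mail -s "backup OK" me@example.com
   }

   # e.g., from cron:
   #   backup_one /media/backup rsync -a --delete "$HOME"/ /media/backup/home/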

Then the master script (actually probably scripts, e.g. one or more each for 
hourly, daily, weekly, ... backups) would be invoked by cron (or maybe include 
the at command? --my computers run 24/7 unless they crash, but for others, at 
or something similar might be a better choice) would invoke that subroutine / 
command for each file, directory, or partition to be backed up, specifying the 
commands to use, what files to backup, where to back them up, encrypted or not, 
compressed or not, tarred or not, etc.

In other words, instead of a configuration file, the system would just use bash 
scripts with the appropriate commands, and invoked at the appropriate time by 
cron (or with all backup commands in one script with backup times specified 
with at or similar).

Aside: even if Amanda (for example) will always exist, I don't really want to 
learn anything about it or any other program that might cease to be 
maintained in the future.



Re: Recommendation: Backup system

2016-10-12 Thread Richard Hector
On 07/10/16 23:19, Jonathan Dowland wrote:
> I don't know whether dirvish does something to improve matters, but with
> hard link trees, if you have lots of little files (such as Maildir archives
> of busy mailing lists like LKML), the amount of space consumed by the file
> system metadata to represent the trees is very high, getting towards the
> size used for the files themselves. This is compounded on some filesystems
> (the ext family amongst them) that only ever increase the amount of space
> for storing dirent metadata, never decrease it, so if you have a Maildir
> which has high churn, and you move from having a large amount of mail to
> a small amount, the storage allocated for metadata for the directory will
> remain large.

The metadata you're talking about is merely the directory entries,
right? I'm still searching, but haven't yet found info on how much space
a directory entry takes up. Any pointers would be most welcome :-)

I'm on xfs, btw.

Regardless, in the trivial case where nothing has changed between
backups, the directories in the new backup must take up exactly the same
space as the ones in the old backup, where both point to the same files.
So whether there's one set of links or several, it's still the case that
for small files, the directory entries will take proportionately more
space relative to the file - that's inevitable.

I think I'm right in saying that in the dirvish (or similar) use case,
the number of directory entries in a given directory will never decrease
anyway - except in the odd case where I discover I've backed up
something I shouldn't have (eg a huge or confidential file), and gone
through purging it from all the backups.

Richard




signature.asc
Description: OpenPGP digital signature


Re: Recommendation: Backup system

2016-10-10 Thread Dave Thayer
On Mon, Oct 10, 2016 at 10:55:44PM +0100, Jonathan Dowland wrote:
> On Mon, Oct 10, 2016 at 09:32:02AM -0400, Celejar wrote:
> > Am I being unreasonable? You are certainly more of an expert than I - I
> > suppose you find that it is quality software, and better than
> > rsnapshot, despite basically being dead?
>  
> Lack of bug and development activity is certainly a worrying sign of a program
> with problems. But development activity naturally dies off if a program does
> what it is supposed to do, so it's not always a certainty that it is doomed.
> It remains a concern for me, but regardless the tool has worked very well.
> 

It looks like rdiff-backup has a new maintainer as of February 2016, sol1. See
. There may be some progress forthcoming
after all. 

dt

-- 
Dave Thayer / Denver, Colorado USA / d...@thayer-boyle.com 
 Whenever you read a good book, it's like the author is right there, in
 the room talking to you, which is why I don't like to read good books.
 - Jack Handey "Deep Thoughts"



Re: Recommendation: Backup system

2016-10-10 Thread Jonathan Dowland
On Mon, Oct 10, 2016 at 09:32:02AM -0400, Celejar wrote:
> Interesting, thanks. I've been using rsnapshot for years, and am
> basically satisfied with it, although the performance when run on my
> T61 laptop (backing up to a (slow) USB external disk) is indeed painful
> (I do have largeish Maildirs). [Interestingly, when run on my ARM
> (Kirkwood) NAS, pulling from the laptop over ethernet / wifi,
> performance seems much better ...]

The first machine where I really had performance problems
with rsnapshot was an ARM-powered Thecus N2100, but that was a pretty
weak CPU from what I recall.

> One thing I really like about rsnapshot is how the backups are all
> stored just as the files themselves, without any special formats, and
> can therefore be inspected / restored from using just the ordinary
> filesystem tools.

That is a nice property, yes. the rdiff-backup format is not quite this,
but is well specified outside of the code (from the last time I looked)
to give me confidence I could pull my files out by hand if I needed to.

I'm not sure if I mentioned it in this thread or not, but I actually began a
third party tool to parse rdiff-backup format backups[1], and there exists
another third party tool to do the same thing[2].

> A number of years ago I looked at rdiff-backup, but dropped it due to
> suspicions of the code quality: the project seemed to be dead, and even
> significant bugs were being ignored:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=623336
> https://lists.debian.org/debian-user/2012/10/msg00182.html
> 
> I see that you, too, have a report that's been ignored for more than 5
> years ;)

Yes, that's true. The bug that I remember reporting (haven't looked it back up
to be sure) was that reverting a partial (failed) backup requires some disk
space and so fails if the backup device is full - this is a big problem in
theory, but in practice I'm running my backup jobs as non-root, so there's
always the 5% or so reserved space for root users. So in this situation I
can have the roll-back occur as root.

> Am I being unreasonable? You are certainly more of an expert than I - I
> suppose you find that it is quality software, and better than
> rsnapshot, despite basically being dead?
 
Lack of bug and development activity is certainly a worrying sign of a program
with problems. But development activity naturally dies off if a program does
what it is supposed to do, so it's not always a certainty that it is doomed.
It remains a concern for me, but regardless the tool has worked very well.

[1] https://jmtd.net/software/rdifffs/
[2] https://github.com/rbrito/rdiff-backup-fs

-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


signature.asc
Description: Digital signature


Re: Recommendation: Backup system

2016-10-10 Thread Celejar
On Mon, 3 Oct 2016 10:48:57 +0100
Jonathan Dowland  wrote:

> On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:
> > Make a long story short:
> > Have you guys a recommendation for me?
> > Is there a specific application you use for your backups guys?
> 
> rdiff-backup[1]. I don't know what your NAS is, whether it's an off-the-shelf
> thing or a DIY system, but I tend to initiate the backups on my NAS (which is
> a Debian-powered mini itx PC). You can initiate from the clients instead if
> you prefer.
> 
> rdiff-backup is, conceptually, quite similar to the level of detail you are
> working at with your script, so it should be easy to get working. I'd also
> consider looking at Obnam[2], and I'd avoid rsnapshot if I were you (it
> falls over with large quantities of files, such as mailboxes, and rdiff-backup
> basically does the same job but better).

Interesting, thanks. I've been using rsnapshot for years, and am
basically satisfied with it, although the performance when run on my
T61 laptop (backing up to a (slow) USB external disk) is indeed painful
(I do have largeish Maildirs). [Interestingly, when run on my ARM
(Kirkwood) NAS, pulling from the laptop over ethernet / wifi,
performance seems much better ...]

One thing I really like about rsnapshot is how the backups are all
stored just as the files themselves, without any special formats, and
can therefore be inspected / restored from using just the ordinary
filesystem tools.

A number of years ago I looked at rdiff-backup, but dropped it due to
suspicions of the code quality: the project seemed to be dead, and even
significant bugs were being ignored:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=623336
https://lists.debian.org/debian-user/2012/10/msg00182.html

I see that you, too, have a report that's been ignored for more than 5
years ;)

Am I being unreasonable? You are certainly more of an expert than I - I
suppose you find that it is quality software, and better than
rsnapshot, despite basically being dead?

Celejar



Re: Recommendation: Backup system

2016-10-07 Thread Jonathan Dowland
On Wed, Oct 05, 2016 at 03:06:58PM +1300, Richard Hector wrote:
> Can you elaborate on the performance issues? I'm using dirvish for my
> maildirs (dovecot imap server), without noticeable problems.

I don't know whether dirvish does something to improve matters, but with
hard link trees, if you have lots of little files (such as Maildir archives
of busy mailing lists like LKML), the amount of space consumed by the file
system metadata to represent the trees is very high, getting towards the
size used for the files themselves. This is compounded on some filesystems
(the ext family amongst them) that only ever increase the amount of space
for storing dirent metadata, never decrease it, so if you have a Maildir
which has high churn, and you move from having a large amount of mail to
a small amount, the storage allocated for metadata for the directory will
remain large.

On a low-powered device like an embedded NAS, performing operations like
delete (such as during a backup rotation) on a large hardlink tree can be
very slow and require a lot of CPU time.

> My current challenge is to back up windows boxes - if I can get rsync to
> work (maybe DeltaCopy? Not sure if that will work how I want), I guess
> I'll be stuck doing a local rsync of a smbfs mount ... unless someone
> has a better suggestion.

Windows has an rsync-like tool called Robocopy, but I'd skip straight to
Backup PC for backing up Windows machines.

-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


signature.asc
Description: Digital signature


Re: Recommendation: Backup system

2016-10-05 Thread Richard Hector
On 05/10/16 16:03, Daniel Bareiro wrote:
> Hi, Richard.
> 
> On 04/10/16 23:06, Richard Hector wrote:
> 
>> My current challenge is to back up windows boxes - if I can get
>> rsync to work (maybe DeltaCopy? Not sure if that will work how I
>> want), I guess I'll be stuck doing a local rsync of a smbfs mount
>> ... unless someone has a better suggestion.
> 
> Some time ago I started a thread [1] on the list for this topic.
> The Cygwin SSH server seems to run smoothly although I have not
> yet implemented it on a daily basis.

Ah, thanks. I think I even read it at the time, but didn't have the
immediate need, so didn't take so much in :-)

It depends a bit on how involved the setup is - the Windows machines
aren't mine (they're my parents'), and I don't want anything too
confusing for their regular Windows helper ... or for me, for that
matter; I don't touch Windows very often :-)

Richard



Re: Recommendation: Backup system

2016-10-04 Thread Daniel Bareiro
Hi, Richard.

On 04/10/16 23:06, Richard Hector wrote:

> My current challenge is to back up windows boxes - if I can get rsync to
> work (maybe DeltaCopy? Not sure if that will work how I want), I guess
> I'll be stuck doing a local rsync of a smbfs mount ... unless someone
> has a better suggestion.

Some time ago I started a thread [1] on the list for this topic. The
Cygwin SSH server seems to run smoothly although I have not yet
implemented it on a daily basis.


Kind regards,
Daniel

[1] https://lists.debian.org/debian-user/2016/08/threads.html#00040



signature.asc
Description: OpenPGP digital signature


Re: Recommendation: Backup system

2016-10-04 Thread Richard Hector
On 04/10/16 01:45, Markus Grunwald wrote:
> Hello Teemu,
>
>>> rsync, whilst an awesome piece of software, is not, on its own, a
>>> backup system.
>>
>> Yes. With some scripting I think "rsync" with "--link-dest" is quite
>> ideal for incremental backups. Unchanged files are created as hard
>> links for the previous backup files. Every backup generation is just
>> a normal and complete file system tree.
>
> I haven't followed this thread closely, but:
>
> To everybody who ponders using rsync for backup, I strongly suggest a
> closer look at "dirvish". I'm using it to backup servers, laptops,
> raspberries. It is not hard to configure, uses rsync with hard links
> for "incremental" backups and keeps older versions of the backups.

That's what I use too. I use it with rsync over ssh as root, with the
remote set up for root to run forced commands only, and in order to back
up different trees independently, I use different ssh keys, each with
its own forced command. That's a bit fiddly to set up, but seems to work ok.
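
For anyone wondering what that looks like: each entry in the client's
authorized_keys gets a command= restriction, roughly like this (the key and
the exact --server argument string are illustrative -- the real string
depends on the rsync options used, and the easy way to find it is to log
$SSH_ORIGINAL_COMMAND once):

  command="rsync --server --sender -vlogDtpre.iLsf . /home/",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... dirvish-home

so that key can only ever run that one rsync invocation.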

On 04/10/16 04:06, Jonathan Dowland wrote:
> That's basically a less convenient rsnapshot, with all the caveats
> (bad performance for large trees of files like mailboxes)

Can you elaborate on the performance issues? I'm using dirvish for my
maildirs (dovecot imap server), without noticeable problems.

One change I do make is to enable 'dateext' in my logrotate config, so I
don't end up with endless duplicates of my logfiles due to the names
changing.
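
(That's just the dateext directive -- something like this in
/etc/logrotate.conf, or per service under /etc/logrotate.d/:

  dateext
  dateformat -%Y-%m-%d

so rotated logs get date-stamped names instead of .1, .2, ...)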

My current challenge is to back up windows boxes - if I can get rsync to
work (maybe DeltaCopy? Not sure if that will work how I want), I guess
I'll be stuck doing a local rsync of a smbfs mount ... unless someone
has a better suggestion.

Richard



Re: Recommendation: Backup system

2016-10-04 Thread Jarle Aase

I have moved from simple scripts to simple scripts with zbackup in them :)

Then I rsync the zbackup directories from different machines to my 
central  backup disks (and distribute from there to cloud storage and 
off-site disks).


zbackup supports deduplication and encryption, and is really a nice 
piece of software.



Jarle


On 01 Oct 2016 12:37, mo wrote:

Hi Debian users :)

Information:
Distributor ID: Debian
Description:    Debian GNU/Linux 8.6 (jessie)
Release:        8.6
Codename:       jessie

As the title say i'm in search for a backup application/system.
Currently I manage my backups with a little script that I wrote... but 
it does not really serve my needs anymore.
I want to be able to make backups on my main PC and also on my server; 
the backups I would then store on my NAS.


Make a long story short:
Have you guys a recommendation for me?
Is there a specific application you use for your backups guys?

Btw: I dont mind configuring or playing around with new applications, 
every recommendation is welcome ;)



Here is my current backup script (Which is run by cron daily):
#!/bin/bash

TO_BACKUP="/home /etc /var/log"
BACKUP_DIR="/var/backup"
BACKUP_ARCHIVE="backup-`date +%d_%m_%Y-%H:%M`.tar"
TAR_OPTIONS='-cpf'

delete_old_backup() {
    # remove any previous backup archives, if there are any
    if ls "${BACKUP_DIR}"/backup*.tar >/dev/null 2>&1; then
        rm -rf "${BACKUP_DIR}"/backup*
    fi
}

create_new_backup() {
    # archive the configured directories into one dated tar file
    tar $TAR_OPTIONS "${BACKUP_DIR}/${BACKUP_ARCHIVE}" $TO_BACKUP
}

main() {
    delete_old_backup
    create_new_backup
}

main

Greets
mo





Re: Recommendation: Backup system

2016-10-03 Thread Jonathan Dowland
On Mon, Oct 03, 2016 at 02:59:19PM +0300, Teemu Likonen wrote:
> Yes. With some scripting I think "rsync" with "--link-dest" is quite
> ideal for incremental backups. Unchanged files are created as hard links
> for the previous backup files. Every backup generation is just a normal
> and complete file system tree.

That's basically a less convenient rsnapshot, with all the caveats (bad
performance for large trees of files like mailboxes)


-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


signature.asc
Description: Digital signature


Re: Recommendation: Backup system

2016-10-03 Thread Markus Grunwald
Hello Teemu,

> > rsync, whilst an awesome piece of software, is not, on its own, a
> > backup system.
>
> Yes. With some scripting I think "rsync" with "--link-dest" is quite
> ideal for incremental backups. Unchanged files are created as hard links
> for the previous backup files. Every backup generation is just a normal
> and complete file system tree.

I haven't followed this thread closely, but:

To everybody who ponders using rsync for backup, I strongly suggest a
closer look at "dirvish". I'm using it to backup servers, laptops,
raspberries. It is not hard to configure, uses rsync with hard links
for "incremental" backups and keeps older versions of the backups.

--
Markus Grunwald

Fragen zur Mail? http://www.the-grue.de/mail_und_co
http://www.the-grue.de/~markus/markus_grunwald.gpg



Re: Recommendation: Backup system

2016-10-03 Thread Teemu Likonen
Jonathan Dowland [2016-10-03 10:48:57+01] wrote:

> rsync, whilst an awesome piece of software, is not, on its own, a
> backup system.

Yes. With some scripting I think "rsync" with "--link-dest" is quite
ideal for incremental backups. Unchanged files are created as hard links
for the previous backup files. Every backup generation is just a normal
and complete file system tree.
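
A minimal sketch of that idea (the directory layout is made up, not a
description of my actual setup):

  today=$(date +%Y-%m-%d)
  prev=$(ls -1d /backup/????-??-?? 2>/dev/null | tail -n 1)

  if [ -n "$prev" ]; then
      rsync -a --delete --link-dest="$prev" /home/ /backup/"$today"/
  else
      rsync -a --delete /home/ /backup/"$today"/
  fi

  # unchanged files in /backup/$today come out as hard links into $prev,
  # so each dated directory is a complete tree but only changed files
  # consume new space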

-- 
/// Teemu Likonen   - .-..   <https://github.com/tlikonen> //
// PGP: 4E10 55DC 84E9 DFF6 13D7 8557 719D 69D3 2453 9450 ///


signature.asc
Description: PGP signature


Re: Recommendation: Backup system

2016-10-03 Thread Jonathan Dowland
On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:
> Make a long story short:
> Have you guys a recommendation for me?
> Is there a specific application you use for your backups guys?

rdiff-backup[1]. I don't know what your NAS is, whether it's an off-the-shelf
thing or a DIY system, but I tend to initiate the backups on my NAS (which is
a Debian-powered mini itx PC). You can initiate from the clients instead if
you prefer.

rdiff-backup is, conceptually, quite similar to the level of detail you are
working at with your script, so it should be easy to get working. I'd also
consider looking at Obnam[2], and I'd avoid rsnapshot if I were you (it
falls over with large quantities of files, such as mailboxes, and rdiff-backup
basically does the same job but better).

rsync, whilst an awesome piece of software, is not, on its own, a backup
system.

When evaluating or building a backup system, consider all of the points in
"The Tao of Backup"[3]. Finally, remember that RAID is not backup[4]. (JWZ's
advice about a 3rd backup disk here is particularly good. I presently use
rsync for my 3rd backup drive.)

Penultimately, I'll echo what another poster has said, backuppc[5] is very good,
especially if you have windows clients (I realise you don't), although the
backend storage (last time I looked) was a somewhat awkward format that I
think would be painful to extract stuff from in an emergency situation should
the BackupPC software itself fail. As for Bacula, I *think* it's more geared
towards an enterprise-level, perhaps using tape backups, like Amanda. I know
you can use Bacula w/o tapes (Amanda too) but that's kind-of where it came
from and it still "feels" like a tape-oriented system to me (others are welcome
to jump to Bacula's defence here)

Finally, and although Tao of Backup covers this I'll repeat it: test restores.
Test how easy restores are for any tool of choice. Restores, restore performance
and restore UI is infinitely more important than the backup process.

[1] http://rdiff-backup.nongnu.org
[2] http://obnam.org/
[3] http://www.taobackup.com
[4] https://www.jwz.org/doc/backups.html
[5] http://backuppc.sourceforge.net


-- 
Jonathan Dowland
Please do not CC me, I am subscribed to the list.


signature.asc
Description: Digital signature


Re: Recommendation: Backup system

2016-10-02 Thread Bob Weber
On 10/02/2016 08:50 AM, rhkra...@gmail.com wrote:
>> Am 01.10.2016 um 23:06 schrieb Bob Weber:
>>> Like I said backuppc uses incremental and full backups.  The web
>>> interface lets you browse any backup (inc or full) and you see all the
>>> files backed up.  I set the incremental for each day up to a week.  So I
>>> have up to 7 of them.  The full can kept for for however long you want.
>>> I currently keep 12 weekly, 8 bi-weekly and 4 monthly full backups so
>>> that covers almost a year.
> I am not the op, but backuppc sounded pretty nice, so yesterday I tried to 
> install it on both my Wheezy and Jessie systems.  It didn't work (with 
> different failures) on either system--I won't give much detail for now, but 
> I'd 
> just ask a few questions:
>
>* what system (what version of Debian) are you using?
>
>* should I expect that it will properly configure a web server (on the 
> Wheezy system it talked about apache2, iirc), or must I have a properly 
> configured web server before installing backuppc?
>
> Some cryptic notes on the failures:
>
> On wheezy, I thought the installation completed successfully--it ran 
> something 
> it called a script, and, in or after the script it gave me a url to log in to 
> manage backuppc along with a username and password.  When I tried to go to 
> that URL, using either http or https on either of my browsers, it gave me a 
> 404 error.
>
> On jessie, it apparently did not complete the installation, it told me it 
> could not run that initial script.
>
> Suggestions?
>
>

I am running stable 8.5 on my backup machine.  It is an i386 install.  Backuppc is
version backuppc/stable,now 3.3.0-2 i386 [installed]

It uses apache2 (don't know if it will use other web servers) and has put the
following in the apache /etc/apache2/conf-enabled/backuppc.conf file:

Alias /backuppc /usr/share/backuppc/cgi-bin/

<Directory /usr/share/backuppc/cgi-bin/>
AllowOverride None
Allow from all

# Uncomment the line below to ensure that nobody can sniff important
# info from network traffic during editing of the BackupPC config or
# when browsing/restoring backups.
# Requires that you have your webserver set up for SSL (https) access.
#SSLRequireSSL

Options ExecCGI FollowSymlinks
AddHandler cgi-script .cgi
DirectoryIndex index.cgi

AuthUserFile /etc/backuppc/htpasswd
AuthType basic
AuthName "BackupPC admin"
require valid-user
</Directory>

So I would assume that there should be a working install of apache2 before
backuppc is installed.  The only install problem I remember (this was quite a
while ago) was that I wanted the backups put in a directory mounted on a
separate partition.  Even though there is a setting for the backup directory in
backuppc, the directory is hard coded to "/var/lib/backuppc" in some of the
installed backuppc programs.  So to use another location you have to symbolically
link /var/lib/backuppc to that directory before install (or mount the partition
on /var/lib/backuppc).  So if you want to use another directory, delete
/var/lib/backuppc and make the link (ln -s /somewhere-else /var/lib/backuppc,
or mount /dev/sdxx /var/lib/backuppc).  Then run "apt-get install
--reinstall backuppc" and hopefully things will be set up correctly at that new
location.  Since I access the backup server on my local net I don't use https.
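
In command form, that relocation is roughly (device and target path are
placeholders):

service backuppc stop
mv /var/lib/backuppc /backupdisk/backuppc      # or mkfs/mount a partition there
ln -s /backupdisk/backuppc /var/lib/backuppc   # ln -s TARGET LINKNAME
# alternatively, skip the symlink and mount directly:
#   mount /dev/sdxx /var/lib/backuppc
apt-get install --reinstall backuppc
service backuppc start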

I use rsync over ssh to connect to linux servers, so the data transferred goes over
a secure tunnel and my backup of a remote vm is secure.  The backuppc user
should have a public key, and that key is placed in the authorized_keys file of
the clients that are to be backed up. http://backuppc.sourceforge.net/faq/ssh.html
explains this procedure.
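
A minimal sketch of that key setup (the client host name is just an example;
the FAQ above has the details BackupPC actually expects):

# on the BackupPC server, as the backuppc user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@client.example.org
# confirm a password-less login works before pointing BackupPC at the client
ssh root@client.example.org whoami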

Hope this helps.  I have run backuppc for over 10 years at several locations where I
have worked and at home.  It just seems to run.

...Bob



Re: Recommendation: Backup system

2016-10-02 Thread rhkramer
> Am 01.10.2016 um 23:06 schrieb Bob Weber:
> > Like I said backuppc uses incremental and full backups.  The web
> > interface lets you browse any backup (inc or full) and you see all the
> > files backed up.  I set the incremental for each day up to a week.  So I
> > have up to 7 of them.  The full backups can be kept for however long you want.
> > I currently keep 12 weekly, 8 bi-weekly and 4 monthly full backups so
> > that covers almost a year.

I am not the op, but backuppc sounded pretty nice, so yesterday I tried to 
install it on both my Wheezy and Jessie systems.  It didn't work (with 
different failures) on either system--I won't give much detail for now, but I'd 
just ask a few questions:

   * what system (what version of Debian) are you using?

   * should I expect that it will properly configure a web server (on the 
Wheezy system it talked about apache2, iirc), or must I have a properly 
configured web server before installing backuppc?

Some cryptic notes on the failures:

On wheezy, I thought the installation completed successfully--it ran something 
it called a script, and, in or after the script it gave me a url to log in to 
manage backuppc along with a username and password.  When I tried to go to 
that URL, using either http or https on either of my browsers, it gave me a 
404 error.

On jessie, it apparently did not complete the installation, it told me it 
could not run that initial script.

Suggestions?



Re: Recommendation: Backup system

2016-10-02 Thread mo



Am 02.10.2016 um 12:55 schrieb Dan Purgert:

mo wrote:

Am 02.10.2016 um 02:47 schrieb Dan Purgert:

mo wrote:

Maybe this is a little OT, but what kind of backup strategy would you
guys recommend? (Any advice Gene? :) )


If it *must* survive, 3-2-1 is the way to go.

3 copies (Original, Backup, and Backup of the Backup)
2 different media types (such as HDD(Orig) + HDD(Bk1) + Optical(Bk2))
1 stored offsite


Except for the offsite storage (Just for my private data) this sounds
perfect for my needs. Good thing is that i have two free HDD's here with
2TB each, so this would work ;)


Yeah, I don't have a spare copy offsite either -- but I don't
(currently) have any electronic data that would be absolutely
devastating if I lost it.  If / when I have kids or something, then baby
pics among other things will likely have that status.


But i guess i should consider it, losing all the data is quite hard
(Had that a while ago).



[snip]
One question Dan: What do u use to encrypt your files? also openssl?


I just use PGP.



I need to delve into PGP a little more ;) (Currently i use the openssl
command to do my encryption on the backup archives)
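
For comparison, the two approaches look roughly like this (file names are
placeholders):

# symmetric AES with openssl, as described above
openssl enc -aes-256-cbc -salt -in backup.tar.gz -out backup.tar.gz.enc
openssl enc -d -aes-256-cbc -in backup.tar.gz.enc -out backup.tar.gz

# the same job with GnuPG, either symmetric or against your own key
gpg --symmetric --cipher-algo AES256 backup.tar.gz   # produces backup.tar.gz.gpg
gpg --encrypt --recipient <your-key-id> backup.tar.gz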




Re: Recommendation: Backup system

2016-10-02 Thread Dan Purgert
mo wrote:
> Am 02.10.2016 um 02:47 schrieb Dan Purgert:
>> mo wrote:
>>> Maybe this is a little OT, but what kind of backup strategy would you
>>> guys recommend? (Any advice Gene? :) )
>>
>> If it *must* survive, 3-2-1 is the way to go.
>>
>> 3 copies (Original, Backup, and Backup of the Backup)
>> 2 different media types (such as HDD(Orig) + HDD(Bk1) + Optical(Bk2))
>> 1 stored offsite
>
> Except for the offsite storage (Just for my private data) this sounds 
> perfect for my needs. Good thing is that i have two free HDD's here with 
> 2TB each, so this would work ;)

Yeah, I don't have a spare copy offsite either -- but I don't
(currently) have any electronic data that would be absolutely
devastating if I lost it.  If / when I have kids or something, then baby
pics among other things will likely have that status.

> [snip]
> One question Dan: What do u use to encrypt your files? also openssl?

I just use PGP.

-- 
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5  4AEE 8E11 DDF3 1279 A281



Re: Recommendation: Backup system

2016-10-02 Thread mo



Am 02.10.2016 um 02:47 schrieb Dan Purgert:


mo wrote:

Maybe this is a little OT, but what kind of backup strategy would you
guys recommend? (Any advice Gene? :) )


If it *must* survive, 3-2-1 is the way to go.

3 copies (Original, Backup, and Backup of the Backup)
2 different media types (such as HDD(Orig) + HDD(Bk1) + Optical(Bk2))
1 stored offsite


Except for the offsite storage (Just for my private data) this sounds 
perfect for my needs. Good thing is that i have two free HDD's here with 
2TB each, so this would work ;)



That being said, most of my stuff isn't of the "absolutely must survive"
nature, so I just have it on external HDDs.  If something is
particularly important, it's on HDDs and also somewhere like google
drive / dropbox (although, encrypted when at those places).


I currently use dropbox myself to store my conf files (gzipped and then 
encrypted with aes) in case something bad should ever happen to my NAS, 
so that i can at least restore my servers quickly ;)


One question Dan: What do u use to encrypt your files? also openssl?





Greets

mo



Re: Recommendation: Backup system

2016-10-02 Thread mo



Am 01.10.2016 um 23:06 schrieb Bob Weber:

Like I said backuppc uses incremental and full backups.  The web
interface lets you browse any backup (inc or full) and you see all the
files backed up.  I set the incremental for each day up to a week.  So I
have up to 7 of them.  The full backups can be kept for however long you want.
I currently keep 12 weekly, 8 bi-weekly and 4 monthly full backups so
that covers almost a year.


That would serve my needs well also :) (For the moment it is just my 
private data - But either way, data should always be backed up ;) )

Thanks for the info ;)


There is another solution you might like called rsnapshot.  I use it to
backup just my root directory on my desktop before I do updates.  That
way if something goes wrong I can boot into a rescue cd and restore the
system to the state before the update.  I just can't afford to have my
desktop break.  rsnapshot uses rsync so it can back up any computer
that has rsync.  It uses hard links so duplicate files are only stored
once.  You specify how many backups you want to keep and rsnapshot
deletes older ones over that max before adding the new one.  That way
you always have backups (assuming you set the count greater than 1) that
will be there even if there is a transfer error.  This is similar to
your script but is very versatile.


I heard about it but never used it so far, i should give that a try for 
sure :)




*...Bob*
On 10/01/2016 04:42 PM, mo wrote:

Maybe this is a little OT, but what kind of backup strategy would you
guys recommend? (Any advice Gene? :) )






Greets

mo



Re: Recommendation: Backup system

2016-10-01 Thread Gene Heskett
On Saturday 01 October 2016 13:54:29 Clive Menzies wrote:

> On 01/10/16 18:40, Gene Heskett wrote:
> > On Saturday 01 October 2016 12:39:58 Clive Menzies wrote:
> >> Quick question. Are your backups incremental or complete every
> >> night?
> >
> > This is probably better explained in the manpages. Amanda has the
> > concept of doing a full backup of everything in its disklist
> > according to the days you give it. Amanda will then shuffle the
> > schedule such that those full backups are done at random in that
> > cycle, with an eye toward equalizing, as much as possible, the
> > amount of data saved during each run. It does this by advancing the
> > level 0's, as it will not let a given file go beyond that cycle
> > before a full copy is made again. Level 0 is a full copy of a file,
> > level 1 is what's been changed in that file since the last level 0.
> > Level 2 is what's been changed since level 1, ad infinitum, but most
> > usage never gets past level 2.  I have the drive I use for amanda
> > setup as 30 virtual tapes, using one a day, then recycle. With 4
> > machines feeding amanda, that 1Tb drive stays at around 46% used:
> >
> > gene@coyote:~$ df /amandatapes
> > Filesystem 1K-blocks  Used Available Use% Mounted on
> > /dev/sdc2  960798056 410555684 501429932  46% /amandatapes
> >
> > I stayed on the devel branch of amanda, playing the part of the
> > canary in the coal mine for years while it was being heavily
> > developed, but apparently that grant ran out so not a lot has
> > changed in around 5 years. So while I have a self made, slightly
> > newer version on this machine as master, v3.3.7p1, the slaves are
> > all running 3.3.1 clients from the wheezy repo.  And it Just
> > Works(TM).
> >
> > I would be the first to point out that my way is NOT for archival
> > storage due to this 30 day and recycle setup. I could extend it to
> > 60 days on this drive I suppose, but this is not a business.
> >
> > For a business, I would include the price of the drive as a CODB,
> > fill it up and put it on the shelf at a remote location so it
> > doesn't all go up in smoke when the place burns, thereby giving me
> > the ability to recover something 5+ years old, or for however long
> > one has had that setup running.
> >
> > That $100 or less a month for a new commodity drive is far less of a
> > CODB than the archival storage of tapes would be over a 10 year
> > period. And you would have to add in that the tape drive(s) would be
> > out of service for about a month each annually while they spend the
> > holidays in Oklahoma City getting fresh heads and rubber at about a
> > kilobuck each for the rebuild.  Thats been the track record here in
> > my usage of tapes. The hard drives all have been 10x (or) more
> > dependable.
> >
> > And all it takes is getting rid of the idea that one must do a full
> > backup on Friday nights. Yes, amanda can do that, but do it as a
> > separate configuration else you will drive the poor girl out of her
> > mind when she finds out all her carefully worked out plans have all
> > gone aglay.
> >
> > And don't forget that in ones long term business plans, the
> > technology changes with time and there will come a time when you
> > will have nothing in the house that can read todays 1Tb sata hard
> > drive.  So having a storage location to save the old tech that can
> > read those drives should be part of that long term plan also.  And
> > be damned hard headed about it.
>
> Thanks Gene
>
> A dozen years ago, we found a couple posts on incremental backups
> using rsync and adapted it to provide 6 months of rolling incremental
> backups. We've been running this setup ever since - automated nightly,
> incremental backups. I posted our notes on the installation earlier in
> this thread.
>
> All important stuff is kept on the servers but it would be good to
> backup the laptops/workstations too and maybe Amanda is the answer :-)
> We don't install GUIs on our servers; can this be managed from
> individual workstations?
>
> Regards
>
> Clive

The closest you'll get to a gui is using your fav text editor to 
configure it. If building your own, the example directory has copies of 
all that, very liberally commented.  That may also be in the server deb, 
but if I looked, I've forgotten what I found. Heck most of the time I 
can't remember if I had breakfast this morning...  Synaptic, once the 
amanda server has been installed, will be more than happy to show you 
what is available, and where it was put.  There is also a mailing list
at amanda.org with very knowledgeable people to explain the fine details.

As for the other machines here that I backup every night, I only worry 
about the directories that contain linuxcnc or rtapi stuff, all the rest 
of it is no farther away than the install dvd from 
www.linuxcnc.org/downloads if I can't find the written dvd.  The amount 
of data from each of those machines is maybe 25 megabytes each worst 
case per nightly run, 

Re: Recommendation: Backup system

2016-10-01 Thread Dan Purgert

mo wrote:
> Maybe this is a little OT, but what kind of backup strategy would you 
> guys recommend? (Any advice Gene? :) )

If it *must* survive, 3-2-1 is the way to go.

3 copies (Original, Backup, and Backup of the Backup)
2 different media types (such as HDD(Orig) + HDD(Bk1) + Optical(Bk2))
1 stored offsite

That being said, most of my stuff isn't of the "absolutely must survive"
nature, so I just have it on external HDDs.  If something is
particularly important, it's on HDDs and also somewhere like google
drive / dropbox (although, encrypted when at those places).


-- 
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5  4AEE 8E11 DDF3 1279 A281



Re: Recommendation: Backup system

2016-10-01 Thread Bob Weber
Like I said backuppc uses incremental and full backups.  The web interface lets
you browse any backup (inc or full) and you see all the files backed up.  I set
the incremental for each day up to a week.  So I have up to 7 of them.  The full
backups can be kept for however long you want.  I currently keep 12 weekly, 8 bi-weekly
and 4 monthly full backups so that covers almost a year.

There is another solution you might like called rsnapshot.  I use it to back up
just my root directory on my desktop before I do updates.  That way if something
goes wrong I can boot into a rescue cd and restore the system to the state
before the update.  I just can't afford to have my desktop break.  rsnapshot
uses rsync so it can back up any computer that has rsync.  It uses hard links so
duplicate files are only stored once.  You specify how many backups you want to
keep and rsnapshot deletes older ones over that max before adding the new one.
That way you always have backups (assuming you set the count greater than 1)
that will be there even if there is a transfer error.  This is similar to your
script but is very versatile.
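
For reference, the handful of lines one typically edits in the stock
/etc/rsnapshot.conf to get a setup like this (values are examples only, and
the real file wants TABs between fields):

snapshot_root   /backupdisk/rsnapshot/
interval        daily   7
interval        weekly  4
backup          /home/  localhost/
backup          /etc/   localhost/

# then run e.g. "rsnapshot daily" from cron; once the count is exceeded the
# oldest daily.N is dropped, and unchanged files are hard-linked between runs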


*...Bob*
On 10/01/2016 04:42 PM, mo wrote:
> Maybe this is a little OT, but what kind of backup strategy would you guys
> recommend? (Any advice Gene? :) )
>
>



Re: Recommendation: Backup system

2016-10-01 Thread mo



Am 01.10.2016 um 21:37 schrieb Glenn English:



On Oct 1, 2016, at 10:22 AM, Gene Heskett  wrote:

On Saturday 01 October 2016 08:40:35 Mark Fletcher wrote:


I know Gene is a fan of Amanda, I have it on my list to try it out
myself based on positive remarks he has made about it in the past.


Yeah. Amanda's a good solution. I use it with tape.

There are a couple advantages to this:

Amanda's been around for a very long time (in computer years). So, like Debian 
stable, the bugs tend to have been worked out.

Amanda does its backups using plain old standard *nix software, dump or tar. So 
if you have a really nasty data loss, you can restore from a backup without 
dealing with data in a format built by the backup program. It's not a trivial 
job, I'm told, but if you're really stuck...

The tapes aren't disks, and every backup is done to a different and simple 
storage device with few moving parts. (The reliability of a disk RAID1 reduces 
this advantage significantly.)

OK, there are disadvantages too: Amanda's not easy to configure, and tapes and 
tape drives are very expensive and very slow.

But I like the advantages more than the disadvantages. It does a really good 
and reliable job.



Amanda is beginning to be more and more interesting to me :)



Re: Recommendation: Backup system

2016-10-01 Thread mo
Maybe this is a little OT, but what kind of backup strategy would you 
guys recommend? (Any advice Gene? :) )




Re: Recommendation: Backup system

2016-10-01 Thread mo



Am 01.10.2016 um 14:20 schrieb Dan Purgert:


mo wrote:

As the title say i'm in search for a backup application/system.
Currently i manage my backups with a little script that i wrote... but
it does not really serve my needs anymore.
I want to be able to make backups on my main PC and also on my server,
the backups i would then store on my NAS.


There's always rsync from the hosts to the NAS box.


rsync is quite handy for that, indeed ;)
(I actually do that atm - i transfer the data over via rsync to my NAS)


Only thing that I see "wrong" with your script is that:
 - you're deleting the backups (what happens when you delete something,
   and don't catch it til after the next backup run?)


I forgot to mention that i have a script which transfers the current
backup over if the given backup is not yet saved, sorry, i kind of left
that out. :P



 - you're backing up everything, even if it hasn't changed.


That's indeed a problem atm... That's mainly why i'm on the search for a 
more professional approach so to speak. Honestly, i have not yet planned 
a specific strategy...








Thanks Dan ;)

Greets

mo



Re: Recommendation: Backup system

2016-10-01 Thread mo



Am 01.10.2016 um 18:22 schrieb Gene Heskett:

On Saturday 01 October 2016 08:40:35 Mark Fletcher wrote:


On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:

Hi Debian users :)

Information:
Distributor ID: Debian
Description:Debian GNU/Linux 8.6 (jessie)
Release:8.6
Codename:   jessie

As the title say i'm in search for a backup application/system.
Currently i manage my backups with a little script that i wrote...
but it does not really serve my needs anymore.
I want to be able to make backups on my main PC and also on my
server, the backups i would then store on my NAS.

Make a long story short:
Have you guys a recommendation for me?
Is there a specific application you use for your backups guys?


I know Gene is a fan of Amanda, I have it on my list to try it out
myself based on positive remarks he has made about it in the past.

Mark


Yeppers! It runs in the wee hours of the night here, for an hour or so.
Currently backing up this machine, and 3 more on my little home network
here, using its own unique scheduling that distributes the nightly load to
equalize it as much as it can, given its list of what to back up, with nightly
backups totaling 11 to 14 Gb per night.

When I first started using it, I had a DDS2 changer, but it wasn't very
dependable, and at only 4Gb per tape, limited what I could do.  About
the time 500Gb Hd's came out, amanda had worked out a way to use virtual
tapes on a hard drive, so I converted. That has the advantage that
should I need to recover something, its random access instead of
sequential, so I can get back anything I need in just a few minutes,
most of which is spent studying the recovery docs because I've forgotten
how to do it. ;-)

With tapes, I could easily be a half a day recovering the same file
because each level of a backup has to be read from the start of the tape
until its found.  If I need something whose last good backup was a level
3, it would have to back up and find the level0, which is the last full
backup, then find the level 1 and merge any changes, wash, rinse, and
repeat.  A hard drive based system can do all that in seconds.


I guess i'm too young for tapes... or let's put it that way: I've never 
used them. (Why would you need tapes at home anyway :D )



And HD's have become much more dependable than tape, along with the
methods of warning the user that the drive is failing, and that alone
beats tape all the way into the trash bin.


True, the oldest HDD here (which is a 500GB drive) has been running for 7
years straight without a problem.



I was rather worried about the drive I use for amanda's v-tapes when I
saw almost 3 years ago that smartctl said it had 25 Reallocated sectors.
It still says 25 10 seconds ago.  That drive, now a 1Tb drive
as /dev/sdc, now has 58357 Power up hours on it. I don't care what you
may have paid for a tape library, it cannot survive that long, when this
HD has done that for a $100 bill at the time I bought it.  And I can put
this HD in a shirt pocket.  The tape library would need a refrigerator
rated 2 wheel dolly to move it, and a similar second dolly to move its
tapes.

Whats not to like?

I've had far more trouble dealing with tar changes as its been updated
over the years than I've had with amanda itself. All have been fixable
in a day or two once you can post the breakage on the tar list. amanda
uses tar to do the bare metal work.  A wrapper for tar in that sense,
and I then wrap amanda with a bash script that fixes the always a day
late record keeping that you can add to the v-tape image by making a
copy of amanda's database and writing it to the V-tape amanda just used,
so I can lose the main drive and have to start with a new install on a
fresh drive. With amanda I would install from the repo's on the new
drive. Its 2 or 3 steps, but in an hours time I can have this working
wheezy system with all its dross, put back on a new drive.

Cheers, Gene Heskett



Thanks for the info Gene ;)



Re: Recommendation: Backup system

2016-10-01 Thread Glenn English

> On Oct 1, 2016, at 11:54 AM, Clive Menzies  wrote:
> 
> We don't install GUIs on our servers; can this be managed from individual 
> workstations?

I'm not sure, but I think Amanda was written before GUIs existed :-)

-- 
Glenn English



Re: Recommendation: Backup system

2016-10-01 Thread Glenn English

> On Oct 1, 2016, at 10:22 AM, Gene Heskett  wrote:
> 
> On Saturday 01 October 2016 08:40:35 Mark Fletcher wrote:
> 
>> I know Gene is a fan of Amanda, I have it on my list to try it out
>> myself based on positive remarks he has made about it in the past.

Yeah. Amanda's a good solution. I use it with tape.

There are a couple advantages to this:

Amanda's been around for a very long time (in computer years). So, like Debian 
stable, the bugs tend to have been worked out.

Amanda does its backups using plain old standard *nix software, dump or tar. So 
if you have a really nasty data loss, you can restore from a backup without 
dealing with data in a format built by the backup program. It's not a trivial 
job, I'm told, but if you're really stuck...

The tapes aren't disks, and every backup is done to a different and simple 
storage device with few moving parts. (The reliability of a disk RAID1 reduces 
this advantage significantly.)

OK, there are disadvantages too: Amanda's not easy to configure, and tapes and 
tape drives are very expensive and very slow. 

But I like the advantages more than the disadvantages. It does a really good 
and reliable job.

-- 
Glenn English



Re: Recommendation: Backup system

2016-10-01 Thread Clive Menzies

On 01/10/16 18:40, Gene Heskett wrote:

On Saturday 01 October 2016 12:39:58 Clive Menzies wrote:


Quick question. Are your backups incremental or complete every night?

This is probably better explained in the manpages. Amanda has the concept
of doing a full backup of everything in its disklist according to the
days you give it. Amanda will then shuffle the schedule such that those
full backups are done at random in that cycle, with an eye toward
equalizing, as much as possible, the amount of data saved during each
run. It does this by advancing the level 0's as it will not let a
given file go beyond that cycle before a full copy is made again.
Level 0 is a full copy of a file, level 1 is what's been changed in that
file since the last level 0. Level 2 is what's been changed since level
1, ad infinitum, but most usage never gets past level 2.  I have the
drive I use for amanda setup as 30 virtual tapes, using one a day, then
recycle. With 4 machines feeding amanda, that 1Tb drive stays at around
46% used:

gene@coyote:~$ df /amandatapes
Filesystem 1K-blocks  Used Available Use% Mounted on
/dev/sdc2  960798056 410555684 501429932  46% /amandatapes

I stayed on the devel branch of amanda, playing the part of the canary in
the coal mine for years while it was being heavily developed, but
apparently that grant ran out so not a lot has changed in around 5
years. So while I have a self made, slightly newer version on this
machine as master, v3.3.7p1, the slaves are all running 3.3.1 clients
from the wheezy repo.  And it Just Works(TM).

I would be the first to point out that my way is NOT for archival storage
due to this 30 day and recycle setup. I could extend it to 60 days on
this drive I suppose, but this is not a business.

For a business, I would include the price of the drive as a CODB, fill it
up and put it on the shelf at a remote location so it doesn't all go up
in smoke when the place burns, thereby giving me the ability to recover
something 5+ years old, or for however long one has had that setup
running.

That $100 or less a month for a new commodity drive is far less of a CODB
than the archival storage of tapes would be over a 10 year period. And
you would have to add in that the tape drive(s) would be out of service
for about a month each annually while they spend the holidays in
Oklahoma City getting fresh heads and rubber at about a kilobuck each
for the rebuild.  Thats been the track record here in my usage of tapes.
The hard drives all have been 10x (or) more dependable.

And all it takes is getting rid of the idea that one must do a full
backup on Friday nights. Yes, amanda can do that, but do it as a
separate configuration else you will drive the poor girl out of her mind
when she finds out all her carefully worked out plans have all gone
aglay.

And don't forget that in ones long term business plans, the technology
changes with time and there will come a time when you will have nothing
in the house that can read todays 1Tb sata hard drive.  So having a
storage location to save the old tech that can read those drives should
be part of that long term plan also.  And be damned hard headed about
it.


Thanks Gene

A dozen years ago, we found a couple posts on incremental backups using 
rsync and adapted it to provide 6 months of rolling incremental backups. 
We've been running this setup ever since - automated nightly, 
incremental backups. I posted our notes on the installation earlier in 
this thread.


All important stuff is kept on the servers but it would be good to 
backup the laptops/workstations too and maybe Amanda is the answer :-) 
We don't install GUIs on our servers; can this be managed from 
individual workstations?


Regards

Clive

--
Clive Menzies
http://freecriticalthinking.org



Re: Recommendation: Backup system

2016-10-01 Thread Gene Heskett
On Saturday 01 October 2016 12:39:58 Clive Menzies wrote:

> On 01/10/16 17:22, Gene Heskett wrote
>
> > Yeppers! It runs in the wee hours of the night here, for an hour or
> > so. Currently backing up this machine, and 3 more on my little home
> > network here, using its own unique scheduling that distributes the nightly
> > load to equalize it as much as it can, given its list of what to back up,
> > with nightly backups totaling 11 to 14 Gb per night.
>
> Hi Gene
>
> Quick question. Are your backups incremental or complete every night?
>
> Regards
>
> Clive

This is probably better explained in the manpages. Amanda has the concept 
of doing a full backup of everything in its disklist according to the 
days you give it. Amanda will then shuffle the schedule such that those 
full backups are done at random in that cycle, with an eye toward 
equalizing, as much as possible, the amount of data saved during each 
run. It does this by advancing the level 0's as it will not let a
given file go beyond that cycle before a full copy is made again.
Level 0 is a full copy of a file, level 1 is what's been changed in that
file since the last level 0. Level 2 is what's been changed since level
1, ad infinitum, but most usage never gets past level 2.  I have the
drive I use for amanda setup as 30 virtual tapes, using one a day, then 
recycle. With 4 machines feeding amanda, that 1Tb drive stays at around 
46% used:

gene@coyote:~$ df /amandatapes
Filesystem 1K-blocks  Used Available Use% Mounted on
/dev/sdc2  960798056 410555684 501429932  46% /amandatapes

I stayed on the devel branch of amanda, playing the part of the canary in 
the coal mine for years while it was being heavily developed, but 
apparently that grant ran out so not a lot has changed in around 5 
years. So while I have a self made, slightly newer version on this 
machine as master, v3.3.7p1, the slaves are all running 3.3.1 clients 
from the wheezy repo.  And it Just Works(TM).

I would be the first to point out that my way is NOT for archival storage 
due to this 30 day and recycle setup. I could extend it to 60 days on 
this drive I suppose, but this is not a business.

For a business, I would include the price of the drive as a CODB, fill it 
up and put it on the shelf at a remote location so it doesn't all go up 
in smoke when the place burns, thereby giving me the ability to recover 
something 5+ years old, or for however long one has had that setup 
running.

That $100 or less a month for a new commodity drive is far less of a CODB 
than the archival storage of tapes would be over a 10 year period. And 
you would have to add in that the tape drive(s) would be out of service 
for about a month each annually while they spend the holidays in 
Oklahoma City getting fresh heads and rubber at about a kilobuck each 
for the rebuild.  That's been the track record here in my usage of tapes.
The hard drives all have been 10x or more dependable.

And all it takes is getting rid of the idea that one must do a full 
backup on Friday nights. Yes, amanda can do that, but do it as a 
separate configuration else you will drive the poor girl out of her mind 
when she finds out all her carefully worked out plans have all gone 
aglay.

And don't forget that in ones long term business plans, the technology 
changes with time and there will come a time when you will have nothing 
in the house that can read todays 1Tb sata hard drive.  So having a 
storage location to save the old tech that can read those drives should 
be part of that long term plan also.  And be damned hard headed about 
it.
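
Very roughly, the kind of amanda.conf stanzas involved in a vtape rotation
like that look as follows (numbers and paths are only for illustration,
sketched from memory rather than copied from a real config):

dumpcycle 30 days        # every disklist entry gets a level 0 within the cycle
runspercycle 30
tapecycle 30             # 30 vtapes, reused in rotation
tpchanger "chg-disk:/amandatapes/daily"   # vtapes live in slot1/, slot2/, ...
tapetype HARDDISK
define tapetype HARDDISK {
    length 30 gbytes
}
# what gets backed up is listed in the separate "disklist" file, one
# host / path / dumptype entry per line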

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Recommendation: Backup system

2016-10-01 Thread Dan Purgert

mo wrote:
> As the title say i'm in search for a backup application/system.
> Currently i manage my backups with a little script that i wrote... but 
> it does not really serve my needs anymore.
> I want to be able to make backups on my main PC and also on my server, 
> the backups i would then store on my NAS.

There's always rsync from the hosts to the NAS box.

Only thing that I see "wrong" with your script is that:
 - you're deleting the backups (what happens when you delete something,
   and don't catch it til after the next backup run?)
 - you're backing up everything, even if it hasn't changed.
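
On the second point, even plain GNU tar can skip unchanged files between runs
by keeping a snapshot file, which is roughly what the bigger tools automate
(paths below are just examples):

# first run writes a full archive and records state in home.snar (level 0)
tar --listed-incremental=/var/backup/home.snar -cpf /var/backup/home-full.tar /home
# later runs with the same .snar only pick up files changed since the last run
tar --listed-incremental=/var/backup/home.snar -cpf /var/backup/home-$(date +%F).tar /home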




-- 
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5  4AEE 8E11 DDF3 1279 A281



Re: Recommendation: Backup system

2016-10-01 Thread Clive Menzies

On 01/10/16 17:22, Gene Heskett wrote

Yeppers! It runs in the wee hours of the night here, for an hour or so.
Currently backing up this machine, and 3 more on my little home network
here, using its own unique scheduling that distributes the nightly load to
equalize it as much as it can, given its list of what to back up, with nightly
backups totaling 11 to 14 Gb per night.


Hi Gene

Quick question. Are your backups incremental or complete every night?

Regards

Clive

--
Clive Menzies
http://freecriticalthinking.org



Re: Recommendation: Backup system

2016-10-01 Thread Gene Heskett
On Saturday 01 October 2016 08:40:35 Mark Fletcher wrote:

> On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:
> > Hi Debian users :)
> >
> > Information:
> > Distributor ID: Debian
> > Description:Debian GNU/Linux 8.6 (jessie)
> > Release:8.6
> > Codename:   jessie
> >
> > As the title say i'm in search for a backup application/system.
> > Currently i manage my backups with a little script that i wrote...
> > but it does not really serve my needs anymore.
> > I want to be able to make backups on my main PC and also on my
> > server, the backups i would then store on my NAS.
> >
> > Make a long story short:
> > Have you guys a recommendation for me?
> > Is there a specific application you use for your backups guys?
>
> I know Gene is a fan of Amanda, I have it on my list to try it out
> myself based on positive remarks he has made about it in the past.
>
> Mark

Yeppers! It runs in the wee hours of the night here, for an hour or so. 
Currently backing up this machine, and 3 more on my little home network 
here, using its own unique scheduling that distributes the nightly load to
equalize it as much as it can, given its list of what to back up, with nightly
backups totaling 11 to 14 Gb per night.

When I first started using it, I had a DDS2 changer, but it wasn't very 
dependable, and at only 4Gb per tape, limited what I could do.  About 
the time 500Gb Hd's came out, amanda had worked out a way to use virtual 
tapes on a hard drive, so I converted. That has the advantage that 
should I need to recover something, its random access instead of 
sequential, so I can get back anything I need in just a few minutes, 
most of which is spent studying the recovery docs because I've forgotten 
how to do it. ;-)

With tapes, I could easily be a half a day recovering the same file 
because each level of a backup has to be read from the start of the tape 
until its found.  If I need something whose last good backup was a level 
3, it would have to back up and find the level0, which is the last full 
backup, then find the level 1 and merge any changes, wash, rinse, and 
repeat.  A hard drive based system can do all that in seconds.

And HD's have become much more dependable than tape, along with the 
methods of warning the user that the drive is failing, and that alone 
beats tape all the way into the trash bin.

I was rather worried about the drive I use for amanda's v-tapes when I 
saw almost 3 years ago that smartctl said it had 25 Reallocated sectors.  
It still said 25 when I checked 10 seconds ago.  That drive, now a 1Tb drive
as /dev/sdc, now has 58357 Power up hours on it. I don't care what you 
may have paid for a tape library, it cannot survive that long, when this 
HD has done that for a $100 bill at the time I bought it.  And I can put 
this HD in a shirt pocket.  The tape library would need a refrigerator 
rated 2 wheel dolly to move it, and a similar second dolly to move its 
tapes.

Whats not to like?

I've had far more trouble dealing with tar changes as its been updated 
over the years than I've had with amanda itself. All have been fixable 
in a day or two once you can post the breakage on the tar list. amanda 
uses tar to do the bare metal work.  A wrapper for tar in that sense, 
and I then wrap amanda with a bash script that fixes the always a day 
late record keeping that you can add to the v-tape image by making a 
copy of amanda's database and writing it to the V-tape amanda just used, 
so I can lose the main drive and have to start with a new install on a 
fresh drive. With amanda I would install from the repo's on the new 
drive. It's 2 or 3 steps, but in an hour's time I can have this working
wheezy system with all its dross, put back on a new drive.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Recommendation: Backup system

2016-10-01 Thread Mark Fletcher
On Sat, Oct 01, 2016 at 10:46:07AM -0400, Bob Weber wrote:
> I use backuppc.  It is web browser based setup and usage.  It takes 
> incremental
> and full backups that can remain as long as you want or have space for.  It 
> can
> browse files by name or in a version mode where you can see the date where a
> file changed and restore an earlier version if you want (or to a separate 
> download directory).  It compresses files for space and only keeps one copy 
> of a
> file's data if it is located in different directories or servers (using hard
> links as needed).  It can even backup user data for windows users (samba).  I
> use the rsync transfer for Linux machines and even with windows running 
> Cygwin.
> 
> I currently backup 8 computers going back almost 1 year.  I even backup a vm 
> at
> digital ocean.  Backuppc reports this:
> 
> 144 full backups of total size 8951.56GB (prior to pooling and compression),
> 57 incr backups of total size 57.13GB (prior to pooling and compression).
> Pool is 358.94GB comprising 1903010 files and 4369 directories (as of 10/1 
> 01:09).
> 
> So 8951GB is compressed or pooled into just 358 GB! 

Wow, that is impressive!

Mark



Re: Recommendation: Backup system

2016-10-01 Thread mo

Does anyone have experience with Bacula?
I heard good things about it, although i never looked into it... maybe 
someone has and can give me his report on it :)




Re: Recommendation: Backup system

2016-10-01 Thread mo



Am 01.10.2016 um 16:46 schrieb Bob Weber:

I use backuppc.  It is web browser based setup and usage.  It takes
incremental and full backups that can remain as long as you want or have
space for.  It can browse files by name or in a version mode where you
can see the date where a file changed and restore an earlier version if
you want (or to a separate  download directory).  It compresses files
for space and only keeps one copy of a file's data if it is located in
different directories or servers (using hard links as needed).  It can
even backup user data for windows users (samba).  I use the rsync
transfer for Linux machines and even with windows running Cygwin.


I like the web interface part i have to say (Just browsed their website :) )
I will look into it ;)
(I currently have no windows machines here, but that might change in the
future since i plan to deploy Windows Server 2012 R2)


Thanks for the hint ;)


I currently backup 8 computers going back almost 1 year.  I even backup
a vm at digital ocean.  Backuppc reports this:

144 full backups of total size 8951.56GB (prior to pooling and compression),
57 incr backups of total size 57.13GB (prior to pooling and compression).
Pool is 358.94GB comprising 1903010 files and 4369 directories (as of
10/1 01:09).

So 8951GB is compressed or pooled into just 358 GB!


The compression is quite nice, i might have at least 100GB to backup 
here. (Maybe a little more depending on what i want to save)



*...Bob*


Greets

mo



Re: Recommendation: Backup system

2016-10-01 Thread Bob Weber
I use backuppc.  It is web browser based setup and usage.  It takes incremental
and full backups that can remain as long as you want or have space for.  It can
browse files by name or in a version mode where you can see the date where a
file changed and restore an earlier version if you want (or to a separate 
download directory).  It compresses files for space and only keeps one copy of a
file's data if it is located in different directories or servers (using hard
links as needed).  It can even backup user data for windows users (samba).  I
use the rsync transfer for Linux machines and even with windows running Cygwin.

I currently backup 8 computers going back almost 1 year.  I even backup a vm at
digital ocean.  Backuppc reports this:

144 full backups of total size 8951.56GB (prior to pooling and compression),
57 incr backups of total size 57.13GB (prior to pooling and compression).
Pool is 358.94GB comprising 1903010 files and 4369 directories (as of 10/1 
01:09).

So 8951GB is compressed or pooled into just 358 GB! 

*...Bob*
On 10/01/2016 05:37 AM, mo wrote:
> Hi Debian users :)
>
> Information:
> Distributor ID: Debian
> Description:    Debian GNU/Linux 8.6 (jessie)
> Release:        8.6
> Codename:       jessie
>
> As the title say i'm in search for a backup application/system.
> Currently i manage my backups with a little script that i wrote... but it does
> not really serve my needs anymore.
> I want to be able to make backups on my main PC and also on my server, the
> backups i would then store on my NAS.
>
> Make a long story short:
> Have you guys a recommendation for me?
> Is there a specific application you use for your backups guys?
>
> Btw: I dont mind configuring or playing around with new applications, every
> recommendation is welcome ;)
>
>
> Here is my current backup script (Which is run by cron daily):
> #!/bin/bash
>
> TO_BACKUP="/home /etc /var/log"
> BACKUP_DIR="/var/backup"
> BACKUP_ARCHIVE="backup-`date +%d_%m_%Y-%H:%M`.tar"
> TAR_OPTIONS='-cpf'
>
> delete_old_backup() {
> if [ -f ${BACKUP_DIR}/backup*.tar ]; then
> rm -rf $BACKUP_DIR/backup*
> fi
> }
>
> create_new_backup() {
> tar $TAR_OPTIONS ${BACKUP_DIR}/$BACKUP_ARCHIVE $TO_BACKUP
> }
>
> main() {
> delete_old_backup
> create_new_backup
> }
>
> main
>
> Greets
> mo
>
>



Re: Recommendation: Backup system

2016-10-01 Thread mo



Am 01.10.2016 um 14:40 schrieb Mark Fletcher:

On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:

Hi Debian users :)

Information:
Distributor ID: Debian
Description:Debian GNU/Linux 8.6 (jessie)
Release:8.6
Codename:   jessie

As the title say i'm in search for a backup application/system.
Currently i manage my backups with a little script that i wrote... but it
does not really serve my needs anymore.
I want to be able to make backups on my main PC and also on my server, the
backups i would then store on my NAS.

Make a long story short:
Have you guys a recommendation for me?
Is there a specific application you use for your backups guys?


I know Gene is a fan of Amanda, I have it on my list to try it out
myself based on positive remarks he has made about it in the past.


Sounds good to me :) - I'll set that on the todo list myself too (Just
checked the webpage of Amanda and it does sound pretty good!).


Gene, can you elaborate on Amanda? :) (If Gene does read this mail)


Mark



Greets

mo



Re: Recommendation: Backup system

2016-10-01 Thread Mark Fletcher
On Sat, Oct 01, 2016 at 11:37:31AM +0200, mo wrote:
> Hi Debian users :)
> 
> Information:
> Distributor ID:   Debian
> Description:  Debian GNU/Linux 8.6 (jessie)
> Release:  8.6
> Codename: jessie
> 
> As the title say i'm in search for a backup application/system.
> Currently i manage my backups with a little script that i wrote... but it
> does not really serve my needs anymore.
> I want to be able to make backups on my main PC and also on my server, the
> backups i would then store on my NAS.
> 
> Make a long story short:
> Have you guys a recommendation for me?
> Is there a specific application you use for your backups guys?

I know Gene is a fan of Amanda, I have it on my list to try it out 
myself based on positive remarks he has made about it in the past.

Mark



Recommendation: Backup system

2016-10-01 Thread mo

Hi Debian users :)

Information:
Distributor ID: Debian
Description:    Debian GNU/Linux 8.6 (jessie)
Release:        8.6
Codename:       jessie

As the title say i'm in search for a backup application/system.
Currently i manage my backups with a little script that i wrote... but 
it does not really serve my needs anymore.
I want to be able to make backups on my main PC and also on my server, 
the backups i would then store on my NAS.


Make a long story short:
Have you guys a recommendation for me?
Is there a specific application you use for your backups guys?

Btw: I dont mind configuring or playing around with new applications, 
every recommendation is welcome ;)



Here is my current backup script (Which is run by cron daily):
#!/bin/bash

TO_BACKUP="/home /etc /var/log"
BACKUP_DIR="/var/backup"
BACKUP_ARCHIVE="backup-`date +%d_%m_%Y-%H:%M`.tar"
TAR_OPTIONS='-cpf'

delete_old_backup() {
    # "[ -f glob ]" breaks as soon as more than one archive matches,
    # so loop over the matches instead
    for old in "$BACKUP_DIR"/backup*.tar; do
        [ -e "$old" ] && rm -f "$old"
    done
}

create_new_backup() {
    tar $TAR_OPTIONS "$BACKUP_DIR/$BACKUP_ARCHIVE" $TO_BACKUP
}
}

main() {
delete_old_backup
create_new_backup
}

main

Greets
mo



Re: Re(2): Backup system for use when Debian fails.

2012-07-16 Thread Keith McKenzie
Not quite OT :-

For a Debian Live recovery (or install) distro try SalineOS (XFCE desktop)

http://www.salineos.com/

--
Sent from FOSS (Free Open Source Software)
Debian GNU/Linux





Re: Re(2): Backup system for use when Debian fails.

2012-07-16 Thread Camaleón
On Sun, 15 Jul 2012 17:38:33 -0700, peasthope wrote:

 * From: Camaleón noela...@gmail.com *  Date: Mon, 18 Jun 2012
 15:52:11 +0000 (UTC)
 Well, that's what LiveCDs and USB sticks with a running system are
 aimed for, ...
 
 Starting from functional cold hardware with a LiveCD or USB stick,
 approximately how much time is needed to put a reference document from a
 server on the screen.
 http://www.gnu.org:80/software/grub/manual/grub.html text/html for
 example.  Half a minute?  Five minutes?  More?

It's very quick and usually after booting you're done... unless you are
using special hardware or a special setup, e.g., no ethernet connection available
in the box and a wifi card that requires extra non-free firmware, in which
case you need to spend a bit more time getting the network ready.

Greetings,

-- 
Camaleón





Re(2): Backup system for use when Debian fails.

2012-07-15 Thread peasthope

*   From: Camaleón noela...@gmail.com
*   Date: Mon, 18 Jun 2012 15:52:11 +0000 (UTC)
 Well, that's what LiveCDs and USB sticks with a running system are aimed for, 
 ...

Starting from functional cold hardware with a LiveCD or 
USB stick, approximately how much time is needed to put 
a reference document from a server on the screen.
http://www.gnu.org:80/software/grub/manual/grub.html text/html
for example.  Half a minute?  Five minutes?  More?

Thanks,... Peter E.


-- 456789 123456789 123456789 123456789 123456789 123456789 123456789 12
Telephone 1 360 639 0202.  Bcc: peter at easthope.ca  http://carnot.yi.org/
http://members.shaw.ca/peasthope/index.html#Itinerary 





Re: Re(2): Backup system for use when Debian fails.

2012-07-15 Thread Chris Bannister
On Sun, Jul 15, 2012 at 05:38:33PM -0700, peasth...@shaw.ca wrote:
 
 * From: Camaleón noela...@gmail.com
 * Date: Mon, 18 Jun 2012 15:52:11 +0000 (UTC)
  Well, that's what LiveCDs and USB sticks with a running system are aimed 
  for, ...
 
 Starting from functional cold hardware with a LiveCD or 
 USB stick, approximately how much time is needed to put 
 a reference document from a server on the screen.
 http://www.gnu.org:80/software/grub/manual/grub.html text/html
 for example.  Half a minute?  Five minutes?  More?

 5

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Re: Re (2): Backup system for use when Debian fails.

2012-06-19 Thread Camaleón
On Mon, 18 Jun 2012 11:47:11 -0700, peasthope wrote:

 From: Camaleon noela...@gmail.com
 Date: Mon, 18 Jun 2012 15:52:11 + (UTC)
 Well, that's what LiveCDs and USB sticks with a running system are
 aimed for, to be a lifesaver when your main system cannot boot or is
 completely hosed.
 
 Yes, I'm flaunting a prejudice.
 
 Also a spare Debian system could be kept with a policy of never monkey
 with it and with the primary system at the same time.  Older laptops
 are discarded like paper cups.

Agree. 

Anyway, having a different OS can be of help when there's a problem 
affecting only your main system's OS version. Though it's a corner case, 
it can also happen.
 
 From: peasth...@shaw.ca
 Date: Sun, 17 Jun 2012 10:25:02 -0700
 ... unsophisticated and superbly reliable.
 
 uncomplicated would have been better than unsophisticated.

Don't be too picky with your own wording. At least my non-English mind 
could catch the essence of the above unsophisticated :-)

Greetings,

-- 
Camaleón





Re: Backup system for use when Debian fails.

2012-06-18 Thread Camaleón
On Sun, 17 Jun 2012 10:25:02 -0700, peasthope wrote:

 This weekend I arrived home to find the Debian Squeeze system unable to
 accept a login.  During startup, this line appears.
   Starting deferred execution scheduler: atd failed!

(...)

 I'm shelving the problem for a day to two but want to mention that my
 backup system in such circumstances is ETH Native Oberon. 
 http://www.ethoberon.ethz.ch/
 
 It receives and sends email, browses the Web sufficiently for
 troubleshooting the Linux system and is unsophisticated and superbly
 reliable.  Mentioning it only in case others can benefit.

Well, that's what LiveCDs and USB sticks with a running system are aimed 
for, to be a lifesaver when your main system cannot boot or is completely 
hosed.

The only "but" I'll raise about that Oberon OS is that it will require
extra maintenance and some learning curve, and neither comes for free;
that is, they're time consuming, whereas if you have external media with
Debian loaded on it, being a well-known system, you already know what to
do in the least possible time.

Greetings,

-- 
Camaleón





trivial udev rule wreaks havoc; was Re: Backup system for use when Debian fails.

2012-06-18 Thread peasthope
From:   peasth...@shaw.ca
Date:   Sun, 17 Jun 2012 10:25:02 -0700
 This weekend I arrived home to find the Debian Squeeze system 
 unable to accept a login. 

Turns out that I inflicted the problem by creating udev 
rules for the purpose of making the Unibrain Fire-i camera 
work.  

peter@joule:~$ grep edit /lib/udev/rules.d/*acl*
# do not edit this file, it will be overwritten on update

So I created two files.

peter@joule:~$ cat /etc/udev/rules.d/70-acl.rules
# Rule added for IIDC 1.04, Unibrain Fire-i camera, 2012-06-15.
SUBSYSTEM==firewire, ATTR{units}==*0x00a02d:0x00100*,  GROUP=video

peter@joule:~$ cat /etc/udev/rules.d/91-permissions.rules
# Rule added for IIDC 1.04, Unibrain Fire-i camera, 2012-06-15.
SUBSYSTEM==firewire, ATTR{units}==*0x00a02d:0x00100*,  GROUP=video

/etc/udev/rules.d/ must be the right place for these 
rules.  Did I choose the file names badly or what?

Thanks,... Peter E.

-- 
Telephone 1 360 639 0202.  Bcc: peter at easthope.ca
http://carnot.yi.org/ 
http://members.shaw.ca/peasthope/index.html#Itinerary 





Re (2): Backup system for use when Debian fails.

2012-06-18 Thread peasthope
From:   Camaleon noela...@gmail.com
Date:   Mon, 18 Jun 2012 15:52:11 + (UTC)
 Well, that's what LiveCDs and USB sticks with a running system are aimed 
 for, to be a lifesaver when your main system cannot boot or is completely 
 hosed.

Yes, I'm flaunting a prejudice.

Also a spare Debian system could be kept with a policy of 
never monkey with it and with the primary system at the 
same time.  Older laptops are discarded like paper cups.

From:   peasth...@shaw.ca
Date:   Sun, 17 Jun 2012 10:25:02 -0700
 ... unsophisticated and superbly reliable. 

"Uncomplicated" would have been better than "unsophisticated".

Regards,  ... Peter E.

-- 
Telephone 1 360 639 0202.  Bcc: peter at easthope.ca
http://carnot.yi.org/ 
http://members.shaw.ca/peasthope/index.html#Itinerary 





Backup system for use when Debian fails.

2012-06-17 Thread peasthope
Hi,

This weekend I arrived home to find the Debian Squeeze system 
unable to accept a login.  During startup, this line appears.
  Starting deferred execution scheduler: atd failed!

Xdm behaves as though the wrong password is given.  Command 
line login works but then X fails with this error.
  (EE) open /dev/fb0: no such file or directory

Login with telnet on an Ethernet connection gets this 
message repeatedly for about 30 s prior to the command prompt.
  root@joule:/home/peter# ls -l /dev/null
Permissions are easily fixed but are back to the 
wrong values after a reboot. 

I'm shelving the problem for a day or two but want 
to mention that my backup system in such circumstances 
is ETH Native Oberon.  http://www.ethoberon.ethz.ch/

It receives and sends email, browses the Web 
sufficiently for troubleshooting the Linux system and 
is unsophisticated and superbly reliable.  Mentioning 
it only in case others can benefit.

TTFN,   ... Peter E.

-- 
Telephone 1 360 639 0202.  Bcc: peter at easthope.ca
http://carnot.yi.org/ 
http://members.shaw.ca/peasthope/index.html#Itinerary 





Re: Backup System part deux

2012-02-11 Thread Rob Owens
On Thu, Feb 09, 2012 at 06:08:04PM -0800, Gary Roach wrote:
 
 
 Thanks for all of the help last time . I installed Backupninja
 easily. After getting it running I realized that it didn't quite do
 what I wanted. I was looking for a package that could be installed
 in the server with little need for additional programs to be
 installed on each machine. So, after some searching, I switched to
 Backuppc. Now Backuppc will do what I want but it needs to run
 first. To cut to the chase I have (I think) the config.pl file set
 to backup /home, /etc, and /var with a couple of exclusions. As the

You should be able to do all your configuration from the web interface.
(Once you get backuppc running, of course).

 debian readme file suggested, I pasted the supplied script into the
 Apache2 config file. This seems to be working OK. I then moved the
 /var/lib/backuppc files to my spare hard drive '/backupdisk' and did
 a soft link from /var/lib to /backupdisk/backuppc. I then did the
 following
 
 root/.../etc# service backuppc start
 Starting backuppc...2012-02-09 17:17:05 Can't create a test hardlink
 between a file in /backupdisk/pc and /backupdisk/cpool.  Either
 these are different file systems, or this file system doesn't
 support hardlinks, or these directories don't exist, or there is a
 permissions problem, or the file system is out of inodes or full.
 Use df, df -i, and ls -ld to check each of these possibilities.
 Quitting...
 
 I searched for an answer and found dozens, all different and mostly
 old.  They either made no sense or didn't work.  Specifically, all of my
 my disks on all of my machines are ext3 file systems. Changing the
 permissions to 755 on all of the files didn't work.
 
 The paths involved look like:
 /var/lib/backuppc -> /backupdisk/backuppc
 /backupdisk/backuppc/ cpool, pc etc.
 
Your symlink should be ok.  I think you've got a permissions/ownership
problem.  /var/lib/backuppc needs to be owned by backuppc.backuppc

ext3 is definitely ok.  FAT32 is the most common one that you cannot use
with backuppc since it doesn't support hard links.
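
A quick way to check both points, as a rough sketch (adjust the paths to 
whatever directories the error message actually names):

  # all of these should be owned by backuppc:backuppc
  ls -ld /backupdisk/backuppc /backupdisk/backuppc/pc /backupdisk/backuppc/cpool
  chown -R backuppc:backuppc /backupdisk/backuppc   # fix ownership if it is wrong
  # crude hardlink test between the pc and cpool trees (run as root):
  touch /backupdisk/backuppc/pc/.linktest
  ln /backupdisk/backuppc/pc/.linktest /backupdisk/backuppc/cpool/.linktest \
    && echo "hardlinks OK"
  rm -f /backupdisk/backuppc/pc/.linktest /backupdisk/backuppc/cpool/.linktest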

 I am running Debian Squeeze on all systems
 I am using rsync and have ssh installed on all systems.
 All of the systems communicate with each other over ssh
 My backup server is running two GB size disks. Backuppc is running
 on one disk and using the other for the backup data.

I have backuppc running on Squeeze.  Let me know if you have any other
specific questions.  There is a backuppc mailing list, as well.

-Rob





Re: Backup System part deux

2012-02-10 Thread Sebastian Steinhuber
Hi Gary!

Am 10.02.2012 03:08, schrieb Gary Roach:
 
 
 Thanks for all of the help last time . I installed Backupninja easily.
 After getting it running I realized that it didn't quite do what I
 wanted. I was looking for a package that could be installed in the
 server with little need for additional programs to be installed on each
 machine. So, after some searching, I switched to Backuppc. Now Backuppc
 will do what I want but it needs to run first. To cut to the chase I
 have (I think) the config.pl file set to backup /home, /etc, and /var
 with a couple of exclusions. As the debian readme file suggested, I
 pasted the supplied script into the Apache2 config file. This seems to
 be working OK. I then moved the /var/lib/backuppc files to my spare hard
 drive '/backupdisk' and did a soft link from /var/lib to
 /backupdisk/backuppc. I then did the following
 
 root/.../etc# service backuppc start
 Starting backuppc...2012-02-09 17:17:05 Can't create a test hardlink
 between a file in /backupdisk/pc and /backupdisk/cpool.  Either these
 are different file systems, or this file system doesn't support
 hardlinks, or these directories don't exist, or there is a permissions
 problem, or the file system is out of inodes or full.  Use df, df -i,
 and ls -ld to check each of these possibilities. Quitting...
 
 I searched for an answer and found dozens, all different and mostly
 old.  They either made no sense or didn't work.  Specifically, all of my
 disks on all of my machines are ext3 file systems. Changing the
 permissions to 755 on all of the files didn't work.
 
 The paths involved look like:
 /var/lib/backuppc -> /backupdisk/backuppc
 /backupdisk/backuppc/ cpool, pc etc.
 
 I am running Debian Squeeze on all systems
 I am using rsync and have ssh installed on all systems.
 All of the systems communicate with each other over ssh
 My backup server is running two GB size disks. Backuppc is running on
 one disk and using the other for the backup data.
 
 Any ideas of how to fix this problem will be sincerely appreciated.
 
 Gary R

The features of backuppc also suited my needs for a backup solution and
convinced me some years ago.
I remember that I did the same linking thing to /var/lib/backuppc and had
the very same problem, due to a permission issue. It has done a great job
ever since.

Assuming there is no compelling reason for the directory linking, and
that backuppc and the web server are running as user backuppc, you might
want to do the following (sketched in order below):
 'chown backuppc:backuppc /var/lib/backuppc',
 'rm -R /var/lib/backuppc/*',
 mount the data drive to /var/lib/backuppc and
 'aptitude reinstall backuppc' to recreate its subdirectories (cpool,
 log, pc, pool and trash).
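
Put together, that is roughly (a sketch only; /dev/sdb1 is merely a
placeholder for whatever the backup data drive really is):

  service backuppc stop
  chown backuppc:backuppc /var/lib/backuppc
  rm -R /var/lib/backuppc/*
  mount /dev/sdb1 /var/lib/backuppc
  chown backuppc:backuppc /var/lib/backuppc   # again, now for the mounted filesystem
  aptitude reinstall backuppc                 # recreates cpool, log, pc, pool and trash
  service backuppc start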

HTH,
Sebastian

 PS: For the average user the documentation for this package sucks. The
 detail included may be exactly what the developer needs but is really
 rough sledding for the average user. This seems to be a very common
 problem in the Linux world and is the main reason that Linux is still
 relegated to the number three spot way behind Windows and Macintosh.  I
 spent 40 years in industry and was a tech writer for many of those years.
 My last 5 years were as a developer of medium-size database systems for
 use by real dummies. So I speak with some authority. Every program that
 is for general use must have one of two things. Either a GUI that can be
 used without any special training by your wife/girlfriend/secretary
 without asking _any_ questions or documentation that passes the same
 test. This is the criteria that I always used and it works. I would love
 to rewrite some of this stuff but am not nearly conversant enough with
 the nitty gritty programming details.






Backup System part deux

2012-02-09 Thread Gary Roach



Thanks for all of the help last time . I installed Backupninja easily. 
After getting it running I realized that it didn't quite do what I 
wanted. I was looking for a package that could be installed in the 
server with little need for additional programs to be installed on each 
machine. So, after some searching, I switched to Backuppc. Now Backuppc 
will do what I want but it needs to run first. To cut to the chase I 
have (I think) the config.pl file set to backup /home, /etc, and /var 
with a couple of exclusions. As the debian readme file suggested, I 
pasted the supplied script into the Apache2 config file. This seems to 
be working OK. I then moved the /var/lib/backuppc files to my spare hard 
drive '/backupdisk' and did a soft link from /var/lib to 
/backupdisk/backuppc. I then did the following


root/.../etc# service backuppc start
Starting backuppc...2012-02-09 17:17:05 Can't create a test hardlink 
between a file in /backupdisk/pc and /backupdisk/cpool.  Either these 
are different file systems, or this file system doesn't support 
hardlinks, or these directories don't exist, or there is a permissions 
problem, or the file system is out of inodes or full.  Use df, df -i, 
and ls -ld to check each of these possibilities. Quitting...
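
For the record, the checks that error message asks for look roughly like 
this (paths taken straight from the message):

  df /backupdisk/pc /backupdisk/cpool       # same filesystem, and not full?
  df -i /backupdisk                         # out of inodes?
  ls -ld /backupdisk/pc /backupdisk/cpool   # do they exist, with sane ownership?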


I searched for an answer and found dozens, all different and mostly 
old.  They either made no sense or didn't work.  Specifically, all of my 
disks on all of my machines are ext3 file systems. Changing the 
permissions to 755 on all of the files didn't work.


The paths involved look like:
/var/lib/backuppc -> /backupdisk/backuppc
/backupdisk/backuppc/ cpool, pc etc.

I am running Debian Squeeze on all systems
I am using rsync and have ssh installed on all systems.
All of the systems communicate with each other over ssh
My backup server is running two GB size disks. Backuppc is running on 
one disk and using the other for the backup data.


Any ideas of how to fix this problem will be sincerely appreciated.

Gary R

PS: For the average user the documentation for this package sucks. The 
detail included may be exactly what the developer needs but is really 
rough sledding for the average user. This seems to be a very common 
problem in the Linux world and is the main reason that Linux is still 
relegated to the number three spot way behind Windows and Macintosh.  I 
spent 40 years in industry and was a tech writer for many of those years. 
My last 5 years were as a developer of medium-size database systems for 
use by real dummies. So I speak with some authority. Every program that 
is for general use must have one of two things. Either a GUI that can be 
used without any special training by your wife/girlfriend/secretary 
without asking _any_ questions or documentation that passes the same 
test. This is the criteria that I always used and it works. I would love 
to rewrite some of this stuff but am not nearly conversant enough with 
the nitty gritty programming details.


Re: Backup System

2012-02-04 Thread Paul Lewis
On 04/02/12 01:57:14, Scott Ferguson wrote:

  That I have backups of stuff for a couple weeks is worth the 
  massively slow tapes (and tape drive $$), IMHO.
 
 Tape slow? Depends on your budget and needs, I guess.  DLT is
 dirt cheap nowadays (cost of shipping only in many cases, compared 
 to LTO).
 
 I still use tape (DDS-4) for a couple of SGI Octanes, bang for bucks
 it's pretty fast backup and restore (I also have other uses for the
 tapes).

No one seems to have suggested DVDs or Blu-ray as a storage medium.  Is 
there a reason these do not seem to be used much?



Re: Backup System

2012-02-04 Thread Scott Ferguson
 On 04/02/12 22:35, Paul Lewis wrote:
  On 04/02/12 01:57:14, Scott Ferguson wrote:

 snipped

  No one seems to have suggested DVDs or Blu-ray as a storage medium.
  Is there a reason these do not seem to be used much?

Some people like them; I only use them for cheap off-site backups (for
small sites). My reasoning is that they're slow to back up to, but mainly
that they're unreliable. The nature of the format means they have several
levels of redundancy built in, but without special software you can't
tell how damaged they are (you can still read them with a fair amount of
damage, so you don't know how close to being unreadable they are).
There's a Debian package that allows you to test them (the name escapes
me, I haven't used it since Etch), but you need to run it before burning
the discs.
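
One crude check that needs no special software is to read the disc back
and compare it with the image it was burned from; that only tells you
"identical or not", not how degraded the disc is. A sketch (image.iso and
/dev/sr0 stand in for your image and drive):

  SIZE=$(stat -c %s image.iso)                      # image size in bytes
  dd if=/dev/sr0 bs=2048 count=$((SIZE / 2048)) | md5sum
  md5sum image.iso                                  # the two sums should match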

Additionally - hard drives are cheap (and so are tapes).

What discs are good for is Clonezilla-type boot-from-disc,
restore-from-image recovery tools, but they're restricted to systems that
will fit on a single disc.


Kind regards

-- 
Iceweasel/Firefox extensions for finding answers to Debian questions:-
https://addons.mozilla.org/en-US/firefox/collections/Scott_Ferguson/debian/

NOTE: new update available for Debian Buttons
(New button for querying Debian Developer Package):-
https://addons.mozilla.org/en-US/firefox/addon/debian-buttons/





Re: Backup System

2012-02-03 Thread Jon Dowland

On 03/02/12 05:36, David Christensen wrote:


$ apt-cache search backup | less


HTH,


It's hardly likely to.  I don't know why you bothered to suggest it.




