Re: Looking for archive management system for backups burned to optical discs

2024-01-25 Thread Anders Andersson
On Tue, Jan 23, 2024 at 4:03 PM Thomas Schmitt  wrote:
>
> About timestamps and incremental backup:
>
> If you only go for mtime, then you miss changes of file attributes
> which are indicated by ctime.
> Even more, timestamps alone are not a reliable way to determine which
> files are new at their current location in the directory tree.
> If you move a file from one directory to the other, then the timestamps
> of the file do _not_ get updated. Only the two involved directories get
> new timestamps.
> So when the backup tool encounters directories with young timestamps
> it has to use other means to determine whether their data files were
> moved. scdbackup uses recorded device and inode numbers, and as last
> resort recorded MD5 sums for that purpose.
>
> (Of course, content MD5 comparison is slow and causes high disk load,
> compared to simple directory tree traversal with timestamps and inode
> numbers. So scdbackup tries to avoid this when possible and allowed
> by the -changetest_options in the backup configuration file.)
>
>
> Have a nice day :)
>
> Thomas
>

This is one thing I enjoy with btrfs. It knows exactly every little
thing that changed in your files since the last time you backed them up,
without having to scan everything, even if you manually try to fake
the datestamps etc. Finding that information is more or less instant,
which makes backups easy.
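For instance, an incremental btrfs backup can lean on that change tracking.
A hedged sketch (subvolume paths are hypothetical, not from this thread):

  # take a read-only snapshot of the subvolume to back up
  btrfs subvolume snapshot -r /data /data/snap-new
  # send only what changed since the previous snapshot to the backup disk
  btrfs send -p /data/snap-prev /data/snap-new | btrfs receive /mnt/backup
  # listing files changed since a known generation is also cheap:
  btrfs subvolume find-new /data <last-generation>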



Re: Looking for archive management system for backups burned to optical discs

2024-01-23 Thread Thomas Schmitt
Hi,

David Christensen wrote:
> I have a SOHO file server with ~1 TB of data.  I would like to archive the data
> by burning it to a series of optical discs organized by time (e.g. mtime).
> I expect to periodically burn additional discs in the future, each covering
> a span of time from the previous last disc to the then-current time.

I use my own software for making incremental multi-volume backups, based
on file timestamps (m and c), inode numbers, and content checksums.
  http://scdbackup.webframe.org/main_eng.html
  http://scdbackup.webframe.org/examples.html#incremental

The software and the texts are quite old. The proposed backup scheme
is not in use here any more.
Instead i have four independent backup families, each comprised of
level 0 to N with no repetitions below the current level N.
Further i have backups of the configuration and memorized file lists
on 4 CDs.

Level 0 fills dozens of BD-RE discs. The other levels fill at most one
BD-RE. Level N of each family exists in three copies which get larger
with each backup run of that level. Whenever this level BD threatens to
overflow, i archive the latest BD of that level and start level N+1.
That step is a bit bumpy, because i have to restore the file lists of
level N from CD after a backup has been planned but was not performed.
When overflow is foreseeable, i make a copy of the file lists on disk
before i start the planning run, or i simply start level N+1 without
waiting for the overflow.

I use scdbackup for the slowly growing bulk of my file collection.
The agile parts of my hard disk are only about 5 GB and get covered by
incremental multi-session backups of xorriso (which learned a lot about
incrementality from scdbackup). With zisofs compression i can put about
30 incremental backups on one DVD+RW or 250 backups on one BD-RE.
Day by day.
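A rough sketch of such an incremental multi-session run, using only common
xorriso options (paths hypothetical; check your xorriso version for details):

  # -for_backup records MD5 checksums; -update_r adds, overwrites or deletes
  # only what changed on disk since the previous session on the medium
  xorriso -for_backup \
          -dev /dev/sr0 \
          -update_r /home/me/agile_data /agile_data \
          -commit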


> The term "archive management system" comes to mind.

I would not attribute this title to scdbackup. It was created to scratch
my itch when hard disks grew much faster in capacity than the backup
media. (Also it was the motivation to start programming on ISO 9660
producers and burn programs.)

So it might be that you are better off with a more professional backup
system. :))

(Else we will probably have to read together
  http://scdbackup.webframe.org/cd_backup_planer_help
and my backup configurations to compose configurations for you.)

--
About timestamps and incremental backup:

If you only go for mtime, then you miss changes of file attributes
which are indicated by ctime.
Even more, timestamps alone are not a reliable way to determine which
files are new at their current location in the directory tree.
If you move a file from one directory to the other, then the timestamps
of the file do _not_ get updated. Only the two involved directories get
new timestamps.
So when the backup tool encounters directories with young timestamps
it has to use other means to determine whether their data files were
moved. scdbackup uses recorded device and inode numbers, and as last
resort recorded MD5 sums for that purpose.

(Of course, content MD5 comparison is slow and causes high disk load,
compared to simple directory tree traversal with timestamps and inode
numbers. So scdbackup tries to avoid this when possible and allowed
by the -changetest_options in the backup configuration file.)
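A minimal sketch of that idea with GNU find, recording device and inode
numbers so that a later run can recognize moved files (paths hypothetical):

  # record device, inode, mtime (epoch), size and path for every file
  find /home -xdev -type f -printf '%D %i %T@ %s %p\n' | sort > filelist.new
  # a path that vanished but whose device+inode pair reappears elsewhere was
  # moved, not modified; only genuinely new or changed content needs backup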


Have a nice day :)

Thomas



Re: Looking for archive management system for backups burned to optical discs

2024-01-22 Thread David Christensen

On 1/22/24 20:30, Charles Curley wrote:

On Mon, 22 Jan 2024 18:27:51 -0800
David Christensen  wrote:


debian-user:

I have a SOHO file server with ~1 TB of data.  I would like to archive
the data by burning it to a series of optical discs organized by time
(e.g. mtime).  I expect to periodically burn additional discs in the
future, each covering a span of time from the previous last disc to
the then-current time.


I am looking for FOSS software for Unix platforms that goes beyond a
disc burner with multi-volume spanning.  The term "archive management
system" comes to mind.


Comments or suggestions?


gene heskett 's suggestion of Amanda is a good
one. It has its kinks, but is solid and reliable. Amanda also handles
compression and encryption for you. I currently use Amanda to back up
to a RAID array. I then use rsnapshot to back up portions of that
(including the Amanda virtual tapes) to one of three rotating off-site
USB external drives. I suspect the latter could be adapted to your
requirements.

If you don't find anything readily available, I'd look at using find
and the mtimes to copy to a holding disk, which you can then burn to
archive media.

I suggest you look at Blu-Ray for archiving.



Thank you for the reply.  :-)


David




Re: Looking for archive management system for backups burned to optical discs

2024-01-22 Thread David Christensen

On 1/22/24 19:44, gene heskett wrote:

On 1/22/24 21:28, David Christensen wrote:

debian-user:

I have a SOHO file server with ~1 TB of data.  I would like to archive
the data by burning it to a series of optical discs organized by time 
(e.g. mtime).  I expect to periodically burn additional discs in the 
future, each covering a span of time from the previous last disc to 
the then-current time.



I am looking for FOSS software for Unix platforms that goes beyond a 
disc burner with multi-volume spanning.  The term "archive management 
system" comes to mind.



Comments or suggestions?


Take a look at amanda, although optical is not on my radar because
of the low capacity of a dvd at 4.7 gigs.  Amanda can use an lvm for
v-disks, which, if you want say 60 days back for bare-metal recovery, would
actually be the contents of 60 directories in that lvm, allowing any
file up to 60 days old to be recovered.


Here, with all my machines on my local network, and before I started with
3d printers which need maybe 30 gigabytes of gcode to drive the printers
to make one complex part, 5 machines backed up for 60 days was filling
a 1T drive to around 87%. So I bought 2 2T Seagates, got buster
installed and everything running smoothly. I installed buster on one drive and
configured amanda to use the other.  3 or 4 weeks later, both of those
Seagates dropped off the sata controller in the night, and I rebuilt
with SSDs and bookworm, which has been the disaster you all have been
trying to help me with since, but it's not running well enough to be
worth reinventing my amanda setup, which has just grown since about 1999.


Amanda was developed for use with QIC-era tapes, so its prime
directive is to juggle the backup levels to fully fill the tape(s) while
doing a level 0 on everything within the time limit in days between
level 0s. It can use a tape library of however many tapes the library
contains. Bring big red wagonloads of cash for those.  But I gave up on
tapes 15 years ago; they simply weren't dependable enough, while
spinning rust, never shut down, can sit there and spin for 50,000+ hours;
it has done that for me.


The 1T I took out, a cheap 5400 RPM Barracuda, had just under 70,000
spinning hours and maybe 40 power-downs on it when I retired it because
it was getting too small, in favor of the 2T that lasted less than a month.


In spinning rust, power-downs are the drive killers, so spin them up and
leave them spinning; the heads aren't wearing while they are flying on 3
microns of air between the head and the platter.
Set it up right, and amanda will have your back till the place is a few
inches of ashes.  And you can set it up to cycle the storage drive(s) offsite,
exchanging them weekly or monthly, whatever you're comfortable with.
That adds to the power-down count, however, so I never did that.



Thank you for the reply.  :-)


David



Re: Looking for archive management system for backups burned to optical discs

2024-01-22 Thread Charles Curley
On Mon, 22 Jan 2024 18:27:51 -0800
David Christensen  wrote:

> debian-user:
> 
> I have a SOHO file server with ~1 TB of data.  I would like to archive
> the data by burning it to a series of optical discs organized by time
> (e.g. mtime).  I expect to periodically burn additional discs in the
> future, each covering a span of time from the previous last disc to
> the then-current time.
> 
> 
> I am looking for FOSS software for Unix platforms that goes beyond a 
> disc burner with multi-volume spanning.  The term "archive management 
> system" comes to mind.
> 
> 
> Comments or suggestions?

gene heskett 's suggestion of Amanda is a good
one. It has its kinks, but is solid and reliable. Amanda also handles
compression and encryption for you. I currently use Amanda to back up
to a RAID array. I then use rsnapshot to back up portions of that
(including the Amanda virtual tapes) to one of three rotating off-site
USB external drives. I suspect the latter could be adapted to your
requirements.

If you don't find anything readily available, I'd look at using find
and the mtimes to copy to a holding disk, which you can then burn to
archive media.

I suggest you look at Blu-Ray for archiving.


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Looking for archive management system for backups burned to optical discs

2024-01-22 Thread gene heskett

On 1/22/24 21:28, David Christensen wrote:

debian-user:

I have a SOHO file server with ~1 TB of data.  I would like to archive the
data by burning it to a series of optical discs organized by time (e.g. 
mtime).  I expect to periodically burn additional discs in the future, 
each covering a span of time from the previous last disc to the 
then-current time.



I am looking for FOSS software for Unix platforms that goes beyond a 
disc burner with multi-volume spanning.  The term "archive management 
system" comes to mind.



Comments or suggestions?


Take a look at amanda, although optical is not on my radar because
of the low capacity of a dvd at 4.7 gigs.  Amanda can use an lvm for
v-disks, which, if you want say 60 days back for bare-metal recovery, would
actually be the contents of 60 directories in that lvm, allowing any
file up to 60 days old to be recovered.


Here, with all my machines on my local network, and before I started with
3d printers which need maybe 30 gigabytes of gcode to drive the printers
to make one complex part, 5 machines backed up for 60 days was filling
a 1T drive to around 87%. So I bought 2 2T Seagates, got buster
installed and everything running smoothly. I installed buster on one drive and
configured amanda to use the other.  3 or 4 weeks later, both of those
Seagates dropped off the sata controller in the night, and I rebuilt
with SSDs and bookworm, which has been the disaster you all have been
trying to help me with since, but it's not running well enough to be
worth reinventing my amanda setup, which has just grown since about 1999.


Amanda was developed for use with QIC-era tapes, so its prime
directive is to juggle the backup levels to fully fill the tape(s) while
doing a level 0 on everything within the time limit in days between
level 0s. It can use a tape library of however many tapes the library
contains. Bring big red wagonloads of cash for those.  But I gave up on
tapes 15 years ago; they simply weren't dependable enough, while
spinning rust, never shut down, can sit there and spin for 50,000+ hours;
it has done that for me.


The 1T I took out, a cheap 5400 RPM Barracuda, had just under 70,000
spinning hours and maybe 40 power-downs on it when I retired it because
it was getting too small, in favor of the 2T that lasted less than a month.


In spinning rust, power-downs are the drive killers, so spin them up and
leave them spinning; the heads aren't wearing while they are flying on 3
microns of air between the head and the platter.
Set it up right, and amanda will have your back till the place is a few
inches of ashes.  And you can set it up to cycle the storage drive(s) offsite,
exchanging them weekly or monthly, whatever you're comfortable with.
That adds to the power-down count, however, so I never did that.



David

Take care, stay warm, dry and well, David.




Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis



Looking for archive management system for backups burned to optical discs

2024-01-22 Thread David Christensen

debian-user:

I have a SOHO file server with ~1 TB of data.  I would like to archive the
data by burning it to a series of optical discs organized by time (e.g. 
mtime).  I expect to periodically burn additional discs in the future, 
each covering a span of time from the previous last disc to the 
then-current time.



I am looking for FOSS software for Unix platforms that goes beyond a 
disc burner with multi-volume spanning.  The term "archive management 
system" comes to mind.



Comments or suggestions?


David



Re: backing up backups

2022-04-21 Thread David Wright
On Mon 18 Apr 2022 at 16:06:48 (-0400), Default User wrote:

> BTW,  I think I have narrowed the previous restore problem down to what I
> believe is a "buggy" early UEFI implementation on my computer (circa 2014).
> Irrelevant now; I have re-installed with BIOS (not UEFI) booting and MBR
> (not GPT) partitioning. And have successfully tested restoring using both
> Timeshift and Clonezilla.
> 
> And regarding learning by experience - oh, how I know. I've done so much of
> that, I have a degree from the "school of hard knocks"!

Just FTR, reverting to MBR should not be necessary, and you lose GPT's
advantages and future-proofing. BIOS machines should be able to boot
Grub from GPT because it's using the protective MBR that's left in
place. And Grub should tell you if you don't leave the necessary space
for its core image somewhere on a GPT disk (i.e. a BIOS Boot partition).

Cheers,
David.



Re: backing up backups

2022-04-19 Thread David Wright
On Tue 19 Apr 2022 at 07:19:58 (+0200), DdB wrote:

> So i came up with the idea to create a sort of inventory using a sparse
> copy of empty files only (using mkdir, truncate + touch). The space
> requirements are affordable (like 2.3M for an inventory representing
> 3.5T of data). The effect is that find will see those files by
> name/directory/time/size and permissions, allowing me to find duplicates
> according to those attributes quite nicely.
> 
> Since i started doing that, i always know exactly what's out there
> and where to find it, as if i had only the inodes at hand. Suits MY
> needs. :-)

That sounds somewhat like my scheme for indexing my caddies, USB
sticks and SD cards. When I unmount them, a script makes three
indexes: the first running updatedb to a private database, the
second running find to produce a list of strictly alphabetically
ordered files (full name, --full-time and size) and the directories
containing them, and the third, most like yours, running ls -lAR
to another private database. The three indexes are propagated to
my other hosts.
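A minimal sketch of such an unmount-time indexing script, assuming GNU
findutils and mlocate; the paths and index names are hypothetical:

  #!/bin/sh
  MNT=/mnt/stick
  IDX=/var/local/indexes/stick
  # 1) private locate database for this medium
  updatedb -U "$MNT" -o "$IDX.db"
  # 2) alphabetically sorted list of files with full timestamps and sizes
  find "$MNT" -type f -printf '%p\t%TY-%Tm-%Td %TT\t%s\n' | sort > "$IDX.files"
  # 3) ls -lAR listing, gzipped, navigable with mc as if it were a filesystem
  ls -lAR "$MNT" | gzip > "$IDX.lslR.gz"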

Apart from being a plain text file, the output of ls -lAR, stored
either as ….lslR or ….lslR.gz, can be navigated with Midnight
Commander (mc) just as if it were a real filesystem. (Permissions
don't concern me.)

Cheers,
David.



Re: backing up backups

2022-04-18 Thread DdB
Hello,

Am 11.04.2022 um 04:58 schrieb Default User:
> So . . .   what IS the correct way to make "backups of backups"?
> 

I don't know that for sure, but at first glance, i don't understand the
complexity of your setup either. It seems to be quite elaborate, which is
certainly suiting your needs. And since my base is also quite different
from yours, ideas might not transfer that well... but anyhow

Here is my use case:
Apart from the main system, which resides on an NVME-SSD, all my storage
consists of zfs pools, which allow snapshotting (instead of
time-shifting). And because of the pools being hosted on raid, there is
redundancy PLUS backups (also with partition images among them). Only
SOME rarely used data (like movies), i do push into pools on removable
media (spinning hard drives), and was interested to have their content
searchable online.

So i came up with the idea to create a sort of inventory using a sparse
copy of empty files only (using mkdir, truncate + touch). The space
requirements are affordable (like 2.3M for an inventory representing
3.5T of data). The effect is that find will see those files by
name/directory/time/size and permissions, allowing me to find duplicates
according to those attributes quite nicely.
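A hedged sketch of that inventory idea (paths hypothetical); it mirrors a
tree as empty placeholder files that keep name, size, timestamps and
permissions while occupying almost no space:

  #!/bin/sh
  SRC=/mnt/removable/movies
  DST=/srv/inventory/movies
  cd "$SRC" || exit 1
  find . -type d | while IFS= read -r d; do
      mkdir -p "$DST/$d"
  done
  find . -type f | while IFS= read -r f; do
      truncate -s "$(stat -c %s "$f")" "$DST/$f"   # same apparent size, no data
      touch -r "$f" "$DST/$f"                      # same timestamps
      chmod --reference="$f" "$DST/$f"             # same permissions
  done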

Since i started doing that, i always know exactly what's out there
and where to find it, as if i had only the inodes at hand. Suits MY
needs. :-)

just my 2 cents
DdB



Re: backing up backups

2022-04-18 Thread Keith Bainbridge

On 11/4/22 10:58, Default User wrote:


So . . .   what IS the correct way to make "backups of backups"?



Sorry to take so long to respond. I am traveling and have only short 
periods that I can spend on non-pressing matters.


To answer your question: the method that gets you the result you want.

I have used 2 switches in rsync to create  a date/time copy of files 
that are updated:



rsync -avbH --backup-dir=/mnt/data/rsynccBackupp/$YEAR/$MON/$TODAY/$HOUR/ source/ target


creates an archive directory of older versions of files. I have noticed
at times that it creates copies of files that I haven't touched. Needs
looking at, but it will be something I'm doing.


The variables are set in the script:


YEAR=`date +%Y`
MON=`date +%b`
TODAY=`date +%d`
NOW=`date +%Y%b%d%H`
HOUR=`date +%H`

rsync -avbH --suffix="."$(date +"%Y%m%d%H%M") source/ target/

replaces the standard ~ suffix on the file being updated with the current
time as a numerical string, keeping all versions of the files
together.  Use whichever suits you better.
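A runnable sketch of the --backup-dir variant with the variables and the
command in one script (the source and target paths are hypothetical):

  #!/bin/sh
  # current copy goes to the target; older versions of updated files are
  # filed under a year/month/day/hour tree on the backup drive
  YEAR=$(date +%Y)
  MON=$(date +%b)
  TODAY=$(date +%d)
  HOUR=$(date +%H)
  rsync -avbH \
        --backup-dir=/mnt/data/rsyncBackup/$YEAR/$MON/$TODAY/$HOUR/ \
        /home/keith/documents/ /mnt/data/current/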


As for how to 'back up your backup': I copy everything from source
documents to my backup drives, and set cron to run the script for each
drive at a different time of day/hour - yes, I back up current docs
hourly. I'm lucky - I don't notice rsync affecting my performance.


Timeshift is a great tool. I know you are correct to copy the system to
a different drive. I figure that if I get to a point where I can't
access the Timeshift files on /timeshift, I'm in deep trouble.  I just
re-install.  / has only system files on it. /home/keith is symlinked
from another partition.


In any case, copying /timeshift  from your system OR your first backup 
drive should be trivial for rsync.


Out of time, but I think that's enough to think about for now

--
All the best

Keith Bainbridge

keithrbaugro...@gmail.com



Re: backing up backups

2022-04-18 Thread David Christensen

On 4/18/22 13:06, Default User wrote:


Finally, fun fact:
Many years ago, at a local Linux user group meeting, Sun Microsystems put
on a demonstration of their ZFS filesystem. To prove how robust it was,
they pulled the power cord out of the wall socket on a running desktop
computer. Then they plugged the cord back in and re-booted, with no
problems! Yes, I was impressed.



I bumped the USB cable of an external HDD while doing a ZFS incremental 
replication from my server to a backup disk.  The backup pool was corrupted:


2022-03-28 18:16:25 toor@f1 ~
# zpool status -v z6000b
  pool: z6000b
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0 days 09:36:15 with 0 errors on Mon Mar 28 
00:09:09 2022

config:

NAME  STATE READ WRITE CKSUM
z6000bONLINE   0 0 0
  gpt/z6000b.eli  ONLINE   0 0 0
cache
  gpt/z60a.eliONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

:<0x0>
:<0x39>


I bought a SATA 6 Gbps drive rack:

https://www.startech.com/en-us/hdd/drw150satbk

destroyed the backup pool, recreated the backup pool, and did a full 
replication.
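For reference, a hedged sketch of the kind of incremental replication
involved (pool and snapshot names hypothetical):

  # snapshot the source pool, then send only the delta since the previous one
  zfs snapshot -r tank@backup-new
  zfs send -R -i tank@backup-prev tank@backup-new | zfs receive -F z6000b/tank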



David



Re: backing up backups

2022-04-18 Thread Default User
On Thu, Apr 14, 2022 at 3:24 AM David Christensen 
wrote:

> On 4/13/22 20:03, Default User wrote:
> > On Wed, Apr 13, 2022 at 4:42 PM David Christensen wrote:
>
> >> As you find system administration commands that work, put them into
> >> scripts:
> >>
> >> #!/bin/sh
> >> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> >> /media/default/MSD1/ /media/default/MSD2/
> >>
> >>
> >> Use a version control system for system administration.  Create a
> >> project for every machine.  Check in system configuration files,
> >> scripts, partition table backups, encryption header backups, RAID header
> >> backups, etc..  Maintain a plain text log file with notes of what you
> >> did (e.g. console sessions), when, and why.
> >>
> >>
> >> Put your OS on a small, fast device (e.g. SSD) and put your data on an
> >> array of large devices (e.g. ZFS pool with one or more HDD mirrors).
> >> Backup both as before.  Additionally, take images of your OS device.
>
> > Yikes!
> >
> > David, I really think I am too old to learn all of that.  But maybe I can
> > learn at least some of it, over time.  Please understand that I am not
> > training to be a real system administrator, except that I guess anyone is
> > (or should be able to be) actually the "system administrator" of their
> own
> > computer(s).
> >
> > Anyway, thanks for the advice.
>
>
> I learned the above tools because they save time, save effort, and
> provide features I want.
>
>
> I use dd(1) and an external HDD for images.  You will want to write
> scripts (like the simple example I previously showed).
>
>
> CVS has more than enough power for a single user/ system administrator,
> and is simpler than Git.  Here are the common use-cases:
>
> 1.  Install CVS (and SSH) on Debian:
>
>  # apt-get install cvs openssh-client openssh-server
>
> 2.  Create a CVS repository:
>
>  # mkdir -p /var/local/cvs/dpchrist
>  # cvs -d /var/local/cvs/dpchrist init
>  # chown -R dpchrist:dpchrist /var/local/cvs/dpchrist
>
> 3.  Add CVS client environment variables to your shell (adjust host and
> username as required):
>
>  export CVSROOT=dpchr...@cvs.tracy.holgerdanske.com:/var/local/cvs/dpchrist
>  export CVS_RSH=ssh
>
> 4.  Create a project:
>
>  $ mkdir -p import/myproject
>  $ cd import/myproject
>  $ touch .exists
>  $ cvs import myproject dpchrist start
>
> 5.  Check-out a working directory of a project from the repository:
>
>  $ cd
>  $ cvs co myproject
>
> 6.  Add a file to the project working directory meta-data:
>
>  $ cd myproject
>  $ vi myfile
>  $ cvs add myfile
>
> 7.  See changes in the working directory compared to the repository:
>
>  $ cvs diff
>
> 8.  Bring in changes made elsewhere and checked-in to the repository:
>
>  $ cvs update
>
> 9.  Check-in working directory to the repository:
>
>  $ cvs ci
>
> 10. Remove a file from the project:
>
>  $ rm myfile
>  $ cvs rm myfile
>
>
> See the GNU CVS manual for more information:
>
>
> https://www.gnu.org/software/trans-coord/manual/cvs/html_node/index.html
>
>
> ZFS is a new way of doing storage with disks, arrays, volumes,
> filesystems, etc., including backup/ restore (snapshots and
> replication).  The learning curve is non-trivial.  The Lucas book gave
> me enough confidence to go for it:
>
>  https://mwl.io/nonfiction/os#fmzfs
>
>
> David
>
>


Hey David, thanks for the information.

BTW,  I think I have narrowed the previous restore problem down to what I
believe is a "buggy" early UEFI implementation on my computer (circa 2014).
Irrelevant now; I have re-installed with BIOS (not UEFI) booting and MBR
(not GPT) partitioning. And have successfully tested restoring using both
Timeshift and Clonezilla.

And regarding learning by experience - oh, how I know. I've done so much of
that, I have a degree from the "school of hard knocks"!

Finally, fun fact:
Many years ago, at a local Linux user group meeting, Sun Microsystems put
on a demonstration of their ZFS filesystem. To prove how robust it was,
they pulled the power cord out of the wall socket on a running desktop
computer. Then they plugged the cord back in and re-booted, with no
problems! Yes, I was impressed.


Re: backing up backups

2022-04-14 Thread David Christensen

On 4/13/22 20:03, Default User wrote:

On Wed, Apr 13, 2022 at 4:42 PM David Christensen wrote:



As you find system administration commands that work, put them into
scripts:

#!/bin/sh
sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2/


Use a version control system for system administration.  Create a
project for every machine.  Check in system configuration files,
scripts, partition table backups, encryption header backups, RAID header
backups, etc..  Maintain a plain text log file with notes of what you
did (e.g. console sessions), when, and why.


Put your OS on a small, fast device (e.g. SSD) and put your data on an
array of large devices (e.g. ZFS pool with one or more HDD mirrors).
Backup both as before.  Additionally, take images of your OS device.



Yikes!

David, I really think I am too old to learn all of that.  But maybe I can
learn at least some of it, over time.  Please understand that I am not
training to be a real system administrator, except that I guess anyone is
(or should be able to be) actually the "system administrator" of their own
computer(s).

Anyway, thanks for the advice.



I learned the above tools because they save time, save effort, and 
provide features I want.



I use dd(1) and an external HDD for images.  You will want to write 
scripts (like the simple example I previously showed).
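A minimal sketch of such an image run (device and mount point hypothetical):

  # image the whole OS device to a dated file on the external drive
  sudo dd if=/dev/sda of=/mnt/ext/sda-$(date +%Y%m%d).img \
      bs=1M conv=fsync status=progress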



CVS has more than enough power for a single user/ system administrator, 
and is simpler than Git.  Here are the common use-cases:


1.  Install CVS (and SSH) on Debian:

# apt-get install cvs openssh-client openssh-server

2.  Create a CVS repository:

# mkdir -p /var/local/cvs/dpchrist
# cvs -d /var/local/cvs/dpchrist init
# chown -R dpchrist:dpchrist /var/local/cvs/dpchrist

3.  Add CVS client environment variables to your shell (adjust host and 
username as required):


export CVSROOT=dpchr...@cvs.tracy.holgerdanske.com:/var/local/cvs/dpchrist

export CVS_RSH=ssh

4.  Create a project:

$ mkdir -p import/myproject
$ cd import/myproject
$ touch .exists
$ cvs import myproject dpchrist start

5.  Check-out a working directory of a project from the repository:

$ cd
$ cvs co myproject

6.  Add a file to the project working directory meta-data:

$ cd myproject
$ vi myfile
$ cvs add myfile

7.  See changes in the working directory compared to the repository:

$ cvs diff

8.  Bring in changes made elsewhere and checked-in to the repository:

$ cvs update

9.  Check-in working directory to the repository:

$ cvs ci

10. Remove a file from the project:

$ rm myfile
$ cvs rm myfile


See the GNU CVS manual for more information:


https://www.gnu.org/software/trans-coord/manual/cvs/html_node/index.html


ZFS is a new way of doing storage with disks, arrays, volumes, 
filesystems, etc., including backup/ restore (snapshots and 
replication).  The learning curve is non-trivial.  The Lucas book gave 
me enough confidence to go for it:


https://mwl.io/nonfiction/os#fmzfs


David



Re: backing up backups

2022-04-13 Thread Default User
On Wed, Apr 13, 2022 at 4:42 PM David Christensen 
wrote:

> On 4/13/22 09:20, Default User wrote:
>
> >> Hey guys, sorry for just getting back with you now.
> >> Unfortunately, I am just now recovering from a self-inflicted computer
> >> disaster.
> >>
> >> While fighting with rsync, I did either:
> >>
> >> sudo rsync -aAXHSxvv --delete --info=progress2,stats2,name2
> >> /media/default/MSD1/ /media/default/MSD2
> >> or
> >> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> >> /media/default/MSD1/ /media/default/MSD2/
> >>
> >> Just one small problem: MSD2 was not connected to my computer!
> >> (Don't say it . . .  )
> >>
> >> Instead of giving an error message, rsync just created a directory on my
> >> computer called /media/default/MSD2, and filled it up until my /
> partition
> >> was full, and THEN my desktop environment (Cinnamon) popped up a
> >> notification saying so.  How thoughtful.
> >>
> >> The computer then would not reboot into the operating system.
> >>
> >> No problem, I say. I will just use Timeshift to restore from its backup
> of
> >> a few hours earlier.
> >>
> >> But that did not work, even after deleting the extra directory, and
> trying
> >> restores from multiple Timeshift backups.
> >>
> >> Anyway, I never could fix the problem. But I did take it as an
> opportunity
> >> to "start over". I put in a new(er) SSD, and did a fresh install,
> replacing
> >> Cinnamon with Gnome. Mistake - now I remember why I dislike Gnome, ever
> >> since Gnome 3. Wish I had re-installed Cinnamon, but too late now, out
> of
> >> time. For now I will just have to grit my teeth and live with it.
> >>
> >> [BTW, yes, I do have all of my data. Backfilling it into my new setup
> will
> >> no doubt be an ongoing adventure.]
> >>
> >> Anyway, just a few notes about the rsync situation:
> >>
> >> 1) Having or not having a trailing / on the destination directory did
> not
> >> seem to make any difference in the size of the copy made, or otherwise.
> >> Nevertheless, I intend to heed the advice given to have a trailing /
> after
> >> both source and destination, or neither, as appropriate.
> >>
> >> 2) Using or not using an "S" option with rsync did not seem to make any
> >> difference, at least concerning the size of the copy made.
> >>
> >> 3) Yes, I really should check into using checksums to avoid "bit rot".
> >> Good advice.
> >>
> >> Finally, Gnome sucks.  (Did I mention that?)
> >>
> >> Thanks for the replies.
>
>
> Congratulations!  You now have more experience:
>
> "Doing things right is a matter of experience.  Experience is a matter
> of doing things wrong."
>
>
> As you find system administration commands that work, put them into
> scripts:
>
> #!/bin/sh
> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> /media/default/MSD1/ /media/default/MSD2/
>
>
> Use a version control system for system administration.  Create a
> project for every machine.  Check in system configuration files,
> scripts, partition table backups, encryption header backups, RAID header
> backups, etc..  Maintain a plain text log file with notes of what you
> did (e.g. console sessions), when, and why.
>
>
> Put your OS on a small, fast device (e.g. SSD) and put your data on an
> array of large devices (e.g. ZFS pool with one or more HDD mirrors).
> Backup both as before.  Additionally, take images of your OS device.
>
>
> David
>
>


Yikes!

David, I really think I am too old to learn all of that.  But maybe I can
learn at least some of it, over time.  Please understand that I am not
training to be a real system administrator, except that I guess anyone is
(or should be able to be) actually the "system administrator" of their own
computer(s).

Anyway, thanks for the advice.


Re: backing up backups

2022-04-13 Thread David Christensen

On 4/13/22 09:20, Default User wrote:


Hey guys, sorry for just getting back with you now.
Unfortunately, I am just now recovering from a self-inflicted computer
disaster.

While fighting with rsync, I did either:

sudo rsync -aAXHSxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2
or
sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2/

Just one small problem: MSD2 was not connected to my computer!
(Don't say it . . .  )

Instead of giving an error message, rsync just created a directory on my
computer called /media/default/MSD2, and filled it up until my / partition
was full, and THEN my desktop environment (Cinnamon) popped up a
notification saying so.  How thoughtful.

The computer then would not reboot into the operating system.

No problem, I say. I will just use Timeshift to restore from its backup of
a few hours earlier.

But that did not work, even after deleting the extra directory, and trying
restores from multiple Timeshift backups.

Anyway, I never could fix the problem. But I did take it as an opportunity
to "start over". I put in a new(er) SSD, and did a fresh install, replacing
Cinnamon with Gnome. Mistake - now I remember why I dislike Gnome, ever
since Gnome 3. Wish I had re-installed Cinnamon, but too late now, out of
time. For now I will just have to grit my teeth and live with it.

[BTW, yes, I do have all of my data. Backfilling it into my new setup will
no doubt be an ongoing adventure.]

Anyway, just a few notes about the rsync situation:

1) Having or not having a trailing / on the destination directory did not
seem to make any difference in the size of the copy made, or otherwise.
Nevertheless, I intend to heed the advice given to have a trailing / after
both source and destination, or neither, as appropriate.

2) Using or not using an "S" option with rsync did not seem to make any
difference, at least concerning the size of the copy made.

3) Yes, I really should check into using checksums to avoid "bit rot".
Good advice.

Finally, Gnome sucks.  (Did I mention that?)

Thanks for the replies.



Congratulations!  You now have more experience:

"Doing things right is a matter of experience.  Experience is a matter 
of doing things wrong."



As you find system administration commands that work, put them into scripts:

#!/bin/sh
sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 
/media/default/MSD1/ /media/default/MSD2/



Use a version control system for system administration.  Create a 
project for every machine.  Check in system configuration files, 
scripts, partition table backups, encryption header backups, RAID header 
backups, etc..  Maintain a plain text log file with notes of what you 
did (e.g. console sessions), when, and why.



Put your OS on a small, fast device (e.g. SSD) and put your data on an 
array of large devices (e.g. ZFS pool with one or more HDD mirrors). 
Backup both as before.  Additionally, take images of your OS device.
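A minimal sketch of creating such a mirrored data pool (device names
hypothetical; zpool create destroys whatever is on the named devices):

  sudo zpool create tank mirror /dev/sdb /dev/sdc
  sudo zfs create tank/data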



David



Re: backing up backups

2022-04-13 Thread Default User
On Wed, Apr 13, 2022 at 12:09 PM Default User 
wrote:

>
>
> On Mon, Apr 11, 2022 at 12:03 PM David Christensen <
> dpchr...@holgerdanske.com> wrote:
>
>> On 4/10/22 22:15, to...@tuxteam.de wrote:
>> > On Sun, Apr 10, 2022 at 09:44:59PM -0700, David Christensen wrote:
>> >> On 4/10/22 19:58, Default User wrote:
>> >>> Hello!
>> >>>
>> >>> My setup:
>> >>> - single home x86-64 computer running Debian 11 Stable, up to date.
>> >>> - one 4-Tb external usb hard drive to use as a backup device, labeled
>> MSD1.
>> >>> - another identical usb hard drive, labeled MSD2, to use as a copy of
>> the
>> >>> backups on MSD1.
>> >>> - the computer and all storage devices are formatted ext4, not
>> encrypted.
>> >>> - two old Clonezilla disk images from when I installed Debian 11 last
>> year
>> >>> (probably irrelevant).
>> >>> - Timeshift to daily back up system EXCEPT for data directories.
>> >>> - Back in Time to daily back up data directories.
>> >>> - Borgbackup to also daily back up data directories.
>> >>> - Rsync to frequently backup any changed data between the daily Back
>> in
>> >>> Time and Borgbackup backups of data directories, using this command:
>> >>>
>> >>> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
>> --exclude-from
>> >>> "/home/default/rsync_exclude_list.txt" /home
>> >>> /media/default/MSD1/rsync_backups_of_host_home_directory_only
>> >>>
>> >>> Each type of backup is in a separate subdirectory on MSD1 (Timeshift,
>> Back
>> >>> in TIme, Rsync, etc.).
>> >>>
>> >>> It "seems" to work okay, BUT . . .
>> >>>
>> >>> Then I try to use rsync to make an identical copy of backup device
>> MSD1 on
>> >>> an absolutely identical 4-Tb external usb hard drive,
>> >>> labeled MSD2, using this command:
>> >>>
>> >>> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
>> >>> /media/default/MSD1/ /media/default/MSD2
>> >>
>> >>
>> >> See 'man 1 rsync'.  You have a slash at the end of SRC, but not at the
>> end
>> >> of DEST.  I would add a slash after "MSD2":
>> >
>> > The only thing I find in rsync's man page about trailing slashes
>> > in the `dest' argument would be relevant if MSD2 didn't exist (in
>> > the OP's case it seems it does, no?)
>>
>>
>> There are four combinations for rsync(1) SRC and DEST vs. trailing
>> slashes.  I use two -- trailing slashes on SRC and DEST for directories,
>> and no trailing slashes on SRC and DEST for single files.  The other two
>> combinations may "work" under certain circumstances, but they have
>> caused me grief in the past and I avoid them as a matter of habit.
>>
>>
>> David
>>
>>
>
> Hey guys, sorry for just getting back with you now.
> Unfortunately, I am just now recovering from a self-inflicted computer
> disaster.
>
> While fighting with rsync, I did either:
>
> sudo rsync -aAXHSxvv --delete --info=progress2,stats2,name2
> /media/default/MSD1/ /media/default/MSD2
> or
> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> /media/default/MSD1/ /media/default/MSD2/
>
> Just one small problem: MSD2 was not connected to my computer!
> (Don't say it . . .  )
>
> Instead of giving an error message, rsync just created a directory on my
> computer called /media/default/MSD2, and filled it up until my / partition
> was full, and THEN my desktop environment (Cinnamon) popped up a
> notification saying so.  How thoughtful.
>
> The computer then would not reboot into the operating system.
>
> No problem, I say. I will just use Timeshift to restore from its backup of
> a few hours earlier.
>
> But that did not work, even after deleting the extra directory, and trying
> restores from multiple Timeshift backups.
>
> Anyway, I never could fix the problem. But I did take it as an opportunity
> to "start over". I put in a new(er) SSD, and did a fresh install, replacing
> Cinnamon with Gnome. Mistake - now I remember why I dislike Gnome, ever
> since Gnome 3. Wish I had re-installed Cinnamon, but too late now, out of
> time. For now I will just have to grit my teeth and live with it.
>
> [BTW, yes, I do have all of my data. Backfilling it into my new setup will
> no doubt be

Re: backing up backups

2022-04-11 Thread David Christensen

On 4/10/22 22:15, to...@tuxteam.de wrote:

On Sun, Apr 10, 2022 at 09:44:59PM -0700, David Christensen wrote:

On 4/10/22 19:58, Default User wrote:

Hello!

My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are formatted ext4, not encrypted.
- two old Clonezilla disk images from when I installed Debian 11 last year
(probably irrelevant).
- Timeshift to daily back up system EXCEPT for data directories.
- Back in Time to daily back up data directories.
- Borgbackup to also daily back up data directories.
- Rsync to frequently backup any changed data between the daily Back in
Time and Borgbackup backups of data directories, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 --exclude-from
"/home/default/rsync_exclude_list.txt" /home
/media/default/MSD1/rsync_backups_of_host_home_directory_only

Each type of backup is in a separate subdirectory on MSD1 (Timeshift, Back
in TIme, Rsync, etc.).

It "seems" to work okay, BUT . . .

Then I try to use rsync to make an identical copy of backup device MSD1 on
an absolutely identical 4-Tb external usb hard drive,
labeled MSD2, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2



See 'man 1 rsync'.  You have a slash at the end of SRC, but not at the end
of DEST.  I would add a slash after "MSD2":


The only thing I find in rsync's man page about trailing slashes
in the `dest' argument would be relevant if MSD2 didn't exist (in
the OP's case it seems it does, no?)



There are four combinations for rsync(1) SRC and DEST vs. trailing 
slashes.  I use two -- trailing slashes on SRC and DEST for directories, 
and no trailing slashes on SRC and DEST for single files.  The other two 
combinations may "work" under certain circumstances, but they have 
caused me grief in the past and I avoid them as a matter of habit.
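For illustration, the two directory forms behave differently (standard
rsync(1) semantics, using the paths from this thread):

  # trailing slash on SRC: copy the contents of MSD1 into MSD2
  rsync -a /media/default/MSD1/ /media/default/MSD2/
  # no trailing slash on SRC: create MSD2/MSD1 and copy into that
  rsync -a /media/default/MSD1 /media/default/MSD2/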



David



Re: backing up backups

2022-04-10 Thread tomas
On Sun, Apr 10, 2022 at 09:44:59PM -0700, David Christensen wrote:
> On 4/10/22 19:58, Default User wrote:
> > Hello!
> > 
> > My setup:
> > - single home x86-64 computer running Debian 11 Stable, up to date.
> > - one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
> > - another identical usb hard drive, labeled MSD2, to use as a copy of the
> > backups on MSD1.
> > - the computer and all storage devices are formatted ext4, not encrypted.
> > - two old Clonezilla disk images from when I installed Debian 11 last year
> > (probably irrelevant).
> > - Timeshift to daily back up system EXCEPT for data directories.
> > - Back in Time to daily back up data directories.
> > - Borgbackup to also daily back up data directories.
> > - Rsync to frequently backup any changed data between the daily Back in
> > Time and Borgbackup backups of data directories, using this command:
> > 
> > sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 --exclude-from
> > "/home/default/rsync_exclude_list.txt" /home
> > /media/default/MSD1/rsync_backups_of_host_home_directory_only
> > 
> > Each type of backup is in a separate subdirectory on MSD1 (Timeshift, Back
> > in TIme, Rsync, etc.).
> > 
> > It "seems" to work okay, BUT . . .
> > 
> > Then I try to use rsync to make an identical copy of backup device MSD1 on
> > an absolutely identical 4-Tb external usb hard drive,
> > labeled MSD2, using this command:
> > 
> > sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> > /media/default/MSD1/ /media/default/MSD2
> 
> 
> See 'man 1 rsync'.  You have a slash at the end of SRC, but not at the end
> of DEST.  I would add a slash after "MSD2":

The only thing I find in rsync's man page about trailing slashes
in the `dest' argument would be relevant if MSD2 didn't exist (in
the OP's case it seems it does, no?)

Cheers
-- 
t




Re: backing up backups

2022-04-10 Thread David Christensen

On 4/10/22 19:58, Default User wrote:

Hello!

My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are formatted ext4, not encrypted.
- two old Clonezilla disk images from when I installed Debian 11 last year
(probably irrelevant).
- Timeshift to daily back up system EXCEPT for data directories.
- Back in Time to daily back up data directories.
- Borgbackup to also daily back up data directories.
- Rsync to frequently backup any changed data between the daily Back in
Time and Borgbackup backups of data directories, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 --exclude-from
"/home/default/rsync_exclude_list.txt" /home
/media/default/MSD1/rsync_backups_of_host_home_directory_only

Each type of backup is in a separate subdirectory on MSD1 (Timeshift, Back
in TIme, Rsync, etc.).

It "seems" to work okay, BUT . . .

Then I try to use rsync to make an identical copy of backup device MSD1 on
an absolutely identical 4-Tb external usb hard drive,
labeled MSD2, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2



See 'man 1 rsync'.  You have a slash at the end of SRC, but not at the 
end of DEST.  I would add a slash after "MSD2":


sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 
/media/default/MSD1/ /media/default/MSD2/



I have experienced USB connection failures with external drives, 
resulting in filesystem corruption.  I prefer mobile racks and caddies 
for my backup drives, and black SATA 6 GBps cables with locking connectors:


https://www.startech.com/en-us/hdd/drw150satbk

https://www.cablematters.com/pc-187-156-3-pack-straight-60-gbps-sata-iii-cable.aspx



Here's the problem.  Although the size and number of items in each
subdirectory on MSD1 and MSD2 seem to be identical, more space in total
seems to be taken up on MSD2 than on MSD1:

df -h /dev/sdb1
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb1   3.6T   68G   3.4T   2%  /media/default/MSD1

df -h /dev/sdc1
Filesystem  Size  Used  Avail Use%   Mounted on
/dev/sdc1   3.6T   69G   3.4T   2%   /media/default/MSD2

df /dev/sdb1
Filesystem      1K-blocks     Used  Available  Use%  Mounted on
/dev/sdb1      3844517880 71255588 3577896764    2%  /media/default/MSD1

df /dev/sdc1
Filesystem      1K-blocks     Used  Available  Use%  Mounted on
/dev/sdc1      3844517880 71906088 3577246264    2%  /media/default/MSD2

I have tried "everything", even re-formatted MSD2 and re-done:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2
but the results were exactly the same.

I doubt that this is a problem with hard links, as I am using the "H"
option in rsync.

Now, pardon me for thinking that a copy of backups should not take up any
more or less space than the original.  And I consider backups to be much
too important to not be absolutely "correct".  So, what SHOULD be done so
that MSD2 will always be EXACTLY the same as MSD1?

Is there a way to do this using rsync?



Do you pause Timeshift, Back in Time, Borgbackup, etc., before using 
rsync(1) to copy MSD1 to MSD2?



Have you compared MSD1 to MSD2 using diff(1) afterwards?  If diff(1) is 
happy, I would call it good.  If diff(1) finds differences, figure out why.
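For example, with the paths from this thread:

  # -r recurses, -q reports only the names of files that differ
  diff -rq /media/default/MSD1 /media/default/MSD2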




I suppose I could just make images of MSD1, using dd for example.  But then
each time wouldn't I just be backing up not just the changed data, but all
the data, AND backing up all of the free space on MSD1 also.  Aside from
wasting TONS of space, each time it could take many hours or even days to
do. Which means it just won't get done. So much for redundancy!



I agree that dd(1) is overkill and impractical for this use-case.



So . . .   what IS the correct way to make "backups of backups"?



For ext4 backups and ext4 copies of backups, I use rsync(1).


Are you doing any integrity checking?  E.g. generating a checksum file 
of the backup contents?  Without such, you are exposed to bit rot.
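A minimal sketch of such a checksum pass (the checksum file name is
hypothetical):

  cd /media/default/MSD1
  find . -type f -print0 | xargs -0 sha256sum > ~/MSD1.sha256
  # later, re-run against the original or the copy to detect bit rot:
  cd /media/default/MSD2 && sha256sum --quiet -c ~/MSD1.sha256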



David



Re: backing up backups

2022-04-10 Thread Default User
On Sun, Apr 10, 2022 at 11:13 PM David  wrote:

> On Mon, 11 Apr 2022 at 12:59, Default User 
> wrote:
>
> > Then I try to use rsync to make an identical copy of backup device MSD1
> on an absolutely identical 4-Tb external usb hard drive,
> > labeled MSD2, using this command:
> >
> > sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
> /media/default/MSD1/ /media/default/MSD2
> >
> > Here's the problem.  Although the size and number of items in each
> subdirectory on MSD1 and MSD2 seem to be identical, more space in total
> seems to be taken up on MSD2 than on MSD1:
>
> I suggest to try adding option -S to rsync, and see if that makes
> any difference. It might only affect the creation of new files, I'm
> unsure about that aspect.
>




Hi David, thanks for the reply.

I really don't know anything about sparse blocks, but I will check it out
and give it a try.


Re: backing up backups

2022-04-10 Thread David
On Mon, 11 Apr 2022 at 12:59, Default User  wrote:

> Then I try to use rsync to make an identical copy of backup device MSD1 on an 
> absolutely identical 4-Tb external usb hard drive,
> labeled MSD2, using this command:
>
> sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 
> /media/default/MSD1/ /media/default/MSD2
>
> Here's the problem.  Although the size and number of items in each 
> subdirectory on MSD1 and MSD2 seem to be identical, more space in total seems 
> to be taken up on MSD2 than on MSD1:

I suggest to try adding option -S to rsync, and see if that makes
any difference. It might only affect the creation of new files, I'm
unsure about that aspect.



backing up backups

2022-04-10 Thread Default User
Hello!

My setup:
- single home x86-64 computer running Debian 11 Stable, up to date.
- one 4-Tb external usb hard drive to use as a backup device, labeled MSD1.
- another identical usb hard drive, labeled MSD2, to use as a copy of the
backups on MSD1.
- the computer and all storage devices are formatted ext4, not encrypted.
- two old Clonezilla disk images from when I installed Debian 11 last year
(probably irrelevant).
- Timeshift to daily back up system EXCEPT for data directories.
- Back in Time to daily back up data directories.
- Borgbackup to also daily back up data directories.
- Rsync to frequently backup any changed data between the daily Back in
Time and Borgbackup backups of data directories, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2 --exclude-from
"/home/default/rsync_exclude_list.txt" /home
/media/default/MSD1/rsync_backups_of_host_home_directory_only

Each type of backup is in a separate subdirectory on MSD1 (Timeshift, Back
in TIme, Rsync, etc.).

It "seems" to work okay, BUT . . .

Then I try to use rsync to make an identical copy of backup device MSD1 on
an absolutely identical 4-Tb external usb hard drive,
labeled MSD2, using this command:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2

Here's the problem.  Although the size and number of items in each
subdirectory on MSD1 and MSD2 seem to be identical, more space in total
seems to be taken up on MSD2 than on MSD1:

df -h /dev/sdb1
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb1   3.6T   68G   3.4T   2%  /media/default/MSD1

df -h /dev/sdc1
Filesystem  Size  Used  Avail Use%   Mounted on
/dev/sdc1   3.6T   69G   3.4T   2%   /media/default/MSD2

df /dev/sdb1
Filesystem      1K-blocks     Used  Available  Use%  Mounted on
/dev/sdb1      3844517880 71255588 3577896764    2%  /media/default/MSD1

df /dev/sdc1
Filesystem      1K-blocks     Used  Available  Use%  Mounted on
/dev/sdc1      3844517880 71906088 3577246264    2%  /media/default/MSD2

I have tried "everything", even re-formatted MSD2 and re-done:

sudo rsync -aAXHxvv --delete --info=progress2,stats2,name2
/media/default/MSD1/ /media/default/MSD2
but the results were exactly the same.

I doubt that this is a problem with hard links, as I am using the "H"
option in rsync.

Now, pardon me for thinking that a copy of backups should not take up any
more or less space than the original.  And I consider backups to be much
too important to not be absolutely "correct".  So, what SHOULD be done so
that MSD2 will always be EXACTLY the same as MSD1?

Is there a way to do this using rsync?

I suppose I could just make images of MSD1, using dd for example.  But then
each time wouldn't I just be backing up not just the changed data, but all
the data, AND backing up all of the free space on MSD1 also.  Aside from
wasting TONS of space, each time it could take many hours or even days to
do. Which means it just won't get done. So much for redundancy!

So . . .   what IS the correct way to make "backups of backups"?


Re: backups Was: SSD and HDD

2020-10-13 Thread Dan Ritter
mick crane wrote: 
> On 2020-10-13 00:46, Dan Ritter wrote:
> > mick crane wrote:
> > > 
> 
> This looks like good advice, thanks Dan and all.
> One thing I wonder about: if I reboot and change the boot order to start windows,
> might I create some confusion on the network, as the pfsense PC does DHCP
> and will think there are 2 PCs with the same MAC address?

It doesn't care. You might care, but probably not.

-dsr-



Re: backups Was: SSD and HDD

2020-10-13 Thread mick crane

On 2020-10-13 00:46, Dan Ritter wrote:

mick crane wrote:


might I ask a favour for information on accepted wisdom for this stuff?
I being a home user have pfsense on old lenovo between ISP router and
switch to PCs
another old buster lenovo doing email
another Buster PC I do bits of programming on.
Windows PC I play poker on and some games.
My approach to backup has been to copy files I want to keep to external HDs
and other disks when I remember. If something goes wrong so long as I
remember what the config files do it's not such a big deal to start again.

I suppose I should try to make it more formal


Only if you care about the data...

Tips on the accepted wisdom appreciated, like: if I want to use a windows
program, is it better to have it virtualized, or to reboot and change the
boot order, or just to have it on another PC.


That would depend. Is it a bother to reboot? Do you have a spare
PC? Both of those are easier and potentially faster than
virtualizing an existing system.



And also a practical method for backup hardware, as consumer hardware
only seems to have room for 2 disks at most.


So you have four or more PCs around, and don't mind having
another. Get an older machine and put two shiny new spinning
disks in it. Have the Debian installer set it up as MDADM RAID-1
-- mirrors. Use backupninja on the three Linux machines and have
them store their data on the backup machine.


This looks like good advice, thanks Dan and all.
One thing I wonder about: if I reboot and change the boot order to start
windows, might I create some confusion on the network, as the pfsense PC
does DHCP and will think there are 2 PCs with the same MAC address?


mick

--
Key ID4BFEBB31



backups Was: SSD and HDD

2020-10-12 Thread Dan Ritter
mick crane wrote: 
> 
> might I ask a favour for information on accepted wisdom for this stuff ?
> I being a home user have pfsense on old lenovo between ISP router and switch
> to PCs
> another old buster lenovo doing email
> another Buster PC I do bits of programming on.
> Windows PC I play poker on and some games.
> My approach to backup has been to copy files I want to keep to external HDs
> and other disks when I remember. If something goes wrong so long as I
> remember what the config files do it's not such a big deal to start again.
> I suppose I should try to make it more formal

Only if you care about the data...

> Tips on the accepted wisdom appreciated, like: if I want to use a windows
> program, is it better to have it virtualized, or to reboot and change the
> boot order, or just to have it on another PC.

That would depend. Is it a bother to reboot? Do you have a spare
PC? Both of those are easier and potentially faster than
virtualizing an existing system.


> And also a practical method for backup hardware, as consumer hardware
> only seems to have room for 2 disks at most.

So you have four or more PCs around, and don't mind having
another. Get an older machine and put two shiny new spinning
disks in it. Have the Debian installer set it up as MDADM RAID-1
-- mirrors. Use backupninja on the three Linux machines and have
them store their data on the backup machine. 

Now, you could have your Windows machine back up to your backup
as well. The problem is, Windows' built-in backup system wants
to write to a network filesystem, and once it has access to that
-- well, so does nasty ransomware on your Windows system. And
that would put your whole backup set at risk.

If there's a Windows backup system that runs across sftp or
rsync-over-ssh, that would be much better. Or you can plug an
external USB disk into the Windows machine and ask it to store
the backups there directly.
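(For illustration only, a rough sketch of that setup; the device names, hostname and paths below are placeholders, not anything specified in this thread:)

# on the backup machine: build the mirror from two new disks and keep an eye on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mkdir -p /srv/backup && mount /dev/md0 /srv/backup
cat /proc/mdstat

# on each Linux client: push its data nightly over ssh, e.g. from cron
rsync -aH --delete /home/ backupbox:/srv/backup/$(hostname)/home/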

-dsr-




KopiaUI backups

2020-10-08 Thread Peter Ehlert

I am a long time user of LuckyBackup, and am very satisfied.

Experimenting with the Clear Linux OS system, I have been looking for a
backup solution, since LuckyBackup is not readily available there.
Clear OS provides KopiaUI... reading the Kopia webpage and YouTube
tutorial, the KopiaUI app seems to be worthwhile.
I am now making a massive backup on Clear OS, and it occurs to me that
KopiaUI would be good to use on Debian also.


Issue: KopiaUI is not in the Debian Stable repos.

Q: is there a reason to avoid KopiaUI?



Re: Recommendation for filesystem for USB external drive for backups

2020-08-19 Thread Michael Stone

On Thu, Aug 20, 2020 at 11:33:34AM +1200, Ben Caradoc-Davies wrote:

On 20/08/2020 10:08, David Christensen wrote:

On 2020-08-13 01:31, David Christensen wrote:
Without knowing anything about your resources, needs, 
expectations, "consistent backup plan", etc., and given the 
choices ext2, ext3, or ext4 for an external USB drive presumably 
to store backup repositories, I would also pick ext4.

If you want to access the backup drive from foreign operating systems:


If interoperability is a consideration, FAT32 and NTFS should also be 
considered. 


These days exfat is the best choice. No (practical) file size limit like 
fat32, and more reliable across systems than ntfs.




Re: Recommendation for filesystem for USB external drive for backups

2020-08-19 Thread Ben Caradoc-Davies

On 20/08/2020 10:08, David Christensen wrote:

On 2020-08-13 01:31, David Christensen wrote:
Without knowing anything about your resources, needs, expectations, 
"consistent backup plan", etc., and given the choices ext2, ext3, or 
ext4 for an external USB drive presumably to store backup 
repositories, I would also pick ext4.

If you want to access the backup drive from foreign operating systems:


If interoperability is a consideration, FAT32 and NTFS should also be 
considered. FAT32 is widely used for removable flash media but has a 4GB 
file size limit, no journaling, and no support for permissions. NTFS is 
widely used for external hard drives and has journaling and support for 
attributes. If you buy a consumer-grade external hard drive, it will 
most likely be formatted with NTFS. Backup archives (such as tar 
archives) can be used to preserve Linux file metadata (permissions and 
timestamps) on foreign filesystems.


Kind regards,

--
Ben Caradoc-Davies 
Director
Transient Software Limited 
New Zealand



Re: Recommendation for filesystem for USB external drive for backups

2020-08-19 Thread David Christensen

On 2020-08-13 01:31, David Christensen wrote:

On 8/12/20 5:14 PM, rhkra...@gmail.com wrote:
I'm getting closer to setting up a consistent backup plan, backing up 
to an
external USB drive.  I'm wondering about a reasonable filesystem to 
use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if 
there is any

good reason to use anything beyond ext2?

(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't 
see any
point (and don't want to learn) either of them at this point -- I 
don't see

much need for a backup filesystem.)

But, I'll listen to opinions ;-)


Without knowing anything about your resources, needs, expectations, 
"consistent backup plan", etc., and given the choices ext2, ext3, or 
ext4 for an external USB drive presumably to store backup repositories, 
I would also pick ext4.


If you want to access the backup drive from foreign operating systems:

https://www.freebsd.org/doc/handbook/filesystems-linux.html

https://www.howtogeek.com/112888/3-ways-to-access-your-linux-partitions-from-windows/

https://apple.stackexchange.com/questions/29842/how-can-i-mount-an-ext4-file-system-on-os-x


David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-15 Thread Andrei POPESCU
On Vi, 14 aug 20, 10:31:51, David Wright wrote:
> 
> I'm dubious whether I shall ever start using these filesystems.
> I create multiple backups on ext4 filesystems on LUKS, and keep
> MD5 digests of their contents. Would that qualify as your
> "additional tools"?

Assuming you are also periodically checking that the files still produce
the same MD5 sums, yes -- as far as I can tell from playing around with
btrfs and reading about it and its differences from ZFS.
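(As an illustration of that kind of periodic check, with placeholder paths:)

# after writing a backup, record digests of everything on it
cd /mnt/backup && find . -type f ! -name MD5SUMS -print0 | xargs -0 md5sum > MD5SUMS

# later, re-verify that the files still produce the same sums
cd /mnt/backup && md5sum --quiet -c MD5SUMS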

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-14 Thread David Wright
On Fri 14 Aug 2020 at 08:25:20 (+0300), Andrei POPESCU wrote:
> On Mi, 12 aug 20, 20:14:03, rhkra...@gmail.com wrote:
> > I'm getting closer to setting up a consistent backup plan, backing up to an 
> > external USB drive.  I'm wondering about a reasonable filesystem to use, I 
> > think I want to stay in the ext2/3/4 family, and I'm wondering if there is 
> > any 
> > good reason to use anything beyond ext2?
>  
> In my opinion using ext2 in 2020 is mostly pointless, beyond the rare 
> situation where some software doesn't support ext4 (e.g. some odd/old 
> bootloader, other OSs, etc.).
> 
> Because it is getting significantly less use support for it is also more 
> likely to bit rot.
> 
> As far as I know ext3 is mostly ext2 with journalling added.
> 
> In comparison ext4 was developed from scratch with journaling and 
> support for other newer techniques, like better allocation of space to 
> prevent fragmentation and improve performance.

If you create your backup partitions on a newer distribution,
just check that wheezy can mount them before you start filling
them up. There are one or two options that wheezy kernels can't handle
in ext4, though I don't know whether any of them get enabled by
default; see man ext4 for details.
Ditto for any encryption scheme you might use.
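(For example, and only as an assumption about which features matter here: newer mkfs.ext4 defaults such as metadata_csum or 64bit postdate wheezy's kernel and can be switched off at creation time; check man ext4 and test-mount on the old system before relying on it.)

mkfs.ext4 -L backup -O ^metadata_csum,^64bit /dev/sdX1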

> > (Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see 
> > any 
> > point (and don't want to learn) either of them at this point -- I don't see 
> > much need for a backup filesystem.)
> 
> As has been stated already, both btrfs and ZFS have built-in bitrot 
> protections that are very useful for backups and archives. To achieve 
> the same level of protection on ext4 you need additional tools.

I'm dubious whether I shall ever start using these filesystems.
I create multiple backups on ext4 filesystems on LUKS, and keep
MD5 digests of their contents. Would that qualify as your
"additional tools"?

Cheers,
David.



Re: Recommendation for filesystem for USB external drive for backups

2020-08-14 Thread tomas
On Fri, Aug 14, 2020 at 10:31:08AM +0100, Jonathan Dowland wrote:
> On Thu, Aug 13, 2020 at 09:32:13PM +, ghe2001 wrote:
> >Two for sure and put them in a RAID1 -- formatted ext4. And watch that
> >mdstat.
> >
> >And a third or fourth to see if you can get ZFS going.
> 
> For playing around with tech, sure: for part of a mundane, reliable
> backup strategy for the OP, and as an external, hot-pluggable drive,
> I disagree, RAID is not a good idea for this use-case.

Absolutely. It's like jumping on your semi-trailer to go grocery-shopping.

What's a RAID good for? It's to ensure service continuity in the case
of a disk failure. Perfect for a server which has to be running 24/7
with as little downtime as possible.

Is backup a case for that? No. For backup, you

 (a) check the fs on your backup media
 (b) mount that file system
 (c) run the backup
 (d) unmount

When you discover your media is corrupt/broken, you restart with a
new medium.
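(Steps (a)-(d) in shell form, with a placeholder device and paths:)

e2fsck -f /dev/sdX1                       # (a) check the fs on the backup medium
mount /dev/sdX1 /mnt/backup               # (b) mount that file system
rsync -aH --delete /home/ /mnt/backup/    # (c) run the backup
umount /mnt/backup                        # (d) unmount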

If you need any redundancy, you keep several backups in parallel
(which you keep physically separate, so your house burning down
doesn't catch all of them at once).

Adjust accordingly for over-the-net backups. Other variations
possible. No variation of it makes any kind of RAID look attractive.

Now, if you are setting up a backup server for hundreds of clients,
that's another story. I'd consider RAID for that (not RAID1, though).

If you want to play with toys, by all means, do it. That's what
we're here for. Backup doesn't seem the right playground for
this toy.

Cheers
 - t


signature.asc
Description: Digital signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-14 Thread Jonathan Dowland

On Thu, Aug 13, 2020 at 09:32:13PM +, ghe2001 wrote:

Two for sure and put them in a RAID1 -- formatted ext4. And watch that
mdstat.

And a third or fourth to see if you can get ZFS going.


For playing around with tech, sure: for part of a mundane, reliable
backup strategy for the OP, and as an external, hot-pluggable drive,
I disagree, RAID is not a good idea for this use-case.


--


PS your sig-separator is broken (missing ' ' after '--')


--
Please do not CC me, I am subscribed to the list.

  Jonathan Dowland
✎j...@debian.org
   https://jmtd.net



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread Andrei POPESCU
On Mi, 12 aug 20, 20:14:03, rhkra...@gmail.com wrote:
> I'm getting closer to setting up a consistent backup plan, backing up to an 
> external USB drive.  I'm wondering about a reasonable filesystem to use, I 
> think I want to stay in the ext2/3/4 family, and I'm wondering if there is 
> any 
> good reason to use anything beyond ext2?
 
In my opinion using ext2 in 2020 is mostly pointless, beyond the rare 
situation where some software doesn't support ext4 (e.g. some odd/old 
bootloader, other OSs, etc.).

Because it is getting significantly less use, support for it is also more
likely to bit rot.

As far as I know ext3 is mostly ext2 with journalling added.

In comparison ext4 was developed from scratch with journaling and 
support for other newer techniques, like better allocation of space to 
prevent fragmentation and improve performance.

> (Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see 
> any 
> point (and don't want to learn) either of them at this point -- I don't see 
> much need for a backup filesystem.)

As has been stated already, both btrfs and ZFS have built-in bitrot 
protections that are very useful for backups and archives. To achieve 
the same level of protection on ext4 you need additional tools.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread David Christensen

On 2020-08-13 01:31, David Christensen wrote:
> Migrating to ZFS was non-trivial, and I am still wrestling with
> disaster preparedness.

I should have qualified that -- when I used ZFS only as a volume manager 
and file system, it was not much harder than md and ext4.  You could put 
a GPT partition scheme on the external USB drive, create one large 
partition, encrypt the partition (optional), put the partition into a 
ZFS pool, create one filesystem inside the pool, and set the 
'mountpoint' property on the filesystem (ZFS does not use /etc/fstab). 
Use the filesystem like any other Linux filesystem.  If any bits rot, 
ZFS will detect them.  Do a "scrub" periodically (e.g. monthly).
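(A rough sketch of those steps, assuming the external drive shows up as /dev/sdb and using placeholder pool and mountpoint names; an illustration, not a recipe from this thread:)

parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%   # GPT scheme, one large partition

cryptsetup luksFormat /dev/sdb1                           # optional encryption layer
cryptsetup open /dev/sdb1 backupcrypt

zpool create backuppool /dev/mapper/backupcrypt           # one pool on the (mapped) partition
zfs create -o mountpoint=/mnt/backup backuppool/data      # one filesystem, mounted via its property

zpool scrub backuppool                                    # periodic integrity check, e.g. monthly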



On 2020-08-13 18:28, rhkra...@gmail.com wrote:

On Thursday, August 13, 2020 04:09:46 PM David Christensen wrote:

On 2020-08-13 12:52, rhkra...@gmail.com wrote:

On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:

I would recommend installing from buster-backports to get the current
openzfs release which includes improvements (notably native encryption)
as well as fixes.


Two questions:
 * Most of my backup will be done from a Wheezy system -- can I
 install ZFS

on Wheezy?


I do not see any ZFS packages for Wheezy:

https://packages.debian.org/search?keywords=zfs&searchon=names&suite=all&section=all


The simplest answer would be to install Buster and then install
'zfs-dkms' (either Stable or backport, depending upon preference).


Just for closure, that system has to stay Wheezy for the foreseeable future,
and then maybe TDE (I need to keep using kmail and kate from Wheezy).



I did some more searching for ZFS on Wheezy.  See the second FAQ item:

https://wiki.debian.org/DebianWheezy


Add this line to /etc/apt/sources.list:

deb http://archive.debian.org/debian wheezy main contrib


Then run these commands:

# apt-get update

# apt-cache search zfs


Look for "zfs-dkms".


David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread Tom Dial



On 8/13/20 13:52, rhkra...@gmail.com wrote:
> On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
>> Debian ZFS root (and boot) is not *that* hard; see the instructions at
>>
>>  https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
>>
>> They certainly are not harder than installing early Debian releases (as
>> I remember it from around 20 years ago), and should not be hard for
>> anyone building a backup system and server. Installation as an
>> additional file system should not be notably different from installing a
>> file system package from main, except for the GPL incompatibility
>> notice that will pop up during installation.
>>
>> I would recommend installing from buster-backports to get the current
>> openzfs release which includes improvements (notably native encryption)
>> as well as fixes.
> 
> Two questions:
> 
>* Most of my backup will be done from a Wheezy system -- can I install ZFS 
> on Wheezy?

Probably. It probably would be a lot more work than on buster, work that
arguably would be better spent upgrading a system that is over two years
out of security support.

> 
>* If I plug the USB drive into another machine without ZFS installed -- 
> hmm, well I guess I'd have to install ZFS to use the drive?

Yes.
> 

Regards,
Tom Dial



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread rhkramer
On Thursday, August 13, 2020 04:09:46 PM David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> >> I would recommend installing from buster-backports to get the current
> >> openzfs release which includes improvements (notably native encryption)
> >> as well as fixes.
> > 
> > Two questions:
> > * Most of my backup will be done from a Wheezy system -- can I
> > install ZFS
> > 
> > on Wheezy?
> 
> I do not see any ZFS packages for Wheezy:
> 
> https://packages.debian.org/search?keywords=zfs&searchon=names&suite=all&section=all
> 
> 
> The simplest answer would be to install Buster and then install
> 'zfs-dkms' (either Stable or backport, depending upon preference).

Just for closure, that system has to stay Wheezy for the foreseeable future,
and then maybe TDE (I need to keep using kmail and kate from Wheezy). 

> 
> > * If I plug the USB drive into another machine without ZFS installed
> > --
> > 
> > hmm, well I guess I'd have to install ZFS to use the drive?
> 
> Yes.
> 
> 
> David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread ghe2001



‐‐‐ Original Message ‐‐‐
On Thursday, August 13, 2020 2:50 PM, Dan Ritter  wrote:

> D. R. Evans wrote:
>
> > Greg Wooledge wrote on 8/13/20 2:29 PM:
> >
> > > The simplest answer would be to use ext4.
> >
> > I concur, given the OP's use case. And I speak as someone who raves about 
> > ZFS
> > at every reasonable opportunity :-)
>
> Also concur. But by all means buy a spare drive and experiment
> with ZFS -- just not on live data.

Two for sure and put them in a RAID1 -- formatted ext4. And watch that mdstat.

And a third or fourth to see if you can get ZFS going.

--
Glenn English



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread Dan Ritter
D. R. Evans wrote: 
> Greg Wooledge wrote on 8/13/20 2:29 PM:
> 
> > 
> > The simplest answer would be to use ext4.
> > 
> 
> I concur, given the OP's use case. And I speak as someone who raves about ZFS
> at every reasonable opportunity :-)

Also concur. But by all means buy a spare drive and experiment
with ZFS -- just not on live data.

-dsr-



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread D. R. Evans
Greg Wooledge wrote on 8/13/20 2:29 PM:

> 
> The simplest answer would be to use ext4.
> 

I concur, given the OP's use case. And I speak as someone who raves about ZFS
at every reasonable opportunity :-)

  Doc

-- 
Web:  http://enginehousebooks.com/drevans



signature.asc
Description: OpenPGP digital signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread Greg Wooledge
On Thu, Aug 13, 2020 at 01:09:46PM -0700, David Christensen wrote:
> On 2020-08-13 12:52, rhkra...@gmail.com wrote:
> > * Most of my backup will be done from a Wheezy system -- can I install 
> > ZFS
> > on Wheezy?
> 
> I do not see any ZFS packages for Wheezy:
> 
> The simplest answer would be to install Buster and then install 'zfs-dkms'
> (either Stable or backport, depending upon preference).

The simplest answer would be to use ext4.



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread David Christensen

On 2020-08-13 12:52, rhkra...@gmail.com wrote:

On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:

Debian ZFS root (and boot) is not *that* hard; see the instructions at

  https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

They certainly are not harder than installing early Debian releases (as
I remember it from around 20 years ago), and should not be hard for
anyone building a backup system and server. Installation as an
additional file system should not be notably different from installing a
file system package from main, except for the GPL incompatibility
notice that will pop up during installation.

I would recommend installing from buster-backports to get the current
openzfs release which includes improvements (notably native encryption)
as well as fixes.


Two questions:

* Most of my backup will be done from a Wheezy system -- can I install ZFS
on Wheezy?


I do not see any ZFS packages for Wheezy:

https://packages.debian.org/search?keywords=zfs&searchon=names&suite=all&section=all


The simplest answer would be to install Buster and then install 
'zfs-dkms' (either Stable or backport, depending upon preference).




* If I plug the USB drive into another machine without ZFS installed --
hmm, well I guess I'd have to install ZFS to use the drive?


Yes.


David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread rhkramer
On Thursday, August 13, 2020 01:45:59 PM Tom Dial wrote:
> Debian ZFS root (and boot) is not *that* hard; see the instructions at
> 
>  https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
> 
> They certainly are not harder than installing early Debian releases (as
> I remember it from around 20 years ago), and should not be hard for
> anyone building a backup system and server. Installation as an
> additional file system should not be notably different from installing a
> file system package from main, except for the GPL incompatibility
> notice that will pop up during installation.
> 
> I would recommend installing from buster-backports to get the current
> openzfs release which includes improvements (notably native encryption)
> as well as fixes.

Two questions:

   * Most of my backup will be done from a Wheezy system -- can I install ZFS 
on Wheezy?

   * If I plug the USB drive into another machine without ZFS installed -- 
hmm, well I guess I'd have to install ZFS to use the drive?



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread local10


Aug 13, 2020, 00:14 by rhkra...@gmail.com:

> I'm getting closer to setting up a consistent backup plan, backing up to an 
> external USB drive.  I'm wondering about a reasonable filesystem to use, I 
> think I want to stay in the ext2/3/4 family, and I'm wondering if there is 
> any 
> good reason to use anything beyond ext2?
>

I've been using an external USB drive for backups for years (more specifically, 
a regular HDD in a USB enclosure), it works reasonably well. I use ext4.

ext2 is more prone to lose stuff and become corrupted if your PC shuts down 
suddenly as it does not have journaling.
ext3 has journaling but is a bit slower than ext4, in my experience
ext4 works well and is able to recover from crashes and is a bit faster than 
ext3.

Regards,



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread Tom Dial



On 8/13/20 02:31, David Christensen wrote:
> On 8/12/20 5:14 PM, rhkra...@gmail.com wrote:
>> I'm getting closer to setting up a consistent backup plan, backing up
>> to an
>> external USB drive.  I'm wondering about a reasonable filesystem to
>> use, I
>> think I want to stay in the ext2/3/4 family, and I'm wondering if
>> there is any
>> good reason to use anything beyond ext2?
>>
>> (Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't
>> see any
>> point (and don't want to learn) either of them at this point -- I
>> don't see
>> much need for a backup filesystem.)
>>
>> But, I'll listen to opinions ;-)
> 
> Without knowing anything about your resources, needs, expectations,
> "consistent backup plan", etc., and given the choices ext2, ext3, or
> ext4 for an external USB drive presumably to store backup repositories,
> I would also pick ext4.
> 
> 
> But, none of the ext* filesystems have bit rot protection.  btrfs and
> ZFS do.
> 
> 
> btrfs is supported by the Debian Installer.  I used btrfs for Debian
> system disks for several years.  I discovered too late that btrfs
> requires routine maintenance (to balance its binary trees?).  The disks
> got progressively slower and software started misbehaving.  Notably,
> Thunderbird began losing messages when moving them from an IMAP folder
> to a local folder (!).  I went back to ext4 for my Debian system disks.
> 
> 
> Due to GPL and CDDL license conflicts, Debian does not support ZFS OOTB.
>  Notably, the Debian Installer lacks support for ZFS.  (Some brave and
> skilled people have figured out how to install Debian with ZFS on root;
> STFW for details.)  There is a 'contrib' ZFS kernel package available
> that can be installed on a working Debian system.  This makes it
> possible to use ZFS for most everything except boot and root.  ZFS is
> mature and reliable.  I use ZFS for FreeBSD system disks, file server
> live data, backups, archives, and images.  Migrating to ZFS was
> non-trivial, and I am still wrestling with disaster preparedness.

Debian ZFS root (and boot) is not *that* hard; see the instructions at

 
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

They certainly are not harder than installing early Debian releases (as
I remember it from around 20 years ago), and should not be hard for
anyone building a backup system and server. Installation as an
additional file system should not be notably different from installing a
file system package from main, except for the GPL incompatibility
notice that will pop up during installation.

I would recommend installing from buster-backports to get the current
openzfs release which includes improvements (notably native encryption)
as well as fixes.

Tom Dial

> 
> 
> David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread tomas
On Wed, Aug 12, 2020 at 09:15:21PM -0600, Charles Curley wrote:
> On Wed, 12 Aug 2020 20:14:03 -0400
> rhkra...@gmail.com wrote:
> 
> > I'm getting closer to setting up a consistent backup plan, backing up
> > to an external USB drive.  I'm wondering about a reasonable
> > filesystem to use, I think I want to stay in the ext2/3/4 family, and
> > I'm wondering if there is any good reason to use anything beyond ext2?
> 
> I use my external USB drives for off-site backup, so I use ext4 on top
> of an encrypted partition.

That's what I do. Actually, I don't even partition: the whole stick is
a LUKS encrypted volume with a single ext4 within.
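(Roughly, with a placeholder device name:)

cryptsetup luksFormat /dev/sdX          # the whole stick becomes one LUKS volume
cryptsetup open /dev/sdX backup
mkfs.ext4 -L backup /dev/mapper/backup  # a single ext4 inside it
mount /dev/mapper/backup /mnt/backup
# ... write the backup ...
umount /mnt/backup
cryptsetup close backup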

Cheers
 - t


signature.asc
Description: Digital signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread tomas
On Thu, Aug 13, 2020 at 12:55:35PM +1200, Ben Caradoc-Davies wrote:
> On 13/08/2020 12:14, rhkra...@gmail.com wrote:
> >I'm getting closer to setting up a consistent backup plan, backing up to an
> >external USB drive.  I'm wondering about a reasonable filesystem to use, I
> >think I want to stay in the ext2/3/4 family, and I'm wondering if there is 
> >any
> >good reason to use anything beyond ext2?
> 
> I use and recommend ext4. Journaling protects against filesystem
> metadata corruption, which can be caused by an electrical outage or
> system crash.

...or by early extraction of the media. After all, it's USB sticks
we are talking about.

Definitely: if you have to ask, ext4 is the answer. If you really need
anything else, you definitely know.

Cheers
 - t


signature.asc
Description: Digital signature


Re: Recommendation for filesystem for USB external drive for backups

2020-08-13 Thread David Christensen

On 8/12/20 5:14 PM, rhkra...@gmail.com wrote:

I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive.  I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?

(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see any
point (and don't want to learn) either of them at this point -- I don't see
much need for a backup filesystem.)

But, I'll listen to opinions ;-)


Without knowing anything about your resources, needs, expectations, 
"consistent backup plan", etc., and given the choices ext2, ext3, or 
ext4 for an external USB drive presumably to store backup repositories, 
I would also pick ext4.



But, none of the ext* filesystems have bit rot protection.  btrfs and 
ZFS do.



btrfs is supported by the Debian Installer.  I used btrfs for Debian 
system disks for several years.  I discovered too late that btrfs 
requires routine maintenance (to balance its binary trees?).  The disks 
got progressively slower and software started misbehaving.  Notably, 
Thunderbird began losing messages when moving them from an IMAP folder 
to a local folder (!).  I went back to ext4 for my Debian system disks.



Due to GPL and CDDL license conflicts, Debian does not support ZFS OOTB. 
 Notably, the Debian Installer lacks support for ZFS.  (Some brave and 
skilled people have figured out how to install Debian with ZFS on root; 
STFW for details.)  There is a 'contrib' ZFS kernel package available 
that can be installed on a working Debian system.  This makes it 
possible to use ZFS for most everything except boot and root.  ZFS is 
mature and reliable.  I use ZFS for FreeBSD system disks, file server 
live data, backups, archives, and images.  Migrating to ZFS was 
non-trivial, and I am still wrestling with disaster preparedness.



David



Re: Recommendation for filesystem for USB external drive for backups

2020-08-12 Thread Charles Curley
On Wed, 12 Aug 2020 20:14:03 -0400
rhkra...@gmail.com wrote:

> I'm getting closer to setting up a consistent backup plan, backing up
> to an external USB drive.  I'm wondering about a reasonable
> filesystem to use, I think I want to stay in the ext2/3/4 family, and
> I'm wondering if there is any good reason to use anything beyond ext2?

I use my external USB drives for off-site backup, so I use ext4 on top
of an encrypted partition.

http://charlescurley.com/blog/index.html

Start with
http://charlescurley.com/blog/posts/2019/Nov/02/backups-on-linux/ and
work your way forward in http://charlescurley.com/blog/tag/backups.html.

-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Recommendation for filesystem for USB external drive for backups

2020-08-12 Thread Mark Allums

On 8/12/2020 7:14 PM, rhkra...@gmail.com wrote:

I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive.  I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?

(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see any
point (and don't want to learn) either of them at this point -- I don't see
much need for a backup filesystem.)

But, I'll listen to opinions ;-)



Go for ext4.  No reason not to.



Re: Recommendation for filesystem for USB external drive for backups

2020-08-12 Thread Ben Caradoc-Davies

On 13/08/2020 12:14, rhkra...@gmail.com wrote:

I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive.  I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?


I use and recommend ext4. Journaling protects against filesystem 
metadata corruption, which can be caused by an electrical outage or 
system crash. ext3 also has journaling, but I see no reason to not use 
ext4. It is robust, widely deployed, and the default in Debian.


My backups are pigz-compressed tar archives, encrypted with gpg
symmetric encryption, with a "pigz -0" outer wrapper to add a 32-bit
checksum for convenient verification with "gzip -tv" or similar
without requiring decryption. Archives are written to external
local storage and also uploaded to cloud storage.
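(A sketch of that pipeline under assumed names; the source path and output file are placeholders, and gpg will prompt for the passphrase:)

tar -cf - /home/ben | pigz | gpg --symmetric --cipher-algo AES256 -o - \
    | pigz -0 > backup.tar.gz.gpg.gz      # pigz -0 adds the gzip framing and 32-bit CRC

gzip -tv backup.tar.gz.gpg.gz             # verify the outer checksum without decrypting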


Because I have a small number of large backup files, I make backup 
filesystems to optimise for large files and maximise space: minimal 
journal, no reserved blocks, large file inode ratio, and no support for 
resizing while mounted. I also disable mount count and interval checking:


mkfs.ext4 -J size=4 -m 0 -T largefile4 -O "^resize_inode" /dev/sdb1
tune2fs -c 0 -i 0 -L Backup /dev/sdb1

I have this line in /etc/fstab:

LABEL=Backup /media/backup ext4 noatime,noauto,user,errors=remount-ro 0 0

Kind regards,

--
Ben Caradoc-Davies 
Director
Transient Software Limited <https://transient.nz/>
New Zealand



Re: Recommendation for filesystem for USB external drive for backups

2020-08-12 Thread Keith Bainbridge

On 13/8/20 10:14 am, rhkra...@gmail.com wrote:

I'm getting closer to setting up a consistent backup plan, backing up to an
external USB drive.  I'm wondering about a reasonable filesystem to use, I
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any
good reason to use anything beyond ext2?

(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see any
point (and don't want to learn) either of them at this point -- I don't see
much need for a backup filesystem.)

But, I'll listen to opinions ;-)



I use ext4.   The advantage is journaling, which as I understand it 
reduces the chance of loss in case of power failure or the like. If 
you're using rsync, it will correct any prior bad copies (I think) at 
the next run.


On the basis that 1 back-up is better than none, can you cope with 2 
external devices?   As somebody else said in another topic, (approx) as 
long as you have a well tested spare lying around for when the main 
target fails.   Somebody put it more like this many years ago: There are 
2 kinds of drives around; those that have failed, and those that are 
going to fail.  Thankfully failure is getting rarer.





--
Keith Bainbridge

keithrbaugro...@gmail.com
or ke1thozgro...@gmx.com



Recommendation for filesystem for USB external drive for backups

2020-08-12 Thread rhkramer
I'm getting closer to setting up a consistent backup plan, backing up to an 
external USB drive.  I'm wondering about a reasonable filesystem to use, I 
think I want to stay in the ext2/3/4 family, and I'm wondering if there is any 
good reason to use anything beyond ext2?

(Some day I'll try ZFS or BTRFS for my "system" filesystems, but don't see any 
point (and don't want to learn) either of them at this point -- I don't see 
much need for a backup filesystem.)

But, I'll listen to opinions ;-)



Re: Automatic backups of a database to a storage device

2018-04-04 Thread Benoit B
Hi,

Two approaches among others:

a)  the crontab
a1- A nightly instruction (script) that mounts a network mount point.
a2- Another (a few minutes later) that stops the SQL server; this can
even go as far as substituting a nice holding page with an hourglass.
a3- Another (a few minutes later) that does the dump, writing to the
mount point.
a4- Another (a few minutes later) that restarts the server (and, if
applicable, puts the web app back into service) and unmounts the mount
point. (A rough sketch of this crontab approach follows below.)
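(A minimal sketch of that crontab approach; the mount point /mnt/backup, the database name appdb and the script path are placeholders, and credentials are assumed to come from ~/.my.cnf:)

#!/bin/bash
# nightly dump of the database to removable storage
set -e
mount /mnt/backup
mysqldump --single-transaction appdb > /mnt/backup/appdb-$(date +%F).sql
umount /mnt/backup

Called from root's crontab, for example:

30 2 * * * /usr/local/sbin/db-backup.sh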

b) I wonder whether lsyncd would let you launch scripts in which you do
the dump:
https://axkibe.github.io/lsyncd/manual/config/layer2/
https://packages.debian.org/stretch/lsyncd
I use it for backups of my filesystem; I would be surprised if it could
not be used for at least part of steps 1 to 4.

Cheers,
--
Benoit

On 23 March 2018 at 16:49, <vandendaelenclem...@gmail.com> wrote:
> Hello everyone,
>
> I have a little Rasberry on which I have had fun building a "local" web
> application. To be prepared for the unexpected, I would like to make an
> automatic backup of the databases stored on it to a USB key (for example).
>
> It goes without saying that the RPI runs Rasbian!
>
> Thanks in advance for your ideas and your expert help!
>
> Have a good end of the week, everyone!
>
>
>
> Clément VANDENDAELEN
>
> Web : www.vandendaelen.com
>
>



Re: Automatic backups of a database to a storage device

2018-03-27 Thread Daniel Caillibaud
On 23/03/18 at 20:23, Eric Degenetais wrote:

ED> On 23 March 2018 at 21:18,  wrote:
ED> 
ED> > I found something even simpler!
ED> > With the documentation sent by Timoté Brusson, I stumbled onto a
ED> > "pre-made" thing -- is it viable? ( https://doc.ubuntu-fr.org/automysqlbackup
ED> > ) Because it seems to work well..
ED> 
ED> Good evening, yes, it is a good tool; it is even used in commercial
ED> production environments.

For personal use this script looks very good (I have not looked at it in
detail, but it dates from 2011, so if there were a big problem one would
hope it would be known by now).

For a production environment it does have one flaw: it runs mysqldump on
the database while it is still in use, so on databases that are being used
at the moment of the dump this is a problem.
Either it locks globally, in which case mysql is quite likely to blow up in
the meantime (all write queries are left waiting; if the dump takes
several minutes, that can be enough to saturate mysql), or it dumps and
locks only per table, which can cause the same problem but above all can
lead to inconsistent data (a foreign key referenced in one table that did
not yet exist at the time its own table was dumped, earlier).

For personal use where hardly anyone writes, especially at backup time, and
with small databases, the risk is low, but I would not use it on my
production systems ;-)

To work around the lock problem I take an LVM snapshot, with a flush & lock
just before and an unlock just after; it takes 1~3s and goes through, then
I rsync /var/lib/mysql (actually the whole VM), and do the dump at leisure
on another machine later.
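(A rough sketch of that snapshot trick, assuming the MySQL datadir lives on an LVM volume vg0/mysql; names, sizes and the remote host are placeholders. The read lock has to stay held while the snapshot is created, hence doing it from a single client session:)

# lock, snapshot, unlock -- all in one mysql session so the lock stays held
mysql -e "FLUSH TABLES WITH READ LOCK; system lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql; UNLOCK TABLES;"

# copy the frozen datadir off the snapshot, then drop the snapshot
mkdir -p /mnt/mysql-snap
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ otherhost:/srv/mysql-copy/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap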

Otherwise there is also the master/slave solution, using the slave for the
dump (stopping replication before and resuming it afterwards), but that
burns a lot of I/O for nothing on the slave all day long. I did that for a
while, but the disk was at 100% around the clock, because the machine had
several slaves and several masters to follow; too expensive to dedicate an
SSD machine per mysql instance to back up, whereas with snapshot+rsync one
cheap server without SSDs dumps everybody.

-- 
Daniel

Hard work never killed anybody, but why take the chance?
Edgar Bergen



RE: Automatic backups of a database to a storage device

2018-03-23 Thread Eric Degenetais
On 23 March 2018 at 21:18, <vandendaelenclem...@gmail.com> wrote:

I found something even simpler!
With the documentation sent by Timoté Brusson, I stumbled onto a
"pre-made" thing -- is it viable? ( https://doc.ubuntu-fr.org/automysqlbackup ) Because
it seems to work well..

Good evening, yes, it is a good tool; it is even used in commercial
production environments.

Many thanks for your feedback; the young dev that I am has just learned
what a cron is and how this gadget works! :D



NB: Yes, it is indeed Raspbian, my bad.
Clément VANDENDAELEN
Web : www.vandendaelen.com

-Original Message-
From: Pierre L. <pet...@miosweb.mooo.com>
Sent: Friday 23 March 2018 20:32
To: debian-user-french@lists.debian.org
Subject: Re: Automatic backups of a database to a storage device


On 23/03/2018 at 16:49, vandendaelenclem...@gmail.com wrote:
>
> Hello everyone,
>
> I have a little Rasberry on which I have had fun building a "local" web
> application. To be prepared for the unexpected, I would like to make an
> automatic backup of the databases stored on it to a USB key (for
> example).
>
Good evening,
How about a little script to dump the databases to .sql format?
Often "root" is the user that has access to all the databases...
/PATH/TO/DUMPFILE.SQL will be the path and name of the output file.

- for all the databases:
mysqldump --user= --password=XXX -A > /PATH/TO/DUMPFILE.SQL

- for multiple databases, or a single one of your choice (DB_NAME is the
name of the database):
mysqldump --user= --password=XXX --databases DB_NAME1
DB_NAME2 DB_NAME3 > /PATH/TO/DUMPFILE.SQL

- for certain table(s) of a database:
mysqldump --user= --password= --databases DB_NAME --tables
TABLE_NAME > /PATH/TO/DUMPFILE.SQL

- to restore, just in case! (--verbose so you can see what is going on):
mysql --verbose --user= --password= DB_NAME <
/PATH/TO/DUMPFILE.SQL

These are old scripts that I have not used for quite a while; check
whether they still work... I do not think these functions have changed
much...
Possibly to be inserted into a fairly simple script along the lines
Raphael suggested.

> It goes without saying that the RPI runs Rasbian!
>
Raspbian? With a 'p', or is it another distro that I would not know
about?

Good luck ;)


RE: Automatic backups of a database to a storage device

2018-03-23 Thread vandendaelenclement
I found something even simpler!
With the documentation sent by Timoté Brusson, I stumbled onto a
"pre-made" thing -- is it viable? ( https://doc.ubuntu-fr.org/automysqlbackup ) Because
it seems to work well..
Many thanks for your feedback; the young dev that I am has just learned
what a cron is and how this gadget works! :D

NB: Yes, it is indeed Raspbian, my bad.
Clément VANDENDAELEN
Web : www.vandendaelen.com

-Original Message-
From: Pierre L. <pet...@miosweb.mooo.com> 
Sent: Friday 23 March 2018 20:32
To: debian-user-french@lists.debian.org
Subject: Re: Automatic backups of a database to a storage device


On 23/03/2018 at 16:49, vandendaelenclem...@gmail.com wrote:
>
> Hello everyone,
>
> I have a little Rasberry on which I have had fun building a "local" web
> application. To be prepared for the unexpected, I would like to make an
> automatic backup of the databases stored on it to a USB key (for
> example).
>
Good evening,
How about a little script to dump the databases to .sql format?
Often "root" is the user that has access to all the databases...
/PATH/TO/DUMPFILE.SQL will be the path and name of the output file.

- for all the databases:
mysqldump --user= --password=XXX -A > /PATH/TO/DUMPFILE.SQL

- for multiple databases, or a single one of your choice (DB_NAME is the
name of the database):
mysqldump --user= --password=XXX --databases DB_NAME1
DB_NAME2 DB_NAME3 > /PATH/TO/DUMPFILE.SQL

- for certain table(s) of a database:
mysqldump --user= --password= --databases DB_NAME --tables
TABLE_NAME > /PATH/TO/DUMPFILE.SQL

- to restore, just in case! (--verbose so you can see what is going on):
mysql --verbose --user= --password= DB_NAME <
/PATH/TO/DUMPFILE.SQL

These are old scripts that I have not used for quite a while; check
whether they still work... I do not think these functions have changed
much...
Possibly to be inserted into a fairly simple script along the lines
Raphael suggested.

> It goes without saying that the RPI runs Rasbian!
>
Raspbian? With a 'p', or is it another distro that I would not know
about?

Good luck ;)




Re: Automatic backups of a database to a storage device

2018-03-23 Thread Pierre L.

On 23/03/2018 at 16:49, vandendaelenclem...@gmail.com wrote:
>
> Hello everyone,
>
> I have a little Rasberry on which I have had fun building a "local" web
> application. To be prepared for the unexpected, I would like to make an
> automatic backup of the databases stored on it to a USB key (for example).
>
Good evening,
How about a little script to dump the databases to .sql format?
Often "root" is the user that has access to all the databases...
/PATH/TO/DUMPFILE.SQL will be the path and name of the output file.

- for all the databases:
mysqldump --user= --password=XXX -A > /PATH/TO/DUMPFILE.SQL

- for multiple databases, or a single one of your choice (DB_NAME is the
name of the database):
mysqldump --user= --password=XXX --databases DB_NAME1
DB_NAME2 DB_NAME3 > /PATH/TO/DUMPFILE.SQL

- for certain table(s) of a database:
mysqldump --user= --password= --databases DB_NAME
--tables TABLE_NAME > /PATH/TO/DUMPFILE.SQL

- to restore, just in case! (--verbose so you can see what is going on):
mysql --verbose --user= --password= DB_NAME <
/PATH/TO/DUMPFILE.SQL

These are old scripts that I have not used for quite a while; check
whether they still work... I do not think these functions have changed
much...
Possibly to be inserted into a fairly simple script along the lines
Raphael suggested.

> It goes without saying that the RPI runs Rasbian!
>
Raspbian? With a 'p', or is it another distro that I would not know
about?

Good luck ;)



signature.asc
Description: OpenPGP digital signature


RE: Automatic backups of a database to a storage device

2018-03-23 Thread vandendaelenclement
Thank you for your replies, I will try that!
Have a good evening!

Clément VANDENDAELEN
Web : www.vandendaelen.com

-Original Message-
From: "Raphaël" POITEVIN <raphael.poite...@gmail.com> 
Sent: Friday 23 March 2018 17:31
To: debian-user-french@lists.debian.org
Subject: Re: Automatic backups of a database to a storage device

Hello,

<vandendaelenclem...@gmail.com> writes:
> I would like to make an automatic backup of the databases stored on it
> to a USB key (for example).

A script that runs rsync, called by a cron job.

I did something along these lines:

#!/bin/bash

# Back up the data

# If the directory exists
TARGET=/mnt/extern/home/raphael
if [ -d $TARGET/documents ]; then
rsync -avz --delete ~/documents $TARGET/
fi

There are certainly better ways, of course
--
Raphaël




Re: Automatic backups of a database to a storage device

2018-03-23 Thread Raphaël POITEVIN
Hello,

 writes:
> I would like to make an automatic backup of the databases stored on it
> to a USB key (for example).

A script that runs rsync, called by a cron job.

I did something along these lines:

#!/bin/bash

# Back up the data

# If the directory exists
TARGET=/mnt/extern/home/raphael
if [ -d $TARGET/documents ]; then
rsync -avz --delete ~/documents $TARGET/
fi

There are certainly better ways, of course
-- 
Raphaël



Automatic backups of a database to a storage device

2018-03-23 Thread vandendaelenclement
Hello everyone,

I have a little Rasberry on which I have had fun building a "local" web
application. To be prepared for the unexpected, I would like to make an
automatic backup of the databases stored on it to a USB key (for example).

It goes without saying that the RPI runs Rasbian!

Thanks in advance for your ideas and your expert help!

Have a good end of the week, everyone!

 

Clément VANDENDAELEN

Web :   www.vandendaelen.com

 



Re: What tool can I use to make efficient incremental backups?

2017-09-17 Thread Kushal Kumaran
Mario Castelán Castro <marioxcc...@yandex.com> writes:

> On 2017-08-19 23:07 -0400 Celejar <cele...@gmail.com> wrote:
>>There's Borg, which apparently has good deduplication. I've just
>>started using it, but it's a very sophisticated and quite popular piece
>>of software, judging by chatter in various internet threads.
>
> This seems like an excellent tool for my use case. It has an interface
> very much like control version systems (which I am familiar with), makes
> efficient use of space and is no more complex to use than required (I'm
> referencing the saying “make things as simple as possible but not more
> simple”).
>
> I have been testing it with toy cases to have at least some experience
> with it before using it for my real backups.
>
> Using a Git checkout of the latest release I get this warning: “Using a
> pure-python msgpack! This will result in lower performance.”. Yet I have
> the Debian package “python3-msgpack“. Do you know what the problem is?
>

Are you using the virtualenv method recommended by the documentation?
By default, virtualenvs do not get access to packages that might have
been installed by the OS package manager.  You can pass the
--system-site-packages option to the virtualenv creation command to make
those packages available in the virtualenv.

Alternatively, you can recompile the borgbackup dependencies in the
virtualenv by following the instructions at
https://borgbackup.readthedocs.io/en/stable/installation.html#git-installation.
This may require you to install several other packages, as may be
needed to compile modules written in C.
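(For the first route, something along these lines; the environment path is a placeholder:)

python3 -m venv --system-site-packages ~/borg-env   # venv that can see the distro's python3-msgpack
~/borg-env/bin/pip install -U pip setuptools wheel
~/borg-env/bin/pip install borgbackup                # or "pip install -e ." inside the git checkout
~/borg-env/bin/borg --version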

-- 
regards,
kushal



Re: What tool can I use to make efficient incremental backups?

2017-09-16 Thread Celejar
On Sun, 20 Aug 2017 20:04:57 -0500
Mario Castelán Castro  wrote:

> On 2017-08-19 23:07 -0400 Celejar  wrote:
> >There's Borg, which apparently has good deduplication. I've just
> >started using it, but it's a very sophisticated and quite popular piece
> >of software, judging by chatter in various internet threads.

...

> Using a Git checkout of the latest release I get this warning: “Using a
> pure-python msgpack! This will result in lower performance.”. Yet I have
> the Debian package “python3-msgpack“. Do you know what the problem is?

No, sorry.

Celejar



Re: What tool can I use to make efficient incremental backups?

2017-08-22 Thread Celejar
On Tue, 22 Aug 2017 01:25:50 -0400
Gene Heskett <ghesk...@shentel.net> wrote:

...

> Amanda does not do this "deduplication" that I am aware of.
> 
> That is another aspect of data control that does not belong in the job 
> description of what a backup program should do, which is to be a 
> repository on some other storage medium besides the day to day operating 
> cache, of the data you will need to recover and restore normal 
> operations should your main drive become unusable with no signs of ill 
> health until it falls over.

That's not the only job of backup programs. I have backups of that
nature (rsnapshot), but I want some critical data to be stored offsite,
in the cloud, as well. This is a very bandwidth and storage limited
context, so deduplication is most welcome, even though there is, of
course, the tradeoff that you mention.

...

> Backups are so much a personal preferences thing it's hard to define.

...

> They can't get that amanda keeps records, and if you need to recover the 
> home directories of Joe and Jane Sixpack who work in sales, amanda will 
> look up the last level 0, restore that, and restore over that from the 
> various other level 1 or 2 backups made since until it arrives at and 
> recovers anything of theirs in last nights backup. I am backing up 5 
> machines here, using 20 to 32 GB worth of space a night on a separate 1 
> TB drive thats currently about 78% full.
> 
> You can make up your own mind, but to me amanda has been a good thing.

Sounds like a great program.

> Cheers, Gene Heskett

Celejar



Re: What tool can I use to make efficient incremental backups?

2017-08-21 Thread Gene Heskett
On Monday 21 August 2017 23:43:09 Celejar wrote:

> On Sun, 20 Aug 2017 02:05:46 -0400
>
> Gene Heskett <ghesk...@shentel.net> wrote:
> > On Saturday 19 August 2017 23:07:01 Celejar wrote:
> > > On Thu, 17 Aug 2017 11:47:34 -0500
> > >
> > > Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> > > > Hello.
> > > >
> > > > Currently I use rsync to make the backups of my personal data,
> > > > including some manually selected important files of system
> > > > configuration. I keep old backups to be more safe from the
> > > > scenario where I have deleted something important, I make a
> > > > backup, and I only notice the deletion afterwards.
> > > >
> > > > Each backup snapshot is stored in its own directory. There is
> > > > much redundancy between subsequent backups. I use the option
> > > > "--link-dest" to make hard links and thus save space for files
> > > > that are *identical* to an already-existing file in the backup
> > > > repository. but this is still inefficient. Any change to a file,
> > > > even to its metadata (permission, modification time, etc.), will
> > > > result in the file being saved at whole, instead of a delta.
> > > >
> > > > Can you suggest a more efficient alternative?
> > >
> > > There's Borg, which apparently has good deduplication. I've just
> > > started using it, but it's a very sophisticated and quite popular
> > > piece of software, judging by chatter in various internet threads.
> > >
> > > https://borgbackup.readthedocs.io/en/stable/
> > >
> > > Celejar
> >
> > Amanda has quite intelligent ways to do that. I run it nightly and
> > have
>
> [Snipped lots of miscellaneous, but seemingly irrelevant, discussion
> about Amanda's virtues.]
>
> Amanda does deduplication? Link?
>
> Celejar

Amanda does not do this "deduplication" that I am aware of.

That is another aspect of data control that does not belong in the job 
description of what a backup program should do, which is to be a 
repository on some other storage medium besides the day to day operating 
cache, of the data you will need to recover and restore normal 
operations should your main drive become unusable with no signs of ill 
health until it falls over.

The backup program should be relatively simple, so dependable it's
boring, yet smart enough to adjust its internal schedule of backup
levels so as to use as much of the storage media as it needs in a
long-term continuous use scenario. Amanda carries this to extremes, but you
may have to help it occasionally if a given entry in the disklist grows
until a level 0 backup no longer fits in the amount of media you allow it
to use per run.  As I've added machines to the list as they've been
added to my home network over the last 20 years, I find myself needing
to either buy a bigger drive or further break up my home directory into
smaller pieces to reduce the total size of that one entry.  But amanda
will never throw you under the bus: if a full won't fit, it continues to
do level 1's or even level 2's.

A level 1 is anything that has changed since the last level 0, a level 2 
is anything changed since the previous level 1, etc etc.

Amanda is an administrator program, using, at the PFC level of the
duties, usually tar and gzip (though it can use other compressors) for the
actual data moving.  It keeps records over the span of time you set it up
to use, so it knows where everything it has backed up is.  But because
those records are stored on the daily use drive, they aren't of much
utility if you need to do a bare metal recovery, so I wrote a wrapper
that adds this database and a copy of the configuration that made the
backup to the end of every backup it makes, so I can do, and actually have
done, a bare metal recovery to a new drive in around 8 hours. The only
thing I lost was about 75 emails that had come in since the nightly
backup a few hours before that failure.

Backups are so much a personal preferences thing it's hard to define.

Some folks are so used to doing a full backup on Friday night, one that may
take 50 terabytes worth of tapes and a tape library that costs $50,000,
that they simply cannot wrap their minds around a program that does a
level 0 of a given disklist entry on any arbitrary nightly run, and
keeps track of a system such as the NY State Health system, doing it on
a tape a night.

They can't get that amanda keeps records, and if you need to recover the
home directories of Joe and Jane Sixpack who work in sales, amanda will
look up the last level 0, restore that, and restore over that from the
various other level 1 or 2 backups made since until it arrives at and
recovers anything of theirs in last night's backup.

Re: What tool can I use to make efficient incremental backups?

2017-08-21 Thread Celejar
On Sun, 20 Aug 2017 02:05:46 -0400
Gene Heskett <ghesk...@shentel.net> wrote:

> On Saturday 19 August 2017 23:07:01 Celejar wrote:
> 
> > On Thu, 17 Aug 2017 11:47:34 -0500
> >
> > Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> > > Hello.
> > >
> > > Currently I use rsync to make the backups of my personal data,
> > > including some manually selected important files of system
> > > configuration. I keep old backups to be more safe from the scenario
> > > where I have deleted something important, I make a backup, and I
> > > only notice the deletion afterwards.
> > >
> > > Each backup snapshot is stored in its own directory. There is much
> > > redundancy between subsequent backups. I use the option
> > > "--link-dest" to make hard links and thus save space for files that
> > > are *identical* to an already-existing file in the backup
> > > repository. but this is still inefficient. Any change to a file,
> > > even to its metadata (permission, modification time, etc.), will
> > > result in the file being saved at whole, instead of a delta.
> > >
> > > Can you suggest a more efficient alternative?
> >
> > There's Borg, which apparently has good deduplication. I've just
> > started using it, but it's a very sophisticated and quite popular
> > piece of software, judging by chatter in various internet threads.
> >
> > https://borgbackup.readthedocs.io/en/stable/
> >
> > Celejar

> Amanda has quite intelligent ways to do that. I run it nightly and have 

[Snipped lots of miscellaneous, but seemingly irrelevant, discussion about 
Amanda's virtues.]

Amanda does deduplication? Link?

Celejar



Re: What tool can I use to make efficient incremental backups?

2017-08-20 Thread Mario Castelán Castro
On 2017-08-19 23:07 -0400 Celejar <cele...@gmail.com> wrote:
>There's Borg, which apparently has good deduplication. I've just
>started using it, but it's a very sophisticated and quite popular piece
>of software, judging by chatter in various internet threads.

This seems like an excellent tool for my use case. It has an interface
very much like version control systems (which I am familiar with), makes
efficient use of space and is no more complex to use than required (I'm
referencing the saying “make things as simple as possible but not more
simple”).

I have been testing it with toy cases to have at least some experience
with it before using it for my real backups.
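
For anyone else starting out, the basic cycle looks roughly like this 
(repository path and archive names are made up):

$ borg init --encryption=repokey /mnt/usb/borg-repo
$ borg create --stats /mnt/usb/borg-repo::home-$(date +%F) ~/
$ borg list /mnt/usb/borg-repo
$ borg prune --keep-daily 7 --keep-weekly 4 /mnt/usb/borg-repo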

Using a Git checkout of the latest release I get this warning: “Using a
pure-python msgpack! This will result in lower performance.”. Yet I have
the Debian package “python3-msgpack“. Do you know what the problem is?

Thanks.

-- 
Do not eat animals, respect them as you respect people.
https://duckduckgo.com/?q=how+to+(become+OR+eat)+vegan




Re: What tool can I use to make efficient incremental backups?

2017-08-20 Thread Mario Castelán Castro
On 2017-08-20 19:37 + Glenn English <ghe2...@gmail.com> wrote:
>For me, the big drawback to Amanda was the initial configuration. It's
>huge and complex (at least it was a couple decades ago). But after
>it's all done, a cron job will run your backup(s) every night, while
>you sleep, with no problems. If you ask it to, it'll even verify the
>backup for you (an unverified backup isn't a backup, as they say).

I have taken a glance at AMANDA, and it seems indeed to be very complex.
It is great that it works for your use case, but it does not seem to be an
appropriate tool for my case. I do not need any highly sophisticated
tools. As I noted in the first message, I only want to back up a personal
computer to a USB drive.

Since I must manually connect the USB drive to make the backups, there is
no point in automating it with cron. Network backups are irrelevant
in my current case.

Regards and thanks.

-- 
Do not eat animals, respect them as you respect people.
https://duckduckgo.com/?q=how+to+(become+OR+eat)+vegan




Re: What tool can I use to make efficient incremental backups?

2017-08-20 Thread Glenn English
On Sun, Aug 20, 2017 at 6:05 AM, Gene Heskett <ghesk...@shentel.net> wrote:

> Amanda has quite intelligent ways to do that. I run it nightly and have
> been since the late 90's. Storage in my case is in what are called
> v-tapes, which in fact are directories on a separate, terabyte drive.
> However, unlike tapes, which are time-burning sequential reading devices,
> the terabyte drive is true random access, so recovery operations are
> about 1000x faster than real tapes. Not to mention the terabyte drive
> is about 1000 times more dependable than tape can ever be.

Amanda is a very well done collection of programs.

It very efficiently does incremental backups to several types of media
-- Gene goes to disk, I go to tape (takes forever, but there are
several little boxes containing backups that are nowhere near a
failure point).

It backs up in tar (or dump) files so you can restore data when the
program and all its data files have been lost (been there, and it
works). There's a program that will scan through a tape, tell you
what's on that tape, then fetch your selection(s) for you.

For me, the big drawback to Amanda was the initial configuration. It's
huge and complex (at least it was a couple decades ago). But after
it's all done, a cron job will run your backup(s) every night, while
you sleep, with no problems. If you ask it to, it'll even verify the
backup for you (an unverified backup isn't a backup, as they say).

Take a look. It's worth the trouble.

--
Glenn English



Re: What tool can I use to make efficient incremental backups?

2017-08-20 Thread Gene Heskett
On Saturday 19 August 2017 23:07:01 Celejar wrote:

> On Thu, 17 Aug 2017 11:47:34 -0500
>
> Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> > Hello.
> >
> > Currently I use rsync to make the backups of my personal data,
> > including some manually selected important files of system
> > configuration. I keep old backups to be more safe from the scenario
> > where I have deleted something important, I make a backup, and I
> > only notice the deletion afterwards.
> >
> > Each backup snapshot is stored in its own directory. There is much
> > redundancy between subsequent backups. I use the option
> > "--link-dest" to make hard links and thus save space for files that
> > are *identical* to an already-existing file in the backup
> > repository. but this is still inefficient. Any change to a file,
> > even to its metadata (permission, modification time, etc.), will
> > result in the file being saved at whole, instead of a delta.
> >
> > Can you suggest a more efficient alternative?
>
> There's Borg, which apparently has good deduplication. I've just
> started using it, but it's a very sophisticated and quite popular
> piece of software, judging by chatter in various internet threads.
>
> https://borgbackup.readthedocs.io/en/stable/
>
> Celejar
Amanda has quite intelligent ways to do that. I run it nightly and have 
been since the late 90's. Storage in my case is in what are called 
v-tapes, which in fact are directories on a separate, terabyte drive. 
However, unlike tapes, which are time-burning sequential reading devices, 
the terabyte drive is true random access, so recovery operations are 
about 1000x faster than real tapes. Not to mention the terabyte drive 
is about 1000 times more dependable than tape can ever be.
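
A vtape setup is, roughly, just a changer pointed at a directory of 
slots; a minimal amanda.conf sketch (directory and sizes invented, the 
real config has much more in it):

tpchanger "chg-disk:/amandatapes/DailySet1"
tapetype HARDDISK
define tapetype HARDDISK {
    length 100 gbytes
}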

That drive had 25 re-allocated sectors when smartctl came out all those 
years ago, and still has those same 25 sectors marked bad and re-assigned 
right now.

  5 Reallocated_Sector_Ct   0x0033   100   100   036   Pre-fail  Always   -   25
240 Head_Flying_Hours       0x       100   253   000   Old_age   Offline  -   66095 (35 137 0)

Pretty good for coming up on 67,000 head flying hours... :)

Seagate Barracudas, of course: pure "commodity drives" at less than a $70 
bill.

But when I do replace them, the first thing I do is go to the Seagate site 
and download the latest firmware for that drive to a CD, reboot to it, 
and let it update the drive's firmware if it finds old firmware.  What 
you buy from the box houses is often first-run stuff and may be 20 
revisions out of date.  The drive will probably be faster, too; one here 
went from 26 megabytes/second to a hair over 120 megabytes/second. SATA 
is slow; this old Asus mainboard shows under 3 GB/sec:
root@coyote:/etc/avahi# hdparm -tT /dev/sdc
/dev/sdc:
 Timing cached reads:   4882 MB in  2.00 seconds = 2441.26 MB/sec
 Timing buffered disk reads: 348 MB in  3.00 seconds = 115.84 MB/sec

Amanda and I have had our differences, but at the end of the day, it Just 
Works(TM).  And has been for around 18 years here.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>



Re: What tool can I use to make efficient incremental backups?

2017-08-19 Thread Celejar
On Thu, 17 Aug 2017 11:47:34 -0500
Mario Castelán Castro <marioxcc...@yandex.com> wrote:

> Hello.
> 
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> something important, I make a backup, and I only notice the deletion
> afterwards.
> 
> Each backup snapshot is stored in its own directory. There is much
> redundancy between subsequent backups. I use the option "--link-dest" to
> make hard links and thus save space for files that are *identical* to an
> already-existing file in the backup repository. but this is still
> inefficient. Any change to a file, even to its metadata (permission,
> modification time, etc.), will result in the file being saved at whole,
> instead of a delta.
> 
> Can you suggest a more efficient alternative?

There's Borg, which apparently has good deduplication. I've just
started using it, but it's a very sophisticated and quite popular piece
of software, judging by chatter in various internet threads.

https://borgbackup.readthedocs.io/en/stable/

Celejar



Re: What tool can I use to make efficient incremental backups?

2017-08-19 Thread Mario Castelán Castro
On 2017-08-18 23:53 +0100 Liam O'Toole  wrote:
>I use duplicity for exactly this scenario. See the wiki page[1] to get
>started.
>
>1: https://wiki.debian.org/Duplicity

Judging from a quick glance at that project's homepage in GNU Savannah,
this seem indeed to be the right tool for the job, but I have yet to try
it.

Thank you very much.




Re: What tool can I use to make efficient incremental backups?

2017-08-18 Thread Liam O'Toole
On 2017-08-17, Mario Castelán Castro <marioxcc...@yandex.com> wrote:
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> something important, I make a backup, and I only notice the deletion
> afterwards.
>
> Each backup snapshot is stored in its own directory. There is much
> redundancy between subsequent backups. I use the option "--link-dest" to
> make hard links and thus save space for files that are *identical* to an
> already-existing file in the backup repository. but this is still
> inefficient. Any change to a file, even to its metadata (permission,
> modification time, etc.), will result in the file being saved at whole,
> instead of a delta.
>
> Can you suggest a more efficient alternative?
>

(...)

I use duplicity for exactly this scenario. See the wiki page[1] to get
started.

1: https://wiki.debian.org/Duplicity

-- 

Liam



Debian-user conventions [was: [...] incremental backups?]

2017-08-18 Thread tomas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Thu, Aug 17, 2017 at 07:33:53PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 15:51, to...@tuxteam.de wrote:

[...]

> > [...] And yes, there's a wiki entry encouraging "in-line" quoting [1].
> 
> Ah, I see. I rarely check the Debian Wiki because it is almost
> abandoned. I have never found anything useful there. For an example of
> its state of abandonment see this fragment from that page:
> 
> “You really should see _Where is the foo package?_ above, but Debian
> ships with Iceweasel, a rebranded Firefox.”
>
> But a non-rebranded Firefox package is available in both Debian 8 and
> Debian 9 (at least in the latter, it is the default browser installed).

It's a wiki (hint, hint ;-)

> > That's why you shouldn't include the whole message, but snip the
> > relevant parts you are answering to. Believe me, for long threads,
> > this tends to work best.
> 
> Yes, except when nobody deletes the nested quotations (I do when it is
> appropriate). Eventually most of the text in the messages becomes quotes.

Yes, some gardening is needed. Take into account that you are addressing
about 3000 readers here, so sacrificing ten seconds of your time may
save hours in total (yes, I'm dramatizing a bit, but you get the idea).

> Also, it is a problem that one loses track of whom the nested quotations
> (except the topmost) belong to. Do you have any recommendation about
> that?

I try to stay at three layers at most, two typically; a good mail user
agent will help you keeping the levels straight. Add in some judgement :)

[...]

> Right, but here is a note about your wording: The great majority of
> people in debian-user are male (judging by the personal names), and
> moreover “he” is established as the pronoun in English when the sex is
> undetermined. The use of the female pronoun “she” in situations like
> this is unjustified.

Aha, someone noticed. It might be unjustified (don't know, can't assess
that), but it is subversive :)

But getting into more details would be definitely off-topic, I fear:
take it as an idiosyncrasy. Feel free to complain from time to time :-)

> I already try to use the inline style when appropriate. I will avoid
> quoting the previous message at all in the cases where formerly I would
> have used top posting. This seems to be the only change necessary to
> comply with your suggestions.

Don't take my word for that. I might be wrong, and all that.

Cheers
- -- tomás
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlmWmBUACgkQBcgs9XrR2kbMFQCcDIsnZMZavSJHyDVt3fmLc5Sm
wRwAn3D4P9f1ORgZ/q8o434A3iz7VrCQ
=hAgd
-END PGP SIGNATURE-



Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Mario Castelán Castro
On 17/08/17 15:51, to...@tuxteam.de wrote:
> On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> [...]
> 
> But in general, folks here tend to be tolerant. And yes, there's a
> wiki entry encouraging "in-line" quoting [1].

Ah, I see. I rarely check the Debian Wiki because it is almost
abandoned. I have never found anything useful there. For an example of
its state of abandonment see this fragment from that page:

“You really should see _Where is the foo package?_ above, but Debian
ships with Iceweasel, a rebranded Firefox.”

But a non-rebranded Firefox package is available in both Debian 8 and
Debian 9 (at least in the latter, it is the default browser installed).

>> Bottom posting requires scrolling past text that may not be needed. Top
>> posting puts the messages in reverse chronological order, which is not
>> something bad by itself.
> 
> That's why you shouldn't include the whole message, but snip the
> relevant parts you are answering to. Believe me, for long threads,
> this tends to work best.

Yes, except when nobody deletes the nested quotations (I do when it is
appropriate). Eventually most of the text in the messages becomes quotes.

Also, it is a problem that one loses track of whom the nested quotations
(except the topmost) belong to. Do you have any recommendation about
that?

> Yes, exactly. If someone needs the unabridged original post, it's either
> in her mailbox or in the archives.

Right, but here is a note about your wording: The great majority of
people in debian-user are male (judging by the personal names), and
moreover “he” is established as the pronoun in English when the sex is
undetermined. The use of the female pronoun “she” in situations like
this is unjustified.

> See [1] (there are also other hints on that page). Also [2] is a good
> reference.

I had read [2] in the past, but I did not find anything about posting
styles.

-

I already try to use the inline style when appropriate. I will avoid
quoting the previous message at all in the cases where formerly I would
have used top posting. This seems to be the only change necessary to
comply with your suggestions.

Regards.





Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Mario Castelán Castro
On 17/08/17 13:31, Nicolas George wrote:
> [[elided]]
> 
> No, it is the other way around: we rsync the data to a directory stored
> on a btrfs filesystem, and then we make a snapshot of that directory.
> With btrfs's CoW, only the parts of the files that have changed use
> space.

Thanks for the clarification.

> Please remember not to top-post.

Both bottom posting and top posting have their own disadvantages.
Bottom posting requires scrolling past text that may not be needed. Top
posting puts the messages in reverse chronological order, which is not
something bad by itself.

When I explicitly want a quote to reply to a specific parts of a
message, I post after the parts, as in this message; I don't know if
that would still be considered bottom posting. When I am including the
previous message *only* for reference, I use top posting because the
previous message is also archived in the inbox of the other users, so
the quotation included for reference is of secondary importance, and
therefore IMO should go after the *important* (new) information, that
is, at the bottom.

Is there a rule, guideline or de-facto standard mandating either style
in debian-user?





Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread tomas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Thu, Aug 17, 2017 at 03:24:35PM -0500, Mario Castelán Castro wrote:
> On 17/08/17 13:31, Nicolas George wrote:

[...]

> > Please remember not to top-post.
> 
> Both bottom posting and top posting have their own disadvantages.

The general convention here is to only quote the relevant parts you
are replying to. For long threads this tends to work better.

Worst is, of course, mixing styles :-)

But in general, folks here tend to be tolerant. And yes, there's a
wiki entry encouraging "in-line" quoting [1].

> Bottom posting requires scrolling past text that may not be needed. Top
> posting puts the messages in reverse chronological order, which is not
> something bad by itself.

That's why you shouldn't include the whole message, but snip the
relevant parts you are answering to. Believe me, for long threads,
this tends to work best.
 
> When I explicitly want a quote to reply to a specific parts of a
> message, I post after the parts, as in this message; I don't know if
> that would still be considered bottom posting.

Yes, exactly. If someone needs the unabridged original post, it's either
in her mailbox or in the archives.

>   When I am including the
> previous message *only* for reference, I use top posting because the
> previous message is also archived in the inbox of the other users, so
> the quotation included for reference is of secondary importance, and
> therefore IMO should go after the *important* (new) information, that
> is, at the bottom.

But that's why the original message isn't needed as a copy in the
first place.

> Is there a rule, guideline or de-facto standard mandating either style
> in debian-user?

See [1] (there are also other hints on that page). Also [2] is a good
reference.

[1] 
https://wiki.debian.org/FAQsFromDebianUser#What_is_top-posting_.28and_why_shouldn.27t_I_do_it.29.3F
[2] https://www.debian.org/MailingLists/

Cheers
- -- tomás
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlmWAegACgkQBcgs9XrR2kb2xgCffB4H+pR5piD29rDi2w3d6eqX
1egAnjrcCkv4wHQJzJxEk8JKMkoFVEgm
=3lRP
-END PGP SIGNATURE-



Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Mario Castelán Castro
Thanks for your answer.

Let me know if I understood your approach correctly. You have a
directory in a btrfs filesystem that is the target of your backups. When
you make a backup, you take a btrfs snapshot of this directory and
*then* use rsync. Is this correct?

Regards.

On 17/08/17 12:50, Nicolas George wrote:
> [[elided]]
> 
> We used a similar setup on a server, using the rsnapshot script. But we
> have users with huge mbox files that were copied entirely each time. We
> changed for a setup with normal rsync (no --link-dest) and btrfs
> snapshots, it increased the efficiency (storage and disk bandwidth)
> dramatically.
> 
> Regards,





Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Mario Castelán Castro
On 17/08/17 12:10, Fungi4All wrote:
> [[elided]]
> Stay with rsync

Why? Isn't there a more efficient alternative?





What tool can I use to make efficient incremental backups?

2017-08-17 Thread Mario Castelán Castro
Hello.

Currently I use rsync to make the backups of my personal data, including
some manually selected important files of system configuration. I keep
old backups to be more safe from the scenario where I have deleted
something important, I make a backup, and I only notice the deletion
afterwards.

Each backup snapshot is stored in its own directory. There is much
redundancy between subsequent backups. I use the option "--link-dest" to
make hard links and thus save space for files that are *identical* to an
already-existing file in the backup repository. but this is still
inefficient. Any change to a file, even to its metadata (permission,
modification time, etc.), will result in the file being saved at whole,
instead of a delta.
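
For reference, the kind of invocation being described is roughly this 
(destination paths are invented):

# each run gets its own dated directory; files unchanged since the last
# run become hard links into it instead of new copies
rsync -a --delete --link-dest=/backup/latest /home/ /backup/$(date +%F)/
ln -sfn /backup/$(date +%F) /backup/latest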

Can you suggest a more efficient alternative?

I know about bup <https://github.com/bup/bup> but I have not used it
because it warns that “This is a very early version. Therefore it will
most probably not work for you, but we don't know why. It is also
missing some probably-critical features.”.

I also know about obnam. Unfortunately, its main author has announced
that it will be unmaintained because it has become a piece of
engineering, with all the ugly consequences of that, and real
engineering is “not fun” for him.

Thanks.





Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Nicolas George
On decadi 30 Thermidor, an CCXXV, Mario Castelán Castro wrote:
> Let me know if I understood your approach correctly. You have a
> directory in a btrfs filesystem that is the target of your backups. When
> you make a backup, you take a btrfs snapshot of this directory and
> *then* use rsync. Is this correct?

No, it is the other way around: we rsync the data to a directory stored
on a btrfs filesystem, and then we make a snapshot of that directory.
With btrfs's CoW, only the parts of the files that have changed use
space.

Hum, I think it requires the --inplace option.
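
A minimal sketch of that cycle, assuming /backup/current is a btrfs 
subvolume (names invented):

# sync into the working subvolume, rewriting changed blocks in place
rsync -a --inplace --delete /home/ /backup/current/
# then freeze the result as a read-only snapshot; with CoW, blocks that
# did not change are shared with all the earlier snapshots
btrfs subvolume snapshot -r /backup/current /backup/snapshots/$(date +%F)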

> On 17/08/17 12:50, Nicolas George wrote:

Please remember not to top-post.

Regards,

-- 
  Nicolas George



Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Nicolas George
On decadi 30 Thermidor, an CCXXV, Mario Castelán Castro wrote:
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> something important, I make a backup, and I only notice the deletion
> afterwards.
> 
> Each backup snapshot is stored in its own directory. There is much
> redundancy between subsequent backups. I use the option "--link-dest" to
> make hard links and thus save space for files that are *identical* to an
> already-existing file in the backup repository. but this is still
> inefficient. Any change to a file, even to its metadata (permission,
> modification time, etc.), will result in the file being saved at whole,
> instead of a delta.
> 
> Can you suggest a more efficient alternative?

We used a similar setup on a server, using the rsnapshot script. But we
have users with huge mbox files that were copied entirely each time. We
changed for a setup with normal rsync (no --link-dest) and btrfs
snapshots, it increased the efficiency (storage and disk bandwidth)
dramatically.

Regards,

-- 
  Nicolas George



Re: What tool can I use to make efficient incremental backups?

2017-08-17 Thread Fungi4All
> From: marioxcc...@yandex.com
> To: debian-user <debian-user@lists.debian.org>
>
> Hello.
>
> Currently I use rsync to make the backups of my personal data, including
> some manually selected important files of system configuration. I keep
> old backups to be more safe from the scenario where I have deleted
> something important, I make a backup, and I only notice the deletion
> afterwards.
>
> Each backup snapshot is stored in its own directory. There is much
> redundancy between subsequent backups. I use the option "--link-dest" to
> make hard links and thus save space for files that are *identical* to an
> already-existing file in the backup repository. but this is still
> inefficient. Any change to a file, even to its metadata (permission,
> modification time, etc.), will result in the file being saved at whole,
> instead of a delta.
>
> Can you suggest a more efficient alternative?
>
> I know about bup <https://github.com/bup/bup> but I have not used it
> because it warns that “This is a very early version. Therefore it will
> most probably not work for you, but we don't know why. It is also
> missing some probably-critical features.”.
>
> I also know about obnam. Unfortunately, its main author has announced
> that it will be unmaintained because it has become a piece of
> engineering, with all the ugly consequences of that, and real
> engineering is “not fun” for him.
>
> Thanks.

Stay with rsync

Fwd: Some Debian package upgrades are corrupting rsync "quick check" backups

2017-01-28 Thread Juan Lavieri

I think this is of interest to some people on the list.

Since it apparently has not been resolved, anyone who wishes can follow 
up on this post at the following link:


https://lists.debian.org/debian-security/2017/01/msg00014.html


Regards.



 Forwarded message 
Subject: 	Some Debian package upgrades are corrupting rsync "quick check" 
backups

Resent-Date:    Sat, 28 Jan 2017 13:17:06 + (UTC)
Resent-From:    debian-secur...@lists.debian.org
Date:   Sun, 29 Jan 2017 02:11:41 +1300
From:   Adam Warner <li...@consulting.net.nz>
To:     debian-secur...@lists.debian.org



Hi all,

rsync typically detects file modifications by comparing exact
modification time and size of each identically named file in the source
and destination locations.

An rsync backup can be surreptitiously corrupted by modifying a source
file's content while keeping its size and modification time the same.

If some program is doing this then one way for rsync to detect the
modification is by appending the --checksum option. This requires every
file in the source and destination to be fully read. This can be many
orders of magnitude slower than a "quick check".
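
In other words, the difference is a single option (paths are 
illustrative); the second form reads every file on both sides:

# quick check: compares size and mtime only; fast, but fooled by the case below
rsync -a /source/ /backup/
# checksum mode: compares file contents; slow, but catches it
rsync -a --checksum /source/ /backup/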

If you have been using rsync without --checksum to back up your Debian
partitions then your backups are likely corrupted.

Some packages are being generated with files containing exactly the
same size and timestamps even though the contents of those files are
different. This is how the data corruption arises:

1. You use rsync to back up your OS partition.
2. You perform package upgrades. Some files are replaced with different
content but have the same size and modification time.
3. You use rsync to back up your OS partition to the same destination.
The modified files with the same size and modification time are
skipped. Your backup is now corrupt (you have old files mixed with new
files).

Here's a recent example of a package upgrade causing this:

<https://packages.debian.org/testing/amd64/libqt5concurrent5/download>
<https://packages.debian.org/sid/amd64/libqt5concurrent5/download>

If you're tracking unstable you may have recently upgraded from
libqt5concurrent5_5.7.1+dfsg-3_amd64.deb to
libqt5concurrent5_5.7.1+dfsg-3+b1_amd64.deb (a subsequent binary non-
maintainer upload).

Here are the contents of both packages:
$ dpkg -c libqt5concurrent5_5.7.1+dfsg-3_amd64.deb
drwxr-xr-x root/root      0 2017-01-12 04:14 ./
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/lib/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/
-rw-r--r-- root/root  27352 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/libQt5Concurrent.so.5.7.1
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/doc/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/
-rw-r--r-- root/root   1196 2016-12-01 21:17 ./usr/share/doc/libqt5concurrent5/LGPL_EXCEPTION.txt
-rw-r--r-- root/root  18232 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/changelog.Debian.gz
-rw-r--r-- root/root   3792 2016-12-01 21:17 ./usr/share/doc/libqt5concurrent5/changelog.gz
-rw-r--r-- root/root 103466 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/copyright
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/lintian/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/lintian/overrides/
-rw-r--r-- root/root    230 2017-01-12 04:14 ./usr/share/lintian/overrides/libqt5concurrent5
lrwxrwxrwx root/root      0 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/libQt5Concurrent.so.5 -> libQt5Concurrent.so.5.7.1
lrwxrwxrwx root/root      0 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/libQt5Concurrent.so.5.7 -> libQt5Concurrent.so.5.7.1

$ dpkg -c libqt5concurrent5_5.7.1+dfsg-3+b1_amd64.deb
drwxr-xr-x root/root      0 2017-01-12 04:14 ./
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/lib/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/
-rw-r--r-- root/root  27352 2017-01-12 04:14 ./usr/lib/x86_64-linux-gnu/libQt5Concurrent.so.5.7.1
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/doc/
drwxr-xr-x root/root      0 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/
-rw-r--r-- root/root   1196 2016-12-01 21:17 ./usr/share/doc/libqt5concurrent5/LGPL_EXCEPTION.txt
-rw-r--r-- root/root    252 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/changelog.Debian.amd64.gz
-rw-r--r-- root/root  18232 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/changelog.Debian.gz
-rw-r--r-- root/root   3792 2016-12-01 21:17 ./usr/share/doc/libqt5concurrent5/changelog.gz
-rw-r--r-- root/root 103466 2017-01-12 04:14 ./usr/share/doc/libqt5concurrent5/copyright
drwxr-xr-x root/root

Re: Debian server for backups of Windows clients

2016-09-11 Thread Daniel Bareiro
Hi, Celejar

On 09/09/16 18:18, Celejar wrote:

 My laptop has 802.11 a/b/g WiFi and Fast Ethernet.  Wireless data
 transfers are slow (~50 Mbps).  Wired is twice as fast (100 Mbps); still
 slow.  Newer WiFi (n, ac) should be faster, but only the newest WiFi
 hardware can match or beat Gigabit.

>>> You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
>>> everything I've read says that 20-24Mbps is the real-world maximum.

>> Still, 20-24 Mbps is more than the 10 Mbps I was seeing with rsync. There
>> could be a bottleneck somewhere?

> As per your own suggestion in another message, definitely benchmark
> with iperf to see if that's better.

Yes, it can be. I was thinking about what I said in a previous message
about the control information added by rsync on the packets sent.

I think this would be important only if we focus on the efficiency
(number of bits of data sent / total number of bits sent). In this case,
the focus is the transfer rate, for which the amount of control bits
used would be irrelevant, since we need to know how many bits per
second we are getting regardless of how useful those bits are.
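
A quick raw-throughput check along those lines, assuming iperf3 is 
installed on both ends (host name invented):

# on the backup server
iperf3 -s
# on the laptop: normal direction, then reversed (-R) to test the other way
iperf3 -c backupserver
iperf3 -c backupserver -R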

> And as we discussed in another thread some time ago, (especially) if 
> you're using wireless, benchmark throughput in *both* directions,
> since the transmitter (or receiver) may be better on one machine than
> on another.

Interesting sidelight. Thanks for sharing.


Kind regards,
Daniel





Re: Debian server for backups of Windows clients

2016-09-11 Thread Daniel Bareiro
Hi, deloptes.

On 09/09/16 19:06, deloptes wrote:

>> Still, 20-24 Mbps is more than the 10 Mbps I was seeing with rsync. There
>> could be a bottleneck somewhere?

> In my case it was the IO on the disk - I couldn't do more than 12Mbps even
> on wired connection, because I have encrypted disk ... it took me a while
> to understand why though.

This is an interesting fact, because 'orion' (the notebook used in the
mentioned test) also has an encrypted disk. In the test, the notebook
was pulling the files from the Windows VM over the wired network.

root@orion:~# dmsetup ls --target crypt
sda5_crypt  (254, 0)

root@orion:~# cryptsetup luksDump /dev/sda5 | grep Version -A3
Version:1
Cipher name:aes
Cipher mode:xts-plain64
Hash spec:  sha1

viper@orion:~$ lsblk --fs
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 /boot
├─sda2
└─sda5
  └─sda5_crypt
├─main-swap[SWAP]
├─main-root/
└─main-datos   /datos
sr0


I did not think this could affect the network transfer so strongly.
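
Two quick ways to see whether the encrypted disk, rather than the 
network, is the ceiling (test file name invented):

# in-memory throughput of the ciphers cryptsetup knows about
cryptsetup benchmark
# actual write speed through dm-crypt and the filesystem
dd if=/dev/zero of=/datos/ddtest bs=1M count=500 oflag=direct
rm /datos/ddtest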


Kind regards,
Daniel





Re: Debian server for backups of Windows clients

2016-09-10 Thread Neal P. Murphy
On Sat, 10 Sep 2016 10:53:20 -0400
rhkra...@gmail.com wrote:

> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> > On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > > bandwidth of a gigabit ethernet NIC.
> > > 
> > > Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> > > Mb/s.  It doesn't (really) change your conclusions.
> > > 
> > > regards,
> > > Randy Kramer
> > 
> > You make an assumption many folks do, but there's a start bit and a stop
> > bit so the math is more like 1000/10=100 Mb/s.
> 
> 
> Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to say 
> that.  Isn't it the case that there is something less than 1 start and 1 stop 
> for every byte--maybe like 1 stop bit for every several bytes?  (I am just 
> (slightly) curious.)
> 
> And, iirc, there are variations (which may be obsolete--I seem to remember 
> one 
> protocol that had either 2 start or 2 stop bits?


Start/stop bits apply to async TIA-232.

Speaking very generally, 100Mb/s Ethernet actually operates at 125Mb/s; that 
includes the LAPB-like protocol that actually transmits the packets. All the 
layer 1 overhead goes in that extra 25Mb/s.

So, more correctly, you have data + TCP + IP overhead + L2 overhead: around 3% 
for full packets, higher for smaller packets. This is why 100Mb/s ethernet 
saturates at around 92%-95% *data* transmission. The rest is protocol overhead 
and delays (probably akin to RR and RNR).
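
A rough back-of-the-envelope check, assuming full 1500-byte frames 
(1460 bytes of payload, 20 bytes TCP + 20 bytes IP inside, and 38 bytes 
of Ethernet framing, preamble and inter-frame gap around them):

$ perl -e 'print 1460/(1460+20+20+38), "\n"'
0.949284785435631

which is the ~95% figure above.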



Re: Debian server for backups of Windows clients

2016-09-10 Thread David Christensen
On 09/10/2016 07:23 PM, Celejar wrote:
> FTR: there seem to be more typos / here. The actual figure should be
> 11034157.6344 bits/second.


Yes, let's whip those typos out of this dead horse some more:

On 09/09/2016 08:36 PM, David Christensen wrote:
> Benchmarking using WiFi (48 Mb/s):
>
> 2016-09-09 20:18:51 dpchrist@t7400 ~
> $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s
>
> real  0m12.703s
> user  0m0.000s
> sys   0m12.481s
>
> 2016-09-09 20:19:32 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
>
>
>   100%  100MB   1.5MB/s   01:08
>
> real  1m16.023s
> user  0m4.548s
> sys   0m0.744s

2016-09-10 19:53:48 dpchrist@t7400 ~
$ perl -e 'print 104857600*8/76.023, "\n"'
11034302.7767912


On 09/09/2016 08:36 PM, David Christensen wrote:
> Testing again using Fast Ethernet (100 Mb/s):
>
> 2016-09-09 20:29:54 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
>
>
>   100%  100MB   2.4MB/s   00:42
>
> real  0m43.377s
> user  0m4.476s
> sys   0m0.876s

2016-09-10 19:54:43 dpchrist@t7400 ~
$ perl -e 'print 104857600*8/43.377, "\n"'
19338838.5549946


David



Re: Debian server for backups of Windows clients

2016-09-10 Thread Celejar
On Fri, 9 Sep 2016 20:43:44 -0700
David Christensen  wrote:

> On 09/09/2016 12:43 PM, Daniel Bareiro wrote:
> > On 09/08/16 22:57, David Christensen wrote:
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet.  Wireless data
> >> transfers are slow (~50 Mbps).  Wired is twice as fast (100 Mbps); still
> >> slow.  Newer WiFi (n, ac) should be faster, but only the newest WiFi
> >> hardware can match or beat Gigabit.
> > 
> > I think it is reasonable to expect that the wireless transfer rate is
> > lower than the one obtained in a wired network. But there is a big
> > difference compared to the ~50 Mbps you mentioned. The peak obtained
> > with rsync was 10 Mbps. Maybe the best is to take a metric with iperf,
> > what do you think?
> 
> See the benchmark I just posted for 802.11g WiFi --  dm-crypt -> scp ->
> dm-crypt, all without AES-NI --  110341671 bits/second.  Yuck.

FTR: there seem to be more typos / here. The actual figure should be
11034157.6344 bits/second.

Celejar



Re: Debian server for backups of Windows clients

2016-09-10 Thread Celejar
On Fri, 9 Sep 2016 20:36:39 -0700
David Christensen  wrote:

> On 09/09/2016 11:51 AM, Celejar wrote:
> > On Tue, 9 Aug 2016 18:57:02 -0700
> > David Christensen  wrote:
> > 
> > ...
> > 
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet.  Wireless data
> >> transfers are slow (~50 Mbps).  Wired is twice as fast (100 Mbps); still
> >> slow.  Newer WiFi (n, ac) should be faster, but only the newest WiFi
> >> hardware can match or beat Gigabit.
> > 
> > You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
> > everything I've read says that 20-24Mbps is the real-world maximum.
> > 
> > Celejar
> > 
> 
> Benchmarking using WiFi (48 Mb/s):
> 
> 2016-09-09 20:18:51 dpchrist@t7400 ~
> $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s

...

> 2016-09-09 20:19:32 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
> 
> 
>   100%  100MB   1.5MB/s   01:08
> 
> real  1m16.023s
> user  0m4.548s
> sys   0m0.744s
> 
> 
> So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
> 
> = 110341671 bits/second

So assuming that '9' is a typo, as per another message of yours in this
thread, your actual throughput is more like 11 Mbps, correct?

Celejar



Re: Debian server for backups of Windows clients

2016-09-10 Thread Gene Heskett
On Saturday 10 September 2016 10:53:20 rhkra...@gmail.com wrote:

> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> > On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > > bandwidth of a gigabit ethernet NIC.
> > >
> > > Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> > > Mb/s.  It doesn't (really) change your conclusions.
> > >
> > > regards,
> > > Randy Kramer
> >
> > You make an assumption many folks do, but there's a start bit and a
> > stop bit so the math is more like 1000/10=100 Mb/s.
>
> Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to
> say that.  Isn't it the case that there is something less than 1 start
> and 1 stop for every byte--maybe like 1 stop bit for every several
> bytes?  (I am just (slightly) curious.)
>
> And, iirc, there are variations (which may be obsolete--I seem to
> remember one protocol that had either 2 start or 2 stop bits?

Yes, still in use in some legacy stuffs.

There may be some inroads into the 10 bits per byte, but TCP is so old I 
doubt that the synchronization portion took a hit.  That's what it is: 
keeping everything in sync. Even USB has that same data format. SATA 
for disks, being much newer, may have abandoned that, particularly for 
the disks whose native format is a 4096-byte sector. I've also found 
SATA cabling is about 1000% flakier, requiring more frequent 
replacements.

> regards,
> Randy Kramer


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Debian server for backups of Windows clients

2016-09-10 Thread David Christensen
On 09/10/2016 07:53 AM, rhkra...@gmail.com wrote:
> On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
>> You make an assumption many folks do, but there's a start bit and a stop
>> bit so the math is more like 1000/10=100 Mb/s.
> 
> 
> Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to say 
> that.  Isn't it the case that there is something less than 1 start and 1 stop 
> for every byte--maybe like 1 stop bit for every several bytes?  (I am just 
> (slightly) curious.)
> 
> And, iirc, there are variations (which may be obsolete--I seem to remember 
> one 
> protocol that had either 2 start or 2 stop bits?

I remember start/stop bits from RS-232/485, but Gigabit Ethernet
signaling is more advanced:

https://en.wikipedia.org/wiki/Gigabit_Ethernet


David



Re: Debian server for backups of Windows clients

2016-09-10 Thread rhkramer
On Saturday, September 10, 2016 10:40:26 AM Gene Heskett wrote:
> On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:
> > On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > > bandwidth of a gigabit ethernet NIC.
> > 
> > Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> > Mb/s.  It doesn't (really) change your conclusions.
> > 
> > regards,
> > Randy Kramer
> 
> You make an assumption many folks do, but there's a start bit and a stop
> bit so the math is more like 1000/10=100 Mb/s.


Well, 1000/8 is still 125 ;-) but I wouldn't have written back just to say 
that.  Isn't it the case that there is something less than 1 start and 1 stop 
for every byte--maybe like 1 stop bit for every several bytes?  (I am just 
(slightly) curious.)

And, iirc, there are variations (which may be obsolete--I seem to remember one 
protocol that had either 2 start or 2 stop bits?)

regards,
Randy Kramer



Re: Debian server for backups of Windows clients

2016-09-10 Thread Gene Heskett
On Saturday 10 September 2016 10:26:15 rhkra...@gmail.com wrote:

> On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> > It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> > bandwidth of a gigabit ethernet NIC.
>
> Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125
> Mb/s.  It doesn't (really) change your conclusions.
>
> regards,
> Randy Kramer

You make an assumption many folks do, but there's a start bit and a stop 
bit so the math is more like 1000/10=100 Mb/s.

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Debian server for backups of Windows clients

2016-09-10 Thread rhkramer
On Saturday, September 10, 2016 08:41:53 AM Dan Ritter wrote:
> It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
> bandwidth of a gigabit ethernet NIC.

Sorry, I tend to pick at nits, but, for the record, 1000/8 is 125 Mb/s.  It 
doesn't (really) change your conclusions.

regards,
Randy Kramer



Re: Debian server for backups of Windows clients

2016-09-10 Thread Dan Ritter
On Sat, Sep 10, 2016 at 01:22:45AM -0400, Neal P. Murphy wrote:
> On Fri, 9 Sep 2016 23:14:30 -0500
> David Wright  wrote:
> 
> Good eye! I was going to say it's not possible to get 110Mb/s over 802.11g; 
> 40-50 is closer to the best I get. And 193Mb/s over 100Mb/s ethernet is right 
> out; best I've ever managed is maybe 97Mb/s, and 92-95 is more typical. 
> 11,034,157 b/s on W/L and 19,338,838 b/s on wired is *much* more believable.
> 
> Unless one has a very fast multicore CPU with hardware crypto assistance, 
> very fast RAM and the data to be transferred cached in RAM, one will probably 
> never saturate a fastE or gigE link where one end must decrypt the data from 
> disk/cache then encrypt the data to scp, and the other end must decrypt the 
> data from scp then encrypt the data to disk. Even simple compression slows 
> transfer down far too much.

SSDs can routinely read 400-600 MB/s. No need to have everything
cached in RAM.

In 2010, the first generation of i5 CPUs with hardware support for AES
could encrypt at about 15 MB/s, more than filling a 100 Mb/s pipe.

Here's a table of recent CPUs with AES support, running with
OpenSSL/LibreSSL. https://calomel.org/aesni_ssl_performance.html

It's in megabytes per second, so assume 1000/8 = 250 MB/s is the
bandwidth of a gigabit ethernet NIC. Anything which can do 2x
that can approach encrypting/decrypting from SSD, then
decrypting/encrypting over an SSH connection.

There are a lot of 500s and above on that chart.

And that's per-core, so even the 250+ CPUs can fill a gig-e pipe
while reading from SSD.
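
To check a specific CPU against that table (output format and exact 
numbers vary with the OpenSSL version; this measures one core):

$ openssl speed -evp aes-128-gcm
$ openssl speed -evp aes-256-cbc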

Nor are they monstrously expensive: an AMD FX-6300 is $90, a
motherboard for it could be another $90, and you can get a
decent SSD for $100 these days. A $400 desktop can be put
together that can saturate a gig-E link with encrypted traffic
from an encrypted disk.

Truly we live in marvelous times.

-dsr-



Re: Debian server for backups of Windows clients

2016-09-10 Thread David Christensen
On 09/09/2016 09:14 PM, David Wright wrote:
> On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
>> So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
>  ↑
> 
> What's this 9?

A typographical error.

104857600 bytes * 8 bits/byte / 76.024 seconds

= 11034158 bits/seconds


David




Re: Debian server for backups of Windows clients

2016-09-09 Thread Neal P. Murphy
On Fri, 9 Sep 2016 23:14:30 -0500
David Wright  wrote:

> On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
> > On 09/09/2016 11:51 AM, Celejar wrote:
> > > On Tue, 9 Aug 2016 18:57:02 -0700
> > > David Christensen  wrote:
> > > 
> > > ...
> > > 
> > >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet.  Wireless data
> > >> transfers are slow (~50 Mbps).  Wired is twice as fast (100 Mbps); still
> > >> slow.  Newer WiFi (n, ac) should be faster, but only the newest WiFi
> > >> hardware can match or beat Gigabit.
> > > 
> > > You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
> > > everything I've read says that 20-24Mbps is the real-world maximum.
> > > 
> > > Celejar
> > > 
> > 
> > Benchmarking using WiFi (48 Mb/s):
> > 
> > 2016-09-09 20:18:51 dpchrist@t7400 ~
> > $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s
> > 
> > real0m12.703s
> > user0m0.000s
> > sys 0m12.481s
> > 
> > 2016-09-09 20:19:32 dpchrist@t7400 ~
> > $ time scp -p urandom.100M samba:.
> > urandom.100M
> > 
> > 
> >   100%  100MB   1.5MB/s   01:08
> > 
> > real1m16.023s
> > user0m4.548s
> > sys 0m0.744s
> > 
> > 
> > So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
>  ↑
> 
> What's this 9?
> 
> Cheers,
> David.
> 

Assuming the talk is about transfer rates over the medium, not something like 
pre-compression data rates (which might be called 'marketing-speak').

Good eye! I was going to say it's not possible to get 110Mb/s over 802.11g; 
40-50 is closer to the best I get. And 193Mb/s over 100Mb/s ethernet is right 
out; best I've ever managed is maybe 97Mb/s, and 92-95 is more typical. 
11,034,157 b/s on W/L and 19,338,838 b/s on wired is *much* more believable.

Unless one has a very fast multicore CPU with hardware crypto assistance, very 
fast RAM and the data to be transferred cached in RAM, one will probably never 
saturate a fastE or gigE link where one end must decrypt the data from 
disk/cache then encrypt the data to scp, and the other end must decrypt the 
data from scp then encrypt the data to disk. Even simple compression slows 
transfer down far too much.

Now if one had many CPUs, hacked scp to open as many sockets and thread/child 
procs as there are CPUs, and had each thread work on a small-ish block of data 
at a time, one *might* be able to speed up the tranfser.



Re: Debian server for backups of Windows clients

2016-09-09 Thread David Wright
On Fri 09 Sep 2016 at 20:36:39 (-0700), David Christensen wrote:
> On 09/09/2016 11:51 AM, Celejar wrote:
> > On Tue, 9 Aug 2016 18:57:02 -0700
> > David Christensen  wrote:
> > 
> > ...
> > 
> >> My laptop has 802.11 a/b/g WiFi and Fast Ethernet.  Wireless data
> >> transfers are slow (~50 Mbps).  Wired is twice as fast (100 Mbps); still
> >> slow.  Newer WiFi (n, ac) should be faster, but only the newest WiFi
> >> hardware can match or beat Gigabit.
> > 
> > You get ~50Mbps over a/b/g? 54Mbps is the theoretical maximum, and
> > everything I've read says that 20-24Mbps is the real-world maximum.
> > 
> > Celejar
> > 
> 
> Benchmarking using WiFi (48 Mb/s):
> 
> 2016-09-09 20:18:51 dpchrist@t7400 ~
> $ time dd if=/dev/urandom of=urandom.100M bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 12.6709 s, 8.3 MB/s
> 
> real  0m12.703s
> user  0m0.000s
> sys   0m12.481s
> 
> 2016-09-09 20:19:32 dpchrist@t7400 ~
> $ time scp -p urandom.100M samba:.
> urandom.100M
> 
> 
>   100%  100MB   1.5MB/s   01:08
> 
> real  1m16.023s
> user  0m4.548s
> sys   0m0.744s
> 
> 
> So, 1048576900 bytes * 8 bits / byte / 76.024 seconds
 ↑

What's this 9?

Cheers,
David.


