Have a coffee or a beer, breathe deeply, then:
On 14/11/15 00:42, Gordon Messmer wrote:
> On 11/13/2015 12:59 PM, J Martin Rushton wrote:
>> Maybe I should have been clearer: use (LVM) OR (RAID1 and
>> break).
>
> I took your meaning. I'm saying
On Fri, 13 Nov 2015, Gordon Messmer wrote:
>Breaking a RAID volume doesn't make filesystems consistent,
While using LVM arranges for some filesystems to be consistent (it is
not always possible), it does nothing to ensure application consistency,
which can be just as important. Linux doesn't
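The LVM approach under discussion is a snapshot of the logical volume. A minimal sketch (volume-group, device, and path names are placeholders, not anything from the thread), with the caveat the post raises: the filesystem is quiesced for you, but the application is not:

```shell
# Snapshot the volume holding the data (vg0/data is a placeholder).
# ext4 and XFS are frozen automatically at snapshot time, but any
# running application must be quiesced separately for true consistency.
lvcreate --size 5G --snapshot --name data-snap /dev/vg0/data

# Mount the snapshot read-only and back it up from there.
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
rsync -a /mnt/data-snap/ backuphost:/backups/data/

# Tear the snapshot down once the copy is complete.
umount /mnt/data-snap
lvremove -f /dev/vg0/data-snap
```

These commands require root and a real volume group, so treat this as an outline of the technique rather than a drop-in script.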
On 11/14/2015 03:04 AM, J Martin Rushton wrote:
On 14/11/15 00:42, Gordon Messmer wrote:
For instance, it only works if you mirror a single disk. It
doesn't work if you use RAID10 or RAID5, or RAID6, or RAIDZ, etc.
That of course is exactly why I said RAID1.
I know. And I was trying to
On 11/14/2015 09:01 AM, Mark Milhollan wrote:
On Fri, 13 Nov 2015, Gordon Messmer wrote:
Breaking a RAID volume doesn't make filesystems consistent,
While using LVM arranges for some filesystems to be consistent (it is
not always possible)
Can you explain what you mean? The standard
On 13/11/15 01:52, Benjamin Smith wrote:
> I did exactly this with ZFS on Linux and cut over 24 hours of
> backup lag to just minutes.
>
> If you're managing data at scale, ZFS just rocks...
>
>
> On Tuesday, November 10, 2015 01:16:28 PM Warren
On 11/13/2015 01:46 AM, J Martin Rushton wrote:
If you really _need_ the guarantee of a snapshot, consider either LVM
or RAID1. Break out a volume from the RAID set, back it up, then
rebuild.
FFS, don't do the latter. LVM is the standard filesystem backing for
Red Hat and CentOS systems, and
On 13/11/15 17:55, Gordon Messmer wrote:
> On 11/13/2015 01:46 AM, J Martin Rushton wrote:
>> If you really _need_ the guarantee of a snapshot, consider either
>> LVM or RAID1. Break out a volume from the RAID set, back it up,
>> then rebuild.
>
>
On 11/13/2015 12:59 PM, J Martin Rushton wrote:
Maybe I should have been clearer: use (LVM) OR (RAID1 and break).
I took your meaning. I'm saying that's a terrible backup strategy, for
a list of reasons.
For instance, it only works if you mirror a single disk. It doesn't
work if you use
On 11/11/15 02:46, Gordon Messmer wrote:
... the process you described is likely to miss files that are
modified while "find" runs.
That's just being picky for the sake of it. A backup is a *point-in-time*
snapshot of the files being backed up. It will not capture files modified
after
I did exactly this with ZFS on Linux and cut over 24 hours of backup lag to
just minutes.
If you're managing data at scale, ZFS just rocks...
On Tuesday, November 10, 2015 01:16:28 PM Warren Young wrote:
> On Nov 10, 2015, at 8:46 AM, Gordon Messmer
wrote:
> > On
On 11/10/2015 11:27 PM, Arun Khan wrote:
rsync will do an incremental backup, as already discussed earlier in this thread.
Please suggest how to achieve a differential backup with rsync (the
original query).
Already answered. Under rsync-based backup systems like rsnapshot,
every backup is a
On 11/09/2015 09:22 PM, Arun Khan wrote:
You can use "newer" options of the find command and pass the file list
to rsync or scp to "backup" only those files that have changed since
the last run. You can keep a file like .lastbackup and timestamp it
(touch) at the start of the backup process.
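The process described might look like the following shell sketch (the source tree, destination host, and marker-file path are placeholders). Note the timestamp is recorded before the scan, so a file changed mid-run is picked up again on the next pass rather than lost:

```shell
STAMP=/var/backups/.lastbackup   # hypothetical location of the marker file
NEWSTAMP=$STAMP.new

# Record the start time BEFORE scanning.
touch "$NEWSTAMP"

# Find regular files changed since the previous run and feed them to rsync.
find /data -type f -newer "$STAMP" -print0 |
    rsync -a --from0 --files-from=- / backuphost:/backups/incr/

# Promote the new timestamp only after the copy succeeded.
mv "$NEWSTAMP" "$STAMP"
```

The worst case with this ordering is copying the odd file twice, never missing one.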
On 11/10/2015 12:16 PM, Warren Young wrote:
Well, be fair, rsync can also miss files if files are changing while the backup
occurs. Once rsync has passed through a given section of the tree, it will not
see any subsequent changes.
I think you miss my meaning. Consider this sequence of
On Nov 10, 2015, at 8:46 AM, Gordon Messmer wrote:
>
> On 11/09/2015 09:22 PM, Arun Khan wrote:
>> You can use "newer" options of the find command and pass the file list
>
> the process you described is likely to miss files that are modified while
> "find" runs.
On 10/11/15 21:05, Gordon Messmer wrote:
> On 11/10/2015 12:16 PM, Warren Young wrote:
>>
>> Well, be fair, rsync can also miss files if files are changing
>> while the backup occurs. Once rsync has passed through a given
>> section of the tree, it
On Monday, November 09, 2015 09:50:52 AM Gordon Messmer wrote:
> > How I can perform a diff backup?
>
> Save yourself a lot of trouble and use a front-end like rsnapshot or
> backuppc.
If I may, I'd like to put in a plug for ZFS:
Combining rsync and ZFS, you can rsync, then make a ZFS
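The combination described can be sketched as follows (the pool/dataset name and source path are placeholders; this is a hedged outline, not the poster's exact setup):

```shell
# rsync the live data into a ZFS dataset (tank/backup is a placeholder)...
rsync -a --delete /data/ /tank/backup/

# ...then freeze that state as a named, space-efficient snapshot.
zfs snapshot tank/backup@$(date +%Y-%m-%d)

# Each snapshot costs only the blocks that later change; list them with:
zfs list -t snapshot
```

Because snapshots are copy-on-write, the daily cost is proportional to the churn, which is how a day of backup lag can shrink to minutes.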
On 11/10/2015 03:38 PM, J Martin Rushton wrote:
That's plain bad system analysis. Read the start date, record the
current date and THEN start processing. You will get the odd extra
file but will not lose any.
That's my point. "find" doesn't do that and naïve implementations of
the
Folks
I have been using rsnapshot for years now. The only problem I've found is
that it is possible to run out of inodes. So my heads-up is that when you
create the file system, ensure you have more than the default inodes - I
usually multiply the default by 10. Otherwise you can find your 1 TB
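On ext2/3/4 the inode count is fixed when the filesystem is created, so the headroom has to be provisioned up front; a sketch (the device name is a placeholder):

```shell
# Inspect inode capacity and usage on an existing filesystem:
df -i .

# When creating the backup filesystem, allocate more inodes than the
# default, e.g. one per 4096 bytes instead of ext4's usual one per 16384:
#   mkfs.ext4 -i 4096 /dev/sdX1     # /dev/sdX1 is a placeholder device
```

Watching the IUse% column of `df -i` over time shows whether the hard-link-heavy snapshot tree is heading for inode exhaustion.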
On 11/10/2015 12:18 AM, John Logsdon wrote:
I have been using rsnapshot for years now. The only problem I've found is
that it is possible to run out of inodes. So my heads-up is that when you
create the file system, ensure you have more than the default inodes - I
usually multiply the default by
On Tue, Nov 10, 2015 at 10:52 AM, Arun Khan wrote:
> On Mon, Nov 9, 2015 at 9:31 PM, Alessandro Baggi
> wrote:
>> Hi list,
>> how to perform a differential backup using rsync?
>>
>> On web there is a great confusion about diff backup concept when
Thanks John - I haven't used XFS.
This issue arose on ext3 I think some years ago on a rather elderly
system. If XFS avoids this that's great but if someone is still using
legacy systems, they need to be warned!
> On 11/10/2015 12:18 AM, John Logsdon wrote:
>> I have been using rsnapshot for
On Wed, Nov 11, 2015 at 5:39 AM, Gordon Messmer
wrote:
> On 11/10/2015 03:38 PM, J Martin Rushton wrote:
>>
>> That's plain bad system analysis. Read the start date, record the
>> current date and THEN start processing. You will get the odd extra
>> file but will not
On Wed, Nov 11, 2015 at 5:08 AM, J Martin Rushton
wrote:
>
> On 10/11/15 21:05, Gordon Messmer wrote:
>> On 11/10/2015 12:16 PM, Warren Young wrote:
>>>
>>> Well, be fair, rsync can also miss files if files are
Alessandro Baggi wrote:
how to perform a differential backup using rsync?
On web there is a great confusion about diff backup concept when
searched with rsync.
I think the answer to this question is rsnapshot, which is an old and
well-proven tool:
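A minimal rsnapshot.conf illustrating the idea (paths and retention counts are purely illustrative; note that rsnapshot requires tabs, not spaces, between fields):

```
config_version	1.2
snapshot_root	/backup/snapshots/

retain	daily	7
retain	weekly	4

backup	/home/	localhost/
backup	/etc/	localhost/
```

Each `retain` level keeps the given number of rotations, with unchanged files shared between rotations via hard links.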
Ciao Alessandro,
On 11/09/2015 05:01 PM, Alessandro Baggi wrote:
Hi list,
how to perform a differential backup using rsync?
On web there is a great confusion about diff backup concept when
searched with rsync.
Users say "diff" because it copies only differences. For me, differential is
a backup
On Mon, November 9, 2015 7:52 pm, Keith Keller wrote:
> On 2015-11-09, John R Pierce wrote:
>>
>> XFS handles this fine. I have a backuppc storage pool with backups of
>> 27 servers going back a year... now, I just have 30 days of
>> incrementals, and 12 months of fulls,
>
On Mon, Nov 9, 2015 at 9:31 PM, Alessandro Baggi
wrote:
> Hi list,
> how to perform a differential backup using rsync?
>
> On web there is a great confusion about diff backup concept when searched
> with rsync.
>
> Users say "diff" because it copies only differences. For
On 2015-11-10, Valeri Galtsev wrote:
>
> I'm fully with you on -o inode64, but I would think it is not inode number
> that becomes large with extensive use of hard links, but the space used by
> directory data, thus requiring to relocate these once they exceed some
>
On 11/09/2015 09:59 AM, John R Pierce wrote:
On 11/9/2015 9:50 AM, Gordon Messmer wrote:
I don't see the distinction you're making.
An incremental backup copies everything since the last incremental;
a differential copies everything since the last full.
I guess that makes sense, but in backup
Gordon Messmer wrote:
> On 11/09/2015 09:59 AM, John R Pierce wrote:
>> On 11/9/2015 9:50 AM, Gordon Messmer wrote:
>>> I don't see the distinction you're making.
>>
>> a incremental backup copies everything since the last incremental
>> a differential copies everything since the last full.
>
> I
On 11/9/2015 9:50 AM, Gordon Messmer wrote:
I don't see the distinction you're making.
An incremental backup copies everything since the last incremental;
a differential copies everything since the last full.
rsync is NOT a backup system, it's just an incremental file copy
with the
On 11/09/2015 08:01 AM, Alessandro Baggi wrote:
how to perform a differential backup using rsync?
rsync backups are always incremental against the most recent backup
(assuming you're copying to the same location).
Users say "diff" because it copies only differences. For me, differential
is
I beg to differ.
The rsync command is a fantastic backup system. It may not meet your
needs, but it works really great to make different types of backups for
me. I have a script I use (automate everything) to perform nightly
backups with rsync. Using rsync with USB external hard drives works
On Mon, November 9, 2015 10:01 am, Alessandro Baggi wrote:
> Hi list,
> how to perform a differential backup using rsync?
Differential comes from real backup systems. Rsync is much simpler IMHO:
the "-b" backup flag only keeps the older version of a changed or deleted
file/directory with an extra "~" suffix (or whatever you
Hi
For backups with rsync I recommend you follow the approach discussed on
this website.
It provides everything you need to get a full backup and then the
incremental ones (deltas) using rsync.
The only requirement is that the hosting filesystem
supports hard links,
Hi list,
how to perform a differential backup using rsync?
On web there is a great confusion about diff backup concept when
searched with rsync.
Users say "diff" because it copies only differences. For me, differential is
a backup from the last full backup.
Other users says that to perform a
On Mon, November 9, 2015 12:42 pm, m.r...@5-cent.us wrote:
> Gordon Messmer wrote:
>> On 11/09/2015 09:59 AM, John R Pierce wrote:
>>> On 11/9/2015 9:50 AM, Gordon Messmer wrote:
>>>> I don't see the distinction you're making.
>>>
>>> a incremental backup copies everything since the last
On 11/09/2015 11:10 AM, Frank Cox wrote:
>And if you aren't familiar with hard links, which rsync happily creates,
>they were certainly hard enough to wrap my head around, until I got it...
More than one filename for a particular file. What's difficult about that?
I think the difficult part
On 11/9/2015 11:34 AM, Valeri Galtsev wrote:
I wonder how a filesystem behaves when almost every file has some 400 hard
links to it (thinking in terms of a year's worth of daily backups).
XFS handles this fine. I have a backuppc storage pool with backups of
27 servers going back a year...
cp -a daily.0 daily.1
cp -al daily.0 daily.1
All these can be combined with an rsyncd module to allow read only root
access to a remote system excluding the dirs you don't normally want to
be backed up like /proc, /var/lib/mysql, /var/lib/libvirt, ...
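The hard-link rotation behind `cp -al` can be verified by comparing inode numbers: the "copy" shares its storage with the original (directory names are arbitrary):

```shell
mkdir -p daily.0
echo "unchanged data" > daily.0/file.txt

# -l makes hard links instead of copying data: near-instant, near-free.
cp -al daily.0 daily.1

# Both names point at the same inode, so the data exists once on disk:
stat -c '%i' daily.0/file.txt daily.1/file.txt

# A subsequent rsync into daily.0 replaces changed files with new inodes,
# leaving daily.1's links (and thus yesterday's contents) untouched.
```

This is the same mechanism rsnapshot and rsync's `--link-dest` option use to keep many rotations at roughly the cost of one full copy plus deltas.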
Oops... My provider email gateway has
On 11/09/2015 11:34 AM, Valeri Galtsev wrote:
I wonder how a filesystem behaves when almost every file has some 400 hard
links to it (thinking in terms of a year's worth of daily backups).
Why do you think that would be a problem?
Most inodes have one hard link. When that link is removed, the
On Mon, 9 Nov 2015 11:36:18 -0800
Gordon Messmer wrote:
> I think the difficult part is that so many people don't understand that
> EVERY regular file is a hard link. It doesn't mean "more than one" at
> all. A hard link is the association between a directory entry
> (filename) and an inode
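Gordon's point, that every regular file is already a hard link, is visible in the link count `stat` reports (file names here are arbitrary):

```shell
echo "payload" > original.txt
stat -c '%h' original.txt   # link count is 1: one directory entry, one inode

ln original.txt alias.txt   # add a second directory entry for the same inode
stat -c '%h' original.txt   # link count is now 2
stat -c '%i' original.txt alias.txt   # identical inode numbers
```

The data lives in the inode; the names are just entries pointing at it, and the inode is freed only when the last link (and last open file descriptor) goes away.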
On Mon, 9 Nov 2015 13:42:08 -0500
m.r...@5-cent.us wrote:
> And if you aren't familiar with hard links, which rsync happily creates,
> they were certainly hard enough to wrap my head around, until I got it...
More than one filename for a particular file. What's difficult about that?
> and
On 11/9/2015 12:02 PM, Frank Cox wrote:
Now that you point that out, I agree. I never thought about it that way before
since I've always looked at a hard link as a link that you create after you
create the initial file, though they become interchangeable after that.
on Unix systems, the
On Mon, November 9, 2015 1:41 pm, Gordon Messmer wrote:
> On 11/09/2015 11:34 AM, Valeri Galtsev wrote:
>> I wonder how filesystem behaves when almost every file has some 400 hard
>> links to it. (thinking in terms of a year worth of daily backups).
>
> Why do you think that would be a problem?
On 2015-11-09, John R Pierce wrote:
>
> XFS handles this fine. I have a backuppc storage pool with backups of
> 27 servers going back a year... now, I just have 30 days of
> incrementals, and 12 months of fulls,
I'm sure you know this already, but for those who may not,
Valeri Galtsev wrote:
>
> On Mon, November 9, 2015 12:42 pm, m.r...@5-cent.us wrote:
>> Gordon Messmer wrote:
>>> On 11/09/2015 09:59 AM, John R Pierce wrote:
>>>> On 11/9/2015 9:50 AM, Gordon Messmer wrote:
>>>>> I don't see the distinction you're making.
>>>> an incremental backup copies