On 16/03/2010 19:57, Stroller wrote:
> How does your system boot if your RAID1 system volume fails? The one
> you have grub on? I think you mentioned a flash drive, which I've seen
> mentioned before. This seems sound, but just to point out that's
> another, different, single point of failure.
Well, at the moment I don't have a RAID system... A flash drive (USB
key) seems a reasonable strategy - I could even have two containing
identical data, so if the first were to fail the second would kick
in - if not automatically, then after the duff flash drive is
removed.  A neat side effect of this would be to eliminate a moving
part on the server - making it quieter... and the drives themselves
can be located at two physically remote places on my LAN.

>>> by one client at a time), the simplest solution is to completely avoid
>>> having a FS on the storage server side -- just export the raw block
>>> device via iSCSI, and do everything on the client.
>> ...
>> Snap-shots, of course, are only really valuable for non-archive data...
>> so, in future, I could add a ZFS volume using the same iSCSI strategy.
> If you do not need data sharing (i.e. if your volumes are only mounted
Yes - I don't think I'd need sharing.  It strikes me that it should be
possible to have a 'live' backup server which only reads from the
volume until fail-over... with its own separate /var/*, of course.
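For the raw-block-device export, the server side can stay tiny - for
example, with iscsitarget (iet) a single stanza in /etc/ietd.conf would
do (the target name and device path below are invented for
illustration):

```
Target iqn.2010-03.lan.home:backup.disk0
        Lun 0 Path=/dev/md0,Type=blockio
```

Type=blockio passes the device through without server-side caching,
which suits the "do everything on the client" approach.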

> I have wondered if it might be possible to create a large file (`dd
> if=/dev/zero of=/path/to/large/file` constrain at a size of 20gig or
> 100gig or whatever) and treat it as a loopback device for stuff like
> this. It's not true snapshotting (in the ZFS / BTRFS sense), but you
> can unmount it and make a copy quite quickly.
You could, but the advantage of ZFS is the efficiency of its snapshots.
With your strategy I'd need to copy the whole of the large file every
time I wanted a snapshot... which, even for a mere 100gig, won't be quick.
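To put rough numbers on that: creating the sparse backing file is
instant, but each "snapshot" of it is a full-length copy, whereas a ZFS
snapshot is near-constant-time metadata.  A minimal sketch - the paths,
sizes and pool name are invented, and the losetup/mkfs and zfs steps
need root and real hardware, so they're shown only as comments:

```shell
# Create a 1 GiB sparse backing file -- instant, blocks allocated on write:
dd if=/dev/zero of=/tmp/backing.img bs=1M count=0 seek=1024 2>/dev/null

# As root you would then attach and format it, e.g.:
#   losetup /dev/loop0 /tmp/backing.img
#   mkfs.ext4 /dev/loop0 && mount /dev/loop0 /mnt/store

# A "snapshot" of the loopback file means copying the whole image --
# the time grows with the size of the file, not with the data changed:
cp --sparse=always /tmp/backing.img /tmp/backing.snap

# ZFS, by contrast, records a copy-on-write snapshot without copying data:
#   zfs snapshot tank/store@2010-03-16
```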
