On Tue, Sep 21, 2021 at 10:01:04AM +0200, Diego Zuccato wrote:
A different approach here.
I configured a single MD-RAID6 of 16 x 4TB disks, and the vtapes are simple folders. I did it mostly because I didn't know the exact space requirements our backups would have. Today I'd go with a smaller RAID (probably just 10 disks, so as to have 32TB and a sane stripe size of 4k).
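
With that layout each vtape slot is just a subdirectory of the RAID
filesystem. As a rough sketch (the mount point below is made up, not my
actual path), the relevant amanda.conf lines look something like:

    # vtapes as plain directories on the RAID filesystem
    tpchanger "chg-disk:/backup/vtapes"   # slots are subdirs slot1, slot2, ...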

I didn't like the idea of having "hard-sized" vtapes because of the wasted space.

Doesn't your "tapetype" definition specify a size?  Perhaps huge?

Defining a "hard size" does not allocate that amount of disk to each
vtape.  It sets a maximum size that is taken from the available pool.
In your case the pool is the entire raid, in mine it is each disks
filesystem.

My vtapes are "sized" to 100GB (0.1TB) and I create about 12 vtapes
per base10 TB.  Obviously I can't allocate 1.2TB of space from 1.0TB.
Experience in my environment shows this over allocations results in
about 90% filesystem usage as some tapes fill to 100%, others much less.
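
A tapetype along these lines captures that policy; the name is
illustrative and the 100GB figure simply mirrors the numbers above, so
treat it as a sketch rather than my exact configuration:

    define tapetype VTAPE-100G {
        comment "100GB vtape; 'length' is a cap, not a preallocation"
        length 100 gbytes
    }
    tapetype VTAPE-100G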

Jon

I don't like having to remove SAS/SATA HDDs because the connector is usually specced for about 500 mating cycles. If you need frequent swaps it's better to use a caddy with its own connectors (like an eSATA/USB external enclosure, one per removable disk). With USB3 the performance shouldn't be an issue. The CPU load might be, but a recent system should handle it quite well.

HIH.

On 21/09/2021 05:17, Olivier wrote:
Jon,

Interesting discussion in other threads got me wondering
whether I should have made some other choices when setting
up my vtape environment.  Particularly whether I should
have used LVM (Logical Volume Management) to create one
large filesystem covering my multiple dedicated disks.

It's a topic I do not recall being discussed, pros & cons.

I am using 7 disks of 3 (or is that 4?) and 6 TB (I should upgrade them all
to 6TB soon), almost dedicated to vtapes (the last disk also has a copy of
the deleted accounts). I have them configured as individual disks. The size
of my vtapes is also about 100GB and I am using a small chunk size, so
my disks end up being at least 80% full.

When I designed my vtape architecture, I decided to keep each disk
individual so that it can be put offline after use. My idea was to
have a system that could prompt an operator to "mount a disk" before the
backup, and the disk could be manually unmounted and safely stored each
day. It takes advantage of the automount service on FreeBSD.
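
The FreeBSD side of that is small; roughly speaking (exact defaults
differ between releases) it amounts to:

    # /etc/rc.conf - enable autofs
    autofs_enable="YES"

    # /etc/auto_master - the -media special map mounts removable disks
    # under /media
    /media  -media  -nosuid

so a labelled vtape disk that is plugged in simply appears under /media.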

Mounting could be a USB disk, or hot-swap. I never went very far with the
implementation. I wrote all that many years ago, when vtapes were new and
limited to a single directory, which is why I wrote my own tape changer.
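
(At that time a vtape configuration was basically just something like

    tapedev "file:/vtapes/daily"

pointing at one directory, with the path here only an illustration, so
handling more than one slot meant rolling your own changer.)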

I knew about the risk of losing a disk, and with it a good portion of
consecutive backups. But what I had in mind was:

- have the system as simple and as portable as possible, so I can shove
  a disk in another machine and extract its contents manually (during the
  great flood of Bangkok in 2011, I moved all the servers and also took
  all my hard disks from the Amanda backup, but I did not need to move the
  rack-mounted server itself);

- a side advantage of my own tape changer is that I can keep the older
  disks (each disk has an individual label, just as every vtape has a
  label; I have upgraded them from 500GB to 1TB to 3TB and soon to 6TB)
  and their vtapes are still known in the tapelist, marked noreuse (see
  the example just after this list). If the need arises, I can still
  remount an old disk.
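
Marking the retired vtapes that way uses the standard Amanda command;
the config and label names below are only placeholders:

    amadmin daily no-reuse OLD-DISK-042   # keep the label, never overwrite it
    amadmin daily reuse OLD-DISK-042      # bring it back if ever needed again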

So far (10+ years) the only disk I have had fail was the one with the
holding partition; I guess that was because of excessive usage.

I understand that vtapes have evolved since I started using them, but my
system works for me, so I never took the time to look any further.

Best regards,

Olivier



--
Jon H. LaBadie                 j...@labadie.us
 11226 South Shore Rd.          (703) 787-0688 (H)
 Reston, VA  20190              (703) 935-6720 (C)
