I've had a couple of responses talking about the potentially short lifetime of SSDs based on how much is written to them. I have some comments on that, some links, and I am still interested to hear whether anyone has first-hand experience with the scenario we are considering.

First off, we expect to be looking at server-class SSD devices, probably SLC based, but we haven't gotten specs on what options and prices we have with Supermicro yet. I would note that Sun/Oracle have for several years been configuring data storage systems with SSDs as dedicated log devices for the ZFS Intent Log (ZIL). It is fairly routine to spec a Sun/Oracle storage system with a fair number of SSDs alongside a large number of HDDs, all of them server-class devices. The intent log is the most write-intensive component of the storage system, and they wouldn't be using SSDs there if those were routinely failing in a year or two.

This link is one of the better ones explaining calculations, usage scenarios, 
and life spans for SSDs:
http://hblok.net/blog/posts/2013/03/03/concerns-about-ssd-reliability-debunked-again/

That post was based on the work linked in this story:
http://beta.slashdot.org/story/182227
Some interesting comments, one from someone who cycled SLC NAND well over a million cycles without failure.
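The arithmetic behind those lifetime estimates is simple enough to sketch. The figures below (a 200 GB SLC drive, 100,000 program/erase cycles, 500 GB of host writes per day, write amplification of 2) are illustrative assumptions, not specs for any particular device:

```python
# Rough SSD endurance estimate: total write capacity divided by daily writes.
# All figures are illustrative assumptions, not specs for a real device.

capacity_gb = 200            # usable capacity of a hypothetical SLC drive
pe_cycles = 100_000          # rated program/erase cycles for SLC NAND
writes_per_day_gb = 500      # host writes per day (a heavy holding-disk load)
write_amplification = 2.0    # controller overhead multiplier

total_endurance_gb = capacity_gb * pe_cycles
effective_daily_gb = writes_per_day_gb * write_amplification

lifetime_days = total_endurance_gb / effective_daily_gb
lifetime_years = lifetime_days / 365

print(f"Estimated lifetime: {lifetime_years:.1f} years")
```

Even with pessimistic write amplification, a daily-full-rewrite workload on SLC comes out in decades, which matches the conclusions in the links above.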

This link from Toshiba is with regard to their consumer brand, but their comments on SSD Myth 5 also mention enterprise SSDs:
http://www.toshiba.com/taec/news/media_resources/docs/SSDmyths.pdf

Finally, for actual experimental data, these guys took a bunch of consumer 
grade SSDs and hammered them:
http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment
They stood up pretty well, and those are consumer-grade devices.

So, I'm still interested in whether anyone has any real life experiences with SSDs as holding disks for Amanda. Would a pair work well and allow Amanda to drive an LTO6 (up to 160MB/s) in streaming mode somewhere near its rated speed?

Also still open as a question is whether Amanda can stage backups to larger holding disk drives, and, when complete, move them to the SSD holding drives before writing them from there to the LTO6 tape.
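As far as I know, stock Amanda doesn't move completed dumps between holding areas on its own; what it does support is multiple holdingdisk definitions in amanda.conf, spreading dumps across them by available space. A sketch of what that looks like (directories and sizes are made up for illustration):

```
# amanda.conf - two holding areas; Amanda chooses among them by free space,
# it does not stage dumps from one to the other (paths/sizes hypothetical)
holdingdisk hdd_stage {
    comment   "large HDD holding area"
    directory "/holding/hdd"
    use       4000 Gb
    chunksize 1 Gb
}

holdingdisk ssd_stage {
    comment   "fast SSD area feeding the LTO6"
    directory "/holding/ssd"
    use       400 Gb
    chunksize 1 Gb
}
```

The staged HDD-to-SSD move described above would, as far as I can tell, have to be scripted outside Amanda.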


TIA

Chris Hoogendyk



On 3/10/14 6:20 PM, Syed Zaeem Hosain ([email protected]) wrote:
1. Continuous write-read-delete cycling on an SSD will wear it out _way_ faster than you might 
like - these drives tend to be rated in "so-many-GBytes-per-day" to reach their typical 
rated life of 5 years or so. That could make for an expensive "solution" too quickly. Although, 
in the tests I have done, I do see write performance that is generally two to three times 
faster than SATA III disk drives (non-RAID).

2. I think that using four large (2 to 4 TB) drives in RAID 0 (or RAID 10 if 
you want drive reliability ... don't use RAID 5, since that will take a parity 
calculation performance hit) will get the performance you need, provided the 
drives are in a server (rather than an external USB3 or Thunderbolt box like a 
Drobo or Synology). You should be able to sustain 200 to 400 MBytes/sec to the 
RAID pretty readily, I'd think!
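The back-of-the-envelope numbers behind point 2 can be sketched as follows, assuming roughly 150 MB/s sequential throughput per drive (an assumed figure, not a measurement):

```python
# Rough streaming-throughput estimate for a 4-drive array vs. LTO6.
# 150 MB/s per drive is an assumed sequential rate, not a measurement.

per_drive_mb_s = 150
n_drives = 4
lto6_native_mb_s = 160   # LTO6 native streaming rate from the thread

raid0_write = n_drives * per_drive_mb_s          # stripes across all drives
raid10_write = (n_drives // 2) * per_drive_mb_s  # mirrored pairs halve write rate

print(f"RAID 0 write:  ~{raid0_write} MB/s")
print(f"RAID 10 write: ~{raid10_write} MB/s")
print("Both exceed LTO6 native rate:",
      raid0_write > lto6_native_mb_s and raid10_write > lto6_native_mb_s)
```

Under these assumptions, even the RAID 10 write path clears the LTO6 native rate with headroom; real sustained rates will vary with the drives and filesystem.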

3. If you still plan on using SSDs, OCZ Technology makes PCI Express SSDs that 
run much faster than the typical SATA III interfaces (see 
http://ocz.com/enterprise for info). But the cost is high.

4. Finally, don't RAID your SSD drives - this usually disables TRIM support in 
the drive, as I recall.

Z

P.S.: What is the streaming rate of the LTO you are acquiring? What is the 
model, etc.? I am wondering whether the next generation of tape is now starting 
to become available; I have not looked yet ...

-----Original Message-----
From: [email protected] [mailto:[email protected]] On 
Behalf Of Chris Hoogendyk
Sent: Monday, March 10, 2014 1:48 PM
To: AMANDA users
Subject: followup on recommendations for tape libraries

One issue that comes up repeatedly is the requirements of a server and the 
holding disk configuration to keep an LTO6 (or LTO5) streaming at something 
approaching its rated speed, which is faster than that of an individual disk 
drive.

The most common approach is to have an array of high speed disk drives 
configured in raid10 or raid5 for holding space to get data transfer speeds 
high enough.

We are trying to set up a new server and a new tape library without breaking 
the budget, and we need a lot of external storage just for storage, even 
without the question of Amanda and a holding disk.
We have a J4500 hanging off our T5220 and hope to carry it over to a new 
Supermicro server. We planned on filling the internal bays on the Supermicro 
and using software raid to configure them.
We'll be using Ubuntu LTS, I'm guessing 14.04, since this project will be for 
the summer.

Kicking ideas around for speed, throughput, etc. to drive the LTO, we came up 
with the idea of just getting a pair of SSDs for holding disk space. Those 
ought to be individually faster than the LTO by a good bit.

Has anyone done this? Any comments?

This also led us to the question/idea of whether Amanda could be configured with some 
sort of "staged" holding disk. In other words, suppose you had some multi-TB 
disk drives for holding disk, and a couple of SSDs for transfer to tape. Backups would go 
to the first stage disk drives. When they were complete, they would be transferred to the 
SSDs. The tape would be written from backups that are complete and on the SSDs. If the 
tape were out of order or offline, then the disk drives would provide some capacity for 
holding incrementals for a while, whereas if our holding disks were just SSDs, then our 
fallback capacity would be substantially smaller.

Thoughts?

Might there be a way within Ubuntu of configuring a disk drive with an ssd for 
read/write cache that would achieve what we are after? ZFS does something a bit 
like this, though not exactly.



--
---------------

Chris Hoogendyk

-
   O__  ---- Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 347 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst

<[email protected]>

---------------

Erdös 4
