Jason Dixon wrote:
> On Oct 20, 2005, at 1:49 PM, Joe Advisor wrote:
> 
>> Congrats on the cool OpenBSD SAN installation.  I was
>> wondering how you are dealing with the relatively
>> large filesystem.  By default, if you lose power to
>> the server, OpenBSD will do a rather long fsck when
>> coming back up.  To alleviate this, there are numerous
>> suggestions running around that involve mounting with
>> softdep, commenting out the fsck portion of rc and
>> doing mount -f.  Are you doing any of these things, or
>> are you just living with the long fsck?  Thanks in
>> advance for any insight into your installation you are
>> willing to provide.
> 
> This is just a subversion repository server for a bunch of  
> developers.  There are no dire uptime requirements, so I don't see a  
> lengthy fsck being an issue.  Not to mention the hefty UPS keeping it  
> powered.  Sorry if this doesn't help you out, but it's not a big  
> problem on my end (thankfully).
> 
> If it was, I would have just created many slices and distributed  
> projects equally across them.

I'm working on a couple of "big storage" applications myself, and yes,
this is what I'm planning on doing as well.  In fact, one app I'm going
to be turning on soon will probably be using Accusys 7630 boxes with
about 600G of storage each, and I'll probably split each of those into
two 300G pieces, for a number of reasons (a rough sketch of what that
might look like follows the list):
  1) shorter fsck
  2) If a volume gets corrupted, less to restore (they will be backed
up, but the restore will be a pain in the butt)
  3) Smaller chunks to move around if I need to
  4) Testing the "storage rotation" system more often (I really don't
want my app bumping to a new volume only once every six months...I'd
rather confirm that the rotation system is Not Broke more often, with,
of course, enough "slop" in the margins to have time to fix it if
something quits working.)
  5) Cost benefit of modular storage.  Today, I can populate an ACS7630
(three-drive RAID5 module) with 300G drives for probably $900.  I could
populate it with 400G drives for $1200.  That's a lotta extra money for
only 200G more storage.  Yet, if I buy the 300G drives in a couple of
storage modules today, and in about a year, when those are nearing full,
replace them with (then much cheaper) 500G (or 800G or ...) drives, I'll
come out way ahead.  Beats the heck out of buying a single 3+TB drive
array now and watching people point and laugh at it in a couple of
years, when it is still only partly full and you can buy a bigger single
drive at your local office supply store. :)  With this system, I can
easily add on as we go, and more easily throw the whole thing away when
I decide there is better technology available.
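
For what it's worth, here is roughly what that split might look like on
one box.  The device name (sd1) and the mount points are made up for the
sake of the example; nothing here is final:

  # disklabel the ~600G RAID5 volume into two ~300G FFS partitions
  # (say, sd1a and sd1d), then mount both with softdep, per the
  # suggestion at the top of this thread -- /etc/fstab entries:
  /dev/sd1a  /archive/vol0  ffs  rw,softdep,noatime  1 2
  /dev/sd1d  /archive/vol1  ffs  rw,softdep,noatime  1 2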

Would I love to see the 1T limit removed?  Sure.  HOWEVER, I think I
would handle this application the exact same way even if it didn't exist
(then again, that might not be true: I might have foolishly plowed ahead
with the One Big Pile philosophy and regretted it later).

For this application, the shorter fsck is not really an issue.  In fact,
as long as the archive gets back up within a week or two, it's ok -- the
first-stage system is the one that's time critical...and it is designed
to be repairable VERY quickly, and it can temporarily hold a few weeks'
worth of data. :)

Nick.
