On 01/25/10 04:50 PM, David Dyer-Bennet wrote:

> What's it cost to run a drive for a year again?  Maybe I really should
> just replace one existing pool with larger drives and let it go at that,
> rather than running two more drives.

It varies nowadays, but as a rule the lower the RPM and
the fewer the platters, the lower the consumption. A 500GB WD
drive and the older (7200RPM) Seagate 1.5TB drives both seem
to idle at around 8W. That is 8/1000 kW; at $0.10
per kWh * 24 * 365 it comes to around $7/year. The 5900RPM
Seagate 1.5TB drive idles at 5W, so about $4.38/year. It gets more
complicated if your utility uses time-of-day pricing, and $0.10/kWh
is probably low these days in many places. So for a small number of
drives, unless you want to be /really/ green, power cost may not be
a serious factor. But it adds up fast if you have several drives,
and it could well be cost-effective to replace (say) 9 x 500GB
drives with 3 x 1.5TB drives (see the "[zfs-discuss] Best 1.5TB
drives for consumer RAID?" thread), although the 5900RPM Seagates
are rather new, so they may have some startup problems, as
you can see from the somewhat tongue-in-cheek discussion there.
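
If you want to play with the numbers yourself, here is a minimal
back-of-the-envelope sketch (the idle wattages and the flat
$0.10/kWh rate are just the assumptions from above, not measured
values):

    # Rough annual power cost for idling drives.
    # Assumes a flat electricity rate; time-of-day pricing is ignored.
    HOURS_PER_YEAR = 24 * 365          # 8760 hours
    RATE_PER_KWH = 0.10                # $/kWh -- probably low these days

    def annual_cost(idle_watts, n_drives=1, rate=RATE_PER_KWH):
        """Yearly cost in dollars for n_drives idling at idle_watts each."""
        kwh = idle_watts / 1000.0 * HOURS_PER_YEAR * n_drives
        return kwh * rate

    # 9 x 500GB drives at ~8W idle vs 3 x 1.5TB (5900RPM) drives at ~5W idle
    print(annual_cost(8, 9))   # ~63.07 -> roughly $63/year
    print(annual_cost(5, 3))   # ~13.14 -> roughly $13/year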

My solution to the general problem is to use a replicated system
(a simple 1.5TB mirror) and to zfs send/recv incrementals to keep
them in sync, periodically switching their roles to make
sure all is well. Since zfs send/recv IMO has really bizarre rules
about properties (I understand there are RFEs about this), I have
a custom script that does incrementals one FS at a time and
sends baselines for new FSs. If you are interested, I posted it here:
http://www.apogeect.com/downloads/send_zfs_space
Obviously it is customized for our environment, so it would need
changes to be useful elsewhere. We've been using it for over a year
now and AFAIK it hasn't skipped a beat. But then we've had no disk
drive errors either (well, one COMSTAR-related panic that I don't
think has anything to do with the drives).
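
The guts of it boil down to something like the sketch below
(heavily simplified Python, not the actual send_zfs_space script;
the pool names tank/backup and the snapshot names prev/now are
just placeholders for illustration):

    #!/usr/bin/env python
    # Simplified sketch: per-filesystem incremental replication.
    # Placeholders: source pool "tank", target pool "backup",
    # snapshots "prev" (last replicated) and "now" (just taken).
    import subprocess

    SRC_POOL, DST_POOL = "tank", "backup"
    PREV_SNAP, NOW_SNAP = "prev", "now"

    def list_filesystems(pool):
        """List every filesystem under a pool via 'zfs list'."""
        out = subprocess.check_output(
            ["zfs", "list", "-H", "-o", "name", "-t", "filesystem",
             "-r", pool])
        return out.decode().split()

    def replicate(fs):
        """Send one FS: incremental if 'prev' exists, else a baseline."""
        dst = fs.replace(SRC_POOL, DST_POOL, 1)
        have_prev = subprocess.call(
            ["zfs", "list", "-H", "%s@%s" % (fs, PREV_SNAP)],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
        if have_prev:
            send_cmd = ["zfs", "send", "-i", PREV_SNAP,
                        "%s@%s" % (fs, NOW_SNAP)]
        else:
            send_cmd = ["zfs", "send", "%s@%s" % (fs, NOW_SNAP)]
        send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
        recv = subprocess.Popen(["zfs", "recv", "-F", dst],
                                stdin=send.stdout)
        send.stdout.close()
        recv.wait()
        send.wait()

    for fs in list_filesystems(SRC_POOL):
        replicate(fs)

The real script also has to deal with the property quirks mentioned
above, error handling, and rolling the snapshots forward, which this
sketch glosses over.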

FWIW I'm sure I did over 1PByte of data transfers whilst
experimenting with this and didn't experience a single error,
including some deliberate resilvers with 750GB of disk in use.

HTH -- Frank


