Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 126

2010-06-30 Thread Eric Andersen

On Jun 28, 2010, at 10:03 AM, zfs-discuss-requ...@opensolaris.org wrote:

 Send zfs-discuss mailing list submissions to
   zfs-discuss@opensolaris.org
 
 To subscribe or unsubscribe via the World Wide Web, visit
   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 or, via email, send a message with subject or body 'help' to
   zfs-discuss-requ...@opensolaris.org
 
 You can reach the person managing the list at
   zfs-discuss-ow...@opensolaris.org
 
 When replying, please edit your Subject line so it is more specific
 than Re: Contents of zfs-discuss digest...
 
 
 Today's Topics:
 
   1. Re: ZFS bug - should I be worried about this? (Gabriele Bulfon)
   2. Re: ZFS bug - should I be worried about this? (Victor Latushkin)
   3. Re: OCZ Vertex 2 Pro performance numbers (Frank Cusack)
   4. Re: ZFS bug - should I be worried about this? (Garrett D'Amore)
   5. Announce: zfsdump (Tristram Scott)
   6. Re: Announce: zfsdump (Brian Kolaci)
   7. Re: zpool import hangs indefinitely (retry post in parts; too
  long?) (Andrew Jones)
   8. Re: Announce: zfsdump (Tristram Scott)
   9. Re: Announce: zfsdump (Brian Kolaci)
 
 
 --
 
 Message: 1
 Date: Mon, 28 Jun 2010 05:16:00 PDT
 From: Gabriele Bulfon gbul...@sonicle.com
 To: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
 Message-ID: 593812734.121277727391600.javamail.tweb...@sf-app1
 Content-Type: text/plain; charset=UTF-8
 
 Yes...they're still running...but being aware that a power failure causing an 
 unexpected poweroff may make the pool unreadable is a pain
 
 Yes. Patches should be available.
 Or adoption may be lowering a lot...
 -- 
 This message posted from opensolaris.org
 
 
 --
 
 Message: 2
 Date: Mon, 28 Jun 2010 18:14:12 +0400
 From: Victor Latushkin victor.latush...@sun.com
 To: Gabriele Bulfon gbul...@sonicle.com
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
 Message-ID: 4c28ae34.1030...@sun.com
 Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
 
 On 28.06.10 16:16, Gabriele Bulfon wrote:
 Yes...they're still running...but being aware that a power failure causing an
 unexpected poweroff may make the pool unreadable is a pain
 
 Pool integrity is not affected by this issue.
 
 
 
 --
 
 Message: 3
 Date: Mon, 28 Jun 2010 07:26:45 -0700
 From: Frank Cusack frank+lists/z...@linetwo.net
 To: 'OpenSolaris ZFS discuss' zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers
 Message-ID: 5f1b59775f3ffc0e1781f...@cusack.local
 Content-Type: text/plain; charset=us-ascii; format=flowed
 
 On 6/26/10 9:47 AM -0400 David Magda wrote:
 Crickey. Who's the genius who thinks of these URLs?
 
 SEOs
 
 
 --
 
 Message: 4
 Date: Mon, 28 Jun 2010 08:17:21 -0700
 From: Garrett D'Amore garr...@nexenta.com
 To: Gabriele Bulfon gbul...@sonicle.com
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] ZFS bug - should I be worried about this?
 Message-ID: 1277738241.5596.4325.ca...@velocity
 Content-Type: text/plain; charset=UTF-8
 
 On Mon, 2010-06-28 at 05:16 -0700, Gabriele Bulfon wrote:
 Yes...they're still running...but being aware that a power failure causing 
 an unexpected poweroff may make the pool unreadable is a pain
 
 Yes. Patches should be available.
 Or adoption may be lowering a lot...
 
 
 I don't have access to the information, but if this problem is the same
 one I think it is, then the pool does not become unreadable.  Rather,
 its state after such an event represents a *consistent* state from some
 point of time *earlier* than that confirmed fsync() (or a write on a
 file opened with O_SYNC or O_DSYNC).
 
 For most users, this is not a critical failing.  For users using
 databases or requiring transactional integrity for data stored on ZFS,
 then yes, this is a very nasty problem indeed.
 
 I suspect that this is the problem I reported earlier in my blog
 (http://gdamore.blogspot.com) about certain kernels having O_SYNC and
 O_DSYNC problems.  I can't confirm this though, because I don't have
 access to the SunSolve database to read the report.
 
 (This is something I'll have to check into fixing... it seems like my
 employer ought to have access to that information...)
 
   - Garrett
 
 
 
 --
 
 Message: 5
 Date: Mon, 28 Jun 2010 08:26:02 PDT
 From: Tristram Scott tristram.sc...@quantmodels.co.uk
 To: zfs-discuss@opensolaris.org
 Subject: [zfs-discuss] Announce: zfsdump
 Message-ID: 311835455.361277738793747.javamail.tweb...@sf-app1
 Content-Type: text/plain; charset=UTF-8
 
 For quite some time I have been using zfs send -R fsn...@snapname | dd 
 of=/dev/rmt/1ln to make a tape backup of my zfs file system.  A few weeks 
 back the size of the file system grew to larger 

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Eric Andersen
 I'm on snv 111b. I attempted to get smartmontools
 working, but it doesn't seem to want to work as
 these are all SATA drives. 

Have you tried using '-d sat,12' when using smartmontools?

opensolaris.org/jive/thread.jspa?messageID=473727
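
Something along these lines is what I mean (the device path is just an example -- point it at whichever of your disks you want to query):

  smartctl -d sat,12 -a /dev/rdsk/c7t0d0s0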


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Eric Andersen
 Hi all.
 
 I'm pretty new to the whole OpenSolaris thing; I've
 been doing a bit of research but can't find anything
 on what I need.
 
 I am thinking of making myself a home file server
 running OpenSolaris with ZFS and utilizing RAID-Z.
 
 I was wondering if there is anything I can get that
 will allow Windows Media Center based hardware (HTPC
 or Xbox 360) to stream from my new fileserver?
 
 Any help is appreciated, and remember I'm new :)
 
 Message was edited by: cloudz

If whatever you are streaming to will read CIFS (or NFS) shares, you're golden. 
 Getting set up is literally one command.
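
For example, with the in-kernel CIFS server (and assuming the smb service is already enabled), sharing a dataset is just something like this -- tank/media is only a placeholder name here:

  zfs set sharesmb=on tank/media

For NFS it's the sharenfs property instead (zfs set sharenfs=on tank/media).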

If you are looking for uPnP streaming, the easiest (and the only thing I ever got to 
work) solution out there is PS3mediaserver.  It says it has Xbox360 support.   
It depends on mplayer and ffmpeg, which are both available in the Blastwave 
community repository (or you can try building them from source if you want).

There are a couple of howtos on getting TwonkyMedia and MediaTomb running under 
Solaris if you google for them.  I never could get either one to compile, but I 
haven't tried it in quite some time.

I've heard of people running uPnP servers from linux branded zones as well, so 
that might be an option for you.  I have no experience whatsoever with that, so 
I can't tell you much else about it.

Personally, I gave up on trying to stream to my PS3.  That is mainly because I 
don't have ethernet run to it, and trying to stream any media over wireless-g, 
especially the HD stuff, is frustrating to say the least.  I dropped $100 on an 
xtreamer media player, and it's great.  Plays any format/container I can throw 
at it.  Works real well for me.  Good luck!

Eric


Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Eric Andersen
You may be absolutely right.  CPU clock frequency certainly has hit a wall at 
around 4GHz.  However, this hasn't stopped CPUs from getting progressively 
faster.  I know this is mixing apples and oranges, but my point is that no 
matter what limits or barriers computing technology hits, someone comes along 
and finds a way to engineer around it.

I have no idea what storage technology will look like years from now, but I 
will be very surprised if the limitations you've listed end up holding back advances 
in storage devices.  I have no idea what those devices will look like or how they'll 
work.  If someone had told me roughly 10 years ago that I would be using multi-core 
processors at the same clock speed as my Pentium 4, I would probably have 
scoffed at the idea.  Here we are.  I'm a drinker, not a prophet ;-)

Like I said, I've built my system planning to upgrade with bigger-capacity 
drives when I start running out of space rather than adding more drives.  This 
is almost certainly unrealistic.  I've always built my systems around planned 
upgradeability, but whenever it does come time for an upgrade, it never makes 
sense to do so.  It's usually much more cost effective to just build a new 
system with newer and better technology.  It should take me a long while to 
fill up 9TB, but there was a time when I thought a single gigabyte was a 
ridiculous amount of storage too.

Eric

On Apr 8, 2010, at 11:21 PM, Erik Trimble wrote:

 Eric Andersen wrote:
 I find Erik Trimble's statements regarding a 1 TB limit on drives to be a 
 very bold statement.  I don't have the knowledge or the inclination to argue 
 the point, but I am betting that we will continue to see advances in storage 
 technology on par with what we have seen in the past.  If we still are 
 capped out at 2TB as the limit for a physical device in 2 years, I solemnly 
 pledge now that I will drink a six-pack of beer in his name.  Again, I 
 emphasize that this assumption is not based on any sort of knowledge other 
 than past experience with the ever growing storage capacity of physical 
 disks.
 
  
 Why thank you for recognizing my bold, God-like predictive powers.  It comes 
 from my obviously self-descriptive name, which means Powerful/Eternal Ruler 
   wink
 
 Ahem.
 
 I'm not saying that hard drive manufacturers have (quite yet) hit their 
 ability to increase storage densities - indeed, I do expect to see 4TB drives 
 some time in the next couple of years.
 
 What I am saying is that it doesn't matter if areal densities continue to 
 increase - we're at the point now with 1TB drives where the predictable 
 hard error rate is just below the level we can tolerate.  That is, error 
 rates (errors per X bits read/written) have dropped linearly over the past 
 3 decades, while densities are on a rather severe geometric increase, and 
 data transfer rates have effectively stopped increasing at all.  What this 
 means is that while you can build a higher-capacity disk, the time you can 
 effectively use it is dropping (i.e. before it experiences a non-recoverable 
 error and has to be replaced), and the time it takes to copy all the data 
 off one drive onto another is increasing.  If X = (time to use) and Y = 
 (time to copy off data), when X < 2*Y, you're screwed.  In fact, from an 
 economic standpoint, when X < 100*Y, you're pretty much screwed.  And 1TB 
 drives are about the place where they can still just pass this test.  1.5TB 
 drives and up aren't going to be able to pass it.
 
 Everything I've said applies not only to 3.5" drives, but to 2.5" drives. 
 It's a problem with the basic Winchester hard drive technology.  We just get 
 a bit more breathing space (maybe two technology cycles, which in the HD 
 sector means about 3 years) with the 2.5" form factor.  But even they are 
 doomed shortly.
 
 
 I got a pack of Bud with your name on it.  :-)
 
 
 
 -- 
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA
 Timezone: US/Pacific (GMT-0800)
 



Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Eric Andersen

 I am doing something very similar.  I backup to external USB's, which I
 leave connected to the server for obviously days at a time ... zfs send
 followed by scrub.  You might want to consider eSATA instead of USB.  Just a
 suggestion.  You should be able to go about 4x-6x faster than 27MB/s. 

I did strongly consider going with eSATA.  What I really wanted to use was 
FireWire 800 as it is reasonably fast and the ability to daisy chain devices is 
very appealing, but some of the stuff I've read regarding the state of 
OpenSolaris FireWire drivers scared me off.  

I decided against eSATA because I don't have any eSATA ports.  I could buy a 
controller or run SATA-to-eSATA cables off the four available onboard ports, but 
either way, when/if I run out of ports, that's it.  With USB, I can always use 
a hub if needed (at even slower speeds).  If OpenSolaris supported SATA port 
multipliers, I'd have definitely gone with eSATA.  The speed issue isn't really 
critical to me, especially if I'm doing incremental send/receives.  Recovering 
my data from backup will be a drag, but it is what it is.  I decided cheap and 
simple was best, and went with USB.

 I have found external enclosures to be unreliable.  For whatever reason,
 they commonly just flake out, and have to be power cycled.  This is
 unfortunately disastrous to solaris/opensolaris.  The machine crashes, you
 have to power cycle, boot up in failsafe mode, import the pool(s) and then
 reboot once normal.

This is what I've overwhelmingly heard as well.  Most people point to the 
controllers in the enclosures.  If I could find a reasonable backup method that 
avoided external enclosures altogether, I would take that route.  For cost and 
simplicity it's hard to beat externals.

 I am wondering, how long have you been doing what you're doing?  Do you
 leave your drives connected all the time?  Have you seen similar reliability
 issues?  What external hardware are you using?

Not long (1 week), so I'm just getting started.  I don't leave the drives 
connected.  Plug them in, do a backup, zpool export, unplug and throw in my 
safe.  It's far from great, but it beats what I had before (nothing).  I plan 
to do an incremental zfs send/receive every 2-4 weeks depending on how much new 
data I have.  I can't attest to any sort of reliability as I've only been at it 
for a very short period of time.  I am using 2TB WD Elements drives (cheap).  
This particular model (WDBAAU0020HBK-NESN) hasn't been on the market too 
terribly long.  There is one review on Newegg of someone having issues with one 
from the start.  It sucks, but I think the reality is that it's pretty much a 
crapshoot when it comes to reliability on external drives/enclosures.
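
For what it's worth, the routine is roughly this (the pool and snapshot names here are only for illustration):

  zpool import backup                          # plug the drives in and import the backup pool
  zfs snapshot -r tank@2010-04-09              # take a new recursive snapshot of the main pool
  zfs send -R -i tank@2010-03-26 tank@2010-04-09 | zfs receive -duF backup
  zpool export backup                          # then unplug and into the safe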

 I started doing this on one system (via eSATA) about a year ago.  It worked
 flawlessly for about 4 months before the disk started crashing.  I started
 doing it on another system (via USB) about 6 months ago.  It just started
 crashing a couple of weeks ago.
 
 I am now in the market to try and identify any *well made* external
 enclosures.  The best I've seen so far is the Dell RD1000, but we're talking
 crazy overpriced, and hard drives that are too small to be useful to me.

If you find something good, please let me know.  There are a lot of different 
solutions for a lot of different scenarios and price points.  I went with 
cheap.  I won't be terribly surprised if these drives end up flaking out on me. 
 You usually get what you pay for.  What I have isn't great, but it's better 
than nothing.  Hopefully, I'll never need to recover data from them.  If they 
end up proving to be too unreliable, I'll have to look at other options.

Eric

 If we still are capped out at 2TB as the limit for a physical
 device in 2 years, I solemnly pledge now that I will drink a six-pack
 of beer in his name.  
 
 I solemnly pledge to do it anyway.  And why wait?  ;-)
 



Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Eric Andersen
I thought I might chime in with my thoughts and experiences.  For starters, I 
am very new to both OpenSolaris and ZFS, so take anything I say with a grain of 
salt.  I have a home media server / backup server very similar to what the OP 
is looking for.  I am currently using 4 x 1TB and 4 x 2TB drives set up as 
mirrors.  Tomorrow, I'm going to wipe my pool and go to 4 x 1TB and 4 x 2TB in 
two 4-disk raidz vdevs.

I backup my pool to 2 external 2TB drives that are simply striped using zfs 
send/receive followed by a scrub.  As of right now, I only have 1.58TB of 
actual data.  ZFS send over USB2.0 capped out at 27MB/s.  The scrub for 1.5TB 
of backup data on the USB drives took roughly 14 hours.  When needed, I'll 
destroy the backup pool and add more drives.  I looked at a lot of 
different options for external backup and decided to go with cheap (USB).
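
In rough terms (the device and pool names below are only placeholders for my setup), the backup pool and the initial copy look like this:

  zpool create backup c8t0d0 c9t0d0            # two externals, simply striped, no redundancy
  zfs snapshot -r tank@backup-initial
  zfs send -R tank@backup-initial | zfs receive -duF backup
  zpool scrub backup                           # followed by a scrub to verify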

I am using 1TB and 2TB WD Caviar Green drives for my storage pool, which are 
about the cheapest and probably close to the slowest consumer drives you can 
buy.  I've only been at this for about 4-5 months now, and thankfully I haven't 
had a drive fail yet so I cannot attest to resilver times.  I do weekly scrubs 
on both my rpool and storage pool via a script called through cron.  I just set 
things up to do scrubs during a timeframe when I know I'm not going to be using 
it for anything.  I can't recall the exact times it took for the scrubs to 
complete, but it wasn't anything that interfered with my usage (yet...)
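
The cron side of it is nothing fancy -- something along these lines (the script path and the Sunday 3am slot are just how I happen to run it; my storage pool is called tank):

  # root's crontab entry
  0 3 * * 0 /usr/local/bin/weekly-scrub.sh

and weekly-scrub.sh is basically just:

  #!/bin/sh
  zpool scrub rpool
  zpool scrub tank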

The vast majority of any streaming media I do (up to 1080p) is over wireless-n. 
 Occasionally, I will get stuttering (on the HD stuff), but I haven't looked 
into whether it was due to a network or I/O bottleneck.  Personally, I would 
think it was due to network traffic, but that is pure speculation.  The vast 
majority of the time, I don't have any issues whatsoever.  The main point I'm 
trying to make is that I'm not I/O bound at this point.  I'm also not streaming 
to 4 media players simultaneously.

I currently have far more storage space than I am using.  When I do end up 
running low on space, I plan to start with replacing the 1TB drives with, 
hopefully much cheaper at that point, 2TB drives.  If using 2 x raidz vdevs 
doesn't work well for me, I'll go back to mirrors and start looking at other 
options for expansion.

I find Erik Trimble's statements regarding a 1 TB limit on drives to be very 
bold.  I don't have the knowledge or the inclination to argue the 
point, but I am betting that we will continue to see advances in storage 
technology on par with what we have seen in the past.  If we still are capped 
out at 2TB as the limit for a physical device in 2 years, I solemnly pledge now 
that I will drink a six-pack of beer in his name.  Again, I emphasize that this 
assumption is not based on any sort of knowledge other than past experience 
with the ever growing storage capacity of physical disks.

My personal advice to the OP would be to set up three 4 x 1TB raidz vdevs and 
to invest in a reasonable backup solution.  If you have to use the last two 
drives, set them up as a mirror.  Redundancy is great, but in my humble 
opinion, for the home user that is using cheap hardware, it's not as critical 
as performance and available storage space.  That particular configuration 
would give you more IOPS than just two raidz2 vdevs, with slightly less 
redundancy and slightly more storage space.  For my own needs, I don't see 
redundancy as being as high a priority as IOPS and available storage space.  
Everyone has to make their own decision on that, and the ability of ZFS to 
accommodate a vast array of different individual needs is a big part of what 
makes it such an excellent filesystem.  With a solid backup, there is really no 
reason you can't redesign your pool at a later date if need be.  Try out what 
you think will work best, and if that configuration doesn't work well in some 
way, adjust and move on...

There are a few different schools of thought on how to back up ZFS filesystems.  
ZFS send/receive works for me, but there are certainly weaknesses with using it 
as a backup solution (as has been much discussed on this list).

Hopefully, in the future it will be possible to remove vdevs from a pool and to 
restripe data across a pool.  Those particular features would certainly be 
great for me.

Just my thoughts.

Eric


Re: [zfs-discuss] RAID10

2010-03-26 Thread Eric Andersen
It depends a bit on how you set up the drives really.  You could make one raidz 
vdev of 8 drives, losing one of them for parity, or you could make two raidz 
vdevs of 4 drives each and lose two drives for parity (one for each vdev).  You 
could also do one raidz2 vdev of 8 drives and lose two drives for parity, or 
two raidz2 vdevs of 4 drives each and lose four drives for parity (2 for each 
raidz2 vdev).  That would give you a bit better redundancy than using 4 mirrors 
while giving you the same available storage space.  The list goes on and on.  
There are a lot of different configurations you could use with 8 drives, but 
keep in mind once you add a vdev to your pool, you can't remove it.
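
To make those options concrete, here's roughly what each looks like at creation time (c1t0d0 through c1t7d0 are placeholder device names):

  # one 8-disk raidz vdev
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # two 4-disk raidz vdevs
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # one 8-disk raidz2 vdev
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # two 4-disk raidz2 vdevs
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # four 2-disk mirrors
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0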

Personally, I would not choose to create one vdev of 8 disks, but that's just 
me.  It is important to be aware that when and if you want to replace the 1.5TB 
disks with something bigger, you need to replace ALL the disks in the vdev to 
gain the extra space.  So, if you wanted to go from 1.5TB to 2TB disks down the 
road, and you set up one raidz of 8 drives, you need to replace all 8 drives 
before you gain the additional space.  If you do two raidz vdevs of 4 drives 
each, you need to replace 4 drives to gain additional space.  If you use 
mirrors, you need to replace 2 drives.  Or, you can add a new vdev of 2, 4, 8, 
or however many disks you want if you have the physical space to do so.
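
The mechanics of growing a vdev that way are roughly this (disk names are placeholders, and you replace one disk at a time, letting each resilver finish):

  zpool replace tank c1t0d0 c2t0d0     # repeat for every disk in the vdev
  zpool set autoexpand=on tank         # newer builds; older ones pick up the new size after an export/import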

I believe you can mix and match mirror vdevs and raidz vdevs within a zpool, 
but I don't think it's recommended to do so.  The ZFS best practices guide has 
a lot of good information in it if you have not read it yet (google).

You might have less usable drive space using mirrors, but you will gain a bit 
of performance, and it's a bit easier to expand your zpool when the time comes. 
 A raidz (1,2,3) can give you more usable space, and can give you better or 
worse redundancy depending on how you set it up.  There is a lot to consider.  
I hope I didn't cloud things up for you any further or misinform you on 
something (I'm a newb too, so don't take my word alone on anything).  

Hell, if you wanted to, you could also do one 8-way mirror that would give you 
an ignorant amount of redundancy at the cost of 7 drives worth of usable space.

It all boils down to personal choice.  You have to determine how much usable 
space, redundancy, performance, and ease of replacing drives mean to you and go 
from there.  ZFS will do pretty much any configuration to suit your needs. 

eric


Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-20 Thread Eric Andersen
I went through this determination when setting up my pool.  I decided to go 
with mirrors instead of raidz2 after considering the following:

1.  Drive capacity in my box.  At most, I can realistically cram 10 drives in 
my box and I am not interested in expanding outside of the box.  I could go 
with 2.5 inch drives and fit a lot more, but I don't feel the necessity to do 
so.  That being said, given the historic trend for mass storage drives to 
become cheaper over time, I have a feeling that I will be replacing drives to 
expand storage space long before the drives themselves start failing.  The 
added redundancy of raidz2 is great, but I am betting that, barring a poorly 
manufactured drive, I will be replacing the drives with bigger drives before 
they have a chance to reach the end of their life.

2.  Taking into account the above, it's a great deal easier on the pocket book 
to expand two drives at a time instead of four at a time.  As bigger drives are 
always getting cheaper, I feel that I have a lot more flexibility with mirrors 
when it comes to expanding.  If you have limitless physical space for drives, 
you might feel differently. 

3.  Mirrors are going to perform better than raidz.  Again, redundancy is 
great, but so is performance.  My setup is for home use.  I want to keep my 
data safe but at the same time I am limited by cost and space.  I think that 
given the tradeoff between the two, mirrors win.  I feel that the chances of 
two drives in a mirror failing simultaneously are remote enough that I'll take 
the risk.

4.  Again, I'm running this at home.  It's not mission critical to me to have 
my data available 24/7.  Redundancy is a convenience and not a necessity.  
Regardless of what you choose, backups are what will save your ass in the event 
of catastrophe.  Having said that, I currently don't have a good backup 
solution and how to implement a good backup solution seems to be a hot topic on 
this list lately.  Figuring out how to easily, effectively and cheaply back up 
multiple terabytes of storage is my number one priority at the moment.

So anyway, all things considered, I prefer the better performance and the easier 
expansion of storage space within my limited physical space over a relatively small 
layer of extra redundancy.  If you aren't doing anything that necessitates the added 
redundancy of raidz2, go with mirrors.  Either way, if you care about your 
data, back it up.
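
As a rough sketch of what that looks like in practice (device names are placeholders), the mirrored pool starts out as pairs and grows two drives at a time:

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # later, when space runs low, add another pair
  zpool add tank mirror c1t4d0 c1t5d0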

eric