From: James Hoyt
Date: 20. Mai 2014 at 15:37:51
Subject: Re: [zfs-macos] RAIDZ1 running slow =(

> Thanks for the detailed reply.  
> The slow performance is only when I'm using the RAID array so I assume

It would be interesting to try a raw `dd of=/dev/null` concurrently from each
drive while the zpool is not imported/mounted and see if the performance is
still sucky. Many of us have found that sub-standard components are the
cause of this (e.g. a while back I was getting poor performance, and the
drive failed completely soon afterwards).
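A minimal sketch of that raw read test, run with the pool exported. The device names (disk2..disk5) are placeholders, not from this thread -- check `diskutil list` (OS X) or `camcontrol devlist` (FreeBSD) for yours:

```shell
# Read 1 GB raw from each disk in parallel, pool NOT imported.
# Device names are hypothetical -- substitute your own.
for disk in disk2 disk3 disk4 disk5; do
  dd if=/dev/r${disk} of=/dev/null bs=1m count=1024 2> /tmp/dd_${disk}.log &
done
wait
# dd prints throughput on stderr; one slow outlier suggests a bad drive or cable.
grep -H 'bytes' /tmp/dd_*.log
```

If one drive reads markedly slower than its siblings, that drive (or its cable/port) is the likely bottleneck for the whole raidz vdev.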

> without it connected means I can't use it means there is no slow  
> performance. I would love instructions on how to scrub/clean the pool.  
> Does it do a data wipe?  

Scrubbing is non-destructive, but it is IO-intensive; I typically see IO
near the wire speed of the drives. Assuming you currently have some bottleneck,
this may be painful for your system.

Scrub via:

        zpool scrub <pool>

It may take a wee while to ramp up to max speed. You can check status via:

        zpool status 5

And cancel via:

        zpool scrub -s <pool>

A very useful thing is to have a bootable FreeBSD (or SmartOS/illumos if
you are OK in the Solaris world) to do faster scrubs from. I can personally
recommend mfsBSD, which is a memory-resident FreeBSD.
Boot from that, `zpool import -f`, and scrub away.

> I was trying to think of a good backup solution. I have over 3 TBs of  
> music in FLAC (lots of which I've paid for) and was hoping RAIDZ would  
> take away the need for backups. I was thinking of buying a 4 TB drive  
> and moving all my data on that and storing the drive offsite or  
> something (in case of burglary, fires, etc). Having a single drive  
> fail safe seems secure enough for me so I don't think incremental  
> backups are needed.  
> As for running the latest beta ZFS, I didn't because the FAQ warned me  
> not to. What are the differences? Would I have to format and rebuild  
> the array?  

I’ve been using the beta since before it was alpha. It sometimes has trouble
shutting down, but then I do that rarely; other than that I think
the performance is better, and the functionality is the same as other
zfs implementations/ports. A power cycle resolves the hung reboot, and
as it’s zfs I am very sure my data is safe.

I do have a rather robust backup environment, but using `zfs send …`
to a 2nd Mac and to a remote FreeBSD server is a very nice addition.
WRT rebuilding the array: personally, I would do this.
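For flavour, a sketch of what a send-based backup looks like. The pool name `tank`, remote host `backup`, destination pool `dump`, and snapshot names are all placeholders, not from this thread:

```shell
# Take a recursive snapshot, then replicate it to a remote box over ssh.
zfs snapshot -r tank@20140520

# First run: full replication stream.
zfs send -R tank@20140520 | ssh backup zfs receive -duv dump

# Later runs: send only the delta between two snapshots (incremental).
zfs send -R -i tank@20140519 tank@20140520 | ssh backup zfs receive -duv dump
```

Incremental sends are cheap once the initial full stream is across, which makes a nightly cron job practical even over a slow link.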

> The drives I have are four 3 TB Hitachi HDS723030BLE640.  

I can’t tell if these are actually 4K-sector drives under the hood, but that
gives you some idea of the issue. Basically, if you can, create your pool with
4K alignment by default even if your drives don’t claim to need it today. It
is not possible to change it after the fact (although I have a sneaking
suspicion there are a few dark-art tricks to help with this if you have drives
to spare). I found this made a noticeable difference after I switched over
from zfsosx with 512B blocks to zfsosx with 4K alignment. It is easy to do
this if you can duplicate your data elsewhere first.

> I started navigating around my computer again, and the slowdown seems
> to be when going into folders with over 1000 files (for anything more
> it will take 1-3 minutes to just list the files in the directory).
> Also when I'm saving images from Firefox (no virtual machine running)
> it takes awhile to navigate the folder structure and sometimes not all
> the folders show, but they do in the Finder. So I wonder if this is an
> issue with programs not getting along with ZFS but the finder being
> fine with it.

I use a specific format for Finder-friendliness:

    zfs create -o normalization=formD -o atime=off <name>

which also inherits the settings I have set on the parent dataset.
There are more Finder notes & tricks here, but
a few of the points are out of date wrt zfs-osx, notably the section
about sending snapshots.
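To confirm a child dataset really picked up settings from its parent, you can ask ZFS where each property value came from. The dataset name `tank/media` is a placeholder:

```shell
# Show each property's value and its origin (local vs inherited).
zfs get -o name,property,value,source normalization,atime tank/media
# SOURCE reads "inherited from tank" for inherited settings, "local" otherwise.
```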

> Other things to note, I did disable Spotlight on the drive to make
> sure that isn't running, but I do have QuickSilver. Originally, I had
> QuickSilver indexing the drive, but the computer was practically
> unusable when it did that so I disabled that.

Could be mds is still indexing; it’s a PITA to disable. My
fix_finder script above does that for each dataset on the zfs drive.
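If you want to do it by hand rather than via a script, Apple's `mdutil` can disable Spotlight per volume. The mount point is a placeholder:

```shell
# Turn off Spotlight indexing for one volume and wipe its existing index.
sudo mdutil -i off /Volumes/tank
sudo mdutil -E /Volumes/tank      # erase the stale index
mdutil -s /Volumes/tank           # confirm status: "Indexing disabled."
```

Note mds can re-enable itself after reboots or OS updates, which is why doing it per dataset from a script is handy.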

Dave Cottlehuber
Sent from my Couch


You received this message because you are subscribed to the Google Groups 
"zfs-macos" group.