
 Hi James,

 pretty much all relevant things have already been said, so I can keep
 this short (well, that didn't quite work out).

 MacZFS (stable version) comes with man pages.  A simple "man zpool"
 should give you access to all the ZFS pool maintenance commands.

 I virtually fell off my chair when I read your two statements
 "I thought I need not backups" and "what is scrub?  Does it do a data
 wipe?".

 As Jason said, ZFS is a wonderful piece of technology, but it is not
 the kind of software one should use by just following some
 step-by-step guides.  It will sooner or later bite you.  We tried to
 make it safe and we tried to make it Mac friendly, but ZFS is
 ultimately designed for big data centers, and no interface magic can
 really hide that fact.

 Nevertheless, to answer your questions:

 A scrub reads all data in a pool and verifies the checksums ZFS
 maintains for every chunk of data stored in the pool.  Jason gave you
 the commands in his other post.
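
 For example (assuming the pool is still named "murr", as in your
 status output):

     sudo zpool scrub murr    # start a background scrub of the pool
     zpool status murr        # shows scrub progress and any errors found

 The pool stays usable while the scrub runs.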

 If (big if) you have redundancy in your pool, that is, a mirror or a
 raidz, then and only then can it repair damaged data in the background.

 It does so either by getting a good copy from the other side(s) of
 the mirror, or by reconstructing the data from the raidz parity.

 In a raidzX you can lose X drives without immediate data loss; in an
 N-way mirror you can lose (N-1) drives without immediate data loss.

 Note!  The keyword here is *immediate* data loss.  If you buy 3 drives
 in a batch and put these drives in a pool (mirror or raidz), then
 these drives will experience a similar workload under similar
 conditions, which significantly increases the likelihood that they all
 fail around the same time.

 Which means that in a raidz1 you have a significant chance that a
 second drive will fail while you are in the process of replacing the
 first failed drive.  The moment a second drive fails, your data is gone.

 That is why you need backups.

 I have personally seen this happen more than once, and I switched to
 always pairing drives from different manufacturers and suppliers into
 mirror pairs.  I say "and suppliers" so that both drives do not
 experience the same shuffles and drops to the ground while in transit.

 And you need regular(!) scrubs to find out that a drive is getting
 weak before it fails completely, so you can replace it in time.
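
 One way to make the scrubs truly regular is a cron entry.  This is
 just a sketch; it assumes "zpool" is on the PATH cron uses, so adjust
 the path, schedule and pool name to your setup:

     # sudo crontab -e:  scrub the pool "murr" every Sunday at 03:00
     0 3 * * 0  zpool scrub murr

 Check "zpool status" the next day to see whether the scrub found
 anything.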

 And one more word on replacing drives:

 Once you have a drive failure, chances are you are in panic mode or at
 least in a hurry to fix things, which means you are prone to making
 mistakes.  We are all just humans and do make mistakes.  So you should
 practice a drive replacement in advance.  Replacing a random drive on a
 redundant pool using "zpool replace pool drive1 drive2" is supposed to
 be a safe operation, so you can simply try it out.  The tricky part is
 how to hook up the drives and identify the right drive, not the actual
 replace command.
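
 A practice run might look like this, where "disk5s2" is just a
 stand-in for whatever device name your spare drive gets (check
 "diskutil list" first):

     zpool status murr                     # note the exact name of the old drive
     sudo zpool replace murr disk3s2 disk5s2   # copy its data onto the spare
     zpool status murr                     # watch the resilver until it completes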

 Using "zpool replace" instead of the sometimes suggested "zpool
 attach" / "zpool detach" saves you from the all too common mistake of
 typing "zpool add" instead of "zpool attach", a mistake that would
 screw up your pool layout and that can only be fixed by destroying and
 recreating the pool.
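
 To see the difference, take a hypothetical two-drive pool named
 "tank", purely for illustration (note that "zpool attach" works on
 single-disk and mirror vdevs, not on raidz):

     zpool attach tank disk1 disk2   # disk2 becomes a mirror of disk1
     zpool add    tank disk2         # disk2 becomes a separate, non-redundant vdev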

 Regarding the slowness:

 Using 4k drives in a pool configured for 512-byte drives (the standard
 sector size since hard drives were invented) will kill performance.

 Using 512-byte drives in a pool configured for 4k drives does no harm,
 except for wasting a bit of space if you have many small files.

 So I suggest you destroy and recreate the pool if your drives are 4k
 (also called "Advanced Format").  To configure a pool for 4k, you add
 "-o ashift=12" to the "zpool create" command.  "zpool get all" should
 tell you the current ashift value, which is 9 for 512-byte drives and
 12 for 4k drives.
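
 Put together, and only after you have copied everything off the pool
 because this destroys all data on it, the recreate step would look
 roughly like this (same disk names as in your original "zpool
 create"):

     sudo zpool destroy murr
     sudo zpool create -o ashift=12 murr raidz disk3s2 disk1s2 disk2s2 disk4s2
     zpool get all murr    # check that ashift now reports 12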

 :-) Exercise for the reader: Which ashift value to use for old style
 16k flash memory?  (Not that it would last long, but that's not the
 point here.)

 Regarding slow, very large directories:

 Another issue our colleagues working on the new MacZFS found out:
 the Mac OS X kernel has a problem with caching really large
 directories, because it can run out of some internal file resources
 (the famous vnodes).  This hits ZFS especially hard due to the way it
 handles its own short-term locking and caching.
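
 If you are curious where that limit sits on your machine, the vnode
 limit is exposed as a sysctl.  Raising it is a stop-gap rather than a
 fix, and the number below is only an illustration:

     sysctl kern.maxvnodes                  # show the current vnode limit
     sudo sysctl -w kern.maxvnodes=200000   # raise it until the next reboot (example value)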

 Best regards


On 20.05.14 18:09, James Hoyt wrote:
> You have completely lost me at this point. You were rather 
> condescending and not helpful. I was hoping for instructions on how
> to clean and scrub and saw none of that. At least point me to some
> proper links. I also don't know what a 4k drive is.
> I carefully followed and read ALL the instructions and FAQ and
> Getting Started guide on maczfs.org. Please don't speak to me like
> I didn't do my research or follow the proper instructions.
> - James
> On Tue, May 20, 2014 at 9:24 AM, Jason Belec
> <jasonbe...@belecmartin.com> wrote:
>> OK, one thing, any indexing under that version of ZFS is going to
>> kill performance. Long standing issue.
>> No backups? Did you bump your noggin? With your current setup you
>> have improved your chances if you're scrubbing regularly and if you
>> only lose a drive at any one time. And adding backup will
>> drastically increase your chances.
>> Not understanding ZFS is a BIG reason to stop and re-evaluate
>> your priorities. It's amazing tech IF used properly.
>> For what it sounds like you want from ZFS you should use mirrors.
>> You can do 2 mirrors of 2 drives each, striped under ZFS. This
>> will increase the safety of your data. Even that should have a
>> back up drive you move key files or better yet 'snapshots' onto.
>> BUT you are going to have to understand ZFS to have any hope of
>> not drowning in a pool of tears at some point.
>> The new ZFS is under development but far more functional,
>> eliminating many of the old version's issues listed numerous times
>> throughout the forum. Either way you should ALWAYS understand the
>> tech you rely on. Period.
>> Please start learning with the word 'scrub' then the word
>> 'snapshot' and how to swap a failed drive and do it all. Before
>> committing your valuable data. Drives fail. Repeat. Drives fail.
>> Data must be restored at some point. ZFS is magical if you have
>> planned ahead. I have recovered data assumed totally lost, YMMV.
>> As for those drives, are they 4k? If so, you formatted your pool
>> incorrectly. I don't have any of those so I don't have notes.
>> Should be a simple Google search to find out. And the wiki has
>> the instructions on 4k drive setup.
>> Doing things right is what the wiki tries to help people with.
>> The forum allows you to search for other people's heartbreak to
>> help prevent your own.  The wizards tracking this stuff have done
>> a wonderful job.
>> Hope this gets you rolling. I'd still check your cables as well.
>> Normally I attach a drive, build a pool, test a lot, destroy
>> pool. Add another drive. Repeat. Better safe than sorry.
>> Manufacturers are not safe guarding your data.
>> Jason Sent from my iPhone 5S
>>> On May 20, 2014, at 9:37 AM, James Hoyt <djnati...@gmail.com>
>>> wrote:
>>> Thanks for the detailed reply.
>>> The slow performance is only there when I'm using the RAID array,
>>> so I assume that with it disconnected (meaning I can't use it)
>>> there is no slow performance. I would love instructions on how to
>>> scrub/clean the pool. Does it do a data wipe?
>>> I was trying to think of a good backup solution. I have over 3
>>> TBs of music in FLAC (lots of which I've paid for) and was
>>> hoping RAIDZ would take away the need for backups. I was
>>> thinking of buying a 4 TB drive and moving all my data on that
>>> and storing the drive offsite or something (in case of
>>> burglary, fires, etc). Having a single drive fail safe seems
>>> secure enough for me so I don't think incremental backups are
>>> needed.
>>> As for running the latest beta ZFS, I didn't because the FAQ
>>> warned me not to. What are the differences? Would I have to
>>> format and rebuild the array?
>>> The drives I have are four 3 TB Hitachi HDS723030BLE640.
>>> I started navigating around my computer again, and the slowdown
>>> seems to be when going into folders with over 1000 files (for
>>> anything more it will take 1-3 minutes to just list the files
>>> in the directory). Also when I'm saving images from Firefox (no
>>> virtual machine running) it takes awhile to navigate the folder
>>> structure and sometimes not all the folders show, but they do
>>> in the Finder. So I wonder if this is an issue with programs
>>> not getting along with ZFS but the finder being fine with it.
>>> Other things to note, I did disable Spotlight on the drive to
>>> make sure that isn't running, but I do have QuickSilver.
>>> Originally, I had QuickSilver indexing the drive, but the
>>> computer was practically unusable when it did that so I
>>> disabled that.
>>> I look forward to any advice you guys may have.
>>> Thanks,
>>> James
>>>> On Tue, May 20, 2014 at 6:14 AM, Jason Belec
>>>> <jasonbe...@belecmartin.com> wrote: OK, doesn't look like
>>>> RAM, processor etc., are the issue.... Let's work with that
>>>> in mind for now.
>>>> When the pool and the associated drives are not connected, is
>>>> the computer back to your expectation of normal? If so, you
>>>> have one or more bad cables, one or more bad drives, or a bit
>>>> of both, perhaps a bad or not quite capable power supply
>>>> (solves 90% of all issues I come across). Maybe even an issue
>>>> with the motherboard. Simplest thing, have you run a scrub on
>>>> this pool? Clean?
>>>> The type of drives you have is not an issue, the make and
>>>> known issues with said drives might be, but you didn't
>>>> provide that info.
>>>> Throwing out terms like RAID card and Mac OS Journaled will not
>>>> help you; you're either ZFS or not. That said, you will not get
>>>> the same speed from ZFS as from other raid setups, but you
>>>> will get peace of mind on data integrity. I do hope you are
>>>> also backing up data from the pool as well or eventually you
>>>> will be in tears like so many others. A little forum
>>>> searching under old and new versions of mac zfs will be
>>>> helpful.
>>>> Since you're getting started, once this is resolved it might be
>>>> better to build/run this under the latest (yes its in
>>>> development) Mac ZFS rather than the old tired version. It is
>>>> quite a bit different, modern and makes many things a lot
>>>> easier. (Insert legal disclaimer here) ;)
>>>> Interesting aside: Dave mentioned an interesting point about
>>>> wearing out SSDs, and I must admit I've had two such
>>>> occurrences but only with a hackintosh and only with less
>>>> than stellar drives. Seems that here around the mad science
>>>> lab Intel SSDs are the most reliable long term. I have two of
>>>> their originals still outlasting several other brands.
>>>> -- Jason Belec Sent from my iPad
>>>>> On May 19, 2014, at 10:05 AM, James Hoyt
>>>>> <djnati...@gmail.com> wrote:
>>>>> Thanks for all the replies guys =D
>>>>> Sorry for lack of information. I'm running a Hackintosh
>>>>> with a 256 GB SSD and I sometimes run Windows 8.1 in a
>>>>> virtual machine via VmWare Fusion. The virtual image file
>>>>> is also located on the SSD. The only files I have on my
>>>>> zpool are data files. I don't run an OS or VM image from
>>>>> it. I have 12 GBs of RAM and a four core i5 processor. On
>>>>> the VM, I dedicate 6 GBs of RAM and 2 cores to it. It
>>>>> should be noted that I experience the slowdown even when
>>>>> vmware is off; it's just that the drives act slowest when the
>>>>> VM is running.
>>>>> As for how I created the zpool, I followed the Getting
>>>>> Started guide with
>>>>> zpool create murr raidz disk3s2 disk1s2 disk2s2 disk4s2
>>>>> Please help... I really hope I don't have to recreate it,
>>>>> but it's looking that way.
>>>>> Would it be better if I bought a RAID card and use Mac OS
>>>>> Journaled? Cost is an issue... the other issue is these are
>>>>> regular desktop 7200 RPM drives.. not NAS drives.
>>>>> Thanks,
>>>>> James
>>>>>> On Mon, May 19, 2014 at 7:43 AM, Jason Belec
>>>>>> <jasonbe...@belecmartin.com> wrote: Dave has posted some
>>>>>> good info. Reminds me why I prefer Virtualbox. ;) We do
>>>>>> seem to need more detail though to really help the
>>>>>> original OP.
>>>>>> Jason Sent from my iPhone 5S
>>>>>>> On May 19, 2014, at 4:00 AM, Dave Cottlehuber
>>>>>>> <d...@jsonified.com> wrote:
>>>>>>> From: James Hoyt <djnati...@gmail.com>
>>>>>>> Reply: zfs-macos@googlegroups.com
>>>>>>> Date: 19 May 2014 at 02:27:36
>>>>>>> To: zfs-macos@googlegroups.com
>>>>>>> Subject: [zfs-macos] RAIDZ1 running slow =(
>>>>>>>> So I setup a MacZFS RaidZ rather easily and was happy
>>>>>>>> with myself. I had four 3 TB internal SATA drives in
>>>>>>>> a zpool giving me around 9 TB of space.
>>>>>>>> jamess-imac:~ sangie$ zpool status murr
>>>>>>>>   pool: murr
>>>>>>>>  state: ONLINE
>>>>>>>>  scrub: none requested
>>>>>>>> config:
>>>>>>>>
>>>>>>>>         NAME         STATE   READ WRITE CKSUM
>>>>>>>>         murr         ONLINE     0     0     0
>>>>>>>>           raidz1     ONLINE     0     0     0
>>>>>>>>             disk3s2  ONLINE     0     0     0
>>>>>>>>             disk1s2  ONLINE     0     0     0
>>>>>>>>             disk2s2  ONLINE     0     0     0
>>>>>>>>             disk4s2  ONLINE     0     0     0
>>>>>>>> errors: No known data errors
>>>>>>>> So I Filled it up with about 5 GBs of data, mainly
>>>>>>>> images and FLAC/music files and everything just drags
>>>>>>>> on it. It takes a long time for files to be listed in
>>>>>>>> finder and when I try to save an image from Firefox,
>>>>>>>> it will just grind and grind while I try to navigate
>>>>>>>> to a folder. I have vmware Fusion setup on my SSD (my
>>>>>>>> main Mac drive) and doing anything on my zpool from
>>>>>>>> Windows (like using MediaMonkey to organize FLAC
>>>>>>>> files on it) uses up 100% of the CPU, freezing up my
>>>>>>>> computer until the moves are done, even when moving
>>>>>>>> around 30 files.
>>>>>>> It’s not clear from this what your actual physical /
>>>>>>> virtual setup is. Are you booting to OSX, and running
>>>>>>> Windows in a VM? Is the entire VM then living on the
>>>>>>> raidz pool?
>>>>>>>> Is my zpool okay? What's going on? Is this type of
>>>>>>>> slowness normal or do I have a bad drive? How will
>>>>>>>> MacZFS report to me if a drive in the array goes bad?
>>>>>>>> I installed SMARTReporter Lite and it shows all
>>>>>>>> drives as green. If I have some drives on SATA II and
>>>>>>>> others on SATA III would that affect anything?
>>>>>>>> If you want me to run any tests on it, I will do so
>>>>>>>> gladly. Just let me know.
>>>>>>>> Thanks!
>>>>>>> I’ve seen precisely this sort of behaviour with vmware
>>>>>>> fusion when:
>>>>>>> 1. my SSD was getting worn down (really, I trashed it in 1
>>>>>>>    year, it was the default apple one coming with early 2011
>>>>>>>    MBP)
>>>>>>> 2. the host OS & VM doesn't have sufficient memory to run
>>>>>>>    correctly without swapping
>>>>>>> 3. the additional memory within the VM is pulled from a disk
>>>>>>>    swap file, which is by default in the same disk location as
>>>>>>>    the VM itself
>>>>>>> Anything less than 8GB of RAM is likely to be tight,
>>>>>>> VMs will of course make this more complicated. Some
>>>>>>> notes on
>>>>>>> http://artykul8.com/2012/06/vmware-performance-enhancing/
>>>>>>> may help.
>>>>>>> I found that my SSDs were being worn out with constant
>>>>>>> running of VMs; I use them heavily in my work. The
>>>>>>> solution I found was to get max RAM in my laptop + imac
>>>>>>> (16 vs 32 respectively), make a zfs based ramdisk with
>>>>>>> lz4 compression, and copy the entire VM into the
>>>>>>> ramdisk before running it. The copy phase only takes a
>>>>>>> few seconds from SSD, and it gives me a very nice way
>>>>>>> to “roll back” to the previous image when required. I
>>>>>>> can comfortably run Windows in a 20GiB ramdisk that
>>>>>>> fits inside a 10GiB zpool with compression, even on the
>>>>>>> 16GiB laptop, and allocating 2GiB of ram for the VM
>>>>>>> itself (10 + 2 for virtualisation & leave 4 for all of
>>>>>>> OSX stuff).
>>>>>>> Here’s the zsh functions I use for this.
>>>>>>> # create a 1GiB ramdisk
>>>>>>> ramdisk-1g () { ramdisk-create 2097152 }
>>>>>>>
>>>>>>> # the generic function for the specific one above
>>>>>>> ramdisk-create () {
>>>>>>>     diskutil eject /Volumes/ramdisk > /dev/null 2>&1
>>>>>>>     diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://$1`
>>>>>>>     cd /ramdisk
>>>>>>> }
>>>>>>>
>>>>>>> # make a zpool-backed ramdisk instead of the HFS+ ones above.
>>>>>>> # Main advantage is compression. I get at least 2x more "disk"
>>>>>>> # for RAM with this approach.
>>>>>>> zdisk () {
>>>>>>>     sudo zpool create -O compression=lz4 -fm /zram zram `hdiutil attach -nomount ram://20971520`
>>>>>>>     sudo chown -R $USER /zram
>>>>>>>     cd /zram
>>>>>>> }
>>>>>>>
>>>>>>> # self explanatory
>>>>>>> zdisk-destroy () { sudo zpool export -f zram }

-- 
|     Bjoern Kahl   +++   Siegburg   +++    Germany     |
| "googlelogin@-my-domain-"   +++   www.bjoern-kahl.de  |
| Languages: German, English, Ancient Latin (a bit :-)) |



