So it sounds like I need to recreate my zpool ...

"Logical Block Size" = 512
"Physical Block Size" = 4096

So I should use the following commands on my next zpool to help Finder
performance and make it compatible with 4K drives?

zpool create -o ashift=12 murr raidz disk3s2 disk1s2 disk2s2 disk4s2
zfs create -o normalization=formD -o atime=off murr/data

(let me know if I have any errors in this; "murr/data" is just a
placeholder dataset name)
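
From what I've read, zdb can confirm the ashift of the finished pool
(9 = 512b, 12 = 4K); I'm assuming the stable MacZFS zdb dumps the
cached pool config the way other platforms' zdb does:

zdb -C murr | grep ashift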

As for the slowness in a VM, Mac file sharing could affect it: Windows
8 under Fusion accesses the drives by mounting \\jamess-imac\Volumes\murr
as the Z: drive, so it technically is a file share, if that's what you
mean. But it could also be that the slowness of a zpool that isn't 4K
compatible is compounded by the virtual machine. (Could someone update
the Getting Started guide to have you create a 4K zpool by default?)

Thanks for the advice on Songbird! I may try it if it can organize via
masks and supports custom ID3 fields. I see it's been discontinued,
but it's still on SourceForge.

I'm at work so I can't give a better reply, but I have a lot more to
look into and read now =)

- James


On Tue, May 20, 2014 at 2:07 PM, 'Busty' via zfs-macos
<zfs-macos@googlegroups.com> wrote:
> James,
>
> I use my 15TB pool mainly for flac files too, so I thought I'd throw in
> my two cents (even if some is not zfs related):
>
> regarding iTunes recognizing flac: there is a QuickTime component that
> will enable flac in QuickTime; iirc it also works in iTunes, at least
> you can get it to. But it will not play gapless: there is a stretch of
> silence between songs.
>
> Another thing is called "TwistedFlac", which shows all flac files in a
> folder you specify as wave files. These can be imported into iTunes;
> the downside is that the tags are not recognized.
>
> Just in case that helps with your library. I use Songbird, which can
> do about anything you want, but is not as stable as iTunes.
>
> Regarding your files showing up very slowly: I experience that when I
> access my files on the pool from a remote machine, which has to do with
> AFP (the Apple file sharing protocol), so I have set up an NFS share.
> But you don't write about accessing the files from a remote machine, so
> this should not be your issue.
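>
> In case that route helps anyone: on OS X the NFS server reads
> /etc/exports, so the minimal setup is along these lines (the mountpoint
> and network range here are just examples, adjust them to your setup):
>
> echo "/Volumes/murr -alldirs -network 192.168.1.0 -mask 255.255.255.0" | sudo tee -a /etc/exports
> sudo nfsd checkexports
> sudo nfsd enable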
>
> I kinda went the way you did. I had no knowledge of zfs but really
> wanted the features for data safety. That was roughly 3-4 years ago. As
> I set up my pool (and my backup, by the way), I came across all kinds of
> problems (drives vanishing, kernel panics, slow file browsing, scripts
> to automate backups and scrubs, you name it) which had to be solved, so
> I had a lot of reading and googling to do. I was kinda fooled by the
> MacZFS tutorial into thinking that this would be completely easy, like
> you describe.
>
> These guys, first and foremost Jason, Alex Blewitt and Bjoern, helped
> me a lot to get on the way (so thanks again, guys).
>
> Sebastian
>
>
> On 20.05.14 20:28, James Hoyt wrote:
>> Hi Björn, thanks for your reply, and thanks for your help, Jason, in
>> all this. I've actually been in the IT industry for 12 years, am A+
>> certified, and am currently pursuing my CCNA and MCSA, so a technical
>> setup didn't intimidate me (granted, servers are a new beast to me). I
>> came across Mac ZFS while researching RAID options. As I'm getting new
>> music in wav/flac daily from a number of sources, a manual backup
>> system really wouldn't work for me as it's too hard to keep up. I
>> tried once with a blu-ray writer and it was a nightmare.. plus I'm
>> regularly categorizing music with MediaMonkey (which is why I have a
>> virtual machine: it's Windows-only... oh, why won't iTunes support FLAC
>> natively!) so all the tracks are updated now and then. So I thought an
>> offsite backup that's updated every few months along with a four-drive
>> RAID setup with one drive for redundancy would be all that I would
>> need.
>>
>> MacZFS.org is well put together and the tutorials aren't intimidating
>> at all. Run a few terminal commands? I can do that. But the depth of
>> ZFS wasn't really covered, nor did it state that more research was
>> needed (like whether I need a 4k setup... or whatever that is D: ). So
>> it's rather frustrating to find out, after moving all my data off my
>> individual 2/1.5 TB drives, that I did it wrong when I was careful,
>> very careful, to follow the Getting Started guide and FAQ precisely.
>>
>> Bleh D:
>>
>> Jason, I apologize for coming off rough. I felt like I was being
>> treated like a lazy moron, which I'm not. I've researched a variety of
>> RAID solutions quite a bit and thought I was all set up for ZFS and
>> just had to change some configuration to speed it up. I'm sure you
>> weren't born with this knowledge and needed the help of others to
>> guide you in the right direction. I was googling things like "slow
>> zpool ZFS" and other similar terms but just couldn't find anything
>> concrete (because my problem isn't concrete).
>>
>> I tried searching for whether my drives are 4k, with no luck. I saw an
>> article back from 2010 stating that hard drives were all planned to be
>> 4k in 2011... which leads me to believe mine are 4k, since I purchased
>> them new last year. Crap D: Is there a sure way to check whether they
>> are 4k? Could this be my performance issue, or is it just because my
>> directories have large numbers of folders/files in them?
>>
>> Again, sorry if I came off rude. This is all new technology to me and
>> I'm doing my best to become familiar with it.
>>
>> Thank you,
>>
>> James
>>
>> On Tue, May 20, 2014 at 12:52 PM, Daniel Becker <razzf...@gmail.com> wrote:
>>> James,
>>>
>>> Perhaps the takeaway here is that MacZFS (and arguably ZFS in general) is
>>> really not a great fit for the casual user. ZFS is very powerful once you
>>> take the time to really get familiar with it, but it does require a fair
>>> amount of research to get started, and it gives you lots of ways to shoot
>>> yourself in the foot. And as you found out yourself, there are a fair number
>>> of caveats and behavioral oddities when running ZFS on a Mac. If you want
>>> something that "just works" without digging into the details, and that
>>> behaves just as you would expect from other file systems, it's
>>> probably not for you (at least not for anything other than
>>> experimentation).
>>>
>>> I know that the MacZFS page likes to give a somewhat different impression,
>>> but in my opinion encouraging non-technical users to install it is really
>>> doing a disservice both to said users and to the community as a whole.
>>>
>>> Daniel
>>>
>>>
>>> On Tue, May 20, 2014 at 9:59 AM, James Hoyt <djnati...@gmail.com> wrote:
>>>>
>>>> I did run status, as you can see from my original post... I didn't
>>>> know about scrub and clean. I did my research only on MacZFS because
>>>> I thought that was the only place it mattered. I didn't trust info on
>>>> other sites because I didn't think it was relevant to how Mac ZFS
>>>> operates.
>>>>
>>>> Please show me where I could have found the scrub command on
>>>> maczfs.org because it is not there. I see nothing about clean either.
>>>>
>>>> I'm openly stating I don't know it, and it's not stated in the wiki,
>>>> FAQ, or Getting Started section on maczfs.org. There is no refusal
>>>> going on.
>>>>
>>>> On Tue, May 20, 2014 at 11:51 AM, Jason Belec
>>>> <jasonbe...@belecmartin.com> wrote:
>>>>> Sorry you feel that way. We have had a lot of people in your situation.
>>>>> You seem to have skipped over the basics.
>>>>>
>>>>> zpool scrub murr
>>>>>
>>>>> zpool status murr
>>>>>
>>>>>
>>>>> This command is on every ZFS site. You're openly stating you don't
>>>>> know it and refuse to look it up. I wish you the best.
>>>>>
>>>>>
>>>>> Jason
>>>>> Sent from my iPhone 5S
>>>>>
>>>>>> On May 20, 2014, at 12:09 PM, James Hoyt <djnati...@gmail.com> wrote:
>>>>>>
>>>>>> You have completely lost me at this point. You were rather
>>>>>> condescending and not helpful. I was hoping for instructions on how to
>>>>>> clean and scrub and saw none of that. At least point me to some proper
>>>>>> links. I also don't know what a 4k drive is.
>>>>>>
>>>>>> I carefully followed and read ALL the instructions and FAQ and Getting
>>>>>> Started guide on maczfs.org. Please don't speak to me like I didn't do
>>>>>> my research or follow the proper instructions.
>>>>>>
>>>>>> - James
>>>>>>
>>>>>>> On Tue, May 20, 2014 at 9:24 AM, Jason Belec
>>>>>>> <jasonbe...@belecmartin.com> wrote:
>>>>>>> OK, one thing: any indexing under that version of ZFS is going to kill
>>>>>>> performance. Long-standing issue.
>>>>>>>
>>>>>>> No backups? Did you bump your noggin? With your current setup you have
>>>>>>> improved your chances if you're scrubbing regularly and if you only
>>>>>>> lose one drive at any one time. And adding backup will drastically
>>>>>>> increase your chances.
>>>>>>>
>>>>>>> Not understanding ZFS is a BIG reason to stop and re-evaluate your
>>>>>>> priorities. It's amazing tech IF used properly.
>>>>>>>
>>>>>>> For what it sounds like you want from ZFS, you should use mirrors. You
>>>>>>> can do 2 mirrors of 2 drives each, striped under ZFS. This will
>>>>>>> increase the safety of your data. Even that should have a backup drive
>>>>>>> you move key files, or better yet 'snapshots', onto.
>>>>>>>
>>>>>>> BUT you are going to have to understand ZFS to have any hope of not
>>>>>>> drowning in a pool of tears at some point.
>>>>>>>
>>>>>>> The new ZFS is under development but far more functional, eliminating
>>>>>>> many of the old version's issues listed numerous times throughout the
>>>>>>> forum. Either way, you should ALWAYS understand the tech you rely on.
>>>>>>> Period.
>>>>>>>
>>>>>>> Please start learning with the word 'scrub', then the word 'snapshot',
>>>>>>> and how to swap a failed drive, and do it all before committing your
>>>>>>> valuable data. Drives fail. Repeat: drives fail. Data must be restored
>>>>>>> at some point. ZFS is magical if you have planned ahead. I have
>>>>>>> recovered data assumed totally lost; YMMV.
>>>>>>>
>>>>>>> As for those drives: are they 4k? If so, you formatted your pool
>>>>>>> incorrectly. I don't have any of those, so I don't have notes. It
>>>>>>> should be a simple Google search to find out. And the wiki has the
>>>>>>> instructions on 4k drive setup.
>>>>>>>
>>>>>>> Doing things right is what the wiki tries to help people with. The
>>>>>>> forum lets you search other people's heartbreak to help prevent your
>>>>>>> own. The wizards tracking this stuff have done a wonderful job.
>>>>>>>
>>>>>>> Hope this gets you rolling. I'd still check your cables as well.
>>>>>>> Normally I attach a drive, build a pool, test a lot, destroy the pool.
>>>>>>> Add another drive. Repeat. Better safe than sorry. Manufacturers are
>>>>>>> not safeguarding your data.
>>>>>>>
>>>>>>> Jason
>>>>>>> Sent from my iPhone 5S
>>>>>>>
>>>>>>>> On May 20, 2014, at 9:37 AM, James Hoyt <djnati...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> Thanks for the detailed reply.
>>>>>>>>
>>>>>>>> The slow performance is only when I'm using the RAID array, so I
>>>>>>>> assume that with it disconnected (and thus unusable) there is no
>>>>>>>> slow performance. I would love instructions on how to scrub/clean
>>>>>>>> the pool. Does it do a data wipe?
>>>>>>>>
>>>>>>>> I was trying to think of a good backup solution. I have over 3 TB of
>>>>>>>> music in FLAC (much of which I've paid for) and was hoping RAIDZ
>>>>>>>> would take away the need for backups. I was thinking of buying a
>>>>>>>> 4 TB drive, moving all my data onto it, and storing the drive
>>>>>>>> offsite or something (in case of burglary, fires, etc.). Surviving a
>>>>>>>> single drive failure seems secure enough for me, so I don't think
>>>>>>>> incremental backups are needed.
>>>>>>>>
>>>>>>>> As for running the latest beta ZFS, I didn't because the FAQ warned
>>>>>>>> me not to. What are the differences? Would I have to format and
>>>>>>>> rebuild the array?
>>>>>>>>
>>>>>>>> The drives I have are four 3 TB Hitachi HDS723030BLE640.
>>>>>>>>
>>>>>>>> I started navigating around my computer again, and the slowdown seems
>>>>>>>> to be when going into folders with over 1000 files (anything more
>>>>>>>> takes 1-3 minutes just to list the files in the directory). Also,
>>>>>>>> when I'm saving images from Firefox (no virtual machine running), it
>>>>>>>> takes a while to navigate the folder structure, and sometimes not
>>>>>>>> all the folders show, but they do in the Finder. So I wonder if this
>>>>>>>> is an issue with programs not getting along with ZFS while the
>>>>>>>> Finder is fine with it.
>>>>>>>>
>>>>>>>> Other things to note: I did disable Spotlight on the drive to make
>>>>>>>> sure that isn't running, but I do have QuickSilver. Originally, I
>>>>>>>> had QuickSilver indexing the drive, but the computer was practically
>>>>>>>> unusable when it did that, so I disabled it.
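>>>>>>>>
>>>>>>>> (For anyone who needs the per-volume switch, I believe it's mdutil,
>>>>>>>> with the mountpoint being wherever your pool mounts:)
>>>>>>>>
>>>>>>>> sudo mdutil -i off /Volumes/murr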
>>>>>>>>
>>>>>>>> I look forward to any advice you guys may have.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> James
>>>>>>>>
>>>>>>>>> On Tue, May 20, 2014 at 6:14 AM, Jason Belec
>>>>>>>>> <jasonbe...@belecmartin.com> wrote:
>>>>>>>>> OK, doesn't look like RAM, processor, etc. are the issue... Let's
>>>>>>>>> work with that in mind for now.
>>>>>>>>>
>>>>>>>>> When the pool and the associated drives are not connected, is the
>>>>>>>>> computer back to your expectation of normal? If so, you have one or
>>>>>>>>> more bad cables, one or more bad drives, or a bit of both; perhaps
>>>>>>>>> a bad or not quite capable power supply (solves 90% of all issues I
>>>>>>>>> come across), or maybe even an issue with the motherboard. Simplest
>>>>>>>>> thing: have you run a scrub on this pool? A clean?
>>>>>>>>>
>>>>>>>>> The type of drives you have is not an issue; the make and known
>>>>>>>>> issues with said drives might be, but you didn't provide that info.
>>>>>>>>>
>>>>>>>>> Throwing out terms like RAID card and Mac OS Journaled will not
>>>>>>>>> help you; you're either ZFS or not. That said, you will not get the
>>>>>>>>> same speed from ZFS as from other RAID setups, but you will get
>>>>>>>>> peace of mind on data integrity. I do hope you are also backing up
>>>>>>>>> data from the pool as well, or eventually you will be in tears like
>>>>>>>>> so many others. A little forum searching under old and new versions
>>>>>>>>> of Mac ZFS will be helpful.
>>>>>>>>>
>>>>>>>>> Since you're getting started, once this is resolved it might be
>>>>>>>>> better to build/run this under the latest (yes, it's in
>>>>>>>>> development) Mac ZFS rather than the old tired version. It is quite
>>>>>>>>> a bit different, modern, and makes many things a lot easier.
>>>>>>>>> (Insert legal disclaimer here) ;)
>>>>>>>>>
>>>>>>>>> Interesting aside:
>>>>>>>>> Dave mentioned an interesting point about wearing out SSDs, and I
>>>>>>>>> must admit I've had two such occurrences, but only with a
>>>>>>>>> hackintosh and only with less than stellar drives. It seems that
>>>>>>>>> here around the mad science lab, Intel SSDs are the most reliable
>>>>>>>>> long term; I have two of their originals still outlasting several
>>>>>>>>> other brands.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Jason Belec
>>>>>>>>> Sent from my iPad
>>>>>>>>>
>>>>>>>>>> On May 19, 2014, at 10:05 AM, James Hoyt <djnati...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks for all the replies guys =D
>>>>>>>>>>
>>>>>>>>>> Sorry for the lack of information. I'm running a Hackintosh with a
>>>>>>>>>> 256 GB SSD, and I sometimes run Windows 8.1 in a virtual machine
>>>>>>>>>> via VMware Fusion. The virtual image file is also located on the
>>>>>>>>>> SSD. The only files I have on my zpool are data files; I don't run
>>>>>>>>>> an OS or VM image from it. I have 12 GB of RAM and a four-core i5
>>>>>>>>>> processor, and I dedicate 6 GB of RAM and 2 cores to the VM. It
>>>>>>>>>> should be noted that I experience the slowdown even when VMware is
>>>>>>>>>> off; the drives just act slowest when the VM is running.
>>>>>>>>>>
>>>>>>>>>> As for how I created the zpool, I followed the Getting Started
>>>>>>>>>> guide with
>>>>>>>>>>
>>>>>>>>>> zpool create murr raidz disk3s2 disk1s2 disk2s2 disk4s2
>>>>>>>>>>
>>>>>>>>>> Please help... I really hope I don't have to recreate it, but it's
>>>>>>>>>> looking that way.
>>>>>>>>>>
>>>>>>>>>> Would it be better if I bought a RAID card and used Mac OS
>>>>>>>>>> Journaled? Cost is an issue... the other issue is that these are
>>>>>>>>>> regular desktop 7200 RPM drives, not NAS drives.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> James
>>>>>>>>>>
>>>>>>>>>>> On Mon, May 19, 2014 at 7:43 AM, Jason Belec
>>>>>>>>>>> <jasonbe...@belecmartin.com> wrote:
>>>>>>>>>>> Dave has posted some good info. Reminds me why I prefer
>>>>>>>>>>> Virtualbox. ;) We do seem to need more detail, though, to really
>>>>>>>>>>> help the OP.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Jason
>>>>>>>>>>> Sent from my iPhone 5S
>>>>>>>>>>>
>>>>>>>>>>>> On May 19, 2014, at 4:00 AM, Dave Cottlehuber <d...@jsonified.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> From: James Hoyt <djnati...@gmail.com>
>>>>>>>>>>>> Reply: zfs-macos@googlegroups.com
>>>>>>>>>>>> Date: 19 May 2014 at 02:27:36
>>>>>>>>>>>> To: zfs-macos@googlegroups.com
>>>>>>>>>>>> Subject: [zfs-macos] RAIDZ1 running slow =(
>>>>>>>>>>>>
>>>>>>>>>>>>> So I set up a MacZFS RAIDZ rather easily and was happy with
>>>>>>>>>>>>> myself. I had four 3 TB internal SATA drives in a zpool giving
>>>>>>>>>>>>> me around 9 TB of space.
>>>>>>>>>>>>>
>>>>>>>>>>>>> jamess-imac:~ sangie$ zpool status murr
>>>>>>>>>>>>>   pool: murr
>>>>>>>>>>>>>  state: ONLINE
>>>>>>>>>>>>>  scrub: none requested
>>>>>>>>>>>>> config:
>>>>>>>>>>>>>
>>>>>>>>>>>>>         NAME        STATE     READ WRITE CKSUM
>>>>>>>>>>>>>         murr        ONLINE       0     0     0
>>>>>>>>>>>>>           raidz1    ONLINE       0     0     0
>>>>>>>>>>>>>             disk3s2 ONLINE       0     0     0
>>>>>>>>>>>>>             disk1s2 ONLINE       0     0     0
>>>>>>>>>>>>>             disk2s2 ONLINE       0     0     0
>>>>>>>>>>>>>             disk4s2 ONLINE       0     0     0
>>>>>>>>>>>>>
>>>>>>>>>>>>> errors: No known data errors
>>>>>>>>>>>>>
>>>>>>>>>>>>> So I filled it up with about 5 TB of data, mainly images and
>>>>>>>>>>>>> FLAC/music files, and everything just drags on it. It takes a
>>>>>>>>>>>>> long time for files to be listed in the Finder, and when I try
>>>>>>>>>>>>> to save an image from Firefox, it will just grind and grind
>>>>>>>>>>>>> while I try to navigate to a folder. I have VMware Fusion set
>>>>>>>>>>>>> up on my SSD (my main Mac drive), and doing anything on my
>>>>>>>>>>>>> zpool from Windows (like using MediaMonkey to organize FLAC
>>>>>>>>>>>>> files on it) uses up 100% of the CPU, freezing my computer
>>>>>>>>>>>>> until the moves are done, even when moving around 30 files.
>>>>>>>>>>>>
>>>>>>>>>>>> It’s not clear from this what your actual physical / virtual
>>>>>>>>>>>> setup is. Are you booting to OSX, and running Windows in a VM? Is 
>>>>>>>>>>>> the entire
>>>>>>>>>>>> VM then living on the raidz pool?
>>>>>>>>>>>>
>>>>>>>>>>>>> Is my zpool okay? What's going on? Is this type of slowness
>>>>>>>>>>>>> normal or do I have a bad drive? How will MacZFS report to me if 
>>>>>>>>>>>>> a drive in
>>>>>>>>>>>>> the array goes bad? I installed SMARTReporter Lite and it shows 
>>>>>>>>>>>>> all drives
>>>>>>>>>>>>> as green. If I have some drives on SATA II and others on SATA III 
>>>>>>>>>>>>> would that
>>>>>>>>>>>>> affect anything?
>>>>>>>>>>>>>
>>>>>>>>>>>>> If you want me to run any tests on it, I will do so gladly. Just
>>>>>>>>>>>>> let me know.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>
>>>>>>>>>>>> I’ve seen precisely this sort of behaviour with vmware fusion
>>>>>>>>>>>> when:
>>>>>>>>>>>>
>>>>>>>>>>>> 1. my SSD was getting worn down (really, I trashed it in 1 year;
>>>>>>>>>>>> it was the default Apple one that came with an early 2011 MBP)
>>>>>>>>>>>> 2. the host OS & VM doesn’t have sufficient memory to run
>>>>>>>>>>>> correctly without swapping
>>>>>>>>>>>> 3. the additional memory within the VM is pulled from a disk swap
>>>>>>>>>>>> file, which is by default in the same disk location as the VM 
>>>>>>>>>>>> itself
>>>>>>>>>>>>
>>>>>>>>>>>> Anything less than 8GB of RAM is likely to be tight, VMs will of
>>>>>>>>>>>> course make this more complicated. Some notes on
>>>>>>>>>>>> http://artykul8.com/2012/06/vmware-performance-enhancing/ may help.
>>>>>>>>>>>>
>>>>>>>>>>>> I found that my SSDs were being worn out with constant running of
>>>>>>>>>>>> VMs; I use them heavily in my work. The solution I found was to 
>>>>>>>>>>>> get max RAM
>>>>>>>>>>>> in my laptop + imac (16 vs 32 respectively), make a zfs based 
>>>>>>>>>>>> ramdisk with
>>>>>>>>>>>> lz4 compression, and copy the entire VM into the ramdisk before 
>>>>>>>>>>>> running it.
>>>>>>>>>>>> The copy phase only takes a few seconds from SSD, and it gives me 
>>>>>>>>>>>> a very
>>>>>>>>>>>> nice way to “roll back” to the previous image when required. I can
>>>>>>>>>>>> comfortably run a 20GiB Windows image inside a 10GiB zpool-backed
>>>>>>>>>>>> ramdisk thanks to compression, even on the 16GiB laptop, while
>>>>>>>>>>>> allocating 2GiB of RAM to the VM itself (10 + 2 for virtualisation
>>>>>>>>>>>> & leave 4 for all the OSX stuff).
>>>>>>>>>>>>
>>>>>>>>>>>> Here are the zsh functions I use for this.
>>>>>>>>>>>>
>>>>>>>>>>>> # create a 1GiB ramdisk (the size is in 512-byte sectors:
>>>>>>>>>>>> # 2097152 * 512B = 1GiB)
>>>>>>>>>>>> ramdisk-1g () {
>>>>>>>>>>>>     ramdisk-create 2097152
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>> # the generic function for the specific one above: detach any old
>>>>>>>>>>>> # ramdisk, attach a new one, and format it as HFS+
>>>>>>>>>>>> ramdisk-create () {
>>>>>>>>>>>>     diskutil eject /Volumes/ramdisk > /dev/null 2>&1
>>>>>>>>>>>>     diskutil erasevolume HFS+ 'ramdisk' `hdiutil attach -nomount ram://$1`
>>>>>>>>>>>>     cd /Volumes/ramdisk
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>> # make a zpool-backed ramdisk instead of the HFS+ one above. Main
>>>>>>>>>>>> # advantage is compression: I get at least 2x more “disk” for RAM
>>>>>>>>>>>> # with this approach. (ram://20971520 = a 10GiB ramdisk)
>>>>>>>>>>>> zdisk () {
>>>>>>>>>>>>     sudo zpool create -O compression=lz4 -fm /zram zram `hdiutil attach -nomount ram://20971520`
>>>>>>>>>>>>     sudo chown -R $USER /zram
>>>>>>>>>>>>     cd /zram
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>> # self explanatory
>>>>>>>>>>>> zdisk-destroy () {
>>>>>>>>>>>>     sudo zpool export -f zram
>>>>>>>>>>>> }
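>>>>>>>>>>>>
>>>>>>>>>>>> A typical session then looks like this (the VM bundle path is
>>>>>>>>>>>> just an example):
>>>>>>>>>>>>
>>>>>>>>>>>> zdisk
>>>>>>>>>>>> cp -R ~/VMs/win8.vmwarevm /zram/
>>>>>>>>>>>> open /zram/win8.vmwarevm   # run the copy from RAM
>>>>>>>>>>>> zdisk-destroy              # done; the SSD copy is the rollback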
>>>>>>>>>>>>
>>>>>>>>>>>> —
>>>>>>>>>>>> Dave Cottlehuber
>>>>>>>>>>>> d...@jsonified.com
>>>>>>>>>>>> Sent from my Couch
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>
>>
>> On Tue, May 20, 2014 at 1:08 PM, Bjoern Kahl <googlelo...@bjoern-kahl.de> 
>> wrote:
>>
>>  Hi James,
>>
>>  pretty much all relevant things have already been said, so I can make
>>  this short (well, that didn't work out).
>>
>>  MacZFS (stable version) comes with man pages.  A simple "man zpool"
>>  should give you access to all the ZFS pool maintenance commands.
>>
>>  I virtually fell off my chair when I read your two statements
>>  "I thought I don't need backups" and "what is scrub, does it do a
>>  data wipe?"
>>
>>  As Jason said, ZFS is a wonderful piece of technology, but it is not
>>  the kind of software one should use by just following some
>>  step-by-step guides; it will sooner or later bite you.  We tried to
>>  make it safe and we tried to make it Mac friendly, but ZFS is
>>  ultimately designed for big data centers, and no interface magic can
>>  really hide that fact.
>>
>>
>>  Nevertheless, to answer your questions:
>>
>>
>>  Scrub reads all data on a pool and verifies the checksums ZFS
>>  maintains for each chunk of data stored in a pool.  Jason gave you
>>  the commands in his other post.
>>
>>  If (big if) you have redundancy in your pool, that is, a mirror or a
>>  raidz, then and only then can it repair damaged data in the background.
>>
>>  It does so by either getting a good copy from the other side(s) of
>>  the mirror, or by combinatorial calculations from the raidz parity
>>  stripes.
>>
>>  In a raidzX you can lose X drives without immediate data loss; in an
>>  N-way mirror you can lose (N-1) drives without immediate data loss.
>>
>>  Note!  The keyword here is *immediate* data loss.  If you buy 3 drives
>>  in a batch and put these drives in a pool (mirror or raidz), then
>>  these drives will experience similar workloads under similar
>>  conditions, which significantly increases the likelihood that they
>>  fail around the same time.
>>
>>  Which means that in a raidz1 you have a significant chance that a
>>  second drive will fail while you are in the process of replacing the
>>  first failed drive.  The moment a second drive fails, your data is gone.
>>
>>  That is why you need backups.
>>
>>  I have personally seen this happen more than once, and I have switched
>>  to always pairing drives from different manufacturers and suppliers
>>  into mirror pairs.  I say "and suppliers" so that both drives have not
>>  experienced the same shuffles and drops to the ground during
>>  transport.
>>
>>  And you need regular(!) scrubs to find out that a drive is getting
>>  weak before it fails completely, so you can replace it in time.
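>>
>>  A scheduled scrub can be as small as one cron line; the binary path
>>  below is an assumption, use wherever your MacZFS install put zpool:
>>
>>  # crontab entry: scrub "murr" every Sunday at 03:00
>>  0 3 * * 0 /usr/sbin/zpool scrub murr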
>>
>>  And one more word on replacing drives:
>>
>>  Once you have a drive failure, chances are you are in panic mode, or
>>  at least in a hurry to fix things, which means prone to making
>>  mistakes.  We are all just humans and do make mistakes.  So you should
>>  exercise a drive replacement in advance.  Replacing a random drive in
>>  a redundant pool using "zpool replace pool drive1 drive2" is supposed
>>  to be a safe operation, so you can simply try it out.  The tricky part
>>  is how to hook up the drives and identify the right drive, not the
>>  actual replacement.
>>
>>  Using "zpool replace" instead of the sometimes suggested "zpool
>>  attach" / "zpool detach" saves you from the all to common mistake to
>>  say "zpool add" instead of "zpool attach", a mistake that would screw
>>  up your pool layout and that can only be fixed by destroying and
>>  recreating the pool.
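>>
>>  Such an exercise could look like the following; the device names are
>>  made up, so check "zpool status" for your own:
>>
>>  zpool status murr                   # identify the drive to swap out
>>  zpool replace murr disk2s2 disk5s2  # resilver onto the new drive
>>  zpool status murr                   # watch until the resilver is done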
>>
>>
>>  Regarding the slowness:
>>
>>  Using 4k drives in a pool configured for 512b drives (the standard
>>  type since hard drives were invented) will kill performance.
>>
>>  Using 512b drives in a pool configured for 4k drives does no harm,
>>  except for wasting a bit of space if you have many small files.
>>
>>  So I suggest destroying and recreating the pool if your drives are 4k
>>  (also called "Advanced Format").  To configure a pool for 4k, you add
>>  "-o ashift=12" to the "zpool create" command.  "zpool get all" should
>>  tell you the current ashift value, which is 9 for 512b and 12 for 4k.
>>  :-) Exercise for the reader: which ashift value to use for old-style
>>  16k flash memory?  (Not that it would last long, but that's not the
>>  point here.)
>>
>>
>>  Regarding slow, long directories:
>>
>>  Another issue our colleagues working on the new MacZFS found out:
>>  the Mac OSX kernel has a problem with caching really long directories,
>>  because it can run out of some internal file resources (the famous
>>  vnodes).  This hits ZFS especially hard due to the way it handles its
>>  own short-term locking and caching.
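>>
>>  (You can see where you stand via sysctl; raising the ceiling, as in
>>  the second line, is an experiment, not a guaranteed fix:)
>>
>>  sysctl kern.maxvnodes
>>  sudo sysctl -w kern.maxvnodes=262144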
>>
>>
>>  Best regards
>>
>>     Björn
>>