bonnie++ should not be considered until after you read 
https://blogs.oracle.com/roch/decoding-bonnie

  -- richard



> On Jan 24, 2018, at 5:33 PM, Sam Nicholson <[email protected]> wrote:
> 
> Well, I'm wrong.
> 
> I got to thinking about this and decided to test it.  See 
> github.com/SamCN2/zfs-stats
> 
> Turns out that having 2 cache devices hurts.  Mostly not by much, but 
> sometimes by a lot.
> It never helped in any of the tests I ran.
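> 
> Backing out the second cache device is non-destructive, at least; on the pool 
> from my earlier mail below, it should just be something like:
> 
> # zpool remove zones c0t3d0s5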
> 
> Mind, I'm on a small server.  I think, like Jim, I want to get the most out 
> of what I have.
> 
> As expected, mirrored ZILs cost a percent or two.  It's an extra write.  Even 
> with parallelism, it costs.
> 
> I also explored mirrors, stripes, raidz, raidz2, and mirrored stripes.  I had 
> always thought that mirrored
> stripes were the best.  Raidz2 is pretty darned good, sometimes better.
> 
> Caching helps.  I wouldn't be without it on any read-mostly workload, which 
> is what I have: source repos, data lakes...
> Caching hurts writes sometimes, though, and I won't be using it in the future 
> on transaction pools.
> 
> To me, the only thing left to decide is whether to mirror ZILs.  I'll 
> probably keep mirrored logs for transaction pools.
> For data repos, I'll just have one.
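> 
> Going from a mirrored log back to a single one is just a detach, too; on that 
> same pool, something like:
> 
> # zpool detach zones c0t3d0s4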
> 
> Cheers!
> -sam
> 
>> On Fri, Jan 19, 2018 at 10:49 AM, Sam Nicholson <[email protected]> wrote:
>> I'll throw my $.02 behind you, Jim.  Perhaps splitting an SSD is not the best 
>> way to get the most performance out of the SSD,
>> but even a part of an SSD is a huge win.  In fact, separate logs have always 
>> been a win, even back in the pre-ZFS days.  I
>> recall some old Bonnie results from the late '90s showing that separating 
>> logs onto partitions *on the same drive* is a good deal.
>> Relative to the bare drive performance, that is.  Not relative to optimal 
>> combinations of fast 2.5-inch 15K SAS drives for logs.
>> 
>> My standard is to use 2 SSDs and 2 HDDs.  Mirror the HDDs for reliability, 
>> and split each SSD into a small slice for a mirrored log
>> plus a larger slice for a spanned cache.  Remember, there is no need to 
>> mirror the cache.  Errors there are treated just like misses;
>> the data is simply read from the main pool.
>> 
>> It looks like this:
>> 
>> # zpool create zones mirror c0t0d0 c0t1d0 log mirror c0t2d0s4 c0t3d0s4 \
>>     cache c0t2d0s5 c0t3d0s5
>> 
>> This assumes you've already used format(1m) to partition c0t2 and c0t3 with 
>> Solaris labels, with partitions 4 and 5 as the log and cache slices, 
>> respectively.  The size of slice 4 depends on your write speed: max write 
>> rate * 5 seconds (roughly the transaction group commit interval), rounded 
>> up.  I use 8 GB logs for my 10G-connected NFS server: 10 Gbits/s * 5 s = 
>> 50 Gbits, / 8 bits per byte = 6.25 GBytes, which I round up to 8 GBytes.  
>> The rest is cache.
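>> 
>> To sanity-check that arithmetic from a shell (my numbers; swap in your own 
>> link speed):
>> 
>> # echo '10 * 5 / 8' | bc -l    # Gbits/s * seconds / bits-per-byte -> 6.25
>> 
>> Round that up and you get the 8 GBytes.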
>> 
>> It's *so* much better than just the mirrored HDDs.  But I do see contention 
>> for the SSD under heavy load.  So what!?  
>> My HDDs would have melted by now.
>> 
>> The only caveat is that TRIM support may be affected.  I don't know; I 
>> haven't looked into the behavior.  That's a good question for another thread.
>> 
>> As to parted not being there: I'd gently advise using the native tools.  
>> format(1m) has fdisk, partition, and label built in.
>> I'm going to be rebuilding a couple of servers later today; I'll capture 
>> the format session I use and send it to you, if you like.
>> I use parted(8) on Linux.  I don't like it, but I use it.  :)
>> 
>> Cheers!
>> -sam
>> 
>>> On Fri, Jan 19, 2018 at 10:07 AM, Jim Wiggs <[email protected]> wrote:
>>> I've been told by quite a few folks that splitting the SSD between log and 
>>> cache is "a bad idea" or "suboptimal," but frankly, I don't buy it.  It may 
>>> just be my personal experience, but for my use cases, I've been operating 
>>> with limited resources and haven't been able to justify the expense of 
>>> having three or more SSDs to do this.  Since I've never needed a ZIL with 
>>> more than 2 GB of space, and the smallest SSDs you can buy are more than 10x 
>>> that size, dedicating a mirrored pair of SSDs to the ZIL would be a huge 
>>> waste of space.  I started splitting them about 4 years ago, when SSDs were 
>>> much more expensive and I couldn't justify that waste: I'd partition a 
>>> 1-2 GB slice on each SSD and mirror the slices for my ZIL, then use the 
>>> remaining space on both SSDs, un-mirrored, for cache.  Again, in my 
>>> experience, this has always resulted in better general performance than 
>>> adding only a log or only a cache.
>>> 
>>> YMMV.
>>> 
>>>> On Fri, Jan 19, 2018 at 1:07 AM, Ian Collins <[email protected]> wrote:
>>>>> On 01/19/2018 08:45 PM, Jim Wiggs wrote:
>>>>> So all is right with the world again.  But I'm still left with one 
>>>>> question: why on Earth is *parted* not included as part of the SmartOS 
>>>>> hypervisor image?  The old Solaris format command is spectacularly 
>>>>> user-unfriendly and always has been.  I can't imagine that parted requires 
>>>>> so much additional space that it couldn't be included.  Was there any 
>>>>> particular rationale for not putting a better, more user-friendly 
>>>>> partitioning tool into the OS that runs at the top level and manages the 
>>>>> hardware?
>>>>> 
>>>> 
>>>> As Jussi said, partitioning disks on a ZFS-only system is uncommon.
>>>> 
>>>> Splitting an SSD between log and cache generally isn't a very performant 
>>>> option: devices that make good caches may not make good logs.  With 
>>>> SmartOS running KVM, there is a big gain from log devices, but not much 
>>>> from a cache.
>>>> 
>>>> Cheers,
>>>> Ian.
>>>> 
>>> 
>> 
> 


