Humberto,

If you wanted to recreate my earlier tests, running them both before and after those settings are in place might yield something helpful.

Thank You,
Will

On 2/24/17 08:11, Humberto Ramirez wrote:
Will, I have a server where we can run some tests; let me know if you're interested.



On Feb 23, 2017 2:49 PM, "Will Beazley" <[email protected]> wrote:

    Humberto,

    I.
    I have not yet tested it.

    It is on a shared system, and since it seems I must reboot to
    reload '/etc/system', I should wait for a reboot window (and of
    course proceed with due care when messing about with kernel
    tunables and their ilk).

    I instead went through the relevant sections of SI, SP&T, and SP
    and then searched the Intertubes high and low.

    Here is a good one:
    http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning/ (nice links)

    What I had trouble working out specifically is how hidden those
    variables are in the SmartOS/Solaris docs (and systems), even
    though they figure greatly in the relevant (if antiquated)
    OpenSolaris pages and in the FreeBSD, Linux, and OpenZFS material.
    What I mean is that typing 'sysdef' doesn't list all the possible
    tunables, nor does the 'zfs get all' command show them.

    SP table 8-12 defines zfs_txg_timeout thus: "time-out for
    TXGs (seconds): sets the lowest rate they can occur", and later
    refers one to the "ZFS Evil Tuning Guide".

    From
    http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/fs/zfs/dsl_pool.c
    we find the following:

    The tunable zfs_dirty_data_max determines the dirty space limit.
    .....
    The zfs_dirty_data_sync tunable dictates the threshold at which we
    ensure that there is a txg syncing
    .....
    /*
     * zfs_dirty_data_max will be set to zfs_dirty_data_max_percent% of
     * all memory, capped at zfs_dirty_data_max_max.  It can also be
     * overridden in /etc/system.
     */
    .....
    uint64_t zfs_dirty_data_max;
    uint64_t zfs_dirty_data_max_max = 4ULL * 1024 * 1024 * 1024;
    .....
    uint64_t zfs_dirty_data_sync = 64 * 1024 * 1024;
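A quick sketch of how those defaults combine (the cap and sync threshold come from the snippet above; the 10% figure is the illumos default for zfs_dirty_data_max_percent, and the machine's memory size here is a made-up assumption):

```python
# Sketch of how dsl_pool.c derives the default dirty-data limit.
GiB = 1024 ** 3
MiB = 1024 ** 2

zfs_dirty_data_max_percent = 10        # illumos default: 10% of all memory
zfs_dirty_data_max_max = 4 * GiB       # cap, from the snippet above
zfs_dirty_data_sync = 64 * MiB         # txg-sync threshold, from the snippet

physmem = 64 * GiB                     # hypothetical machine

# zfs_dirty_data_max: a percentage of memory, capped at the max
zfs_dirty_data_max = min(physmem * zfs_dirty_data_max_percent // 100,
                         zfs_dirty_data_max_max)
print(zfs_dirty_data_max // GiB)       # 4  (6.4 GiB, capped to 4 GiB)
print(zfs_dirty_data_sync // MiB)      # 64
```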

    II.
    The other consideration is that I am ultimately looking for
    speed-ups on/with/from the ramdisk, not to slow down the HDD+SSD
    zpool.

    What I think is going on, from what I have read so far, is that
    the HDD+SSD pool is so quick because it hasn't really written
    anything to disk yet; it is waiting for a timeout to be reached or
    for a threshold of dirty data to be crossed.
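To put a number on that hypothesis: the dd test in this thread writes 2 GiB, which fits entirely under even the capped 4 GiB default for zfs_dirty_data_max, so in principle the whole run could complete before the pool disks are touched (a back-of-the-envelope check, not a measurement):

```python
GiB = 1024 ** 3

# The dd run from the original test: 32768 blocks of 64 KiB.
test_bytes = 32768 * 64 * 1024
print(test_bytes)                # 2147483648, i.e. exactly 2 GiB

# Default cap on zfs_dirty_data_max is 4 GiB: the whole test fits in
# dirty data, so ZFS can acknowledge it long before it reaches disk.
print(test_bytes <= 4 * GiB)     # True
```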

    It could be that the ramdisk doesn't participate in those
    performance-enhancing schemes, yet being brought online by ZFS
    still has its overheads; thus it gets all the slowdowns but none
    of the compensating speed-ups.

    I am not sure if I am going to abandon this path and just be happy
    with the overall ZFS speed-ups.

    If anyone has any insights to share I am a willing ear.

    Thank You,
    Will


    On 2/23/17 12:47, Humberto Ramirez wrote:
    Did you re-run the dd tests after tweaking those parameters?



    On Feb 22, 2017 11:52 AM, "Will Beazley" <[email protected]> wrote:

        Mille Grazie!


        On 2/21/17 23:12, Artem Penner wrote:
        Read about these kernel parameters:
        zfs:zfs_dirty_data_max
        zfs:zfs_txg_timeout
        zfs:zfs_dirty_data_sync

They limit your I/O.

Example of /etc/system:
set ibft_noprobe=1
set noexec_user_stack=1
set noexec_user_stack_log=1
set idle_cpu_no_deep_c=1
set idle_cpu_prefer_mwait = 0
set hires_tick=1
set ip:ip_squeue_fanout=1
set pcplusmp:apic_panic_on_nmi=1
set apix:apic_panic_on_nmi=1
set dump_plat_mincpu=0
set dump_bzip2_level=1
set dump_metrics_on=1
set sata:sata_auto_online=1
set sd:sd_max_throttle = 128
set sd:sd_io_time=10
set rlim_fd_max = 131072
set rlim_fd_cur = 65536
set ndd:tcp_wscale_always = 1
set ndd:tstamp_if_wscale = 1
set ndd:tcp_max_buf = 166777216
set nfs:nfs_allow_preepoch_time = 1
set nfs:nfs3_max_threads = 256
set nfs:nfs4_max_threads = 256
set nfs:nfs3_nra = 32
set nfs:nfs4_nra = 32
set nfs:nfs3_bsize = 1048576
set nfs:nfs4_bsize = 1048576
set nfs3:max_transfer_size = 1048576
set nfs4:max_transfer_size = 1048576
set nfs:nfs4_async_clusters = 16
set rpcmod:svc_default_stksize=0x6000
set rpcmod:cotsmaxdupreqs = 4096
set rpcmod:maxdupreqs = 4096
set rpcmod:clnt_max_conns = 8
set maxphys=1048576
set zfs:zfs_dirty_data_max = 0x600000000
set zfs:zfs_txg_timeout = 0xc
set zfs:zfs_dirty_data_sync = 0x400000000
set zfs:zfs_arc_max = 0x6400000000
set zfs:zfs_arc_shrink_shift=12
set zfs:l2arc_write_max = 0x6400000
set zfs:l2arc_write_boost = 0xC800000
set zfs:l2arc_headroom = 12
set zfs:l2arc_norw=0
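For what it's worth, the hex values in the ZFS lines of that example decode as follows (just unit conversion; whether they suit any given box is another question):

```python
GiB = 1024 ** 3
MiB = 1024 ** 2

# Hex values from the /etc/system example above, in familiar units.
assert 0x600000000 == 24 * GiB    # zfs_dirty_data_max:  24 GiB
assert 0xc == 12                  # zfs_txg_timeout:     12 seconds
assert 0x400000000 == 16 * GiB    # zfs_dirty_data_sync: 16 GiB
assert 0x6400000000 == 400 * GiB  # zfs_arc_max:         400 GiB
assert 0x6400000 == 100 * MiB     # l2arc_write_max:     100 MiB
assert 0xC800000 == 200 * MiB     # l2arc_write_boost:   200 MiB
print("all conversions check out")
```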

On Feb 22, 2017 at 7:28, "Will Beazley" <[email protected]> wrote:

Christopher, et al.,

I am trying to get my head around why the performance of the
ramdisk is so much poorer than that of the HDD pool + SSD slog.

/usbkey/test_dir]# time dd if=/dev/zero of=/tmpfs/testfile bs=64k count=32768; time dd if=/dev/zero of=/usbkey/test_dir/testfile bs=64k count=32768
32768+0 records in
32768+0 records out
2147483648 bytes transferred in 2.279053 secs (942270169 bytes/sec)

real    0m2.312s
user    0m0.021s
sys     0m1.062s
32768+0 records in
32768+0 records out
2147483648 bytes transferred in 0.743729 secs (2887453957 bytes/sec)

real    0m0.760s
user    0m0.016s
sys     0m0.652s
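Reading those numbers back (both runs move the same 2 GiB; only the elapsed time differs):

```python
# Recomputing the throughput dd reported for each run.
total_bytes = 32768 * 64 * 1024      # 2147483648 bytes, i.e. 2 GiB

tmpfs_secs = 2.279053                # ZFS-on-ramdisk run
pool_secs = 0.743729                 # HDD pool + SSD slog run

tmpfs_rate = total_bytes / tmpfs_secs   # roughly 0.94 GB/s
pool_rate = total_bytes / pool_secs     # roughly 2.89 GB/s

# The pool "wins" by roughly 3x, which matches the reported rates.
print(f"{pool_rate / tmpfs_rate:.2f}x")
```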

I created the ramdisk thus:
ramdiskadm -a rd1 3072m
...
zfs create -o mountpoint=/tmpfs -o sync=disabled ramdsk1/rd1

I've run it many times, and although the results vary, the
tale is always the same.

Thank You,
Will








-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
