Thanks for the advice, Ric. Do you understand the causes of the ramdisk's
slow performance, or is it simply performing as designed?



On Feb 24, 2017 6:39 PM, "Richard Elling" <[email protected]>
wrote:

> This is what we call an "/etc/system virus." Unless you understand each
> of the tunables and what they impact, do yourself a favor and avoid
> putting them on your system. In some cases they are documented in the
> Solaris kernel tunables guide; in other cases they are documented only
> in source code.
>  — richard
>
>
> On Feb 23, 2017, at 10:47 AM, Humberto Ramirez <[email protected]> wrote:
>
> Did you re-run the dd tests after tweaking those parameters?
>
>
>
> On Feb 22, 2017 11:52 AM, "Will Beazley" <[email protected]>
> wrote:
>
> A thousand thanks!
>
>
> On 2/21/17 23:12, Artem Penner wrote:
>
> Read about these kernel parameters:
> zfs:zfs_dirty_data_max
> zfs:zfs_txg_timeout
> zfs:zfs_dirty_data_sync
>
> They limit your I/O.
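>
> For example, you can read the live values on a running system (a minimal
> sketch, assuming an illumos/SmartOS kernel with mdb(1) available; /D
> prints a 32-bit int, /E a 64-bit value):
>
> # current txg sync interval, in seconds (32-bit int)
> echo "zfs_txg_timeout/D" | mdb -k
> # dirty-data caps, in bytes (64-bit values)
> echo "zfs_dirty_data_max/E" | mdb -k
> echo "zfs_dirty_data_sync/E" | mdb -k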
>
> Example /etc/system:
> set ibft_noprobe=1
> set noexec_user_stack=1
> set noexec_user_stack_log=1
> set idle_cpu_no_deep_c=1
> set idle_cpu_prefer_mwait = 0
> set hires_tick=1
> set ip:ip_squeue_fanout=1
> set pcplusmp:apic_panic_on_nmi=1
> set apix:apic_panic_on_nmi=1
> set dump_plat_mincpu=0
> set dump_bzip2_level=1
> set dump_metrics_on=1
> set sata:sata_auto_online=1
> set sd:sd_max_throttle = 128
> set sd:sd_io_time=10
> set rlim_fd_max = 131072
> set rlim_fd_cur = 65536
> set ndd:tcp_wscale_always = 1
> set ndd:tstamp_if_wscale = 1
> set ndd:tcp_max_buf = 166777216
> set nfs:nfs_allow_preepoch_time = 1
> set nfs:nfs3_max_threads = 256
> set nfs:nfs4_max_threads = 256
> set nfs:nfs3_nra = 32
> set nfs:nfs4_nra = 32
> set nfs:nfs3_bsize = 1048576
> set nfs:nfs4_bsize = 1048576
> set nfs3:max_transfer_size = 1048576
> set nfs4:max_transfer_size = 1048576
> set nfs:nfs4_async_clusters = 16
> set rpcmod:svc_default_stksize=0x6000
> set rpcmod:cotsmaxdupreqs = 4096
> set rpcmod:maxdupreqs = 4096
> set rpcmod:clnt_max_conns = 8
> set maxphys=1048576
> set zfs:zfs_dirty_data_max = 0x600000000
> set zfs:zfs_txg_timeout = 0xc
> set zfs:zfs_dirty_data_sync = 0x400000000
> set zfs:zfs_arc_max = 0x6400000000
> set zfs:zfs_arc_shrink_shift=12
> set zfs:l2arc_write_max = 0x6400000
> set zfs:l2arc_write_boost = 0xC800000
> set zfs:l2arc_headroom = 12
> set zfs:l2arc_norw=0
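>
> Keep in mind that /etc/system settings only take effect after a reboot.
> To experiment on the live kernel first, one approach (a sketch, assuming
> mdb -kw write access is permitted; /W writes a 32-bit value, /Z a 64-bit
> value, and the 0t prefix marks decimal) is:
>
> # raise the txg sync interval to 12 seconds on the running kernel
> echo "zfs_txg_timeout/W 0t12" | mdb -kw
> # raise the dirty data cap to 24 GiB (0x600000000 bytes)
> echo "zfs_dirty_data_max/Z 0x600000000" | mdb -kw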
>
>
> On Feb 22, 2017 at 7:28, "Will Beazley" <[email protected]> wrote:
>
>> Christopher, et al.,
>> 
>> I am trying to get my head around why the performance of the ramdisk is
>> so much poorer than that of the HDD pool + SSD slog.
>> 
>> /usbkey/test_dir]#  time dd if=/dev/zero of=/tmpfs/testfile bs=64k
>> count=32768;time dd if=/dev/zero of=/usbkey/test_dir/testfile bs=64k
>> count=32768
>> 32768+0 records in
>> 32768+0 records out
>> 2147483648 bytes transferred in 2.279053 secs (942270169 bytes/sec)
>> 
>> real    0m2.312s
>> user    0m0.021s
>> sys     0m1.062s
>> 32768+0 records in
>> 32768+0 records out
>> 2147483648 bytes transferred in 0.743729 secs (2887453957 bytes/sec)
>> 
>> real    0m0.760s
>> user    0m0.016s
>> sys     0m0.652s
>> 
>> I created the ramdisk thus:
>> ramdiskadm -a rd1 3072m
>> ...
>> zfs create -o mountpoint=/tmpfs -o sync=disabled ramdsk1/rd1
>> 
>> I've run it many times, and although the results vary, the tale is
>> always the same.
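>>
>> One way to narrow it down (a sketch, assuming the device nodes that
>> ramdiskadm creates, and with an illustrative output filename) is to
>> time the raw ramdisk with no ZFS in the path, and to repeat the ZFS
>> test with non-compressible data so compression can't flatter either
>> pool:
>>
>> # raw ramdisk, bypassing ZFS entirely
>> time dd if=/dev/zero of=/dev/rramdisk/rd1 bs=64k count=32768
>> # non-zero data through the ZFS mount; /dev/urandom may itself be
>> # the bottleneck here, so treat this as a lower bound
>> time dd if=/dev/urandom of=/tmpfs/testfile.rand bs=64k count=32768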
>> 
>> Thank You,
>> Will
>> 


