Thank you for your thoughts on max_sectors_kb, Ulrich.
They gave me some insight into how the whole thing fits together.

Thank you for your advice, Mike.
I will tune the parameters you mentioned below.

I will share my results if I make further progress with performance tuning.


On Friday, September 21, 2018 at 1:43:14 AM UTC+8, Mike Christie wrote:
>
> On 09/12/2018 09:26 PM, 3kbo...@gmail.com wrote: 
> > Thank you for your reply, Mike. 
> > Now my iSCSI disk performance is around 300MB/s for 4M sequential 
> > writes (TCMU+LIO). 
> > It increased from 20MB/s to 300MB/s after I changed max_data_area_mb 
> > from 8 to 256 and hw_max_sectors from 128 to 8192. 
> > For my cluster, after a lot of testing I found that I should keep 
> > "max_data_area_mb > 128 && hw_max_sectors >= 4096" in order to get 
> > good performance. 
> > Can my settings cause any side effects? 
>
> It depends on the kernel. For the RHEL/CentOS kernel you are using, the 
> kernel will preallocate max_data_area_mb of memory for each device. For 
> upstream, we no longer preallocate, but once the memory is allocated we 
> do not free it unless global_max_data_area_mb is hit or the device is 
> removed. 
>
> With a high hw_max_sectors, latency will increase due to sending really 
> large commands, so it depends on your workload and what you need. 
>
> We used to set hw_max_sectors to the rbd object size (4MB by default), 
> but in our testing we would see throughput go down around 512k - 1MB. 
>
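> As an illustration only, these can be written through the 
> target_core_user control file in configfs. A minimal sketch, assuming a 
> TCMU backstore named disk1 under HBA user_0 (both names are 
> placeholders, and which parameters the control file accepts varies by 
> kernel version): 
>
>     # set both tunables before the device is enabled 
>     echo "max_data_area_mb=256,hw_max_sectors=8192" \ 
>         > /sys/kernel/config/target/core/user_0/disk1/control 
>     # check the value the device is actually using 
>     cat /sys/kernel/config/target/core/user_0/disk1/attrib/hw_max_sectors 
>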
> > Are there any other parameters that can improve performance 
> > noticeably? 
>
> The normal networking ones, like using jumbo frames, 
> net.core.*/net.ipv4.*, etc. Check your NIC's documentation for the best 
> settings. 
>
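> For example, a common starting point looks something like the sketch 
> below (the interface name eth0 and the buffer sizes are placeholders, 
> not recommendations for your hardware): 
>
>     # jumbo frames; the switch and the initiator must use the same MTU 
>     ip link set eth0 mtu 9000 
>     # raise the socket buffer ceilings for high-bandwidth links 
>     sysctl -w net.core.rmem_max=16777216 
>     sysctl -w net.core.wmem_max=16777216 
>     sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216" 
>     sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216" 
>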
> There are some iSCSI ones like the cmdsn/cmds_max settings I mentioned, 
> and also the segment-related ones like MaxRecvDataSegmentLength, 
> MaxXmitDataSegmentLength, MaxBurstLength, and FirstBurstLength; also 
> make sure ImmediateData is on. 
>
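> On the initiator side these live in /etc/iscsi/iscsid.conf. A sketch of 
> the relevant keys (the values shown are only illustrative, not tuned 
> recommendations): 
>
>     node.session.iscsi.ImmediateData = Yes 
>     node.session.iscsi.FirstBurstLength = 262144 
>     node.session.iscsi.MaxBurstLength = 16776192 
>     node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 
>     node.conn[0].iscsi.MaxXmitDataSegmentLength = 262144 
>
> Remember to log the session out and back in so the new values are 
> renegotiated. 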
>
> > Why are the default values of max_data_area_mb and hw_max_sectors so 
> > small, giving bad performance? 
>
> I do not know. It was just what the original author had used initially. 
>
> > Could you say something about this? 
> > At least with max_data_area_mb > 128 && hw_max_sectors >= 4096 I can 
> > get better performance, which seems acceptable. 
> > If my settings can help other users, I will be happy. 
> > 
> > On Wednesday, September 12, 2018 at 12:39:16 AM UTC+8, Mike Christie wrote: 
> > 
> >     On 09/11/2018 11:30 AM, Mike Christie wrote: 
> >     > Hey, 
> >     > 
> >     > Cc mchr...@redhat.com, or I will not see these messages until I 
> >     > check the list maybe once a week. 
> >     > 
> >     > On 09/05/2018 10:36 PM, 3kbo...@gmail.com wrote: 
> >     >>         What lio fabric driver are you using? iSCSI? What 
> >     >>         kernel version and what version of tcmu-runner? 
> >     >> 
> >     >>     lio fabric driver:      iscsi 
> >     >>     iscsid version:         2.0-873 
> >     >>     OS version:             CentOS Linux release 7.5.1804 (Core) 
> >     >>     kernel version:         3.10.0-862.el7.x86_64 
> >     >>     tcmu-runner version:    1.4.0-rc1 
> >     >> 
> > 
> >     There is also a perf bug in that initiator if 
> >     node.session.cmds_max is greater than the LIO default_cmdsn_depth 
> >     and your IO test tries to send cmds > node.session.cmds_max. 
> > 
> > I already knew about this bug, because I had googled a lot. 
> > Fixing it increased throughput from 320MB/s to 340MB/s (4M seq write), 
> > which seems like a stable improvement. 
> > 
> > Settings Before: 320MB/s 
> >  node.session.cmds_max = 2048 
> >  default_cmdsn_depth = 64 
> > 
> > Settings After: 340MB/s 
> >  node.session.cmds_max = 64 
> >  default_cmdsn_depth = 64 
> > 
> >     So set the node.session.cmds_max and default_cmdsn_depth to the 
> >     same value. You can set the default_cmdsn_depth in saveconfig.json, 
> >     and set cmds_max in the iscsiadm node db (after you set it make 
> >     sure you logout and login the session again). 
> > 
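> > A sketch of that sequence, assuming a target named 
> > iqn.2018-09.example:tgt1 at portal 192.168.1.10 (both placeholders; 
> > default_cmdsn_depth can also be set with targetcli instead of editing 
> > saveconfig.json by hand): 
> > 
> >     # target side: match default_cmdsn_depth to the initiator depth 
> >     targetcli /iscsi/iqn.2018-09.example:tgt1/tpg1 \ 
> >         set attribute default_cmdsn_depth=64 
> >     # initiator side: update the node db, then restart the session 
> >     iscsiadm -m node -T iqn.2018-09.example:tgt1 -p 192.168.1.10 \ 
> >         -o update -n node.session.cmds_max -v 64 
> >     iscsiadm -m node -T iqn.2018-09.example:tgt1 -p 192.168.1.10 --logout 
> >     iscsiadm -m node -T iqn.2018-09.example:tgt1 -p 192.168.1.10 --login 
> > 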
> > Now I have set node.session.cmds_max and default_cmdsn_depth both 
> > to 64. 
> > 
> > Thank you very much! 
> > 
