Errors when logging out of a deleted target

2015-07-23 Thread LSZhu
Hi experts,

When I try to log out of a target that has already been deleted on the 
target server side, I see an error:

Logging out of session [sid: 1, target: 
iqn.2015-07.com.example:test:844a36e0-e921-4988-9538-a32112aa40d4, portal: 
147.2.207.200,3260]
iscsiadm: Could not logout of [sid: 1, target: 
iqn.2015-07.com.example:test:844a36e0-e921-4988-9538-a32112aa40d4, portal: 
147.2.207.200,3260].
iscsiadm: initiator reported error (9 - internal error)
iscsiadm: Could not logout of all requested sessions

I think the reason may be that since the target is deleted, the initiator 
cannot get a logout response PDU, hence the error. We would hit the same 
situation when the target is still alive but the network is down.
I have a proposal: can we set a timeout value, for example 10 seconds, and 
once it expires, log out anyway and print a message like The target is 
not reachable, logged out. I think this operation is not harmful to the 
target side; a rough userspace sketch is below.
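As an illustration only (my own sketch, not existing iscsiadm behavior; the 
iqn and portal come from the log above, and coreutils timeout provides the 
10-second bound):

# Bound the logout attempt to 10 seconds; on expiry, print the proposed
# message. Note this only stops iscsiadm from blocking; the kernel
# session itself may still need separate cleanup.
if ! timeout 10 iscsiadm -m node -u \
    -T iqn.2015-07.com.example:test:844a36e0-e921-4988-9538-a32112aa40d4 \
    -p 147.2.207.200:3260; then
    echo "The target is not reachable, logged out."
fi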
Do you experts think that makes sense?


Thanks
BR
Zhu Lingshan



Re: Errors when logging out of a deleted target

2015-07-23 Thread LSZhu
Hi experts,

I have another concern: logging out of a deleted target seems fine, but if 
we log out of a network-blocked target anyway once the logout PDU times 
out, will the target side keep the initiator's session information? That 
might introduce a conflict on the next login.

Thanks,
BR
Zhu Lingshan





Re: Is it a valid bug: errors when discovering or deleting targets after manually creating a default file under send_targets or nodes

2015-07-19 Thread LSZhu
Hi Mike,

Please let me know whether you can find the patch and whether it fixes the 
issue. If not, I am ready and happy to fix it myself.

Thanks,
BR
Zhu Lingshan

On Friday, July 17, 2015 at 11:32:24 PM UTC+8, Mike Christie wrote:

 On 07/17/2015 04:00 AM, LSZhu wrote: 

  For now, the only way we know of that can 100% trigger the bug is creating 
  these files manually, but some people have seen the bugs appear randomly. 
  Are they valid bugs? I think it would be nice to harden the code. 
 I do not think it is a valid bug to have an empty config/db file. It 
 should never happen. Feel free to send a patch to make our handling more 
 robust, but I do not think it is that important. 

 The bug would be whatever is causing the empty config/db file. Now that I 
 think of it, I think Chris Leech figured this out and sent a patch. I 
 must have forgotten to merge it. I will try to dig it up. 






Is it a valid bug: errors when discovering or deleting targets after manually creating a default file under send_targets or nodes

2015-07-17 Thread LSZhu
Hi experts,

Our team found two bugs. Please help me check whether they are valid; if 
they are, I am happy to fix them:

------------------------------------------------------------

(A) If we touch an empty default file under send_targets, discovery leaves 
it empty, and subsequent delete and discovery operations fail. Here is the 
detailed information and how to reproduce it.

(1) We have a target on 147.2.207.131; target name: iqn.2015-07.com.example

(2) Create a directory whose name matches the target record (here for 
iqn.2015-07.com.example):
linux-askc:/etc/iscsi # mkdir -p 
/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default

(3) Create an empty default file in the directory created above.
linux-askc:/etc/iscsi # touch  
/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default/default

(4) Run the discovery command:
linux-askc:/etc/iscsi # iscsiadm -m discovery -t st -p 147.2.207.131
147.2.207.131:3260,1 iqn.2015-07.com.example (we got the target)
(5) Check the default file under send_targets; it is still empty:
linux-askc:/etc/iscsi # cat 
/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default/default

(6) If we then try to delete the node records, we see:

linux-askc:/etc/iscsi/nodes # iscsiadm -m node -o delete 
iscsiadm: Could not remove link 
/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default:
 
Is a directory

iscsiadm: Could not execute operation on all records: encountered iSCSI 
database failure

(7) Run the discovery command again; we get an error:
linux-askc:/etc/iscsi # iscsiadm -m discovery -t st -p 147.2.207.131
iscsiadm: Could not remove link 
/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default:
 
Is a directory

iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 
147.2.207.131,3260,1 iqn.2015-07.com.example]
147.2.207.131:3260,1 iqn.2015-07.com.example
linux-askc:/etc/iscsi # 
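One way to recover from this state (my own sketch, using the exact paths 
from the transcript above) is to remove the hand-made record directory and 
rediscover:

# Remove the manually created record directory, then let iscsiadm
# rebuild the discovery records from scratch.
rm -rf '/etc/iscsi/send_targets/147.2.207.131,3260/iqn.2015-07.com.example,147.2.207.131,3260,1,default'
iscsiadm -m discovery -t st -p 147.2.207.131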

------------------------------------------------------------

(B) If we touch an empty default file under nodes, the discovery command 
reports an error. Here is how to reproduce it:
(1) We have a target on 147.2.207.131; target name: iqn.2015-07.com.example

(2) Create a directory whose name matches the target record (here for 
iqn.2015-07.com.example):
mkdir -p /etc/iscsi/nodes/iqn.2015-07.com.example/147.2.207.131,3260,1

(3) Create an empty default file in the directory created above.
linux-askc:/etc/iscsi # touch 
/etc/iscsi/nodes/iqn.2015-07.com.example/147.2.207.131,3260,1/default

(4) Run the discovery command; we see an error:
linux-askc:/etc/iscsi # iscsiadm -m discovery -t st -p 147.2.207.131
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: 
No such file or directory

iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 
147.2.207.131,3260,1 iqn.2015-07.com.example]
147.2.207.131:3260,1 iqn.2015-07.com.example
linux-askc:/etc/iscsi # 
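A sanity check covering both scenarios might look like this (my own 
sketch; it assumes the DB layout shown in the transcripts above, where 
empty record files, or directories sitting where iscsiadm expects links, 
indicate hand-made or corrupted entries):

# Empty record files anywhere in the iSCSI DB are suspect.
find /etc/iscsi/nodes /etc/iscsi/send_targets -type f -empty -print
# Directories at the record level under send_targets, where iscsiadm
# expects links (cf. the "Is a directory" error above), are also suspect.
find /etc/iscsi/send_targets -mindepth 2 -maxdepth 2 -type d -print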


Re: May I contribute to iSCSI multiqueue

2015-07-02 Thread LSZhu
Hi,
This is my new account, based on my SUSE email address; I will use this one 
from now on.

Thanks
Zhu Lingshan






Re: May I contribute to iSCSI multiqueue

2015-06-29 Thread lszhu
Hi,

This is my new account, based on my SUSE email address; I will use this 
one from now on.

Thanks
BR
Zhu Lingshan






May I contribute to iSCSI multiqueue

2015-06-26 Thread LSZhu
Hi,
I have been working on iSCSI at SUSE for half a year, so I have some basic 
knowledge of iSCSI, and I have done some debugging and performance analysis 
work before. I am quite interested in iSCSI-mq; I am not an expert here, but 
may I contribute to iSCSI-mq? If you need me anywhere, please let me know.

In my view, the following work seems to be needed:

(1) open-iscsi should expose a multi-queue block device on the initiator 
side; that is, the /dev/sdc and /dev/sdd devices backed by open-iscsi 
should be multi-queue, just as we would expect from multi-queue hardware 
used as a backstore (a quick check is sketched after this list).
(2) An I/O scheduler is needed in the block layer for multi-queue, for the 
device mentioned above.
(3) open-iscsi should establish more than one connection to the target 
within a session, and an I/O scheduler is needed there as well.
(4) Some performance work, such as how to place multi-queue threads across 
CPU cores, how to reduce latency, and how to build a better pipeline.
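For item (1), here is a quick check of whether a given SCSI disk is 
currently driven by blk-mq (my own sketch; /dev/sdc follows the example 
above, and the mq/ sysfs directory is what blk-mq devices expose on 
kernels that have it):

# blk-mq devices expose an mq/ directory with one subdirectory per
# hardware queue; its absence means the legacy request path is in use.
if [ -d /sys/block/sdc/mq ]; then
    echo "sdc is multi-queue: $(ls /sys/block/sdc/mq | wc -l) hw queue(s)"
else
    echo "sdc uses the legacy single-queue request path"
fi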

I have heard that on the LIO target side the multi-queue work is already 
done. If I am wrong anywhere, please tell me; I would appreciate it.

I know you have done a lot of work on multi-queue, so if you need me 
anywhere, or if I can help with some of the work, please let me know.

Thanks
BR
Zhu Lingshan



iSCSI target LIO performance bottleneck analysis

2015-03-18 Thread LSZhu
Hi, 

I have been working on LIO performance for weeks, and now I can share some 
results and issues. In this mail I would like to talk about CPU usage and 
transaction speed. There are also some CPU cycles stuck in wait state on 
the initiator side; I really hope to get some hints and suggestions from 
you! 

Summary: 
(1) In the 512-byte, single-process read case, the transaction speed is 
2.818 MB/s on a 1 Gb network. The busy CPU core on the initiator side spent 
over 80% of its cycles waiting, while one core on the LIO side spent 43.6% 
in sys, with no cycles at all in user or wait. I assume the bottleneck of 
this small-block, single-thread transfer is the lock operations on the LIO 
target side. 

(2) In the 512-byte, 32-process read case, the transaction speed is 
11.259 MB/s on a 1 Gb network. Only one CPU core on the LIO target side is 
busy, at 100% in sys, while the other cores are completely idle. I assume 
the bottleneck of this small-block, multi-thread transfer is the lack of 
workload balancing across cores on the target side. 
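For reference, the 32-process run was presumably the single-job fio 
command from the detailed section below with -numjobs=32; the exact flags 
for that run are not shown in this mail, so this is an assumption:

fio -filename=/dev/sdc -direct=1 -rw=read -bs=512 -size=2G -numjobs=32 \
    -runtime=600 -group_reporting -name=test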

------------------------------------------------------------

Here is all the detailed information: 


My environment: 
Two blade servers with E5 CPUs and 32 GB RAM; one runs LIO and the other is 
the initiator. 
iSCSI backstore: a RAM disk, created with the command line modprobe brd 
rd_size=420 max_part=1 rd_nr=1 (/dev/ram0 on the target; on the 
initiator side it appears as /dev/sdc). 
1 Gb network. 
OS: SUSE Linux Enterprise Server on both sides, kernel version 3.12.28-4. 
Initiator: Open-iSCSI initiator 2.0873-20.4 
LIO-utils version: 4.1-14.6 
My tools: perf, netperf, nmon, fio 


------------------------------------------------------------

For case (1): 

In the 512-byte, single-process read case, the transaction speed is 
2.897 MB/s on a 1 Gb network; the busy CPU core on the initiator side spent 
over 80% of its cycles waiting, while one core on the LIO side spent 43.6% 
in sys, with no cycles at all in user or wait. 

I ran this test case with the command line: 
fio -filename=/dev/sdc -direct=1 -rw=read -bs=512 -size=2G -numjobs=1 
-runtime=600 -group_reporting -name=test 

Part of the results: 
Jobs: 1 (f=1): [R(1)] [100.0% done] [2818KB/0KB/0KB /s] [5636/0/0 iops] 
[eta 00m:00s] 
test: (groupid=0, jobs=1): err= 0: pid=1258: Mon Mar 16 21:48:14 2015 
  read : io=262144KB, bw=2897.8KB/s, iops=5795, runt= 90464msec 

I ran a netperf test with the buffer set to 512 bytes and 512 bytes per 
packet and got a transaction speed of 6.5 MB/s, better than LIO managed, 
so I tried nmon and perf to find out why. 
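The exact netperf flags are not shown in this mail; a plausible 
reconstruction (with -m setting the 512-byte send size, -s/-S the socket 
buffers, and <target-ip> standing in for the LIO server) would be:

netperf -H <target-ip> -t TCP_STREAM -- -m 512 -s 512 -S 512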


This is what nmon (v14i, hostname INIT, 10 s refresh, at 21:30) showed for 
CPU utilisation on the initiator side: 

CPU   User%   Sys%  Wait%   Idle% 
  1     0.0    0.0    0.2    99.8 
  2     0.1    0.1    0.0    99.8 
  3     0.0    0.2    0.0    99.8 
  4     0.0    0.0    0.0   100.0 
  5     0.0    0.0    0.0   100.0 
  6     0.0    3.1    0.0    96.9 
  7     2.8   12.2   83.8     1.2 
  8     0.0    0.0    0.0   100.0 
  9     0.0    0.0    0.0   100.0 
 10     0.0    0.0    0.0   100.0 
 11     0.0    0.0    0.0   100.0 
 12     0.0    0.0    0.0   100.0 
Avg     0.2    1.1    5.8    92.8 


We can see that on the initiator side only one core is busy, which is fine, 
but that core spent 83.8% of its time in wait, which seems strange, while 
on the LIO target side the only busy core spent 43.6% in sys, with no 
cycles at all in user or wait. Why does the initiator wait while there are 
still free resources (CPU cycles) on the target side? I then used perf 
record to monitor the LIO target and found that locks, especially 
spinlocks, consumed nearly 40% of the CPU cycles. I assume this is why the 
initiator side shows wait and low speed: lock operations are the bottleneck 
of this case (small blocks, single-thread transfer). Do you have any 
suggestions?
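The perf invocation is not shown in this mail either; a session of roughly 
this shape would surface the spinlock overhead (sleep 30 merely bounds the 
sampling window):

# Sample all CPUs on the LIO target with call graphs for 30 seconds,
# then sort the report by symbol to see lock functions near the top.
perf record -a -g -- sleep 30
perf report --sort symbol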