SPAM control measure stepped up

2023-12-07 Thread The Lee-Man
Hi All:

Most have probably noticed quite a bit of SPAM recently. This was because 
anyone could join this group, and then post.

I have changed the group settings so that joining must be approved. And I 
have kicked out the latest spammer, and will continue to do so. Sadly, 
Google Groups makes it a bit of a PITA to block an existing member, but 
it's not *that* hard.

Lee -- group moderator

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/6914e825-c84b-498d-9254-19aaeeddc946n%40googlegroups.com.


Updates to iscsiuio

2023-10-17 Thread The Lee-Man
Hi All:

A co-worker and I have recently pushed some updates to iscsiuio to make it 
more reliable. The first wave has been merged, and I have just submitted a 
pull request on github for the 2nd (and last) group of changes.

If you care to review them, please check out github pull request 428.


Thanks!



Re: I have successfully mounted iSCSI targets from Synology NAS in Debian 11 Linux server for a construction company at Defu Lane 10, Singapore on 10 Feb 2023 Fri

2023-02-28 Thread The Lee-Man
This really isn't useful here IMHO. Did you have a question? Or a 
suggestion for changes? If not, we don't really need advertising here.

On Friday, February 10, 2023 at 7:33:14 AM UTC-8 tdte...@gmail.com wrote:

> Subject: I have successfully mounted iSCSI targets from Synology NAS in 
> Debian 11 Linux server for a construction company at Defu Lane 10, 
> Singapore on 10 Feb 2023 Fri
>
> Good day from Singapore,
>
> I have successfully mounted iSCSI targets from Synology NAS in Debian 11 
> Linux server for a construction company at Defu Lane 10, Singapore on 10 
> Feb 2023 Friday.
>
> These are the 5 reference guides I have followed. Please use the following 
> guides in sequence.
>
> [1] How to Configure Static IP on Debian 10
>
> Link: 
> https://www.snel.com/support/how-to-configure-static-ip-on-debian-10/
>
> [2] Debian SourcesList
>
> Link: https://wiki.debian.org/SourcesList
>
> [3] About the /etc/resolv.conf File
>
> Link: 
> https://docs.oracle.com/en/operating-systems/oracle-linux/6/admin/about-etc-resolve.html
>
> [4] iSCSI: Introduction and Steps to Configure iSCSI Initiator and Target
>
> Link: 
> https://calsoftinc.com/blogs/2017/03/iscsi-introduction-steps-configure-iscsi-initiator-target.html
>
> [5] How Do You Make an iSCSI Target in Synology?
>
> Link: https://linuxhint.com/make-iscsi-target-synology/#b6
>
> Please also note that openssh-server was not installed. To install it, run
>
> # apt install openssh-server
>
> Edit /etc/ssh/sshd_config
>
> and set
>
> PermitRootLogin yes
>
> # systemctl restart sshd
>
> Regards,
>
> Mr. Turritopsis Dohrnii Teo En Ming
> Targeted Individual in Singapore
> Blogs:
> https://tdtemcerts.blogspot.com
> https://tdtemcerts.wordpress.com
>



RFC: Making iSNS and SLP discovery code optional in open-iscsi?

2023-02-28 Thread The Lee-Man
Hi All:

I posted an issue on github about this, but I wanted to mention it here, as 
well.

As part of trying to make open-iscsi "container-ized", I have been looking 
at how to make the iscsi executable footprint smaller.

As part of that, I have been looking first at shared libraries being used 
by iscsid and iscsiadm. Of particular interest are libisns and libslp.

Open-iscsi uses both iSNS and SLP as optional discovery methods, instead of 
using "Send Targets". But neither of these is used much, and in particular 
the SLP code isn't even functional, as far as I know. (Yes, I need to test 
it.)

So I will soon submit a pull request to github making the meson build 
system have options to skip building iSNS and/or SLP discovery code.

I have a working prototype, but I need to clean it up before submitting it. 
For example, the man page will have to be updated to mention that these 
things are optional at build time. (It should do that now for systemd, but 
our man pages currently don't mention systemd at all.)

My belief is that making iSNS and SLP usage optional will benefit some of 
the users of open-iscsi, particularly when looking for a lightweight 
solution.

Anyone else think so?
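As a rough illustration of what this could look like from a packager's point of view (the option names here are hypothetical until the pull request lands):

```shell
# Hypothetical meson option names -- the actual PR may use different ones.
meson setup builddir -Disns=disabled -Dslp=disabled
ninja -C builddir
```

The idea is that a lightweight/container build simply skips linking libisns and libslp entirely.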



Re: [RFC 0/9] Make iscsid-kernel communications namespace-aware

2023-02-28 Thread The Lee-Man
Don: Agreed.

I have been playing with podman, and I actually have a working prototype! 
Right now, I'm looking at how to end sessions started in a container. Just 
killing the container leaves a dangling session.
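For what it's worth, a sketch of the cleanup I'm experimenting with (the container name is made up, and it assumes iscsiadm is available inside the container):

```shell
# Log out of all sessions started inside the container before stopping it,
# so the kernel session doesn't outlive the namespace that created it.
podman exec iscsid-test iscsiadm -m node --logoutall=all
podman stop iscsid-test
```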

On Wednesday, February 8, 2023 at 12:10:37 PM UTC-8 don.e.w...@gmail.com 
wrote:

> I agree that sharing the initiator name and the other config files would 
> not be recommended. 
>
> The initiator name is often used as an access control method on the 
> target, which could lead to double mounting volumes. 
>
> Also one benefit would be better isolating an iSCSI SAN with multiple 
> customers accessing the SAN via unique containers.
>
> It should be possible to specify those files and directories as part of 
> the container config.  
>
> Don
>
> On Wed, 2023-02-08 at 11:17 -0800, The Lee-Man wrote:
>
> I wanted to mention some issues I've discovered as part of testing this:
>
>
>- Currently, only some sysfs entries are going to be different per 
>namespace
>- This means that the Configuration and Initiator Name are going to be 
>common to all running daemons (this is /etc/iscsi)
>- This also means that the Node database (and discovery DB, and 
>interface DB) are common to all running daemons
>
> I'm really not sure all running daemons should have the same initiator 
> name. If we think of them as separate initiators, then this seems wrong.
>
> Sharing the Node database may not be a good idea, either. This assumes 
> that nodes discovered (and saved) from one namespace can actually be 
> reached from other namespaces, but this may not be true. Having the Node DB 
> and initiatorname shared means the different iscsid instances must 
> cooperate with each other, else their requests can collide. Also, I can 
> imagine situations where different daemons may want to set different 
> configuration values. Currently they cannot.
>
> On Wednesday, February 8, 2023 at 9:41:02 AM UTC-8 The Lee-Man wrote:
>
> From: Lee Duncan 
>
> This is a request for comment on a set of patches that
> modify the kernel iSCSI initiator communications so that
> they are namespace-aware. The goal is to allow multiple
> iSCSI daemon (iscsid) to run at once as long as they
> are in separate namespaces, and so that iscsid can
> run in containers.
>
> Comments and suggestions are more than welcome. I do not
> expect that this code is production-ready yet, and
> networking isn't my strongest suit (yet).
>
> These patches were originally posted in 2015 by Chris
> Leech. There were some issues at the time about how
> to handle namespaces going away. I hope to address
> any issues raised with this patchset and then
> to merge these changes upstream to address working
> in containers.
>
> My contribution thus far has been to update these patches
> to work with the current upstream kernel.
>
> Chris Leech/Lee Duncan (9):
> iscsi: create per-net iscsi netlink kernel sockets
> iscsi: associate endpoints with a host
> iscsi: sysfs filtering by network namespace
> iscsi: make all iSCSI netlink multicast namespace aware
> iscsi: set netns for iscsi_tcp hosts
> iscsi: check net namespace for all iscsi lookup
> iscsi: convert flashnode devices from bus to class
> iscsi: rename iscsi_bus_flash_* to iscsi_flash_*
> iscsi: filter flashnode sysfs by net namespace
>
> drivers/infiniband/ulp/iser/iscsi_iser.c | 7 +-
> drivers/scsi/be2iscsi/be_iscsi.c | 6 +-
> drivers/scsi/bnx2i/bnx2i_iscsi.c | 6 +-
> drivers/scsi/cxgbi/libcxgbi.c | 6 +-
> drivers/scsi/iscsi_tcp.c | 7 +
> drivers/scsi/qedi/qedi_iscsi.c | 6 +-
> drivers/scsi/qla4xxx/ql4_os.c | 64 +--
> drivers/scsi/scsi_transport_iscsi.c | 625 ---
> include/scsi/scsi_transport_iscsi.h | 63 ++-
> 9 files changed, 537 insertions(+), 253 deletions(-)
>
>
>



Re: [RFC 0/9] Make iscsid-kernel communications namespace-aware

2023-02-08 Thread The Lee-Man
I wanted to mention some issues I've discovered as part of testing this:


   - Currently, only some sysfs entries are going to be different per 
   namespace
   - This means that the Configuration and Initiator Name are going to be 
   common to all running daemons (this is /etc/iscsi)
   - This also means that the Node database (and discovery DB, and 
   interface DB) are common to all running daemons

I'm really not sure all running daemons should have the same initiator 
name. If we think of them as separate initiators, then this seems wrong.

Sharing the Node database may not be a good idea, either. This assumes that 
nodes discovered (and saved) from one namespace can actually be reached 
from other namespaces, but this may not be true. Having the Node DB and 
initiatorname shared means the different iscsid instances must cooperate 
with each other, else their requests can collide. Also, I can imagine 
situations where different daemons may want to set different configuration 
values. Currently they cannot.

On Wednesday, February 8, 2023 at 9:41:02 AM UTC-8 The Lee-Man wrote:

> From: Lee Duncan 
>
> This is a request for comment on a set of patches that
> modify the kernel iSCSI initiator communications so that
> they are namespace-aware. The goal is to allow multiple
> iSCSI daemon (iscsid) to run at once as long as they
> are in separate namespaces, and so that iscsid can
> run in containers.
>
> Comments and suggestions are more than welcome. I do not
> expect that this code is production-ready yet, and
> networking isn't my strongest suit (yet).
>
> These patches were originally posted in 2015 by Chris
> Leech. There were some issues at the time about how
> to handle namespaces going away. I hope to address
> any issues raised with this patchset and then
> to merge these changes upstream to address working
> in containers.
>
> My contribution thus far has been to update these patches
> to work with the current upstream kernel.
>
> Chris Leech/Lee Duncan (9):
> iscsi: create per-net iscsi netlink kernel sockets
> iscsi: associate endpoints with a host
> iscsi: sysfs filtering by network namespace
> iscsi: make all iSCSI netlink multicast namespace aware
> iscsi: set netns for iscsi_tcp hosts
> iscsi: check net namespace for all iscsi lookup
> iscsi: convert flashnode devices from bus to class
> iscsi: rename iscsi_bus_flash_* to iscsi_flash_*
> iscsi: filter flashnode sysfs by net namespace
>
> drivers/infiniband/ulp/iser/iscsi_iser.c | 7 +-
> drivers/scsi/be2iscsi/be_iscsi.c | 6 +-
> drivers/scsi/bnx2i/bnx2i_iscsi.c | 6 +-
> drivers/scsi/cxgbi/libcxgbi.c | 6 +-
> drivers/scsi/iscsi_tcp.c | 7 +
> drivers/scsi/qedi/qedi_iscsi.c | 6 +-
> drivers/scsi/qla4xxx/ql4_os.c | 64 +--
> drivers/scsi/scsi_transport_iscsi.c | 625 ---
> include/scsi/scsi_transport_iscsi.h | 63 ++-
> 9 files changed, 537 insertions(+), 253 deletions(-)
>
> -- 
> 2.39.1
>
>



Re: iscsi daemon in docker container

2023-02-08 Thread The Lee-Man
Hi:

I'm trying to update open-iscsi, using Chris' patches as a starting place, 
so that it's network namespace aware, and hence could work in a docker 
container.

I will soon (I hope) be testing it in a container. So far, it seems to work 
using "ip netns exec".

Please see the RFC for patches I posted.
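A minimal sketch of the "ip netns exec" approach (the namespace name is made up, and a per-namespace config directory would also be needed so the daemons don't collide):

```shell
# Create a network namespace and run a second iscsid inside it, in the
# foreground with debugging, on a kernel with the namespace-aware patches.
ip netns add iscsi-test
ip netns exec iscsi-test iscsid -f -d 8
```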

On Thursday, February 9, 2017 at 8:33:12 AM UTC-8 ayyanar1...@gmail.com 
wrote:

>
>
> On Wednesday, March 9, 2016 at 10:53:58 PM UTC+5:30, Chris Leech wrote:
>>
>> On Tue, Mar 08, 2016 at 02:54:29AM +, Serguei Bezverkhi (sbezverk) 
>> wrote: 
>> > Hello, 
>> > 
>> > As per Michael Christie suggestion, I am reaching out to a wider 
>> > audience.  I am trying to run iscsid inside of a Docker container but 
>> > without using systemd. When I start iscsid -d 8 -f, it fails with 
>> > "Cannot bind IPC socket". I would appreciate if somebody who managed 
>> > to get it working, share his/her steps. 
>>
>> You'll probably need to run it using dockers host mode networking, not 
>> using a container specific network namespace. The iSCSI netlink control 
>> code in the kernel is not network namespace aware, and can only be 
>> accessed from the default/original network namespace (that's the IPC 
>> socket). Not being able to use a new network namespace also means that 
>> you can only run a single iscsid instance on the system. 
>>
>> I had the start of a kernel patch series to deal with this posted a 
>> while back.  I never finished the sysfs object filtering by network 
>> namespace for iSCSI, particularly moving the flash node db sysfs code 
>> from bus to class devices to allow for namespace filtering was still an 
>> open issue. 
>>
>> - Chris 
>>
>>
> Hi Chris,
>
> Do you have any update on this?
>
> Thanks. 
>



Re: Kernel BUG: kernel NULL pointer dereference on Windows server connect/disconnect

2023-01-24 Thread The Lee-Man
This is a mailing list for open-iscsi, the iSCSI initiator. Your issue 
seems to be with LIO, the in-kernel target code.

I suggest the target development kernel mailing list: 
target-de...@vger.kernel.org

It would help if, when you post there, you include your distribution. Also, 
is there any other way to reproduce it, other than using a MS initiator (so 
that others might reproduce it)?

I use LIO (targetcli) targets regularly and haven't seen this issue.

On Monday, January 23, 2023 at 12:22:49 AM UTC-8 Forza wrote:

> Hi!
> I have an issue with spontaneous reboots of my ISCSI target server. It 
> seems to happen especially often when Windows Server 2016 clients reboot, 
> but I can't say for sure it happens every time either.
>
> The target is using a fileio backing store on top of a BTRFS filesystem. I 
> have write-back enabled on this target (but it happens even with write 
> caching disabled).
>
> I am running Alpine Linux. The issue has happened since the system was first 
> installed about a year ago. It is at least happening on kernels 5.15.60+ and 
> 6.1.4, 6.1.6.
>
>
> I managed to capture the following trace using pstore:
>
> <6>[   69.123671] RPC: Registered named UNIX socket transport module.
> <6>[   69.123674] RPC: Registered udp transport module.
> <6>[   69.123675] RPC: Registered tcp transport module.
> <6>[   69.123675] RPC: Registered tcp NFSv4.1 backchannel transport module.
> <6>[   70.281192] NFSD: Using UMH upcall client tracking operations.
> <6>[   70.281199] NFSD: starting 90-second grace period (net f000)
> <6>[   75.683777] Rounding down aligned max_sectors from 4294967295 to 
> 4294967288
> <4>[   76.014381] ignoring deprecated emulate_dpo attribute
> <4>[   76.014497] ignoring deprecated emulate_fua_read attribute
> <4>[   76.019775] dev[4a04ffbe]: Backstore name 
> 'sesoco3092-export' is too long for INQUIRY_MODEL, truncating to 15 
> characters
> <3>[   76.813023] bond0: (slave igb0): speed changed to 0 on port 1
> <6>[   76.868842] bond0: (slave igb0): link status definitely down, 
> disabling slave
> <6>[   76.868849] bond0: active interface up!
> <3>[   78.912829] bond0: (slave igb1): speed changed to 0 on port 2
> <6>[   78.982125] bond0: (slave igb1): link status definitely down, 
> disabling slave
> <6>[   78.982149] bond0: now running without any active interface!
> <6>[   80.332786] igb :03:00.0 igb0: igb: igb0 NIC Link is Up 1000 
> Mbps Full Duplex, Flow Control: RX
> <6>[   80.545683] bond0: (slave igb0): link status definitely up, 1000 
> Mbps full duplex
> <6>[   80.545691] bond0: active interface up!
> <6>[   82.449424] igb :04:00.0 igb1: igb: igb1 NIC Link is Up 1000 
> Mbps Full Duplex, Flow Control: RX
> <6>[   82.622345] bond0: (slave igb1): link status definitely up, 1000 
> Mbps full duplex
> <4>[ 7428.832735] hrtimer: interrupt took 7712 ns
> <6>[ 8364.650898] ice :01:00.0 ice0: NIC Link is up 25 Gbps Full 
> Duplex, Requested FEC: RS-FEC, Negotiated FEC: RS-FEC, Autoneg Advertised: 
> Off, Autoneg Negotiated: False, Flow Control: None
> <6>[ 8364.651003] IPv6: ADDRCONF(NETDEV_CHANGE): ice0: link becomes ready
> <6>[11218.918482] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com 
> for information.
> <6>[11218.918484] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <
> ja...@zx2c4.com>. All Rights Reserved.
> <3>[154888.236235] Did not receive response to NOPIN on CID: 1, failing 
> connection for I_T Nexus 
> iqn.1991-05.com.microsoft:srv,i,0x41370001,iqn.2022-02.com.example.srv04:srv,t,0x01
> <3>[154908.716136] Time2Retain timer expired for SID: 1, cleaning up iSCSI 
> session.
> <1>[154908.716177] BUG: kernel NULL pointer dereference, address: 
> 0140
> <1>[154908.717023] #PF: supervisor write access in kernel mode
> <1>[154908.717842] #PF: error_code(0x0002) - not-present page
> <6>[154908.718667] PGD 0 P4D 0 
> <4>[154908.719486] Oops: 0002 [#1] PREEMPT SMP PTI
> <4>[154908.720289] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.1.3-0-lts 
> #1-Alpine
> <4>[154908.721089] Hardware name: Supermicro Super Server/X11SCL-F, BIOS 
> 1.9 09/21/2022
> <4>[154908.721888] RIP: 0010:sbitmap_queue_clear+0x3a/0xa0
> <4>[154908.722667] Code: 65 48 8b 04 25 28 00 00 00 48 89 44 24 08 31 c0 
> 8b 4f 04 ba ff ff ff ff 89 f0 d3 e2 d3 e8 f7 d2 48 c1 e0 07 48 03 47 10 21 
> f2  48 0f ab 50 40 c7 44 24 04 01 00 00 00 48 8d 74 24 04 48 89 df
> <4>[154908.724376] RSP: 0018:a6e43d48 EFLAGS: 00010202
> <4>[154908.725249] RAX: 0100 RBX: 8ddc43039428 RCX: 
> 0005
> <4>[154908.726137] RDX: 000b RSI: 004b RDI: 
> 8ddc43039428
> <4>[154908.727030] RBP: 004b R08:  R09: 
> 
> <4>[154908.727921] R10:  R11:  R12: 
> 
> <4>[154908.728812] R13: 8ddc43039380 R14: 8ddc49015370 R15: 
> 8ddc490157e0
> <4>[154908.729715] FS:  () GS:8df9aec0() 
> 

Re: iscsiadm error "Could not load transport iser"

2022-11-25 Thread The Lee-Man
The iser transport is only supported for some cards. It's normally an 
InfiniBand transport.

Do you have a CNA card (and infrastructure) that supports iser?
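One quick way to check, assuming the standard ib_iser module provides the transport on your system:

```shell
# Load the iSER initiator module, then see whether the transport registered.
modprobe ib_iser
ls /sys/class/iscsi_transport   # "iser" should appear alongside "tcp" if it loaded
```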

On Tuesday, November 22, 2022 at 11:28:47 PM UTC-8 Luis Navarro wrote:

> Hi all,
>
> I'm trying to test a new Ubuntu 22.04.1 LIO iSCSI target with iscsiadm 
> 2.1.5 (installed via "apt").  iscsiadm works fine over "tcp" transport but 
> always fails over the "iser" transport with the following error:
>
> iscsiadm: Could not load transport iser.Dropping interface iface0.
>
> Here are the commands I ran:
>
> $ sudo iscsiadm -m iface -I iface0 --op=new
> $ sudo iscsiadm -m iface -I iface0 -o update -n iface.transport_name -v 
> iser
> $ sudo iscsiadm -m discovery -t st -p 192.168.25.5:3260 -I iface0 -d 8
> iscsiadm: Max file limits 1024 1048576
> iscsiadm: updating defaults from '/etc/iscsi/iscsid.conf'
> iscsiadm: updated 'discovery.sendtargets.iscsi.MaxRecvDataSegmentLength', 
> '32768' => '32768'
> iscsiadm: updated 'node.startup', 'manual' => 'manual'
> iscsiadm: updated 'node.leading_login', 'No' => 'No'
> iscsiadm: updated 'node.session.timeo.replacement_timeout', '120' => '120'
> iscsiadm: updated 'node.conn[0].timeo.login_timeout', '30' => '15'
> iscsiadm: updated 'node.conn[0].timeo.logout_timeout', '15' => '15'
> iscsiadm: updated 'node.conn[0].timeo.noop_out_interval', '5' => '5'
> iscsiadm: updated 'node.conn[0].timeo.noop_out_timeout', '5' => '5'
> iscsiadm: updated 'node.session.err_timeo.abort_timeout', '15' => '15'
> iscsiadm: updated 'node.session.err_timeo.lu_reset_timeout', '30' => '30'
> iscsiadm: updated 'node.session.err_timeo.tgt_reset_timeout', '30' => '30'
> iscsiadm: updated 'node.session.initial_login_retry_max', '4' => '8'
> iscsiadm: updated 'node.session.cmds_max', '128' => '128'
> iscsiadm: updated 'node.session.queue_depth', '32' => '32'
> iscsiadm: updated 'node.session.xmit_thread_priority', '-20' => '-20'
> iscsiadm: updated 'node.session.iscsi.InitialR2T', 'No' => 'No'
> iscsiadm: updated 'node.session.iscsi.ImmediateData', 'Yes' => 'Yes'
> iscsiadm: updated 'node.session.iscsi.FirstBurstLength', '262144' => 
> '262144'
> iscsiadm: updated 'node.session.iscsi.MaxBurstLength', '16776192' => 
> '16776192'
> iscsiadm: updated 'node.conn[0].iscsi.MaxRecvDataSegmentLength', '262144' 
> => '262144'
> iscsiadm: updated 'node.conn[0].iscsi.MaxXmitDataSegmentLength', '0' => '0'
> iscsiadm: updated 'node.session.nr_sessions', '1' => '1'
> iscsiadm: updated 'node.session.reopen_max', '0' => '0'
> iscsiadm: updated 'node.session.iscsi.FastAbort', 'Yes' => 'Yes'
> iscsiadm: updated 'node.session.scan', 'auto' => 'auto'
> iscsiadm: looking for iface conf /etc/iscsi/ifaces/iface0
> iscsiadm: updated 'iface.iscsi_ifacename', 'iface0' => 'iface0'
> iscsiadm: updated 'iface.prefix_len', '0' => '0'
> iscsiadm: updated 'iface.transport_name', '' => 'iser'
> iscsiadm: updated 'iface.vlan_id', '0' => '0'
> iscsiadm: updated 'iface.vlan_priority', '0' => '0'
> iscsiadm: updated 'iface.iface_num', '0' => '0'
> iscsiadm: updated 'iface.mtu', '0' => '0'
> iscsiadm: updated 'iface.port', '0' => '0'
> iscsiadm: updated 'iface.tos', '0' => '0'
> iscsiadm: updated 'iface.ttl', '0' => '0'
> iscsiadm: updated 'iface.tcp_wsf', '0' => '0'
> iscsiadm: updated 'iface.tcp_timer_scale', '0' => '0'
> iscsiadm: updated 'iface.def_task_mgmt_timeout', '0' => '0'
> iscsiadm: updated 'iface.erl', '0' => '0'
> iscsiadm: updated 'iface.max_receive_data_len', '0' => '0'
> iscsiadm: updated 'iface.first_burst_len', '0' => '0'
> iscsiadm: updated 'iface.max_outstanding_r2t', '0' => '0'
> iscsiadm: updated 'iface.max_burst_len', '0' => '0'
> iscsiadm: in read_transports
> iscsiadm: Adding new transport tcp
> iscsiadm: Matched transport tcp
> iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'handle'
> iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'caps'
> iscsiadm: in read_transports
> iscsiadm: Updating transport tcp
> iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'handle'
> iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'caps'
> iscsiadm: Could not load transport iser.Dropping interface iface0.
>
> Looking at the /sys/class/iscsi_transport and 
> /sys/devices/virtual/iscsi_transport directories on the client system, I 
> only see "tcp".  Should I also be seeing "iser"?  Is there an extra package 
> I need to install or step I need to take to get "iser" devices under the 
> "iscsi_transport" directory?
>
> Thanks!
> Luis
>



Re: Could not logout of all requested sessions reported error (9 - internal error)

2022-11-08 Thread The Lee-Man
Could you try this with debugging in the daemon as well as the command? 
Also, can you share your iscsid.conf? Of course, obfuscate your name/password 
settings if you wish. (But be sure username/password is not the same as 
username_in/password_in -- the sets have to be different.)
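For reference, a sketch of how to capture that debugging (the target and portal are placeholders):

```shell
# Stop the running daemon, then run it in the foreground with full debugging:
iscsid -f -d 8
# In another shell, retry the logout with command-side debugging as well:
iscsiadm -m node -T <target-iqn> -p <portal> --logout -d 8
```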

On Thursday, November 3, 2022 at 6:56:43 AM UTC-7 Andinet Gebre wrote:

> 
>
> I am able to discover and login into the Target from the iscsi client and 
> CHAP is also configured to authenticate to/from the ISCSI Initiator client. 
> I am getting the following error when trying logging out from the target to 
> check if the CHAP config is working as expected while log back in,
>
> [root@ltolx2020 ~]# iscsiadm --mode node --target 
> iqn.1992-08.com.redhat:sn.120f265e82be345ecb111d039ea331262:vs.14 --portal 
> 10.85.64.270 --logout
> Logging out of session [sid: 1, target: 
> iqn.1992-08.com.redhat:sn.120f265e82be345ecb111d039ea331262:vs.14, portal: 
> 10.85.64.270,3260]
> iscsiadm: Could not logout of [sid: 1, target: 
> iqn.1992-08.com.redhat:sn.120f265e82be345ecb111d039ea331262:vs.14, portal: 
> 10.85.64.270,3260].
> iscsiadm: initiator reported error (9 - internal error)
> iscsiadm: Could not logout of all requested sessions
>



Release 2.1.8 available

2022-09-26 Thread The Lee-Man
Hi All:

I just tagged version 2.1.8 of open-iscsi in github.

This release fixes a few bugs, and it adds support for building using 
meson. Building using make/autoconf is still supported but deprecated.

Please check it out if you get a chance.
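If you want to try it from source, something like this should work (assuming the git tag matches the version name):

```shell
# Fetch the tagged release and build it with the new meson support.
git clone https://github.com/open-iscsi/open-iscsi.git
cd open-iscsi
git checkout 2.1.8
meson builddir && ninja -C builddir
```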

-- 
Lee Duncan



Re: FCoE target with LIO

2022-09-13 Thread The Lee-Man
You should perhaps try the target-devel@vger mailing list? This list is 
about the initiator and (mostly) iSCSI.

I don't _believe_ that LIO handles FCoE, but I may be wrong. Good luck.

On Monday, September 5, 2022 at 2:51:23 AM UTC-7 opansz...@gmail.com wrote:

> Hi dear Members.
>
> I want to create a 10G FCoE target with LIO software, the topology is 
> simple.
> I would like to export Luns over fcoe target. 
>
> The fabric will be FCoE.
> Luns are local block devices attached throught SAS raid controller.
> Could someone share any deployment guides for this scenario?
>
> I tried deploying this environment in my lab. I used an HP server with an 
> Intel x520 DA2 converged HBA adapter, with no success. 
>
> I found old discussions about target_core, tcm_fc, and rtslib. None of them 
> helped me.
>
> I would be happy if I could get a deployment guide or help from anyone.
> Thanks,
>
>



Using meson to build open-iscsi/iscsiuio

2022-08-30 Thread The Lee-Man

Hi All:

I am planning on converting open-iscsi to use meson for building instead of 
'make'. This would convert iscsiuio as well, which currently uses 
autoconf/autotools.

It looks like the resulting system is functionally equivalent (i.e. it 
builds the same stuff), and it's faster and a bit smaller. And easier to 
understand and use!

I have the changes in a branch of open-iscsi: 'use-meson-v1', i.e.:

open-iscsi github sources use-meson-v1 branch 


I would really appreciate other eyes looking at this and trying it out.

Anyone interested? If so, let me know.

The README isn't updated yet (in this branch) to explain how to build 
things, so let me know if you want the secret sauce.  It's something like 
(from the top level):

sh$ rm -rf builddir
sh$ meson builddir
sh$ ninja --verbose -C builddir

to build, and

sh$ ninja --verbose -C builddir install

to install

You'll need to install meson and ninja.

Let the hacking begin!
-- 
Lee D



Should we use autotools (or something like it) for build system?

2022-08-04 Thread The Lee-Man

Hi!

I'm considering updating the build system to use autoconf/automake, or 
maybe something newer like meson?

See the discussion at: https://github.com/open-iscsi/open-iscsi/issues/359

-- 
Lee Duncan



Re: iscsi device with multipath

2022-06-29 Thread The Lee-Man
I'm sorry I didn't see this earlier. I don't see any replies. I can't quite 
parse your question, though. What are you asking about?

On Thursday, June 16, 2022 at 11:57:20 PM UTC-7 zhuca...@gmail.com wrote:

> How can we reproduce the error with "Multiply-claimed blocks"?



Re: [PATCH] drivers: scsi: Directly use ida_alloc()/free()

2022-05-29 Thread The Lee-Man
On Sunday, May 29, 2022 at 11:33:06 AM UTC-7 keliu wrote:

> Use ida_alloc()/ida_free() instead of deprecated 
> ida_simple_get()/ida_simple_remove() . 
>
> Signed-off-by: keliu  
> --- 
> drivers/scsi/hosts.c | 4 ++-- 
> drivers/scsi/scsi_transport_iscsi.c | 6 +++--- 
> 2 files changed, 5 insertions(+), 5 deletions(-) 
>
> diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c 
> index f69b77cbf538..ec16cfad034e 100644 
> --- a/drivers/scsi/hosts.c 
> +++ b/drivers/scsi/hosts.c 
> @@ -350,7 +350,7 @@ static void scsi_host_dev_release(struct device *dev) 
>
> kfree(shost->shost_data); 
>
> - ida_simple_remove(&host_index_ida, shost->host_no); 
> + ida_free(&host_index_ida, shost->host_no); 
>
> if (shost->shost_state != SHOST_CREATED) 
> put_device(parent); 
> @@ -395,7 +395,7 @@ struct Scsi_Host *scsi_host_alloc(struct 
> scsi_host_template *sht, int privsize) 
> init_waitqueue_head(&shost->host_wait); 
> mutex_init(&shost->scan_mutex); 
>
> - index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL); 
> + index = ida_alloc(&host_index_ida, GFP_KERNEL); 
> if (index < 0) { 
> kfree(shost); 
> return NULL; 
> diff --git a/drivers/scsi/scsi_transport_iscsi.c 
> b/drivers/scsi/scsi_transport_iscsi.c 
> index 2c0dd64159b0..2578db4c095d 100644 
> --- a/drivers/scsi/scsi_transport_iscsi.c 
> +++ b/drivers/scsi/scsi_transport_iscsi.c 
> @@ -1975,7 +1975,7 @@ static void __iscsi_unbind_session(struct 
> work_struct *work) 
> scsi_remove_target(&session->dev); 
>
> if (session->ida_used) 
> - ida_simple_remove(&iscsi_sess_ida, target_id); 
> + ida_free(&iscsi_sess_ida, target_id); 
>
> unbind_session_exit: 
> iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION); 
> @@ -2044,7 +2044,7 @@ int iscsi_add_session(struct iscsi_cls_session 
> *session, unsigned int target_id) 
> return -ENOMEM; 
>
> if (target_id == ISCSI_MAX_TARGET) { 
> - id = ida_simple_get(&iscsi_sess_ida, 0, 0, GFP_KERNEL); 
> + id = ida_alloc(&iscsi_sess_ida, GFP_KERNEL); 
>
> if (id < 0) { 
> iscsi_cls_session_printk(KERN_ERR, session, 
> @@ -2083,7 +2083,7 @@ int iscsi_add_session(struct iscsi_cls_session 
> *session, unsigned int target_id) 
> device_del(&session->dev); 
> release_ida: 
> if (session->ida_used) 
> - ida_simple_remove(&iscsi_sess_ida, session->target_id); 
> + ida_free(&iscsi_sess_ida, session->target_id); 
> destroy_wq: 
> destroy_workqueue(session->workq); 
> return err; 
> -- 
> 2.25.1 
>
>
Reviewed-by: Lee Duncan 
 



Version 2.1.6 Available now.

2022-02-14 Thread The Lee-Man

I just tagged version 2.1.6 of open-iscsi. Enjoy. :) 



Re: iSCSI initiator setting max_sectors_kb=4 when target optimal_io_size=4096

2021-12-08 Thread The Lee-Man
Perhaps someone from Red Hat can comment? I suspect this is a kernel change, 
though.

On Sunday, November 21, 2021 at 6:31:39 PM UTC-8 alexi...@gmail.com wrote:

> Hi,
>
> Looking into whether this is a bug, or an expect behavior with kernel 4.18+
>
> RHEL 8.4 on AWS r5.xlarge hardware type, attaching nvme disks, observing 
> the nvme device is configuring optimal_io_size to 4KB
> i.e.
> /sys/devices/pci:00/:00:1c.0/nvme/nvme4/nvme4n1/queue/optimal_io_size 
> 4096
>
> When attaching this device remotely using Linux-IO, the initiator device 
> is using the target's 'optimal_io_size' to set the max_sectors_kb. 
> i.e.
> /sys/devices/platform/host1/session8/target1:0:0/1:0:0:0/block/sdb/queue/max_sectors_kb
>  
> 4
>
> This does not seem to be correct behavior. optimal_io_size and 
> max_sectors_kb should not be directly related.  Do not observe this 
> behavior with RHEL7.
>
> target:
>  - RHEL 8.4, 4.18.0-305.12.1.el8_4.x86_64
> initiator: 
>  - RHEL 8.4, 4.18.0-305.12.1.el8_4.x86_64
>  - iscsi-initiator-utils-iscsiuio-6.2.1.2-1.gita8fcb37.el8.x86_64
>  - iscsi-initiator-utils-6.2.1.2-1.gita8fcb37.el8.x86_64
>
>
> Thanks,
>
> Alexis.
>
>



Re: Yet another timed out connection

2021-12-08 Thread The Lee-Man
The target thinks there are two ways to reach it, so during discovery it is 
telling you both of those paths. Discovery, by design, reports every portal 
the target listens on, not just the one you want.

So now, after discovery, your iscsi database holds two portals to connect 
through, both reaching the same exact target. So when you then tell iscsi 
to connect to that target, it tries to connect to that target through all 
the paths it has -- and there are two of them. But it looks like your 
target is actually not reachable on the IPv6 path, for some reason. Perhaps 
the ACL you've set up on your target? The actual reason doesn't matter to 
the initiator -- it just tries and fails to talk to the target through the 
IPv6 path.

There are a couple of ways you can fix this. After discovery, you can 
delete the database node you do not want. Simply run "iscsiadm -m node" to 
see the list of database nodes discovered, then run 'iscsiadm -m node --op 
delete -p "[fe80::211:32ff:fe15:74eb]:3260"' to delete the IPv6 node. 
Another option would be to leave the nodes in the database, but specify the 
node you wish to use when logging on. (Right now, you're telling it to log 
in to all nodes in the database.) If you still want IPv6 running on your 
box, you could fix the issue with your target so that open-iscsi could log 
into both the IPv4 and IPv6 nodes, as it is trying to do.
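The first option (deleting the unwanted node) can be scripted from the node
list. A minimal sketch in plain shell, using the portal and IQN from this
thread as sample data (ipv6_delete_cmds is just an illustrative helper, not
part of open-iscsi):

```shell
# Generate "iscsiadm --op delete" commands for every IPv6 portal in a
# node list, in the "portal,tpgt target-iqn" format "iscsiadm -m node"
# prints. On a real system you would pipe the actual command output in.
ipv6_delete_cmds() {
    while read -r portal target; do
        case $portal in
        \[*)    # IPv6 portals print as [addr]:port,tpgt
            # strip the ",tpgt" suffix; -p takes [addr]:port
            echo "iscsiadm -m node -T $target -p ${portal%,*} --op delete"
            ;;
        esac
    done
}

# Sample input mimicking "iscsiadm -m node" output from this thread:
nodes='192.168.10.18:3260,1 iqn.2000-01.com.synology-iSCSI:storage.01
[fe80::211:32ff:fe15:74eb]:3260,1 iqn.2000-01.com.synology-iSCSI:storage.01'

echo "$nodes" | ipv6_delete_cmds
```

Only the IPv6 line produces a delete command; the IPv4 node is left alone.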

On Thursday, November 25, 2021 at 11:19:52 AM UTC-8 Mauricio wrote:

> I know this has been asked many time before but I still do not know what I 
> am doing wrong. I am handing out iSCSI LUNs from a host at 
> 192.168.10.18:3260 to a host called testbox (initiator). 
>
> [root@testbox ~]# iscsiadm -m discovery -t sendtargets -p 192.168.10.18
> 192.168.10.18:3260,1 iqn.2000-01.com.synology-iSCSI:storage.01
> [fe80::211:32ff:fe15:74eb]:3260,1 iqn.2000-01.com.synology-iSCSI:storage.01
> [root@testbox ~]#
> [root@testbox ~]# fgrep address 
> /var/lib/iscsi/nodes/iqn.2000-01.com.synology-iSCSI\:storage.01/
> 192.168.10.18\,3260\,1/default
> node.discovery_address = 192.168.10.18
> node.conn[0].address = 192.168.10.18
> [root@testbox ~]#
>
> When I try to connect I am getting the connection timed out issue. Correct 
> me if I am wrong but it is barking at when It tries to connect using IPv6:
>
> [root@testbox ~]# iscsiadm -m node --loginall all
> Logging in to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 192.168.10.18,3260]
> Logging in to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> fe80::211:32ff:fe15:74eb,3260]
> Login to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 192.168.10.18,3260] 
> successful.
> iscsiadm: Could not login to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> fe80::211:32ff:fe15:74eb,3260].
> iscsiadm: initiator reported error (8 - connection timed out)
> iscsiadm: Could not log into all portals
> [root@testbox ~]#
>
> which sometimes seems to be what it wants to do by default:
>
> [root@testbox ~]# iscsiadm -m node -T 
> iqn.2000-01.com.synology-iSCSI:storage.01 -l
> Logging in to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> fe80::211:32ff:fe15:74eb,3260]
> iscsiadm: Could not login to [iface: default, target: 
> iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> fe80::211:32ff:fe15:74eb,3260].
> iscsiadm: initiator reported error (8 - connection timed out)
> iscsiadm: Could not log into all portals
> [root@testbox ~]#
>
> I did not really setup IPv6 in this network; is I guesstimation for the 
> source of the problem correct? 
>



Re: hostbyte=DID_TRANSPORT_DISRUPTED: network issues or?

2021-12-08 Thread The Lee-Man
Yes, I believe your problems are network-related.

I would advise taking iscsi NOPs off the table -- if you have a slow 
connection, the error recovery involved in a ping timeout can screw up I/O 
big time.
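Concretely, NOPs are controlled by two timers in iscsid.conf (or per-node
via iscsiadm --op update); setting both to 0 disables the pings entirely.
A hedged sketch -- the stock defaults are typically 5 seconds each:

```
# Disable iSCSI NOP-Out pings (0 = disabled); defaults are usually 5/5
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
```

Note the trade-off: without NOPs the initiator relies on SCSI command
timeouts alone to notice a dead connection, so failures surface more slowly.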

On Friday, November 26, 2021 at 6:52:45 AM UTC-8 Mauricio wrote:

>   Now I was able to address my issue with the testbox, I can mount the 
> LUN in that host without issues. So it is time to switch back to the 
> problem box, which started having issues since the last reboot. I apply the 
> solution used in the testbox and then restart the service:
>
> [root@problembox ~]# systemctl restart iscsi
> [root@problembox ~]#
>
> And it acts like it is happy (so far; did not check dmesg or fdisk):
>
> [root@problembox ~]# systemctl status iscsi
> o iscsi.service - Login and scanning of iSCSI devices
>Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled; vendor 
> preset: disabled)
>Active: active (exited) since Thu 2021-11-25 23:21:40 EST; 9h ago
>  Docs: man:iscsiadm(8)
>man:iscsid(8)
>   Process: 3414 ExecStart=/usr/sbin/iscsiadm -m node --loginall=automatic 
> (code=exited, status=0/SUCCESS)
>  Main PID: 3414 (code=exited, status=0/SUCCESS)
> Tasks: 0 (limit: 203741)
>Memory: 0B
>CGroup: /system.slice/iscsi.service
>
> Nov 25 23:17:52 problembox systemd[1]: Starting Login and scanning of 
> iSCSI devices...
> Nov 25 23:21:40 problembox iscsiadm[3414]: Logging in to [iface: default, 
> target: iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> 192.168.10.18,3260]
> Nov 25 23:21:40 problembox iscsiadm[3414]: Login to [iface: default, 
> target: iqn.2000-01.com.synology-iSCSI:storage.01, portal: 
> 192.168.10.18,3260] successful.
> Nov 25 23:21:40 problembox systemd[1]: Started Login and scanning of iSCSI 
> devices.
> [root@problembox ~]#
>
> [root@problembox ~]# ls -lh /dev/sd*
> brw-rw. 1 root disk 8,  0 Nov 25 21:42 /dev/sda
> brw-rw. 1 root disk 8,  1 Nov 25 21:42 /dev/sda1
> brw-rw. 1 root disk 8,  2 Nov 25 21:42 /dev/sda2
> brw-rw. 1 root disk 8,  3 Nov 25 21:42 /dev/sda3
> brw-rw. 1 root disk 8, 16 Nov 25 23:33 /dev/sdb
> [root@problembox ~]# ls -l /dev/disk/by-path/|grep ip
> lrwxrwxrwx. 1 root root  9 Nov 25 23:33 
> ip-192.168.10.18:3260-iscsi-iqn.2000-01.com.synology-iSCSI:storage.01-lun-0 
> -> ../../sdb
> [root@problembox ~]#
>
> Time to go probe the elephant in the room
>
> [root@problembox ~]# fdisk -l /dev/sdb
> fdisk: cannot open /dev/sdb: Input/output error
> [root@problembox ~]#
>
> What does dmesg has to tell me? The expected behaviour as seen in the 
> testbox (mounting the very same LUN):
>
> [root@testbox ~]# dmesg -T
> [...]
> [Thu Nov 25 19:58:00 2021] Loading iSCSI transport class v2.0-870.
> [Thu Nov 25 19:58:00 2021] iscsi: registered transport (tcp)
> [Thu Nov 25 19:58:00 2021] scsi host2: iSCSI Initiator over TCP/IP
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: Direct-Access SYNOLOGY iSCSI 
> Storage3.1  PQ: 0 ANSI: 5
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: alua: supports implicit TPGS
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: alua: device 
> naa.6001405e61f8c59d35fdd4481da3e1d3 port group 1 rel port 1
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: Attached scsi generic sg1 type 0
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: alua: transition timeout set to 
> 60 seconds
> [Thu Nov 25 19:58:00 2021] scsi 2:0:0:0: alua: port group 01 state A 
> non-preferred supports TOlUSNA
> [Thu Nov 25 19:58:00 2021] sd 2:0:0:0: [sda] 754974720 512-byte logical 
> blocks: (387 GB/360 GiB)
> [Thu Nov 25 19:58:00 2021] sd 2:0:0:0: [sda] Write Protect is off
> [Thu Nov 25 19:58:00 2021] sd 2:0:0:0: [sda] Mode Sense: 3b 00 00 00
> [Thu Nov 25 19:58:00 2021] sd 2:0:0:0: [sda] Write cache: disabled, read 
> cache: enabled, doesn't support DPO or FUA
> [Thu Nov 25 19:58:00 2021]  sda: sda1
> [Thu Nov 25 19:58:00 2021] sd 2:0:0:0: [sda] Attached SCSI disk
> [root@testbox ~]#
>
> Behaviour seen in the problembox
>
> [root@problembox ~]# dmesg -T
> [Thu Nov 25 23:17:51 2021] scsi host8: iSCSI Initiator over TCP/IP
> [Thu Nov 25 23:17:51 2021] scsi 8:0:0:0: Direct-Access SYNOLOGY iSCSI 
> Storage3.1  PQ: 0 ANSI: 5
> [Thu Nov 25 23:17:51 2021] scsi 8:0:0:0: alua: supports implicit TPGS
> [Thu Nov 25 23:17:51 2021] scsi 8:0:0:0: alua: device 
> naa.6001405e61f8c59d35fdd4481da3e1d3 port group 1 rel port 1
> [Thu Nov 25 23:17:51 2021] sd 8:0:0:0: Attached scsi generic sg1 type 0
> [Thu Nov 25 23:18:02 2021]  connection4:0: ping timeout of 5 secs expired, 
> recv timeout 5, last rx 4300399244, last ping 4300404736, now 4300409856
> [Thu Nov 25 23:18:02 2021]  connection4:0: detected conn error (1022)
> [...]
> [Thu Nov 25 23:31:56 2021]  connection4:0: detected conn error (1022)
> [Thu Nov 25 23:31:56 2021] sd 8:0:0:0: [sdb] tag#76 FAILED Result: 
> hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK cmd_age=72s
> [Thu Nov 25 23:31:56 2021] sd 8:0:0:0: [sdb] tag#76 CDB: Read(10) 28 00 2c 
> ff ff 80 00 00 08 00
> [Thu Nov 

Re: Concurrent usage of iscsiadm

2021-10-21 Thread The Lee-Man
P.S. Perhaps Chris will chime in with his opinion, which may be a bit 
different than mine on this subject. Chris?

On Wednesday, October 20, 2021 at 7:18:47 AM UTC-7 Vojtech Juranek wrote:

> Hi,
> I'd like to follow up with discussion about concurrent usage iscsiadm 
> tool. It 
> was discussed here about year ago, with suggestion not to use it 
> concurrently 
> [1]. On the other hand, comment [2] says it should be fine. Is the an 
> agreement 
> in open-iscsi community if the concurrent usage of iscsiadm is safe or 
> not? If 
> it's not safe, is there any bug for open-iscsi describing the issue and 
> potential problems if iscsiadm is used concurrently?
>
> The motivation why I'm popping up this again is that in oVirt project [3] 
> we 
> use a lock before calling iscsiadm to make sure it's not run in parallel. 
> This 
> causes us various issues (see e.g. BZ #1787192 [2]) and we'd like to get 
> rid 
> off this lock.
>
> I run several thousands tests with our typical usage of iscsiadm [4], 
> running 
> iscsiadm in parallel and haven't spot any issue so far. This suggests 
> removing 
> the lock can be safe, but of course my tests could be just a pure luck. So 
> before removing this lock from our code base, I'd like to know your 
> thoughts 
> about it.
>
> Thanks
> Vojta
>
> [1] https://groups.google.com/g/open-iscsi/c/OHOdIm1W274/m/9l5NcPQHBAAJ
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1787192#c18
> [3] https://www.ovirt.org/
> [4] https://github.com/oVirt/vdsm/blob/master/tests/storage/stress/
> initiator.py



Re: [EXT] Concurrent usage of iscsiadm

2021-10-21 Thread The Lee-Man
Hi Vojtech:

I know there's confusion around this issue, and as some testing at RH has 
shown, you can get away with using iscsiadm in parallel, as long as you're 
careful about what you do. For example, if each instance is trying to log 
into a different target, and there is no error handling occurring, testing 
seems to show this to be fine.

But I continue to recommend against doing this, because (as I've said 
before) there isn't sufficient locking in iscsiadm to allow completely 
parallel execution. [By the way, I'm willing to entertain patches that fix 
that.]

Yes, some of the code in iscsiadm is safe for parallel execution, such as 
talking to iscsid and accessing the node database. But much is not, such 
as error handling and sysfs access.

And there is very little reason to try to log into multiple targets using 
parallel calls to iscsiadm now that iscsiadm has the "no-wait" option, 
allowing it to send off login requests without waiting for success or 
timing out and failing.

Bottom line, I'd say use iscsiadm in parallel at your own risk. And if 
you do so and find issues, then we can try to address them. But the last 
thing I want for iscsiadm is one giant lock. And locking individual pieces 
of iscsiadm can lead to deadlock situations, if not sequenced correctly.
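The coarse serialization oVirt describes -- one lock wrapped around every
iscsiadm call -- can be sketched with an mkdir-based mutex, since mkdir is
atomic. run_locked and the lock path are illustrative names, not part of
any real tool:

```shell
# One-big-lock wrapper: only one wrapped command runs at a time.
LOCKDIR=${TMPDIR:-/tmp}/iscsiadm.lock.$$

run_locked() {
    # mkdir either atomically creates the lock dir or fails: a mutex
    until mkdir "$LOCKDIR" 2>/dev/null; do
        sleep 1                    # another caller holds the lock
    done
    "$@"                           # run the command (e.g. iscsiadm ...)
    rc=$?
    rmdir "$LOCKDIR"               # release the lock
    return $rc
}

# Demo with a harmless command instead of a real iscsiadm invocation:
run_locked echo "would run: iscsiadm -m node -l"
```

This is exactly the "one giant lock" approach; it is safe but serializes
everything, which is the cost the oVirt developers are trying to avoid.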

I hope that answers your questions.

On Wednesday, October 20, 2021 at 11:19:24 PM UTC-7 Uli wrote:

> Hi!
>
> Another thing is: Whether you like systemd or not: It runs many processes 
> automatically and concurrently.
> So it seems wise that iscsiadm may be run concurrently. If there are 
> issues, iscsiadm should use a MUTEX internally to avoid those, IMHO.
>
> Regards,
> Ulrich
> >>> Vojtech Juranek  schrieb am 20.10.2021 um 08:58 in
> Nachricht <4882593.9...@localhost.localdomain>:
> > Hi,
> > I'd like to follow up with discussion about concurrent usage iscsiadm 
> tool. 
> > It 
> > was discussed here about year ago, with suggestion not to use it 
> > concurrently 
> > [1]. On the other hand, comment [2] says it should be fine. Is the an 
> > agreement 
> > in open-iscsi community if the concurrent usage of iscsiadm is safe or 
> not? 
> > If 
> > it's not safe, is there any bug for open-iscsi describing the issue and 
> > potential problems if iscsiadm is used concurrently?
> > 
> > The motivation why I'm popping up this again is that in oVirt project 
> [3] we 
> > 
> > use a lock before calling iscsiadm to make sure it's not run in 
> parallel. 
> > This 
> > causes us various issues (see e.g. BZ #1787192 [2]) and we'd like to get 
> rid 
> > 
> > off this lock.
> > 
> > I run several thousands tests with our typical usage of iscsiadm [4], 
> > running 
> > iscsiadm in parallel and haven't spot any issue so far. This suggests 
> > removing 
> > the lock can be safe, but of course my tests could be just a pure luck. 
> So 
> > before removing this lock from our code base, I'd like to know your 
> thoughts 
> > 
> > about it.
> > 
> > Thanks
> > Vojta
> > 
> > [1] https://groups.google.com/g/open-iscsi/c/OHOdIm1W274/m/9l5NcPQHBAAJ 
> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1787192#c18 
> > [3] https://www.ovirt.org/ 
> > [4] https://github.com/oVirt/vdsm/blob/master/tests/storage/stress/ 
> > initiator.py
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "open-iscsi" group.
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to open-iscsi+...@googlegroups.com.
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/open-iscsi/4882593.9CP3fYhb5E%40localhost.l 
> > ocaldomain.
>
>
>
>
>



Re: iscsiadm: iface iter could not read dir /var/lib/iscsi/nodes/

2021-09-09 Thread The Lee-Man
Hi!

My apologies to bharatvivek2972. I believe Mike, who was answering 
questions here, moved on from open-iscsi, and I didn't notice this thread 
needed more attention.

I believe the problem here is that iscsiadm is not set up for parallel 
operation. So no, you can not run logins in parallel.

But, there is a new "no wait" option that one could pass to iscsiadm that 
would speed up such serialized requests. It's the "-W"/"--no_wait". Your 
distro may not have this feature yet, but it's upstream, and it tells 
iscsiadm not to wait for the login to complete. When the caller uses this 
option, iscsiadm returns as soon as it has sent the login request to the 
target. The login either fails or succeeds, in the background, as managed 
by iscsid. It's up to the caller to poll or check for success. Using this 
option, one could do:

/sbin/iscsiadm -m node -T <target> -p <portal> --login -W
 /sbin/iscsiadm -m node -T <target> -p <portal> --login -W
 /sbin/iscsiadm -m node -T <target> -p <portal> --login -W
 /sbin/iscsiadm -m node -T <target> -p <portal> --login -W

Of course, it might be faster to run:

/sbin/iscsiadm -m node -l -W

i.e. let iscsiadm log into all the targets in the node database, but in the 
case where you only want some of the targets to be logged into, the above 
should work.
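Since -W returns before the session is actually up, the caller needs a
polling step. A minimal generic sketch (wait_for is an illustrative helper,
not part of open-iscsi); the command it retries would typically be
something like grepping "iscsiadm -m session" for the target IQN:

```shell
# wait_for <retries> <cmd...>: retry <cmd> once per second until it
# succeeds or <retries> attempts are used up.
wait_for() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        if "$@"; then
            return 0               # command succeeded
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1                       # gave up
}

# e.g.: wait_for 30 sh -c 'iscsiadm -m session | grep -q "<target-iqn>"'
```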

In your case, Dhiraj, I'm not sure you are doing things in parallel. Are you? 
It might be that the node database was screwed up by some earlier parallel 
operation(s)?

On Wednesday, September 8, 2021 at 10:24:12 AM UTC-7 Dhiraj Surana wrote:

> hi , 
>
> did you get the resolution for this issue , even i am seeing hte similar 
> kind of issue while deleting and creating the iscsi and iser session.
>
> 20210903 10:44:36 [ 11175] run_system_cmd: Running: iscsiadm -m node 
> --targetname iqn.1986-03.com.ibm:2145.stand2fab3plus9.118.54.153.node1 -I 
> iface.stand2host1:1 -p 192.170.15.10:3260 --login iscsiadm: Could not 
> execute operation on all records: encountered iSCSI database failure 
> 20210903 10:44:36 [ 11175] run_system_cmd: rc=6, signal=0 core_available=0 
> when running iscsiadm -m node --targetname 
> iqn.1986-03.com.ibm:2145.stand2fab3plus9.118.54.153.node1 -I 
> iface.stand2host1:1 -p 192.170.15.10:3260 --login, 20210903 10:44:36 [ 
> 11175] -- 
> run_system_cmd: iscsiadm -m node --targetname 
> iqn.1986-03.com.ibm:2145.stand2fab3plus9.118.54.153.node1 -I 
> iface.stand2host1:1 -p 192.170.15.10:3260 --login failed, rc=6, signal=0, 
> core_available=0 - exiting 20210903 10:44:36 [ 11175] 
> --
>
> [root@stand2host1 nodes]# iscsiadm -m node -o delete
> iscsiadm: Could not execute operation on all records: encountered iSCSI 
> database failure
>
>
> On Friday, 26 December 2014 at 18:50:50 UTC+5:30 bharatv...@gmail.com 
> wrote:
>
>> Dear All,
>>
>> I was trying to login to a SAN Device logical volume from my linux 
>> server, which is connected to SAN Device with 10Gb NIC card.
>> IQN of volume was 
>> : 
>> iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0018.target0016
>>  
>> and I executed command as follows:
>>
>> "/sbin/iscsiadm -m node -T 
>> iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0018.target0016
>>  
>> -p 172.168.2.165 --login"
>>
>> But, it failed with:
>> Error Code : 6
>> Error message : iscsiadm: iface iter could not read dir 
>> /var/lib/iscsi/nodes/iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0048.target0012/
>> 172.168.2.165,3260,3.
>> iscsiadm: Could not execute operation on all records: encountered iSCSI 
>> database failure
>>
>> This error message was quite confusing as it shows that reading iface for 
>> IQN : 
>> "iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0048.target0012"
>>  
>> failed but I tried login to 
>> "iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0018.target0016".
>> Moreover, iSCSI session for 
>> "iqn.2001-03.jp.nec:storage01:ist-m000-sn-000942014090.lx-ddsldset-0048.target0012"
>>  
>> was logged out around 6 minutes ago.
>>
>> I am unaware about the iscsi internals, and could not understand the 
>> reason for it.
>> I suspect that at the time of iSCSI login all iface are read.
>>
>> Please help me in this.
>>
>> Thanks in anticipation.
>>
>



Release 2.1.5 tagged

2021-09-05 Thread The Lee-Man

I tagged and pushed release 2.1.5, which contains bug fixes. Enjoy!



Re: Antw: [EXT] ISCSI Target and Initiator on same host

2021-07-01 Thread The Lee-Man
I need a bit more information about your setup. What target are you using? 
I'm guessing LIO, since that's the most common (using targetcli), but there 
are others, and each one is different with respect to ACLs, using 
passwords, etc.

I use LIO/targetcli, and I usually work with the initiator and target on 
the same system, no problem. This is a network protocol, so it shouldn't 
matter if the initiator and target are on the same system or miles apart, 
as long as they are connected via the network.

With LIO, you have to either add your initiator IQN to the ACL for the 
target, or you need to put the target in "demo" mode. Though poorly named, 
demo mode allows connection without ACLs (it generates the ACLs on the fly).
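For reference, "demo mode" on an LIO target portal group boils down to a
few TPG attributes. A hedged targetcli sketch -- the path is a placeholder,
the attribute names are LIO's, but double-check the defaults on your
kernel:

```
# inside an interactive targetcli session, at /iscsi/<your-iqn>/tpg1:
set attribute authentication=0 generate_node_acls=1 \
              cache_dynamic_acls=1 demo_mode_write_protect=0
```

With generate_node_acls=1, any initiator that can reach the portal gets a
dynamically generated ACL -- the "generates the ACLs on the fly" behavior
described above.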

Are you using initiator and/or target name/password protection? If so, that 
adds a layer of complication. For testing, I do not set up any 
names/passwords.

How do you try to connect to your target? What distribution are you on and 
what version of that distro? Do you run iscsi discovery first, then 
connect? Show us the sequence of commands you use, and the actual error 
messages you are getting.

Are you setting up the target to automatically reconnect on each reboot? 
If so, the steps you take to do that may differ per distribution. And what 
systemd services do you have running?

You need to supply much more information, in general, when asking for 
technical help. :)

On Wednesday, June 30, 2021 at 8:15:03 AM UTC-7 riaan.p...@4cgroup.co.za 
wrote:

> I get strange messages in my logs when I tried to do that, and get disk 
> "flapping" where the disk just appears and reappears continuously after a 
> reboot. Logically it would make sense that you can do this, but 
> practically there are weird issues. Would you guys say it might be a 
> misconfiguration?
>
> On Wednesday, 30 June 2021 at 15:10:54 UTC+2 Paul Koning wrote:
>
>>
>>
>> > On Jun 30, 2021, at 7:29 AM, Ulrich Windl <
>> ulrich...@rz.uni-regensburg.de> wrote:
>> > 
>> > I think I did that about 10 years ago...
>> > 
>>  Riaan Pretorius  schrieb am 30.06.2021 um 
>> 12:41
>> > in Nachricht <07b30064-72b3-42c1...@googlegroups.com>:
>> >> I have an interesting question to ask:
>> >> 
>> >> Is it possible to share the target on the same server as a initiator ?
>> >> e.g. server1: target -> server1: initiator 
>>
>> Yes, I've used that in a test setup when I needed to put a file system on 
>> iSCSI (to test pNFS).
>>
>> paul
>>
>>



Re: Submitting a change

2021-06-03 Thread The Lee-Man
On Thursday, June 3, 2021 at 8:01:35 AM UTC-7 Anjali Kulkarni wrote:

> *fall. -> foll. 
> *Gothic -> GitHub
>
> On May 25, 2021, at 12:26 PM, Anjali Kulkarni  
> wrote:
>
> Hi, 
> I am interested in submitting a change upstream for open-iscsi. How can I 
> go about doing this? 
> Also, is the iscsi utils on the fall. Gothic location, used on redhat as 
> well?
> https://github.com/open-iscsi/open-iscsi
>
> Thanks
> Anjali
>
>
> There are a couple of ways to submit changes. The best way is to submit a 
> pull request on github.
>
 

> Open-iscsi is at github.com/open-iscsi/open-iscsi. If you don't use 
> github, you could submit a patch to this list, though you must be careful 
> your email client doesn't munge up the patch.
>

-- 
Lee Duncan



Shouldn't firmware nodes be marked as "onboot", for consistency?

2021-05-10 Thread The Lee-Man

Hi All:

I'm working on getting iBFT (firmware) booting working well using 
open-iscsi with dual paths and DM/multipathing, and I noticed something.

When you run "iscsiadm -m discovery -t fw", it creates node database 
entries for your firmware targets. But it sets "node.startup", and 
"node.conn[0].startup" to "manual" instead of "onboot", even though 
open-iscsi treats these entries like "onboot", since they are based on 
firmware.

I find it a little more consistent if they are marked as "onboot". A simple 
patch to iscsiadm would change this. Any objections?
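In the meantime, 'iscsiadm -m node -T <iqn> -p <portal> --op update -n
node.startup -v onboot' (and the same for node.conn[0].startup) flips the
records by hand. The rewrite it performs amounts to this sketch over sample
record lines (not a real database file):

```shell
# Sample of the two startup settings as "-t fw" discovery writes them:
record='node.startup = manual
node.conn[0].startup = manual'

# The edit --op update effectively makes (onboot instead of manual):
echo "$record" | sed \
    -e 's/^node.startup = manual$/node.startup = onboot/' \
    -e 's/^node.conn\[0\].startup = manual$/node.conn[0].startup = onboot/'
```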



Tagged new version of open-iscsi

2021-03-11 Thread The Lee-Man

I have tagged version 2.1.4 of open-iscsi. Follow the link for more info:

open-iscsi release here 




Re: iface.c:36:21: fatal error: libkmod.h: No such file or directory

2021-02-19 Thread The Lee-Man
open-iscsi now relies on libkmod, which didn't exist in the 3.10-kernel 
era. You'll have to use an older open-iscsi, such as 2.0.876.

On Wednesday, February 17, 2021 at 7:58:35 AM UTC-8 Manish Dusane wrote:

> All,
>
> Trying to compile latest open-iscsi-master on 3.10.0-957.10.1.el7.x86_64
> Getting the following libkmod.h error.
>
> What am I missing?
>
> TIA,
> Manish
>
> $ make
> make -C libopeniscsiusr
> make[1]: Entering directory `/open-iscsi-master/libopeniscsiusr'
> cc -O2 -g -Wall -Werror -Wextra -fvisibility=hidden -fPIC-c -o iface.o 
> iface.c
> iface.c:36:21: fatal error: libkmod.h: No such file or directory
>  #include <libkmod.h> 
>  ^
> compilation terminated.
> make[1]: *** [iface.o] Error 1
> make[1]: Leaving directory `/open-iscsi-master/libopeniscsiusr'
> make: *** [user] Error 2
> $
>
>



Re: Clarification request on open-iscsi affected by uIP vulnerabilities (AMNESIA:33)

2020-12-18 Thread The Lee-Man
Hi Christian:

Chris Leech just merged in the mitigations for these CVEs and tagged a new 
release.

These CVEs were all related to the uip package that iscsiuio uses. But in 
fact iscsiuio only uses uip for network "services", such as DHCP, ARP, etc, 
and not for normal TCP/IP communications. So the risk was, honestly, never 
very high.

I believe all the CVEs were published 12/8 (or so), but we were working on 
them for a while before that.

P.S. Thanks to Chris for doing the mitigation work and research, and then 
merging/publishing the result!

On Thursday, December 17, 2020 at 10:41:06 AM UTC-8 Christian Fischer wrote:

> Hi,
>
> the following CVEs related to the recent AMNESIA:33 vulnerabilities 
> affecting various open source network stack components:
>
> https://nvd.nist.gov/vuln/detail/CVE-2020-13987
> https://nvd.nist.gov/vuln/detail/CVE-2020-13988
> https://nvd.nist.gov/vuln/detail/CVE-2020-17437
> https://nvd.nist.gov/vuln/detail/CVE-2020-17438
> https://nvd.nist.gov/vuln/detail/CVE-2020-17439
> https://nvd.nist.gov/vuln/detail/CVE-2020-17440
> https://nvd.nist.gov/vuln/detail/CVE-2020-24334
> https://nvd.nist.gov/vuln/detail/CVE-2020-24335 (not published yet)
>
> While the CVEs are mentioning Contiki and / or uIP a paper [1] of the 
> research teams reveals this detail:
>
> > The open-iscsi project, which provides an implementation of the iSCSI
> > protocol used by Linux distributions, such as Red Hat, Fedora, SUSE
> > and Debian, also imports part of the uIP code. Again, we were able to
> > detect that some CVEs apply to it.
>
> and
>
> > Some of the vendors and projects using these original stacks, such as
> > open-iscsi, issued their own patches.
>
> Unfortunately the "some CVEs apply to it" is not further specified (not 
> even the CVEs for open-iscsi are listed) and I wasn't able to pinpoint 
> the exact details. Some sources [2] mention 2.1.12 as the fixed version 
> of open-iscsi (which is wrong as the latest available version is 2.1.2 
> from July 2020, I have already contacted the CISA about that a few days 
> ago but haven't received any response yet) while others [3] mention <= 
> 2.1.1 as vulnerable.
>
> As none of the current releases listed at [4] mention the uIP 
> vulnerabilities in some way i would like to ask for clarification of the 
> following:
>
> - Which CVEs of uIP applies to the code base of uIP imported into 
> open-iscsi?
> - Which releases of open-iscsi are affected?
> - Which release of open-iscsi is fixing one or more of this 
> vulnerabilities?
>
> Thank you very much in advance for a response.
>
> Regards,
>
> [1] 
>
> https://www.forescout.com/company/resources/amnesia33-how-tcp-ip-stacks-breed-critical-vulnerabilities-in-iot-ot-and-it-devices/
> [2] https://us-cert.cisa.gov/ics/advisories/icsa-20-343-01
> [3] 
>
> https://www.heise.de/news/Amnesia-33-Sicherheitshinweise-und-Updates-zu-den-TCP-IP-Lecks-im-Ueberblick-4984341.html
> [4] https://github.com/open-iscsi/open-iscsi/releases
>
> -- 
>
> Christian Fischer | PGP Key: 0x54F3CE5B76C597AD
> Greenbone Networks GmbH | https://www.greenbone.net
> Neumarkt 12, 49074 Osnabrück, Germany | AG Osnabrück, HR B 202460
> Geschäftsführer: Dr. Jan-Oliver Wagner
>



Re: Hi help me please

2020-12-17 Thread The Lee-Man
As Ulrich replied, there's not much we can do with the data you provided.

On Wednesday, December 16, 2020 at 12:29:20 PM UTC-8 go xayyasang wrote:

> [root@target ~]# iscsiadm -m node -o show
> iscsiadm: No records found
>
>
That's normal if you have no records in your database. If you want records 
in your database, you have to perform discovery.

Please browse the README file that comes with open-iscsi. We don't have a 
general open-iscsi HowTo tutorial, but search the internet (as I just did), 
and you'll find several.

Next time, supply: OS and version used, open-iscsi version number, what you 
are trying to do, and all steps leading up to your error, so that we can 
reproduce your error if needed.



Re: [PATCH] iscsi: Do Not set param when sock is NULL

2020-11-21 Thread The Lee-Man
The patch concept is good. I have a couple of issues though.

First, you say you will return EPERM, but you return ENOTCONN. Update the 
text to match to code?

Also, the indentation seems messed up? It's hard to tell using the web 
interface. If so, please fix.

On Monday, November 16, 2020 at 9:08:59 PM UTC-8 Gulam Mohamed wrote:

> Gentle reminder.
>
> Regards,
> Gulam Mohamed.
>
> -Original Message-
> From: Gulam Mohamed 
> Sent: Thursday, November 5, 2020 11:11 AM
> To: Lee Duncan ; Chris Leech ; James 
> E.J. Bottomley ; Martin K. Petersen <
> martin@oracle.com>; open-...@googlegroups.com; 
> linux...@vger.kernel.org; linux-...@vger.kernel.org
> Cc: Junxiao Bi 
> Subject: [PATCH] iscsi: Do Not set param when sock is NULL
>
> Description
> =
> 1. This Kernel panic could be due to a timing issue when there is a race 
> between the sync thread and the initiator was processing of a login 
> response from the target. The session re-open can be invoked from two places
> a. Sessions sync thread when the iscsid restart
> b. From iscsid through iscsi error handler
> 2. The session reopen sequence is as follows in user-space 
> (iscsi-initiator-utils)
> a. Disconnect the connection
> b. Then send the stop connection request to the kernel which releases the 
> connection (releases the socket)
> c. Queues the reopen for 2 seconds delay
> d. Once the delay expires, create the TCP connection again by calling the 
> connect() call
> e. Poll for the connection
> f. When poll is successful i.e when the TCP connection is established, it 
> performs
> i. Creation of session and connection data structures
> ii. Bind the connection to the session. This is the place where we assign 
> the sock to tcp_sw_conn->sock
> iii. Sets the parameters like target name, persistent address etc .
> iv. Creates the login pdu
> v. Sends the login pdu to kernel
> vi. Returns to the main loop to process further events. The kernel then 
> sends the login request over to the target node
> g. Once login response with success is received, it enters into full 
> feature phase and sets the negotiable parameters like max_recv_data_length, 
> max_transmit_length, data_digest etc . 3. While setting the negotiable 
> parameters by calling "iscsi_session_set_neg_params()", kernel panicked as 
> sock was NULL
>
> What happened here is
> 
> 1. Before initiator received the login response mentioned in above point 
> 2.f.v, another reopen request was sent from the error handler/sync session 
> for the same session, as the initiator utils was in main loop to process 
> further events (as 
> mentioned in point 2.f.vi above). 
> 2. While processing this reopen, it stopped the connection which released 
> the socket and queued this connection and at this point of time the login 
> response was received for the earlier one
> 3. The kernel passed over this response to user-space which then sent the 
> set_neg_params request to kernel
> 4. As the connection was stopped, the sock was NULL and hence while the 
> kernel was processing the set param request from user-space, it panicked
>
> Fix
> 
> 1. While setting the set_param in kernel, we need to check if sock is NULL
> 2. If the sock is NULL, then return EPERM (operation not permitted)
> 3. Due to this error handler will be invoked in user-space again to 
> recover the session
>
> Signed-off-by: Gulam Mohamed 
> Reviewed-by: Junxiao Bi 
> ---
> diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
> index df47557a02a3..fd668a194053 100644
> --- a/drivers/scsi/iscsi_tcp.c
> +++ b/drivers/scsi/iscsi_tcp.c
> @@ -711,6 +711,12 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
> 	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
> 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
>
> +	if (!tcp_sw_conn->sock) {
> +		iscsi_conn_printk(KERN_ERR, conn,
> +				  "Cannot set param as sock is NULL\n");
> +		return -ENOTCONN;
> +	}
> +
> 	switch(param) {
> 	case ISCSI_PARAM_HDRDGST_EN:
> 		iscsi_set_param(cls_conn, param, buf, buflen);
> --
> 2.18.4
>



Re: [PATCH] iscsid: drop uid privileges after locking memory

2020-10-26 Thread The Lee-Man
Hi Anthony:

On Thursday, October 22, 2020 at 12:33:08 PM UTC-7 Anthony Iliopoulos wrote:

> Move the setuid call after mlockall, since the latter requires elevated 
> privileges, and will cause iscsid startup to fail when an unprivileged 
> uid is specified. 
>

I appreciate your patch, but I'm not sure this one has any value.

When I run regular iscsid (not patched), it dies almost at the start of 
main(), in the mgmt_ipc_listen() call, if I'm not root. So it never even 
gets to your patch.

Was there an actual bug or problem you were trying to fix?

P.S. This patch was mangled. Please submit patches in text only, or better 
yet as a github pull request, since I don't have time to edit submitted 
patches. Thanks!

>
> Signed-off-by: Anthony Iliopoulos  
> --- 
> usr/iscsid.c | 12 ++++++------ 
> 1 file changed, 6 insertions(+), 6 deletions(-) 
>
> diff --git a/usr/iscsid.c b/usr/iscsid.c 
> index e50149823bee..9f1a09fe28f2 100644 
> --- a/usr/iscsid.c 
> +++ b/usr/iscsid.c 
> @@ -525,12 +525,6 @@ int main(int argc, char *argv[]) 
> 		} 
> 	} 
>
> -	if (uid && setuid(uid) < 0) { 
> -		log_error("Unable to setuid to %d", uid); 
> -		log_close(log_pid); 
> -		exit(ISCSI_ERR); 
> -	} 
> -
> 	memset(&daemon_config, 0, sizeof (daemon_config)); 
> 	daemon_config.pid_file = pid_file; 
> 	daemon_config.config_file = config_file; 
> @@ -601,6 +595,12 @@ int main(int argc, char *argv[]) 
> 		exit(ISCSI_ERR); 
> 	} 
>
> +	if (uid && setuid(uid) < 0) { 
> +		log_error("Unable to setuid to %d", uid); 
> +		log_close(log_pid); 
> +		exit(ISCSI_ERR); 
> +	} 
> +
> 	set_state_to_ready(); 
> 	event_loop(ipc, control_fd, mgmt_ipc_fd); 
>
> -- 
> 2.29.0 
>
>
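For context on why the ordering matters, here is a standalone sketch (in Python via ctypes, not iscsid's actual C code): mlockall(2) generally needs root or CAP_IPC_LOCK, so it must run before the process drops its uid. The flag values assume a Linux host, and the function name is illustrative only.

```python
import ctypes
import os

# Linux values for the mlockall(2) flags (assumption: Linux host).
MCL_CURRENT, MCL_FUTURE = 1, 2

def lock_then_drop(uid=0):
    """Sketch of the corrected ordering from the patch: lock memory
    while still privileged, and only then drop the uid.

    uid=0 stands in for "no -u option given", matching the iscsid
    code's `if (uid && setuid(uid) < 0)` guard."""
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        # Without CAP_IPC_LOCK this is where an unprivileged start fails,
        # which is why it must come before setuid(), not after.
        print("mlockall failed (needs root or CAP_IPC_LOCK)")
    if uid:
        os.setuid(uid)  # raises PermissionError if already unprivileged
    return "ready"
```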



Re: Slow iSCSI tape performance

2020-10-25 Thread The Lee-Man
I haven't heard about disabling TUR for iSCSI tape improvement. Even if 
true, I'm not sure how you'd do that. You'd need to modify your target IMHO 
to always reply "ready" for TUR. But TUR is used to clear some conditions 
at the target, if present, so not sure about the semantics of ignoring 
TURs. Have you tried setting the streaming bit for the tape drive?

On Wednesday, October 21, 2020 at 6:43:22 AM UTC-7 david.p...@perdrix.co.uk 
wrote:

> I've seen a report that disabling Test Unit Ready across the iSCSI link 
> can hugely improve performance of remote tape drives.
>
> Is this something I do at the machine hosting the tape drive or at the 
> client?
>
> Is it relevant to open iscsi?
>
> Thanks
> David
>



Re: [PATCH] TODO: Update to todo list.

2020-10-13 Thread The Lee-Man
I applied this patch, but I had to perform patch surgery to do so. My work 
flow is not set up for email patches. Please submit as a pull request on 
github.com/open-iscsi/open-iscsi next time. Thanks!

On Friday, September 25, 2020 at 9:28:36 AM UTC-7 sonukum...@gmail.com 
wrote:

> This patch is to update the todo list. Tasks are suggested by The
> Lee-Man
>
> Signed-off-by: Sonu k 
> ---
> TODO | 13 +
> 1 file changed, 13 insertions(+)
>
> diff --git a/TODO b/TODO
> index 7328180..a3d1d91 100644
> --- a/TODO
> +++ b/TODO
> @@ -377,3 +377,16 @@ I am working on this one. Hopefully it should be done soon.
>  it gets out of sync with the kernel version, and that's not good.
>
>  ---
> +
> +13. Node database
> +
> +Current implementation of the node database is not scalable. It handles the
> +database using a bunch of files and directories. It has no locking and
> +cannot handle thousands of targets.
> +
> +---
> +
> +14. Migration of duplicated functionality out of iscsid/iscsiadm into
> +libopeniscsi, and addition of better error handling.
> +
> +---
> -- 
> 1.8.3.1
>
>



Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-09-15 Thread The Lee-Man
ault', '--portal', '
> 0.0.0.0:0,0', '--login'] failed rc=8 out='Logging in to [iface: default, 
> target: iqn.2003-01.org.vm-18-220.iqn2, portal: 0.0.0.0,0]' err='iscsiadm: 
> Could not login to [iface: default, target: iqn.2003-01.org.vm-18-220.iqn2, 
> portal: 0.0.0.0,0].\niscsiadm: initiator reported error (8 - connection 
> timed out)\niscsiadm: Could not log into all portals'
> 2020-08-18 16:03:02,321 INFO(login_0) Login to target 
> iqn.2003-01.org.vm-18-220.iqn1 portal 10.35.18.220:3260,1 (nowait=False)
> 2020-08-18 16:03:02,695 INFO(MainThread) Connecting completed in 
> 240.752s
>
> -- Simulating one portal down (2 connections down) with one worker, using 
> node login with --no-wait
>
> # python3 ./initiator.py  -j 1 -i 10.35.18.220 10.35.18.156  -d 
> 10.35.18.156  --nowait
>
> 2020-08-18 16:16:05,802 INFO(MainThread) Removing prior sessions and 
> nodes
> 2020-08-18 16:16:06,075 INFO(MainThread) Deleting all nodes
> 2020-08-18 16:16:06,090 INFO(MainThread) No active sessions
> 2020-08-18 16:16:06,130 INFO(MainThread) Setting 10.35.18.156 as 
> invalid address for target iqn.2003-01.org.vm-18-220.iqn2
> 2020-08-18 16:16:06,131 INFO(MainThread) Setting 10.35.18.156 as 
> invalid address for target iqn.2003-01.org.vm-18-220.iqn1
> 2020-08-18 16:16:06,131 INFO(MainThread) Discovered connections: 
> {('iqn.2003-01.org.vm-18-220.iqn2', '10.35.18.220:3260,1'), 
> ('iqn.2003-01.org.vm-18-220.iqn1', '0.0.0.0:0,0'), 
> ('iqn.2003-01.org.vm-18-220.iqn1', '10.35.18.220:3260,1'), 
> ('iqn.2003-01.org.vm-18-220.iqn2', '0.0.0.0:0,0')}
> 2020-08-18 16:16:06,132 INFO(MainThread) Adding node for target 
> iqn.2003-01.org.vm-18-220.iqn2 portal 10.35.18.220:3260,1
> 2020-08-18 16:16:06,147 INFO(MainThread) Adding node for target 
> iqn.2003-01.org.vm-18-220.iqn1 portal 0.0.0.0:0,0
> 2020-08-18 16:16:06,162 INFO(MainThread) Adding node for target 
> iqn.2003-01.org.vm-18-220.iqn1 portal 10.35.18.220:3260,1
> 2020-08-18 16:16:06,176 INFO(MainThread) Adding node for target 
> iqn.2003-01.org.vm-18-220.iqn2 portal 0.0.0.0:0,0
> 2020-08-18 16:16:06,190 INFO(login_0) Login to target 
> iqn.2003-01.org.vm-18-220.iqn2 portal 10.35.18.220:3260,1 (nowait=True)
> 2020-08-18 16:16:06,324 INFO(login_0) Login to target 
> iqn.2003-01.org.vm-18-220.iqn1 portal 0.0.0.0:0,0 (nowait=True)
> 2020-08-18 16:18:06,351 INFO(login_0) Login to target 
> iqn.2003-01.org.vm-18-220.iqn1 portal 10.35.18.220:3260,1 (nowait=True)
> 2020-08-18 16:18:06,356 ERROR   (MainThread) Job failed: Command 
> ['iscsiadm', '--mode', 'node', '--targetname', 
> 'iqn.2003-01.org.vm-18-220.iqn1', '--interface', 'default', '--portal', '
> 0.0.0.0:0,0', '--login', '--no_wait'] failed rc=8 out='Logging in to 
> [iface: default, target: iqn.2003-01.org.vm-18-220.iqn1, portal: 
> 0.0.0.0,0]' err='iscsiadm: Could not login to [iface: default, target: 
> iqn.2003-01.org.vm-18-220.iqn1, portal: 0.0.0.0,0].\niscsiadm: initiator 
> reported error (8 - connection timed out)\niscsiadm: Could not log into all 
> portals'
> 2020-08-18 16:18:06,589 INFO(login_0) Login to target 
> iqn.2003-01.org.vm-18-220.iqn2 portal 0.0.0.0:0,0 (nowait=True)
> 2020-08-18 16:20:06,643 ERROR   (MainThread) Job failed: Command 
> ['iscsiadm', '--mode', 'node', '--targetname', 
> 'iqn.2003-01.org.vm-18-220.iqn2', '--interface', 'default', '--portal', '
> 0.0.0.0:0,0', '--login', '--no_wait'] failed rc=8 out='Logging in to 
> [iface: default, target: iqn.2003-01.org.vm-18-220.iqn2, portal: 
> 0.0.0.0,0]' err='iscsiadm: Could not login to [iface: default, target: 
> iqn.2003-01.org.vm-18-220.iqn2, portal: 0.0.0.0,0].\niscsiadm: initiator 
> reported error (8 - connection timed out)\niscsiadm: Could not log into all 
> portals'
> 2020-08-18 16:20:06,656 INFO(MainThread) Connecting completed in 
> 240.524s
>
>
> Thanks for helping out,
> Amit
>
> On Thursday, August 13, 2020 at 5:32:26 PM UTC+3 nir...@gmail.com wrote:
>
>> On Thu, Aug 13, 2020 at 1:32 AM The Lee-Man  wrote:
>>
>>> On Sunday, August 9, 2020 at 11:08:50 AM UTC-7, Amit Bawer wrote:
>>>>
>>>> ...
>>>>
>>>>>
>>>>>> The other option is to use one login-all call without parallelism, 
>>>>>> but that would have other implications on our system to consider.
>>>>>>
>>>>>
>>>>> Such as? 
>>>>>
>>>> As mentioned above,  unless there is a way to specify a list of targets 
>>>> and portals for a single login (all) command.
>>>>
>>>>>
>>>>>> Your answers would be helpful once again.
>>>>>

Re: BUG, lockdep warnings during iSCSI login?

2020-08-15 Thread The Lee-Man
See https://www.spinics.net/lists/kernel/msg3607739.html



Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-08-12 Thread The Lee-Man
On Sunday, August 9, 2020 at 11:08:50 AM UTC-7, Amit Bawer wrote:
>
> ...
>
>>
>>> The other option is to use one login-all call without parallelism, but 
>>> that would have other implications on our system to consider.
>>>
>>
>> Such as? 
>>
> As mentioned above,  unless there is a way to specify a list of targets 
> and portals for a single login (all) command.
>
>>
>>> Your answers would be helpful once again.
>>>
>>> Thanks,
>>> - Amit
>>>
>>>
>> You might be interested in a new feature I'm considering adding to 
>> iscsiadm to do asynchronous logins. In other words, the iscsiadm could, 
>> when asked to login to one or more targets, would send the login request to 
>> the targets, then return success immediately. It is then up to the end-user 
>> (you in this case) to poll for when the target actually shows up.
>>
> This sounds very interesting, but probably will be available to us only on 
> later RHEL releases, if chosen to be delivered downstream.
> At present it seems we can only use the login-all way or logins in a 
> dedicated threads per target-portal.
>
>>
>> ...
>>
>
So you can only use RH-released packages? That's fine with me, but I'm 
asking you to test a new feature and see if it fixes your problems. If it 
helped, I would add it here in this repo, and Red Hat would get it by 
default when they update, which they do regularly, as does my company 
(SUSE).

Just as a "side" point, I wouldn't attack your problem by manually listing 
nodes to login to.

It does seem as if you assume you are the only iscsi user on the system. In 
that case, you have complete control of the node database. Assuming your 
targets do not change, you can set up your node database once and never 
have to discover iscsi targets again. Of course if targets change, you can 
update your node database, but only as needed, i.e. full discovery 
shouldn't be needed each time you start up, unless targets are really 
changing all the time in your environment.

If you do discovery and have nodes in your node database you don't like, 
just remove them.

Another point about your scheme: you are setting each node's 'startup' to 
'manual', but manual is the default, and since you seem to own the 
open-iscsi code on this system, you can ensure the default is manual. 
Perhaps because this is a test?

So, again, I ask you if you will test the async login code? It's really not 
much extra work -- just a "git clone" and a "make install" (mostly). If 
not, the async feature may make it into iscsiadm any way, some time soon, 
but I'd really prefer other testers for this feature before that.



Re: Todo list for open-iscsi

2020-08-07 Thread The Lee-Man
Heh. I just realized you uncovered one item you could do: update the todo 
list! But there *are* things in that list that you could help with.



Re: [PATCH] scsi: iscsi: jump to correct label in an error path

2020-08-07 Thread The Lee-Man
On Sunday, July 26, 2020 at 7:40:48 PM UTC-7, Jing Xiangfeng wrote:
>
> In current code, it jumps to put_host() when scsi_host_lookup() 
> failes to get host. Jump to correct label to fix it. 
>
> Signed-off-by: Jing Xiangfeng  
> --- 
>  drivers/scsi/scsi_transport_iscsi.c | 11 +++++------ 
>  1 file changed, 5 insertions(+), 6 deletions(-) 
>
> diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c 
> index 7ae5024..5984596 100644 
> --- a/drivers/scsi/scsi_transport_iscsi.c 
> +++ b/drivers/scsi/scsi_transport_iscsi.c 
> @@ -3341,7 +3341,7 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport, 
> 		pr_err("%s could not find host no %u\n", 
> 		       __func__, ev->u.new_flashnode.host_no); 
> 		err = -ENODEV; 
> -		goto put_host; 
> +		goto exit_new_fnode; 
> 	} 
>
> 	index = transport->new_flashnode(shost, data, len); 
> @@ -3351,7 +3351,6 @@ static int iscsi_new_flashnode(struct iscsi_transport *transport, 
> 	else 
> 		err = -EIO; 
>
> -put_host: 
> 	scsi_host_put(shost); 
>
>  exit_new_fnode: 
> @@ -3376,7 +3375,7 @@ static int iscsi_del_flashnode(struct iscsi_transport *transport, 
> 		pr_err("%s could not find host no %u\n", 
> 		       __func__, ev->u.del_flashnode.host_no); 
> 		err = -ENODEV; 
> -		goto put_host; 
> +		goto exit_del_fnode; 
> 	} 
>
> 	idx = ev->u.del_flashnode.flashnode_idx; 
> @@ -3418,7 +3417,7 @@ static int iscsi_login_flashnode(struct iscsi_transport *transport, 
> 		pr_err("%s could not find host no %u\n", 
> 		       __func__, ev->u.login_flashnode.host_no); 
> 		err = -ENODEV; 
> -		goto put_host; 
> +		goto exit_login_fnode; 
> 	} 
>
> 	idx = ev->u.login_flashnode.flashnode_idx; 
> @@ -3470,7 +3469,7 @@ static int iscsi_logout_flashnode(struct iscsi_transport *transport, 
> 		pr_err("%s could not find host no %u\n", 
> 		       __func__, ev->u.logout_flashnode.host_no); 
> 		err = -ENODEV; 
> -		goto put_host; 
> +		goto exit_logout_fnode; 
> 	} 
>
> 	idx = ev->u.logout_flashnode.flashnode_idx; 
> @@ -3520,7 +3519,7 @@ static int iscsi_logout_flashnode_sid(struct iscsi_transport *transport, 
> 		pr_err("%s could not find host no %u\n", 
> 		       __func__, ev->u.logout_flashnode.host_no); 
> 		err = -ENODEV; 
> -		goto put_host; 
> +		goto exit_logout_sid; 
> 	} 
>
> 	session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid); 
> -- 
> 1.8.3.1 
>

Reviewed-by: Lee Duncan  



Re: Todo list for open-iscsi

2020-08-07 Thread The Lee-Man
On Thursday, July 30, 2020 at 9:58:41 PM UTC-7, sonu kumar wrote:
>
> Hi All, 
>
> I looked into the TODO list of open-iscsi. It is quite old and written 
> on July 7th,2011. Do we have any updated version of it? 
>
> I am looking for some low hanging tasks to getting started with 
> open-iscsi and iscsi. It would be really helpful if somebody helps me 
> to figure it out. 
>
> Thanks 
>

You don't need to "fix" something to learn about the code. Try tracing an 
iscsi login request all the way from the command line to the target. What 
happens? How do the "/dev/sd*" discs show up? What does iscsiadm do? What 
does iscsid do? What does the kernel do? What does the target do? What do 
the commands look like on the wire?

There are a lot of things to improve. The node database is a joke, in that 
it's not a database, it's a bunch of files and directories, and it has no 
locking and can't handle thousands of targets.

Many of the problems left in open-iscsi are non-trivial, or they would have 
been fixed.

Another area you can consider is further migration of duplicated 
functionality out of iscsid/iscsiadm and into libopeniscsi. And then 
there's adding better error handling, since open-iscsi just punts in such 
cases and disconnects/reconnects.

What about adding in multi-queue to the kernel code, or better network 
connections with the kernel. Support for network namespaces? What about 
zeroconf for initiator/target discovery? Also, open-iscsi has very little 
security -- even if you require a login/password to connect, the IO is sent 
over the network in the clear. Is there a solution for that?

As I said, many of these problems are non-trivial. But just learning the 
code can be done without tackling these larger issues I believe. Those on 
this list (including me) would be glad to help you. You will tend to get 
more intelligent answers if you ask intelligent questions.



Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-08-07 Thread The Lee-Man
On Monday, July 27, 2020 at 10:38:05 AM UTC-7, Amit Bawer wrote:
>
> Thank you for your answers,
>
> The motivation behind the original question is for reducing the waiting 
> time for different iscsi connections logins
> in case some of the portals are down.
>
> We have a limitation on our RHEV system where all logins to listed iscsi 
> targets should finish within 180 seconds in total.
> In our current implementation we serialize the iscsiadm node logins one 
> after the other,
> each is for specific target and portal. In such scheme, each login would 
> wait 120 seconds in case a portal is down
> (default 15 seconds login timeout * 8 login retries), so if we have 2 or 
> more connections down, we spend at least 240 seconds
> which exceeds our 180 seconds time limit and the entire operation is 
> considered to be failed (RHEV-wise).
>

Of course these times are tunable, as the README distributed with 
open-iscsi suggests. But each setting has a trade-off. For example, if you 
shorten the timeout, you may miss connecting to a target that is just 
temporarily unreachable. 
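The arithmetic behind the 120- and 240-second figures quoted above can be sketched as follows. The defaults named here (15-second login timeout from node.conn[0].timeo.login_timeout, 8 retries from node.session.initial_login_retry_max) are taken from the thread; treat them as assumptions for your own build.

```python
# Worst-case time a serial login sequence spends on dead portals:
# login_timeout * retries per dead portal, summed over the dead portals.
def worst_case_serial(dead_portals, login_timeout=15, retries=8):
    return dead_portals * login_timeout * retries

print(worst_case_serial(1))  # 120 seconds for one dead portal
print(worst_case_serial(2))  # 240 seconds -- over the 180 s budget
```

This is why shortening the timeout (or lowering the retry count) trades total wait time against the chance of missing a temporarily unreachable target.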

>
> Testing [1] different login schemes is summarized in the following table 
> (logins to 2 targets with 2 portals each).
> It seems that either login-all nodes after creating them, as suggested in 
> previous answer here, compares in  total time spent 
> with doing specific node logins concurrently (i.e. running iscsiadm -m 
> node -T target -p portal -I interface  -l in parallel per
> each target-portal), for both cases of all portals being online and when 
> one portal is down:
>
> Login scheme              Online Portals  Active Sessions  Total Login Time (seconds)
> -------------------------------------------------------------------------------------
> All at once               2/2             4                2.1
> All at once               1/2             2                120.2
> Serial target-portal      2/2             4                8.5
> Serial target-portal      1/2             2                243.5
> Concurrent target-portal  2/2             4                2.1
> Concurrent target-portal  1/2             2                120.1
>

So it looks like "All at once" is as fast as concurrent? I must be missing 
something. Maybe I'm misunderstanding what "all at once" means? 

>
> Using concurrent target-portal logins seems to be preferable in our 
> perspective as it allows to connect only to the
> specified target and portals without the risk of intermixing with other 
> potential iscsi targets.
>

Okay, maybe that explains it. You don't trust the "all" option? You are, 
after all, in charge of the node database. But of course that's your 
choice. 

>
> The node creation part is kept serial in all tests here and we have seen 
> it may result in the iscsi DB issues if run in parallel.
> But using only node logins in parallel doesn't seems to have issues for at 
> least 1000 tries of out tests.
>

In general the heavy lifting here is done by the kernel, which has proper 
multi-thread locking. And I believe iscsiadm has a single lock to the 
kernel communication socket, so that doesn't get messed up. So I wouldn't 
go as far as guaranteeing that this will work, but I agree it certainly 
seems to reliably work. 

>
> The question to be asked here is it advisable by open-iscsi?
> I know I have been answered already that iscsiadm is racy, but does it 
> applies to node logins as well?
>

I guess I answered that. I wouldn't advise against it, but I also wouldn't 
call best practice in general. 

>
> The other option is to use one login-all call without parallelism, but 
> that would have other implications on our system to consider.
>

Such as? 

>
> Your answers would be helpful once again.
>
> Thanks,
> - Amit
>
>
You might be interested in a new feature I'm considering adding to iscsiadm 
to do asynchronous logins. In other words, when asked to log in to one or 
more targets, iscsiadm could send the login request to the 
targets, then return success immediately. It is then up to the end-user 
(you in this case) to poll for when the target actually shows up.

This would mean that your system boot could occur much more quickly, 
especially when using for example multipathing on top of two paths to a 
target, and one path is not up. The problem is that this adds a layer of 
functionality needed in the client (again, you in this case), since the 
client has to poll for success, handle timeouts, etc. Also, this is just 
test code, so you could try it at your own risk. :)
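
Since the feature is only being considered, the client-side polling it would require is also hypothetical; a minimal sketch (session-matching via `iscsiadm -m session` output and the timeout handling are assumptions, and ISCSIADM is parameterized so the loop can be tested without a real target):

```shell
#!/bin/sh
# Hypothetical client loop for the proposed async login mode: after
# iscsiadm returns immediately, poll until the session appears or give up.
ISCSIADM=${ISCSIADM:-iscsiadm}

wait_for_session() {
    # $1 = target IQN, $2 = timeout in seconds
    target=$1 timeout=$2 elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if "$ISCSIADM" -m session 2>/dev/null | grep -q "$target"; then
            return 0    # session established
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1            # timed out; the caller decides how to recover
}
# e.g.: wait_for_session iqn.2003-01.org.example:tgt1 30
```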

If interested, let me know, and I'll point you at a 

Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-08-07 Thread The Lee-Man
On Thursday, August 6, 2020 at 1:42:35 AM UTC-7, Amit Bawer wrote:
>
> Another point i'd like to ask about is iSER fallback that we have: 
>
> Currently we check during connection flow if 'iser' is set on 
> iscsi_default_ifaces in our configuration. 
> If yes, it is first checked if its working on server side by attempting 
>
> iscsiadm -m node -T target -I iser -p portal -l 
> iscsiadm -m node -T target -I iser -p portal -u 
>
> If the login/logout worked it is kept as 'iser' instead of 'default' 
> interface setup, otherwise it fallbacks to 'default'. 
> This is used later for the actual node login. 
> The thing is that this check can also waste valuable time when the portal 
> is down. Is there a way to fall back in the iscsiadm command itself, or 
> prefer a specific interface type when trying all/parallel logins for the 
> same target+portal but with different interface types?
>
There is no way to have the iscsi subsystem "fall back" to default from 
iser given the current code. The problem is when to fall back? Also, falling 
back to a secondary interface could add an additional 180 seconds, if it 
times out, as well. So it's up to the higher-level code (you, in this case) 
to make decisions like that. 
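
A minimal caller-side wrapper for that decision might look like this (target and portal are placeholders; ISCSIADM is parameterized for testing without a real target, and unlike the original check this sketch only logs in, without the verification logout):

```shell
#!/bin/sh
# Sketch of caller-side iSER-to-default fallback, since iscsiadm itself
# will not fall back between interface types.
ISCSIADM=${ISCSIADM:-iscsiadm}

login_with_fallback() {
    # $1 = target IQN, $2 = portal; prints the interface that worked
    if "$ISCSIADM" -m node -T "$1" -I iser -p "$2" -l 2>/dev/null; then
        echo iser
    elif "$ISCSIADM" -m node -T "$1" -I default -p "$2" -l 2>/dev/null; then
        echo default
    else
        echo none; return 1
    fi
}
# e.g.: iface=$(login_with_fallback iqn.2003-01.org.example:tgt1 10.35.18.121:3260)
```

Note that the iser attempt can still burn the full login-retry window when the portal is down, which is exactly the cost described above.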

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/709390a8-7a98-4990-afbb-4b54bef2d4dao%40googlegroups.com.


Re: About to tag a new version

2020-07-24 Thread The Lee-Man
On Friday, July 24, 2020 at 1:04:43 PM UTC-7, The Lee-Man wrote:
>
> Hi All:
>
> I'm planning on tagging a new version of open-iscsi, which will be 2.1.2.
>
> This would be a bug-fix and cleanup release.
>
> Any comments/objections?
>

See  https://github.com/open-iscsi/open-iscsi/releases/tag/2.1.2

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/0d9405a9-883f-4a22-91c4-53ed89b4013eo%40googlegroups.com.


About to tag a new version

2020-07-24 Thread The Lee-Man
Hi All:

I'm planning on tagging a new version of open-iscsi, which will be 2.1.2.

This would be a bug-fix and cleanup release.

Any comments/objections?

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/75d8c579-b124-4e5e-936d-f7a98b608c97o%40googlegroups.com.


Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread The Lee-Man
On Tuesday, June 30, 2020 at 8:55:13 AM UTC-7, Donald Williams wrote:
>
> Hello,
>  
>  Assuming that devmapper is running and MPIO properly configured you want 
> to connect to the same volume/target from different interfaces. 
>
> However in your case you aren't specifying the same interface. "default"  
> but they are on the same subnet.  Which typically will only use the default 
> NIC for that subnet. 
>

Yes, generally best practice requires that each component of your two paths 
between initiator and target be redundant. This means that, in the case of 
networking, you want to be on different subnets, served by different 
switches. You also want two different NICs on your initiator, if possible, 
although many times they are on the same card. But, obviously, some points 
are not redundant (like your initiator or target). 

>
> What iSCSI target are you using?  
>
>  Regards,
> Don
>
> On Tue, Jun 30, 2020 at 9:00 AM Amit Bawer  wrote:
>
>> [Sorry if this message is duplicated, haven't seen it is published in the 
>> group]
>>
>> Hi,
>>
>> Have couple of question regarding iscsiadm version 6.2.0.878-2:
>>
>> 1) Is it safe to have concurrent logins to the same target from different 
>> interfaces? 
>> That is, running the following commands in parallel:
>>
>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
>> 10.35.18.121:3260,1 -l
>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
>> 10.35.18.166:3260,1 -l
>>
>> 2) Is there a particular reason for the default values of  
>> node.conn[0].timeo.login_timeout and node.session.initial_login_
>> retry_max?
>> According to comment in iscsid.conf it would spend 120 seconds in case of 
>> an unreachable interface login:
>>
>> # The default node.session.initial_login_retry_max is 8 and
>> # node.conn[0].timeo.login_timeout is 15 so we have:
>> #
>> # node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max 
>> =
>> #   120 
>> seconds
>>
>>
>> Thanks,
>> Amit
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "open-iscsi" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to open-iscsi+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/open-iscsi/cc3ad021-753a-4ac4-9e6f-93e8da1e19bbn%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/bf75d5e8-f4ed-4a16-86a8-ab78d0cac1cco%40googlegroups.com.


Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread The Lee-Man
On Tuesday, June 30, 2020 at 6:00:03 AM UTC-7, Amit Bawer wrote:
>
> [Sorry if this message is duplicated, haven't seen it is published in the 
> group]
>
> Hi,
>
> Have couple of question regarding iscsiadm version 6.2.0.878-2:
>
> 1) Is it safe to have concurrent logins to the same target from different 
> interfaces? 
> That is, running the following commands in parallel:
>
> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
> 10.35.18.121:3260,1 -l
> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p 
> 10.35.18.166:3260,1 -l
>
> 2) Is there a particular reason for the default values of  
> node.conn[0].timeo.login_timeout and node.session.initial_login_retry_max?
> According to comment in iscsid.conf it would spend 120 seconds in case of 
> an unreachable interface login:
>
> # The default node.session.initial_login_retry_max is 8 and
> # node.conn[0].timeo.login_timeout is 15 so we have:
> #
> # node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max 
> =
> #   120 seconds
>
>
> Thanks,
> Amit
>

No, iscsiadm is not designed for parallel use. There is some locking, but 
IIRC there are still issues, such as the single connection to the kernel.

After discovery, you should have NODE entries for each path, and you can 
login to both with "iscsiadm -m node -l".

As for the default timeouts and retry counts, they are of course trade-offs. 
In general, iscsi can have flakey connections, since it's at the mercy of 
the network. In the event of a transient event, like a switch or target 
rebooting, the design allows reconnecting if and when the target finally 
comes back up, since giving up generally can mean data corruption (e.g. for 
a filesystem).

As the README for open-iscsi describes, you must tweak some of those 
numbers if you want to use multipathing, since the requirements for one of 
many paths usually requires a faster timeout, for example.
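
As an illustration of that tweaking (the values below are examples, not recommendations; consult the open-iscsi README for guidance), a multipath-oriented iscsid.conf might shorten the worst-case login wait like this:

```
# Fail an individual login attempt after 5s instead of the default 15s:
node.conn[0].timeo.login_timeout = 5
# Retry at most 4 times instead of the default 8:
node.session.initial_login_retry_max = 4
# Worst case per portal: 5 * 4 = 20 seconds instead of 15 * 8 = 120.
# Hand I/O errors up quickly so multipath can fail over:
node.session.timeo.replacement_timeout = 15
```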

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/a179f9d6-05e1-4a08-bf3c-aebb23d59afdo%40googlegroups.com.


Re: Large Immediate and/or Unsolicted Data causes long delays on R2T responses

2020-06-29 Thread The Lee-Man
On Saturday, May 2, 2020 at 11:30:27 AM UTC-7, ajhutc...@gmail.com wrote:
>
> I am able to create a condition where the open-iscsi initiator fails to 
> respond to an R2T request if the immediate/unsolicited data support is 
> large ~128KB.  I've seen instances where a delay on an R2T is only a few 
> seconds and other instances where no response is received in 180 seconds.
>
> If the host is doing a prefill operation with large writes that can be 
> completed with immediate data alone and a large write that requires an R2T 
> is sent, the open-iscsi initiator sometimes fails to respond to the 
> target's R2T. 
>
> After inspecting the code, I am convinced it is caused by the lack of 
> fairness in the *libiscsi  **iscsi_data_xmit* routine, which always 
> favors sending a new command over responding to R2Ts. 
>
> /**
>  * iscsi_data_xmit - xmit any command into the scheduled connection
>  * @conn: iscsi connection
>  *
>  * Notes:
>  * The function can return -EAGAIN in which case the caller must
>  * re-schedule it again later or recover. '0' return code means
>  * successful xmit.
>  **/
> static int iscsi_data_xmit(struct iscsi_conn *conn)
> {
> ...
> /*
> * process mgmt pdus like nops before commands since we should
> * only have one nop-out as a ping from us and targets should not
> * overflow us with nop-ins
> */
> while (!list_empty(&conn->mgmtqueue)) {
> ...
> /* process pending command queue */
> while (!list_empty(&conn->cmdqueue)) {
> ...
> while (!list_empty(&conn->requeue)) {
>
>
> Am I looking at this code correctly?  I guess this order might be better 
> for parallelization at the target by getting more commands onboard before 
> responding to outstanding R2Ts. With immediate/unsolicited data enabled, 
> the overhead of transmitting a new command is higher and probably 
> shouldn't come before responding to R2Ts. 
>
>

Do you have NOPs enabled? If so, do you see this issue with them disabled? 
I seriously dislike and advise against NOPs. I've never seen them actually 
help anything.

Have you tried playing with this code, i.e. changing the order? Without 
looking deeply, are the R2Ts in the command queue and not in the requeue 
queue?

What kind of load are you presenting to the server?

What do you mean by "the immediate/unsolicited data support is large 
~128KB"? What setting(s) did you change?
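
For reference, the size of immediate/unsolicited data is governed by the negotiated session parameters below (an illustrative iscsid.conf fragment with common defaults; the exact values the reporter changed are not stated):

```
# Allow data to be carried in the SCSI command PDU itself:
node.session.iscsi.ImmediateData = Yes
# Allow unsolicited data-out PDUs before the first R2T:
node.session.iscsi.InitialR2T = No
# Max unsolicited bytes per command (immediate + unsolicited):
node.session.iscsi.FirstBurstLength = 262144
# Max bytes per data sequence overall:
node.session.iscsi.MaxBurstLength = 16776192
```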

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/4c70b62c-467c-4860-a951-663fb88158c7o%40googlegroups.com.


Re: [PATCH] iscsi: Add break to while loop

2020-06-05 Thread The Lee-Man
On Thursday, June 4, 2020 at 5:10:49 AM UTC-7, Wu Bo wrote:
>
> From: liubo  
>
> Fix the potential risk of rc value being washed out by jumping out of the 
> loop 
>
> Signed-off-by: liubo  
> Reported-by: Zhiqiang Liu  
> --- 
>  utils/fwparam_ibft/fwparam_sysfs.c | 5 - 
>  1 file changed, 4 insertions(+), 1 deletion(-) 
>
> diff --git a/utils/fwparam_ibft/fwparam_sysfs.c 
> b/utils/fwparam_ibft/fwparam_sysfs.c 
> index a0cd1c7..87fd6d4 100644 
> --- a/utils/fwparam_ibft/fwparam_sysfs.c 
> +++ b/utils/fwparam_ibft/fwparam_sysfs.c 
> @@ -115,8 +115,11 @@ static int get_iface_from_device(char *id, struct 
> boot_context *context) 
>  break; 
>  } 
>   
> -if (sscanf(dent->d_name, "net:%s", 
> context->iface) != 1) 
> +if (sscanf(dent->d_name, "net:%s", 
> context->iface) != 1) { 
>  rc = EINVAL; 
> +break; 
> +} 
> + 
>  rc = 0; 
>  break; 
>  } else { 
> -- 
> 2.21.0.windows.1 
>
>
This looks fine to me. Any chance you could submit a pull request on 
GitHub? It saves me having to cut-and-paste, since I sadly do not have a 
good workflow setup for patches from the mailing list. 

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/a167b02a-53af-48ce-907a-5e43c67dd086o%40googlegroups.com.


Re: [EXT] [PATCH] iscsi: Add break to while loop

2020-06-05 Thread The Lee-Man
On Thursday, June 4, 2020 at 7:43:13 AM UTC-7, Uli wrote:
>
> >>> Wu Bo  wrote on 04.06.2020 at 14:23 in message 
> <7784_1591272646_5ED8E4C6_7784_490_1_1591273415-689835-1-git-send-email-wubo40@huawei.com>: 
> > From: liubo  
> > 
> > Fix the potential risk of rc value being washed out by jumping out of 
> the 
> > loop 
> > 
> > Signed-off-by: liubo  
> > Reported-by: Zhiqiang Liu  
> > --- 
> >  utils/fwparam_ibft/fwparam_sysfs.c | 5 - 
> >  1 file changed, 4 insertions(+), 1 deletion(-) 
> > 
> > diff --git a/utils/fwparam_ibft/fwparam_sysfs.c 
> > b/utils/fwparam_ibft/fwparam_sysfs.c 
> > index a0cd1c7..87fd6d4 100644 
> > --- a/utils/fwparam_ibft/fwparam_sysfs.c 
> > +++ b/utils/fwparam_ibft/fwparam_sysfs.c 
> > @@ -115,8 +115,11 @@ static int get_iface_from_device(char *id, struct 
> > boot_context *context) 
> >  break; 
> >  } 
> >   
> > -if (sscanf(dent->d_name, "net:%s", 
> context->iface) != 1) 
> > +if (sscanf(dent->d_name, "net:%s", 
> context->iface) != 1) { 
> >  rc = EINVAL; 
> > +break; 
> > +} 
> > + 
> >  rc = 0; 
> >  break; 
> >  } else { 
> > -- 
> > 2.21.0.windows.1 
>
> It seems to me the whole code could be more readable if the rc were preset 
> either to "success" (0) or "error" (something else), and if the "other" 
> result is needed just set the desired rc. Those multiple "break"s make the 
> code hard to read. 
>
>
>
Agreed that the code could be easier to read, but (1) it's working now, and 
(2) the suggested fix is in line with the current code style and format.

So I'm inclined to accept the patch. But I would also strongly consider a 
rewrite that makes it more readable, if you submitted such a patch.

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/3c3b346e-1d17-4e7a-ad38-5ef355146a45o%40googlegroups.com.


RFC: what to do about the open-iscsi GPL license vs. the open-ssl BSD license?

2020-06-05 Thread The Lee-Man
Hi All:

I believe there is a conflict between the current open-iscsi license and 
the open-ssl license, noticed recently when Chris Leech updated open-iscsi 
to use newer encryption algorithms.

You can see more about this on github, where it was brought up as an issue 
<https://github.com/open-iscsi/open-iscsi/issues/208>

It seems like there are several options, in order of progressively more 
work:

   1. ignore this problem
   2. add a disclaimer to our license
   3. Revert the update Chris did
   4. re-write open-iscsi encryption code to use a different package

It seems some other packages handle this case by simply ifdefing out the 
"offending" code. Of course others are welcome add a define to include that 
code, but by default this "fixes" the license issue. I do not like this 
approach, as many open-iscsi users care about authentication, and removing 
it would cripple open-iscsi IMHO.

Ignoring the problem won't fix anything, and I vote against reverting the 
changes Chris put in, as well as rewriting the code, as I'm no encryption 
expert and have no desire to become one. I would certainly be willing to 
entertain a patch series that did that, if any enterprising user wanted to 
do that work.

That leaves us with the disclaimer. I believe this will be good enough, as 
it has worked with other similar situations. And although I'm certainly not 
a lawyer, I so far have not seen anyone that worries about the open-iscsi 
license, with the exception of one distribution that runs nit-picking 
license checkers, just for fun. :)

So this is the official request for comment. Anyone?

-- 
The Lee-man

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/6b348ceb-6f19-4f11-858d-c710a0872a72o%40googlegroups.com.


Re: [RFC RESEND PATCH v2] scsi: iscsi: register sysfs for iscsi workqueue

2020-05-17 Thread The Lee-Man
On Monday, May 4, 2020 at 6:24:20 PM UTC-7, Bob Liu wrote:
>
> Motivation: 
> This patch enable setting cpu affinity through "cpumask" for iscsi 
> workqueues 
> (iscsi_q_xx and iscsi_eh), so as to get performance isolation. 
>

Please summarize for this performance-idiot how setting the CPU affinity 
helps you in any way. Is it for testing purposes only, or does it yield 
performance gains in some cases, and if so, which ones? 
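
For context on what the patch would enable: a WQ_SYSFS workqueue exposes a writable cpumask file under /sys/devices/virtual/workqueue. A sketch of pinning it (the directory is passed as an argument so the write itself can be exercised against a scratch directory; the iscsi_eh path only exists once this patch is applied):

```shell
#!/bin/sh
# Sketch: pin a workqueue to a CPU set by writing a hex mask to its sysfs
# cpumask file.
set_wq_cpumask() {
    # $1 = workqueue sysfs directory, $2 = hex cpumask (e.g. 0f = CPUs 0-3)
    echo "$2" > "$1/cpumask"
}
# e.g.: set_wq_cpumask /sys/devices/virtual/workqueue/iscsi_eh 0f
```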

>
> The max number of active workers was changed from 1 to 2, because "cpumask" 
> of 
> ordered workqueue isn't allowed to change. 
>
> Notes: 
> - Having 2 workers break the current ordering guarantees, please let me 
> know 
>   if anyone depends on this. 
>

Have you tested with normal iSCSI IO from multiple initiators and targets?


> - __WQ_LEGACY have to be left because of 
> 23d11a5(workqueue: skip flush dependency checks for legacy workqueues) 
>

I have no issue with this part (now), but normally, when you send out a 
second version of a patch, you add a section that says something like:

> Changes since V1:
> * change 1
> * ...

And you change the subject from "[PATCH] ..." to "[PATCHv2] ..." or "[PATCH 
v2] ...". This helps folks that review lots of patches to recognize they 
might only need to review the new bits.

In your case, you may have only changed the Description, but even that's 
worthy of a mention IMHO. But I won't (normally) reject a patch for this, 
but I appreciate when it's done correctly.

>
> Signed-off-by: Bob Liu  
> --- 
>  drivers/scsi/libiscsi.c | 4 +++- 
>  drivers/scsi/scsi_transport_iscsi.c | 4 +++- 
>  2 files changed, 6 insertions(+), 2 deletions(-) 
>
> diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c 
> index 70b99c0..adf9bb4 100644 
> --- a/drivers/scsi/libiscsi.c 
> +++ b/drivers/scsi/libiscsi.c 
> @@ -2627,7 +2627,9 @@ struct Scsi_Host *iscsi_host_alloc(struct 
> scsi_host_template *sht, 
>  if (xmit_can_sleep) { 
>  snprintf(ihost->workq_name, sizeof(ihost->workq_name), 
>  "iscsi_q_%d", shost->host_no); 
> -ihost->workq = 
> create_singlethread_workqueue(ihost->workq_name); 
> +ihost->workq = alloc_workqueue("%s", 
> +WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | 
> WQ_UNBOUND, 
> +2, ihost->workq_name); 
>  if (!ihost->workq) 
>  goto free_host; 
>  } 
> diff --git a/drivers/scsi/scsi_transport_iscsi.c 
> b/drivers/scsi/scsi_transport_iscsi.c 
> index dfc726f..bdbc4a2 100644 
> --- a/drivers/scsi/scsi_transport_iscsi.c 
> +++ b/drivers/scsi/scsi_transport_iscsi.c 
> @@ -4602,7 +4602,9 @@ static __init int iscsi_transport_init(void) 
>  goto unregister_flashnode_bus; 
>  } 
>   
> -iscsi_eh_timer_workq = create_singlethread_workqueue("iscsi_eh"); 
> +iscsi_eh_timer_workq = alloc_workqueue("%s", 
> +WQ_SYSFS | __WQ_LEGACY | WQ_MEM_RECLAIM | 
> WQ_UNBOUND, 
> +2, "iscsi_eh"); 
>  if (!iscsi_eh_timer_workq) { 
>  err = -ENOMEM; 
>  goto release_nls; 
> -- 
> 2.9.5 
>
>
If your answer is that it has been tested, then I'll be glad to add my 
reviewed-by. 

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/f5da6f33-c444-4746-8ebf-94003efbbfc2%40googlegroups.com.


Re: [EXT] Re: udev events for iscsi

2020-04-22 Thread The Lee-Man
On Tuesday, April 21, 2020 at 11:56:23 PM UTC-7, Uli wrote:
>
> >>> The Lee-Man  wrote on 21.04.2020 at 20:44 
> in 
> message 
> <618_1587494664_5E9F3F08_618_445_1_7f583720-8a84-4872-8d1a-5cd284295c22@googlegr
>  
>
> ups.com>: 
> > On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote: 
> >> 
> >> [reposting, as the previous one seems to be lost] 
> >> 
> >> Hi all, 
> >> I have a question regarding udev events when using iscsi disks. 
> >> 
> >> By using "udevadm monitor" I can see that events are generated when I 
> >> login and logout from an iscsi portal/resource, creating/destroying the 
> >> relative links under /dev/ 
> >> 
> >> However, I can not see anything when the remote machine simple 
> >> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout 
> expiring, I 
> >> don't see anything about a removed disk (and the links under /dev/ 
> remains 
> >> unaltered, indeed). At the same time, when the remote machine and disk 
> >> become available again, no reconnection events happen. 
> >> 
> > 
> > Because of the design of iSCSI, there is no way for the initiator to 
> know 
> > the server has gone away. The only time an initiator might figure this 
> out 
> > is when it tries to communicate with the target. 
>
> My knowledge of the SCSI stack is quite poor, but I think the last 
> revisions of parallel SCSI (like Ultra 320 (or was it 160?)) had a concept 
> of "domain validation". AFAIK the latter meant measuring the quality of 
> the wires and adjusting the transfer speed. 
> While basically SCSI assumes "the bus" won't go away magically, a future 
> iSCSI standard might contain  regular "bus checks" to trigger recovery 
> actions if the "bus" (network transport connection) seems to be gone. 
>
> > 
> > This assumes we are not using some sort of directory service, like iSNS, 
> > which can send asynchronous notifications. But even then, the iSNS 
> server 
> > would have to somehow know that the target went down. If the target 
> > crashed, that might be difficult to ascertain. 
>
> To be picky: If the target went down (like a classical failing SCSI disk), 
> it could issue some attention message, but when the transport went down, no 
> such message can be received. So I think there's a difference between 
> "target down" (device not present, device fails to respond) and "bus down" 
> (no communication possible any more). In the second case no assumptions can 
> be made about the health of the target device. 
>
> > 
> > So in the absence of some asynchronous notification, the initiator only 
> > knows the target is not responding if it tries to talk to that target. 
> > 
> > Normally iscsid defaults to sending periodic NO-OPs to the target every 
> 5 
> > seconds. So if the target goes away, the initiator usually notices, even 
> if 
> > no regular I/O is occurring. 
>
> So the target went away, or the bus went down? 
>

The initiator does not know the difference. As you know, there are dozens 
of things (conservatively) that can go wrong, which is why I say the disk 
"goes away". It could be sleeping. It could be dead. The cable could be 
unplugged. The system could be rebooting. The switch could be down. The 
ACLs could have changed (which is how I simulate a target going away). 

>
> > 
> > But this is where the error recovery gets tricky, because iscsi tries to 
> > handle "lossy" connections. What if the server will be right back? Maybe 
> > it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps 
> > trying to reconnect. As a matter of fact, if you stop iscsid and restart 
> > it, it sees the failed connection and retries it -- forever, by default. 
> I 
> > actually added a configuration parameter called reopen_max, that can 
> limit 
> > the number of retries. But there was pushback on changing the default 
> value 
> > from 0, which is "retry forever". 
> > 
> > So what exactly do you think the system should do when a connection 
> "goes 
> > away"? How long does it have to be gone to be considered gone for good? 
> If 
> > the target comes back "later" should it get the same disc name? Should 
> we 
> > retry, and if so how much before we give up? I'm interested in your 
> views, 
> > since it seems like a non-trivial problem to me. 
>
> IMHO a "bus down" is a critical event affecting _all_ devices on that bus, 
> not just

Re: udev events for iscsi

2020-04-21 Thread The Lee-Man
On Tuesday, April 21, 2020 at 8:20:23 AM UTC-7, Robert ECEO Townley wrote:
>
> Wondering myself.
>
> On Apr 21, 2020, at 2:31 AM, Gionatan Danti  
> wrote:
>
> 
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I 
> login and logout from an iscsi portal/resource, creating/destroying the 
> relative links under /dev/
>
>
> So running “udevadm monitor” on the initiator, you can see when a block 
> device becomes available locally.   
>
>
>
> However, I can not see anything when the remote machine simple 
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
> don't see anything about a removed disk (and the links under /dev/ remains 
> unaltered, indeed). At the same time, when the remote machine and disk 
> become available again, no reconnection events happen.
>
>
> As someone who has had an inordinate amount of experience with the iSCSi 
> connection breaking ( power outage, Network switch dies,  wrong ethernet 
> cable pulled, the target server machine hardware crashes, ...) in the 
> middle of production, the more info the better.   Udev event triggers would 
> help.   I wonder exactly how XenServer handles this as it itself seemed 
> more resilient.  
>
> XenServer host initiators  do something correct to recover and wonder how 
> that compares to the normal iSCSi initiator.  
>

I was under the impression that XenServer used open-iscsi.

>  
> But unfortunately, XenServer LVM-over-iSCSi  does not pass the message 
> along to its Linux virtual drives and VMs in the same way as Windows VMs.   
>  
>
> When the target drives became available again,   MS Windows virtual 
> machines would gracefully recover on their own.All Linux VM 
>  filesystems went read only and those VM machines required forceful 
>  rebooting.   mount remount would not work. 
>

A filesystem going read-only means it was likely ext3, which does that if 
it gets IO errors, I believe. (Disclaimer: I'm not a filesystem person.) 

>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/a3ff8e76-fa9b-4290-ba20-f3bf43989b66%40googlegroups.com.


Re: udev events for iscsi

2020-04-21 Thread The Lee-Man
On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote:
>
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I 
> login and logout from an iscsi portal/resource, creating/destroying the 
> relative links under /dev/
>
> However, I can not see anything when the remote machine simple 
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I 
> don't see anything about a removed disk (and the links under /dev/ remains 
> unaltered, indeed). At the same time, when the remote machine and disk 
> become available again, no reconnection events happen.
>

Because of the design of iSCSI, there is no way for the initiator to know 
the server has gone away. The only time an initiator might figure this out 
is when it tries to communicate with the target.

This assumes we are not using some sort of directory service, like iSNS, 
which can send asynchronous notifications. But even then, the iSNS server 
would have to somehow know that the target went down. If the target 
crashed, that might be difficult to ascertain.

So in the absence of some asynchronous notification, the initiator only 
knows the target is not responding if it tries to talk to that target.

Normally iscsid defaults to sending periodic NO-OPs to the target every 5 
seconds. So if the target goes away, the initiator usually notices, even if 
no regular I/O is occurring.

But this is where the error recovery gets tricky, because iscsi tries to 
handle "lossy" connections. What if the server will be right back? Maybe 
it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps 
trying to reconnect. As a matter of fact, if you stop iscsid and restart 
it, it sees the failed connection and retries it -- forever, by default. I 
actually added a configuration parameter called reopen_max, that can limit 
the number of retries. But there was pushback on changing the default value 
from 0, which is "retry forever".
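
The two mechanisms above map to iscsid.conf settings roughly like this (illustrative values; the availability of reopen_max depends on the open-iscsi version):

```
# Send a NOP-Out ping every 5s and fail it after 5s (0/0 disables pings):
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
# Stop retrying a dropped connection after 8 attempts (0 = retry forever):
node.session.reopen_max = 8
```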

So what exactly do you think the system should do when a connection "goes 
away"? How long does it have to be gone to be considered gone for good? If 
the target comes back "later" should it get the same disc name? Should we 
retry, and if so how much before we give up? I'm interested in your views, 
since it seems like a non-trivial problem to me.

>
> I can read here that, years ago, a patch was in progress to give better 
> integration with udev when a device disconnects/reconnects. Did the patch 
> got merged? Or does the one I described above remain the expected behavior? 
> Can be changed?
>

So you're saying as soon as a bad connection is detected (perhaps by a 
NOOP), the device should go away? 

>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/7f583720-8a84-4872-8d1a-5cd284295c22%40googlegroups.com.


Re: [EXT] [PATCH] open-iscsi:Modify iSCSI shared memory permissions for logs

2020-04-21 Thread The Lee-Man
On Monday, April 20, 2020 at 5:08:36 AM UTC-7, Uli wrote:
>
> Hi! 
>
> Maybe this could be made a symbolic constant, or even be made 
> configurable. 
> The other interesting thing is that there are three seemingly very similar 
> code fragements to create the shared memory, but each with a different size 
> parameter (sizeof(struct logarea) vs. size vs. MAX_MSG_SIZE + sizeof(struct 
>  logmsg)) ;-) 
>

If you'd like to submit a pull request, I'll consider it. I don't think the 
symbolic constant and machinery around making the permission configurable 
are worth the trouble, since they shouldn't be changed. But I could see 
making this permission a define in an include file, perhaps with an 
"ifndef" around it. :)

As for automating the shared memory creation, it is not worth it for just 
3 cases, particularly since we're filling in info about the 2nd and 3rd 
segments into our control structure as we go.

I merged this pull request.

>
> Regards, 
> Ulrich 
>
> >>> Wu Bo  wrote on 17.04.2020 at 11:08 in message 
> <6355_1587114536_5E997228_6355_294_1_d6a22a2f-3730-45ee-5256-8a8fe4b017bf@huawei.com>: 
> > Hi, 
> > 
> > Iscsid log daemon is responsible for reading data from shared memory 
> > and writing syslog. Iscsid is the root user group. 
> > Currently, it is not seen that non-root users need to read logs. 
> > Following the principle of least privilege, all the permissions 
> > are changed from 644 to 600. 
> > 
> > Signed-off-by: Wu Bo  
> > --- 
> >   usr/log.c | 6 +++--- 
> >   1 file changed, 3 insertions(+), 3 deletions(-) 
> > 
> > diff --git a/usr/log.c b/usr/log.c 
> > index 6e16e7c..2fc1850 100644 
> > --- a/usr/log.c 
> > +++ b/usr/log.c 
> > @@ -73,7 +73,7 @@ static int logarea_init (int size) 
> >  logdbg(stderr,"enter logarea_init\n"); 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, sizeof(struct logarea), 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget logarea failed %d", errno); 
> >  return 1; 
> >  } 
> > @@ -93,7 +93,7 @@ static int logarea_init (int size) 
> >  size = DEFAULT_AREA_SIZE; 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, size, 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget msg failed %d", errno); 
> >  free_logarea(); 
> >  return 1; 
> > @@ -114,7 +114,7 @@ static int logarea_init (int size) 
> >  la->tail = la->start; 
> > 
> >  if ((shmid = shmget(IPC_PRIVATE, MAX_MSG_SIZE + sizeof(struct 
> > logmsg), 
> > -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> > +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
> >  syslog(LOG_ERR, "shmget logmsg failed %d", errno); 
> >  free_logarea(); 
> >  return 1; 
> > -- 
> > 1.8.3.1 
> > 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> Groups 
> > "open-iscsi" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> an 
> > email to open-iscsi+unsubscr...@googlegroups.com. 
> > To view this discussion on the web visit 
> > 
> https://groups.google.com/d/msgid/open-iscsi/d6a22a2f-3730-45ee-5256-8a8fe4b0 
> > 17bf%40huawei.com. 
>
>
>
>
>



Re: [PATCH] open-iscsi:Modify iSCSI shared memory permissions for logs

2020-04-19 Thread The Lee-Man
On Friday, April 17, 2020 at 2:08:57 AM UTC-7, Wu Bo wrote:
>
> Hi, 
>
> Iscsid log daemon is responsible for reading data from shared memory 
> and writing syslog. Iscsid is the root user group. 
> Currently, it is not seen that non-root users need to read logs. 
> Following the principle of least privilege, all the permissions 
> are changed from 644 to 600. 
>
> Signed-off-by: Wu Bo  ... 
> --- 
>   usr/log.c | 6 +++--- 
>   1 file changed, 3 insertions(+), 3 deletions(-) 
>
> diff --git a/usr/log.c b/usr/log.c 
> index 6e16e7c..2fc1850 100644 
> --- a/usr/log.c 
> +++ b/usr/log.c 
> @@ -73,7 +73,7 @@ static int logarea_init (int size) 
>  logdbg(stderr,"enter logarea_init\n"); 
>
>  if ((shmid = shmget(IPC_PRIVATE, sizeof(struct logarea), 
> -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
>  syslog(LOG_ERR, "shmget logarea failed %d", errno); 
>  return 1; 
>  } 
> @@ -93,7 +93,7 @@ static int logarea_init (int size) 
>  size = DEFAULT_AREA_SIZE; 
>
>  if ((shmid = shmget(IPC_PRIVATE, size, 
> -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
>  syslog(LOG_ERR, "shmget msg failed %d", errno); 
>  free_logarea(); 
>  return 1; 
> @@ -114,7 +114,7 @@ static int logarea_init (int size) 
>  la->tail = la->start; 
>
>  if ((shmid = shmget(IPC_PRIVATE, MAX_MSG_SIZE + sizeof(struct 
> logmsg), 
> -   0644 | IPC_CREAT | IPC_EXCL)) == -1) { 
> +   0600 | IPC_CREAT | IPC_EXCL)) == -1) { 
>  syslog(LOG_ERR, "shmget logmsg failed %d", errno); 
>  free_logarea(); 
>  return 1; 
> -- 
> 1.8.3.1 
>
>
This looks good to me. Any chance you can make this a pull request for 
open-iscsi/open-iscsi on github? 



Re: replacement_timeout Override

2020-03-26 Thread The Lee-Man
I'm glad you figured it out. Sorry I didn't reply sooner.
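For anyone else hitting this, Marc's workflow below can be sketched roughly as follows (a hedged example: the target IQN is a placeholder, and it assumes iscsiadm and an existing node record; record updates only take effect at the next login):

```shell
#!/bin/sh
# Sketch: override replacement_timeout per node record, then re-login
# so the running session picks up the new value.
if ! command -v iscsiadm >/dev/null 2>&1; then
	echo "iscsiadm not installed; skipping demo"
	exit 0
fi

TARGET="iqn.2003-01.org.example:target0"   # placeholder IQN

iscsiadm -m node -T "$TARGET" -o update \
	-n node.session.timeo.replacement_timeout -v 5
iscsiadm -m node -T "$TARGET" -u   # logout: the old value stays until here
iscsiadm -m node -T "$TARGET" -l   # login re-reads the node record

# Verify what the running session actually uses:
cat /sys/class/iscsi_session/session*/recovery_tmo
```

The key point is the logout/login pair: updating the record alone does not touch the sysfs recovery_tmo of an already-established session.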

On Monday, March 16, 2020 at 9:40:53 PM UTC-7, Marc Smith wrote:
>
> On Sat, Mar 14, 2020 at 10:28 AM Marc Smith  wrote: 
> > 
> > Hi, 
> > 
> > I'm using open-iscsi version 2.1.1. I noticed that my 
> > "replacement_timeout" value set in the node record is not being 
> > applied, or rather is not overriding the default value set in 
> > iscsid.conf: 
> > 
> > # iscsiadm -m node -T internal_redirect | grep replacement_timeout 
> > node.session.timeo.replacement_timeout = 5 
> > 
> > # cat /etc/iscsi/iscsid.conf | grep replacement_timeout 
> > node.session.timeo.replacement_timeout = 120 
> > 
> > # cat /sys/class/iscsi_session/session1/recovery_tmo 
> > 120 
> > 
> > # iscsiadm -m session -P 2 | grep Recovery 
> > Recovery Timeout: 120 
> > 
> > I can certainly change this value in iscsid.conf, but I was thinking 
> > my value in the node record would override this (for this specific 
> > target). Is it expected that this value should override what's in 
> > iscsid.conf? If so, then I assume I've hit a bug, or perhaps I have 
> > something configured incorrectly? 
>
> Okay, so after digging a bit, the default values from iscsid.conf are 
> in fact being superseded by the specific session values. That is 
> demonstrated when I run "iscsiadm -m node -T internal_redirect". The 
> only problem is the values aren't applied to the running session (the 
> sysfs attribute files for the session are not updated when the record 
> is updated). 
>
> I was changing the values for a session that was already established. 
> The solution is to set the node record values, then simply logout and 
> login. 
>
> --Marc 
>
>
> > 
> > Thanks for your time. 
> > 
> > --Marc 
>



Re: There are two same sessions on the on client node? what's happen with it?

2020-03-26 Thread The Lee-Man
Those are two different sessions, as distinguished by their session numbers.

On Thursday, March 5, 2020 at 7:42:46 PM UTC-8, can zhu wrote:
>
> [image: 微信图片_20200306114227.png]
>
>



Re: [PATCH] iscsi-iname: fix iscsi-iname -p access NULL pointer without given IQN prefix

2020-03-25 Thread The Lee-Man
Thank you very much for this bug report and suggested patch, but I cleaned 
up the code and fixed it a little differently.

On Wednesday, March 18, 2020 at 6:46:06 PM UTC-7, wubo40 wrote:
>
> From: Wu Bo  
>
> iscsi-iname -p accesses a NULL pointer when no IQN prefix is given. 
>
> # iscsi-iname -p 
> Segmentation fault 
>
> Signed-off-by: Wu Bo  
> --- 
>   utils/iscsi-iname.c | 2 +- 
>   1 file changed, 1 insertion(+), 1 deletion(-) 
>
> diff --git a/utils/iscsi-iname.c b/utils/iscsi-iname.c 
> index da850dc..7df7bb0 100644 
> --- a/utils/iscsi-iname.c 
> +++ b/utils/iscsi-iname.c 
> @@ -69,7 +69,7 @@ main(int argc, char *argv[]) 
>   exit(0); 
>   } else if ( strcmp(prefix, "-p") == 0 ) { 
>   prefix = argv[2]; 
> -if (strnlen(prefix, PREFIX_MAX_LEN + 1) > PREFIX_MAX_LEN) { 
> +if (prefix && (strnlen(prefix, PREFIX_MAX_LEN + 1) > 
> PREFIX_MAX_LEN)) { 
>   printf("Error: Prefix cannot exceed %d " 
>  "characters.\n", PREFIX_MAX_LEN); 
>   exit(1); 
> -- 
> 2.21.0 
>
>
>
>



Tagging version 2.1.1 of open-iscsi/open-iscsi

2020-02-26 Thread The Lee-Man
Hi All:

Just a heads up (for those that don't hang out on github) that I'm tagging 
version 2.1.1 of open-iscsi today, if there are no objections.



Re: iSCSI and Ceph RBD

2020-01-30 Thread The Lee-Man
Did Donald answer your question(s)?

On Friday, January 24, 2020 at 1:50:28 PM UTC-8, Bobby wrote:
>
> Hi,
>
> I have some questions regarding iSCSI and Ceph RBD. If I have understood 
> correctly, the RBD backstore module 
> on target side can translate SCSI IO into Ceph OSD requests. The iSCSI 
> target driver with rbd.ko can expose Ceph cluster
> on iSCSI protocol. If correct, then that all is happening on target side.  
>
> My confusion is what is  happening on client side?
>
> Meaning, does linux mainline kernel code called "rbd" has any role with  
> Open-iSCSI initiator on client side? To put it more simple, 
> is there any common ground for both protocols (iSCSI and rbd) in the linux 
> kernel  of the client side? 
>
> Thanks :-)
>



Version v0.100 of open-isns released

2020-01-23 Thread The Lee-Man
Hello:

I've released version v0.100 of open-isns. This version includes:

* fixes to IPv6 handling
* fixes to existing test suite for openssl
* adding new python3-based unittests, to replace deprecated perl-based tests

Please help yourself, at https://github.com/open-iscsi/open-isns




Re: [LSF/MM TOPIC] iSCSI MQ adoption via MCS discussion

2020-01-23 Thread The Lee-Man


On Tuesday, January 21, 2020 at 1:15:29 AM UTC-8, Bobby wrote:
>
> Hi all,
>
> I have a question please. Are these todo's finally part of Open-iSCSi 
> initiator?
>
> Thanks
>

No, not really. It's a "hard problem", and offload cards have somewhat 
worked around the problem by doing all of the work in the card. 

>
> On Wednesday, January 7, 2015 at 5:57:14 PM UTC+1, hare wrote:
>>
>> On 01/07/2015 05:25 PM, Sagi Grimberg wrote: 
>> > Hi everyone, 
>> > 
>> > Now that scsi-mq is fully included, we need an iSCSI initiator that 
>> > would use it to achieve scalable performance. The need is even greater 
>> > for iSCSI offload devices and transports that support multiple HW 
>> > queues. As iSER maintainer I'd like to discuss the way we would choose 
>> > to implement that in iSCSI. 
>> > 
>> > My measurements show that iSER initiator can scale up to ~2.1M IOPs 
>> > with multiple sessions but only ~630K IOPs with a single session where 
>> > the most significant bottleneck the (single) core processing 
>> > completions. 
>> > 
>> > In the existing single connection per session model, given that command 
>> > ordering must be preserved session-wide, we end up in a serial command 
>> > execution over a single connection which is basically a single queue 
>> > model. The best fit seems to be plugging iSCSI MCS as a multi-queued 
>> > scsi LLDD. In this model, a hardware context will have a 1x1 mapping 
>> > with an iSCSI connection (TCP socket or a HW queue). 
>> > 
>> > iSCSI MCS and it's role in the presence of dm-multipath layer was 
>> > discussed several times in the past decade(s). The basic need for MCS 
>> is 
>> > implementing a multi-queue data path, so perhaps we may want to avoid 
>> > doing any type link aggregation or load balancing to not overlap 
>> > dm-multipath. For example we can implement ERL=0 (which is basically 
>> the 
>> > scsi-mq ERL) and/or restrict a session to a single portal. 
>> > 
>> > As I see it, the todo's are: 
>> > 1. Getting MCS to work (kernel + user-space) with ERL=0 and a 
>> >round-robin connection selection (per scsi command execution). 
>> > 2. Plug into scsi-mq - exposing num_connections as nr_hw_queues and 
>> >using blk-mq based queue (conn) selection. 
>> > 3. Rework iSCSI core locking scheme to avoid session-wide locking 
>> >as much as possible. 
>> > 4. Use blk-mq pre-allocation and tagging facilities. 
>> > 
>> > I've recently started looking into this. I would like the community to 
>> > agree (or debate) on this scheme and also talk about implementation 
>> > with anyone who is also interested in this. 
>> > 
>> Yes, that's a really good topic. 
>>
>> I've pondered implementing MC/S for iscsi/TCP but then I've figured my 
>> network implementation knowledge doesn't spread that far. 
>> So yeah, a discussion here would be good. 
>>
>> Mike? Any comments? 
>>
>> Cheers, 
>>
>> Hannes 
>> -- 
>> Dr. Hannes Reinecke  zSeries & Storage 
>> ha...@suse.de  +49 911 74053 688 
>> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg 
>> GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg) 
>>
>



Re: [PATCH] iscsi: Add support for asynchronous iSCSI session destruction

2020-01-23 Thread The Lee-Man
On Friday, January 17, 2020 at 3:33:35 PM UTC-8, Gabriel Krisman Bertazi 
wrote:
>
> From: Frank Mayhar  
>
> iSCSI session destruction can be arbitrarily slow, since it might 
> require network operations and serialization inside the scsi layer. 
> This patch adds a new user event to trigger the destruction work 
> asynchronously, releasing the rx_queue_mutex as soon as the operation is 
> queued and before it is performed.  This change allows other operations 
> to run in other sessions in the meantime, removing one of the major 
> iSCSI bottlenecks for us. 
>
> To prevent the session from being used after the destruction request, we 
> remove it immediately from the sesslist. This simplifies the locking 
> required during the asynchronous removal. 
>
> Co-developed-by: Khazhismel Kumykov  
> Signed-off-by: Khazhismel Kumykov  
> Signed-off-by: Frank Mayhar  
> Co-developed-by: Gabriel Krisman Bertazi  
> Signed-off-by: Gabriel Krisman Bertazi  
> --- 
>
> This patch requires a patch that just went upstream to apply cleanly. 
> it is ("iscsi: Don't destroy session if there are outstanding 
> connections"), which was just merged by Martin into 5.6/scsi-queue. 
> Please make sure you have it in your tree, otherwise this one won't 
> apply. 
>
>  drivers/scsi/scsi_transport_iscsi.c | 36 + 
>  include/scsi/iscsi_if.h |  1 + 
>  include/scsi/scsi_transport_iscsi.h |  1 + 
>  3 files changed, 38 insertions(+) 
>
> diff --git a/drivers/scsi/scsi_transport_iscsi.c 
> b/drivers/scsi/scsi_transport_iscsi.c 
> index ba6cfaf71aef..e9a8e0317b0d 100644 
> --- a/drivers/scsi/scsi_transport_iscsi.c 
> +++ b/drivers/scsi/scsi_transport_iscsi.c 
> @@ -95,6 +95,8 @@ static DECLARE_WORK(stop_conn_work, stop_conn_work_fn); 
>  static atomic_t iscsi_session_nr; /* sysfs session id for next new 
> session */ 
>  static struct workqueue_struct *iscsi_eh_timer_workq; 
>   
> +static struct workqueue_struct *iscsi_destroy_workq; 
> + 
>  static DEFINE_IDA(iscsi_sess_ida); 
>  /* 
>   * list of registered transports and lock that must 
> @@ -1615,6 +1617,7 @@ static struct sock *nls; 
>  static DEFINE_MUTEX(rx_queue_mutex); 
>   
>  static LIST_HEAD(sesslist); 
> +static LIST_HEAD(sessdestroylist); 
>  static DEFINE_SPINLOCK(sesslock); 
>  static LIST_HEAD(connlist); 
>  static LIST_HEAD(connlist_err); 
> @@ -2035,6 +2038,14 @@ static void __iscsi_unbind_session(struct 
> work_struct *work) 
>  ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n"); 
>  } 
>   
> +static void __iscsi_destroy_session(struct work_struct *work) 
> +{ 
> +struct iscsi_cls_session *session = 
> +container_of(work, struct iscsi_cls_session, 
> destroy_work); 
> + 
> +session->transport->destroy_session(session); 
> +} 
> + 
>  struct iscsi_cls_session * 
>  iscsi_alloc_session(struct Scsi_Host *shost, struct iscsi_transport 
> *transport, 
>  int dd_size) 
> @@ -2057,6 +2068,7 @@ iscsi_alloc_session(struct Scsi_Host *shost, struct 
> iscsi_transport *transport, 
>  INIT_WORK(>block_work, __iscsi_block_session); 
>  INIT_WORK(>unbind_work, __iscsi_unbind_session); 
>  INIT_WORK(>scan_work, iscsi_scan_session); 
> +INIT_WORK(>destroy_work, __iscsi_destroy_session); 
>  spin_lock_init(>lock); 
>   
>  /* this is released in the dev's release function */ 
> @@ -3617,6 +3629,23 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct 
> nlmsghdr *nlh, uint32_t *group) 
>  else 
>  transport->destroy_session(session); 
>  break; 
> +case ISCSI_UEVENT_DESTROY_SESSION_ASYNC: 
> +session = iscsi_session_lookup(ev->u.d_session.sid); 
> +if (!session) 
> +err = -EINVAL; 
> +else if (iscsi_session_has_conns(ev->u.d_session.sid)) 
> +err = -EBUSY; 
> +else { 
> +unsigned long flags; 
> + 
> +/* Prevent this session from being found again */ 
> +spin_lock_irqsave(, flags); 
> +list_move(>sess_list, ); 
> +spin_unlock_irqrestore(, flags); 
> + 
> +queue_work(iscsi_destroy_workq, 
> >destroy_work); 
> +} 
> +break; 
>  case ISCSI_UEVENT_UNBIND_SESSION: 
>  session = iscsi_session_lookup(ev->u.d_session.sid); 
>  if (session) 
> @@ -4662,8 +4691,14 @@ static __init int iscsi_transport_init(void) 
>  goto release_nls; 
>  } 
>   
> +iscsi_destroy_workq = 
> create_singlethread_workqueue("iscsi_destroy"); 
> +if (!iscsi_destroy_workq) 
> +goto destroy_wq; 
> + 
>  return 0; 
>   
> +destroy_wq: 
> +destroy_workqueue(iscsi_eh_timer_workq); 
>  release_nls: 
>  

Re: [PATCH v4] iscsi: Perform connection failure entirely in kernel space

2020-01-23 Thread The Lee-Man
On Wednesday, January 15, 2020 at 7:52:39 PM UTC-8, Martin K. Petersen 
wrote:
>
>
> > Please consider the v4 below with the lock added. 
>
> Lee: Please re-review this given the code change. 
>

Martin:

The recent change makes sense, so please still include my:

Reviewed-by: Lee Duncan 

>
> > From: Bharath Ravi  
> > 
> > Connection failure processing depends on a daemon being present to (at 
> > least) stop the connection and start recovery.  This is a problem on a 
> > multipath scenario, where if the daemon failed for whatever reason, the 
> > SCSI path is never marked as down, multipath won't perform the 
> > failover and IO to the device will be forever waiting for that 
> > connection to come back. 
> > 
> > This patch performs the connection failure entirely inside the kernel. 
> > This way, the failover can happen and pending IO can continue even if 
> > the daemon is dead. Once the daemon comes alive again, it can execute 
> > recovery procedures if applicable. 
>
> -- 
> Martin K. PetersenOracle Linux Engineering 
>



Re: iSCSI Multiqueue

2020-01-23 Thread The Lee-Man
On Wednesday, January 15, 2020 at 7:16:48 AM UTC-8, Bobby wrote:
>
>
> Hi all,
>
> I have a question regarding multi-queue in iSCSI. AFAIK, *scsi-mq* has 
> been functional in kernel since kernel 3.17. Because earlier,
> the block layer was updated to multi-queue *blk-mq* from single-queue. So 
> the current kernel has full-fledged *multi-queues*.
>
> The question is:
>
> How an iSCSI initiator uses multi-queue? Does it mean having multiple 
> connections? I would like 
> to see where exactly that is achieved in the code, if someone can please 
> me give me a hint. Thanks in advance :)
>
> Regards
>

open-iscsi does not use multi-queue specifically, though all of the block 
layer is now converted to using multi-queue. If I understand correctly, 
there is no more single-queue, but there is glue that allows existing 
single-queue drivers to continue on, mapping their use to multi-queue. 
(Someone please correct me if I'm wrong.)

The only time multi-queue might be useful for open-iscsi to use would be 
for MCS -- multiple connections per session. But the implementation of 
multi-queue makes using it for MCS problematic. Because each queue is on a 
different CPU, open-iscsi would have to coordinate the multiple connections 
across multiple CPUs, making things like ensuring correct sequence numbers 
difficult.

Hope that helps. I _believe_ there is still an effort to map open-iscsi MCS 
to multi-queue, but nobody has tried to actually do it yet that I know of. 
The goal, of course, is better throughput using MCS.



Re: how it works

2020-01-10 Thread The Lee-Man
On Friday, January 10, 2020 at 8:44:05 AM UTC-8, Bobby wrote:
>
>
> Hi,
>
>
> -  Question 1: The kernel still contains 2 files?
> -  Question  2:  Do we still have those diagrams available online?
>
>
The kernel has many files, but those two files are still present for 
open-iscsi. If you look in drivers/scsi/*iscsi*.[ch], each of those files 
is either an initiator or a target file.

I don't know what diagrams were around in the past, but we no longer have 
any on the web page, which is hosted by github now. A simple google of 
"open-iscsi architecture diagrams" yields quite a few pictures, though, 
such as this one: 
https://www.researchgate.net/figure/General-iSCSI-architecture_fig1_221396996



Re: Who know more about this issue for iscsid?

2020-01-10 Thread The Lee-Man
On Tuesday, January 7, 2020 at 12:20:15 AM UTC-8, can zhu wrote:
>
> kernel: connection2:0: detected conn error (1020)
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: Kernel reported iSCSI connection 2:0 error (1020 - 
> ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (1)
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> systemd: Started Session 3742 of user root.
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
> iscsid: conn 0 login rejected: initiator failed authorization with target
>
>
> *env*
>
> kernel:3.10.0-693.el7.x86_64
>
> os:CentOS Linux release 7.4.1708 (Core) 
>
> *iscsi*-initiator-utils: *iscsi*-initiator-utils-6.2.0.874-11.el7.x86_64
>
>
> I can't configure the ACL and username/password.
>
>
>
>
You have "auth" enabled but not set up correctly.

There are two types of auth: discovery, and session. It looks like your 
session auth is not set up correctly. You need to know the auth username 
and password. It has to be set up on the target (targetcli) and the 
initiator (open-iscsi).
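For reference, a minimal sketch of the two halves of session (CHAP) auth — the credentials here are placeholders, and the exact targetcli path syntax may vary by version; the essential point is that username and password must match on both sides:

```
# Initiator side (/etc/iscsi/iscsid.conf, or per-node record):
node.session.auth.authmethod = CHAP
node.session.auth.username = someuser          # placeholder
node.session.auth.password = some12charpass    # placeholder

# Target side (inside targetcli, per initiator ACL — illustrative path):
# /iscsi/<target-iqn>/tpg1/acls/<initiator-iqn>
#   set auth userid=someuser password=some12charpass
```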



Re: [PATCH RESEND] iscsi: Don't destroy session if there are outstanding connections

2020-01-10 Thread The Lee-Man


On Thursday, December 26, 2019 at 12:31:55 PM UTC-8, Gabriel Krisman 
Bertazi wrote:
>
> From: Nick Black  
>
> Hi, 
>
> I thought this was already committed for some reason, until it bit me 
> again today.  Any opposition to this one? 
>
> >8 
>
> A faulty userspace that calls destroy_session() before destroying the 
> connections can trigger the failure.  This patch prevents the 
> issue by refusing to destroy the session if there are outstanding 
> connections. 
>
> [ cut here ] 
> kernel BUG at mm/slub.c:306! 
> invalid opcode:  [#1] SMP PTI 
> CPU: 1 PID: 1224 Comm: iscsid Not tainted 5.4.0-rc2.iscsi+ #7 
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 
> 04/01/2014 
> RIP: 0010:__slab_free+0x181/0x350 
> [...] 
> [ 1209.686056] RSP: 0018:a93d4074fae0 EFLAGS: 00010246 
> [ 1209.686694] RAX: 934efa5ad800 RBX: 801a RCX: 
> 934efa5ad800 
> [ 1209.687651] RDX: 934efa5ad800 RSI: eb4041e96b00 RDI: 
> 934efd402c40 
> [ 1209.688582] RBP: a93d4074fb80 R08: 0001 R09: 
> bb5dfa26 
> [ 1209.689425] R10: 934efa5ad800 R11: 0001 R12: 
> eb4041e96b00 
> [ 1209.690285] R13: 934efa5ad800 R14: 934efd402c40 R15: 
>  
> [ 1209.691213] FS:  7f7945dfb540() GS:934efda8() 
> knlGS: 
> [ 1209.692316] CS:  0010 DS:  ES:  CR0: 80050033 
> [ 1209.693013] CR2: 55877fd3da80 CR3: 77384000 CR4: 
> 06e0 
> [ 1209.693897] DR0:  DR1:  DR2: 
>  
> [ 1209.694773] DR3:  DR6: fffe0ff0 DR7: 
> 0400 
> [ 1209.695631] Call Trace: 
> [ 1209.695957]  ? __wake_up_common_lock+0x8a/0xc0 
> [ 1209.696712]  iscsi_pool_free+0x26/0x40 
> [ 1209.697263]  iscsi_session_teardown+0x2f/0xf0 
> [ 1209.698117]  iscsi_sw_tcp_session_destroy+0x45/0x60 
> [ 1209.698831]  iscsi_if_rx+0xd88/0x14e0 
> [ 1209.699370]  netlink_unicast+0x16f/0x200 
> [ 1209.699932]  netlink_sendmsg+0x21a/0x3e0 
> [ 1209.700446]  sock_sendmsg+0x4f/0x60 
> [ 1209.700902]  ___sys_sendmsg+0x2ae/0x320 
> [ 1209.701451]  ? cp_new_stat+0x150/0x180 
> [ 1209.701922]  __sys_sendmsg+0x59/0xa0 
> [ 1209.702357]  do_syscall_64+0x52/0x160 
> [ 1209.702812]  entry_SYSCALL_64_after_hwframe+0x44/0xa9 
> [ 1209.703419] RIP: 0033:0x7f7946433914 
> [...] 
> [ 1209.706084] RSP: 002b:7fffb99f2378 EFLAGS: 0246 ORIG_RAX: 
> 002e 
> [ 1209.706994] RAX: ffda RBX: 55bc869eac20 RCX: 
> 7f7946433914 
> [ 1209.708082] RDX:  RSI: 7fffb99f2390 RDI: 
> 0005 
> [ 1209.709120] RBP: 7fffb99f2390 R08: 55bc84fe9320 R09: 
> 7fffb99f1f07 
> [ 1209.710110] R10:  R11: 0246 R12: 
> 0038 
> [ 1209.711085] R13: 55bc8502306e R14:  R15: 
>  
>  Modules linked in: 
>  ---[ end trace a2d933ede7f730d8 ]--- 
>
> Co-developed-by: Salman Qazi  
> Signed-off-by: Salman Qazi  
> Co-developed-by: Junho Ryu  
> Signed-off-by: Junho Ryu  
> Co-developed-by: Khazhismel Kumykov  
> Signed-off-by: Khazhismel Kumykov  
> Signed-off-by: Nick Black  
> Co-developed-by: Gabriel Krisman Bertazi  
> Signed-off-by: Gabriel Krisman Bertazi  
> --- 
>  drivers/scsi/iscsi_tcp.c|  4  
>  drivers/scsi/scsi_transport_iscsi.c | 26 +++--- 
>  2 files changed, 27 insertions(+), 3 deletions(-) 
>
> diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c 
> index 0bc63a7ab41c..b5dd1caae5e9 100644 
> --- a/drivers/scsi/iscsi_tcp.c 
> +++ b/drivers/scsi/iscsi_tcp.c 
> @@ -887,6 +887,10 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint 
> *ep, uint16_t cmds_max, 
>  static void iscsi_sw_tcp_session_destroy(struct iscsi_cls_session 
> *cls_session) 
>  { 
>  struct Scsi_Host *shost = iscsi_session_to_shost(cls_session); 
> +struct iscsi_session *session = cls_session->dd_data; 
> + 
> +if (WARN_ON_ONCE(session->leadconn)) 
> +return; 
>   
>  iscsi_tcp_r2tpool_free(cls_session->dd_data); 
>  iscsi_session_teardown(cls_session); 
> diff --git a/drivers/scsi/scsi_transport_iscsi.c 
> b/drivers/scsi/scsi_transport_iscsi.c 
> index ed8d9709b9b9..271afea654e2 100644 
> --- a/drivers/scsi/scsi_transport_iscsi.c 
> +++ b/drivers/scsi/scsi_transport_iscsi.c 
> @@ -2947,6 +2947,24 @@ iscsi_set_path(struct iscsi_transport *transport, 
> struct iscsi_uevent *ev) 
>  return err; 
>  } 
>   
> +static int iscsi_session_has_conns(int sid) 
> +{ 
> +struct iscsi_cls_conn *conn; 
> +unsigned long flags; 
> +int found = 0; 
> + 
> +spin_lock_irqsave(, flags); 
> +list_for_each_entry(conn, , conn_list) { 
> +if (iscsi_conn_get_sid(conn) == sid) { 
> +found = 1; 
> +break; 
> +} 
> + 

Re: Open-iSCSI in research paper

2020-01-01 Thread The Lee-Man
 On Tuesday, December 31, 2019 at 7:49:49 AM UTC-8, Bobby wrote:
>
> Hi all,
>
> I have come across this research paper (attached) called "*Design and 
> implementation of IP-based iSCSI Offoad Engine on an FPGA*"  and the 
> authors have mentioned they have used open source software based 
> *Open-iSCSI* for their research. At the moment there are 2 questions 
> based on this paper.
>
> *Question 1:*
> On page 3 and under section 2.4 ( *Performance Analysis of Open-iSCSI*), 
> they have started the paragraph with following lines:
>
> "*We analyzed iSCSI traffic with Wireshark, the open source network 
> packet analyzer. We measured traffic between a software initiator and a 
> target by using a set of microbenchmarks. The microbenchmarks transmitted 
> arbitrary number of data in both directions* "
>
> The question is...what are these microbenchmarks. There is no reference to 
> these microbenchmarks in this paper. Any idea, what are these 
> microbenchmarks? 
>

I have no idea. They didn't consult me when doing this paper. :) 

>
> *Question 2:*
> Similarly, on the same page 3 and under section 2.3 (Related Work), they 
> have written "*The most common software implementations in the research 
> community are open source Open-iSCSI and UNH-iSCSI projects*".
>
> After my research on UNH-iSCSI, I have found some work where some 
> researchers have proposed a hardware accelerator for data transfer iSCSI 
> functions. They analyzed UNH-iSCSI source code and presented a general 
> methodology that transforms the software C code into the hardware HDL 
> (FPGA) implementation. Hence their hardware accelerator is designed with 
> direct C-to-HDL translation of specific sub-modules of UNH-iSCSI software.
>
> The question: Is there any similar work like this for Open-iSCSI where 
> specific sub-modules of Open-iSCSI are translated to a hardware language 
> like Verilog/VHDL on hardware (FPGA)? If not, can you please give a hint 
> what would possibly a starting point in case of Open-iSCSI? Because the 
> attached paper does not mention the specific functions of Open-iSCSI code 
> that could be translated to HDL. 
>

No, none that I know of.

There are really two major chunks of open-iscsi: user-land and kernel 
driver(s). The user-land code is only used for error handling, setting up 
connections, tearing them down, and other administrative tasks (like 
directing discovery). The kernel code is where all the IO happens.

There are several adapters available for Linux that move the iSCSI stack 
into hardware. See the qedi driver, for example. These effectively act as 
the "transport" for open-iscsi, when available. I'd be interested in 
comparing throughput using these available adapters to the FPGA in the 
paper -- if I had infinite time. :)
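The user-land/kernel split described above is visible from sysfs: iscsiadm and iscsid do the administrative work in user space, while the kernel transport class publishes its host/session/connection objects under /sys/class. A small sketch (not from the original thread) that reports what the kernel side currently exposes; on a box without the iscsi transport modules loaded, the class directories simply don't exist:

```shell
# Sketch: inspecting the kernel side of the user-land/kernel split.
# The iscsi_host/iscsi_session/iscsi_connection classes are created by
# scsi_transport_iscsi; with no modules loaded or no sessions active,
# the directories are missing or empty, and the script reports that.
show_iscsi_classes() {
    for c in iscsi_host iscsi_session iscsi_connection; do
        if [ -d "/sys/class/$c" ]; then
            printf '%s: %s entries\n' "$c" "$(ls "/sys/class/$c" | wc -l)"
        else
            printf '%s: transport class not loaded\n' "$c"
        fi
    done
}

show_iscsi_classes
```

Each active session shows up as an entry under iscsi_session, which is the kernel-side object the sd/st drivers hang off.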

>
> Thanks !
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to open-iscsi+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/open-iscsi/e6557758-3d4e-4f71-8374-c308e2f09835%40googlegroups.com.


Re: Openiscsi Release Schedule

2019-12-13 Thread The Lee-Man
On Friday, December 13, 2019 at 9:01:40 AM UTC-8, Karla Thurs wrote:
>
> Hello,
> What is your latest version of Openiscsi and what was the release date? Do 
> you happen to have a release schedule that you follow? I see your dates for 
> old releases but don't see anything about following a certain schedule for 
> new releases. If you have anything to provide that would be great.
>
> Thank you,
> Karla Thurs
>
> Configuration Manager
> By Light Professional IT Services LLC
> karla.thurs@.com
>
You can always find that out by going to the github "release" directory for 
our project: https://github.com/open-iscsi/open-iscsi/releases


I just updated the "current" release to 2.1.0, which has been out about a 
month (11/14).

The version available to end users is more controlled by what each 
distribution does, since most users do not download and compile their own 
open-iscsi package. If you wish to use the latest open-iscsi code many 
times you also need the latest kernel, for example. So if you are using 
RedHat, or SUSE, you want to be using the latest package that vendor 
supports for your OS version. If that helps you.

And no, we do not have any schedule for future releases. We are a little 
short on valuable resources (like people and time) to have such things. 
Instead, we fix any bugs we find and hope to get time to make some 
improvements.

Now my turn to ask a question: why do you wish to know?



Re: [PATCH] Check whether socket is opened successfully in find_vlan_dev func

2019-12-13 Thread The Lee-Man
On Sunday, December 8, 2019 at 10:32:54 PM UTC-8, liuzhiqiang (I) wrote:
In find_vlan_dev func, socket should be checked before used. 

Signed-off-by: Zhiqiang Liu  
--- 
usr/iscsi_net_util.c | 4  
1 file changed, 4 insertions(+) 

diff --git a/usr/iscsi_net_util.c b/usr/iscsi_net_util.c 
index b5a910f..c38456f 100644 
--- a/usr/iscsi_net_util.c 
+++ b/usr/iscsi_net_util.c 
@@ -192,6 +192,10 @@ static char *find_vlan_dev(char *netdev, int vlan_id) 
{ 
int sockfd, i, rc; 

sockfd = socket(AF_INET, SOCK_DGRAM, 0); 
+	if (sockfd < 0) { 
+		log_error("Could not open socket for ioctl."); 
+		return NULL; 
+	} 
 
 	strlcpy(if_hwaddr.ifr_name, netdev, IFNAMSIZ); 
 	ioctl(sockfd, SIOCGIFHWADDR, &if_hwaddr); 
-- 
2.24.0.windows.2 

Reviewed-by: Lee Duncan 



Re: reboot hangs with "Reached target shutdown", who can help me?

2019-12-12 Thread The Lee-Man
Okay, I checked CentOS 8, and the services seem very similar to what I'm 
familiar with.

You do indeed need to make sure your nodes have startup set to automatic.

Use something like:

> zsh> sudo iscsiadm -m node --op update --name 'node.conn[0].startup' 
--value automatic

to update all nodes to start and stop automatically, and update startup in 
/etc/iscsi/iscsid.conf to change the default.



Re: reboot hangs with "Reached target shutdown", who can help me?

2019-12-12 Thread The Lee-Man
On Tuesday, December 10, 2019 at 6:25:00 AM UTC-8, can zhu wrote:
>
> os version:
>
> CentOS Linux release 7.4.1708 (Core)
>
> kernel version:  
>
> 3.10.0-693.el7.x86_64
>
>
> systemd version:
>
> *systemd*-219-42.el7.x86_64
>
>
> Mount iscsi devices on the node(iscsi client node) and reboot os, hangs:
> ...
>

Hello:

Such issues are common if the proper sequencing is not followed when 
shutting down iSCSI connections.

One has to (in order):


   - if iscsi devices are being used, stop using them. That generally means 
   unmounting any filesystems that use the devices.
   - logout of the iSCSI connection, i.e. end the iscsi session cleanly
   - stop the iscsi daemon
   - now the network can be shutdown

And this of course assumes your target(s) are on other systems that are not 
being shut down.
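The four steps above can be sketched as a script. This is illustrative only: the mount point /mnt/iscsi and the unit names are assumptions (unit names in particular vary by distro), so it defaults to a dry run that prints each command; set DRY_RUN=0 to actually execute them.

```shell
# Manual iSCSI shutdown ordering (sketch; the mount point and unit names
# are hypothetical). Defaults to a dry run: commands are printed, not run.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

iscsi_shutdown() {
    run umount /mnt/iscsi                    # 1. stop using the devices
    run iscsiadm -m node -u                  # 2. log out of all sessions cleanly
    run systemctl stop iscsid.service        # 3. stop the iscsi daemon
    run systemctl stop NetworkManager.service # 4. only now take the network down
}

iscsi_shutdown
```

On a systemd machine you normally never run this by hand; the point of the ordering is that it matches the dependency chain the iscsi service units encode.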

As mentioned by Ulrich, this sequencing is now handled by systemd on most 
Linux systems. And the way in which this is handled is that these different 
layers are handled by different services. For example, on SUSE, the daemon 
is controlled by iscsid.socket and iscsid.service, and the login/logout is 
handled by iscsi.service.

I do not have a CentOS 7 system, but I'm downloading CentOS 8 to see how RH 
has set up the iSCSI services there.

But, at a low level, you must have the "startup" value set to "automatic" 
for targets to be disconnected automatically at shutdown time. So you 
should be able to run:

> zsh> sudo iscsiadm -m node --op show | fgrep startup

to see the startup value.



Re: Re: iSCSI packet generator

2019-12-11 Thread The Lee-Man
On Tuesday, December 10, 2019 at 4:20:28 AM UTC-8, Bobby wrote:
>
>
> Like above, can you please give me more hints/clues in which other code(s) 
> I need to see. Which part of this gigantic MQ-Block layer code base to see 
> to understand the complete data flow? I am particularly interested in Hash 
> , Map data structures. 
>

No, not really.

There are many great resources for learning about the kernel and its 
drivers, and the block layer. For example, join lwn.net and check out 
https://lwn.net/Articles/736534/

And there are many great books, such as Linux Kernel Development.

>
>
> On Tuesday, December 10, 2019 at 11:34:49 AM UTC+1, Bobby wrote:
>>
>>
>> Perfect ! After this reply, I had to dig deeper and now it makes 
>> sense... thanks a lot The Lee-Man for explaining it so effectively...
>>
>>
>> On Saturday, November 9, 2019 at 7:52:52 PM UTC+1, The Lee-Man wrote:
>>>
>>> On Friday, November 8, 2019 at 10:40:08 AM UTC-8, Bobby wrote:
>>>>
>>>>
>>>> Hi Ulrich,
>>>>
>>>> Thanks for the hint. Can you please help me regarding following two 
>>>> questions. 
>>>>
>>>> - Linux block layer perform IO scheduling IO submissions to storage 
>>>> device driver. If there is a physical device, the block layer interacts 
>>>> with it through SCSI mid layer and SCSI low level drivers. So, how 
>>>> *actually* a software initiator (*Open-iSCSI*) interacts with "*block 
>>>> layer*"? 
>>>>
>>>> - What confuses me, where does the "*disk driver*" comes into play?
>>>>
>>>> Thanks :-)
>>>>
>>>>
>>> In an iSCSI connection (session), there is the initiator and the target. 
>>> I assume you are talking about the initiator.
>>>
>>> On the initiator, the "magic" is done by the kernel, in particular the 
>>> iSCSI initiator code in the kernel, specifically by the 
>>> scsi_transport_iscsi.c in drivers/scsi. When an iSCSI connection is made, 
>>> the code creates a new "host" object, and then tests the device at the 
>>> other end of the connection. If it's a disc drive, then an instance of sd 
>>> is created (the disc driver). If the device is tape, a tape driver is 
>>> instantiated (st). Unrecognized devices still get a generic SCSI device 
>>> node, I believe.
>>>
>>> So, in this way, iSCSI is acting like an adapter driver, which plugs 
>>> into the SCSI mid-layer.
>>>
>>> You can run "sudo journalctl -xe --follow" in one window, then log into 
>>> an existing target in another (I used "sudo iscsiadm -m node -l"), and you 
>>> should see this kind of output from journalctl:
>>>
>>> ...
>>>
>>>  
>>>
>>>> Nov 09 10:46:59 linux-dell kernel: iscsi: registered transport (tcp)
>>>> Nov 09 10:46:59 linux-dell kernel: scsi host3: iSCSI Initiator over 
>>>> TCP/IP
>>>> Nov 09 10:46:59 linux-dell iscsid[13175]: iscsid: Connection1:0 to 
>>>> [target: iqn.2003-01.org.linux-iscsi.linux-dell.x8664:sn.2a6e21b1b53c, 
>>>> portal: 192.168.20.3,3260] through [iface: default] is operational now
>>>> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: Direct-Access 
>>>> LIO-ORG  test-disc4.0  PQ: 0 ANSI: 5
>>>> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: alua: supports 
>>>> implicit and explicit TPGS
>>>> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: alua: device 
>>>> naa.6001405de01c6e7933b414e901e22b0f port group 0 rel port 1
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: Attached scsi generic 
>>>> sg1 type 0
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] 2097152 512-byte 
>>>> logical blocks: (1.07 GB/1.00 GiB)
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Write Protect is 
>>>> off
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Mode Sense: 43 00 
>>>> 10 08
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Write cache: 
>>>> enabled, read cache: enabled, supports DPO and FUA
>>>> Nov 09 10:46:59 linux-dell kernel: 
>>>> iSCSI/iqn.1996-04.de.suse:01:54cab487975b: Unsupported SCSI Opcode 0xa3, 
>>>> sending CHECK_CONDITION.
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Optimal transfer 
>>>> size 8388608 bytes
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Attached SCSI disk
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: alua: transition timeout 
>>>> set to 60 seconds
>>>> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: alua: port group 00 
>>>> state A non-preferred supports TOlUSNA
>>>>
>>>... 
>>>
>>>



New Version Tagged

2019-11-14 Thread The Lee-Man
I have tagged version 2.1.0 of open-iscsi.



Re: Re: iSCSI packet generator

2019-11-09 Thread The Lee-Man
On Friday, November 8, 2019 at 10:40:08 AM UTC-8, Bobby wrote:
>
>
> Hi Ulrich,
>
> Thanks for the hint. Can you please help me regarding following two 
> questions. 
>
> - Linux block layer perform IO scheduling IO submissions to storage device 
> driver. If there is a physical device, the block layer interacts with it 
> through SCSI mid layer and SCSI low level drivers. So, how *actually* a 
> software initiator (*Open-iSCSI*) interacts with "*block layer*"? 
>
> - What confuses me, where does the "*disk driver*" comes into play?
>
> Thanks :-)
>
>
In an iSCSI connection (session), there is the initiator and the target. I 
assume you are talking about the initiator.

On the initiator, the "magic" is done by the kernel, in particular the 
iSCSI initiator code in the kernel, specifically by the 
scsi_transport_iscsi.c in drivers/scsi. When an iSCSI connection is made, 
the code creates a new "host" object, and then tests the device at the 
other end of the connection. If it's a disc drive, then an instance of sd 
is created (the disc driver). If the device is tape, a tape driver is 
instantiated (st). Unrecognized devices still get a generic SCSI device 
node, I believe.

So, in this way, iSCSI is acting like an adapter driver, which plugs into 
the SCSI mid-layer.

You can run "sudo journalctl -xe --follow" in one window, then log into an 
existing target in another (I used "sudo iscsiadm -m node -l"), and you 
should see this kind of output from journalctl:

...

 

> Nov 09 10:46:59 linux-dell kernel: iscsi: registered transport (tcp)
> Nov 09 10:46:59 linux-dell kernel: scsi host3: iSCSI Initiator over TCP/IP
> Nov 09 10:46:59 linux-dell iscsid[13175]: iscsid: Connection1:0 to 
> [target: iqn.2003-01.org.linux-iscsi.linux-dell.x8664:sn.2a6e21b1b53c, 
> portal: 192.168.20.3,3260] through [iface: default] is operational now
> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: Direct-Access 
> LIO-ORG  test-disc4.0  PQ: 0 ANSI: 5
> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: alua: supports implicit 
> and explicit TPGS
> Nov 09 10:46:59 linux-dell kernel: scsi 3:0:0:0: alua: device 
> naa.6001405de01c6e7933b414e901e22b0f port group 0 rel port 1
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: Attached scsi generic sg1 
> type 0
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] 2097152 512-byte 
> logical blocks: (1.07 GB/1.00 GiB)
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Write Protect is off
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Mode Sense: 43 00 10 
> 08
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Write cache: enabled, 
> read cache: enabled, supports DPO and FUA
> Nov 09 10:46:59 linux-dell kernel: 
> iSCSI/iqn.1996-04.de.suse:01:54cab487975b: Unsupported SCSI Opcode 0xa3, 
> sending CHECK_CONDITION.
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Optimal transfer size 
> 8388608 bytes
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: [sdb] Attached SCSI disk
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: alua: transition timeout 
> set to 60 seconds
> Nov 09 10:46:59 linux-dell kernel: sd 3:0:0:0: alua: port group 00 state A 
> non-preferred supports TOlUSNA
>
   ... 



Re: iSCSI packet generator

2019-11-04 Thread The Lee-Man
On Monday, November 4, 2019 at 2:49:08 AM UTC-8, Bobby wrote:
>
> Hi
>
> I have two virtual machines. One is a client and other is a sever (SAN). I 
> am using Wireshark to  analyze the iSCSI protocols between them.
>
> Someone recommended me, in addition to a packet analyzer, I can also use a 
> packet generator. Any good packet generator for iSCSI client/server model?
>
> Thanks
>

Your question is not clear, but I'm *guessing*  you are asking if you can 
use some sort of software to inject iSCSI packets into your client/server 
stream, e.g. so that you can simulate errors and see how your software 
handles them?

If so, then the answer is no, there is nothing I know of.

Such "bad command injection" can be done with fancy hardware analyzers. A 
good (expensive) network analyzer can (I believe) inject bad packets of any 
type. See https://www.firewalltechnical.com/packet-injection-tools/

It sounds like none of this is directly related to open-iscsi, though.



Re: after changing storage ports from 1Gig to 10Gig unable to access the vm, Openstack level vm is going to error state

2019-10-11 Thread The Lee-Man
On Monday, October 7, 2019 at 7:46:04 AM UTC-7, Kuruva Maddileti wrote:
>
> Hi Team,
>
> We have changed the Unity storage ports from 1Gig to 10Gig. After that we 
> have deleted the old iqn's from storage level and compute level.
>
> old iqn's :
>
> 170.0.0.10
> 170.0.0.11
>
> New 10Gig iqn's :
>
> 170.0.0.20
> 170.0.0.21
>
>  After deleting old iqn's we are able to access the storage at compute 
> level, that storage is assigned to openstack vm.
>
> We have rebooted one compute host, after that we are unable to access the 
> storage and the openstack vm is going to error state. 
>
> As per the nova-compute logs, the nova is trying to search for old iqn's 
> which is not present that's the reason vm is going to error state.
>
> Could you please find the below logs and advise further...
>
>
>
>
> root@compute75 ~]# iscsiadm -m session
> tcp: [1] 170.0.0.11:3260,8 iqn.1992-04.com.emc:cx.ckm00185002995.b0 
> (non-flash)
> tcp: [2] 170.0.0.10:3260,9 iqn.1992-04.com.emc:cx.ckm00185002995.a0 
> (non-flash)
> tcp: [3] 170.0.0.20:3260,7 iqn.1992-04.com.emc:cx.ckm00185002995.a1 
> (non-flash)
> tcp: [4] 170.0.0.21:3260,6 iqn.1992-04.com.emc:cx.ckm00185002995.b1 
> (non-flash)
>
> We have rebooted the old iqn's from compute level, Storage team already 
> removed 1Gig ports from Unity storage side.
>
> [root@compute75 ~]# iscsiadm -m node -T  
> iqn.1992-04.com.emc:cx.ckm00185002995.b0 -p 170.0.0.11 -u
> Logging out of session [sid: 1, target: 
> iqn.1992-04.com.emc:cx.ckm00185002995.b0, portal: 170.0.0.11,3260]
> Logout of [sid: 1, target: iqn.1992-04.com.emc:cx.ckm00185002995.b0, 
> portal: 170.0.0.11,3260] successful.
> [root@compute75 ~]# iscsiadm -m node -T  
> iqn.1992-04.com.emc:cx.ckm00185002995.a0  -p 170.0.0.10 -u
> Logging out of session [sid: 2, target: 
> iqn.1992-04.com.emc:cx.ckm00185002995.a0, portal: 170.0.0.10,3260]
> Logout of [sid: 2, target: iqn.1992-04.com.emc:cx.ckm00185002995.a0, 
> portal: 170.0.0.10,3260] successful.
> [root@compute75 ~]# iscsiadm -m node -o delete -T  
> iqn.1992-04.com.emc:cx.ckm00185002995.b0
> [root@compute75 ~]# iscsiadm -m node -o delete -T  
> iqn.1992-04.com.emc:cx.ckm00185002995.a0
> [root@compute75 ~]#
> [root@compute75 ~]# systemctl restart iscsi
> [root@compute75 ~]# systemctl restart multipathd
>
> Try `iscsiadm --help' for more information.
> [root@compute75 ~]# iscsiadm --m node
> 170.0.0.20:3260,7 iqn.1992-04.com.emc:cx.ckm00185002995.a1
> 170.0.0.21:3260,6 iqn.1992-04.com.emc:cx.ckm00185002995.b1
> [root@compute75 ~]#
>
>
> After reboot of the compute host 
>
>
> root@compute75 ~]# iscsiadm --m session
> tcp: [3] 170.0.0.20:3260,7 iqn.1992-04.com.emc:cx.ckm00185002995.a1 
> (non-flash)
> tcp: [4] 170.0.0.21:3260,6 iqn.1992-04.com.emc:cx.ckm00185002995.b1 
> (non-flash)
> [root@compute75 ~]# multipath -ll
> mpathb (36006016029104b0084e7955d71109aa0) dm-1 DGC ,VRAID
> size=1.0G features='2 queue_if_no_path retain_attached_hw_handler' 
> hwhandler='1 alua' wp=rw
> |-+- policy='service-time 0' prio=50 status=active
> | `- 10:0:0:12395 sdm 8:192 active ready running
> `-+- policy='service-time 0' prio=10 status=enabled
>   `- 11:0:0:12395 sdn 8:208 active ready running
> mpatha (36006016029104b0050e3955d0d37f4ae) dm-0 DGC ,VRAID
> size=10G features='2 queue_if_no_path retain_attached_hw_handler' 
> hwhandler='1 alua' wp=rw
> |-+- policy='service-time 0' prio=50 status=active
> | `- 11:0:0:4390  sdl 8:176 active ready running
> `-+- policy='service-time 0' prio=10 status=enabled
>   `- 10:0:0:4390  sdk 8:160 active ready running
> [root@compute75 ~]#
>
>
> After removing old paths, still we are able to access the storage.
>
>
> [root@compute75 ~]# ssh sdn@192.0.2.14
> sdn@192.0.2.14's password:
> Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-142-generic x86_64)
>
>  * Documentation:  https://help.ubuntu.com
>  * Management: https://landscape.canonical.com
>  * Support:https://ubuntu.com/advantage
>
> 98 packages can be updated.
> 56 updates are security updates.
>
>
> Last login: Fri Oct  4 16:49:06 2019 from 192.0.2.1
> sdn@ubuntu:~$ df -hT /mnt/test
> Filesystem Type  Size  Used Avail Use% Mounted on
> /dev/vdb1  ext3  991M   35M  906M   4% /mnt/test
> sdn@ubuntu:~$ cd /mnt/test
> sdn@ubuntu:/mnt/test$ touch bb
> touch: cannot touch 'bb': Permission denied
> sdn@ubuntu:/mnt/test$ sudo su -
> [sudo] password for sdn:
> root@ubuntu:~# cd /mnt/test
> root@ubuntu:/mnt/test# touch bb
> root@ubuntu:/mnt/test# ls
> aa  bb  docs  docs2  lost+found
> root@ubuntu:/mnt/test#
>
>
> Now rebooted the compute node. The vm is going to error state.
>
>
>
> [root@compute75 ~]# reboot
> Connection to compute75 closed by remote host.
> Connection to compute75 closed.
> [root@osc ~(keystone_admin)]#
>
>
>
> [root@osc ~(keystone_admin)]# openstack server list
>
> +--+---++-++
> | ID   | Name  | Status | Networks
> | Image Name |
>

Re: iscsiadm unable to connect to iscsd

2019-09-30 Thread The Lee-Man
See https://github.com/open-iscsi/open-iscsi/pull/174

On Friday, September 20, 2019 at 2:02:20 AM UTC-7, Dirk Laurenz wrote:
>
> Hi,
>
> want to read the session stats for a connection, but iscsiadm claims not 
> to be able to connect to iscsd.
> I'm not sure how to debug this:
>
> $host:/etc/iscsi # iscsiadm -m session
> tcp: [1] $IP1:3260,1032 $host1-lun01 (non-flash)
> tcp: [2] $IP2:3260,1032 $host2-lun01 (non-flash)
> tcp: [3] $IP3:3260,1 $host3:lun01 (non-flash)
> $host:/etc/iscsi # iscsiadm -m session -r 2 -s
> iscsiadm: Could not execute operation on all sessions: could not connect 
> to iscsid
>
> any idea?
>
> OS is SLES4SAP12 SP4
>
> Regards,
>
> Dirk
>
>



Re: iscsiadm unable to connect to iscsd

2019-09-30 Thread The Lee-Man
Okay, I believe I found the problem, and it's one that I've seen before. On 
one particular path -- in this case, when you specify "-s" as well as "-r 
N" -- the code path forgets to set the timeout to "none" when communicating 
with iscsid.

I have pushed my change to https://github.com/gonzoleeman/open-iscsi branch 
fix-session-display-error

Please feel free to try this out before I merge it into the main line, but 
it seems to fix the problem for me.



Re: iscsiadm unable to connect to iscsd

2019-09-30 Thread The Lee-Man
On Friday, September 20, 2019 at 2:02:20 AM UTC-7, Dirk Laurenz wrote:
>
> Hi,
>
> want to read the session stats for a connection, but iscsiadm claims not 
> to be able to connect to iscsd.
> I'm not sure how to debug this:
>
> $host:/etc/iscsi # iscsiadm -m session
> tcp: [1] $IP1:3260,1032 $host1-lun01 (non-flash)
> tcp: [2] $IP2:3260,1032 $host2-lun01 (non-flash)
> tcp: [3] $IP3:3260,1 $host3:lun01 (non-flash)
> $host:/etc/iscsi # iscsiadm -m session -r 2 -s
> iscsiadm: Could not execute operation on all sessions: could not connect 
> to iscsid
>
> any idea?
>

That looks like a bug! Let me check it out. 

>
> OS is SLES4SAP12 SP4
>
> Regards,
>
> Dirk
>
>



Re: Open-iscsi slow boot

2019-06-27 Thread The Lee-Man
On Thursday, June 27, 2019 at 11:44:11 AM UTC-4, Randy Broman wrote:
>
> I appreciate your interest, and I've attached a text file which I hope 
> is responsive to your request. 
>
> R 
>
> On Wed, Jun 26, 2019 at 8:55 AM The Lee-Man wrote: 
> > 
> > On Tuesday, June 25, 2019 at 11:31:03 AM UTC-4, Randy Broman wrote: 
> >> 
> >> Thanks for your response. I'm using Kubuntu 19.04. I disabled the iscsi 
> service and in fact the boot was much faster: 
> >> 
> >> 
> > I'm not understanding what's going on with your system. I suspect 
> there's more than just an unused open-iscsi initiator involved here. 
> > 
> > Do you have any iscsi targets set up? Existing sessions? 
> > 
> > I downloaded kunbuntu, and open-iscsi.service is enabled by default. Can 
> you give me the systemctl status for open-iscsi.service, iscsid.socket, and 
> iscsid.service? Also, an "ls" of /etc/iscsi/nodes and 
> /sys/class/iscsi_session? 
> > 
> > And please don't assume that the numbers that "systemd-analyze blame" 
> show -- they don't always mean what you think. Can you just please time the 
> boot (or reboot) sequence yourself, using the log files? 
> > 
> > On my test VM, I have iscsid.socket, iscsid.service, and 
> open-iscsi.service at their default settings, but I have never discovered 
> any targets, so I don't have any history of nodes or sessions. And when I 
> run "systemd-analyze blame", iscsi does not show up at all. 
> > 
>
>
Your error messages make it clear that you are having initiator/target 
issues. If you look at the status of the open-iscsi.service unit, you can 
see it waits for the target to connect, then times out. Timing out always 
adds lots of time to a boot process.

It seems there is some issue with your "QNAP Target". I cannot help you 
with that. But you might want to check there for error messages, if there 
is some way to do that.




Re: Open-iscsi slow boot

2019-06-26 Thread The Lee-Man
On Tuesday, June 25, 2019 at 11:31:03 AM UTC-4, Randy Broman wrote:
>
> Thanks for your response. I'm using Kubuntu 19.04. I disabled the iscsi 
> service and in fact the boot was much faster:
>
>
I'm not understanding what's going on with your system. I suspect there's 
more than just an unused open-iscsi initiator involved here.

Do you have any iscsi targets set up? Existing sessions?

I downloaded kunbuntu, and open-iscsi.service is enabled by default. Can 
you give me the systemctl status for open-iscsi.service, iscsid.socket, and 
iscsid.service? Also, an "ls" of /etc/iscsi/nodes and 
/sys/class/iscsi_session?

And please don't assume that the numbers that "systemd-analyze blame" show 
-- they don't always mean what you think. Can you just please time the boot 
(or reboot) sequence yourself, using the log files?
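One way to do that timing, sketched below: take the difference between the first and last timestamp of the boot in the journal, rather than trusting the per-unit "blame" numbers. On a live system you would feed it `journalctl -b -o short-unix` (that format puts the epoch time in column 1); the inline sample log here is made up for illustration.

```shell
# Compute wall-clock boot duration from journal timestamps.
# Real usage (assumed, on a systemd box):
#   journalctl -b -o short-unix | boot_secs
boot_secs() {
    awk 'NR==1 {first=$1} {last=$1} END {printf "%.0f\n", last - first}'
}

# Hypothetical journal excerpt standing in for real output:
sample='1561200000.100 kernel: Linux version 5.0.0 ...
1561200004.500 systemd[1]: Reached target Basic System.
1561200036.900 systemd[1]: Startup finished in 36.8s.'

printf '%s\n' "$sample" | boot_secs    # prints 37 (36.8 rounded)
```

Comparing that number with a given service enabled versus disabled tells you what the service actually costs, independent of how systemd attributes the time.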

On my test VM, I have iscsid.socket, iscsid.service, and open-iscsi.service 
at their default settings, but I have never discovered any targets, so I 
don't have any history of nodes or sessions. And when I run 
"systemd-analyze blame", iscsi does not show up at all.



Re: Open-iscsi slow boot

2019-06-25 Thread The Lee-Man
On Saturday, June 22, 2019 at 11:00:44 AM UTC-4, Randy Broman wrote:
>
> I have open-iscsi installed on Kubuntu 19.04, to access shared storage on 
> a QNAP NAS server. The setup works, but open-iscsi slows boot:
>
> $ systemd-analyze blame
>  2min 6.105s open-iscsi.service
>  10.076s rtslib-fb-targetctl.service
>   6.042s NetworkMan.
>   ..
>   
> and I don't need QNAP/open-iscsi to boot, so I'm trying to set up a timer 
> to delay iscsi connection until after the boot completes and the 
> Kubuntu/Plasma desktop 
> loads. Here's what I have:
>
> $ cat /lib/systemd/system/open-iscsi.timer
> [Unit]
> Description=open-iscsi timer
>
> [Timer]
> # Time to wait after booting before it run for first time
> OnBootSec=3min
> Unit=open-iscsi.service
>
> [Install]
> WantedBy=timers.target
>
> $ ls -l /lib/systemd/system/open-iscsi.service
> -rw-r--r-- 1 root root 1068 Dec 11  2018 
> /lib/systemd/system/open-iscsi.service
>
> ls -l /etc/systemd/system/timers.target.wants/open-iscsi.timer
> lrwxrwxrwx 1 root root 36 Jun 21 20:59 
> /etc/systemd/system/timers.target.wants/open-iscsi.timer -> 
> /lib/systemd/system/open-iscsi.timer
>
> (I ran $ sudo systemctl daemon-reload and $ sudo systemctl enable 
> open-iscsi.timer after creating the timer)
>
> What am I doing wrong, and/or what do I need to do to fix this?
>
> Thx!
>

I don't know anything about systemd timers, but there should be no reason 
for this.

What distro are you using? What iscsi service files are there, and which 
ones are enabled?

In SUSE we have iscsid.socket, iscsid.service, and iscsi.service. The first 
two are for the iscsid daemon, and the last is for iscsi logins/logouts. 
Then, if you're using broadcom, you might also have iscsiuio.socket and 
iscsiuio.service.

I investigated a bug once where a customer was unhappy that the iscsi 
service was taking so long to start up, according to systemd's "blame" 
output. It really wasn't taking a long time; the dependencies just made it 
look that way. You can always disable the iscsi services completely and 
compare the actual boot time with and without them enabled, to see whether 
they are really impacting your boot time.



Re: [bug report][scsi blk_mq] data corruption due to a bug in SCSI LLD error handling

2019-05-09 Thread The Lee-Man
I think the point is that the sg path is not equal to the real IO path. You 
are (currently) never going to get correct error handling from the sg path, 
which is considered out of band.

So what are you trying to do here? Are you really developing software that 
uses the sg path, and you need this to work, or are you testing the error 
handling and decided to use sg? In other words, is this a real bug, or just 
something you think should work but does not?
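As background for reading the logs quoted below: with the sg v3 interface, errno only reports submission-time failures; transport errors on completed commands land in the sg_io_hdr status bytes. A small sketch of decoding them (the host-byte values are from the kernel's SCSI midlayer; the helper names here are mine, not from any library):

```python
# Subset of SCSI midlayer host-byte codes (include/scsi/scsi.h)
HOST_STATUS = {
    0x00: "DID_OK",
    0x03: "DID_TIME_OUT",
    0x0e: "DID_TRANSPORT_DISRUPTED",  # seen in the quoted logs
    0x0f: "DID_TRANSPORT_FAILFAST",
}

DRIVER_TIMEOUT = 0x06  # driver-byte code Doug mentions below


def sg_io_failed(host_status: int, driver_status: int,
                 scsi_status: int = 0) -> bool:
    """A completed SG_IO failed if any of the three status bytes is set."""
    return bool(host_status or driver_status or scsi_status)


def describe_host_status(host_status: int) -> str:
    """Map a host-status byte to its symbolic name, or hex if unknown."""
    return HOST_STATUS.get(host_status, "0x%02x" % host_status)
```

So "Host_status=0x0e" in the quoted output decodes to 
DID_TRANSPORT_DISRUPTED, and a caller that checks only errno never sees it.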

On Tuesday, April 9, 2019 at 5:36:42 PM UTC-4, Jaesoo Lee wrote:
>
> Hi, 
>
> Thanks for the comments. 
>
> I tried to run sg_write_same over sg device and I am seeing the same 
> problem. 
>
> The result is as follows: 
>
> 0. Kernel configs 
> Version: 5.1-rc1 
> Boot parameter: dm_mod.use_blk_mq=Y scsi_mod.use_blk_mq=Y 
>
> 1. Normal state 
> : (As expected) The command succeeded 
>
> $ sg_write_same --lba=100 --xferlen=512 /dev/sg5 
> $ 
>
> 2. Immediately after bringing down the iSCSI interface at the target 
> : (As expected) Failed with DID_TRANSPORT_DISRUPTED after a few seconds 
>
> $ sg_write_same --lba=100 --xferlen=512 /dev/sg5 
> Write same: transport: Host_status=0x0e [DID_TRANSPORT_DISRUPTED] 
> Driver_status=0x00 [DRIVER_OK, SUGGEST_OK] 
>
> Write same(10) command failed 
>
> 3. Immediately after the DID_TRANSPORT_DISRUPTED error 
> : (BUG) The command succeeded after a few seconds 
>
> $ sg_write_same --lba=100 --xferlen=512 /dev/sg5 
> $ 
>
> : Kernel logs 
> Apr  8 18:28:03 init21-16 kernel: session1: session recovery timed out 
> after 10 secs 
> Apr  8 18:28:03 init21-16 kernel: sd 8:0:0:5: rejecting I/O to offline 
> device 
>
> 4. Issued IO again 
> : (As expected) The command failed 
>
> $ sg_write_same --lba=100 --xferlen=512 /dev/sg5 
> Write same: pass through os error: No such device or address 
> Write same(10) command failed 
>
> Let me upload my fix for this and please give me some comments on that. 
>
> Thanks, 
>
> Jaesoo Lee. 
>
> -- Forwarded message - 
> From: Douglas Gilbert 
> Date: Wed, Apr 3, 2019 at 2:06 PM 
> Subject: Re: [bug report][scsi blk_mq] data corruption due to a bug in 
> SCSI LLD error handling 
> To: <...>
>
>
> On 2019-04-03 4:18 p.m., Jaesoo Lee wrote: 
> > Hello All, 
> > 
> > I encountered this bug while trying to enable dm_blk_mq for our 
> > iSCSI/FCP targets. 
> > 
> > The bug is that the sg_io issued to scsi_blk_mq would succeed even if 
> > LLD wants to error out those requests. 
> > 
> > Let me explain the scenario in more details. 
> > 
> > Setup: 
> > 0. Host kernel configuration 
> > - 4.19.9, 4.20.16 
> > - boot parameter: dm_mod.use_blk_mq=Y scsi_mod.use_blk_mq=Y 
> > scsi_transport_iscsi.debug_session=1 scsi_transport_iscsi.debug_conn=1 
> > 
> > Scenario: 
> > 1. Connect the host to iSCSI target via four paths 
> > : A dm device is created for those target devices 
> > 2. Start an application in the host which generates sg_io ioctl for 
> > XCOPY and WSAME to the dm device with the ratio of around 50% 
> > (pread/pwrite for the rest). 
> > 3. Perform system crash (sysrq-trigger) in the iSCSI target 
> > 
> > Expected result: 
> > - Any outstanding IOs should get failed with errors 
> > 
> > Actual results: 
> > - Normal read/write IOs get failed as expected 
> > - SG_IO ioctls SUCCEEDED!! 
>
> Not all ioctl(SG_IO)s are created equal! 
>
> If you are using the sg v3 interface (i.e. struct sg_io_hdr) then I would 
> expect DRIVER_TIMEOUT in sg_io_obj.driver_status or DID_TIME_OUT in 
> sg_io_obj.host_status to be set on completion. [BTW You will _not_ see 
> a ETIMEDOUT errno; only errors prior to submission yield errno style 
> errors.] 
>
> If you don't see that with ioctl(SG_IO) on a block device then try again 
> on 
> a sg device. If neither report that then the mid-level error processing 
> is broken. 
>
> Doug Gilbert 
>
>
> > - log message: 
> > [Tue Apr  2 11:26:34 2019]  session3: session recovery timed out after 
> 11 secs 
> > [Tue Apr  2 11:26:34 2019]  session3: session_recovery_timedout: 
> > Unblocking SCSI target 
> > .. 
> > [Tue Apr  2 11:26:34 2019] sd 8:0:0:8: scsi_prep_state_check: 
> > rejecting I/O to offline device 
> > [Tue Apr  2 11:26:34 2019] sd 8:0:0:8: scsi_prep_state_check: 
> > rejecting I/O to offline device 
> > [Tue Apr  2 11:26:34 2019] sd 8:0:0:8: scsi_prep_state_check: 
> > rejecting I/O to offline device 
> > [Tue Apr  2 11:26:34 2019] print_req_error: I/O error, dev sdi, sector 
> 30677580 
> > [Tue Apr  2 11:26:34 2019] device-mapper: multipath: Failing path 8:128. 
> > [Tue Apr  2 11:26:34 2019] SG_IO disk=sdi, result=0x0 
> > 
> > - This causes the DATA corruption for the application 
> > 
> > Relevant call stacks: (SG_IO issue path) 
> > [Tue Apr  2 11:26:33 2019] sd 8:0:0:8: [sdi] sd_ioctl: disk=sdi, 
> cmd=0x2285 
> > [Tue Apr  2 11:26:33 2019] SG_IO disk=sdi, retried 1 cmd 93 
> > [Tue Apr  2 11:26:33 2019] CPU: 30 PID: 16080 Comm: iostress Not 
> > tainted 4.19.9-purekernel_dbg.x86_64+ #30 
> > [Tue Apr  2 11:26:33 

Re: tcmu-runner failed to find module target_core_user

2019-03-29 Thread The Lee-Man
Note: target development mailing list is at: target-de...@vger.kernel.org, 
which you may have to join, first.



Re: tcmu-runner failed to find module target_core_user

2019-03-29 Thread The Lee-Man
On Thursday, March 28, 2019 at 8:55:07 PM UTC-7, Ravi Salamani wrote:
>
> Hi all,
> I am able to build and install tcmu-runner, but failed to run 
> tcmu-runner with the following error
>
> $tcmu-runner
>
> Starting... kmod_module_new_from_lookup() failed to find module 
> target_core_user couldn't load module
>
>
> $cat /var/log//tcmu-runner.log
>
> 2019-03-28 09:42:31.652 25895 [ERROR] load_our_module:511: 
> kmod_module_new_from_lookup() failed to find module target_core_user
>
>
> Any idea on this?
>
>
>
> Regards,
>
> Ravi Salamani
>

First of all, this is the wrong list. This list is for open-iscsi, which is 
an iscsi initiator. Your problem is with an iscsi target. So you're likely 
to get more help there.

Nevertheless, it looks to me like you might not have the 
"target_core_user" module. Try "modinfo target_core_user".



Re: [PATCH] Update iscsid to always restart

2019-03-08 Thread The Lee-Man
On Thursday, February 21, 2019 at 2:35:46 PM UTC-8, fred.her...@oracle.com 
wrote:
>
> From: Fred Herard  
>
> This change adds Restart=always systemd service option to iscsid.service 
> config file so that iscsid daemon is always restarted.  This is 
> particularly useful when using iscsi boot device and iscsid daemon 
> crashes or is inadvertently killed. 
> --- 
>  etc/systemd/iscsid.service | 1 + 
>  1 file changed, 1 insertion(+) 
>
> diff --git a/etc/systemd/iscsid.service b/etc/systemd/iscsid.service 
> index f5e8979..e22b372 100644 
> --- a/etc/systemd/iscsid.service 
> +++ b/etc/systemd/iscsid.service 
> @@ -10,6 +10,7 @@ Type=notify 
>  NotifyAccess=main 
>  ExecStart=/sbin/iscsid -f 
>  KillMode=mixed 
> +Restart=always 
>   
>  [Install] 
>  WantedBy=multi-user.target 
> -- 
> 1.8.3.1 
>
>
I'm not sure I agree with "always". I believe "on-failure" might make more 
sense?

The daemon iscsid only does an "exit(0)" if it is exiting cleanly, so why 
would we restart the service in that case?
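For comparison, the suggested alternative would change just that one line; a sketch of the resulting [Service] section:

```ini
[Service]
Type=notify
NotifyAccess=main
ExecStart=/sbin/iscsid -f
KillMode=mixed
Restart=on-failure
```

With on-failure, a clean exit(0) leaves the daemon stopped, while a crash 
or an unclean exit still triggers a restart.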



Re: [RFC] Add source address binding to iSER

2019-02-15 Thread The Lee-Man
On Wednesday, February 6, 2019 at 9:11:45 AM UTC-8, Nikhil Valluru wrote:
>
> Hi Robert,
>
> I am looking for a similar fix and wanted to know if this patch was 
> accepted into the linux kernel? 
>
> Thanks,
> Nikhil
>
>
As a quick look at linux upstream shows, this code is not in the kernel.

Moreover, looking at this old thread, I believe the original problem 
turned out to be a non-problem once Robert updated to the latest upstream.



Re: [RFC 1/1] libiscsi: Fix race between iscsi_xmit_task and iscsi_complete_task

2019-02-14 Thread The Lee-Man
On Tuesday, July 17, 2018 at 8:56:53 AM UTC-7, Anoob Soman wrote:
>
> On 09/07/18 11:43, Anoob Soman wrote: 
> > On 02/07/18 16:00, Anoob Soman wrote: 
> >> --- 
> >>   drivers/scsi/libiscsi.c | 6 ++ 
> >>   1 file changed, 6 insertions(+) 
> >> 
> >> diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c 
> >> index d609383..aa3be6f 100644 
> >> --- a/drivers/scsi/libiscsi.c 
> >> +++ b/drivers/scsi/libiscsi.c 
> >> @@ -1449,7 +1449,13 @@ static int iscsi_xmit_task(struct iscsi_conn 
> >> *conn) 
> >>   if (test_bit(ISCSI_SUSPEND_BIT, >suspend_tx)) 
> >>   return -ENODATA; 
> >>   +spin_lock_bh(>session->back_lock); 
> >> +if (conn->task == NULL) { 
> >> +spin_unlock_bh(>session->back_lock); 
> >> +return -ENODATA; 
> >> +} 
> >>   __iscsi_get_task(task); 
> >> +spin_unlock_bh(>session->back_lock); 
> >>   spin_unlock_bh(>session->frwd_lock); 
> >>   rc = conn->session->tt->xmit_task(task); 
> >>   spin_lock_bh(>session->frwd_lock); 
> > 
> > 
> > Hi Chris, Lee. 
> > 
> > Could one of you look at this change and provide some comments ? 
> > 
> > Thanks, 
> > 
> > -Anoob. 
> > 
>
> Hi, 
>
> Can someone look at this change ? 
>
> Thanks, 
>
> Anoob. 
>
Anoob:

I have been looking at this code for other reasons and believe I'm 
qualified to say that this change looks correct to me.

Please add my: Signed-off-by: ldun...@suse.com and cc me when you submit it.
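The pattern in the patch above, re-checking the shared task pointer under the lock before taking a reference, can be sketched outside of kernel C; this is an illustrative Python analogue (the class and function names are mine, not libiscsi's):

```python
import threading


class Task:
    """Stand-in for iscsi_task: just a reference count."""
    def __init__(self):
        self.refcount = 1


class Conn:
    """Stand-in for iscsi_conn: a shared task slot guarded by back_lock."""
    def __init__(self):
        self.back_lock = threading.Lock()
        self.task = None


def xmit_get_task(conn):
    """Take a reference on conn.task, re-checking it under back_lock.

    Without the locked NULL re-check, the completion path could clear
    conn.task between our test and the refcount bump, and the transmit
    path would reference a freed task -- the race the patch closes.
    """
    with conn.back_lock:
        task = conn.task
        if task is None:
            return None  # completed out from under us: bail out
        task.refcount += 1
        return task
```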



Re: Error while performing writes using dd or mkfs on iSCSI initiator.

2019-02-06 Thread The Lee-Man
On Wednesday, January 23, 2019 at 1:48:19 PM UTC-8, iamlinonym...@gmail.com 
wrote:
>
> We have a LIO target on RHEL 7.5 with the lun created using fileio through 
> targetcli. We exported it
> to RHEL initiator on the same box (Tried with other box as well). 
> On the lun, when we do mkfs for ext3/ext4, it fails with following message 
> and can not be mounted.
>
>
> -
> [root@linux_machine /]# mkfs -t ext4 /dev/sdh
> mke2fs 1.42.9 (28-Dec-2013)
> /dev/sdh is entire device, not just one partition!
> Proceed anyway? (y,n) y
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=0 blocks, Stripe width=1024 blocks
> 2621440 inodes, 10485760 blocks
> 524288 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=2157969408
> 320 block groups
> 32768 blocks per group, 32768 fragments per group
> 8192 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 
> 2654208,
> 4096000, 7962624
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (32768 blocks): done
> Writing superblocks and filesystem accounting information:
> Warning, had trouble writing out superblocks.
>
> -
> while above task fails, /var/log/messages on initiator has following 
> errors.
>
>
> -
> kernel: connection1:0: detected conn error (1020)
> Kernel reported iSCSI connection 1:0 error (1020 - 
> ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
> connection1:0 is operational after recovery (1 attempts)
> connection1:0: detected conn error (1020)
> Kernel reported iSCSI connection 1:0 error (1020 - 
> ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
> connection1:0 is operational after recovery (1 attempts)
> connection1:0: detected conn error (1020)
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 54 00 10 00 10 00 00
> kernel: blk_update_request: I/O error, dev sdf, sector 5505040
> Kernel: Buffer I/O error on dev sdf, logical block 688130, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688131, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688132, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688133, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688134, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688135, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688136, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688137, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688138, lost async page 
> write
> kernel: Buffer I/O error on dev sdf, logical block 688139, lost async page 
> write
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 50 00 10 00 10 00 00
> kernel: blk_update_request: I/O error, dev sdf, sector 5242896
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 4c 00 10 00 10 00 00
> kernel: blk_update_request: I/O error, dev sdf, sector 4980752
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 48 00 10 00 10 00 00
> blk_update_request: I/O error, dev sdf, sector 4718608
> sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 44 00 10 00 10 00 00
> blk_update_request: I/O error, dev sdf, sector 4456464
> sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 40 00 10 00 10 00 00
> blk_update_request: I/O error, dev sdf, sector 4194320
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 3c 00 10 00 10 00 00
> kernel: blk_update_request: I/O error, dev sdf, sector 3932176
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: Write(10) 2a 00 00 38 00 10 00 10 00 00
> kernel: blk_update_request: I/O error, dev sdf, sector 3670032
> kernel: sd 7:0:0:1: [sdf] FAILED Result: hostbyte=DID_TRANSPORT_DISRUPTED 
> driverbyte=DRIVER_OK
> kernel: sd 7:0:0:1: [sdf] CDB: 

Re: sysfs interface sometimes does not get the block device name, for a logged in iSCSI target

2019-01-21 Thread The Lee-Man
On Monday, December 17, 2018 at 9:08:54 AM UTC-8, Satyajit Deshmukh wrote:
>
>
>
> On Sunday, December 16, 2018 at 5:30:29 PM UTC-8, The Lee-Man wrote:
>>
>> On Friday, December 14, 2018 at 12:13:46 PM UTC-8, Satyajit Deshmukh 
>> wrote:
>>>
>>> Hello,
>>>
>>> An update on the issue. I could observe that the target entries were not 
>>> populated under sysfs.
>>>
>>> This is for a session that has a valid block device:
>>> $ ls /sys/class/iscsi_session/session778/device
>>> connection778:0 iscsi_session power target162:0:0 uevent 
>>>
>>>
I am trying to reproduce your errors myself, but so far no success.

It sounds like what is occurring is that some event or sequence of steps is 
failing under your conditions. So I need to reproduce your conditions as 
closely as possible.

I've created 75 targets on my target host system and connected to them 
repeatedly, but so far none of the disk devices has failed to show up. I 
suspect it's timing- and/or load-related. How often are you seeing these 
"no disk device" events relative to normal behavior?
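To quantify how often this happens, one approach is to scan sysfs for sessions whose device directory lacks a target entry, as in the listings quoted below. A sketch (the sysfs root is a parameter purely so the scan is testable; on a real system the default /sys is what you want):

```python
from pathlib import Path


def sessions_missing_target(sysfs_root="/sys"):
    """Return iSCSI session names whose device/ dir has no target<H>:<C>:<I>.

    Sessions without a device/ directory at all are skipped; on a real
    system every logged-in session should have one.
    """
    missing = []
    base = Path(sysfs_root) / "class" / "iscsi_session"
    for session in sorted(base.glob("session*")):
        device = session / "device"
        if device.is_dir() and not any(device.glob("target*")):
            missing.append(session.name)
    return missing
```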

> This is for a session that does not have a valid block device:
>>> $ls /sys/class/iscsi_session/session780/device
>>> connection780:0 iscsi_session power uevent
>>>
>>> As we can see, the target... directory is missing.
>>> So, an event responsible to create the sysfs entry could not get created.
>>>
>>> journalctl does not print this info. Is there a way to enable some 
>>> debugging, to debug this?
>>>
>>>
>>>
>> It seems like the iscsi initiator code in the kernel is not creating the 
>> target directory. I will have to look at the code to figure out why. Is 
>> there any difference between the two targets? How many targets to you have? 
>> What type of targets are they (i.e. hardware, software)?
>>
>>
> There is no difference between the two targets. We have 100s of iSCSI 
> targets on a single VM. All of these are software targets.
> The target device does get created most of the time.
>
> Another related issue we found is during log outs. In that scenario, the 
> block device was not cleanly removed, during the iscsiadm logout command. I 
> will share details about that shortly.
>



Re: Running iscsiadm in containers

2018-12-17 Thread The Lee-Man
On Sunday, December 16, 2018 at 5:14:03 PM UTC-8, m...@datera.io wrote:
>
> As a baseline here are the previous threads discussing this issue 
> including the RFC from Chris Leech.
>
> https://groups.google.com/forum/#!msg/open-iscsi/vWbi_LTMEeM/P8-oUDkb14YJ
> https://groups.google.com/forum/#!msg/open-iscsi/kgjck_GixsM/U_FqTbYhCgAJ
>
> I think we should try and move Chris's implementation forward if 
> possible.  Many organizations are hitting real pain-points not having a 
> container-compatible open-iscsi implementation, especially with the advent 
> of the ubiquity of containers within the storage vendor world.
>
> On Tuesday, October 23, 2018 at 9:03:01 AM UTC-7, Shailesh Mittal wrote:
>>
>> Hi there,
>>
>> I understand that it was the topic of discussion earlier. As containers 
>> are getting used more and more to run applications, there are frameworks 
>> like Kubernetes (and more) where the calls to talk to scsi storage devices 
>> are being made through a container.
>>
>> Here, vendors are in flux as they need to execute iscsiadm commands to 
>> connect their iscsi based storage to the application-containers. These 
>> caller modules (responsible for connecting to the remote storage) are 
>> running in the containers and thus have no choice but executing iscsiadm 
>> commands from the containers itself.
>>
>> Is there a well understood way to implement this? I remember the thread 
>> from Chris Leech talking about containerizing iscsid but not sure the 
>> end-result of that. (
>> https://groups.google.com/forum/#!msg/open-iscsi/vWbi_LTMEeM/NdZPh33ed0oJ
>> )
>>
>> Any help/direction here is much appreciated.
>>
>> Thanks,
>> Shailesh Mittal.
>>
>
Yes, after reviewing the thread, I'd like to move forward with these 
changes as well.

Chris, do you have time to move forward with this? I'd be glad to help.



Re: sysfs interface sometimes does not get the block device name, for a logged in iSCSI target

2018-12-16 Thread The Lee-Man
On Friday, December 14, 2018 at 12:13:46 PM UTC-8, Satyajit Deshmukh wrote:
>
> Hello,
>
> An update on the issue. I could observe that the target entries were not 
> populated under sysfs.
>
> This is for a session that has a valid block device:
> $ ls /sys/class/iscsi_session/session778/device
> connection778:0 iscsi_session power target162:0:0 uevent 
>
> This is for a session that does not have a valid block device:
> $ls /sys/class/iscsi_session/session780/device
> connection780:0 iscsi_session power uevent
>
> As we can see, the target... directory is missing.
> So, an event responsible to create the sysfs entry could not get created.
>
> journalctl does not print this info. Is there a way to enable some 
> debugging, to debug this?
>
>
>
It seems like the iscsi initiator code in the kernel is not creating the 
target directory. I will have to look at the code to figure out why. Is 
there any difference between the two targets? How many targets do you have? 
What type of targets are they (i.e. hardware or software)?



Re: sysfs interface sometimes does not get the block device name, for a logged in iSCSI target

2018-12-16 Thread The Lee-Man
On Monday, December 10, 2018 at 5:52:19 PM UTC-8, Satyajit Deshmukh wrote:
>
> Hello,
>
> We are now running into this fairly frequently. It looks like an issue with 
> the sysfs interface.
>
> iscsi_sysfs_get_blockdev_from_lun() sometimes does not find the SCSI block 
> device (say /dev/sdd) for a logged in iSCSI target.
>
> Thus, iscsiadm -m session -P0 does not print the SCSI device block path 
> (say /dev/sdd)
>
> From code walk-through, I can see there are 2 places, where it can happen. 
> Sharing that here.
>
> diff -uNr open-iscsi-2.0.874/usr/iscsi_sysfs.c 
> open-iscsi-2.0.874_2/usr/iscsi_sysfs.c
> --- open-iscsi-2.0.874/usr/iscsi_sysfs.c2018-11-19 19:25:38.935602682 
> +
> +++ open-iscsi-2.0.874_2/usr/iscsi_sysfs.c  2018-11-19 19:43:08.214013706 
> +
> @@ -1557,19 +1557,24 @@
> snprintf(id, sizeof(id), "%d:0:%d:%d", host_no, target, lun);
> if (!sysfs_lookup_devpath_by_subsys_id(devpath, sizeof(devpath),
>SCSI_SUBSYS, id)) {
> -   log_debug(3, "Could not lookup devpath for %s %s",
> +   log_error("Could not lookup devpath for %s %s",
>   SCSI_SUBSYS, id);
> return NULL;
> }
> +   log_debug(0, "devpath is %s for id %s", devpath, id);
>
>
> sysfs_len = strlcpy(path_full, sysfs_path, sizeof(path_full));
> if (sysfs_len >= sizeof(path_full))
> sysfs_len = sizeof(path_full) - 1;
> strlcat(path_full, devpath, sizeof(path_full));
>  
> +   log_debug(0, "path_full is %s", path_full);
> dirfd = opendir(path_full);
> -   if (!dirfd) 
> +   if (!dirfd) {
> +   log_error("Could not open sysfs dir %s",
> + path_full);
> return NULL;
> +   }
> 
> while ((dent = readdir(dirfd))) {
> if (!strcmp(dent->d_name, ".") || !strcmp(dent->d_name, ".."))
>
>
> Are there such known issues with the sysfs where device paths are not 
> found, for logged in targets?
> Any pointers/suggestions on debugging this greatly appreciated.
> We too are trying to root-cause and would share our finding here.
>
> Thanks,
> Satyajit
>

Based on the code, I'm not sure it should always be considered an error 
when no block device is found, so I'm not comfortable changing the code to 
assume this.

I believe the path created depends on the target. For example, for a tape 
device target, there will be no block device.



Re: Query regarding iscsi initiator login via specific port

2018-11-26 Thread The Lee-Man
On Monday, November 26, 2018 at 4:43:17 PM UTC-8, Sastha M wrote:
>
> Hello all,
>
> Is there a provision today with iscsiadm utility to create a iscsi login 
> connection via specific initiator port number? The initiator port can be 
> provided from the user end. Please help me in finding the answer.
>
> Thanks,
> Sastha
>

No, not that I know of. Why would you want to control the initiator socket? 



Re: Patch for faster iscsi logins by optimized idbm_lock() and idbm_unlock() APIs

2018-11-13 Thread The Lee-Man
On Monday, November 12, 2018 at 5:02:44 PM UTC-8, 
satyajit.deshm...@nutanix.com wrote:
>
> Hello,
>
> We had a use case, where we had to log into multiple iSCSI targets in 
> parallel. And it was important for these operations to complete in a timely 
> manner.
> We found two major code paths that caused large delays in the login command:
> 1. idbm_lock() and idbm_unlock() APIs
> 2. session_is_running() duplicate session checking code. This performs the 
> entire sysfs scan for each logged in target and results in O(n^2) sysfs 
> lookups, causing huge delays, when there are multiple iSCSI targets logged 
> into.
>
> We have a patch for issue #1 above and have attached it.
> This definitely optimized the login times, as the code now does not 
> perform redundant sleeps.
> Would be great if someone could review this patch, and see if this could 
> be generally useful to optimize the login times.
>
>
LOL. A while ago I spent almost a week looking at issue #2, but my eyes 
glazed over and my laziness won out. As you point out, the sysfs code does 
not scale, and some users of open-iscsi are already being bitten by this.

For issue#1, I like the idea of getting better locking, but your suggested 
code has some issues:

1. It does not time out, ever (unless sent a signal). The original code 
times out after 300 seconds. This may not be an issue in practice, since 
(theoretically) if a process that has this lock goes away, the lock goes 
away. And if we assume that all other users of the lock play fair, this may 
be ok.

2. It does not allow lock "stacking". The current code just increments a 
counter when a second request to take the lock is made, but I don't think 
your code keeps track of the number of locks. With the old code, two locks 
followed by one unlock would leave it locked, but with your code it would 
be left unlocked. So it needs "stacking".

3. The old code creates the LOCK_DIR if needed. I'm not sure anybody else 
does, so we'd have to make sure.

Thank you, as this spurs me to look at this code again. I look forward to 
seeing your reply.

> Thanks,
> Satyajit
>



Re: Running iscsiadm in containers

2018-11-12 Thread The Lee-Man
On Wednesday, November 7, 2018 at 1:09:32 PM UTC-8, Shailesh Mittal wrote:
>
> Thanks for the reply. Does that mean that we need to disable the iscsid 
> running on the host or can they co-exist (one running on the host and other 
> running in container)?
>
> Thanks,
> Shailesh.
>
> On Monday, October 29, 2018 at 10:57:06 AM UTC-7, The Lee-Man wrote:
>>
>> On Monday, October 29, 2018 at 10:40:07 AM UTC-7, dat...@gmail.com wrote:
>>>
>>> Thanks for the reply. We are facing issues when we run iscsiadm in the 
>>> container and iscsid on the host. At that time, iscsiadm can't reach to 
>>> iscsid at all and all iscsiadm commands fail.
>>>
>>> If we run iscsiadm and iscsid in the same container, it works but we 
>>> don't know if this is how it is designed to run. So few specific questions;
>>>
>>>
I _believe_ they are separated, using different network namespaces, thanks 
to some changes from Chris, but I can't seem to find them at the moment. 
@cleech ??

> 1. If we run iscsid in a container, do we need to shut down the iscsid that 
>>> is running on host?
>>> 2. iscsid running in the container, requires kernel module iscsi_tcp to 
>>> be part of the container image. Is this ok?
>>> 3. What is the standard topology for dealing with iscsi from 
>>> containerized environments?
>>>
>>> Appreciate your help here.
>>>
>>> Thanks,
>>> Shailesh.
>>>
>>>
>>> You need to run either "iscsid and iscsiadm" or "iscsistart" in each 
>> container. The "iscsistart" command is meant to be used as a replacement 
>> for the iscsid/iscsiadm pair at startup time.
>>
>> Yes, using iscsi_tcp (the iscsi transport) is required. I guess that 
>> means it's ok.
>>
>> I have no idea about what is standard in a containerized environment for 
>> topology. Generally, iscsi doesn't use any directory service (since people 
>> don't like iSNS).
>>
>


