Re: Memory leak in pSCSI backend?

2018-10-29 Thread Ben Klein
Thank you. It's been fun just trying to find the right place to report
this to :)

I have successfully set up tgtd with --device-type pt --bstype sg now.
Blu-ray playback is working fine! But tgtd doesn't seem to like
encrypted DVDs.
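
For reference, a pass-through setup like this boils down to something like
the following tgtadm sequence (the IQN, tid/lun numbers and /dev/sg1 below
are just placeholders -- check which sg node maps to your /dev/sr0, e.g.
with lsscsi -g, and see tgtadm(8) for the exact flags; the same options can
also be put in /etc/tgt/targets.conf):

    # create an iSCSI target
    tgtadm --lld iscsi --op new --mode target --tid 1 \
           --targetname iqn.2018-10.example:dvd

    # attach the drive as a SCSI pass-through LUN via the sg backing store
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
           --device-type pt --bstype sg --backing-store /dev/sg1

    # allow initiators to connect (restrict this in a real setup)
    tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
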
On Tue, 30 Oct 2018 at 04:50, The Lee-Man  wrote:
>
> On Saturday, October 27, 2018 at 8:30:52 AM UTC-7, Ben Klein wrote:
>>
>> Hi,
>>
>> I've been trying to get a DVD-ROM drive working over iSCSI. I'm using 
>> targetcli on the target and iscsiadm/iscsi_discovery on the initiator.
>>
>> Target is running 4.18.16 unpatched from kernel.org, with AMD64 
>> Devuan/Debian stable userland. 10 year old Core 2 Duo CPU.
>>
>> Initiator is running 4.18.15 unpatched from kernel.org with AMD64 
>> Devuan/Debian unstable userland. Ryzen 7 CPU.
>>
>> Using pSCSI backend on /dev/sr0 triggers a memory leak that consumes all 5GB 
>> of system RAM on the target in a matter of seconds after attempting a read 
>> (I've been testing with "file -s") on the initiator side. It doesn't go into 
>> swap memory.
>>
>> I tested pSCSI and IBLOCK backends on a WD Green SSD, and found IBLOCK works 
>> fine (I was able to run mkfs.ext4 from the initiator without any memory leak 
>> on the target), but pSCSI still seems to trigger a memory leak, albeit a lot 
>> slower than with the DVD-ROM. I suspect udev might have something to do with 
>> this, as it tries to read the volume label off optical discs on insertion.
>>
>> I have attached my targetcli output, and have configured my target-side 
>> kernel for coredumping and debugging. If someone can talk me through what to 
>> look for, I can poke around in the 5GB vmcore that gets produced by kdump.
>>
>> Please let me know if there's any other information I can provide.
>>
>> Thanks,
>> Ben Klein
>
>
> You would probably get more useful advice on the target-de...@vger.kernel.org 
> mailing list, since we mainly have initiator expertise here. They might have 
> even seen and fixed this issue.
>
> Nevertheless, you can try debugging the target driver. I believe the target 
> code uses dynamic debugging, which means you can enable the debugging at run 
> time, using the path /sys/kernel/debug/dynamic_debug/control. You can enable 
> debug printing for one or more lines, for a function, for a module, etc. 
> Pretty flexible. But it really helps to look at the code to see what is being 
> reported on (IMHO).
>
> I have recently used a similar setup to test tape over iscsi using the pscsi 
> backend, and I did not notice a memory leak like the one you're reporting, if 
> that helps.



Re: Running iscsiadm in containers

2018-10-29 Thread The Lee-Man
On Monday, October 29, 2018 at 10:40:07 AM UTC-7, dat@gmail.com wrote:
>
> Thanks for the reply. We are facing issues when we run iscsiadm in the 
> container and iscsid on the host. In that setup, iscsiadm can't reach 
> iscsid at all, and all iscsiadm commands fail.
>
> If we run iscsiadm and iscsid in the same container, it works, but we don't 
> know if this is how it is designed to run. So a few specific questions:
>
> 1. If we run iscsid in a container, do we need to shut down the iscsid that 
> is running on the host?
> 2. Running iscsid in the container requires the kernel module iscsi_tcp to 
> be part of the container image. Is this OK?
> 3. What is the standard topology for dealing with iSCSI in containerized 
> environments?
>
> Appreciate your help here.
>
> Thanks,
> Shailesh.
>

You need to run either "iscsid and iscsiadm" or "iscsistart" in each 
container. The "iscsistart" command is meant to be used as a replacement 
for the iscsid/iscsiadm pair at startup time.
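
Roughly, that looks something like this inside the container (addresses and 
IQNs below are placeholders, and in practice the container usually needs host 
networking, extra privileges, and bind mounts for /dev, /sys, /etc/iscsi and 
/var/lib/iscsi -- adjust to your runtime):

    # run the daemon plus the admin tool
    iscsid
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    iscsiadm -m node -T iqn.2018-10.example:tgt1 -p 192.0.2.10 --login

    # or, as a one-shot alternative at startup time
    iscsistart -i iqn.1993-08.org.debian:01:deadbeef01 \
               -t iqn.2018-10.example:tgt1 -g 1 -a 192.0.2.10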

Yes, using iscsi_tcp (the iscsi transport) is required. I guess that means 
it's ok.
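
For example (a sketch -- module loading has to happen wherever modprobe can 
reach the host's modules, e.g. on the host itself or in a privileged 
container with /lib/modules mounted):

    modprobe iscsi_tcp
    lsmod | grep iscsi_tcp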

I have no idea what is standard in a containerized environment for 
topology. Generally, iSCSI doesn't use any directory service (since people 
don't like iSNS).



Re: Memory leak in pSCSI backend?

2018-10-29 Thread The Lee-Man
On Saturday, October 27, 2018 at 8:30:52 AM UTC-7, Ben Klein wrote:
>
> Hi,
>
> I've been trying to get a DVD-ROM drive working over iSCSI. I'm using 
> targetcli on the target and iscsiadm/iscsi_discovery on the initiator.
>
> Target is running 4.18.16 unpatched from kernel.org, with AMD64 
> Devuan/Debian stable userland. 10 year old Core 2 Duo CPU.
>
> Initiator is running 4.18.15 unpatched from kernel.org with AMD64 
> Devuan/Debian unstable userland. Ryzen 7 CPU.
>
> Using pSCSI backend on /dev/sr0 triggers a memory leak that consumes all 
> 5GB of system RAM on the target in a matter of seconds after attempting a 
> read (I've been testing with "file -s") on the initiator side. It doesn't 
> go into swap memory.
>
> I tested pSCSI and IBLOCK backends on a WD Green SSD, and found IBLOCK 
> works fine (I was able to run mkfs.ext4 from the initiator without any 
> memory leak on the target), but pSCSI still seems to trigger a memory leak, 
> albeit a lot slower than with the DVD-ROM. I suspect udev might have 
> something to do with this, as it tries to read the volume label off optical 
> discs on insertion.
>
> I have attached my targetcli output, and have configured my target-side 
> kernel for coredumping and debugging. If someone can talk me through what 
> to look for, I can poke around in the 5GB vmcore that gets produced by 
> kdump.
>
> Please let me know if there's any other information I can provide.
>
> Thanks,
> Ben Klein
>

You would probably get more useful advice on the 
target-de...@vger.kernel.org mailing list, since we mainly have initiator 
expertise here. They might have even seen and fixed this issue.

Nevertheless, you can try debugging the target driver. I believe the 
target code uses dynamic debugging, which means you can enable the 
debugging at run time, using the path 
/sys/kernel/debug/dynamic_debug/control. You can enable debug printing for 
one or more lines, for a function, for a module, etc. Pretty flexible. But 
it really helps to look at the code to see what is being reported on (IMHO).
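
For example, something along these lines should turn on the pr_debug() 
output in the pSCSI backend (the module and file names are my guess at the 
relevant pieces -- verify them against lsmod and the source tree):

    # debugfs is usually already mounted; if not:
    mount -t debugfs none /sys/kernel/debug

    # enable debug output for the whole pSCSI backend module
    echo 'module target_core_pscsi +p' > /sys/kernel/debug/dynamic_debug/control

    # or just for one source file
    echo 'file drivers/target/target_core_pscsi.c +p' > /sys/kernel/debug/dynamic_debug/control

    # turn it off again when done
    echo 'module target_core_pscsi -p' > /sys/kernel/debug/dynamic_debug/control

The resulting messages land in the kernel log (dmesg).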

I have recently used a similar setup to test tape over iscsi using the 
pscsi backend, and I did not notice a memory leak like the one you're 
reporting, if that helps.



Re: Running iscsiadm in containers

2018-10-29 Thread dat . gce
Thanks for the reply. We are facing issues when we run iscsiadm in the 
container and iscsid on the host. In that setup, iscsiadm can't reach 
iscsid at all, and all iscsiadm commands fail.

If we run iscsiadm and iscsid in the same container, it works, but we don't 
know if this is how it is designed to run. So a few specific questions:

1. If we run iscsid in a container, do we need to shut down the iscsid that 
is running on the host?
2. Running iscsid in the container requires the kernel module iscsi_tcp to 
be part of the container image. Is this OK?
3. What is the standard topology for dealing with iSCSI in containerized 
environments?

Appreciate your help here.

Thanks,
Shailesh.

On Wednesday, October 24, 2018 at 1:28:11 PM UTC-7, The Lee-Man wrote:
>
> Sorry, I should have asked: what issues have you experienced with 
> containers?
>
> On Wednesday, October 24, 2018 at 1:27:43 PM UTC-7, The Lee-Man wrote:
>>
>> On Tuesday, October 23, 2018 at 9:03:01 AM UTC-7, Shailesh Mittal wrote:
>>>
>>> Hi there,
>>>
>>> I understand that this was a topic of discussion earlier. As containers 
>>> are used more and more to run applications, there are frameworks 
>>> like Kubernetes (and others) where the calls to talk to SCSI storage devices 
>>> are being made through a container.
>>>
>>> Here, vendors are in flux as they need to execute iscsiadm commands to 
>>> connect their iSCSI-based storage to the application containers. These 
>>> caller modules (responsible for connecting to the remote storage) are 
>>> running in the containers and thus have no choice but to execute iscsiadm 
>>> commands from the containers themselves.
>>>
>>> Is there a well-understood way to implement this? I remember the thread 
>>> from Chris Leech talking about containerizing iscsid, but I'm not sure of 
>>> the end result of that. (
>>> https://groups.google.com/forum/#!msg/open-iscsi/vWbi_LTMEeM/NdZPh33ed0oJ
>>> )
>>>
>>> Any help/direction here is much appreciated.
>>>
>>> Thanks,
>>> Shailesh Mittal.
>>>
>>
>> I have never messed with containers w/r/t iscsi, but I was under the 
>> impression Chris got this working. @Chris 
>>
>
