Re: [EXT] Re: Question about iscsi session block

2022-02-16 Thread Donald Williams
they are using for iSCSI storage. Regards, Don On Wed, Feb 16, 2022 at 5:12 AM Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> wrote: >>> Donald Williams wrote on 15.02.2022 at 17:25: > Hello, > Something else to check

Re: Question about iscsi session block

2022-02-15 Thread Donald Williams
Hello, Something else to check is your MPIO configuration. I have seen this same symptom when the Linux MPIO feature "queue_if_no_path" was enabled. From the /etc/multipath.conf file showing it enabled: failback immediate, features "1 queue_if_no_path"
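As a rough sketch (the defaults sections and values below are illustrative, not a drop-in config), the relevant /etc/multipath.conf fragments look like:

  # Queueing enabled: I/O hangs instead of failing when every path is lost
  defaults {
      failback    immediate
      features    "1 queue_if_no_path"
  }

  # To fail I/O promptly instead of queueing it forever, drop the feature
  # and set no_path_retry explicitly:
  defaults {
      failback        immediate
      features        "0"
      no_path_retry   fail
  }

After a reload or restart of multipathd, 'multipath -ll' shows which behavior is active in its features='...' line.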

Re: trimming iscsi luns?

2021-05-26 Thread Donald Williams
Hello, It is also the OS/filesystem that must support the TRIM or UNMAP command. E.g. on EXT4 you have to set the 'discard' option when mounting a volume to support the TRIM/UNMAP feature, or use something like 'fstrim'. If your backend storage is RAIDed then typically any SSDs are not presented as
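For illustration, assuming an ext4 filesystem on /dev/sdb1 mounted at /data (both placeholders), the two usual approaches are:

  # Online discard via the mount option (entry in /etc/fstab)
  /dev/sdb1   /data   ext4   defaults,discard   0 2

  # Or leave the mount option off and TRIM periodically instead
  fstrim -v /data
  # Most current distros ship a weekly systemd timer for this
  systemctl enable --now fstrim.timer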

Re: Hi help me please

2020-12-18 Thread Donald Williams
Hello, You didn't say what iSCSI target you are using. This PDF below covers how to use open-iSCSI with RHEL v6.x / 7.x with Dell PS Series SANs. The open-iSCSI part is basically the same for all iSCSI, with one major exception: Dell PS Series iSCSI SANs have all the IPs for iSCSI in the
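The open-iSCSI side of such a guide boils down to roughly this (the group/portal IP and target IQN below are placeholders):

  # Discover targets behind the group/portal IP
  iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260
  # Log in to a discovered target
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.10.10.10:3260 --login
  # Have it log back in automatically at boot
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -o update -n node.startup -v automatic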

Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread Donald Williams
Re: Subnets. Not all iSCSI targets operate on multiple subnets. The Equallogic, for example, is intended for a single IP subnet scheme. Multiple subnets require routing to be enabled. Don On Tue, Jun 30, 2020 at 1:02 PM The Lee-Man wrote: > On Tuesday, June 30, 2020 at 8:55:13 AM UTC-7, Don

Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread Donald Williams
Hello, Assuming that device-mapper is running and MPIO is properly configured, you want to connect to the same volume/target from different interfaces. However, in your case you aren't specifying the interfaces, just "default", and they are on the same subnet, which typically will only use the default
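A sketch of what that looks like with bound interfaces instead of "default" (the iface names, IQN and portal are placeholders, and the iface records are assumed to exist already; creating them is shown in the 2014-04-11 entry further down):

  # One session per bound iface, both to the same target/portal
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.10.10.10:3260 -I iface0 --login
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.10.10.10:3260 -I iface1 --login
  # Verify: one session per interface
  iscsiadm -m session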

Re: [EXT] Re: udev events for iscsi

2020-04-22 Thread Donald Williams
Hello, Re: Errors: That's likely from a bad copy/paste. I referenced the source document I took that from. That was done against an older RHEL kernel. Don On Wed, Apr 22, 2020 at 3:04 AM Ulrich Windl <ulrich.wi...@rz.uni-regensburg.de> wrote: >>> Donald Williams

Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello, If the loss exceeds the timeout value, yes. If the 'drive' doesn't come back in 30 to 60 seconds it's not likely a transitory event like a cable pull. NOOP-IN and NOOP-OUT are also known as KeepAlive. They cover the case where the connection is up but the target or initiator isn't responding. If
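Those knobs live in /etc/iscsi/iscsid.conf; the values shown here are the usual upstream defaults, tune to your environment:

  # How long a dead session is kept before I/O is failed up the stack
  node.session.timeo.replacement_timeout = 120
  # NOOP-OUT keepalive: send interval and how long to wait for the NOOP-IN reply
  node.conn[0].timeo.noop_out_interval = 5
  node.conn[0].timeo.noop_out_timeout = 5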

Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello, re: XenServer. The initiator is the same, but I suspect your issue is with the disk timeout value on Linux. When the connection drops, Linux gets the error and remounts the filesystem RO. In VMware, for example, the VMware Tools set the Windows disk timeout to 60 seconds so it doesn't give up so quickly. I suspect if
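On Linux that per-device timeout is visible in sysfs; a minimal sketch (sdX and the rule file name are placeholders, and the udev rule is illustrative only):

  # Check and raise the SCSI command timeout for one device
  cat /sys/block/sdX/device/timeout
  echo 60 > /sys/block/sdX/device/timeout

  # To make it persistent, e.g. in /etc/udev/rules.d/99-disk-timeout.rules,
  # match SCSI disks (type 0) and set the timeout attribute:
  ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="60"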

Re: iSCSI and Ceph RBD

2020-01-24 Thread Donald Williams
Hello, I am not an expert in Ceph. However, iSCSI is the transport protocol that connects an initiator to a target. On the client side, iSCSI traffic coming from the target is broken down and the SCSI commands are handed to the client. When writing data, the iSCSI initiator encodes the command

Re: Two types of initiator stacks

2020-01-10 Thread Donald Williams
bby wrote: > ah OK thanks! > On Thursday, January 9, 2020 at 7:35:07 PM UTC+1, Donald Williams wrote: >> Hello, >> It is referring to iSCSI HBA cards like Broadcom BCM58xx/57xxx or just using a standard NIC and the Software iSCSI adapter

Re: Two types of initiator stacks

2020-01-09 Thread Donald Williams
Hello, It is referring to iSCSI HBA cards like Broadcom BCM58xx/57xxx or just using a standard NIC and the Software iSCSI adapter open-iSCSI provides. Regards, Don On Thu, Jan 9, 2020 at 11:57 AM Bobby wrote: > Under section "How to setup iSCSI interfaces (iface) for binding" of > README,

Re: Re: iSCSI packet generator

2019-11-08 Thread Donald Williams
Hello, iSCSI is just a transport method for SCSI commands, same as Fibre Channel, SAS, etc. When the network takes in the iSCSI packets, the SCSI commands and data are separated and they go to their respective devices, or 'disks' in this case. Regards, Don On Fri, Nov 8, 2019 at 1:40 PM

Re: iSCSI packet generator

2019-11-04 Thread Donald Williams
Hello, Can you provide a little more info? iSCSI is for storage, so unless your 'server' is running an iSCSI target service there won't be 'iSCSI' traffic to monitor. If you do have an iSCSI service running and providing a disk via that service to the 'client', then doing normal I/O to that

Re: isid persistence?

2018-07-08 Thread Donald Williams
ion (on both target and initiator) the same isid should be allocated but that is not the case. > Hope I was able to explain that. > Thanks, Mayur > On Fri, Jul 6, 2018 at 4:30 AM Donald Williams wrote: >> Hello, >> It's an ID for

Re: isid persistence?

2018-07-05 Thread Donald Williams
Hello, It's an ID for that session; what would be the benefit of persistence? For my purposes, the fact it's only for that session helps me when going through logs or traces. It makes it much easier to follow that session through the iscsid logs and on the storage device as well. - SSID (Session

Re: Error Recovery and TUR

2018-05-01 Thread Donald Williams
Hello, Part of the iSCSI protocol is recovering from different error conditions. https://www.ietf.org/proceedings/51/slides/ips-6.pdf That link is the spec for it. On a connection reset, the initiator will go back to the Discovery address and attempt to log back in. Another key piece

Re: How to address more than one LUN in a target

2018-04-11 Thread Donald Williams
Hi Paul, CML does Target 0, then LUN 0; the next volume will be Target 0, LUN 1, etc. With the standard two paths and two fault domains each server will see four TARGETS, then the volumes will be LUNs underneath them. As opposed to EQL, with Target 0, LUN 0 for the first volume, then Target 1, LUN 0
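From the host you can see that layout in the [host:channel:target:lun] tuple that lsscsi prints; the output below is illustrative only, not real array output:

  lsscsi
  # [5:0:0:0]  disk  COMPELNT  Compellent Vol   /dev/sdb    <- target 0, LUN 0
  # [5:0:0:1]  disk  COMPELNT  Compellent Vol   /dev/sdc    <- target 0, LUN 1
  # [6:0:0:0]  disk  EQLOGIC   100E-00          /dev/sdd    <- EQL style: new target per volume, always LUN 0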

Re: facing problem to mount EMC storage in ubuntu 14.04/

2017-10-05 Thread Donald Williams
Hello, Additionally, what SCSI disk device name are you using to create the filesystem? If you have multipathd running, device-mapper will create a new device name. If the volume is partitioned then it will have a p1 at the end of the device name. That's what you want to use to create the
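For example (names are illustrative; the partition suffix can be 'p1' or '-part1' depending on the distro's kpartx/udev setup):

  multipath -ll                        # shows the multipath map, e.g. mpatha
  ls /dev/mapper/
  # mpatha   mpathap1
  mkfs.ext4 /dev/mapper/mpathap1       # use the mapper partition device, not /dev/sdX
  mount /dev/mapper/mpathap1 /data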

Re: open-iscsi default interface behavior

2017-06-20 Thread Donald Williams
ke E. > On Saturday, June 17, 2017 at 7:59:40 PM UTC-5, Donald Williams wrote: >> Hello, >> I read over that thread. One thing missing from that discussion is how Linux routing works with multiple NICs on the same subnet. If you have two NICs IP'

Re: open-iscsi default interface behavior

2017-06-17 Thread Donald Williams
Hello, I read over that thread. One thing missing from that discussion is how Linux routing works with multiple NICs on the same subnet. If you have two NICs IP'd on the same subnet, only one NIC will be active. That becomes the default NIC for that subnet. Down that interface and the other
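You can see that from the routing table: with two NICs addressed on one subnet there are two connected routes, and the kernel simply uses the first match for outbound traffic (addresses and output are illustrative):

  ip route show
  # 10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.11
  # 10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.12
  # -> everything to 10.0.0.0/24 leaves via eth0 unless per-NIC iface
  #    binding or policy routing forces traffic out of eth1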

Re: Updated iscsi timeout and retry values in iscsi.conf file are not getting effective

2015-08-18 Thread Donald Williams
Hello Manish, No, restarting the iSCSI daemon does not update the node files. There are iscsiadm commands that will do so on the fly. This is taken from the Dell Tech Report TR1062, Configuring iSCSI and MPIO for RHEL v5.x, which is 99% the same for RHEL v6.x. A search for Dell TR1062
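The on-the-fly update amounts to something like this (the IQN, parameter and value below are only examples):

  # Change a setting in an existing node record without touching the daemon
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol \
      -o update -n node.session.timeo.replacement_timeout -v 60
  # Verify the stored record
  iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol | grep replacement_timeout
  # Existing sessions pick the new value up on the next logout/login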

Re: Broken pipe on my target. Is there any option on my initiator to fix it?

2015-05-13 Thread Donald Williams
Hello Felipe, I'm not sure about anyone else, but I wouldn't expect that tweaking the iSCSI settings you've been talking about will improve this. Have you tested just connecting from server to storage via iSCSI? Take NFS out of the picture. iSCSI is very dependent on the network. What kind

Re: Changes to iSCSI device are not consistent across network

2015-02-25 Thread Donald Williams
Hello, Unless you have a cluster file system in place, what you are seeing is expected. Each node believes it owns that volume exclusively. There's nothing in the iSCSI or SCSI protocol to address this. A write from one node doesn't tell the other node to update its cached image of that disk.

Re: Slow dir / Performance.

2014-12-02 Thread Donald Williams
Hello, What Linux distro are you using? There are some common tweaks to the /etc/iscsi/iscsid.conf and sysctl.conf files that help improve performance. This link covers how to configure RHEL with EQL. The same principles apply to any recent Linux distro.
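The iscsid.conf side of those guides usually comes down to a handful of settings like the following (values are the ones commonly recommended for EQL-style arrays, not universal truths):

  node.session.cmds_max = 1024
  node.session.queue_depth = 128
  node.session.iscsi.FirstBurstLength = 262144
  node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144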

Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-25 Thread Donald Williams
I find upping some of the default Linux network params helps with throughput. Edit /etc/sysctl.conf, then update the system using 'sysctl -p':
  # Increase network buffer sizes
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 8192 87380 16777216
  net.ipv4.tcp_wmem = 4096

Re: Very strange behavior - 4 devices but only one nic getting traffic?

2014-04-11 Thread Donald Williams
Configuring Multipath Connections: To create the multiple logins needed for Linux device-mapper to work, you need to create an 'interface' file for each GbE interface you wish to use to connect to the array. Use the following commands to create the interface files for MPIO. (Select the appropriate
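The gist of those commands, with illustrative NIC names, iface names and portal IP:

  # One iface record per NIC, bound by network interface name
  iscsiadm -m iface -I ieth0 -o new
  iscsiadm -m iface -I ieth0 -o update -n iface.net_ifacename -v eth0
  iscsiadm -m iface -I ieth1 -o new
  iscsiadm -m iface -I ieth1 -o update -n iface.net_ifacename -v eth1
  # Discovery then creates node records per iface, and login opens one session per NIC
  iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260
  iscsiadm -m node --login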

Re: Automatic update of files between a group of hosts

2013-04-22 Thread Donald Williams
Hello, No, open-iSCSI can't fix this issue. Your problem is that you don't have a cluster file system in place; each server believes it owns the disk exclusively. If you keep this as-is you will corrupt the data, that's for sure. The easiest thing is to connect with one server, then share

Re: How to recover iscsi connection if LAN switch goes down/failed for 1 hour or above

2013-04-04 Thread Donald Williams
Hello, What you were seeing was the stale mount info. No disk is going to survive an hour disconnected. Even a short disconnect will cause a SCSI disk error and Linux to remount the volume RO. Best practice for iSCSI connectivity is redundant switches, with MPIO configured to use both paths
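After connectivity returns, a quick way to check for and recover from the read-only remount (device and mount point are placeholders; run a filesystem check first if there is any doubt about the data):

  # Did the filesystem get remounted read-only after the SCSI errors?
  mount | grep /data
  # /dev/mapper/mpathap1 on /data type ext4 (ro,relatime)

  # Once the paths are healthy again:
  umount /data
  fsck -f /dev/mapper/mpathap1
  mount /dev/mapper/mpathap1 /data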

Re: problems connecting to Equallogic 6100

2013-04-04 Thread Donald Williams
by the other. Using a snapshot is the safest way to back up an EQL volume. Don On Thu, Apr 4, 2013 at 11:13 AM, Elvar el...@elvar.org wrote: On 3/29/2013 2:16 PM, Donald Williams wrote: Hello, What version of FW is on the EQL array? Any error messages on the EQL array events? I work for Dell

Re: problems connecting to Equallogic 6100

2013-03-29 Thread Donald Williams
Hello, What version of FW is on the EQL array? Any error messages on the EQL array events? I work for Dell/Equallogic and I have no issues connecting Ubuntu 12.10 to EQL. The only recent issue I've seen with 12.x is that on 12.04 the startup scripts don't log in to iSCSI after a reboot or on

Re: Multipath or not ?

2013-03-11 Thread Donald Williams
The iSCSI layer doesn't do MPIO; it's done at the host level. Depending on the tape system, you could benefit performance-wise from using MPIO. Open-iSCSI can be configured to initiate multiple sessions to a single target. Once those volumes are presented to the host, having the same serial number,
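The "same serial number" part is what multipathd keys on; a quick way to confirm two SCSI devices are really the same LUN (device names are placeholders, and on newer distros the binary lives at /usr/lib/udev/scsi_id):

  # Both paths report the same WWID, so device-mapper folds them into one map
  /lib/udev/scsi_id -g -u /dev/sdb
  /lib/udev/scsi_id -g -u /dev/sdc
  # identical output -> multipath -ll then shows a single device with two paths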

Re: access SAN through win7

2012-10-01 Thread Donald Williams
If I understand you, you want to connect multiple servers directly to the same SAN volume. If you do not have a cluster file system in place, you will absolutely corrupt that volume. Each server connected believes it owns the volume exclusively. So updates on one server aren't seen by

Re: problem in sharing disk through iscsi

2012-08-30 Thread Donald Williams
Mike is (of course) correct. When just the iSCSI connection is in place, each host believes it owns the volume exclusively. So when you write to a volume like that, you don't first (or periodically) re-read the volume for updates. Why would you? As far as the host is concerned nothing has

Re: How many S/W iSCSI Initiators on same machine?

2009-08-04 Thread Donald Williams
-03.com.hp:Ethernet5 InitiatorName=iqn.1986-03.com.hp:Ethernet6 or which syntax has to be used? Rainer On Aug 3, 10:36 pm, Donald Williams don.e.willi...@gmail.com wrote: Hello, I'm not sure what your question really is. Yes, you can have 6x GbE interfaces on different subnets

Re: same volume two different hosts

2009-08-03 Thread Donald Williams
Hello Nick, While an iSCSI SAN will not have any problem allowing multiple hosts to connect to the same volume, what it doesn't do is protect you from the resultant corruption. Each host will believe it owns that volume exclusively. Writes from one host won't be seen by the other host. They

Re: How many S/W iSCSI Initiators on same machine?

2009-08-03 Thread Donald Williams
Hello, I'm not sure what your question really is. Yes, you can have 6x GbE interfaces on different subnets and run iSCSI over them. What target are you using? Typically, your iSCSI SAN is on one subnet. That avoids the need to do IP routing, which adds latency and can reduce performance.

Re: iscsiadm -m iface + routing

2009-07-30 Thread Donald Williams
Hello Jullian, The EQL MIBs are available from their website, http://www.equallogic.com, under Downloads > Firmware > Release Version. The MIBs are tied to the firmware version the array is running. Currently, Equallogic arrays don't support SMI-S. Equallogic has a bundled monitoring program

Weird problem with current GIT version of open-iscsi

2009-07-15 Thread Donald Williams
Mike, I decided to try the current repository version (as of 3 PM, 7/15). Compiled and installed w/o issue. Rebooted and I couldn't connect to my EQL targets. The login process complained there was no iSCSI driver. So I installed 2.0-871 from the website tarball. Rebooted, same problem. Tried an

Re: Weird problem with current GIT version of open-iscsi

2009-07-15 Thread Donald Williams
Ending time: Wed Jul 15 17:54:17 2009 On Wed, Jul 15, 2009 at 4:40 PM, Donald Williams don.e.willi...@gmail.com wrote: I'll try running depmod and see if that helps. I'm not great at hacking Makefiles. I'll see what I can do. I'm going to use a test VM this time though. Not my server. :-D

Re: iscsiadm -m iface + routing

2009-07-14 Thread Donald Williams
Hi Mike, Thanks for helping out. When you say Dell fixed something, did you mean Dell / Equallogic or another part of Dell? I'm not aware of anything Dell/EQL submitted but that doesn't mean anything. ;-) What I'm seeing from the array logs are resets coming from the initiator.

Re: A fundamental question to iSCSI proponents;

2009-07-01 Thread Donald Williams
Hello, You are correct that a SAN is more than just a protocol. You can create a SAN with SCSI, FC, InfiniBand, 10GbE, GbE, etc. Where I used to work, Storage Computer, we could create a SAN with 4x 160 MB/s SCSI ports. They could all connect to the same volume, thus creating a SCSI SAN. You

Re: equallogic - load balancing and xfs

2009-04-13 Thread Donald Williams
You don't want to disable connection load balancing (CLB) in the long run. CLB will balance out IO across the available ports as servers need IO. E.g. during the day your file server or SQL server will be busy, then at night other servers or backups are running. Without CLB you could end up

Re: Possible bug in open-iSCSI

2009-03-14 Thread Donald Williams
Another test might be to take the filesystem out of the equation. Use 'dd' or 'dt' to write out past the 2GB mark and see what error results. Don On Wed, Mar 11, 2009 at 4:42 AM, sushrut shirole shirole.sush...@gmail.com wrote: Thanks a lot .. ill let u know about this .. 2009/3/10 Konrad
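A minimal version of that test (the device name is a placeholder, and writing to the raw device destroys any data on it):

  # Write ~3 GB of zeros straight to the block device, bypassing the filesystem
  dd if=/dev/zero of=/dev/sdX bs=1M count=3072 oflag=direct
  # Then check dmesg / /var/log/messages for the SCSI or iSCSI error that results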

Re: Frequent Connection Errors with Dell Equallogic RAID

2008-12-08 Thread Donald Williams
Hello, I would strongly suggest using the code version Mike mentioned. I use Ubuntu 8.04/8.10 with that code without issues with EQL arrays. Running the older transport kernel module has caused NOOP errors. The initiator sends out NOOPs with different sequence numbers than what the array is expecting.