Re: [EXT] Re: Question about iscsi session block

2022-02-16 Thread Donald Williams
Hello,

 Thanks. On the app side, with iSCSI SANs I extend the disk timeout value
in the OS to better handle transitory network events and controller
failovers. On Linux that is important to prevent filesystems like ext4 from
remounting read-only (RO) on an error.
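For Linux, the per-disk SCSI command timeout lives in sysfs; a minimal sketch of checking and raising it (device paths and the 60-second value are examples, and writing the value needs root on a real system):

```shell
# Raise the SCSI command timeout so brief network events or controller
# failovers don't surface as I/O errors (which can make ext4 remount RO).
# SYSFS_BLOCK is parameterized only so the loop is easy to exercise in a
# test; on a real system it is /sys/block.
SYSFS_BLOCK="${SYSFS_BLOCK:-/sys/block}"

for t in "$SYSFS_BLOCK"/sd*/device/timeout; do
    [ -e "$t" ] || continue           # no SCSI disks present: nothing to do
    echo "before: $t = $(cat "$t")"   # default is typically 30 (seconds)
    echo 60 > "$t"                    # needs root; setting is lost on reboot
done
```

The sysfs value does not survive a reboot, so a udev rule is the usual way to make it persistent.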

  I would like to know which vendor they are using for iSCSI storage.

Regards,
Don


On Wed, Feb 16, 2022 at 5:12 AM Ulrich Windl <
ulrich.wi...@rz.uni-regensburg.de> wrote:

> >>> Donald Williams  schrieb am 15.02.2022 um
> 17:25 in
> Nachricht
> :
> > Hello,
> >Something else to check is your MPIO configuration.  I have seen this
> > same symptom when the linux MPIO feature "queue_if_no_path" was enabled
> >
> >  From the /etc/multipath.conf file showing it enabled.
> >
> > failback    immediate
> > features    "1 queue_if_no_path"
>
> Yes, the actual config is interesting. Especially when using MD-RAID, you
> typically do not want "1 queue_if_no_path", but if the app can't handle I/O
> errors, one might want it.
> For a FC SAN featuring ALUA we use:
> ...
> polling_interval 5
> max_polling_interval 20
> path_selector "service-time 0"
> ...
> path_checker "tur"
> ...
> fast_io_fail_tmo 5
> dev_loss_tmo 600
>
> The logs are helpful, too. For example (there were some paths remaining
> all the time):
> Cable was unplugged:
> Feb 14 12:56:05 h16 kernel: qla2xxx [:41:00.0]-500b:3: LOOP DOWN
> detected (2 7 0 0).
> Feb 14 12:56:10 h16 multipathd[5225]: sdbi: mark as failed
> Feb 14 12:56:10 h16 multipathd[5225]: SAP_V11-PM: remaining active paths: 7
> Feb 14 12:56:10 h16 kernel: sd 3:0:6:3: rejecting I/O to offline device
> Feb 14 12:56:10 h16 kernel: sd 3:0:6:14: rejecting I/O to offline device
> Feb 14 12:56:10 h16 kernel: sd 3:0:6:15: rejecting I/O to offline device
>
> So 5 seconds later the paths are offlined.
>
> Cable was re-plugged:
> Feb 14 12:56:22 h16 kernel: qla2xxx [:41:00.0]-500a:3: LOOP UP
> detected (8 Gbps).
> Feb 14 12:56:22 h16 kernel: qla2xxx [:41:00.0]-11a2:3: FEC=enabled
> (data rate).
> Feb 14 12:56:26 h16 multipathd[5225]: SAP_CJ1-PM: sdbc - tur checker
> reports path is up
> Feb 14 12:56:26 h16 multipathd[5225]: 67:96: reinstated
> Feb 14 12:56:26 h16 multipathd[5225]: SAP_CJ1-PM: remaining active paths: 5
> Feb 14 12:56:26 h16 kernel: device-mapper: multipath: 254:4: Reinstating
> path 67:96.
> Feb 14 12:56:26 h16 kernel: device-mapper: multipath: 254:6: Reinstating
> path 67:112.
>
> So 4 seconds later new paths are discovered.
>
>
> Regards,
> Ulrich
>
>
>
> >
> >  Also, in the past some versions of linux multipathd would wait for a
> > very long time before moving all I/O to the remaining path.
> >
> >  Regards,
> > Don
> >
> >
> > On Tue, Feb 15, 2022 at 10:49 AM Zhengyuan Liu <
> liuzhengyuang...@gmail.com>
> > wrote:
> >
> >> Hi, all
> >>
> >> We have an online server which uses multipath + iscsi to attach storage
> >> from Storage Server. There are two NICs on the server and for each it
> >> carries about 20 iscsi sessions and for each session it includes about
> 50
> >>  iscsi devices (yes, there are totally about 2*20*50=2000 iscsi block
> >> devices
> >>  on the server). The problem is: once a NIC gets faulted, it will take
> too
> >> long
> >> (nearly 80s) for multipath to switch to another good NIC link, because
> it
> >> needs to block all iscsi devices over that faulted NIC firstly. The
> >> callstack is
> >>  shown below:
> >>
> >> void iscsi_block_session(struct iscsi_cls_session *session)
> >> {
> >> queue_work(iscsi_eh_timer_workq, &session->block_work);
> >> }
> >>
> >>  __iscsi_block_session() -> scsi_target_block() -> target_block() ->
> >>   device_block() ->  scsi_internal_device_block() -> scsi_stop_queue()
> ->
> >>  blk_mq_quiesce_queue() -> synchronize_rcu()
> >>
> >> For all sessions and all devices, it was processed sequentially, and we
> >> have
> >> traced that for each synchronize_rcu() call it takes about 80ms, so
> >> the total cost
> >> is about 80s (80ms * 20 * 50). It's so long that the application can't
> >> tolerate and
> >> may interrupt service.
> >>
> >> So my question is that can we optimize the procedure to reduce the time
> >> cost on
> >> blocking all iscsi devices

Re: Question about iscsi session block

2022-02-15 Thread Donald Williams
Hello,
   Something else to check is your MPIO configuration.  I have seen this
same symptom when the linux MPIO feature "queue_if_no_path" was enabled

 From the /etc/multipath.conf file showing it enabled.

failback    immediate
features    "1 queue_if_no_path"
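For contrast, a sketch of a multipath.conf defaults section that bounds queueing instead of queueing forever; the values are illustrative, not a recommendation for any particular array:

```
defaults {
    polling_interval   5
    path_selector      "service-time 0"
    path_checker       "tur"
    failback           immediate
    # Instead of "1 queue_if_no_path" (queue forever), retry for 12
    # checker intervals (~60 s with polling_interval 5), then fail I/O
    # up to the layers above (e.g. MD-RAID) so they can react.
    no_path_retry      12
}
```

Note that "no_path_retry queue" is equivalent to enabling queue_if_no_path; a numeric value gives you queueing with an upper bound.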

 Also, in the past some versions of linux multipathd would wait for a
very long time before moving all I/O to the remaining path.

 Regards,
Don


On Tue, Feb 15, 2022 at 10:49 AM Zhengyuan Liu 
wrote:

> Hi, all
>
> We have an online server which uses multipath + iSCSI to attach storage
> from a storage server. There are two NICs on the server; each NIC carries
> about 20 iSCSI sessions, and each session includes about 50 iSCSI devices
> (yes, there are about 2*20*50 = 2000 iSCSI block devices on the server in
> total). The problem is: once a NIC faults, it takes too long (nearly 80s)
> for multipath to switch to the other good NIC link, because it first needs
> to block all iSCSI devices over the faulted NIC. The call stack is shown
> below:
>
> void iscsi_block_session(struct iscsi_cls_session *session)
> {
> queue_work(iscsi_eh_timer_workq, &session->block_work);
> }
>
>  __iscsi_block_session() -> scsi_target_block() -> target_block() ->
>   device_block() ->  scsi_internal_device_block() -> scsi_stop_queue() ->
>  blk_mq_quiesce_queue() -> synchronize_rcu()
>
> All sessions and all devices are processed sequentially, and we have
> traced that each synchronize_rcu() call takes about 80ms, so the total
> cost is about 80s (80ms * 20 * 50). That is longer than the application
> can tolerate and may interrupt service.
>
> So my question is: can we optimize this procedure to reduce the time
> spent blocking all iSCSI devices? I'm not sure if it is a good idea to
> increase the workqueue's max_active of iscsi_eh_timer_workq to improve
> concurrency.
>
> Thanks in advance.
>
> --
> You received this message because you are subscribed to the Google Groups
> "open-iscsi" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to open-iscsi+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/open-iscsi/CAOOPZo4uNCicVmoHa2za0%3DO1_XiBdtBvTuUzqBTeBc3FmDqEJw%40mail.gmail.com
> .
>



Re: trimming iscsi luns?

2021-05-26 Thread Donald Williams
Hello,
 It is also the OS/filesystem that must support the TRIM or UNMAP command.
E.g., with ext4 you have to set the 'discard' option when mounting a volume
to enable TRIM/UNMAP, or run something like 'fstrim' periodically.
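As a sketch of the two usual approaches on Linux (the mount point and device names below are examples, not from this thread):

```shell
# Reclaim free space from a thin-provisioned LUN. MNT is an example path;
# on a real system it would be an ext4/xfs mount backed by the iSCSI disk.
MNT="${MNT:-/mnt/iscsi-vol}"

if [ -d "$MNT" ] && command -v fstrim >/dev/null 2>&1; then
    # One-shot trim; -v reports how many bytes were discarded.
    fstrim -v "$MNT"
else
    # Alternative: mount with the 'discard' option for continuous TRIM,
    # e.g.  mount -o discard /dev/sdb1 "$MNT"
    echo "skipping: $MNT not present or fstrim unavailable"
fi
```

Periodic fstrim (e.g. via a cron job or systemd timer) is generally preferred over the continuous 'discard' mount option, which can add latency on some devices.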

 If your backend storage is RAIDed then typically any SSDs are not
presented as SSD/FLASH drives to the host. Physical drives are virtualized
by the RAID controller and LUNs are presented to the host.

  Once the TRIM/UNMAP command is sent, it's up to the backend storage
device to handle it properly.

 Open-iSCSI itself is the transport from the OS to the target. It does not
initiate TRIM/UNMAP or any other SCSI commands on its own. It passes along
the SCSI commands the OS sends and returns the results.

 Regards,
Don





On Wed, May 26, 2021 at 10:33 AM 'H. Giebels' via open-iscsi <
open-iscsi@googlegroups.com> wrote:

> I think I've got it. It is the emulate_tpu parameter on the target side.
> Needs some more confirmation, though
>
> H. Giebels schrieb am Mittwoch, 26. Mai 2021 um 15:26:39 UTC+2:
>
>>
>> Hello,
>>
>> not exactly sure whether this is an issue of targetcli or open-iscsi. The
>> target LUN is a sparse file, and I would like to be able to trim that LUN
>> to reclaim free space. Think thin volume on a file backend.
>>
>> Now iscsiadm -m session shows me (non-flash), which I suppose is the
>> reason why I get an "operation not permitted" error when trying to do so.
>>
>> The manpage talks about a flash node, but it is nowhere explained what
>> that is and whether it is related to flash storage at all. So maybe there
>> is some documentation about the terms used?
>>
>> But primarily I would like to know whether the information about
>> trimability is a matter of the target advertising it, or whether it has to
>> be defined during creation of the LUN on the client side (-o new).
>>
>> Thanks
>>
>> Hermann
>>
>>



Re: Hi help me please

2020-12-18 Thread Donald Williams
Hello,

 You didn't say what iSCSI target you are using.  This PDF below  covers
how to use open-iSCSI with RHEL v6.x / 7.x with Dell PS Series SANs.  The
open-iSCSI part is basically the same for all iSCSI.  With one major
exception.  Dell PS Series iSCSI SANs have all the IPs for iSCSI in the
same IP subnet.  Which requires some custom configuration settings in
open-iSCSI to make MPIO work in Linux.  If your iSCSI SAN uses two
different IP subnets you can skip the section on setting egress ports.

https://downloads.dell.com/solutions/storage-solution-resources/%283199-CD-L%29RHEL-PSseries-Configuration.pdf


Regards,
Don

On Thu, Dec 17, 2020 at 1:46 PM The Lee-Man  wrote:

> As Ulrich replied, there's not much we can do with the data you provided.
>
> On Wednesday, December 16, 2020 at 12:29:20 PM UTC-8 go xayyasang wrote:
>
>> [root@target ~]# iscsiadm -m node -o show
>> iscsiadm: No records found
>>
>>
> That's normal if you have no records in your database. If you want records
> in your database, you have to perform discovery.
>
> Please browse the README file that comes with open-iscsi. We don't have a
> general open-iscsi HowTo tutorial, but search the internet (as I just did),
> and you'll find several.
>
> Next time, supply: OS and version used, open-iscsi version number, what
> you are trying to do, and all steps leading up to your error, so that we
> can reproduce your error if needed.
>



Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread Donald Williams
Re: subnets. Not all iSCSI targets operate on multiple subnets. The
EqualLogic, for example, is intended for a single-IP-subnet schema;
multiple subnets require routing to be enabled.

Don


On Tue, Jun 30, 2020 at 1:02 PM The Lee-Man  wrote:

> On Tuesday, June 30, 2020 at 8:55:13 AM UTC-7, Donald Williams wrote:
>>
>> Hello,
>>
>>  Assuming that devmapper is running and MPIO is properly configured, you
>> want to connect to the same volume/target from different interfaces.
>>
>> However, in your case you aren't specifying separate interfaces (both use
>> "default") and the portals are on the same subnet, which typically means
>> only the default NIC for that subnet is used.
>>
>
> Yes, generally best practices require that each component of your two
> paths between initiator and target are redundant. This means that, in the
> case of networking, you want to be on different subnets, served by
> different switches. You also want two different NICs on your initiator, if
> possible, although many times they are on the same card. But, obviously,
> some points are not redundant (like your initiator or target).
>
>>
>> What iSCSI target are you using?
>>
>>  Regards,
>> Don
>>
>> On Tue, Jun 30, 2020 at 9:00 AM Amit Bawer  wrote:
>>
>>> [Sorry if this message is duplicated, haven't seen it is published in
>>> the group]
>>>
>>> Hi,
>>>
>>> Have couple of question regarding iscsiadm version 6.2.0.878-2:
>>>
>>> 1) Is it safe to have concurrent logins to the same target from
>>> different interfaces?
>>> That is, running the following commands in parallel:
>>>
>>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p
>>> 10.35.18.121:3260,1 -l
>>> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p
>>> 10.35.18.166:3260,1 -l
>>>
>>> 2) Is there a particular reason for the default values of
>>> node.conn[0].timeo.login_timeout and node.session.initial_login_
>>> retry_max?
>>> According to comment in iscsid.conf it would spend 120 seconds in case
>>> of an unreachable interface login:
>>>
>>> # The default node.session.initial_login_retry_max is 8 and
>>> # node.conn[0].timeo.login_timeout is 15 so we have:
>>> #
>>> # node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max
>>> =
>>> #   120
>>> seconds
>>>
>>>
>>> Thanks,
>>> Amit
>>>



Re: Concurrent logins to different interfaces of same iscsi target and login timeout

2020-06-30 Thread Donald Williams
Hello,

 Assuming that devmapper is running and MPIO is properly configured, you
want to connect to the same volume/target from different interfaces.

However, in your case you aren't specifying separate interfaces (both use
"default") and the portals are on the same subnet, which typically means
only the default NIC for that subnet is used.

What iSCSI target are you using?

 Regards,
Don

On Tue, Jun 30, 2020 at 9:00 AM Amit Bawer  wrote:

> [Sorry if this message is duplicated, haven't seen it is published in the
> group]
>
> Hi,
>
> Have couple of question regarding iscsiadm version 6.2.0.878-2:
>
> 1) Is it safe to have concurrent logins to the same target from different
> interfaces?
> That is, running the following commands in parallel:
>
> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p
> 10.35.18.121:3260,1 -l
> iscsiadm -m node -T iqn.2003-01.org.vm-18-198.iqn2 -I default -p
> 10.35.18.166:3260,1 -l
>
> 2) Is there a particular reason for the default values of
> node.conn[0].timeo.login_timeout and node.session.initial_login_retry_max?
> According to comment in iscsid.conf it would spend 120 seconds in case of
> an unreachable interface login:
>
> # The default node.session.initial_login_retry_max is 8 and
> # node.conn[0].timeo.login_timeout is 15 so we have:
> #
> # node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max
> =
> #   120 seconds
>
>
> Thanks,
> Amit
>



Re: [EXT] Re: udev events for iscsi

2020-04-22 Thread Donald Williams
Hello

 Re: errors. That's likely from a bad copy/paste; I referenced the source
document I took that from. That was done against an older RHEL kernel.
 Don


On Wed, Apr 22, 2020 at 3:04 AM Ulrich Windl <
ulrich.wi...@rz.uni-regensburg.de> wrote:

> >>> Donald Williams  schrieb am 21.04.2020 um
> 20:49 in
> Nachricht
>
> <30147_1587494977_5E9F4041_30147_801_1_CAK3e-EawwxYGb3Gw74+P-yBmrnE0ktOL=Fj1OT_L
> q+czyz...@mail.gmail.com>:
> > Hello,
> >
> >  If the loss exceeds the timeout value yes.  If the 'drive' doesn't come
> > back in 30 to 60 seconds it's not likely a transitory event like a cable
> > pull.
> >
> > NOOP-IN and NOOP-OUT are also known as KeepAlive.  That's when the
>
> Actually I think that's two different mechanisms: keepalive just prevents
> the connection from being discarded (some firewalls like to do that), while
> the NOP actually is an end-to-end (almost, at least) connection test.
>
> > connection is up but the target or initiator isn't responding.   If those
> > timeout the connection will be dropped and a new connection attempt made.
>
> I think the original intention for SCSI timeouts was to conclude a device
> has failed if it does not respond within time (actually there are different
> timeouts depending on the operation (like the famous rewinding of a long
> tape)). Next step for the OS would be to block I/O to a seemingly failed
> device. Recent operating systems like Linux have the choice to remove the
> device logically, requiring it to re-appear before it can be used. In some
> cases it seems preferable to keep the device, because otherwise there
> could be a cascading effect like killing processes that have the device
> open (UNIX processes do not like it when opened devices suddenly disappear).
>
> Regards,
> Ulrich
>
> >
> >  Don
> >
> >
> > On Tue, Apr 21, 2020 at 2:44 PM The Lee-Man 
> wrote:
> >
> >> On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote:
> >>>
> >>> [reposting, as the previous one seems to be lost]
> >>>
> >>> Hi all,
> >>> I have a question regarding udev events when using iscsi disks.
> >>>
> >>> By using "udevadm monitor" I can see that events are generated when I
> >>> login and logout from an iscsi portal/resource, creating/destroying the
> >>> relative links under /dev/
> >>>
> >>> However, I cannot see anything when the remote machine simply
> >>> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout
> expiring, I
> >>> don't see anything about a removed disk (and the links under /dev/
> remains
> >>> unaltered, indeed). At the same time, when the remote machine and disk
> >>> become available again, no reconnection events happen.
> >>>
> >>
> >> Because of the design of iSCSI, there is no way for the initiator to
> know
> >> the server has gone away. The only time an initiator might figure this
> out
> >> is when it tries to communicate with the target.
> >>
> >> This assumes we are not using some sort of directory service, like iSNS,
> >> which can send asynchronous notifications. But even then, the iSNS
> server
> >> would have to somehow know that the target went down. If the target
> >> crashed, that might be difficult to ascertain.
> >>
> >> So in the absence of some asynchronous notification, the initiator only
> >> knows the target is not responding if it tries to talk to that target.
> >>
> >> Normally iscsid defaults to sending periodic NO-OPs to the target every
> 5
> >> seconds. So if the target goes away, the initiator usually notices,
> even if
> >> no regular I/O is occurring.
> >>
> >> But this is where the error recovery gets tricky, because iscsi tries to
> >> handle "lossy" connections. What if the server will be right back? Maybe
> >> it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps
> >> trying to reconnect. As a matter of fact, if you stop iscsid and restart
> >> it, it sees the failed connection and retries it -- forever, by
> default. I
> >> actually added a configuration parameter called reopen_max, that can
> limit
> >> the number of retries. But there was pushback on changing the default
> value
> >> from 0, which is "retry forever".
> >>
> >> So what exactly do you think the system should do when a connection
> "go

Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello,

 If the loss exceeds the timeout value, yes. If the 'drive' doesn't come
back in 30 to 60 seconds, it's not likely a transitory event like a cable
pull.

NOOP-IN and NOOP-OUT are also known as KeepAlive: the connection is up but
the target or initiator isn't responding. If those time out, the connection
will be dropped and a new connection attempt made.
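The NOP "ping" behaviour is controlled in iscsid.conf; the values below are the usual open-iscsi defaults, shown here only for illustration:

```
# /etc/iscsi/iscsid.conf
# Send a NOP-Out "keepalive" to the target every 5 seconds ...
node.conn[0].timeo.noop_out_interval = 5
# ... and treat the connection as failed if no NOP-In reply arrives
# within 5 seconds of sending it.
node.conn[0].timeo.noop_out_timeout = 5
# After a connection drops, queue I/O this long before failing it up
# to the SCSI layer (multipath setups often lower this value so path
# failover happens sooner).
node.session.timeo.replacement_timeout = 120
```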

 Don


On Tue, Apr 21, 2020 at 2:44 PM The Lee-Man  wrote:

> On Tuesday, April 21, 2020 at 12:31:24 AM UTC-7, Gionatan Danti wrote:
>>
>> [reposting, as the previous one seems to be lost]
>>
>> Hi all,
>> I have a question regarding udev events when using iscsi disks.
>>
>> By using "udevadm monitor" I can see that events are generated when I
>> login and logout from an iscsi portal/resource, creating/destroying the
>> relative links under /dev/
>>
>> However, I cannot see anything when the remote machine simply
>> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I
>> don't see anything about a removed disk (and the links under /dev/ remains
>> unaltered, indeed). At the same time, when the remote machine and disk
>> become available again, no reconnection events happen.
>>
>
> Because of the design of iSCSI, there is no way for the initiator to know
> the server has gone away. The only time an initiator might figure this out
> is when it tries to communicate with the target.
>
> This assumes we are not using some sort of directory service, like iSNS,
> which can send asynchronous notifications. But even then, the iSNS server
> would have to somehow know that the target went down. If the target
> crashed, that might be difficult to ascertain.
>
> So in the absence of some asynchronous notification, the initiator only
> knows the target is not responding if it tries to talk to that target.
>
> Normally iscsid defaults to sending periodic NO-OPs to the target every 5
> seconds. So if the target goes away, the initiator usually notices, even if
> no regular I/O is occurring.
>
> But this is where the error recovery gets tricky, because iscsi tries to
> handle "lossy" connections. What if the server will be right back? Maybe
> it's rebooting? Maybe the cable will be plugged back in? So iscsi keeps
> trying to reconnect. As a matter of fact, if you stop iscsid and restart
> it, it sees the failed connection and retries it -- forever, by default. I
> actually added a configuration parameter called reopen_max, that can limit
> the number of retries. But there was pushback on changing the default value
> from 0, which is "retry forever".
>
> So what exactly do you think the system should do when a connection "goes
> away"? How long does it have to be gone to be considered gone for good? If
> the target comes back "later" should it get the same disc name? Should we
> retry, and if so how much before we give up? I'm interested in your views,
> since it seems like a non-trivial problem to me.
>
>>
>> I can read here that, years ago, a patch was in progress to give better
>> integration with udev when a device disconnects/reconnects. Did the patch
>> got merged? Or does the one I described above remain the expected behavior?
>> Can be changed?
>>
>
> So you're saying as soon as a bad connection is detected (perhaps by a
> NOOP), the device should go away?
>
>>
>> Thanks.
>>



Re: udev events for iscsi

2020-04-21 Thread Donald Williams
Hello,

 re: XenServer. The initiator is the same, but I suspect your issue is with
the disk timeout value on Linux. When the connection drops, Linux gets the
error and remounts RO. In VMware, for example, the VMware Tools set the
Windows disk timeout to 60 seconds so the guest does not give up so quickly.

 I suspect that if you do the same in your Linux VM and increase the disk
timeout, you will likely ride out transitory network issues and SAN
controller failovers, which is where I see this occur all the time.

  This is from a Dell PS Series document that shows one way to set the
value:
http://downloads.dell.com/solutions/storage-solution-resources/(3199-CD-L)RHEL-PSseries-Configuration.pdf


Starting on Page 14.

  Disk timeout values: The PS Series arrays can deliver more network I/O
than an initiator can handle, resulting in dropped packets and
retransmissions. Other momentary interruptions in network connectivity can
also cause problems, such as a mount point becoming read-only. To mitigate
unnecessary iSCSI resets during very brief network interruptions, change
the timeout value the kernel uses.

The default setting for Linux is 30 seconds. This can be verified using the
command:

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
 30
 30

To increase the time it takes before an iSCSI connection is reset to 60
seconds, use the command:

 # for i in $(find /sys/devices/platform -name timeout); do echo 60 > $i; done

To verify the changes, re-run the first command.

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
 60
 60

When the system is rebooted, the timeout value will revert to 30 seconds,
unless the appropriate udev rules file is created.

Create a file named /lib/udev/rules.d/99-eqlsd.rules and add the following
content:

 ACTION!="remove", SUBSYSTEM=="block", ENV{ID_VENDOR}=="EQLOGIC", RUN+="/bin/sh -c 'echo 60 > /sys/%p/device/timeout'"

To test the efficacy of the new udev rule, reboot the system.

Test that the reboot occurred, and then run the "cat $i" command above.

 # uptime
  12:31:22 up 1 min, 1 user, load average: 0.78, 0.29, 0.10

 # for i in $(find /sys/devices/platform -name timeout); do cat $i; done
 60
 60

 Regards,

Don



On Tue, Apr 21, 2020 at 11:20 AM  wrote:

> Wondering myself.
>
> On Apr 21, 2020, at 2:31 AM, Gionatan Danti 
> wrote:
>
> 
> [reposting, as the previous one seems to be lost]
>
> Hi all,
> I have a question regarding udev events when using iscsi disks.
>
> By using "udevadm monitor" I can see that events are generated when I
> login and logout from an iscsi portal/resource, creating/destroying the
> relative links under /dev/
>
>
> So running “udevadm monitor” on the initiator, you can see when a block
> device becomes available locally.
>
>
>
> However, I cannot see anything when the remote machine simply
> dies/reboots/disconnects: while "dmesg" shows the iscsi timeout expiring, I
> don't see anything about a removed disk (and the links under /dev/ remains
> unaltered, indeed). At the same time, when the remote machine and disk
> become available again, no reconnection events happen.
>
>
> As someone who has had an inordinate amount of experience with the iSCSi
> connection breaking ( power outage, Network switch dies,  wrong ethernet
> cable pulled, the target server machine hardware crashes, ...) in the
> middle of production, the more info the better.   Udev event triggers would
> help.   I wonder exactly how XenServer handles this as it itself seemed
> more resilient.
>
> XenServer host initiators  do something correct to recover and wonder how
> that compares to the normal iSCSi initiator.
>
> But unfortunately, XenServer LVM-over-iSCSi  does not pass the message
> along to its Linux virtual drives and VMs in the same way as Windows VMs.
>
>
> When the target drives became available again,   MS Windows virtual
> machines would gracefully recover on their own.All Linux VM
>  filesystems went read only and those VM machines required forceful
>  rebooting.   mount remount would not work.
>
>
>
> I can read here that, years ago, a patch was in progress to give better
> integration with udev when a device disconnects/reconnects. Did the patch
> got merged? Or does the one I described above remain the expected behavior?
> Can be changed?
>
> Thanks.
>

Re: iSCSI and Ceph RBD

2020-01-24 Thread Donald Williams
Hello,

 I am not an expert in CEPH.

However, iSCSI is the transport protocol that connects an initiator to a
target. On the client side, iSCSI traffic coming from the target is broken
down and the SCSI commands are handed to the client. When writing data,
the iSCSI initiator encodes the command and data and transports them to the
iSCSI target, where the target strips the iSCSI portion to get the client
SCSI command, e.g. a WRITE(10), and processes that request.

 So things like Ceph are above the iSCSI layer. This is why Ceph can work
with Fibre Channel, SAS, SCSI, iSCSI, etc.

https://docs.ceph.com/docs/mimic/glossary/#term-ceph-osd-daemon
OSD

A physical or logical storage unit (*e.g.*, LUN). Sometimes, Ceph users use
the term “OSD” to refer to the Ceph OSD Daemon, though the proper term is
“Ceph OSD”.
So iSCSI will provide the physical LUN that Ceph will use. This is good:
otherwise, for every storage protocol you wanted to use, you would need a
specific Ceph driver.
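
From the client side, an RBD-backed LUN exported by a Ceph iSCSI gateway
looks like any other iSCSI disk. A hedged sketch of the client steps — the
portal address and target IQN below are made-up placeholders:

```shell
# Sketch only: portal IP and target IQN are hypothetical.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2003-01.com.example:ceph-gw -p 192.0.2.10:3260 --login
lsscsi   # the RBD-backed LUN appears as an ordinary SCSI disk, e.g. /dev/sdb
```

The initiator neither knows nor cares that Ceph is behind the target.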

This is a pretty good intro to iSCSI

https://blog.calsoftinc.com/2017/03/iscsi-introduction-steps-configure-iscsi-initiator-target.html


Regards,

Don







On Fri, Jan 24, 2020 at 4:50 PM Bobby  wrote:

> Hi,
>
> I have some questions regarding iSCSI and Ceph RBD. If I have understood
> correctly, the RBD backstore module
> on target side can translate SCSI IO into Ceph OSD requests. The iSCSI
> target driver with rbd.ko can expose Ceph cluster
> on iSCSI protocol. If correct, then that all is happening on target side.
>
> My confusion is what is  happening on client side?
>
> Meaning, does linux mainline kernel code called "rbd" has any role with
> Open-iSCSI initiator on client side? To put it more simple,
> is there any common ground for both protocols (iSCSI and rbd) in the linux
> kernel  of the client side?
>
> Thanks :-)
>



Re: iSCSI Multiqueue

2020-01-23 Thread Donald Williams
Hello

 Thanks for sending this.  I too believe this is how it works. Given the
current performance of open-iscsi, it's certainly not single-threaded per
iSCSI session, and with multiple iSCSI sessions over different NICs feeding
into multipathd, the performance and redundancy needs of the vast majority
of SAN applications are met.

 Often the bottleneck is the backend storage, given the interface speeds
available today for iSCSI, especially as you add more hosts, since the
I/O load as seen by the storage is typically very random.
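
The multiple-sessions-over-different-NICs setup described above is usually
built with one iface per NIC. A hedged sketch — NIC names, IQN, and portal
address are placeholders:

```shell
# Assumptions: two NICs eth0/eth1, one target portal at 192.0.2.10.
iscsiadm -m iface -I ifc-eth0 --op=new
iscsiadm -m iface -I ifc-eth0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I ifc-eth1 --op=new
iscsiadm -m iface -I ifc-eth1 --op=update -n iface.net_ifacename -v eth1
# One login per iface -> two sessions to the same target:
iscsiadm -m node -T iqn.2001-05.com.example:vol1 -p 192.0.2.10 -I ifc-eth0 --login
iscsiadm -m node -T iqn.2001-05.com.example:vol1 -p 192.0.2.10 -I ifc-eth1 --login
multipath -ll   # both paths should appear under one device-mapper device
```

Each session runs independently, and multipathd handles the aggregation.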

 Regards,
Don




On Thu, Jan 23, 2020 at 4:51 PM The Lee-Man  wrote:

> On Wednesday, January 15, 2020 at 7:16:48 AM UTC-8, Bobby wrote:
>>
>>
>> Hi all,
>>
>> I have a question regarding multi-queue in iSCSI. AFAIK, *scsi-mq* has
>> been functional in kernel since kernel 3.17. Because earlier,
>> the block layer was updated to multi-queue *blk-mq* from single-queue.
>> So the current kernel has full-fledged *multi-queues*.
>>
>> The question is:
>>
>> How an iSCSI initiator uses multi-queue? Does it mean having multiple
>> connections? I would like
>> to see where exactly that is achieved in the code, if someone can please
>> me give me a hint. Thanks in advance :)
>>
>> Regards
>>
>
> open-iscsi does not use multi-queue specifically, though all of the block
> layer is now converted to using multi-queue. If I understand correctly,
> there is no more single-queue, but there is glue that allows existing
> single-queue drivers to continue on, mapping their use to multi-queue.
> (Someone please correct me if I'm wrong.)
>
> The only time multi-queue might be useful for open-iscsi to use would be
> for MCS -- multiple connections per session. But the implementation of
> multi-queue makes using it for MCS problematic. Because each queue is on a
> different CPU, open-iscsi would have to coordinate the multiple connections
> across multiple CPUs, making things like ensuring correct sequence numbers
> difficult.
>
> Hope that helps. I _believe_ there is still an effort to map open-iscsi
> MCS to multi-queue, but nobody has tried to actually do it yet that I know
> of. The goal, of course, is better throughput using MCS.
>



Re: Two types of initiator stacks

2020-01-10 Thread Donald Williams
Hello,
 You are very welcome.

Also, iSCSI offload cards like the Broadcom (now owned by QLogic) are
typically called "dependent hardware initiators", since such a card depends
on a connection to the OS network stack to be fully functional.  Otherwise,
it behaves just like a standard NIC.

Cards that completely offload the network and iSCSI functions are known as
"independent hardware initiators", since they don't require that OS network
connection.  They appear solely as a SCSI adapter to the OS, and all the
network configuration is done on the card.  QLogic used to make the best
examples of this: the QLogic 4xxx series iSCSI HBAs.  Now you see this in
cards that support DCB, called "Converged Network Adapters" (CNAs); since
very few software initiators support DCB natively, the card has to handle
everything.
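
Both kinds of initiator show up in iscsiadm's iface listing. A hedged
sketch — the exact output format is from memory and the bnx2i entry is
purely illustrative:

```shell
# Sketch: list the configured interfaces (software and offload).
iscsiadm -m iface
# default tcp,<empty>,<empty>,<empty>,<empty>        <- software initiator
# bnx2i.00:10:18:aa:bb:cc bnx2i,00:10:18:aa:bb:cc,<empty>,<empty>,<empty>
#                                                   <- dependent hardware offload
```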

Regards,
Don



On Fri, Jan 10, 2020 at 11:18 AM Bobby  wrote:

> ah OK thanks !
>
>
> On Thursday, January 9, 2020 at 7:35:07 PM UTC+1, Donald Williams wrote:
>>
>> Hello,
>>
>>  It is referring to iSCSI HBA cards like Broadcom BCM58xx/57xxx or just
>> using a standard NIC and the Software iSCSI adapter open-iSCSI provides.
>>
>> Regards,
>> Don
>>
>>
>>
>> On Thu, Jan 9, 2020 at 11:57 AM Bobby  wrote:
>>
>>> Under section "How to setup iSCSI interfaces (iface) for binding" of
>>> README, there is this paragraph:
>>>
>>> " To manage both types of initiator stacks, iscsiadm uses the interface 
>>> (iface)
>>> structure. For each HBA port or for software iscsi for each network
>>> device (ethX) or NIC, that you wish to bind sessions to you must create
>>> a iface config /etc/iscsi/ifaces. "
>>>
>>>
>>>
>>>  Here I am confused. Which both types of initiator stacks we mean here?
>>>
>>>
>>>
>>> Thanks !
>>>



Re: Two types of initiator stacks

2020-01-09 Thread Donald Williams
Hello,

 It is referring to iSCSI HBA cards like Broadcom BCM58xx/57xxx or just
using a standard NIC and the Software iSCSI adapter open-iSCSI provides.

Regards,
Don



On Thu, Jan 9, 2020 at 11:57 AM Bobby  wrote:

> Under section "How to setup iSCSI interfaces (iface) for binding" of
> README, there is this paragraph:
>
> " To manage both types of initiator stacks, iscsiadm uses the interface 
> (iface)
> structure. For each HBA port or for software iscsi for each network
> device (ethX) or NIC, that you wish to bind sessions to you must create
> a iface config /etc/iscsi/ifaces. "
>
>
>
>  Here I am confused. Which both types of initiator stacks we mean here?
>
>
>
> Thanks !
>



Re: Open-iSCSI in research paper

2020-01-02 Thread Donald Williams
Hello,
 There are so many benchmarks out there.  Much depends on filesystem vs.
raw, random vs. sequential, and large vs. small block size.

 If you keep the tested capacity small, you will get a better idea of
network performance vs. actual storage performance.

 I often use IOmeter with Windows.  AFAIK, IOmeter on Linux is still
basically broken: because of the libraries it uses, you don't get proper
threading to increase I/O loads, so it acts basically like a
single-threaded copy.

 I like that I can create various tests using different loads and run them
one right after the other in a single configuration file.

 With IOmeter you can set a very small test size to essentially ensure you
are getting cache reads, to verify what your hosts, NICs, and switches can
do.  Then test larger sizes if you want to actually test the backend
storage.
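
On Linux, fio can express the same kind of staged small-vs-large test in a
single job file. A hedged sketch — the device path is a placeholder, and
the sizes are illustrative only:

```ini
; Sketch of an fio job file; /dev/sdX is a placeholder for the iSCSI disk.
[global]
ioengine=libaio
direct=1
runtime=60
time_based=1
rw=randread
bs=4k
iodepth=32
filename=/dev/sdX

; Small working set -> mostly array cache; tests host/NIC/switch path.
[cache-randread]
size=1g

; Larger working set -> actually exercises the backend storage.
[backend-randread]
stonewall
size=100g
```

The `stonewall` option makes the second job wait for the first, so the two
phases run back to back, much like chained IOmeter tests.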

 Regards,

Don


On Thu, Jan 2, 2020 at 12:22 PM Lee Duncan  wrote:

> On Jan 2, 2020, at 8:51 AM, Bobby  wrote:
>
>
> One of the good things about this forum is, you always get helpthanks
> for the reply :-)
>
> I will soon have some questions regarding the user-land and kernel
> driver(s) :-)
>
> Regarding microbenchmarks, I think this one is good
> https://fio.readthedocs.io/en/latest/fio_doc.html.
>
> What do you think?
>
>
>
> Actually, I interpreted their lack of supplying a benchmark name to mean
> that they had rolled their own.
>
> Fio is a well-known benchmark. I’m not an expert on it so I can’t comment
> on it’s features and shortcomings, but I’m sure you could get some valuable
> numbers out of it. First, you have to decide what you want to measure. Is
> it IOPs, it is throughput, is it latency? Are you trying to simulate a
> specific workload (since that’s what really matters, in the end), or just
> get some numbers?
>
> —
> Lee
>
>



Re: Re: iSCSI packet generator

2019-11-08 Thread Donald Williams
Hello,

 iSCSI is just a transport method for SCSI commands, the same as Fibre
Channel, SAS, etc.

 When the iSCSI packets come in off the network, the SCSI commands and data
are separated and go to their respective devices, or 'disks' in this
case.

 Regards
Don


On Fri, Nov 8, 2019 at 1:40 PM Bobby  wrote:

>
> Hi Ulrich,
>
> Thanks for the hint. Can you please help me regarding following two
> questions.
>
> - Linux block layer perform IO scheduling IO submissions to storage device
> driver. If there is a physical device, the block layer interacts with it
> through SCSI mid layer and SCSI low level drivers. So, how *actually* a
> software initiator (*Open-iSCSI*) interacts with "*block layer*"?
>
> - What confuses me, where does the "*disk driver*" comes into play?
>
> Thanks :-)
>
>
>



Re: iSCSI packet generator

2019-11-04 Thread Donald Williams
Hello,

 Can you provide a little more info?   iSCSI is for storage, so unless your
'server' is running an iSCSI target service there won't be 'iSCSI' traffic
to monitor.

 If you do have an iSCSI service running then providing a disk via that
service to the 'client' then doing normal I/O to that iSCSI disk will
provide all the traffic you will typically need.  I.e. discovering the
device, formatting the disk, doing writes and reads, etc.

 What is it that you are trying to do?   iSCSI is the transport for SCSI
commands over a network.   You can use SCSI tools to generate SCSI commands
to that disk, then the iSCSI initiator on the 'client' will create the
respective iSCSI packets.
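
One way to do what Don describes is to drive SCSI commands with the
sg3_utils tools while capturing with tcpdump. A hedged sketch — the NIC and
disk device names are assumptions:

```shell
# Sketch: assumes sg3_utils is installed, the iSCSI disk is /dev/sdb,
# and traffic flows over eth0 on the default iSCSI port 3260.
tcpdump -i eth0 -w iscsi.pcap port 3260 &   # capture the iSCSI traffic
sg_turs /dev/sdb       # issue TEST UNIT READY
sg_inq /dev/sdb        # issue INQUIRY
sg_readcap /dev/sdb    # issue READ CAPACITY(10)
kill %1                # stop the capture; open iscsi.pcap in Wireshark
```

The initiator wraps each of these SCSI commands in iSCSI PDUs, so the
capture shows exactly the packets the protocol analyzer needs.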

 Regards,
Don




On Mon, Nov 4, 2019 at 5:49 AM Bobby  wrote:

> Hi
>
> I have two virtual machines. One is a client and other is a sever (SAN). I
> am using Wireshark to  analyze the iSCSI protocols between them.
>
> Someone recommended me, in addition to a packet analyzer, I can also use a
> packet generator. Any good packet generator for iSCSI client/server model?
>
> Thanks
>



Re: isid persistence?

2018-07-08 Thread Donald Williams
Hello,

 I don't believe that is required, in part because the ISID is at the
transport level, and SCSI-3 PR is at the SCSI level.

To be clear, you are talking about a SCSI-3 Persistent Reservation,
correct?  Not the older SCSI-2 exclusive reservation.

 When I look at the PR table on my array, it shows, for each volume with a
PR, the volume ID, PR checksum, and key, along with the number of servers
registered to that volume by the IQN of each initiator.

 Nothing about the ISID.  The hosts use the key and initiator name to
maintain the reservation.  Hosts can unregister and re-register if need be,
done via SCSI commands, not the iSCSI transport; i.e., if you remove a
server from a cluster, that PR entry will be removed from the table.

 I believe Red Hat clustering uses SCSI-3 PR with open-iscsi without any
issues.

 Regards,

Don





On Sat, Jul 7, 2018 at 2:42 PM mayur kulkarni  wrote:

> for the SCSI PR to work, the same ISID should be given to the session, but
> the open-iscsi is implemented such that it starts giving isids from 0 (not
> considering the 3 byte prefix) and counts upwards. if an initiator logs out
> and logs in again, for the same iqn port configuration (on both target and
> initiator) the same isid should be allocated but that is not the case.
>
> hope, I was able to explain that.
>
> Thanks.
> Mayur
>
> On Fri, Jul 6, 2018 at 4:30 AM Donald Williams 
> wrote:
>
>> Hello,
>>
>>   It's an ID for that session what would be benefit of persistence?
>> For my purposes the fact it's only  for that session helps me when going
>> through logs or traces. Makes it much easier to follow that session through
>> the iSCSID logs and on the storage device as well.
>>
>>- SSID (Session ID): A session between an iSCSI initiator and an
>>  iSCSI target is defined by a session ID that is a tuple composed of
>>  an initiator part (ISID) and a target part (Target Portal Group
>>  Tag).  *The ISID is explicitly specified by the initiator at session
>>  establishment. * The Target Portal Group Tag is implied by the
>>  initiator through the selection of the TCP endpoint at connection
>>  establishment.  The TargetPortalGroupTag key must also be returned
>>  by the target as a confirmation during connection establishment
>>  when TargetName is given.
>>
>> Regards,
>> Don
>>
>>
>> On Thu, Jul 5, 2018 at 3:02 PM mayur kulkarni 
>> wrote:
>>
>>> did people move onto some other iscsi initiator that I don't know about
>>> or my question is so stupid that no one cares to answer :(
>>>
>>>
>>> On Wednesday, 27 June 2018 00:38:13 UTC+5:30, mayur kulkarni wrote:
>>>>
>>>> open-iscsi is not allocating same isid when logging in again, are there
>>>> any plans to support it?
>>>>
>>>> if not then how to overcome this? am I missing anything?
>>>>



Re: isid persistence?

2018-07-05 Thread Donald Williams
Hello,

  It's an ID for that session; what would be the benefit of persistence?
For my purposes, the fact that it's only for that session helps me when
going through logs or traces.  It makes it much easier to follow that
session through the iscsid logs and on the storage device as well.

   - SSID (Session ID): A session between an iSCSI initiator and an
 iSCSI target is defined by a session ID that is a tuple composed of
 an initiator part (ISID) and a target part (Target Portal Group
 Tag).  *The ISID is explicitly specified by the initiator at session
 establishment. * The Target Portal Group Tag is implied by the
 initiator through the selection of the TCP endpoint at connection
 establishment.  The TargetPortalGroupTag key must also be returned
 by the target as a confirmation during connection establishment
 when TargetName is given.
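
As a rough illustration of the structure under discussion: the 6-byte ISID
is a fixed prefix plus a per-login qualifier, which is why the original
poster sees the counter restart. A sketch — the 0x00023d prefix is what
open-iscsi sessions commonly show in logs, but treat it as an assumption
rather than something read from the source:

```shell
# Sketch: compose a 6-byte ISID as hex = 3-byte prefix + 3-byte qualifier.
# The prefix value is an assumption, not taken from the open-iscsi source.
prefix="00023d"
qualifier=1    # open-iscsi hands out qualifiers counting up per login
printf '%s%06x\n' "$prefix" "$qualifier"   # -> 00023d000001
```

A fresh login that gets qualifier 2 would yield 00023d000002, which is the
non-persistence the original poster is describing.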

Regards,
Don


On Thu, Jul 5, 2018 at 3:02 PM mayur kulkarni  wrote:

> did people move onto some other iscsi initiator that I don't know about or
> my question is so stupid that no one cares to answer :(
>
>
> On Wednesday, 27 June 2018 00:38:13 UTC+5:30, mayur kulkarni wrote:
>>
>> open-iscsi is not allocating same isid when logging in again, are there
>> any plans to support it?
>>
>> if not then how to overcome this? am I missing anything?
>>



Re: Error Recovery and TUR

2018-05-01 Thread Donald Williams
Hello,

 Part of the iSCSI protocol is recovering from different error conditions.

 https://www.ietf.org/proceedings/51/slides/ips-6.pdf

  That link is the spec for it.

  On a connection reset, the initiator will go back to the discovery
address and attempt to log back in.

 Another key piece is the keep-alive timeout, implemented with the NOP-In
and NOP-Out PDUs.  The iSCSI initiator and iSCSI target periodically 'ping'
each other by sending a NOP command.  If acknowledged, the connection stays
alive.  If the initiator doesn't get a reply, it will drop the connection
and try again.  If the target doesn't get a reply, it will close that
connection, which should cause the iSCSI initiator to reconnect if it's
still up.  You see that often when you reboot a server or target.

 iSCSI is very robust and well tested.  Open-iSCSI is the standard iSCSI
initiator for Linux platforms.
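
The keep-alive and recovery timing described above is tunable in
open-iscsi. A hedged sketch of the relevant iscsid.conf settings — the
values shown are illustrative, so check your distribution's shipped file:

```ini
# /etc/iscsi/iscsid.conf (excerpt; values illustrative)
# How often to send a NOP-Out ping, and how long to wait for the NOP-In reply:
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
# How long to wait for a dropped session to re-establish before failing
# outstanding I/O back up the stack:
node.session.timeo.replacement_timeout = 120
```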

 The University of NH has a Compliance lab, with the test plans and test
suite

https://www.iol.unh.edu/testing/storage/iscsi/test-plans

Here's some more info on compliance testing.

https://www.snia.org/sites/default/files/files2/files2/SDC2013/presentations/TestingMethodologies/RonnieSahlberg_iscsi_testing.pdf

 Regards,

Don




On Mon, Apr 30, 2018 at 5:28 PM, Shoaib  wrote:

> Hi,
>
> I am new to iSCSI and dealing with an iSCSI recovery issue. I have a few
> questions that hopefully the community can answer.
>
> 1) Is there an iSCSI initiator test suite which tests recovery?
>
> 2) Has open-iscsi initiator been tested for recovery and can I get access
> to the results?
>
> 3) My understanding is that TUR is only issued once. What happens if for
> whatever reason, say connection reset the TUR gets dropped. How does iSCSI
> recovers?
>
> Thanks a lot,
>
> Shoaib
>



Re: How to address more than one LUN in a target

2018-04-11 Thread Donald Williams
Hi Paul,

 CML does target 0, LUN 0; the next volume will be target 0, LUN 1,
etc.  With the standard two paths and two fault domains, each server will
see four targets, with the volumes as LUNs underneath them.  This is as
opposed to EQL, which uses target 0, LUN 0 for the first volume, then
target 1, LUN 0 for the next volume, and so on.

 Gerry,

  Here are couple of links for working with Linux and Compellent Storage.

http://en.community.dell.com/techcenter/extras/m/white_papers/20421201/

http://en.community.dell.com/cfs-file/__key/telligent-evolution-components-attachments/13-4491-00-00-20-44-03-04/SC_2D00_Series_2D00_with_2D00_RHEL_2D00_7x_2D00_Dell_2D00_EMC_2D00_2018_2D002800_CML1071_2900_.pdf?forcedownload=true

http://en.community.dell.com/techcenter/extras/m/white_papers/20440304

If these don't help, please open a support case with Dell and they can
assist you. Debian isn't a supported OS, but RHEL and SUSE use the same
iSCSI initiator.

It kind of sounds like a new udev rule needs to be created or modified, but
that's a bit of a guess.
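
When extra LUNs are mapped to a target the initiator is already logged in
to, a session rescan (rather than a fresh discovery) is normally what
surfaces them. A hedged sketch — device names are assumptions:

```shell
# Sketch: rescan existing sessions so newly mapped LUNs appear.
iscsiadm -m session --rescan
lsscsi              # new LUNs show up as additional /dev/sdX devices
sg_luns /dev/sg0    # optional (sg3_utils): ask the target to Report LUNs directly
```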

 Regards,

 Don



On Wed, Apr 11, 2018 at 2:21 PM, Paul Koning  wrote:

> Gerry,
>
> I'm not sure I understand the question.  iSCSI doesn't talk to LUNs, it
> talks to iSCSI targets.  You address targets (by their IP address).
>
> Once you're talking to a target, iSCSI provides a path for SCSI to do its
> thing with that target.  One of the things SCSI (not iSCSI) does is ask the
> target to "Report LUNs".  For most storage devices, a list of LUNs comes
> back, and SCSI will make each LUN it sees available as a Linux device.
>  (EqualLogic is an exception, it has just one LUN per target; but few
> targets, many more LUNs, is the much more common pattern and Compellent
> follows that common pattern.)
>
> The softlink you showed indicates that those LUNs are detected and mapped
> to /dev/sdX Linux block devices, as expected.
>
> One more point: Report LUNs will see the LUNs that the storage device
> wants you to see.  A lot of them do "LUN Masking" which means that only
> some LUNs are visible to a given client, according to access control
> settings.  If there are 20 LUNs, but LUN Masking is set so your client does
> not have permission to see 19 of them, then from your client only one LUN
> will appear in the Report LUNs reply, and only one /dev/sdX will be created.
>
> paul
>
> > On Apr 11, 2018, at 7:24 AM, jmgerryobr...@gmail.com wrote:
> >
> > Hi,
> >
> >   How do you address more than one LUN in a target? We are trying to
> connect from Debian Stretch to an iSCSI volume in a DellEMC Compellent
> SC5050 array. In the Compellents the targets are the WQNs for the
> controller interfaces not the volumes Individual iSCSI volumes are given
> LUN numbers, eg. LUN1, LUN2, etc. Is there any way in open-iscsi to address
> individual LUNs within a given target? If we reboot the server the
> individual LUNs are connected to by the Device Mapper and given individual
> /dev/disk/by-path entries with "-lin-N" suffix (see below). We can't find
> any way to get open-iscsi to discover individual LUNs when there is more
> than on LUN attached to a target.
> >
> > Can you help us please.
> >
> > Regards,
> >Gerry
> >
> > /dev/disk/by-path/ip-10.32.141.10:3260-iscsi-iqn.2002-03.com.compellent:5000d310055ba23d-lun-4
> -> ../../sdz
>



Re: facing problem to mount EMC storage in ubuntu 14.04/

2017-10-05 Thread Donald Williams
Hello,

 Additionally, what SCSI disk device name are you using to create the
filesystem?  If you have multipathd running, device-mapper will create a
new device name.  If the volume is partitioned, it will have a p1 at the
end of the device name; that's what you want to use to create the
filesystem.  If you use the 'base' name without the p1, you will get a
device-busy error.
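
As a hedged illustration of that naming — the multipath alias and the exact
partition suffix (p1 vs. -part1) vary by distribution and multipath.conf
settings:

```shell
multipath -ll                    # find the multipath device, e.g. mpatha
lsblk /dev/mapper/mpatha         # partitions appear as mpathap1 or mpatha-part1
mkfs.ext4 /dev/mapper/mpathap1   # format the partition device, not the base device
```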

 Regards,
Don


On Tue, Oct 3, 2017 at 12:23 PM, The Lee-Man 
wrote:

>
> On Saturday, September 30, 2017 at 11:11:24 AM UTC-7, sali wrote:
>>
>> Dear Team,
>>
>> anybody can help me to mount storage in linux ubuntu 14.04 ?
>>
>> i tried with open iscsi, but i can see the drives but i cant format the
>> disk
>>
>> could you please help me to fix this issue ?
>>
>>
>> regards,
>> Salih
>>
>>
> Can you be more specific? What do you see when you "see the drives"? What
> command are you running to try to "format the disk" that is failing?
>
> It seems like if you can see the drives then iSCSI is working.
>



Re: installing initiator and target

2017-08-01 Thread Donald Williams
Hello,

 Both the iSCSI initiator and target are available as pre-compiled RPM
packages for Fedora, so you do not need to compile them:
iscsi-initiator-utils for the initiator and, I believe, iscsi-target-utils
for the iSCSI target server.

 Regards,
Don


On Tue, Aug 1, 2017 at 12:49 AM, 'jayshankar nair' via open-iscsi <
open-iscsi@googlegroups.com> wrote:

> Hi,
>
> The configure file is missing, hence I am unable to run make and make install.
>
> Thanks,
> Jayshankar
>
>
> On Monday, July 31, 2017 11:59 PM, The Lee-Man 
> wrote:
>
>
> On Sunday, July 30, 2017 at 10:01:20 PM UTC-7, jayshankar nair wrote:
>
> Hi,
>  I like to install the iscsi initiator and target on fedora 25. PLease
> email me the README file.
>
> Thanks,
> Jayshankar
>
>
> Why do you not install the package and look at the README?
>
> Or use git to download the sources, and look at the README.
>
> This is not a file to email service.
>
>
>



Re: open-iscsi default interface behavior

2017-06-20 Thread Donald Williams
Hi Mike,
 I have found that, at least with EQL SANs, setting rp_filter is
needed, but you still have to set up the interface files to create a session
from each designated port.  Otherwise, when you look at the traffic, it only
comes from one port on the server.

Don


On Tue, Jun 20, 2017 at 12:59 PM, Michael Eklund 
wrote:

> Don,
>
> That is not necessarily true.  If rp_filter is set to 2 for each network
> device on the same subnet, they can all be active at the same time. This is
> even addressed in the open-iscsi documentation.
>
> Mike E.
>
> On Saturday, June 17, 2017 at 7:59:40 PM UTC-5, Donald Williams wrote:
>>
>> Hello,
>>I read over that thread.  One thing missing from that discussion is
>> how Linux routing works with multiple NICs on the same subnet.  If you have
>> two NICs IP'd on the same IP subnet, only one NIC will be active.  That becomes
>> the default NIC for that subnet.  Down that interface and the other will
>> become active.  That's the default behavior.  That's why the -I interface
>> option or creating interface files is needed, not just for offload
>> cards.  On iSCSI SANs like the Equallogic, where all interfaces are on the
>> same subnet, the iSCSI initiator must initiate iSCSI connections from each
>> interface desired for iSCSI traffic.  iSCSI SANs like the MD and CML can
>> use different network subnets to get around how Linux networking works.
>>
>> Don
>>
>>
>>
>>
>> On Fri, Jun 16, 2017 at 4:04 PM,  wrote:
>>
>>> I have been facing an issue with kubernetes implementation here is the
>>> link if you care to look:
>>>
>>> https://github.com/kubernetes/kubernetes/issues/46041#issuec
>>> omment-308762723
>>>
>>> The question I have is why does open-iscsi behave differently when you
>>> use the default interface in the first place?
>>>
>>> That is why does the behavior of these things differ:
>>>
>>> iscsiadm -m discovery -t st -p X.X.X.X
>>> vs
>>> iscsiadm -m discovery -t st -p X.X.X.X -I default
>>>
>>> and
>>>
>>> iscsiadm -m node -p  -T iqn.2001-05.com.equallogic:
>>> X-XX-X--kubetesting --login
>>> vs
>>> iscsiadm -m node -p  -T iqn.2001-05.com.equallogic:
>>> X-XX-X--kubetesting -I default --login
>>>
>>> Thanks,
>>>
>>> Mike E
>>>
>>>
>>
>



Re: open-iscsi default interface behavior

2017-06-17 Thread Donald Williams
Hello,
   I read over that thread.  One thing missing from that discussion is how
Linux routing works with multiple NICs on the same subnet.  If you have two
NICs IP'd on the same IP subnet, only one NIC will be active.  That becomes the
default NIC for that subnet.  Down that interface and the other will become
active.  That's the default behavior.  That's why the -I interface option
or creating interface files is needed, not just for offload cards.  On
iSCSI SANs like the Equallogic, where all interfaces are on the same subnet,
the iSCSI initiator must initiate iSCSI connections from each interface
desired for iSCSI traffic.  iSCSI SANs like the MD and CML can use
different network subnets to get around how Linux networking works.
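
For illustration, each per-port session is driven by an iface record, a
small file under the ifaces directory (/etc/iscsi/ifaces/ or
/var/lib/iscsi/ifaces/, depending on distro).  A minimal record for a
hypothetical eth2 looks like:

```
iface.iscsi_ifacename = eth2
iface.net_ifacename = eth2
iface.transport_name = tcp
```

Running discovery with -I eth2 -I eth3 then creates node records bound to
each port, and login brings up one session per interface.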

Don




On Fri, Jun 16, 2017 at 4:04 PM,  wrote:

> I have been facing an issue with kubernetes implementation here is the
> link if you care to look:
>
> https://github.com/kubernetes/kubernetes/issues/46041#
> issuecomment-308762723
>
> The question I have is why does open-iscsi behave differently when you use
> the default interface in the first place?
>
> That is why does the behavior of these things differ:
>
> iscsiadm -m discovery -t st -p X.X.X.X
> vs
> iscsiadm -m discovery -t st -p X.X.X.X -I default
>
> and
>
> iscsiadm -m node -p  -T iqn.2001-05.com.equallogic:X-XX-X-
> -kubetesting --login
> vs
> iscsiadm -m node -p  -T 
> iqn.2001-05.com.equallogic:X-XX-X--kubetesting
> -I default --login
>
> Thanks,
>
> Mike E
>
>



Re: Updated iscsi timeout and retry values in iscsi.conf file are not getting effective

2015-08-18 Thread Donald Williams
Hello Manish,

 No, restarting the iSCSI daemon does not update the node files.  There are
iscsiadm commands that will do so on the fly.

 This is taken from the Dell Tech Report TR1062, "Configuring iSCSI and
MPIO for RHEL v5.x", which is 99% the same for RHEL v6.x.

  A search for Dell TR1062 will give you a link to the PDF.

  For example:

This creates the interface file for Ethernet port eth2 and sets the option
so that MPIO will work properly:

# iscsiadm --mode iface --interface eth2 -o update --name
iface.net_ifacename --value=eth2

 The way you found, editing /etc/iscsi/iscsid.conf and then re-running
discovery, will push the changes out to all the node files at once.

 So both techniques have their value.
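
 As a sketch, a single setting can also be pushed into all existing node
records without re-running discovery; the command below is shown with a
leading echo as a dry run (drop the echo and run as root to apply):

```shell
# Push a new replacement_timeout into every existing node record.
opt=node.session.timeo.replacement_timeout
echo iscsiadm -m node -o update -n "$opt" -v 30
```

 Sessions still need to log out and back in before the new value takes
effect.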

 Regards,

 Don



On Tue, Aug 18, 2015 at 8:10 AM, Manish Singh  wrote:

> Dear All,
>
> We are using RHEL 6.6 and iscsi-initiator-utils-6.2.0.873-13 in our test
> environment.
>
> we have modified following iscsi timeout and retry values in iscsi.conf,
>
> node.session.timeo.replacement_timeout = 30 (previously 180)
>
> node.session.initial_login_retry_max = 1 (previously 8, By Default)
>
> node.session.err_timeo.abort_timeout = 5 (previously 15, By Default)
>
> node.session.err_timeo.lu_reset_timeout = 10 (previously 30, By Default)
> node.session.err_timeo.tgt_reset_timeout = 10 (previously 30, By Default)
>
>
> As per our understanding, the above changes should take effect for
> iscsiadm commands just after restarting the iscsi daemon.
>
> But they do not take effect until we execute the discovery
> command (iscsiadm -m discovery --type sendtargets -p x.x.x.x) after
> restarting the iscsi daemon.
>
>
> Can someone please respond to my following queries:
>
> 1> As iscsi database(/var/lib/iscsi/nodes) resides at initiator side,
> restarting iscsi daemon should update the database(/var/lib/iscsi/nodes).
>
> Is the above understanding correct?
>
> If not, is it always required to explicitly execute discovery command to
> make changes in iscsi.conf become effective ?
>
> 2> Is it possible to achieve the same without executing discovery command
> [Is there any other possible way] ?
>
>
>
>
>
>



Re: Broken pipe on my target. Is there any option on my initiator to fix it?

2015-05-13 Thread Donald Williams
Hello Felipe,

 I'm not sure about anyone else, but I wouldn't expect that tweaking the
iSCSI settings you've been talking about will improve this.

 Have you tested just connecting from server to storage via iSCSI?  Take
NFS out of the picture.  iSCSI is very dependent on the network.  What kind
of switch are you using?  Is flow control enabled?  Have you configured
MPIO?

 With just iSCSI you can potentially get better triage data from the
iscsid logs.

 don



On Wed, May 13, 2015 at 10:24 AM, Felipe Gutierrez  wrote:

> I am using the async option to export my NFS disk.
> http://unixhelp.ed.ac.uk/CGI/man-cgi?exports
>
> This helps make all writes to the disk very fast.
>
>
> On Saturday, May 9, 2015 at 2:05:19 PM UTC-3, Felipe Gutierrez wrote:
>>
>> Hi, I am using jscsi.org target and open-iscsi initiator. Through NFS I
>> can copy a bunch of files and it seems ok. When I execute a virtual machine
>> from vmware (vmware -> NFS -> open-iscsi -> target jscsi) the target throws
>> a broken pipe sometimes.  The initiator reestablishes the connection, but
>> this broken pipe is corrupting my VM file system.
>>
>> On a good work my target sends SCSIResponseParser PDU and after that
>> receives SCSICommandParser PDU from the initiator. When the broken pipe is
>> up to happen the target sends SCSIResponseParser PDU and does not receive
>> SCSICommandParser PDU. Instead of it, the target receives after 5 seconds
>> NOPOutParser PDU, and sends  NOPInParser PDU. After 60 seconds my target
>> receives TaskManagementFunctionRequestParser PDU with OpCode: 0x2, which
>> means to abort the task.  So, the target does what the initiator is asking.
>> The broken pipe happens and a new connection is established.
>>
>> My question is: why does the initiator not keep the communication going
>> after the SCSIResponseParser PDU sent by the target? Is there any way to
>> see if this message is wrong? Or any initiator log error?
>> Here is the target debug.
>>
>> (228)19:19:01 DEBUG [main] fullfeature.WriteStage - PDU sent 4:
>> ParserClass: SCSIResponseParser
>>   ImmediateFlag: false
>>   OpCode: 0x21
>>   FinalFlag: true
>>   TotalAHSLength: 0x0
>>   DataSegmentLength: 0x0
>>   InitiatorTaskTag: 0x2810
>>   Response: 0x0
>>   SNACK TAG: 0x0
>>   StatusSequenceNumber: 0xc8a
>>   ExpectedCommandSequenceNumber: 0xc6e
>>   MaximumCommandSequenceNumber: 0xc6e
>>   ExpDataSN: 0x0
>>   BidirectionalReadResidualOverflow: false
>>   BidirectionalReadResidualUnderflow: false
>>   ResidualOverflow: false
>>   ResidualUnderflow: false
>>   ResidualCount: 0x0
>>   Bidirectional Read Residual Count: 0x0
>>
>> (273)19:19:06 DEBUG [main] connection.TargetSenderWorker - Receiving this
>> PDU:
>>   ParserClass: NOPOutParser
>>   ImmediateFlag: true
>>   OpCode: 0x0
>>   FinalFlag: true
>>   TotalAHSLength: 0x0
>>   DataSegmentLength: 0x0
>>   InitiatorTaskTag: 0x2910
>>   LUN: 0x0
>>   Target Transfer Tag: 0x
>>   CommandSequenceNumber: 0xc6e
>>   ExpectedStatusSequenceNumber: 0xc8b
>>
>> (144)19:19:06 DEBUG [main] connection.TargetSenderWorker -
>> connection.getStatusSequenceNumber: 3211
>> (167)19:19:06 DEBUG [main] connection.TargetSenderWorker - Sending this
>> PDU:
>>   ParserClass: NOPInParser
>>   ImmediateFlag: false
>>   OpCode: 0x20
>>   FinalFlag: true
>>   TotalAHSLength: 0x0
>>   DataSegmentLength: 0x0
>>   InitiatorTaskTag: 0x2910
>>   LUN: 0x0
>>   Target Transfer Tag: 0x
>>   StatusSequenceNumber: 0xc8b
>>   ExpectedCommandSequenceNumber: 0xc6e
>>   MaximumCommandSequenceNumber: 0xc6e
>>
>> (228)19:19:11 DEBUG [main] connection.TargetSenderWorker - Receiving this
>> PDU:
>>   ParserClass: NOPOutParser
>>   ImmediateFlag: true
>>   OpCode: 0x0
>>   FinalFlag: true
>>   TotalAHSLength: 0x0
>>   DataSegmentLength: 0x0
>>   InitiatorTaskTag: 0x2a10
>>   LUN: 0x0
>>   Target Transfer Tag: 0x
>>   CommandSequenceNumber: 0xc6e
>>   ExpectedStatusSequenceNumber: 0xc8c
>>
>>
>> 
>> ...
>> ...
>> ...
>> (228)19:20:02 DEBUG [main] connection.TargetSenderWorker - Receiving this
>> PDU:
>>   ParserClass: TaskManagementFunctionRequestParser
>>   ImmediateFlag: true
>>   OpCode: 0x2
>>   FinalFlag: true
>>   TotalAHSLength: 0x0
>>   DataSegmentLength: 0x0
>>   InitiatorTaskTag: 0x3610
>>   LUN: 0x0
>>   Referenced Task Tag: 0x6b10
>>   CommandSequenceNumber: 0xc6e
>>   ExpectedStatusSequenceNumber: 0xc98
>>   RefCmdSN: 0xab6
>>   ExpDataSN: 0x0
>>
>>
>> Thanks, Felipe
>>
>


Re: Changes to iSCSI device are not consistent across network

2015-02-25 Thread Donald Williams
Hello,

Unless you have a cluster file system in place, what you are seeing is
expected.  Each node believes it owns that volume exclusively.  There's
nothing in the iSCSI or SCSI protocols to address this.  A write from one node
doesn't tell the other node to update its cached image of that disk.
Without a file system to handle that process, there's no workaround.

Regards,

Don
On Feb 25, 2015 8:21 PM,  wrote:

> Hey guys,
>
> Forgive me, but I'm super new to this.
>
> I have two CentOS 7 nodes. I'm using LIO to export a sparse file over
> iSCSI.
>
> The sparse file was created as a LIO FILEIO with write-back disabled
> (write-through)
> In targetcli, I create a LUN on my iSCSI frontend
>
> I formatted the sparse file to have an EXT4 filesystem.
>
> On both the target node and the initiator node, I can initiate a iSCSI
> session (iscsiadm -m node --login), mount the device, and read and write to
> it.
>
> However, changes to the device are not consistent across the network until
> i logout of the iSCSI session. (iscsiadm -m node --logout) (both nodes have
> to logout. The first logout writes the changes, and the second one
> refreshes them)
>
> Somewhere, caching is occurring, but I'm not sure where.
>
> Just in case you're curious, my use case is to have multiple nodes write
> to the same remote disk (or file) in parallel.
>
> Any direction or advice would be great. Thank you.
>
> -Matt
>
>



Re: Slow dir / Performance.

2014-12-02 Thread Donald Williams
Hello,

 What Linux distro are you using?There are some common tweaks to the
/etc/iscsid.conf and sysctl.conf files that help improve performance.

This link covers how to configure RHEL with EQL.  The same principles apply
to any recent Linux distro.


http://en.community.dell.com/cfs-filesystemfile/__key/communityserver-components-postattachments/00-19-86-14-22/TR1062_2D00_LinuxDeploy_2D00_v1-2.pdf

 Most distros locate all the files under /etc/iscsi.  RHEL/CentOS have
some in /etc/iscsi and the rest in /var/lib/iscsi.

 Are you using MPIO?

 What FW are you running on the EQL storage?

 A common tweak I run to help improve read performance is "blockdev":

/sbin/blockdev --setra <value> /dev/device

 This increases the read-ahead value, which is pretty low by default.  It's
covered in the PDF as well.
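
 For example, with a placeholder device name (readahead is counted in
512-byte sectors; shown with a leading echo as a dry run, so drop the echo
and run as root to apply):

```shell
dev=/dev/sdc                         # example device; substitute your own
echo "blockdev --getra $dev"         # prints the command to show the current value
echo "blockdev --setra 4096 $dev"    # prints the command to raise it to 2 MiB
```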

  Lastly have you opened a support case with Dell? They can review the
diags and switch logs.

 Regards

 Don


On Tue, Dec 2, 2014 at 9:06 AM,  wrote:

> We have 2 Equallogic Systems, And a Dell Servers.
>
> We give every server a block device for the home dir, so the user data is
> on the home dir.  This is working great; it's formatted with an XFS
> filesystem, running iscsiadm version 2.0-870 with Linux kernel 3.10.57.
>
> But when we log in on the server, the first time we run a dir command on
> the home dir it takes a long time before we get feedback; the next time
> the dir is fast.
> Now we have users on these systems, and this problem hurts the performance
> of the SQL, websites, and e-mail we are running on the server.
>
> Is there some information on how I can fix, or maybe get this problem
> better under control?
>
> I hope somebody can help me; I have been searching for more than 3 weeks
> now.
>
>



Re: twice mounted LUN with iscsiadm login ?

2014-10-23 Thread Donald Williams
Bonjour,

 I'm not sure why it's saying (multiple).  But you notice it does that on
each separate login session.  Possibly Mike Christie will chime in with an
answer.   It isn't anything I would worry about.

Run this to see the existing sessions.

#iscsiadm -m session

e.g.  My environment is using MPIO and IPv4/IPv6 at same time.   You should
just see one entry per EQL volume.

tcp: [1] [2001:db8:00:00:10:126:204:10]:3260,1
iqn.2001-05.com.equallogic:4-52aed6-a813e0689-e2f004a89e651b0e-linux-iscsi-test-vol0
(non-flash)
tcp: [10] [2001:db8:00:00:10:126:204:10]:3260,1
iqn.2001-05.com.equallogic:4-52aed6-a813e0689-e2f004a89e651b0e-linux-iscsi-test-vol0
(non-flash)
tcp: [2] 10.126.204.10:3260,1
iqn.2001-05.com.equallogic:4-52aed6-a813e0689-e2f004a89e651b0e-linux-iscsi-test-vol0
(non-flash)
tcp: [3] 10.126.204.10:3260,1
iqn.2001-05.com.equallogic:4-52aed6-a813e0689-e2f004a89e651b0e-linux-iscsi-test-vol0
(non-flash)
tcp: [4] 10.126.204.10:3260,1
iqn.2001-05.com.equallogic:0-1cb196-c2c0def3d-7a613a8fd9d53da8-ois-chaptest-vol
(non-flash)
tcp: [5] 10.126.204.10:3260,1
iqn.2001-05.com.equallogic:0-1cb196-c2c0def3d-7a613a8fd9d53da8-ois-chaptest-vol
(non-flash)
tcp: [6] 10.126.205.110:3260,1
iqn.2001-05.com.equallogic:0-af1ff6-03b521dd7-b550051e2d854404-rhelv6-mpio-test-dw-vol
(non-flash)
tcp: [7] 10.126.205.110:3260,1
iqn.2001-05.com.equallogic:0-af1ff6-03b521dd7-b550051e2d854404-rhelv6-mpio-test-dw-vol
(non-flash)
tcp: [8] 10.126.202.240:3260,1
iqn.2001-05.com.equallogic:4-52aed6-08a90c064-074004f1fb153daa-ois-chaptest-vol-dw
(non-flash)
tcp: [9] 10.126.202.240:3260,1
iqn.2001-05.com.equallogic:4-52aed6-08a90c064-074004f1fb153daa-ois-chaptest-vol-dw
(non-flash)

And for more detail:

#iscsiadm -m session -P 1


 Out of curiosity, why don't you want to use MPIO, possibly getting better
performance and gaining redundancy as well?

 Cordialement

 Don


On Thu, Oct 23, 2014 at 5:46 AM, Laurent HENRY 
wrote:

> Thank you both Donald and Paul.
>
> That's it, i have 2 "ifaces" in my isci/ifaces:
> libvirt-iface-08bf216d
> libvirt-iface-33d9c275
>
> I don't know how they appear there.
> After deleting them and all my node configs and rebooting, I am getting just
> one new one:
> libvirt-iface-3e97ef13
>
> Discovery seems to work fine now.
> Login mounts the resource just once, but I still see something about
> "multiple"
> mentioned while logging in (see below)
>
> #iscsiadm -m node -T
> iqn.2001-05.com.equallogic:4-52aed6-65a51e6aa-5157080d44c5447a-slash -p
> 192.168.99.55:3260 --login
>
> Logging in to [iface: libvirt-iface-3e97ef13, target:
> iqn.2001-05.com.equallogic:4-52aed6-65a51e6aa-5157080d44c5447a-slash,
> portal:
> 192.168.99.55,3260] (multiple)
>
> Login to [iface: libvirt-iface-3e97ef13, target:
> iqn.2001-05.com.equallogic:4-52aed6-65a51e6aa-5157080d44c5447a-slash,
> portal:
> 192.168.99.55,3260] successful.
>
>
> Le mercredi 22 octobre 2014, 13:01:30 Donald Williams a écrit :
> > Hello,
> >
> >  The issue isn't on the Equallogic side, you have "Interface" AKA "IFACE"
> > files, configured in open-iscsi.   This does discovery out each defined
> > physical interface and logins as well.
> >
> > They are located in the open-iscsi directory in the "iface" subdirectory.
> >
> > Logging in to [*iface: libvirt-iface-33d9c275*, target:
> > iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-f437080d41b5434f-13-1-orig,
> > portal: 192.168.99.55,3260] (multiple)
> >
> > Logging in to *[iface: libvirt-iface-08bf216d*, target:
> > iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-f437080d41b5434f-13-1-orig,
> > portal: 192.168.99.55,3260] (multiple)
> >
> >  If you have multipathd installed it will create an MPIO device allowing
> > you to use multiple paths to reach the SAN, providing better performance
> > and redundancy  Typically /dev/mapper/mpath0  /dev/mapper/mpath1, etc...
> >
> >  This Tech Report covers how to configure MPIO for RedHat.   The basic
> > process is the same for SuSE.   Files might be located in different
> > directories.
> >
> >
> http://en.community.dell.com/cfs-filesystemfile/__key/communityserver-compon
> > ents-postattachments/00-19-86-14-22/TR1062_2D00_LinuxDeploy_2D00_v1-2.pdf
> >
> >
> >  Regards,
> >
> >  Don
> >
> >
> >
> > On Wed, Oct 22, 2014 at 12:49 PM, Paul Koning 
> >
> > wrote:
> > > On Oct 22, 2014, at 12:39 PM, Laurent HENRY 
> > >
> > > wrote:
> > > > Hello,
> > > >
> > > >  I am noticing a strange behavior with one of my Linux server
> > >
> > > (Opensuse
> > >
> > > > 13.1 with 

Re: How do i calculate the time delay of iscsi target ,please?

2014-10-22 Thread Donald Williams
Hello,

 I believe what you are looking for is known as "latency".   The time from
when an I/O is submitted until the acknowledgement is received.

 Different OS's have different tools for monitoring this.   Windows has
PERFMON.EXE,  or task manager.   ESXi has built in Performance monitoring
as well.

 I have not tried this myself, but looked interesting for various OS's.

https://code.google.com/p/ioping/

ioping

This tool lets you monitor I/O latency in real time. It shows disk latency
in the same way as ping shows network latency.

Usage: ioping [-LABCDWRq] [-c count] [-w deadline] [-pP period] [-i interval]
   [-s size] [-S wsize] [-o offset] directory|file|device
ioping -h | -v

  -c <count>      stop after <count> requests
  -w <deadline>   stop after <deadline>
  -p <period>     print raw statistics for every <period> requests
  -P <interval>   print raw statistics for every <interval> in time
  -i <interval>   interval between requests (1s)
  -s <size>       request size (4k)
  -S <wsize>      working set size (1m)
  -o <offset>     working set offset (0)
  -k  keep and reuse temporary working file
  -L  use sequential operations (includes -s 256k)
  -A  use asynchronous I/O
  -C  use cached I/O
  -D  use direct I/O
  -W  use write I/O *DANGEROUS*
  -R  seek rate test (same as -q -i 0 -w 3 -S 64m)
  -B  print final statistics in raw format
  -q  suppress human-readable output
  -h  display this message and exit
  -v  display version and exit
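
A typical invocation against an iSCSI-backed mount point (the path is an
example; printed here rather than run so the sketch stays side-effect free):

```shell
mnt=/mnt/eql-vol                 # example mount point on the iSCSI volume
echo "ioping -c 10 $mnt"         # ten 4k random probes, like ping for disk
echo "ioping -c 10 -L $mnt"      # sequential 256k reads instead
```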



On Mon, Oct 20, 2014 at 10:57 PM, Lei Xue  wrote:

> Can you catch the network packets via tcpdump or wireshark?
> You can use Wireshark to investigate them after you get the network data,
> you will find the "time" field.
>
> Thanks,
> -Lei
>
> 2014-10-21 10:44 GMT+08:00 木木夕 :
>
>> Hey all,
>> I built an iSCSI Enterprise Target and use the open-iscsi initiator to
>> log in, which works well.
>> Now I want to know the response time of the target. Where can I get that,
>> or in which function should I add something in IET to calculate the
>> response time?
>> For example:
>> A read request comes to the target from the initiator, then the target
>> accepts it and returns a response to the initiator. Can I get the time
>> from request to response?
>> Any answer will be appreciated, thank you!
>> best regards!
>>
>>
>
>



Re: twice mounted LUN with iscsiadm login ?

2014-10-22 Thread Donald Williams
Hello,

 The issue isn't on the Equallogic side; you have "Interface" AKA "IFACE"
files configured in open-iscsi.  These do discovery out each defined
physical interface, and log in as well.

They are located in the open-iscsi directory in the "iface" subdirectory.

Logging in to [*iface: libvirt-iface-33d9c275*, target:
iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-f437080d41b5434f-13-1-orig,
portal: 192.168.99.55,3260] (multiple)

Logging in to *[iface: libvirt-iface-08bf216d*, target:
iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-f437080d41b5434f-13-1-orig,
portal: 192.168.99.55,3260] (multiple)

 If you have multipathd installed, it will create an MPIO device allowing
you to use multiple paths to reach the SAN, providing better performance
and redundancy.  Typically /dev/mapper/mpath0, /dev/mapper/mpath1, etc.

 This Tech Report covers how to configure MPIO for RedHat.   The basic
process is the same for SuSE.   Files might be located in different
directories.

http://en.community.dell.com/cfs-filesystemfile/__key/communityserver-components-postattachments/00-19-86-14-22/TR1062_2D00_LinuxDeploy_2D00_v1-2.pdf


 Regards,

 Don



On Wed, Oct 22, 2014 at 12:49 PM, Paul Koning 
wrote:

>
> On Oct 22, 2014, at 12:39 PM, Laurent HENRY 
> wrote:
>
> > Hello,
> >  I am noticing a strange behavior with one of my Linux servers (openSUSE
> > 13.1 with open-iscsi).
> >
> > While connecting manually to an iSCSI node, my LUN is getting mounted
> > twice.
> > This produces trouble on my iSCSI disk array, which refuses a multihost
> > request
> > (and I don't want to allow it either).
> >
> > I think the reason is my disk array (Dell EqualLogic) announces every LUN
> > twice; I don't know why.
>
> That would certainly be unexpected, if it is really doing that.
> >
> > Here is an example:
> >
> > # iscsiadm -m discovery -t sendtargets -p 192.168.99.55|grep 13.1-orig
> >
> > 192.168.99.55:3260,1 iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-
> > f437080d41b5434f-13-1-orig
> > 192.168.99.55:3260,1 iqn.2001-05.com.equallogic:4-52aed6-a9251e6aa-
> > f437080d41b5434f-13-1-orig
>
> That means either it was announced twice, or it was announced once but the
> Linux end turned it into two records.  To determine which is correct, you
> might use Wireshark or equivalent to capture the iSCSI discovery session.
> I expect you'll see the target announced once; if so then the issue is at
> the initiator end.  If you do see the target announced twice, that would be
> something to investigate more in detail because the system isn't supposed
> to do that.
>
> paul
>
>
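The capture Paul suggests could look something like the following; the interface name and capture filename are assumptions, and the command is printed here rather than executed:

```shell
# Hypothetical capture command -- interface and filename are assumptions.
# Run it while repeating the sendtargets discovery, then open the .pcap
# in Wireshark and count the SendTargets response records.
CAP_CMD='tcpdump -i eth0 -s 0 -w iscsi-discovery.pcap tcp port 3260'
echo "$CAP_CMD"
```

If the capture shows the target announced once but `iscsiadm` reports it twice, the duplication is happening on the initiator side.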



Re: amazonAWS/VTL with open-iscsi on EC2 /dev/by-path/ not present

2014-09-14 Thread Donald Williams
What are you using for an iSCSI target?   IETD doesn't support tape drives.


Try STGT  http://stgt.sourceforge.net/

They claim to support VTL.  I haven't tried it myself.   I believe there
are one or two other Linux iSCSI targets that support tape drives as well.

Regards,
Don


On Thu, Sep 11, 2014 at 2:38 PM, Kelley, Jared  wrote:

>  I'm trying to set up a VTL on some AWS EC2 instances and have done so
> successfully; however, the SCSI devices (virtual tape drives) show up as
> connected on the client, but nothing is listed under /dev/st*, nor is
> there any /dev/by-path directory on the client.
>
>  Has anyone experienced this and what might be the issue?
>
>  Thanks in advance
>
>  Jk
>
>



Re: Best way to create multiple TCP flows on 10 Gbps link

2014-08-25 Thread Donald Williams
I find that raising some of the default Linux network parameters helps with
throughput.


Edit /etc/sysctl.conf, then apply the settings with: sysctl -p

# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144

I also find that increasing the disk read ahead really helps with
sequential read loads.

blockdev -setra <value> <device>

 e.g. #blockdev -setra 4096 /dev/sda  (or /dev/mapper/mpath1)
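Note that blockdev settings do not survive a reboot. One way to make the read-ahead persistent is a udev rule; this is a sketch, with the rule filename, device match, and value as assumptions (2048 KB corresponds to a 4096-sector setra). It stages the rule locally for review rather than writing to /etc/udev/rules.d/:

```shell
# Hypothetical udev rule -- staged locally; would normally be installed
# as /etc/udev/rules.d/99-readahead.rules. Matches device-mapper nodes.
cat > ./99-readahead.rules.example <<'EOF'
ACTION=="add|change", KERNEL=="dm-*", ATTR{bdi/read_ahead_kb}="2048"
EOF
grep -q 'read_ahead_kb' ./99-readahead.rules.example && echo "rule staged"
```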


Also some small tweaks to iscsid.conf can yield some improvements.

#/etc/iscsi/iscsid.conf

node.session.cmds_max = 1024   <--- default is 128
node.session.queue_depth = 128   <--- default is
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072   <--- try 64K-512K
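These iscsid.conf values only take effect on sessions created after the change (log out and back in, or restart iscsid); negotiated values for an active session can be inspected with `iscsiadm -m session -P 3`. One way to stage and sanity-check the snippet before merging it by hand into /etc/iscsi/iscsid.conf (the values simply mirror the suggestions above, not universal recommendations):

```shell
# Stage the tuning snippet locally; merge into /etc/iscsi/iscsid.conf
# by hand after review. Values mirror the suggestions above.
cat > ./iscsid.conf.snippet <<'EOF'
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
EOF
grep -c '^node\.' ./iscsid.conf.snippet   # prints 3
```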





On Mon, Aug 25, 2014 at 2:58 PM, Mark Lehrer  wrote:

> I am trying to achieve 10 Gbps in my single initiator/single target
>>> env. (open-iscsi and IET)
>>>
>>
> On a semi-related note, are there any good guides out there to tuning
> Linux for maximum single-socket performance?  On my 40 gigabit setup, I
> seem to hit a wall around 3 gigabits when doing a single TCP socket.  To go
> far above that I need to do multipath, initiator-side RAID, or RDMA.
>
> Thanks,
> Mark
>
>
>



Re: iscsi initiator connecting to 2 address

2014-06-27 Thread Donald Williams
I would say the target is on the same host, since the other connection is
to 127.0.0.1.

You could specify an interface for iSCSI.

#iscsiadm -m iface -I eth1 -o new
New interface eth1 added

Now update the interface name
#iscsiadm -m iface -I eth1 --op=update -n iface.net_ifacename -v eth1
eth1 updated

Do a rediscovery; now only eth1 will be used for iSCSI (assuming that's
the NIC you want to use).
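The rediscovery step might look like this; the portal address is an assumption, and the commands are printed here rather than executed:

```shell
# Hypothetical portal address -- substitute your array's group IP.
PORTAL=192.168.99.55
DISCOVER="iscsiadm -m discovery -t sendtargets -p ${PORTAL}:3260"
LOGIN="iscsiadm -m node --login"
printf '%s\n%s\n' "$DISCOVER" "$LOGIN"
```

After login, `iscsiadm -m session` should show only sessions bound to the interface you configured.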




On Fri, Jun 27, 2014 at 3:06 AM, Anish Bhatt  wrote:

> If you're running iscsi-target then discovery will return a portal for
> every single IP configured on the machine. Are you running both on the same
> machine ?
> -Anish
> 
> From: open-iscsi@googlegroups.com [open-iscsi@googlegroups.com] on behalf
> of fel...@usto.re [fel...@usto.re]
> Sent: Wednesday, June 18, 2014 1:40 PM
> To: open-iscsi@googlegroups.com
> Subject: iscsi initiator connecting to 2 address
>
> Hi, I am connecting to the iscsi-target with open-iscsi. I don't know why
> my iscsi-initiator connect to 2 IP address on the same target name. Maybe
> is some wrong configuration on /etc/hosts.
>
> Does anyone have any idea? Thanks in advance!
>
> root@dell-felipe:~# iscsiadm -m node --login 10.0.1.37
> Logging in to [iface: default, target: iqn.2014-06.ustore-test:disk1,
> portal: 127.0.0.1,3260] (multiple)
> Logging in to [iface: default, target: iqn.2014-06.ustore-test:disk1,
> portal: 10.0.1.37,3260] (multiple)
> Login to [iface: default, target: iqn.2014-06.ustore-test:disk1, portal:
> 127.0.0.1,3260] successful.
> Login to [iface: default, target: iqn.2014-06.ustore-test:disk1, portal:
> 10.0.1.37,3260] successful.
>
>
>
>
>
>



Re: Very strange behavior - 4 devices but only one nic getting traffic?

2014-04-11 Thread Donald Williams
Configuring Multipath Connections:
To create the multiple logins needed for Linux device-mapper to work, you
need to create an 'interface' file for each GbE interface you wish to use
to connect to the array.
Use the following commands to create the interface files for MPIO. (Select
the appropriate Ethernet interfaces you're using.)
#iscsiadm -m iface -I eth0 -o new
New interface eth0 added

Repeat for the other interface, i.e. eth1

#iscsiadm -m iface -I eth1 -o new
New interface eth1 added

Now update the interface name for each port:

#iscsiadm -m iface -I eth0 --op=update -n iface.net_ifacename -v eth0
eth0 updated
#iscsiadm -m iface -I eth1 --op=update -n iface.net_ifacename -v eth1
eth1 updated

Here's an example of what the /var/lib/iscsi/ifaces/eth0 looks like:

iface.iscsi_ifacename = eth0
iface.net_ifacename = eth0
iface.hwaddress = default
iface.transport_name = tcp


Additionally,  I found that with OVMS under load the initiator can get
starved out.   I found this helps prevent that.

Edit /etc/sysctl.conf, then update the system using #sysctl -p
# Increase network buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 262144
net.core.rmem_default = 262144

Also make sure that these are set in /etc/sysctl.conf or only one NIC will
be used.

* Linux ARP behavior:
If multiple interfaces are on the same subnet, Linux's ARP behavior will
result in a reset (RST) being sent to the array from the client. The
following changes need to be made to /etc/sysctl.conf to work around this
behavior:
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2


* Linux Netfilter:
Per: https://bugzilla.redhat.com/show_bug.cgi?id=493226, it appears that
netfilter will mark packets as being invalid under heavy load.
To work around this bug, the following needs to be added to
/etc/sysctl.conf:
net.ipv4.netfilter.ip_conntrack_tcp_be_liberal=1
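The ARP and netfilter settings above can be staged together as one fragment and checked before merging into /etc/sysctl.conf (the filename here is an assumption; the settings are the ones listed above):

```shell
# Staged locally for review; would normally be merged into
# /etc/sysctl.conf (or dropped into /etc/sysctl.d/) and applied
# with: sysctl -p
cat > ./iscsi-net.conf.example <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.netfilter.ip_conntrack_tcp_be_liberal = 1
EOF
grep -c '^net\.' ./iscsi-net.conf.example   # prints 3
```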

Also, if you upgrade EQL firmware to 7.0.x, make sure that readsector0 is
NOT used for path_checker in /etc/multipath.conf.   A bug in multipathd
will result in protocol errors shortly after login.   Change path_checker
to "tur" or "directio" instead.   This is covered in the EQL v7.x release
notes.
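The corresponding multipath.conf change might look like this; the device stanza is a sketch, so match the vendor/product strings for your own array before using it:

```shell
# Hypothetical device stanza -- staged locally; merge into
# /etc/multipath.conf after review. Avoids the readsector0 checker.
cat > ./eql-pathchecker.example <<'EOF'
device {
    vendor       "EQLOGIC"
    product      "100E-00"
    path_checker tur
}
EOF
grep -q 'readsector0' ./eql-pathchecker.example || echo "readsector0 not used"
```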





On Fri, Apr 11, 2014 at 12:56 PM, Mike Christie wrote:

> On 04/11/2014 08:04 AM, Eric Raskin wrote:
> > Thanks.  I see that I missed that part of the setup. I guess I thought
> > that the name of the connection was what matched it to the device.
> >
> > I assume that "netdev" is the ifconfig device name (eth0, eth1, etc.),
> > right?
> >
>
> Yes.
>
>



Re: Automatic update of files between a group of hosts

2013-04-22 Thread Donald Williams
Hello,

 No.  Open-iSCSI can't fix this issue.  Your problem is that you don't
have a cluster file system in place; each server believes it owns the disk
exclusively.   If you keep this setup as-is you will corrupt the data,
that's for sure.

 Easiest thing is to connect with one server, then share out that disk over
NFS.
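A sketch of that NFS approach, with the mount point, client subnet, and export options all as assumptions:

```shell
# Hypothetical export line -- staged locally; would go in /etc/exports
# on the one server that mounts the iSCSI volume, followed by
# exportfs -ra to activate it.
cat > ./exports.example <<'EOF'
/mnt/iscsi-vol 192.168.1.0/24(rw,sync,no_subtree_check)
EOF
grep -q 'no_subtree_check' ./exports.example && echo "export staged"
```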

 Regards,

 Don



On Fri, Apr 19, 2013 at 8:22 AM, Antonio López  wrote:

> Hello,
>
> I have a virtual disk shared by a group of two hosts. Both are connecting
> to the virtual disk correctly, but if I write a file from one host to the
> virtual disk the other host cannot see it until the virtual disk is
> unmounted and mounted again. It happens the same from the other host. Both
> hosts are running linux, one is Ubuntu 10.04 and the other is Fedora 17.
>
> I am concerned that this behavious can lead to data corruption, as one
> machine is not aware of changes made on the virtual disk by the other
> machine.
>
> Is it possible to fix this issue by tuning any open-iscsi parameter?
>
> Thanks in advance
>
> Antonio
>
>





Re: problems connecting to Equallogic 6100

2013-04-04 Thread Donald Williams
Re: Startup.  The problem was resolved in 12.10, so you should be fine.

Re: Backup.  Are you connecting to the live volume?   If you connect two
servers to the same volume without a cluster file system in place, you will
end up with a corrupted volume; updates on one server won't be seen by the
other.    Using a snapshot is the safest way to back up an EQL volume.

Don


On Thu, Apr 4, 2013 at 11:13 AM, Elvar  wrote:

>
> On 3/29/2013 2:16 PM, Donald Williams wrote:
>
>> Hello,
>>
>>  What version of FW is on the EQL array?   Any error messages on the EQL
>> array events?
>>
>>  I work for Dell/Equallogic and I have no issues connecting up ubuntu
>> 12.10 to EQL.The only recent issue I've seen with 12.x is 12.04 the
>> startup scripts don't login to iSCSI after a reboot or on start up.
>>
>> # iscsiadm -m session -P 1
>> Target:
>> iqn.2001-05.com.equallogic:4-52aed6-2fdd8cd64-5e6001a5b9d511bd-ubuntu-test-vol1
>> Current Portal: 172.23.10.242:3260,1
>> Persistent Portal: 172.23.10.240:3260,1
>> **
>> Interface:
>> **
>> Iface Name: eth1
>> Iface Transport: tcp
>> Iface Initiatorname: iqn.1993-08.org.debian:01:8ab9cf5340f0
>> Iface IPaddress: 172.23.74.186
>> Iface HWaddress: 
>> Iface Netdev: eth1
>> SID: 1
>> iSCSI Connection State: LOGGED IN
>> iSCSI Session State: LOGGED_IN
>> Internal iscsid Session State: NO CHANGE
>> Current Portal: 172.23.10.204:3260,1
>> Persistent Portal: 172.23.10.240:3260,1
>> **
>> Interface:
>> **
>> Iface Name: eth0
>> Iface Transport: tcp
>> Iface Initiatorname: iqn.1993-08.org.debian:01:8ab9cf5340f0
>> Iface IPaddress: 172.23.71.231
>> Iface HWaddress: 
>> Iface Netdev: eth0
>> SID: 2
>> iSCSI Connection State: LOGGED IN
>> iSCSI Session State: LOGGED_IN
>> Internal iscsid Session State: NO CHANGE
>>
>> I'm curious why you don't want MPIO?
>>
>> You'll need to modify /etc/sysctl.conf for MPIO
>>
>> # ARP connection mods
>>
>> net.ipv4.conf.all.arp_ignore=1
>> net.ipv4.conf.all.arp_announce=2
>> net.ipv4.conf.all.rp_filter=2
>>
>> For ACLs I typically use the initiator name from the ubuntu server.
>> I.e. iqn.1993-08.org.debian:01:8ab9cf5340f0
>>
>> Have you opened a case with Dell? While not a 'supported' OS, we will
>> do best effort to assist you.
>>
>>  Can you dump the following?
>>
>> $sudo iscsiadm -m node
>>
>> $sudo iscsiadm -m session
>>
>> $sudo iscsiadm -m iface
>>
>> $sudo iscsiadm -m discovery
>>
>> Do you have a NIC that is on the same subnet as the array or are you
>> routing to the SAN?
>>
>> NICs on the same subnet is the preferred way.
>>
>> Regards,
>>
>> Don
>>
>>
> Hey Don, I did finally get it connected. The reason I don't need mpio is
> because I'm just connecting to this volume to run backups with Duplicity.
> Performance and redundancy isn't really a big deal and I only have one link
> connected to that linux box at this time anyway. How do I get around the
> startup script issues connecting on boot?
>
> Thanks!
>
>
>





Re: How to recover iscsi connection if LAN switch goes down/failed for 1 hour or above

2013-04-04 Thread Donald Williams
Hello,

What you were seeing was stale mount info.   No "disk" is going to
survive an hour disconnected; even a short disconnect will cause a SCSI
disk error, and Linux will remount the volume read-only.

 Best practice for iSCSI connectivity is redundant switches, with MPIO
configured to use both paths when they are up (for better performance) and
to re-route I/O when a path fails.

 Regards,

 Don



On Thu, Apr 4, 2013 at 6:11 AM, parveen kumar wrote:

> I tried to find out for how many seconds/hours an iSCSI session tries to
> re-establish the old connection if the LAN disconnects between host and
> storage (or, put another way, for how long iSCSI will keep retrying the
> login to the target).
>
>
> In my case I had mounted a 1TB partition on my CentOS release 5.3
> (Final) host/server from storage.
>
> I am using that 1TB volume and am able to write data to it.
>
> Today I switched off the LAN switch for 1 hour, and after 1 hour the
> iSCSI partition still appears mounted on my host/server and still shows
> in "fdisk -l", but I am not able to write data to it.
>
> =
> Below error when i switch off the LAN switch (the output of
> /var/log/messages)
> =
> Apr  4 13:51:15 master kernel: ping timeout of 5 secs expired, last rx
> 4559435595, last ping 4559440595, now 4559445595
> Apr  4 13:51:15 master kernel:  connection1:0: iscsi: detected conn error
> (1011)
> Apr  4 13:51:16 master iscsid: Kernel reported iSCSI connection 1:0 error
> (1011) state (3)
> Apr  4 13:51:18 master /usr/sbin/gmetad[4843]: data_thread() got not
> answer from any [cluster] datasource
> Apr  4 13:51:58 master last message repeated 3 times
> Apr  4 13:53:13 master last message repeated 4 times
> Apr  4 13:53:16 master kernel:  session1: iscsi: session recovery timed
> out after 120 secs
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master last message repeated 17 times
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x28 is not queued (8)
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x28 is not queued (8)
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master last message repeated 3 times
> Apr  4 13:53:16 master kernel: sd 14:0:0:6: SCSI error: return code =
> 0x0001
> Apr  4 13:53:16 master kernel: end_request: I/O error, dev sde, sector
> 531628095
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 66453504
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master last message repeated 2 times
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x28 is not queued (8)
> Apr  4 13:53:16 master last message repeated 4 times
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master kernel: sd 14:0:0:6: SCSI error: return code =
> 0x0001
> Apr  4 13:53:16 master kernel: end_request: I/O error, dev sde, sector
> 475005183
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 59375640
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master kernel: sd 14:0:0:6: SCSI error: return code =
> 0x0001
> Apr  4 13:53:16 master kernel: end_request: I/O error, dev sde, sector
> 475005175
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 59375639
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
> Apr  4 13:53:16 master kernel: iscsi: cmd 0x2a is not queued (8)
> Apr  4 13:53:16 master kernel: sd 14:0:0:6: SCSI error: return code =
> 0x0001
> Apr  4 13:53:16 master kernel: end_request: I/O error, dev sde, sector
> 475005135
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 59375634
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 59375635
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
> Apr  4 13:53:16 master kernel: Buffer I/O error on device sde1, logical
> block 59375636
> Apr  4 13:53:16 master kernel: lost page write due to I/O error on sde1
>
> ===
> "dmesg" output when switch off the LAN switch
> ===
> iscsi: cmd 0x28 is not queued (6)
> iscsi: cmd 0x28 is not queued (6)
> sd 14:0:0:6: SCSI error: return code = 0x0001
> end_request: I/O error, dev sde, sector 633227487
> sd 14:0:0:6: SCSI error: return code = 0x0001
> sd 14:0:0:6: rejecting I/O to device being removed
> end_request: I/O error, dev sde, sector 633228031
>
>
>
> Is there any parameter in the iSCSI config files that needs to be set so
> the session relogins/re-establishes automatically whenever the LAN switch
> comes up?

Re: problems connecting to Equallogic 6100

2013-03-29 Thread Donald Williams
Hello,

 What version of FW is on the EQL array?   Any error messages on the EQL
array events?

 I work for Dell/Equallogic and I have no issues connecting up ubuntu 12.10
to EQL.The only recent issue I've seen with 12.x is 12.04 the startup
scripts don't login to iSCSI after a reboot or on start up.

# iscsiadm -m session -P 1
Target:
iqn.2001-05.com.equallogic:4-52aed6-2fdd8cd64-5e6001a5b9d511bd-ubuntu-test-vol1
Current Portal: 172.23.10.242:3260,1
Persistent Portal: 172.23.10.240:3260,1
**
Interface:
**
Iface Name: eth1
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:8ab9cf5340f0
Iface IPaddress: 172.23.74.186
Iface HWaddress: 
Iface Netdev: eth1
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 172.23.10.204:3260,1
Persistent Portal: 172.23.10.240:3260,1
**
Interface:
**
Iface Name: eth0
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:8ab9cf5340f0
Iface IPaddress: 172.23.71.231
Iface HWaddress: 
Iface Netdev: eth0
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE

I'm curious why you don't want MPIO?

You'll need to modify /etc/sysctl.conf for MPIO

# ARP connection mods

net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.all.rp_filter=2


For ACLs I typically use the initiator name from the ubuntu server.   I.e.
iqn.1993-08.org.debian:01:8ab9cf5340f0

Have you opened a case with Dell? While not a 'supported' OS, we will do
best effort to assist you.

 Can you dump the following?

$sudo iscsiadm -m node

$sudo iscsiadm -m session

$sudo iscsiadm -m iface

$sudo iscsiadm -m discovery

Do you have a NIC that is on the same subnet as the array or are you
routing to the SAN?

NICs on the same subnet is the preferred way.

Regards,

Don





On Thu, Mar 28, 2013 at 11:51 AM, Elvar  wrote:

>
> Hey all,
>
> I'm having no luck connecting open-iscsi from Ubuntu 12.10 to an
> Equallogic 6100XV. I've tried allowing unauthenticated access from the
> subnet the ubuntu box is on and also setting up a CHAP account but no
> matter what I do I constantly get the following error...
>
> iscsiadm -m node --login
>
> "iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI
> login failure)"
>
> The discovery portion seems to work fine but not the mounting portion. I
> do not need multipath at all for this scenario.
>
> Any help on this would be greatly appreciated!!
>
> Kind regards,
> Elvar
>
>





Re: Multipath or not ?

2013-03-11 Thread Donald Williams
The iSCSI layer doesn't do MPIO; it's handled at the host level.   Depending
on the tape system, you could benefit performance-wise from using MPIO.
 Open-iSCSI can be configured to initiate multiple sessions to a single
target.  Once those volumes are presented to the host with the same
serial number, the host will create an MPIO device encompassing those
paths.   multipathd allows you to customize how that device is managed.




On Sat, Mar 9, 2013 at 6:58 AM, Guillaume  wrote:

> Hello,
>
> I have a virtual tape library and an iSCSI SAN. All have multiple Ethernet
> interfaces, which will result in multiple sessions to the targets. So I
> wonder whether I must use dm-multipath or not. Does the current iSCSI
> layer handle the multiple paths to an IQN or not?
>
> Another question about the output of "iscsiadm -m session": the lines of
> output begin with @IP:3260,n, where n is an integer. Is this number a
> priority level in some way, or does it only distinguish multiple sessions
> to the same IQN?
>
>
> Regards,
> Guillaume
>
>
>
>
>





Re: iscsi connection errors

2012-10-08 Thread Donald Williams
Hello,

When you see these errors, look for an INFO event from the EQL array such
as "Load balancing request" or "Volume membership has changed".   If so,
then as Paul mentioned, these events should not be considered errors.

 Re: connection load balancing (CLB). This should NOT normally be disabled;
doing so can result in reduced performance, where very busy sessions on the
same physical ports have to share that single port while other ports may be
available to better balance out the load.

If you have more than three members in a pool, then as blocks are balanced
between members, logout requests will still occur, and those cannot be
disabled.

Regards,

 Don

On Mon, Oct 8, 2012 at 2:18 AM, squadra  wrote:

> Hello Paul,
>
> We thought so, too; that's why we disabled connection load balancing
> on the EQL array, without success so far.
>
> -- juergen
>
> Am Freitag, 5. Oktober 2012 23:29:53 UTC+2 schrieb (unbekannt):
>
>>
>> On Oct 5, 2012, at 3:39 PM, squadra wrote:
>>
>> > Hi,
>> >
>> > from time to time i see connection errors like this to our equallogic
>> 6100xv / 4100e stack.
>> >
>> > Oct  5 21:22:20 xxx kernel: connection4:0: detected conn error (1020)
>> > Oct  5 21:22:21 xxx iscsid: Kernel reported iSCSI connection 4:0 error
>> (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
>> > Oct  5 21:22:23 xxx kernel: connection4:0: detected conn error (1020)
>> > Oct  5 21:22:24 xxx iscsid: connection4:0 is operational after recovery
>> (1 attempts)
>> >
>> > any ideas what this error code means?
>> >
>> > cheers,
>> >
>> > Juergen
>>
>> I wonder if that is a connection close due to an async logout request
>> from the array, which is what it does if it wants to move a connection to
>> another port.
>>
>> If yes, then that's a bad message from the iscsi kernel code: an async
>> logout is not an error and logging it with "error" in the text is
>> incorrect.
>>
>> paul
>>




Re: access SAN through win7

2012-10-01 Thread Donald Williams
If I understand you correctly, you want to connect multiple servers
directly to the same SAN volume.

If you do not have a cluster file system in place, you will absolutely
corrupt that volume.  Each connected server believes it owns the volume
exclusively, so updates on one server aren't seen by the others.

The best solution is to share out that volume using CIFS over the network.
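A sketch of that CIFS approach using Samba; the share name, path, and options are assumptions:

```shell
# Hypothetical smb.conf share stanza -- staged locally; would be added
# to /etc/samba/smb.conf on the one server that mounts the SAN volume.
cat > ./smb-share.example <<'EOF'
[sanvol]
    path = /mnt/iscsi-vol
    read only = no
    browseable = yes
EOF
grep -q '^\[sanvol\]' ./smb-share.example && echo "share staged"
```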

 Regards,

 Don


On Mon, Oct 1, 2012 at 10:44 AM, Anam  wrote:

> I have a SAN established with Windows Server 2008; the SAN hard drives
> are online and showing capacity. But now I want to access these drives
> from Windows 7.
> The SAN is directly connected to the server, one cable from the SAN goes
> to a network switch, and the Windows 7 computer is also connected to the
> network switch.
>




Re: problem in sharing disk through iscsi

2012-08-30 Thread Donald Williams
Mike is (of course) correct.

 When just the iSCSI connection is in place, each host believes it owns the
volume exclusively.  So when you write to a volume like that, you don't
first (or periodically) re-read the volume for updates.  Why would you?  As
far as the host is concerned nothing has changed.

 When you use a file sharing protocol, that issue is well understood and
handled by the file server.

 There are some clustering filesystems out there.  Open source GFS is one
of them.   The commercial solutions are quite expensive, running thousands
of dollars per node.

 The best and cheapest option is to mount the volume on one server and
share it via the network to the others.

 Don

On Thu, Aug 30, 2012 at 3:59 PM, Mike Christie  wrote:

> On 08/29/2012 12:17 AM, shivraj dongawe wrote:
> > I am new to iscsi.
> > I had one problem.
> > I am using a NetbSD target for getting storage through iscsi protocol.
> > I want to access this storage from two remote machines(primary and
> > secondary).
> > From one machine(primary) i have mounted this storage in read right mode
> > and from another
> > remote machine(secondary) i have mounted this storage in read-only mode.
> > The  problems i am facing are
> > 1. When i write something from primary side then it is not visible at
> > secondary side
> > until and unless i remount the exported storage.
> > 2. When writing is on from primary side and meanwhile if i try to read
> the
> > exported disk from secondary side
> >  then i get the corruption in the file which i was transferring from
> > primary side.
> >
>
> I think you want some sort of clustering software. open-iscsi is just a
> iscsi initiator. It does not handle any of those types of issues.
>




Re: about the mtu

2012-07-17 Thread Donald Williams
Hello,

 MTU size is set on the NICs that iSCSI is using, not by open-iscsi itself.
So where that change needs to be made depends on which Linux distro you run.

 Don
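For example, on a Red Hat-style distro the MTU can be made persistent in the NIC's ifcfg file (the interface name is illustrative; SLES keeps these under /etc/sysconfig/network/ instead, and the switch and target must support the same MTU):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- hypothetical iSCSI NIC,
# matching the 8000-byte MTU asked about above.
DEVICE=eth1
ONBOOT=yes
MTU=8000
```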


On Tue, Jul 17, 2012 at 6:14 AM, jiliang  wrote:

> Hi, dear all
> I want to change mtu to 8000. But I don't know how about the
> open-iscsi support. Any reply will be helpful, thanks.
>
>
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
>  storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---
>




Re: How many S/W iSCSI Initiators on same machine?

2009-08-04 Thread Donald Williams
I don't know if there's a way to set unique initiator names for each NIC.  A
quick scan of the config file didn't show anything.  I *believe* iscsid has
the initiator name so it's a global parameter.
 Why do you want unique names for each initiator?   What do you think it
will gain you?

 -don
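For what it's worth, later open-iscsi releases added a per-interface iface.initiatorname field in the iface record; whether your version honors it is worth checking before relying on it. A hypothetical iface file:

```
# /var/lib/iscsi/ifaces/eth1 -- hypothetical iface record; support for
# iface.initiatorname varies by open-iscsi version.
iface.iscsi_ifacename = eth1
iface.net_ifacename = eth1
iface.transport_name = tcp
iface.initiatorname = iqn.1986-03.com.hp:Ethernet1
```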

On Tue, Aug 4, 2009 at 4:16 AM, Rainer Bläs
wrote:

>
> Thanks for your answer!
> Yes, by using "#iscsiadm -m iface -I ethN, N=1...6" we can have 6
> iSCSI sessions.
>
> But now there is the question "HOWTO assign an initiator name for EACH
> session"?
> For one iSCSI session it can be found in the /etc/iscsi/
> initiatorname.iscsi File:
>
> InitiatorName=iqn.1986-03.com.hp:Ethernet1
>
> Can it be done by adding these 5 entries
>
> InitiatorName=iqn.1986-03.com.hp:Ethernet2
> InitiatorName=iqn.1986-03.com.hp:Ethernet3
> InitiatorName=iqn.1986-03.com.hp:Ethernet4
> InitiatorName=iqn.1986-03.com.hp:Ethernet5
> InitiatorName=iqn.1986-03.com.hp:Ethernet6
>
> or which syntax has to be used?
>
> Rainer
>
>
>
>
>
>
> On Aug 3, 10:36 pm, Donald Williams  wrote:
> > Hello,
> > I'm not sure what your question really is.  Yes, you can have 6x GbE
> > interfaces on different subnets and run iSCSI over them. What target are
> you
> > using?   Typically, your iSCSI SAN is on one subnet.  It avoids the need
> to
> > do IP routing.   Which adds latency and can reduce performance.
> >
> >  -don
> >
> > On Fri, Jul 31, 2009 at 6:03 AM, Rainer Bläs
> > wrote:
> >
> >
> >
> > > Dear all,
> >
> > > we are running a SLES 10SP2 system with 6 physical Ethernet ports.
> > > For instance, is it possible to have 6 iSCSI initiators on this system
> > > when each IP of these six ports are belonging to 6 different (Sub)
> > > Lans?
> >
> > > THX, Rainer
> >
> > --
> >
> > Marie von Ebner-Eschenbach<
> http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac>
> > - "Even a stopped clock is right twice a day."
>
> >
>


-- 

Pablo Picasso<http://www.brainyquote.com/quotes/authors/p/pablo_picasso.html>
- "Computers are useless. They can only give you answers."




Re: How many S/W iSCSI Initiators on same machine?

2009-08-03 Thread Donald Williams
Hello,
I'm not sure what your question really is.  Yes, you can have 6x GbE
interfaces on different subnets and run iSCSI over them. What target are you
using?   Typically, your iSCSI SAN is on one subnet.  It avoids the need to
do IP routing.   Which adds latency and can reduce performance.

 -don

On Fri, Jul 31, 2009 at 6:03 AM, Rainer Bläs
wrote:

>
> Dear all,
>
> we are running a SLES 10SP2 system with 6 physical Ethernet ports.
> For instance, is it possible to have 6 iSCSI initiators on this system
> when each IP of these six ports are belonging to 6 different (Sub)
> Lans?
>
> THX, Rainer
>
> >
>


-- 

Marie von Ebner-Eschenbach
- "Even a stopped clock is right twice a day."




Re: same volume two different hosts

2009-08-03 Thread Donald Williams
Hello Nick,
 While an iSCSI SAN will not have any problem allowing multiple hosts to
connect to the same volume, what it doesn't do is protect you from the
resultant corruption.   Each host will believe it owns that volume
exclusively.  Writes from one host won't be seen by the other host.   They
will eventually overwrite blocks written by the other and corrupt the file
allocation table.  The solution is to use a global or clustering filesystem
that will manage the cache and writes.  Filesystems like GFS, Polyserve,
IBRIX, Tivoli, etc...   GFS is open source, the others are commercial
filesystems that run many thousands of dollars.

 -don


On Fri, Jul 31, 2009 at 7:12 PM, nick  wrote:

>
> Hi All,
>
> I would like to knw if i can present same volume to two hosts?
>
> I am using Stonefly Voyager as SAN and the host would be Xen.
>
> Thanks in Advance
> Nick
>
> >
>




Re: iscsiadm -m iface + routing

2009-07-30 Thread Donald Williams
Hello Julian,
 The EQL MIBs are available from their website, http://www.equallogic.com,
under Downloads->Firmware->Release Version.  The MIBs are tied to the
firmware version the array is running.   Currently, Equallogic arrays don't
support SMI-S.

 Equallogic has a bundled monitoring program called SANHQ.  Others have
used programs like Cacti for monitoring.  Lastly, you can create an
MRTG-compatible configuration file from the array CLI.

 Regards,
 -don


On Wed, Jul 29, 2009 at 1:15 PM, julian thomas  wrote:

> Hello,
>
> Could you please send the mib,snmpwalk output of EqualLogic.If it supports
> SMI-s could you post the mof files for the same.Or is there any other
> way(CLI Interface)to monitor equallogic...?
>
> On Tue, Jul 28, 2009 at 11:42 PM, Mike Christie wrote:
>
>>
>> Ulrich Windl wrote:
>> > On 28 Jul 2009 at 0:22, Moi meme wrote:
>> >
>> >> Hello,
>> >>
>> >> I am using a DELL Equallogic at work and I use a SLES10 SP2 (was
>> >> SP1 before last week-end), are they known problems with the SLES SP2 ?
>> >> I didn't notice any problem since the upgrade !
>> >
>> > Same here: Only when the network has a problem, I see _many_ messages.
>> > Only problem (not iSCSI) is that he links in /dev/disk/by-id are not
>> reliably
>> > populated after boot. This may be a multipath/udev feature. As we boot
>> very
>> > rarely, I did not put much effort into examining this...
>> >
>>
>> What EQL firmware are you using? On the EQL box if you do a "show"
>> command it is in there.
>>
>> I was having a similar problem and updated the firmware to 4.1.4 and it
>> has been working for me now. For some reason the udev scsi_id callout
>> would send some commands to the target, and the target would never
>> respond.
>>
>>
>>
>
> >
>


-- 

Stephen Leacock
- "I detest life-insurance agents: they always argue that I shall some day
die, which is not so."




Re: Wierd problem with current GIT version of open-iscsi

2009-07-15 Thread Donald Williams
Thanks.  It's working on my test VM after running #depmod -a.  However,
after a reboot you have to run #depmod -a again, then re-login to your targets.

 ext3 filesystem, noatime, readahead set to 8K, jumbo frames set to 9000.
It's a few MB/s slower with standard frames.

A quick 'dt' run with one GbE interface.
r...@ubuntu-804-svr:/src/dt.d# ./dt of=/mnt/test/test.dt bs=8k
pattern=0x39393939393939 disable=compare capacity=5G

Write Statistics:
 Total records processed: 655360 @ 8192 bytes/record (8.000 Kbytes)
 Total bytes transferred: 5368709120 (5242880.000 Kbytes, 5120.000
Mbytes)
  Average transfer rates: 101181853 bytes/sec, 98810.403 Kbytes/sec
 Number I/O's per second: 12351.300
  Total passes completed: 0/1
   Total errors detected: 0/1
  Total elapsed time: 00m53.06s
   Total system time: 00m16.00s
 Total user time: 00m00.13s
   Starting time: Wed Jul 15 17:52:35 2009
 Ending time: Wed Jul 15 17:53:28 2009


Read Statistics:
 Total records processed: 655360 @ 8192 bytes/record (8.000 Kbytes)
 Total bytes transferred: 5368709120 (5242880.000 Kbytes, 5120.000
Mbytes)
  Average transfer rates: 109386901 bytes/sec, 106823.146 Kbytes/sec
 Number I/O's per second: 13352.893
  Total passes completed: 1/1
   Total errors detected: 0/1
  Total elapsed time: 00m49.08s
   Total system time: 00m05.67s
 Total user time: 00m00.03s
   Starting time: Wed Jul 15 17:52:35 2009
 Ending time: Wed Jul 15 17:54:17 2009


Total Statistics:
 Output device/file name: /mnt/test/test.dt (device type=regular)
 Type of I/O's performed: sequential (forward)
Data pattern string used: '0x39393939393939'
   Data pattern read/written: 0x39337830 (data compare disabled)
 Total records processed: 1310720 @ 8192 bytes/record (8.000 Kbytes)
 Total bytes transferred: 10737418240 (10485760.000 Kbytes, 10240.000
Mbytes)
  Average transfer rates: 105124518 bytes/sec, 102660.662 Kbytes/sec
 Number I/O's per second: 12832.583
  Total passes completed: 1/1
   Total errors detected: 0/1
  Total elapsed time: 01m42.14s
   Total system time: 00m21.67s
 Total user time: 00m00.16s
   Starting time: Wed Jul 15 17:52:35 2009
 Ending time: Wed Jul 15 17:54:17 2009
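As a sanity check, the reported write-pass rate can be re-derived from the totals above (5368709120 bytes over the 53.06s elapsed time); this is just illustrative arithmetic:

```shell
# Recompute dt's write-pass average transfer rate from its own totals.
awk 'BEGIN { printf "%.0f bytes/sec\n", 5368709120 / 53.06 }'
# prints 101181853 bytes/sec, matching the "Average transfer rates" line
```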

On Wed, Jul 15, 2009 at 4:40 PM, Donald Williams 
wrote:
>
> I'll try running depmod and see if that helps.
> I'm not great at hacking Makefiles.   I'll see what I can do.
>  I'm going to use a test VM this time though.  Not my server.  :-D
> Thx.
>
> On Wed, Jul 15, 2009 at 4:25 PM, Mike Christie 
wrote:
>>
>> Donald Williams wrote:
>> > Mike,
>> >
>> > I decided to try the current repository version (as of 3PM, 7/15).
 Compiled
>> > and installed w/o issue.  Rebooted and I couldn't connect to my EQL
targets.
>> > The login process complained "no iSCSI driver".So I installed
2.0-871
>> > from the website tar ball.   Rebooted, same problem.  Tried an older
kernel,
>> > 2.6.24-23, came up fine.   Installed (stupidly) the git version on that
>> > kernel, reboot, couldn't log in either.  Again, trying to downgrade
failed.
>> >  Installed an even older kernel, 2.6.24-22 installed 871 from the tar
ball,
>> > that worked fine.   Removed the modified kernels and re-installed one,
>> > 2.6.24-24, then installed 871 from tar ball, works fine.
>> >  I'm running ubuntu 8.04 LTS.  2.6.24-24-generic kernel right from
ubuntu.
>> >
>> > Is this anything you've seen?
>> >
>> >  What I see in the log that's different is, non-working configs had
these
>> > errors.
>> >
>> > Jul 15 15:17:56 ietd-tape kernel: [   73.760376] Loading iSCSI
transport
>> > class v2.0-871.
>> > Jul 15 15:17:56 ietd-tape kernel: [   73.789017] iscsi_tcp: Unknown
symbol
>> > iscsi_tcp_segment_done
>>
>> I think you or the Makefile just needs to run depmod.
>>
>> There is a new iscsi module, so there is now
>>
>> iscsi_tcp
>> libiscsi_tcp
>> libiscsi
>> scsi_transport_iscsi
>>
>> The above error log messages indicated that you are using a newer
>> iscsi_tcp module but the libiscsi_tcp is not getting loaded.
>>
>> I think we have been getting lucky and since the older modules were the
>> same as the distro they got loaded right. Now with the new module we
>> should probably add a depmod in the Makefile somewhere. Do you by any
>> chance know how to hack Makefiles?
>>
>> >>
>




Re: Wierd problem with current GIT version of open-iscsi

2009-07-15 Thread Donald Williams
I'll try running depmod and see if that helps.
I'm not great at hacking Makefiles.   I'll see what I can do.

 I'm going to use a test VM this time though.  Not my server.  :-D

Thx.

On Wed, Jul 15, 2009 at 4:25 PM, Mike Christie  wrote:

>
> Donald Williams wrote:
> > Mike,
> >
> > I decided to try the current repository version (as of 3PM, 7/15).
>  Compiled
> > and installed w/o issue.  Rebooted and I couldn't connect to my EQL
> targets.
> > The login process complained "no iSCSI driver".So I installed 2.0-871
> > from the website tar ball.   Rebooted, same problem.  Tried an older
> kernel,
> > 2.6.24-23, came up fine.   Installed (stupidly) the git version on that
> > kernel, reboot, couldn't log in either.  Again, trying to downgrade
> failed.
> >  Installed an even older kernel, 2.6.24-22 installed 871 from the tar
> ball,
> > that worked fine.   Removed the modified kernels and re-installed one,
> > 2.6.24-24, then installed 871 from tar ball, works fine.
> >  I'm running ubuntu 8.04 LTS.  2.6.24-24-generic kernel right from
> ubuntu.
> >
> > Is this anything you've seen?
> >
> >  What I see in the log that's different is, non-working configs had these
> > errors.
> >
> > Jul 15 15:17:56 ietd-tape kernel: [   73.760376] Loading iSCSI transport
> > class v2.0-871.
> > Jul 15 15:17:56 ietd-tape kernel: [   73.789017] iscsi_tcp: Unknown
> symbol
> > iscsi_tcp_segment_done
>
> I think you or the Makefile just needs to run depmod.
>
> There is a new iscsi module, so there is now
>
> iscsi_tcp
> libiscsi_tcp
> libiscsi
> scsi_transport_iscsi
>
> The above error log messages indicated that you are using a newer
> iscsi_tcp module but the libiscsi_tcp is not getting loaded.
>
> I think we have been getting lucky and since the older modules were the
> same as the distro they got loaded right. Now with the new module we
> should probably add a depmod in the Makefile somewhere. Do you by any
> chance know how to hack Makefiles?
>
> >
>
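The suggested "add a depmod in the Makefile somewhere" might look something like this (the target name and layout are illustrative; the real open-iscsi Makefile may differ):

```makefile
# Hypothetical kernel-module install rule that runs depmod afterwards, so
# the new iscsi_tcp -> libiscsi_tcp symbol dependencies resolve at modprobe.
install_kernel:
	$(MAKE) -C kernel install
	depmod -a
```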




Wierd problem with current GIT version of open-iscsi

2009-07-15 Thread Donald Williams
Mike,

I decided to try the current repository version (as of 3PM, 7/15).  Compiled
and installed w/o issue.  Rebooted and I couldn't connect to my EQL targets.
The login process complained "no iSCSI driver".So I installed 2.0-871
from the website tar ball.   Rebooted, same problem.  Tried an older kernel,
2.6.24-23, came up fine.   Installed (stupidly) the git version on that
kernel, reboot, couldn't log in either.  Again, trying to downgrade failed.
 Installed an even older kernel, 2.6.24-22 installed 871 from the tar ball,
that worked fine.   Removed the modified kernels and re-installed one,
2.6.24-24, then installed 871 from tar ball, works fine.
 I'm running ubuntu 8.04 LTS.  2.6.24-24-generic kernel right from ubuntu.

Is this anything you've seen?

 What I see in the log that's different is, non-working configs had these
errors.

Jul 15 15:17:56 ietd-tape kernel: [   73.760376] Loading iSCSI transport
class v2.0-871.
Jul 15 15:17:56 ietd-tape kernel: [   73.789017] iscsi_tcp: Unknown symbol
iscsi_tcp_segment_done
Jul 15 15:17:56 ietd-tape kernel: [   73.789230] iscsi_tcp: Unknown symbol
iscsi_segment_seek_sg
Jul 15 15:17:56 ietd-tape kernel: [   73.789357] iscsi_tcp: Unknown symbol
iscsi_tcp_segment_unmap
Jul 15 15:17:56 ietd-tape kernel: [   73.789771] iscsi_tcp: Unknown symbol
iscsi_tcp_hdr_recv_prep
Jul 15 15:17:56 ietd-tape kernel: [   73.789901] iscsi_tcp: Unknown symbol
iscsi_tcp_cleanup_task
Jul 15 15:17:56 ietd-tape kernel: [   73.790156] iscsi_tcp: Unknown symbol
iscsi_tcp_conn_setup
Jul 15 15:17:56 ietd-tape kernel: [   73.790445] iscsi_tcp: Unknown symbol
iscsi_tcp_r2tpool_alloc
Jul 15 15:17:56 ietd-tape kernel: [   73.790757] iscsi_tcp: Unknown symbol
iscsi_tcp_r2tpool_free
Jul 15 15:17:56 ietd-tape kernel: [   73.790953] iscsi_tcp: Unknown symbol
iscsi_tcp_task_xmit
Jul 15 15:17:56 ietd-tape kernel: [   73.791162] iscsi_tcp: Unknown symbol
iscsi_tcp_recv_skb
Jul 15 15:17:56 ietd-tape kernel: [   73.791348] iscsi_tcp: Unknown symbol
iscsi_segment_init_linear
Jul 15 15:17:56 ietd-tape kernel: [   73.791426] iscsi_tcp: Unknown symbol
iscsi_tcp_conn_get_stats
Jul 15 15:17:56 ietd-tape kernel: [   73.791502] iscsi_tcp: Unknown symbol
iscsi_tcp_task_init
Jul 15 15:17:56 ietd-tape kernel: [   73.791703] iscsi_tcp: Unknown symbol
iscsi_tcp_dgst_header
Jul 15 15:17:56 ietd-tape kernel: [   73.791788] iscsi_tcp: Unknown symbol
iscsi_tcp_conn_teardown

 The last boot up which works shows this:

Jul 15 15:46:39 ietd-tape kernel: [   65.114950] Loading iSCSI transport
class v2.0-724.
Jul 15 15:46:39 ietd-tape kernel: [   65.124429] iscsi: registered transport
(tcp)
Jul 15 15:46:39 ietd-tape kernel: [   65.189444] iscsi: registered transport
(iser)
Jul 15 15:46:39 ietd-tape kernel: [   65.555085] scsi6 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:39 ietd-tape kernel: [   65.599434] scsi7 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:39 ietd-tape kernel: [   65.603998] scsi8 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:39 ietd-tape kernel: [   65.608528] scsi9 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:39 ietd-tape kernel: [   65.613056] scsi10 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:39 ietd-tape kernel: [   65.617812] scsi11 : iSCSI Initiator
over TCP/IP
Jul 15 15:46:40 ietd-tape kernel: [   66.453071] scsi 11:0:0:0:
Direct-Access EQLOGIC  100E-00  4.1  PQ: 0 ANSI: 5
Jul 15 15:46:40 ietd-tape kernel: [   66.453459] sd 11:0:0:0: [sdb]
2097162240 512-byte hardware sectors (1073747 MB)
Jul 15 15:46:40 ietd-tape kernel: [   66.455149] scsi 10:0:0:0:
Direct-Access EQLOGIC  100E-00  4.1  PQ: 0 ANSI: 5
Jul 15 15:46:40 ietd-tape kernel: [   66.455757] sd 10:0:0:0: [sdc]
2097162240 512-byte hardware sectors (1073747 MB)
Jul 15 15:46:40 ietd-tape kernel: [   66.456174] sd 11:0:0:0: [sdb] Write
Protect is off
Jul 15 15:46:40 ietd-tape kernel: [   66.456659] scsi 8:0:0:0: Direct-Access
EQLOGIC  100E-00  4.1  PQ: 0 ANSI: 5
Jul 15 15:46:40 ietd-tape kernel: [   66.457095] sd 11:0:0:0: [sdb] Write
cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jul 15 15:46:40 ietd-tape kernel: [   66.457262] sd 8:0:0:0: [sdd] 524298240
512-byte hardware sectors (268441 MB)
Jul 15 15:46:40 ietd-tape kernel: [   66.457926] scsi 7:0:0:0: Direct-Access
EQLOGIC  100E-00  4.1  PQ: 0 ANSI: 5
Jul 15 15:46:40 ietd-tape kernel: [   66.458148] sd 11:0:0:0: [sdb]
2097162240 512-byte hardware sectors (1073747 MB)
Jul 15 15:46:40 ietd-tape kernel: [   66.458539] sd 10:0:0:0: [sdc] Write
Protect is off
Jul 15 15:46:40 ietd-tape kernel: [   66.458612] sd 7:0:0:0: [sde] 419450880
512-byte hardware sectors (214759 MB)
Jul 15 15:46:40 ietd-tape kernel: [   66.459105] scsi 6:0:0:0: Direct-Access
EQLOGIC  100E-00  4.1  PQ: 0 ANSI: 5
Jul 15 15:46:40 ietd-tape kernel: [   66.461939] sd 11:0:0:0: [sdb] Write
Protect is off
Jul 15 15:46:40 ietd-tape kernel: [   66.462420] sd 8:0:0:0: [sdd] Write
Protect is off
Jul 15 15:46:40 ietd-tape kernel: [   66.462651] sd 11:0:0:0: [sdb] W

Re: iscsiadm -m iface + routing

2009-07-14 Thread Donald Williams
Hi Mike,
 Thanks for helping out.  When you say "Dell" fixed something, did you mean
Dell / Equallogic or another part of Dell?  I'm not aware of anything
Dell/EQL submitted but that doesn't mean anything. ;-)

 What I'm seeing from the array logs are resets coming from the initiator.

SATA001:MgmtExec:13-Jul-2009
10:01:32.015700:targetAttr.cc:939:INFO:7.2.15:iSCSI session to target '
192.168.0.31:3260,
iqn.2001-05.com.equallogic:0-8a0906-23416c402-0b3000293644a537-test' f
rom initiator '192.168.0.39:50972, iqn.1994-05.com.redhat:561933d78489' was
closed.
  *  iSCSI initiator connection failure.*
*Reset received on the connection.*

The other messages I see are these.   They haven't happened recently, so
possibly upgrading to the new iSCSI code helped a little.

734030:720998:SATA001:MgmtExec:10-Jul-2009
08:07:53.973826:targetAttr.cc:939:INFO:7.2.15:iSCSI session to target '
192.168.0.31:3260,
iqn.2001-05.com.equallogic:0-8a0906-8a416c402-cbfb3414a3bc-ovm-1-l
un1' from initiator '192.168.0.154:35475,
iqn.1994-05.com.redhat:9693ecdf6c66' was closed.
*iSCSI initiator connection failure.*
*Connection was closed by peer.*

 So, CHAP is getting involved since connections are dropping and then having
to re-login.   Why they are dropping in the first place is a mystery.

  Everything else in the logs looks fine.

On Tue, Jul 14, 2009 at 6:00 PM, Mike Christie  wrote:

>
> On 07/13/2009 09:20 AM, Hoot, Joseph wrote:
> > Mike,
> >
> > Just as an FYI (in case you were most curious about this issue) I've
> > narrowed this issue down to something with CHAP.  On my EqualLogic, if I
> > disable CHAP, I can't reproduce this issue.
> >
> > So I did the following.  I after upgrading to the latest OEL 5.3 release
> > of the iscsi-initiator, I could still reproduce the problem.  Therefore,
> > I did the following:
> >
> > 1) Setup another test environment using the same hardware (physical
> > different hardware, but all same firmware, models, etc..)
> > 2) presented a new test volume from EqualLogic
> > 3) ran the ping test (ping -Ieth2 192.168.0.19&  ping -Ieth3
> > 192.168.0.19).
> > 4) I couldn't reproduce the issue.
> > 5) I checked what the difference were-- CHAP the only difference.
> > 6) So I turned on CHAP authentication to the volume.
> > 7) rm -rf /var/lib/iscsi/nodes/* /var/lib/iscsi/send_targets/*
> > 8) rediscovered targets (after modifying /etc/iscsi/iscsid.conf with
> > CHAP information)
> >
> > node.session.auth.authmethod = CHAP
> > node.session.auth.username = mychapuserhere
> > node.session.auth.password = mypasshere
> >
> > 9) ran the same ping test and was able to get iscsi sessions to fail
> > within 2 minutes.
> > 10) I wanted to prove that CHAP was the issue. So I logout out of all
> > iscsi sessions.
> > 11) I disabled CHAP on the EqualLogic
> > 12) rediscovery targets and re-logged in to the sessions (without CHAP
> > authentication)
> > 13) ran the ping tests and couldn't break it after 30 minutes.
> > 14) added CHAP again and was able to break the sessions within 2
> > minutes.
> >
> > So definitely something odd with CHAP (my guess, either in open-iscsi
> > code or EqualLogic code).  I've asked Roger Lopez, from Dell, to attempt
> > to reproduce this in his lab.  He has EqualLogic and Oracle VM Servers.
> > Oracle developers that I'm working with don't currently have access to
> > an EqualLogic, but they are attempting to reproduce this with their
> > iSCSI equipment as well.  I'm going to setup port mirroring on our
> > switch and run tcpdumps to see what I can get.
> >
>
> This is strange because CHAP does not come into play in the normal IO
> path. When we login we will do authentication with CHAP, but after that
> succeeds it nevers comes up when doing IO. It would only come up again
> when the session/connection is dropped/disconnected and we relogin. For
> the relogin we will do the CHAP authentication again.
>
> Maybe some memory error where chap values are overwriting some other
> memory.
>
> There was one recent fix by Dell, where when using CHAP they could
> segfault iscsid. Here is the updated RPM that I am working on for RHEL
> 5.4 that has the fix:
>
> http://people.redhat.com/mchristi/iscsi/rhel5.4/iscsi-initiator-utils/
>
> >
>




Re: A fundamental question to iSCSI proponents;

2009-07-01 Thread Donald Williams
Hello,
 You are correct that a "SAN" is more than just a protocol.  You can create
a "SAN" with SCSI, FC, infiniband, 10GbE, GbE, etc... Where I used to work,
 Storage Computer, we could create a SAN with 4x 160MB/s SCSI ports.  They
could all connect to the same volume, thus creating a SCSI SAN. You could
also add 16x FC ports and have a SCSI/FC "SAN".

  However, the proof that iSCSI-based SANs are here to stay is that sales of
iSCSI SANs continue to grow while sales of other SAN types are not.  Fibre
Channel as a protocol doesn't have any built-in features that make it
perfect for SAN use.  In fact, some of its design gets in the way, e.g.
replication: since you can't route FC, you end up having to encapsulate it
in TCP/IP to go across a WAN.   Even the FC industry recognizes this; IMHO,
that's a major driving force behind FCoE.

http://wikibon.org/wiki/v/ISCSI_forecast:_DAS_is_odd_man_out

You can argue whether Gartner or whoever is correct in their predictions.

 A SAN must provide scalability and have a feature set that customers want,
at a reasonable cost.   Modern iSCSI SANs can do just that.   In part their
popularity is due to reduced cost to purchase, install and maintain.  iSCSI
SAN vendors recognized this and focused on creating products that are
easier to set up and maintain.   Some traditional SAN products require
dedicated staff, and installation and maintenance costs are very high.  It
can cost thousands of dollars to have your FC SAN reconfigured, or to pay
additional license fees for their advanced features.

There are even free iSCSI targets out there, e.g. ietd, costing little more
than a PC with local storage, NICs and a small GbE switch.

 Is an iSCSI SAN perfect for every customer?   Probably not.  Is the iSCSI
protocol suitable for SANs?   Absolutely.





On Wed, Jun 24, 2009 at 8:59 AM, Peter Chacko wrote:

>
> Hi ,
>
> First of all, please correct me if  you can prove that i need more
> education !!
>
> My question is, is IP-SAN  just a dream ? how far iSCSI reached that
> goal ? Whats features that iSCSI have, which force to call it a SAN ?
> I wish to argue that its just a client-server protocol that access
> block storage over an IP Network. Its just a SAN access protocol, not
> a SAN itself.
>
> Please beat me professionally,  i would appreciate that...:-)
>
> Peter chacko,
> Principal technologist,
> Sciendix information systems Pvt.Ltd,
> Bangalore, India.
>
> >
>




Re: Tuning iscsi read performance with multipath Redhat 5.3 / SLES 10 SP2 / Oracle Linux / Equallogic

2009-04-24 Thread Donald Williams
Have you tried increasing the disk readahead value?
#blockdev --setra X /dev/

 The default is 256.Use --getra to see current setting.

 Setting it too high will probably hurt your database performance, since
database I/O tends to be random, not sequential.

 Don
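If a larger readahead helps, it can be persisted across reboots; one approach is a udev rule (the matching keys and value here are illustrative and worth verifying against your distro's udev version):

```
# Hypothetical udev rule: apply a larger readahead (in 512-byte sectors,
# as with blockdev --setra) to EQLOGIC member disks as they appear.
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR}=="EQLOGIC", \
  RUN+="/sbin/blockdev --setra 1024 /dev/%k"
```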

On Fri, Apr 24, 2009 at 11:07 AM, jnantel  wrote:

>
> You may recall my thread on tuning performance for writes.  Now I am
> attempting to squeeze as much read performance as I can from my
> current setup.  I've read a lot of the previous threads, and there has
> been mention of "miracle" settings that resolved slow reads vs.
> writes.  Unfortunately, most posts describe the effects and not the
> changes.  If I were tuning for read performance in the 4k to 128k
> block range, what would be the best way to go about it?
>
> Observed behavior:
> - Read performance seems to be capped out at 110meg/sec
> - Write performance I get upwards of 190meg/sec
>
> Tuning options I'll be trying:
> block alignment (stride)
> Receiving buffers
> multipath min io changes
> iscsi cmd depth
>
>
> Hardware:
> 2 x Cisco 3750  with 32gig interconnect
> 2 x Dell R900 with 128gig ram and 1 broadcom Quad (5709) and 2 dual
> port intels (pro 1000/MT)
> 2 x Dell Equallogic PS5000XV with 15 x SAS in raid 10 config
>
>
> multipath.conf:
>
> device {
>vendor "EQLOGIC"
>product "100E-00"
>path_grouping_policy multibus
>getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
>features "1 queue_if_no_path"
>path_checker readsector0
>failback immediate
>path_selector "round-robin 0"
>rr_min_io 128
>rr_weight priorities
> }
>
> iscsi settings:
>
> node.tpgt = 1
> node.startup = automatic
> iface.hwaddress = default
> iface.iscsi_ifacename = ieth10
> iface.net_ifacename = eth10
> iface.transport_name = tcp
> node.discovery_address = 10.1.253.10
> node.discovery_port = 3260
> node.discovery_type = send_targets
> node.session.initial_cmdsn = 0
> node.session.initial_login_retry_max = 4
> node.session.cmds_max = 1024
> node.session.queue_depth = 128
> node.session.auth.authmethod = None
> node.session.timeo.replacement_timeout = 120
> node.session.err_timeo.abort_timeout = 15
> node.session.err_timeo.lu_reset_timeout = 30
> node.session.err_timeo.host_reset_timeout = 60
> node.session.iscsi.FastAbort = Yes
> node.session.iscsi.InitialR2T = No
> node.session.iscsi.ImmediateData = Yes
> node.session.iscsi.FirstBurstLength = 262144
> node.session.iscsi.MaxBurstLength = 16776192
> node.session.iscsi.DefaultTime2Retain = 0
> node.session.iscsi.DefaultTime2Wait = 2
> node.session.iscsi.MaxConnections = 1
> node.session.iscsi.MaxOutstandingR2T = 1
> node.session.iscsi.ERL = 0
> node.conn[0].address = 10.1.253.10
> node.conn[0].port = 3260
> node.conn[0].startup = manual
> node.conn[0].tcp.window_size = 524288
> node.conn[0].tcp.type_of_service = 0
> node.conn[0].timeo.logout_timeout = 15
> node.conn[0].timeo.login_timeout = 15
> node.conn[0].timeo.auth_timeout = 45
> node.conn[0].timeo.noop_out_interval = 10
> node.conn[0].timeo.noop_out_timeout = 30
> node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
> node.conn[0].iscsi.HeaderDigest = None,CRC32C
> node.conn[0].iscsi.DataDigest = None
> node.conn[0].iscsi.IFMarker = No
> node.conn[0].iscsi.OFMarker = No
>
> /etc/sysctl.conf
>
> net.core.rmem_default= 65536
> net.core.rmem_max=2097152
> net.core.wmem_default = 65536
> net.core.wmem_max = 262144
> net.ipv4.tcp_mem= 98304 131072 196608
> net.ipv4.tcp_window_scaling=1
>
> #
> # Additional options for Oracle database server
> #ORACLE
> kernel.panic = 2
> kernel.panic_on_oops = 1
> net.ipv4.ip_local_port_range = 1024 65000
> net.core.rmem_default=262144
> net.core.wmem_default=262144
> net.core.rmem_max=524288
> net.core.wmem_max=524288
> fs.aio-max-nr=524288
>
>
> >
>




Re: equallogic - load balancing and xfs

2009-04-13 Thread Donald Williams
You don't want to disable connection load balancing (CLB) in the long run.
 CLB balances IO across the available ports as servers need it.  For
example, during the day your file server or SQL server will be busy; at
night other servers or backups are running.  Without CLB you could end up
stacking connections onto a single interface while other ports sit idle.
Upgrading to 5.3 and enabling MPIO is the best solution.

Don



On Mon, Apr 13, 2009 at 4:56 PM, Konrad Rzeszutek wrote:

>
> >
> > I am not sure how to config the EQL box to not load balance or load
>
> At the array CLI prompt type:
>
> grpparams conn-balancing disable
>
> >
>




Re: Possible bug in open-iSCSI

2009-03-14 Thread Donald Williams
Another test might be to take the filesystem out of the equation.   Use 'dd'
or 'dt' to write out past the 2GB mark and see what error results.
Don
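A minimal sketch of that dd check (the scratch-file path and the sizes are
illustrative; on a real setup you would point `TARGET` at the raw iSCSI
device rather than a file):

```shell
#!/bin/sh
# Write 10 MB starting just past the 2 GB mark, then read it back.
# TARGET is a placeholder scratch file; against a mis-sized LUN the
# write or the read-back would be expected to fail or come up short.
TARGET=${TARGET:-/tmp/dd_probe.img}

# seek/skip are in units of bs (1M here), so 2100 is ~2.1 GB in.
dd if=/dev/zero of="$TARGET" bs=1M seek=2100 count=10 conv=notrunc 2>/dev/null
BYTES=$(dd if="$TARGET" bs=1M skip=2100 count=10 2>/dev/null | wc -c)
echo "read back ${BYTES} bytes"   # 10485760 expected on a healthy device
rm -f "$TARGET"
```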

On Wed, Mar 11, 2009 at 4:42 AM, sushrut shirole
wrote:

> Thanks a lot, I'll let you know about this.
>
> 2009/3/10 Konrad Rzeszutek 
>
>
>> On Tue, Mar 10, 2009 at 12:34:55PM +0530, sushrut shirole wrote:
>> >
>> > Hi All,
>>
>> Hey Sushrut,
>>
>> I am also cross-posting my response to the linux-scsi mailing list
>> in case they have insight in this problem.
>>
>> >   I am currently guiding a few students who are working on the
>> > unh-iSCSI target. We are simulating some faults on the target side:
>> > we are adding an error injection module to unh-iSCSI, so that one can
>> > test how the initiator behaves on a particular error.
>> >   As part of it we inject a fault into the reported LUN size, where
>> > we report a wrong LUN size (suppose a LUN is 2 GB; we report it as
>> > 4 GB). We are using the Microsoft and open-iSCSI initiators. When we
>> > try formatting this LUN from the open-iSCSI initiator, it formats the
>> > LUN. In fact it doesn't give any error when we try to read or lseek
>> > 4 GB of data. But with the Microsoft initiator we get an error when
>> > we try to format this LUN. So is this a bug in open-iSCSI, or is this
>> > a bug in read/lseek?
>>
>> Open-iSCSI does not inspect any SCSI commands (except the AEN, which
>> gets its own special iSCSI PDU header).
>>
>> What you are looking at as potentially faulty is the SCSI mid-layer,
>> the block-device layer, or the target not reporting an error.
>> What the Linux kernel does when you lseek to a location past 2GB and
>> do a read is to transmute the request into a SCSI READ command.
>>
>> That SCSI READ command (you can see what the fields look like when you
>> capture it under ethereal) specifies what sector it wants. Open-iSCSI
>> wraps that SCSI command in its own header and puts it in a TCP packet
>> destined to the target. The target should then report a failure
>> (sending a SCSI SENSE value reporting a problem). Now it might be that
>> the SCSI mid-layer doesn't understand that error condition and passes
>> it on as OK.
>> Or it might be that the target doesn't report a failure and returns
>> garbage/null data.
>>
>> What I would suggest is to do a comparison. Create a test setup where you
>> have a real 4GB LUN, do a lseek/read above 2GB and capture all of that
>> traffic using wireshark/ethereal. Then do the same test but with a 2GB LUN
>> that looks like a 4GB and see what the traffic looks like.
>>
>> If it looks the same, then somehow the target isn't reporting the right
>> error. That implies that when Microsoft formats the disks, it verifies
>> them by rereading the data it wrote and failing if it doesn't match,
>> which might not be what mkfs.ext3 does under Linux (look in the man
>> page to find out). But if you use lseek/read (or just do a dd with the
>> skip argument; see the man page for details) a couple of times on the
>> same sector, you should see different data as well.
>>
>> If the TCP dump looks different, and the target reports an error but
>> the Linux kernel doesn't do anything, then it is time to dig through
>> the code (scsi_error.c) to find out why Linux doesn't see it as an
>> error. Make sure you use the latest kernel though, which as of today
>> is 2.6.29-rc7-git3. And if you do find the problem, post a patch on
>> the linux-scsi mailing list.
>>
>> >
>> > --
>> > Thanks,
>>
>> Hope this lengthy explanation helps in your endeavor.
>>
>>
>>
>
> >
>
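Konrad's repeated-read suggestion can be sketched like this (the scratch
file and its path are placeholders standing in for the LUN; on a real setup
`DEV` would be the block device itself):

```shell
#!/bin/sh
# Read the same region twice and compare checksums.  A target returning
# garbage past its real capacity would give differing reads; a healthy
# device gives identical ones.
DEV=${DEV:-/tmp/read_probe.img}
dd if=/dev/urandom of="$DEV" bs=1M count=4 2>/dev/null   # stand-in data

SUM1=$(dd if="$DEV" bs=1M skip=2 count=1 2>/dev/null | cksum)
SUM2=$(dd if="$DEV" bs=1M skip=2 count=1 2>/dev/null | cksum)
if [ "$SUM1" = "$SUM2" ]; then
    echo "reads consistent"
else
    echo "reads differ"
fi
rm -f "$DEV"
```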




Re: Frequent Connection Errors with Dell Equallogic RAID

2008-12-08 Thread Donald Williams
Hello,
 I would strongly suggest using the code version Mike mentioned.  I use
Ubuntu 8.04/8.10 with that code without issues w/EQL arrays.

Running the older transport kernel module has caused NOOP errors.  The
initiator sends out NOOPs with different SN numbers than what the array is
expecting.   Upgrading the kernel module has resolved this for quite a few
customers.

 If this doesn't resolve the issue,  please open a case with EQL.
 877.887.7337.   They can review the logs and compare it to when the errors
occur.

 Don



On Mon, Dec 8, 2008 at 9:07 PM, Mike Christie <[EMAIL PROTECTED]> wrote:

>
> Mike Christie wrote:
> >
> > If you can play around on the box, it would be helpful to run the
> > open-iscsi tarball release. Build it with
>
> Oh yeah that is here:
> http://www.open-iscsi.org/bits/open-iscsi-2.0-870.1.tar.gz
>
> >
>




Re: Multi-Path Connecitons

2008-07-21 Thread Donald Williams
Are you asking about how to create multiple logins to the same target within
Open-iSCSI?

 In the /etc/iscsi/ifaces directory there is an example file on how to
configure the egress port for iSCSI connections.  By creating a file for
each GbE interface you want to use, you'll have multiple logins that point
to the same target.  (I.e. /dev/sdb, /dev/sdc)  Then you can layer on Linux
dm-multipath to create an MPIO device.

 What iSCSI target are you trying to connect to?

 Regards,

 Don
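A sketch of the per-NIC iface setup Don describes, using iscsiadm (the
interface names and the portal address are placeholders, not values from
this thread):

```shell
# Create one iface record per egress NIC and bind each to its interface.
iscsiadm -m iface -I iface_eth2 --op=new
iscsiadm -m iface -I iface_eth2 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface_eth3 --op=new
iscsiadm -m iface -I iface_eth3 --op=update -n iface.net_ifacename -v eth3

# Discover and log in through both interfaces; the same target then shows
# up twice (e.g. /dev/sdb and /dev/sdc) for dm-multipath to combine.
iscsiadm -m discovery -t sendtargets -p 192.168.0.10 -I iface_eth2 -I iface_eth3
iscsiadm -m node -L all
```

These commands require a live open-iscsi initiator and a reachable target,
so treat them as a template rather than something to run verbatim.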




On Sat, Jul 19, 2008 at 2:36 PM, Mail Man <[EMAIL PROTECTED]> wrote:

>
> I've been looking through the documentation and can find no reference
> for multi-path. Is this feature available under a different name?
> >
>
