[storage-discuss] iscsi loop back supported?

2009-02-06 Thread David Weibel
Is iSCSI loopback, with the initiator and target on the same system,
supported?  I thought that was fixed a while back.  I have a clean snv_104
system installed.  (We wanted to test the iSNS server and figured I would
point the initiator and target at it to make sure things were working.)

The initiator was configured with iSNS enabled and 127.0.0.1 as the
iSNS server.  iscsitadm was used to create a basic 1G target.  Then I
used iscsitadm to enable iSNS and point it at 127.0.0.1.  Both the initiator
and target appear in the iSNS BUI.  So I created a discovery domain (DD)
and discovery-domain set (DDS) with the initiator and target.  I ran
devfsadm -i iscsi, and in the iscsiadm list target output I can see the
local target appearing, but I never get a connection between the two
of them.  The logs show:

...
ISNS SCN register failed
NOTICE: iscsi connection failed to set socket option TCP_NODELAY,
SO_RCVBUF or SO_SNDBUF.
NOTICE: iscsi connection(4) unable to connect to target iqn
...
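
For reference, the setup sequence described above looks roughly like this
(the target label is made up, and exact flag spellings may vary by build):

  # Target side: create a basic 1G target, then register with the local iSNS server
  iscsitadm create target -z 1g sandbox
  iscsitadm modify admin --isns-server 127.0.0.1
  iscsitadm modify admin --isns-access enable

  # Initiator side: point iSNS discovery at the same address, then enumerate
  iscsiadm add isns-server 127.0.0.1
  iscsiadm modify discovery --iSNS enable
  devfsadm -i iscsi
  iscsiadm list target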

Just wondering...


Re: [storage-discuss] Share physical Tape Library via ISCSI

2008-09-09 Thread David Weibel
Is there any chance you saw a message like the one below in your log during
this testing?
  WARNING: iscsi driver unable to online iqn lun #


Re: [storage-discuss] iscsi / CR6497777 : What should become a tunable?

2008-08-25 Thread David Weibel
iSCSI already has lots of parameters that affect transfer sizes, IO flow,
timeouts, etc.  Instead of hiding these new parameters in iscsi.conf,
how about making them vendor-specific parameters and having them
managed through the same management interfaces (i.e. iscsiadm)?  The
iscsi.conf parameters tend to be misused if anything.  The only intended
iscsi.conf parameter was mpxio-disable=yes, since that matched what was
done with scsi_vhci.conf, fcp.conf, and srp.conf.  (The other iscsi.conf
parameters are ones that were slipped in by a developer and didn't really
match the management goals of the rest of the software at the time.
One might consider promoting or removing those if they are really used.
These are related to TCP socket options.)

The other advantage of using iscsiadm is that if you follow the same
management style with those parameters, they could be updated
without requiring a reboot.  This is more work, but it would lead to a
more consistent, usable product.
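
To make the contrast concrete (the iscsi.conf tunable named below is
hypothetical, purely for illustration; the IQN is a placeholder):

  # iscsi.conf style: edit /kernel/drv/iscsi.conf and reboot, e.g.
  #   tcp-some-option=1;    <- hypothetical name; read once at driver load
  #
  # iscsiadm style: visible and changeable at runtime, e.g.
  iscsiadm list target-param -v iqn.1986-03.com.sun:02:example
  iscsiadm modify target-param -B enable iqn.1986-03.com.sun:02:example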
 
 


Re: [storage-discuss] iSCSI Multipathing

2008-08-15 Thread David Weibel
Since you're a NetApp customer you should really leverage your
[EMAIL PROTECTED] account (which you can also get for free).  It contains
great documentation targeted at you.  NetApp has a script that
would have taken care of this for you.  They also have lots of other
good scripts, documents, etc.

http://now.netapp.com/NOW/knowledge/docs/hba/iscsi/solaris/iscsi_sol_sk_10/html/software/setup/hst_cfg.htm#1155654
  - iSCSI Solaris™ Initiator Support Kit 1.0
...
+ Configure multipathing support
+ Set the MPIO parameters using mpxio_set
...
 
 


Re: [storage-discuss] iSCSI Multipathing

2008-08-14 Thread David Weibel
Do not change iscsi.conf.  If you did, change it back.  (Also, setting
mpxio-disable=no is a double negative; it doesn't do anything.)

Add this to your /kernel/drv/scsi_vhci.conf:

device-type-scsi-options-list =
    "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;

Make sure there are 2 spaces after NETAPP.  The syntax is SCSI VID/PID, and a
VID is 8 characters.  Then reboot.
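
A quick sanity check after the reboot, if your release includes the
mpathadm utility (output abridged and illustrative):

  mpathadm list lu
  # each NetApp LUN should appear once, under a single scsi_vhci node,
  # with both iSCSI paths listed beneath it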
 
 


Re: [storage-discuss] Project Proposal: NWS merge with OS/Net

2008-08-12 Thread David Weibel
+1  A sad and happy day.
 
 


Re: [storage-discuss] Also needed for LUN resizing

2008-06-19 Thread David Weibel
> Solaris support on dynamic lun expansion is coming soon. This feature
> basically works on two sides: disk driver [s]sd detects the LUN size
> changes and the disk consumer (i.e. file system) pulls the trigger to
> change disk label when notified.

Here is a white paper by NetApp on how to do this on Solaris without
the official Sun support.  It's clunky but will work now.

http://www.virtual.com/whitepapers/NetApp_Solaris_File.pdf
 
 


Re: [storage-discuss] How to unconfigure/configure LUNs on an iSCSI target properly?

2008-06-18 Thread David Weibel
> 0 - iscsi_lun_offline iscsi_lun_offline - return:1
> 0 - iscsi_lun_offline iscsi_lun_offline - return:1
> 1 - iscsi_lun_offline iscsi_lun_offline - return:1
> 1 - iscsi_lun_offline iscsi_lun_offline - return:1

It looks like you're right.  Assuming arg1 was the return value, a return of 1
would be ISCSI_STATUS_INTERNAL_ERROR.  Based on your initial posting we know
your device paths are not under MPxIO (/dev/rdsk/c2t4d0s2), so you're hitting
the line...

http://src.opensolaris.org/source/xref/nwsc/src/sun_nws/iscsi/src/iscsi_lun.c#631

The fact that it doesn't even try the ndi_devi_offline() without lun_free is
interesting.  I'm close to positive this usage of devfsadm -C used to do what
you want.  If you look at line...

http://src.opensolaris.org/source/xref/nwsc/src/sun_nws/iscsi/src/iscsi_lun.c#22

we can tell that someone has been changing this file recently, at least in
2008, although since NWS doesn't post its revision history there is no way we
can see if this is a regression.  (I will try to get hold of an old friend
for his input on this area.)

I guess the question is still open for Sun to confirm.
 
 


Re: [storage-discuss] Also needed for LUN resizing

2008-06-18 Thread David Weibel
> A proper method is also needed for LUN resizing. After unmapping a LUN
> and remapping a LUN of different size onto the same LUN ID, Solaris keeps
> the old SCSI inquiry information. devfsadm -C and devfsadm don't help for
> this case either.

This has nothing to do with iSCSI.  The problem is a more generic issue with
all disk transports: the partition table created when you format the disk
holds the disk size.  There are a number of blogs and white papers around on
the internet on how to dynamically change the size of the disk with Solaris.
Try googling for something like "solaris dynamic disk expansion".  Also, I
know NetApp has a white paper about this exact topic on their support site
for Solaris; unfortunately I no longer have that link.
 
 


Re: [storage-discuss] How to unconfigure/configure LUNs on an iSCSI target properly?

2008-06-17 Thread David Weibel
If memory serves me, issuing a devfsadm -C should have removed the unmapped
nodes, with two requirements.  One, you didn't still have the LUNs mounted or
in use on the host side.  And two, the target actually removed the LUNs from
its SCSI REPORT LUNS response.

The code path that devfsadm -C should follow is something like the below...

iscsi_tran_bus_config()
 iscsid_config_all()
iscsid_login_tgt()
  iscsi_sess_online()
iscsi_sess_state_machine(N1)
  iscsi_sess_state_logged_in(N1)
iscsi_sess_enumeration()
  iscsi_sess_reportluns()
iscsi_lun_offline()

It's down in iscsi_sess_reportluns() where this gets interesting.  In this
code it should identify the LUNs that have been removed.  Without looking
closely, this code still looks correct.  The identified LUNs are then
offlined and, I thought, removed if not in use.  Although looking at the
code now I don't see any attempt to remove the LUNs, so maybe my memory is
rusty.  There is an individual who knew this area of the code better than
me; I will see if I can still contact him, as he is also no longer at Sun.
Unfortunately, Sun's Network Storage source consolidation doesn't post its
revision history like the core consolidation, so it's impossible for an
outsider to see if there was a regression.

The reason your disabling/enabling discovery works is that it goes through
iscsi_sess_destroy() / iscsi_lun_destroy(), which will remove the LUN
structures in the driver.  Disabling removes them all, and re-enabling just
adds the current ones back.  I can't think of anything specific that would
be causing a problem on your other host, though.  Maybe on the disable the
detach of the sd driver is sending a target reset out.  That could impact
the other initiators, although I wouldn't expect a minute or two of delay.
There is nothing specific in the iSCSI initiator that would be causing that
issue.  I suspect it's a target driver detach issuing a reset.  Maybe
someone else could confirm that?

(This was just a really really fast examination of the current code.)
 
 


Re: [storage-discuss] How to unconfigure/configure LUNs on an iSCSI target properly?

2008-06-17 Thread David Weibel
(Remembered this on my drive home...)

I just remembered another aspect of the problem.  LUN structs can still be
present but in a removed, no-longer-present state.  These should not be
reported back, though.  See the resulting IOCTL code in...

  http://cvs.opensolaris.org/source/xref/nwsc/src/sun_nws/iscsi/src/iscsi.c#3038
- If not ISCSI_LUN_STATE_ONLINE they are skipped
  http://cvs.opensolaris.org/source/xref/nwsc/src/sun_nws/iscsi/src/iscsi.c#3115

That leads us back to whether your LUNs are even getting marked offline.  You
might want to try using dtrace to confirm this on the entry/exit of
iscsi_lun_offline().  I'm rusty at dtrace so this might be slightly wrong, and
I don't currently have a system to test it.  But try putting this in a file
with execute permissions, run it, and then issue the devfsadm -C test case...

#!/usr/sbin/dtrace -s
fbt:iscsi:iscsi_lun_offline:entry
{
   printf("iscsi_lun_offline - lun:%d", ((iscsi_lun_t *)arg1)->lun_num);
}

fbt:iscsi:iscsi_lun_offline:return
{
   printf("iscsi_lun_offline - return:%d", arg1);
}

(NOTE: I'm not sure whether arg0 or arg1 is the return value in dtrace.  I
think it's arg1 for some reason.)

... It should tell you whether the LUNs you expect to go away are having
iscsi_lun_offline() called, and whether it was successful.
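
If the script is saved as, say, lun_offline.d (any filename works), the test
run looks like:

  chmod +x lun_offline.d
  ./lun_offline.d &
  devfsadm -C    # in another shell; watch for the lun/return lines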
 
 


Re: [storage-discuss] creating ZFS mirror over iSCSI between to DELL MD3000i arrays

2008-06-09 Thread David Weibel
I was hoping someone at Sun would respond with a more official answer.  To
give you an answer in the meantime: what you're describing should work fine
and was tested a number of years ago.  A long time ago there were some
issues with ZFS when iSCSI LUs went offline, although I think those issues
have since been resolved.
 
 


Re: [storage-discuss] iSCSI docs

2008-05-13 Thread David Weibel
> All seems ok but after I put offline then unmap a LUN the output shown was...

Based on the output it looks like the target is still online and the LUNs
are still mapped.  What exactly did you do to offline and unmap the LUNs?
Was that a target-side or host-side action?

Here is some documentation...
http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&q=iscsiadm&a=view
http://docs.sun.com/app/docs/doc/816-5166/iscsiadm-1m?a=view
 
 


[storage-discuss] StorageStop Updates

2008-04-02 Thread David Weibel
I just wanted to say "Nice job!" on all the recent content at StorageStop.
Keep up the great support for the OpenSolaris community.
http://blogs.sun.com/storage/
 
 


Re: [storage-discuss] iscsi connection aborted.

2008-02-15 Thread David Weibel
Based on the trace it's definitely the target terminating the login request.
This is seen by a TCP-level disconnect (FIN) at frame 12.  I would expect the
target doesn't like something in the (operational) Login Request (frame 10).
Does the target debug log have some sort of error that points to the field it
doesn't like, or maybe a core file?  Nothing is jumping out at me.  (Off
topic: those initiator-requested transfer sizes (4K and 8K) are ridiculously
small, but legal.  That should hurt the initiator's performance some.)

Not knowing the target code, I would guess you're making it to around line 526
in iscsi_login.c and encountering a problem somewhere after there (just making
wild guesses).  It looks like most failure paths would issue
send_login_reject(), and since you don't see that, you're in some corner case
or the target crashed?  Just checking.
  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/iscsi/iscsitgtd/iscsi_login.c

Again, these are just guesses and suggestions.  Check for a target debug log
and/or core file.
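
If it helps anyone reproduce this kind of analysis, a capture like the one
being discussed can be taken with snoop (bge0 is just an example interface):

  snoop -d bge0 -o /tmp/iscsi-login.cap port 3260
  snoop -i /tmp/iscsi-login.cap -v | less    # frame-by-frame decode; look for the FIN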
 
 


Re: [storage-discuss] Project proposal: ISCSI Extensions for RDMA

2007-12-03 Thread David Weibel
+1 (Very Cool!)
 
 


[storage-discuss] iSCSI initiator does not support the HP EVA controller device

2007-10-25 Thread David Weibel
Whoever is working on the following bug ...

  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6568240

I believe the solution will be adding the below to /etc/driver_aliases.

  sgen "scsiclass,0c"
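
Rather than editing /etc/driver_aliases by hand, update_drv can add the
alias (the doubled quoting keeps the inner quotes intact through the shell):

  update_drv -a -i '"scsiclass,0c"' sgen
  devfsadm -i sgen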
 
 


Re: [storage-discuss] iSCSI initiator does not support the HP EVA controller device

2007-10-25 Thread David Weibel
It's nice to see a full description in a bug.  It makes it easier for the
OpenSolaris community to present suggestions.  99% of the bugs just say "see
comments".
 
 


Re: [storage-discuss] iscsi between 10u4 and 10u4 not working for me.

2007-10-02 Thread David Weibel
> The target host is multihomed, so I'm not surprised to see two entries,
> one for each NIC, but why the 0.0.0.0 entries?

I saw a similar issue about 2-3 years ago with an ATTO bridge.  I think Sun
has an initiator bug open to have it ignore 0.0.0.0 addresses.  At the time
it was causing problems similar to what you're seeing, and the workaround
was to use static configuration.

In the case of ATTO, I couldn't convince them not to return the bogus
0.0.0.0 addresses.  They were using this information as a placeholder
in their management software.

Maybe the multihoming is confusing the Solaris target, causing it to
return 0.0.0.0 addresses.  Rick or the code might have a better idea on
that one?

The current iSCSI initiator will only use the last (due to backward sort
order) SendTargets address unless you get into the -c options, although
I don't think you want to do that in this case.  There is an open RFE for
the iSCSI initiator to implement a feature Microsoft coined "portal
hopping": if the initiator can't open a SendTargets address, it should
hop to the next address in the list and open that one.  This would
work around the 0.0.0.0 address in a more useful fashion.

(I will add the SendTargets ordering of connect to my debug notes.)
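
For anyone who needs the static-configuration workaround in the meantime, it
looks roughly like this (the target name and portal below are placeholders):

  iscsiadm modify discovery -t disable
  iscsiadm add static-config iqn.1992-08.com.netapp:sn.12345,192.168.1.20:3260
  iscsiadm modify discovery -s enable
  devfsadm -i iscsi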
 
 


Re: [storage-discuss] iSCSI troubleshooting

2007-09-07 Thread David Weibel
> Thanks for all the advice. Configuring as a static target worked just fine
> in the end

I wish I had seen this sooner.  I'm starting to wonder if there is something
really busted in the Solaris iSCSI initiator's SendTargets handling with the
latest patches.  I recently finished helping another guy in Europe:

http://groups.google.com/group/comp.unix.solaris/browse_thread/thread/e14b16b24d94c64d/a4b77e511c758b73?lnk=gstq=+Pb+to+discover+my+iSCSI+targets+from+a+Solaris+10+Netra+X1+++rnum=2#a4b77e511c758b73

Anyway, here are some random notes I composed on configuring the Solaris
iSCSI initiator for the first time.  They are a little different from the
Solaris Administration Guide's documentation, which is more optimistic in
its assumption that things will work; these notes assume things won't.  I
will add to them when I have more time.

http://weibeltech.com/?p=146
 
 


Re: [storage-discuss] Safely Breaking Down iSCSI Connections

2007-07-02 Thread David Weibel
I don't understand the point of a new command.  You have...

  iscsiadm modify discovery -[tsi] disable

... which will en/disable the initiator.  I think two things need to be
addressed before adding another option that does the exact same thing as an
existing one (the per-method forms are spelled out after the two points
below).

1) The system shouldn't panic when you disable discovery.  It sounds
like Sun has done a good job addressing that problem in an upcoming
build, based on Jeff's post.

2) What is the real reason you're en/disabling discovery?  If you can
give Sun a better idea of those use cases, maybe they can fix the
root cause of your problem instead of giving you a workaround.
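
For reference, the existing option spelled out per discovery method:

  iscsiadm modify discovery -t disable   # SendTargets
  iscsiadm modify discovery -s disable   # static configuration
  iscsiadm modify discovery -i disable   # iSNS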

Just one opinion.

(In relation to the SMF iSCSI initiator service: I think it would be
better just to get rid of it altogether.  It doesn't do much of anything
useful and just causes user confusion.  I should have pushed for that a
couple of years ago.)
 
 


[storage-discuss] Re: Low-latency switching IP (IP-FC) w/ standard FC-switch?

2007-06-25 Thread David Weibel
Another alternative to FC for low-latency IP on Solaris is Myricom's
products.  They have Solaris performance results posted on their web site
if you're interested.
  http://www.myri.com/Myri-10G/10gbe_solutions.html
(I do not work for Myricom or any related company.)
 
 


[storage-discuss] Re: Performance expectations of iscsi targets?

2007-06-19 Thread David Weibel
Here are some things to look at.  (NOTE: I'm definitely not a ZFS expert.)

The most common low-hanging fruit, in iSCSI performance tweaks, is in
the networking stack setup and the following tweak:
http://www.opensolaris.org/jive/thread.jspa?messageID=95566#95566

1) In your testing you're using dd.  If I'm not mistaken that is a
single-threaded IO tool, i.e. only one outstanding IO at a time, so you are
dependent on the latency of that single command.  iSCSI is known to have
higher latency than direct-attached storage or fibre channel.  You have a
good block size of 64K, but I'm not sure how ZFS is breaking up that size.
You might check your performance with a different tool like iometer or
vdbench (see the sketch below).
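
To make that concrete, something like the following keeps only one IO in
flight, while vdbench can keep many (device names are placeholders):

  # dd: one outstanding 64K IO at a time; throughput is bounded by per-IO latency
  dd if=/dev/zero of=/dev/rdsk/c2t1d0s2 bs=64k count=16384

  # vdbench parameter file: same 64K size, but 8 concurrent threads
  sd=sd1,lun=/dev/rdsk/c2t1d0s2,threads=8
  wd=wd1,sd=sd1,xfersize=64k,rdpct=0
  rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1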

2) Are there errors?  Check /var/adm/messages for any type of errors
during your IO run.  Check netstat for network errors and collisions.
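
For example:

  grep -i iscsi /var/adm/messages
  netstat -i    # non-zero Ierrs/Oerrs/Collis columns are a red flag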

3) Simplify the problem.  Take ZFS out of the picture.  I'm not trying to
point a finger at ZFS, but you could get a lot of useful data by taking it
out of the equation.  Instead, run your iSCSI target off a ramdisk or tmpfs
(/tmp), UFS, or SVM (preferred in that order).  Do you see a performance
improvement to the level you expect?
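
A sketch of the ramdisk variant (names and sizes are arbitrary):

  ramdiskadm -a iscsitest 512m
  iscsitadm create target -b /dev/ramdisk/iscsitest ramtarget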

 A couple of years ago I helped someone out with an NFS-iSCSI-ZFS
 performance problem.  When we dug into the problem, all the layers had
 their own issues, and when put together they all magnified each other's
 problems.  In that case NFS has its problem that each file write has to
 write the data, metadata, and other junk instead of a single write.
 That's great for data sharing but bad for performance.  ZFS then had its
 problem of needing to protect each one of the NFS writes with a SCSI
 SYNCHRONIZE CACHE command, which is great but again adds a lot of
 overhead.  Then you layer this on iSCSI.  iSCSI is a great technology,
 but in general its downfall is latency; it's best used with applications
 that don't need low latency.  It can handle either low or high bandwidth.
 There are ways to lower iSCSI's latency, but that's a much larger topic
 and most of those approaches are not supported with Solaris.

4) Evaluate performance at the different layers.
  a) It sounds like you already did the test of ZFS to direct storage vs.
     iSCSI.
  b) You might want to quickly do a sanity check of the network
     performance with netperf.
  c) iSCSI is a pretty dumb layer.  It doesn't add much loss from what I
     have seen; the loss tends to be in the other layers around it, !OR!
     iSCSI errors or poor configuration.  Poor configuration:
     i) iSCSI CmdSN windowing.  This is a pretty old post but it still
        pretty much applies:
        http://blogs.sun.com/dweibel/entry/iscsi_kernel_visibilty
        You want to make sure the MaxCmdSN window is not sitting at less
        than the current CmdSN; otherwise the initiator is just sitting
        around doing nothing.  I have also seen targets with a CmdSN
        window of 1, which is pretty sad (unless it's a tape device).
    ii) iSCSI command windows.  You can check the initiator's pending and
        active queue counts to see if you have a large pending queue.
        That's a sign of a problem; the pending queue should run very low,
        if not at 0.  I thought I blogged about getting at that data a
        couple of years ago, but I can't find the post now.
   iii) Window size.
        http://www.opensolaris.org/jive/thread.jspa?messageID=95566#95566

In general those are the most common iSCSI performance issues, excluding
errors.
 
 


[storage-discuss] Re: iSCSI Related panic

2007-06-18 Thread David Weibel
Ben,

  This bug looks like a match for your problem...

http://bugs.opensolaris.org/view_bug.do?bug_id=6550424

  ... which is a duplicate of ...

http://bugs.opensolaris.org/view_bug.do?bug_id=6480294

  ... based on the limited bug description available.  It looks like an MPxIO
engineer is working on the problem.  Also, it looks like there are related
FCP bugs with MPxIO.  If you only have a single path to the storage, one
possible workaround would just be to disable MPxIO for iSCSI: check
/kernel/drv/iscsi.conf for the 'mpxio-disable' setting.
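
Concretely, assuming a single path, the edit looks like this (a reboot is
needed for the driver to pick it up):

  # /kernel/drv/iscsi.conf
  mpxio-disable="yes";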
 
 


[storage-discuss] iTunes: Sun Microsystems - Storage News and Training

2007-06-13 Thread David Weibel
I was just wondering whatever happened to the "iTunes: Sun Microsystems -
Storage News and Training" (aka Data Management / Solutions) podcast.  Was
this cut with the RIFs last year?
 
 


[storage-discuss] Re: iTunes: Sun Microsystems - Storage News and Training

2007-06-13 Thread David Weibel
It does look like the [EMAIL PROTECTED] and Sun Developer Network (SDN)
podcasts, along with those of many other divisions at Sun, are still alive
and well (via iTunes).
 
 


[storage-discuss] Re: iscsi difference between sparc and x86

2007-06-12 Thread David Weibel
I would predict this has more to do with the disk driver (sd) than iSCSI.
x86 and SPARC sd both use a different Sun VTOC, and if I remember right the
way the values for these were generated was slightly different.  Maybe one
of the sd experts can chime in here and correct me.

Anyway, couldn't you just use format -> type -> Auto configure to set the
geometry?  This option should be used with all RAID-type devices.  You
should really only hand-set geometries these days for very old disks or to
work around bugs.
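
Roughly, the interactive sequence is (disk and menu numbers vary per
system):

  # format
  Specify disk (enter its number): 1
  format> type
          0. Auto configure
          ...
  Specify disk type (enter its number): 0
  format> label
  format> quit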
 
 


[storage-discuss] Re: iscsi difference between sparc and x86

2007-06-12 Thread David Weibel
I thought putting a volume under ZFS automatically converted the label to
EFI?  If that's still true you can skip format altogether and just use ZFS,
if that's your intention.
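
In other words (pool and device names are placeholders):

  zpool create tank c2t1d0    # ZFS itself writes an EFI label on the whole disk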
 
 


[storage-discuss] Re: Resize iSCSI LUN on client

2007-06-12 Thread David Weibel
Similar to the approach you suggested: if you have a non-ZFS LUN, you can
follow the NetApp document below for a little more detail.  The steps are
not NetApp-specific.

  http://www.virtual.com/whitepapers/NetApp_Solaris_File.pdf
 
 


[storage-discuss] Re: [zfs-discuss] ZFS + ISCSI + LINUX QUESTIONS

2007-06-06 Thread David Weibel
Why don't you just turn on iSCSI header and data digests?  They add an
additional CRC32 to all the iSCSI transactions.  IPsec in general is still
not supported with most targets, and the setup is fairly painful.
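
On the Solaris initiator the digests are per-target parameters; enabling
them looks like this (the IQN is a placeholder):

  iscsiadm modify target-param -h CRC32 -d CRC32 iqn.1986-03.com.sun:02:example
  iscsiadm list target-param -v iqn.1986-03.com.sun:02:example   # verify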
 
 


[storage-discuss] Re: iscsi target compatibility with Mac initiators

2007-05-29 Thread David Weibel
I think you need to restate this posting; it makes very little sense.  What
doesn't work with what?  And what is going wrong?
 
 


[storage-discuss] Re: Iscsi target and initiator on svn 55b x86_32

2007-03-21 Thread David Weibel
This is slightly off topic.  I noticed it in your logs...

> on the initiator side :
> # svcs iscsi_initiator
> STATE STIME FMRI
> online 15:19:06 svc:/network/iscsi_initiator:default

The iSCSI initiator SMF service is pretty pointless.  It has/had two goals:
1) to work around a devfs problem (this bug was resolved in S10U1), and
2) to do reverse name lookup of domain names returned as part of a
SendTargets response.  I have yet to see an iSCSI target return a domain
name in the SendTargets response, although it is defined in the standard.

Unless you know one of these two topics is a problem, I recommend you leave
this service disabled, as it is/was by default.
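
Disabling it is a one-liner:

  svcadm disable svc:/network/iscsi_initiator:default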
 
 


[storage-discuss] Re: Multiple connections to a target from the same initiator

2007-02-05 Thread David Weibel
[Off Topic...Is loopback access fixed yet?]

[clip from nwsmith's post]
# iscsiadm list target -vS
Target: iqn.1986-03.com.sun:02:1b993d5b-ca55-e822-f4d1-faa99b949707.sandbox
  TPGT: 1
  ISID: 402a0001  
...
  Discovery Method: Static  
...
Target: iqn.1986-03.com.sun:02:1b993d5b-ca55-e822-f4d1-faa99b949707.sandbox
  TPGT: 1
  ISID: 402a  
...
  Discovery Method: Static  
...
[/clip]

You have two different ISIDs listed.  This will cause two sessions to be
created per the iSCSI specification, leading to two connections.  There are
a couple of different use cases for how you could have gotten here.  Also,
this can depend on the software release level you're running; if you're
using the latest software, that eliminates one use case.  (That case being:
if you use send-targets and static discovery to discover the same target at
the same time, it can lead to problems like you're describing.  This was
outlined as a bad thing to do in the Solaris Administrator Guide and also
NetApp's documentation.  That problem has since been resolved to my
knowledge.  No, I don't remember which builds.  Sorry.)

The other remaining use cases I can think of are as follows.

1) The target returned two different ISIDs for the same target name.  (I
don't think the Solaris target will do this unless specifically configured
that way.)
2) The user used static configuration to specifically tell the initiator
about two ISIDs.  (With the latest iSCSI software it will only use discovery
information from the first discovery source, static vs. sendtargets, so you
won't be able to create the old conflict.  But you can still create a manual
one by hand with static config.)

It's interesting to note the listed discovery method is static
configuration.  So I would guess this is either that old bug (unpatched,
with mixed discovery methods) or you mistakenly misconfigured the initiator
with two static mappings and different ISIDs.  I would guess you didn't do
that, so verify your software levels are current with SunSolve or
opensolaris.org.

---

I think this is more to Rick's question.  In both cases, if you are using
the Solaris target, then since it supports SCSI T10 TPGS both paths should
have been merged by MPxIO, unless you did one of the two following things.

1) As Rick pointed out, set the following on the target side:
<disable-tpgs>true</disable-tpgs>
2) Set the following on the initiator side in iscsi.conf:
mpxio-disable="yes";

If you're still in this situation, can you repost the following information
from the initiator?  (It's a little scary that I can still remember these
commands.)
  1) iscsiadm list discovery
  2) iscsiadm list static-config
  3) iscsiadm list discovery-address -v
This should provide all the discovery information needed to understand the
mappings.  The other interesting thing would be to look at the discovery
method listed in iscsiadm list target -v.
 
 


[storage-discuss] Network storage source has no history?

2007-01-29 Thread David Weibel
Is it possible for the network storage consolidation to start posting
history information for source files?  (Peer pressure!)  All the other
consolidations on OpenSolaris are doing it.  This is a very handy feature
for the public to have any idea what's going on inside Sun.

Examples...
ON: 
http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/cmd/iscsi/iscsitgtd/iscsi_login.c
JDS: 
http://cvs.opensolaris.org/source/history/jds/spec-files/trunk/scripts/prepare-ChangeLog.pl
 
 


[storage-discuss] Re: Box hangs probing iSCSI target

2007-01-15 Thread David Weibel
> had to disable the iscsi_initiator SMF service

I'm not sure if things have changed, but development never really tested
with the iscsi_initiator SMF service enabled.  I would recommend you always
leave that service disabled with at least S10-S10U2 of the iSCSI initiator.
This service was originally developed to work around a couple of devfs
Solaris framework bugs, although those problems have been resolved.  You
might be hitting strange boot issues with the iscsi_initiator service
enabled these days.
 
 


Re: [storage-discuss] Multiple session support in the iSCSI target?

2006-10-30 Thread David Weibel
There is nothing in the Solaris iSCSI target to preclude it from
supporting multiple sessions.  It does not support multiple connections
per session yet; give that time, and knowing Rick he will get it working.
In relation to multiple sessions there isn't anything special you need to
set up on the target.  Most of the configuration is initiator side,
although you can use iscsitadm to configure the portal information.
Check out the documentation for details.
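
For the portal piece, the target-side TPGT setup looks roughly like this
(the address and target label are placeholders):

  iscsitadm create tpgt 1
  iscsitadm modify tpgt -i 192.168.1.10 1
  iscsitadm modify target -p 1 mytarget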


Torrey McMahon wrote:
> I don't think MC/S is there yet.
> http://www.opensolaris.org/os/project/iscsitgt/ should have more
> details.
>
> Matty wrote:
>> Does anyone happen to know if the Solaris iSCSI target supports
>> multiple sessions and MC/S? If these are supported, do we need to
>> configure anything special on the target to take advantage of them?




Re: [storage-discuss] Re: Overland REO and Solaris 10U2 iSCSI

2006-09-26 Thread David Weibel
The target value should be based on the /etc/path_to_inst value, if I
remember correctly.
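
One quick way to cross-check (illustrative; the exact line format varies
per system):

  grep iscsi /etc/path_to_inst
  # compare the instance number column against the 'target' property
  # shown in the prtconf output quoted below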


Louwtjie Burger wrote:
> I've noticed from prtconf the following:
>
> name='lun' type=int items=1
> value=
> name='target' type=int items=1
> value=0074
> Device Minor Nodes:
> dev=(33,1560)
> dev_path=/iscsi/[EMAIL PROTECTED],0:
> spectype=chr type=minor
> dev_link=/dev/rmt/9
>
> Could someone please explain the target value ...
