Is iSCSI loopback, with the initiator and target on the same
system, supported? I thought that was fixed a while back. I have a clean snv_104
system installed. (We wanted to test the iSNS server and figured I would
point the initiator and target at it to make sure things were working.)
The
Is there any chance you saw a message like the one below in your log during this
testing?
WARNING: iscsi driver unable to online iqn lun #
--
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
iSCSI already has lots of parameters that affect transfer sizes, I/O flow,
timeouts, etc. Instead of hiding these new parameters in iscsi.conf,
how about making them vendor-specific parameters and having them
managed through the same management interfaces (i.e. iscsiadm)? The
iscsi.conf parameters
Since you're a NetApp customer you should really leverage your
[EMAIL PROTECTED] account (which you can also get for free). It contains
great documentation targeted at you. NetApp has a script that
would have taken care of this for you. They also have lots of other
good scripts, documents, etc.
Do not change iscsi.conf. If you did, change it back. (Also, setting
mpxio-disable=no is a double negative; it doesn't do anything.)
Add this to your /kernel/drv/scsi_vhci.conf
device-type-scsi-options-list = "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;
Make
+1 A sad and happy day.
Solaris support on dynamic lun expansion is coming soon. This feature
basically works on two sides: disk driver [s]sd detects the LUN size
changes, and the disk consumer (i.e. the file system) pulls the trigger to
change the disk label when notified.
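As an illustration of that division of labor, here is a minimal, hypothetical sketch; the names (FakeDisk, check_for_expansion) are invented for this example and are not the actual sd code:

```python
class FakeDisk:
    """Stand-in for a LUN whose READ CAPACITY result can grow (hypothetical)."""
    def __init__(self, blocks):
        self.blocks = blocks

    def read_capacity(self):
        return self.blocks


def check_for_expansion(disk, last_known, on_grow):
    """Driver side: detect a LUN size change and notify the consumer.

    The consumer (e.g. a file system) decides when to actually rewrite
    the disk label; the driver only reports the new capacity.
    """
    current = disk.read_capacity()
    if current > last_known:
        on_grow(current)
    return current
```

A file system would register something like `on_grow` to trigger the relabel at a safe point, rather than the driver relabeling on its own.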
Here is the white paper by NetApp on how to do this on
0 - iscsi_lun_offline iscsi_lun_offline - return:1
0 - iscsi_lun_offline iscsi_lun_offline - return:1
1 - iscsi_lun_offline iscsi_lun_offline - return:1
1 - iscsi_lun_offline iscsi_lun_offline - return:1
It looks like you're right. Assuming arg1 was the return value, then a return of 1
would be
A proper method is also needed for LUN resizing. After unmapping a LUN
and remapping a LUN of different size onto the same LUN ID, Solaris keeps
the old SCSI INQUIRY information. devfsadm -C and devfsadm don't help for
this case either.
This has nothing to do with iSCSI. The problem is a more
If memory serves me, issuing a devfsadm -C should have removed the unmapped
nodes, with two requirements. One, you didn't still have the LUNs mounted or in
use on the host side. And two, the target actually removed the LUNs from
its SCSI REPORT LUNS response.
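The two requirements above can be sketched as a simple filter (hypothetical names; this is not the actual devfsadm code path):

```python
def stale_nodes(device_nodes, in_use, reported_luns):
    """Return device nodes that a cleanup pass (like devfsadm -C) could
    safely remove: not mounted or open on the host side, and no longer
    present in the target's SCSI REPORT LUNS response."""
    return [node for node in device_nodes
            if node not in in_use and node not in reported_luns]
```

If either condition fails for a node, it stays; that matches the behavior described above where still-mounted or still-reported LUNs are not cleaned up.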
The code path that devfsadm
(Remembered this on my drive home...)
I just remembered another aspect of the problem. LUN structs can still be
present but in a removed, no-longer-present state. These should not be
reported back, though. See the resulting IOCTL code in ...
I was hoping, and am interested to see, if someone at Sun would respond with a more
official answer. To get you an answer in the meantime: what you're describing
should work fine and was tested a number of years ago. A long time ago there
were some issues with ZFS when iSCSI LUs went offline.
All seems OK, but after I offline and then unmap a LUN, the output shown was...
Based on the output it looks like the target is still online and the LUNs are
mapped. What exactly did you do to offline and unmap the LUNs? Was that a
target side or host side action?
Here is some
I just wanted to say !Nice Job! on all the recent content at StorageStop.
Keep up the great support for the OpenSolaris community.
http://blogs.sun.com/storage/
Based on the trace it's definitely the target terminating the login request.
This is seen by a TCP-level disconnect (FIN) at frame 12. I would expect the
target doesn't like something in the (Operational) Login Request (Frame 10).
Does the target debug log have some sort of error that points
+1 (Very Cool!)
Whoever is working on the following bug ...
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6568240
I believe the solution will be adding the below to /etc/driver_aliases.
sgen "scsiclass,0c"
It's nice to see a full description in a bug. It makes it easier for the
OpenSolaris community to present suggestions. 99% of the bugs just say "see comments."
The target host is multihomed, so I'm not surprised to see two entries,
one for each NIC, but why the 0.0.0.0 entries?
I saw a similar issue about 2-3 years ago with an ATTO bridge. I
think Sun has an initiator bug open to have it ignore 0.0.0.0 addresses. At
the time it was causing
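For illustration, here is a hedged sketch of the kind of filtering an initiator could apply to SendTargets text records (record format per RFC 3720; the function name is invented for this example):

```python
def drop_unspecified_addresses(records):
    """Filter SendTargets text records, dropping TargetAddress entries
    whose IP is the unspecified address 0.0.0.0.

    A TargetAddress record looks like: TargetAddress=10.0.0.5:3260,1
    """
    kept = []
    for rec in records:
        if rec.startswith("TargetAddress="):
            addr = rec.split("=", 1)[1]      # "10.0.0.5:3260,1"
            host = addr.rsplit(",", 1)[0]    # strip the TPGT -> "10.0.0.5:3260"
            ip = host.rsplit(":", 1)[0]      # strip the port -> "10.0.0.5"
            if ip == "0.0.0.0":
                continue
        kept.append(rec)
    return kept
```

That way a multihomed target that advertises an unconfigured NIC as 0.0.0.0 would not leave the initiator retrying an unreachable portal.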
Thanks for all the advice. Configuring as a static target worked just fine in
the end
I wish I had seen this sooner. I'm starting to wonder if there is
something really busted in the Solaris iSCSI initiator SendTargets handling
with the latest patches. I recently got done helping
I don't understand the point of a new command. You have...
iscsiadm modify discovery -[tsi] disable
... which will en/disable the initiator. I think two things
need to be addressed before adding another option that does the
exact same thing as an existing option.
1) The system shouldn't
Another alternative to FC for low-latency IP on Solaris is Myricom's products.
They have Solaris performance results posted on their web site if you're
interested.
http://www.myri.com/Myri-10G/10gbe_solutions.html
(I do not work for Myricom or any related company.)
Here are some things to look at. (NOTE: I'm definitely not a ZFS expert.)
The most common low hanging fruit, in iSCSI performance tweaks, are in
the networking stack setup and the following tweak.
http://www.opensolaris.org/jive/thread.jspa?messageID=95566#95566
1) In your testing you're using
Ben,
This bug looks like a match for your problem...
http://bugs.opensolaris.org/view_bug.do?bug_id=6550424
... which is a duplicate of ...
http://bugs.opensolaris.org/view_bug.do?bug_id=6480294
... based on the limited bug description available. It looks like an MPxIO
engineer is
I was just wondering whatever happened to the iTunes "Sun Microsystems -
Storage News and Training" (aka Data Management / Solutions) podcast. Was
this cut with the RIFs last year?
It does look like the [EMAIL PROTECTED] and Sun Developer Network (SDN)
podcasts, along with those from many other divisions at Sun, are still alive
and well (via iTunes).
I would predict this has more to do with the disk driver (SD) than iSCSI. x86
and SPARC SD both use a different Sun VTOC, and if I remember right the way the
values for these were generated was slightly different. Maybe one of the SD
experts can chime in here and correct me.
I thought putting a volume under ZFS automatically converted the label to EFI?
If that's still true you can skip format altogether and just use ZFS, if that's
your intention.
Similar to the approach you suggested. If you have a non-ZFS LUN you can
follow the NetApp document below for a little more detail. The steps are not
NetApp specific.
http://www.virtual.com/whitepapers/NetApp_Solaris_File.pdf
Why don't you just turn on iSCSI header and data digests? They add an
additional CRC32C checksum to all the iSCSI PDUs. IPsec in general is still not
supported with most targets, and the setup is fairly painful.
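The digest algorithm iSCSI uses is CRC-32C (the Castagnoli polynomial, reflected form 0x82F63B78), not the zlib CRC32. A minimal bit-at-a-time Python implementation for reference:

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), the checksum used for iSCSI header/data digests.
    Bit-at-a-time version for clarity; real implementations use lookup
    tables or the SSE4.2 crc32 instruction."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF
```

The standard check value for this CRC is `crc32c(b"123456789") == 0xE3069283`, which is a quick way to verify any implementation.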
I think you need to restate this posting. It makes very little sense. What
doesn't work with what? And what is going wrong?
This is slightly off topic. I noticed it in your logs...
on the initiator side :
# svcs iscsi_initiator
STATE STIME FMRI
online 15:19:06 svc:/network/iscsi_initiator:default
The iSCSI Initiator SMF service is pretty pointless. It has/had two goals: 1)
to work around a devfs problem. This
[Off Topic...Is loopback access fixed yet?]
clip from nwsmith's post
# iscsiadm list target -vS
Target: iqn.1986-03.com.sun:02:1b993d5b-ca55-e822-f4d1-faa99b949707.sandbox
TPGT: 1
ISID: 402a0001
...
Discovery Method: Static
...
Target:
Is it possible for the network storage consolidation to start posting history
information for source files? (Peer pressure: all the other consolidations on
OpenSolaris are doing it.) This is a very handy feature for the public to
have any idea what's going on inside Sun.
Examples...
ON:
had to disable the iscsi_initiator SMF service
I'm not sure if things have changed, but development never really tests with
the iscsi_initiator SMF service enabled. I would recommend you always leave
that service disabled with at least S10-S10U2 of the iSCSI initiator. This
service was
There is nothing in the Solaris iSCSI target to preclude it from
supporting multiple sessions. It does not support multiple connections
per session, yet. Give it time and, knowing Rick, he will get it
working. In relation to multiple sessions, there isn't anything special
you need to setup
The target value should be based on the /etc/path_to_inst value if I
remember correctly.
Louwtjie Burger wrote:
I've noticed from prtconf the following:
name='lun' type=int items=1
value=
name='target' type=int items=1