Re: [storage-discuss] iscsiadm source compatibility

2009-09-09 Thread viveks
Hi,

Thanks for the reply,

The IP address is valid and the targets are reachable, as I can discover and 
log in to the targets using the iscsiadm utility on S10.

Is it possible that an EFAULT is returned if the size of the structure passed 
to the ioctl differs from the size of the structure expected by the driver? 
This can happen when #pragma pack compiler directives are used.

Thank you,
Vivek S
-- 
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] iscsiadm source compatibility

2009-09-09 Thread viveks
Hi,

Sorry for the separate post; I should have included this in my previous reply.

Is there any way I can debug the driver, or get a trace of some kind to see 
exactly what is happening inside it? Is there any way of finding out the exact 
cause of the ioctl failure?

Thank you,
Vivek S
-- 
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] FCoE in a N-Port point-to-point topology

2009-09-09 Thread Ronnie koch
Nigel Smith wrote:
 Hi Ronnie
 Many thanks for cross-posting that interesting, 
 fairly detailed feedback from the open-fcoe list.

Sometimes detail is good :)

Zhong Wang wrote:
 Thanks Ronnie, that explains the Open FCoE initiator
 behavior. The debug trace of the Solaris FCoE software can be retrieved by
 issuing "echo *ftb/s | mdb -k" with root access. The information could be
 hard to understand since it's only for developers, and I can help you
 with it.

Thanks for the offer - I might take you up on it :)

 Would you please send me a Wireshark capture of the traffic in
 your setup? I'd like to see if there are differences from Nigel's
 setup.

My setup is in VMware Server with only the two VMs (and the VMware host) on a 
"host-only" vswitch. The target is up to build 122 and the initiator is Ubuntu 9.04 
with kernel 2.6.30-4. I will attach three tcpdump traces showing the target started 
first, the initiator started first, and the initiator with a NIC that supports 
jumbo frames. The end result is fairly similar, but the frame size negotiation in 
the last trace is interesting: the target suggests 2048 in a couple of FLOGIs, but 
then accepts a FLOGI from the initiator suggesting 2112, yet the frame size in the 
ACC is 1452.

The traces were captured with "no ip and no arp" as the Open-FCoE initiator 
used a single interface for FCoE and IP.
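
For what it's worth, a capture along those lines comes down to either excluding 
IP and ARP or matching the FCoE/FIP ethertypes directly. This is only a sketch; 
the interface name and output file are placeholders:

# Capture everything except IP and ARP on the FCoE-facing interface
tcpdump -i eth1 -s 0 -w fcoe-capture.cap 'not ip and not arp'

# Or keep only FCoE (ethertype 0x8906) and FIP (0x8914) frames
tcpdump -i eth1 -s 0 -w fcoe-capture.cap 'ether proto 0x8906 or ether proto 0x8914'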

Regards



Ronnie
-- 
This message posted from opensolaris.org

fcoe-OF-I-and-OS-T-01.cap
Description: Binary data


fcoe-OF-I-and-OS-T-02.cap
Description: Binary data


fcoe-OF-I-and-OS-T-03.cap
Description: Binary data
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] FCoE in a N-Port point-to-point topology

2009-09-09 Thread Zhong Wang




Ronnie koch wrote:
 My setup is in VMware Server with only the two VMs (and the VMware host) on a 
 "host-only" vswitch. The target is up to build 122 and the initiator is Ubuntu 
 9.04 with kernel 2.6.30-4. I will attach three tcpdump traces showing the target 
 started first, the initiator started first, and the initiator with a NIC that 
 supports jumbo frames. The end result is fairly similar

Yes, the first two captures are almost identical to Nigel's.

 but the frame size negotiation in the last trace is interesting: the target 
 suggests 2048 in a couple of FLOGIs, but then accepts a FLOGI from the 
 initiator suggesting 2112, yet the frame size in the ACC is 1452.

This is an FCoE target bug -- it's hard-coded to 1452 when sending the ACC to a 
FLOGI. I'm opening a CR to track this.

Thanks,
Zhong

___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] FCoE in a N-Port point-to-point topology

2009-09-09 Thread Ronnie koch
I did some more testing by changing the target N/PWWN to be smaller than that of 
the initiator, but the results were similar to my 03 trace from yesterday, even 
when I reverted to the original WWNs. It seems the Open-FCoE initiator responds 
differently with the E1000 than with the VMware default PCnet xyz. No time to 
verify now, but I am attaching the traces FWIW.

The traces show the initiator consistently rejecting the PLOGI from the target 
and not initiating a PLOGI by itself.

Ronnie
-- 
This message posted from opensolaris.org

fcoe-OF-I-and-OS-T-04-7.tgz
Description: Binary data
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] iscsiadm source compatibility

2009-09-09 Thread Bing Zhao - Sun Microsystems

Hi Viveks:

There is no existing DTrace script for debugging the iSCSI driver, so if you 
want to do that you will need to write the script yourself.
In a DTrace script you can see the return value of a function, so this is one 
way to find which function exits with an error.
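
As a rough starting point (not an official script), an fbt one-liner along these 
lines should show error returns from the iscsi module. The module name "iscsi" 
and the iscsi_ioctl function name are assumptions and may need adjusting for 
your build:

# Print every non-zero return value from functions in the iscsi module (run as root)
dtrace -n 'fbt:iscsi::return /arg1 != 0/ { printf("%s returned %d", probefunc, (int)arg1); }'

# Or watch just the ioctl entry point and its return value
dtrace -n 'fbt:iscsi:iscsi_ioctl:return { printf("iscsi_ioctl returned %d", (int)arg1); }'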


You can find the related information here:
http://docs.sun.com/app/docs/doc/819-3620/chp-fbt?l=en&q=dtrace&a=view

Regards,
Bing


viveks wrote:

Hi,

Sorry for the separate post; I should have included this in my previous reply.

Is there any way I can debug the driver, or get a trace of some kind to see 
exactly what is happening inside it? Is there any way of finding out the exact 
cause of the ioctl failure?

Thank you,
Vivek S
  


___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] [fm-discuss] self test failure on Intel X25-E SSD

2009-09-09 Thread Peter Eriksson
I've done some more testing, and I think my X4240/mpt/X25 problems must be 
something else.

Attempting to read the self-test log (with smartctl) on the 8850-firmware X25-E 
gives better results than with the old firmware:


X25-E running firmware 8850 on an X4240 with mpt controller:

# smartctl -d scsi -l selftest /dev/rdsk/c1t15d0
smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

No self-tests have been logged
Long (extended) Self Test duration: 120 seconds [2.0 minutes]
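
(For reference, a short self-test could be kicked off on that drive and the log 
re-read a couple of minutes later; this assumes the firmware accepts smartctl's 
-t short on a SCSI-attached device, and uses the same device path as above:)

# Start a short self-test, then re-read the log once it has finished
smartctl -d scsi -t short /dev/rdsk/c1t15d0
smartctl -d scsi -l selftest /dev/rdsk/c1t15d0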




X25-M running firmware 8820 on an X4240 with mpt controller:

# smartctl -d scsi -l selftest /dev/rdsk/c1t14d0
smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

scsiPrintSelfTest Failed [I/O error]

Plus SCSI errors on the console:
Sep  9 16:11:44 merope scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@e,0 (sd31):
Sep  9 16:11:44 merope  SCSI transport failed: reason 'unknown reason': giving up
Sep  9 16:12:01 merope scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,3...@f/pci1000,3...@0/s...@e,0 (sd31):
Sep  9 16:12:01 merope  Error for Command: write   Error Level: Retryable
Sep  9 16:12:01 merope scsi: [ID 107833 kern.notice]  Requested Block: 42976   Error Block: 42976
Sep  9 16:12:01 merope scsi: [ID 107833 kern.notice]  Vendor: ATA   Serial Number: CVEM8465006B
Sep  9 16:12:01 merope scsi: [ID 107833 kern.notice]  Sense Key: Unit Attention
Sep  9 16:12:01 merope scsi: [ID 107833 kern.notice]  ASC: 0x29 (power on, reset, or bus reset occurred), ASCQ: 0x0, FRU: 0x0


X25-E running firmware 8621 in an X4500 (marvell controller):

# /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0
smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/


SMART Self-test log
Num  Test Description  Status                     segment number  LifeTime (hours)  LBA_first_err  [SK ASC ASQ]
# 1  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 2  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 3  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 4  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 5  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 6  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 7  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 8  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
# 9  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#10  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#11  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#12  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#13  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#14  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#15  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#16  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#17  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#18  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#19  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]
#20  Default           Interrupted (bus reset ?)  -               9216              154621181988   [0xb 0x40 0x82]

And SCSI errors on the console: 

Sep  9 16:14:27 andromeda scsi: [ID 107833 kern.warning] WARNING: /p...@2,0/pci1022,7...@8/pci11ab,1...@1/d...@7,0 (sd47):
Sep  9 16:14:27 andromeda   Error for Command: mode sense   Error Level: Informational
Sep  9 16:14:27 andromeda scsi: [ID 107833 kern.notice]  Requested Block: 0   Error Block: 0
Sep  9 16:14:27 andromeda scsi: [ID 107833 kern.notice]  Vendor: ATA   Serial Number: 
Sep  9 16:14:27 andromeda scsi: [ID 107833 kern.notice]  Sense Key: Illegal Request
Sep  9 16:14:27 andromeda scsi: [ID 107833 kern.notice]  ASC: 0x24 (invalid field in cdb), ASCQ: 0x0, FRU: 0x0

This drive is also generating a lot of FMA events regarding the failed 
self-tests. I'm going to replace it with one running 8850 as soon as possible. 
(Requires some fiddling 

[storage-discuss] migration path to COMSTAR

2009-09-09 Thread Joseph Mocker

Hi,

Just getting my feet wet with COMSTAR. I have some iSCSI LUs backed by 
ZVOLs that I created pre-COMSTAR (Nevada build 84). I would like to migrate to 
a newer version of Nevada or OpenSolaris, but my first attempt failed 
miserably, and apparently I narrowly escaped a large amount of data loss.


Has anyone worked out a migration path from pre-COMSTAR to COMSTAR for 
existing LUs? As far as I understand, sbdadm is the utility for managing LUs, 
but I do not see any option to import or migrate an existing pre-COMSTAR LU.


Thanks...

 --joe
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


[storage-discuss] NexentaStor.org and Open Source components

2009-09-09 Thread Anil Gulecha
Hi All,

NexentaStor is an industry-leading storage appliance based on
OpenSolaris and Ubuntu LTS.

On behalf of Nexenta Systems, I'd like to announce the launch of
NexentaStor.org, where we've open-sourced our internal kernel gate and
the plugins that extend the appliance.

NexentaStor.org is a complete forge environment with Mercurial
repositories, bug tracking, a wiki, file hosting, and other features.

We welcome developers and the user community to participate and extend
the storage appliance via our open Storage Appliance API (SA_API) and
plugin API.

Website: www.nexentastor.org

Thanks
--
Anil Gulecha
Community Lead,
www.nexentastor.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] migration path to COMSTAR

2009-09-09 Thread A Hettinger
One method that does work is to use a COMSTAR iSCSI port on the loopback 
interface, mount it locally, and then use dd to move the contents of your old 
backing store to the new device.

You can then set up views to whatever you want. It's ugly, but it does work.
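
Roughly, the steps look like this. This is only a sketch: the pool/zvol names, 
the size, the LU GUID and the iSCSI disk device are placeholders, and it assumes 
the COMSTAR stmf and iscsi/target services are already enabled:

# Create the new backing zvol and register it as a COMSTAR LU
zfs create -V 100G tank/newvol
sbdadm create-lu /dev/zvol/rdsk/tank/newvol

# Make the LU visible (restrict with host/target groups if needed) and make
# sure an iSCSI target port exists
stmfadm add-view <GUID-printed-by-sbdadm>
itadm create-target

# Point the local initiator at the loopback address and enable discovery
iscsiadm add discovery-address 127.0.0.1
iscsiadm modify discovery --sendtargets enable

# Copy the old zvol onto the new iSCSI disk (device name as reported by format)
dd if=/dev/zvol/rdsk/tank/oldvol of=/dev/rdsk/<new-iscsi-disk> bs=1024k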
-- 
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] migration path to COMSTAR

2009-09-09 Thread Joseph Mocker

A Hettinger wrote:

One method that does work is to use a COMSTAR iSCSI port on the loopback 
interface, mount it locally, and then use dd to move the contents of your old 
backing store to the new device.

You can then set up views to whatever you want. It's ugly, but it does work.
  
Ah, clever. Hopefully a sparse ZVOL would be smart enough not to allocate 
blocks of zeros.
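
If you do try a sparse backing volume, it is just the -s flag at creation time 
(the name and size here are placeholders):

# Thin-provisioned zvol: no space reservation is made up front
zfs create -s -V 100G tank/newvol

Note that -s only drops the reservation; a dd stream full of zero blocks will 
still allocate them unless compression is enabled on the zvol, in which case 
ZFS stores all-zero blocks as holes.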


Anyone know if a migration tool is in the works? Should I file an RFE?
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] migration path to COMSTAR

2009-09-09 Thread eneal

Quoting Joseph Mocker m...@sun.com:


A Hettinger wrote:
One method that does work is to use a COMSTAR iSCSI port on the loopback 
interface, mount it locally, and then use dd to move the contents of your old 
backing store to the new device.

You can then set up views to whatever you want. It's ugly, but it does work.

Ah, clever. Hopefully a sparse ZVOL would be smart enough not to allocate 
blocks of zeros.



I'd hold off on the sparse volume. I've done some testing that suggests sparse 
volumes *seem* to be less performant than regular zvols.

I'm compiling the data now and hope to publish it to the list shortly.









___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


[storage-discuss] Making zfs HA

2009-09-09 Thread Roman Naumenko
Hello list,

What are the options for building clustered storage using OpenSolaris? I'm 
interested in HA solutions.

I have tried only one option - AVS replication. Unfortunately, AVS configuration 
is too complicated. Groups, bitmaps, queues, RPC timeouts, slicing - it's just a 
nightmare to make it work and support in production when there are more than a 
couple of pools. And it's probably going to be slow if there are JBODs attached 
to a storage controller, or it will require half of the Ethernet ports to 
replicate more or less reliably. A 10GigE interface for it - did anybody try that?

Another option I'm looking into is sending snapshots. But regardless of what 
we've heard from Sun, it's going to slow down ZFS operations. Creating a 
snapshot is not quick on a loaded pool, especially if the storage controller 
manages many pools. So there are probably delays of dozens of minutes in 
transferring snapshots over to a standby server.
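
For reference, the basic mechanics of that approach are just snapshot plus 
send/receive; the host, pool and dataset names below are placeholders, and 
incremental sends keep the transfers small after the first full copy:

# Initial full copy to the standby host
zfs snapshot tank/data@rep-1
zfs send tank/data@rep-1 | ssh standby zfs receive -F backup/data

# Later passes only ship the changes since the previous snapshot
zfs snapshot tank/data@rep-2
zfs send -i tank/data@rep-1 tank/data@rep-2 | ssh standby zfs receive backup/data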

--
Roman
-- 
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] Making zfs HA

2009-09-09 Thread Erast

Hi Roman,

take a look at what NexentaStor provides:

auto-cdp - a commercial plugin which significantly simplifies ZFS/AVS 
management of primary/secondary hosts.

auto-sync - a free service which supports sophisticated ZFS send/recv over 
SSH or nc, or sending snapshots over rsync, which could be ideal for clouds. 
This service can be an ideal solution for asynchronous ZFS replication.

auto-tier - a free service which supports rsync tiering. This service works 
really well as a second-tier archiving solution. Think of NetApp-style 
periodic backups over NFS, etc.

The development portal is http://www.nexentastor.org - the place where 
NexentaStor open-source developers and partners get together, so it can 
provide you free community support for the open-sourced plugins.


Roman Naumenko wrote:

Hello list,

What are the options for building clustered storage using OpenSolaris? I'm 
interested in HA solutions.

I have tried only one option - AVS replication. Unfortunately, AVS configuration 
is too complicated. Groups, bitmaps, queues, RPC timeouts, slicing - it's just a 
nightmare to make it work and support in production when there are more than a 
couple of pools. And it's probably going to be slow if there are JBODs attached 
to a storage controller, or it will require half of the Ethernet ports to 
replicate more or less reliably. A 10GigE interface for it - did anybody try that?

Another option I'm looking into is sending snapshots. But regardless of what 
we've heard from Sun, it's going to slow down ZFS operations. Creating a 
snapshot is not quick on a loaded pool, especially if the storage controller 
manages many pools. So there are probably delays of dozens of minutes in 
transferring snapshots over to a standby server.

--
Roman

___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] iscsiadm source compatibility

2009-09-09 Thread viveks
Hi,

Thanks for the information and your time. I'll give it a try and get back if 
possible, probably after a while, as I am stuck with some other work. Thanks a 
lot.

Vivek S
-- 
This message posted from opensolaris.org
___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss


Re: [storage-discuss] iscsiadm source compatibility

2009-09-09 Thread Bing Zhao - Sun Microsystems

Hi Viveks:

Please let us know if you have any questions on the iSCSI driver 
debugging or the related DTrace scripts.


Regards,
Bing

viveks wrote:

Hi,

Thanks for the information and your time. I'll give it a try and get back if 
possible, probably after a while, as I am stuck with some other work. Thanks a 
lot.

Vivek S
  


___
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss