[storage-discuss] mpio on win2003

2010-01-11 Thread Younes
Hi all,

I have Comstar running as an FC and iSCSI target.
I'm trying to set up MPIO on Windows 2003, and I can't seem to find a way. Presented
disks are seen as two disks in Windows.

Any thoughts?

Thanks,
Younes
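
A sketch of how the target side can be sanity-checked, in case it helps; the LU GUID
below is a placeholder. Each LU should show up once, with a single GUID, and it should
be visible through both FC target ports; if Windows sees two independent disks, MPIO
on the initiator side has not claimed the two paths yet.

stmfadm list-lu -v              # each LU listed once, with one GUID
stmfadm list-target -v          # both FC target ports should be online
stmfadm list-view -l <LU GUID>  # views that expose the LU to the host/target groups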


[storage-discuss] svc:/network/iscsi/initiator:default fails sometimes

2010-01-11 Thread Bernd Schemmer
Hi

I have a problem with the iSCSI initiator. I'm using OpenSolaris:

r...@t61p:~# cat /etc/release
   OpenSolaris Development snv_130 X86
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 18 December 2009

The problem is that the service sometimes fails after a reboot, like this:

xtrn...@t61p:~$  svcs *scsi*
STATE  STIMEFMRI
disabled   21:18:10 svc:/network/iscsi_initiator:default
disabled   21:18:12 svc:/system/iscsitgt:default
online 21:18:43 svc:/network/iscsi/target:default
maintenance21:18:27 svc:/network/iscsi/initiator:default

and sometimes it does not. Unfortunately, it fails more often than it works.

The service's log file is not really helpful here:

[ Jan  9 10:07:48 Method start exited with status 255. ]
[ Jan 10 21:12:15 Leaving maintenance because disable requested. ]
[ Jan 10 21:13:07 Disabled. ]
[ Jan 10 21:18:12 Enabled. ]
[ Jan 10 21:18:22 Executing start method (/lib/svc/method/iscsi-initiator). ]
[ Jan 10 21:18:27 Method start exited with status 255. ]
[ Jan 10 21:18:27 Executing start method (/lib/svc/method/iscsi-initiator). ]
[ Jan 10 21:18:27 Method start exited with status 255. ]
[ Jan 10 21:18:27 Executing start method (/lib/svc/method/iscsi-initiator). ]
[ Jan 10 21:18:27 Method start exited with status 255. ]
r...@t61p:~# 
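
For reference, the standard SMF triage steps look roughly like this (a sketch; the
log file name follows the usual FMRI-to-filename convention):

svcs -xv svc:/network/iscsi/initiator:default         # reason the service is in maintenance
cat /var/svc/log/network-iscsi-initiator:default.log  # full start-method output
svcadm clear svc:/network/iscsi/initiator:default     # retry once the cause is fixed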

I did some research via Google but did not find a solution.

Here's more information:

xtrn...@t61p:~$ modinfo | grep iscsi
 56 f7ab1000  35780 270   1  iscsi (iSCSI Initiator v-1.55)
227 f8927000  174b8 286   1  iscsit (iSCSI Target)

And this is the output of truss for the binary:

r...@t61p:~# truss /lib/svc/method/iscsi-initiator
execve(/lib/svc/method/iscsi-initiator, 0x08047D84, 0x08047D8C)  argc = 1
sysconfig(_CONFIG_PAGESIZE) = 4096
mmap(0x, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFB
mmap(0x, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEFA
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEF9
memcntl(0xFEFBC000, 29892, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
memcntl(0x0805, 4284, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
resolvepath(/usr/lib/ld.so.1, /lib/ld.so.1, 1023) = 12
resolvepath(/lib/svc/method/iscsi-initiator, /lib/svc/method/iscsi-initiator, 1023) = 31
stat64(/lib/svc/method/iscsi-initiator, 0x08047A08) = 0
open(/var/ld/ld.config, O_RDONLY) Err#2 ENOENT
stat64(/tools/lib/libc.so.1, 0x08047208)  Err#2 ENOENT
stat64(./libc.so.1, 0x08047208)   Err#2 ENOENT
stat64(/lib/libc.so.1, 0x08047208)= 0
resolvepath(/lib/libc.so.1, /lib/libc.so.1, 1023) = 14
open(/lib/libc.so.1, O_RDONLY)= 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEF90930, 0x08047274, 0x) = 0
close(3)= 0
memcntl(0xFEE3, 189652, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEE2
mmap(0x0001, 24576, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON|MAP_ALIGN, -1, 0) = 0xFEE1
getcontext(0x08047858)
getrlimit(RLIMIT_STACK, 0x08047850) = 0
getpid()= 3131 [3130]
lwp_private(0, 1, 0xFEE12A00)   = 0x01C3
setustack(0xFEE12A60)
sysi86(SI86FPSTART, 0xFEF8AFCC, 0x133F, 0x1F80) = 0x0001
sysconfig(_CONFIG_PAGESIZE) = 4096
brk(0x08062690) = 0
brk(0x08064690) = 0
stat64(/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, 0x08046C90) = 0
resolvepath(/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, /usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, 1023) = 44
open(/usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.3, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEE20598, 0x08046CFC, 0x) = 0
close(3)= 0
memcntl(0xFE65, 6624, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
stat64(./libc.so.1, 0x08046B80)   Err#2 ENOENT
stat64(/usr/lib/locale/en_US.UTF-8/libc.so.1, 0x08046B80) Err#2 ENOENT
stat64(/tools/lib/methods_unicode.so.3, 0x08046B80) Err#2 ENOENT
stat64(./methods_unicode.so.3, 0x08046B80)Err#2 ENOENT
stat64(/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3, 0x08046B80) = 0
resolvepath(/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3, /usr/lib/locale/common/methods_unicode.so.3, 1023) = 43
open(/usr/lib/locale/en_US.UTF-8/methods_unicode.so.3, O_RDONLY) = 3
mmapobj(3, MMOBJ_INTERPRET, 0xFEE20C70, 0x08046BEC, 0x) = 0
close(3)= 0
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0) = 0xFEE0
memcntl(0xFE63, 3576, MC_ADVISE, MADV_WILLNEED, 0, 0) = 0
stat64(./libc.so.1, 0x08046B80)   Err#2 ENOENT
fxstat(2, -1,

Re: [storage-discuss] mpio on win2003

2010-01-11 Thread Younes
Thanks for the quick reply.

Isn't MPIO in the initiator valid only for iSCSI?
I'm having the problem with FC now.


Younes


[storage-discuss] Open Source VTL

2010-01-11 Thread Nick
As I understand, from the following post:
opensolaris.org/pipermail/storage-discuss/2009-December/007767.html

the Solaris/OpenSolaris iSCSI Target Daemon currently will not emulate a tape 
device, correct?  If this is correct, does anyone know of an Open Source VTL 
for the Solaris platform?  I've found the LinuxVTL/MHVTL for Linux, but am 
looking for one for Solaris, as well.

Thanks!
-Nick


Re: [storage-discuss] svc:/network/iscsi/initiator:default fails sometimes

2010-01-11 Thread Daniel Carosone
 [..]  Method start exited with status 255. ]
 [..]
 There are no problems with the iscsi LUNS when the
 service is working.
 
 Any hints?

No, but I see exactly the same problem.


Re: [storage-discuss] mpio on win2003

2010-01-11 Thread Chris Du
What FC HBA are you using? I have QLogic 2Gb and 4Gb cards. I believe QLogic
removed this feature from their driver. Depending on the driver used, MPIO may
still be possible with the 2Gb version.


[storage-discuss] opensolaris-vmware

2010-01-11 Thread Greg
Hello All,
I hope this makes sense, I have two opensolaris machines with a bunch of hard 
disks, one acts as a iSCSI SAN, and the other is identical other than the hard 
disk configuration. The only thing being served are VMWare esxi raw disks, 
which hold either virtual machines or data that the particular virtual machine 
uses, I.E. we have exchange 2007 virtualized and through its iSCSI initiator we 
are mounting two LUNs one for the database and another for the Logs, all on 
different arrays of course. Any how we are then snapshotting this data across 
the SAN network to the other box using snapshot send/recv. In the case the 
other box fails this box can immediatly serve all of the iSCSI LUNs. The 
problem, I don't really know if its a problem...Is when I snapshot a running vm 
will it come up alive in esxi or do I have to accomplish this in a different 
way. These snapshots will then be written to tape with bacula. I hope I am 
posting this in the correct place. 

Thanks, 
Greg
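
A minimal sketch of that replication step, assuming the LUNs are backed by zvols;
the pool, dataset, and host names below are made up:

zfs snapshot tank/esx/exchange-db@2010-01-11
zfs send -i tank/esx/exchange-db@2010-01-10 tank/esx/exchange-db@2010-01-11 | \
    ssh standby-san zfs recv -F tank/esx/exchange-db

The incremental send keeps the standby copy of the zvol in step; the standby box
can then export the same LU through COMSTAR if the primary fails.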


Re: [storage-discuss] opensolaris-vmware

2010-01-11 Thread Eugene Vilensky
On Mon, Jan 11, 2010 at 6:17 PM, Greg gregory.dur...@gmail.com wrote:
 we have exchange 2007 virtualized and through its iSCSI initiator we are 
mounting two LUNs one for the database and another for the Logs, all on 
different arrays of course. Any how we are then snapshotting this data across 
the SAN network to the other box using snapshot send/recv. In the case the 
other box fails this box can immediatly serve all of the iSCSI LUNs.

snip snip

How are you quiescing your VMs, and how are you verifying Exchange DB
consistency?


Re: [storage-discuss] mpio on win2003

2010-01-11 Thread Eugene Vilensky
On Mon, Jan 11, 2010 at 9:30 AM, Younes ynag...@gmail.com wrote:
 Hi all,

 I have Comstar running as an FC and iSCSI target.
 I'm trying to setup mpio on Win 2003, and I can't seem to find a way. 
 Presented disks are seen as 2 disks in windows.


Native (Microsoft-provided) FC initiator MPIO support is available in
Windows Server 2008 and later.  For earlier releases, and for better
integration in 2008 and beyond, Microsoft provided licenses to hardware
vendors for creating their own Device Specific Modules
(http://www.microsoft.com/WindowsServer2003/technologies/storage/mpio/faq.mspx).
I am not aware of any FC MPIO for Server 2003 that wasn't provided by
the storage manufacturer.


Re: [storage-discuss] opensolaris-vmware

2010-01-11 Thread Greg
The Exchange DB I am not too worried about, as I am backing it up via Bacula
from within the Server 2008 VM itself and writing that to tape. It is the OS
VMs I am worried about; these range from *nix to Windows servers. What is the
best way to quiesce the VMs, other than shutting them down to back them up?

Thanks,
Greg


Re: [storage-discuss] opensolaris-vmware

2010-01-11 Thread Eugene Vilensky
 What is the best way to quiesce the VMs, other than shutting them down to
 back them up?

Coordination of the VMware snapshot with the back-end snapshots: the guest
quiescing routines flush all buffers and ensure consistency for supported
operating systems. You can run custom scripts from within the guest or
externally from the host.

Even without quiescing you should still be able to achieve a crash-consistent
snapshot of the VM, so there is a trade-off to weigh against the amount of
risk you are willing to assume.

-Ev
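
A minimal sketch of that coordination, assuming SSH access to both the ESXi host
and the OpenSolaris box; the VM id, snapshot name, dataset, and host names below
are placeholders, and on older ESX(i) builds the command is vmware-vim-cmd rather
than vim-cmd:

# 1. quiesced VMware snapshot (vmid comes from "vim-cmd vmsvc/getallvms")
ssh esxi-host vim-cmd vmsvc/snapshot.create 42 pre-backup backup-quiesce 0 1
# 2. ZFS snapshot of the zvol backing the LUN, taken on the OpenSolaris box
ssh san-host zfs snapshot tank/esx/vmstore@pre-backup
# 3. remove the VMware snapshot so the VM resumes running on its base disks
ssh esxi-host vim-cmd vmsvc/snapshot.removeall 42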


Re: [storage-discuss] mpio on win2003

2010-01-11 Thread Younes
Thanks,
My HBA is QMH2462.
Should I be downgrading my driver?

Thanks,
Younes