Nigel Smith wrote:
On 5/1/07, cedric briner wrote:
Hi,
I'm quite new to Solaris, and I'd like to build a ZFS pool on top of an iSCSI
device. What is the right way of doing so?
0) The big question is:
How can I export a whole disk with iSCSI, in such a way that I can use it in
a zpool either through iSCSI OR directly attached?
Hello Cedric
I'm a little confused about what exactly you are trying to do here, and why.
I will try to explain what you can do and what works, and let's see if that
helps.
OK, I'll try to explain what I want to do, even if I don't know
whether it is really a good idea.
I'd like to have a cheap, reliable and manageable NFS/Samba server.
-----------------------------------------------------------------
- To make things cheap, I thought:
  - of using cheap and *fat* IDE HDs
  - of using cheap servers with a single power supply
- To make things reliable, I thought:
  - of using raidz/raidz2/mirror
  - of spreading the HDs of a zpool across nodes.
- To make things manageable, I thought:
  - of using ZFS (ease of adding/removing HDs, snapshots, ...)
  - of using iSCSI (easy to move HDs between nodes and to add new HDs)
I'll show you some ASCII art of what I have in mind (is that art, anyway? :) )
---------------------------------------------
H:host, D:disk
+-- H1 --- + +-- H2 --- + +-- H3 --- +
| | | | | |
| +- D0 -+ | | +- D1 -+ | | +- D2 -+ |
| +------+ | | +------+ | | +------+ |
| | | | | |
| | | | | | o o o
| +- D5 -+ | | +- D6 -+ | | +- D7 -+ |
| +------+ | | +------+ | | +------+ |
| | | | | |
+--------- + +--------- + +--------- +
\ | /
\ | /
+-- Jumbo Switch--+
| |
+-----------------+
And now I'll show you the steps I was thinking of doing:
----------------------------------------------------
Let's say I want two zpools: the 1st pool on D[0-2], the 2nd on D[5-7].
1) I export all the HDs with iSCSI.
2) I initiate on:
   - H1: 3 iSCSI initiators, which bring D[0-2] to H1 (D0 will be a
     loopback (*) iSCSI session)
   - H2: 3 iSCSI initiators, which bring D[5-7] to H2 (D6 loopback)
3) I create the zpools:
   - on H1: zpool create tank012 raidz D0-iscsi D1-iscsi D2-iscsi
   - on H2: zpool create tank567 raidz D5-iscsi D6-iscsi D7-iscsi
4) The final service:
   - on H1, I share the zpool over NFS: zfs set sharenfs=on tank012
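The steps above could be sketched on the command line roughly like this. This is only a sketch: the IP addresses, target names and disk names (the `c2t...d0` style LUN names) are placeholders I made up, not values from this thread.

```shell
# On each host H1..H3: create one iSCSI target per local disk
# (device paths and target names here are hypothetical).
iscsitadm create target -b /dev/rdsk/c1t0d0s2 D0

# On H1: discover the targets of all three hosts. H1's own
# discovery address gives the loopback session mentioned in step 2.
iscsiadm add discovery-address 192.168.1.1:3260   # H1 itself (loopback)
iscsiadm add discovery-address 192.168.1.2:3260   # H2
iscsiadm add discovery-address 192.168.1.3:3260   # H3
iscsiadm modify discovery --sendtargets enable

# The LUNs now show up as local disks (check with `format`),
# so the pool can be built across them.
zpool create tank012 raidz c2tAAAAd0 c2tBBBBd0 c2tCCCCd0

# Share the pool over NFS. Note the property is spelled
# 'sharenfs' and is set on the dataset.
zfs set sharenfs=on tank012
```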
And now the best part... the use cases
--------------------------------
1) H3 crashes: we can still deliver the NFS/Samba service without
   interruption, so we have time to bring H3 back up.
2) H1 crashes: we do:
   - steps 2-3 again on another host (initiate iSCSI, import the zpool)
   - move the IP address of the NFS/Samba server to H3
     + For this we need two kinds of IP addresses:
       - an IP for iSCSI traffic
       - an IP for NFS/Samba traffic
   - serve the zpool with NFS
3) We can easily add new HDs, either by putting them in empty slots of
   H[1-3] or by adding a new host.
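Use case 2 (H1 crashes, H3 takes over) might look like this; again the addresses, interface name and pool name are assumptions for the sketch, not tested configuration.

```shell
# On H3: discover the surviving targets, then re-import the pool
# that H1 was serving. zpool import scans the attached devices,
# which now include the iSCSI LUNs.
iscsiadm add discovery-address 192.168.1.2:3260
iscsiadm add discovery-address 192.168.1.3:3260
iscsiadm modify discovery --sendtargets enable
zpool import tank012

# Move the NFS/Samba service address to H3. This assumes the
# dedicated service IP is separate from the iSCSI-traffic IPs,
# as the steps above require (interface name is hypothetical).
ifconfig e1000g0 addif 192.168.2.10/24 up

# Re-share the pool.
zfs set sharenfs=on tank012
```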
The iscsi target needs a backing store.
Backing store??? Is a backing store something we need to translate an
IDE HD into a SCSI HD, or is it simply the place where the data will be held?
I can think of 3 ways of doing the backing store.
1) Use a file on an existing file system - useful for doing a quick test.
2) Use a dedicated hard drive - You can use 'raw' mode if it is a SCSI disk.
And if it's not... you just use type `disk'. Right?
3) Use a ZFS pool - This is I think the easiest and best way.
Do you think, after having read what I'm trying to do, that this is
still the best idea?
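For reference, the three backing-store methods could be set up roughly as follows. This is a sketch of the legacy Solaris iscsitadm syntax from memory; the directory, target names and sizes are made up for illustration.

```shell
# 1) File backing store: set a base directory for iscsitadm,
#    then have it create a file of the given size as the store.
iscsitadm modify admin -d /export/iscsi
iscsitadm create target -z 1g testtarget

# 2) Dedicated hard drive: point the target at the raw device
#    (device path is a placeholder).
iscsitadm create target -b /dev/rdsk/c1t1d0s2 disktarget

# 3) ZFS zvol: create a volume (-s makes it sparse) and share it;
#    the backing store becomes /dev/zvol/rdsk/tank/iscsi-zvol,
#    as shown in the iscsitadm output later in this thread.
zfs create -s -V 10g tank/iscsi-zvol
zfs set shareiscsi=on tank/iscsi-zvol
```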
Cedric, you seem to be using method 2).
I have used this technique myself, and you can see examples in these posts:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-April/001061.html
http://mail.opensolaris.org/pipermail/storage-discuss/2007-April/001093.html
Here's how I do method 3) on my PC:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SEAGATE-ST318405LW-0105-17.09GB>
/[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci9005,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0
1. c2t0d0 <DEFAULT cyl 9723 alt 2 hd 255 sec 63>
/[EMAIL PROTECTED],0/pci1028,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
Specify disk (enter its number):
# zpool create tank c0t0d0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 17G 88K 17.0G 0% ONLINE -
# zfs create -s -V 10gb tank/iscsi-zvol
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 110K 16.7G 24.5K /tank
tank/iscsi-zvol 22.5K 16.7G 22.5K -
# zfs set shareiscsi=on tank/iscsi-zvol
# iscsitadm list target -v
Target: tank/iscsi-zvol
iSCSI Name: iqn.1986-03.com.sun:02:b0737f95-bd49-e3a3-e75c-b4272e552411
Alias: tank/iscsi-zvol
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 10G
Backing store: /dev/zvol/rdsk/tank/iscsi-zvol
Status: online
Woooahoo, does that mean that if you do the following:
- zpool create
- zfs create -V
- zfs set shareiscsi=on
then if you export this HD from this machine and re-import it on
another one, you will get a ZFS that is already iSCSI-enabled, and
the iSCSI Name won't change!!!!
I'm going to try it... it sounds really good to have iSCSI and a FS
packaged together.
Ok, now the iscsi target PC is sharing out the backing store over the network.
Normally you would run the iscsi initiator on another PC connected across Ethernet.
But for testing, it is possible to run the initiator on the same PC
by just using a discovery address of localhost, 127.0.0.1:
# iscsiadm add discovery-address 127.0.0.1
# iscsiadm modify discovery -t enable
# iscsiadm list target
Target: iqn.1986-03.com.sun:02:b0737f95-bd49-e3a3-e75c-b4272e552411
Alias: tank/iscsi-zvol
TPGT: 1
ISID: 4000002a0000
Connections: 1
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SEAGATE-ST318405LW-0105-17.09GB>
/[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/pci9005,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0
1. c2t0d0 <DEFAULT cyl 9723 alt 2 hd 255 sec 63>
/[EMAIL PROTECTED],0/pci1028,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0
2. c4t010000123F71738800002A0044568CBCd0 <DEFAULT cyl 1303 alt 2 hd 255 sec
63>
/scsi_vhci/[EMAIL PROTECTED]
Specify disk (enter its number):
Ok, now on the initiator PC, you can create a ZFS pool using the
iscsi target.
This looks really weird if the initiator & target are on the same PC!
And even weirder to do ZFS on top of ZFS??
# zpool create tank2 c4t010000123F71738800002A0044568CBCd0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 17G 184K 17.0G 0% ONLINE -
tank2 9.94G 89K 9.94G 0% ONLINE -
So this is effectively ZFS-on-ZFS !
Oh yeah, this makes more and more sense to me.
And then what if I ``un-iscsi'' it?
zpool export mtank
iscsiadm remove discovery-address 127.0.0.1
iscsitadm delete target --lun 0 vol-2
And then I try to re-use c1d0 directly, with no success :(
zpool import mtank
cannot import 'mtank': no such pool available
Cedric, I don't think you are allowed to do that.
I'm not sure why you would want to try - can you
explain more clearly what you are trying to do here?
Okay, this is not really that important a topic. It was related to
step 2) of ``and now I'll show you the steps I was thinking of doing'':
I wanted to avoid mounting the iSCSI device in a loopback manner
(I wasn't that smart at the time!)
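For what it's worth, if the ZFS labels survived on the raw device, two diagnostics might be worth trying before giving up on the import. This is a guess, not something verified in this thread, and c1d0s0 is my assumption about the slice.

```shell
# Dump the four ZFS labels from the raw device; if they print
# cleanly, the pool metadata is still there.
zdb -l /dev/rdsk/c1d0s0

# Ask zpool to scan a specific device directory for importable pools.
zpool import -d /dev/dsk
```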
Thanks
Nigel Smith
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
--
Cedric BRINER
Geneva - Switzerland