[zfs-discuss] Failure to zfs destroy - after interrupting zfs receive

2012-09-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Formerly, if you interrupted a zfs receive, it would leave behind a clone with a % in 
its name; you could find it via zdb -d, destroy the clone, and then destroy the 
filesystem whose receive had been interrupted.

That was considered a bug, and it was fixed, I think by Sun: if the lingering 
clone was discovered lying around, zfs would automatically destroy it.  But 
now I'm encountering a new version of the same problem...

Unfortunately, now I have a filesystem where zfs receive was interrupted, and 
I can't destroy the filesystem or the snapshot of the filesystem on the 
receiving side.

sudo zfs destroy -R tank/Downloads
cannot destroy 'tank/Downloads@zfs-auto-snap_hourly-2012-08-31-17h54': dataset 
already exists

sudo zfs destroy -R tank/Downloads@zfs-auto-snap_hourly-2012-08-31-17h54
cannot destroy snapshot tank/Downloads@zfs-auto-snap_hourly-2012-08-31-17h54: 
snapshot is cloned

sudo zfs list -t all | grep Downloads
tank/Downloads
tank/Downloads@zfs-auto-snap_hourly-2012-08-31-17h54

sudo zdb -d tank/Downloads
Dataset tank/Downloads [ZPL], ID 139, cr_txg 31408, 3.91G, 30 objects
(Notice that I don't get any clones listed.)

I'm running OpenIndiana 151.1.6 (the latest, fully patched a couple of weeks 
ago).



[zfs-discuss] iscsi confusion

2012-09-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
I am confused, because I would have expected a 1-to-1 mapping: if you create an 
iscsi target on some system, you would have to specify which LUN it connects 
to.  But that is not the case...

I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm.  I read some 
online examples, where you first run sbdadm create-lu, which gives you a GUID for 
a specific device in the system, then stmfadm add-view $GUID, and then 
itadm create-target.
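
In other words, a sequence roughly like this (the zvol name and size are made up, 
and $GUID stands for whatever GUID create-lu prints):

sudo zfs create -V 100G tank/lun0               # a zvol to back the LU
sudo sbdadm create-lu /dev/zvol/rdsk/tank/lun0  # prints the GUID of the new LU
sudo stmfadm add-view $GUID                     # export the LU (to everything, by default)
sudo itadm create-target                        # generates an iqn... target name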

It's this last command that confuses me, because it generates an iscsi target 
iqn.blahblah...  And it will create as many targets as you specify, regardless of how 
many LUNs you have available.  So how can I know which device I'm handing out 
to some initiator?  And if an initiator connects to all those different 
iqn.blahblah addresses...  what device will they actually be mucking around 
with?

I'm not quite sure where my thinking goes wrong, but I'm guessing the 
explanation is something like this:

(can anyone tell me if this is the correct interpretation?)

I shouldn't be thinking in such linear terms.  When I create an iscsi target, 
don't think of it as connecting to a device - instead, think of it as sort of a 
channel.  Any initiator connecting to it can see any of the devices that I have 
done add-views on.  But each iscsi target can only be used by one initiator at 
a time.

Is that a good understanding?

Thanks...


Re: [zfs-discuss] iscsi confusion

2012-09-28 Thread Fajar A. Nugraha
On Sat, Sep 29, 2012 at 3:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
 I am confused, because I would have expected a 1-to-1 mapping, if you create
 an iscsi target on some system, you would have to specify which LUN it
 connects to.  But that is not the case...

Nope. One target can have anything from zero (which is kinda useless)
to many LUNs.

 I shouldn't be thinking in such linear terms.  When I create an iscsi
 target, don't think of it as connecting to a device - instead, think of it
 as sort of a channel.  Any initiator connecting to it can see any of the
 devices that I have done add-views on.

Yup
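
If you want to control which initiators see which LUs, that's what host groups 
and target groups are for when you create the view. Roughly like this, from 
memory (double-check the stmfadm man page; the group name and $INITIATOR_IQN 
are just placeholders):

stmfadm create-hg web-servers                        # define a host group
stmfadm add-hg-member -g web-servers $INITIATOR_IQN  # add an initiator to it
stmfadm add-view -h web-servers -n 0 $GUID           # only that group sees this LU, as LUN 0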

  But each iscsi target can only be
 used by one initiator at a time.

Nope. Many people use iscsi to provide shared storage (e.g. for
clustering), where two or more initiators connect to the same target.

-- 
Fajar