Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread Jim Klimov
> > Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
> > Sector size (logical/physical): 512B/512B
> 
> 
> Just to check, did you already try:
> 
> zpool import -d /dev/zvol/rdsk/pool/ poolname
> 

Thanks for the suggestion. As a matter of fact, I had not tried that.
But it hasn't helped (possibly due to partitioning inside the volume):
 
# zpool import -d /dev/zvol/dsk/pool dcpool
cannot import 'dcpool': no such pool available
 
# zpool import -d /dev/zvol/rdsk/pool/ dcpool
cannot import 'dcpool': no such pool available
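 
If the labels really are hidden behind the GPT, I suppose I could
confirm it by dumping the start of partition 1 (sector 256, i.e. the
131kB offset from the parted output I posted earlier) into a file and
pointing zdb at the copy. An untested sketch:
 
# dd if=/dev/zvol/rdsk/pool/dcpool of=/tmp/dcpool-l0 bs=512 skip=256 count=1024
# zdb -l /tmp/dcpool-l0
 
(512K should cover labels 0 and 1; zdb derives the positions of
labels 2 and 3 from the size of the file it is given, so those would
not unpack from a truncated dump.)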
 
//Jim
 
 


Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread a . smith

> Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
> Sector size (logical/physical): 512B/512B



Just to check, did you already try:

zpool import -d /dev/zvol/rdsk/pool/ poolname

?

thanks Andy.






Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
> The volume is exported as whole disk. When given whole disk, zpool
> creates GPT partition table by default. You need to pass the partition
> (not the disk) to zdb.

Yes, that is what seems to be the problem.
However, for zfs volumes (/dev/zvol/rdsk/pool/dcpool) there seems
to be no concept of partitions inside them - those are defined
only for the iSCSI representation, which I want to try to get rid of.
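(E.g. "ls /dev/zvol/rdsk/pool/" shows just the volume nodes
themselves - there are no s0/p1-style entries for them, unlike for
real disks under /dev/rdsk.)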
 
> In Linux you can use kpartx to make the partitions available. I don't
> know the equivalent command in Solaris.

Interesting... If only lofiadm could present not a whole file, but a
given "window" into it ;)
 
 
At least, probing the loopback device as well as the zfs volume
directly with "fdisk", "parted" and such reveals no noticeable
iSCSI service-data overhead in the addressable volume space:
 
# parted /dev/zvol/rdsk/pool/dcpool print
_device_probe_geometry: DKIOCG_PHYGEOM: Inappropriate ioctl for device
Model: Generic Ide (ide)
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End SizeFile system  Name  Flags
 1  131kB   4295GB  4295GB   zfs
 9  4295GB  4295GB  8389kB  

But lofiadm doesn't let me address that partition #1 as a separate device :(
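 
Still, if I recall the label layout right (8K of blank space and an
8K boot block header before the 112K nvlist), the label contents
should be readable in place at sector 256+32 of the volume. A rough,
untested check:
 
# dd if=/dev/zvol/rdsk/pool/dcpool bs=512 skip=288 count=224 2>/dev/null | strings | head
 
I'd expect to see the pool name "dcpool" and nvpair names like
"version" and "pool_guid" in there if so.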
 
Thanks,
//Jim Klimov
 


Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Fajar A. Nugraha
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov  wrote:
> However it seems that there may be some extra data besides the zfs
> pool in the actual volume (I'd at least expect an MBR or GPT, and
> maybe some iSCSI service data as overhead). One way or another,
> the "dcpool" cannot be found in the physical zfs volume:
>
> ===
> # zdb -l /dev/zvol/rdsk/pool/dcpool
>
> 
> LABEL 0
> 
> failed to unpack label 0

The volume is exported as whole disk. When given whole disk, zpool
creates GPT partition table by default. You need to pass the partition
(not the disk) to zdb.
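(With a real whole-disk vdev that means something like
"zdb -l /dev/rdsk/c0t0d0s0" - slice 0 of the EFI label - rather than
the bare c0t0d0; the disk name here is just for illustration. The
catch with a zvol is that no per-partition device node gets created
for it.)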

> So the questions are:
>
> 1) Is it possible to skip iSCSI-over-loopback in this configuration?

Yes. Well, maybe.

In Linux you can use kpartx to make the partitions available. I don't
know the equivalent command in Solaris.
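
For reference, on Linux it would be along these lines (a sketch,
assuming the iSCSI disk shows up as /dev/sdb):

# kpartx -av /dev/sdb
# zpool import -d /dev/mapper dcpool

kpartx creates /dev/mapper/sdb1 (the zfs partition) and
/dev/mapper/sdb9 (the small reserved one), and import can then scan
that directory.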

-- 
Fajar


[zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
I have an oi_148a test box with a pool on physical HDDs, a volume in
this pool shared over iSCSI with explicit commands (sbdadm and such),
and this iSCSI target is initiated by the same box. In the resulting iSCSI
device I have another ZFS pool "dcpool".
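 
(The setup was along these lines - GUIDs and addresses omitted, so
treat this as a sketch rather than a transcript:
 
# sbdadm create-lu /dev/zvol/rdsk/pool/dcpool
# stmfadm add-view <lu-guid>
# itadm create-target
# iscsiadm add discovery-address 127.0.0.1
# iscsiadm modify discovery --sendtargets enable
 
after which the LUN appears as a regular disk, on which dcpool was
created.)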

Recently I found the iSCSI part to be a potential bottleneck in my pool
operations and wanted to revert to using ZFS volume directly as the
backing store for "dcpool".

However it seems that there may be some extra data besides the zfs
pool in the actual volume (I'd at least expect an MBR or GPT, and
maybe some iSCSI service data as overhead). One way or another,
the "dcpool" cannot be found in the physical zfs volume:

===
# zdb -l /dev/zvol/rdsk/pool/dcpool 


LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3
===

So the questions are:

1) Is it possible to skip iSCSI-over-loopback in this configuration?
Ideally I would just specify a fixed offset (the byte in the volume
at which the "dcpool" data starts), remove the iSCSI/networking
overheads, and see whether they were the bottleneck.
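 
Even a crude probe like the following (untested) should tell whether
the pool's labels are simply shifted into the volume rather than
gone:
 
# dd if=/dev/zvol/rdsk/pool/dcpool bs=1024k count=1 2>/dev/null | strings | grep dcpool
 
If the name turns up somewhere in the first megabyte, it is just a
matter of finding the right offset.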

2) This configuration "zpool -> iSCSI -> zvol" was initially proposed
by Darren Moffat as preferable to direct volume access, being the
fully supported way; see the last comments here:
http://blogs.oracle.com/darren/entry/compress_encrypt_checksum_deduplicate_with

I still wonder why: presumably the overhead is deemed negligible, and
more options become readily available, such as mounting the iSCSI
device from another server? Now that I've hit this problem while
reverting to direct volume access, that reasoning makes more sense ;)

Thanks in advance for ideas or clarifications,
//Jim Klimov

