[zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Dick Hoogendijk
I have some ZFS datasets that are shared through CIFS/NFS. So I created 
them with sharenfs/sharesmb options.


I have full access from windows (through cifs) to the datasets, however, 
all files and directories are created with (UNIX) permissions of 
(--)/(d--). So, although I can access the files now from my 
windows machines, I can -NOT- access the same files with NFS.
I know I gave myself full permissions in the ACL list. That's why 
sharesmb works I guess. But what do I have to do to make -BOTH- work?
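
For reference, the sort of inheritable ACL I mean looks roughly like this
(the dataset path, file name and user name below are just placeholders):

# chmod -R A+user:dick:full_set:file_inherit/dir_inherit:allow /tank/data
# ls -V /tank/data/somefile.txt

(ls -V shows what the CIFS-created files actually end up with.)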


--
+ All that's really worth doing is what we do for others (Lewis Carroll)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski

On 16/02/2010 21:54, Jeff Bonwick wrote:

People used fastfs for years in specific environments (hopefully
understanding the risks), and disabling the ZIL is safer than fastfs.
Seems like it would be a useful ZFS dataset parameter.
 

We agree.  There's an open RFE for this:

6280630 zil synchronicity

No promise on date, but it will bubble to the top eventually.

   


So everyone knows - it has been integrated into snv_140 :)
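
A quick illustrative sketch of the per-dataset 'sync' property this delivers
(dataset name is made up):

# zfs set sync=disabled tank/scratch
# zfs get sync tank/scratch
# zfs set sync=standard tank/scratch

'standard' is the default behaviour, 'always' makes every write synchronous,
and 'disabled' turns the ZIL off for that dataset only.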

--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Richard L. Hamilton
[...]
 To answer Richard's question, if you have to rename a pool during
 import due to a conflict, the only way to change it back is to
 re-import it with the original name. You'll have to either export the
 conflicting pool, or (if it's rpool) boot off of a LiveCD which
 doesn't use an rpool to do the rename.

Thanks.  The latter is what I ended up doing (well,
off of the SXCE install CD image that I'd used to set up that
disk image in VirtualBox in the first place).
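
For the archives, the rename-back itself is just an export and an import
under the original name once booted from the other media; a sketch, assuming
the pool had been imported as rpool2:

# zpool export rpool2
# zpool import rpool2 rpool
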
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Victor Latushkin

On May 4, 2010, at 2:02 PM, Robert Milkowski wrote:

 On 16/02/2010 21:54, Jeff Bonwick wrote:
 People used fastfs for years in specific environments (hopefully
 understanding the risks), and disabling the ZIL is safer than fastfs.
 Seems like it would be a useful ZFS dataset parameter.
 
 We agree.  There's an open RFE for this:
 
 6280630 zil synchronicity
 
 No promise on date, but it will bubble to the top eventually.
 
   
 
 So everyone knows - it has been integrated into snv_140 :)

Congratulations, Robert!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Hi,

I am posting my question to both storage-discuss and zfs-discuss as I am not 
quite sure what is causing the messages I am receiving.

I have recently migrated my zfs volume from b104 to b134 and upgraded it from 
zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 
'vol01/zvol02'. 
During zpool import I am getting a non-zero exit code; however, the volume is 
imported successfully. Could you please help me understand what could be the 
reason for those messages?

r...@san01a:/export/home/admin#zpool import vol01
r...@san01a:/export/home/admin#cannot share 'vol01/zvol01': iscsitgtd failed 
request to share 
r...@san01a:/export/home/admin#cannot share 'vol01/zvol02': iscsitgtd failed 
request to share

Many thanks,
Przem
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
It does not look like it is:

r...@san01a:/export/home/admin# svcs -a | grep iscsi
online May_01   svc:/network/iscsi/initiator:default
online May_01   svc:/network/iscsi/target:default

_
Przem




From: Rick McNeal [ramcn...@gmail.com]
Sent: 04 May 2010 13:14
To: Przemyslaw Ceglowski
Subject: Re: [storage-discuss] iscsitgtd failed request to share on zpool 
import after upgrade from b104 to b134

Look and see if the target daemon service is still enabled. COMSTAR has been 
the official SCSI target project for a while now. In fact, the old iscsitgtd 
was removed in build 136.

Rick McNeal


On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski prze...@ceglowski.net wrote:

 Hi,

 I am posting my question to both storage-discuss and zfs-discuss as I am not 
 quite sure what is causing the messages I am receiving.

 I have recently migrated my zfs volume from b104 to b134 and upgraded it 
 from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 
 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code; however, the volume is 
 imported successfully. Could you please help me understand what could be 
 the reason for those messages?

 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01': iscsitgtd failed 
 request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02': iscsitgtd failed 
 request to share

 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Matt Keenan

Hi,

Just wondering whether mirroring a USB drive with main laptop disk for 
backup purposes is recommended or not.


Current setup, single root pool set up on 200GB internal laptop drive :

$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c5t0d0s0  ONLINE   0 0 0


I have a 320GB external USB drive which I'd like to configure as a 
mirror of this root pool (I know it will only use 200GB of the external 
one, not worried about that).


Plan would be to connect the USB drive, once or twice a week, let it 
resilver, and then disconnect again. Connecting USB drive 24/7 would 
AFAIK have performance issues for the Laptop.


This would have the added benefit of the USB drive being bootable.
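
Roughly, I expect the cycle to look like this (the USB device name below is
just a placeholder):

# zpool attach rpool c5t0d0s0 c6t0d0s0
# zpool status rpool             (wait for the resilver to finish)
# zpool offline rpool c6t0d0s0   (before unplugging; 'zpool online' after reconnecting)

plus installing boot blocks on the USB disk once, e.g. on x86:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6t0d0s0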

- Recommended or not ?
- Are there known issues with this type of setup ?


cheers

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Peter Karlsson

Hi Matt,

Don't know if it's recommended or not, but I've been doing it for close 
to 3 years on my OpenSolaris laptop, and it saved me a few times, like last 
week when my internal drive died :)


/peter

On 2010-05-04 20.33, Matt Keenan wrote:

Hi,

Just wondering whether mirroring a USB drive with main laptop disk for
backup purposes is recommended or not.

Current setup, single root pool set up on 200GB internal laptop drive :

$ zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0


I have a 320GB external USB drive which I'd like to configure as a
mirror of this root pool (I know it will only use 200GB of the external
one, not worried about that).

Plan would be to connect the USB drive, once or twice a week, let it
resilver, and then disconnect again. Connecting USB drive 24/7 would
AFAIK have performance issues for the Laptop.

This would have the added benefit of the USB drive being bootable.

- Recommended or not ?
- Are there known issues with this type of setup ?


cheers

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-05-04 Thread Kyle McDonald
On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
 valrh...@gmail.com valrh...@gmail.com writes:

   
 I have been using DVDs for small backups here and there for a decade
 now, and have a huge pile of several hundred. They have a lot of
 overlapping content, so I was thinking of feeding the entire stack
 into some sort of DVD autoloader, which would just read each disk, and
 write its contents to a ZFS filesystem with dedup enabled. [...] That
 would allow me to consolidate a few hundred CDs and DVDs onto probably
 a terabyte or so, which could then be kept conveniently on a hard
 drive and archived to tape.
 
 it would be inconvenient to make a dedup copy on harddisk or tape, you
 could only do it as a ZFS filesystem or ZFS send stream.  it's better to
 use a generic tool like hardlink(1), and just delete files afterwards
 with

   
There is a perl script that has been floating around the internet for years
that will convert copies of files on the same FS to hardlinks (sorry, I don't
have the name handy), so you don't need ZFS. Once this is done you can
even recreate an ISO and burn it back to DVD (possibly merging hundreds
of CDs into one DVD or BD!). The script can also delete the
duplicates, but there isn't much control over which one it keeps - for
backups you may really want to keep the earliest (or latest?) backup the
file appeared in.

Using ZFS dedup is an interesting way of doing this. However, archiving
the result may be hard. If you use different datasets (FSes) for each
backup, can you only send one dataset at a time (since you can only
snapshot at the dataset level)? Won't that 'undo' the deduping?
 
If you instead put all the backups on one dataset, then the snapshot can
theoretically contain the deduped data. I'm not clear on whether
'send'ing it will preserve the deduping or not - or if it's up to the
receiving dataset to recognize matching blocks. If the dedup is in the
stream, then you may be able to write the stream to a DVD or BD.
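
As a hedged aside, newer builds have a deduplicated-stream flag for send, so
something along these lines might keep the archived stream small (snapshot
name and target path are made up):

# zfs snapshot tank/dvds@archive
# zfs send -D tank/dvds@archive > /backup/dvds-archive.zstream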

Still if you save enough space so that you can add the required level of
redundancy, you could just leave it on disk and chuck the DVD's. Not
sure I'd do that, but it might let me put the media in the basement,
instead of the closet, or on the desk next to me.

  -Kyle


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-05-04 Thread Scott Steagall
On 05/04/2010 09:29 AM, Kyle McDonald wrote:
 On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote:
 valrh...@gmail.com valrh...@gmail.com writes:

   
 I have been using DVDs for small backups here and there for a decade
 now, and have a huge pile of several hundred. They have a lot of
 overlapping content, so I was thinking of feeding the entire stack
 into some sort of DVD autoloader, which would just read each disk, and
 write its contents to a ZFS filesystem with dedup enabled. [...] That
 would allow me to consolidate a few hundred CDs and DVDs onto probably
 a terabyte or so, which could then be kept conveniently on a hard
 drive and archived to tape.
 
 it would be inconvenient to make a dedup copy on harddisk or tape, you
 could only do it as a ZFS filesystem or ZFS send stream.  it's better to
 use a generic tool like hardlink(1), and just delete files afterwards
 with

   
 There is a perl script that has been floating around the internet for years
 that will convert copies of files on the same FS to hardlinks (sorry, I don't
 have the name handy), so you don't need ZFS. Once this is done you can
 even recreate an ISO and burn it back to DVD (possibly merging hundreds
 of CDs into one DVD or BD!). The script can also delete the
 duplicates, but there isn't much control over which one it keeps - for
 backups you may really want to keep the earliest (or latest?) backup the
 file appeared in.

I've used Dirvish http://www.dirvish.org/ and rsync to do just
that...worked great!

Scott

 
 Using ZFS dedup is an interesting way of doing this. However, archiving
 the result may be hard. If you use different datasets (FSes) for each
 backup, can you only send one dataset at a time (since you can only
 snapshot at the dataset level)? Won't that 'undo' the deduping?
  
 If you instead put all the backups on one dataset, then the snapshot can
 theoretically contain the deduped data. I'm not clear on whether
 'send'ing it will preserve the deduping or not - or if it's up to the
 receiving dataset to recognize matching blocks. If the dedup is in the
 stream, then you may be able to write the stream to a DVD or BD.
 
 Still if you save enough space so that you can add the required level of
 redundancy, you could just leave it on disk and chuck the DVD's. Not
 sure I'd do that, but it might let me put the media in the basement,
 instead of the closet, or on the desk next to me.
 
   -Kyle
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Richard Elling
On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:

 It does not look like it is:
 
 r...@san01a:/export/home/admin# svcs -a | grep iscsi
 online May_01   svc:/network/iscsi/initiator:default
 online May_01   svc:/network/iscsi/target:default

This is COMSTAR.

 _
 Przem
 
 
 
 
 From: Rick McNeal [ramcn...@gmail.com]
 Sent: 04 May 2010 13:14
 To: Przemyslaw Ceglowski
 Subject: Re: [storage-discuss] iscsitgtd failed request to share on zpool 
 import after upgrade from b104 to b134
 
 Look and see if the target daemon service is still enabled. COMSTAR has been 
 the official scsi target project for a while now. In fact, the old 
 iscscitgtd was removed in build 136.

For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
 -- richard

 
 Rick McNeal
 
 
 On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski prze...@ceglowski.net 
 wrote:
 
 Hi,
 
 I am posting my question to both storage-discuss and zfs-discuss as I am 
 not quite sure what is causing the messages I am receiving.
 
 I have recently migrated my zfs volume from b104 to b134 and upgraded it 
 from zfs version 14 to 22. It consist of two zvol's 'vol01/zvol01' and 
 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code, however the volume 
 is imported successfuly. Could you please help me to understand what could 
 be the reason of those messages?
 
 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01': iscsitgtd 
 failed request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02': iscsitgtd 
 failed request to share
 
 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
ZFS storage and performance consulting at http://www.RichardElling.com










___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
On May 4, 2010, at 2:43 PM, Richard Elling wrote:

On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:

 It does not look like it is:

 r...@san01a:/export/home/admin# svcs -a | grep iscsi
 online May_01   svc:/network/iscsi/initiator:default
 online May_01   svc:/network/iscsi/target:default

This is COMSTAR.

Thanks Richard, I am aware of that.


 _
 Przem



 
 From: Rick McNeal [ramcn...@gmail.com]
 Sent: 04 May 2010 13:14
 To: Przemyslaw Ceglowski
 Subject: Re: [storage-discuss] iscsitgtd failed request to share on zpool 
 import after upgrade from b104 to b134

 Look and see if the target daemon service is still enabled. COMSTAR has 
 been the official scsi target project for a while now. In fact, the old 
 iscscitgtd was removed in build 136.

For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
 -- richard

It does not answer my original question.
-- Przem



 Rick McNeal


 On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski prze...@ceglowski.net 
 wrote:

 Hi,

 I am posting my question to both storage-discuss and zfs-discuss as I am 
 not quite sure what is causing the messages I am receiving.

 I have recently migrated my zfs volume from b104 to b134 and upgraded it 
 from zfs version 14 to 22. It consist of two zvol's 'vol01/zvol01' and 
 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code, however the volume 
 is imported successfuly. Could you please help me to understand what could 
 be the reason of those messages?

 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01': iscsitgtd 
 failed request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02': iscsitgtd 
 failed request to share

 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
ZFS storage and performance consulting at http://www.RichardElling.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Jim Dunham
Przem,

 On May 4, 2010, at 2:43 PM, Richard Elling wrote:
 
 On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
 
 It does not look like it is:
 
 r...@san01a:/export/home/admin# svcs -a | grep iscsi
 online May_01   svc:/network/iscsi/initiator:default
 online May_01   svc:/network/iscsi/target:default
 
 This is COMSTAR.
 
 Thanks Richard, I am aware of that.

Since you upgraded to b134, not b136, the iSCSI Target Daemon is still around, 
just not installed on your system.

Because of IPS packaging changes, the iSCSI Target Daemon (among other 
things) is no longer installed by default. It is contained in the IPS package 
known as either SUNWiscsitgt or network/iscsi/target/legacy. Visit your local 
package repository for updates: http://pkg.opensolaris.org/dev/

Of course, starting with build 136 the iSCSI Target Daemon (and ZFS shareiscsi) 
is gone, so you will need to reconfigure your two zvols 'vol01/zvol01' and 
'vol01/zvol02' under COMSTAR soon.

http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration
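
If the legacy daemon is the preferred stop-gap on b134, a minimal sketch of
pulling it back in would be (verify the package and service names against
your repository with 'pkg search' and 'svcs -a'):

# pkg install SUNWiscsitgt
# svcadm enable svc:/system/iscsitgt:default
# svcs iscsitgt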

- Jim


 
 
 _
 Przem
 
 
 
 
 From: Rick McNeal [ramcn...@gmail.com]
 Sent: 04 May 2010 13:14
 To: Przemyslaw Ceglowski
 Subject: Re: [storage-discuss] iscsitgtd failed request to share on zpool 
 import after upgrade from b104 to b134
 
 Look and see if the target daemon service is still enabled. COMSTAR has 
 been the official scsi target project for a while now. In fact, the old 
 iscscitgtd was removed in build 136.
 
 For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
 -- richard
 
 It does not answer my original question.
 -- Przem
 
 
 
 Rick McNeal
 
 
 On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski prze...@ceglowski.net 
 wrote:
 
 Hi,
 
 I am posting my question to both storage-discuss and zfs-discuss as I am 
 not quite sure what is causing the messages I am receiving.
 
 I have recently migrated my zfs volume from b104 to b134 and upgraded it 
 from zfs version 14 to 22. It consist of two zvol's 'vol01/zvol01' and 
 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code, however the volume 
 is imported successfuly. Could you please help me to understand what 
 could be the reason of those messages?
 
 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01': iscsitgtd 
 failed request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02': iscsitgtd 
 failed request to share
 
 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 --
 ZFS storage and performance consulting at http://www.RichardElling.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn

On Mon, 3 May 2010, Edward Ned Harvey wrote:


That's precisely the opposite of what I thought.  Care to explain?

If you have a primary OS disk, and you apply OS Updates ... in order to
access those updates in Sol10, you need a registered account and login, with
paid solaris support.  Then, if you boot a removable hard disk, and you wish
to apply updates to keep it at the same rev as the primary OS ... you've got
to once again enter your Sol10 update download credentials, and I don't
presume it works, or will always work for a 2nd installation of Sol10.
Aren't you supposed to pay for support on each OS installation?  Doesn't
that mean you'd have to pay a separate support contract for the removable
boot hard drive?


The Solaris 10 licensing situation has changed dramatically in recent 
months.  It used to be that anyone was always eligible for security 
updates and the core kernel was always marked as a security update. 
Now the only eligibility for use of Solaris 10 is either via an 
existing service contract, or an interim 90-day period (with 
registration) intended for product evaluation.  It is pretty common 
for the Solaris 10 installation from media to support an older version 
of zfs than the kernel now running on the system (which was updated 
via a patch).  Due to the new Solaris 10 license and the potential 
need to download and apply a patch, issues emerge if this maintenance 
needs to be done after a service contract (or the 90-day eval 
entitlement) has expired.  As a result, it is wise for Solaris 10 
users to maintain a local repository of licensed patches in case their 
service contract should expire.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn

On Mon, 3 May 2010, Richard Elling wrote:


This is not a problem on Solaris 10. It can affect OpenSolaris, though.


That's precisely the opposite of what I thought.  Care to explain?


In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
not shared with other boot environments.


Richard,

You have fallen out of touch with Solaris 10, which is still a moving 
target.  While the Live Upgrade commands you are familiar with in 
Solaris 10 still mostly work as before, they *do* take advantage of 
zfs's features and boot environments do share the same root pool just 
like in OpenSolaris.  Solaris 10 Live Upgrade is dramatically improved 
in conjunction with zfs boot.  I am not sure how far behind it is from 
OpenSolaris's new boot administration tools, but under zfs its function 
cannot be terribly different.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen

Brandon,

Using beadm to migrate your BEs to another root pool (and then
performing all the steps to get the system to boot) is different
than just outright renaming your existing root pool on import.

Since pool renaming isn't supported, I don't think we have identified
all the boot/mount-at-boot components that need to be changed.

Cindy

On 05/03/10 18:34, Brandon High wrote:

On Mon, May 3, 2010 at 9:13 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:

Renaming the root pool is not recommended. I have some details on what
actually breaks, but I can't find it now.


Really? I asked about using a new pool for the rpool, and there were
some comments that it works fine. In fact, you'd suggested using beadm
to move the BE to the new pool.

On x86, grub looks at the findroot command, which checks
/rpool/boot/grub/bootsign/ (See
http://docs.sun.com/app/docs/doc/819-2379/ggvms?a=view)
The zpool should have the bootfs property set (although I've had it
work without this set). (See
http://docs.sun.com/app/docs/doc/819-2379/ggqhp?l=ena=view)

To answer Richard's question, if you have to rename a pool during
import due to a conflict, the only way to change it back is to
re-import it with the original name. You'll have to either export the
conflicting pool, or (if it's rpool) boot off of a LiveCD which
doesn't use an rpool to do the rename.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Jim,

On May 4, 2010, at 3:45 PM, Jim Dunham wrote:

 
 On May 4, 2010, at 2:43 PM, Richard Elling wrote:
 
 On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
 
  It does not look like it is:
 
  r...@san01a:/export/home/admin# svcs -a | grep iscsi
  online May_01   svc:/network/iscsi/initiator:default
  online May_01   svc:/network/iscsi/target:default
 
 This is COMSTAR.
 
 Thanks Richard, I am aware of that.

Since you upgrade to b134, not b136 the iSCSI Target Daemon is still around, 
just not on our system.

IPS packaging changes have not installed the iSCSI Target Daemon (among other 
things) by default. It is contained in IPS package known as either 
SUNWiscsitgt or network/iscsi/target/legacy. Visit your local package 
repository for updates: http://pkg.opensolaris.org/dev/

Of course starting with build 136..., iSCSI Target Daemon (and ZFS shareiscsi) 
are gone, so you will need to reconfigure your two ZVOLs 'vol01/zvol01' and 
'vol01/zvol02', under COMSTAR soon.

http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration

- Jim

The migrated zvols were already running under COMSTAR on b104, which 
makes me wonder even more. Is there any way I can get rid of those messages?


 

 _
 Przem



 
 From: Rick McNeal [ramcn...@gmail.com]
 Sent: 04 May 2010 13:14
 To: Przemyslaw Ceglowski
 Subject: Re: [storage-discuss] iscsitgtd failed request to share on
 zpool import after upgrade from b104 to b134

 Look and see if the target daemon service is still enabled. COMSTAR
 has been the official scsi target project for a while now. In fact, the
 old iscscitgtd was removed in build 136.

For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
 -- richard
 
 It does not answer my original question.
 -- Przem
 


 Rick McNeal


 On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski
prze...@ceglowski.net wrote:

 Hi,

 I am posting my question to both storage-discuss and zfs-discuss
as I am not quite sure what is causing the messages I am receiving.

 I have recently migrated my zfs volume from b104 to b134 and
upgraded it from zfs version 14 to 22. It consist of two zvol's
'vol01/zvol01' and 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code, however the
volume is imported successfuly. Could you please help me to understand
what could be the reason of those messages?

 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01':
iscsitgtd failed request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02':
iscsitgtd failed request to share

 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
ZFS storage and performance consulting at http://www.RichardElling.com
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread eXeC001er
Perhaps the problem is that the old version of the pool had shareiscsi, but the
new version does not have this option, and to share a LUN via iSCSI you need to
set up LUN mapping.
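
A rough sketch of that LUN mapping under COMSTAR, assuming the LU metadata is
already on the zvol from the earlier setup (the GUID below is a placeholder
for the one sbdadm prints):

# sbdadm import-lu /dev/zvol/rdsk/vol01/zvol01
# stmfadm add-view 600144f0XXXXXXXXXXXXXXXXXXXXXXXX
# stmfadm list-view -l 600144f0XXXXXXXXXXXXXXXXXXXXXXXX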



2010/5/4 Przemyslaw Ceglowski prze...@ceglowski.net

 Jim,

 On May 4, 2010, at 3:45 PM, Jim Dunham wrote:

 
  On May 4, 2010, at 2:43 PM, Richard Elling wrote:
 
  On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
  
   It does not look like it is:
  
   r...@san01a:/export/home/admin# svcs -a | grep iscsi
   online May_01   svc:/network/iscsi/initiator:default
   online May_01   svc:/network/iscsi/target:default
  
  This is COMSTAR.
 
  Thanks Richard, I am aware of that.
 
 Since you upgrade to b134, not b136 the iSCSI Target Daemon is still
 around, just not on our system.
 
 IPS packaging changes have not installed the iSCSI Target Daemon (among
 other things) by default. It is contained in IPS package known as either
 SUNWiscsitgt or network/iscsi/target/legacy. Visit your local package
 repository for updates: http://pkg.opensolaris.org/dev/
 
 Of course starting with build 136..., iSCSI Target Daemon (and ZFS
 shareiscsi) are gone, so you will need to reconfigure your two ZVOLs
 'vol01/zvol01' and 'vol01/zvol02', under COMSTAR soon.
 
 
 http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
 http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration
 
 - Jim

 The migrated zVols have been running under COMSTAR originally on b104 which
 makes me wonder even more. Is there any way I can get rid of those messages?

 
 
 
  _
  Przem
 
 
 
  
  From: Rick McNeal [ramcn...@gmail.com]
  Sent: 04 May 2010 13:14
  To: Przemyslaw Ceglowski
  Subject: Re: [storage-discuss] iscsitgtd failed request to share on
  zpool import after upgrade from b104 to b134
 
  Look and see if the target daemon service is still enabled. COMSTAR
  has been the official scsi target project for a while now. In fact, the
  old iscscitgtd was removed in build 136.
 
 For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
  -- richard
 
  It does not answer my original question.
  -- Przem
 
 
 
  Rick McNeal
 
 
  On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski
 prze...@ceglowski.net wrote:
 
  Hi,
 
  I am posting my question to both storage-discuss and zfs-discuss
 as I am not quite sure what is causing the messages I am receiving.
 
  I have recently migrated my zfs volume from b104 to b134 and
 upgraded it from zfs version 14 to 22. It consist of two zvol's
 'vol01/zvol01' and 'vol01/zvol02'.
  During zpool import I am getting a non-zero exit code, however the
 volume is imported successfuly. Could you please help me to understand
 what could be the reason of those messages?
 
  r...@san01a:/export/home/admin#zpool import vol01
  r...@san01a:/export/home/admin#cannot share 'vol01/zvol01':
 iscsitgtd failed request to share
  r...@san01a:/export/home/admin#cannot share 'vol01/zvol02':
 iscsitgtd failed request to share
 
  Many thanks,
  Przem
  ___
  storage-discuss mailing list
  storage-disc...@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/storage-discuss
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 --
 ZFS storage and performance consulting at http://www.RichardElling.com
  ___
  storage-discuss mailing list
  storage-disc...@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
My pool panic'd while updating to Lucid Lynx hosted inside an iSCSI LUN. And 
now it won't come back up. I have dedup and compression on.

These are my current findings:
* iostat -En won't list 8 of my disks
* zdb lists all my disks except my cache device
* The following commands panic the box in single-user mode: format, zfs, zpool 
and zdb -l. Multi-user mode panics before reading the ZFS config.
* Unplugging all devices belonging to the pool brings up the host to multi-user 
mode and lists my pool as UNAVAIL.

I've scavenged the net for ways of extracting useful information that might be of use.

I suspect it has something to do with the DDT table.

Best Regards
Michael

zdb output:
rpool:
version: 22
name: 'rpool'
state: 0
txg: 10643295
pool_guid: 16751367988873007995
hostid: 13336047
hostname: ''
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 16751367988873007995
children[0]:
type: 'mirror'
id: 0
guid: 6639969804249231424
whole_disk: 0
metaslab_array: 23
metaslab_shift: 31
ashift: 9
asize: 250956742656
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 14476065696483338328
path: '/dev/dsk/c14d0s0'
devid: 'id1,c...@awdc_wd2500yd-01nvb1=_wd-wcank4006148/a'
phys_path: 
'/p...@0,0/pci10de,7...@8/pci-...@9/i...@0/c...@0,0:a'
whole_disk: 0
DTL: 78
children[1]:
type: 'disk'
id: 1
guid: 10422182008705867883
path: '/dev/dsk/c16d0s0'
devid: 'id1,c...@awdc_wd2500yd-01nvb1=_wd-wcank5135915/a'
phys_path: 
'/p...@0,0/pci10de,7...@8/pci-...@9/i...@1/c...@0,0:a'
whole_disk: 0
DTL: 173
tank:
version: 22
name: 'tank'
state: 0
txg: 36636297
pool_guid: 10904371515657913150
hostid: 13336047
hostname: 'zen'
vdev_children: 3
vdev_tree:
type: 'root'
id: 0
guid: 10904371515657913150
children[0]:
type: 'raidz'
id: 0
guid: 4940983256616168565
nparity: 1
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 2560443285504
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 7633768960477747795
path: '/dev/dsk/c13t4d0s0'
devid: 
'id1,s...@sata_wdc_wd6400aacs-0_wd-wcauf0933938/a'
phys_path: 
'/p...@0,0/pci10de,7...@13/pci1033,1...@0/pci11ab,1...@1/d...@4,0:a'
whole_disk: 1
DTL: 4268
children[1]:
type: 'disk'
id: 1
guid: 12141479741527311128
path: '/dev/dsk/c13t5d0s0'
devid: 
'id1,s...@sata_wdc_wd6400aacs-0_wd-wcauf0934597/a'
phys_path: 
'/p...@0,0/pci10de,7...@13/pci1033,1...@0/pci11ab,1...@1/d...@5,0:a'
whole_disk: 1
DTL: 4267
children[2]:
type: 'disk'
id: 2
guid: 7952488001712683172
path: '/dev/dsk/c13t6d0s0'
devid: 
'id1,s...@sata_wdc_wd6400aacs-0_wd-wcauf0934679/a'
phys_path: 
'/p...@0,0/pci10de,7...@13/pci1033,1...@0/pci11ab,1...@1/d...@6,0:a'
whole_disk: 1
DTL: 4266
children[3]:
type: 'disk'
id: 3
guid: 535039729687145914
path: '/dev/dsk/c13t7d0s0'
devid: 
'id1,s...@sata_wdc_wd6400aacs-0_wd-wcauf0931654/a'
phys_path: 
'/p...@0,0/pci10de,7...@13/pci1033,1...@0/pci11ab,1...@1/d...@7,0:a'
whole_disk: 1
DTL: 4265
children[1]:
type: 'raidz'
id: 1
guid: 6936009139020911476
nparity: 1
metaslab_array: 4097
metaslab_shift: 34
ashift: 9
asize: 2000373678080
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 4043674464412192471
path: '/dev/dsk/c13t3d0s0'
devid: 
'id1,s...@sata_samsung_hd103si___s1vsj90sc22045/a'
phys_path: 
'/p...@0,0/pci10de,7...@13/pci1033,1...@0/pci11ab,1...@1/d...@3,0:a'
whole_disk: 1
DTL: 8198
children[1]:
type: 'disk'
id: 1
guid: 7230587084054299877
path: '/dev/dsk/c13t1d0s0'
devid: 
'id1,s...@sata_wdc_wd5001aals-0_wd-wmasy3260051/a'
phys_path: 

Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Brandon High
On Tue, May 4, 2010 at 7:19 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
 Using beadm to migrate your BEs to another root pool (and then
 performing all the steps to get the system to boot) is different
 than just outright renaming your existing root pool on import.

Does beadm take care of all the other steps that need to happen? I
imagine you'd have to keep rpool around otherwise ...

I ended up doing an offline copy to a new pool, which I renamed to
rpool at the end to avoid any problems

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Performance of the ZIL

2010-05-04 Thread Tony MacDoodle
How would one determine if I should have a separate ZIL disk? We are using
ZFS as the backend for our guest domains' boot drives using LDoms, and we are
seeing bad/very slow write performance.

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Brandon High
On Tue, May 4, 2010 at 10:19 AM, Tony MacDoodle tpsdoo...@gmail.com wrote:
 How would one determine if I should have a separate ZIL disk? We are using
 ZFS as the backend of our Guest Domains boot drives using LDom's. And we are
 seeing bad/very slow write performance?

There's a dtrace script that Richard Elling wrote called zilstat.ksh.
It's available at
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat

I'm not sure what the numbers mean (there's info at the address) but
anything other than lots of 0s indicates that the ZIL is being used.
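
A minimal usage sketch, assuming the script has been downloaded and made
executable (see the page above for the exact options):

# ./zilstat.ksh 1 10

That prints one line per second for ten samples; consistently non-zero byte
counts mean synchronous writes are hitting the ZIL, which is where a separate
log device could help.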

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen

No, beadm doesn't take care of all the steps that I provided
previously and included below.

Cindy

You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to boot from new root pool

The BE recreation gets you part of the way, and it's fast, anyway.


1. Create the second root pool.

# zpool create rpool2 c5t1d0s0

2. Create the new BE in the second root pool.

# beadm create -p rpool2 osol2BE

3. Activate the new BE.

# beadm activate osol2BE

4. Install the boot blocks.

5. Test that the system boots from the second root pool.

6. Update BIOS and GRUB to boot from new pool.
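
For step 4 on an x86 system, a sketch using the same disk as step 1 would be
(SPARC systems use installboot instead):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0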

On 05/04/10 11:04, Brandon High wrote:

On Tue, May 4, 2010 at 7:19 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:

Using beadm to migrate your BEs to another root pool (and then
performing all the steps to get the system to boot) is different
than just outright renaming your existing root pool on import.


Does beadm take care of all the other steps that need to happen? I
imagine you'd have to keep rpool around otherwise ...

I ended up doing an offline copy to a new pool, which I renamed to
rpool at the end to avoid any problems

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
HI,

I have a question I cannot seem to find an answer to.

I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.

I know if I set up the ZIL on SSD and the SSD goes bad, the ZIL will be 
relocated back to the pool.  I'd probably have it mirrored anyway, just in 
case.  However you cannot mirror the L2ARC, so...

What I want to know, is what happens if one of those SSD's goes bad?  What 
happens to the L2ARC?  Is it just taken offline, or will it continue to perform 
even with one drive missing?

Sorry, if these questions have been asked before, but I cannot seem to find an 
answer.
Mike

---
Michael Sullivan   
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Tomas Ögren
On 05 May, 2010 - Michael Sullivan sent me these 0,9K bytes:

 HI,
 
 I have a question I cannot seem to find an answer to.
 
 I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
 
 I know if I set up ZIL on SSD and the SSD goes bad, the the ZIL will
 be relocated back to the spool.  I'd probably have it mirrored anyway,
 just in case.  However you cannot mirror the L2ARC, so...

Given a recent enough OpenSolaris build, that is. Otherwise, your pool is screwed, IIRC.

 What I want to know, is what happens if one of those SSD's goes bad?
 What happens to the L2ARC?  Is it just taken offline, or will it
 continue to perform even with one drive missing?

The L2ARC is a pure cache: if it gives bad data (checksum error), it
will be ignored, and if you yank it, it will be ignored. It's very safe to
have crap hardware there (as long as it doesn't start messing up some
bus or similar). Cache devices can be added/removed at any time as well.
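
For reference, a sketch of that add/remove, with made-up pool and device names:

# zpool add tank cache c2t0d0
# zpool remove tank c2t0d0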

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Does anybody have an idea what I can do about it?

On 04/05/2010 16:43, eXeC001er execoo...@gmail.com wrote:

 Perhaps the problem is that the old version of pool have shareiscsi, but new
 version have not this option, and for share LUN via iscsi you need to make
 lun-mapping.
 
 
 
 2010/5/4 Przemyslaw Ceglowski
 prze...@ceglowski.netmailto:prze...@ceglowski.net
 Jim,
 
 On May 4, 2010, at 3:45 PM, Jim Dunham wrote:
 
 
 On May 4, 2010, at 2:43 PM, Richard Elling wrote:
 
 On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
 
 It does not look like it is:
 
 r...@san01a:/export/home/admin# svcs -a | grep iscsi
 online May_01   svc:/network/iscsi/initiator:default
 online May_01   svc:/network/iscsi/target:default
 
 This is COMSTAR.
 
 Thanks Richard, I am aware of that.
 
 Since you upgrade to b134, not b136 the iSCSI Target Daemon is still around,
 just not on our system.
 
 IPS packaging changes have not installed the iSCSI Target Daemon (among other
 things) by default. It is contained in IPS package known as either
 SUNWiscsitgt or network/iscsi/target/legacy. Visit your local package
 repository for updates: http://pkg.opensolaris.org/dev/
 
 Of course starting with build 136..., iSCSI Target Daemon (and ZFS
 shareiscsi) are gone, so you will need to reconfigure your two ZVOLs
 'vol01/zvol01' and 'vol01/zvol02', under COMSTAR soon.
 
 http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
 http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration
 
 - Jim
 
 The migrated zVols have been running under COMSTAR originally on b104 which
 makes me wonder even more. Is there any way I can get rid of those messages?
 
 
 
 
 _
 Przem
 
 
 
 
 From: Rick McNeal [ramcn...@gmail.commailto:ramcn...@gmail.com]
 Sent: 04 May 2010 13:14
 To: Przemyslaw Ceglowski
 Subject: Re: [storage-discuss] iscsitgtd failed request to share on
 zpool import after upgrade from b104 to b134
 
 Look and see if the target daemon service is still enabled. COMSTAR
 has been the official scsi target project for a while now. In fact, the
 old iscscitgtd was removed in build 136.
 
 For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
 -- richard
 
 It does not answer my original question.
 -- Przem
 
 
 
 Rick McNeal
 
 
 On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski
 prze...@ceglowski.netmailto:prze...@ceglowski.net wrote:
 
 Hi,
 
 I am posting my question to both storage-discuss and zfs-discuss
 as I am not quite sure what is causing the messages I am receiving.
 
 I have recently migrated my zfs volume from b104 to b134 and
 upgraded it from zfs version 14 to 22. It consist of two zvol's
 'vol01/zvol01' and 'vol01/zvol02'.
 During zpool import I am getting a non-zero exit code, however the
 volume is imported successfuly. Could you please help me to understand
 what could be the reason of those messages?
 
 r...@san01a:/export/home/admin#zpool import vol01
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol01':
 iscsitgtd failed request to share
 r...@san01a:/export/home/admin#cannot share 'vol01/zvol02':
 iscsitgtd failed request to share
 
 Many thanks,
 Przem
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.orgmailto:storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.orgmailto:zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 --
 ZFS storage and performance consulting at http://www.RichardElling.com
 ___
 storage-discuss mailing list
 storage-disc...@opensolaris.orgmailto:storage-disc...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.orgmailto:zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Freddie Cash
On Tue, May 4, 2010 at 12:16 PM, Michael Sullivan 
michael.p.sulli...@mac.com wrote:

 I have a question I cannot seem to find an answer to.

 I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.

 I know if I set up ZIL on SSD and the SSD goes bad, the the ZIL will be
 relocated back to the spool.  I'd probably have it mirrored anyway, just in
 case.  However you cannot mirror the L2ARC, so...

 What I want to know, is what happens if one of those SSD's goes bad?  What
 happens to the L2ARC?  Is it just taken offline, or will it continue to
 perform even with one drive missing?

 Sorry, if these questions have been asked before, but I cannot seem to find
 an answer.


Data in the L2ARC is checksummed.  If a checksum fails, or the device
disappears, data is read from the pool.  The L2ARC is essentially a
throw-away cache for reads.  If it's there, reads can be faster as data is
not pulled from disk.  If it's not there, data just gets pulled from disk as
per normal.

There's nothing really special about the L2ARC devices.

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Marc Nicholas
The L2ARC will continue to function.

-marc

On 5/4/10, Michael Sullivan michael.p.sulli...@mac.com wrote:
 HI,

 I have a question I cannot seem to find an answer to.

 I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.

 I know if I set up ZIL on SSD and the SSD goes bad, the the ZIL will be
 relocated back to the spool.  I'd probably have it mirrored anyway, just in
 case.  However you cannot mirror the L2ARC, so...

 What I want to know, is what happens if one of those SSD's goes bad?  What
 happens to the L2ARC?  Is it just taken offline, or will it continue to
 perform even with one drive missing?

 Sorry, if these questions have been asked before, but I cannot seem to find
 an answer.
 Mike

 ---
 Michael Sullivan
 michael.p.sulli...@me.com
 http://www.kamiogi.net/
 Japan Mobile: +81-80-3202-2599
 US Phone: +1-561-283-2034

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Sent from my mobile device
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Ok, thanks.

So, if I understand correctly, it will just remove the device from the VDEV and 
continue to use the good ones in the stripe.

Mike

---
Michael Sullivan   
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034

On 5 May 2010, at 04:34 , Marc Nicholas wrote:

 The L2ARC will continue to function.
 
 -marc
 
 On 5/4/10, Michael Sullivan michael.p.sulli...@mac.com wrote:
 HI,
 
 I have a question I cannot seem to find an answer to.
 
 I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
 
 I know if I set up ZIL on SSD and the SSD goes bad, the the ZIL will be
 relocated back to the spool.  I'd probably have it mirrored anyway, just in
 case.  However you cannot mirror the L2ARC, so...
 
 What I want to know, is what happens if one of those SSD's goes bad?  What
 happens to the L2ARC?  Is it just taken offline, or will it continue to
 perform even with one drive missing?
 
 Sorry, if these questions have been asked before, but I cannot seem to find
 an answer.
 Mike
 
 ---
 Michael Sullivan
 michael.p.sulli...@me.com
 http://www.kamiogi.net/
 Japan Mobile: +81-80-3202-2599
 US Phone: +1-561-283-2034
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 -- 
 Sent from my mobile device

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Robert Milkowski

On 04/05/2010 18:19, Tony MacDoodle wrote:
How would one determine if I should have a separate ZIL disk? We are 
using ZFS as the backend of our Guest Domains boot drives using 
LDom's. And we are seeing bad/very slow write performance?


If you can disable the ZIL and compare the performance to when it is enabled, it 
will give you an estimate of the absolute maximum performance 
increase (if any) from having a dedicated ZIL device.
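
On builds before snv_140 that is a system-wide tunable rather than a
per-dataset property; a rough sketch, for a test system only:

# echo "zil_disable/W0t1" | mdb -kw      (0t0 re-enables it)

or persistently via 'set zfs:zil_disable = 1' in /etc/system and a reboot.
The value is picked up when a dataset is mounted, so a remount may be needed
before it takes effect.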


--
Robert Milkowski
http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacement brackets for Supermicro UIO SAS cards....

2010-05-04 Thread Travis Tabbal
Thanks! I might just have to order a few for the next time I take the server 
apart. Not that my bent up versions don't work, but I might as well have them 
be pretty too. :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Cindy Swearingen

Hi Dick,

Experts on the cifs-discuss list could probably advise you better.
You might even check the cifs-discuss archive because I hear that
the SMB/NFS sharing scenario has been covered previously on that
list.

Thanks,

Cindy

On 05/04/10 03:06, Dick Hoogendijk wrote:
I have some ZFS datasets that are shared through CIFS/NFS. So I created 
them with sharenfs/sharesmb options.


I have full access from windows (through cifs) to the datasets, however, 
all files and directories are created with (UNIX) permissions of 
(--)/(d--). So, although I can access the files now from my 
windows machines, I can -NOT- access the same files with NFS.
I know I gave myself full permissions in the ACL list. That's why 
sharesmb works I guess. But what do I have to do to make -BOTH- work?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sharing with zfs

2010-05-04 Thread Vadim Comanescu
Hello,
I'm new to this discussion list, so I hope I'm posting in the right place. I
started using ZFS not too long ago. I'm trying to figure out iSCSI and
NFS sharing for the moment. For the iSCSI sharing I am currently using
COMSTAR. I created the appropriate target, and also an LU corresponding to the
zvol with sbdadm, then created a view attached to it. I'm wondering: is
there a way to actually delete a zvol, ignoring the fact that it has an attached
LU? I noticed the -f option does not help in this specific case. I know it
can be deleted by first deleting the attached LU and everything else, but I'm
wondering if there is any way to do it without doing that. Also, I'm still
wondering why a volume (zvol) dataset has the sharenfs property... is there
any way you can actually use this? Thanks in advance.
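
For reference, the long way round that I know about looks like this (the GUID
and dataset name are placeholders; the real GUID comes from 'stmfadm list-lu -v'):

# stmfadm remove-view -l 600144f0XXXXXXXXXXXXXXXXXXXXXXXX -a
# stmfadm delete-lu 600144f0XXXXXXXXXXXXXXXXXXXXXXXX
# zfs destroy tank/myvol

which matches what I said above: plain 'zfs destroy -f' alone does not do it.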

-- 
ing. Vadim Comanescu
S.C. Syneto S.R.L.
str. Vasile Alecsandri nr 2, Timisoara
Timis, Romania
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
I yanked a disk from the test pool to simulate a failure and test hot spare failover 
- everything seemed fine until the copy back completed.  The hot spare is still 
showing as in use... do we need to remove the spare from the pool to get it to 
detach?


# zpool status
  pool: ZPOOL.TEST
 state: ONLINE
 scrub: resilver completed after 7h55m with 0 errors on Tue May  4 16:33:33 2010
config:

NAME STATE READ WRITE CKSUM
ZPOOL.TEST   ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A3B6695d0ONLINE   0 0 0
c10t5000C5001A3CED7Fd0ONLINE   0 0 0
c10t5000C5001A5A45C1d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A6B2300d0ONLINE   0 0 0
c10t5000C5001A6BC6C6d0ONLINE   0 0 0
c10t5000C5001A6C3439d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A6F177Bd0ONLINE   0 0 0
c10t5000C5001A6FDB0Bd0ONLINE   0 0 0
c10t5000C5001A6FFF86d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A39D7BEd0ONLINE   0 0 0
c10t5000C5001A60BED0d0ONLINE   0 0 0
c10t5000C5001A70D8AAd0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A70D9B0d0ONLINE   0 0 0
c10t5000C5001A70D89Ed0ONLINE   0 0 0
c10t5000C5001A70D719d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A700E07d0ONLINE   0 0 0
c10t5000C5001A701A12d0ONLINE   0 0 0
c10t5000C5001A701CD0d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A702c10Ed0ONLINE   0 0 0
c10t5000C5001A702C8Ed0ONLINE   0 0 0
c10t5000C5001A703D23d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A703FADd0ONLINE   0 0 0
c10t5000C5001A707D86d0ONLINE   0 0 0
c10t5000C5001A707EDCd0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A7013D4d0ONLINE   0 0 0
c10t5000C5001A7013E6d0ONLINE   0 0 0
c10t5000C5001A7013FDd0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A7021ADd0ONLINE   0 0 0
c10t5000C5001A7028B6d0ONLINE   0 0 0
c10t5000C5001A7029A2d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A7036F4d0ONLINE   0 0 0
c10t5000C5001A7053ADd0ONLINE   0 0 0
spare ONLINE   6.05M 0 0
  c10t5000C5001A7069CAd0  ONLINE   0 0 0  171G resilvered
  c10t5000C5001A703651d0  ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A70104Dd0ONLINE   0 0 0
c10t5000C5001A70126Fd0ONLINE   0 0 0
c10t5000C5001A70183Cd0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A70296Cd0ONLINE   0 0 0
c10t5000C5001A70395Ed0ONLINE   0 0 0
c10t5000C5001A70587Dd0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A70704Ad0ONLINE   0 0 0
c10t5000C5001A70830Ed0ONLINE   0 0 0
c10t5000C5001A701563d0ONLINE   0 0 0
  mirror ONLINE   0 0 0
c10t5000C5001A702542d0ONLINE   0 0 0
c10t5000C5001A702625d0ONLINE   0 0 0
c10t5000C5001A703374d0ONLINE   0 0 0
logs
  mirror ONLINE   0 0 0
c1t3d0   ONLINE   0 0 0
c1t4d0   ONLINE   0 0 0
cache
  c1t1d0 ONLINE   0 0 0
  c1t2d0 ONLINE   0 0 0
spares
  c10t5000C5001A703651d0  INUSE currently in use
  

Re: [zfs-discuss] Sharing with zfs

2010-05-04 Thread Frank Middleton

On 05/ 4/10 05:37 PM, Vadim Comanescu wrote:

Im wondering is there a way to actually delete a zvol ignoring the fact
that it has attached LU?


You didn't say what version of what OS you are running. As of b134
or so it seems to be impossible to delete a zfs iscsi target. You might
look at the thread: [zfs-discuss] How to destroy iscsi dataset?;
however, it never really came to any satisfying conclusion.

AFAIK the only way to delete a zfs iscsi target is to boot b132 or
earlier in single user mode. IIRC there are iscsitgt and COMSTAR
changes coming in later releases, so it might be worth trying again
when we eventually get past b134.

HTH -- Frank



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Migrating ZFS/data pool to new pool on the same system

2010-05-04 Thread Jonathan
Can anyone confirm my action plan is the proper way to do this?  The reason I'm 
doing this is that I want a pool of two raidz2 vdevs instead of expanding my current 
2xraidz1 pool.  So I'll create a pool with a single raidz2 vdev, migrate my current 
2xraidz1 pool over, destroy that pool, and then add its disks as a second raidz2 vdev 
to the new pool.

I'm running b130, sharing with both CIFS and iSCSI (not COMSTAR), with multiple 
descendant file systems.  Other than a couple of VirtualBox machines that use the 
pool for storage (I'll shut them down), nothing on the server should be messing 
with the pool.  As I understand it, the old way of doing iSCSI is going away, so 
I should plan on COMSTAR.  I'm also thinking I should just unshare the CIFS to 
prevent any of my computers from writing to it.

So migrating from pool1 to pool2 
0. Turn off AutoSnapshots
1. Create snapshot - zfs snapshot -r po...@snap1 
2. Send/Receive - zfs send -R po...@snap1 | zfs receive -F -d test2
3. Unshare CIFS and remove iSCSI targets.  For the iSCSI targets, it seems like I 
can't re-use them for COMSTAR, and the reservations aren't carried over for 
block devices? I may just destroy them beforehand.  Nothing important on them.
4. Create new snapshots - zfs snapshot -r po...@snap2
5. Send incremental stream - zfs send -Ri snap1 po...@snap2 | zfs receive -F 
-d test2
Repeat steps 4 and 5 as necessary.
6. Offline pool1... if I don't plan on destroying it right away.

Other than zfs list, is there anything I should check to make sure I received 
all the data to the new pool?
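
One check I am considering beyond plain 'zfs list' is comparing the full
dataset/snapshot listings of the two pools (names will differ only by the
pool prefix; the sizes should line up):

# zfs list -r -t all -o name,used,refer pool1 > /tmp/old.lst
# zfs list -r -t all -o name,used,refer test2 > /tmp/new.lst
# diff /tmp/old.lst /tmp/new.lst
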
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Ian Collins

On 05/ 5/10 11:09 AM, Brad wrote:

I yanked a disk to simulate failure to the test pool to test hot spare failover 
- everything seemed fine until the copy back completed.  The hot spare is still 
showing in used...do we need to remove the spare from the pool to get it to 
deattach?

   
Once the failed drive is replaced and resilvered, you can 'zpool 
detach' the spare.
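
Using the device names from the status output above, that would be roughly:

# zpool detach ZPOOL.TEST c10t5000C5001A703651d0
# zpool status ZPOOL.TEST      (the spare should go back to AVAIL)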


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss