Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Thu, 27 Aug 2009, Paul B. Henson wrote: However, I went to create a new boot environment to install the patches into, and so far that's been running for about an hour and a half :(, which was not expected or planned for. [...] I don't think I'm going to make my downtime window :(, and will

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Dave
Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all Solaris versions. The related ID 6612830 says it was fixed in Sol 10 U6, which was a while ago. I am using OpenSolaris, so I would really appreciate confirmation

Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-28 Thread Robert Milkowski
Randall Badilla wrote: Hi all: First, is it possible to modify the boot zpool rpool after OS installation? I installed the OS on the whole 72GB hard disk; it is mirrored, so if I want to decrease the rpool, for example resize it to a 36GB slice, can that be done? As far as I remember, on UFS/SVM I was

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Casper . Dik
Well, so far lucreate took 3.5 hours, lumount took 1.5 hours, applying the patches took all of 10 minutes, luumount took about 20 minutes, and luactivate has been running for about 45 minutes. I'm assuming it will probably take at least the 1.5 hours of the lumount (particularly considering it

Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-28 Thread Casper . Dik
Randall Badilla wrote: Hi all: First, is it possible to modify the boot zpool rpool after OS installation? I installed the OS on the whole 72GB hard disk; it is mirrored, so if I want to decrease the rpool, for example resize it to a 36GB slice, can that be done? As far as I remember, on UFS/SVM I

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Darren J Moffat
Trevor Pretty wrote: Release Fixed: solaris_10u6 (s10u6_01) (Bug ID: 2160894, http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2160894). Although, as nothing in life is guaranteed, it looks like another bug, 2160894, has been identified and that's not yet on bugs.opensolaris.org

[zfs-discuss] ZFS commands hang after several zfs receives

2009-08-28 Thread Andrew Robert Nicols
I've reported this in the past, and have seen a related thread but no resolution so far and I'm still seeing it. Any help really would be very much appreciated. We have three thumpers (X4500): * thumper0 - Running snv_76 * thumper1 - Running snv_121 * thumper2 - Running Solaris 10 update 7 Each
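A generic first step when zfs/zpool commands wedge like this (not taken from the original post; the process pattern and pid are placeholders) is to capture the stuck processes and their user-level stacks:
pgrep -lf 'zfs (send|receive|list)'
pstack <pid-of-a-hung-zfs-command>
echo '::ps ! grep zfs' | mdb -k
The pstack output typically shows the command sleeping in an ioctl into the kernel, which is the detail worth attaching to any follow-up or bug report.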

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Jens Elkner
On Thu, Aug 27, 2009 at 10:59:16PM -0700, Paul B. Henson wrote: On Thu, 27 Aug 2009, Paul B. Henson wrote: However, I went to create a new boot environment to install the patches into, and so far that's been running for about an hour and a half :(, which was not expected or planned for.

[zfs-discuss] zfs startup question

2009-08-28 Thread Stephen Stogner
Hello, I have a quick question I can't seem to find a good answer for. I have an S10U7 server running ZFS on a couple of iSCSI shares on one of our SANs; we use a routed network to connect to the iSCSI shares, with the route statements configured in Solaris. During normal operations it works fine
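(For context on the route statements: a persistent static route on Solaris 10 is normally added with the -p flag so it is restored automatically at boot. The addresses below are placeholders, not taken from the post:
route -p add -net 192.168.50.0/24 192.168.1.1
The -p flag also records the route in /etc/inet/static_routes so it survives a reboot.)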

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-28 Thread Roman Naumenko
Roman, are you saying you want to install OpenSolaris on your old servers, or make the servers look like an external JBOD array, that another server will then connect to? No, JBOD is JBOD, just an external enclosure, disks+internal/external connectors -- Roman -- This message posted from

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-28 Thread Roman Naumenko
This non-RAID SAS controller is $199 and is based on the LSI SAS 1068. http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=310-8285&~lt=popup&~ck=TopSellers Why Dell? Isn't it cheaper to go with LSI itself? What kind of chassis do

Re: [zfs-discuss] zfs startup question

2009-08-28 Thread Bob Friesenhahn
On Fri, 28 Aug 2009, Stephen Stogner wrote: I have a quick question I can't seem to find a good answer for. I have an S10U7 server running ZFS on a couple of iSCSI shares on one of our SANs; we use a routed network to connect to the iSCSI shares, with the route statements configured in Solaris.

[zfs-discuss] Snapshot creation time

2009-08-28 Thread Chris Baker
I'm using find to run a directory scan to see which files have changed since the last snapshot was taken. Something like: zfs snapshot tank/filesys...@snap1 ... time passes ... find /tank/filesystem -newer /tank/filesystem/.zfs/snap1 -print Initially I assumed the time data on the .zfs/snap1

Re: [zfs-discuss] Boot error

2009-08-28 Thread Cindy . Swearingen
Hi Grant, I've had no more luck researching this, mostly because the error message can mean different things in different scenarios. I did try to reproduce it and I can't. I noticed you are booting using boot -s, which I think means the system will boot from the default boot disk, not the

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Peter Tribble
On Fri, Aug 28, 2009 at 3:51 PM, Chris Bakero...@lhc.me.uk wrote: I'm using find to run a directory scan to see which files have changed since the last snapshot was taken. Something like: zfs snapshot tank/filesys...@snap1 ... time passes ... find /tank/filesystem -newer

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Ellis, Mike
Try a: zfs get -pH -o value creation snapshot -- MikeE -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker Sent: Friday, August 28, 2009 10:52 AM To: zfs-discuss@opensolaris.org Subject: [zfs-discuss]
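(For completeness, one way to tie that back to the find-based scan from the original question. This is a sketch only: the dataset name is reconstructed from the question, and the perl one-liner is just one way to turn the epoch value into a touch -t timestamp:
CREATION=$(zfs get -Hp -o value creation tank/filesystem@snap1)
touch -t "$(perl -e '@t=localtime(shift); printf "%04d%02d%02d%02d%02d.%02d", $t[5]+1900,$t[4]+1,$t[3],$t[2],$t[1],$t[0]' "$CREATION")" /tmp/snap1.ref
find /tank/filesystem -newer /tmp/snap1.ref -print
With -p the creation property comes back as seconds since the epoch, so the reference file carries the snapshot's creation time and find compares file mtimes against that.)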

Re: [zfs-discuss] Boot error

2009-08-28 Thread Enda O'Connor
Hi What does boot -L show you? Enda On 08/28/09 15:59, cindy.swearin...@sun.com wrote: Hi Grant, I've had no more luck researching this, mostly because the error message can mean different things in different scenarios. I did try to reproduce it and I can't. I noticed you are booting

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Chris Baker
Peter, Mike, thank you very much; zfs get -p is exactly what I need (why I didn't see it, despite having been through the man page dozens of times, I cannot fathom). Much appreciated. Chris -- This message posted from opensolaris.org

Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Hi Enda, This is what I get when I do the boot -L:
{1} ok boot -L
Sun Fire V240, No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.13.2, 4096 MB memory installed, Serial #61311259.
Ethernet address 0:3:ba:a7:89:1b, Host ID: 83a7891b.
Rebooting with

[zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
I was using OpenSolaris 2009.06 on an IDE drive, and decided to reinstall onto a mirror (smaller SSDs). My data pool was a separate pool and before reinstalling onto the new SSDs I exported the data pool. After rebooting and installing OpenSolaris 2009.06 onto the first SSD I tried to import
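(A quick way to confirm that kind of version mismatch, using only stock commands rather than anything specific to this report:
# zpool upgrade -v
# zpool import
zpool upgrade -v lists every on-disk version the installed build understands, and zpool import with no arguments shows the exported pool along with the version complaint, so the two can be compared directly.)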

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Richard Elling
On Aug 28, 2009, at 12:15 AM, Dave wrote: Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all solaris versions. In a former life, I worked at Sun to identify things like this that affect availability and

[zfs-discuss] Comstar and ESXi

2009-08-28 Thread Greg
Hello all, I am running an OpenSolaris server on the 06/09 release. I installed COMSTAR and enabled it. I have an ESXi 4.0 server connecting to COMSTAR via iSCSI on its own switch. (There are two ESXi servers; both of them do this regardless of whether one is on or off.) The error I see is on ESXi

[zfs-discuss] Solaris 10 5/09 ISCSI Target Issues

2009-08-28 Thread deniz rende
Hi, I have a ZFS-root Solaris 10 system running on my home server. I created an iSCSI target in the following way:
# zfs create -V 1g rpool2/iscsivol
Turned on the shareiscsi property:
# zfs set shareiscsi=on rpool2/iscsivol
# zfs list rpool2/iscsivol
NAME USED AVAIL
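(For reference, the target created by shareiscsi can be checked with the legacy iscsitgt tooling; a sketch, assuming the default target framework on Solaris 10 5/09 rather than COMSTAR:
# svcs -l iscsitgt
# iscsitadm list target -v
The first confirms the target daemon is online; the second shows the IQN and the backing store for rpool2/iscsivol.)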

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Dave
Richard Elling wrote: On Aug 28, 2009, at 12:15 AM, Dave wrote: Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all solaris versions. In a former life, I worked at Sun to identify things like this that affect

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Mark J Musante
On Fri, 28 Aug 2009, Dave wrote: Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all solaris versions. Just to get the terminology right: CR means Change Request, and can refer to Defects (bugs) or RFEs. Defects

Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Well, what I ended up doing was reinstalling Solaris. Fortunately this is a test box for now. I've repeatedly pulled both the root drive and the mirrored drive. The system behaved as normal. The trick that worked for me was to reinstall, but select both drives for zfs. Originally I

Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Some more info that might help: I have the old IDE boot drive which I can reconnect if I get no help with this problem. I just hope it will allow me to import the data pool, as this is not guaranteed. Way back, I was using SXCE and the pool was upgraded to the latest ZFS version at the time.

Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-08-28 Thread Ian Collins
Andrew Robert Nicols wrote: I've reported this in the past, and have seen a related thread but no resolution so far and I'm still seeing it. Any help really would be very much appreciated. We have three thumpers (X4500): * thumper0 - Running snv_76 * thumper1 - Running snv_121 * thumper2 -

Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Looks like my last IDE-based boot environment may have been pointing to the /dev package repository, so that might explain how the data pool version got ahead of the official 2009.06 one. Will try to fix the problem by pointing the SSD-based BE towards the dev repo and see if I get success.

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-28 Thread Gary Gendel
Alan, Super find. Thanks, I thought I was just going crazy until I rolled back to 110 and the errors disappeared. When you do work out a fix, please ping me to let me know when I can try an upgrade again. Gary -- This message posted from opensolaris.org

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-28 Thread James Lever
On 28/08/2009, at 3:23 AM, Adam Leventhal wrote: There appears to be a bug in the RAID-Z code that can generate spurious checksum errors. I'm looking into it now and hope to have it fixed in build 123 or 124. Apologies for the inconvenience. Are the errors being generated likely to cause
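(A generic way to judge whether the errors are cosmetic or have actually damaged data, not specific to this bug; the pool name is a placeholder:
# zpool status -v tank
The -v flag lists any files flagged with permanent errors, as opposed to checksum counters that merely incremented.)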

[zfs-discuss] Expanding a raidz pool?

2009-08-28 Thread Ty Newton
Hi, I've read a few articles about the lack of 'simple' raidz pool expansion capability in ZFS. I am interested in having a go at developing this functionality. Is anyone working on this at the moment? I'll explain what I am proposing. As mentioned in many forums, the concept is really

Re: [zfs-discuss] Boot error

2009-08-28 Thread Jens Elkner
On Thu, Aug 27, 2009 at 04:05:15PM -0700, Grant Lowe wrote: I've got a 240z with Solaris 10 Update 7, all the latest patches from Sunsolve. I've installed a boot drive with ZFS. I mirrored the drive with zpool. I installed the boot block. The system had been working just fine. But for

[zfs-discuss] ub_guid_sum and vdev guids

2009-08-28 Thread P. Anil Kumar
I've a zfs pool named 'ppool' with two vdevs (files), file1 and file2, in it. zdb -l /pak/file1 output:
version=16
name='ppool'
state=0
txg=3080
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4867536591080553814
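To pull just the guid fields from both file vdevs for comparison, something along these lines works (commands only, not from the original post):
zdb -l /pak/file1 | grep guid
zdb -l /pak/file2 | grep guid
As I understand it, ub_guid_sum in the uberblock is the 64-bit wraparound sum of the guids of every vdev in the tree, root vdev included, which is what the individual label guids can be checked against.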

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Fri, 28 Aug 2009 casper@sun.com wrote: luactivate has been running for about 45 minutes. I'm assuming it will probably take at least the 1.5 hours of the lumount (particularly considering it appears to be running a lumount process under the hood) if not the 3.5 hours of lucreate.

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Fri, 28 Aug 2009, Jens Elkner wrote: More info: http://iws.cs.uni-magdeburg.de/~elkner/luc/lutrouble.html#luslow **sweet**!! This is *exactly* the functionality I was looking for. Thanks much! Any Sun people have any idea if Sun has any similar functionality planned for