Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Thu, 27 Aug 2009, Paul B. Henson wrote:

 However, I went to create a new boot environment to install the patches
 into, and so far that's been running for about an hour and a half :(,
 which was not expected or planned for.
[...]
 I don't think I'm going to make my downtime window :(, and will probably
 need to reschedule the patching. I never considered I might have to start
 the patch process six hours before the window.

Well, so far lucreate took 3.5 hours, lumount took 1.5 hours, applying the
patches took all of 10 minutes, luumount took about 20 minutes, and
luactivate has been running for about 45 minutes. I'm assuming it will
probably take at least the 1.5 hours of the lumount (particularly
considering it appears to be running a lumount process under the hood) if
not the 3.5 hours of lucreate. Add in the 1-1.5 hours to reboot, and, well,
so much for patches this maintenance window.

The lupi_bebasic process seems to be the time killer here. Not sure what
it's doing, but it spent 75 minutes running strcmp. Pretty much nothing but
strcmp. 75 CPU minutes running strcmp. I took a look for the source, but I
guess that component's not a part of OpenSolaris, or at least I couldn't
find it.

Hopefully I can figure out how to make this perform a little more
acceptably before our next maintenance window.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Dave
Thanks, Trevor. I understand the RFE/CR distinction. What I don't 
understand is how this is not a bug that should be fixed in all solaris 
versions.


The related ID 6612830 says it was fixed in Sol 10 U6, which was a while 
ago. I am using OpenSolaris, so I would really appreciate confirmation 
that it has been fixed in OpenSolaris as well. I can't tell from the info 
in the bug DB - it seems like it hasn't been fixed in OpenSolaris. If 
it has, then the status should reflect it as Fixed/Closed in the bug 
database...


--
Dave


Trevor Pretty wrote:

Dave

Yep, that's an RFE (Request For Enhancement); that's how things are 
reported to engineers inside Sun to get things fixed.  If it's an honest to 
goodness CR = bug (however, it normally needs a real support-paying 
customer to have a problem to go from RFE to CR), the responsible 
engineer evaluates it and eventually gets it fixed, or not. When I 
worked at Sun I logged a lot of RFEs; only a few were accepted as bugs 
and fixed.


Click on the new Search link and look at the type and state menus. It 
gives you an idea of the states an RFE and a CR go through. It's probably 
documented somewhere, but I can't find it. Part of the joy of Sun 
putting out in public something most other vendors would not dream of doing.


Oh, and it doesn't help that both RFEs and CRs are labelled as bugs at 
http://bugs.opensolaris.org/


So, looking at your RFE:

It tells you which version of Nevada it was reported against 
(translating this into an OpenSolaris version is easy - NOT!)


Look at *Related Bugs* 6612830 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6612830



This will tell you the

*Responsible Engineer* Richard Morris

and when it was fixed

*Release Fixed*, solaris_10u6 (s10u6_01) (*Bug ID:* 2160894 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2160894)


Although, as nothing in life is guaranteed, it looks like another bug, 
2160894, has been identified, and that one is not yet on bugs.opensolaris.org.


Hope that helps.

Trevor


Dave wrote:

Just to make sure we're looking at the same thing:

http://bugs.opensolaris.org/view_bug.do?bug_id=6761786

This is not an issue of auto snapshots. If I have a ZFS server that 
exports 300 zvols via iSCSI and I have daily snapshots retained for 14 
days, that is a total of 4200 snapshots. According to the link/bug 
report above it will take roughly 5.5 hours to import my pool (even when 
the pool is operating perfectly fine and is not degraded or faulted).


This is obviously unacceptable to anyone in an HA environment. Hopefully 
someone close to the issue can clarify.


--
Dave

Blake wrote:
  

I think the value of auto-snapshotting zvols is debatable.  At least,
there are not many folks who need to do this.

What I'd rather see is a default property of 'auto-snapshot=off' for zvols.
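In the meantime a zvol can already opt out per dataset via the user
property the auto-snapshot service honours. A minimal sketch, with the
dataset name being just an example:

# keep the zfs-auto-snapshot / time-slider service away from this zvol
zfs set com.sun:auto-snapshot=false tank/myvol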

Blake

On Thu, Aug 27, 2009 at 4:29 PM, Tim Cookt...@cook.ms wrote:


On Thu, Aug 27, 2009 at 3:24 PM, Remco Lengers re...@lengers.com wrote:
  

Dave,

It's logged as an RFE (Request for Enhancement), not as a CR (bug).

The status is 3-Accepted/  P1  RFE

RFE's are generally looked at in a much different way than a CR.

..Remco


Seriously?  It's considered "works as designed" for a system to take 5+
hours to boot?  Wow.

--Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  





www.eagle.co.nz

This email is confidential and may be legally privileged. If received in 
error please destroy and immediately notify us.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-28 Thread Robert Milkowski

Randall Badilla wrote:

Hi all:
First, is it possible to modify the boot zpool rpool after OS 
installation? I installed the OS on the whole 72GB hard disk. It is 
mirrored, so if I want to decrease the rpool, for example resize it to a 
36GB slice, can it be done?
As far as I remember, on UFS/SVM I was able to resize the boot OS disk by 
detaching the mirror (transforming it into a one-way mirror), adjusting the 
partitions and then attaching the mirror again. After the sync, boot from 
the resized mirror, redo the resize on the remaining mirror, attach the 
mirror and reboot.

Downtime reduced to reboot times.


Yes, you can follow same procedure with zfs (details will differ of course).

Second: if the first can't be done, I was guessing I could increase the 
rpool size by adding more hard disks. As you know, that must be done 
with SMI-labeled hard disks; well, I have tried changing the start cylinder, 
changed the label type, almost everything, and I still get the error:

 zpool add rpool mirror c1t2d0 c1t5d0
cannot label 'c1t2d0': EFI labeled devices are not supported on root 
pools.


Once you have manually sliced the disks with an SMI label, you need to 
specify which slice zfs should use when creating the mirror. If you specify a 
disk without providing a slice, zfs always tries to put a new EFI label in 
place and use the entire disk; but since in the above example you are trying 
to add to rpool, where only SMI is allowed, it fails.


(zpool add rpool mirror c1t2d0s0 c1t5d0s0)

However I'm not sure if raid-10 (two mirrors striped in your case) is 
allowed for rpools...


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Casper . Dik

Well, so far lucreate took 3.5 hours, lumount took 1.5 hours, applying the
patches took all of 10 minutes, luumount took about 20 minutes, and
luactivate has been running for about 45 minutes. I'm assuming it will
probably take at least the 1.5 hours of the lumount (particularly
considering it appears to be running a lumount process under the hood) if
not the 3.5 hours of lucreate. Add in the 1-1.5 hours to reboot, and, well,
so much for patches this maintenance window.

The lupi_bebasic process seems to be the time killer here. Not sure what
it's doing, but it spent 75 minutes running strcmp. Pretty much nothing but
strcmp. 75 CPU minutes running strcmp. I took a look for the source, but I
guess that component's not a part of OpenSolaris, or at least I couldn't
find it.

Hopefully I can figure out how to make this perform a little more
acceptably before our next maintenance window.


Do you have a lot of entries in /etc/mnttab, including nfs filesystems
mounted from server1,server2:/path?

And you're using lucreate for a ZFS root?  It should be quick; we are
changing a number of things in Solaris 10 update 8 and we hope it will
be faster.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-28 Thread Casper . Dik

Randall Badilla wrote:
 Hi all:
 First, is it possible to modify the boot zpool rpool after OS 
 installation? I installed the OS on the whole 72GB hard disk. It is 
 mirrored, so if I want to decrease the rpool, for example resize it to a 
 36GB slice, can it be done?
 As far as I remember, on UFS/SVM I was able to resize the boot OS disk by 
 detaching the mirror (transforming it into a one-way mirror), adjusting the 
 partitions and then attaching the mirror again. After the sync, boot from 
 the resized mirror, redo the resize on the remaining mirror, attach the 
 mirror and reboot.
 Downtime reduced to reboot times.

Yes, you can follow same procedure with zfs (details will differ of course).

You can actually change the partitions while you're using the slice,
but after changing the size of both slices you may need to reboot.

I've used it also when going from ufs to zfs for boot.

However I'm not sure if raid-10 (two mirrors striped in your case) is 
allowed for rpools...


Nope: a mirror will work (just boot from one of the devices until the
kernel is loaded) and you can even compress the root filesystem (with the
standard compression algorithm).
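A minimal sketch of doing that; "on" maps to lzjb, which as far as I know is
the only algorithm supported for a bootable root, and the BE dataset name
here is only an example:

zfs set compression=on rpool/ROOT/s10s_u7wos_08
zfs get -r compression rpool/ROOT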

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Darren J Moffat

Trevor Pretty wrote:

*Release Fixed* , solaris_10u6(s10u6_01) (*Bug ID:*2160894 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2160894) 


Although as nothing in life is guaranteed it looks like another bug  
2160894 has been identified and that's not yet on bugs.opensolaris.org


That isn't actually another bug but an implementation artefact of the 
multiple-release support in Bugster.  Bug numbers beginning with 2* 
aren't actually real bugs but sub-CRs of the main one.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS commands hang after several zfs receives

2009-08-28 Thread Andrew Robert Nicols
I've reported this in the past, and have seen a related thread but no
resolution so far and I'm still seeing it. Any help really would be very
much appreciated.

We have three thumpers (X4500):
* thumper0 - Running snv_76
* thumper1 - Running snv_121
* thumper2 - Running Solaris 10 update 7

Each has exactly the same disk and zpool configuration, and was bought at
the same time.

thumper0 sends snapshots hourly to thumpers 1 and 2. It is relatively
stable. It's running a very old firmware but we aren't keen to update as
this is our live service system.

thumper1 regularly hangs during a receive - maybe after 1-2.5 days of
hourly receives. All zfs commands on the filesystem receiving the data hang
unrecoverably and only a hard system reset or a reboot -n (do not sync
disks) will allow the machine to return to service.
It's running the most recent system firmware.

thumper2 was previously running snv_110-112 (or thereabouts) and was also
experiencing exactly the same issues until I `downgraded' to Sol 10u6/7, at
which point it began to work perfectly and hasn't experienced any major
issues since.
It's running the most recent system firmware.
Unfortunately, the zpool and zfs versions are too high to downgrade
thumper1 as well.

I've tried upgrading thumper1 to 117 and now 121. We were originally
running 112. I'm still seeing exactly the same issues though.

What can I do in an attempt to find out what is causing these lockups?
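
In case it helps, this is the sort of state I can try to collect the next
time it hangs (a sketch; the dcmds assume the stock zfs mdb module and a
still-responsive shell):

echo "::stacks -m zfs" | mdb -k   # kernel thread stacks in the zfs module
echo "::spa -v" | mdb -k          # pool and vdev state as the kernel sees it
iostat -xnz 5 3                   # is the pool still doing any I/O?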

Thanks in advance,

Andrew Nicols

-- 
Systems Developer

e: andrew.nic...@luns.net.uk
im: a.nic...@jabber.lancs.ac.uk
t: +44 (0)1524 5 10147

Lancaster University Network Services is a limited company registered in
England and Wales. Registered number: 4311892. Registered office:
University House, Lancaster University, Lancaster, LA1 4YW


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Jens Elkner
On Thu, Aug 27, 2009 at 10:59:16PM -0700, Paul B. Henson wrote:
 On Thu, 27 Aug 2009, Paul B. Henson wrote:
 
  However, I went to create a new boot environment to install the patches
  into, and so far that's been running for about an hour and a half :(,
  which was not expected or planned for.
 [...]
  I don't think I'm going to make my downtime window :(, and will probably
  need to reschedule the patching. I never considered I might have to start
  the patch process six hours before the window.
 
 Well, so far lucreate took 3.5 hours, lumount took 1.5 hours, applying the
 patches took all of 10 minutes, luumount took about 20 minutes, and
 luactivate has been running for about 45 minutes. I'm assuming it will

Have a look at http://iws.cs.uni-magdeburg.de/~elkner/luc/lu-5.10.patch
or http://iws.cs.uni-magdeburg.de/~elkner/luc/lu-5.11.patch ...
So first install the most recent LU patches and then one of the above.
Since I'm still on vacation (for ~8 weeks), I haven't checked whether there
are new LU patches out there and whether the patches still match (usually
they do). If not, adjusting the files manually shouldn't be a problem ;-)

There are also versions for pre snv_b107 and pre 121430-36,121431-37:
see http://iws.cs.uni-magdeburg.de/~elkner/

More info:
http://iws.cs.uni-magdeburg.de/~elkner/luc/lutrouble.html#luslow

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs startup question

2009-08-28 Thread Stephen Stogner
Hello,
  I have a quick question I can't seem to find a good answer for.  I have an 
S10U7 server that is running ZFS on a couple of iSCSI shares on one of our 
SANs; we use a routed network to connect to our iSCSI shares, with the route 
statements in Solaris. During normal operation it works fine; however, after 
every reboot the system tries to mount the iSCSI shares but cannot reach 
them, because the routing for them has not been brought up yet and the iSCSI 
sessions take forever to time out.  Does anyone know a way around this other 
than putting the iSCSI shares and the iSCSI NICs on the same subnet?

Thank you.



Stephen Stogner



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-28 Thread Roman Naumenko
 Roman, are you saying you want to install OpenSolaris
 on your old servers, or make the servers look like an
 external JBOD array, that another server will then
 connect to?

No, a JBOD is just an external enclosure: disks plus internal/external connectors.

--
Roman
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-28 Thread Roman Naumenko
 This non-raid sas controller is $199 and is based on
 the LSI SAS 1068.
 
 http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=usl=ens=bsdcs=04sku=310-8285~lt=popup~ck=TopSellers

Why Dell? Isn't it cheaper to go with LSI itself?

 What kind of chassis do these drives currently reside
 in? Does the backplane have a sata connector for each
 drive, or does it have a sas backplane [i.e. one SFF
 8087 for every four drive slots]?

I have standalone SATA connectors on the backplane, which means I need an
SFF-8087 cable -
http://www.3ware.com/images-sas/CBL-SFF8087OCF-10M.jpg

Then I need an external adapter:
http://www.pc-pitstop.com/sas_cables_adapters/AD8788-4.asp

From the adapter I use an SFF-8088 cable:
http://www.pc-pitstop.com/sas_cables_adapters/-1M.asp

The same cable connects one JBOD to another JBOD.

There is even a kit, exactly for the 12-disk chassis I have:
http://www.pc-pitstop.com/sata_enclosures/acc-msclb3.asp

So the question remains about the HBA.
Can this one be used? Probably not, because of the internal SAS connectors?

http://www.pc-pitstop.com/sas_controllers/adp5805.asp

--
Roman
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs startup question

2009-08-28 Thread Bob Friesenhahn

On Fri, 28 Aug 2009, Stephen Stogner wrote:

 I have a quick question I can't seem to find a good answer for.  I 
have an S10U7 server that is running ZFS on a couple of iSCSI shares 
on one of our SANs; we use a routed network to connect to our iSCSI 
shares, with the route statements in Solaris. During normal 
operation it works fine; however, after every reboot the system tries 
to mount the iSCSI shares but cannot reach them, because the routing 
for them has not been brought up yet and the iSCSI sessions 
take forever to time out.  Does anyone know a way around this other 
than putting the iSCSI shares and the iSCSI NICs on the same 
subnet?


It seems that there must be a missing dependency in the service 
manifests.  A dependency needs to be established between the iSCSI 
service and the routing service which adds the needed routes.  This 
ensures that routing is running before iSCSI is started.
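
Something along these lines is what I would try; this is only a sketch, and
the exact FMRIs differ between releases (check svcs for the iSCSI initiator
and for whatever service installs your routes, e.g. routing-setup):

# make the iSCSI initiator wait for the routing service
svccfg -s svc:/network/iscsi_initiator:default <<'EOF'
addpg route-wait dependency
setprop route-wait/grouping = astring: require_all
setprop route-wait/restart_on = astring: none
setprop route-wait/type = astring: service
setprop route-wait/entities = fmri: svc:/network/routing-setup:default
EOF
svcadm refresh svc:/network/iscsi_initiator:default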


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Snapshot creation time

2009-08-28 Thread Chris Baker
I'm using find to run a directory scan to see which files have changed since 
the last snapshot was taken. Something like:

zfs snapshot tank/filesys...@snap1
... time passes ...
find /tank/filesystem -newer /tank/filesystem/.zfs/snap1 -print

Initially I assumed the time data on the .zfs/snap1 directory would reflect the 
time the snapshot was taken - but the time values were earlier than that. So I 
thought perhaps it was the time of the last filesystem modification, but now 
that seems not to be the case. The above find line is discovering files in 
the snapshot newer than the snapshot root directory which just seems odd.

Please can anyone advise what time data is being used for the snapshot root 
directory?

Also - please can anyone advise any better approach than grepping zpool history 
to find an accurate-to-the-second snapshot creation or filesystem modification 
time?  I suppose I could encode the time in the snapshot name, but that feels 
clumsy. zfs get creation will only give me to the nearest minute.

Appreciate any observations.

Cheers

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Cindy . Swearingen

Hi Grant,

I've had no more luck researching this, mostly because the error message 
can mean different things in different scenarios.


I did try to reproduce it and I can't.

I noticed you are booting using boot -s, which I think means the system 
will boot from the default boot disk, not the newly added disk.


Can you boot from the secondary boot disk directly by using the boot
path? On my 280r system, I would boot from the secondary disk like this:

ok boot /p...@8,60/SUNW,q...@4/f...@0,0/d...@0,0

Cindy


On 08/27/09 23:54, Grant Lowe wrote:

Hi Cindy,

I tried booting from DVD but nothing showed up.  Thanks for the ideas, though.  
Maybe your other sources might have something?



- Original Message 
From: Cindy Swearingen cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error

Hi Grant,

I don't have all my usual resources at the moment, but I would 
boot from alternate media and use the format utility to check 
the partitioning on newly added disk, and look for something 
like overlapping partitions. Or, possibly, a mismatch between

the actual root slice and the one you are trying to boot from.

Cindy

- Original Message -
From: Grant Lowe gl...@sbcglobal.net
Date: Thursday, August 27, 2009 5:06 pm
Subject: [zfs-discuss] Boot error
To: zfs-discuss@opensolaris.org


I've got a 240z with Solaris 10 Update 7, all the latest patches from 
Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the drive 
with zpool.  I installed the boot block.  The system had been working 
just fine.  But for some reason, when I try to boot, I get the error: 



{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Peter Tribble
On Fri, Aug 28, 2009 at 3:51 PM, Chris Bakero...@lhc.me.uk wrote:
 I'm using find to run a directory scan to see which files have changed 
 since the last snapshot was taken. Something like:

 zfs snapshot tank/filesys...@snap1
 ... time passes ...
 find /tank/filesystem -newer /tank/filesystem/.zfs/snap1 -print

 Initially I assumed the time data on the .zfs/snap1 directory would reflect 
 the time the snapshot was taken - but the time values were earlier than that. 
 So I thought perhaps it was the time of the last filesystem modification, but 
 now that seems not to be the case. The above find line is discovering files 
 in the snapshot newer than the snapshot root directory which just seems odd.

 Please can anyone advise what time data is being used for the snapshot root 
 directory?

The timestamp of the root directory at the time the snapshot was taken?

 Also - please can anyone advise any better approach than grepping zpool 
 history to find an accurate-to-the-second snapshot creation or filesystem 
 modification time?  I suppose I could encode the time in the snapshot name, 
 but that feels clumsy. zfs get creation will only give me to the nearest 
 minute.

'zfs get -p creation' gives you seconds since the epoch, which you can convert
using a utility of your choice.
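
For example (the snapshot name below is just a placeholder):

# creation time in seconds since the epoch
zfs get -Hp -o value creation tank/filesystem@snap1

# one way to turn that into a readable timestamp without GNU date
perl -le 'print scalar localtime shift' \
    "$(zfs get -Hp -o value creation tank/filesystem@snap1)"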

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Ellis, Mike
Try a:  zfs get -pH -o value creation snapshot

 -- MikeE

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker
Sent: Friday, August 28, 2009 10:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Snapshot creation time

I'm using find to run a directory scan to see which files have changed
since the last snapshot was taken. Something like:

zfs snapshot tank/filesys...@snap1
... time passes ...
find /tank/filesystem -newer /tank/filesystem/.zfs/snap1 -print

Initially I assumed the time data on the .zfs/snap1 directory would
reflect the time the snapshot was taken - but the time values were
earlier than that. So I thought perhaps it was the time of the last
filesystem modification, but now that seems not to be the case. The
above find line is discovering files in the snapshot newer than the
snapshot root directory which just seems odd.

Please can anyone advise what time data is being used for the snapshot
root directory?

Also - please can anyone advise any better approach than grepping zpool
history to find an accurate-to-the-second snapshot creation or
filesystem modification time?  I suppose I could encode the time in the
snapshot name, but that feels clumsy. zfs get creation will only give
me to the nearest minute.

Appreciate any observations.

Cheers

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Enda O'Connor

Hi
What does boot -L show you?

Enda

On 08/28/09 15:59, cindy.swearin...@sun.com wrote:

Hi Grant,

I've had no more luck researching this, mostly because the error message 
can mean different things in different scenarios.


I did try to reproduce it and I can't.

I noticed you are booting using boot -s, which I think means the system 
will boot from the default boot disk, not the newly added disk.


Can you boot from the secondary boot disk directly by using the boot
path? On my 280r system, I would boot from the secondary disk like this:

ok boot /p...@8,60/SUNW,q...@4/f...@0,0/d...@0,0

Cindy


On 08/27/09 23:54, Grant Lowe wrote:

Hi Cindy,

I tried booting from DVD but nothing showed up.  Thanks for the ideas, 
though.  Maybe your other sources might have something?




- Original Message 
From: Cindy Swearingen cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error

Hi Grant,

I don't have all my usual resources at the moment, but I would boot 
from alternate media and use the format utility to check the 
partitioning on newly added disk, and look for something like 
overlapping partitions. Or, possibly, a mismatch between

the actual root slice and the one you are trying to boot from.

Cindy

- Original Message -
From: Grant Lowe gl...@sbcglobal.net
Date: Thursday, August 27, 2009 5:06 pm
Subject: [zfs-discuss] Boot error
To: zfs-discuss@opensolaris.org


I've got a 240z with Solaris 10 Update 7, all the latest patches from 
Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the 
drive with zpool.  I installed the boot block.  The system had been 
working just fine.  But for some reason, when I try to boot, I get 
the error:


{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Chris Baker
Peter, Mike,

Thank you very much, zfs get -p is exactly what I need (and why I didn't see 
it despite having been through the man page dozens of times I cannot fathom.) 

Much appreciated.

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Hi Enda,

This is what I get when I do the boot -L:

1} ok boot -L

Sun Fire V240, No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.13.2, 4096 MB memory installed, Serial #61311259.
Ethernet address 0:3:ba:a7:89:1b, Host ID: 83a7891b.



Rebooting with command: boot -L
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -L
1 s10s_u7wos_08
Select environment to boot: [ 1 - 1 ]: 1

To boot the selected entry, invoke:
boot [root-device] -Z rpool/ROOT/s10s_u7wos_08

Program terminated
{1} ok







- Original Message 
From: Enda O'Connor enda.ocon...@sun.com
To: cindy.swearin...@sun.com
Cc: Grant Lowe gl...@sbcglobal.net; zfs-discuss@opensolaris.org
Sent: Friday, August 28, 2009 8:18:55 AM
Subject: Re: [zfs-discuss] Boot error

Hi
What does boot -L show you?

Enda

On 08/28/09 15:59, cindy.swearin...@sun.com wrote:
 Hi Grant,
 
 I've had no more luck researching this, mostly because the error message can 
 mean different things in different scenarios.
 
 I did try to reproduce it and I can't.
 
 I noticed you are booting using boot -s, which I think means the system will 
 boot from the default boot disk, not the newly added disk.
 
 Can you boot from the secondary boot disk directly by using the boot
 path? On my 280r system, I would boot from the secondary disk like this:
 
 ok boot /p...@8,60/SUNW,q...@4/f...@0,0/d...@0,0
 
 Cindy
 
 
 On 08/27/09 23:54, Grant Lowe wrote:
 Hi Cindy,
 
 I tried booting from DVD but nothing showed up.  Thanks for the ideas, 
 though.  Maybe your other sources might have something?
 
 
 
 - Original Message 
 From: Cindy Swearingen cindy.swearin...@sun.com
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Thursday, August 27, 2009 6:24:00 PM
 Subject: Re: [zfs-discuss] Boot error
 
 Hi Grant,
 
 I don't have all my usual resources at the moment, but I would boot from 
 alternate media and use the format utility to check the partitioning on 
 newly added disk, and look for something like overlapping partitions. Or, 
 possibly, a mismatch between
 the actual root slice and the one you are trying to boot from.
 
 Cindy
 
 - Original Message -
 From: Grant Lowe gl...@sbcglobal.net
 Date: Thursday, August 27, 2009 5:06 pm
 Subject: [zfs-discuss] Boot error
 To: zfs-discuss@opensolaris.org
 
 
 I've got a 240z with Solaris 10 Update 7, all the latest patches from 
 Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the drive with 
 zpool.  I installed the boot block.  The system had been working just fine. 
  But for some reason, when I try to boot, I get the error:
 
 {1} ok boot -s
 Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
 SunOS Release 5.10 Version Generic_141414-08 64-bit
 Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Division by Zero
 {1} ok
 
 Any ideas?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
I was using OpenSolaris 2009.06 on an IDE drive, and decided to reinstall onto 
a mirror (smaller SSDs).

My data pool was a separate pool and before reinstalling onto the new SSDs I 
exported the data pool.

After rebooting and installing OpenSolaris 2009.06 onto the first SSD I tried 
to import my data pool and saw the following message:

# zpool import tank
cannot import 'tank': pool is formatted using a newer ZFS version

I then used Package Manager to do an update all to bring the OS up to the 
latest version, and so hopefully also the latest ZFS version, then I retried 
the import with the same result -- i.e. it won't import.

Here's some additional info:
SunOS zfsnas 5.11 snv_111b i86pc i386 i86pc Solaris

# zpool upgrade -v
This system is currently running ZFS pool version 14.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit support
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.

If the OS is at ZFS version 14, which I assume is the latest version, then my 
data pool presumably can't be using a newer version.

So is there a bug, workaround or simple solution to this problem?

If I could query the ZFS version of the unimported data pool that would be 
handy, but I suspect this is a bug anyway...
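
For what it's worth, I gather the pool version is recorded in the vdev
labels, so it should be readable without importing the pool; a sketch, where
the device name is only an example (use one of the data pool's disks):

zdb -l /dev/dsk/c7t2d0s0 | grep version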

Here's hoping for a quick reply as right now, I cannot access my data :(((

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Richard Elling

On Aug 28, 2009, at 12:15 AM, Dave wrote:

Thanks, Trevor. I understand the RFE/CR distinction. What I don't  
understand is how this is not a bug that should be fixed in all  
solaris versions.


In a former life, I worked at Sun to identify things like this that affect 
availability and lobbied to get them fixed. There are opposing forces at 
work: the functionality is correct as designed versus availability folks 
think it should go faster. It is difficult to build the case that code 
changes should be made for availability when other workarounds exist. It 
will be more fruitful for you to examine the implementation and see if 
there is a better way to improve the efficiencies of your snapshot 
processes. For example, the case can be made for a secondary data store 
containing long-term snapshots which can allow you to further optimize the 
primary data store for performance and availability.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Comstar and ESXi

2009-08-28 Thread Greg
Hello all, 
I am running an OpenSolaris server running 2009.06. I installed COMSTAR and 
enabled it. I have an ESXi 4.0 server connecting to COMSTAR via iSCSI on its 
own switch (there are two ESXi servers, both of which do this regardless of 
whether the other is on or off). The error I see on ESXi is: Lost connectivity 
to storage device naa.600144f030bc45004a9806980003. Path vmhba33:C0:T0:L0 is 
down. Affected datastores: Unknown. error 8/28/2009 11:10:34 AM. This error 
occurs every 40 seconds and does not stop. I have disabled the iscsitgt 
service and all other iSCSI services and enabled just the one for COMSTAR. I 
have created target groups and host groups, but to no avail; the issue 
continues. Has anyone seen this issue? I can give you other error logs if 
needed. Would I get the same result if I moved to Solaris 10 05/09? I also 
had the thought that it might be ESXi 4, so I updated it, but again to no 
avail. If anyone has any ideas it would be helpful!
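
For reference, the bare-bones COMSTAR export this all sits on looks roughly
like the following; this is only a sketch rather than my exact commands, and
the dataset name, the LU GUID and the service FMRIs are examples or
placeholders that can differ between builds:

svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 100G tank/esx-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/esx-lun0   # note the GUID it prints
stmfadm add-view 600144f0XXXXXXXXXXXXXXXXXXXXXXXX   # GUID placeholder
itadm create-target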

Thanks!
Greg
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Solaris 10 5/09 ISCSI Target Issues

2009-08-28 Thread deniz rende
Hi,

I have a zfs root Solaris 10 running on my home server. I  created an ISCSI 
target in the following way:

# zfs create -V 1g rpool2/iscsivol

Turned on the shareiscsi property

# zfs set shareiscsi=on rpool2/iscsivol

# zfs list rpool2/iscsivol
NAME             USED  AVAIL  REFER  MOUNTPOINT
rpool2/iscsivol   18K  19.9G    18K  rpool2/iscsivol

I am running into couple of issues here

1) I can't mount rpool2/iscsivol on another mountpoint such as /myiscsivol. 
I understand this has something to do with the -V flag creating a block 
device. Is there any workaround to mount the volume somewhere?

2) I can connect to it from a Windows XP test machine with the Windows iSCSI 
initiator and can copy files into it, but on the Solaris machine, if I run 
format, I do not get to see the volume listed. 

What am I missing?
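
From what I've read, a zvol is a block device rather than a filesystem, so
it has no mountpoint of its own, and format only lists physical disks, which
would explain why it doesn't show up there. A sketch of using it locally
through the standard /dev/zvol nodes (not something to do while a Windows
initiator is already writing to the volume over iSCSI):

newfs /dev/zvol/rdsk/rpool2/iscsivol
mkdir -p /myiscsivol
mount /dev/zvol/dsk/rpool2/iscsivol /myiscsivol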

Thanks for all the help.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Dave

Richard Elling wrote:

On Aug 28, 2009, at 12:15 AM, Dave wrote:

Thanks, Trevor. I understand the RFE/CR distinction. What I don't 
understand is how this is not a bug that should be fixed in all 
solaris versions.


In a former life, I worked at Sun to identify things like this that affect 
availability and lobbied to get them fixed. There are opposing forces at 
work: the functionality is correct as designed versus availability folks 
think it should go faster. It is difficult to build the case that code 
changes should be made for availability when other workarounds exist. It 
will be more fruitful for you to examine the implementation and see if 
there is a better way to improve the efficiencies of your snapshot 
processes. For example, the case can be made for a secondary data store 
containing long-term snapshots which can allow you to further optimize the 
primary data store for performance and availability.
 -- richard


This is unfortunate, but it seems this may be the only option if I want 
to import a pool within a reasonable amount of time. It's very 
frustrating to know that it can be fixed (evidenced by the S10U6 fix), 
but won't be fixed in Nevada/OpenSolaris - or so it seems.


It may be filed as an RFE, but in my opinion it is most definitely a bug.

--
Dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Mark J Musante

On Fri, 28 Aug 2009, Dave wrote:

Thanks, Trevor. I understand the RFE/CR distinction. What I don't 
understand is how this is not a bug that should be fixed in all solaris 
versions.


Just to get the terminology right: CR means Change Request, and can 
refer to Defects (bugs) or RFE's.  Defects have higher priority than 
RFE's, even though sometimes what makes something a defect vs. an RFE can 
be a bit subjective.  But both bugs/defects and RFE's are CR's.


Oh and it doesn't help both RFEs and CR are labelled bug at 
http://bugs.opensolaris.org/


That's not true.  Bugs or Defects are distinct from RFEs.  There's a 
Type pulldown on that site that lets you choose.  But I would agree with 
the assertion that it doesn't help to have RFEs labelled with a Bug ID 
number.



Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Well, what I ended up doing was reinstalling Solaris.  Fortunately this is a 
test box for now.  I've repeatedly pulled both the root drive and the mirrored 
drive.  The system behaved as normal.  The trick that worked for me was to 
reinstall, but select both drives for zfs.  Originally I selected only one 
drive for zfs.



- Original Message 
From: Grant Lowe gl...@sbcglobal.net
To: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 4:05:15 PM
Subject: [zfs-discuss] Boot error

I've got a 240z with Solaris 10 Update 7, all the latest patches from Sunsolve. 
 I've installed a boot drive with ZFS.  I mirrored the drive with zpool.  I 
installed the boot block.  The system had been working just fine.  But for some 
reason, when I try to boot, I get the error: 

{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Some more info that might help:

I have the old IDE boot drive which I can reconnect if I get no help with this 
problem. I just hope it will allow me to import the data pool, as this is not 
guaranteed.

Way back, I was using SXCE and the pool was upgraded to the latest ZFS version 
at the time.
Then around May 2009 I installed OpenSolaris 2009.06 preview, which appeared 
a couple of weeks before the release of the final OpenSolaris 2009.06. I used 
Package Manager to update all packages etc. At some point I ran zpool upgrade 
on the data pool to bring it up to the latest ZFS version, as it was saying 
that it was not using the latest ZFS version when I did a zpool status on the 
pool.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-08-28 Thread Ian Collins

Andrew Robert Nicols wrote:

I've reported this in the past, and have seen a related thread but no
resolution so far and I'm still seeing it. Any help really would be very
much appreciated.

We have three thumpers (X4500):
* thumper0 - Running snv_76
* thumper1 - Running snv_121
* thumper2 - Running Solaris 10 update 7

Each has exactly the same disk and zpool configuration, and was bought at
the same time.

thumper0 sends snapshots hourly to thumpers 1 and 2. It is relatively
stable. It's running a very old firmware but we aren't keen to update as
this is our live service system.

thumper1 regularly hangs during a receive - maybe after 1-2.5 days of
hourly receives. All zfs commands on the filesystem receiving the data hang
unrecoverably and only a hard system reset or a reboot -n (do not sync
disks) will allow the machine to return to service.
It's running the most recent system firmware.

snip

What can I do in an attempt to find out what is causing these lockups?

  

I have a case open for this problem on Solaris 10u7.

The cause has been identified and I've just received an IDR, which I will 
test next week.  I've been told the issue is fixed in update 8, but I'm 
not sure if there is an nv fix target.


I'll post back once I've abused a test system for a while.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-28 Thread Simon Breden
Looks like my last IDE-based boot environment may have been pointing to the 
/dev package repository, so that might explain how the data pool version got 
ahead of the official 2009.06 one.

Will try to fix the problem by pointing the SSD-based BE towards the dev repo 
and see if I get success.

Will update this thread with my findings, although I expect it will work :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-28 Thread Gary Gendel
Alan,

Super find.  Thanks, I thought I was just going crazy until I rolled back to 
110 and the errors disappeared.  When you do work out a fix, please ping me to 
let me know when I can try an upgrade again.

Gary
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-28 Thread James Lever


On 28/08/2009, at 3:23 AM, Adam Leventhal wrote:

There appears to be a bug in the RAID-Z code that can generate  
spurious checksum errors. I'm looking into it now and hope to have  
it fixed in build 123 or 124. Apologies for the inconvenience.


Are the errors being generated likely to cause any significant problem  
running 121 with a RAID-Z volume or should users of RAID-Z* wait until  
this issue is resolved?


cheers,
James

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Expanding a raidz pool?

2009-08-28 Thread Ty Newton
Hi,
I've read a few articles about the lack of 'simple' raidz pool expansion 
capability in ZFS.  I am interested in having a go at developing this 
functionality.  Is anyone working on this at the moment?

I'll explain what I am proposing.  As mentioned in many forums, the concept is 
really simple: allow a raidz pool to grow by adding one or more disks to an 
existing pool.  My intended user group is the consumer market, as opposed to 
the enterprise, so I expect I'll put some rather strict limitations on how/when 
this functionality will operate: to make the first implementation more 
achievable.

The use case I will try and solve first is, what I see as, the simplest.  I 
have a raidz pool configured with 1 file system on top; no snapshots.  I want 
to add an additional disk (must be at least the same size as the rest of the 
disks in the pool).  I don't mind if there is some downtime.  I want all my 
data to take advantage of the additional disk.

What is the benefit to the consumer?  The answer is simple:
- more flexibility in growing storage i.e. can have an odd number of disks.
- more disk space available for use e.g. 2 pools of 3 disks gives less 
available space than 1 pool of 6 disks.
- consistent with many RAID-5 implementations
- opens up the consumer market for raidz: growable small backup/SAN/Home 
Theatre appliances


I'm no expert on any of this stuff, but I do have many years experience as a 
software engineer.  Is there a mentoring program that Sun offers so I can get 
some assistance when necessary?  My expectation is that this isn't impossible 
to do but it isn't simple to do either.

Are there any procedural hoops I need to jump through to take on this piece of 
work?


Regards,
Ty
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Jens Elkner
On Thu, Aug 27, 2009 at 04:05:15PM -0700, Grant Lowe wrote:
 I've got a 240z with Solaris 10 Update 7, all the latest patches from 
 Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the drive with 
 zpool.  I installed the boot block.  The system had been working just fine.  
 But for some reason, when I try to boot, I get the error: 
 
 {1} ok boot -s
 Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
 SunOS Release 5.10 Version Generic_141414-08 64-bit
 Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Division by Zero
 {1} ok

My guess: s0 was too small when updating the boot archive.
So booting from a JumpStart dir/CD, mounting s0 (e.g. to /a)
and running 'bootadm update-archive -R /a' should fix the problem.

If you are low on space on /, manually 
rm -f /a/platform/sun4u/boot_archive before doing the update-archive.
If there is still not enough space, try to move some other stuff away 
temporarily, e.g. /core , /etc/mail/cf ...
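
For a ZFS root the rough equivalent would be (a sketch only; pool and BE
names are examples, use whatever zpool import and zfs list report):

zpool import -R /a rpool            # import the root pool under /a
zfs mount rpool/ROOT/s10s_u7wos_08  # the BE dataset mounts under the altroot
bootadm update-archive -R /a        # rebuild the boot archive
zpool export rpool                  # clean up, then reboot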

Good luck,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ub_guid_sum and vdev guids

2009-08-28 Thread P. Anil Kumar
I've a zfs pool named 'ppool' with two vdevs (files), file1 and file2, in it.

zdb -l /pak/file1 output:
version=16
name='ppool'
state=0
txg=3080
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4867536591080553814
guid=4867536591080553814
vdev_tree
type='file'
id=0
guid=4867536591080553814
path='/pak/file1'
metaslab_array=23
metaslab_shift=19
ashift=9
asize=68681728
is_log=0


zdb -l /pak/file2 output:
version=16
name='ppool'
state=0
txg=3081
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4015976099930560107
guid=4015976099930560107
vdev_tree
type='file'
id=1
guid=4015976099930560107
path='/pak/file2'
metaslab_array=27
metaslab_shift=19
ashift=9
asize=68681728
is_log=0


bash-3.2# zdb -uuu ppool
Uberblock

magic = 00bab10c
version = 16
txg = 3082
guid_sum = 484548669948327

I see that the uberblock ub_guid_sum is not equal to the sum of the guids of 
both vdevs. Can someone please explain why?
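
For context, my working assumption is that ub_guid_sum is the 64-bit
wrap-around sum of the guids of every vdev in the tree, including the root
vdev (whose guid is the pool_guid), not just the two file vdevs. A quick
check of that assumption:

echo '(14408718082181993222 + 4867536591080553814 + 4015976099930560107) % 2^64' | bc
# prints 4845486699483555527, which matches the leading digits of the
# guid_sum printed above (that value appears truncated)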

Regards,
pak
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Fri, 28 Aug 2009 casper@sun.com wrote:

 luactivate has been running for about 45 minutes. I'm assuming it will
 probably take at least the 1.5 hours of the lumount (particularly
 considering it appears to be running a lumount process under the hood) if
 not the 3.5 hours of lucreate.

Eeeek, the luactivate command ended up taking about *7 hours* to complete.
And I'm not sure it was even successful; output excerpts are at the end of
this message.

 Do you have a lot of files in /etc/mnttab, including nfs filesystems
 mounted from server1,server2:/path?

There's only one nfs filesystem in vfstab which is always mounted, user
home directories are automounted and would be in mnttab if accessed, but
during the lu process no users were on the box.

On the other hand, there are a *lot* of zfs filesytems in mnttab:

# grep zfs /etc/mnttab  | wc -l
8145

 And you're using lucreate for a ZFS root?  It should be quick; we are
 changing a number of things in Solaris 10 update 8 and we hope it will be
 faster/

lucreate on a system with *only* an OS root pool is blazing (the magic of
clones). The problem occurs when my data pool (with 6k-odd filesystems) is
also there. The live upgrade process is analyzing all 6k of those
filesystems, mounting them all in the alternate root, unmounting them all,
and who knows what else. This is totally wasted effort; those filesystems
have nothing to do with the OS or patching, and I'm really hoping that they
can just be completely ignored.

So, after 7 hours, here is the last bit of output from luactivate. Other
than taking forever and a day, all of the output up to this point seemed
normal. The BE s10u6 is neither the currently active BE nor the one being
made active, but these errors have me concerned something _bad_ might
happen if I reboot :(. Any thoughts?


Modifying boot archive service
Propagating findroot GRUB for menu conversion.
ERROR: Read-only file system: cannot create mount point
/.alt.s10u6/export/group/ceis
ERROR: failed to create mount point /.alt.s10u6/export/group/ceis for
file system export/group/ceis
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting ospool/ROOT/s10u6
ERROR: umount: warning: ospool/ROOT/s10u6 not in mnttab
umount: ospool/ROOT/s10u6 no such file or directory
ERROR: cannot unmount ospool/ROOT/s10u6
ERROR: cannot mount boot environment by name s10u6
ERROR: Failed to mount BE s10u6.
ERROR: Failed to mount BE s10u6. Cannot propagate file
/etc/lu/installgrub.findroot to BE
File propagation was incomplete
ERROR: Failed to propagate installgrub
ERROR: Could not propagate GRUB that supports the findroot command.
Activation of boot environment patch-20090817 successful.

According to lustatus everything is good, but shiver... These boxes have
only been in full production about a month; it would not be good for them
to die during the first scheduled patches.


# lustatus
Boot Environment   Is   Active ActiveCanCopy
Name   Complete NowOn Reboot Delete Status
--  -- - -- --
s10u6  yes  no noyes-
s10u6-20090413 yes  yesnono -
patch-20090817 yes  no yes   no -


Thanks...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-28 Thread Paul B. Henson
On Fri, 28 Aug 2009, Jens Elkner wrote:

 More info:
 http://iws.cs.uni-magdeburg.de/~elkner/luc/lutrouble.html#luslow

**sweet**!!

This is *exactly* the functionality I was looking for. Thanks much

Any Sun people have any idea if Sun has any similar functionality planned
for live upgrade? Live upgrade without this capability is basically useless
on a system with lots of zfs filesystems.

Jens, thanks again, this is perfect.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss