Re: [zfs-discuss] Fwd: [ilugb] Does ZFS support Hole Punching/Discard

2009-09-08 Thread Chris Csanady
2009/9/7 Ritesh Raj Sarraf r...@researchut.com:
 The Discard/Trim command is also available as part of the SCSI standard now.

 Now, if you look from a SAN perspective, you will need a little of both.
 Filesystems will need to be able to deallocate blocks and then the same 
 should be triggered as a SCSI Trim to the Storage Controller.
 For a virtualized environment, the filesystem should be able to punch holes 
 into virt image files.

 F_FREESP is only on XFS to my knowledge.

I found F_FREESP while looking through the OpenSolaris source, and it
is supported on all filesystems which implement VOP_SPACE.  (I was
initially investigating what it would take to transform writes of
zeroed blocks into block frees on ZFS.  Although it would not appear
to be too difficult, I'm not sure if it would be worth complicating
the code paths.)

 So how does ZFS tackle the above 2 problems ?

At least for file-backed filesystems, ZFS already does its part.  It
is the responsibility of the hypervisor to execute the mentioned
fcntl(), whether it is triggered by a TRIM or anything else.  ZFS does
not use TRIM itself, though running it on top of files is not
recommended anyway, nor is there a need for it for virtualization
purposes.

It does appear that the ATA TRIM command should be used with great
care though, or avoided altogether.  Not only does it need to wait
for the entire queue to drain, it can also cause a delay of ~100ms if
the commands are issued without enough elapsed time between them.
(See the thread linked from the article I mentioned.)

As far as I can tell, Solaris is missing the equivalent of a
DKIOCDISCARD ioctl().  Something like that should be implemented to
allow recovery of space on zvols and iSCSI backing stores. (Though,
the latter would require implementing the SCSI TRIM support as well,
if I understand correctly.)

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS simply concatenate LUNs (eg no RAID0)?

2009-09-08 Thread Darren J Moffat

Piero Gramenzi wrote:

Hi,

I do have a disk array that is providing striped LUNs to my Solaris box. Hence 
I'd like to simply concat those LUNs without adding another layer of striping.

Is this possible with ZFS?

As far as I understood, if I use

zpool create myPool lun-1 lun-2 ... lun-n

I will get a RAID0 striping where each data block is split across all n LUNs.


Individual ZFS blocks don't span a vdev (a LUN in your case); each block
is on one disk or another.  ZFS will stripe blocks across all available
top level vdevs, and as additional top level vdevs are added it will
attempt to rebalance as new writes come in.
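If you want to see how that works out in practice, something along these
lines (using the pool name from your example) shows each top-level vdev on
its own row, so you can watch I/O being spread across them:

zpool iostat -v myPool 5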


If that's correct, is there a way to avoid that and get ZFS to write 
sequentially on the LUNs that are part of myPool?


Why do you want to do that ?  What do you actually think it gives you, 
other than possibly *worse* performance ?


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS simply concatenate LUNs (eg no RAID0)?

2009-09-08 Thread Darren J Moffat

Piero Gramenzi wrote:

Hi Darren,

I do have a disk array that is providing striped LUNs to my Solaris 
box. Hence I'd like to simply concat those LUNs without adding 
another layer of striping.


Is this possible with ZFS?

As far as I understood, if I use

zpool create myPool lun-1 lun-2 ... lun-n

I will get a RAID0 striping where each data block is split across all 
n LUNs.


Individual ZFS blocks don't span a vdev (a LUN in your case); each block
is on one disk or another.  ZFS will stripe blocks across all available
top level vdevs, and as additional top level vdevs are added it will
attempt to rebalance as new writes come in.

That's exactly what I would like to avoid.


Why ?

I do have several Solaris servers, each one mounting several LUNs 
provided by the same disk-array.


The disk-array is configured to provide RAID5 (7+1) protection so each 
LUN is striped across several physical disks.


Make sure you still provide ZFS with redundancy, otherwise you will
regret it later.  Ideally give it the ability to do raidz or
mirroring; failing that, at the very least set copies=2.
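As a sketch only (substitute your own LUN names), a pool built from
mirrored pairs would look like:

zpool create myPool mirror lun-1 lun-2 mirror lun-3 lun-4

and if you really must stay with single-LUN vdevs, then at least:

zfs set copies=2 myPool

bearing in mind that copies=2 only guards against localised corruption,
not the loss of a whole LUN.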


If I was mounting multiple LUNs on the same Solaris box and ZFS was 
using them in RAID0 then the result is that each filesystem write would span across a 
*huge* number of physical disks on the disk-array.


That is a *good* thing; it usually helps increase performance.

This would almost certainly impact other LUNs mounted on different 
servers.


Why ?  I don't think it will, unless your disk hardware is really 
simplistic, and if it is that simplistic then it might not be able to 
keep up regardless.


As the striping is already provided at hardware level by the disk-array, 
I would like to avoid further striping already-striped devices (ie LUNs).


But why ?  What is the rationale ?

If that's correct, is there a way to avoid that and get ZFS to write 
sequentially on the LUNs that are part of myPool?


Why do you want to do that ?  What do you actually think it gives you, 
other than possibly *worse* performance ?

ditto.

Of course the alternative is to get the disk-array providing non-striped 
LUNs ...

Ideally give ZFS access to the raw disks and let it do everything.


... but I suspect that hardware striping is way more efficient than 
software striping, no matter how good ZFS is.


Poor assumption to make - the only way is to verify it for your 
particular workload.


The other thing to consider is that if you don't give ZFS the ability 
to create multiple copies of the data (ideally one of mirror, raidz, 
raidz2, raidz3) you run the risk of losing complete access to the data, 
regardless of whether ZFS was striping or not.


Try to work with ZFS not against it.

For more information and suggestions I strongly recommend reading the 
following:


http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

If you are hosting databases this one too:

http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases

and after that if you still need performance help read this one (last!):

http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

don't jump straight to the ZFS_Evil_Tuning_Guide - seriously!

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Incremental backup via zfs send / zfs receive

2009-09-08 Thread Frank Middleton

On 09/07/09 07:29 PM, David Dyer-Bennet wrote:


Is anybody doing this [zfs send/recv] routinely now on 2009-6
OpenSolaris, and if so can I see your commands?


Wouldn't a simple recursive send/recv work in your case? I
imagine all kinds of folks are doing it already. The only problem
with it, AFAIK, is when a new fs is created locally without also
being created on the backup disk (unless this now works with
zfs > V3). The following works with snv_103. If it works there, it
should work with 2009.06. The script method may have the advantage
of not destroying file systems on the backup that don't exist
on the source, but I have not tested that.

ZFS send/recv is pretty cool, but at least with older versions, it
takes some tweaking to get right. Rather than send to a local drive,
I'm sending to a live remote system, which in some ways is more
complicated since there might be things like /opt and xxx/swap
that you might not want to send at all. Finally, at least with ZFS
version 3, an incremental send of a filesystem that doesn't exist
on the far side doesn't work either, so one needs to test for that.

Given this, a simple send of a recursive snapshot AFAIK isn't going to
work. I am no bash expert, so this script could probably do with lots
of improvements, but it seems to do what I need it to do. You would
have to extensively modify it for your local needs; you would have
to remove the ssh backup and fix it to receive to your local disk. I
include it here in response to your request in the hope that it
might be useful. Note, as written, it will create space/swap but it
won't send updates.

The pool I'm backing up is called space and the target host is called
backup, an alias in /etc/hosts. When the machines switch roles, I
edit both /etc/hosts so the stream can go the other way. This script
probably won't work for rpools; there is lots of documentation about
that in previous posts to this list.

My solution to the rpool problem is to receive it locally to an
alternate root and then send that, but of course this only works
if the rpool isn't your only pool.

If any zfs/bash gurus out there can suggest improvements, they
would be much appreciated, especially ways to deal with the /opt
problem (which probably relates to the general rpool question).
Currently the /opts for each host are set mountpoint=legacy,
but that is not a great solution :-(.

Cheers -- Frank

#!/bin/bash
P=`cat cur_snap`
rm -f cur_snap
T=`date +%Y-%m-%d:%H:%M:%S`
echo $T > cur_snap
echo snapping to space@$T
zfs snapshot -r space@$T
echo snapshot done
for FS in `zfs list -H | cut -f 1`
do
RFS=`ssh backup zfs list -H $FS 2>/dev/null | cut -f 1`
if test "$RFS"; then
  if [ "$FS" = space/swap ]; then
    echo skipping $FS
  else
    echo do zfs send -i $FS@$P $FS@$T \| ssh backup zfs recv -vF $RFS
    zfs send -i $FS@$P $FS@$T | ssh backup zfs recv -vF $RFS
  fi
else
  echo do zfs send $FS@$T \| ssh backup zfs recv -v $FS
  zfs send $FS@$T | ssh backup zfs recv -v $FS
fi
done

ssh backup zfs destroy -r space@$P
zfs destroy -r space@$P
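One possible refinement (untested here): restrict the loop to the pool
being backed up and filter out swap volumes in one place, assuming the
swap zvols all end in /swap, e.g.

for FS in `zfs list -H -o name -r space | grep -v '/swap$'`

which would avoid hard-coding space/swap inside the loop.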



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can ZFS simply concatenate LUNs (eg no RAID0)?

2009-09-08 Thread Piero Gramenzi
Hi,

I do have a disk array that is providing striped LUNs to my Solaris box. Hence 
I'd like to simply concat those LUNs without adding another layer of striping.

Is this possible with ZFS?

As far as I understood, if I use

zpool create myPool lun-1 lun-2 ... lun-n

I will get a RAID0 striping where each data block is split across all n LUNs.

If that's correct, is there a way to avoid that and get ZFS to write 
sequentially on the LUNs that are part of myPool?

Thanks,

   Piero
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS simply concatenate LUNs (eg no RAID0)?

2009-09-08 Thread Piero Gramenzi

Hi Darren,

I do have a disk array that is providing striped LUNs to my Solaris 
box. Hence I'd like to simply concat those LUNs without adding 
another layer of striping.


Is this possible with ZFS?

As far as I understood, if I use

zpool create myPool lun-1 lun-2 ... lun-n

I will get a RAID0 striping where each data block is split across all 
n LUNs.


Individual ZFS blocks don't span a vdev (a LUN in your case); each block
is on one disk or another.  ZFS will stripe blocks across all available
top level vdevs, and as additional top level vdevs are added it will
attempt to rebalance as new writes come in.

That's exactly what I would like to avoid.

I do have several Solaris servers, each one mounting several LUNs 
provided by the same disk-array.

The disk-array is configured to provide RAID5 (7+1) protection so each 
LUN is striped across several physical disks.

If I was mounting multiple LUNs on the same Solaris box and ZFS was 
using them in RAID0 then the result is that each filesystem write would 
span across a *huge* number of physical disks on the disk-array.

This would almost certainly impact other LUNs mounted on different 
servers.


As the striping is already provided at hardware level by the disk-array, 
I would like to avoid further striping already-striped devices (ie LUNs).

If that's correct, is there a way to avoid that and get ZFS to write 
sequentially on the LUNs that are part of myPool?


Why do you want to do that ?  What do you actually think it gives you, 
other than possibly *worse* performance ?

ditto.

Of course the alternative is to get the disk-array providing non-striped 
LUNs but I suspect that hardware striping is way more efficient than 
software striping, no matter how good ZFS is.


Cheers,

 Piero


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pulsing write performance

2009-09-08 Thread Scott Meilicke
True, this setup is not designed for high random I/O, but rather lots of 
storage with fair performance. This box is for our dev/test backend storage. 
Our production VI runs in the 500-700 IOPS range (80+ VMs, production plus dev/test) 
on average, so for our development VI, we are expecting half of that at most, 
on average. Testing with parameters that match the observed behavior of the 
production VI gets us about 750 IOPS with compression (NFS, 2009.06), so I am 
happy with the performance and very happy with the amount of available space.

Striped mirrors are much faster, ~2200 IOPS with 16 disks (but alas, tested 
with iSCSI on 2008.11, compression on; we got about 1,000 IOPS with the 3x5 
raidz setup with compression, to compare iSCSI on 2008.11 vs NFS on 2009.06), 
but again we are shooting for available space, with performance being a 
secondary goal. And yes, we would likely get much better performance using 
SSDs for the ZIL and L2ARC. 
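For reference, adding such devices would look something like this
(hypothetical pool and device names):

zpool add tank log c2t0d0
zpool add tank cache c2t1d0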

This has been an interesting thread! Sorry for the bit of hijacking...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] associating an unmodified clone file with an origin snapshot

2009-09-08 Thread John Zolnowsky x69422/408-404-5064
The context is a file in a dataset cloned from a snapshot.
If the file has not been modified since the clone was created,
I'd like to ascribe to the file attributes associated with
the origin snapshot.

1)  Is it feasible to determine from the vnode relating to
the clone file if that file is unmodified from the origin?
(I'm hoping this can be as simple as verifying that the
file uses the same data blocks as it did in the snapshot.)

2)  Could modifications of other files in the clone dataset
make the unchanged status of an unmodified file more
difficult to verify?
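(A coarse userland approximation -- not the vnode-level check I am really
after -- would be to compare the clone's copy of the file against the one
visible through the origin snapshot, assuming hypothetical names
tank/fs@snap cloned to tank/clone:

digest -a sha256 /tank/clone/some/file
digest -a sha256 /tank/fs/.zfs/snapshot/snap/some/file

but that reads all of the data, which is what I am hoping to avoid.)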

Thanks  -JZ
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Help with Scenerio

2009-09-08 Thread Jon Whitehouse
I'm new to ZFS and a scenario recently came up that I couldn't figure out.  We 
are used to using Veritas Volume Mgr, so that may affect how we think about 
this approach.

Here it is.


1.  ServerA was originally built, let's say in January '09, with the 
Solaris 10 build from 10/08 with zfs as its default filesystem and was set up 
to mirror.  So zpool status might show something like this:

pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c1t0d0s0  ONLINE       0     0     0
    c1t1d0s0  ONLINE       0     0     0

errors: No known data errors


2.  Now let's say someone came in and thought that this box was no longer 
needed, and reinstalled it with the Solaris 10 build from 05/09 with zfs but 
didn't mirror it, and the root pool is also called rpool.  Now it looks like this:

pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  c1t0d0s0    ONLINE       0     0     0

errors: No known data errors


3.   Now the part I can't figure out.  It was discovered that there is data 
you need from the system before it was rebuilt.  How do you get the data off 
c1t1d0s0 keeping all your current build, yet pulling off the old data you need?


I was thinking it might be as simple as doing a zfs import but how do you do 
that when your root filesystem is already called rpool?

---
Jon Whitehouse
Systems Engineer - IT, Server Support
MS 5221
1800 W. Center Street
Warsaw, IN 46580
(574) 371-8684
(574) 377-2829 (cell)
jonathan.whiteho...@zimmer.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with Scenerio

2009-09-08 Thread Kees Nuyt
On Tue, 8 Sep 2009 15:09:24 -0400, Jon Whitehouse
jonathan.whiteho...@zimmer.com wrote:

I'm new to ZFS and a scenario recently came up that I couldn't figure out.  We 
are used to using Veritas Volume Mgr so that may affect our thinking to this 
approach.

Here it is.


1.ServerA was originally built let's say in January '09 with the 
Solaris 10 build from 10/08 with zfs as its default filesystem and was setup 
to mirror.  So zpool status might show something like this:

pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c1t0d0s0  ONLINE       0     0     0
    c1t1d0s0  ONLINE       0     0     0

errors: No known data errors


2.   Now lets say someone came in and thought that this box was no longer 
needed, and reinstalled with the Solaris 10 build from 05/09 with zfs but 
didn't mirror it but the root pool is also called rpool.  Now looking like 
this:

pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  c1t0d0s0    ONLINE       0     0     0

errors: No known data errors


3.   Now the part I can't figure out.
  It was discovered that there is data
  you need from the system before it was rebuilt.
  How do you get the data off c1t1d0s0 keeping
  all your current build, yet pulling off the
 old data you need?

 I was thinking it might be as simple as doing a 
 zfs import but how do you do that when your
 root filesystem is already called rpool?

zpool import without any argument will tell you which pools
can be imported, not only by name but also with their unique
IDs.
A pool can then be imported using its unique ID, and renamed by
the import subcommand. Also, you can provide an alternate
root (altroot). 
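Roughly, as a sketch (the ID comes from the listing; the new name and
the alternate root are up to you):

zpool import
zpool import -R /mnt/oldroot <pool-id> rpool2

The first command lists the importable pools with their IDs; the second
imports the old pool by ID, renames it to rpool2 and mounts it under an
alternate root so it doesn't clash with the running system.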

For details see:  man zpool
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with Scenerio

2009-09-08 Thread Ross Walker
On Tue, Sep 8, 2009 at 3:09 PM, Jon
Whitehousejonathan.whiteho...@zimmer.com wrote:
 I'm new to ZFS and a scenario recently came up that I couldn't figure out.
 We are used to using Veritas Volume Mgr so that may affect our thinking to
 this approach.



 Here it is.



 1.    ServerA was originally built let's say in January '09 with the
 Solaris 10 build from 10/08 with zfs as its default filesystem and was setup
 to mirror.  So zpool status might show something like this:

 pool: rpool
 state: ONLINE
 config:

 NAME          STATE     READ WRITE CKSUM
 rpool         ONLINE       0     0     0
   mirror      ONLINE       0     0     0
     c1t0d0s0  ONLINE       0     0     0
     c1t1d0s0  ONLINE       0     0     0

 errors: No known data errors

 2.   Now lets say someone came in and thought that this box was no
 longer needed, and reinstalled with the Solaris 10 build from 05/09 with zfs
 but didn't mirror it but the root pool is also called rpool.  Now looking
 like this:

 pool: rpool
 state: ONLINE
 config:

 NAME          STATE     READ WRITE CKSUM
 rpool         ONLINE       0     0     0
   c1t0d0s0    ONLINE       0     0     0

 errors: No known data errors

 3.   Now the part I can't figure out.  It was discovered that there is
 data you need from the system before it was rebuilt.  How do you get the
 data off c1t1d0s0 keeping all your current build, yet pulling off the old
 data you need?



 I was thinking it might be as simple as doing a zfs import but how do you do
 that when your root filesystem is already called rpool?

Do a 'zpool import' with no options and, if it is still viable, it will
list its ID #.

You can then import it with:

# zpool import -R /oldrpool ID# oldrpool

-Ross
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with Scenerio

2009-09-08 Thread Cindy . Swearingen

Hi Jon,

If the zpool import command shows the old rpool and associated disk 
(c1t1d0s0), then you might be able to import it like this:


# zpool import rpool rpool2

This renames the original pool, rpool, to rpool2, upon import.

If the disk c1t1d0s0 was overwritten in any way then I'm not sure
this operation will work.

Cindy

On 09/08/09 13:09, Jon Whitehouse wrote:
I'm new to ZFS and a scenario recently came up that I couldn't figure 
out.  We are used to using Veritas Volume Mgr so that may affect our 
thinking to this approach.


 


Here it is.

 

1.ServerA was originally built let's say in January '09 with the 
Solaris 10 build from 10/08 with zfs as its default filesystem and was 
setup to mirror.  So zpool status might show something like this:


pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c1t0d0s0  ONLINE       0     0     0
    c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

2.   Now lets say someone came in and thought that this box was no 
longer needed, and reinstalled with the Solaris 10 build from 05/09 with 
zfs but didn't mirror it but the root pool is also called rpool.  Now 
looking like this:


pool: rpool
state: ONLINE
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  c1t0d0s0    ONLINE       0     0     0

errors: No known data errors

3.   Now the part I can't figure out.  It was discovered that there 
is data you need from the system before it was rebuilt.  How do you get 
the data off c1t1d0s0 keeping all your current build, yet pulling off 
the old data you need?


 

I was thinking it might be as simple as doing a zfs import but how do 
you do that when your root filesystem is already called rpool?


 


---

Jon Whitehouse

Systems Engineer - IT, Server Support

MS 5221

1800 W. Center Street

Warsaw, IN 46580

(574) 371-8684

(574) 377-2829 (cell)

jonathan.whiteho...@zimmer.com

 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] sndradm ZFS cluster 3.2

2009-09-08 Thread Mohammed Al Basti

hello experts,

I have Cluster 3.2/ZFS and AVS 4 in the main site and ZFS/AVS 4 in 
DR.  I am trying to replicate ZFS volumes using AVS, and I am getting the 
below error:
sndradm: Error: volume 
/dev/rdsk/c4t600A0B80005B1E5702934A27A8CCd0s0 is not part of a 
disk group, please specify resource ctag

The command we ran is:
sndradm -E avs-pri /dev/rdsk/c4t600A0B80005B1E5702934A27A8CCd0s0 
/dev/rdsk/c4t600A0B80005B2077044E4A3DF83Ed0s3 avs-sec 
/dev/rdsk/c4t600A0B80005B1FC7029D4A27B65Ed0s0 
/dev/rdsk/c4t600A0B80005B1FC7039E4AA61DFAd0s3 ip async g Pool


Since we are using ZFS, we don't have any disk groups.
Any idea how to set up a remote mirror for ZFS?



Regards


__
   /_/\  Mohammed Al Basti
  / \\ \ Technical specialist PS Middle East  North Africa
 /_\ \\ /
/_/ \/ / /   Sun Microsystems Inc.,
/_/ /   \//\  Bldg No 15, Dubai Internet City, PO Box 50769
\_\//\   / /  Dubai, United Arab Emirates (AE)
\_/ / /\ /   Mobile : +971 (50) 6245553
 \_/ \\ \
  \_\ \\ Fax : +971 (4) 366 2626

   \_\/  Email : mohammed.ba...@sun.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] This is the scrub that never ends...

2009-09-08 Thread Will Murnane
I left the scrub running all day:
 scrub: scrub in progress for 67h57m, 100.00% done, 0h0m to go
but as you can see, it didn't finish.  So, I ran pkg image-update,
rebooted, and am now running b122.  On reboot, the scrub restarted
from the beginning, and currently estimates 17h to go.  I'll post an
update in about 17 hours ;)

On Mon, Sep 7, 2009 at 18:06, Will Murnane will.murn...@gmail.com wrote:
 On Mon, Sep 7, 2009 at 15:59, Henrik Johansson henr...@henkis.net wrote:
 Hello Will,
 On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:

 What can cause this kind of behavior, and how can I make my pool
 finish scrubbing?


 No idea what is causing this but did you try to stop the scrub?
 I haven't done so yet.  Perhaps that would be a reasonable next step.
 I could run zpool status as root and see if that triggers the
 restart-scrub bug.  I don't mind scrubbing my data, but I do mind
 getting stuck in scrub-forever mode.

 If so what
 happened? (Might not be a good idea since this is not a normal state?) What
 release of OpenSolaris are you running?
 $ uname -a
 SunOS will-fs 5.11 snv_118 i86pc i386 i86xpv
 I can update to latest /dev if someone can suggest a reason why that
 might help.  Otherwise I'm sort of once-bitten twice-shy on upgrading
 for fun.

 Maybe this could be of interest, but it is a duplicate and it should have
 been fixed in snv_110: running zpool scrub twice hangs the scrub
 Interesting.  Note my crontab entry doesn't have any protection
 against this, so perhaps this bug is back in different form now.

 Will
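On the crontab protection point, something along these lines might serve
as a guard -- assuming a pool named pool0, it only kicks off a new scrub
when one isn't already running:

zpool status pool0 | grep -q 'scrub in progress' || zpool scrub pool0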

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] This is the scrub that never ends...

2009-09-08 Thread Tim Cook
On Tue, Sep 8, 2009 at 10:24 PM, Will Murnane will.murn...@gmail.comwrote:

 I left the scrub running all day:
  scrub: scrub in progress for 67h57m, 100.00% done, 0h0m to go
 but as you can see, it didn't finish.  So, I ran pkg image-update,
 rebooted, and am now running b122.  On reboot, the scrub restarted
 from the beginning, and currently estimates 17h to go.  I'll post an
 update in about 17 hours ;)

 On Mon, Sep 7, 2009 at 18:06, Will Murnane will.murn...@gmail.com wrote:
  On Mon, Sep 7, 2009 at 15:59, Henrik Johansson henr...@henkis.net
 wrote:
  Hello Will,
  On Sep 7, 2009, at 3:42 PM, Will Murnane wrote:
 
  What can cause this kind of behavior, and how can I make my pool
  finish scrubbing?
 
 
  No idea what is causing this but did you try to stop the scrub?
  I haven't done so yet.  Perhaps that would be a reasonable next step.
  I could run zpool status as root and see if that triggers the
  restart-scrub bug.  I don't mind scrubbing my data, but I do mind
  getting stuck in scrub-forever mode.
 
  If so what
  happened? (Might not be a good idea since this is not a normal state?)
 What
  release of OpenSolaris are you running?
  $ uname -a
  SunOS will-fs 5.11 snv_118 i86pc i386 i86xpv
  I can update to latest /dev if someone can suggest a reason why that
  might help.  Otherwise I'm sort of once-bitten twice-shy on upgrading
  for fun.
 
  Maybe this could be of interest, but it is a duplicate and it should
 have
  been fixed in snv_110: running zpool scrub twice hangs the scrub
  Interesting.  Note my crontab entry doesn't have any protection
  against this, so perhaps this bug is back in different form now.
 
  Will
 


Might wanna be careful with b122.  There are issues with raid-z raidsets
producing phantom checksum errors.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss