[zfs-discuss] Promoting clones

2009-11-23 Thread Thomas Törnblom
I have noticed for a while that zfs promote of a clone doesn't seem to complete properly, even though no error is returned. I'm still running the nevada builds and use lucreate/luupgrade/ludelete to manage my BEs. Due to issues with ludelete failing to clean up old BEs properly I've made it a
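
One way to verify a promote took effect is to compare the origin property before and after; a minimal sketch, assuming a hypothetical BE clone rpool/ROOT/be2 (the dataset names are illustrative, not from the post):

  # zfs promote rpool/ROOT/be2
  # zfs get -r origin rpool/ROOT   # after a clean promote, be2 reports origin "-"

After promotion the former parent BE should instead report an origin under one of be2's snapshots.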

[zfs-discuss] zpool upgrade question

2009-11-23 Thread Nishchaya Bahuguna
Hi experts, I have a scenario where I need to use a zpool version that is currently part of Nevada (snv 125 onwards) but not yet part of S10. What is the best way to do it? Which packages do I need to install from snv in order to upgrade my zpool version? Thanks in advance,
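
For reference, the commands involved, sketched with a placeholder pool name:

  # zpool upgrade -v      # list the pool versions the running bits support
  # zpool upgrade tank    # upgrade the pool to the newest supported version

Note that once upgraded, the pool can no longer be imported by software that only understands the older version, so an S10 fallback would lose access to it.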

Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-23 Thread Rich Brown
On 11/22/09 16:48, Tim Cook wrote: On Sun, Nov 22, 2009 at 4:18 PM, Trevor Pretty trevor_pre...@eagle.co.nz wrote: Team, am I missing something? First off, I normally play around with OpenSolaris; it's been a while since I played with Solaris

Re: [zfs-discuss] The 100,000th beginner question about a zfs server

2009-11-23 Thread Al Hopper
On Sun, Nov 22, 2009 at 12:49 PM, Tim Cook t...@cook.ms wrote: snip Someone can correct me if I'm wrong... but I believe that opensolaris can do the ECC scrubbing in software even if the motherboard BIOS doesn't support it. The OS is not involved with the ECC functionality of the

Re: [zfs-discuss] Fwd: The 100,000th beginner question about a zfs server

2009-11-23 Thread David Dyer-Bennet
On Sat, November 21, 2009 20:25, Al Hopper wrote: And the last silly question. It seems to me that you'd have many, many adopters if there was a real answer to what the HCL tries to be and isn't - an answer to if I buy this stuff, do I have a prayer of making it work, or is there a subtle

Re: [zfs-discuss] Basic question about striping and ZFS

2009-11-23 Thread Kjetil Torgrim Homme
Kjetil Torgrim Homme kjeti...@linpro.no writes: Cindy Swearingen cindy.swearin...@sun.com writes: You might check the slides on this page: http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs Particularly, slides 14-18. In this case, graphic illustrations are probably the best way

Re: [zfs-discuss] Fwd: The 100,000th beginner question about a zfs server

2009-11-23 Thread Frank Middleton
On 11/23/09 10:10 AM, David Dyer-Bennet wrote: Is there enough information available from system configuration utilities to make an automatic HCL (or unofficial HCL competitor) feasible? Someone could write an application people could run which would report their opinion on how well it works,

Re: [zfs-discuss] Fwd: The 100,000th beginner question about a zfs server

2009-11-23 Thread David Dyer-Bennet
On Mon, November 23, 2009 09:53, Frank Middleton wrote: On 11/23/09 10:10 AM, David Dyer-Bennet wrote: Is there enough information available from system configuration utilities to make an automatic HCL (or unofficial HCL competitor) feasible? Someone could write an application people could

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Kjetil Torgrim Homme
Daniel Carosone d...@geek.com.au writes: Would there be a way to avoid taking snapshots if they're going to be zero-sized? I don't think it is easy to do; the txg counter is at the pool level, AFAIK: # zdb -u spool Uberblock magic = 00bab10c version = 13 txg
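
The pool-level nature of the counter can be seen directly; a sketch, sampling the uberblock twice with a hypothetical pool name:

  # zdb -u tank | grep txg
  # sleep 60 ; zdb -u tank | grep txg   # an unchanged txg means nothing in the pool committed

An unchanged txg proves the whole pool was idle, but a busy sibling filesystem advances it even when the dataset you care about saw no writes.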

Re: [zfs-discuss] zpool upgrade question

2009-11-23 Thread Richard Elling
On Nov 23, 2009, at 1:43 AM, Nishchaya Bahuguna wrote: Hi experts, I have a scenario where I need to use a zpool version that is currently part of Nevada (snv 125 onwards) but not yet part of S10. What is the best way to do it? XVM or VirtualBox. Which packages do I need to

[zfs-discuss] Proper way to make bootable ZFS slice on SPARC?

2009-11-23 Thread Arnold Bob
Hey everyone - I'm trying to live upgrade a Solaris 10 5/08 system on UFS to Solaris 10 10/09 on ZFS. / is mounted on c1t0d0s0 (UFS). I have a 2nd disk, c1t1d0, that is not being used and is available for the ZFS migration. What is the correct procedure for making a bootable ZFS slice? This
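
For a SPARC root pool the disk needs an SMI (VTOC) label rather than EFI, and the pool must be created on a slice; a sketch using the device from the post (pool name and slice layout are illustrative):

  # format -e c1t1d0               # relabel with an SMI label, slice 0 covering the disk
  # zpool create rpool c1t1d0s0    # root pool on the slice, not the whole disk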

Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-23 Thread Cindy Swearingen
Thanks old friend. I was surprised to read in the S10 zfs man page that there was the option sharesmb=on. I thought I had missed the CIFS server making it into S10 whilst I
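
For comparison, the OpenSolaris usage the property implies, sketched with a placeholder dataset (whether S10 10/09 actually ships the in-kernel CIFS service behind it is exactly what this thread is trying to settle):

  # zfs set sharesmb=on tank/fs
  # svcadm enable -r smb/server   # the SMF service backing the share on OpenSolaris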

Re: [zfs-discuss] Proper way to make bootable ZFS slice on SPARC?

2009-11-23 Thread Arnold Bob
Sorry, I forgot to put this in the post: I did zpool create boot c1t1d0s0 after the format command and before the lucreate command and got that error once I ran lucreate. Thanks!

Re: [zfs-discuss] Fwd: The 100,000th beginner question about a zfs server

2009-11-23 Thread R.G. Keen
Your point is well taken, Frank, and I agree - there has to be some serious design work for reliability. My background includes both hardware design for reliability and field service engineering support, so the issues are not at all foreign to me. Nor are the limits of something like a

[zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread sundeep dhall
All, I have a test environment with 4 internal disks and RAIDZ option. Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors Options considered 1. suddenly pulling a disk out 2. using zpool offline I think both these have issues in

Re: [zfs-discuss] Proper way to make bootable ZFS slice on SPARC?

2009-11-23 Thread Francois Napoleoni
Your system must be running Solaris 10 10/08 (Update 6) or later to have ZFS boot support before going from UFS to ZFS. First upgrade to Update 6, then move from UFS to ZFS. F. On 11/23/09 18:07, Arnold Bob wrote: Sorry, I forgot to put this in the post: I did zpool create boot c1t1d0s0 after the
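
Once on Update 6 or later, the migration itself is a short Live Upgrade sequence; a sketch with a placeholder BE name:

  # lucreate -n zfsBE -p rpool   # copy the running UFS BE into the root pool
  # luactivate zfsBE
  # init 6                       # boot the new ZFS BE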

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread David Dyer-Bennet
On Mon, November 23, 2009 11:44, sundeep dhall wrote: All, I have a test environment with 4 internal disks and RAIDZ option. Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors Options considered 1. suddenly pulling a disk out

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread Eric D. Mudama
On Mon, Nov 23 at 9:44, sundeep dhall wrote: All, I have a test environment with 4 internal disks and RAIDZ option. Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors Options considered 1. suddenly pulling a disk out 2. using

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread Shawn Ferry
I would try using hdadm or cfgadm to specifically offline devices out from under ZFS. I have done that previously with cfgadm for systems I cannot physically access. You can also use file backed storage to create your raidz and move, delete, overwrite the files to simulate issues. Shawn On
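
A minimal sketch of the file-backed approach (paths and sizes arbitrary); corrupting one backing file in place and then scrubbing exercises the raidz repair path without touching real hardware:

  # mkfile 128m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  # zpool create testpool raidz /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  # dd if=/dev/urandom of=/var/tmp/d3 bs=1024k count=16 conv=notrunc
  # zpool scrub testpool ; zpool status -v testpool   # checksum errors repaired from parity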

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread Kjetil Torgrim Homme
sundeep dhall sundeep.dh...@sun.com writes: Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors Options considered 1. suddenly pulling a disk out 2. using zpool offline I think both these have issues in simulating a sudden

Re: [zfs-discuss] mirroring ZIL device

2009-11-23 Thread Scott Meilicke
#1. It may help to use 15k disks as the ZIL. When I tested using three 15k disks striped as my ZIL, it made my workload go slower, even though it seems like it should have been faster. My suggestion is to test it out, and see if it helps. #3. You may get good performance with an inexpensive
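
For reference, attaching a separate (and mirrored) log device is a single command; a sketch with placeholder device names:

  # zpool add tank log mirror c2t0d0 c2t1d0

Bear in mind that removing a log device again needs a fairly recent pool version (removal support arrived around version 19), so test on a scratch pool first.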

[zfs-discuss] Large ZFS server questions

2009-11-23 Thread John Welter
Hi everyone, We've been reasonably happy with ZFS running on Dell 2970/MD1000 hardware. We are running pools of about 50TB usable (60 drives) made up of many RaidZ2 groups. Anyhow, we now have the desire to build pools even larger - in the 300TB range. Having that much disk behind a single

[zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Matthew Ahrens
If you did not do zfs set dedup=fletcher4,verify fs (which is available in build 128 and nightly bits since then), you can ignore this message. We have changed the on-disk format of the pool when using dedup=fletcher4,verify with the integration of: 6903705 dedup=fletcher4,verify doesn't
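
The affected combination versus the unaffected ones, sketched with a placeholder dataset:

  # zfs set dedup=fletcher4,verify tank/fs   # the setting this notice is about
  # zfs set dedup=on tank/fs                 # default (sha256) - unaffected
  # zfs set dedup=sha256,verify tank/fs      # unaffected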

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread Richard Elling
On Nov 23, 2009, at 9:44 AM, sundeep dhall wrote: All, I have a test environment with 4 internal disks and RAIDZ option. Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors First, list the failure modes you expect to see.

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Len Zaifman
I asked this question a week ago but now I have what I feel are reasonable pricing numbers : For 2 X4540s (24 TB each) I pay 6% more than for one 7310 redundant cluster (2 7310s in a cluster configuration) with 22 TB of disk and 2 x 18 GB SSDs. I lose live redundancy, but can switch the

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Scott Meilicke
If the 7310s can meet your performance expectations, they sound much better than a pair of x4540s. Auto-fail over, SSD performance (although these can be added to the 4540s), ease of management, and a great front end. I haven't seen if you can use your backup software with the 7310s, but from

Re: [zfs-discuss] zfs-raidz - simulate disk failure

2009-11-23 Thread David Dyer-Bennet
On Mon, November 23, 2009 12:42, Eric D. Mudama wrote: On Mon, Nov 23 at 9:44, sundeep dhall wrote: All, I have a test environment with 4 internal disks and RAIDZ option. Q) How do I simulate a sudden 1-disk failure to validate that zfs / raidz handles things well without data errors

Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-23 Thread dan pritts
On Nov 4, 2009, at 6:02 PM, Jim Klimov wrote: Thanks for the link, but the main concern in spinning down drives of a ZFS pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a transaction group (TXG) which requires a synchronous write of metadata to disk. I'm

Re: [zfs-discuss] The 100,000th beginner question about a zfs server

2009-11-23 Thread Miles Nordin
tc == Tim Cook t...@cook.ms writes: tc I believe that opensolaris can do the ECC scrubbing in tc software even of the motherboard BIOS doesn't support it. yeah, I don't really understand how the solaris idle page scrubbing interacts with whatever. scrubbing's a hardware feature for

Re: [zfs-discuss] Data balance across vdevs

2009-11-23 Thread Jesse Stroik
Erik and Richard: thanks for the information -- this is all very good stuff. Erik Trimble wrote: Something occurs to me: how full is your current 4 vdev pool? I'm assuming it's not over 70% or so. yes, by adding another 3 vdevs, any writes will be biased towards the empty vdevs, but
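
To watch how writes spread across the old and new vdevs, the per-vdev capacity and bandwidth columns are enough; a sketch:

  # zpool iostat -v tank 5   # alloc/free and write ops broken out per raidz vdev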

Re: [zfs-discuss] The 100,000th beginner question about a zfs server

2009-11-23 Thread Richard Elling
On Nov 23, 2009, at 12:48 PM, Miles Nordin wrote: tc == Tim Cook t...@cook.ms writes: tc I believe that opensolaris can do the ECC scrubbing in tc software even of the motherboard BIOS doesn't support it. yeah, I don't really understand how the solaris idle page scrubbing interacts

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Trevor Pretty
Len Zaifman wrote: Under these circumstances what advantage would a 7310 cluster have over 2 X4540s backing each other up and splitting the load? FISH! My wife could drive a 7310 :-)

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread David Magda
On Nov 23, 2009, at 14:46, Len Zaifman wrote: Under these circumstances what advantage would a 7310 cluster have over 2 X4540s backing each other up and splitting the load? Do you want to worry about your storage system at 3 AM? That's what all these appliances (regardless of vendor) get you for

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Erik Trimble
Get the 7310 setup. Vs. the X4540 it is: (1) less configuration on your clients (2) instant failover with no intervention on your part (3) less expensive (4) expandable to 3x your current disk space (5) lower power draw and less rack space (6) So Simple, A Caveman Could Do It (tm) -Erik On Mon,

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Miles Nordin
lz == Len Zaifman leona...@sickkids.ca writes: lz So I now have 2 disk paths and two network paths as opposed to lz only one in the 7310 cluster. confused You're configuring all your failover on the client, so the HA stuff is stateless wrt the server? sounds like the smart way since

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Daniel Carosone
Daniel Carosone d...@geek.com.au writes: Would there be a way to avoid taking snapshots if they're going to be zero-sized? I don't think it is easy to do; the txg counter is at the pool level, [..] it would help when the entire pool is idle, though. .. which is exactly the scenario in

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread Travis Tabbal
I will give you all of this information on monday. This is great news :) Indeed. I will also be posting this information when I get to the server tonight. Perhaps it will help. I don't think I want to try using that old driver though, it seems too risky for my taste. Is there a command

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread James C. McPherson
Travis Tabbal wrote: I will give you all of this information on monday. This is great news :) Indeed. I will also be posting this information when I get to the server tonight. Perhaps it will help. I don't think I want to try using that old driver though, it seems too risky for my taste.

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Andrew Gabriel
Kjetil Torgrim Homme wrote: Daniel Carosone d...@geek.com.au writes: Would there be a way to avoid taking snapshots if they're going to be zero-sized? I don't think it is easy to do; the txg counter is at the pool level, AFAIK: # zdb -u spool Uberblock magic = 00bab10c

Re: [zfs-discuss] The 100,000th beginner question about a zfs server

2009-11-23 Thread R.G. Keen
Most ECC setups are as you describe. The memory hardware detects and corrects all 1-bit errors, and detects all two-bit errors on its own. What ... should ... happen is that the OS should get an interrupt when this happens so it has the opportunity to note the error in logs and to higher level

Re: [zfs-discuss] Heads up: SUNWzfs-auto-snapshot obsoletion in snv 128

2009-11-23 Thread Matthew Ahrens
Andrew Gabriel wrote: Kjetil Torgrim Homme wrote: Daniel Carosone d...@geek.com.au writes: Would there be a way to avoid taking snapshots if they're going to be zero-sized? I don't think it is easy to do; the txg counter is at the pool level, AFAIK: # zdb -u spool Uberblock
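
A related after-the-fact approach some people use: take the snapshot anyway and prune it later if it holds nothing, since a snapshot whose used is 0 pins no unique blocks and destroying it frees no space. A sketch with hypothetical names:

  # zfs list -r -t snapshot -o name,used tank
  # zfs destroy tank/home@2009-11-23-0800   # a zero-used candidate

Whether a zero-used snapshot is actually redundant still depends on an identical neighbouring snapshot surviving, so treat used=0 as a pruning candidate, not proof.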

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread Travis Tabbal
I have a possible workaround. Mark Johnson mark.john...@sun.com has been emailing me today about this issue and he proposed the following: You can try adding the following to /etc/system, then rebooting... set xpv_psm:xen_support_msi = -1 I have been able to format a ZVOL container from a
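
The workaround as it would land in /etc/system (the tunable line is quoted from the message; back the file up first):

  # cp /etc/system /etc/system.pre-msi
  # echo 'set xpv_psm:xen_support_msi = -1' >> /etc/system
  # reboot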

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread Carson Gaspar
Travis Tabbal wrote: I have a possible workaround. Mark Johnson mark.john...@sun.com has been emailing me today about this issue and he proposed the following: You can try adding the following to /etc/system, then rebooting... set xpv_psm:xen_support_msi = -1 I am also running XVM, and after

[zfs-discuss] flar and tar the best way to backup S10 ZFS only?

2009-11-23 Thread Trevor Pretty
I'm persuading a customer that when he goes to S10 he should use ZFS for everything. We only have one M3000 and a J4200 connected to it. We are not talking about a massive site here with a SAN etc. The M3000 is their "mainframe". His RTO and RPO are both about 12 hours, his business gets

Re: [zfs-discuss] flar and tar the best way to backup S10 ZFS only?

2009-11-23 Thread Richard Elling
On Nov 23, 2009, at 8:24 PM, Trevor Pretty wrote: I'm persuading a customer that when he goes to S10 he should use ZFS for everything. We only have one M3000 and a J4200 connected to it. We are not talking about a massive site here with a SAN etc. The M3000 is their mainframe. His RTO
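
For the pure-ZFS alternative to flar/tar, a recursive snapshot plus a replication stream captures the datasets and their properties; a sketch with placeholder names:

  # zfs snapshot -r tank@backup-2009-11-23
  # zfs send -R tank@backup-2009-11-23 | gzip > /backup/tank.zsend.gz

The usual caveat applies: a stored send stream is all-or-nothing on restore, so receiving it into a pool on the backup host is generally safer than keeping the raw stream.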

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Matthew Ahrens
We discovered another, more fundamental problem with dedup=fletcher4,verify. I've just putback the fix for: 6904243 zpool scrub/resilver doesn't work with cross-endian dedup=fletcher4,verify blocks The same instructions as below apply, but in addition, the dedup=fletcher4,verify

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Jeff Bonwick
And, for the record, this is my fault. There is an aspect of endianness that I simply hadn't thought of. When I have a little more time I will blog about the whole thing, because there are many useful lessons here. Thank you, Matt, for all your help with this. And my apologies to everyone else

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-23 Thread Jeremy Kitchen
On Nov 23, 2009, at 7:28 PM, Travis Tabbal wrote: I have a possible workaround. Mark Johnson mark.john...@sun.com has been emailing me today about this issue and he proposed the following: You can try adding the following to /etc/system, then rebooting... set xpv_psm:xen_support_msi = -1

Re: [zfs-discuss] heads-up: dedup=fletcher4,verify was broken

2009-11-23 Thread Jeff Bonwick
Finally, just to be clear, one last point: the two fixes integrated today only affect you if you've explicitly set dedup=fletcher4,verify. To quote Matt: This is not the default dedup setting; pools that only used zfs set dedup=on (or =sha256, or =verify, or =sha256,verify) are unaffected.

[zfs-discuss] NexentaStor 2.2.0 Developer Edition Released

2009-11-23 Thread Anil Gulecha
Hi All, I'd like to announce the immediate availability of NexentaStor Developer Edition v2.2.0. Since the previous announcement, many exciting additions have gone into NexentaStor Developer Edition. * This is a major stable release. * Storage limit increased to 4TB. * Built-in antivirus

[zfs-discuss] Dedup question

2009-11-23 Thread Colin Raven
Folks, I've been reading Jeff Bonwick's fascinating dedup post. This is going to sound like either the dumbest or the most obvious question ever asked, but, if you don't know and can't produce meaningful RTFM results... ask. So here goes: Assuming you have a dataset in a zfs pool that's been
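
Worth noting up front: dedup applies only to blocks written after the property is set, so pre-existing data is deduplicated only once it is rewritten; a sketch with hypothetical names:

  # zfs set dedup=on tank/data
  # zfs snapshot tank/data@move
  # zfs send tank/data@move | zfs recv tank/data2   # rewriting the blocks lets dedup see them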

Re: [zfs-discuss] flar and tar the best way to backup S10 ZFS only?

2009-11-23 Thread Brent Jones
On Mon, Nov 23, 2009 at 8:24 PM, Trevor Pretty trevor_pre...@eagle.co.nz wrote: I'm persuading a customer that when he goes to S10 he should use ZFS for everything. We only have one M3000 and a J4200 connected to it. We are not talking about a massive site here with a SAN etc. The M3000 is

Re: [zfs-discuss] Dedup question

2009-11-23 Thread Michael Schuster
Colin Raven wrote: Folks, I've been reading Jeff Bonwick's fascinating dedup post. This is going to sound like either the dumbest or the most obvious question ever asked, but, if you don't know and can't produce meaningful RTFM results... ask. So here goes: Assuming you have a dataset in a