Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Mark Sandrock
Why don't you check which byte differs, and in what way? Maybe that would suggest the failure mode. Is it the same byte value in all affected files, for instance? Mark Sent from my iPhone On Oct 22, 2011, at 2:08 PM, Robert Watzlavick rob...@watzlavick.com wrote: On Oct 22, 2011, at 13:14,
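A quick way to do that with stock Solaris tools (the file paths below are placeholders): cmp -l prints the offset and the octal value of every differing byte, and digest confirms the copies really differ.

    # Print offset plus octal values of each differing byte
    cmp -l /pool/good/copy /pool/bad/copy

    # Sanity check: hash both copies first
    digest -v -a sha256 /pool/good/copy /pool/bad/copy

If the same offset or the same bit pattern turns up in every affected file, that narrows the failure mode considerably.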

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Mark Sandrock
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote: On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote: I just wanted to add something on fsck on ZFS - because for me that used to make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments. Where ZFS doesn't have an fsck command -
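For context, ZFS's stand-in for an offline fsck is an online scrub, which re-reads every allocated block and verifies its checksum while the pool stays in service; tank below is a placeholder pool name.

    zpool scrub tank        # start a full verification pass
    zpool status -v tank    # watch progress; -v lists any damaged files

Errors found during the scrub are repaired from mirror or raidz redundancy where possible, which is how scrub-plus-redundancy covers the fsck role in 24x7 shops.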

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Mark Sandrock
Shouldn't the choice of RAID type also be based on the I/O requirements? Anyway, with RAID-10, even a second failed disk is not catastrophic, so long as it is not the counterpart of the first failed disk, no matter the number of disks. (With 2-way mirrors.) But that's why we do backups, right? Mark
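For illustration, a minimal RAID-10-style layout (device names are placeholders):

    # Four disks as two 2-way mirrors
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

Losing c0t0d0 and then c0t2d0 is survivable; losing c0t0d0 and then its partner c0t1d0 takes the pool with it, hence the backups.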

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 2:37 AM, Ian Collins i...@ianshome.com wrote: On 04/ 8/11 06:30 PM, Erik Trimble wrote: On 4/7/2011 10:25 AM, Chris Banal wrote: While I understand everything at Oracle is top secret these days, does anyone have any insight into a next-gen X4500 / X4540? Does some other

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 3:29 AM, Ian Collins i...@ianshome.com wrote: On 04/ 8/11 08:08 PM, Mark Sandrock wrote: On Apr 8, 2011, at 2:37 AM, Ian Collins i...@ianshome.com wrote: On 04/ 8/11 06:30 PM, Erik Trimble wrote: On 4/7/2011 10:25 AM, Chris Banal wrote: While I understand everything

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote: On 04/ 8/11 01:14 PM, Ian Collins wrote: You have built-in storage failover with an AR cluster; and they do NFS, CIFS, iSCSI, HTTP and WebDAV out of the box. And you have virtually unlimited options for application

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 9:39 PM, Ian Collins i...@ianshome.com wrote: On 04/ 9/11 03:20 AM, Mark Sandrock wrote: On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote: On 04/ 8/11 01:14 PM, Ian Collins wrote: You have built-in storage failover with an AR cluster; and they do

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 11:19 PM, Ian Collins i...@ianshome.com wrote: On 04/ 9/11 03:53 PM, Mark Sandrock wrote: I'm not arguing. If it were up to me, we'd still be selling those boxes. Maybe you could whisper in the right ear? I wish. I'd have a long list if I could do that. Mark

Re: [zfs-discuss] Any use for extra drives?

2011-03-25 Thread Mark Sandrock
On Mar 24, 2011, at 7:23 AM, Anonymous wrote: Generally, you choose your data pool config based on data size, redundancy, and performance requirements. If those are all satisfied with your single mirror, the only thing left for you to do is think about splitting your data off onto a
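A hedged sketch of that kind of split, assuming a spare pair of drives (device names are hypothetical):

    # Put the bulk data on its own mirrored pool, separate from root
    zpool create data mirror c0t2d0 c0t3d0
    zfs create data/projects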

Re: [zfs-discuss] Any use for extra drives?

2011-03-24 Thread Mark Sandrock
On Mar 24, 2011, at 5:42 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Nomen Nescio Hi ladies and gents, I've got a new Solaris 10 development box with ZFS mirror root using 500G drives. I've got several

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Mark Sandrock
On Feb 2, 2011, at 8:10 PM, Eric D. Mudama wrote: All other things being equal, the 15k and the 7200 drive, which share electronics, will have the same max transfer rate at the OD. Is that true? So the only difference is in the access time? Mark
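The rotational part of the access-time difference is simple arithmetic: average rotational latency is half a revolution.

    7,200 rpm:  (60 s / 7,200) / 2  = 4.17 ms
    15,000 rpm: (60 s / 15,000) / 2 = 2.00 ms

On top of that, 15k mechanisms usually seek faster as well.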

Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
Why do you say fssnap has the same problem? If it write-locks the file system, it is only for a matter of seconds, as I recall. Years ago, I used it on a daily basis to do ufsdumps of large fs'es. Mark On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote: On 1/30/2011 5:26 PM, Joerg Schilling
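From memory, the workflow looked roughly like this; the backing-store path, mount point, and tape device are placeholders.

    # Create the UFS snapshot; the brief write lock happens here
    fssnap -F ufs -o bs=/var/tmp/snapstore /export/home
    # fssnap prints the snapshot device, e.g. /dev/fssnap/0
    ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0
    # Delete the snapshot once the dump completes
    fssnap -d /export/home

The file system is locked only while the snapshot is established, not for the duration of the ufsdump.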

Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
On 01/31/2011 10:24 AM, Mark Sandrock wrote: Why do you say fssnap has the same problem? If it write-locks the file system, it is only for a matter of seconds, as I recall. Years ago, I used it on a daily basis to do ufsdumps of large fs'es. Mark On Jan 30, 2011, at 5:41 PM, Torrey McMahon

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
On Dec 18, 2010, at 12:23 PM, Lanky Doodle wrote: Now this is getting really complex, but can you have server failover in ZFS, much like DFS-R in Windows - you point clients to a clustered ZFS namespace so that if a complete server fails, nothing is interrupted. This is the purpose of an Amber

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
Erik, just a hypothetical what-if ... In the case of resilvering on a mirrored disk, why not take a snapshot, and then resilver by doing a pure block copy from the snapshot? It would be sequential, so long as the original data was unmodified; and random access in dealing with the
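As a conceptual illustration only (this is not how zpool replace actually works, and the device names are made up), the sequential case amounts to a raw clone of the surviving mirror half:

    # Sequential rebuild in the abstract: block-for-block copy
    # from the surviving half onto the replacement disk
    dd if=/dev/rdsk/c0t0d0s0 of=/dev/rdsk/c0t4d0s0 bs=1024k

versus walking the block-pointer tree, which is what makes a real resilver's I/O pattern random.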

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
On Dec 20, 2010, at 2:05 PM, Erik Trimble wrote: On 12/20/2010 11:56 AM, Mark Sandrock wrote: Erik, just a hypothetical what-if ... In the case of resilvering on a mirrored disk, why not take a snapshot, and then resilver by doing a pure block copy from the snapshot? It would

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
It may well be that different methods are optimal for different use cases. Mechanical disk vs. SSD; mirrored vs. raidz[123]; sparse vs. populated; etc. It would be interesting to read more in this area, if papers are available. I'll have to take a look. ... Or does someone have pointers? Mark

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-15 Thread Mark Sandrock
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote: On 11/ 2/10 08:33 AM, Mark Sandrock wrote: I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-15 Thread Mark Sandrock
Edward, I recently installed a 7410 cluster, which had added Fibre Channel HBAs. I know the site also has Blade 6000s running VMware, but no idea if they were planning to run fiber to those blades (or even had the option to do so). But perhaps FC would be an option for you? Mark On Nov 12,

[zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-01 Thread Mark Sandrock
Hello, I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported: scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go and a week being 168
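(If those numbers hold, the arithmetic is grim: 306 h elapsed at 63.87% done implies 306 / 0.6387 ≈ 479 h total, and 479 / 168 ≈ 2.9 weeks to resilver a single half-full 1TB drive.)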