Why don't you see which byte differs, and in what way?
Maybe that would suggest the failure mode. Is it the
same byte data in all affected files, for instance?
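One quick way to check that (file names here are placeholders) is cmp in verbose mode:

  # print offset, byte-in-file-1, byte-in-file-2 (octal) for every differing byte
  cmp -l good.copy bad.copy | head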
Mark
Sent from my iPhone
On Oct 22, 2011, at 2:08 PM, Robert Watzlavick rob...@watzlavick.com wrote:
On Oct 22, 2011, at 13:14,
On Oct 18, 2011, at 11:09 AM, Nico Williams wrote:
On Tue, Oct 18, 2011 at 9:35 AM, Brian Wilson wrote:
I just wanted to add something on fsck on ZFS - because for me that used to
make ZFS 'not ready for prime-time' in 24x7 5+ 9s uptime environments.
Where ZFS doesn't have an fsck command -
Shouldn't the choice of RAID type also
be based on the I/O requirements?
Anyway, with RAID-10, even a second
failed disk is not catastrophic, so long
as it is not the counterpart of the first
failed disk, no matter the number of disks.
(With 2-way mirrors.)
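For illustration, a striped-mirror pool of that sort could be built like this (the pool and device names are hypothetical):

  # two 2-way mirror vdevs ("RAID-10"); any single disk can fail, and a second
  # failure is survivable as long as it isn't the partner of the first
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0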
But that's why we do backups, right?
Mark
On Apr 8, 2011, at 2:37 AM, Ian Collins i...@ianshome.com wrote:
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
On 4/7/2011 10:25 AM, Chris Banal wrote:
While I understand everything at Oracle is top secret these days,
does anyone have any insight into a next-gen X4500 / X4540? Does some other
On Apr 8, 2011, at 3:29 AM, Ian Collins i...@ianshome.com wrote:
On 04/ 8/11 08:08 PM, Mark Sandrock wrote:
On Apr 8, 2011, at 2:37 AM, Ian Collins i...@ianshome.com wrote:
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
On 4/7/2011 10:25 AM, Chris Banal wrote:
While I understand everything
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote:
On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do NFS, CIFS, iSCSI, HTTP and WebDAV
out of the box.
And you have virtually unlimited options for application
On Apr 8, 2011, at 9:39 PM, Ian Collins i...@ianshome.com wrote:
On 04/ 9/11 03:20 AM, Mark Sandrock wrote:
On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote:
On 04/ 8/11 01:14 PM, Ian Collins wrote:
You have built-in storage failover with an AR cluster;
and they do
On Apr 8, 2011, at 11:19 PM, Ian Collins i...@ianshome.com wrote:
On 04/ 9/11 03:53 PM, Mark Sandrock wrote:
I'm not arguing. If it were up to me,
we'd still be selling those boxes.
Maybe you could whisper in the right ear?
I wish. I'd have a long list if I could do that.
Mark
On Mar 24, 2011, at 7:23 AM, Anonymous wrote:
Generally, you choose your data pool config based on data size,
redundancy, and performance requirements. If those are all satisfied with
your single mirror, the only thing left for you to do is think about
splitting your data off onto a
On Mar 24, 2011, at 5:42 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nomen Nescio
Hi ladies and gents, I've got a new Solaris 10 development box with ZFS
mirror root using 500G drives. I've got several
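As a rough sketch, "splitting your data off" onto its own mirrored pool could look like this (pool, dataset, and device names are made up):

  # leave rpool to the OS; give the data a separate mirror
  zpool create datapool mirror c1t2d0 c1t3d0
  zfs create datapool/export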
On Feb 2, 2011, at 8:10 PM, Eric D. Mudama wrote:
All other
things being equal, the 15k and the 7200 drive, which share
electronics, will have the same max transfer rate at the OD.
Is that true? So the only difference is in the access time?
Mark
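For the rotational part of access time, the back-of-envelope numbers (average rotational latency is half a revolution; seek-time differences come on top of this):

  echo 'scale=2; 60000 / 15000 / 2' | bc   # 15k rpm:  2.00 ms avg rotational latency
  echo 'scale=2; 60000 / 7200 / 2' | bc    # 7200 rpm: 4.16 ms avg rotational latency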
Why do you say fssnap has the same problem?
If it write locks the file system, it is only for a matter of seconds, as I
recall.
Years ago, I used it on a daily basis to do ufsdumps of large fs'es.
Mark
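For reference, that workflow looks roughly like this (mount point, backing-store path, and tape device are placeholders):

  # snapshot the live UFS file system; fssnap prints the snapshot device, e.g. /dev/fssnap/0
  fssnap -F ufs -o bs=/var/tmp/snap.bs /export/home
  # dump the quiesced snapshot rather than the live file system
  ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0
  # remove the snapshot when the dump is done
  fssnap -d /export/home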
On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:
On 1/30/2011 5:26 PM, Joerg Schilling
/31/2011 10:24 AM, Mark Sandrock wrote:
Why do you say fssnap has the same problem?
If it write locks the file system, it is only for a matter of seconds, as I
recall.
Years ago, I used it on a daily basis to do ufsdumps of large fs'es.
Mark
On Jan 30, 2011, at 5:41 PM, Torrey McMahon
On Dec 18, 2010, at 12:23 PM, Lanky Doodle wrote:
Now this is getting really complex, but can you have server failover in ZFS,
much like DFS-R in Windows - you point clients to a clustered ZFS namespace
so that if a complete server fails, nothing is interrupted.
This is the purpose of an Amber
Erik,
just a hypothetical what-if ...
In the case of resilvering on a mirrored disk, why not take a snapshot, and then
resilver by doing a pure block copy from the snapshot? It would be sequential,
so long as the original data was unmodified; and random access in dealing with
the
On Dec 20, 2010, at 2:05 PM, Erik Trimble wrote:
On 12/20/2010 11:56 AM, Mark Sandrock wrote:
Erik,
just a hypothetical what-if ...
In the case of resilvering on a mirrored disk, why not take a snapshot, and
then
resilver by doing a pure block copy from the snapshot? It would
It may well be that different methods are optimal for different use cases.
Mechanical disk vs. SSD; mirrored vs. raidz[123]; sparse vs. populated; etc.
It would be interesting to read more in this area, if papers are available.
I'll have to take a look. ... Or does someone have pointers?
Mark
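Just to make the what-if concrete, the "pure block copy" would amount to something like a sequential device-to-device copy (purely illustrative; device names are made up, and this is not how ZFS resilvers today):

  # stream the surviving half of the mirror onto the replacement disk
  dd if=/dev/rdsk/c0t0d0s0 of=/dev/rdsk/c0t1d0s0 bs=1024k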
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote:
On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported
Edward,
I recently installed a 7410 cluster, which had added Fibre Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12,
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
and a week being 168
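For scale, the arithmetic behind those numbers:

  echo 'scale=1; 306 / 0.6387' | bc         # ~479 hours total at the current rate
  echo 'scale=1; 306 / 0.6387 - 306' | bc   # ~173 hours remaining, matching the 173h7m estimate
  echo 'scale=1; 306 / 0.6387 / 168' | bc   # ~2.8 weeks for the whole resilver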