Bart Smaalders wrote:
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self heal in this case?
You're using hardware raid. The hardware raid controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of
[EMAIL PROTECTED] wrote:
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power
Your example would prove more effective if you added: I've got ten
databases, five on AIX, five on Solaris 8
Peter Rival wrote:
I don't like to top-post, but there's no better way right now. This
issue has recurred several times and there have been no answers to it
that cover the bases.
Jason Schroeder wrote:
Torrey McMahon wrote:
[EMAIL PROTECTED] wrote:
I'll bet that ZFS will generate more calls about broken hardware
and fingers will be pointed at ZFS at first because it's the new
kid; it will be some time before people realize that the data was
rotting all along
Roch wrote:
Sean Meighan writes:
The vi we were doing was on a 2-line file. If you just vi a new file, add
one line, and exit, it would take 15 minutes in fsync. On recommendation
of a workaround we set
set zfs:zil_disable=1
and after the reboot the fsync now takes 0.1 seconds. Now I
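For context, the workaround quoted above is an /etc/system tunable, applied like the following sketch (note the trade-off: with the ZFS Intent Log disabled, fsync() can return before data is safely committed, so a power loss may silently drop recently "synced" writes):

```shell
# /etc/system fragment -- disables the ZFS Intent Log pool-wide.
# An internal debugging switch, as Neil Perrin notes below; not for production.
set zfs:zil_disable=1
```

The setting takes effect only after a reboot, which matches the "after the reboot" observation above.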
Nicolas Williams wrote:
On Wed, Jun 21, 2006 at 10:41:50AM -0600, Neil Perrin wrote:
Why is this option available then? (Yes, that's a loaded question.)
I wouldn't call it an option, but an internal debugging switch that I
originally added to allow progress when initially integrating
Martin Englund wrote:
Alec Muffett wrote:
Then you could cd ~/.zfs/view/[EMAIL PROTECTED] and see all your music
files. Or large files. Or files with nlinks > 1. Or whatever.
Should it retain the directory structure for each match, e.g.
~/.zfs/view/[EMAIL
Jeff Bonwick wrote:
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to
thanks, that is very useful information. it pretty much rules out raid-z
for this workload with any reasonable configuration I can dream up
with only 12 disks available. it looks like mirroring is
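If mirroring is the conclusion for this workload, a 12-disk layout might be built as six striped 2-way mirrors; the device names below are illustrative, not from the original post:

```shell
# Six 2-way mirrors striped together: half the raw capacity of the 12 disks,
# but random reads can be serviced by either side of each mirror,
# avoiding the full-stripe read cost of raid-z.
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0
```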