Wow, some great comments on here now, even a few people agreeing with me, which
is nice :D
I'll happily admit I don't have the in-depth understanding of storage many of
you guys have, but since the idea doesn't seem pie-in-the-sky crazy, I'm going
to try to write up all my current thoughts on
On 30-Aug-08, at 2:32 AM, Todd H. Poole wrote:
With regard to what I've experienced and read on the zfs-discuss list (among
others), I have the __feeling__ that we would have gotten into real trouble
running Solaris (even the most recent release) on that system ...
So if someone asks me whether to run Solaris+ZFS on
On Sat, 30 Aug 2008 09:35:31 -0300, Toby Thain [EMAIL PROTECTED] wrote:
On 30-Aug-08, at 2:32 AM, Todd H. Poole wrote:
I couldn't agree with you more. I'm beginning to understand what the
phrase "Sun's software is great - as long as you're running it on
Sun's hardware" means...
Totally OT,
On Sat, 30 Aug 2008, Ross wrote:
while the problem is diagnosed. With that said, could the write
timeout default to on when you have a slog device? After all, the
data is safely committed to the slog, and should remain there until
it's written to all devices. Bob, you seemed the most
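For anyone following along who hasn't used one: a slog is just a dedicated
intent log device attached to the pool, so synchronous writes land there
first. A minimal sketch (pool and device names are made up, so adjust for
your system):

    # Attach a dedicated intent log (slog) device to an existing pool;
    # synchronous writes are committed here before the data disks.
    zpool add tank log c4t0d0

    # Verify it shows up under a "logs" vdev:
    zpool status tank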
Triple mirroring you say? That'd be me then :D
The reason I really want to get ZFS timeouts sorted is that our long-term goal
is to mirror that over two servers too, giving us a pool mirrored across two
servers, where each half of the mirror is actually a ZFS iSCSI volume hosted on
triply mirrored disks.
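In case the layout isn't obvious, here's roughly what I have in mind. All the
names and addresses are made up, and I'm going from memory on the old
shareiscsi property, so double-check the syntax:

    # On each of the two storage servers: a triple mirror, with a
    # volume carved out of it and exported over iSCSI.
    zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
    zfs create -V 500G tank/vol0
    zfs set shareiscsi=on tank/vol0

    # On the head node: discover both targets, then mirror across them.
    # (The real device names will be long target-based ones; c5t0d0 and
    # c6t0d0 are just placeholders.)
    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm add discovery-address 192.168.1.11:3260
    iscsiadm modify discovery --sendtargets enable
    zpool create bigpool mirror c5t0d0 c6t0d0

So bigpool stays available as long as one of the two servers is up, and each
server can in turn lose two of its three disks.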
Just saw this blog post linked from The Register: it's EMC pointing out that
their array wastes less disk space than either HP's or NetApp's. I'm loving the
10% of space they have to reserve for snapshots - and you can't add more o_0.
HP similarly recommend 20% of reserved space for snapshots, and
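Compare that with ZFS, where as far as I know there's no fixed snapshot
reserve to size up front at all - a snapshot starts out consuming almost
nothing and only grows as the live data diverges from it:

    # No reserved snapshot region to pre-allocate:
    zfs snapshot tank/home@monday

    # USED starts near zero and grows with divergence:
    zfs list -t snapshot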
Does anyone have any experience with ZFS filesystems just disappearing? I've
got a healthy pool where two of my filesystems (out of 7) suddenly disappeared.
It happened after I installed FUSE and ntfs-3g, then rebooted.
Neither was created by the system - I added them to the existing default
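In case it helps anyone point me in the right direction, here's roughly what
I've checked so far (pool name is a placeholder):

    # Datasets the pool currently knows about:
    zfs list -r tank

    # Pool health and any logged errors:
    zpool status -v tank

    # Full command history for the pool, in case something
    # actually destroyed them:
    zpool history tank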
Eric Schrock writes:
A better option would be not to use this to perform FMA diagnosis, but
instead to work it into the mirror child selection code. This has already
been alluded to before, but it would be cool to keep track of latency
over time, and use this to both a) prefer one drive over
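Until something like that lands in the mirror code itself, the per-device
latencies it would feed on are already visible from userland; a crude way to
watch them over time (pool name made up):

    # Per-device I/O statistics every 5 seconds; asvc_t is the average
    # service time in ms, so a persistently slow mirror child stands out.
    iostat -xn 5

    # ZFS's own per-vdev view of the same traffic:
    zpool iostat -v tank 5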
Miles Nordin writes:
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf> You are saying that I can't split my mirrors between a local
bf> disk in Dallas and a remote disk in New York accessed via
bf> iSCSI?
nope, you've misread. I'm saying reads should go to the local disk