Hello everyone,
I'm new to ZFS and OpenSolaris, and I've been reading the docs on ZFS (the pdf
The Last Word on Filesystems and wikipedia of course), and I'm trying to
understand something.
So ZFS is self-healing, correct? As I understand it, this is accomplished via parity and/or checksums stored in the metadata.
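For what it's worth, my reading is that the checksum verification happens on every read, and a scrub forces it for the whole pool. A minimal sketch of the commands involved (pool name `tank` is hypothetical):

```shell
# Read and verify every block in the pool against its checksum;
# blocks that fail are repaired from a redundant copy if one exists.
zpool scrub tank

# Report scrub progress plus per-device read/write/checksum error counts.
zpool status -v tank

# The checksum algorithm is a per-dataset property (on by default).
zfs get checksum tank
```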
On Sat, May 24, 2008 at 3:21 AM, Richard Elling [EMAIL PROTECTED] wrote:
Consider a case where you might use large, slow SATA drives (1 TByte, 7,200 rpm) for the main storage, and a single small, fast (36 GByte, 15k rpm) drive for the L2ARC. This might provide a reasonable cost/performance tradeoff.
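If I follow, attaching that fast drive as an L2ARC device would look roughly like this (device name is hypothetical):

```shell
# Add the small 15k-rpm drive as a cache (L2ARC) device;
# losing a cache device never costs data, only cached reads.
zpool add tank cache c1t0d0
```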
No, this is a 64-bit system (athlon64) with 64-bit kernel of course.
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
So, I think I've narrowed it down to two things:
* ZFS tries to destroy the dataset every time the pool is imported, because the previous destroy never finished
* In the process, ZFS exhausts kernel memory and the machine dies
So I thought of two options, but I'm not sure if I'm right:
Option 1:
On Fri, May 23, 2008 at 05:26:34PM -0500, Bob Friesenhahn wrote:
On Fri, 23 May 2008, Bill McGonigle wrote:
The remote-disk cache makes perfect sense. I'm curious if there are measurable benefits for caching local disks as well? NAND-flash SSD drives have good 'seek' and slow transfer, which seems like the right profile for a read cache.
On Sat, May 24, 2008 at 3:12 AM, Steve Hull [EMAIL PROTECTED] wrote:
Anyway you can add mirrored, [...], raidz, or raidz2 arrays to the pool,
right?
correct.
add a disk or two to increase your protected storage capacity.
if it's a protected vdev, like a mirror or raidz, sure... one can force-add a single disk, but then the pool isn't protected until you attach a mirror to that disk.
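A sketch of both cases, with hypothetical pool and device names:

```shell
# Grow protected capacity a whole redundant vdev at a time:
zpool add tank mirror c2t0d0 c2t1d0           # two-way mirror
zpool add tank raidz  c3t0d0 c3t1d0 c3t2d0    # 3-disk raidz

# zpool refuses to mix a bare disk into a redundant pool unless
# forced; data striped onto it would then be unprotected:
zpool add -f tank c4t0d0
```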
Does the cache improve write performance, or only reads?
The L2ARC cache device is for reads... for writes you want a separate Intent Log device.
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage before returning from the system call.
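If I've understood correctly, a separate log device is added much like a cache device (names hypothetical):

```shell
# Dedicate a small fast device to the intent log so synchronous
# writes don't wait on the main pool disks:
zpool add tank log c1t1d0

# Or mirror the log device for safety:
zpool add tank log mirror c1t1d0 c1t2d0
```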
Hi Steve,
On 24.05.2008 at 10:17, [EMAIL PROTECTED] wrote:
Subject: ZFS: A general question
On Sat, May 24, 2008 at 4:00 PM, [EMAIL PROTECTED] wrote:
cache improve write performance or only reads?
L2ARC cache device is for reads... for write you want
Intent Log
Thanks for answering my question; I had seen mention of intent log devices, but wasn't sure of their purpose.
I like the link you sent along... they did a nice job with that (though it does show that mixing and matching vastly different drive sizes is not exactly optimal...).
http://www.drobo.com/drobolator/index.html
Doing something like this for ZFS, allowing people to create pools by dragging in drives, would be a nice tool.
OK so in my (admittedly basic) understanding of raidz and raidz2, these technologies are very similar to raid5 and raid6. BUT if you set up one disk as a raidz vdev, you (obviously) can't maintain data after a disk failure, but you are protected against data corruption that is NOT the result of a whole-disk failure.
Sooo... I've been reading a lot in various places. The conclusion I've drawn
is this:
I can create raidz vdevs in groups of 3 disks and add them to my zpool to be
protected against 1 drive failure. This is the current status of growing
protected space in raidz. Am I correct here?
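Assuming that's right, growing in 3-disk raidz groups would look like this (device names hypothetical):

```shell
# Start with one 3-disk raidz vdev (tolerates 1 failure per group):
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# Later, add another 3-disk raidz group; ZFS stripes across both:
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
```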