On 13 July 2008, John-Paul Drawneek sent me these 0.4K bytes:
Yep - each ZFS file system has its own NFS mount point.
With mirror mounts it all works like magic and is pretty nice.
I can see it being a hit for Linux; I wonder when mirror mounts
will get implemented in Linux.
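The mirror-mount behaviour described above can be sketched as a short command sequence. This is an illustrative sketch only: the server name (nfssrv), pool, and dataset names (tank/home, alice) are made up, and it assumes a Solaris-style client with mirror-mount support.

```shell
# On the server: child file systems inherit sharenfs, so each one
# automatically gets its own NFS share.
zfs create -o sharenfs=on tank/home
zfs create tank/home/alice

# On the client: mount only the parent file system...
mount -F nfs nfssrv:/tank/home /mnt/home

# ...and simply crossing into the child directory triggers a
# transparent mount of the child's share (the "magic" mentioned above).
ls /mnt/home/alice
```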
Hello!
I'm trying something like LTSP with OpenSolaris.
I found some articles about issues with ZFS features when trying this
configuration (sharing via NFS with the diskless clients).
Does anybody know the status of these issues with diskless
clients? Where can I
You are right that the X4500 has a single point of failure, but keeping a spare
server module is not that expensive.
As there are no cables, replacing it will take a few seconds, and after the
boot everything will be OK.
Besides, cluster support for JBODs will come shortly; that setup will
eliminate the SPOF.
Hi
I had a quick look. Looks great!
A suggestion - from the given example, I think the API could be made more Pythonic.
Python is dynamically typed, and properties can be dynamically looked up too.
Thus, instead of prop_get_* we could have -
1. prop() : a generic function, returning typed arguments. The
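The suggestion above might look something like this sketch. The ZfsProps class, its property table, and the (value, type) representation are all hypothetical illustrations of the idea, not the real binding's API:

```python
# Hypothetical sketch: one dynamically-typed prop() accessor in place
# of a family of prop_get_<type>() functions. The class and its raw
# property table are invented for illustration.
class ZfsProps:
    def __init__(self, raw):
        # raw maps property names to (value, type-name) string pairs,
        # as they might come back from an underlying C library.
        self._raw = raw

    def prop(self, name):
        """Generic accessor: returns the value with its natural Python type."""
        value, kind = self._raw[name]
        converters = {"number": int,
                      "string": str,
                      "boolean": lambda v: v == "on"}
        return converters[kind](value)

props = ZfsProps({"quota": ("1024", "number"),
                  "compression": ("on", "boolean")})
print(props.prop("quota"))        # an int, not a string
print(props.prop("compression"))  # a bool
```

The point is that callers never choose a typed getter; the wrapper consults the property's declared type and converts once, which is the usual idiom in dynamically typed APIs.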
As long as I'm composing an email, I might as well mention that I had
forgotten to mention Swig as a dependency (d'oh!). I now have a
mention of it on the page, and a spec file that can be built using
pkgtool. If you tried this before and gave up because of a missing
package, please give it
Still reading, but would like to correct one point.
* It would seem that ZFS is deeply wedded to the
concept of a single,
linear chain of snapshots. No snapshots of
snapshots, apparently.
http://blogs.sun.com/ahrens/entry/is_it_magic
Writable snapshots are called clones in zfs. So
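The snapshot/clone relationship mentioned above can be sketched with the standard zfs commands. The pool and dataset names (tank/fs, tank/devbranch) are made up for illustration:

```shell
# Take a read-only snapshot, then create a writable clone from it.
zfs snapshot tank/fs@today
zfs clone tank/fs@today tank/devbranch

# The clone is independently writable; if it becomes the "real"
# line of development, promote it so it no longer depends on the
# origin snapshot.
zfs promote tank/devbranch
```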
On 14 Jul 2008, at 16:07, Will Murnane wrote:
So, finally! Sun has a SAS/SATA JBOD that doesn't have the extra crap
we, as zfs users, do not need.
How does zfs handle device renumbering? IIRC from my last foray into
SAS, the LSI or Sun HBA remembers drive id's. If you remove all drives
from the chassis (some SAS/SATA chassis, not
[EMAIL PROTECTED] wrote on 07/13/2008 11:29:07 PM:
ZFS co-inventor Matt Ahrens recently fixed this:
6343667 scrub/resilver has to start over when a snapshot is taken
Trust me when I tell you that solving this correctly was much harder
than you might expect. Thanks again, Matt.
Jeff
On Sun, Jul 13, 2008 at 2:03 AM, Evert Meulie [EMAIL PROTECTED] wrote:
A feature like that would indeed be great. It's the ONLY reason I'm not yet 100%
sure about RAID-Z for my next system... 8-)
The explanation that I've heard is that expanding raidz is of less
importance to enterprise
On Monday 14 July 2008 08:29, Akhilesh Mritunjai wrote:
I was reading some ZFS pages on another discussion list and found this comment
by a few people who may or may not know something:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/24609792/m/956007108831
[i]You tend to get better write performance when the number of disks in the
raid is (a power
Nit: you meant 2^N + 1 I believe.
Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Mon, 14 Jul 2008, Søren Ragsdale wrote:
It seems to me that the blanket 8% improvement statement by 'Black
Jacque' is clearly the most technically correct, even though it is
not a serious answer.
[i]You tend to get better write performance when the number of
disks in the raid is
Will I get markedly better performance with 5 drives (2^2+1) or 6 drives
(2*(2^1+1)) because the parity calculations are more efficient across 2^N
drives?
If only parity calculations stand to benefit, then it wouldn't make a
difference because your CPU is more than powerful enough to take
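The arithmetic behind the 2^N + 1 rule of thumb is easy to check directly. The sketch below assumes a 128 KiB recordsize, 512-byte sectors, and single-parity raidz; the specific drive counts are just examples, not a claim about any particular ZFS release:

```python
# Illustration of the "2^N + 1 drives" sizing rule for raidz1.
# Assumptions (not from the original posts): 128 KiB recordsize,
# 512-byte sectors, one parity disk per vdev.
RECORDSIZE = 128 * 1024
SECTOR = 512

def per_disk_bytes(total_drives, parity=1):
    """Bytes of one full record written to each data disk in the vdev."""
    data_disks = total_drives - parity
    return RECORDSIZE / data_disks

for drives in (5, 6, 9):
    chunk = per_disk_bytes(drives)
    aligned = chunk % SECTOR == 0
    print(f"{drives} drives: {chunk:.1f} bytes/disk, sector-aligned: {aligned}")
```

With 5 drives (4 data disks) each disk gets an even 32 KiB, a whole number of sectors; with 6 drives (5 data disks) the record does not divide evenly, so the write pattern is less tidy. Whether that translates into a measurable difference on real hardware is exactly what the thread is debating.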
On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn
[EMAIL PROTECTED] wrote:
This seems like a bunch of hogwash to me. Any time you see even a
single statement which is incorrect, it is best to ignore that forum
poster entirely, and if no one corrects him, then ignore the entire forum.
I don't
Hey,
I just had a D'oh! moment, I'm afraid; I woke up this morning realising my
previous post about the chances of failure was completely wrong.
You do need to multiply the chance of failure by the number of remaining disks,
because you're reading the data off every one of them, and you risk