On May 3, 2006, at 15:21, eric kustarz wrote:
There are basically two writes that need to happen: one for the time and one
for the subcommand string. The kernel just needs to make sure that if a
write completes, the data is parseable (has a delimiter). It's then up
to the userland parser (zpool history)
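The invariant in that fragment can be sketched as an append-only log in which the delimiter is written as part of each record, so a reader simply ignores a torn trailing write. This is only an illustration of the idea, not the actual ZFS history format; the NUL delimiter and record layout here are invented.

```python
# Minimal sketch (NOT the real ZFS on-disk history format): an
# append-only log where every complete record ends with a delimiter.
# A reader parses everything up to the last delimiter and ignores a
# trailing partial record left by an interrupted write.

DELIM = b"\x00"  # hypothetical delimiter; the real format differs

def append_record(log: bytearray, payload: bytes) -> None:
    """Append one record; the delimiter is part of the record, so a
    record without a trailing delimiter is by definition incomplete."""
    log += payload + DELIM

def parse_records(log: bytes) -> list[bytes]:
    """Return only complete (delimiter-terminated) records."""
    complete, _, _partial = log.rpartition(DELIM)
    return complete.split(DELIM) if complete else []

log = bytearray()
append_record(log, b"2006-05-03 15:21 zpool create tank mirror c0d0 c0d1")
append_record(log, b"2006-05-03 15:25 zfs create tank/home")
log += b"2006-05-03 15:30 zfs set com"  # simulated torn write: no delimiter
print(parse_records(bytes(log)))        # only the two complete records
```

A usage note: because the delimiter travels with the record, the reader never needs a length header or a journal replay step to decide where valid data ends.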
Brian Hechinger wrote:
Could you mix and match by keeping the current style assuming there
are no -o options present?
# zfs create pool/fs
If you need to specify options, then they should all be options:
# zfs create -o name=pool/fs -o mountpoint=/bar -o etc
I would be tempted to have two
oab wrote:
I'm new to ZFS so I was wondering if it is possible to concurrently
share a ZFS storage pool between two separate machines. I am currently
evaluating Sybase IQ
running on ZFS rather than raw devices (initial performance tests look
very promising) and need now to evaluate whether the IQ
On Sep 8, 2006, at 9:33, Richard Elling - PAE wrote:
I was looking for a new AM2 socket motherboard a few weeks ago. All
of the ones
I looked at had 2xIDE and 4xSATA with onboard (SATA) RAID. All were
less than $150.
In other words, the days of having a JBOD-only solution are over
except for
On Oct 20, 2006, at 0:48, Torrey McMahon wrote:
Anthony Miller wrote:
I want to create a raidz on one array and have it mirrored to
the other array.
Do you think this will get you more availability compared to a simple
mirror? I'm curious as to why you would want to do this.
This
On Dec 9, 2006, at 8:59, Jim Mauro wrote:
Anyway, I'm feeling rather naive here, but I've seen the "NFS
enforced synchronous semantics" phrase
kicked around many times as the explanation for suboptimal
performance for metadata-intensive
operations when ZFS is the underlying file system, but
On Dec 22, 2006, at 09:50, Anton B. Rang wrote:
Phantom writes and/or misdirected reads/writes:
I haven't seen probabilities published on this; obviously the disk
vendors would claim zero, but we believe they're slightly
wrong. ;-) That said, 1 in 10^8 bits would mean we’d have an
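The sentence is truncated in the archive, but the arithmetic behind a 1-in-10^8-bit figure is easy to reproduce. The 50 MB/s throughput below is an assumed number purely for illustration, and the conclusion drawn is mine, not the truncated author's:

```python
# Back-of-the-envelope arithmetic for a hypothetical 1-in-10^8-bit
# error rate. The quoted message is truncated, so this reproduces only
# the arithmetic, not the author's conclusion.

error_interval_bits = 10**8                 # one bad bit per 10^8 bits
bytes_per_error = error_interval_bits / 8   # 12.5 million bytes

# At an assumed 50 MB/s sustained read rate, errors would be a fraction
# of a second apart -- far worse than real disks, whose quoted
# unrecoverable-bit-error rates are more like 1 in 10^14 to 10^15 bits.
seconds_between_errors = bytes_per_error / (50 * 10**6)

print(f"one error every {bytes_per_error / 1e6:.1f} MB")        # 12.5 MB
print(f"{seconds_between_errors:.2f} s between errors at 50 MB/s")  # 0.25 s
```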
Ivan wrote:
Hi,
Is ZFS comparable to PVFS2? Could it also be used as a distributed filesystem
at the moment or are there any plans for this in the future?
I don't know anything at all about PVFS2, so I can't comment on that point.
As far as ZFS being used as a distributed file system, it
Shannon Roddy wrote:
For sun to charge 4-8 times street price for hard drives that
they order just the same as I do from the same manufacturers that I
order from is infuriating.
Are you sure they're really the same drives? Mechanically, they
probably are, but last I knew (I don't work in the
On Jan 26, 2007, at 7:17, Peter Eriksson wrote:
If you _boot_ the original machine then it should see that the pool
now is owned by
the other host and ignore it (you'd have to do a zpool import -f
again I think). Not tested though so don't take my word for it...
Conceptually, that's about
On Jan 26, 2007, at 9:42, Gary Mills wrote:
How does this work in an environment with storage that's
centrally-managed and shared between many servers? I'm putting together a new
IMAP server that will eventually use 3TB of space from our Netapp via
an iSCSI SAN. The Netapp provides all of the
On Jan 26, 2007, at 10:57, Ross, Gary (G.A.) wrote:
...
What if something like the old CacheFS was revived, using ZFS as the
base file system instead of UFS?
...
Could this be a good thing, or am I way off base???
Disconnected operation is a hard problem. One of the better research
efforts
On Jan 26, 2007, at 12:13, Richard Elling wrote:
On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
A number that I've been quoting, albeit without a good reference,
comes from Jim Gray, who has been around the data-management industry
for longer than I have (and I've been
On Jan 26, 2007, at 12:52, Dana H. Myers wrote:
So this leaves me wondering how often the controller/drive subsystem
reads data from the wrong sector of the drive without notice; is it
symmetrical with respect to writing, and thus about once a drive/year,
or are there factors which change this?
On Jan 26, 2007, at 13:16, Dana H. Myers wrote:
I would tend to expect these spurious events to impact read and write
equally; more specifically, the chance of any one read or write being
mis-addressed is about the same. Since, AFAIK, there are many more
reads
from a disk typically than
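Dana's point, that with equal per-operation mis-address probability the misdirected reads outnumber misdirected writes by the read:write ratio, can be put into a toy model. Every number below (the IOPS figure, the read fraction, the once-per-drive-year calibration) is an assumption for illustration; none comes from measured data:

```python
# Toy model: if each I/O has the same small chance p of being
# mis-addressed, misdirected *reads* outnumber misdirected *writes*
# by the read:write ratio. All numbers here are assumptions.

ios_per_second = 100                    # assumed average drive workload
seconds_per_year = 365 * 24 * 3600
read_fraction = 0.75                    # assumed 3:1 read:write mix

total_ios = ios_per_second * seconds_per_year
writes_per_year = total_ios * (1 - read_fraction)

# Calibrate p so misdirected writes occur about once per drive-year,
# matching the figure quoted earlier in the thread.
p_misaddress = 1 / writes_per_year

misdirected_reads_per_year = total_ios * read_fraction * p_misaddress
print(round(misdirected_reads_per_year, 6))  # ~3: same as reads:writes
```

The design point of the model: the absolute misdirection rates differ only through the workload mix, which is why a read-heavy workload would see proportionally more misdirected reads.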
On Jan 26, 2007, at 13:29, Selim Daoud wrote:
it would be good to have real data and not only guesses or anecdotes
Yes, I agree. I'm sorry I don't have the data that Jim presented at
FAST, but he did present actual data. Richard Elling (I believe it was
Richard) has also posted some related
Richard Elling wrote:
warning: noun/verb overload. In my context, swap is a verb.
It is also a common shorthand for swap space.
--
--Ed
Eric Haycraft wrote:
Since no one seems to believe that you can expand a raidz pool, I have attached the following output from Solaris 11/06 showing me doing just that. The first expansion is with like-sized disks, and the second expansion is with larger disks. I realize that the documentation