Manoj Nayak writes:
Hi All.
The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
saturate a single disk's bandwidth using enough concurrent 128K I/Os.
The number of concurrent I/Os is decided by vq_max_pending. The default value
for vq_max_pending is 35.
We have
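On Solaris builds of that era, the queue depth behind vq_max_pending was exposed as the kernel tunable zfs_vdev_max_pending (verify the name against your build before relying on it). A minimal /etc/system sketch to experiment with a smaller per-vdev queue:

```
* Cap the number of I/Os ZFS keeps pending per vdev (default 35).
set zfs:zfs_vdev_max_pending = 10
```

The same value can also be changed on a live system with mdb -kw; either way, treat this as something to measure, not a blanket recommendation.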
Roch - PAE wrote:
Manoj Nayak writes:
Hi All.
The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
saturate a single disk's bandwidth using enough concurrent 128K I/Os.
The number of concurrent I/Os is decided by vq_max_pending. The default value
for
Hello,
Thought I'd mention a recent (slightly biased) article comparing
DragonflyBSD's new HAMMER file system and ZFS:
Infinite [automatic] snapshots
As-of mounts [like PITR on Postgres]
Clustered
Backups made easy
File database
Huge
This is one feature I've been hoping for... old threads and blogs talk about
this feature possibly showing up by the end of 2007; just curious what
the status of this feature is...
thanks,
john
This message posted from opensolaris.org
On Jan 23, 2008 6:36 AM, Manoj Nayak [EMAIL PROTECTED] wrote:
It means a 4-disk raid-z group inside a ZFS pool is exported to ZFS as a
single device (vdev). ZFS assigns a vq_max_pending value of 35 to this vdev.
To get higher throughput, I need to do the following things?
1. Reduce the number of disks in
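Since each top-level vdev gets its own vq_max_pending-deep queue, one alternative to shrinking the queue is to build the pool from several narrower raid-z groups rather than one wide one, so the pool has more top-level vdevs. A sketch with hypothetical device names:

```
# One wide raid-z group: a single top-level vdev, one queue of 35.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0

# Two narrower raid-z groups: two top-level vdevs, each with its own queue.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                  raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0
```

The trade-off is capacity (two parity disks instead of one) against concurrency; only measurement on the actual workload can say which wins.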
Manoj Nayak wrote:
Roch - PAE wrote:
Manoj Nayak writes:
Hi All.
The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
saturate a single disk's bandwidth using enough concurrent 128K I/Os.
The number of concurrent I/Os is decided by vq_max_pending. The default
I remember reading a discussion where these kinds of problems were discussed.
Basically it boils down to everything not being aware of the radical changes
in the filesystem concept.
All these things are being worked on, but it might take some time before
everything is made aware that yes it's no
On Wed, Jan 23, 2008 at 08:02:22AM -0800, Akhilesh Mritunjai wrote:
I remember reading a discussion where these kinds of problems were
discussed.
Basically it boils down to everything not being aware of the
radical changes in the filesystem concept.
All these things are being worked on, but
Is this service something that we'd like to put into OpenSolaris
Heck yes, at least Indiana needs something like that. I guess nobody is
spearheading the Indiana data backup solution right now, but that work of
yours could be part of it.
To the user there is no difference between regularly
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to
scrub two pools sequentially
because they share one device. The first pool, BTW, is a mirror comprising
a smaller disk and a subset of a larger disk. The other pool is the remainder
of the larger disk.
I see no
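For the sequential part, one hedged sketch is a script that starts a scrub and then polls `zpool status` until it no longer reports one in progress; the pool names and the 60-second poll interval are just examples:

```shell
#!/bin/sh
# Scrub pools one after another, so two pools that share a disk
# never scrub at the same time.

wait_for_scrub() {
    # Poll until zpool status stops reporting a scrub in progress.
    while zpool status "$1" | grep -q 'scrub in progress'; do
        sleep 60
    done
}

scrub_sequentially() {
    for pool in "$@"; do
        zpool scrub "$pool" || return 1
        wait_for_scrub "$pool"
    done
}

# Example cron/at usage:  scrub_sequentially tank spare
```

The exact `scrub in progress` wording varies between builds, so check it against your own `zpool status` output first.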
On Wed, Jan 23, 2008 at 11:11:38AM -0800, Matt Newcombe wrote:
Creating an empty zpool zfs
Creating a 6MB text file
Taking a snapshot
So far so good. The filesystem size is 6MB and the snapshot 0MB
Now I edit the first 4 characters of the text file. I would have
expected the size of the
How are you editing the file? Are you sure your editor isn't writing
out the entire file even though only four characters have changed? If
you truss the app, do you see a single 4 byte write to the file?
- Eric
On Wed, Jan 23, 2008 at 11:11:38AM -0800, Matt Newcombe wrote:
Hi,
We're
Sorry, no such feature exists. We do generate sysevents for when
resilvers are completed, but not scrubs. Adding those sysevents would
be an easy change, but doing anything more complicated (such as baking
that functionality into zpool(1M)) would be annoying.
If you want an even more hacked up
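As a sketch of the sysevent route for the resilver case that does exist today: syseventadm can bind a handler to the EC_zfs class. The subclass name and the handler path below are assumptions to verify against sys/sysevent/eventdefs.h on your build:

```
# Run a (hypothetical) notification script when a resilver finishes.
syseventadm add -c EC_zfs -s ESC_ZFS_resilver_finish /usr/local/bin/zfs-notify.sh
syseventadm restart
```

Until scrub sysevents exist, the cron-side alternative is polling `zpool status` for completion.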
OK, to answer my own question (with a little help from Eric!) ...
I was using vi to edit the file, which must be rewriting the entire file back
out to disk - hence the larger-than-expected growth of the snapshot.
Matt
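Matt's diagnosis is easy to confirm: an editor that rewrites the whole file makes every record diverge from the snapshot, while a genuine in-place write only dirties the records it touches. A minimal sketch using dd with conv=notrunc (the file path is just an example):

```shell
# Write a small file, then overwrite only its first 4 bytes in place.
# conv=notrunc stops dd from truncating and rewriting the whole file.
printf 'AAAAAAAA' > /tmp/zfs-demo.txt
printf 'ZZZZ' | dd of=/tmp/zfs-demo.txt bs=1 count=4 conv=notrunc 2>/dev/null
cat /tmp/zfs-demo.txt   # -> ZZZZAAAA
```

On ZFS, a 4-byte write like this would dirty a single record, so the snapshot would grow by roughly one recordsize rather than by the whole file.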
Hi,
I have been experiencing corruption on one of my ZFS pool over the last couple
of days. I have tried running zpool scrub on the pool, but every time it comes
back with new files being corrupted. I would have thought that zpool scrub
would have identified the corrupted files once and for all
Dan McDonald wrote:
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to
scrub two pools sequentially
because they share one device. The first pool, BTW, is a mirror comprising
of a smaller disk and a subset of a larger disk. The other pool is the
remainder of the
Thiago Sobral wrote:
Hi Thomas,
Thomas Maier-Komor wrote:
Thiago Sobral wrote:
I need to manage volumes like LVM does on Linux or AIX, and I think
that ZFS can solve this issue.
I read the SVM specification, and it certainly will not be the
solution that I'll adopt. I don't
The Silicon Image 3114 controller is known to corrupt data.
Google for silicon image 3114 corruption to get a flavor.
I'd suggest getting your data onto different h/w, quickly.
Jeff
On Wed, Jan 23, 2008 at 12:34:56PM -0800, Bertrand Sirodot wrote:
Hi,
I have been experiencing corruption on
Jeff Bonwick wrote:
The Silicon Image 3114 controller is known to corrupt data.
Google for silicon image 3114 corruption to get a flavor.
I'd suggest getting your data onto different h/w, quickly.
I'll second this; the 3114 is a piece of junk if you value your data. I
bought a 4-port LSI SAS
Actually s10_72, but it's not really a fix, it's a workaround
for a bug in the hardware. I don't know how effective it is.
Jeff
On Wed, Jan 23, 2008 at 04:54:54PM -0800, Erast Benson wrote:
I believe the issue has been fixed in snv_72+, no?
On Wed, 2008-01-23 at 16:41 -0800, Jeff Bonwick wrote:
well, we had some problems with the si3124 driver, but with the driver binary
posted in this forum the problem seems to have been fixed. Later we saw the
same fix go into b72.
On Thu, 2008-01-24 at 05:11 +0300, Jonathan Stewart wrote:
Jeff Bonwick wrote:
The Silicon Image 3114 controller is known to
Hi,
if I want to stay with SATA and not go to SAS, do you have a recommendation on
which SATA controller is actually supported by Solaris?
The weird thing about the corruption is that everything was fine until one of
the disks went flaky and things went downhill during the resilvering. Now I am
Bertrand Sirodot wrote:
Hi,
if I want to stay with SATA and not go to SAS, do you have a
recommendation on which SATA controller is actually supported by
Solaris?
SAS controllers do support SATA drives actually (not the other way
around though). I'm running SATA drives on mine without a
John wrote:
This is one feature I've been hoping for... old threads and blogs talk about
this feature possibly showing up by the end of 2007; just curious what
the status of this feature is...
It's still a high priority on our road map, just pushed back a bit. Our
current goal is to