ZFS enables the disk write cache and flushes it when committing transaction
groups; this ensures that a transaction group either appears on disk in its
entirety or not at all.
It also flushes the disk write cache before returning from every
synchronous request (e.g. fsync, O_DSYNC). This is done after
writing
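A quick, non-authoritative way to see which processes are driving those
synchronous requests is to count calls to zil_commit (the ZIL entry point
for fsync/O_DSYNC) with DTrace; this is just an observation aid, not a
supported interface:
    # dtrace -n 'fbt:zfs:zil_commit:entry { @[execname] = count(); }'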
Well this does look more and more like a duplicate of:
6413510 zfs: writing to ZFS filesystem slows down fsync() on other files in the
same FS
Neil
Torrey McMahon wrote On 06/21/06 10:29,:
Roch wrote:
Sean Meighan writes:
The vi we were doing was on a 2-line file. If you just vi a new file,
add one line and exit, it would take 15 minutes in fdsync. On the
recommendation of a workaround we set
set zfs:zil_disable=1
after the
Robert Milkowski wrote On 06/21/06 11:09,:
Hello Neil,
Why is this option available then? (Yes, that's a loaded question.)
NP I wouldn't call it an option, but an internal debugging switch that I
NP originally added to allow progress when initially integrating the ZIL.
NP As Roch says it
Chris,
The data will be written twice on ZFS using NFS. This is because NFS,
on closing the file, internally uses fsync to cause the writes to be
committed. This causes the ZIL to immediately write the data to the intent log.
Later the data is also written and committed as part of the pool's
Robert Milkowski wrote On 06/25/06 04:12,:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP Chris,
NP The data will be written twice on ZFS using NFS. This is because NFS
NP on closing the file internally uses fsync to cause the writes to be
NP committed. This causes the ZIL
Robert Milkowski wrote On 06/27/06 03:00,:
Hello Chris,
Tuesday, June 27, 2006, 1:07:31 AM, you wrote:
CC On 6/26/06, Neil Perrin [EMAIL PROTECTED] wrote:
Robert Milkowski wrote On 06/25/06 04:12,:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP Chris,
NP The data
[EMAIL PROTECTED] wrote On 06/27/06 17:17,:
We have over 1 filesystems under /home in strongspace.com and it works fine.
I forget, but there was a bug fix or an improvement made around Nevada
build 32 (we're currently at 41) that made the initial mount on reboot
significantly
Robert Milkowski wrote On 06/28/06 15:52,:
Hello Neil,
Wednesday, June 21, 2006, 8:15:54 PM, you wrote:
NP Robert Milkowski wrote On 06/21/06 11:09,:
Hello Neil,
Why is this option available then? (Yes, that's a loaded question.)
NP I wouldn't call it an option, but an internal
This is change request:
6428639 large writes to zvol synchs too much, better cut down a little
which I have a fix for, but it hasn't been put back.
Neil.
Jürgen Keil wrote On 07/17/06 04:18,:
Further testing revealed
that it wasn't an iSCSI performance issue but a zvol
issue. Testing on a
Brian Hechinger wrote On 07/26/06 06:49,:
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote:
If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let
it label and use the disks, it will automatically turn on the write
cache for you.
What if you can't give ZFS
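For illustration only (the pool and device names below are invented), the
difference looks like this:
    # zpool create tank c1t0d0       (whole disk - ZFS labels it and enables the write cache)
    # zpool create tank c1t0d0s0     (a slice - the write cache is left as it was)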
Not quite, zil_disable is inspected on file system mounts.
It's also looked at dynamically on every write for zvols.
Neil.
Robert Milkowski wrote On 08/07/06 10:07,:
Hello zfs-discuss,
Just a note to everyone experimenting with this - if you change it
online it only takes effect when pools
Robert Milkowski wrote:
Hello Neil,
Monday, August 7, 2006, 6:40:01 PM, you wrote:
NP Not quite, zil_disable is inspected on file system mounts.
I guess you're right that umount/mount will suffice - I just hadn't had time
to check it, and export/import worked.
Anyway, is there a way for file systems
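As a rough sketch of how to change it on a live system (the dataset name
below is invented), set the variable with mdb and then remount so it is
re-read; for zvols no remount is needed since it is checked on every write:
    # echo 'zil_disable/W 1' | mdb -kw
    # zfs umount tank/export ; zfs mount tank/export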
Robert Milkowski wrote:
Hello Eric,
Monday, August 7, 2006, 6:29:45 PM, you wrote:
ES Robert -
ES This isn't surprising (either the switch or the results). Our long term
ES fix for tweaking this knob is:
ES 6280630 zil synchronicity
ES Which would add 'zfs set sync' as a per-dataset
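For reference, a sketch of what that per-dataset knob ended up looking like
in later ZFS releases (the property values below are not from this thread):
    # zfs set sync=standard tank/fs    (default, POSIX-conformant behaviour)
    # zfs set sync=disabled tank/fs    (like zil_disable, but per dataset)
    # zfs set sync=always tank/fs      (every write committed synchronously)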
Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write the block only once (for synchronous
IO) and then just point to that block instead of copying it
Robert Milkowski wrote:
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write
Myron Scott wrote:
Is there any difference between fdatasync and fsync on ZFS?
No. ZFS does not log data and metadata separately; rather
it logs essentially the system call records, e.g. writes, mkdir,
truncate, setattr, etc. So fdatasync and fsync are identical
on ZFS.
Yes, James is right, this is normal behaviour. Unless the writes are
synchronous (O_DSYNC) or explicitly flushed (fsync()), they
are batched up, written out and committed as a transaction
every txg_time (5 seconds).
Neil.
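You can watch this batching from the shell (pool name assumed to be tank):
    # zpool iostat tank 1
For a purely asynchronous workload the write bandwidth spikes roughly every
5 seconds with little or no I/O in between.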
James C. McPherson wrote:
Bob Evans wrote:
Just getting my feet wet
Robert Milkowski wrote:
P.S. However, I'm really concerned with ZFS behavior when a pool is
almost full, there are lots of write transactions to that pool and the
server is restarted forcibly or panics. I observed that file systems
on that pool will mount in 10-30 minutes each during zfs mount -a, and
It is highly likely you are seeing a duplicate of:
6413510 zfs: writing to ZFS filesystem slows down fsync() on
other files in the same FS
which was fixed recently in build 48 of Nevada.
The symptoms are very similar. That is, an fsync from the vi would, prior
to the bug being fixed, have
Philip Brown wrote On 09/21/06 20:28,:
Eric Schrock wrote:
If you're using EFI labels, yes (VTOC labels are not endian neutral).
ZFS will automatically convert endianness from the on-disk format, and
new data will be written using the native endianness, so data will
gradually be rewritten
ZFS will currently panic on a write failure to a non-replicated pool.
In the case below the Intent Log (though it could have been any module)
could not write an intent log block. Here's a previous response from Eric
Schrock explaining how ZFS intends to handle this:
ZFS ignores the fsflush. Here's a snippet of the code in zfs_sync():
/*
* SYNC_ATTR is used by fsflush() to force old filesystems like UFS
* to sync metadata, which they would otherwise cache indefinitely.
* Semantically, the only requirement is that the sync
Matthew Ahrens wrote On 10/16/06 09:07,:
Robert Milkowski wrote:
Hello zfs-discuss,
S10U2+patches. ZFS pool of about 2TB in size. Each day a snapshot is
created and 7 copies are kept. There's a quota set for a file system,
however there's always at least 50GB of free space in the file system
Pawel,
I second that praise. Well done!
Attached is a copy of ziltest. You will have to adapt this a bit
to your environment. In particular it uses bringover to pull a subtree
of our source and then builds and later runs it. This tends to create
a fair number of transactions with various
Jürgen Keil wrote On 10/27/06 11:55,:
This is:
6483887 without direct management, arc ghost lists can run amok
That seems to be a new bug?
http://bugs.opensolaris.org does not yet find it.
It's not so new, as it was created on 10/19, but as you say the bug
search doesn't find it. However, you
Robert Milkowski wrote On 11/08/06 08:16,:
Hello Paul,
Wednesday, November 8, 2006, 3:23:35 PM, you wrote:
PvdZ On 7 Nov 2006, at 21:02, Michael Schuster wrote:
listman wrote:
hi, I found a comment comparing Linux and Solaris but wasn't sure
which version of Solaris was being referred to.
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over NFS). mdb said ncsize was about
68k and vmstat -s said we had a hit rate of ~30%, so I set
Tomas Ögren wrote On 11/09/06 13:47,:
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it is
frequently looked
Hi Robert,
Yes, it could be related, or even the bug. Certainly the replay
was (prior to this bug fix) extremely slow. I don't really have enough
information to determine if it's the exact problem, though after
re-reading your original post I strongly suspect it is.
I also putback a companion
Jim,
I'm not at all sure what happened to your pool.
However, I can answer some of your questions.
Jim Hranicky wrote On 12/05/06 11:32,:
So the questions are:
- is this fixable? I don't see an inum I could run
find on to remove,
I think the pool is busted. Even the message printed in your
Ben,
The attached dscript might help determine the zfs_create issue.
It prints:
- a count of all functions called from zfs_create
- average wall clock time of the 30 highest functions
- average CPU time of the 30 highest functions
Note, please ignore warnings of the
Are you looking purely for performance, or for the added reliability that ZFS
can give you?
If the latter, then you would want to configure across multiple LUNs in either
a mirrored or RAID configuration. This does require sacrificing some storage in
exchange for the peace of mind that any
Tom Duell wrote On 12/12/06 17:11,:
Group,
We are running a benchmark with 4000 users
simulating a hospital management system
running on Solaris 10 6/06 on a USIV+ based
SunFire 6900 with a 6540 storage array.
Are there any tools for measuring internal
ZFS activity to help us understand what is
CT Will I be able to tune the DMU flush rate, now set at 5 seconds?
echo 'txg_time/D 0t1' | mdb -kw
Er, that 'D' should be a 'W'.
Having said that I don't think we recommend messing with the transaction
group commit timing.
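For completeness, the corrected one-liner, and an /etc/system equivalent
given here purely as an illustration, would be:
    # echo 'txg_time/W 0t1' | mdb -kw
    set zfs:txg_time = 1
Again, changing the transaction group commit interval is not recommended.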
Robert Milkowski wrote On 12/22/06 13:40,:
Hello Torrey,
Friday, December 22, 2006, 9:17:46 PM, you wrote:
TM Roch - PAE wrote:
The fact that most FS do not manage the disk write caches
does mean you're at risk of data loss for those FS.
TM Does ZFS? I thought it just turned it on in
I'm currently working on putting the ZFS intent log on separate devices
which could include separate disks and nvram/solid state devices.
This would help any application using fsync/O_DSYNC - in particular
DB and NFS. From prototyping, considerable performance improvements have
been seen.
Neil.
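The syntax that eventually integrated uses a log keyword on zpool
(device names below are invented):
    # zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0
    # zpool add tank log c2t0d0
and the log device can itself be mirrored for safety.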
Robert Milkowski wrote On 01/05/07 11:45,:
Hello Neil,
Friday, January 5, 2007, 4:36:05 PM, you wrote:
NP I'm currently working on putting the ZFS intent log on separate devices
NP which could include separate disks and nvram/solid state devices.
NP This would help any application using
Anantha N. Srirama wrote On 01/08/07 13:04,:
Our setup:
- E2900 (24 x 96); Solaris 10 Update 2 (aka 06/06)
- 2 x 2Gbps FC HBAs
- EMC DMX storage
- 50 x 64GB LUNs configured in 1 ZFS pool
- Many filesystems created with COMPRESS enabled; specifically I've one that is
768GB
I'm observing the
Rainer Heilke wrote On 01/17/07 15:44,:
It turns out we're probably going to go the UFS/ZFS route, with 4 filesystems
(the DB files on
UFS with Directio).
It seems that the pain of moving from a single-node ASM to a RAC'd ASM is
great, and not worth it.
The DBA group decided doing the
Anton B. Rang wrote On 01/17/07 20:31,:
Yes, Anantha is correct, that is the bug id, which could be responsible
for more disk writes than expected.
I believe, though, that this would explain at most a factor of 2
of write expansion (user data getting pushed to disk once in the
intent log,
Hi Leon,
This was fixed in March 2006, and is in S10_U2.
Neil.
Leon Koll wrote On 01/28/07 08:58,:
Hello,
what is the status of the bug 6381203 fix in S10 u3 ?
(deadlock due to i/o while assigning (tc_lock held))
Was it integrated? Is there a patch?
Thanks,
-- leon
No, it's not the final version or even the latest!
The current on-disk format version is 3. However, it hasn't
diverged much and the znode/acl stuff hasn't changed.
Neil.
James Blackburn wrote On 01/31/07 14:31,:
Or look at pages 46-50 of the ZFS on-disk format document:
ZFS checksums are at the block level.
Nathan Essex wrote On 02/01/07 08:27,:
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was
Robert Milkowski wrote On 02/06/07 11:43,:
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
ek Yep, the bug is:
ek 6462690 sd driver should set SYNC_NV bit when issuing
Jeff Davis wrote On 02/25/07 20:28,:
if you have N processes reading the same file sequentially (where file size is
much greater than physical memory) from the same starting position, should I
expect that all N processes finish in the same time as if it were a single
process?
Yes I would
Gino,
We have seen this before but only very rarely and never got a good crash dump.
Coincidentally, we saw it
only yesterday on a server here, and are currently investigating it. Did you
also get a dump we
can access? That would help. If not, can you tell us what ZFS version you were running.
At the
Yes, this is supported now. Replacing one half of a mirror with a larger device,
letting it resilver, then replacing the other half does indeed get a larger
mirror.
I believe this is described somewhere but I can't remember where now.
Neil.
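A rough sketch of the sequence (device names invented): replace one side,
wait for the resilver to complete, then replace the other:
    # zpool replace tank c1t2d0 c1t4d0
    # zpool status tank                  (wait for the resilver to finish)
    # zpool replace tank c1t3d0 c1t5d0
On some releases the extra capacity only shows up after an export/import
of the pool.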
Richard L. Hamilton wrote On 03/23/07 20:45,:
If I
Matthew Ahrens wrote On 03/24/07 12:13,:
Kangurek wrote:
Thanks for the info.
My idea was to traverse a changing filesystem; now I see that it will
not work.
I will try to traverse snapshots. Zreplicate will:
1. do snapshot @replicate_leatest and
2. send data to snapshot @replicate_leatest
3.
Matthew Ahrens wrote On 03/24/07 12:36,:
Neil Perrin wrote:
I'm not sure exactly what will be slow about taking snapshots, but
one aspect might be that we have to suspend the intent log (see call
to zil_suspend() in dmu_objset_snapshot_one()). I've been meaning to
change that for a while
Hi Robert,
Robert Milkowski wrote On 04/02/07 17:48,:
Right now a symlink should consume one dnode (320 bytes)
dnode_phys_t is actually 512 bytes:
::sizeof dnode_phys_t
sizeof (dnode_phys_t) = 0x200
if the name it points to is less than 67 bytes, otherwise a data block is
allocated
cedric briner wrote:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would still
corrupt your nfs clients.
-r
hello Roch,
I have a few questions
1)
from:
Shenanigans with ZFS flushing and intelligent arrays...
kyusun Chang wrote On 05/04/07 19:34,:
If the system crashes some time after the last commit of a transaction group (TxG), what
happens to the file system transactions since the last commit of the TxG
They are lost, unless they were synchronous (see below).
(I presume last commit of TxG represents the
Adam Megacz wrote:
After reading through the ZFS slides, it appears to be the case that
if ZFS wants to modify a single data block, it must rewrite every
block between that modified block and the uberblock (root of the tree).
Is this really the case?
That is true when committing the
Adam Megacz wrote:
Ah, okay. The slides I read said that in ZFS there is no journal --
not needed (slide #9):
http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf
I guess the slides are out of date in light of the ZFS Intent Log
journal?
Yes, I can understand your confusion.
lonny wrote:
On May 11, 2007, at 9:09 AM, Bob Netherton wrote:
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
around 5 seconds. I assume it's just something to do with caching?
Yep - the ZFS equivalent of
eric kustarz wrote:
Over NFS to non-ZFS drive
-
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s, user 0m45.330s, sys 0m50.118s
star xfv linux-2.6.21.tar.bz2
real 3m26.053s, user 0m43.069s, sys 0m33.726s
star -no-fsync -x -v -f
Rick Mann wrote:
Hi. I've been reading the ZFS admin guide, and I don't understand the distinction between
adding a device and attaching a device to a pool?
attach is used to create or add a side to a mirror.
add is to add a new top-level vdev, where that can be a raidz, mirror
or single
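For example (hypothetical device names):
    # zpool attach tank c0t0d0 c0t1d0        (mirror an existing device)
    # zpool add tank mirror c0t2d0 c0t3d0    (add a new top-level mirror vdev)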
Bryna,
Your timing is excellent! We've been working on this for a while now and
hopefully within the next day I'll be adding support for separate log
devices into Nevada.
I'll send out more details soon...
Neil.
Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any
Darren Dunham wrote:
The problem I've come across with using mirror or raidz for this setup
is that (as far as I know) you can't add disks to mirror/raidz groups,
and if you just add the disk to the pool, you end up in the same
situation as above (with more space but no redundancy).
You
Cyril,
I wrote this case and implemented the project. My problem was
that I didn't know what policy (if any) Sun has about publishing
ARC cases, and a mail log with a gazillion email addresses.
I did receive an answer to this in the form:
Cyril Plisko wrote:
On 7/7/07, Neil Perrin [EMAIL PROTECTED] wrote:
Cyril,
I wrote this case and implemented the project. My problem was
that I didn't know what policy (if any) Sun has about publishing
ARC cases, and a mail log with a gazillion email addresses.
I did receive an answer
Er, with the attachment this time.
So I've attached the accepted proposal. There was (as expected) not
much discussion of this case as it was considered an obvious extension.
The actual PSARC case materials, when opened, will not have much more info
than this.
PSARC CASE: 2007/171 ZFS Separate Intent
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
Neil.
Albert Chin wrote:
On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
I wrote up a blog on the separate intent log called slog blog
which describes the interface; some performance results; and
general status:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
So, how
Adolf,
Yes, there was a separate driver that I believe came from Micro
Memories. I installed it from a package, umem_Sol_Drv_Cust_i386_v01_10.pkg.
I just used pkgadd on it and it just worked. Sorry, I don't know if it's
publicly available or will even work for your device.
I gave details of that
Jay,
Slides look good, though I'm not sure what you say along
with 'Filthy lying' on slide 22 related to the ZIL, or
slide 27 which has 'Worst Feature - thinks hardware is stupid'.
Anyway I have some comments on http://www.meangrape.com/2007/08/oscon-zfs
You say:
---
Records in the ZIL are
How does ZFS handle snapshots of large files like VM images? Is
replication done on the bit/block level or by file? In other words, does
a snapshot of a changed VM image take up the same amount of space as the
image or only the amount of space of the bits that have changed within
the
Tim Spriggs wrote:
Hello,
I think I have gained sufficient fool status for testing the
fool-proof-ness of zfs. I have a cluster of T1000 servers running
Solaris 10 and two x4100's running an OpenSolaris dist (Nexenta) which
is at b68. Each T1000 hosts several zones each of which
Yes, performance will suffer, but it's a bit difficult to say by how much.
Both pool transaction group writes and ZIL writes are spread across
all devices. It depends on what applications you will run as to how much
use is made of the ZIL. Maybe you should experiment and see if performance
is good
Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
Andy Lubel wrote:
I think we are very close to using zfs in our production environment. Now
that I have snv_72 installed and my pools set up with NVRAM log devices
things are hauling butt.
I've been digging to find
Matty wrote:
On 9/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
Separate log devices (slogs) didn't make it into S10U4 but will be in U5.
This is awesome! Will the SYNC_NV support that was integrated this
week be added to update 5 as well? That would be super useful,
assuming the major
Erik Trimble wrote:
Ivan Wang wrote:
Hi all,
Forgive me if this is a dumb question. Is it possible for a two-disk
mirrored zpool to be seamlessly enlarged by gradually replacing previous
disk with larger one?
Say, in a constrained desktop, only space for two internal disks is
I don't know of any way to observe IOPS per zvol and I believe
this would be tricky. Any writes/reads from individual datasets (filesystems
and zvols) will go through the pipeline and can fan out to multiple
mirrors or raidz or be striped across devices. Block writes will be
combined and pushed
Scott Laird wrote:
I'm debating using an external intent log on a new box that I'm about
to start working on, and I have a few questions.
1. If I use an external log initially and decide that it was a
mistake, is there a way to move back to the internal log without
rebuilding the entire
Scott Laird wrote:
On 10/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
Scott Laird wrote:
I'm debating using an external intent log on a new box that I'm about
to start working on, and I have a few questions.
1. If I use an external log initially and decide that it was a
mistake
Scott Laird wrote:
On 10/18/07, Neil Perrin [EMAIL PROTECTED] wrote:
So, the only way to lose transactions would be a crash or power loss,
leaving outstanding transactions in the log, followed by the log
device failing to start up on reboot? I assume that that would that
be handled
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now all synchronous traffic
is directed at one device (and everything is synchronous in NFS).
Mind you 15MB/s seems a bit on the slow
Roch - PAE wrote:
Neil Perrin writes:
Joe Little wrote:
On Nov 16, 2007 9:13 PM, Neil Perrin [EMAIL PROTECTED] wrote:
Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over
Ajay Kumar wrote:
IHAC who would like to understand the following:
We've upgraded a box to sol10-u4 and created a ZFS pool. We notice that,
running zfs iostat 1 or iostat -xnz 1, the data gets written to disk
every 5 seconds, even though the data is being copied to the filesystem
Vincent Fox wrote:
So does anyone have any insight on BugID 6535160?
We have verified on a similar system, that ZFS shows big latency in filebench
varmail test.
We formatted the same LUN with UFS and latency went down from 300 ms to 1-2
ms.
This is such a big difference it makes me
Vincent Fox wrote:
So does anyone have any insight on BugID 6535160?
We have verified on a similar system, that ZFS shows big latency in filebench
varmail test.
We formatted the same LUN with UFS and latency went down from 300 ms to 1-2
ms.
This is such a big difference it makes me think
sudarshan sridhar wrote:
I'm not quite sure what you're asking here. Data, whether newly written or
copy-on-write, goes to a newly allocated block, which may reside on any
vdev, and will be spread across devices if using RAID.
My exact doubt is, if COW is the default behavior of ZFS then does
parvez shaikh wrote:
Hello,
I am learning ZFS, its design and layout.
I would like to understand how the intent log is different from a journal.
Journals too are logs of updates to ensure consistency of the file system
over crashes. The purpose of the intent log also appears to be the same. I hope I am
Todd Moore wrote:
My understanding is that the answers to the questions posed below are both
YES due to the transactional design of ZFS. However, I'm working with some
folks that need more details or documents describing the design/behavior
without having to look through all the source
Steve Hillman wrote:
I realize that this topic has been fairly well beaten to death on this forum,
but I've also read numerous comments from ZFS developers that they'd like to
hear about significantly different performance numbers of ZFS vs UFS for
NFS-exported filesystems, so here's one
Roch - PAE wrote:
Jonathan Loran writes:
Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
that exist in the various recent OpenSolaris flavors? I would like to
move my ZIL to solid-state storage, but I fear I can't do it until I
have another update.
Jonathan Loran wrote:
Vincent Fox wrote:
Are you already running with zfs_nocacheflush=1? We have SAN arrays with
dual battery-backed controllers for the cache, so we definitely have this
set on all our production systems. It makes a big difference for us.
No, we're not using the
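For reference, the tunable being discussed is set in /etc/system (only
sensible when the array cache is non-volatile or battery backed):
    set zfs:zfs_nocacheflush = 1
and takes effect after a reboot.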
Marc Bevand wrote:
William Fretts-Saxton william.fretts.saxton at sun.com writes:
I disabled file prefetch and there was no effect.
Here are some performance numbers. Note that, when the application server
used a ZFS file system to save its data, the transaction took TWICE as long.
For
Nathan Kroenert wrote:
And something I was told only recently - it makes a difference if you
created the files *before* you set the recordsize property.
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with a 128K recordsize, then it'll
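A minimal sketch (dataset name invented): set the recordsize before the
files are created, e.g. for an 8K-block database:
    # zfs create tank/db
    # zfs set recordsize=8k tank/db
and only then create or copy the DB files into it.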
ZFS will handle out-of-order writes due to its transactional
nature. Individual writes can be re-ordered safely. When the transaction
commits it will wait for all writes and flush them; then write a
new uberblock with the new transaction group number and flush that.
Chris Siebenmann wrote:
We're
Haudy,
Thanks for reporting this bug and helping to improve ZFS.
I'm not sure either how you could have added a note to an
existing report. Anyway I've gone ahead and done that for you
in the Related Bugs field. Though opensolaris doesn't reflect it yet
Neil.
Haudy Kazemi wrote:
I have
I also noticed (perhaps by design) that a copy with compression off almost
instantly returns, but the writes continue LONG after the cp process claims
to be done. Is this normal?
Yes, this is normal. Unless the application is doing synchronous writes
(e.g. a DB) the file will be written to disk at
Hugh Saunders wrote:
On Sat, May 24, 2008 at 4:00 PM, [EMAIL PROTECTED] wrote:
cache improve write performance or only reads?
The L2ARC cache device is for reads... for writes you want the
Intent Log
Thanks for answering my question, I had seen mention of intent log
devices, but wasn't sure
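In zpool terms the two kinds of auxiliary device are added differently
(device names invented):
    # zpool add tank cache c3t0d0    (L2ARC - helps reads)
    # zpool add tank log c3t1d0      (separate intent log - helps synchronous writes)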
Joe Little wrote:
On Tue, May 27, 2008 at 4:50 PM, Eric Schrock [EMAIL PROTECTED] wrote:
Joe -
We definitely don't do great accounting of the 'vdev_islog' state here,
and it's possible to create a situation where the parent replacing vdev
has the state set but the children do not, but I have
This is actually quite a tricky fix as obviously data and metadata have
to be relocated. Although there's been no visible activity in this bug
there has been substantial design activity to allow the RFE to be easily
fixed.
Anyway, to answer your question, I would fully expect this RFE would
be
Patrick Pinchera wrote:
IHAC using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()'s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically
Mertol,
Yes, dedup is certainly on our list and has been actively
discussed recently, so there's hope and some forward progress.
It would be interesting to see where it fits into our customers'
priorities for ZFS. We have a long laundry list of projects.
In addition there are bug fixes, performance
them up.
I'm still waiting for the hardware for this server, but regarding the
drivers, if these
cards don't work out of the box I was planning to pester Neil Perrin and see
if he still
has some drivers for them :)
Unfortunately, there are a couple of problems:
1. It's been a while
Peter Cudhea wrote:
Your point is well taken that ZFS should not duplicate functionality
that is already or should be available at the device driver level. In
this case, I think it misses the point of what ZFS should be doing that
it is not.
ZFS does its own periodic commits to the