Carson Gaspar wrote:
Todd E. Moore wrote:
I'm working with a group that wants to commit all the way to disk every
single write - flushing or bypassing all the caches each time. The
fsync() call will flush the ZIL. As for the disk's cache, if given the
entire disk, ZFS enables its cache by
Richard Elling wrote:
Ross wrote:
I'm trying to import a pool I just exported but I can't, even -f doesn't
help. Every time I try I'm getting an error:
cannot import 'rc-pool': one or more devices is currently unavailable
Now I suspect the reason it's not happy is that the pool used to
Ross,
Thanks, I have updated the bug with this info.
Neil.
Ross Smith wrote:
Hmm... got a bit more information for you to add to that bug I think.
Zpool import also doesn't work if you have mirrored log devices and
either one of them is offline.
I created two ramdisks with:
#
Michael Hale wrote:
A bug report I've submitted for a zfs-related kernel crash has been
marked incomplete and I've been asked to provide more information.
This CR has been marked as incomplete by User 1-5Q-2508
for the reason Need More Info. Please update the CR
providing the
On 09/05/08 14:42, Narayan Venkat wrote:
I understand that if you want to use ZIL, then the requirement is one or more
ZILs per pool.
A little clarification of ZFS terms may help here. The term ZIL is somewhat
overloaded. I think what you mean here is a separate log device (slog), because
Karthik,
The pool failmode property as implemented governs the behaviour when all
the devices needed are unavailable. The default behaviour is to wait
(block) until the IO can continue - perhaps by re-enabling the device(s).
The behaviour you expected can be achieved by zpool set
On 10/22/08 10:26, Constantin Gonzalez wrote:
Hi,
On a busy NFS server, performance tends to be very modest for large numbers
of small files, due to the well-known effects of ZFS and the ZIL honoring the
NFS COMMIT operation[1].
For the mature sysadmin who knows what (s)he does, there are
But the slog is the ZIL, formally a *separate* intent log.
No, the slog is not the ZIL!
Here's the definition of the terms as we've been trying to use them:
ZIL:
The body of code that supports synchronous requests, which writes
out to the Intent Logs
Intent Log:
A stable
On 10/22/08 13:56, Marcelo Leal wrote:
But the slog is the ZIL, formally a *separate* intent log.
No, the slog is not the ZIL!
Ok, when you wrote this:
I've been slogging for a while on support for separate intent logs (slogs)
for ZFS.
Without slogs, the ZIL is allocated dynamically
Ethan,
It is still not possible to remove a slog from a pool. This is bug:
6574286 removing a slog doesn't work
The error message:
cannot remove c4t15d0p0: only inactive hot spares or cache devices can be
removed
is correct and this is the same as documented in the zpool man page:
Leal,
ZFS uses the DNLC. It still provides the fastest lookup of directory, name to
vnode.
The DNLC is a kind of LRU cache. An async process will use a rotor to move
through the hash chains and select the LRU entry, but will first select
negative cache entries and vnodes referenced only by the DNLC.
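The eviction preference described above (roughly LRU, but negative entries go first) can be sketched as a toy model. Class and method names here are illustrative, not the actual Solaris DNLC implementation:

```python
from collections import OrderedDict

class DnlcSketch:
    """Toy model of the eviction policy described above: roughly LRU,
    but negative cache entries (names that resolved to nothing) are
    evicted before positive ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # name -> vnode, or None for a negative entry

    def enter(self, name, vnode):
        if name not in self.entries and len(self.entries) >= self.capacity:
            self._evict()
        self.entries[name] = vnode
        self.entries.move_to_end(name)  # most recently used

    def _evict(self):
        # Prefer the oldest negative entry; fall back to the plain LRU victim.
        victim = next((k for k, v in self.entries.items() if v is None),
                      next(iter(self.entries)))
        del self.entries[victim]

dnlc = DnlcSketch(capacity=2)
dnlc.enter("a", "vnode-a")
dnlc.enter("missing", None)   # negative entry: a lookup that found nothing
dnlc.enter("b", "vnode-b")    # cache full: the negative entry is evicted first
print(sorted(dnlc.entries))   # ['a', 'b']
```

Note how "missing" is evicted even though "a" is older, matching the stated preference for negative entries.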
On 10/30/08 11:00, Marcelo Leal wrote:
Hello Neil,
Leal,
ZFS uses the DNLC. It still provides the fastest
lookup of directory, name to vnode.
Ok, so the whole concept remains true? We can tune the DNLC and expect the
same behaviour on ZFS?
Yes.
The DNLC is a kind of LRU cache. An async
I wouldn't expect any improvement using a separate disk slice for the Intent Log
unless that disk was much faster and was otherwise largely idle. If it was
heavily used then I'd expect quite a performance degradation as the disk head
bounces around between slices. Separate intent logs are
On 11/20/08 12:52, Danilo Poccia wrote:
Hi,
I was wondering is there is a performance gain for an OLTP-like workload
in putting the ZFS Intent Log (ZIL) on traditional HDDs.
It's probably always best to benchmark it yourself, but my
experience has shown that it's better to only have a
I suspect ZFS is unaware that anything has changed in the
z_phys so it never gets written out. You probably need
to create a dmu transaction and call dmu_buf_will_dirty(zp->z_dbuf, tx);
Neil.
On 11/26/08 03:36, shelly wrote:
In place of the padding in the ZFS znode I added a new field and stored an integer
On 12/02/08 03:47, River Tarnell wrote:
hi,
i have a system connected to an external DAS (SCSI) array, using ZFS. the
array has an nvram write cache, but it honours SCSI cache flush commands by
flushing the nvram to disk. the array has no way to disable this behaviour.
a
well-known
On 01/06/09 21:25, Nicholas Lee wrote:
Since zfs is so smart in other areas, is there a particular reason why a
high-water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but
did not have corruption as it
On 01/12/09 20:45, Simon wrote:
Hi Experts,
IHAC who is using Solaris 10 + ZFS; two questions they're concerned about:
- ZIL (ZFS intent log) is enabled by default for ZFS. There is varied
storage purchased by the customer (such as EMC CX/DMX series, HDS AMS/USP
series, etc.), and the customer wonders whether
This is a known bug:
6678070 Panic from vdev_mirror_map_alloc()
http://bugs.opensolaris.org/view_bug.do?bug_id=6678070
Neil.
On 01/12/09 21:12, Krzys wrote:
any idea what could cause my system to panic? I get my system rebooted daily
at various times. Very strange, but it's pointing to zfs.
much greater importance and when I get such a situation it's quite scary... :(
On Mon, 12 Jan 2009, Neil Perrin wrote:
This is a known bug:
6678070 Panic from vdev_mirror_map_alloc()
http://bugs.opensolaris.org/view_bug.do?bug_id=6678070
Neil.
On 01/12/09 21:12, Krzys wrote:
any idea what
I don't believe that iozone does any synchronous calls (fsync/O_DSYNC/O_SYNC),
so the ZIL and separate logs (slogs) would be unused.
I'd recommend performance testing by configuring filebench to
do synchronous writes:
http://opensolaris.org/os/community/performance/filebench/
Neil.
On 01/15/09
On 01/29/09 21:32, Greg Mason wrote:
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
I've run tests with the write cache disabled on all disks, and the cache
flush disabled. I'm using two Intel SSDs for ZIL
Looks reasonable
+1
Neil.
On 02/02/09 08:55, Mark Shellenbaum wrote:
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributors grants are set
to expire on 02-24-2009 we need to renew the members that are still
On 02/08/09 11:50, Vincent Fox wrote:
So I have read in the ZFS Wiki:
# The minimum size of a log device is the same as the minimum size of a
device in the pool, which is 64 Mbytes. The amount of in-play data that
might be stored on a log device is relatively small. Log blocks are freed
Mark,
I believe creating an older-version pool is supported:
zpool create -o version=vers whirl c0t0d0
I'm not sure what version of ZFS in Solaris 10 you are running.
Try running zpool upgrade and replacing vers above with that version number.
Neil.
: trasimene ; zpool create -o version=11
Having a separate intent log on good hardware will not prevent corruption
on a pool with bad hardware. By good I mean hardware that correctly
flushes its write caches when requested.
Note, a pool is always consistent (again when using good hardware).
The function of the intent log is not to
I'd like to correct a few misconceptions about the ZIL here.
On 03/06/09 06:01, Jim Dunham wrote:
ZFS the filesystem is always on-disk consistent, and ZFS does maintain
filesystem consistency through coordination between the ZPL (ZFS POSIX
Layer) and the ZIL (ZFS Intent Log).
Pool and file
On 03/06/09 08:10, Jim Dunham wrote:
Andrew,
Jim Dunham wrote:
ZFS the filesystem is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the ZPL
(ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for
SNDR, ZFS caches a lot of an
On 03/06/09 14:51, Miles Nordin wrote:
np == Neil Perrin neil.per...@sun.com writes:
np Alternatively, a lockfs will flush just a file system to
np stable storage but in this case just the intent log is
np written. (Then later when the txg commits those intent log
np records
Patrick,
The ZIL is only used for synchronous requests like O_DSYNC/O_SYNC and
fsync(). Your iozone command must be doing some synchronous writes.
All the other tests (dd, cat, cp, ...) do everything asynchronously.
That is they do not require the data to be on stable storage on
return from the
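The distinction above (only O_DSYNC/O_SYNC opens or an explicit fsync() demand stable storage, which is what engages the ZIL) can be sketched in Python. This is a generic POSIX illustration, not ZFS-specific code; O_DSYNC availability is platform-dependent:

```python
import os
import tempfile

# A plain write() may return once data is in the page cache; opening
# with O_DSYNC forces each write to stable storage before returning,
# and fsync() does the same for anything still pending.
fd, path = tempfile.mkstemp()
os.close(fd)
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)  # every write must be stable on return
os.write(fd, b"synchronous write\n")
os.fsync(fd)   # the fsync() path: flush anything still pending
os.close(fd)
size = os.path.getsize(path)
print(size)    # 18
os.remove(path)
```

Tools like dd, cat, and cp issue none of these calls, which is why they run asynchronously as described.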
On 04/10/09 20:15, Toby Thain wrote:
On 10-Apr-09, at 5:05 PM, Mark J Musante wrote:
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer
them even for 60 seconds, it would make everything much smoother.
ZFS already batches up
Will,
This is bug:
6710376 log device can show incorrect status when other parts of pool are
degraded
This is just an error in the reporting. There was nothing actually wrong with
the log device. It is picking up the degraded status from the rest of the pool.
The bug was fixed only yesterday
On 06/20/09 11:14, tester wrote:
Hi,
Does anyone know the difference between zpool iostat and iostat?
dd if=/dev/zero of=/test/test1/trash count=1 bs=1024k;sync
pool only shows 236K of IO and 13 write ops, whereas iostat correctly shows a
meg of activity.
The zfs numbers are per second as
I understand that the ZILs are allocated out of the general pool.
There is one intent log chain per dataset (file system or zvol).
The head of each log is kept in the main pool.
Without slog(s) we allocate (and chain) blocks from the
main pool. If separate intent log(s) exist then
On 08/07/09 10:54, Scott Meilicke wrote:
ZFS absolutely observes synchronous write requests (e.g. by NFS or a
database). The synchronous write requests do not benefit from the
long write aggregation delay so the result may not be written as
ideally as ordinary write requests. Recently zfs
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a lot
of small files. Also, many small, quick writes are one of the many
workloads our users have.
Real-world test: our old Linux-based NFS server allowed us to unpack a
particular
On 09/04/09 09:54, Scott Meilicke wrote:
Roch Bourbonnais Wrote:
100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds.
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool
Nils,
A zil_clean() is started for each dataset after every txg.
This includes snapshots (which is perhaps a bit inefficient).
Still, zil_clean() is fairly lightweight if there's nothing
to do (grab a non-contended lock; find nothing on a list;
drop the lock; exit).
Neil.
On 09/21/09 08:08,
() threads?
Neil.
On 09/21/09 08:53, Neil Perrin wrote:
Nils,
A zil_clean() is started for each dataset after every txg.
This includes snapshots (which is perhaps a bit inefficient).
Still, zil_clean() is fairly lightweight if there's nothing
to do (grab a non-contended lock; find nothing on a list
On 09/23/09 10:59, Scott Meilicke wrote:
How can I verify if the ZIL has been disabled or not?
I am trying to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 | mdb -kw
- this only temporarily disables the
On 09/25/09 16:19, Bob Friesenhahn wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
Problem is most SSD manufacturers list sustained throughput with large
IO sizes, say 4MB, and not 128K, so it is tricky buying a good SSD
that can handle the throughput.
Who said that the slog SSD is written
Also, ZFS does things like putting the ZIL data (when not on a dedicated
device) at the outer edge of disks, that being faster.
No, ZFS does not do that. It will chain the intent log from blocks allocated
from the same metaslabs that the pool is allocating from.
This actually works out well
On 11/18/09 12:21, Joe Cicardo wrote:
Hi,
My customer says:
Application has NFS directories with millions of files in a directory,
and this can't be changed.
We are having issues with the EMC appliance and RPC timeouts on the NFS
lookup. I am looking at doing
Under the hood in ZFS, writes are committed using either shadow paging or
logging, as I understand it. So I believe that I mean to ask whether a
write(2), pushed to ZPL, and pushed on down the stack, can be split into
multiple transactions? Or, instead, is it guaranteed to be committed in a
On 12/03/09 09:21, mbr wrote:
Hello,
Bob Friesenhahn wrote:
On Thu, 3 Dec 2009, mbr wrote:
What about the data that was on the ZIL log SSD at the time of failure?
Is a copy of the data still in the machine's memory, from where it can be
used to put the transaction onto stable storage
On 12/05/09 01:36, anu...@kqinfotech.com wrote:
Hi,
What you say is probably right with respect to L2ARC, but logging (ZIL or
database log) is required for consistency purpose.
No, the ZIL is not required for consistency. The pool is fully consistent
without
the ZIL. See
On 12/06/09 10:11, Anurag Agarwal wrote:
Hi,
My reading of the ZFS write code (zfs_write in zfs_vnops.c) is that all
writes in ZFS are logged in the ZIL.
Each write gets recorded in memory in case it needs to be forced out
later (eg fsync()), but is not written to the on-disk log until
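The behaviour described above (each write is recorded as an in-memory log record, and nothing reaches the on-disk log until something like fsync() forces it) can be sketched as a toy model. This is purely illustrative, not the real ZFS logging code:

```python
class IntentLogSketch:
    """Toy model: writes are recorded in memory in case they need to
    be forced out later; only fsync() pushes records to the on-disk
    intent log."""

    def __init__(self):
        self.in_memory = []  # log records not yet on stable storage
        self.on_disk = []    # records committed to the on-disk intent log

    def write(self, data):
        self.in_memory.append(data)  # recorded, but not stable yet

    def fsync(self):
        # Force all pending records out to the on-disk log.
        self.on_disk.extend(self.in_memory)
        self.in_memory.clear()

log = IntentLogSketch()
log.write(b"a")
log.write(b"b")
print(len(log.on_disk))  # 0: nothing on disk before fsync
log.fsync()
print(len(log.on_disk))  # 2
```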
I'll try to find out whether ZFS binding the same file always to the same
opening transaction group.
Not sure what you mean by this. Transactions (eg writes) will go into
the current open transaction group (txg). Subsequent writes may enter
the same or a future txg. Txgs are obviously
On 12/09/09 13:52, Glenn Lagasse wrote:
* R.G. Keen (k...@geofex.com) wrote:
I didn't see "remove a simple device" anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too
On 12/11/09 14:56, Bill Sommerfeld wrote:
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote:
sh == Seth Heeren s...@zfs-fuse.net writes:
sh If you don't want/need log or cache, disable these? You might
sh want to run your ZIL (slog) on ramdisk.
seems quite silly. why would you
Hi Adam,
So was FW aware of this or in contact with these guys?
Also are you requesting/ordering any of these cards to evaluate?
The device seems kind of small at 4GB, and uses a double-wide PCI Express slot.
Neil.
On 01/13/10 12:27, Adam Leventhal wrote:
Hey Chris,
The DDRdrive X1
On 01/15/10 12:59, Jeffry Molanus wrote:
Sometimes people get confused about the ZIL and separate logs. For
sizing purposes,
the ZIL is a write-only workload. Data which is written to the ZIL is
later asynchronously
written to the pool when the txg is committed.
Right; the txg needs time
On 02/09/10 08:18, Kjetil Torgrim Homme wrote:
Richard Elling richard.ell...@gmail.com writes:
On Feb 8, 2010, at 9:10 PM, Damon Atkins wrote:
I would have thought that if I write 1k then ZFS txg times out in
30secs, then the 1k will be written to disk in a 1k record block, and
then if I
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller NVRAM), and not
all the way out to disks. (To solve performance problems with some
storage systems, and I believe that it also is the right thing
to do under normal circumstances.)
On 03/30/10 20:00, Bob Friesenhahn wrote:
On Tue, 30 Mar 2010, Edward Ned Harvey wrote:
But the speedup of disabling the ZIL altogether is
appealing (and would
probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and
you
have a power
On 04/02/10 08:24, Edward Ned Harvey wrote:
The purpose of the ZIL is to act like a fast log for synchronous
writes. It allows the system to quickly confirm a synchronous write
request with the minimum amount of work.
Bob and Casper and some others clearly know a lot here. But I'm
On 04/05/10 11:43, Andreas Höschler wrote:
Hi Khyron,
No, he did *not* say that a mirrored SLOG has no benefit,
redundancy-wise.
He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG devices
which are striped. And if this machine is running Solaris 10, then
you cannot
remove a
On 04/07/10 09:19, Bob Friesenhahn wrote:
On Wed, 7 Apr 2010, Robert Milkowski wrote:
it is only read at boot if there is uncommitted data on it - during
normal reboots zfs won't read data from the slog.
How does zfs know if there is uncommitted data on the slog device
without reading it? The
On 04/07/10 10:18, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
It is also worth pointing out that in normal operation the slog is
essentially a write-only device which is only read at boot time.
On 04/10/10 09:28, Edward Ned Harvey wrote:
Neil or somebody? Actual ZFS developers? Taking feedback here? ;-)
While I was putting my poor little server through cruel and unusual
punishment as described in my post a moment ago, I noticed something
unexpected:
I expected that
On 04/10/10 14:55, Daniel Carosone wrote:
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote:
Huge synchronous bulk writes are pretty rare since usually the
bottleneck is elsewhere, such as the ethernet.
Also, large writes can go straight to the pool, and the zil only
On 05/26/10 07:10, sensille wrote:
Recently, I've been reading through the ZIL/slog discussion and
have the impression that a lot of folks here are (like me)
interested in getting a viable solution for a cheap, fast and
reliable ZIL device.
I think I can provide such a solution for about $200,
On 06/11/10 22:07, zfsnoob4 wrote:
Hey,
I'm running some test right now before setting up my server. I'm running
Nexenta Core 3.02 (RC2, based on opensolaris build 134 I believe) in Virtualbox.
To do the test, I'm creating three empty files and then making a raidz mirror:
mkfile -n 1g /foo
On 06/12/10 17:13, zfsnoob4 wrote:
Thanks. As I discovered from that post, VB does not have cache flush enabled by
default. Ignoreflush must be explicitly turned off.
VBoxManage setextradata VMNAME
VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush 0
where VMNAME is the name of your
On 06/14/10 12:29, Bob Friesenhahn wrote:
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession of that many small
writes (to fill RAM/2) is highly unlikely. Also, that the zil is
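The routing rule mentioned above (small synchronous writes are copied into intent-log blocks on the slog, large writes go straight to the main pool with the log keeping only a pointer) can be sketched as follows. The 32 KB threshold mirrors the historical default of the zfs_immediate_write_sz tunable; treat the exact value as an assumption:

```python
# Assumed threshold, modelled on the zfs_immediate_write_sz tunable's
# historical default of 32 KB.
IMMEDIATE_WRITE_SZ = 32 * 1024

def route_sync_write(nbytes):
    """Decide where a synchronous write's data goes, per the rule above."""
    if nbytes <= IMMEDIATE_WRITE_SZ:
        return "copy data into log block (slog)"
    return "write data to main pool; log a pointer"

print(route_sync_write(4096))     # small write: lands on the slog
print(route_sync_write(1 << 20))  # large write: main pool, pointer in the log
```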
On 06/14/10 19:35, Erik Trimble wrote:
On 6/14/2010 12:10 PM, Neil Perrin wrote:
On 06/14/10 12:29, Bob Friesenhahn wrote:
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote:
It is good to keep in mind that only small writes go to the dedicated
slog. Large writes go to the main store. A succession
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using ARC?
Or is there some sort of memory pressure point where the DDT gets
moved
On 07/02/10 00:57, Erik Trimble wrote:
On 7/1/2010 10:17 PM, Neil Perrin wrote:
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using
On 07/02/10 11:14, Erik Trimble wrote:
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though. Is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry
On 07/09/10 19:40, Erik Trimble wrote:
On 7/9/2010 5:18 PM, Brandon High wrote:
On Fri, Jul 9, 2010 at 5:00 PM, Edward Ned Harvey
solar...@nedharvey.com mailto:solar...@nedharvey.com wrote:
The default ZFS block size is 128K. If you have a filesystem
with 128G used, that means you
This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together.
When reading the intent log we read each block and checksum it
with the embedded checksum within the same block. If we can't read
a block due to an IO error then
the following entries in case the
log records
in the missing log block were important (eg create file).
Mirroring the slogs is recommended to minimise concerns about slog corruption.
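The replay behaviour described above (read each chained log block, verify it against its embedded checksum, and stop the chain at the first unreadable block, discarding everything after it) can be sketched as a toy model. Here crc32 stands in for the real embedded checksum:

```python
import zlib

def make_chain(payloads):
    """Build a toy intent-log chain: each block carries its payload and
    an embedded checksum (crc32 here, standing in for ZFS's checksum)."""
    return [(p, zlib.crc32(p)) for p in payloads]

def replay(chain):
    """Replay as described above: checksum each block as it is read and
    stop at the first failure, discarding all later entries."""
    replayed = []
    for payload, cksum in chain:
        if zlib.crc32(payload) != cksum:
            break  # an unreadable/corrupt block ends the chain
        replayed.append(payload)
    return replayed

chain = make_chain([b"create f", b"write f", b"remove g"])
chain[1] = (b"write f CORRUPT", chain[1][1])  # simulate a bad middle block
print(replay(chain))  # [b'create f']: later records are lost too
```

This is why a corrupt block in the middle of the chain loses the records after it, and why mirroring the slog is the recommended mitigation.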
Regards,
Markus
Neil Perrin neil.per...@oracle.com wrote on 23 August 2010 at 19:44:
On 08/25/10 20:33, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together
Arne,
NFS often demands its transactions are stable before returning.
This forces ZFS to do the system call synchronously. Usually the
ZIL (code) allocates and writes a new block in the intent log chain to
achieve this.
If it ever fails to allocate a block (of the size requested) it is forced
I should also have mentioned that if the pool has a separate log device
then this shouldn't happen. Assuming the slog is big enough, it should
have enough blocks to not be forced into using main pool device blocks.
Neil.
On 09/09/10 10:36, Neil Perrin wrote:
Arne,
NFS often demands
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5 seconds (zfs_txg_timeout).
When did that become default?
It was changed more
On 09/17/10 23:31, Ian Collins wrote:
On 09/18/10 04:46 PM, Neil Perrin wrote:
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5
On 09/22/10 11:22, Moazam Raja wrote:
Hi all, I have a ZFS question related to COW and scope.
If user A is reading a file while user B is writing to the same file,
when do the changes introduced by user B become visible to everyone?
Is there a block level scope, or file level, or something
On 09/22/10 11:23, Peter Taps wrote:
Folks,
While going through zpool source code, I see a configuration option called
l2cache. What is this option for? It doesn't seem to be documented.
Thank you in advance for your help.
Regards,
Peter
man zpool
under Cache Devices section
On 09/22/10 13:40, Peter Taps wrote:
Neil,
Thank you for your help.
However, I don't see anything about l2cache under Cache devices man pages.
To be clear, there are two different vdev types defined in zfs source code - cache and l2cache.
I am familiar with cache devices. I am curious about
On 09/24/10 11:26, Peter Taps wrote:
Folks,
One of the zpool properties that is reported is dedupditto. However, there is
no documentation available, either in man pages or anywhere else on the Internet. What
exactly is this property?
Thank you in advance for your help.
Regards,
Peter
I
On 10/22/10 15:34, Peter Taps wrote:
Folks,
Let's say I have a volume being shared over iSCSI. The dedup has been turned on.
Let's say I copy the same file twice under different names at the initiator
end. Let's say each file ends up taking 5 blocks.
For dedupe to work, each block for a file
On 10/22/10 17:28, Peter Taps wrote:
Hi Neil,
if the file offset does not match, the chances that the checksum would match,
especially sha256, is almost 0.
May be I am missing something. Let's say I have a file that contains 11 letters
- ABCDEFGHIJK. Let's say the block size is 5.
For the
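The alignment point in this exchange is easy to demonstrate: dedup matches whole blocks by checksum, so if the same bytes sit at different block offsets, no blocks match. A short sketch using the 5-byte blocks from the example (real ZFS blocks are of course much larger, and sha256 is used as in the discussion):

```python
import hashlib

def block_hashes(data, bs):
    """Split data into fixed-size blocks from offset 0 and hash each one,
    mimicking how dedup matches whole blocks by checksum."""
    return [hashlib.sha256(data[i:i + bs]).hexdigest()
            for i in range(0, len(data), bs)]

a = b"ABCDEFGHIJK"          # the 11-letter file from the example
b = b"X" + b"ABCDEFGHIJK"   # same content shifted by one byte

# a's blocks: ABCDE, FGHIJ, K; b's blocks: XABCD, EFGHI, JK
shared = set(block_hashes(a, 5)) & set(block_hashes(b, 5))
print(len(shared))  # 0: a one-byte shift misaligns every block
```

An exact copy of the file, by contrast, produces identical block checksums and dedups completely.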
On 12/01/10 22:14, Miles Nordin wrote:
Also did anyone ever clarify whether the slog has an ashift? or is it
forced-512? or derived from whatever vdev will eventually contain the
separately-logged data? I would expect generalized immediate Caring
about that since no slogs except ACARD and
On 12/25/10 19:32, Bill Werner wrote:
Understood Edward, and if this was a production data center, I wouldn't be
doing it this way. This is for my home lab, so spending hundreds of dollars on
SSD devices isn't practical.
Can several datasets share a single ZIL and a single L2ARC, or much
On 03/31/11 12:28, Roy Sigurd Karlsbakk wrote:
http://pastebin.com/nD2r2qmh
Here is zpool status and zpool version
The only thing I wonder about here, is why you have two striped log devices. I
didn't even know that was supported.
Yes it's supported. ZFS will round robin writes to
On 04/25/11 11:55, Erik Trimble wrote:
On 4/25/2011 8:20 AM, Edward Ned Harvey wrote:
And one more comment: Based on what's below, it seems that the DDT
gets stored on the cache device and also in RAM. Is that correct?
What if you didn't have a cache device? Shouldn't it *always* be in
On 4/28/11 12:45 PM, Edward Ned Harvey wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
OK, I just re-looked at a couple of things, and here's what I /think/ is
the correct numbers.
I just checked, and the current size of this structure is 0x178, or 376
bytes.
Each ARC entry, which
On 04/30/11 01:41, Sean Sprague wrote:
: xvm-4200m2-02 ;
I can do the echo | mdb -k. But what is that : xvm-4200 command?
My guess is that is a very odd shell prompt ;-)
- Indeed
':' means that what follows is a comment (at least to /bin/ksh)
'xvm-4200m2-02' is the comment -
On 05/02/11 14:02, Nico Williams wrote:
Also, sparseness need not be apparent to applications. Until recent
improvements to lseek(2) to expose hole/non-hole offsets, the only way
to know about sparseness was to notice that a file's reported size is
more than the file's reported filesystem
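The lseek(2) improvement mentioned above is exposed in Python as os.SEEK_HOLE/os.SEEK_DATA. A small sketch (availability is platform-dependent; on filesystems without hole support, SEEK_HOLE simply reports the hole at end of file):

```python
import os
import tempfile

# Create a fully sparse file: 1 MB apparent size, no data written.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 1 << 20)
size = os.stat(path).st_size
print(size)  # 1048576: the reported size, regardless of allocation

# With hole support, SEEK_HOLE from offset 0 finds the hole right away;
# without it, the kernel reports the hole at end of file. Either way the
# result never exceeds the file size.
hole = os.lseek(fd, 0, os.SEEK_HOLE)
print(hole <= size)
os.close(fd)
os.remove(path)
```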
On 06/16/11 20:26, Daniel Carosone wrote:
On Thu, Jun 16, 2011 at 09:15:44PM -0400, Edward Ned Harvey wrote:
My personal preference, assuming 4 disks, since the OS is mostly reads and
only a little bit of writes, is to create a 4-way mirrored 100G partition
for the OS, and the remaining 900G
In general the blog's conclusion is correct. When file systems get full
there is fragmentation (it happens to all file systems), and for ZFS the pool
uses gang blocks of smaller blocks when there are insufficient large blocks.
However, the ZIL never allocates or uses gang blocks. It directly
On 08/30/11 08:31, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jesus Cea
10. What happens if my 1GB of ZIL is too optimistic? Will ZFS use the
disks or will it stop writers until the ZIL is flushed to the HDs?
On 9/19/11 11:45 AM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I have a new answer: interaction between dataset encryption and L2ARC
and ZIL.
1. I am pretty sure (but not completely sure) that data stored in the
ZIL is encrypted, if the destination dataset uses encryption.
On 10/28/11 00:04, Mark Wolek wrote:
Still kicking around this idea and didn't see it
addressed in any of the threads before the forum closed.
If one made an all ssd pool, would a log/cache
drive just slow you down? Would zil slow you down? Thinking rotate
MLC drives with
On 10/28/11 00:54, Neil Perrin wrote:
On 10/28/11 00:04, Mark Wolek wrote:
Still kicking around this idea and didn't see
it
addressed in any of the threads before the forum closed.
If one made an all ssd pool, would a log/cache
drive just slow you
Perrin
Sent: Friday, October 28, 2011 11:38 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Log disk with all ssd pool?
On 10/28/11 00:54, Neil Perrin wrote:
On 10/28/11 00:04, Mark Wolek wrote:
Still kicking around this idea and didn't see it
addressed in any
On 08/03/12 19:39, Bob Friesenhahn wrote:
On Fri, 3 Aug 2012, Karl Rossing wrote:
I'm looking at
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
wondering what I should get.
Are people getting intel 330's for l2arc and 520's for slog?
For the slog,
On 10/04/12 05:30, Schweiss, Chip wrote:
Thanks for all the input. It seems information on the
performance of the ZIL is sparse and scattered. I've spent
significant time researching this the past day. I'll summarize
what I've found. Please correct me if I'm wrong.
On 10/04/12 15:59, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
The ZIL code chains blocks together and these are allocated round robin
among slogs