zfs hogs all the ram under a sustained heavy write load. This is
being tracked by:
6429205 each zpool needs to monitor its throughput and throttle heavy
writers
-r
Jill Manfield writes:
My customer is running java on a ZFS file system. His platform is Solaris
10 x86 SF X4200. When he enabled ZFS, his memory of 18 gigs drops to 2 gigs
rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it
came back:
The culprit
With ZFS however the in-between cache is obsolete, as individual disk
caches can be used directly.
The statement needs to be qualified.
Storage cache, if protected, works great to reduce critical
op latency. ZFS, when it writes to the disk cache, will flush
the data out before returning to
Jürgen Keil writes:
ZFS 11.0 on Solaris release 06/06 hangs systems when
trying to copy files from my VxFS 4.1 file system.
Any ideas what this problem could be?
What kind of system is that? How much memory is installed?
I'm able to hang an Ultra 60 with 256 MByte of main
As an alternative, I thought this would be relevant to the
discussion:
Bug ID: 6478980
Synopsis: zfs should support automount property
In other words, do we really need to mount 1 FS in a
snap, or do we just need the system to be up quickly and then
mount on demand
-r
Erblichs writes:
Hi,
My suggestion is to direct any command output that may
print thousands of lines to a file.
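For instance (just an illustration; the output file name is arbitrary):

  # zfs list > /var/tmp/zfs-list.out 2>&1
  # wc -l /var/tmp/zfs-list.out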
I have not tried that number of FSs. So, my first
suggestion is to have a lot of physical memory installed.
I seem to recall 64K per FS and being worked on to
Luke Lonergan writes:
Robert,
I believe it's not solved yet, but you may want to try with
latest nevada and see if there's a difference.
It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
post build 47 I think.
- Luke
This one is not yet fixed :
Chris Gerhard writes:
One question that keeps coming up in my discussions about ZFS is the lack of
user quotas.
Typically this comes from people who have many tens of thousands
(30,000 - 100,000) of users where they feel that having a file system
per user will not be manageable. I
How much memory in the V210 ?
UFS will recycle its own pages while creating files that
are big. ZFS, working against a large heap of free memory, will
cache the data (why not?). The problem is that ZFS does not
know when to stop. During the subsequent memory/cache
reclaim, ZFS is potentially not
Here is my take on this
http://blogs.sun.com/roch/entry/zfs_and_directio
-r
Marlanne DeLaSource writes:
I had a look at various topics covering ZFS direct I/O, and this topic is
sometimes mentioned, and it was not really clear to me.
Correct me if I'm wrong
Direct I/O
Tomas Ögren writes:
On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
Tomas,
comments inline...
arc::print struct arc
{
anon = ARC_anon
mru = ARC_mru
mru_ghost = ARC_mru_ghost
mfu = ARC_mfu
?
Thanks
Matt
Roch - PAE wrote On 11/21/06 11:28,:
Matthew B Sweeney - Sun Microsystems Inc. writes:
Hi
I have an application that use NFS between a Thumper and a 4600. The
Thumper exports 2 ZFS filesystems that the 4600 uses as an inqueue and
outqueue
Nope, wrong conclusion again.
This large performance degradation has nothing whatsoever to
do with ZFS. I have not seen data that would show a possible
slowness on the part of ZFS vs. AnyFS on the
backend; there may well be, and that would be an entirely
different discussion to the large
MB/s. Not a
huge difference for sure, but enough to make you think about switching.
This was single stream over a 10GE link. (x4600 mounting vols from an x4500)
Matt
Bill Moore wrote:
On Thu, Nov 23, 2006 at 03:37:33PM +0100, Roch - PAE wrote:
Al Hopper writes:
Hi
How about attaching the slow storage and kicking off a
scrub during the night? Then detaching in the morning?
Downside: you are running an unreplicated pool during the
day. Storage side errors won't be recoverable.
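Something along these lines, with made-up pool and device names (a sketch only, not tested):

  # zpool attach tank c0t1d0 c2t1d0     (evening: mirror onto the slow disk)
  # zpool scrub tank                    (let the scrub run overnight)
  # zpool detach tank c2t1d0            (morning: back to the fast disk only)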
-r
Albert Shih writes:
On 04/12/2006 at 21:24:26 -0800, Anton B. Rang wrote:
Why is everyone strongly recommending using a whole disk (not part
of a disk) when creating zpools / ZFS file systems?
One thing is performance; ZFS can enable/disable the write cache in the disk
at will if it has full control over the entire disk.
ZFS will also flush the WC when
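If you want to see what ZFS did to the drive's write cache, you can peek at it with format(1M) in expert mode; roughly (from memory, details vary by driver and release):

  # format -e
  format> cache
  cache> write_cache
  write_cache> display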
I got around that some time ago with a little hack;
Maintain a directory with soft links to disks of interest:
ls -l .../mydsklist
total 50
lrwxrwxrwx 1 cx158393 staff 17 Apr 29 2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1
lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 ->
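The links themselves are just ordinary symlinks, created with something like (the .../mydsklist path is abbreviated as above):

  # ln -s /dev/dsk/c1t0d0s1 .../mydsklist/c1t0d0s1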
Anton B. Rang writes:
If your database performance is dominated by sequential reads, ZFS may
not be the best solution from a performance perspective. Because ZFS
uses a write-anywhere layout, any database table which is being
updated will quickly become scattered on the disk, so that
Maybe this will help:
http://blogs.sun.com/roch/entry/zfs_and_directio
-r
dudekula mastan writes:
Hi All,
We have directio() system to do DIRECT IO on UFS file system. Can
any one know how to do DIRECT IO on ZFS file system.
Regards
Masthan
The latency issue might improve with this rfe
6471212 need reserved I/O scheduler slots to improve I/O latency of critical
ops
-r
Tom Duell writes:
Group,
We are running a benchmark with 4000 users
simulating a hospital management system
running on Solaris 10 6/06 on USIV+ based
Right on. And you might want to capture this in a blog for
reference. The permalink will be quite useful.
We did have a use case for zil synchronicity which was a
big user controlled transaction :
turn zil off
do tons of things to the filesystem.
big sync
turn
Was it over NFS ?
Was zil_disable set on the server ?
If it's yes/yes, I still don't know for sure if that would
be grounds for a causal relationship, but I would certainly
be looking into it.
-r
Trevor Watson writes:
Anton B. Rang wrote:
Were there any errors reported in
Jason J. W. Williams writes:
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
contributor. :-)
The behavior is a
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
-r
Al Hopper writes:
On Sun, 17 Dec 2006, Ricardo Correia wrote:
On Friday 15 December 2006 20:02, Dave Burleson wrote:
Does anyone have a document that describes
Jonathan Edwards writes:
On Dec 19, 2006, at 07:17, Roch - PAE wrote:
Shouldn't there be a big warning when configuring a pool
with no redundancy and/or should that not require a -f flag ?
Why? What if the redundancy is below the pool... should we
warn that ZFS isn't
Robert Milkowski writes:
Hello przemolicc,
Friday, December 22, 2006, 10:02:44 AM, you wrote:
ppf On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
Hello Shawn,
Thursday, December 21, 2006, 4:28:39 PM, you wrote:
SJ All,
SJ I understand that
I've just generated some data for an upcoming blog entry on
the subject. This is about a small file tar extract :
All times are elapsed (single 72 GB SAS disk)
Local and memory based filesystems
tmpfs : 0.077 sec
ufs : 0.25 sec
zfs : 0.12 sec
NFS service
Anton B. Rang writes:
In our recent experience, RAID-5, due to the 2 reads, an XOR calc, and a
write op per write instruction, is usually much slower than RAID-10
(two write ops). Any advice is greatly appreciated.
RAIDZ and RAIDZ2 do not suffer from this malady (the RAID-5 write
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
Performance, Availability Architecture Engineering
Roch Bourbonnais, Sun Microsystems
Hans-Juergen Schnitzer writes:
Roch - PAE wrote:
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
What role does network latency play? If I understand you right,
even a low-latency network, e.g. InfiniBand, would not increase
performance
Dennis Clarke writes:
On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
So just to confirm; disabling the zil *ONLY* breaks the semantics of
fsync()
and synchronous writes from the application
Jonathan Edwards writes:
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Direct I/O as generally understood (i.e. not UFS-specific) is an
optimization which allows data to
If some aspect of the load is writing a large amount of data
into the pool (through the memory cache, as opposed to the
zil) and that leads to a frozen system, I think that a
possible contributor should be:
6429205 each zpool needs to monitor its throughput and throttle heavy writers
Jason J. W. Williams writes:
Hi Anantha,
I was curious why segregating at the FS level would provide adequate
I/O isolation? Since all FS are on the same pool, I assumed flogging a
FS would flog the pool and negatively affect all the other FS on that
pool?
Best Regards,
Jason
[EMAIL PROTECTED] writes:
Note also that for most applications, the size of their IO operations
would often not match the current page size of the buffer, causing
additional performance and scalability issues.
Thanks for mentioning this, I forgot about it.
Since ZFS's default
Anantha N. Srirama writes:
Agreed, I guess I didn't articulate my point/thought very well. The
best config is to present JBoDs and let ZFS provide the data
protection. This has been a very stimulating conversation thread; it
is shedding new light into how to best use ZFS.
I would
Bjorn Munch writes:
Hello,
I am doing some tests using ZFS for the data files of a database
system, and ran into memory problems which have been discussed in a
thread here a few weeks ago.
When creating a new database, the data files are first initialized to
their configured size
Nicolas Williams writes:
On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote:
The only benefit of using a HW RAID controller with ZFS is that it
reduces the I/O that the host needs to do, but the trade off is that ZFS
cannot do combinatorial parity reconstruction so
Robert Milkowski writes:
Hello Jonathan,
Tuesday, February 6, 2007, 5:00:07 PM, you wrote:
JE On Feb 6, 2007, at 06:55, Robert Milkowski wrote:
Hello zfs-discuss,
It looks like when zfs issues write cache flush commands, the SE3510
actually honors them. I do not have right
It's just a matter of time before ZFS overtakes UFS/DIO
for DB loads, See Neel's new blog entry:
http://blogs.sun.com/realneel/entry/zfs_and_databases_time_for
-r
Robert Milkowski writes:
bash-3.00# dtrace -n fbt::txg_quiesce:return'{printf("%Y ", walltimestamp);}'
dtrace: description 'fbt::txg_quiesce:return' matched 1 probe
CPU     ID                    FUNCTION:NAME
  3  38168             txg_quiesce:return 2007 Feb 12 14:08:15
  0  38168
Duh!
Long syncs (which delay the next sync) are also possible on
write-intensive workloads. Throttling heavy writers, I
think, is the key to fixing this.
Robert Milkowski writes:
Hello Roch,
Monday, February 12, 2007, 3:19:23 PM, you wrote:
RP Robert Milkowski writes:
Erblichs writes:
Jeff Bonwick,
Do you agree that there is a major tradeoff when ZFS
builds up a wad of transactions in memory?
We lose the changes if we have an unstable
environment.
Thus, I don't quite understand why a 2-phase
approach to commits
Peter Schuller writes:
I agree about the usefulness of fbarrier() vs. fsync(), BTW. The cool
thing is that on ZFS, fbarrier() is a no-op. It's implicit after
every system call.
That is interesting. Could this account for disproportionate kernel
CPU usage for applications that
The only obvious thing would be if the exported ZFS
filesystems were initially mounted at a point in time when
zil_disable was non-null.
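A quick way to check the live value is the usual mdb one-liner (read-only, but still use with care on production):

  # echo zil_disable/D | mdb -k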
The stack trace that is relevant is:
sd_send_scsi_SYNCHRONIZE_CACHE
sd`sdioctl+0x1770
On x86 try with sd_send_scsi_SYNCHRONIZE_CACHE
Leon Koll writes:
Hi Marion,
your one-liner works only on SPARC and doesn't work on x86:
# dtrace -n fbt::ssd_send_scsi_SYNCHRONIZE_CACHE:entry'[EMAIL PROTECTED] =
count()}'
dtrace: invalid probe specifier
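On x86 the function has no ssd prefix, so something along these lines should match (the aggregation body here is a guess, since the archive mangled the original):

  # dtrace -n 'fbt::sd_send_scsi_SYNCHRONIZE_CACHE:entry{@[stack()] = count()}'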
Leon Koll writes:
An update:
Not sure if it is related to the fragmentation, but I can say that the serious
performance degradation in my NFS/ZFS benchmarks is a result of the on-disk ZFS
data layout.
Read operations on directories (NFSv3 readdirplus) are abnormally time
consuming. That
dudekula mastan writes:
If a write call attempted to write X bytes of data, and the write call writes
only x (where x < X) bytes, then we call that write a short write.
-Masthan
What kind of support do you want/need ?
-r
from CC: people related to Perforce benchmark (not in
techtracker) is welcome.
Thanks,
Claude
Roch - PAE wrote:
Salut Claude.
For this kind of query, try zfs-discuss@opensolaris.org;
Looks like a common workload to me.
I know of no small file problem with ZFS.
You
So Jonathan, you have a concern about the on-disk space
efficiency for small files (more or less sub-sector). It is a
problem that we can throw rust at. I am not sure if this is
the basis of Claude's concern though.
Creating small files, last week I did a small test. With ZFS
I can create 4600
Jens Elkner writes:
Currently I'm trying to figure out the best zfs layout for a thumper with respect to
read AND write performance.
I did some simple mkfile 512G tests and found out that, on average, ~
500 MB/s seems to be the maximum one can reach (tried initial default
setup, all 46
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all of them
waiting on I/O) but then tend to stick around servicing
moderate loads.
-r
Leon Koll wrote:
Hello, gurus
I need your help. During the benchmark test
Jeff Davis writes:
On February 26, 2007 9:05:21 AM -0800 Jeff Davis
But you have to be aware that logically sequential
reads do not
necessarily translate into physically sequential
reads with zfs. zfs
I understand that the COW design can fragment files. I'm still trying to
Frank Hofmann writes:
On Tue, 27 Feb 2007, Jeff Davis wrote:
Given your question are you about to come back with a
case where you are not
seeing this?
As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the
I/O rate drops off quickly when you add
Leon Koll writes:
On 2/28/07, Roch - PAE [EMAIL PROTECTED] wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all of them
waiting on I/O) but then tend to stick around servicing
moderate loads
Leon Koll writes:
On 3/5/07, Roch - PAE [EMAIL PROTECTED] wrote:
Leon Koll writes:
On 2/28/07, Roch - PAE [EMAIL PROTECTED] wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all
Jesse, You can change txg_time with mdb
echo txg_time/W0t1 | mdb -kw
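To check the current value first (read-only variant of the same trick):

  # echo txg_time/D | mdb -k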
-r
Manoj Joseph writes:
Matt B wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for
production."
ZFS turns on write cache on the disk if you give it the entire disk to
manage. It is good for
Working with a small txg_time means we are hit by the pool
sync overhead more often. This is why the per second
throughput has smaller peak values.
With txg_time = 5, we have another problem which is that
depending on timing of the pool sync, some txg can end up
with too little data in them
Did you run touch from a client ?
ZFS and UFS are different in general but in response to a local touch
command neither need to generate immediate I/O and in response to a client
touch both do.
-r
Ayaz Anjum writes:
Hi!
Well, as per my actual post, I created a zfs file as part of Sun
Frank Cusack writes:
On March 7, 2007 8:50:53 AM -0800 Matt B [EMAIL PROTECTED] wrote:
Any thoughts on the best practice points I am raising? It disturbs me
that it would make a statement like "don't use slices for production."
I think that's just a performance thing.
Right, I
Info on tuning the ARC was just recently updated:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations
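On builds where the /etc/system tunable is supported, capping the ARC looks roughly like this (the value is only an example; see the guide above for current guidance):

  set zfs:zfs_arc_max = 0x100000000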
-r
Rainer Heilke writes:
Thanks for the feedback. Please see below.
ZFS should give back memory used for cache to
Rainer Heilke writes:
The updated information states that the kernel setting is only for the
current Nevada build. We are not going to use the kernel debugger
method to change the setting on a live production system (and do this
every time we need to reboot).
We're back to trying to
Hi Mike, This already integrated in Nevada:
6510807 ARC statistics should be exported via kstat
kstat zfs:0:arcstats
module: zfs                             instance: 0
name:   arcstats                        class:    misc
c
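And to just watch the cache size over time, one statistic with an interval works too (standard kstat syntax; the 5 second interval is arbitrary):

  # kstat -p zfs:0:arcstats:size 5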
JS writes:
The big problem is that if you don't do your redundancy in the zpool,
then the loss of a single device flatlines the system. This occurs in
single device pools or stripes or concats. Sun support has said in
support calls and Sunsolve docs that this is by design, but I've never
Richard L. Hamilton writes:
_FIOSATIME - why doesn't zfs support this (assuming I didn't just miss it)?
Might be handy for backups.
Are these syscalls sufficient?
int utimes(const char *path, const struct timeval times[2]);
int futimesat(int fildes, const char *path, const struct timeval times[2]);
See
Kernel Statistics Library Functions kstat(3KSTAT)
-r
Atul Vidwansa writes:
Peter,
How do I get those stats programmatically? Any clues?
Regards,
_Atul
Robert Milkowski writes:
Hello Selim,
Wednesday, March 28, 2007, 5:45:42 AM, you wrote:
SD talking of which,
SD what's the effort and consequences to increase the max allowed block
SD size in zfs to higher figures like 1M...
Think what would happen then if you try to read 100KB
Page Summary                Pages        MB   %Tot
                           220434       861     5%
Free (cachelist)           318625      1244     8%
Free (freelist)            659607      2576    16%
Total                     4167561     16279
Physical                  4078747     15932
On 3/23/07, Roch - PAE [EMAIL
Annie Li writes:
Can anyone help explain what "out-of-order issue" means in the
following segment?
ZFS has a pipelined I/O engine, similar in concept to CPU pipelines. The
pipeline operates on I/O dependency graphs and provides scoreboarding,
priority, deadline scheduling,
Gino writes:
6322646 ZFS should gracefully handle all devices
failing (when writing)
Which is being worked on. Using a redundant
configuration prevents this
from happening.
What do you mean by redundant? All our servers have 2 or 4 HBAs, 2 or 4
FC switches and
Richard L. Hamilton writes:
Well, no; his quote did say software or hardware. The theory is apparently
that ZFS can do better at detecting (and with redundancy, correcting) errors
if it's dealing with raw hardware, or as nearly so as possible. Most SANs
_can_ hand out raw LUNs as well as
tester writes:
Hi,
quoting from zfs docs
The SPA allocates blocks in a round-robin fashion from the top-level
vdevs. A storage pool with multiple top-level vdevs allows the SPA to
use dynamic striping to increase disk bandwidth. Since a new block may
be allocated from any of the
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
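Roughly, with a made-up dataset name (and remembering zil_disable is a live kernel variable, so this is a sketch, not a recommendation):

  # echo zil_disable/W0t1 | mdb -kw
  # zfs unmount tank/export
  # zfs mount tank/export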
-r
cedric briner writes:
Hello,
I wonder if the subject of this email is not self-explanatory?
Okay, let's say that it is not. :)
Robert Milkowski writes:
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as
Wee Yeh Tan writes:
Robert,
On 4/27/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
-
Chad Mynhier writes:
On 4/27/07, Erblichs [EMAIL PROTECTED] wrote:
Ming Zhang wrote:
Hi All
I wonder if anyone has an idea about the performance loss caused by COW
in ZFS? If you have to read old data out before writing it to some other
place, it involves a disk seek.
cedric briner writes:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
Just to understand better (I know that I'm quite slow :( ):
when you say _nfs clients_, are you specifically
Ian Collins writes:
Roch Bourbonnais wrote:
with recent bits ZFS compression is now handled concurrently with many
CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
Would changing (selecting a smaller)
Manoj Joseph writes:
Hi,
I was wondering about the ARC and its interaction with the VM
pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
cache get mapped to the process' virtual memory? Or is there another copy?
My understanding is,
The ARC does not get mapped
Torrey McMahon writes:
Toby Thain wrote:
On 25-May-07, at 1:22 AM, Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen [EMAIL PROTECTED] wrote:
What if your HW-RAID-controller dies? in say 2 years or
Hi Seigfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the drive in question had some
fast write caches. If
Joe S writes:
After researching this further, I found that there are some known
performance issues with NFS + ZFS. I tried transferring files via SMB, and
got write speeds on average of 25MB/s.
So I will have my UNIX systems use SMB to write files to my Solaris server.
This seems
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683
Fixed in snv_64.
-r
Thomas Garner writes:
We have seen this behavior, but it appears to be entirely
Regarding the bold statement
There is no NFS over ZFS issue
What I mean here is that, if you _do_ encounter a
performance pathology not linked to the NVRAM storage/cache-flush
issue, then you _should_ complain, or better, get someone
to do an analysis of the situation.
One
Brandorr wrote:
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the
Łukasz K writes:
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the
Matty writes:
Are there any plans to support record sizes larger than 128k? We use
ZFS file systems for disk staging on our backup servers (compression
is a nice feature here), and we typically configure the disk staging
process to read and write large blocks (typically 1MB or so). This
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
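On the recordsize question specifically: 128k is currently the ceiling, so per file system you can only set it at or below that, e.g. (hypothetical dataset name):

  # zfs set recordsize=128k tank/staging
  # zfs get recordsize tank/staging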
Then if you must, this could soothe or sting :
Pawel Jakub Dawidek writes:
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
http://www.solarisinternals.com/wiki/index.php
[EMAIL PROTECTED] writes:
Roch - PAE wrote:
[EMAIL PROTECTED] writes:
Jim Mauro wrote:
Hey Max - Check out the on-disk specification document at
http://opensolaris.org/os/community/zfs/docs/.
Page 32 illustration shows the rootbp pointing to a dnode_phys_t
Claus Guttesen writes:
I have many small - mostly jpg - files where the original file is
approx. 1 MB and the thumbnail generated is approx. 4 KB. The files
are currently on vxfs. I have copied all files from one partition onto
a zfs-ditto. The vxfs-partition occupies 401 GB and
Claus Guttesen writes:
So the 1 MB files are stored as ~8 x 128K recordsize.
Because of
5003563 use smaller tail block for last block of object
The last block of your file is partially used. It will depend
on your filesize distribution, but without that info we can
Pawel Jakub Dawidek writes:
I'm CCing zfs-discuss@opensolaris.org, as this doesn't look like a
FreeBSD-specific problem.
It looks like there is a problem with block allocation(?) when we are near
the quota limit. The tank/foo dataset has a quota set to 10m:
Without quota:
FreeBSD:
Hi Jason, This should have helped.
6542676 ARC needs to track meta-data memory overhead
Some of the lines to arc.c:
if (arc_meta_used >= arc_meta_limit) {
        /*
         * We are exceeding our meta-data cache
Vincent Fox writes:
I don't understand. How do you
setup one LUN that has all of the NVRAM on the array dedicated to it
I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
thick here, but can you be more specific for the n00b?
Do you mean from firmware side or
Neelakanth Nadgir writes:
The io:::start probe does not seem to get zfs filenames in
args[2]->fi_pathname. Any ideas how to get this info?
-neel
Who says an I/O is doing work for a single pathname/vnode
or for a single process? There is no longer that one-to-one
correspondence. Not in the
Rayson Ho writes:
1) Modern DBMSs cache database pages in their own buffer pool because
it is less expensive than accessing data from the OS. (IIRC, MySQL's
MyISAM is the only one that relies on the FS cache, but a lot of MySQL
sites use INNODB which has its own buffer pool)
The DB
Matty writes:
On 10/3/07, Roch - PAE [EMAIL PROTECTED] wrote:
Rayson Ho writes:
1) Modern DBMSs cache database pages in their own buffer pool because
it is less expensive than accessing data from the OS. (IIRC, MySQL's
MyISAM is the only one that relies on the FS cache