high-speed streams can be
ramped up quickly while files belonging to a pool which has recently
fed low-speed streams can be ramped up more conservatively (until
proven otherwise) in order to not flood memory and starve the I/O
needed by other streams.
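Purely as an illustration, here is a tiny C sketch of what such a
per-pool ramp heuristic could look like (all names and thresholds are
hypothetical, not actual ZFS code):

#include <stdio.h>

/* Hypothetical per-pool history record (illustration only). */
struct pool_hist {
    double recent_stream_mbps;  /* speed of streams this pool recently fed */
};

/* Ramp prefetch quickly for pools that recently proved fast; start
 * conservatively (until proven otherwise) for slow/unproven pools so
 * memory is not flooded and other streams' I/O is not starved. */
static int initial_prefetch_blocks(const struct pool_hist *p)
{
    if (p->recent_stream_mbps > 100.0)
        return 64;   /* proven fast: ramp up quickly */
    return 4;        /* unproven: conservative start */
}

int main(void)
{
    struct pool_hist fast = { 250.0 }, slow = { 20.0 };
    printf("fast pool: %d blocks, slow pool: %d blocks\n",
           initial_prefetch_blocks(&fast),
           initial_prefetch_blocks(&slow));
    return 0;
}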
Bob
e ones used for
zfs.
Bob
huge synchronous writes should not unnecessarily block reading
(particularly if the reads are for unrelated blocks/files), but it is
understandable if zfs focuses more on the writes.
Bob
writers
of important updates may be blocked due to many readers trying to
access those important updates.
Bob
address, I
supply the identification of the system via a script argument. The
name could be obtained from `uname -n` or `hostname` instead.
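For reference, the same node name is also available programmatically
through uname(2); a minimal sketch:

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) == -1) {
        perror("uname");
        return 1;
    }
    /* Prints the same value as `uname -n`. */
    printf("%s\n", u.nodename);
    return 0;
}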
Bob
directly. Zfs removes the RAID obfuscation which exists in
traditional RAID systems.
Bob
t the
disks write data in the most efficient order, but it absolutely must
commit all of the data when requested so that the checkpoint is valid.
Bob
On Fri, 24 Jul 2009, Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
From my understanding, ZFS is not a journalled file system. ZFS
relies on
then a time may come where the drive suddenly "hits the wall" and is
no longer able to erase the data as fast as it comes in.
Bob
d a crime. Regardless, it has committed a crime.
Bob
=1253.74 ops/sec
Min xfer= 85589.00 ops
Bob
On Fri, 24 Jul 2009, Bob Friesenhahn wrote:
This seems like rather low random write performance. My 12-drive array of
rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems
that a synchronous write log should perform considerably better than the
backing store.
Actually
5000 on reads.
This seems like rather low random write performance. My 12-drive
array of rotating rust obtains 3708.89 ops/sec. In order to be
effective, it seems that a synchronous write log should perform
considerably better than the backing store.
Bob
suggest maxing out your server RAM capacity before worrying about
adding an L2ARC. The reason is that RAM is full speed and contains
the L1ARC. The only reason to do otherwise is if you can't afford it.
Bob
> /dev/null'
144000768 blocks
real    35m22.41s
user    0m4.40s
sys     1m14.22s
Notice that with 3X the files, the throughput is dramatically reduced
and the time is the same for both cases.
Bob
nly part of the block is updated.
Bob
on a server and then saved off to bulk storage. The local
disks on the server surely see massive amounts of use.
Bob
theoretically
results in Poof!
Bob
e end of its
lifetime. Unless drive ages are carefully staggered, or different
types of drives are intentionally used, it might be that data
redundancy does not help. Poof!
Bob
icksArea.0
This is pretty exciting stuff. We just have to switch to Windows 7. :-)
I notice that the life of the drives depends considerably on write
cycles. 20GB/day 24/7 does not support a data intensive environment,
which could easily write 10-20X that much.
Bob
thing to get right.
Bob
K10 CPU caused any problem for
Solaris. The chipset used on the motherboard is probably what you
should pay attention to.
Bob
other parity disk
to an existing raidz vdev but I don't know how much work that entails.
Zfs development seems to be overwhelmed with marketing-driven
requirements lately and it is time to get back to brass tacks and make
sure that the parts already developed are truly enterprise-grade.
that the
10TB data lost to a failed pool was not lost due to lack of ECC. It
was lost because VirtualBox intentionally broke the guest operating
system.
Bob
nice. Before developers worry about such exotic
features, I would rather that they attend to the gross performance
issues so that zfs performs at least as well as Windows NTFS or Linux
XFS in all common cases.
Bob
ctim.
I think that the standard disclaimer "Always use protection" applies
here. Victims who do not use protection should assume substantial
guilt for their subsequent woes.
Bob
ed, and not
used for any critical application.
Bob
ware cache sync works and is respected.
Without taking advantage of the drive caches, zfs would be
considerably less performant.
Bob
easy.
A RAID system with distributed parity (like raidz) does not have a
"parity device". Instead, all disks are treated as equal. Without
distributed parity you have a bottleneck and it becomes difficult to
scale the array to different stripe sizes.
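A toy C illustration of the idea (not the actual raidz layout code):
with distributed parity, the disk holding parity simply rotates from
stripe to stripe, so no single device absorbs all the parity writes:

#include <stdio.h>

/* Illustration only: rotate parity placement across all disks. */
static unsigned parity_disk(unsigned stripe, unsigned ndisks)
{
    return stripe % ndisks;
}

int main(void)
{
    unsigned ndisks = 5;
    for (unsigned stripe = 0; stripe < 10; stripe++)
        printf("stripe %2u: parity on disk %u\n",
               stripe, parity_disk(stripe, ndisks));
    return 0;
}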
Bob
I have received email that Sun CR numbers 6861397 & 6859997 have been
created to get this performance problem fixed.
Bob
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c2t202400A0B83A8A0Bd31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t202500A0B83A8A0Bd31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 freddy:vold(pid508)
Bob
em with 40 fast SAS drives.
This is the opposite situation from the zfs writes which periodically
push the hardware to its limits.
Bob
laris
improvement is not as much as was assumed.
Bob
ng's hypothesis (which is the
same as my own).
Bob
mprovement. This is demonstrated by
Scott Lawson's little two disk mirror almost producing the same
performance as our much more exotic setups.
Evidence suggests that SPARC systems are doing better than x86.
Bob
what are you running?): 4m7.13s
This new one from Scott Lawson is incredible (but technically quite
possible):
SPARC Enterprise M3000, single SAS mirror pair: 3m25.13s
Bob
t your little two disk mirror reads as fast
as mega Sun systems with 38+ disks and striped vdevs to boot.
Incredible!
Does this have something to do with your well-managed power and
cooling? :-)
Bob
Solaris kernel is throttling the
read rate so that throwing more and faster hardware at the problem
does not help.
Bob
has a
syntax error when used for Solaris 10 U7.
Bob
ng to much better performance than we are seeing.
Bob
details, the script is
updated to dump some system info and the pool configuration. Refresh
from
http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh
Bob
ad 540MB/second from a huge file).
Do the math and see if you think that zfs is giving you the
read performance you expect based on your hardware.
I think that we are encountering several bugs here. We also have a
general read bottleneck.
Bob
rejected.
If star is truly more efficient than cpio, it may make the difference
even more obvious. What did you discover when you modified my test
script to use 'star' instead?
Bob
been using ZFS
(1-3/4 years). Sometimes it takes a while for me to wake up and smell
the coffee.
Meanwhile I have opened a formal service request (IBIS 71326296) with
Sun Support.
Bob
, I have not filed a bug report yet. Any problem report to Sun's
Service department seems to require at least one day's time.
I was curious to see if recent OpenSolaris suffers from the same
problem, but posted results (thus far) are not as conclusive as they
are for Solaris 10.
this testing mmap is not being used (cpio does not use mmap) so the
page cache is not an issue. It does become an issue for 'cp -r'
though where we see the I/O be substantially (and essentially
permanently) reduced even more for impacted files until the filesystem
is unmounted.
Bob
che. Then once it
finally determines that there is no cached data after all, it issues a
read request.
Even the "better" read performance is 1/2 of what I would expect from
my hardware and based on prior test results from 'iozone'. More
prefetch would surely help.
Bob
when caching is
disabled. I don't think that this is strictly a bug since it is what
the database folks are looking for.
Bob
caused by purging
old data from the ARC. If these delays were caused by purging data
from the ARC, then 'zfs iostat' would start showing lower read
performance once the ARC becomes full, but that is not the case.
Bob
portable USB drives that I use for backup because of ZFS. This is
making me madder and madder by the minute.
Bob
is an
OpenSolaris issue as well.
It seems likely to be more evident with fast SAS disks or SAN devices
rather than a few SATA disks since the SATA disks have more access
latency. Pools composed of mirrors should offer less read latency as
well.
Bob
On Mon, 13 Jul 2009, Alexander Skwar wrote:
This is an M4000 with 32 GB RAM and two HDs in a mirror.
I think that you should edit the script to increase the file count
since your RAM size is big enough to cache most of the data.
Bob
/dev/null'
48000247 blocks
real    23m50.27s
user    2m41.81s
sys     9m46.76s
Feel free to clean up with 'zfs destroy rpool/zfscachetest'.
I am interested to hear about systems which do not suffer from this
bug.
Bob
On Sun, 12 Jul 2009, Bob Friesenhahn wrote:
This is the first I have heard about a ZFS deduplication project. Is there a
public announcement (from Sun) somewhere that there is a ZFS deduplication
project or are you just speculating that there might be such a project?
Ahhh, I found some
times and zfs send. To me, getting existing features to
operate in a stellar fashion is more important than adding new
features.
Rome was not built in a day and we all know how the Tower of Babel
(http://en.wikipedia.org/wiki/Tower_of_Babel) turned out.
Bob
On Wed, 8 Jul 2009, Fredrich Maney wrote:
Any idea what the Patch ID was?
x86: 119535-15
SPARC: 119534
Description of change "6690473 request to have flash support for ZFS
root install".
Bob
Solaris 10 patch for supporting Flash archives on ZFS
came out about a week ago.
Bob
tuned so
it is likely that a single process won't see much benefit.
Bob
existing working madvise() functionality.
ZFS seems to want to cache all read data in the ARC, period.
Bob
madvise() provides the ability for I/O scheduling, or
to flush stale data from memory. In recent Solaris, it also includes
provisions which allow applications to improve their performance on
NUMA systems.
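A minimal sketch of those madvise() uses (error handling trimmed; the
file name is a placeholder):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("datafile", O_RDONLY);   /* placeholder file */
    if (fd == -1)
        return 1;
    struct stat st;
    fstat(fd, &st);
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* I/O scheduling hint: we will read this region sequentially. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    /* ... process the mapping ... */

    /* Flush stale data: these pages may now be reclaimed. */
    madvise(p, st.st_size, MADV_DONTNEED);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}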
Bob
nbusy the CPU cores are when I/O is bad.
Bob
0xe1
1  17% 100% 0.00   324 cpu[2]+0xf8  cv_timedwait_sig+0xe1
-------
wn us
that once you've copied files with cp(1) - which does use mmap(2) -
anything that uses read(2) on the same files is impacted.
The problem is observed with cpio, which does not use mmap. This is
immediately after a reboot or unmount/mount of the filesystem.
Bob
which has sometime been mapped, will be impacted,
even if the file is no longer mapped.
However, it seems that memory mapping is not responsible for the
problem I am seeing here. Memory mapping may make the problem seem
worse, but it is clearly not the cause.
Bob
h, but I have attached output
from 'hotkernel' while a subsequent cpio copy is taking place. It
shows that the kernel is mostly sleeping.
This is not a new problem. It seems that I have been banging my head
against this from the time I started using zfs.
Bob
Did you try to use highly performant software like star?
No, because I don't want to tarnish your software's stellar
reputation. I am focusing on Solaris 10 bugs today.
Bob
ooted immediately prior to each use.
/etc/system tunables are currently:
set zfs:zfs_arc_max = 0x28000
set zfs:zfs_write_limit_override = 0xea60
set zfs:zfs_vdev_max_pending = 5
Bob
SSDs.
Bob
ools to improve application
performance which are just not available via traditional I/O.
Bob
cycle is
more on the order of one second in every five.
Bob
configure the various tunables
to match after that
Yes, this comes off of a 2540. I used iozone for testing and see that
through zfs, the hardware is able to write a 64GB file at 380 MB/s and
read at 551 MB/s. Unfortunately, this does not seem to translate well
for the actual task.
On Fri, 3 Jul 2009, Bob Friesenhahn wrote:
Copy Method                            Data Rate
====================================   =========
cpio -pdum                             75 MB/s
cp -r                                  32 MB/s
tar -cf - . | (cd dest && tar -xf -)
erformance. If I am encountering this
problem, then it is likely that many others are as well.
Bob
the end of the transfer or if the application waits for
a response while the kernel is still waiting for more data from the
application. It is necessary to remove TCP_CORK before writing the
final data and if the application guesses wrong, the connection will
hang.
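A short C sketch of that pattern (TCP_CORK is Linux-specific; the
function and its arguments here are illustrative):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_response(int sock, const char *header, const char *body)
{
    int on = 1, off = 0;

    /* Cork: let the kernel coalesce the pieces into full segments. */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    write(sock, header, strlen(header));

    /* Remove the cork BEFORE the final write, or the peer may wait
     * on data the kernel is still holding back. */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    write(sock, body, strlen(body));
}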
Bob
intervals results in large delays while the
system syncs, followed by normal response times while the system buffers more
input...
I don't see any such problems unless compression is enabled. When
compression is enabled, the TXG sync causes definite response time
issues in the syste
On Fri, 3 Jul 2009, Victor Latushkin wrote:
On 02.07.09 22:05, Bob Friesenhahn wrote:
On Thu, 2 Jul 2009, Zhu, Lejun wrote:
Actually it seems to be 3/4:
3/4 is an awful lot. That would be 15 GB on my system, which explains why
the "5 seconds to write" rule is dominant.
3/4
huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed write caches
seem to be fastest.
Bob
correct order.
Ugly, but seems to have worked for many years.)
Oops! The only solution is to add some code to sort the results. You
can use qsort() for that.
Depending on existing directory order is an error.
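A minimal C sketch of that fix (fixed-size array for brevity):

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int by_name(const void *a, const void *b)
{
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

int main(void)
{
    DIR *d = opendir(".");
    if (d == NULL)
        return 1;
    char *names[4096];
    size_t n = 0;
    struct dirent *de;
    while ((de = readdir(d)) != NULL && n < 4096)
        names[n++] = strdup(de->d_name);
    closedir(d);

    /* Never depend on on-disk directory order: sort explicitly. */
    qsort(names, n, sizeof(names[0]), by_name);

    for (size_t i = 0; i < n; i++) {
        puts(names[i]);
        free(names[i]);
    }
    return 0;
}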
Bob
m that zfs is incapable of reading
during all/much of the time it is syncing a TXG. Even if the TXG is
written more often, readers will still block, resulting in a similar
cumulative effect on performance.
Bob
us is good - no issues
or errors. any ideas?
Try using direct i/o (the -D flag) in bonnie++. You'll need at least version
1.03e.
If this -D flag uses the Solaris directio() function, then it will do
nothing for ZFS. It only works for UFS and NFS.
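For reference, a minimal sketch of the underlying Solaris directio(3C)
call (the file name is a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_RDWR);   /* placeholder file */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Advise unbuffered I/O; honored on UFS and NFS, a no-op on ZFS. */
    if (directio(fd, DIRECTIO_ON) == -1)
        perror("directio");

    /* ... run the benchmark I/O here ... */
    close(fd);
    return 0;
}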
Bob
This causes me to believe that the algorithm is not implemented as
described in Solaris 10.
Bob
where the sawtooth occurs.
More at 11.
Bob
issue does not apply at all to NFS service, database
service, or any other usage which does synchronous writes.
Bob
.png
Perfmeter display for 768 MB:
http://www.simplesystems.org/users/bfriesen/zfs-discuss/perfmeter-768mb.png
Bob
wise the outage is usually longer than the UPSs can
stay up since the problem required human attention.
A standby generator is needed for any long outages.
Bob
g globally blocked and it is not just
misbehavior of a single process.
Bob
On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
Today I experimented with doubling this value to 688128 and was happy to
see a large increase in sequential read performance from my ZFS pool which
is based on six mirror vdevs. Sequential read
ation issue
and it will be resolved soon.
Bob
ed below.
Bob
#!/bin/ksh
# Date: Mon, 14 Apr 2008 15:49:41 -0700
# From: Jeff Bonwick
# To: Henrik Hjort
# Cc: zfs-discuss@opensolaris.or
On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
Today I experimented with doubling this value to 688128 and was happy to see
a large increase in sequential read performance from my ZFS pool which is
based on six mirror vdevs. Sequential read performance jumped from
552.787 MB/s to 799.626 MB/s
read
performance by balancing the reads from the mirror devices. Now the
read performance is almost 2X the write performance.
Bob
obvious since the disks
will only be driven as hard as the slowest disk and so the slowest
disk may not seem much slower.
Bob
the slow disk
a bit more difficult.
Bob
earlier
zpool iostat data for iSCSI). Isn't this what we expect, because NFS
does syncs, while iSCSI does not (assumed)?
If iSCSI does not do syncs (presumably it should when a cache flush is
requested) then NFS is safer in case the server crashes and reboots.
Bob
s internal buffers are much smaller than
that, and fiber channel device drivers are not allowed to consume much
memory either. To make matters worse, I am using ZFS mirrors so the
amount of data written to the array in those five seconds is doubled
to 3.6GB.
Bob
. Since Matt Ahrens has been
working on it for almost a year, it must be almost fixed by now. :-)
I am not sure how the queue depth is managed, but it seems possible to
detect when reads are blocked by bulk writes and make some automatic
adjustments to improve balance.
Bob
even though the large ZFS flushes are taking
place. This proves that my application is seeing stalled reads rather
than stalled writes.
Bob
250 MB/s.
Bob
386M
Sun_2540    460G  1.18T    142    792  15.8M  82.7M
Sun_2540    460G  1.18T    375      0  46.9M      0
Here is an interesting discussion thread on another list that I had
not seen before:
http://opensolaris.org/jive/thread.jspa?messageID=347212
Bob