Similarly, read block size does not make a
significant difference to the sequential read speed.
Last time I did a simple bench using dd, supplying the record size as
the block size, instead of leaving the blocksize parameter off, bumped
the mirror pool speed from 90MB/s to 130MB/s.
-mg
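For reference, a dd run of the sort described above might look like this (the
pool and file names are only placeholders, and 128k is assumed to match the
dataset's recordsize, which is the ZFS default):

  # no explicit block size: dd falls back to its 512-byte default
  dd if=/testpool/bigfile of=/dev/null
  # block size matched to the dataset recordsize (128K by default)
  dd if=/testpool/bigfile of=/dev/null bs=128k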
On Thu, 20 Mar 2008, Mario Goebbels wrote:
Similarly, read block size does not make a
significant difference to the sequential read speed.
Last time I did a simple bench using dd, supplying the record size as
the block size, instead of leaving the blocksize parameter off, bumped
the mirror pool speed from 90MB/s to 130MB/s.
On Mar 20, 2008, at 11:07 AM, Bob Friesenhahn wrote:
On Thu, 20 Mar 2008, Mario Goebbels wrote:
Similarly, read block size does not make a
significant difference to the sequential read speed.
Last time I did a simple bench using dd, supplying the record size as
the block size, instead of leaving the blocksize parameter off, bumped
the mirror pool speed from 90MB/s to 130MB/s.
On Thu, 20 Mar 2008, Jonathan Edwards wrote:
in that case .. try fixing the ARC size .. the dynamic resizing on the ARC
can be less than optimal IMHO
Is a 16GB ARC size not considered to be enough? ;-)
I was only describing the behavior that I observed. It seems to me
that when large files
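For anyone who wants to try pinning the ARC as suggested, the usual approach
on Solaris/OpenSolaris of this vintage is a tunable in /etc/system (the 16GB
value below is only an example echoing the figure above, and a reboot is
needed for it to take effect):

  * fix the ZFS ARC at 16 GB (0x400000000 bytes)
  set zfs:zfs_arc_min = 0x400000000
  set zfs:zfs_arc_max = 0x400000000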
On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote:
On Thu, 20 Mar 2008, Jonathan Edwards wrote:
in that case .. try fixing the ARC size .. the dynamic resizing on the ARC
can be less than optimal IMHO
Is a 16GB ARC size not considered to be enough? ;-)
I was only describing the
Hi Bob ... as Richard has mentioned, allocation to vdevs
is done in fixed-sized chunks (Richard specs 1MB, though I
remember a 512KB number from the original spec; this
is not very important), and the allocation algorithm is
basically doing load balancing.
for your non-raid pool, this chunk
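One way to watch that per-vdev load balancing while a benchmark runs is to
look at the bandwidth each top-level vdev is seeing (the pool name is just a
placeholder):

  # per-vdev read/write bandwidth, sampled every 5 seconds
  zpool iostat -v testpool 5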
On Wed, 19 Mar 2008, Bill Moloney wrote:
When application IO sizes get small, the overhead in ZFS goes
up dramatically.
Thanks for the feedback. However, from what I have observed, it is
not the full story at all. On my own system, when a new file is
written, the write block size does not make a significant difference
to the write speed.
On my own system, when a new file is
written, the write block size does not make
a significant difference to the write speed
Yes, I've observed the same result ... when a new file is being written
sequentially, the file data and newly constructed meta-data can be
built in cache and written
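A quick way to reproduce that observation, assuming a pool named testpool
with room for a few-GB file, is to time the same sequential write with very
different application block sizes:

  # ~4 GB sequential write using an 8K application block size
  ptime dd if=/dev/zero of=/testpool/tmpfile bs=8k count=524288
  # same amount of data using a 128K application block size
  ptime dd if=/dev/zero of=/testpool/tmpfile bs=128k count=32768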
I do see that all the devices are quite evenly busy. There is no
doubt that the load balancing is quite good. The main question is
whether there is any actual striping going on (breaking the data into
smaller chunks), or if the algorithm is simply load balancing.
Striping trades IOPS for
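One way to answer the striping-vs-load-balancing question empirically, with
placeholder file and dataset names, is to dump a file's block pointers with
zdb and look at the vdev component of each DVA; a 128K block that shows a
single DVA lives entirely on one top-level vdev rather than being split
across several:

  # the inode number reported by ls -i is the ZFS object number
  ls -i /testpool/fs/bigfile
  zdb -ddddd testpool/fs <object-number>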
Bob Friesenhahn wrote:
On Sat, 15 Mar 2008, Richard Elling wrote:
My observation is that each metaslab is, by default, 1 MByte in size.
Each top-level vdev is allocated by metaslabs. ZFS tries to allocate a
top-level vdev's metaslab before moving onto another one. So you should see
On Sun, 16 Mar 2008, Richard Elling wrote:
But where is the bottleneck? iostat will show bottlenecks in the
physical disks and channels. vmstat or mpstat will show the
bottlenecks in the CPUs. Seeing whether the app is the bottleneck will
require some analysis of the app itself. Is it spending its
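Concretely, the commands being suggested here would be along these lines on
Solaris (the 5-second interval is arbitrary):

  # per-device utilization and service times; -z hides idle devices
  iostat -xnz 5
  # per-CPU statistics, to spot a single saturated core
  mpstat 5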
Bob Friesenhahn wrote:
Can someone please describe to me the actual underlying I/O operations
which occur when a 128K block of data is written to a storage pool
configured as shown below (with default ZFS block sizes)? I am
particularly interested in the degree of striping across mirrors
On Sat, 15 Mar 2008, Richard Elling wrote:
My observation is that each metaslab is, by default, 1 MByte in size. Each
top-level vdev is allocated by metaslabs. ZFS tries to allocate a top-level
vdev's metaslab before moving onto another one. So you should see eight
128kByte allocs per
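Presumably the "eight" comes straight from the arithmetic: a 1 MByte
allocation chunk divided by the default 128 kByte recordsize gives
1024 kByte / 128 kByte = 8 block allocations on one top-level vdev before
the allocator moves on to the next one.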