On 21/04/2010 18:37, Ben Rockwood wrote:
You've made an excellent case for benchmarking and where it's useful,
but what I'm asking for on this thread is for folks to share the
research they've done, with as much specificity as possible, for research
purposes. :)
However, you can also find
My use case for OpenSolaris is as a storage server for a VM environment (we
also use EqualLogic, and soon an EMC CX4-120). To that end, I run iometer
inside a VM, simulating my VM I/O activity, with some concessions made for
ease of benchmarking. We have about 110 VMs across eight ESX hosts. Here is
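For anyone without iometer handy, the kind of load being described — mixed small random reads and writes against a test file — can be sketched tool-agnostically. This is a minimal approximation, not the poster's actual iometer profile: the 4K block size, 70/30 read/write split, and file size are all assumptions, and real runs would use far larger files and fsync to defeat the client page cache.

```python
import os
import random
import tempfile
import time

BLOCK = 4096                   # 4K I/Os, a common VM-workload approximation (assumption)
FILE_SIZE = 8 * 1024 * 1024    # tiny file so the sketch runs quickly; real tests use much more
OPS = 2000
READ_RATIO = 0.7               # assumed 70% reads / 30% writes

# Create and pre-fill the test file.
path = os.path.join(tempfile.mkdtemp(), "testfile")
with open(path, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

buf = os.urandom(BLOCK)
start = time.time()
with open(path, "r+b") as f:
    for _ in range(OPS):
        # Seek to a random block-aligned offset, then read or write one block.
        f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
        if random.random() < READ_RATIO:
            f.read(BLOCK)
        else:
            f.write(buf)
            # A real benchmark would os.fsync() here to defeat caching.
elapsed = time.time() - start
print(f"{OPS / elapsed:.0f} ops/s (cache-hot, optimistic)")
```

As written this mostly exercises the client's page cache, so the numbers are wildly optimistic compared to real NFS/iSCSI traffic; it only illustrates the shape of the workload.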
On 21/04/2010 04:43, Ben Rockwood wrote:
I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.
To that end, I'm curious if folks
Ben,
never trust a benchmark you haven't faked yourself!
There are many benchmarks out there, but the question is how relevant they
are to your usage pattern. How important are single-stream benchmarks when
you are opening and closing thousands of files per second, or if you run a DB on
top of
On 4/21/10 2:15 AM, Robert Milkowski wrote:
I haven't heard from you in a while! Good to see you here again :)
Sorry for stating the obvious, but at the end of the day it depends on what
your goals are.
Are you interested in micro-benchmarks and comparison to other file
systems?
I think the most
I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.
To that end, I'm curious if folks wouldn't mind sharing their work on
the
Hi all,
among many other things I recently restarted benchmarking ZFS-over-NFSv3
performance between an X4500 (host) and Linux clients. I last ran iozone
quite a while ago and am still a bit at a loss understanding the
results. The automatic mode is pretty OK (and generates nice 3D plots
for the
On Thu, 8 Jan 2009, Carsten Aulbert wrote:
for the people higher up the ladder), but someone gave a hint to use
multiple threads for testing the ops/s, and here I'm a bit at a loss as to
how to interpret the results and whether the values are reasonable or not.
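The multi-threaded ops/s idea being hinted at — several workers hammering metadata operations in parallel, with the aggregate rate as the figure of merit — can be sketched in a few lines. This is a conceptual illustration, not iozone itself; the thread count, per-thread file count, and the choice of create+write+unlink as the measured "op" are all assumptions:

```python
import os
import tempfile
import threading
import time

THREADS = 4
FILES_PER_THREAD = 200   # each "op" here is one create+write+unlink (assumption)
root = tempfile.mkdtemp()

def worker(tid, results):
    # Create, write, and remove small files as fast as possible.
    for i in range(FILES_PER_THREAD):
        p = os.path.join(root, f"t{tid}-{i}")
        with open(p, "wb") as f:
            f.write(b"x" * 512)
        os.unlink(p)
    results[tid] = FILES_PER_THREAD

results = {}
start = time.time()
threads = [threading.Thread(target=worker, args=(t, results)) for t in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

total_ops = sum(results.values())
print(f"{THREADS} threads: {total_ops / elapsed:.0f} create+unlink ops/s aggregate")
```

The point is that with N threads you report the aggregate ops/s across all of them, not a single thread's rate; on an NFS mount the same script would mostly measure round-trip latency and server-side metadata throughput rather than local filesystem speed.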
I will admit that some research is
Hi Bob.
Bob Friesenhahn wrote:
Here is the current example - can anyone with deeper knowledge tell me
if these are reasonable values to start with?
Everything depends on what you are planning to do with your NFS access. For
example, the default blocksize for ZFS is 128K. My example tests
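The reason the 128K default matters for small-I/O workloads is read amplification: a small random read that misses cache still pulls in a full record. A quick arithmetic sketch (the 8K application read size is an assumed example, roughly what a database or NFS client might issue):

```python
recordsize = 128 * 1024   # ZFS default recordsize
io_size = 8 * 1024        # assumed application read size (example only)

# Each cache-missing 8K random read fetches a whole 128K record from disk,
# so the read amplification factor is:
amplification = recordsize // io_size
print(amplification)  # → 16
```

This is why tuning `recordsize` down to match the application's I/O size is a common recommendation for random-I/O workloads, while the 128K default suits streaming ones.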
On Thu, 8 Jan 2009, Carsten Aulbert wrote:
My experience with iozone is that it refuses to run on an NFS client of
a Solaris server using ZFS: it performs an initial test, then aborts,
claiming that the filesystem is not implemented correctly.
Commenting out a line of code in iozone
On Jul 8, 2007, at 8:05 PM, Peter C. Norton wrote:
List,
Sorry if this has been done before - I'm sure I'm not the only person
interested in this, but I haven't found anything with the searches
I've done.
I'm looking to compare nfs performance between nfs on zfs and a
lower-end netapp
List,
Sorry if this has been done before - I'm sure I'm not the only person
interested in this, but I haven't found anything with the searches
I've done.
I'm looking to compare nfs performance between nfs on zfs and a
lower-end netapp filer. It seems like the only way to do this is to
measure
Management here is worried about performance under ZFS because they had
a bad experience with Instant Image a number of years ago. When iiamd
was used, server performance was reduced to a crawl. Hence they want
proof in the form of benchmarking that zfs snapshots will not adversely
affect system
[EMAIL PROTECTED] wrote on 04/12/2007 04:47:06 PM:
On April 12, 2007 3:47:06 PM -0600 Bruce Shaw [EMAIL PROTECTED] wrote:
On 4/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote on 04/12/2007 04:47:06 PM:
I timed mkfile'ing a 1 GB file on UFS and copying it, then did the same
thing on each ZFS partition. Then I took snapshots, copied files, took more
snapshots, keeping timings all the way. I
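That timed write-then-copy methodology can be scripted so the runs are repeatable. A minimal sketch, with assumptions: it uses a 16 MB file instead of the original 1 GB so it finishes quickly, writes into a temp directory rather than specific UFS/ZFS mount points, and omits the snapshot steps (which would be `zfs snapshot` invocations between timed runs):

```python
import os
import shutil
import tempfile
import time

SIZE = 16 * 1024 * 1024   # sketch uses 16 MB; the original test used a 1 GB mkfile

d = tempfile.mkdtemp()    # in a real run, point this at the filesystem under test
src = os.path.join(d, "testfile")

# Time the initial write (rough equivalent of mkfile: zero-filled data, synced).
t0 = time.time()
with open(src, "wb") as f:
    f.write(b"\0" * SIZE)
    f.flush()
    os.fsync(f.fileno())
write_s = time.time() - t0

# Time a copy of the file on the same filesystem.
t0 = time.time()
shutil.copy(src, src + ".copy")
copy_s = time.time() - t0

print(f"write: {write_s:.3f}s  copy: {copy_s:.3f}s")
```

Repeating the same script on each filesystem, and again after each snapshot, gives directly comparable timings for the "do snapshots slow things down" question.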