I have a shiny new Ultra 40 running S10U3 with 2 x 250GB disks.
I want to make the best use of the available disk space and have some level of
redundancy without impacting performance too much.
What I am trying to figure out is: would it be better to have a simple mirror
of an identical 200GB
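A minimal sketch of the two layouts being weighed here, assuming the two disks
show up as c1t0d0 and c1t1d0 (hypothetical device names; check with 'format'
first):

  # Option 1: a simple mirror -- half the raw space, survives one disk failure.
  zpool create tank mirror c1t0d0 c1t1d0

  # Option 2: a plain two-disk stripe -- all of the space, but no redundancy.
  zpool create tank c1t0d0 c1t1d0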
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
just because that's the certified way that end up biting them in the
rear.
On 27/02/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS for RAID. Big companies do a lot of things
just because that's the
Hi,
I have a shiny new Ultra 40 running S10U3 with 2 x 250GB disks.
congratulations, this is a great machine!
I want to make the best use of the available disk space and have some level
of redundancy without impacting performance too much.
What I am trying to figure out is: would it be better
Thanks Constantin, that was just the information I needed!
Trev
Constantin Gonzalez wrote:
Hi,
I have a shiny new Ultra 40 running S10U3 with 2 x 250GB disks.
congratulations, this is a great machine!
I want to make the best use of the available disk space and have some level
of redundancy
Hi Jason,
We did the tests using S10U2, two FC cards, and MPxIO.
5 LUNs in a raidz group.
Each LUN was visible to both FC cards.
Gino
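For reference, a hedged sketch of what such a pool might look like; the device
names below are hypothetical stand-ins for the five MPxIO LUNs (with MPxIO each
LUN shows up as a single scsi_vhci device even though it is reachable through
both FC ports):

  # Build one raidz group out of the five LUNs, then check its layout.
  zpool create sanpool raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0
  zpool status sanpool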
Hi Gino,
Was there more than one LUN in the RAID-Z using the
port you disabled?
On Tue, Feb 27, 2007 at 08:29:04PM +1100, Shawn Walker wrote:
On 27/02/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper had a good point bringing up the data integrity
features when using ZFS
Jens Elkner writes:
Currently I'm trying to figure out the best ZFS layout for a Thumper w.r.t.
read AND write performance.
I did some simple mkfile 512G tests and found that on average
~500 MB/s seems to be the maximum one can reach (tried the initial default
setup, all 46
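A minimal sketch of that kind of sequential-write test, assuming a pool named
tank mounted at /tank (both names are assumptions):

  # Write a 512 GB file and time it.
  cd /tank/test
  time mkfile 512g bigfile

  # In another terminal, watch per-vdev throughput while the write runs.
  zpool iostat -v tank 5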
As has been pointed out, you want to mirror (or get more disks).
I would suggest you think carefully about the layout of the disks so that you
can take advantage of ZFS boot when it arrives. See
http://blogs.sun.com/chrisg/entry/new_server_arrived for a suggestion.
--chris
Hello przemolicc,
Tuesday, February 27, 2007, 11:28:59 AM, you wrote:
ppf On Tue, Feb 27, 2007 at 08:29:04PM +1100, Shawn Walker wrote:
On 27/02/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
Hi Przemol,
I think Casper
On 2/26/07, Jim Dunham [EMAIL PROTECTED] wrote:
The Availability Suite product set
(http://www.opensolaris.org/os/project/avs/) offers both snapshot and
data replication services, both of which are built on top of a
Solaris filter driver framework.
Is the Solaris filter driver framework
The best place to check whether someone has registered Solaris as running
on a given system is http://www.sun.com/bigadmin/hcl/
Bev.
Nicholas Lee wrote:
Has anyone run Solaris on one of these:
http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4
On Feb 27, 2007, at 2:35 AM, Roch - PAE wrote:
Jens Elkner writes:
Currently I'm trying to figure out the best ZFS layout for a
Thumper w.r.t. read AND write performance.
I did some simple mkfile 512G tests and found that on average
~500 MB/s seems to be the maximum one can reach
[huge forwards on how bad SANs really are for data integrity removed]
The answer is: insufficient data.
With modern journalling filesystems, I've never had to fsck anything or
run a filesystem repair. Ever. On any of my SAN stuff.
The sole place I've run into filesystem corruption in the
I am no scripting pro, but I would imagine it would be fairly simple to create
a script and batch it to make symlinks in all subdirectories.
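A rough sketch of such a script, assuming the goal is to link one shared file
into every subdirectory under a given top directory (all paths here are
hypothetical):

  #!/bin/sh
  # Create a symlink to /tank/shared/target in every subdirectory of $1
  # (defaults to the current directory). Assumes no newlines in dir names.
  TOP=${1:-.}
  find "$TOP" -type d | while read dir; do
      ln -s /tank/shared/target "$dir/target"
  done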
Given your question, are you about to come back with a case where you are
not seeing this?
As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O
rate drops off quickly when you add processes while reading the same blocks
from the same file at the same time. I don't
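A crude sketch of that kind of concurrent-read test; the file path and reader
count are assumptions:

  # Start 8 readers against the same file and watch aggregate throughput.
  for i in 1 2 3 4 5 6 7 8; do
      dd if=/tank/test/bigfile of=/dev/null bs=128k &
  done
  wait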
[EMAIL PROTECTED] wrote:
Is the true situation really so bad?
The failure mode is silent error. By definition, it is hard to
count silent errors. What ZFS does is improve the detection of
silent errors by a rather considerable margin. So, what we are
seeing is that suddenly people are
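This is also why the detection is easy to see with ZFS: a scrub reads and
verifies every block in the pool and counts what it catches (pool name below
is assumed):

  # Verify every block, then inspect the per-device CKSUM error counters.
  zpool scrub tank
  zpool status -v tank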
On Tue, 27 Feb 2007, Jeff Davis wrote:
Given your question, are you about to come back with a case where you are
not seeing this?
As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the I/O
rate drops off quickly when you add processes while reading the same blocks
from the
On February 26, 2007 9:05:21 AM -0800 Jeff Davis wrote:
But you have to be aware that logically sequential reads do not
necessarily translate into physically sequential reads with ZFS. ZFS
I understand that the COW design can fragment files. I'm still trying to
understand how that would affect a
It seems there isn't an algorithm in ZFS that detects sequential writes;
in a traditional fs such as UFS, one would trigger directio.
QFS can be set to automatically go to directio if sequential I/O is detected.
The txg trigger of 5 sec is inappropriate in this case (as stated by
bug 6415647)
even a
How do I remove myself from this [EMAIL PROTECTED]
Selim Daoud wrote on 02/27/07 10:56 AM:
It seems there isn't an algorithm in ZFS that detects sequential writes;
in a traditional fs such as UFS, one would trigger directio.
QFS can be set to automatically go to directio if sequential I/O is
All writes in ZFS are sequential.
On February 27, 2007 7:56:58 PM +0100 Selim Daoud [EMAIL PROTECTED]
wrote:
It seems there isn't an algorithm in ZFS that detects sequential writes;
in a traditional fs such as UFS, one would trigger directio.
QFS can be set to automatically go to directio if
It seems there isn't an algorithm in ZFS that detects sequential writes;
in a traditional fs such as UFS, one would trigger directio.
There is no directio for ZFS. Are you encountering a situation in which
you believe directio support would improve performance? If so, please
explain.
-j
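For contrast, on UFS direct I/O is a mount option (or per-file via
directio(3C)); a hedged example with a hypothetical device and mount point:

  # UFS: bypass the page cache for the whole filesystem.
  mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /export/data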
Indeed, a customer is doing 2 TB of daily backups on a ZFS filesystem.
The throughput doesn't go above 400 MB/s, while at raw speed
the throughput goes up to 800 MB/s; the gap is quite wide.
Also, sequential I/O is very common in real life.. unfortunately ZFS
is still not performing well here.
sd
Selim Daoud wrote:
Indeed, a customer is doing 2 TB of daily backups on a ZFS filesystem.
The throughput doesn't go above 400 MB/s, while at raw speed
the throughput goes up to 800 MB/s; the gap is quite wide.
OK, I'll bite.
What is the workload and what is the hardware (zpool) config?
A
My mistake, the system is not a Thumper but rather a 6140 disk array,
using 4 HBA ports on a T2000.
I tried several configs: RAID (ZFS), raidz, and mirror (ZFS),
using 8 disks.
What I observe is a non-continuous stream of data in 'zpool iostat',
so at some stage the I/O is interrupted,
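A sketch of watching for those gaps, assuming the pool is named sanpool:

  # Sample pool throughput once a second; stretches of near-zero write
  # bandwidth between bursts are the non-continuous stream described above.
  zpool iostat -v sanpool 1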
Hello, gurus,
I need your help. During a benchmark test of NFS-shared ZFS file systems, at
some moment the number of NFS threads jumps to the maximum value, 1027
(NFSD_SERVERS was set to 1024). The latency also grows and the number of IOPS
goes down.
I've collected the output of
echo
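The command above is cut off in the archive; the 'echo' was very likely being
piped into mdb. A hedged sketch of commands commonly used to capture this kind
of data (not necessarily what the poster ran):

  # Dump kernel thread stacks for later inspection.
  echo "::threadlist -v" | mdb -k > threads.out

  # Server-side NFS operation counts.
  nfsstat -s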
You don't honestly, really, reasonably, expect someone, anyone, to look
at the stack trace of a few hundred threads, and post something along the
lines of "This is what is wrong with your NFS server." Do you? Without any
other information at all?
We're here to help, but please reset your
You don't honestly, really, reasonably, expect someone, anyone, to look
at the stack
well of course he does :-)
and I looked at it .. all of it and I can tell exactly what the problem is
but I'm not gonna say because it's a trick question.
so there.
Dennis
Hello Erik,
Tuesday, February 27, 2007, 5:47:42 PM, you wrote:
ET [huge forwards on how bad SANs really are for data integrity removed]
ET The answer is: insufficient data.
ET With modern journalling filesystems, I've never had to fsck anything or
ET run a filesystem repair. Ever. On any of
Hi Przemol,
I think migration is a really important feature...think I said that...
;-) SAN/RAID is not awful...frankly there's not been a better solution
(outside of NetApp's WAFL) till ZFS. SAN/RAID just has its own
reliability issues you accept unless you don't have to... ZFS :-)
-J
On
Honestly, no, I don't consider UFS a modern file system. :-)
It's just not in the same class as JFS for AIX, XFS for IRIX, or even
VxFS.
-Erik
On Wed, 2007-02-28 at 00:40 +0100, Robert Milkowski wrote:
Hello Erik,
Tuesday, February 27, 2007, 5:47:42 PM, you wrote:
ET huge forwards on
With modern journalling filesystems, I've never had to fsck anything or
run a filesystem repair. Ever. On any of my SAN stuff.
You will.. even if the SAN is perfect, you will hit
bugs in the filesystem code.. from lots of rsync hard
links, or like this one from raidtools last week:
Feb 9
On Mon, Feb 26, 2007 at 06:36:47PM -0800, Richard Elling wrote:
Jens Elkner wrote:
Currently I'm trying to figure out the best ZFS layout for a Thumper
w.r.t. read AND write performance.
First things first. What is the expected workload? Random, sequential,
lots of little files, few
On Tue, Feb 27, 2007 at 11:35:37AM +0100, Roch - PAE wrote:
That might be a per-pool limitation due to
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
Not sure - did not use compression feature...
This performance issue was fixed in Nevada last week.
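Compression is a per-filesystem property, so it's easy to confirm whether it
was in play; a quick check (the dataset name is hypothetical):

  # Show whether compression was enabled on the filesystem under test.
  zfs get compression tank/fs

  # It can be toggled per dataset.
  zfs set compression=on tank/fs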