There won't be a performance hit beyond that of RAIDZ2 vs. RAIDZ.
But you'll wind up with a pool that fundamentally has single-disk-failure
tolerance, so I'm not sure it's worth it (at least until there's a mechanism
for replacing the remaining raidz1 vdevs with raidz2).
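For the record, the mixed layout under discussion would come from something
like this (pool and device names are made up):

# zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

The pool then stripes across the old raidz1 vdevs and the new raidz2 vdev,
so losing two disks in any one raidz1 vdev still takes the whole pool with it.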
I was under the impression that real-time processes essentially trump all
others, and I'm surprised by this behaviour; I had a dozen or so RT-processes
sat waiting for disc for about 20s.
Process priorities on Solaris affect CPU scheduling, but not (currently) I/O
scheduling nor memory usage.
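For what it's worth, the RT class is set with priocntl(1), and that only
influences the CPU dispatcher; a quick sketch (the pid is made up):

# priocntl -s -c RT -p 10 -i pid 1234
# ps -o pid,class,pri -p 1234

The second command should show the process in class RT, but its reads and
writes still queue like everyone else's.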
Dickon Hood wrote:
We've got an interesting application which involves receiving lots of
multicast groups, and writing the data to disc as a cache. We're
currently using ZFS for this cache, as we're potentially dealing with a
couple of TB at a time.
The threads writing to the filesystem
The man page gives this form:
zpool create [-fn] [-R root] [-m mountpoint] pool vdev ...
however, lower down, there is this command:
# zpool create mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Isn't the pool element missing in the command?
On Fri, Dec 07, 2007 at 12:38:11 +, Darren J Moffat wrote:
: Dickon Hood wrote:
: We've got an interesting application which involves receiving lots of
: multicast groups, and writing the data to disc as a cache. We're
: currently using ZFS for this cache, as we're potentially dealing with a
We've got an interesting application which involves receiving lots of
multicast groups, and writing the data to disc as a cache. We're
currently using ZFS for this cache, as we're potentially dealing with a
couple of TB at a time.
The threads writing to the filesystem have real-time SCHED_FIFO
Jonathan,
I think I remember seeing this error in an older Solaris release. The
current zpool.1m man page doesn't have this error unless I'm missing it:
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
In a current Solaris release, this command fails as expected:
# zpool create mirror
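For comparison, with the pool name supplied, the man page's example layout
would be created by something like this ('tank' is just a placeholder):

# zpool create tank mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0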
Dickon Hood wrote:
On Fri, Dec 07, 2007 at 12:38:11 +, Darren J Moffat wrote:
: Dickon Hood wrote:
: We've got an interesting application which involves receiving lots of
: multicast groups, and writing the data to disc as a cache. We're
: currently using ZFS for this cache, as we're
Hello Walter,
Thursday, December 6, 2007, 7:05:54 PM, you wrote:
Hi All,
We are currently having a hardware issue with our zfs file server, hence the file system is unusable.
We are planning to move it to a different system.
The setup on the file server when it was running was
bash-3.00#
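If the disks themselves can move to the new system, the usual route is an
export/import, roughly like this (the pool name is a guess):

# zpool export tank
(move the disks)
# zpool import tank

zpool import -f would be needed if the old host went down before a clean
export could be done.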
SunOS 5.10 Last change: 25 Apr 2006
Yes, I see that my other server is more up to date.
SunOS 5.10 Last change: 13 Feb 2007
This one was recently installed.
Is there a patch that was not included with 10_Recommended?
Hi,
I'm new to the list and fairly new to ZFS so hopefully this isn't a dumb
question, but...
I just inadvertently added s0 of a disk to a zpool, and then added the
entire device:
NAME        STATE     READ WRITE CKSUM
zfs-bo
Hello zfs-discuss,
http://bugs.opensolaris.org/view_bug.do?bug_id=6421210
1. App opens and creates an empty file /pool/fs1/file1
2. zfs snapshot pool/[EMAIL PROTECTED]
3. App writes something to file and still keeps it open
4. zfs rollback pool/[EMAIL PROTECTED]
Now what happens to fd App is
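To make the question concrete, the app side can be mimicked from a shell
(the snapshot name 'before' is invented; the dataset comes from the steps
above):

# exec 3>/pool/fs1/file1
# zfs snapshot pool/fs1@before
# echo data >&3
# zfs rollback pool/fs1@before

The open descriptor 3 is the interesting part: what it now refers to is
exactly the question being asked here.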
Hello Matt,
Monday, December 3, 2007, 8:36:28 PM, you wrote:
MB Hi,
MB We have a number of 4200's setup using a combination of an SVM 4
MB way mirror and a ZFS raidz stripe.
MB Each disk (of 4) is divided up like this
MB / 6GB UFS s0
MB Swap 8GB s1
MB /var 6GB UFS s3
MB Metadb 50MB UFS s4
MB
I believe the data dedup is also a feature of NTFS.
--
Darren J Moffat
Hello Jorgen,
Honestly - I don't think zfs is a good solution to your problem.
What you could try to do however when it comes to x4500 is:
1. Use SVM+UFS+user quotas
2. Use zfs and create several (like up to 20? so each stays below 1TB)
ufs file systems on zvols and then apply user quotas on
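For option 2, the mechanics would be roughly (names and sizes invented):

# zfs create -V 500g pool/homevol01
# newfs /dev/zvol/rdsk/pool/homevol01
# mount /dev/zvol/dsk/pool/homevol01 /export/home01

and from there the normal UFS quota tools (quotas file, edquota, quotaon)
apply per user.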
On Fri, 2007-12-07 at 08:02 -0800, jonathan soons wrote:
The man page gives this form:
zpool create [-fn] [-R root] [-m mountpoint] pool vdev ...
however, lower down, there is this command:
# zpool create mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
Isn't the pool element missing in the
I keep getting ETOOMUCHTROLL errors thrown while reading this list, is
there a list admin that can clean up the mess? I would hope that repeated
personal attacks could be considered grounds for removal/blocking.
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
mis _HOLD_ # cat /etc/release
Solaris 10 6/06 s10s_u2wos_09a SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 09 June 2006
mis _HOLD_ #
Hello,
I have been trying to chase down some ZFS performance issues, and I was hoping
someone with more ZFS experience might be able to comment.
When I run a zfs list command, it often takes several minutes to complete.
I see similar behavior when running most other ZFS commands, such as
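One blunt way to see where the time goes on a slow ZFS command (the output
path is arbitrary):

# truss -d -o /tmp/zfs-list.truss zfs list

truss -d timestamps every system call, which at least shows whether the
minutes are spent in ioctls down to the kernel or somewhere else.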
NOTHING anton listed takes the place of ZFS
That's not surprising, since I didn't list any file systems.
Here are a few file systems, and some of their distinguishing features. None of
them do exactly what ZFS does. ZFS doesn't do what they do, either.
QFS: Very, very fast. Supports
You have me at a disadvantage here, because I'm not
even a Unix (let alone Solaris and Linux) aficionado.
But don't Linux snapshots in conjunction with rsync
(leaving aside other possibilities that I've never
heard of) provide rather similar capabilities (e.g.,
incremental backup
There is a category of errors that are
not caused by firmware, or any type of software. The
hardware just doesn't write or read the correct bit value this time
around. Without a checksum there's no way for the firmware to know, and
next time it very well may write or read the correct bit
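This is the failure a scrub is designed to surface; in outline (the pool
name is made up):

# zpool scrub tank
# zpool status -v tank

A non-zero CKSUM count there is precisely a block whose contents no longer
match the checksum stored with its block pointer, something the drive
firmware on its own has no way to notice.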
You have me at a disadvantage here, because I'm not
even a Unix (let alone Solaris and Linux) aficionado.
But don't Linux snapshots in conjunction with rsync
(leaving aside other possibilities that I've never
heard of) provide rather similar capabilities (e.g.,
incremental backup or
I keep getting ETOOMUCHTROLL errors thrown while reading this list, is
there a list admin that can clean up the mess? I would hope that repeated
personal attacks could be considered grounds for removal/blocking.
Actually, most of your more unpleasant associates here seem to suffer
Once again, profuse apologies for having taken so long (well over 24 hours by
now - though I'm not sure it actually appeared in the forum until a few hours
after its timestamp) to respond to this.
can you guess? wrote:
Primarily its checksumming features, since other
open source solutions
NAME        STATE     READ WRITE CKSUM
fatty       DEGRADED     0     0 3.71K
  raidz2    DEGRADED     0     0 3.71K
    da0     ONLINE       0     0     0
    da1     ONLINE       0     0     0
    da2     ONLINE       0     0     0
    da3
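Once the underlying cause of those checksum errors is dealt with, something
along these lines re-verifies the pool and resets the counters:

# zpool scrub fatty
# zpool clear fatty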
If you ever progress beyond counting on your fingers
you might (with a lot of coaching from someone who
actually cares about your intellectual development)
be able to follow Anton's recent explanation of this
(given that the higher-level overviews which I've
provided apparently flew
On Wed, 5 Dec 2007, Brian Hechinger wrote:
[1] Finally, someone built a flash SSD that rocks (and they know how
fast it is judging by the pricetag):
http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/
http://www.anandtech.com/storage/showdoc.aspx?i=3167
Great, now if only Sun would
I am using ZFS on FreeBSD 7.0_beta3. This is the first time I have
used ZFS, and I have run into something that I am not sure is
normal, but am very concerned about.
SYSTEM INFO:
hp 320s (storage array)
12 disks (750GB each)
2GB RAM
1GB flash drive (running the OS)
When I take a disk
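Assuming the test is pulling or offlining a drive, the usual sequence looks
roughly like this (pool and device names are guesses):

# zpool offline tank da5
# zpool online tank da5
# zpool replace tank da5 da11

with zpool status showing the resilver afterwards.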
Allowing a filesystem to be rolled back without unmounting it sounds unwise,
given the potentially confusing effect on any application with a file currently
open there.
And if a user can't roll back their home directory filesystem, is that so bad?
Presumably they can still access snapshot
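Snapshot contents stay reachable read-only through the hidden .zfs
directory, e.g. (paths made up):

# ls /export/home/alice/.zfs/snapshot
# cp /export/home/alice/.zfs/snapshot/monday/report.txt /tmp/

so rollback isn't the only recovery path for a home directory.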
On Fri, 2007-12-07 at 08:24 -0800, jonathan soons wrote:
SunOS 5.10 Last change: 25 Apr 2006
Yes, I see that my other server is more up to date.
SunOS 5.10 Last change: 13 Feb 2007
This one was recently installed.
What OS rev? (more /etc/release)
I don't have
Please see below for an example.
-Wade
[EMAIL PROTECTED] wrote on 12/07/2007 03:07:29 PM:
I keep getting ETOOMUCHTROLL errors thrown while reading this list, is
there a list admin that can clean up the mess? I would hope that repeated
personal attacks could be considered grounds
Thanks Darren.
I found another link that goes into the 2003 implementation:
http://blogs.technet.com/filecab/archive/tags/Single+Instance+Store+_2800_SIS_2900_/default.aspx
It looks pretty nice, although I am not sure about the userland dedup
service design -- I would like to see it
[EMAIL PROTECTED] wrote:
Darren,
Do you happen to have any links for this? I have not seen anything
about NTFS and CAS/dedupe besides some of the third party apps/services
that just use NTFS as their backing store.
Single Instance Storage is what Microsoft uses to refer to this:
So name these mystery alternatives that come anywhere close to the protection,
If you ever progress beyond counting on your fingers you might (with a lot of
coaching from someone who actually cares about your intellectual development)
be able to follow Anton's recent explanation of this
Hello Paul,
Wednesday, December 5, 2007, 10:34:47 PM, you wrote:
PG Constantin Gonzalez wrote:
Hi Paul,
yes, ZFS is platform agnostic and I know it works in SANs.
For the USB stick case, you may have run into labeling issues. Maybe
Solaris SPARC did not recognize the x64 type label on the
--On 07 December 2007 11:18 -0600 Jason Morton
[EMAIL PROTECTED] wrote:
I am using ZFS on FreeBSD 7.0_beta3. This is the first time I have used
ZFS, and I have run into something that I am not sure is normal,
but am very concerned about.
SYSTEM INFO:
hp 320s (storage array)
12
Hello Marcin,
Saturday, December 1, 2007, 9:57:11 AM, you wrote:
MW I did some tests lately with zfs, env is:
MW 2 node veritas cluster 5.0 on solaris 8/07 with recommended
MW patches, 2 machines v440 v480, shared storage through switch on 6120
array.
MW 2 luns from array, on every zfs pool.
Darren,
Do you happen to have any links for this? I have not seen anything
about NTFS and CAS/dedupe besides some of the third party apps/services
that just use NTFS as their backing store.
Thanks!
Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385
[EMAIL PROTECTED] wrote on