Hi all,
A simple question: does ZFS need autodefrag like other file systems?
Regards,
Adam
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Malahat,
For the boot drive you'll also want to read the product notes document, since there are some special instructions that may need to be followed.
http://www.sun.com/products-n-solutions/hardware/docs/Servers/coolthreads/t2000/index.html
That document also covers some patches that are
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the list:
Features:
PSARC 2006/223 ZFS Hot Spares
6405966 Hot Spare support in ZFS
PSARC 2006/303 ZFS Clone Promotion
6276916 support for clone swap
PSARC
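For anyone who hasn't tried the two features above, here's a rough sketch of what they look like from the command line. (Device and dataset names are made up, and this assumes you're on a build that already has the features; it's illustrative, not a transcript.)

```shell
# Hot spares: list one or more spare devices at pool creation (or add them
# later with 'zpool add'); a spare is pulled in automatically on disk failure.
zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0

# Clone promotion: swap the parent/child relationship so the clone no longer
# depends on the origin snapshot, letting you destroy the original filesystem.
zfs snapshot tank/fs@snap
zfs clone tank/fs@snap tank/fs_clone
zfs promote tank/fs_clone
```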
Feeling a bit brave (I guess), I upgraded one of our systems to Solaris 10 u2 (from u1), and moved quite a bit of data to ZFS. This was 4 days ago. Found the system in a reboot loop this morning.
Eventually got the system to boot (by wiping
/etc/zfs/zpool.cache), but one of the pools causes the
Mark, James: thanks a lot. James, your link helped me to complete the process -- thanks again.
On Jul 31, 2006, at 7:22 AM, Mark Danico wrote:
Malahat,
For the boot drive you'll want to read the product notes document also since there are some special instructions that may need to be
George Wilson [EMAIL PROTECTED] writes:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the list:
That's great news, thanks.
Bug Fixes:
I notice this one
6405330 swap on zvol isn't added during boot
Rainer,
This will hopefully go into build 06 of s10u3. It's on my list... :-)
Thanks,
George
Rainer Orth wrote:
George Wilson [EMAIL PROTECTED] writes:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
I forgot to highlight that RAIDZ2 (a.k.a RAID-6) is also in this wad:
6417978 double parity RAID-Z a.k.a. RAID6
Thanks,
George
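For reference, a double-parity vdev is created the same way as raidz, just with the raidz2 keyword, and the pool then survives any two disks in the vdev failing. A sketch with hypothetical device names:

```shell
# Four-disk raidz2: usable capacity of two disks, parity overhead of two.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Verify the vdev layout and parity level.
zpool status tank
```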
George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the
Hello all,
After setting up a Solaris 10 machine with ZFS as the new NFS server,
I'm stumped by some serious performance problems. Here are the
(admittedly long) details (also noted at
http://www.netmeister.org/blog/):
The machine in question is a dual-amd64 box with 2GB RAM and two Broadcom
Interesting. When you do the import, try doing this:
zpool import -o ro yourpool
And see if that fares any better. If it works, could you send the
output of zpool status -v? Also, how big is the pool in question?
Either access to the machine, or a way to copy the crash dump would be
On Jul 30, 2006, at 23:44, Malahat Qureshi wrote:
Does anyone have a comparison between ZFS and VxFS? I'm working on a
presentation for my management on this ---
That can be a tough question to answer depending on what you're
looking for .. you could take the feature comparison approach
On Mon, Jul 31, 2006 at 02:17:00PM -0400, Jan Schaumann wrote:
Is there anybody here who's using ZFS on Apple XRaids and serving them
via NFS? Does anybody have any other ideas what I could do to solve
this? (I have, in the mean time, converted the XRaid to plain old UFS,
and performance is
Hello Richard,
Monday, July 31, 2006, 6:29:03 PM, you wrote:
RE Malahat Qureshi wrote:
Does anyone have a comparison between ZFS and VxFS? I'm working on
a presentation for my management on this ---
RE In management speak, this is easy. VxFS $0. ZFS priceless. :-)
Well, it's free for
However, note the limitations on usage: 4 'user-data file systems'...
B.
Robert Milkowski wrote:
Hello Richard,
Monday, July 31, 2006, 6:29:03 PM, you wrote:
RE Malahat Qureshi wrote:
Does anyone have a comparison between ZFS and VxFS? I'm working on
a presentation for
Bill Moore [EMAIL PROTECTED] wrote:
To test this theory, run this command on your NFS server (as root):
echo '::spa -v' | mdb -k | \
awk '/dev.dsk/{print $1"::print -a vdev_t vdev_nowritecache"}' | \
mdb -k | awk '{print $1"/W1"}' | mdb -kw
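In case the awk stages look opaque (the archive formatting makes the quoting easy to lose): the first awk builds an mdb `::print` command for each vdev address, and the final `"/W1"` stage writes 1 into `vdev_nowritecache` at that address. You can sanity-check the string-building locally without mdb or a live pool; the address and device path below are made up:

```shell
# The mdb expression must sit inside awk's quotes so awk concatenates it
# onto the vdev address in $1 (otherwise awk rejects the bare colons).
out=$(printf '857a0580 /dev/dsk/c0t0d0s0\n' \
      | awk '/dev.dsk/{print $1"::print -a vdev_t vdev_nowritecache"}')
echo "$out"   # 857a0580::print -a vdev_t vdev_nowritecache
```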
Thanks for the suggestion.
On Jul 31, Bill Moore wrote:
| Interesting. When you do the import, try doing this:
|
| zpool import -o ro yourpool
|
| And see if that fares any better. If it works, could you send the
| output of zpool status -v? Also, how big is the pool in question?
Same panic.
It's a 250GB drive, so
On Mon, Jul 31, 2006 at 03:59:23PM -0400, Jan Schaumann wrote:
Thanks for the suggestion. However, I'm not sure if the above pipeline
is correct:
2# !! | awk '/dev.dsk/{print $1"::print -a vdev_t vdev_nowritecache"}'
857a0580::print -a vdev_t vdev_nowritecache
3# !! | mdb -k
0
Hmm.
Bill Moore [EMAIL PROTECTED] wrote:
Hmm. It should have printed something like this:
857a0a60 vdev_nowritecache = 0 (B_FALSE)
I think there might be a problem with the CTF data (debugging info)
in U2. First, check /etc/release and make sure it says something like
Solaris
On Mon, 31 Jul 2006, Dale Ghent wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really screwed up
in NFS-land? This Solaris NFS replaces a Linux-based NFS server that the
Linux has had, uhhmmm (struggling to be nice), iffy NFS for ages.
--
Rich Teer, SCNA, SCSA,
On Jul 31, 2006, at 7:30 PM, Rich Teer wrote:
On Mon, 31 Jul 2006, Dale Ghent wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS server that the
Linux has had, uhhmmm (struggling to be nice), iffy
Rich Teer wrote:
On Mon, 31 Jul 2006, Dale Ghent wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really screwed up
in NFS-land? This Solaris NFS replaces a Linux-based NFS server that the
Linux has had, uhhmmm (struggling to be nice), iffy NFS for ages.
The
On 7/31/06, Bev Crair [EMAIL PROTECTED] wrote:
However, note the limitations on usage: 4 'user-data file systems'...
B.
And last I looked it was x86-only.
Mike
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Jul 31, 2006, at 8:07 PM, eric kustarz wrote:
The 2.6.x Linux client is much nicer... one thing fixed was the
client doing too many commits (which translates to fsyncs on the
server). I would still recommend the Solaris client but i'm sure
that's no surprise. But if you're stuck on
On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the list:
George,
this is great! any idea when these will be available as patches for
Let me try to address it myself first.
I think most file systems need this autodefrag function, as file data ends up distributed across non-consecutive blocks after the file system has been used for a long time.
Though ZFS is based on transactions and copy-on-write, some modifications to an
Grant,
Expect patches late September or so. Once available I'll post the patch
information.
Thanks,
George
grant beattie wrote:
On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become
Luke Lonergan wrote:
Torrey,
On 7/28/06 10:11 AM, Torrey McMahon [EMAIL PROTECTED] wrote:
That said a 3510 with a raid controller is going to blow the door, drive
brackets, and skin off a JBOD in raw performance.
I'm pretty certain this is not the case.
If you need sequential
Torrey,
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, July 31, 2006 8:32 PM
You might want to check the specs of the 3510. In some
configs you
only get 2 ports. However, in others you can get 8.
Really? 8 active Fibre Channel ports?
On 7/31/06, Dale Ghent [EMAIL PROTECTED] wrote:
On Jul 31, 2006, at 8:07 PM, eric kustarz wrote:
The 2.6.x Linux client is much nicer... one thing fixed was the
client doing too many commits (which translates to fsyncs on the
server). I would still recommend the Solaris client but i'm sure
On July 31, 2006 11:32:15 PM -0400 Torrey McMahon [EMAIL PROTECTED] wrote:
You're comparing apples to a crate of apples. A more useful comparison would be something along the lines of a single R0 LUN on a 3510 with controller to a single 3510-JBOD with ZFS across all the drives.
I think the