The whole RAID does not fail -- we are talking about corruption
here. If you lose some inodes, your whole partition is not gone.
My ZFS pool could not be salvaged -- poof, the whole thing was gone
(granted, it was a test pool and not a raidz or mirror yet). But still, for
what happened, I cannot believe
Hi Folks,
The man pages for zfs and zpool clearly say that it is not recommended
to use only some portion of a device for ZFS file system creation.
What exactly are the problems if we use only some portion of the disk
space for a ZFS FS?
I've got a Thumper doing nothing but serving NFS. It's using B43 with
zil_disabled. The system is being consumed in waves, but by what I don't know.
Notice vmstat:
3 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 0 0 926 91 703 0 25 75
21 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0
On 07 December, 2006 - dudekula mastan sent me these 2,9K bytes:
Hi Folks,
The man pages for zfs and zpool clearly say that it is not
recommended to use only some portion of a device for ZFS file
system creation.
What exactly are the problems if we use only some portion of
Hey Ben - I need more time to look at this and connect some dots,
but real quick
Some nfsstat data that we could use to potentially correlate to the local
server activity would be interesting. zfs_create() seems to be the
heavy hitter, but a periodic kernel profile (especially if we can
Hi
I am about to plan an upgrade of about 500 systems (sparc) to Solaris 10 and
would like to go with ZFS to manage the root disk. But what timeframe are we
looking at? And what should we take into account to be able to migrate to it
later on?
--
// Flemming Danielsen
Why does everyone strongly recommend using the whole disk (not part
of a disk) when creating zpools / ZFS file systems?
One thing is performance: ZFS can enable/disable the write cache in the disk
at will if it has full control over the entire disk.
ZFS will also flush the WC when
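As a rough illustration of the whole-disk versus slice distinction being discussed (device names below are made up, not from this thread):

```
# Whole disk: ZFS puts an EFI label on the device and, knowing no other
# consumer shares it, can safely turn on the disk's write cache.
zpool create tank c1t0d0

# Slice only: other slices may hold UFS, swap, etc., so ZFS leaves the
# write cache alone and you give up that performance benefit.
zpool create tank c1t0d0s7
```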
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices for / and /var, a swap slice, and slice 7 is zfs. There is one
zfs mirror pool called local on which there are a few file systems,
one of which is
I am about to plan an upgrade of about 500 systems (sparc) to Solaris 10 and
would like to go with ZFS to manage the root disk. But what timeframe are we
looking at?
I've heard update 5, so several months at least.
And what should we take into account to be able to migrate to it
later on?
Hi Ben,
Your sar output shows one core pegged pretty much constantly! The Solaris
Performance and Tools book says that SLP state value covers "the remainder
of important events, such as disk and network waits," along with other
kernel wait events; kernel locks or condition variables also
Ben,
The attached dscript might help determine the zfs_create issue.
It prints:
- a count of all functions called from zfs_create
- average wall clock time of the 30 most expensive functions
- average CPU time of the 30 most expensive functions
Note, please ignore warnings of the
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent [EMAIL PROTECTED] wrote:
Hey all, I run a netra X1 as the mysql db server for my small
personal web site. This X1 has two drives in it with SVM-mirrored UFS
slices for /
Luke Schwab wrote:
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have dual FC connections to storage using two ports on an Emulex HBA.
The Solaris ZFS admin guide says that a ZFS file system monitors disks by their path and their device ID. If a disk
Quick question about the interaction of ZFS filesystem compression and the
filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box
running RRD collection. These files seem to be quite compressible. A test
filesystem containing about 3,000 of these files shows a compressratio
Hi Luke,
That's terrific!
You know you might be able to tell ZFS which disks to look at. I'm not
sure. It would be interesting, if anyone with a Thumper could comment
on whether or not they see the import time issue. What are your load
times now with MPXIO?
Best Regards,
Jason
On 12/7/06,
You said you are running Solaris 10 FCS but zfs was not released until
Solaris 10 6/06 which is Solaris 10U2.
On 12/7/06, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Hi Dale,
Are you using MyISAM or InnoDB? Also, what's your zpool configuration?
Best Regards,
Jason
On 12/7/06, Dale Ghent
Andrew Miller wrote:
Quick question about the interaction of ZFS filesystem compression and the filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box running RRD collection. These files seem to be quite compressible. A test filesystem containing about 3,000 of these files
On 12/8/06, Mark Maybee [EMAIL PROTECTED] wrote:
Yup, your assumption is correct. We currently do compression below the
ARC. We have contemplated caching data in compressed form, but have not
really explored the idea fully yet.
Hmm... interesting idea.
That will incur CPU to do a decompress
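The trade-off being discussed can be sketched roughly in shell (illustrative only, not ZFS internals; paths and data are made up):

```shell
# If the cache held compressed blocks, every hit would pay a decompress;
# caching blocks uncompressed (as happens below the ARC today) makes a
# hit a plain memory copy, at the cost of a larger footprint for
# compressible data.
printf 'rrd sample %.0s' $(seq 4096) > /tmp/block   # compressible data
gzip -c /tmp/block > /tmp/block.gz                  # on-disk (compressed) form
gunzip -c /tmp/block.gz > /tmp/hit1                 # "compressed cache" hit: CPU spent
cp /tmp/block /tmp/hit2                             # "uncompressed cache" hit: plain copy
cmp /tmp/hit1 /tmp/hit2 && echo "same data either way"
```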
Looking at the source code overview, it looks like
the compression happens underneath the ARC layer,
so by that I am assuming the uncompressed blocks are
cached, but I wanted to ask to be sure.
Thanks!
-Andy
Yup, your assumption is correct. We currently do
compression below the
I'm still confused though; I believe that locking an adaptive mutex will spin
for a short
period and then context switch, so it shouldn't be burning CPU - at least
not 0.4s worth!
An adaptive mutex will spin as long as the thread which holds the mutex is on
CPU. If the lock is moderately
On Dec 7, 2006, at 1:46 PM, Jason J. W. Williams wrote:
Hi Dale,
Are you using MyISAM or InnoDB?
InnoDB.
Also, what's your zpool configuration?
A basic mirror:
[EMAIL PROTECTED]zpool status
pool: local
state: ONLINE
scrub: none requested
config:
NAME STATE READ
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't is because ZFS
sends a synchronize cache command to the disk, which is not handled in DMA mode
by the controller; and for this particular controller, switching
On 12/7/06, Andrew Miller [EMAIL PROTECTED] wrote:
Quick question about the interaction of ZFS filesystem compression and the
filesystem cache. We have an Opensolaris (actually Nexenta alpha-6) box
running RRD collection. These files seem to be quite compressible. A test
filesystem
That's gotta be what it is. All our MySQL IOPS issues went away
once we moved from RAID-Z to RAID-1.
-J
On 12/7/06, Anton B. Rang [EMAIL PROTECTED] wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
(For the curious, the reason ZFS triggers this when UFS doesn't
On Dec 7, 2006, at 5:22 PM, Nicholas Senedzuk wrote:
You said you are running Solaris 10 FCS but zfs was not released
until Solaris 10 6/06 which is Solaris 10U2.
Look at a Solaris 10 6/06 CD/DVD. Check out the
Solaris_10/UpgradePatches directory.
ah! well whaddya know...
Yes, apply
On Dec 7, 2006, at 6:14 PM, Anton B. Rang wrote:
This does look like the ATA driver bug rather than a ZFS issue per se.
Yes indeed. Well, that answers that. FWIW, I'm in hour 2 of a mysql
configure script run. Yow!
(For the curious, the reason ZFS triggers this when UFS doesn't is
because
Be careful here. If you are using files that have no data in them yet,
you will get much better compression than later in life. Judging by the
fact that you got only 12.5x, I suspect that your files are at least
partially populated. Expect the compression to get worse over time.
I do
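The point about preallocated files compressing better when young can be demonstrated outside ZFS with gzip (file names and sizes are illustrative):

```shell
# A freshly created, zero-filled file (like a new RRD) compresses to
# almost nothing; once half of it holds real (here: random) data, the
# achievable ratio collapses toward ~2x.
dd if=/dev/zero of=/tmp/fresh.rrd bs=1024 count=64 2>/dev/null
( dd if=/dev/zero bs=1024 count=32; dd if=/dev/urandom bs=1024 count=32 ) \
    > /tmp/populated.rrd 2>/dev/null
gzip -c /tmp/fresh.rrd     | wc -c   # tiny: all zeros
gzip -c /tmp/populated.rrd | wc -c   # roughly half the file size
```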
Hi Dale,
For what it's worth, the SX releases tend to be pretty stable. I'm not
sure if snv_52 has made it into an SX release yet. We ran for over 6 months on
SX 10/05 (snv_23) with no downtime.
Best Regards,
Jason
On 12/7/06, Dale Ghent [EMAIL PROTECTED] wrote:
On Dec 7, 2006, at 6:14 PM, Anton B.
Jason,
I am no longer looking at not using STMS multipathing, because without STMS you
lose the binding to the array and lose all transmission between the server
and array. The binding does come back after a few minutes, but this is not
acceptable in our environment.
Load times vary
Hi Luke,
I wonder if it is the HBA. We had issues with Solaris and LSI HBAs
back when we were using an Xserve RAID.
Haven't had any of the issues you're describing between our LSI array
and the Qlogic HBAs we're using now.
If you have another type of HBA I'd try it. MPXIO and ZFS haven't ever
Ben Rockwood wrote:
Eric Kustarz wrote:
Ben Rockwood wrote:
I've got a Thumper doing nothing but serving NFS. It's using B43 with
zil_disabled. The system is being consumed in waves, but by what I
don't know. Notice vmstat:
We made several performance fixes in the NFS/ZFS area in recent