Hello Erik,
Wednesday, February 28, 2007, 12:55:24 AM, you wrote:
ET Honestly, no, I don't consider UFS a modern file system. :-)
ET It's just not in the same class as JFS for AIX, XFS for IRIX, or even
ET VxFS.
The point is that the fsck was due to an array corrupting data.
IMHO it would hit JFS,
Feb 28 05:47:31 server141 genunix: [ID 403854 kern.notice] assertion failed: ss == NULL, file: ../../common/fs/zfs/space_map.c, line: 81
Feb 28 05:47:31 server141 unix: [ID 10 kern.notice]
Feb 28 05:47:31 server141 genunix: [ID 802836 kern.notice] fe8000d559f0 fb9acff3 ()
Feb 28
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6467988
NFSD threads are created on a demand spike (all of them
waiting on I/O) but then tend to stick around servicing
moderate loads.
-r
Leon Koll wrote:
Hello, gurus
I need your help. During the benchmark test
Hi All,
Today I created a zpool (named testpool) on c0t0d0.
#zpool create -m /masthan testpool c0t0d0
Then I wrote some data to the pool
#cp /usha/* /masthan/
Then I destroyed the zpool
#zpool destroy testpool
After that I created a UFS file system on
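The full sequence, with the truncated UFS step sketched as an assumption (the follow-up below mentions c0t0d0s2, so a slice of the same disk is a guess, not confirmed):
#zpool create -m /masthan testpool c0t0d0
#cp /usha/* /masthan/
#zpool destroy testpool
#newfs /dev/rdsk/c0t0d0s2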
Jeff Davis writes:
On February 26, 2007 9:05:21 AM -0800 Jeff Davis
But you have to be aware that logically sequential reads do not
necessarily translate into physically sequential reads with zfs. zfs
I understand that the COW design can fragment files. I'm still trying to
I'm an Oracle DBA and we are doing ASM on SUN with RAC. I am happy with ASM's
performance but am interested in clustering. I mentioned to Bob Netherton that
if Sun could make it a clustering file system, that would help them enable the grid
further. Oracle wrote OCFS2 and contributed it to the Linux kernel.
Also Oracle forums and SUN forums have the SAME exact look and feel... hmmm.
Even the options are exactly the same... weird.
Both are from a company called Jive Software that does enterprise forums.
zfs list -o name,type,used,available,referenced,quota,compressratio
NAME                    TYPE        USED   AVAIL  REFER  QUOTA  RATIO
pool/notes              filesystem  151G   149G   53.2G  300G   1.25x
pool/[EMAIL PROTECTED]  snapshot    48.1G  -      57.7G  -      1.27x
On Wed, Feb 28, 2007 at 07:23:44AM -0800, Thomas Roach wrote:
I'm an Oracle DBA and we are doing ASM on SUN with RAC. I am happy with ASM's
performance but am interested in clustering. I mentioned to Bob Netherton that
if Sun could make it a clustering file system, that would help them enable the
Gino,
We have seen this before, but only very rarely, and never got a good crash dump.
Coincidentally, we saw it
only yesterday on a server here, and are currently investigating it. Did you
also get a dump we
can access? That would help. If not, can you tell us what ZFS version you were running.
At the
Frank Hofmann writes:
On Tue, 27 Feb 2007, Jeff Davis wrote:
Given your question, are you about to come back with a
case where you are not seeing this?
As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the
I/O rate drops off quickly when you add
This would occur if /dev/rdsk/c0t0d0s2 is not the same as c0t0d0.
Double-check your partition table.
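For instance, one way to double-check, using the device names from the report (a hedged sketch, not verified output):
# prtvtoc /dev/rdsk/c0t0d0s2
If the whole disk was handed to zpool, ZFS will have written an EFI label, so an old SMI slice table (including s2) no longer describes what ZFS labeled.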
-- richard
dudekula mastan wrote:
Hi All,
Today I created a zpool (named testpool) on c0t0d0.
#zpool create -m /masthan testpool c0t0d0
Then I wrote some data to the pool
#cp /usha/*
Hi there,
We have been using ZFS for our backup storage since August last year.
Overall it's been very good, handling transient data issues and even
dropouts of connectivity to the iSCSI arrays we are using for storage.
However, I logged in this morning to discover that the ZFS volume could
not
On 2/27/07, Eric Haycraft [EMAIL PROTECTED] wrote:
I am no scripting pro, but I would imagine it would be fairly simple to create
a script and batch it to make symlinks in all subdirectories.
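Something like this minimal Bourne-shell sketch would do it (all paths are hypothetical examples):
#!/bin/sh
# Symlink every subdirectory of $SRC into $DST
SRC=/export/data
DST=/export/links
for d in "$SRC"/*; do
  [ -d "$d" ] && ln -s "$d" "$DST/`basename "$d"`"
done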
I've done something similar using NFS aggregation products. The real
problem is when you export,
ZFS Group,
My two cents..
Currently, in my experience, it is a waste of time to try to
guarantee exact location of disk blocks with any FS.
A simple exception is bad blocks: a neighboring block
will suffice.
Second, current disk
That's quite strange. What version of ZFS are you running? What does
'zdb -l /dev/dsk/c5t6006016031E0180032F8E9868E30DB11d0s0' show?
- eric
On Thu, Mar 01, 2007 at 09:31:05AM +1000, Stuart Low wrote:
Hi there,
We have been using ZFS for our backup storage since August last year.
Overall
On 28-Feb-07, at 6:43 PM, Erblichs wrote:
ZFS Group,
My two cents..
Currently, in my experience, it is a waste of time to try to
guarantee exact location of disk blocks with any FS.
? Sounds like you're confusing logical location with physical
location,
Heya,
Firstly thanks for your help.
That's quite strange.
You're telling me! :) I like ZFS, I really do, but this has dented my love
of it. :-/
What version of ZFS are you running?
[EMAIL PROTECTED] ~]$ pkginfo -l SUNWzfsu
PKGINST: SUNWzfsu
NAME: ZFS (Usr)
CATEGORY: system
The label looks sane. Can you try running:
# dtrace -n vdev_set_state:entry'[EMAIL PROTECTED], args[3], stack()] = count()}'
While executing 'zpool import' and send the output? Can you also send
'::dis' output (from 'mdb -k') of the function immediately above
vdev_set_state() in the above
Toby Thain,
No, physical location was for the exact location and
logical was for the rest of my info.
But, what I might not have made clear was the use of
fragments. There are two types of fragments. One which
is the partial use of a logical disk block and
Heya,
The label looks sane. Can you try running:
Not sure if I should be reassured by that but I'll hold my hopes
high. :)
# dtrace -n vdev_set_state:entry'[EMAIL PROTECTED], args[3], stack()] = count()}'
While executing 'zpool import' and send the output? Can you also send
'::dis'
Heya,
Sorry. Try 'echo vdev_load::dis | mdb -k'. This will give the
disassembly for vdev_load() in your current kernel (which will help us
pinpoint what vdev_load+0x69 is really doing).
Ahh, thanks for that.
Attached.
Stuart
---
[EMAIL PROTECTED] ~]$ echo vdev_load::dis | mdb -k
Further to this, I've considered doing the following (sketched below):
1) Doing a zpool destroy on the volume
2) Doing a zpool import -D on the volume
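A minimal sketch of that sequence (pool name hypothetical; plain 'zpool import -D' only lists destroyed pools, importing needs the name):
# zpool destroy backuppool
# zpool import -D
# zpool import -D backuppool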
It would appear to me that what has primarily occurred is that one or all of
the metadata stores ZFS has created have become corrupt? Will a zpool
import -D ignore