Volker A. Brandt v...@bb-c.de wrote:
Given the massive success of GNU based systems (Linux, OS X, *BSD)
Ouch! Neither OSX nor *BSD are GNU-based. They do ship with
GNU-related things but that's been a long and hard battle.
While you are right, this isn't going to help here.
Let me try to
I agree.
I see three possible types of Linux users that should be discussed.
1) The really dumb Linux users. These
I would like zpool iostat to take a -p option to output parsable statistics
with absolute counters/figures that for example could be fed to MRTG, RRD, et
al.
The zpool iostat [-v] POOL 60 [N] form is great for humans but not very
API-friendly; N=2 is a bit of overkill and unreliable. Is this info
Take the new disk out as well; a foreign/bad non-zero disk label may cause
trouble too.
I've experienced tool core dumps with a foreign disk (partition) label, which
might be the case if it is a recycled replacement disk. (In my case, fixed by
plugging the disk into a Linux desktop and blanking
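For anyone hitting the same thing, a minimal sketch of the blanking step
(device name hypothetical; this destroys everything on the disk):

# dd if=/dev/zero of=/dev/rdsk/c1t1d0p0 bs=1024k count=16

ZFS also keeps two of its four labels at the end of the device, so zeroing
the last few megabytes as well is the thorough version. On a Linux desktop
the same idea applies, just with a /dev/sdX device name.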
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
Perhaps I missed something, but what was your previous setup?
I.e. what did
On Fri, Jan 30, 2009 at 3:55 AM, Volker A. Brandt v...@bb-c.de wrote:
Hmmm... I don't think a Linux user can be really dumb. He/she would
not run Linux, but a certain other system. :-)
My mother just ordered a netbook that came with Ubuntu. She can barely
handle turning a system on. So
On Fri, Jan 30, 2009 at 8:24 AM, Greg Mason gma...@msu.edu wrote:
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
What
I should also add that this "creating many small files" issue is the
ONLY case where the Thors are performing poorly, which is why I'm
focusing on it.
Greg Mason wrote:
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
But there could be a _latency_ issue with the network.
[snip]
I've done my homework on this issue, I've ruled out the network as an
issue, as well as the NFS
Jim Mauro wrote:
This problem only manifests itself when dealing with many small files
over NFS. There is no throughput problem with the network.
But there could be a _latency_ issue with the network.
If there was a latency issue, we would see such a problem with our
existing file server as well, which we do not. We'd also have much
greater problems than just file server performance.
So, like I've said, we've ruled out the network as an issue.
I should also add that I've tested these
You have SSD's for the ZIL (logzilla) enabled, and ZIL IO
is what is hurting your performance...Hmmm
I'll ask the stupid question (just to get it out of the way) - is
it possible that the logzilla is undersized?
Did you gather data using Richard Elling's zilstat (included below)?
Thanks,
I'll give this script a shot a little bit later today.
For ZIL sizing, I'm using either 1 or 2 32G Intel X25-E SSDs in my
tests, which, according to what I've read, is 2-4 times larger than the
maximum that ZFS can possibly use. We've got 32G of system memory in
these Thors, and (if I'm not
So granted, tank is about 77% full (not to split hairs ;^),
but in this case, 23% is 640GB of free space. I mean, it's
not like 15 years ago when a file system was 2GB total,
and 23% free meant a measly 460MB to allocate from.
640GB is a lot of space, and our largest writes are less
than 5MB.
On Fri, 30 Jan 2009, Greg Mason wrote:
A Linux NFS file server, with a few terabytes of fibre-attached disk,
using XFS.
I'm trying to get these Thors to perform at least as well as the current
setup. A performance hit is very hard to explain to our users.
I have heard that Linux NFS service
Hello All,
I recently upgraded a test system that had a zpool (test_pool) from S10u5 to
S10U6-zfsroot by simply replacing the root disks. I exported the zpool before
I init 5'ed the system. On S10u5, the zpool vdevs were on c2t#d#. On
S10U6-zfsroot, the zpool vdevs were on c4t#d#. I ran
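For what it's worth, the usual sequence looks like this; zpool import rescans
/dev/dsk, so the move from c2t#d# to c4t#d# should be picked up automatically
(pool name from the post above):

# zpool export test_pool
(swap the root disks, boot the S10U6 image)
# zpool import
# zpool import test_pool

The bare zpool import just lists importable pools with their newly discovered
device paths before you commit to the import.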
So ... I hate USB as well. I guess I'll have to get a SAS or fibre
enclosure (even though I only need USB2 performance).
I hot-plugged a drive into my USB2 enclosure. I was adding and removing
drives earlier just fine, but this time all (both) disks in the enclosure
became unavailable. I
I made a mistake and created my zpool on a partition (c2t0d0p0). I can't
attach another identical whole drive (c3t0d0) to this pool; I get an
error that the new drive is too small (I'd have thought it would be
bigger!).
The mount point of the top dataset is 'none', and various datasets
in the
# rmformat
Looking for devices...
1. Logical Node: /dev/rdsk/c3t0d0p0
Physical Node: /p...@0,0/pci108e,c...@2,1/stor...@1/d...@0,0
Connected Device: Ext Hard Disk
Device Type: Removable
2. Logical Node: /dev/rdsk/c2t0d0p0
Physical Node:
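To see why the "too small" error fires, it may help to compare the raw device
sizes: a pool built on c2t0d0p0 uses the whole fdisk partition, while
attaching c3t0d0 as a whole disk writes an EFI label that reserves a little
space, so the usable size likely comes up a few MB short. One quick way to
compare (output format varies by release):

# iostat -En c2t0d0 c3t0d0 | grep -i size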
On Fri, 30 Jan 2009, Frank Cusack wrote:
So, is there a way to tell ZFS not to perform the mounts for data2? Or
another way I can replicate the pool on the same host, without exporting
the original pool?
There is not a way to do that currently, but I know it's coming down the
road.
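One partial workaround, if you can tolerate an export/import cycle of the
copy: bring it back in under an altroot, so its mountpoints can't collide
with the original pool's (the /a prefix is arbitrary):

# zpool export data2
# zpool import -R /a data2

With -R, every mountpoint in data2 is prefixed with /a for the life of that
import.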
On January 30, 2009 9:52:59 AM -0800 Frank Cusack fcus...@fcusack.com
wrote:
pool: data2
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using
fm == Fredrich Maney fredrichma...@gmail.com writes:
fm changing the default toolset (without notification)
I wouldn't wish for notification all the time, telling people they
cannot move unless they notify everyone, or you will get a bunch of
CYA disclaimers and still have no input. And
Frank Cusack wrote:
On January 30, 2009 9:52:59 AM -0800 Frank Cusack fcus...@fcusack.com
wrote:
pool: data2
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing
Jim Mauro wrote:
You have SSD's for the ZIL (logzilla) enabled, and ZIL IO
is what is hurting your performance...Hmmm
I'll ask the stupid question (just to get it out of the way) - is
it possible that the logzilla is undersized?
Did you gather data using Richard Elling's zilstat
Ruslan Valiyev wrote:
Hi all,
I have a couple of questions regarding a ZFS setup I have at home.
It's six SATA disks, set up as two groups of three disks, with raidz1
in each one.
Here are some graphs I've made: http://job.valiyev.net/gnuplot/zfs/
The client is a Mac, I'm using NFS with
Apparently if you don't order a J4200 with drives, you just get filler
sleds that won't accept a hard drive. (I had to look at a parts breakdown
on SunSolve to figure this out -- the docs should simply make this clear.)
It looks like the sled that will accept a drive is part #570-1182.
Anyone know
Frank,
Apparently if you don't order a J4200 with drives, you just get filler
sleds that won't accept a hard drive. (I had to look at a parts breakdown
on SunSolve to figure this out -- the docs should simply make this clear.)
It looks like the sled that will accept a drive is part #570-1182.
I'm running ClearCase on a Solaris 10u4 system, with views and VOBs.
I lock the VOBs, snapshot /var/adm/rational, vobs, and views, then unlock the VOBs.
We've been able to copy the snapshot to another server and restore.
I believe ClearCase is also supported by Rational on ZFS. We would not have
done it
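A rough sketch of that lock/snapshot/unlock cycle, with a hypothetical VOB
tag and dataset name (the real script would loop over all VOBs):

# cleartool lock vob:/vobs/myvob
# zfs snapshot -r tank/rational@backup
# cleartool unlock vob:/vobs/myvob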
Sounds like the device is not ignoring the cache flush requests sent
down by the ZFS ZIL commit.
If the SSD is able to drain its internal buffer to flash on a power
outage, then it needs to ignore the cache flush.
You can do this on a per-device basis. It's kludgy tuning, but hope the
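For reference, the blunt system-wide form of that tuning is the
zfs_nocacheflush tunable, set in /etc/system and picked up at the next
reboot. Unlike the per-device approach, it disables flushes for every device
in every pool, so it is only safe if all of them have nonvolatile (or
battery-backed) caches:

set zfs:zfs_nocacheflush = 1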
On January 30, 2009 1:31:46 PM -0800 Frank Cusack fcus...@fcusack.com
wrote:
Apparently if you don't order a J4200 with drives, you just get filler
sleds that won't accept a hard drive. (I had to look at a parts breakdown
on SunSolve to figure this out -- the docs should simply make this
Hello,
My apologies if this has been discussed before, or if this is
the wrong place to discuss Solaris 10 U6 issues.
I am investigating using ZFS as a possible replacement for SVM for
root disk mirroring. So far, I have installed the system with the
new ZFS option in the text installer of U6.
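For a simple two-disk x86 root mirror, the follow-up steps usually look
roughly like this (device names hypothetical; the second disk needs an SMI
label with an s0 at least as large as the original):

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
# zpool attach rpool c0t0d0s0 c1t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

The prtvtoc/fmthard pipe copies the partition table, and installgrub makes
the second disk bootable (on SPARC it would be installboot instead).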
Hi Pål,
CR 6420274 covers the -p part of your question. As far as kstats go, we only
have them in the ARC and the vdev read-ahead cache.
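Until that CR is addressed, the ARC kstats at least are already scriptable;
kstat -p emits parsable module:instance:name:statistic lines suitable for
feeding to MRTG/RRD:

# kstat -p zfs:0:arcstats
# kstat -p zfs:0:arcstats:hits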
Regards,
markm
On Fri, 30 Jan 2009, Ed Kaczmarek wrote:
And/or step me through the required mdb/kmdb/whatever-it's-called stack
trace dump command sequence after booting with -kd.
Dan Mick's got a good guide on his blog:
http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with
Regards,
markm
Maybe ZFS hasn't seen an error in a long enough time that it considers
the pool healthy? You could try clearing the pool and then observing.
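i.e., something along these lines (pool name hypothetical):

# zpool clear tank
# zpool scrub tank
# zpool status -xv tank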
On Wed, Jan 28, 2009 at 9:40 AM, Ben Miller mil...@eecis.udel.edu wrote:
# zpool status -xv
all pools are healthy
Ben
What does 'zpool status -xv'
zfs set only seems to accept an absolute path, which, even if you set it
to the name of the pool, isn't quite the same thing as the default.
See my other thread, "set mountpoint but don't mount?".
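For what it's worth, the way back to the default is not another zfs set,
it's zfs inherit, which clears the local value (dataset name hypothetical):

# zfs inherit mountpoint tank/home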
On January 30, 2009 10:03:42 AM -0800 Frank Cusack fcus...@fcusack.com
wrote:
# zpool create data3 c3t0d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t0d0s2 is part of active ZFS pool data. Please see zpool(1M).
/dev/dsk/c2t0d0s8 is part of active ZFS pool
Frank Cusack wrote:
Apparently if you don't order a J4200 with drives, you just get filler
sleds that won't accept a hard drive. (I had to look at a parts breakdown
on SunSolve to figure this out -- the docs should simply make this clear.)
It looks like the sled that will accept a drive is
For those who didn't follow down the thread this afternoon,
I have posted a tool called zilstat which will help you answer
the question of whether a separate log might help your
workload. Details start here:
http://richardelling.blogspot.com/2009/01/zilstat.html
Enjoy!
-- richard
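For the impatient, an invocation probably looks something like this (the
script name and the interval/count arguments are assumptions on my part;
check the blog entry for the real usage):

# ./zilstat.ksh 10 6

That would be six 10-second samples of ZIL write activity, which is the
quickest way to see whether a slog device would actually be busy.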
On January 30, 2009 4:51:36 PM -0800 Frank Cusack fcus...@fcusack.com
wrote:
Later on, when I am done with the new pool (it's temporary space), I will
destroy it and try to recreate it and see if I get the same error.
Yup. This time I couldn't attach.
# zpool status | grep c.t.d.
39 matches