$ cat /etc/release
Solaris Express Community Edition snv_114 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 04 May 2009
I recently replaced two drives in
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups). There is a delay of between 2 and 30
seconds but no correlation has been noticed with load on the server
and the slow
On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups). There is a delay of between 2 and 30 seconds
but no
An attempt to pkg image-update from snv111b to snv122 failed
miserably for a number of reasons which are probably out of
scope here. Suffice it to say that it ran out of disk space
after the third attempt.
Before starting, I was careful to make a baseline snapshot,
but rolling back to that
The limitations section of the Wikipedia article on ZFS currently includes
the statement:
You cannot mix vdev types in a zpool. For example, if you had a striped
ZFS pool consisting of disks on a SAN, you cannot add the local-disks as a
mirrored vdev.
As I understand it, this is simply
if you add a raidz group to a group of 3 mirrors, the entire pool slows
down to the speed of the raidz.
That's not true. Blocks are being randomly spread across all vdevs.
Unless all requests keep pulling blocks from the RAID-Z, the speed is a
mean of the performance of all vdevs.
-mg
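Mario's mean-of-vdevs point can be sketched with a quick back-of-envelope calculation. The IOPS figures below are illustrative assumptions, not measurements of any real pool:

```shell
# Hypothetical per-vdev random-read IOPS: three mirrors at ~300 each,
# one raidz at ~120. With blocks spread across all top-level vdevs,
# aggregate throughput approximates the sum of the vdevs, not the
# speed of the slowest vdev alone.
awk 'BEGIN {
    mirror_iops = 300; n_mirrors = 3; raidz_iops = 120
    total = mirror_iops * n_mirrors + raidz_iops
    printf "aggregate IOPS: %d\n", total
    # average service rate per request, across the 4 vdevs
    printf "mean per-vdev IOPS: %d\n", total / (n_mirrors + 1)
}'
```

Individual requests that happen to land on the raidz still see raidz latency; it is only the aggregate that averages out.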
yes, but it stripes across the vdevs, and when it needs to read data back,
it will absolutely be limited.
On Sun, Sep 6, 2009 at 3:14 PM, Mario Goebbels m...@tomservo.cc wrote:
if you add a raidz group to a group of 3 mirrors, the entire pool slows
down to the speed of the raidz.
On Sep 6, 2009, at 3:32 PM, Thomas Burgess wonsl...@gmail.com wrote:
yes, but it stripes across the vdevs, and when it needs to read data
back, it will absolutely be limited.
During reads the raidz will be the fastest vdev, during writes it
should have about the same write performance
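For reference, mixing vdev types as discussed above is possible but zpool pushes back on it. The pool and device names below are made up for illustration:

```shell
# A pool of three mirrors (hypothetical devices).
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
                  mirror c1t4d0 c1t5d0

# Adding a raidz top-level vdev to a pool of mirrors: zpool warns
# about the mismatched replication level and requires -f to proceed.
zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
```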
On Sep 6, 2009, at 7:53 AM, Ross Walker wrote:
On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups). There is
Correction
On 09/06/09 12:00 PM, I wrote:
(there are no hidden directories in / ),
Well, there is .zfs, of course, but it is normally hidden,
apparently by default on SPARC rpool, but not on X86 rpool
or non-rpool pools on either. Hmmm. I don't recollect setting
the snapdir property on any
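The visibility being described is controlled by the snapdir dataset property; a quick sketch of checking it (pool names are placeholders):

```shell
# 'hidden' (the default) or 'visible'
zfs get snapdir rpool

# Make .zfs show up in directory listings for this dataset.
zfs set snapdir=visible rpool
```

Even with snapdir=hidden, the .zfs directory is still reachable by explicit path; the property only controls whether it appears in normal directory listings.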
On Sep 6, 2009, at 11:19 AM, Al Lang wrote:
The limitations section of the Wikipedia article on ZFS currently
includes the statement:
You cannot mix vdev types in a zpool. For example, if you had a
striped ZFS pool consisting of disks on a SAN, you cannot add the
local-disks as a
On Sun, 6 Sep 2009, Thomas Burgess wrote:
if you add a raidz group to a group of 3 mirrors, the entire pool slows down
to the speed of the raidz.
while you technically CAN do it, it's a horrible idea.
I don't think it is necessarily as horrid as you say. Zfs does
distribute writes to
i don't think it's the same at all.
I think it's about the same as filling a radiator in a car with oatmeal to
make it stop leaking.
On Sun, Sep 6, 2009 at 6:26 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Sun, 6 Sep 2009, Thomas Burgess wrote:
if you add a raidz group to a
On 07/09/2009, at 6:24 AM, Richard Elling wrote:
On Sep 6, 2009, at 7:53 AM, Ross Walker wrote:
On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no
Sorry for my earlier post I responded prematurely.
On Sep 6, 2009, at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups).
On Sep 6, 2009, at 5:06 PM, James Lever wrote:
On 07/09/2009, at 6:24 AM, Richard Elling wrote:
On Sep 6, 2009, at 7:53 AM, Ross Walker wrote:
On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system
Near success! After 5 (yes, five) attempts, I managed to do
an update of snv111b to snv122, until it ran out of space
again. Looks like I need to get a bigger disk...
Sorry about the monolog, but there might be someone on this
list trying to use pkg on SPARC who, like me, has been
unable to
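One common way to reclaim rpool space before retrying pkg image-update is to destroy the stale boot environments left behind by the failed attempts. A sketch, with placeholder BE names:

```shell
# Show boot environments and the space each one holds.
beadm list

# Destroy a failed/unneeded boot environment (prompts for confirmation).
beadm destroy opensolaris-1

# Confirm the free space came back.
zpool list rpool
```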
On 07/09/2009, at 11:08 AM, Richard Elling wrote:
OK, just so I am clear: when you say local automount, you are
on the server and using the loopback -- no NFS or network involved?
Correct. And the behaviour has been seen locally as well as remotely.
You are looking for I/O that takes
On 07/09/2009, at 10:46 AM, Ross Walker wrote:
zpool is RAIDZ2 comprised of 10 * 15kRPM SAS drives behind an LSI
1078 w/ 512MB BBWC exposed as RAID0 LUNs (Dell MD1000 behind PERC 6/E)
with 2x SSDs each partitioned as 10GB slog and 36GB remainder as
l2arc behind another LSI 1078 w/ 256MB