Does anyone know where I can still find the SUNWsmbs and SUNWsmbskr
packages for the Sparc version of OpenSolaris? I wanted to experiment with
ZFS/CIFS on my Sparc server but the ZFS share command fails with:
zfs set sharesmb=on tank1/windows
cannot share 'tank1/windows': smb
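That failure usually means the in-kernel SMB service is not installed or
not running, which is exactly what those two packages provide. A rough
sketch of the usual sequence, assuming the SVR4 packages sit in the
current directory (package and service names as on stock OpenSolaris):

pkgadd -d . SUNWsmbs SUNWsmbskr    # install the SMB server packages
                                   # (SUNWsmbskr is a kernel module, so a
                                   # reboot may be needed before the next step)
svcadm enable -r smb/server        # enable the service and its dependencies
zfs set sharesmb=on tank1/windows  # then retry the share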
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek p...@freebsd.org wrote:
BTW. Can you, Cindy, or someone else reveal why one cannot boot from
RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
would have to be licensed under GPL as the rest of the boot code?
I'm
On Sun, Dec 18, 2011 at 07:24:27PM +0700, Fajar A. Nugraha wrote:
On Sun, Dec 18, 2011 at 6:52 PM, Pawel Jakub Dawidek p...@freebsd.org wrote:
BTW. Can you, Cindy, or someone else reveal why one cannot boot from
RAIDZ on Solaris? Is this because Solaris is using GRUB and RAIDZ code
would
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based l2arc and ZIL. Performance is
excellent on all pools but one, and I am struggling greatly to figure
out what is
What is the output of zpool status for pool1 and pool2?
It seems that you have a mixed configuration in pool3, with both plain disks and mirrors
On 12/18/2011 9:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks
Hi,
On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
laot...@gmail.com wrote:
What is the output of zpool status for pool1 and pool2?
It seems that you have a mixed configuration in pool3, with both plain disks and mirrors
The other two pools show very similar outputs:
root@stor:~# zpool status
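For reference, a pool that mixes a bare disk with a mirror vdev (device
names hypothetical) shows up in zpool status roughly like this:

  pool: pool3
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        pool3       ONLINE       0     0     0
          c4t0d0    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
errors: No known data errors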
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
The affected pool does indeed have a mix of straight disks and
mirrored disks (due to running out of vdevs on the controller),
however it has to be added that the performance of the affected pool
was
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
The pool stands at 86%, but that has not changed in any way that
corresponds chronologically with the sudden drop
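Both conditions are quick to check from the shell; the pool name below is
taken from earlier in the thread, adjust as needed:

zpool list -o name,size,alloc,free,cap pool3   # current utilization
zpool get dedupratio pool3                     # 1.00x means no deduped data
zpool history pool3 | grep -i dedup            # was dedup ever turned on?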
2011-12-17 21:59, Steve Gonczi wrote:
Coincidentally, I am pretty sure entry 0 of these meta dnode objects is
never used, so the block with the checksum error never comes into play.
Steve
I wonder if this is indeed true - it seems so, because the pool
seems to work regardless of the seemingly
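One way to sanity-check that claim is to see which object actually carries
the persistent error and then dump it; pool and dataset names here are made
up for illustration:

zpool status -v tank   # lists datasets/objects with persistent errors
zdb -dddd tank/fs 0    # dump dnode 0 of that dataset for inspection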
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
The pool stands at 86%,
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha w...@fajar.net wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it seems to be inaccessible
now):
Keep pool space under 80% utilization to maintain pool performance.
Do note, that though Frank is correct, you have to be a little careful
around what might happen should you drop your original disk, and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached
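A sketch of the grow-by-mirroring sequence being discussed, with invented
device names, and keeping Nathan's caveat in mind about which half you end
up relying on:

zpool attach tank c0t0d0 c0t1d0   # add the larger disk as a mirror half
zpool status tank                 # wait until the resilver completes
zpool set autoexpand=on tank      # let the pool use the extra space
zpool detach tank c0t0d0          # only then drop the small original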
I know some others may already have pointed this out - but I can't see
it and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least for me - the rate at which _I_ seem to lose disks, it would be
worth
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
I know some others may already have pointed this out - but I can't see it
and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least
Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
side of failure, doing heavy retries that are dragging the pool down.
Craig
--
Craig Morgan
On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan
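For reference, the two commands Craig suggests, plus a time filter (flag
syntax per fmdump(1M); the relative-time form is an assumption worth
checking on your build):

fmdump -e          # one line per event in the FMA error log
fmdump -eV         # the same events in full detail (device paths, counters)
fmdump -e -t 24h   # restrict the output to the last 24 hours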
I'd look at iostat -En. It will give you a good breakdown of disks that
have seen errors. I've also spotted failing disks just by watching an
iostat -nxz and looking for the one that's spending more time %busy than the rest
of them, or exhibiting longer than normal service times.
-Matt
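Matt's commands, with a polling interval added so the busy column is
meaningful (flags per Solaris iostat(1M)):

iostat -En        # per-device soft/hard/transport error counters
iostat -nxz 5     # extended stats every 5 seconds, hiding idle devices;
                  # watch %b and asvc_t for the outlier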
Hi Craig,
On Sun, Dec 18, 2011 at 22:33, Craig Morgan crgm...@gmail.com wrote:
Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
side of failure, doing heavy retries that are dragging the pool down.
Thanks for the hint - didn't know about fmdump. Nothing in the log
Hi,
On Sun, Dec 18, 2011 at 22:38, Matt Breitbach matth...@flash.shanje.com wrote:
I'd look at iostat -En. It will give you a good breakdown of disks that
have seen errors. I've also spotted failing disks just by watching an
iostat -nxz and looking for the one that's spending more time %busy than
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out -
all my disks are mirrored, however some of them