what is the RAM size?
are there many snapshots - created and then deleted?
did you run a scrub?
Sent from my iPad
On Dec 18, 2011, at 10:46, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
laot...@gmail.com wrote:
what is the output of zpool status for pool1 and pool2?
Hi,
2011/12/19 Hung-Sheng Tsao (laoTsao) laot...@gmail.com:
what is the RAM size?
32 GB
are there many snapshots - created and then deleted?
Currently, there are 36 snapshots on the pool - it is part of a fairly
normal backup regime of snapshots every 5 min, hour, day, week and
month.
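For reference, a quick way to see how many snapshots a pool carries and how
much space they pin - the pool name "pool3" below is only a placeholder for
the affected pool:

  # count snapshots under the suspect pool
  zfs list -H -r -t snapshot -o name pool3 | wc -l
  # list them oldest first, with the space each one holds
  zfs list -r -t snapshot -o name,creation,used -s creation pool3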
did you run a scrub?
2011-12-19 2:00, Fajar A. Nugraha wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it seems to be inaccessible
now):
Keep pool space under 80% utilization to maintain pool performance.
Currently, pool performance can degrade when a pool is very full and file
systems are updated frequently, such as on a busy mail server.
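A quick way to check where each pool stands, and which datasets (snapshots
included) are holding the space - "pool3" is again a placeholder:

  # the CAP column shows per-pool utilization
  zpool list
  # per-dataset breakdown, including space held only by snapshots
  zfs list -r -o name,used,usedbysnapshots,referenced pool3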
On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
2011/12/19 Hung-Sheng Tsao (laoTsao) laot...@gmail.com:
did you run a scrub?
Yes, as part of the previous drive failure. Nothing reported there.
Now, interestingly - I deleted two of the oldest snapshots
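For anyone wanting to re-check, a scrub can be run on a live pool and the
results (plus per-device error counters) show up in zpool status - pool name
is a placeholder:

  # start a scrub; the pool stays online while it runs
  zpool scrub pool3
  # progress, results and per-device READ/WRITE/CKSUM counters
  zpool status -v pool3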
2011-12-19 2:53, Jan-Aage Frydenbø-Bruvoll wrote:
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out
not sure OI supports shadow migration
or you may be able to send the zpool to another server, then send it back, to do a defrag
regards
Sent from my iPad
On Dec 19, 2011, at 8:15, Gary Mills gary_mi...@fastmail.fm wrote:
On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
2011/12/19
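Roughly what that send/receive round trip could look like - a sketch only,
with made-up host, pool and snapshot names, and assuming enough space plus
ssh access on the second box:

  # on the original server: snapshot everything and ship it across
  zfs snapshot -r pool3@migrate
  zfs send -R pool3@migrate | ssh otherhost zfs receive -F backup/pool3
  # later, after destroying and recreating pool3 locally,
  # send the data back the same way
  ssh otherhost zfs send -R backup/pool3@migrate | zfs receive -F pool3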
On 12/18/2011 4:23 PM, Jan-Aage Frydenbø-Bruvoll wrote:
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
I know some others may already have pointed this out - but I can't see it
and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based l2arc and ZIL. Performance is
excellent on all pools but one, and I am struggling greatly to figure
out what is
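One way to compare the healthy pools against the slow one, and to see whether
its data vdevs or its log/cache devices are the ones working hardest, is
per-vdev iostat; a sketch:

  # per-vdev bandwidth and IOPS for every pool, refreshed every 5 seconds
  zpool iostat -v 5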
what is the output of zpool status for pool1 and pool2?
it seems that you have a mixed configuration in pool3, with bare disks and mirrors
On 12/18/2011 9:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Dear List,
I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks
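To illustrate what a "mixed" layout means (purely hypothetical device names,
not the poster's actual output): data is striped across all top-level vdevs,
so a bare disk sitting next to a mirror has no redundancy and takes the whole
pool with it if it dies:

          NAME        STATE     READ WRITE CKSUM
          pool3       ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              c7t2d0  ONLINE       0     0     0
              c7t3d0  ONLINE       0     0     0
            c7t4d0    ONLINE       0     0     0   <- single disk, no redundancy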
Hi,
On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
laot...@gmail.com wrote:
what is the output of zpool status for pool1 and pool2?
it seems that you have a mixed configuration in pool3, with bare disks and mirrors
The other two pools show very similar outputs:
root@stor:~# zpool status
On Sun, Dec 18, 2011 at 10:46 PM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
The affected pool does indeed have a mix of straight disks and
mirrored disks (due to running out of vdevs on the controller),
however it has to be added that the performance of the affected pool
was
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
The pool stands at 86%, but that has not changed in any way that
corresponds chronologically with the sudden drop
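For the dedup half of that question, two quick checks (pool name is a
placeholder):

  # current dedup setting on every dataset in the pool
  zfs get -r dedup pool3
  # whether dedup was ever switched on at any point in the pool's history
  zpool history pool3 | grep -i dedup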
On Mon, Dec 19, 2011 at 12:40 AM, Jan-Aage Frydenbø-Bruvoll
j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 16:41, Fajar A. Nugraha w...@fajar.net wrote:
Is the pool over 80% full? Do you have dedup enabled (even if it was
turned off later, see zpool history)?
The pool stands at 86%, but that has not changed in any way that
corresponds chronologically with the sudden drop
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha w...@fajar.net wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it seems to be inaccessible
now):
Keep pool space under 80% utilization to maintain pool performance.
I know some others may already have pointed this out - but I can't see
it and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least for me - the rate at which _I_ seem to lose disks, it would be
worth
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
I know some others may already have pointed this out - but I can't see it
and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least
Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
side of failure doing heavy retries that is dragging the pool down.
Craig
--
Craig Morgan
On 18 Dec 2011, at 16:23, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:
Hi,
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
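For reference, roughly what that looks like, plus a way to catch a single
slow drive in the act - all standard commands, nothing pool-specific:

  # one-line summary of logged error telemetry (fault management)
  fmdump -e
  # full detail per event
  fmdump -eV | less
  # live per-device service times; one disk with asvc_t and %b far above
  # its peers is a classic sign of a drive doing heavy retries
  iostat -xn 5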
Hi Craig,
On Sun, Dec 18, 2011 at 22:33, Craig Morgan crgm...@gmail.com wrote:
Try fmdump -e and then fmdump -eV; it could be a pathological disk just this
side of failure doing heavy retries that is dragging the pool down.
Thanks for the hint - didn't know about fmdump. Nothing in the log
On Sun, Dec 18, 2011 at 22:14, Nathan Kroenert nat...@tuneunix.com wrote:
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
Ah - didn't pick up on that one until someone here pointed it out -
all my disks are mirrored, however some of them
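If it does turn out that one or two top-level vdevs are bare disks rather
than mirrors, they can be converted in place by attaching a second device -
pool and device names below are placeholders:

  # turn the single-disk vdev c7t4d0 into a two-way mirror
  zpool attach pool3 c7t4d0 c7t9d0
  # watch the resilver run to completion
  zpool status pool3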