Do you have dedup on? Remove large files, or zfs destroy a snapshot or a zvol,
and you'll see hangs like the ones you're describing.
Turning off dedup is the best option.
If you want dedup, get more RAM, and more, and more, and... add an SSD cache
device... then it usually works OK.
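To see how big the problem is before deciding, something like this shows
whether dedup is on and how large the dedup table (DDT) has grown (pool name
'tank' is just a placeholder):

  zfs get dedup tank       # is dedup enabled on the dataset?
  zdb -DD tank             # DDT histogram; entries x in-core size ~= RAM needed
  zfs set dedup=off tank   # only affects new writes; existing blocks stay
                           # deduped until rewritten or destroyed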
Right now I'm fighting an
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention
some backported patches in this release.
Aside from their management features/UI, what is the core OS difference if we
move to Nexenta from OpenSolaris b134?
These dedup bugs are my main frustration - if a staff
As most others have, I've been having issues with dedup.
Here's my situation: a 4TB pool for daily backups of SQL Server, with dedup
enabled, so a typical directory has 100+ files that are mostly identical (in
some, all are identical).
If I do rm *, OpenSolaris is dead, zfs hung, etc. Sometimes it
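As a crude stopgap (a sketch, not a fix), deleting in small batches with
pauses has a better chance of keeping the box responsive than one big rm:

  # remove one file at a time, giving DDT updates time to flush between txgs
  for f in *; do rm -- "$f"; sleep 5; done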
http://www.bitshop.com/Blogs/tabid/95/EntryId/78/Bug-in-OpenSolaris-SMB-Server-causes-slow-disk-i-o-always.aspx
This explains just how major a bug this issue is, IMHO. Judging from the
symptoms, I now think the SMB traffic from Windows 2003 is doing something odd
in the kernel - see the tests for
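When the slowdown hits, a quick kernel on-CPU profile shows where the time is
going; this DTrace one-liner (30-second sample, top 20 kernel functions) is
just a sketch:

  dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); }
      tick-30s { trunc(@, 20); exit(0); }'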
I'd agree with export/import *IF* the drive were known good; however, I have a
drive that was pulled from the pool a long time ago (to flash the drive's
firmware). The data on it is useless, so exporting/importing would cause
either a) errors, or b) a scrub to need to be run, which should overwrite it.
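Rather than export/import, my inclination would be to reintroduce the stale
disk explicitly and let ZFS resilver or rewrite it. A sketch against the pool
below (the device name c7t2d0 is hypothetical):

  zpool offline tankmir1 c7t2d0   # take the stale disk out of service
  zpool online tankmir1 c7t2d0    # bring it back; ZFS resilvers the stale data
  zpool replace tankmir1 c7t2d0   # or rewrite the disk in place entirely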
We're having to split data across multiple pools if we enable dedup, 1+ TB
each (one 6x750GB pool is particularly bad).
The timeouts cause COMSTAR/iSCSI to fail: Windows clients are dropping the
persistent targets due to timeouts (15 seconds, it seems), and this is causing
bigger problems.
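On the Windows side, the initiator's tolerance for a stalled target is tunable
in the registry, if I remember right. The instance key (0000 here) varies per
machine and the value name is from memory, so verify before relying on it:

  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 120 /f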
I should note that zfs set primarycache=metadata tank1 took a few minutes to
complete. You'd think changing what gets cached in RAM would be instant (we
don't need to flush the data already in RAM, just stop putting it back in).
During this, disk I/O seemed slow, though that could have been unrelated.
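For what it's worth, the cache policy and the current ARC size can be checked
like this (tank1 as above):

  zfs get primarycache,secondarycache tank1
  kstat -p zfs:0:arcstats:size   # current ARC size in bytes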
I know, we should have run zpool scrub -s (to stop the scrub) first... but... sigh...
bits...@zfs:/opt/StorMan# zpool status -v tankmir1
  pool: tankmir1
 state: ONLINE
 scrub: scrub in progress for 0h16m, 0.14% done, 187h17m to go
config:

        NAME        STATE     READ WRITE CKSUM
        tankmir1    ONLINE
I enabled compression on a zfs filesystem with compression=gzip-9 - i.e. fairly
slow compression - this filesystem stores backups of databases (which compress
fairly well).
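For reference, this is the sort of setup I mean (the dataset name is a
placeholder), and compressratio shows how well the backups actually compress:

  zfs set compression=gzip-9 tank/sqlbackups
  zfs get compression,compressratio tank/sqlbackups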
The next question is: is the checksum on disk based on the uncompressed data
(which seems more likely to be recoverable) or on the compressed blocks as
written?
Hardware: Supermicro server with an Adaptec 5405 SAS controller and an LSI
expander, 24 drives. Currently using 2x 1TB SAS drives striped, plus 1x 750GB
SATA as another pool. I don't think the hardware is related, though, since
it's fine if I turn off zfs compression, and I seem to get the same behavior
on either pool.
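To help rule hardware in or out, watching per-device service times while
toggling compression is a quick check:

  iostat -xn 5   # watch asvc_t and %b per device with compression on vs. off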