On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
> Brent,
>
> I had known about that bug for a couple of weeks, but it was filed
> against v111 and we're at v130. I have also searched the ZFS part of this
> forum and really couldn't find much about this issue.
>
> The other issue I noticed: contrary to the statements I read, other
> operations are supposed to keep working once zfs is underway destroying a
> big dataset, but that doesn't seem to be the case. While destroying the
> 3 TB dataset, the other zvol, which had been exported via iSCSI, stalled as
> well, and that's really bad.
>
> Cheers,
> budy
> --
> This message posted from opensolaris.org
> _______________________________________________
> opensolaris-help mailing list
> opensolaris-h...@opensolaris.org
>

I just tested your claim, and you appear to be correct.

I created a couple of dummy ZFS filesystems, loaded them with about 2 TB
of data, exported them via CIFS, and destroyed one of them.
The destroy took the usual amount of time (about 2 hours), and, quite to
my surprise, all I/O on the ENTIRE zpool stalled.
I don't recall seeing this prior to build 130; in fact, I know I would
have noticed it, as we create and destroy large ZFS filesystems very
frequently.
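For anyone who wants to try reproducing this, the steps above amount to
roughly the following sketch. The pool and dataset names (tank, tank/scratch)
are placeholders, not the ones from my actual test, and you'd obviously want
far more than one small file to approximate the 2 TB load:

```shell
#!/bin/sh
# Reproduction sketch (hypothetical names; needs root and a test pool).
zfs create tank/scratch
# Fill the dataset with a large amount of data, e.g. repeated mkfile/dd runs.
mkfile 1g /tank/scratch/dummy1
# Export via CIFS, matching the test described above.
zfs set sharesmb=on tank/scratch
# Kick off the destroy in the background...
zfs destroy tank/scratch &
# ...and watch whether I/O to OTHER datasets in the same pool stalls.
zpool iostat tank 5
```

If the bug bites, the `zpool iostat` output should show throughput on the
whole pool dropping to nothing for the duration of the destroy.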

So it seems the original issue I reported many months back has
actually gained some new negative impacts :(

I'll try to escalate this through my Sun support contract, but Sun
support still isn't very familiar with OpenSolaris, so I doubt I will
get very far.

Cross-posting to zfs-discuss as well, as others may have seen this and
may know of a solution or workaround.



-- 
Brent Jones
br...@servuhome.net
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
