Brandon,

You're probably hitting this CR:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824

I'm tracking the existing dedup issues here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

Thanks,

Cindy

On 04/29/10 23:11, Brandon High wrote:
I tried destroying a large (710GB) snapshot from a dataset that had
been written with dedup on. The host locked up almost immediately;
there was no stack trace on the console, and the host required a
power cycle, though it seemed to reboot normally. Once up, the
snapshot was still there. I was able to get a dump from this. The
data was written with b129, and the system is currently at b134.

I tried destroying it again, and the host started behaving badly.
'less' would hang, and there were several zfs-auto-snapshot processes
that were over an hour old, and the 'zfs snapshot' processes were
stuck on the first dataset of the pool. Eventually the host became
unusable and I rebooted again.

The host seems to be fine now, and is currently running a scrub.

Any ideas on how to avoid this in the future? I'm no longer using
dedup because of its performance impact, which suggests that the DDT
is very large.

bh...@basestar:~$ pfexec zdb -DD tank
DDT-sha256-zap-duplicate: 5339247 entries, size 348 on disk, 162 in core
DDT-sha256-zap-unique: 1479972 entries, size 1859 on disk, 1070 in core
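
For what it's worth, the zdb output above gives a rough idea of how big the in-core DDT is. A quick sketch, assuming the "in core" figure zdb prints is the average size per entry in bytes (the entry counts and sizes below are copied straight from the output):

```python
# Estimate the in-core DDT footprint from the `zdb -DD tank` output.
# zdb reports an average per-entry size; multiplying by the entry
# counts gives a ballpark total.
duplicate_entries, duplicate_core_bytes = 5_339_247, 162
unique_entries, unique_core_bytes = 1_479_972, 1_070

total_bytes = (duplicate_entries * duplicate_core_bytes
               + unique_entries * unique_core_bytes)
print(f"Estimated in-core DDT size: {total_bytes / 2**30:.2f} GiB")
```

That works out to roughly 2.3 GiB of DDT that has to be walked when freeing deduped blocks, which would be consistent with the hangs you saw if it doesn't all fit in ARC.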

-B

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
