3:08pm, Daniel Carosone wrote:

On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote:
So I turned deduplication on on my staging FS (the one that gets mounted
on the database servers) yesterday, and since then I've been seeing the
mount hang for short periods of time off and on. (It lights nagios up
like a Christmas tree 'cause the disk checks hang and time out.)

Does it have enough (really, lots) of memory?  Do you have an l2arc
cache device attached (as well)?

Dedup has a significant memory requirement, or it has to go to disk
for lots of DDT entries.  While it's doing that, NFS requests can time
out.  Lengthening the timeouts on the client (for the fs mounted as a
backup destination) might help you around the edges of the problem.
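[Editor's sketch, not from the thread: a commonly cited ballpark for ZFS of this era is roughly 320 bytes of core memory per DDT entry, and `zdb -DD <pool>` reports the entry count. The 10-million-entry figure below is purely illustrative; actual per-entry size varies by release.]

```shell
# Back-of-the-envelope DDT RAM estimate.
# Assumes ~320 bytes of ARC per DDT entry (a rough, commonly quoted
# figure); the entry count here is hypothetical -- get the real number
# from 'zdb -DD <pool>'.
entries=10000000          # illustrative: 10M unique blocks
bytes_per_entry=320
ram_bytes=$((entries * bytes_per_entry))
echo "Estimated DDT core size: $((ram_bytes / 1024 / 1024)) MiB"
```

If that estimate approaches or exceeds what the box can keep in ARC, dedup lookups start hitting disk, which is when NFS clients see the stalls described above.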

As a related issue, are your staging (export) and backup filesystems
in the same pool?  If they are, moving from staging to final will
involve another round of updating lots of DDT entries.

What might be worthwhile trying:
- turning dedup *off* on the staging filesystem, so NFS isn't waiting
  for it, and then deduping later as you move to the backup area at
  leisure (effectively, asynchronously to the nfs writes).
- or, perhaps eliminating this double work by writing directly to the
  main backup fs.
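[Editor's sketch of the first suggestion above. `dedup` is a real ZFS dataset property; the pool and filesystem names are hypothetical. This requires a live ZFS system, so treat it as a config fragment rather than something to paste blindly.]

```shell
# 1. Stop deduping synchronously in the NFS write path:
zfs set dedup=off tank/staging

# 2. Keep dedup on the final backup filesystem, so data is
#    deduplicated as it lands there:
zfs set dedup=on tank/backup

# 3. Later, at leisure (asynchronously to the NFS writes),
#    move the data from staging into the deduped backup area:
rsync -a /tank/staging/ /tank/backup/
```

Note that `dedup=off` only affects new writes; blocks already written with dedup on stay in the DDT until they are freed.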


Thanks for the info.

FWIW, I have turned off dedup on the staging filesystem, but the dedup'ed data is still there, so it's a bit late now.

The reason I can't write directly to the main backup FS is that the backup process (RMAN run by my Oracle DBA) writes new files in place, and so my snapshots were taking up 500GB each, vs the 50GB I get if I use rsync instead.
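[Editor's note, an assumption about the rsync setup rather than something Paul stated: the snapshot savings depend on rewriting only changed blocks. By default rsync writes a temporary copy and renames it, which on a copy-on-write filesystem allocates all-new blocks; `--inplace` updates files in situ, so unchanged blocks stay shared with prior snapshots. Paths here are hypothetical.]

```shell
# --inplace overwrites changed regions of existing files instead of
# replacing whole files, so successive ZFS snapshots share the
# unchanged blocks and stay small.
rsync -a --inplace /mnt/rman-staging/ /tank/backup/oracle/
```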

I had dedup turned on on the staging FS so that I could compare snapshots of it (with dedup) against the final FS (without dedup, but populated via rsync) and see which works best. I guess I'll have to wait until I can get some more RAM on the box.

Paul
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss