How do you know it is dedup causing the problem?

You can check how much time is being spent in dedup by looking at the kernel thread stacks (look for ddt functions):

mdb -k

::threadlist -v
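
If you just want a quick, non-interactive look, something along these lines should work as root (the grep is only my illustration for pulling out the ddt stack frames):

  echo "::threadlist -v" | mdb -k | grep ddt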

or trace it with DTrace:

fbt:zfs:ddt*:entry
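
For example, a one-liner along these lines should show which ddt functions are firing and how often (a sketch; the exact function names can vary by release):

  # count calls into the dedup table code, broken down by function
  dtrace -n 'fbt:zfs:ddt*:entry { @[probefunc] = count(); }'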

You can disable dedup on the destination dataset. I believe data that has already been dedup'd stays that way until it gets overwritten. I'm not sure what a zfs send would do, but I would assume that if dedup is not enabled on the new filesystem, the received data would not be dedup'd.
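Turning it off would just be something like this (the dataset name here is a placeholder for yours):

  # only affects new writes; blocks already in the DDT stay dedup'd
  zfs set dedup=off tank/new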
You might also want to read:

http://blogs.sun.com/roch/entry/dedup_performance_considerations1

As far as the impact of <ctrl-c> on a move operation: when I test moving a file from one filesystem to another and <ctrl-c> the operation, the file is intact on the original filesystem and partial on the new one. So you would have to be careful about which data has already been copied.
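One way to sort out what made it across, assuming the temp/ and new/ layout from the question below, is a recursive compare (it reads file contents, so it will take a while on this much data):

  # completed files show up as "Only in new" (mv removes the source after a
  # successful copy); a file reported as differing is the one the interrupt cut short
  diff -rq temp new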

Dave

On 09/24/10 14:34, Thomas S. wrote:
Hi all

I'm currently moving a fairly big dataset (~2TB) within the same zpool. Data is 
being moved from one dataset to another, which has dedup enabled.

The transfer started at quite a slow transfer speed — maybe 12MB/s. But it is 
now crawling to a near halt. Only 800GB has been moved in 48 hours.

I looked for similar problems on the forums and other places, and it seems 
dedup needs much more RAM than this server currently has (3GB) to perform 
smoothly for an operation like this.

My question is, how can I gracefully stop the ongoing operation? What I did was simply 
"mv temp/* new/" in an ssh session (which is still open).

Can I disable dedup on the dataset while the transfer is going on? Can I simply 
Ctrl-C the process to stop it? Should I be careful of anything?

Help would be appreciated

