My rollback "finished" yesterday after about 7.5 days. The pool still 
wasn't ready to receive the last snapshot, so I rm'ed all the files 
(which took 14 hours) and then issued the rollback command again; it 
completed in 2 minutes this time.
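
For reference, the sequence looked roughly like this (the dataset and 
snapshot names below are placeholders, not my real ones):

    zfs rollback tank/backup@last-good   # first attempt: ~7.5 days
    rm -rf /tank/backup/*                # deleting the deduped files: ~14 hours
    zfs rollback tank/backup@last-good   # second attempt: ~2 minutes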

OK, I now have many questions, some prompted by a couple of responses 
(which don't appear on the http://opensolaris.org/jive website).

One response was:

"I think it has been shown by others that dedup requires LOTS of RAM and 
to be safe, an SSD L2ARC, especially with large (multi-TB) datasets.  Dedup 
is still very new, too.  People seem to forget that."

The other was:

"My only suggestion is if the machine is still showing any disk activity to try 
adding more RAM.  I don't know this for a fact but it seems that destroying 
deduped data when the dedup table doesn't fit in RAM is pathologically slow 
because the entire table is traversed for every deletion, or at least enough of 
it to hit the disk on every delete. 
I've seen a couple of people report that the process was able to complete in a 
sane amount of time after adding more RAM.

This information is based on what I remember of past conversations and is all 
available in the archives as well."

I currently have 4 GB of RAM and can't fit any more in this box (4 x 
2 TB hard drives), so it sounds like I need bigger hardware. The 
question is how much bigger. According to one post I read, the poster's 
dedup table would fill 13.4 GB for his 1.7 TB of file space. Assuming 
that holds (roughly 8 GB per 1 TB), do modern servers have enough RAM 
to use dedup effectively? And is an SSD fast enough, or does the whole 
DDT need to be held in RAM?
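
For what it's worth, the back-of-envelope I've been using (assuming the 
often-quoted rule of thumb of roughly 320 bytes of core per DDT entry, 
which varies by build) is entries = unique data / average block size, 
so the GB-per-TB cost depends heavily on block size:

    # assuming ~320 bytes of ARC per DDT entry (rule of thumb only)
    # 1 TB unique data at 128K recordsize -> ~8M entries   -> ~2.5 GB
    # 1 TB unique data at 8K blocks       -> ~128M entries -> ~40 GB
    #
    # zdb can report the actual table size for an existing pool
    # ("tank" is a placeholder pool name):
    zdb -DD tank    # prints the DDT histogram and per-entry sizes

That might explain the 8 GB per TB figure if his average block size was 
small.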

I am currently planning a new file server for the company, which needs 
space for approx. 16 TB of files (twice what we are currently using) 
and will need to be much more focused on performance. Would the two 
solutions below have similar performance, and what results does turning 
on compression give?
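
On the compression point, it should be cheap to measure on a sample of 
the real data before committing (pool/dataset names below are 
placeholders):

    zfs set compression=lzjb tank/test   # lzjb is the low-overhead default
    # ... copy a representative sample of the files in ...
    zfs get compressratio tank/test      # reports the achieved ratio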

Both will have 20 hard disks (2 rpool, 2 SSD cache, 14 data as mirrored 
pairs, and 2 hot spares).

Non-dedup:
16 x 2 TB, giving 14 TB of file system space (2 spares)
2 x 80 GB SSD cache
16 GB RAM (2 GB for system, 14 GB for ZFS; is this fine for non-dedup?)

Dedup (I am getting a 2.7x ratio at the moment on the secondary 
backup):
14 x 1 TB, giving 6 TB of file system space (dedup of 2.3x and 2 spare 
slots for upgrade)
2 x 160 GB SSD cache
64 GB RAM (2 GB system, 6 GB ZFS, 48 GB DDT; yes, I know I can't 
separate ZFS and DDT.)
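
As a sanity check on the 48 GB DDT allowance (same ~320 bytes per entry 
assumption as above):

    # 6 TB unique data / 128K blocks -> ~50M entries  -> ~15 GB of core
    # 6 TB unique data / 32K blocks  -> ~200M entries -> ~60 GB of core

So 48 GB looks like it sits within that range, depending on average 
block size.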

The second system will be more upgradeable/future-proof, but do people 
think the performance would be similar?


Thanks

John