Cool thx, sounds like exactly what I'm looking for.  

I did a bit of reading on the subject, and to my understanding I should create 
a volume as large as I could possibly need.  So, erring on the optimistic side: 
"zfs create -s -V 4000G tank/iscsi1".  Then in Windows, initialize it and quick 
format it, and Windows will think it is 4000G.  Obviously I would do a quick 
format, not a full one, or it would write 4000G worth of zeros (or die trying).  
Although with dedup enabled I would presume it could survive even that.  Is 
that a good procedure, or is there a better way?
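For what it's worth, here's the rough sequence I have in mind on the Solaris side, assuming the pool is called "tank" as above (the -s flag makes the zvol sparse, so no refreservation is taken up front):

```shell
# Create a sparse (thin-provisioned) 4000G zvol -- no space reserved yet
zfs create -s -V 4000G tank/iscsi1

# Sanity check: volsize should show 4000G, but "used" and
# "refreservation" should be near zero until data is written
zfs get volsize,used,refreservation tank/iscsi1
```

After that the zvol would get exported over iSCSI and quick-formatted as NTFS from the Windows side.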

Anyway, my next question is: what happens when it fills up?  And what happens 
when deleted files on the NTFS partition add up to consume all the available 
space?

I mean, if I write a file to the NTFS volume, all that data gets written to the 
ZFS filesystem.  Then when I delete the file, all that happens is it gets 
marked as deleted; the data doesn't actually get zeroed out, so as far as ZFS 
is concerned the blocks still contain data and need to be stored.  As with most 
NTFS partitions, it will eventually use every bit of space it sees as 
available, no matter how many active files are actually there.
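So if I understand it right, the gap between what Windows reports as free and what ZFS has actually allocated should be visible from the pool side.  Something like this, assuming the tank/iscsi1 name from above:

```shell
# "used" will creep toward volsize over time as NTFS touches blocks,
# even if Windows shows plenty of free space on the volume
zfs list -o name,volsize,used,referenced tank/iscsi1
```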

So, using this type of thin provisioning, should I run scheduled cleans on the 
NTFS partition from Windows to zero out the deleted data?  Also, are there any 
other issues I should be aware of?
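What I'm picturing is something like the Sysinternals SDelete tool on the Windows side, combined with compression on the zvol so that the zeroed blocks don't actually consume pool space.  A sketch, assuming the drive letter is E: and the zvol name from above (the schedule and drive letter are just placeholders):

```shell
# On the Solaris side: with compression on, runs of zeros written by
# the guest compress down to (essentially) nothing
zfs set compression=on tank/iscsi1

# On the Windows side (cmd.exe), zero out the free space periodically:
#   sdelete -z E:
```

My understanding is that without compression (or some form of unmap/trim support), the zeros still get written and stored, so the compression step seems to be what makes the reclaim actually work -- happy to be corrected on that.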
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss