On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
support (I believe currently only Nexenta but correct me if I am wrong) the
blocks will not be freed, will they?
Solaris 11.1 has ZFS with SCSI UNMAP support.
Freeing unused blocks works perfectly well with fstrim.
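For a Linux guest, the fstrim suggestion boils down to one command. A guarded sketch (the mount point /mnt is illustrative; fstrim needs root and a virtual disk that advertises discard support, so failures are tolerated rather than treated as errors):

```shell
# Ask the filesystem to issue TRIM/UNMAP for all unused blocks.
# /mnt is a placeholder mount point; on devices without discard
# support, or without root, fstrim fails and we just report that.
if command -v fstrim >/dev/null 2>&1; then
    fstrim -v /mnt 2>/dev/null || echo "fstrim: discard not supported on /mnt"
else
    echo "fstrim not available on this system"
fi
```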
Darren,
On 02/12/2013 11:25 AM, Darren J Moffat wrote:
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi
unmap support (I believe currently only Nexenta but correct me if I am
wrong) the blocks will not be freed,
On 02/12/13 15:07, Thomas Nau wrote:
Darren,
On 02/12/2013 11:25 AM, Darren J Moffat wrote:
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi
unmap support (I believe currently only Nexenta but correct me if I
No tools needed; ZFS does it automatically when freeing blocks, provided the
underlying device advertises the functionality.
ZFS ZVOLs shared over COMSTAR advertise SCSI UNMAP as well.
If a system was running something older, e.g. Solaris 11, the free
blocks will not be marked as such on the server even
On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote:
Why should it?
I believe currently only Nexenta but correct me if I am wrong
The code was mainlined a while ago; see:
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730
I run dd if=/dev/zero of=testfile bs=1024k count=5 inside the iSCSI VMFS
from ESXi and then rm testfile.
However, zpool list doesn't decrease at all. In fact, the used storage
increases when I do the dd.
FreeNAS 8.0.4 and ESXi 5.0.
Help.
Thanks.
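For reference, the test being described is just these commands, run here against an ordinary local directory rather than the original VMFS datastore:

```shell
# Write 5 MiB of zeros (bs=1024k count=5), then delete the file.
# Inside a VM this only shrinks the backing zvol if the zeros, or a
# subsequent UNMAP/TRIM, actually reach the storage backend.
dd if=/dev/zero of=testfile bs=1024k count=5 2>/dev/null
ls -l testfile
rm testfile
```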
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
support (I believe currently only Nexenta but correct me if I am wrong) the
blocks will not be freed, will they?
Kind regards
JP
Sent from a mobile device.
On 10.02.2013 at 11:01, Datnus wrote:
On 2013-02-10 10:57, Datnus wrote:
I run dd if=/dev/zero of=testfile bs=1024k count=5 inside the iSCSI VMFS
from ESXi and then rm testfile.
However, zpool list doesn't decrease at all. In fact, the used storage
increases when I do the dd.
FreeNAS 8.0.4 and ESXi 5.0.
Help.
Thanks.
Did you also
I forgot about compression. Makes sense. As long as the zeroes find their way
to the backend storage this should work. Thanks!
Kind regards
JP
___
zfs-discuss mailing list
I have to come back to this issue after a while because it just hit me.
I have a VMware vSphere 4 test host with various machines in there to do
performance tests and other stuff. So a lot of I/O benchmarks are run and
a lot of data is created during these benchmarks.
The vSphere test
On Sat, May 8, 2010 at 5:04 AM, Lutz Schumann
presa...@storageconcepts.de wrote:
Now, if writing all zeros would clear the block again in ZFS (which I
suggest in this thread), I could do the following:
Fill the disks within the VMs with zeros (dd if=/dev/zero of=/MYFILE
On 26 February, 2010 - Lutz Schumann sent me these 2,2K bytes:
Hello list,
ZFS can be used for both file-level (ZFS filesystems) and block-level access
(zvols). When using zvols, those are always thin provisioned (space is
allocated on first write). We use zvols with COMSTAR to do iSCSI and FC access
On 02/26/10 11:42, Lutz Schumann wrote:
Idea:
- If the guest writes a block with 0's only, the block is freed again
- if someone reads this block again, it will get the same 0's it would get
if the 0's had been written
- The checksum of an all-0 block can be hard coded for SHA1 /
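The detection half of the idea is cheap to sketch: compare a block against /dev/zero before deciding whether it needs to be stored at all (block size, helper name, and file names are illustrative):

```shell
# Decide whether a block is all zeros by comparing it byte-for-byte
# with /dev/zero; an all-zero block could then be stored as a hole,
# i.e. freed, instead of being written out.
is_zero_block() {
    cmp -s -n "$(stat -c %s "$1")" "$1" /dev/zero
}

head -c 1048576 /dev/zero    > zeros.bin
head -c 1048576 /dev/urandom > random.bin
is_zero_block zeros.bin  && echo "zeros.bin: all zeros, can be freed"
is_zero_block random.bin || echo "random.bin: real data, must be kept"
rm zeros.bin random.bin
```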
This would be an idea, and I thought about it. However, I see the following
problems:
1) using deduplication
This will reduce the on-disk size, but the DDT will grow forever, and for the
deletion of zvols this will mean a lot of time and work (see other threads
regarding DDT memory issues
On Fri, Feb 26, 2010 at 2:42 PM, Lutz Schumann
presa...@storageconcepts.de wrote:
Now if a virtual machine writes to the zvol, blocks are allocated on disk.
Reads are now served partly from disk (for all written blocks) and partly
from the ZFS layer (all unwritten blocks).
If the virtual machine (which may
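The "unwritten blocks" part can be seen with a sparse file: reading a hole returns zeros even though nothing was ever written there, which is the same behavior a thin-provisioned zvol presents to the guest (file name is illustrative):

```shell
# A hole in a sparse file reads back as zeros, the same way an
# unwritten region of a thin-provisioned zvol does.
truncate -s 1M hole.img
cmp -s -n 1048576 hole.img /dev/zero && echo "unwritten blocks read back as zeros"
rm hole.img
```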
On Feb 26, 2010, at 11:55 AM, Lutz Schumann wrote:
This would be an idea, and I thought about it. However, I see the following
problems:
1) using deduplication
This will reduce the on-disk size, but the DDT will grow forever, and for
the deletion of zvols this will mean a lot of