I’m talking about the case where the storage device uses thin-provisioning 
internally to allow for better utilization. In this particular case the vendor 
array uses thin-provisioning, and I don’t have an option to turn it off. What I 
see is something like this:

- Create a 10 TB file system
- Fill it with 4 TB of data
- Delete the data (from the host)
- Storage array still reports 4 TB of usage, while the host sees 0
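For illustration, the sequence above might look something like this on the host. The device path /dev/mapper/demo is hypothetical, and the array-side allocation figure has to be read from the vendor's own management tool, not from the host:

```shell
# Hypothetical array-backed LUN; substitute your own device
mkfs.xfs -L DemoVol /dev/mapper/demo
mount LABEL=DemoVol /files

# Write roughly 4 TB of data (4194304 MiB)
dd if=/dev/zero of=/files/big.dat bs=1M count=4194304

# Delete it; the host now reports the space as free
rm /files/big.dat
df -h /files

# ...but the array's management GUI/CLI will still show ~4 TB
# allocated, because no UNMAP was ever issued for the freed blocks.
```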

This evidently goes back to the use of the SCSI "UNMAP" command, which you can 
enable as a mount option in Red Hat Enterprise Linux, for example:

mount -o discard LABEL=DemoVol /files/

Google "redhat linux scsi unmap" and you’ll see references to this.
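As an aside, on file systems that do support discard, an alternative to mounting with -o discard (which issues UNMAP on every delete) is a periodic batched discard with fstrim. A minimal sketch, reusing the /files mount point from the example above and assuming root privileges:

```shell
# Check whether the device actually advertises discard support
# (non-zero DISC-GRAN / DISC-MAX columns)
lsblk --discard

# One-off batched discard of all blocks already freed on /files
fstrim -v /files

# On systemd distributions that ship it, util-linux provides a
# weekly timer as an alternative to the online discard mount option
systemctl enable --now fstrim.timer
```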

However, GPFS doesn’t support this (see my previous reference), and as a result 
the array doesn’t know the block is no longer in use. This doesn’t mean GPFS 
can’t re-use it; it just means the array thinks there is more in use than there 
really is. I don’t know if this is common with thinly-provisioned arrays in 
general or specific to this vendor. But the fact that IBM calls it out ("since 
at present GPFS does not communicate block deallocation events to the block 
device layer") means that they are aware of this behavior in some arrays, 
perhaps their own as well.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid



From: <[email protected]> on behalf of Jonathan Buzzard <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Tuesday, December 22, 2015 at 5:46 PM
To: "[email protected]" <[email protected]>
Subject: Re: [gpfsug-discuss] Experiences with thinly-provisioned volumes?

What is the usage case of using thin provisioning on a GPFS file system?
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
