Hi Ilya,

Can you confirm that you are using Fibre Channel VMFS as your
primary storage type? I have tried enabling
storage.overprovisioning.factor in the global settings previously and
it made no difference. The docs state that VMware storage
overprovisioning is only supported on NFS and iSCSI, so I am curious
whether this is working for you.
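For what it's worth, the behavior I'm seeing matches the arithmetic below. This is only an illustrative sketch of the allocated-capacity check, not CloudStack's actual code; the 0.85 default and the setting semantics are my reading of pool.storage.allocated.capacity.disablethreshold, and the capacity numbers are made up to resemble this thread:

```python
def can_allocate(total_gb, allocated_gb, request_gb,
                 overprov_factor=1.0, disable_threshold=0.85):
    """Illustrative check: a new allocation passes only if the resulting
    allocated total stays under threshold * (capacity * overprovisioning)."""
    effective_capacity_gb = total_gb * overprov_factor
    return (allocated_gb + request_gb) <= disable_threshold * effective_capacity_gb

# Hypothetical numbers: 3 TB datastore, 2.5 TB already allocated,
# new instance needing 100 GB.
print(can_allocate(3072, 2560, 100))                      # -> False (blocked)
print(can_allocate(3072, 2560, 100, overprov_factor=20))  # -> True (allowed)
```

With no overprovisioning the 100 GB request pushes allocation past 85% of raw capacity and is rejected, even though linked clones mean the real usage is far lower; a factor of 20 makes the effective capacity large enough that the check passes.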

Thanks,

Chris

On Thu, Oct 10, 2013 at 11:38 AM, Musayev, Ilya <imusa...@webmd.net> wrote:
> Chris,
>
> The current solution is to enable storage.overprovisioning.factor in global 
> settings. The long-term solution, per a discussion with Edison, who maintains 
> the storage framework, is to do a real lookup of actual space.
>
> I'm in the exact situation you are: I've overprovisioned by a factor of 20 
> (maybe overkill). I also rely on vSphere monitoring for space alerts.
>
> Regards
> ilya
>
>> -----Original Message-----
>> From: Chris Sciarrino [mailto:chris.sciarr...@gmail.com]
>> Sent: Thursday, October 10, 2013 10:21 AM
>> To: users@cloudstack.apache.org
>> Subject: Cloudstack System Capacity - VMFS Storage
>>
>> Hi,
>>
>> I have a situation where I have deployed a fairly large number of instances
>> from the same template. This has increased the allocated primary storage
>> to over 2.5 TB. However, because linked clones are used, I am really only
>> consuming a couple of hundred GB.
>> The primary storage type is a VMFS datastore, so I do not believe I can
>> overprovision it in CloudStack. The problem is that deploying more instances
>> fails because the amount I have allocated is over the disable threshold,
>> even though I have plenty of actual storage left.
>>
>> Is there any way around this? Or any way to make CloudStack see the actual
>> storage usage on the VMFS datastores?
>>
>> Thanks,
>>
>> Chris
>
>
