weizhouapache commented on PR #8481: URL: https://github.com/apache/cloudstack/pull/8481#issuecomment-1885689032
> > @GutoVeronezi could you target 4.18.2?
>
> @JoaoJandre, done.
>
> > Code looks good. I have a worry, though. Overprovisioning on storage is, in general, a dangerous thing. On CPU and memory, any errors due to overprovisioning are likely not to be fatal, but on storage they can easily lead to data loss. When storage is allocated but not used we can apply overprovisioning, but once parts of it get used, those parts should not be counted towards overprovisioning. I don't think our model supports such behaviour. Any ideas, @GutoVeronezi @sureshanaparti? (Sorry, I don't know what the wisdom is with respect to this.)
>
> @DaanHoogland
> Sure, dealing with overprovisioning without caution can be harmful. Before changing the overprovisioning configurations, operators should understand how the environment is being consumed. Furthermore, it is important to have active monitoring of the environment; this way, it is possible to take actions, like the one @weizhouapache mentioned in https://github.com/apache/cloudstack/pull/8481#issuecomment-1884729728, to avoid problems.
>
> The configuration `cluster.storage.capacity.notificationthreshold` makes ACS notify the operators when the storage reaches the capacity threshold (for memory and CPU we have similar configurations). I am not sure there is much more to do from the ACS perspective, but we can think about ways to improve this feature.

Yes @GutoVeronezi, there are global settings to (1) notify users, or (2) disable new volume allocation, if the allocated capacity or the actual used capacity reaches a threshold. The thresholds are around 75% or 85%, so users have enough time to get more storage or clean up the storage.
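As a minimal sketch of how an operator might inspect these thresholds, the snippet below lists the storage capacity settings discussed above via the CloudStack API, using the third-party `cs` Python client (`pip install cs`). The endpoint and credentials are placeholders, and the exact setting names should be verified against your CloudStack version; `updateConfiguration` can be called in the same way to change a value.

```python
# Sketch only: list the storage capacity threshold settings mentioned above.
# Assumes the third-party "cs" CloudStack client and placeholder credentials.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloudstack.example.com/client/api",  # assumed management server URL
    key="YOUR_API_KEY",                                     # placeholder credentials
    secret="YOUR_SECRET_KEY",
)

# Global settings for capacity notification and allocation disabling
# (verify the names in the Global Settings of your CloudStack release).
settings = [
    "cluster.storage.capacity.notificationthreshold",            # notify on used capacity
    "cluster.storage.allocated.capacity.notificationthreshold",  # notify on allocated capacity
    "pool.storage.capacity.disablethreshold",                    # stop allocation on used capacity
    "pool.storage.allocated.capacity.disablethreshold",          # stop allocation on allocated capacity
]

for name in settings:
    result = api.listConfigurations(name=name)
    for cfg in result.get("configuration", []):
        print(f"{cfg['name']} = {cfg['value']}")
```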
