> Just to add some context, this was a while back that I tried it,
> years ago. The idea was that we could just set max memory to some
> crazy high number and then “unlock” just the amount in the offering,
> and adjust on the fly. As mentioned, I found it was trivial for VM
> users to unlock the full amount and get a “free” upgrade, so it was
> useless. There was also a non-trivial amount of RAM overhead just
> lost to supporting the balloon, if I recall.
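
For anyone unfamiliar with that approach: on a QEMU/KVM host managed
through libvirt (an assumption on my part; the original setup may have
used a different stack), the host-side "unlock on the fly" would look
roughly like the sketch below, with the domain name and sizes made up.

import libvirt

KIB_PER_GIB = 1024 * 1024   # libvirt memory APIs work in KiB

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("customer-vm")   # hypothetical domain name

# The domain is defined with a very large maxmem; the amount the
# customer actually bought is enforced only as a balloon target on the
# live domain.
dom.setMemoryFlags(2 * KIB_PER_GIB, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# The weakness described above: the target relies on the guest's
# balloon driver cooperating, so a guest admin who disables
# virtio_balloon gets the full maxmem back, i.e. the "free" upgrade.
state, max_kib, cur_kib, vcpus, cpu_time = dom.info()
print(max_kib, cur_kib)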

IMHO, supporting full dynamic scaling, including shrinkage, has a limited
number of use cases. If you want a workload to be dynamically scalable,
it is usually much better to look into horizontal scaling, i.e.
deploying more instances as load increases. If your workload is too
small for horizontal scaling to be effective, you should probably ask
yourself whether you need scaling at all.

Limiting scaling to memory increases only might have some merit, and it
should be much easier to implement by means of memory hotplug
emulation. Though, is it really worth the complexity when an offline
upgrade would normally cause only a very short downtime (or none at all
in an HA setup)?
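
For the increase-only case, the usual mechanism on QEMU/KVM with
libvirt (again assuming that stack) is DIMM hotplug rather than
ballooning. A minimal sketch, assuming the domain was defined with
<maxMemory> slots and a NUMA cell, and with a made-up name and size:

import libvirt

# Hypothetical 1 GiB DIMM; requires <maxMemory slots='...'> and at
# least one NUMA cell in the domain definition.
DIMM_XML = """
<memory model='dimm'>
  <target>
    <size unit='MiB'>1024</size>
    <node>0</node>
  </target>
</memory>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("customer-vm")   # hypothetical domain name

# Plug the DIMM into the running domain and persist it in the config
# so the upgrade survives a reboot. Unplugging is far less reliable on
# most guests, which is one reason increase-only is the simpler goal.
dom.attachDeviceFlags(
    DIMM_XML,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
)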
