Re: Community Over Code North America 2024 - Talk submission
Hello guys,

I discussed with those responsible for the conference, and it was decided that the CloudStack talks will not be targeted to an individual track; however, the track "Cloud and runtime" was renamed to "CloudStack, Cloud, and Runtime", due to our strong presence in it, and we will be co-chairing this track. Thank you to those who submitted talks to the track.

Best regards,
Daniel Salvador (gutoveronezi)

On 12/04/2024 09:36, Gabriel Beims Bräscher wrote:
Hi Daniel, Great to hear, I would love to attend this one. And good work on making this happen. Best regards, Gabriel.

On Fri, 12 Apr 2024 at 1:01 AM, Ivet Petrova wrote:
Hey Daniel, Great initiative! Hope more people will submit proposals. I think we posted on socials for the event at some point. Will post today also. Best regards,

On 12 Apr 2024, at 3:36, Guto Veronezi wrote:
Hello guys, The Community Over Code North America (COCNA) conference will happen in October 2024 in Denver. The Call For Tracks for this conference occurred between December 2023 and January 2024; however, we (the community) did not engage in the discussion at that time and ended up not having a track for CloudStack at COCNA 2024, as we always had in past years. Those interested in submitting talks targeted to ACS can do so by submitting them to the "Runtime and cloud" track. However, I am speaking with those responsible for the conference to check if we could put something together if enough talks are submitted. Currently, there are already 7 talks targeted to ACS submitted to "Runtime and cloud". Unfortunately, the CFP will last only until 23:59 UTC on April 15, 2024; therefore, we have a short time to make that happen. Thus, if you are interested, we invite you to submit a CloudStack talk to the track "Runtime and cloud" at COCNA 2024 [1]. Best regards, Daniel Salvador (gutoveronezi) [1] https://communityovercode.org/call-for-presentations/
Community Over Code North America 2024 - Talk submission
Hello guys,

The Community Over Code North America (COCNA) conference will happen in October 2024 in Denver. The Call For Tracks for this conference occurred between December 2023 and January 2024; however, we (the community) did not engage in the discussion at that time and ended up not having a track for CloudStack at COCNA 2024, as we always had in past years. Those interested in submitting talks targeted to ACS can do so by submitting them to the "Runtime and cloud" track. However, I am speaking with those responsible for the conference to check if we could put something together if enough talks are submitted. Currently, there are already 7 talks targeted to ACS submitted to "Runtime and cloud". Unfortunately, the CFP will last only until 23:59 UTC on April 15, 2024; therefore, we have a short time to make that happen. Thus, if you are interested, we invite you to submit a CloudStack talk to the track "Runtime and cloud" at COCNA 2024 [1].

Best regards,
Daniel Salvador (gutoveronezi)

[1] https://communityovercode.org/call-for-presentations/
Re: AW: Manual fence KVM Host
Hello Murilo,

Complementing Swen's answer: if your host is still up and you can manage it, then you could also put the host in maintenance mode in ACS. This process will evacuate (migrate to another host) every VM on the host (not only the ones that have HA enabled). Is this your situation? If not, could you provide more details about your configuration and the environment state?

Depending on your setup, the HA might not work as expected. For VMware and XenServer, the process is expected to happen at the hypervisor level. For KVM, ACS does not support HA; what ACS supports is failover (though it is named HA in ACS), and this process works only when certain criteria are met. Furthermore, there are two ways to implement failover for ACS + KVM: VM failover and host failover. In both cases, when ACS identifies that a host has crashed or a VM has suddenly stopped working, it will start the VM on another host.

In ACS + KVM, VM failover requires at least one NFS primary storage; the KVM Agent of every host writes its heartbeat to it. VM failover is triggered only if the VM's compute offering has the property "Offer HA" enabled OR the global setting "force.ha" is enabled. VRs have failover triggered independently of the offering or the global setting. In this approach, ACS checks the VM state periodically (sending commands to the KVM Agent) and triggers the failover if the VM meets the previously mentioned criteria AND the determined limit (defined by the global settings "ping.interval" and "ping.timeout") has elapsed. Bear in mind that, if you lose your host, ACS will trigger the failover; however, if you gracefully shut down the KVM Agent or the host, the Agent will send a disconnect command to the Management Server and ACS will no longer check the VM states for that host. Therefore, if you lose your host while the service is down, the failover will not be triggered.
Also, if a host loses access to the NFS primary storage used for the heartbeat and the VM uses some other primary storage, ACS might trigger the failover too. As there is no STONITH/fencing in this scenario, it is possible for the VM to still be running on the original host while ACS tries to start it on another host.

In ACS + KVM, host failover requires configuring, in ACS, the OOBM of each host that should trigger the failover. In this approach, ACS monitors the Agent's state and triggers the failover if it cannot re-establish the connection. In this scenario, ACS will shut down the host via OOBM and start the VMs on another host; therefore, it does not depend on an NFS primary storage. This behavior is driven by the "kvm.ha.*" global settings. Furthermore, be aware that stopping the Agent might trigger the failover; therefore, it is recommended to disable the failover feature while doing operations on the host (like upgrading packages or other maintenance processes).

Best regards,
Daniel Salvador (gutoveronezi)

On 10/04/2024 03:52, m...@swen.io wrote:
What exactly do you mean? In which state is the host? If a host is in state "Disconnected" or "Alert", you can declare the host as degraded via the API (https://cloudstack.apache.org/api/apidocs-4.19/apis/declareHostAsDegraded.html) or the UI (icon). CloudStack will then start all VMs with HA enabled on other hosts, if the storage is accessible. Regards, Swen

-Original Message- From: Murilo Moura Sent: Wednesday, 10 April 2024 02:10 To: users@cloudstack.apache.org Subject: Manual fence KVM Host
hey guys! Is there any way to manually fence a KVM host and then automatically start the migration of VMs that have HA enabled?
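To make the VM-failover criteria described in this thread more concrete, here is a rough sketch of the decision logic in Python. This is not ACS source code: the function and parameter names are illustrative, and it assumes "ping.timeout" acts as a multiplier over "ping.interval" when deciding that the limit has elapsed.

```python
# Rough model of the ACS + KVM VM failover criteria described above.
# NOT actual ACS code; names and the exact formula are illustrative.

def host_considered_down(seconds_since_last_ping,
                         ping_interval=60.0, ping_timeout=2.5):
    """A host is suspected down once the configured limit has elapsed.

    Assumption: "ping.timeout" multiplies "ping.interval" to form the
    limit (e.g. 60 * 2.5 = 150 seconds with these example defaults).
    """
    return seconds_since_last_ping > ping_interval * ping_timeout

def should_trigger_vm_failover(offer_ha, force_ha_setting,
                               is_virtual_router,
                               agent_gracefully_disconnected,
                               seconds_since_last_ping,
                               ping_interval=60.0, ping_timeout=2.5):
    # A graceful Agent shutdown tells the Management Server to stop
    # checking VM states on that host, so failover is never triggered.
    if agent_gracefully_disconnected:
        return False
    # VRs have failover triggered regardless of the offering or the
    # "force.ha" global setting; other VMs need one of the two.
    eligible = is_virtual_router or offer_ha or force_ha_setting
    return eligible and host_considered_down(
        seconds_since_last_ping, ping_interval, ping_timeout)
```

As the thread notes, there is no STONITH in this path, so even when this logic fires, the VM may still be running on the original host.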
Re: CPU compatibility
For processors of the same family but of different generations, we can add them all to the same cluster, leveling the instructions to the lowest common denominator (limiting the instructions to the older generation). This way, every node in the cluster exposes the same instruction set. If we do not level the instructions, a migration from an older generation to a newer generation will not cause problems, as the newer generation contains the older generation's instruction set; the opposite might cause problems, as the older generation might not have all the instructions the newer generation has.

> So you recommend to make a cluster for each CPU Type ?

It is a common recommendation in academia; I found an older paper from VMware [1] that can help you understand this topic; however, if you dig a bit more you may find other papers.

> Can you define the migration peer for hosts? For example having them all one cluster but define somehow that migration should be done between hosts of same CPU?

That would be the idea of segregating hosts into clusters. Could you give details about your use case of having hosts with different specs (CPU families) in the same cluster?

Best regards,
Daniel Salvador (gutoveronezi)

[1] https://www.vmware.com/techpapers/2007/vmware-vmotion-and-cpu-compatibility-1022.html

On 10/04/2024 12:48, R A wrote:
Hi, is it also problematic migrating to different CPUs of the same family? For example from Epyc 9654 to Epyc 9754? So you recommend to make a cluster for each CPU type? Can you define the migration peer for hosts? For example having them all in one cluster but define somehow that migration should be done between hosts of the same CPU? BR

-Original Message- From: Guto Veronezi Sent: Wednesday, 10
April 2024 00:14 To: users@cloudstack.apache.org Subject: Re: CPU compatibility

Hello Steve,

For CloudStack, it does not matter if you have hosts with different processors; however, this recommendation comes from how virtualization systems work, so the discussion exists apart from CloudStack. When we are dealing with different processors, we are dealing with different flags, instructions, clocks, and so on. For processors of the same family, but of different generations, we can level the instructions to the lowest common denominator (limit the instructions to the older generation); however, it starts to get tricky when we are dealing with different families. For instance, if you deploy a guest VM on a host with a Xeon Silver and try to migrate it to a Xeon Gold, the OS of your guest, which already knows the Xeon Silver instructions, might not adapt to the instructions of the new host (Xeon Gold). Therefore, in these cases, you will face problems in the guest VM. If you are aware of the differences between the processors and that mixing different types can cause problems, then you can create a cluster mixing them; however, it is not recommended. For KVM, the parameter is defined in ACS; on the other hand, for XenServer and VMware this kind of setup is done at the cluster level in XenServer or vCenter. It is also important to bear in mind that, even though you level the instruction sets between the different processors in the host operating system, you might still suffer some issues due to clock differences when you migrate a VM from a faster CPU to a slower CPU and vice versa.

Best regards,
Daniel Salvador (gutoveronezi)

On 09/04/2024 18:58, Wei ZHOU wrote:
Hi, You can use a custom CPU model which is supported by both processors.
Please refer to https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html#configure-cpu-model-for-kvm-guest-optional -Wei

On Tuesday, April 9, 2024, S.Fuller wrote:
The CloudStack Install Guide has the following statement: "All hosts within a cluster must be homogenous. The CPUs must be of the same type, count, and feature flags." Obviously this means we can't mix Intel and AMD CPUs within the same cluster. However, for a cluster with Intel CPUs, how much leeway, if any, is there within this statement? If I have two 20-core Xeon Silver 4316 CPUs in one host and two 20-core Xeon Silver 4416 CPUs in another, is that close enough? I'm looking to add capacity to an existing cluster, and am trying to figure out how "picky" CloudStack is about this. Steve Fuller steveful...@gmail.com
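The "lowest common denominator" idea discussed in this thread can be sketched as a simple set intersection over the hosts' CPU feature flags. The flag names below are illustrative examples, not taken from any specific processor:

```python
# Sketch: the safe guest-visible feature set of a mixed cluster is the
# intersection of all hosts' CPU flags. Flag names are examples only.

def common_baseline(per_host_flags):
    """Return the feature flags that every host in the cluster supports."""
    return set.intersection(*(set(flags) for flags in per_host_flags))

older_gen = {"sse4_2", "avx", "avx2"}
newer_gen = {"sse4_2", "avx", "avx2", "avx512f"}

# Limiting guests to the baseline keeps migrations safe in both
# directions; exposing "avx512f" would break newer -> older migrations.
baseline = common_baseline([older_gen, newer_gen])
```

This is exactly what "custom CPU model" configurations achieve in practice: guests only ever see the baseline, so the migration direction stops mattering.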
Re: CPU compatibility
Hello Steve,

For CloudStack, it does not matter if you have hosts with different processors; however, this recommendation comes from how virtualization systems work, so the discussion exists apart from CloudStack. When we are dealing with different processors, we are dealing with different flags, instructions, clocks, and so on. For processors of the same family, but of different generations, we can level the instructions to the lowest common denominator (limit the instructions to the older generation); however, it starts to get tricky when we are dealing with different families. For instance, if you deploy a guest VM on a host with a Xeon Silver and try to migrate it to a Xeon Gold, the OS of your guest, which already knows the Xeon Silver instructions, might not adapt to the instructions of the new host (Xeon Gold). Therefore, in these cases, you will face problems in the guest VM. If you are aware of the differences between the processors and that mixing different types can cause problems, then you can create a cluster mixing them; however, it is not recommended. For KVM, the parameter is defined in ACS; on the other hand, for XenServer and VMware this kind of setup is done at the cluster level in XenServer or vCenter. It is also important to bear in mind that, even though you level the instruction sets between the different processors in the host operating system, you might still suffer some issues due to clock differences when you migrate a VM from a faster CPU to a slower CPU and vice versa.

Best regards,
Daniel Salvador (gutoveronezi)

On 09/04/2024 18:58, Wei ZHOU wrote:
Hi, You can use a custom CPU model which is supported by both processors. Please refer to https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html#configure-cpu-model-for-kvm-guest-optional -Wei

On Tuesday, April 9, 2024, S.Fuller wrote:
The CloudStack Install Guide has the following statement: "All hosts within a cluster must be homogenous.
The CPUs must be of the same type, count, and feature flags." Obviously this means we can't mix Intel and AMD CPUs within the same cluster. However, for a cluster with Intel CPUs, how much leeway, if any, is there within this statement? If I have two 20-core Xeon Silver 4316 CPUs in one host and two 20-core Xeon Silver 4416 CPUs in another, is that close enough? I'm looking to add capacity to an existing cluster, and am trying to figure out how "picky" CloudStack is about this. Steve Fuller steveful...@gmail.com
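For KVM, the "parameter defined in ACS" mentioned in this thread is the guest CPU model configured in each host's agent configuration (see the documentation link above). A minimal sketch of /etc/cloudstack/agent/agent.properties follows; the model name is only an example, and you should pick a named model that every host in the cluster actually supports:

```properties
# Sketch of /etc/cloudstack/agent/agent.properties on each KVM host.
# guest.cpu.mode may be "custom", "host-model", or "host-passthrough";
# "custom" plus a common named model levels mixed hosts to one baseline.
guest.cpu.mode=custom
# Example model only; choose one supported by every host in the cluster.
guest.cpu.model=Cascadelake-Server
```

With the same mode and model on every host, guests see an identical CPU regardless of which host they land on, which is what keeps live migration safe in both directions.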
Re: Quota Tariff Plugin
Hello Murilo,

Unfortunately, we could not address all the changes in 4.19; we are expecting everything (tariff management via GUI, Quota GUI rework, charts, and so on) to be working in 4.20. I'll make a note to keep this thread updated with the new changes.

Best regards,
Daniel Salvador (gutoveronezi)

On 3/30/24 15:26, Murilo Moura wrote:
Hello! Yesterday I installed version 4.19.0.1 and noticed that the tariff update still has the same problem as before (Unable to execute the quotatariffupdate API command due to the missing parameter name). Is there an estimate of when the quota plugin will be 100% operational? Regards, *Murilo Moura*

On Fri, Nov 3, 2023 at 9:16 AM Guto Veronezi wrote:
Hello, Murilo. There were several improvements to the Quota plugin in 4.18 (and there are more to come). One of them was enabling operators to write rules to determine in which context a tariff will be applied; along with that, the vCPU, CPU_SPEED, and MEMORY tariffs were converted to RUNNING_VM tariffs with respective activation rules (ES 5.1 is supported in the JavaScript scripts). You can check issue #5891 [1] and PR #5909 [2] for more information. You can also check this video [3] on YouTube to get an overview of the new features yet to be ported to the community (though you will probably need to use the automatic subtitle generator). Unfortunately, we did not have time to put effort into the official documentation adjustments, but it is on the roadmap. If you have any doubts about how it works or any improvement suggestions, just let us know. Best regards, Daniel Salvador (gutoveronezi) [1] https://github.com/apache/cloudstack/issues/5891 [2] https://github.com/apache/cloudstack/pull/5909 [3] https://www.youtube.com/watch?v=3tGhrzuxaOw=ygUQcXVvdGEgY2xvdWRzdGFjaw%3D%3D

On 11/3/23 02:16, Murilo Moura wrote:
Guys, is the quota tariff plugin still in development?
I ask because in version 4.18 I've noticed that the memory tariff, for example, isn't being calculated or saved in the cloud_usage table, in addition to the error that appears when trying to update a tariff (Unable to execute API command quotatariffupdate due to missing parameter name).
Re: [ANNOUNCE] New PMC Chair & VP Apache CloudStack Project - Daniel Salvador
Thanks Rohit for your work and thank you guys for the support. I'll put in my best effort in the role. Best regards, Daniel Salvador (gutoveronezi) On 3/22/24 05:25, Sven Vogel wrote: Thanks Rohit for your work. As always, good. Congratulations, Daniel. Cheers, Sven -- On March 22, 2024 09:22:47 Abhishek Kumar wrote: Thanks a lot Rohit for your work. Congratulations, Daniel! From: Rohit Yadav Sent: 21 March 2024 19:11 To: dev ; users ; Subject: [ANNOUNCE] New PMC Chair & VP Apache CloudStack Project - Daniel Salvador All, It gives me great pleasure to announce that the ASF board has accepted the CloudStack PMC's resolution of Daniel Augusto Veronezi Salvador as the next PMC Chair / VP of the Apache CloudStack project. I would like to thank everyone for the support I've received over the past year. Please join me in congratulating Daniel, the new CloudStack PMC Chair / VP. Best Regards, Rohit Yadav
Invite to join the logging standard discussion
Hello guys, Hope you are doing fine. Currently, there is a discussion on GitHub [1] regarding the CloudStack logging standards. Mostly, logs are written based on the developer's feelings about what should be logged and how; however, operators are the ones constantly dealing with logs. With that in mind, I would like to invite operators to join the discussion and present their points of view. This will enable us to create a standard that benefits both operators and developers. Best regards, Daniel Salvador (gutoveronezi) [1] https://github.com/apache/cloudstack/discussions/8746
Re: Regarding the Log4j upgrade
Hello guys,

We finally merged PR #7131 [1]. With that, other PRs targeted at the "main" branch might get the conflict status. The PR #7131 [1] description contains instructions on how to fix the conflicts; however, if you have any doubts, do not hesitate to contact us. For those who have PRs targeted at "main" and did not get the conflict status, we recommend merging "main" and running the build, as in some cases git will not point out a conflict (e.g., when a declaration is removed in order to inherit from the superclass and the names of the logger instances differ). Thank you to everyone involved, and if you have any doubts or problems, just contact us.

Best regards,
Daniel Salvador (gutoveronezi)

[1]: https://github.com/apache/cloudstack/pull/7131

On 2/2/24 10:51, João Jandre Paraquetti wrote:
Hi all, As some of you might already be aware, ACS version 4.20 will bring our logging library, log4j, from version 1.2 to 2.19. This change will bring us a number of benefits, such as:
* Async Loggers - performance similar to logging switched off
* Custom log levels
* Automatic reload of the configuration upon modification, without losing log events during reconfiguration
* Java 8-style lambda support for lazy logging (which enables methods to be executed only when necessary, i.e., at the right log level)
* Log4j 2 is garbage-free (or at least low-garbage) since version 2.6
* Plugin architecture - easy to extend by building custom components
* The Log4j 2 API supports more than just logging Strings: CharSequences, Objects, and custom Messages. Messages allow support for interesting and complex constructs to be passed through the logging system and be efficiently manipulated. Users are free to create their own Message types and write custom Layouts, Filters, and Lookups to manipulate them.
* Concurrency improvements: log4j2 uses java.util.concurrent libraries to perform locking at the lowest level possible. Log4j 1.x has known deadlock issues.
* Configuration via XML, JSON, YAML, or properties configuration files, or programmatically.

Regarding the upgrade:

* To our devs: We are planning on merging #7131 as soon as 4.19 is released; this way, we will have plenty of time to fix any other PRs that might break with this change. If you have any issues regarding log4j2 in your PRs after the PR is merged, feel free to ping me (@JoaoJandre) on GitHub and I'll try my best to help you. Also, for any doubts, it might be a good idea to check the log4j documentation: https://logging.apache.org/log4j/2.x/manual/index.html.

* To our users: For users who haven't tinkered with the default log configurations, this change should not bring any work to you. When installing ACS 4.20, your package manager should ask whether you want to upgrade your log4j configuration; please accept, as the old configuration will not work anymore. For those who have made modifications to the default configuration, please take a look at this documentation: https://logging.apache.org/log4j/2.x/manual/migration.html#Log4j2ConfigurationFormat; it should help you migrate your custom configuration. In any case, if you have problems upgrading from 4.19 to 4.20, feel free to create a thread on the users list so that we can try to help you. I should remind you that 4.20 will only be released at the end of Q2/start of Q3, so you'll have plenty of time to review what needs to be done regarding the log4j2 configuration.

Best regards,
João Jandre.
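To give a feel for the new configuration format mentioned above, here is a minimal log4j2 XML sketch. It is not ACS's shipped default configuration; the appender and logger names are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal log4j2 configuration sketch; not ACS's shipped default. -->
<!-- monitorInterval makes log4j2 reload this file without a restart,
     one of the automatic-reconfiguration benefits listed above. -->
<Configuration monitorInterval="60">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5p [%c{1.}] (%t) %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- Per-package levels replace log4j 1.x "category" entries. -->
    <Logger name="org.apache.cloudstack" level="debug"/>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```

The migration guide linked above maps each log4j 1.x construct (appenders, layouts, categories) onto this structure, so custom configurations usually translate element by element.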
Re: Quota Tariff Plugin
Hello, Murilo

There were several improvements to the Quota plugin in 4.18 (and there are more to come). One of them was enabling operators to write rules to determine in which context a tariff will be applied; along with that, the vCPU, CPU_SPEED, and MEMORY tariffs were converted to RUNNING_VM tariffs with respective activation rules (ES 5.1 is supported in the JavaScript scripts). You can check issue #5891 [1] and PR #5909 [2] for more information. You can also check this video [3] on YouTube to get an overview of the new features yet to be ported to the community (though you will probably need to use the automatic subtitle generator). Unfortunately, we did not have time to put effort into the official documentation adjustments, but it is on the roadmap. If you have any doubts about how it works or any improvement suggestions, just let us know.

Best regards,
Daniel Salvador (gutoveronezi)

[1] https://github.com/apache/cloudstack/issues/5891
[2] https://github.com/apache/cloudstack/pull/5909
[3] https://www.youtube.com/watch?v=3tGhrzuxaOw=ygUQcXVvdGEgY2xvdWRzdGFjaw%3D%3D

On 11/3/23 02:16, Murilo Moura wrote:
Guys, is the quota tariff plugin still in development? I ask because in version 4.18 I've noticed that the memory tariff, for example, isn't being calculated or saved in the cloud_usage table, in addition to the error that appears when trying to update a tariff (Unable to execute API command quotatariffupdate due to missing parameter name).