[ADVISORY] Apache CloudStack LTS Security Releases 4.18.2.4 and 4.19.1.2

2024-10-15 Thread Guto Veronezi

The Apache CloudStack project announces the release of LTS security releases
4.18.2.4 and 4.19.1.2 that address the following security issues:

- CVE-2024-45219 (severity 'Important')
- CVE-2024-45461 (severity 'Moderate')
- CVE-2024-45462 (severity 'Moderate')
- CVE-2024-45693 (severity 'Important')

# CVE-2024-45219: Uploaded and registered templates and volumes can be used to abuse KVM-based infrastructure

Account users in Apache CloudStack are, by default, allowed to upload and
register templates for deploying instances, and volumes for attaching as
data disks to their existing instances. Due to missing validation checks for
KVM-compatible templates or volumes in CloudStack 4.0.0 through 4.18.2.3 and
4.19.0.0 through 4.19.1.1, an attacker who can upload or register templates
and volumes can use them to deploy malicious instances or attach uploaded
volumes to their existing instances on KVM-based environments, and can
exploit this to gain access to the host filesystems. This could result in
the compromise of resource integrity and confidentiality, data loss, denial
of service, and loss of availability of the KVM-based infrastructure managed
by CloudStack.

Users are recommended to upgrade to Apache CloudStack 4.18.2.4 or 4.19.1.2,
or later, which addresses this issue.

Additionally, all user-uploaded or registered KVM-compatible templates and
volumes can be scanned and checked to verify that they are flat files that
do not use any additional or unnecessary features. For example, operators
can run the following command on their secondary storage(s) and inspect the
output. An empty output for the disk being validated means it has no
references to the host filesystems; on the other hand, a non-empty output
for the disk being validated might indicate a compromised disk.

for file in $(find /path/to/storage/ -type f -regex '[a-f0-9\-]*.*'); do
  echo "Retrieving file [$file] info. If the output is not empty, that might indicate a compromised disk; check it carefully."
  qemu-img info -U "$file" | grep file:
  printf "\n\n"
done
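
For a spot check on a single suspicious file, the same information can also
be pulled from qemu-img's JSON output. This is a minimal sketch, assuming a
qemu-img version that supports "--output=json" (available in all recent
releases) and that jq is installed; "<file>" is a placeholder for the disk
being validated:

# Both values print as "null" when the disk has no backing-file reference;
# anything else deserves careful inspection.
qemu-img info -U --output=json <file> \
  | jq '."backing-filename", ."full-backing-filename"'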


The find loop above can also be run against file-based primary storage(s);
however, bear in mind that (i) volumes created from templates will initially
have references to their templates and (ii) volumes can be consolidated
while migrating, losing their references to the templates. Therefore,
running the command against primary storage(s) can produce both false
positives and false negatives.

To check the whole set of template/volume features of each disk, operators
can run the following command:

for file in $(find /path/to/storage/ -type f -regex '[a-f0-9\-]*.*'); do
  echo "Retrieving file [$file] info."
  qemu-img info -U "$file"
  printf "\n\n"
done


# CVE-2024-45461: Access checks not enforced in Quota

The CloudStack Quota feature allows cloud administrators to implement a
quota or usage limit system for cloud resources, and is disabled by default.
In environments where the feature is enabled, due to missing access check
enforcement, non-administrative CloudStack user accounts are able to access
and modify quota-related configurations and data. This issue affects Apache
CloudStack from 4.7.0 through 4.18.2.3 and from 4.19.0.0 through 4.19.1.1,
where the Quota feature is enabled.

Users that do not use the Quota feature are advised to disable the plugin by
setting the global setting "quota.enable.service" to "false".
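
As an illustration, the setting can be changed through the API with
CloudMonkey (cmk); a minimal sketch, assuming a cmk profile already
configured with admin credentials (disabling the plugin may additionally
require a management server restart to take effect):

cmk update configuration name=quota.enable.service value=false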

# CVE-2024-45462: Incomplete session invalidation on web interface logout

The logout operation in the CloudStack web interface does not completely
expire the user's session, which remains valid until it expires by time or
the backend service is restarted. An attacker with access to a user's
browser can use an unexpired session to gain access to resources owned by
the logged-out user account.
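
Operators can check whether their build still accepts a logged-out session.
The following is an illustrative sketch only, not an official test: it
assumes a management server at http://mgmt:8080 (adjust host and
credentials), jq installed, and that session cookies are written to
cookies.txt:

CSAPI=http://mgmt:8080/client/api
SESSIONKEY=$(curl -s -c cookies.txt "$CSAPI" \
  --data-urlencode command=login \
  --data-urlencode username=admin \
  --data-urlencode password=password \
  --data-urlencode response=json | jq -r .loginresponse.sessionkey)
curl -s -b cookies.txt -G "$CSAPI" \
  --data-urlencode command=logout \
  --data-urlencode response=json > /dev/null
# A patched system must reject the old session (HTTP 401) on the next call:
curl -s -o /dev/null -w "%{http_code}\n" -b cookies.txt -G "$CSAPI" \
  --data-urlencode command=listUsers \
  --data-urlencode response=json \
  --data-urlencode "sessionkey=$SESSIONKEY"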

# CVE-2024-45693: Request origin validation bypass makes account takeover possible

Users logged into the Apache CloudStack web interface can be tricked into
submitting malicious CSRF requests due to missing validation of the origin
of the requests. This can allow an attacker to gain privileges and access to
the resources of authenticated users, and may lead to account takeover,
disruption, exposure of sensitive data, and compromise of the integrity of
the resources owned by the user account that are managed by the platform.

# Credits

The CVEs are credited to the following reporters:

## CVE-2024-45219:

- Daniel Augusto Veronezi Salvador 

## CVE-2024-45461:

- Fabrício Duarte 

## CVE-2024-45462:

- Arthur Souza
- Felipe Olivaes

## CVE-2024-45693:

- Arthur Souza
- Felipe Olivaes

# Affected versions

## CVE-2024-45219:
- Apache CloudStack 4.0.0 through 4.18.2.3
- Apache CloudStack 4.19.0.0 through 4.19.1.1

## CVE-2024-45461:
- Apache CloudStack 4.7.0 through 4.18.2.3
- Apache CloudStack 4.19.0.0 through 4.19.1.1

## CVE-2024-45462:
- Apache CloudStack 4.15.1.0 through 4.18.2.3
- Apache CloudStack 4.19.0.0 through 4.19.1.1

## CVE-2024-45693:
- Apache CloudStack 4.15.1.0 through 4.18.2.3
- Apache CloudStack 4.19.0.0 through 4.19.1.1

Re: Install cloudstack 4.19 , TOTAL WASTE OF TIME

2024-07-03 Thread Guto Veronezi
The management controller is working fine but the problem is with the KVM
host. The log I presented is from the host trying to connect to the
controller at port 8250.

I just installed chrony, because I hadn't before, and rebooted the host.
Below are the two errors I am having, but the second is the one that is
causing the exception.

Jul 02 13:06:23 host1-kvm java[1195]: libvirt: Domain Config error : invalid connection pointer in virConnectGetVersion
Jul 02 13:06:23 host1-kvm java[1195]: ERROR [kvm.resource.LibvirtConnection] (Agent-Handler-1:) (logid:) Connection with libvirtd is broken: invalid connection pointer in virConnectGetVersion
Jul 02 13:06:23 host1-kvm java[1195]: INFO  [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:) (logid:) Attempting to create storage pool 598765d0-efc5-46b8-af72-6e6126c5d26d (Filesystem) in libvirt
Jul 02 13:06:23 host1-kvm java[1195]: libvirt: Storage Driver error : Storage pool not found: no storage pool with matching uuid '598765d0-efc5-46b8-af72-6e6126c5d26d'
Jul 02 13:06:23 host1-kvm java[1195]: WARN  [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:) (logid:) Storage pool 598765d0-efc5-46b8-af72-6e6126c5d26d was not found running in libvirt. Need to create it.
Jul 02 13:06:23 host1-kvm java[1195]: INFO  [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:) (logid:) Didn't find an existing storage pool 598765d0-efc5-46b8-af72-6e6126c5d26d by UUID, checking for pools with duplicate paths
Jul 02 13:06:23 host1-kvm java[1195]: INFO  [kvm.storage.LibvirtStorageAdaptor] (Agent-Handler-1:) (logid:) Trying to fetch storage pool 598765d0-efc5-46b8-af72-6e6126c5d26d from libvirt
Jul 02 13:06:23 host1-kvm java[1195]: libvirt: error : internal error: could not initialize domain event timer
Jul 02 13:06:23 host1-kvm java[1195]: ERROR [kvm.resource.LibvirtComputingResource] (Agent-Handler-1:) (logid:) Failed to get libvirt connection for domain event lifecycle
Jul 02 13:06:23 host1-kvm java[1195]: org.libvirt.LibvirtException: internal error: could not initialize domain event timer
Jul 02 13:06:23 host1-kvm java[1195]: at org.libvirt.ErrorHandler.processError(Unknown Source)
Jul 02 13:06:23 host1-kvm java[1195]: at org.libvirt.ErrorHandler.processError(Unknown Source)
Jul 02 13:06:23 host1-kvm java[1195]: at org.libvirt.Connect.domainEventRegister(Unknown Source)
Jul 02 13:06:23 host1-kvm java[1195]: at org.libvirt.Connect.domainEventRegister(Unknown Source)
Jul 02 13:06:23 host1-kvm java[1195]: at org.libvirt.Connect.addLifecycleListener(Unknown Source)


This is a test for a proof of concept to present at my company, and I am
running all machines nested under KVM. Most of the time I see people
suggesting VirtualBox in this case, but I am using KVM on Proxmox. I am
beginning to suspect that more tweaks are necessary to overcome this
"domain event timer" issue.

I am also using Open vSwitch and had to create two libvirt networks in
advance: "default" (UUID b966134d-74ec-4d4b-bc00-751899655f27) and "ovs1"
(UUID ddfe58a5-dc9e-4a7b-9301-7b4c86ad8cc1). [The network XML definitions
were stripped from the archived message.]

And at agent.properties I configured:

network.bridge.type=openvswitch
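
For reference, a minimal sketch of how an Open vSwitch libvirt network like
"ovs1" can be defined and started; the bridge name "ovsbr1" below is an
assumption and must match an existing OVS bridge on the host:

cat > ovs1.xml <<'EOF'
<network>
  <name>ovs1</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr1'/>
  <virtualport type='openvswitch'/>
</network>
EOF
virsh net-define ovs1.xml
virsh net-start ovs1
virsh net-autostart ovs1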


I think at this point the problem is the hardware clock, which probably
does not allow the event timer configuration.

What do you think?


Again, thank you.







On Mon, Jul 1, 2024 at 5:43 PM Guto Veronezi wrote:


Hello Jorge,

We are sorry that you are not having a good experience deploying the
platform. As Jayanth said, we will try to be as supportive as possible.

Based on the data and log you provided us, it seems to me that there is
some misconfiguration between the OS/hypervisor and CloudStack. Could
you provide us some details about the steps you followed to configure
your environment, like how you installed and configured the OS and the
hypervisor, how you introduced the host in CloudStack (whether you manually
configured the agent.properties or let ACS configure it), and so on?
The more detail you can provide, the more we can help you track down
and solve the situation.

Best regards,
Daniel Salvador (gutoveronezi)

On 01/07/2024 12:31, Jorge Ventura wrote:

Hi Jayanth,
Thank you for the reply.
I overcame the certificate problem by setting the global
ca.plugin.root.auth.strictness=false.

After several changes and fixes, the cloudstack agent doesn't start. Here
is my agent.properties on the host:

#Storage
#Mon Jul 01 15:25:54 UTC 2024
cluster=default
pod=default
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
domr.scripts.dir=scripts/network/domr/kvm
host.cpu.manual.speed.mhz=1
hypervisor.type=kvm
port=8250
zone=default
local.storage.uuid=
host=10.0.1.1
guid="6553e7e1-1184-450f-8f04-551aced6820d"
host.manual.speed.mhz=3000
LibvirtComputingResource.id=0
network.bridge.type=openvswitch
workers=5
iscsi.session.cleanup.enabled=false


Whe

Re: Install cloudstack 4.19 , TOTAL WASTE OF TIME

2024-07-01 Thread Guto Veronezi

Hello Jorge,

We are sorry that you are not having a good experience deploying the 
platform. As Jayanth said, we will try to be as supportive as possible.


Based on the data and log you provided us, it seems to me that there is
some misconfiguration between the OS/hypervisor and CloudStack. Could
you provide us some details about the steps you followed to configure
your environment, like how you installed and configured the OS and the
hypervisor, how you introduced the host in CloudStack (whether you manually
configured the agent.properties or let ACS configure it), and so on?
The more detail you can provide, the more we can help you track down
and solve the situation.


Best regards,
Daniel Salvador (gutoveronezi)

On 01/07/2024 12:31, Jorge Ventura wrote:

Hi Jayanth,
Thank you for the reply.
I overcame the certificate problem by setting the global
ca.plugin.root.auth.strictness=false.

After several changes and fixes, the cloudstack agent doesn't start. Here
is my agent.properties on the host:

#Storage
#Mon Jul 01 15:25:54 UTC 2024
cluster=default
pod=default
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
domr.scripts.dir=scripts/network/domr/kvm
host.cpu.manual.speed.mhz=1
hypervisor.type=kvm
port=8250
zone=default
local.storage.uuid=
host=10.0.1.1
guid="6553e7e1-1184-450f-8f04-551aced6820d"
host.manual.speed.mhz=3000
LibvirtComputingResource.id=0
network.bridge.type=openvswitch
workers=5
iscsi.session.cleanup.enabled=false


When I start cloudstack-agent.service, the service fails. Here is the log:



Jul 01 15:28:11 host1-kvm systemd[1]: cloudstack-agent.service: Scheduled restart job, restart counter is at 349.
Jul 01 15:28:11 host1-kvm systemd[1]: Stopped CloudStack Agent.
Jul 01 15:28:11 host1-kvm systemd[1]: cloudstack-agent.service: Consumed 3.743s CPU time.
Jul 01 15:28:11 host1-kvm systemd[1]: Started CloudStack Agent.
Jul 01 15:28:11 host1-kvm java[55609]: log4j:WARN No appenders could be found for logger (com.cloud.agent.AgentShell).
Jul 01 15:28:11 host1-kvm java[55609]: log4j:WARN Please initialize the log4j system properly.
Jul 01 15:28:11 host1-kvm java[55609]: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) Agent started
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) Implementation Version is 4.19.0.1
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) agent.properties found at /etc/cloudstack/agent/agent.properties
Jul 01 15:28:11 host1-kvm java[55609]: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
Jul 01 15:28:11 host1-kvm java[55609]: SLF4J: Defaulting to no-operation (NOP) logger implementation
Jul 01 15:28:11 host1-kvm java[55609]: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) Defaulting to using properties file for storage
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) Defaulting to the constant time backoff algorithm
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.utils.LogUtils] (main:) (logid:) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.AgentShell] (main:) (logid:) Using default Java settings for IPv6 preference for agent connection
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.agent.Agent] (main:) (logid:) id is 0
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.resource.ServerResourceBase] (main:) (logid:) Trying to autodiscover this resource's private network interface.
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [cloud.resource.ServerResourceBase] (main:) (logid:) Using NIC [name:cloudbr0 (cloudbr0)] as private NIC.
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) uefi.properties file found at /etc/cloudstack/agent/uefi.properties
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) guest.nvram.template.legacy = "/usr/share/OVMF/OVMF_VARS.fd"
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) guest.loader.legacy = "/usr/share/OVMF/OVMF_CODE.secboot.fd"
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) guest.nvram.template.secure = "/usr/share/OVMF/OVMF_VARS.ms.fd"
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) guest.loader.secure = "/usr/share/OVMF/OVMF_VARS.ms.fd"
Jul 01 15:28:11 host1-kvm java[55609]: INFO  [kvm.resource.LibvirtComputingResource] (main:) (logid:) guest.nvram.path = "/var/lib/libvirt/qemu/nvram"
Jul 01 15:28:11 host1-kvm java[55609]: INFO

Re: Community Over Code North America 2024 - Talk submitting

2024-04-19 Thread Guto Veronezi

Hello guys,

I discussed it with those responsible, and it was decided that the
CloudStack talks will not get a dedicated track; however, the track
"Cloud and runtime" was renamed to "CloudStack, Cloud, and Runtime", due
to our strong presence in it, and we will be co-chairing this track.


Thank you to those who submitted talks to the track.

Best regards,
Daniel Salvador (gutoveronezi)

On 12/04/2024 09:36, Gabriel Beims Bräscher wrote:

Hi Daniel,

Great to hear, I would love to attend this one.
And good work at making this happen.

Best regards,
Gabriel.

On Fri, 12 Apr 2024 at 1:01 AM Ivet Petrova wrote:


Hey Daniel,

Great initiative! Hope more people will submit proposals. I think we
posted on socials for the event at some point. Will post today also.

Best regards,




On 12 Apr 2024, at 3:36, Guto Veronezi  wrote:

Hello guys,

The Community Over Code North America (COCNA) will happen in October 2024
in Denver. The Call For Tracks for this conference occurred between
December 2023 and January 2024; however, we (the community) did not engage
in the discussion at that time and ended up not having a track for
CloudStack in COCNA 2024, like we have always had in past years.

Those interested in submitting talks targeted to ACS can do so by submitting
them to the "Runtime and cloud" track. However, I am speaking with those
responsible for the conference to check if we could put something together
if enough talks were submitted. Currently, there are already 7 talks
submitted to "Runtime and cloud" targeted to ACS. Unfortunately, the CFP
will only last until 23:59 UTC on April 15, 2024; therefore, we have a short
time to try to make that happen. Thus, if you are interested, we invite you
to submit a CloudStack talk to the track "Runtime and cloud" at COCNA
2024 [1].

Best regards,
Daniel Salvador (gutoveronezi)

[1] https://communityovercode.org/call-for-presentations/





Community Over Code North America 2024 - Talk submitting

2024-04-11 Thread Guto Veronezi

Hello guys,

The Community Over Code North America (COCNA) will happen in October
2024 in Denver. The Call For Tracks for this conference occurred between
December 2023 and January 2024; however, we (the community) did not
engage in the discussion at that time and ended up not having a track
for CloudStack in COCNA 2024, like we have always had in past years.


Those interested in submitting talks targeted to ACS can do so by
submitting them to the "Runtime and cloud" track. However, I am speaking
with those responsible for the conference to check if we could put
something together if enough talks were submitted. Currently, there are
already 7 talks submitted to "Runtime and cloud" targeted to ACS.
Unfortunately, the CFP will only last until 23:59 UTC on April 15, 2024;
therefore, we have a short time to try to make that happen. Thus, if you
are interested, we invite you to submit a CloudStack talk to the track
"Runtime and cloud" at COCNA 2024 [1].


Best regards,
Daniel Salvador (gutoveronezi)

[1] https://communityovercode.org/call-for-presentations/



Re: AW: Manual fence KVM Host

2024-04-10 Thread Guto Veronezi

Hello Murilo,

Complementing Swen's answer, if your host is still up and you can manage 
it, then you could also put your host in maintenance mode in ACS. This 
process will evacuate (migrate to another host) every VM from the host 
(not only the ones that have HA enabled). Is this your situation? If 
not, could you provide more details about your configurations and the 
environment state?
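
For illustration, this is roughly how maintenance mode looks with
CloudMonkey (cmk), assuming a configured admin profile; <host-uuid> is a
placeholder for the host's ID:

cmk list hosts type=Routing filter=id,name,state
cmk prepare hostformaintenance id=<host-uuid>
# once the maintenance is over:
cmk cancel hostmaintenance id=<host-uuid>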


Depending on what you have in your setup, the HA might not work as 
expected. For VMware and XenServer, the process is expected to happen at 
the hypervisor level. For KVM, ACS does not support HA; what ACS 
supports is failover (it is named HA in ACS, though) and this process 
will work only when certain criteria are met. Furthermore, we have two 
ways to implement the failover for ACS + KVM: the VM's failover and the 
host's failover. In both cases, when it identifies that a host crashed or 
a VM suddenly stopped working, ACS will start the VM on another host.


In ACS + KVM, to work with the VM's failover, at least one NFS primary 
storage is necessary; the KVM Agent of every host writes its heartbeat to 
it. The VM's failover is triggered only if the VM's compute offering has 
the property "Offer HA" enabled OR the global setting "force.ha" is 
enabled. VRs have failover triggered independently of the offering or 
the global setting. In this approach, ACS will check the VM state 
periodically (sending commands to the KVM Agent) and it will trigger the 
failover if the VM meets the previously mentioned criteria AND the 
determined limit (defined by the global settings "ping.interval" and 
"ping.timeout") has elapsed. Bear in mind that, if you lose your 
host, ACS will trigger the failover; however, if you gracefully shut down 
the KVM Agent or the host, the Agent will send a disconnect command to 
the Management Server and ACS will not check the VM state anymore for 
that host. Therefore, if you lose your host while the service is down, 
the failover will not be triggered. Also, if a host loses access to the 
NFS primary storage used for the heartbeat and the VM uses some other 
primary storage, ACS might trigger the failover too. As we do not have 
STONITH/fencing in this scenario, it is possible for the VM to still be 
running on the original host while ACS tries to start it on another host.


In ACS + KVM, to work with the host's failover, it is necessary to 
configure OOBM in ACS for each host that should trigger the failover. 
In this approach, ACS monitors the Agent's state and triggers the 
failover in case it cannot re-establish the connection. In this 
scenario, ACS will shut down the host via OOBM and will start the VMs on 
another host; therefore, it does not depend on an NFS primary storage. 
This behavior is driven by the "kvm.ha.*" global settings. Furthermore, 
one has to be aware that stopping the Agent might trigger the failover; 
therefore, it is recommended to disable the failover feature while doing 
operations on the host (like upgrading packages or other maintenance 
processes).


Best regards,
Daniel Salvador (gutoveronezi)

On 10/04/2024 03:52, m...@swen.io wrote:

What exactly do you mean? In which state is the host?
If a host is in state "Disconnected" or "Alert", you can declare the host as
degraded via the API (https://cloudstack.apache.org/api/apidocs-4.19/apis/declareHostAsDegraded.html)
or the UI (icon).
CloudStack will then start all VMs with HA enabled on other hosts, if
storage is accessible.

Regards,
Swen

-Original Message-
From: Murilo Moura 
Sent: Wednesday, April 10, 2024 02:10
To: users@cloudstack.apache.org
Subject: Manual fence KVM Host

hey guys!

Is there any way to manually fence a KVM host and then automatically start the 
migration of VMs that have HA enabled?




Re: CPU compatibility

2024-04-10 Thread Guto Veronezi
For processors of the same family but of different generations, we can 
add them all to the same cluster, leveling the instructions to the 
lowest common denominator (limiting the instructions to the older 
generation). This way, every node in the cluster has the same set of 
instructions. If we do not level the instructions, a migration from an 
older generation to a newer generation will not cause problems, as the 
newer generation contains the older generation's instruction set; the 
opposite might cause problems, as the older generation might not have 
all the instructions the newer generation has.


> So you recommend to make a cluster for each CPU Type ?

It is a common recommendation in academia; I found an older paper 
from VMware [1] that can help you understand this topic; however, if you 
dig a bit more, you may find other papers.


> Can you define the migration peer for hosts? For example having them 
all one cluster but define somehow that migration should be done between 
hosts of same CPU?


That is the idea behind segregating hosts into clusters. Could you give 
details about your use case of having hosts with different specs (CPU 
families) in the same cluster?


Best regards,
Daniel Salvador (gutoveronezi)

[1] 
https://www.vmware.com/techpapers/2007/vmware-vmotion-and-cpu-compatibility-1022.html


On 10/04/2024 12:48, R A wrote:

Hi,

is it also problematic migrating to different CPUs of same Family? For example 
from Epyc 9654 to Epyc 9754 ?

So you recommend to make a cluster for each CPU Type ? Can you define the 
migration peer for hosts? For example having them all one cluster but define 
somehow that migration should be done between hosts of same CPU?

BR

-Original Message-
From: Guto Veronezi 
Sent: Mittwoch, 10. April 2024 00:14
To: users@cloudstack.apache.org
Subject: Re: CPU compatibility

Hello Steve,

For CloudStack, it does not matter if you have hosts with different processors; 
however, this is a recommendation regarding how virtualization systems work; 
therefore, this discussion happens aside from CloudStack.

When we are dealing with different processors, we are dealing with different 
flags, instructions, clocks, and so on. For processors of the same family, but 
of different generations, we can level the instructions to the lowest common 
denominator (limit the instructions to the older generation); however, it 
starts to get tricky when we are dealing with different families. For instance, 
if you deploy a guest VM in a host with Xeon Silver and try to migrate it to a 
Xeon Gold, the OS of your guest, which already knows the Xeon Silver 
instructions, might not adapt to the instructions of the new host (Xeon Gold). 
Therefore, in these cases, you will face problems in the guest VM.

If you are aware of the differences between the processors and that mixing 
different types can cause problems, then you can create a cluster mixing them; 
however, it is not recommended.

For KVM, the parameter is defined in ACS; on the other hand, for XenServer and 
VMware this kind of setup is done at the cluster level in XenServer or vCenter.

It is also important to bear in mind that, even though you level the 
instruction sets between the different processors in the host operating system, 
you might still suffer some issues due to clock differences when you migrate a 
VM from a faster CPU to a slower CPU and vice versa.

Best regards,
Daniel Salvador (gutoveronezi)

On 09/04/2024 18:58, Wei ZHOU wrote:

Hi,

You can use a custom cpu model which is supported by both cpu processors.

Please refer to
https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/k
vm.html#configure-cpu-model-for-kvm-guest-optional

-Wei


On Tuesday, April 9, 2024, S.Fuller  wrote:


The Cloudstack Install Guide has the following statement - "All hosts
within a cluster must be homogenous. The CPUs must be of the same
type, count, and feature flags"

Obviously this means we can't mix Intel and AMD CPUs within the same
cluster. However, for a cluster with Intel CPUs, how much if any
leeway is there within this statement? If I have two 20 Core Xeon
Silver 4316  CPUs on one host and two 20 Core Xeon Silver 4416 CPUs
in another, is that close enough? I'm looking to add capacity to an
existing cluster, and am trying to figure out how "picky" Cloudstack is about 
this.



Steve Fuller
steveful...@gmail.com



Re: CPU compatibility

2024-04-09 Thread Guto Veronezi

Hello Steve,

For CloudStack, it does not matter if you have hosts with different 
processors; however, this is a recommendation regarding how 
virtualization systems work; therefore, this discussion happens aside 
from CloudStack.


When we are dealing with different processors, we are dealing with 
different flags, instructions, clocks, and so on. For processors of the 
same family, but of different generations, we can level the instructions 
to the lowest common denominator (limit the instructions to the older 
generation); however, it starts to get tricky when we are dealing with 
different families. For instance, if you deploy a guest VM in a host 
with Xeon Silver and try to migrate it to a Xeon Gold, the OS of your 
guest, which already knows the Xeon Silver instructions, might not adapt 
to the instructions of the new host (Xeon Gold). Therefore, in these 
cases, you will face problems in the guest VM.


If you are aware of the differences between the processors and that 
mixing different types can cause problems, then you can create a cluster 
mixing them; however, it is not recommended.


For KVM, the parameter is defined in ACS; on the other hand, for 
XenServer and VMware this kind of setup is done at the cluster level in 
XenServer or vCenter.
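
As an illustration, on KVM the leveling is done through the guest CPU
mode/model in each host's agent.properties (see the documentation link Wei
shares below); the model "EPYC" here is only an example, so pick one that
every host in the cluster supports ("virsh cpu-models x86_64" lists the
candidates):

# sketch of /etc/cloudstack/agent/agent.properties
guest.cpu.mode=custom
guest.cpu.model=EPYC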


It is also important to bear in mind that, even though you level the 
instruction sets between the different processors in the host operating 
system, you might still suffer some issues due to clock differences when 
you migrate a VM from a faster CPU to a slower CPU and vice versa.


Best regards,
Daniel Salvador (gutoveronezi)

On 09/04/2024 18:58, Wei ZHOU wrote:

Hi,

You can use a custom cpu model which is supported by both cpu processors.

Please refer to
https://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html#configure-cpu-model-for-kvm-guest-optional

-Wei


On Tuesday, April 9, 2024, S.Fuller  wrote:


The Cloudstack Install Guide has the following statement - "All hosts
within a cluster must be homogenous. The CPUs must be of the same type,
count, and feature flags"

Obviously this means we can't mix Intel and AMD CPUs within the same
cluster. However, for a cluster with Intel CPUs, how much if any leeway is
there within this statement? If I have two 20 Core Xeon Silver 4316  CPUs
on one host and two 20 Core Xeon Silver 4416 CPUs in another, is that close
enough? I'm looking to add capacity to an existing cluster, and am trying
to figure out how "picky" Cloudstack is about this.



Steve Fuller
steveful...@gmail.com



Re: Quota Tariff Plugin

2024-04-01 Thread Guto Veronezi

Hello Murilo,

Unfortunately, we could not address all the changes in 4.19; we are 
expecting everything (tariff management via GUI, Quota GUI rework, 
charts, and so on) to be working in 4.20.

I'll make a note to keep this thread updated on the new changes.

Best regards,
Daniel Salvador (gutoveronezi)

On 3/30/24 15:26, Murilo Moura wrote:

Hello!

Yesterday I installed version 4.19.0.1 and noticed that the tariff update
still has the same problem as before (Unable to execute the
quotatariffupdate API command due to the missing parameter name).

Is there an estimate of when the quota plugin will be 100% operational?


regards,

*Murilo Moura*


On Fri, Nov 3, 2023 at 9:16 AM Guto Veronezi wrote:


Hello, Murilo

There were several improvements on the Quota plugin in 4.18 (and there
are some more to come). One of them was enabling operators to write
rules to determine in which context a tariff will be applied; along
with that, the vCPU, CPU_SPEED, and MEMORY tariffs were converted to
RUNNING_VM tariffs with respective activation rules (ES 5.1 is supported
in the JavaScript scripts). You can check issue #5891 [1] and PR #5909
[2] for more information. You can also check this video [3] on YouTube
to get an overview of the new features yet to be ported to the community
(though you will probably need to use the automatic subtitles generator).

Unfortunately, we did not have time to put effort into the official
documentation adjustments, but it is on the roadmap.

If you have any doubt about how it works or any improvement suggestion,
just let us know.

Best regards,
Daniel Salvador (gutoveronezi)

[1] https://github.com/apache/cloudstack/issues/5891
[2] https://github.com/apache/cloudstack/pull/5909
[3]

https://www.youtube.com/watch?v=3tGhrzuxaOw&pp=ygUQcXVvdGEgY2xvdWRzdGFjaw%3D%3D

On 11/3/23 02:16, Murilo Moura wrote:

Guys, is the quota tariff plugin still in development?

I ask because in version 4.18 I've noticed that the memory tariff, for
example, isn't being calculated or saved in the cloud_usage table, in
addition to the error that appears when trying to update a tariff (Unable
to execute API command quotatariffupdate due to missing parameter name).



Re: [ANNOUNCE] New PMC Chair & VP Apache CloudStack Project - Daniel Salvador

2024-03-22 Thread Guto Veronezi
Thanks Rohit for your work and thank you guys for the support. I'll put 
in my best effort in the role.


Best regards,
Daniel Salvador (gutoveronezi)

On 3/22/24 05:25, Sven Vogel wrote:

Thanks Rohit for your work. As always good.

Congratulations Daniel.

Cheers

Sven

--
On March 22, 2024 09:22:47 Abhishek Kumar wrote:


Thanks a lot Rohit for your work
Congratulations Daniel!




From: Rohit Yadav 
Sent: 21 March 2024 19:11
To: dev ; users 
;  

Subject: [ANNOUNCE] New PMC Chair & VP Apache CloudStack Project - 
Daniel Salvador


All,

It gives me great pleasure to announce that the ASF board has
accepted CloudStack PMC resolution of Daniel Augusto Veronezi Salvador as
the next PMC Chair / VP of the Apache CloudStack project.

I would like to thank everyone for the support I've received over the past
year.

Please join me in congratulating Daniel, the new CloudStack PMC Chair 
/ VP.


Best Regards,
Rohit Yadav



Invite to join the logging standard discussion

2024-03-07 Thread Guto Veronezi

Hello guys,

Hope you are doing fine.

Currently, there is a discussion in GitHub [1] regarding the CloudStack 
logging standards. Mostly, logs are written based on the developer's 
feelings about what should be logged and how; however, operators are the 
ones constantly dealing with logs. With that in mind, I would like to 
invite operators to join the discussion, and present their points of 
view. This will enable us to create a standard that can benefit both 
operators and developers.


Best regards,
Daniel Salvador (gutoveronezi)

[1] https://github.com/apache/cloudstack/discussions/8746



Re: Regarding the Log4j upgrade

2024-02-08 Thread Guto Veronezi

Hello guys

We finally merged PR #7131 [1]. With that, other PRs targeted to the 
branch "main" might get the conflict status. The PR #7131 [1] 
description contains instructions on how to fix the conflicts; however, 
if you have any doubts, do not hesitate to contact us. For those who 
have PRs targeted to "main" and did not get the conflict status, we 
recommend merging "main" and running the build, as in some cases git 
will not point out a conflict (e.g., when the declaration is removed 
in order to inherit from the super class and the names of the logger 
instances are different).


Thank you to everyone involved, and if you have any doubt or problem, 
just contact us.


Best regards,
Daniel Salvador (gutoveronezi)

[1]: https://github.com/apache/cloudstack/pull/7131


On 2/2/24 10:51, João Jandre Paraquetti wrote:

Hi all,

As some of you might already be aware, ACS version 4.20 will bring our 
logging library, log4j, from version 1.2 to 2.19. This change will 
bring us a number of benefits, such as:


  * Async Loggers - performance similar to logging switched off
  * Custom log levels
  * Automatically reload its configuration upon modification without
    losing log events during reconfigurations.
  * Java 8-style lambda support for lazy logging (which enables methods
    to be executed only when necessary, i.e.: the right log level)
  * Log4j 2 is garbage-free (or at least low-garbage) since version 2.6
  * Plugin Architecture - easy to extend by building custom components
  * Log4j 2 API supports more than just logging Strings: CharSequences,
    Objects and custom Messages. Messages allow support for interesting
    and complex constructs to be passed through the logging system and
    be efficiently manipulated. Users are free to create their own
    Message types and write custom Layouts, Filters and Lookups to
    manipulate them.
  * Concurrency improvements: log4j2 uses java.util.concurrent libraries
    to perform locking at the lowest level possible. Log4j-1.x has known
    deadlock issues.
  * Configuration via XML, JSON, YAML, properties configuration files or
    programmatically.


Regarding the upgrade:

* To our devs:

    We are planning on merging #7131 as soon as 4.19 is released; this 
way, we will have plenty of time to fix any other PRs that might break 
with this change. If you have any issues regarding log4j2 in your PRs 
after the PR is merged, feel free to ping me (@JoaoJandre) on GitHub 
and I'll try my best to help you. Also, for any doubts, it might be a 
good idea to check the log4j documentation: 
https://logging.apache.org/log4j/2.x/manual/index.html.


* To our users:

    For users that haven't tinkered with the default log 
configurations, this change should not bring any work to you. When 
installing ACS 4.20, your package manager should ask you if you want 
to upgrade your log4j configuration; please accept this, as the old 
configuration will not work anymore.


    For those who have made modifications to the default 
configuration, please take a look at this documentation: 
https://logging.apache.org/log4j/2.x/manual/migration.html#Log4j2ConfigurationFormat; 
it should help you migrate your custom configuration.
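
    For orientation, a log4j2 XML configuration follows the shape of the
minimal skeleton below; this is a generic example, not the stock CloudStack
log4j-cloud.xml, so use it only as a reference for the new format:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5p [%c{1.}] (%t:%x) %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>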


    In any case, if you have problems upgrading from 4.19 to 4.20, 
feel free to create a thread on the users list so that we can try to 
help you. I should remind you that 4.20 will only be launched at the 
end of Q2/start of Q3, so you'll have plenty of time to review what 
needs to be done regarding the log4j2 configuration.


Best regards,
João Jandre.



Re: Quota Tariff Plugin

2023-11-03 Thread Guto Veronezi

Hello, Murilo

There were several improvements on the Quota plugin in 4.18 (and there 
are some more to come). One of them was enabling operators to write 
rules to determine in which context a tariff will be applied; along 
with that, the vCPU, CPU_SPEED, and MEMORY tariffs were converted to 
RUNNING_VM tariffs with respective activation rules (ES 5.1 is supported 
in the JavaScript scripts). You can check issue #5891 [1] and PR #5909 
[2] for more information. You can also check this video [3] on YouTube 
to get an overview of the new features yet to be ported to the community 
(though you will probably need to use the automatic subtitles generator).
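
To give an idea of the shape of an activation rule, below is a hypothetical
JavaScript sketch; the preset variable name used here (account.name) and the
return semantics (a boolean deciding whether the tariff applies) are
assumptions for illustration only; the real presets are described in PR
#5909 [2]:

// Hypothetical activation rule: exempt a test account from this tariff
// and apply the tariff to everyone else (variable names are assumptions).
if (account.name == 'test-account') {
    false
} else {
    true
}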


Unfortunately, we did not have time to put effort into the official 
documentation adjustments, but it is on the roadmap.


If you have any doubt about how it works or any improvement suggestion, 
just let us know.


Best regards,
Daniel Salvador (gutoveronezi)

[1] https://github.com/apache/cloudstack/issues/5891
[2] https://github.com/apache/cloudstack/pull/5909
[3] 
https://www.youtube.com/watch?v=3tGhrzuxaOw&pp=ygUQcXVvdGEgY2xvdWRzdGFjaw%3D%3D


On 11/3/23 02:16, Murilo Moura wrote:

Guys, is the quota tariff plugin still in development?

I ask because in version 4.18 I've noticed that the memory tariff, for
example, isn't being calculated or saved in the cloud_usage table, in
addition to the error that appears when trying to update a tariff (Unable
to execute API command quotatariffupdate due to missing parameter name).