Cloudstack DB using 3 Node Galera Cluster.

2024-02-23 Thread Joan g
Hi Community,

I need some suggestions on using a 3-node MariaDB *Galera Cluster or Percona
XtraDB Cluster* for the CloudStack databases.

In my setup the databases are behind a LB and writes happen only to a
single node.
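
For reference, the LB does something along these lines (a minimal HAProxy
sketch; the IPs are placeholders):

  # writes go to db1 only; db2/db3 are hot standbys
  listen mariadb-galera
      bind 10.0.0.10:3306
      mode tcp
      server db1 10.0.0.11:3306 check
      server db2 10.0.0.12:3306 check backup
      server db3 10.0.0.13:3306 check backup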

With a new CloudStack 4.18.1 install, the initial database migration always
fails because of schema update/sync issues with the other DB nodes.

Logs in the MySQL error log:
2024-02-23T12:55:15.521278Z 17 [ERROR] [MY-010584] [Repl] Replica SQL: Error
'Duplicate column name 'display'' on query. Default database: 'cloud'.
Query: 'ALTER TABLE cloud.guest_os ADD COLUMN display tinyint(1) DEFAULT '1'
COMMENT 'should this guest_os be shown to the end user'', Error_code: MY-001060

Due to this, CloudStack initialisation always fails.

Can someone point me to a suggested method for DB HA?

Jon


Re: [D] EVPN-VXLAN - IPv6 via SLAAC [cloudstack]

2024-02-23 Thread via GitHub


GitHub user tobzsc added a comment to the discussion: EVPN-VXLAN - IPv6 via 
SLAAC

Finally, we found the problem, which is related to VXLAN flags. When IPv6
multicast packets enter our fabric, the VXLAN packet somehow gets the flags
`0x0a00` instead of `0x0800`, so the kernel ignores the header and drops the
packet. See the corresponding code fragment here:
https://elixir.bootlin.com/linux/v5.14.21/source/drivers/net/vxlan.c#L1905

This seems to be a problem with SONiC itself and we will investigate further.

The temporary fix is to rewrite the flags byte back to `0x08` on ingress; offset 28, counted from the start of the IP header (20-byte IP header plus 8-byte UDP header), is the first byte of the VXLAN header, i.e. the flags field:
```
tc qdisc add dev ens1f0np0 clsact
tc filter add dev ens1f0np0 ingress pref 1 proto ip flower ip_proto udp \
  dst_port 4789 action pedit munge offset 28 u8 set 0x08
tc qdisc add dev ens1f1np1 clsact
tc filter add dev ens1f1np1 ingress pref 1 proto ip flower ip_proto udp \
  dst_port 4789 action pedit munge offset 28 u8 set 0x08
```
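
You can verify the filters are installed with `tc filter show dev ens1f0np0 ingress`.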



GitHub link: 
https://github.com/apache/cloudstack/discussions/8685#discussioncomment-8568633





Re: Experience on GPU Support?

2024-02-23 Thread Ivan Kud
Another way to deal with it is to use KVM agent hooks (this is code I
implemented specifically to deal with GPUs and VM-dedicated drives):
https://github.com/apache/cloudstack/blob/8f6721ed4c4e1b31081a951c62ffbe5331cf16d4/agent/conf/agent.properties#L123

You can implement the logic in Groovy to modify the domain XML during VM
start, in order to support extra devices outside of CloudStack management.
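
For illustration, a minimal sketch of such a hook, assuming the
libvirt_vm_xml_transformer hook is enabled in agent.properties and calls a
Groovy method that takes the domain XML as a String and returns the modified
XML (the method name, file path and PCI address below are illustrative, not
authoritative):

  // e.g. /etc/cloudstack/agent/hooks/libvirt-vm-xml-transformer.groovy
  String transform(Object logger, String xml) {
      // inject a GPU passthrough <hostdev> before the closing </devices> tag
      String hostdev = '''
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
          </source>
        </hostdev>'''
      return xml.replace('</devices>', hostdev + '\n</devices>')
  }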

On Fri, Feb 23, 2024 at 2:36 PM Jorge Luiz Correa wrote:

> Hi Bryan! We are using it here, but in a different way, customized for our
> environment and using the features of CloudStack as far as possible. In the
> documentation we can see support for some GPU models that are a little old
> today.
>
> We are using PCI passthrough. All hosts with GPUs are configured to boot
> with IOMMU and vfio-pci, not loading the kernel modules for each GPU.
>
> Then, we create a serviceoffering to describe VMs that will have a GPU. In
> this serviceoffering we use the serviceofferingdetails[1].value field to
> insert a block of configuration related to the GPU. It is something like a
> libvirt <hostdev> device block with an <address type='pci'> element that
> describes the PCI bus of each GPU. Then, we use tags to force this
> computeoffering to run only on hosts with GPUs.
>
> We create a CloudStack cluster with a lot of hosts equipped with GPUs. When
> a user needs a VM with a GPU, he/she should use the created computeoffering.
> The VM will be instantiated on some host of the cluster and the GPUs are
> passed through to the VM.
>
> There is no control executed by CloudStack. For example, it can try to
> instantiate a VM on a host whose GPU is already being used (which will
> fail). Our approach is that the ROOT admin always controls that creation.
> We launch VMs using all GPUs from the infrastructure, then use a queue
> manager to run jobs on those VMs with GPUs. When a user needs a dedicated
> VM to develop something, we can shut down a VM that is already running (as
> a processor node of the queue manager) and then create the dedicated VM,
> which uses the GPUs in isolation.
>
> There are some possibilities when using GPUs. For example, some models
> support virtualization, where we can divide a GPU. In that case, CloudStack
> would need to support that: it would manage the driver, creating virtual
> GPUs based on input from the user, such as memory size. Then, it would
> manage the hypervisor to pass the virtual GPU through to the VM.
>
> Another possibility that would help us in our scenario is some control over
> PCI buses in hosts. For example, it would be great if CloudStack could check
> whether a PCI device is being used on some host and then use this
> information in VM scheduling. CloudStack could launch VMs on a host that has
> a free PCI address. This would be useful not only for GPUs, but for any PCI
> device.
>
> I hope this can help in some way, to think of new scenarios etc.
>
> Thank you!
>
> On Thu, 22 Feb 2024 at 07:56, Bryan Tiang <bryantian...@hotmail.com> wrote:
>
> > Hi Guys,
> >
> > Anyone running CloudStack with GPU support in production? Say NVIDIA H100
> > or AMD MI300X?
> >
> > Just want to know if support for this is still ongoing, or if anyone is
> > running a cloud business with GPUs.
> >
> > Regards,
> > Bryan
> >
>


-- 
With best regards, Ivan Kudriavtsev
BWSoft Management LLC
Cell AM: +374-43-047-914
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ 


Re: KVM VM consoles connection timing out due to authentication failure

2024-02-23 Thread Wei ZHOU
Hi Kapil,

If you run CloudStack in FIPS mode, VNC console access does not work for now.

According to
https://qemu-project.gitlab.io/qemu/system/vnc-security.html#with-passwords
Password authentication is not supported when operating in FIPS 140-2
compliance mode as it requires the use of the DES cipher.

However, CloudStack generates a VNC password for every VM, and VMs are
started on the hypervisors (for example, KVM hosts) with that VNC password.
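
A quick way to confirm whether a host runs in FIPS mode (this sysctl is
standard on Linux; it prints 1 when FIPS mode is enabled):

  cat /proc/sys/crypto/fips_enabled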


-Wei

On Tue, Feb 20, 2024 at 9:55 AM Kapil Bhuskute wrote:

> Hello,
> We have a POD set up with the latest CloudStack 4.19 version, and the
> schema with a Zone, POD, and cluster has been set up using an Advanced
> Shared networking architecture.
> We have a couple of VMs spun up on this environment now and were trying to
> access their VM consoles via the VNC port. However, it has been observed
> that the VM console connections are timing out due to failed-authentication
> errors seen in the console proxy System VM logs.
>
> The VM console is not working. We have MySQL SSL enabled and console proxy
> SSL is disabled for now.
> It fails to connect / the access token expires.
> While checking the logs on the CPVM, we see VNC auth failures:
> 2024-02-13 16:12:12,926 INFO [vnc.security.VncTLSSecurity]
> (Thread-86:null) Processing VNC TLS security
> 2024-02-13 16:12:12,930 INFO [utils.nio.Link] (Thread-86:null) Conf file
> found: /usr/local/cloud/systemvm/conf/agent.properties
> 2024-02-13 16:12:12,964 INFO [vnc.security.VncAuthSecurity]
> (Thread-83:null) Finished VNCAuth security
> 2024-02-13 16:12:12,966 ERROR [consoleproxy.vnc.NoVncClient]
> (Thread-83:null) Connection to VNC server failed: wrong password.
> 2024-02-13 16:12:12,966 ERROR [consoleproxy.vnc.NoVncClient]
> (Thread-83:null) Connection to VNC server failed: wrong password. - Reason:
> Authentication failed
> 2024-02-13 16:12:13,164 INFO [vnc.security.VncAuthSecurity]
> (Thread-86:null) VNC server requires password authentication
> 2024-02-13 16:12:13,184 INFO [vnc.security.VncAuthSecurity]
> (Thread-86:null) Finished VNCAuth security
>
>
> Kindly suggest if anyone is aware of any fix for this.
>
> Regards,
> Kapil B
>


Re: Experience on GPU Support?

2024-02-23 Thread Jorge Luiz Correa
Hi Bryan! We are using it here, but in a different way, customized for our
environment and using the features of CloudStack as far as possible. In the
documentation we can see support for some GPU models that are a little old
today.

We are using PCI passthrough. All hosts with GPUs are configured to boot
with IOMMU and vfio-pci, not loading the kernel modules for each GPU.
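
For reference, the boot configuration is along these lines (a sketch; the
vendor:device IDs are placeholders for the actual GPUs):

  # /etc/default/grub on the GPU hosts; regenerate the grub config and reboot
  GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=10de:20b0,10de:1db6"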

Then, we create a serviceoffering to describe VMs that will have a GPU. In
this serviceoffering we use the serviceofferingdetails[1].value field to
insert a block of configuration related to the GPU. It is something like a
libvirt <hostdev> device block with an <address type='pci'> element that
describes the PCI bus of each GPU. Then, we use tags to force this
computeoffering to run only on hosts with GPUs.
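
As an illustration, creating such an offering could look like this (a hedged
CloudMonkey sketch; the names, host tag and details key are assumptions, and
the XML value may need URL-encoding):

  cmk create serviceoffering name=gpu-16c-64g displaytext="16 vCPU, 64 GB, GPU" \
    cpunumber=16 cpuspeed=2000 memory=65536 hosttags=gpu \
    serviceofferingdetails[0].key=extraconfig-1 \
    serviceofferingdetails[0].value="<hostdev mode='subsystem' type='pci' managed='yes'><source><address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/></source></hostdev>"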

We create a CloudStack cluster with a lot of hosts equipped with GPUs. When
a user needs a VM with a GPU, he/she should use the created computeoffering.
The VM will be instantiated on some host of the cluster and the GPUs are
passed through to the VM.

There is no control executed by CloudStack. For example, it can try to
instantiate a VM on a host whose GPU is already being used (which will fail).
Our approach is that the ROOT admin always controls that creation. We launch
VMs using all GPUs from the infrastructure, then use a queue manager to run
jobs on those VMs with GPUs. When a user needs a dedicated VM to develop
something, we can shut down a VM that is already running (as a processor node
of the queue manager) and then create the dedicated VM, which uses the GPUs
in isolation.

There are some possibilities when using GPUs. For example, some models
support virtualization, where we can divide a GPU. In that case, CloudStack
would need to support that: it would manage the driver, creating virtual GPUs
based on input from the user, such as memory size. Then, it would manage the
hypervisor to pass the virtual GPU through to the VM.

Another possibility that would help us in our scenario is some control over
PCI buses in hosts. For example, it would be great if CloudStack could check
whether a PCI device is being used on some host and then use this information
in VM scheduling. CloudStack could launch VMs on a host that has a free PCI
address. This would be useful not only for GPUs, but for any PCI device.

I hope this can help in some way, to think of new scenarios etc.

Thank you!

On Thu, 22 Feb 2024 at 07:56, Bryan Tiang wrote:

> Hi Guys,
>
> Anyone running CloudStack with GPU support in production? Say NVIDIA H100
> or AMD MI300X?
>
> Just want to know if support for this is still ongoing, or if anyone is
> running a cloud business with GPUs.
>
> Regards,
> Bryan
>



RE: VMware Import Timeout

2024-02-23 Thread Kevin Seales
That setting is currently set to 8 hours, but I agree, that is for the 
conversion process.

I can connect to vCenter from the management server.  Nothing is being blocked 
there.  In the management server logs I can see each VM, disk and networking 
information being pulled one at a time.  The failure message occurs at exactly 
10 minutes, but the logs will continue to pull data from vCenter.  

I opened Chrome's developer tools and found this error:

Error: timeout of 60ms exceeded  - plugins.js:213
  at e.exports (createError.js:16:15)
  at c.ontimeout (xhr.js:111:14)

The vCenter I'm connecting to has a little over 1k VMs/templates.  The logs
indicate it takes its time with each VM, so hitting the timeout makes sense.
I just need to find where that timeout is set and increase it.

-Original Message-
From: Nux  
Sent: Friday, February 23, 2024 7:58 AM
To: users@cloudstack.apache.org
Cc: Kevin Seales 
Subject: Re: VMware Import Timeout


I think the setting you tried to change is 
convert.vmware.instance.to.kvm.timeout, but that has to do with the conversion 
process itself.
If you try with telnet or curl from the shell of the CloudStack management
server, can you reach the vCenter?

On 2024-02-22 15:35, Kevin Seales wrote:
> We are trying to use the "Import-Export Instances" tool in ACS to test 
> migration from VMware to ACS.  After selecting "List VMware 
> Instances", it hangs for 10 minutes, then ACS gives a very detailed 
> error saying "Request Failed."  The management logs show ACS is still 
> receiving data from vCenter for another 2 or 3 minutes after the 
> failure message.  I'm assuming we are hitting a timeout somewhere.  I
> tried adjusting what I could find under global settings that may be
> related, but the error still occurs. Does anyone know how we can resolve
> this issue?


Re: VMware Import Timeout

2024-02-23 Thread Nux
I think the setting you tried to change is 
convert.vmware.instance.to.kvm.timeout, but that has to do with the 
conversion process itself.
If you try with telnet or curl from the shell of the CloudStack management
server, can you reach the vCenter?
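
For example (hostname is a placeholder):

  curl -vk https://vcenter.example.com/sdk
  telnet vcenter.example.com 443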


On 2024-02-22 15:35, Kevin Seales wrote:
We are trying to use the "Import-Export Instances" tool in ACS to test 
migration from VMware to ACS.  After selecting "List VMware Instances", 
it hangs for 10 minutes, then ACS gives a very detailed error saying 
"Request Failed."  The management logs show ACS is still receiving data 
from vCenter for another 2 or 3 minutes after the failure message.  I'm 
assuming we are hitting a timeout somewhere.  I tried adjusting what I
could find under global settings that may be related, but the error
still occurs. Does anyone know how we can resolve this issue?


Re: CKS Storage Provisioner Info

2024-02-23 Thread Bharat Bhushan Saini
Hi Community/Vivek,

I created a discussion thread on GitHub.

Kindly check the information at https://github.com/apache/cloudstack/issues/8695

Thanks and Regards,
Bharat Saini


From: Vivek Kumar 
Date: Friday, 23 February 2024 at 12:50 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info

Hello Bharat,

Can you try to install the plugin manually and see if that works or not?


Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 




> On 23-Feb-2024, at 11:16 AM, Bharat Bhushan Saini wrote:
>
> Hi Wei/Vivek/Kiran,
>
> Kindly lookout in this.
>
> Thanks and Regards,
> Bharat Saini
>
>
>
> From: Bharat Bhushan Saini
> Date: Friday, 23 February 2024 at 12:56 AM
> To: users@cloudstack.apache.org
> Subject: Re: CKS Storage Provisioner Info
>
> Hi Vivek,
>
> I deployed CloudStack version 4.18.1.
> I think it should be present in this version.
>
> Thanks and Regards,
> Bharat Saini
>
>
>
> From: Vivek Kumar
> Date: Friday, 23 February 2024 at 12:48 AM
> To: users@cloudstack.apache.org
> Subject: Re: CKS Storage Provisioner Info
>
> Hello Bharat,
>
> Then you will have to deploy the cloudstack-kubernetes-provider; however,
> after 4.16 it is automatically deployed. Follow the instructions at
> https://github.com/apache/cloudstack-kubernetes-provider
>
>
> 1- Create a file called cloud-config and put in the information below (you
> can create a user under your account and provide its API key and secret key):
>
>
> [Global]
> api-url = <your CloudStack API URL, e.g. https://cloud.example.com/client/api>
> api-key = <the user's API key>
> secret-key = <the user's secret key>
>
>
> 2- kubectl -n kube-system create secret generic cloudstack-secret --from-file=cloud-config
>
> 3- Then deploy the controller:
>
> You can then use the provided example deployment.yaml from the repository
> to deploy the controller:
>
> kubectl apply -f deployment.yaml
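>
> Once applied, something like this should show the provider pod running (the
> pod name/label is an assumption, adjust the grep as needed):
>
> kubectl -n kube-system get pods | grep -i cloudstack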
>
>
>
>
> Vivek Kumar
> Sr. Manager - Cloud & DevOps
> TechOps | Indiqus Technologies
>
> vivek.ku...@indiqus.com
> www.indiqus.com
>
>
>
>
> > On 23-Feb-2024, at 12:08 AM, Bharat Bhushan Saini wrote:
> >
> > Hi Vivek,
> >
> > I am glad and thankful to you for sharing the information on configuring
> > the CSI driver.
> >
> > But I don't have a cloudstack-secret in my cluster; sharing the details below:
> >
> > cloud@Kspot-App-control-18dd041902a:~$ kubectl get secret cloudstack-secret -n kube-system
> > Error from server (NotFound): secrets "cloudstack-secret" not found
> > cloud@Kspot-App-control-18dd041902a:~$ kubectl get secret -A -n kube-system
> > NAMESPACE              NAME                              TYPE                                  DATA   AGE
> > kloudspot              dockerregistrykey                 kubernetes.io/dockerconfigjson        1      8h
> > kube-system            bootstrap-token-cb0b7f            bootstrap.kubernetes.io/token         6      8h
> > kubernetes-dashboard   kubernetes-dashboard-certs        Opaque                                0      8h
> > kubernetes-dashboard   kubernetes-dashboard-csrf         Opaque                                1      8h
> > kubernetes-dashboard   kubernetes-dashboard-key-holder   Opaque                                2      8h
> > kubernetes-dashboard   kubernetes-dashboard-token        kubernetes.io/service-account-token   3      8h
> > cloud@Kspot-App-control-18dd041902a:~$
> >
> > I am using v1.28.4 k8s version ISO for the cluster, storage offering and 
> > network is in shared mode and 

Re: [D] Changing compute offering / scaling VM and root disk [cloudstack]

2024-02-23 Thread via GitHub


GitHub user NuxRo added a comment to the discussion: Changing compute offering 
/ scaling VM and root disk

@DaanHoogland perhaps `resizeVolume` is the culprit here.

GitHub link: 
https://github.com/apache/cloudstack/discussions/8578#discussioncomment-8565785

