Problem using the Kubernetes plugin.
Hello all. I'm here again to ask for help. I'm using CloudStack 4.15, and it is working well with user accounts, instances, firewall rules and so on. The problem is when I try to use the Kubernetes plugin, following the manual at http://docs.cloudstack.apache.org/en/latest/plugins/cloudstack-kubernetes-service.html and downloading the official project ISO. Every time, even after creating a new network offering with the egress rule fully permitted (which was suggested when I searched on the Internet), I get the same error: "Failed to setup Kubernetes cluster : kubercluster05 in usable state as unable to provision API endpoint for the cluster". The instances, master and nodes, are created normally and respond to SSH connections (even though I do not know the password, one is asked for), but the cluster stops with the Error status in the UI. Does anyone have any idea what is going on, or what I am doing wrong? Best regards.
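One thing worth checking, in case it helps: as far as I understand, "unable to provision API endpoint" suggests the management server could not reach the cluster's Kubernetes API (port 6443 by default) through the network's firewall/port-forwarding rules. A small Python sketch to test TCP reachability from the management server; the IP address below is a placeholder, not a value from any real setup:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace the placeholder IP with the cluster's public endpoint address,
# then check the default Kubernetes API server port:
print(is_port_open("203.0.113.10", 6443, timeout=1.0))
```

If this returns False from the management server, the firewall or port-forwarding rules on the network's virtual router would be the first place to look.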
CloudStack Usage and Quota plugin. How does it work?
Hello all. I'm using the Usage and Quota plugin to try out billing for my future customers' accounts, but I don't understand the calculation and the results it is showing me, like these two below:

Balance: -599755.85
Quota: 599755.85

What is "Balance"? Why is the number negative? Is it possible to make the Quota number shorter? Best regards.
CloudStack quota, billing, usage_discriminator... How does the Usage, or Quota, plugin make the calculation?
Hello all. I'm trying to test the Usage, or Quota, plugin to obtain values and produce billing with the CloudStack deployment we are testing here at my company, and I do not understand how it does the accounting. I needed to change the values directly in the database, so I'm not sure whether that is the best place to do it, and the result is not what I expected. Here, in the image, is one example of my "billing". Why is this value negative? Where, and how, is the calculation made? Am I looking in the right place? Best regards
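My current understanding of the negative value (an assumption based on the plugin's observed behaviour, not on its code): the quota balance starts from whatever credits were added to the account, and each usage record subtracts its cost, so with no credits ever added the balance simply goes negative as usage accumulates. A sketch of that arithmetic with made-up numbers:

```python
def quota_balance(initial_credits: float, usage_costs: list) -> float:
    """Balance = credits added to the account minus the accumulated
    cost of all usage records charged against it."""
    return initial_credits - sum(usage_costs)

# Made-up usage costs; with zero credits, the balance is the total
# usage ("Quota") negated.
usage = [199918.62, 199918.62, 199918.61]
print(round(quota_balance(0.0, usage), 2))   # a negative balance
print(round(sum(usage), 2))                  # the accumulated usage total
```

Under that assumption, adding credits via the quota credits API would bring the balance back toward zero or positive.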
Re: CloudStack billing GUI. Is there one?
Hello all. Thanks Rafael, I think that will help a lot. But I need further help. Where can I change the billing values? I went to Quota Tariff and tried to change the values, but it is as if it were not enabled; nothing happened. Can anyone tell me, or send me any documentation that shows, how and where I can change the tariff values? Best regards.

On 12/07/2021 10:15, Rafael Weingärtner wrote: You can use the quota plugin for that purpose :) On Mon, Jul 12, 2021 at 9:35 AM Kalil de Albuquerque Carvalho <mailto:kalil.carva...@hybriddc.com.br>> wrote: Hello all. Studying CloudStack, I learned that the Usage service has the goal of supporting a billing service. My doubt, since I didn't find anything, is whether there is any GUI that we can give our customers so they can see and understand what they are using and paying for. Is there any free billing GUI that I can use with CloudStack/Usage? Best regards. -- Rafael Weingärtner
CloudStack billing GUI. Is there one?
Hello all. Studying CloudStack, I learned that the Usage service has the goal of supporting a billing service. My doubt, since I didn't find anything, is whether there is any GUI that we can give our customers so they can see and understand what they are using and paying for. Is there any free billing GUI that I can use with CloudStack/Usage? Best regards.
Re: DEBUG message: Cannot release reservation, Found VM: VM[User|i-5-106-VM] Stopped but reserved on host 1. How to fix it?
Hello Suresh. Thanks for the tip; it worked fine. Best regards.

On 28/06/2021 01:37, Suresh Anaparti wrote: Hi Kalil, Can you check the config setting 'capacity.skipcounting.hours', which is the wait time before releasing the VM resources? The setting 'host.reservation.release.period' is the time interval to check and release the host reservations, but the resources are not released until the duration specified through the 'capacity.skipcounting.hours' setting has elapsed after the VM is stopped. Regards, Suresh

On 25/06/21, 7:28 PM, "Kalil de Albuquerque Carvalho" wrote: Hello all. Some days ago I asked how to release the resources when my instances are halted. A member replied to me with the "host.reservation.release.period" setting in the global configuration. I changed the setting to 5 and rebooted the CloudStack service, but it is not releasing the resources, and it is showing me these messages in the management server log:

2021-06-25 09:45:54,680 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Checking if any host reservation can be released ...
2021-06-25 09:45:54,686 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found VM: VM[User|i-5-106-VM] Stopped but reserved on host 1
2021-06-25 09:45:54,691 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found VM: VM[User|i-2-105-VM] Stopped but reserved on host 3
2021-06-25 09:45:54,695 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found 1 VMs Running on host 4
2021-06-25 09:45:54,700 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found 3 VMs Running on host 5

This situation keeps going and the resources are not released.
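To restate Suresh's explanation as a sketch (my paraphrase of the behaviour, not the actual CloudStack code): the checker thread runs every 'host.reservation.release.period', but a stopped VM's reservation only becomes releasable once 'capacity.skipcounting.hours' have passed since the VM stopped. The 24-hour window below is a made-up example value, not a claimed default:

```python
from datetime import datetime, timedelta

def can_release_reservation(stopped_at: datetime,
                            now: datetime,
                            skipcounting_hours: float) -> bool:
    """A stopped VM's host reservation is releasable only after the
    capacity.skipcounting.hours wait time has elapsed since the stop."""
    return now - stopped_at >= timedelta(hours=skipcounting_hours)

stopped = datetime(2021, 6, 25, 9, 0)
# One hour after stopping, with a hypothetical 24-hour window:
print(can_release_reservation(stopped, stopped + timedelta(hours=1), 24))   # False
# Twenty-five hours after stopping:
print(can_release_reservation(stopped, stopped + timedelta(hours=25), 24))  # True
```

So lowering 'host.reservation.release.period' alone only makes the check run more often; the wait time itself comes from 'capacity.skipcounting.hours'.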
All my instances are halted; just the system VMs and virtual routers are running. What am I doing wrong? Where can I fix this? My environment is: CloudStack: 4.15, Distro: CentOS 7.9, Kernel: 3.10.0-1160.25.1.el7.x86_64. Best regards.
DEBUG message: Cannot release reservation, Found VM: VM[User|i-5-106-VM] Stopped but reserved on host 1. How to fix it?
Hello all. Some days ago I asked how to release the resources when my instances are halted. A member replied to me with the "host.reservation.release.period" setting in the global configuration. I changed the setting to 5 and rebooted the CloudStack service, but it is not releasing the resources, and it is showing me these messages in the management server log:

2021-06-25 09:45:54,680 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Checking if any host reservation can be released ...
2021-06-25 09:45:54,686 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found VM: VM[User|i-5-106-VM] Stopped but reserved on host 1
2021-06-25 09:45:54,691 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found VM: VM[User|i-2-105-VM] Stopped but reserved on host 3
2021-06-25 09:45:54,695 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found 1 VMs Running on host 4
2021-06-25 09:45:54,700 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (HostReservationReleaseChecker:ctx-6a943f41) (logid:fa97e34d) Cannot release reservation, Found 3 VMs Running on host 5

This situation keeps going and the resources are not released. All my instances are halted; just the system VMs and virtual routers are running. What am I doing wrong? Where can I fix this? My environment is: CloudStack: 4.15, Distro: CentOS 7.9, Kernel: 3.10.0-1160.25.1.el7.x86_64. Best regards.
Why are my resources not being released when my machines are down?
Hello all. I'm testing CloudStack 4.15 and it is strange (or, probably, I'm making some mistake): when I halt my VMs, the resources are not released back to the system. It is as if, when the VM is created, the resource is reserved exclusively for it. Can someone tell me where I can find the documentation with this information and how to work with it? Can anyone help me understand what is happening? Best regards.
Re: Predictions or studies for KVM live migration.
Hello Andrija. Sorry for the confusion. Instance migration of just the VM itself, memory and vCPUs, works fine, on the fly. But if a volume needs to be migrated from one primary storage to another, the VM must be halted. I have tested migration from a Gluster primary storage to an NFS primary storage, and the reverse, and it worked well, but only with powered-off VMs. Best regards.

On 27/05/2021 20:18, Andrija Panic wrote: I understood you said that LIVE storage migration (migrate VM's volumes (with the VM)) works while the VM is RUNNING. Are you now saying this is NOT working (which is what I would expect), and that only stopped VM migration is possible from Gluster to NFS? best,

On Wed, 26 May 2021 at 22:04, Kalil de Albuquerque Carvalho <mailto:kalil.carva...@hybriddc.com.br>> wrote: Hello Andrija. Gluster as primary storage works fine. Storage migration, with powered-off VMs, is working too. My problem is just doing this with VMs running. I'm using Ubuntu 20.04, CentOS 7, and Windows 10 and 2016 for testing, and it is not working. But that is life; thanks all. Best regards,

On 26/05/2021 16:41, Andrija Panic wrote: I thought I replied to this one, but I don't see my email... So, from CEPH/NFS to SolidFire it should work (in this direction only) - or let me say "used to work" (haven't tested it recently) - this was developed for my ex-company, where I used to work, by Mike Tutkowski from NetApp). Also, my understanding is that it's also possible to migrate VMs using local storage from host to host (the whole VM with its disks) - @Gabriel Beims Bräscher <mailto:gabrasc...@gmail.com> can confirm this, afaik? If you are using Ubuntu - all fine - qemu-kvm supports live storage migrations from Ubuntu 14.04 at least, and onwards.
If you are using CentOS 7, you have to use qemu-kvm-ev from the oVirt repo ONLY - all other versions of qemu-kvm do NOT support storage live migration (Red Hat revoked it for $$$ reasons, while it was working fine in CentOS 6). If you tested it and it worked from Gluster to NFS - that's (great) news (for me). Hope that helps, Cheers,

On Wed, 26 May 2021 at 20:06, Wido den Hollander <mailto:w...@widodh.nl>> wrote: On 26/05/2021 13:55, Kalil de Albuquerque Carvalho wrote: > Hello Wido. > > Sorry about that. I was not clear, or there was some misunderstanding. > > Doing some corrections: I have tested migration from Gluster to NFS, and > the reverse, and everything worked well. So, please, disregard this > part of my question. I should have done this test before asking the question. > > My question, now, is when support will come, if it will, for running > VMs. Today, I'm testing version 4.15, and it works only with powered-off > VMs. > Aha, you mean live storage migration between different types of primary storage. That is indeed not supported with KVM and also not on the roadmap at the moment. Wido > Best regards. > > On 26/05/2021 04:08, Wido den Hollander wrote: >> >> >> On 25/05/2021 13:32, Kalil de Albuquerque Carvalho wrote: >>> Hello all. >>> >>> Reading the manual I discovered that live migration is not supported for >>> the KVM hypervisor. I was wondering if there are studies or predictions for >>> this feature on KVM hosts. >>> >> >> Where did you read this? Live migration with the KVM hypervisor works >> just fine. >> >> Wido >> >>> Also, in the manual citation, it said that migration can only occur >>> from CEPH/NFS to "SolidFire Managed Storage". In my tests we are >>> using Gluster as primary storage, and no storage appears to >>> migrate to. We created two different primary storages for this kind >>> of test. Is that correct - will migration in this case only occur >>> from/to CEPH/NFS? If yes, will some future release make migration >>> between Gluster storages possible? >>> >>> Best regards. >>> >> -- Andrija Panić -- Andrija Panić
Re: Predictions or studies for KVM live migration.
Hello Andrija. Gluster as primary storage works fine. Storage migration, with powered-off VMs, is working too. My problem is just doing this with VMs running. I'm using Ubuntu 20.04, CentOS 7, and Windows 10 and 2016 for testing, and it is not working. But that is life; thanks all. Best regards,

On 26/05/2021 16:41, Andrija Panic wrote: I thought I replied to this one, but I don't see my email... So, from CEPH/NFS to SolidFire it should work (in this direction only) - or let me say "used to work" (haven't tested it recently) - this was developed for my ex-company, where I used to work, by Mike Tutkowski from NetApp). Also, my understanding is that it's also possible to migrate VMs using local storage from host to host (the whole VM with its disks) - @Gabriel Beims Bräscher <mailto:gabrasc...@gmail.com> can confirm this, afaik? If you are using Ubuntu - all fine - qemu-kvm supports live storage migrations from Ubuntu 14.04 at least, and onwards. If you are using CentOS 7, you have to use qemu-kvm-ev from the oVirt repo ONLY - all other versions of qemu-kvm do NOT support storage live migration (Red Hat revoked it for $$$ reasons, while it was working fine in CentOS 6). If you tested it and it worked from Gluster to NFS - that's (great) news (for me). Hope that helps, Cheers,

On Wed, 26 May 2021 at 20:06, Wido den Hollander <mailto:w...@widodh.nl>> wrote: On 26/05/2021 13:55, Kalil de Albuquerque Carvalho wrote: > Hello Wido. > > Sorry about that. I was not clear, or there was some misunderstanding. > > Doing some corrections: I have tested migration from Gluster to NFS, and > the reverse, and everything worked well. So, please, disregard this > part of my question. I should have done this test before asking the question. > > My question, now, is when support will come, if it will, for running > VMs. Today, I'm testing version 4.15, and it works only with powered-off > VMs. > Aha, you mean live storage migration between different types of primary storage. That is indeed not supported with KVM and also not on the roadmap at the moment. Wido > Best regards. > > On 26/05/2021 04:08, Wido den Hollander wrote: >> >> >> On 25/05/2021 13:32, Kalil de Albuquerque Carvalho wrote: >>> Hello all. >>> >>> Reading the manual I discovered that live migration is not supported for >>> the KVM hypervisor. I was wondering if there are studies or predictions for >>> this feature on KVM hosts. >>> >> >> Where did you read this? Live migration with the KVM hypervisor works >> just fine. >> >> Wido >> >>> Also, in the manual citation, it said that migration can only occur >>> from CEPH/NFS to "SolidFire Managed Storage". In my tests we are >>> using Gluster as primary storage, and no storage appears to >>> migrate to. We created two different primary storages for this kind >>> of test. Is that correct - will migration in this case only occur >>> from/to CEPH/NFS? If yes, will some future release make migration >>> between Gluster storages possible? >>> >>> Best regards. >>> >> -- Andrija Panić
Re: Predictions or studies for KVM live migration.
Hello Wido. Sorry about that. I was not clear, or there was some misunderstanding. Doing some corrections: I have tested migration from Gluster to NFS, and the reverse, and everything worked well. So, please, disregard this part of my question. I should have done this test before asking the question. My question, now, is when support will come, if it will, for running VMs. Today, I'm testing version 4.15, and it works only with powered-off VMs. Best regards.

On 26/05/2021 04:08, Wido den Hollander wrote: On 25/05/2021 13:32, Kalil de Albuquerque Carvalho wrote: Hello all. Reading the manual I discovered that live migration is not supported for the KVM hypervisor. I was wondering if there are studies or predictions for this feature on KVM hosts. Where did you read this? Live migration with the KVM hypervisor works just fine. Wido Also, in the manual citation, it said that migration can only occur from CEPH/NFS to "SolidFire Managed Storage". In my tests we are using Gluster as primary storage, and no storage appears to migrate to. We created two different primary storages for this kind of test. Is that correct - will migration in this case only occur from/to CEPH/NFS? If yes, will some future release make migration between Gluster storages possible? Best regards.
Re: Predictions or studies for KVM live migration.
Hello all. Doing some corrections: I have tested migration from Gluster to NFS, and the reverse, and everything worked well. So, please, disregard this part of my question. I should have done this test before asking the question. Best regards,

On 25/05/2021 08:32, Kalil de Albuquerque Carvalho wrote: Hello all. Reading the manual I discovered that live migration is not supported for the KVM hypervisor. I was wondering if there are studies or predictions for this feature on KVM hosts. Also, in the manual citation, it said that migration can only occur from CEPH/NFS to "SolidFire Managed Storage". In my tests we are using Gluster as primary storage, and no storage appears to migrate to. We created two different primary storages for this kind of test. Is that correct - will migration in this case only occur from/to CEPH/NFS? If yes, will some future release make migration between Gluster storages possible? Best regards.
Predictions or studies for KVM live migration.
Hello all. Reading the manual, I discovered that live migration is not supported for the KVM hypervisor. I was wondering if there are studies or predictions for this feature on KVM hosts. Also, in the manual citation, it said that migration can only occur from CEPH/NFS to "SolidFire Managed Storage". In my tests we are using Gluster as primary storage, and no storage appears to migrate to. We created two different primary storages for this kind of test. Is that correct - will migration in this case only occur from/to CEPH/NFS? If yes, will some future release make migration between Gluster storages possible? Best regards.
Problem listing hosts in the UI. CloudStack 4.15.
Hello all. I'm new to CloudStack; the project in my company is to use it for professional purposes. The problem is that I'm trying a fresh install, and after everything is working, when I add the first host, the UI starts to show me an error, "Request Failed (413) For input string" and some numbers. On the page, "404 Not Found." appears. The management.log just shows a warning:

*"WARN [c.c.a.d.ParamGenericValidationWorker] (qtp769798433-319:ctx-fcef6a8b ctx-0321de05) (logid:2aeb6cc1) Received unknown parameters for command listHostsMetrics. Unknown parameters : listall"*

Researching the lists, I found the following instruction: set "add.host.on.service.restart.kvm" to "false". I tried it, and restarted even the whole server, but it is not working. Here is my environment:

The management server:
Distro: CentOS 7.9.2009 (Core)
Kernel: 3.10.0-1160.25.1.el7.x86_64
CloudStack packages: cloudstack-common.x86_64 and cloudstack-management.x86_64, both version 4.15.0.0-1.el7
MySQL server: mysql-community-client.x86_64, mysql-community-common.x86_64, mysql-community-libs.x86_64, mysql-community-server.x86_64 and mysql-connector-python.x86_64, all version 5.6.51-2.el7, plus mysql-community-release.noarch version el7-5
Java: java-11-openjdk.x86_64 (the only Java installed)

The hosts:
Distro: CentOS 7.9.2009 (Core)
Kernel: 3.10.0-1160.25.1.el7.x86_64
CloudStack packages: cloudstack-agent.x86_64 and cloudstack-common.x86_64, both version 4.15.0.0-1.el7
Java: java-11-openjdk.x86_64 (the only Java installed)

Does anyone have an idea what I'm doing wrong? Best regards.