Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-04-16 Thread Alex Mattioli
Hi all,

I'd like to brainstorm dynamic routing in ACS (yes, again... for the newcomers 
to this mailing list: this has been discussed multiple times in the past 10+ 
years).
ACS 4.17 introduced routed mode for IPv6 in Isolated networks and VPCs. We 
are currently working on extending that to IPv4 as well, which will support 
both the current NAT'ed mode and a routed mode (inspired by the NSX integration 
https://www.youtube.com/watch?v=f7ao-vv7Ahk).

With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, 
with the operator being responsible for adding static routes to the Isolated 
network or VPC tiers via the "public" (outside) IP of the virtual router.

The next step on this journey is to add some kind of dynamic routing. One way 
that I have in mind is using dynamic BGP:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled, an AS number is allocated
4 - ACS configures the BGP session on the VR, advertising all its connected 
networks

This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to allow dynamic BGP peering from the pool of AS numbers 
presented to the zone).
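To make steps 1-4 concrete, here is a sketch of what the BGP configuration 
rendered on the VR could look like, assuming an FRR-style daemon (this is not 
something stock ACS ships today; every ASN, address and prefix below is a 
placeholder):

```
! Hypothetical config ACS might render on the VR; example values only.
router bgp 64512
 bgp router-id 203.0.113.10
 ! upstream peer configured by the operator at zone level (step 1)
 neighbor 203.0.113.1 remote-as 64500
 !
 address-family ipv4 unicast
  ! advertise the connected Isolated network / VPC tier subnets (step 4)
  network 10.10.1.0/24
  network 10.10.2.0/24
 exit-address-family
 !
 address-family ipv6 unicast
  neighbor 203.0.113.1 activate
  network 2001:db8:100::/64
 exit-address-family
```

The upstream side would then only need a permissive peer-group covering the 
zone's private-AS pool, rather than per-network configuration.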

This implementation could also be used for Shared Networks, in which case the 
destination advertised via BGP is the gateway of the shared network.

There could also be an offering where we allow end users to set up the BGP 
parameters for their Isolated or VPC networks, which can then peer with 
upstream VNF(s).

Any and all input is very welcome...

Taking the liberty to tag some of you: @Wei Zhou @Wido den Hollander @Kristaps Čudars

Cheers,
Alex

 



Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-16 Thread Wei ZHOU
See inline

On Tuesday, April 16, 2024, benoit lair  wrote:

> Ok, and so there's no interaction with CS for firewalling and public IP
> access, for example?


CAPC automatically adds some lb rules and firewall rules for the k8s
apiserver.
To support k8s services with Type=LoadBalancer, you can deploy
https://github.com/apache/cloudstack-kubernetes-provider
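As an illustration of that: a Service of type LoadBalancer (all names and 
ports below are placeholders), which the provider would fulfil by acquiring a 
public IP and creating the matching LB rules in CloudStack:

```
# Hypothetical manifest; "web" and the ports are placeholder values.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```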


> Getting it in a VPC tier for internal DMZ use?


No. VPC is not supported.
Refer to
https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/issues/314



> On Tue, Apr 16, 2024 at 11:12, Wei ZHOU  wrote:
>
> > Hi benoit,
> >
> > You can try CAPC, it should work with 4.16
> > https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack
> >
> > You need a local kind cluster to manage the CAPC clusters. They are
> > not managed by CloudStack.
> >
> >
> > -Wei
> >
> > On Fri, Apr 12, 2024 at 2:43 PM benoit lair  wrote:
> > >
> > > Hi Wei,
> > >
> > > Upgrading from 4.16 to 4.19 will be a huge amount of work for me.
> > > I have to test vpc/shared network, VRRP VPC with a public Netscaler LB
> > > and without it on several VPCs; I added support for VPX 12 and VPX 13
> > > with an nginx middleware rewriting rules from CS.
> > > I am on xcp-ng 8.2.1.
> > > I stayed on CS 4.3 for years; restarting from 4.16, I would like this
> > > time to keep an LTS in production.
> > > Is there a specific upgrade path to respect? A specific process to
> > > observe? What rollback mechanism is possible?
> > >
> > > For my K8S needs, I would like to upgrade it, but I was first looking
> > > for a preproduction K8s running 1.28 in order to integrate CI/CD
> > > pipelines.
> > > So it would not be a problem to upgrade CS after installing a k8s
> > > cluster for CI/CD.
> > >
> > > Rohit was talking about CAPC; can I use it independently of the
> > > version of ACS I use?
> > >
> >
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-16 Thread benoit lair
Ok, and so there's no interaction with CS for firewalling and public IP
access, for example?
Getting it in a VPC tier for internal DMZ use?

On Tue, Apr 16, 2024 at 11:12, Wei ZHOU  wrote:

> Hi benoit,
>
> You can try CAPC, it should work with 4.16
> https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack
>
> You need a local kind cluster to manage the CAPC clusters. They are
> not managed by CloudStack.
>
>
> -Wei
>
> On Fri, Apr 12, 2024 at 2:43 PM benoit lair  wrote:
> >
> > Hi Wei,
> >
> > Upgrading from 4.16 to 4.19 will be a huge amount of work for me.
> > I have to test vpc/shared network, VRRP VPC with a public Netscaler LB and
> > without it on several VPCs; I added support for VPX 12 and VPX 13 with
> > an nginx middleware rewriting rules from CS.
> > I am on xcp-ng 8.2.1.
> > I stayed on CS 4.3 for years; restarting from 4.16, I would like this
> > time to keep an LTS in production.
> > Is there a specific upgrade path to respect? A specific process to observe?
> > What rollback mechanism is possible?
> >
> > For my K8S needs, I would like to upgrade it, but I was first looking for
> > a preproduction K8s running 1.28 in order to integrate CI/CD pipelines.
> > So it would not be a problem to upgrade CS after installing a k8s
> > cluster for CI/CD.
> >
> > Rohit was talking about CAPC; can I use it independently of the version of
> > ACS I use?
> >
> >
>


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-16 Thread Wei ZHOU
Hi benoit,

You can try CAPC, it should work with 4.16
https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack

You need a local kind cluster to manage the CAPC clusters. They are
not managed by CloudStack.


-Wei

On Fri, Apr 12, 2024 at 2:43 PM benoit lair  wrote:
>
> Hi Wei,
>
> Upgrading from 4.16 to 4.19 will be a huge amount of work for me.
> I have to test vpc/shared network, VRRP VPC with a public Netscaler LB and
> without it on several VPCs; I added support for VPX 12 and VPX 13 with
> an nginx middleware rewriting rules from CS.
> I am on xcp-ng 8.2.1.
> I stayed on CS 4.3 for years; restarting from 4.16, I would like this
> time to keep an LTS in production.
> Is there a specific upgrade path to respect? A specific process to observe?
> What rollback mechanism is possible?
>
> For my K8S needs, I would like to upgrade it, but I was first looking for
> a preproduction K8s running 1.28 in order to integrate CI/CD pipelines.
> So it would not be a problem to upgrade CS after installing a k8s
> cluster for CI/CD.
>
> Rohit was talking about CAPC; can I use it independently of the version of
> ACS I use?
>


Re: [I] Installing v0.5.0 gives error about unexpected hash [cloudstack-terraform-provider]

2024-04-16 Thread via GitHub


rohityadavcloud commented on issue #109:
URL: 
https://github.com/apache/cloudstack-terraform-provider/issues/109#issuecomment-2058519817

   @CodeBleu @Ye-Min-Tun @kiranchavala after some help from the Terraform 
registry support, I got the release removed and republished the artifacts. 
With `Terraform v1.8.0` I was able to init the provider with no errors:
   
   ```
   > terraform init
   
   Initializing the backend...
   
   Initializing provider plugins...
   - Finding cloudstack/cloudstack versions matching "0.5.0"...
   - Installing cloudstack/cloudstack v0.5.0...
   - Installed cloudstack/cloudstack v0.5.0 (self-signed, key ID 
484248210EE3D884)
   
   Partner and community providers are signed by their developers.
   If you'd like to know more about provider signing, you can read about it 
here:
   https://www.terraform.io/docs/cli/plugins/signing.html
   
   Terraform has created a lock file .terraform.lock.hcl to record the provider
   selections it made above. Include this file in your version control 
repository
   so that Terraform can guarantee to make the same selections by default when
   you run "terraform init" in the future.
   
   Terraform has been successfully initialized!
   
   You may now begin working with Terraform. Try running "terraform plan" to see
   any changes that are required for your infrastructure. All Terraform commands
   should now work.
   
   If you ever set or change modules or backend configuration for Terraform,
   rerun this command to reinitialize your working directory. If you forget, 
other
   commands will detect it and remind you to do so if necessary.
   ```
   
   Closing with this remark; please test, and if you still face any issues, 
kindly reopen the issue.
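   For anyone pinning this release, a minimal configuration matching the init 
above could look like the sketch below (the endpoint and variable names are 
placeholders; only the `cloudstack/cloudstack` source and `0.5.0` version come 
from this thread):
   
   ```
   terraform {
     required_providers {
       cloudstack = {
         source  = "cloudstack/cloudstack"
         version = "0.5.0"
       }
     }
   }
   
   # Placeholder endpoint and credentials, supplied via variables.
   provider "cloudstack" {
     api_url    = "https://cloud.example.com/client/api"
     api_key    = var.cloudstack_api_key
     secret_key = var.cloudstack_secret_key
   }
   ```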


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] Installing v0.5.0 gives error about unexpected hash [cloudstack-terraform-provider]

2024-04-16 Thread via GitHub


rohityadavcloud closed issue #109: Installing v0.5.0 gives error about 
unexpected hash
URL: https://github.com/apache/cloudstack-terraform-provider/issues/109


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] New terraform git tag for registry workaround

2024-04-16 Thread Rohit Yadav
All,

Update: it looks like there was a caching/delay issue. After republishing the 
provider, the issue is no longer seen (as tested with the Terraform v1.8.0 
CLI), and a workaround release may not be necessary now.

Issue reference: 
https://github.com/apache/cloudstack-terraform-provider/issues/109#issuecomment-2058519817


The only note for future release managers is to avoid publishing the artifacts 
before voting is closed, and to never delete an already published release.


Regards.

 



From: Rohit Yadav 
Sent: Tuesday, April 16, 2024 13:15
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

All,

I got the following from TF support:


Hello there, provider versions are immutable in the registry. Therefore, when 
you deleted and recreated the version on GitHub, the hash changed, and the 
registry will refuse to serve it. Your options now are either to delete the 
version in the registry (which we can help you do), or to publish a new version.

I tried to get the release deleted from the registry and republished the 
release artifacts using goreleaser, but still hit the same issue.

I think the only thing to document is that we must never delete/republish an 
artifact on the release repo: 
http://github.com/cloudstack/terraform-provider-cloudstack

We cannot use the Apache one because: (1) the registry needs the repo to be named 
terraform-provider- and (2) it would need access from ASF Infra to 
connect the registry with the repo, and existing users would then need to 
migrate from the cloudstack/cloudstack provider namespace (as the plugin had 
history/registry presence even before being donated to the CloudStack project).



Regards.





From: Harikrishna Patnala 
Sent: Tuesday, April 16, 2024 11:54
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

Okay, given the situation your proposal seems right, Rohit. As Daan asked, if 
we can prevent this situation in the future, we need to document it as well.

Regards,
Harikrishna




From: Daan Hoogland 
Sent: Monday, April 15, 2024 4:50 PM
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

I think your proposals are ok as an ad-hoc solution to the current
situation @Rohit Yadav . I wonder how we should deal with this in the
future though.

1. As for the numbering, do we have a procedure to prevent this issue
in the future?
2. The provider is part of apache and I think the link
https://github.com/apache/terraform-provider-cloudstack should be
validated. What exactly is the objection to this?

On Mon, Apr 15, 2024 at 12:05 PM Rohit Yadav  wrote:
>
> All,
>
> The recent Terraform provider release v0.5.0 has a problem with the Terraform 
> registry website. 
> https://github.com/apache/cloudstack-terraform-provider/issues/109
>
> The registry support isn't able to provide a resolution now, their manual 
> resync button on the provider isn't fixing the issue.
>
> While I've documented the steps for manually installing and using the
> provider, most terraform/tofu users are used to consuming a provider from the
> registry.
>
> If there are no objections, I propose that we just tag the current version as
> v0.5.1 and push it to the registry for the purpose of publishing on the
> registry website. We may not need a formal vote for this, as code-wise
> nothing has changed, and we can make this a community release tag
> solely done for the purpose of having a workaround on the registry website
> https://registry.terraform.io/providers/cloudstack/cloudstack/latest which
> gets published via
> https://github.com/cloudstack/terraform-provider-cloudstack as the registry
> also has a strict repo naming policy (due to which it can't use the repo
> under the Apache org).
>
> Thoughts?
>
> Regards.
>
>
>


--
Daan


Re: ACS 4.16 - Change SystemVM template for CKS

2024-04-16 Thread Embedded


Is docker required? Pretty sure k8s runs with containerd or CRI-O installed; no 
docker needed.

On Friday 12 April 2024 07:25:19 PM (+07:00), benoit lair wrote:

> I succeeded in installing the systemvm template 4.19 as the serving template
> for control and worker nodes with the CKS community ISO 1.28.4.
> When the cluster said "starting", I connected to the controller node via p, and
> saw that docker was not installed, but containerd.io was.
> 
> I've done the following :
> 
> apt install docker-ce
> cp /etc/containerd/config.toml /etc/containerd/config.toml.bck
> containerd config default | tee /etc/containerd/config.toml
> /opt/bin/setup-kube-system
> /opt/bin/deploy-kube-system
> 
> on to control node and same after on worker node
> The CS UI shows the kubernetes yaml.
> A "kubectl.exe --kubeconfig kube.conf.conf get nodes" gives:
> 
> NAME   STATUS   ROLES   AGEVERSION
> k8s-cks-cl19-control-18ed18311c9   Readycontrol-plane   3h1m   v1.28.4
> k8s-cks-cl19-node-18ed1850433  Ready  170m   v1.28.4
> 
> However, CS says the cluster is in alert state
> and the dashboard is not working.
> 
> Any advice?
> 
> when executing this on my laptop :
> kubectl.exe --kubeconfig cl19_k8s_1.28.4.conf proxy
> 
> and opening this :
> http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
> 
> I have this result :
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "no endpoints available for service \"kubernetes-dashboard\"",
>   "reason": "ServiceUnavailable",
>   "code": 503
> }
> 
> Has somebody hit this problem with the dashboard?
> 
> Did I miss something installing the cluster manually with the ACS stuff?
> 
> On Thu, Apr 11, 2024 at 17:40, benoit lair  wrote:
> 
> > Hi Wei,
> >
> > Thanks for sharing; I also tried to install the systemvm 4.19.
> > I have the control node and worker node under the template systemvm 4.19 (
> > http://download.cloudstack.org/systemvm/4.19/systemvmtemplate-4.19.0-xen.vhd.bz2
> > )
> > I tried with the community CKS ISO 1.25.0 and 1.28.9.
> > On systemvm 4.19 docker was not present by default, just containerd in
> > version 1.6.x.
> > I installed docker-ce and docker-ce-cli:
> >
> > apt install -y apt-transport-https ca-certificates curl
> > software-properties-common
> > curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg
> > --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
> > echo "deb [arch=$(dpkg --print-architecture)
> > signed-by=/usr/share/keyrings/docker-archive-keyring.gpg]
> > https://download.docker.com/linux/debian $(lsb_release -cs) stable" |
> > sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
> > (remove the duplicated docker line in /etc/apt/sources.list)
> > apt update
> > apt install -y docker-ce
> > apt install -y docker-ce-cli
> > I mounted the 1.28.4 CKS community ISO in the control VM
> >
> > and ran /opt/bin/setup-kube-system.
> > I have these errors:
> >
> > Is this the right way to manually install kube-system?
> >
> > root@k8s-cks-cl17-control-18ecdc1e5bd:/home/core#
> > /opt/bin/setup-kube-system
> > Installing binaries from /mnt/k8sdisk/
> > 5b1fa8e3e100: Loading layer
> > [==>]  803.8kB/803.8kB
> > 39c831b1aa26: Loading layer
> > [==>]  26.25MB/26.25MB
> > Loaded image: apache/cloudstack-kubernetes-autoscaler:latest
> > 417cb9b79ade: Loading layer
> > [==>]  657.7kB/657.7kB
> > 8d323b160d65: Loading layer
> > [==>]  24.95MB/24.95MB
> > Loaded image: apache/cloudstack-kubernetes-provider:v1.0.0
> > 6a4a177e62f3: Loading layer
> > [==>]  115.2kB/115.2kB
> > 398c9baff0ce: Loading layer
> > [==>]  16.07MB/16.07MB
> > Loaded image: registry.k8s.io/coredns/coredns:v1.10.1
> > bd8a70623766: Loading layer
> > [==>]  75.78MB/75.78MB
> > c88361932af5: Loading layer
> > [==>] 508B/508B
> > Loaded image: kubernetesui/dashboard:v2.7.0
> > e023e0e48e6e: Loading layer
> > [==>]  103.7kB/103.7kB
> > 6fbdf253bbc2: Loading layer
> > [==>]   21.2kB/21.2kB
> > 7bea6b893187: Loading layer
> > [==>]  716.5kB/716.5kB
> > ff5700ec5418: Loading layer
> > [==>] 317B/317B
> > d52f02c6501c: Loading layer
> > [==>] 198B/198B
> > e624a5370eca: Loading layer
> > [==>] 113B/113B
> > 1a73b54f556b: Loading layer
> > [==>] 385B/385B
> > 

Re: [DISCUSS] New terraform git tag for registry workaround

2024-04-16 Thread Rohit Yadav
All,

I got the following from TF support:


Hello there, provider versions are immutable in the registry. Therefore, when 
you deleted and recreated the version on GitHub, the hash changed, and the 
registry will refuse to serve it. Your options now are either to delete the 
version in the registry (which we can help you do), or to publish a new version.

I tried to get the release deleted from the registry and republished the 
release artifacts using goreleaser, but still hit the same issue.

I think the only thing to document is that we must never delete/republish an 
artifact on the release repo: 
http://github.com/cloudstack/terraform-provider-cloudstack

We cannot use the Apache one because: (1) the registry needs the repo to be named 
terraform-provider- and (2) it would need access from ASF Infra to 
connect the registry with the repo, and existing users would then need to 
migrate from the cloudstack/cloudstack provider namespace (as the plugin had 
history/registry presence even before being donated to the CloudStack project).



Regards.

 



From: Harikrishna Patnala 
Sent: Tuesday, April 16, 2024 11:54
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

Okay, given the situation your proposal seems right, Rohit. As Daan asked, if 
we can prevent this situation in the future, we need to document it as well.

Regards,
Harikrishna




From: Daan Hoogland 
Sent: Monday, April 15, 2024 4:50 PM
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

I think your proposals are ok as an ad-hoc solution to the current
situation @Rohit Yadav . I wonder how we should deal with this in the
future though.

1. As for the numbering, do we have a procedure to prevent this issue
in the future?
2. The provider is part of apache and I think the link
https://github.com/apache/terraform-provider-cloudstack should be
validated. What exactly is the objection to this?

On Mon, Apr 15, 2024 at 12:05 PM Rohit Yadav  wrote:
>
> All,
>
> The recent Terraform provider release v0.5.0 has a problem with the Terraform 
> registry website. 
> https://github.com/apache/cloudstack-terraform-provider/issues/109
>
> The registry support isn't able to provide a resolution now, their manual 
> resync button on the provider isn't fixing the issue.
>
> While I've documented the steps for manually installing and using the
> provider, most terraform/tofu users are used to consuming a provider from the
> registry.
>
> If there are no objections, I propose that we just tag the current version as
> v0.5.1 and push it to the registry for the purpose of publishing on the
> registry website. We may not need a formal vote for this, as code-wise
> nothing has changed, and we can make this a community release tag
> solely done for the purpose of having a workaround on the registry website
> https://registry.terraform.io/providers/cloudstack/cloudstack/latest which
> gets published via
> https://github.com/cloudstack/terraform-provider-cloudstack as the registry
> also has a strict repo naming policy (due to which it can't use the repo
> under the Apache org).
>
> Thoughts?
>
> Regards.
>
>
>


--
Daan


Re: [DISCUSS] New terraform git tag for registry workaround

2024-04-16 Thread Harikrishna Patnala
Okay, given the situation your proposal seems right, Rohit. As Daan asked, if 
we can prevent this situation in the future, we need to document it as well.

Regards,
Harikrishna
 



From: Daan Hoogland 
Sent: Monday, April 15, 2024 4:50 PM
To: dev 
Subject: Re: [DISCUSS] New terraform git tag for registry workaround

I think your proposals are ok as an ad-hoc solution to the current
situation @Rohit Yadav . I wonder how we should deal with this in the
future though.

1. As for the numbering, do we have a procedure to prevent this issue
in the future?
2. The provider is part of apache and I think the link
https://github.com/apache/terraform-provider-cloudstack should be
validated. What exactly is the objection to this?

On Mon, Apr 15, 2024 at 12:05 PM Rohit Yadav  wrote:
>
> All,
>
> The recent Terraform provider release v0.5.0 has a problem with the Terraform 
> registry website. 
> https://github.com/apache/cloudstack-terraform-provider/issues/109
>
> The registry support isn't able to provide a resolution now, their manual 
> resync button on the provider isn't fixing the issue.
>
> While I've documented the steps for manually installing and using the
> provider, most terraform/tofu users are used to consuming a provider from the
> registry.
>
> If there are no objections, I propose that we just tag the current version as
> v0.5.1 and push it to the registry for the purpose of publishing on the
> registry website. We may not need a formal vote for this, as code-wise
> nothing has changed, and we can make this a community release tag
> solely done for the purpose of having a workaround on the registry website
> https://registry.terraform.io/providers/cloudstack/cloudstack/latest which
> gets published via
> https://github.com/cloudstack/terraform-provider-cloudstack as the registry
> also has a strict repo naming policy (due to which it can't use the repo
> under the Apache org).
>
> Thoughts?
>
> Regards.
>
>
>


--
Daan