Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-23 Thread Wido den Hollander



On 22/05/2024 at 14:55, Alex Mattioli wrote:

Thanks for the input Wido,


That said, you could also opt to specify BGP peers at the zone level and 
override them at the network level if one prefers. Nothing specified at the network? 
The zone-level peers are used. If you do specify them at the network level, those 
are used. Again, think about multihop.




Have you thought about the AS number pool? This pool could be assigned 
to a network. All networks can point to the same AS number pool, or you can 
have multiple pools where you might make different choices.


On the network level this allows you to create specific BGP filters 
based on the AS number. When these are in fixed 'blocks' you can create 
better filters.
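For instance (an illustrative sketch only, assuming the zone's pool is the contiguous private-ASN block 64600-64699; nothing here exists in ACS today), an upstream FRR router could match the whole block with a single AS-path filter:

```
! Hypothetical upstream FRR config: accept only paths originated by VRs
! whose ASN comes from the assumed pool 64600-64699.
bgp as-path access-list VR-POOL permit ^646[0-9][0-9]$
!
route-map FROM-VRS permit 10
 match as-path VR-POOL
```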



That's exactly what I had in mind, the same way we set DNS for the zone but can 
specify it at the network level as well. This way we keep self-service intact for when 
end-users simply want a routed network that peers with whatever the provider 
has set up upstream, but we also give them the ability to peer with either a 
user-managed VNF upstream or an operator-provided router.

I hope this way we can cater for most use cases, at least with a first simple 
implementation.

Will definitely keep multihop in mind.


Plus a BGP password. This would solve most use cases from the start:

- BGP peer on zone level
  - override on network level
- AS number pool
  - networks refer to this pool
- BGP multihop enabled yes or no
  - zone level
  - network level override
- BGP password

Wido



Cheers,
Alex


  



-----Original Message-----
From: Wido den Hollander 
Sent: Monday, May 20, 2024 8:22 PM
To: dev@cloudstack.apache.org; Alex Mattioli ; 
us...@cloudstack.apache.org; adietr...@ussignal.com
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC 
networks



On 20/05/2024 at 14:45, Alex Mattioli wrote:

Hi Alex,

In this scenario:


I think adding the ability to add network-specific peers, as mentioned in one of 
your prior replies, would still allow the level of control some operators (myself 
included) may desire.


How do you propose network-specific peers be implemented?



I do agree with Alex (Dietrich) that BGP peers should be configured per 
network. There is no guarantee that every VLAN/VNI (VXLAN) ends up at the 
same pair of routers. Technically there is also no need to do so.

Let's say I have two VNIs (VXLAN):

VNI 500:
Router 1: 192.168.153.1 / 2001:db8::153:1
Router 2: 192.168.153.2 / 2001:db8::153:2

VNI 600:
Router 1: 192.168.155.1 / 2001:db8::155:1
Router 2: 192.168.155.2 / 2001:db8::155:2

In this case you would say that the upstream BGP peers are .153.1/2 and
.155.1/2 (and their IPv6 addresses). No need for BGP multihop.
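Rendered as FRR configuration on the VR in VNI 500, that could look roughly like this (a sketch only; the VR's ASN 64512 and the routers' AS 65000 are invented for the example):

```
! Hypothetical FRR snippet on the VR in VNI 500; both ASNs are examples.
router bgp 64512
 neighbor 192.168.153.1 remote-as 65000
 neighbor 192.168.153.2 remote-as 65000
 neighbor 2001:db8::153:1 remote-as 65000
 neighbor 2001:db8::153:2 remote-as 65000
```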

Talking about multihop, I would make that optional; people might want to have 
two central BGP routers with which each VR peers (multihop), and which 
distribute the routes into the network again.

For each network you create you also provide the ASN range, but even better would be 
to refer to a pool. You can use one pool for your zone by pointing every 
network to the same pool, or simply use multiple pools if your network requires 
it.

That said, you could also opt to specify BGP peers at the zone level and 
override them at the network level if one prefers. Nothing specified at the 
network? The zone-level peers are used. If you do specify them at the network 
level, those are used. Again, think about multihop.
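The fallback rule described here could be expressed as (a hypothetical sketch; the function and its parameters are illustrative, not existing ACS code):

```javascript
// Hypothetical resolution of the effective BGP peer list for a network:
// network-level peers, when configured, fully replace the zone-level ones.
function effectiveBgpPeers (networkPeers, zonePeers) {
  return (networkPeers && networkPeers.length > 0) ? networkPeers : zonePeers
}

// Nothing specified at the network? The zone-level peers are used.
console.log(effectiveBgpPeers([], ['192.168.153.1', '192.168.153.2']))
// Network-level peers override the zone-level ones.
console.log(effectiveBgpPeers(['10.0.0.1'], ['192.168.153.1']))
```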

Wido


Regards
Alex


   



-----Original Message-----
From: Dietrich, Alex 
Sent: Monday, May 20, 2024 2:21 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated
and VPC networks

Hi Alex,

This may be a difference in perspective on the implementation of BGP at the tenant 
level. I see the ability this would provide to seamlessly establish those 
peering relationships with minimal intervention (helping scalability).

I think adding the ability to add network-specific peers, as mentioned in one of 
your prior replies, would still allow the level of control some operators 
(myself included) may desire.

Thanks,
Alex

Alex Dietrich
Senior Network Engineer, US Signal

616-233-5094 | www.ussignal.com | adietr...@ussignal.com

201 Ionia Ave SW, Grand Rapids, MI 49503

Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-20 Thread Wido den Hollander
This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to allow dynamic BGP peering from the pool of AS numbers 
presented to the zone)

This implementation could also be used for Shared Networks, in which case the 
destination advertised via BGP is to the gateway of the shared network.

There could also be an offering where we allow end users to set up the BGP 
parameters for their Isolated or VPC networks, which can then peer with 
upstream VNF(s).

Any and all input is very welcome...

Taking the liberty to tag some of you: @Wei Zhou @Wido den Hollander @Kristaps Čudars

Cheers,
Alex


Re: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

2024-05-17 Thread Wido den Hollander

My apologies! I totally missed this one. Comments inline.

On 15/05/2024 at 14:55, Alex Mattioli wrote:

Hi all,

Does anyone have an opinion on the implementation of dynamic routing in 
Isolated networks and VPCs?

So far the design is:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled, an AS number is allocated to the network
4 - ACS configures the BGP session on the VR (using FRR), advertising all its 
connected networks



I would suggest that the upstream router (Juniper, FRR, etc.) should then 
use dynamic BGP neighbors.


On JunOS this is the "allow" statement [0]. The VR would indeed get an 
AS assigned by ACS, and the network should know the 1, 2 or X upstream 
routers it can peer with. I do suggest we add BGP passwords/encryption 
from the start for security reasons.


"allow 192.168.1.0/24"

On JunOS this allows any router within that subnet to establish a BGP 
session (provided the BGP password matches).
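As a rough sketch (illustrative only and not a complete configuration; group name, subnet and key are invented, and statements such as peer-AS handling are omitted), the allow-based setup on the upstream JunOS router might look like:

```
# Hypothetical JunOS set-style snippet; values are examples only.
set protocols bgp group cloudstack-vrs type external
set protocols bgp group cloudstack-vrs allow 192.168.1.0/24
set protocols bgp group cloudstack-vrs authentication-key "bgp-secret"
```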


On the VR you just need to make sure you properly configure the BGP 
daemon and that it points to the right upstream routers.


[0]: 
https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/allow-edit-protocols-bgp.html



Any and all input will be very welcome.

Cheers,
Alex


  


From: Alex Mattioli
Sent: Wednesday, April 17, 2024 3:25 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Dynamic routing for routed mode IPv6 and IPv4 Isolated and VPC networks

Hi all,

I'd like to brainstorm dynamic routing in ACS (yes, again... for the newcomers 
to this mailing list - this has been discussed multiple times in the past 10+ 
years)

ACS 4.17 has introduced routed mode for IPv6 in Isolated networks and VPCs, we 
are currently working on extending that to IPv4 as well, which will support the 
current NAT'ed mode and also a routed mode (inspired by the NSX integration 
https://www.youtube.com/watch?v=f7ao-vv7Ahk).

With stock ACS (i.e. without NSX or OpenSDN) this routing is purely static, with the 
operator being responsible for adding static routes to the Isolated network or VPC tiers via 
the "public" (outside) IP of the virtual router.

The next step on this journey is to add some kind of dynamic routing. One way 
that I have in mind is using dynamic BGP:

1 - Operator configures one or more BGP peers for a given Zone (with different 
metrics)
2 - Operator presents a pool of Private AS numbers to the Zone (just like we do 
for VLANs)
3 - When a network is created with an offering which has dynamic routing 
enabled, an AS number is allocated
4 - ACS configures the BGP session on the VR, advertising all its connected 
networks
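A rough sketch of what step 4 could render on the VR (hypothetical; the ASN, peer address, upstream AS and password are invented, and this is not the actual ACS implementation):

```
! Hypothetical FRR rendering on the VR. ASN 64512 would come from the
! zone's pool; 192.168.153.1 stands in for a zone-level BGP peer.
router bgp 64512
 neighbor 192.168.153.1 remote-as 65000
 neighbor 192.168.153.1 password bgp-secret
 !
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
```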

This way there's no need to reconfigure the upstream router for each new ACS 
network (it just needs to allow dynamic BGP peering from the pool of AS numbers 
presented to the zone)

This implementation could also be used for Shared Networks, in which case the 
destination advertised via BGP is to the gateway of the shared network.

There could also be an offering where we allow end users to set up the BGP 
parameters for their Isolated or VPC networks, which can then peer with 
upstream VNF(s).

Any and all input is very welcome...

Taking the liberty to tag some of you: @Wei Zhou @Wido den Hollander @Kristaps Čudars

Cheers,
Alex



Re: CloudStack Collaboration conference 2024 - Registration is NOW OPEN

2024-04-08 Thread Wido den Hollander

Great! I'm going to be there! :-)

Wido

On 08/04/2024 at 16:41, Ivet Petrova wrote:

Hi All,

I am delighted to announce that the registration for CloudStack Collaboration 
Conference 2024 is now open.
I also would like to encourage you all to register here: 
https://www.eventbrite.com/e/cloudstack-collaboration-conference-2024-tickets-879401903767

This year, CCC2024 will require a ticket purchase to attend. The reason for 
this is that, as you know, our event relies entirely on community support and 
sponsorship so that we can organise it on a yearly basis. The CloudStack 
Collaboration Conference 2024 is a non-profit event, and we use all the 
sponsorship for the sake of the event - venue, catering, welcome bags for the 
attendees, etc.

IMPORTANT: The CloudStack Collaboration Conference is and will always be a free 
to attend event for all project committers. Use promo code CommitterFREE24 for 
registration.

--

The CloudStack Collaboration Conference 2024 Ticket Options are:

Super Early bird: 69 EUR - available until June 15th
Early bird: 99 EUR - available until September 15th
Standard Price - 199 EUR

The ticket fee includes:
- Hackathon Participation
- Access to all sessions and workshops
- Coffee breaks
- Lunch included for Nov 21 and Nov 22
- Drinks after the talks on Nov 21
- Welcome bag with a CloudStack T-shirt and swag


Free Passes are available for:

   *   Speakers
   *   Sponsors
   *   Committers - with promo code
   *   Sponsors will have a discount code to invite their customers and 
partners to join the event

Registrations with a promo code are subject to review.

--

CCC2024 - Exclusive In-Person Experience in 2024
This year's conference is breaking new ground by going exclusively in-person. 
After careful consideration, collecting feedback from the community, and 
planning, this year we will offer the community the opportunity to meet 
face-to-face, fostering deeper connections and more meaningful interactions 
than ever before.
For a few years during the pandemic we did our best to bring the CloudStack 
community to everybody interested. Now we are focusing on growing the 
community and giving all contributors, committers and people interested in the 
project a chance to meet in person and interact in a better way.

--

Register here: 
https://www.eventbrite.com/e/cloudstack-collaboration-conference-2024-tickets-879401903767

--

CFP is also OPEN:
The call for speakers is also open here: 
https://docs.google.com/forms/d/e/1FAIpQLSdzhEy-v68wyVQcBY3AnQT7OeDVs4xnfvlt3wIlLxV50dP11w/viewform



Best regards,


  





Re: CSEUG - September, Germany

2024-03-01 Thread Wido den Hollander




On 28/02/2024 at 16:29, Ivet Petrova wrote:

Hi all,

I would like to propose organising the next CSEUG meeting on September 12th in 
Germany.
I already had a few informal conversations with community members in Germany 
who are willing to help.
Do you all think the date is OK and we can meet there?


Works for me on 12-09. Just wondering: Germany is big, so which city are we 
looking at?


Frankfurt? Berlin? München? Düsseldorf? :-)

Wido



I am considering the usual format of half-day talks and will need 5-6 speakers in 
place.

Best regards,


  





Re: [VOTE] next version 20 instead of 4.20

2024-02-19 Thread Wido den Hollander

+1

On 19/02/2024 at 13:49, Daan Hoogland wrote:

LS,

This is a vote on dev@c.a.o with cc to users@c.a.o. If you want to be
counted please reply to dev@.

As discussed in [1] we are deciding to drop the 4 from our versioning
scheme. The result would be that the next major version will be 20
instead of 4.20, as it would be in a traditional upgrade. As 20 > 4
and the versions are processed numerically there are no technical
impediments.

+1 agree (next major version as 20)
0 (no opinion)
-1 disagree (keep 4.20 as the next version, give a reason)

As this is a lazy consensus vote any -1 should be accompanied with a reason.

[1] https://lists.apache.org/thread/lh45w55c3jmhm7w2w0xgdvlw78pd4p87



Re: new website is life

2024-02-07 Thread Wido den Hollander

Nice, very nice!

On 07/02/2024 at 09:22, Daan Hoogland wrote:

People,
we brought the new website. Please all have a look at
https://cloudstack.apache.org

thanks for any feedback



Re: [PROPOSAL] version naming : drop the 4.

2024-01-24 Thread Wido den Hollander




On 24/01/2024 at 10:47, Daan Hoogland wrote:

Personally I don't like the months too much. They tie us down to a
release schedule that we have proven not to be able to maintain. A
year as number restricts us to just one major release that year, i.e.
only one moment for new integrations or major features. So I am for the
more liberal 20.x, and if we make a second one some year we can freely
add a number.



Our mails just crossed :-) Timing!

Does YYYY.MM tie you down to a specific schedule? You can release 
whenever you want, right? The version depends on when you release.


But I'm OK with just going with a number: 24, then 25, then 26, etc. 
Something other than 4.x forever.


Wido


On Wed, Jan 24, 2024 at 12:27 AM Wei ZHOU  wrote:


Yes, the Ubuntu version naming is the best in my opinion.
Other than the version naming, we need to decide the frequency of major
releases and minor releases, which version will be LTS, how long the
LTS/normal version will be supported, etc.

Maybe a vote in the dev/users/pmc mailing list?



On Tuesday, 23 January 2024, Nicolas Vazquez wrote:


I like this idea as well, whether it's YYYY.MM or YY.MM.

Would we want to define delivery months for releases similar to Ubuntu .04
and .10?

Regards,
Nicolas Vazquez

From: Nux 
Sent: Tuesday, January 23, 2024 6:11 PM
To: dev@cloudstack.apache.org 
Cc: Wei ZHOU 
Subject: Re: [PROPOSAL] version naming : drop the 4.

An interesting proposition, I like it.
It would also relieve us from having to come up with any over-the-top
feature or change for a major version change.

On 2024-01-23 14:49, Wido den Hollander wrote:

We could look at Ubuntu, and other projects, and call it 2025.01 if we
release it in Jan 2025.

A great post on the website, mailinglists and social media could
explain the change in versioning, but that the code doesn't change that
much.

Project has matured, etc, etc.











Re: [PROPOSAL] version naming : drop the 4.

2024-01-24 Thread Wido den Hollander




On 24/01/2024 at 00:27, Wei ZHOU wrote:

Yes, the Ubuntu version naming is the best in my opinion.
Other than the version naming, we need to decide the frequency of major
releases and minor releases, which version will be LTS, how long the
LTS/normal version will be supported, etc.

Maybe a vote in the dev/users/pmc mailing list?



I think that's a good decision.

@ Daan: Do you want to start one?

I'd prefer the YYYY.MM release schedule. We just need a good (blog) post 
to explain the new versioning.


Wido




On Tuesday, 23 January 2024, Nicolas Vazquez wrote:


I like this idea as well, whether it's YYYY.MM or YY.MM.

Would we want to define delivery months for releases similar to Ubuntu .04
and .10?

Regards,
Nicolas Vazquez

From: Nux 
Sent: Tuesday, January 23, 2024 6:11 PM
To: dev@cloudstack.apache.org 
Cc: Wei ZHOU 
Subject: Re: [PROPOSAL] version naming : drop the 4.

An interesting proposition, I like it.
It would also relieve us from having to come up with any over-the-top
feature or change for a major version change.

On 2024-01-23 14:49, Wido den Hollander wrote:

We could look at Ubuntu, and other projects, and call it 2025.01 if we
release it in Jan 2025.

A great post on the website, mailinglists and social media could
explain the change in versioning, but that the code doesn't change that
much.

Project has matured, etc, etc.









Re: [PROPOSAL] version naming : drop the 4.

2024-01-23 Thread Wido den Hollander




On 22/01/2024 at 12:17, Wei ZHOU wrote:

+1 with 20.0

5.0 sounds like a leap with lots of significant changes. Unfortunately it
has not been discussed what needs to be done.
20.0 (or 24.0) looks better.



We could look at Ubuntu, and other projects, and call it 2025.01 if we 
release it in Jan 2025.


A great post on the website, mailinglists and social media could explain 
the change in versioning, but that the code doesn't change that much.


Project has matured, etc, etc.

Wido


Wei

On Mon, 22 Jan 2024 at 12:01, Daan Hoogland  wrote:


João,
I think we should not consider 5.0, but go to 20.0; that is more in
line with what we've actually been doing (semantic versioning from the
second digit).

On Mon, Jan 22, 2024 at 11:53 AM Nux  wrote:


LGTM!

On 2024-01-19 19:19, João Jandre Paraquetti wrote:

Hi all,

I agree that our current versioning schema doesn't make much sense, as
"minors" introduce pretty big features; even backward incompatibilities
are introduced in minor versions sometimes.

As the current plan is to have 4.20 by June, I think we should stick to
it and still have the next "minor", and make it the last minor version
of the major 4. After so much time in the same major version, we should
plan something relevant before changing it, and June 2024 is a bit of a
tight schedule for that.

I think that we should plan to move to version 5.0.0, we could set the
release date to the end of 2024 or the start (January) of 2025; by
doing that, we have plenty of time for planning and developing amazing
features for version 5, while also preparing a cleanup of our current
APIs. For instance, we are working on the following major developments:
KVM differential snapshots/backups without needing extra software;
theme management system (white label portal for ACS); native
snapshot/backup for VMware (without needing Veeam) to make it similar
to what ACS does with XenServer and KVM; Operators backup (which are
different from end-user backups); and many other items.

What do you guys think?

Best regards,
João Jandre.

On 1/19/24 10:39, Daan Hoogland wrote:

devs, PMC,

as we are closing in on 4.19 I want to propose that we drop the "4." in
our versioning scheme. We've been discussing 5 but no real initiatives
have been taken. Nowadays big features go into our "minor"
dot-releases. In my opinion this warrants promoting those versions to
the status of major and dropping the "4.".

technically this won't be an issue as 20 > 4 and our upgrade scheme
supports a step like that.

any thoughts?





--
Daan





Re: Apache CloudStack and Ceph Day - February 22, Amsterdam

2024-01-09 Thread Wido den Hollander




On 09/01/2024 at 13:45, Ivet Petrova wrote:

Dear community members,

We managed to finalise the date for our joint event with the Ceph community.
I am happy to share that we are doing it on February 22nd in Amsterdam, 
the Netherlands.
The event will be hosted by Adyen; the address is: Rokin 49, 1012 KK Amsterdam, 
The Netherlands

Two things at this stage:

1. We still have a few speaking slots free (actually only 2 slots). If you are 
interested in presenting at the event, please submit your talk proposal now.
https://forms.gle/TnBfxS2cKWfe28CS6

2. We would be happy to see more people from the community, so register 
as an attendee here:
https://www.eventbrite.nl/e/cloudstack-and-ceph-day-netherlands-2024-tickets-700177167757

Look forward to meeting you in Amsterdam!


Yes! That's like my backyard. I'll be there!

Wido



Best regards,


  





Object Store browser connects over HTTP while HTTPS is set in URL

2023-12-20 Thread Wido den Hollander

Hi,

While testing the Ceph RGW plugin [0] for the Object Store driver I 
noticed the bucket browser isn't working.


So I debugged my Firefox browser and I noticed that it tries to connect 
to http://ceph01.holfra.eu/ (not SSL) while the store's URL is 
httpS://ceph01.holfra.eu/


My debugger says:

"Blocked loading mixed active content “http://ceph01.holfra.eu/wido02?location”" (request.js:150:9)


I looked at ObjectStoreBrowser.vue [1] and found this piece of code:

initMinioClient () {
  if (!this.client) {
    const url = /https?:\/\/([^/]+)\/?/.exec(this.resource.url.split(this.resource.name)[0])[1]
    const isHttps = /^https/.test(url)
    this.client = new Minio.Client({
      endPoint: url.split(':')[0],
      port: url.split(':').length > 1 ? parseInt(url.split(':')[1]) : isHttps ? 443 : 80,
      useSSL: isHttps,
      accessKey: this.resource.accesskey,
      secretKey: this.resource.usersecretkey
    })
    this.listObjects()
  }
},


Based on whether the URL starts with http or https it should select SSL or 
not, but in my case it selects HTTP while the URL contains HTTPS.


My JavaScript is a bit rusty, anybody else with ideas?

Wido

[0]: https://github.com/apache/cloudstack/pull/8389
[1]: 
https://github.com/apache/cloudstack/blob/1411da1a22bc6aa26634f3038475e3d5fbbcd6bb/ui/src/components/view/ObjectStoreBrowser.vue#L449
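One possible explanation, reading the snippet above (an observation, not a confirmed fix): the capture group `([^/]+)` returns only the host part, so by the time `/^https/.test(url)` runs the scheme is already gone and the test can never match. A minimal reproduction (the resource URL is hypothetical; the regex is the one quoted from ObjectStoreBrowser.vue):

```javascript
// Hypothetical store URL; the regex below is the one from the snippet.
const resourceUrl = 'https://ceph01.holfra.eu/wido02'

// Capture group [1] yields only the host, without the scheme.
const url = /https?:\/\/([^/]+)\/?/.exec(resourceUrl)[1]
console.log(url)                        // "ceph01.holfra.eu"

// So this is always false, and useSSL ends up false even for HTTPS stores.
console.log(/^https/.test(url))         // false

// Testing the scheme against the full URL instead behaves as expected.
console.log(/^https/.test(resourceUrl)) // true
```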


Re: New Object Storage - Huawei OBS

2023-12-20 Thread Wido den Hollander

It is working now!

PR is out: https://github.com/apache/cloudstack/pull/8389

On 12/16/23 13:14, Wido den Hollander wrote:



On 12/15/23 20:09, Wei ZHOU wrote:

Hi Wido,

I just tested your branch, it looks ok (
https://github.com/wido/cloudstack/commits/ceph-object-store).



Yes, it works now! I had old files in my classpath... (rookie mistake!)

I get an error about an invalid URL, but that's something else:

Error: (HTTP 530, error code ) http://ceph01..eu/ is not a valid 
URL


2023-12-16 12:13:25,737 ERROR [o.a.c.a.c.a.s.AddObjectStoragePoolCmd] 
(qtp1970436060-18:ctx-9cb56831 ctx-99f41e21) (logid:c93cae1f) Exception:
com.cloud.exception.InvalidParameterValueException: 
https://ceph01.xxx.eu/ is not a valid URL


Looking into this!

Wido



I got an error, which is because I entered an invalid host/keys

2023-12-15 19:07:22,313 DEBUG [o.a.c.s.d.l.CephObjectStoreLifeCycleImpl]
(qtp775386112-17:ctx-9be6ad79 ctx-b4cdcdd0) (logid:3a8340e9) Error while
initializing Ceph RGW Object Store: IOException
2023-12-15 19:07:22,313 DEBUG [c.c.s.StorageManagerImpl]
(qtp775386112-17:ctx-9be6ad79 ctx-b4cdcdd0) (logid:3a8340e9) Failed to add
object store: Error while initializing Ceph RGW Object Store. Invalid
credentials or URL
java.lang.RuntimeException: Error while initializing Ceph RGW Object Store.
Invalid credentials or URL
at
org.apache.cloudstack.storage.datastore.lifecycle.CephObjectStoreLifeCycleImpl.initialize(CephObjectStoreLifeCycleImpl.java:87)
at
com.cloud.storage.StorageManagerImpl.discoverObjectStore(StorageManagerImpl.java:3777)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)


-Wei

On Fri, 15 Dec 2023 at 17:18, Wido den Hollander  wrote:




On 15/12/2023 at 13:54, Wei ZHOU wrote:

Hi Wido,

It looks like you need a file like

https://github.com/apache/cloudstack/blob/main/plugins/storage/object/simulator/src/main/resources/META-INF/cloudstack/storage-object-simulator/spring-storage-object-simulator-context.xml




Tnx. But that didn't solve it. The module seems to load according to the
mgmt server log, but I can't add the storage. Exception is the same.

Wido


-Wei

On Fri, 15 Dec 2023 at 13:40, Wido den Hollander 


wrote:




On 12/15/23 09:41, Ronald Feicht wrote:

Hi Kishan,


when I add my module to client/pom.xml I get the following error and

http://192.168.17.252:8080/client/ returns "HTTP ERROR 503 Service
Unavailable" because of the following exception:


[WARNING] Failed startup of context o.e.j.m.p.JettyWebAppContext@1df8ea34
{/client,file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/,UNAVAILABLE}{file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/}
java.lang.NullPointerException
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$1.with(DefaultModuleDefinitionSet.java:104)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:263)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:268)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:268)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:268)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:268)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:268)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:251)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.startContexts(DefaultModuleDefinitionSet.java:96)
   at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load(DefaultModuleDefinitionSet.java:79)
   at org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules(ModuleBasedContextFactory.java:37)
   at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init(CloudStackSpringContext.java:70)
   at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init>(CloudStackSpringContext.java:57)
   at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init>(CloudStackSpringContext.java:61)
   at org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized(CloudStackContextLoaderListener.java:52)
   at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandle

Re: New Object Storage - Huawei OBS

2023-12-19 Thread Wido den Hollander




On 14/12/2023 at 10:51, Ronald Feicht wrote:

Hi,

I am trying to write an Object Storage plugin for Huawei OBS, using Minio as an example. I 
added my plugin code to the plugins/storage/object/ directory, added my plugin into 
plugins/pom.xml, and added the string 'Huawei OBS' to AddObjectStorage.vue for the 
dropdown in the UI. But when I select that dropdown entry and click "Save" in 
the UI the following exception is thrown:


Great to see! I'm working on the Ceph plugin: 
https://github.com/wido/cloudstack/commits/ceph-object-store


I will be changing something in the framework so that a BucketTO is added 
and passed, not just a String containing the bucket's name.


My PR will go out next week or after Christmas I think.

Wido


com.cloud.exception.InvalidParameterValueException: can't find object store provider: Huawei OBS
   at com.cloud.storage.StorageManagerImpl.discoverObjectStore(StorageManagerImpl.java:3743)
   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
   at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
   at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
   at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
   at org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
   at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
   at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:52)
   at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
   at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
   at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
   at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
   at com.sun.proxy.$Proxy119.discoverObjectStore(Unknown Source)
   at org.apache.cloudstack.api.command.admin.storage.AddObjectStoragePoolCmd.execute(AddObjectStoragePoolCmd.java:117)
   at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:172)
   at com.cloud.api.ApiServer.queueCommand(ApiServer.java:782)
   at com.cloud.api.ApiServer.handleRequest(ApiServer.java:603)
   at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:347)
   at com.cloud.api.ApiServlet$1.run(ApiServlet.java:154)
   at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
   at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
   at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
   at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:151)
   at com.cloud.api.ApiServlet.doGet(ApiServlet.java:105)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
   at org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1386)
   at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
   at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:226)
   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
   at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
   at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
   at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
   at

Re: New Object Storage - Huawei OBS

2023-12-16 Thread Wido den Hollander




On 12/15/23 20:09, Wei ZHOU wrote:

Hi Wido,

I just tested your branch, it looks ok (
https://github.com/wido/cloudstack/commits/ceph-object-store).



Yes, it works now! I had old files in my classpath... (rookie mistake!)

I get an error about an invalid URL, but that's something else:

Error: (HTTP 530, error code ) http://ceph01..eu/ is not a valid URL

2023-12-16 12:13:25,737 ERROR [o.a.c.a.c.a.s.AddObjectStoragePoolCmd] 
(qtp1970436060-18:ctx-9cb56831 ctx-99f41e21) (logid:c93cae1f) Exception:
com.cloud.exception.InvalidParameterValueException: 
https://ceph01.xxx.eu/ is not a valid URL


Looking into this!

Wido



I got an error which is because I input invalid host/keys

2023-12-15 19:07:22,313 DEBUG [o.a.c.s.d.l.CephObjectStoreLifeCycleImpl]
(qtp775386112-17:ctx-9be6ad79 ctx-b4cdcdd0) (logid:3a8340e9) Error while
initializing Ceph RGW Object Store: IOException
2023-12-15 19:07:22,313 DEBUG [c.c.s.StorageManagerImpl]
(qtp775386112-17:ctx-9be6ad79 ctx-b4cdcdd0) (logid:3a8340e9) Failed to add
object store: Error while initializing Ceph RGW Object Store. Invalid
credentials or URL
java.lang.RuntimeException: Error while initializing Ceph RGW Object Store.
Invalid credentials or URL
    at org.apache.cloudstack.storage.datastore.lifecycle.CephObjectStoreLifeCycleImpl.initialize(CephObjectStoreLifeCycleImpl.java:87)
    at com.cloud.storage.StorageManagerImpl.discoverObjectStore(StorageManagerImpl.java:3777)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)


-Wei

On Fri, 15 Dec 2023 at 17:18, Wido den Hollander  wrote:




Op 15/12/2023 om 13:54 schreef Wei ZHOU:

Hi Wido,

It looks you need a file like


https://github.com/apache/cloudstack/blob/main/plugins/storage/object/simulator/src/main/resources/META-INF/cloudstack/storage-object-simulator/spring-storage-object-simulator-context.xml




Tnx. But that didn't solve it. The module seems to load according to the
mgmt server log, but I can't add the storage. Exception is the same.

Wido


-Wei

On Fri, 15 Dec 2023 at 13:40, Wido den Hollander 


On 12/15/23 09:41, Ronald Feicht wrote:

Hi Kishan,


when I add my module to client/pom.xml I get the following error and
http://192.168.17.252:8080/client/ returns "HTTP ERROR 503 Service
Unavailable" because of the following exception:

[WARNING] Failed startup of context o.e.j.m.p.JettyWebAppContext@1df8ea34{/client,file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/,UNAVAILABLE}{file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/}
java.lang.NullPointerException
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$1.with (DefaultModuleDefinitionSet.java:104)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:263)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:251)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.startContexts (DefaultModuleDefinitionSet.java:96)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load (DefaultModuleDefinitionSet.java:79)
    at org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules (ModuleBasedContextFactory.java:37)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init (CloudStackSpringContext.java:70)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:57)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:61)
    at org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized (CloudStackContextLoaderListener.java:52)
    at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized (ContextHandler.java:933)
    at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized (ServletContextHandler.java:553)
    at org.eclipse.jetty.server.handler.ContextHandler.startContext (ContextHandler.java:892)

Re: New Object Storage - Huawei OBS

2023-12-15 Thread Wido den Hollander




Op 15/12/2023 om 13:54 schreef Wei ZHOU:

Hi Wido,

It looks you need a file like
https://github.com/apache/cloudstack/blob/main/plugins/storage/object/simulator/src/main/resources/META-INF/cloudstack/storage-object-simulator/spring-storage-object-simulator-context.xml



Tnx. But that didn't solve it. The module seems to load according to the 
mgmt server log, but I can't add the storage. Exception is the same.


Wido
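The kind of file Wei points to follows CloudStack's Spring module convention: alongside a META-INF/cloudstack/<module>/module.properties (with name= and parent= keys), a context XML registers the plugin's provider bean so discoverObjectStore can resolve it. A rough sketch for a hypothetical Ceph module — the module name, path and bean class are assumptions, not the actual branch contents:

```xml
<!-- Sketch only: META-INF/cloudstack/storage-object-ceph/spring-storage-object-ceph-context.xml
     (module name and class are hypothetical). The matching module.properties
     would contain: name=storage-object-ceph and parent=storage. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Registering the provider bean is what makes the provider name
         resolvable; without it the API returns
         "can't find object store provider". -->
    <bean id="cephObjectStoreProviderImpl"
          class="org.apache.cloudstack.storage.datastore.provider.CephObjectStoreProviderImpl" />
</beans>
```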


-Wei

On Fri, 15 Dec 2023 at 13:40, Wido den Hollander 
wrote:




On 12/15/23 09:41, Ronald Feicht wrote:

Hi Kishan,


when I add my module to client/pom.xml I get the following error and
http://192.168.17.252:8080/client/ returns "HTTP ERROR 503 Service
Unavailable" because of the following exception:

[WARNING] Failed startup of context o.e.j.m.p.JettyWebAppContext@1df8ea34{/client,file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/,UNAVAILABLE}{file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/}
java.lang.NullPointerException
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$1.with (DefaultModuleDefinitionSet.java:104)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:263)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:251)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.startContexts (DefaultModuleDefinitionSet.java:96)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load (DefaultModuleDefinitionSet.java:79)
    at org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules (ModuleBasedContextFactory.java:37)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init (CloudStackSpringContext.java:70)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:57)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:61)
    at org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized (CloudStackContextLoaderListener.java:52)
    at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized (ContextHandler.java:933)
    at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized (ServletContextHandler.java:553)
    at org.eclipse.jetty.server.handler.ContextHandler.startContext (ContextHandler.java:892)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext (ServletContextHandler.java:356)
    at org.eclipse.jetty.webapp.WebAppContext.startWebapp (WebAppContext.java:1445)
    at org.eclipse.jetty.maven.plugin.JettyWebAppContext.startWebapp (JettyWebAppContext.java:328)
    at org.eclipse.jetty.webapp.WebAppContext.startContext (WebAppContext.java:1409)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart (ContextHandler.java:825)
    at org.eclipse.jetty.servlet.ServletContextHandler.doStart (ServletContextHandler.java:275)
    at org.eclipse.jetty.webapp.WebAppContext.doStart (WebAppContext.java:524)
    at org.eclipse.jetty.maven.plugin.JettyWebAppContext.doStart (JettyWebAppContext.java:397)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:117)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:97)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:117)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:97)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)

Re: New Object Storage - Huawei OBS

2023-12-15 Thread Wido den Hollander




On 12/15/23 09:41, Ronald Feicht wrote:

Hi Kishan,


when I add my module to client/pom.xml I get the following error and
http://192.168.17.252:8080/client/ returns "HTTP ERROR 503 Service
Unavailable" because of the following exception:

[WARNING] Failed startup of context o.e.j.m.p.JettyWebAppContext@1df8ea34{/client,file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/,UNAVAILABLE}{file:///opt/cloudstack-huawei-obs/client/target/classes/META-INF/webapp/}
java.lang.NullPointerException
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$1.with (DefaultModuleDefinitionSet.java:104)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:263)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:268)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule (DefaultModuleDefinitionSet.java:251)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.startContexts (DefaultModuleDefinitionSet.java:96)
    at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load (DefaultModuleDefinitionSet.java:79)
    at org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules (ModuleBasedContextFactory.java:37)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init (CloudStackSpringContext.java:70)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:57)
    at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init> (CloudStackSpringContext.java:61)
    at org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized (CloudStackContextLoaderListener.java:52)
    at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized (ContextHandler.java:933)
    at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized (ServletContextHandler.java:553)
    at org.eclipse.jetty.server.handler.ContextHandler.startContext (ContextHandler.java:892)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext (ServletContextHandler.java:356)
    at org.eclipse.jetty.webapp.WebAppContext.startWebapp (WebAppContext.java:1445)
    at org.eclipse.jetty.maven.plugin.JettyWebAppContext.startWebapp (JettyWebAppContext.java:328)
    at org.eclipse.jetty.webapp.WebAppContext.startContext (WebAppContext.java:1409)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart (ContextHandler.java:825)
    at org.eclipse.jetty.servlet.ServletContextHandler.doStart (ServletContextHandler.java:275)
    at org.eclipse.jetty.webapp.WebAppContext.doStart (WebAppContext.java:524)
    at org.eclipse.jetty.maven.plugin.JettyWebAppContext.doStart (JettyWebAppContext.java:397)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:117)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:97)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:117)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:97)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start (ContainerLifeCycle.java:169)
    at org.eclipse.jetty.server.Server.start (Server.java:407)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart (ContainerLifeCycle.java:110)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart (AbstractHandler.java:97)
    at org.eclipse.jetty.server.Server.doStart (Server.java:371)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start (AbstractLifeCycle.java:72)
    at org.eclipse.jetty.maven.plugin.AbstractJettyMojo.startJetty (AbstractJettyMojo.java:450)

Re: New Object Storage - Huawei OBS

2023-12-14 Thread Wido den Hollander




Op 14/12/2023 om 10:51 schreef Ronald Feicht:

Hi,

I am trying to write an Object Storage plugin for Huawei OBS using Minio as example. I 
added my plugin code to the plugins/storage/object/ directory, added my plugin into 
plugins/pom.xml and added the string 'Huawei OBS' to AddObjectStorage.vue for the 
dropdown in the UI. But when I select that dropdown entry and click "Save" in 
the UI the following exception is thrown:


Great to see! I'm working on the Ceph plugin: 
https://github.com/wido/cloudstack/commits/ceph-object-store


I will be changing something to the framework where a BucketTO is added 
and passed, not just a String containing the bucket's name.


My PR will go out next week or after Christmas I think.

Wido


com.cloud.exception.InvalidParameterValueException: can't find object store provider: Huawei OBS
    at com.cloud.storage.StorageManagerImpl.discoverObjectStore(StorageManagerImpl.java:3743)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
    at org.apache.cloudstack.network.contrail.management.EventUtils$EventInterceptor.invoke(EventUtils.java:107)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
    at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:52)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
    at com.sun.proxy.$Proxy119.discoverObjectStore(Unknown Source)
    at org.apache.cloudstack.api.command.admin.storage.AddObjectStoragePoolCmd.execute(AddObjectStoragePoolCmd.java:117)
    at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:172)
    at com.cloud.api.ApiServer.queueCommand(ApiServer.java:782)
    at com.cloud.api.ApiServer.handleRequest(ApiServer.java:603)
    at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:347)
    at com.cloud.api.ApiServlet$1.run(ApiServlet.java:154)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
    at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
    at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:151)
    at com.cloud.api.ApiServlet.doGet(ApiServlet.java:105)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder$NotAsyncServlet.service(ServletHolder.java:1386)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
    at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:226)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)

Re: Question about ObjectStoreDriver for implementing Ceph driver

2023-12-12 Thread Wido den Hollander




Op 12/12/2023 om 04:45 schreef Rohit Yadav:

Hi Wido,

I think when the minio object storage plugin was written we didn’t have the 
limitations or foresight on how to structure the code; I would agree with 
refactoring the interface enough to allow what you’re trying to achieve.

Typically you don’t want to pass database objects (Dao or VO) to a plugin, but 
to transform them into transfer objects (TO) that don’t introduce dao or schema 
pkg dependencies to the plugin; TOs are kept as simple Java objects. In the TO 
you can introduce fields and getters that suit your use cases.



Got it, thanks for the feedback! I'll work on this in two different commits.

Wido


Regards.

From: Wido den Hollander 
Sent: Tuesday, December 12, 2023 1:28:56 AM
To: dev@cloudstack.apache.org ; kis...@apache.org 

Subject: Question about ObjectStoreDriver for implementing Ceph driver

Hi (Kishan),

I am making a first attempt [0] to implement a Ceph RGW [1] Object Store
Driver for CloudStack and I have a few questions about the code.

While implementing the Ceph RGW driver I have noticed that some methods
are provided the bucket's name (as a String) as an argument, but I'd
rather have a 'Bucket' object, for example:

public AccessControlList getBucketAcl(String bucketName, long storeId)
public boolean setBucketEncryption(String bucketName, long storeId)

In Ceph's case it would be better if these methods would get a Bucket
object, like:


public AccessControlList getBucketAcl(Bucket bucket, long storeId)

The reason is that I need to access the Account the bucket belongs to.
With Minio there is an 'Admin' client which allows you to do all these
operations as an Admin, but with Ceph there isn't. With Ceph you are
supposed to obtain the credentials (access + secret) via an Admin API
[2] and then execute these commands as the user.

Now, we have the access + secret key from the account recorded under the
account and we can access that from the Bucket object:

bucket.getAccessKey()
bucket.getSecretKey()

My proposal would be to change the signature of these methods, but
before I do so, is there any particular reason the String was passed and
not the whole Bucket object?

Thanks,

Wido

[0]: https://github.com/wido/cloudstack/commits/ceph-object-store
[1]: https://ceph.io/en/discover/technology/#object
[2]:
https://www.javadoc.io/doc/io.github.twonote/radosgw-admin4j/latest/org/twonote/rgwadmin4j/RgwAdmin.html
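Following Rohit's suggestion above, a transfer object for this could look roughly like the sketch below. The class name `BucketTO` and its exact fields are assumptions drawn from this thread, not the final framework API:

```java
// Hypothetical sketch of a BucketTO transfer object, per the suggestion in
// this thread: a plain Java object with no Dao/VO or schema package
// dependencies, carrying only the fields an object storage plugin needs
// (the bucket name plus the owning account's S3 credentials).
class BucketTO {
    private final String name;
    private final String accessKey;
    private final String secretKey;

    BucketTO(String name, String accessKey, String secretKey) {
        this.name = name;
        this.accessKey = accessKey;
        this.secretKey = secretKey;
    }

    String getName() { return name; }
    String getAccessKey() { return accessKey; }
    String getSecretKey() { return secretKey; }
}
```

A driver method could then accept a `BucketTO` and build a per-user S3 client from the access/secret pair instead of relying on an admin client.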

  





Question about ObjectStoreDriver for implementing Ceph driver

2023-12-11 Thread Wido den Hollander

Hi (Kishan),

I am making a first attempt [0] to implement a Ceph RGW [1] Object Store 
Driver for CloudStack and I have a few questions about the code.


While implementing the Ceph RGW driver I have noticed that some methods 
are provided the bucket's name (as a String) as an argument, but I'd 
rather have a 'Bucket' object, for example:


public AccessControlList getBucketAcl(String bucketName, long storeId)
public boolean setBucketEncryption(String bucketName, long storeId)

In Ceph's case it would be better if these methods would get a Bucket 
object, like:



public AccessControlList getBucketAcl(Bucket bucket, long storeId)

The reason is that I need to access the Account the bucket belongs to. 
With Minio there is an 'Admin' client which allows you to do all these 
operations as an Admin, but with Ceph there isn't. With Ceph you are 
supposed to obtain the credentials (access + secret) via an Admin API 
[2] and then execute these commands as the user.


Now, we have the access + secret key from the account recorded under the 
account and we can access that from the Bucket object:


bucket.getAccessKey()
bucket.getSecretKey()

My proposal would be to change the signature of these methods, but 
before I do so, is there any particular reason the String was passed and 
not the whole Bucket object?


Thanks,

Wido

[0]: https://github.com/wido/cloudstack/commits/ceph-object-store
[1]: https://ceph.io/en/discover/technology/#object
[2]: 
https://www.javadoc.io/doc/io.github.twonote/radosgw-admin4j/latest/org/twonote/rgwadmin4j/RgwAdmin.html
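The proposed signature change can be sketched as below; `Bucket` here is a minimal stand-in for CloudStack's own class (which, as described above, exposes getAccessKey()/getSecretKey()), and the returned strings are purely illustrative:

```java
// Illustrative contrast between the current String-based signature and the
// proposed Bucket-based one. With only the name, a Ceph RGW driver cannot
// obtain the owner's credentials; with the Bucket object it can act as the
// bucket's owner. The Bucket class below is a stand-in, not CloudStack's.
class Bucket {
    private final String name, accessKey, secretKey;
    Bucket(String name, String accessKey, String secretKey) {
        this.name = name;
        this.accessKey = accessKey;
        this.secretKey = secretKey;
    }
    String getName() { return name; }
    String getAccessKey() { return accessKey; }
    String getSecretKey() { return secretKey; }
}

class SignatureSketch {
    // Current style: only the bucket's name is available to the driver.
    static String describeByName(String bucketName, long storeId) {
        return "store " + storeId + ", bucket " + bucketName;
    }

    // Proposed style: the full Bucket object, so the driver can build a
    // per-owner S3 client from the access/secret key pair.
    static String describeAsOwner(Bucket bucket, long storeId) {
        return "store " + storeId + ", bucket " + bucket.getName()
                + ", acting as " + bucket.getAccessKey();
    }
}
```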


Re: [VOTE] Adopt Github Discusssions as Users Forum

2023-12-05 Thread Wido den Hollander

-0: Agree with Nux (see his concern in another e-mail)

Op 04/12/2023 om 22:56 schreef Nux:

-0 - I have voiced my concerns already.


On 2023-12-04 08:01, Rohit Yadav wrote:

All,

Following the discussion thread on adopting Github Discussions as 
users forum [1], I put the following proposal for a vote:



  1.  Adopt and use Github Discussions as user forums.
  2.  The Github Discussions feature is tied with the 
us...@cloudstack.apache.org mailing list (PR: 
https://github.com/apache/cloudstack/pull/8274).
  3.  Any project governance and decision-making thread such as 
voting, releases etc. should continue to use the project mailing lists.


Vote will be open for 120 hours (by Friday, 8th Dec).

For sanity in tallying the vote, can PMC members please be sure to 
indicate "(binding)" with their vote?


[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

[1] https://lists.apache.org/thread/hs0295hw9rnmhoh9l2qo5hc4b62hhvk8


Regards.



Re: [DISCUSS] New Design for the Apache CloudStack Website

2023-09-07 Thread Wido den Hollander




Op 31-08-2023 om 11:41 schreef Rohit Yadav:

Thanks Ivet, the new iterated design looks impressive especially some of the 
new graphical elements.

+1 to moving forward with the proposed design. Let's collect any further 
feedback and ideas from the community and if there are no objections go ahead 
with updating the preview/staging site (https://cloudstack.staged.apache.org/) 
and eventually publishing the website.

Just to be clear on the CMS integration - the ASF infra has declined supporting 
a 3rd party git-based CMS that can be used with the project website. This is 
such a shame as I had created PoCs with some rather okay-ist git-based CMS such 
as Netlify-CMS, TinaCMS and SpinalCMS which would give similar UI-based 
workflow like the old Roller-based blog did.

Nevertheless, for the purposes of publishing blogs the new Github-based 
document/markdown editor/CMS is fair enough and now allows uploading of assets 
(images, files etc.) and blog that can be edited directly for any committer 
incl. PMC members logged into Github and upon saving such changes the website 
is published by an automatic Github action that builds and published the 
websites. Unfortunately, any non-committer would need to follow the PR 
workflow. I had this documented at 
https://cloudstack.staged.apache.org/website-guide/ that I can help further 
update in this regard.



Thank you Ivet and Rohit, this is looking great! Much better than what we had.

I have no real comments nor objections. It's a great improvement 
over what we currently have.


Wido



Regards.


From: Ivet Petrova 
Sent: Wednesday, August 30, 2023 19:04
To: Giles Sirett 
Cc: dev@cloudstack.apache.org ; us...@cloudstack.apache.org 
; Marketing 
Subject: Re: [DISCUSS] New Design for the Apache CloudStack Website

Hi All,

I uploaded the design here: 
https://drive.google.com/file/d/1pef7xWWMPYAA5UkbS_XMUxrz53KB7J5t/view?usp=sharing


Kind regards,





  


On 30 Aug 2023, at 16:31, Giles Sirett 
mailto:giles.sir...@shapeblue.com>> wrote:

Hi Ivet – thanks for pushing forward with this – excited to review a new design.

On that note, I cant see a link in your mail ☹

Kind Regards
Giles


Giles Sirett
CEO
giles.sir...@shapeblue.com
www.shapeblue.com




From: Ivet Petrova 
mailto:ivet.petr...@shapeblue.com>>
Sent: Wednesday, August 30, 2023 10:14 AM
To: us...@cloudstack.apache.org; Marketing 
mailto:market...@shapeblue.com>>
Cc: dev mailto:dev@cloudstack.apache.org>>
Subject: [DISCUSS] New Design for the Apache CloudStack Website

Hello,

I would like to start a discussion on the design of the Apache CloudStack 
Website and to propose a new design for it.

As we all know, the website has not been changed for years in terms of design 
and information. The biggest issue we know we have is that the website is not 
showing the full potential of CloudStack. In addition to it during discussions 
with many community members, I have noted the following issues:
- the existing website design is old-school
- the current homepage does not collect enough information to show CloudStack's 
strengths
- current website design is missing images from the ACS UI and cannot create a 
feel for the product in the users
- the website has issues on a mobile device
- we lack any graphic and diagrams
- some important information like how to download is not very visible

I collected a lot of feedback during last months and want to propose a new up 
to date design for the website, which is attached below. The new design will 
bring:
- improved UX
- look and feel corresponding to the CloudStack's capabilities and strengths
- more graphical elements, diagrams
- better branding
- more important information, easily accessible for the potential users

I hope you will like the new design – all feedback welcome. Once we have the 
design finalised, we will use Rohit’s proposal previously of a CMS, which is 
easy to edit.


Kind regards,



Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP and Registration Now Open!

2023-07-28 Thread Wido den Hollander

Hi,

Op 24-07-2023 om 08:53 schreef Ivet Petrova:

Hi All,

- As Gregoire has mentioned, the hotel recently rebranded - maybe the new site 
was launched 2-3 weeks ago. So the name of the hotel now is 
https://www.ihg.com/voco/hotels/us/en/clichy/parpc/hoteldetail
It is renovated and rebranded, but location is the same.


I see! The link now seems to be fine.


However, on CloudStackcollab.org the link is partially broken.

"CCC 2023 will be taking place in the Holiday Inn/VOCO Paris, Porte de 
Clichy. "


This link points to: 
"https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetailhttps://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail;


Double the URL concatted.



- I have sent an email last week to the hotel to ask for some pricing/special 
offer for our conference members. Will keep you posted as soon as I have 
details.

- Good remark for the project website. I thought we posted there, will add the 
info today.


I see it now! Shouldn't this link to https://www.cloudstackcollab.org/ ?

Because right now it goes to Hubilo directly and there isn't much 
information there.




- We will add the train info also to the website official info for travelling 
we plan to post. Megali from DIMSI’s team is helping us on these details.



Great! And indeed, from London you can very easily reach Paris via 
Eurostar. I just booked my Thalys train tickets :-)


Wido


Kind regards,


  


On 23 Jul 2023, at 20:27, Gregoire LAMODIERE 
mailto:g.lamodi...@dimsi.fr>> wrote:

Hi Wido,

Thanks for your message.
You are right, UK / EU people should consider coming to Paris by train.
The hotel is well connected to subway lines 13 and 14, and we (DIMSI) can help 
anyone find the best way to come.

About the hotel, you are right, it was rebranded last month, and I guess 
they forgot to set up a redirect from the old website (Holiday Inn) to the new one 
(voco).
I stayed there once since the change, and the result is good, with a nice rum bar for 
late "change the world" conversations.

As local players, we (still the DIMSI team) will be happy to arrange a beer / dinner 
session.
Maybe we can discuss this @Ivet Petrova and @Jamie Pell ?

Btw, very happy to welcome the community in France next November!



Gregoire LAMODIERE
Co-founder & Solution architect
-----Original Message-----
From: Wido den Hollander <w...@widodh.nl>
Sent: Friday, 21 July 2023 20:18
To: us...@cloudstack.apache.org; Jamie Pell <jamie.p...@shapeblue.com>; 
market...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP 
and Registration Now Open!

Hi Ivet,

Thanks for organizing this! I was just sharing this internally with our 
colleagues throughout Europe and I noticed that the link to the hotel is 
broken. The correct URL is:
https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail

A few things in addition:

- Is there a deal with the hotel for a fixed price for rooms?
- I don't see this event mentioned at the CloudStack main website
- Do we have anybody local who can help with finding bars for drinks?

I'd also like to make a recommendation to everybody from Europe to consider 
coming by train instead of flying. Paris is very well connected from Brussels 
(BE), Aachen (DE), Cologne (DE) and Amsterdam (NL), and of course other cities in 
Europe.

'Gare du Nord' and 'Gare de Lyon' are two major train stations where high speed 
trains terminate.

Check out Google Maps or Thalys/TGV to see if coming by train makes sense. I'll 
be doing that for sure and will make sure all my other colleagues come by train 
as well.

https://en.wikipedia.org/wiki/High-speed_rail_in_Europe

See you in Paris in November!

Wido

On 31/05/2023 at 10:48, Jamie Pell wrote:
Hi everyone,

I am pleased to share that the CloudStack Collaboration Conference 2023 
registration and call for presentations is now open! Once again, this year’s 
event will be a hybrid one, allowing community members who are not able to 
attend in-person, to join virtually.

Register for
CCC<https://events.hubilo.com/cloudstack-collaboration-conference-2023
/login>

Submit a
Talk<https://docs.google.com/forms/d/e/1FAIpQLSdaFH8I_fubiImp6mOXpAPL8
2UfjpCgisu3WAQBMtY-geqWyA/viewform>

This year’s event will be taking place in the Holiday Inn 
Paris<https://www.ihg.com/holidayinn/hotels/us/en/clichy/parpc/hoteldetail> on 
November 23-24th.

If your company is interested in supporting the event by becoming a
sponsor, you can read the sponsorship
prospectus.<https://www.cloudstackcollab.org/wp-content/uploads/2023/0
2/Sponsorship-Prospectus-CCC-2023.pdf>

For further information regarding the event, visit the official event
website.<https://www.cloudstackcollab.org/>

Kind regards,



Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP and Registration Now Open!

2023-07-26 Thread Wido den Hollander




On 26/07/2023 at 09:40, Ivet Petrova wrote:

Hi Wido,

The link on the site is fixed.
As for the project website - it links on purpose to the Hubilo platform, so 
that people can see a direct registration page and make it easier for them to 
register.
As you mentioned there is not enough info there - can you let me know what 
you would like to see there to make the experience better?



Yesterday it was redirecting me to 
https://events.hubilo.com/cloudstack-collaboration-conference-2023/login


That URL doesn't contain much, but right now the main ACS website links 
to 
https://events.hubilo.com/cloudstack-collaboration-conference-2023/register 
and that does contain the information needed.


All good!

Wido


Kind regards,


  


On 25 Jul 2023, at 12:01, Wido den Hollander 
mailto:w...@widodh.nl>> wrote:

Hi,

On 24-07-2023 at 08:53, Ivet Petrova wrote:
Hi All,
- As Gregoire has mentioned, the hotel recently rebranded - maybe the new site 
was launched 2-3 weeks ago. So the name of the hotel now is 
https://www.ihg.com/voco/hotels/us/en/clichy/parpc/hoteldetail
It is renovated and rebranded, but location is the same.

I see! The link now seems to be fine.


However, on CloudStackcollab.org the link is partially broken.

"CCC 2023 will be taking place in the Holiday Inn/VOCO Paris, Porte de Clichy."

This link points to: 
"https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetailhttps://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail"

The URL is concatenated twice.

- I have sent an email last week to the hotel to ask for some pricing/special 
offer for our conference members. Will keep you posted as soon as I have 
details.
- Good remark for the project website. I thought we posted there, will add the 
info today.

I see it now! Shouldn't this link to https://www.cloudstackcollab.org/ ?

Because right now it goes to Hubilo directly and there isn't much information 
there.

- We will add the train info also to the website official info for travelling 
we plan to post. Megali from DIMSI’s team is helping us on these details.

Great! And indeed, from London you can very easily reach Paris via Eurostar. I 
just booked my Thalys train tickets :-)

Wido

Kind regards,
On 23 Jul 2023, at 20:27, Gregoire LAMODIERE <g.lamodi...@dimsi.fr> wrote:
Hi Wido,
Thanks for your message.
You are right, UK / EU people should consider coming to Paris by train.
The hotel is well connected to subway lines 13 and 14, and we (DIMSI) can help 
anyone find the best way to come.
About the hotel, you are right, it was rebranded last month, and I guess 
they forgot to set up a redirect from the old website (Holiday Inn) to the new one 
(voco).
I stayed there once since the change, and the result is good, with a nice rum bar for 
late "change the world" conversations.
As local players, we (still the DIMSI team) will be happy to arrange a beer / dinner 
session.
Maybe we can discuss this @Ivet Petrova and @Jamie Pell ?
Btw, very happy to welcome the community in France next November!
Gregoire LAMODIERE
Co-founder & Solution architect
-----Original Message-----
From: Wido den Hollander <w...@widodh.nl>
Sent: Friday, 21 July 2023 20:18
To: us...@cloudstack.apache.org; Jamie Pell <jamie.p...@shapeblue.com>; 
market...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP 
and Registration Now Open!
Hi Ivet,
Thanks for organizing this! I was just sharing this internally with our 
colleagues throughout Europe and I noticed that the link to the hotel is 
broken. The correct URL is:
https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail
A few things in addition:
- Is there a deal with the hotel for a fixed price for rooms?
- I don't see this event mentioned at the CloudStack main website
- Do we have anybody local who can help with finding bars for drinks?
I'd also like to make a recommendation to everybody from Europe to consider 
coming by train instead of flying. Paris is very well connected from Brussels 
(BE), Aachen (DE), Cologne (DE) and Amsterdam (NL), and of course other cities in 
Europe.
'Gare du Nord' and 'Gare de Lyon' are two major train stations where high speed 
trains terminate.
Check out Google Maps or Thalys/TGV to see if coming by train makes sense. I'll 
be doing that for sure and will make sure all my other colleagues come by train 
as well.
https://en.wikipedia.org/wiki/High-speed_rail_in_Europe
See you in Paris in November!
Wido
On 31/05/2023 at 10:48, Jamie Pell wrote:

Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP and Registration Now Open!

2023-07-25 Thread Wido den Hollander

Hi,

On 24-07-2023 at 08:53, Ivet Petrova wrote:

Hi All,

- As Gregoire has mentioned, the hotel recently rebranded - maybe the new site 
was launched 2-3 weeks ago. So the name of the hotel now is 
https://www.ihg.com/voco/hotels/us/en/clichy/parpc/hoteldetail
It is renovated and rebranded, but location is the same.


I see! The link now seems to be fine.


However, on CloudStackcollab.org the link is partially broken.

"CCC 2023 will be taking place in the Holiday Inn/VOCO Paris, Porte de 
Clichy. "


This link points to: 
"https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetailhttps://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail"


The URL is concatenated twice.



- I have sent an email last week to the hotel to ask for some pricing/special 
offer for our conference members. Will keep you posted as soon as I have 
details.

- Good remark for the project website. I thought we posted there, will add the 
info today.


I see it now! Shouldn't this link to https://www.cloudstackcollab.org/ ?

Because right now it goes to Hubilo directly and there isn't much 
information there.




- We will add the train info also to the website official info for travelling 
we plan to post. Megali from DIMSI’s team is helping us on these details.



Great! And indeed, from London you can very easily reach Paris via 
Eurostar. I just booked my Thalys train tickets :-)


Wido


Kind regards,


  


On 23 Jul 2023, at 20:27, Gregoire LAMODIERE <g.lamodi...@dimsi.fr> wrote:

Hi Wido,

Thanks for your message.
You are right, UK / EU people should consider coming to Paris by train.
The hotel is well connected to subway lines 13 and 14, and we (DIMSI) can help 
anyone find the best way to come.

About the hotel, you are right, it was rebranded last month, and I guess 
they forgot to set up a redirect from the old website (Holiday Inn) to the new one 
(voco).
I stayed there once since the change, and the result is good, with a nice rum bar for 
late "change the world" conversations.

As local players, we (still the DIMSI team) will be happy to arrange a beer / dinner 
session.
Maybe we can discuss this @Ivet Petrova and @Jamie Pell ?

Btw, very happy to welcome the community in France next November!



Gregoire LAMODIERE
Co-founder & Solution architect
-----Original Message-----
From: Wido den Hollander <w...@widodh.nl>
Sent: Friday, 21 July 2023 20:18
To: us...@cloudstack.apache.org; Jamie Pell <jamie.p...@shapeblue.com>; 
market...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP 
and Registration Now Open!

Hi Ivet,

Thanks for organizing this! I was just sharing this internally with our 
colleagues throughout Europe and I noticed that the link to the hotel is 
broken. The correct URL is:
https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail

A few things in addition:

- Is there a deal with the hotel for a fixed price for rooms?
- I don't see this event mentioned at the CloudStack main website
- Do we have anybody local who can help with finding bars for drinks?

I'd also like to make a recommendation to everybody from Europe to consider 
coming by train instead of flying. Paris is very well connected from Brussels 
(BE), Aachen (DE), Cologne (DE) and Amsterdam (NL) and ofcourse other cities in 
Paris.

'Gare du Nord' and 'Gare de Lyon' are two major train stations where high speed 
trains terminate.

Check out Google Maps or Thalys/TGV to see if coming by train makes sense. I'll 
be doing that for sure and will make sure all my other colleagues come by train 
as well.

https://en.wikipedia.org/wiki/High-speed_rail_in_Europe

See you in Paris in November!

Wido

On 31/05/2023 at 10:48, Jamie Pell wrote:
Hi everyone,

I am pleased to share that the CloudStack Collaboration Conference 2023 
registration and call for presentations is now open! Once again, this year’s 
event will be a hybrid one, allowing community members who are not able to 
attend in-person, to join virtually.

Register for
CCC<https://events.hubilo.com/cloudstack-collaboration-conference-2023
/login>

Submit a
Talk<https://docs.google.com/forms/d/e/1FAIpQLSdaFH8I_fubiImp6mOXpAPL8
2UfjpCgisu3WAQBMtY-geqWyA/viewform>

This year’s event will be taking place in the Holiday Inn 
Paris<https://www.ihg.com/holidayinn/hotels/us/en/clichy/parpc/hoteldetail> on 
November 23-24th.

If your company is interested in supporting the event by becoming a
sponsor, you can read the sponsorship
prospectus.<https://www.cloudstackcollab.org/wp-content/uploads/2023/0
2/Sponsorship-Prospectus-CCC-2023.pdf>

For further information regarding the event, visit the official event
website.<https://www.cloudstackcollab.org/>

Kind regards,


Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP and Registration Now Open!

2023-07-24 Thread Wido den Hollander

Hi Ivet,

Thanks for organizing this! I was just sharing this internally with our 
colleagues throughout Europe and I noticed that the link to the hotel is 
broken. The correct URL is: 
https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail


A few things in addition:

- Is there a deal with the hotel for a fixed price for rooms?
- I don't see this event mentioned at the CloudStack main website
- Do we have anybody local who can help with finding bars for drinks?

I'd also like to make a recommendation to everybody from Europe to 
consider coming by train instead of flying. Paris is very well connected 
from Brussels (BE), Aachen (DE), Cologne (DE) and Amsterdam (NL), and 
of course other cities in Europe.


'Gare du Nord' and 'Gare de Lyon' are two major train stations where 
high speed trains terminate.


Check out Google Maps or Thalys/TGV to see if coming by train makes 
sense. I'll be doing that for sure and will make sure all my other 
colleagues come by train as well.


https://en.wikipedia.org/wiki/High-speed_rail_in_Europe

See you in Paris in November!

Wido

On 31/05/2023 at 10:48, Jamie Pell wrote:

Hi everyone,

I am pleased to share that the CloudStack Collaboration Conference 2023 
registration and call for presentations is now open! Once again, this year’s 
event will be a hybrid one, allowing community members who are not able to 
attend in-person, to join virtually.

Register for 
CCC

Submit a 
Talk

This year’s event will be taking place in the Holiday Inn 
Paris on 
November 23-24th.

If your company is interested in supporting the event by becoming a sponsor, you can 
read the sponsorship 
prospectus.

For further information regarding the event, visit the official event 
website.

Kind regards,







  





Re: OFFICIAL ANNOUNCEMENT: CloudStack Collaboration Conference 2023 CFP and Registration Now Open!

2023-07-21 Thread Wido den Hollander

Hi Ivet,

Thanks for organizing this! I was just sharing this internally with our 
colleagues throughout Europe and I noticed that the link to the hotel is 
broken. The correct URL is: 
https://www.ihg.com/voco/hotels/gb/en/clichy/parpc/hoteldetail


A few things in addition:

- Is there a deal with the hotel for a fixed price for rooms?
- I don't see this event mentioned at the CloudStack main website
- Do we have anybody local who can help with finding bars for drinks?

I'd also like to make a recommendation to everybody from Europe to 
consider coming by train instead of flying. Paris is very well connected 
from Brussels (BE), Aachen (DE), Cologne (DE) and Amsterdam (NL), and 
of course other cities in Europe.


'Gare du Nord' and 'Gare de Lyon' are two major train stations where 
high speed trains terminate.


Check out Google Maps or Thalys/TGV to see if coming by train makes 
sense. I'll be doing that for sure and will make sure all my other 
colleagues come by train as well.


https://en.wikipedia.org/wiki/High-speed_rail_in_Europe

See you in Paris in November!

Wido

On 31/05/2023 at 10:48, Jamie Pell wrote:

Hi everyone,

I am pleased to share that the CloudStack Collaboration Conference 2023 
registration and call for presentations is now open! Once again, this year’s 
event will be a hybrid one, allowing community members who are not able to 
attend in-person, to join virtually.

Register for 
CCC

Submit a 
Talk

This year’s event will be taking place in the Holiday Inn 
Paris on 
November 23-24th.

If your company is interested in supporting the event by becoming a sponsor, you can 
read the sponsorship 
prospectus.

For further information regarding the event, visit the official event 
website.

Kind regards,







  





Re: Missing ubuntu 4.18 packages

2023-03-19 Thread Wido den Hollander

Hi,

We are aware. Those need to be built and uploaded.

Wido

On 18-03-2023 at 13:39, Curious Pandora wrote:

Hello,

it looks like the 4.18 packages for focal/jammy/bionic are missing at
download.cloudstack.org.

Regards,



Re: Register for the CCC - Physical Attendance

2022-10-27 Thread Wido den Hollander
Looking forward to it! We'll join with 4 people from our company 
(Netherlands).


Great to see the community again. Learn from and help each other and 
afterwards enjoy a few beers.


See you all in Bulgaria!

Wido

On 10/26/22 12:59, Ivet Petrova wrote:

Hi all,
I would like to kindly ask all of you, who plan to attend the CloudStack 
Collaboration Conference physically in Sofia to register as soon as possible. I 
am finalizing the lunch and catering details and need some close to exact 
numbers for this by end of week.
Please, take a moment to complete your registration here: 
https://events.hubilo.com/cloudstack-collaboration-conference-2022/register

Thanks for the effort!
Ivet

Sent from my iPhone

  



Re: Running the UI on Docker container

2022-09-07 Thread Wido den Hollander




On 07-09-2022 at 09:34, Nux wrote:

Wido,

I've just run the UI standalone on a separate machine the other day, but 
not on Docker.
What I did was install the cloudstack-ui rpm package, then serve it like 
a normal site, but proxy /client to mgmt-server:8000 as described here:


https://github.com/apache/cloudstack/tree/main/ui#production



Yes, true. Our scenario is that we want to have a single VM running 
multiple versions of the UI with different config.json files.


Now technically we could also have the Proxy (NGINX/Apache) inject a 
different config.json based on the hostname/url being requested and thus 
influence the UI and how it behaves.


Running different Docker containers with different configurations is 
also possible and we aren't sure yet on which route to take.


Our main idea is that we want to personalize and customize the UI to 
better fit the look and feel of the company and other portals.


Wido


hth

---
Nux
www.nux.ro

On 2022-09-06 15:56, Wido den Hollander wrote:

Hi,

Anybody there running the UI outside the Management Server in a Docker
container like described here?

https://github.com/apache/cloudstack/tree/main/ui

I have a use-case for this we would like to investigate, but the build
currently fails [0].

Is this still relevant and supported?

Wido

[0]: https://github.com/apache/cloudstack/issues/6709


Running the UI on Docker container

2022-09-06 Thread Wido den Hollander

Hi,

Anybody there running the UI outside the Management Server in a Docker 
container like described here?


https://github.com/apache/cloudstack/tree/main/ui

I have a use-case for this we would like to investigate, but the build 
currently fails [0].


Is this still relevant and supported?

Wido

[0]: https://github.com/apache/cloudstack/issues/6709


Introducing Bart and Ruben

2022-08-08 Thread Wido den Hollander

Hi devs,

Recently Bart and Ruben (CC'd) joined CLDIN (formerly PCextreme) as DevOps 
engineers. They will be working on CloudStack and start contributing to 
the project.


In the coming months they'll be finding their way around the community 
and code, and you should start seeing pull requests from them coming in.


We'll try to make it to the CCC in Bulgaria later this year!

Wido


Re: IPV6 in Isolated/VPC networks

2022-05-13 Thread Wido den Hollander




On 13-05-2022 at 11:39, Wei ZHOU wrote:

Hi Wido,

We do not allocate an ipv6 subnet to a VPC. Instead, we allocate an ipv6
subnet for each vpc tier.
These subnets have the same gateway (ipv6 addr in VR).

for example

ipv6 route fd23:313a:2f53:3111::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4
ipv6 route fd23:313a:2f53:3112::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4
ipv6 route fd23:313a:2f53:3113::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4



Thanks Wei and Alex, that explains! So you can have multiple /64 subnets 
being routed to the same VR.


It's up to the organization to only allocate /64s coming from a specific 
/56 or /48 to make things logical, but underneath it's all /64s which we 
need to route.
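That allocation scheme can be sketched with the standard ipaddress module. The /56 parent prefix, the next-hop, and the tier count below are illustrative (the thread's examples use fd23:313a:2f53:3111-3113::/64); the point is only that each tier gets its own /64, all routed to the same VR address.

```python
# Sketch: carve per-tier /64s out of a larger prefix and point them all
# at the same VR next-hop, matching the route examples in this thread.
import ipaddress

def tier_routes(parent_prefix, next_hop, count):
    """Return 'ipv6 route <subnet> <next_hop>' lines for `count` /64 tiers."""
    parent = ipaddress.ip_network(parent_prefix)
    subnets = parent.subnets(new_prefix=64)  # generator of /64s
    return [f"ipv6 route {next(subnets)} {next_hop}" for _ in range(count)]

if __name__ == "__main__":
    for cmd in tier_routes("fd23:313a:2f53:3100::/56",
                           "fd23:313a:2f53:3000:1c00:baff:fe00:4", 3):
        print(cmd)
```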


Wido


-Wei

On Fri, 13 May 2022 at 11:30, Wido den Hollander  wrote:




On 12-05-2022 at 16:10, Alex Mattioli wrote:

ipv6 route fd23:313a:2f53:3cbf::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4


That's correct


Ok. So in that case the subnet would be very welcome to be present in
the message on the bus.




Or a larger subnet:
ipv6 route fd23:313a:2f53:3c00::/56 fd23:313a:2f53:3000:1c00:baff:fe00:4


Not really, the subnets for isolated/VPC networks are always /64, which 
also means there is no real need to include subnets.




But there can be multiple networks behind the VPC router or not? If
there are multiple networks you need >/64 as you can then allocate /64s
from that larger subnet.

Wido


Cheers
Alex










Re: IPV6 in Isolated/VPC networks

2022-05-13 Thread Wido den Hollander




On 12-05-2022 at 16:10, Alex Mattioli wrote:

ipv6 route fd23:313a:2f53:3cbf::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4


That's correct


Ok. So in that case the subnet would be very welcome to be present in 
the message on the bus.





Or a larger subnet:
ipv6 route fd23:313a:2f53:3c00::/56 fd23:313a:2f53:3000:1c00:baff:fe00:4


Not really, the subnets for isolated/VPC networks are always /64, which also 
means there is no real need to include subnets.



But there can be multiple networks behind the VPC router or not? If 
there are multiple networks you need >/64 as you can then allocate /64s 
from that larger subnet.


Wido


Cheers
Alex


  



-Original Message-
From: Wido den Hollander 
Sent: 12 May 2022 16:04
To: Abhishek Kumar ; dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks



On 5/12/22 09:55, Abhishek Kumar wrote:

Hi Wido,

I do not understand what you mean by WAB address but


WAB was a typo. I meant WAN.


fd23:313a:2f53:3000:1c00:baff:fe00:4 is the public IP of the network
(IPv6 of the public NIC of the network VR) in the sample.
Yes, route for fd23:313a:2f53:3cbf::/64 need to be added to this IP.
fd23:313a:2f53:3cbf::/64 is guest IPv6 CIDR of the network here.



So that means that I would need to run this command on my upstream router:

ipv6 route fd23:313a:2f53:3cbf::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4

Or a larger subnet:

ipv6 route fd23:313a:2f53:3c00::/56 fd23:313a:2f53:3000:1c00:baff:fe00:4


Currently, the message on event bus does not include subnet. Should
that be included?


Yes, because then you can pick up those messages and inject the route via ExaBGP 
into a routing table right away.


In case of VPCs, there could be multiple tiers which will need
multiple routes to be added. Will that be an issue if we include
current network/tier subnet in the event message?


No, as long as it points to the same VR you simply have multiple subnets being 
routed to the same VR.

I do wonder what happens if you destroy the VR and create a new one. The WAN 
address then changes (due to SLAAC) and thus the routes need to be 
re-programmed.

Wido



Regards,
Abhishek


   

   


--
--
*From:* Wido den Hollander 
*Sent:* 10 May 2022 19:01
*To:* dev@cloudstack.apache.org ; Abhishek
Kumar 
*Subject:* Re: IPV6 in Isolated/VPC networks
  
Hi,


On 10-05-2022 at 11:42, Abhishek Kumar wrote:

Yes. When a public IPv6 address is assigned or released, CloudStack will publish 
an event of type NET.IP6ASSIGN or NET.IP6RELEASE.
These event notifications can be tracked, and with the improvements in the events 
framework, these event messages will have the network uuid as entityuuid and 
Network as entitytype. Using this, the network can be queried to list the IPv6 
routes that need to be added.

Sample event message,

{"eventDateTime":"2022-05-10 09:32:12
+","entityuuid":"14658b39-9d20-4783-a1bc-12fb58bcbd98","Network":
"14658b39-9d20-4783-a1bc-12fb58bcbd98","description":"Assigned public
IPv6 address: fd23:313a:2f53:3000:1c00:baff:fe00:4 for network ID:
14658b39-9d20-4783-a1bc-12fb58bcbd98","event":"NET.IP6ASSIGN","user":
"bde866ba-c600-11ec-af19-1e00320001f3","account":"bde712c9-c600-11ec-
af19-1e00320001f3","entity":"Network","status":"Completed"}


Sample API call,

list networks id=14658b39-9d20-4783-a1bc-12fb58bcbd98
filter=id,name,ip6routes

{
     "count": 1,
     "network": [
   {
     "id": "14658b39-9d20-4783-a1bc-12fb58bcbd98",
     "ip6routes": [
   {
     "gateway": "fd23:313a:2f53:3000:1c00:baff:fe00:4",
     "subnet": "fd23:313a:2f53:3cbf::/64"
   }


Looking at this JSON, does this mean that
fd23:313a:2f53:3000:1c00:baff:fe00:4 is the WAB address of the VR?

And that I would need to (statically) route fd23:313a:2f53:3cbf::/64
to that IP?

The event message does not include the subnet; that makes it a bit 
more difficult, as you would then also need to make an API call to gather 
that information.

Wido

P.S.: Who controls the DNS of qa.cloudstack.cloud? It lacks an 
AAAA-record for IPv6!


     ],
     "name": "routing_test"
   }
     ]
}





From: Wido den Hollander 
Sent: 10 May 2022 13:59
To: dev@cloudstack.apache.org ; Abhishek
Kumar 
Subject: Re: IPV6 in Isolated/VPC networks

On 10-05-2022 at 10:19, Abhishek Kumar wrote:

Hi all,

IPv6 Support in Isolated Network and VPC with Static Routing based on the 
design doc [1] has been implemented and is available in 4.17.0 RC2. I hope 
while testing 4.17.0 RC2 you will also try to test it ?
Documentation for it is available at
http://qa.cloudstack.cloud/docs/WIP-PROOFING/pr/262/plugi

Re: IPV6 in Isolated/VPC networks

2022-05-12 Thread Wido den Hollander



On 5/12/22 09:55, Abhishek Kumar wrote:
> Hi Wido,
> 
> I do not understand what you mean by WAB address but

WAB was a typo. I meant WAN.

> fd23:313a:2f53:3000:1c00:baff:fe00:4 is the public IP of the network
> (IPv6 of the public NIC of the network VR) in the sample.
> Yes, route for fd23:313a:2f53:3cbf::/64 need to be added to this IP.
> fd23:313a:2f53:3cbf::/64 is guest IPv6 CIDR of the network here.
> 

So that means that I would need to run this command on my upstream router:

ipv6 route fd23:313a:2f53:3cbf::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4

Or a larger subnet:

ipv6 route fd23:313a:2f53:3c00::/56 fd23:313a:2f53:3000:1c00:baff:fe00:4
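A small sketch of deriving those upstream route commands from the listNetworks response shown in this thread. The response dict is abridged from the sample; the helper function is illustrative, not part of any CloudStack tooling.

```python
# Sketch: turn a listNetworks response (filter=id,name,ip6routes) into the
# 'ipv6 route <subnet> <gateway>' commands an upstream router would need.
def routes_from_network(network):
    """Return one static-route command per entry in the network's ip6routes."""
    return [
        f"ipv6 route {r['subnet']} {r['gateway']}"
        for r in network.get("ip6routes", [])
    ]

# Abridged from the sample API response earlier in this thread.
response = {
    "network": [{
        "id": "14658b39-9d20-4783-a1bc-12fb58bcbd98",
        "ip6routes": [{
            "gateway": "fd23:313a:2f53:3000:1c00:baff:fe00:4",
            "subnet": "fd23:313a:2f53:3cbf::/64",
        }],
    }]
}

for net in response["network"]:
    for cmd in routes_from_network(net):
        print(cmd)
        # -> ipv6 route fd23:313a:2f53:3cbf::/64 fd23:313a:2f53:3000:1c00:baff:fe00:4
```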

> Currently, the message on event bus does not include subnet. Should that
> be included?

Yes, because then you can pick up those messages and inject the route via
ExaBGP into a routing table right away.

> In case of VPCs, there could be multiple tiers which will need multiple
> routes to be added. Will that be an issue if we include current
> network/tier subnet in the event message?

No, as long as it points to the same VR you simply have multiple subnets
being routed to the same VR.

I do wonder what happens if you destroy the VR and create a new one. The
WAN address then changes (due to SLAAC) and thus the routes need to be
re-programmed.

Wido

> 
> Regards,
> Abhishek
> 
> 
>   
> 
>   
> 
> --------
> *From:* Wido den Hollander 
> *Sent:* 10 May 2022 19:01
> *To:* dev@cloudstack.apache.org ; Abhishek
> Kumar 
> *Subject:* Re: IPV6 in Isolated/VPC networks
>  
> Hi,
> 
> On 10-05-2022 at 11:42, Abhishek Kumar wrote:
>> Yes. When a public IPv6 address is assigned or released, CloudStack will publish 
>> an event of type NET.IP6ASSIGN or NET.IP6RELEASE.
>> These event notifications can be tracked, and with the improvements in the events 
>> framework, these event messages will have the network uuid as entityuuid and 
>> Network as entitytype. Using this, the network can be queried to list the IPv6 
>> routes that need to be added.
>> 
>> Sample event message,
>> 
>> {"eventDateTime":"2022-05-10 09:32:12 
>> +","entityuuid":"14658b39-9d20-4783-a1bc-12fb58bcbd98","Network":"14658b39-9d20-4783-a1bc-12fb58bcbd98","description":"Assigned
>>  public IPv6 address: fd23:313a:2f53:3000:1c00:baff:fe00:4 for network ID: 
>> 14658b39-9d20-4783-a1bc-12fb58bcbd98","event":"NET.IP6ASSIGN","user":"bde866ba-c600-11ec-af19-1e00320001f3","account":"bde712c9-c600-11ec-af19-1e00320001f3","entity":"Network","status":"Completed"}
>> 
>> 
>> Sample API call,
>>> list networks id=14658b39-9d20-4783-a1bc-12fb58bcbd98 
>>> filter=id,name,ip6routes
>> {
>>    "count": 1,
>>    "network": [
>>  {
>>    "id": "14658b39-9d20-4783-a1bc-12fb58bcbd98",
>>    "ip6routes": [
>>  {
>>    "gateway": "fd23:313a:2f53:3000:1c00:baff:fe00:4",
>>    "subnet": "fd23:313a:2f53:3cbf::/64"
>>  }
> 
> Looking at this JSON, does this mean that
> fd23:313a:2f53:3000:1c00:baff:fe00:4 is the WAB address of the VR?
> 
> And that I would need to (statically) route fd23:313a:2f53:3cbf::/64 to
> that IP?
> 
> The event message does not include the subnet; that makes it a bit more
> difficult, as you would then also need to make an API call to gather that
> information.
> 
> Wido
> 
> P.S.: Who controls the DNS of qa.cloudstack.cloud? It lacks an
> AAAA-record for IPv6!
> 
>>    ],
>>    "name": "routing_test"
>>  }
>>    ]
>> }
>> 
>> 
>> 
>> 
>> 
>> From: Wido den Hollander 
>> Sent: 10 May 2022 13:59
>> To: dev@cloudstack.apache.org ; Abhishek Kumar 
>> 
>> Subject: Re: IPV6 in Isolated/VPC networks
>> 
>> On 10-05-2022 at 10:19, Abhishek Kumar wrote:
>>> Hi all,
>>>
>>> IPv6 Support in Isolated Network and VPC with Static Routing based on the 
>>> design doc [1] has been implemented and is available in 4.17.0 RC2. I hope 
>>> while testing 4.17.0 RC2 you will also try to test it ?
>>> Documentation for it is available at 
>>> http://qa.cloudstack.cloud/docs/WIP-PROOFING/pr/262/plugins/ipv6.html#isolated-network-and-vpc-tier
> <http://qa.cloudstack.cloud/docs/WIP-PROOFING/pr/262/plugins/ipv6.html#isolated-network-a

Re: IPV6 in Isolated/VPC networks

2022-05-10 Thread Wido den Hollander

Hi,

On 10-05-2022 at 11:42, Abhishek Kumar wrote:

Yes. When a public IPv6 address is assigned or released, CloudStack will publish 
an event of type NET.IP6ASSIGN or NET.IP6RELEASE.
These event notifications can be tracked, and with the improvements in the events 
framework, these event messages will have the network uuid as entityuuid and 
Network as entitytype. Using this, the network can be queried to list the IPv6 
routes that need to be added.

Sample event message,

{"eventDateTime":"2022-05-10 09:32:12 +","entityuuid":"14658b39-9d20-4783-a1bc-12fb58bcbd98","Network":"14658b39-9d20-4783-a1bc-12fb58bcbd98","description":"Assigned public IPv6 
address: fd23:313a:2f53:3000:1c00:baff:fe00:4 for network ID: 
14658b39-9d20-4783-a1bc-12fb58bcbd98","event":"NET.IP6ASSIGN","user":"bde866ba-c600-11ec-af19-1e00320001f3","account":"bde712c9-c600-11ec-af19-1e00320001f3","entity":"Network","status":"Completed"}


Sample API call,

list networks id=14658b39-9d20-4783-a1bc-12fb58bcbd98 filter=id,name,ip6routes

{
   "count": 1,
   "network": [
 {
   "id": "14658b39-9d20-4783-a1bc-12fb58bcbd98",
   "ip6routes": [
 {
   "gateway": "fd23:313a:2f53:3000:1c00:baff:fe00:4",
   "subnet": "fd23:313a:2f53:3cbf::/64"
 }


Looking at this JSON, does this mean that 
fd23:313a:2f53:3000:1c00:baff:fe00:4 is the WAB address of the VR?


And that I would need to (statically) route fd23:313a:2f53:3cbf::/64 to 
that IP?


The event message does not include the subnet; that makes it a bit more 
difficult, as you would then also need to make an API call to gather that 
information.
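As a sketch, extracting the network uuid for that follow-up API call could look like this. The event fields follow the NET.IP6ASSIGN sample above; how the event is received and how listNetworks is then called are deployment-specific and left out.

```python
# Sketch: pull the network uuid out of a NET.IP6ASSIGN/NET.IP6RELEASE event
# so a follow-up listNetworks call can fetch the ip6routes.
import json

# Trimmed-down version of the sample event earlier in this thread.
SAMPLE_EVENT = (
    '{"entityuuid":"14658b39-9d20-4783-a1bc-12fb58bcbd98",'
    '"event":"NET.IP6ASSIGN","entity":"Network","status":"Completed"}'
)

def network_uuid_from_event(raw):
    """Return the network uuid for completed IPv6 assign/release events, else None."""
    ev = json.loads(raw)
    if ev.get("event") in ("NET.IP6ASSIGN", "NET.IP6RELEASE") \
            and ev.get("status") == "Completed":
        return ev.get("entityuuid")
    return None

if __name__ == "__main__":
    # The uuid to pass to listNetworks (filter=id,name,ip6routes).
    print(network_uuid_from_event(SAMPLE_EVENT))
```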


Wido

P.S.: Who controls the DNS of qa.cloudstack.cloud? It lacks an 
AAAA-record for IPv6!



      ],
      "name": "routing_test"
    }
  ]
}





From: Wido den Hollander 
Sent: 10 May 2022 13:59
To: dev@cloudstack.apache.org ; Abhishek Kumar 

Subject: Re: IPV6 in Isolated/VPC networks

On 10-05-2022 at 10:19, Abhishek Kumar wrote:

Hi all,

IPv6 Support in Isolated Network and VPC with Static Routing based on the 
design doc [1] has been implemented and is available in 4.17.0 RC2. I hope 
that while testing 4.17.0 RC2 you will also try to test it?
Documentation for it is available at 
http://qa.cloudstack.cloud/docs/WIP-PROOFING/pr/262/plugins/ipv6.html#isolated-network-and-vpc-tier
 (will be available in the official docs once 4.17.0 version of docs is built).



Great work!

I see only static routing is supported. But do we publish something on
the message bus once a new VR/VPC is created?

This way you could pick up these messages and have the network create a
(static) route based on those.

ExaBGP for example could be used to inject such routes.

Wido


[1] 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing

Regards,
Abhishek


From: Rohit Yadav 
Sent: 13 September 2021 14:30
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Thanks Alex, Wei. I've updated the docs here: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+Support+in+Isolated+Network+and+VPC+with+Static+Routing

I'll leave the thread open for further discussion/ideas/feedback. I think we've 
completed the phase1 design doc including all feedback comments for adding IPv6 
support in CloudStack and some initial poc/work can be started. My colleagues 
and I will keep everyone posted on this thread and/or on a Github PR as and 
when we're able to start our work on the same (after 4.16, potentially towards 
4.17).


Regards.


From: Wei ZHOU 
Sent: Friday, September 10, 2021 20:22
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Agree with Alex.
We only need to know how many /64 are allocated. We do not care how many
ipv6 addresses are used by VMs.

-Wei

On Fri, 10 Sept 2021 at 16:36, Alex Mattioli 
wrote:


Hi Rohit,

I'd go for option 2; I don't see a point in tracking anything smaller than a
/64, tbh.

Cheers
Alex




-Original Message-
From: Rohit Yadav 
Sent: 09 September 2021 12:44
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Thanks Alex, Kristaps. I've updated the design doc to reflect two
agreements:

*   Allocate /64 for both isolated network and VPC tiers, no large
allocation of prefixes to VPC (cons: more static routing rules for upstream
router/admins)
*   All systemvms (incl. ssvm, cpvm, VRs) get IPv6 address if zone has a
dedicated /64 prefix/block for systemvms

The only outstanding question now is:

*   How do we manage IPv6 usage? Can anyone advise how we do IPv6 usage
for shared network (desig

ed network or vpc. @Rohit, correct me if I'm

wrong.


I have a question: it looks like stateless DHCPv6 (SLAAC from the
router/VR, with router/DNS etc. via RA messages) will be the only option
for now (related to your PR https://github.com/apache/cloudstack/pull/3077).
Would it be good to provide stateful DHCPv6 (which can be implemented by
dnsmasq) as an option in CloudStack? The advantages are:
(1) support for IPv6 CIDR sizes other than /64.
(2) we can assign a specified IPv6 address to a VM; VM IPv6
addresses can be changed
(4) an IPv6 address can be re-used by multiple VMs.
The problem is, stateful DHCPv6 does not support routers, nameservers,
etc.; we need to figure that out (probably use radvd/frr and dnsmasq both).

-Wei

-Wei





On Fri, 13 Aug 2021 at 12:19, Wido den Hollander  wrote:



Hi,

See my inline responses:

On 11-08-2021 at 14:26, Rohit Yadav wrote:

Hi all,

Thanks for your feedback and ideas, I've gone ahead with
discussing

them

with Alex and came up with a PoC/design which can be implemented
in the following phases:


*   Phase1: implement ipv6 support in isolated networks and

VPC

with

static routing

*   Phase2: discuss and implement support for dynamic routing

(TBD)


For Phase1 here's the high-level proposal:

*   IPv6 address management:
   *   At the zone level the root-admin specifies a /64 public range that
will be used for VRs; then they can add a /48 or /56 IPv6 range for guest
networks (to be used by isolated networks and VPC tiers)
   *   On creation of any IPv6-enabled isolated network or VPC tier,
a /64 network is allocated/used from the /48 or /56 block
   *   We assume SLAAC and autoconfiguration, no DHCPv6 in the zone
(discuss: is privacy a concern, can the privacy extensions of RFC 4941
for SLAAC be explored?)

Privacy Extensions are only a concern for client devices which
roam between different IPv6 networks.

If the IPv6 address of a client keeps the same suffix (MAC
based) and the client switches networks, then only the prefix (/64) will change.

This way a network like Google, Facebook, etc could track your
device moving from network to network if they only look at the
last 64-bits of the IPv6 address.

For servers this is not a problem as you already know in which
network they are.


*   Network offerings: root-admin can create new network offerings
(with VPC too) that specify a network stack option:
   *   ipv4 only (default; for backward compatibility all
networks/offerings post-upgrade migrate to this option)
   *   ipv4-and-ipv6
   *   ipv6-only (this can be phase 1.b)
   *   A new routing option: static (phase1), dynamic (phase2, with
multiple sub-options such as ospf/bgp etc...)

This means that the network admin will need to statically route the
IPv6 subnet to the VR's outside IPv6 address; for example, on a JunOS
router:

set routing-options rib inet6.0 static route 2001:db8:500::/48 next-hop 2001:db8:100::50

I'm assuming that 2001:db8:100::50 is the address of the VR on
the outside (/64) network. In reality this will probably be a
longer address, but this is for just the example.


*   VR changes:
   *   VR gets its guest and public nics set to inet6 auto
   *   For each /64 allocated to guest networks and VPC tiers, radvd
is configured to do RA

radvd is fine, but looking at phase 2 with dynamic routing you
might already want to look into FRRouting. FRR can also
advertise RAs while not doing any routing.

interface ens4
no ipv6 nd suppress-ra
ipv6 nd prefix 2001:db8:500::/64
ipv6 nd rdnss 2001:db8:400::53 2001:db8:200::53

See: http://docs.frrouting.org/en/latest/ipv6.html
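
For comparison, the equivalent radvd.conf stanza (a sketch under assumptions: the interface name and the reuse of the FRR example's prefix/RDNSS addresses are illustrative):

```
interface ens4
{
    AdvSendAdvert on;
    prefix 2001:db8:500::/64
    {
    };
    RDNSS 2001:db8:400::53 2001:db8:200::53
    {
    };
};
```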


   *   Firewall: a new ipv6 zone/chain is created for ipv6 where ipv6
firewall rules (ACLs, ingress, egress) are implemented; ACLs between VPC
tiers are managed/implemented by the ipv6 firewall on the VR

Please take a look at the existing security_group.py script
which implements RFC4890

https://datatracker.ietf.org/doc/html/rfc4890

ICMPv6 is a vital part of IPv6 and certain packets should always
be allowed.


   *   It is assumed that static routes are created on the core/main
router by the admin or automated using some scripts/tools; for
this CloudStack will announce events with details of /64
networks and the VR's public IPv6 address that can be consumed by a
rabbitmq/message bus client (for example), or a custom cron job or
script as part of orchestration (this wouldn't be necessary for
dynamic routing/bgp in phase2)

You would only need to announce the /48 or /56 allocated to the
VR, that's all. You don't need to inform the upstream router
about the /64 subnets created within that larger subnet.


*   Guest Networking: With SLAAC, it's easy for CloudStack to
calculate, allocate and use a /64 and determine the IPv6 address
of VR nics and guest VM nics
   *   A user creates an isolated network/VPC with an offering that is
ipv6 enabled
   *   A user can manage firewall fo

Build in main branch failing due to SSL exception when downloading a resource

2022-04-06 Thread Wido den Hollander

Hi,

On my Ubuntu laptop I tried building the main branch and this fails:

[INFO] --- download-maven-plugin:1.6.3:wget (download-checksums) @ 
cloud-engine-schema ---

[WARNING] Ignoring download failure.
[WARNING] Could not get content
javax.net.ssl.SSLHandshakeException: PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target

at sun.security.ssl.Alert.createSSLException (Alert.java:131)
at sun.security.ssl.TransportContext.fatal (TransportContext.java:352)

I traced this back to this part:

*engine/schema/pom.xml*

download-checksums
validate

wget



https://download.cloudstack.org/systemvm/${cs.version}/md5sum.txt

${basedir}/dist/systemvm-templates/
true
true




The 'url' points to download.cloudstack.org where it wants to download a 
resource.


https://download.cloudstack.org/systemvm/4.16/md5sum.txt

I can download the file just fine via my browser, with cURL or wget, but 
my Java on my Ubuntu laptop seems to fail.


I'm building with openjdk-11-jdk on Ubuntu 20.04 which is version:

javac 11.0.14.1

It seems that my local Java is missing the certificate chain for the 
Let's Encrypt certificate for download.cloudstack.org


Anybody else seeing this?

Wido


Re: CloudStack Collaboration Conference 2022 - November 14-16

2022-04-05 Thread Wido den Hollander

Sounds good! I'll do my best to be present!

(And make sure there's a bar nearby! ;-) )

Wido

On 05-04-2022 at 16:57, Alex Mattioli wrote:

Sounds amazing @Ivet Petrova

-Original Message-
From: Daman Arora 
Sent: 05 April 2022 16:51
To: dev@cloudstack.apache.org
Cc: users 
Subject: Re: CloudStack Collaboration Conference 2022 - November 14-16

Sounds like a good idea to me.

Thanks,
Daman Arora.

On Tue., Apr. 5, 2022, 10:45 a.m. Ivet Petrova, 
wrote:


Hi all,

I am working on the idea for the CloudStack Collaboration Conference 2022.
I was thinking that this time we can make it a hybrid event at the
end of the year - November 14-16th.
We will choose one physical location in Europe and will also stream
the whole event online, as in the previous year, for the people who
cannot/do not want to travel.
If nobody is against, I will start some organization plan.

Kind regards,







Re: KVM: Out of subnet 'secondary' IPs for Virtual Machines (Anycast/floating IPs)

2022-01-19 Thread Wido den Hollander



On 1/17/22 4:28 PM, Wei ZHOU wrote:
> Hi Wido,
> 
> CloudStack allows users to add multiple IP ranges to a shared network. All
> these IPs share the same vlan. I hope it helps you.
> 

Yes, but then they would also be allocated to VMs.

> The problem is, a secondary IP can only be assigned to a VM. I think we can
> add a flag like `floating` to secondary IP . If the flag is true, it can be
> assigned to multiple VMs (belonging to same owner) as secondary IP.
> 

Something along that way. I would like to be able to add an IPv4 or IPv6
address as a secondary without it being checked. Just allow any address
to be added as a secondary address.

This would already be sufficient.

Wido

> -Wei
> 
> On Mon, 17 Jan 2022 at 14:37, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> Use-case: I have a SG enabled shared network where a VM establishes a
>> BGP session with the upstream router.
>>
>> Over this BGP session the VM announces a /32 (IPv4) and/or /128 (IPv6)
>> address and the router now installs this route.
>>
>> I do the same (with the same IPs) on a few different VMs and this way I
>> can have a Anycast/Floating IP which is being routed to those VMs.
>>
>> Problem: Security Group filtering prohibits this as the 'ipset' on the
>> hypervisor checks all the packets originating from the VM and drops all
>> packets not matching the ipset.
>>
>> Name: i-79-1328-VM
>> Type: hash:ip
>> Revision: 4
>> Header: family inet hashsize 1024 maxelem 65536
>> Size in memory: 248
>> References: 5
>> Number of entries: 1
>> Members:
>> 62.221.XXX.11
>>
>> I want to add /32 and /128 addresses to this subnet so that the SG does
>> not filter away this traffic.
>>
>> They could be added as a secondary IP to the VM, but this is not allowed
>> by the API as the secondary IPs you want to add should always come from
>> the subnet configured for that network.
>>
>> I do not want to turn off security grouping as this poses other
>> potential issues.
>>
>> Solutions I see:
>>
>> - Add global/account/domain setting which allows arbitrary secondary IPs
>> - Add per-network setting which allows arbitrary secondary IPs
>> - Pre-define subnets which Anycast/Floating IPs can be picked from per
>> network
>>
>> Any ideas or suggestions?
>>
>> Wido
>>
> 


Re: KVM: Out of subnet 'secondary' IPs for Virtual Machines (Anycast/floating IPs)

2022-01-17 Thread Wido den Hollander




On 17-01-2022 at 15:07, Daan Hoogland wrote:

Wido,
As an operator, would I sell a floating ip with a number of instances it
can be applied to?


For example you would sell a /32 and /128 address (or a larger subnet) 
which a client can announce from their VMs.


It does require that the upstream routers (outside CloudStack) have BGP 
peers configured on their side which allows the VM to announce that they 
have a route for that address.


Regardless of how many CloudStack environments you have each one of them 
could announce that /32 or /128 which would then route traffic to the 
closest VM in the network.


Let's say you would announce 8.8.8.8/32 or 2001:4860:4860::/128 from 
multiple VPS to create a highly available DNS server as an example.


Wido


just checking on your envisioned business case, not implying an answer
here/yet.

On Mon, Jan 17, 2022 at 2:37 PM Wido den Hollander  wrote:


Hi,

Use-case: I have a SG enabled shared network where a VM establishes a
BGP session with the upstream router.

Over this BGP session the VM announces a /32 (IPv4) and/or /128 (IPv6)
address and the router now installs this route.

I do the same (with the same IPs) on a few different VMs, and this way I
can have an Anycast/Floating IP which is being routed to those VMs.

Problem: Security Group filtering prohibits this as the 'ipset' on the
hypervisor checks all the packets originating from the VM and drops all
packets not matching the ipset.

Name: i-79-1328-VM
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 248
References: 5
Number of entries: 1
Members:
62.221.XXX.11

I want to add /32 and /128 addresses to this subnet so that the SG does
not filter away this traffic.

They could be added as a secondary IP to the VM, but this is not allowed
by the API as the secondary IPs you want to add should always come from
the subnet configured for that network.

I do not want to turn off security grouping as this poses other
potential issues.

Solutions I see:

- Add global/account/domain setting which allows arbitrary secondary IPs
- Add per-network setting which allows arbitrary secondary IPs
- Pre-define subnets which Anycast/Floating IPs can be picked from per
network

Any ideas or suggestions?

Wido






KVM: Out of subnet 'secondary' IPs for Virtual Machines (Anycast/floating IPs)

2022-01-17 Thread Wido den Hollander

Hi,

Use-case: I have an SG-enabled shared network where a VM establishes a 
BGP session with the upstream router.


Over this BGP session the VM announces a /32 (IPv4) and/or /128 (IPv6) 
address and the router now installs this route.


I do the same (with the same IPs) on a few different VMs, and this way I 
can have an Anycast/Floating IP which is being routed to those VMs.


Problem: Security Group filtering prohibits this as the 'ipset' on the 
hypervisor checks all the packets originating from the VM and drops all 
packets not matching the ipset.


Name: i-79-1328-VM
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 248
References: 5
Number of entries: 1
Members:
62.221.XXX.11

I want to add /32 and /128 addresses to this subnet so that the SG does 
not filter away this traffic.


They could be added as a secondary IP to the VM, but this is not allowed 
by the API as the secondary IPs you want to add should always come from 
the subnet configured for that network.


I do not want to turn off security grouping as this poses other 
potential issues.


Solutions I see:

- Add global/account/domain setting which allows arbitrary secondary IPs
- Add per-network setting which allows arbitrary secondary IPs
- Pre-define subnets which Anycast/Floating IPs can be picked from per 
network
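
A minimal sketch of what any of these options would boil down to on the hypervisor (hypothetical helper; assumes the extra /32 and /128 entries go into the VM's existing per-NIC set as plain members, with IPv6 members living in a matching family inet6 set):

```python
def ipset_add_commands(set_name, floating_ips):
    """Commands to whitelist extra anycast/floating source addresses in a
    VM's hash:ip ipset so SG filtering stops dropping their traffic."""
    return [f"ipset add {set_name} {ip}" for ip in floating_ips]

for cmd in ipset_add_commands("i-79-1328-VM", ["192.0.2.10", "2001:db8::10"]):
    print(cmd)
```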


Any ideas or suggestions?

Wido


Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

2021-12-11 Thread Wido den Hollander

From: Gabriel Bräscher 
Date: Tuesday, 7 December 2021 at 09:57
To: dev 
Subject: Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

Wei, I agree.
This is not necessarily a bug per se.

The main point here is: the issue we are seeing is the "bug #1887490"
raised in Ubuntu's qemu package.
CPU features were added on the newer releases, which caused the
compatibility issue when (live) migrating VMs between compatible
hardware but different qemu packages.


On Tue, Dec 7, 2021 at 9:26 AM Wei ZHOU  wrote:


Hi Gabriel,

In my opinion, migration should work from lower version to higher

version,

but no guarantee from higher version to lower version, like we
upgrade cloudstack.
Therefore, migrate should work from ubuntu 18.04 to ubuntu 20.04.
But it

is

not a bug if migration fails from ubuntu 20.04 to ubuntu 18.04.

As Paul said, migration fails from qemu-ev 2.10 to qemu-ev 2.12,
this is definitely a bug in my point of view.

-Wei

On Mon, 6 Dec 2021 at 16:05, Gabriel Bräscher 
wrote:


Hi Paul (& all),

I strongly believe that this is a bug in QEMU.
I was looking for bugs and found something that looks related to what we
are seeing. Precisely Ubuntu's bug #1887490:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1887490

In the link above, there was the following comment:


https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1887490/comments/53


It seems one of the patches also introduced a regression:
lp-1887490-cpu_map-Add-missing-AMD-SVM-features.patch adds various
SVM-related flags. Specifically, npt and nrip-save are now expected to
be present by default, as shown in the updated testdata. This however
breaks migration from instances using EPYC or EPYC-IBPB CPU
models started with libvirt versions prior to this one, because the
instance on the target host has these extra flags.


More about #1887490
<https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1887490> can be found
in the mail
https://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg5842376.html.

We can see that the specific bug was addressed in "linux
(5.4.0-49.53) focal".

linux (5.4.0-49.53) focal; urgency=medium

   * Add/Backport EPYC-v3 and EPYC-Rome CPU model (LP: #1887490)
 - kvm: svm: Update svm_xsaves_supported


Regards,
Gabriel.

On Fri, Dec 3, 2021 at 10:59 AM Paul Angus <paul.an...@ticketmaster.com> wrote:


Which version(s) of QEMU are you using Wido?

We've just been upgrading CentOS 7.6 to 7.9. Most 7.6 hosts had
qemu-ev 2.10 on them (the buggy one); 2.12 was on the new hosts.
We were getting errors complaining that the ibpb CPU feature
wasn't available when migrating to the updated OS hosts (even
though the hardware was identical).

Upgrading qemu-ev to 2.12 on the originating host, then stopping
and starting the VMs, allowed us to migrate. We couldn't find any
solution that didn't involve stopping and starting the VMs.


Paul.

-Original Message-
From: Wido den Hollander 
Sent: Monday, November 29, 2021 7:57 AM
To: dev@cloudstack.apache.org; Wei ZHOU 
Subject: Re: Live migration between AMD Epyc and Ubuntu 18.04
and

20.04




On 11/24/21 10:36 PM, Wei ZHOU wrote:

Hi Wido,

I think it is not good to run an environment with two ubuntu/qemu
versions.
It always happens that some cpu features are supported in the higher
version but not supported in the older version.
From my experience, the migration from an older version to a higher
version works like a charm, but there were many issues in migration
from a higher version to an older version.



I understand. But with a large number of hosts, working your way
through upgrades, you sometimes run into these situations.
Therefore it would be welcome if it works.


I do not have a solution for you. I have tried to hack
/etc/libvirt/hooks/qemu but it didn't work.
Have you tried other cpu models like x86_Opteron_G5? You can find
the cpu features of each cpu model in /usr/share/libvirt/cpu_map/




I have not tried that yet, but I can see if that works.

The EPYC-IBPB CPU model is identical on 18.04 and 20.04, but even
using that model we can't seem to migrate as it complains about the
'npt' feature.


Wido


Anyway, even if the vm migration succeeds, you do not know if the
vm works fine. I believe the best solution is upgrading all hosts
to the same OS version.

-Wei

On Tue, 23 Nov 2021 at 16:31, Wido den Hollander wrote:



Hi,

I'm trying to debug an issue with live migrations between
Ubuntu
18.04 and 20.04 machines each with different CPUs:

- Ubuntu 18.04 with AMD Epyc 7552 (Rome)
- Ubuntu 20.04 with AMD Epyc 7662 (Milan)

We are currently using this setting:

guest.cpu.mode=custom
guest.cpu.model=EPYC

This does not allow for live migrations:

Ubuntu 20.04 with Epyc 7662 to Ubuntu 18.04 with Epyc 75

Re: Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

2021-11-28 Thread Wido den Hollander



On 11/24/21 10:36 PM, Wei ZHOU wrote:
> Hi Wido,
> 
> I think it is not good to run an environment with two ubuntu/qemu versions.
> It always happens that some cpu features are supported in the higher
> version but not supported in the older version.
> From my experience, the migration from older version to higher version
> works like a charm, but there were many issues in migration from higher
> version to older version.
> 

I understand. But with a large number of hosts, working your way
through upgrades, you sometimes run into these situations. Therefore it
would be welcome if it works.

> I do not have a solution for you. I have tried to hack
> /etc/libvirt/hooks/qemu but it didn't work.
> Have you tried with other cpu models like x86_Opteron_G5 ? you can find the
> cpu features of each cpu model in /usr/share/libvirt/cpu_map/
> 

I have not tried that yet, but I can see if that works.

The EPYC-IBPB CPU model is identical on 18.04 and 20.04, but even using
that model we can't seem to migrate as it complains about the 'npt' feature.

Wido

> Anyway, even if the vm migration succeeds, you do not know if vm works
> fine. I believe the best solution is upgrading all hosts to the same OS
> version.
> 
> -Wei
> 
> On Tue, 23 Nov 2021 at 16:31, Wido den Hollander  wrote:
> 
>> Hi,
>>
>> I'm trying to debug an issue with live migrations between Ubuntu 18.04
>> and 20.04 machines each with different CPUs:
>>
>> - Ubuntu 18.04 with AMD Epyc 7552 (Rome)
>> - Ubuntu 20.04 with AMD Epyc 7662 (Milan)
>>
>> We are currently using this setting:
>>
>> guest.cpu.mode=custom
>> guest.cpu.model=EPYC
>>
>> This does not allow for live migrations:
>>
>> Ubuntu 20.04 with Epyc 7662 to Ubuntu 18.04 with Epyc 7552 fails
>>
>> "ExecutionException : org.libvirt.LibvirtException: unsupported
>> configuration: unknown CPU feature: npt"
>>
>> So we tried to define a set of features manually:
>>
>> guest.cpu.features=3dnowprefetch abm adx aes apic arat avx avx2 bmi1
>> bmi2 clflush clflushopt cmov cr8legacy cx16 cx8 de f16c fma fpu fsgsbase
>> fxsr fxsr_opt lahf_lm lm mca mce misalignsse mmx mmxext monitor movbe
>> msr mtrr nx osvw pae pat pclmuldq pdpe1gb pge pni popcnt pse pse36
>> rdrand rdseed rdtscp sep sha-ni smap smep sse sse2 sse4.1 sse4.2 sse4a
>> ssse3 svm syscall tsc vme xgetbv1 xsave xsavec xsaveopt -npt -x2apic
>> -hypervisor -topoext -nrip-save
>>
>> This results in this going into the XML:
>>
>> 
>>
>> You would say that works, but then the target host (18.04 with the 7552)
>> says it doesn't support the feature 'npt' and the migration still fails.
>>
>> Now we could ofcourse use the kvm64 CPU from Qemu, but that's lacking so
>> many features that for example TLS offloading isn't available.
>>
>> I also tried to set 'EPYC-Rome' on the Ubuntu 20.04 hypervisor, but it
>> then complains on the Ubuntu 18.04 hypervisor that the CPU 'EPYC-Rome'
>> is unknown as the 18.04 hypervisor doesn't have that profile.
>>
>> Any ideas on how to get this working?
>>
>> Wido
>>
> 


Live migration between AMD Epyc and Ubuntu 18.04 and 20.04

2021-11-23 Thread Wido den Hollander

Hi,

I'm trying to debug an issue with live migrations between Ubuntu 18.04 
and 20.04 machines each with different CPUs:


- Ubuntu 18.04 with AMD Epyc 7552 (Rome)
- Ubuntu 20.04 with AMD Epyc 7662 (Milan)

We are currently using this setting:

guest.cpu.mode=custom
guest.cpu.model=EPYC

This does not allow for live migrations:

Ubuntu 20.04 with Epyc 7662 to Ubuntu 18.04 with Epyc 7552 fails

"ExecutionException : org.libvirt.LibvirtException: unsupported 
configuration: unknown CPU feature: npt"


So we tried to define a set of features manually:

guest.cpu.features=3dnowprefetch abm adx aes apic arat avx avx2 bmi1 
bmi2 clflush clflushopt cmov cr8legacy cx16 cx8 de f16c fma fpu fsgsbase 
fxsr fxsr_opt lahf_lm lm mca mce misalignsse mmx mmxext monitor movbe 
msr mtrr nx osvw pae pat pclmuldq pdpe1gb pge pni popcnt pse pse36 
rdrand rdseed rdtscp sep sha-ni smap smep sse sse2 sse4.1 sse4.2 sse4a 
ssse3 svm syscall tsc vme xgetbv1 xsave xsavec xsaveopt -npt -x2apic 
-hypervisor -topoext -nrip-save


This results in this going into the XML:
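
(The generated element was stripped by the list archiver. As a hedged sketch only, reconstructed from the guest.cpu.features list above rather than the actual CloudStack-generated XML, a libvirt <cpu> definition expressing those flags would look roughly like:)

```xml
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>EPYC</model>
  <!-- one <feature> element per entry in guest.cpu.features; a leading
       '-' maps to policy='disable' -->
  <feature policy='require' name='svm'/>
  <feature policy='disable' name='npt'/>
  <feature policy='disable' name='nrip-save'/>
</cpu>
```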



You would say that works, but then the target host (18.04 with the 7552) 
says it doesn't support the feature 'npt' and the migration still fails.


Now we could ofcourse use the kvm64 CPU from Qemu, but that's lacking so 
many features that for example TLS offloading isn't available.


I also tried to set 'EPYC-Rome' on the Ubuntu 20.04 hypervisor, but it 
then complains on the Ubuntu 18.04 hypervisor that the CPU 'EPYC-Rome' 
is unknown as the 18.04 hypervisor doesn't have that profile.


Any ideas on how to get this working?

Wido


Re: Root disk resizing

2021-10-11 Thread Wido den Hollander



On 10/10/21 10:35 AM, Ranjit Jadhav wrote:
> Hello folks,
> 
> I have implemented cloudstack with Xenserver Host. The template has been
> made out of VM with basic centos 7 and following package installed on it
> 
> sudo yum -y install cloud-init
> sudo yum -y install cloud-utils-growpart
> sudo yum -y install gdisk
> 
> 
> After creating new VM with this template, root disk is created as per size
> mention in template or we are able to increase it at them time of creation.
> 
> But later when we try to increase root disk again, it increases disk space
> but "/" partiton do not get autoresize.
> 

As far as I know it only grows the partition once, i.e. upon first boot.
It won't do it again afterwards.

Wido

> 
> Following parameters were passed in userdata
> 
> #cloud-config
> growpart:
> mode: auto
> devices: ["/"]
> ignore_growroot_disabled: true
> 
> 
> Thanks & Regards,
> Ranjit
> 


Re: [VOTE] Standard string lib

2021-09-16 Thread Wido den Hollander

+1 on what Rohit said.

On 15-09-2021 at 11:11, Rohit Yadav wrote:

Thanks for explaining Daniel.

+1 (binding) if the StringsUtils facade (in cloud-api) is used to rely on 
commons-lang3 and use StringsUtils facade (from cloud-api) throughout the 
source code.

-0 (binding) if we're only replacing all String operations throughout with 
commons-lang3 directly but not using the facade as the default.

+1 (binding) on points #2 (checkstyle enforcement/checks) and #3 (update 
wiki/docs on coding conventions).


Regards.


From: Daniel Augusto Veronezi Salvador 
Sent: Tuesday, September 14, 2021 18:25
To: dev@cloudstack.apache.org 
Subject: Re: [VOTE] Standard string lib

Rohit, sure.

About the points:

1. The objective of the vote is to see if all are in favor of using
"commons.lang3" as the standard String library and, for String operations
not covered by "commons.lang3", we use our StringUtils (as we discussed
in the discussion thread -
https://lists.apache.org/thread.html/r806cd10b3de645c150e5e0e3d845c5a380a700197143f57f0834d758%40%3Cdev.cloudstack.apache.org%3E).
Then, if the vote passes, I will create the PR to address this change in
the code base by removing unnecessary libraries and changing the code
to use "commons.lang3". The proposal is to use "lang3" as the standard
String library; therefore, I will replace every occurrence of other
String libraries with "lang3" (and update "lang" to "lang3"). Our (facade)
StringUtils will only be for specific methods that "lang3" doesn't cover,
like "csvTagsToList", "areTagsEqual" and others.

2. As there are many libraries, what I could do is to add the module
"IllegalImport" to the checkstyle and verify the libraries I will remove
in the refactor.
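
A sketch of what that checkstyle addition could look like (IllegalImport is the real module name; the exact package/class values to forbid are an assumption pending the refactor):

```xml
<module name="IllegalImport">
    <!-- forbid the String libraries being removed in the refactor -->
    <property name="illegalPkgs" value="org.apache.commons.lang"/>
    <property name="illegalClasses" value="com.google.common.base.Strings"/>
</module>
```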

3. I will update the code conventions wiki/docs with the outcome of this
vote, and then we will be able to use it as a guideline in our reviews.

Best regards,
Daniel

On 14/09/2021 05:35, Rohit Yadav wrote:

Daniel - can you explain what are we exactly voting for?

I get that your vote thread is primarily about moving to commons-lang3 but it 
does not explain the plan and logistics, for example what about:

*   Creating a utility facade under cloud-api and using that throughout the 
codebase; or is it find-replace all usage of google's Strings with common-lang3?
*   Introducing specific checks via checkstyle plugin to enforce developers 
(https://github.com/apache/cloudstack/tree/main/tools/checkstyle)
*   Updating the code conventions wiki/docs

Regards.


From: Pearl d'Silva 
Sent: Tuesday, September 14, 2021 09:27
To: dev@cloudstack.apache.org 
Subject: Re: [VOTE] Standard string lib

+1. Sounds like a good plan.

From: Gabriel Bräscher 
Sent: Monday, September 13, 2021 9:15 PM
To: dev 
Subject: Re: [VOTE] Standard string lib

+1

On Mon, Sep 13, 2021, 12:40 Sadi  wrote:


+1

Good idea.

On 13/09/2021 12:02, Daniel Augusto Veronezi Salvador wrote:

Hi All,

We had a discussion about standardizing the string libs we're using (

https://lists.apache.org/thread.html/r806cd10b3de645c150e5e0e3d845c5a380a700197143f57f0834d758%40%3Cdev.cloudstack.apache.org%3E
).

As I proposed, I'm opening this voting thread to see if all are in favor

of using "commons.lang3" as the standard String library and, for String
operations not covered by "commons.lang3", we use our StringUtils. Then,
if the vote passes, I will create the PR to address this change in the code
base by removing unnecessary libraries, and changing the code to use
"commons.lang3".

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Best regards,
Daniel





Re: IPV6 in Isolated/VPC networks

2021-08-17 Thread Wido den Hollander




On 16-08-2021 at 11:29, Rohit Yadav wrote:

Thanks Hean, Kristaps, Wido for your feedback.

I think we've some quorum and consensus on how we should proceed with IPv6 
support with static routing (phase1). Based on my proof-of-concept and 
discussions, I believe we may target this feature as early as 4.17 and I 
welcome offer by Kristaps and others who may want to be involved in testing the 
feature as and when we'll develop it.

As the next step I'll write a short design doc on cwiki including some of the 
new ideas/suggestions and share it on this thread for another iteration.



I was thinking about this and I have another idea for phase 2.

As mentioned earlier, when you have a lot of VRs running it can be quite 
difficult to configure OSPFv3 or BGP on the upstream routers and have 
this all working.


Instead we could think about using ExaBGP. With ExaBGP you would have 
some additional tooling which picks up the subnets assigned to VRs from 
the message bus.


ExaBGP: https://github.com/Thomas-Mangin/exabgp

This tooling then dynamically injects each route into ExaBGP, where the 
destination of this /48 (example) points to the VR.


Inside the VR there is no need to configure OSPF or BGP. The admin 
doesn't need to configure static routes either.


Their routers only peer via BGP with ExaBGP which injects the routes.

See this blogpost to get an idea: 
https://vincent.bernat.ch/en/blog/2013-exabgp-highavailability


"Redundancy with ExaBGP"
"ExaBGP is a convenient tool to plug scripts into BGP. They can then 
receive and advertise routes. ExaBGP does the hard work of speaking BGP 
with your routers. The scripts just have to read routes from standard 
input or advertise them on standard output."


So we would just need to feed ExaBGP all the routes which then 
advertises them again towards the upstream routers.


ExaBGP could be running anywhere as long as it receives the messages 
from CloudStack and has a BGP connection with the routers.


Might be worth looking into in the future for phase 2.
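As a sketch of how such tooling could talk to ExaBGP: ExaBGP's text API reads `announce route ... next-hop ...` lines from a helper process's standard output. The `vr_routes` mapping, the function name, and all prefixes/addresses below are invented for illustration, not an actual CloudStack implementation:

```python
def exabgp_announce(prefix: str, next_hop: str) -> str:
    # Format one route announcement in ExaBGP's text API syntax.
    return f"announce route {prefix} next-hop {next_hop}"

# Hypothetical mapping picked up from the CloudStack message bus:
# guest subnet -> outside address of the VR that owns it.
vr_routes = {
    "2001:db8:500::/48": "2001:db8:100::50",
}

# An ExaBGP "process" prints these lines on stdout; ExaBGP then
# advertises the routes to its BGP peers (the upstream routers).
for prefix, vr_address in vr_routes.items():
    print(exabgp_announce(prefix, vr_address))
```

Withdrawing a route when a network is destroyed would work the same way with a `withdraw route` line.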

And again: Also in shared networks it would be very nice to be able to 
route a subnet towards a single Instance. Native IPv6 with Docker, VPN 
services with IPv6 inside a VM, etc, etc.


Wido



Regards.

________
From: Wido den Hollander 
Sent: Friday, August 13, 2021 15:48
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks

Hi,

See my inline responses:

On 11-08-2021 at 14:26, Rohit Yadav wrote:

Hi all,

Thanks for your feedback and ideas, I've gone ahead with discussing them with 
Alex and came up with a PoC/design which can be implemented in the following 
phases:

*   Phase1: implement ipv6 support in isolated networks and VPC with static 
routing
*   Phase2: discuss and implement support for dynamic routing (TBD)

For Phase1 here's the high-level proposal:

*   IPv6 address management:
   *   At the zone level root-admin specifies a /64 public range that will 
be used for VRs, then they can add a /48, or /56 IPv6 range for guest networks 
(to be used by isolated networks and VPC tiers)
   *   On creation of any IPv6 enabled isolated network or VPC tier, from 
the /48 or /56 block a /64 network is allocated/used
   *   We assume SLAAC and autoconfiguration, no DHCPv6 in the zone 
(discuss: is privacy a concern, can privacy extensions rfc4941 of slaac be 
explored?)


Privacy Extensions are only a concern for client devices which roam
between different IPv6 networks.

If the IPv6 address of a client keeps the same suffix (MAC-based) and
switches networks, then only the prefix (/64) will change.

This way a network like Google, Facebook, etc could track your device
moving from network to network if they only look at the last 64-bits of
the IPv6 address.

For servers this is not a problem as you already know in which network
they are.


*   Network offerings: root-admin can create new network offerings (with 
VPC too) that specifies a network stack option:
   *   ipv4 only (default, for backward compatibility all 
networks/offerings post-upgrade migrate to this option)
   *   ipv4-and-ipv6
   *   ipv6-only (this can be phase 1.b)
   *   A new routing option: static (phase1), dynamic (phase2, with 
multiple sub-options such as ospf/bgp etc...)


This means that the network admin will need to statically route the IPv6
subnet to the VR's outside IPv6 address, for example, on a JunOS router:

set routing-options rib inet6.0 static route 2001:db8:500::/48 next-hop
2001:db8:100::50

I'm assuming that 2001:db8:100::50 is the address of the VR on the
outside (/64) network. In reality this will probably be a longer
address, but this is for just the example.
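For comparison, the equivalent static route in FRR syntax (for example when the upstream router is a Linux box running FRR), reusing the example prefix and next-hop above, would be roughly:

```
ipv6 route 2001:db8:500::/48 2001:db8:100::50
```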


*   VR changes:
   *   VR gets its guest and public nics set to inet6 auto
   *   For each /64 allocated to guest network and VPC tiers, radvd is 
configured to do RA


radvd is fine, but looking at phase 2 with dynamic routing you might 
already want to look into FRRouting. FRR can also advertise RAs while 
not doing any routing.

Re: IPV6 in Isolated/VPC networks

2021-08-17 Thread Wido den Hollander




On 17-08-2021 at 11:20, Wei ZHOU wrote:

Hi Wido,

(cc to Rohit and Alex)

It is a good suggestion to use FRR for ipv6. The configuration is quite
simple and the VMs can get SLAAC, routes, etc.

Privacy extension looks not the same as what you mentioned. see
https://datatracker.ietf.org/doc/html/rfc4941

You are right. To use static routing, the admins need to configure the
routes in the upstream router, and add some ipv6 ranges (eg /56 for VPCs
and /64 for isolated networks) and their next-hop  (which will be
configured in VRs) in CloudStack. CloudStack will pick up an IPv6 range and
assign it to an isolated network or vpc. @Rohit, correct me if I'm wrong.

I have a question, it looks stateless dhcpv6 (SLAAC from router/VR,
router/dns etc via RA messages) will be the only option for now (related to


RA/SLAAC is NOT DHCPv6. Please don't confuse that. DHCPv6 is not 
involved at all when using SLAAC.



your pr https://github.com/apache/cloudstack/pull/3077) . Would it be good
to provide stateful dhcpv6 (which can be implemented by dnsmasq) as an
option in cloudstack ? The advantages are
(1) support other ipv6 cidr sizes than /64.


Yes, possibly, although hardly used. A /64 for a network is the default 
in most cases.



(2) we can assign a specified Ipv6 address to a vm. vm Ipv6 addresses can
be changed


Yes, correct. Although you can now also just add a secondary IPv6 
address to the Instance.



(4) an Ipv6 addresses can be re-used by multiple vms.


Yes, that is a benefit. Although this can be achieved with secondary 
IPs as well.



The problem is, stateful DHCPv6 does not support routers, nameservers, etc.;
we need to figure that out (probably use radvd/frr and dnsmasq together).



You will *always* need RAs, but you will:

- Set the Managed (M) flag in the RA
- Not advertise a prefix

This way the client will learn the IPv6 default gateway, but obtain 
its address through DHCPv6.

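A radvd sketch of that RA configuration (interface name illustrative; this mirrors the radvd.conf style used elsewhere in the thread): the Managed flag is set and no prefix is advertised, so clients learn the gateway from the RA and get their address from DHCPv6:

```
interface eth0
{
    AdvSendAdvert on;
    AdvManagedFlag on;   # M flag: clients obtain addresses via DHCPv6
    # no "prefix" block: the RA only announces the default gateway
};
```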

Wido


-Wei


On Fri, 13 Aug 2021 at 12:19, Wido den Hollander  wrote:


Hi,

See my inline responses:

On 11-08-2021 at 14:26, Rohit Yadav wrote:

Hi all,

Thanks for your feedback and ideas, I've gone ahead with discussing them

with Alex and came up with a PoC/design which can be implemented in the
following phases:


*   Phase1: implement ipv6 support in isolated networks and VPC with

static routing

*   Phase2: discuss and implement support for dynamic routing (TBD)

For Phase1 here's the high-level proposal:

*   IPv6 address management:
   *   At the zone level root-admin specifies a /64 public range that

will be used for VRs, then they can add a /48, or /56 IPv6 range for guest
networks (to be used by isolated networks and VPC tiers)

   *   On creation of any IPv6 enabled isolated network or VPC tier,

from the /48 or /56 block a /64 network is allocated/used

   *   We assume SLAAC and autoconfiguration, no DHCPv6 in the zone

(discuss: is privacy a concern, can privacy extensions rfc4941 of slaac be
explored?)

Privacy Extensions are only a concern for client devices which roam
between different IPv6 networks.

If the IPv6 address of a client keeps the same suffix (MAC-based) and
switches networks, then only the prefix (/64) will change.

This way a network like Google, Facebook, etc could track your device
moving from network to network if they only look at the last 64-bits of
the IPv6 address.

For servers this is not a problem as you already know in which network
they are.


*   Network offerings: root-admin can create new network offerings

(with VPC too) that specifies a network stack option:

   *   ipv4 only (default, for backward compatibility all

networks/offerings post-upgrade migrate to this option)

   *   ipv4-and-ipv6
   *   ipv6-only (this can be phase 1.b)
   *   A new routing option: static (phase1), dynamic (phase2, with

multiple sub-options such as ospf/bgp etc...)

This means that the network admin will need to statically route the IPv6
subnet to the VR's outside IPv6 address, for example, on a JunOS router:

set routing-options rib inet6.0 static route 2001:db8:500::/48 next-hop
2001:db8:100::50

I'm assuming that 2001:db8:100::50 is the address of the VR on the
outside (/64) network. In reality this will probably be a longer
address, but this is for just the example.


*   VR changes:
   *   VR gets its guest and public nics set to inet6 auto
   *   For each /64 allocated to guest network and VPC tiers, radvd

is configured to do RA

radvd is fine, but looking at phase 2 with dynamic routing you might
already want to look into FRRouting. FRR can also advertise RAs while
not doing any routing.

interface ens4
no ipv6 nd suppress-ra
ipv6 nd prefix 2001:db8:500::/64
ipv6 nd rdnss 2001:db8:400::53 2001:db8:200::53

See: http://docs.frrouting.org/en/latest/ipv6.html


   *   Firewall: a new ipv6 zone/chain is created for ipv6 where ipv6

firewall rules (ACLs, ingress, egress) are implemented; ACLs between VPC

Re: IPV6 in Isolated/VPC networks

2021-08-13 Thread Wido den Hollander
s 
public IPv6 addresses to clients connected.


Instead of routing the subnet to a VR we route the subnet to a single 
instance in a shared network.


If we could then also move these subnets between Instances easily, one 
could quickly migrate to a different instance while keeping the same IPv6 
subnet.


Wido



Proof-of-concept commentary: here's what I did to test the idea:

   *   Created an isolated network and deployed a VM in my home lab
The VR running on KVM has following nics
eth0 - guest network
eth1 - link local
eth2 - public network

   *   I setup a custom openwrt router on a RPi4 to serve as a toy-core router 
where I create a wan6 IPv6 tunnel using tunnel broker and I got a /48 
allocated. My configuration looks like:
/48 - 2001:470:ed36::/48 (allocated by tunnel broker)
/64 - 2001:470:36:3e2::/64 (default allocated by)

I create a LAN ipv6 (public network for CloudStack VR): at subnet/prefix 0:
LAN IPv6 address: 2001:470:ed36:0::1/64
Address mode: SLAAC+stateless DHCP (no dhcpv6)
In the isolated VR, I enabled ipv6 as:
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.autoconf = 1

Set up an IPv6 nameserver/DNS in /etc/resolv.conf
And configured the nics:
echo iface eth0 inet6 auto >> /etc/network/interfaces
echo iface eth2 inet6 auto >> /etc/network/interfaces
/etc/init.d/networking restart
Next, restart ACS isolated network without cleanup to have it reconfigure IPv4 
nics, firewall, NAT etc

   *
Next, I created a /64 network for the isolated guest network on eth0 of VR 
using radvd:

# cat /etc/radvd.conf
interface eth0
{
 AdvSendAdvert on;
 MinRtrAdvInterval 5;
 MaxRtrAdvInterval 15;
 prefix 2001:470:ed36:1::/64
 {
 AdvOnLink on;
 AdvAutonomous on;
 };
};
systemctl restart radvd
All guest VMs nics and VR's eth0 gets IPv6 address (SLAAC) in this ...:1::/64 
network
   *   Finally I added a static route in toy core-router for the new /64 IPv6 
range in the isolated network
2001:470:ed36:1::/64 via  dev 
   *
... and I enabled firewall rules to allow any traffic to pass for the new /64 
network

And voila, all done! I created a domain AAAA record that points to my guest VM's 
IPv6 address, with a test webserver on
http://ipv6-isolated-ntwk-demo.yadav.cloud/

(Note: I'll get rid of the tunnel and request a new /48 block after a few days, 
sharing this solely for testing purposes)


Regards.

________
From: Wido den Hollander 
Sent: Tuesday, July 20, 2021 12:46
To: dev@cloudstack.apache.org 
Subject: Re: IPV6 in Isolated/VPC networks



On 19-07-2021 at 20:38, Kristaps Cudars wrote:

Hi Wido,

I assume that floating IPs will not work great with ingress/egress ACLs on the VR.

 From a regular ACS user perspective:
I have an Instance with dualstack running a web app on 443.
I want to swap instances for whatever reason.
In the case of IPv4: change the d-nat rule.
In the case of IPv6, if the floating IP was not created upfront, he will need to change 
the DNS entry, which usually has a 24h TTL. An inconvenient degradation in experience.



Yes, but, keep in mind that the IP you are using can also be terminated
on the VR where HAProxy proxies request to the backend VM (could even be
v4!)

I'm not against DHCPv6, but I have seen many issues with implementing
it. Therefore I always stick to SLAAC.


 From an ACS admin perspective:
I don’t want to have these tickets in the helpdesk.
"You need to create another floating IP so that it would be seamless" will not 
work as an answer.



I understand that as well.

Wido



On 2021/07/19 09:05:54, Wido den Hollander  wrote:



On 16-07-2021 at 21:46, Kristaps Cudars wrote:

Hi Wido,

Your proposal is to sacrifice the ability to reassign an IPv6 address to an instance, 
to have an internal domain prefix, and to list in the ACS DB which IPv6 has been assigned 
to which instance, and to go with RA and SLAAC. For route signaling to the switch, use 
BGP/OSPFv3 or manual pre-creation.



You can still list the IPs which have been assigned. You'll know exactly
what IPv6 address a VM has because of the prefix + MAC. Privacy
Extensions need to be disabled in the VM.

This already works in CloudStack in Shared Networks in this way.

Using secondary IPs you can always have 'floating' IPv6 addresses.

Wido


Option with RA and managed flag that DHCPv6 is in use to support preset 
information and ability to create route information from ACS is not an option 
as DHCPv6 its failing?


On 2021/07/16 15:17:42, Wido den Hollander  wrote:



On 16-07-2021 at 16:42, Hean Seng wrote:

Hi Wido,

In current setup,  each Cloudstack have own VR, so in this new  IPv6 subnet
allocation , each VR (which have Frr) will need to have peering with ISP
router (and either BGP or Static Route) , and there is 1000 Acocunts,  it
will 1000 BGP session with ISP router ,  Am I right for this ? or I
understand wrong .



Yes, that is correct. A /56 would also be sufficient or a /60 which is
enough to allocate a few /64 subnets.

Re: IPV6 in Isolated/VPC networks

2021-07-20 Thread Wido den Hollander




On 19-07-2021 at 20:38, Kristaps Cudars wrote:

Hi Wido,

I assume that floating IPs will not work great with ingress/egress ACLs on the VR.

 From a regular ACS user perspective:
I have an Instance with dualstack running a web app on 443.
I want to swap instances for whatever reason.
In the case of IPv4: change the d-nat rule.
In the case of IPv6, if the floating IP was not created upfront, he will need to change 
the DNS entry, which usually has a 24h TTL. An inconvenient degradation in experience.



Yes, but, keep in mind that the IP you are using can also be terminated 
on the VR where HAProxy proxies request to the backend VM (could even be 
v4!)


I'm not against DHCPv6, but I have seen many issues with implementing 
it. Therefore I always stick to SLAAC.



 From an ACS admin perspective:
I don’t want to have these tickets in the helpdesk.
"You need to create another floating IP so that it would be seamless" will not 
work as an answer.



I understand that as well.

Wido



On 2021/07/19 09:05:54, Wido den Hollander  wrote:



On 16-07-2021 at 21:46, Kristaps Cudars wrote:

Hi Wido,

Your proposal is to sacrifice the ability to reassign an IPv6 address to an instance, 
to have an internal domain prefix, and to list in the ACS DB which IPv6 has been assigned 
to which instance, and to go with RA and SLAAC. For route signaling to the switch, use 
BGP/OSPFv3 or manual pre-creation.



You can still list the IPs which have been assigned. You'll know exactly
what IPv6 address a VM has because of the prefix + MAC. Privacy
Extensions need to be disabled in the VM.

This already works in CloudStack in Shared Networks in this way.

Using secondary IPs you can always have 'floating' IPv6 addresses.

Wido


Option with RA and managed flag that DHCPv6 is in use to support preset 
information and ability to create route information from ACS is not an option 
as DHCPv6 its failing?


On 2021/07/16 15:17:42, Wido den Hollander  wrote:



On 16-07-2021 at 16:42, Hean Seng wrote:

Hi Wido,

In current setup,  each Cloudstack have own VR, so in this new  IPv6 subnet
allocation , each VR (which have Frr) will need to have peering with ISP
router (and either BGP or Static Route) , and there is 1000 Acocunts,  it
will 1000 BGP session with ISP router ,  Am I right for this ? or I
understand wrong .



Yes, that is correct. A /56 would also be sufficient or a /60 which is
enough to allocate a few /64 subnets.

1000 BGP connections isn't really a problem for a proper router at the
ISP. OSPF(v3) would be better, but as I said that's poorly supported.

The ISP could also install 1000 static routes, but that means that the
ISP's router needs to have those configured.

http://docs.frrouting.org/en/latest/ospf6d.html
(While looking up this URL I see that Frr recently put in a lot of work
in OSPFv3, seems better now)


I understand IPv6 is different then IPv4, and in IPv6 it suppose each
devices have own IP. It just how to realize in easy way.









On Fri, Jul 16, 2021 at 8:17 PM Wido den Hollander  wrote:




On 16-07-2021 at 05:54, Hean Seng wrote:

Hi Wido,

My initial thought is not like this,  it is the /48 at ISP router, and

/64

subnet assign to AdvanceZoneVR,   AdvanceZoneVR responsible is
distribule IPv6 ip (from the assigned /64 sunet) to VM,  and not routing
the traffic,   in the VM that get the IPv6 IP will default route to ISP
router as gw.   It can may be a bridge over via Advancezone-VR.



How would you bridge this? That sounds like NAT?

IPv6 is meant to be routed. Not to be translated or bridged in any way.

The way I made the drawing is exactly how IPv6 should work in a VPC
environment.

Traffic flows through the VR where it can do firewalling of the traffic.


However, If do as the way described in the drawing, then i suppose will

be

another kind of virtual router going to introduce , to get hold the /48

in

this virtual router right ?



It can be the same VR. But keep in mind that IPv6 != IPv4.

The VR will get Frr as a new daemon which can talk BGP with the upper
network to route traffic.


After this,  The Advance Zone, NAT's  VR will peer with this new IPv6 VR
for getting the IPv6 /64 prefix ?



IPv4 will be behind NAT, but IPv6 will not be behind NAT.


If do in this way, then I guess  you just only need Static route, with
peering ip both end  as one /48 can have a lot of /64 on it.  And

hardware

budgeting for new IPv6-VR will become very important, as all traffic will
need to pass over it .



Routing or NAT is the same for the VR. You don't need a very beefy VR
for this.


It will be like

ISP Router  -- >  (new IPV6-VR )  > AdvanceZone-VR > VM

Relationship of (new IPv6 VR) and AdvanceZone-VR , may be considering on
OSPF instead of  BGP , otherwise few thousand of AdvanceZone-VR wil have
few thousand of BGP session. on new-IPv6-VR

Also, I suppose we cannot do ISP router. -->. Advancezone VR direct,   ,
otherwise ISP router will be full of /64 prefix route either on BGP( Many
BGP Session) , or  Many Static route .   If few thousand account, ti

Re: IPV6 in Isolated/VPC networks

2021-07-19 Thread Wido den Hollander




On 17-07-2021 at 06:28, Hean Seng wrote:

I think if we do it this way, since you were to implement a peering IP 
between the VR and the physical router, we would need to keep the /56 or /48 in 
CloudStack? Currently we can only add /64 subnets to CloudStack (instead of 
keeping the /56 or /48 there).



We can have a /64 at CloudStack in which all VRs talk with the router of 
the ISP.


That is large enough as an interconnect subnet between the VRs and routers.

From there you can route a /56, /48 or whichever size you want towards the VR.

From the interconnect /64 you can also grab IPs which you use for 
loadbalancing purposes over different VMs.




I saw that what other software providers do is add a /64 subnet to their system, 
and after that allocate a subnet to the VM (from the previously added list).

Maybe consider OSPF if really going down this path. It really is a nightmare to 
maintain 1000 or a few thousand BGP sessions. You can imagine your 
Cisco router's list of a few thousand BGP sessions.



Yes, but I would suggest that both OSPFv3 and BGP should work. Not 
everybody will have 1000 accounts in their environment.


Even static routes should be supported.

Wido






On Fri, Jul 16, 2021 at 11:17 PM Wido den Hollander  wrote:




On 16-07-2021 at 16:42, Hean Seng wrote:

Hi Wido,

In current setup,  each Cloudstack have own VR, so in this new  IPv6

subnet

allocation , each VR (which have Frr) will need to have peering with ISP
router (and either BGP or Static Route) , and there is 1000 Acocunts,  it
will 1000 BGP session with ISP router ,  Am I right for this ? or I
understand wrong .



Yes, that is correct. A /56 would also be sufficient or a /60 which is
enough to allocate a few /64 subnets.

1000 BGP connections isn't really a problem for a proper router at the
ISP. OSPF(v3) would be better, but as I said that's poorly supported.

The ISP could also install 1000 static routes, but that means that the
ISP's router needs to have those configured.

http://docs.frrouting.org/en/latest/ospf6d.html
(While looking up this URL I see that Frr recently put in a lot of work
in OSPFv3, seems better now)


I understand IPv6 is different then IPv4, and in IPv6 it suppose each
devices have own IP. It just how to realize in easy way.









On Fri, Jul 16, 2021 at 8:17 PM Wido den Hollander 

wrote:





On 16-07-2021 at 05:54, Hean Seng wrote:

Hi Wido,

My initial thought is not like this,  it is the /48 at ISP router, and

/64

subnet assign to AdvanceZoneVR,   AdvanceZoneVR responsible is
distribule IPv6 ip (from the assigned /64 sunet) to VM,  and not

routing

the traffic,   in the VM that get the IPv6 IP will default route to ISP
router as gw.   It can may be a bridge over via Advancezone-VR.



How would you bridge this? That sounds like NAT?

IPv6 is meant to be routed. Not to be translated or bridged in any way.

The way I made the drawing is exactly how IPv6 should work in a VPC
environment.

Traffic flows through the VR where it can do firewalling of the traffic.


However, If do as the way described in the drawing, then i suppose will

be

another kind of virtual router going to introduce , to get hold the /48

in

this virtual router right ?



It can be the same VR. But keep in mind that IPv6 != IPv4.

The VR will get Frr as a new daemon which can talk BGP with the upper
network to route traffic.


After this,  The Advance Zone, NAT's  VR will peer with this new IPv6

VR

for getting the IPv6 /64 prefix ?



IPv4 will be behind NAT, but IPv6 will not be behind NAT.


If do in this way, then I guess  you just only need Static route, with
peering ip both end  as one /48 can have a lot of /64 on it.  And

hardware

budgeting for new IPv6-VR will become very important, as all traffic

will

need to pass over it .



Routing or NAT is the same for the VR. You don't need a very beefy VR
for this.


It will be like

ISP Router  -- >  (new IPV6-VR )  > AdvanceZone-VR > VM

Relationship of (new IPv6 VR) and AdvanceZone-VR , may be considering

on

OSPF instead of  BGP , otherwise few thousand of AdvanceZone-VR wil

have

few thousand of BGP session. on new-IPv6-VR

Also, I suppose we cannot do ISP router. -->. Advancezone VR direct,

  ,

otherwise ISP router will be full of /64 prefix route either on BGP(

Many

BGP Session) , or  Many Static route .   If few thousand account, ti

will

be few thousand of BGP session with ISP router or few thousand static

route

which  is not possible .






On Thu, Jul 15, 2021 at 10:47 PM Wido den Hollander 

wrote:



But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

On 15-07-2021 at 12:39, Hean Seng wrote:

Or explain like this :

1) Cloudstack generate list of /64 subnet from /48 that Network admin
assigned to Cloudstack
2) Cloudsack allocated the subnet (that ge

Re: IPV6 in Isolated/VPC networks

2021-07-19 Thread Wido den Hollander




On 16-07-2021 at 21:46, Kristaps Cudars wrote:

Hi Wido,

Your proposal is to sacrifice the ability to reassign an IPv6 address to an instance, 
to have an internal domain prefix, and to list in the ACS DB which IPv6 has been assigned 
to which instance, and to go with RA and SLAAC. For route signaling to the switch, use 
BGP/OSPFv3 or manual pre-creation.



You can still list the IPs which have been assigned. You'll know exactly 
what IPv6 address a VM has because of the prefix + MAC. Privacy 
Extensions need to be disabled in the VM.


This already works in CloudStack in Shared Networks in this way.

Using secondary IPs you can always have 'floating' IPv6 addresses.
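That prefix + MAC relationship (modified EUI-64, which is what SLAAC yields when privacy extensions are disabled) can be sketched as follows; the prefix and MAC below are example values, and the function name is just for illustration:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the SLAAC (modified EUI-64) address a VM will configure."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the MAC
    # Insert ff:fe in the middle to build the 64-bit interface identifier.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iid = int.from_bytes(bytes(eui64), "big")
    network = ipaddress.ip_network(prefix)
    return ipaddress.ip_address(int(network.network_address) | iid)

print(slaac_address("2001:db8:1::/64", "52:54:00:12:34:56"))
```

So knowing the /64 of the network and the MAC of the NIC is enough to predict the address.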

Wido


Is the option with RA and the Managed flag (indicating DHCPv6 is in use), to support preset 
information and the ability to create route information from ACS, not an option 
because DHCPv6 is failing?


On 2021/07/16 15:17:42, Wido den Hollander  wrote:



On 16-07-2021 at 16:42, Hean Seng wrote:

Hi Wido,

In current setup,  each Cloudstack have own VR, so in this new  IPv6 subnet
allocation , each VR (which have Frr) will need to have peering with ISP
router (and either BGP or Static Route) , and there is 1000 Acocunts,  it
will 1000 BGP session with ISP router ,  Am I right for this ? or I
understand wrong .



Yes, that is correct. A /56 would also be sufficient or a /60 which is
enough to allocate a few /64 subnets.

1000 BGP connections isn't really a problem for a proper router at the
ISP. OSPF(v3) would be better, but as I said that's poorly supported.

The ISP could also install 1000 static routes, but that means that the
ISP's router needs to have those configured.

http://docs.frrouting.org/en/latest/ospf6d.html
(While looking up this URL I see that Frr recently put in a lot of work
in OSPFv3, seems better now)


I understand IPv6 is different then IPv4, and in IPv6 it suppose each
devices have own IP. It just how to realize in easy way.









On Fri, Jul 16, 2021 at 8:17 PM Wido den Hollander  wrote:




On 16-07-2021 at 05:54, Hean Seng wrote:

Hi Wido,

My initial thought is not like this,  it is the /48 at ISP router, and

/64

subnet assign to AdvanceZoneVR,   AdvanceZoneVR responsible is
distribule IPv6 ip (from the assigned /64 sunet) to VM,  and not routing
the traffic,   in the VM that get the IPv6 IP will default route to ISP
router as gw.   It can may be a bridge over via Advancezone-VR.



How would you bridge this? That sounds like NAT?

IPv6 is meant to be routed. Not to be translated or bridged in any way.

The way I made the drawing is exactly how IPv6 should work in a VPC
environment.

Traffic flows through the VR where it can do firewalling of the traffic.


However, If do as the way described in the drawing, then i suppose will

be

another kind of virtual router going to introduce , to get hold the /48

in

this virtual router right ?



It can be the same VR. But keep in mind that IPv6 != IPv4.

The VR will get Frr as a new daemon which can talk BGP with the upper
network to route traffic.


After this,  The Advance Zone, NAT's  VR will peer with this new IPv6 VR
for getting the IPv6 /64 prefix ?



IPv4 will be behind NAT, but IPv6 will not be behind NAT.


If do in this way, then I guess  you just only need Static route, with
peering ip both end  as one /48 can have a lot of /64 on it.  And

hardware

budgeting for new IPv6-VR will become very important, as all traffic will
need to pass over it .



Routing or NAT is the same for the VR. You don't need a very beefy VR
for this.


It will be like

ISP Router  -- >  (new IPV6-VR )  > AdvanceZone-VR > VM

Relationship of (new IPv6 VR) and AdvanceZone-VR , may be considering on
OSPF instead of  BGP , otherwise few thousand of AdvanceZone-VR wil have
few thousand of BGP session. on new-IPv6-VR

Also, I suppose we cannot do ISP router. -->. Advancezone VR direct,   ,
otherwise ISP router will be full of /64 prefix route either on BGP( Many
BGP Session) , or  Many Static route .   If few thousand account, ti will
be few thousand of BGP session with ISP router or few thousand static

route

which  is not possible .






On Thu, Jul 15, 2021 at 10:47 PM Wido den Hollander 

wrote:



But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

On 15-07-2021 at 12:39, Hean Seng wrote:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin 
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual 
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64 
allocated to the VR) to a VM
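Step 1 of the list above can be sketched with Python's ipaddress module (the /48 value is illustrative, not an actual allocation):

```python
import ipaddress

# The /48 the network admin assigned to CloudStack (example value).
block = ipaddress.ip_network("2001:db8:500::/48")

# Lazily enumerate the 65536 possible /64 guest subnets.
subnets = block.subnets(new_prefix=64)

first = next(subnets)   # would go to the first Virtual Router
second = next(subnets)  # the next VR gets the next /64
print(first, second)
```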






On Thu, Jul 15, 2021 at 6:25 PM Hean Seng <heans...@gmail.com> wrote:

   Hi Wido,

   I think the /48 is at physical router as gateway , and subnet of

/64

   a

Re: IPV6 in Isolated/VPC networks

2021-07-16 Thread Wido den Hollander




On 16-07-2021 at 16:42, Hean Seng wrote:

Hi Wido,

In the current setup each CloudStack account has its own VR, so with this new IPv6 subnet
allocation each VR (which has FRR) will need a peering with the ISP
router (either BGP or a static route). If there are 1000 accounts, there
will be 1000 BGP sessions with the ISP router. Am I right about this, or do I
understand it wrong?



Yes, that is correct. A /56 would also be sufficient or a /60 which is 
enough to allocate a few /64 subnets.


1000 BGP connections isn't really a problem for a proper router at the 
ISP. OSPF(v3) would be better, but as I said that's poorly supported.


The ISP could also install 1000 static routes, but that means that the
ISP's router needs to have those configured.

http://docs.frrouting.org/en/latest/ospf6d.html
(While looking up this URL I see that Frr recently put in a lot of work 
in OSPFv3, seems better now)
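As a sketch of what such a per-VR peering could look like in FRR configuration (all AS numbers, prefixes, and neighbor addresses here are invented for illustration, not CloudStack's actual setup):

```
router bgp 65001
 neighbor 2001:db8:100::1 remote-as 64512
 address-family ipv6 unicast
  network 2001:db8:500::/48
  neighbor 2001:db8:100::1 activate
 exit-address-family
```

Each VR would announce only its own guest prefix; the ISP side then carries one session per VR.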



I understand IPv6 is different than IPv4, and that in IPv6 each
device is supposed to have its own IP. It is just a question of how to realize it in an easy way.









On Fri, Jul 16, 2021 at 8:17 PM Wido den Hollander  wrote:




On 16-07-2021 at 05:54, Hean Seng wrote:

Hi Wido,

My initial thought was not like this: the /48 is at the ISP router, and a /64 
subnet is assigned to the AdvanceZone VR. The AdvanceZone VR is responsible for 
distributing IPv6 IPs (from the assigned /64 subnet) to the VMs, and not for routing 
the traffic; the VM that gets the IPv6 IP will default-route to the ISP 
router as gateway. It may be a bridge over via the AdvanceZone-VR.



How would you bridge this? That sounds like NAT?

IPv6 is meant to be routed. Not to be translated or bridged in any way.

The way I made the drawing is exactly how IPv6 should work in a VPC
environment.

Traffic flows through the VR where it can do firewalling of the traffic.


However, if we do it the way described in the drawing, then I suppose 
another kind of virtual router is going to be introduced, to hold the /48 
in this virtual router, right?



It can be the same VR. But keep in mind that IPv6 != IPv4.

The VR will get Frr as a new daemon which can talk BGP with the upper
network to route traffic.


After this, the Advance Zone NAT VR will peer with this new IPv6 VR
to get the IPv6 /64 prefix?



IPv4 will be behind NAT, but IPv6 will not be behind NAT.


If done this way, then I guess you just need a static route, with a 
peering IP on both ends, as one /48 can carry a lot of /64s. And hardware 
budgeting for the new IPv6-VR will become very important, as all traffic will 
need to pass over it.



Routing or NAT is the same for the VR. You don't need a very beefy VR
for this.


It will be like:

ISP Router --> (new IPv6-VR) --> AdvanceZone-VR --> VM

For the relationship between the new IPv6 VR and the AdvanceZone-VR, maybe consider 
OSPF instead of BGP; otherwise a few thousand AdvanceZone-VRs will have 
a few thousand BGP sessions on the new IPv6-VR.

Also, I suppose we cannot do ISP router --> AdvanceZone VR directly, 
otherwise the ISP router will be full of /64 prefix routes, either via BGP (many 
BGP sessions) or many static routes. With a few thousand accounts, that will 
be a few thousand BGP sessions with the ISP router or a few thousand static 
routes, which is not possible.






On Thu, Jul 15, 2021 at 10:47 PM Wido den Hollander 

wrote:



But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

Op 15-07-2021 om 12:39 schreef Hean Seng:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64
allocated to the VR) to a VM






On Thu, Jul 15, 2021 at 6:25 PM Hean Seng mailto:heans...@gmail.com>> wrote:

  Hi Wido,

  I think the /48 is at the physical router as gateway, and a subnet of /64
  at the VR of CloudStack. CloudStack only keeps the /48 prefix and the
  VLAN information of this /48, to later split off /64s to the VRs.

  And the instances get a single IPv6 address out of the /64. The VR is
  getting the /64. The default gateway shall go to the physical router's
  IP in the /48. In this case, no BGP router is needed.


  Similar concept as IPv4:

  A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4
  created in a network,
  and a /64 of IPv6 is equivalent to a single IPv4 address assigned to a VM.




  On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander <

w...@widodh.nl

  <mailto:w...@widodh.nl>> wrote:



  Op 14-07-2021 om 16:44 schreef Hean Seng:
   > Hi
   >
   > I replied in another thread, i think do not need implement
  BGP or OSPF,

Re: IPV6 in Isolated/VPC networks

2021-07-16 Thread Wido den Hollander




Op 16-07-2021 om 05:54 schreef Hean Seng:

Hi Wido,

My initial thought was not like this: it is the /48 at the ISP router, and a
/64 subnet assigned to the AdvanceZoneVR. The AdvanceZoneVR is responsible for
distributing IPv6 addresses (from the assigned /64 subnet) to VMs, and not for
routing the traffic; the VM that gets the IPv6 address will have its default
route to the ISP router as gateway. It could maybe be bridged over via the
Advancezone-VR.



How would you bridge this? That sounds like NAT?

IPv6 is meant to be routed. Not to be translated or bridged in any way.

The way I made the drawing is exactly how IPv6 should work in a VPC
environment.


Traffic flows through the VR where it can do firewalling of the traffic.


However, if done the way described in the drawing, then I suppose another
kind of virtual router would be introduced, to hold the /48 in this virtual
router, right?



It can be the same VR. But keep in mind that IPv6 != IPv4.

The VR will get Frr as a new daemon which can talk BGP with the upper 
network to route traffic.



After this, the Advanced Zone's NAT VR will peer with this new IPv6 VR
to get the IPv6 /64 prefix?



IPv4 will be behind NAT, but IPv6 will not be behind NAT.


If done this way, then I guess you only need a static route, with
peering IPs on both ends, as one /48 can have a lot of /64s in it. And
hardware budgeting for the new IPv6-VR will become very important, as all
traffic will need to pass over it.



Routing or NAT is the same for the VR. You don't need a very beefy VR 
for this.



It will be like:

ISP Router --> (new IPv6-VR) --> AdvanceZone-VR --> VM

For the relationship between the new IPv6 VR and the AdvanceZone-VR, maybe
consider OSPF instead of BGP; otherwise a few thousand AdvanceZone-VRs would
mean a few thousand BGP sessions on the new IPv6-VR.

Also, I suppose we cannot do ISP router --> Advancezone VR directly; otherwise
the ISP router would be full of /64 prefix routes, either via BGP (many BGP
sessions) or many static routes. With a few thousand accounts, that would be a
few thousand BGP sessions with the ISP router, or a few thousand static routes,
which is not possible.






On Thu, Jul 15, 2021 at 10:47 PM Wido den Hollander  wrote:


But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

Op 15-07-2021 om 12:39 schreef Hean Seng:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64
allocated to the VR) to a VM






On Thu, Jul 15, 2021 at 6:25 PM Hean Seng mailto:heans...@gmail.com>> wrote:

 Hi Wido,

 I think the /48 is at the physical router as gateway, and a subnet of /64
 at the VR of CloudStack. CloudStack only keeps the /48 prefix and the
 VLAN information of this /48, to later split off /64s to the VRs.

 And the instances get a single IPv6 address out of the /64. The VR is
 getting the /64. The default gateway shall go to the physical router's
 IP in the /48. In this case, no BGP router is needed.


 Similar concept as IPv4:

 A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4
 created in a network,
 and a /64 of IPv6 is equivalent to a single IPv4 address assigned to a VM.




 On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander mailto:w...@widodh.nl>> wrote:



 Op 14-07-2021 om 16:44 schreef Hean Seng:
  > Hi
  >
  > I replied in another thread, i think do not need implement
 BGP or OSPF,
  > that would be complicated .
  >
  > We only need assign  IPv6 's /64 prefix to Virtual Router
 (VR) in NAT
  > zone, and the VR responsible to deliver single IPv6 to VM via
 DHCP6.
  >
  > In VR, you need to have Default IPv6 route to  Physical
 Router's /48. IP
  > as IPv6 Gateway.  Thens should be done .
  >
  > Example :
  > Physical Router Interface
  >   IPv6 IP : 2000:::1/48
  >
  > Cloudstack  virtual router : 2000::200:201::1/64 with
 default ipv6
  > route to router ip 2000:::1
  > and Clodustack Virtual router dhcp allocate IP to VM , and
 VM will have
  > default route to VR. IPv6 2000::200:201::1
  >
  > So in cloudstack need to allow  user to enter ,  IPv6
 gwateway , and
  > the  /48 Ipv6 prefix , then it will self allocate the /64 ip
 to the VR ,
  > and maintain make sure not ovelap allocation
  >
  >

 But N

Re: IPV6 in Isolated/VPC networks

2021-07-16 Thread Wido den Hollander




Op 15-07-2021 om 20:49 schreef Kristaps Cudars:

Hi Wido,

DHCPv6 is not an option?


It is poorly implemented in many operating systems; obtaining an address
does not work. I have seen it fail too many times.



It enables feature parity between IPv4 and IPv6 in the context of the VR.



IPv6 != IPv4. IPv6 was also never designed with DHCPv6 for Address 
Distribution. DHCPv6 was only added to send additional options towards a 
client.



Or are there some advantages to RA and SLAAC?



Yes. Router Advertisements are *mandatory*. Without the RA a VM will
never learn its default gateway(s).


By sending the prefix in the RA, the VM can autoconfigure and has
connectivity.


This already works in Basic networking and Advanced Networking in Shared 
Networks.


Security Grouping prevents spoofing of MACs and IPv6 addresses. It also 
filters away RAs coming from VMs, etc, etc.





On 2021/07/15 15:10:38, Wido den Hollander  wrote:



Op 15-07-2021 om 17:05 schreef Kristaps Cudars:

Hi Wido,

What is the benefit of using Router Advertisements on internal VR networks?



The VMs need the Router Advertisement to learn their default gateway.
That's the only way with IPv6.

The RA also contains the prefix (/64) which the VMs can use to calculate
their IPv6 address (SLAAC).


In the drawing the VR is in VPC mode; how will it work for an isolated network
where the external link/IP is not assigned initially?


On 2021/07/15 14:47:24, Wido den Hollander  wrote:

But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

Op 15-07-2021 om 12:39 schreef Hean Seng:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64
allocated to the VR) to a VM






On Thu, Jul 15, 2021 at 6:25 PM Hean Seng mailto:heans...@gmail.com>> wrote:

  Hi Wido,

  I think the /48 is at the physical router as gateway, and a subnet of /64
  at the VR of CloudStack. CloudStack only keeps the /48 prefix and the
  VLAN information of this /48, to later split off /64s to the VRs.

  And the instances get a single IPv6 address out of the /64. The VR is
  getting the /64. The default gateway shall go to the physical router's
  IP in the /48. In this case, no BGP router is needed.


  Similar concept as IPv4:

  A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4
  created in a network,
  and a /64 of IPv6 is equivalent to a single IPv4 address assigned to a VM.




  On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander mailto:w...@widodh.nl>> wrote:



  Op 14-07-2021 om 16:44 schreef Hean Seng:
   > Hi
   >
   > I replied in another thread, i think do not need implement
  BGP or OSPF,
   > that would be complicated .
   >
   > We only need assign  IPv6 's /64 prefix to Virtual Router
  (VR) in NAT
   > zone, and the VR responsible to deliver single IPv6 to VM via
  DHCP6.
   >
   > In VR, you need to have Default IPv6 route to  Physical
  Router's /48. IP
   > as IPv6 Gateway.  Thens should be done .
   >
   > Example :
   > Physical Router Interface
   >   IPv6 IP : 2000:::1/48
   >
   > Cloudstack  virtual router : 2000::200:201::1/64 with
  default ipv6
   > route to router ip 2000:::1
   > and Clodustack Virtual router dhcp allocate IP to VM , and
  VM will have
   > default route to VR. IPv6 2000::200:201::1
   >
   > So in cloudstack need to allow  user to enter ,  IPv6
  gwateway , and
   > the  /48 Ipv6 prefix , then it will self allocate the /64 ip
  to the VR ,
   > and maintain make sure not ovelap allocation
   >
   >

  But NAT is truly not the solution with IPv6. IPv6 is supposed to be
  routable. In addition you should avoid DHCPv6 as much as
  possible as
  that's not really the intended use-case for address allocation
  with IPv6.

  In order to route an /48 IPv6 subnet to the VR you have a few
  possibilities:

  - Static route from the upperlying routers which are outside of
  CloudStack
  - BGP
  - OSPFv3 (broken in most cases!)
  - DHCPv6 Prefix Delegation

  BGP and/or Static routes are still the best bet here.

  So what you do is that you tell CloudStack that you will route
  2001:db8::/48 to the VR, the 

Re: IPV6 in Isolated/VPC networks

2021-07-15 Thread Wido den Hollander




Op 15-07-2021 om 17:05 schreef Kristaps Cudars:

Hi Wido,

What is the benefit of using Router Advertisements on internal VR networks?



The VMs need the Router Advertisement to learn their default gateway. 
That's the only way with IPv6.


The RA also contains the prefix (/64) which the VMs can use to calculate 
their IPv6 address (SLAAC).



In the drawing the VR is in VPC mode; how will it work for an isolated network
where the external link/IP is not assigned initially?


On 2021/07/15 14:47:24, Wido den Hollander  wrote:

But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to
the Virtual Networks behind the VR.

There is no other way than routing, with either BGP or a static route.

Wido

Op 15-07-2021 om 12:39 schreef Hean Seng:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64
allocated to the VR) to a VM






On Thu, Jul 15, 2021 at 6:25 PM Hean Seng mailto:heans...@gmail.com>> wrote:

 Hi Wido,

 I think the /48 is at the physical router as gateway, and a subnet of /64
 at the VR of CloudStack. CloudStack only keeps the /48 prefix and the
 VLAN information of this /48, to later split off /64s to the VRs.

 And the instances get a single IPv6 address out of the /64. The VR is
 getting the /64. The default gateway shall go to the physical router's
 IP in the /48. In this case, no BGP router is needed.


 Similar concept as IPv4:

 A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4
 created in a network,
 and a /64 of IPv6 is equivalent to a single IPv4 address assigned to a VM.




 On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander mailto:w...@widodh.nl>> wrote:



 Op 14-07-2021 om 16:44 schreef Hean Seng:
  > Hi
  >
  > I replied in another thread, i think do not need implement
 BGP or OSPF,
  > that would be complicated .
  >
  > We only need assign  IPv6 's /64 prefix to Virtual Router
 (VR) in NAT
  > zone, and the VR responsible to deliver single IPv6 to VM via
 DHCP6.
  >
  > In VR, you need to have Default IPv6 route to  Physical
 Router's /48. IP
  > as IPv6 Gateway.  Thens should be done .
  >
  > Example :
  > Physical Router Interface
  >   IPv6 IP : 2000:::1/48
  >
  > Cloudstack  virtual router : 2000::200:201::1/64 with
 default ipv6
  > route to router ip 2000:::1
  > and Clodustack Virtual router dhcp allocate IP to VM , and
 VM will have
  > default route to VR. IPv6 2000::200:201::1
  >
  > So in cloudstack need to allow  user to enter ,  IPv6
 gwateway , and
  > the  /48 Ipv6 prefix , then it will self allocate the /64 ip
 to the VR ,
  > and maintain make sure not ovelap allocation
  >
  >

 But NAT is truly not the solution with IPv6. IPv6 is supposed to be
 routable. In addition you should avoid DHCPv6 as much as
 possible as
 that's not really the intended use-case for address allocation
 with IPv6.

 In order to route an /48 IPv6 subnet to the VR you have a few
 possibilities:

 - Static route from the upperlying routers which are outside of
 CloudStack
 - BGP
 - OSPFv3 (broken in most cases!)
 - DHCPv6 Prefix Delegation

 BGP and/or Static routes are still the best bet here.

 So what you do is that you tell CloudStack that you will route
 2001:db8::/48 to the VR, the VR can then use that to split it up
 into
 multiple /64 subnets going towards the instances:

 - 2001:db8::/64
 - 2001:db8:1::/64
 - 2001:db8:2::/64
 ...
 - 2001:db8:f::/64

 And go on.

 In case of BGP you indeed have to tell the VR a few things:

 - It's own AS number
 - The peer's address(es)

 With FRR you can simply say:

 neighbor 2001:db8:4fa::179 remote-as external

 The /48 you need to have at the VR anyway in case of either a
 static
 route or BGP.

 We just need to add a NullRoute on the VR for that /48 so that
 traffic
 will not be routed to the upper gateway in case of the VR can't
 find a
 route.

 Wido

  >
  >
  >
  >
  >
  > On Wed, Jul 14, 2021 at 8:55 PM Alex Mattioli
  > mailto:alex.matti...@shape

Re: IPV6 in Isolated/VPC networks

2021-07-15 Thread Wido den Hollander




Op 14-07-2021 om 14:59 schreef Alex Mattioli:

Hi Kristaps,
Thanks for the nice schematic, pretty much where we were going.

I just didn't understand your first statement: "I would like to argue that implementing a
dynamic routing protocol, with the associated security problems/challenges, to have IPv6
routes inserted in the L3 router(s) is not a good goal."

Would you mind clarifying/expanding on it please?


I would like to know that as well. Because protocols like BGP and OSPF 
are intended for that use-case.


I don't see ACS logging into our Juniper MX routers to program a static 
route.


BGP doesn't have to be used only for something like Anycast or multi-datacenter
availability.


The reason I said DHCPv6 should be avoided is its limited
support. You also need to keep a database of IP addresses, while SLAAC
does exactly what you want.


Router Advertisements with SLAAC are much better supported in operating
systems than DHCPv6 is.


Wido



Thanks
Alex

  



-Original Message-
From: Kristaps Cudars 
Sent: 13 July 2021 20:44
To: dev@cloudstack.apache.org
Subject: Re: IPV6 in Isolated/VPC networks

Hi,

I would like to argue that implementing a dynamic routing protocol, with the associated
security problems/challenges, just to have IPv6 routes inserted in the L3 router(s), is
not a good goal.

In my opinion, dynamic routing on the VR would be interesting to scale availability
of a service across several datacenters if they participate in the same AS. With BGP
you could advertise the same IP from different VRs located in different DCs, as an
IPv6 /128 and/or an IPv4 /32.

I would delegate the task of router configuration to ACS, somewhere around the moment
of VR creation. It could happen over SSH/SNMP/REST API or Ansible, something that
supports a wide variety of vendors/devices.

Have created rough schematic on how it could look on VR side: 
https://dice.lv/acs/ACS_router_v2.pdf


On 2021/07/13 13:08:20, Wido den Hollander  wrote:



On 7/7/21 1:16 PM, Alex Mattioli wrote:

Hi all,
@Wei Zhou<mailto:wei.z...@shapeblue.com> @Rohit 
Yadav<mailto:rohit.ya...@shapeblue.com> and myself are investigating how to enable 
IPV6 support on Isolated and VPC networks and would like your input on it.
At the moment we are looking at implementing FRR with BGP (and possibly OSPF) 
on the ACS VR.

We are looking for requirements, recommendations, ideas, rants, etc...etc...



Ok! Here we go.

I think that you mean that the VR will actually route the IPv6 traffic
and for that you need to have a way of getting a subnet routed to the VR.

BGP is probably your best bet here. Although OSPFv3 technically
supports this, it is very badly implemented in FRR, for example.

Now FRR is a very good router and one of the fancy features it
supports is BGP Unnumbered. This allows for auto-configuration of BGP
over an L2 network when both sides are sending Router Advertisements.
This is very easy for flexible BGP configurations where both sides have dynamic
IPs.

What you want to do is get a /56, a /48, or something larger than a
single /64 routed to the VR.

Now you can sub-segment this into separate /64 subnets. You don't want
to go smaller than a /64, as that prevents you from using SLAAC for
IPv6 address configuration. This is how it works for Shared Networks
now in Basic and Advanced Zones.

FRR can now also send out the Router Advertisements on the downlinks
sending out:

- DNS servers
- DNS domain
- Prefix (/64) to be used

There is no need for DHCPv6. You can calculate the IPv6 address the VM
will obtain by using the MAC and the prefix.

So in short:

- Using BGP you routed a /48 to the VR
- Now you split this into /64 subnets towards the isolated networks

Wido


Alex Mattioli

  







Re: IPV6 in Isolated/VPC networks

2021-07-15 Thread Wido den Hollander

But you still need routing. See the attached PNG (and draw.io XML).

You need to route the /48 subnet TO the VR which can then route it to 
the Virtual Networks behind the VR.


There is no other way than routing, with either BGP or a static route.

Wido

Op 15-07-2021 om 12:39 schreef Hean Seng:

Or explain it like this:

1) CloudStack generates a list of /64 subnets from the /48 that the network admin
assigned to CloudStack
2) CloudStack allocates a subnet (generated in step 1) to a Virtual
Router; one Virtual Router has one /64 subnet
3) The Virtual Router allocates a single IPv6 address (within the range of the /64
allocated to the VR) to a VM







On Thu, Jul 15, 2021 at 6:25 PM Hean Seng <mailto:heans...@gmail.com>> wrote:


Hi Wido,

I think the /48 is at the physical router as gateway, and a subnet of /64
at the VR of CloudStack. CloudStack only keeps the /48 prefix and the
VLAN information of this /48, to later split off /64s to the VRs.

And the instances get a single IPv6 address out of the /64. The VR is
getting the /64. The default gateway shall go to the physical router's
IP in the /48. In this case, no BGP router is needed.


Similar concept as IPv4:

A /48 subnet of IPv6 is equivalent to the current /24 subnet of IPv4
created in a network,
and a /64 of IPv6 is equivalent to a single IPv4 address assigned to a VM.




On Thu, Jul 15, 2021 at 5:31 PM Wido den Hollander mailto:w...@widodh.nl>> wrote:



Op 14-07-2021 om 16:44 schreef Hean Seng:
 > Hi
 >
 > I replied in another thread, i think do not need implement
BGP or OSPF,
 > that would be complicated .
 >
 > We only need assign  IPv6 's /64 prefix to Virtual Router
(VR) in NAT
 > zone, and the VR responsible to deliver single IPv6 to VM via
DHCP6.
 >
 > In VR, you need to have Default IPv6 route to  Physical
Router's /48. IP
 > as IPv6 Gateway.  Thens should be done .
 >
 > Example :
 > Physical Router Interface
 >   IPv6 IP : 2000:::1/48
 >
 > Cloudstack  virtual router : 2000::200:201::1/64 with
default ipv6
 > route to router ip 2000:::1
 > and Clodustack Virtual router dhcp allocate IP to VM , and 
VM will have

 > default route to VR. IPv6 2000::200:201::1
 >
 > So in cloudstack need to allow  user to enter ,  IPv6
gwateway , and
 > the  /48 Ipv6 prefix , then it will self allocate the /64 ip
to the VR ,
 > and maintain make sure not ovelap allocation
 >
 >

But NAT is truly not the solution with IPv6. IPv6 is supposed to be
routable. In addition you should avoid DHCPv6 as much as
possible as
that's not really the intended use-case for address allocation
with IPv6.

In order to route an /48 IPv6 subnet to the VR you have a few
possibilities:

- Static route from the upperlying routers which are outside of
CloudStack
- BGP
- OSPFv3 (broken in most cases!)
- DHCPv6 Prefix Delegation

BGP and/or Static routes are still the best bet here.

So what you do is that you tell CloudStack that you will route
2001:db8::/48 to the VR, the VR can then use that to split it up
into
multiple /64 subnets going towards the instances:

- 2001:db8::/64
- 2001:db8:1::/64
- 2001:db8:2::/64
...
- 2001:db8:f::/64

And go on.

In case of BGP you indeed have to tell the VR a few things:

- It's own AS number
- The peer's address(es)

With FRR you can simply say:

neighbor 2001:db8:4fa::179 remote-as external

The /48 you need to have at the VR anyway in case of either a
static
route or BGP.

We just need to add a NullRoute on the VR for that /48 so that
traffic
will not be routed to the upper gateway in case of the VR can't
find a
route.

Wido

 >
 >
 >
 >
 >
 > On Wed, Jul 14, 2021 at 8:55 PM Alex Mattioli
 > mailto:alex.matti...@shapeblue.com>
<mailto:alex.matti...@shapeblue.com
<mailto:alex.matti...@shapeblue.com>>> wrote:
 >
 >     Hi Wido,
 >     That's pretty much in line with our thoughts, thanks for
the input.
 >     I believe we agree on the following points then:
 >
 >     - FRR with BGP (no OSPF)
 >     - Route /48 (or/56) down to the VR
 >     - /64 per network
 >     - SLAAC for IP addressing
 >
 >     I believe the next big question is then "on which 

Re: IPV6 in Isolated/VPC networks

2021-07-15 Thread Wido den Hollander




Op 14-07-2021 om 16:44 schreef Hean Seng:

Hi

I replied in another thread; I think we do not need to implement BGP or OSPF,
that would be complicated.


We only need to assign an IPv6 /64 prefix to the Virtual Router (VR) in the NAT
zone, and the VR is responsible for delivering a single IPv6 address to each VM
via DHCPv6.


In the VR, you need to have a default IPv6 route to the physical router's IP in
the /48 as IPv6 gateway. Then it should be done.


Example :
Physical Router Interface
  IPv6 IP : 2000:::1/48

CloudStack virtual router : 2000::200:201::1/64 with a default IPv6
route to router IP 2000:::1,
and the CloudStack virtual router's DHCP allocates an IP to the VM, and the VM
will have a default route to the VR's IPv6 2000::200:201::1.


So CloudStack needs to allow the user to enter the IPv6 gateway and
the /48 IPv6 prefix; it will then allocate the /64s to the VRs itself,
and make sure allocations do not overlap.





But NAT is truly not the solution with IPv6. IPv6 is supposed to be 
routable. In addition you should avoid DHCPv6 as much as possible as 
that's not really the intended use-case for address allocation with IPv6.


In order to route a /48 IPv6 subnet to the VR you have a few possibilities:

- A static route from the upstream routers, which are outside of CloudStack
- BGP
- OSPFv3 (broken in most cases!)
- DHCPv6 Prefix Delegation

BGP and/or Static routes are still the best bet here.

So what you do is that you tell CloudStack that you will route 
2001:db8::/48 to the VR, the VR can then use that to split it up into 
multiple /64 subnets going towards the instances:


- 2001:db8::/64
- 2001:db8:1::/64
- 2001:db8:2::/64
...
- 2001:db8:f::/64

And go on.
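A quick sketch of that sub-segmentation, using Python's standard `ipaddress` module (the prefix is the RFC 3849 documentation prefix used above; note that the /64s inside 2001:db8::/48 take the form 2001:db8:0:X::/64):

```python
import ipaddress

# The /48 that is routed to the VR
block = ipaddress.ip_network("2001:db8::/48")

# A /48 contains 2^16 = 65536 candidate /64s, one per guest network
gen = block.subnets(new_prefix=64)
first_three = [str(next(gen)) for _ in range(3)]
print(first_three)
# -> ['2001:db8::/64', '2001:db8:0:1::/64', '2001:db8:0:2::/64']
```

CloudStack would only need to track which of these /64s are already handed out to a network.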

In case of BGP you indeed have to tell the VR a few things:

- Its own AS number
- The peer's address(es)

With FRR you can simply say:

neighbor 2001:db8:4fa::179 remote-as external

You need to have the /48 at the VR anyway, in the case of either a static
route or BGP.


We just need to add a null route on the VR for that /48, so that traffic
will not be routed to the upper gateway in case the VR can't find a
route.
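Put together, a minimal FRR sketch of the above (the BGP neighbor plus a blackhole route for the /48) could look like the following. Only the `remote-as external` form is from this thread; the AS number, the `network` statement, and the blackhole route are illustrative assumptions:

```
router bgp 64512
 neighbor 2001:db8:4fa::179 remote-as external
 address-family ipv6 unicast
  network 2001:db8::/48
  neighbor 2001:db8:4fa::179 activate
 exit-address-family
!
! Drop traffic for /64s that are not (yet) allocated, instead of
! bouncing it back to the upstream gateway
ipv6 route 2001:db8::/48 blackhole
```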


Wido







On Wed, Jul 14, 2021 at 8:55 PM Alex Mattioli 
mailto:alex.matti...@shapeblue.com>> wrote:


Hi Wido,
That's pretty much in line with our thoughts, thanks for the input. 
I believe we agree on the following points then:


- FRR with BGP (no OSPF)
- Route /48 (or/56) down to the VR
- /64 per network
- SLAAC for IP addressing

I believe the next big question is then "on which level of ACS do we
manage AS numbers?".  I see three options:
1) Private AS number on a per-zone basis
2) Root Admin assigned AS number on a domain/account basis
3) End-user driven AS number on a per network basis (for bring your
own AS and IP scenario)

Thoughts?

Cheers
Alex




-Original Message-
    From: Wido den Hollander mailto:w...@widodh.nl>>
Sent: 13 July 2021 15:08
To: dev@cloudstack.apache.org <mailto:dev@cloudstack.apache.org>;
Alex Mattioli mailto:alex.matti...@shapeblue.com>>
Cc: Wei Zhou mailto:wei.z...@shapeblue.com>>; Rohit Yadav
mailto:rohit.ya...@shapeblue.com>>;
Gabriel Beims Bräscher mailto:gabr...@pcextreme.nl>>
Subject: Re: IPV6 in Isolated/VPC networks



On 7/7/21 1:16 PM, Alex Mattioli wrote:
 > Hi all,
 > @Wei Zhou<mailto:wei.z...@shapeblue.com
<mailto:wei.z...@shapeblue.com>> @Rohit
Yadav<mailto:rohit.ya...@shapeblue.com
<mailto:rohit.ya...@shapeblue.com>> and myself are investigating how
to enable IPV6 support on Isolated and VPC networks and would like
your input on it.
 > At the moment we are looking at implementing FRR with BGP (and
possibly OSPF) on the ACS VR.
 >
 > We are looking for requirements, recommendations, ideas, rants,
etc...etc...
 >

Ok! Here we go.

I think that you mean that the VR will actually route the IPv6
traffic and for that you need to have a way of getting a subnet
routed to the VR.

BGP is probably your best bet here. Although OSPFv3 technically
supports this, it is very badly implemented in FRR, for example.

Now FRR is a very good router and one of the fancy features it
supports is BGP Unnumbered. This allows for auto-configuration of BGP
over an L2 network when both sides are sending Router Advertisements.
This is very easy for flexible BGP configurations where both sides
have dynamic IPs.

What you want to do is get a /56, a /48, or something larger than a
single /64 routed to the VR.

Now you can sub-segment this into separate /64 subnets. You don't
want to go smaller than a /64, as that prevents you from using SLAAC
for IPv6 address configuration. This is how it works for Shared
Networks now in Basic and Advanced Zones.

FRR can now also send out the Router Advertisements on the downlinks
sen

Re: IPV6 in Isolated/VPC networks

2021-07-13 Thread Wido den Hollander



On 7/7/21 1:16 PM, Alex Mattioli wrote:
> Hi all,
> @Wei Zhou @Rohit 
> Yadav and myself are investigating how to 
> enable IPV6 support on Isolated and VPC networks and would like your input on 
> it.
> At the moment we are looking at implementing FRR with BGP (and possibly OSPF) 
> on the ACS VR.
> 
> We are looking for requirements, recommendations, ideas, rants, etc...etc...
> 

Ok! Here we go.

I think that you mean that the VR will actually route the IPv6 traffic
and for that you need to have a way of getting a subnet routed to the VR.

BGP is probably your best bet here. Although OSPFv3 technically supports
this, it is very badly implemented in FRR, for example.

Now FRR is a very good router and one of the fancy features it supports
is BGP Unnumbered. This allows for auto-configuration of BGP over an L2
network when both sides are sending Router Advertisements. This is very
easy for flexible BGP configurations where both sides have dynamic IPs.

What you want to do is get a /56, a /48, or something larger than a
single /64 routed to the VR.

Now you can sub-segment this into separate /64 subnets. You don't want
to go smaller than a /64, as that prevents you from using SLAAC for IPv6
address configuration. This is how it works for Shared Networks now in
Basic and Advanced Zones.

FRR can now also send out the Router Advertisements on the downlinks
sending out:

- DNS servers
- DNS domain
- Prefix (/64) to be used
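As a rough sketch, those per-downlink RA settings could look like this in FRR (the interface name, prefix, and DNS values are placeholders, not taken from the thread):

```
interface eth1
 no ipv6 nd suppress-ra
 ipv6 nd prefix 2001:db8:0:1::/64
 ipv6 nd rdnss 2001:db8::53
 ipv6 nd dnssl cloud.internal
```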

There is no need for DHCPv6. You can calculate the IPv6 address the VM
will obtain by using the MAC and the prefix.
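For illustration, this is how that EUI-64/SLAAC calculation can be done with Python's stdlib (the MAC address and prefix here are made-up example values):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the SLAAC (EUI-64) address a VM will configure
    from a /64 prefix and its NIC's MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    # Insert ff:fe in the middle of the MAC to form the interface ID
    eui64 = bytes(octets[:3] + [0xFF, 0xFE] + octets[3:])
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(eui64, "big")]

print(slaac_address("2001:db8:1::/64", "06:5a:0e:00:00:01"))
# -> 2001:db8:1:0:45a:eff:fe00:1
```

This only holds when the guest uses EUI-64 SLAAC; operating systems with RFC 4941 privacy extensions enabled will pick a different, randomized interface ID.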

So in short:

- Using BGP you routed a /48 to the VR
- Now you split this into /64 subnets towards the isolated networks

Wido

> Alex Mattioli
> 
>  
> 
> 


Re: Disabling a storage pool

2021-06-29 Thread Wido den Hollander




Op 29-06-2021 om 15:44 schreef Rakesh Venkatesh:

Hello folks

Is there a way to disable a particular storage pool so that it won't be
used for further volume allocation? I don't want to enable maintenance
mode, as that will turn off the VMs whose volumes are running on that pool. I
don't want to use a global setting either, since that only comes into effect
after the threshold value is reached.

In some cases, even if the pool is just 10% allocated, I still want to
disable it so that the current volumes keep existing on the same pool
while further deployment of volumes on this pool is disabled.

I looked at the storage tags option, but that involves adding tags to
service offerings and I don't want to mess with those tags. Should we add
a new API to enable this feature? Or is there any better suggestion?



See: 
https://cloudstack.apache.org/api/apidocs-4.15/apis/updateStoragePool.html


The 'enabled' flag does exactly what you want:

'false to disable the pool for allocation of new volumes, true to enable 
it back.'


Wido



Re: [VOTE] Apache CloudStack 4.15.1.0 (RC1)

2021-06-03 Thread Wido den Hollander
+1 (binding)

On 6/2/21 11:50 AM, Rohit Yadav wrote:
> Hi All,
> 
> I've created a 4.15.1.0 release, with the following artifacts up for a vote:
> 
> Git Branch and Commit SHA:
> https://github.com/apache/cloudstack/tree/4.15.1.0-RC20210602T1429
> Commit: aaac4b17099ba838a3f7b57400277ca9b23f98f5
> 
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.15.1.0/
> 
> PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> 
> The vote will be open for 120 hours, until 7 June 2021.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> For users convenience, the packages from this release candidate (RC1) and
> 4.15.1 systemvmtemplates are available here:
> https://download.cloudstack.org/testing/4.15.1.0-RC1/
> https://download.cloudstack.org/systemvm/4.15/
> 
> Documentation is not published yet, but the following may be referenced for
> upgrade related tests: (there's a new 4.15.1 systemvmtemplate to be
> registered prior to upgrade)
> https://github.com/apache/cloudstack-documentation/tree/4.15/source/upgrading/upgrade
> 
> Regards.
> 


Re: Changing the role of the account

2021-04-22 Thread Wido den Hollander
That's already possible: 
https://cloudstack.apache.org/api/apidocs-4.15/apis/updateAccount.html


roleid: "The UUID of the dynamic role to set for the account"

Isn't that what you are looking for?

Wido

On 22/04/2021 11:55, Rakesh Venkatesh wrote:

Hello folks

I don't think there is an endpoint right now to change the role of an
account. If I have to change that in the DB, it should be doable by
changing the IDs in two or three tables; I hope that won't break anything
else. Would it be good to add a new parameter to the updateAccount API to
take the new role ID?



Re: ACS agent not start from ubuntu 18.04

2021-04-06 Thread Wido den Hollander




On 01/04/2021 18:04, Support Admin wrote:

*Hi,*

I followed this link to set up ACS on Ubuntu 18.04; everything is working 
fine except the KVM ACS agent.


https://rohityadav.cloud/blog/cloudstack-kvm/

My network setup is like this:

Why won't the ACS agent start? Please help me.


You also need cloudbr1 for private communication between the hosts. It 
is failing to find that.


Wido



--

*Thanks & Regards.*

Support Admin


116/1 West Malibagh, D. I. T Road

Dhaka-1217, Bangladesh

*Mob :* +088 01716915504

*Email :* support.ad...@technologyrss.com

*Web :* www.technologyrss.com 



Re: ipv6 capability of shared networks

2021-03-30 Thread Wido den Hollander




On 28/03/2021 21:27, Stephan Seitz wrote:

On Sunday, 28.03.2021 at 20:33 +0200, Wido den Hollander wrote:


On 26/03/2021 20:56, Stephan Seitz wrote:

Wido, thanks a lot!

I just had to look into the db. The correctly calculated SLAAC is
already there.



Double-check: The API and UI do show an IPv6 address for the NIC?

It's then up to you to make sure the Routers in  the (shared)
network
send out the proper Router Advertisements.

Also check on the hypervisor with 'ip6tables-save' and ipset to see
if
all the IPs have been programmed properly into the security groups.

Should just work. We have been using this code for years now.

Wido


I was a little puzzled due to the new UI. Indeed, it is shown in the
UI. I didn't check UI and API at first because of the outdated 4.11
docs which mentioned dhcp6. My fault and poor media literacy :)

To summarize: Your code works well and everything is configured (and
shown) as it should, I just tried the wrong approach with dhcp and
didn't look out of the box.

Anyway, thanks for pointing me to SLAAC!


You're welcome!

Keep in mind that you should disable IPv6 privacy extensions or any 
other mechanism that generates an IPv6 address for the VM other than the 
EUI-64/SLAAC one.


Windows for example needs to be modified as well as by default it 
doesn't use the MAC of the NIC to generate an IPv6 address.
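For anyone double-checking those database values: the EUI-64/SLAAC address is derived from the NIC's MAC by inserting ff:fe in the middle and flipping the universal/local bit. A small sketch of that arithmetic (the MAC and prefix below are made-up examples, not values from this thread):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    """Derive the EUI-64/SLAAC address a VM obtains from its NIC MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                   # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return str(net.network_address + iid)

print(slaac_address("2001:db8:100::/64", "52:54:00:12:34:56"))
```

If the address a guest actually uses differs from this derivation, it is most likely generating privacy/temporary addresses and needs the tweaks described above.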


Wido



Stephan


Sorry for the noise!

Stephan

On Friday, 26.03.2021 at 20:28 +0100, Wido den Hollander wrote:

On 26/03/2021 20:23, Stephan Seitz wrote:

Hi!

I've recently deployed 4.15.0 Advanced Zone with CentOS 8 KVM hosts and
classic linux bridges. I do know that CentOS 7 is preferred, but with
some initial tweaks here and there, I'd say it's working quite well.



VLAN or VXLAN?


small scale, so VLAN fits very well (just for the record)

Currently, I'm trying to use IPv6 on shared networks. I'd learned that
IPv6-only does not work, so I switched to IPv6 plus RFC 1918 IPv4
natted at the outer gateway. IPv4 is not a requirement, but if it's
necessary to add, it doesn't harm.



Yes. IPv4 is still needed and RFC1918 is just fine. Cloud-init works
over IPv4. It's a lot of work to get rid of IPv4 in CloudStack.

I'm a big IPv6 fan (wrote a lot of the code in CS), but I didn't bother
getting rid of IPv4. Not a real use-case for v6-only just yet.


The IPv4 addresses of the deployed hosts are provided by the virtual
router as expected.

My problem is: I don't get any DHCPv6 lease out of the VR. I dug with
tcpdump on the host and VR. I see the solicit message arriving, but no
answering advertise message. I've tried almost everything at the host:
accepting RA, autoconf, selectively disabling these. Also modifying the
DHCPv6 DUID as seen in some 4.11 docs didn't change anything.



IPv6 does not work with DHCPv6. You should see that, when the IPv6 CIDR
is set properly for the shared network in the database, CloudStack
calculates/generates the IPv6 address the Instance should obtain through
SLAAC (without privacy addresses!)

When that works you have security grouping also working. It then filters
on source addresses from VMs and such.

We have thousands of VMs connected with IPv6 this way.

Wido


Best case is that I'm stuck with hosts correctly configured by the
router advertisement, but ACS doesn't know about it. So subsequently I
can't add records to the respective DNS zones.

Alternatively, I could skip ACS and add the provable EUI-64 addresses
to the zone, but I'd like to avoid that.

After a few uneducated peeks into the VR's dnsmasq configuration, I
cannot spot any setting for providing DHCPv6 leases.

Initially I'd deployed the 4.15.0 systemvmtemplate downloaded from
http://download.cloudstack.org/systemvm/4.15/
Right now, I've switched to the 4.15.1 from the same location, but that
didn't change anything.

I've also tried switching the Zone from internal DNS to external DNS
and vice versa (these are identical, except the internal DNS is also
equipped with the respective IPv6 addresses, which obviously cannot be
added to the external DNS). That didn't change anything either.

So, I'd like to ask for any advice.

Thanks in advance!

Stephan






Re: ipv6 capability of shared networks

2021-03-28 Thread Wido den Hollander




On 26/03/2021 20:56, Stephan Seitz wrote:

Wido, thanks a lot!

I just had to look into the db. The correctly calculated SLAAC is
already there.



Double-check: The API and UI do show an IPv6 address for the NIC?

It's then up to you to make sure the Routers in  the (shared) network 
send out the proper Router Advertisements.


Also check on the hypervisor with 'ip6tables-save' and ipset to see if 
all the IPs have been programmed properly into the security groups.


Should just work. We have been using this code for years now.

Wido


Sorry for the noise!

Stephan

On Friday, 26.03.2021 at 20:28 +0100, Wido den Hollander wrote:


On 26/03/2021 20:23, Stephan Seitz wrote:

Hi!

I've recently deployed 4.15.0 Advanced Zone with CentOS 8 KVM hosts and
classic linux bridges. I do know that CentOS 7 is preferred, but with
some initial tweaks here and there, I'd say it's working quite well.



VLAN or VXLAN?


small scale, so VLAN fits very well (just for the record)



Currently, I'm trying to use IPv6 on shared networks. I'd learned that
IPv6-only does not work, so I switched to IPv6 plus RFC 1918 IPv4
natted at the outer gateway. IPv4 is not a requirement, but if it's
necessary to add, it doesn't harm.



Yes. IPv4 is still needed and RFC1918 is just fine. Cloud-init works
over IPv4. It's a lot of work to get rid of IPv4 in CloudStack.

I'm a big IPv6 fan (wrote a lot of the code in CS), but I didn't bother
getting rid of IPv4. Not a real use-case for v6-only just yet.


The IPv4 addresses of the deployed hosts are provided by the virtual
router as expected.

My problem is: I don't get any DHCPv6 lease out of the VR. I dug with
tcpdump on the host and VR. I see the solicit message arriving, but no
answering advertise message. I've tried almost everything at the host:
accepting RA, autoconf, selectively disabling these. Also modifying the
DHCPv6 DUID as seen in some 4.11 docs didn't change anything.



IPv6 does not work with DHCPv6. You should see that, when the IPv6 CIDR
is set properly for the shared network in the database, CloudStack
calculates/generates the IPv6 address the Instance should obtain through
SLAAC (without privacy addresses!)

When that works you have security grouping also working. It then filters
on source addresses from VMs and such.

We have thousands of VMs connected with IPv6 this way.

Wido


Best case is that I'm stuck with hosts correctly configured by the
router advertisement, but ACS doesn't know about it. So subsequently I
can't add records to the respective DNS zones.

Alternatively, I could skip ACS and add the provable EUI-64 addresses
to the zone, but I'd like to avoid that.

After a few uneducated peeks into the VR's dnsmasq configuration, I
cannot spot any setting for providing DHCPv6 leases.

Initially I'd deployed the 4.15.0 systemvmtemplate downloaded from
http://download.cloudstack.org/systemvm/4.15/
Right now, I've switched to the 4.15.1 from the same location, but that
didn't change anything.

I've also tried switching the Zone from internal DNS to external DNS
and vice versa (these are identical, except the internal DNS is also
equipped with the respective IPv6 addresses, which obviously cannot be
added to the external DNS). That didn't change anything either.

So, I'd like to ask for any advice.

Thanks in advance!

Stephan






Re: ipv6 capability of shared networks

2021-03-26 Thread Wido den Hollander




On 26/03/2021 20:23, Stephan Seitz wrote:

Hi!

I've recently deployed 4.15.0 Advanced Zone with CentOS 8 KVM hosts and
classic linux bridges. I do know that CentOS 7 is preferred, but with
some initial tweaks here and there, I'd say it's working quite well.



VLAN or VXLAN?


Currently, I'm trying to use IPv6 on shared networks. I'd learned that
IPv6-only does not work, so I switched to IPv6 plus RFC 1918 IPv4
natted at the outer gateway. IPv4 is not a requirement, but if it's
necessary to add, it doesn't harm.



Yes. IPv4 is still needed and RFC1918 is just fine. Cloud-init works 
over IPv4. It's a lot of work to get rid of IPv4 in CloudStack.


I'm a big IPv6 fan (wrote a lot of the code in CS), but I didn't bother 
getting rid of IPv4. Not a real use-case for v6-only just yet.



The IPv4 addresses of the deployed hosts are provided by the virtual
router as expected.

My problem is: I don't get any DHCPv6 lease out of the VR. I dug with
tcpdump on the host and VR. I see the solicit message arriving, but no
answering advertise message. I've tried almost everything at the host:
accepting RA, autoconf, selectively disabling these. Also modifying the
DHCPv6 DUID as seen in some 4.11 docs didn't change anything.



IPv6 does not work with DHCPv6. You should see that, when the IPv6 CIDR 
is set properly for the shared network in the database, CloudStack 
calculates/generates the IPv6 address the Instance should obtain through 
SLAAC (without privacy addresses!)


When that works you have security grouping also working. It then filters 
on source addresses from VMs and such.


We have thousands of VMs connected with IPv6 this way.

Wido


Best case is that I'm stuck with hosts correctly configured by the
router advertisement, but ACS doesn't know about it. So subsequently I
can't add records to the respective DNS zones.

Alternatively, I could skip ACS and add the provable EUI-64 addresses
to the zone, but I'd like to avoid that.

After a few uneducated peeks into the VR's dnsmasq configuration, I
cannot spot any setting for providing DHCPv6 leases.

Initially I've deployed the 4.15.0 systemvmtemplate downloaded from
http://download.cloudstack.org/systemvm/4.15/
Right now, I've switched to the 4.15.1 from the same location, but that
didn't change anything.

I've also tried switching the Zone from internal DNS to external DNS
and vice versa (these are identical, except the internal DNS is also
equipped with the respective IPv6 addresses, which obviously cannot be
added to the external DNS). That didn't change anything either.

So, I'd like to ask for any advice.

Thanks in advance!

Stephan




Automatic live migration feature for KVM

2021-03-26 Thread Wido den Hollander

Hi,

RHEV and VMWare have features which automatically migrate VMs away from 
very busy hosts to prevent noisy neighbors.


VMware calls this DRS (Distributed Resource Scheduler) where it keeps 
migrating virtual machines to balance the workload over different hosts.


I wonder if somebody knows if something like this is available for KVM 
with CloudStack or if somebody has ever put effort into this.


Thanks,

Wido


Re: [DISCUSS] List of "supported" hypervisors and network devices

2021-03-01 Thread Wido den Hollander




On 25/02/2021 13:06, Andrija Panic wrote:

Hi folks,

in our official documentation, we state that we support MANY things that, I
assume, have not been tested by almost anyone, not being used widely by
CloudStack users.

My question: should we make a big note (in the documentation) that "the
following ... might work, but are not actively tested", or something along
these lines?



Yes, I agree. Many of these things below are uncertain if they actually 
work properly.



Subjects to discuss below:
###

-

LXC Host Containers on RHEL 7


I was already thinking of stripping this out of the code as I don't 
think anyone is using it.



-

Windows Server 2012 R2 (with Hyper-V Role enabled)
-

Hyper-V 2012 R2
-

Oracle VM 3.0+
-

Bare metal hosts are supported, which have no hypervisor. These hosts
can run the following operating systems:
   - Fedora 17
   - Ubuntu 12.04

Supported External Devices

- Netscaler VPX and MPX versions 9.3, 10.1e and 10.5
- Netscaler SDX version 9.3, 10.1e and 10.5
- SRX (Model srx100b) versions 10.3 to 10.4 R7.5
- F5 11.X
- Force 10 Switch version S4810 for Baremetal Advanced Networks

#

My point is that we discontinued supporting i.e. VMware 6.0 (due to VMware
stopped supporting it a while ago; valid reason) while in reality it works
very well (I know 4.13 works in production environments with VMware 6.0),
but we keep mentioning we support things that, probably, nobody tested, nor
is using at all - the ones from above.

Opinions, suggestion?



Like I said, this list can probably shrink as many things are untested 
and haven't been used for a while.


Wido


Re: RPM Repository Metadata

2021-02-15 Thread Wido den Hollander




On 16/02/2021 07:00, Rohit Yadav wrote:

Wido - in that case should we disable metadata re-generating cronjobs for both 
rpm/deb repos? We only need to regenerate repo metadata on a new release. 
There's no need to do this via cron for official releases.



I have disabled the CRON jobs for now for both RPM and DEB. Need to run 
them manually if we upload new packages.


Wido



Regards.


From: Wido den Hollander 
Sent: Monday, February 15, 2021 21:30
To: dev@cloudstack.apache.org ; Rohit Yadav 

Subject: Re: RPM Repository Metadata



On 15/02/2021 09:51, Rohit Yadav wrote:

Hi Nathan,

Thanks for reporting, I've been managing the rpm builds/repos on the server and 
I wasn't aware of this issue.
I checked and found there's an hourly cron job that updates RPM repo metadata 
using:

  createrepo --update --workers 1 --baseurl 


Based on your suggestion, I've changed the script to include: "createrepo --update 
--retain-old-md ".

@Wido Hollander<mailto:w...@pcextreme.nl> @Gabriel Beims 
Bräscher<mailto:gabr...@pcextreme.nl> - any reason why we have the cron job to update 
repo metadata?


No, I think it's just an oversight. It was setup and I don't think it
was very well thought of.

The CRON is a very simple Shell script which probably can use some
attention.
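A minimal sketch of what such a script could do instead: run createrepo with --retain-old-md (as Nathan suggested) and only on demand after uploading packages, not from cron. The repo path below is a placeholder, and this is an assumption about the real script, not its actual contents:

```python
import subprocess

def refresh_repo_metadata(repo_dir: str, dry_run: bool = False) -> list:
    """Regenerate RPM repo metadata while keeping the old metadata files,
    so clients with a cached repomd.xml (48h default expiry on EL8) don't 404."""
    cmd = ["createrepo", "--update", "--retain-old-md", repo_dir]
    if not dry_run:
        # Invoke manually after new packages are uploaded.
        subprocess.run(cmd, check=True)
    return cmd

print(refresh_repo_metadata("/var/www/download/centos/8/4.15", dry_run=True))
```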

Wido



Regards.


From: Nathan McGarvey 
Sent: Sunday, February 14, 2021 08:32
To: dev@cloudstack.apache.org 
Subject: RPM Repository Metadata

To whom this reaches (@widodh, perhaps?):

  First of all, thank you for building binary distributions (rpm, deb)
of CloudStack.

  I am attempting to create a downstream rsync mirror of the
http://download.cloudstack.org/ centos/rhel repos. (Namely, centos and
systemvm) and noticed two oddities:

  1. The frequency with which the metadata is being rebuilt is
astronomical. E.g.
http://download.cloudstack.org/centos/8/4.15/repodata/ looks like it is
being fully rebuilt every hour, though the RPMs contained within haven't
been updated in over a month. Was this supposed to have a
--retain-old-md or --retain-old-md-by-age flag? The default for things
like RHEL 8 is 48 hours for metadata expiry, so re-generating the entire
repo every hour can (and does) cause caching issues.

  2. The metadata contained in
http://download.cloudstack.org/centos/8/4.15/repodata/repomd.xml makes
it virtually impossible to mirror since it points the  tag at
http://cloudstack.apt-get.eu/
  a. Most RPM repos (E.g. The CentOS official ones, just point the
 tag to the repodata/> of the data type without external
links via relative URI. (See
http://mirror.centos.org/centos-8/8.3.2011/BaseOS/x86_64/os/repodata/repomd.xml
, for example.)
  b. Putting the xml:base in there effectively makes it not a
mirror since anyone pointed at their local mirror will actually redirect
to whatever xml:base is set to.

  FWIW: It looks like cloudstack.apt-get.eu and
download.cloudstack.org are *probably* the same host, which likely means
the xml:base being set at all may not actually be doing anything useful.


Please pardon my ignorance if these technical configurations are
intentional and have already been discussed, as I am a new poster to this
thread.


Thanks,
-Nathan McGarvey

rohit.ya...@shapeblue.com
www.shapeblue.com<http://www.shapeblue.com>
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue











Re: RPM Repository Metadata

2021-02-15 Thread Wido den Hollander




On 15/02/2021 09:51, Rohit Yadav wrote:

Hi Nathan,

Thanks for reporting, I've been managing the rpm builds/repos on the server and 
I wasn't aware of this issue.
I checked and found there's an hourly cron job that updates RPM repo metadata 
using:

 createrepo --update --workers 1 --baseurl 


Based on your suggestion, I've changed the script to include: "createrepo --update 
--retain-old-md ".

@Wido Hollander @Gabriel Beims 
Bräscher - any reason why we have the cron job to update 
repo metadata?


No, I think it's just an oversight. It was setup and I don't think it 
was very well thought of.


The CRON is a very simple Shell script which probably can use some 
attention.


Wido



Regards.


From: Nathan McGarvey 
Sent: Sunday, February 14, 2021 08:32
To: dev@cloudstack.apache.org 
Subject: RPM Repository Metadata

To whom this reaches (@widodh, perhaps?):

 First of all, thank you for building binary distributions (rpm, deb)
of CloudStack.

 I am attempting to create a downstream rsync mirror of the
http://download.cloudstack.org/ centos/rhel repos. (Namely, centos and
systemvm) and noticed two oddities:

 1. The frequency with which the metadata is being rebuilt is
astronomical. E.g.
http://download.cloudstack.org/centos/8/4.15/repodata/ looks like it is
being fully rebuilt every hour, though the RPMs contained within haven't
been updated in over a month. Was this supposed to have a
--retain-old-md or --retain-old-md-by-age flag? The default for things
like RHEL 8 is 48 hours for metadata expiry, so re-generating the entire
repo every hour can (and does) cause caching issues.

 2. The metadata contained in
http://download.cloudstack.org/centos/8/4.15/repodata/repomd.xml makes
it virtually impossible to mirror since it points the  tag at
http://cloudstack.apt-get.eu/
 a. Most RPM repos (E.g. The CentOS official ones, just point the
 tag to the repodata/> of the data type without external
links via relative URI. (See
http://mirror.centos.org/centos-8/8.3.2011/BaseOS/x86_64/os/repodata/repomd.xml
, for example.)
 b. Putting the xml:base in there effectively makes it not a
mirror since anyone pointed at their local mirror will actually redirect
to whatever xml:base is set to.

 FWIW: It looks like cloudstack.apt-get.eu and
download.cloudstack.org are *probably* the same host, which likely means
the xml:base being set at all may not actually be doing anything useful.


 Please pardon my ignorance if these technical configurations are
intentional and have already been discussed, as I am a new poster to this
thread.


Thanks,
-Nathan McGarvey






Re: [DISCUSS] Terraform CloudStack provider

2021-01-29 Thread Wido den Hollander




On 28/01/2021 10:55, Rohit Yadav wrote:

Agree we can ask that.



Is infra from ASF the place to ask this?

Wido



Regards.


From: Wido den Hollander 
Sent: Wednesday, January 27, 2021 15:35
To: Niclas Lindblom ; us...@cloudstack.apache.org 

Cc: dev@cloudstack.apache.org 
Subject: Re: [DISCUSS] Terraform CloudStack provider



On 1/27/21 12:18 AM, Niclas Lindblom wrote:

I can confirm that the Terraform plugin is working if it is already installed; 
since it was archived, it no longer downloads automatically when applying unless 
manually installed.

From the Hashicorp website, it appears it was archived when they moved all 
plugins to their registry; it needs an owner, and an email to Hashicorp, to be 
moved into the registry and supported again when running Terraform. I use it 
regularly but haven't got the technical skills to maintain the code, so I've 
been hoping this would be resolved.



I mailed Hashicorp to ask about this:

"Thanks for reaching out. The provider was archived because we launched
the Terraform Registry last year which allows vendors to host and
publish their own providers. We'd be happy to work with you to transfer
the repository over to a CloudStack Github organization where you can
build and publish releases to the registry.

We'd also like to have CloudStack join our Technology partnership
program so I can mark your Terraform provider as verified."

So I think we don't need to do much technology-wise.

I don't use Terraform and don't have a major stake in it, but I would
hate to see the Provider being removed from Terraform.

Should we request https://github.com/apache/cloudstack-terraform at
infra and then host the Provider there?

Wido


Niclas





On 26 Jan 2021, at 18:33, christian.nieph...@zv.fraunhofer.de wrote:





On 26. Jan 2021, at 10:45, Wido den Hollander  wrote:



On 1/26/21 10:40 AM, christian.nieph...@zv.fraunhofer.de wrote:

On 25. Jan 2021, at 12:40, Abhishek Kumar  wrote:


Hi all,

Terraform CloudStack provider by Hashicorp is archived here 
https://github.com/hashicorp/terraform-provider-cloudstack

Is anyone using or maintaining it?


We are also using it heavily and are somewhat worried about the module being 
archived.


Agreed. But do we know why this has been done? What needs to be done to
un-archive it?

If it's just a matter of some love and attention we can maybe arrange
something.

Is it technically broken or just abandoned?


This is just an educated guess, but given that we're not experiencing any 
technical issues, I believe it has just been abandoned.

Christian




Wido




We're aware of Ansible CloudStack module 
(https://docs.ansible.com/ansible/latest/scenario_guides/guide_cloudstack.html) 
but are there any other alternatives of Terraform that you may be using with 
CloudStack?


The Ansible module is working quite well. However, one of the advantages of 
Terraform, IMHO, is that one can easily destroy defined infrastructure with one 
command, while with Ansible 'the destruction' needs to be implemented in the 
playbook. Another advantage is that (at least) GitLab can now maintain 
Terraform states, which quite nicely supports GitOps approaches.

Cheers, Christian



Regards,
Abhishek














Re: [DISCUSS] Terraform CloudStack provider

2021-01-27 Thread Wido den Hollander



On 1/27/21 12:18 AM, Niclas Lindblom wrote:
> I can confirm that the Terraform plugin is working if it is already 
> installed; since it was archived, it no longer downloads automatically when 
> applying unless manually installed.
> 
> From the Hashicorp website, it appears it was archived when they moved all 
> plugins to their registry; it needs an owner, and an email to Hashicorp, to 
> be moved into the registry and supported again when running Terraform. I use 
> it regularly but haven't got the technical skills to maintain the code, so 
> I've been hoping this would be resolved.
> 

I mailed Hashicorp to ask about this:

"Thanks for reaching out. The provider was archived because we launched
the Terraform Registry last year which allows vendors to host and
publish their own providers. We'd be happy to work with you to transfer
the repository over to a CloudStack Github organization where you can
build and publish releases to the registry.

We'd also like to have CloudStack join our Technology partnership
program so I can mark your Terraform provider as verified."

So I think we don't need to do much technology-wise.

I don't use Terraform and don't have a major stake in it, but I would
hate to see the Provider being removed from Terraform.

Should we request https://github.com/apache/cloudstack-terraform at
infra and then host the Provider there?

Wido

> Niclas
> 
>> On 26 Jan 2021, at 18:33, christian.nieph...@zv.fraunhofer.de wrote:
>>
>>
>>
>>> On 26. Jan 2021, at 10:45, Wido den Hollander  wrote:
>>>
>>>
>>>
>>> On 1/26/21 10:40 AM, christian.nieph...@zv.fraunhofer.de wrote:
>>>> On 25. Jan 2021, at 12:40, Abhishek Kumar  
>>>> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> Terraform CloudStack provider by Hashicorp is archived here 
>>>>> https://github.com/hashicorp/terraform-provider-cloudstack
>>>>>
>>>>> Is anyone using or maintaining it?
>>>>
>>>> We are also using it heavily and are somewhat worried about the module 
>>>> being archived.
>>>
>>> Agreed. But do we know why this has been done? What needs to be done to
>>> un-archive it?
>>>
>>> If it's just a matter of some love and attention we can maybe arrange
>>> something.
>>>
>>> Is it technically broken or just abandoned?
>>
>> This is just an educated guess, but given that we're not experiencing any 
>> technical issues, I believe it has just been abandoned.
>>
>> Christian 
>>
>>
>>>
>>> Wido
>>>
>>>>
>>>>> We're aware of Ansible CloudStack module 
>>>>> (https://docs.ansible.com/ansible/latest/scenario_guides/guide_cloudstack.html)
>>>>>  but are there any other alternatives of Terraform that you may be using 
>>>>> with CloudStack?
>>>>
>>>> The Ansible module is working quite well. However, one of the advantages of 
>>>> Terraform, IMHO, is that one can easily destroy defined infrastructure with 
>>>> one command, while with Ansible 'the destruction' needs to be implemented 
>>>> in the playbook. Another advantage is that (at least) GitLab can now 
>>>> maintain Terraform states, which quite nicely supports GitOps approaches. 
>>>>
>>>> Cheers, Christian 
>>>>
>>>>>
>>>>> Regards,
>>>>> Abhishek
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>
> 


Re: [DISCUSS] Terraform CloudStack provider

2021-01-26 Thread Wido den Hollander



On 1/26/21 10:40 AM, christian.nieph...@zv.fraunhofer.de wrote:
> On 25. Jan 2021, at 12:40, Abhishek Kumar  
> wrote:
>>
>> Hi all,
>>
>> Terraform CloudStack provider by Hashicorp is archived here 
>> https://github.com/hashicorp/terraform-provider-cloudstack
>>
>> Is anyone using or maintaining it?
> 
> We are also using it heavily and are somewhat worried about the module being 
> archived.

Agreed. But do we know why this has been done? What needs to be done to
un-archive it?

If it's just a matter of some love and attention we can maybe arrange
something.

Is it technically broken or just abandoned?

Wido

> 
>> We're aware of Ansible CloudStack module 
>> (https://docs.ansible.com/ansible/latest/scenario_guides/guide_cloudstack.html)
>>  but are there any other alternatives of Terraform that you may be using 
>> with CloudStack?
> 
> The Ansible module is working quite well. However, one of the advantages of 
> Terraform, IMHO, is that one can easily destroy defined infrastructure with one 
> command, while with Ansible 'the destruction' needs to be implemented in the 
> playbook. Another advantage is that (at least) GitLab can now maintain 
> Terraform states, which quite nicely supports GitOps approaches. 
> 
> Cheers, Christian 
> 
>>
>> Regards,
>> Abhishek
>>
>>
>>
>>
> 


Re: [DISCUSS] Terraform CloudStack provider

2021-01-25 Thread Wido den Hollander




On 25/01/2021 12:40, Abhishek Kumar wrote:

Hi all,

Terraform CloudStack provider by Hashicorp is archived here 
https://github.com/hashicorp/terraform-provider-cloudstack

Is anyone using or maintaining it? We're aware of Ansible CloudStack module 
(https://docs.ansible.com/ansible/latest/scenario_guides/guide_cloudstack.html) 
but are there any other alternatives of Terraform that you may be using with 
CloudStack?



I know a few customers of us who are using it to deploy VMs. I'm not 
aware of anybody maintaining the Terraform module.


Are there any known issues which need to be looked at which caused it to 
be archived?


Wido


Regards,
Abhishek






Re: Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)

2021-01-22 Thread Wido den Hollander




On 21/01/2021 21:31, n...@li.nux.ro wrote:

Well, there you go..

On 2021-01-21 18:50, Simon Weller wrote:

We used to use CLVM a while ago before we shifted to Ceph. Cluster
suite/corosync was a bit of a nightmare, and fencing events caused all
sorts of locking (DLM) problems.
I helped a CloudStack user out a couple of month ago, after they
upgraded and CLVM broke, so I know it's still out there in limited
places.
I wouldn't recommend using it today unless you're very brave and have
the capability of troubleshooting the code yourself.


In addition:


I assume you used CLVM with Corosync?


Yes



My concern with LVM is:

- No thin provisioning (when used with CloudStack)


Indeed, and machine deployment meant a qemu-img convert from qcow2 to LV... 
so lengthy.


Yes, that's a valid argument against using LVM, as you lose a lot of 
features.





- No snapshots (Right?)


Don't remember honestly. If there were, they must have been slow.


- Not very much used


Yep..


OCFS starts to become more appealing. :)

At the time - and probably now as well - OCFS was best supported by 
Oracle Unbreakable Linux (RHEL rebuild), might be worth looking at 
running that instead of Ubuntu or CentOS, hopefully a smoother, bug-free 
experience.




I would need to test that. RHEL6 has been a long time ago and we are now 
at CentOS 8.


In our case we use Ubuntu 20.04 for hypervisors and that means a lot of 
development went into OCFS2 in recent years.


There's also a reason that VMWare uses VMFS with VMDK images on top as 
that provides a lot of flexibility.


This time the use-case is iSCSI, but thinking ahead we'll get new things 
like NVMeOF which provides even lower latency.


Probably best to set up a PoC and see how it works out.
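For comparing candidates in such a PoC, a tiny queue-depth-1 synchronous-write probe can give a first impression before running a proper fio job. This is a rough sketch, not a substitute for a real benchmark; the mount path is whatever the PoC filesystem is mounted at:

```python
import os
import statistics
import time

def qd1_write_latency_us(path: str, iters: int = 200, bs: int = 4096) -> float:
    """Median latency (microseconds) of 4k QD=1 O_DSYNC writes to `path`."""
    buf = os.urandom(bs)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    samples = []
    try:
        for i in range(iters):
            t0 = time.perf_counter()
            os.pwrite(fd, buf, (i % 64) * bs)  # rewrite a small region
            samples.append((time.perf_counter() - t0) * 1e6)
    finally:
        os.close(fd)
    return statistics.median(samples)

# e.g. qd1_write_latency_us("/mnt/ocfs2/latency.probe")
```

Running it on the OCFS2 mount versus a plain local filesystem on the same iSCSI LUN shows how much latency the clustered filesystem itself adds.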

Wido




Lucian


Re: Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)

2021-01-21 Thread Wido den Hollander




On 21/01/2021 11:34, n...@li.nux.ro wrote:

Hi,

I used SharedMountPoint a very long time ago with GlusterFS before 
Cloudstack had native integration.
Don't remember the details, but overall my impression was that it worked 
surprisingly well. Of course, back then there weren't as many features, 
so less stuff to test. I would give it a go.


As a side note, I did also use iSCSI with CLVM with success, it was 
quite fast. I ended up doing it because it was difficult to get OCFS 
running on EL6 and GFS2 had a reputation for being very slow. Marcus has 
a lot of experience with this, might want to get in touch with him:

https://www.slideshare.net/MarcusLSorensen/cloud-stack-clvm


I assume you used CLVM with Corosync?

My concern with LVM is:

- No thin provisioning (when used with CloudStack)
- No snapshots (Right?)
- Not very much used

OCFS2 doesn't have my preference either, but otherwise you have to use 
corosync.


Anybody else otherwise using CLVM?

Wido



HTH,
Lucian

On 2021-01-21 09:32, Wido den Hollander wrote:

Hi,

For a specific use-case I'm looking into the possibility to use iSCSI
in combination with KVM.

Use-case: Low-latency I/O with 4k blocks and QD=1

KVM with CloudStack doesn't support iSCSI natively and the docs and
other blogs refer to using 'SharedMountPoint' with OCFS2 or GFS2:

-
http://docs.cloudstack.apache.org/en/latest/adminguide/storage.html#hypervisor-support-for-primary-storage 


-
https://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/ 



It has been a really long time since I've used OCFS2 and I wanted to
see what experiences from other people are.

How is the stability and performance of OCFS2? It seems that
performance should be rather good, as locks/sec is a problem with
clustered filesystems, but since we only lock the QCOW2 file on boot
of the VM that shouldn't be an issue.

In addition to OCFS2, how mature is 'SharedMountPoint' as a storage
pool with KVM? Does it support all the features NFS supports?

Thanks,

Wido


Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)

2021-01-21 Thread Wido den Hollander

Hi,

For a specific use-case I'm looking into the possibility to use iSCSI in 
combination with KVM.


Use-case: Low-latency I/O with 4k blocks and QD=1

KVM with CloudStack doesn't support iSCSI natively and the docs and 
other blogs refer to using 'SharedMountPoint' with OCFS2 or GFS2:


- 
http://docs.cloudstack.apache.org/en/latest/adminguide/storage.html#hypervisor-support-for-primary-storage
- 
https://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/


It has been a really long time since I've used OCFS2 and I wanted to see 
what experiences from other people are.


How is the stability and performance of OCFS2? It seems that performance 
should be rather good, as locks/sec is a problem with clustered 
filesystems, but since we only lock the QCOW2 file on boot of the VM 
that shouldn't be an issue.


In addition to OCFS2, how mature is 'SharedMountPoint' as a storage pool 
with KVM? Does it support all the features NFS supports?


Thanks,

Wido
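
The 'SharedMountPoint' question above mostly boils down to what libvirt sees: as far as I understand it, CloudStack expects the same filesystem (OCFS2, GFS2, ...) mounted at an identical path on every host and exposes it to libvirt as a plain directory pool. A hedged sketch of the resulting pool definition (pool name and mount point are made up):

```python
import xml.etree.ElementTree as ET

def shared_mountpoint_pool_xml(name, mount_path):
    """Build a libvirt 'dir'-type storage pool definition for a clustered
    filesystem mounted at the same path on every hypervisor."""
    pool = ET.Element("pool", type="dir")
    ET.SubElement(pool, "name").text = name
    target = ET.SubElement(pool, "target")
    ET.SubElement(target, "path").text = mount_path
    return ET.tostring(pool, encoding="unicode")

# Example (hypothetical pool name and mount point):
print(shared_mountpoint_pool_xml("ocfs2-primary", "/mnt/ocfs2"))
```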


Re: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]

2021-01-04 Thread Wido den Hollander



On 1/4/21 2:05 PM, Rohit Yadav wrote:
> On comparing git history, turns out the double escaping was added in this PR:
> https://github.com/apache/cloudstack/pull/4231
> 

The double-escape was only wrapped into an if-statement in that PR, it
was always there it seems.

So we might be facing a different situation here.

Wido

> @Wido Hollander<mailto:w...@pcextreme.nl> can you advise how this was tested 
> and can you advise mitigation of the double escape issue or send a PR?
> 
> 
> Regards.
> 
> 
> From: Rohit Yadav 
> Sent: Monday, January 4, 2021 18:20
> To: users ; dev@cloudstack.apache.org 
> ; Wido Hollander ; Wei Zhou 
> 
> Cc: li jerry ; Gabriel Beims Bräscher 
> ; Wei ZHOU ; Daan Hoogland 
> 
> Subject: Re: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> All,
> 
> I've tested a toy Ubuntu 20.04.1 + Ceph 15.2.7 + ACS 4.15RC3 env, and I think 
> the issue is coming from libvirt or rados java. This was seen on an existing 
> Ceph cluster added prior to upgrade (upgrade to both ACS 4.15RC3 which 
> bundles a new rados-java v0.6.0 dependency) where VM deployment previously 
> used to work but now does not:
> 
> 2021-01-04 12:43:57,151 ERROR [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-4:null) (logid:1594715e) Failed to convert from 
> /mnt/cbd31927-e6e2-3c53-92b2-2e16d74ce36c/a87a71f5--371c-867c-ca96abe58395.qcow2
>  to 
> rbd:meowceph/d9eb3660-23e5-4b37-85ec-b6ad4978bf22:mon_host=192.168.1.10\\:6789:auth_supported=cephx:id=meowceph:key=AQA6U5VfIY1lFRAAFTgZvKhyQrYHyTRrdnTsrg==:rbd_default_format=2:client_mount_timeout=30
>  the error was: qemu-img: 
> rbd:meowceph/d9eb3660-23e5-4b37-85ec-b6ad4978bf22:mon_host=192.168.1.10\\:6789:auth_supported=cephx:id=meowceph:key=AQA6U5VfIY1lFRAAFTgZvKhyQrYHyTRrdnTsrg==:rbd_default_format=2:client_mount_timeout=30:
>  error while converting raw: invalid conf option 6789:auth_supported: No such 
> file or directory
> 
> Env details:
> # ceph --version
> ceph version 15.2.7 (88e41c6c49beb18add4fdb6b4326ca466d931db8) octopus 
> (stable)
> # lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description: Ubuntu 20.04.1 LTS
> Release: 20.04
> Codename: focal
> # virsh version
> Compiled against library: libvirt 6.0.0
> Using library: libvirt 6.0.0
> Using API: QEMU 6.0.0
> Running hypervisor: QEMU 4.2.1
> 
> Since both Ubuntu 20.04 and Ceph 15.2.x are new, we need a regression test of 
> 4.15RC3 with an working/older env say 18.04 (with older and stable 
> qemu/libvirt) + Ceph 13.x/14.x. Can you help - @Wido 
> Hollander<mailto:w...@pcextreme.nl> @Gabriel Beims 
> Bräscher<mailto:gabr...@pcextreme.nl> @Wei 
> Zhou<mailto:w.z...@global.leaseweb.com> and other ceph users/experts? Thanks.
> 
> Regards.
> 
> 
> From: Andrija Panic 
> Sent: Monday, January 4, 2021 18:12
> To: users 
> Cc: dev ; li jerry ; Gabriel 
> Beims Bräscher ; Wei ZHOU ; Daan 
> Hoogland 
> Subject: Re: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> Side question, has anyone (Wido, Garbiel) ever tested Ceph 15.x to work
> with any CloudStack version so far?
> 
> 
> 
> rohit.ya...@shapeblue.com
> www.shapeblue.com<http://www.shapeblue.com>
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
> 
> 
> 
> 
> 
> On Mon, 4 Jan 2021 at 13:13, Wido den Hollander  wrote:
> 
>>
>>
>> On 1/4/21 12:25 PM, li jerry wrote:
>>> Hi Rohit and Wido
>>>
>>>
>>> According to the document description, I re-tested adding RBD primary
>> storage
>>> monitor: 10.100.250.14:6789
>>>
>>> (createStoragePool api :
>>>   url: rbd://hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==@
>> 10.100.250.14:6789/rbd
>>> )
>>> The primary storage is added successfully.
>>>
>>> But, now there are new problems.
>>>
>>> Error when executing template copy from secondary to primary storage
>> (rbd)
>>> (This operation is creating system VM SSVM/CPVM)
>>>
>>> Here is the error message:
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) copyPhysicalDisk: disk
>> size:(356.96 MB) 374304768, virtualsize:(2.44 GB) 262144 format:qcow2
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) The 

Re: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]

2021-01-04 Thread Wido den Hollander



On 1/4/21 1:42 PM, Andrija Panic wrote:
> Side question, has anyone (Wido, Garbiel) ever tested Ceph 15.x to work
> with any CloudStack version so far?
> 

Yes. Running it in production on Ubuntu 18.04 hypervisors and Ceph servers.

This is with CloudStack 4.13.1

Wido

> 
> On Mon, 4 Jan 2021 at 13:13, Wido den Hollander  wrote:
> 
>>
>>
>> On 1/4/21 12:25 PM, li jerry wrote:
>>> Hi Rohit and Wido
>>>
>>>
>>> According to the document description, I re-tested adding RBD primary
>> storage
>>> monitor: 10.100.250.14:6789
>>>
>>> (createStoragePool api :
>>>   url: rbd://hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==@
>> 10.100.250.14:6789/rbd
>>> )
>>> The primary storage is added successfully.
>>>
>>> But, now there are new problems.
>>>
>>> Error when executing template copy from secondary to primary storage
>> (rbd)
>>> (This operation is creating system VM SSVM/CPVM)
>>>
>>> Here is the error message:
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) copyPhysicalDisk: disk
>> size:(356.96 MB) 374304768, virtualsize:(2.44 GB) 262144 format:qcow2
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) The source image is not RBD,
>> but the destination is. We will convert into RBD format 2
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Starting copy from source
>> image
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> to RBD image rbd/40165b83-896c-4693-abe7-9fd96b40ce9a
>>> 2021-01-04 11:20:26,302 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Executing: qemu-img convert
>> -O raw
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>>> 2021-01-04 11:20:26,303 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Executing while with timeout
>> : 1080
>>> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Exit value is 1
>>> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) qemu-img:
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>> error while converting raw: invalid conf option 6789:auth_supported: No
>> such file or directory
>>
>> There seems to be a double-escape here. That might be the culprit.
>>
>> 'mon_host=10.100.250.14\:6789:auth_supported=cephx:id=hyperx'
>>
>> It might be that it needs to be that string.
>>
>> Wido
>>
>>> 2021-01-04 11:20:26,384 ERROR [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Failed to convert from
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> to
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>> the error was: qemu-img:
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>> error while converting raw: invalid conf option 6789:auth_supported: No
>> such file or directory
>>> 2021-01-04 11:20:26,384 INFO  [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Attempting to remove storage
>> pool 466f03d4-9dfe-3af4-a042-33a00dae0e97 from libvirt
>>> 2021-01-04 11:20:26,384 DEBUG [kvm.resource.LibvirtConnection]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Looking for libvirtd
>> connection at: qemu:///system
>>>
>>>
>>> -Original Message-
>>> From: Rohit Yadav 
>>> Sent: January 4, 2021 19:09
>>> To: Wido den Hollander ; dev@cloudstack.apache.org;
>> us...@cloudstack.apache.org; Gabriel Beims Bräscher ;
>> Wei ZHOU ; Daan Hoogland <
>> daan.hoogl...@shapeblue.com>

Re: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]

2021-01-04 Thread Wido den Hollander



On 1/4/21 12:25 PM, li jerry wrote:
> Hi Rohit and Wido
> 
> 
> According to the document description, I re-tested adding RBD primary storage
> monitor: 10.100.250.14:6789
> 
> (createStoragePool api :
>   url: 
> rbd://hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==@10.100.250.14:6789/rbd
> )
> The primary storage is added successfully.
> 
> But, now there are new problems.
> 
> Error when executing template copy from secondary to primary storage (rbd)
> (This operation is creating system VM SSVM/CPVM)
> 
> Here is the error message:
> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-2:null) (logid:587f5b34) copyPhysicalDisk: disk 
> size:(356.96 MB) 374304768, virtualsize:(2.44 GB) 262144 format:qcow2
> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-2:null) (logid:587f5b34) The source image is not RBD, 
> but the destination is. We will convert into RBD format 2
> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Starting copy from source 
> image 
> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>  to RBD image rbd/40165b83-896c-4693-abe7-9fd96b40ce9a
> 2021-01-04 11:20:26,302 DEBUG [utils.script.Script] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Executing: qemu-img convert -O 
> raw 
> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>  
> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
> 2021-01-04 11:20:26,303 DEBUG [utils.script.Script] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Executing while with timeout : 
> 1080
> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Exit value is 1
> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script] 
> (agentRequest-Handler-2:null) (logid:587f5b34) qemu-img: 
> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>  error while converting raw: invalid conf option 6789:auth_supported: No such 
> file or directory

There seems to be a double-escape here. That might be the culprit.

'mon_host=10.100.250.14\:6789:auth_supported=cephx:id=hyperx'

It might be that it needs to be that string.

Wido

> 2021-01-04 11:20:26,384 ERROR [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Failed to convert from 
> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>  to 
> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>  the error was: qemu-img: 
> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>  error while converting raw: invalid conf option 6789:auth_supported: No such 
> file or directory
> 2021-01-04 11:20:26,384 INFO  [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Attempting to remove storage 
> pool 466f03d4-9dfe-3af4-a042-33a00dae0e97 from libvirt
> 2021-01-04 11:20:26,384 DEBUG [kvm.resource.LibvirtConnection] 
> (agentRequest-Handler-2:null) (logid:587f5b34) Looking for libvirtd 
> connection at: qemu:///system
> 
> 
> -Original Message-
> From: Rohit Yadav 
> Sent: January 4, 2021 19:09
> To: Wido den Hollander ; dev@cloudstack.apache.org; 
> us...@cloudstack.apache.org; Gabriel Beims Bräscher ; 
> Wei ZHOU ; Daan Hoogland 
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> Jerry, Wido, Daan - kindly review 
> https://github.com/apache/cloudstack-documentation/pull/175/files
> 
> 
> Regards.
> 
> 
> From: Rohit Yadav 
> Sent: Monday, January 4, 2021 16:25
> To: Wido den Hollander ; dev@cloudstack.apache.org 
> ; us...@cloudstack.apache.org 
> ; Gabriel Beims Bräscher ; 
> Wei ZHOU ; Daan Hoogland 
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> Great thanks for replying Wido. @Daan 
> Hoogland<mailto:daan.hoogl...@shapeblue.com> I think we can continue with RC3 
> vote/tally, I'll send a doc PR.
> 
> 
> Regards.
> 
> 
> From: Wido den Hollander 
> Sent: Monday, January 4, 2021 14:35
> To: dev@cloudstack.
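
To make the escaping issue concrete: in a qemu-img "rbd:..." spec the colon is the option separator, so the colon inside host:port must be escaped exactly once. A small sketch (my own illustration; the pool, image, id and key values are placeholders):

```python
def rbd_qemu_spec(pool, image, mon_host, auth_id, key):
    """Build a qemu-img 'rbd:pool/image:opt=...' spec string.

    The colon inside 'host:port' must be escaped once ('\\:'). Escaping
    it a second time -- e.g. by shell-quoting an already escaped string --
    yields '\\\\:' and qemu-img then misparses '6789' as the start of a
    new option, matching the 'invalid conf option 6789:...' error in the
    log quoted above."""
    host = mon_host.replace(":", r"\:")  # single escape: 10.0.0.1\:6789
    opts = [
        "mon_host=" + host,
        "auth_supported=cephx",
        "id=" + auth_id,
        "key=" + key,
        "rbd_default_format=2",
    ]
    return "rbd:%s/%s:%s" % (pool, image, ":".join(opts))

# Placeholder values for illustration only:
spec = rbd_qemu_spec("rbd", "disk-1", "10.100.250.14:6789", "hyperx", "SECRET==")
print(spec)
```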

Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]

2021-01-04 Thread Wido den Hollander



On 1/4/21 9:50 AM, Rohit Yadav wrote:
> Thanks for replying Jerry - for now, the workaround you can use is to specify 
> the rados monitor port (such as 10.100.250.14:6789) in the UI form when you 
> add a ceph rbd pool. For example, via API the url parameter would look like: 
> "rbd://cephtest:AQC3u_JfhipzGBAACiILEFKembN8gTJsIvu6nQ==@192.168.1.10:6789/cephtest"
> 
> @Daan Hoogland @Gabriel Beims 
> Bräscher @Wido 
> Hollander @Wei ZHOU - 
> the issue seems to be rbd pool fails to be added if a port is not specified - 
> what do you think, should we treat this as blocker or document it (i.e. ask 
> admins to specify rados monitor port)?

I would not say this is a blocker for now. Ceph is moving away from port
6789 as the default and libvirt is already handling this.

This needs to be fixed though and I see that a ticket is open for this.
I'll look into this with Gabriel.

Keep in mind that port number 6789 is not the default for Ceph! Messenger
v2 uses port 3300 and therefore it's best not to specify any port and
have the Ceph client sort this out.

In addition I would always suggest to use a hostname with Ceph and not a
static IP of a monitor. Round Robin DNS pointing to the monitors is the
most reliable solution.

Wido

> 
> 
> Regards.
> 
> 
> From: li jerry 
> Sent: Monday, January 4, 2021 13:10
> To: dev@cloudstack.apache.org ; 
> us...@cloudstack.apache.org 
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> Hi Rohit
> 
> Yes, I didn't specify a port when I added primary storage;
> 
> After it failed, I checked with virsh and found that the pool had been created 
> successfully, and the capacity, allocation and available values of the RBD 
> pool could be displayed.
> So I'm sure it's not the wrong key.
> 
> 
> Please note that:
> In the output pool dump, I see that there is no port target under host
> But the code gets port and converts it to int
> 
> int port = Integer.parseInt(getAttrValue("host", "port", source));
> 
> 
> 
> virsh pool-dumpxml d9b976cb-bcaf-320a-94e6-b337e65dd4f5
> 
> <pool type='rbd'>
>   <name>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</name>
>   <uuid>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</uuid>
>   <capacity>12122373201920</capacity>
>   <allocation>912457728</allocation>
>   <available>11998204379136</available>
>   <source>
>     <host name='10.100.250.14'/>  <!-- no port attribute -->
>     <name>rbd</name>
>   </source>
> </pool>
> 
> -Jerry
> 
> From: Rohit Yadav
> Sent: January 4, 2021 15:32
> To: us...@cloudstack.apache.org; 
> dev@cloudstack.apache.org
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> Hi Jerry,
> 
> Can you see my reply? I'm able to add a RBD primary storage if I specify the 
> port, should we still consider it a blocker then?
> 
> 
> Regards.
> 
> 
> From: li jerry 
> Sent: Monday, January 4, 2021 12:52
> To: us...@cloudstack.apache.org 
> Cc: dev 
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> I'm creating PR to fix this.
> 
> I think we should block, because it will cause the RBD primary storage to be 
> unable to be added.
> 
> -Original Message-
> From: Daan Hoogland 
> Sent: January 4, 2021 14:57
> To: users 
> Cc: dev 
> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
> 
> looks good Jerry,
> Are you making a PR? It seems to me that this would not be a blocker and 
> should go in future releases. Please argue against me if you disagree.
> 
> 
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
> 
> 
> 
> 
> 
> On Mon, Jan 4, 2021 at 6:48 AM li jerry  wrote:
> 
>> - Is this a setup that does work with a prior version?
>> - Did you fresh install or upgrade?
>>
>> No, This is a new deployment, there are no upgrades
>>
>> I have changed two methods. At present, RBD storage is running
>>
>>
>> /cloud-plugin-hypervisor-kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
>> //int port = Integer.parseInt(getAttrValue("host",
>> "port", source));
>> int port = 0;
>> String _xmlPort = getAttrValue("host", "port", source);
>> if ( ! _xmlPort.isEmpty()) {
>> port = Integer.parseInt(_xmlPort);
>> }
>>
>>
>>
>>
>> /cloud-plugin-hypervisor-kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
>> //int port = Integer.parseInt(getAttrValue("host",
>> "port", disk));
>> int port = 0;
>> String _xmlPort = getAttrValue("host", "port", disk);
>> if ( ! _xmlPort.isEmpty()) {
>> port = Integer.parseInt(_xmlPort);
>> }
>>
>>
>> -Jerry
>>
>> From: Daan 
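
Jerry's defensive-parsing fix can be sketched outside Java as well; a hedged Python equivalent (the element names follow libvirt's RBD pool schema, this is not the actual CloudStack code):

```python
import xml.etree.ElementTree as ET

def parse_rbd_host(pool_xml):
    """Return (host, port) from a libvirt RBD pool definition.

    The <host> element may legitimately omit the port attribute (the
    Ceph client then picks 3300/6789 itself), so avoid calling int()
    on a missing value -- the bug the Java fix above works around."""
    host = ET.fromstring(pool_xml).find("./source/host")
    port_attr = host.get("port", "")
    port = int(port_attr) if port_attr else 0  # 0 means "not specified"
    return host.get("name"), port

# Hypothetical pool dumps for illustration:
no_port = "<pool type='rbd'><source><host name='mon.ceph.local'/></source></pool>"
with_port = "<pool type='rbd'><source><host name='10.100.250.14' port='6789'/></source></pool>"
print(parse_rbd_host(no_port), parse_rbd_host(with_port))
```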

Re: [discuss] CentOS announcement

2020-12-28 Thread Wido den Hollander
My opinion at the moment: Wait until the dust settles.

CloudStack should mainly release RPM based packages which work with systemd.

In the end many distros nowadays boil down to:

- DEB or RPM
- systemd

Yes, dependencies might differ in names and versions, but these are the
main differences between Linux distributions.

It could well be that CloudStack works on CentOS Stream, Rocky, SL and
many other RPM-based distributions.

But since this is all very fresh I would wait for now and see how this
turns out.

Wido

On 12/24/20 1:01 PM, Giles Sirett wrote:
> You may have seen this news recently announced by Redhat and the CentOS 
> project.  [1] [2]
> 
> 
> 
> 
> 
> At this early stage, it looks like   CentOS Stream will not provide the 
> stability that we probably expect
> 
> 
> 
> So, what Linux distro should we be targeting for MS and  KVM agents going 
> forward  ?  I guess this is a decision that we have to make as a project.
> 
> 
> 
> CentOS 7 will receive full updates only until the year end (Q4 2020) and 
> maintenance updates will continue until 30 June 2024, so I don't think we 
> need to rush this decision. Also, I think that there will be a lot of 
> emerging competitors to fill the void, like the recent announcement of Rocky. 
> [3] gives a good summary of current choices
> 
> 
> 
>  I guess, actually, the question is: *when* should we aim to make this 
> decision ?  - the advantage of leaving it is we are able to see what others 
> settle on.
> 
> 
> 
> 
> [1] Red Hat makes drastic changes to CentOS, leaves users fuming | 
> TechRadar
> 
> [2] CentOS Project shifts focus to CentOS Stream - 
> Blog.CentOS.org
> [3] About/Product - CentOS Wiki
> [4] Where CentOS Linux users can go from here | 
> ZDNet
> 
> Kind regards
> Giles
> 
> 
> giles.sir...@shapeblue.com 
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
>   
>  
> 


Re: [DISCUSS] Deprecating support for Ubuntu 16.04 in ACS 4.15?

2020-09-01 Thread Wido den Hollander




On 01/09/2020 10:48, Wei ZHOU wrote:

Hi Rohit and community,

I am ok with deprecating ubuntu 16.04 support in CloudStack 4.15.



Me as well. We have until April 2021 before 16.04 goes EOL and I don't 
think it's worth the effort.



It is very important to document the upgrade path from ubuntu
16.04/python2/old cloudstack versions to ubuntu 20.04/python3/cloudstack
4.15. We probably need to modify cloudstack 4.14 to support ubuntu 20.04 as
a bridge, if we do not stop vms.

1.  ubuntu 16.04/python2/old cloudstack versions , upgrade to
2.  ubuntu 16.04/python2/cloudstack 4.14, then upgrade to
3.  ubuntu 20.04/python2/cloudstack 4.14 (and live migrate vms), then
upgrade to
4.  ubuntu 20.04/python3/cloudstack 4.15

any other workaround ?


Not that I can see quickly. But yes, a point release of 4.14 probably 
needs to support 20.04 so people can easily upgrade.


Wido



Kind regards,
Wei

On Sat, 29 Aug 2020 at 11:47, Rohit Yadav  wrote:


All,

Ubuntu 16.04 does not have JRE11 package as well as some packages for
Python3 such as python3-distutils:

https://packages.ubuntu.com/search?suite=default=all=any=python3-distutils=names

The Java 11 JRE requirement was introduced starting the 4.14.0.0 release
and in the release notes workaround for JRE installation was mentioned [0].
With master/4.15 support for Python3 was added for KVM hosts and the
management server but a critical issue [1] was recently identified due to
which cloudstack-agent would fail installation on the older Ubuntu 16.04
due to missing python3-distutil (and other) dependencies.

Ubuntu 16.04 will reach the end of standard support by April 2021 [2][3],
and the next 4.15 release will add support for Ubuntu 20.04 [4][5] - so
should we:

(a) remove Ubuntu 16.04 from our list of supported distribution (for KVM
host and management server) in 4.15?
(b) (if there are users using 16.04) document workaround for
installation/upgrade in the 4.15 release/upgrade notes?

Thoughts, feedback? Thanks.

[0]
http://docs.cloudstack.apache.org/en/4.14.0.0/upgrading/upgrade/upgrade-4.13.html#java-version-requirement

[1] https://github.com/apache/cloudstack/issues/4280

[2] https://wiki.ubuntu.com/Releases

[3]
https://lists.ubuntu.com/archives/ubuntu-announce/2016-April/000207.html

[4] Wei's Ubuntu 20.04 PR https://github.com/apache/cloudstack/pull/4069

[5] Wei's Ubuntu 20.04 PR included with support for CentOS8 PR
https://github.com/apache/cloudstack/pull/4068


Regards.

rohit.ya...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue
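
A quick way to check a host for the missing python3 prerequisite described above (only 'distutils' is probed here; issue #4280 has the full dependency list):

```python
import importlib.util
import sys

def has_module(name):
    """True if the interpreter can import the given top-level module."""
    return importlib.util.find_spec(name) is not None

# On Ubuntu, 'distutils' ships in the separate python3-distutils package;
# its absence is the cloudstack-agent installation failure described above.
print(sys.version_info[:2], "distutils present:", has_module("distutils"))
```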








Re: Heads-up - Debian 10 Systemvmtemplate

2020-06-09 Thread Wido den Hollander
Indeed. Great stuff!

Thanks!

Wido

On 6/9/20 7:23 PM, Andrija Panic wrote:
> Great stuff Rohit, thanks!
> 
> On Tue, 9 Jun 2020 at 05:51, Rohit Yadav  wrote:
> 
>> Typo - "the 4.14 Debian 10 systemvmtemplate will", was supposed to be "the
>> 4.14 Debian 9 systemvmtemplate may".
>>
>>
>> Regards,
>>
>> Rohit Yadav
>>
>> Software Architect, ShapeBlue
>>
>> https://www.shapeblue.com
>>
>> 
>> From: Rohit Yadav 
>> Sent: Tuesday, June 9, 2020 08:50
>> To: dev@cloudstack.apache.org 
>> Subject: Heads-up - Debian 10 Systemvmtemplate
>>
>> All,
>>
>> Debian 9 (https://wiki.debian.org/DebianReleases#Production_Releases) EOL
>> date is in July 2020 (while the LTS EOL date is still far away in 2022 for
>> security updates).
>>
>> We've moved to a Debian 10 based systemvmtemplate on master/4.15-snapshot
>> now which will contain python3 (python2 still default at /usr/bin/python)
>> and brings newer jdk11, strongswan packages, newer kernel and security
>> updates. This will unblock the community for doing any python2-to-python3
>> work for systemvms/VRs.
>>
>> While the 4.14 Debian 10 systemvmtemplate will continue to work with
>> master, developers working on 4.15 milestone PRs should switch to using the
>> 4.15/Debian10 systemvmtemplate in their local dev-test environments, either
>> building it yourself using packer or get it from
>> http://download.cloudstack.org/systemvm/4.15/ (uploaded for testing until
>> 4.15 GA). Trillian has been configured to run tests against the new
>> systemvmtemplate now.
>>
>> Thanks.
>>
>>
>> Regards,
>>
>> Rohit Yadav
>>
>> Software Architect, ShapeBlue
>>
>> https://www.shapeblue.com
>>
>> rohit.ya...@shapeblue.com
>> www.shapeblue.com
>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>> @shapeblue
>>
>>
>>
>>
>>
>>
>>
>>
> 


Re: [ANNOUNCE] 4.14 Freeze date

2020-05-25 Thread Wido den Hollander
Hi,

Great! Let's get working on 4.15.

>From PCextreme we have a few things coming in:

- Redfish HA support: https://github.com/apache/cloudstack/issues/3624
- Python 3 support: https://github.com/apache/cloudstack/issues/3195

And as a community we should probably work on:

- Ubuntu 20.04 support (and drop 16.04)
- CentOS 8 support

Wido

On 5/25/20 11:48 AM, Andrija Panic wrote:
> Hi all,
> 
> code is free/open for merges again.
> 
> Regards,
> Andrija
> 
> On Sat, 14 Mar 2020 at 10:12, Daan Hoogland  wrote:
> 
>> Hi all, I've been working with Andrija and all of you to get as much in as
>> possible. We are now closed for features. Note however there are three
>> issues open for 4.13 which are blocking 4.14 as well. Please ask for help in
>> discussing and finding fixes for those so we can release as soon as
>> possible.
>>
>> On Fri, 13 Mar 2020, 23:32 Andrija Panic,  wrote:
>>
>>> Hi all,
>>>
>>> as already discussed, please find a general code freeze taking place in
>> 30
>>> minutes (23.59h CET).
>>>
>>> Thanks,
>>> Andrija
>>>
>>> On Fri, 6 Mar 2020, 20:03 Andrija Panic, 
>> wrote:
>>>
 I do like Friday the 13th - means nobody would commit anything 

 On Fri, 6 Mar 2020 at 19:29, Daan Hoogland 
 wrote:

> And leave master open for disaster all of Friday the 13th? You hero!
>> 
>
> On Fri, 6 Mar 2020, 18:37 Giles Sirett, 
> wrote:
>
>> In case there is anybody superstitious, could we make it Saturday
>> 14th
>> at 00:01 ? 
>>
>>
>> Kind regards
>> Giles
>>
>> giles.sir...@shapeblue.com
>> www.shapeblue.com
>> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
>> @shapeblue
>>
>>
>>
>>
>> -Original Message-
>> From: Andrija Panic 
>> Sent: 06 March 2020 13:59
>> To: dev 
>> Subject: [ANNOUNCE] 4.14 Freeze date
>>
>> Hi all,
>>
>> I believe we are nearly there, so I would like to propose/announce a
>> master/4.14 code freeze date for a week from now, on Friday the 13th
>> @
>> 23.59h
>>
>> After this time, no more features and general fixes will be allowed,
>>> and
>> only critical and blocker issues will be fixed after the freeze.
>>
>> Please let me know if you have any questions or concerns.
>>
>> Thank you,
>>
>> --
>>
>> Andrija Panić
>>
>

 --

 Andrija Panić

>>>
>>
> 
> 


Re: [VOTE] Apache CloudStack 4.13.1.0 RC2

2020-04-23 Thread Wido den Hollander



On 4/23/20 11:04 AM, Daan Hoogland wrote:
never mind, found it: 3959

Do you mean: https://github.com/apache/cloudstack/pull/3959 ?

Wido

> 
> On Thu, Apr 23, 2020 at 11:03 AM Daan Hoogland 
> wrote:
> 
>> Jerry,
>> you say you have a PR but point to the issue. Please give me the PR so I
>> can review. It sounds like a blocker indeed and we need to fix this.
>>
>> On Thu, Apr 23, 2020 at 10:45 AM li jerry  wrote:
>>
>>> Hi Andrija Panic,
>>> we encountered this bug in production
>>> https://github.com/apache/cloudstack/issues/3958
>>>
>>>
>>> We fixed it through this PR
>>> https://github.com/apache/cloudstack/issues/3958
>>>
>>> I am not clear how to apply for PR merge, can you merge this patch to
>>> 4.13.1?
>>>
>>> This bug can be reproduced 100%
>>>
>>> I don't want any users in the community to lose data because of this bug.
>>>
>>> Thank you!
>>>
>>> -Original Message-
>>> From: Andrija Panic 
>>> Sent: April 20, 2020 20:14
>>> To: dev ; users 
>>> Subject: [VOTE] Apache CloudStack 4.13.1.0 RC2
>>>
>>> Hi All,
>>>
>>> I've created a 4.13.1.0 release (RC2), with the following artefacts up
>>> for testing and a vote:
>>>
>>> Git Branch and Commit SH:
>>>
>>> https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.13.1.0-RC20200420T1036
>>> Commit: b2596a2be494b39ee5730ece7fdd73166612ef85
>>>
>>> Source release (checksums and signatures are available at the same
>>> location):
>>> https://dist.apache.org/repos/dist/dev/cloudstack/4.13.1.0/
>>>
>>> PGP release keys (signed using 3DC01AE8):
>>> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>>>
>>> The vote will be open until Friday 24th April 2020 (72h)
>>>
>>> For sanity in tallying the vote, can PMC members please be sure to
>>> indicate "(binding)" with their vote?
>>>
>>> [ ] +1 approve
>>> [ ] +0 no opinion
>>> [ ] -1 disapprove (and reason why)
>>>
>>> Additional information:
>>>
>>> For users' convenience, I've built packages from
>>> b2596a2be494b39ee5730ece7fdd73166612ef85 and published RC2 repository
>>> here (CentOS 6, CentOS 7 and Debian/generic):
>>> http://packages.shapeblue.com/testing/41310rc2/
>>> and here (Ubuntu 18.04 specific, thanks to Gabriel):
>>>
>>> https://download.cloudstack.org/testing/4.13.1.0-RC20200420T1036/ubuntu/bionic/
>>>
>>> There are no changes to the systemVM templates vs. 4.13.0.0 - please use
>>> the official ones (systemvmtemplate-4.11.3-*) from
>>> http://download.cloudstack.org/systemvm/4.11/
>>>
>>> If upgrading from < 4.13.0.0, please use the existing 4.13.0.0 Upgrade
>>> guide here:
>>> http://docs.cloudstack.apache.org/en/latest/upgrading/index.html
>>> If upgrading from 4.13.0.0, simply upgrade the packages as usual.
>>>
>>> Regards,
>>> Andrija
>>>
>>
>>
>> --
>> Daan
>>
> 
> 


Re: "[DISCUSS] 4.13.1 at the same time with 4.14"

2020-03-14 Thread Wido den Hollander



On 3/13/20 1:05 PM, Andrija Panic wrote:
> Hi all,
> 
> in order to avoid potential issues around upgrading 4.13.1.0 to 4.14.0.0 in
> future (if we release 4.13.1.0 after 4.14.0.0), I would like to see what
> the community thinks about us doing both releases at the same time?
> 
> The reason for asking is not just "yeah, great, let's do it" but the reason
> is that community would have to do double the TESTING effort for both
> releases during the voting process, at pretty much the same time.

I think that's a good idea. Mainly the database upgrades are painful and
this makes it a lot easier.

We then have to merge a couple of things for 4.13.1 then :-)

We should prevent dragging the releases even further.

Wido

> 
> Opinion?
> 
> Thanks,
> 


Re: [DISCUSS]/[PROPOSAL] draft PRs

2020-02-14 Thread Wido den Hollander



On 2/14/20 1:03 PM, Daan Hoogland wrote:
> devs, I thought I had already sent a mail about this but i cannot find it.
> I'm sure i had mentioned it somewhere (probably on github).
> this is a follow up on [1] and hopefully you'll agree, a slight improvement.
> 
> here it comes:
> 
> At the moment we are creating PRs with a [WIP] or [DO NOT MERGE] tag in the
> title. This title stands the chance of being merged once we agree the PR is
> ready for merge. It also clutters the title.
> 
> Github has introduced a nice feature a while ago; draft PR. When creating a
> PR you can opt not to open it for merge but as draft. Choose a button left
> of the "Create pull request" button, marked "Create draft PR". It will be a
> full PR with all CI and discussion possibilities open. The only difference
> is the merge button being disabled. One will then have to make/mark it
> "ready for merge" before it *can* be merged.
> 

That sounds like a good idea! Would be a +1 from me :-)

Wido

> [1]
> https://lists.apache.org/thread.html/f3f0988907f85bfc2cfcb0fbcde831037f9b1cb017e94bc68932%40%3Cdev.cloudstack.apache.org%3E
> please shoot any comments you may have back at me,
> thanks
> 


Re: New Joiner

2020-02-07 Thread Wido den Hollander
Welcome Hoang!

Wido

On 2/7/20 3:13 AM, Hoang Nguyen wrote:
> Hello Everyone,
> 
> My name is Hoang and I'm working for Ewerk. It is my great pleasure to have a 
> chance to join Cloudstack Primate project. It has been a wonderful time for 
> me since last December when I started my first task.
> 
> I would look forward to be a part of this team for a long time to go. And I 
> hope to have your kind help.
> Thank you so much.
> 
> Best regards,
> Hoang
> __
> Hoang Nguyen
> Frontend Developer
> 
> EWERK DIGITAL GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 99
> F +49 341 42649 - 98
> www.ewerk.com
> 
> Geschäftsführer:
> Dr. Erik Wende, Hendrik Schubert, Frank Richter
> Registergericht: Leipzig HRB 9065
> 
> Zertifiziert nach:
> ISO/IEC 27001:2013
> DIN EN ISO 9001:2015
> DIN ISO/IEC 2-1:2011 
> 
> 


Re: Feature request: support for Proxmox hypervisors

2019-12-05 Thread Wido den Hollander



On 12/5/19 10:08 AM, Nux! wrote:
> Hi Wido,
> 
> When you put it like that it does seem like a pointless exercise, but
> the devil is in the details.
> 
> - very up to date kvm/kernel some may seek for various features
> 

You can do the same with KVM right now. Just install a different kernel
on Ubuntu or CentOS and that works.

> - good CEPH integration in their UI/api(?), no command line needed, and
> if you buy the product they will actually support it (quite affordable)
> 

The Ceph integration is only to talk to Ceph and that's all handled by
Qemu+libvirt+librbd+librados.

> - excellent ZFS implementation, the kind you can't have with Cloudstack
> (they don't host qcow files on zfs, but use zvols). Good performance,
> especially with compression. Proxmox replication is a very nice thing
> for DR etc. Very efficient backups.
> 

zvols are nice, yes! But that would be a matter of integrating this into
libvirt. Because then we could also use LVs for storing data and not
only zvols.

> - Clustering & HA - their implementation is superior to Cloudstack's, it
> could simply be offloaded to the Proxmox engine, as it is done with
> Xenserver and Vmware. VM HA for KVM in Cloudstack is currently broken as
> far as I am concerned, with VMs failing to be started automatically on
> available HVs once one goes down.
> 
> The list goes on with finer points that I don't remember right now. It's
> an excellent product, if you haven't tried it yet, give it a go.

I have used Proxmox many times where people wanted a Hypervisor, nothing
more.

It serves its own purpose imho, not to be controlled by CloudStack. If
you want a no-nonsense hypervisor which just runs VMs: Go with Proxmox!

If you want something which manages networks, templates, firewalling,
ips: Go with CloudStack!

I don't see much benefit in talking to Proxmox with CloudStack because
in the end you are just talking to KVM.

Wido

> 
> Regards,
> Lucian
> 
> 
> 
> ---
> Sent from the Delta quadrant using Borg technology!
> 
> On 2019-12-05 07:46, Wido den Hollander wrote:
>> On 12/4/19 3:20 PM, Nux! wrote:
>>> Hi,
>>>
>>> As a user of Proxmox, I am quite happy with how they got certain things
>>> right. ZFS, CEPH, HA, efficient backups etc are all very nice features.
>>> It's giving ESXi a run for its money in certain circles.
>>> It'd be great if we could orchestrate Proxmox hypervisors/clusters with
>>> Cloudstack.
>>>
>>> Happy to contribute some time and testing, but obviously this requires
>>> an actual developer's attention. Any takers?
>>>
>>
>> But it's still KVM in the end. So what would the true benefit be over
>> KVM which we have right now?
>>
>> It supports Ceph, ZFS (just set it up) and HA.
>>
>> So I don't see what we are lacking or what Proxmox integration would
>> bring us?
>>
>> Wido
>>
>>> Regards,
>>> Lucian
>>>


Re: Feature request: support for Proxmox hypervisors

2019-12-04 Thread Wido den Hollander



On 12/4/19 3:20 PM, Nux! wrote:
> Hi,
> 
> As a user of Proxmox, I am quite happy with how they got certain things
> right. ZFS, CEPH, HA, efficient backups etc are all very nice features.
> It's giving ESXi a run for its money in certain circles.
> It'd be great if we could orchestrate Proxmox hypervisors/clusters with
> Cloudstack.
> 
> Happy to contribute some time and testing, but obviously this requires
> an actual developer's attention. Any takers?
> 

But it's still KVM in the end. So what would the true benefit be over
KVM which we have right now?

It supports Ceph, ZFS (just set it up) and HA.

So I don't see what we are lacking or what Proxmox integration would
bring us?

Wido

> Regards,
> Lucian
> 


Re: [VOTE] Primate as modern UI for CloudStack

2019-10-07 Thread Wido den Hollander
+1 !!

On 10/7/19 1:31 PM, Rohit Yadav wrote:
> All,
> 
> The feedback and response has been positive on the proposal to use Primate as 
> the modern UI for CloudStack [1] [2]. Thank you all.
> 
> I'm starting this vote (to):
> 
>   *   Accept Primate codebase [3] as a project under Apache CloudStack project
>   *   Create and host a new repository (cloudstack-primate) and follow Github 
> based development workflow (issues, pull requests etc) as we do with 
> CloudStack
>   *   Given this is a new project, to encourage cadence until its feature 
> completeness the merge criteria is proposed as:
>  *   Manual testing against each PR and/or with screenshots from the 
> author or testing contributor, integration with Travis is possible once we 
> get JS/UI tests
>  *   At least 1 LGTM from any of the active contributors, we'll move this 
> to 2 LGTMs when the codebase reaches feature parity wrt the existing/old 
> CloudStack UI
>  *   Squash and merge PRs
>   *   Accept the proposed timeline [1][2] (subject to achievement of goals 
> wrt Primate technical release and GA)
>  *   the first technical preview targeted with the winter 2019 LTS 
> release (~Q1 2020) and release to serve a deprecation notice wrt the older UI
>  *   define a release approach before winter LTS
>  *   stop taking feature FRs for old/existing UI after winter 2019 LTS 
> release, work on upgrade path/documentation from old UI to Primate
>  *   the first Primate GA targeted wrt summer LTS 2020 (~H2 2020), but 
> still ship old UI with a final deprecation notice
>  *   old UI codebase removed from codebase in winter 2020 LTS release
> 
> The vote will be up for the next two weeks to give enough time for PMC and 
> the community to gather consensus and still have room for questions, feedback 
> and discussions. The results to be shared on/after 21st October 2019.
> 
> For sanity in tallying the vote, can PMC members please be sure to indicate 
> "(binding)" with their vote?
> 
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
> 
> [1] Primate Proposal:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Proposal%3A+CloudStack+Primate+UI
> 
> [2] Email thread reference:
> https://markmail.org/message/z6fuvw4regig7aqb
> 
> [3] Primate repo current location: https://github.com/shapeblue/primate
> 
> 
> Regards,
> 
> Rohit Yadav
> 
> Software Architect, ShapeBlue
> 
> https://www.shapeblue.com
> 
> rohit.ya...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
> 


Re: 4.13 rbd snapshot delete failed

2019-09-08 Thread Wido den Hollander



On 9/8/19 5:26 AM, Andrija Panic wrote:
> Maaany release ago, deleting Ceph volume snap, was also only deleting it in
> DB, so the RBD performance become terrible with many tens of (i. e. Hourly)
> snapshots. I'll try to verify this on 4.13 myself, but Wido and the guys
> will know better...

I pinged Gabriel and he's looking into it. He'll get back to it.

Wido

> 
> I
> 
> On Sat, Sep 7, 2019, 08:34 li jerry  wrote:
> 
>> I found it had nothing to do with  storage.cleanup.delay and
>> storage.cleanup.interval.
>>
>>
>>
>> The reason is that when DeleteSnapshotCmd is executed, because the RBD
>> snapshot was never copied to secondary storage, only the database record
>> is changed; primary storage is never touched to delete the
>> snapshot.
>>
>>
>>
>>
>>
>> Log===
>>
>>
>>
>> 2019-09-07 23:27:00,118 DEBUG [c.c.a.ApiServlet]
>> (qtp504527234-17:ctx-2e407b61) (logid:445cbea8) ===START===  192.168.254.3
>> -- GET
>> command=deleteSnapshot=0b50eb7e-4f42-4de7-96c2-1fae137c8c9f=json&_=1567869534480
>>
>> 2019-09-07 23:27:00,139 DEBUG [c.c.a.ApiServer]
>> (qtp504527234-17:ctx-2e407b61 ctx-679fd276) (logid:445cbea8) CIDRs from
>> which account 'Acct[2f96c108-9408-11e9-a820-0200582b001a-admin]' is allowed
>> to perform API calls: 0.0.0.0/0,::/0
>>
>> 2019-09-07 23:27:00,204 DEBUG [c.c.a.ApiServer]
>> (qtp504527234-17:ctx-2e407b61 ctx-679fd276) (logid:445cbea8) Retrieved
>> cmdEventType from job info: SNAPSHOT.DELETE
>>
>> 2019-09-07 23:27:00,217 INFO  [o.a.c.f.j.i.AsyncJobMonitor]
>> (API-Job-Executor-2:ctx-f0843047 job-1378) (logid:c34a368a) Add job-1378
>> into job monitoring
>>
>> 2019-09-07 23:27:00,219 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
>> (qtp504527234-17:ctx-2e407b61 ctx-679fd276) (logid:445cbea8) submit async
>> job-1378, details: AsyncJobVO {id:1378, userId: 2, accountId: 2,
>> instanceType: Snapshot, instanceId: 13, cmd:
>> org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd, cmdInfo:
>> {"response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"1237","id":"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f","ctxDetails":"{\"interface
>> com.cloud.storage.Snapshot\":\"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f\"}","ctxAccountId":"2","uuid":"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f","cmdEventType":"SNAPSHOT.DELETE","_":"1567869534480"},
>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
>> result: null, initMsid: 2200502468634, completeMsid: null, lastUpdated:
>> null, lastPolled: null, created: null, removed: null}
>>
>> 2019-09-07 23:27:00,220 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
>> (API-Job-Executor-2:ctx-f0843047 job-1378) (logid:1cee5097) Executing
>> AsyncJobVO {id:1378, userId: 2, accountId: 2, instanceType: Snapshot,
>> instanceId: 13, cmd:
>> org.apache.cloudstack.api.command.user.snapshot.DeleteSnapshotCmd, cmdInfo:
>> {"response":"json","ctxUserId":"2","httpmethod":"GET","ctxStartEventId":"1237","id":"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f","ctxDetails":"{\"interface
>> com.cloud.storage.Snapshot\":\"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f\"}","ctxAccountId":"2","uuid":"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f","cmdEventType":"SNAPSHOT.DELETE","_":"1567869534480"},
>> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
>> result: null, initMsid: 2200502468634, completeMsid: null, lastUpdated:
>> null, lastPolled: null, created: null, removed: null}
>>
>> 2019-09-07 23:27:00,221 DEBUG [c.c.a.ApiServlet]
>> (qtp504527234-17:ctx-2e407b61 ctx-679fd276) (logid:445cbea8) ===END===
>> 192.168.254.3 -- GET
>> command=deleteSnapshot=0b50eb7e-4f42-4de7-96c2-1fae137c8c9f=json&_=1567869534480
>>
>> 2019-09-07 23:27:00,305 DEBUG [c.c.a.m.ClusteredAgentAttache]
>> (AgentManager-Handler-12:null) (logid:) Seq 1-8660140608456756853: Routing
>> from 2199066247173
>>
>> 2019-09-07 23:27:00,305 DEBUG [o.a.c.s.s.XenserverSnapshotStrategy]
>> (API-Job-Executor-2:ctx-f0843047 job-1378 ctx-f50e25a4) (logid:1cee5097)
>> Can't find snapshot on backup storage, delete it in db
>>
>>
>>
>> -Jerry
>>
>>
>>
>> 
>> From: Andrija Panic 
>> Sent: Saturday, September 7, 2019 1:07:19 AM
>> To: users 
>> Cc: dev@cloudstack.apache.org 
>> Subject: Re: 4.13 rbd snapshot delete failed
>>
>> storage.cleanup.delay
>> storage.cleanup.interval
>>
>> put both to 60 (seconds) and wait for up to 2min - should be deleted just
>> fine...
>>
>> cheers
>>
>> On Fri, 6 Sep 2019 at 18:52, li jerry  wrote:
>>
>>> Hello All
>>>
>>> When I tested ACS 4.13 KVM + CEPH snapshot, I found that snapshots could
>>> be created and rolled back (using API alone), but deletion could not be
>>> completed.
>>>
>>>
>>>
>>> After executing the deletion API, the snapshot will disappear from the
>>> list Snapshots, but the snapshot on CEPH RBD will not be deleted (rbd
>> snap
>>> list rbd/ac510428-5d09-4e86-9d34-9dfab3715b7c)
>>>
>>>
>>>
>>> Is there any way we can completely delete the snapshot?
>>>
>>> -Jerry
>>>
>>>
>>
>> --
>>
>> Andrija Panić
>>
> 
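An aside for anyone grepping management-server logs like the excerpt above: the `cmdInfo` payload is JSON embedded in the middle of the line, so plain grep can't pull individual fields out. Below is a minimal sketch (the sample line is condensed from the log above, not a verbatim copy) using `json.JSONDecoder.raw_decode`, which stops at the closing brace of the object, so escaped braces inside string values such as `ctxDetails` don't confuse it:

```python
import json

# Condensed sample of an AsyncJobVO log line (fields trimmed for brevity).
line = (
    'submit async job-1378, details: AsyncJobVO {id:1378, userId: 2, cmdInfo: '
    '{"response":"json","ctxUserId":"2","cmdEventType":"SNAPSHOT.DELETE",'
    '"uuid":"0b50eb7e-4f42-4de7-96c2-1fae137c8c9f"}, cmdVersion: 0}'
)

def cmd_info(log_line):
    """Decode the JSON object that follows 'cmdInfo: ' in a job log line."""
    start = log_line.index('cmdInfo: ') + len('cmdInfo: ')
    obj, _end = json.JSONDecoder().raw_decode(log_line, start)
    return obj

info = cmd_info(line)
print(info["cmdEventType"], info["uuid"])
```

This is only a log-reading convenience for operators; the field names (`cmdEventType`, `uuid`) are taken from the log excerpt above.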


Re: 4.13 rbd snapshot delete failed

2019-09-07 Thread Wido den Hollander



On 9/6/19 11:34 PM, Andrija Panic wrote:
> One question though... for me (4.13, Nautilus 14.2, test env) - it fails to
> revert back to snapshot with below error
> 

Ok, that's weird.

Gabriel worked on this code recently, maybe he can take a look. I'll
ping him

Wido

> Which CEPH and QEMU/libvirt/os versions are you using?
> 
> 
> Error:
> 2019-09-06 21:27:16,094 ERROR
> [resource.wrapper.LibvirtRevertSnapshotCommandWrapper]
> (agentRequest-Handler-3:null) (logid:9593f65a) Failed to connect to revert
> snapshot due to RBD exception:
> com.ceph.rbd.RbdException: Failed to open image 2
> at com.ceph.rbd.Rbd.open(Rbd.java:243)
> at com.ceph.rbd.Rbd.open(Rbd.java:226)
> at
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRevertSnapshotCommandWrapper.execute(LibvirtRevertSnapshotCommandWrapper.java:92)
> at
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRevertSnapshotCommandWrapper.execute(LibvirtRevertSnapshotCommandWrapper.java:49)
> at
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1476)
> at com.cloud.agent.Agent.processRequest(Agent.java:640)
> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1053)
> at com.cloud.utils.nio.Task.call(Task.java:83)
> at com.cloud.utils.nio.Task.call(Task.java:29)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 
> On Fri, 6 Sep 2019 at 19:07, Andrija Panic  wrote:
> 
>> storage.cleanup.delay
>> storage.cleanup.interval
>>
>> put both to 60 (seconds) and wait for up to 2min - should be deleted just
>> fine...
>>
>> cheers
>>
>> On Fri, 6 Sep 2019 at 18:52, li jerry  wrote:
>>
>>> Hello All
>>>
>>> When I tested ACS 4.13 KVM + CEPH snapshot, I found that snapshots could
>>> be created and rolled back (using API alone), but deletion could not be
>>> completed.
>>>
>>>
>>>
>>> After executing the deletion API, the snapshot will disappear from the
>>> list Snapshots, but the snapshot on CEPH RBD will not be deleted (rbd snap
>>> list rbd/ac510428-5d09-4e86-9d34-9dfab3715b7c)
>>>
>>>
>>>
>>> Is there any way we can completely delete the snapshot?
>>>
>>> -Jerry
>>>
>>>
>>
>> --
>>
>> Andrija Panić
>>
> 
> 


Re: kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-06 Thread Wido den Hollander



On 9/5/19 6:03 PM, Riepl, Gregor (SWISS TXT) wrote:
> 
>>> Wido, makes sense that log4j and logrotate would conflict. Log4j
>>> has its
>>> own rotate functionality.
>>
> 
> Note that a few CS log files are not generated by log4j.
> You still need external log rotation if you don't want them to fill up
> your disk.
> 
> I had this issue with access.log, for example.
> 
> I haven't encountered your "binary" log file issue, however.
> 

log4j and logrotate combined seem to be the issue. The agent.log should
not be rotated by logrotate; once it isn't, the issue is gone.

These files, however, should still be rotated by logrotate:

- security_group.log
- resizevolume.log

I'll check on a PR to fix this.

Create issue: https://github.com/apache/cloudstack/issues/3585

Wido
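For the curious, here is one plausible mechanism behind the binary prefix, sketched as a self-contained simulation: if logrotate truncates the file in place (copytruncate-style rotation — an assumption, the actual cloudstack-agent logrotate config may rotate differently) while log4j keeps its file descriptor open, the JVM continues writing at its old offset and the filesystem fills the gap with NUL bytes. That yields exactly the observed symptom: binary data at the start of the current file, clean rotated copies.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "agent.log")

# The "log4j" side: keeps the file open and tracks its own write offset.
log = open(path, "wb")
log.write(b"2019-09-05 08:00:00 DEBUG [agent] first line\n")
log.flush()

# The "logrotate copytruncate" side: copy the log away, then truncate in place.
with open(path, "rb") as src, open(path + ".1", "wb") as dst:
    dst.write(src.read())
os.truncate(path, 0)

# The writer never noticed the truncation and keeps writing at its old
# offset, so the start of the file becomes a NUL-filled hole.
log.write(b"2019-09-05 08:05:00 DEBUG [agent] second line\n")
log.flush()
log.close()

data = open(path, "rb").read()
print(data.startswith(b"\x00"))  # True: grep now calls the file "binary"
```

The rotated copy (`agent.log.1`) stays clean, matching the observation that only the current agent.log is affected.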


Re: kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-05 Thread Wido den Hollander



On 9/5/19 11:13 AM, Daan Hoogland wrote:
> Wido, makes sense that log4j and logrotate would conflict. Log4j has its
> own rotate functionality.

It does indeed :-)

> I didn't understand before but the binary data is always the beginning of
> the file? Is always nulls?

I found this out yesterday: it's always at the beginning of the file. Not
sure what it is, though.

On two servers I removed /etc/logrotate.d/cloudstack-agent and I'll
report back in a few days.

Wido
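The "always at the beginning, possibly nulls" hypothesis above is easy to check with a few lines of Python. A hedged sketch (the helper name is made up for illustration) that counts leading NUL bytes, so you can see what kind of binary data grep is tripping over:

```python
import tempfile

def leading_nul_count(path, probe=4096):
    """Count NUL bytes at the start of a file; 0 means the log starts clean."""
    with open(path, "rb") as f:
        head = f.read(probe)
    return len(head) - len(head.lstrip(b"\x00"))

# Demo on a throwaway file shaped like the symptom: NULs, then normal log text.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 128 + b"2019-09-05 11:13:00 DEBUG normal line\n")
    demo = f.name

print(leading_nul_count(demo))  # 128
```

On a real hypervisor you would point it at /var/log/cloudstack/agent/agent.log instead of the demo file.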

> 
> Op do 5 sep. 2019 09:18 schreef Wei ZHOU :
> 
>> Hi Wido,
>>
>> I saw this issue in a 4.11.3 platform today.
>>
>> It seems to be caused by file
>>
>> https://github.com/apache/cloudstack/blob/master/agent/conf/cloudstack-agent.logrotate.in
>>
>> Maybe the file /etc/logrotate.d/cloudstack-agent is not needed ( in Ubuntu
>> ?).
>>
>> -Wei
>>
>>
>>
>> On Thu, 5 Sep 2019 at 08:08, Wido den Hollander  wrote:
>>
>>>
>>>
>>> On 9/3/19 2:55 PM, Daan Hoogland wrote:
>>>> On Tue, Sep 3, 2019 at 2:22 PM Wido den Hollander 
>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 9/3/19 9:57 AM, Daan Hoogland wrote:
>>>>>> Can you find/look at the line before in the log. It is probably the
>> one
>>>>>> containing the hindering data. Or otherwise it *might* be a clue
>> where
>>> in
>>>>>> the flow it happens.
>>>>>>
>>>>>
>>>>> Do you have any idea what the easiest way might be?
>>>>>
>>>> In short, no, but.. in totally skewed order of alleged quickness
>>>> 1: I'm afraid it is a line by line analysis. If you are lucky the line
>>>> start with date and class name is not obscured and there is a newline
>>>> entered in the log. That would give you the exact location. when in bad
>>>> luck the binary data contains control chars that erase to beginning of
>>> line
>>>> or so and the best you can find is the previous and next lines.
>>>> 3: code analysis; My first thought is that some binary data is read
>> from
>>> a
>>>> file or a socket and logged as is. but starting this with code analysis
>>> is
>>>> not the quickest way to get to the culprit I imagine.
>>>> 2: Other than line by line log analysis and if you can reproduce the
>>> issue,
>>>> you could try zooming in by playing around with log levels for
>> different
>>>> classes.
>>>>
>>>
>>> I checked, but on all hypervisors I checked the "agent.log" contains
>>> binary data, but all the rotated (.gz) files do not.
>>>
>>> The binary data is at the beginning of the file(s). So it seems like it
>>> is logrotate and maybe a combination of log4j causing this?
>>>
>>> I'm not sure yet, but that seems to be the case.
>>>
>>> Wido
>>>
>>>>
>>>>>
>>>>> I checked agent.log.1.gz and after decompressing it, that file is not
>>>>> binary, text only.
>>>>>
>>>> so much for reproducibility. Hope it was not a buffer overflow hack
>>> attempt.
>>>>
>>>>
>>>>>
>>>>> The binary data only seems to be present in the current file.
>>>>>
>>>>> Wido
>>>>>
>>>>>> On Tue, Sep 3, 2019 at 8:22 AM Wido den Hollander 
>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 9/2/19 10:18 PM, Wei ZHOU wrote:
>>>>>>>> Hi Wido,
>>>>>>>>
>>>>>>>> I had similar issue (not agent.log). It is probably caused by one
>> or
>>>>>>>> few lines with special characters.
>>>>>>>
>>>>>>> And I'm trying to figure out what causes it :-)
>>>>>>>
>>>>>>>> "grep -a" should work.
>>>>>>>
>>>>>>> I know, but other tools which analyze the logfile might also have
>>>>>>> trouble reading the file due to binary characters.
>>>>>>>
>>>>>>> Wido
>>>>>>>
>>>>>>>>
>>>>>>>> -Wei
>>>>>>>>
>>>>>>>> On Mon, 2 Sep 2019 at 19:35, Wido den Hollander 
>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I've seen this on multiple occasions with Ubuntu 18.04 (and maybe
>>>>>>>>> 16.04?) hypervisors where according to 'grep' the agent.log is a
>>>>> binary
>>>>>>>>> file:
>>>>>>>>>
>>>>>>>>> root@n06:~# grep security_group
>> /var/log/cloudstack/agent/agent.log
>>>>>>>>> Binary file /var/log/cloudstack/agent/agent.log matches
>>>>>>>>> root@n06:~#
>>>>>>>>>
>>>>>>>>> If I open the file with 'less' I indeed see binary data.
>>>>>>>>>
>>>>>>>>> Tailing the file works just fine.
>>>>>>>>>
>>>>>>>>> Does anybody know where this is coming from? What in the
>> CloudStack
>>>>>>>>> agent causes it to write binary data to the logfile?
>>>>>>>>>
>>>>>>>>> Wido
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
> 


Re: kvm: /var/log/cloudstack/agent/agent.log is a binary file

2019-09-05 Thread Wido den Hollander



On 9/3/19 2:55 PM, Daan Hoogland wrote:
> On Tue, Sep 3, 2019 at 2:22 PM Wido den Hollander  wrote:
> 
>>
>>
>> On 9/3/19 9:57 AM, Daan Hoogland wrote:
>>> Can you find/look at the line before in the log. It is probably the one
>>> containing the hindering data. Or otherwise it *might* be a clue where in
>>> the flow it happens.
>>>
>>
>> Do you have any idea what the easiest way might be?
>>
> In short, no, but.. in totally skewed order of alleged quickness
> 1: I'm afraid it is a line by line analysis. If you are lucky the line
> start with date and class name is not obscured and there is a newline
> entered in the log. That would give you the exact location. when in bad
> luck the binary data contains control chars that erase to beginning of line
> or so and the best you can find is the previous and next lines.
> 3: code analysis; My first thought is that some binary data is read from a
> file or a socket and logged as is. but starting this with code analysis is
> not the quickest way to get to the culprit I imagine.
> 2: Other than line by line log analysis and if you can reproduce the issue,
> you could try zooming in by playing around with log levels for different
> classes.
> 

I checked: on all hypervisors the "agent.log" contains
binary data, but all the rotated (.gz) files do not.

The binary data is at the beginning of the file(s). So it seems like it
is logrotate and maybe a combination of log4j causing this?

I'm not sure yet, but that seems to be the case.

Wido

> 
>>
>> I checked agent.log.1.gz and after decompressing it, that file is not
>> binary, text only.
>>
> so much for reproducibility. Hope it was not a buffer overflow hack attempt.
> 
> 
>>
>> The binary data only seems to be present in the current file.
>>
>> Wido
>>
>>> On Tue, Sep 3, 2019 at 8:22 AM Wido den Hollander 
>> wrote:
>>>
>>>>
>>>>
>>>> On 9/2/19 10:18 PM, Wei ZHOU wrote:
>>>>> Hi Wido,
>>>>>
>>>>> I had similar issue (not agent.log). It is probably caused by one or
>>>>> few lines with special characters.
>>>>
>>>> And I'm trying to figure out what causes it :-)
>>>>
>>>>> "grep -a" should work.
>>>>
>>>> I know, but other tools which analyze the logfile might also have
>>>> trouble reading the file due to binary characters.
>>>>
>>>> Wido
>>>>
>>>>>
>>>>> -Wei
>>>>>
>>>>> On Mon, 2 Sep 2019 at 19:35, Wido den Hollander 
>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I've seen this on multiple occasions with Ubuntu 18.04 (and maybe
>>>>>> 16.04?) hypervisors where according to 'grep' the agent.log is a
>> binary
>>>>>> file:
>>>>>>
>>>>>> root@n06:~# grep security_group /var/log/cloudstack/agent/agent.log
>>>>>> Binary file /var/log/cloudstack/agent/agent.log matches
>>>>>> root@n06:~#
>>>>>>
>>>>>> If I open the file with 'less' I indeed see binary data.
>>>>>>
>>>>>> Tailing the file works just fine.
>>>>>>
>>>>>> Does anybody know where this is coming from? What in the CloudStack
>>>>>> agent causes it to write binary data to the logfile?
>>>>>>
>>>>>> Wido
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
> 
> 

