Re: Future of Tungsten Fabric Integration with ACS

2024-01-08 Thread Simon Weller
Rahul,

Keep your eyes peeled for a fork of Tungsten.
Announcements will be coming out soon, with a large number of organizations
backing the project.

-Si







On Mon, Jan 8, 2024 at 5:04 PM Rahul Rai  wrote:

> Dear Dev Community,
>
> Hope this email finds you well and wishing you a great and prosperous new
> year ahead.
>
> Just now noticed that the Linux Foundation will archive the Tungsten
> project in the next few months. Curious to know if the upcoming ACS version is
> going to offer integration with another open source SDN for
> micro-segmentation? The project's website currently reads: "Thank you for
> your interest in Tungsten Fabric. The community has decided to shut down
> the project and will sunset this website on August 1, 2024."
>
> Thanks,
> Rahul
>


Re: [DISCUSS] New Design for the Apache CloudStack Website

2023-09-01 Thread Simon Weller
It looks great, Ivet. Nice work!

On Wed, Aug 30, 2023 at 8:35 AM Ivet Petrova 
wrote:

> Hi All,
>
> I uploaded the design here:
> https://drive.google.com/file/d/1pef7xWWMPYAA5UkbS_XMUxrz53KB7J5t/view?usp=sharing
>
>
> Kind regards,
>
>
>
>
> On 30 Aug 2023, at 16:31, Giles Sirett <giles.sir...@shapeblue.com> wrote:
>
> Hi Ivet – thanks for pushing forward with this – excited to review a new
> design.
>
> On that note, I can't see a link in your mail ☹
>
> Kind Regards
> Giles
>
>
> Giles Sirett
> CEO
> giles.sir...@shapeblue.com
> www.shapeblue.com
>
>
>
>
> From: Ivet Petrova <ivet.petr...@shapeblue.com>
> Sent: Wednesday, August 30, 2023 10:14 AM
> To: us...@cloudstack.apache.org;
> Marketing <market...@shapeblue.com>
> Cc: dev <dev@cloudstack.apache.org>
> Subject: [DISCUSS] New Design for the Apache CloudStack Website
>
> Hello,
>
> I would like to start a discussion on the design of the Apache CloudStack
> Website and to propose a new design for it.
>
> As we all know, the website has not been changed for years in terms of
> design and information. The biggest issue we know we have is that the
> website is not showing the full potential of CloudStack. In addition,
> during discussions with many community members, I have noted the following
> issues:
> - the existing website design is old-school
> - the current homepage does not present enough information to show
> CloudStack's strengths
> - the current website design is missing images from the ACS UI and cannot
> give users a feel for the product
> - the website has issues on mobile devices
> - we lack graphics and diagrams
> - some important information, like how to download, is not very visible
>
> I collected a lot of feedback during the last months and want to propose a
> new, up-to-date design for the website, which is attached below. The new
> design will bring:
> - improved UX
> - a look and feel corresponding to CloudStack's capabilities and
> strengths
> - more graphical elements and diagrams
> - better branding
> - more important information, easily accessible for potential users
>
> I hope you will like the new design – all feedback welcome. Once we have
> the design finalised, we will use Rohit’s earlier proposal of a CMS,
> which is easy to edit.
>
> [inline image: the proposed website design]
>
> Kind regards,
>
>


Re: new PMC member: Daniel Salvador

2023-08-25 Thread Simon Weller
It's great to have you as part of the PMC, Daniel!

On Fri, Aug 25, 2023 at 5:56 AM Daan Hoogland  wrote:

> The Project Management Committee (PMC) for Apache CloudStack
> has invited Daniel Salvador to become a PMC member and we are pleased
> to announce that they have accepted.
>
> Daniel has contributed in the past and has shown effort to make the
> project run smoothly.
>
> Please join me in congratulating Daniel.
>


Re: new committer: Sina Kashipazha

2023-08-25 Thread Simon Weller
Congrats Sina!

On Fri, Aug 25, 2023 at 5:53 AM Daan Hoogland  wrote:

> The Project Management Committee (PMC) for Apache CloudStack
>
> has invited Sina Kashipazha to become a committer and we are pleased
> to announce that they have accepted.
>
> Sina has been active as a contributor in several ways: code, testing,
> talks, documentation.
>
> Please join me in welcoming Sina.
>


Re: new committer: John Bampton

2023-08-25 Thread Simon Weller
Congrats John!

On Fri, Aug 25, 2023 at 5:51 AM Daan Hoogland 
wrote:

> The Project Management Committee (PMC) for Apache CloudStack
> has invited John Bampton to become a committer and we are pleased
> to announce that they have accepted.
>
> John is mostly active on CI and build-specific issues.
>
>
> Please join me in welcoming John to the project.
>
> --
> Daan
>


Re: Cloudstack DB HA, do you use db.ha.enabled?

2023-07-20 Thread Simon Weller
Lucian,

Check to see whether the mysql-ha jar is being loaded. There's a separate
mysql-ha package that needs to be installed.

Is this Ubuntu or RPM-based? I'm not sure whether the default Ubuntu builds
include the extra package. I believe the ShapeBlue build does, though.

-Si
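
For reference, a minimal sketch of the relevant db.properties entries (replica
addresses below are placeholders; the property names are per the CloudStack
install guide's DB HA section and should be verified against your version's
db.properties):

  db.ha.enabled=true
  db.cloud.slaves=10.0.0.2,10.0.0.3
  db.usage.slaves=10.0.0.2,10.0.0.3
  # optional retry tuning; double-check these names for your release
  db.cloud.secondsBeforeRetryMaster=3600
  db.cloud.queriesBeforeRetryMaster=5000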


On Thu, Jul 20, 2023, 3:03 PM Nux  wrote:

> Cheers Daniel,
>
> Can you share any other db.ha parameters you may have tuned?
> For me it didn't work out of the box as you described.
>
> Thanks
>
> On 2023-07-20 14:04, Daniel Salvador wrote:
> > Hello Nux,
> >
> > Normally I set up three nodes with MariaDB and a Galera cluster; then,
> > in the "db.properties" file I mark "db.ha.enabled" as true, and I define
> > one of the nodes as main and the others as replicas. When the main node
> > goes down, one of the replicas takes over, and so on.
> >
> > The current properties we have in "db.properties" regarding DB HA are
> > hard-coded and only address some MySQL properties, which is not the
> > perfect scenario for MariaDB HA. However, it provides a minimum DB HA
> > setup. Other contributors and I are already working on a flexible
> > solution to address other MySQL properties, and MariaDB properties as
> > well.
> >
> > Best regards,
> > Daniel Salvador (gutoveronezi)
> >
> > On Thu, Jul 20, 2023 at 7:46 AM Nux  wrote:
> >
> >> Hello,
> >>
> >> As per the subject, how do you make your DB layer HA and do you use
> >> the
> >> db.ha.enabled feature/setting in the Cloudstack management server
> >> db.properties file?
> >>
> >> Cheers
> >>
>


[ANNOUNCE] New VP of Apache CloudStack - Rohit Yadav

2023-03-29 Thread Simon Weller
All,

I'm very pleased to announce that the ASF board has accepted the nomination
of Rohit Yadav to be the new VP of the Apache CloudStack project.

It has been my pleasure to serve as the VP over the past year, and I'd like
to thank the community for all of the support.

Rohit, congratulations and I wish you the best as you take on this new role.

-Simon


Re: Daan Hoogland - New ASF Member

2023-03-24 Thread Simon Weller
Wow, that's fantastic. Congratulations Daan!

-Si

On Fri, Mar 24, 2023 at 4:29 AM Paul Angus  wrote:

>
>
> It is my pleasure to announce that Daan Hoogland has been elected to
> become a member of the ASF.
>
> The ASF would like to recognize both his practical involvement and the way
> in which he has interacted with others in and around the ASF.
>
>
>
> Congratulations  Daan.
>
>
>
>
>
>
>
>
>
> Kind regards
>
>
>
> Paul Angus
>
>
>
>


Re: [4.18][RC3][VOTE] new release candidate 4.18.0.0-RC20230311T0935

2023-03-14 Thread Simon Weller
Installed RC3 from packages on AlmaLinux 8.

Tested -
KVM advanced zone
Isolated network and VPC network
VM Lifecycle Management

+1 (binding)
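
For anyone else testing: a typical way to verify the source artifact before
voting (filenames assumed from the standard ASF layout at the dist URL below)
is:

  gpg --import KEYS
  gpg --verify apache-cloudstack-4.18.0.0-src.tar.bz2.asc \
      apache-cloudstack-4.18.0.0-src.tar.bz2
  sha512sum -c apache-cloudstack-4.18.0.0-src.tar.bz2.sha512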

On Sat, Mar 11, 2023 at 2:46 AM Daan Hoogland  wrote:

> Hi All,
>
> I've created a new 4.18 release candidate, with the following
> artifacts up for a vote:
>
> Git Branch and Commit SH:
> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.18.0.0-RC20230311T0935
> Commit: 0574087284f8b646ebc41617cfd70b3a31e3ae94
>
> Source release (checksums and signatures are available at the same
> location): https://dist.apache.org/repos/dist/dev/cloudstack/4.18.0.0
>
> PGP release keys (signed using
> 256ABDFB8D89EDE07540BE6ACEF9E802B86FEED4):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> Vote will be open for 72 hours (aiming for late Wednesday).
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> As usual lately, I will work on getting some convenience packages
> ready and will follow up on this thread to link you to those.
>
> enjoy
>


Re: Adding storage plugin

2023-02-24 Thread Simon Weller
Hi Andre,

Welcome to the community.

As far as I know, the Storage Subsystem 2.0 documentation is still pretty accurate.
We've had a number of new storage plugins over the last 2 major releases,
including LinStor, StorPool and support for Dell VxFlex, so there are quite
a few people on the list who will be able to provide pointers.

Here's the URL to the existing primary storage plugins for reference -
https://github.com/apache/cloudstack/tree/main/plugins/storage/volume

Perhaps you could share with us what you would like to integrate, as well
as the target hypervisors, and I'm sure someone will be able to point you
in the right direction.

-Si
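
As a rough sketch, an existing plugin's on-disk layout looks like this (the
module name here is hypothetical; the interface names come from the storage
subsystem API):

  plugins/storage/volume/mystorage/
    pom.xml                        -- Maven module under cloudstack-plugins
    .../MyDataStoreProvider.java   -- implements DataStoreProvider (entry point)
    .../MyDataStoreLifeCycle.java  -- implements DataStoreLifeCycle (pool add/delete)
    .../MyDataStoreDriver.java     -- implements PrimaryDataStoreDriver (volume/snapshot ops)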



On Fri, Feb 24, 2023 at 8:46 AM Andrei Perapiolkin <
andrei.perepiol...@open-e.com> wrote:

> Hi CloudStack community,
>
>
> I would like to integrate block storage solution into CloudStack.
>
> There are lots of guides and documentation on the topic of
> CloudStack development, and storage plugins in particular:
>
> 1. Minimal requirements:
> https://docs.cloudstack.apache.org/en/4.17.2.0/developersguide/plugins.html
>
> 2. Code contribution requirements:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Code+Submission+and+Review+Guidelines
>
> 3. Storage subsystem architecture:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+subsystem+2.0
>
> 4. Testing system with Marvin:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Test+Categories+and+Quick-test+in+Marvin
> and Simulator
>
> Unfortunately, some of them are "old". So I'm curious if there is a
> dedicated group/team or individual responsible for the storage
> subsystem that I can contact for clarification or advice regarding
> plugin development and testing.
>
>
> Thanks for your time reading through this email,
>
> Best regards,
>
> Andre Perepiolkin
>


[ANNOUNCE] Ivet Petrova has joined the PMC

2023-02-14 Thread Simon Weller
Hi everyone,

It gives me great pleasure to announce that Ivet has been invited to join
the
CloudStack PMC and she has accepted.

Please join me in congratulating Ivet!

-Simon (on behalf of the CloudStack PMC)


Re: Volunteers Needed - CCC

2022-10-03 Thread Simon Weller
Hey Ivet,

I'll be onsite and I'd be happy to help out.

-Si

From: Ivet Petrova 
Sent: Monday, October 3, 2022 11:39 AM
To: Apache CloudStack Marketing ; 
us...@cloudstack.apache.org ; 
dev@cloudstack.apache.org 
Subject: Volunteers Needed - CCC


Hi all,

As you know, we are organising the CloudStack Collaboration Conference in November in 
Sofia, Bulgaria as a hybrid event.
As you can imagine, this requires enormous efforts - dealing with speakers, 
sponsors, facility management, branding, attracting attendees, online ads, 
website and much more.
The work on-site during the event will also be significant, and I need help.
At this stage I am searching for volunteers to handle the registration 
on-site.
More details:
The registration will happen only on the first event day and will take approx. 
1 hour. During this period, the task is to find the badges for the people 
arriving and to give them out. We will have printed badges, which everyone 
can wear.
I think we will need 2 people for this, and I will be very happy if I can find 
some volunteers :)

Kind regards,






Re: [DISCUSS] release and release manager 4.18

2022-09-20 Thread Simon Weller
+1 to Daan :-)

From: Rohit Yadav 
Sent: Tuesday, September 20, 2022 9:43 AM
To: us...@cloudstack.apache.org ; dev 

Subject: Re: [DISCUSS] release and release manager 4.18


+1, thanks for volunteering as 4.18 RM, Daan.

Regards.

From: Daan Hoogland 
Sent: Tuesday, September 20, 2022 7:53:53 PM
To: dev ; users 
Subject: [DISCUSS] release and release manager 4.18

Hello everybody,
As Abhishek is wrapping up the 4.17.1 release, I'd like to start the
discussion about 4.18. Nobody has volunteered as the next release manager
yet and I'd like to give the community the opportunity to prevent it
becoming me. My idea is that we set a feature freeze date and
then a target date for the first RC. I'm not proposing one now, as mine would
probably be too tight a deadline for new features, and there are some big
ones out there that we'd want in.
Please let us all know what you have in the pipeline and what you want in
and what time you need, so we can discuss if it is feasible for 4.18.
Also please come forward if you want this task of triaging issues and PRs
and tagging them to go in the release.
thanks

--
Daan





Re: Introducing Bart and Ruben

2022-08-08 Thread Simon Weller
Welcome guys!

From: Wido den Hollander 
Sent: Monday, August 8, 2022 7:43 AM
To: dev@cloudstack.apache.org 
Cc: ruben.bo...@cldin.eu ; bart.mey...@cldin.eu 

Subject: Introducing Bart and Ruben


Hi devs,

Recently Bart and Ruben (CC'd) joined CLDIN (formerly PCextreme) as DevOps
engineers; they will be working on CloudStack and starting to contribute to
the project.

In the coming months they'll be finding their way around the community
and code, and you should start seeing Pull Requests from them coming in.

We'll try to make the CCC in Bulgaria later this year!

Wido


Re: [ANNOUNCE] Next PMC Chair & VP Apache CloudStack Project - Simon Weller

2022-03-17 Thread Simon Weller
Thanks everyone! Special thanks to Gabriel for his dedication to the role over 
the past year.

-Si

From: Will Stevens 
Sent: Thursday, March 17, 2022 12:36 PM
To: dev@cloudstack.apache.org 
Subject: Re: [ANNOUNCE] Next PMC Chair & VP Apache CloudStack Project - Simon 
Weller

Congrats Simon.  Well deserved for sure.  You have been a force of
dedication for the community and I am really happy to see you taking on
this role.

Cheers,

*Will Stevens*
Chief Technology Officer
t 514.447.3456 x1301



On Thu, Mar 17, 2022 at 12:30 PM Giles Sirett 
wrote:

> Many Congratulations Simon
>
> Gabriel - thank you so much for your hard work over the last year - you'll
> be a difficult act to follow!!
>
> Kind Regards
> Giles
>
>
>
>
> -Original Message-
> From: Nicolas Vazquez 
> Sent: 17 March 2022 12:45
> To: us...@cloudstack.apache.org; dev 
> Subject: Re: [ANNOUNCE] Next PMC Chair & VP Apache CloudStack Project -
> Simon Weller
>
> Congratulations Simon! And thanks Gabriel for your great work!
>
> Regards,
> Nicolas Vazquez
> 
> From: Gabriel Beims Bräscher 
> Sent: Thursday, March 17, 2022 6:55:26 AM
> To: users ; dev 
> Subject: [ANNOUNCE] Next PMC Chair & VP Apache CloudStack Project - Simon
> Weller
>
> Hello, all CloudStack community!
>
> It gives me great pleasure to announce that the ASF board last night
> accepted our PMC's nomination of Simon Weller as the next PMC Chair / VP of
> the Apache CloudStack project.
>
> I would like to thank everyone for the support I've received over the past
> year.
> It was a great honor being the PMC Chair of this amazing project/community!
>
> To Simon, my sincere congratulations, and I wish you success in the new
> role!
> Very well deserved!
>
> Please join me in congratulating Simon, the CloudStack PMC Chair / VP.
>
> Best Regards,
> Gabriel Bräscher.
>
>
>
>
>


Re: [VOTE] CentOS 7 KVM binaries

2022-03-04 Thread Simon Weller
We've previously added features that are only supported on newer versions of 
the qemu-kvm packages, such as the IOP and bandwidth limits.
I'd have to go hunting here, but I think it was done by having a check on the 
qemu-kvm version and, if it was later than a certain release, exposing the 
feature.

@Nathan Johnson might be able to jump in 
here, as he may remember how that was implemented.

-Si
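
A quick way to confirm which build a CentOS 7 host is actually running
(standard paths assumed):

  rpm -q qemu-kvm qemu-kvm-ev
  /usr/libexec/qemu-kvm --version
  virsh version    # also shows the hypervisor version libvirt sees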



From: Nicolas Vazquez 
Sent: Friday, March 4, 2022 8:28 AM
To: dev@cloudstack.apache.org 
Subject: Re: [VOTE] CentOS 7 KVM binaries

Thanks Daniel,

+1 (binding)

Regards,
Nicolas Vazquez


From: Daniel Augusto Veronezi Salvador 
Date: Friday, 4 March 2022 at 10:55
To: dev@cloudstack.apache.org 
Subject: Re: [VOTE] CentOS 7 KVM binaries
Apologies, Wei, I used the wrong term; it will be a recommendation, not
a requirement.

On 04/03/2022 10:44, Wei ZHOU wrote:
> Hi Daniel,
>
> `qemu-kvm-ev` should be a recommendation, not a requirement on CentOS 7.
> If it is a requirement, I am -1 on it as well.
>
> -Wei
>
> On Fri, 4 Mar 2022 at 13:58, Daniel Augusto Veronezi Salvador <
> dvsalvador...@gmail.com> wrote:
>
>> Thanks, Rohit, for the reply.
>>
>> Although it emerged from the PR, the discussion[¹] and voting were not
>> about the PR per se, but about considering adding *qemu-kvm-ev* as a
>> requirement for deploying ACS + KVM + CentOS 7. Apologies if it wasn't
>> very clear.
>>
>> Solutions for PR 5297[²] and others that may come can be discussed in
>> their history line. The changes regarding this voting will be limited to
>> the documentation:
>> - On CloudStack's Installation Guide > Host KVM Installation[³], we add
>> a section guiding users to install the qemu-kvm-ev binaries, if they are
>> using CentOS 7.
>> - The packages that we will guide users to install will be the latest
>> provided by the official CentOS site[⁴] (the current latest version is
>> '2.12.0-44.1.el7_8.1.x86_64').
>>
>> With that said and based on last Rohit's reply, could I ask Nicolas and
>> Paul to review, make clear or withdraw their votes?
>>
>> Best Regards,
>> Daniel Salvador
>>
>>
>> [¹] https://lists.apache.org/thread/z7s0774n72v4o9dnl140wvm030bxovjd
>> [²] https://github.com/apache/cloudstack/pull/5297
>> [³]
>>
>> http://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html
>> [⁴] http://mirror.centos.org/centos-7/7/virt/x86_64/kvm-common/Packages/q/
>>
>> On 04/03/2022 07:02, Rohit Yadav wrote:
>>> Thanks for sharing your findings Nicolas. I'll base my vote on that.
>>>
>>> Considering the vote is based on:
>>> "- On CloudStack's Installation Guide > Host KVM Installation[²], we add
>> a
>>> section guiding users to install the qemu-kvm-ev binaries, if they are
>>> using CentOS 7.
>>> - The packages that we will guide users to install will be the latest
>>> provided by the official CentOS site[³] (the current latest version is
>>> '2.12.0-44.1.el7_8.1.x86_64')."
>>>
>>> +1 (binding) to doc changes limited to CentOS7 on using qemu-kvm-ev.
>> It's worth noting that qemu-kvm-ev (the -ev releases) isn't available for
>> el8 anymore? This may need double-checking, as I couldn't find them on a first
>> attempt searching via yum/dnf [1]. We need at some point to revisit the
>> QIG, and add QIGs for Ubuntu 20.04/22.04 (soon) and EL8 (rocky/alma/rhel8).
>>> I would see the PR as a separate thing than this vote and I don't see
>> why Daniel's PR can't be accepted if he addresses the comments, with the
>> following suggested guidance and checklist:
>>> *   The vote is just for docs and shouldn't be used to justify the
>> removal or regression of stock qemu support in el7/centos7.
>>> *   Please fix the regression found in Nicolas's testing and address
>> other outstanding comments on the PR.
>>> *   Regression test via smoketests in addition to any manual tests
>> can ensure it works for following environments:
>>>*   CentOS7 with stock qemu
>>>*   CentOS7 with qemu-kvm-ev
>>>*   CentOS8 with stock qemu
>>>*   Test Ubuntu 18.04/20.04 and OpenSUSE kvm env
>>>*   Misc: Since the PR changes are around storage/snapshot, we
>> should try to cover for local storage, nfs/qcow2, ceph/rbd (and
>> scaleio/raw) etc.
>>> [1] Here's testing on a rocky8 instance:
>>>
>>> [root@5c0edd1a88c0 /]# yum install centos-release-qemu-ev
>>> Last metadata expiration check: 0:03:47 ago on Fri Mar  4 09:42:27 2022.
>>> No match for argument: centos-release-qemu-ev
>>> Error: Unable to find a match: centos-release-qemu-ev
>>> [root@5c0edd1a88c0 /]# yum search qemu-ev
>>> Last metadata expiration check: 0:05:09 ago on Fri Mar  4 09:42:27 2022.
>>> No matches found.
>>> [root@5c0edd1a88c0 /]# yum install qemu-img
>>> Last metadata expiration check: 0:03:52 ago on Fri Mar  4 09:42:27 2022.
>>> Dependencies resolved.
>>>
>> 

Re: [Discussion] CentOS 7 KVM binaries

2022-02-18 Thread Simon Weller
Daniel,

We've used qemu-kvm-ev in production for years. A number of the enhancements 
we've pushed into Cloudstack have required it. I think you'll find that most 
cloud providers based on Centos (or Alma/Rocky) are also using it.

-Si

From: Daniel Salvador 
Sent: Friday, February 18, 2022 9:53 AM
To: dev@cloudstack.apache.org 
Subject: [Discussion] CentOS 7 KVM binaries

Hi all, hope you are doing fine.

The following discussion emerged from PR #5297[¹].

It is a known fact that, regarding KVM functionalities, CentOS7's default
QEMU binary is quite limited. This is due to the removal of some features
from the default KVM binary in CentOS (like VM volume live migration,
memory/CPU hotplug/hot-unplug, live disk snapshot, and so on); more
information can be found in a CentOS forum thread[²].

From my point of view, such limitations make it unfeasible to build a cloud
with CentOS and the default QEMU binary, as operators lose a lot of
useful/important operations or have to go through workarounds, which cause
VM disruption (e.g. having to stop a VM to migrate a volume between
different storage pools, which triggers secondary storage usage). There is
an alternative binary, "qemu-kvm-ev", which supports more features than the
default one. Probably, most people using KVM with CentOS are using the "ev"
binary (I might be wrong, though that seems to be the case when looking at
the users' list).

PR #5297[¹] ran into one of the limitations of CentOS7's default QEMU binary
(live disk snapshot). The easiest solution (and, IMHO, the best option)
is to guide users to upgrade the CentOS7 QEMU binary to "qemu-kvm-ev".

Further, it is important to mention that in our experience, it is not
possible to run a highly available cloud environment with CentOS7 and the
default binaries. In a cloud environment with thousands of VMs, sooner or
later the need to hotplug (increase) CPU/RAM, migrate volumes across
different storage pool types (such as iSCSI <> NFS), or some other type of
operation appears, and operators/final customers do not want service
disruption. Our customers, for instance, never shut down VMs for these
kinds of operations, and that is only possible because they are all using
KVM with Ubuntu now.

Moreover, CentOS7 is getting close to its EOL. Therefore, we do not think
that CloudStack should limit its features due to a dying operating system
that presents very limited features by default.

With that said, it would be interesting if devs/users that use CentOS7 could
share their experiences with "qemu-kvm-ev" in this thread, so we can
decide which way to go. Or, for users that only use the default binary,
whether they are satisfied with it.

If almost no one is relying on the default CentOS7 binaries, we could state
in the documentation that, when using CentOS7, people must use the
"ev" binary. This would let us evolve ACS more freely and avoid
headaches with workarounds for a limited operating system when there are
alternatives out there.


[¹] https://github.com/apache/cloudstack/pull/5297
[²] https://forums.centos.org/viewtopic.php?t=65618


Re: DDoS protection

2021-11-01 Thread Simon Weller
As Hean suggests, the best form of DDoS protection is from your upstream 
internet provider(s), as hopefully they have a lot more capacity to be able to 
absorb a large attack.
If you're looking at protecting web properties, then you might want to take a 
look at services from companies like Akamai or Cloudflare.

From: Hean Seng 
Sent: Saturday, October 30, 2021 1:42 PM
To: us...@cloudstack.apache.org 
Cc: dev 
Subject: Re: DDoS protection

Hi

I suppose this is not related to CloudStack. You need a network provider
that supports DDoS protection.

On Sun, Oct 31, 2021 at 2:40 AM Ranjit Jadhav 
wrote:

> Hello,
>
> What are options available for DDoS protection which we can integrate with
> Cloudstack?
>
> Thank you,
> Ranjit
>


--
Regards,
Hean Seng


Re: 2FA

2021-08-10 Thread Simon Weller
Rakesh,

ACS does support SAML2, and in order to deploy 2FA/MFA, you could integrate it 
with an Identity and Access Management system such as Keycloak 
(https://www.keycloak.org/).

-Si
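
As a minimal sketch of that approach (the global setting names below are from
memory of the SAML2 plugin and the Keycloak realm URL is a placeholder; check
both against the respective docs):

  saml2.enabled = true
  saml2.idp.metadata.url = https://keycloak.example.com/auth/realms/cloud/protocol/saml/descriptor

Keycloak would then enforce the OTP/2FA step at login before asserting the
SAML response back to ACS.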


From: Rakesh Venkatesh 
Sent: Tuesday, August 10, 2021 4:34 AM
To: users ; dev 
Subject: 2FA

Hello

Has anyone thought about 2FA or about how to implement it in CloudStack?
Looks like this would be a good addition to enhance security. I have some
idea about implementing it in the backend but don't have much idea on how to
display the QR code in the UI or the other functionality needed for the
frontend part.

--
Thanks and regards
Rakesh


Re: [PROPOSE] RM for CloudStack Kubernetes Provider v1.0

2021-07-15 Thread Simon Weller
+1



From: David Jumani 
Sent: Thursday, July 15, 2021 1:31 AM
To: users ; dev@cloudstack.apache.org 

Subject: [PROPOSE] RM for CloudStack Kubernetes Provider v1.0

Hi,

I'd like to put myself forward as the release manager for CloudStack Kubernetes 
Provider v1.0.

This will be the first release of the CloudStack Kubernetes Provider, which 
facilitates Kubernetes deployments on CloudStack.
It allows Kubernetes to dynamically allocate IP addresses and the respective 
networking rules on CloudStack to ensure seamless TCP, UDP and TCP-Proxy 
LoadBalancer deployments on Kubernetes.

It was initially the CloudStack provider in the Kubernetes tree, which was later 
extracted to allow for pluggable providers.
A lot of work and effort has gone into developing it, and we are looking 
forward to its grand debut.

The list of open issues triaged for the v1.0 milestone can be found at 
https://github.com/apache/cloudstack-kubernetes-provider/milestone/1
If you encounter any issues, please do report them at 
https://github.com/apache/cloudstack-kubernetes-provider/issues

Looking forward to your support

Thanks,
David





Re: Reintroduction

2021-05-05 Thread Simon Weller
Great news...congrats Wei!


From: Wei ZHOU 
Sent: Wednesday, May 5, 2021 7:29 AM
To: dev@cloudstack.apache.org 
Subject: Reintroduction

Hi all,

I would like to reintroduce myself. I am Wei Zhou (github account:
weizhouapache). I joined the community in 2012. I became a CloudStack
committer in 2013 and a PMC member in 2017.

I have recently joined Shapeblue as a Software Architect. I am looking
forward to learning more from you and contributing more to the community.

Kind regards,
Wei


Re: [DISCUSS] Using qemu-kvm vs qemu-kvm-ev with CloudStack

2021-04-09 Thread Simon Weller
Hi Rohit,

We've been using -ev exclusively for a few years now. Our main reason was to 
support features we upstreamed around KVM IOP limits a couple of years 
back.
Aside from one challenge related to the patchviasocket integration that was 
addressed on the ACS side a while ago, it has worked very well and has been very 
stable.

-Si


From: Rohit Yadav 
Sent: Friday, April 9, 2021 2:26 AM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Subject: [DISCUSS] Using qemu-kvm vs qemu-kvm-ev with CloudStack

All,

We've recently seen some tests around live VM migration with storage failing on 
CentOS7, which is addressed in this PR:
https://github.com/apache/cloudstack/pull/4801

Some users have added on the original issue ticket that it works with 
qemu-kvm-ev on CentOS:
https://github.com/apache/cloudstack/issues/4757#issuecomment-812595973

I also see many other IaaS platforms, notably oVirt, using qemu-kvm-ev. Is there 
any interest in, and argument for, testing and updating our docs to advise users 
to use qemu-kvm-ev on CentOS? Are there any CloudStack users already using it 
who want to share their experience with it?

The installation steps don't require configuring any 3rd-party repository 
manually and are usually done with:

yum install centos-release-qemu-ev
yum install qemu-kvm-ev

Additional references:
https://lists.centos.org/pipermail/centos-virt/2015-October/004717.html (what 
is qemu-kvm vs qemu-kvm-ev)
https://wiki.centos.org/SpecialInterestGroup/Virtualization (the SIG that is 
behind the qemu-kvm-ev repository)

Thanks and regards.

rohit.ya...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: [DISCUSS] Renaming default git branch name from 'master' to 'main'

2021-03-25 Thread Simon Weller
+1
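
For reference, the mechanical part of the rename described in the guide linked
below boils down to something like this (repository settings and CI references
still need updating by hand):

  git branch -m master main
  git push -u origin main
  # after switching the default branch in the repository settings:
  git push origin --delete master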

From: Gabriel Beims Bräscher 
Sent: Thursday, March 25, 2021 6:56 AM
To: dev 
Subject: Re: [DISCUSS] Renaming default git branch name from 'master' to 'main'

I am +1 on migrating from 'master' to 'main' branch.

We will need to update some scripts, documentations, and the releasing
process.

Regards,
Gabriel.

On Thu, Mar 25, 2021, 08:10  wrote:

> Personally, I'm +1 on this change.
>
>
>
>
> Kind regards
>
> Paul Angus
>
> -Original Message-
> From: Suresh Anaparti 
> Sent: Thursday, March 25, 2021 9:23 AM
> To: dev@cloudstack.apache.org
> Subject: Re: [DISCUSS] Renaming default git branch name from 'master' to
> 'main'
>
> Yes Wei, all the integrated systems / scripts (using the CloudStack git
> repositories) have to replace the default branch name to 'main' wherever
> applicable.
>
> Regard
> Suresh
>
> On 25/03/21, 2:44 PM, "Wei ZHOU"  wrote:
>
> Will it impact jenkins/travis/trillian and prs ?
>
> -Wei
>
> On Thu, 25 Mar 2021 at 10:00, Suresh Anaparti <
> suresh.anapa...@shapeblue.com>
> wrote:
>
> > Hi all,
> >
> > The default git branch name 'master' was replaced with 'main' on GitHub
> > [2][3] and in the wider Git community [4]. For those that have missed the
> > broader discussion in society, the term 'master' is offensive to some
> > people [1]. This is widely considered insensitive if not illegal, hence
> > the proposed change.
> >
> > It seems fitting that CloudStack would follow this example of
> > inclusiveness. For this, the project would rename the default branch
> > name of all the repositories to 'main'. In addition, all the applicable
> > integration points (e.g. Travis-CI, etc.) using these repositories have
> > to replace the branch name 'master' with 'main'.
> >
> > The sample steps to rename and replace the default branch to 'main' are
> > here:
> >
> https://faun.pub/git-step-by-step-renaming-a-master-branch-to-main-16390ca7577b
> >
> > I would like to hear your thoughts and suggestions on this.
> >
> >
> > [1]
> >
> https://www.theserverside.com/feature/Why-GitHub-renamed-its-master-branch-to-main
> > [2]
> >
> https://www.techrepublic.com/article/github-to-replace-master-with-main-starting-in-october-what-developers-need-to-know
> > [3] https://github.com/github/renaming
> > [4]
> https://about.gitlab.com/blog/2021/03/10/new-git-default-branch-name/
> >
> >
> > Regards,
> > Suresh
> >
> >
> > suresh.anapa...@shapeblue.com
> > www.shapeblue.com
> > 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> > @shapeblue
> >
> >
> >
> >
>
>
> suresh.anapa...@shapeblue.com
> www.shapeblue.com
> 3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
> @shapeblue
>
>
>
>
>


Re: Congratulations to Sven - Apache Software Foundation Member

2021-03-17 Thread Simon Weller
Congrats Sven!

From: Paul Angus 
Sent: Wednesday, March 17, 2021 4:13 PM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Subject: Congratulations to Sven - Apache Software Foundation Member

Hi All,



More great news.



Please join me in congratulating Sven,  for being made a Member of the
Apache Software Foundation.



Congratulations Sven, keep up the good work!



Kind regards



Paul Angus





Re: Congratulations to Gabriel - CloudStack PMC Chair

2021-03-17 Thread Simon Weller
Congrats Gabriel!

From: Paul Angus 
Sent: Wednesday, March 17, 2021 4:10 PM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Cc: priv...@cloudstack.apache.org 
Subject: Congratulations to Gabriel - CloudStack PMC Chair

Hi All CloudStack enthusiasts!



Please join me in congratulating Gabriel for becoming the next CloudStack
PMC Chair.

Congratulations Gabriel, very well deserved!



I would also like to thank Sven for his great work of the past year!







Kind regards



Paul Angus





Re: Cloudstack developer training

2021-02-26 Thread Simon Weller
Fantastic contribution!  Thanks to the ShapeBlue team for making this happen.

-Si

From: Giles Sirett 
Sent: Friday, February 26, 2021 9:42 AM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org ; Apache CloudStack 
Marketing 
Subject: Cloudstack developer training

Hi all

One of the biggest challenges with Cloudstack is learning its architecture and 
codebase  - its big and its complicated. Onboarding new software engineers can 
be a daunting process.
For the last 2 years, we at ShapeBlue have built up a set of resources to help 
us with onboarding on new engineers who will be working on Cloudstack.

This has evolved into a self-study course that we call "hackerbook"- the logic 
being that it's a training course that gets engineers hands-on hacking in the 
code ASAP.  It's a mix of videos, exercises and other resources.

Today, we've opensourced this resource in order to make it available to anybody 
who may want to learn to develop on Cloudstack.

Feedback and improvement PRs will be warmly accepted

Its currently sitting in a shapeblue repo, happy to move under ASF if anybody 
thinks that's important

https://github.com/shapeblue/hackerbook

Happy Hacking

Kind regards
Giles


giles.sir...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)

2021-01-21 Thread Simon Weller
We used to use CLVM a while ago before we shifted to Ceph. Cluster 
suite/corosync was a bit of a nightmare, and fencing events caused all sorts of 
locking (DLM) problems.
I helped a CloudStack user out a couple of months ago, after they upgraded and 
CLVM broke, so I know it's still out there in limited places.
I wouldn't recommend using it today unless you're very brave and have the 
capability of troubleshooting the code yourself.

-Si
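
For completeness, the SharedMountPoint approach discussed below amounts to
something like this on every KVM host in the cluster (device path hypothetical,
and it assumes an already-configured OCFS2/o2cb cluster):

  mount -t ocfs2 /dev/mapper/iscsi-lun1 /mnt/primary

The identical path is then registered in CloudStack as primary storage with the
"SharedMountPoint" protocol.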



From: Wido den Hollander 
Sent: Thursday, January 21, 2021 11:26 AM
To: dev@cloudstack.apache.org ; n...@li.nux.ro 

Cc: us...@cloudstack.apache.org 
Subject: Re: Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)



On 21/01/2021 11:34, n...@li.nux.ro wrote:
> Hi,
>
> I used SharedMountPoint a very long time ago with GlusterFS before
> Cloudstack had native integration.
> Don't remember the details, but overall my impression was that it worked
> surprisingly well; of course back then there weren't as many features, so
> less stuff to test. I would give it a go.
>
> As a side note, I did also use iSCSI with CLVM with success, it was
> quite fast. I ended up doing it because it was difficult to get OCFS
> running on EL6 and GFS2 had a reputation for being very slow. Marcus has
> a lot of experience with this, might want to get in touch with him:
> https://www.slideshare.net/MarcusLSorensen/cloud-stack-clvm

I assume you used CLVM with Corosync?

My concern with LVM is:

- No thin provisioning (when used with CloudStack)
- No snapshots (Right?)
- Not very much used

OCFS2 doesn't have my preference either, but otherwise you have to use
corosync.

Anybody else otherwise using CLVM?

Wido

>
> HTH,
> Lucian
>
> On 2021-01-21 09:32, Wido den Hollander wrote:
>> Hi,
>>
>> For a specific use-case I'm looking into the possibility to use iSCSI
>> in combination with KVM.
>>
>> Use-case: Low-latency I/O with 4k blocks and QD=1
>>
>> KVM with CloudStack doesn't support iSCSI natively and the docs and
>> other blogs refer to using 'SharedMountPoint' with OCFS2 or GFS2:
>>
>> -
>> http://docs.cloudstack.apache.org/en/latest/adminguide/storage.html#hypervisor-support-for-primary-storage
>>
>> -
>> https://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/
>>
>>
>> It has been a really long time since I've used OCFS2 and I wanted to
>> see what experiences from other people are.
>>
>> How is the stability and performance of OCFS2? It seems that
>> performance should be rather good as lock/s is a problem with
>> clustered filesystems, but since we only lock the QCOW2 file on boot
>> of the VM that shouldn't be an issue.
>>
>> In addition to OCFS2, how mature is 'SharedMountPoint' as a storage
>> pool with KVM? Does it support all the features NFS supports?
>>
>> Thanks,
>>
>> Wido


Re: [DISCUSS][RELEASE] 4.15 merges and milestone

2020-12-03 Thread Simon Weller
Hi Daan,

Thanks for all the hard work on this.
I don't believe there are any other outstanding items from what I can see. 
Looks like we're ready!

-Si


From: Daan Hoogland 
Sent: Thursday, December 3, 2020 8:14 AM
To: dev 
Subject: Re: [DISCUSS][RELEASE] 4.15 merges and milestone

Hello again,

I have just merged the last critical item for 4.14 and 4.15. All other PRs
are (or should be) moved to future milestones.
I think we are ready for releases. thoughts anyone?

On Mon, Nov 23, 2020 at 2:45 PM Daan Hoogland 
wrote:

> as of now there is one PR marked as critical for 4.14.1 and one for 4.15.
> I propose moving the rest to future milestones 4.14.2, 4.15.1 or 4.16
> depending on whether these are bugs or enhancements. This is about the
> final call on Blockers and critical issues!
>
> any thoughts anybody?
>
>
> On Tue, Nov 17, 2020 at 7:48 PM Daan Hoogland 
> wrote:
>
>> Devs, we have 11 open PRs in milestone 4.15 and 2 in 4.14.1. Not all of
>> them are urgent (lots of severity:minor actually) and some are
>> enhancements. Can we agree to move all of those to 4.14.2, 4.15.1 or 4.16?
>> If not, please give them some love urgently. Traditionally we slack here,
>> and personally I'd like us to break with that tradition sooner rather than
>> later. As far as I can see we have a good set of functionality to support
>> for the next 18 months.
>> I had some off-line discussion with our RM and want to propose that we
>> cut an RC at the start of the weekend. I'll ping people on PRs as well, but want
>> to open this for any concern with the content of the release we might have.
>> So any issue that might be wrongly labelled, bring it up here please and be
>> quick about it.
>>
>> hope you all agree,
>> --
>> Daan
>>
>
>
> --
> Daan
>


--
Daan


Re: [ANNOUNCE] new committer: Nguyen Mai Hoang

2020-11-18 Thread Simon Weller
Congrats Hoang!!

From: Suresh Anaparti 
Sent: Wednesday, November 18, 2020 3:37 AM
To: hoang.ngu...@ewerk.com ; dev@cloudstack.apache.org 
; us...@cloudstack.apache.org 

Subject: Re: [ANNOUNCE] new committer: Nguyen Mai Hoang

Congrats Hoang!

Regards,
Suresh

On 18/11/20, 4:19 AM, "Sven Vogel"  wrote:

Hi everyone,



 The Project Management Committee (PMC) for Apache CloudStack
has invited Nguyen Mai Hoang to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Hoang on this accomplishment.


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.


Thanks and Cheers,



Sven Vogel
Apache CloudStack PMC member


suresh.anapa...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: [ANNOUNCE] new committer: Rakesh Venkatesh

2020-11-18 Thread Simon Weller
Congrats Rakesh!!

From: Suresh Anaparti 
Sent: Wednesday, November 18, 2020 3:36 AM
To: r.venkat...@global.leaseweb.com ; 
dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Subject: Re: [ANNOUNCE] new committer: Rakesh Venkatesh

Congrats Rakesh!

Regards,
Suresh

On 18/11/20, 4:18 AM, "Sven Vogel"  wrote:

Hi everyone,



 The Project Management Committee (PMC) for Apache CloudStack
has invited Rakesh Venkatesh to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Rakesh on this accomplishment.


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.


Thanks and Cheers,



Sven Vogel
Apache CloudStack PMC member


suresh.anapa...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: [ANNOUNCE] new committer: Suresh Anaparti

2020-11-18 Thread Simon Weller
Congrats Suresh!!

From: Rohit Yadav 
Sent: Wednesday, November 18, 2020 3:13 AM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Cc: Suresh Anaparti 
Subject: Re: [ANNOUNCE] new committer: Suresh Anaparti

Congratulations Suresh!


Regards.


From: Sven Vogel 
Sent: Wednesday, November 18, 2020 04:17
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org 
Cc: Suresh Anaparti 
Subject: [ANNOUNCE] new committer: Suresh Anaparti

Hi everyone,


 The Project Management Committee (PMC) for Apache CloudStack
has invited Suresh Anaparti to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Suresh on this accomplishment.


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.


Thanks and Cheers,



Sven Vogel
Apache CloudStack PMC member

rohit.ya...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: [ANNOUNCE] new committer: Abhishek Kumar

2020-11-18 Thread Simon Weller
Congrats Abhishek!!

From: Suresh Anaparti 
Sent: Wednesday, November 18, 2020 3:35 AM
To: Abhishek Kumar ; us...@cloudstack.apache.org 
; dev@cloudstack.apache.org 

Subject: Re: [ANNOUNCE] new committer: Abhishek Kumar

Congrats Abhishek...

Regards,
Suresh

On 18/11/20, 4:18 AM, "Sven Vogel"  wrote:

Hi everyone,


The Project Management Committee (PMC) for Apache CloudStack
has invited Abhishek Kumar to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Abhishek on this accomplishment.


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.




Thanks and Cheers,



Sven Vogel
Apache CloudStack PMC member


suresh.anapa...@shapeblue.com
www.shapeblue.com
3 London Bridge Street,  3rd floor, News Building, London  SE1 9SGUK
@shapeblue





Re: [DISCUSS]/[PROPOSAL] draft PRs

2020-02-14 Thread Simon Weller

+1

From: Daan Hoogland 
Sent: Friday, February 14, 2020 6:03 AM
To: dev 
Subject: [DISCUSS]/[PROPOSAL] draft PRs

Devs, I thought I had already sent a mail about this but I cannot find it.
I'm sure I had mentioned it somewhere (probably on GitHub).
This is a follow-up on [1] and, hopefully you'll agree, a slight improvement.

here it comes:

At the moment we are creating PRs with a [WIP] or [DO NOT MERGE] tag in the
title. Such a PR stands the chance of being merged with the tag still in the
title once we agree it is ready for merge. The tag also clutters the title.

GitHub introduced a nice feature a while ago: draft PRs. When creating a
PR you can opt not to open it for merge but as a draft. Choose the button left
of the "Create pull request" button, marked "Create draft PR". It will be a
full PR with all CI and discussion possibilities open. The only difference
is the merge button being disabled. One will then have to mark it
"ready for merge" before it *can* be merged.

[1]
https://lists.apache.org/thread.html/f3f0988907f85bfc2cfcb0fbcde831037f9b1cb017e94bc68932%40%3Cdev.cloudstack.apache.org%3E
please shoot any comments you may have back at me,
thanks

--
Daan


[DISCUSS] SIG for SDN Tungsten Fabric Network Plugin

2020-02-03 Thread Simon Weller
All,

During the 2019 CCC @ Apachecon North America, a few of us discussed the need 
for a new Software Defined Networking (SDN) integration for CloudStack, now 
that Nuage has chosen to deprecate their SDN product portfolio.
I've been working closely with Sven Vogel on outlining how we might be able to 
start a Special Interest Group (SIG) to design and build an ACS network plugin 
into the Linux Foundation project Tungsten Fabric (Formally known as Open 
Contrail).

Both Sven's company, EWERK and my company ENA are willing to contribute 
developers to this effort, as we feel it's important for Apache CloudStack to 
have a robust SDN option that utilizes a well known and stable open source SDN 
project and one that is community supported.
Although there is an existing plugin for Contrail that was originally 
contributed by Juniper, it has been orphaned in the ACS code base for many 
years and to my knowledge, it’s unusable.
Over the years, we've had a number of SDN integrations come and go. This has 
left users in the lurch and discouraged other potential companies from 
considering these options, as one has to be confident in the longevity of the 
plugin.

Why Tungsten Fabric?
Tungsten Fabric has been around for quite a while and it is now officially a 
Linux Foundation project, so it has a considerable amount of support behind it. 
It's scalable, multi-tenant, supports a number of advanced security features, 
as well as a large chunk of built-in components we currently need a Virtual Router 
to provide.
It's dual-stack IPv4 and IPv6 and heavily utilizes BGP and MPLS. This makes it 
ideal for those of us that maintain our own networks, as it will provide tight 
integration options and eliminate the need for complicated Private Gateway (PG) 
setups for VPCs.
Additionally, with service stitching and EVPN capabilities, it will make it a 
lot easier for operators to support other platforms without having to build a 
dedicated plugin, or figure out how to support those network or security 
features through other L2/L3 hacks.

Sven and I would like to gauge feedback from the community on this proposal and 
see whether other organizations are interested in participating.

Thanks,

Sven and Simon


Re: Cadf events #3232 - Is there an interest?

2019-12-12 Thread Simon Weller
Hi Nikolaos,

Your work looks very interesting. Honestly, I missed your PR, so I wasn't aware 
this was being worked on. I'll take a look at it and get more familiar with the 
CADF standard in general.

-Si
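
For readers unfamiliar with it, a minimal CADF activity event looks roughly
like this (field names per the DMTF CADF spec; the taxonomy values and IDs are
illustrative only):

  {
    "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
    "eventType": "activity",
    "id": "c0fb2bcd-0000-0000-0000-000000000001",
    "eventTime": "2019-12-12T08:37:00+0000",
    "action": "create",
    "outcome": "success",
    "initiator": { "id": "user-uuid", "typeURI": "data/security/account/user" },
    "target": { "id": "vm-uuid", "typeURI": "compute/machine" },
    "observer": { "id": "management-server", "typeURI": "service/compute" }
  }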


From: Nikolaos Dalezios 
Sent: Thursday, December 12, 2019 2:37 AM
To: dev@cloudstack.apache.org 
Subject: Cadf events #3232 - Is there an interest?

Greetings to the dev team and community.
I have delivered my thesis on "Implementing CADF in Apache CloudStack" and
got my MSc.
First of all, I would like to thank you for your comments and help.

Is there an interest in implementing the full CADF model in CloudStack? By
full, I mean that each Action and the produced event should be
studied separately, something that is really time-consuming and is
considered a long-term project.

Does the community think that the existing event logging model should be
changed? If so, would it be useful to provide a more detailed plan and
article on CADF event model?

@DaanHoogland  requested review on my pull
request but I got no answer or comments.

I just would like to know if I can contribute to this project

Thanks,

Nikos


Re: Introduction

2019-11-06 Thread Simon Weller

Welcome Pearl!

From: Pearl d'Silva 
Sent: Tuesday, November 5, 2019 10:34 PM
To: dev@cloudstack.apache.org 
Subject: Introduction

Hello Everyone,

I'm Pearl and have recently joined Shapeblue. Really excited about being part 
of Cloudstack community. Looking forward to learn and contribute to the 
community.


Thanks & Regards,
Pearl Dsilva
pearl.dsi...@shapeblue.com




pearl.dsi...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue





Re: [VOTE] Primate as modern UI for CloudStack

2019-10-07 Thread Simon Weller

+1 (binding)

From: Rohit Yadav 
Sent: Monday, October 7, 2019 6:31 AM
To: dev@cloudstack.apache.org ; 
us...@cloudstack.apache.org ; 
priv...@cloudstack.apache.org 
Subject: [VOTE] Primate as modern UI for CloudStack

All,

The feedback and response has been positive on the proposal to use Primate as 
the modern UI for CloudStack [1] [2]. Thank you all.

I'm starting this vote (to):

  *   Accept Primate codebase [3] as a project under Apache CloudStack project
  *   Create and host a new repository (cloudstack-primate) and follow Github 
based development workflow (issues, pull requests etc) as we do with CloudStack
  *   Given this is a new project, to encourage cadence until its feature 
completeness the merge criteria is proposed as:
 *   Manual testing against each PR and/or with screenshots from the author 
or testing contributor, integration with Travis is possible once we get JS/UI 
tests
 *   At least 1 LGTM from any of the active contributors, we'll move this 
to 2 LGTMs when the codebase reaches feature parity wrt the existing/old 
CloudStack UI
 *   Squash and merge PRs
  *   Accept the proposed timeline [1][2] (subject to achievement of goals wrt 
Primate technical release and GA)
 *   the first technical preview targeted with the winter 2019 LTS release 
(~Q1 2020), and that release to serve a deprecation notice wrt the older UI
 *   define a release approach before winter LTS
 *   stop taking feature FRs for old/existing UI after winter 2019 LTS 
release, work on upgrade path/documentation from old UI to Primate
 *   the first Primate GA targeted wrt summer LTS 2020 (~H2 2020), but 
still shipping the old UI with a final deprecation notice
 *   old UI codebase removed from codebase in winter 2020 LTS release

The vote will be up for the next two weeks to give enough time for PMC and the 
community to gather consensus and still have room for questions, feedback and 
discussions. The results will be shared on/after 21st October 2019.

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

[1] Primate Proposal:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Proposal%3A+CloudStack+Primate+UI

[2] Email thread reference:
https://markmail.org/message/z6fuvw4regig7aqb

[3] Primate repo current location: https://github.com/shapeblue/primate


Regards,

Rohit Yadav

Software Architect, ShapeBlue

https://www.shapeblue.com

rohit.ya...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue





Re: [DISCUSS] Primate - new UI for CloudStack?

2019-09-20 Thread Simon Weller
I like the idea of separating the UI from the main code base. I think that will 
provide a lot more flexibility moving forward and the project is well overdue 
for a new look and feel.
I think the time frame proposed to sunset the old UI is doable and we'll need 
some feedback from those using it today (we have our own UI, so this doesn't 
affect us).

One of our challenges over the last few years has been the added work of getting 
UI features into a release; it has added around 30% additional workload due 
to the older-style code of the current UI. Having it in Vue is great and I 
think it will also encourage others to contribute.

+1.
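
For anyone who wants to poke at the codebase mentioned below, a standard VueJS
development loop (assuming Primate follows the usual vue-cli npm scripts) would
be:

  git clone https://github.com/shapeblue/primate
  cd primate
  npm install
  npm run serve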


From: Anurag A 
Sent: Friday, September 20, 2019 11:26 AM
To: dev@cloudstack.apache.org 
Cc: us...@cloudstack.apache.org 
Subject: Re: [DISCUSS] Primate - new UI for CloudStack?

+1 to the new UI as it supports:
1. Faster development of features
2. Better experience as a user
3. Easy customisation declaratively

Regards,
Anurag

> On 20-Sep-2019, at 8:17 AM, Siddhartha Kattoju  wrote:
>
> +1 from me as well.
>
> Just a side note: I feel like there is a high risk of tl;dr here. Maybe it's
> just me. It might be good to put most of the details in a wiki page
> and just post a summarized version on the list?
>
> *Sid Kattoju*
>
> Cloud Software Architect | Professional Services
>
> c 514.466.0951
>
>
>
>
>
>
> On Fri, Sep 20, 2019 at 8:10 AM Rohit Yadav 
> wrote:
>
>> All,
>>
>>
>>
>> == Summary ==
>>
>>
>> I have been working on a new, modern role-based UI for CloudStack (project
>> Primate: https://github.com/shapeblue/primate). I demoed this for the
>> first time at CCCNA19 last week and it was very well received. It was
>> discussed, at length, as an item in the hackathon and the general consensus
>> there was that this could become CloudStack's new UI. We discussed a plan to
>> achieve that and now I’m bringing that plan to the list for discussion.
>>
>>
>>
>> == Background ==
>>
>>
>> The current CloudStack UI has grown technical debt over time and it has
>> become harder to extend, develop, and maintain in the long run; it is also
>> difficult for new contributors to learn and get started. Since late 2018, I
>> started working on a side-project with the aim to create a modern
>> progressive and role-based declaratively-programmed UI for CloudStack,
>> called Primate. Before creating Primate, I set out to create a list of core
>> requirements of what would give us an extensible, modern UI that was easy
>> to develop now and in the future. These are the requirements I came up with:
>>
>>  *   designed from the ground up to be a complete replacement for our
>> combined user/admin UI
>>  *   to respect all entities in CloudStack and not make assumptions on
>> use-cases of CloudStack
>>  *   data-driven and auto-generation of UI widgets and to be easy to
>> learn, develop, extend, customise and maintain.
>>  *   declarative programming
>>  *   support for API discovery and parameter completion like CloudMonkey
>>  *   support for custom roles
>>
>>
>>
>> I looked at existing Cloudstack UI projects but none of them fully
>> satisfied all these requirements and started Primate.
>>
>>
>>
>> == Project Primate ==
>>
>>
>> For the implementation, I compared a couple of opensource JS and UI
>> frameworks and decided to use VueJS (https://vuejs.org)
>> which is a JavaScript framework and AntD (https://ant.design)
>> which is a UI design language with a well-defined
>> spec, styling guide, and an implementation-specific to VueJS. VueJS was
>> selected because among a few other JS frameworks I surveyed it was the
>> easiest (for me) to learn and get started. I also surveyed a few UI
>> frameworks and selected AntD because it came with a well-defined spec,
>> styling guide and VueJS specific implementation which gives several
>> re-usable components out of the box.
>>
>>
>>
>> During the development of Primate, I used my previous experience from
>> CloudMonkey and another PoC angular-based UI ProjectX, and it currently
>> supports:
>>
>>  *   role-based UI based on API autodiscovery
>>  *   auto-generated action/API forms with parameter completion
>>  *   declarative component-driven views
>>  *   modern programming methodologies (hot reloading, npm based
>> build/run/compile etc.)
>>  *   decoupled from core Cloudstack code
>>  *   dynamic translation (most/many of old translation files ported)
>>  *   includes dashboards, async job/API polling, all list views/tables
>> per the old UI
>>  *   browser history and url/route driven navigation
>>  *   support for mobiles/tables/desktop screens
>>  *   configuration driven UI customisation (of navigation, icons, APIs
>> etc)
>>
>>
>>
>> To get to this point, I’ve had some valuable help from Anurag and Sven et
>> al at EWerk.
>> The development strategy to support all APIs out of the box in a
>> data-driven way gives a functioning UI 

Re: [DISCUSS] Change of the official CHAT channel

2019-08-26 Thread Simon Weller
Sven,

I just sent you an invite.

-Si


From: Sven Vogel 
Sent: Monday, August 26, 2019 10:19 AM
To: users 
Cc: dev@cloudstack.apache.org 
Subject: Re: [DISCUSS] Change of the official CHAT channel

that means,



this is the official URL?

https://apachecloudstack.slack.com/



how can I be added to this channel?


On Sunday, 08/25/2019 at 22:19 Andrija Panic wrote:


Right to the bone Nicolas... i.e. exactly my thinking.

On Sun, 25 Aug 2019 at 19:48, Nicolas Vazquez wrote:

> Hi all,
>
> My 2 cents: I fully agree on promoting the mailing lists as the main
> communication channel. However, I think it will be good to drop IRC and
> promote the current Slack channel, as it is active and has a considerable
> number of users (+150 members vs ~30 in the IRC channels). Personally, I
> joined the Slack channel a few years ago and have seen new users
> joining the channel and asking questions. My point is, if we already have
> this channel working well without "official" promotion, then why not use it
> as the official 'chat' channel?
>
>
> Regards,
>
> Nicolas Vazquez
>
> 
> From: Gabriel Beims Bräscher
> Sent: Friday, August 23, 2019 3:21 PM
> To: users
> Cc: dev
> Subject: Re: [DISCUSS] Change of the official CHAT channel
>
> I am +1 on opening the VOTE for removal of IRC references.
>
> On Fri, 23 Aug 2019 at 14:33, Andrija Panic wrote:
>
> > I'm in line with that, Paul - let's then remove the IRC mentions from the
> > main website and any future events. Makes sense?
> >
> > We can keep Slack as it is now, and then definitely (in my opinion) NOT
> > move to ASF Slack, since as someone mentioned it's a pain to move all
> > users.
> >
> > Does this sound good to the majority here?
> >
> > If so, I would cast a vote thread for removal of IRC from any public
> > pages/website/future events marketing.
> >
> >
> > On Fri, Aug 23, 2019, 17:45 Paul Angus  wrote:
> >
> > > I don't think that we should be 'marketing' anything other than the
> > > mailing lists. That is where most people can be found and where
> > > previously asked questions can be searched for.
> > > Having a ready-to-roll 'chat' tool has many advantages, so I'm cool with
> > > having one in the back pocket.
> > > But personally I'd like to see as little fragmentation of the community as
> > > possible.
> > >
> > > + a new visitor turning up to a slack channel with 4 people on it is not
> > > going to give a positive impression, and that person is definitely far
> > > less likely to get an answer to any question that they have.
> > >
> > > Paul.
> > >
> > >
> > >
> > >
> > >
> > > -Original Message-
> > > From: Gabriel Beims Bräscher
> > > Sent: 23 August 2019 14:56
> > > To: dev
> > > Cc: Nux!
> > > Subject: Re: [DISCUSS] Change of the official CHAT channel
> > >
> > > Hello folks,
> > >
> > > I see no problem with using chat platforms. The mail should stay as the
> > > default communication tool indeed, but several times I have used Slack when
> > > helping folks around, raising questions, and pinging people in private
> > > chat.
> > > As far as we have tools and they are being used by the community, I see no
> > > problem with keeping them around.
> > >
> > > However, I think that the main point raised by Andrija is regarding the
> > > "marketing" of the channels. We normally promote IRC (pointing to
> > > irc.freenode.net), but the CloudStack IRC channels look pretty dead,
> > > especially considering that IRC is the "official" ACS chat tool.
> > >
> > > Does promoting all channels (IRC + Slack) look like a good idea? Should we
> > > keep only one option? On one hand, IRC currently is not as active as Slack,
> > > but on the other hand, Slack requires an invitation e-mail.
> > >
> > > Regards,
> > > Gabriel.
> > >
> > >
> > >
> > > Em sex, 23 de ago de 2019 às 08:45, Rohit Yadav <
> > rohit.ya...@shapeblue.com
> > > >
> > > escreveu:
> > >
> > > > All,
> > > >
> > > > I think email is the more persistent form of communication and it must
> > > > continue to be the default communication mechanism for the project
> > > > dev+user+etc.
> > > >
> > > > For more real-time/transient communication, IRC or Slack are good tools.
> > > > I don't have a preference for either, as I'm not an active user of
> > > > either of them. It just happened that someone asked me to join the-asf
> > > > slack for some other group, and I saw many projects having their
> > > > channels there, and I made a comment about it on our current
> > > > apachecloudstack community slack, which is outside of the-asf group.
> > > > I've got both of them set up on my desktop Slack now. If it's too
> > > > difficult to migrate everyone from the old slack group to the-asf
> > > > slack group, let's continue as is; or perhaps add an

Introduction: Radu Todirica

2019-08-21 Thread Simon Weller
All,

I'd like to introduce Radu Todirica to the community. Radu is a new(er) member
of the ENA team who has been making lots of contributions to CloudStack, and you
can expect to see some PRs from him pretty soon.

Please join me in making him welcome!

-Si




Re: Dynamic scaling support for KVM

2019-08-08 Thread Simon Weller
Hi Fariborz,

We'd definitely like to add that functionality, but it's going to be a fairly
big lift, as the libvirt XML configs are currently built as a single block. A large
refactor of how we interact with libvirt will be required.
This is definitely something we have on our list, and hopefully we can start
taking a look at it within the next few months.
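
For anyone who wants to experiment in the meantime, libvirt itself already exposes
live-resize primitives. A minimal sketch (the domain name and sizes here are
illustrative, and hot-adding only works up to the maximums defined in the domain):

virsh setvcpus i-2-7-VM 4 --live       # hot-add vCPUs up to the defined maximum
virsh setmem i-2-7-VM 4194304 --live   # resize current memory (in KiB) up to the maximum

The hard part for ACS is generating and updating the domain XML incrementally
rather than as one block, which is the refactor mentioned above.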

-Si

From: Fariborz Navidan 
Sent: Thursday, August 8, 2019 7:27 AM
To: dev@cloudstack.apache.org 
Subject: Dynamic scaling support for KVM

Hello Devs,

libvirt has supported live horizontal scaling of VMs for a long time. Do you
intend for ACS 4.13 to support dynamic scaling of KVM VMs?

TIA


Re: Does ACS utilizes or can utilize IOThreads on KVM hosts?

2019-08-05 Thread Simon Weller
Fariborz,

If you have virtio-scsi configured for the VM, in ACS 4.12 and later iothreads 
will be configured based on the number of vCores you allocate to the VM. So if 
you have 4 cores, you'll have 4 iothreads.
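
If you want to sanity-check this on a host, something along these lines should show
the generated config (the domain name is an example, and the exact XML layout may
vary between versions):

virsh dumpxml i-2-42-VM | grep -i iothread
# expect an <iothreads>4</iothreads> element, plus iothread attributes
# on the virtio-scsi controller/disk driver lines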

The PR is here - https://github.com/apache/cloudstack/pull/3101

-Si


From: Fariborz Navidan 
Sent: Monday, August 5, 2019 10:10 AM
To: dev@cloudstack.apache.org 
Subject: Does ACS utilizes or can utilize IOThreads on KVM hosts?

Hello All,

It sounds like IOThreads can have a great effect on the IO performance of KVM guests
if configured correctly. Does ACS support or utilize this feature of QEMU
and libvirt?

Thanks


Re: [ANNOUNCE] Apache CloudStack - Newsweek, Best Business Tools 2019 in 'Cloud Services'

2019-08-02 Thread Simon Weller
Fantastic!


From: Paul Angus 
Sent: Thursday, August 1, 2019 10:54 AM
To: us...@cloudstack.apache.org ; 
dev@cloudstack.apache.org 
Subject: [ANNOUNCE] Apache CloudStack - Newsweek, Best Business Tools 2019 in 
'Cloud Services'


Hi Everyone,

I'm delighted to say that 'we' have been recognised by Newsweek in their 
inaugural list of Best Business Tools in the 'Cloud Services' category.
https://www.newsweek.com/best-business-tools-2019/cloud-services

A quick screen grab of the certificate can be found here:
https://imgur.com/a/7epEqeX

The recognition was for:
- Willingness to recommend the brand to colleagues
- Trustworthiness
- Reliability
- Fulfilment of service promise
- Continuous improvement
- Security

Well done everyone, this is a great community achievement and shows that not 
only is CloudStack great, but the world is starting to take notice of it too!


We'll upload a good copy of the certificate to the ACS website and a bit of
marketing stuff around it soon.


Paul Angus
CloudStack PMC
#CloudStackWorks







Re: Latest Qemu KVM EV appears to be broken with ACS #3278

2019-07-19 Thread Simon Weller
Hi Sven,

We're still using 2.10 right now and we haven't tested the patches yet for 
2.12. Having said that, we're currently trying to get our internal ACS release 
up to a more recent mainline version, so I'd suspect we'll have to dive into 
this fairly soon.

-Si


From: Sven Vogel 
Sent: Friday, July 19, 2019 6:48 AM
To: dev 
Subject: Re: Latest Qemu KVM EV appears to be broken with ACS #3278

Hi Guys,

Sorry to formally reopen this issue.

We tested the current system with patch #3278.

We use CentOS 1805 or 1810 with OVS and QEMU 2.12. It seems the first agent
contact does not work, so the agent does not know the virtual machine is running.
With 2.10 and the old agent and system VM it works.

2019-07-19 00:17:30,571 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-5:null) (logid:8ad14091) Executing: 
/usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patch.sh -n v-1728-VM -c 
 template=domP type=consoleproxy host=10.24.48.46 port=8250 name=v-1728-VM 
premium=true zone=4 pod=4 guid=Proxy.1728 proxy_vm=1728 disable_rp_filter=true 
eth2ip=185.232.219.237 eth2mask=255.255.255.248 gateway=185.232.219.233 
eth0ip=169.254.0.193 eth0mask=255.255.0.0 eth1ip=10.24.48.124 
eth1mask=255.255.255.0 mgmtcidr=10.24.48.0/24 localgw=10.24.48.1 
internaldns1=10.24.48.33 internaldns2=10.24.48.34 dns1=217.69.224.73 
dns2=85.232.28.146
2019-07-19 00:17:30,573 DEBUG [kvm.resource.LibvirtComputingResource] 
(agentRequest-Handler-5:null) (logid:8ad14091) Executing while with timeout : 
30
2019-07-19 00:17:30,616 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-2:null) (logid:7577151e) Unable to logon to 169.254.2.189
2019-07-19 00:17:30,616 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-2:null) (logid:7577151e) Trying to connect to 
169.254.2.189
2019-07-19 00:17:31,874 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) (logid:04b4d6ce) Could not connect to 169.254.1.21
2019-07-19 00:17:33,622 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-2:null) (logid:7577151e) Could not connect to 
169.254.2.189
2019-07-19 00:17:36,874 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) (logid:04b4d6ce) Unable to logon to 169.254.1.21
2019-07-19 00:17:36,874 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-3:null) (logid:04b4d6ce) Trying to connect to 169.254.1.21
2019-07-19 00:17:38,622 DEBUG [resource.virtualnetwork.VirtualRoutingResource] 
(agentRequest-Handler-2:null) (logid:7577151e) Trying to connect to 
169.254.2.189

@simonweller did you encounter the same problem?

@rohit maybe there is something we forgot, or something else that's wrong?

@wido do you have ideas?

thanks

Sven


__

Sven Vogel
Teamlead Platform

EWERK DIGITAL GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 11
F +49 341 42649 - 18
s.vo...@ewerk.com
www.ewerk.com


On 22.04.2019 at 21:29, Simon Weller <swel...@ena.com.INVALID> wrote:

In our case the SystemVMs were booting fine, but ACS wasn't able to inject the 
payload via the socket.



Re: [ANNOUNCE] Andrija Panic has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Andrija!!


From: Paul Angus 
Sent: Saturday, July 13, 2019 10:02 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org; 
priv...@cloudstack.apache.org
Subject: [ANNOUNCE] Andrija Panic has joined the PMC

Fellow CloudStackers,



It gives me great pleasure to say that Andrija has been invited to join the
PMC and has graciously accepted.


Please join me in congratulating Andrija!




Kind regards,



Paul Angus

CloudStack PMC


Re: [ANNOUNCE] Sven Vogel has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Sven!


From: Boris Stoyanov 
Sent: Tuesday, July 16, 2019 2:08 AM
To: us...@cloudstack.apache.org; priv...@cloudstack.apache.org; 
dev@cloudstack.apache.org
Subject: Re: [ANNOUNCE] Sven Vogel has joined the PMC

Congrats Sven!

On 13.07.19, 18:45, "Paul Angus"  wrote:

Fellow CloudStackers,



It gives me great pleasure to say that Sven has been invited to join the
PMC and has graciously accepted.


Please join me in congratulating Sven!




Kind regards,



Paul Angus

CloudStack PMC








Re: [ANNOUNCE] Gabriel Beims Bräscher has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Gabriel!


From: Paul Angus 
Sent: Saturday, July 13, 2019 11:00 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org; 
priv...@cloudstack.apache.org
Subject: [ANNOUNCE] Gabriel Beims Bräscher has joined the PMC

Fellow CloudStackers,


It's non-stop today!



It gives me great pleasure to say that Gabriel has been invited to join the
PMC and has graciously accepted.


Please join me in congratulating Gabriel!




Kind regards,



Paul Angus

CloudStack PMC


Re: [ANNOUNCE] Bobby (Boris Stoyanov) has joined the PMC

2019-07-17 Thread Simon Weller
Congrats Bobby!!



From: Paul Angus 
Sent: Tuesday, July 16, 2019 4:12 AM
To: priv...@cloudstack.apache.org; dev@cloudstack.apache.org; 
us...@cloudstack.apache.org
Subject: [ANNOUNCE] Bobby (Boris Stoyanov) has joined the PMC

Fellow CloudStackers,



It gives me great pleasure to say that Bobby has been invited to join the
PMC and has graciously accepted.



Please join me in congratulating Bobby!





Kind regards,





Paul Angus

CloudStack PMC


Re: Latest Qemu KVM EV appears to be broken with ACS

2019-04-22 Thread Simon Weller
Hey Andrija,

In our case the SystemVMs were booting fine, but ACS wasn't able to inject the 
payload via the socket.
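
If anyone wants to check the same thing on their end, the virtio-serial channel the
patching relies on is visible in the domain XML - a quick way to find it (the domain
name here is just an example):

virsh dumpxml v-1728-VM | grep -A3 '<channel'
# the unix socket source path shown there is where ACS writes the payload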

-Si


From: Andrija Panic 
Sent: Monday, April 22, 2019 1:16 PM
To: dev
Subject: Re: Latest Qemu KVM EV appears to be broken with ACS

Hi Simon, all,

did you try running CentOS with a newer kernel? I just got a really strange
issue after upgrading KVM host from stock 1.5.3 to qemu-kvm-ev 2.12 with
stock kernel 3.10 (issues on Intel CPUs, while no issues on AMD Opteron),
which was fixed by upgrading kernel to 4.4 (Elrepo version).

My case was that SystemVM were not able to boot, stuck on "booting from
hard drive" SeaBios message (actually any VM with VirtIO "hardware") using
qemu-kvm-ev 2.12 (while no issues on stock 1.5.3).

What I could find is that there are obviously some issues when using
nested KVM on top of ESXi (or HyperV), which is what I'm running.
When I switched template to Intel emulated one i.e. "Windows 2016" OS type
- VMs were able to boot just fine (user VM at least).

Might be related to original issue on this thread...

Best,
Andrija

On Thu, 18 Apr 2019 at 22:36, Sven Vogel  wrote:

> Hi Rohit,
>
> Thx we will test it!
>
>
>
> Von meinem iPhone gesendet
>
>
> __
>
> Sven Vogel
> Teamlead Platform
>
> EWERK RZ GmbH
> Brühl 24, D-04109 Leipzig
> P +49 341 42649 - 11
> F +49 341 42649 - 18
> s.vo...@ewerk.com
> www.ewerk.com
>
> > On 18.04.2019 at 21:44, Rohit Yadav wrote:
> >
> > I've sent a PR that attempts to solve the issue. It is under testing but
> ready for review: https://github.com/apache/cloudstack/pull/3278
> >
> >
> > Thanks.
> >
> >
> > Regards,
> >
> > Rohit Yadav
> >
> > Software Architect, ShapeBlue
> >
> > https://www.shapeblue.com
> >
> > 
> > From: Simon Weller 
> > Sent: Monday, April 15, 2019 7:24:40 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
> >
> > +1 for the qemu guest agent approach.
> >
> >
> > 
> > From: Wido den Hollander 
> > Sent: Saturday, April 13, 2019 2:32 PM
> > To: dev@cloudstack.apache.org; Rohit Yadav
> > Subject: Re: Latest Qemu KVM EV appears to be broken with ACS
> >
> >
> >
> >> On 4/12/19 9:33 PM, Rohit Yadav wrote:
> >> Thanks, I was already exploring a solution using qemu guest agent since
> morning today. It just so happened that you also thought of the approach,
> and I could validate my script to work with qemu ev 2.12 by the end of my
> day.
> >>
> >
> > That would be great actually. The Qemu Guest Agent is a lot better to
> > use. We might want to explore that indeed. Not for now, but it is a
> > better option to talk to VMs imho.
> >
> > Wido
> >
> >> A proper fix might require some additional changes in
> cloud-early-config and therefore a new systemvmtemplate for
> 4.13.0.0/4.11.3.0, I'll start a PR on that in the following week(s).
> >>
> >> Regards.
> >>
> >> Regards,
> >> Rohit Yadav
> >>
> >> 
> >> From: Marcus 
> >> Sent: Saturday, April 13, 2019 12:31:33 AM
> >> To: dev@cloudstack.apache.org
> >> Subject: Re: Latest Qemu KVM EV appears to be broken with ACS

Re: CloudStack and SDN

2019-04-18 Thread Simon Weller
Juniper contributed a plugin for Contrail quite a number of years ago. With 
that project now under the umbrella of the Linux Foundation (along with
OpenDaylight), it would be nice to see some of us start to explore the open options
and see if we could move some better integrations forward. With Nokia pulling 
the Nuage plugin, there are no full SDN options available any longer and I 
think that's unfortunate for the project at large.

-Si


From: Sergey Levitskiy 
Sent: Wednesday, April 17, 2019 3:40 PM
To: dev@cloudstack.apache.org
Cc: users
Subject: Re: CloudStack and SDN

Nicira NVP/NSX is very outdated and doesn’t support latest NSX.

On 4/17/19, 1:22 AM, "Andrija Panic"  wrote:

Simplest one that actually works (call it SDN if you like) - VXLAN.

A few others with various implementation states and all hypervisor
dependent:
Nicira NVP / Vmware NSX
Midonet
Various tunneling protocols


On Wed, 17 Apr 2019 at 09:46, Haijiao <18602198...@163.com> wrote:

>
>
> Noted Nuage's decision on ceasing the SDN plugin for CloudStack. Just
> wondering, is there any other SDN solution that can be integrated with
> CloudStack?
>
>
> Thanks !
>
>
> Regards,
>
>

--

Andrija Panić




Re: [DISCUSS] Remove support for el6 packaging in 4.13/4.14

2019-04-15 Thread Simon Weller
+1.



From: Nux! 
Sent: Monday, April 15, 2019 8:20 AM
To: dev
Cc: users
Subject: Re: [DISCUSS] Remove support for el6 packaging in 4.13/4.14

+1, EL6 is in its last phase of support.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Rohit Yadav" 
> To: "dev" , "users" 
> Sent: Monday, 15 April, 2019 08:44:58
> Subject: [DISCUSS] Remove support for el6 packaging in 4.13/4.14

> All,
>
>
> With CentOS8 around the corner to be released sometime around the summer, I
> would like to propose deprecating CentOS6 as a supported management server host
> distro and KVM host distro. Non-systemd enabled Ubuntu releases have already
> been deprecated [1].
>
>
> The older CentOS6 version would hold us back as we try to adapt, use, and support
> newer JRE versions, kvm/libvirt versions, the Linux kernel, and several other
> older dependencies. Both CentOS6 and RHEL6 reached EOL on May 10th, 2017
> with respect to full updates [1].
>
>
> If we don't have any disagreements, I propose we remove el6 packaging support in
> the next major release - 4.13. But if there are users and organisations that
> will be badly impacted, let 4.13 be the last release to support el6, and we
> definitely remove el6 support in 4.14.
>
> What are your thoughts?
>
>
> [1] EOL date wiki reference:
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hypervisor+and+Management+Server+OS+EOL+Dates
>
>
>
> Regards,
>
> Rohit Yadav
>
> Software Architect, ShapeBlue
>
> https://www.shapeblue.com
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue


Re: Latest Qemu KVM EV appears to be broken with ACS

2019-04-15 Thread Simon Weller
>> ... from the host like so
>> (I'm using virsh but we could perhaps communicate with the guest agent
>> socket directly or via socat):
>>
>> virsh qemu-agent-command 19 '{"execute":"guest-file-open",
>> "arguments":{"path":"/tmp/testfile","mode":"w+"}}'
>> {"return":1001}
>>
>> virsh qemu-agent-command 19 '{"execute":"guest-file-write",
>> "arguments":{"handle":1001,"buf-b64":"Zm9vIHdhcyBoZXJlCg=="}}'
>> {"return":{"count":13,"eof":false}}
>>
>> virsh qemu-agent-command 19 '{"execute":"guest-file-close",
>> "arguments":{"handle":1001}}'
>> {"return":{}}
>>
>> root@r-54850-VM:~# cat /tmp/testfile
>> foo was here
>>
>> We are also able to detect via libvirt that the qemu guest agent is up and
>> ready. You can see it in the XML when you list a VM.
>>
>> We do need to keep other hypervisors in mind. This is just an option for a
>> fix that doesn't involve a larger redesign.
>>
>> On Fri, Apr 12, 2019 at 10:21 AM Rohit Yadav 
>> wrote:
>>
>>> Hi Simon,
>>>
>>>
>>> I'm exploring a solution for the same. I've found that the python-based
>>> patching script fails to wait for the message to be written to the unix
>>> socket before the socket is closed. I reckon this could be related to
>>> serial-port device-handling changes in qemu-ev 2.12, as the same
>>> mechanism used to work in past versions.
>>>
>>>
>>> I'm exploring/testing a solution where I replace the python-based patching
>>> script with a bash one. Can you test the following in your environment
>>> (ensure socat is installed)? Just back up and replace the patchviasocket.py
>>> file with this:
>>>
>>> https://gist.github.com/rhtyd/aab23357fef2d8a530c0e83ec8be10c5
>>>
>>>
>>> The short term solution would be one of the ways to ensure patching works
>>> without much change in the scripts or systemvmtemplate. However, longer
>>> term we need to explore and standardize the patching mechanism across all
>>> hypervisors, for example by using a small payload via a config drive iso.
>>>
>>>
>>> Regards,
>>>
>>> Rohit Yadav
>>>
>>> Software Architect, ShapeBlue
>>>
>>> https://www.shapeblue.com
>>>
>>> 
>>> From: Simon Weller 
>>> Sent: Friday, April 12, 2019 8:29:04 PM
>>> To: dev; users
>>> Subject: Latest Qemu KVM EV appears to be broken with ACS
>>>
>>> All,
>>>
>>> After troubleshooting a strange issue with a new lab environment
>>> yesterday, it appears that the patchviasocket functionality we rely on
>> for
>>> key and ip injection into our router/SSVM/CPVM images is broken with
>>> qemu-kvm-ev-2.12.0-18.el7 (January 2019 release). This was tested on
>> Centos
>>> 7.6.
>>> No data is injected and this was confirmed using socat on /dev/vport0p1.
>>> qemu-kvm-ev-2.10.0-21.el7_5.7.1 works, so hopefully this will save
>> someone
>>> some pain and suffering trying to figure out why the deployment seems
>> broken.
>>>
>>> We're going to dig in and see if we can figure out the patches responsible
>>> for it breaking.
>>>
>>> -Si
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>
>


Latest Qemu KVM EV appears to be broken with ACS

2019-04-12 Thread Simon Weller
All,

After troubleshooting a strange issue with a new lab environment yesterday, it
appears that the patchviasocket functionality we rely on for key and IP
injection into our router/SSVM/CPVM images is broken with
qemu-kvm-ev-2.12.0-18.el7 (January 2019 release). This was tested on CentOS 7.6.
No data is injected, and this was confirmed using socat on /dev/vport0p1.
qemu-kvm-ev-2.10.0-21.el7_5.7.1 works, so hopefully this will save someone some
pain and suffering trying to figure out why the deployment seems broken.
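
For reference, the confirmation was essentially just listening on the guest side of
the virtio-serial port from inside the system VM, something along these lines (the
exact invocation may differ), and observing that nothing ever arrives on 2.12:

socat -u /dev/vport0p1 STDOUT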

We're going to dig in and see if we can figure out the patches responsible for it
breaking.

-Si




Re: Windows on KVM - Windows profile with PV drivers vs Other PV?

2019-04-09 Thread Simon Weller
Lucian,

Windows PV also enables the KVM HyperV Enlightenment features for Windows 
guests on 4.12.

https://www.lfasiallc.com/wp-content/uploads/2017/11/Use-Hyper-V-Enlightenments-to-Increase-KVM-VM-Performance_Density_Chao-Peng.pdf
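
If you're curious whether the enlightenments took effect for a given guest, the
domain XML on the KVM host is the quickest place to look (the domain name is an
example, and the exact set of flags may vary):

virsh dumpxml i-2-15-VM | grep -A5 '<hyperv'
# expect entries such as <relaxed state='on'/>, <vapic state='on'/> and
# <spinlocks state='on' .../>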

You can also change the disk controller when creating a template -

[screenshot: the template creation form, showing the disk controller selection]

-Si


From: Nux! 
Sent: Tuesday, April 9, 2019 8:53 AM
To: dev
Cc: users
Subject: Re: Windows on KVM - Windows profile with PV drivers vs Other PV?

Ok, turns out there's a "Windows PV" profile that sorts out my problem. Didn't 
see that initially.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Nux!" 
> To: "dev" 
> Cc: "users" 
> Sent: Tuesday, 9 April, 2019 14:26:40
> Subject: Windows on KVM - Windows profile with PV drivers vs Other PV?

> Hi,
>
> I'm trying to add some Windows templates to my test cloud and by using the
> "Windows Server 2016" I end up with Intel 1000 NICs instead of virtio ones;
> this is a bit suboptimal.
>
> When I add the template initially, I can see I can specify a type of storage -
> virtio-scsi is what I need - however, I can't do the same for the network
> devices.
>
> What does the community recommend here? Any way to force virtio devices on the
> Windows Server profile or should I go for "Other PV" profiles for Windows
> guests?
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro


Re: [VOTE] Apache CloudStack 4.12.0.0 [RC5]

2019-03-18 Thread Simon Weller
Built and installed RPMS
Added VPC
Added VPC Tier
Create VM
Start VM
Create new volume
Attach volume
Detach volume
Deleted volume
Installed OS via template and ISO
Detach ISO
Test Console
Tested Snapshot
Checked VR interface assignments
Test ping to VR from VM
Test ping to 8.8.8.8 from VM
Test VR Master shutdown and failover to Backup (while pinging from VM to 
8.8.8.8)
Test new VR Master shutdown and failover to Backup (while pinging from VM to 
8.8.8.8)
Stop VM, Delete VM (with expunge)
Removed VPC Tier
Removed VPC

Hypervisor: KVM
OS: Centos 7.5
Networking: Advanced with VXLAN
Primary Storage: Ceph RBD
Secondary Storage: NFS

+1 (binding)


From: Gabriel Beims Bräscher 
Sent: Thursday, March 14, 2019 4:58 PM
To: dev; users
Subject: [VOTE] Apache CloudStack 4.12.0.0 [RC5]

Hi All,

I've created a 4.12.0.0 release (RC5), with the following artifacts up for
a vote:
The changes since RC4 are listed at the end of this email.

Git Branch: 4.12.0.0-RC20190314T1011
https://github.com/apache/cloudstack/tree/4.12.0.0-RC20190314T1011
https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.12.0.0-RC20190314T1011

Commit: a137398bf106028d2fd5344d599fcd2b560d2944
https://github.com/apache/cloudstack/commit/a137398bf106028d2fd5344d599fcd2b560d2944

Source release for 4.12.0.0-RC20190314T1011:
https://dist.apache.org/repos/dist/dev/cloudstack/4.12.0.0/

PGP release keys (signed using 25908455):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 3 business days (until 19th March).

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Additional information:

For users' convenience, packages are available in
http://cloudstack.apt-get.eu/
The 4.12.0.0 RC5 is available for the following distros:
- Ubuntu 14.04, 16.04, and 18.04:
http://cloudstack.apt-get.eu/ubuntu/dists/trusty/4.12/
http://cloudstack.apt-get.eu/ubuntu/dists/xenial/4.12/
http://cloudstack.apt-get.eu/ubuntu/dists/bionic/4.12/

- CentOS6 and CentOS7:
http://cloudstack.apt-get.eu/centos/6/4.12/
http://cloudstack.apt-get.eu/centos/7/4.12/

Please use the 4.11.2 systemvm template (located in [1]) when testing RC5.
The release notes [2] still need to be updated.

Changes Since RC4:
Merged #3210 systemd: Fix -Dpid arg passing to systemd usage service [3]

[1] http://download.cloudstack.org/systemvm/4.11/
[2]
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html
[3] https://github.com/apache/cloudstack/pull/3210


Re: [ANNOUNCE] New Committer: Sven Vogel

2019-03-18 Thread Simon Weller
Congrats Sven!



From: Tutkowski, Mike 
Sent: Monday, March 18, 2019 10:31 AM
To: dev@cloudstack.apache.org
Subject: [ANNOUNCE] New Committer: Sven Vogel

Hi everyone,

The Project Management Committee (PMC) for Apache CloudStack
has invited Sven Vogel to become a committer and I am pleased
to announce that he has accepted.

Please join me in congratulating Sven on this accomplishment.

Thanks!
Mike



Re: [ANNOUNCE] New Committer: Dennis Konrad

2019-03-18 Thread Simon Weller
Congrats Dennis!


From: Tutkowski, Mike 
Sent: Monday, March 18, 2019 10:32 AM
To: dev@cloudstack.apache.org
Subject: [ANNOUNCE] New Committer: Dennis Konrad

Hi everyone,

The Project Management Committee (PMC) for Apache CloudStack
has invited Dennis Konrad to become a committer and I am pleased
to announce that he has accepted.

Please join me in congratulating Dennis on this accomplishment.

Thanks!
Mike



Re: [VOTE] Apache CloudStack 4.12.0.0 [RC3]

2019-02-22 Thread Simon Weller
Master -> 4.12 RC3 (no DB upgrade)

Added VPC
Added VPC Tier
Create VM
Start VM
Create new volume
Attach volume
Detach volume
Deleted volume
Installed OS via template and ISO
Detach ISO
Test Console
Tested Snapshot
Checked VR interface assignments
Test ping to VR from VM
Test ping to 8.8.8.8 from VM
Test VR Master shutdown and failover to Backup (while pinging from VM to 
8.8.8.8)
Test new VR Master shutdown and failover to Backup (while pinging from VM to 
8.8.8.8)
Stop VM, Delete VM (with expunge)
Removed VPC Tier
Removed VPC

Hypervisor: KVM
OS: Centos 7.5
Networking: Advanced with VXLAN
Primary Storage: Ceph RBD
Secondary Storage: NFS

+1 (binding)



From: Gabriel Beims Bräscher 
Sent: Thursday, February 21, 2019 7:55 PM
To: dev; users
Subject: [VOTE] Apache CloudStack 4.12.0.0 [RC3]

Hi All,

I've created a 4.12.0.0 release (RC3), with the following artifacts up for
a vote:

Git Branch and Commit SH:
https://github.com/apache/cloudstack/tree/4.12.0.0-RC20190212T2301
https://github.com/apache/cloudstack/commit/e5b3aa4b5a5d1a25c79313cecd3ae1c9f074baca
Commit: e5b3aa4b5a5d1a25c79313cecd3ae1c9f074baca

Source release for 4.12.0.0-RC20190212T2301:
https://dist.apache.org/repos/dist/dev/cloudstack/4.12.0.0/

PGP release keys (signed using 25908455):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS

The vote will be open for 3 business days (until 26th February).

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Additional information:

*Note:* the RC2 VOTE ("[VOTE] Apache CloudStack 4.12.0.0 [RC2]") had no
blocker bug to be fixed; however, the VOTE process led to the need for an
RC3. For more details, please follow the discussion in the respective
email thread.

All Travis and Jenkins checks have passed [1] for the branch
4.12.0.0-RC20190212T2301.

For users' convenience, packages are available in
http://cloudstack.apt-get.eu/
4.12.0.0 RC3 is available for the following distros:
- Ubuntu 14.04, 16.04, and 18.04;
- CentOS6 and CentOS7.

Please use the 4.11.2 systemvm template (located in [2]) when testing RC3.
The release notes [3] still need to be updated.

[1] https://github.com/apache/cloudstack/pull/3189
[2] http://download.cloudstack.org/systemvm/4.11/
[3]
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/latest/index.html


Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]

2019-02-20 Thread Simon Weller
I'm also going to find some time to build and test 4.12 this week. So reading 
the thread here, there's no RC branch and we're using master?

-Si




From: Rohit Yadav 
Sent: Wednesday, February 20, 2019 1:50 AM
To: dev@cloudstack.apache.org; users
Subject: Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]

Hi Gabriel,


I'll try to find some time this weekend to test the RC2.


However, on top of things, the commit/sha does not seem super stable 
(intermittently failing travis smoketests on master, but not on 4.11 for 
example) and I could not find the 4.12.0.0-RC20190212T2301 branch on asf/github 
remotes. Have you confirmed a near ~100% smoketest pass for at least
kvm/vmware/xenserver on the RC?


- Rohit






From: Gabriel Beims Bräscher 
Sent: Tuesday, February 19, 2019 9:12:28 PM
To: dev@cloudstack.apache.org; users
Subject: Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]

Hello all,

I would like to update that we still have 48 hours (extended another 72
hours) for testing and voting 4.12 RC2.

So far we have 2 votes:
+1 (PMC / binding)
* wido

+1 (non binding)
* me

0
none

-1
none


On Mon, 18 Feb 2019 at 09:37, Paul Angus wrote:

> Ah,
>
> Previously I followed the documentation here:
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+Procedure
>
>
>
> Master can be unfrozen once we reach a point that the code is healthy
> enough to cut the first RC as the release is then on its own branch.
>
>
>
>
>
>
>
>
>
>
>
> *From:* Gabriel Beims Bräscher 
> *Sent:* 18 February 2019 12:16
> *To:* Paul Angus 
> *Cc:* dev@cloudstack.apache.org; users 
> *Subject:* Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]
>
>
>
> Paul, I did not create a branch. As the master branch is on freeze, I
> considered master to be what is used for building and tests.
>
>
>
> Git Branch and Commit SH:
> https://github.com/apache/cloudstack/tree/master
>
> https://github.com/apache/cloudstack/commit/709845f4a333ad2ace0183706433a0653ba159c6
> Commit: 709845f4a333ad2ace0183706433a0653ba159c6
>
>
>
> I can create a 4.12.0.0-RC20190212T2301 branch if needed.
>
>
>
> As we have only 1 binding vote, I will postpone the vote for another 72 hours.
>
>
>
> On Mon, 18 Feb 2019 at 06:47, Paul Angus wrote:
>
> [sorry everyone]
>
> @Gabriel Beims Bräscher what's the name of the branch that you've created
> for 4.12? It's probably me getting github blindness, but I can't find it
> to build nonoss from.
>
>
>
>
>
> -Original Message-
> From: Wido den Hollander 
> Sent: 15 February 2019 13:23
> To: dev@cloudstack.apache.org; Gabriel Beims Bräscher <
> gabrasc...@gmail.com>; users 
> Subject: Re: [VOTE] Apache CloudStack 4.12.0.0 [RC2]
>
> +1 (binding)
>
> Tested:
>
> - Building DEB packages
> - Run on Ubuntu 18.04
> - Tested live storage migration
> - Tested Advanced Networking with VXLAN
> - Tested IPv6 deployment in Advanced Networking
> - Tested destroy and re-create of Virtual Routers
>
> Wido
>
> On 2/13/19 2:23 AM, Gabriel Beims Bräscher wrote:
> > Hi All,
> >
> > The issue in RC1 (4.12.0.0-RC20190206T2333) have been addressed and we
> > are ready to go with RC2.
> > I've created the 4.12 RC2 (4.12.0.0-RC20190212T2301) release
> > candidate, with the following artifacts up for a vote:
> >
> > Git Branch and Commit SH:
> > https://github.com/apache/cloudstack/tree/master
> > https://github.com/apache/cloudstack/commit/709845f4a333ad2ace01837064
> > 33a0653ba159c6
> > Commit: 709845f4a333ad2ace0183706433a0653ba159c6
> >
> > Source release for 4.12.0.0-RC20190212T2301:
> > https://dist.apache.org/repos/dist/dev/cloudstack/4.12.0.0/
> >
> > PGP release keys (signed using 25908455):
> > https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> >
> > The vote will be open for 3 business days (until 15th January).
> >
> > For sanity in tallying the vote, can PMC members please be sure to
> > indicate "(binding)" with their vote?
> >
> > [ ] +1  approve
> > [ ] +0  no opinion
> > [ ] -1  disapprove (and reason why)
> >
> > Additional information:
> >
> > For users' convenience, packages are available in
> > http://cloudstack.apt-get.eu/
> > RC1 has been built for the following distros:
> > - Ubuntu 14.04, 16.04, and 18.04;
> > - CentOS6 and CentOS7.
> >
> > The system VM template from 4.11.2 [1] works for RC2. The release
> > notes [2] still need to be updated.
> >
> > Best Regards,
> > Gabriel.
> >
> > [1] http://download.cloudstack.org/systemvm/4.11/
> > [2]
> > http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en
> > /latest/index.html
> >
>
>






Re: project does not build anymore on macos

2019-02-08 Thread Simon Weller
I know Nathan and Gabriel discussed this a couple of weeks ago, as Nathan does 
his dev builds on a mac. I'm not sure if a patch idea came out of that 
discussion or not.

-Si




From: Daan Hoogland 
Sent: Friday, February 8, 2019 9:58 AM
To: dev; Wido den Hollander
Cc: Rafael Weingartner; Gabriel Beims Bräscher; Rohit Yadav
Subject: project does not build anymore on macos

LS,

Due to recently merged changes in LibvirtResource, the project will error out
during unit tests on macOS. This happens because of
c496c84c6c727a84862cbbe2d870ff57939488b4, which changes the initialisation of
memstats, requiring /proc/meminfo to exist (during unit tests). The
requirement makes perfect sense in production, so a simple roll-back seems
not the way to go. @Wido den Hollander, you created the
change - can you think with me about what to do here?
/cc @Rafael Weingartner  @Gabriel Beims
Bräscher  @Rohit Yadav 
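
For anyone blocked by this right now, the blunt stopgap is to build without running
the unit tests until a proper fix lands (a workaround, not a solution):

mvn clean install -DskipTests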

--
Daan


Re: Dropping Nuage Networks support

2019-01-25 Thread Simon Weller
Kris,


I echo what Rohit said. You've all been great to work with and we're going to 
miss your involvement in the community.

I wish you all the best and I hope we can all cross paths again in the future.


- Si


From: Rohit Yadav 
Sent: Friday, January 25, 2019 2:02 PM
To: dev@cloudstack.apache.org
Subject: Re: Dropping Nuage Networks support

Hi Kris, Frank and Raf,


It's unfortunate to read about the decision but I understand your position. I 
would like to thank you, Frank, and Raf for your professionalism, your
participation in the community, and the quality of your contributions. With your
final effort to do a proper cleanup with the PR, you've set a new standard for other
vendors and contributors to follow - something we've not seen in the community
before, and it is highly appreciated. For that, I thank you all again.


It was a pleasure to work with you all. Good luck in your future projects, 
cheers!


Regards,

Rohit Yadav






From: Kris Sterckx 
Sent: Friday, January 25, 2019 11:44:04 PM
To: dev@cloudstack.apache.org
Subject: Dropping Nuage Networks support

Folks,



A management decision has been made within Nuage Networks / Nokia to drop
CloudStack support in the upcoming release.

With that, we have been working on a clean cut, taking out Nuage SDN
support from the code, as the last thing we want would be leaving
unsupported/broken code in the repo. Obviously, all generically applicable
contributions remain – only the Nuage specifics are taken out. Some of
these generic contributions include per-NIC extra DHCP options support,
extended Config Drive support (both discussed at the Miami CCC) and
Physical Network Migration (presented at the last Montreal CCC).

The following PR has been uploaded, pending your review:


https://github.com/apache/cloudstack/pull/3146



Together with Frank and Raf, I would like to thank everyone for the great
collaboration, the time we spent at conferences/meetups and the overall joy
we had. And I hope our paths cross again in the future.

Keep up the great work.



Kris






Re: Snapshots on KVM corrupting disk images

2019-01-22 Thread Simon Weller
Sean,


What underlying primary storage are you using and how is it being utilized by 
ACS (e.g. NFS, shared mount, etc.)?



- Si



From: Sean Lair 
Sent: Tuesday, January 22, 2019 10:30 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Snapshots on KVM corrupting disk images

Hi all,

We have had some instances where VM disks became corrupted when using KVM
snapshots.  We are running CloudStack 4.9.3 with KVM on CentOS 7.

The first time was when someone mass-enabled scheduled snapshots on a large
number of VMs and secondary storage filled up. We had to restore all those
VM disks... but believed it was just our own fault for letting secondary storage
fill up.

Today we had an instance where a snapshot failed and now the disk image is 
corrupted and the VM can't boot. Here is the output of some commands:

---
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
qemu-img: Could not open './184aa458-9d4b-4c1b-a3c6-23d28ea28e80': Could not 
read snapshots: File too large

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
---

We tried restoring to before the snapshot failure, but still have strange 
errors:

--
[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# ls -lh 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
-rw-r--r--. 1 root root 73G Jan 22 11:04 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img info 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
image: ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 73G
cluster_size: 65536
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
Format specific information:
compat: 1.1
lazy refcounts: false

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img check 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
tcmalloc: large alloc 1539750010880 bytes == (nil) @  0x7fb9cbbf7bf3 
0x7fb9cbc19488 0x7fb9cb71dc56 0x55d16ddf1c77 0x55d16ddf1edc 0x55d16ddf2541 
0x55d16ddf465e 0x55d16ddf8ad1 0x55d16de336db 0x55d16de373e6 0x7fb9c63a3c05 
0x55d16ddd9f7d
No errors were found on the image.

[root@cloudkvm02 c3be0ae5-2248-3ed6-a0c7-acffe25cc8d3]# qemu-img snapshot -l 
./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
Snapshot list:
IDTAG VM SIZEDATE   VM CLOCK
1 a8fdf99f-8219-4032-a9c8-87a6e09e7f95   3.7G 2018-12-23 11:01:43 
3099:35:55.242
2 b4d74338-b0e3-4eeb-8bf8-41f6f75d9abd   3.8G 2019-01-06 11:03:16 
3431:52:23.942
--

Everyone is now extremely hesitant to use snapshots in KVM. We tried
deleting the snapshots in the restored disk image, but it errors out...
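
(For completeness, the delete we attempted was the standard internal-snapshot
removal, e.g. with the tag and file from the listing above:

qemu-img snapshot -d a8fdf99f-8219-4032-a9c8-87a6e09e7f95 ./184aa458-9d4b-4c1b-a3c6-23d28ea28e80
)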


Does anyone else have issues with KVM snapshots?  We are considering just 
disabling this functionality now...

Thanks
Sean








Re: Introduction

2019-01-11 Thread Simon Weller
Congrats Andrija!





From: Tutkowski, Mike 
Sent: Friday, January 11, 2019 10:06 AM
To: dev@cloudstack.apache.org
Subject: Re: Introduction

Glad to have you continuing to work in the CloudStack Community, Andrija!

FYI: I've worked with Andrija as a customer of SolidFire the past couple years 
(first having met him in Budapest at a CloudStack Collab Conf). He has great 
experience with CloudStack and it's fantastic that he is able to continue 
helping out the Community while working at ShapeBlue. :)

On 1/11/19, 3:49 AM, "Andrija Panic"  wrote:





Hi all,

I would like to take this opportunity to (re)introduce myself - some of you 
already know me from the mailing list as Andrija Panic from HIAG/Safe Swiss Cloud.

I have moved forward and joined a great team at ShapeBlue as a Cloud
Architect, and I am looking forward to further endeavors with CloudStack.
FTR - I'm based in Belgrade, Serbia, and have been running CloudStack in
production for the last 5 years.

Cheers,
Andrija Panić








Re: Introduction

2019-01-04 Thread Simon Weller
Ivan,


slack invite sent.


- Si



From: Ivan Serdyuk 
Sent: Friday, January 4, 2019 9:01 AM
To: dev@cloudstack.apache.org
Subject: Re: Introduction

Btw: could I join the community's Slack?

On Wed, Jan 2, 2019 at 2:03 PM Rohit Yadav 
wrote:

> Welcome Abhishek!
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Abhishek Kumar 
> Sent: Wednesday, January 2, 2019 4:23:45 PM
> To: dev@cloudstack.apache.org
> Subject: Introduction
>
> Hello all!
>
>
> This is Abhishek Kumar. I've recently joined ShapeBlue as a Software
> Engineer to work on CloudStack.
> Looking forward to learning and contributing to the project and community in a
> meaningful manner.
>
>
> Regards,
>
>
> Abhishek Kumar
>
> Software Engineer
>
> ShapeBlue
>
> abhishek.ku...@shapeblue.com
>
> www.shapeblue.com
>
>
>
>
>
>
>
>
>


Re: questions about 4.11 future

2019-01-03 Thread Simon Weller
Rohit,


Thoughts on this? We can base it on 4.11 or the pending 4.12 master.


- Si





From: Nathan Johnson 
Sent: Wednesday, January 2, 2019 12:23 PM
To: Rohit Yadav
Cc: dev@cloudstack.apache.org
Subject: questions about 4.11 future

First off, is there going to be a 4.11.3 release?

Assuming so, at what point would it be appropriate to add database migrations?  
I have a bug fix I’d like to open a PR on, but it will require a small database 
change - namely inserting a record into the configuration table.

Is it appropriate for me to start a new migration path for 4.11.3 to facilitate 
this fix?  Or would it be more appropriate for the release manager to do this?

If you’d like, I have a 4.11.3 database migration started in my branch (i.e., 
added a schema-41120to41130.sql / cleanup , added a Upgrade41120to41130.java , 
and added the Upgrade41120to41130 class to all of the paths mentioned in 
DatabaseUpgradeChecker).  if you’d like me to open a PR on effectively an empty 
4.11.3 migrations path, and then I could make a second PR that just adds the 
appropriate sql statement(s) for the actual bug fix PR.
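
For illustration only, the kind of change I have in mind is just a guarded insert
appended to the new schema file - something like the following, where the setting
name and value are placeholders rather than the actual fix:

cat >> setup/db/db/schema-41120to41130.sql <<'EOF'
-- hypothetical example entry, not the real change
INSERT IGNORE INTO `cloud`.`configuration` (category, instance, component, name, value)
VALUES ('Advanced', 'DEFAULT', 'management-server', 'some.new.setting', 'false');
EOF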

Thanks in advance,

Nathan Johnson


Re: Re: [PROPOSE] RM for 4.12

2019-01-03 Thread Simon Weller
Let us know how we can help, Gabriel!



From: Rohit Yadav 
Sent: Thursday, January 3, 2019 6:03 AM
To: dev
Subject: Re: Re: [PROPOSE] RM for 4.12

Hi Gabriel,


You've my full support, let me add you to blueorangutan to kick tests.


I have a slight objection to the freeze date since people are on holidays and we
still have some outstanding PRs to review, and I think we need to fix the master
branch (which is failing Travis and marvin/trillian smoketests) as early as
possible, without which it will be difficult to merge outstanding PRs with
confidence. I think it may not be practical to have a stable RC1 by the 18th unless
master is stabilized first and some of the mentioned/outstanding feature PRs
are tested and merged. How about moving the freeze date to the end of Jan 2019,
while still working towards bugfixing and master stability?


Please also check the upgrade paths from 4.11.2.0, 4.11.3.0-SNAPSHOT, I think 
the paths to 4.11.2.0(-SNAPSHOT) may need fixing appropriately. We also have 
some critical VR/systemvmtemplate fixes, especially around VMware, that we can
aim to fix/publish before 4.12.0.0.


- Rohit






From: Gabriel Beims Bräscher 
Sent: Friday, December 28, 2018 5:55:36 PM
To: dev
Subject: Re: Re: [PROPOSE] RM for 4.12

Hi Haijiao,

It would be great to have it. If we get a PR before the release freezing, I
have no problem adding it to 4.12.

Thanks for the feedback.

On Thu, 27 Dec 2018 at 23:50, Haijiao <18602198...@163.com> wrote:

> Great !
>
>
> Is it possible to add XenServer 7.6 and XCP-ng 7.6 support into ACS 4.12,
> though there's no PR to address it yet?
>
>
> Regards,
>
>
>
>
On 27 December 2018 at 19:02, "Gabriel Beims Bräscher" wrote:
>
> Thanks for the feedback, Rafael.
>
> Updated the PRs/features list:
>
> I – IPv6 support for Advanced network;
>
>  I a) ipv6: Calculate IPv6 address instead of fetching one from a pool
> #3077 (https://github.com/apache/cloudstack/pull/3077)
>
>  I b) Refactory VXLAN script and add IPv6 support #3070 (
> https://github.com/apache/cloudstack/pull/3070)
>
> II – UI: Update jquery and related libraries #3069 (
> https://github.com/apache/cloudstack/pull/3069)
>
> III – Data motion new features
>
>  III a) KVM-Local storage - fixes: (i) migrate template when it does not
> exist on target storage, and (ii) enable migrations with TLS connection;
>
>  III b) KVM live storage migration intra-cluster from NFS source and
> destination #2983 (https://github.com/apache/cloudstack/pull/2983)
>
>  III c) Vmware offline migration #2848 (
> https://github.com/apache/cloudstack/pull/2848)
>
> IV – Add Influxdb to StatsCollector #3078 (
> https://github.com/apache/cloudstack/pull/3078)
>
> V – Add command to list management servers #2578 (
> https://github.com/apache/cloudstack/pull/2578)
>
> On Wed, 26 Dec 2018 at 11:25, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
> > It sounds like a plan.
> >
> > Reading through your suggested PRs (from the backlog we have), I noticed
> > something though. The PR (https://github.com/apache/cloudstack/pull/2997
> )
> > that has been merged introduced a bug in its feature (as we discussed
> last
> > week). Therefore, you need to add the fix for this bug in the list as
> well.
> >
> > On Wed, Dec 26, 2018 at 9:48 AM Gabriel Beims Bräscher <
> > gabrasc...@gmail.com>
> > wrote:
> >
> > > Hi All,
> > >
> > >
> > > It has been one year since we started discussing the 4.11 release,
> which
> > > was released on 12 February 2018. Additionally, 4.11 LTS is supported
> > until
> > > 1st July 2019 [1]; the next release will be 4.12, prior to our next LTS
> > > (5.0?). With that in mind, I'd like to put myself forward as release
> > > manager for 4.12. Please feel free to discuss if you have comments or
> > > concerns.
> > >
> > >
> > > Here is the plan:
> > >
> > > 1. The freeze date for the 4.12.0.0 will be at the 12th of January
> 2019.
> > >
> > > 2. After the freeze date (12th Jan), features will not be allowed on
> > > 4.12.0.0 and fixes only if addressing blocker issues. Fixes for other
> > > issues will be individually judged on their merit and risk.
> > >
> > > 3. RM will triage/report critical and blocker bugs for 4.12 and
> encourage
> > > people to get them fixed.
> > >
> > > 4. RM will create RCs and start voting once blocker bugs are cleared
> and
> > > baseline smoke test results are on par with previous smoke test
> results.
> > >
> > > 5. RM will allocate at least a week for branch stabilization and
> testing.
> > > At the earliest, on 18th January, RM will put 4.12.0.0-rc1 for voting
> > from
> > > the 4.12.0.0 branch, and master will be open to accepting new features.
> > >
> > > 6. RM will repeat 3-5 as required. Voting/testing of -rc2, -rc3 and so
> on
> > > will be created as required.
> > >
> > > 7. Once vote passes - RM will continue with the release procedures [2].
> > >
> > >
> > > I have 

Re: new committer: Boris Stoyanov (AKA Bobby)

2018-12-13 Thread Simon Weller
Congrats Bobby, much deserved!



From: Paul Angus 
Sent: Thursday, December 13, 2018 3:22 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Cc: Boris Stoyanov
Subject: new committer: Boris Stoyanov (AKA Bobby)

Hi Everyone,

The Project Management Committee (PMC) for Apache CloudStack
has invited Boris Stoyanov to become a committer and we are pleased
to announce that he has accepted.

Please join me in congratulating Bobby!


Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.
Being a PMC member enables assistance with the management
and to guide the direction of the project.








Re: [ANNOUNCE] New committer: Andrija Panić

2018-11-19 Thread Simon Weller
Congratulations Andrija, much deserved!





From: Nicolas Vazquez 
Sent: Monday, November 19, 2018 6:42 AM
To: dev
Subject: Re: [ANNOUNCE] New committer: Andrija Panić

Congratulations Andrija!


Regards,

Nicolas Vazquez


From: Rohit Yadav 
Sent: Monday, November 19, 2018 7:24:26 AM
To: dev
Subject: Re: [ANNOUNCE] New committer: Andrija Panić

Congrats Andrija!



- Rohit






From: Gabriel Beims Bräscher 
Sent: Monday, November 19, 2018 3:50:13 PM
To: dev
Subject: Re: [ANNOUNCE] New committer: Andrija Panić

Congratulations Andrija. Well deserved!

Em seg, 19 de nov de 2018 às 06:48, Wido den Hollander 
escreveu:

> Welcome Andrija!
>
> On 11/19/18 5:27 AM, Tutkowski, Mike wrote:
> > Hi everyone,
> >
> > The Project Management Committee (PMC) for Apache CloudStack
> > has invited Andrija Panić to become a committer and I am pleased
> > to announce that he has accepted.
> >
> > Please join me in congratulating Andrija on this accomplishment.
> >
> > Thanks!
> > Mike
> >
>










Re: VXLAN and KVm experiences

2018-11-14 Thread Simon Weller
Wido,


Here is the original document on the implementation of VXLAN in ACS -
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Linux+native+VXLAN+support+on+KVM+hypervisor

It may shed some light on the reasons for the different multicast groups.
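
In practice, the host-side prerequisites that come up later in this thread boil down
to something like the following sketch (the interface name, address and membership
value are examples - adjust them for your underlay):

# raise the per-host multicast group membership limit (the kernel default is 20)
echo 'net.ipv4.igmp_max_memberships = 200' > /etc/sysctl.d/99-vxlan.conf
sysctl -p /etc/sysctl.d/99-vxlan.conf

# give each host an IP on the VXLAN underlay interface, in the same subnet on all hosts
ip addr add 10.10.10.11/24 dev bond0.950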


- Si


From: Wido den Hollander 
Sent: Tuesday, November 13, 2018 4:40 AM
To: dev@cloudstack.apache.org; Simon Weller
Subject: Re: VXLAN and KVm experiences



On 10/23/18 2:34 PM, Simon Weller wrote:
> Linux native VXLAN uses multicast and each host has to participate in 
> multicast in order to see the VXLAN networks. We haven't tried using PIM 
> across an L3 boundary with ACS, although it will probably work fine.
>
> Another option is to use an L3 VTEP, but right now there is no native support 
> for that in CloudStack's VXLAN implementation, although we've thought about 
> proposing it as a feature.
>

Getting back to this I see CloudStack does this:

local mcastGrp="239.$(( ($vxlanId >> 16) % 256 )).$(( ($vxlanId >> 8) %
256 )).$(( $vxlanId % 256 ))"

VNI 1000 would use group 239.0.3.232 and VNI 1001 uses 239.0.3.233.

Why are we using a different mcast group for every VNI? As the VNI is
encoded in the packet this should just work in one group, right?

Because this way you need to configure all those groups on your
Router(s) as each VNI will use a different Multicast Group.

I'm just looking for the reason why we have this different multicast groups.

I was thinking that we might want to add an option to agent.properties
where we allow users to set a fixed Multicast group for all traffic.
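
For reference, that derivation (from modifyvxlan.sh [0]) plus a hypothetical
fixed-group override boils down to the sketch below; FIXED_MCAST_GROUP is an
illustration of such an option, not an existing knob:

#!/bin/bash
vxlanId=1000
# per-VNI group, as the script computes it today (VNI 1000 -> 239.0.3.232)
mcastGrp="239.$(( (vxlanId >> 16) % 256 )).$(( (vxlanId >> 8) % 256 )).$(( vxlanId % 256 ))"
# hypothetical override: one shared group for every VNI
[ -n "$FIXED_MCAST_GROUP" ] && mcastGrp="$FIXED_MCAST_GROUP"
ip link add "vxlan${vxlanId}" type vxlan id "${vxlanId}" \
    group "${mcastGrp}" dev bond0.950 ttl 10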

Wido

[0]:
https://github.com/apache/cloudstack/blob/master/scripts/vm/network/vnet/modifyvxlan.sh#L33



>
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 23, 2018 7:17 AM
> To: dev@cloudstack.apache.org; Simon Weller
> Subject: Re: VXLAN and KVm experiences
>
>
>
> On 10/23/18 1:51 PM, Simon Weller wrote:
>> We've also been using VXLAN on KVM for all of our isolated VPC guest 
>> networks for quite a long time now. As Andrija pointed out, make sure you 
>> increase the max_igmp_memberships param and also put an IP address on each 
>> host's VXLAN interface, in the same subnet for all hosts that will 
>> share networking, or multicast won't work.
>>
>
> Thanks! So you are saying that all hypervisors need to be in the same L2
> network or are you routing the multicast?
>
> My idea was that each POD would be an isolated Layer 3 domain and that a
> VNI would span over the different Layer 3 networks.
>
> I don't like STP and other Layer 2 loop-prevention systems.
>
> Wido
>
>>
>> - Si
>>
>>
>> 
>> From: Wido den Hollander 
>> Sent: Tuesday, October 23, 2018 5:21 AM
>> To: dev@cloudstack.apache.org
>> Subject: Re: VXLAN and KVm experiences
>>
>>
>>
>> On 10/23/18 11:21 AM, Andrija Panic wrote:
>>> Hi Wido,
>>>
>>> I have "pioneered" this one in production for last 3 years (and suffered a
>>> nasty pain of silent drop of packages on kernel 3.X back in the days
>>> because of being unaware of max_igmp_memberships kernel parameters, so I
>>> have updated the manual long time ago).
>>>
>>> I never had any issues (beside above nasty one...) and it works very well.
>>
>> That's what I want to hear!
>>
>>> To avoid above issue that I described - you should increase
>>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
>>> with more than 20 vxlan interfaces, some of them will stay in down state
>>> and have a hard traffic drop (with proper message in agent.log) with kernel
>>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
>>> pay attention to MTU size as well - anyway everything is in the manual (I
>>> updated everything I thought was missing) - so please check it.
>>>
>>
>> Yes, the underlying network will all be 9000 bytes MTU.
>>
>>> Our example setup:
>>>
>>> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
>>> - so this is defined as KVM traffic label. In our case it didn't make sense
>>> to use bridge on top of this bond0.950 (as the traffic label) - you can
>>> test it on your own - since this bridge is used only to extract child
>>> bond0.950 interface name, then based on vxlan ID, ACS will provision
>>> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
>>> (and then of course vNIC goes to this new bridge), so original bridge (to
>>> which bond0.xxx 

Re: KVM Max Guests Limit

2018-11-08 Thread Simon Weller
I think this is legacy and was a guess back in the day. It was 50 at one point and 
it was lifted higher a few releases ago.
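
In the meantime, the cap can be raised per hypervisor version via the API; a
CloudMonkey sketch (the id comes from the first call):

cloudmonkey list hypervisorcapabilities hypervisor=KVM
cloudmonkey update hypervisorcapabilities id=<uuid-from-listing> maxguestslimit=500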





From: Ivan Kudryavtsev 
Sent: Thursday, November 8, 2018 3:58 PM
To: dev
Subject: Re: KVM Max Guests Limit

Hi all, +1 for higher numbers.

On Thu, 8 Nov 2018 at 16:32, Wido den Hollander wrote:

> Hi,
>
> I see that for KVM we set the limit to 144 guests by default, can
> anybody tell me why we have this limit set to 144?
>
> Searching a bit I found this:
> https://access.redhat.com/articles/rhel-kvm-limits
>
> "This guest limit does not apply to Red Hat Enterprise Linux with
> Unlimited Guests. There is no guest limit for Red Hat Enterprise
> Virtualization"
>
> There is always a limit somewhere, but why do we set it to 144?
>
> I would personally vote for increasing this to 500 or something so that
> users don't run into it that easily.
>
> Also, the log line is printed in DEBUG mode only when a host reaches
> this limit, so I created a PR to set this to INFO:
> https://github.com/apache/cloudstack/pull/3013
>
> Any input?
>
> Wido
>


--
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ 


Re: VXLAN and KVm experiences

2018-10-23 Thread Simon Weller
Yeah, being able to handle EVPN within ACS via FRR would be awesome. FRR has 
added a lot of features since we tested it last. We were having problems with 
FRR honouring route targets and dynamically creating routes based on labels. If 
I recall, it was related to LDP 9.3 not functioning correctly.
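
For context, the control plane in that model is a few lines of FRR per
hypervisor; a minimal sketch (ASN and peer address are illustrative):

# /etc/frr/frr.conf (fragment)
router bgp 65001
 neighbor 192.0.2.1 remote-as 65001
 !
 address-family l2vpn evpn
  neighbor 192.0.2.1 activate
  advertise-all-vni
 exit-address-family

Local VNIs are then advertised as EVPN routes, so the underlay needs no
multicast or PIM at all.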



From: Ivan Kudryavtsev 
Sent: Tuesday, October 23, 2018 7:54 AM
To: dev
Subject: Re: VXLAN and KVm experiences

Doesn't a solution like this work seamlessly for large VXLAN networks?

https://vincent.bernat.ch/en/blog/2017-vxlan-bgp-evpn

On Tue, 23 Oct 2018 at 8:34, Simon Weller wrote:

> Linux native VXLAN uses multicast and each host has to participate in
> multicast in order to see the VXLAN networks. We haven't tried using PIM
> across an L3 boundary with ACS, although it will probably work fine.
>
> Another option is to use an L3 VTEP, but right now there is no native
> support for that in CloudStack's VXLAN implementation, although we've
> thought about proposing it as a feature.
>
>
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 23, 2018 7:17 AM
> To: dev@cloudstack.apache.org; Simon Weller
> Subject: Re: VXLAN and KVm experiences
>
>
>
> On 10/23/18 1:51 PM, Simon Weller wrote:
> > We've also been using VXLAN on KVM for all of our isolated VPC guest
> networks for quite a long time now. As Andrija pointed out, make sure you
> increase the max_igmp_memberships param and also put an IP address on each
> host's VXLAN interface, in the same subnet for all hosts that will
> share networking, or multicast won't work.
> >
>
> Thanks! So you are saying that all hypervisors need to be in the same L2
> network or are you routing the multicast?
>
> My idea was that each POD would be an isolated Layer 3 domain and that a
> VNI would span over the different Layer 3 networks.
>
> I don't like STP and other Layer 2 loop-prevention systems.
>
> Wido
>
> >
> > - Si
> >
> >
> > 
> > From: Wido den Hollander 
> > Sent: Tuesday, October 23, 2018 5:21 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: VXLAN and KVm experiences
> >
> >
> >
> > On 10/23/18 11:21 AM, Andrija Panic wrote:
> >> Hi Wido,
> >>
> >> I have "pioneered" this one in production for last 3 years (and
> suffered a
> >> nasty pain of silent drop of packages on kernel 3.X back in the days
> >> because of being unaware of max_igmp_memberships kernel parameters, so I
> >> have updated the manual long time ago).
> >>
> >> I never had any issues (beside above nasty one...) and it works very
> well.
> >
> > That's what I want to hear!
> >
> >> To avoid above issue that I described - you should increase
> >> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  -
> otherwise
> >> with more than 20 vxlan interfaces, some of them will stay in down state
> >> and have a hard traffic drop (with proper message in agent.log) with
> kernel
> >>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and
> also
> >> pay attention to MTU size as well - anyway everything is in the manual
> (I
> >> updated everything I thought was missing) - so please check it.
> >>
> >
> > Yes, the underlying network will all be 9000 bytes MTU.
> >
> >> Our example setup:
> >>
> >> We have i.e. bond.950 as the main VLAN which will carry all vxlan
> "tunnels"
> >> - so this is defined as KVM traffic label. In our case it didn't make
> sense
> >> to use bridge on top of this bond0.950 (as the traffic label) - you can
> >> test it on your own - since this bridge is used only to extract child
> >> bond0.950 interface name, then based on vxlan ID, ACS will provision
> >> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge
> created
> >> (and then of course vNIC goes to this new bridge), so original bridge
> (to
> >> which bond0.xxx belonged) is not used for anything.
> >>
> >
> > Clear, I indeed thought something like that would happen.
> >
> >> Here is sample from above for vxlan 867 used for tenant isolation:
> >>
> >> root@hostname:~# brctl show brvx-867
> >>
> >> bridge name bridge id   STP enabled interfaces
> >> brvx-8678000.2215cfce99ce   no  vnet6
> >>
> >>  vxlan867
> >>
> >> root@hostname:~# ip -d link show vxlan867
> >>
> >> 297: vxlan867:  m

Re: VXLAN and KVm experiences

2018-10-23 Thread Simon Weller
Linux native VXLAN uses multicast and each host has to participate in multicast 
in order to see the VXLAN networks. We haven't tried using PIM across an L3 
boundary with ACS, although it will probably work fine.

Another option is to use an L3 VTEP, but right now there is no native support 
for that in CloudStack's VXLAN implementation, although we've thought about 
proposing it as a feature.



From: Wido den Hollander 
Sent: Tuesday, October 23, 2018 7:17 AM
To: dev@cloudstack.apache.org; Simon Weller
Subject: Re: VXLAN and KVm experiences



On 10/23/18 1:51 PM, Simon Weller wrote:
> We've also been using VXLAN on KVM for all of our isolated VPC guest networks 
> for quite a long time now. As Andrija pointed out, make sure you increase the 
> max_igmp_memberships param and also put an IP address on each host's
> VXLAN interface, in the same subnet for all hosts that will share networking,
> or multicast won't work.
>

Thanks! So you are saying that all hypervisors need to be in the same L2
network or are you routing the multicast?

My idea was that each POD would be an isolated Layer 3 domain and that a
VNI would span over the different Layer 3 networks.

I don't like STP and other Layer 2 loop-prevention systems.

Wido

>
> - Si
>
>
> 
> From: Wido den Hollander 
> Sent: Tuesday, October 23, 2018 5:21 AM
> To: dev@cloudstack.apache.org
> Subject: Re: VXLAN and KVm experiences
>
>
>
> On 10/23/18 11:21 AM, Andrija Panic wrote:
>> Hi Wido,
>>
>> I have "pioneered" this one in production for last 3 years (and suffered a
>> nasty pain of silent drop of packages on kernel 3.X back in the days
>> because of being unaware of max_igmp_memberships kernel parameters, so I
>> have updated the manual long time ago).
>>
>> I never had any issues (beside above nasty one...) and it works very well.
>
> That's what I want to hear!
>
>> To avoid above issue that I described - you should increase
>> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
>> with more than 20 vxlan interfaces, some of them will stay in down state
>> and have a hard traffic drop (with proper message in agent.log) with kernel
>>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
>> pay attention to MTU size as well - anyway everything is in the manual (I
>> updated everything I thought was missing) - so please check it.
>>
>
> Yes, the underlying network will all be 9000 bytes MTU.
>
>> Our example setup:
>>
>> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
>> - so this is defined as KVM traffic label. In our case it didn't make sense
>> to use bridge on top of this bond0.950 (as the traffic label) - you can
>> test it on your own - since this bridge is used only to extract child
>> bond0.950 interface name, then based on vxlan ID, ACS will provision
>> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
>> (and then of course vNIC goes to this new bridge), so original bridge (to
>> which bond0.xxx belonged) is not used for anything.
>>
>
> Clear, I indeed thought something like that would happen.
>
>> Here is sample from above for vxlan 867 used for tenant isolation:
>>
>> root@hostname:~# brctl show brvx-867
>>
>> bridge name bridge id   STP enabled interfaces
>> brvx-8678000.2215cfce99ce   no  vnet6
>>
>>  vxlan867
>>
>> root@hostname:~# ip -d link show vxlan867
>>
>> 297: vxlan867:  mtu 8142 qdisc noqueue
>> master brvx-867 state UNKNOWN mode DEFAULT group default qlen 1000
>> link/ether 22:15:cf:ce:99:ce brd ff:ff:ff:ff:ff:ff promiscuity 1
>> vxlan id 867 group 239.0.3.99 dev bond0.950 port 0 0 ttl 10 ageing 300
>>
>> root@ix1-c7-2:~# ifconfig bond0.950 | grep MTU
>>   UP BROADCAST RUNNING MULTICAST  MTU:8192  Metric:1
>>
>> So note how the vxlan interface has by 50 bytes smaller MTU than the
>> bond0.950 parent interface (which could affects traffic inside VM) - so
>> jumbo frames are needed anyway on the parent interface (bond.950 in example
>> above with minimum of 1550 MTU)
>>
>
> Yes, thanks! We will be using 1500 MTU inside the VMs, so all the
> networks underneath will be ~9k.
>
>> Ping me if more details needed, happy to help.
>>
>
> Awesome! We'll be doing a PoC rather soon. I'll come back with our
> experiences later.
>
> Wido
>
>> Cheers
>> Andrija
>>
>> On Tue, 23 Oct 2018 at 08:23, Wido den Hollander  wrote:
>>
>>> Hi,
>>>
>>> I just wanted to know if there are people out there using KVM with
>>> Advanced Networking and using VXLAN for different networks.
>>>
>>> Our main goal would be to spawn a VM and based on the network the NIC is
>>> in attach it to a different VXLAN bridge on the KVM host.
>>>
>>> It seems to me that this should work, but I just wanted to check and see
>>> if people have experience with it.
>>>
>>> Wido
>>>
>>
>>
>


Re: VXLAN and KVm experiences

2018-10-23 Thread Simon Weller
We've also been using VXLAN on KVM for all of our isolated VPC guest networks 
for quite a long time now. As Andrija pointed out, make sure you increase the 
max_igmp_memberships param and also put an IP address on each host's 
VXLAN interface, in the same subnet for all hosts that will share networking, or 
multicast won't work.
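
For reference, both knobs as shell (values illustrative):

# lift the per-host IGMP membership cap well past the kernel default of 20
sysctl -w net.ipv4.igmp_max_memberships=200
echo 'net.ipv4.igmp_max_memberships = 200' > /etc/sysctl.d/99-vxlan.conf

# address the VXLAN parent interface; use the same subnet on every host
# that shares guest networking, or the multicast joins won't line up
ip addr add 10.10.95.11/24 dev bond0.950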


- Si



From: Wido den Hollander 
Sent: Tuesday, October 23, 2018 5:21 AM
To: dev@cloudstack.apache.org
Subject: Re: VXLAN and KVm experiences



On 10/23/18 11:21 AM, Andrija Panic wrote:
> Hi Wido,
>
> I have "pioneered" this one in production for last 3 years (and suffered a
> nasty pain of silent drop of packages on kernel 3.X back in the days
> because of being unaware of max_igmp_memberships kernel parameters, so I
> have updated the manual long time ago).
>
> I never had any issues (beside above nasty one...) and it works very well.

That's what I want to hear!

> To avoid above issue that I described - you should increase
> max_igmp_memberships (/proc/sys/net/ipv4/igmp_max_memberships)  - otherwise
> with more than 20 vxlan interfaces, some of them will stay in down state
> and have a hard traffic drop (with proper message in agent.log) with kernel
>> 4.0 (or a silent, bitchy random packet drop on kernel 3.X...) - and also
> pay attention to MTU size as well - anyway everything is in the manual (I
> updated everything I thought was missing) - so please check it.
>

Yes, the underlying network will all be 9000 bytes MTU.

> Our example setup:
>
> We have i.e. bond.950 as the main VLAN which will carry all vxlan "tunnels"
> - so this is defined as KVM traffic label. In our case it didn't make sense
> to use bridge on top of this bond0.950 (as the traffic label) - you can
> test it on your own - since this bridge is used only to extract child
> bond0.950 interface name, then based on vxlan ID, ACS will provision
> vxlan...@bond0.xxx and join this new vxlan interface to NEW bridge created
> (and then of course vNIC goes to this new bridge), so original bridge (to
> which bond0.xxx belonged) is not used for anything.
>

Clear, I indeed thought something like that would happen.

> Here is sample from above for vxlan 867 used for tenant isolation:
>
> root@hostname:~# brctl show brvx-867
>
> bridge name bridge id   STP enabled interfaces
> brvx-8678000.2215cfce99ce   no  vnet6
>
>  vxlan867
>
> root@hostname:~# ip -d link show vxlan867
>
> 297: vxlan867:  mtu 8142 qdisc noqueue
> master brvx-867 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether 22:15:cf:ce:99:ce brd ff:ff:ff:ff:ff:ff promiscuity 1
> vxlan id 867 group 239.0.3.99 dev bond0.950 port 0 0 ttl 10 ageing 300
>
> root@ix1-c7-2:~# ifconfig bond0.950 | grep MTU
>   UP BROADCAST RUNNING MULTICAST  MTU:8192  Metric:1
>
> So note how the vxlan interface has by 50 bytes smaller MTU than the
> bond0.950 parent interface (which could affects traffic inside VM) - so
> jumbo frames are needed anyway on the parent interface (bond.950 in example
> above with minimum of 1550 MTU)
>

Yes, thanks! We will be using 1500 MTU inside the VMs, so all the
networks underneath will be ~9k.

> Ping me if more details needed, happy to help.
>

Awesome! We'll be doing a PoC rather soon. I'll come back with our
experiences later.

Wido

> Cheers
> Andrija
>
> On Tue, 23 Oct 2018 at 08:23, Wido den Hollander  wrote:
>
>> Hi,
>>
>> I just wanted to know if there are people out there using KVM with
>> Advanced Networking and using VXLAN for different networks.
>>
>> Our main goal would be to spawn a VM and based on the network the NIC is
>> in attach it to a different VXLAN bridge on the KVM host.
>>
>> It seems to me that this should work, but I just wanted to check and see
>> if people have experience with it.
>>
>> Wido
>>
>
>


Re: Ansible 2.7: CloudStack related changes and future

2018-10-08 Thread Simon Weller
Rene,


Your contributions to the community have been nothing short of amazing. We're 
going to really miss you!

Best wishes for all of your future endeavours.


- Si



From: Tutkowski, Mike 
Sent: Monday, October 8, 2018 12:31 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Re: Ansible 2.7: CloudStack related changes and future

Thanks, Rene, for all of the work you’ve done!

On 10/8/18, 10:02 AM, "Giles Sirett"  wrote:


Rene
Really sorry to hear that. I want to say a massive thank you for all of 
your work with the ansible/cloudstack modules. I know lots of people have 
benefitted from the modules, a testament to some very cool work.

Thank you and good luck with whatever's next for you

Kind regards
Giles

giles.sir...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London  WC2E 9DPUK
@shapeblue




-Original Message-
From: Rene Moser 
Sent: 08 October 2018 12:43
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Ansible 2.7: CloudStack related changes and future

Hi all

First, please note I am leaving my current job by the end of November and I 
don't see that CloudStack will play any role in my professional future.

As a result, I officially announce the end of my maintenance for the Ansible
CloudStack modules with the release of Ansible v2.8.0 in spring 2019.

If anyone is interested to take over, please let me know so I can 
officially introduce him/her to the Ansible community.

Thanks for all the support and joy I have had with CloudStack and the 
community!

Ansible v2.7.0 is released with the following, CloudStack related changes:

David Passante (1):
  cloudstack: new module cs_disk_offering (#41795)

Rene Moser (4):
  cs_firewall: fix idempotence and tests for cloudstack v4.11 (#42458)
  cs_vpc: fix disabled or wrong vpc offering taken (#42465)
  cs_pod: workaround for 4.11 API break (#43944)
  cs_template: implement update and revamp (#37015)

Yoan Blanc (1):
  cs instance root_disk size update resizes the root volume (#43817)

nishiokay (2):
  [cloudstack] fix cs_host example (#42419)
  Update cs_storage_pool.py (#42454)


Best wishes
René




Re: Montréal Hackathon

2018-10-01 Thread Simon Weller
Mike,


I've got a PR in for the KVM HyperV Enlightenment feature against master. It 
looks like Jenkins is broken right now, so it might need someone to kick it.
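
For anyone curious, the enlightenments surface in the libvirt domain XML under
<features>; a minimal sketch (the exact flag set the PR enables may differ):

virsh edit <instance-internal-name>
# then, inside <features>, something like:
#   <hyperv>
#     <relaxed state='on'/>
#     <vapic state='on'/>
#     <spinlocks state='on' retries='8191'/>
#   </hyperv>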


- Si


From: Tutkowski, Mike 
Sent: Monday, October 1, 2018 1:28 PM
To: dev@cloudstack.apache.org
Subject: Montréal Hackathon

Hi everyone,

I wanted to send out an e-mail about the hackathon that we held in Montréal 
this past Wednesday (after the two days of the CloudStack Collaboration 
Conference that took place on Monday and Tuesday).

We spent the first 1.5 hours discussing issues we’d like to see addressed 
and/or new features we might be considering. I’ve provided the current list at 
the bottom of this message.

In particular, one item of note is that people seemed interested in quarterly 
remote meetups. The intent of such meetups would be to sync with each other on 
what we’re working on so as to not duplicate effort. We may also have people 
present a bit about a recent feature or item of interest (similar to what we do 
at conferences). In addition, these meetups could provide a nice checkpoint to 
see how we are doing with regards to the items listed below.

Please take a moment, scan through the list, ask questions, and/or send out 
additional areas that you feel the CloudStack Community should be focusing on.

If you were present at the hackathon, feel free to update us on what progress 
you might have made at the hackathon with regards to any topic below.

Thanks!
Mike

Hyper-V enlightenment

Version 5.x of CloudStack

KVM IO bursting

Live VM Migration

RPC Standard interface to VR

Getting INFO easily out of the SSVM

Deprecate old code (OVM?)

CloudMonkey testing

NoVNC in CPVM

CentOS SIG + packaging

VR Programming Optimization

New UI working with API Discovery

Network Models refactoring + designer UI

Marketing Plan

Video series for CloudStack (ex. developers series, users series)

Use GitHub to document aspects of CloudStack (how to build an environment, how 
to start writing code for it, etc.)

Figure out a process for how we'd like issues to be opened, assigned, closed, 
and resolved (using JIRA and GitHub Issues)

Create a true REST API (it can use the existing API behind the scenes).

Logic to generate code in particular use cases so you can focus mainly on your 
business logic.

Use standard libraries that implement JPA, HTTP, etc.

Remote Meetups every quarter

Support IPv6



Re: [4.11.1] VR memory leak?

2018-10-01 Thread Simon Weller
Rene,


Any obvious processes using memory?
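
Something along these lines from inside the VR should show where it's going
(standard tooling, nothing ACS-specific):

ps aux --sort=-rss | head -15   # top consumers by resident memory
free -m                         # used vs. cached breakdown
grep -i slab /proc/meminfo      # kernel-side (slab) usage if userspace looks clean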


- Si


From: Rene Moser 
Sent: Monday, October 1, 2018 4:13 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [4.11.1] VR memory leak?

Hi

We observe a suspicious pattern in memory usage (see the free memory graph:
https://photos.app.goo.gl/sffEmBEoZ1gbRd18A).

We restarted the VR last Friday; today, on Monday, we have less than 20%
of the 1 GB of memory left.

The memory is used memory, not cached (also see
https://photos.app.goo.gl/b9eAd3xoETvDVKzH9).

Does anyone see an identical pattern? Does anyone have a chance to test 4.11.2
system VMs against this issue?

Regards
René


Re: CEPH / CloudStack features

2018-07-27 Thread Simon Weller
They're volume-based snapshots at this point. We've looked at what it would 
take to support VM snapshots, but we're not there yet, as the memory would need 
to be stored outside of the actual volume.

Primary snapshots work well. We still need to reintroduce the code that allows 
for disabling primary-to-secondary copying of snapshots should an organization 
not want to do that.


Templates are also pre-cached into Ceph to speed up deployment of VMs, as Wido 
indicates below. This greatly reduced the secondary-to-primary copying of 
template images.
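
At the Ceph level that pre-caching looks roughly like this (a sketch; the pool
name and UUIDs are illustrative, and the snapshot name follows the
cloudstack-base-snap convention):

# the cached template is a base image with a protected snapshot...
rbd snap create cloudstack/<template-uuid>@cloudstack-base-snap
rbd snap protect cloudstack/<template-uuid>@cloudstack-base-snap
# ...and each new root volume is a copy-on-write clone of it
rbd clone cloudstack/<template-uuid>@cloudstack-base-snap cloudstack/<volume-uuid>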
Live migration works well and has since Wido introduced the Ceph features 
years ago.

We have started looking at what it would take to support Ceph volume 
replication between zones/regions, as that would be a great Business Continuity 
feature.



From: Dag Sonstebo 
Sent: Friday, July 27, 2018 8:32 AM
To: dev@cloudstack.apache.org
Subject: Re: CEPH / CloudStack features

Excellent, thanks Wido.

When you say snapshotting – is this VM snapshots, volume snapshots or both?

How about live migration, does this work?

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 27/07/2018, 13:41, "Wido den Hollander"  wrote:

Hi,

On 07/27/2018 12:18 PM, Dag Sonstebo wrote:
> Hi all,
>
> I’m trying to find out more about CEPH compatibility with CloudStack / 
KVM – i.e. trying to put together a feature matrix of what works  and what 
doesn’t compared to NFS (or other block storage platforms).
> There’s not a lot of up to date information on this – the configuration 
guide on [1] is all I’ve located so far apart from a couple of one-liners in 
the official documentation.
>
> Could I get some feedback from the Ceph users in the community?
>

Yes! So, at first, Ceph is KVM-only. Other hypervisors do not support
RBD (RADOS Block Device) from Ceph.

What is supported:

- Thin provisioning
- Discard / fstrim (Requires VirtIO-SCSI)
- Volume cloning
- Snapshots
- Disk I/O throttling (done by libvirt)

Meaning, when a template is deployed for the first time in a Primary
Storage it's written to Ceph and all other Instances afterwards are a
clone of that primary image.

You can snapshot a RBD image and then have it copied to Secondary
Storage. Now, I'm not sure if keeping the snapshot in Primary Storage
and reverting works yet, I haven't looked at that in recent times.

The snapshotting part on Primary Storage is probably something that
needs some love and attention, but otherwise I think all other features
are supported.

I would recommend a CentOS 7 or Ubuntu 16.04/18.04 hypervisor, both work
just fine with Ceph.

Wido

> Regards,
> Dag Sonstebo
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>



dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: Snapshots only on Primary Storage feature

2018-05-21 Thread Simon Weller
Yeah, I like that idea, Mike.


Also, I want to be clear that breaking out the backup to secondary into a 
separate thread is a great feature, so kudos to those that developed it. That, 
along with the ability to turn it off completely, makes for a nice overall 
improvement to snapshots.


- Si


From: Tutkowski, Mike <mike.tutkow...@netapp.com>
Sent: Saturday, May 19, 2018 8:40 PM
To: dev@cloudstack.apache.org
Subject: Re: Snapshots only on Primary Storage feature

Perhaps instead of renaming the setting, we can note in its description the 
hypervisors it currently pertains to.

> On May 19, 2018, at 7:03 PM, Glen Baars <g...@onsitecomputers.com.au> wrote:
>
> Based on the responses, I think it is a worthy feature to be retained. Maybe 
> the following changes?
>
> Rename the setting to something like kvmxen.snapshot.primaryonly ( I have no 
> idea of the naming scheme that Cloudstack uses )
> Ensure the code for vmware snapshots does not get impacted by the setting
> Record in the DB that the snapshot is only on the primary storage
> When the create template or download template features are used, use the 
> primary storage as the source.
>
> Kind regards,
> Glen Baars
>
> -Original Message-
> From: Will Stevens <wstev...@cloudops.com>
> Sent: Saturday, 19 May 2018 12:57 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Snapshots only on Primary Storage feature
>
> I think reverting the change in 4.11.1 is probably a good idea.
>
> *Will Stevens*
> Chief Technology Officer
> c 514.826.0190
>
> <https://goo.gl/NYZ8KK>
>
>
> On Fri, May 18, 2018 at 2:52 PM ilya musayev <ilya.mailing.li...@gmail.com>
> wrote:
>
>> Perhaps bring it back into 4.11.1?
>>
>> On Fri, May 18, 2018 at 9:28 AM Suresh Kumar Anaparti <
>> sureshkumar.anapa...@gmail.com> wrote:
>>
>>> Si / Will,
>>>
>>> That is just FYI, if anyone uses VMware with that flag set to false.
>>> I'm neither against the feature nor telling to rip that out.
>>>
>>> You are correct, the PR 2081 supports KVM and Xen as the volume
>>> snapshots are directly supported on them and backup operation is not
>>> tightly
>> coupled
>>> with the create operation.
>>>
>>> -Suresh
>>>
>>> On Fri, May 18, 2018 at 7:38 PM, Simon Weller
>>> <swel...@ena.com.invalid>
>>> wrote:
>>>
>>>> There are plenty of features in ACS that are particular to a
>>>> certain hypervisor (or hypervisor set), including VMware specific items.
>>>>
>>>> It was never claimed this feature worked across all hypervisors.
>>>> In addition to that, the default was to leave the existing
>>>> functionality exactly the way it was originally implemented and if
>>>> a user wished to change the functionality they could via a global config 
>>>> variable.
>>>>
>>>> Your original spec for PR 2081 in confluence states that the PR
>>>> was targeted towards KVM and Xen, so I'm confused as to why VMware
>>>> is even being mentioned here.
>>>>
>>>>
>>>> This is a major feature regression that a number of
>> organizations/service
>>>> providers are relying on and it wasn't called out when the PR was
>>> submitted.
>>>>
>>>>
>>>> 
>>>> From: Will Stevens <wstev...@cloudops.com>
>>>> Sent: Friday, May 18, 2018 6:12 AM
>>>> To: dev@cloudstack.apache.org
>>>> Subject: Re: Snapshots only on Primary Storage feature
>>>>
>>>> Just because it does not work for VMware should not be a reason to rip out
>>>> rip out
>>> the
>>>> functionality for other hypervisors where it is being used though.
>>>>
>>>> I know we also have the requirement that snapshots are not
>> automatically
>>>> replicated to secondary storage, so this feature is useful to us.
>>>>
>>>> I don't understand the rational for removing the feature just
>>>> because
>> it
>>>> does not work on VMware.
>>>>
>>>> On Fri, May 18, 2018, 6:27 AM Suresh Kumar Anaparti, <
>>>> sureshkumar.anapa...@gmail.com> wrote:
>>>>
>>>>> Si,
>>>>>
>>>>> The PR# 1697 with the global setting
>>>>> *snapshot.backup.rightafter** -
>>>>> false* doesn't
>>>>> work for VMware as create snapshot never takes a snapshot in
>>

Re: Snapshots only on Primary Storage feature

2018-05-18 Thread Simon Weller
There are plenty of features in ACS that are particular to a certain hypervisor 
(or hypervisor set), including VMware specific items.

It was never claimed this feature worked across all hypervisors. In addition to 
that, the default was to leave the existing functionality exactly the way it 
was originally implemented and if a user wished to change the functionality 
they could via a global config variable.

Your original spec for PR 2081 in confluence states that the PR was targeted 
towards KVM and Xen, so I'm confused as to why VMware is even being mentioned 
here.


This is a major feature regression that a number of organizations/service 
providers are relying on and it wasn't called out when the PR was submitted.



From: Will Stevens <wstev...@cloudops.com>
Sent: Friday, May 18, 2018 6:12 AM
To: dev@cloudstack.apache.org
Subject: Re: Snapshots only on Primary Storage feature

Just because it does not work for VMware should not be a reason to rip out the
functionality for other hypervisors where it is being used though.

I know we also have the requirement that snapshots are not automatically
replicated to secondary storage, so this feature is useful to us.

I don't understand the rationale for removing the feature just because it
does not work on VMware.

On Fri, May 18, 2018, 6:27 AM Suresh Kumar Anaparti, <
sureshkumar.anapa...@gmail.com> wrote:

> Si,
>
> The PR# 1697 with the global setting snapshot.backup.rightafter =
> false doesn't
> work for VMware as create snapshot never takes a snapshot in Primary pool,
> it just returns the snapshot uuid. The backup snapshot does the complete
> job - creates a VM snapshot with the uuid, extracts and exports the target
> volume to secondary. On demand backup snapshot doesn't work as there is no
> snapshot in primary. Also, there'll be only one entry with Primary store
> role in snapshot_store_ref, which is the latest snapshot taken for that
> volume.
>
> -Suresh
>
> On Fri, May 18, 2018 at 1:03 AM, Simon Weller <swel...@ena.com.invalid>
> wrote:
>
> > The whole point of the original PR was to optionally disable this
> > functionality.
> >
> > We don't expose views of the backup state to our customers (we have our
> > own customer interfaces) and it's a large waste of space for us to be
> > backing up tons of VM images when we have a solid primary storage
> > infrastructure that already has lots of resiliency.
> >
> >
> > I guess we're going to have to revisit this again before we can consider
> > rebasing on 4.11.
> >
> > 
> > From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
> > Sent: Thursday, May 17, 2018 2:21 PM
> > To: dev
> > Subject: Re: Snapshots only on Primary Storage feature
> >
> > Hi Si,
> >
> > No, it is not possible to disable the backup to secondary. It copies the volume
> > snapshot to secondary in a background thread using asyncBackup param (set
> > to true) and allows other operations during that time.
> >
> > I understand that the backup was on demand when any operations are
> > performed on the snapshot. But, backup during that time may take
> > considerable time (depending on the snapshot size and the network
> > bandwidth), which can result in the job timeout and the User may assume
> > that it is already Backed up based on its state, unless it is documented.
> >
> > -Suresh
> >
> > On Fri, May 18, 2018 at 12:23 AM, Simon Weller <swel...@ena.com.invalid>
> > wrote:
> >
> > > Suresh,
> > >
> > >
> > > With this new merged  PR, is it possible to disable the backup to
> > > secondary completely? I can't tell from the reference spec and we're
> not
> > on
> > > a 4.10/4.11 base yet.
> > >
> > > For the record, in the instances where a volume or template from
> snapshot
> > > was required, the backup image was copied on demand to secondary.
> > >
> > > In an ideal world, secondary storage wouldn't even be involved in most
> of
> > > these options, instead using the native clone features of the
> underlying
> > > storage.
> > >
> > >
> > > - Si
> > >
> > > 
> > > From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
> > > Sent: Thursday, May 17, 2018 1:37 PM
> > > To: dev@cloudstack.apache.org
> > > Cc: Nathan Johnson
> > > Subject: Re: Snapshots only on Primary Storage feature
> > >
> > > Hi Glen / Si,
> > >
> > > In PR# 1697, the global setting *snapshot.backup.rightafter

Re: Snapshots only on Primary Storage feature

2018-05-17 Thread Simon Weller
The whole point of the original PR was to optionally disable this functionality.

We don't expose views of the backup state to our customers (we have our own 
customer interfaces) and it's a large waste of space for us to be backing up 
tons of VM images when we have a solid primary storage infrastructure that 
already has lots of resiliency.


I guess we're going to have to revisit this again before we can consider 
rebasing on 4.11.


From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
Sent: Thursday, May 17, 2018 2:21 PM
To: dev
Subject: Re: Snapshots only on Primary Storage feature

Hi Si,

No, it is not possible to disable the backup to secondary. It copies the volume
snapshot to secondary in a background thread using asyncBackup param (set
to true) and allows other operations during that time.

I understand that the backup was on demand when any operations are
performed on the snapshot. But, backup during that time may take
considerable time (depending on the snapshot size and the network
bandwidth), which can result in the job timeout and the User may assume
that it is already Backed up based on its state, unless it is documented.

-Suresh

On Fri, May 18, 2018 at 12:23 AM, Simon Weller <swel...@ena.com.invalid>
wrote:

> Suresh,
>
>
> With this new merged  PR, is it possible to disable the backup to
> secondary completely? I can't tell from the reference spec and we're not on
> a 4.10/4.11 base yet.
>
> For the record, in the instances where a volume or template from snapshot
> was required, the backup image was copied on demand to secondary.
>
> In an ideal world, secondary storage wouldn't even be involved in most of
> these options, instead using the native clone features of the underlying
> storage.
>
>
> - Si
>
> 
> From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
> Sent: Thursday, May 17, 2018 1:37 PM
> To: dev@cloudstack.apache.org
> Cc: Nathan Johnson
> Subject: Re: Snapshots only on Primary Storage feature
>
> Hi Glen / Si,
>
> In PR# 1697, if the global setting *snapshot.backup.rightafter* is set to
> true, it'll be the default behaviour and the snapshot is copied to the
> secondary storage. If set to false, then the snapshot state transitions are
> mocked and Snapshot would be in BackedUp state even though it is not really
> in Secondary storage, which doesn't make sense. Also, that will enable to
> create a volume or template from the snapshot, which will obviously fail.
>
> This behavior was changed with the PR
> https://github.com/apache/cloudstack/pull/2081. There is a clear
> separation
> of create and backup volume snapshot operations. The global setting
> *snapshot.backup.rightafter* has been removed in PR# 2081.
>
> -Suresh
>
> On Thu, May 17, 2018 at 8:40 PM, Simon Weller <swel...@ena.com.invalid>
> wrote:
>
> > Glen,
> >
> >
> > This feature was implemented in 4.9 by my colleague Nathan Johnson.  You
> > enable it by changing the global setting  snapshot.backup.rightafter to
> > false.
> >
> >
> > The PR is referenced here: https://github.com/apache/cloudstack/pull/1697
> >
> >
> > We have the exact same use case as you, as we also use Ceph.
> >
> >
> > - Si
> >
> >
> > 
> > From: Glen Baars <g...@onsitecomputers.com.au>
> > Sent: Thursday, May 17, 2018 9:46 AM
> > To: dev@cloudstack.apache.org
> > Subject: Snapshots only on Primary Storage feature
> >
> >
> > Hello Devs,
> >
> >
> >
> > I have been thinking about a feature request and want to see what people
> > think about the use case.
> >
> >
> >
> > We use KVM + Ceph RBD as storage.
> >
> >
> >
> > Currently, when a client takes a snapshot, Cloudstack takes a Ceph
> > snapshot and then uses qemu-img to export to secondary storage. This
> > creates a full backup of the server. Clients want to use this as a daily
> > snapshot and it isn’t feasible due to the space requirements.
> >
> >
> >
> > We would like to create the snapshot only on primary storage. It is
> > replicated offsite and fault tolerant. I can see that the download
> snapshot
> > and create template features may be an issue.
> >
> >
> >
> > I have seen the below features in the recent releases and wondered if
> this
> > was the direction that the development was going.
> >
> > Separation of volume snapshot creation on primary storage and backing
> > operation on secondary storage.
> >
> > Bypass secondary storage template copy/transfer for KVM.
> >
&

Re: Snapshots only on Primary Storage feature

2018-05-17 Thread Simon Weller
Suresh,


With this new merged PR, is it possible to disable the backup to secondary 
completely? I can't tell from the reference spec and we're not on a 4.10/4.11 
base yet.

For the record, in the instances where a volume or template from snapshot was 
required, the backup image was copied on demand to secondary.

In an ideal world, secondary storage wouldn't even be involved in most of these 
options, instead using the native clone features of the underlying storage.


- Si


From: Suresh Kumar Anaparti <sureshkumar.anapa...@gmail.com>
Sent: Thursday, May 17, 2018 1:37 PM
To: dev@cloudstack.apache.org
Cc: Nathan Johnson
Subject: Re: Snapshots only on Primary Storage feature

Hi Glen / Si,

In PR# 1697, if the global setting *snapshot.backup.rightafter* is set to
true, it'll be the default behaviour and the snapshot is copied to the
secondary storage. If set to false, then the snapshot state transitions are
mocked and Snapshot would be in BackedUp state even though it is not really
in Secondary storage, which doesn't make sense. Also, that will enable to
create a volume or template from the snapshot, which will obviously fail.

This behavior was changed with the PR
https://github.com/apache/cloudstack/pull/2081. There is a clear separation
of create and backup volume snapshot operations. The global setting
*snapshot.backup.rightafter* has been removed in PR# 2081.

-Suresh

On Thu, May 17, 2018 at 8:40 PM, Simon Weller <swel...@ena.com.invalid>
wrote:

> Glen,
>
>
> This feature was implemented in 4.9 by my colleague Nathan Johnson.  You
> enable it by changing the global setting  snapshot.backup.rightafter to
> false.
>
>
> The PR is referenced here: https://github.com/apache/cloudstack/pull/1697
>
>
> We have the exact same use case as you, as we also use Ceph.
>
>
> - Si
>
>
> 
> From: Glen Baars <g...@onsitecomputers.com.au>
> Sent: Thursday, May 17, 2018 9:46 AM
> To: dev@cloudstack.apache.org
> Subject: Snapshots only on Primary Storage feature
>
>
> Hello Devs,
>
>
>
> I have been thinking about a feature request and want to see what people
> think about the use case.
>
>
>
> We use KVM + Ceph RBD as storage.
>
>
>
> Currently, when a client takes a snapshot, Cloudstack takes a Ceph
> snapshot and then uses qemu-img to export to secondary storage. This
> creates a full backup of the server. Clients want to use this as a daily
> snapshot and it isn’t feasible due to the space requirements.
>
>
>
> We would like to create the snapshot only on primary storage. It is
> replicated offsite and fault tolerant. I can see that the download snapshot
> and create template features may be an issue.
>
>
>
> I have seen the below features in the recent releases and wondered if this
> was the direction that the development was going.
>
> Separation of volume snapshot creation on primary storage and backing
> operation on secondary storage.
>
> Bypass secondary storage template copy/transfer for KVM.
>
> Kind regards,
>
> Glen Baars
>
> BackOnline Manager
>
>
>
> T  1300 733 328 / +61 8 6102 3276
>
> NZ +64 9280 3561
>
>
>
> www.timg.com<http://www.timg.com/>
>
>
> This e-mail may contain confidential and/or privileged information.If you
> are not the intended recipient (or have received this e-mail in error)
> please notify the sender immediately and destroy this e-mail. Any
> unauthorized copying, disclosure or distribution of the material in this
> e-mail is strictly forbidden.
>
>
>
> This e-mail is intended solely for the benefit of the addressee(s) and any
> other named recipient. It is confidential and may contain legally
> privileged or confidential information. If you are not the recipient, any
> use, distribution, disclosure or copying of this e-mail is prohibited. The
> confidentiality and legal privilege attached to this communication is not
> waived or lost by reason of the mistaken transmission or delivery to you.
> If you have received this e-mail in error, please notify us immediately.
>


Re: Snapshots only on Primary Storage feature

2018-05-17 Thread Simon Weller
Glen,


This feature was implemented in 4.9 by my colleague Nathan Johnson. You enable 
it by changing the global setting snapshot.backup.rightafter to false.


The PR is referenced here: https://github.com/apache/cloudstack/pull/1697


We have the exact same use case as you, as we also use Ceph.
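
For reference, flipping it is a one-liner in CloudMonkey (whether a management
server restart is needed depends on how the setting is scoped):

update configuration name=snapshot.backup.rightafter value=false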


- Si



From: Glen Baars 
Sent: Thursday, May 17, 2018 9:46 AM
To: dev@cloudstack.apache.org
Subject: Snapshots only on Primary Storage feature


Hello Devs,



I have been thinking about a feature request and want to see what people think 
about the use case.



We use KVM + Ceph RBD as storage.



Currently, when a client takes a snapshot, Cloudstack takes a Ceph snapshot and 
then uses qemu-img to export to secondary storage. This creates a full backup 
of the server. Clients want to use this as a daily snapshot and it isn’t 
feasible due to the space requirements.
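
(Under the hood that export is essentially the following; the pool and paths
are illustrative:)

rbd snap create cloudstack/<volume-uuid>@<snapshot-uuid>
qemu-img convert -f raw -O qcow2 \
    rbd:cloudstack/<volume-uuid>@<snapshot-uuid> \
    /mnt/secondary/snapshots/<account-id>/<volume-id>/<snapshot-uuid>.qcow2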



We would like to create the snapshot only on primary storage. It is replicated 
offsite and fault tolerant. I can see that the download snapshot and create 
template features may be an issue.



I have seen the below features in the recent releases and wondered if this was 
the direction that the development was going.

Separation of volume snapshot creation on primary storage and backing operation 
on secondary storage.

Bypass secondary storage template copy/transfer for KVM.

Kind regards,

Glen Baars

BackOnline Manager



T  1300 733 328 / +61 8 6102 3276

NZ +64 9280 3561



www.timg.com


This e-mail may contain confidential and/or privileged information.If you are 
not the intended recipient (or have received this e-mail in error) please 
notify the sender immediately and destroy this e-mail. Any unauthorized 
copying, disclosure or distribution of the material in this e-mail is strictly 
forbidden.



This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.


Re: Ceph RBD issues in 4.11

2018-05-17 Thread Simon Weller
Glen,


Can you open a  github issue here: https://github.com/apache/cloudstack/issues


Please include logs of the second issue and we'll take a look at it.
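
In the meantime, the dynamic features can be re-enabled on an affected image
(a sketch using the image from your output; note deep-flatten can only be set
at creation time, and the order matters):

rbd feature enable AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5 exclusive-lock
rbd feature enable AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5 object-map
rbd feature enable AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5 fast-diff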


- Si



From: Glen Baars 
Sent: Thursday, May 17, 2018 7:20 AM
To: dev@cloudstack.apache.org
Subject: Re: Ceph RBD issues in 4.11

Yes - thanks for that.

Do you have any info about the second issue?

Glen Baars

Sent from my Cyanogen phone

On 17 May 2018 8:14 PM, Rafael Weing?rtner  wrote:
This problem sounds like the one described here
https://github.com/apache/cloudstack/issues/2641.
It seems that it was already fixed and will go out in 4.11.1.0

On Thu, May 17, 2018 at 8:50 AM, Glen Baars 
wrote:

> Hello Dev,
>
> I have recently upgraded our cloudstack environment to 4.11. Mostly all
> has been smooth. ( this environment is legacy from cloud.com days! )
>
> There are some issues that I have run into:
>
> 1.Can't install any VMs from ISO ( I have seen this in the list previously
> but can't find a bug report for it ) If further reports or debug will help
> I can assist. It is easy to reproduce.
> 2.When a VM is created from a template, the RBD features are lost. More
> info below.
>
> Example of VM volume from template: -
>
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info AUBUN-KVM-CLUSTER01-SSD/
> feeb52ec-f111-4a0d-9785-23aadd7650a5
>
> rbd image 'feeb52ec-f111-4a0d-9785-23aadd7650a5':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.142926a5ee64
> format: 2
> features: layering
> flags:
> create_timestamp: Fri Apr 27 12:46:21 2018
> parent: AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-
> 52c9307e53b4@cloudstack-base-snap
> overlap: 150 GB
>
> Note the features are not the same as the parent : -
>
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info AUBUN-KVM-CLUSTER01-SSD/
> d7dcd9e4-ed55-44ae-9a71-52c9307e53b4
> rbd image 'd7dcd9e4-ed55-44ae-9a71-52c9307e53b4':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.141d274b0dc51
> format: 2
> features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
> flags:
> create_timestamp: Fri Apr 27 12:37:05 2018
>
>
> If you manually clone the volume the expected features are retained. We
> are running the latest Ceph version, KVM hosts on Ubuntu 16.04 with the
> latest Luminous qemu-img.
>
> Kind regards,
> Glen Baars
>
> This e-mail is intended solely for the benefit of the addressee(s) and any
> other named recipient. It is confidential and may contain legally
> privileged or confidential information. If you are not the recipient, any
> use, distribution, disclosure or copying of this e-mail is prohibited. The
> confidentiality and legal privilege attached to this communication is not
> waived or lost by reason of the mistaken transmission or delivery to you.
> If you have received this e-mail in error, please notify us immediately.
>



--
Rafael Weing?rtner
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.


Re: Cloudstack compatiblity Windows 2016 Server

2018-05-14 Thread Simon Weller
On KVM, selecting the "Windows PV" OS type will work fine with Windows Server 
2016. Might be worth trying on VMware.
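
If a mapping is missing for your ESXi version, adding one is a two-step sketch
in CloudMonkey (the windows9Server64Guest identifier is VMware's name for
Server 2016; verify it against your ESXi release):

list ostypes keyword='Windows Server 2016'
add guestosmapping ostypeid=<uuid-from-above> hypervisor=VMware \
    hypervisorversion=6.5 osnameforhypervisor=windows9Server64Guest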



From: Rafael Weingärtner 
Sent: Monday, May 14, 2018 11:06 AM
To: dev
Cc: users
Subject: Re: Cloudstack compatiblity Windows 2016 Server

There is one extra detail. If your hypervisor version does not support the
OS you want to use, there is no magic ACS can do.
Therefore, first you need to make sure your hypervisor supports the OS you
want. Then, you need to see if you have a guest OS entry for the OS you
want to use, and if this guest OS is mapped to a hypervisor OS mapping.

On Mon, May 14, 2018 at 1:03 PM, Suresh Kumar Anaparti <
sureshkumar.anapa...@gmail.com> wrote:

> Hi Marc,
>
> It seems the compatibility table with Cloudstack version and OS guest
> versions is not listed. Maybe you can try with a DB query using the version
> (updated column) and guest_os_hypervisor (created column) tables.
>
> Please check the current version OS compatibility using
> *listGuestOsMapping*
> API (
> https://cloudstack.apache.org/api/apidocs-4.9/apis/listGuestOsMapping.html
> )
> with
> *hypervisor* and *hypervisorversion *params. If "Windows Server 2016" OS is
> not in the mapping response and the underlying hypervisor supports it, you
> can add new OS mapping to cloudstack using *addGuestOsMapping* API (
> https://cloudstack.apache.org/api/apidocs-4.9/apis/addGuestOsMapping.html
> ).
> Make
> sure to set the *ostypeid* param to Windows OS UUID (Get this using
> *listOsTypes* API).
>
> -Suresh
>
> 2018-05-14 19:11 GMT+05:30 Marc Poll Garcia :
>
> > Hi all!
> >
> > I am using CloudStack 4.9.2 on VMWare hypervisor, and I tried to create a
> > "Windows Server 2016" OS template but i have some issues working with it,
> > sometimes network does not work properly.
> >
> > Do you know if it is not compatible with this version? is there any
> > compatibility matrix / table like:
> >
> > *Cloudstack version  | OS guest versions*
> >
> > Thanks in advance.
> >
> >
> > --
> > Marc Poll Garcia
> > Technology Infrastructure . Àrea de Serveis TIC
> > Telèfon:  93.405.43.57
> >
> > [image: UPCnet]
> >
> > --
> > This e-mail may contain confidential or legally protected information and
> > is exclusively addressed to the named person or entity. If you are not the
> > final recipient or the person responsible for receiving it, you are not
> > authorised to read, retain, modify, distribute or copy it, nor to disclose
> > its contents. If you have received this e-mail in error, please inform the
> > sender and delete the message and any attached material from your system.
> > Thank you for your cooperation.
> > --
> >
> > *** Please, don't print me. I want to remain digital ***
> > --
> >
>



--
Rafael Weingärtner


Re: 4.11.0 - can't create guest vms with RBD storage!

2018-05-03 Thread Simon Weller
Andrei,


Nathan has pushed a PR to fix this. Please see: 
https://github.com/apache/cloudstack/pull/2623

He has done some basic testing on it, but I'm sure your feedback would be 
appreciated.
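
The underlying symptom is easy to confirm against an RBD pool with plain
libvirt (pool/volume names illustrative):

virsh vol-create-as <rbd-pool> test-vol 1G --format raw    # works
virsh vol-create-as <rbd-pool> test-vol2 1G --format qcow2 # fails: only RAW
                                                           # volumes supported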


- Si




From: Simon Weller <swel...@ena.com.INVALID>
Sent: Wednesday, May 2, 2018 4:27 PM
To: dev@cloudstack.apache.org
Subject: Re: 4.11.0 - can't create guest vms with RBD storage!

We've started looking into this particular bug.

We now have a 4.11 lab setup and can reproduce this.


- Si


From: Wei ZHOU <ustcweiz...@gmail.com>
Sent: Monday, April 30, 2018 1:25 PM
To: dev@cloudstack.apache.org
Subject: Re: 4.11.0 - can't create guest vms with RBD storage!

Agreed. agent.log might be helpful for troubleshooting.

It seems to be a bug within the KVM plugin.

-Wei

2018-04-30 15:36 GMT+02:00 Rafael Weingärtner <rafaelweingart...@gmail.com>:

> We might need some extra log entries. Can you provide them?
>
> On Mon, Apr 30, 2018 at 10:14 AM, Andrei Mikhailovsky <
> and...@arhont.com.invalid> wrote:
>
> > hello gents,
> >
> > I have just realised that after upgrading to 4.11.0 we are no longer able
> > to create new VMs. This has just been noticed as we have previously used
> > ready made templates, which work just fine.
> >
> > Setup: ACS 4.11.0 (upgraded from 4.9.3), KVM + CEPH, Ubuntu 16.04 on all
> > servers
> >
> > When trying to create a new vm from an ISO image I get the following
> > error:
> >
> >
> > com.cloud.exception.StorageUnavailableException: Resource [StoragePool:2]
> > is unreachable: Unable to create Vol[3937|vm=2217|ROOT]:
> > com.cloud.utils.exception.CloudRuntimeException: org.libvirt.LibvirtException:
> > this function is not supported by the connection driver: only RAW volumes
> > are supported by this storage pool
> >
> > at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:1336)
> > at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.prepare(VolumeOrchestrator.java:1413)
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:1110)
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4927)
> > at sun.reflect.GeneratedMethodAccessor498.invoke(Unknown Source)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> > at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5090)
> > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:581)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:529)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > at java.lang.Thread.run(Thread.java:748)
> >
> >
> > My guess is that ACS tried to create a QCOW2 image type whereas it should
> > be RAW on ceph/rbd.
> >
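If that guess is right, the expected format-selection logic in the KVM plugin
boils down to something like this sketch (illustrative names only, not the
actual CloudStack classes):

public class VolumeFormatChooser {

    enum PoolType { NFS, LOCAL, RBD, CLVM }
    enum DiskFormat { RAW, QCOW2 }

    // libvirt's RBD storage backend can only create RAW volumes, so QCOW2
    // must never be requested for an RBD-backed primary storage pool.
    static DiskFormat chooseFormat(PoolType poolType) {
        switch (poolType) {
            case RBD:   // Ceph/RBD: RAW only
            case CLVM:  // block-based pools are RAW as well
                return DiskFormat.RAW;
            default:    // file-based pools default to QCOW2
                return DiskFormat.QCOW2;
        }
    }

    public static void main(String[] args) {
        System.out.println(chooseFormat(PoolType.RBD)); // RAW
        System.out.println(chooseFormat(PoolType.NFS)); // QCOW2
    }
}
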
> > I am really struggling to understand how this bug in a function of MAJOR
> > importance could have been missed during the tests run by developers and
> > the community before making the final release. Anyways, I hope the fix will
> > make it into the 4.11.1 release, otherwise it's really messed up!
> >
> > Cheers
> >
> > Andrei
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: John Kinsella and Wido den Hollander now ASF members

2018-05-02 Thread Simon Weller
Congrats to both of you!



From: Daan Hoogland 
Sent: Wednesday, May 2, 2018 11:53 AM
To: dev
Subject: Re: John Kinsella and Wido den Hollander now ASF members

Wow, nice surprise

On Wed, 2 May 2018, 18:38 Dag Sonstebo,  wrote:

> Congratulations both!
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 02/05/2018, 17:33, "Nitin Kumar Maharana" <
> nitinkumar.mahar...@accelerite.com> wrote:
>
> Congratulations!!
>
>
> > On 02-May-2018, at 9:50 PM, Khosrow Moossavi 
> wrote:
> >
> > That's awesome! Congratulations!
> >
> >
> >
> >
> > On Wed, May 2, 2018 at 12:19 PM Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> Congratulations, guys! :-)
> >>
> >>> On May 2, 2018, at 9:58 AM, David Nalley  wrote:
> >>>
> >>> Hi folks,
> >>>
> >>> As noted in the press release[1] John Kinsella and Wido den
> Hollander
> >>> have been elected to the ASF's membership.
> >>>
> >>> Members are the 'shareholders' of the foundation, elect the board
> of
> >>> directors, and help guide the future of the ASF.
> >>>
> >>> Congrats to both of you, very well deserved.
> >>>
> >>> --David
> >>>
> >>> [1] https://s.apache.org/ysxx
> >>
>
> DISCLAIMER
> ==
> This e-mail may contain privileged and confidential information which
> is the property of Accelerite, a Persistent Systems business. It is
> intended only for the use of the individual or entity to which it is
> addressed. If you are not the intended recipient, you are not authorized to
> read, retain, copy, print, distribute or use this message. If you have
> received this communication in error, please notify the sender and delete
> all copies of this message. Accelerite, a Persistent Systems business does
> not accept any liability for virus infected mails.
>
>
>
>


Re: [DISCUSS] VR upgrade downtime reduction

2018-05-01 Thread Simon Weller
Yes, nice work!





From: Daan Hoogland 
Sent: Tuesday, May 1, 2018 5:28 AM
To: us...@cloudstack.apache.org
Cc: dev
Subject: Re: [DISCUSS] VR upgrade downtime reduction

good work Rohit,
I'll review 2508 https://github.com/apache/cloudstack/pull/2508

On Tue, May 1, 2018 at 12:08 PM, Rohit Yadav 
wrote:

> All,
>
>
> A short-term solution to VR upgrade or network restart (with cleanup=true)
> has been implemented:
>
>
> - The strategy for redundant VRs builds on top of Wei's original patch
> where backup routers are removed and replaced on a rolling basis. The
> downtime I saw was usually 0-2 seconds, and theoretically downtime is
> maximum of [0, 3*advertisement interval + skew seconds] or 0-10 seconds
> (with cloudstack's default of 1s advertisement interval).
>
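As a back-of-the-envelope check on that bound (assuming RFC 3768 VRRP timers;
the backup priority of 100 below is an assumed value, not a confirmed
CloudStack default, and the real-world figure also includes detection and
reprogramming overhead):

public class VrrpFailoverWindow {
    public static void main(String[] args) {
        double advertisementInterval = 1.0; // seconds; CloudStack default per above
        int backupPriority = 100;           // assumed backup router priority
        // RFC 3768: a backup declares the master dead after
        // 3 * advertisement_interval + skew, where skew = (256 - priority) / 256.
        double skew = (256.0 - backupPriority) / 256.0;
        double masterDownInterval = 3 * advertisementInterval + skew;
        System.out.printf("worst-case VRRP failover window: %.2f s%n", masterDownInterval);
    }
}
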
>
> - For non-redundant routers, I've implemented a strategy where first a new
> VR is deployed, then the old VR is powered off/destroyed, and the new VR is
> again re-programmed. With this strategy, two identical VRs may be up for a
> brief moment (a few seconds) where both can serve traffic; however, the new VR
> performs arp-ping on its interfaces to update neighbours. After the old VR
> is removed, the new VR is re-programmed, which among many things performs
> another arp-ping. The theoretical downtime is therefore limited by the
> arp-cache refresh, which can take up to 30 seconds. In my experiments against
> various VMware, KVM and XenServer versions, I found that the downtime was
> indeed less than 30s, usually between 5-20 seconds. Compared to older ACS
> versions, especially in cases where VR deployment requires a full volume copy
> (as in VMware), a 10x-12x improvement was seen.
>
>
> Please review and test the following PR, which has test details, benchmarks,
> and some screenshots:
>
> https://github.com/apache/cloudstack/pull/2508
>
>
> Future work can be driven towards making all VRs redundancy-enabled by
> default, which can allow for a firewall + connection-state transfer
> (conntrackd + VRRP2/3 based) during rolling reboots.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Daan Hoogland 
> Sent: Thursday, February 8, 2018 3:11:51 PM
> To: dev
> Subject: Re: [DISCUSS] VR upgrade downtime reduction
>
> To stop the vote and continue the discussion: I personally want unification
> of all router vms: VR, 'shared network', rVR, VPC, rVPC, and eventually the
> one we want to create for 'enterprise topology hand-off points'. And I
> think we have some level of consensus on that but the path there is a
> concern for Wido and for some of my colleagues as well, and rightly so. One
> issue is upgrades from older versions.
>
> I see the common scenario as follows:
> + redundancy is deprecated and only number of instances remain.
> + an old VR is replicated in memory by a redundancy-enabled version, that
> will be in a state of running but inactive.
> - the old one will be destroyed while a ping is running
> - as soon as the ping fails more than three times in a row (this might have
> to have a hypervisor specific implementation or require a helper vm)
> + the new one is activated
>
> after this upgrade Wei's and/or Remi's code will do the work for any
> following upgrade.
>
> flames, please
>
>
>
> On Wed, Feb 7, 2018 at 12:17 PM, Nux!  wrote:
>
> > +1 too
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> >
>
> - Original Message -
> > > From: "Rene Moser" 
> > > To: "dev" 
> > > Sent: Wednesday, 7 February, 2018 10:11:45
> > > Subject: Re: [DISCUSS] VR upgrade downtime reduction
> >
> > > On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> > >> Hi Daan,
> > >>
> > >> In my opinion the biggest issue is the fact that there are a lot of
> > different
> > >> code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc. That's
> > why you
> > >> cannot simply switch from a single VPC to a redundant VPC for example.
> > >>
> > >> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a
> > VPC with a
> > >> single tier and made sure all features are supported. Next we merged
> > the single
> > >> and redundant VPC code paths. The idea here is that redundancy or not
> > should
> > >> only be a difference in the number of routers. Code should be the
> same.
> > A
> > >> single router, is also "master" but there just is no "backup".
> > >>
> > >> That simplifies things A LOT, as keepalived is now the master of the
> > whole
> > >> thing. No more assigning ip addresses in Python, but leave that to
> > keepalived
> > >> instead. Lots of code deleted. Easier to maintain, way more stable. We
> > just
> > >> released Cosmic 6 that has this feature and 

Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread Simon Weller
Thanks for all of your hard work Wido, we really appreciate it.


Congratulations Mike!


- Si


From: Wido den Hollander 
Sent: Monday, March 26, 2018 9:11 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Welcoming Mike as the new Apache CloudStack VP

Hi all,

It's been a great pleasure working with the CloudStack project as the
ACS VP over the past year.

A big thank you from my side for everybody involved with the project in
the last year.

Hereby I would like to announce that Mike Tutkowski has been elected to
replace me as the Apache CloudStack VP in our annual VP rotation.

Mike has a long history with the project, and I am happy to welcome him
as the new VP for CloudStack.

Welcome Mike!

Thanks,

Wido


Re: [VOTE] Move to Github issues

2018-03-26 Thread Simon Weller
+1 (binding).



From: Rohit Yadav 
Sent: Monday, March 26, 2018 1:33 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [VOTE] Move to Github issues

All,

Based on the discussion last week [1], I would like to start a vote to put
the proposal into effect:

- Enable Github issues, wiki features in CloudStack repositories.
- Both user and developers can use Github issues for tracking issues.
- Developers can use #id references while fixing an existing/open issue in
a PR [2]. PRs can be sent without requiring to open/create an issue.
- Use Github milestone to track both issues and pull requests towards a
CloudStack release, and generate release notes.
- Relax requirement for JIRA IDs, JIRA still to be used for historical
reference and security issues. Use of JIRA will be discouraged.
- The current requirement of two(+) non-author LGTMs will continue for PR
acceptance. The two(+) PR non-authors can advise resolution to any issue
that we've not already discussed/agreed upon.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Vote will be open for 120 hours. If the vote passes the following actions
will be taken:
- Get Github features enabled from ASF INFRA
- Update CONTRIBUTING.md and other relevant cwiki pages.
- Update project website

[1] https://markmail.org/message/llodbwsmzgx5hod6
[2] https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/

Regards,
Rohit Yadav


Re: New committer: Dag Sonstebo

2018-03-20 Thread Simon Weller
Congrats Dag, much deserved!


From: John Kinsella 
Sent: Tuesday, March 20, 2018 8:58 AM
To: 
Subject: New committer: Dag Sonstebo

The Project Management Committee (PMC) for Apache CloudStack has
invited Dag Sonstebo to become a committer and we are pleased to
announce that he has accepted.

I’ll take a moment here to remind folks that being an ASF committer
isn’t purely about code - Dag has been helping out for quite a while
on users@, and seems to have a strong interest around ACS and the
community. We welcome this activity, and encourage others to help
out as they can - it doesn’t necessarily have to be purely code-related.

Being a committer enables easier contribution to the project since
there is no need to go via the patch submission process. This should
enable better productivity.

Please join me in welcoming Dag!

John


Re: Notice that Gabriel Bräscher now works at PCextreme

2018-03-20 Thread Simon Weller
Great, congrats Gabriel!





From: Paul Angus 
Sent: Tuesday, March 20, 2018 9:08 AM
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: RE: Notice that Gabriel Bräscher now works at PCextreme

Awesome!


Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




-Original Message-
From: Rohit Yadav 
Sent: 20 March 2018 14:04
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: Re: Notice that Gabriel Bräscher now works at PCextreme

Congrats Gabriel. Great now you can resume work on your PRs.


- Rohit


From: Wido den Hollander 
Sent: Tuesday, March 20, 2018 7:20:57 PM
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: Notice that Gabriel Bräscher now works at PCextreme

Hi,

Just wanted to let you know that Gabriel Bräscher started working at PCextreme 
this week.

He'll be committing and developing on CloudStack for PCextreme and the 
community.

Just so everybody knows that we are colleagues now.

Let's make CloudStack even better!

Wido

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue






RE: I'd like to introduce you to Khosrow

2018-02-22 Thread Simon Weller
Welcome Khosrow.

Simon Weller/615-312-6068

-Original Message-
From: Khosrow Moossavi [kmooss...@cloudops.com]
Received: Thursday, 22 Feb 2018, 7:00PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: I'd like to introduce you to Khosrow

Thank you Pierre-Luc,
I'm super excited to be part of the community.

On Feb 22, 2018 18:42, "Rafael Weingärtner" <rafaelweingart...@gmail.com>
wrote:

> Welcome!
> Congratulations on the great job done so far...
>
> On Thu, Feb 22, 2018 at 8:40 PM, Pierre-Luc Dion <pd...@cloudops.com>
> wrote:
>
> > Hi fellow colleagues,
> >
> > I might be a bit late with this email...
> >
> > I'd like to introduce Khosrow Moossavi, who recently joined our team and
> > whose focus is currently exclusively on dev for CloudStack with cloud.ca.
> >
> > Our 2 current priorities are:
> > - fixing VRs/SVMs to run as HVM VMs in XenServer.
> > - redesigning, or rewriting, the remote management VPN for VPC; a PoC is in
> > progress for IKEv2...
> >
> >
> >
> > Some of you might have interacted with him already.
> >
> >
> > Also, we are going to be more active for the upcoming 4.12 release.
> >
> >
> > Cheers!
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: HA issues

2018-02-19 Thread Simon Weller
Also these -

https://github.com/myENA/cloudstack/pull/20/commits/1948ce5d24b87433ae9e8f4faebdfc20b56b751a


https://github.com/myENA/cloudstack/pull/12/commits






From: Andrija Panic <andrija.pa...@gmail.com>
Sent: Monday, February 19, 2018 5:23 AM
To: dev
Subject: Re: HA issues

Hi Simon,

a big thank you for this, will have our devs check this!

Thanks!

On 19 February 2018 at 09:02, Simon Weller <swel...@ena.com.invalid> wrote:

> Andrija,
>
>
> We pushed quite a few PRs on the exception and lockup issues related to
> Ceph in the agent.
>
>
> We have a PR for the deletion issue. See if you have it pulled into your
> release - https://github.com/myENA/cloudstack/pull/9

context cleanup by leprechau · Pull Request #9 · 
myENA/cloudstack<https://github.com/myENA/cloudstack/pull/9>
github.com
cleanup rbd image and rados context even if exceptions are thrown in 
deletePhysicalDisk routine
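
The shape of that fix, roughly (a simplified sketch against the rados-java
bindings; the monitor address and key are placeholders, and this is not the
literal PR diff):

import com.ceph.rados.IoCTX;
import com.ceph.rados.Rados;
import com.ceph.rbd.Rbd;

public class RbdDeleteSketch {
    // Delete an RBD image, releasing the RADOS context even when removal throws.
    // Before the fix, an exception here leaked the context, which could
    // eventually wedge libvirt and disconnect the agent.
    public static void deletePhysicalDisk(String pool, String image) throws Exception {
        Rados rados = new Rados("admin");
        rados.confSet("mon_host", "10.0.0.1"); // placeholder monitor address
        rados.confSet("key", "PLACEHOLDER");   // placeholder cephx key
        rados.connect();
        IoCTX ioCtx = null;
        try {
            ioCtx = rados.ioCtxCreate(pool);
            new Rbd(ioCtx).remove(image);      // may throw RbdException
        } finally {
            if (ioCtx != null) {
                rados.ioCtxDestroy(ioCtx);     // always release the context
            }
        }
    }
}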



>
>
> - Si
>
>
>
>
> 
> From: Andrija Panic <andrija.pa...@gmail.com>
> Sent: Saturday, February 17, 2018 1:49 PM
> To: dev
> Subject: Re: HA issues
>
> Hi Sean,
>
> (we have 2 threads interleaving on the libvirt lockd..) - so, did you
> manage to understand what can cause the Agent Disconnect in most cases, for
> you specifically? Is there any software (CloudStack) root cause
> (disregarding i.e. networking issues etc)
>
> Just our examples, which you should probably not have:
>
> We had a Ceph cluster running (with ACS), and there any exception in librbd
> would crash the JVM and the agent, but this has mostly been fixed -
> now we get e.g. an agent disconnect when ACS tries to delete a volume on Ceph
> (and for some reason doesn't succeed within 30 minutes, so volume deletion
> fails) - then libvirt gets completely stuck (even "virsh list" doesn't
> work)... so the agent gets disconnected eventually.
>
> It would be good to get rid of agent disconnections in general, obviously
> :) so that is why I'm asking (you are on NFS, so would like to see your
> experience here).
>
> Thanks
>
> On 16 February 2018 at 21:52, Sean Lair <sl...@ippathways.com> wrote:
>
> > We were in the same situation as Nux.
> >
> > In our test environment we hit the issue with VMs not getting fenced and
> > coming up on two hosts because of VM HA.   However, we updated some of
> the
> > logic for VM HA and turned on libvirtd's locking mechanism.  Now we are
> > working great w/o IPMI.  The locking stops the VMs from starting
> elsewhere,
> > and everything recovers very nicely when the host starts responding
> again.
> >
> > We are on 4.9.3 and haven't started testing with 4.11 yet, but it may work
> > alongside IPMI just fine - it would just affect the fencing.
> > However, we *currently* prefer how we are doing it now, because if the
> > agent stops responding, but the host is still up, the VMs continue
> running
> > and no actual downtime is incurred.  Even when VM HA attempts to power on
> > the VMs on another host, it just fails the power-up and the VMs continue
> to
> > run on the "agent disconnected" host. The host goes into alarm state and
> > our NOC can look into what is wrong the agent on the host.  If IPMI was
> > enabled, it sounds like it would power off the host (fence) and force
> > downtime for us even if the VMs were actually running OK - and just the
> > agent is unreachable.
> >
> > I plan on submitting our updates via a pull request at some point.  But I
> > can also send the updated code to anyone that wants to do some testing
> > before then.
> >
> > -Original Message-
> > From: Marcus [mailto:shadow...@gmail.com]
> > Sent: Friday, February 16, 2018 11:27 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: HA issues
> >
> > From your other emails it sounds as though you do not have IPMI
> > configured, nor host HA enabled, correct? In this case, the correct thing
> > to do is nothing. If CloudStack cannot guarantee the VM state (as is the
> > case with an unreachable hypervisor), it should do nothing, for fear of
> > causing a split brain and corrupting the VM disk (VM running on two
> hosts).
> >
> > Clustering and fencing is a tricky proposition. When CloudStack (or any
> > other cluster manager) is not configured to or cannot guarantee state
> then
> > things will simply lock up, in this case your HA VM on your broken
> > hypervisor will not run elsewhere. This has been the case for a long time
> > with CloudStack, HA would only start a VM after the original hypervisor
> > agent came back and reported no VM is running.

Re: HA issues

2018-02-19 Thread Simon Weller
Andrija,


We pushed quite a few PRs on the exception and lockup issues related to Ceph in 
the agent.


We have a PR for the deletion issue. See if you have it pulled into your 
release - https://github.com/myENA/cloudstack/pull/9


- Si





From: Andrija Panic 
Sent: Saturday, February 17, 2018 1:49 PM
To: dev
Subject: Re: HA issues

Hi Sean,

(we have 2 threads interleaving on the libvirt lockd..) - so, did you
manage to understand what can cause the Agent Disconnect in most cases, for
you specifically? Is there any software (CloudStack) root cause
(disregarding i.e. networking issues etc)

Just our examples, which you should probably not have:

We had a Ceph cluster running (with ACS), and there any exception in librbd
would crash the JVM and the agent, but this has mostly been fixed -
now we get e.g. an agent disconnect when ACS tries to delete a volume on Ceph
(and for some reason doesn't succeed within 30 minutes, so volume deletion
fails) - then libvirt gets completely stuck (even "virsh list" doesn't
work)... so the agent gets disconnected eventually.

It would be good to get rid of agent disconnections in general, obviously
:) so that is why I'm asking (you are on NFS, so would like to see your
experience here).

Thanks

On 16 February 2018 at 21:52, Sean Lair  wrote:

> We were in the same situation as Nux.
>
> In our test environment we hit the issue with VMs not getting fenced and
> coming up on two hosts because of VM HA.   However, we updated some of the
> logic for VM HA and turned on libvirtd's locking mechanism.  Now we are
> working great w/o IPMI.  The locking stops the VMs from starting elsewhere,
> and everything recovers very nicely when the host starts responding again.
>
> We are on 4.9.3 and haven't started testing with 4.11 yet, but it may work
> alongside IPMI just fine - it would just affect the fencing.
> However, we *currently* prefer how we are doing it now, because if the
> agent stops responding, but the host is still up, the VMs continue running
> and no actual downtime is incurred.  Even when VM HA attempts to power on
> the VMs on another host, it just fails the power-up and the VMs continue to
> run on the "agent disconnected" host. The host goes into alarm state and
> our NOC can look into what is wrong the agent on the host.  If IPMI was
> enabled, it sounds like it would power off the host (fence) and force
> downtime for us even if the VMs were actually running OK - and just the
> agent is unreachable.
>
> I plan on submitting our updates via a pull request at some point.  But I
> can also send the updated code to anyone that wants to do some testing
> before then.
>
> -Original Message-
> From: Marcus [mailto:shadow...@gmail.com]
> Sent: Friday, February 16, 2018 11:27 AM
> To: dev@cloudstack.apache.org
> Subject: Re: HA issues
>
> From your other emails it sounds as though you do not have IPMI
> configured, nor host HA enabled, correct? In this case, the correct thing
> to do is nothing. If CloudStack cannot guarantee the VM state (as is the
> case with an unreachable hypervisor), it should do nothing, for fear of
> causing a split brain and corrupting the VM disk (VM running on two hosts).
>
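The resulting decision table is simple (a sketch with made-up names, not the
actual CloudStack classes):

public class HaDecisionSketch {

    enum HostState { AGENT_CONNECTED, AGENT_DISCONNECTED, FENCED }

    // Only restart a VM elsewhere once its old host is provably down;
    // an unreachable agent alone never justifies a restart.
    static String decide(HostState host) {
        switch (host) {
            case FENCED:
                return "restart the VM on another host"; // state is guaranteed
            case AGENT_DISCONNECTED:
                return "do nothing";                     // avoid split brain / disk corruption
            default:
                return "leave the VM where it is";
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(HostState.AGENT_DISCONNECTED)); // do nothing
    }
}
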
> Clustering and fencing is a tricky proposition. When CloudStack (or any
> other cluster manager) is not configured to or cannot guarantee state then
> things will simply lock up, in this case your HA VM on your broken
> hypervisor will not run elsewhere. This has been the case for a long time
> with CloudStack, HA would only start a VM after the original hypervisor
> agent came back and reported no VM is running.
>
> The new feature, from what I gather, simply adds the possibility of
> CloudStack being able to reach out and shut down the hypervisor to
> guarantee state. At that point it can start the VM elsewhere. If something
> fails in that process (IPMI unreachable, for example, or bad credentials),
> you're still going to be stuck with a VM not coming back.
>
> It's the nature of the thing. I'd be wary of any HA solution that does not
> reach out and guarantee state via host or storage fencing before starting a
> VM elsewhere, as it will be making assumptions. Its entirely possible a VM
> might be unreachable or unable to access it storage for a short while, a
> new instance of the VM is started elsewhere, and the original VM comes back.
>
> On Wed, Jan 17, 2018 at 9:02 AM Nux!  wrote:
>
> > Hi Rohit,
> >
> > I've reinstalled and tested. Still no go with VM HA.
> >
> > What I did was to kernel panic that particular HV ("echo c >
> > /proc/sysrq-trigger" <- this is a proper way to simulate a crash).
> > What happened next is the HV got marked as "Alert", the VM on it was
> > all the time marked as "Running" and it was not migrated to another HV.
> > Once the panicked HV has booted back the VM reboots and becomes
> available.
> >
> > I'm running on CentOS 7 mgmt + HVs and NFS primary and 

Re: System VMs not migrating when host down

2018-02-15 Thread Simon Weller
Hey Andrija,


So it sounds like your primary storage isn't enforcing an exclusive lock.  How 
is your storage exposed to ACS?


We've found that HA doesn't work at all with a host failure on KVM, as those 
VMs will never be restarted until the host is either recovered, or the host is 
removed from ACS. We are running a heavily patched 4.8.

- Si

From: Andrija Panic 
Sent: Wednesday, February 14, 2018 3:22 AM
To: dev
Subject: Re: System VMs not migrating when host down

Humble opinion (until HOST HA is ready in 4.11 if not mistaken?), avoid
using HA option for VMs  - avoid setting the  "Offer HA" option on any
compute/service offerings, since we did end  up (was it ACS 4.5 or 4.8,
can't remember now) having 2 copies of SAME VM running on 2 different
hosts...imagine storage/volume corruption...this happened a few times for
us.

HOST HA looks like really a nice thing, I have not tested that yet...but
should completely solve the problem.

On 14 February 2018 at 10:14, Paul Angus  wrote:

> Hi Sean,
>
> The 'problem' with VM HA in KVM is that it relies on the parent host agent
> to be connected to report that the VM is down.  We cannot assume that just
> because a host agent is disconnected, that the VMs on that host are not
> running.
>
> This is where HOST HA comes in, this feature detects loss of connection to
> the agent and then tries to determine if the VMs on that host are active
> and then attempts some corrective action.
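
Sketched as a sequence (illustrative pseudologic under my reading of the
feature, not the actual HOST HA implementation):

public class HostHaFlowSketch {

    interface OutOfBandChecker {
        boolean hostShowsVmActivity(); // e.g. activity/health checks
        boolean powerOffHost();        // e.g. fence via IPMI
    }

    // An unreachable agent alone proves nothing: verify activity
    // out-of-band, fence, and only then restart the VMs elsewhere.
    static String onAgentDisconnect(OutOfBandChecker checker) {
        if (checker.hostShowsVmActivity()) {
            return "VMs appear to be running: alert the operator, do not restart";
        }
        if (checker.powerOffHost()) {
            return "host fenced: safe to restart HA VMs on other hosts";
        }
        return "state cannot be guaranteed: do nothing";
    }
}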
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Sean Lair [mailto:sl...@ippathways.com]
> Sent: 13 February 2018 23:06
> To: dev@cloudstack.apache.org
> Subject: System VMs not migrating when host down
>
> Hi all,
>
> We are testing VM HA and are having a problem with our system VMs
> (secondary storage and console) not being started up on another host when a
> host fails.
>
> Shouldn't the system VMs be VM HA-enabled?  Currently they are just in an
> "Alert" agent state, but never migrate.  We are currently running 4.9.3.
>
>
> Thanks
> Sean
>



--

Andrija Panić


Re: [4.11] Testing New "Ability to disable primary storage to secondary storage backups for snapshots" Feature

2018-01-25 Thread Simon Weller
I'm not sure why this was removed from public.  We haven't tested the feature 
set since the below PR was merged.


Nathan,


Thoughts on this?





From: Rohit Yadav 
Sent: Thursday, January 25, 2018 8:20 AM
To: dev@cloudstack.apache.org
Subject: Re: [4.11] Testing New "Ability to disable primary storage to 
secondary storage backups for snapshots" Feature

Hi Ozhan,


The global setting was removed in the following PR; however, you can get the
feature/ability via the API:

https://github.com/apache/cloudstack/pull/2081


Also see:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Separate+creation+and+backup+operations+for+a+volume+snapshot
Separate creation and backup operations for a volume 
...
cwiki.apache.org
DB Changes. NA UI Flow. A checkbox will be added to the "Create Volume 
Snapshot" dialog box, which when checked, snapshot and copy operations will be 
separated and if ...





Please test and share whether the changes introduced by the above are
acceptable, or whether you think this is blocker(ish).
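
For anyone who wants to try this against RC1, a rough sketch of driving the
API directly is below. The "asyncbackup" parameter name is my reading of the
wiki page above (treat it as an assumption and verify against your build's
apidocs); the endpoint, keys and volume id are placeholders.

import java.net.URLEncoder;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class CreateSnapshotUrl {
    public static void main(String[] args) throws Exception {
        String secretKey = "SECRET_KEY";              // placeholder
        Map<String, String> params = new TreeMap<>(); // sorted, as signing requires
        params.put("apikey", "API_KEY");              // placeholder
        params.put("command", "createSnapshot");
        params.put("volumeid", "VOLUME_UUID");        // placeholder
        params.put("asyncbackup", "true");            // assumed flag: snapshot now, back up later
        params.put("response", "json");

        StringBuilder qs = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (qs.length() > 0) qs.append('&');
            qs.append(e.getKey()).append('=').append(URLEncoder.encode(e.getValue(), "UTF-8"));
        }
        // Standard CloudStack API signing: lowercase the sorted query string,
        // HMAC-SHA1 it with the secret key, then Base64- and URL-encode it.
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        String sig = Base64.getEncoder()
                .encodeToString(mac.doFinal(qs.toString().toLowerCase().getBytes("UTF-8")));
        System.out.println("http://mgmt-server:8080/client/api?" + qs
                + "&signature=" + URLEncoder.encode(sig, "UTF-8"));
    }
}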


- Rohit









From: Özhan Rüzgar Karaman 
Sent: Wednesday, January 24, 2018 4:50:41 PM
To: dev@cloudstack.apache.org
Subject: [4.11] Testing New "Ability to disable primary storage to secondary 
storage backups for snapshots" Feature

Hi;
I plan to test "Ability to disable primary storage to secondary storage
backups for snapshots" feature on 4.11 rc1 release. For this test i think i
need to update "snapshot.backup.rightafter" parameter from global settings
but i could not find the parameter on global configuration there.

Is this normal?

Thanks
Özhan

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

2018-01-18 Thread Simon Weller
All,


We're currently working on getting 4.11 stood up on hardware for testing. An 
extension would certainly be helpful to us.


From: Nux! 
Sent: Wednesday, January 17, 2018 1:07 PM
To: dev
Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

The extension is welcome!

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Boris Stoyanov" 
> To: "dev" 
> Sent: Wednesday, 17 January, 2018 18:24:20
> Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

> Yes Rohit, I tried another browser and I'm not able to log in..
>
> I’m +1 on the extend but unfortunately -1 cause of this blocker.
>
> Bobby.
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> On 17 Jan 2018, at 18:24, Rohit Yadav
> > wrote:
>
> The 72hrs window is more of a guideline than a rule, without lazy consensus I
> don't think we've any choice here, so Monday it is.
>
> Kris - thanks. If we need an RC2 and your proposed issues are blocker/critical,
> we can consider them; meanwhile, engage with the community to get them reviewed.
>
> Bobby - can you attempt a login in incognito mode or in a different browser
> after upgrading to 4.11 from 4.5, to rule out a caching issue?
>
> Regards.
>
> Get Outlook for Android
>
> 
> From: Tutkowski, Mike
> >
> Sent: Wednesday, January 17, 2018 8:48:28 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)
>
> Or perhaps just the first RC should have a longer window?
>
> On 1/17/18, 8:12 AM, "Tutkowski, Mike"
> > wrote:
>
>   If all of our testing were completely in an automated fashion, then I would
>   agree that the 72-hour window is sufficient. However, we don’t have that 
> kind
>   of automated coverage and people aren’t always able to immediately begin
>   testing things out like migrating from their version of CloudStack to the 
> new
>   one. That being the case, 72 hours does seem (at least for where we are now 
> as
>   a project in terms of automated testing coverage) a bit short.
>
>   On 1/17/18, 7:52 AM, "Daan Hoogland"
>   > wrote:
>
>   The 72 hours is to make sure all stakeholders had a chance to glance. 
> Testing is
>   supposed to have happened before. We have a culture of testing only 
> after
>   RC-cut, which is part of the problem. The long duration a single test run
>   takes is another part. And finally, in this case there is the new mindblow
>   called meltdown. I think in general we should try to keep the 72 hours 
> but for
>   this release it is not realistic.
>
>   On 17/01/2018, 15:48, "Rene Moser"
>   > wrote:
>
>   On 01/17/2018 03:34 PM, Daan Hoogland wrote:
> People, People,
>
> a lot of us are busy with meltdown fixes and a full component test takes about
> the 72 hours that we have for our voting, I propose to extend the vote period
> until at least Monday.
>
>   +1
>
>   I wonder where this 72-hour window comes from... Is it just me or,
>   based on the amount of changes and "things" to test, I would like to
>   expect a window in the size of 7-14 days ...?
>
>   René
>
>
>
>
>
>
>
>
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue


Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Simon Weller
They do not. They receive a link-local ip address that is used for host agent 
to VR communication. All VR commands are proxied through the host agent. Host 
agent to VR communication is over SSH.
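
For the KVM case, that channel looks roughly like the sketch below (a JSch
illustration; port 3922 and the key path reflect the usual KVM host/VR setup
as I understand it, and the link-local address and command are placeholders):

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class VrSshSketch {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/root/.ssh/id_rsa.cloud");  // key injected on the KVM host
        // VRs listen for SSH on the link-local interface, port 3922.
        Session session = jsch.getSession("root", "169.254.3.95", 3922); // placeholder IP
        session.setConfig("StrictHostKeyChecking", "no"); // link-local, no known_hosts
        session.connect(30000);

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("ip addr show eth0"); // placeholder for a VR config command
        channel.connect();
        // ... read channel.getInputStream() for the command output ...
        channel.disconnect();
        session.disconnect();
    }
}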



From: Rafael Weingärtner 
Sent: Friday, January 12, 2018 1:42 PM
To: dev
Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer

But we are already using this design in VMware deployments (not sure about
KVM). The management network is already an isolated network only used by
system VMs and ACS. Unless we are attacked by some internal agent, we are
safe from customer attacks through management networks. Also, we can (if we
don't already) restrict access to only these management interfaces in
system VMs (VRs, SSVM, console proxy and others to come).



Can someone confirm if VRs receive management IPs in KVM deployments?

On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:

> The reason why we used link local in the first place was to isolate the VR
> from directly accessing the management network. This provides another layer
> of security in case of a VR exploit. This will also have a side effect of
> making all VRs visible to each other. Are we okay accepting this?
>
> Thanks,
> -Syed
>
> On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:
>
> > dom0 already has a DHCP server listening for requests on internal
> > management networks. I'd be wary trying to manage it from an external
> > service like cloudstack lest it get reset upon XenServer patch. This
> alone
> > makes me favor option #2. I also think option #2 simplifies network
> design
> > for users.
> >
> > Agreed on making this as consistent across flows as possible.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > It looks reasonable to manage VRs via management IP network. We should
> > > focus on using the same work flow for different deployment scenarios.
> > >
> > >
> > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We need to start an architecture discussion about running SystemVMs and
> > > > Virtual Routers as HVM instances on XenServer. With the recent
> > > > Meltdown-Spectre issues, one of the mitigation steps is currently to run
> > > > VMs as HVM on XenServer to self-contain a user-space attack from a guest OS.
> > > >
> > > > A recent hotfix from Citrix XenServer (XS71ECU1009) enforces VMs starting
> > > > as HVM. This is currently problematic for Virtual Routers and SystemVMs
> > > > because CloudStack uses the PV "OS boot Options" to preconfigure the VR
> > > > eth0: cloud_link_local. While using HVM, the "OS boot Options" field is
> > > > not accessible to the VM, so the VR fails to be properly configured.
> > > >
> > > > I currently see 2 potential approaches for this:
> > > > 1. Run a DHCP server in dom0, managed by CloudStack, so the VR eth0 would
> > > > receive its network configuration at boot.
> > > > 2. Change the current way of managing VRs and SVMs on XenServer,
> > > > potentially doing the same as with VMware: use pod management networks
> > > > and assign a POD IP to each VR.
> > > >
> > > > I don't know how it's implemented in KVM; maybe cloning the KVM approach
> > > > would work too. Could someone explain how it works on this thread?
> > > >
> > > > I'm a bit of a fan of the potential #2 approach because it could
> > > > facilitate VR monitoring and logging, although a migration path for an
> > > > existing cloud could be complex.
> > > >
> > > > Cheers,
> > > >
> > > >
> > > > Pierre-Luc
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



--
Rafael Weingärtner


Re: [VOTE] Clean up old and obsolete branches.

2018-01-02 Thread Simon Weller
+0


From: Daan Hoogland 
Sent: Tuesday, January 2, 2018 12:19 PM
To: dev
Subject: Re: [VOTE] Clean up old and obsolete branches.

0

On Tue, Jan 2, 2018 at 1:51 PM, Gabriel Beims Bräscher  wrote:

> +1
>
> 2018-01-02 9:46 GMT-02:00 Rafael Weingärtner  >:
>
> > Hope you guys had great holidays!
> >
> > Resuming the discussion we started last year in [1]. It is time to vote
> and
> > then to push (if the vote is successful) the protocol defined to our
> wiki.
> > Later we can start enforcing it.
> > I will summarize the protocol for branches in the official repository.
> >
> >1. We only maintain the master and major release branches. We
> currently
> >have a system of X.Y.Z.S. I define major release here as a release
> that
> >changes either ((X or Y) or (X and Y));
> >2. We will use tags for versioning. Therefore, all versions we release
> >are tagged accordingly, including minor and security releases;
> >3. When releasing the “SNAPSHOT” is removed and the branch of the
> >version is created (if the version is being cut from master). Rule (1)
> >is applied here; therefore, only major releases will receive branches.
> >Every release must have a tag according to the format X.Y.Z.S. After
> >releasing, we bump the POM of the version to next available SNAPSHOT;
> >4. If there's a need to fix an old version, we work on HEAD of
> >corresponding release branch. For instance, if we want to fix
> something
> > in
> >release 4.1.1.0, we will work on branch 4.1, which will have the POM
> > set to
> >4.1.2.0-SNAPSHOT;
> >5. People should avoid (it is not forbidden though) using the official
> >apache repository to store working branches. If we want to work
> > together on
> >some issues, we can set up a fork and give permission to interested
> > parties
> >(the official repository is restricted to committers). If one uses the
> >official repository, the branch used must be cleaned right after
> > merging;
> >6. Branches not following these rules will be removed if they have not
> >received attention (commits) for over 6 (six) months;
> >7. Before the removal of a branch in the official repository it is
> >mandatory to create a Jira ticket and send a notification email to
> >CloudStack’s dev mailing list. If there are no objections, the branch
> > can
> >be deleted seven (7) business days after the notification email is
> sent;
> >8. After the branch removal, the Jira ticket must be closed.
> >
> > Let’s go to the poll:
> > (+1) – I want to work using this protocol
> > (0) – Indifferent to me
> > (-1) – I prefer the way it is now, without any protocol/guidelines
> >
> >
> > [1]
> > http://mail-archives.apache.org/mod_mbox/cloudstack-dev/
> > 201711.mbox/%3CCAHGRR8ozDBX%3DJJewLz_cu-YP9vA3TEmesvxGArTDBPerAOj8Cw%
> > 40mail.gmail.com%3E
> >
> > --
> > Rafael Weingärtner
> >
>



--
Daan

