Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round)

2013-09-19 Thread Simon Weller
I think one of the issues is that practically all of the community testing is 
based on new installs. Since we didn't appear to have a published upgrade 
procedure for 4.x to 4.2, it was very hard to test upgrades in a reliable way. 
I did try a couple of upgrades from 4.0.2 and 4.1 in my lab, and was unable to 
bring up the virtual routers (non-VPC). But I assumed I had missed something 
in the upgrade procedure, so I didn't consider it a valid result. 

We do need to think about how we can encourage the community to try 
upgrades and publish the results. But in order to do so, we need well-documented 
upgrade steps to remove any ambiguity. 

- Si 

- Original Message -

From: "Marcus Sorensen"  
To: dev@cloudstack.apache.org 
Sent: Thursday, September 19, 2013 11:49:26 AM 
Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round) 

We do need to ensure that we have the db upgrade fix that was mentioned on 
the other thread, otherwise people going from 4.1 to 4.2 will have their 
VPCs break. Looks like we are waiting on a script. It sounds like the plan 
will be to provide instructions in the release notes. Really wish we would 
have caught that; it's not just that we find bugs at RC, but the severity of 
the ones we miss is astounding sometimes. 
On Sep 19, 2013 10:18 AM, "Animesh Chaturvedi" < 
animesh.chaturv...@citrix.com> wrote: 

> 
> 
> > -Original Message- 
> > From: Wido den Hollander [mailto:w...@widodh.nl] 
> > Sent: Wednesday, September 18, 2013 4:55 PM 
> > To: dev@cloudstack.apache.org 
> > Subject: Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> > round) 
> > 
> > 
> > 
> > On 09/19/2013 01:28 AM, Animesh Chaturvedi wrote: 
> > > 
> > > The vote has *passed* with the following results (binding PMC votes 
> > indicated with a "*" next to their name): 
> > > 
> > > +1 : Alex*, Chip*, Sebastien*, Prasanna*, Hugo*, Marcus*, Wido*, 
> > > +Sebastien, Rajesh Batala, Sheng, Vijay, Abhi, Likitha, Ian, Gavin, 
> > > +Daan, Amogh, Simon Weller, 
> > > 
> > > I'm going to proceed with moving the release into the distribution 
> > repo now and work on release notes and other documentation tasks. 
> > > 
> > Who is going to build the RPM and Deb packages? 
> > 
> > I think I should build the .deb packages since I'm the one in 
> > debian/changelog who set the version to 4.2 
> > 
> > Give me a sign and I'll put the packages online in the DEB repo. 
> > 
> > Wido 
> [Animesh>] Chip/David do we need to wait for the docs to be closed out 
> first? 
> > 
> > > 
> > > Thanks 
> > > Animesh 
> > > 
> > > 
> > > 
> > > On Fri, Sep 13, 2013 at 4:12 PM, Animesh Chaturvedi < 
> > animesh.chaturv...@citrix.com> wrote: 
> > > 
> > >> 
> > >> I've created a 4.2.0 release, with the following artifacts up for a 
> > vote: 
> > >> 
> > >> Git Branch and Commit SH: 
> > >> 
> > >> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h 
> > >> = 
> > >> refs/heads/4.2 
> > >> Commit: c1e24ff89f6d14d6ae74d12dbca108c35449030f 
> > >> 
> > >> List of changes: 
> > >> 
> > >> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain 
> > >> ; 
> > >> f=CHANGES;hb=4.2 
> > >> 
> > >> Source release (checksums and signatures are available at the same 
> > >> location): 
> > >> https://dist.apache.org/repos/dist/dev/cloudstack/4.2.0/ 
> > >> 
> > >> PGP release keys (signed using 94BE0D7C): 
> > >> https://dist.apache.org/repos/dist/release/cloudstack/KEYS 
> > >> 
> > >> Testing instructions are here: 
> > >> 
> > >> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+p 
> > >> r 
> > >> ocedure 
> > >> 
> > >> Vote will be open for 72 hours (Wednesday 9/18 End of Day PST). 
> > >> 
> > >> For sanity in tallying the vote, can PMC members please be sure to 
> > >> indicate "(binding)" with their vote? 
> > >> 
> > >> [ ] +1 approve 
> > >> [ ] +0 no opinion 
> > >> [ ] -1 disapprove (and reason why) 
> > >> 
> > >> 
> 



Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round)

2013-09-20 Thread Simon Weller
Animesh, 


Are you aware of any proposed upgrade documentation as of yet? I'd really like 
to try a couple of upgrades to test the procedure. 


- Si 

- Original Message -

From: "Animesh Chaturvedi"  
To: dev@cloudstack.apache.org 
Sent: Friday, September 20, 2013 12:03:41 AM 
Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round) 



> -Original Message- 
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
> Sent: Thursday, September 19, 2013 5:15 PM 
> To: dev@cloudstack.apache.org 
> Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> round) 
> 
> 
> 
> > -Original Message- 
> > From: Marcus Sorensen [mailto:shadow...@gmail.com] 
> > Sent: Thursday, September 19, 2013 4:46 PM 
> > To: dev@cloudstack.apache.org 
> > Subject: Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> > round) 
> > 
> > I prefer a respin, as painful as that seems. VPC is a major feature, 
> > do we really want to release something that we know will break anyone 
> > who tries to upgrade? I could deal with #1 if we are able to include 
> > the hotfix script in the packaging, such that the release notes can 
> > provide dead simple instructions for upgrade (no 'go download this 
> > script'). I'm just not clear on what we can change and what we can't 
> post-vote. 
> 
> [Animesh>] I don't think the release artifacts can be changed so the 
> script will have to be downloaded. David/Chip/Wido any thoughts on that? 
> We can have the script available before the release announcement if we 
> continue with current VOTE. 
> 
> > 
[Animesh>] Only Marcus, Indra and Simon have responded. I am prepared to respin 
tomorrow after VPC testing is complete with fixes from Alena. I would like to 
stick to a 72-hour VOTE including the weekend, since last time most of the VOTEs 
were received over the weekend. 


> > On Thu, Sep 19, 2013 at 3:59 PM, Animesh Chaturvedi 
> >  wrote: 
> > > 
> > > 
> > >> -Original Message- 
> > >> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
> > >> Sent: Thursday, September 19, 2013 10:43 AM 
> > >> To: dev@cloudstack.apache.org 
> > >> Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> > >> round) 
> > >> 
> > >> 
> > >> 
> > >> > -Original Message- 
> > >> > From: Marcus Sorensen [mailto:shadow...@gmail.com] 
> > >> > Sent: Thursday, September 19, 2013 9:49 AM 
> > >> > To: dev@cloudstack.apache.org 
> > >> > Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 
> > >> > (fifth 
> > >> > round) 
> > >> > 
> > >> > We do need to ensure that we have the db upgrade fix that was 
> > >> > mentioned on the other thread, otherwise people going from 4.1 to 
> > >> > 4.2 will have their VPCs break. Looks like we are waiting on a 
> > >> > script. It sounds like the plan will be to provide instructions 
> > >> > in the release notes. Really wish we would have caught that, its 
> > >> > not just that we find bugs at RC, but the severity of ones we 
> > >> > miss is astounding 
> > >> sometimes. 
> > >> 
> > >> [Animesh>] I am also stumped why this was not caught earlier. 
> > >> Kishan has a fix that is being tested. 
> > > [Animesh>] So while the fix is being tested we have two options 
> > > 
> > > 1. Release 4.2, release note this issue, provide a separate script 
> > > that would have to be run if someone was using VPC in 4.1 and 
> > > upgraded to 4.2, fix this issue in 4.2.1 
> > > 
> > > 2. Since we have not released 4.2 yet, respin another RC and another 
> > > round of VOTE. That would be a record 6th RC Vote and 2 recalls 
> > > after successful votes :( 
> > > 
> > > Thoughts? 
> > > 
> > >> 
> > >> > On Sep 19, 2013 10:18 AM, "Animesh Chaturvedi" < 
> > >> > animesh.chaturv...@citrix.com> wrote: 
> > >> > 
> > >> > > 
> > >> > > 
> > >> > > > -Original Message- 
> > >> > > > From: Wido den Hollander [mailto:w...@widodh.nl] 
> > >> > > > Sent: Wednesday, September 18, 2013 4:55 PM 
> > >> > > > To: dev@cloudstack.apache.org 
> > >> > > > Subject: Re: [VOTE][RESUL

Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round)

2013-09-20 Thread Simon Weller
So, I'll be testing KVM, which requires the new VR; that's why I want to test the 
docs. 

- Original Message -

From: "Animesh Chaturvedi"  
To: dev@cloudstack.apache.org 
Sent: Friday, September 20, 2013 12:24:39 PM 
Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth round) 



> -Original Message----- 
> From: Simon Weller [mailto:swel...@ena.com] 
> Sent: Friday, September 20, 2013 8:05 AM 
> To: dev@cloudstack.apache.org 
> Subject: Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> round) 
> 
> Animesh, 
> 
> 
> Are you aware of any proposed upgrade documentation as of yet? I'd 
> really like to try a couple of upgrades to test the procedure. 
> 
> 
> - Si 
[Animesh>] Here are the upgrade steps from the RN for the ACS 4.1.0 release. They 
should be the same for 4.1.0 to 4.2.0: 
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Release_Notes/upgrade-instructions.html#upgrade-from-4.0-to-4.1
 

> 
> - Original Message - 
> 
> From: "Animesh Chaturvedi"  
> To: dev@cloudstack.apache.org 
> Sent: Friday, September 20, 2013 12:03:41 AM 
> Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> round) 
> 
> 
> 
> > -Original Message- 
> > From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
> > Sent: Thursday, September 19, 2013 5:15 PM 
> > To: dev@cloudstack.apache.org 
> > Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> > round) 
> > 
> > 
> > 
> > > -Original Message- 
> > > From: Marcus Sorensen [mailto:shadow...@gmail.com] 
> > > Sent: Thursday, September 19, 2013 4:46 PM 
> > > To: dev@cloudstack.apache.org 
> > > Subject: Re: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 (fifth 
> > > round) 
> > > 
> > > I prefer a respin, as painful as that seems. VPC is a major feature, 
> > > do we really want to release something that we know will break 
> > > anyone who tries to upgrade? I could deal with #1 if we are able to 
> > > include the hotfix script in the packaging, such that the release 
> > > notes can provide dead simple instructions for upgrade (no 'go 
> > > download this script'). I'm just not clear on what we can change and 
> > > what we can't 
> > post-vote. 
> > 
> > [Animesh>] I don't think the release artifacts can be changed so the 
> > script will have to be downloaded. David/Chip/Wido any thoughts on 
> that? 
> > We can have the script available before the release announcement if we 
> > continue with current VOTE. 
> > 
> > > 
> [Animesh>] Only Marcus, Indra and Simon have responded I am prepared to 
> respin tomorrow after VPC testing is complete with fixes from Alena. I 
> would like to stick to 72 hour VOTE including weekend since last time 
> most of the VOTEs were received over weekend. 
> 
> 
> > > On Thu, Sep 19, 2013 at 3:59 PM, Animesh Chaturvedi 
> > >  wrote: 
> > > > 
> > > > 
> > > >> -Original Message- 
> > > >> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com] 
> > > >> Sent: Thursday, September 19, 2013 10:43 AM 
> > > >> To: dev@cloudstack.apache.org 
> > > >> Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 
> > > >> (fifth 
> > > >> round) 
> > > >> 
> > > >> 
> > > >> 
> > > >> > -Original Message- 
> > > >> > From: Marcus Sorensen [mailto:shadow...@gmail.com] 
> > > >> > Sent: Thursday, September 19, 2013 9:49 AM 
> > > >> > To: dev@cloudstack.apache.org 
> > > >> > Subject: RE: [VOTE][RESULTS] Release Apache CloudStack 4.2.0 
> > > >> > (fifth 
> > > >> > round) 
> > > >> > 
> > > >> > We do need to ensure that we have the db upgrade fix that was 
> > > >> > mentioned on the other thread, otherwise people going from 4.1 
> > > >> > to 
> > > >> > 4.2 will have their VPCs break. Looks like we are waiting on a 
> > > >> > script. It sounds like the plan will be to provide instructions 
> > > >> > in the release notes. Really wish we would have caught that, 
> > > >> > its not just that we find bugs at RC, but the severity of ones 
> > > >> > we miss is astounding 
> > > >> sometimes. 
> > > >> 
> > > &

Re: Upgrading from 4.1.1 to 4.2.0, cloudstack-sysvmadm fails to execute

2013-09-23 Thread Simon Weller
Indra, 


Have you enabled "integration.api.port" under the Global Settings menu in the CS 
management GUI? If not, set it to 8096, restart the management server, and see if 
the problem goes away. 
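
If you prefer to check or set that outside the GUI, something like the following 
against the management database also works (a sketch assuming the standard "cloud" 
database and "configuration" table; adjust credentials to your install): 

mysql -u cloud -p cloud -e "SELECT name, value FROM configuration WHERE name = 'integration.api.port';" 
mysql -u cloud -p cloud -e "UPDATE configuration SET value = '8096' WHERE name = 'integration.api.port';" 
service cloudstack-management restart 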


- Si 

- Original Message -

From: "Indra Pramana"  
To: us...@cloudstack.apache.org, dev@cloudstack.apache.org 
Cc: "Radhika Puthiyetath"  
Sent: Monday, September 23, 2013 8:22:28 AM 
Subject: Upgrading from 4.1.1 to 4.2.0, cloudstack-sysvmadm fails to execute 

Dear all, 

I am currently in the midst of upgrading my CloudStack 4.1.1 to 4.2.0. 
Everything went smoothly after I followed the instructions from the PDF file 
provided by Radhika, plus some additional notes from Abhinav, except for the 
last step: 

When I tried to run the cloudstack-sysvmadm script: 

nohup cloudstack-sysvmadm -d [DB-SERVER-IP] -u cloud -p[DB-PASSWORD] -a > 
sysvm.log 2>&1 & 

Content of sysvm.log: 

= 
nohup: ignoring input 
/usr/bin/cloudstack-sysvmadm: line 21: /etc/rc.d/init.d/functions: No such 
file or directory 

Stopping and starting 1 secondary storage vm(s)... 
curl: (7) couldn't connect to host 
ERROR: Failed to stop secondary storage vm with id 1902 

Done stopping and starting secondary storage vm(s) 

Stopping and starting 1 console proxy vm(s)... 
curl: (7) couldn't connect to host 
ERROR: Failed to stop console proxy vm with id 1903 

Done stopping and starting console proxy vm(s) . 

Stopping and starting 2 running routing vm(s)... 
curl: (7) couldn't connect to host 
2 
curl: (7) couldn't connect to host 
2 
Done restarting router(s). 
= 

Content of cloud.log: 

== 
Stopping and starting 1 secondary storage vm(s)... 
INFO: Stopping secondary storage vm with id 1902 
ERROR: Failed to stop secondary storage vm with id 1902 
Done stopping and starting secondary storage vm(s). 
Stopping and starting 1 console proxy vm(s)... 
INFO: Stopping console proxy with id 1903 
ERROR: Failed to stop console proxy vm with id 1903 
Done stopping and starting console proxy vm(s) . 
Stopping and starting 2 running routing vm(s)... 
INFO: Restarting router with id 1930 
INFO: Restarting router with id 1931 
ERROR: Failed to restart domainRouter with id 1930 
ERROR: Failed to restart domainRouter with id 1931 
Done restarting router(s). 
= 

It seems that the script fails to restart the system VMs (1 SSVM, 1 CPVM 
and 2 VRs). Is there anything I have done wrong or missed? 

After the upgrade, I am still not able to access the CloudStack GUI with 
this error message: 

= 
HTTP Status 404 - 

type Status report 

message 

description The requested resource () is not available. 
Apache Tomcat/6.0.35 
= 

Can anyone help? :) 

Looking forward to your reply, thank you. 

Cheers. 



Re: Upgrading from 4.1.1 to 4.2.0, cloudstack-sysvmadm fails to execute

2013-09-23 Thread Simon Weller
Indra, 


Can you supply some more information about your installation? Was it built 
from source and installed, or installed from packages? What distro are you 
running? Were there any other errors during the installation or the build? 

- Original Message -

From: "Indra Pramana"  
To: dev@cloudstack.apache.org 
Cc: us...@cloudstack.apache.org 
Sent: Monday, September 23, 2013 8:46:55 AM 
Subject: Re: Upgrading from 4.1.1 to 4.2.0, cloudstack-sysvmadm fails to 
execute 

Hi Simon, 

Thank you for your e-mail. 

I don't think I have set this. However, I am not able to access the GUI at all 
now. It seems that my upgrade itself failed? 

The cloudstack-management service is running, but I cannot access the GUI. Error 
messages from the management-server log file: 

http://pastebin.com/0FP9itAf 

Can advise? 

Looking forward to your reply, thank you. 

Cheers. 




On Mon, Sep 23, 2013 at 9:36 PM, Simon Weller  wrote: 

> Indra, 
> 
> 
> Have you enabled the "integration.api.port" under Global Settings menu in 
> CS management GUI? If not, set it to 8096, restart management and see if 
> the problem goes away. 
> 
> 
> - Si 
> 
> - Original Message - 
> 
> From: "Indra Pramana"  
> To: us...@cloudstack.apache.org, dev@cloudstack.apache.org 
> Cc: "Radhika Puthiyetath"  
> Sent: Monday, September 23, 2013 8:22:28 AM 
> Subject: Upgrading from 4.1.1 to 4.2.0, cloudstack-sysvmadm fails to 
> execute 
> 
> Dear all, 
> 
> I am currently in the midst of upgrading my CloudStack 4.1.1 to 4.2.0. 
> Everything went smooth after I followed the instruction from the PDF file 
> provided by Radhika and some addition of notes by Abhinav, except on the 
> last step: 
> 
> When I tried to run the cloudstack-sysvmadm script: 
> 
> nohup cloudstack-sysvmadm -d [DB-SERVER-IP] -u cloud -p[DB-PASSWORD] -a > 
> sysvm.log 2>&1 & 
> 
> Content of sysvm.log: 
> 
> = 
> nohup: ignoring input 
> /usr/bin/cloudstack-sysvmadm: line 21: /etc/rc.d/init.d/functions: No such 
> file or directory 
> 
> Stopping and starting 1 secondary storage vm(s)... 
> curl: (7) couldn't connect to host 
> ERROR: Failed to stop secondary storage vm with id 1902 
> 
> Done stopping and starting secondary storage vm(s) 
> 
> Stopping and starting 1 console proxy vm(s)... 
> curl: (7) couldn't connect to host 
> ERROR: Failed to stop console proxy vm with id 1903 
> 
> Done stopping and starting console proxy vm(s) . 
> 
> Stopping and starting 2 running routing vm(s)... 
> curl: (7) couldn't connect to host 
> 2 
> curl: (7) couldn't connect to host 
> 2 
> Done restarting router(s). 
> = 
> 
> Content of cloud.log: 
> 
> == 
> Stopping and starting 1 secondary storage vm(s)... 
> INFO: Stopping secondary storage vm with id 1902 
> ERROR: Failed to stop secondary storage vm with id 1902 
> Done stopping and starting secondary storage vm(s). 
> Stopping and starting 1 console proxy vm(s)... 
> INFO: Stopping console proxy with id 1903 
> ERROR: Failed to stop console proxy vm with id 1903 
> Done stopping and starting console proxy vm(s) . 
> Stopping and starting 2 running routing vm(s)... 
> INFO: Restarting router with id 1930 
> INFO: Restarting router with id 1931 
> ERROR: Failed to restart domainRouter with id 1930 
> ERROR: Failed to restart domainRouter with id 1931 
> Done restarting router(s). 
> = 
> 
> It seems that the script fails to restart the system VMs (1 SSVM, 1 CPVM 
> and 2 VRs). Anything I have done wrong or have missed out? 
> 
> After the upgrade, I am still not able to access the CloudStack GUI with 
> this error message: 
> 
> = 
> HTTP Status 404 - 
> 
> type Status report 
> 
> message 
> 
> description The requested resource () is not available. 
> Apache Tomcat/6.0.35 
> = 
> 
> Anyone can help and assist? :) 
> 
> Looking forward to your reply, thank you. 
> 
> Cheers. 
> 
> 



Re: [VOTE] Accept the donation of a Contrail plugin into Apache CloudStack

2013-09-26 Thread Simon Weller
+1. 

- Original Message -

From: "Chip Childers"  
To: dev@cloudstack.apache.org 
Sent: Wednesday, September 25, 2013 12:13:07 PM 
Subject: [VOTE] Accept the donation of a Contrail plugin into Apache CloudStack 

Hi all! 

As stated in other threads, Juniper is proposing the donation of a 
Contrail plugin to Apache CloudStack. The code itself has been posted 
to reviewboard [1]. The design has been documented by Pedro [2]. 

[1] https://reviews.apache.org/r/14325/ 
[2] 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Contrail+network+plugin 

I'm calling a vote here, so that we have a formal consensus on accepting 
the code into the project. As I've suggested earlier, I'd like us to 
accept the code into a branch, and then work through any technical 
concerns / reviews / changes prior to a master branch merge. 

So... voting will end in ~72 hours. As this is a technical decision, 
committer and PMC votes are binding. 

-chip 


Votes please! 

[ ] +1 - Accept the donation 
[ ] +/-0 - No strong opinion 
[ ] -1 - Do not accept the donation 



Re: [PROPOSAL] Service monitoring tool in virtual router

2013-10-01 Thread Simon Weller
supervisord maybe? 

- Original Message -

From: "Chiradeep Vittal"  
To: dev@cloudstack.apache.org 
Sent: Tuesday, October 1, 2013 4:45:56 PM 
Subject: Re: [PROPOSAL] Service monitoring tool in virtual router 

Got it. Any other OSS tool out there similar to monit? 

On 10/1/13 8:24 AM, "David Nalley"  wrote: 

>On Thu, Sep 26, 2013 at 1:27 AM, Chiradeep Vittal 
> wrote: 
>> SNMP wouldn't restart a failed process nor would it generate alerts. It 
>>is 
>> simply too generic for the requirements outlined here. The proposal does 
>> not talk about modifying monit, just using it. That wouldn't trigger the 
>> AGPL. 
> 
>Let me restate my objection to anything AGPL. 
>People are largely comfortable with GPLv2 software - Linux is 
>ubiquitous. Many legal departments routinely prohibit GPLv3 software 
>(we actually saw this when CS was GPLv3 licensed.) But the Affero GPL 
>license is anathema in many corporate environments, and by forcing it 
>on folks in the default System VM I fear it will hurt adoption of 
>CloudStack. 
> 
>--David 




RE: [VOTE] Release Apache CloudStack 4.2.1

2013-11-12 Thread Simon Weller
The "list of changes" URL is returning a 404.  It doesn't look like this has 
been branched yet.



From: Chip Childers 
Sent: Tuesday, November 12, 2013 2:40 PM
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Release Apache CloudStack 4.2.1

On Tue, Nov 12, 2013 at 03:33:51PM -0500, Chip Childers wrote:
> On Tue, Nov 12, 2013 at 03:52:54PM +, Abhinandan Prateek wrote:
> >
> >This vote is to approve the current RC build for 4.2.1 maintenance 
> > release.
> > For this particular release various upgrade paths have been tested apart 
> > from regression tests and BVTs.
> > Around 175 bugs have been fixed and some new features added (see CHANGES).
> >
> > Following are the particulars for this release:
> >
> > https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.2
> > commit: 0b9eadaf14513f5c72de672963b0e2f12ee7206f
> >
> > List of changes:
> > https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.2.1
> >
> > Source release revision 3492 (checksums and signatures are available at the 
> > same location):
> > https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/
> >
> > PGP release keys (signed using RSA Key ID = 42443AA1):
> > https://dist.apache.org/repos/dist/release/cloudstack/KEYS
> >
> > Vote will be open for 72 hours (until 11/15 End of day PST).
> >
> > For sanity in tallying the vote, can PMC members please be sure to indicate 
> > "(binding)" with their vote?
> >
> > [ ] +1  approve
> > [ ] +0  no opinion
> > [ ] -1  disapprove (and reason why)
>
> The format of apache-cloudstack-4.2.1-src.tar.bz2.md5 doesn't match the
> gpg output.
>
> See this:
>
> localhost :: /tmp/cloudstack »gpg --print-md MD5 
> apache-cloudstack-4.2.1-src.tar.bz2
> apache-cloudstack-4.2.1-src.tar.bz2: AE BF B5 B5 91 24 8B 6D  27 6F 9D 35 C0 
> C3
>  DD D5
> localhost :: /tmp/cloudstack »cat apache-cloudstack-4.2.1-src.tar.bz2.md5
> apache-cloudstack-4.2.1-src.tar.bz2: AE BF B5
>  B5 91 24
>  8B 6D  27
>  6F 9D 35
>  C0 C3 DD
>  D5
>
> There are extra line breaks in the .md5 file [1].  Although I'd actually
> prefer to get the process to switch to the md5sum tool's format anyway,
> can you correct this formatting before we continue with the release
> voting?
>
> -chip
>
> [1] 
> https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/apache-cloudstack-4.2.1-src.tar.bz2.md5

The same issue exists in the sha file.
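
For reference, the md5sum-style format Chip mentions would be produced and checked 
with the standard coreutils tools, e.g. (just an illustration, not the current 
release process; the sha*sum tools follow the same convention for the .sha file): 

md5sum apache-cloudstack-4.2.1-src.tar.bz2 > apache-cloudstack-4.2.1-src.tar.bz2.md5 
md5sum -c apache-cloudstack-4.2.1-src.tar.bz2.md5   # prints "apache-cloudstack-4.2.1-src.tar.bz2: OK" on a match 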

RE: [ASF4.2.1] Release Notes

2013-11-14 Thread Simon Weller
I agree completely.
I've spent the last couple of evenings trying to get a KVM lab upgraded from 
4.1 to 4.2.1 by registering the new template first, using the name in the 
upgrade*.java, and I've had zero success getting the SSVM to come back up. 
However, if I don't install the new template prior to the upgrade, and instead 
replace the existing template and do some database manipulation (thanks to 
Kelsey's documented experiences in CLOUDSTACK-4826), I can get the SSVM to come 
up fine. Maybe I'm missing something here, but without reliably documented steps 
for what is meant to work, it's hard to test the upgrade process.

- Si


From: Chip Childers 
Sent: Thursday, November 14, 2013 9:40 AM
To: dev@cloudstack.apache.org
Cc: Abhinandan Prateek; Alok Kumar Singh
Subject: Re: [ASF4.2.1] Release Notes

On Thu, Nov 14, 2013 at 09:42:11AM -0500, Sebastien Goasguen wrote:
> Anyway, we can wait until next week to release.
>
> Quite a few of us will be together in Amsterdam; we can dedicate a hackathon 
> session to 4.2.1, make sure the RN are good, the upgrade path etc… then test…
>
> I'd recommend keeping the vote open until then.
>
> -sebastien

+1 to Seb's idea (although I already voted)

Re: [VOTE] Release Apache CloudStack 4.1.0 (fifth round)

2013-05-31 Thread Simon Weller
+1 


Built nonoss RPMs from the source release. 
Installed and configured on RHEL 6.3. 
Provisioned CLVM primary storage 
Provisioned advanced networked zone 
Provisioned project 
Provisioned test VMs under project. 

- Original Message -

From: "Chip Childers"  
To: dev@cloudstack.apache.org 
Sent: Tuesday, May 28, 2013 8:47:40 AM 
Subject: [VOTE] Release Apache CloudStack 4.1.0 (fifth round) 

Hi All, 

I've created a 4.1.0 release, with the following artifacts up for a 
vote. 

The changes from round 4 are related to DEB packaging, some 
translation strings, and a functional patch to make bridge type 
optional during the agent setup (for backward compatibility). 

Git Branch and Commit SH: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
 
Commit: a5214bee99f6c5582d755c9499f7d99fd7b5b701 

List of changes: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.1
 

Source release (checksums and signatures are available at the same 
location): 
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.0/ 

PGP release keys (signed using A99A5D58): 
https://dist.apache.org/repos/dist/release/cloudstack/KEYS 

Testing instructions are here: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure 

Vote will be open for 72 hours. 

For sanity in tallying the vote, can PMC members please be sure to 
indicate "(binding)" with their vote? 

[ ] +1 approve 
[ ] +0 no opinion 
[ ] -1 disapprove (and reason why) 



Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Simon Weller
I'd like to comment on this briefly. 



I think an assumption is being made that the SAN is being dedicated to a CS 
instance. 

My personal opinion is that this whole IOPS calculation is getting rather 
complicated, and could probably be much simpler than this. Oversubscription is 
a fact of life on virtually all storage, and is really no different in concept 
from running multiple virt instances on a single piece of hardware. All decent 
SANs offer plenty of management options for storage engineers to keep track of 
IOPS utilization and plan for spindle augmentation as required. 
Is it really the job of CS to become yet another management layer on top of 
this? 

- Original Message -

From: "Mike Tutkowski"  
To: dev@cloudstack.apache.org 
Cc: "John Burwell" , "Wei Zhou"  
Sent: Friday, June 14, 2013 1:00:26 PM 
Subject: Re: [MERGE] disk_io_throttling to MASTER 

1) We want number of IOPS currently supported by the SAN. 

2) We want the number of IOPS that are committed (sum of min IOPS for each 
volume). 

We could do the following to keep track of IOPS: 

The plug-in could have a timer thread that goes off every, say, 1 minute. 

It could query the SAN for the number of nodes that make up the SAN and 
multiply this by 50,000. This is essentially the number of supported IOPS 
of the SAN. 

The next API call could be to get all of the volumes on the SAN. Iterate 
through them all and add up their min IOPS values. This is the number of 
IOPS the SAN is committed to. 

These two numbers can then be updated in the storage_pool table (a column 
for each value). 

The allocators can get these values as needed (and they would be as 
accurate as the last time the thread asked the SAN for this info). 

These two fields, the min IOPS of the volume to create, and the overcommit 
ratio of the plug-in would tell the allocator if it can select the given 
storage pool. 

What do you think? 
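
Purely to illustrate the bookkeeping described above (the plug-in itself would do 
this in Java; the storage_pool column names and the numbers below are assumptions 
for illustration, not existing schema): 

#!/bin/bash 
# hypothetical once-a-minute poller, per the outline above 
POOL_ID=200          # placeholder: the storage_pool row backing this SAN 
NODE_COUNT=4         # placeholder: would come from the SAN's own API 
SUM_MIN_IOPS=120000  # placeholder: sum of min IOPS across the SAN's volumes 
SAN_IOPS=$(( NODE_COUNT * 50000 ))   # 50,000 IOPS per node, per the estimate above 
mysql -u cloud -p cloud -e "UPDATE storage_pool SET capacity_iops = ${SAN_IOPS}, committed_iops = ${SUM_MIN_IOPS} WHERE id = ${POOL_ID};" 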


On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski < 
mike.tutkow...@solidfire.com> wrote: 

> "As I mentioned previously, I am very reluctant for any feature to come 
> into master that can exhaust resources." 
> 
> Just wanted to mention that, worst case, the SAN would fail creation of 
> the volume before allowing a new volume to break the system. 
> 
> 
> On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski < 
> mike.tutkow...@solidfire.com> wrote: 
> 
>> Hi John, 
>> 
>> Are you thinking we add a column on to the storage pool table, 
>> IOPS_Count, where we add and subtract committed IOPS? 
>> 
>> That is easy enough. 
>> 
>> How do you want to determine what the SAN is capable of supporting IOPS 
>> wise? Remember we're dealing with a dynamic SAN here...as you add storage 
>> nodes to the cluster, the number of IOPS increases. Do we have a thread we 
>> can use to query external devices like this SAN to update the supported 
>> number of IOPS? 
>> 
>> Also, how do you want to enforce the IOPS limit? Do we pass in an 
>> overcommit ration to the plug-in when it's created? We would need to store 
>> this in the storage_pool table, as well, I believe. 
>> 
>> We should also get Wei involved in this as his feature will need similar 
>> functionality. 
>> 
>> Also, we should do this FAST as we have only two weeks left and many of 
>> us will be out for several days for the CS Collab Conference. 
>> 
>> Thanks 
>> 
>> 
>> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell wrote: 
>> 
>>> Mike, 
>>> 
>>> Querying the SAN only indicates the number of IOPS currently in use. 
>>> The allocator needs to check the number of IOPS committed which is tracked 
>>> by CloudStack. For 4.2, we should be able to query the number of IOPS 
>>> committed to a DataStore, and determine whether or not the number requested 
>>> can be fulfilled by that device. It seems to be that a 
>>> DataStore#getCommittedIOPS() : Long method would be sufficient. 
>>> DataStore's that don't support provisioned IOPS would return null. 
>>> 
>>> As I mentioned previously, I am very reluctant for any feature to come 
>>> into master that can exhaust resources. 
>>> 
>>> Thanks, 
>>> -John 
>>> 
>>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski < 
>>> mike.tutkow...@solidfire.com> wrote: 
>>> 
>>> > Yeah, I'm not sure I could come up with anything near an accurate 
>>> > assessment of how many IOPS are currently available on the SAN (or 
>>> even a 
>>> > total number that are available for volumes). Not sure if there's yet 
>>> an 
>>> > API call for that. 
>>> > 
>>> > If I did know this number (total number of IOPS supported by the SAN), 
>>> we'd 
>>> > still have to keep track of the total number of volumes we've created 
>>> from 
>>> > CS on the SAN in terms of their IOPS. Also, if an admin issues an API 
>>> call 
>>> > directly to the SAN to tweak the number of IOPS on a given volume or 
>>> set of 
>>> > volumes (not supported from CS, but supported via the SolidFire API), 
>>> our 
>>> > numbers in CS would be off. 
>>> > 
>>> > I'm thinking verifying sufficient number of

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Simon Weller

John, 


I'm not arguing that CloudStack's job isn't to provide resource management. The 
challenge here is that we're talking about managing an extremely complex resource 
in a very 'one size fits all' manner. For example, let's say you have a SAN that 
supports storage tiering and can dynamically move data shards to different tiers 
of disks where the IOPS-versus-capacity trade-off varies. Your data shard starts 
on an array of fast, lower-capacity disks, and based on various criteria gets 
relocated to slower, higher-capacity disks. In this scenario, your maximum IOPS 
capacity has just changed for some subset (or all) of the data tied to this 
primary storage object, based on some usage profile. Likewise, you may have some 
high-use shards that never get demoted to larger disks, so you run out of your 
primary-tier storage capacity. 
My point is, it's hard to account for these scenarios in absolute terms. I'm 
just concerned that we're getting ourselves tied up trying to paint all storage 
as the same, when in fact every product and project, whether commercial or open 
source, has a different set of features and objectives. 


- Si 
- Original Message -

From: "John Burwell"  
To: dev@cloudstack.apache.org 
Sent: Friday, June 14, 2013 2:59:47 PM 
Subject: Re: [MERGE] disk_io_throttling to MASTER 

Simon, 

Yes, it is CloudStack's job to protect, as best it can, against oversubscribing 
resources. I would argue that resource management is one of the most important 
functions of the system, if not the most important. It is no different from the 
allocation/planning performed for hosts with respect to cores and memory. We can 
still oversubscribe resources, but we have rails plus knobs and dials to avoid it. 
Without these controls in place, we could easily allow users to deploy 
workloads that overrun resources, harming all tenants. 



I also think that we are overthinking this issue for provisioned IOPS. When 
the DataStore is configured, the administrator/operator simply needs to tell us 
the total number of IOPS that can be committed to it and an overcommitment 
factor. As we allocate volumes to that DataStore, we sum up the committed IOPS 
of the existing Volumes attached to the DataStore, apply the overcommitment 
factor, and determine whether or not the requested minimum IOPS for the new 
volume can be fulfilled. We can provide both general and vendor-specific 
documentation for determining these values -- be they to consume the entire 
device or a portion of it. 
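
Expressed as a quick sketch of that check (the table and column names and the 
figures below are assumptions for illustration only, not the actual allocator 
code): 

#!/bin/bash 
POOL_ID=200            # placeholder 
REQUESTED_MIN_IOPS=500 # min IOPS asked for by the new volume 
CAPACITY_IOPS=200000   # operator-supplied total IOPS for the DataStore 
OVERCOMMIT=2           # operator-supplied overcommitment factor 
COMMITTED=$(mysql -N -u cloud -p cloud -e "SELECT COALESCE(SUM(min_iops), 0) FROM volumes WHERE pool_id = ${POOL_ID} AND removed IS NULL;") 
if [ $(( COMMITTED + REQUESTED_MIN_IOPS )) -le $(( CAPACITY_IOPS * OVERCOMMIT )) ]; then 
  echo "the pool can fulfil the requested min IOPS" 
else 
  echo "skip this pool" 
fi 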

Querying the device is unnecessary and deceptive. CloudStack resource 
management is not interested in the current state of the device, which could be 
anywhere from extremely heavy to extremely light at any given time. We are 
interested in the worst-case load that is anticipated for the resource. In my 
view, it is up to administrators/operators to instrument their environment to 
understand usage patterns and capacity. We should provide information that will 
help determine what should be instrumented/monitored, but that function should 
be performed outside of CloudStack. 



Thanks, 
-John 

On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote: 

> I'd like to comment on this briefly. 
> 
> 
> 
> I think an assumption is being made that the SAN is being dedicated to a CS 
> instance. 
> 
> My person opinion that this whole IOPS calculation is getting rather 
> complicated, and could probably be much simpler than this. Over subscription 
> is a fact of life on virtually all storage, and is really no different in 
> concept than multiple virt instances on a single piece of hardware. All 
> decent SANs offer many management options for the storage engineers to keep 
> track of IOPS utilization, and plan for spindle augmentation as required. 
> Is it really the job of CS to become yet another management layer on top of 
> this? 
> 
> - Original Message - 
> 
> From: "Mike Tutkowski"  
> To: dev@cloudstack.apache.org 
> Cc: "John Burwell" , "Wei Zhou"  
> Sent: Friday, June 14, 2013 1:00:26 PM 
> Subject: Re: [MERGE] disk_io_throttling to MASTER 
> 
> 1) We want number of IOPS currently supported by the SAN. 
> 
> 2) We want the number of IOPS that are committed (sum of min IOPS for each 
> volume). 
> 
> We could do the following to keep track of IOPS: 
> 
> The plug-in could have a timer thread that goes off every, say, 1 minute. 
> 
> It could query the SAN for the number of nodes that make up the SAN and 
> multiple this by 50,000. This is essentially the number of supported IOPS 
> of the SAN. 
> 
> The next API call could be to get all of the volumes on the SAN. Iterate 
> through them all and add up their min IOPS values. This is the number of 
> IOPS the SAN is committed to. 
> 
> These two numbers can then be updated in the storage_pool ta

Re: Trouble uploading ISO file

2013-07-17 Thread Simon Weller
Mike, 


Are you running in an advanced network zone? If you ssh into the SSVM, which 
interface does your default route take? Can you telnet to the httpd server on 
port 80 from the SSVM? 
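
For example, from a shell on the SSVM (using the web server address from Mike's 
message below): 

ip route show            # which interface carries the default route? 
telnet 172.16.140.1 80   # can the SSVM reach the web server hosting the ISO? 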

- Original Message -

From: "Mike Tutkowski"  
To: dev@cloudstack.apache.org 
Sent: Wednesday, July 17, 2013 11:38:36 AM 
Subject: Re: Trouble uploading ISO file 

This is where I'm trying to copy the ISO from (from the same computer 
that's running the CS MS): 

http://172.16.140.1/~mtutkowski/ubuntu-12.04.1-desktop-amd64.iso 


On Wed, Jul 17, 2013 at 10:37 AM, Mike Tutkowski < 
mike.tutkow...@solidfire.com> wrote: 

> Hi, 
> 
> I'm seeing the following error in the console when I try to upload an ISO 
> file: 
> 
> WARN [storage.download.DownloadListener] (Timer-11:) Entering download 
> error state: timeout waiting for response from storage host, TEMPLATE: 210 
> at host 0 
> 
> Any thoughts on this? 
> 
> I seem to be able to access the ISO that I want to copy just fine via my 
> browser. 
> 
> Also, the CS MS has been able to successfully create a snapshots and 
> volumes folder on my NFS share. 
> 
> Thanks! 
> 
> -- 
> *Mike Tutkowski* 
> *Senior CloudStack Developer, SolidFire Inc.* 
> e: mike.tutkow...@solidfire.com 
> o: 303.746.7302 
> Advancing the way the world uses the 
> cloud 
> *™* 
> 



-- 
*Mike Tutkowski* 
*Senior CloudStack Developer, SolidFire Inc.* 
e: mike.tutkow...@solidfire.com 
o: 303.746.7302 
Advancing the way the world uses the 
cloud 
*™* 



Re: Trouble uploading ISO file {Spam?}

2013-07-17 Thread Simon Weller
Mike, 


Use the GUI to determine which VM host the SSVM is currently on, as well as the 
listed Link Local IP Address, then ssh to that server and run this: 


ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@ 

- Original Message -

From: "Mike Tutkowski"  
To: dev@cloudstack.apache.org 
Sent: Wednesday, July 17, 2013 11:54:06 AM 
Subject: Re: Trouble uploading ISO file 

Hi Simon, 

I am just running a Basic Zone. 

The public IP address of the SSVM is 172.16.140.50. 

When I try to SSH in, it times out: 

mtutkowski-LT:~ mtutkowski$ ssh root@172.16.140.50 
ssh: connect to host 172.16.140.50 port 22: Operation timed out 

When I see a dash by Agent State in the GUI, do you know what that 
indicates? 

Thanks! 


On Wed, Jul 17, 2013 at 10:46 AM, Simon Weller  wrote: 

> Mike, 
> 
> 
> Are you running in an advanced network zone? If you ssh into the SSVM, 
> which interface does your default route take? Can you telnet to the httpd 
> server on port 80 from the SSVM? 
> 
> - Original Message - 
> 
> From: "Mike Tutkowski"  
> To: dev@cloudstack.apache.org 
> Sent: Wednesday, July 17, 2013 11:38:36 AM 
> Subject: Re: Trouble uploading ISO file 
> 
> This is where I'm trying to copy the ISO from (from the same computer 
> that's running the CS MS): 
> 
> http://172.16.140.1/~mtutkowski/ubuntu-12.04.1-desktop-amd64.iso 
> 
> 
> On Wed, Jul 17, 2013 at 10:37 AM, Mike Tutkowski < 
> mike.tutkow...@solidfire.com> wrote: 
> 
> > Hi, 
> > 
> > I'm seeing the following error in the console when I try to upload an ISO 
> > file: 
> > 
> > WARN [storage.download.DownloadListener] (Timer-11:) Entering download 
> > error state: timeout waiting for response from storage host, TEMPLATE: 
> 210 
> > at host 0 
> > 
> > Any thoughts on this? 
> > 
> > I seem to be able to access the ISO that I want to copy just fine via my 
> > browser. 
> > 
> > Also, the CS MS has been able to successfully create a snapshots and 
> > volumes folder on my NFS share. 
> > 
> > Thanks! 
> > 
> > -- 
> > *Mike Tutkowski* 
> > *Senior CloudStack Developer, SolidFire Inc.* 
> > e: mike.tutkow...@solidfire.com 
> > o: 303.746.7302 
> > Advancing the way the world uses the cloud< 
> http://solidfire.com/solution/overview/?video=play> 
> > *™* 
> > 
> 
> 
> 
> -- 
> *Mike Tutkowski* 
> *Senior CloudStack Developer, SolidFire Inc.* 
> e: mike.tutkow...@solidfire.com 
> o: 303.746.7302 
> Advancing the way the world uses the 
> cloud<http://solidfire.com/solution/overview/?video=play> 
> *™* 
> 
> 


-- 
*Mike Tutkowski* 
*Senior CloudStack Developer, SolidFire Inc.* 
e: mike.tutkow...@solidfire.com 
o: 303.746.7302 
Advancing the way the world uses the 
cloud<http://solidfire.com/solution/overview/?video=play> 
*™* 



[ACS411] CLOUDSTACK-2188: patch request

2013-07-22 Thread Simon Weller
Hi, 

I'd like to request a patch for Cloudstack-2188 for ACS 4.1.1. 

The master commit was e56d2a401c40b4208d062c0a0ce1ec01df73dd08, but it appears 
the code has been greatly re-factored since 4.1 was originally branched. 
This NPE appears to be causing a memory leak in our production environment that 
consumes memory quickly due to the number of times this NPE is being triggered. 
We currently have tomcat max memory set to 2G, and individual management 
servers are running out of memory within 4 to 5 days. 

MS NPE log for reference: 

2013-07-22 09:40:47,404 DEBUG [agent.manager.AgentManagerImpl] 
(AgentConnectTaskPool-1066:null) Details from executing class 
com.cloud.agent.api.storage.ListVolumeCommand: success 
2013-07-22 09:40:47,407 ERROR [agent.manager.AgentManagerImpl] 
(AgentConnectTaskPool-1066:null) Monitor DownloadListener says there is an 
error in the connect process for 28 due to null 
java.lang.NullPointerException 
at 
com.cloud.storage.download.DownloadMonitorImpl.handleVolumeSync(DownloadMonitorImpl.java:694)
 
at 
com.cloud.storage.download.DownloadMonitorImpl.handleSync(DownloadMonitorImpl.java:620)
 
at 
com.cloud.storage.download.DownloadListener.processConnect(DownloadListener.java:385)
 
at 
com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:611)
 
at 
com.cloud.agent.manager.AgentManagerImpl.handleConnectedAgent(AgentManagerImpl.java:)
 
at 
com.cloud.agent.manager.AgentManagerImpl.access$100(AgentManagerImpl.java:145) 
at 
com.cloud.agent.manager.AgentManagerImpl$HandleAgentConnectTask.run(AgentManagerImpl.java:1186)
 
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
at java.lang.Thread.run(Thread.java:679) 


Thanks, 

- Si 


Re: [ACS411] CLOUDSTACK-2188: patch request

2013-07-22 Thread Simon Weller
Ok, will do. 

The reason I hadn't brought this up until now is that we only upgraded to 4.1 
in production last week. That's when I started digging into this NPE and found 
the reported related issues. 

- Original Message -

From: "Ilya Musayev"  
To: dev@cloudstack.apache.org 
Sent: Monday, July 22, 2013 11:32:50 AM 
Subject: RE: [ACS411] CLOUDSTACK-2188: patch request 

Simon, 

Since this issue has been resolved, please open a separate issue for ACS 4.1 
and link CLOUDSTACK-2177 and CLOUDSTACK-2188 to it. 

Please mention your environment and steps to reproduce along with error trace. 

Thanks 
ilya 

> -Original Message- 
> From: Musayev, Ilya [mailto:imusa...@webmd.net] 
> Sent: Monday, July 22, 2013 12:24 PM 
> To: dev@cloudstack.apache.org 
> Subject: RE: [ACS411] CLOUDSTACK-2188: patch request 
> 
> Simon, 
> 
> I've looked at patch and yes - it's been refactored greatly. 
> 
> I wish this would have been brought up earlier :( While due to time 
> constraint, we probably wont be able to make into 4.1.1, if we can get the 
> patch, we will try to push it into next 4.1.2 update. 
> 
> Please update the ticket and ask if this patch can be back ported to 4.1. 
> I'll do 
> the same. 
> 
> Thanks 
> ilya 
> 
> > -Original Message- 
> > From: Simon Weller [mailto:swel...@ena.com] 
> > Sent: Monday, July 22, 2013 11:30 AM 
> > To: dev@cloudstack.apache.org 
> > Subject: [ACS411] CLOUDSTACK-2188: patch request 
> > 
> > Hi, 
> > 
> > I'd like to request a patch for Cloudstack-2188 for ACS 4.1.1. 
> > 
> > The master commit was e56d2a401c40b4208d062c0a0ce1ec01df73dd08, but 
> it 
> > appears the code has been greatly re-factored since 4.1 was originally 
> > branched. 
> > This NPE appears to be causing a memory leak in our production 
> > environment that consumes memory quickly due to the number of times 
> > this NPE is being triggered. We currently have tomcat max memory set 
> > to 2G, and individual management servers are running out of memory 
> > within 4 to 5 days. 
> > 
> > MS NPE log for reference: 
> > 
> > 2013-07-22 09:40:47,404 DEBUG [agent.manager.AgentManagerImpl] 
> > (AgentConnectTaskPool-1066:null) Details from executing class 
> > com.cloud.agent.api.storage.ListVolumeCommand: success 
> > 2013-07-22 09:40:47,407 ERROR [agent.manager.AgentManagerImpl] 
> > (AgentConnectTaskPool-1066:null) Monitor DownloadListener says there 
> > is an error in the connect process for 28 due to null 
> > java.lang.NullPointerException at 
> > 
> com.cloud.storage.download.DownloadMonitorImpl.handleVolumeSync(Do 
> > wnloadMonitorImpl.java:694) 
> > at 
> > 
> com.cloud.storage.download.DownloadMonitorImpl.handleSync(Download 
> > MonitorImpl.java:620) 
> > at 
> > 
> com.cloud.storage.download.DownloadListener.processConnect(DownloadL 
> > istener.java:385) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection( 
> > AgentManagerImpl.java:611) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.handleConnectedAgent(Agen 
> > tManagerImpl.java:) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.access$100(AgentManagerIm 
> > pl.java:145) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl$HandleAgentConnectTask.ru 
> > n(AgentManagerImpl.java:1186) 
> > at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.j 
> > av 
> > a:1146) 
> > at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor. 
> > ja 
> > va:615) 
> > at java.lang.Thread.run(Thread.java:679) 
> > 
> > 
> > Thanks, 
> > 
> > - Si 



Re: [ACS411] CLOUDSTACK-2188: patch request

2013-07-22 Thread Simon Weller
Issue filed as CLOUDSTACK-3716. 

- Original Message -

From: "Simon Weller"  
To: dev@cloudstack.apache.org 
Sent: Monday, July 22, 2013 11:52:08 AM 
Subject: Re: [ACS411] CLOUDSTACK-2188: patch request 

Ok, will do. 

The reason I hadn't brought this up until now is that we only upgraded to 4.1 
in production last week. That's when I started digging into this NPE and found 
the reported related issues. 

- Original Message - 

From: "Ilya Musayev"  
To: dev@cloudstack.apache.org 
Sent: Monday, July 22, 2013 11:32:50 AM 
Subject: RE: [ACS411] CLOUDSTACK-2188: patch request 

Simon, 

Since this issue has been resolved, please open a separate issue for ACS 4.1 
and link CLOUDSTACK-2177 and CLOUDSTACK-2188 to it. 

Please mention your environment and steps to reproduce along with error trace. 

Thanks 
ilya 

> -Original Message- 
> From: Musayev, Ilya [mailto:imusa...@webmd.net] 
> Sent: Monday, July 22, 2013 12:24 PM 
> To: dev@cloudstack.apache.org 
> Subject: RE: [ACS411] CLOUDSTACK-2188: patch request 
> 
> Simon, 
> 
> I've looked at patch and yes - it's been refactored greatly. 
> 
> I wish this would have been brought up earlier :( While due to time 
> constraint, we probably wont be able to make into 4.1.1, if we can get the 
> patch, we will try to push it into next 4.1.2 update. 
> 
> Please update the ticket and ask if this patch can be back ported to 4.1. 
> I'll do 
> the same. 
> 
> Thanks 
> ilya 
> 
> > -Original Message- 
> > From: Simon Weller [mailto:swel...@ena.com] 
> > Sent: Monday, July 22, 2013 11:30 AM 
> > To: dev@cloudstack.apache.org 
> > Subject: [ACS411] CLOUDSTACK-2188: patch request 
> > 
> > Hi, 
> > 
> > I'd like to request a patch for Cloudstack-2188 for ACS 4.1.1. 
> > 
> > The master commit was e56d2a401c40b4208d062c0a0ce1ec01df73dd08, but 
> it 
> > appears the code has been greatly re-factored since 4.1 was originally 
> > branched. 
> > This NPE appears to be causing a memory leak in our production 
> > environment that consumes memory quickly due to the number of times 
> > this NPE is being triggered. We currently have tomcat max memory set 
> > to 2G, and individual management servers are running out of memory 
> > within 4 to 5 days. 
> > 
> > MS NPE log for reference: 
> > 
> > 2013-07-22 09:40:47,404 DEBUG [agent.manager.AgentManagerImpl] 
> > (AgentConnectTaskPool-1066:null) Details from executing class 
> > com.cloud.agent.api.storage.ListVolumeCommand: success 
> > 2013-07-22 09:40:47,407 ERROR [agent.manager.AgentManagerImpl] 
> > (AgentConnectTaskPool-1066:null) Monitor DownloadListener says there 
> > is an error in the connect process for 28 due to null 
> > java.lang.NullPointerException at 
> > 
> com.cloud.storage.download.DownloadMonitorImpl.handleVolumeSync(Do 
> > wnloadMonitorImpl.java:694) 
> > at 
> > 
> com.cloud.storage.download.DownloadMonitorImpl.handleSync(Download 
> > MonitorImpl.java:620) 
> > at 
> > 
> com.cloud.storage.download.DownloadListener.processConnect(DownloadL 
> > istener.java:385) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection( 
> > AgentManagerImpl.java:611) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.handleConnectedAgent(Agen 
> > tManagerImpl.java:) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl.access$100(AgentManagerIm 
> > pl.java:145) 
> > at 
> > 
> com.cloud.agent.manager.AgentManagerImpl$HandleAgentConnectTask.ru 
> > n(AgentManagerImpl.java:1186) 
> > at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.j 
> > av 
> > a:1146) 
> > at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor. 
> > ja 
> > va:615) 
> > at java.lang.Thread.run(Thread.java:679) 
> > 
> > 
> > Thanks, 
> > 
> > - Si 




Re: [VOTE] Apache Cloudstack 4.1.1

2013-07-25 Thread Simon Weller
-1 

mvn -P deps -D nonoss 
[INFO] Scanning for projects... 
[ERROR] The build could not read 1 project -> [Help 1] 
[ERROR] 
[ERROR] The project org.apache.cloudstack:xapi:5.6.100-1-SNAPSHOT 
(/home/sweller/apache-cloudstack-4.1.1-src/deps/XenServerJava/pom.xml) has 1 
error 
[ERROR] Non-resolvable parent POM: Could not find artifact 
org.apache.cloudstack:cloudstack:pom:4.1.1-SNAPSHOT and 'parent.relativePath' 
points at wrong local POM @ line 21, column 11 -> [Help 2] 
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch. 
[ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles: 
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException 
[ERROR] [Help 2] 
http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException 


Doesn't build due to a pom.xml reference to 4.1.1-SNAPSHOT 
in apache-cloudstack-4.1.1-src/deps/XenServerJava/pom.xml 

Here's a patch to fix it: 

--- apache-cloudstack-4.1.1-src/deps/XenServerJava/pom.xml.old 2013-07-25 08:09:41.702309598 -0500 
+++ apache-cloudstack-4.1.1-src/deps/XenServerJava/pom.xml 2013-07-25 08:09:51.877251365 -0500 
@@ -21,7 +21,7 @@ 
   <parent> 
     <groupId>org.apache.cloudstack</groupId> 
     <artifactId>cloudstack</artifactId> 
-    <version>4.1.1-SNAPSHOT</version> 
+    <version>4.1.1</version> 
     <relativePath>../../pom.xml</relativePath> 
   </parent> 
   <artifactId>xapi</artifactId> 

- Si 

- Original Message -

From: "Ilya Musayev"  
To: dev@cloudstack.apache.org 
Sent: Wednesday, July 24, 2013 12:23:17 PM 
Subject: [VOTE] Apache Cloudstack 4.1.1 

Hi All, 

I've created a 4.1.1 release, with the following artifacts up for a vote: 

Git Branch and Commit SH: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
 
Commit: 8fe3505cba45756a51e9e9ee97cd09bf1e71c79e 

Source release (checksums and signatures are available at the same location): 
https://dist.apache.org/repos/dist/dev/cloudstack/4.1.1/ 

PGP release keys (signed using B7B5E7FD): 
https://dist.apache.org/repos/dist/release/cloudstack/KEYS 

Vote will be open for 72 hours. 

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote? 

[ ] +1 approve 
[ ] +0 no opinion 
[ ] -1 disapprove (and reason why) 





Re: [VOTE] Apache Cloudstack 4.1.1 (Second Round)

2013-07-25 Thread Simon Weller
+1 

Built NONOSS rpms and deployed on RHEL 6.3 

Tested vm creation + deletion, network creation + deletion, template creation + 
deletion. 

- Si 

- Original Message -

From: "Ilya Musayev"  
To: dev@cloudstack.apache.org 
Sent: Thursday, July 25, 2013 1:04:27 PM 
Subject: [VOTE] Apache Cloudstack 4.1.1 (Second Round) 

Hi All, 

I've created a 4.1.1 release, with the following artifacts up for a vote: 

Git Branch and Commit SH: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.1
 
Commit: d2c646f0a434831bd5aef88b5692c108fe26e718 

Release notes: 

http://jenkins.cloudstack.org/view/4.1/job/docs-4.1-releasenotes/23/ 


Source release (checksums and signatures are available at the same 
location): 
https://dist.apache.org/repos/dist/dev/cloudstack/4.1/ 

PGP release keys (signed using B7B5E7FD): 
https://dist.apache.org/repos/dist/release/cloudstack/KEYS 

Vote will be open for 72 hours. 

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote? 

[ ] +1 approve 
[ ] +0 no opinion 
[ ] -1 disapprove (and reason why) 

d2c646f0a434831bd5aef88b5692c108fe26e718 



Re: Virtio

2013-07-30 Thread Simon Weller
Jeronimo, 


Are you running Windows VMs? If so, create a template using "Other PV" as the 
OS type. 


- Si 

- Original Message -

From: "Jeronimo Garcia"  
To: cloudstack-...@incubator.apache.org 
Sent: Tuesday, July 30, 2013 1:47:29 PM 
Subject: Virtio 

Hi List. 

Is there any way to load the virtio drivers by default in my guests? 

When I examine the dumpxml output, virtio isn't enabled by default. 

Please let me know. 



4.1.x KVM cloudVirBr to br-

2013-08-05 Thread Simon Weller
All, 

As most know, the upgrade from 4.0 to 4.1 changed the interface naming scheme. 
When a host in a cluster is rebooted, the interface naming changes. When this 
occurs, live migration to that host breaks. 

Example config: 
All management servers and hosts running CS 4.1.1 
Hypervisor: KVM on RHEL 6.3 
Host 1 has the older 4.0 interface naming scheme 
Host 2 was rebooted and has the newer interface naming scheme 

Live migration is looking for the older scheme's interface name (i.e. 
cloudVirBr) when attempting a migration from Host 1 to Host 2. 
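
The mismatch is visible on the hosts themselves, for example: 

# run on each KVM host; the bridge names it prints will differ per host 
brctl show | egrep 'cloudVirBr|^br' 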

Here's a sample log: 


2013-08-05 16:45:21,846 DEBUG [agent.transport.Request] 
(Job-Executor-34:job-1660) Seq 19-1921285594: Sending { Cmd , MgmtId: 
159090354809909, via: 19, Ver: v1, Flags: 100111, 
[{"MigrateCommand":{"vmName":"i-44-255-VM","destIp":"","hostGuid":"91e9b633-f46b-31f3-9a4b-92285fd94b62-LibvirtComputingResource","isWindows":false,"wait":0}}]
 } 
2013-08-05 16:45:21,926 DEBUG [agent.transport.Request] (StatsCollector-1:null) 
Seq 1-1768126050: Received: { Ans: , MgmtId: 159090354809909, via: 1, Ver: v1, 
Flags: 10, { GetVmStatsAnswer } } 
2013-08-05 16:45:21,963 DEBUG [agent.manager.AgentManagerImpl] 
(AgentManager-Handler-7:null) Ping from 5 
2013-08-05 16:45:22,012 DEBUG [agent.transport.Request] 
(AgentManager-Handler-9:null) Seq 19-1921285594: Processing: { Ans: , MgmtId: 
159090354809909, via: 19, Ver: v1, Flags: 110, 
[{"MigrateAnswer":{"result":false,"details":"Cannot get interface MTU on 
'cloudVirBr18': No such device","wait":0}}] } 
2013-08-05 16:45:22,012 DEBUG [agent.manager.AgentAttache] 
(AgentManager-Handler-9:null) Seq 19-1921285594: No more commands found 
2013-08-05 16:45:22,012 DEBUG [agent.transport.Request] 
(Job-Executor-34:job-1660) Seq 19-1921285594: Received: { Ans: , MgmtId: 
159090354809909, via: 19, Ver: v1, Flags: 110, { MigrateAnswer } } 
2013-08-05 16:45:22,012 ERROR [cloud.vm.VirtualMachineManagerImpl] 
(Job-Executor-34:job-1660) Unable to migrate due to Cannot get interface MTU on 
'cloudVirBr18': No such device 
2013-08-05 16:45:22,013 INFO [cloud.vm.VirtualMachineManagerImpl] 
(Job-Executor-34:job-1660) Migration was unsuccessful. Cleaning up: 
VM[User|app01-dev] 
2013-08-05 16:45:22,018 

Is there any current way to change the destination network CS Management uses 
so that a complete VM shutdown and restart isn't required to re-enable 
migration between hosts? 

Any ideas would be appreciated. 

- Si 


Re: 4.1.x KVM cloudVirBr to br-

2013-08-05 Thread Simon Weller
Thanks Marcus. We already thought of creating the old bridge, but we were 
hoping we had better options ;-) 



- Original Message -

From: "Marcus Sorensen"  
To: dev@cloudstack.apache.org 
Cc: cloudstack-...@incubator.apache.org 
Sent: Monday, August 5, 2013 5:42:38 PM 
Subject: Re: 4.1.x KVM cloudVirBr to br- 

Yes, the vm definition already has the bridge name chosen (on initial 
startup), and that definition is migrated between hosts. This is 
further exacerbated by the fact that the bridges are created on the 
destination host prior to migration (so they'll be ready), rather than 
somehow being able to see the existing configuration and prepare for 
the vm based on that. That ultimately probably doesn't matter much 
anyway, since we can't host two different bridges for the same vlan 
(the advantage of detecting what bridge name a migrating VM has would 
be to bring up the required bridge name for migrating VMs, and use the 
new bridge name for freshly started VMs, which isn't possible). 

As a workaround, if you want to force any rebooted, upgraded KVM hosts 
to use the old style naming, you can create any random bridge named 
'cloudVirBr', and the agent will detect it and continue to use the old 
style naming until such point when you can or need to switch to the 
capabilities of the new style naming. At that point you'll need to 
stop any and all VMs that are using the old style name, remove any old 
style bridges (or reboot the host), and then start things back up. 
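
A minimal sketch of that workaround on an upgraded host (the exact bridge name the 
agent matches on and the agent service name are assumptions here; adjust to your 
packages and setup):

  # Create a dummy bridge carrying the old cloudVirBr prefix, then restart the
  # agent so it detects it and keeps using the old naming scheme.
  brctl addbr cloudVirBr999
  ip link set cloudVirBr999 up
  service cloudstack-agent restart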

This really should have been documented in the release notes. I think 
it was a misunderstanding that the upgrade process would require 
everything be restarted. 

On Mon, Aug 5, 2013 at 4:05 PM, Simon Weller  wrote: 
> All, 
> 
> As most know, the upgrade from 4.0 to 4.1 changed the interface naming 
> schema. When a host in a cluster is rebooted, the interface naming changes. 
> When this occurs, live migration breaks to that host. 
> 
> Example config: 
> All Management and hosts running CS 4.1.1 
> Hypervisor: KVM on RHEL 6.3 
> Host 1 has older 4.0 interface naming schema 
> Host 2 was rebooted and has newer interface schema 
> 
> Live Migration is looking for older interface schema name (i.e. 
> cloudVirBr) when attempting a migration from Host 1 to Host 2. 
> 
> Here's a sample log: 
> 
> 
> 2013-08-05 16:45:21,846 DEBUG [agent.transport.Request] 
> (Job-Executor-34:job-1660) Seq 19-1921285594: Sending { Cmd , MgmtId: 
> 159090354809909, via: 19, Ver: v1, Flags: 100111, 
> [{"MigrateCommand":{"vmName":"i-44-255-VM","destIp":"","hostGuid":"91e9b633-f46b-31f3-9a4b-92285fd94b62-LibvirtComputingResource","isWindows":false,"wait":0}}]
>  } 
> 2013-08-05 16:45:21,926 DEBUG [agent.transport.Request] 
> (StatsCollector-1:null) Seq 1-1768126050: Received: { Ans: , MgmtId: 
> 159090354809909, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } } 
> 2013-08-05 16:45:21,963 DEBUG [agent.manager.AgentManagerImpl] 
> (AgentManager-Handler-7:null) Ping from 5 
> 2013-08-05 16:45:22,012 DEBUG [agent.transport.Request] 
> (AgentManager-Handler-9:null) Seq 19-1921285594: Processing: { Ans: , MgmtId: 
> 159090354809909, via: 19, Ver: v1, Flags: 110, 
> [{"MigrateAnswer":{"result":false,"details":"Cannot get interface MTU on 
> 'cloudVirBr18': No such device","wait":0}}] } 
> 2013-08-05 16:45:22,012 DEBUG [agent.manager.AgentAttache] 
> (AgentManager-Handler-9:null) Seq 19-1921285594: No more commands found 
> 2013-08-05 16:45:22,012 DEBUG [agent.transport.Request] 
> (Job-Executor-34:job-1660) Seq 19-1921285594: Received: { Ans: , MgmtId: 
> 159090354809909, via: 19, Ver: v1, Flags: 110, { MigrateAnswer } } 
> 2013-08-05 16:45:22,012 ERROR [cloud.vm.VirtualMachineManagerImpl] 
> (Job-Executor-34:job-1660) Unable to migrate due to Cannot get interface MTU 
> on 'cloudVirBr18': No such device 
> 2013-08-05 16:45:22,013 INFO [cloud.vm.VirtualMachineManagerImpl] 
> (Job-Executor-34:job-1660) Migration was unsuccessful. Cleaning up: 
> VM[User|app01-dev] 
> 2013-08-05 16:45:22,018 
> 
> Is there any current way to change the destination network CS Management uses 
> so that a complete VM shutdown and restart isn't required to re-enable 
> migration between hosts? 
> 
> Any ideas would be appreciated. 
> 
> - Si 



Re: [API][4.1.1] ALL VMs for ALL users, domains and projects

2013-08-06 Thread Simon Weller
Ilya, 


This has been the status quo since 4.0. It would definitely be nice to be able 
to list all running VMs, rather than have to drill down to projects. 


- Si 

- Original Message -

From: "Ilya Musayev"  
To: dev@cloudstack.apache.org 
Sent: Tuesday, August 6, 2013 7:16:43 PM 
Subject: RE: [API][4.1.1] ALL VMs for ALL users, domains and projects 

When I say it does not do it, I mean I don't see all VMs under the general Instances 
tab; I have to drill down to a specific project to see them. 

> -Original Message- 
> From: Musayev, Ilya [mailto:imusa...@webmd.net] 
> Sent: Tuesday, August 06, 2013 8:15 PM 
> To: dev@cloudstack.apache.org 
> Subject: [API][4.1.1] ALL VMs for ALL users, domains and projects 
> 
> How do you go about seeing *all* vms for *all* users in *all* domains and 
> projects? 
> 
> The admin user UI does not do it in 4.1.1 
> 
> Thanks 
> ilya 




Re: [VOTE] Apache CloudStack 4.2.0 (fourth round)

2013-09-06 Thread Simon Weller
We use CLVM in production today. I'm ok to apply a forward patch in our rpm 
builds if we need to, although I'm pretty sure CLVM is fairly widely deployed 
in organizations with fiber/iscsi SANs. I haven't had a chance to build the 
latest RC as of yet. I'll hopefully get that tested before midday Monday. I did 
start building the first RC, but packaging was still broken...then I got 
sidetracked. 

- Original Message -

From: "Marcus Sorensen"  
To: dev@cloudstack.apache.org 
Sent: Friday, September 6, 2013 5:08:53 PM 
Subject: RE: [VOTE] Apache CloudStack 4.2.0 (fourth round) 

By the way, not sure if it was clear but the template download portion 
worked just fine. 
On Sep 6, 2013 3:45 PM, "Edison Su"  wrote: 

> In order to support it, I need to manually, find a way to install 
> CLVM(find an ISCSI disk, create CLVM on it etc) in my Lab, a lot of work 
> to do. 
> Is it possible, I write some code, then have you help to test? I almost 
> finished the code. 
> 
> > -Original Message- 
> > From: Marcus Sorensen [mailto:shadow...@gmail.com] 
> > Sent: Friday, September 06, 2013 2:32 PM 
> > To: dev@cloudstack.apache.org 
> > Subject: Re: [VOTE] Apache CloudStack 4.2.0 (fourth round) 
> > 
> > I don't use CLVM any more, and I have no idea how many people do. I'm 
> > relatively certain that some do, because of past questions/bug reports. 
> > Regardless, I don't think it should be decided based on someone's guess 
> at 
> > how many people it will impact, but instead on whether we are willing to 
> ship 
> > regressions for the sake of a time based release. If we had some hard 
> data it 
> > could be useful, but since we don't I think we have to assume the worst. 
> > On Sep 6, 2013 2:55 PM, "Chip Childers"  
> wrote: 
> > 
> > > On Fri, Sep 06, 2013 at 08:18:35PM +, Animesh Chaturvedi wrote: 
> > > > 
> > > > 
> > > > > -Original Message- 
> > > > > From: Chip Childers [mailto:chip.child...@sungard.com] 
> > > > > Sent: Friday, September 06, 2013 10:21 AM 
> > > > > To: dev@cloudstack.apache.org 
> > > > > Subject: Re: [VOTE] Apache CloudStack 4.2.0 (fourth round) 
> > > > > 
> > > > > Animesh, 
> > > > > 
> > > > > I'd ask that this vote stay open until EOD Monday. I've tested 
> > > > > the basic sig / artifact, but I want an opportunity to run it in 
> > > > > an actual environment before formally voting. 
> > > > [Animesh>] Sure I can keep the VOTE open until Monday. 
> > > > > 
> > > > > Also, there is an issue being raised by Marcus (that Edison was 
> > > > > going 
> > > to 
> > > > > look into) in another thread around the CPVM. Is this a blocker 
> issue? 
> > > > [Animesh>]Edison is working on a fix and will put in 4.2-forward 
> branch. 
> > > I am not sure who else uses CLVM to assess the broader impact; if it 
> > > is Marcus and he is fine with using Edison's fix in his environment 
> > > then it is not an issue. 
> > > 
> > > Ha... I thought it was around the cPvm, not cLvm. 
> > > 
> > > Anyway, I guess it's up to Marcus (and others) to vote accordingly WRT 
> > > their concern about this issue. 
> > > 
> > > Thanks for holding off until EOD Monday for me (and perhaps others) to 
> > > vote! 
> > > 
> > > -chip 
> > > 
> 



Re: [VOTE] Apache CloudStack 4.2.0 (fourth round)

2013-09-09 Thread Simon Weller
-1 from me as well. 


I know we're trying to hit timed releases, but I think it's very important to 
preserve key underlying functionality across releases. If a supported and 
documented feature is known to be broken, we need to address it...if we don't, 
it's going to cause lots of pain, and reflect badly on ACS as a project. 

- Original Message -

From: "Chip Childers"  
To: dev@cloudstack.apache.org 
Sent: Monday, September 9, 2013 9:24:23 AM 
Subject: Re: [VOTE] Apache CloudStack 4.2.0 (fourth round) 

On Sun, Sep 08, 2013 at 12:40:30AM -0600, Marcus Sorensen wrote: 
> -1 ... sorry guys, especially with Simon chiming in. 
> 
> I'd request f2c5b5fbfe45196dfad2821fca513ddd6efa25c9 be cherry-picked. 

Agreed. 

I'm -1, given simon's perspective as well. Since we have the fix, let's 
get it into the release. 



Test procedure for upgrade from 4.1.x to 4.2 RC

2013-09-16 Thread Simon Weller
All, 

Could someone provide the documented upgrade procedure for those upgrading from 
4.1.x to 4.2? It's quite possible I'm just not looking in the correct place, 
but I've been unable to locate it thus far. 
I'd like to test this in our lab environment today for the voting round. I do 
have nonoss rpms built for it from the latest RC round. 

Our lab environment is RHEL 6.3, KVM using CLVM to a Compellent SAN with 
advanced networking. 

Thanks, 

- Si 


Re: Apache CloudStack 4.2.0 (fifth round)

2013-09-17 Thread Simon Weller
+1 

Built nonoss RPMS and deployed fresh installation on RHEL 6.3. 
Tested CLVM on KVM, adding users, adding projects, adding templates/isos and 
deploying VMs. 

I have not tested an upgrade as of yet. 

- Original Message -

From: "Animesh Chaturvedi"  
To: dev@cloudstack.apache.org 
Sent: Friday, September 13, 2013 6:12:39 PM 
Subject: Apache CloudStack 4.2.0 (fifth round) 


I've created a 4.2.0 release, with the following artifacts up for a vote: 

Git Branch and Commit SH: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.2
 
Commit: c1e24ff89f6d14d6ae74d12dbca108c35449030f 

List of changes: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.2
 

Source release (checksums and signatures are available at the same 
location): 
https://dist.apache.org/repos/dist/dev/cloudstack/4.2.0/ 

PGP release keys (signed using 94BE0D7C): 
https://dist.apache.org/repos/dist/release/cloudstack/KEYS 

Testing instructions are here: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+test+procedure 

Vote will be open for 72 hours (Wednesday 9/18 End of Day PST). 

For sanity in tallying the vote, can PMC members please be sure to indicate 
"(binding)" with their vote? 

[ ] +1 approve 
[ ] +0 no opinion 
[ ] -1 disapprove (and reason why) 




Re: HA redundant virtual router

2013-09-17 Thread Simon Weller
I think monit is installed currently in the system vm image. Sounds like it 
might make more sense to manage haproxy via monit, and allow it to recover the 
service should it fail. 
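
A minimal sketch of what that monit check could look like on the system VM (the 
pidfile path, init script and monit include directory are assumptions, not the 
shipped configuration):

  cat > /etc/monit/conf.d/haproxy <<'EOF'
  # restart haproxy if its process disappears
  check process haproxy with pidfile /var/run/haproxy.pid
      start program = "/etc/init.d/haproxy start"
      stop program  = "/etc/init.d/haproxy stop"
  EOF
  monit reload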

- Original Message -

From: "Sheng Yang"  
To: ""  
Cc: "Daan Hoogland" , "int-cloud" 
 
Sent: Tuesday, September 17, 2013 6:10:42 PM 
Subject: Re: HA redundant virtual router 

No, it's not intentional. 

HAProxy is part of the set of services that the redundant router enables/disables 
according to the MASTER/BACKUP status. All the services related to the redundant 
router are controlled by services.sh. 

What's the failure of HAproxy exactly in your case? And what's the root 
cause? 

Also, I think just yielding due to a haproxy failure won't help much, since effort 
is still needed for CS to recover the situation; at the very least it would need 
to notify the admin. Better to transition to the FAULT state if it's a critical error. 

--Sheng 


On Tue, Sep 17, 2013 at 12:07 AM, Sten Spans  wrote: 

> On Mon, 16 Sep 2013, Sheng Yang wrote: 
> 
> The reason for no HA as I said before, due to the complexity. E.g, if 
>> there 
>> can be 3 routers in the network(which control network is down but not the 
>> guest network), and it would cause two of them with the same priority(at 
>> certain time). The doc is mainly for describing the current policy and 
>> reason, as the basic for possible improvement. 
>> 
>> I haven't thought much about redundant router with HA, but many times 
>> we're 
>> dealing with intermittent network issue, so you try to plug off then plug 
>> in the network cable to see if HA works as expect. 
>> 
>> The priority cannot be changed on the fly; it's a parameter of the running 
>> keepalived process. So at least both routers need to be stopped 
>> before the priority is reset. And it's not reset to the minimum, since the value 
>> can go up or down based on the different cases. 
>> 
> 
> Looking at the doc you wrote I see no mention of HAProxy. 
> Is this intentional? 
> 
> A failure of HAProxy (which we've observed in practice) would 
> result in a loss of service for loadbalanced ports. 
> 
> Currently I'm thinking of adding something like the following: 
> 
> diff -Nru keepalived.conf.templ.orig keepalived.conf.templ 
> --- keepalived.conf.templ.orig 2013-09-17 09:02:28.410646521 +0200 
> +++ keepalived.conf.templ 2013-09-17 09:03:34.131434084 +0200 
> @@ -19,6 +19,12 @@ 
> router_id [ROUTER_ID] 
> } 
> 
> +vrrp_script check_haproxy { 
> + script "/usr/bin/killall -0 haproxy" 
> + interval 5 
> + weight 10 
> +} 
> + 
> vrrp_script check_bumpup { 
> script "[RROUTER_BIN_PATH]/check_bumpup.sh" 
> interval 5 
> @@ -47,6 +53,7 @@ 
> } 
> 
> track_script { 
> + check_haproxy 
> check_bumpup 
> heartbeat 
> } 
> 
> 
> This would boost vrrp priorities if haproxy is running, trigger 
> a failover if it fails, and should be harmless on hosts not running 
> haproxy. 
> 
> -- 
> Sten Spans 
> 
> "There is a crack in everything, that's how the light gets in." 
> Leonard Cohen - Anthem 
> 
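
For what it's worth, the vrrp_script in the patch above only relies on the exit 
status of killall -0, which can be tried by hand on a router before wiring it into 
keepalived:

  # Exit status 0 means at least one haproxy process exists, so keepalived would
  # add the script's weight; a non-zero status would trigger the failover path.
  killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"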



Re: [VOTE][ACS402] Apache CloudStack 4.0.2 (Second Round)

2013-04-19 Thread Simon Weller
-1 for the same reason David mentions below. 


Changing the vmware-base pom version from 4.0.2-SNAPSHOT to 4.0.2 fixes the 
build for me. 

- Original Message -

From: "David Nalley"  
To: dev@cloudstack.apache.org 
Sent: Friday, April 19, 2013 4:43:31 PM 
Subject: Re: [VOTE][ACS402] Apache CloudStack 4.0.2 (Second Round) 

> 
> [ ] +1 approve 
> [ ] +0 no opinion 
> [ ] -1 disapprove (and reason why) 
> 

Sadly -1 (binding) 
nonoss doesn't build with the release tarball. 
I think the problem is version in the vmware-base pom not being updated. 

--David 



Re: [VOTE][ACS402] Apache CloudStack 4.0.2 (Third Round)

2013-04-21 Thread Simon Weller


Build process completed with NONOSS patches. 
Rolled RPMS for 6.3. 
Installed and tested basic functionality successfully. 

+1 
- Original Message -

From: "Joe Brockmeier"  
To: dev@cloudstack.apache.org 
Sent: Saturday, April 20, 2013 10:50:11 AM 
Subject: [VOTE][ACS402] Apache CloudStack 4.0.2 (Third Round) 

Hi all, 

I've created a 4.0.2 release, and am asking for you to test the 
artifacts and *after* testing, please submit a vote. 

Suggested testing procedure here: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+4.0+test+procedure
 

(Please don't just +1 a release without any testing!) 

The following artifacts up for a vote: 

Git Branch and Commit SH: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.0
 
Commit: 48e7eadb85bd033b642cbc83b0698d4175a183fb 

List of changes: 
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.0
 

Source release (checksums and signatures are available at the same 
location): 
http://people.apache.org/~jzb/cloudstack/dist/releases/4.0.2/ 

PGP release keys (signed using A0207CD4): 
http://www.apache.org/dist/incubator/cloudstack/KEYS 

Vote will be open for 72 hours. 

NOTE: The only changes between this and the last voting round are to the 
version numbers in the three pom.xml files that aren't automatically 
changed to 4.0.2. The nonoss build breaks if the versions aren't 
correct, so that is fixed now. 

If you are trying to build nonoss RPMs, the default cloud.spec won't 
work. So if you want to build nonoss RPMs for 4.0.2, grab this spec file 
and cp it to cloud.spec under your cloudstack source directory: 

http://people.apache.org/~jzb/cloudstack/dist/releases/4.0.2/nonoss.cloud.spec 

For sanity in tallying the vote, can PMC members please be sure to 
indicate "(binding)" with their vote? 

[ ] +1 approve 
[ ] +0 no opinion 
[ ] -1 disapprove (and reason why) 

Best, 

jzb 
-- 
Joe Brockmeier 
j...@zonker.net 
Twitter: @jzb 
http://www.dissociatedpress.net/ 









Re: midonet-client and Guava dependency conflict

2017-03-09 Thread Simon Weller
So this brings up a good discussion point. As Jeff points out, the Midonet 
plugin hasn't been actively supported for almost 5 years. At what point do we 
consider retiring unsupported plugins?


- Si



From: Jeff Hair 
Sent: Thursday, March 9, 2017 9:43 AM
To: dev@cloudstack.apache.org
Subject: Re: midonet-client and Guava dependency conflict

After doing some more digging, I have confirmed the following:

   - The midonet plugin is using the Maven Shade plugin to put a bunch of
   dependencies into itself.
   - The plugin hosted in this repository was last updated in 2013.
   - Most importantly: removing all the guava stuff out of the midonet
   plugin fixes this issue.

I have not had any success in applying
https://github.com/openwide-java/tomcat-classloader-ordered to get Tomcat
to load its jars in alphabetical order, for whatever reason. I tried
putting the Loader in various context definition locations, but it refuses
to work. Any ideas?

Jeff
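
A quick way to confirm which jars bundle the conflicting class, in case others hit 
the same error (the webapp path below is an assumption; adjust to wherever the 
management webapp is deployed):

  # List every WEB-INF/lib jar that contains the Guava Equivalence class.
  LIBDIR=/usr/share/cloudstack-management/webapps/client/WEB-INF/lib
  for j in "$LIBDIR"/*.jar; do
      unzip -l "$j" | grep -q 'com/google/common/base/Equivalence.class' && echo "$j"
  done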


On Thu, Mar 9, 2017 at 1:43 PM, Jeff Hair  wrote:

> Hi,
>
> I'm deploying 4.9.2.0 (not the vanilla version, but rather an upgraded
> version of our fork) on Tomcat 8. Management server startup fails with the
> following error:
>
> java.lang.IncompatibleClassChangeError: Found interface
> com.google.common.base.Equivalence, but class was expected
>
> I've traced this down to the OutOfBandServiceManagerImpl. More
> specifically, when it tries to build the hostAlertCache using Guava's
> CacheBuilder. Deep in Guava, it's calling an "identity()" method on the
> Equivalence class.  All of the Guava classes are coming from guava-19.0
> except for com/google/common/base/Equivalence.class. The Equivalence
> class is being loaded from the midonet jar for some reason, and that
> version does not have the method needed. Thus, the error.
>
> This is because Tomcat apparently does not load jars in alphabetical order
> anymore, starting with version 8. An open ticket for them to fix this is
> here: https://bz.apache.org/bugzilla/show_bug.cgi?id=57129
>
> It could be possible to "fix" this by using a custom ClassLoader to force
> Tomcat to load things alphabetically (testing that right now--and not
> really succeeding), but the proper fix is to have the midonet client not be
> packaging guava with itself. Does anyone know why this is?
>
> Jeff
>


Re: [Proposal] - StorageHA

2017-03-14 Thread Simon Weller
So a few questions come to mind here.


So if all networking is lost, how are you going to reliably fence the VMs on the host?

Are you assuming you still have out of band IPMI connectivity?

If you're running bonded interfaces to different switches, what scenario would 
occur where the host loses network connectivity?


- Si


From: Tutkowski, Mike 
Sent: Tuesday, March 14, 2017 8:25 AM
To: dev@cloudstack.apache.org
Cc: Alex Bonilla
Subject: Re: [Proposal] - StorageHA

Thanks for your clarification. I see now. You were referring to a networking 
problem where one host could not see the storage (but the storage was still up 
and running).

On 3/13/17, 10:31 PM, "Jeromy Grimmett"  wrote:

I apologize for the delay on the response, let me clarify the points 
requested:

Mike asked:

"What I was curious about is if you plan to exclusively build your feature 
as a set of scripts and/or if you plan to update the CloudStack code base, as 
well."

JG:  My idea was to do this separately as a plugin, then add it to the code 
base down the road.

"Also, if a primary storage actually goes offline, I'm not clear on how 
starting an impacted VM on a different compute host would help. Could you 
clarify this for me?"

JG:  The VM would be started on another host that still has access to the 
storage.  Individually a host can have problems and lose its connectivity to a 
primary storage device.  The solution we are working on would help to get the 
VM back up and running much faster than waiting for Cloudstack to make a 
decision to restart the VM on a different host.

Paul asked:

"  1.  We can't/don't run scripts on vSphere hosts (not sure about Hyper-V)"

JG:  I should have been more clear, this is for KVM hosts.

"2.  I know of one failure scenario (which happened) where MTU issues in 
intermediate switches meant that small amounts of data could pass, but anything 
that was passed as jumbo frames then failed. So it would be important to 
exercise that."

JG:  I have faced this Jumbo Frame issue as well, perhaps we need to have 
an option that would indicate Jumbo Frames are being used to access that 
storage and the test result would reflect a failure to access using Jumbo 
Frames.

"3.  You need to be very sure of failures before shutting hosts down.  Also 
a host is likely to be connected to multiple storage pools, so you wouldn't 
want to shut down a host due to one pool becoming unavailable."

JG:  The script wouldn’t shut down any hosts at all.  Just force stop the 
affected VMs on that specific host and then start them on a host that is not 
having the issue with storage.

"4.  Environments can have hundreds of storage pools, so watch out for 
spamming the logs with updates."

JG:  The polling/testing time increments are configurable, so I am hoping 
that can help with that.  The results are pretty small and should be relatively 
negligible.
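
As a toy illustration of the kind of poll being discussed (the mount point, timeout 
and use of dd are assumptions for the sketch, not the actual plugin):

  # Attempt a small, bounded direct read from the primary storage mount; treat a
  # timeout or error as a failed check. iflag=direct bypasses the page cache but
  # may not be supported on every filesystem.
  timeout 10 dd if=/mnt/primary/heartbeat of=/dev/null bs=512 count=1 iflag=direct \
      && echo "primary storage reachable" || echo "primary storage check FAILED"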

"5.  The primary storage pools have a 'state' which should get updated and 
used by the deployment planners"

JG:  I have copied Alex on this email to make sure he sees this suggestion. 
 We will figure out how to incorporate that 'state' field.

"6.  Secondary storage pools don't have a 'state' - but it would be great 
if that were added in the DB and reflected in the UI."

JG:  For now, I think this might be a feature request that maybe we should 
submit through the normal Cloudstack request process.  Otherwise, we can 
definitely include that into our work when we start to add it into the code 
base.

To take this a step further, we are also working on a KVM host load 
balancer that will be used as a factor when moving the VMs.  We have a number 
of little projects we are working on.

Thank you all for reviewing the information.  All suggestions are welcome.

Jeromy Grimmett
P: 603.766.3625
jer...@cloudbrix.com
www.cloudbrix.com


-Original Message-
From: Paul Angus [mailto:paul.an...@shapeblue.com]
Sent: Saturday, March 11, 2017 2:43 AM
To: dev@cloudstack.apache.org
Subject: RE: [Proposal] - StorageHA

Hi Jeromy,

I love the idea, I'm not really a developer, so those guys will look at 
things a different way, but...

These would be by my initial comments:


  1.  We can't/don't run scripts on vSphere hosts (not sure about Hyper-V)
  2.  I know of one failure scenario (which happened) where MTU issues in 
intermediate switches meant that small amounts of data could pass, but anything 
that was passed as jumbo frames then failed. So it would be important to 
exercise that.
  3.  You need to be very sure of failures before shutting hosts down.  
Also a host is likely to be connected to multiple storage pools, so you 
wouldn't want to shut down a host due to one pool becoming unavailable.
  4.  Environments can have hundreds of storage pools, so watch

Re: midonet-client and Guava dependency conflict

2017-03-14 Thread Simon Weller
We took a look at testing it in the lab back in early 2016, and Midokura told us that 
it was a bad idea, that it probably wouldn't work, and that we should just switch to 
OpenStack.


- Si


From: Erik Weber 
Sent: Tuesday, March 14, 2017 3:28 AM
To: dev
Subject: Re: midonet-client and Guava dependency conflict

On Mon, Mar 13, 2017 at 7:45 PM, Rafael Weingärtner
 wrote:
> I got a reply from Midonet community; they said that midonet-client was
> incorporated by midonet-cluster (
> https://github.com/midonet/midonet/tree/staging/v5.4/midonet-cluster).
>
>
> So, if anyone wants to invest energy on this, it might be a good idea to
> upgrade the dependency. Moreover, I start to question the compatibility of
> the current client we are using, with the mido-net server side that might
> be deployed by users. Will this partial integration that we have work?


Just as important to ask: "Has it ever worked?"
Do we know anyone who use, or have used, this integration?

--
Erik


RE: [DISCUSS] Retirement of midonet plugin

2017-03-14 Thread Simon Weller
I agree with Sergey and Will. Let's disable it first.

Simon Weller/615-312-6068

-Original Message-
From: Rafael Weingärtner [rafaelweingart...@gmail.com]
Received: Tuesday, 14 Mar 2017, 9:10PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: [DISCUSS] Retirement of midonet plugin

Dear ACS fellows,
Recently there have been two threads asking and discussing the “midonet”
integration with Apache CloudStack (ACS) [1-2].

After quite some discussions, we noticed that despite having some people
willing to use it, the plugin has never been fully developed by its vendor
(Midokura). Further, nobody else has put in the effort to fully test and
finish its implementation. It seems that the plugin was incorporated
into our code base without being fully finished. Moreover, I have asked
around at the Midonet community, and the java client they use has changed
quite a bit from the one we use.

It begs the question, if it does not work, why do we advertise such
integration? [3]. In my opinion, it would be great if we had such
integration; however, we as a community of individuals cannot bear the
cost of such a task by ourselves.

It seems we have three options: (i) disable the build for the plugin and
let the code sit unused in the ACS code base over time; (ii) remove
everything; or (iii) someone that may benefit from this plugin jumps in and
concludes the integration with Midonet using their new client.

There may be other solutions that I am not seeing. So, @Devs, your thoughts
and comments are welcome ;)

[1]
http://cloudstack.markmail.org/thread/qyedle5jb2c34gsc#query:+page:1+mid:xn2zq2v3eim5vl2q+state:results
[2]
http://cloudstack.markmail.org/message/rewzk4v7dgzpsxkm?q=midonet+order:date-backward&page=1#query:midonet%20order%3Adate-backward+page:1+mid:i563khxlginf6smg+state:results
[3] http://docs.cloudstack.apache.org/en/latest/networking/midonet.html


--
Rafael Weingärtner


Re: [ANNOUNCE] Wei Zhou and Rene Moser are now PMC members

2017-03-17 Thread Simon Weller
Congrats Rene and Wei!


From: Daan Hoogland 
Sent: Friday, March 17, 2017 2:06 AM
To: dev@cloudstack.apache.org
Subject: [ANNOUNCE] Wei Zhou and Rene Moser are now PMC members

dear devs,

The PMC has invited Rene Moser and Wei Zhou over the last couple of weeks to 
join their ranks. Both have accepted. Please join me in congratulating both, 
thanking them for their contributions so far and encouraging them to show their 
best for Apache CloudStack ;) \o/ ...

regards,


daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: Just a thanks for the help

2017-03-21 Thread Simon Weller
This is a great write up. Thanks for taking the time to do it and share it with 
the community!


- Si


From: Jeromy Grimmett 
Sent: Monday, March 20, 2017 9:13 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Cc: Giles Sirett
Subject: Just a thanks for the help


Props to everyone for the help over the months and years, a little article I 
wrote on LinkedIn.  Feel free to share as you wish:



https://www.linkedin.com/pulse/underdog-cloud-computing-world-jeromy-grimmett



I’m looking forward to working with you all more.



Regards,

j



Jeromy Grimmett

155 Fleet Street

Portsmouth, NH 03801

Direct: 603.766.3625
Office: 603.766.4908

Fax: 603.766.4729
jer...@cloudbrix.com

www.cloudbrix.com




Re: Cannot migrate VMware VM with root disk to host in different cluster (CloudStack 4.10)

2017-03-23 Thread Simon Weller
Mike,


It is possible to do this on vcenter, but it requires a special license I 
believe.


Here's the info on it :

https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-A16BA123-403C-4D13-A581-DC4062E11165.html

https://pubs.vmware.com/vsphere-51/index.jsp#com.vmware.vsphere.vcenterhost.doc/GUID-561681D9-6511-44DF-B169-F20E6CA94944.html


- Si

From: Tutkowski, Mike 
Sent: Thursday, March 23, 2017 3:09 PM
To: dev@cloudstack.apache.org
Subject: Re: Cannot migrate VMware VM with root disk to host in different 
cluster (CloudStack 4.10)

This is interesting:

If I shut the VM down and then migrate its root disk to storage in the other 
cluster, then start up the VM, the VM gets started up correctly (running on the 
new host using the other datastore).

Perhaps you simply cannot live migrate a VM and its storage from one cluster to 
another with VMware? This works for XenServer and I probably just assumed it 
would work in VMware, but maybe it doesn’t?

The reason I’m asking now is because I’m investigating the support of 
cross-cluster migration of a VM that uses managed storage. This works for 
XenServer as of 4.9 and I was looking to implement similar functionality for 
VMware.

On 3/23/17, 2:01 PM, "Tutkowski, Mike"  wrote:

Another piece of info:

I tried this same VM + storage migration using NFS for both datastores 
instead of iSCSI for both datastores and it failed with the same error message:

Required property datastore is missing from data object of type 
VirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec.DiskLocator
at line 1, column 326

while parsing property "disk" of static type 
ArrayOfVirtualMachineRelocateSpecDiskLocator

while parsing serialized DataObject of type vim.vm.RelocateSpec
at line 1, column 187

while parsing call information for method RelocateVM_Task
at line 1, column 110

while parsing SOAP body
at line 1, column 102

while parsing SOAP envelope
at line 1, column 38

while parsing HTTP request for method relocate
on object of type vim.VirtualMachine
at line 1, column 0

On 3/23/17, 12:33 PM, "Tutkowski, Mike"  wrote:

Slight typo:

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datastore.

Should be (datastore changed to datacenter):

Both ESXi hosts are version 5.5 and both clusters are within the same 
VMware datacenter.

On 3/23/17, 12:31 PM, "Tutkowski, Mike"  
wrote:

A little update here:

In the debugger, I made sure we asked for the correct source 
datastore (I edited the UUID we were using for the source datastore).

When VirtualMachineMO.changeDatastore is later invoked having the 
proper source and target datastores, I now see this error message:

Virtual disk 'Hard disk 1' is not accessible on the host: Unable to 
access file [SIOC-1]

Both ESXi hosts are version 5.5 and both clusters are within the 
same VMware datastore.

The source datastore and the target datastore are both using iSCSI.

On 3/23/17, 11:53 AM, "Tutkowski, Mike"  
wrote:

Also, in case it matters, both datastores are iSCSI based.

> On Mar 23, 2017, at 11:52 AM, Tutkowski, Mike 
 wrote:
>
> My version is 5.5 in both clusters.
>
>> On Mar 23, 2017, at 9:48 AM, Sateesh Chodapuneedi 
 wrote:
>>
>>
 On 23/03/17, 7:21 PM, "Tutkowski, Mike" 
 wrote:
>>
 However, perhaps someone can clear this up for me:
 With XenServer, we are able to migrate a VM and its 
volumes from a host using a shared SR in one cluster to a host using a shared 
SR in another cluster even though the source host can’t see the target SR.
 Is the same thing possible with VMware or does the source 
host have to be able to see the target datastore? If so, does that mean the 
target datastore has to be zone-wide primary storage when using VMware to make 
this work?
>> Yes, Mike. But that’s the case with versions less than 5.1 
only. In vSphere 5.1 and later, vMotion does not require environments with 
shared storage. This is useful for performing cross-cluster migrations, when 
the target cluster machines might not have access to the source cluster's 
storage.
>> BTW, what is the version of ESXi hosts in this setup?
>>
>> Regards,
>> Sateesh,
>> CloudStack development,
>> Accelerite, CA-95054
>>
>>   On 3/23/17, 7:47 AM, "Tutkowski, Mike" 
 wrote:
>>
>>   This looks a little suspicious to me (in 
Vmwa

RE: [VOTE] Retirement of midonet plugin

2017-03-29 Thread Simon Weller
+1

Simon Weller/615-312-6068

-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Wednesday, 29 Mar 2017, 6:18AM
To: Rafael Weingärtner [rafaelweingart...@gmail.com]; dev@cloudstack.apache.org 
[dev@cloudstack.apache.org]; us...@cloudstack.apache.org 
[us...@cloudstack.apache.org]
Subject: Re: [VOTE] Retirement of midonet plugin

+1

> Op 28 maart 2017 om 22:46 schreef Rafael Weingärtner 
> :
>
>
> Dear ACS fellows,
> We have discussed the retirement of Midonet plugin [*]. After quite some
> talk, we converged on a retirement process and it seems that we all agree
> that the Midonet plugin should be retired. So, to formalize things, we
> should vote on the Midonet retirement.
>
> All users and devs are welcome to vote here:
> [+1] I *do want to retire *the Midonet plugin
> [0] Whatever happens I am happy
> [-1] I *do not want to retire* the Midonet plugin
>
>
> [*] http://markmail.org/message/x6p3gnvqbbxcj6gs
>
> --
> Rafael Weingärtner


CloudStack Collaboration Conference Miami deadline fast approaching!

2017-04-03 Thread Simon Weller
All,


We're only weeks away from an extremely exciting CloudStack Collaboration 
Conference (CCC), co-located with ApacheCon NA in beautiful Miami, FL.

Don't miss out on great talks, exciting networking opportunities and the 
ability to see the latest and greatest on everything CloudStack.


If you are a current Apache committer, or in the academic field, you qualify 
for a registration discount, so don't hesitate -  register immediately!

Standard registration discount ends on April 16th, so hurry and confirm your 
seat now, as space is limited.


If you are a confirmed speaker, you should have received a special registration 
code, so don't forget that you still need to register in order to attend!


http://events.linuxfoundation.org/events/apachecon-north-america/attend/register-


- Si






RE: [VOTE] Apache Cloudstack should join the gitbox experiment.

2017-04-10 Thread Simon Weller
+1

Simon Weller/615-312-6068

-Original Message-
From: Daan Hoogland [daan.hoogl...@gmail.com]
Received: Monday, 10 Apr 2017, 9:22AM
To: dev [dev@cloudstack.apache.org]
Subject: [VOTE] Apache Cloudstack should join the gitbox experiment.

In the Apache foundation an experiment has been going on to host
mirrors of Apache projects on GitHub with more write access than just
the mirror-bot. For those projects, committers can merge on GitHub
and put labels on PRs.

I move to have the project added to the gitbox experiment.
Please cast your votes:

+1 CloudStack should be added to the gitbox experiment
+-0 I don't care
-1 CloudStack shouldn't be added to the gitbox experiment and give your reasons

thanks,
--
Daan


RE: How are router checks scheduled?

2017-04-10 Thread Simon Weller
Do you have 2 management servers?

Simon Weller/615-312-6068

-Original Message-
From: Sean Lair [sl...@ippathways.com]
Received: Monday, 10 Apr 2017, 2:54PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: How are router checks scheduled?

According to my management server logs, some of the periodic checks are getting 
kicked off twice at the same time.  The CheckRouterTask is kicked off every 
30 seconds, but each time it runs, it runs twice in the same second...  See 
logs below for example:

2017-04-10 21:48:12,879 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-5f7bc584) (logid:4d5b1031) Found 10 routers to 
update status.
2017-04-10 21:48:12,932 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-d027ab6f) (logid:1bc50629) Found 10 routers to 
update status.
2017-04-10 21:48:42,877 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-2c8f4d18) (logid:e9111785) Found 10 routers to 
update status.
2017-04-10 21:48:42,927 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-1bfd5351) (logid:ad0f95ef) Found 10 routers to 
update status.
2017-04-10 21:49:12,874 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-ede0d2bb) (logid:6f244423) Found 10 routers to 
update status.
2017-04-10 21:49:12,928 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-d58842d5) (logid:8442d73c) Found 10 routers to 
update status.

How is this scheduled/kicked off?  I am debugging some site-to-site VPN alert 
problems, and they seem to be related to a race condition due to the 
"CheckRouterTask" being kicked off two at a time.

Thanks
Sean





Re: How are router checks scheduled?

2017-04-10 Thread Simon Weller
We've seen something very similar. By any chance, are you seeing any strange 
cpu load issues that grow over time as well?

Our team has been chasing down an issue that appears to be related to s2s vpn 
checks, where a race condition seems to occur that threads out the cpu over 
time.




From: Sean Lair 
Sent: Monday, April 10, 2017 5:11 PM
To: dev@cloudstack.apache.org
Subject: RE: How are router checks scheduled?

I do have two mgmt servers, but I have one powered off.  The log excerpt is 
from one management server.  This can be checked in the environment by running:

cat /var/log/cloudstack/management/management-server.log | grep "routers to 
update status"

This is happening both in prod and our dev environment.  I've been digging 
through the code and have some ideas and will post back later if successful in 
correcting the issue.

The biggest problem is the race condition between the two simultaneous S2S VPN 
checks.  They step on each other and spam the heck out of us with the email 
alerting.
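
A slightly extended version of the grep Sean mentions, which groups the status 
sweeps by second so a doubled scheduler shows up as a count of 2 per interval 
(log path assumed to be the default):

  grep "routers to update status" /var/log/cloudstack/management/management-server.log \
      | awk '{print $1" "$2}' | cut -d',' -f1 | sort | uniq -c | sort -rn | head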



-Original Message-
From: Simon Weller [mailto:swel...@ena.com]
Sent: Monday, April 10, 2017 5:02 PM
To: dev@cloudstack.apache.org
Subject: RE: How are router checks scheduled?

Do you have 2 management servers?

Simon Weller/615-312-6068

-Original Message-
From: Sean Lair [sl...@ippathways.com]
Received: Monday, 10 Apr 2017, 2:54PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: How are router checks scheduled?

According to my management server logs, some of the periodic checks are getting 
kicked off twice at the same time.  The CheckRouterTask is kicked off every 
30 seconds, but each time it runs, it runs twice in the same second...  See 
logs below for example:

2017-04-10 21:48:12,879 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-5f7bc584) (logid:4d5b1031) Found 10 routers to 
update status.
2017-04-10 21:48:12,932 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-d027ab6f) (logid:1bc50629) Found 10 routers to 
update status.
2017-04-10 21:48:42,877 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-2c8f4d18) (logid:e9111785) Found 10 routers to 
update status.
2017-04-10 21:48:42,927 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-1bfd5351) (logid:ad0f95ef) Found 10 routers to 
update status.
2017-04-10 21:49:12,874 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-ede0d2bb) (logid:6f244423) Found 10 routers to 
update status.
2017-04-10 21:49:12,928 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] 
(RouterStatusMonitor-1:ctx-d58842d5) (logid:8442d73c) Found 10 routers to 
update status.

How is this scheduled/kicked off?  I am debugging some site-to-site VPN alert 
problems, and they seem to be related to a race condition due to the 
"CheckRouterTask" being kicked off two at a time.

Thanks
Sean





Re: [DISCUSS][PROPOSAL] CA authority plugin definition

2017-04-14 Thread Simon Weller
Daan,


What about integrating something like Vault (https://github.com/hashicorp/vault)?


- Si
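
For context, a rough sketch of the kind of workflow Vault's PKI backend offers 
(syntax from the Vault releases current at the time; newer versions use 
"vault secrets enable pki" instead of "vault mount pki", and all names below are 
placeholders):

  vault mount pki
  vault write pki/root/generate/internal common_name="cloud.internal" ttl=87600h
  vault write pki/roles/cloudstack allowed_domains="cloud.internal" \
      allow_subdomains=true max_ttl=720h
  vault write pki/issue/cloudstack common_name="agent01.cloud.internal"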


From: Daan Hoogland 
Sent: Friday, April 14, 2017 5:46 AM
To: dev@cloudstack.apache.org
Subject: [DISCUSS][PROPOSAL] CA authority plugin definition

Devs,

Following a discussion with a client, they came up with the idea to create a 
pluggable CA framework. A plugin would serve the components in CloudStack that 
require certificates (management servers, agents, load balancers, SVMs, etc.), 
answering certificate requests and validating certificates on request.

A default plugin can be written that serves certificates signed by its own 
self-signed root certificate and has its own revocation list to be managed by the 
admin. Other plugins could forward requests by mail or web to external parties.

A CA-plugin will have to

-  Setup; for the default this means creating its certificate, for 
others it might mean installing an intermediate certificate or configuring a mail 
or website address.

-  Accept and answer certificate requests

o  For client certificates

o  For server certificates

-  Accept revocation requests

-  Validate a connection request according to origin, certificate and 
extra data. What that extra data is, is defined by the plugin and can be 
credentials or field definitions referring to the x509 entries or, for instance, 
allowed port numbers… this is basically free to the implementer.

A next step will have to be integrating the request calls with installs on 
targets, but I think this feature has merit as is, since it could be used with 
out-of-band configuration management tools as well.

Any thoughts, remarks and critiques are welcome,

daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: [DISCUSS][PROPOSAL] CA authority plugin definition

2017-04-14 Thread Simon Weller
Yeah, I agree it would be better as a plugin. We feel a big thing missing in 
ACS right now is a KMS style service.



From: Daan Hoogland 
Sent: Friday, April 14, 2017 10:05 AM
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS][PROPOSAL] CA authority plugin definition

Simon, I can think of use cases for that and it is an interesting topic. I can 
also see it as being implemented in a CA-plugin. I do not think it should be in 
the base of this framework though. That would complicate CloudStack for simple 
users too much, I think. On the other hand, it would have more use cases than 
just CA-plugins (fantasy running now)

On 14/04/17 16:57, "Simon Weller"  wrote:

Daan,


What about integrating something like Vault (https://github.com/hashicorp/vault)?

- Si


From: Daan Hoogland 
Sent: Friday, April 14, 2017 5:46 AM
To: dev@cloudstack.apache.org
Subject: [DISCUSS][PROPOSAL] CA authority plugin definition

Devs,

Following a discussion with a client they came up with the idea to create a 
pluggable CA-framework. A plugin would serve components in cloudstack that so 
require (management servers, agents, load balancers, SVMs, etc.) with 
certificates answering certificate requests and validating certificates on 
request.

A default plugin can be written that serves according to its own self 
signed root certificate and have its own revocation list to be managed by the 
admin. Other plugin could forward by mail or web requests to external parties.

A CA-plugin will have to

-  Setup, for the default this means creating its certificate, for 
others it might mean install an intermediate certificate or configure a mail, 
or website address.

-  Accept and answer certificate requests

o  For client certificates

o  For server certificates

-  Accept revocation requests

-  Validate a connection request according to origin and 
certificate and . What extra data is is defined by the plugin and 
can be credentials or field-definitions referring the x509 entries or for 
instance port numbers allowed… this is basically free to the implementer.

A next step will have to be integrating the request calls with installs on 
targets but I think as is this feature merits itself as it could be used with 
out of band configuration management tools as well.

Any thoughts, remarks and critiques are welcome,

daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue






daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: Private gateways - experience, anyone really using it ?

2017-05-03 Thread Simon Weller
We use private gateways extensively with KVM and redundant VRs. We have our own 
ACS 4.8 branch (https://github.com/myENA/cloudstack/tree/release/ENA-4.8) with 
quite a number of backported fixes that relate to PGs.

As far as I'm aware, vxlan is not supported on PGs today. We would like to see 
it though. Today we use vlans and connect a customer lan to a PG via MPLS.



From: Andrija Panic 
Sent: Wednesday, May 3, 2017 10:09 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: Private gateways - experience, anyone really using it ?

Hi all,

I'm interested to know if anyone is using private gateways in production,
and what your experience with it has been, any undocumented limitations, etc.?

I don't see if it supports vxlans (perhaps I will try to test it myself)

I really would appreciate any feedback.

Thanks,

--

Andrija Panić


Re: Private gateways - experience, anyone really using it ?

2017-05-03 Thread Simon Weller
So we manage static routes within our middleware and UI (we don't use the ACS 
native UI), so I don't think we've experienced this.


From: williamstev...@gmail.com  on behalf of Will 
Stevens 
Sent: Wednesday, May 3, 2017 10:26 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: Private gateways - experience, anyone really using it ?

@sweller: I would be interested in the PG fixes you have done.  Anything
related to overlapping routes?

*Will STEVENS*
Lead Developer





On Wed, May 3, 2017 at 11:23 AM, Simon Weller  wrote:

> We use private gateways extensively with KVM and redundant VRs. We have
> our own ACS 4.8 branch (https://github.com/myENA/
> cloudstack/tree/release/ENA-4.8) with quite a number of backported fixes
> that relate to PGs.
>
> As far as I'm aware, vxlan is not supported on PGs today. We would like to
> see it though. Today we use vlans and connect a customer lan to a PG via
> MPLS.
>
>
> 
> From: Andrija Panic 
> Sent: Wednesday, May 3, 2017 10:09 AM
> To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> Subject: Private gateways - experience, anyone really using it ?
>
> Hi all,
>
> I'm interested to know if anyone is using private gateways in production,
> and what your experience with it has been, any undocumented limitations, etc.?
>
> I don't see if it supports vxlans (perhaps I will try to test it myself)
>
> I really would appreciate any feedback.
>
> Thanks,
>
> --
>
> Andrija Panić
>


Re: help/advise needed: Private gateway vs. new physcial network issue

2017-05-03 Thread Simon Weller
Andrija,


Do you have any network tagging setup for your vpc network offerings that 
correspond to your zone network tags?


From: Andrija Panic 
Sent: Wednesday, May 3, 2017 3:46 PM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: help/advise needed: Private gateway vs. new physcial network issue

Hi all,

I'm trying to test Private Gateway on our production (actually on DEV
first :) ) setup of ACS 4.5,
but I'm hitting some strange issues during the actual creation of the PV GTW.

My setup is the following:

ACS 4.5, advanced zone KVM (ubuntu 14)
mgmt network: KVM label/name: cloudbr0
sec. stor.network KMV label/name: cloudbr2
guest network KVM label/name: bond0.950 (we use vxlans, so this is
apropriate...)
public network KVM label/name: cloudbr3

This above is all fine, but when adding a PRIV.GTW, ACS tries to provision
a new vlan interface (later with a bridge...) on top of the selected physical
interface (from the list above) - which in my case seems to be impossible.

So I decided to add an additional Physical Network (name: bond0), expecting
ACS to provision e.g. a bond0.999 vlan interface for one PRIV.GTW for
testing purposes (vlan 999)

PROBLEM:
- in a running zone, I need to disable it first, then I use CloudMonkey to add the new physical network to the zone:
* create physicalnetwork name=bond0 broadcastdomainrange=zone
zoneid=d27f6354-a715-40c7-8322-a31091f97699 isolationmethod=vlan
Afterwards I do enable the zone: update physicalnetwork state=Enabled
id=3424e392-e0a1-4c21-81d9-db69acbe6c8e

The first command above does NOT update the DB table
cloud.physical_network_isolation_methods
with a new record, so when you list the network it doesn't mention the isolation_method.
OK, so I edit the DB directly and create a new row referencing the new network by ID,
with vlan set as the isolation method.
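
For reference, that manual insert amounts to something like the following (a sketch: 
the column names are assumed from the table named above, the internal id comes from 
cloud.physical_network, and the 'VLAN' casing may differ in your schema):

  mysql -u cloud -p cloud <<'SQL'
  INSERT INTO physical_network_isolation_methods (physical_network_id, isolation_method)
  SELECT id, 'VLAN' FROM physical_network WHERE name = 'bond0';
  SQL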

BTW, the table cloud.physical_network_traffic_types is not populated, which I
assume is OK/good since I don't want any normal traffic
(mgmt/guest/public/storage) to go over this physical net - but then again this
might be the root of the problem, since the only guest network is on PIF
bond0.950

When I try to create the PRIV.GTW, ACS does some magic and again tries to
provision the vlan 999 interface (example vlan from above) on bond0.950 (guest
network) (bond0.950.999)

I checked the logs (attached below) and it does try to provision the GTW on
the new physical network, really.

I'm assuming that maybe, since no values for the new bond0 network are populated
in the table cloud.physical_network_traffic_types, ACS falls back to the only
available guest network, which is bond0.950 - also I recall we need to define the
KVM label so ACS will actually know which interface to use... (which is missing
from the DB for the new bond0 network, as explained...)

I checked the logs and didn't see any interesting stuff really (perhaps I'm
missing something...)
https://pastebin.com/MZXrK31M

I would really appreciate any help, since I don't know which direction to go
now...





--

Andrija Panić


Re: help/advise needed: Private gateway vs. new physcial network issue

2017-05-03 Thread Simon Weller
We deploy with 2 physical interfaces. One is for vxlan guest networks and the 
other is a trunk interface for public, mgmt and private gateways. We found 
that tagging was necessary, or the incorrect interface can be selected because 
both carry guest networks.
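
A minimal sketch of that tagging setup via cloudmonkey (the UUID and tag name are 
placeholders; the point is just that the physical network tag and the offering tag 
must match):

  cloudmonkey update physicalnetwork id=<uuid-of-trunk-physical-network> tags=trunknet
  # then verify the (VPC) network offerings used for private gateways carry the same tag
  cloudmonkey list networkofferings keyword=Private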



From: Andrija Panic 
Sent: Wednesday, May 3, 2017 4:09 PM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: help/advice needed: Private gateway vs. new physical network issue

Hi Simon,

not at all. We use tags only for storage and compute(service)/disk
offerings...

But,

I just found out that even when I change the DB record - change the KVM label
from bond0.950 to bond0 - then disable/enable the zone, and even restart the mgmt
servers, ACS still provisions vlan 999 on top of bond0.950 although I
selected bond0.


Here is the funny thing: when I changed the agent.properties
file from guest.network.device=bond0.950 to bond0, it worked (at least the
proper PIF was selected)... but again, this can't be done in production in my case.
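
For reference, the change that made the proper PIF get picked was along these lines
on the KVM host (a sketch; the restart command differs per distro, and flipping this
on a production host obviously moves where guest traffic gets plumbed):

    # /etc/cloudstack/agent/agent.properties
    sed -i 's/^guest.network.device=bond0.950/guest.network.device=bond0/' \
        /etc/cloudstack/agent/agent.properties
    service cloudstack-agent restart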

It would be interesting to know (CloudOps and others) if you guys use the same
physical network to carry guest private networks (vlans or vxlans?) AND
these new vlans for the PRIV.GTW. We use vxlans for guest traffic...


Thanks Simon,

Andrija

On 3 May 2017 at 23:01, Simon Weller  wrote:

> Andrija,
>
>
> Do you have any network tagging setup for your vpc network offerings that
> correspond to your zone network tags?



--

Andrija Panić


Re: [PROPOSAL] branch on first RC and open up master for features

2017-05-08 Thread Simon Weller
The only PR that is currently showing in blocker status is : 
https://github.com/apache/cloudstack/pull/2062

Are there others that should be tagged?


- Si



From: Rohit Yadav 
Sent: Monday, May 8, 2017 6:43 AM
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] branch on first RC and open up master for features

Rajani,


Can we have a list of outstanding blockers/issues?


I also saw some enhancement PRs merged, which I think we should be avoiding and 
instead have our resources spent on fixing the release blockers, thanks.


Regards.


From: Rajani Karuturi 
Sent: 08 May 2017 16:20:01
To: dev@cloudstack.apache.org
Subject: Re: [PROPOSAL] branch on first RC and open up master for features

I disagree. The release process is taking long because we don't
have enough people working on the release. Sometimes, even the
blockers don't get enough attention. There is no point in adding
features on already broken/blocked master which is not
releasable. "un-freezing" master for new features shouldn't be
the goal in my opinion. We should move towards faster
releases/release cycles.

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On May 8, 2017 at 2:11 PM, Daan Hoogland
(daan.hoogl...@shapeblue.com) wrote:

LS,

On a lot of occasions we have seen new features that are
waiting for releases with blocker bugs, and while these bugs
certainly must be solved, users that are not directly hindered by those
bugs are still stopped by them. Therefore, I propose that we
branch for future releases on the first RC, so that
development is not stopped by it. If the release process takes
too long and nice features get merged in between, we can always
decide to re-branch before releasing.

Thoughts..?
Daan

daan.hoogl...@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HSUK
@shapeblue

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





RE: Extend CloudStack for a new hypervisor

2017-05-18 Thread Simon Weller
Which hypervisor are you wanting to implement?

Simon Weller/615-312-6068

-Original Message-
From: John Smith [john.smith@gmail.com]
Received: Thursday, 18 May 2017, 5:29PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Extend CloudStack for a new hypervisor

Greetings!

I have a need to extend CloudStack to support an additional hypervisor.
This is not something I consider strategic for CloudStack itself, but I
have a project with a very specific need.

I have a development background but am not an active developer right now
... so looking forward to getting back in the saddle!  I've never developed
against the CloudStack tree before.

I can't find any docs on how one would introduce support for a new
hypervisor (eg. what classes, methods, etc, need to be implemented,
extended, etc) and checking the source tree I can't easily see if there is
a base to build from.  I would appreciate any pointers about where to start
looking to save me going through the entire tree from scratch.

The standard CloudStack concepts should be easy enough (ha!) to map 1:1 to
this additional hypervisor (including primary & secondary storage, router &
secondary storage VMs, the networking concepts, etc) so I'm hoping that I
can simply implement it like a VMware or Xen backend ...

Thanks in advance!

John.


RE: [DISCUSS] Config Drive: Using the OpenStack format?

2017-05-19 Thread Simon Weller
+1

Simon Weller/615-312-6068

-Original Message-
From: Rafael Weingärtner [rafaelweingart...@gmail.com]
Received: Friday, 19 May 2017, 11:24AM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: [DISCUSS] Config Drive: Using the OpenStack format?

This seems a very interesting idea.
+1

On Fri, May 19, 2017 at 9:33 AM, Marc-Aurèle Brothier 
wrote:

> Hi Widoo,
>
> That sounds like a pretty good idea in my opinion. +1 for adding it
>
> Marco
>
>
> > On 19 May 2017, at 15:15, Wido den Hollander  wrote:
> >
> > Hi,
> >
> > Yesterday at ApacheCon Kris from Nuage networks gave a great
> presentation about alternatives for userdata from the VR: Config Drive
> >
> > In short, a CD-ROM/ISO attached to the Instance containing the
> meta/userdata instead of having the VR serve it.
> >
> > The outstanding PR [0] uses its own format on the ISO while cloud-init
> already has support for config drive [1].
> >
> > This format uses 'openstack' in the name, but it seems to be in
> cloud-init natively and well supported.
> >
> > I started the discussion yesterday during the talk and thought to take
> it to the list.
> >
> > My opinion is that we should use the OpenStack format for the config
> drive:
> >
> > - It's already in cloud-init
> > - Easier for templates to be used on CloudStack
> > - Easier adoption
> >
> > We can always write a file like "GENERATED_BY_APACHE_CLOUDSTACK" or
> something on the ISO.
> >
> > We can also symlink the 'openstack' directory to a directory called
> 'cloudstack' on the ISO.
> >
> > Does anybody else have an opinion on this one?
> >
> > Wido
> >
> > [0]: https://github.com/apache/cloudstack/pull/2097
> > [1]: http://cloudinit.readthedocs.io/en/latest/topics/
> datasources/configdrive.html#version-2
>
>


--
Rafael Weingärtner
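
For reference, the cloud-init "config drive v2" layout under discussion is roughly the
following - a hand-rolled sketch only (cloud-init locates the drive by the config-2
volume label, and openstack/latest is where it expects the data):

    mkdir -p cfgdrive/openstack/latest
    cat > cfgdrive/openstack/latest/meta_data.json <<'EOF'
    { "uuid": "i-example-0001", "hostname": "vm01.example.com" }
    EOF
    cat > cfgdrive/openstack/latest/user_data <<'EOF'
    #cloud-config
    packages: [ htop ]
    EOF
    # the "config-2" volume label is what cloud-init's ConfigDrive datasource looks for
    genisoimage -o configdrive.iso -V config-2 -R -J cfgdrive/

A 'cloudstack' directory symlinked to 'openstack' on the same ISO, as Wido suggests,
would not get in the way of this layout.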


RE: [DISCUSS] Config Drive: Using the OpenStack format?

2017-05-20 Thread Simon Weller
Yes, I don't see any point in reinventing the wheel.

Simon Weller/615-312-6068

-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Saturday, 20 May 2017, 8:45AM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: [DISCUSS] Config Drive: Using the OpenStack format?

Just to check all the +1's, they are about using the OpenStack format. Right?

Config Drive will be there no matter what.

Wido

> Op 19 mei 2017 om 19:45 heeft Kris Sterckx  
> het volgende geschreven:
>
> FYI
>
> Slides are here :
> https://www.slideshare.net/2000monkeys/apache-cloudstack-collab-miami-user-data-alternatives-to-the-vr
>
> Thanks
>
> - Kris
>
>> On 19 May 2017 at 12:58, Wei ZHOU  wrote:
>>
>> gd idea
>>


Miami CCC '17 Roundtable/Hackathon Summary

2017-05-23 Thread Simon Weller
Hi everyone,


During the CCC last week in Miami, we held a roundtable/hackathon to discuss 
some of the major areas the community would like to focus more attention.


The discussions were passionate and were mainly focused around networking and 
our current use of our home-spun Virtual Router.


For most of us, the VR has become a challenging beast, mainly due to how 
difficult it is to test end-to-end for new releases.

Quite often PRs are pushed that fix an issue on one feature set, but break 
another unintentionally. This has a great deal to do with how inter-mingled all 
the features are currently.


We floated some ideas related to short-term VR fixes in order to make it more 
modular, as well as API driven rather than the current SSH JSON injections.

A number of possible alternatives were also brought up to see what VR feature 
coverage could be handled by other virtual appliances currently out on the 
market.


These included (but were not limited to):


VyOS (current PR out there for integration via a plugin – thanks Matthew!)

MikroTik (Commercial)

Openswitch/Flexswitch

Cloud Router


The second major topic of the day was related to how we want to integrate 
networking moving forward.


A fair number of individuals felt that we shouldn't be focusing so much on 
integrating network functions, but should instead rely on other network orchestrators 
to handle this.

It was also noted that what draws a lot of people to ACS is the fact we have a 
VR and do provide these functions out of the box.


We discussed how we could standardize the network sub system to use some sort 
of queuing bus to make it easier for other projects to integrate their 
solutions.

The current plugin implementation is fairly complex and often other projects 
(or commercial entities) put it into the too hard basket, until someone either 
does it themselves or is willing to pay for the development.

Most also felt it was important to maintain a default network function that 
works out of the box so that the complexity of a full orchestrator could be 
avoided if not needed.


I'm sure I've missed some key points, so hopefully this starts a discussion 
with the entire community about where we focus next.


Thanks to all those that participated on Tuesday afternoon.


- Si



Re: [DISCUSS] API versioning

2017-06-05 Thread Simon Weller

+1. Echoing what Rohit pointed out, we have a lot of cleanup to do :-) It 
certainly makes it a lot easier though when you're not breaking compatibility 
with existing code.


From: Rohit Yadav 
Sent: Monday, June 5, 2017 4:04 AM
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS] API versioning

+1 Good idea, though bear in mind there are 500+ APIs with no 
modern-RESTful-standardization, a lot of work.


Regards.


From: Nitin Kumar Maharana 
Sent: 05 June 2017 12:37:24
To: dev@cloudstack.apache.org
Subject: Re: [DISCUSS] API versioning

This looks good. +1

rohit.ya...@shapeblue.com
www.shapeblue.com



53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue



> On 04-Jun-2017, at 2:34 PM, Rene Moser  wrote:
>
> Hi
>
> I recently developed Ansible modules for the ACL API and ... found this
> has really inconsistent API naming. E.g.
>
> createNetworkACL <<-- this creates an ACL rule
> createNetworkACLList <<-- this create the ACL
>
> updateNetworkACLItem <<-- this updates an ACL rule
> updateNetworkACLList <<-- this updates the ACL
>
> My first thoughs was, someone has to fix this, like
>
> createNetworkAclRule <<-- this create the ACL rule
> createNetworkAcl <<-- this creates an ACL
>
> updateNetworkAclRule <<-- this updates the ACL rule
> updateNetworkAcl <<-- this updates an ACL
>
> But how, without breaking the API for backwards compatibility? I know a
> few other places where the API has inconsistent naming. Fixing the API,
> but in a controlled way? What about adding a version to the API?
>
> I would like to introduce API versioning to CloudStack: the current
> API would be frozen as version v1. The new API would be v2. The
> versioned API has the URL scheme:
>
> /client/api/<version>
>
> The current API would be /client/api/v1 and the /client/api would be an
> alias for v1. This ensures backwards compatibility.
>
> This would allow us to deprecate and change APIs.
>
> Any thoughts?
>
>
>
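
For illustration, the proposed scheme keeps existing clients untouched while new ones
opt in per URL - a sketch with a hypothetical renamed v2 command (the v2 name below is
only the example from this thread, nothing that exists today):

    # today, and the proposed /client/api alias for v1 - unchanged
    curl "http://mgmt-server:8080/client/api?command=createNetworkACL&apiKey=...&signature=..."
    # hypothetical v2 endpoint with the cleaned-up ACL/rule naming
    curl "http://mgmt-server:8080/client/api/v2?command=createNetworkAclRule&apiKey=...&signature=..."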




DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Accelerite, a Persistent Systems business. It is intended only for 
the use of the individual or entity to which it is addressed. If you are not 
the intended recipient, you are not authorized to read, retain, copy, print, 
distribute or use this message. If you have received this communication in 
error, please notify the sender and delete all copies of this message. 
Accelerite, a Persistent Systems business does not accept any liability for 
virus infected mails.


Re: Private Gateway on REDUNDANT VPC

2017-06-23 Thread Simon Weller
Paul,


Could it be related to rp_filter in some way?


- Si



From: Paul Angus 
Sent: Tuesday, June 20, 2017 3:39 AM
To: dev@cloudstack.apache.org
Subject: RE: Private Gateway on REDUNDANT VPC

I don't believe so.
Rules look OK and consistent with std VPC as well.


Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com



53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




-Original Message-
From: Jayapal Uradi [mailto:jayapal.ur...@accelerite.com]
Sent: 20 June 2017 09:30
To: dev@cloudstack.apache.org
Subject: Re: Private Gateway on REDUNDANT VPC

Did you check iptables, Is it blocking on the VR ?

> On Jun 20, 2017, at 1:30 PM, Paul Angus  wrote:
>
> Hi All,
>
> I've been looking at the failing Marvin tests for Private Gateways. It
> passes on std VPC and fails on rVPC.
> The test tries to ping a VM on a remote VPC via the private gateways on both 
> VRs.
> Digging into it, I found that an ARP request goes out for the remote VM from 
> the local VR to the remote VR, the local VR receives it, then nothing.  On 
> the std VRs a reply goes back out.
>
> I've checked all interfaces to see if the reply is going out of the wrong 
> interface, but it just isn't going out anywhere.  I can't figure out why no 
> reply seems to be generated...  Obviously the answer is in the difference in 
> config and packages on VPC vs rVPC - but I can't find it.
>
> HELP!  Any ideas anyone?
>
>
> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
>
>
>

DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Accelerite, a Persistent Systems business. It is intended only for 
the use of the individual or entity to which it is addressed. If you are not 
the intended recipient, you are not authorized to read, retain, copy, print, 
distribute or use this message. If you have received this communication in 
error, please notify the sender and delete all copies of this message. 
Accelerite, a Persistent Systems business does not accept any liability for 
virus infected mails.



Re: [RESULT][VOTE] Apache CloudStack 4.10.0.0

2017-07-09 Thread Simon Weller
Great, nice work!





From: Rohit Yadav 
Sent: Friday, July 7, 2017 1:04 AM
To: dev@cloudstack.apache.org
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.10.0.0

Congratulations and well done!


From: Paul Angus 
Sent: 07 July 2017 00:15:30
To: dev@cloudstack.apache.org
Subject: RE: [RESULT][VOTE] Apache CloudStack 4.10.0.0

Well done!



Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




-Original Message-
From: Wei ZHOU [mailto:ustcweiz...@gmail.com]
Sent: 06 July 2017 11:10
To: dev@cloudstack.apache.org
Subject: Re: [RESULT][VOTE] Apache CloudStack 4.10.0.0

nice!!

2017-07-06 11:56 GMT+02:00 Rajani Karuturi :

> Hi all,
>
> After 72 hours, the vote for CloudStack 4.10.0.0 [1] *passes* with
> 4 PMC + 2 non-PMC votes.
>
> +1 (PMC / binding)
> * Mike Tutkowski
> * Wido den Hollander
> * Daan Hoogland
> * Milamber
>
> +1 (non binding)
> * Kris Sterckx
> * Boris Stoyanov
>
> 0
> none
>
> -1
> none
>
> Thanks to everyone participating.
>
> I will now prepare the release announcement to go out after 24 hours
> to give the mirrors time to catch up.
>
> [1] http://markmail.org/thread/dafndhtflon4pshf
>
> ~Rajani
> http://cloudplatform.accelerite.com/
>


Re: [DISCUSS] Host HA in 4.11

2017-07-12 Thread Simon Weller
We are very excited about this feature set, as it adds some really important 
features for KVM.

We don't use NFS, so I think our goal will be seeing what we can contribute to 
include Ceph on the supported storage list.

- Si

From: Rohit Yadav 
Sent: Wednesday, July 12, 2017 5:43 AM
To: dev@cloudstack.apache.org
Subject: [DISCUSS] Host HA in 4.11

All,


A few months ago I started a discussion on Host HA for CloudStack, and given that 
4.10 is voted on and about to be announced with the master branch cut, I would like to 
re-kick the discussion around review and acceptance of the feature, which has been 
pending since Feb 2017.


To briefly share some key points:

- This feature is disabled by default and provides zone/cluster/host level kill 
switches

- This brings in a reliable way to fence (power off) and recover (reboot) a host

- Allows implementation of HA provider plugin specific to a hypervisor and 
storage stack, by default we've implemented a plugin for hosts that have KVM+NFS

- For more details please read the FS: 
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA
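
For those who want to kick the tyres once it lands, the FS describes new APIs roughly
along these lines (a sketch only - the names, provider identifier and parameters should
be verified against the merged PR; call them through CloudMonkey or signed HTTP requests
as usual):

    listHostHAProviders   hypervisor=KVM
    configureHAForHost    hostid=<host-uuid> provider=<provider-from-the-list-above>
    enableHAForHost       hostid=<host-uuid>
    # cluster / zone level kill switches
    enableHAForCluster    clusterid=<cluster-uuid>
    enableHAForZone       zoneid=<zone-uuid>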





I had also given a talk about this feature during CCCNA17:

Reliable host fencing - 
http://rohit.yadav.xyz/files/talks/cccna17-reliable-host-fencing.pdf


Pull request: https://github.com/apache/cloudstack/pull/1960 (as soon as the 
4.10->4.11 db upgrade paths are fixed, I can rebase and fix the branch)





- Rohit

rohit.ya...@shapeblue.com
www.shapeblue.com



53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: DISCUSS : Vmware to Cloudstack migration support

2017-07-14 Thread Simon Weller
We're actively working on migrations as well. We submitted a PR recently to 
support specifying a mac address on an interface to ease migration pain.


https://github.com/apache/cloudstack/pull/2143




From: williamstev...@gmail.com  on behalf of Will 
Stevens 
Sent: Thursday, July 13, 2017 9:15 AM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org; Vinay Patil; Siddheshwar More
Subject: Re: DISCUSS : Vmware to Cloudstack migration support

If you are going from VMware to CloudStack Managed VMware, this is
something I have done quite a bit.  I even built a tool to do this:
https://github.com/swill/migrate2cs




This is not a polished product.  Well, it is pretty polished once you get
it setup, but the setup is a bit complicated to get started.

If you would like to use it, let me know and I will do what I can to get
you setup.

*Will Stevens*
CTO


CloudOps | Managed Private Cloud | Managed Public Cloud | Cloud Building | 
Cloud Architecture | Cloud Migration




On Thu, Jul 13, 2017 at 9:52 AM, Shreya Nair  wrote:

> Hello,
>
> *An update to the migration task:*
>
> We have installed Cloudstack onto a vm on VMware to work around the
> cross-hypervisor migration issue. Now the underlying hypervisor would be
> the ESXi server for both setups (VMware and Cloudstack).
>
> Currently, we create the corresponding equivalent infrastructure on
> CloudStack wrt vmware setup.  We create a zone, pod, cluster and host set
> up with the underlying network infrastructure. However, while setting up
> the storage (Primary storage at cluster-wide scope and Secondary storage at
> zone-wide) the documentation mentions the following warnings:
>
>
>- *Primary storage warning:*
>   - When using preallocated storage for primary storage, be sure there
>   is nothing on the storage (ex. you have an empty SAN volume or
> an empty NFS
>   share). Adding the storage to CloudStack will destroy any existing
> data.
>- *Secondary storage warning:*
>   - Ensure that nothing is stored on the server. Adding the server to
>   CloudStack will destroy any existing data.
>
>
> We have obtained the mysql dump of the datastore of the source VM on
> VMware. The datastore, as you may be aware, is a logical container that
> holds virtual machine files and other files necessary for VM operations. As
> such, it may be logically mapped to the Secondary storage setup in the
> CloudStack infrastructure.
>
> Would it be possible to use the mysql dump from source to update the
> Secondary storage?
>
> Thanks & Regards,
>
> Shreya
>
> On Fri, Jul 7, 2017 at 4:04 PM, Paul Angus 
> wrote:
>
> > Maybe you should try qemu-img instead.
> >
> >
> > Kind regards,
> >
> > Paul Angus
> >
> >
> > paul.an...@shapeblue.com
> > www.shapeblue.com



> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> > From: Shreya Nair [mailto:shreya.n...@opcito.com]
> > Sent: 07 July 2017 11:30
> > To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> > Cc: Vinay Patil ; Siddheshwar More <
> > siddheshwar.m...@opcito.com>
> > Subject: Re: DISCUSS : Vmware to Cloudstack migration support
> >
> > Hi Paul,
> > We explored the XenConvert solution. The XenConvert utility has been
> > retired from XenServer 6.2 and upward. So the only solution would be
> using
> > an old copy or trial version of Xen Conversion Manager.
> >
> > Instead of the qemu-img utility, we used the VirtualBox VBoxManage.exe to
> > support conversion of *.VMDK file to VHD. This VHD file was used to
> create
> > a CS template and create an instance. However, the VM was unable to mount
> > the drives as it was unable to find xvdXX partitions
> >
> >
> > I get the following error on CS instance on boot:
> > [Inline image 2]
> >
> > and the logs shows us this:
> >
> > You might have to change the root from /dev/hd[a-d] to /dev/xvd[a-d]
> >
> >
> >
> > However, on using lsblk command on the source vmware instance, we
> realized
> > that the partitions on SCSI storage devices (Used by vmware) are named as
> > /dev/sdXX while Xen supports /dev/xvdXX.
> > [Inline image 1]
> >
> > Note: VMware tools has been removed from VM prior to migration
> 
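
For the record, the qemu-img route Paul mentioned earlier in the thread is usually as
simple as the following (a sketch; file names are examples, and flat/split VMDKs may
need the descriptor file as the input):

    # VMware disk -> qcow2 for a KVM-backed CloudStack
    qemu-img convert -p -f vmdk -O qcow2 source-disk.vmdk converted-disk.qcow2
    # VMware disk -> VHD if the target is XenServer (qemu-img calls the format "vpc")
    qemu-img convert -p -f vmdk -O vpc source-disk.vmdk converted-disk.vhd
    qemu-img info converted-disk.qcow2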

Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-07-19 Thread Simon Weller
...unless you take a conference phone to the pub 😉





From: Tutkowski, Mike 
Sent: Wednesday, July 19, 2017 2:19 PM
To: dev@cloudstack.apache.org
Cc: us...@cloudstack.apache.org
Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers and 
Community Efforts

Hi Ilya,

I think this is a good idea and thanks for the proposed breakdown of the 
contents of the call.

One thing to note about August 17 is that there is a CloudStack Meetup in 
London on that day. The call would be happening around 5 PM local time (during 
part of the meetup or during the after-meetup activities). As such, I believe 
the participants of that meetup probably won't be attending the call.

Perhaps we should consider another day?

Talk to you later,
Mike

> On Jul 19, 2017, at 12:59 PM, ilya  wrote:
>
> Hi Devs and Users
>
> Hope this message finds you well,
>
> As mentioned earlier, we would like to start with quarterly calls to
> discuss the direction of cloudstack project.
>
> I propose to split the 90 minute call into 3 topics:
>
>1) Development efforts - 60 minutes
>Upcoming Features you are working on developing (to avoid
> collision and maintain the roadmap).
>  Depending on number of topics we need to discuss - time for
> each topic will be set accordingly.
>  If you would like to participate - please respond to this
> thread and adhere to sample format below:
>
>
>
>
>
>
>
>
>
>
>
>
>
>2) Release Blockers - 20 minutes
>  If you would like to participate - please respond to this
> thread and adhere to sample format below:
>
>
>
>
>
>
>3) Community Efforts - 10+ minutes
>
>
>
>
>
> The proposed date and time  - Thursday August 17th 9AM PT.
>
> Minutes will be taken and posted on dev list. Due to number of things we
> need to discuss - we have to keep the call very structured, each topic -
> timed and very high level.
> If there are issues and or suggestions, we will note it down in few
> sentences, identify interested parties and have them do a "post"
> discussion on the mailing list.
>
> Looking forward to your comments,
>
> Regards,
> ilya
>


Re: Introduction

2017-07-31 Thread Simon Weller

Welcome Nicolas, nice to have you on-board!


- Si



From: Nicolas Vazquez 
Sent: Monday, July 31, 2017 10:07:04 AM
To: dev@cloudstack.apache.org
Subject: Introduction

Hi all,


My name is Nicolas Vazquez, today is my first day at @ShapeBlue as a Software 
Engineer. I am based in Montevideo, Uruguay and I've been working with 
CloudStack since mid-2015. Looking forward to working with you!


Thanks,

Nicolas

nicolas.vazq...@shapeblue.com
www.shapeblue.com



,
@shapeblue





Re: 4.10 release announcement?

2017-08-11 Thread Simon Weller
Rajani,


Great job on this release!


Wido,


Here are a few more items for the feature list. I've combined those already 
mentioned as well.


- IPV6 support for basic networking
- Virtio-Scsi disk controller support for KVM
- Ability to disable primary storage to secondary storage backups for snapshots
- VMSnapshot (including memory) support for KVM on NFS
- RBD snapshot backups to secondary are now QCOW2 rather than raw to save space
- Strongswan VPN improvements
- Nuage VSP SDN Plugin: Shared networks support, Guest DNS support, Source- and 
Static-nat to Underlay and support for Nuage VSP 4.0
- Significant performance improvements related to Virtual Router deployment
- KVM force stop support
- Lots of bug fixes and performance improvements

Can those that submitted other features please continue to add to this list? 
This will also help with the release notes.

- Si




From: Wido den Hollander 
Sent: Friday, August 11, 2017 5:07 AM
To: Rajani Karuturi; dev@cloudstack.apache.org
Subject: Re: 4.10 release announcement?


> Op 11 augustus 2017 om 11:56 schreef Rajani Karuturi :
>
>
> Yes the packages are pushed. Status is still the same as my
> previous update.
>

Ok!

> api changes and fixed issues are available at
> https://github.com/apache/cloudstack-docs-rn/pull/30



>
> Pierre-Luc is helping on upgrade docs.
>
> Someone needs to handle the announcement part. I remember seeing
> a draft from you?
>

Once we have the docs we can send it out + update website. Anything other 
people can help with?

I wrote a draft indeed:


"Hi,

How does this look initially?

# Apache CloudStack Release 4.10

The Apache CloudStack project is pleased to announce the release of
CloudStack 4.10. This release contains many bug fixes and improvements since
the previous CloudStack release.

A few highlights:
- IPv6 support in Basic Networking
- More
- More

Apache CloudStack is an integrated Infrastructure-as-a-Service (IaaS)
software platform that allows users to build feature-rich public and
private cloud environments. CloudStack includes an intuitive user interface
and rich API for managing the compute, networking, software, and storage
resources. The project became an Apache top level project in March, 2013.

More information about Apache CloudStack can be found at:
http://cloudstack.apache.org/




# Documentation

What's new in CloudStack 4.10:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.10.0/about.html

The 4.10.0 release notes include a full list of corrected issues, as well
as upgrade instructions from previous versions of Apache CloudStack, and
can be found at:
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.10.0

The official installation, administration and API documentation for
each release are available on our Documentation Page:
http://docs.cloudstack.apache.org/




# Downloads

The official source code for the 4.10.0 release can be downloaded from our
downloads page:
http://cloudstack.apache.org/downloads.html




## Downloads

The official source code release can be downloaded from:

http://cloudstack.apache.org/downloads.html




In addition to the official source code release, individual contributors
have also made convenience binaries available on the Apache CloudStack
download page, and can be found at:

https://download.cloudstack.org/ubuntu/
https://download.cloudstack.org/centos/7/
"

> ~ Rajani
>
> http://cloudplatform.accelerite.com/
>
> On August 10, 2017 at 4:27 PM, Wido den Hollander
> (w...@widodh.nl) wrote:
>
> Hi Rajani,
>
> Did you need any help with the 4.10 release announcement?
>
> All the packages are out there and so are the sources. We j

Re: 4.10 release announcement?

2017-08-11 Thread Simon Weller
Thanks Rohit.

I've modified the list based on that.


- IPV6 support for basic networking
- Virtio-Scsi disk controller support for KVM
- Ability to disable primary storage to secondary storage backups for snapshots
- VMSnapshot (including memory) support for KVM on NFS
- RBD snapshot backups to secondary are now QCOW2 rather than raw to save space
- Strongswan VPN improvements
- Nuage VSP SDN Plugin: Shared networks support, Guest DNS support, Source- and 
Static-nat to Underlay and support for Nuage VSP 4.0
- Significant performance improvements related to Virtual Router deployment
- Force power off/stop support for KVM, VMware and XenServer
- Lots of bug fixes and performance improvements




From: Rohit Yadav 
Sent: Friday, August 11, 2017 12:51 PM
To: Rajani Karuturi; dev@cloudstack.apache.org
Cc: Wido den Hollander
Subject: Re: 4.10 release announcement?

The power off operation (or force stop flag in deployVM API) on VM is supported 
for all KVM, VMware and XenServer.


- Rohit



Re: [README][Quarterly Call] - CloudStack Development, Blockers and Community Efforts

2017-08-18 Thread Simon Weller
Ilya,


I'll be attending with a few other folks from ENA.


Here's one for the Dev efforts -



Feature: Ability to specify a MAC address when plugging a network
Description: We're working on cloud migration strategies and part of that
is making the move as seamless as possible. The ability to specify a MAC
address when shifting a VM workload from another environment makes the
transition a lot easier.
Jira: https://issues.apache.org/jira/browse/CLOUDSTACK-9949
PR: https://github.com/apache/cloudstack/pull/2143
Contact: Nathan Johnson
Status: PR has been submitted as of 7/13 and is awaiting review from
the community (targeting 4.11)
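
A rough usage sketch, assuming the parameter lands with the name used in the PR
(verify against the API docs once it is merged; the UUIDs and MAC are placeholders):

    # CloudMonkey
    add nictovirtualmachine virtualmachineid=<vm-uuid> networkid=<network-uuid> macaddress=02:00:4c:7f:00:01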


We'll discuss our roadmap internally for the next half and get back to you with 
additions before the call.


- Si


From: ilya 
Sent: Thursday, August 17, 2017 7:29 PM
To: dev@cloudstack.apache.org
Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers and 
Community Efforts

Hi All,

I'd like to pick this thread back up and see if you are joining. As a
reminder, proposed date is September 6th 2017, time 9AM PST.

If you are, please kindly respond. If you have things to discuss -
please use the outline below:

  1) Development efforts - 60 minutes
Upcoming Features you are working on developing (to avoid
collision and maintain the roadmap).
  Depending on number of topics we need to discuss - time for
each topic will be set accordingly.
  If you would like to participate - please respond to this
thread and adhere to sample format below:













2) Release Blockers - 20 minutes
  If you would like to participate - please respond to this
thread and adhere to sample format below:






3) Community Efforts - 10+ minutes







Thanks
ilya

On 8/1/17 10:55 AM, ilya wrote:
> Hi Team
>
> Proposed new date for first quarterly call
>
> September 6th 2017, time 9AM PST.
>
> This is a month out and hopefully can work with most folks. If it does
> not work with your timing - please consider finding delegates and/or
> representatives.
>
> Regards
> ilya
>
> On 7/20/17 6:11 AM, Wido den Hollander wrote:
>>
>>> Op 20 juli 2017 om 14:58 schreef Giles Sirett :
>>>
>>>
>>> Hi Ilya
>>> Sorry, I should have highlighted that User Group meeting clash before
>>>
>>> Under normal circumstances, I would say: it's futile trying to coordinate 
>>> calendars with such a broad audience - there will always be some people not 
>>> available , just set a regular date, keep it rolling (build and they will 
>>> come)
>>>
>>> However, for the first call, there will be at least Wido, Mike, Paul, Daan, 
>>> me and probably a lot more PMC members not available because of the user 
>>> group meeting
>>>
>>
>> +1 I will be present!
>>
>> Wido
>>
>>> To keep it simple, I'd therefore say, go with the following day (Friday 
>>> 18th) or the next Thursday (24th )
>>>
>>> I'm not even going to respond to Simons pub/phone suggestion.
>>>
>>> Kind regards
>>> Giles
>>>
>>> giles.sir...@shapeblue.com
>>> www.shapeblue.com<http://www.shapeblue.com>



>>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>>> @shapeblue
>>>
>>>
>>>
>>>
>>> -Original Message-
>>> From: ilya [mailto:ilya.mailing.li...@gmail.com]
>>> Sent: 19 July 2017 22:25
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: [README][Quarterly Call] - CloudStack Development, Blockers 
>>> and Community Efforts
>>>
>>> The date conflict is noted - please provide range of alternative dates.
>>>
>>> Thanks,
>>> ilya
>>>
>>> On 7/19/17 12:35 PM, Tutkowski, Mike wrote:
>>>> I thought about that, but sometimes you can barely hear the person
>>>> 

Re: one question network survey

2017-08-28 Thread Simon Weller
Daan,


We use vxlan via the native driver on KVM. Our use currently is within standard 
isolated networks and VPC isolated networks. We use standard VLANs on the 
private gateways, but we're currently exploring some other methods there to 
eliminate our vlan requirement.


- Si



From: Imran Ahmed 
Sent: Monday, August 28, 2017 4:29 AM
To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
Subject: RE: one question network survey

Hi Daan,

I use a separate trunk (OVS or non-OVS, bonded with LACP) connected to 
multiple switches (which are already configured into a switch stack). There can 
be multiple scenarios, but I am mentioning the most generic one.

Hope that answers your question, if I have understood it correctly.


Regards,



-Original Message-
From: Daan Hoogland [mailto:daan.hoogl...@shapeblue.com]
Sent: Monday, August 28, 2017 12:20 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: one question network survey

Devs and users,

Can you all please tell me how you are using VxLan in your cloudstack 
environments?

The reason behind this is that I am planning some refactoring in the 
networkgurus and I don't want to break any running installations on upgrade. If 
you are not using vxlan but know of people who are using it and might not react here, 
please point me to them.

Thanks,

daan.hoogl...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue






Re: Need to ask for help again (Migration in cloudstack)

2017-09-01 Thread Simon Weller
Dmitriy,

Can you give us a bit more information about what you're trying to do?
If you're looking for live migration on non-shared storage with KVM, there is 
an outstanding PR in the works to support that:

https://github.com/apache/cloudstack/pull/1709

- Si



From: Rajani Karuturi 
Sent: Friday, September 1, 2017 4:07 AM
To: dev@cloudstack.apache.org
Subject: Re: Need to ask for help again (Migration in cloudstack)

You might start with this commit
https://github.com/apache/cloudstack/commit/21ce3befc8ea9e1a6de449a21499a50ff141a183


and storage_motion_supported column in hypervisor_capabilities
table.
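
For a quick look at what an existing install reports, something like this works (a
sketch; the DB name and user are the usual defaults and may differ in your setup):

    mysql -u cloud -p cloud -e "SELECT hypervisor_type, hypervisor_version,
        storage_motion_supported FROM hypervisor_capabilities;"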

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On August 31, 2017 at 6:29 PM, Dmitriy Kaluzhniy
(dmitriy.kaluzh...@gmail.com) wrote:

Hello!
I contacted this mail address before, but I wasn't subscribed to the mailing
list.
The reason I'm contacting you - I need advice.
During the last week I was studying the CloudStack code to find where the
logic behind these statements from the CloudStack documentation is
implemented:
"(KVM) The VM must not be using local disk storage. (On
XenServer and
VMware, VM live migration with local disk is enabled by
CloudStack support
for XenMotion and vMotion.)

(KVM) The destination host must be in the same cluster as the
original
host. (On XenServer and VMware, VM live migration from one
cluster to
another is enabled by CloudStack support for XenMotion and
vMotion.)"

I have made a long journey through the source code but still can't find it. If you
can give me any advice, it would be amazing.
can give me any advise - it will be amazing.
Anyway, thank you.

--

Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73


Re: Primary interface on Windows templates

2017-10-04 Thread Simon Weller
Also note that as of 4.10 there is new support for Virtio-Scsi on KVM


Check out this PR note for an example of how to set it up on a template:

https://github.com/apache/cloudstack/pull/1955#issuecomment-284440859
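
In short, it is driven by a template detail - roughly like this (a sketch; the detail
key/value are as described in that PR comment, so double-check there, and the guest
needs the virtio-scsi driver - virtio-win on Windows - before switching):

    # CloudMonkey, template UUID is an example
    update template id=<template-uuid> details[0].rootDiskController=scsi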


- Si






From: Dmitriy Kaluzhniy 
Sent: Wednesday, October 4, 2017 8:00 AM
To: dev@cloudstack.apache.org
Subject: Re: Primary interface on Windows templates

As I found, it is hardcoded here:
libvirtComputingResource.isGuestPVEnabled
So there are two ways: either I change it in code, or I set the OS Type for the
template to "Windows PV" or "Other PV". What is the difference between these
two types?

2017-10-04 14:36 GMT+03:00 Dmitriy Kaluzhniy :

> Hello,
> Thank you for your answer, Ivan. Yes, I think it will work, but it is a very
> unstable variant. I hope it can be done another way.
>
> 2017-10-02 17:46 GMT+03:00 Ivan Kudryavtsev :
>
>> Hi, I believe that if you change os type to linux, you'll get it. But it
>> could lead to problems with storage drivers as acs will announce it as
>> virtio too.
>>
>> 2 окт. 2017 г. 19:58 пользователь "Dmitriy Kaluzhniy" <
>> dmitriy.kaluzh...@gmail.com> написал:
>>
>> > Hello,
>> > I was working with templates and find out that Windows templates
>> > automatically gets E1000 interface. Is there any way to change it to
>> > Virtio?
>> >
>> > --
>> >
>> >
>> >
>> >
>> > *Best regards,Dmitriy Kaluzhniy+38 (073) 101 14 73*
>> >
>>
>
>
>
> --
>
>
>
> -- Best regards, Dmitriy Kaluzhniy, +38 (073) 101 14 73
>



--



--
Best regards,
Dmitriy Kaluzhniy
+38 (073) 101 14 73


RE: [ANNOUNCE] Syed Mushtaq Ahmed has joined the PMC

2017-10-09 Thread Simon Weller
Congrats Syed!

Simon Weller/615-312-6068

-Original Message-
From: Wido den Hollander [w...@widodh.nl]
Received: Monday, 09 Oct 2017, 6:31AM
To: Paul Angus [paul.an...@shapeblue.com]; dev@cloudstack.apache.org 
[dev@cloudstack.apache.org]; us...@cloudstack.apache.org 
[us...@cloudstack.apache.org]
Subject: Re: [ANNOUNCE] Syed Mushtaq Ahmed has joined the PMC


> Op 9 oktober 2017 om 13:26 schreef Paul Angus :
>
>
> Fellow CloudStackers,
>
> It gives me great pleasure to say that Syed has been invited to join the PMC 
> and has graciously accepted.
> Please join me in congratulating Syed!
>
>

Congratulations Syed! Welcome :-)

Wido

> Kind regards,
>
> Paul Angus
>
>
> paul.an...@shapeblue.com
> www.shapeblue.com<http://www.shapeblue.com>
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>


Re: Help/Advice needed - some traffic don't reach VNET / VM

2017-10-09 Thread Simon Weller
Andrija,


What is the guest OS for this VM, or does this issue not discriminate?

- Si

From: Andrija Panic 
Sent: Monday, October 9, 2017 3:52 PM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Help/Advice needed - some traffic don't reach VNET / VM

Hi guys,

we have an occasional but serious problem that starts happening seemingly at
random (i.e. NOT under high load) - not ACS related afaik, purely KVM,
but feedback is really welcome.

- VM is reachable in general from everywhere, but not reachable from a
specific IP address?!
- VM is NOT under high load, network traffic next to zero, same for
CPU/disk...
- We mitigate this problem by migrating the VM away to another host, not much
of a solution...

Description of problem:

We let a ping run from the "problematic" source IP address to the problematic VM, and
we capture traffic on the KVM host where the problematic VM lives:

- Tcpdump on VXLAN interface (physical incoming interface on the host) - we
see packet fine
- tcpdump on BRIDGE = we see packet fine
- tcpdump on VNET = we DON'T see packet.
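
For anyone wanting to reproduce the capture, the per-layer check looks roughly like
this (the interface names below are examples for this kind of setup, not literal):

    # on the KVM host, chasing an ICMP echo from the problematic source IP
    tcpdump -ni vxlan950 'icmp and host <source-ip>'   # vxlan/physical ingress: seen
    tcpdump -ni brvx-950 'icmp and host <source-ip>'   # bridge: seen
    tcpdump -ni vnet12   'icmp and host <source-ip>'   # tap towards the VM: missing
    # and compare the bridge FDB while the ping runs
    brctl showmacs brvx-950 | grep -i <vm-mac>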

In the scenario above, I need to say that:
- we can tcpdump packets from other source IPs on the VNET interface just
fine (as expected), so we should also see this problematic source IP's packets
- we can actually ping in the opposite direction - from the problematic VM to
the problematic "source" IP

We checked everything possible, from bridge port forwarding to mac-to-vtep
mapping and many other things: removed traffic shaping from the VNET interface,
no iptables/ebtables, no STP on the bridge, removed and rejoined interfaces to the
bridge, destroyed the bridge and recreated it manually on the fly.

The problem is really crazy, and I cannot explain it - no iptables, no
ebtables for troubleshooting purposes (on this host), and

we mitigate this problem by migrating the VM away to another host - not much of
a solution...

This is Ubuntu 14.04, Qemu 2.5 (libvirt 1.3.1),
Stock kernel 3.16-xx, regular bridge (not OVS)

Has anyone else ever heard of such a problem? This is not intermittent packet
dropping, but a complete blackout/packet drop of some kind...

Thanks,

--

Andrija Panić


RE: MAX 50 VM per KVM HOST ?

2017-10-14 Thread Simon Weller
That is an old limit that I believe has been raised for new installs. With the 
new segmented CPUs out now, you can scale to pretty crazy numbers of VMs per 
host.
It will very much depend on your hardware, but I believe rhel7 doesn't publish 
a guest limit any longer.
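
If you want to check or raise it on an existing install, it is exposed through the
hypervisor capabilities (a sketch; the UUID is an example and the parameter name
should be verified against your version's API reference):

    # CloudMonkey
    list hypervisorcapabilities hypervisor=KVM filter=id,hypervisorversion,maxguestslimit
    update hypervisorcapabilities id=<capability-uuid> maxguestslimit=150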

Simon Weller/615-312-6068

-Original Message-
From: Andrija Panic [andrija.pa...@gmail.com]
Received: Saturday, 14 Oct 2017, 1:13PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: MAX 50 VM per KVM HOST ?

Hi all,

I would like to understand: is there any practical reason why we chose a
max of 50 VMs per single KVM host? Is there any relation between the number of
VMs on a host and the CPU core count (GHz to be exact), or anything else?


Thanks,

--

Andrija Panić


RE: HTTPS LB and x-forwarded-for

2017-11-08 Thread Simon Weller
I'm assuming we would have the standard openssl version with Intel TLS offload 
though, right? RHEL ships their FIPS compliant version that strips all the 
acceleration out. The cpu instruction sets should be passed through from the 
host, so hopefully that will make a massive difference to decryption speeds and 
cpu load.

Simon Weller/615-312-6068

-Original Message-
From: Pierre-Luc Dion [pd...@cloudops.com]
Received: Wednesday, 08 Nov 2017, 9:00AM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]; Khosrow Moossavi 
[kmooss...@cloudops.com]; Will Stevens [wstev...@cloudops.com]
CC: users [us...@cloudstack.apache.org]
Subject: Re: HTTPS LB and x-forwarded-for

Same challenge here too!

Let's look at improving the load-balancing offering from CloudStack; I guess we
should do a feature spec draft soon. From my perspective, doing SSL
offload on the VR could be problematic if the VR spec is too small. With the
default spec of the VR being 1vcpu@256MB, and considering it can be the router
of a VPC and do VPN termination, adding HTTPS is a bit iffy... What would
be your thoughts on this?

I'd be curious to have an LB offering in ACS where it would deploy a
redundant traefik[1] beside the VR for doing http and https load balancing.
I think it would also be useful if the API of that traefik instance were
available from within the VPC or LB network, so its API would be accessible
to other app orchestration tools such as Kubernetes or Rancher.

traefik or not, here is what I think is needed by cloudstack in the LB
improvement:

- support http, https (X-Forwarded-For)
- basic persistence tuning (the API already exists)
- better backend monitoring; currently only a tcp connect validates that the
webserver is up.
- ssl offload
- metric collection, more stats, maybe just export the tool status page to
the private network.
- Container world support: right now if you have a Rancher or Kubernetes
cluster, you need to deploy your own LB solution, most likely behind a static
NAT. If CloudStack deployed a traefik instance, Kubernetes or Rancher could
reuse this instance and manage it to properly do LB between containers.
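
As a strawman for the http/https + X-Forwarded-For part, the haproxy side of this is
small - a sketch only, not what the VR generates today (the cert path, names and
backends are examples):

    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend https-in
        bind *:443 ssl crt /etc/haproxy/certs/site.pem   # SSL offload on the LB
        mode http
        option forwardfor                                # adds X-Forwarded-For
        default_backend web
    backend web
        mode http
        option httpchk GET /healthz                      # real HTTP check, not just tcp connect
        server web1 10.1.1.11:80 check
        server web2 10.1.1.12:80 check
    EOF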


What would be your preferred LB tool:
haproxy, traefik or nginx?

CloudStack already has the code to handle SSL certs per project and
account, if I'm not mistaken, because that code was added to support NetScaler
as a load balancer in the past. So one less thing to think about :-)


[1] https://traefik.io/


PL,



On Mon, Nov 6, 2017 at 7:10 AM, Nux!  wrote:

> Thanks Andrija,
>
> LB outside of the VR sounds like a good idea. An appliance based on, say
> cloud-init + ansible and so on could do the trick; alas it'd need to be
> outside ACS.
> I guess as users we could maybe come up with a spec for an improvement, at
> least we'd have something the devs could look at whenever it is possible.
>
> Regards,
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "Andrija Panic" 
> > To: "dev" 
> > Cc: "users" 
> > Sent: Thursday, 2 November, 2017 23:21:37
> > Subject: Re: HTTPS LB and x-forwarded-for
>
> > We used to make some special stuff for one of the clients, where all LB
> > configuration work is done from outside of the ACS, i.e. python script to
> > feed/configure VR - install latest haproxy 1.5.x for transparent proxy,
> > since client insisted on SSL termination done on backend web SSL
> servers
> > Not a good idea, that is all I can say (custom configuration thing) - but
> the
> > LB setup is actually good - transparent mode haproxy, works on TCP level,
> > so you can see "real client IP" on the backend servers (which must use VR
> > as the default gtw, as per default, so the whole setup works properly).
> >
> > I'm still looking forward to see some special support of LB inside VR via
> > ACS - proper LB setup inside VR via GUI/API -  i.e. to enable LB
> > provisioning SCRIPT (bash, or whatever),  where all needed
> > install+configure can be done from client side  - otherwise covering all
> > use cases, with proper HTTP checks and similar, is impossible to do
> > IMHO.
> >
> > Some other clients, actually have internal FW appliance (i.e. multihomed
> > VM, acting as gtw for all VMs in all networks), and haproxy instaled on
> > this device (with NAT configured from VR to this internal FW/VM, so
> remote
> > IP can be seen properly) - this setup is fully under customer control,
> and
> > can provide any kind of special haproxy config...
> >
> >
> >
> >
> >
> >
> > On 31 October 2017 at 19:54, Nux!  wrote:
> >
> >> Hello,
> >>
>

Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

2017-11-10 Thread Simon Weller
Ivan,


Can you put the host agents into debug mode? Hopefully that will provide more 
information.


https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+agent+debug


- Si
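
For reference, on a packaged agent install the quick (if blunt) way to do that
is to flip the agent's log4j config to DEBUG and restart it - the paths below
assume the standard cloudstack-agent package layout:

# switch the KVM agent to DEBUG logging and follow the log
sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
systemctl restart cloudstack-agent
tail -f /var/log/cloudstack/agent/agent.log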


From: Ivan Kudryavtsev 
Sent: Friday, November 10, 2017 11:34 AM
To: dev@cloudstack.apache.org
Subject: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

Hello, Devs.

I experience VR Start Problem in the fresh ACS 4.10 deployment

The interesting part of the logs is here: https://pastebin.com/iBXRBA5N

Basically, the situation looks like this:

1. The Management Server tries to launch the VR.
2. It gets a proper VR response with VR details from the agent.
3. It sends a StopCommand without explanation.

I'm trying to figure out what happens inside, but the codebase is huge and I
still have no positive results. Please let me know if you have any ideas that
could help me find the reason. Thanks a lot.

--
With best regards, Ivan Kudryavtsev
Bitworks Software, Ltd.
Cell: +7-923-414-1515
WWW: http://bitworks.software/ 


Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

2017-11-10 Thread Simon Weller
Is the storage ceph?



From: Ivan Kudryavtsev 
Sent: Friday, November 10, 2017 11:52 AM
To: dev@cloudstack.apache.org
Subject: Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

Hi, I did, and it does things right. I even added "tee" to the ssh 3922
communication script to write the VR response to an additional log, and it only
receives the VR version line, sends all the info (the same as in the pastebin)
to ACS, and then receives the "stop" order.

I'll try to provide additional info, but as you can see, management receives a
proper response and sends stop as the next op. It looks very freaky, without
any notification...

On 11 Nov 2017 at 0:37, "Simon Weller"  wrote:

> Ivan,
>
>
> Can you put the host agents into debug mode? Hopefully that will provide
> more information.
>
>
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+agent+debug
>
>
> - Si
>
> 
> From: Ivan Kudryavtsev 
> Sent: Friday, November 10, 2017 11:34 AM
> To: dev@cloudstack.apache.org
> Subject: Apache CloudStack 4.10 VR/BasicZone/KVM Problem
>
> Hello, Devs.
>
> I experience VR Start Problem in the fresh ACS 4.10 deployment
>
> Intersting place of logs is here: https://pastebin.com/iBXRBA5N
>
> Basically, the situation looks like:
>
> 1. Management Server tries to launch VR
> 2. It gets from Agent proper VR response with VR details
> 3. It sends StopCommand without explanation.
>
> I'm trying to figure out what happens inside, but the codebase is huge and
> still no positive results. Please, let me know if you have any ideas which
> could help me finding the reason. Thanks a lot.
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ <http://bw-sw.com/>
>


Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

2017-11-10 Thread Simon Weller
What VR template image are you using?



From: Ivan Kudryavtsev 
Sent: Friday, November 10, 2017 11:59 AM
To: dev@cloudstack.apache.org
Subject: Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem

Hi. No, regular NFS. The VR starts fine, but it is stopped by the MS; other
system VMs are working. I even added "sleep 3600" before ssh to the
communication script on the compute node, so the response to management is
delayed; I logged in to the VR, all interfaces are up, iptables rules are OK.

So the agent brings the VR up fine, but stops it on management's order with no
obvious reason.

On 11 Nov 2017 at 0:54, "Simon Weller"  wrote:

> Is the storage ceph?
>
>
> 
> From: Ivan Kudryavtsev 
> Sent: Friday, November 10, 2017 11:52 AM
> To: dev@cloudstack.apache.org
> Subject: Re: Apache CloudStack 4.10 VR/BasicZone/KVM Problem
>
> Hi, I did, and it does the things right, I even added "tee" to ssh 3922
> communication script to out vr response to additional log and it only
> receives VR version line and sends all info (the same from pastebin) to ACS
> and receives "stop" order.
>
> I'll try to provide additional info, but ad you can see, management
> receives proper response and sends stop next op. It looks very freaky
> without any notification...
>
> On 11 Nov 2017 at 0:37, "Simon Weller"  wrote:
>
> > Ivan,
> >
> >
> > Can you put the host agents into debug mode? Hopefully that will provide
> > more information.
> >
> >
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+agent+debug
> >
> >
> > - Si
> >
> > 
> > From: Ivan Kudryavtsev 
> > Sent: Friday, November 10, 2017 11:34 AM
> > To: dev@cloudstack.apache.org
> > Subject: Apache CloudStack 4.10 VR/BasicZone/KVM Problem
> >
> > Hello, Devs.
> >
> > I experience VR Start Problem in the fresh ACS 4.10 deployment
> >
> > Intersting place of logs is here: https://pastebin.com/iBXRBA5N
> >
> > Basically, the situation looks like:
> >
> > 1. Management Server tries to launch VR
> > 2. It gets from Agent proper VR response with VR details
> > 3. It sends StopCommand without explanation.
> >
> > I'm trying to figure out what happens inside, but the codebase is huge
> and
> > still no positive results. Please, let me know if you have any ideas
> which
> > could help me finding the reason. Thanks a lot.
> >
> > --
> > With best regards, Ivan Kudryavtsev
> > Bitworks Software, Ltd.
> > Cell: +7-923-414-1515
> > WWW: http://bitworks.software/ <http://bw-sw.com/>
> >
>


Re: POLL: ACL default egress policy rule in VPC

2017-11-13 Thread Simon Weller
3 definitely seems to make the most sense.


From: Rafael Weingärtner 
Sent: Monday, November 13, 2017 12:02 PM
To: dev@cloudstack.apache.org
Cc: u...@cloudstack.apache.org
Subject: Re: POLL: ACL default egress policy rule in VPC

3

On Mon, Nov 13, 2017 at 3:51 PM, Daan Hoogland 
wrote:

> 3 of course ;)
>
> On Mon, Nov 13, 2017 at 6:47 PM, Rene Moser  wrote:
>
> > Hi Devs
> >
> > The last days I fought with the ACL egress rule behaviour and I would
> > like to make a poll in which direction the fix should go.
> >
> > Short Version:
> >
> > We need to define a better default behaviour for acl default egress
> > rule. I see 3 different options:
> >
> > 1. always add a default deny all egress rule.
> >
> > This would be super easy to do (this should probably also be the
> > intermediate fix for 4.9, see https://github.com/apache/cloudstack/pull/2323)
> >
> >
> > 2. add a deny all egress rule in case we have at least one egress allow
> > rule.
> >
> > A bit intransparent to the user, but doable. This seems to be the
> > behaviour how it was designed and should have been implemented.
> >
> >
> > 3. use the default setting in the network offering "egressdefaultpolicy"
> > to specify the default behavior.
> >
> > There is already a setting which specifies this behaviour but is not
> > used in VPC. Why not use it?
> >
> > As a consequence when using this setting, the user should get more infos
> > about the policy of the network offering while choosing it for the tier.
> >
> >
> > Poll:
> >
> > 1. []
> > 2. []
> > 3. []
> > 4. [] Other? What?
> >
> >
> > Long Version:
> >
> > First, let's have a look of the issue:
> >
> > In version 4.5, creating a new acl with no egress (ACL_OUTBOUND) rule
> > would result in a "accept egress all":
> >
> > -A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
> > NEW -j ACL_OUTBOUND_eth2
> > -A ACL_OUTBOUND_eth2 -j ACCEPT
> >
> > When an egress rule (here deny port 25 egress; no matter if deny or allow)
> > gets added, the result is a "deny all" appended:
> >
> > -A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state
> > NEW -j ACL_OUTBOUND_eth2
> > -A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 25 -j DROP
> > -A ACL_OUTBOUND_eth2 -j DROP
> >
> > This does not make any sense and is a bug IMHO.
> >
> >
> > In 4.9 the behaviour is different:
> >
> > (note there is a bug in the ordering of egress rules which is fixed by
> > https://github.com/apache/cloudstack/pull/2313)
> >
> > The default policy is kept accept egress all.
> >
> > -A PREROUTING -s 10.11.1.0/24 ! -d 10.11.1.1/32 -i eth2 -m state --state
> > NEW -j ACL_OUTBOUND_eth2
> > -A ACL_OUTBOUND_eth2 -d 224.0.0.18/32 -j ACCEPT
> > -A ACL_OUTBOUND_eth2 -d 225.0.0.50/32 -j ACCEPT
> > -A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 80 -j ACCEPT
> >
> >
> > To me it looks like the wanted behavior was "egress all as default. If
> > we have allow rules, append deny all". This would make sense but is
> > quite intransparent.
> >
> > But let's poll
> >
> >
> >
>
>
> --
> Daan
>



--
Rafael Weingärtner
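
To make option 1 in the poll above concrete: with a default-deny egress policy
the ACL_OUTBOUND chain would always end in an explicit DROP, whether or not the
user defined any allow rules (the ACCEPT below stands for whatever allow rules
the user added), roughly:

-A PREROUTING -s 10.10.0.0/24 ! -d 10.10.0.1/32 -i eth2 -m state --state NEW -j ACL_OUTBOUND_eth2
-A ACL_OUTBOUND_eth2 -p tcp -m tcp --dport 80 -j ACCEPT
-A ACL_OUTBOUND_eth2 -j DROP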


Re: New committer: Gabriel Beims Bräscher

2017-11-16 Thread Simon Weller
Congrats Gabriel, much deserved!



From: Wei ZHOU 
Sent: Thursday, November 16, 2017 2:16 AM
To: dev@cloudstack.apache.org
Subject: Re: New committer: Gabriel Beims Bräscher

Congratulations Gabriel!

-Wei

2017-11-15 11:32 GMT+01:00 Rafael Weingärtner :

> The Project Management Committee (PMC) for Apache CloudStack has invited
> Gabriel Beims Bräscher to become committer and we are pleased to announce
> that he has accepted.
>
> Gabriel has shown commitment to Apache CloudStack community, contributing
> with PRs in a constant fashion. Moreover, he has also proved great
> abilities to interact with the community quite often in our mailing lists
> and Slack channel trying to help people.
>
> Let´s congratulate and welcome Apache CloudStack’s newest committer.
>
> --
> Rafael Weingärtner
>


Re: Circumventing VXLAN MTU issues

2017-11-20 Thread Simon Weller
Lucian,

Can you run jumbos on your switches?

- Si



From: Nux! 
Sent: Monday, November 20, 2017 10:08 AM
To: dev
Cc: users
Subject: Circumventing VXLAN MTU issues

Hello,

I am playing around with the native VXLAN implementation and I have of course
hit the situation where the MTU on the host gets chomped by 50 bytes and
traffic crawls to a stop in the VMs.
How are you people circumventing this? The easiest, but kind of inconvenient
for customers, would be to set MTU 1450 inside the VM, though it could be baked
into the templates or set via cloud-init.

Does the OpenVSwitch implementation (of VXLAN) suffer from the same problem? I
understand OVS can dynamically adjust MTUs to compensate (?), but at the same
time that VXLAN implementation suffers from some limitations (multicast etc).

Lucian


--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Circumventing VXLAN MTU issues

2017-11-20 Thread Simon Weller
Change your host interface MTU to something a lot higher.
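
Concretely: VXLAN adds roughly 50 bytes of encapsulation overhead, so raising
the underlay NIC to 1550 or more lets guests keep a 1500 MTU. A sketch with a
made-up interface name, on CentOS 7:

# raise the MTU on the physical interface carrying the VXLAN traffic
ip link set dev eth1 mtu 1600
# persist it across reboots
echo 'MTU=1600' >> /etc/sysconfig/network-scripts/ifcfg-eth1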



From: Nux! 
Sent: Monday, November 20, 2017 10:51 AM
To: users
Cc: dev
Subject: Re: Circumventing VXLAN MTU issues

Hi,

I probably can use jumbo frames, but for now my lab is restricted to a single 
machine.
Anything I can do in this situation?

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Simon Weller" 
> To: "dev" , "users" 
> Sent: Monday, 20 November, 2017 16:38:33
> Subject: Re: Circumventing VXLAN MTU issues

> Lucian,
>
> Can you run jumbos on your switches?
>
> - Si
>
>
> 
> From: Nux! 
> Sent: Monday, November 20, 2017 10:08 AM
> To: dev
> Cc: users
> Subject: Circumventing VXLAN MTU issues
>
> Hello,
>
> I am playing around with the native VXLAN implementation and I have of course
> hit the situation where MTU on the host gets chomped by 50 bytes and traffic
> crawls to a stop in the VMs.
> How are you people circumventing this? The easiest, but kind of inconvenient 
> for
> customers would be to set MTU 1450 inside the VM, though it could be baked 
> into
> the templates or via cloud-init.
>
> Is the OpenVSwitch implementation (of VXLAN) suffer from the same problem, I
> understand ovs can dynamically adjust MTUs to compensate (?), but at the same
> time this VXLAN implementation suffers from some limitations (multicast etc).
>
> Lucian
>
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro


RE: Introduction

2017-11-23 Thread Simon Weller
Welcome Dingane.

Simon Weller/615-312-6068

-Original Message-
From: Gabriel Beims Bräscher [gabrasc...@gmail.com]
Received: Thursday, 23 Nov 2017, 7:34AM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
CC: Dingane Hlaluku [dingane.hlal...@shapeblue.com]
Subject: Re: Introduction

Welcome, Dingane!

2017-11-23 7:45 GMT-02:00 Wido den Hollander :

>
> > Op 23 november 2017 om 9:00 schreef Dingane Hlaluku <
> dingane.hlal...@shapeblue.com>:
> >
> >
> > Dear Community
> >
> >
> >
> >
> >
> > It is with great pleasure that I would like to introduce myself to this
> community. I am Dingane and I have recently joined ShapeBlue to work on
> CloudStack. I am looking forward to future collaborating and learning from
> everyone.
> >
> >
>
> I Dingane, welcome! Looking forward to your contributions!
>
> Wido
>
> >
> > Thank you,
> >
> > dingane.hlal...@shapeblue.com
> > www.shapeblue.com<http://www.shapeblue.com>
> > ,
> > @shapeblue
> >
> >
> >
>


Re: [PROPOSE] RM for 4.11

2017-11-30 Thread Simon Weller
Great Rohit! We'll provide as much support to you guys as we can.


From: Rohit Yadav 
Sent: Wednesday, November 29, 2017 4:14 AM
To: dev@cloudstack.apache.org
Subject: [PROPOSE] RM for 4.11

Hi All,

I’d like to put myself forward as release manager for 4.11. The 4.11 release
will be the next major LTS release since 4.9 and will be supported for 20
months per the LTS manifesto [2], until 1 July 2019.

Daan Hoogland and Paul Angus will assist during the process and all of us
will be the gatekeepers for reviewing/testing/merging the PRs, others will
be welcome to support as well.

As a community member, I will try to help get PRs reviewed, tested and
merged (as would everyone else I hope) but with an RM hat on I would like
to see if we can make that role less inherently life-consuming and put the
onus back on the community to get stuff done.

Here is the plan:
1. As RM I put forward the freeze date of the 8th of January 2018, hoping
for community approval.
2. After the freeze date (8th Jan) until GA release, features will not be
allowed and fixes only as long as there are blocker issues outstanding.
Fixes for other issues will be individually judged on their merit and risk.
3. RM will triage/report critical and blocker bugs for 4.11 [4] and
encourage people to get them fixed.
4. RM will create RCs and start voting once blocker bugs are cleared and
baseline smoke test results are on par with previous 4.9.3.0/4.10.0.0 smoke
test results.
5. RM will allocate at least a week for branch stabilization and testing.
At the earliest, on 15th January, RM will put 4.11.0.0-rc1 for voting from
the 4.11 branch, and master will be open to accepting new features.
6. RM will repeat 3-5 as required. Voting/testing of -rc2, -rc3 and so on
will be created as required.
7. Once vote passes - RM will continue with the release procedures [1].

In conjunction with that, I also propose and put forward the date of 4.12
cut-off as 4 months [3] after GA release of 4.11 (so everyone knows when
the next one is coming hopefully giving peace of mind to those who have
features which would not make the proposed 4.11 cut off).

I’d like the community (including myself and colleagues) to:
- Up to 8th January, community members try to review, test and merge as
many fixes as possible, while being super-diligent not to de-stabilize the
master branch.
- Engage with the gatekeepers to get your PRs reviewed, tested and merged
(currently myself, Daan and Paul; others are welcome to engage as well). Do
not merge the PRs.
- A pull request may be reverted where the author(s) are not responding and
authors may be asked to re-submit their changes after taking suitable
remedies.
- Find automated method to show (at a glance) statuses of PRs with respect
to:
  · Number of LGTMs
  · Smoke tests
  · Functional tests
  · Travis tests passing
  · Mergeability
- Perform a weekly run of a component-test matrix against the master branch
before Jan 8th cut off (based on current hypervisors including basic (KVM)
and advanced networking).
- Continue to fix broken tests.

Thoughts, feedback, comments?

[1] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+Procedure

[2] https://cwiki.apache.org/confluence/display/CLOUDSTACK/LTS

[3] https://cwiki.apache.org/confluence/display/CLOUDSTACK/Releases

[4] The current list of blocker and critical bugs currently stands as per
the following list:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20CLOUDSTACK%20AND%20issuetype%20%3D%20Bug%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)%20AND%20priority%20in%20(Blocker%2C%20Critical)%20AND%20affectedVersion%20in%20(4.10.0.0%2C%204.10.1.0%2C%204.11.0.0%2C%20Future)%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC

Regards,
Rohit Yadav


RE: [DISCUSS] Replace the default built-in centos template

2017-12-09 Thread Simon Weller
Yes, long overdue. Let's do it.

Simon Weller/615-312-6068

-Original Message-
From: Milamber [milam...@apache.org]
Received: Saturday, 09 Dec 2017, 12:18PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: [DISCUSS] Replace the default built-in centos template


Great idea. Perhaps use the official CentOS 7 generic image:

https://wiki.centos.org/Download
http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
or
http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz
(we would need to update the download method to add support for the XZ format)



On 09/12/2017 14:36, Rohit Yadav wrote:
> All,
>
> I would like to kick off a discussion thread on replacing the current
> centos5-based built-in guest VM template with a systemd and cloud-init
> enabled centos7 one.
>
> Thoughts, comments?
>
> Regards.
>
> Get Outlook for Android<https://aka.ms/ghei36>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com<http://www.shapeblue.com>
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>



Re: [VOTE] Clean up old and obsolete branches.

2018-01-02 Thread Simon Weller
+0


From: Daan Hoogland 
Sent: Tuesday, January 2, 2018 12:19 PM
To: dev
Subject: Re: [VOTE] Clean up old and obsolete branches.

0

On Tue, Jan 2, 2018 at 1:51 PM, Gabriel Beims Bräscher  wrote:

> +1
>
> 2018-01-02 9:46 GMT-02:00 Rafael Weingärtner  >:
>
> > Hope you guys had great holy days!
> >
> > Resuming the discussion we started last year in [1]. It is time to vote
> and
> > then to push (if the vote is successful) the protocol defined to our
> wiki.
> > Later we can start enforcing it.
> > I will summarize the protocol for branches in the official repository.
> >
> >1. We only maintain the master and major release branches. We
> currently
> >have a system of X.Y.Z.S. I define major release here as a release
> that
> >changes either ((X or Y) or (X and Y));
> >2. We will use tags for versioning. Therefore, all versions we release
> >are tagged accordingly, including minor and security releases;
> >3. When releasing the “SNAPSHOT” is removed and the branch of the
> >version is created (if the version is being cut from master). Rule (1)
> > one
> >is applied here; therefore, only major releases will receive branches.
> >Every release must have a tag according to the format X.Y.Z.S. After
> >releasing, we bump the POM of the version to next available SNAPSHOT;
> >4. If there's a need to fix an old version, we work on HEAD of
> >corresponding release branch. For instance, if we want to fix
> something
> > in
> >release 4.1.1.0, we will work on branch 4.1, which will have the POM
> > set to
> >4.1.2.0-SNAPSHOT;
> >5. People should avoid (it is not forbidden though) using the official
> >apache repository to store working branches. If we want to work
> > together on
> >some issues, we can set up a fork and give permission to interested
> > parties
> >(the official repository is restricted to committers). If one uses the
> >official repository, the branch used must be cleaned right after
> > merging;
> >6. Branches not following these rules will be removed if they have not
> >received attention (commits) for over 6 (six) months;
> >7. Before the removal of a branch in the official repository it is
> >mandatory to create a Jira ticket and send a notification email to
> >CloudStack’s dev mailing list. If there are no objections, the branch
> > can
> >be deleted seven (7) business days after the notification email is
> sent;
> >8. After the branch removal, the Jira ticket must be closed.
> >
> > Let’s go to the poll:
> > (+1) – I want to work using this protocol
> > (0) – Indifferent to me
> > (-1) – I prefer the way it is not, without any protocol/guidelines
> >
> >
> > [1]
> > http://mail-archives.apache.org/mod_mbox/cloudstack-dev/
> > 201711.mbox/%3CCAHGRR8ozDBX%3DJJewLz_cu-YP9vA3TEmesvxGArTDBPerAOj8Cw%
> > 40mail.gmail.com%3E
> >
> > --
> > Rafael Weingärtner
> >
>



--
Daan


Re: [DISCUSS] running sVM and VR as HVM on XenServer

2018-01-12 Thread Simon Weller
They do not. They receive a link-local IP address that is used for host agent
to VR communication. All VR commands are proxied through the host agent. Host
agent to VR communication is over SSH.
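
For reference, on a KVM host that proxying is just SSH to the VR's link-local
address on port 3922 using the host's cloud key - the address below is only an
example:

# run a command inside the VR from its KVM host over the link-local control channel
ssh -p 3922 -i /root/.ssh/id_rsa.cloud -o StrictHostKeyChecking=no root@169.254.3.56 ip addr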



From: Rafael Weingärtner 
Sent: Friday, January 12, 2018 1:42 PM
To: dev
Subject: Re: [DISCUSS] running sVM and VR as HVM on XenServer

But we are already using this design in VMware deployments (not sure about
KVM). The management network is already an isolated network only used by
system VMs and ACS. Unless we are attacked by some internal agent, we are safe
from customer attacks through the management network. Also, we can (if we don't
already) restrict access to only these management interfaces in system VMs
(VRs, SSVM, console proxy and others to come).



Can someone confirm if VRs receive management IPs in KVM deployments?

On Fri, Jan 12, 2018 at 5:36 PM, Syed Ahmed  wrote:

> The reason why we used link local in the first place was to isolate the VR
> from directly accessing the management network. This provides another layer
> of security in case of a VR exploit. This will also have a side effect of
> making all VRs visible to each other. Are we okay accepting this?
>
> Thanks,
> -Syed
>
> On Fri, Jan 12, 2018 at 11:37 AM, Tim Mackey  wrote:
>
> > dom0 already has a DHCP server listening for requests on internal
> > management networks. I'd be wary trying to manage it from an external
> > service like cloudstack lest it get reset upon XenServer patch. This
> alone
> > makes me favor option #2. I also think option #2 simplifies network
> design
> > for users.
> >
> > Agreed on making this as consistent across flows as possible.
> >
> >
> >
> > On Fri, Jan 12, 2018 at 9:44 AM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > It looks reasonable to manage VRs via management IP network. We should
> > > focus on using the same work flow for different deployment scenarios.
> > >
> > >
> > > On Fri, Jan 12, 2018 at 12:13 PM, Pierre-Luc Dion 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > We need to start a architecture discussion about running SystemVM and
> > > > Virtual-Router as HVM instances in XenServer. With recent
> > > Meltdown-Spectre,
> > > > one of the mitigation step is currently to run VMs as HVM on
> XenServer
> > to
> > > > self contain a user space attack from a guest OS.
> > > >
> > > > Recent hotfix from Citrix XenServer (XS71ECU1009) enforce VMs to
> start
> > > has
> > > > HVM. This is currently problematic for Virtual Routers and SystemVM
> > > because
> > > > CloudStack use PV "OS boot Options" to preconfigure the VR eth0:
> > > > cloud_link_local. While using HVM the "OS boot Options" is not
> > accessible
> > > > to the VM so the VR fail to be properly configured.
> > > >
> > > > I currently see 2 potential approaches for this:
> > > > 1. Run a dhcpserver in dom0 managed by cloudstack so VR eth0 would
> > > receive
> > > > is network configuration at boot.
> > > > 2. Change the current way of managing VR, SVMs on XenServer,
> potentiall
> > > do
> > > > same has with VMware: use pod management networks and assign a POD IP
> > to
> > > > each VR.
> > > >
> > > > I don't know how it's implemented in KVM, maybe cloning KVM approach
> > > would
> > > > work too, could someone explain how it work on this thread?
> > > >
> > > > I'd a bit fan of a potential #2 aproach because it could facilitate
> VR
> > > > monitoring and logging, although a migration path for an existing
> cloud
> > > > could be complex.
> > > >
> > > > Cheers,
> > > >
> > > >
> > > > Pierre-Luc
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>



--
Rafael Weingärtner


Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

2018-01-18 Thread Simon Weller
All,


We're currently working on getting 4.11 stood up on hardware for testing. An 
extension would certainly be helpful to us.


From: Nux! 
Sent: Wednesday, January 17, 2018 1:07 PM
To: dev
Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

The extension is welcome!

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Boris Stoyanov" 
> To: "dev" 
> Sent: Wednesday, 17 January, 2018 18:24:20
> Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)

> Yes Rohit, tried other browser and I’m not able to login..
>
> I’m +1 on the extend but unfortunately -1 cause of this blocker.
>
> Bobby.
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> On 17 Jan 2018, at 18:24, Rohit Yadav
> mailto:rohit.ya...@shapeblue.com>> wrote:
>
> The 72hrs window is more of a guideline than a rule, without lazy consensus I
> don't think we've any choice here, so Monday it is.
>
> Kris - thanks, if we need RC2 and your proposed issues are blocker/critical we
> can consider them so meanwhile engage with community to get them reviewed.
>
> Bobby - can you attempt login in incognito mode or in a different browser 
> after
> upgrading to 4.11 from 4.5, rule out caching issue?
>
> Regards.
>
> Get Outlook for Android
>
> 
> From: Tutkowski, Mike
> mailto:mike.tutkow...@netapp.com>>
> Sent: Wednesday, January 17, 2018 8:48:28 PM
> To: dev@cloudstack.apache.org
> Subject: Re: [VOTE] Apache Cloudstack 4.11.0.0 (LTS)
>
> Or perhaps just the first RC should have a longer window?
>
> On 1/17/18, 8:12 AM, "Tutkowski, Mike"
> mailto:mike.tutkow...@netapp.com>> wrote:
>
>   If all of our testing were completely in an automated fashion, then I would
>   agree that the 72-hour window is sufficient. However, we don’t have that 
> kind
>   of automated coverage and people aren’t always able to immediately begin
>   testing things out like migrating from their version of CloudStack to the 
> new
>   one. That being the case, 72 hours does seem (at least for where we are now 
> as
>   a project in terms of automated testing coverage) a bit short.
>
>   On 1/17/18, 7:52 AM, "Daan Hoogland"
>   mailto:daan.hoogl...@shapeblue.com>> wrote:
>
>   The 72 hours is to make sure all stakeholders had a chance to glance. 
> Testing is
>   supposed to have happened before. We have a culture of testing only 
> after
>   RC-cut which is part of the problem. The long duration of a single test 
> run
>   takes, is another part. And finally, in this case there is the new 
> mindblow
>   called meltdown. I think in general we should try to keep the 72 hours 
> but for
>   this release it is not realistic.
>
>   On 17/01/2018, 15:48, "Rene Moser"
>   mailto:m...@renemoser.net>> wrote:
>
>   On 01/17/2018 03:34 PM, Daan Hoogland wrote:
> People, People,
>
> a lot of us are busy with meltdown fixes and a full component test takes about
> the 72 hours that we have for our voting, I propose to extend the vote period
> until at least Monday.
>
>   +1
>
>   I wonder where this 72-hour window comes from... Is it just me or,
>   based on the amount of changes and "things" to test, I would
>   expect a window in the size of 7-14 days...?
>
>   René
>
>
>
>   daan.hoogl...@shapeblue.com
>   
> www.shapeblue.com>
>   53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>   @shapeblue
>
>
>
>
>
>
>
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue


Re: [4.11] Testing New "Ability to disable primary storage to secondary storage backups for snapshots" Feature

2018-01-25 Thread Simon Weller
I'm not sure why this was removed from the public global settings. We haven't
tested the feature set since the PR below was merged.


Nathan,


Thoughts on this?





From: Rohit Yadav 
Sent: Thursday, January 25, 2018 8:20 AM
To: dev@cloudstack.apache.org
Subject: Re: [4.11] Testing New "Ability to disable primary storage to 
secondary storage backups for snapshots" Feature

Hi Ozhan,


The global setting was removed in following PR, however you can get the 
feature/ability via the API:

https://github.com/apache/cloudstack/pull/2081


Also see:

https://cwiki.apache.org/confluence/display/CLOUDSTACK/Separate+creation+and+backup+operations+for+a+volume+snapshot





Please test and share if the changes introduce by above are acceptable, or you 
think is blocker(ish)?


- Rohit









From: Özhan Rüzgar Karaman 
Sent: Wednesday, January 24, 2018 4:50:41 PM
To: dev@cloudstack.apache.org
Subject: [4.11] Testing New "Ability to disable primary storage to secondary 
storage backups for snapshots" Feature

Hi;
I plan to test "Ability to disable primary storage to secondary storage
backups for snapshots" feature on 4.11 rc1 release. For this test i think i
need to update "snapshot.backup.rightafter" parameter from global settings
but i could not find the parameter on global configuration there.

Is this normal?

Thanks
Özhan

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue





Re: System VMs not migrating when host down

2018-02-15 Thread Simon Weller
Hey Andrija,


So it sounds like your primary storage isn't enforcing an exclusive lock.  How 
is your storage exposed to ACS?


We've found that HA doesn't work at all with a host failure on KVM, as those 
VMs will never be restarted until the host is either recovered, or the host is 
removed from ACS. We are running a heavily patched 4.8.

- Si

From: Andrija Panic 
Sent: Wednesday, February 14, 2018 3:22 AM
To: dev
Subject: Re: System VMs not migrating when host down

Humble opinion (until host HA is ready in 4.11, if I'm not mistaken?): avoid
using the HA option for VMs - avoid setting the "Offer HA" option on any
compute/service offerings, since we did end up (was it ACS 4.5 or 4.8,
can't remember now) having 2 copies of the SAME VM running on 2 different
hosts... imagine storage/volume corruption... this happened a few times for
us.

Host HA looks like a really nice thing, I have not tested it yet... but it
should completely solve the problem.

On 14 February 2018 at 10:14, Paul Angus  wrote:

> Hi Sean,
>
> The 'problem' with VM HA in KVM is that it relies on the parent host agent
> to be connected to report that the VM is down.  We cannot assume that just
> because a host agent is disconnected, that the VMs on that host are not
> running.
>
> This is where HOST HA comes in, this feature detects loss of connection to
> the agent and then tries to determine if the VMs on that host are active
> and then attempts some corrective action.
>
>
> Kind regards,
>
> Paul Angus
>
> paul.an...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>
> -Original Message-
> From: Sean Lair [mailto:sl...@ippathways.com]
> Sent: 13 February 2018 23:06
> To: dev@cloudstack.apache.org
> Subject: System VMs not migrating when host down
>
> Hi all,
>
> We are testing VM HA and are having a problem with our system VMs
> (secondary storage and console) not being started up on another host when a
> host fails.
>
> Shouldn't the system VMs be VM HA-enabled?  Currently they are just in an
> "Alert" agent state, but never migrate.  We are currently running 4.9.3.
>
>
> Thanks
> Sean
>



--

Andrija Panić
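
For context, the "corrective action" host HA takes is out-of-band fencing of
the host, which with IPMI boils down to something like this (BMC address and
credentials are placeholders):

# check, and if needed force off, the unreachable host before its VMs are restarted elsewhere
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret chassis power off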


Re: HA issues

2018-02-19 Thread Simon Weller
Andrija,


We pushed quite a few PRs on the exception and lockup issues related to Ceph in 
the agent.


We have a PR for the deletion issue. See if you have it pulled into your 
release - https://github.com/myENA/cloudstack/pull/9


- Si





From: Andrija Panic 
Sent: Saturday, February 17, 2018 1:49 PM
To: dev
Subject: Re: HA issues

Hi Sean,

(we have 2 threads interleaving on the libvirt lockd topic...) - so, did you
manage to understand what causes the agent disconnect in most cases, for
you specifically? Is there any software (CloudStack) root cause
(disregarding e.g. networking issues etc.)?

Just our examples, which you should probably not have:

We had a CEPH cluster running (with ACS), and there any exception in librbd
would crash the JVM and the agent, but this has mostly been fixed.
Now we get e.g. an agent disconnect when ACS tries to delete a volume on CEPH
(and for some reason does not succeed within 30 minutes, so the volume deletion
fails) - then libvirt gets completely stuck (even virsh list doesn't work)...
so the agent gets disconnected eventually.

It would be good to get rid of agent disconnections in general, obviously :),
so that is why I'm asking (you are on NFS, so I would like to hear about your
experience here).

Thanks

On 16 February 2018 at 21:52, Sean Lair  wrote:

> We were in the same situation as Nux.
>
> In our test environment we hit the issue with VMs not getting fenced and
> coming up on two hosts because of VM HA.   However, we updated some of the
> logic for VM HA and turned on libvirtd's locking mechanism.  Now we are
> working great w/o IPMI.  The locking stops the VMs from starting elsewhere,
> and everything recovers very nicely when the host starts responding again.
>
> We are on 4.9.3 and haven't started testing with 4.11 yet, but it may work
> along-side IPMI just fine - it would just have affect the fencing.
> However, we *currently* prefer how we are doing it now, because if the
> agent stops responding, but the host is still up, the VMs continue running
> and no actual downtime is incurred.  Even when VM HA attempts to power on
> the VMs on another host, it just fails the power-up and the VMs continue to
> run on the "agent disconnected" host. The host goes into alarm state and
> our NOC can look into what is wrong the agent on the host.  If IPMI was
> enabled, it sounds like it would power off the host (fence) and force
> downtime for us even if the VMs were actually running OK - and just the
> agent is unreachable.
>
> I plan on submitting our updates via a pull request at some point.  But I
> can also send the updated code to anyone that wants to do some testing
> before then.
>
> -Original Message-
> From: Marcus [mailto:shadow...@gmail.com]
> Sent: Friday, February 16, 2018 11:27 AM
> To: dev@cloudstack.apache.org
> Subject: Re: HA issues
>
> From your other emails it sounds as though you do not have IPMI
> configured, nor host HA enabled, correct? In this case, the correct thing
> to do is nothing. If CloudStack cannot guarantee the VM state (as is the
> case with an unreachable hypervisor), it should do nothing, for fear of
> causing a split brain and corrupting the VM disk (VM running on two hosts).
>
> Clustering and fencing is a tricky proposition. When CloudStack (or any
> other cluster manager) is not configured to or cannot guarantee state then
> things will simply lock up, in this case your HA VM on your broken
> hypervisor will not run elsewhere. This has been the case for a long time
> with CloudStack, HA would only start a VM after the original hypervisor
> agent came back and reported no VM is running.
>
> The new feature, from what I gather, simply adds the possibility of
> CloudStack being able to reach out and shut down the hypervisor to
> guarantee state. At that point it can start the VM elsewhere. If something
> fails in that process (IPMI unreachable, for example, or bad credentials),
> you're still going to be stuck with a VM not coming back.
>
> It's the nature of the thing. I'd be wary of any HA solution that does not
> reach out and guarantee state via host or storage fencing before starting a
> VM elsewhere, as it will be making assumptions. Its entirely possible a VM
> might be unreachable or unable to access it storage for a short while, a
> new instance of the VM is started elsewhere, and the original VM comes back.
>
> On Wed, Jan 17, 2018 at 9:02 AM Nux!  wrote:
>
> > Hi Rohit,
> >
> > I've reinstalled and tested. Still no go with VM HA.
> >
> > What I did was to kernel panic that particular HV ("echo c >
> > /proc/sysrq-trigger" <- this is a proper way to simulate a crash).
> > What happened next is the HV got marked as "Alert", the VM on it was
> > all the time marked as "Running" and it was not migrated to another HV.
> > Once the panicked HV has booted back the VM reboots and becomes
> available.
> >
> > I'm running on CentOS 7 mgmt + HVs and NFS primary and secondary storage.
> > The VM has HA enabled service offering.
> > H
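
The libvirtd locking Sean describes above is typically virtlockd; enabling it
looks roughly like this on each KVM host (exact paths and service names may
vary by distro):

# tell qemu to use the lockd lock manager, then enable virtlockd
echo 'lock_manager = "lockd"' >> /etc/libvirt/qemu.conf
systemctl enable --now virtlockd
systemctl restart libvirtd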

Re: HA issues

2018-02-19 Thread Simon Weller
Also these -

https://github.com/myENA/cloudstack/pull/20/commits/1948ce5d24b87433ae9e8f4faebdfc20b56b751a


https://github.com/myENA/cloudstack/pull/12/commits






From: Andrija Panic 
Sent: Monday, February 19, 2018 5:23 AM
To: dev
Subject: Re: HA issues

Hi Simon,

a big thank you for this, will have our devs check this!

Thanks!

On 19 February 2018 at 09:02, Simon Weller  wrote:

> Andrija,
>
>
> We pushed quite a few PRs on the exception and lockup issues related to
> Ceph in the agent.
>
>
> We have a PR for the deletion issue. See if you have it pulled into your
> release - https://github.com/myENA/cloudstack/pull/9
>
>
> - Si
>
>
>
>
> 
> From: Andrija Panic 
> Sent: Saturday, February 17, 2018 1:49 PM
> To: dev
> Subject: Re: HA issues
>
> Hi Sean,
>
> (we have 2 threads interleaving on the libvirt lockd..) - so, did you
> manage to understand what can cause the Agent Disconnect in most cases, for
> you specifically? Is there any software (CloudStack) root cause
> (disregarding i.e. networking issues etc)
>
> Just our examples, which you should probably not have:
>
> We had CEPH cluster running (with ACS), and there any exception in librbd
> would crash JVM and the agent, but this has been fixed mostly -
> Now get i.e. agent disconnect when ACS try to delete volume on CEPH (and
> for some reason not succeed withing 30 minutes, volume deletion fails) -
> then libvirt get's completety stuck (virsh list even dont work)...so  agent
> get's disconnect eventually.
>
> It would be good to get rid of agent disconnections in general, obviously
> :) so that is why I'm asking (you are on NFS, so would like to see your
> experience here).
>
> Thanks
>
> On 16 February 2018 at 21:52, Sean Lair  wrote:
>
> > We were in the same situation as Nux.
> >
> > In our test environment we hit the issue with VMs not getting fenced and
> > coming up on two hosts because of VM HA.   However, we updated some of
> the
> > logic for VM HA and turned on libvirtd's locking mechanism.  Now we are
> > working great w/o IPMI.  The locking stops the VMs from starting
> elsewhere,
> > and everything recovers very nicely when the host starts responding
> again.
> >
> > We are on 4.9.3 and haven't started testing with 4.11 yet, but it may
> work
> > along-side IPMI just fine - it would just have affect the fencing.
> > However, we *currently* prefer how we are doing it now, because if the
> > agent stops responding, but the host is still up, the VMs continue
> running
> > and no actual downtime is incurred.  Even when VM HA attempts to power on
> > the VMs on another host, it just fails the power-up and the VMs continue
> to
> > run on the "agent disconnected" host. The host goes into alarm state and
> > our NOC can look into what is wrong the agent on the host.  If IPMI was
> > enabled, it sounds like it would power off the host (fence) and force
> > downtime for us even if the VMs were actually running OK - and just the
> > agent is unreachable.
> >
> > I plan on submitting our updates via a pull request at some point.  But I
> > can also send the updated code to anyone that wants to do some testing
> > before then.
> >
> > -Original Message-
> > From: Marcus [mailto:shadow...@gmail.com]
> > Sent: Friday, February 16, 2018 11:27 AM
> > To: dev@cloudstack.apache.org
> > Subject: Re: HA issues
> >
> > From your other emails it sounds as though you do not have IPMI
> > configured, nor host HA enabled, correct? In this case, the correct thing
> > to do is nothing. If CloudStack cannot guarantee the VM state (as is the
> > case with an unreachable hypervisor), it should do nothing, for fear of
> > causing a split brain and corrupting the VM disk (VM running on two
> hosts).
> >
> > Clustering and fencing is a tricky proposition. When CloudStack (or any
> > other cluster manager) is not configured to or cannot guarantee state
> then
> > things will simply lock up, in this case your HA VM on your broken
> > hypervisor will not run elsewhere. This has been the case for a long time
> > with CloudStack, HA would only start a VM after the original hypervisor
> > agent came back and report

RE: I'd like to introduce you to Khosrow

2018-02-22 Thread Simon Weller
Welcome Khosrow.

Simon Weller/615-312-6068

-Original Message-
From: Khosrow Moossavi [kmooss...@cloudops.com]
Received: Thursday, 22 Feb 2018, 7:00PM
To: dev@cloudstack.apache.org [dev@cloudstack.apache.org]
Subject: Re: I'd like to introduce you to Khosrow

Thank you Pierre-Luc,
I'm super excited to be part of the community.

On Feb 22, 2018 18:42, "Rafael Weingärtner" 
wrote:

> Welcome!
> Congratualations for the great job done so far...
>
> On Thu, Feb 22, 2018 at 8:40 PM, Pierre-Luc Dion 
> wrote:
>
> > Hi fellow colleagues,
> >
> > I might be a bit late with this email...
> >
> > I'd like to introduce Khosrow Moossavi, who recently join our team and
> his
> > focus is currently exclusively on dev for Cloudstack with cloud.ca.
> >
> > Our 2 current priorities are:
> > -fixing VRs,SVMs to run has HVM VMs in xenserver.
> > - redesign, or rewrite, the remote management vpn for vpc, poc in
> progress
> > for IKEv2...
> >
> >
> >
> > Some of you might have interact with him already.
> >
> >
> > Also, we are going to be more active for the upcomming 4.12 release.
> >
> >
> > Cheers!
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: Notice that Gabriel Bräscher now works at PCextreme

2018-03-20 Thread Simon Weller
Great, congrats Gabriel!





From: Paul Angus 
Sent: Tuesday, March 20, 2018 9:08 AM
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: RE: Notice that Gabriel Bräscher now works at PCextreme

Awesome!


Kind regards,

Paul Angus

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue




-Original Message-
From: Rohit Yadav 
Sent: 20 March 2018 14:04
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: Re: Notice that Gabriel Bräscher now works at PCextreme

Congrats Gabriel. Great now you can resume work on your PRs.


- Rohit


From: Wido den Hollander 
Sent: Tuesday, March 20, 2018 7:20:57 PM
To: dev@cloudstack.apache.org
Cc: gabrasc...@gmail.com
Subject: Notice that Gabriel Bräscher now works at PCextreme

Hi,

Just wanted to let you know that Gabriel Bräscher started working at PCextreme 
this week.

He'll be committing and developing on CloudStack for PCextreme and the 
community.

Just so everybody knows that we are colleagues now.

Let's make CloudStack even better!

Wido

rohit.ya...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue






Re: New committer: Dag Sonstebo

2018-03-20 Thread Simon Weller
Congrats Dag, much deserved!


From: John Kinsella 
Sent: Tuesday, March 20, 2018 8:58 AM
To: 
Subject: New committer: Dag Sonstebo

The Project Management Committee (PMC) for Apache CloudStack has
invited Dag Sonsteboto become a committer and we are pleased to
announce that he has accepted.

I’ll take a moment here to remind folks that being an ASF committer
isn’t purely about code - Dag has been helping out for quite a while
on users@, and seems to have a strong interest around ACS and the
community. We welcome this activity, and encourage others to help
out as they can - it doesn’t necessarily have to be purely code-related.

Being a committer enables easier contribution to the project since
there is no need to go via the patch submission process. This should
enable better productivity.

Please join me in welcoming Dag!

John


Re: [VOTE] Move to Github issues

2018-03-26 Thread Simon Weller
+1 (binding).



From: Rohit Yadav 
Sent: Monday, March 26, 2018 1:33 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: [VOTE] Move to Github issues

All,

Based on the discussion last week [1], I would like to start a vote to put
the proposal into effect:

- Enable Github issues, wiki features in CloudStack repositories.
- Both user and developers can use Github issues for tracking issues.
- Developers can use #id references while fixing an existing/open issue in
a PR [2]. PRs can be sent without requiring to open/create an issue.
- Use Github milestone to track both issues and pull requests towards a
CloudStack release, and generate release notes.
- Relax requirement for JIRA IDs, JIRA still to be used for historical
reference and security issues. Use of JIRA will be discouraged.
- The current requirement of two(+) non-author LGTMs will continue for PR
acceptance. The two(+) PR non-authors can advise resolution to any issue
that we've not already discussed/agreed upon.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1  approve
[ ] +0  no opinion
[ ] -1  disapprove (and reason why)

Vote will be open for 120 hours. If the vote passes the following actions
will be taken:
- Get Github features enabled from ASF INFRA
- Update CONTRIBUTING.md and other relevant cwiki pages.
- Update project website

[1] https://markmail.org/message/llodbwsmzgx5hod6
[2] https://blog.github.com/2013-05-14-closing-issues-via-pull-requests/

Regards,
Rohit Yadav


Re: Welcoming Mike as the new Apache CloudStack VP

2018-03-26 Thread Simon Weller
Thanks for all of your hard work Wido, we really appreciate it.


Congratulations Mike!


- Si


From: Wido den Hollander 
Sent: Monday, March 26, 2018 9:11 AM
To: dev@cloudstack.apache.org; us...@cloudstack.apache.org
Subject: Welcoming Mike as the new Apache CloudStack VP

Hi all,

It's been a great pleasure working with the CloudStack project as the
ACS VP over the past year.

A big thank you from my side for everybody involved with the project in
the last year.

Hereby I would like to announce that Mike Tutkowski has been elected to
replace me as the Apache Cloudstack VP in our annual VP rotation.

Mike has a long history with the project and we are happy to welcome him
as the new VP for CloudStack.

Welcome Mike!

Thanks,

Wido


Re: [DISCUSS] VR upgrade downtime reduction

2018-05-01 Thread Simon Weller
Yes, nice work!





From: Daan Hoogland 
Sent: Tuesday, May 1, 2018 5:28 AM
To: us...@cloudstack.apache.org
Cc: dev
Subject: Re: [DISCUSS] VR upgrade downtime reduction

good work Rohit,
I'll review 2508 https://github.com/apache/cloudstack/pull/2508

On Tue, May 1, 2018 at 12:08 PM, Rohit Yadav 
wrote:

> All,
>
>
> A short-term solution to VR upgrade or network restart (with cleanup=true)
> has been implemented:
>
>
> - The strategy for redundant VRs builds on top of Wei's original patch
> where backup routers are removed and replace in a rolling basis. The
> downtime I saw was usually 0-2 seconds, and theoretically downtime is
> maximum of [0, 3*advertisement interval + skew seconds] or 0-10 seconds
> (with cloudstack's default of 1s advertisement interval).
>
>
> - For non-redundant routers, I've implemented a strategy where first a new
> VR is deployed, then old VR is powered-off/destroyed, and the new VR is
> again re-programmed. With this strategy, two identical VRs may be up for a
> brief moment (few seconds) where both can serve traffic, however the new VR
> performs arp-ping on its interfaces to update neighbours. After the old VR
> is removed, the new VR is re-programmed which among many things performs
> another arpping. The theoretical downtime is therefore limited by the
> arp-cache refresh which can be up to 30 seconds. In my experiments, against
> various VMware, KVM and XenServer versions I found that the downtime was
> indeed less than 30s, usually between 5-20 seconds. Compared to older ACS
> versions, especially in cases where VRs deployment require full volume copy
> (like in VMware) a 10x-12x improvement was seen.
>
>
> Please review, test the following PRs which has test details, benchmarks,
> and some screenshots:
>
> https://github.com/apache/cloudstack/pull/2508
>
>
> Future work can be driven towards making all VRs redundant enabled by
> default that can allow for a firewall+connections state transfer
> (conntrackd + VRRP2/3 based) during rolling reboots.
>
>
> - Rohit
>
> 
>
>
>
> 
> From: Daan Hoogland 
> Sent: Thursday, February 8, 2018 3:11:51 PM
> To: dev
> Subject: Re: [DISCUSS] VR upgrade downtime reduction
>
> to stop the vote and continue the discussion. I personally want unification
> of all router vms: VR, 'shared network', rVR, VPC, rVPC, and eventually the
> one we want to create for 'enterprise topology hand-off points'. And I
> think we have some level of consensus on that but the path there is a
> concern for Wido and for some of my colleagues as well, and rightly so. One
> issue is upgrades from older versions.
>
> I see the common scenario as follows:
> + redundancy is deprecated and only number of instances remain.
> + an old VR is replicated in memory by an redundant enabled version, that
> will be in a state of running but inactive.
> - the old one will be destroyed while a ping is running
> - as soon as the ping fails more than three times in a row (this might have
> to have a hypervisor specific implementation or require a helper vm)
> + the new one is activated
>
> after this upgrade Wei's and/or Remi's code will do the work for any
> following upgrade.
>
> flames, please
>
>
>
> On Wed, Feb 7, 2018 at 12:17 PM, Nux!  wrote:
>
> > +1 too
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> >
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> - Original Message -
> > > From: "Rene Moser" 
> > > To: "dev" 
> > > Sent: Wednesday, 7 February, 2018 10:11:45
> > > Subject: Re: [DISCUSS] VR upgrade downtime reduction
> >
> > > On 02/06/2018 02:47 PM, Remi Bergsma wrote:
> > >> Hi Daan,
> > >>
> > >> In my opinion the biggest issue is the fact that there are a lot of
> > different
> > >> code paths: VPC versus non-VPC, VPC versus redundant-VPC, etc. That's
> > why you
> > >> cannot simply switch from a single VPC to a redundant VPC for example.
> > >>
> > >> For SBP, we mitigated that in Cosmic by converting all non-VPCs to a
> > VPC with a
> > >> single tier and made sure all features are supported. Next we merged
> > the single
> > >> and redundant VPC code paths. The idea here is that redundancy or not
> > should
> > >> only be a difference in the number of routers. Code should be the
> same.
> > A
> > >> single router, is also "master" but there just is no "backup".
> > >>
> > >> That simplifies things A LOT, as keepalived is now the master of the
> > whole
> > >> thing. No more assigning ip addresses in Python, but leave that to
> > keepalived
> > >> instead. Lots of code deleted. Easier to maintain, way more stable. We
> > just
> > >> released Cosmic 6 that has this feature and are now rolling it out in
> > >> production. Looking good so far. This change unlocks a lot of
> > possibilities,
> > >> like live upgrading 
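
On the failover arithmetic quoted above (downtime bounded by 3 × advertisement
interval + skew): that is standard VRRP timing, and in keepalived terms the
knob is advert_int. A minimal, purely illustrative instance - not the actual VR
template configuration:

vrrp_instance guest_net {
    state BACKUP
    interface eth2
    virtual_router_id 51
    priority 100
    advert_int 1        # CloudStack's default 1s; a backup takes over after ~3*advert_int + skew
    nopreempt
    virtual_ipaddress {
        10.1.1.1/24
    }
}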

Re: John Kinsella and Wido den Hollander now ASF members

2018-05-02 Thread Simon Weller
Congrats to both of you!



From: Daan Hoogland 
Sent: Wednesday, May 2, 2018 11:53 AM
To: dev
Subject: Re: John Kinsella and Wido den Hollander now ASF members

Wow, nice surprise

On Wed, 2 May 2018, 18:38 Dag Sonstebo,  wrote:

> Congratulations both!
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 02/05/2018, 17:33, "Nitin Kumar Maharana" <
> nitinkumar.mahar...@accelerite.com> wrote:
>
> Congratulations!!
>
>
>
>
> > On 02-May-2018, at 9:50 PM, Khosrow Moossavi 
> wrote:
> >
> > That's awesome! Congratulations!
> >
> >
> >
> >
> > On Wed, May 2, 2018 at 12:19 PM Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> Congratulations, guys! :-)
> >>
> >>> On May 2, 2018, at 9:58 AM, David Nalley  wrote:
> >>>
> >>> Hi folks,
> >>>
> >>> As noted in the press release[1] John Kinsella and Wido den
> Hollander
> >>> have been elected to the ASF's membership.
> >>>
> >>> Members are the 'shareholders' of the foundation, elect the board
> of
> >>> directors, and help guide the future of the ASF.
> >>>
> >>> Congrats to both of you, very well deserved.
> >>>
> >>> --David
> >>>
> >>> [1] https://s.apache.org/ysxx
> >>
>
>
>
>
>


Re: 4.11.0 - can't create guest vms with RBD storage!

2018-05-02 Thread Simon Weller
We've started looking into this particular bug.

We now have a 4.11 lab setup and can reproduce this.


- Si


From: Wei ZHOU 
Sent: Monday, April 30, 2018 1:25 PM
To: dev@cloudstack.apache.org
Subject: Re: 4.11.0 - can't create guest vms with RBD storage!

Agreed. agent.log might be helpful for troubleshooting.

It seems to be a bug within the KVM plugin.

-Wei

2018-04-30 15:36 GMT+02:00 Rafael Weingärtner :

> We might need some extra log entries. Can you provide them?
>
> On Mon, Apr 30, 2018 at 10:14 AM, Andrei Mikhailovsky <
> and...@arhont.com.invalid> wrote:
>
> > hello gents,
> >
> > I have just realised that after upgrading to 4.11.0 we are no longer able
> > to create new VMs. This has only just been noticed, as we have previously used
> > ready-made templates, which work just fine.
> >
> > Setup: ACS 4.11.0 (upgraded from 4.9.3), KVM + CEPH, Ubuntu 16.04 on all
> > servers
> >
> > When trying to create a new vm from an ISO image I get the following
> > error:
> >
> >
> > com.cloud.exception.StorageUnavailableException: Resource
> [StoragePool:2]
> > is unreachable: Unable to create Vol[3937|vm=2217|ROOT]:com.
> > cloud.utils.exception.CloudRuntimeException:
> > org.libvirt.LibvirtException: this function is not supported by the
> > connection driver: only RAW volumes are supported by this storage pool
> >
> > at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.
> > recreateVolume(VolumeOrchestrator.java:1336)
> > at org.apache.cloudstack.engine.orchestration.
> VolumeOrchestrator.prepare(VolumeOrchestrator.java:1413)
> >
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
> > VirtualMachineManagerImpl.java:1110)
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
> > VirtualMachineManagerImpl.java:4927)
> > at sun.reflect.GeneratedMethodAccessor498.invoke(Unknown Source)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> > DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(
> > VmWorkJobHandlerProxy.java:107)
> > at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(
> > VirtualMachineManagerImpl.java:5090)
> > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.
> > runInContext(AsyncJobManagerImpl.java:581)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
> > ManagedContextRunnable.java:49)
> > at org.apache.cloudstack.managed.context.impl.
> > DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
> > callWithContext(DefaultManagedContext.java:103)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
> > runWithContext(DefaultManagedContext.java:53)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
> > ManagedContextRunnable.java:46)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(
> AsyncJobManagerImpl.java:529)
> >
> > at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
> >
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
> >
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
> >
> > at java.lang.Thread.run(Thread.java:748)
> >
> >
> > My guess is that ACS tried to create a QCOW2 image type whereas it should
> > be RAW on ceph/rbd.
> >
> > I am really struggling to understand how this bug in a function of MAJOR
> > importance could have been missed during the tests run by developers and
> > the community before making a final release. Anyway, I hope the fix will
> > make it into the 4.11.1 release, otherwise it's really messed up!
> >
> > Cheers
> >
> > Andrei
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: 4.11.0 - can't create guest vms with RBD storage!

2018-05-03 Thread Simon Weller
Andrei,


Nathan has pushed a PR to fix this. Please see: 
https://github.com/apache/cloudstack/pull/2623

He has done some basic testing on it, but your feedback I'm sure would be 
appreciated.


- Si




From: Simon Weller 
Sent: Wednesday, May 2, 2018 4:27 PM
To: dev@cloudstack.apache.org
Subject: Re: 4.11.0 - can't create guest vms with RBD storage!

We've started looking into this particular bug.

We now have a 4.11 lab setup and can reproduce this.


- Si


From: Wei ZHOU 
Sent: Monday, April 30, 2018 1:25 PM
To: dev@cloudstack.apache.org
Subject: Re: 4.11.0 - can't create guest vms with RBD storage!

Agreed. agent.log might be helpful for troubleshooting.

It seems to be a bug within the KVM plugin.

-Wei

2018-04-30 15:36 GMT+02:00 Rafael Weingärtner :

> We might need some extra log entries. Can you provide them?
>
> On Mon, Apr 30, 2018 at 10:14 AM, Andrei Mikhailovsky <
> and...@arhont.com.invalid> wrote:
>
> > hello gents,
> >
> > I have just realised that after upgrading to 4.11.0 we are no longer able
> > to create new VMs. This has only just been noticed, as we have previously used
> > ready-made templates, which work just fine.
> >
> > Setup: ACS 4.11.0 (upgraded from 4.9.3), KVM + CEPH, Ubuntu 16.04 on all
> > servers
> >
> > When trying to create a new vm from an ISO image I get the following
> > error:
> >
> >
> > com.cloud.exception.StorageUnavailableException: Resource
> [StoragePool:2]
> > is unreachable: Unable to create Vol[3937|vm=2217|ROOT]:com.
> > cloud.utils.exception.CloudRuntimeException:
> > org.libvirt.LibvirtException: this function is not supported by the
> > connection driver: only RAW volumes are supported by this storage pool
> >
> > at org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.
> > recreateVolume(VolumeOrchestrator.java:1336)
> > at org.apache.cloudstack.engine.orchestration.
> VolumeOrchestrator.prepare(VolumeOrchestrator.java:1413)
> >
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
> > VirtualMachineManagerImpl.java:1110)
> > at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(
> > VirtualMachineManagerImpl.java:4927)
> > at sun.reflect.GeneratedMethodAccessor498.invoke(Unknown Source)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> > DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(
> > VmWorkJobHandlerProxy.java:107)
> > at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(
> > VirtualMachineManagerImpl.java:5090)
> > at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.
> > runInContext(AsyncJobManagerImpl.java:581)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(
> > ManagedContextRunnable.java:49)
> > at org.apache.cloudstack.managed.context.impl.
> > DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
> > callWithContext(DefaultManagedContext.java:103)
> > at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.
> > runWithContext(DefaultManagedContext.java:53)
> > at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(
> > ManagedContextRunnable.java:46)
> > at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(
> AsyncJobManagerImpl.java:529)
> >
> > at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
> >
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
> >
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
> >
> > at java.lang.Thread.run(Thread.java:748)
> >
> >
> > My guess is that ACS tried to create a QCOW2 image type whereas it should
> > be RAW on ceph/rbd.
> >
> > I am really struggling to understand how this bug in a function of MAJOR
> > importance could have been missed during the tests run by developers and
> > the community before making a final release. Anyway, I hope the fix will
> > make it into the 4.11.1 release, otherwise it's really messed up!
> >
> > Cheers
> >
> > Andrei
> >
>
>
>
> --
> Rafael Weingärtner
>
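
The error quoted above comes from libvirt's RBD storage backend, which only
supports RAW volumes, so any attempt to create a QCOW2 volume on an RBD pool
fails. A quick way to see the constraint outside of CloudStack, assuming a
libvirt RBD storage pool is already defined (the pool and volume names below
are placeholders):

  # This should fail with the same "only RAW volumes are supported" error:
  virsh vol-create-as cloudstack-rbd test-qcow2 10G --format qcow2

  # This succeeds, because RBD images are plain RAW block devices:
  virsh vol-create-as cloudstack-rbd test-raw 10G --format raw

  # Confirm from the Ceph side (substitute your Ceph pool name):
  rbd info rbd/test-raw

This matches Andrei's guess above that ACS should be requesting RAW rather than
QCOW2 for RBD-backed volumes.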


Re: Cloudstack compatiblity Windows 2016 Server

2018-05-14 Thread Simon Weller
On KVM, selecting the "Windows PV" OS type will work fine with Windows Server 
2016. It might be worth trying on VMware.



From: Rafael Weingärtner 
Sent: Monday, May 14, 2018 11:06 AM
To: dev
Cc: users
Subject: Re: Cloudstack compatiblity Windows 2016 Server

There is one extra detail: if your hypervisor version does not support the
OS you want to use, there is no magic ACS can do.
Therefore, first make sure your hypervisor supports the OS you want. Then,
check that there is a guest OS entry for that OS, and that this guest OS has a
mapping to a hypervisor OS type.

On Mon, May 14, 2018 at 1:03 PM, Suresh Kumar Anaparti <
sureshkumar.anapa...@gmail.com> wrote:

> Hi Marc,
>
> It seems a compatibility table of CloudStack versions and guest OS versions
> is not published. Maybe you can try a DB query using the version (updated
> column) and guest_os_hypervisor (created column) tables.
>
> Please check the current version's OS compatibility using the
> *listGuestOsMapping* API (
> https://cloudstack.apache.org/api/apidocs-4.9/apis/listGuestOsMapping.html
> ) with the *hypervisor* and *hypervisorversion* params. If the "Windows
> Server 2016" OS is not in the mapping response and the underlying hypervisor
> supports it, you can add a new OS mapping to CloudStack using the
> *addGuestOsMapping* API (
> https://cloudstack.apache.org/api/apidocs-4.9/apis/addGuestOsMapping.html
> ). Make sure to set the *ostypeid* param to the Windows OS UUID (get this
> using the *listOsTypes* API).
>
> -Suresh
>
> 2018-05-14 19:11 GMT+05:30 Marc Poll Garcia :
>
> > Hi all!
> >
> > I am using CloudStack 4.9.2 on the VMware hypervisor, and I tried to create a
> > "Windows Server 2016" OS template, but I have some issues working with it;
> > sometimes the network does not work properly.
> >
> > Do you know if it is not compatible with this version? Is there any
> > compatibility matrix/table like:
> >
> > *CloudStack version  | OS guest versions*
> >
> > Thanks in advance.
> >
> >
> > --
> > Marc Poll Garcia
> > Technology Infrastructure . Àrea de Serveis TIC
> > Telèfon:  93.405.43.57
> >
> >
>



--
Rafael Weingärtner
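
The checks Suresh outlines can be run from CloudMonkey roughly as follows. This
is only a sketch: the hypervisor version (6.5) and the VMware internal OS name
(windows9Server64Guest) are assumptions that need to be matched to the actual
environment.

  # 1. Find the CloudStack guest OS id for Windows Server 2016, if one exists:
  cloudmonkey list ostypes description="Windows Server 2016"

  # 2. See which guest OS mappings exist for your hypervisor and version:
  cloudmonkey list guestosmapping hypervisor=VMware hypervisorversion=6.5

  # 3. If no mapping exists, add one using the id returned in step 1:
  cloudmonkey add guestosmapping ostypeid=<id-from-step-1> hypervisor=VMware hypervisorversion=6.5 osnameforhypervisor=windows9Server64Guest

If the guest OS type itself is missing, it would have to be added first (via
addGuestOs) before the mapping can be created.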


Re: Ceph RBD issues in 4.11

2018-05-17 Thread Simon Weller
Glen,


Can you open a GitHub issue here: https://github.com/apache/cloudstack/issues


Please include logs of the second issue and we'll take a look at it.


- Si



From: Glen Baars 
Sent: Thursday, May 17, 2018 7:20 AM
To: dev@cloudstack.apache.org
Subject: Re: Ceph RBD issues in 4.11

Yes - thanks for that.

Do you have any info about the second issue?

Glen Baars

Sent from my Cyanogen phone

On 17 May 2018 8:14 PM, Rafael Weingärtner  wrote:
This problem sounds like the one described here
https://github.com/apache/cloudstack/issues/2641.
It seems that it was already fixed and will go out in 4.11.1.0

On Thu, May 17, 2018 at 8:50 AM, Glen Baars 
wrote:

> Hello Dev,
>
> I have recently upgraded our CloudStack environment to 4.11. Mostly it has
> been smooth (this environment is legacy from the cloud.com days!).
>
> There are some issues that I have run into:
>
> 1. Can't install any VMs from ISO (I have seen this on the list previously
> but can't find a bug report for it). If further reports or debugging will
> help, I can assist. It is easy to reproduce.
> 2. When a VM is created from a template, the RBD features are lost. More
> info below.
>
> Example of a VM volume created from a template:
>
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info AUBUN-KVM-CLUSTER01-SSD/
> feeb52ec-f111-4a0d-9785-23aadd7650a5
>
> rbd image 'feeb52ec-f111-4a0d-9785-23aadd7650a5':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.142926a5ee64
> format: 2
> features: layering
> flags:
> create_timestamp: Fri Apr 27 12:46:21 2018
> parent: AUBUN-KVM-CLUSTER01-SSD/d7dcd9e4-ed55-44ae-9a71-
> 52c9307e53b4@cloudstack-base-snap
> overlap: 150 GB
>
> Note the features are not the same as on the parent:
>
> user@NAS-AUBUN-RK3-CEPH01:~# rbd info AUBUN-KVM-CLUSTER01-SSD/
> d7dcd9e4-ed55-44ae-9a71-52c9307e53b4
> rbd image 'd7dcd9e4-ed55-44ae-9a71-52c9307e53b4':
> size 150 GB in 38400 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.141d274b0dc51
> format: 2
> features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
> flags:
> create_timestamp: Fri Apr 27 12:37:05 2018
>
>
> If you manually clone the volume, the expected features are retained. We
> are running the latest Ceph version, with KVM hosts on Ubuntu 16.04 and the
> latest Luminous qemu-img.
>
> Kind regards,
> Glen Baars
>
>



--
Rafael Weingärtner
This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.
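
On the second issue (the cloned image losing features), the missing features
can usually be re-enabled on the Ceph side until the KVM plugin behaviour is
fixed. A rough sketch using the image names from the output above; note that
deep-flatten cannot be added after image creation, and whether the other
features can be enabled dynamically depends on the Ceph release:

  IMG="AUBUN-KVM-CLUSTER01-SSD/feeb52ec-f111-4a0d-9785-23aadd7650a5"

  # exclusive-lock must be enabled before object-map and fast-diff:
  rbd feature enable "$IMG" exclusive-lock
  rbd feature enable "$IMG" object-map fast-diff

  # Rebuild the object map so fast-diff data is consistent:
  rbd object-map rebuild "$IMG"

  # Confirm the resulting feature list:
  rbd info "$IMG"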


Re: Snapshots only on Primary Storage feature

2018-05-17 Thread Simon Weller
Glen,


This feature was implemented in 4.9 by my colleague Nathan Johnson. You enable 
it by changing the global setting snapshot.backup.rightafter to false.


The PR is referenced here: https://github.com/apache/cloudstack/pull/1697


We have the exact same use case as you, as we also use Ceph.


- Si
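
For reference, that global setting can be flipped from CloudMonkey roughly as
follows (a sketch only, assuming the setting name given above; depending on the
version, a management server restart may be needed for the change to take
effect):

  # Keep snapshots on primary storage instead of backing them up immediately:
  cloudmonkey update configuration name=snapshot.backup.rightafter value=false

  # Read it back to confirm:
  cloudmonkey list configurations name=snapshot.backup.rightafter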



From: Glen Baars 
Sent: Thursday, May 17, 2018 9:46 AM
To: dev@cloudstack.apache.org
Subject: Snapshots only on Primary Storage feature


Hello Devs,



I have been thinking about a feature request and want to see what people think 
about the use case.



We use KVM + Ceph RBD as storage.



Currently, when a client takes a snapshot, CloudStack takes a Ceph snapshot and 
then uses qemu-img to export it to secondary storage. This creates a full backup 
of the server. Clients want to use this as a daily snapshot, and it isn't 
feasible due to the space requirements.



We would like to create the snapshot only on primary storage. It is replicated 
offsite and fault tolerant. I can see that the download snapshot and create 
template features may be an issue.



I have seen the features below in recent releases and wondered if this was 
the direction that development was going:

Separation of volume snapshot creation on primary storage and backing operation 
on secondary storage.

Bypass secondary storage template copy/transfer for KVM.

Kind regards,

Glen Baars

BackOnline Manager



T  1300 733 328 / +61 8 6102 3276

NZ +64 9280 3561



www.timg.com





This e-mail is intended solely for the benefit of the addressee(s) and any 
other named recipient. It is confidential and may contain legally privileged or 
confidential information. If you are not the recipient, any use, distribution, 
disclosure or copying of this e-mail is prohibited. The confidentiality and 
legal privilege attached to this communication is not waived or lost by reason 
of the mistaken transmission or delivery to you. If you have received this 
e-mail in error, please notify us immediately.

